Technology & Change

I originally wrote this in 2012, but it seems appropriate still, so I am sharing it again here.

The computer we know and love, and sometimes hate, has become a staple of the design and artistic processes. It has brought design production capabilities to the masses [and now AI has added another significant tool in this space]. Despite that, I feel design specialists are still necessary.

The modern computer has roots going back to ancient Sumer, where the abacus was developed around four and a half thousand years ago. The concept of condensing large numbers into layers and representing them with small movable beads (or bits) laid the foundation for offloading menial mental work onto physical representations. This lets the user concentrate on the meta-task of organizing the overall picture shown by the figures, without holding all those figures in mind at once. Throughout the following centuries, many forward-thinking people tried to apply that concept to other areas, but manufacturing technology could not produce the small, precise parts required until the 1640s, when Blaise Pascal built a mechanical calculator capable of performing arithmetic operations.
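
To make the bead idea concrete, here is a toy sketch in Python (my own illustration, of course, not anything resembling period hardware): each “column” of beads holds one decimal place, so the user only ever tracks a handful of small counts instead of the whole number.

```python
# A toy model of abacus-style place value (my own illustration):
# a large number becomes layers of small bead counts, one per column.

def to_abacus_columns(n: int) -> list[int]:
    """Split a number into per-digit bead counts, least significant first."""
    columns = []
    while n > 0:
        columns.append(n % 10)  # beads in this column
        n //= 10                # carry the remainder up to the next layer
    return columns

print(to_abacus_columns(2048))  # [8, 4, 0, 2] -> 8 ones, 4 tens, 0 hundreds, 2 thousands
```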

The next big step was the development of the Jacquard loom by Joseph Marie Jacquard in 1801. This loom allowed the reproduction of very complex patterns using a system of rods and hooks that could raise each individual thread on its own. The system was controlled by punch cards: a hole in the card meant the hook would lift the thread; no hole meant no lift. It was an early form of binary language, in which a large variety of options becomes available using only two characters or states: 1 and 0, on and off. Unique patterns of 1s and 0s could be assigned to represent any pattern the user (programmer) desired. In the late 1800s Herman Hollerith adapted Jacquard’s system, inventing methods to store and reference data for the tabulation of the 1890 US Census. The company he formed for that project went on to become the foundation of IBM. In 1936 Alan Turing developed methods for applying algorithms and computation sequences that used the on/off binary language in much more efficient ways.
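
A toy model of that punch-card logic might look like the following (my own sketch, not Jacquard’s actual mechanism): each position in a card row is a 1 or a 0, and the “loom” lifts or lowers a thread accordingly.

```python
# A toy punch-card "loom" (my own sketch, not Jacquard's mechanism):
# 1 = hole, so the hook lifts the thread; 0 = no hole, no lift.

def weave_row(card_row: list[int]) -> str:
    """Render one row of fabric from one row of punch-card holes."""
    # '#' marks a lifted (visible) thread, '.' a lowered one.
    return "".join("#" if hole else "." for hole in card_row)

pattern = [
    [1, 0, 0, 1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0, 1, 1, 0],
]
for row in pattern:
    print(weave_row(row))
# #..##..#
# .##..##.
```

The same two states, rearranged, yield any pattern the cards can hold, which is the whole trick binary computing still relies on.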

Vacuum tubes, transistors, and finally microprocessors were just smaller and more efficient ways to represent sequences of on/off patterns, and each allowed the computer to shrink in size while further reducing mechanical complexity. As the mechanical complexity decreased, the programming complexity conversely increased. With more and more “space” available, more complicated and longer strings of on/off instructions were achievable, and programmers took advantage of this to create applications with more capabilities. What we have now are sets of instructions that control sets of instructions that control sets of instructions…on and on, depending on the complexity of the system. The upper levels of instruction represent things like the iOS or Windows, Photoshop or Illustrator user interfaces we are familiar with, while the lower levels perform the actual operations requested by modified instructions handed down from the levels above. For example, we don’t have to manually type in a series of 1s and 0s to open a web browser; our mouse clicks or keyboard inputs are translated into that series of 1s and 0s by intermediate programs.
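
As a rough sketch of that layering (with entirely hypothetical function names, nothing like a real operating system), each level translates a friendlier instruction into a lower-level one until only 1s and 0s remain:

```python
# Hypothetical layers, top to bottom: a UI event is translated step by
# step into the raw bit pattern the hardware would actually act on.

def hardware_execute(bits: str) -> None:
    print(f"hardware runs: {bits}")       # lowest level: pure 1s and 0s

def os_open_program(name: str) -> None:
    bits = "".join(format(ord(c), "08b") for c in name)
    hardware_execute(bits)                # the OS hands bits to the hardware

def ui_handle_click(target: str) -> None:
    os_open_program(target)               # the UI hands a request to the OS

ui_handle_click("browser")                # the user just clicks an icon
```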

It wasn’t long before programmers began to apply the potential of these systems to creative areas, making digital tools that allowed text and image creation and manipulation. One of the early innovations that shaped how we interface with computers was Ivan Sutherland’s Sketchpad program and “light pen” (basically a stylus/mouse combination) from 1963. For example, when creating a square shape with this program, instead of drawing and lining up each side, one simply specified the location and size of the desired shape and the computer created it perfectly. This was a revolutionary step, and I’m sure anyone who has used anything from MS Draw to Adobe Illustrator is thankful for it. This system of digital representation allowed otherwise difficult physical design tasks (like accurately masking areas of an image, moving text or type around a layout while maintaining certain spacing or ratios, or combining and manipulating images) to be performed with a speed and accuracy impossible with traditional methods. As the actual capabilities of the technology caught up with the hype, most of the commercial design world jumped on board and never looked back.
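
The leap Sketchpad made can be suggested in a few lines (an illustration of the idea only, not Sutherland’s code): instead of drawing four strokes and truing them up by hand, the user supplies a location and a size and the machine derives a perfect square.

```python
# An illustration of parametric shape creation in the spirit of Sketchpad
# (not Sutherland's actual code): the square is derived from two numbers,
# so every side length and corner comes out exact.

def make_square(x: float, y: float, size: float) -> list[tuple[float, float]]:
    """Return the four corner points of a square from a location and size."""
    return [(x, y), (x + size, y), (x + size, y + size), (x, y + size)]

print(make_square(10, 10, 50))
# [(10, 10), (60, 10), (60, 60), (10, 60)] -- perfectly aligned, every time
```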

One of the first people to be recognized as a “computer artist” was Charles Csuri, who began creating imagery and animations with computers in 1964. He worked on government-sponsored research projects into computer graphics for over 22 years, and the results were used in applications ranging from scientific data-display methods to special effects in movies. His way of combining the artistic and scientific processes probably drew inspiration from Modernist philosophy, and went on to influence many later designers. In this way he functions as a link between past tradition and the progression of new thinking, much like Warhol or Picasso.

In 1965 Rudolf Hell introduced the “Digiset” typesetting system, replacing the film-based typesetting process with the digital font families we know today.

In the 1970s, Adobe co-founder John Warnock was a student at the University of Utah. He went on to make a major contribution to the typesetting industry with his PostScript page description language, not to mention the Adobe family of digital production tools. Also at the University of Utah at the time was 3D graphics pioneer Edwin Catmull.

In the 1980s the potential of the computer for graphic design finally became clear to consumers, with the Macintosh and Commodore Amiga providing platforms and tools that made certain design tasks much faster and more accurate. With the introduction of graphical user interfaces in place of typed command-line instructions, the technology became much more accessible as well. Pixar created its first fully animated short films in the mid-to-late 1980s.

The 1990s saw an explosion of these tools, which fully integrated the third dimension, allowing the creation of virtual models that could be studied and manipulated in the same ways as their physical counterparts. The digital production tools available to us now are more advanced, and easier to obtain and use, than ever. Graphic design was no longer solely the province of professionals with specialized tools, but possible for anyone with a computer. A secondary impact was the way these tools combined all aspects of the design process: typesetting, process separation, colour management, and many others. These conditions resulted in a lot of subpar design, especially in the early years, as laypeople and professionals alike experimented with the capabilities (and limitations) of the new tools.

In the early days of commercial digital graphics and design, resolution was a strong limiting factor. The detail available to the designer depended on the computing power of the intended end product, which could range from a mass-produced consumer item to a specialized industrial tool. This created some interesting limitations that designers had to work around in creative ways if they wanted to achieve successful results. In one way early digital design was a throwback to the ancient mosaic style of creating pictures with tiles, because at the base level designers were working with pixels: essentially square tiles whose shape can’t be changed, but which can be coloured any way the artist desires. I would like to use the design of a character from an early video game to illustrate some of these limitations and the ways artists worked around them.
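
In that mosaic spirit, a sprite at the base level is just a grid of colour indices; a toy renderer (my own sketch, not any console’s actual hardware) makes the “tiles” idea concrete:

```python
# A toy sprite renderer (my own sketch, not real console hardware):
# the image is a grid of palette indices, each index one unchangeable
# square "tile" that can only be recoloured, never reshaped.

PALETTE = {0: " ", 1: "#", 2: "o"}  # 0 = background, 1 = outfit, 2 = skin

SPRITE = [
    [0, 1, 1, 1, 0],
    [0, 2, 2, 2, 0],
    [1, 1, 2, 1, 1],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
]

for row in SPRITE:
    print("".join(PALETTE[pixel] for pixel in row))
```

With so few tiles to work with, every pixel has to earn its place, which is exactly the constraint the following example illustrates.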

Mario, created by Shigeru Miyamoto in 1981 and star of Nintendo’s famous family of video games, was in many ways defined by the limitations of the technology he was created for. In order to provide a relatively cheap product, the resolution for artwork in the game was very low, meaning the individual square pixels that make up the images were clearly visible to the naked eye. In his original iteration in the first Donkey Kong arcade game, Mario (originally known as “Mr. Video”, then “Jumpman”, before settling into his now familiar moniker) was depicted as a capped and moustachioed figure, notable for his large nose and bright red and blue outfit. All those decisions were defined by the limitations of the medium: the cap and moustache eliminated the need to animate mouth or hair movements, the nose had to be large to make the blob of pink pixels read as a face in profile, and the red shirt and blue coveralls, the highest-contrast colours available to the designer, increased visibility and differentiated the limbs. The other characters show similar constraints and clever solutions: Donkey Kong’s teeth are represented by just three crosses over a white oblong, but clearly read as teeth, and Pauline’s shoes/feet are limited to 4-5 pixels but manage to look like high heels. More examples of clever solutions by Miyamoto and others can be seen in Nintendo’s original Zelda game, which was designed at the same time as the first few Mario titles.

This type of simplification is interesting, and it allows the viewer to “fill in the blanks” left by the basic design with their imagination. That may have been one of the reasons the character and resulting family of products were so successful.

Some think that the ease of design production has now made trained designers obsolete, because for the price of a professionally designed logo (or for free, if one is willing to sidestep certain legal guidelines) one can find tools capable of producing that and any other logo imaginable. There are also many do-it-yourself templates for basic design needs like business cards, websites, and the like that can handle traditional graphic design tasks for free. However, I feel this means a solid foundation in good design principles is more important than ever. It is the only way to stand out from all the “clutter” of sub-par designs. Understanding the ways colour, space, and form work together is essential to creating designs with deeper merit than just the message contained within.

To make an analogy to food: anyone can buy and eat a frozen dinner, but that experience is nothing like going to a fantastic restaurant and eating a meal prepared by a master chef. The basic elements are the same (cook ingredients, eat food, receive nourishment), and it could even be the same ingredients, but the end result is very different. One is mass-produced and profit-oriented, the other handcrafted and experience-oriented. Each has its place in our world, and while the rules and the pay scale may change, I don’t think the true chefs are going anywhere, no matter how cheap and ubiquitous frozen chicken fingers become.