Computer graphics, animations and interactions with digital devices are now taken for granted. You just pick up your smartphone, tablet or desktop computer and intuitively know when to swipe, click, drag or pinch to zoom. You also expect nothing less than attractive interfaces with smooth animations.
In this blog series, of which this is the third part of six, I'd like to take you on a journey through time, focusing on the developments before and during the creation of computers, digital graphics, animations, graphical interfaces, graphics software, interactivity, 3D, a pinch of the first games, the creation of the internet and a touch of virtual reality. Covering even the largest part of all influential events would be impossible, so instead I'd like to point out a number of events that I think have made an important contribution to getting where we are now. Sometimes with a slight side road to a development that indirectly made an important contribution, or that reflects the spirit of the times and the relations between events. Although I personally find audio and music very important and interesting, and I have always been involved in producing music, I have decided to omit audio developments in this series to keep it somewhat concise and focused.
I have made more than 110 illustrations for this series and provide each part with at least one interactive to bring the events to life as well as possible for you.
If you haven't read both part one and part two of this series yet, it's worth reading those first.
In this third part we continue where part two ended: with the first game consoles. You will see that during this period an important new market developed: personal home computers. Many important new graphics techniques were also invented, techniques we still use in 2020. There will be some surprises in this part too. Like me, you may be surprised that a technique everyone uses on their smartphone nowadays is not as young as many people think and was already invented in the 70s. And some will probably be surprised to learn who actually created software that has been running on our computers for a long time and may even still be running on some systems.
So let's quickly continue our journey through time with what, contrary to popular belief, was the very first home game console. I'm not talking about Atari, but about the Magnavox Odyssey.
As early as 1968, the German-American Ralph Baer was designing a prototype for a game console for use in our living rooms: the Magnavox Odyssey. On May 24, 1972, this first-ever commercial home game console was released in the United States. That's about three and a half years before Atari released its first game for home use.
When Atari came up with the game Pong, a disagreement arose between Atari and Magnavox, because Pong was very similar to the table tennis game that Magnavox had previously released. I found a copy of an interesting documentary about Magnavox's table tennis game on YouTube here.
There were no digital chips in the Odyssey, only other electronic components, including transistors and diodes. Baer himself called the device digital, but due to the lack of digital chips it is nowadays considered an analog game console.
Unfortunately, sales of the device were disappointing: in total, only 330,000 systems were sold. Probably because many people thought you could only connect the console to a Magnavox television. A marketing problem, in other words, because in practice the brand of the television did not matter at all.
The spring of 1972 could also be called the spring of 3D computer graphics, at least for 3D wireframes. Ed Catmull, who would later become president of Pixar and invent many other important 3D and graphics techniques, worked on his project 'A Computer Animated Hand', a 3D animation of a human hand.
To create this animation, he made a plaster mold of his own hand and divided the surface of that mold into 350 triangles and polygons, literally by drawing them on the mold. He then entered the coordinates of the vertices of these triangles and polygons into the computer, so that the 3D hand was rebuilt digitally. He spent about ten weeks in total, but those ten weeks revolutionized the development of 3D graphics: from then on it was possible to generate complex 3D shapes with triangles and polygons using computer software, where previously only primitive shapes could be made.
We still use triangles and polygons to create complex models in 3D modeling software today.
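To give an idea of what this representation looks like, here is a minimal sketch in Python (the coordinates are made up for illustration): a mesh is still stored today as a list of vertex coordinates plus a list of triangles that reference those vertices by index.

```python
# A minimal sketch of a triangle mesh: vertex coordinates plus triangles
# that reference them by index (coordinates are made up for illustration).
vertices = [
    (0.0, 0.0, 0.0),   # vertex 0
    (1.0, 0.0, 0.0),   # vertex 1
    (0.0, 1.0, 0.0),   # vertex 2
    (0.0, 0.0, 1.0),   # vertex 3
]

# Each triangle is three indices into the vertex list; sharing vertices
# between triangles is what makes complex surfaces compact to store.
triangles = [
    (0, 1, 2),
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]

for a, b, c in triangles:
    print(vertices[a], vertices[b], vertices[c])
```

These four triangles describe a tetrahedron; Catmull's hand needed 350 of them, and modern models use millions, but the principle is unchanged.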
The audience at the science conference where the four-minute animation of the opening and closing hand was shown was amazed, especially when the virtual camera literally showed the hand from the inside. That had never been seen before.
Watch a copy of the animation here on YouTube.
If you aren’t experiencing failure, then you are making a far worse mistake: You are being driven by the desire to avoid it.
The world had certainly not stood still in the field of graphical computer interfaces either. The Xerox Alto computer was launched on March 1, 1973. It is considered to be the very first computer with a graphical user interface (GUI) as we still use it today.
The 'desktop' metaphor was also used for the first time on this computer. Desktop, because the screen was initially meant to literally represent a work desk, as if you were looking at a desk with a camera from above.
The Xerox Alto was a revolutionary device in every way. Curious about the Alto? I found a commercial video featuring it on YouTube here.
The Xerox Alto remained a prototype: it was only used internally at Xerox and never (directly) appeared on the market.
Many people nowadays think that Apple was the first with a graphical interface as we now recognize it. But in reality, after seeing the Xerox Alto interface, Apple bought the software from Xerox, including the developer responsible for it, which provided the foundation for the future graphical interfaces of the Macintosh.
Developments were now accelerating. Since 1969, the Hungarian-born Peter Foldes had been working on an animation application in collaboration with the Data Systems Group of the National Research Council's Radio and Electrical Engineering Division. I already wrote about this in the second part of this blog series.
An important result of this could already be admired in 1971: the movie Metadata, as described in the previous part of this blog series. But in 1973, Peter Foldes released his movie Hunger, arguably the very first movie featuring computer graphics that was shown to a wide audience, although it was still a short film.
Hunger won several awards, including a Special Jury Prize at the Cannes Film Festival, a BAFTA Award for Best Animation Film (British Academy of Film and Television Arts Award) and a Silver Hugo at the Chicago International Film Festival. The film was also nominated for the Academy Award for Best Animated Short Film.
The film was not only a technical tour de force with distinctive synthetic music; it also told a clear story with a moral.
[edit Feb 18, 2021] Hunger can be viewed in its entirety on the website of the Canadian NFB. You can find it here. Many thanks to computer pioneer Bill Buxton, who sent me this tip among other things (thank you, I feel very honored that you read these blog articles and let me know!). Bill Buxton wrote that he used the computer during the night hours, between 6:00 pm and 6:00 am, to make electronic music, while Peter Foldes had the day shift to work on Hunger on the same computer. Those were pioneering times!
In the 70s, many revolutionary 3D techniques were developed at the computer science department of the University of Utah in the United States. One of its professors, Ivan Sutherland, whom we already saw in part two as the creator of Sketchpad and 'The Sword of Damocles' and as co-developer of ARPANET, had founded the company Evans & Sutherland in 1968 together with fellow professor David C. Evans. The company was located on the university grounds and the employees were mainly current and former students of the university.
The 70s were a very fruitful period at the university, and many important contributions to rendering, shading, animation, visualization and virtual reality were made under Evans & Sutherland. The group of students at the time contains impressive names: in 1969 John Warnock (later co-founder of Adobe), in 1974 Ed Catmull (whom we just saw passing by, later president of Pixar) and Frederic Parke (who in 1971 was the first to make a 3D animation of a human face), in 1975 Martin Newell (more about him later), in 1976 Frank Crow (among other things inventor of anti-aliasing techniques), in 1978 Jim Blinn (known from Blinn shaders) and in 1979 Jim Kajiya (developer of rendering techniques). But let's go back to 1973 and not get too far ahead of ourselves.
In 1973, another student would leave an important mark on the development of computer graphics. His name was Bui Tuong Phong. This Vietnamese-born computer pioneer is best known today as the developer of the Phong shading interpolation method, which is still used worldwide in computer graphics. This shader was the very first to use simulated specular reflection, which made it possible to let 3D surfaces shine and reflect light like a mirror. That was a big step forward in making realistic and attractive 3D animations.
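The core of the Phong reflection model is easy to sketch. Below is a minimal Python version (the parameter names and coefficient values are my own choices, not Phong's original notation): the brightness of a surface point is built from an ambient, a diffuse and a specular term, where the specular term produces the characteristic mirror-like highlight.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, to_light, to_viewer, base_color,
          ka=0.1, kd=0.7, ks=0.5, shininess=32):
    """Phong reflection: ambient + diffuse + specular highlight."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    diff = max(np.dot(n, l), 0.0)
    spec = 0.0
    if diff > 0.0:
        # Mirror the light direction around the surface normal.
        r = 2.0 * np.dot(n, l) * n - l
        # A high 'shininess' exponent makes a small, sharp highlight.
        spec = max(np.dot(r, v), 0.0) ** shininess
    return ka * base_color + kd * diff * base_color + ks * spec * np.ones(3)

# A red surface lit from above, viewed close to the mirror direction:
color = phong(np.array([0, 0, 1.0]), np.array([0, 0, 1.0]),
              np.array([0, 0.1, 1.0]), np.array([1.0, 0.0, 0.0]))
print(color)
```

The white component added by the specular term is exactly the 'shine' that Phong's model brought to 3D surfaces for the first time.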
As we saw earlier, a number of short films using computer graphics had already been made by 1973. But combining digital computer effects (CGI) with real camera footage was a different challenge. The first feature film to use digital image processing was the science fiction film Westworld, written and directed by the American Michael Crichton. In this film, robots live among people.
The film made use of a digital pixelation effect in which footage of a man was made pixelated by separating the colors of each frame, digitally scanning them and reducing them to rectangular blocks. The result was then transferred back to film.
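The idea behind such a pixelation effect is simple to reproduce today. A rough sketch in Python using NumPy (the block size and names are my own): average the color of each rectangular block of a frame and paint the whole block in that single color.

```python
import numpy as np

def pixelate(frame, block=16):
    """Replace each block x block region with its average color."""
    out = frame.astype(float).copy()
    h, w = frame.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = out[y:y+block, x:x+block]
            # One flat color per block gives the classic 'Westworld' look.
            region[...] = region.mean(axis=(0, 1))
    return out

# Hypothetical usage on a 480x640 RGB frame:
frame = np.random.rand(480, 640, 3)
blocky = pixelate(frame, block=32)
```

In 1973 this took hours of processing per shot; today it is a few lines of code and runs in real time.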
Fortunately we have YouTube, where, in honor of this groundbreaking effect, someone posted a fragment of it. See here.
In 1973, a computer game called Maze War was born. In this game, the player was able to move in four directions through a labyrinth and shoot other players.
This game was revolutionary. Not only was it the very first game with 3D graphics on a 2D screen, it was also one of the first, if not the first, first-person shooter games ever. It was also the very first multi-user game, and it used tile-based movement, so the player moved from tile to tile (see the sketch below). The game became a great example for many to follow and laid a solid foundation for many future games. Its graphic style was also adopted by many other games.
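Tile-based movement is easy to picture in code. A minimal sketch in Python (the maze layout and names are mine): the player's position is a grid cell, and a move either snaps to the adjacent cell or is blocked by a wall, with no positions in between.

```python
# A minimal sketch of tile-based movement in a maze ('#' marks a wall).
MAZE = [
    "#########",
    "#.......#",
    "#.###.#.#",
    "#.#...#.#",
    "#########",
]

MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def step(pos, direction):
    """Move one whole tile at a time; walls block the move entirely."""
    row, col = pos
    dr, dc = MOVES[direction]
    new = (row + dr, col + dc)
    if MAZE[new[0]][new[1]] == "#":
        return pos          # blocked: stay on the current tile
    return new              # snap to the next tile, no in-between positions

pos = (1, 1)
pos = step(pos, "E")   # moves to (1, 2)
pos = step(pos, "N")   # blocked by a wall, stays at (1, 2)
```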
The original game was written for an Imlac computer, but it was also ported to other computer systems. See a short fragment of someone playing Maze War on an Imlac on YouTube here.
Ed Catmull, whom we read about earlier in this article, has meant a lot to computer graphics and to techniques we still use today.
That started to take shape in 1974, when he received a call from the New York Institute of Technology (NYIT). This was the venture of multi-millionaire Alexander Schure, who wanted to focus on making computer-generated films. Schure thought that films could be made entirely by computers instead of by people.
All his life, Ed Catmull had the passionate dream of making a digital animated film, even though computers were barely capable of anything at the time. So he was happy to come on board, and he set up a department at NYIT that would come up with important innovations.
Ed Catmull quickly brought in many key players, including computer animation pioneers Alvy Ray Smith, David DiFrancesco, Ralph Guggenheim, Jim Blinn and Jim H. Clark. Together, this group devised groundbreaking computer graphics techniques.
Computer graphics was becoming an important field, and in 1974 the very first SIGGRAPH conference was held in the United States: a computer graphics conference organized by the Association for Computing Machinery (ACM), founded by Sam Matsa, Doug Ross and Andy van Dam. Initially the group behind it was still called SICGRAPH, with a C for 'Committee'; later it was renamed ACM SIGGRAPH, with a G for 'Group'.
This annual conference, still held in 2020, quickly became an important venue for presenting new innovations in the field of computer graphics, eagerly followed by key players such as Ed Catmull. A lot of networking was done there as well: important players from various disciplines in the field found each other, and the many contacts that were made regularly led to new collaborations.
In the previous part of this series we read about the founding of Atari and the creation of their arcade game Pong. At the beginning of this part, we saw that the Magnavox Odyssey, released in mid-1972, was the first home game console. So when the very first game console from Atari appeared on the market in 1975, it was certainly not the first for the home market, and the game Pong was not new either. Nevertheless, this was an important event that led to many important developments in the game industry and in the use of computers in people's homes.
Because computers were still expensive, the first version of the Atari game console consisted of discrete analog electronic parts, just like the Magnavox. So this was not yet a digital game console by current standards. All parts in the cabinet were literally made for the game Pong only, so that was the only game you could play on it. Just like with the Magnavox, you had to connect the system to a television. But Atari handled the marketing a little better than Magnavox, and people now understood that the console could be connected to almost every TV of the time with a video input. Atari not only introduced the device well, but was also wise enough to do so just before Christmas. All 150,000 available Pong consoles were sold out right before Christmas!
After this hit, many other companies quickly came up with outright copies of the game and its techniques to benefit from this success. It seems there were even about 75 companies that brought their own Pong console to the market at the time! Magnavox, meanwhile, filed a lawsuit against Atari, because according to them Atari had recreated Pong from the table tennis game Magnavox had previously released. In the end, Atari settled that lawsuit out of court by becoming a licensee of Magnavox. On the YouTube channel of Chris NEO there is a very entertaining and interesting video in which Chris talks about this Pong war that broke out, in which mainly Atari and Magnavox clashed. Highly recommended!
Everyone who's ever taken a shower has an idea. It's the person who gets out of the shower, dries off and does something about it who makes a difference.
The development of 3D techniques was in full swing, and the need to properly preview digital materials grew with it. Not all properties of a material can be seen on a cube or even a sphere. The British-born computer scientist Martin Newell of the University of Utah, a member of the university's pioneering graphics program, had discovered this too. He thought there should be a standard 3D model that showed all material properties well.
The idea came while drinking afternoon tea with his wife, who suggested he model their physical tea set in 3D. That made him realize the test primitive for materials should be a teapot. This teapot, now well known among 3D modelers, is also called the 'Utah Teapot' or 'Newell Teapot'.
The 3D model of the teapot, based on a 'Melitta' teapot that was and still is for sale in stores, was adopted as a standard primitive in much 3D modeling software, making it a de facto standard. The teapot is to 3D modelers what 'Hello world' is to software programmers: a way to quickly see the effect of a material on a 3D model.
The teapot is so well suited for this purpose because it is round, has a hole at the top, contains curves that expose critical mathematical calculations well, can cast shadows on itself, shows the effect of reflections properly and doesn't need a texture to look right.
Even today you often see the teapot as a primitive in 3D modeling software. For example, 3D Studio Max still has the teapot on board. Maya doesn't, though (2020). Blender also never had the teapot, but they do have an original alternative: their own mascot and logo, the chimpanzee Suzanne.
Click and drag the model to view it from all sides.
In the same year, the Polish-born mathematician Benoît B. Mandelbrot came up with a revolutionary mathematical development: fractals. Fractals are figures in which, simply put, the smallest detail resembles the largest whole, and all stages in between have almost the same structure. The parts of a fractal are thus more or less uniform with the whole figure. Repetition of a basic shape, with or without deviations, plays an important role. The fern is a well-known example of a fractal: if you zoom in on a fern, you will see that the smallest parts have the same shape as the whole fern.
Fractals have meant a huge step forward for 3D work. With fractals we can literally create organic shapes with a programmed growth formula. Not only could organic shapes from nature, such as trees, plants, coastlines and mountains, be built with a fractal formula; repeating and executing simple rules to create complex shapes later also proved to be a solution for creating entire 3D worlds that you could zoom into endlessly, where new details were always redrawn with the same basic formula.
Use the + and - buttons or two fingers to zoom, and the middle button to maximize the tree in its window.
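A fractal tree like the one in the interactive above can be generated with a surprisingly small growth rule. A minimal sketch in Python (the angles and scale factors are my own choices): each branch draws itself and then spawns two shorter branches, so one simple rule repeated at every scale produces the whole tree.

```python
import math

def branch(x, y, angle, length, depth, segments):
    """Recursively grow a fractal tree: each branch spawns two smaller ones."""
    if depth == 0 or length < 1:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # The same simple rule repeats at every scale: shrink and fork.
    branch(x2, y2, angle - 0.5, length * 0.7, depth - 1, segments)
    branch(x2, y2, angle + 0.5, length * 0.7, depth - 1, segments)

segments = []
branch(0, 0, math.pi / 2, 100, 8, segments)   # trunk points straight up
print(len(segments))  # 255 line segments from one tiny rule
```

Increase the depth and the tree grows more detail; that is exactly the property that later made fractals useful for endless zooming into 3D worlds.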
After all the progress made in the field of 3D modeling, it became clear that designing realistic 3D models required the ability to project bitmap images onto objects. Martin Newell, who also came up with the idea of the teapot, therefore developed the concept of texture mapping together with the American computer scientist Jim Blinn. Blinn contributed more important 3D techniques, about which more later.
With texture mapping, it was possible for the first time to 'project' an image onto a 3D surface, so that an image could be used as a material. This was of course real progress in the quality and realistic appearance of 3D computer graphics.
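At its core, texture mapping is a lookup: every point on the 3D surface gets a (u, v) coordinate between 0 and 1, and the renderer uses it to fetch a color from the 2D image. A minimal sketch in Python (nearest-neighbor sampling, with a hypothetical checkerboard texture):

```python
import numpy as np

def sample_texture(texture, u, v):
    """Map UV coordinates in [0, 1] to a pixel in the texture image."""
    h, w = texture.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]

# Hypothetical 4x4 checkerboard texture; each surface point carries a (u, v).
texture = np.indices((4, 4)).sum(axis=0) % 2 * 255
print(sample_texture(texture, 0.1, 0.1))  # 0   (dark square)
print(sample_texture(texture, 0.3, 0.1))  # 255 (light square)
```

Modern renderers add filtering and perspective correction on top, but this lookup is still the essence of what Newell and Blinn introduced.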
Professor Nelson Max, who graduated from Harvard in 1967, came out with the film 'Turning a Sphere Inside Out'. This was a milestone in both mathematics and computer graphics, because for the first time a 3D animation showed how a sphere can, purely theoretically, be turned inside out. Most people find this impossible to imagine, and it is known that a 2D circle cannot be turned inside out, but the transition of the sphere was now made visible in a 3D animation.
Not only was this a major breakthrough in this area, it was also the first time 3D morphing was shown: the sphere was transformed into another shape through animation.
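The sphere eversion itself required far more intricate mathematics, but the basic idea of 3D morphing as it has been used in graphics ever since can be sketched simply: interpolate each vertex of one shape toward the corresponding vertex of another. A minimal Python sketch (the shapes and names are made up for illustration):

```python
import numpy as np

def morph(verts_a, verts_b, t):
    """Linear 3D morph: blend each vertex of shape A toward shape B."""
    return (1.0 - t) * verts_a + t * verts_b

# Hypothetical shapes: morph a point on a sphere toward a point on a cube.
sphere = np.array([[0.0, 0.0, 1.0]])
cube   = np.array([[1.0, 1.0, 1.0]])
for t in (0.0, 0.5, 1.0):
    print(t, morph(sphere, cube, t))
```

Animating t from 0 to 1 smoothly transforms one shape into the other, which is exactly what audiences saw happen to the sphere for the first time.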
Later, other mathematicians also started to turn a sphere inside out in other ways.
See a copy of the movie on YouTube here. There is also a video explaining the film on YouTube here.
That same year, 1976, the very first Apple computer came on the market. It was still built by hand by Steve Wozniak and financed with, among other things, the money he had made by selling an expensive calculator and what his friend Steve Jobs had earned by selling his only motorized vehicle, a Volkswagen bus. A year later, in 1977, a successor followed: the Apple II.
In June 1978, 3D computer graphics software for the home computer was released for the first time. The program, written for the Apple II by Kazumasa Mitazawa, was named '3D Art Graphics'.
If you thought Google invented Google Maps and Google Street View all by themselves: think again!
In 1978, principal investigator Andrew B. Lippman and a number of colleagues at MIT introduced the Aspen Movie Map. You could actually call this the grandfather of Google Street View. The interactive Movie Maps were developed to allow American soldiers to explore and learn locations elsewhere in the world in advance.
To record images for the interactive maps, several stop-motion cameras were mounted on the roof of a car, pointing in all directions. A bicycle wheel trailing behind the car measured the distance traveled and gave a signal to take pictures every ten meters. Actually very similar to how Google's Street View cars take pictures for the Street View maps today, albeit with more modern techniques.
The device was operated with a touch screen, and the user could virtually drive through the streets of the map with 'moving images', hence 'Movie Map'. It was named 'Aspen Movie Map' because the sample system was created with an interactive map of the American town of Aspen, Colorado.
Using a database, the software ensured that the correct images were always loaded from an analog laser disc. The map could work with photos, but also with primitive digital worlds.
On the touch screen, you could zoom in and out on a map and tap an area to explore it. Buildings could also be tapped for more details. All just as we do in 2020 in Google Maps and Street View. The user could choose the direction in which to navigate through the map and thus, just like in Google Street View, drive virtually through a neighborhood.
See a video about the Aspen Movie Map on YouTube here.
Although computer graphics was still in its infancy compared to what it would eventually become, it attracted growing attention from people with innovative foresight.
In 1979, George Lucas of Lucasfilm recruited people from NYIT, the computer group with Ed Catmull in its day-to-day management that had already made great strides, to set up a new department. This department was called the Computer Division.
The department was set up to innovate digital optical printing for film, digital audio, digital non-linear editing and computer graphics (CGI).
The group experienced a lot of ups and downs, but also developed many important techniques that we still use every day in graphics systems. We will come across some of these later in this blog series.
We are now in the 80s. On July 14, 1980, Loren Carpenter, then still working at aircraft manufacturer Boeing, showed a two-minute film featuring fractals for the first time at the SIGGRAPH conference. Fractals, as we saw earlier in this article, were a revolutionary step forward because a formula could describe a complex shape, such as a fern. The use of fractals in combination with 3D computer graphics, however, was new.
His film 'Vol Libre', which people saw at the conference, consisted mainly of a digitally generated 3D landscape with mountains built from fractal formulas, through which a camera 'flew' like an airplane.
Loren received a standing ovation, because nothing like it had ever been seen before. And, as he had hoped, he was invited to work for Lucasfilm's Computer Division. There he would later collaborate on 'The Genesis Effect' scene of Star Trek II, for which he built a complete planetary landscape with fractals.
A copy of 'Vol Libre' can also be found on YouTube. Click here.
In 1980, IBM decided to develop a relatively cheap computer for the individual user, specifically for the home market. On August 12, 1981, this very first IBM Personal Computer (PC) came on the market.
This system was soon literally copied by other companies. Because more and more brands built their computers exactly like the IBM PC, with the same philosophy and the same parts, software written for the IBM PC could also run on clones of other brands. That was very special and an important step towards standardizing software, although it was not done consciously. In 2020, what we now simply call a PC is still based on those techniques, and we still call every PC 'IBM compatible'.
IBM did not initially have an operating system for this 'PC'. They had already started developing their own, but that didn't go very smoothly, so they sought the advice of the very young Bill Gates, hoping he might have an operating system. Bill was already going to supply a BASIC interpreter to IBM, and IBM felt he had an eye for this whole new world.
Bill Gates knew someone who had developed an operating system and referred IBM to him: Gary Kildall. Kildall and his company Digital Research had written the operating system CP/M and could certainly help them.
In August 1981, together with the first IBM PC, the first version of the operating system Microsoft DOS, or MS-DOS, was released. This was a direct clone of the pre-existing QDOS, which stood for 'Quick and Dirty Operating System', which in turn was a direct clone of ... uh ... Gary Kildall's CP/M operating system. So how come IBM didn't get the OS from Gary?
Gary Kildall was an American computer scientist and entrepreneur who was one of the first not to look at microprocessors as simple controllers, but to see them as fully capable computers with many possibilities. Gary was also a co-host on a TV show about computers. His operating system CP/M was used under license on more than 3,000 different computer models that were important at the time. So he was quite successful.
With his company, Gary also came up with the concept of the computer BIOS: storing startup software in a ROM or EPROM chip. Still, sadly, today Gary is mostly remembered for his major failure, or bad luck, however you want to see it, around 1980 and the beginning of MS-DOS.
As we just saw, Bill Gates of Microsoft referred IBM to Gary Kildall to talk about his operating system. IBM quickly arrived at Gary's door with a group of important people, including lawyers. But even though Bill Gates had told Gary that 'important people' would drop by (he had not said who), Gary did not want to move an already planned work appointment to deliver software to a factory, and so he left the negotiations with the 'important people' to his wife, with whom he ran the business. They often did it that way.
However, when Gary's wife received IBM's men and refused to sign their nondisclosure agreement, the IBM delegation left angry. Gary tried to re-establish the conversation with IBM that same afternoon, but IBM was no longer open to it.
IBM returned to Bill Gates. Bill Gates then bluffed that he and his company Microsoft also had, or could make, an operating system. In reality he had nothing, so he bought QDOS, the operating system that had recreated Gary Kildall's original system, renamed it MS-DOS and thereby firmly planted a Microsoft flag.
This was a pivotal move by Bill Gates. But perhaps an even more important decision for Bill Gates and Microsoft was this: Bill Gates sold 'his' operating system MS-DOS to IBM for a low price, which IBM was happy with for a long time, but the agreement never mentioned exclusive rights for IBM. This allowed Bill Gates to also sell MS-DOS to other companies, such as HP. This is how Bill Gates eventually became a multi-billionaire.
Gary Kildall was very disappointed that he had missed the boat by missing the IBM deal. He eventually made up his mind to let it go, but for years people kept reminding him of it, always mistakenly portraying him as irresponsible and lax, as if he had prioritized a private outing over the meeting with IBM. He was often compared to Bill Gates, who had actually made a name for himself by stealing his project and for whom things had turned out a lot better.
Gary was certainly not poor. He had had quite a few successes, owned boats and a private plane, and had made million-dollar deals. But with IBM, he had missed his chance. After a failed attempt to sue IBM, hampered by the lack of good software rights laws, and after his wife left him, he went downhill, partly because of all the stories about him. He fell into depression and drinking and never got the recognition his operating system deserved.
In the last years of his life he was an alcoholic. The exact cause of his death is unclear, but the story goes that a fall on his head in a biker bar, or a cardiac arrest afterwards, eventually led to his death. Unfortunately, false stories about him still circulate.
Gary was the very first to create a digital encyclopedia, developed the BIOS concept we still use in PCs, and made significant strides in optical disc development and multitasking. But I suggest that from now on, we mainly remember Gary for what should have been his biggest success: as the actual creator and developer of DOS, which was for a long time the basis for millions of PCs worldwide.
Perhaps a somewhat sad end to this third part, but that certainly does not apply to the period it covers, in which many good developments took place. A period in which computer games started to emerge, also for the home market; in which computer-generated graphics began to appear in films; in which even a complete short film was made with the computer. Computer graphics began to be taken so seriously that a conference was set up for it. We also saw important operating systems emerge, along with the first home computers, including the IBM-compatible PC, which in 2020 is still based on the same foundation as back then. And who would have thought that some sort of Street View already existed in 1978? Or that the first graphical interface as we still recognize it today was conceived so early?
In the fourth part, we continue where we left off and will see how these developments progressed: towards motion capture, more knowledge of good animation techniques, digital graphics and 3D modeling for a growing home market, the next step in virtual reality and more. The speed of new developments kept rising.
In the coming months I will continue to write new parts in this series, up to six parts in total. Did you find this interesting, or would you like to ask something or comment? Let me hear from you below and share the blog on social media; it motivates me to keep writing quality blogs like this one. After clicking like or dislike you also have the option to add a comment (optional). Thanks and 'till next time!