        BETTER, QUICKER, FASTER 
         
        IN PARTS ONE AND TWO OF THIS SERIES PETER HAYES
        EXPLORED THE KEY FACTORS THAT GOVERN COMPUTER PROCESSING
        SPEED. TODAY - IN PART THREE - HE ROUNDS
        OFF THE SERIES BY LOOKING BACK AT THE MILESTONES OF CHIP
        DESIGN AND THEN FORWARD TO THE FUTURE OF THE MEDIUM. 
         
        In parts one and two of this series
        I tried to emphasise that computing is very much a
        team effort, and that while faster processors play an
        important role they are not the whole story. 
         
        I also pointed out that the computer chip industry
        is becoming more and more competitive, with leading player
        Intel now feeling the pressure from companies such as
        AMD, Cyrix and IBM. 
         
        Today, however, we examine the steps that have brought
        us to this point in history and make an educated guess
        at the near and distant future. 
         
        In computer design circles the Intel 4004 of 1971 will
        always have a special place. Today the 4004 is
        acknowledged as the first example of a "general
        purpose" microprocessor. Before that, integrated chips
        had been designed to serve only one specific purpose. 
         
        Designed by Ted Hoff and Federico Faggin, the 4004 chip
        was used in the very first generation of (very expensive)
        silicon-based calculators. It contained the equivalent of
        2,300 transistors and was the tiny acorn from which
        chips such as the Pentium Pro range would grow. 
         
        Modern chips can contain over 5.5 million
        transistors - and given that newer models have caches and
        co-processors built in - the number seems certain to
        grow. However, while these figures give a good
        rule of thumb as to how powerful a processor is,
        further improvements can still be achieved through
        more streamlined design. 
         
        Progress was quick in those early days, and by 1974 Intel
        had come up with the 8080, the first general purpose chip
        designed for "full computer" use and already
        twenty times faster than the 4004 family. This 8-bit chip
        found its way into many kit computers - including the
        famous Altair - and was most notable for bringing the
        home computer a step closer. 
         
        In 1979 Intel produced the 8088 - perhaps the biggest
        breakthrough chip of all time. This 16-bit processor
        (with an 8-bit external bus) drove the first IBM PC,
        which was soon cloned by a wide variety of producers.
        Its introduction sent Intel share prices into orbit. 
         
        The next milestone was the 80286 - the first of the
        "286" processors - in 1982. The chip contained
        the equivalent of 130,000 transistors and ran at clock
        speeds of up to 12 MHz. Natural progress followed with
        the introduction of the 80386 chip (the first of the 386
        models) in 1985 and the 80486 (the first of the 486
        models) in 1989. The 386 model featured 275,000
        transistors, but this number grew to more than a million
        for the 486 model. 
         
        The next breakthrough came in 1993 with the first Pentium
        range. As well as containing more than 3 million
        transistors, the chip was designed with graphics and
        communications applications in mind. In short, Intel had
        looked more closely at the role of computers in the
        modern world rather than simply going for
        number-crunching speed. 
         
        The Pentium Pro came out in 1995 and featured what Intel
        called "dynamic instruction execution", which
        means the processor analyses and reorders its
        instructions into the most efficient sequence before
        performing them. The chip was also the first to build a
        secondary (level 2) cache into the same package as the
        processor itself. The Pentium Pro now features more than
        5.5 million transistors. 
         
        While it may be crude to say that other producers have
        been playing catch-up, this is essentially what has been
        happening within the chip industry. Intel have even tried
        to claim, through the courts, that their rivals are
        merely "copying" their products - although
        without success. 
         
        While many people view the games industry as a bit of a
        joke, many great strides forward have come from it:
        games developers have been at the forefront of producing
        high standards in graphic images. Commercial
        areas such as virtual reality and flight simulation have
        borrowed heavily from the games industry. 
         
        While I've outlined most of the key problems of computing
        speed in parts one and two of this series, the central
        debate has been about affordable and practical computing. 
         
        For those with bottomless pockets many computing problems
        can be overcome: computers from Cray are - at their very
        heart - simply endless rows of processors wired together
        like a team of horses. In technical circles these are
        called massively parallel processors, or MPPs. 
         
        The problem with this type of computer is not only the
        expense of the component parts, but the need to invest
        far more in software, which has to be more structured
        and able to divide its work across many processors at
        once. 
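 
        To make that concrete, here is a minimal sketch in C of
        the kind of restructuring parallel software demands: one
        long sum divided among four workers whose partial answers
        are then collected. It uses ordinary POSIX threads on a
        single machine rather than a real MPP, and the worker
        count and data are arbitrary choices for illustration
        only. 
 
/* Sketch: splitting one long sum across several workers, the
   way parallel machines demand. Uses POSIX threads on an
   ordinary machine; WORKERS and N are arbitrary illustrative
   values. Compile with: cc -O2 sum.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define WORKERS 4
#define N 1000000

static double data[N];

struct slice {
    int start, end;     /* half-open range this worker owns */
    double partial;     /* this worker's share of the total */
};

/* Each worker sums only its own slice of the array. */
static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    double total = 0.0;
    for (int i = s->start; i < s->end; i++)
        total += data[i];
    s->partial = total;
    return NULL;
}

int main(void)
{
    pthread_t threads[WORKERS];
    struct slice slices[WORKERS];
    double grand_total = 0.0;

    for (int i = 0; i < N; i++)
        data[i] = 1.0;          /* dummy data: the sum should be N */

    /* Divide the work evenly, one slice per worker. */
    for (int w = 0; w < WORKERS; w++) {
        slices[w].start = w * (N / WORKERS);
        slices[w].end   = (w == WORKERS - 1) ? N : (w + 1) * (N / WORKERS);
        pthread_create(&threads[w], NULL, sum_slice, &slices[w]);
    }

    /* Collect the partial results - the extra bookkeeping that
       parallel software has to pay for. */
    for (int w = 0; w < WORKERS; w++) {
        pthread_join(threads[w], NULL);
        grand_total += slices[w].partial;
    }

    printf("total = %.0f\n", grand_total);
    return 0;
}
 
        Even in this toy case the programmer must decide how to
        slice the work and how to gather the results - exactly
        the extra investment in software structure described
        above. 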
         
        Nevertheless, as outlined in parts one and two, the
        next step forward is to provide better support for the
        main CPU through co-processors. In short, the same sort
        of idea, but on a much smaller and more automatic scale. 
         
        To continue our central idea of computing as separate
        parts held together by a common purpose, we must consider
        what users actually demand of their machines. 
         
        Computer hardware manufacturers can only provide
        components that their customers want. A chip that can
        perform a record amount of floating-point maths per
        second will be useless unless there is a reasonable
        commercial market for it. In other words, designers
        have to design for the commercial market, not for the
        record books. 
         
        Equally important is that a computer can be improved
        simply by being more closely focused on the purpose
        it has to perform. If the computer is a games
        console, it is obvious that the user wants fast screen
        updates and multi-channel sound - things that the
        "serious" computer user might not. 
         
        In certain cases there are components - such as hardware
        caches - that might actually hinder efficiency when
        running certain pieces of software, because the design of
        the software gains no advantage from having them, or
        because the cache is too small or too large for the
        individual application. I'd therefore be happier
        describing these functions as "most of a good
        thing" rather than "all of a good thing."  
         
        In simpler terms, the design of a computer can never
        be perfect. There will always be debates as to whether a
        particular added function helps or hinders in the world
        it is likely to encounter. 
         
        The biggest stone wall facing chip manufacturers is the
        nature of electricity itself. There is a built-in limit
        to how quickly electrons can travel, so chips
        cannot simply become faster and faster without end. Some
        experts say that the future of computing lies in the use
        of semiconductor lasers and chemical-based memory -
        but this will require a breakthrough that will swamp all
        those that have gone before. 
         
         