BETTER, QUICKER, FASTER

IN THIS THREE-PART SERIES, PETER HAYES LOOKS AT THE TECHNIQUES AND POLITICS OF MAKING COMPUTERS FASTER AND MORE FLEXIBLE.
TODAY - IN PART ONE - HE LOOKS AT THE KEY COMPONENTS THAT GOVERN COMPUTER SPEED.

One of the perennial problems of explaining the work of computers in layman's terms is that all useful computing is made up of various stages and elements. These stages come under the very separate categories of "hardware" and "software", but given that computing is a team effort, a weakness in any of these individual components will be reflected in the overall performance of the computer.

In other words, computing is a little like a chain - only as strong as its very weakest link.

In most cases making a computer faster and more efficient requires many individual parts to be improved, not just one. While we will look at these "speed critical" components today and over the next two parts, it is important first to grasp that "individual performance" and "overall improvement" can be two different matters.

Another headline point is that a computer (through software) performs many invisible (or semi-invisible) functions, including checks on "system integrity" which can throw up pop-up messages and "error checks."

While this may prevent the system from crashing (or doing something undesirable), it also takes the edge off overall efficiency. Therefore there is - and will always be - a trade-off between raw operating speed and basic human convenience.

Another headline factor to remember is that speed is nothing without reliability. The fastest, most modern PC on the market will still crash when running bugged or unreliable software - in fact a faster computer will reach these errors sooner and crash or misbehave all the quicker!

It is also true that the computer may well be attached to third-party peripherals such as scanners and printers. Some of these are not unlike mini-computers in themselves, and they - rather than the central computer - could be the reason for disappointing performance.

The letters pages of any computer magazine feature complaints that readers' computers are not operating at the speeds they would like. This can be a problem of their own making, such as trying to run too many programs at the same time or using networked computers (where the speed will be governed by outside factors); but sometimes it is just a basic misunderstanding of the computing process itself.

Software is just a "mathematical journey" with a beginning, a middle and an end. In certain cases these three stages are easy to spot - as with photo-scanning software: you put the photo in the scanner, you prime and run the software, and the image appears on the computer screen. Job complete.

In far more cases the programme loops around on itself and these processes are heavily disguised. Nevertheless, the amount of mathematics needed to complete the programme's duties varies wildly from application to application.

Word processors or note pads (when simply accepting text) have one of the lightest mathematical loads; three-dimensional rendering packages (software that creates artificial worlds from bare wireframe models) have one of the heaviest.

Therefore the human perception of speed can be deceptive unless the user has an outline idea of how much mathematics needs to be performed and how their basic computer components go about breaking down this work.

(In expert circles comparisons between computers are made via a system of "bench-tests", where different models - or configurations - are made to perform like-for-like tests.)
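
To make this concrete, the short listing below is a rough sketch - written in C purely for illustration, and not taken from any standard benchmark suite - of how such a like-for-like test might be timed: the same fixed piece of arithmetic is run on each machine and the elapsed times are compared.

    /* Rough bench-test sketch: time a fixed arithmetic workload.
       The workload and loop count are illustrative, not any standard suite. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long iterations = 10000000L;   /* the same fixed "job" on every machine */
        volatile double total = 0.0;         /* volatile stops the compiler skipping the work */

        clock_t start = clock();
        for (long i = 0; i < iterations; i++)
            total += (double)i * 1.000001;   /* a small, repeatable piece of arithmetic */
        clock_t end = clock();

        printf("Elapsed: %.2f seconds (total %f)\n",
               (double)(end - start) / CLOCKS_PER_SEC, total);
        return 0;
    }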

The most talked-about piece of equipment in a computer is the CPU, or Central Processing Unit. People often talk about it as if it were a computer in itself, but it is simply the most important component in the series of hardware pieces that make a modern computer work.

While it might seem desirable - on the whole - to have a quicker and more powerful processor at the centre of your computer, this advantage will be nullified if it produces data faster than the other support chips (or add-on devices) can deal with it - in technical language these log-jams are called "wait states."

When studying computer advertising the reader will be presented with many components and their "spec". These will often include the CPU, graphics support facilities (such as boards), co-processors and memory type - although memory type plays only a minute role in governing overall computer speed.

The CPU is the main "heartbeat" chip in a computer and regulates and controls all the other elements of the computer. It does this by following the instructions of software, because it is - in itself - totally brainless. At its heart it adds two numbers together and gives an answer - just like a calculator.

The jump between this simple action and the output of a complicated multi-functional software package is hard for many users to comprehend. But useful software is merely thousands of small mathematical sums performed one after another.
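
To make that point concrete, the short listing below (written in C purely for illustration - it does not come from any particular package) shows that even a job as ordinary as totalling a column of figures boils down to one simple addition repeated over and over, which is exactly the kind of work the CPU performs.

    /* Toy illustration: a "useful" job - totalling a column of figures -
       is really just one simple sum (an addition) repeated many times. */
    #include <stdio.h>

    int main(void)
    {
        int figures[] = { 12, 7, 31, 5, 19 };    /* illustrative data only */
        int count = sizeof(figures) / sizeof(figures[0]);
        int total = 0;

        for (int i = 0; i < count; i++)
            total = total + figures[i];          /* the CPU's basic act: add two numbers */

        printf("Total: %d\n", total);
        return 0;
    }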

Obviously, the quicker the central mathematics is performed, the quicker the "mathematical journey" will be completed. Some chips perform better than others thanks to a faster clock speed (in crude terms, "the handle is being cranked quicker"), better "density levels" (the circuits - or switches - being placed closer together), or improvements in overall design.
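
As a rough worked example (the figures below are purely illustrative), the time taken to finish the journey is simply the number of small sums needed divided by the number of sums the chip can complete each second - so doubling the clock speed, all else being equal, halves the time. The short C listing makes the arithmetic explicit.

    /* Rough worked example with illustrative figures only:
       time to finish the "journey" = sums needed / sums completed per second. */
    #include <stdio.h>

    int main(void)
    {
        double sums_needed   = 50e6;   /* 50 million small sums in the job */
        double clock_ticks   = 25e6;   /* 25 million ticks per second */
        double sums_per_tick = 1.0;    /* one sum completed per tick */

        double seconds = sums_needed / (clock_ticks * sums_per_tick);
        printf("At 25 MHz: %.1f seconds\n", seconds);

        clock_ticks = 50e6;            /* crank the handle twice as fast */
        seconds = sums_needed / (clock_ticks * sums_per_tick);
        printf("At 50 MHz: %.1f seconds\n", seconds);
        return 0;
    }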

Another way of getting the job done quicker is to increase the size of the sum that can be performed at one time (from 16-bit to 32-bit, for example) or to lay off some of the more difficult (or time-consuming) work to a co-processor - an idea which we will flesh out a little later.
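
The listing below is an illustrative sketch only (it is not how any particular processor is actually programmed): on a chip that can only handle 16 bits at a time, adding two 32-bit numbers has to be done in two stages - low halves first, then high halves plus any carry - whereas a 32-bit chip does the same job in a single addition.

    /* Illustrative only: adding two 32-bit numbers in 16-bit pieces takes
       two additions plus a carry; a 32-bit chip does it in one addition. */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t add_in_16bit_pieces(uint32_t a, uint32_t b)
    {
        uint32_t low   = (a & 0xFFFFu) + (b & 0xFFFFu);   /* first 16-bit sum */
        uint32_t carry = low >> 16;                        /* did it spill over 16 bits? */
        uint32_t high  = (a >> 16) + (b >> 16) + carry;    /* second 16-bit sum */
        return (high << 16) | (low & 0xFFFFu);
    }

    int main(void)
    {
        uint32_t a = 123456u, b = 654321u;
        printf("Two-step (16-bit style) answer: %lu\n",
               (unsigned long)add_in_16bit_pieces(a, b));
        printf("One-step (32-bit chip) answer:  %lu\n",
               (unsigned long)(a + b));
        return 0;
    }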

Yet another idea is to cut down the number of mathematical instructions the chip is capable of performing, which increases working efficiency (in the way that a filing cabinet with 50 files in it will be quicker to use than one with 100). These chips are called "RISC" (Reduced Instruction Set Computer) chips.

Another way to improve efficiency is through a built-in cache system: this stores a section of software instructions in a special quick-access memory, meaning the fetching and carrying time is reduced. This system works best with software titles that require a lot of repetitive maths - such as a graphics programme.
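
The sketch below only illustrates the kind of access pattern that suits a cache - a small table of values worked on over and over again. The caching itself is done by the hardware, not by the program, and the sizes chosen here are illustrative.

    /* Illustrative access pattern only: a small table re-used thousands of
       times suits a cache well, because the same values are fetched repeatedly.
       The caching itself happens in hardware, not in this code. */
    #include <stdio.h>

    int main(void)
    {
        double table[256];                        /* small enough to sit in a cache */
        for (int i = 0; i < 256; i++)
            table[i] = i * 0.5;

        double total = 0.0;
        for (int pass = 0; pass < 10000; pass++)  /* repetitive maths over the same data */
            for (int i = 0; i < 256; i++)
                total += table[i];                /* after the first pass, fetched from fast memory */

        printf("Total: %f\n", total);
        return 0;
    }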

The co-processor is one of the fastest growing areas of computing. These work under the instruction of the main CPU and provide mathematical support. In plain English, the CPU says "perform this sum for me while I get on with other things." A co-processor can also perform duties such as receiving raw data and turning it into sound or special screen images - but these are just two possibilities among many.
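
As a loose analogy only, the listing below uses a second thread of the same program to stand in for the co-processor: the main routine hands over a sum and carries on with other business until the answer is ready. A real co-processor is a separate chip rather than a thread, but the division of labour is the same idea. (On most systems the listing needs to be linked with the pthread library.)

    /* Loose analogy only: a second thread stands in for a co-processor. */
    #include <stdio.h>
    #include <pthread.h>

    static void *do_the_sum(void *arg)
    {
        double *result = (double *)arg;
        double total = 0.0;
        for (long i = 1; i <= 1000000L; i++)   /* the "difficult" work handed over */
            total += 1.0 / (double)i;
        *result = total;
        return NULL;
    }

    int main(void)
    {
        double answer = 0.0;
        pthread_t helper;

        pthread_create(&helper, NULL, do_the_sum, &answer);  /* "perform this sum for me..." */
        printf("Main routine getting on with other things...\n");

        pthread_join(helper, NULL);            /* collect the helper's answer */
        printf("Helper's answer: %f\n", answer);
        return 0;
    }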

Next time - in part two - we will use what we have learnt today to look more closely at the politics of faster and more efficient computing, as well as exploring how such improvements in hardware technology have to be directly tied to computing need.