How Computers Work (general overview)
This information is what I believe now, and is not complete. Please email
me if there are errors. Let's take it from the bottom...
Electricity, Transistors, Gates, ALU
- It all begins here. Electrons travel through transistors.
- Transistors can be arranged to make simple gates.
- The gates, such as AND, OR, and NOT, are simple logical functions (1
AND 1 = 1; 1 AND 0 = 0).
- Many gates can be combined to make an ALU, or arithmetic logic unit.
Gates can be arranged to make adders, comparators, and even multipliers.
The ALU is a giant unit; given inputs A and B, a control signal selects
which function it performs.
- A sequence of signals can be a simple program for the ALU. (Let's
say you want to add A, B, and C).
- Give input A, input B, and a control signal (ADD operation)
- The result is A+B, call this X. Write this down.
- Give input X, input C, and ADD
- The result is A+B+C, the result we want.
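Here is a rough sketch in C (not real hardware, just an illustration) of
gates built from bitwise operations, a one-bit full adder built from those
gates, and the two-step ADD sequence above. The function names and opcodes
are made up for this sketch.

    #include <stdio.h>

    /* One-bit gates modeled as C functions. */
    static int AND(int a, int b) { return a & b; }
    static int OR (int a, int b) { return a | b; }
    static int NOT(int a)        { return !a; }
    static int XOR(int a, int b) { return OR(AND(a, NOT(b)), AND(NOT(a), b)); }

    /* A one-bit full adder built only from the gates above. */
    static void full_adder(int a, int b, int cin, int *sum, int *cout)
    {
        *sum  = XOR(XOR(a, b), cin);
        *cout = OR(AND(a, b), AND(cin, XOR(a, b)));
    }

    /* A toy "ALU": the control signal selects the operation on A and B. */
    enum { OP_ADD, OP_AND, OP_OR };
    static int alu(int a, int b, int op)
    {
        switch (op) {
        case OP_ADD: return a + b;   /* in hardware, a chain of full adders */
        case OP_AND: return a & b;
        case OP_OR:  return a | b;
        }
        return 0;
    }

    int main(void)
    {
        int a = 2, b = 3, c = 4;
        int x = alu(a, b, OP_ADD);   /* step 1: X = A + B ("write this down") */
        int r = alu(x, c, OP_ADD);   /* step 2: result = X + C = A + B + C    */
        printf("A+B+C = %d\n", r);   /* prints 9 */

        int sum, carry;
        full_adder(1, 1, 0, &sum, &carry);
        printf("1+1: sum=%d carry=%d\n", sum, carry);   /* sum=0 carry=1 */
        return 0;
    }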
Memory, CPU
- Memory (RAM) allows the *computer* to remember things, so we do not
have to keep re-entering them. RAM stands for Random Access Memory, which means you
can access any location at any time. This is sort of like a CD - you
can jump to any song. Sequential memory is the opposite, where you have
to skip over memory to get to a location. This is like a tape - you
have to fast-forward or rewind to get to the part you want, and cannot
just jump there. (In reality, wires called address lines are used to
activate and get the data at a location).
- Simple memory can be created from gates whose outputs feed back into
their inputs (a latch), or even from a single transistor and capacitor.
There is ROM, DRAM, SRAM, etc.
- The smallest, simplest unit of memory is a bit. It can be on or off,
charged or discharged, represented as 1 or 0. A group of 8 bits is a
byte. At the lowest level, this is all a computer can understand.
- More gates can be arranged to create a CPU (central processing unit).
Its job is to coordinate the reading of instructions, the use of the
ALU, and the writing of data to memory. The number of cycles it performs
per second is listed in megahertz (millions of cycles/second) or gigahertz
(billions of cycles/second).
- The von Neumann design is to have data and instructions in the same
area of memory. This is interesting; there is no real difference between
an instruction and data (they are all bits).
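To make the fetch-decode-execute idea (and the von Neumann point that
instructions and data are both just numbers in one memory) concrete, here
is a toy machine simulated in C. The opcodes and memory layout are
invented for this sketch.

    #include <stdio.h>

    /* Made-up opcodes for a toy von Neumann machine. */
    enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

    int main(void)
    {
        /* One array holds both instructions and data (all just numbers). */
        int mem[16] = {
            LOAD,  10,      /* acc = mem[10]           */
            ADD,   11,      /* acc = acc + mem[11]     */
            STORE, 12,      /* mem[12] = acc           */
            HALT,
            0, 0, 0,
            5, 7, 0         /* data at addresses 10-12 */
        };

        int pc = 0, acc = 0, running = 1;
        while (running) {
            int op = mem[pc++];        /* fetch the instruction  */
            switch (op) {              /* decode and execute it  */
            case LOAD:  acc = mem[mem[pc++]];   break;
            case ADD:   acc += mem[mem[pc++]];  break;
            case STORE: mem[mem[pc++]] = acc;   break;
            case HALT:  running = 0;            break;
            }
        }
        printf("mem[12] = %d\n", mem[12]);   /* prints 12 (5 + 7) */
        return 0;
    }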
Programs, Scheduling, Operating Systems
- Originally, to add a program to RAM you would select the first address,
then set the value manually (such as by flipping switches). Eventually a
machine could do this by reading punched cards. Give it a stack of punched
cards (representing the instruction and data values) and the machine
would input them into RAM. Then run the CPU. The computer would then
start from the first location, read and execute the instruction, increment
a counter, read from the second location, and so on.
- The computer ran one program at a time, like a giant calculator. To
allow multiple users, a clock was used. At set intervals, the clock
would interrupt the CPU, forcing it to jump to a particular instruction
in RAM. This instruction (and the following ones) would be the task
scheduler. It would figure out which task was to run next, save the
current task's registers to memory, restore the next task's saved
registers, then jump to the proper instruction in that program (where
it had previously stopped). This gave the appearance of running
multiple programs, because the CPU was rapidly switching between them
(a rough sketch of this follows below).
- To maximize the use of RAM, a virtual memory system was created. It
saved unused portions of programs onto a magnetic disk, loading them
again when they needed to run. That way, programs not currently running
were kept out of RAM, and more programs could be loaded at one time
(some in RAM, some on disk).
- The task scheduler and virtual memory system were the beginnings of
an operating system - a program whose sole purpose was to manage the
computer. Recall that originally, programs just ran directly on the
computer. Only when a program finished (or crashed) could another run.
Also, the program's resources were limited to the physical RAM on the
machine. With virtual memory, the hard disk could be used as extra RAM.
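Here is a loose sketch in C of the task-switching idea described above. A
real scheduler is triggered by a hardware clock interrupt and saves and
restores actual CPU registers; in this sketch the "interrupt" is just a
loop iteration and each task's saved state is a single counter (the names
are made up).

    #include <stdio.h>

    #define NUM_TASKS   3
    #define TIME_SLICES 6

    /* The saved state of each task: here just an id and a progress counter. */
    struct task { int id; int progress; };

    int main(void)
    {
        struct task tasks[NUM_TASKS] = { {0, 0}, {1, 0}, {2, 0} };
        int current = 0;

        for (int tick = 0; tick < TIME_SLICES; tick++) {
            /* "Clock interrupt": control returns to the scheduler. */
            struct task *t = &tasks[current];

            /* Run the chosen task for one time slice (it just counts here). */
            t->progress++;
            printf("tick %d: task %d ran (progress=%d)\n",
                   tick, t->id, t->progress);

            /* Pick the next task; its saved state is picked up again the
               next time it runs, giving the appearance of parallelism. */
            current = (current + 1) % NUM_TASKS;
        }
        return 0;
    }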
Display, Other components
- But who wants a computer with no display? It's part of the magic.
Traditionally, a cathode ray tube (CRT) has been used. The computer
writes what should appear into a bitmap in video RAM (sketched after
this list). The video card has a RAMDAC (RAM digital-to-analog
converter) that transforms the bitmap
into analog signals that control the CRT. The video card could connect
to the CPU using the bus, as could other types of devices (sound cards,
network cards, etc.). It is essential that the CPU have access to these
cards, because it needs to transfer data from the device to RAM
(and vice versa). Sometimes the cards may access RAM themselves - this
is called DMA (Direct Memory Access). This takes some load away from
the CPU, which is then free to do other things.
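Here is the bitmap idea from above as a rough C sketch: an ordinary array
stands in for video RAM, the program writes pixels into it, and printing
the array stands in for the hardware scanning it out to the screen (the
sizes and names are invented).

    #include <stdio.h>
    #include <string.h>

    #define WIDTH  16
    #define HEIGHT 8

    /* Stand-in for video RAM: one byte per pixel (0 = off, 1 = on). */
    static unsigned char framebuffer[HEIGHT][WIDTH];

    int main(void)
    {
        memset(framebuffer, 0, sizeof framebuffer);

        /* The CPU "draws" by writing into the bitmap: a horizontal line. */
        for (int x = 0; x < WIDTH; x++)
            framebuffer[3][x] = 1;

        /* The video hardware repeatedly scans the bitmap out to the screen;
           here we just print it once, with '#' for lit pixels. */
        for (int y = 0; y < HEIGHT; y++) {
            for (int x = 0; x < WIDTH; x++)
                putchar(framebuffer[y][x] ? '#' : '.');
            putchar('\n');
        }
        return 0;
    }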
It is amazing that a computer can eventually be broken down into its
constituent transistors. But looking at a computer that way is
mind-boggling - it is better to see it as a group of interacting components.
Development of Programming Languages
At first, there was only machine code. To give the machine instructions,
the explicit control signals had to be input into RAM (1 corresponding
to high voltage, 0 to low). This was painstaking, as you can imagine,
because humans do not think/read in terms of 1's and 0's. Eventually,
an assembler was developed. This took mnemonic instructions (such as ADD
or OR) and converted them into the corresponding patterns of 1s and 0s.
(How was this done? In hardware first, I suppose. Punch out the code you need.
If done in software, the first assembler had to be written in machine
code - yucky). This was a great improvement, but barely hid the nitty-gritty
details of the machine. Higher-level languages were developed. These languages
contained more powerful instructions, each of which could expand into many
assembly-level instructions. Of course, the first compiler had to be written in assembly
- I've been told it took 17 man-years to complete.
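As a small illustration of what an assembler does at its core, here is a
toy one in C that looks up a mnemonic in a table and prints the
corresponding bit pattern. The mnemonics and opcodes are invented; a real
assembler also handles operands, labels, and addresses.

    #include <stdio.h>
    #include <string.h>

    /* A made-up instruction set: mnemonic -> 4-bit opcode. */
    struct entry { const char *mnemonic; unsigned opcode; };
    static const struct entry table[] = {
        { "ADD", 0x1 }, { "OR", 0x2 }, { "AND", 0x3 }, { "HALT", 0x0 },
    };

    static void assemble(const char *mnemonic)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
            if (strcmp(mnemonic, table[i].mnemonic) == 0) {
                /* Print the opcode as four bits, as it would sit in RAM. */
                for (int bit = 3; bit >= 0; bit--)
                    putchar((table[i].opcode >> bit & 1) ? '1' : '0');
                printf("  (%s)\n", mnemonic);
                return;
            }
        }
        printf("unknown mnemonic: %s\n", mnemonic);
    }

    int main(void)
    {
        assemble("ADD");    /* 0001 */
        assemble("OR");     /* 0010 */
        assemble("HALT");   /* 0000 */
        return 0;
    }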
However, these painstaking developments were quite useful. Programmers
did not have to worry about every detail of the machine - if they did,
they would not get anything done. It's been shown that programmers write
roughly the same number of instructions per day, whether in a high-level
or a low-level language. If that is the case, it might as well be a
high-level language. (Of course, for fine-tuning an application, using a
low-level language is better. But in the common case (just getting the
application to work), a high-level language is much more useful.)
The latest wave has been object-oriented programming, which looks at
programs in a whole new light. It provides a way to organize code into
reusable units, which helps both to maintain code and to lay it out in
the beginning. Event-driven programming has also sprung up. This is a
non-linear approach to programming: the program waits for an event to
occur (such as a mouse click on a menu item), runs the portion of code
tied to that event, and then returns to the waiting state.
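A bare-bones sketch in C of the event-driven idea: the program sits in a
loop waiting for the next event, dispatches it to a handler, then returns
to waiting. The event names and the scripted event source are invented
for this sketch.

    #include <stdio.h>

    /* Invented event types for illustration. */
    enum event { EV_NONE, EV_MENU_CLICK, EV_KEY_PRESS, EV_QUIT };

    static void on_menu_click(void) { printf("menu item selected\n"); }
    static void on_key_press(void)  { printf("key pressed\n"); }

    /* Stand-in for the system's event queue: returns a scripted sequence. */
    static enum event next_event(void)
    {
        static const enum event script[] = { EV_MENU_CLICK, EV_KEY_PRESS, EV_QUIT };
        static int i = 0;
        return script[i++];
    }

    int main(void)
    {
        for (;;) {
            enum event ev = next_event();   /* wait for the next event      */
            if (ev == EV_QUIT) break;       /* leave the loop and exit      */
            switch (ev) {                   /* run the handler for this one */
            case EV_MENU_CLICK: on_menu_click(); break;
            case EV_KEY_PRESS:  on_key_press();  break;
            default: break;
            }
            /* ...then return to the waiting state at the top of the loop. */
        }
        return 0;
    }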
Links
www.howstuffworks.com
www.karbosguide.com
Do a google search :)