Lecture Notes 07 (5 February 2002)

Machine Architecture


Overall Reading
Brookshear: Ch. 2.1-2.3
Decker/Hirshfield: pp. 207-214; Mod. 7.4

Outline:

  • Basic Layout of a Computer ("Architecture")
  • Components
  • Communication between Components (the "bus")
  • More complicated architectures
  • The Stored-Program Concept
  • Instructions Represented as Bit Patterns
  • Program Execution (Fetch-Decode-Execute)
  • A sample CPU

  • Typical Instruction Sets

  • A Sample Instruction Set - PIPPIN

  • Basic Layout of a Computer

  • Components
    There are many different pieces of hardware that are components of a computer system. If you have ever considered buying a computer, you are probably aware of the many, many options that can be ordered. A sample of common components includes:

  • Central Processing Unit (CPU)
    The CPU does all arithmetic and logical processing, and essentially controls everything. It consists of many gates and circuits.
    Some popular CPUs these days include the Pentium and the PowerPC, among many others.
  • Main memory (RAM)
    A sequence of cells, each of which is referenced by a numeric address. (See the short sketch after this list.)
  • Hard Disk Drive, Floppy Disk Drive
  • CD/DVD Drives (read only? read/write?)
  • Keyboard, Mouse and other input devices
  • Monitor, Speakers, Printers and other output devices
  • What is nice is that you can generally mix and match various components when configuring a system.
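
    To make the memory model concrete, here is a minimal sketch in Python (the number of cells and the values stored are invented for illustration): main memory behaves like a numbered sequence of cells, read and written by address.

      # A minimal sketch of main memory as a sequence of cells,
      # each referenced by a numeric address (sizes and values
      # are hypothetical).
      memory = [0] * 16      # 16 cells, addresses 0 through 15

      memory[5] = 74         # store the value 74 in cell #5
      print(memory[5])       # read the cell back by its address -> 74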

  • Communication between Components (the "bus")
    These components need to effectively communicate information with each other. In the simplest architecture, a collection of wires known as the "bus" is used to pass information among components. For example, we will soon see that a bus must connect the CPU to the main memory.
    
      [CPU] -------[bus]------- [Main Memory]
    
    
    It is possible that many components share the same bus for communications. In this case, they must cooperate when sending messages from one device to another, so that the communication is clear.
    
             CD Drive       Modem
                |            |
      [CPU] --------[bus]-------- [Main Memory]
              |            |
           Monitor        Disk
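
    As a rough illustration (not how real hardware works), the toy Python model below captures the cooperation idea: every component attaches to the shared bus, and each message names its intended receiver so that the other devices can ignore it. All class and device names here are invented.

      # A toy model of several components sharing one bus.
      class Bus:
          def __init__(self):
              self.devices = {}
          def attach(self, name, device):
              self.devices[name] = device
          def send(self, sender, receiver, data):
              # Each message names its receiver, so devices sharing
              # the bus know which messages are meant for them.
              self.devices[receiver].receive(sender, data)

      class Device:
          def __init__(self, name):
              self.name = name
          def receive(self, sender, data):
              print(self.name, "received", repr(data), "from", sender)

      bus = Bus()
      for name in ["CPU", "Main Memory", "CD Drive", "Monitor"]:
          bus.attach(name, Device(name))

      bus.send("CPU", "Main Memory", "read cell #74")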
    
    

  • More complicated architectures
    The above view is a simplification. Today's computers may have significantly more complicated architectures, involving multiple buses, multiple processors and various memory hierarchies.

    For example, the CPU is separate from the main memory. However, it is often convenient for the CPU to have a small amount of its own memory (sort of like 'scratch paper'). Individual cells of the CPU's memory are called registers, and a collection of such memory is called a cache. The advantage is that the CPU can access this memory much more quickly than passing a message down the bus to the main memory and waiting for a response. The disadvantage of adding memory to the CPU is that more memory makes the CPU physically larger and more complicated. Worse yet, referencing more memory cells would require more bits per address.

    You might even see computers with multiple levels of cache (e.g. a "second level cache"). These provide a tradeoff between the amount of memory and the efficiency of accessing the memory. (Analogy: organizing my desk, drawers, file cabinets, closets, boxes).
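
    The "check the fast memory first" idea can be sketched in Python as follows (a simplification with invented sizes; real caches are implemented in hardware and manage their contents automatically):

      # A toy model of consulting a small fast cache before main memory.
      memory = {addr: 0 for addr in range(256)}   # large, slow main memory
      cache = {}                                  # small, fast CPU-side memory

      def read(addr):
          if addr in cache:            # hit: no trip down the bus needed
              return cache[addr]
          value = memory[addr]         # miss: fetch from main memory
          cache[addr] = value          # keep a copy for next time
          return value

      read(74)    # first access: fetched from main memory
      read(74)    # second access: answered from the cache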


  • The Stored-Program Concept

    We want computers to be programmable! We draw upon the historical perspective contrasting Pascal's gear-based adder with the more general vision of Babbage's Analytical Engine. For electrical computers, we have seen how circuits can be wired together to perform binary additions. Other circuits might be able to do other types of processing. But how do we design a general-purpose, programmable computer?

    The first key is in thinking about how we would even represent a computer program. Babbage envisioned punched cards, based on the technology of his time. For electrical computers, the insight, credited to John von Neumann, is to represent instructions using bits, so that program instructions can be stored in memory using the same technology used for other types of information. This basic idea is still used in all modern computers.


  • Instructions Represented as Bit Patterns
    There are two parts to each instruction:

  • Operator
    There are many possible operations which a CPU can perform (for example, an addition, a division, loading data from memory). For every distinct type of operation, we will designate a unique combination of bits known as an op code. Of course, the more distinct types of operations that exist, the more bits we will need to use per op code. For example, we will look at a hypothetical machine which uses an 8-bit op code, and thus has the potential for up to 256 distinct instructions.

    If we think about the bit patterns as binary numbers, we are essentially assigning unique numbers to each operator. (e.g. addition is operator #0, subtraction is operator #1)

  • Operand Field
    Many operations to be performed involve processing of associated data. For example, if you want to instruct the CPU to perform an addition, you not only need to inform it of the operator, but you also must tell it, in some way, what numbers you want added. These associated pieces of data are traditionally called operands.

    Operands can be specified in various ways, depending on the CPU design.

  • A number to be added can be given in absolute terms as a binary number.
    (e.g. "add value 74 to the total")
  • An operand might be expressed through an indirect reference to memory.
    (e.g. "add the value stored in memory cell #74 to the total")
  • If the CPU has its own registers or cache, an indirect reference may refer to those
    (e.g. "add the value stored in register #3 to the total")
  • If a CPU is designed to allow data to be specified in more than one way, instructions like addition will usually have several distinct op codes, one for each form.

    The number of bits used for the operand field depends on the maximum value allowed for absolute operands, or on the number of memory cells which can be referenced indirectly.
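
    Putting the two parts together: the Python sketch below packs a hypothetical 2-byte instruction, an 8-bit op code followed by an 8-bit operand field. The op code numbers and the two forms of addition are invented for illustration.

      # Encode and decode a hypothetical 2-byte instruction:
      # 8 bits of op code followed by 8 bits of operand.
      ADD_VALUE = 0      # hypothetical op code: add an absolute value
      ADD_MEMORY = 1     # hypothetical op code: add the value in a memory cell

      def encode(opcode, operand):
          return (opcode << 8) | operand     # pack both fields into 16 bits

      def decode(instruction):
          return instruction >> 8, instruction & 0xFF

      word = encode(ADD_MEMORY, 74)          # "add the value in cell #74"
      print(format(word, '016b'))            # 0000000101001010
      print(decode(word))                    # (1, 74)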


  • Program Execution (Fetch-Decode-Execute)
    The instructions which comprise a computer program will generally be stored in main memory. A special register within the CPU, known as the Program Counter (PC), will store the memory address which references the next instruction to be executed.

    An instruction is performed using the following general cycle of events (though some variations exist):

    Fetch-Decode-Execute cycle

  • Fetch
    The Program Counter is read. Then the next instruction (both the op code and the operand) is fetched from the associated memory location and stored in the Instruction Register (IR).

  • Decode
    This instruction is a sequence of bits. The processor must interpret the op code to determine what type of instruction is desired before proceeding. A decoder circuit, built around the specific instruction codes which the processor supports, is used for this purpose.

  • Execute
    Now the work must be done, and often this involves more than one step. For example:
  • If an indirect operand is specified, the value must be read from memory.
  • For arithmetic operations, the new operand and any previously accumulated value must be loaded into the Arithmetic/Logic Unit and then the operation performed.
  • If an instruction is supposed to store a value in main memory, that value and the target address must be sent across the bus to the main memory.
  • Finally, near the end of the process, the Program Counter must be changed to prepare for the next instruction to be executed. By default, after each instruction is executed, the PC is incremented so that it references the next sequential instruction. The amount of incremental change depends on the number of bits used in representing instructions (for example, we will see a sample machine in which the PC is incremented by 2, due to instructions with length equal to 2 bytes). Furthermore, some instructions purposefully change the PC in order to jump forward or backward to a different instruction.
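
    The whole cycle can be sketched in Python (using the hypothetical 2-byte instruction format from above; the op codes and the program are invented, not those of any real processor):

      # A toy fetch-decode-execute loop with a 1-byte op code and a
      # 1-byte operand, so each instruction is 2 bytes long and the
      # PC advances by 2. All op codes here are hypothetical.
      LOAD, ADD, STORE, JUMP, HALT = range(5)

      memory = [0] * 32
      memory[0:8] = [LOAD, 10,      # load the value in cell #10
                     ADD, 11,       # add the value in cell #11
                     STORE, 12,     # store the result in cell #12
                     HALT, 0]       # stop
      memory[10], memory[11] = 3, 4    # data, stored alongside the program

      pc = 0                 # Program Counter
      accumulator = 0

      while True:
          opcode, operand = memory[pc], memory[pc + 1]   # fetch
          if opcode == LOAD:                             # decode, then execute
              accumulator = memory[operand]
          elif opcode == ADD:
              accumulator += memory[operand]
          elif opcode == STORE:
              memory[operand] = accumulator
          elif opcode == JUMP:
              pc = operand         # control instructions change the PC directly
              continue
          elif opcode == HALT:
              break
          pc += 2                  # by default, move to the next instruction

      print(memory[12])            # 3 + 4 = 7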

  • A sample CPU

    Note: Each of our texts discusses a hypothetical CPU and a set of instructions which the CPU can perform. The two are not the same as each other, and neither is the same as a real processor. Yet both are in the same style as real processors, though simplified. For continuity with our assignments, we will use the Decker/Hirshfield CPU in our examples.


  • Typical Instruction Sets

    Each particular CPU will be designed to support a specific set of instructions. Different CPUs will support different sets. Generally, though, instructions fall into one of three general groups:
  • Data Transfer
    These handle movement of data between main memory and the various CPU registers. Examples include loading a value from memory into the CPU, storing a value from the CPU to main memory, or setting a register or memory cell to a particular binary value.

  • Arithmetic/Logic
    One part of the CPU is the Arithmetic/Logical Unit (ALU). This circuitry is responsible for computing arithmetic operations such as addition or division, for testing equalities and inequalities such as "equal to" or "less than," and for providing logical operators such as AND and OR.

  • Control
    Operations are provided which allow the program to alter the Program Counter in various ways. Examples include jumping past certain instructions, jumping backwards to repeat some instructions, or halting execution.
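
    As a rough illustration, the invented op codes from the fetch-decode-execute sketch above sort into these same three groups:

      # A hypothetical classification of the toy op codes used earlier.
      INSTRUCTION_GROUPS = {
          "Data Transfer":    ["LOAD", "STORE"],
          "Arithmetic/Logic": ["ADD"],
          "Control":          ["JUMP", "HALT"],
      }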

  • A Sample Instruction Set - PIPPIN

    Page 260 of Decker/Hirshfield gives a User's Guide for PIPPIN, their version of a simple assembly language.

