November 15, 2016

Design Principles for Modern Systems

There is a set of design principles, sometimes called the RISC (reduced instruction set computer) design principles, that architects of new general-purpose CPUs do their best to follow. Some of the major ones are:

1.   All Instructions Are Directly Executed by Hardware
All common instructions are directly executed by the hardware. They are not interpreted by microinstructions. Eliminating a level of interpretation provides high speed for most instructions.

2.   Maximize the Rate at Which Instructions Are Issued
Modern computers resort to many tricks to maximize their performance, chief among which is trying to start as many instructions per second as possible. This principle suggests that parallelism can play a major role in improving performance, since issuing large numbers of slow instructions in a short time interval is possible only if multiple instructions can execute at once.
Although instructions are always encountered in program order, they are not always issued in program order (because some needed resource might be busy) and they need not finish in program order. Getting this right requires a lot of bookkeeping but has the potential for performance gains by executing multiple instructions at once.

3.   Instructions Should Be Easy to Decode
A critical limit on the rate of issue of instructions is decoding individual instructions to determine what resources they need. Anything that can aid this process is useful. That includes making instructions regular, of fixed length, and with a small number of fields. The fewer different formats for instructions, the better; a short decoding sketch is given at the end of this list of principles.

4.   Only Loads and Stores Should Reference Memory
One of the simplest ways to break operations into separate steps is to require that operands for most instructions come from, and return to, CPU registers. Moving operands between memory and registers can be done in separate instructions. Since access to memory can take a long time, and the delay is unpredictable, these instructions can best be overlapped with other instructions, provided that they do nothing except move operands between registers and memory.
This observation means that only LOAD and STORE instructions should reference memory. All other instructions should operate only on registers.

5.   Provide Plenty of Registers
Since accessing memory is relatively slow, many registers (at least 32) need to be provided, so that once a word is fetched, it can be kept in a register until it is no longer needed. Running out of registers and having to flush them back to memory only to reload them later is undesirable and should be avoided as much as possible. The best way to accomplish this is to have enough registers.
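
To make principles 3 and 4 concrete, here is a minimal sketch in C of a made-up fixed-width instruction format; the field layout, opcode values, and names such as OP_LOAD are assumptions for illustration, not any real instruction set. Because every instruction is the same width with fields in fixed positions, decoding reduces to a few shifts and masks, and only the LOAD and STORE opcodes ever reach memory.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical fixed 32-bit format: | opcode:8 | rd:4 | rs:4 | imm:16 | */
#define OP_ADD   0x01   /* rd = rd + rs            (registers only)    */
#define OP_LOAD  0x02   /* rd = mem[reg[rs] + imm] (memory reference)  */
#define OP_STORE 0x03   /* mem[reg[rs] + imm] = rd (memory reference)  */

static uint32_t reg[16];    /* toy register file */
static uint32_t mem[256];   /* toy data memory   */

static void execute(uint32_t instr) {
    /* Fixed fields make decoding a handful of shifts and masks. */
    uint32_t opcode = (instr >> 24) & 0xFF;
    uint32_t rd     = (instr >> 20) & 0x0F;
    uint32_t rs     = (instr >> 16) & 0x0F;
    uint32_t imm    =  instr        & 0xFFFF;

    switch (opcode) {
    case OP_ADD:   reg[rd] += reg[rs];                   break;  /* no memory access */
    case OP_LOAD:  reg[rd] = mem[(reg[rs] + imm) % 256]; break;
    case OP_STORE: mem[(reg[rs] + imm) % 256] = reg[rd]; break;
    default:       printf("unknown opcode %u\n", (unsigned)opcode); break;
    }
}

int main(void) {
    mem[5] = 40;
    /* LOAD r1, 5(r0); ADD r1, r1; STORE r1, 6(r0) -- only LOAD/STORE touch mem[] */
    execute(0x02100005);   /* LOAD  rd=1, rs=0, imm=5 */
    execute(0x01110000);   /* ADD   rd=1, rs=1        */
    execute(0x03100006);   /* STORE rd=1, rs=0, imm=6 */
    printf("mem[6] = %u\n", (unsigned)mem[6]);   /* prints 80 */
    return 0;
}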

Data Path

The data path is the part of the CPU in which data is manipulated. The registers, the ALU, and the interconnecting bus are collectively referred to as the data path. The data path, along with a control unit, makes up the central processing unit (CPU) of a computer system. As data travels through the data path, the control unit issues control signals that cause the data to be manipulated in specific ways, according to the instruction.

The process of running two operands through the ALU and storing the result is called the data path cycle, and it is the heart of most CPUs. To a considerable extent, it defines what the machine can do. The faster the data path cycle is, the faster the machine runs.
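
As a rough sketch of this cycle (the register-file size, ALU operations, and function names below are assumptions for illustration), one data path cycle can be modelled as: read two registers, run them through the ALU, and write the result back.

#include <stdio.h>

enum alu_op { ALU_ADD, ALU_SUB, ALU_AND };

static int reg[8];   /* register file feeding the data path */

/* The ALU combines its two inputs according to the selected operation. */
static int alu(enum alu_op op, int a, int b) {
    switch (op) {
    case ALU_ADD: return a + b;
    case ALU_SUB: return a - b;
    case ALU_AND: return a & b;
    }
    return 0;
}

/* One data path cycle: operands out of the registers, through the ALU,
 * and the result back into a register. Its latency largely sets machine speed. */
static void datapath_cycle(enum alu_op op, int dst, int src1, int src2) {
    int a = reg[src1];          /* drive the operands onto the internal bus */
    int b = reg[src2];
    reg[dst] = alu(op, a, b);   /* latch the ALU output into the destination */
}

int main(void) {
    reg[1] = 7;
    reg[2] = 5;
    datapath_cycle(ALU_ADD, 3, 1, 2);   /* reg[3] = reg[1] + reg[2] */
    printf("reg[3] = %d\n", reg[3]);    /* prints 12 */
    return 0;
}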

CPU Organization/Structure

The CPU is the portion of a computer that fetches and executes instructions. It consists of an arithmetic and logic unit (ALU), a control unit, and registers, and is often simply referred to as the processor.
The major components of the processor are an arithmetic and logic unit (ALU) and a control unit (CU). The ALU does the actual computation or processing of data. The control unit controls the movement of data and instructions into and out of the processor and controls the operation of the ALU. In addition, the processor contains a minimal internal memory, consisting of a set of storage locations called registers.
These components are connected by data transfer and logic control paths, including an internal processor bus. This bus is needed to transfer data between the various registers and the ALU, because the ALU in fact operates only on data in the internal processor memory (the registers).

Register Organization

Within the processor, there is a set of registers that function as a level of memory above main memory and cache in the hierarchy. The registers in the processor perform two roles:
 
User-visible registers: A user-visible register is one that may be referenced by means of the machine language that the processor executes. These registers enable the machine- or assembly-language programmer to minimize main-memory references by optimizing the use of registers. We can characterize them in the following categories:
General purpose: General-purpose registers can be assigned to a variety of functions by the programmer.
Data: Data registers may be used only to hold data and cannot be employed in the calculation of an operand address.
Address: Address registers may themselves be somewhat general purpose, or they may be devoted to a particular addressing mode.
Condition codes: Condition codes are bits set by the processor hardware as the result of operations; for example, a bit may indicate that the result of the last operation was zero.

Control and status registers: Used by the control unit to control the operation of the processor and by privileged operating-system programs to control the execution of programs. Most of these, on most machines, are not visible to the user.
Different machines will have different register organizations and use different terminology. Four registers are essential to instruction execution:
Program counter (PC): Contains the address of an instruction to be fetched.
Instruction register (IR): Contains the instruction most recently fetched.
Memory address register (MAR): Contains the address of a location in memory.
Memory buffer register (MBR): Contains a word of data to be written to memory or the word most recently read.
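
The way these four registers cooperate during an instruction fetch can be sketched as follows (a minimal model; the memory size, word width, and field names are assumptions for illustration):

#include <stdint.h>
#include <stdio.h>

/* Registers essential to instruction execution. */
struct cpu_regs {
    uint32_t pc;    /* program counter: address of the next instruction */
    uint32_t ir;    /* instruction register: instruction just fetched   */
    uint32_t mar;   /* memory address register                          */
    uint32_t mbr;   /* memory buffer register                           */
};

static uint32_t memory[64] = { 0xA1, 0xB2, 0xC3 };   /* toy instruction memory */

/* One fetch step: PC -> MAR, memory[MAR] -> MBR, MBR -> IR, then PC advances. */
static void fetch(struct cpu_regs *r) {
    r->mar = r->pc;
    r->mbr = memory[r->mar % 64];
    r->ir  = r->mbr;
    r->pc += 1;                  /* word-addressed toy memory */
}

int main(void) {
    struct cpu_regs r = {0};
    fetch(&r);
    fetch(&r);
    printf("PC=%u IR=0x%X\n", (unsigned)r.pc, (unsigned)r.ir);   /* PC=2 IR=0xB2 */
    return 0;
}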

Addressing Modes

Each instruction requires certain operands on which it has to operate. There are various techniques for specifying the operand of an instruction; these techniques are called addressing modes. The most common addressing techniques, or modes, are listed below; a short sketch at the end of this section summarizes how each mode locates its operand.

1.   Immediate Addressing:
The simplest form of addressing is immediate addressing, in which the operand value is present in the instruction itself. This mode can be used to define and use constants or set initial values of variables.
The advantage of immediate addressing is that no memory reference other than the instruction fetch is required to obtain the operand, thus saving one memory or cache cycle in the instruction cycle. The disadvantage is that the size of the number is restricted to the size of the address field, which, in most instruction sets, is small compared with the word length.

2.   Direct Addressing:
In this mode of addressing, the address of the operand is specified in the instruction. The operand resides in memory; thus, the address field contains the effective address of the operand.
It requires only one memory reference and no special calculation. The obvious limitation is that it provides only a limited address space.

3.   Indirect Addressing:
In this mode, the address field of the instruction gives the memory address at which the effective address of the operand is stored.
With direct addressing, the length of the address field is usually less than the word length, thus limiting the address range. One solution is to have the address field refer to the address of a word in memory, which in turn contains a full-length address of the operand. This is known as indirect addressing.
The advantage is access to a much larger address space; the disadvantage is that fetching an operand requires two memory references in addition to the instruction fetch: one to obtain the operand's address and a second to obtain the operand itself.

4.   Register Addressing:
In this mode of addressing, the instruction specifies the register in which the operand is available.
The advantages of register addressing are that (1) only a small address field is needed in the instruction, and (2) no time-consuming memory references are required. The disadvantage of register addressing is that the address space is very limited.

5.   Register Indirect Addressing:
In this mode of addressing, the instruction specifies a register that holds the address of the operand; the operand itself is in memory.
The advantages and limitations of register indirect addressing are basically the same as for indirect addressing. In both cases, the address space limitation (limited range of addresses) of the address field is overcome by having that field refer to a word length location containing an address. In addition, register indirect addressing uses one less memory reference than indirect addressing.

6.   Displacement Addressing:
Displacement addressing is a very powerful mode of addressing that combines the capabilities of direct addressing and register indirect addressing.
Displacement addressing requires that the instruction have two address fields, at least one of which is explicit. The value contained in one address field (value = A) is used directly. The other address field, or an implicit reference based on opcode, refers to a register whose contents are added to A to produce the effective address.

7.   Stack Addressing:
A stack is a linear array of locations. It is sometimes referred to as a pushdown list or last-in-first-out queue. Items are appended to the top of the stack. Associated with the stack is a pointer whose value is the address of the top of the stack. Alternatively, the top two elements of the stack may be in processor registers, in which case the stack pointer references the third element of the stack. The stack pointer is maintained in a register. Thus, references to stack locations in memory are in fact register indirect addresses.
The stack mode of addressing is a form of implied addressing. The machine instructions need not include a memory reference but implicitly operate on the top of the stack.
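
A convenient way to compare these modes is by how each one locates its operand. The sketch below gathers them into one toy routine; the memory array, register file, stack pointer, and the explicit A (address/immediate field) and R (register field) parameters are assumptions for illustration only.

#include <stdio.h>

enum mode { IMMEDIATE, DIRECT, INDIRECT, REGISTER, REG_INDIRECT,
            DISPLACEMENT, STACK };

static int mem[32];     /* toy data memory                  */
static int reg[8];      /* toy register file                */
static int sp = 31;     /* stack pointer (top of the stack) */

/* Return the operand selected by the given addressing mode.
 * A is the instruction's address/immediate field, R names a register. */
static int operand(enum mode m, int A, int R) {
    switch (m) {
    case IMMEDIATE:    return A;                /* operand is in the instruction */
    case DIRECT:       return mem[A];           /* EA = A                        */
    case INDIRECT:     return mem[mem[A]];      /* EA = mem[A]                   */
    case REGISTER:     return reg[R];           /* operand is in a register      */
    case REG_INDIRECT: return mem[reg[R]];      /* EA = reg[R]                   */
    case DISPLACEMENT: return mem[A + reg[R]];  /* EA = A + reg[R]               */
    case STACK:        return mem[sp];          /* EA = top of stack             */
    }
    return 0;
}

int main(void) {
    mem[4] = 99;  mem[6] = 77;  mem[7] = 4;  mem[31] = 55;  reg[2] = 4;
    printf("immediate:    %d\n", operand(IMMEDIATE,    5, 0));   /* 5  */
    printf("direct:       %d\n", operand(DIRECT,       4, 0));   /* 99 */
    printf("indirect:     %d\n", operand(INDIRECT,     7, 0));   /* 99 */
    printf("register:     %d\n", operand(REGISTER,     0, 2));   /* 4  */
    printf("reg indirect: %d\n", operand(REG_INDIRECT, 0, 2));   /* 99 */
    printf("displacement: %d\n", operand(DISPLACEMENT, 2, 2));   /* 77 */
    printf("stack (top):  %d\n", operand(STACK,        0, 0));   /* 55 */
    return 0;
}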


Future Trends in Computers


  1. The number of transistors contained in a computer chip is expected to double roughly every 18 months (Moore's law), which would also increase the speed of computers.
  2. Quantum computers are under development; early prototypes have used chloroform molecules to compute at the molecular level. These computers may be much faster than present computers.
  3. Progress in Artificial Intelligence will produce intelligent computers that can learn, adapt to new environments, and help humans in many ways.
  4. Advancements in virtualization will increase the productivity of computers and also help to achieve green technology.
  5. AI languages and VRML (Virtual Reality Modeling Language) will be used effectively.
  6. Progress in networking and communication systems will provide congestion-free, high-speed Internet access for transmitting audio, video, and large files.
  7. Better security strategies will be developed to secure computer systems and resources from intruders.
  8. Computers will manage essential global systems, such as transportation and food production, better than humans will.
  9. Human and computer evolution will converge. Synthetic intelligence will greatly enhance the next generations of humans.
  10. As computers surpass humans in intelligence, a new digital species and a new culture will evolve that is parallel to ours.