The concept of stored-program computers appeared in 1945, when John von Neumann drafted the first report on the EDVAC (Electronic Discrete Variable Automatic Computer). Those ideas have since defined the essential components of a computer:
• an input device through which data and instructions can be entered
• storage, in which data can be read and written; instructions, like data, reside in the same memory
• an arithmetic unit to process data
• a control unit which fetches instructions, decodes and executes them
• output devices through which the user can access the results.
Four lines of evolution have emerged from the first computers (the definitions are very loose, and in many cases the borders between the different classes are blurring):
1. Mainframes: large computers that can support many users while delivering great computing power. Most of the innovations, in both architecture and organization, have been made in mainframes.
2. Minicomputers: have adopted many mainframe techniques, yet are designed to sell for less, satisfying the computing needs of smaller groups of users. The minicomputer class has improved at the fastest pace (since 1965, when DEC introduced the first minicomputer, the PDP-8), mainly due to the evolution of integrated circuit technology (the first IC appeared in 1958).
3. Supercomputers: designed for scientific applications, they are the most expensive computers (over one million dollars); processing is usually done in batch mode, for reasons of performance.
4. Microcomputers: appeared in the microprocessor era (the first microprocessor, the Intel 4004, was introduced in 1971). The term micro refers only to physical dimensions, not to computing performance. A typical microcomputer (either a PC or a workstation) fits nicely on a desk. Microcomputers are a direct product of technological advances: faster CPUs, semiconductor memories, etc.
For many years the evolution of computers was concerned with the problem of object-code compatibility: a new architecture had to be, at least partly, compatible with older ones. Thanks to advances in languages and compiler technology, assembly language is no longer the language in which new applications are written, although the most performance-sensitive parts continue to be written in it.
Performance Definition
As a user you are interested in reducing the response time (also called the execution time or latency). The computer manager is more interested in increasing the throughput (also called bandwidth), the number of jobs done in a certain amount of time.
Response time, execution time and throughput are usually applied to tasks and whole computational events; latency and bandwidth are mostly used when discussing memory performance.
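As a small illustration of the difference (the machine and all numbers below are assumed, purely for the sketch), consider a system that finishes several independent jobs at once:

#include <stdio.h>

/* Sketch: a hypothetical machine runs 4 independent jobs in parallel,
 * each taking 10 seconds. The response time seen by one user is still
 * 10 s, but the throughput is 4 jobs per 10 s = 0.4 jobs/s.          */
int main(void)
{
    double jobs_in_parallel = 4.0;
    double response_time_s  = 10.0;   /* time to finish any single job */

    double throughput = jobs_in_parallel / response_time_s;

    printf("response time = %.1f s per job\n", response_time_s);
    printf("throughput    = %.2f jobs/s\n", throughput);
    return 0;
}

Doubling the number of jobs served in parallel would double the throughput, while the response time of each individual job would stay the same.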
CPU Performance
What is the time the CPU of your machine spends running a program? Assuming that your CPU is driven by a constant-rate clock generator (and this is certainly the case), we have:
CPUtime = Clock_cycles_for_the_program * Tck
where Tck is the clock cycle time.
The above formula computes the time the CPU spends running a program, not the elapsed time: it does not make sense to compute the elapsed time as a function of Tck, mainly because the elapsed time also includes the I/O time, and the response time of I/O devices is not a function of Tck.
If we know the number of instructions that are executed from the start of the program until the very end, let's call this the Instruction Count (IC), then we can compute the average number of clock cycles per instruction (CPI) as follows:
CPI = Clock_cycles_for_the_program / IC
The CPUtime can then be expressed as:
CPUtime = IC * CPI * Tck
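As a rough worked example (all the numbers below are assumed, not measurements of a real machine), the formula can be evaluated directly:

#include <stdio.h>

/* Minimal sketch of CPUtime = IC * CPI * Tck; every figure here is an
 * illustrative assumption, not data from any particular machine.     */
int main(void)
{
    double ic         = 50e6;   /* instruction count: 50 million instructions */
    double cpi        = 2.0;    /* average clock cycles per instruction        */
    double clock_rate = 100e6;  /* 100 MHz clock generator                     */
    double tck        = 1.0 / clock_rate;  /* clock cycle time in seconds      */

    double cpu_time = ic * cpi * tck;
    printf("CPUtime = %.2f s\n", cpu_time);  /* 50e6 * 2 / 100e6 = 1.00 s */
    return 0;
}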
The goal of the designer is to lower the CPUtime, and these are the parameters that can be modified to achieve this:
• IC: the instruction count depends on the instruction set architecture and the compiler technology
• CPI: depends upon the machine organization and the instruction set architecture; RISC tries to reduce the CPI
• Tck: depends on the hardware technology and the machine organization.
CPI has to be measured and not simply calculated from the system's specification. This is because CPI strongly depends on the memory hierarchy organization: a program running on a system without a cache will certainly have a larger CPI than the same program running on the same machine but with a cache.
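One way to see this is a minimal sketch, assuming the common model CPI_effective = CPI_base + memory_references_per_instruction * miss_rate * miss_penalty (the model and every figure below are illustrative assumptions, not taken from the text):

#include <stdio.h>

/* Sketch: the same program, with and without a cache. All figures are
 * invented for illustration; real CPI values must be measured.        */
int main(void)
{
    double cpi_base       = 1.5;   /* CPI if every memory access took one cycle */
    double refs_per_instr = 0.3;   /* memory data references per instruction    */
    double miss_penalty   = 40.0;  /* cycles to reach main memory               */

    /* With a cache: only the misses pay the main-memory penalty. */
    double miss_rate      = 0.05;
    double cpi_with_cache = cpi_base + refs_per_instr * miss_rate * miss_penalty;

    /* Without a cache: every reference pays the penalty (miss rate = 1). */
    double cpi_no_cache   = cpi_base + refs_per_instr * 1.0 * miss_penalty;

    printf("CPI with cache    = %.2f\n", cpi_with_cache);  /* 2.10  */
    printf("CPI without cache = %.2f\n", cpi_no_cache);    /* 13.50 */
    return 0;
}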
What Drives the Work of a Computer Designer
Designing a computer is a challenging task. It involves software (at least at the level of designing the instruction set) as well as hardware at all levels: functional organization, logic design and implementation. Implementation itself deals with designing/specifying ICs, packaging, noise, power, cooling, etc.
It would be a terrible mistake to disregard one aspect or another of computer design; rather, the computer designer has to design an optimal machine across all the levels mentioned. You cannot find an optimum unless you are familiar with a wide range of technologies, from compiler and operating system design to logic design and packaging.
Architecture is the art and science of building. Vitruvius, in the 1st century BC, said that a well-built building should incorporate utilitas, firmitas and venustas, in English terms commodity, firmness and delight. This definition recognizes that architecture embraces functional, technological and aesthetic aspects.
Thus a computer architect has to specify the performance requirements of the various parts of a computer system, to define the interconnections between them, and to keep the whole harmoniously balanced. The computer architect's job is more than designing the instruction set, as it was understood for many years. The more an architect is exposed to all aspects of computer design, the more efficient she will be.
• the instruction set architecture refers to what the programmer sees as the machine's instruction set. The instruction set is the boundary between the hardware and the software; most of the decisions concerning the instruction set affect the hardware, and the converse is also true: many hardware decisions may beneficially or adversely affect the instruction set.
• the implementation of a machine refers to the logical and physical design techniques used to implement an instance of the architecture. It is possible to have different implementations of the same architecture, in the same way a house can be built from the same plans using different materials and techniques. The implementation has two aspects:
• the organization refers to the logical aspects of an implementation, in other words the high-level aspects of the design: CPU design, memory system, bus structure(s), etc.
• the hardware refers to the specifics of an implementation.