COMPUTER SCIENCE
INTRODUCTION: Computer Science, study of the theory, experimentation, and engineering that form the basis for the design and use of computers—devices that automatically process information. Computer science traces its roots to work done by English mathematician Charles Babbage, who first proposed a programmable mechanical calculator in 1837. Until the advent of electronic digital computers in the 1940s, computer science was not generally distinguished as being separate from mathematics and engineering. Since then it has sprouted numerous branches of research that are unique to the discipline.
Charles Babbage
Charles Babbage (1792-1871), British mathematician and inventor, who designed and built mechanical computing machines on principles that anticipated the modern electronic computer. Babbage was born in Teignmouth, Devonshire, and was educated at the University of Cambridge. He became a fellow of the Royal Society in 1816 and was active in the founding of the Analytical, the Royal Astronomical, and the Statistical societies.
In the 1820s Babbage began developing his Difference Engine, a mechanical device that could perform simple mathematical calculations. Babbage started to build his Difference Engine, but was unable to complete it because of a lack of funding. However, in 1991 British scientists, following Babbage's detailed drawings and specifications, constructed the Difference Engine. The machine works flawlessly, calculating up to a precision of 31 digits, proving that Babbage's design was sound. In the 1830s Babbage began developing his Analytical Engine, which was designed to carry out more complicated calculations, but this device was never built. Babbage's book On the Economy of Machinery and Manufactures (1832) initiated the field of study known today as operational research.
THE DEVELOPMENT OF COMPUTER SCIENCE
Early work in the field of computer science during the late 1940s and early 1950s focused on automating the process of making calculations for use in science and engineering. Scientists and engineers developed theoretical models of computation that enabled them to analyze how efficient different approaches were in performing various calculations. Computer science overlapped considerably during this time with the branch of mathematics known as numerical analysis, which examines the accuracy and precision of calculations. (See ENIAC; UNIVAC.)
As the use of computers expanded between the 1950s and the 1970s, the focus of computer science broadened to include simplifying the use of computers through programming languages—artificial languages used to program computers, and operating systems—computer programs that provide a useful interface between a computer and a user. During this time, computer scientists were also experimenting with new applications and computer designs, creating the first computer networks, and exploring relationships between computation and thought.
In the 1970s, computer chip manufacturers began to mass produce microprocessors—the electronic circuitry that serves as the main information processing center in a computer. This new technology revolutionized the computer industry by dramatically reducing the cost of building computers and greatly increasing their processing speed. The microprocessor made possible the advent of the personal computer, which resulted in an explosion in the use of computer applications. Between the early 1970s and 1980s, computer science rapidly expanded in an effort to develop new applications for personal computers and to drive the technological advances in the computing industry. Much of the earlier research that had been done began to reach the public through personal computers, which derived most of their early software from existing concepts and systems.
Computer scientists continue to expand the frontiers of computer and information systems by pioneering the designs of more complex, reliable, and powerful computers; enabling networks of computers to efficiently exchange vast amounts of information; and seeking ways to make computers behave intelligently. As computers become an increasingly integral part of modern society, computer scientists strive to solve new problems and invent better methods of solving current problems.
The goals of computer science range from finding ways to better educate people in the use of existing computers to highly speculative research into technologies and approaches that may not be viable for decades. Underlying all of these specific goals is the desire to better the human condition today and in the future through the improved use of information.
III | | THEORY AND EXPERIMENT |
Computer science is a combination of theory, engineering, and experimentation. In some cases, a computer scientist develops a theory, then engineers a combination of computer hardware and software based on that theory, and experimentally tests it. An example of such a theory-driven approach is the development of new software engineering tools that are then evaluated in actual use. In other cases, experimentation may result in new theory, such as the discovery that an artificial neural network exhibits behavior similar to neurons in the brain, leading to a new theory in neurophysiology.
It might seem that the predictable nature of computers makes experimentation unnecessary because the outcome of experiments should be known in advance. But when computer systems and their interactions with the natural world become sufficiently complex, unforeseen behaviors can result. Experimentation and the traditional scientific method are thus key parts of computer science.
Scientific Method
Scientific Method, term denoting the principles that guide scientific research and experimentation, and also the philosophic bases of those principles. Whereas philosophy in general is concerned with the why as well as the how of things, science occupies itself with the latter question only, but in a scrupulously rigorous manner. The era of modern science is generally considered to have begun with the Renaissance, but the rudiments of the scientific approach to knowledge can be observed throughout human history.
Definitions of scientific method use such concepts as objectivity of approach to and acceptability of the results of scientific study. Objectivity indicates the attempt to observe things as they are, without falsifying observations to accord with some preconceived world view. Acceptability is judged in terms of the degree to which observations and experimentations can be reproduced. Scientific method also involves the interplay of inductive reasoning (reasoning from specific observations and experiments to more general hypotheses and theories) and deductive reasoning (reasoning from theories to account for specific experimental results). By such reasoning processes, science attempts to develop the broad laws—such as Isaac Newton's law of gravitation—that become part of our understanding of the natural world.
Science has tremendous scope, however, and its many separate disciplines can differ greatly in terms of subject matter and the possible ways of studying that subject matter. No single path to discovery exists in science, and no one clear-cut description can be given that accounts for all the ways in which scientific truth is pursued. One of the early writers on scientific method, the English philosopher and statesman Francis Bacon, wrote in the early 17th century that a tabulation of a sufficiently large number of observations of nature would lead to theories accounting for those operations—the method of inductive reasoning. At about the same time, however, the French mathematician and philosopher René Descartes was attempting to account for observed phenomena on the basis of what he called clear and distinct ideas—the method of deductive reasoning.
A closer approach to the method commonly used by physical scientists today was that followed by Galileo in his study of falling bodies. Observing that heavy objects fall with increasing speed, he formulated the hypothesis that the speed attained is directly proportional to the distance traversed. Being unable to test this directly, he deduced from his hypothesis the conclusion that objects falling unequal distances require the same amount of elapsed time. This was a false conclusion, and hence, logically, the first hypothesis was false. Therefore Galileo framed a new hypothesis: that the speed attained is directly proportional to the time elapsed, not the distance traversed. From this he was able to infer that the distance traversed by a falling object is proportional to the square of the time elapsed, and this hypothesis he was able to verify experimentally by rolling balls down an inclined plane.
Such agreement of a conclusion with an actual observation does not itself prove the correctness of the hypothesis from which the conclusion is derived. It simply renders the premise that much more plausible. The ultimate test of the validity of a scientific hypothesis is its consistency with the totality of other aspects of the scientific framework. This inner consistency constitutes the basis for the concept of causality in science, according to which every effect is assumed to be linked with a cause.
Scientists, like other human beings, may individually be swayed by some prevailing worldview to look for certain experimental results rather than others, or to “intuit” some broad theory that they then seek to prove. The scientific community as a whole, however, judges the work of its members by the objectivity and rigor with which that work has been conducted; in this way the scientific method prevails.
IV | | MAJOR BRANCHES OF COMPUTER SCIENCE |
Computer science can be divided into four main fields: software development, computer architecture (hardware), human-computer interfacing (the design of the most efficient ways for humans to use computers), and artificial intelligence (the attempt to make computers behave intelligently). Software development is concerned with creating computer programs that perform efficiently. Computer architecture is concerned with developing optimal hardware for specific computational needs. The areas of artificial intelligence (AI) and human-computer interfacing often involve the development of both software and hardware to solve specific problems.
A | | Software Development |
In developing computer software, computer scientists and engineers study various areas and techniques of software design, such as the best types of programming languages and algorithms (see below) to use in specific programs, how to efficiently store and retrieve information, and the computational limits of certain software-computer combinations. Software designers must consider many factors when developing a program. Often, program performance in one area must be sacrificed for the sake of the general performance of the software. For instance, since computers have only a limited amount of memory, software designers must limit the number of features they include in a program so that it will not require more memory than the system it is designed for can supply.
Software engineering is an area of software development in which computer scientists and engineers study methods and tools that facilitate the efficient development of correct, reliable, and robust computer programs. Research in this branch of computer science considers all the phases of the software life cycle, which begins with a formal problem specification, and progresses to the design of a solution, its implementation as a program, testing of the program, and program maintenance. Software engineers develop software tools and collections of tools called programming environments to improve the development process. For example, tools can help to manage the many components of a large program that is being written by a team of programmers.
Algorithms and data structures are the building blocks of computer programs. An algorithm is a precise step-by-step procedure for solving a problem within a finite time and using a finite amount of memory. Common algorithms include searching a collection of data, sorting data, and numerical operations such as matrix multiplication. Data structures are patterns for organizing information, and often represent relationships between data values. Some common data structures are called lists, arrays, records, stacks, queues, and trees.
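The following short Python sketch (the function and variable names are illustrative only, not drawn from any standard library of algorithms) pairs two of the data structures named above, a stack and a queue, with one classic algorithm, binary search over a sorted list; each step is precise, and the procedure finishes in finite time using a fixed amount of extra memory.

```python
from collections import deque

def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # inspect the middle element
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1                # discard the lower half
        else:
            high = mid - 1               # discard the upper half
    return -1

# A Python list used as a stack (the last value in is the first value out).
stack = []
stack.append("first")
stack.append("second")
print(stack.pop())                       # -> second

# A deque used as a queue (the first value in is the first value out).
queue = deque(["first", "second"])
print(queue.popleft())                   # -> first

print(binary_search([2, 3, 5, 7, 11, 13], 11))   # -> 4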
Computer scientists continue to develop new algorithms and data structures to solve new problems and improve the efficiency of existing programs. One area of theoretical research is called algorithmic complexity. Computer scientists in this field seek to develop techniques for determining the inherent efficiency of algorithms with respect to one another. Another area of theoretical research called computability theory seeks to identify the inherent limits of computation.
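A rough sense of what inherent efficiency means can be conveyed by a small, hypothetical experiment: counting the comparisons that a straightforward linear scan and a binary search each need to find the same item. The Python sketch below is only illustrative, but it reflects the gap that complexity analysis predicts, roughly n comparisons versus about log₂ n comparisons for n items.

```python
def linear_search_comparisons(items, target):
    """Count comparisons made by a left-to-right scan of the list."""
    count = 0
    for value in items:
        count += 1
        if value == target:
            break
    return count

def binary_search_comparisons(sorted_items, target):
    """Count comparisons made by repeatedly halving the search range."""
    low, high, count = 0, len(sorted_items) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        count += 1
        if sorted_items[mid] == target:
            break
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return count

data = list(range(1_000_000))                      # one million sorted values
print(linear_search_comparisons(data, 999_999))    # about 1,000,000 comparisons
print(binary_search_comparisons(data, 999_999))    # about 20 comparisons
```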
Software engineers use programming languages to communicate algorithms to a computer. Natural languages such as English are ambiguous—meaning that their grammatical structure and vocabulary can be interpreted in multiple ways—so they are not suited for programming. Instead, simple and unambiguous artificial languages are used. Computer scientists study ways of making programming languages more expressive, thereby simplifying programming and reducing errors. A program written in a programming language must be translated into machine language (the actual instructions that the computer follows). Computer scientists also develop better translation algorithms that produce more efficient machine language programs.
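As a loose illustration of such translation, the sketch below uses Python's standard dis module to display the lower-level instructions produced from a short function. These are bytecodes for a virtual machine rather than true machine language for hardware, so the example should be read only as an analogy for how a high-level program is turned into simpler instructions.

```python
import dis

def area(width, height):
    """A small function written in a high-level language."""
    return width * height

# Show the lower-level instructions the Python translator produces from area.
# (Virtual-machine bytecodes, shown here as an analogy to the machine language
# a compiler would generate for real hardware.)
dis.dis(area)
```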
Databases and information retrieval are related fields of research. A database is an organized collection of information stored in a computer, such as a company’s customer account data. Computer scientists attempt to make it easier for users to access databases, prevent access by unauthorized users, and improve access speed. They are also interested in developing techniques to compress the data, so that more can be stored in the same amount of memory. Databases are sometimes distributed over multiple computers that update the data simultaneously, which can lead to inconsistency in the stored information. To address this problem, computer scientists also study ways of preventing inconsistency without reducing access speed.
Information retrieval is concerned with locating data in collections that are not clearly organized, such as a file of newspaper articles. Computer scientists develop algorithms for creating indexes of the data. Once the information is indexed, techniques developed for databases can be used to organize it. Data mining is a closely related field in which a large body of information is analyzed to identify patterns. For example, mining the sales records from a grocery store could identify shopping patterns to help guide the store in stocking its shelves more effectively. (see Information Storage and Retrieval.)
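The sketch below gives a minimal, purely illustrative picture of indexing: it builds an inverted index, in which each word maps to the documents that contain it, over a tiny made-up collection of articles. The article texts and all names are invented for the example.

```python
from collections import defaultdict

# A tiny, invented collection of "newspaper articles".
articles = {
    1: "city council approves new budget",
    2: "local team wins city championship",
    3: "budget talks continue at state capitol",
}

# Build an inverted index: each word maps to the articles containing it.
index = defaultdict(set)
for article_id, text in articles.items():
    for word in text.split():
        index[word].add(article_id)

print(sorted(index["city"]))     # -> [1, 2]
print(sorted(index["budget"]))   # -> [1, 3]
```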
Operating systems are programs that control the overall functioning of a computer. They provide the user interface, place programs into the computer’s memory and cause it to execute them, control the computer’s input and output devices, manage the computer’s resources such as its disk space, protect the computer from unauthorized use, and keep stored data secure. Computer scientists are interested in making operating systems easier to use, more secure, and more efficient by developing new user interface designs, designing new mechanisms that allow data to be shared while preventing access to sensitive data, and developing algorithms that make more effective use of the computer’s time and memory.
The study of numerical computation involves the development of algorithms for calculations, often on large sets of data or with high precision. Because many of these computations may take days or months to execute, computer scientists are interested in making the calculations as efficient as possible. They also explore ways to increase the numerical precision of computations, which can have such effects as improving the accuracy of a weather forecast. The goals of improving efficiency and precision often conflict, with greater efficiency being obtained at the cost of precision and vice versa.
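A very small Python example can make the efficiency-versus-precision trade-off concrete. The built-in sum function is fast but accumulates rounding error, whereas math.fsum does extra work to return a correctly rounded total; the toy numbers below are only meant to illustrate the general point.

```python
import math

values = [0.1] * 10                # ten copies of one tenth

naive_total = sum(values)          # fast, but accumulates rounding error
exact_total = math.fsum(values)    # slower, but correctly rounded

print(naive_total)                 # -> 0.9999999999999999
print(exact_total)                 # -> 1.0
```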
Symbolic computation involves programs that manipulate nonnumeric symbols, such as characters, words, drawings, algebraic expressions, encrypted data (data coded to prevent unauthorized access), and the parts of data structures that represent relationships between values (see Encryption). One unifying property of symbolic programs is that they often lack the regular patterns of processing found in many numerical computations. Such irregularities present computer scientists with special challenges in creating theoretical models of a program’s efficiency, in translating it into an efficient machine language program, and in specifying and testing its correct behavior.
B | | Computer Architecture |
Computer architecture is the design and analysis of new computer systems. Computer architects study ways of improving computers by increasing their speed, storage capacity, and reliability, and by reducing their cost and power consumption. Computer architects develop both software and hardware models to analyze the performance of existing and proposed computer designs, then use this analysis to guide development of new computers. They are often involved with the engineering of a new computer because the accuracy of their models depends on the design of the computer’s circuitry. Many computer architects are interested in developing computers that are specialized for particular applications such as image processing, signal processing, or the control of mechanical systems. The optimization of computer architecture to specific tasks often yields higher performance, lower cost, or both.
C | | Artificial Intelligence |
Artificial intelligence (AI) research seeks to enable computers and machines to mimic human intelligence and sensory processing ability, and models human behavior with computers to improve our understanding of intelligence. The many branches of AI research include machine learning, inference, cognition, knowledge representation, problem solving, case-based reasoning, natural language understanding, speech recognition, computer vision, and artificial neural networks.
A key technique developed in the study of artificial intelligence is to specify a problem as a set of states, some of which are solutions, and then search for solution states. For example, in chess, each move creates a new state. If a computer searched the states resulting from all possible sequences of moves, it could identify those that win the game. However, the number of states associated with many problems (such as the possible number of moves needed to win a chess game) is so vast that exhaustively searching them is impractical. The search process can be improved through the use of heuristics—rules that are specific to a given problem and can therefore help guide the search. For example, a chess heuristic might indicate that when a move results in checkmate, there is no point in examining alternate moves.
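The sketch below is a toy, hypothetical illustration of state-space search guided by a heuristic; it is not drawn from any particular AI system. Each state is a number, each move either adds 3 to the value or doubles it, and the heuristic prefers the state whose value lies closest to the goal.

```python
from heapq import heappush, heappop

def successors(state):
    """Each state is a number; a move either adds 3 or doubles the value."""
    return [state + 3, state * 2]

def best_first_search(start, goal):
    """Search the state space, guided by a simple heuristic:
    always expand the state whose value is closest to the goal."""
    frontier = [(abs(goal - start), start, [start])]
    visited = set()
    while frontier:
        _, state, path = heappop(frontier)
        if state == goal:
            return path                    # a solution state has been reached
        if state in visited or state > goal:
            continue                       # both moves only increase the value,
                                           # so states beyond the goal are pruned
        visited.add(state)
        for nxt in successors(state):
            heappush(frontier, (abs(goal - nxt), nxt, path + [nxt]))
    return None

print(best_first_search(2, 14))            # -> [2, 5, 8, 11, 14]
```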
D | | Robotics |
Another area of computer science that has found wide practical use is robotics—the design and development of computer controlled mechanical devices. Robots range in complexity from toys to automated factory assembly lines, and relieve humans from tedious, repetitive, or dangerous tasks. Robots are also employed where requirements of speed, precision, consistency, or cleanliness exceed what humans can accomplish. Roboticists—scientists involved in the field of robotics—study the many aspects of controlling robots. These aspects include modeling the robot’s physical properties, modeling its environment, planning its actions, directing its mechanisms efficiently, using sensors to provide feedback to the controlling program, and ensuring the safety of its behavior. They also study ways of simplifying the creation of control programs. One area of research seeks to provide robots with more of the dexterity and adaptability of humans, and is closely associated with AI.
E | | Human-Computer Interfacing |
Human-computer interfaces provide the means for people to use computers. An example of a human-computer interface is the keyboard, which lets humans enter commands into a computer and enter text into a specific application. The diversity of research into human-computer interfacing corresponds to the diversity of computer users and applications. However, a unifying theme is the development of better interfaces and experimental evaluation of their effectiveness. Examples include improving computer access for people with disabilities, simplifying program use, developing three-dimensional input and output devices for virtual reality, improving handwriting and speech recognition, and developing heads-up displays for aircraft instruments in which critical information such as speed, altitude, and heading is displayed on a screen in front of the pilot’s window. One area of research, called visualization, is concerned with graphically presenting large amounts of data so that people can comprehend its key properties.
V | | CONNECTION OF COMPUTER SCIENCE TO OTHER DISCIPLINES |
Because computer science grew out of mathematics and engineering, it retains many close connections to those disciplines. Theoretical computer science draws many of its approaches from mathematics and logic. Research in numerical computation overlaps with mathematics research in numerical analysis. Computer architects work closely with the electrical engineers who design the circuits of a computer.
Beyond these historical connections, there are strong ties between AI research and psychology, neurophysiology, and linguistics. Human-computer interface research also has connections with psychology. Roboticists work with both mechanical engineers and physiologists in designing new robots.
Computer science also has indirect relationships with virtually all disciplines that use computers. Applications developed in other fields often involve collaboration with computer scientists, who contribute their knowledge of algorithms, data structures, software engineering, and existing technology. In return, the computer scientists have the opportunity to observe novel applications of computers, from which they gain a deeper insight into their use. These relationships make computer science a highly interdisciplinary field of study.
Contributed By:
Charles C. Weems
Microsoft ® Encarta ® 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
A. | Beginnings |
The history of computing began with an analog machine. In 1623 German scientist Wilhelm Schickard invented a machine that used 11 complete and 6 incomplete sprocketed wheels that could add and, with the aid of logarithm tables, multiply and divide.
French philosopher, mathematician, and physicist Blaise Pascal invented a machine in 1642 that added and subtracted, automatically carrying and borrowing digits from column to column. Pascal built 50 copies of his machine, but most served as curiosities in parlors of the wealthy. Seventeenth-century German mathematician Gottfried Leibniz designed a special gearing system to enable multiplication on Pascal’s machine.
A. | Early Computers |
In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared to today’s microprocessor-driven computers. The first general purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was introduced in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC’s CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.
TYPES OF COMPUTERS
Digital and Analog
Computers can be either digital or analog. Virtually all modern computers are digital. Digital refers to the processes in computers that manipulate binary numbers (0s or 1s), which represent switches that are turned on or off by electrical current. A bit can have the value 0 or the value 1, but nothing in between 0 and 1. Analog refers to circuits or numerical values that have a continuous range. Both 0 and 1 can be represented by analog computers, but so can 0.5, 1.5, or a number like π (approximately 3.14).
A desk lamp can serve as an example of the difference between analog and digital. If the lamp has a simple on/off switch, then the lamp system is digital, because the lamp either produces light at a given moment or it does not. If a dimmer replaces the on/off switch, then the lamp is analog, because the amount of light can vary continuously from on to off and all intensities in between.
Analog computer systems were the first type to be produced. A popular analog computer used in the 20th century was the slide rule. To perform calculations with a slide rule, the user slides a narrow, gauged wooden strip inside a rulerlike holder. Because the sliding is continuous and there is no mechanism to stop at any exact values, the slide rule is analog. New interest has been shown recently in analog computers, particularly in areas such as neural networks. These are specialized computer designs that attempt to mimic neurons of the brain. They can be built to respond to continuous electrical signals. Most modern computers, however, are digital machines whose components have a finite number of states—for example, the 0 or 1, or on or off bits. These bits can be combined to denote information such as numbers, letters, graphics, sound, and program instructions.
Computers exist in a wide range of sizes and power. The smallest are embedded within the circuitry of appliances, such as televisions and wristwatches. These computers are typically preprogrammed for a specific task, such as tuning to a particular television frequency, delivering doses of medicine, or keeping accurate time. They generally are “hard-wired”—that is, their programs are represented as circuits that cannot be reprogrammed.
Programmable computers vary enormously in their computational power, speed, memory, and physical size. Some small computers can be held in one hand and are called personal digital assistants (PDAs). They are used as notepads, scheduling systems, and address books; if equipped with a cellular phone, they can connect to worldwide computer networks to exchange information regardless of location. Hand-held game devices are also examples of small computers.
Portable laptop and notebook computers and desktop PCs are typically used in businesses and at home to communicate on computer networks, for word processing, to track finances, and for entertainment. They have large amounts of internal memory to store hundreds of programs and documents. They are equipped with a keyboard; a mouse, trackball, or other pointing device; and a video display monitor or liquid crystal display (LCD) to display information. Laptop and notebook computers usually have hardware and software similar to PCs, but they are more compact and have flat, lightweight LCDs instead of television-like video display monitors. Most sources consider the terms “laptop” and “notebook” synonymous.
Workstations are similar to personal computers but have greater memory and more extensive mathematical abilities, and they are connected to other workstations or personal computers to exchange data. They are typically found in scientific, industrial, and business environments—especially financial ones, such as stock exchanges—that require complex and fast computations.
Mainframe computers have more memory, speed, and capabilities than workstations and are usually shared by multiple users through a series of interconnected computers. They control businesses and industrial facilities and are used for scientific research. The most powerful mainframe computers, called supercomputers, process complex and time-consuming calculations, such as those used to create weather predictions. Large businesses, scientific institutions, and the military use them. Some supercomputers have many sets of CPUs. These computers break a task into small pieces, and each CPU processes a portion of the task to increase overall speed and efficiency. Such computers are called parallel processors. As computers have increased in sophistication, the boundaries between the various types have become less rigid. The performance of various tasks and types of computing have also moved from one type of computer to another. For example, networked PCs can work together on a given task in a version of parallel processing known as distributed computing.
IV. | CURRENT DEVELOPMENTS |
The competitive nature of the computer industry and the use of faster, more cost-effective computing continue the drive toward faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor design, ultraviolet (short wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that will translate to improvements in CPU speed.
Many other avenues of research are being pursued in an attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, where certain types of problems may be solved using recombinant DNA techniques; and neural networks, which are computer systems with the ability to learn.
THE COMPUTER MEMORY
HISTORY |
Early electronic computers in the late 1940s and early 1950s used cathode ray tubes (CRT), similar to a computer display screen, to store data. The coating on a CRT remains lit for a short time after an electron beam strikes it. Thus, a pattern of dots could be written on the CRT, representing 1s and 0s, and then be read back for a short time before fading. Like DRAM, CRT storage had to be periodically refreshed to retain its contents. A typical CRT held 128 bytes, and the entire memory of such a computer was usually 4 kilobytes.
International Business Machines Corporation (IBM) developed magnetic core memory in the early 1950s. Magnetic core (often just called “core”) memory consisted of tiny rings of magnetic material woven into meshes of thin wires. When the computer sent a current through a pair of wires, the ring at their intersection became magnetized either clockwise or counterclockwise (corresponding to a 0 or a 1), depending on the direction of the current. Computer manufacturers first used core memory in production computers in the 1960s, at about the same time that they began to replace vacuum tubes with transistors. Magnetic core memory was used through most of the 1960s and into the 1970s.
The next step in the development of computer memory came with the introduction of integrated circuits, which enabled multiple transistors to be placed on one chip. Computer scientists developed the first such memory when they constructed an experimental supercomputer called Illiac-IV in the late 1960s. Integrated circuit memory quickly displaced core and has been the dominant technology for internal memory ever since.
DEVELOPMENTS AND LIMITATIONS
Since the inception of computer memory, the capacity of both internal and external memory devices has grown steadily at a rate that leads to a quadrupling in size every three years. Computer industry analysts expect this rapid rate of growth to continue unimpeded. Computer engineers consider it possible to make multigigabyte memory chips and disks capable of storing a terabyte (one trillion bytes) of memory.
Some computer engineers are concerned that the silicon-based memory chips are approaching a limit in the amount of data they can hold. However, it is expected that transistors can be made at least four times smaller before inherent limits of physics make further reductions difficult. Engineers also expect that the external dimensions of memory chips will increase by a factor of four, meaning that larger amounts of memory will fit on a single chip. Current memory chips use only a single layer of circuitry, but researchers are working on ways to stack multiple layers onto one chip. Once all of these approaches are exhausted, RAM memory may reach a limit. Researchers, however, are also exploring more exotic technologies with the potential to provide even more capacity, including the use of biotechnology to produce memories out of living cells. The memory in a computer is composed of many memory chips. While current memory chips contain megabytes of RAM, future chips will likely have gigabytes of RAM on a single chip. To add to RAM, computer users can purchase memory cards that each contain many memory chips. In addition, future computers will likely have advanced data transfer capabilities and additional caches that enable the CPU to access memory faster.

RAM, in computer science, acronym for random access memory. Semiconductor-based memory that can be read and written by the microprocessor or other hardware devices. The storage locations can be accessed in any order. Note that the various types of ROM memory are capable of random access. The term RAM, however, is generally understood to refer to volatile memory, which can be written as well as read. See also Computer; EPROM; PROM.

Number Systems, in mathematics, various notational systems that have been or are being used to represent the abstract quantities called numbers. A number system is defined by the base it uses, the base being the number of different symbols required by the system to represent any of the infinite series of numbers. Thus, the decimal system in universal use today (except for computer application) requires ten different symbols, or digits, to represent numbers and is therefore a base-10 system.
Number Systems
Throughout history, many different number systems have been used; in fact, any whole number greater than 1 can be used as a base. Some cultures have used systems based on the numbers 3, 4, or 5. The Babylonians used the sexagesimal system, based on the number 60, and the Romans used (for some purposes) the duodecimal system, based on the number 12. The Mayas used the vigesimal system, based on the number 20. The binary system, based on the number 2, was used by some tribes and, together with the system based on 8, is used today in computer systems. For historical background, see Numerals.
PLACE VALUES
Except for computer work, the universally adopted system of mathematical notation today is the decimal system, which, as stated, is a base-10 system. As in other number systems, the position of a symbol in a base-10 number denotes the value of that symbol in terms of exponential values of the base. That is, in the decimal system, the quantity represented by any of the ten symbols used—0, 1, 2, 3, 4, 5, 6, 7, 8, and 9—depends on its position in the number. Thus, the number 3,098,323 is an abbreviation for (3 × 10⁶) + (0 × 10⁵) + (9 × 10⁴) + (8 × 10³) + (3 × 10²) + (2 × 10¹) + (3 × 10⁰, or 3 × 1). The first “3” (reading from right to left) represents 3 units; the second “3,” 300 units; and the third “3,” 3 million units. In this system the zero plays a double role; it represents naught, and it also serves to indicate the multiples of the base 10: 100, 1000, 10,000, and so on. It is also used to indicate fractions of integers: 1/10 is written as 0.1, 1/100 as 0.01, 1/1000 as 0.001, and so on.
Two digits—0, 1—suffice to represent a number in the binary system; 6 digits—0, 1, 2, 3, 4, 5—are needed to represent a number in the base-6 system; and 12 digits—0, 1, 2, 3, 4, 5, 6, 7, 8, 9, t (ten), e (eleven)—are needed to represent a number in the duodecimal system. The number 30155 in the base-6 system is the number (3 × 6⁴) + (0 × 6³) + (1 × 6²) + (5 × 6¹) + (5 × 6⁰) = 3959 in the decimal system; the number 2et in the duodecimal system is the number (2 × 12²) + (11 × 12¹) + (10 × 12⁰) = 430 in the decimal system.
To write a given base-10 number n as a base-b number, divide (in the decimal system) n by b, divide the quotient by b, the new quotient by b, and so on until the quotient 0 is obtained. The successive remainders are the digits in the base-b expression for n. For example, to express 3959 (base 10) in the base 6, one writes
3959 ÷ 6 = 659, remainder 5
659 ÷ 6 = 109, remainder 5
109 ÷ 6 = 18, remainder 1
18 ÷ 6 = 3, remainder 0
3 ÷ 6 = 0, remainder 3
from which, reading the remainders from the last to the first, 3959₁₀ = 30155₆. (The base is frequently written in this way as a subscript of the number.) The larger the base, the more symbols are required, but fewer digits are needed to express a given number. The number 12 is convenient as a base because it is exactly divisible by 2, 3, 4, and 6; for this reason, some mathematicians have advocated adoption of base 12 in place of the base 10.
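The same repeated-division procedure can be written as a short program. The Python sketch below is illustrative only; the digit symbols t and e follow the duodecimal notation used above.

```python
def to_base(n, base):
    """Convert a non-negative decimal integer n to a digit string in the
    given base, using the repeated-division method described above."""
    if n == 0:
        return "0"
    digits = "0123456789te"          # 't' and 'e' stand for ten and eleven
    result = ""
    while n > 0:
        n, remainder = divmod(n, base)
        result = digits[remainder] + result   # remainders, read last to first
    return result

print(to_base(3959, 6))    # -> 30155
print(to_base(430, 12))    # -> 2et
```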
BINARY SYSTEM
The binary system plays an important role in computer technology. The first 20 numbers in the binary notation are 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111, 10000, 10001, 10010, 10011, 10100. The zero here also has the role of place marker, as in the decimal system. Any decimal number can be expressed in the binary system by the sum of different powers of two. For example, starting from the right, 10101101 represents (1 × 2⁰) + (0 × 2¹) + (1 × 2²) + (1 × 2³) + (0 × 2⁴) + (1 × 2⁵) + (0 × 2⁶) + (1 × 2⁷) = 173. This example can be used for the conversion of binary numbers into decimal numbers. For the conversion of decimal numbers to binary numbers, the same principle can be used, but the other way around. Thus, to convert, the highest power of two that does not exceed the given number is sought first, and a 1 is placed in the corresponding position in the binary number. For example, the highest power of two in the decimal number 519 is 2⁹ = 512. Thus, a 1 can be inserted as the 10th digit, counted from the right: 1000000000. In the remainder, 519 - 512 = 7, the highest power of 2 is 2² = 4, so the third zero from the right can be replaced by a 1: 1000000100. The next remainder, 3, consists of the sum of two powers of 2: 2¹ + 2⁰, so the first and second zeros from the right are replaced by 1: 519₁₀ = 1000000111₂.
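The highest-power-of-two method just described can also be expressed as a short program. The Python sketch below is purely illustrative: it converts 519 to binary by the placement method and converts 10101101 back to decimal by summing the powers of two marked with a 1.

```python
def to_binary(n):
    """Convert a positive decimal integer to binary by repeatedly placing a 1
    at the highest power of two that does not exceed the remaining value."""
    bits = ["0"] * n.bit_length()         # enough positions for the result
    while n > 0:
        power = n.bit_length() - 1        # highest power of two not exceeding n
        bits[len(bits) - 1 - power] = "1" # set the corresponding binary digit
        n -= 2 ** power                   # continue with the remainder
    return "".join(bits)

def from_binary(bits):
    """Convert back by summing the powers of two marked with a 1."""
    return sum(2 ** i for i, bit in enumerate(reversed(bits)) if bit == "1")

print(to_binary(519))              # -> 1000000111
print(from_binary("10101101"))     # -> 173
```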
Arithmetic operations in the binary system are extremely simple. The basic rules are: 1 + 1 = 10, and 1 × 1 = 1. Zero plays its usual role: 1 × 0 = 0, and 1 + 0 = 1. Addition, subtraction, and multiplication are done in a fashion similar to that of the decimal system: for example, adding 101 (5) and 11 (3) column by column from the right gives 1 + 1 = 10, so a 0 is written and a 1 is carried to the next column, and continuing in this way yields 1000 (8).
Because only two digits (or bits) are involved, the binary system is used in computers, since any binary number can be represented by, for example, the positions of a series of on-off switches. The “on” position corresponds to a 1, and the “off” position to a 0. Instead of switches, magnetized dots on a magnetic tape or disk also can be used to represent binary numbers: a magnetized dot stands for the digit 1, and the absence of a magnetized dot is the digit 0. Flip-flops—electronic devices that can only carry two distinct voltages at their outputs and that can be switched from one state to the other state by an impulse—can also be used to represent binary numbers; the two voltages correspond to the two digits. Logic circuits in computers (see Computer; Electronics) carry out the different arithmetic operations of binary numbers; the conversion of decimal numbers to binary numbers for processing, and of binary numbers to decimal numbers for the readout, is done electronically.
ROM, acronym for read-only memory. In computer science, semiconductor-based memory that contains instructions or data that can be read but not modified. To create a ROM chip, the designer supplies a semiconductor manufacturer with the instructions or data to be stored; the manufacturer then produces one or more chips containing those instructions or data. Because creating ROM chips involves a manufacturing process, it is economically viable only if the ROM chips are produced in large quantities; experimental designs or small volumes are best handled using PROM or EPROM. In general usage, the term ROM often means any read-only device, including PROM and EPROM. See also Computer
Computer Memory
I | | INTRODUCTION |
Computer Memory, a mechanism that stores data for use by a computer. In a computer all data consist of numbers. A computer stores a number into a specific location in memory and later fetches the value. Most memories represent data with the binary number system. In the binary number system, numbers are represented by sequences of the two binary digits 0 and 1, which are called bits (see Number Systems). In a computer, the two possible values of a bit correspond to the on and off states of the computer's electronic circuitry.
In memory, bits are grouped together so they can represent larger values. A group of eight bits is called a byte and can represent decimal numbers ranging from 0 to 255. The particular sequence of bits in the byte encodes a unit of information, such as a keyboard character. One byte typically represents a single character such as a number, letter, or symbol. Most computers operate by manipulating groups of 2, 4, or 8 bytes called words.
Memory capacity is usually quantified in terms of kilobytes, megabytes, and gigabytes. Although the prefixes kilo-, mega-, and giga-, are taken from the metric system, they have a slightly different meaning when applied to computer memories. In the metric system, kilo- means 1 thousand; mega-, 1 million; and giga-, 1 billion. When applied to computer memory, however, the prefixes are measured as powers of two, with kilo- meaning 2 raised to the 10th power, or 1,024; mega- meaning 2 raised to the 20th power, or 1,048,576; and giga- meaning 2 raised to the 30th power, or 1,073,741,824. Thus, a kilobyte is 1,024 bytes and a megabyte is 1,048,576 bytes. It is easier to remember that a kilobyte is approximately 1,000 bytes, a megabyte is approximately 1 million bytes, and a gigabyte is approximately 1 billion bytes.
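These figures can be checked directly; the short Python lines below simply evaluate the powers of two mentioned in the text.

```python
# One byte of eight bits can hold 2**8 = 256 different values (0 through 255).
print(2 ** 8 - 1)      # -> 255

# The memory prefixes are powers of two.
print(2 ** 10)         # kilobyte -> 1024
print(2 ** 20)         # megabyte -> 1048576
print(2 ** 30)         # gigabyte -> 1073741824
```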
II | | HOW MEMORY WORKS |
Computer memory may be divided into two broad categories known as internal memory and external memory. Internal memory operates at the highest speed and can be accessed directly by the central processing unit (CPU)—the main electronic circuitry within a computer that processes information. Internal memory is contained on computer chips and uses electronic circuits to store information (see Microprocessor). External memory consists of storage on peripheral devices that are slower than internal memories but offer lower cost and the ability to hold data after the computer’s power has been turned off. External memory uses inexpensive mass-storage devices such as magnetic hard drives. See also Information Storage and Retrieval.
Internal memory is also known as random access memory (RAM) or read-only memory (ROM). Information stored in RAM can be accessed in any order, and may be erased or written over. Information stored in ROM may also be random-access, in that it may be accessed in any order, but the information recorded on ROM is usually permanent and cannot be erased or written over.
A | | Internal RAM |
Random access memory is also called main memory because it is the primary memory that the CPU uses when processing information. The electronic circuits used to construct this main internal RAM can be classified as dynamic RAM (DRAM), synchronized dynamic RAM (SDRAM), or static RAM (SRAM). DRAM, SDRAM, and SRAM all involve different ways of using transistors and capacitors to store data. In DRAM or SDRAM, the circuit for each bit consists of a transistor, which acts as a switch, and a capacitor, a device that can store a charge. To store the binary value 1 in a bit, DRAM places an electric charge on the capacitor. To store the binary value 0, DRAM removes all electric charge from the capacitor. The transistor is used to switch the charge onto the capacitor. When it is turned on, the transistor acts like a closed switch that allows electric current to flow into the capacitor and build up a charge. The transistor is then turned off, meaning that it acts like an open switch, leaving the charge on the capacitor. To store a 0, the charge is drained from the capacitor while the transistor is on, and then the transistor is turned off, leaving the capacitor uncharged. To read a value in a DRAM bit location, a detector circuit determines whether a charge is present or absent on the relevant capacitor.
DRAM is called dynamic because it is continually refreshed. The memory chips themselves cannot hold values over long periods of time. Because capacitors are imperfect, the charge slowly leaks out of them, which results in loss of the stored data. Thus, a DRAM memory system contains additional circuitry that periodically reads and rewrites each data value. This replaces the charge on the capacitors, a process known as refreshing memory. The major difference between SDRAM and DRAM arises from the way in which refresh circuitry is created. DRAM contains separate, independent circuitry to refresh memory. The refresh circuitry in SDRAM is synchronized to use the same hardware clock as the CPU. The hardware clock sends a constant stream of pulses through the CPU’s circuitry. Synchronizing the refresh circuitry with the hardware clock results in less duplication of electronics and better access coordination between the CPU and the refresh circuits.
In SRAM, the circuit for a bit consists of multiple transistors that hold the stored value without the need for refresh. The chief advantage of SRAM lies in its speed. A computer can access data in SRAM more quickly than it can access data in DRAM or SDRAM. However, the SRAM circuitry draws more power and generates more heat than DRAM or SDRAM. The circuitry for a SRAM bit is also larger, which means that a SRAM memory chip holds fewer bits than a DRAM chip of the same size. Therefore, SRAM is used when access speed is more important than large memory capacity or low power consumption.
The time it takes the CPU to transfer data to or from memory is particularly important because it determines the overall performance of the computer. The time required to read or write one bit is known as the memory access time. Current DRAM and SDRAM access times are between 30 and 80 nanoseconds (billionths of a second). SRAM access times are typically four times faster than DRAM.
The internal RAM on a computer is divided into locations, each of which has a unique numerical address associated with it. In some computers a memory address refers directly to a single byte in memory, while in others, an address specifies a group of four bytes called a word. Computers also exist in which a word consists of two or eight bytes, or in which a byte consists of six or ten bits.
When a computer performs an arithmetic operation, such as addition or multiplication, the numbers used in the operation can be found in memory. The instruction code that tells the computer which operation to perform also specifies which memory address or addresses to access. An address is sent from the CPU to the main memory (RAM) over a set of wires called an address bus. Control circuits in the memory use the address to select the bits at the specified location in RAM and send a copy of the data back to the CPU over another set of wires called a data bus. Inside the CPU, the data passes through circuits called the data path to the circuits that perform the arithmetic operation. The exact details depend on the model of the CPU. For example, some CPUs use an intermediate step in which the data is first loaded into a high-speed memory device within the CPU called a register.
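The following Python sketch is a deliberately simplified, hypothetical model of this process: memory is represented as a list of numbered cells, and an addition fetches its two operands by address before combining them in a variable standing in for a register. Real address and data buses are hardware, not software, so the model only illustrates the idea of addressed storage.

```python
class ToyRAM:
    """A greatly simplified software model of byte-addressed memory:
    each address selects one stored byte (a value from 0 to 255)."""

    def __init__(self, size_in_bytes):
        self.cells = [0] * size_in_bytes        # every cell starts as 0

    def write(self, address, value):
        self.cells[address] = value & 0xFF      # keep the value within one byte

    def read(self, address):
        return self.cells[address]              # a copy of the stored value

# The "CPU" places operands in memory, then fetches them by address to add them.
ram = ToyRAM(1024)
ram.write(16, 40)
ram.write(17, 2)
register = ram.read(16) + ram.read(17)          # data fetched, then added
print(register)                                 # -> 42
```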
B | | Internal ROM |
Read-only memory is the other type of internal memory. ROM memory is used to store items that the computer needs to execute when it is first turned on. For example, the ROM memory on a PC contains a basic set of instructions, called the basic input-output system (BIOS). The PC uses BIOS to start up the operating system. BIOS is stored on computer chips in a way that causes the information to remain even when power is turned off.
Information in ROM is usually permanent and cannot be erased or written over easily. A ROM is permanent if the information cannot be changed—once the ROM has been created, information can be retrieved but not changed. Newer technologies allow ROMs to be semi-permanent—that is, the information can be changed, but it takes several seconds to make the change. For example, a FLASH memory acts like a ROM because values remain stored in memory, but the values can be changed.
C | | External Memory |
External memory can generally be classified as either magnetic or optical, or a combination called magneto-optical. A magnetic storage device, such as a computer's hard drive, uses a surface coated with material that can be magnetized in two possible ways. The surface rotates under a small electromagnet that magnetizes each spot on the surface to record a 0 or 1. To retrieve data, the surface passes under a sensor that determines whether the magnetism was set for a 0 or 1. Optical storage devices such as a compact disc (CD) player use lasers to store and retrieve information from a plastic disk. Magneto-optical memory devices use a combination of optical storage and retrieval technology coupled with a magnetic medium.
C1 | | Magnetic Media |
External magnetic media include magnetic tape, hard disks, and floppy disks. Magnetic tape is a form of external computer memory used primarily for backup storage. Like the surface of a magnetic disk, the surface of tape is coated with a material that can be magnetized. As the tape passes over an electromagnet, individual bits are magnetically encoded. Computer systems using magnetic tape storage devices employ machinery similar to that used with analog tape: open-reel tapes, cassette tapes, and helical-scan tapes (similar to video tape).
Another form of magnetic memory uses a spinning disk coated with magnetic material. As the disk spins, a sensitive electromagnetic sensor, called a read-write head, scans across the surface of the disk, reading and writing magnetic spots in concentric circles called tracks.
Magnetic disks are classified as either hard or floppy, depending on the flexibility of the material from which they are made. A floppy disk is made of flexible plastic with small pieces of a magnetic material embedded in its surface. The read-write head touches the surface of the disk as it scans the floppy. A hard disk is made of a rigid metal, with the read-write head flying just above its surface on a cushion of air to prevent wear.
C2 | | Optical Media |
Optical external memory uses a laser to scan a spinning reflective disk in which the presence or absence of nonreflective pits indicates 1s or 0s. This is the same technology employed in the audio CD. Because the disc's contents are permanently stored on it when it is manufactured, this type of disc is known as compact disc-read only memory (CD-ROM). A variation on the CD, called compact disc-recordable (CD-R), uses a dye that turns dark when a stronger laser beam strikes it, and can thus have information written permanently on it by a computer.
C3 | | Magneto-Optical Media |
Magneto-optical (MO) devices write data to a disk with the help of a laser beam and a magnetic write-head. To write data to the disk, the laser focuses on a spot on the surface of the disk heating it up slightly. This allows the magnetic write-head to change the physical orientation of small grains of magnetic material (actually tiny crystals) on the surface of the disk. These tiny crystals reflect light differently depending on their orientation. By aligning the crystals in one direction a 0 can be stored, while aligning the crystals in the opposite direction stores a 1. Another, separate, low-power laser is used to read data from the disk in a way similar to a standard CD-ROM. The advantage of MO disks over CD-ROMs is that they can be read and written to. They are, however, more expensive than CD-ROMs and are used mostly in industrial applications. MO devices are not popular consumer products.
D | | Cache Memory |
CPU speeds continue to increase much more rapidly than memory access times decrease. The result is a growing gap in performance between the CPU and its main RAM memory. To compensate for the growing difference in speeds, engineers add layers of cache memory between the CPU and the main memory. A cache consists of a small, high-speed memory system that holds recently used values. When the CPU makes a request to fetch or store a memory value, the CPU sends the request to the cache. If the item is already present in the cache, the cache can honor the request quickly because the cache operates at higher speed than main memory. For example, if the CPU needs to add two numbers, retrieving the values from the cache can take less than one-tenth as long as retrieving the values from main memory. However, because the cache is smaller than main memory, not all values can fit in the cache at one time. Therefore, if the requested item is not in the cache, the cache must fetch the item from main memory.
Cache cannot replace conventional RAM because cache is much more expensive and consumes more power. However, research has shown that even a small cache that can store only 1 percent of the data stored in main memory still provides a significant speedup for memory access. Therefore, most computers include a small, external memory cache attached to their RAM. More important, multiple caches can be arranged in a hierarchy to lower memory access times even further. In addition, most CPUs now have a cache on the CPU chip itself. The on-chip internal cache is smaller than the external cache, which is smaller than RAM. The advantage of the on-chip cache is that once a data item has been fetched from the external cache, the CPU can use the item without having to wait for an external cache access.
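The speedup a cache provides can be expressed as a weighted average of cache and main-memory access times. The Python sketch below uses assumed example values for the two access times and the hit rate; it illustrates the principle rather than measuring any particular machine.

cache_time = 5     # nanoseconds for a cache hit (assumed example value)
main_time = 60     # nanoseconds for a main-memory access (assumed example value)
hit_rate = 0.95    # fraction of requests already present in the cache (assumed)

# On a hit only the cache is consulted; on a miss the cache is checked first
# and the item is then fetched from main memory.
average_time = hit_rate * cache_time + (1 - hit_rate) * (cache_time + main_time)
print(round(average_time, 2), "nanoseconds on average")   # about 8.0 ns

Even with only a modest hit rate, the average access time stays far closer to the cache's speed than to main memory's, which is why a small cache yields a large speedup.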
III | | DEVELOPMENTS AND LIMITATIONS |
Since the inception of computer memory, the capacity of both internal and external memory devices has grown steadily, roughly quadrupling every three years. Computer industry analysts expect this rapid rate of growth to continue unimpeded. Computer engineers consider it possible to make multigigabyte memory chips and disks capable of storing a terabyte (one trillion bytes) of data.
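As a rough illustration of this growth rate, the following Python sketch projects a quadrupling every three years. The 64-megabyte starting capacity is an assumed example value, not a figure taken from the text.

capacity_megabytes = 64   # assumed starting capacity
for years_from_now in range(0, 13, 3):
    print(f"after {years_from_now:2d} years: {capacity_megabytes:,} MB")
    capacity_megabytes *= 4   # quadruple every three years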
Some computer engineers are concerned that the silicon-based memory chips are approaching a limit in the amount of data they can hold. However, it is expected that transistors can be made at least four times smaller before inherent limits of physics make further reductions difficult. Engineers also expect that the external dimensions of memory chips will increase by a factor of four, meaning that larger amounts of memory will fit on a single chip. Current memory chips use only a single layer of circuitry, but researchers are working on ways to stack multiple layers onto one chip. Once all of these approaches are exhausted, RAM memory may reach a limit. Researchers, however, are also exploring more exotic technologies with the potential to provide even more capacity, including the use of biotechnology to produce memories out of living cells. The memory in a computer is composed of many memory chips. While current memory chips contain megabytes of RAM, future chips will likely have gigabytes of RAM on a single chip. To add to RAM, computer users can purchase memory cards that each contain many memory chips. In addition, future computers will likely have advanced data transfer capabilities and additional caches that enable the CPU to access memory faster.
IV | | HISTORY |
Early electronic computers in the late 1940s and early 1950s used cathode ray tubes (CRT), similar to a computer display screen, to store data. The coating on a CRT remains lit for a short time after an electron beam strikes it. Thus, a pattern of dots could be written on the CRT, representing 1s and 0s, and then be read back for a short time before fading. Like DRAM, CRT storage had to be periodically refreshed to retain its contents. A typical CRT held 128 bytes, and the entire memory of such a computer was usually 4 kilobytes.
International Business Machines Corporation (IBM) developed magnetic core memory in the early 1950s. Magnetic core (often just called “core”) memory consisted of tiny rings of magnetic material woven into meshes of thin wires. When the computer sent a current through a pair of wires, the ring at their intersection became magnetized either clockwise or counterclockwise (corresponding to a 0 or a 1), depending on the direction of the current. Computer manufacturers first used core memory in production computers in the 1960s, at about the same time that they began to replace vacuum tubes with transistors. Magnetic core memory was used through most of the 1960s and into the 1970s.
The next step in the development of computer memory came with the introduction of integrated circuits, which enabled multiple transistors to be placed on one chip. Computer scientists developed the first such memory when they constructed an experimental supercomputer called Illiac-IV in the late 1960s. Integrated circuit memory quickly displaced core and has been the dominant technology for internal memory ever since.
Pointing Device, in computer science, an input device used to control an on-screen cursor for such actions as “pressing” on-screen buttons in dialog boxes, choosing menu items, and selecting ranges of cells in spreadsheets or groups of words in a document. A pointing device is also often used to create drawings or graphical shapes. The most common pointing device is the mouse, which was popularized by its central role in the design of the Apple Macintosh. Other pointing devices include the graphics tablet, the stylus, the light pen, the joystick, the puck, and the trackball. See also Graphics Tablet; Input/Output Device; Joystick; Light Pen; Mouse; Puck; Stylus; Trackball.
Cursor, in computer science, a special on-screen indicator, such as a blinking underline or rectangle, that marks the place at which keystrokes will appear when typed. The word cursor is not always used in reference to such a screen marker. On the Apple Macintosh and in Microsoft Windows, for example, a blinking vertical bar called the insertion point calls attention to the place in a document where text or graphics will be inserted. With the input devices known as digitizing tablets, the stylus (pointer or “pen”) is sometimes called a cursor. In applications and operating systems that use a mouse, the arrow or other on-screen icon that moves with movements of the mouse is often called a cursor or pointer.
Icon (computer science), in graphical environments, a small graphic image displayed on the screen to represent an object that can be manipulated by the user. Icons are visual mnemonics; for example, a trash can represents a command for deleting unwanted text or files. Icons allow the user to control certain computer actions without having to remember commands or type them at the keyboard. Icons are a significant factor in the “user-friendliness” of graphical user interfaces.
Joystick, in computer science, a popular pointing device, used mostly for playing computer games but used for other tasks as well. A joystick usually has a square or rectangular plastic base to which is attached a vertical stem. Control buttons are located on the base and sometimes on top of the stem. The stem can be moved omnidirectionally to control the movement of an object on the screen. The buttons activate various software features, generally producing on-screen events. A joystick is usually a relative pointing device, moving an object on the screen when the stem is moved from the center and stopping the movement when the stem is released. In industrial control applications, the joystick can also be an absolute pointing device, with each position of the stem mapped to a specific on-screen location.
Central Processing Unit (CPU), in computer science, microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. On a higher level, the CPU is actually a number of interconnected processing units that are each responsible for one aspect of the CPU’s function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU’s processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.
A | | CPU Function |
A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.
As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder that interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers where it may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing it in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.
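The cycle described above, in which the program counter, instruction decoder, registers, and ALU cooperate, can be illustrated with a toy simulation. In the Python sketch below, the three-part instruction format and the values in memory are invented purely for illustration; real CPUs encode instructions as binary numbers.

# Toy fetch-decode-execute loop. RAM holds both instructions and data.
ram = {0: ("LOAD", 100, "R1"),    # load the value at address 100 into register R1
       1: ("LOAD", 101, "R2"),
       2: ("ADD", None, "R3"),    # R3 = R1 + R2 (the ALU step)
       3: ("HALT", None, None),
       100: 7, 101: 35}

registers = {"R1": 0, "R2": 0, "R3": 0}
program_counter = 0

while True:
    instruction = ram[program_counter]        # fetch the next instruction
    operation, address, target = instruction  # decode it
    if operation == "LOAD":
        registers[target] = ram[address]      # data travels from RAM to a register
    elif operation == "ADD":
        registers[target] = registers["R1"] + registers["R2"]
    elif operation == "HALT":
        break
    program_counter += 1                      # the program counter advances

print(registers["R3"])   # prints 42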
Computer Program, set of instructions that directs a computer to perform some processing function or combination of functions. For the instructions to be carried out, a computer must execute a program, that is, the computer reads the program, and then follows the steps encoded in the program in a precise order until completion. A program can be executed many different times, with each execution yielding a potentially different result depending upon the options and data that the user gives the computer.
Programs fall into two major classes: application programs and operating systems. An application program is one that carries out some function directly for a user, such as word processing or game-playing. An operating system is a program that manages the computer and the various resources and devices connected to it, such as RAM (random access memory), hard drives, monitors, keyboards, printers, and modems, so that they may be used by other programs. Examples of operating systems are DOS, Windows 95, OS/2, and UNIX.
Application, in computer science, a computer program designed to help people perform a certain type of work. An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general-purpose chores), and a language (with which computer programs are created). Depending on the work for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements. Some application packages offer considerable computing power by focusing on a single task, such as word processing; others, called integrated software, offer somewhat less power but include several applications, such as a word processor, a spreadsheet, and a database program. See also Computer; Operating System; Programming Language; Spreadsheet Program; Utility.
The Operating System
When a computer is turned on it searches for instructions in its memory. These instructions tell the computer how to start up. Usually, one of the first sets of these instructions is a special program called the operating system, which is the software that makes the computer work. It prompts the user (or other machines) for input and commands, reports the results of these commands and other operations, stores and manages data, and controls the sequence of the software and hardware actions. When the user requests that a program run, the operating system loads the program in the computer’s memory and runs the program. Popular operating systems, such as Microsoft Windows and the Macintosh system (Mac OS), have graphical user interfaces (GUIs), which use tiny pictures, or icons, to represent various files and commands. To access these files or commands, the user clicks the mouse on the icon or presses a combination of keys on the keyboard. Some operating systems allow the user to carry out these tasks via voice, touch, or other input methods.
Microprocessor
I | | INTRODUCTION |

Handheld Computer
The handheld computing device attests to the remarkable miniaturization of computer hardware. The early computers of the 1940s were so large that they filled entire rooms. Technological innovations, such as the integrated circuit in 1959 and the microprocessor in 1971, shrank computers’ central processing units to the size of tiny silicon chips. Handheld computers are sometimes called personal digital assistants (PDAs).
James Leynse/Corbis
Microprocessor, electronic circuit that functions as the central processing unit (CPU) of a computer, providing computational control. Microprocessors are also used in other advanced electronic systems, such as computer printers, automobiles, and jet airliners.
The microprocessor is one type of ultra-large-scale integrated circuit. Integrated circuits, also known as microchips or chips, are complex electronic circuits consisting of extremely tiny components formed on a single, thin, flat piece of material known as a semiconductor. Modern microprocessors incorporate transistors (which act as electronic amplifiers, oscillators, or, most commonly, switches), in addition to other components such as resistors, diodes, capacitors, and wires, all packed into an area about the size of a postage stamp.
A microprocessor consists of several different sections: The arithmetic/logic unit (ALU) performs calculations on numbers and makes logical decisions; the registers are special memory locations for storing temporary information much as a scratch pad does; the control unit deciphers programs; buses carry digital information throughout the chip and computer; and local memory supports on-chip computation. More complex microprocessors often contain other sections—such as sections of specialized memory, called cache memory, to speed up access to external data-storage devices. Modern microprocessors operate with bus widths of 64 bits (binary digits, or units of information represented as 1s and 0s), meaning that 64 bits of data can be transferred at the same time.
A crystal oscillator in the computer provides a clock signal to coordinate all activities of the microprocessor. The clock speed of the most advanced microprocessors allows billions of computer instructions to be executed every second.
II | | COMPUTER MEMORY |
Because the microprocessor alone cannot accommodate the large amount of memory required to store program instructions and data, such as the text in a word-processing program, transistors can be used as memory elements in combination with the microprocessor. Separate integrated circuits, called random-access memory (RAM) chips, which contain large numbers of transistors, are used in conjunction with the microprocessor to provide the needed memory. There are different kinds of random-access memory. Static RAM (SRAM) holds information as long as power is turned on and is usually used as cache memory because it operates very quickly. Another type of memory, dynamic RAM (DRAM), is slower than SRAM and must be periodically refreshed with electricity or the information it holds is lost. DRAM is more economical than SRAM and serves as the main memory element in most computers.
III | | MICROCONTROLLER |
A microprocessor is not a complete computer. It does not contain large amounts of memory or have the ability to communicate with input devices—such as keyboards, joysticks, and mice—or with output devices, such as monitors and printers. A different kind of integrated circuit, a microcontroller, is a complete computer on a chip, containing all of the elements of the basic microprocessor along with other specialized functions. Microcontrollers are used in video games, videocassette recorders (VCRs), automobiles, and other machines.
IV | | SEMICONDUCTORS |
All integrated circuits are fabricated from semiconductors, substances whose ability to conduct electricity ranks between that of a conductor and that of a nonconductor, or insulator. Silicon is the most common semiconductor material. Because the electrical conductivity of a semiconductor can change according to the voltage applied to it, transistors made from semiconductors act like tiny switches that turn electrical current on and off in just a few nanoseconds (billionths of a second). This capability enables a computer to perform many billions of simple instructions each second and to complete complex tasks quickly.
The basic building block of most semiconductor devices is the diode, a junction, or union, of negative-type (n-type) and positive-type (p-type) materials. The terms n-type and p-type refer to semiconducting materials that have been doped—that is, have had their electrical properties altered by the controlled addition of very small quantities of impurities such as boron or phosphorus. In a diode, current flows in only one direction: across the junction from the p- to n-type material, and then only when the p-type material is at a higher voltage than the n-type. The voltage applied to the diode to create this condition is called the forward bias. The opposite voltage, for which current will not flow, is called the reverse bias. An integrated circuit contains millions of p-n junctions, each serving a specific purpose within the millions of electronic circuit elements. Proper placement and biasing of p- and n-type regions restrict the electrical current to the correct paths and ensure the proper operation of the entire chip.
V | | TRANSISTORS |
The transistor used most commonly in the microelectronics industry is called a metal-oxide semiconductor field-effect transistor (MOSFET). It contains two n-type regions, called the source and the drain, with a p-type region in between them, called the channel. Over the channel is a thin layer of nonconductive silicon dioxide topped by another layer, called the gate. For electrons to flow from the source to the drain, a voltage (forward bias) must be applied to the gate. This causes the gate to act like a control switch, turning the MOSFET on and off and creating a logic gate that transmits digital 1s and 0s throughout the microprocessor.
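The logical behavior of such gates can be modeled in software. The Python sketch below treats a NAND gate as a simple function of two binary inputs and builds NOT, AND, and OR gates from it; this models only the logic, not the underlying transistor physics, and the NAND gate is chosen because it alone can be combined to form any other gate.

def nand(a, b):
    # Output is 0 only when both inputs are 1, mirroring a switch network.
    return 0 if (a and b) else 1

def not_gate(a):
    return nand(a, a)

def and_gate(a, b):
    return not_gate(nand(a, b))

def or_gate(a, b):
    return nand(not_gate(a), not_gate(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b), or_gate(a, b))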
VI | | CONSTRUCTION OF MICROPROCESSORS |
Microprocessors are fabricated using techniques similar to those used for other integrated circuits, such as memory chips. Microprocessors generally have a more complex structure than do other chips, and their manufacture requires extremely precise techniques.
Economical manufacturing of microprocessors requires mass production. Several hundred dies, or circuit patterns, are created on the surface of a silicon wafer simultaneously. Microprocessors are constructed by a process of deposition and removal of conducting, insulating, and semiconducting materials one thin layer at a time until, after hundreds of separate steps, a complex sandwich is constructed that contains all the interconnected circuitry of the microprocessor. Only the outer surface of the silicon wafer—a layer about 10 microns (about 0.01 mm/0.0004 in) thick, or about one-tenth the thickness of a human hair—is used for the electronic circuit. The processing steps include substrate creation, oxidation, lithography, etching, ion implantation, and film deposition.
The first step in producing a microprocessor is the creation of an ultrapure silicon substrate, a silicon slice in the shape of a round wafer that is polished to a mirror-like smoothness. At present, the largest wafers used in industry are 300 mm (12 in) in diameter.
In the oxidation step, an electrically nonconducting layer, called a dielectric, is placed between each conductive layer on the wafer. The most important type of dielectric is silicon dioxide, which is “grown” by exposing the silicon wafer to oxygen in a furnace at about 1000°C (about 1800°F). The oxygen combines with the silicon to form a thin layer of oxide about 75 angstroms deep (an angstrom is one ten-billionth of a meter).
Nearly every layer that is deposited on the wafer must be patterned accurately into the shape of the transistors and other electronic elements. Usually this is done in a process known as photolithography, which is analogous to transforming the wafer into a piece of photographic film and projecting a picture of the circuit on it. A coating on the surface of the wafer, called the photoresist or resist, changes when exposed to light, making it easy to dissolve in a developing solution. These patterns are as small as 0.13 microns in size. Because the shortest wavelength of visible light is about 0.5 microns, short-wavelength ultraviolet light must be used to resolve the tiny details of the patterns. After photolithography, the wafer is etched—that is, the resist is removed from the wafer either by chemicals, in a process known as wet etching, or by exposure to a corrosive gas, called a plasma, in a special vacuum chamber.
In the next step of the process, ion implantation, also called doping, impurities such as boron and phosphorus are introduced into the silicon to alter its conductivity. This is accomplished by ionizing the boron or phosphorus atoms (stripping off one or two electrons) and propelling them at the wafer with an ion implanter at very high energies. The ions become embedded in the surface of the wafer.
The thin layers used to build up a microprocessor are referred to as films. In the final step of the process, the films are deposited using sputterers, in which thin films are grown in a plasma; by means of evaporation, whereby the material is melted and then evaporated, coating the wafer; or by means of chemical-vapor deposition, whereby the material condenses from a gas at low or atmospheric pressure. In each case, the film must be of high purity and its thickness must be controlled within a small fraction of a micron.
Microprocessor features are so small and precise that a single speck of dust can destroy an entire die. The rooms used for microprocessor creation are called clean rooms because the air in them is extremely well filtered and virtually free of dust. The purest of today's clean rooms are referred to as class 1, indicating that there is no more than one speck of dust per cubic foot of air. (For comparison, a typical home is class one million or so.)
VII | | HISTORY OF THE MICROPROCESSOR |

Pentium Microprocessor
The Pentium microprocessor (shown at 2.5X magnification) is manufactured by the Intel Corporation. It contains more than three million transistors. The most common semiconductor materials used in making computer chips are the elements silicon and germanium, although nearly all computer chips are made from silicon.
Michael W. Davidson/Photo Researchers, Inc.
The first microprocessor was the Intel 4004, produced in 1971. Originally developed for a calculator, and revolutionary for its time, it contained 2,300 transistors on a 4-bit microprocessor that could perform only 60,000 operations per second. The first 8-bit microprocessor was the Intel 8008, developed in 1972 to run computer terminals. The Intel 8008 contained 3,300 transistors. The first truly general-purpose microprocessor, developed in 1974, was the 8-bit Intel 8080 (see Microprocessor, 8080), which contained 4,500 transistors and could execute 200,000 instructions per second. By 1989, 32-bit microprocessors containing 1.2 million transistors and capable of executing 20 million instructions per second had been introduced.
In the 1990s the number of transistors on microprocessors continued to double nearly every 18 months. The rate of change followed an early prediction made by American semiconductor pioneer Gordon Moore. In 1965 Moore predicted that the number of transistors on a computer chip would double every year, a prediction that has come to be known as Moore’s Law. In the mid-1990s chips included the Intel Pentium Pro, containing 5.5 million transistors; the UltraSparc-II, by Sun Microsystems, containing 5.4 million transistors; the PowerPC620, developed jointly by Apple, IBM, and Motorola, containing 7 million transistors; and the Digital Equipment Corporation's Alpha 21164A, containing 9.3 million transistors. By the end of the decade microprocessors contained many millions of transistors, transferred 64 bits of data at once, and performed billions of instructions per second.
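A quick calculation shows what an 18-month doubling period implies over a single decade. The Python sketch below performs only this arithmetic; it illustrates the trend rather than the transistor count of any particular chip.

doubling_period_years = 1.5
doublings_per_decade = 10 / doubling_period_years   # about 6.7 doublings
growth_factor = 2 ** doublings_per_decade
print(round(growth_factor))   # roughly a 100-fold increase in ten years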

Operating System
I | | INTRODUCTION |
Operating System (OS), in computer science, the basic software that controls a computer. The operating system has three major functions: It coordinates and manipulates computer hardware, such as computer memory, printers, disks, keyboard, mouse, and monitor; it organizes files on a variety of storage media, such as floppy disk, hard drive, compact disc, digital video disc, and tape; and it manages hardware errors and the loss of data.
II | | HOW AN OS WORKS |
Operating systems control different computer processes, such as running a spreadsheet program or accessing information from the computer's memory. One important process is interpreting commands, enabling the user to communicate with the computer. Some command interpreters are text oriented, requiring commands to be typed in or to be selected via function keys on a keyboard. Other command interpreters use graphics and let the user communicate by pointing and clicking on an icon, an on-screen picture that represents a specific command. Beginners generally find graphically oriented interpreters easier to use, but many experienced computer users prefer text-oriented command interpreters.
Operating systems are either single-tasking or multitasking. The more primitive single-tasking operating systems can run only one process at a time. For instance, when the computer is printing a document, it cannot start another process or respond to new commands until the printing is completed.
All modern operating systems are multitasking and can run several processes simultaneously. In most computers, however, there is only one central processing unit (CPU; the computational and control unit of the computer), so a multitasking OS creates the illusion of several processes running simultaneously on the CPU. The most common mechanism used to create this illusion is time-slice multitasking, whereby each process is run individually for a fixed period of time. If the process is not completed within the allotted time, it is suspended and another process is run. This exchanging of processes is called context switching. The OS performs the “bookkeeping” that preserves a suspended process. It also has a mechanism, called a scheduler, that determines which process will be run next. The scheduler runs short processes quickly to minimize perceptible delay. The processes appear to run simultaneously because the user's sense of time is much slower than the processing speed of the computer.
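Time-slice (round-robin) scheduling can be sketched in a few lines of Python. The process names, amounts of remaining work, and slice length below are invented example values; the loop shows each process running for a fixed slice and being suspended and requeued if it is not finished.

from collections import deque

ready_queue = deque([("editor", 5), ("print job", 3), ("spreadsheet", 7)])
TIME_SLICE = 2   # fixed period of time each process may run

while ready_queue:
    name, remaining = ready_queue.popleft()   # the scheduler picks the next process
    run_for = min(TIME_SLICE, remaining)
    remaining -= run_for                      # the process runs for its slice
    if remaining > 0:
        ready_queue.append((name, remaining)) # context switch: suspend and requeue
        print(f"{name} suspended with {remaining} units of work left")
    else:
        print(f"{name} finished")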
Operating systems can use a technique known as virtual memory to run processes that require more main memory than is actually available. To implement this technique, space on the hard drive is used to mimic the extra memory needed. Accessing the hard drive is more time-consuming than accessing main memory, however, so performance of the computer slows.
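The idea behind virtual memory can be sketched as follows. The tiny main-memory capacity, the page numbers, and the rule of evicting the oldest page are simplifying assumptions for illustration only; real operating systems use far larger memories and more sophisticated replacement policies.

pages_in_ram = []     # page numbers currently held in main memory
RAM_CAPACITY = 3      # assume room for just three pages

def access_page(page):
    if page in pages_in_ram:
        print(f"page {page}: found in RAM (fast)")
    else:
        print(f"page {page}: loaded from disk (slow)")
        if len(pages_in_ram) >= RAM_CAPACITY:
            evicted = pages_in_ram.pop(0)    # make room by moving a page out to disk
            print(f"  page {evicted} moved out to disk")
        pages_in_ram.append(page)

for p in [1, 2, 3, 1, 4, 2]:
    access_page(p)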
III | | CURRENT OPERATING SYSTEMS |
Operating systems commonly found on personal computers include UNIX, Macintosh OS, and Windows. UNIX, developed in 1969 at AT&T Bell Laboratories, is a popular operating system among academic computer users. Its popularity is due in large part to the growth of the interconnected computer network known as the Internet. Software for the Internet was initially designed for computers that ran UNIX. Variations of UNIX include SunOS (distributed by SUN Microsystems, Inc.), Xenix (distributed by Microsoft Corporation), and Linux (available for download free of charge and distributed commercially by companies such as Red Hat, Inc.). UNIX and its clones support multitasking and multiple users. Its file system provides a simple means of organizing disk files and lets users control access to their files. The commands in UNIX are not readily apparent, however, and mastering the system is difficult. Consequently, although UNIX is popular for professionals, it is not the operating system of choice for the general public.
Instead, windowing systems with graphical interfaces, such as Windows and the Macintosh OS, which make computer technology more accessible, are widely used in personal computers (PCs). However, graphical systems generally have the disadvantage of requiring more hardware—such as faster CPUs, more memory, and higher-quality monitors—than do command-oriented operating systems.
IV | | FUTURE TECHNOLOGIES |
Operating systems continue to evolve. A recently developed type of OS called a distributed operating system is designed for a connected, but independent, collection of computers that share resources such as hard drives. In a distributed OS, a process can run on any computer in the network (presumably a computer that is idle) to increase that process's performance. All basic OS functions—such as maintaining file systems, ensuring reasonable behavior, and recovering data in the event of a partial failure—become more complex in distributed systems.
Research is also being conducted that would replace the keyboard with a means of using voice or handwriting for input. Currently these types of input are imprecise because people pronounce and write words very differently, making it difficult for a computer to recognize the same input from different users. However, advances in this field have led to systems that can recognize a small number of words spoken by a variety of people. In addition, software has been developed that can be taught to recognize an individual's handwriting.
Hardware (computer)
I | | INTRODUCTION |

Computer System
A typical computer system consists of a central processing unit (CPU), input devices, storage devices, and output devices. The CPU consists of an arithmetic/logic unit, registers, control section, and internal bus. The arithmetic/logic unit carries out arithmetical and logical operations. The registers store data and keep track of operations. The control unit regulates and controls various operations. The internal bus connects the units of the CPU with each other and with external components of the system. For most computers, the principal input devices are a keyboard and a mouse. Storage devices include hard disks, CD-ROM drives, and random access memory (RAM) chips. Output devices that display data include monitors and printers.
© Microsoft Corporation. All Rights Reserved.
Hardware (computer), equipment involved in the function of a computer. Computer hardware consists of the components that can be physically handled. The function of these components is typically divided into three main categories: input, output, and storage. Components in these categories connect to microprocessors, specifically, the computer’s central processing unit (CPU), the electronic circuitry that provides the computational ability and control of the computer, via wires or circuitry called a bus.
Software, on the other hand, is the set of instructions a computer uses to manipulate data, such as a word-processing program or a video game. These programs are usually stored and transferred via the computer's hardware to and from the CPU. Software also governs how the hardware is utilized; for example, how information is retrieved from a storage device. The interaction between the input and output hardware is controlled by software called the Basic Input Output System software (BIOS).
Although microprocessors are still technically considered to be hardware, portions of their function are also associated with computer software. Software that is built into hardware in this way, such as the BIOS, combines aspects of both and is therefore often referred to as firmware.
II | | INPUT HARDWARE |
Input hardware consists of external devices—that is, components outside of the computer’s CPU—that provide information and instructions to the computer. A light pen is a stylus with a light-sensitive tip that is used to draw directly on a computer’s video screen or to select information on the screen by pressing a clip in the light pen or by pressing the light pen against the surface of the screen. The pen contains light sensors that identify which portion of the screen it is passed over.
A mouse is a pointing device designed to be gripped by one hand. It has a detection device (usually a ball, a light-emitting diode [LED], or a low-powered laser) on the bottom that enables the user to control the motion of an on-screen pointer, or cursor, by moving the mouse on a flat surface. As the device moves across the surface, the cursor moves across the screen. To select items or choose commands on the screen, the user presses a button on the mouse. A joystick is a pointing device composed of a lever that moves in multiple directions to navigate a cursor or other graphical object on a computer screen.
A keyboard is a typewriter-like device that allows the user to type in text and commands to the computer. Some keyboards have special function keys or integrated pointing devices, such as a trackball or touch-sensitive regions that let the user’s finger motions move an on-screen cursor.
Touch-screen displays, which are video displays with a special touch-sensitive surface, have become popular in personal electronic devices such as the Apple iPhone and the Nintendo DS video game system. They are also common in everyday settings, including airport ticket kiosks and automated teller machines (ATMs).
An optical scanner uses light-sensing equipment to convert images such as a picture or text into electronic signals that can be manipulated by a computer. For example, a photograph can be scanned into a computer and then included in a text document created on that computer. The two most common scanner types are the flatbed scanner, which is similar to an office photocopier, and the handheld scanner, which is passed manually across the image to be processed.
A microphone is a device for converting sound into signals that can then be stored, manipulated, and played back by the computer. A voice recognition module is a device that converts spoken words into information that the computer can recognize and process.
A modem, which stands for modulator-demodulator, is a device that connects a computer to a telephone line or cable television network and allows information to be transmitted to or received from another computer. Each computer that sends or receives information must be connected to a modem. The digital signal sent from one computer is converted by the modem into an analog signal, which is then transmitted by telephone lines or television cables to the receiving modem, which converts the signal back into a digital signal that the receiving computer can understand.
A network interface card (NIC) allows the computer to access a local area network (LAN) through either a specialized cable similar to a telephone line or through a wireless (Wi-Fi) connection. The vast majority of LANs connect through the Ethernet standard, which was introduced in 1983.
III | | OUTPUT HARDWARE |
Output hardware consists of internal and external devices that transfer information from the computer’s CPU to the computer user. Graphics adapters, which are either an add-on card (called a video card) or connected directly to the computer’s motherboard, transmit information generated by the computer to an external display. Displays commonly take one of two forms: a video screen with a cathode-ray tube (CRT) or a video screen with a liquid crystal display (LCD). A CRT-based screen, or monitor, looks similar to a television set. Information from the CPU is displayed using a beam of electrons that scans a phosphorescent surface that emits light and creates images. An LCD-based screen displays visual information on a flatter and smaller screen than a CRT-based video monitor. Laptop computers use LCD screens for their displays.
Printers take text and images from a computer and print them on paper. Dot-matrix printers use tiny wires to impact upon an inked ribbon to form characters. Laser printers employ beams of light to draw images on a drum that then picks up fine black particles called toner. The toner is fused to a page to produce an image. Inkjet printers fire droplets of ink onto a page to form characters and pictures.
Computers can also output audio via a specialized chip on the motherboard or an add-on card called a sound card. Users can attach speakers or headphones to an output port to hear the audio produced by the computer. Many modern sound cards allow users to create music and record digital audio, as well.
IV | | STORAGE HARDWARE |
Storage hardware provides permanent storage of information and programs for retrieval by the computer. The two main types of storage devices are disk drives and memory. There are several types of disk drives: hard, floppy, magneto-optical, magnetic tape, and compact. Hard disk drives store information in magnetic particles embedded in a disk. Usually a permanent part of the computer, hard disk drives can store large amounts of information and retrieve that information very quickly. Floppy disk drives also store information in magnetic particles embedded in removable disks that may be floppy or rigid. Floppy disks store less information than a hard disk drive and retrieve the information at a much slower rate. While most computers still include a floppy disk drive, the technology has been gradually phased out in favor of newer technologies.
Magneto-optical disk drives store information on removable disks that are sensitive to both laser light and magnetic fields. They can store up to 9.1 gigabytes (GB) of data, but their retrieval speeds are slightly slower than those of hard drives. They are much more rugged than floppy disks, making them ideal for data backups. However, the introduction of newer media that are less expensive and can store more data has made magneto-optical drives obsolete.
Magnetic tape drives use magnetic tape similar to the tape used in VCR cassettes. Tape drives have a very slow read/write time, but have a very high capacity; in fact, their capacity is second only to hard disk drives. Tape drives are mainly used to back up data.
Compact disc drives store information on pits burned into the surface of a disc of reflective material (see CD-ROM). CD-ROMs can store up to 737 megabytes (MB) of data. A Compact Disc-Recordable (CD-R) or Compact Disc-ReWritable (CD-RW) drive can record data onto a specialized disc, but only the CD-RW standard allows users to change the data stored on the disc. A digital versatile disc (DVD) looks and works like a CD-ROM but can store up to 17.1 GB of data on a single disc. Like CD-ROMs, there are specialized versions of DVDs, such as DVD-Recordable (DVD-R) and DVD-ReWritable (DVD-RW), that can have data written onto them by the user. More recently Sony and a group of other electronics companies developed a higher-capacity successor to the DVD called Blu-ray.
Memory refers to the computer chips that store information for quick retrieval by the CPU. Random access memory (RAM) is used to store the information and instructions that operate the computer's programs. Typically, programs are transferred from storage on a disk drive to RAM. RAM is also known as volatile memory because the information within the computer chips is lost when power to the computer is turned off. Read-only memory (ROM) contains critical information and software that must be permanently available for computer operation, such as the operating system that directs the computer's actions from start up to shut down. ROM is called nonvolatile memory because the memory chips do not lose their information when power to the computer is turned off.
A more recent development is solid-state RAM. Unlike standard RAM, solid-state RAM retains its contents even when there is no power supply. Flash drives are removable storage devices that use solid-state RAM to store information for long periods of time. Solid-state drives (SSDs) have also been introduced as a potential replacement for hard disk drives. SSDs have faster access speeds than hard disks and have no moving parts. However, they are quite expensive and cannot store as much data as a hard disk. Solid-state RAM technology is also used in memory cards for digital media devices, such as digital cameras and media players.
Some devices serve more than one purpose. For example, floppy disks may also be used as input devices if they contain information to be used and processed by the computer user. In addition, they can be used as output devices if the user wants to store the results of computations on them.
V | | HARDWARE CONNECTIONS |

Computer chipset
The various components of a computer communicate with each other through a chipset, which is a collection of microprocessors connected to each other through a series of wires (also called buses). Shown here is a diagram of a typical chipset, displaying the computer's components and how they are connected to each other.
© Microsoft Corporation. All Rights Reserved.
To function, hardware requires physical connections that allow components to communicate and interact. A bus provides a common interconnected system composed of a group of wires or circuitry that coordinates and moves information between the internal parts of a computer. A computer bus consists of two channels, one that the CPU uses to locate data, called the address bus, and another to send the data to that address, called the data bus. A bus is characterized by two features: how much information it can manipulate at one time, called the bus width, and how quickly it can transfer these data. In today’s computers, a series of buses work together to communicate between the various internal and external devices.
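Bus width and transfer rate together determine how many bytes move each second, as the short Python sketch below shows. The 64-bit width matches the figure given earlier for modern buses, while the 100 million transfers per second is an assumed example value.

bus_width_bits = 64
transfers_per_second = 100_000_000   # assumed example rate (100 MHz)

bytes_per_second = (bus_width_bits // 8) * transfers_per_second
print(bytes_per_second / 1_000_000, "MB per second")   # 800.0 MB per second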
A | | Internal Connections |
Expansion, or add-on, cards use one of three bus types to interface with the computer. The Peripheral Component Interconnect (PCI) is the standard expansion card bus used in most computers. The Accelerated Graphics Port (AGP) bus was developed to create a high-speed interface with the CPU that bypassed the PCI bus. This bus was specifically designed for modern video cards, which require a large amount of bandwidth to communicate with the CPU. A newer version of PCI called PCI Express (PCIe) was designed to replace both PCI and AGP as the main bus for expansion cards.
Internal storage devices use one of three separate standards to connect to the bus: parallel AT attachment (PATA), serial AT attachment (SATA), or small computer system interface (SCSI). The term AT refers to the IBM AT computer, first released in 1984. The PATA and SCSI standards were first introduced in 1986; the SATA standard was introduced in 2002 as a replacement for the PATA standard. The SCSI standard is mainly used in servers or high-end systems.
A1 | | Parallel and Serial Connections |
For most of the history of the personal computer, external and internal devices have communicated with each other through parallel connections. However, given the limitations of parallel connections, engineers began to develop technology based on serial connections, since these offer greater data transfer rates and greater reliability.
A serial connection is a wire or set of wires used to transfer information from the CPU to an external device such as a mouse, keyboard, modem, scanner, and some types of printers. This type of connection transfers only one piece of data at a time. The advantage to using a serial connection is that it provides effective connections over long distances.
A parallel connection uses multiple sets of wires to transfer blocks of information simultaneously. Most scanners and printers use this type of connection. A parallel connection is much faster than a serial connection, but it is limited to shorter distances between the CPU and the external device than serial connections.
The best way to see the difference between parallel and serial connections is to imagine the differences between a freeway and a high-speed train line. The freeway is the parallel connection—lots of lanes for cars. However, as more cars are put onto the freeway, the slower each individual car travels, which means more lanes have to be built at a high cost if the cars are to travel at high speed. The train line is the serial connection; it consists of two tracks and can only take two trains at a time. However, these trains do not need to deal with traffic and can go at higher speeds than the cars on the freeway.
As CPU speeds increased and engineers increased the speed of the parallel connections to keep up, the main problem of parallel connections—maintaining data integrity at high speed—became more evident. Engineers began to look at serial connections as a possible solution to the problem. This led to the development of both SATA and PCI Express, which, by using serial connections, provide high data transfer rates with fewer wires and without loss of data integrity.
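The trade-off can be made concrete with a rough calculation. In the Python sketch below, all of the clock rates are assumed round numbers for illustration, not the specifications of any real interface: a parallel link moves several bits per clock tick but cannot be clocked very fast, while a single serial line can be clocked far faster.

parallel_lines = 8                 # bits moved per clock tick (assumed)
parallel_clock = 10_000_000        # 10 MHz: parallel lines are hard to run faster (assumed)

serial_lines = 1
serial_clock = 1_500_000_000       # 1.5 GHz: one line clocked much faster (assumed)

print("parallel:", parallel_lines * parallel_clock / 1e6, "megabits per second")
print("serial:  ", serial_lines * serial_clock / 1e6, "megabits per second")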
B | | External Connections |
The oldest external connections used by computers were the serial and parallel ports. These were included on the original IBM PC from 1981. Originally designed as an interface to connect computer to computer, the serial port was eventually used with various devices, including modems, mice, keyboards, scanners, and some types of printers. Parallel ports were mainly used with printers, but some scanners and external drives used the parallel port.
The Universal Serial Bus (USB) interface was developed to replace both the serial and parallel ports as the standard for connecting external devices. Developed by a group of companies including Microsoft, Intel, and IBM, the USB standard was first introduced in 1995. Besides transferring data to and from the computer, USB can also provide a small amount of power, eliminating the need for external power cables for most peripherals. The USB 2.0 standard, which came into general usage in 2002, drastically improved the data transfer rate.
A competing standard to USB was developed at the same time by Apple and Texas Instruments. Officially called IEEE 1394, it is more commonly called FireWire. It is capable of transferring data at a higher rate than the original USB standard and became the standard interface for multimedia hardware, such as video cameras. But Apple’s royalty rate and the introduction of USB 2.0—as well as the fact that Intel, one of the companies behind USB, is responsible for most motherboards and chipsets in use—meant that FireWire was unlikely to become the standard peripheral interface for PCs. Today most computers have both USB and FireWire ports connected to the motherboard.
Wireless devices have also become commonplace with computers. The initial wireless interface used was infrared (IR), the same technology used in remote controls. However, this interface required that the device have a direct line of sight to the IR sensor so that the data could be transferred. It also had a high power requirement. Most modern wireless devices use radio frequency (RF) signals to communicate to the computer. One of the most common wireless standards used today is Bluetooth. It uses the same frequencies as the Wi-Fi standard used for wireless LANs.
LIGHT PEN, a pointing device in which the user holds a wand, which is attached to the computer, up to the screen and selects items or chooses commands on the screen (the equivalent of a mouse click) either by pressing a clip on the side of the light pen or by pressing the light pen against the surface of the screen. The wand contains light sensors and sends a signal to the computer whenever it records a light, as during close contact with the screen when the non-black pixels beneath the wand's tip are refreshed by the display's electron beam. The computer's screen is not all lit at once—the electron beam that lights pixels on the screen traces across the screen row by row, all in the space of 1/60 of a second. By noting exactly when the light pen detected the electron beam passing its tip, the computer can determine the light pen's location on the screen. The light pen doesn't require a special screen or screen coating, as does a touch screen, but its disadvantage is that holding the pen up for an extended length of time is tiring to the user. See also Automation; Computer-Aided Design/Computer-Aided Manufacturing; Graphics Tablet; Touch Screen.
Mouse (computer)
I | | INTRODUCTION |

Mouse
A mouse is a pointing device that helps a user navigate through a graphical computer interface. Connected to the computer by a cable, it is generally mapped so that an on-screen cursor may be controlled by moving the mouse across a flat surface. Two common types of mouse, the Microsoft mouse (bottom) and the Apple ADB (Apple Desktop Bus) mouse (top) are shown here.
© Microsoft Corporation. All Rights Reserved.
MOUSE (computer), common pointing device used with personal computers that have a graphical user interface (GUI). A user typically operates a mouse with one hand in order to move a cursor over images or text on a computer screen. Clicking buttons on the mouse activates, opens, or moves icons or other graphical objects on the screen when they are displayed under the floating cursor. Another type of pointing device called a joystick is typically used for interacting with computer games.
A mouse is commonly attached to a personal computer by a cord that connects to a universal serial bus (USB) port. The rectangular USB interface allows the mouse to report its position at a very high rate. Other types of interfaces include a PS/2 port, which uses a smaller, round connector and reports the mouse’s position at a lower rate. The PS/2 port is a dedicated mouse port built into the motherboard of the computer. Earlier personal computers often had a serial mouse that connected to the computer through a standard serial port of the type that could also be used for other purposes, such as attaching a modem. Early types of bus mice attached to the computer’s bus through a special card or port.
The basic features of a mouse are a casing with a flat bottom, designed to be gripped by one hand; one or more buttons on the top; a multidirectional detection device on the bottom; and a cable connecting the mouse to the computer. By moving the mouse on a surface (such as a desk), the user typically controls an on-screen cursor. A mouse is a relative pointing device because there are no defined limits to the mouse’s movement and because its placement on a surface does not map directly to a specific screen location. To select items or choose commands on the screen, the user presses one of the mouse's buttons, producing a “mouse click.”
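Because the mouse reports only relative motion, driver software typically accumulates the reported movements and clamps the resulting cursor position to the edges of the screen. The C++ sketch below is a minimal illustration of that idea; the screen size and sample movements are invented for the example.

// Sketch of turning relative mouse movements (dx, dy) into an on-screen
// cursor position. Screen dimensions and sample movements are illustrative.
#include <algorithm>
#include <cstdio>

struct Cursor {
    int x = 0, y = 0;
    void move(int dx, int dy, int screen_w, int screen_h) {
        // Accumulate the relative motion, then clamp to the visible screen.
        x = std::clamp(x + dx, 0, screen_w - 1);
        y = std::clamp(y + dy, 0, screen_h - 1);
    }
};

int main() {
    Cursor cursor;
    const int width = 1024, height = 768;   // assumed screen size

    int movements[][2] = {{30, 12}, {-5, 40}, {2000, 0}};   // sample mouse reports
    for (auto& m : movements) {
        cursor.move(m[0], m[1], width, height);
        std::printf("cursor at (%d, %d)\n", cursor.x, cursor.y);
    }
    return 0;
}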
Most computer mice now have a small vertical wheel between two buttons to allow easy scrolling up and down a screen. Left-handed people can also reprogram a mouse to switch functions assigned to the right and left buttons.
II | | TYPES OF COMPUTER MICE |
A | | Mechanical Mouse |
A mechanical mouse translates the motion of a ball on the bottom of the mouse into directional signals. As the user moves the mouse, the ball spins a pair of wheels inside the mouse; these wheels may, in turn, rotate additional wheels via axles or gears. At least one pair of wheels has conductive markings on its surface. Because the markings permit an electric current to flow, a set of brushes riding on the surface of the wheels can detect them as the wheels turn. The electronics in the mouse translate these electrical signals into mouse-movement information that can be used by the computer.
B | | Optical Mouse |
An optical mouse uses a light-emitting diode (LED) and a small CCD (charge coupled device) camera to detect motion. Modern optical mice can work on virtually any surface. The early designs for optical mice used two lights of different colors, and a special mouse pad that had a grid of lines in the same two colors, one color for vertical lines and another for horizontal lines.
C | | Optomechanical Mouse |

Optomechanical Mouse
As a mouse is moved, a ball in the mouse’s interior is rolled. This motion turns two axles, corresponding to the two dimensions of movement. Each axle spins a slotted wheel. On one side of each wheel, a light-emitting diode (LED) sends a path of light through the slots to a receiving phototransistor on the other side. The pattern of light to dark is then translated to an electrical signal, which reports the mouse’s position and speed and is reflected in the movement of the cursor on the computer’s screen.
© Microsoft Corporation. All Rights Reserved.
An optomechanical mouse translates motion into directional signals through a combination of optical and mechanical means. The optical portion includes pairs of light-emitting diodes (LEDs) and matching sensors; the mechanical portion consists of rotating wheels with cutout slits. When the mouse is moved, the wheels turn and the light from the LEDs either passes through the slits and strikes a light sensor or is blocked by the solid portions of the wheels. These changes in light contact are detected by the pairs of sensors and interpreted as indications of movement. Because the sensors are slightly out of phase with one another, the direction of movement is determined by which sensor is the first to regain light contact. Because it uses optical equipment instead of mechanical parts, an optomechanical mouse eliminates the need for many of the wear-related repairs and maintenance necessary with purely mechanical mice.
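The two slightly out-of-phase sensors on each wheel form what is generally called a quadrature encoder. The minimal C++ sketch below decodes direction of rotation from successive readings of the two sensors using the standard quadrature state sequence; it is a generic illustration, not code from any particular mouse.

// Minimal quadrature decoding sketch: two sensors (A and B) on one slotted
// wheel produce square waves 90 degrees out of phase. Comparing the previous
// and current sensor states tells which direction the wheel turned.
#include <cstdio>

// Returns +1 for one step forward, -1 for one step backward, 0 for no change
// (or an invalid transition, such as one caused by a missed reading).
int quadrature_step(int prevA, int prevB, int currA, int currB) {
    int prev = (prevA << 1) | prevB;   // encode the two bits as a state 0..3
    int curr = (currA << 1) | currB;
    // Valid forward sequence of states: 00 -> 01 -> 11 -> 10 -> 00 ...
    static const int forward_next[4] = {1, 3, 0, 2};
    if (curr == forward_next[prev]) return +1;
    if (prev == forward_next[curr]) return -1;
    return 0;
}

int main() {
    // Simulated sensor readings for a wheel turning forward, then backward.
    int samples[][2] = {{0,0},{0,1},{1,1},{1,0},{0,0},{1,0},{1,1},{0,1}};
    int position = 0;
    for (int i = 1; i < 8; ++i) {
        position += quadrature_step(samples[i-1][0], samples[i-1][1],
                                    samples[i][0], samples[i][1]);
        std::printf("position = %d\n", position);
    }
    return 0;
}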
D | | Cordless Mouse |
A cordless or wireless mouse uses a radio or an infrared broadcasting system to link the mouse to the computer. Such a mouse typically needs batteries for power.

E | | Touch Pad and TrackPoint |
A standard external mouse that is physically independent of a computer is not practical for use with laptops or other portable computers, so a touch pad (also called a trackpad or glidepoint) usually takes the place of a mouse. The user drags a finger across the flat surface of the touch pad, causing the cursor to move in the corresponding direction. Two buttons near the pad serve the same purpose as the buttons on a standard mouse. An earlier system that used the “J” key on the keyboard as a mouse substitute is now considered obsolete.
Another pointing device that can be used with some portable computers is TrackPoint, introduced by IBM in 1992. Also called a pointing stick, the small rubber cap sits on the keyboard above the B key and between the G and H keys and can be clicked and double-clicked.
III | | ERGONOMIC ISSUES |

Trackball
A trackball is basically an inverted mouse; the user rotates the ball itself while clicking nearby buttons. Trackball users argue the device is more efficient because it is stationary and saves arm movement; however, many mouse users are uncomfortable with the different style of input.
© Microsoft Corporation. All Rights Reserved.
The restricted and repetitive hand movements required to move a mouse and click buttons for long periods of time may cause fatigue or painful repetitive stress injuries (RSI) to arms, wrists, and hands such as carpal tunnel syndrome or “mouse elbow.” Special ergonomically designed mice that allow different types of grips or hand and finger movements are available. Another alternative is a trackball, which functions like an upside-down mouse. Users can rotate the trackball with a thumb or finger to move the cursor. Less wrist and arm movement is needed. A foot mouse is a specialized alternative that allows users to use feet rather than hands to move a cursor.
IV | | HISTORY |
Apple Macintosh Computer
The Apple Macintosh, released in 1984, was among the first personal computers to use a graphical user interface. A graphical user interface enables computer users to easily execute commands by clicking on pictures, words, or icons with a pointing device called a mouse.
© Microsoft Corporation. All Rights Reserved.
The first computer mouse was developed by Douglas Engelbart in 1964 at the Stanford Research Institute (SRI), where he headed a team that researched ways for people to interact with computers. The mouse allowed a user to rapidly perform computer tasks by simple hand actions instead of typing complicated strings of characters or code. The device was demonstrated in public in 1968 and granted a patent in 1970. It used two perpendicular rolling wheels to translate movements by the user into vertical and horizontal motion on a screen. The name “mouse” was said to be inspired by the long cord attached to the box-like device, which resembled a tail. By 1973 the basic mouse design had replaced the original wheels with a moving ball.
Xerox Corporation used a mouse and graphical user interface with its early experimental Alto personal computer, developed at its Palo Alto Research Center (PARC) in the early 1970s. Apple developed a much more reliable and practical computer mouse for its highly successful Macintosh computer, introduced in 1984. Microsoft had provided a mouse for MS-DOS with some of its software as early as 1983, but it was not until the company’s Windows graphical-user-interface software became available in 1985 that an improved Microsoft mouse became standard. The computer mouse can also be used with versions of the UNIX, Linux, and OS/2 operating systems.

Microprocessor
The microprocessor is one type of ultra-large-scale integrated circuit. Integrated circuits, also known as microchips or chips, are complex electronic circuits consisting of extremely tiny components formed on a single, thin, flat piece of material known as a semiconductor. Modern microprocessors incorporate transistors (which act as electronic amplifiers, oscillators, or, most commonly, switches), in addition to other components such as resistors, diodes, capacitors, and wires, all packed into an area about the size of a postage stamp.
A microprocessor consists of several different sections: The arithmetic/logic unit (ALU) performs calculations on numbers and makes logical decisions; the registers are special memory locations for storing temporary information much as a scratch pad does; the control unit deciphers programs; buses carry digital information throughout the chip and computer; and local memory supports on-chip computation. More complex microprocessors often contain other sections—such as sections of specialized memory, called cache memory, to speed up access to frequently used data and instructions. Modern microprocessors operate with bus widths of 64 bits (binary digits, or units of information represented as 1s and 0s), meaning that 64 bits of data can be transferred at the same time.
A crystal oscillator in the computer provides a clock signal to coordinate all activities of the microprocessor. The clock speed of the most advanced microprocessors allows billions of computer instructions to be executed every second.
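The cooperation of the control unit, registers, ALU, and memory can be summarized as a repeating fetch-decode-execute cycle. The C++ sketch below simulates that cycle for a toy three-instruction machine; the instruction set is invented purely for illustration and does not correspond to any real microprocessor.

// Toy fetch-decode-execute loop illustrating how a control unit steps through
// a program held in memory. The three-instruction "machine language"
// (LOAD, ADD, HALT) is invented for the example.
#include <cstdio>
#include <vector>

enum Opcode { LOAD = 1, ADD = 2, HALT = 3 };

struct Instruction { Opcode op; int reg; int value; };

int main() {
    // A tiny program: load 5 into register 0, add 7 to it, then stop.
    std::vector<Instruction> memory = {
        {LOAD, 0, 5}, {ADD, 0, 7}, {HALT, 0, 0}
    };
    int registers[4] = {0, 0, 0, 0};
    int pc = 0;                               // program counter

    while (true) {
        Instruction inst = memory[pc++];      // fetch
        switch (inst.op) {                    // decode
            case LOAD: registers[inst.reg] = inst.value;  break;   // execute
            case ADD:  registers[inst.reg] += inst.value; break;
            case HALT: std::printf("R0 = %d\n", registers[0]); return 0;
        }
    }
}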
Integrated Circuit, tiny electronic circuit used to perform a specific electronic function, such as amplification; it is usually combined with other components to form a more complex system. It is formed as a single unit by diffusing impurities into single-crystal silicon, which then serves as a semiconductor material, or by etching the silicon by means of electron beams. Several hundred identical integrated circuits (ICs) are made at a time on a thin wafer several centimeters wide, and the wafer is subsequently sliced into individual ICs called chips. In large-scale integration (LSI), as many as 5000 circuit elements, such as resistors and transistors, are combined in a square of silicon measuring about 1.3 cm (.5 in) on a side. Hundreds of these integrated circuits can be arrayed on a silicon wafer 8 to 15 cm (3 to 6 in) in diameter. Larger-scale integration can produce a silicon chip with millions of circuit elements. Individual circuit elements on a chip are interconnected by thin metal or semiconductor films, which are insulated from the rest of the circuit by thin dielectric layers. Chips are assembled into packages containing external electrical leads to facilitate insertion into printed circuit boards for interconnection with other circuits or components.
During recent years, the functional capability of ICs has steadily increased, and the cost of the functions they perform has steadily decreased. This has produced revolutionary changes in electronic equipment—vastly increased functional capability and reliability combined with great reductions in size, physical complexity, and power consumption. Computer technology, in particular, has benefited greatly. The logic and arithmetic functions of a small computer can now be performed on a single VLSI chip called a microprocessor, and the complete logic, arithmetic, and memory functions of a small computer can be packaged on a single printed circuit board, or even on a single chip. Such a device is called a microcomputer.
In consumer electronics, ICs have made possible the development of many new products, including personal calculators and computers, digital watches, and video games. They have also been used to improve or lower the cost of many existing products, such as appliances, televisions, radios, and high-fidelity equipment. They have been applied in the automotive field for diagnostics and pollution control, and they are used extensively in industry, medicine, traffic control (both air and ground), environmental monitoring, and communications.
Transistor
I | | INTRODUCTION |

Bipolar Junction Transistors
The bipolar junction transistor consists of three layers of highly purified silicon (or germanium) to which small amounts of boron (p-type) or phosphorus (n-type) have been added. The boundary between each layer forms a junction, which only allows current to flow from p to n. Connections to each layer are made by evaporating aluminum on the surface; the silicon dioxide coating protects the nonmetalized areas. A small current through the base-emitter junction causes a current 10 to 1000 times larger to flow between the collector and emitter. (The arrows show a positive current; the names of layers should not be taken literally.) The many uses of the junction transistor, from sensitive electronic detectors to powerful hi-fi amplifiers, all depend on this current amplification.
© Microsoft Corporation. All Rights Reserved.
Transistor, in electronics, common name for a group of electronic devices used as amplifiers or oscillators in communications, control, and computer systems (see Amplifier; Computer; Electronics). Until the advent of the transistor in 1948, developments in the field of electronics were dependent on the use of thermionic vacuum tubes, magnetic amplifiers, specialized rotating machinery, and special capacitors as amplifiers. See Vacuum Tubes.
Capable of performing many functions of the vacuum tube in electronic circuits, the transistor is a solid-state device consisting of a tiny piece of semiconducting material, usually germanium or silicon, to which three or more electrical connections are made. The basic components of the transistor are comparable to those of a triode vacuum tube and include the emitter, which corresponds to the heated cathode of the triode tube as the source of electrons. See Electron.
The transistor was developed at Bell Telephone Laboratories by the American physicists Walter Houser Brattain, John Bardeen, and William Bradford Shockley. For this achievement, the three shared the 1956 Nobel Prize in physics. Shockley is noted as the initiator and director of the research program in semiconducting materials that led to the discovery of this group of devices; his associates, Brattain and Bardeen, are credited with the invention of an important type of transistor.
II | | ATOMIC STRUCTURE OF SEMICONDUCTORS |
The electrical properties of a semiconducting material are determined by its atomic structure. In a crystal of pure germanium or silicon, the atoms are bound together in a periodic arrangement forming a perfectly regular diamond-cubic lattice (see Crystal). Each atom in the crystal has four valence electrons, each of which pairs with an electron of a neighboring atom to form a covalent bond. Because the electrons are not free to move, the pure crystalline material acts, at low temperatures, as an insulator.
III | | FUNCTION OF IMPURITIES |
Germanium or silicon crystals containing small amounts of certain impurities can conduct electricity even at low temperatures. Such impurities function in the crystal in either of two ways. An impurity element, such as phosphorus, antimony, or arsenic, is called a donor impurity because it contributes excess electrons. This group of elements has five valence electrons, only four of which enter into covalent bonding with the germanium or silicon atoms. Thus, when an electric field is applied, the remaining electron in donor impurities is free to move through the crystalline material.
In contrast, impurity elements, such as gallium and indium, have only three valence electrons, lacking one to complete the interatomic-bond structure within the crystal. Such impurities are known as acceptor impurities because these elements accept electrons from neighboring atoms to satisfy the deficiency in valence-bond structure. The resultant deficiencies, or so-called holes, in the structure of neighboring atoms, in turn, are filled by other electrons. These holes behave as positive charges, appearing to move under an applied voltage in a direction opposite to that of the electrons.
IV | | N-TYPE AND P-TYPE SEMICONDUCTORS |

Transistor
Transistors are electronic devices that are used as amplifiers, oscillators, or switches in communication, control, and computer systems. A transistor consists of small layers of silicon or germanium that have been "doped," or treated with impurity atoms, to create n-type and p-type semiconductors. The diagram shows the structure of various kinds of transistors.
© Microsoft Corporation. All Rights Reserved.
A germanium or silicon crystal, containing donor-impurity atoms, is called a negative, or n-type, semiconductor to indicate the presence of excess negatively charged electrons. The use of an acceptor impurity produces a positive, or p-type, semiconductor, so called because of the presence of positively charged holes.
A single crystal containing both n-type and p-type regions may be prepared by introducing the donor and acceptor impurities into molten germanium or silicon in a crucible at different stages of crystal formation. The resultant crystal has two distinct regions of n-type and p-type material, and the boundary joining the two areas is known as an n-p junction. Such a junction may be produced also by placing a piece of donor-impurity material against the surface of a p-type crystal or a piece of acceptor-impurity material against an n-type crystal and applying heat to diffuse the impurity atoms through the outer layer.

Figure 1: N-P Junction
An n-p junction (also known as a diode) will only allow current to flow in one direction. The electrons from the n-type material can pass to the right through the p-type material, but the lack of excess electrons in the p-type material will prevent any flow of electrons to the left. Note that the current is defined to flow in a direction that is opposite to the direction of the flow of the electrons.
© Microsoft Corporation. All Rights Reserved.
When an external voltage is applied, the n-p junction acts as a rectifier, permitting current to flow in only one direction (see Rectification). If the p-type region is connected to the positive terminal of a battery and the n-type to the negative terminal, a large current flows through the material across the junction. If the battery is connected in the opposite manner, as shown in the diagram in Fig. 1, current does not flow.
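This one-way behavior is commonly modeled by the ideal (Shockley) diode equation, I = Is(e^(V/VT) - 1). The short C++ sketch below evaluates the equation for a forward and a reverse voltage; the saturation current used is only a representative order of magnitude, not a value taken from the article.

// Sketch of the ideal diode equation I = Is * (exp(V / Vt) - 1), showing why
// an n-p junction passes a large current in one direction and almost none in
// the other. Is is an assumed (illustrative) saturation current.
#include <cmath>
#include <cstdio>

int main() {
    const double Is = 1e-12;   // saturation current in amperes (assumed)
    const double Vt = 0.026;   // thermal voltage at room temperature, ~26 mV

    const double voltages[] = {0.6, -0.6};     // forward and reverse bias
    for (double v : voltages) {
        double current = Is * (std::exp(v / Vt) - 1.0);
        std::printf("V = %+.1f V  ->  I = %.3e A\n", v, current);
    }
    return 0;
}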
V | | TRANSISTOR OPERATION |

Figure 2: NPN Transistor Amplifier
The voltage from a source is applied to the base of the transistor (labeled P). Small changes in this applied voltage across R1 (input) result in large changes in the voltage across the resistor labeled R2 (output). One possible application of this circuit would be to amplify sounds. In this case the input would be a microphone and the resistor R2 would be a speaker. “Hi-fi” amplifiers have many more transistors, both to increase the power output and to reduce the distortion that occurs in simple circuits like this.
© Microsoft Corporation. All Rights Reserved.
In the transistor, a combination of two junctions may be used to achieve amplification. One type, called the n-p-n junction transistor, consists of a very thin layer of p-type material between two sections of n-type material, arranged in a circuit as shown in Fig. 2. The n-type material at the left of the diagram is the emitter element of the transistor, constituting the electron source. To permit the forward flow of current across the n-p junction, the emitter has a small negative voltage with respect to the p-type layer, or base component, that controls the electron flow. The n-type material in the output circuit serves as the collector element, which has a large positive voltage with respect to the base to prevent reverse current flow. Electrons moving from the emitter enter the base, are attracted to the positively charged collector, and flow through the output circuit. The input impedance, or resistance to current flow, between the emitter and the base is low, whereas the output impedance between collector and base is high. Therefore, small changes in the voltage of the base cause large changes in the voltage drop across the collector resistance, making this type of transistor an effective amplifier.
Similar in operation to the n-p-n type is the p-n-p junction transistor, which also has two junctions and is equivalent to a triode vacuum tube. Other types with three junctions, such as the n-p-n-p junction transistor, provide greater amplification than the two-junction transistor.
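The amplifying action described in this section can be made concrete with a small numerical sketch. In the C++ example below, the current gain and the collector resistor value are assumptions chosen for illustration (consistent with the "10 to 1000 times larger" current mentioned in the earlier figure caption), not figures given in the article.

// Numerical sketch of transistor amplification: a small change in base
// current produces a much larger change in collector current, and hence a
// large change in the voltage across the collector (output) resistor.
// The gain (beta) and resistor value are illustrative assumptions.
#include <cstdio>

int main() {
    const double beta = 100.0;          // assumed current gain (collector/base)
    const double r_collector = 1000.0;  // assumed collector resistor, ohms

    // A small swing in base current...
    double base_current_low  = 10e-6;   // 10 microamperes
    double base_current_high = 20e-6;   // 20 microamperes

    // ...becomes a much larger swing in collector current and output voltage.
    double delta_ib = base_current_high - base_current_low;
    double delta_ic = beta * delta_ib;
    double delta_vout = delta_ic * r_collector;

    std::printf("Base current change:      %.1f uA\n", delta_ib * 1e6);
    std::printf("Collector current change: %.1f mA\n", delta_ic * 1e3);
    std::printf("Output voltage change:    %.2f V\n", delta_vout);
    return 0;
}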
VI | | APPLICATIONS |

Circuit Board and Transistors
A close-up on a smoke detector’s circuit board reveals its components, which include transistors, resistors, capacitors, diodes, and inductors. Rounded silver containers house the transistors that make the circuit work. Transistors are capable of serving many functions, such as amplifier, switch, and oscillator. Each transistor consists of a small piece of silicon that has been “doped,” or treated with impurity atoms, to create n-type and p-type semiconductors. Invented in 1947, transistors are a fundamental component in nearly all modern electronic devices.
H. Schneebeli/Science Source/Photo Researchers, Inc.
At its present stage of development, the transistor is as effective as a vacuum tube, both of which can amplify to an upper limit of about 1000 megahertz. Among the advantages of the transistor are its small size and very small power requirements. In contrast to the vacuum tube, it does not need power for heating the cathode. Therefore, transistors have replaced most vacuum-tube amplifiers in light, portable electronic equipment, such as airborne navigational aids and the control systems of guided missiles, in which weight and size are prime considerations (see Navigation). Commercial applications include very small hearing aids and compact portable radio and television receivers. In addition, transistors have completely replaced vacuum tubes in electronic computers, which require a great many amplifiers.
Transistors are also used in miniaturized diagnostic instruments, such as those used to transmit electrocardiograph, respiratory, and other data from the bodies of astronauts on space flights (see Space Exploration). Nearly all transmitting equipment used in space-exploration probes employs transistorized circuitry. Transistors also aid in diagnosing diseases. Miniature radio transmitters using transistors can also be implanted in the bodies of animals for ecological studies of feeding habits, patterns of travel, and other factors. A recent commercial application is the transistorized ignition system in automobiles.
During the late 1960s a new electronic technique, the integrated circuit, began to replace the transistor in complex electronic equipment. Although roughly the same size as a transistor, an integrated circuit performs the function of 15 to 20 transistors. A natural development from the integrated circuit in the 1970s has been the production of medium-, large-, and very large-scale integrated circuits (MSI, LSI, and VLSI), which have permitted the building of a compact computer, or minicomputer, containing disk storage units and the communication-control systems on the same frame.
The so-called microprocessor, which came into use in the mid-1970s, is a refinement of the LSI. As a result of further miniaturization, a single microprocessor can incorporate the functions of a number of printed-circuit boards and deliver the performance of the central processing unit of a much larger computer in a hand-held, battery-powered microcomputer.
SOFTWARE
Software, computer programs; instructions that cause the hardware—the machines—to do work. Software as a whole can be divided into a number of categories based on the types of work done by programs. The two primary software categories are operating systems (system software), which control the workings of the computer, and application software, which addresses the multitude of tasks for which people use computers. System software thus handles such essential, but often invisible, chores as maintaining disk files and managing the screen, whereas application software performs word processing, database management, and the like. Two additional categories that are neither system nor application software, although they contain elements of both, are network software, which enables groups of computers to communicate, and language software, which provides programmers with the tools they need to write programs. See also Operating System; Programming Language.
In addition to these task-based categories, several types of software are described based on their method of distribution. These include the so-called canned programs or packaged software developed and sold primarily through retail outlets; freeware and public-domain software, which is made available without cost by its developer; shareware, which is similar to freeware but usually carries a small fee for those who like the program; and the infamous vaporware, which is software that either does not reach the market or appears much later than promised.
Programming Language
I | | INTRODUCTION |

Application of Programming Languages
Programming languages allow people to communicate with computers. Once a job has been identified, the programmer must translate, or code, it into a list of instructions that the computer will understand. A computer program for a given task may be written in several different languages. Depending on the task, a programmer will generally pick the language that will involve the least complicated program. It may also be important to the programmer to pick a language that is flexible and widely compatible if the program will have a range of applications. The examples shown here are programs written to average a list of numbers. Both C and BASIC are commonly used programming languages. The machine interpretation shows how a computer would process and execute the commands from the programs.
© Microsoft Corporation. All Rights Reserved.
Programming Language, in computer science, artificial language used to write a sequence of instructions (a computer program) that can be run by a computer. Similar to natural languages, such as English, programming languages have a vocabulary, grammar, and syntax. However, natural languages are not suited for programming computers because they are ambiguous, meaning that their vocabulary and grammatical structure may be interpreted in multiple ways. The languages used to program computers must have simple logical structures, and the rules for their grammar, spelling, and punctuation must be precise.
Programming languages vary greatly in their sophistication and in their degree of versatility. Some programming languages are written to address a particular kind of computing problem or for use on a particular model of computer system. For instance, programming languages such as Fortran and COBOL were written to solve certain general types of programming problems—Fortran for scientific applications, and COBOL for business applications. Although these languages were designed to address specific categories of computer problems, they are highly portable, meaning that they may be used to program many types of computers. Other languages, such as machine languages, are designed to be used by one specific model of computer system, or even by one specific computer in certain research applications. The most commonly used programming languages are highly portable and can be used to effectively solve diverse types of computing problems. Languages like C, PASCAL, and BASIC fall into this category.
II | | LANGUAGE TYPES |
Programming languages can be classified as either low-level languages or high-level languages. Low-level programming languages, or machine languages, are the most basic type of programming languages and can be understood directly by a computer. Machine languages differ depending on the manufacturer and model of computer. High-level languages are programming languages that must first be translated into a machine language before they can be understood and processed by a computer. Examples of high-level languages are C, C++, PASCAL, and Fortran. Assembly languages are intermediate languages that are very close to machine language and do not have the level of linguistic sophistication exhibited by other high-level languages, but must still be translated into machine language.
A | | Machine Languages |
In machine languages, instructions are written as sequences of 1s and 0s, called bits, that a computer can understand directly. An instruction in machine language generally tells the computer four things: (1) where to find one or two numbers or simple pieces of data in the main computer memory (Random Access Memory, or RAM), (2) a simple operation to perform, such as adding the two numbers together, (3) where in the main memory to put the result of this simple operation, and (4) where to find the next instruction to perform. While all executable programs are eventually read by the computer in machine language, they are not all programmed in machine language. It is extremely difficult to program directly in machine language because the instructions are sequences of 1s and 0s. A typical instruction in a machine language might read 10010 1100 1011 and mean add the contents of storage register A to the contents of storage register B.
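To illustrate the several pieces of information packed into one machine instruction, the C++ sketch below decodes a made-up 12-bit instruction word into an operation, a source register, and a destination register. The bit layout and opcode values are invented for the example; as noted above, real machine languages differ by manufacturer and model.

// Sketch of decoding a made-up 12-bit machine instruction of the form
//   [4-bit opcode][4-bit source register][4-bit destination register]
// The format and the opcode values are invented purely for illustration.
#include <cstdio>

int main() {
    unsigned int instruction = 0b100100100101;   // an example instruction word

    unsigned int opcode = (instruction >> 8) & 0xF;  // which operation to perform
    unsigned int src    = (instruction >> 4) & 0xF;  // where to find the data
    unsigned int dest   = instruction & 0xF;         // where to put the result

    if (opcode == 0b1001) {
        std::printf("ADD: register %u += register %u\n", dest, src);
    } else {
        std::printf("Unknown opcode %u\n", opcode);
    }
    return 0;
}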
B | | High-Level Languages |
High-level languages are relatively sophisticated sets of statements utilizing words and syntax from human language. They are more similar to normal human languages than assembly or machine languages and are therefore easier to use for writing complicated programs. These programming languages allow larger and more complicated programs to be developed faster. However, high-level languages must be translated into machine language by another program called a compiler before a computer can understand them. For this reason, programs written in a high-level language may take longer to execute and use up more memory than programs written in an assembly language.
C | | Assembly Language |
Computer programmers use assembly languages to make machine-language programs easier to write. In an assembly language, each statement corresponds roughly to one machine language instruction. An assembly language statement is composed with the aid of easy to remember commands. The command to add the contents of the storage register A to the contents of storage register B might be written ADD B,A in a typical assembly language statement. Assembly languages share certain features with machine languages. For instance, it is possible to manipulate specific bits in both assembly and machine languages. Programmers use assembly languages when it is important to minimize the time it takes to run a program, because the translation from assembly language to machine language is relatively simple. Assembly languages are also used when some part of the computer has to be controlled directly, such as individual dots on a monitor or the flow of individual characters to a printer.
III | | CLASSIFICATION OF HIGH-LEVEL LANGUAGES |
High-level languages are commonly classified as procedure-oriented, functional, object-oriented, or logic languages. The most common high-level languages today are procedure-oriented languages. In these languages, one or more related blocks of statements that perform some complete function are grouped together into a program module, or procedure, and given a name such as “procedure A.” If the same sequence of operations is needed elsewhere in the program, a simple statement can be used to refer back to the procedure. In essence, a procedure is just a mini-program. A large program can be constructed by grouping together procedures that perform different tasks. Procedural languages allow programs to be shorter and easier for the computer to read, but they require the programmer to design each procedure to be general enough to be used in different situations.
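As a minimal illustration of this idea, the C++ sketch below writes the kind of averaging task mentioned earlier as a single procedure and then calls it from two different places. It is a generic example, not the program shown in the figure above.

// Sketch of procedure-oriented programming: the averaging computation is
// written once as a procedure ("mini-program") and reused for two different
// lists of numbers.
#include <cstdio>
#include <vector>

// A procedure that performs one complete task: averaging a list of numbers.
double average(const std::vector<double>& values) {
    if (values.empty()) return 0.0;
    double sum = 0.0;
    for (double v : values) sum += v;
    return sum / values.size();
}

int main() {
    std::vector<double> exam_scores = {88.0, 92.5, 79.0};
    std::vector<double> temperatures = {21.4, 19.8, 23.1, 22.0};

    // The same procedure is referred to from two different places.
    std::printf("Average score:       %.2f\n", average(exam_scores));
    std::printf("Average temperature: %.2f\n", average(temperatures));
    return 0;
}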
Functional languages treat procedures like mathematical functions and allow them to be processed like any other data in a program. This allows a much higher and more rigorous level of program construction. Functional languages also allow variables—symbols for data that can be specified and changed by the user as the program is running—to be given values only once. This simplifies programming by reducing the need to be concerned with the exact order of statement execution, since a variable does not have to be redeclared, or restated, each time it is used in a program statement. Many of the ideas from functional languages have become key parts of many modern procedural languages.
Object-oriented languages are outgrowths of functional languages. In object-oriented languages, the code used to write the program and the data processed by the program are grouped together into units called objects. Objects are further grouped into classes, which define the attributes objects must have. A simple example of a class is the class Book. Objects within this class might be Novel and Short Story. Objects also have certain functions associated with them, called methods. The computer accesses an object through the use of one of the object’s methods. The method performs some action to the data in the object and returns this value to the computer. Classes of objects can also be further grouped into hierarchies, in which objects of one class can inherit methods from another class. The structure provided in object-oriented languages makes them very useful for complicated programming tasks.
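The Book example in this paragraph can be sketched directly in an object-oriented language such as C++: a class defines attributes and methods, and a subclass inherits them. The particular attributes and methods below are invented for illustration.

// Sketch of the object-oriented ideas described above, using the Book example:
// a class defines attributes and methods, objects are instances of the class,
// and a subclass (Novel) inherits methods from its parent class.
#include <cstdio>
#include <string>

class Book {
public:
    Book(std::string title, int pages) : title_(title), pages_(pages) {}
    // A method: the computer accesses the object's data through it.
    virtual std::string describe() const {
        return title_ + " (" + std::to_string(pages_) + " pages)";
    }
    virtual ~Book() = default;
protected:
    std::string title_;   // attributes shared by every object of the class
    int pages_;
};

// Novel inherits the attributes and methods of Book and specializes one method.
class Novel : public Book {
public:
    Novel(std::string title, int pages) : Book(title, pages) {}
    std::string describe() const override {
        return "Novel: " + Book::describe();
    }
};

int main() {
    Novel novel("An Example Novel", 300);   // illustrative object
    std::printf("%s\n", novel.describe().c_str());
    return 0;
}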
Logic languages use logic as their mathematical base. A logic program consists of sets of facts and if-then rules, which specify how one set of facts may be deduced from others, for example:
If the statement X is true, then the statement Y is false.
In the execution of such a program, an input statement can be logically deduced from other statements in the program. Many artificial intelligence programs are written in such languages.
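The fact-and-rule style can be imitated, very loosely, in a procedural language. The C++ sketch below stores one fact and applies the single if-then rule quoted above to deduce another; it only illustrates the idea and is not an actual logic-programming system such as PROLOG.

// Tiny illustration of the fact-and-rule idea behind logic languages:
// a rule derives a new fact ("Y is false") whenever the fact "X is true"
// is present. Real logic languages perform this kind of deduction
// automatically; this sketch imitates a single rule by hand.
#include <cstdio>
#include <map>
#include <string>

int main() {
    std::map<std::string, bool> facts = {{"X", true}};   // the known facts

    // Rule: if the statement X is true, then the statement Y is false.
    if (facts.count("X") && facts["X"]) {
        facts["Y"] = false;    // the deduced fact
    }

    for (const auto& entry : facts)
        std::printf("%s is %s\n", entry.first.c_str(),
                    entry.second ? "true" : "false");
    return 0;
}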
IV | | LANGUAGE STRUCTURE AND COMPONENTS |
Programming languages use specific types of statements, or instructions, to provide functional structure to the program. A statement in a program is a basic sentence that expresses a simple idea—its purpose is to give the computer a basic instruction. Statements define the types of data allowed, how data are to be manipulated, and the ways that procedures and functions work. Programmers use statements to manipulate common components of programming languages, such as variables and macros (mini-programs within a program).
Statements known as data declarations give names and properties to elements of a program called variables. Variables can be assigned different values within the program. The properties variables can have are called types, and they include such things as what possible values might be saved in the variables, how much numerical accuracy is to be used in the values, and how one variable may represent a collection of simpler values in an organized fashion, such as a table or array. In many programming languages, a key data type is a pointer. Variables that are pointers do not themselves have values; instead, they have information that the computer can use to locate some other variable—that is, they point to another variable.
An expression is a piece of a statement that describes a series of computations to be performed on some of the program’s variables, such as X + Y/Z, in which the variables are X, Y, and Z and the computations are addition and division. An assignment statement assigns a variable a value derived from some expression, while conditional statements specify expressions to be tested and then used to select which other statements should be executed next.
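These components (data declarations, a pointer, an expression such as X + Y/Z, an assignment statement, and a conditional statement) can all be seen together in a few lines of C++, shown here purely as a generic illustration.

// A few lines illustrating the components described above: data declarations,
// a pointer, an expression (X + Y / Z), an assignment statement, and a
// conditional statement.
#include <cstdio>

int main() {
    double X = 4.0, Y = 6.0, Z = 3.0;   // data declarations with initial values
    double* p = &X;                     // a pointer: it locates X rather than holding a value itself

    double result = X + Y / Z;          // assignment of the expression's value

    if (result > 5.0) {                 // conditional: selects which statement runs next
        std::printf("result = %.1f (greater than 5)\n", result);
    } else {
        std::printf("result = %.1f (not greater than 5)\n", result);
    }

    *p = 10.0;                          // changing X through the pointer
    std::printf("X is now %.1f\n", X);
    return 0;
}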
Procedure and function statements define certain blocks of code as procedures or functions that can then be returned to later in the program. These statements also define the kinds of variables and parameters the programmer can choose and the type of value that the code will return when an expression accesses the procedure or function. Many programming languages also permit minitranslation programs called macros. Macros translate segments of code that have been written in a language structure defined by the programmer into statements that the programming language understands.
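A function definition and a simple preprocessor macro in C++ illustrate the last two ideas. The C/C++ preprocessor is only one concrete form of the general macro mechanism described above, and the names used are chosen for illustration.

// A function statement (declaring parameter types and the return type) and a
// preprocessor macro, which is expanded into ordinary code before translation.
#include <cstdio>

#define SQUARE(x) ((x) * (x))          // macro: textually expanded where used

// Function statement: parameter type and return type are declared here.
double circle_area(double radius) {
    const double pi = 3.14159265358979;
    return pi * SQUARE(radius);        // expands to pi * ((radius) * (radius))
}

int main() {
    std::printf("Area of a circle with radius 2: %.2f\n", circle_area(2.0));
    return 0;
}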
V | | HISTORY |
Programming languages date back almost to the invention of the digital computer in the 1940s. The first assembly languages emerged in the early 1950s with the introduction of commercial computers. The first procedural languages were developed in the late 1950s to early 1960s: Fortran (FORmula TRANslation), created by John Backus, and then COBOL (COmmon Business Oriented Language), developed by a committee drawing on the work of Grace Hopper. The first functional language was LISP (LISt Processing), written by John McCarthy in the late 1950s. Although heavily updated, all three languages are still widely used today.
In the late 1960s, the first object-oriented languages, such as SIMULA, emerged. Logic languages became well known in the mid-1970s with the introduction of PROLOG, a language used to program artificial intelligence software. During the 1960s and 1970s, procedural languages continued to develop with ALGOL, BASIC, PASCAL, C, and Ada. SMALLTALK was a highly influential object-oriented language that led to the merging of object-oriented and procedural languages in C++ and more recently in JAVA. Although pure logic languages have declined in popularity, variations have become vitally important in the form of relational languages for modern databases, such as SQL (Structured Query Language).
Computer Security
I | | INTRODUCTION |
Computer Security, techniques developed to safeguard information and information systems stored on computers. Potential threats include the destruction of computer hardware and software and the loss, modification, theft, unauthorized use, observation, or disclosure of computer data.
Computers and the information they contain are often considered confidential systems because their use is typically restricted to a limited number of users. This confidentiality can be compromised in a variety of ways. For example, computers and computer data can be harmed by people who spread computer viruses and worms. A computer virus is a set of computer program instructions that attaches itself to programs in other computers. The viruses are often parts of documents that are transmitted as attachments to e-mail messages. A worm is similar to a virus but is a self-contained program that transports itself from one computer to another through networks. Thousands of viruses and worms exist and can quickly contaminate millions of computers.
People who intentionally create viruses are computer experts often known as hackers. Hackers also violate confidentiality by observing computer monitor screens and by impersonating authorized users of computers in order to gain access to the users’ computers. They invade computer databases to steal the identities of other people by obtaining private, identifying information about them. Hackers also engage in software piracy and deface Web sites on the Internet. For example, they may insert malicious or unwanted messages on a Web site, or alter graphics on the site. They gain access to Web sites by impersonating Web site managers.
Malicious hackers are increasingly developing powerful software crime tools such as automatic computer virus generators, Internet eavesdropping sniffers, password guessers, vulnerability testers, and computer service saturators. For example, an Internet eavesdropping sniffer intercepts Internet messages sent to other computers. A password guesser tries millions of combinations of characters in an effort to guess a computer’s password. Vulnerability testers look for software weaknesses. These crime tools are also valuable security tools used for testing the security of computers and networks.
An increasingly common hacker tool that has gained widespread public attention is the computer service saturator, used in denial-of-service attacks, which can shut down a selected or targeted computer on the Internet by bombarding the computer with more requests than it can handle. This tool first searches for vulnerable computers on the Internet where it can install its own software program. Once installed, the compromised computers act like “zombies” sending usage requests to the target computer. If thousands of computers become infected with the software, then all would be sending usage requests to the target computer, overwhelming its ability to handle the requests for service.
A variety of simple techniques can help prevent computer crimes, such as protecting computer screens from observation, keeping printed information and computers in locked facilities, backing up copies of data files and software, and clearing desktops of sensitive information and materials. Increasingly, however, more sophisticated methods are needed to prevent computer crimes. These include using encryption techniques, establishing software usage permissions, mandating passwords, and installing firewalls and intrusion detection systems. In addition, controls within application systems and disaster recovery plans are also necessary.
II. BACKUP
Storing backup copies of software and data and having backup computer and communication capabilities are important basic safeguards because the data can then be restored if it is altered or destroyed by a computer crime or accident. Computer data should be backed up frequently and should be stored nearby in secure locations in case of damage at the primary site. Transporting sensitive data to storage locations should also be done securely.
III. ENCRYPTION
Another technique to protect confidential information is encryption. Computer users can scramble information to prevent unauthorized users from accessing it. Authorized users can unscramble the information when needed by using a secret code called a key; without the key, the scrambled information would be impossible or very difficult to unscramble. A more complex form of encryption uses two mathematically related keys, called the public key and the private key. Each participant possesses a secret private key and a public key that is known to potential correspondents. A message encrypted with one key of the pair can be decrypted only with the other: a sender encrypts a message with the recipient's public key so that only the recipient's private key can unscramble it, and a sender can also encrypt a message with his or her own private key so that the matching public key verifies who transmitted it. The advantage over the single-key method lies with the private keys, which are never shared and so cannot be intercepted. The keys are also changed periodically, further hampering unauthorized unscrambling and making the encrypted information more difficult to decipher.
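The two-key idea can be illustrated with a toy numerical example. The following sketch, written in Python, uses deliberately tiny prime numbers and is purely illustrative; real public-key systems use numbers hundreds of digits long:

# Toy illustration of two-key (RSA-style) encryption with tiny, insecure numbers.
# Requires Python 3.8 or later for the modular inverse via pow().
p, q = 61, 53                      # two small primes, kept secret
n = p * q                          # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                             # public exponent (public key is (e, n))
d = pow(e, -1, phi)                # private exponent (private key is (d, n))

message = 65                       # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)  # only the private key unscrambles it
print(ciphertext, recovered)       # recovered equals the original 65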
IV. APPROVED USERS
Another technique to help prevent abuse and misuse of computer data is to limit the use of computers and data files to approved persons. Security software can verify the identity of computer users and limit their privileges to use, view, and alter files. The software also securely records their actions to establish accountability. Military organizations give access rights to classified, confidential, secret, or top-secret information according to the corresponding security clearance level of the user. Other types of organizations also classify information and specify different degrees of protection.
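As a simplified, hypothetical illustration of such access controls (the clearance levels and users below are invented for the example), security software might compare a user's clearance with a file's classification before granting access:

# Minimal sketch of clearance-based access control; names and levels are illustrative.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def may_read(user_clearance, file_classification):
    # Allow reading only if the user is cleared at or above the file's level.
    return LEVELS[user_clearance] >= LEVELS[file_classification]

print(may_read("secret", "confidential"))   # True: clearance is high enough
print(may_read("confidential", "secret"))   # False: access would be denied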
V. PASSWORDS

Smart Card
Smart cards are becoming increasingly common as security devices for accessing computer networks and corporate buildings. In addition to an identifying photograph, a smart card contains an embedded microchip that stores data about the user, including a password that changes periodically. This information is read by a device attached to a computer and ensures that only authorized persons can access a corporation's internal computer network.
Passwords are confidential sequences of characters that allow approved persons to make use of specified computers, software, or information. To be effective, passwords must be difficult to guess and should not be found in dictionaries. Effective passwords contain a variety of characters and symbols that are not part of the alphabet. To thwart imposters, computer systems usually limit the number of attempts allowed and restrict the time allowed to enter the correct password.
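In practice, systems do not store passwords directly; they store a salted, deliberately slow hash of each password and recompute it at login. A minimal sketch using Python's standard library (the sample password is, of course, illustrative):

import hashlib, hmac, os

def hash_password(password, salt=None):
    # Derive a slow, salted digest (PBKDF2-HMAC-SHA256) from the password.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Recompute the digest and compare it in constant time.
    return hmac.compare_digest(hash_password(password, salt)[1], stored_digest)

salt, stored = hash_password("horse-battery-staple-42")
print(verify_password("horse-battery-staple-42", salt, stored))  # True
print(verify_password("letmein", salt, stored))                  # False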
A more secure method is to require possession and use of tamper-resistant plastic cards with microprocessor chips, known as “smart cards,” which contain a stored password that automatically changes after each use. When a user logs on, the computer reads the card's password, as well as another password entered by the user, and matches these two respectively to an identical card password generated by the computer and the user's password stored in the computer in encrypted form. Use of passwords and 'smart cards' is beginning to be reinforced by biometrics, identification methods that use unique personal characteristics, such as fingerprints, retinal patterns, facial characteristics, or voice recordings.
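One widely used way to generate a password that changes after each use is a counter-based one-time code of the kind standardized as HOTP (RFC 4226): the card and the computer share a secret and a counter, and each derives the same short code independently. The sketch below illustrates the general idea rather than any particular smart card product:

import hashlib, hmac, struct

def one_time_code(secret, counter, digits=6):
    # HMAC the counter with the shared secret, then truncate to a short code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

secret = b"secret-shared-by-card-and-server"   # illustrative shared secret
print(one_time_code(secret, 0))   # both sides compute the same code for use 0
print(one_time_code(secret, 1))   # the code is different for the next use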
VI. FIREWALLS
Computers connected to communication networks, such as the Internet, are particularly vulnerable to electronic attack because so many people have access to them. These computers can be protected by using firewall computers or software placed between the networked computers and the network. The firewall examines, filters, and reports on all information passing through the network to ensure its appropriateness. These functions help prevent saturation of input capabilities that otherwise might deny usage to legitimate users, and they ensure that information received from an outside source is expected and does not contain computer viruses.
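The filtering step can be pictured as a rule table applied to each incoming connection. The sketch below is a simplified illustration with invented addresses and rules, not a description of any particular firewall product:

import ipaddress

# Invented rules: allow Web traffic from anywhere, allow anything from the
# internal network, and drop everything else (default deny).
RULES = [
    {"source": "0.0.0.0/0",      "port": 443,  "action": "allow"},
    {"source": "192.168.0.0/16", "port": None, "action": "allow"},
]

def decide(source_ip, destination_port):
    for rule in RULES:
        in_network = ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["source"])
        port_matches = rule["port"] is None or rule["port"] == destination_port
        if in_network and port_matches:
            return rule["action"]
    return "drop"

print(decide("203.0.113.9", 443))   # allow: ordinary Web traffic
print(decide("203.0.113.9", 23))    # drop: unsolicited connection attempt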
VII. INTRUSION DETECTION SYSTEMS
Security software called intrusion detection systems may be used in computers to detect unusual and suspicious activity and, in some cases, stop a variety of harmful actions by authorized or unauthorized persons. Abuse and misuse of sensitive system and application programs and data such as password, inventory, financial, engineering, and personnel files can be detected by these systems.
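One simple pattern such systems watch for is a burst of failed logins against the same account. A hypothetical sketch, with the threshold and time window invented for the example:

from collections import defaultdict

FAILURE_LIMIT = 5        # invented threshold
WINDOW_SECONDS = 60      # invented time window

def detect_login_bursts(events):
    # events: iterable of (timestamp_in_seconds, username, succeeded) tuples
    recent_failures = defaultdict(list)
    alerts = []
    for timestamp, user, succeeded in sorted(events):
        if succeeded:
            continue
        window = [t for t in recent_failures[user] if timestamp - t < WINDOW_SECONDS]
        window.append(timestamp)
        recent_failures[user] = window
        if len(window) >= FAILURE_LIMIT:
            alerts.append((user, timestamp))
    return alerts

sample = [(i, "alice", False) for i in range(6)] + [(10, "bob", True)]
print(detect_login_bursts(sample))   # flags 'alice' after repeated failures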
VIII. APPLICATION SAFEGUARDS
The most serious threats to the integrity and authenticity of computer information come from those who have been entrusted with usage privileges and yet commit computer fraud. For example, authorized persons may secretly transfer money in financial networks, alter credit histories, sabotage information, or commit bill payment or payroll fraud. Modifying, removing, or misrepresenting existing data threatens the integrity and authenticity of computer information. For example, omitting sections of a bad credit history so that only the good credit history remains violates the integrity of the document. Entering false data to complete a fraudulent transfer or withdrawal of money violates the authenticity of banking information. These crimes can be prevented by using a variety of techniques. One such technique is checksumming. Checksumming sums the numerically coded word contents of a file before and after it is used. If the sums are different, then the file has been altered. Other techniques include authenticating the sources of messages, confirming transactions with those who initiate them, segregating and limiting job assignments to make it necessary for more than one person to be involved in committing a crime, and limiting the amount of money that can be transferred through a computer.
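Checksumming can be implemented with a cryptographic hash function: record a file's digest at a trusted moment and recompute it later, and any change to the file changes the digest. A minimal sketch (the file name is hypothetical):

import hashlib

def checksum(path):
    # Hash the file in chunks so large files need not fit in memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

baseline = checksum("payroll.dat")          # recorded while the file is trusted
# ... the file is used ...
if checksum("payroll.dat") != baseline:
    print("payroll.dat has been altered")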
IX. DISASTER RECOVERY PLANS
Organizations and businesses that rely on computers need to institute disaster recovery plans that are periodically tested and upgraded. This is because computers and storage components such as diskettes or hard disks are easy to damage. A computer's memory can be erased, and flooding, fire, or other forms of destruction can damage the computer's hardware. Computers, computer data, and components should be installed in safe and locked facilities.
Virus (computer)
I. INTRODUCTION
Virus (computer), a self-duplicating computer program that spreads from computer to computer, interfering with data and software. Just as biological viruses infect people, spreading from person to person, computer viruses infect personal computers (PCs) and servers, the computers that control access to a network of computers. Some viruses are mere annoyances, but others can do serious damage. Viruses can delete or change files, steal important information, load and run unwanted applications, send documents via electronic mail (e-mail), or even cripple a machine’s operating system (OS), the basic software that runs the computer.
II. HOW INFECTIONS OCCUR
A virus can infect a computer in a number of ways. It can arrive on a floppy disk or inside an e-mail message. It can piggyback on files downloaded from the World Wide Web or from an Internet service used to share music and movies. Or it can exploit flaws in the way computers exchange data over a network. So-called blended-threat viruses spread via multiple methods at the same time. Some blended-threat viruses, for instance, spread via e-mail but also propagate by exploiting flaws in an operating system.
Traditionally, even if a virus found its way onto a computer, it could not actually infect the machine—or propagate to other machines—unless the user was somehow fooled into executing the virus by opening it and running it just as one would run a legitimate program. But a new breed of computer virus can infect machines and spread to others entirely on its own. Simply by connecting a computer to a network, the computer owner runs the risk of infection. Because the Internet connects computers around the world, viruses can spread from one end of the globe to the other in a matter of minutes.
III. TYPES OF VIRUSES
There are many categories of viruses, including parasitic or file viruses, bootstrap-sector, multipartite, macro, and script viruses. Then there are so-called computer worms, which have become particularly prevalent. A computer worm is a type of virus. However, instead of infecting files or operating systems, a worm replicates from computer to computer by spreading entire copies of itself.
Parasitic or file viruses infect executable files or programs in the computer. These files are often identified by the extension .exe in the name of the computer file. File viruses leave the contents of the host program unchanged but attach to the host in such a way that the virus code is run first. These viruses can be either direct-action or resident. A direct-action virus selects one or more programs to infect each time it is executed. A resident virus hides in the computer's memory and infects a particular program when that program is executed.
Bootstrap-sector viruses reside on the first portion of the hard disk or floppy disk, known as the boot sector. These viruses replace either the programs that store information about the disk's contents or the programs that start the computer. Typically, these viruses spread by means of the physical exchange of floppy disks.
Multipartite viruses combine the abilities of the parasitic and the bootstrap-sector viruses, and so are able to infect either files or boot sectors. These types of viruses can spread if a computer user boots from an infected diskette or accesses infected files.
Other viruses infect programs that contain powerful macro languages (programming languages that let the user create new features and utilities). These viruses, called macro viruses, are written in macro languages and automatically execute when the legitimate program is opened.
Script viruses are written in script programming languages, such as VBScript (Visual Basic Script) and JavaScript. These script languages can be seen as a special kind of macro language and are even more powerful because most are closely related to the operating system environment. The 'ILOVEYOU' virus, which appeared in 2000 and infected an estimated 1 in 5 personal computers, is a famous example of a script virus.
Strictly speaking, a computer virus is always a program that attaches itself to some other program. But computer virus has become a blanket term that also refers to computer worms. A worm operates entirely on its own, without ever attaching itself to another program. Typically, a worm spreads over e-mail and through other ways that computers exchange information over a network. In this way, a worm not only wreaks havoc on machines, but also clogs network connections and slows network traffic, so that it takes an excessively long time to load a Web page or send an e-mail.
IV. ANTI-VIRAL TACTICS
A. Preparation and Prevention
Computer users can prepare for a viral infection by creating backups of legitimate original software and data files regularly so that the computer system can be restored if necessary. Viral infection can be prevented by obtaining software from legitimate sources or by using a quarantined computer—that is, a computer not connected to any network—to test new software. In addition, users should regularly install operating system (OS) patches, software updates that mend the flaws, or holes, in the OS that viruses often exploit. Patches can be downloaded from the Web site of the operating system’s developer. However, the best prevention may be the installation of current and well-designed antiviral software. Such software can prevent a viral infection and thereby help stop its spread.
B. Virus Detection
Several types of antiviral software can be used to detect the presence of a virus. Scanning software can recognize the characteristics of a virus's computer code and look for these characteristics in the computer's files. Because new viruses must be analyzed as they appear, scanning software must be updated periodically to be effective. Other scanners search for common features of viral programs and are usually less reliable. Most antiviral software uses both on-demand and on-access scanners. On-demand scanners are launched only when the user activates them. On-access scanners, by contrast, constantly monitor the computer for viruses, running in the background where they are not visible to the user. On-access scanners are seen as the proactive part of an antivirus package, while on-demand scanners are considered reactive because they usually detect a virus only after an infection has occurred.
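The core of an on-demand scanner, matching known signatures against file contents, can be sketched in a few lines. The byte patterns below are invented placeholders, not real virus signatures:

import os

# Invented placeholder signatures; real scanners ship databases of many
# thousands of byte patterns and update them continually.
SIGNATURES = {
    "Example.Virus.A": b"\xde\xad\xbe\xef\x13\x37",
    "Example.Worm.B":  b"PLACEHOLDER-PAYLOAD-MARKER",
}

def scan_file(path):
    with open(path, "rb") as f:
        contents = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in contents]

def scan_tree(root):
    # Walk a directory on demand and report any file matching a signature.
    for folder, _, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(folder, filename)
            matches = scan_file(path)
            if matches:
                print(path, "->", ", ".join(matches))

scan_tree(".")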
Antivirus software is usually sold as a package containing several different programs that are independent of one another and perform different functions. Installed together, these programs provide broad protection against viruses. Within most antiviral packages, several methods are used to detect viruses. Checksumming, for example, uses mathematical calculations to compare the state of executable programs before and after they are run. If the checksum has not changed, then the program has presumably not been infected. Checksumming software can detect an infection only after it has occurred, however. Because this technology is dated and some viruses can evade it, checksumming is rarely used today.
Most antivirus packages also use heuristics (problem-solving by trial and error) to detect new viruses. This technology observes a program’s behavior and evaluates how closely it resembles a virus. It relies on experience with previous viruses to predict the likelihood that a suspicious file is an as-yet unidentified or unclassified new virus.
Other types of antiviral software include monitoring software and integrity-shell software. Monitoring software is different from scanning software. It detects illegal or potentially damaging viral activities such as overwriting computer files or reformatting the computer's hard drive. Integrity-shell software establishes layers through which any command to run a program must pass. Checksumming is performed automatically within the integrity shell, and infected programs, if detected, are not allowed to run.
C. Containment and Recovery
Once a viral infection has been detected, it can be contained by immediately isolating computers on networks, halting the exchange of files, and using only write-protected disks. In order for a computer system to recover from a viral infection, the virus must first be eliminated. Some antivirus software attempts to remove detected viruses, but sometimes with unsatisfactory results. More reliable results are obtained by turning off the infected computer; restarting it from a write-protected floppy disk; deleting infected files and replacing them with legitimate files from backup disks; and erasing any viruses on the boot sector.
V. VIRAL STRATEGIES
The authors of viruses have several strategies to circumvent antivirus software and to propagate their creations more effectively. So-called polymorphic viruses make variations in the copies of themselves to elude detection by scanning software. A stealth virus hides from the operating system when the system checks the location where the virus resides, by forging results that would be expected from an uninfected system. A so-called fast-infector virus infects not only programs that are executed but also those that are merely accessed. As a result, running antiviral scanning software on a computer infected by such a virus can infect every program on the computer. A so-called slow-infector virus infects files only when the files are modified, so that it appears to checksumming software that the modification was legitimate. A so-called sparse-infector virus infects only on certain occasions—for example, it may infect every tenth program executed. This strategy makes it more difficult to detect the virus.
By using combinations of several virus-writing methods, virus authors can create more complex new viruses. Many virus authors also tend to adopt new technologies as they appear. The antivirus industry must move rapidly to update its antiviral software and contain outbreaks of such new viruses.
VI. VIRUS-LIKE COMPUTER PROGRAMS
There are other harmful computer programs that can be part of a virus but are not considered viruses because they do not have the ability to replicate. These programs fall into three categories: Trojan horses, logic bombs, and deliberately harmful or malicious programs that run within a Web browser, an application program such as Internet Explorer or Netscape that displays Web sites.
A Trojan horse is a program that pretends to be something else. A Trojan horse may appear to be something interesting and harmless, such as a game, but when it runs it may have harmful effects. The term comes from the ancient Greek legend of the wooden horse used to smuggle Greek soldiers inside the walls of Troy.
A logic bomb infects a computer’s memory, but unlike a virus, it does not replicate itself. A logic bomb delivers its instructions when it is triggered by a specific condition, such as when a particular date or time is reached or when a combination of letters is typed on a keyboard. A logic bomb has the ability to erase a hard drive or delete certain files.
Malicious software programs that run within a Web browser often appear in Java applets and ActiveX controls. Although these applets and controls improve the usefulness of Web sites, they also increase a vandal’s ability to interfere with unprotected systems. Because those controls and applets require that certain components be downloaded to a user’s personal computer (PC), activating an applet or control might actually download malicious code.
A. History
In 1949 Hungarian American mathematician John von Neumann, at the Institute for Advanced Study in Princeton, New Jersey, proposed that it was theoretically possible for a computer program to replicate. This theory was tested in the 1950s at Bell Laboratories when a game called Core Wars was developed, in which players created tiny computer programs that attacked, erased, and tried to propagate on an opponent's system.
In 1983 American electrical engineer Fred Cohen, at the time a graduate student, coined the term virus to describe a self-replicating computer program. In 1985 the first Trojan horses appeared, posing as a graphics-enhancing program called EGABTR and as a game called NUKE-LA. A host of increasingly complex viruses followed.
The so-called Brain virus appeared in 1986 and spread worldwide by 1987. In 1988 two new threats appeared: Stone, a bootstrap-sector virus, and the Internet worm, which crossed the United States overnight via computer network. The Dark Avenger virus, the first fast infector, appeared in 1989, followed by the first polymorphic virus in 1990.
Computer viruses grew more sophisticated in the 1990s. In 1995 the first macro language virus, WinWord Concept, was created. In 1999 the Melissa macro virus, spread by e-mail, disabled e-mail servers around the world for several hours, and in some cases several days. Regarded by some as the most prolific virus ever, Melissa cost corporations millions of dollars due to computer downtime and lost productivity.
The VBS_LOVELETTER script virus, also known as the Love Bug and the ILOVEYOU virus, unseated Melissa as the world's most prevalent and costly virus when it struck in May 2000. By the time the outbreak was finally brought under control, losses were estimated at U.S.$10 billion, and the Love Bug is said to have infected 1 in every 5 PCs worldwide.
The year 2003 was a particularly bad year for computer viruses and worms. First, the Blaster worm infected more than 10 million machines worldwide by exploiting a flaw in Microsoft’s Windows operating system. A machine that lacked the appropriate patch could be infected simply by connecting to the Internet. Then, the SoBig worm infected millions more machines in an attempt to convert systems into networking relays capable of sending massive amounts of junk e-mail known as spam. SoBig spread via e-mail, and before the outbreak was 24 hours old, MessageLabs, a popular e-mail filtering company, captured more than a million SoBig messages and called it the fastest-spreading virus in history. In January 2004, however, the MyDoom virus set a new record, spreading even faster than SoBig, and, by most accounts, causing even more damage.
E-Mail
I. INTRODUCTION
E-Mail, in computer science, abbreviation of the term electronic mail, method of transmitting data, text files, digital photos, or audio and video files from one computer to another over an intranet or the Internet. E-mail enables computer users to send messages and data quickly through a local area network or beyond through the Internet. E-mail came into widespread use in the 1990s and has become a major development in business and personal communications.
II. HOW E-MAIL WORKS
E-mail users create and send messages from individual computers using commercial e-mail programs or mail-user agents (MUAs). Most of these programs have a text editor for composing messages. The user sends a message to one or more recipients by specifying destination addresses. When a user sends an e-mail message to several recipients at once, it is sometimes called broadcasting.
The address of an e-mail message includes the source and destination of the message. Different addressing conventions are used depending upon the e-mail destination. An interoffice message distributed over an intranet, or internal computer network, may have a simple scheme, such as the employee’s name, for the e-mail address. E-mail messages sent outside of an intranet are addressed according to the following convention: the user’s name comes first, followed by the symbol @ and then the domain name, which identifies the institution or organization and ends with a suffix indicating the type of organization or the country.
A typical e-mail address might be sally@abc.com. In this example sally is the user’s name; abc is the domain name—the specific company, organization, or institution that the e-mail message is sent to or from; and the suffix com indicates the type of organization that abc belongs to—com for commercial, org for organization, edu for educational, mil for military, and gov for governmental. An e-mail message that originates outside the United States or is sent from the United States to other countries has a supplementary suffix that indicates the country of origin or destination. Examples include uk for the United Kingdom, fr for France, and au for Australia.
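Splitting such an address into its parts follows directly from this convention, as the brief Python illustration below shows:

address = "sally@abc.com"
user, _, domain = address.partition("@")   # 'sally' and 'abc.com'
suffix = domain.rsplit(".", 1)[-1]         # 'com', the organization type
print(user, domain, suffix)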
E-mail data travels from the sender’s computer to a network tool called a message transfer agent (MTA) that, depending on the address, either delivers the message within that network of computers or sends it to another MTA for distribution over the Internet (see Network). The data file is eventually delivered to the private mailbox of the recipient, who retrieves and reads it using an e-mail program or MUA. The recipient may delete the message, store it, reply to it, or forward it to others.
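Handing a finished message to a mail transfer agent is exactly what a mail-user agent does behind the scenes. The sketch below uses Python's standard smtplib; the server name and addresses are placeholders:

import smtplib
from email.message import EmailMessage

message = EmailMessage()
message["From"] = "sally@abc.com"            # placeholder sender
message["To"] = "bob@example.org"            # placeholder recipient
message["Subject"] = "Meeting agenda"
message.set_content("The agenda for Friday's meeting is attached separately.")

# "mail.abc.com" stands in for the sender's outgoing mail server (the MTA).
with smtplib.SMTP("mail.abc.com") as server:
    server.send_message(message)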
E-mail messages display technical information called headers and footers above and below the main message body. In part, headers and footers record the sender’s and recipient’s names and e-mail addresses, the times and dates of message transmission and receipt, and the subject of the message.
In addition to the plain text contained in the body of regular e-mail messages, most e-mail programs allow the user to send separate files attached to e-mail transmissions. This enables the user to append large text- or graphics-based files, including audio and video files and digital photographs, to e-mail messages.
III. IMPACT OF E-MAIL
E-mail has had a great impact on the amount of information sent worldwide. It has become an important method of transmitting information previously relayed via regular mail, telephone, courier, fax, television, or radio.
E-mail, however, has also been abused by certain businesses that send unsolicited commercial e-mail messages known as spam. To address this problem, the U.S. Congress in 2003 passed legislation designed to curb spam. The law makes it illegal to send e-mail messages that use deceptive subject lines and false return addresses, providing fines as high as $6 million and possible prison terms for violators. Senders of pornographic or adult-related content must clearly identify such content in the subject line. The law requires all commercial e-mail messages, solicited or unsolicited, to include a valid postal address and an opt-out mechanism within the body of the text so that recipients can prevent future e-mail solicitations.
The federal law supplants all previous state laws and requires some change in behavior on the part of e-mail users. Previously, e-mail users were advised not to respond to spam because such a response would merely verify the validity of the e-mail address. For the new law to work, however, e-mail recipients must notify the sender that their solicitations are no longer wanted. The Federal Trade Commission (FTC) is compiling a database of spam violators. People who receive deceptive e-mail messages or who continue to receive unsolicited messages after notification should forward the solicitations to uce@ftc.gov to lodge a complaint with the FTC.
Internet
I. INTRODUCTION

Electronic Newspaper
In the late 1990s newspapers began offering their content on the Internet in record numbers. By the end of the decade, more than 1,000 North American newspapers offered online versions, most available to Internet users free of charge. Electronic newspapers spared publishers one of their highest expenses—newsprint—and many brought publishers additional advertising revenue. The New York Times on the Web, for example, offers readers the same content as its print publication as well as stories and features available only in its online version.
Internet, computer-based global information system. The Internet is composed of many interconnected computer networks. Each network may link tens, hundreds, or even thousands of computers, enabling them to share information and processing power. The Internet has made it possible for people all over the world to communicate with one another effectively and inexpensively. Unlike traditional broadcasting media, such as radio and television, the Internet does not have a centralized distribution system. Instead, an individual who has Internet access can communicate directly with anyone else on the Internet, post information for general consumption, retrieve information, use distant applications and services, or buy and sell products.
The Internet has brought new opportunities to government, business, and education. Governments use the Internet for internal communication, distribution of information, and automated tax processing. In addition to offering goods and services online to customers, businesses use the Internet to interact with other businesses. Many individuals use the Internet for communicating through electronic mail (e-mail), retrieving news, researching information, shopping, paying bills, banking, listening to music, watching movies, playing games, and even making telephone calls. Educational institutions use the Internet for research and to deliver online courses and course material to students.
Use of the Internet has grown tremendously since its inception. The Internet’s success arises from its flexibility. Instead of restricting component networks to a particular manufacturer or particular type, Internet technology allows interconnection of any kind of computer network. No network is too large or too small, too fast or too slow to be interconnected. Thus, the Internet includes inexpensive networks that can only connect a few computers within a single room as well as expensive networks that can span a continent and connect thousands of computers. See Local Area Network.
Internet service providers (ISPs) provide Internet access to customers, usually for a monthly fee. A customer who subscribes to an ISP’s service uses the ISP’s network to access the Internet. Because ISPs offer their services to the general public, the networks they operate are known as public access networks. In the United States, as in many countries, ISPs are private companies; in countries where telephone service is a government-regulated monopoly, the government often controls ISPs.
An organization that has many computers usually owns and operates a private network, called an intranet, which connects all the computers within the organization. To provide Internet service, the organization connects its intranet to the Internet. Unlike public access networks, intranets are restricted to provide security. Only authorized computers at the organization can connect to the intranet, and the organization restricts communication between the intranet and the global Internet. The restrictions allow computers inside the organization to exchange information but keep the information confidential and protected from outsiders.
The Internet has doubled in size every 9 to 14 months since it began in the late 1970s. In 1981 only 213 computers were connected to the Internet. By 2000 the number had grown to more than 400 million. The current number of people who use the Internet can only be estimated. Some analysts said that the number of users was expected to top 1 billion by the end of 2005.
II. USES OF THE INTERNET

Marketing and the Internet
The Internet enables marketers to promote products and services to millions of potential customers through the World Wide Web.
Before the Internet was created, the U.S. military had developed and deployed communications networks, including a network known as ARPANET. Uses of the networks were restricted to military personnel and the researchers who developed the technology. Many people regard the ARPANET as the precursor of the Internet. From the 1970s until the late 1980s the Internet was a U.S. government-funded communication and research tool restricted almost exclusively to academic and military uses. It was administered by the National Science Foundation (NSF). At universities, only a handful of researchers working on Internet research had access. In the 1980s the NSF developed an “acceptable use policy” that relaxed restrictions and allowed faculty at universities to use the Internet for research and scholarly activities. However, the NSF policy prohibited all commercial uses of the Internet. Under this policy advertising did not appear on the Internet, and people could not charge for access to Internet content or sell products or services on the Internet.
By 1995, however, the NSF ceased its administration of the Internet. The Internet was privatized, and commercial use was permitted. This move coincided with the growth in popularity of the World Wide Web (WWW), which was developed by British physicist and computer scientist Timothy Berners-Lee. The Web replaced file transfer as the application used for most Internet traffic. The difference between the Internet and the Web is similar to the distinction between a highway system and a package delivery service that uses the highways to move cargo from one city to another: The Internet is the highway system over which Web traffic and traffic from other applications move. The Web consists of programs running on many computers that allow a user to find and display multimedia documents (documents that contain a combination of text, photographs, graphics, audio, and video). Many analysts attribute the explosion in use and popularity of the Internet to the visual nature of Web documents. By the end of 2000, Web traffic dominated the Internet—more than 80 percent of all traffic on the Internet came from the Web.
Companies, individuals, and institutions use the Internet in many ways. Companies use the Internet for electronic commerce, also called e-commerce, including advertising, selling, buying, distributing products, and providing customer service. In addition, companies use the Internet for business-to-business transactions, such as exchanging financial information and accessing complex databases. Businesses and institutions use the Internet for voice and video conferencing and other forms of communication that enable people to telecommute (work away from the office using a computer). The use of e-mail speeds communication between companies, among coworkers, and among other individuals. Media and entertainment companies run online news and weather services over the Internet, distribute music and movies, and actually broadcast audio and video, including live radio and television programs. File sharing services let individuals swap music, movies, photos, and applications, provided they do not violate copyright protections. Online chat allows people to carry on discussions using written text. Instant messaging enables people to exchange text messages; share digital photo, video, and audio files; and play games in real time. Scientists and scholars use the Internet to communicate with colleagues, perform research, distribute lecture notes and course materials to students, and publish papers and articles. Individuals use the Internet for communication, entertainment, finding information, and buying and selling goods and services.
III. HOW THE INTERNET WORKS
A. Internet Access
The term Internet access refers to the communication between a residence or a business and an ISP that connects to the Internet. Access falls into three broad categories: dedicated, dial-up, and wireless. With dedicated access, a subscriber’s computer remains directly connected to the Internet at all times through a permanent, physical connection. Most large businesses have high-capacity dedicated connections; small businesses or individuals that desire dedicated access choose technologies such as digital subscriber line (DSL) or cable modems, which both use existing wiring to lower cost. A DSL sends data across the same wires that telephone service uses, and cable modems use the same wiring that cable television uses. In each case, the electronic devices that are used to send data over the wires employ separate frequencies or channels that do not interfere with other signals on the wires. Thus, a DSL Internet connection can send data over a pair of wires at the same time the wires are being used for a telephone call, and cable modems can send data over a cable at the same time the cable is being used to receive television signals. Another, less-popular option is satellite Internet access, in which a computer grabs an Internet signal from orbiting satellites via an outdoor satellite dish. The user usually pays a fixed monthly fee for a dedicated connection. In exchange, the company providing the connection agrees to relay data between the user’s computer and the Internet.
Dial-up is the least expensive access technology, but it is also the least convenient. To use dial-up access, a subscriber must have a telephone modem, a device that connects a computer to the telephone system and is capable of converting data into sounds and sounds back into data. The user’s ISP provides software that controls the modem. To access the Internet, the user opens the software application, which causes the dial-up modem to place a telephone call to the ISP. A modem at the ISP answers the call, and the two modems use audible tones to send data in both directions. When one of the modems is given data to send, the modem converts the data from the digital values used by computers—numbers stored as a sequence of 1s and 0s—into tones. The receiving side converts the tones back into digital values. Unlike dedicated access technologies, a dial-up modem does not use separate frequencies, so the telephone line cannot be used for regular telephone calls at the same time a dial-up modem is sending data.
B. How Information Travels Over the Internet
Internet Topology
Connecting individual computers to each other creates networks. The Internet is a series of interconnected networks. Personal computers and workstations are connected to a Local Area Network (LAN) by either a dial-up connection through a modem and standard phone line or by being directly wired into the LAN. Other modes of data transmission that allow for connection to a network include T-1 connections and dedicated lines. Bridges and hubs link multiple networks to each other. Routers transmit data through networks and determine the best path of transmission.
All information is transmitted across the Internet in small units of data called packets. Software on the sending computer divides a large document into many packets for transmission; software on the receiving computer regroups incoming packets into the original document. Similar to a postcard, each packet has two parts: a packet header specifying the computer to which the packet should be delivered, and a packet payload containing the data being sent. The header also specifies how the data in the packet should be combined with the data in other packets by recording which piece of a document is contained in the packet.
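The divide-and-reassemble step can be illustrated in a few lines: each chunk is numbered so the receiving side can restore the original order even if the packets arrive shuffled. A simplified sketch:

import random

def packetize(document, payload_size=8):
    # Each "packet" is (offset, payload); a real header also carries the
    # source and destination addresses.
    return [(i, document[i:i + payload_size])
            for i in range(0, len(document), payload_size)]

def reassemble(packets):
    # Sort by offset, then join the payloads back into the original document.
    return b"".join(payload for _, payload in sorted(packets))

packets = packetize(b"All information travels across the Internet in packets.")
random.shuffle(packets)          # packets may arrive out of order
print(reassemble(packets))       # the original document, restored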

Hardware devices that connect networks in the Internet are called IP routers because they follow the IP protocol when forwarding packets. A router examines the header in each packet that arrives to determine the packet’s destination. The router either delivers the packet to the destination computer across a local network or forwards the packet to another router that is closer to the final destination. Thus, a packet travels from router to router as it passes through the Internet. In some cases, a router can deliver packets across a local area wireless network, allowing desktop and laptop computers to access the Internet without the use of cables or wires. Today’s business and home wireless local area networks (LANs), which operate according to a family of wireless protocols known as Wi-Fi, are fast enough to deliver Internet feeds as quickly as wired LANs.
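A router's forwarding decision can be pictured as a table lookup: find the most specific network prefix that contains the packet's destination address and send the packet to the corresponding next hop. The prefixes and next hops below are invented for the illustration:

import ipaddress

FORWARDING_TABLE = {
    "10.0.0.0/8":  "router-A",
    "10.1.0.0/16": "router-B",          # a more specific route inside 10.0.0.0/8
    "0.0.0.0/0":   "default-gateway",   # everything else
}

def next_hop(destination):
    address = ipaddress.ip_address(destination)
    matches = [prefix for prefix in FORWARDING_TABLE
               if address in ipaddress.ip_network(prefix)]
    # Longest-prefix match: the most specific matching route wins.
    best = max(matches, key=lambda prefix: ipaddress.ip_network(prefix).prefixlen)
    return FORWARDING_TABLE[best]

print(next_hop("10.1.2.3"))      # router-B
print(next_hop("203.0.113.7"))   # default-gateway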
Increasingly, cell phone and handheld computer users are also accessing the Internet through wireless cellular telephone networks. Such wide area wireless access is much slower than high-capacity dedicated (broadband) access and slower even than dial-up access. Also, handheld devices, equipped with much smaller screens and displays, are more difficult to use than full-sized computers. But with wide area wireless, users can access the Internet on the go and in places where access is otherwise impossible. Telephone companies are currently developing so-called 3G—for “third generation”—cellular networks that will provide wide area Internet access at DSL-like speeds. See also Wireless Communications.
C. Network Names and Addresses
To be connected to the Internet, a computer must be assigned a unique number, known as its IP (Internet Protocol) address. Each packet sent over the Internet contains the IP address of the computer to which it is being sent. Intermediate routers use the address to determine how to forward the packet. Users almost never need to enter or view IP addresses directly. Instead, to make it easier for users, each computer is also assigned a domain name; protocol software automatically translates domain names into IP addresses. See also Domain Name System.
Users encounter domain names when they use applications such as the World Wide Web. Each page of information on the Web is assigned a URL (Uniform Resource Locator) that includes the domain name of the computer on which the page is located. Other items in the URL give further details about the page. For example, the string http specifies that a browser should use the http protocol, one of many TCP/IP protocols, to fetch the item.
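The steps from URL to domain name to IP address can be seen with two calls from Python's standard library; the URL is a placeholder, and the numeric address returned depends on the resolver consulted:

import socket
from urllib.parse import urlparse

parts = urlparse("http://www.example.com/index.html")   # placeholder URL
print(parts.scheme)     # 'http': the protocol the browser should use
print(parts.hostname)   # 'www.example.com': the domain name in the URL
print(socket.gethostbyname(parts.hostname))   # the IP address the resolver returns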
D. Client/Server Architecture
Internet applications, such as the Web, are based on the concept of client/server architecture. In a client/server architecture, some application programs act as information providers (servers), while other application programs act as information receivers (clients). The client/server architecture is not one-to-one. That is, a single client can access many different servers, and a single server can be accessed by a number of different clients. Usually, a user runs a client application, such as a Web browser, that contacts one server at a time to obtain information. Because it only needs to access one server at a time, client software can run on almost any computer, including small handheld devices such as personal organizers and cellular telephones. To supply information to others, a computer must run a server application. Although server software can run on any computer, most companies choose large, powerful computers to run server software because the company expects many clients to be in contact with its server at any given time. A faster computer enables the server program to return information with less delay.
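A minimal client/server pair written with Python's socket module shows the division of labor: the server waits for connections and supplies information, while the client connects, sends a request, and displays the reply. The port number and reply text are invented for the example:

import socket
import time
from threading import Thread

HOST, PORT = "127.0.0.1", 9090        # invented local address and port

def run_server():
    # Server: accept one connection and answer the request it carries.
    with socket.socket() as listener:
        listener.bind((HOST, PORT))
        listener.listen()
        connection, _ = listener.accept()
        with connection:
            request = connection.recv(1024).decode()
            connection.sendall(("server reply to: " + request).encode())

Thread(target=run_server, daemon=True).start()
time.sleep(0.5)                        # give the server a moment to start

# Client: connect to the server, send a request, and print the reply.
with socket.socket() as client:
    client.connect((HOST, PORT))
    client.sendall(b"GET welcome-page")
    print(client.recv(1024).decode())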
E. Electronic Mail
Electronic mail, or e-mail, is a widely used Internet application that enables individuals or groups of individuals to quickly exchange messages, even if they are separated by long distances. A user creates an e-mail message and specifies a recipient using an e-mail address, which is a string consisting of the recipient’s login name followed by an @ (at) sign and then a domain name. E-mail software transfers the message across the Internet to the recipient’s computer, where it is placed in the specified mailbox, a file on the hard drive. The recipient uses an e-mail application to view and reply to the message, as well as to save or delete it. Because e-mail is a convenient and inexpensive form of communication, it has dramatically improved personal and business communications.
In its original form, e-mail could only be sent to recipients named by the sender, and only text messages could be sent. E-mail has been extended in two ways, and is now a much more powerful tool. Software has been invented that can automatically propagate to multiple recipients a message sent to a single address. Known as a mail gateway or list server, such software allows individuals to join or leave a mail list at any time. Such software can be used to create lists of individuals who will receive announcements about a product or service or to create online discussion groups.
E-mail software has also been extended to allow the transfer of nontext documents, such as photographs and other images, executable computer programs, and prerecorded audio. Such documents, appended to an e-mail message, are called attachments. The standard used for encoding attachments is known as Multipurpose Internet Mail Extensions (MIME). Because the Internet e-mail system only transfers printable text, MIME software encodes each document using printable letters and digits before sending it and then decodes the item when e-mail arrives. Most significantly, MIME allows a single message to contain multiple items, enabling a sender to include a cover letter that explains each of the attachments.
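Python's standard email package applies MIME encoding automatically when a file is attached; the addresses and file name below are placeholders:

from email.message import EmailMessage

message = EmailMessage()
message["From"], message["To"] = "sally@abc.com", "bob@example.org"  # placeholders
message["Subject"] = "Conference photo"
message.set_content("The photo from the conference is attached.")

with open("photo.jpg", "rb") as f:                   # hypothetical attachment
    message.add_attachment(f.read(), maintype="image", subtype="jpeg",
                           filename="photo.jpg")

# The attachment travels as printable text (base64), as MIME requires.
print(message.as_string()[:300])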
F. Other Internet Applications
Although the World Wide Web is the most popular application, some older Internet applications are still used. For example, the Telnet application enables a user to interactively access a remote computer. Telnet gives the appearance that the user’s keyboard and monitor are connected directly to the remote computer. For example, a businessperson who is visiting a location that has Internet access can use Telnet to contact their office computer. Doing so is faster and less expensive than using a dial-up modem.
Another application, known as the File Transfer Protocol (FTP), is used to download files from an Internet site to a user’s computer. The FTP application is often automatically invoked when a user downloads an updated version of a piece of software. Applications such as FTP have been integrated with the World Wide Web, making them transparent so that they run automatically without requiring users to open them. When a Web browser encounters a URL that begins with ftp:// it automatically uses FTP to access the item.
Network News discussion groups (newsgroups), originally part of the Usenet network, are another form of online discussion. Thousands of newsgroups exist, on an extremely wide range of subjects. Messages to a newsgroup are not sent directly to each user. Instead, an ordered list is disseminated to computers around the world that run news server software. Newsgroup application software allows a user to obtain a copy of selected articles from a local news server or to use e-mail to post a new message to the newsgroup. The system makes newsgroup discussions available worldwide.
A service known as Voice Over IP (VoIP) allows individuals and businesses to make phone calls over the Internet. Low-cost services (some of them free) often transfer calls via personal computers (PCs) equipped with microphones and speakers instead of the traditional telephone handset. But a growing number of services operate outside the PC, making calls via a special adapter that connects to a traditional telephone handset. The calls still travel over the Internet, but the person using the special adapter never has to turn on his or her computer. Thousands now use such VoIP services in lieu of traditional phone service. VoIP services are not subject to the same government regulation as traditional phone service. Thus, they are often less expensive.
G. Bandwidth
Computers store all information as binary numbers. The binary number system uses two binary digits, 0 and 1, which are called bits. The amount of data that a computer network can transfer in a certain amount of time is called the bandwidth of the network and is measured in kilobits per second (kbps) or megabits per second (mbps). A kilobit is 1 thousand bits; a megabit is 1 million bits. A dial-up telephone modem can transfer data at rates up to 56 kbps; DSL and cable modem connections are much faster and can transfer at a few mbps. The Internet connections used by businesses can operate at 45 mbps or more, and connections between routers in the heart of the Internet may operate at rates from 2,488 to 9,953 mbps (9.953 gigabits per second). The terms wideband or broadband are used to characterize networks with high capacity, such as DSL and cable, and to distinguish them from narrowband networks, such as dial-up modems, which have low capacity.
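These units make transfer times easy to estimate: a 5-megabyte file is 40 million bits, which takes roughly 12 minutes over a 56-kbps modem but only a few seconds over a broadband link. A short worked calculation:

FILE_SIZE_BITS = 5 * 8 * 1_000_000          # a 5-megabyte file, expressed in bits

def transfer_seconds(bits_per_second):
    return FILE_SIZE_BITS / bits_per_second

print(transfer_seconds(56_000) / 60)        # dial-up, 56 kbps: about 11.9 minutes
print(transfer_seconds(3_000_000))          # cable/DSL, 3 mbps: about 13 seconds
print(transfer_seconds(45_000_000))         # business link, 45 mbps: under 1 second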
IV. HISTORY

Timothy Berners-Lee
Timothy Berners-Lee, a British computer scientist, developed the World Wide Web in the late 1980s and early 1990s.
Research on dividing information into packets and switching them from computer to computer began in the 1960s. The U.S. Department of Defense Advanced Research Projects Agency (ARPA) funded a research project that created a packet switching network known as the ARPANET. ARPA also funded research projects that produced two satellite networks. In the 1970s ARPA was faced with a dilemma: Each of its networks had advantages for some situations, but each network was incompatible with the others. ARPA focused research on ways that networks could be interconnected, and the Internet was envisioned and created to be an interconnection of networks that use TCP/IP protocols. In the early 1980s a group of academic computer scientists formed the Computer Science NETwork, which used TCP/IP protocols. Other government agencies extended the role of TCP/IP by applying it to their networks: The Department of Energy’s Magnetic Fusion Energy Network (MFENet), the High Energy Physics NETwork (HEPNET), and the National Science Foundation NETwork (NSFNET).
In the 1980s, as large commercial companies began to use TCP/IP to build private internets, ARPA investigated transmission of multimedia—audio, video, and graphics—across the Internet. Other groups investigated hypertext and created tools such as Gopher that allowed users to browse menus, which are lists of possible options. In 1989 many of these technologies were combined to create the World Wide Web. Initially designed to aid communication among physicists who worked in widely separated locations, the Web became immensely popular and eventually replaced other tools. Also during the late 1980s, the U.S. government began to lift restrictions on who could use the Internet, and commercialization of the Internet began. In the early 1990s, with users no longer restricted to the scientific or military communities, the Internet quickly expanded to include universities, companies of all sizes, libraries, public and private schools, local and state governments, individuals, and families.
V. THE FUTURE OF THE INTERNET
Several technical challenges must be overcome if the Internet is to continue growing at the current phenomenal rate. The primary challenge is to create enough capacity to accommodate increases in traffic. Internet traffic is increasing as more people become Internet users and existing users send greater amounts of data. If the volume of traffic increases faster than the capacity of the network increases, congestion will occur, similar to the congestion that occurs when too many cars attempt to use a highway. To avoid congestion, researchers have developed technologies, such as Dense Wave Division Multiplexing (DWDM), that transfer more bits per second across an optical fiber. The speed of routers and other packet-handling equipment must also increase to accommodate growth. In the short term, researchers are developing faster electronic processors; in the long term, new technologies will be required.
Another challenge involves IP addresses. Although the original protocol design provided addresses for up to 4.29 billion individual computers, the addresses have begun to run out because they were assigned in blocks. Researchers developed technologies, such as Network Address Translation (NAT), to conserve addresses. NAT allows multiple computers at a residence to “share” a single Internet address. Engineers have also planned a next-generation of IP, called IPv6, which will handle many more addresses than the current version.
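The scale of the shortage, and of the IPv6 remedy, follows from simple arithmetic: IPv4 addresses are 32-bit numbers, while IPv6 addresses are 128 bits long.

print(2 ** 32)    # IPv4: 4,294,967,296 addresses (about 4.29 billion)
print(2 ** 128)   # IPv6: about 3.4 x 10**38 addresses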
Short, easy-to-remember domain names were once in short supply. Most domain names using the simple format http://www.[word].com, where [word] is a common noun or verb and .com refers to a for-profit business, were taken by 2001. Until 2001, only a few endings were allowed, such as .com, .org, and .net. By 2002, however, additional endings began to be used, such as .biz for businesses and .info for informational sites. This greatly expanded the number of possible URLs.
Other important questions concerning Internet growth relate to government controls, especially taxation and censorship. Because the Internet has grown so rapidly, governments have had little time to pass laws that control its deployment and use, impose taxes on Internet commerce, or otherwise regulate content. Many Internet users in the United States view censorship laws as an infringement on their constitutional right to free speech. In 1996 the Congress of the United States passed the Communications Decency Act, which made it a crime to transmit indecent material over the Internet. The act resulted in an immediate outcry from users, industry experts, and civil liberties groups opposed to such censorship. In 1997 the Supreme Court of the United States declared the act unconstitutional because it violated First Amendment rights to free speech. The U.S. Congress responded in 1998 by passing a narrower antipornography bill, the Child Online Protection Act (COPA). COPA required commercial Web sites to ensure that children could not access material deemed harmful to minors. In 1999 a federal judge blocked COPA as well, ruling that it would dangerously restrict constitutionally protected free speech. The judge’s ruling was upheld by a federal appeals court on the grounds that the law’s use of “community standards” in deciding what was pornographic was overly broad.
The issue reached the Supreme Court of the United States in 2002, and in a limited ruling the Supreme Court found that the community standard provision was not inherently unconstitutional. Supporters of the law welcomed the Court’s ruling. However, opponents noted that the Court had sent the case back to the federal appeals court for a more comprehensive review and had ruled that the law could not go into effect until that review occurred. Some analysts who studied the various opinions written by the justices concluded that a majority of the Court was likely to find the law unconstitutional.
Increasing commercial use of the Internet has heightened security and privacy concerns. With a credit or debit card, an Internet user can order almost anything from an Internet site and have it delivered to their home or office. Companies doing business over the Internet need sophisticated security measures to protect credit card, bank account, and social security numbers from unauthorized access as they pass across the Internet (see Computer Security). Any organization that connects its intranet to the global Internet must carefully control the access point to ensure that outsiders cannot disrupt the organization’s internal networks or gain unauthorized access to the organization’s computer systems and data.
Disruptions that could cause loss of life or that could be part of a coordinated terrorist attack have also become an increasing concern. For example, using the Internet to attack computer systems that control electric power grids, pipelines, water systems, or chemical refineries could cause the systems to fail, and the resulting failures could lead to fatalities and harm to the economy. To safeguard against such attacks, the U.S. Congress passed the Homeland Security Act in November 2002. The new law creates criminal penalties, including life imprisonment, for disruptions of computer systems and networks that cause or attempt to cause death. The law also allows ISPs to reveal subscriber information to government officials without a court-approved warrant if there is a risk of death or injury. It also enables government officials to trace e-mails and other Internet traffic during an Internet disruption without obtaining court approval. Civil liberties groups objected to the lack of court supervision of many provisions in the new law.
Personal Computer
I | | INTRODUCTION |

Personal Computers at Home
Personal computers (PCs) have changed the way families track finances, write reports, and play games. PCs also help students enhance math, spelling, and reading skills.
Personal Computer (PC), computer in the form of a desktop or laptop device designed for use by a single person. PCs function using a display monitor and a keyboard. Since their introduction in the 1980s, PCs have become powerful and extremely versatile tools that have revolutionized how people work, learn, communicate, and find entertainment. Many households in the United States now have PCs, thanks to affordable prices and software that has made PCs easy to use without special computer expertise. Personal computers are also a crucial component of information technology (IT) and play a key role in modern economies worldwide.
The usefulness and capabilities of personal computers can be greatly enhanced by connection to the Internet and World Wide Web, as well as to smaller networks that link to local computers or databases. Personal computers can also be used to access content stored on compact discs (CDs) or digital versatile discs (DVDs), and to transfer files to personal media devices and video players.
Personal computers are sometimes called microcomputers or micros. Powerful PCs designed for professional or technical use are known as workstations. Other names that reflect different roles for PCs include home computers and small-business computers. The PC is generally larger and more powerful than handheld computers, including personal digital assistants (PDAs) and gaming devices.
II | PARTS OF A PERSONAL COMPUTER
The different types of equipment that make a computer function are known as hardware; the coded instructions that make a computer work are known as software.
A | Types of Hardware

Personal Computer Components
A typical personal computer has components to display and print information (monitor and laser printer); input commands and data (keyboard and mouse); retrieve and store information (CD-ROM and disk drives); and communicate with other computers (modem).
© Microsoft Corporation. All Rights Reserved.
At the core of a PC is electronic circuitry called a microprocessor, which serves as the central processing unit (CPU), directing logical and arithmetic functions and executing computer programs. The CPU is located on a motherboard with other chips. A PC also has electronic memory known as random access memory (RAM) to temporarily store programs and data. A basic component of most PCs is a disk drive, commonly in the form of a hard disk or hard drive. A hard disk is a magnetic storage device made up of one or more rotating disks. The magnetically stored information is read or modified using a drive head that scans the surface of the disk.
Removable storage devices—such as floppy drives, compact disc (CD-ROM) and digital versatile disc (DVD) drives, and additional hard drives—can be used to permanently store as well as access programs and data. PCs may have CD or DVD “burners” that allow users to write or rewrite data onto recordable discs. Other external devices to transfer and store files include memory sticks and flash drives, small solid-state devices that do not have internal moving parts.

Computer Docking Station
A computer docking station enables a notebook, or laptop, computer to operate the hard drive and peripheral devices of a desktop computer. When removed from the docking station, the smaller computer is portable and functions as a notebook.
© Microsoft Corporation. All Rights Reserved.
Cards are printed circuit boards that can be plugged into a PC to provide additional functions such as recording or playing video or audio, or enhancing graphics (see Graphics Card).
A PC user enters information and commands with a keyboard or with a pointing device such as a mouse. A joystick may be used for computer games or other tasks. Information from the PC is displayed on a video monitor or on a liquid crystal display (LCD) screen. Accessories such as speakers or headphones provide audio output. Files, photographs, or documents can be printed on laser, dot-matrix, or inkjet printers. The various components of the computer system are physically attached to the PC through the bus. Some PCs have wireless systems that use infrared or radio waves to link to the mouse, the keyboard, or other components.
PC connections to the Internet or local networks may be through a cable attachment or a phone line and a modem (a device that permits transmission of digital signals). Wireless links to the Internet and networks operate through a radio modem. Modems also are used to link other devices to communication systems.
B | Types of Software

Computer Software
Arithmetic and logic form the basis of all computer software—the instructions that tell computers what to do. Shown on this computer screen are programs running on the Windows XP operating system, the software that allows a computer’s other software to run.
© Microsoft Corporation. All Rights Reserved.
PCs are run by software called the operating system. Widely used operating systems include Microsoft’s Windows, Apple’s Mac OS, and Linux. Other types of software called applications allow the user to perform a wide variety of tasks such as word processing; using spreadsheets; manipulating or accessing data; or editing video, photographs, or audio files.
Drivers are special software programs that operate specific devices that can be either crucial or optional to the functioning of the computer. Drivers help operate keyboards, printers, and DVD drives, for example.
Most PCs use software to run a screen display called a graphical user interface (GUI). A GUI allows a user to open and move files, work with applications, and perform other tasks by clicking on graphic icons with a mouse or other pointing device.
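As a simple illustration of the point-and-click idea, the following Python sketch uses the standard Tkinter toolkit to show a window with a clickable button that opens a file through a dialog rather than a typed command; the window title and button label are arbitrary choices for the example.

# A minimal GUI sketch in Python using the standard-library Tkinter toolkit.
# Clicking the button opens a file-selection dialog and shows the chosen path,
# illustrating point-and-click interaction instead of typed commands.
import tkinter as tk
from tkinter import filedialog

def open_file():
    # Let the user pick a file with the mouse; the path appears in the label.
    path = filedialog.askopenfilename(title="Choose a file")
    label.config(text=path or "No file selected")

root = tk.Tk()
root.title("GUI sketch")

button = tk.Button(root, text="Open a file...", command=open_file)
button.pack(padx=20, pady=10)

label = tk.Label(root, text="No file selected")
label.pack(padx=20, pady=10)

root.mainloop()

Real operating-system GUIs are far more elaborate, but the principle is the same: graphical controls translate mouse clicks into commands that would otherwise have to be typed.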
In addition to text files, PCs can store digital multimedia files such as photographs, audio recordings, and video. These media files are usually in compressed digital formats such as JPEG for photographs, MP3 for audio, and MPEG for video.
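Each of these formats marks its files with characteristic opening bytes, which is how software can tell them apart. The short Python sketch below guesses a file’s format from those opening bytes; it covers only a few common cases, and the file name in the usage line is a placeholder.

# Simplified sketch: guess a media file's format from its opening bytes
# ("magic numbers"). Covers only a few common cases for illustration.
from pathlib import Path

def guess_format(path):
    header = Path(path).read_bytes()[:4]
    if header.startswith(b"\xff\xd8\xff"):
        return "JPEG image"
    if header.startswith(b"ID3"):          # ID3 tag at the start of many MP3s
        return "MP3 audio"
    if header.startswith(b"\x00\x00\x01\xba"):
        return "MPEG video (program stream)"
    return "unknown"

if __name__ == "__main__":
    print(guess_format("example.jpg"))     # "example.jpg" is a placeholder name

Real media software inspects far more of the file than this, but the opening bytes are usually enough to distinguish the major formats.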
III | USES FOR PERSONAL COMPUTERS

Personal Computer
A personal computer (PC) enables people to carry out an array of tasks, such as word processing and slide presentations. With a connection to the Internet, users can tap into a vast amount of information on the World Wide Web, send e-mail, and download music and videos. As a family tool, the PC may be used for school, research, communication, record keeping, work, and entertainment.
Jose Luis Pelaez, Inc./Corbis
The wide variety of tasks that PCs can perform, in conjunction with the PC’s role as a portal to the Internet and World Wide Web, has had profound effects on how people conduct their lives, work, and pursue education.
In the home, PCs can help with balancing the family checkbook, keeping track of finances and investments, and filing taxes, as well as preserving family documents for easy access or indexing recipes. PCs are also a recreational device for playing computer games, watching videos with webcasting, downloading music, saving photographs, or cataloging records and books. Together with the Internet, PCs are a link to social contacts through electronic mail (e-mail), text-messaging, personal Web pages, blogs, and chat groups. PCs can also allow quick and convenient access to news and sports information on the World Wide Web, as well as consumer information. Shopping from home over the Internet with a PC generates billions of dollars in the economy.

Computers in Schools
Students work on their classroom computers as a teacher supervises. Nearly every school in the United States has desktop computers that can be used by students. Computers aid education by providing students with access to learning tools and research information.
LWA-JDC/Corbis
PCs can greatly improve productivity in the workplace, allowing people to collaborate on tasks from different locations and easily share documents and information. Many people with a PC at home are able to telecommute, working from home over the Internet. Laptop PCs with wireless connections to the Internet allow people to work in virtually any environment when away from the office. PCs can help people to be self-employed. Special software can make running a small business from home much easier. PCs can also assist artists, writers, and musicians with their creative work, or allow anyone to make their own musical mixes at home. Medical care has been improved and costs have been reduced by transferring medical records into electronic form that can be accessed through PC terminals.
PCs have become an essential tool in education at all levels, from grammar school to university. Many school children are given laptop computers to help with schoolwork and homework. Classrooms of all kinds commonly use PCs. Many public libraries make PCs available to members of the public. The Internet and World Wide Web provide access to enormous amounts of information, some of it free and some of it available through subscription or fee. Online education as a form of distance education or correspondence education is a growing service, allowing people to take classes and work on degrees at their convenience using PCs and the Internet.
PCs can also be adapted to help people with disabilities, using special devices and software. Special keyboards, cursors that translate head movements, or accessories such as foot mice can allow people with limited physical movement to use a PC. PCs can also help people with speech or auditory disabilities to understand or generate speech. People with visual disabilities can be aided by speech-recognition software that lets spoken commands operate a PC and by speech-synthesis software that reads e-mail and other text aloud. Text display can also be magnified for individuals with low vision.
IV | EARLY HISTORY AND DEVELOPMENT OF PERSONAL COMPUTERS

Apple Macintosh Computer
The Apple Macintosh, released in 1984, was among the first personal computers to use a graphical user interface. A graphical user interface enables computer users to easily execute commands by clicking on pictures, words, or icons with a pointing device called a mouse.
© Microsoft Corporation. All Rights Reserved.
The first true modern computers were developed during World War II (1939-1945) and used vacuum tubes. These early computers were the size of houses and as expensive as battleships, but they had none of the computational power or ease of use that are common in modern PCs. More powerful mainframe computers were developed in the 1950s and 1960s, but needed entire rooms and large amounts of electrical power to operate.
A major step toward the modern PC came in the 1960s when a group of researchers at the Stanford Research Institute (SRI) in California began to explore ways for people to interact more easily with computers. The SRI team developed the first computer mouse and other innovations that would be refined and improved in the 1970s by researchers at Xerox PARC (the Palo Alto Research Center). The PARC team developed an experimental PC design in 1973 called the Alto, which was the first computer to have a graphical user interface (GUI).
Two crucial hardware developments would help make the SRI vision of personal computing practical: the miniaturization of electronic circuitry (microelectronics) and the invention of the integrated circuit and the microprocessor. These advances enabled computer makers to combine the essential elements of a computer onto tiny silicon chips, thereby increasing computer performance and decreasing cost.
The integrated circuit, or IC, was developed in 1959 and permitted the miniaturization of computer-memory circuits. The microprocessor first appeared in 1971 with the Intel 4004, created by Intel Corporation, and was originally designed to be the computing and logical processor of calculators and watches. The microprocessor reduced the size of a computer’s CPU to the size of a single silicon chip.
Because a CPU calculates, performs logical operations, contains operating instructions, and manages data flows, the potential existed for developing a separate system that could function as a complete microcomputer. The first such desktop-size system specifically designed for personal use appeared in 1974; it was offered by Micro Instrumentation Telemetry Systems (MITS). The company’s owners were then encouraged by the editor of Popular Electronics magazine to create and sell a mail-order computer kit through the magazine.
The Altair 8800 is considered to be the first commercial PC. The Altair was built from a kit and programmed by using switches. Information from the computer was displayed by light-emitting diodes on the front panel of the machine. The Altair appeared on the cover of Popular Electronics magazine in January 1975 and inspired many computer enthusiasts who would later establish companies to produce computer hardware and software. The computer retailed for slightly less than $400.

Computer Circuit Board
Integrated circuits (ICs) make the microcomputer possible; without them, individual circuits and their components would take up far too much space for a compact computer design. Also called a chip, the typical IC consists of elements such as resistors, capacitors, and transistors packed on a single piece of silicon. In smaller, more densely-packed ICs, circuit elements may be only a few atoms in size, which makes it possible to create sophisticated computers the size of notebooks. A typical computer circuit board features many integrated circuits connected together.
James Green/Robert Harding Picture Library
The demand for the microcomputer kit was immediate, unexpected, and totally overwhelming. Scores of small entrepreneurial companies responded to this demand by producing computers for the new market. The first major electronics firm to manufacture and sell personal computers, Tandy Corporation (Radio Shack), introduced its model in 1977. It quickly dominated the field because it combined two attractive features: a keyboard and a display terminal using a cathode-ray tube (CRT). It was also popular because it could be programmed and users could store information on cassette tape.
American computer designers Steven Jobs and Stephen Wozniak created the Apple II in 1977. The Apple II was one of the first PCs to incorporate a color video display and a keyboard that made the computer easy to use. Jobs and Wozniak incorporated Apple Computer Inc. the same year. Some of the new features they introduced into their own microcomputers were expanded memory, inexpensive disk-drive programs and data storage, and color graphics. Apple Computer went on to become the fastest-growing company in U.S. business history. Its rapid growth inspired a large number of similar microcomputer manufacturers to enter the field. Before the end of the decade, the market for personal computers had become clearly defined.
In 1981 IBM introduced its own microcomputer model, the IBM PC. Although it did not make use of the most recent computer technology, the IBM PC was a milestone in this burgeoning field. It proved that the PC industry was more than a current fad, and that the PC was in fact a necessary tool for the business community. The PC’s use of a 16-bit microprocessor initiated the development of faster and more powerful microcomputers, and its use of an operating system that was available to all other computer makers led to what was effectively a standardization of the industry. The design of the IBM PC and its clones soon became the PC standard, and an operating system developed by Microsoft Corporation became the dominant software running PCs.
A graphical user interface (GUI)—a visually appealing way to represent computer commands and data on the screen—was first developed in 1983 when Apple introduced the Lisa, but the new user interface did not gain widespread notice until 1984 with the introduction of the Apple Macintosh. The Macintosh GUI combined icons (pictures that represent files or programs) with windows (boxes that each contain an open file or program). A pointing device known as a mouse controlled information on the screen. Inspired by earlier work of computer scientists at Xerox Corporation, the Macintosh user interface made computers easy and fun to use and eliminated the need to type in complex commands (see User Interface).
Beginning in the early 1970s, computing power doubled about every 18 months due to the creation of faster microprocessors, the incorporation of multiple microprocessor designs, and the development of new storage technologies. A powerful 32-bit computer capable of running advanced multiuser operating systems at high speeds appeared in the mid-1980s. This type of PC blurred the distinction between microcomputers and minicomputers, placing enough computing power on an office desktop to serve all small businesses and most medium-size businesses.
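A doubling every 18 months compounds quickly: after 15 years it amounts to roughly a thousandfold increase. The short Python sketch below works through that back-of-the-envelope arithmetic, assuming a clean 18-month doubling period.

# Back-of-the-envelope projection of "doubling every 18 months":
# growth factor after t years = 2 ** (t / 1.5)
for years in (3, 9, 15):
    factor = 2 ** (years / 1.5)
    print(f"{years:2d} years -> about {factor:,.0f}x the original computing power")
# After 15 years the factor is about 1,024x, i.e. roughly a thousandfold increase.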

Handheld Computer
The handheld computing device attests to the remarkable miniaturization of computer hardware. The early computers of the 1940s were so large that they filled entire rooms. Technological innovations, such as the integrated circuit in 1959 and the microprocessor in 1971, shrank computers’ central processing units to the size of tiny silicon chips. Handheld computers are sometimes called personal digital assistants (PDAs).
James Leynse/Corbis
During the 1990s the price of personal computers came down at the same time that computer chips became more powerful. The most important innovations, however, occurred with the PC operating system software. Apple’s Macintosh computer had been the first to provide a graphical user interface, but the computers remained relatively expensive. Microsoft Corporation’s Windows software came preinstalled on IBM PCs and clones, which were generally less expensive than Macintosh. Microsoft also designed its software to allow individual computers to easily communicate and share files through networks in an office environment. The introduction of the Windows operating systems, which had GUI systems similar to Apple’s, helped make Microsoft the dominant provider of PC software for business and home use.
PCs in the form of portable notebook computers also emerged in the 1990s. These PCs could be carried in a briefcase or backpack and could run on batteries or be plugged in. The first portable computers had been introduced at the end of the 1980s. True laptop computers arrived in the early 1990s with Apple’s PowerBook and IBM’s ThinkPad.
Despite its spectacular success in the software market, Microsoft was initially slow to understand the importance of the Internet, which had been developed for government and academic use in the 1960s and 1970s, and the World Wide Web, developed in the late 1980s. The ability to access the Internet and the growing World Wide Web greatly enhanced the usefulness of the PC, giving it enormous potential educational, commercial, and entertainment value. In 1994 Netscape released its Navigator browser, one of the first designed to make the Internet and the World Wide Web user friendly, much as a GUI makes using a PC simpler. The success of Netscape prompted Microsoft to develop its own Web browser, Internet Explorer, released in 1995. Explorer was then included with the preinstalled Windows software on PCs sold to consumers. This “bundling” of the Explorer browser was controversial and led to lawsuits against Microsoft for unfair trade practices.
Connecting PCs to the Internet had unanticipated consequences. PCs were vulnerable to malicious software designed to damage files or computer hardware. Other types of software programs could force a PC to send out e-mail messages or store files, or allow access to existing files and software as well as track a user’s keystrokes and Internet activity without the user's knowledge. Computer viruses and other malicious programs could be easily sent over the Internet using e-mail or by secretly downloading files from Web pages a user visited. Microsoft’s software was a particular target and may have been vulnerable in part because its platforms and applications had been developed to allow computers to easily share files.
Since the late 1990s computer security has become a major concern. PC users can install firewalls to block unwanted access or downloads over the Internet. They can also subscribe to services that periodically scan personal computers for viruses and malicious software and remove them. Operating-system software has also been designed to improve security.
PCs continue to improve in power and versatility. The growing use of 64-bit processors and higher-speed chips in PCs in combination with broadband access to the Internet greatly enhances media such as motion pictures and video, as well as games and interactive features. The increasing use of computers to view and access media may be a further step toward the merger of television and computer technology that has been predicted by some experts since the 1990s.
II | HOW INFECTIONS OCCUR
A virus can infect a computer in a number of ways. It can arrive on a floppy disk or inside an e-mail message. It can piggyback on files downloaded from the World Wide Web or from an Internet service used to share music and movies. Or it can exploit flaws in the way computers exchange data over a network. So-called blended-threat viruses spread via multiple methods at the same time. Some blended-threat viruses, for instance, spread via e-mail but also propagate by exploiting flaws in an operating system.
Traditionally, even if a virus found its way onto a computer, it could not actually infect the machine—or propagate to other machines—unless the user was somehow fooled into executing the virus by opening it and running it just as one would run a legitimate program. But a new breed of computer virus can infect machines and spread to others entirely on its own. Simply by connecting a computer to a network, the computer owner runs the risk of infection. Because the Internet connects computers around the world, viruses can spread from one end of the globe to the other in a matter of minutes.
III | TYPES OF VIRUSES
There are many categories of viruses, including parasitic or file viruses, bootstrap-sector, multipartite, macro, and script viruses. Then there are so-called computer worms, which have become particularly prevalent. A computer worm is commonly grouped with viruses; however, instead of infecting files or operating systems, a worm replicates from computer to computer by spreading entire copies of itself.
Parasitic or file viruses infect executable files or programs in the computer. These files are often identified by the extension .exe in the name of the computer file. File viruses leave the contents of the host program unchanged but attach to the host in such a way that the virus code is run first. These viruses can be either direct-action or resident. A direct-action virus selects one or more programs to infect each time it is executed. A resident virus hides in the computer's memory and infects a particular program when that program is executed.
Bootstrap-sector viruses reside on the first portion of the hard disk or floppy disk, known as the boot sector. These viruses replace either the programs that store information about the disk's contents or the programs that start the computer. Typically, these viruses spread by means of the physical exchange of floppy disks.
Multipartite viruses combine the abilities of the parasitic and the bootstrap-sector viruses, and so are able to infect either files or boot sectors. These types of viruses can spread if a computer user boots from an infected diskette or accesses infected files.
Other viruses infect programs that contain powerful macro languages (programming languages that let the user create new features and utilities). These viruses, called macro viruses, are written in macro languages and automatically execute when the legitimate program is opened.
Script viruses are written in script programming languages, such as VBScript (Visual Basic Script) and JavaScript. These script languages can be seen as a special kind of macro language and are even more powerful because most are closely related to the operating system environment. The 'ILOVEYOU' virus, which appeared in 2000 and infected an estimated 1 in 5 personal computers, is a famous example of a script virus.
Strictly speaking, a computer virus is always a program that attaches itself to some other program. But computer virus has become a blanket term that also refers to computer worms. A worm operates entirely on its own, without ever attaching itself to another program. Typically, a worm spreads over e-mail and through other ways that computers exchange information over a network. In this way, a worm not only wreaks havoc on machines, but also clogs network connections and slows network traffic, so that it takes an excessively long time to load a Web page or send an e-mail.
IV | ANTI-VIRAL TACTICS
A | Preparation and Prevention
Computer users can prepare for a viral infection by regularly creating backups of legitimate original software and data files so that the computer system can be restored if necessary. Viral infection can be prevented by obtaining software from legitimate sources or by using a quarantined computer—that is, a computer not connected to any network—to test new software. In addition, users should regularly install operating system (OS) patches, software updates that mend the kinds of flaws, or holes, in the OS that viruses often exploit. Patches can be downloaded from the Web site of the operating system’s developer. However, the best prevention may be the installation of current and well-designed antiviral software. Such software can prevent a viral infection and thereby help stop its spread.
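As a simple illustration of the backup habit described above, the following Python sketch copies a folder of data files into a timestamped backup directory; the folder names "my_documents" and "backups" are placeholders chosen for the example.

# Minimal backup sketch: copy a folder of data files into a
# timestamped backup directory so earlier versions can be restored.
# "my_documents" and "backups" are illustrative placeholder paths.
import shutil
from datetime import datetime
from pathlib import Path

def back_up(source="my_documents", backup_root="backups"):
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(source, destination)   # copies the whole folder tree
    return destination

if __name__ == "__main__":
    print("Backup written to", back_up())

Scheduling such a script to run regularly, for example with the operating system’s task scheduler, approximates the routine backups recommended above.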
B | Virus Detection
Several types of antiviral software can be used to detect the presence of a virus. Scanning software can recognize the characteristics of a virus's computer code and look for these characteristics in the computer's files. Because new viruses must be analyzed as they appear, scanning software must be updated periodically to be effective. Other scanners search for common features of viral programs and are usually less reliable. Most antiviral software uses both on-demand and on-access scanners. On-demand scanners are launched only when the user activates them; because they usually detect a virus only after an infection has occurred, they are considered the reactive part of an antivirus package. On-access scanners, by contrast, constantly monitor the computer for viruses, running in the background without being visible to the user, and are considered the proactive part.
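The on-demand, signature-based scanning just described can be sketched in a few lines of Python; the byte patterns and virus names below are invented placeholders for illustration, not real signatures.

# On-demand signature scanner sketch: walk a folder and flag any file
# containing one of a set of known byte patterns ("signatures").
# The signatures below are invented placeholders for illustration only.
from pathlib import Path

SIGNATURES = {
    "ExampleVirus.A": b"\xde\xad\xbe\xef",
    "ExampleVirus.B": b"HELLO-EVIL-MARKER",
}

def scan_file(path):
    data = Path(path).read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_folder(folder):
    for path in Path(folder).rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                print(f"{path}: matches {', '.join(hits)}")

if __name__ == "__main__":
    scan_folder(".")   # scan the current directory on demand

An on-access scanner would run the same check automatically whenever a file is opened or saved, rather than waiting for the user to start a scan.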
Antivirus software is usually sold as a package of several independent programs that perform different functions. Installed together, these programs provide broad protection against viruses. Within most antiviral packages, several methods are used to detect viruses. Checksumming, for example, uses mathematical calculations to compare the state of executable programs before and after they are run. If the checksum has not changed, the program is presumed uninfected. Checksumming software can detect an infection only after it has occurred, however, and because the technique is dated and some viruses can evade it, checksumming is rarely used on its own today.
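Checksumming amounts to recording a fingerprint of each executable and comparing it later. The minimal Python sketch below assumes SHA-256 as the checksum function and a JSON file named checksums.json as the stored baseline; both choices are illustrative.

# Checksumming sketch: record a checksum (here SHA-256) for each program file,
# then compare current checksums against the saved baseline to spot changes.
import hashlib
import json
from pathlib import Path

def checksum(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def record_baseline(files, baseline="checksums.json"):
    Path(baseline).write_text(json.dumps({f: checksum(f) for f in files}))

def verify(baseline="checksums.json"):
    saved = json.loads(Path(baseline).read_text())
    for path, expected in saved.items():
        status = "unchanged" if checksum(path) == expected else "MODIFIED"
        print(f"{path}: {status}")

Because the baseline is recorded before any infection, a changed checksum signals that the program has been modified since the baseline was taken.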
Most antivirus packages also use heuristics (problem-solving by trial and error) to detect new viruses. This technology observes a program’s behavior and evaluates how closely it resembles a virus. It relies on experience with previous viruses to predict the likelihood that a suspicious file is an as-yet unidentified or unclassified new virus.
Other types of antiviral software include monitoring software and integrity-shell software. Monitoring software is different from scanning software. It detects illegal or potentially damaging viral activities such as overwriting computer files or reformatting the computer's hard drive. Integrity-shell software establishes layers through which any command to run a program must pass. Checksumming is performed automatically within the integrity shell, and infected programs, if detected, are not allowed to run.
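The integrity-shell idea can be sketched as a thin wrapper that recomputes a program’s checksum and refuses to launch the program if the value no longer matches the recorded baseline; the baseline file name below is again an illustrative placeholder.

# Integrity-shell sketch: launch a program only if its checksum still matches
# the value recorded in a baseline file; otherwise refuse to run it.
import hashlib
import json
import subprocess
from pathlib import Path

def sha256_of(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def run_if_unmodified(program, baseline="checksums.json"):
    saved = json.loads(Path(baseline).read_text())
    if saved.get(program) != sha256_of(program):
        print(f"Refusing to run {program}: checksum missing or changed")
        return
    subprocess.run([program])   # only reached when the checksum still matches

In a real integrity shell this check would sit between the operating system and every request to run a program; here it is shown as an ordinary wrapper function.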
C | Containment and Recovery
Once a viral infection has been detected, it can be contained by immediately isolating computers on networks, halting the exchange of files, and using only write-protected disks. In order for a computer system to recover from a viral infection, the virus must first be eliminated. Some antivirus software attempts to remove detected viruses, but sometimes with unsatisfactory results. More reliable results are obtained by turning off the infected computer; restarting it from a write-protected floppy disk; deleting infected files and replacing them with legitimate files from backup disks; and erasing any viruses on the boot sector.
V | VIRAL STRATEGIES
The authors of viruses have several strategies to circumvent antivirus software and to propagate their creations more effectively. So-called polymorphic viruses make variations in the copies of themselves to elude detection by scanning software. A stealth virus hides from the operating system when the system checks the location where the virus resides, by forging results that would be expected from an uninfected system. A so-called fast-infector virus infects not only programs that are executed but also those that are merely accessed. As a result, running antiviral scanning software on a computer infected by such a virus can infect every program on the computer. A so-called slow-infector virus infects files only when the files are modified, so that it appears to checksumming software that the modification was legitimate. A so-called sparse-infector virus infects only on certain occasions—for example, it may infect every tenth program executed. This strategy makes it more difficult to detect the virus.
By combining several virus-writing methods, virus authors can create more complex new viruses, and many adopt new technologies as they appear. The antivirus industry must therefore move rapidly to update its antiviral software and contain outbreaks of such new viruses.
VI | VIRUS-LIKE COMPUTER PROGRAMS
There are other harmful computer programs that can be part of a virus but are not considered viruses because they do not have the ability to replicate. These programs fall into three categories: Trojan horses, logic bombs, and deliberately harmful or malicious programs that run within a Web browser (an application program, such as Internet Explorer or Netscape, that displays Web sites).
A Trojan horse is a program that pretends to be something else. A Trojan horse may appear to be something interesting and harmless, such as a game, but when it runs it may have harmful effects. The term comes from the ancient Greek legend of the Trojan horse, recounted in Homer’s Odyssey and Virgil’s Aeneid.
A logic bomb infects a computer’s memory, but unlike a virus, it does not replicate itself. A logic bomb delivers its instructions when it is triggered by a specific condition, such as when a particular date or time is reached or when a combination of letters is typed on a keyboard. A logic bomb has the ability to erase a hard drive or delete certain files.
Malicious software programs that run within a Web browser often appear in Java applets and ActiveX controls. Although these applets and controls improve the usefulness of Web sites, they also increase a vandal’s ability to interfere with unprotected systems. Because those controls and applets require that certain components be downloaded to a user’s personal computer (PC), activating an applet or control might actually download malicious code.
A | History
In 1949 Hungarian American mathematician John von Neumann, at the Institute for Advanced Study in Princeton, New Jersey, proposed that it was theoretically possible for a computer program to replicate itself. This idea was first tested in the early 1960s at Bell Laboratories, where programmers devised a game (a forerunner of the later Core War) in which players created tiny computer programs that attacked, erased, and tried to propagate on an opponent's system.
In 1983 American electrical engineer Fred Cohen, at the time a graduate student, coined the term virus to describe a self-replicating computer program. In 1985 the first Trojan horses appeared, posing as a graphics-enhancing program called EGABTR and as a game called NUKE-LA. A host of increasingly complex viruses followed.
The so-called Brain virus appeared in 1986 and spread worldwide by 1987. In 1988 two new threats appeared: Stone, an early bootstrap-sector virus, and the Internet worm, which crossed the United States overnight via computer networks. The Dark Avenger virus, the first fast infector, appeared in 1989, followed by the first polymorphic virus in 1990.
Computer viruses grew more sophisticated in the 1990s. In 1995 the first macro language virus, WinWord Concept, was created. In 1999 the Melissa macro virus, spread by e-mail, disabled e-mail servers around the world for several hours, and in some cases several days. Regarded by some as the most prolific virus ever, Melissa cost corporations millions of dollars due to computer downtime and lost productivity.
The VBS_LOVELETTER script virus, also known as the Love Bug and the ILOVEYOU virus, unseated Melissa as the world's most prevalent and costly virus when it struck in May 2000. By the time the outbreak was finally brought under control, losses were estimated at U.S.$10 billion, and the Love Bug is said to have infected 1 in every 5 PCs worldwide.
The year 2003 was a particularly bad year for computer viruses and worms. First, the Blaster worm infected more than 10 million machines worldwide by exploiting a flaw in Microsoft’s Windows operating system. A machine that lacked the appropriate patch could be infected simply by connecting to the Internet. Then, the SoBig worm infected millions more machines in an attempt to convert systems into networking relays capable of sending massive amounts of junk e-mail known as spam. SoBig spread via e-mail, and before the outbreak was 24 hours old, MessageLabs, a popular e-mail filtering company, captured more than a million SoBig messages and called it the fastest-spreading virus in history. In January 2004, however, the MyDoom virus set a new record, spreading even faster than SoBig, and, by most accounts, causing even more damage.