Computer English

Edited by Ruth Vilmi

Authors (in alphabetical order): Jari Karjala, Vesa Kautto, Harri Kiljander, Marko Nieminen, Pekka Nikander, Sakari Pihlava, Kari Tuisku, Jukka Vialen

Typeset using the TeX typesetting system by Pekka Nikander

Helsinki University of Technology Language Centre

Foreword

This publication has been made by the students in my Computer English Course (Kie-90-309) at the Language Centre, Helsinki University of Technology, during the Spring term, 1991. As there is a dearth of good, up-to-date, ready-made teaching material in this area, the students were reluctant to use a course book. They preferred to make a presentation each and to try and produce some written work which could perhaps be used for teaching purposes later. As the course was very short (27 hours or 1 study week) they only had time to write one paper each and a glossary. The idea is that future students will use these articles as a basis for discussion in groups in the classroom. I hope I shall have the opportunity to use this material and, in the near future, to make suitable exercises and record the difficult words and phrases for pronunciation practice.

There is no copyright on this material, but I hope that any interested users will contact me and give me ideas for improving and adding to this work. Any kind of co-operation is welcome.

Address for correspondence:
Ruth Vilmi
Language Centre
Helsinki University of Technology
Otakaari 1
02150 ESPOO
Finland
Phone: +358-0-451 4292
Fax: +358-0-465077
Internet: rvilmi@hila.hut.fi

-----------------------------------------------------------------------------

1. Basic computer technology
Pekka Nikander

1.1 Introduction

Information systems play an increasingly important role in our modern society. More and more of our everyday technical household machines contain embedded microprocessors that control their functions. A modern car has several computers built into it. Traditional post is slowly being replaced by electronic mail. Newspapers emphasize the importance of telecommunication systems in the Persian Gulf war. However, even though we are affected by information systems every day, the term ``information system'' itself is often misunderstood or viewed from a very narrow angle.

From a very general point of view, an information system is any system that handles or stores information in one way or another. It may or may not be built up of computers. On the other hand, the word ``system'' gives us a feeling of a complex or complicated combination of numerous parts. Thus, by gluing together these two viewpoints we can give a more accurate definition for this fascinating tool: An information system consists of computer hardware, software and people, together with published and unpublished methodologies and means used to handle and store information both manually and automatically.

By this definition, a piece of paper, a pencil, a person and a means of writing form an information system. However, the paper and pencil alone are not a complete system; they are just a part of it. Similarly, a computer and a program alone are not enough, but together with the system designers and users they form a complete information system.

1.2 Hardware

Hardware is the physical (computer) equipment we can see and touch. Technological development has brought us to a level where computers get increasingly powerful all the time. On the other hand, the basic technology and computer architecture have been quite stable since the sixties.
1.2.1 Basic Technology

Today all computers are digital. All of them basically handle binary digits, or bits. A bit is the smallest amount of information. The value of a single bit can be zero or one, nothing else. However, when several of these bits are combined together, larger units of information can be formed. A typical modern computer system can store over a hundred billion bits.

Almost every modern computer handles at least eight bits simultaneously. Eight bits in a row are called a byte. A byte may contain any of 256 different bit combinations. These combinations can be used to denote different values. They may be used to mean integer numbers between zero and 255, or maybe between -128 and 127. In some other context a byte may contain a value that denotes a letter, digit or some special character. In this way text can be stored in computers; every character is turned into a bit pattern and stored in some byte. Words are formed by putting several bytes in a row. Words can be joined by taking a special bit pattern to denote the space between the words, and so on. In this way complete books can be stored as huge byte strings.

1.2.2 Computer Architecture

The prevailing computer architecture used today is the so-called von Neumann architecture. It is named after its inventor, a mathematician who, as early as the forties, formed the basic theories behind modern computers. A von Neumann computer has basically two distinct components: the CPU and the memory. The CPU, the central processing unit, is the computer's brain. Memory is just memory; it contains all the information the computer knows. However, in reality, every computer has a number of other components called Input/Output devices, or I/O devices for short. These peripherals allow the computer to communicate with the outside world and other computers.

From the hardware point of view, the memory is simply a huge array of memory cells. Each of these cells can normally store a single byte, or eight bits of information. The contents of each memory cell can be read and modified, or written, by the central processing unit. In this way the computer can access and modify the contents of its memory. Nearly every computer has two kinds of memory: primary and secondary. The primary memory is made of silicon and can be accessed very fast. Every single byte within the memory can be reached in less than 100 nanoseconds (a nanosecond is a billionth of a second). The secondary memory is actually a kind of I/O device. It can be a hard disk, a floppy disk or a tape. Secondary memory devices are used because their price per bit is much lower than that of primary memory. However, the computer cannot directly use the information in the secondary memory; the required information has to be transferred into the primary memory first.

The computer memory contains two kinds of information: programs and data. A program is a sequence of instructions meant to be executed by the central processing unit. Data is simply a collection of bytes handled by the program. If the memory contents are seen by the CPU as a program, each byte in the memory is taken as an instruction to do something. The instructions a normal CPU acts upon are utterly simple. For example, there might be an instruction for reading a data byte from the memory, adding another byte to the value and storing the byte again in the memory. Normally the computer reads instructions simply one after another, or linearly.
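To make these ideas a little more concrete, the short C program below sketches a deliberately simplified ``computer'': the memory is nothing but an array of bytes, a few of those bytes form a tiny program and the rest hold data. The instruction codes (LOAD, ADD, STORE, HALT) are invented purely for this illustration and do not correspond to any real processor; the point is only to show memory as an array of bytes and instructions being read one after another.

    #include <stdio.h>

    /* Invented opcodes for this sketch only; no real CPU uses them. */
    enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

    /* The whole "computer" memory: a small array of bytes.
     * Bytes 0..6 form the program, bytes 16 and 17 hold the data. */
    unsigned char memory[32] = {
        LOAD, 16,  ADD, 17,  STORE, 18,  HALT,
        [16] = 40, [17] = 2
    };

    int main(void)
    {
        unsigned char accumulator = 0;
        int pc = 0;                            /* program counter */

        for (;;) {                             /* fetch and execute, linearly */
            unsigned char instruction = memory[pc++];
            if (instruction == HALT) break;
            unsigned char address = memory[pc++];
            if (instruction == LOAD)  accumulator = memory[address];
            if (instruction == ADD)   accumulator = (unsigned char)(accumulator + memory[address]);
            if (instruction == STORE) memory[address] = accumulator;
        }
        printf("memory[18] = %d\n", memory[18]);   /* prints 42 */
        return 0;
    }

Running this sketch prints 42, the sum of the two data bytes. A real CPU works on the same principle, only with a much richer instruction set and millions of such steps every second.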
However, there are special instructions that command the CPU to read the next instruction from somewhere else within the memory. These instructions are called jumps. A jump may be conditional, i.e. depending, for example, on the value of some memory cell the CPU may either read the next instruction in sequence or jump to some other location within the memory. The I/O devices, or peripherals, are often directed by special instructions. When the CPU encounters a special I/O instruction within a program, it will access the special I/O hardware and command the peripheral devices. In this way, the computer may e.g. read one character from the keyboard or write a single letter on the computer screen.

Even though every basic instruction the computer understands is very simple, a computer can do magnificent things. How is this possible? There are two basic explanations:
a) The computer is blindingly fast. A normal home computer or a personal computer can typically execute at least one million instructions every second. A modern supercomputer is capable of executing at least ten billion instructions every second.
b) The basic instructions can be combined together to form larger software units. A typical program today contains at least ten thousand instructions, and in many cases well over a million instructions. However, these instructions are parts of several levels of larger systems, every level containing increasingly powerful software components.

1.2.3 Trends in hardware development

There are four major trends that can be clearly seen in the field of hardware. The exponential growth has been going on for the last forty years and there seems to be no end. More recent trends are the shift from central computing towards distributed environments, the birth and evolution of personal networked workstations and the introduction of more and more powerful wide area networks.

Since the invention of the computer in the forties, we have seen many generations of computers. The first computers were made of radio tubes. Then came the transistor, silicon chips and the microprocessor. Today the silicon chips are becoming more and more powerful. A single chip has more computing power than a number of large computers from the sixties. The growth of hardware power can easily be measured using many indicators. The three most important indicators are CPU power, secondary memory or disk space, and primary memory space. Figures 1, 2 and 3 (below) display typical values for midrange computers from 1985 to today. We can clearly see that each and every one of these aspects has grown exponentially over the given period -- and there seems to be no end. Every year we get more and more powerful computers that are able to store more and more information.

Figure 1: CPU power development from 1985 to 1991
Figure 2: Disk storage development from 1985 to 1991
Figure 3: Main memory development from 1985 to 1991

From the sixties to the beginning of the eighties computers were mostly seen as, well, computers. That is, people considered computers to be huge calculators. Computers were big, clumsy machines that were located in special air-conditioned machine rooms. The invention of the microprocessor changed all this. It made it possible to build small computers, personal computers, that were mostly used for everything but real computing. As the personal computers grew more powerful, they gradually began to contain capabilities that traditionally only big mainframes had had. At the same time, the minicomputers grew smaller.
The invention of big graphical screens and the mouse brought us graphical workstations. The Macintosh was the first newcomer from the microcomputer world, and Sun Microsystems was among the first manufacturers of workstations resembling the bigger machines.

A single personal computer or workstation alone cannot do very much. It has access to its local memory, which is typically quite small and contains only the user's personal data. However, if the workstation is hooked to a computer network the situation is quite different. Suddenly there are several orders of magnitude more information available. Documents can be transferred electronically from one workstation to another. In this way, networks increase the possibilities for interpersonal communication as well.

In the early eighties local area networks became common resources in many organizations. In the near future we will see how the LANs begin to get hooked together via wide area networks, WANs. First, big companies will interconnect their local area networks located at physically distinct sites. A little later, businesses will begin to realize the benefits of interconnecting the networks of distinct companies via WANs. This phenomenon is already happening in the research and university world, but has not yet penetrated into the administrative, financial and marketing fields.

1.2.4 Modern computer types

As was shown earlier, computer power increases exponentially. Therefore we cannot classify computers by their power. The power of today's mainframe will be found on the desktop within a couple of years. Thus, the only meaningful way to classify the various systems is by price. Using this method, we can propose the following definitions.
- A microcomputer is anything that costs less than $5 000.
- A workstation costs from $5 000 to $10 000. Workstations are nearly always connected to a network.
- A minicomputer or server costs between $10 000 and $100 000.
- A mainframe has its price tag somewhere in the range $100 000 -- $1 000 000.
- A supercomputer is anything that costs over one million dollars.

1.2.5 Summary

The power of digital computers increases every year. New hardware makes it possible to create new systems. Computers and networks penetrate everywhere. This will eventually change our society in a dramatic way.

1.3 Software

Development in the software field has not been as fast as hardware development. However, as more and more people have begun to use computers, they have also begun to require more from programs. New user interface techniques and programming methodologies have been developed at a rapid pace. On the other hand, the amount of scientific knowledge about the mathematical bases of programs and the psychological aspects of user interfaces has increased much more slowly.

1.3.1 Software development in the large

Even today most programmers produce code in the same way as in the sixties. Nearly all widely used programming languages were developed in the sixties or early seventies -- C, Lisp, Pascal, Cobol and Fortran are all quite old. In my opinion, only three major innovations have been made during the last twenty years.
- Object oriented programming methodologies and the concept of abstract data types have made it easier to manage large systems.
- Computer Aided Software Engineering, CASE, helps a lot when developing database-based software for accounting, management and the like. However, in the case of system software, user interfaces etc. it does not help very much.
- Distributed system software is gradually replacing traditional operating systems and other system software. Even though we have large networks with thousands of distinct computers hooked into them, in the future they will look like a single huge computer. This can be accomplished with distributed operating systems such as Mach and Chorus.

1.3.2 New innovations

The invention of spreadsheet programs dramatically changed the way budgets and other kinds of calculations are done. I think today there are only two major trends that may have a comparable effect in the future.
- Multimedia and hypermedia will eventually change the computer from a number and word processing machine to an actual media processing machine. Within a few years we will see the emergence of a plethora of computers capable of handling voice and live video.
- The development of artificial intelligence and voice recognition may eventually produce computers that can understand natural speech. This will lead to programs that somewhat resemble human servants. Apple Computer, together with a few other companies, is already investigating this kind of technology. They have also given these programs a name: they are called ``personal agents''.

1.4 People

There will always be two separate groups of people in the field of computing. Designers design computers and software. Users just use the ready-made software. However, the distinction between these two groups will gradually both diminish and increase at the same time. On the one hand, new programs will make it easier for users to build their own simple applications or customize the behavior of commonly used tools. On the other hand, much of the system software and many of the tools will get so complex that only highly educated experts will be able to understand and modify their fundamental structures.

1.4.1 Social effects

Technological development does not have just positive effects; it will also cause effects that should worry everyone. I am afraid that we will have three groups of people in the future.
- The computer experts and power users are people who design and use computers every day on a professional basis. They have a thorough understanding of computing, or at least of the subfield they are working in. They spend part of their time communicating through computer networks.
- The computer literate people are able to use standard applications and can cope with electronic media. They know the limitations of computers and understand their basic capabilities.
- The computer illiterate people are afraid of computers and cannot use them. They do not have clear mental models of how computers operate and cannot understand their limitations. They may even turn hostile towards the computerization of our society.

In order to minimize the negative effects of this development we really have to pay attention to education. The more people know about computers, the less they are afraid of them and the better they can use them. Another important issue is the attitude of the public. It should be clear to everyone that a computer is only a tool like a hammer or a saw; it is just somewhat more complex. There is nothing magical about computers; they are just machines made by people.

1.5 Summary

Hardware seems to grow at an exponential rate. The capacity of computers seems to double every two to three years. New hardware will offer a platform for interesting new software technologies such as multimedia and user agents.
However, we have to pay great attention to the social effects of computerization in order to minimize the emerging negative effects.

Glossary

Abstract data type  abstrakti tietotyyppi
Artificial intelligence  tekoäly
Bit  bitti, ykkönen tai nolla
Byte  tavu, 8 bittiä
CASE  -> Computer aided software engineering
Central processing unit  keskusyksikkö
Chip  mikropiiri
Computer  tietokone
Computer aided software engineering  tietokoneavusteinen ohjelmistosuunnittelu
Computer network  tietokoneverkko
Compiler  (ohjelmointikielen) kääntäjä
CPU  -> Central processing unit
Data  tietokonemuodossa oleva tieto
Disk  -> Hard disk
Distributed computing  hajautettu tietojenkäsittely
Electronic mail  sähköposti
Floppy disk  levyke
FLOPS  liukulukuoperaatioita sekunnissa (nopeuden yksikkö)
Gigabyte  miljardi tavua -> Byte
Hard disk  kova- l. kiintolevy
Hardware  laitteisto
Hypermedia  hypermedia
I/O  syöttö ja tulostus
Information  informaatio, tieto
Information system  tietojärjestelmä
Instruction  käsky(koodi)
Input  syöttö
Kilobyte  tuhat tavua -> Byte
LAN  -> Local area network
Local area network  lähiverkko
Mainframe  keskustietokone
Memory  muisti
Megabyte  miljoona tavua -> Byte
Micro computer  mikrotietokone, mikro
Micro processor  mikroprosessori
Mini computer  minitietokone, mini
MIPS  miljoonaa käskyä sekunnissa (nopeuden yksikkö)
Multimedia  multimedia
Object oriented programming  oliokeskeinen ohjelmointi
Operating system  käyttöjärjestelmä
Output  tulostus
Peripheral device  oheislaite
Personal computer  henkilökohtainen tietokone
Primary memory  keskusmuisti
Program  (tietokone)ohjelma
Programming language  ohjelmointikieli
RAM  -> Random access memory
Random access memory  luku- ja kirjoitusmuisti
ROM  -> Read only memory
Read only memory  lukumuisti
Software  ohjelmisto
Secondary memory  oheismuisti; levyt ja nauhat
User interface  käyttöliittymä
Tape  (magneetti)nauha
Voice recognition  äänen (puheen) tunnistus
Workstation  työasema
WAN  -> Wide area network
Wide area network  laajan alueen verkko

-----------------------------------------------------------------------------

2. A Generic View of Software Engineering
Marko Nieminen

2.1 History of programming

In the history of the electronic digital computer, the 1950s and 1960s were the decades of hardware. The 1970s were a period of transition and a time of recognition of software. The 1980s were the first pure decade of software. In fact, advances in computing may become limited by our inability to produce quality software that can use the enormous capacity of 1980s-era processors.

During the 1970s we recognized the circumstances that are called the software crisis. Actually the software crisis, which was quite a big problem, was first recognized in 1968 at the NATO Conference on Software Engineering. Software costs increased dramatically, becoming the largest cost item in many computer-based systems. Schedules and deadlines were set, but rarely kept. As software systems grew larger, quality became suspect. People responsible for software development projects had limited historical data to use as guides and less control over the course of a project.

The essence of the software crisis is that it is much more difficult to build software systems than we think it should be. After all, aren't we just throwing together some symbols that tell our computer what to do? Experience shows us that the world is much more complex than that.
In computer science education, we can build systems consisting of several hundred or even a couple of thousand lines of code and then feel confident that we fully understand the entire program. Modifying such software is not very difficult, since we can remember the structure of its design for many days after completing our first version. If our testing finds problems, we can even start over again. However, when we are dealing with software consisting of several hundreds of thousands of lines of code it is self-evident that we cannot restart our project. The effort required to complete such a job is clearly beyond the capacity of one person. So we need other people to work with us. This brings other difficulties, such as communication problems between the members of our team.

The software crisis will not disappear overnight. Recognizing the problems and their causes is the first step toward solutions. Then, the solutions themselves must provide practical assistance to the software developer, improve software quality, and finally, allow the software world to keep pace with the hardware world. There is no single best approach to a solution for the software crisis. However, by combining comprehensive methods for all phases of software development, better tools for automating these methods, more powerful building blocks for software implementation, better techniques for software quality assurance, and an overriding philosophy for coordination, control, and management, we can achieve a discipline for software development -- a discipline called software engineering.

2.2 A definition for software engineering

An early definition of software engineering was proposed at the first major conference dedicated to the subject: The establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines.

Although many other definitions have been proposed, all state that engineering discipline in software development is very important. Software engineering is an outgrowth of hardware and system engineering. It encompasses a set of three key elements -- methods, tools, and procedures -- that enable the manager to control the process of software development.

Software engineering methods provide the technical knowledge for building software. Methods consist of tasks that include project planning and estimation, system and software requirements analysis, design of data structures, program architecture and algorithm procedures, coding, testing, and maintenance. Methods for software engineering often introduce a special language-oriented or graphical notation and a set of criteria for software quality.

Software engineering tools provide automated or semi-automated support for methods. When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called computer-aided software engineering (CASE), is established. CASE combines software, hardware, and a software engineering database to create a software engineering environment.

Software engineering procedures are the glue that holds the methods and tools together and enables rational development of computer software. Procedures define the sequence in which methods will be applied, the deliverables that are required, the controls that help assure quality and coordinate change, and the milestones that enable software managers to assess progress.
2.3 The phases of software engineering

The software development process contains three generic phases. These three phases, definition, development, and maintenance, are encountered in all software development, regardless of application area, project size, or complexity.

The definition phase focuses on what. That is, during definition, the software developer attempts to identify what information is to be processed, what functions and performance are desired, what interfaces are to be established, what design constraints exist, and what validation criteria are required to define a successful system. Thus the key requirements of the system and the software are identified.

The development phase focuses on how. That is, during development, the software developer attempts to describe how the data structures and software architecture are to be designed, how procedural details are to be implemented, how the design will be translated into a programming language, and how testing will be performed.

The maintenance phase focuses on change, which is associated with error correction, adaptations required as the software's environment evolves, and modifications due to enhancements brought about by changing customer requirements. The maintenance phase reapplies the steps of the definition and development phases, but does so in the context of existing software.

Glossary

to keep pace with  saavuttaa
comprehensive  laaja, kattava, monipuolinen
outgrowth  syy, seuraus
encompass  yhdistää
establish  perustaa
assess  arvioida, määrätä, vahvistaa

References

Booch, G., Software Engineering with Ada, The Benjamin/Cummings Publishing Company, 1986, second edition.
Pressman, R. S., Software Engineering: A Practitioner's Approach, McGraw-Hill Book Company, 1987, second edition.

-----------------------------------------------------------------------------

3. Telecommunications and computer networks
Jukka Vialen

3.1 Introduction

This article is meant to give a short overview of telecommunication and computer networks and also to discuss where communications is heading in the near future. The main part of this text is kept at a very general level; details are collected in the last section, which contains a small list of concepts and abbreviations that one will very probably find in any article concerning the area.

3.2 The need for a network

The need for a network arises quite naturally. Conceivably one could interconnect a small number of users directly. But consider the problem of directly connecting hundreds, thousands, or millions of users who desire to communicate with one another. Making direct connections, even if it were possible, would not make much sense because many of them might be used only infrequently.

3.3 What is a network?

A network consists essentially of network switches, or nodes, interconnected by transmission links. These links can be wire, cable, radio, satellite, or fibre optic facilities. Network systems range from small networks interconnecting data terminals and computers within a single building or campus-like complex, to large geographically distributed networks covering entire countries or, in some cases, spanning the globe. Some networks are privately operated; others are public -- available for a fee to all who want to use them.

Some networks use packet-switched technology, in which blocks of data called packets are transmitted from a source to a destination. The source and destination can be user terminals, computers, printers, or any other types of data-communicating and/or data-handling devices.
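As a rough illustration of what such a packet might contain, the C structure below sketches one possible layout. The field names and sizes are invented for this example; real packet-switched protocols, such as X.25 or TCP/IP, define their own, considerably more elaborate, header formats.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* A hypothetical packet: a small header followed by a block of data. */
    struct packet {
        uint32_t source_address;        /* where the packet is coming from   */
        uint32_t destination_address;   /* where the packet is going         */
        uint16_t sequence_number;       /* lets the receiver reorder packets */
        uint16_t data_length;           /* number of bytes of user data      */
        unsigned char data[128];        /* the block of user data itself     */
    };

    int main(void)
    {
        struct packet p;
        p.source_address      = 1001;   /* made-up terminal numbers */
        p.destination_address = 2002;
        p.sequence_number     = 1;
        p.data_length         = (uint16_t)strlen("Hello over the network");
        memcpy(p.data, "Hello over the network", p.data_length);

        /* A network node would inspect only the header when forwarding. */
        printf("packet of %u bytes from %u to %u\n",
               (unsigned)p.data_length,
               (unsigned)p.source_address,
               (unsigned)p.destination_address);
        return 0;
    }

The nodes along the way look only at the header (mainly the destination address) when forwarding the packet; the data block itself is carried through unchanged.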
In this technology packets from multiple users share the same distribution and transmission facilities. Each packet of data includes the addresses where the packet is going and where it is coming from. Other networks are of the circuit-switched type, the most common being portions of the ubiquitous telephone networks to which we are all accustomed. In these networks, which generally transmit voice or data, a private transmission path is established between any pair or group of users attempting to communicate and is held as long as transmission is required. In digital transmission the 'private transmission path' is usually a private time slot in which the data or digitized speech is transferred.

3.4 About the physical media

As mentioned, the medium over which information is transferred in networks can be of several different types. This section discusses their use in today's and tomorrow's communication. In the area of international telecommunications the upward trend in the capacity of major international links will probably continue, but the race between satellites and submarine cables attracts much more attention. The use of fibre-optic submarine cables has restarted this race.

Optical fibres (which were first introduced in the early 1970s) transmit information using ultrashort waves at the wavelengths of light. In a natural environment, like air, this type of wave is attenuated too fast to be useful, but it can be guided along glass fibre threads, i.e. optical fibres. In telecommunications, optical fibres are today a very important medium in the internal traffic of large cities. More and more they are also replacing the national links as a basic medium for speech (and data) transmission. Many high-speed LAN systems are also based on optical fibres (e.g. FDDI), though the majority of commercial LANs use coaxial cables as the transfer medium. The newest trend in this area is ``wireless LANs'' -- the use of infrared light or radio waves in the microwave region for transmission.

3.5 Analog and digital networks

The bulk of the telephone plant throughout the world is analog. It has been designed to handle voice and, despite the huge growth of data transmission, voice transmission is still the most common mode of communication world-wide. The transmission of data in telephone networks requires the conversion of data signals to voice-type analog signals using devices called modems. This limits the rate of data transmission to at most 14.4 kbps (9.6 kbps is more common as an upper limit).

Among foreseeable developments, the introduction of the integrated services digital network (ISDN), which affects the very structure of the networks, is already well under way. Implementation of the ISDN will mean the replacement of networks which now exist separately -- such as telex, data transmission, image and voice networks -- by a single fully digitized network to which all sorts of terminals can be connected indiscriminately: the telephone of course, but also facsimile machines, telex, various kinds of computers, videophones and so on. This does not mean the physical creation of a new network; rather, it means replacing the old analog telephone network by a new fully digitized general purpose network.

3.6 Cellular networks

The early 80s brought cellular networks, which are characterized by full mobility without service interruptions and fully automated call procedures.
First generation cellular networks -- the Scandinavian NMT, the Bell-originated AMPS and its British derivative TACS, together with smaller systems -- are all intended for speech communication. Phones for these systems are called analog mobile phones, mainly because speech transmission in them is analog. The next generation of cellular mobile telephones (CMT), like the pan-European GSM system, aims at expanding network capacity and further standardization of systems by using digital speech transmission and TDMA techniques. They also make more value-added services, like short message services, facsimile and other data and speech applications, possible in a mobile environment. Further advantages of digital systems are improved speech quality and, at least theoretically, the possibility of further capacity increases by reducing bit rates in speech coding.

3.7 Communications in the 1990s

(The word "communications" here means both voice and data transmission through networks.) The infrastructure of the public communications networks in the future can probably be divided into three parts: the general purpose network (ISDN), radio networks and broadband networks (B-ISDN). The locally used networks (LANs) inside companies, offices and campus-like complexes will not lose their importance, of course.

The biggest change in the use of communications will happen in the area of data transmission (this evolution is already well on its way). Every office, working hall, factory or car will have its own data terminal. It is not only the internal data transmission in companies that will grow; the communication between companies will also grow rapidly. Data transmission will soon be an integrated part of the business operations of a successful company. A company can distribute its activities around the world, yet its internal communication can work in real time because of modern telecommunication techniques.

The rapidly increasing use of optical fibres offers new possibilities also in the area of mass communications (TV, radio). The capacity of optical fibres is so great that TV broadcasting can easily be done through networks designed for communications purposes. This opens new possibilities for the distribution of TV programmes. In homes the use of optical fibres will be insignificant in the 1990s, but there is no doubt that in new buildings the future use of optical fibres will be taken into account. Wireless (radio) transmission is probably the most rapidly growing area of communications today and in the near future. The main reason for this is the growing use of mobile phones.

3.8 Some basic concepts and abbreviations

AMPS  Advanced Mobile Phone System
ANSI  American National Standards Institute
ARPAnet  A network developed for research purposes by the U.S. Department of Defense in the late 1960s.
ATM  Asynchronous Transfer Mode. A switching system that will be used in the broadband networks of the future.
B-ISDN  Broadband ISDN. Capacity exceeding the 2 Mbit/s attainable by using 30 B-channels.
CityNet  A product offered by HPY (the local telephone company in Helsinki). Based on optical cables.
CRC  Cyclic Redundancy Check. A method for ensuring that data has not been corrupted during transmission.
CSMA/CD  Carrier Sense Multiple Access with Collision Detection. The stations monitor the channel and start transmission when the channel is free. If two or more stations begin their transmission at the same time a collision occurs. Works well when the amount of traffic is small.
DataNet  A 'virtual-network' service. A customer's LANs can be connected to the DataNet network.
Datel  A network for business customers from TELE.
Datex  A circuit-connection based network from TELE. Widely used in the Nordic countries.
DQDB  Distributed Queue Dual Bus. IEEE standard number 802.6. A standard for a MAN based on the use of fiber.
Ethernet  IEEE 802.3. Theoretical speed 10 Mbit/s (in 1989). Uses mainly coaxial cable as the physical medium and a 'channel' as its topology. Uses CSMA/CD. The most widely used LAN.
FDDI  Fiber Distributed Data Interface. ANSI X3T9.5. A fiber-based high speed LAN and probably the standard for a 100-Mbit/s token-passing ring network in the near future. A maximum of 500 stations can be attached to the network, with up to 200 kilometers total distance end to end. The standard also specifies that there can be no more than 2 kilometers between active attachments.
Frame  A packet of information ready to be sent to the network. Includes both the data and the address of the receiver. The basic 'idea' behind any kind of asynchronous transmission.
GSM  Groupe Special Mobile.
IEEE  The Institute of Electrical and Electronics Engineers, Inc.
Internet  A worldwide computer network. Its development started from the ARPANET of DARPA, a research agency of the U.S. Department of Defense. It consists of several independent networks which are connected together. Most of these networks are located in the USA and Canada.
ISDN  Integrated Services Digital Network. The public telecommunication network of the future. It has so-called 'B' channels (capacity 64 kbit/s) and 'D' channels (capacity 16 kbit/s). The D-channel will be used mainly for signalling purposes whereas the B-channels will be used for the transfer of the actual information. ISDN is fully digital, which means, for example, that you do not need a modem with your computer when transmitting data. Moreover, it allows you, for example, to be connected simultaneously to two or more places with your normal extension.
ISO  International Organization for Standardization
ITU  International Telecommunication Union
LAN  Local Area Network
MAC  Media Access Control. Actions related to the transmission path. For example CSMA/CD.
MAN  Metropolitan Area Network
MUX  Multiplexer.
NMT  Nordic Mobile Telephone
OSI  Open Systems Interconnection. A reference model of the ISO. Has seven 'layers' by which a network developer can describe how his/her network is implemented. The layers cover everything from the physical medium to applications.
PCM  Pulse Code Modulation. A modulation method used in digital speech transmission.
SMDS  Switched Multimegabit Data Service.
SNA  Systems Network Architecture. IBM's standard for a common architecture for communication within a company's diverse equipment.
TACS  Total Access Communication System
TCP/IP  Transmission Control Protocol / Internet Protocol.
Telex  Based on the public telecommunications (telephone) network. Not very useful for data transmission because of its slowness, but very useful for sending short messages because it is so widespread.
Token-passing  In a ring-based network (e.g. FDDI) a 'token' circulates around the ring, reaching one station at a time. One station at a time can seize the token and begin to transmit. The station appends the token to the end of its transmission, so there can be several messages going around the ring at the same time.
Token Ring  IEEE standard 802.5. Known as IBM's product since 1985. Theoretical speed 4 Mbit/s. Uses ring topology and point-to-point connections.
WAN  Wide Area Network
X.25  A standard for a packet-switched network

Glossary

conceivably  mahdollisesti
to span the globe  ulottua koko maapallolle
ubiquitous  kaikkialla oleva
accustomed to sth  tottua jhk
diverse  sekava, kirjava

References

Dimitri Bertsekas and Robert Gallager, Data Networks, Prentice Hall Inc, 1987.
Juha Rapeli, ASICs for Cellular Mobile Telephones, Nokia Mobile Phones Ltd, Oulu Research Center, 1990.
Prosessori magazine, Erikoislehdet Oy, 13/1990 and 2/1991.

-----------------------------------------------------------------------------

4. Knowledge Engineering
Harri Kiljander

4.1 Introduction

Knowledge Engineering (KE) is a relatively new branch of computer science which has been overloaded with several different meanings. It can be seen as a group of methodologies for building certain kinds of software applications -- namely expert systems; this definition brings it closer to traditional software engineering. Quite often it is also associated with all the unrealistic expectations focused on artificial intelligence research in general during the last few decades.

4.2 History of Artificial Intelligence Research

Artificial Intelligence (AI) has been a research topic for over thirty years now. From the middle of the fifties to the middle of the sixties the goal of the research was, roughly, to build general problem solving systems. This means that the AI programs developed at that time were supposed to be able to solve just about any kind of problem, provided they were first given enough background information. The results achieved were not very promising; the best systems developed were extremely slow and still capable of solving only very simple problems. From this the researchers concluded that some basic technical issues of knowledge representation and, for example, search algorithms had to be solved before any reasonable problem solvers could be built.

From the mid-sixties to the mid-seventies the main emphasis of AI research was on developing efficient ways to represent knowledge and to find a correct solution to a problem from the search space of all possible solutions. This period of time gave us more powerful problem solvers than ever before, but they still did not have any greater importance outside the academic world.

From the mid-seventies we have been living with the third wave of AI, and it is still continuing. This paradigm or way of thinking is expert systems. The distinction between the earlier general problem solvers and expert systems is that while the former tried to answer just about any kind of question, the latter focus on a narrow problem domain, trying to mimic the problem solving processes of a genuine human expert in that domain. During this period people have started using the term Knowledge Engineering, meaning applied AI, i.e. the art and science of building expert systems. KE is the opposite of pure AI although it uses the methods developed by AI research. Another term which has come into existence is Knowledge Engineer, meaning a person whose task is to extract the domain knowledge out of the mind of the human expert and turn it into an expert system; actually a knowledge engineer is a software engineer using knowledge engineering methods to build expert systems.

4.3 Knowledge Engineering Tools: Software

Knowledge Engineering deals with the automation of decision processes.
Conventional data processing utilizes predefined algorithms which are executed sequentially and always in the same manner, while the inference processes used in expert system applications are case dependent, i.e. the inference process consists of steps which depend on, and whose ordering depends on, the data the system is being used with. Because data and program logic are mixed in order to control the system, expert systems represent data-driven data-processing.

The software tools used in expert system building can be roughly categorized into three groups: programming languages, shells and toolkits.

The most frequently used programming languages for expert system building have been Lisp and Prolog; Lisp was conceived in the fifties while Prolog was born in the middle of the seventies. Lisp is a functional language and its name stands for LISt Processing, whilst Prolog has its origins in logic and its name stands for PROgramming in LOGic. Of course there is no reason why just any programming language could not be used in the development of expert systems; it is possible to build an expert system even in Cobol -- although it is not advisable because of the effort needed, performance issues and so on. Languages like Lisp and Prolog are better suited to the required data-driven approach.

Shells usually contain an interpreter and some predefined way to represent knowledge, e.g. in the form of rules. It can be possible to produce prototypes of the system quite fast when working with shells, but for the speed gained the richness of knowledge representation is at least to some extent lost. The distinction between languages and shells is sometimes rather vague, since some shells are implemented just as extensions on top of e.g. Lisp or Prolog. Shells can be subdivided into rule based shells, natural language processing shells etc.

Toolkits are the most sophisticated and flexible expert system building tools. Typically they are hybrid environments offering e.g. several knowledge representation paradigms surrounded by complete prototyping and application development environments. Previously the majority of them were closely tied to Lisp, but during the last few years there has been a movement from Lisp-based systems towards C-oriented systems. It can be said that Lisp (and Prolog) is not dead, but it has become a very specialized concern. Many of the leading commercially important KE vendors have recently released new C-oriented expert system development tools.

The most sophisticated KE toolkits nowadays work in graphical user interfaces. This is, first, because the data processing world in general is moving towards GUIs and KE tools have always been state-of-the-art applications. Second, a graphical user interface solves many knowledge representation problems which could not have been so easily solved with character based user interfaces, such as how to represent and edit complex hierarchical data structures in an efficient way.

4.4 Knowledge Representation

A conceptualized and simplified expert system contains two things: a knowledge base, which is the knowledge the system has, and an inference engine, which operates on that knowledge to produce some meaningful results. The contents of the knowledge base may be rules or facts or both. The inference engine picks up some facts from the knowledge base and, by using some of the rules in the knowledge base, it deduces new facts and asserts them into the knowledge base, repeating this process over and over again until some specified state is reached.
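The sketch below shows, in C, a minimal version of the loop just described: a forward-chaining engine that repeatedly applies rules to known facts until nothing new can be deduced. The facts and rules are hard-coded here purely for illustration; in a real expert system shell they would be read from a knowledge base.

    #include <stdio.h>
    #include <string.h>

    #define MAX_FACTS 32

    /* The knowledge base: one initial fact and two if-then rules. */
    static const char *facts[MAX_FACTS] = { "the cat is in the moon" };
    static int fact_count = 1;

    struct rule { const char *condition; const char *conclusion; };

    static const struct rule rules[] = {
        { "the cat is in the moon", "the moon is not empty" },
        { "the moon is not empty",  "somebody should tell the astronomers" },
    };

    /* Is the given fact already in the knowledge base? */
    static int known(const char *fact)
    {
        for (int i = 0; i < fact_count; i++)
            if (strcmp(facts[i], fact) == 0) return 1;
        return 0;
    }

    int main(void)
    {
        int changed = 1;
        while (changed) {              /* repeat until no new facts appear */
            changed = 0;
            for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
                if (known(rules[i].condition) && !known(rules[i].conclusion)
                        && fact_count < MAX_FACTS) {
                    facts[fact_count++] = rules[i].conclusion;  /* assert new fact */
                    printf("deduced: %s\n", rules[i].conclusion);
                    changed = 1;
                }
            }
        }
        return 0;
    }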
It can be seen that only the knowledge base of the expert system is application dependent, while the general purpose inference engine can be used with diverse applications.

Before we can use the knowledge to produce some meaningful results we must first have ways to represent and store it. The problems of knowledge storage are mostly technical and deal with the same kinds of issues as conventional database systems, so we need to focus on the much more important issue of knowledge representation. There are two almost mutually exclusive requirements: first, the representation has to be easily understood by the people developing and maintaining the system, and second, the representation has to be efficient enough for the computer to be able to handle it fast.

The oldest and still maybe the most widely used way to represent knowledge is rule-based reasoning. Within this knowledge representation paradigm the knowledge is stored in the form of facts and rules. The facts are of the type ``the cat is in the moon'' and the rules are of the type ``if thing_x is in place_x then place_x is not empty''; from these our hypothetical system would deduce the fact ``the moon is not empty''. Pure rule-based systems are well suited to certain kinds of applications but for some others they are too strict and limiting; usually it is the system that decides what to ask and in which order, not the user.

At the other end of the knowledge representation spectrum are different conceptual hierarchies or frame systems using e.g. attribute inheritance. For example, we could state that cars have doors and that Porsche is a car; the actual representation of these facts would of course be environment dependent. If we now knew something to be a Porsche then we would automatically know it to have doors: by the first fact it would be a car, and by the second, cars have doors, so this particular car would also have doors. This kind of knowledge representation paradigm moves us close to object oriented programming and modelling in general. By building an expert system on top of a graphical user interface and using the object oriented approach it is possible to build more user friendly (and more complex, too) expert systems; this seems to be the future we are heading towards.

Selecting an inefficient way to represent domain knowledge within an expert system is likely to hamper or even halt the development of an otherwise successful expert system, so extreme care should be taken to avoid mistakes like this. It can well be said that it is even more important for the systems engineer to really get acquainted with the application domain when working with expert systems than with more traditional systems and tools.

4.5 Knowledge Engineering Tools: Hardware

Before the eighties nearly all AI systems development had been done using conventional computer systems. Although it had been possible to implement large and effective expert systems -- like MYCIN, which uses expert medical knowledge to diagnose e.g. bacterial infections of the blood, and PROSPECTOR, which determines the probable location and type of ore deposits based on geological information about a site -- it was seen that dedicated Lisp-oriented computers would perform better with applications like these. There has never been a comparable emphasis on Prolog-oriented computers, although the Japanese are using them within their fifth generation computer project; the differences between Prolog-computers and Lisp-computers are small anyway.
The differences between ``ordinary'' computers and Lisp-computers are quite small if we compare them, for example, with the differences between ordinary computers and neural network computers. Lisp-machines are based on the traditional von Neumann architecture, but they have three additional features compared to ordinary machines. First, there are tag bits in the memory words, because Lisp is a dynamically typed language and the tag bits are needed to handle the internal type information faster than would be possible in software. Second, Lisp-machines have a stack-based architecture, because Lisp is a functional, recursive language. Third, the machines have a ``flat'' virtual memory, because Lisp keeps its data structures in a single-level ``flat'' memory and getting data from secondary memory to primary memory has to be as easy and fast as possible.

Ten years ago Lisp-machines clearly outperformed conventional computers, although they were also much more expensive. Although the prices of Lisp-machines have gone down just like those of any other computing equipment, and it is now possible to get efficient Lisp-hardware implementations even for high end personal computers, the benefits of the Lisp-machines are not so clear any more. An increasing number of KE software tools have been and are being ported to industry standard workstations, which have a good price-performance ratio and a lot of other applications already available, while there have been few new releases of commercially important software for Lisp-machines lately. It may well be the case that in the future the dedicated Lisp-machines lose some of their earlier importance, and it is not totally impossible that AI hardware will merge with conventional data processing hardware -- especially the workstation world. After all, it is the software which counts, not the hardware. It does not actually matter how powerful the machine is, as long as it is powerful enough; what you can do with the machine is more important, and there Lisp alone is not enough.

4.6 Expert System Feasibility

The idea of developing expert systems has become quite a fashionable one during the last few years. In any case it should be kept in mind that they do not perform miracles, but they do require hard work and skilled professionals to implement them. Below follows a checklist which should be carefully considered before any decisions are made for or against a possible expert system.

The following prerequisites should hold for expert system development to be possible:
- task does not require common sense
- task requires only cognitive skills
- experts can articulate their methods
- genuine experts exist
- experts agree on solutions
- task is not too difficult

If at least one of the following items is true, then expert system development is justified:
- task has a high payoff
- human expertise is being lost
- human expertise is scarce
- expertise is needed in many locations
- expertise is needed in hostile locations

Finally, if all of the following issues are also true, then expert system development is appropriate:
- task requires symbol manipulation
- task requires heuristic solutions
- task is not too easy
- task has practical value
- task is of manageable size

Glossary

cognitive skills  skills related to knowledge and perception, e.g. learning
data-driven data-processing  a form of data processing in which the flow of program control is to a large extent controlled by the input data
expert system  a computer (software) application mimicking the problem solving processes of a human ``expert'' to solve a given problem in a specific problem domain
GUI  Graphical User Interface
heuristic  something serving to indicate or point out, e.g. a good hypothesis
hybrid  something composed of elements of different kinds
inference  the process of deriving consequences from premises
knowledge engineer  a software engineer using knowledge engineering methods to build expert systems
knowledge engineering  the art and science of how to build expert systems; a form of software engineering in general
paradigm  a distinctive pattern that a set of things has in common
state-of-the-art  up-to-date, leading edge
to have a high payoff  to make a good profit with something

References

Paul Harmon, What's happening in the expert systems market?, Intelligent Software Strategies, August 1990.
Markku Syrjänen, Tietämystekniikka -- sääntöpohjaisista järjestelmistä kohti oliosuuntautuneita

-----------------------------------------------------------------------------

5. Computer Graphics
Sakari Pihlava

5.1 A brief history of computer graphics

When the first commercial computer graphics stations appeared they were expensive and of poor quality, in terms of pixel resolution and colour. They were affordable only for large institutions, government laboratories, the military, large firms and the largest universities with an interest in this area. The 80s changed all this. Computer graphics came to be part of our daily life. Prices dropped and the quality of display and control devices improved dramatically. This was due to the increasing interest in microcomputers and personal computers. In order to sell them and to stay ahead of all rivals, many research projects were started and successfully completed. Computer graphics is still one of the most exciting and rapidly growing fields in computer science, due to the value of the picture as an effective means of communication and the way computers are used in all areas.

5.2 Today's world of computer graphics

There is virtually no area in which graphical displays cannot be used to some advantage. Although early applications had to rely on expensive and cumbersome equipment, advances in computer technology have made interactive computer graphics a practical tool. Today we find computer graphics used routinely in such areas as business, industry, government, art, entertainment, advertising, education, research, training and medicine.

5.3 Computer-Aided Design

For a number of years, the biggest use of computer graphics has been as an aid to design. CAD methods provide powerful tools. When an object's dimensions have been specified for the computer system, the designer can view and rotate the object to see how it will look when constructed. Experimental changes can be made freely since, unlike hand drafting, the CAD system quickly incorporates changes into the display of the object.

Electrical and electronics engineers rely heavily on CAD methods. Electronic circuits, for instance, are typically designed with CAD systems built for this purpose. Using pictorial symbols to represent various components, a designer can build up circuits on a video monitor by adding components to the circuit layout. This helps the designer to minimize the size of the design and the number of components used.
Automobile, aircraft, aerospace, and ship designers use CAD in a similar way to the electrical and electronics engineers. Their systems have symbols related to their field, but the basic idea of interactive design is the same. However, they usually have complicated simulation techniques included in their systems. The performance of vehicles can be tested to some extent even before the vehicles are built, and the appearance of a product can be seen.

Building designs are also done with CAD. Architects interactively design floor plans and the arrangement of windows and doors. The utilization of space in an office or factory complex is worked out using CAD systems. Three-dimensional building models permit architects to study the appearance of buildings in different environments. The most sophisticated systems even let the architect ``walk'' inside the buildings.

5.4 Graphs and Charts

A number of commercially available graphics programs are designed specially for the generation of graphs and charts. Graph types such as bar charts, line graphs, surface graphs and pie charts are familiar to us from newspapers and TV. These are methods of presenting information in a way that is well understood and easy to read. Business graphics is one of the most rapidly growing fields of application; it makes extensive use of graphs and charts in trying to cope with the huge amount of information that is compiled for managers and other individuals. Some packages include methods of producing slides or transparencies to be used when presenting the data to an audience.

5.5 Computer Art

Both creative and commercial art make extensive use of computer graphics. Paintbrush programs allow artists to create and colour pictures on a video monitor. This is done interactively using a graphics tablet, a mouse or some other method of entering data. Computer-generated art is widely used in commercial applications like graphical user interfaces, TV and company logos, and advertising designs.

5.6 Computer Animation

This is a fairly new field of computer graphics due to the poor quality of early graphics devices and the slow speed of early computers. Movies like Star Trek II -- The Wrath of Khan and The Last Starfighter contain a lot of computer graphics. Animation methods are also used in education, training and research applications. Flight simulators now use computer animation in a way that draws on all fields of computer graphics. Flight conditions can be changed from snow storms to clear summer afternoons, and you can fly to any major airport in the world.

5.7 Advances in computer graphics technology

Photo reproduction has lately been under discussion on TV and in many magazines. It all started when a well-known film star admitted to reporters that she had used photo reproduction to improve her appearance. She said that it is fairly common today for pictures to be retouched by computers to make people look younger. Due to the development of computers and graphics software this is now possible for all major news pictures and for TV.

Another interesting field today is Cyberspace -- a world controlled by computers (the term was invented by William Gibson, a science fiction author). The current research leader in this field is VPL Research, which is located in Silicon Valley, California. Their research has produced many innovations related to the study of virtual reality. These innovations include the data glove, which can sense the movements of the hand and pass the information on to a computer.
These innovations are now used successfully to solve many problems, such as movement and vision inside Cyberspace. The data glove is the key to moving around inside Cyberspace. Vision is provided by a helmet with two LCD displays fitted into it. The hardware required to control Cyberspace is relatively cheap and commonly used. VPL Research uses two Silicon Graphics Iris computers, one for each eye, to produce the graphics and an Apple Macintosh to control the virtual reality software. Due to public interest, some commercial products will be released next year. One will be the new AutoCad, which can be used to create virtual reality buildings that can be inspected inside Cyberspace. NASA is one of VPL Research's customers and is itself doing research related to this field. NASA's project is Telepresence. People can travel from place to place through the telephone lines. You could, for instance, play tennis with your friend in Tokyo using Telepresence. The game would take place in a synthetic environment like Cyberspace. Telepresence could also be used for meetings and many other situations which require travelling. Computer graphics is growing very fast and is already used effectively in most fields. People have found that graphics can make information much easier to understand. Graphics can also provide us with the means of doing something that is impossible in real life. The future will show us more surprises in the field of computer graphics.
Glossary
CAD, Computer Aided Design: tietokoneavusteinen suunnittelu
control device: tiedon syötön tai tulostuksen ohjaukseen käytetty laite
Cyberspace: kyberavaruus, tietokoneiden hallitsema tila
graphics tablet: levy, joka muuntaa kynän liikkeen koordinaattipisteiksi
hardware: tietokonelaitteet
interactive: vuorovaikutteinen
innovation: keksintö tai idea
LCD: tietokoneen näyttötyyppi
photo reproduction: valokuvien muokkaus
pixel: piste, joista näytön kuva muodostuu
software: tietokoneohjelmistot
Telepresence: teleläsnäolo; puhelinlinjojen avulla tapahtuva keinotodellisuuksien yhdistäminen yhdeksi yhteiseksi
transparency: piirtoheitinkalvo
user interface: käyttöliittymä
virtual reality: keinotodellisuus
References
Hearn & Baker, Computer Graphics, Prentice Hall, 1986
Glassner, Graphics Gems, Academic Press, 1990
MikroPC, OY Talentum AB, 1/91
-----------------------------------------------------------------------------
6. Graphical User Interfaces
Vesa Kautto
6.1 About Windows and Windowing
The ability to view things visually and to run several programs at the same time makes windowing a necessary and attractive way to deal with the complexity of modern computing. What a graphical user interface or a window system can offer -- to both a novice and an expert user -- is a friendly, easy-to-use work environment.
6.1.1 Basics
Human-computer interaction is based on a computer screen on which there are rectangles (created using graphics) called windows, and a pointing device -- usually a mouse -- within easy reach. A user can select, drag and move windows using a mouse that has at least one button that can be clicked. When a user presses the button in a certain area of the screen -- e.g. over some window -- he may get a popup menu. Popup menus, and menus in general, contain a collection of choices that a user can make. It is up to the application programs how the windows behave. The window system itself creates only one window: the root window. Then there are application programs. Some of them can have several windows on-screen simultaneously. The number of windows per application is not limited, but if there are too many on the screen at the same time you notice it when you try to get something done and end up searching for a window in a pile of overlapping windows.
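The relationship between the root window and the application windows is essentially a tree. The sketch below, written in Python for illustration only (the class and the window names are invented, not any real window system's interface), shows the idea: the window system owns the single root window, and every application window is created as a child somewhere under it.

    class Window:
        """A window knows its name, its parent and its children."""
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self.children = []
            if parent is not None:
                parent.children.append(self)

        def show_tree(self, depth=0):
            """Print the window hierarchy as an indented tree."""
            print("  " * depth + self.name)
            for child in self.children:
                child.show_tree(depth + 1)

    # The window system creates exactly one window by itself ...
    root = Window("root window")

    # ... and the application programs create the rest.
    editor = Window("text editor", root)
    Window("search dialog", editor)
    Window("clock", root)
    Window("file manager", root)

    root.show_tree()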
6.2 Current Window Systems
Many vendors have their own window systems for their own computing platforms. For example, the Apple Macintosh doesn't have a "traditional" command-line interface at all, only a graphical user interface. IBM Personal Computers and their clones have several window systems, of which Microsoft Windows is the most popular. The X Window System is the de facto standard on Unix workstations, although a vendor-specific window system may be better suited to some specific areas of use. Thus some special-purpose graphical workstations have their own window systems and interaction devices. Utility programs are delivered with window systems. Among them are different kinds of graphical toy programs, such as clocks, calculators, calendars, eyes that follow the movement of the pointer and -- of course -- games. But there are also some useful programs, like file and workspace managers. A file manager visualizes a file hierarchy. This helps a user to navigate through directories and, since files (and directories, too) are usually represented as icons, ordinary interaction methods can be used to manipulate both directories and files. A workspace manager is intended to avoid window clutter. Collections of windows and icons are arranged as screens. A workspace manager collects these screens, plus some system utilities, into a single window. Selecting a screen means changing the context, e.g. from CAD to desktop publishing.
6.3 Window Components
Window systems can have a very different look-and-feel and thus cannot all be used in the same way. An example of a window may help in understanding what kinds of things it might contain. Figure 4 displays the functional frame of an X11/Motif window.
Figure 4: An X11/Motif window
6.4 Interaction Methods/Techniques
Since computers are so lifeless and unimaginative on their own, they need humans to instruct them. To accomplish this human-computer dialogue, several methods and devices have been invented. You can change the window that has the focus of input using a pointing device -- usually a mouse -- by simply pointing at the correct window (sometimes a click is needed). The same goes for areas or fields within a window. Clicking means simply pressing a mouse button. Multiple button clicks are sometimes used, too. Selection is one of the most popular methods used in graphical user interfaces. When you select something, you point at it and click. It enables you to grab an object and manipulate it. Several objects can be selected and manipulated at once, too (if there are actions you can take on several objects). You can display a menu by pressing a mouse button over a menu label and holding it down. A menu selection is made by moving the pointer over the correct choice and releasing the button. Conventional typing is also needed from time to time -- but not as often as in the past. There are still some text-oriented applications, like terminal emulators and editors, that occupy a window but support only limited use of the mouse. Other applications prefer small text-entry boxes that often expect only a single line of input.
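The pointing, clicking and menu selection described above can be tried out with almost any toolkit available today. The sketch below uses Python's standard Tkinter library only as a convenient stand-in (the chapter itself is about window systems such as X11/Motif, and the widget names and menu behaviour here are Tkinter's, not Motif's): it opens a window with a push button, a one-line text-entry box, a label and a small menu, each of which reacts to mouse clicks through the functions given as their commands.

    import tkinter as tk

    def button_clicked():
        status.config(text="Button was clicked")

    def menu_choice(choice):
        status.config(text="Menu selection: " + choice)

    root = tk.Tk()
    root.title("Interaction sketch")

    # A push button: point at it and click.
    tk.Button(root, text="Press me", command=button_clicked).pack()

    # A small text-entry box that expects a single line of input.
    tk.Entry(root, width=30).pack()

    # A label used to show what the user did last.
    status = tk.Label(root, text="Nothing selected yet")
    status.pack()

    # A menu: open it from the menu label and pick a choice.
    menubar = tk.Menu(root)
    filemenu = tk.Menu(menubar, tearoff=0)
    filemenu.add_command(label="Open", command=lambda: menu_choice("Open"))
    filemenu.add_command(label="Quit", command=root.destroy)
    menubar.add_cascade(label="File", menu=filemenu)
    root.config(menu=menubar)

    root.mainloop()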
6.5 Interaction devices (in the estimated order of popularity)
Monitor: the main output device
Keyboard: the input device that practically all computers have; text entry is its primary function
Mouse: very popular on computers that have graphical user interfaces
Lightpen: used to select positions on the screen
Trackball: replaces the mouse in some portables
Joystick: useful in computer games and other applications that need to ``shoot''
Tablet: moving a pen on a tablet moves a cursor on-screen
Digitizer: for reading in pictures or sheets of paper
Touch panel: not very popular at the moment in general-purpose workstations
Voice system: the input and output method of the future, but very rarely used today
6.6 Application Programs
Graphical user interfaces are practical in many areas of computing, especially where user interaction plays a major role. Some of the areas where such programs already exist are Computer Aided Design, Desktop Publishing, Integrated System Engineering and Engineering Database Management. These programs can, among many other things, model various systems, show 2- and 3-dimensional views of designs, simulate different kinds of models or systems, and analyse the performance of individual components and of the whole.
6.7 Window Programming & User Interface Design
User interface programmers use graphical objects -- also called widgets or gadgets -- to build an interface for an application. There are several classes of widgets, including different kinds of bulletin boards (object containers), push buttons, various menus, scrolled windows and text-entry fields. The first job is to design the layout of a window, i.e. which widgets go where, what kinds of connections are needed between widgets, and so on. That takes some code-writing, but after that the programmer needs to write more code to implement the actions the user can take, and that is hard work. Window systems are usually event-driven. That means that if no user-initiated operation is going on, the application does nothing. Only user actions, or events -- e.g. movements of the mouse -- cause things to happen, e.g. the focus of input or output to change. These user-generated events have to be processed, but fortunately not all events are handled by the programmer. Low-level window system software takes care of some of them, but eventually there are some events that need the attention of the programmer. Events are usually handled by invoking a special kind of function (a callback) that has to be written according to the context. Actually, all you need to do is design the layout, write the callbacks and connect them to your application, but that is a lot easier to say than to do.
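The event-driven style described in the previous paragraph can be caricatured in a few lines. The sketch below is plain Python with no real window system behind it; the event names and the canned event stream are invented for illustration. The programmer registers callbacks for the event types he cares about, and a simple dispatch loop (the part that the low-level window system software normally provides) delivers each incoming event to the matching callback.

    # Callbacks written by the application programmer.
    def on_button_press(event):
        print("button pressed at", event["x"], event["y"])

    def on_key_press(event):
        print("key pressed:", event["key"])

    # The table that connects event types to callbacks.
    callbacks = {
        "ButtonPress": on_button_press,
        "KeyPress": on_key_press,
    }

    # A canned stream of user-generated events (normally produced by
    # the mouse and the keyboard, not written out by hand like this).
    event_queue = [
        {"type": "ButtonPress", "x": 12, "y": 34},
        {"type": "KeyPress", "key": "a"},
        {"type": "MotionNotify", "x": 13, "y": 34},   # no callback registered
    ]

    # The event loop: nothing happens until an event arrives.
    for event in event_queue:
        handler = callbacks.get(event["type"])
        if handler is not None:
            handler(event)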
Glossary
Callback is a name used in the X Window System for a function that describes the instructions that must be executed to accomplish some action.
The active window has the focus of input, i.e. it gets all the input events. Within a window, an area or a field has the focus of input; it is usually the area under the pointer. An active cursor, a highlight or some other suggestion of activeness indicates the focus of input.
A portable is a small computer that is not bound to one place but can be moved around easily.
A utility program can be used for various purposes, e.g. for printing or managing windows.
Window clutter means a situation in a windowing environment where there are too many windows on-screen simultaneously and they clutter up the screen, overlapping each other.
References
M. Pauline Baker and Donald Hearn, Computer Graphics
Using the X Window System, Hewlett Packard Manual
Application program brochures from various companies
-----------------------------------------------------------------------------
7. Computer Games
Jari Karjala
7.1 A short history of computer games
Games have been around as long as the idea of computers. In his book On the Economy of Machinery and Manufactures, published in 1835, Charles Babbage discusses the possibility of making a machine play chess. He outlines what we now call game trees and the min-max strategy. He also mentions that he had designed a machine which would play TicTacToe, but he did not build it since "he would not make any money with it." It actually took over a hundred years for these ideas to be implemented. The first real implementation of a computer game was TicTacToe in the early 1950s on Univac hardware. The games of Bridge and Draughts were also implemented in the late 1950s, on an IBM 701 computer. Samuel's Checkers, as the Draughts-playing program was called, was the first computer program to reach master level. In the 1960s some of the MIT hackers invented a game called Spacewar. This was the first game which was not just a computer implementation of an old board game, but something which could not exist without a computer. You controlled a spaceship, and of course there was an enemy which you had to shoot before it killed you. To make playing more interesting there was also a sun in the middle of the screen which created a gravity field. The simulation of the gravity field was, of course, a good excuse to run the program on the expensive hardware of those days! The first arcade video games were introduced in the early 1970s. The most famous of them was Pong, which contained only two paddles and a ball that the player or players tried to hit. The game was a huge success. It was manufactured by Atari, which became one of the largest manufacturers in the video game business. Pong was not really a computer game, since it did not contain a general-purpose CPU, just discrete logic components. The great video game wave came a few years after Pong. The most famous video games were Breakout, Space Invaders, Missile Command and PacMan. In the late 1970s the video games invaded homes. At first the home games were simple variants of Pong with no expansion capability. After a couple of years came video consoles which contained a general-purpose CPU, and games were supplied on cartridges. Many of the famous arcade games were converted for these consoles, but the quality was not as good as in the original arcade versions. The home video consoles almost disappeared when home computers started to appear. There were now `good' reasons for buying a home computer, but in most cases games were, and still are, the only reason for having a home computer. Lately the dedicated game consoles have been gaining market share again, mainly due to massive computer software piracy.
7.2 Board games
The first computer games were board games. Board games provided an increasing challenge for programmers and a good demonstration of the capabilities of a computer. The more powerful computer hardware has become, the more challenging the board games that have been implemented on it. TicTacToe and Draughts are amongst the simplest board games, and there have been unbeatable computer implementations since the 1950s; a minimal sketch of the min-max idea behind such programs is given below. Reversi has more possibilities, but it is still possible to implement an unbeatable computer opponent.
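The min-max strategy that Babbage outlined is all an unbeatable TicTacToe program needs. The sketch below is written in Python as an illustration; the board is simply a list of nine cells marked "X", "O" or " ". Each reachable position is scored recursively: the position is worth whatever its best continuation is worth, where "best" means maximising for X and minimising for O.

    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Value of the position for X: +1 win, 0 draw, -1 loss."""
        win = winner(board)
        if win is not None:
            return 1 if win == "X" else -1
        moves = [i for i, cell in enumerate(board) if cell == " "]
        if not moves:
            return 0                          # the board is full: a draw
        values = []
        for move in moves:
            board[move] = player              # try the move ...
            values.append(minimax(board, "O" if player == "X" else "X"))
            board[move] = " "                 # ... and undo it again
        return max(values) if player == "X" else min(values)

    # With perfect play from both sides TicTacToe is a draw.
    print(minimax([" "] * 9, "X"))            # prints 0

A real game program would also remember which move produced the best value; the scoring itself is nothing more than this.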
Chess has so many game positions that currently even the best programs lose to a human grandmaster in a normal tournament. However, if the game is played under time constraints the computer starts to win. In the not too distant future, when still faster hardware and better software become available, computers will beat the best human players at chess, too.
7.3 Arcade games
Originally, arcade games meant the games found in amusement arcades. These are halls containing several, even dozens of, different games: pinball machines, fruit machines and video games. Nowadays almost any action game is classified as an arcade game. Several variants have been developed from the original Pong. One of the most famous examples is Breakout. This game modifies the idea of ball and paddle by adding a wall of bricks which the player must demolish using the ball. Such a simple idea can become almost addictive. There are even books which describe the strategy of playing Breakout. Space Invaders has become a classic example of video games. It places the player as the commander of the Earth's last defense base against alien attack waves which slowly descend through the upper regions of the atmosphere. This setup has given a general brand name to these kinds of games: they are shoot-em-up games. Another noteworthy arcade game is PacMan. This game features a small, innocent-looking character collecting pills in a maze while being chased by four nasty ghosts. PacMan was exceptional in that it was not as violent as most other video games. Perhaps that is the reason why women liked it, too.
7.4 Adventure games
The first adventure game was called simply Adventure. It placed the player in a colossal cave which contained strange halls and passages, magical objects, treasures and mythical characters such as trolls and a pirate. The player could wander around the cave by giving the computer instructions, and the system described the locations in text form. The purpose of the game was to explore the caves, collect all the treasures and escape alive. Adventure games have borrowed many ideas from role-playing games like Dungeons & Dragons. The most famous adventure and role-playing mixtures are Rogue and its later variant Hack (and, still later, NetHack). These were also the first games whose development was greatly aided by the USENET network. Nowadays text-only adventure games have almost disappeared, but the idea of adventuring remains in so-called arcade adventures. These do not use as much text to describe the locations, but draw a picture instead and display the player as a little man who can wander around interactively. Well-known examples of this kind of game are the Leisure Suit Larry adventures.
7.5 Simulations
There are many different kinds of simulations. One of the first was the lunar lander simulation, where the player had to guide the lander module to the ground before the fuel ran out (a tiny sketch of such a simulation is given below). Big, expensive flight simulators have been used extensively to train aeroplane pilots, but there are several microcomputer flight simulators which can offer almost as realistic a simulation. In addition to technical simulations there are simulators for the human sciences, too. It is possible to control a city or even the whole Earth by making decisions about investments, taxes, population, war and peace. The simulator then tells the player what the results of these decisions will be after ten years.
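The physics behind the early lunar lander games fits into a dozen lines. The sketch below is Python used purely for illustration: the starting height, the gravity and thrust figures and the burn schedule are all made-up values, and a real game would read the burn amounts from the player instead of from a fixed list. The simulation advances one second at a time and reports whether the touchdown was gentle enough.

    def lunar_lander(burns, height=100.0, speed=0.0, fuel=50.0,
                     gravity=1.6, thrust=3.0):
        """Advance the lander one second per burn value until it touches down."""
        for burn in burns:
            burn = min(burn, fuel)             # cannot burn fuel we do not have
            fuel -= burn
            speed += gravity - thrust * burn   # gravity speeds the fall, burning slows it
            height -= speed
            if height <= 0:
                return "soft landing" if speed <= 5.0 else "crash"
        return "still descending"

    # A made-up burn schedule: free fall first, hard braking near the ground.
    schedule = [0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 0, 0, 1, 0]
    print(lunar_lander(schedule))              # this schedule ends in a soft landing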
It is often hard to tell when a simulation is a game and when it is education. Spacewar can be thought of as a graphically illustrated simulation of gravity. On the other hand, what could be a more challenging game than flying a Boeing 747?
Glossary
TicTacToe: a simple board game, a version of noughts and crosses played on a board of limited size (usually 3x3 or 4x4 squares)
MIT: Massachusetts Institute of Technology
hacker: a person who studies and programs computers purely as a hobby. Originally this word meant a member of a group of students at MIT who were interested in model railroads.
CPU: Central Processing Unit
discrete: consisting of distinct or unconnected elements
cartridge: a plastic box containing the memory chips needed to store a program
-----------------------------------------------------------------------------
8. Workstations
Kari Tuisku
8.1 Definition of workstation
Workstations are computers designed specifically for personal use. As they have their own data processing capacity, they can run application programs by themselves. Nowadays, however, workstation computers are very often connected to a local area network with a number of other computers. Then all the computing resources of the whole information system can be accessed via a single workstation. Workstations can be divided into two categories. The smaller ones are called personal computers or low-end workstations, and the more powerful ones engineering or high-end workstations. Yet the boundary between these classes is not very clear.
8.2 Hardware components
The major parts of a workstation computer are:
- the central processing unit
- the main memory
- the disk storage or storages
- the video monitor
- the keyboard and
- a pointing device.
Most workstations can utilize optional hardware, such as:
- a memory extension
- external disk drives
- a tape unit and
- another pointing device.
8.2.1 Central processing unit
The central processing unit - or CPU - is the most important part of every computer. It coordinates the operation of all the components inside the computer. The CPU of a workstation is always based on microprocessor technology. A microprocessor is a VLSI circuit component which is mounted on a processor circuit card. There are two different design architectures for workstation CPUs: the Complex Instruction Set Computer, CISC, and the Reduced Instruction Set Computer, RISC. CISC processors are widely used as the central processing unit in personal computers and also in many engineering workstations. Processors based on the RISC architecture are generally more powerful than CISC processors, and high-end workstations usually have a RISC processor.
8.2.2 Main memory
The program the central processing unit executes and the data it processes are stored in the main memory while the computer is in operation. Because a workstation must be able to run graphical applications, the main memory must be very large. The memory size of a low-end workstation is normally between 2 and 4 MB, whereas a powerful engineering workstation can have a memory of 100 MB or more.
8.2.3 Disk storage
A workstation usually has at least one internal hard disk drive for storing all the programs and data that the CPU does not need immediately. The typical storage capacity of a hard disk drive in a workstation is 30-500 MB. If this capacity is not enough, additional internal or external hard disk drives can be connected to the system. Regardless of the number and the size of the hard disk drives, most workstations have a diskette drive, because hard disks are neither removable nor portable. Data diskettes are easy to handle and can be used for transferring small amounts of data from one computer system to another. The maximum storage capacity of a 3.5" diskette drive is only 1.44 MB.
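Some simple arithmetic shows why diskettes are suitable only for small amounts of data. The figures below are taken from the ranges mentioned above (a 300 MB disk is assumed as a mid-sized example); the sketch is Python used as a pocket calculator.

    # Illustrative capacities from the ranges given in the text.
    disk_mb = 300          # a mid-sized workstation hard disk
    diskette_mb = 1.44     # a 3.5-inch diskette

    diskettes_needed = disk_mb / diskette_mb
    print("copying the whole disk would take about %.0f diskettes"
          % diskettes_needed)                  # about 208 diskettes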
8.2.4 Monitor
Most workstations have a cathode ray tube (CRT) display as a video monitor. The advantages of the CRT display, compared with other display techniques, are its good picture quality, its ability to reproduce colours and its moderate price. The main disadvantages, on the other hand, are its weight, the depth of the cathode ray tube and its power consumption. The bigger the screen of a monitor, the more information can be shown on it at the same time. The screen size of a personal computer monitor is typically 14-15 inches, but engineering workstations can have even 20-inch displays.
8.2.5 Keyboard and pointing devices
The user of a workstation always needs the keyboard when entering new information into the system. The keyboard has not only the normal typewriter keys but also special editing keys and a number of programmable function keys. The keyboard layout is not very well standardized, so there is a great variety of slightly different keyboards on the market. The use of a workstation is based on a graphical user interface, which cannot be controlled with the keyboard alone. The position of the pointer on the screen can be changed quickly only by moving a pointing device on a desk surface in the desired direction. Moreover, any action applicable at the current screen position can be activated by clicking a particular button on the pointing device. The most popular pointing device for general-purpose workstations is the mouse, but a pointing pen, a light pen or a joystick can be better, e.g. for drawing applications.
8.2.6 System bus
The major components of a workstation described above are connected via the system bus. Actually, a device itself, e.g. a disk drive or a monitor, is not connected to the bus directly; its hardware controller is. The device controllers must therefore be compatible with the system bus. There are almost as many system bus standards on the market as there are workstation producers. Only some buses for personal computers are industry standards supported by many producers. Engineering workstations usually have a system bus designed and developed by the workstation producer itself, so other manufacturers do not support it. To avoid interconnection problems between the workstation and devices of different types, a Small Computer System Interface (SCSI) controller can be connected to the system bus of many workstations. Because SCSI is already an industry standard, there is a wide range of machine-independent devices available from many suppliers. Therefore, the standard of the internal system bus is no longer so important.
8.3 Performance considerations
Although the speed of the CPU alone does not determine the performance of the computer, there is a strong relationship between the two. The unit of CPU speed is MIPS, millions of instructions per second. A typical CPU speed for a low-end workstation is 4-10 MIPS, and for high-end workstations it varies from about 10 to as much as 40 MIPS. On the processor card there is an oscillating internal clock circuit which synchronizes the operations of the CPU. The CPU speed, and therefore the performance of the computer, depends directly on the frequency of the internal clock. That is to say, if the clock frequency is doubled, the CPU will run almost twice as fast as before. The clock frequency of a workstation CPU is normally between 10 and 30 MHz.
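Some rough arithmetic shows how these figures are related. The clock rate, the clocks-per-instruction figure and the access times below are illustrative assumptions rather than measurements of any particular machine; the sketch is again Python used as a calculator.

    # Illustrative figures, not measurements of any particular workstation.
    clock_hz = 25e6               # a 25 MHz internal clock
    clocks_per_instruction = 2    # assume two clock cycles per instruction on average

    instructions_per_second = clock_hz / clocks_per_instruction
    print("about %.1f MIPS" % (instructions_per_second / 1e6))   # about 12.5 MIPS

    # Primary memory access versus a single hard disk seek.
    memory_access_s = 100e-9      # under 100 nanoseconds
    disk_seek_s = 20e-3           # a fast drive: about 20 milliseconds

    print("one disk seek takes as long as about %.0f memory accesses"
          % (disk_seek_s / memory_access_s))                     # about 200000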
The average seek time of the hard disk drive is a crucial specification when considering workstation performance. It takes much more time to locate particular data on the disk drive than in the main memory. The slowest disk drives can have an average access time of 50 ms or even longer, but the fastest models need only about 20 ms for a single operation.
Glossary
CRT, Cathode Ray Tube: katodisädeputki
CISC, Complex Instruction Set Computer: laajan käskykannan tietokone
MB, megabyte, 1 million bytes: megatavu eli miljoona tavua
MIPS, Million Instructions Per Second: miljoona käskyä sekunnissa
RISC, Reduced Instruction Set Computer: suppean käskykannan tietokone
SCSI, Small Computer System Interface: pienen tietokonejärjestelmän liitäntä
VLSI, Very Large Scale Integrated: hyvin suuri integroitu (piiri)