What are large-scale, high-speed centralized computers used for internal corporate data processing needs?

Introduction

Paul J. Fortier, Howard E. Michel, in Computer Systems Performance Evaluation and Prediction, 2003

1.1.7 Computer architectures

Computer architecture describes how a computer's hardware components are interconnected and how data is transferred and processed. Different configurations have been developed to speed up the movement of data and thereby increase processing throughput. The basic architecture places the CPU at the core, with main memory and the input/output system on either side of it (see Figure 1.7). A second configuration uses a central input/output controller (see Figure 1.8). A third uses main memory as the point through which all data and instructions flow into and out of the system. A fourth uses a common data and control bus to interconnect all of the devices making up the computer (see Figure 1.9). An improvement on the single shared central bus is the dual bus architecture, which either separates data and control over the two buses or shares them across both to increase overall performance (see Figure 1.10).


Figure 1.7. Basic computer architecture.


Figure 1.8. Alternative computer architecture.


Figure 1.9. Common bus architecture.


Figure 1.10. Dual bus architecture.

We will see how these architectures and elements of the computer system are used as we continue with our discussion of system architectures and operations.


URL: https://www.sciencedirect.com/science/article/pii/B9781555582609500011

HPC Architecture 1

Thomas Sterling, ... Maciej Brodowicz, in High Performance Computing, 2018

Abstract

Computer architecture is the organization of the components making up a computer system and the semantics or meaning of the operations that guide its function. As such, the computer architecture governs the design of a family of computers and defines the logical interface that is targeted by programming languages and their compilers. The organization determines the mix of functional units of which the system is composed and the structure of their interconnectivity. The architecture semantics is the meaning of what the systems do under user direction and how their functional units are controlled to work together. An important embodiment of semantics is the instruction set architecture (ISA) of the system. The ISA is a logical (usually binary) representative encoding of the basic set of distinct operations that a computer architecture may perform, and by which application programs specify the useful work to be done. At the machine level the hardware (sometimes controlled by firmware) system directly interprets and executes a sequence or partially ordered set of these basic operations. This is true for all computer cores, from those few in the smallest mobile phones to potentially millions making up the world's largest supercomputers. High performance computer architecture extends structure to a hierarchy of functional elements, whether small and limited in capability or possibly entire processor cores themselves. In this chapter many different classes of structure are presented, each exploiting concurrency in its own particular way. But in all cases this broader definition of general architecture for high performance computing emphasizes aspects of the system that contribute to achieving performance. A high performance computer is designed to go fast, and its organization and semantics are specially devised to deliver computational speed. This chapter introduces the basic foundations of computer architecture in general and for high performance computer systems in particular. It is here, at the structural and logical levels, that parallelism of operation in its many forms and sizes is first presented. This chapter provides a first examination of the principal forms of supercomputer architecture and the underlying concepts that govern their performance.


URL: https://www.sciencedirect.com/science/article/pii/B9780124201583000022

Architecture

Sarah L. Harris, David Money Harris, in Digital Design and Computer Architecture, 2016

6.1 Introduction

The previous chapters introduced digital design principles and building blocks. In this chapter, we jump up a few levels of abstraction to define the architecture of a computer. The architecture is the programmer’s view of a computer. It is defined by the instruction set (language) and operand locations (registers and memory). Many different architectures exist, such as ARM, x86, MIPS, SPARC, and PowerPC.

The first step in understanding any computer architecture is to learn its language. The words in a computer’s language are called instructions.


The computer’s vocabulary is called the instruction set. All programs running on a computer use the same instruction set. Even complex software applications, such as word processing and spreadsheet applications, are eventually compiled into a series of simple instructions such as add, subtract, and branch. Computer instructions indicate both the operation to perform and the operands to use. The operands may come from memory, from registers, or from the instruction itself.

Computer hardware understands only 1’s and 0’s, so instructions are encoded as binary numbers in a format called machine language. Just as we use letters to encode human language, computers use binary numbers to encode machine language. The ARM architecture represents each instruction as a 32-bit word. Microprocessors are digital systems that read and execute machine language instructions. However, humans consider reading machine language to be tedious, so we prefer to represent the instructions in a symbolic format called assembly language.
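
To make the step from assembly to machine language concrete, the short C sketch below packs the fields of an ARMv4 data-processing instruction with a register second operand (condition, opcode, Rn, Rd, Rm) into its 32-bit word. It is an illustration only, not an assembler: it covers just this one instruction format with no shift and the S bit clear, and the function name is ours.

    #include <stdint.h>
    #include <stdio.h>

    /* Pack a simplified ARMv4 data-processing instruction (register second
     * operand, no shift, S = 0) into its 32-bit machine-language word.
     * Fields: cond[31:28], opcode[24:21], Rn[19:16], Rd[15:12], Rm[3:0]. */
    static uint32_t encode_dp_reg(uint32_t cond, uint32_t opcode,
                                  uint32_t rd, uint32_t rn, uint32_t rm)
    {
        return (cond   << 28) |   /* condition code, e.g. 0xE = AL (always) */
               (opcode << 21) |   /* data-processing opcode, e.g. 0x4 = ADD */
               (rn     << 16) |   /* first source register                  */
               (rd     << 12) |   /* destination register                   */
                rm;               /* second source register (no shift)      */
    }

    int main(void)
    {
        /* ADD R0, R1, R2 */
        uint32_t word = encode_dp_reg(0xE, 0x4, 0, 1, 2);
        printf("ADD R0, R1, R2 encodes as 0x%08X\n", word);
        return 0;
    }

Running it prints 0xE0810002, the 32-bit machine-language word for ADD R0, R1, R2.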

The instruction sets of different architectures are more like different dialects than different languages. Almost all architectures define basic instructions, such as add, subtract, and branch, that operate on memory or registers. Once you have learned one instruction set, understanding others is fairly straightforward.

A computer architecture does not define the underlying hardware implementation. Often, many different hardware implementations of a single architecture exist. For example, Intel and Advanced Micro Devices (AMD) both sell various microprocessors belonging to the same x86 architecture. They all can run the same programs, but they use different underlying hardware and therefore offer trade-offs in performance, price, and power. Some microprocessors are optimized for high-performance servers, whereas others are optimized for long battery life in laptop computers. The specific arrangement of registers, memories, ALUs, and other building blocks to form a microprocessor is called the microarchitecture and will be the subject of Chapter 7. Often, many different microarchitectures exist for a single architecture.

The “ARM architecture” we describe is ARM version 4 (ARMv4), which forms the core of the instruction set. Section 6.7 summarizes new features in versions 5–8 of the architecture. The ARM Architecture Reference Manual (ARM), available online, is the authoritative definition of the architecture.

In this text, we introduce the ARM architecture. This architecture was first developed in the 1980s by Acorn Computer Group, which spun off Advanced RISC Machines Ltd., now known as ARM. Over 10 billion ARM processors are sold every year. Almost all cell phones and tablets contain multiple ARM processors. The architecture is used in everything from pinball machines to cameras to robots to cars to rack-mounted servers. ARM is unusual in that it does not sell processors directly, but rather licenses other companies to build its processors, often as part of a larger system-on-chip. For example, Samsung, Altera, Apple, and Qualcomm all build ARM processors, either using microarchitectures purchased from ARM or microarchitectures developed internally under license from ARM. We choose to focus on ARM because it is a commercial leader and because the architecture is clean, with few idiosyncrasies. We start by introducing assembly language instructions, operand locations, and common programming constructs, such as branches, loops, array manipulations, and function calls. We then describe how the assembly language translates into machine language and show how a program is loaded into memory and executed.

Throughout the chapter, we motivate the design of the ARM architecture using four principles articulated by David Patterson and John Hennessy in their text Computer Organization and Design: (1) regularity supports simplicity; (2) make the common case fast; (3) smaller is faster; and (4) good design demands good compromises.


URL: https://www.sciencedirect.com/science/article/pii/B9780128000564000066

Database Systems Performance Analysis

Paul J. Fortier, Howard E. Michel, in Computer Systems Performance Evaluation and Prediction, 2003

14.2.1 PC performance assessment benchmark

The PC computer architecture performance test utilized comprises 22 individual benchmark tests grouped into six test suites. The six test suites cover the following:

Integer and floating-point mathematical operations

Tests of standard two-dimensional graphical functions

Reading, writing, and seeking within disk files

Memory allocation and access

Tests of the MMX (multimedia extensions) in newer CPUs

A test of the DirectX 3D graphics system

The test results reported are shown as relative values; the larger the number, the faster the computer. For example, a computer with a result of 40 can process roughly twice as much data as a computer with a result of 20. The Passmark rating is a weighted average of all the other test results and gives a single overall indication of the computer's performance. The results we observed are shown in Table 14.2.

Table 14.2. Testbed Architecture Performance Results

Parameter TestedOracle SystemInformix SystemSQL SystemDB2 System
Math—Addition 96.6 96.2 94.6 97.0
Math—Subtraction 96.4 97.1 96.2 97.6
Math—Multiplication 101.1 101.4 101.4 103.1
Math—Division 12.9 12.8 12.9 13.0
Math—Floating-Point Addition 87.7 87.8 87.6 88.7
Math—Floating-Point Subtraction 89.4 89.5 88.6 90.1
Math—Floating-Point Multiplication 91.7 91.7 90.9 92.3
Math—Floating-Point Division 14.8 14.8 14.8 14.9
Math—Maximum Mega FLOPS 171.2 172.2 170.7 177.6
Graphics 2D—Lines 17.5 17.6 17.5 17.8
Graphics 2D—Bitmaps 12.9 12.9 12.8 12.9
Graphics 2D—Shapes 4.7 4.7 4.7 4.7
Graphics 3D—Many Worlds 22.9 23.0 22.9 22.9
Memory—Allocated Small Blocks 86.6 87.6 87.0 87.6
Memory—Read Cached 67.9 68.4 68.0 68.5
Memory—Read Uncached 48.7 48.8 50.0 49.1
Memory—Write 40.8 41.1 40.9 41.4
Disk—Sequential Read 3.2 3.8 3.7 3.1
Disk—Sequential Write 2.9 3.4 3.4 2.9
Disk—Random Seek 1.2 2.3 3.6 2.1
MMX—Addition 97.7 94.5 97.8 99.4
MMX—Subtraction 92.3 98.2 93.3 96.0
MMX—Multiplication 97.8 97.5 96.9 99.1
Math Mark 75.6 75.8 75.2 76.8
2D Mark 46.7 46.9 46.7 47.1
Memory Mark 58.7 59.2 59.2 59.4
Disk Mark 19.3 25.1 28.4 21.5
3D Graphics Mark 15.5 15.7 15.5 15.6
MMX Mark 48.8 49.2 48.9 50.0
Passmark Rating 45.7 47.2 47.8 46.7

Assessment of results

The performance assessment test found that the computer system configured for the DB2 servers appeared to have better performance than the other systems in most of the tests. However, the Passmark rating (weighted average of all test results giving a single overall indication of performance) of the computer system configured for the SQL Server 2000 was the highest.
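
As a sketch of the arithmetic behind such an overall rating, the C fragment below computes a weighted average of the six suite marks from the DB2 column of Table 14.2. The weights are hypothetical placeholders; the actual weighting used by the Passmark benchmark is not given in the text.

    #include <stdio.h>

    /* Compute a single overall rating as a weighted average of suite marks.
     * The weights here are hypothetical; the real Passmark weighting is not
     * specified in the text. Marks are the DB2 column of Table 14.2. */
    int main(void)
    {
        const char  *suite[]  = { "Math", "2D", "Memory", "Disk", "3D", "MMX" };
        const double mark[]   = { 76.8, 47.1, 59.4, 21.5, 15.6, 50.0 };
        const double weight[] = { 0.25, 0.15, 0.20, 0.20, 0.10, 0.10 };  /* sums to 1 */

        double rating = 0.0;
        for (int i = 0; i < 6; i++) {
            rating += weight[i] * mark[i];
            printf("%-6s mark %5.1f  weight %.2f\n", suite[i], mark[i], weight[i]);
        }
        printf("Weighted overall rating: %.1f\n", rating);
        return 0;
    }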


URL: https://www.sciencedirect.com/science/article/pii/B978155558260950014X

Introduction to Digital Logic Design with VHDL

Ian Grout, in Digital Systems Design with FPGAs and CPLDs, 2008

6.14.7 Tristate Buffer

In many computer architectures, multiple devices share a common set of signals—control signals, address lines, and data lines. Where multiple devices share a common set of data lines, a device can receive or provide logic levels only while it is enabled (and all other devices are disabled). If two or more devices were enabled at the same time, the logic levels they drive would typically conflict. To prevent this from happening, a disabled device does not drive a logic level at all; instead, its output is placed in a high-impedance state (denoted by the character z). When enabled, the buffer passes the input to the output. When disabled, it blocks the input, and its output is seen by the circuit it is connected to as a high-impedance electrical load. The operation is shown in Figure 6.59. Here, the enable signal may be active high (top, 1 to enable the buffer) or active low (bottom, 0 to enable the buffer).


Figure 6.59. Tristate buffer symbol
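
The sharing rule described above can also be expressed in ordinary software. The C sketch below is our own behavioral model, separate from the VHDL of the figures that follow: a buffer output is 0, 1, or high impedance, and the value seen on a shared line comes from whichever driver is enabled, floats when none is, and is flagged as a conflict when two enabled drivers disagree.

    #include <stdio.h>

    /* Three-state value seen on a shared bus line. */
    typedef enum { LEVEL_0, LEVEL_1, HI_Z, CONFLICT } line_t;

    /* Output of one tristate buffer: disabled means drive nothing (HI_Z),
     * enabled means pass the input through. */
    static line_t buffer_out(int enable, int in)
    {
        return enable ? (in ? LEVEL_1 : LEVEL_0) : HI_Z;
    }

    /* Resolve what the shared line sees when two devices drive it. */
    static line_t resolve(line_t a, line_t b)
    {
        if (a == HI_Z) return b;
        if (b == HI_Z) return a;
        return (a == b) ? a : CONFLICT;   /* two enabled drivers must agree */
    }

    static const char *name(line_t v)
    {
        static const char *n[] = { "0", "1", "Z", "conflict" };
        return n[v];
    }

    int main(void)
    {
        /* Device A enabled driving 1, device B disabled: the line reads 1.  */
        printf("A on(1), B off  : %s\n", name(resolve(buffer_out(1, 1), buffer_out(0, 0))));
        /* Both disabled: the line floats (high impedance).                  */
        printf("A off,  B off   : %s\n", name(resolve(buffer_out(0, 1), buffer_out(0, 0))));
        /* Both enabled with different values: bus conflict.                  */
        printf("A on(1), B on(0): %s\n", name(resolve(buffer_out(1, 1), buffer_out(1, 0))));
        return 0;
    }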

The tristate buffer can be created in VHDL using the If-then-else statement, as shown in Figure 6.60. Lines 1 to 4 identify the libraries and packages to use. Lines 6 to 10 identify the design entity (One_Bit_Buffer) with a signal input (Signal_In) and an enable control input (Enable). Lines 12 to 26 identify the design architecture.


Figure 6.60. One-bit tristate buffer

The design operation is defined within a single process in lines 16 to 24. This has a sensitivity list with both inputs to enable the process to react to changes in both the signal and control inputs. The tristate buffer is set to be active high so that when Enable is a 1, the input signal value is passed to the output signal. When Enable is a 0, the output is held in a high impedance state (z) irrespective of the value on the input signal.

An example test bench to simulate the buffer design is shown in Figure 6.61. Here, the inputs change every 10 ns; there is zero time delay in the operation of the design, so this short time between input signal changes would not cause any timing problems. The input signal is toggled between logic 0 and 1 for each state of the enable signal.


Figure 6.61. One-bit tristate buffer test bench

The one-bit tristate buffer description in VHDL can be readily modified to produce the multibit tristate buffers commonly used in computer architectures—for example, to connect a device to an eight-bit-wide data bus. Figure 6.62 shows a code example where both the input and output signals are eight-bit-wide std_logic_vectors.


Figure 6.62. Eight-bit tristate buffer using the If-then-else statement

An example test bench for this design is shown in Figure 6.63.


Figure 6.63. Eight-bit tristate buffer test bench using the If-then-else statement

The tristate buffer can also be created using the When-else statement, as shown in Figure 6.64. In this design, a dataflow description is used.


Figure 6.64. Eight-bit tristate buffer using the When-else statement


URL: https://www.sciencedirect.com/science/article/pii/B9780750683975000064

Parallel and Distributed Systems

Dan C. Marinescu, in Cloud Computing (Second Edition), 2018

4.12 Large-Scale Systems

The developments in computer architecture, storage technology, networking, and software during the last several decades of the twentieth century coupled with the need to access and process information led to several large-scale distributed system developments:

The web and the semantic web, which are expected to support composition of services (not necessarily computational services) available on the web. The web is dominated by unstructured or semi-structured data, while the semantic web advocates the inclusion of semantic content in web pages.

The Grid, initiated in the early 1990s by National Laboratories and universities, primarily for applications in science and engineering.

The need to share data from high-energy physics experiments motivated Sir Tim Berners-Lee, who worked at CERN in Geneva in the late 1980s, to put together the two major components of the World Wide Web: HTML (Hypertext Markup Language) for data description and HTTP (Hypertext Transfer Protocol) for data transfer. The web opened a new era in data sharing and ultimately led to the concept of network-centric content.

The semantic Web is an effort to enable lay people to find, share, and combine information available on the web more easily. The name was coined by Berners-Lee to describe “a web of data that can be processed directly and indirectly by machines.” It is a framework for data sharing among applications based on the Resource Description Framework (RDF). In this vision, the information can be readily interpreted by machines, so machines can perform more of the tedious work involved in finding, combining, and acting upon information on the web.

The semantic web is “largely unrealized” according to Berners-Lee. Several technologies are necessary to provide a formal description of concepts, terms, and relationships within a given knowledge domain; they include the Resource Description Framework (RDF), a variety of data interchange formats, and notations such as RDF Schema (RDFS) and the Web Ontology Language (OWL).

Gradually, the need to make computing more affordable and to liberate the users from the concerns regarding system and software maintenance reinforced the idea of concentrating computing resources in data centers. Initially, these centers were specialized, each running a limited palette of software systems, as well as applications developed by the users of these systems. In the early 1980s major research organizations, such as the National Laboratories and large companies, had powerful computing centers supporting large user populations scattered throughout wide geographic areas. Then the idea to link such centers in an infrastructure resembling the power grid was born; the model known as network-centric computing was taking shape.

A computing grid is a distributed system consisting of a large number of loosely coupled, heterogeneous, and geographically dispersed systems in different administrative domains. The term computing grid is a metaphor for accessing computer power with the same ease with which we access power provided by the electric grid. Software libraries known as middleware have been developed intensively since the early 1990s to facilitate access to grid services.

The vision of the grid movement was to give a user the illusion of a very large virtual supercomputer. The autonomy of the individual systems and the fact that these systems were connected by wide-area networks with latency higher than the latency of the interconnection network of a supercomputer posed serious challenges to this vision. Nevertheless, several “Grand Challenge” problems, such as protein folding, financial modeling, earthquake simulation, and climate/weather modeling, run successfully on specialized grids. The Enabling Grids for Escience project is arguably the largest computing grid; along with the LHC Computing Grid (LCG), the Escience project aims to support the experiments using the Large Hadron Collider (LHC) at CERN which generates several gigabytes of data per second, or 10 PB (petabytes) per year.

In retrospect, two basic assumptions about the infrastructure prevented the grid movement from having the impact its supporters were hoping for. The first is the heterogeneity of the individual systems interconnected by the grid. The second is that systems in different administrative domains are expected to cooperate seamlessly. Indeed, the heterogeneity of the hardware and of the system software poses significant challenges for application development and for application mobility.

At the same time, critical areas of system management including scheduling, optimization of resource allocation, load balancing, and fault-tolerance are extremely difficult in a heterogeneous system. The fact that resources are in different administrative domains further complicates many, already difficult, problems related to security and resource management. While very popular in the science and the engineering communities, the grid movement did not address the major concerns of enterprise computing community and did not make a noticeable impact on the IT industry.

Cloud computing is a technology largely viewed as the next big step in the development and deployment of an increasing number of distributed applications. The companies promoting cloud computing seem to have learned the most important lessons from the grid movement. Computer clouds are typically homogeneous. An entire cloud shares the same security, resource management, cost, and other policies, and, last but not least, it targets enterprise computing. These are some of the reasons why several agencies of the US Government, including Health and Human Services, the Centers for Disease Control (CDC), NASA, the Navy's Next Generation Enterprise Network (NGEN), and the Defense Information Systems Agency (DISA), have launched cloud computing initiatives and are conducting system developments intended to improve the efficiency and effectiveness of their information processing.


URL: https://www.sciencedirect.com/science/article/pii/B9780128128107000054

Microarchitecture

Sarah L. Harris, David Harris, in Digital Design and Computer Architecture, 2022

7.1.1 Architectural State and Instruction Set

Recall that a computer architecture is defined by its instruction set and architectural state. The architectural state for the RISC-V processor consists of the program counter and the 32 32-bit registers. Any RISC-V microarchitecture must contain all of this state. Based on the current architectural state, the processor executes a particular instruction with a particular set of data to produce a new architectural state. Some microarchitectures contain additional nonarchitectural state to either simplify the logic or improve performance; we point this out as it arises.

The architectural state is the information necessary to define what a computer is doing. If one were to save a copy of the architectural state and contents of memory, then turn off a computer, then turn it back on and restore the architectural state and memory, the computer would resume the program it was running, unaware that it had been powered off and back on. Think of a science fiction novel in which the protagonist’s brain is frozen, then thawed years later to wake up in a new world.

To keep the microarchitectures easy to understand, we focus on a subset of the RISC-V instruction set. Specifically, we handle the following instructions:

R-type instructions: add, sub, and, or, slt

Memory instructions: lw, sw

Branches: beq

These particular instructions were chosen because they are sufficient to write useful programs. Once you understand how to implement these instructions, you can expand the hardware to handle others.
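
A minimal C sketch of this view is shown below: the architectural state is the program counter plus the 32 registers, and one step of execution consumes an already-decoded instruction from the subset above and produces the next state. The type and field names are ours, no binary decoding is attempted, and a small word-addressed array stands in for data memory.

    #include <stdint.h>

    /* RISC-V architectural state: program counter and 32 32-bit registers. */
    typedef struct { uint32_t pc; uint32_t x[32]; } arch_state_t;

    typedef enum { ADD, SUB, AND, OR, SLT, LW, SW, BEQ } op_t;

    /* One already-decoded instruction from the subset used in the chapter. */
    typedef struct { op_t op; int rd, rs1, rs2; int32_t imm; } insn_t;

    /* Execute one instruction, updating the architectural state and the
     * word-addressable data memory mem used by lw/sw. */
    static void step(arch_state_t *s, const insn_t *i, uint32_t mem[])
    {
        uint32_t a = s->x[i->rs1], b = s->x[i->rs2];
        uint32_t next_pc = s->pc + 4;

        switch (i->op) {
        case ADD: s->x[i->rd] = a + b;                      break;
        case SUB: s->x[i->rd] = a - b;                      break;
        case AND: s->x[i->rd] = a & b;                      break;
        case OR:  s->x[i->rd] = a | b;                      break;
        case SLT: s->x[i->rd] = ((int32_t)a < (int32_t)b);  break;
        case LW:  s->x[i->rd] = mem[(a + i->imm) / 4];      break;
        case SW:  mem[(a + i->imm) / 4] = b;                break;
        case BEQ: if (a == b) next_pc = s->pc + i->imm;     break;
        }

        s->x[0] = 0;        /* register x0 always reads as zero */
        s->pc = next_pc;
    }

For example, stepping an add with rd = 3, rs1 = 1, rs2 = 2 when x1 = 5 and x2 = 7 leaves x3 = 12 and advances the program counter by 4.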


URL: https://www.sciencedirect.com/science/article/pii/B9780128200643000076

Cohesion, Coupling, and Abstraction

Dale Shaffer, in Encyclopedia of Information Systems, 2003

I. Historical Perspective

Early computer programs followed computer architecture, with data in one block of memory and program statements in another. With larger systems and recognition of the major role of maintenance, the block of program statements was further broken down into modules, which could be developed independently. Research in cohesion and coupling has its roots in the early 1970s as part of the development of modules. Structured design formalized the process of creating modules, recognizing that better-written modules were self-contained and independent of each other. This functional independence was achieved by making modules that served a single purpose, avoided interaction with other modules, and hid implementation details.

Consider the modules in Fig. 1. The combinations module calculates the number of combinations of n things taken r at a time. For example, consider a visit to a restaurant whose salad bar allowed you to choose any three of the six meat and vegetable selections to include on your lettuce salad. The combinations module would determine that there are twenty different ways to select three of the six items.


Figure 1. Example of functional independence.

Functional independence is shown by the factorial module. The factorial module does only one task; it returns the factorial of the value given. It also has minimal interaction with other modules. The calling module, combinations, sends the minimum information that factorial needs—one number. Functional independence is measured using two criteria, cohesion and coupling.
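
A minimal C rendering of the two modules, sketched below on our own initiative since the code of Fig. 1 is not reproduced in the text, makes the point concrete: factorial performs exactly one task, and combinations passes it the minimum information it needs—a single number.

    #include <stdio.h>

    /* Single-purpose module: returns n! and nothing else. */
    static unsigned long factorial(unsigned int n)
    {
        unsigned long result = 1;
        for (unsigned int i = 2; i <= n; i++)
            result *= i;
        return result;
    }

    /* Calling module: number of combinations of n things taken r at a time.
     * It interacts with factorial only through a single numeric argument. */
    static unsigned long combinations(unsigned int n, unsigned int r)
    {
        return factorial(n) / (factorial(r) * factorial(n - r));
    }

    int main(void)
    {
        /* Salad-bar example from the text: choose 3 of 6 items -> 20 ways. */
        printf("C(6, 3) = %lu\n", combinations(6, 3));
        return 0;
    }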


URL: https://www.sciencedirect.com/science/article/pii/B0122272404000095

Parallel and Distributed Systems

Dan C. Marinescu, in Cloud Computing, 2013

2.2 Parallel computer architecture

Our discussion of parallel computer architectures starts with the recognition that parallelism at different levels can be exploited. These levels are:

Bit-level parallelism. The number of bits processed per clock cycle, often called a word size, has increased gradually from 4-bit processors to 8-bit, 16-bit, 32-bit, and, since 2004, 64-bit. This has reduced the number of instructions required to process larger operands and allowed a significant performance improvement. During this evolutionary process the number of address bits has also increased, allowing instructions to reference a larger address space.

Instruction-level parallelism. Today's computers use multi-stage processing pipelines to speed up execution. Once an n-stage pipeline is full, an instruction is completed at every clock cycle. A “classic” pipeline of a Reduced Instruction Set Computing (RISC) architecture consists of five stages: instruction fetch, instruction decode, instruction execution, memory access, and write back. A Complex Instruction Set Computing (CISC) architecture could have a much larger number of pipeline stages; for example, an Intel Pentium 4 processor has a 35-stage pipeline.

Data parallelism or loop parallelism. The program loops can be processed in parallel.

Task parallelism. The problem can be decomposed into tasks that can be carried out concurrently. A widely used type of task parallelism is the Same Program Multiple Data (SPMD) paradigm. As the name suggests, individual processors run the same program but on different segments of the input data, as illustrated in the sketch following this list. Data dependencies cause different flows of control in individual tasks.
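
Below is a minimal SPMD sketch in C using POSIX threads: every worker runs the same function, and its rank selects which segment of the input array it processes. The names and the equal-sized partitioning are our own illustrative choices.

    #include <pthread.h>
    #include <stdio.h>

    #define N        16
    #define WORKERS   4

    static double data[N];
    static double partial[WORKERS];
    static long   rank_id[WORKERS];

    /* Same program for every worker; the rank selects the data segment. */
    static void *worker(void *arg)
    {
        long rank  = *(long *)arg;
        long chunk = N / WORKERS;
        double sum = 0.0;
        for (long i = rank * chunk; i < (rank + 1) * chunk; i++)
            sum += data[i];
        partial[rank] = sum;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[WORKERS];
        for (long i = 0; i < N; i++) data[i] = (double)i;

        for (long r = 0; r < WORKERS; r++) {
            rank_id[r] = r;
            pthread_create(&tid[r], NULL, worker, &rank_id[r]);
        }

        double total = 0.0;
        for (long r = 0; r < WORKERS; r++) {
            pthread_join(tid[r], NULL);
            total += partial[r];
        }
        printf("sum = %g\n", total);   /* 0 + 1 + ... + 15 = 120 */
        return 0;
    }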

In 1966 Michael Flynn proposed a classification of computer architectures based on the number of concurrent control/instruction and data streams: Single Instruction, Single Data (SISD); Single Instruction, Multiple Data (SIMD); and Multiple Instruction, Multiple Data (MIMD).

The SIMD architecture supports vector processing. When an SIMD instruction is issued, the operations on individual vector components are carried out concurrently. For example, to add two vectors (a1, a2, …, a50) and (b1, b2, …, b50), all 50 pairs of vector elements are added concurrently, and all the sums (ai + bi), 1 ≤ i ≤ 50, are available at the same time.
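
On a commodity x86 processor the same idea can be sketched with the SSE intrinsics, as below: each _mm_add_ps adds four single-precision elements in one instruction, so the 50-element addition proceeds four lanes at a time with a short scalar tail, rather than producing all 50 sums at once as a true vector machine would.

    #include <xmmintrin.h>   /* SSE intrinsics */

    /* c[i] = a[i] + b[i] for n floats, four elements per SIMD instruction. */
    static void vec_add(const float *a, const float *b, float *c, int n)
    {
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);           /* load 4 elements of a  */
            __m128 vb = _mm_loadu_ps(b + i);           /* load 4 elements of b  */
            _mm_storeu_ps(c + i, _mm_add_ps(va, vb));  /* 4 additions at once   */
        }
        for (; i < n; i++)                             /* scalar tail           */
            c[i] = a[i] + b[i];
    }

Called with n = 50, the loop issues 12 four-wide additions and the scalar tail finishes the last two elements.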

The first use of SIMD instructions was in vector supercomputers such as the CDC Star-100 and the Texas Instruments ASC in the early 1970s. Vector processing was popularized by Cray supercomputers in the 1970s and 1980s, by attached vector processors such as those produced by Floating Point Systems (FPS), and by SIMD supercomputers such as the Thinking Machines CM-1 and CM-2. Sun Microsystems introduced SIMD integer instructions in its VIS instruction set extensions in 1995 in its UltraSPARC I microprocessor; the first widely deployed SIMD extension for gaming was Intel's MMX extension to the x86 architecture. IBM and Motorola then added AltiVec to the POWER architecture, and there have since been several extensions to the SIMD instruction sets of both architectures.

The desire to support real-time graphics with vectors of two, three, or four dimensions led to the development of graphic processing units (GPUs). GPUs are very efficient at manipulating computer graphics, and their highly parallel structures based on SIMD execution support parallel processing of large blocks of data. GPUs produced by Intel, Nvidia, and AMD/ATI are used in embedded systems, mobile phones, personal computers, workstations, and game consoles.

An MIMD architecture refers to a system with several processors that function asynchronously and independently; at any time, different processors may be executing different instructions on different data. The processors of an MIMD system can share a common memory, and we distinguish several types of such systems: Uniform Memory Access (UMA), Cache Only Memory Access (COMA), and Non-Uniform Memory Access (NUMA).

An MIMD system could have a distributed memory; in this case the processors and the memory communicate with one another using an interconnection network, such as a hypercube, a 2D torus, a 3D torus, an omega network, or another network topology. Today most supercomputers are MIMD machines, and some use GPUs instead of traditional processors. Multicore processors with multiple processing units are now ubiquitous.
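
As a small illustration of one such topology, the C sketch below (our own; the function name is hypothetical) lists the neighbors of a node in a d-dimensional hypercube: each of the 2^d nodes carries a d-bit label and is linked to the d nodes whose labels differ from its own in exactly one bit.

    #include <stdio.h>

    /* Print the neighbors of node 'node' in a d-dimensional hypercube.
     * Each node is connected to the d nodes whose binary labels differ
     * from its own in exactly one bit position. */
    static void hypercube_neighbors(unsigned int node, unsigned int d)
    {
        printf("node %u:", node);
        for (unsigned int bit = 0; bit < d; bit++)
            printf(" %u", node ^ (1u << bit));   /* flip one bit per link */
        printf("\n");
    }

    int main(void)
    {
        /* 3-dimensional hypercube: 8 nodes, 3 links per node. */
        for (unsigned int n = 0; n < 8; n++)
            hypercube_neighbors(n, 3);
        return 0;
    }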

Modern supercomputers derive their power from architecture and parallelism rather than from increases in processor speed. The supercomputers of today consist of a very large number of processors and cores communicating via very fast custom interconnects. In mid-2012 the most powerful supercomputer was a Linux-based IBM BlueGene/Q system powered by Power BQC 16-core processors running at 1.6 GHz. The system, installed at Lawrence Livermore National Laboratory and called Sequoia, has a total of 1,572,864 cores and 1,572,864 GB of memory, achieves a sustained speed of 16.32 petaFLOPS, and consumes 7.89 MW of power.

More recently, a Cray XK7 system called Titan, installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, was crowned the fastest supercomputer in the world. Titan has 560,640 processors, including 261,632 Nvidia K20x accelerator cores; it achieved a speed of 17.59 petaFLOPS on the Linpack benchmark. Several of the most powerful systems listed in the “Top 500 supercomputers” (see www.top500.org) are powered by the Nvidia 2050 GPU; three of the top 10 use an InfiniBand interconnect.

The next natural step was triggered by advances in communication networks when low-latency and high-bandwidth wide area networks (WANs) allowed individual systems, many of them multiprocessors, to be geographically separated. Large-scale distributed systems were first used for scientific and engineering applications and took advantage of the advancements in system software, programming models, tools, and algorithms developed for parallel processing.


URL: https://www.sciencedirect.com/science/article/pii/B9780124046276000026

Server Classifications

Shu Zhang, Ming Wang, in Encyclopedia of Information Systems, 2003

I.A.1. Centralized System

The concept of network computer architecture is evolutionary. The origin of the centralized computer dates back to the 1940s. During that time host computers were very large and expensive machines, like the famous MARK I, ENIAC, and EDVAC. Even after computers were commercialized around the 1950s, most computers were “hosted” in highly secured data processing centers, and users accessed the host computers through “dumb” terminals. By the late 1960s, IBM had become the dominant vendor of large-scale computers called “mainframe” hosts. In the mid-1970s, minicomputers started challenging mainframe computers; in many cases, minicomputers could host applications and perform the same functions as mainframes, but at lower cost. Since the host is the center of this system architecture, it is called a centralized system. In the early 1980s, most computers, whether large-scale mainframes or smaller minicomputers, were operated as application hosts, while terminal users had limited access to their hosts; in fact, most terminal users never had any physical access to host computers. Generally speaking, a host is a computer designed for massive parallel processing of large quantities of information, connected to terminals used by end users. All network services, application executions, and database requests are handled by this computer, and all data are stored on this host.

Basically, minicomputers and mainframe computers were the de facto standard of enterprise centralized computing systems before PCs entered the professional computing area. Figure 1 shows a typical centralized system.


Figure 1. Centralized system.


URL: https://www.sciencedirect.com/science/article/pii/B012227240400157X

What are large scale high speed centralized computers?

A mainframe (also known as "big iron") is a high-performance computer used for large-scale computing purposes that require greater availability and security than a smaller-scale machine can offer.

What is the most widely used form of centralized processing?

Client/server computing is the most widely used form of centralized processing.
