Multiple processes can be run in a time-shared manner on an ordinary single-processor computer. It is difficult to obtain a speed-up this way unless, for example, one process is computing while the other processes are waiting on input/output.
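This overlap can be sketched in a few lines of Python (the "I/O" here is a hypothetical blocking operation simulated with `time.sleep`; a real workload would be a disk or network call):

```python
import threading
import time

results = {}

def io_task():
    time.sleep(0.2)          # stands in for a blocking read/write
    results["io"] = "done"

def compute_task():
    results["compute"] = sum(i * i for i in range(100_000))

# While the I/O thread is blocked, the main thread keeps computing,
# so the two activities overlap instead of running back to back.
t = threading.Thread(target=io_task)
t.start()
compute_task()
t.join()

print(results)
```

The compute work finishes while the other thread is blocked, which is exactly the situation in which time-sharing a single processor pays off.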
2. Shared memory computers
2.1 Multiprocessor Configurations (SMPs: Symmetric Multiprocessors)
SMP is a multiprocessor computer hardware architecture in which two or more identical processors are connected to a single shared memory and controlled by a single OS instance. In an SMP, the identical processors share memory over a common bus. Bus contention prevents bus-based architectures from scaling; as a result, SMPs generally comprise no more than 32 processors.
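A minimal sketch of the shared-memory model, using Python threads as stand-ins for SMP processors (note that CPython's GIL prevents true parallel execution of Python bytecode, so this illustrates only the single shared address space, not SMP speed-up):

```python
import threading

# All threads update the same counter in the one shared address space.
# A lock serialises access, much as coherence hardware serialises
# conflicting memory traffic on an SMP's shared bus.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4 workers x 10_000 increments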
2.2 Hyperthreading (Simultaneous Multithreading [SMT])
With HT Technology, two threads can execute on the same physical processor core simultaneously rather than context switching between them. Scheduling two threads on the same physical core allows better use of the processor's resources. HT Technology adds circuitry and functionality to a traditional processor so that one physical processor appears as two separate processors, each referred to as a logical processor.
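Logical processors are what the OS actually sees and schedules onto. In Python, `os.cpu_count()` reports this logical count; on a machine with Hyperthreading enabled it is typically double the number of physical cores (the exact figures depend on your hardware):

```python
import os

# os.cpu_count() counts *logical* processors. With two-way SMT,
# a 4-core chip would typically report 8 here; without SMT, 4.
logical = os.cpu_count()
print(f"logical processors visible to the OS: {logical}")
```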
2.3 Dual Core
This term refers to integrated circuit (IC) chips that contain two complete physical computer processors (cores) in the same IC package. Typically, this means that two identical processors are manufactured so they reside side by side on the same die. It is also possible to (vertically) stack two separate processor die and place them in the same IC package. Each of the physical processor cores has its own resources (architectural state, registers, execution units, etc.).
2.4 Multi Core
A multi-core system extends the dual-core idea to more than two cores on one chip. Current trends in processor technology indicate that the number of processor cores per IC chip will continue to increase. If we assume that the number of transistors per core remains relatively fixed, it is reasonable to expect the number of cores to follow Moore's Law, which states that the number of transistors in a given chip area doubles approximately every 18 months. Even if this trend does not track Moore's Law exactly, the number of cores per chip appears destined to increase steadily, based on statements from several processor manufacturers.
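The doubling argument above is easy to make concrete. This back-of-the-envelope projection assumes a Moore's-Law-style doubling every 18 months, which as noted is an assumption, not a guarantee:

```python
# Project core counts under an assumed 18-month doubling period.
def projected_cores(initial_cores, months, doubling_period=18):
    return initial_cores * 2 ** (months // doubling_period)

# A 2-core chip after 3 years (two doublings): 2 * 2**2 = 8 cores.
print(projected_cores(2, 36))   # 8
# After 7.5 years (five doublings): 2 * 2**5 = 64 cores.
print(projected_cores(2, 90))   # 64
```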
2.5 Many Core
A many-core processor is a multi-core processor in which the number of cores is large enough that traditional multiprocessor techniques are no longer efficient, largely because of congestion in supplying instructions and data to the many cores. The many-core threshold is roughly in the range of several tens to hundreds of cores.
2.6 Graphics Processing Units (GPUs)
Graphics cards, which often have 100+ processor cores and a rich memory structure that those cores can share, make a good general-purpose computing platform. Each individual core can do less than a CPU, but with their powers combined they form a fast parallel computer.
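GPU programs are data-parallel: the same small "kernel" runs once per element, with one logical thread per index. This pure-Python sketch only illustrates the model; on a real GPU (via CUDA or OpenCL, say) the per-index invocations would run on hundreds of cores at once:

```python
# A SAXPY-style kernel: out[i] = a * x[i] + y[i], independently per index.
def saxpy_kernel(i, a, x, y, out):
    out[i] = a * x[i] + y[i]

n = 8
a = 2.0
x = [float(i) for i in range(n)]
y = [1.0] * n
out = [0.0] * n

# On a GPU, these iterations would execute in parallel, one per thread;
# here we loop sequentially purely to show the programming model.
for i in range(n):
    saxpy_kernel(i, a, x, y, out)

print(out)  # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

Because no iteration depends on any other, the loop parallelises trivially, which is exactly the structure GPUs are built to exploit.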
3. Distributed Computing
A distributed computer is a distributed-memory computer system in which the processing elements are connected by a network. It is also known as a distributed-memory multiprocessor or a multicomputer.
3.1 Cluster computing
A cluster is a group of loosely coupled computers that work together closely, so that in some respects they can be regarded as a single computer. Clusters are composed of multiple standalone machines connected by a network. While the machines in a cluster do not have to be symmetric, load balancing is more difficult if they are not. The most common type of cluster is the Beowulf cluster, which is implemented on multiple identical commercial off-the-shelf computers connected with a TCP/IP Ethernet local area network.
3.2 Massively parallel processing
A massively parallel processor (MPP) is a single computer with many networked processors. MPPs have many of the same characteristics as clusters, but MPPs have specialized interconnect networks (whereas clusters use commodity hardware for networking). MPPs also tend to be larger than clusters, typically having "far more" than 100 processors. In an MPP, each CPU contains its own memory and its own copy of the operating system and application, and each subsystem communicates with the others via a high-speed interconnect.
3.3 Grid computing
Grid computing is the most distributed form of parallel computing. It makes use of computers communicating over the Internet to work on a given problem. Because of the low bandwidth and extremely high latency available on the Internet, grid computing typically deals only with embarrassingly parallel problems. Most grid computing applications use middleware, software that sits between the operating system and the application to manage network resources and standardize the software interface. Often, grid computing software makes use of "spare cycles", performing computations at times when a computer is idling.
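The spare-cycles model is a pull model: an idle volunteer machine asks for a work unit, computes it, and reports back. A thread-based sketch of that loop (hypothetical names; real middleware such as BOINC does this over HTTP with a central server):

```python
import queue
import threading

tasks = queue.Queue()
results = []
results_lock = threading.Lock()

for n in range(10):
    tasks.put(n)            # work units waiting on the "server"

def volunteer():
    while True:
        try:
            n = tasks.get_nowait()   # "ask the server for spare work"
        except queue.Empty:
            return                   # nothing left: go back to idling
        with results_lock:
            results.append((n, n * n))  # report the finished work unit

workers = [threading.Thread(target=volunteer) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(sorted(results))
```

Each work unit is independent, so volunteers can join, leave, or run at wildly different speeds without coordinating with one another, which is what makes only embarrassingly parallel problems a good fit.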
Note: Please do not confuse these general terms with commercial product names (e.g., Intel Dual Core, Core 2 Duo).
Special thanks to Mr. K.P.M.K. Silva - BSc (Col), MSc (York) (Lecturer)