Parallel Computing

Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
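
As a rough illustration of the data-parallel form listed above (the Python API, the square() workload, the worker count, and the chunk size are illustrative choices, not something this article prescribes), the same operation can be applied to disjoint pieces of a data set by several worker processes at once:

    # Minimal sketch of data parallelism: one operation, many data elements,
    # several workers.  Worker count, chunk size, and workload are assumptions.
    from concurrent.futures import ProcessPoolExecutor

    def square(x):
        return x * x

    if __name__ == "__main__":
        data = range(1_000_000)
        with ProcessPoolExecutor(max_workers=4) as pool:
            # map() hands chunks of the input to the worker processes and
            # reassembles the results in order.
            results = list(pool.map(square, data, chunksize=10_000))
        print(results[:5])   # [0, 1, 4, 9, 16]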

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, MPPs (massively parallel processors), and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks.

Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
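
As a contrived sketch of the race-condition bug class mentioned above (the shared counter, the thread and iteration counts, and the time.sleep(0) used to widen the race window are all illustrative assumptions), an unsynchronised read-modify-write of shared state can lose updates, while a lock around the same critical section cannot:

    # Sketch of a race condition on a shared counter, and its fix with a lock.
    import threading
    import time

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            tmp = counter        # read
            time.sleep(0)        # widen the window so another thread can run here
            counter = tmp + 1    # write: any update made in between is lost

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:           # serialise the whole read-modify-write
                tmp = counter
                time.sleep(0)
                counter = tmp + 1

    def run(worker, threads=4, n=10_000):
        global counter
        counter = 0
        ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
        for t in ts:
            t.start()
        for t in ts:
            t.join()
        return counter

    print("without lock:", run(unsafe_increment))  # usually well below 40000
    print("with lock:   ", run(safe_increment))    # always 40000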

The maximum possible speed-up of a program as a result of parallelization is given by Amdahl's law.
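
In its usual statement (the symbols below are my notation, not taken from this article), Amdahl's law bounds the overall speed-up S obtainable when a fraction p of the work is parallelised and that fraction is sped up by a factor s (for example, the number of processors):

    S(s) = \frac{1}{(1 - p) + \frac{p}{s}}, \qquad \lim_{s \to \infty} S(s) = \frac{1}{1 - p}

For instance, if 95% of a program can be parallelised (p = 0.95), eight processors give a speed-up of about 5.9, and no number of processors can push it past 20.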

Other articles related to "parallel computing, computing, parallel":

Formal Definition - Lateral Computing and Parallel Computing
... Parallel computing focuses on improving the performance of computers and algorithms through the use of several computing elements (such as processing elements) ... The computing speed is improved by using several computing elements simultaneously ... Parallel computing is an extension of conventional sequential computing ...
UPCRC Illinois - Parallel Computing History At Illinois
... Illinois's history in parallel computing stretches back more than 40 years ... From the first academic parallel supercomputer, the ILLIAC IV, started in 1964, to today’s work to install the first petascale computer, Blue Waters, Illinois has ... Polaris, Parafrase, IMPACT, LLVM; race detection techniques; parallel runtime systems – Chare Kernel, Charm++; IBM/DARPA PERCS – a precursor to IBM’s Power ...
Cache Invalidation
... done explicitly, as part of a cache coherence protocol in a parallel computer ...
Parallel Computing - History
... Gill (Ferranti) discussed parallel programming and the need for branching and waiting ... In 1967, Amdahl and Slotnick published a debate about the feasibility of parallel processing at the American Federation of Information Processing ... capable of running up to eight processors in parallel ...
Introduction - Parallel and Distributed Computing
... The terms "concurrent computing", "parallel computing", and "distributed computing" have a lot of overlap, and no clear distinction exists between them ... The same system may be characterised both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently, in parallel ... Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing ...
