
Topic: models of parallel computation





Subtopic: parallel system

Quote: a parallel system consists of a parallel algorithm and a parallel architecture; they cannot be separated [»gramAY8_1993]

Subtopic: latency, bandwidth

Quote: high latency, communication overhead, and limited bandwidth will remain problems for parallel processing [»cullDE11_1996]

Subtopic: computation vs. communication

Quote: parallel execution needs an even distribution of work while communication needs concentration; for two processes, either divide the work evenly or give it all to one [»induB3_1986]
Quote: create modules with limited intercommunication; then either distribute them fully or keep them entirely local [»induB3_1986]

Subtopic: scalable

Quote: a scalable parallel system preserves efficiency if both processors and problem size increase together [»gramAY8_1993]
Quote: the isoefficiency function balances problem size with number of processors; a highly scalable system allows small increments in problem size to use increasing numbers of processors efficiently [»gramAY8_1993]
Quote: a parallel system is cost-optimal if the total parallel time is proportional to the best serial time [»gramAY8_1993]
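The efficiency and cost-optimality relations above can be sketched numerically; the timing values below are invented for illustration, not taken from the source.

```python
# Efficiency of a parallel system: E = W / (p * Tp), where W is the
# best serial time, Tp the parallel time, and p the processor count.
def efficiency(serial_time, parallel_time, processors):
    return serial_time / (processors * parallel_time)

# Cost-optimality means p * Tp stays proportional to W, i.e. the
# efficiency is bounded below by a constant as p and W grow together.
print(efficiency(serial_time=1000.0, parallel_time=125.0, processors=10))  # 0.8
```

The isoefficiency function asks how fast W must grow with p to hold this ratio constant.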

Subtopic: parallel vs. serial

Quote: constructs for parallel computation can subsume those for serial computation and data [»pratV1_1983]
Quote: parallel is more natural than sequential; e.g., processing an array seldom requires a definite order [»gellDP7_1974]
Quote: input and output are basic programming primitives and parallel composition is a fundamental program structure method; use guarded commands [»hoarCA8_1978]

Subtopic: critical path

Quote: work is time to compute sequentially while critical-path is time with infinite processors [»frigM6_1998]
Quote: work-first principle -- minimize scheduling for total sequential time; move overheads to critical path
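A minimal sketch of work versus critical path for a pairwise reduction tree; the example and function names are illustrative, not from the source.

```python
import math

# Work T1: time to run all operations sequentially.
# Critical path (span) T_inf: time with infinitely many processors.
# Their ratio T1 / T_inf bounds the achievable speedup.
def reduction_work(n):
    return n - 1                    # n-1 pairwise combines in total

def reduction_span(n):
    return math.ceil(math.log2(n))  # depth of the combining tree

n = 1024
print(reduction_work(n), reduction_span(n))  # 1023 10
```

Here the parallelism is 1023/10, so roughly 100 processors can be kept busy; the work-first principle says to keep the per-combine cost low even at the expense of the (rarely taken) critical path.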

Subtopic: interference

Quote: a parallel language must check that processes access disjoint sets of variables only and do not interfere in time-dependent ways [»brinP4_1999]

Subtopic: overhead

Quote: the total overhead of a parallel system is a function of problem size and number of processors; overhead due to idling, communication, contention, etc.
Quote: use isoefficiency to compare two parallel algorithms; low overhead is not OK if it limits concurrency [»gramAY8_1993]
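The quoted total-overhead function can be written down directly; the numbers below are placeholders.

```python
# Total overhead To(W, p) = p*Tp - W: all processor-time spent
# idling, communicating, or contending rather than doing useful work.
def total_overhead(serial_time, parallel_time, processors):
    return processors * parallel_time - serial_time

print(total_overhead(serial_time=1000.0, parallel_time=125.0, processors=10))  # 250.0
```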

Subtopic: logical vs. physical processors

Quote: optimal parallel execution if memory is randomly distributed among the physical processors and there are a factor of log p more virtual processors than physical ones; e.g., use hashing [»valiLG8_1990]
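One way to realize "randomly distributed memory via hashing" is to hash each memory address to a home processor. The SHA-256 choice below is an arbitrary stand-in for the hash families Valiant assumes.

```python
import hashlib

def home_processor(address, p):
    # Hash the address so memory words spread pseudo-randomly over
    # the p physical processors; any good hash spreads the load so
    # no processor becomes a hot spot.
    digest = hashlib.sha256(str(address).encode()).digest()
    return int.from_bytes(digest[:8], "big") % p
```

The mapping is deterministic (every processor agrees where a word lives) yet statistically uniform across addresses.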

Subtopic: process group

Quote: most real parallel programs consist of replicated instances of a few types of processes; i.e., process groups [»wilsGV_1995]

Subtopic: tasks vs. modules

Quote: if modules are organized as multiple tasks, task division can be delayed until later; otherwise the initial architecture determines the tasks [»clarDD12_1995]

Subtopic: message passing, data parallel, shared variable

Quote: procedural message passing is more complicated than data-parallel or shared-variable programs; use an application builder [»wilsGV_1995]
Quote: naturally design processes by message passing, blocking semantics, servers, proprietors, administrators, notifiers, and couriers; anthropomorphize [»gentWM5_1981]

Subtopic: work queue

Quote: use work queues instead of thread per connection or thread per work unit [»smaaB2_2006]
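The work-queue pattern above can be sketched as a fixed pool of worker threads draining a shared queue; the squaring "work" is a placeholder for a real job.

```python
import queue
import threading

def run_work_queue(jobs, num_workers):
    # A fixed pool of workers drains a shared queue instead of
    # spawning one thread per connection or per work unit.
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            job = q.get()
            if job is None:              # sentinel: shut this worker down
                q.task_done()
                return
            with lock:
                results.append(job * job)   # stand-in for real work
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for job in jobs:
        q.put(job)
    for _ in threads:
        q.put(None)
    q.join()
    for t in threads:
        t.join()
    return sorted(results)

print(run_work_queue(range(5), num_workers=3))  # [0, 1, 4, 9, 16]
```

The pool size bounds concurrency regardless of how many work units arrive, which is the point of the quoted advice.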

Subtopic: LogP model

Quote: the LogP model of parallel computation uses latency, overhead, gap between successive sends/receives, and number of processors; fixed size messages and finite capacity network
Quote: LogP encourages practical speedups such as coordinating work assignment with data placement, reduced communication, overlapping computation with communication, and balanced communication without flooding a processor [»cullDE11_1996]
Quote: LogP helps guide algorithm design and implementation; discrepancies between predicted and measured times may indicate problems and improvements [»cullDE11_1996]
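A back-of-envelope LogP cost for k fixed-size messages between two processors; this is the usual textbook composition of the four parameters, under the stated assumptions.

```python
def logp_message_time(L, o, g, k=1):
    # Sender pays overhead o per message and must leave gap g between
    # successive injections; the last message then needs latency L
    # plus receive overhead o.  Assumes g >= o and that the network's
    # finite capacity is never exceeded.
    return o + (k - 1) * g + L + o

print(logp_message_time(L=6, o=2, g=4, k=3))  # 18
```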

Subtopic: bulk-synchronous

Quote: the bulk-synchronous parallel model achieves simulations of parallel computation up to constant factors; like the von Neumann model [»valiLG8_1990]
Quote: the bulk-synchronous parallel model requires parallel slackness; i.e., a factor of log n more virtual processes than physical processors [»valiLG8_1990]
Quote: the router in the bulk-synchronous parallel model sends up to h messages for each component [»valiLG8_1990]
Quote: with the bulk-synchronous parallel model, the hardware investment for communication must grow faster than that for computation
Quote: the bulk-synchronous parallel model uses barrier synchronization; i.e., supersteps of L time units with all communication between supersteps [»valiLG8_1990]
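The superstep structure gives BSP its standard cost formula; the sketch below uses the common parameterization (g time units per message of an h-relation, L for the barrier) with invented numbers.

```python
def superstep_cost(local_work, h, g, L):
    # One BSP superstep: all processors compute locally, exchange at
    # most h messages each (an h-relation, costing g per message),
    # then barrier-synchronize at cost L.
    return max(local_work) + g * h + L

print(superstep_cost(local_work=[100, 120, 90], h=10, g=4, L=50))  # 210
```

Since the slowest processor sets the pace of every superstep, balanced local work matters as much as cheap communication.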

Subtopic: neighborhood model

Quote: in the crystalline model, processes work independently on a portion of the data structure. They only communicate in unison with their immediate neighbors [»wilsGV_1995]
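A one-dimensional sketch of the crystalline model: every cell is recomputed from itself and its immediate neighbors only, conceptually all in unison. The averaging rule is an illustrative stand-in for any local update.

```python
def stencil_step(grid):
    # Each position depends only on its immediate neighbors;
    # boundaries reuse the edge value.  In a real crystalline
    # program each processor owns a slice and exchanges just the
    # border cells with its neighbors each step.
    n = len(grid)
    return [(grid[max(i - 1, 0)] + grid[i] + grid[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

print(stencil_step([0.0, 0.0, 3.0, 0.0, 0.0]))  # [0.0, 1.0, 1.0, 1.0, 0.0]
```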

Subtopic: read vs. write

Quote: use a special pause thread if reads much more frequent than writes; each read thread need only prevent its own preemption [»smaaB2_2006]
Quote: for short reads, simple mutex often better than a reader-writer lock [»smaaB2_2006]
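A minimal illustration of the "simple mutex for short reads" advice: both readers and writers take the same plain lock. The Counter class and its methods are invented for the example.

```python
import threading

class Counter:
    # The read section is so short that a plain mutex usually beats a
    # reader-writer lock: less bookkeeping, and readers rarely wait.
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:
            self._value += 1

    def read(self):
        with self._lock:      # short critical section
            return self._value
```

threading.Lock is Python's plain mutex; a reader-writer lock only pays off when read sections are long and concurrent readers are common.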

Subtopic: explicit synchronization

Quote: code all synchronization explicitly; speed ratios are not available


Related Topics

Group: coordination system   (8 topics, 217 quotes)
Group: database model   (15 topics, 316 quotes)
Group: machine model   (13 topics, 206 quotes)

Topic: data flow machines (14 items)
Topic: calculus of communicating processes (13 items)
Topic: communicating sequential processes (33 items)
Topic: computer performance (14 items)
Topic: concurrency control by monitors (24 items)
Topic: formal methods and languages (53 items)
Topic: massively parallel processors (29 items)
Topic: parallel algorithms (15 items)
Topic: parallel programming languages (14 items)
Topic: Petri net (44 items)
Topic: process migration (3 items)
Topic: race conditions (33 items)
Topic: software models of reality (24 items)


Updated barberCB 3/06
Copyright © 2002-2008 by C. Bradford Barber. All rights reserved.
Thesa is a trademark of C. Bradford Barber.