Topic: critical regions


Summary

A critical region is any place where two or more processes may modify the same resource. Since modification requires access, overlapping processes may read incorrect information and can leave the resource in an inconsistent state. Several options exist to guarantee that only one process is active in a region at a time. Semaphores are the most primitive; guarded critical regions wait until the guard is true; monitor processes receive and process requests; path expressions define the protocol among processes; sequencers guarantee consistent access and let a process define its continuation; shared procedures prevent multiple access; single tasking guarantees a single active process; mutually exclusive states express critical regions as state diagrams; and crowds identify the waiting processes. Whichever method is used, the following should be guaranteed: mutual exclusion of processes, guaranteed exit for all entering processes, fair scheduling of all processes, and any higher-order synchronization. (cbb 5/80)
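
Example: a minimal sketch in Java (not from the cited sources) of a critical region guarded by a mutex. Two threads increment a shared counter; the synchronized block ensures that only one thread is active in the region at a time, so the final count is always 200000. The class and variable names are illustrative.

    public class CriticalRegionExample {
        private static int shared = 0;                    // shared resource
        private static final Object lock = new Object();  // mutex for the critical region

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    synchronized (lock) {      // enter critical region
                        shared = shared + 1;   // read-modify-write is now indivisible
                    }                          // exit critical region
                }
            };
            Thread a = new Thread(work), b = new Thread(work);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(shared);        // always 200000 with the lock; unpredictable without it
        }
    }
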
Subtopic: critical region

Quote: use critical regions to update shared variables [»brinP7_1972]
Quote: a mutex or critical section is the simplest primitive for accessing shared variables; lock a mutex, use shared variables, unlock the mutex and release a blocked thread [»birrAD_1991]
Quote: critical regions always satisfy an invariant even though they interleave arbitrarily
Quote: a monitor manages a resource with mutual-exclusion; users simply execute allocate and release procedures [»dennPJ_1980]
Quote: always protect shared data with a mutex and boolean wait expressions [»birrAD_1991]
Quote: a conditional critical region is entered when some condition is met [»brinP12_1973, OK]
QuoteRef: brinP12_1973 ;;237 critical regions -- mutual exclusion: only one process at a time; termination: a process in the region will terminate; fair scheduling: a waiting process will eventually reach the region (p. 240); e.g., region v do v := ... end; nesting orders regions, e.g., region v do ... region w do ... end end
QuoteRef: brinP12_1973 ;;230 two processes independent if all changed variables are private (checkable)
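
Example: a minimal sketch in Java (not from the cited sources) of the lock / use shared variables / unlock pattern described above, with an explicit mutex and a try/finally to guarantee the unlock. The account balance is an illustrative shared variable.

    import java.util.concurrent.locks.ReentrantLock;

    public class MutexPattern {
        private final ReentrantLock mutex = new ReentrantLock();
        private long balance = 0;          // shared variable, protected by mutex

        public void deposit(long amount) {
            mutex.lock();                  // lock the mutex
            try {
                balance += amount;         // use the shared variable
            } finally {
                mutex.unlock();            // unlock; a blocked thread may now proceed
            }
        }

        public long read() {
            mutex.lock();
            try { return balance; } finally { mutex.unlock(); }
        }
    }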

Subtopic: conditional critical region

Quote: conditional critical regions better than mutual-exclusion; non-blocking without priority inversion; reevaluate conditions on update [»harrT10_2003]
Quote: conditional critical regions in Atomos by watch sets that trip on updates; yields the hardware transactional context until a condition might be true [»carlBD6_2006]
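
Example: a minimal sketch in Java (not from the cited sources) of a conditional critical region. A thread enters take() only when the guard (count > 0) holds; every update calls notifyAll so that waiters re-evaluate the guard. Names are illustrative.

    public class GuardedCounter {
        private int count = 0;

        public synchronized void put() {
            count++;
            notifyAll();             // an update: wake waiters so they re-check their guards
        }

        public synchronized void take() throws InterruptedException {
            while (count == 0) {     // guard is re-evaluated after every wakeup
                wait();
            }
            count--;                 // body runs with the guard true, under mutual exclusion
        }
    }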

Subtopic: monitor process

Quote: combine critical regions into a secretary process; invoked, as needed, by its directors; order is undefined [»dijkEW2_1971]
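
Example: a minimal sketch in Java (not from the cited source) of the secretary idea: director threads submit requests to a queue, and a single secretary thread applies them to the resource, so the resource itself needs no locking and the serving order is simply queue order. Names are illustrative.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class Secretary {
        private final BlockingQueue<Runnable> requests = new ArrayBlockingQueue<>(64);
        private int resource = 0;                  // touched only by the secretary thread

        public void start() {
            Thread secretary = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        requests.take().run();     // serve the next director's request
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            secretary.setDaemon(true);
            secretary.start();
        }

        public void submit() throws InterruptedException {
            requests.put(() -> resource++);        // a director invokes the secretary
        }
    }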

Subtopic: atomic method

Quote: use atomic methods to avoid unexpected thread interactions; race-detection is neither necessary nor sufficient [»flanC6_2003]
Quote: type system to verify atomic methods in multithreaded Java programs; based on rccjava, a type system for race detection, and Lipton's reduction; found errors in java.lang.String and StringBuffer
Quote: verify atomicity using Lipton's right and left movers; lock acquisition moves to the right while lock release moves left; atomic if acquisition and release move to start and end respectively [»flanC6_2003]
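
Example: a minimal sketch in Java (not from the cited sources, and not the type system described above) of why atomic methods matter. tryTake is atomic because the test and the update run under one lock; calling available() and then take() is race-free yet not atomic, since another thread can run between the two calls. Names are illustrative.

    public class Inventory {
        private int stock = 10;

        // Atomic: no other thread can interleave between the test and the decrement.
        public synchronized boolean tryTake() {
            if (stock > 0) { stock--; return true; }
            return false;
        }

        // Each call is synchronized (so there is no data race), but the
        // check-then-act composed from them is not atomic and may oversell.
        public synchronized boolean available() { return stock > 0; }
        public synchronized void take() { stock--; }
    }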

Subtopic: group synchronization

Quote: if use crowds instead of counters for synchronization, can determine which processes are currently using a resource [»atkiR_1977]
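
Example: a minimal sketch in Java (not from the cited source) of a crowd instead of a counter: the resource records which threads are currently inside it, not just how many. Names are illustrative.

    import java.util.HashSet;
    import java.util.Set;

    public class Crowd {
        private final Set<Thread> inside = new HashSet<>();

        public synchronized void enter() { inside.add(Thread.currentThread()); }
        public synchronized void leave() { inside.remove(Thread.currentThread()); }

        // Unlike a counter, a crowd can report which processes are using the resource.
        public synchronized Set<Thread> members() { return new HashSet<>(inside); }
    }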

Subtopic: hard real time

Quote: for hard real time, synchronization uses a counting semaphore with up/down/pass operations; same as Dijkstra's V operation [»faulSR3_1988]

Subtopic: hierarchical regions

Quote: each context level acts as a single mutex lock and thus prevents cycles in the invocation structure [»maysD5_2002]
Quote: CycleFree methodology creates hierarchy of object contexts; use events instead of upcalls; only events run concurrently [»maysD5_2002]

Subtopic: guarded region, monitor

Quote: shared variables for a resource are only used in guarded regions [»brinP9_1978]
Quote: the statements of a guarded region implement a set of transitions from guards to results
Quote: a monitor consists of shared data, a mutex, and zero or more condition variables [»birrAD_1991]
Quote: implement critical regions by a shared class or monitor; its operations exclude each other over time [»brinP_1973]
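
Example: a minimal sketch in Java (not from the cited sources) of a monitor in the sense above: shared data, one mutex, and two condition variables. The bounded-buffer operations exclude each other over time; the capacity and names are illustrative.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    public class BoundedBuffer<T> {
        private final Deque<T> items = new ArrayDeque<>();        // shared data
        private final int capacity = 16;
        private final ReentrantLock mutex = new ReentrantLock();  // the monitor's mutex
        private final Condition notFull  = mutex.newCondition();  // condition variables
        private final Condition notEmpty = mutex.newCondition();

        public void put(T x) throws InterruptedException {
            mutex.lock();
            try {
                while (items.size() == capacity) notFull.await();
                items.addLast(x);
                notEmpty.signal();
            } finally { mutex.unlock(); }
        }

        public T take() throws InterruptedException {
            mutex.lock();
            try {
                while (items.isEmpty()) notEmpty.await();
                T x = items.removeFirst();
                notFull.signal();
                return x;
            } finally { mutex.unlock(); }
        }
    }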

Subtopic: barrier synchronization

Quote: TreadMarks currently provides two synchronization primitives: global barrier synchronization and exclusive locks [»amzaC2_1996]
Quote: bulk synchronous parallelism uses barrier synchronization, i.e., all global communication between processors takes place at synchronized barriers [»pounD11_1996]
Quote: the bulk-synchronous parallel model uses barrier synchronization; i.e., supersteps of L time units with all communication between supersteps [»valiLG8_1990]
Quote: for incremental garbage collection, coordinate collector and mutator with read or write barriers; write barriers cause fewer faults [»wilsPR9_1992]
Quote: mostly non-copying, real-time incremental garbage collector with 4% read barrier overhead and 45% utilization in 1.6-2.5x actual space [»bacoDF1_2003]
Quote: implement read-barriers for Baker's memory algorithm via address range checks, virtual memory, or hardware checks on pointers [»cheaAM9_2000]
Quote: generational collectors are better than whole heap collectors in almost all circumstances; lower collection time; write barrier is reasonable [»blacSM6_2004]
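
Example: a minimal sketch in Java (not from the cited sources) of barrier synchronization in the bulk-synchronous sense of the first three quotes (the garbage-collection read/write barriers above are a different mechanism). Each worker computes a superstep locally, then all workers wait at the barrier before any of them starts the next superstep. Worker and superstep counts are illustrative.

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    public class Supersteps {
        public static void main(String[] args) {
            final int workers = 4, supersteps = 3;
            CyclicBarrier barrier = new CyclicBarrier(workers,
                    () -> System.out.println("superstep complete"));
            for (int w = 0; w < workers; w++) {
                new Thread(() -> {
                    try {
                        for (int s = 0; s < supersteps; s++) {
                            // ... local computation and communication for superstep s ...
                            barrier.await();   // no worker starts superstep s+1 until all arrive
                        }
                    } catch (InterruptedException | BrokenBarrierException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }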

Subtopic: semaphore

Quote: P- and V-operations solve the problem of mutual exclusion [»dijkEW2_1971]
Quote: algorithm for mutual exclusion of readers and writers; minimize delay of readers or writers; based on P, V, and active counts [»courPJ10_1971]
Quote: if a semaphore is zero, processes may be blocked, waiting for a P-operation; a V-operation wakes one of these processes, without starvation [»dijkEW2_1971]
Quote: semaphores are the same as a mutex, but a thread does not "hold" a semaphore, there is no precondition for release, and P and V may be textually separated [»birrAD_1991a]
Quote: use private and public semaphores to control access to a shared resource; intermediate states for desired access [»dijkEW2_1971, OK]
Quote: use semaphores to synchronize with an interrupt routine; use P(sem) to block on the interrupt and V(sem) to signal an event
Quote: since a monitor locks an entire data structure, PORTAL has resources to lock individual data items; i.e., semaphores [»schiR4_1980]
Quote: implement exclusive access by 2 mutually exclusive states; the sequence is request:access:release [»brinP9_1978, OK]
Quote: a right mover is a "seize" [P] that can move later in an interleaving; a left mover is a "release" [V] that can move earlier [»liptRJ12_1975]
Quote: for proofs, can treat a sequence as atomic if it consists of right movers (P) followed by left movers (V) [»liptRJ12_1975]
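
Example: a minimal sketch in Java (not from the cited sources) of P and V on a counting semaphore: acquire() is P and release() is V. The semaphore below limits a pool of three identical resources; the count is illustrative. Note that, as quoted above, a thread does not "hold" a semaphore, and the release could legally be issued by a different thread.

    import java.util.concurrent.Semaphore;

    public class Pool {
        private final Semaphore slots = new Semaphore(3);

        public void use() throws InterruptedException {
            slots.acquire();           // P: block until a slot is free
            try {
                // ... use one of the three resources ...
            } finally {
                slots.release();       // V: wake one blocked thread, if any
            }
        }
    }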

Subtopic: implement semaphores

Quote: EMERALDS semaphores eliminate a context switch and reduce priority inheritance overhead [»zubeKM10_2001]
Quote: can implement semaphores in Unix by creating or deleting a known file [»ritcDM7_1978b]
Quote: implement concurrency with a micro-kernel; primitives for emit, absorb, p and v; low cost [»cormGV5_1988]
Quote: use separate functions instead of a state variable; e.g., 6-60x speedup for binary semaphores [»maurPM3_2004]
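
Example: a minimal sketch in Java (not from the cited sources) of one way to implement a counting semaphore on top of a built-in monitor (synchronized/wait/notify). It ignores fairness and starvation; names are illustrative.

    public class SimpleSemaphore {
        private int permits;

        public SimpleSemaphore(int initial) { permits = initial; }

        public synchronized void p() throws InterruptedException {
            while (permits == 0) wait();   // block until a permit is available
            permits--;
        }

        public synchronized void v() {
            permits++;
            notify();                      // wake one waiting thread
        }
    }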

Subtopic: lightweight transactions

Quote: simplify concurrent programming by lightweight transactions; use for shared data, object instantiation, exceptions, and blocking [»harrT10_2003]
Quote: lightweight, software transaction without object slots; thread synchronization; requires atomic compare-swap [»harrT10_2003]
Quote: when writing mutex code must manage locks, decide on lock granularity, and handle deadlock; hides important safety and progress properties [»harrT10_2003]

Subtopic: lock-free regions

Quote: V uses time servers to serialize access to resources; better than monitors since independent of client [»cherDR4_1984]
Quote: implemented lock-free malloc (LFMalloc) with MP-RCS; one heap per CPU, low latency, scalable, memory and cache efficient
Quote: use MP-RCS for restartable critical sections and processor-specific threads without locking; uses upcalls to restart interrupted critical sections [»diceD2_2003]
Quote: MP-RCS restarts critical sections at beginning of a time quantum; stale sections use upcalls to initiate a restart [»diceD2_2003]
Quote: MP-RCS critical sections written in assembler without destroying input arguments
Quote: concurrent data is wait-free if any process can complete in a finite number of steps; e.g., compare&swap but not test&set; reduce to consensus [»herlM1_1991]
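
Example: a minimal sketch in Java (not from the cited sources) of a lock-free update built on compare-and-swap: the loop retries until the swap succeeds, so no thread ever holds a lock and some thread always makes progress. As the last quote notes, compare&swap supports such constructions; this particular loop is lock-free but not wait-free, since an individual thread may retry indefinitely. Names are illustrative.

    import java.util.concurrent.atomic.AtomicInteger;

    public class LockFreeCounter {
        private final AtomicInteger value = new AtomicInteger(0);

        public int increment() {
            while (true) {
                int old = value.get();
                if (value.compareAndSet(old, old + 1)) {
                    return old + 1;        // the swap succeeded; no other thread intervened
                }
                // otherwise another thread changed the value; re-read and retry
            }
        }
    }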

Subtopic: implementing critical regions

Quote: efficient spin lock for each free list; keeps lock in cache [»larsPA10_1998]
Quote: use unique capabilities to avoid copying user data into kernel space and to ensure mutually exclusive access [»walkD7_2000]
Quote: if an Agenda category is exclusive, only one sub-category per item; mutual exclusion
Quote: ubiquitous synchronization via a space and time efficient meta-lock; 2 bits, 11 ops, no busy wait, flexible [»agesO11_1999]
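
Example: a minimal sketch in Java (not from the cited sources) of a simple test-and-set spin lock, one common way to implement short critical regions: a thread busy-waits on an atomic flag instead of blocking. It has no fairness guarantees and is only reasonable when the region is very short. Names are illustrative.

    import java.util.concurrent.atomic.AtomicBoolean;

    public class SpinLock {
        private final AtomicBoolean held = new AtomicBoolean(false);

        public void lock() {
            while (!held.compareAndSet(false, true)) {
                Thread.onSpinWait();       // busy-wait until the flag is released
            }
        }

        public void unlock() {
            held.set(false);
        }
    }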

Subtopic: accounting

Quote: assign user programs to reserved processes; do not charge time spent in critical sections [»dijkEW2_1971]

Subtopic: problems with semaphores

Quote: if possible, use mutexes and condition variables instead of semaphores. They provide additional structure

Related Topics

Topic: conditional control by guards (17 items)
Topic: concurrency (33 items)
Topic: concurrency control by monitors (24 items)
Topic: concurrency control by sequencers (27 items)
Topic: deadlocks (21 items)
Topic: managing shared memory (74 items)
Topic: multi-tasking (22 items)
Topic: path expression (14 items)
Topic: parallel control statements (12 items)
Topic: process threads (25 items)
Topic: proving concurrent programs (37 items)
Topic: race conditions (33 items)
Topic: shared information for collaborative work (36 items)
Topic: shared objects (13 items)
Topic: updating information with locking (20 items)
Topic: waitfor condition in parallel processing (20 items)

Updated barberCB 6/05
Copyright © 2002-2008 by C. Bradford Barber. All rights reserved.
Thesa is a trademark of C. Bradford Barber.