
QuoteRef: smaaB2_2006




Topic: efficiency
Topic: consistency testing
Topic: abstraction in programming
Topic: decomposition of a system into levels
Topic: models of parallel computation
Topic: message queues for communication
Group: parallel processing
Topic: memory cache
Topic: probabilistic and randomized algorithms
Topic: lock-free concurrency
Topic: concurrency control by monitors

Reference

Smaalders, B., "Performance anti-patterns", ACM Queue, vol. 4, no. 1, pp. 44-50, February 2006.

Quotations
46 ;;Quote: performance work done at the beginning of the project in terms of benchmark, algorithm, and data-structure selection will pay tremendous dividends later on
46 ;;Quote: a good performance benchmark is repeatable, observable, portable, easily presented, realistic, and runnable
46 ;;Quote: for Solaris 10 development, all of the really big performance improvements resulted from changes to algorithms; study performance early in a project
47 ;;Quote: developers should document their assumptions and write tests for these assumptions; catch changing conditions or inappropriate reuse
47 ;;Quote: eliminate unneeded or unappreciated work; only the end state matters
48 ;;Quote: layered abstractions increase the stack data cache footprint, TLB misses, and function call overhead; too many arguments; spectacularly deep call stacks
48 ;;Quote: use work queues instead of thread per connection or thread per work unit (see the work-queue sketch after the quotation list)
48+;;Quote: keep number of threads near the number of CPUs
49 ;;Quote: use randomness to avoid hot-spotting of cache lines and TLB entries; static patterns often interfere with some applications' performance
49 ;;Quote: use a special pause thread if reads are much more frequent than writes; each read thread need only prevent its own preemption
50 ;;Quote: false sharing when different CPUs accidentally share the same cache line (see the padding sketch after the quotation list)
50 ;;Quote: for short reads, simple mutex often better than a reader-writer lock
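
The two quotations from p. 48 describe replacing a thread-per-connection or thread-per-work-unit design with a small pool of threads, roughly one per CPU, that pull units of work off a shared queue. Below is a minimal sketch of that structure assuming POSIX threads; the names (submit, worker, start_pool) and the unbounded FIFO are illustrative, not taken from the article.

  /* Work-queue sketch: a fixed pool of about one thread per CPU services
     a shared FIFO of work items, instead of one thread per connection. */
  #include <pthread.h>
  #include <stdlib.h>
  #include <unistd.h>

  struct work {
      struct work *next;
      void (*fn)(void *);          /* the unit of work */
      void *arg;
  };

  static struct work *head, *tail;                 /* shared FIFO */
  static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  qwait = PTHREAD_COND_INITIALIZER;

  /* Producer side: enqueue one work item and wake a worker. */
  void submit(void (*fn)(void *), void *arg)
  {
      struct work *w = malloc(sizeof *w);
      if (w == NULL) return;                       /* sketch: drop on failure */
      w->fn = fn; w->arg = arg; w->next = NULL;
      pthread_mutex_lock(&qlock);
      if (tail) tail->next = w; else head = w;
      tail = w;
      pthread_cond_signal(&qwait);
      pthread_mutex_unlock(&qlock);
  }

  /* Worker: block until work is queued, dequeue it, run it. */
  static void *worker(void *unused)
  {
      (void)unused;
      for (;;) {
          pthread_mutex_lock(&qlock);
          while (head == NULL)
              pthread_cond_wait(&qwait, &qlock);
          struct work *w = head;
          head = w->next;
          if (head == NULL) tail = NULL;
          pthread_mutex_unlock(&qlock);
          w->fn(w->arg);
          free(w);
      }
      return NULL;
  }

  /* Start about one worker per CPU rather than one per connection. */
  void start_pool(void)
  {
      long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
      if (ncpu < 1) ncpu = 1;
      for (long i = 0; i < ncpu; i++) {
          pthread_t t;
          if (pthread_create(&t, NULL, worker, NULL) == 0)
              pthread_detach(t);
      }
  }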

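The p. 50 quotation on false sharing refers to logically independent data that happens to occupy the same cache line, so writes on different CPUs keep stealing the line from each other. A common remedy is to pad per-thread data out to a full cache line; the sketch below assumes 64-byte lines and a GCC-style alignment attribute, neither of which comes from the article.

  /* Padding sketch: give each thread's counter its own cache line so that
     updates on one CPU do not invalidate the lines of the others. */
  #define CACHE_LINE 64

  struct padded_counter {
      volatile long count;
      char pad[CACHE_LINE - sizeof(long)];   /* fill out the rest of the line */
  } __attribute__((aligned(CACHE_LINE)));

  /* One counter per worker thread; packed tightly, several counters would
     share one line and every increment would ping-pong it between CPUs. */
  static struct padded_counter counters[64];

  void note_event(int thread_id)
  {
      counters[thread_id].count++;           /* touches only this thread's line */
  }
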
Related Topics

Topic: efficiency (96 items)
Topic: consistency testing (60 items)
Topic: abstraction in programming (67 items)
Topic: decomposition of a system into levels (49 items)
Topic: models of parallel computation (33 items)
Topic: message queues for communication (36 items)
Group: parallel processing   (41 topics, 1125 quotes)
Topic: memory cache (29 items)
Topic: probabilistic and randomized algorithms (11 items)
Topic: lock-free concurrency (8 items)
Topic: concurrency control by monitors (24 items)

Collected barberCB 10/06
Copyright © 2002-2008 by C. Bradford Barber. All rights reserved.
Thesa is a trademark of C. Bradford Barber.