
Topic: memory cache


Summary

A memory cache is high-speed memory that holds data and instructions from main memory. Memory caches are increasingly important as their capacity and performance improve.

Memory caches have become nearly as important as disk caches. Algorithms and data structures need to be aware of the high cost of accessing main memory. (cbb 11/07)

Subtopic: cache-aware algorithms

Quote: use contiguous, pointer-free storage for in-memory data structures; random access is expensive, like disks [»zobeJ11_2005]
Quote: for static tree structure, cluster subtrees for efficient cache access [»chilTM12_2000]
Quote: reduce the hot working set by splitting structures into hot, frequently accessed portions and cold portions [»chilTM12_2000]
Quote: cache-conscious pointer structures by rearrangement, allocation policy, and structure layout [»chilTM12_2000]
Quote: how to optimize programs for cache memory; unless the hit rate is near 98%, programs run at memory speed instead of processor speed [»searCB10_2000]
Quote: memory latency is almost 1000 instructions; may be faster to recalculate than to fetch [»pattDA10_2004]
Quote: R-MERGE is a cache-conscious sorting algorithm; uses mergesort and quicksort to avoid memory stalls; 10% to 36% speedup [»argeL9_2000]
Quote: algorithms should reduce memory stalls as well as instructions; sequential access is faster than random access; using registers wins on both counts [»argeL9_2000]
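The contrast these quotes draw between sequential and random access can be sketched in C. The two traversals below compute the same sum, but the array version streams through contiguous, pointer-free storage (prefetcher-friendly), while the list version chases pointers and risks a cache miss on every hop. Names and structure are illustrative, not from the cited papers.

```c
#include <stddef.h>

typedef struct node {
    int value;
    struct node *next;   /* pointer chasing: each hop may miss the cache */
} node;

/* Random-access pattern: the prefetcher cannot predict n->next. */
long sum_list(const node *n) {
    long s = 0;
    for (; n != NULL; n = n->next)
        s += n->value;
    return s;
}

/* Sequential pattern: contiguous, pointer-free storage streams through
 * the cache; hardware prefetching hides most of the memory latency. */
long sum_array(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}
```

Both run in O(n) instructions, yet on large inputs the array version typically wins by a wide margin because it pays far fewer memory stalls.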

Subtopic: cache-aware memory allocation

Quote: used class splitting and cache-conscious garbage collection; 20% faster and 29-43% fewer L2 cache misses [»chilTM12_2000]
Quote: 27% speedup by cache-conscious heap allocator; takes an additional parameter identifying a contemporaneously accessed data element [»chilTM12_2000]
Quote: automatically inline single-reference objects; 25% fewer cache misses; 14% faster programs [»dolbJ6_2000]
Quote: for cache utilization and bus balance, each slab allocates buffers at slightly different offsets [»bonwJ6_1994]
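The slab-offset idea in the last quote can be sketched as a small offset calculation: successive slabs start their buffers at slightly different "colors" so buffers from different slabs do not all map to the same cache lines. The formula and the 64-byte line size are illustrative assumptions, not Bonwick's exact code.

```c
#include <stddef.h>

#define CACHE_LINE 64u   /* assumed cache line size */

/* Unused space at the end of a slab determines how many distinct
 * colors (starting offsets) are available; slabs cycle through them. */
size_t slab_color(unsigned slab_index, size_t slab_size,
                  size_t buf_size, unsigned nbufs) {
    size_t leftover = slab_size - (size_t)nbufs * buf_size;
    size_t ncolors  = leftover / CACHE_LINE + 1;
    return (slab_index % ncolors) * CACHE_LINE;
}
```

For example, a 4096-byte slab holding eight 500-byte buffers has 96 leftover bytes, so alternating slabs start at offsets 0 and 64, spreading buffer addresses across cache lines and bus interleaves.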

Subtopic: spatial and temporal locality

Quote: contiguous allocation important for cache performance of garbage collection; generational, copying collection is best [»blacSM6_2004]
Quote: generational collectors provide good spatial locality for young objects; mature objects have good temporal locality
Quote: nursery size for generational collectors should be well above the L2 cache size [»blacSM6_2004]

Subtopic: adaptive cache

Quote: the adaptive replacement cache balances recency and frequency; ignores one-time sequential requests and low temporal locality; outperforms LRU on most traces and cache sizes [»megiN4_2004]

Subtopic: cache optimization

Quote: improve cache performance with compression, clustering, and coloring; i.e., packing cache blocks, grouping blocks, and reducing cache conflicts [»chilTM12_2000]
Quote: improve cache performance with locality grouping, dynamic data packing, pointer updates, and array alignment; up to 5x faster [»dingC5_1999]
Quote: use run-time data maps and data indirections to ensure correctness of run-time data transformations
Quote: use coloring to map frequently accessed elements to non-conflicting cache blocks [»chilTM12_2000]
Quote: improve cache performance via copying garbage collector, object-oriented languages, and octree coloring; 42% speedup [»chilTM12_2000]
Quote: improved prediction of cache behavior by analyzing calling contexts and separating first loop invocation from successive invocations [»martF3_1998]
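One of the structure-layout transformations behind these quotes, hot/cold splitting, can be sketched in C: fields touched on every lookup stay in a small hot record, while rarely used fields move behind a pointer, so more hot records fit per cache line. The field names and sizes are invented for the example.

```c
#include <stddef.h>

/* Before: one 72-byte record; a 64-byte cache line holds less than one. */
struct employee_unsplit {
    int  id;            /* hot: checked on every search */
    int  department;    /* hot */
    char name[48];      /* cold: read only on a hit */
    char phone[16];     /* cold */
};

/* After: the hot part is 16 bytes, so one line holds four records. */
struct employee_cold { char name[48]; char phone[16]; };
struct employee_hot  {
    int id;
    int department;
    struct employee_cold *cold;   /* fetched only when actually needed */
};

/* A linear search now touches only hot cache lines until it matches. */
int find_department(const struct employee_hot *tab, size_t n, int id) {
    for (size_t i = 0; i < n; i++)
        if (tab[i].id == id)
            return tab[i].department;
    return -1;
}
```

This is the same reduction of the hot working set described in the cache-aware algorithms subtopic above: the cold portion no longer dilutes the cache.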

Subtopic: hot spots

Quote: use randomness to avoid hot-spotting of cache lines and TLB entries; static patterns often interfere with some application's performance [»smaaB2_2006]

Subtopic: false sharing

Quote: false sharing occurs when different CPUs accidentally share the same cache line [»smaaB2_2006]
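The standard remedy is to pad per-CPU data out to cache-line size so that independent counters never share a line. A minimal sketch, assuming a 64-byte line:

```c
#define CACHE_LINE 64   /* assumed cache line size */

/* Pad each counter to a full line so increments on different CPUs
 * do not invalidate each other's cached copies (false sharing). */
struct padded_counter {
    long value;
    char pad[CACHE_LINE - sizeof(long)];
};

static struct padded_counter counters[4];   /* one per CPU */

void increment(int cpu) {
    counters[cpu].value++;
}
```

Without the padding, four adjacent `long` counters would occupy one line, and every increment would bounce that line between CPUs even though no data is logically shared.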

Subtopic: memory consistency

Quote: hardware memory consistency should enforce sequential consistency with relaxed write-to-read order; other relaxed models too complicated [»hillMD8_1998]
Quote: split cache consistency for load/store operations; use fence operations to define synchronization regions; commit/reconcile/fence (CRF) memory model [»maesJW10_2000]
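Fence-delimited publication can be illustrated with C11 atomics (this is the standard `<stdatomic.h>` API, not the CRF model itself; the producer/consumer names are invented). Relaxed stores and loads are separated by explicit fences that mark the synchronization boundary, loosely analogous to CRF's commit and reconcile operations.

```c
#include <stdatomic.h>

static atomic_int data  = 0;
static atomic_int ready = 0;

void producer(void) {
    atomic_store_explicit(&data, 42, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);   /* publish prior writes */
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

int consumer(void) {
    if (atomic_load_explicit(&ready, memory_order_relaxed)) {
        atomic_thread_fence(memory_order_acquire);   /* see published writes */
        return atomic_load_explicit(&data, memory_order_relaxed);
    }
    return -1;   /* not yet published */
}
```

If the consumer observes `ready == 1`, the paired fences guarantee it also observes `data == 42`, even on hardware with relaxed write ordering.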

Subtopic: parallel cache optimization

Quote: optimal parallel execution if memory is randomly distributed among the physical processors and there are log p times as many virtual processors as physical ones; e.g., use hashing [»valiLG8_1990]
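The hashing step can be sketched as a one-line address-to-processor map: a multiplicative hash spreads addresses pseudo-randomly across the p processors so no single processor becomes a memory hot spot. The multiplier is an arbitrary Fibonacci-hashing constant, not taken from the paper.

```c
#include <stdint.h>

/* Map a memory address to one of p physical processors by hashing,
 * so consecutive addresses land on (pseudo-)random processors. */
unsigned home_processor(uint64_t addr, unsigned p) {
    uint64_t h = addr * 0x9E3779B97F4A7C15ull;  /* mix the address bits */
    return (unsigned)((h >> 32) % p);
}
```

Sequential address runs, which would otherwise flood one memory module, are scattered across all p modules by the hash.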

Subtopic: stack as registers

Quote: can cache top of stack in high speed registers; for locals, temporaries, and parameters [»ditzDR3_1982]
Quote: caching the top of stack removes register allocation but worsens context switching [»ditzDR3_1982, OK]

Related Topics

Group: memory management   (11 topics, 367 quotes)

Topic: associative memory (5 items)
Topic: computer architecture (46 items)
Topic: code optimization (54 items)
Topic: computer performance (14 items)
Topic: data caching (35 items)
Topic: efficiency (96 items)
Topic: file cache (23 items)
Topic: managing shared memory (74 items)
Topic: memory management by working sets (18 items)
Topic: register allocation (28 items)


Updated barberCB 1/06
Copyright © 2002-2008 by C. Bradford Barber. All rights reserved.
Thesa is a trademark of C. Bradford Barber.