Title: CONTROLLING CACHE PARTITION SIZES TO OPTIMIZE APPLICATION RELIABILITY
Single-chip multiprocessors (CMPs) are commonly used today due to their better overall performance
compared to uniprocessor architectures. While CMPs are beneficial in terms of performance, they
also come with their own share of problems. One of these problems stems from how shared cache
memory is organized and utilized in a CMP, together with the employed eviction policies; this
problem is known as cache contention.
Cache contention occurs as a consequence of unconstrained usage of shared cache memory. As
the cores in a CMP all have equal access to shared cache memory, they implicitly compete for the
available capacity. If cache contention is left unmanaged, process performance can be negatively
affected, as the Least Recently Used (LRU) eviction policy may evict stored data indiscriminately.
This performance impact becomes especially troublesome when the executing processes are
time-critical, since the resulting cache misses increase execution time.
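The indiscriminate eviction described above can be illustrated with a minimal sketch: an LRU-managed cache model shared by two cores, where one core's accesses evict the other core's data. The capacity, keys, and core IDs are made-up values for illustration, not part of the study.

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of an LRU-managed shared cache: on overflow, the least
    recently used entry is evicted regardless of which core stored it."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> owning core, oldest first

    def access(self, key, core):
        if key in self.entries:
            self.entries.move_to_end(key)  # mark as most recently used
            return "hit"
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict LRU entry, any owner
        self.entries[key] = core
        return "miss"
```

For example, with a capacity of two lines, a burst of accesses from one core silently evicts the other core's data, turning what would have been a hit into a miss.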
One solution to the cache contention problem is cache partitioning, which splits cache
memory into independent partitions. This enables processes to execute in isolation from
each other, thereby minimizing the risk of inter-process interference. In this study we intend to
employ cache partitioning combined with cache-coloring rules to address the cache contention
problem. As a result, we intend to investigate whether process reliability can be increased by
controlling cache partitions.
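The cache-coloring idea mentioned above can be sketched as follows: a physical page's "color" identifies the group of cache sets it maps to, so pages of different colors can never conflict in the cache, and assigning each process a disjoint set of colors yields private partitions. The cache parameters below (4 KiB pages, a 2 MiB 16-way last-level cache) are illustrative assumptions, not the configuration used in the study.

```python
PAGE_SIZE = 4096               # assumed 4 KiB physical pages
CACHE_SIZE = 2 * 1024 * 1024   # assumed 2 MiB shared last-level cache
ASSOCIATIVITY = 16             # assumed 16-way set-associative

# Number of distinct colors = (sets * line size) / page size
# = (cache size / associativity) / page size.
NUM_COLORS = CACHE_SIZE // ASSOCIATIVITY // PAGE_SIZE

def page_color(phys_addr):
    """Color of the physical page containing phys_addr: consecutive
    physical pages cycle through the colors, so two pages share a color
    (and can conflict in the cache) only every NUM_COLORS pages."""
    return (phys_addr // PAGE_SIZE) % NUM_COLORS
```

An allocator enforcing cache-coloring rules would then hand each process only physical pages whose color lies in that process's assigned color set, giving it an isolated slice of the shared cache.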
IDT supervisors: Jakob Danielsson