Reducing DRAM latencies with an integrated memory hierarchy design

Wei-fen Lin, Steven K. Reinhardt, Doug Burger

Research output: Contribution to conference › Paper › peer-review

127 Citations (Scopus)

Abstract

In this paper, we address the severe performance gap caused by high processor clock rates and slow DRAM accesses. We show that even with an aggressive, next-generation memory system using four Direct Rambus channels and an integrated one-megabyte level-two cache, a processor still spends over half of its time stalling for L2 misses. Large cache blocks can improve performance, but only when coupled with wide memory channels. DRAM address mappings also affect performance significantly. We evaluate an aggressive prefetch unit integrated with the L2 cache and memory controllers. By issuing prefetches only when the Rambus channels are idle, prioritizing them to maximize DRAM row buffer hits, and giving them low replacement priority, we achieve a 43% speedup across 10 of the 26 SPEC2000 benchmarks, without degrading performance on the others. With eight Rambus channels, these ten benchmarks improve to within 10% of the performance of a perfect L2 cache.
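The prefetch policy summarized in the abstract can be pictured with a short sketch. The Python snippet below is an illustrative outline only, not the authors' simulator: the names (Prefetch, Channel, L2Cache, schedule_prefetches) and their interfaces are hypothetical, and the snippet shows just the three heuristics the abstract names, namely issuing prefetches only on idle Rambus channels, preferring prefetches that hit the open DRAM row buffer, and inserting prefetched blocks with low replacement priority.

```python
from collections import namedtuple

# Hypothetical illustration of the scheduled-prefetch heuristics described
# in the abstract; names and structure are assumptions, not the paper's code.

Prefetch = namedtuple("Prefetch", ["addr", "row"])

class Channel:
    """Toy model of one Rambus channel with a single open-row register."""
    def __init__(self):
        self.busy = False      # True while servicing a demand miss
        self.open_row = None   # currently open DRAM row (row-buffer state)

    def read(self, addr):
        # Placeholder DRAM access; a real model would charge row-activation
        # latency on a miss and a shorter latency on a row-buffer hit.
        return addr

class L2Cache:
    """Toy L2 that only records how each block was installed."""
    def __init__(self):
        self.blocks = {}

    def insert(self, addr, low_priority=False):
        # Low-priority (prefetched) blocks would be the first eviction
        # candidates in their set, so useless prefetches do little harm.
        self.blocks[addr] = "prefetch" if low_priority else "demand"

def schedule_prefetches(channels, prefetch_queue, l2):
    for ch in channels:
        # Heuristic 1: issue prefetches only while the channel is idle,
        # so demand misses are never queued behind prefetch traffic.
        if ch.busy or not prefetch_queue:
            continue

        # Heuristic 2: prefer prefetches that hit the open row buffer,
        # which makes the extra DRAM accesses cheap.
        hits = [p for p in prefetch_queue if p.row == ch.open_row]
        candidate = hits[0] if hits else prefetch_queue[0]
        prefetch_queue.remove(candidate)
        ch.open_row = candidate.row

        # Heuristic 3: install the block with low replacement priority.
        l2.insert(ch.read(candidate.addr), low_priority=True)
```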

Original language: English
Pages: 301-312
Number of pages: 12
Publication status: Published - 2001
Event: 7th International Symposium on High-Performance Computer Architecture - Nuevo Leon, Mexico
Duration: 2000 Oct 20 - 2000 Oct 24

Conference

Conference: 7th International Symposium on High-Performance Computer Architecture
City: Nuevo Leon, Mexico
Period: 00-10-20 - 00-10-24

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture
