What is it about?

This paper presents an efficient index structure optimized for emerging persistent memory devices. Prior research focused primarily on reducing the expensive memory writes needed to persist data. We instead find that in tree-based indexes, the read-intensive tree traversal dominates the overall latency. We therefore design a more efficient index structure that blends conventional tree-based and hash-based approaches to better balance read and write overheads.
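As a rough illustration of the tree-plus-hash idea, the sketch below pairs a sorted component (standing in for a tree, which supports range scans) with a hash table (which answers point lookups in one probe, avoiding a root-to-leaf traversal full of random reads). This is a simplified toy, not the paper's actual data structure; the class and method names are invented for illustration.

```python
import bisect

class HybridIndex:
    """Toy hybrid index: sorted keys for range scans, hash table for point reads."""

    def __init__(self):
        self._keys = []    # sorted key list: stands in for the tree component
        self._table = {}   # hash table: key -> value, used for point lookups

    def insert(self, key, value):
        # Keep the sorted component and the hash table consistent.
        if key not in self._table:
            bisect.insort(self._keys, key)
        self._table[key] = value

    def get(self, key):
        # Point lookup hits the hash table directly: one probe instead of
        # a read-intensive tree traversal.
        return self._table.get(key)

    def range(self, lo, hi):
        # Range scan uses the sorted component, which hashing alone cannot do.
        i = bisect.bisect_left(self._keys, lo)
        j = bisect.bisect_right(self._keys, hi)
        return [(k, self._table[k]) for k in self._keys[i:j]]
```

In this sketch, point queries take the cheap hash path while range queries fall back to the ordered structure, mirroring the balance between read and write overheads described above.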


Why is it important?

Modern data-intensive applications are moving to emerging persistent memory technologies for their large capacity and non-volatility. Sustaining high performance therefore requires porting key data structures to persistent memory. One such example is the index structure used to support efficient data search and update queries. A variety of recent designs propose index structures specifically optimized for persistent memory. This paper evaluates those prior designs on recently available real persistent memory devices, identifies the actual bottlenecks, and proposes a more efficient design.

Perspectives

It was a pleasure to work with my long-standing collaborators as well as my advisors. I also hope this article brings some insights that help put persistent memory into practice.

Ke Wang
Ant Group

Read the Original

This page is a summary of: When Tree Meets Hash: Reducing Random Reads for Index Structures on Persistent Memories, Proceedings of the ACM on Management of Data, May 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3588959.
