What is it about?

We introduce novel randomized dimensionality-reduction techniques for approximate nearest neighbor (ANN) search among points in high-dimensional Euclidean spaces. Our methods avoid the curse of dimensionality and use near-optimal storage.
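To give a flavor of the general approach, here is a minimal sketch of randomized dimensionality reduction via a random Gaussian projection (in the spirit of Johnson–Lindenstrauss), followed by nearest neighbor search in the reduced space. This is an illustrative toy, not the paper's exact embedding; all function names and parameters below are made up for this example.

```python
import math
import random

def random_projection(points, k, seed=0):
    """Project d-dimensional points down to k dimensions using a random
    Gaussian matrix scaled by 1/sqrt(k) (Johnson-Lindenstrauss style).
    Illustrative sketch only -- not the paper's exact embedding."""
    rng = random.Random(seed)
    d = len(points[0])
    # k x d matrix with independent N(0, 1) entries
    G = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]
    scale = 1.0 / math.sqrt(k)
    return [
        [scale * sum(g[j] * p[j] for j in range(d)) for g in G]
        for p in points
    ]

def nearest(query, points):
    """Index of the point closest to query in Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(points)), key=lambda i: dist2(points[i], query))

# Toy usage: project 200-dimensional data to 20 dimensions, then search
# in the reduced space. The query is a copy of data point 7, and since
# the projection is linear and deterministic given the seed, its image
# coincides with that point's image.
rng = random.Random(1)
data = [[rng.random() for _ in range(200)] for _ in range(50)]
query = data[7][:]
proj = random_projection(data + [query], k=20)
proj_data, proj_query = proj[:-1], proj[-1]
print(nearest(proj_query, proj_data))  # prints 7
```

In practice the point of reducing the dimension first is that the search then operates on much shorter vectors, which is what keeps both query time and storage low.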


Why is it important?

Our methods have slightly worse complexity than state-of-the-art methods such as Locality-Sensitive Hashing (LSH), but they use less memory than LSH. Moreover, they are quite easy to program, as illustrated by our publicly available implementation.


Our techniques extend to other metrics and to inputs with structure (such as bounded doubling dimension). Several follow-up papers, co-authored with I. Psarros and other members of our team, have been published since.

Ioannis Emiris
National and Kapodistrian University of Athens


This page is a summary of: Randomized Embeddings with Slack and High-Dimensional Approximate Nearest Neighbor, ACM Transactions on Algorithms, June 2018, ACM (Association for Computing Machinery). DOI: 10.1145/3178540.


