Performance tuning

    • Native library indices are created per knn_vector field / (Lucene) segment pair.
    • Queries execute on segments sequentially inside the shard (same as any other OpenSearch query).
    • The coordinating node selects the final ‘size’ number of neighbors from the results returned by each shard.

    This topic also provides recommendations for comparing approximate k-NN to exact k-NN with score script.

    Take the following steps to improve indexing performance, especially when you plan to index a large number of vectors at once:

    • Disable the refresh interval

      Either disable the refresh interval (default = 1 sec), or set a long duration for the refresh interval to avoid creating multiple small segments:
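      For example, assuming an index named my-knn-index (a placeholder name), you can disable the refresh interval through the index settings API:

      ```json
      PUT /my-knn-index/_settings
      {
        "index" : {
          "refresh_interval" : "-1"
        }
      }
      ```

      After indexing finishes, set refresh_interval back to a regular value, such as "1s".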

      Note: Make sure to re-enable the refresh interval after indexing finishes.

    • Disable replicas (no OpenSearch replica shard)

      Set the number of replicas to 0 to prevent duplicate construction of native library indices in both primary and replica shards. When you enable replicas after indexing finishes, the serialized native library indices are copied directly. If you have no replicas, losing a node can cause data loss, so make sure the data is stored elsewhere so that this initial load can be retried if a node fails.
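      As a sketch, assuming a placeholder index name my-knn-index, you can disable replicas with the index settings API, and raise the value again after indexing finishes:

      ```json
      PUT /my-knn-index/_settings
      {
        "index" : {
          "number_of_replicas" : 0
        }
      }
      ```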

    • Increase the number of indexing threads

      Because native library index construction is costly, using multiple indexing threads increases CPU load. Monitor CPU utilization and choose an appropriate number of threads.
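      The number of threads used for native library index construction is controlled by the knn.algo_param.index_thread_qty cluster setting. For example (the value 4 is only an illustration; tune it for your hardware):

      ```json
      PUT /_cluster/settings
      {
        "persistent" : {
          "knn.algo_param.index_thread_qty" : 4
        }
      }
      ```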

    Take the following steps to improve search performance:

    • Reduce segment count

      Keep the number of segments under control. Lucene’s IndexSearcher searches over all of the segments in a shard to find the ‘size’ best results.

      Ideally, having one segment per shard provides the optimal performance with respect to search latency. You can configure an index to have multiple shards to avoid giant shards and achieve more parallelism.

      You can control the number of segments either by increasing the refresh interval or by disabling it during indexing so that OpenSearch creates fewer, larger segments.
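      If an index is no longer receiving writes, you can also merge its segments down explicitly using the force merge API (my-knn-index is a placeholder name). Note that force merging is an expensive operation, so it is best run during off-peak hours:

      ```json
      POST /my-knn-index/_forcemerge?max_num_segments=1
      ```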

    • Warm up the index

      Native library indices are constructed during indexing, but they’re loaded into memory during the first search. In Lucene, each segment is searched sequentially, so for k-NN each segment returns up to k nearest neighbors of the query point. The shard then returns the top ‘size’ results, ranked by score, from all the results returned by its segments (a higher score indicates a better result).

      To avoid this latency penalty during your first queries, you can use the warmup API operation on the indices you want to search:
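      For example, a warmup request for the placeholder indices index1, index2, and index3 looks like the following and returns a response containing shard counts:

      ```json
      GET /_plugins/_knn/warmup/index1,index2,index3?pretty
      ```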

      1. "_shards" : {
      2. "total" : 6,
      3. "failed" : 0
      4. }

      The warmup API operation loads all native library indices for all shards (primary and replica) for the specified indices into the cache, so there’s no penalty to load native library indices during initial searches.

      Note: This API operation only loads the segments of the indices it sees into the cache. If a merge or refresh operation finishes after the API runs, or if you add new documents, you need to rerun the API to load those native library indices into memory.

    • Avoid reading stored fields

      If your use case is simply to read the IDs and scores of the nearest neighbors, you can disable reading stored fields, which saves time retrieving the vectors from stored fields.
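      As a sketch, assuming a knn_vector field named my_vector (the index name, vector values, and k are placeholders), you can skip stored fields by setting _source to false in the search request:

      ```json
      GET /my-knn-index/_search
      {
        "_source" : false,
        "size" : 10,
        "query" : {
          "knn" : {
            "my_vector" : {
              "vector" : [1.5, 2.5, 3.5],
              "k" : 10
            }
          }
        }
      }
      ```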

    Recall depends on multiple factors, such as the number of vectors, the number of dimensions, and the number of segments. Searching over a large number of small segments and aggregating the results produces better recall than searching over a small number of large segments and aggregating results. The larger the native library index, the more likely you are to lose recall when using smaller algorithm parameters. Choosing larger values for the algorithm parameters helps, but sacrifices search latency and indexing time. It’s therefore important to understand your system’s latency and accuracy requirements, and then choose the number of segments for your index based on experimentation.

    The default parameters work for a broad range of use cases, but make sure to run your own experiments on your data sets and choose appropriate values. For index-level settings, see Index settings.

    The standard k-NN query and the custom scoring option perform differently. Test with a representative set of documents to see whether the search results and latencies match your expectations.
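    For comparison, an exact k-NN search uses a script_score query with the knn_score script. A minimal sketch, assuming a placeholder index my-knn-index with a knn_vector field my_vector and the l2 space type:

    ```json
    GET /my-knn-index/_search
    {
      "size" : 10,
      "query" : {
        "script_score" : {
          "query" : { "match_all" : {} },
          "script" : {
            "source" : "knn_score",
            "lang" : "knn",
            "params" : {
              "field" : "my_vector",
              "query_value" : [1.5, 2.5, 3.5],
              "space_type" : "l2"
            }
          }
        }
      }
    }
    ```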