Exact k-NN with scoring script

    Because the score script approach executes a brute force search, it doesn’t scale as well as the approximate approach. In some cases, it might be better to think about refactoring your workflow or index structure to use the approximate approach instead of the score script approach.

    Similar to approximate nearest neighbor search, in order to use the score script on a body of vectors, you must first create an index with one or more knn_vector fields.

    If you intend to only use the score script approach (and not the approximate approach), you can set index.knn to false and not set index.knn.space_type. You can choose the space type during search. See the spaces section for the spaces the k-NN score script supports.

    This example creates an index with two knn_vector fields:
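
    The mapping below is a minimal sketch: the field names and dimensions (2 for my_vector1, 4 for my_vector2) are taken from the bulk example that follows, and "index.knn": true is included because the note below refers to omitting it.

    PUT my-knn-index-1
    {
      "settings": {
        "index": {
          "knn": true
        }
      },
      "mappings": {
        "properties": {
          "my_vector1": {
            "type": "knn_vector",
            "dimension": 2
          },
          "my_vector2": {
            "type": "knn_vector",
            "dimension": 4
          }
        }
      }
    }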

    If you only want to use the score script, you can omit "index.knn": true. The benefit of this approach is faster indexing speed and lower memory usage, but you lose the ability to perform standard k-NN queries on the index.

    After you create the index, you can add some data to it:

    POST _bulk
    { "index": { "_index": "my-knn-index-1", "_id": "1" } }
    { "my_vector1": [1.5, 2.5], "price": 12.2 }
    { "index": { "_index": "my-knn-index-1", "_id": "2" } }
    { "my_vector1": [2.5, 3.5], "price": 7.1 }
    { "index": { "_index": "my-knn-index-1", "_id": "3" } }
    { "my_vector1": [3.5, 4.5], "price": 12.9 }
    { "index": { "_index": "my-knn-index-1", "_id": "4" } }
    { "my_vector1": [5.5, 6.5], "price": 1.2 }
    { "index": { "_index": "my-knn-index-1", "_id": "5" } }
    { "my_vector1": [4.5, 5.5], "price": 3.7 }
    { "index": { "_index": "my-knn-index-1", "_id": "6" } }
    { "my_vector2": [1.5, 5.5, 4.5, 6.4], "price": 10.3 }
    { "index": { "_index": "my-knn-index-1", "_id": "7" } }
    { "my_vector2": [2.5, 3.5, 5.6, 6.7], "price": 5.5 }
    { "index": { "_index": "my-knn-index-1", "_id": "8" } }
    { "my_vector2": [4.5, 5.5, 6.7, 3.7], "price": 4.4 }
    { "index": { "_index": "my-knn-index-1", "_id": "9" } }
    { "my_vector2": [1.5, 5.5, 4.5, 6.4], "price": 8.9 }

    Then you can execute a search on the data using the k-NN score script:

    GET my-knn-index-1/_search
    {
      "size": 4,
      "query": {
        "script_score": {
          "query": {
            "match_all": {}
          },
          "script": {
            "source": "knn_score",
            "lang": "knn",
            "params": {
              "field": "my_vector2",
              "query_value": [2.0, 3.0, 5.0, 6.0],
              "space_type": "cosinesimil"
            }
          }
        }
      }
    }

    All parameters are required.

    • lang is the script type. This value is usually painless, but here you must specify knn.
    • source is the name of the script, knn_score.

      This script is part of the k-NN plugin and isn’t available at the standard _scripts path. A GET request to _cluster/state/metadata doesn’t return it, either.

    • field is the field that contains your vector data.

    • query_value is the point you want to find the nearest neighbors for. For the Euclidean and cosine similarity spaces, the value must be an array of floats that matches the dimension set in the field’s mapping. For Hamming bit distance, this value can be either of type signed long or a base64-encoded string (for the long and binary field types, respectively).
    • space_type corresponds to the distance function. See the spaces section.

    The post filter example in the approximate approach shows a search that returns fewer than k results. If you want to avoid this situation, the score script method lets you essentially invert the order of events. In other words, you can filter down the set of documents over which to execute the k-nearest neighbor search.

    This example shows a pre-filter approach to k-NN search with the score script approach. First, create the index:
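
    The mapping here is a minimal sketch: the 2-dimensional my_vector field matches the sample vectors below, color is assumed to be a keyword field so that the term filter in the search matches the uppercase values exactly, and index.knn is omitted because this example only uses the score script.

    PUT my-knn-index-2
    {
      "mappings": {
        "properties": {
          "my_vector": {
            "type": "knn_vector",
            "dimension": 2
          },
          "color": {
            "type": "keyword"
          }
        }
      }
    }

    Then add some documents: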

    POST _bulk
    { "index": { "_index": "my-knn-index-2", "_id": "1" } }
    { "my_vector": [1, 1], "color" : "RED" }
    { "index": { "_index": "my-knn-index-2", "_id": "2" } }
    { "my_vector": [2, 2], "color" : "RED" }
    { "index": { "_index": "my-knn-index-2", "_id": "3" } }
    { "my_vector": [3, 3], "color" : "RED" }
    { "index": { "_index": "my-knn-index-2", "_id": "4" } }
    { "my_vector": [10, 10], "color" : "BLUE" }
    { "index": { "_index": "my-knn-index-2", "_id": "5" } }
    { "my_vector": [20, 20], "color" : "BLUE" }
    { "index": { "_index": "my-knn-index-2", "_id": "6" } }
    { "my_vector": [30, 30], "color" : "BLUE" }

    Finally, use the script_score query to pre-filter your documents before identifying nearest neighbors:

    GET my-knn-index-2/_search
    {
      "size": 2,
      "query": {
        "script_score": {
          "query": {
            "bool": {
              "filter": {
                "term": {
                  "color": "BLUE"
                }
              }
            }
          },
          "script": {
            "lang": "knn",
            "source": "knn_score",
            "params": {
              "field": "my_vector",
              "query_value": [9.9, 9.9],
              "space_type": "l2"
            }
          }
        }
      }
    }

    The k-NN score script also allows you to run k-NN search on your binary data with the Hamming distance space. In order to use Hamming distance, the field of interest must have either a binary or long field type. If you're using the binary field type, the data must be a base64-encoded string.

    This example shows how to use the Hamming distance space with a binary field type:
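
    The mapping for my-index isn't shown above; this is a minimal sketch, assuming the field names used in the documents below, with doc_values enabled on the binary field so the score script can read the stored data:

    PUT my-index
    {
      "mappings": {
        "properties": {
          "my_binary": {
            "type": "binary",
            "doc_values": true
          },
          "color": {
            "type": "keyword"
          }
        }
      }
    }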

    Then add some documents:

    POST _bulk
    { "index": { "_index": "my-index", "_id": "1" } }
    { "my_binary": "SGVsbG8gV29ybGQh", "color" : "RED" }
    { "index": { "_index": "my-index", "_id": "2" } }
    { "my_binary": "ay1OTiBjdXN0b20gc2NvcmluZyE=", "color" : "RED" }
    { "index": { "_index": "my-index", "_id": "3" } }
    { "my_binary": "V2VsY29tZSB0byBrLU5O", "color" : "RED" }
    { "index": { "_index": "my-index", "_id": "4" } }
    { "my_binary": "SSBob3BlIHRoaXMgaXMgaGVscGZ1bA==", "color" : "BLUE" }
    { "index": { "_index": "my-index", "_id": "5" } }
    { "my_binary": "QSBjb3VwbGUgbW9yZSBkb2NzLi4u", "color" : "BLUE" }
    { "index": { "_index": "my-index", "_id": "6" } }
    { "my_binary": "TGFzdCBvbmUh", "color" : "BLUE" }

    Finally, use the script_score query to pre-filter your documents before identifying nearest neighbors:

    GET my-index/_search
    {
      "size": 2,
      "query": {
        "script_score": {
          "query": {
            "bool": {
              "filter": {
                "term": {
                  "color": "BLUE"
                }
              }
            }
          },
          "script": {
            "lang": "knn",
            "source": "knn_score",
            "params": {
              "field": "my_binary",
              "query_value": "U29tZXRoaW5nIEltIGxvb2tpbmcgZm9y",
              "space_type": "hammingbit"
            }
          }
        }
      }
    }

    Similarly, you can encode your data with the long field type and run a search:
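
    The index and field names in this sketch are hypothetical (my-long-index and my_long aren't defined earlier in this section); the point is that query_value is passed as a signed long rather than a base64-encoded string:

    GET my-long-index/_search
    {
      "size": 2,
      "query": {
        "script_score": {
          "query": {
            "match_all": {}
          },
          "script": {
            "lang": "knn",
            "source": "knn_score",
            "params": {
              "field": "my_long",
              "query_value": 23,
              "space_type": "hammingbit"
            }
          }
        }
      }
    }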

    Cosine similarity returns a number between -1 and 1, and because OpenSearch relevance scores can’t be below 0, the k-NN plugin adds 1 to get the final score.
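
    Expressed as a formula (cosine_similarity here is just shorthand for the cosine similarity between query_value and the stored vector):

        score = 1 + cosine_similarity(query_value, document_vector)

    For example, a document vector pointing in the same direction as the query scores 2.0, an orthogonal vector scores 1.0, and an opposite vector scores 0.0.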