ISM error prevention resolutions


    The index is not the write index

    To confirm that the index is a write index, run the following request:
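
    GET <index>/_alias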

    If the response does not contain "is_write_index" : true, the index is not a write index. The following example confirms that the index is a write index:

    {
      "<index>" : {
        "aliases" : {
          "<index_alias>" : {
            "is_write_index" : true
          }
        }
      }
    }

    To set the index as a write index, run the following request:

    PUT <index>
    {
      "aliases": {
        "<index_alias>" : {
          "is_write_index" : true
        }
      }
    }

    The index does not have an alias

    If the index does not have an alias, you can add one by running the following request:

    POST _aliases
    {
      "actions": [
        {
          "add": {
            "index": "<target_index>",
            "alias": "<index_alias>"
          }
        }
      ]
    }

    Skipping rollover action is true

    If the index is set to skip the rollover action, re-enable rollover by running the following request:

    PUT <target_index>/_settings
    {
      "index": {
        "index_state_management.rollover_skip": false
      }
    }

    Remove the rollover policy from the index to prevent this error from reoccurring.
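
    For example, if the policy is applied through ISM, one way to detach it from the index is the remove policy API:

    POST _plugins/_ism/remove/<target_index>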

    The rollover policy misses rollover_alias index setting

    Add a rollover_alias index setting to the rollover policy to resolve this issue. Run the following request:

    PUT _index_template/ism_rollover
    {
      "index_patterns": ["<index_patterns_in_rollover_policy>"],
      "template": {
        "settings": {
          "plugins.index_state_management.rollover_alias": "<rollover_alias>"
        }
      }
    }

    Data too large and exceeding the threshold

    Check the JVM memory usage and increase the heap memory.
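
    For example, you can check the current JVM heap usage with the nodes stats API:

    GET _nodes/stats/jvm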

    Maximum shards exceeded

    The shard limit per node, or per index, causes this issue to occur. Check whether there is a total_shards_per_node limit by running the following request:

    GET /_cluster/settings

    If the response contains total_shards_per_node, increase its value temporarily by running the following request:

    PUT _cluster/settings
    {
      "transient": {
        "cluster.routing.allocation.total_shards_per_node": 100
      }
    }

    Also check whether the index itself has a total_shards_per_node limit by retrieving its settings (GET <index>/_settings). If the response contains the setting shown in the first example, increase its value or set it to -1 for unlimited shards, as shown in the second example:

    {
      "index" : {
        "total_shards_per_node" : "10"
      }
    }

    PUT <index>/_settings
    {"index.routing.allocation.total_shards_per_node": -1}

    The index is a write index for some data stream

    If you still want to delete the index, check your data stream settings and change the write index.
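
    For example, you can list the data stream's backing indexes and then roll the data stream over so that a new backing index becomes the write index (replace <data_stream> with your data stream name):

    GET _data_stream/<data_stream>

    POST <data_stream>/_rollover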

    The index is blocked

    Generally, the index is blocked because disk usage has exceeded the flood-stage watermark and the index has a read-only-allow-delete block. To resolve this issue, you can:

    1. Remove the index.blocks.read_only_allow_delete parameter.
    2. Temporarily increase the disk watermarks.
    3. Temporarily disable the disk allocation threshold.

    To prevent the issue from reoccurring, reduce disk usage by increasing disk space, adding new nodes, or removing data or indexes that are no longer needed.
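
    For example, you can check each node's current disk usage with the cat allocation API:

    GET _cat/allocation?v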

    Remove index.blocks.read_only_allow_delete by running the following request:

    PUT <index>/_settings
    {
      "index.blocks.read_only_allow_delete": null
    }

    Temporarily increase the disk watermarks by running the following request:

    PUT _cluster/settings
    {
      "transient": {
        "cluster": {
          "routing": {
            "allocation": {
              "disk": {
                "watermark": {
                  "low": "25.0gb"
                }
              }
            }
          }
        }
      }
    }

    Disable the disk allocation threshold by running the following request:
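
    PUT _cluster/settings
    {
      "transient": {
        "cluster.routing.allocation.disk.threshold_enabled": false
      }
    }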