Quickstart
The Docker Compose commands used in this guide are written with a hyphen (for example, docker-compose). If you installed Docker Desktop on your machine, which automatically installs a bundled version of Docker Compose, then you should remove the hyphen. For example, change docker-compose to docker compose.
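If you are not sure which syntax your installation supports, one quick check is to ask each form for its version; whichever form prints a version is available on your machine:
# Compose V2 plugin (bundled with Docker Desktop):
docker compose version
# Standalone Compose V1 binary:
docker-compose version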
You’ll need a special file, called a Compose file, that Docker Compose uses to define and create the containers in your cluster. The OpenSearch Project provides a sample Compose file that you can use to get started. Learn more about working with Compose files by reviewing the official Compose specification.
Before running OpenSearch on your machine, you should disable memory paging and swapping on the host to improve performance and increase the number of memory maps available to OpenSearch. See important system settings for more information.
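On a Linux host, for example, swapping can be turned off for the current session with the following command (making the change permanent depends on your distribution; see the important system settings for the full list of recommended tweaks):
# Disable all swap devices and files until the next reboot
sudo swapoff -a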
Download the sample Compose file to your host. You can download the file with command line utilities like curl and wget, or you can copy it manually from the OpenSearch Project documentation-website repository using a web browser.
# Using cURL:
curl -O https://raw.githubusercontent.com/opensearch-project/documentation-website/2.7/assets/examples/docker-compose.yml
# Using wget:
wget https://raw.githubusercontent.com/opensearch-project/documentation-website/2.7/assets/examples/docker-compose.yml
In your terminal application, navigate to the directory containing the docker-compose.yml file you just downloaded, and run the following command to create and start the cluster as a background process.
docker-compose up -d
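If you want to watch the containers as they start, you can follow their logs with, for example:
# Stream logs from all services; press Ctrl+C to stop following (the containers keep running)
docker-compose logs -f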
Confirm that the containers are running with the command docker-compose ps. You should see output like the following:
$ docker-compose ps
NAME                    COMMAND                   SERVICE                 STATUS    PORTS
opensearch-dashboards   "./opensearch-dashbo…"    opensearch-dashboards   running   0.0.0.0:5601->5601/tcp
opensearch-node1        "./opensearch-docker…"    opensearch-node1        running   0.0.0.0:9200->9200/tcp, 9300/tcp, 0.0.0.0:9600->9600/tcp, 9650/tcp
opensearch-node2        "./opensearch-docker…"    opensearch-node2        running   9200/tcp, 9300/tcp, 9600/tcp, 9650/tcp
Query the OpenSearch REST API to verify that the service is running. You should use -k (also written as --insecure) to disable hostname checking because the default security configuration uses demo certificates. Use -u to pass the default username and password (admin:admin).
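For example, assuming the default port mapping of 9200 shown in the docker-compose ps output above, the following request returns basic information about the cluster:
curl https://localhost:9200 -ku admin:admin
The response should look similar to the following: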
"name" : "opensearch-node1",
"cluster_uuid" : "W0B8gPotTAajhMPbC9D4ww",
"version" : {
"number" : "2.6.0",
"build_type" : "tar",
"build_hash" : "7203a5af21a8a009aece1474446b437a3c674db6",
"build_date" : "2023-02-24T18:58:37.352296474Z",
"build_snapshot" : false,
"lucene_version" : "9.5.0",
"minimum_wire_compatibility_version" : "7.10.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "The OpenSearch Project: https://opensearch.org/"
}
Explore OpenSearch Dashboards by opening http://localhost:5601/ in a web browser on the same host that is running your OpenSearch cluster. The default username is admin and the default password is admin.
Create an index and define field mappings using a dataset provided by the OpenSearch Project. The same fictitious e-commerce data is also used for sample visualizations in OpenSearch Dashboards. To learn more, see the OpenSearch Dashboards documentation.
Download ecommerce-field_mappings.json. This file defines a mapping for the sample data you will use.
# Using cURL:
curl -O https://raw.githubusercontent.com/opensearch-project/documentation-website/2.7/assets/examples/ecommerce-field_mappings.json
# Using wget:
wget https://raw.githubusercontent.com/opensearch-project/documentation-website/2.7/assets/examples/ecommerce-field_mappings.json
Download ecommerce.json. This file contains the index data formatted so that it can be ingested by the bulk API. To learn more, see the Bulk API documentation.
# Using cURL:
curl -O https://raw.githubusercontent.com/opensearch-project/documentation-website/2.7/assets/examples/ecommerce.json
# Using wget:
wget https://raw.githubusercontent.com/opensearch-project/documentation-website/2.7/assets/examples/ecommerce.json
Define the field mappings with the mapping file.
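One way to do this, assuming the index is named ecommerce (the same name used in the bulk request below), is to create the index with the mapping file as the request body:
curl -H "Content-Type: application/json" -X PUT "https://localhost:9200/ecommerce" -ku admin:admin --data-binary "@ecommerce-field_mappings.json"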
Upload the index data using the bulk API.
curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce/_bulk" -ku admin:admin --data-binary "@ecommerce.json"
Query the data using the search API. The following command submits a query that will return documents where customer_first_name is Sonya.
curl -H 'Content-Type: application/json' -X GET "https://localhost:9200/ecommerce/_search?pretty=true" -ku admin:admin -d' {"query":{"match":{"customer_first_name":"Sonya"}}}'
Access OpenSearch Dashboards by opening http://localhost:5601/ in a web browser on the same host that is running your OpenSearch cluster. The default username is admin and the default password is admin.
On the top menu bar, go to Management > Dev Tools.
In the left pane of the console, enter the following:
GET ecommerce/_search
{
  "query": {
    "match": {
      "customer_first_name": "Sonya"
    }
  }
}
Choose the triangle icon at the top right of the request to submit the query. You can also submit the request by pressing Ctrl+Enter (or Cmd+Enter for Mac users). To learn more about using the OpenSearch Dashboards console for submitting queries, see the Dev Tools documentation.
You successfully deployed your own OpenSearch cluster with OpenSearch Dashboards and added some sample data. Now you’re ready to learn about configuration and functionality in more detail. Here are a few recommendations on where to begin:
Review these common issues and suggested solutions if your containers fail to start or exit unexpectedly.
Eliminate the need for running your Docker commands with sudo by adding your user to the docker user group. See Docker’s post-installation steps for Linux for more information.
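On most Linux distributions, that looks something like the following; log out and back in (or run newgrp docker) for the group change to take effect:
# Create the docker group if it does not already exist
sudo groupadd docker
# Add the current user to the docker group
sudo usermod -aG docker $USER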
If you installed Docker Desktop, then Docker Compose is already installed on your machine. Try docker compose (without the hyphen) instead of docker-compose. See the note about Docker Compose syntax at the beginning of this section.
OpenSearch will fail to start if your host’s vm.max_map_count is too low. Review the important system settings if you see the following errors in the service log, and set vm.max_map_count appropriately.
opensearch-node1 | ERROR: [1] bootstrap checks failed
opensearch-node1 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
opensearch-node1 | ERROR: OpenSearch did not exit normally - check the logs at /usr/share/opensearch/logs/opensearch-cluster.log
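On a Linux host, one common fix is to raise the limit with sysctl and persist it across reboots (the exact configuration file can vary by distribution):
# Apply the recommended value for the current session
sudo sysctl -w vm.max_map_count=262144
# Persist the setting across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf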