
Setting up Elasticsearch on Linux


If you are looking for a database that excels at searching "full text" data, look no further than Elasticsearch. Now, you may be curious what "full text" data is. At its most basic, it is any free-form text we might read in a paragraph, for example a comment someone posts under a tweet. Searching full-text data is one of the areas where Elasticsearch shines, because search performance is both fast and reliable. There are other benefits as well, including data visualization and aggregation.

OK, now we have a basic idea of what Elasticsearch is. However, when it comes to Elasticsearch, you will often hear about a stack called "ELK", an acronym for Elasticsearch, Logstash & Kibana. Kibana is the component that comes into play when we want to search the data inside Elasticsearch; it provides a very user-friendly interface where you can run your queries with ease. Logstash, on the other hand, gives you the ability to push data from almost any source, for example a text file or a network socket, into Elasticsearch in a compatible format. In this article I am not going to cover Logstash and its capabilities; it will get a separate document exploring all of its powerful features.

Now that we have enough background information, let's jump into the configuration.

  • Note that the procedure that follows uses a tarball installation rather than relying on built-in package managers like APT/YUM. This makes the whole procedure applicable to any Linux system, from Debian-based to Fedora-based. That said, I would recommend working on one of the recent releases of a Linux distro, such as Ubuntu 18.04, CentOS 7, etc.
  • Furthermore, there are real benefits to installing software from its upstream archives:
    1. You have full control over application version upgrades, the freedom to assign access privileges, and the ability to isolate binaries and other files into a separate directory area.
    2. The application won't be impacted by system-wide upgrades, for example a yum update or apt-get upgrade.

Setting up the Elasticsearch Node


Download the source archive. Note the version number; in this demo we are installing Elasticsearch version 6.3.0.

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.0.tar.gz


Let's create a directory named "elastic" under /opt that will hold the entire installation. You will also notice a sub-directory "java", which we will use later for the Oracle JDK download.

mkdir -p /opt/elastic/java


Now move the downloaded tar archive, change into the directory, and extract it:

mv elasticsearch-6.3.0.tar.gz /opt/elastic/ && cd /opt/elastic/ && tar -xvf elasticsearch-6.3.0.tar.gz


Further, create two more directories, "data" and "logs", to hold the actual data and Elasticsearch's own log files respectively.

mkdir -p /opt/elastic/elasticsearch-6.3.0/{data,logs}


It's time to tell Elasticsearch about the directories created in the step above. To do this, open the main configuration file and set the options below accordingly.

vim /opt/elastic/elasticsearch-6.3.0/config/elasticsearch.yml
# Path to the directory where Elasticsearch stores its data
path.data: /opt/elastic/elasticsearch-6.3.0/data
# Path to the log files generated by Elasticsearch
path.logs: /opt/elastic/elasticsearch-6.3.0/logs
# IP address of the NIC that Elasticsearch should listen on, for example:
# network.host: 192.168.1.10

Note the "network.host" value, which should be the IP address the Elasticsearch process will listen on.

Another important aspect of the configuration is assigning half of the physical memory to the Java heap. This is because the Elasticsearch process actually runs on top of a JVM.

vim /opt/elastic/elasticsearch-6.3.0/config/jvm.options
# If the system has 8 GB of physical memory, hand half (4 GB) over to the Java heap
-Xms4g
-Xmx4g
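Rather than hard-coding the number, you can derive half of the machine's RAM with a quick shell snippet. This is a sketch; it reads the MemTotal line of /proc/meminfo, which is standard on Linux:

```shell
# Compute half of physical RAM in megabytes, for use as the -Xms/-Xmx value
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_mb=$(( total_kb / 2 / 1024 ))
echo "-Xms${half_mb}m"
echo "-Xmx${half_mb}m"
```

Keep -Xms and -Xmx equal, so the heap never needs to grow or shrink at runtime.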


Further, it is also important to raise an existing kernel limit. Specifically, we are changing the default value of "vm.max_map_count".

echo "vm.max_map_count = 262144" >> /etc/sysctl.conf


For the change to take effect, execute:

sysctl -p
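You can confirm the kernel picked up the new value by reading it back. On the target box this should report 262144 after the sysctl -p above:

```shell
# Read the live kernel value; equivalent to `sysctl vm.max_map_count`
cat /proc/sys/vm/max_map_count
```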


As I already mentioned, the JDK lives inside our Elasticsearch workspace. So let's download Oracle JDK version 8 and move it to the desired location.

wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz
mv  jdk-8u131-linux-x64.tar.gz /opt/elastic/java/ && cd  /opt/elastic/java/  &&  tar -xvf jdk-8u131-linux-x64.tar.gz
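Because this is a tarball JDK rather than a packaged one, Elasticsearch's startup script will not find Java on its own, so export JAVA_HOME. The extracted directory name jdk1.8.0_131 is an assumption based on the 8u131 archive; confirm it with ls /opt/elastic/java/ after extraction.

```shell
# jdk1.8.0_131 is assumed from the 8u131 tarball; verify with: ls /opt/elastic/java/
export JAVA_HOME=/opt/elastic/java/jdk1.8.0_131
export PATH="$JAVA_HOME/bin:$PATH"
echo "$JAVA_HOME"
```

For the systemd service we create next, the same variable can be supplied with an Environment= line in the unit file.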


Of course, let's introduce a systemd unit file for the sake of controlling the service.

vim /etc/systemd/system/elasticsearch.service



ExecStart=/opt/elastic/elasticsearch-6.3.0/bin/elasticsearch \
        -p ${PID_DIR}/elasticsearch.pid \
        --quiet \
        -Edefault.path.logs=${LOG_DIR} \
        -Edefault.path.data=${DATA_DIR}
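The fragment above shows only the ExecStart line. For reference, a complete minimal unit might look like the sketch below; the JAVA_HOME directory name, the PID_DIR value, and the file-descriptor limit are assumptions drawn from the earlier steps, not the author's exact file. The -Edefault.path.* flags are dropped here because path.data and path.logs were already set in elasticsearch.yml:

```ini
# Sketch of a complete unit; paths and the JDK directory name are assumed from earlier steps
[Unit]
Description=Elasticsearch
After=network.target

[Service]
Type=simple
User=elastic
Group=elastic
Environment=JAVA_HOME=/opt/elastic/java/jdk1.8.0_131
Environment=PID_DIR=/opt/elastic/elasticsearch-6.3.0
ExecStart=/opt/elastic/elasticsearch-6.3.0/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
# Elasticsearch needs a generous file-descriptor limit
LimitNOFILE=65536
Restart=on-failure

[Install]
WantedBy=multi-user.target
```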




If you look closely at the unit file above, it assumes the Elasticsearch service runs as a dedicated user "elastic" and group "elastic". Let's create them, and hand the installation directory over to the new user, since Elasticsearch refuses to run as root.

useradd elastic
chown -R elastic:elastic /opt/elastic


Now it's time to make systemd aware of the new unit file we placed.

systemctl daemon-reload


That completes the Elasticsearch node setup. You can now start the process through the systemd service.

systemctl start elasticsearch 


Check the service status, and verify that the default TCP ports are listening:

systemctl status elasticsearch
ss -lnt

You should see 9200/tcp and 9300/tcp in a LISTEN state.

Great job! You have done it.

Let's run our first API call against Elasticsearch.

curl -XGET `hostname`:9200
{
  "name" : "node-1",
  "cluster_name" : "my-application",
  "cluster_uuid" : "PLiGiyKhSXykxxx7pKa_6w",
  "version" : {
    "number" : "6.3.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}


Setting up Kibana

It is best practice to keep the whole Elastic stack under one directory, so let's download Kibana into the same /opt/elastic area.

cd /opt/elastic/ && wget https://artifacts.elastic.co/downloads/kibana/kibana-6.3.0-linux-x86_64.tar.gz


Once the download completes, extract the tar archive:

tar -xvf kibana-6.3.0-linux-x86_64.tar.gz


Let's edit the main configuration file and reference the Elasticsearch node's IP.

vim /opt/elastic/kibana-6.3.0-linux-x86_64/config/kibana.yml
# The address the Kibana server will bind to
server.host: ""

# The URL of the Elasticsearch instance to use for all queries
elasticsearch.url: ""
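As a concrete illustration (both addresses below are placeholders, not values from this guide), a filled-in configuration might read:

```yaml
# Illustrative values only; pick addresses that match your environment
server.host: "0.0.0.0"                         # bind Kibana on all interfaces
elasticsearch.url: "http://192.168.1.10:9200"  # the Elasticsearch node set up earlier
```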


Introduce the Kibana systemd unit file:

vim /etc/systemd/system/kibana.service

ExecStart=/opt/elastic/kibana-6.3.0-linux-x86_64/bin/kibana "-c /opt/elastic/kibana-6.3.0-linux-x86_64/config/kibana.yml"
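As with Elasticsearch, only the ExecStart line is shown above. A complete minimal unit might look like the sketch below; the ordering after elasticsearch.service and the user/group choice are assumptions based on the earlier steps, not the author's exact file:

```ini
# Sketch of a complete unit; paths assumed from earlier steps
[Unit]
Description=Kibana
After=network.target elasticsearch.service

[Service]
Type=simple
User=elastic
Group=elastic
ExecStart=/opt/elastic/kibana-6.3.0-linux-x86_64/bin/kibana -c /opt/elastic/kibana-6.3.0-linux-x86_64/config/kibana.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```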



Now it's time to make systemd aware of the new unit file we placed.

systemctl daemon-reload


Now you can invoke the process by calling the systemd service.

systemctl start kibana


If you see 5601/tcp in the output below, it confirms that the service started successfully.

ss -lnt


Now, fire up your favorite web browser and visit the Kibana UI on port 5601.

Congratulations, you have successfully completed the Elasticsearch/Kibana deployment.
