ELK stack basics: deploying and configuring the ELK stack ("open-source Splunk")

Let's get a basic understanding of what the ELK stack is. ELK is an acronym for Elasticsearch, Logstash, and Kibana. The stack gives you the ability to aggregate logs from all your systems and applications, analyze them, and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics, and more. It is a software stack (similar to the LAMP stack) that combines the functionality of these three open-source tools:


Elasticsearch is an open-source, RESTful, distributed search and analytics engine built on Apache Lucene. Support for various languages, high performance, and schema-free JSON documents make Elasticsearch an ideal choice for various log analytics and search use cases.


Logstash is an open-source data ingestion tool that allows you to collect data from a variety of sources, transform it, and send it to your desired destination. With pre-built filters and support for over 200 plugins, Logstash allows users to easily ingest data regardless of the data source or type.


Kibana is an open-source data visualization and exploration tool for reviewing logs and events. Kibana offers easy-to-use, interactive charts, pre-built aggregations and filters, and geospatial support, making it the preferred choice for visualizing data stored in Elasticsearch.

Enough theory, let's start deploying. (I am using a Kali Linux virtual machine for the demonstration; the steps will be similar on any other Debian-based distro.)


1. First we will install Elasticsearch

a) The Elasticsearch package is not available in the default repositories of Kali Linux, therefore we need to manually add Elastic's GPG key so that packages from the custom repository can be verified during installation:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

b) Then we need to add the custom repository of elasticsearch:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

We can check in /etc/apt/sources.list.d/ that our custom repository is indeed added.

ls -l /etc/apt/sources.list.d/

c) Install elasticsearch:

apt-get update && apt-get install elasticsearch -y

d) Then we need to configure Elasticsearch. For this we make changes in the elasticsearch.yml file, available at /etc/elasticsearch/elasticsearch.yml.

For demonstration purposes we don't need to change much. However, I have uncommented line 55, which makes Elasticsearch listen only on localhost rather than on all interfaces. (This type of misconfiguration on live systems leads to data breaches and other attacks.)

This file also has configuration options for your cluster, node, paths, memory, network, discovery, and gateway. Most of these options are preconfigured in the file but you can change them according to your needs. For the purposes of our demonstration of a single-server configuration, we will only adjust the settings for the network host.
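As a concrete sketch of that one change (treat this as an excerpt rather than exact file contents, since line numbers and comments vary between Elasticsearch versions):

```yaml
# /etc/elasticsearch/elasticsearch.yml (excerpt)
# ---------------------------------- Network -----------------------------------
# Bind Elasticsearch to the loopback interface only. Binding to all
# interfaces on a live system without authentication is how many
# Elasticsearch data breaches happen.
network.host: localhost
#
# HTTP port (we keep the default):
#http.port: 9200
```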

e) Start the elasticsearch service:

systemctl start elasticsearch

We can also check that it is running by querying port 9200 on localhost:

curl http://localhost:9200

2. Now we will set up Kibana:

As we have already added the custom Elastic repository, we can install Kibana with apt:

apt install kibana
systemctl start kibana

We can check it by visiting http://localhost:5601 in a browser (Kibana listens on port 5601 by default).

3. Now we will install and configure logstash:


a) Install Logstash:

apt install logstash

b) Configure logstash:

Logstash's configuration files are located in the /etc/logstash/conf.d directory.

Logstash is like a pipe: it takes data in at one end, performs some operations on it, and passes it on to Elasticsearch in a consistent format.

A detailed configuration guide is available at https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html

For us, a simple configuration specifying the port on which Logstash will listen for data is enough:

input {
  beats {
    port => 5044
  }
}

I named it input-beats.conf; you can name it whatever you like.

Next we create an output configuration file. I named this one elasticsearch-output.conf; you can name it whatever you like:

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
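The index option uses Logstash's sprintf-style field references: the beat name, beat version, and event date are substituted per event. As a hypothetical Python sketch (an illustration only, not how Logstash is implemented), the resulting index name works out like this:

```python
from datetime import date

# Hypothetical sketch of how the Logstash index pattern
# "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
# resolves for one event's metadata and timestamp.
def resolve_index(metadata: dict, day: date) -> str:
    return "{beat}-{version}-{d:%Y.%m.%d}".format(
        beat=metadata["beat"], version=metadata["version"], d=day)

print(resolve_index({"beat": "filebeat", "version": "7.17.0"}, date(2022, 5, 1)))
# filebeat-7.17.0-2022.05.01
```

Writing to a fresh index per day keeps old log data easy to expire: dropping a whole day's index is much cheaper than deleting individual documents out of one big index.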

And finally we check whether our configuration is correct by running:

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

Make sure you are inside the /etc/logstash/conf.d directory before running this.

If we get Configuration OK in the output, we are good to go.

4. Now let's see how we can send data from our system to the Logstash pipe. Here we will be using something called Beats: the Elastic Stack uses several lightweight data shippers called Beats to collect data from various sources and transport it to Logstash or Elasticsearch. (A beat is just an automated tool that keeps running, monitoring log files for changes and fetching whatever is new.) There are different Beats available; we will be using Filebeat.
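Conceptually, the core of what a beat does to a log file can be sketched in a few lines of Python (an illustration of the idea only, not Filebeat's actual implementation; real Filebeat persists its offsets in a registry file so it can resume after a restart):

```python
import os
import tempfile

def read_new_lines(path: str, offset: int):
    """Return the lines appended since `offset`, plus the new offset.
    This is the essence of a log shipper: remember how far you have
    read in each file and emit only what is new on each poll."""
    with open(path, "r") as f:
        f.seek(offset)
        lines = f.read().splitlines()
        return lines, f.tell()

# Usage sketch: write a log line, poll, append another line, poll again.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".log") as f:
    f.write("first event\n")
    path = f.name

lines, offset = read_new_lines(path, 0)       # picks up "first event"
with open(path, "a") as f:
    f.write("second event\n")
new_lines, offset = read_new_lines(path, offset)
print(new_lines)  # ['second event']
os.remove(path)
```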


a) Install Filebeat:

apt install filebeat


b) Configure Filebeat by editing /etc/filebeat/filebeat.yml.

(We need to make changes only between lines 176 and 202.)
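Those lines are the Outputs section of the file. Since we want Filebeat to ship to Logstash rather than directly to Elasticsearch, the change amounts to commenting out the Elasticsearch output and enabling the Logstash output, pointed at the port our Logstash beats input listens on (a sketch of the edited excerpt; exact line numbers differ between Filebeat versions):

```yaml
# /etc/filebeat/filebeat.yml (Outputs section, excerpt)
# ---------------------------- Elasticsearch Output ----------------------------
# Commented out: events should not go straight to Elasticsearch.
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# ------------------------------ Logstash Output -------------------------------
# Enabled: ship events to the beats input we configured in Logstash.
output.logstash:
  hosts: ["localhost:5044"]
```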


c) Next, we need to set up the Filebeat ingest pipelines, which parse the log data before sending it through Logstash to Elasticsearch. To load the ingest pipeline for the system module, enter the following command:

filebeat setup --pipelines --modules system


d) Then we need to load the index template into Elasticsearch. An Elasticsearch index is a collection of documents that have similar characteristics. Indexes are identified by a name, which is used to refer to the index when performing various operations within it. The index template will be applied automatically whenever a new index is created.

sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
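A template is matched to new indices by name pattern. A hypothetical Python sketch of that matching rule (illustration only; Elasticsearch performs this server-side whenever an index is created):

```python
import fnmatch

# A minimal stand-in for an index template: it applies to any new
# index whose name matches one of its index_patterns (glob syntax).
template = {
    "index_patterns": ["filebeat-*"],
    "settings": {"number_of_shards": 1},
}

def template_applies(template: dict, index_name: str) -> bool:
    return any(fnmatch.fnmatch(index_name, pattern)
               for pattern in template["index_patterns"])

print(template_applies(template, "filebeat-7.17.0-2022.05.01"))  # True
print(template_applies(template, "logstash-2022.05.01"))         # False
```

This is why the daily filebeat-* indices created by our Logstash output all pick up the same settings without any further action.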


e) Filebeat comes packaged with sample Kibana dashboards that allow you to visualize Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern and load the dashboards into Kibana.

As the dashboards load, Filebeat connects to Elasticsearch to check version information. To load dashboards when Logstash is enabled, you need to disable the Logstash output and enable Elasticsearch output:

filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

f) Finally, we are ready to start and enable Filebeat:

systemctl start filebeat
systemctl enable filebeat
systemctl status filebeat

g) We should now be able to visualize our data in Kibana.

One more sanity check:

curl 'http://localhost:9200/filebeat-*/_search?pretty'

Now it is up to you to read the documentation and make changes according to your needs!

Thank you for reading!!

Author: Prabhsimran (https://www.linkedin.com/in/pswalia2u/)




