How to Install the ELK Stack on Ubuntu 18.04 and 20.04

IT security is not only about deploying heavy security tools; it also involves detecting and preventing malfunctions in the infrastructure. Detection requires a vigilant approach and a range of supporting tools. Monitoring is about collecting events and correlating them to generate alerts.

Given today's threat landscape, attackers constantly change their methods and are hard to detect, so detection and prevention techniques have to keep improving as well. One such technique is the ELK Stack, which is gaining popularity in information security. This article explains what the ELK Stack is and how to set it up in a virtual environment.
The ELK Stack is a combination of three open-source tools (Elasticsearch, Logstash, Kibana) that lets you store and visualize large amounts of data. It centralizes logs so that all of them can be searched in one place.
The broader Elastic Stack has four main components:
 
Elasticsearch: a RESTful, distributed search and analytics engine that stores and indexes large amounts of collected data.
Logstash: collects and processes incoming data and sends it to an Elasticsearch server.
Kibana: a web interface for visualizing and querying the data stored in Elasticsearch.
Beats: lightweight data shippers that send data from multiple log sources to Logstash or directly to an Elasticsearch server.
 
The steps to install the ELK stack on your machine are provided below:
 
Step 1: Install Dependencies

Install Java
Java 8 should be installed on your machine before installing the ELK stack. Of the stack components, only Logstash is compatible with Java 9.
Note: To check your Java version, run:
              java -version
The output you are looking for is 1.8.x_xxx, which indicates that Java 8 is installed.
 

If Java 8 is already installed on your Ubuntu system, you can skip directly to installing Nginx.

1. To install Java 8, enter the following command in a new terminal window:

sudo apt-get install openjdk-8-jdk

2. If prompted, type y and hit Enter for the process to finish.

Install Java JDK 8 as a prerequisite for the ELK stack.

Install Nginx
Nginx acts as a reverse proxy in front of the Kibana dashboard, which means it can be configured to enforce password-controlled access. It works both as a web server and as a proxy server.
1. Enter the following command to install Nginx:

sudo apt-get install nginx

2. If prompted, type y and hit Enter for the process to finish.

Install Nginx on Ubuntu to set it up as a reverse proxy for Kibana.


Note: For additional tutorials, follow our guides on installing Nginx on Ubuntu and setting up an Nginx reverse proxy for Kibana.
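
For orientation, here is a minimal sketch of such a password-protected reverse proxy. The site file name kibana and the server_name value are placeholders to adapt; it assumes Kibana will listen on localhost:5601 as configured later in this guide.

# Install htpasswd (from apache2-utils) and create a login for the dashboard
sudo apt-get install apache2-utils
sudo htpasswd -c /etc/nginx/htpasswd.kibana kibanaadmin

# Write a minimal reverse-proxy server block (placeholder site name and domain)
sudo tee /etc/nginx/sites-available/kibana >/dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;

    location / {
        proxy_pass http://localhost:5601;
        proxy_set_header Host $host;
    }
}
EOF

# Enable the site (you may also want to adjust or remove the default site on port 80),
# test the configuration, and reload Nginx
sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/kibana
sudo nginx -t && sudo systemctl reload nginx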


Step 2: Add the Elastic Repository

All the open-source software in the ELK stack is distributed through the Elastic repositories. The first step in adding the Elastic repository is to import its GPG key.

1. Open a new terminal and enter the following command to import the GPG key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

2. If the key is added successfully, the system responds with OK. Proceed only once you see it.

Add Elasticsearch repository.
3. Next, install the apt-transport-https package:

sudo apt-get install apt-transport-https

4. Next, add the Elastic repository to your system's repository list:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Add elastic repository to your system's repository list.
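
Optionally, you can confirm that the repository entry was written:

cat /etc/apt/sources.list.d/elastic-7.x.list

The file should contain the single deb line added above.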

Step 3: Install Elasticsearch

1. Before installing Elasticsearch, update the package index so the new repository is recognized:

sudo apt-get update

2. Now install Elasticsearch with the following command:

sudo apt-get install elasticsearch

Command to install Elasticsearch via Linux terminal.

Configure Elasticsearch

1. To control Elasticsearch's behaviour, open its configuration file for editing in a text editor. We use Nano here:

sudo nano /etc/elasticsearch/elasticsearch.yml

2. A configuration file with many descriptions and entries appears. Scroll down until you find the following lines:

#network.host: 192.168.0.1
#http.port: 9200

3. Uncomment both lines by removing the hash (#) sign at the start of each, and replace 192.168.0.1 with localhost.

It must read:

network.host: localhost
http.port: 9200

An image of how to configure the Elasticsearch configuration file.

4. The discovery section is just below these entries. Since we are configuring a single-node cluster, add one more line:

discovery.type: single-node

See the following image for more details.

Configuring Elasticsearch as a single-node cluster.

5. Set the JVM heap size to no more than half of your total system memory; it is set to 1 GB by default. Open the JVM options file for editing:

sudo nano /etc/elasticsearch/jvm.options

6. Find the lines beginning with -Xms and -Xmx. In the example below, the minimum (-Xms) and maximum (-Xmx) heap sizes are both set to 512 MB.
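
After editing, the two lines might read as follows (512 MB is only an example; choose values that suit your host's memory):

-Xms512m
-Xmx512m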

Limiting JVM heap size.

Start Elasticsearch

1. Start the Elasticsearch service by entering a systemctl command:

sudo systemctl start elasticsearch.service

The Elasticsearch service can take some time to start. A successful start produces no output.

2. Enable Elasticsearch to start on boot:

sudo systemctl enable elasticsearch.service

This command enables the Elasticsearch service on boot.
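
Optionally, you can confirm the service is running by checking its status:

sudo systemctl status elasticsearch.service

The unit should be reported as active (running).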

Test Elasticsearch

Use the curl command to test your configuration:

curl -X GET "localhost:9200"

Elasticsearch is functional and listening on port 9200 when the command returns a JSON response that includes your system's node name.
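
The response is a small JSON document along the lines of the abridged sketch below (your node name, cluster name, and version numbers will differ):

{
  "name" : "your-hostname",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.x.x"
  },
  "tagline" : "You Know, for Search"
}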

This image indicates that the elasticsearch cluster is active.

Step 4: Install Kibana

It is recommended to install Kibana next. Kibana is a graphical user interface for parsing and interpreting collected log files.

1. Enter the following command to install Kibana on your system:

sudo apt-get install kibana

2. Once the command above finishes, configure Kibana.

Configure Kibana

1. Now, open the kibana.yml configuration file for editing:

sudo nano /etc/kibana/kibana.yml

2. Remove the # sign at the start of the following lines to activate them:

#server.port: 5601
#server.host: "your-hostname"
#elasticsearch.hosts: ["http://localhost:9200"]

The edited lines should look as follows:

server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]

3. Save the file (Ctrl+O) and exit (Ctrl+X).

Configuring the Kibana configuration file.


Note: With this configuration, Kibana only accepts traffic from the machine it runs on. To make the dashboard reachable from other hosts, set server.host to the server's own address.
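
For example, to make Kibana listen on all interfaces (an assumption about your setup; only do this behind a firewall or the Nginx reverse proxy configured earlier):

server.host: "0.0.0.0"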

Start and Enable Kibana

1. Start the Kibana service:

sudo systemctl start kibana

A successful start produces no output.

Now, configure Kibana to launch on boot:

sudo systemctl enable kibana

The command to enable the Kibana service on Ubuntu and the expected output.

Allow Traffic on Port 5601

To access the Kibana dashboard, you need to allow traffic on port 5601 through the UFW firewall.

Open a new terminal window and enter the following command:

sudo ufw allow 5601/tcp

The following output should appear on your Ubuntu system:

Allow traffic on Kibana port.
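
If UFW is not yet enabled on this machine, a cautious sketch is to allow SSH first so you do not lock yourself out, then enable the firewall and confirm the rules (adapt to your own firewall policy):

sudo ufw allow 22/tcp
sudo ufw enable
sudo ufw status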

Test Kibana

Open a web browser and enter the following address to access the Kibana dashboard:

http://localhost:5601

The Kibana dashboard looks like this:

The Kibana dashboard welcome screen.


Note: If you receive a “Kibana server not ready yet” error, check if the Elasticsearch and Kibana services are active.
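
A quick way to check both services at once (unit names assume the default package installation):

sudo systemctl status elasticsearch kibana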


Step 5: Install Logstash

Logstash is a tool for collecting data from different sources, processing it, and sending it on to Elasticsearch for storage, where Kibana can then be used to explore it.

Install Logstash by entering the following command:

sudo apt-get install logstash

Start and Enable Logstash

1. Start the Logstash service:

sudo systemctl start logstash

2. Enable the Logstash service:

sudo systemctl enable logstash

3. To check the status of the service, run the following command:

sudo systemctl status logstash

Check logstash system status

Configure Logstash

Logstash is the most heavily customized part of the ELK stack. Once it is installed, configure its input, filter, and output pipeline sections according to your use case.

All custom Logstash configuration files are stored in /etc/logstash/conf.d/.

diagram showing How Logstash processes data


Note: Consider the following Logstash configuration examples and adjust the configuration for your needs.
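
As one such example, the following minimal pipeline is a sketch that listens for Beats input on port 5044 and forwards events to the local Elasticsearch instance. The file name 02-beats-pipeline.conf is arbitrary, and no filters are defined; adjust both to your needs.

# Write a minimal Beats-to-Elasticsearch pipeline (placeholder file name)
sudo tee /etc/logstash/conf.d/02-beats-pipeline.conf >/dev/null <<'EOF'
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
EOF

# Restart Logstash so it picks up the new pipeline
sudo systemctl restart logstash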


Step 6: Install Filebeat

Filebeat can throttle its sending rate when the Logstash service is overwhelmed with data. It is the most popular Beats module for collecting and shipping log files and is an essential part of this ELK installation.

Install Filebeat by entering the following command:

sudo apt-get install filebeat

Wait for the installation to complete.


Note: Make sure that the Kibana service is up and running during this installation and configuration process.


Configure Filebeat
By default, Filebeat sends all of its data to Elasticsearch. Here, we reconfigure it to send data to Logstash instead.

1. To configure this, edit the filebeat.yml configuration file:

sudo nano /etc/filebeat/filebeat.yml

2. Under the Elasticsearch output section, comment out the following lines so that each begins with a hash (#) sign:

# output.elasticsearch:
   # Array of hosts to connect to.
   # hosts: ["localhost:9200"]

3. Under the Logstash output section, delete the hash sign (#) from the following two lines:

# output.logstash:
     # hosts: ["localhost:5044"]

After uncommenting, these lines should look as follows:

output.logstash:
     hosts: ["localhost:5044"]

See the following image for more details:

How to configure the Filebeat configuration file.

4. Next, enable the Filebeat system module, which collects and examines local system logs:

sudo filebeat modules enable system

The output will be Enabled system.
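
Optionally, you can verify that the Filebeat configuration file parses cleanly before continuing:

sudo filebeat test config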

5. Now, load the index template:

sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

The command connects to Elasticsearch and loads the Filebeat index template. It may take a few moments to complete.

Running the Filebeat setup.

Start and Enable Filebeat

Now, start and enable the Filebeat service:

sudo systemctl start filebeat
sudo systemctl enable filebeat

Verify That Elasticsearch Receives Data

Lastly, confirm that Filebeat is shipping log files to Logstash for processing. After processing, the data is passed on to Elasticsearch. List the Elasticsearch indices to check:

curl -XGET http://localhost:9200/_cat/indices?v

example command to check if Filebeat logs are being sent to elasticsearch
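
If everything is wired up correctly, the listing includes at least one index created from the shipped logs, typically named logstash-* or filebeat-* depending on how the Logstash output stage names its indices. An optional convenience filter:

curl -XGET 'http://localhost:9200/_cat/indices?v' | grep -E 'logstash|filebeat'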


Conclusion

You have now successfully installed the ELK stack on your Ubuntu system. You can customize the data pipelines in Logstash as needed and use Kibana to browse and visualize the collected log files. We recommend adjusting the ELK stack to fit your own requirements.
