How to use the ELK stack with Spring Boot
The ELK stack is a popular combination of open-source tools for collecting, storing, analyzing, and visualizing data. ELK stands for Elasticsearch, Logstash, and Kibana. Elasticsearch is a distributed search and analytics engine that can handle large volumes of structured and unstructured data. Logstash is a data processing pipeline that can ingest data from various sources, transform it, and send it to Elasticsearch. Kibana is a web-based interface that allows users to explore and visualize data stored in Elasticsearch.
In this blog post, I will show you how to set up the ELK stack, how to connect it to a Spring Boot application, and how to use it to monitor and troubleshoot your application.
What is the ELK stack?
Elasticsearch, Logstash and Kibana, also known as the ELK stack, are three open-source tools that work together to provide a powerful platform for data analysis and visualization.
- Elasticsearch is a distributed search and analytics engine that can handle large volumes of structured and unstructured data. It offers fast and flexible search capabilities, as well as advanced features such as aggregations, full-text search, geospatial queries, and more. Elasticsearch stores data as JSON documents organized into indices (the older concept of mapping types is deprecated in the 7.x releases used in this post). Elasticsearch also supports horizontal scaling, high availability, and RESTful APIs.
- Logstash is a data processing pipeline that ingests data from various sources, transforms it, and sends it to Elasticsearch. It can collect data from files, TCP/UDP sockets, HTTP requests, databases, message queues, and more, and can enrich, filter, parse, and format that data using a wide range of plugins. It handles different data formats, such as JSON, CSV, XML, and syslog, and can also output data to destinations other than Elasticsearch, such as files, email, and webhooks.
- Kibana is a web-based interface that allows users to explore and visualize data stored in Elasticsearch. Kibana provides various features, such as dashboards, charts, maps, tables, histograms, and more. Kibana also supports creating custom visualizations and plugins, as well as integrating with other tools, such as Machine Learning, APM, Security, etc.
The ELK stack works as follows: Logstash collects and processes data from various sources and sends it to Elasticsearch. Elasticsearch indexes and stores the data, and makes it available for search and analysis. Kibana connects to Elasticsearch and allows users to query and visualize the data. The ELK stack can be used for various use cases, such as log analysis, business intelligence, security and error monitoring, application performance monitoring, etc.
Setting up the ELK stack
The easiest way to set up the ELK stack is to use Docker, which is a platform that allows you to run applications in isolated containers.
Once you have Docker installed, you can use the following commands to pull and run the ELK stack containers:
```bash
# Pull the Elasticsearch image
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.13.4

# Run the Elasticsearch container
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.13.4

# Pull the Logstash image
docker pull docker.elastic.co/logstash/logstash:7.13.4

# Run the Logstash container
docker run -d --name logstash -p 5000:5000 -p 9600:9600 --link elasticsearch:elasticsearch docker.elastic.co/logstash/logstash:7.13.4

# Pull the Kibana image
docker pull docker.elastic.co/kibana/kibana:7.13.4

# Run the Kibana container
docker run -d --name kibana -p 5601:5601 --link elasticsearch:elasticsearch docker.elastic.co/kibana/kibana:7.13.4
```
These commands will create and start three containers: one for Elasticsearch, one for Logstash, and one for Kibana. They will also expose the following ports on your machine:
- Ports 9200 and 9300 are for Elasticsearch.
  - Port 9200 is the HTTP port, used for the RESTful API and web access.
  - Port 9300 is the transport port, used for internal cluster communication and transport-protocol Java clients.
- Ports 5000 and 9600 are for Logstash.
  - Port 5000 is the TCP port on which Logstash will receive logs from your Spring Boot application.
  - Port 9600 is the monitoring port, used for accessing Logstash metrics and health information.
- Port 5601 is for Kibana.
  - This is the web port, used for accessing the Kibana interface and visualizing the data stored in Elasticsearch.
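Note that the `--link` flag is a legacy Docker feature. If you prefer, the same setup can be expressed as a single Docker Compose file; the following is a minimal sketch under the same assumptions (single-node Elasticsearch, default Logstash and Kibana settings). Compose puts all services on one network where each container is reachable by its service name, which is what the Logstash and Kibana configurations expect:

```yaml
# Minimal sketch; not production-hardened (no volumes, no security settings)
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.13.4
    depends_on:
      - elasticsearch
    ports:
      - "5000:5000"
      - "9600:9600"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.4
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
```

Run it with `docker-compose up -d` from the directory containing the file.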
You can verify that the containers are running with the command `docker ps`. You can also access the web interfaces of Elasticsearch and Kibana by visiting http://localhost:9200 and http://localhost:5601 in your browser, respectively.
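For a quick health check from the command line, here are a couple of examples, assuming the default port mappings used above:

```bash
# Elasticsearch answers with a JSON banner including its version and cluster name
curl http://localhost:9200

# Logstash exposes node information and metrics on its monitoring API port
curl http://localhost:9600/_node
```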
Connecting Spring Boot to the ELK stack
To connect your Spring Boot application to the ELK stack, you need to do two things: configure Logstash to receive logs from your application, and configure your application to send logs to Logstash.
Configuring Logstash
Logstash can receive logs from various sources, such as files, TCP/UDP sockets, HTTP requests, etc. In this example, we will use a TCP socket as the input source. To do this, we need to create a configuration file for Logstash that specifies the input, filter, and output plugins. The input plugin defines where Logstash will listen for incoming data, the filter plugin defines how Logstash will process and transform the data, and the output plugin defines where Logstash will send the data.
Create a file named `logstash.conf` on your machine with the following content:
```conf
input {
  # Listen for logs on port 5000 using the TCP protocol
  tcp {
    port => 5000
    codec => json
  }
}

filter {
  # Parse the timestamp field as a date
  date {
    match => [ "timestamp", "ISO8601" ]
  }
}

output {
  # Send the logs to Elasticsearch
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "spring-boot-%{+YYYY.MM.dd}"
  }
  # Print the logs to the standard output (optional)
  stdout {
    codec => rubydebug
  }
}
```
This configuration tells Logstash to listen for JSON-formatted logs on port 5000, parse the `timestamp` field as an ISO 8601 date, and send the logs to Elasticsearch using a daily index named `spring-boot-<date>`. It also prints the logs to standard output for debugging purposes.
To apply this configuration, you need to copy the file to the Logstash container and restart it. You can use the following commands to do this:
```bash
# Copy the file to the Logstash container
docker cp logstash.conf logstash:/usr/share/logstash/pipeline/logstash.conf

# Restart the Logstash container
docker restart logstash
```
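Before wiring up Spring Boot, you can sanity-check the pipeline by hand. The following is a rough sketch: it pipes a single hand-crafted JSON event (shaped to match the `json` codec and the `timestamp` date filter above) into the TCP input with `nc`; the event should then show up in the Logstash container's output and in Elasticsearch:

```bash
# Send one JSON event to the Logstash TCP input on port 5000
echo '{"message": "hello from the shell", "timestamp": "2021-08-01T12:00:00.000Z"}' | nc localhost 5000

# Watch the rubydebug output of the pipeline to confirm the event was processed
docker logs -f logstash
```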
Configuring Spring Boot
To send logs from your Spring Boot application to Logstash, you can build on Logback, the default logging framework in Spring Boot. Logback is a powerful and flexible logging library that supports various appenders, encoders, filters, and layouts. The logstash-logback-encoder library extends it with, among other things, the `LogstashTcpSocketAppender`, which can send logs to Logstash over a TCP socket.

To use it, add the following dependency to your `pom.xml` file:
```xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>
```
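If your project uses Gradle rather than Maven, the same library can be declared with its identical coordinates:

```groovy
// build.gradle - same artifact as the Maven dependency above
implementation 'net.logstash.logback:logstash-logback-encoder:6.6'
```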
Then, create a file named `logback-spring.xml` in the `src/main/resources` folder of your project with the following content:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <!-- Expose spring.application.name to Logback so ${springAppName} below resolves -->
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5000</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>
```
This configuration tells Spring Boot to use the `LogstashTcpSocketAppender` to send logs to Logstash on port 5000, encoded as JSON with fields such as the timestamp, severity, service name, trace and span IDs, process ID, thread name, logger class, and message. (The trace and span fields are only populated if a tracing library such as Spring Cloud Sleuth puts the corresponding values into the MDC.) It also keeps the `CONSOLE` appender, so logs are still printed to the standard output as usual.
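With this in place, any ordinary SLF4J logging call is shipped to Logstash. The following is a minimal, hypothetical example (the class and endpoint names are illustrative, not part of the configuration above); it assumes `spring-boot-starter-web` is on the classpath:

```java
package com.example.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    private static final Logger log = LoggerFactory.getLogger(GreetingController.class);

    @GetMapping("/greeting")
    public String greeting() {
        // Written to the console via CONSOLE and shipped to Logstash as a
        // JSON event via the LOGSTASH appender in logback-spring.xml
        log.info("Handling /greeting request");
        return "Hello from Spring Boot";
    }
}
```

Each request to this endpoint should produce one document in the `spring-boot-<date>` index.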
You should also create or modify the `application.yml` file in the `src/main/resources` folder of your project to set the application name, because the `springAppName` property referenced in the Logback configuration is resolved from `spring.application.name`. YAML is a concise and readable way to configure this and other aspects of your Spring Boot application, such as server properties, database connections, and security settings.
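A minimal sketch; the name itself is an example, so pick one that identifies your service:

```yaml
spring:
  application:
    name: spring-boot-elk-demo   # example value; surfaces as the "service" field of each log event
```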
To apply this configuration, you need to rebuild and run your Spring Boot application. You can use the following commands to do this:
```bash
# Build the application
mvn clean package

# Run the application
java -jar target/<your-application-name>.jar
```
Using the ELK stack
Once you have set up the ELK stack and connected it to your Spring Boot application, you can start using it to monitor and troubleshoot your application. You can use Kibana to explore and visualize the logs stored in Elasticsearch.
To access Kibana, visit http://localhost:5601 in your browser. You will see the Kibana home page, where you can choose different options to work with your data. To view your logs, you need to create an index pattern that matches the name of the indices that Logstash created for your logs. In this example, the matching pattern is `spring-boot-*`.
To create an index pattern, follow these steps:
- Click on the Stack Management option on the left sidebar.
- Click on the Index Patterns option under the Kibana section.
- Click on the Create index pattern button.
- Enter `spring-boot-*` in the Index pattern name field and click on the Next step button.
- Select `@timestamp` as the Time field and click on the Create index pattern button.
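If no matching index shows up, it usually means no logs have reached Elasticsearch yet. You can check which indices the Logstash pipeline has created by querying Elasticsearch directly (assuming the default port mapping from earlier):

```bash
# List all indices matching the pattern; ?v adds column headers to the output
curl 'http://localhost:9200/_cat/indices/spring-boot-*?v'
```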
You have now created an index pattern that allows you to query and analyze your logs. To view your logs, follow these steps:
- Click on the Discover option on the left sidebar.
- Select `spring-boot-*` from the Index pattern dropdown menu.
- You will see a list of logs that match your index pattern, sorted by timestamp in descending order. You can use the search bar and the filters to narrow down your results, and the time picker to select a specific time range for your logs.
- You can click on any log to expand it and see the details of the fields. You can also add or remove fields from the table by using the Available fields panel on the left.
- You can use the Save button to save your current query and filters as a saved search, which you can later access from the Saved Objects option under the Stack Management section.
- You can use the Share button to generate a link or an embed code for your current view of the logs, which you can share with others or embed in another web page.
- You can use the Visualize button to create a visualization based on your current query and filters, such as a bar chart, a pie chart, a line chart, etc. You can then use the Dashboard option to combine multiple visualizations into a single dashboard, which you can customize and share as well.
Example - Monitoring errors/exceptions with the ELK stack
One of the benefits of using the ELK stack with Spring Boot is that you can easily monitor and troubleshoot any errors or exceptions that occur in your application. Errors and exceptions are logged by Spring Boot and sent to Logstash, which then forwards them to Elasticsearch. You can use Kibana to view and analyze the error logs, and identify the root cause and the impact of the issues.
To monitor errors/exceptions with the ELK stack, you can follow these steps:
- Create a saved search that filters the logs by severity level. For example, a query such as `severity : "ERROR" or severity : "WARN"` in the Kibana search bar shows only error and warning logs, since the Logback encoder above writes the log level into the `severity` field. You can also add other filters, such as service name, class name, or message. (A hypothetical snippet that produces this kind of event is sketched after this list.)
. You can also add other filters, such as service name, class name, message, etc. - Create a visualization based on the saved search that shows the distribution of the errors/exceptions over time, by service, by class, by message, etc. For example, you can create a line chart that shows the number of errors/exceptions per hour, a pie chart that shows the percentage of errors/exceptions by service, a table that shows the top 10 error/exception messages, etc.
- Create a dashboard that combines the visualizations into a single view, where you can see the overall status and trends of the errors/exceptions in your application. You can also add other elements, such as text, images, markdown, etc. to provide more context and information.
- Use the alerting feature to set up rules and actions that trigger when certain conditions are met. For example, you can create an alert that sends an email or a notification when the number of errors/exceptions exceeds a certain threshold, or when a specific error/exception occurs. You can also use the anomaly detection feature to detect unusual patterns or spikes in the errors/exceptions, and alert you accordingly.
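As a concrete illustration, here is a hypothetical endpoint that produces exactly the kind of ERROR events such a saved search would surface; the controller and the simulated failure are invented for this sketch and are not part of the application built above:

```java
package com.example.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PaymentController {

    private static final Logger log = LoggerFactory.getLogger(PaymentController.class);

    @GetMapping("/pay")
    public String pay() {
        try {
            // Simulated failure, just to generate an ERROR-level log event
            throw new IllegalStateException("Payment gateway unreachable");
        } catch (IllegalStateException e) {
            // Logged at ERROR level; appears in Kibana with severity "ERROR",
            // so the saved search severity : "ERROR" or severity : "WARN" matches it
            log.error("Payment failed: {}", e.getMessage(), e);
            return "Payment failed";
        }
    }
}
```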
Conclusion
The ELK stack is a powerful and popular platform for log management and analytics. Together, these components form a flexible and scalable solution that can handle various use cases and challenges in the modern data-driven world.
I hope you learned something new and found this article helpful!