Nginx Web Server Monitoring with the ELK Stack and Logz.io


Nginx is an extremely popular open-source web server serving millions of applications around the world. Second only to Apache, Nginx owes its popularity as a web server (it can also serve as a reverse proxy, HTTP cache and load balancer) to its overall performance and the efficiency with which it serves static content.

From an operational and security perspective, Nginx sits at a critical juncture within an application’s architecture and requires close monitoring at all times. The ELK Stack (Elasticsearch, Logstash, Kibana and Beats) is the world’s most popular open-source log management and log analysis platform, and offers engineers an extremely easy and effective way of monitoring Nginx.

In this article, we’ll provide the steps for setting up a pipeline for Nginx logs and beginning the monitoring work. To complete the steps here, you’ll need a running Nginx web server and your own ELK Stack or a Logz.io account.

Nginx logging basics

Nginx provides users with various logging options, including logging to file, conditional logging and syslog logging. Nginx will generate two log types that can be used for operational monitoring and troubleshooting: error logs and access logs. By default, both logs are typically located under /var/log/nginx, but this location might differ from system to system.
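If you’re not sure where your distribution writes them, one quick way to check is to grep the Nginx configuration tree for the logging directives (the paths below assume a standard package install):

grep -rE "access_log|error_log" /etc/nginx/
ls -l /var/log/nginx/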

Nginx error logs

Error logs contain diagnostic information that can be used for troubleshooting operational issues. The Nginx error_log directive can be used to specify the log file path and severity level, and can be used in the main, http, mail, stream, server and location contexts (in that order).
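As a minimal sketch of how the directive behaves across contexts (the file paths and severity levels here are illustrative), an error_log set in the main context can be overridden by one declared in a narrower context:

# Illustrative /etc/nginx/nginx.conf snippet
error_log /var/log/nginx/error.log warn;    # main context

events {}

http {
    server {
        listen 80;
        # A narrower context overrides the main-context setting for this server
        error_log /var/log/nginx/server_error.log error;
    }
}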

Example log:

2019/07/30 06:41:46 [emerg] 12233#12233: directive "http" has no opening "{" in /etc/nginx/nginx.conf:17

This example emerg error log (emerg being the most severe of Nginx’s logging levels) is informing us that a directive is misconfigured.

Nginx access logs

Access logs contain information on all the requests being sent to, and served by, Nginx. As such, they are a valuable resource for performance monitoring as well as security. The default format for Nginx access logs is the combined format, but this may change from distribution to distribution. As with error logs, you can use the access_log directive to set the log file path and log format.
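For illustration, the sketch below re-declares the predefined combined format under a hypothetical name, my_combined, and points access_log at it; the variable list matches Nginx’s built-in definition:

http {
    # Equivalent to the built-in "combined" format
    log_format my_combined '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $body_bytes_sent '
                           '"$http_referer" "$http_user_agent"';

    access_log /var/log/nginx/access.log my_combined;
}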

Example log:

199.203.204.57 - - [30/Jul/2019:06:35:54 +0000] "GET /hello.html HTTP/1.1" 200 63 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36"

Shipping to ELK

The simplest way of shipping Nginx logs into the ELK Stack (or Logz.io) is with Filebeat. 

In previous versions of the ELK Stack, Logstash played a critical part in Nginx logging pipelines, processing the logs and enriching them with geographical data. With the advent of Filebeat modules, this can be done without Logstash, making it much simpler to set up an Nginx logging pipeline. The same goes if you’re shipping to Logz.io, where parsing is handled automatically. More about this later.

Installing Filebeat

First, add Elastic’s signing key so that the downloaded package can be verified (skip this step if you’ve already installed packages from Elastic):

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Next, add the repository definition to your system:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Update and install Filebeat with:

sudo apt-get update && sudo apt-get install filebeat 

Enabling the Nginx Module

Our next step is to enable Filebeat’s Nginx module. To do this, first enter:

sudo filebeat modules enable nginx 
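The module’s defaults already point at /var/log/nginx, but if your logs live elsewhere you can override the paths in the module’s configuration file. A sketch (the globs shown are illustrative):

# /etc/filebeat/modules.d/nginx.yml
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]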

Next, use the following setup command to load a recommended index template and deploy sample dashboards for visualizing the data in Kibana:

sudo filebeat setup -e 

And last but not least, start Filebeat with:

sudo service filebeat start 

It’s time to verify our pipeline is working as expected. First, cURL Elasticsearch to verify a “filebeat-*” index has indeed been created:

curl -X GET "localhost:9200/_cat/indices?v"
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_1            RjVOETuqTHOMTQZ8GiSsEA   1   0        705          153    900.1kb        900.1kb
green  open   .kibana_task_manager L78aE69YQQeZNLgu9q_7eA   1   0          2            0     30.4kb         30.4kb
yellow open   filebeat-7.2.0       xVZdngF6TX-EiRm2e-HuCQ   1   1          5            0     92.9kb         92.9kb

Next, open Kibana at http://localhost:5601. The index pattern will be defined and loaded automatically, and the data will be visible on the Discover page:

[Screenshot: Kibana Discover page]

Shipping to Logz.io

As mentioned above, since Logz.io automatically parses Nginx logs, there’s no need to use Logstash or Filebeat’s Nginx module. All we have to do is make some minor tweaks to the Filebeat configuration file. 

Downloading the SSL certificate

For secure shipping to Logz.io, we’ll start with downloading the public SSL certificate:

wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt && sudo mkdir -p /etc/pki/tls/certs && sudo mv COMODORSADomainValidationSecureServerCA.crt /etc/pki/tls/certs/

Editing Filebeat 

Next, let’s open the Filebeat configuration file:

sudo vim /etc/filebeat/filebeat.yml 

Paste the following configuration:

filebeat.inputs:
- type: log
  paths:
  - /var/log/nginx/access.log
  fields:
    logzio_codec: plain
    token: <ACCOUNT-TOKEN>
    type: nginx_access
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h
- type: log
  paths:
  - /var/log/nginx/error.log
  fields:
    logzio_codec: plain
    token: <ACCOUNT-TOKEN>
    type: nginx_error
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h

#For version 6.x and lower uncomment the line below and remove the line after it
#filebeat.registry_file: /var/lib/filebeat/registry
filebeat.registry.path: /var/lib/filebeat

#The following processors are to ensure compatibility with version 7
processors:
- rename:
    fields:
    - from: 'agent'
      to: 'beat_agent'
    ignore_missing: true
- rename:
    fields:
    - from: 'log.file.path'
      to: 'source'
    ignore_missing: true

output.logstash:
  hosts: ['listener.logz.io:5015']
  ssl:
    certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']

A few things about this configuration are worth pointing out:

  • The configuration defines two file inputs, one for the Nginx access log and the other for the error log. If you need to change the path to these files, do so now.
  • Be sure to enter your Logz.io account token in place of the <ACCOUNT-TOKEN> placeholders. You can find this token in the Logz.io UI.
  • The processors defined here are used to comply with the new ECS (Elastic Common Schema) and are required for consistent and easier analysis/visualization across different data sources.
  • The output section defines the Logz.io listener as the destination for the logs. Be sure to comment out the Elasticsearch destination.

Save the file and restart Filebeat with:

sudo service filebeat restart 
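If the logs fail to appear, Filebeat’s test subcommands are a quick way to confirm that the configuration file parses and that the Logz.io listener is reachable:

sudo filebeat test config
sudo filebeat test output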

Within a minute or two, you will begin to see your Nginx logs in Logz.io:

[Screenshot: Nginx logs in Logz.io]

Analyzing Nginx logs

Kibana is a pretty powerful analysis tool that provides users with rich querying options for slicing and dicing data. The auto-suggest and auto-complete features added in recent versions make sifting through the logs much simpler and easier.

Let’s take a look at some examples.

The simplest search method, of course, is free text. Just enter a search query in the search field as follows:

sydney

[Screenshot: free-text search results]

Field-level searches allow us to be a bit more specific. For example, we can search for any Nginx access log with an error response code using this search query:

type : "nginx_access" and response > 400

[Screenshot: field-level search results]

There are plenty of other querying options to choose from. You can search for specific fields, use logical statements, or perform proximity searches — Kibana’s search options are extremely varied and are covered more extensively in this Kibana tutorial.
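A few more illustrative queries in the same vein (the field names assume the nginx_access/nginx_error types configured above; adjust them to your own mapping):

type : "nginx_access" and response >= 500
type : "nginx_access" and request : "/hello.html"
type : "nginx_error" and message : "directive"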

Visualizing Nginx logs

Things get more interesting when we start to visualize Nginx logs in Kibana. Kibana is famous for its beautiful dashboards and visualizations that help users depict their data in many different ways. I’ll provide four simple examples of how one can visualize Nginx logs using different Kibana visualizations.

Request map

For Nginx access logs, and any other type of logs recording traffic for that matter, the usual place to start is a geographic map of the different locations submitting requests. This helps us monitor regular behavior and identify suspicious traffic. Logz.io will automatically geo enrich the IP fields within the Nginx access logs so you can use a Kibana Coordinate Map visualization to map the requests as shown below:

[Screenshot: Coordinate Map of requests]

If you’re using your own ELK Stack and shipped the logs using Filebeat’s Nginx module, the fields will also be geo enriched.

Responses over time

Another common visualization used for Nginx access logs monitors response codes over time. Again, this gives us a good picture of normal behavior and can help us detect a sudden spike in error response codes. You can use Bar Chart, Line Chart or Area Chart visualizations for this:

[Screenshot: response codes over time]

Notice the use of the Count aggregation for the Y-Axis, and the use of a Date Histogram aggregation with a Terms sub-aggregation for the X-Axis.
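For the curious, the query Kibana builds behind such a visualization looks roughly like the sketch below; the http.response.status_code field is the ECS name used by the Filebeat module, and your mapping may differ:

curl -X GET "localhost:9200/filebeat-*/_search?size=0" -H 'Content-Type: application/json' -d'
{
  "aggs": {
    "responses_over_time": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1h" },
      "aggs": {
        "by_status": { "terms": { "field": "http.response.status_code" } }
      }
    }
  }
}
'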

Top requests

Data table visualizations are a great way of breaking up your logs into ordered lists, sorted in the way you want them to be using aggregations. In the example here, we’re taking a look at the requests most commonly sent to our Nginx web server:

[Screenshot: Data Table of top requests]
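The same top-N breakdown can be pulled straight from Elasticsearch with a terms aggregation; again, url.original is the ECS field name used by the Filebeat module and may differ in your setup:

curl -X GET "localhost:9200/filebeat-*/_search?size=0" -H 'Content-Type: application/json' -d'
{
  "aggs": {
    "top_requests": { "terms": { "field": "url.original", "size": 10 } }
  }
}
'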

Errors over time

Remember, we’re also shipping Nginx error logs. We can use another Bar Chart visualization to give us a simple indication of the number of errors reported by our web server:

[Screenshot: errors over time]

Note, I’m using a search filter for type:nginx_error to make sure the visualization depicts only the number of Nginx errors.

These were just some examples of what can be done with Kibana, but the sky’s the limit. Once you have your visualizations lined up, combine them into one comprehensive dashboard that provides you with a nice operational overview of your web server.

[Screenshot: Nginx monitoring dashboard]

Endnotes

Logz.io users can install the dashboard above, and many other Nginx visualizations and dashboards, using ELK Apps — a free library of pre-made dashboards for various log types, including Nginx of course. If you don’t want to build your own dashboard from scratch, simply search for “nginx” in ELK Apps and install whichever dashboard you fancy.

To stay on top of errors and other performance-related issues, a more proactive approach requires alerting, functionality that is not available in vanilla ELK deployments. Logz.io provides a powerful alerting mechanism that enables you to stay on top of events as they take place in real time. Learn more about this here.

