
Monitoring Google Cloud Platform with Stackdriver and Logz.io


We’re happy to announce a new integration with Google Stackdriver, allowing users to easily ship data from Google Cloud Platform into Logz.io via Google Pub/Sub! Early adopters of Google Cloud may recall that they were pretty much in the dark as far as logging their projects was concerned. Sure, they could access their virtual machines and manually grep log files, but that was pretty much it.

This changed dramatically in 2014 when Google acquired Stackdriver and integrated it as a managed service for collecting and storing logs from Google Cloud Platform services and applications. As good a tool as it is, though, Google Stackdriver falls short compared to other log management and log analysis tools in the market. The ELK Stack (Elasticsearch, Logstash, Kibana, and Beats) offers users a much more powerful experience, allowing them to perform advanced queries and build those beautiful Kibana dashboards we’re all accustomed to seeing. 

This article will explain how to integrate Stackdriver with Logz.io so you can easily monitor your Google Cloud projects using the world’s most popular open-source log management solution. The integration is designed to help you easily tap into all your Google Cloud projects and applications — it’s container-based, lightweight, and supports multiple Google Pub/Sub pipelines for more complex environments. 

Let’s take a closer look.

Prerequisites

You’ll need a few things to build the pipeline described below:

  • Docker
  • GCP SDK
  • A GCP project

Step 1: Simulating some logs

If you’ve already got logs flowing into Stackdriver — great. You can skip to the next step. If not, no worries, the following instructions will help you fake some request logs using a simple function that we’re going to run as a Cloud Function. 

In the Google Cloud console, simply open Cloud Functions and hit the Create new function button.

Select HTTP as the trigger type, and copy/paste the code below as the source code for the index.js file:

 

// Requires the "faker" package (declared in package.json below)
var faker = require('faker');

// HTTP-triggered Cloud Function that writes fake Apache-style request logs
// to stdout, where Stackdriver picks them up automatically
exports.helloWorld = (req, res) => {
  var count = req.body.count || 1;

  for (var i = 0; i < count; i++) {
    console.log(generateFakeLog());
  }

  res.status(200).send('Sent: ' + count + ' logs like ' + generateFakeLog());
};

// Builds a single fake request log line with a random IP, date, file path, and user agent
function generateFakeLog() {
  var file = '/' + faker.system.fileName() + faker.system.fileExt();
  return faker.internet.ip() + ' - - [' + faker.date.recent() + '] "GET ' + file + ' HTTP/1.1" ' + faker.internet.userAgent();
}


 

Use the following code for the package.json (for defining dependencies):

 

{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "faker": "4.1.0"
  }
}


Create the new function and use the URL under the Trigger tab to call the function – monitoring GCP with Stackdriver and Logz.io

After creating the new function, we can use the URL displayed under the Trigger tab to call the function in our browser and generate some fake logs.
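You can also call it from a terminal with curl. The URL below is only a placeholder for the trigger URL shown in the console, and the count field maps to the req.body.count value the function reads:

curl -X POST "https://REGION-PROJECT_ID.cloudfunctions.net/helloWorld" \
  -H "Content-Type: application/json" \
  -d '{"count": 50}'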


Clicking View Logs in the top-right corner of the console, we’re taken directly to Stackdriver where the logs we generated are displayed:

Click View Logs to go directly to Stackdriver where the generated logs are displayed

Step 2: Streaming to Pub/Sub

Next, we’re going to export the logs from Stackdriver to Google Pub/Sub. To do this, open the Exports tab in Stackdriver and then click Create Export.

In the pane that’s displayed on the right, name your export (a.k.a. sink) and select Google Pub/Sub as the Sink Service. If you already have a Pub/Sub topic, select it from the drop-down menu. If not, create a new one and click the Create Sink button.

Create a new Pub/Sub topic and click the Create Sink button

If you’ve already got a subscription for the topic, you can skip ahead to Step 3. If not, open the Google Pub/Sub console, and create a new subscription for the newly created topic. 
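If you prefer the terminal, you can create the subscription with the gcloud CLI from the SDK listed in the prerequisites (the names below are placeholders; use the topic you selected when creating the sink):

gcloud pubsub subscriptions create my-stackdriver-sub --topic=my-stackdriver-topic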

To make sure the pipeline is up and running, and the logs being collected by Stackdriver are streaming as expected into Pub/Sub, you can click the View Messages button at the top of the page:

Click View Messages

Step 3: Integrating Logs from Stackdriver with Logz.io

So we have data being streamed from our “application” into Stackdriver and from there into Pub/Sub. Our final step is to set up the integration with Logz.io.

We’ll start with creating and accessing a folder to hold the integration resources: 

mkdir logzio-pubsub && cd logzio-pubsub 

Next, we’re going to build a credentials file using the following command (be sure to replace my-project-id with your project ID):

 

wget https://raw.githubusercontent.com/logzio/logzio-pubsub/master/Makefile \
&& make PROJECT_ID=my-project-id

 

In the case of multiple GCP projects, repeat the process for each project and just change the project ID.
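For example, with two hypothetical project IDs this just means running the make target once per project:

make PROJECT_ID=my-first-project-id
make PROJECT_ID=my-second-project-id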

 

Our next step is to define our Pub/Sub topics and subscriptions. We will do this using a pubsub-input.yml file: 

 

sudo vim pubsub-input.yml

 

Below is an example of the configurations that need to be entered in this file:

 

logzio-pubsub:
  listener: LISTENER-HOST
  pubsubs:
  - project_id: MY-PROJECT-ID
    credentials_file: ./credentials-file.json
    token: LOGZIO-SHIPPING-TOKEN
    topic_id: MY-PUBSUB-TOPIC
    subscriptions: ["MY-PUBSUB-SUBSCRIPTION"]
    type: stackdriver

 

Let’s understand the different building blocks in this configuration:

 

  • listener – the URL of the Logz.io listener. This will differ depending on where your Logz.io account is located. For reference, check out this list of available regions.
  • project_id – the ID of your GCP project. This should be the same ID you used when creating your credentials file. 
  • credentials_file – the location of the ./credentials-file.json created above.
  • token – the Logz.io account shipping token. It can be found in the Logz.io UI, on the General page.
  • topic_id – the ID of the Pub/Sub topic.
  • subscriptions – a comma-separated list of Pub/Sub subscriptions. 
  • type – the data source type. In the case of Google Stackdriver, this will be stackdriver.

 

And yes, if you like, you can ship logs from multiple GCP projects into different Logz.io accounts — just add another block under the pubsubs section in the file with the relevant configurations, as in the sketch below.
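For illustration only, here’s a rough sketch of a pubsub-input.yml covering two hypothetical projects, each shipping to its own Logz.io account (all project IDs, topics, subscriptions, tokens, and file names below are placeholders):

logzio-pubsub:
  listener: listener.logz.io
  pubsubs:
  - project_id: my-first-project-id
    credentials_file: ./credentials-file-1.json
    token: FIRST-ACCOUNT-SHIPPING-TOKEN
    topic_id: my-first-topic
    subscriptions: ["my-first-subscription"]
    type: stackdriver
  - project_id: my-second-project-id
    credentials_file: ./credentials-file-2.json
    token: SECOND-ACCOUNT-SHIPPING-TOKEN
    topic_id: my-second-topic
    subscriptions: ["my-second-subscription"]
    type: stackdriver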

 

All that’s left for us to do now is run the Logz.io container for integrating with Pub/Sub.

We’ll pull the image with:

 

docker pull logzio/logzio-pubsub 

And then, from within the same directory in which we created all our integration resources, run the container as follows (be sure to enter the path to the local directory): 

 

docker run --name logzio-pubsub -v /logzio-pubsub/pubsub-input.yml:/logzio-pubsub/pubsub-input.yml -v /logzio-pubsub/credentials-file.json:/logzio-pubsub/credentials-file.json logzio/logzio-pubsub

 

The container will run a beats-based agent that uses the credentials and configuration file we created to set up the pipeline of logs from Stackdriver, via Pub/Sub, into Logz.io. Within a minute or two, you should begin to see logs appearing in Logz.io:

Logs from Stackdriver appearing in Logz.io
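If logs don’t show up after a couple of minutes, a quick way to see what the shipper itself is doing is to tail the container’s own output (logzio-pubsub is the name we gave the container in the docker run command above):

docker logs -f logzio-pubsub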

Endnotes

Google Stackdriver is a great tool for centrally logging across Google Cloud Platform projects and applications but requires a complementary solution to give teams full analysis power. While improving over the years, Stackdriver still lacks the querying and visualization capabilities engineers are used to using. The new integration gives Google Cloud users the option to easily hook into Logz.io’s managed ELK Stack and so enjoy the best of both worlds — Stackdriver’s native integration into Google Cloud projects and ELK’s analysis power.

 


Troubleshooting On Steroids with Logz.io Log Patterns


It’s 3 AM and your phone is ringing. 

Rubbing your eyes, you take a look at the alert you just got from PagerDuty.  

A critical service has just gone offline. Angry customers are calling support. Your boss is on the phone, demanding the issue be resolved ASAP. 

You open up your log management tool only to be faced by 5 million log messages. 

What now?

The scenario above may sound somewhat dramatic, but for engineers monitoring modern applications and systems, it is a recurring nightmare. The reason for this is simple — log data is big data. The growing volume, velocity, and variety of log data mean it’s not enough to collect, process, and store the data; you also need advanced tools to analyze it and find the needle in the haystack.

Enter Log Patterns!

Recently, we announced Log Patterns, our latest AI-powered analytics tool. 

Simply put, Log Patterns crunches millions of log messages into much smaller, manageable groups of logs. This lets you quickly cut through the noise and identify unique or unusual events as well as recurring, repetitive ones.

In just a few clicks, you will be able to identify the different bales comprising your haystack.

How does it work? Using advanced clustering algorithms, Log Patterns dissects indexed log messages into variables and constants to identify recurring patterns. These patterns are automatically associated with incoming logs as they are being ingested into the system and are displayed, in real-time, within Kibana:

The machine learning algorithms used to dissect the logs work continuously to analyze the indexed data to ensure existing patterns are perfected and new patterns are added.

For each pattern identified, you can see how many log messages are associated with the pattern, their ratio out of the total data logged, and the exact pattern they follow.

By default, the noisiest patterns are displayed first, but you can sort the list of identified patterns by count and ratio.

The makings of a pattern

Naturally, patterns differ from one another. Some will contain only constants, others constants and variables. 

Constants are displayed as is, whereas variables are categorized (e.g. Number, Ip, Url, Date) and highlighted. If the type of a specific variable was not identified, it will be marked with a colored wildcard expression: .*

 

Here are a few examples.

The following logs follow a very basic repetitive pattern:

  Account 358 was created , waiting for kibana indexes to be created
  Account 1265 was created , waiting for kibana indexes to be created
  Account 871 was created , waiting for kibana indexes to be created
  Account 1291 was created , waiting for kibana indexes to be created
  Account 309 was created , waiting for kibana indexes to be created

 

The corresponding pattern would be displayed as follows:

  Account Number was created , waiting for kibana indexes to be created

 

The following AWS ELB logs also follow a recurring pattern:

  2019-10-12T21:59:57.543344Z production-site-lb 54.182.214.11:6658 172.31.62.236:80 0.000049 0.268097 0.000041 200 200 0 20996 "GET http://site.logz.io:80/blog/kibana-visualizations/ HTTP/1.1" "Amazon CloudFront" - -
  2019-10-12T21:59:55.518955Z production-site-lb 54.182.214.71:41421 172.31.62.236:80 0.000054 0.104063 0.000029 200 200 0 1 "GET http://site.logz.io:80/wp-admin/admin-ajax.php HTTP/1.1" "Amazon CloudFront"
  2019-10-12T21:59:55.268688Z production-site-lb 54.182.214.71:44944 172.31.62.236:80 0.000042 0.121069 0.000037 200 200 0 1 "GET http://site.logz.io:80/wp-admin/admin-ajax.php HTTP/1.1" "Amazon CloudFront" - -
  2019-10-12T21:59:52.186208Z production-site-lb 54.182.214.11:6658 172.31.62.236:80 0.000051 0.248411 0.000041 200 200 0 20996 "GET http://site.logz.io:80/blog/kibana-visualizations/ HTTP/1.1" "Amazon CloudFront" 
  2019-10-12T21:59:51.803543Z production-site-lb 54.182.214.11:21170 172.31.62.236:80 0.000023 0.00079 0.000017 200 200 0 73831 "GET http://site.logz.io:80/wp-content/uploads/2015/12/kibana-visualizations.png HTTP/1.1" "Amazon CloudFront"

 

In this case, the pattern is comprised of two constants and a series of variables, all highlighted:

  Date production-site-lb Ip:Number Number Number Number Number .* Url HTTP/Number" 

 

A production environment produces thousands of these log messages, and Log Patterns condenses them all into a single pattern.

Speeding up troubleshooting

Going back to the doomsday scenario above, sifting through millions of logs when trying to troubleshoot an issue in production is a daunting task. Sure, if you know exactly what you’re looking for, you could enter a beautifully-constructed Kibana query. But often enough, you will not know exactly what to query.

With Log Patterns, those millions of log messages are suddenly condensed into a much smaller group of patterns. 

You can then discard the patterns that you recognize as being irrelevant to your investigation using the filter out option. These filters are added at the top of the Discover page, just like any other Kibana filter. 

Alternatively, you could reorder the list to look at patterns that are unique. A unique pattern could indicate what actually transpired, and filtering for the pattern will move you over to the Logs tab automatically, displaying the logs associated with the pattern.

Opening up the log, you can then begin understanding the specific event that the log is reporting on. On top of that, if you’ve structured your logs correctly, you will be able to track the root cause to the actual line in your code generating the log. To help understand the context, you can click View surrounding documents to see all the logs generated before and after the log.

Optimizing your logging costs

After reviewing your patterns, you may identify logs that are especially noisy but also totally unwarranted. Logs cost money, and using Log Patterns you will be able to identify the component in your environment generating these log messages. Remove the lines of code generating them and you will reduce the overall operational costs of your logging pipelines.

Always improving!

The machine learning algorithms used to dissect logs and identify recurring patterns continuously analyze indexed logs to perfect existing patterns and add new ones.

AIOps to the rescue

Monitoring modern IT environments is first and foremost a big data challenge. Without advanced analysis tools to help them easily see through millions of logs, the engineers tasked with keeping their company’s applications up, running, and performant at all times are simply ill-equipped to do their job effectively.

To help engineers overcome this big data challenge, Logz.io designed a suite of AIOps tools. Cognitive Insights™ was the first tool in this suite, followed by Application Insights™. Log Patterns is the latest addition, using advanced clustering techniques to transform big data into small data. 

So what are you waiting for? Log Patterns is available now in all our plans, at no extra charge. You can sign up for a free trial here.

Enjoy!

 

Logging Kubernetes on AKS with the ELK Stack and Logz.io


Hosted Kubernetes services such as AKS were introduced to help engineers deal with the complexity involved in deploying and managing Kubernetes clusters. They do not cover, however, the task of monitoring Kubernetes and the services running on it. 

While some of the hosted Kubernetes services offer logging solutions, they do not offer all the functionality, flexibility, and user experience expected from a modern log management solution. The ELK Stack, or more precisely the EFK Stack, fills in these gaps by providing a Kubernetes-native logging experience: fluentd (running as a daemonset) to aggregate the different logs from the cluster, Elasticsearch to store the data, and Kibana to slice and dice it.

This article will show how to hook up an AKS Kubernetes cluster with Logz.io’s ELK Stack. You’ll need two things to follow the steps outlined below — an existing Kubernetes cluster on AKS and a Logz.io account.

Step 1: Deploy a demo application on AKS

To simulate some load and generate log data, we’ll deploy a demo application on our AKS cluster. For this purpose, we’ll use the Azure voting app which includes two Kubernetes Services – a Redis instance and an external service for accessing the app. 

If you already have services running on AKS, feel free to skip directly to the next step.

So first, we’ll create a new manifest file containing the specs of deploying the app:

vim azure-vote.yaml

We’ll use this YAML file for deploying the app:

apiVersion: apps/v1
kind: Deployment
metadata:
 name: azure-vote-back
spec:
 replicas: 1
 selector:
   matchLabels:
     app: azure-vote-back
 template:
   metadata:
     labels:
       app: azure-vote-back
   spec:
     nodeSelector:
       "beta.kubernetes.io/os": linux
     containers:
     - name: azure-vote-back
       image: redis
       resources:
         requests:
           cpu: 100m
           memory: 128Mi
         limits:
           cpu: 250m
           memory: 256Mi
       ports:
       - containerPort: 6379
         name: redis
---
apiVersion: v1
kind: Service
metadata:
 name: azure-vote-back
spec:
 ports:
 - port: 6379
 selector:
   app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: azure-vote-front
spec:
 replicas: 1
 selector:
   matchLabels:
     app: azure-vote-front
 template:
   metadata:
     labels:
       app: azure-vote-front
   spec:
     nodeSelector:
       "beta.kubernetes.io/os": linux
     containers:
     - name: azure-vote-front
       image: microsoft/azure-vote-front:v1
       resources:
         requests:
           cpu: 100m
           memory: 128Mi
         limits:
           cpu: 250m
           memory: 256Mi
       ports:
       - containerPort: 80
       env:
       - name: REDIS
         value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
 name: azure-vote-front
spec:
 type: LoadBalancer
 ports:
 - port: 80
 selector:
   app: azure-vote-front

To deploy the app, we’ll use:

kubectl apply -f azure-vote.yaml

We’ll get the following output:

deployment "azure-vote-back" created
service "azure-vote-back" created
deployment "azure-vote-front" created
service "azure-vote-front" created

To verify the app is working, we’re going to access it in our browser.  As part of the deployment process, a Kubernetes service exposes the app’s front end to the internet. This might take a minute or two, and we can follow the status of the deployment with:

kubectl get service azure-vote-front --watch

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.232.187   52.188.177.27   80:32515/TCP   1m

As soon as an external IP is available for the service, we’ll simply paste it in our browser:

As part of the deployment process, a Kubernetes service exposes the Azure app’s front end to the internet.
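You can also check the frontend from a terminal before opening the browser. The IP below is the example EXTERNAL-IP from the output above, so substitute your own:

curl -I http://52.188.177.27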

Step 2: Ship Kubernetes logs to Logz.io

Our cluster and the app deployed on it are now generating a mix of log data, all useful for gaining insight into how our environment is performing. Our next step is to ship this log data into Logz.io.
To do this, we’ll use a daemonset that runs a fluentd pod on each node in our Kubernetes cluster.
Our first step is to store Logz.io credentials as a Kubernetes secret — our Logz.io account’s shipping token and Logz.io’s listener host. We can find the shipping token in the Logz.io UI, and the listener host depends on our account’s region, for example, listener.logz.io or listener-eu.logz.io.

 

Once we have these two credentials, we’ll replace the placeholders in the following kubectl command and execute it:

kubectl create secret generic logzio-logs-secret \
--from-literal=logzio-log-shipping-token='<<SHIPPING-TOKEN>>' \
--from-literal=logzio-log-listener='https://<<LISTENER-HOST>>' \
-n kube-system

This message is displayed:

secret/logzio-logs-secret created

Next, we’ll deploy the daemonset with:

kubectl apply -f https://raw.githubusercontent.com/logzio/logzio-k8s/master/logzio-daemonset-rbac.yaml

And the output:

serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd created
clusterrolebinding.rbac.authorization.k8s.io/fluentd created
daemonset.extensions/fluentd-logzio created

We can verify that the Logz.io fluentd pods are running with:

kubectl get pods -n kube-system | grep logzio

Here we see three pods, one per node:

fluentd-logzio-4bskq              1/1     Running     0     58s
fluentd-logzio-dwvmw              1/1     Running     0     58s
fluentd-logzio-gg9bv              1/1     Running     0     58s

And within a minute or two, we should see logs flowing into Logz.io from our Kubernetes cluster:

Logs flowing into Logz.io from a Kubernetes cluster

Step 3: Analyzing AKS logs in Logz.io

Great, we’ve built a logging pipeline from our Kubernetes cluster on AKS to Logz.io. What next? How do we make sense of all the log data being generated by our cluster?

Container logs are shipped in JSON format using Docker’s json-file logging driver, which means they are parsed automatically by Logz.io. This makes it much easier to slice and dice the data with the analysis tools Logz.io provides.

Still, some messages might require some extra parsing, in which case we would need to tweak the fluentd configuration or simply ping Logz.io’s support team for help.
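To give a sense of what such a tweak might look like, here’s a minimal fluentd filter sketch that parses a JSON payload embedded in the log field of container logs. It is an illustration only, not part of the default logzio-k8s daemonset configuration, and assumes the application writes JSON to stdout:

<filter kubernetes.**>
  @type parser
  key_name log        # parse the raw container log line
  reserve_data true   # keep the original fields alongside the parsed ones
  <parse>
    @type json
  </parse>
</filter>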

We can now query the logs in a number of ways. You could perform a simple free-text search looking for errors, but Kibana offers much more advanced filtering and querying options that will help you find the information you’re looking for.

For example, here we’re using the filter box to easily look at the logs generated by our frontend voting service:

You could perform a simple free-text search looking for errors in AKS logs but Kibana offers much more advanced filtering and querying options
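For instance, assuming the daemonset enriches each log with the usual Kubernetes metadata fields (exact field names can vary between setups), a Lucene-style query for error logs from the frontend pods might look like:

kubernetes.container_name: "azure-vote-front" AND log: *error*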

Logz.io also provides advanced machine learning capabilities that help reveal events that otherwise would have gone unnoticed within the piles of log messages generated in our environment. 

In the example below, Cognitive Insights has flagged an issue with etcd, the Kubernetes key-value store used for storing cluster data. Opening the event reveals contextual information that helps us understand whether there is a real issue here or not:

Cognitive Insights flags an issue with etcd, the Kubernetes key-value store used for storing cluster data

If you want to see a live feed of the cluster logs, either in their raw format or in parsed format, you can use Logz.io’s Live Tail page. Sure, you could use the kubectl logs command to tail logs, but in an environment consisting of multiple nodes and an even larger number of pods, this approach is far from efficient.

A live feed of the cluster logs, in raw or parsed format, in Logz.io’s Live Tail

Step 4: Building a monitoring dashboard

Kibana is a great tool for visualizing log data. You can create a variety of different visualizations that help you monitor your Kubernetes cluster — from simple metric visualizations to line charts and geographical maps. Below are a few basic examples.

Number of pods

Monitoring the number of pods running will show you if the number of nodes available is sufficient and if they will be able to handle the entire workload in case a node fails. A simple metric visualization can be created to help you keep tabs on this number:

Simple metric visualization in Kibana

Logs per pod 

Monitoring noisy pods or a sudden spike in logs from a specific pod can indicate whether an error is taking place. You could create a bar chart visualization to monitor the log traffic:

Create a bar chart visualization to monitor the log traffic from AKS
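Under the hood, a bar chart like this boils down to a date histogram split by pod name. A rough sketch of the equivalent Elasticsearch aggregation, assuming @timestamp and kubernetes.pod_name fields, would be:

GET /_search
{
  "size": 0,
  "aggs": {
    "logs_over_time": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "5m" },
      "aggs": {
        "per_pod": {
          "terms": { "field": "kubernetes.pod_name", "size": 10 }
        }
      }
    }
  }
}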

Once you have all your visualizations lined up, you can add them to a dashboard that provides a comprehensive picture of how your cluster is performing.

Integrated Kibana dashboard for AKS logs shipped to Logz.io

Endnotes

Logging Kubernetes is challenging. There are multiple log types to make sense of and a large volume of them to handle. These logs need to be collected, processed, and stored. That’s where centralized logging systems like Logz.io come into the picture.

The combination of AKS with the analysis tools provided by the ELK Stack and Logz.io can help simplify not only the deployment and management of your Kubernetes cluster but also troubleshooting and monitoring it.

Speeding Up Security Investigation with Logz.io Threat Intelligence


Cloud, microservices, Kubernetes — all these bleeding-edge technologies revolutionizing the way applications are built and deployed are also a huge security headache. Modern IT environments are comprised of more and more components and layers, each generating growing amounts of data.

In most organizations, more data is a double-edged sword. On the one hand, it gives teams more visibility into their environment. On the other hand, it also means that these teams will most likely be dealing with more security events. 

Since more data sources also mean new vulnerabilities and attack vectors, the engineer or security analyst tasked with securing these environments faces not only an overwhelming amount of security alerts, a large percentage of which are false-positives, but also an ever-evolving threat landscape. 

Needless to say, this complexity slows down and impedes security investigations, which is why modern SIEM solutions seek to alleviate some of this pain by providing teams with threat intelligence — additional information that can be used to understand the threats currently targeting an organization and thus make faster and more informed security decisions.

What is Threat Intelligence?

Gartner defines threat intelligence as follows:

“Threat intelligence (TI) is evidence-based knowledge — including context, mechanisms, indicators, implications, and actionable advice — about an existing or emerging menace or hazard to IT or information assets. It can be used to inform decisions regarding the subject’s response to that menace or hazard.”

Put simply, threat intelligence helps the engineer or analyst make faster and more informed decisions by providing information about who’s attacking, what their motivation is, and what to look for. 

An example could be a malicious IP address recorded in a log message generated by a web server. This log is collected and stored together with the other millions of log messages generated by the environment and without threat intelligence, it would go unnoticed until it’s too late. Solutions like Logz.io Cloud SIEM offer advanced threat intelligence capabilities that will automatically identify the IP in the log as being malicious and flag it for further investigation.

Threat Intelligence in Logz.io Cloud SIEM

Logz.io Cloud SIEM provides simple threat detection and analytics built on top of the ELK stack. It’s fast, easy to use, and open-source-native to reduce threat detection times and improve a team’s security posture. One of the ways Logz.io’s Cloud SIEM helps speed up investigation times is with threat intelligence.

Logz.io Cloud SIEM automatically correlates the data sent to the system from your environment with multiple public threat feeds such as blocklist.de and alienvault reputation. If your logs are found to contain an IOC (an indication of compromise), the threat is recorded and displayed on a dedicated Threats page:


From this page, you can investigate further by clicking on a malicious IP and drilling down into the rabbit hole.

Conveniently, the threat feeds used to correlate with your data can be viewed on the new Threats → Threat intelligence feeds page:

Threats → Threat intelligence feeds page

Each feed listed on the page displays the IOC (Indication of Compromise) type, a confidence score, a URL for investigating further, and the date of the last sync. 

Currently, Logz.io Cloud SIEM supports three IOC types — IP, DNS and URL. The confidence score is a rating given by Logz.io’s security analysts which indicates a level of accuracy for each feed, based on their experience investigating data.  Feeds are updated and synced once a day. 

The page can also be used to perform research on potential IOCs. Simply enter an IP, DNS or URL you suspect might be malicious to search across the feeds. This could prove to be useful in case you’re investigating IOCs in historical logs or logs not currently being shipped into Logz.io.  

To keep leaders, stakeholders, and other users informed of the latest threats in your environment, you can create a report (in essence, a snapshot of the Threats page) on a set schedule.

Why is threat intelligence important?

Organizations have an increasingly low tolerance for risk. Downtime or breaches are simply not an option. The teams tasked with securing modern IT environments, therefore, require security solutions that facilitate smarter and more efficient investigation workflows instead of manually triaging false-positives.  

Threat intelligence gives teams the information they need to make faster and more informed security decisions. It helps keep teams informed of the latest threats and be more proactive about how they investigate threats. Logz.io Cloud SIEM provides users with automatic and up-to-date threat intelligence enabling you to identify and mitigate new and emerging attacks more quickly. 

Grafana Templates for Elasticsearch, Prometheus and InfluxDB

Grafana is everywhere. Almost every DevOps team out there is currently in the process of creating a proof of concept enabling them to implement Grafana into their stack—if they have not already implemented it, that is. Teams are eager to employ Grafana’s highly effective visualizations and dashboards that monitor and track services’ functionality and performance.  […]

Logz.io Enhancements and Changes with Kibana 7

We are happy to inform you that we are upgrading our user interface to support Kibana 7 for Logz.io! Kibana 7 offers users a long list of UI and UX enhancements that will make monitoring and troubleshooting your environment a much simpler and nicer experience. These enhancements include a cross-app dark theme, a new time […]

Kubernetes Observability with Logs and Metrics in Logz.io

Yesterday, we announced the beta release of Logz.io Infrastructure Monitoring — our Grafana-based monitoring solution, and the planned release of a Jaeger-based tracing solution. These additions to our platform complement our ELK-based Log Management product, together constituting what is the world’s only open source-based observability platform for monitoring, troubleshooting and securing distributed cloud workloads.  This […]

10 Elasticsearch Concepts You Need to Learn

Getting acquainted with the terminology is one of the first things you’re going to have to do when starting out with the ELK Stack.

Getting Started with Kubernetes using MicroK8s

MicroK8s is the easiest way to set up a single-node Kubernetes cluster. We run through some basic steps for installation, enabling add-ons, and logging.

What Is a Service Mesh, and Why Do You Need One?

“Service mesh” is an umbrella term for products that seek to solve the problems that microservices’ architectures create. These challenges include security, network traffic control, and application telemetry. The resolution of these challenges can be achieved by decoupling your application at layer five of the network stack, which is one definition of what service meshes […]

2019 at a Glance: Logz.io Key Announcements

At AWS re:Invent recently, we excitedly announced Logz.io Infrastructure Monitoring – our new Grafana-based monitoring product! This product is the third pillar of our cloud observability platform, together with Logz.io Log Management and Logz.io Cloud SIEM. This announcement comes at the end of a very busy 2019 during which we introduced a series of major enhancements […]

The Cost of Doing the ELK Stack on Your Own

The cost of running your own deployment and the missing enterprise-grade features make a convincing case for choosing a cloud-hosted ELK Stack platform.

Prometheus and Grafana: A Match Made in Heaven?

Prometheus and Grafana are two monitoring tools that, when combined, provide all of the information DevOps and Dev teams need to build and maintain applications. Prometheus collects many types of metrics from almost every variety of service written in any development language, and Grafana effectively queries, visualizes, and processes these metrics. Together, these two tools […]

How to Monitor Cloud Migration and Data Transfer

Cloud migration is more than just a buzzword. According to several reports released at the beginning of 2019, almost 70% of enterprise organizations are moving their applications and infrastructure from local, self-managed hardware to one of the big cloud providers. Multiple case studies have been written about companies like Spotify, Dropbox, Gitlab, and Waze, all […]

Logging Redis with ELK and Logz.io

Learn how to ship Redis Logs to ELK and Logz.io in order to optimize its performance and troubleshoot issues that impact application stability.
