Log Analytics is DEAD
December 03, 2015

Albert Mavashev
jKool

Log Analytics is DEAD. Did I really say that?? Yes, I did. Log analytics is the process of investigating logs in the hope of deriving actionable information that might be useful to the business. Many log analytics tools are used to gain visibility into web traffic, security, application behavior, etc. But how valuable and practical is log analytics in reality?

One basic precondition for log analytics is that the information to be analyzed must already be in log files, and here lies the basic problem:

In order to derive useful analytics from logs one must have proper logging instrumentation and have it enabled everywhere, all the time.

Not only is this approach impractical and very expensive, except in a few limited cases, but it is also burdensome, imposing a significant performance overhead on the systems that produce these logs.

One must log gigabytes and gigabytes of data, store it all, and then analyze it in order to detect a problem. I would call this a brute-force approach. Like most brute-force approaches, it is expensive, slow and unwieldy. In many cases log analytics is used to catch occasional errors or exceptions. Do we really need all these logs to catch a few outliers?
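To make the brute-force pattern concrete, here is a minimal sketch in Python. The file name, log format and error keywords are assumptions for illustration, not taken from any particular tool; the point is simply that every line ever logged must be read and inspected just to surface a handful of outliers.

```python
import re

# Hypothetical: scan an entire log file (possibly gigabytes)
# just to find the handful of lines that indicate a problem.
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|Exception)\b")

def find_outliers(path):
    """Read every line ever logged to surface a few errors."""
    outliers = []
    with open(path, "r", errors="replace") as log:
        for line_no, line in enumerate(log, start=1):
            if ERROR_PATTERN.search(line):
                outliers.append((line_no, line.rstrip()))
    return outliers

# Every line is read and inspected, whether or not it matters.
for line_no, line in find_outliers("app.log"):
    print(f"{line_no}: {line}")
```

The cost of this scan is proportional to everything that was logged, not to the handful of problems actually found.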

Log analytics quickly turns into a Big Data problem – store and analyze everything, everywhere, all the time. Is that really needed? Maybe, or maybe not …

Simple Example

You deploy log analytics and it tells you that you've got 100 errors or exceptions in the past hour. Typically, you will want to investigate this and start with a specific exception.

Your next question would be: “Is what I am looking at noise, or something that requires attention?” Then you will ask “what else happened?” and “why?”. The series of questions you would ask might include the following:

■ What was my application doing?

■ What was the response time?

■ What was CPU, memory utilization?

■ What were the I/O rates and network utilization?

■ What was Java GC doing?

■ What other abnormal conditions occurred that I should be looking at?

There are so many variables: too many to look at and too much to analyze.

What do you do? Unfortunately, this is where log analytics stops; you have to jump elsewhere. The path to root cause becomes lengthy and painful. You may know that there is a problem, but why you have the problem is, in many cases, not clear.

We have all this data (big data), yet we don’t know what it means or where to look to find meaning. Of course, one can say that you can parse out the log entries and extract metrics. But who will write the parsers? Who maintains the rules? Who writes the complex regular expressions? And what if the required metrics are not in the log files? In most cases they won’t be.
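As a sketch of why this is painful, consider the kind of regular expression someone has to write and maintain just to pull a single metric out of free-form log text. The log line format below is invented for illustration; change the format even slightly and the parser silently stops matching.

```python
import re

# Hypothetical log format:
#   2015-12-03 10:15:42,123 INFO  [order-svc] GET /checkout 200 348ms
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s+"
    r"(?P<level>[A-Z]+)\s+\[(?P<service>[^\]]+)\]\s+"
    r"(?P<method>[A-Z]+)\s+(?P<path>\S+)\s+"
    r"(?P<status>\d{3})\s+(?P<elapsed>\d+)ms$"
)

def extract_response_time(line):
    """Return (path, elapsed_ms), or None if the line doesn't match."""
    m = LINE_RE.match(line)
    if not m:
        return None  # format drifted, a field moved, or it was never logged
    return m.group("path"), int(m.group("elapsed"))

print(extract_response_time(
    "2015-12-03 10:15:42,123 INFO  [order-svc] GET /checkout 200 348ms"
))  # ('/checkout', 348)
```

And if the response time was never written to the log in the first place, no regular expression will conjure it up.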

The biggest problem with log analytics is that whatever is to be analyzed must always have been logged. You need to know in advance what information you will need for root cause. How often do you know what you need in advance? The problem is precisely what you don’t know, have not thought about, did not instrument and did not log: it is unlikely you will have the information you need.

Customers don’t want log analytics; customers want solutions to their problems. So what do I propose? I think log analytics is really morphing into a larger discipline.

The Post Log Analytics World

It is Application Analytics: a discipline that combines logs, metrics, transactions, topology, changes and more with machine learning techniques, where asking about quality of service, application performance, and business and IT KPIs is a click away.

This approach must be combined with smart instrumentation, heuristics and even crowd-sourced knowledge that point to anomalies, suppress noise and reveal important attributes without constantly collecting terabytes of data.
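As a rough sketch of that direction (not any vendor’s actual implementation), here is a toy streaming heuristic in Python: a running mean and standard deviation maintained with Welford’s algorithm, with an invented threshold and warm-up period, that reports only anomalous measurements instead of shipping every raw data point.

```python
import math

class AnomalyDetector:
    """Flag points far from the running mean instead of storing everything."""

    def __init__(self, threshold=3.0, warmup=30):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0               # running sum of squared deviations
        self.threshold = threshold  # z-score cutoff (invented for this sketch)
        self.warmup = warmup        # samples to observe before judging

    def observe(self, value):
        # Score the new value against history *before* folding it in.
        anomalous = False
        if self.n >= self.warmup:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(value - self.mean) / std > self.threshold:
                anomalous = True
        # Welford's online update of mean and variance.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return anomalous

detector = AnomalyDetector()
response_times = [100 + (i % 7) for i in range(200)] + [950]  # one spike
for t in response_times:
    if detector.observe(t):
        print(f"anomaly: {t}ms")  # only the outlier is reported
```

Only the anomaly, plus whatever context is captured around it, needs to be stored and shipped; the steady-state noise never leaves the host.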

How do I understand what I don’t know or have not collected yet? How do I know what questions to ask?

Essentially, Application Analytics is about managing the risks lurking within application and IT infrastructures, which are inherently complex and “broken”.

Log Analytics is dead, not because it is not useful, but because it must quickly evolve to the next level.

Albert Mavashev is Chief Technology Officer at jKool.
