The adoption of cloud-native architecture and microservices is taking off, and Kubernetes is at the forefront of the technologies enabling this trend. However, while Kubernetes provides many features that make deploying and running microservices easy, there are a few additional services that need to be enabled for developers and operators to be productive and for applications to remain maintainable. Logging is one such vital service.
When it comes to logging, the Kubernetes ecosystem offers many choices for you to collect and manage logs.
This blog discusses different aspects of logging in Kubernetes, while identifying key considerations to keep in mind when developing a Kubernetes logging strategy.
Let's start by discussing the essentials of Kubernetes logging.
Types of Log Data in Kubernetes
In a Kubernetes environment, there are two main types of logs to manage:
Kubernetes logs
Kubernetes generates its own logs for components such as the API server, etcd, the scheduler and kube-proxy. These logs are useful for troubleshooting or tracking down issues related to Kubernetes itself. Most of these logs are stored on the Kubernetes master nodes in the host operating system's log directory (on most Linux distributions, that's /var/log). Logs associated with worker node components such as kubelet and kube-proxy are stored on the worker node itself. The actual location of the logs and the associated log rotation policy may depend on the vendor or toolkit you are using to manage your Kubernetes cluster.
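On a typical systemd-based node you can inspect these component logs directly; the exact unit names and paths vary by distribution and by how the cluster was installed, so treat the commands below as a sketch rather than a universal recipe:

```shell
# On a control-plane node: static-pod setups usually write component
# logs under /var/log/pods or the container runtime's log directory
sudo ls /var/log/pods

# On any node: the kubelet commonly runs as a systemd unit
sudo journalctl -u kubelet --since "1 hour ago"

# Classic file-based locations on some distributions/toolkits
sudo tail -f /var/log/kube-apiserver.log
```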
Application Logs
These are the logs from applications hosted in Kubernetes. Typically, application log data is written to the standard output (stdout) and standard error (stderr) streams inside the container (or containers) hosting the application. While this is true for most modern applications, there are a few challenges to address:
■ Some containers may run multiple processes: typically when you are porting over a large monolithic installation to Kubernetes. This may mean there are multiple process logs you want to access.
■ Even if you have a single-process container, it is possible that the process is not writing to stdout/stderr and is instead writing to a log file.
I will discuss in detail how to work around these architectural limitations in the section on the sidecar logging solution below.
Accessing Kubernetes Log Data
There are different ways to access log data in Kubernetes. The most straightforward — but also the least efficient — is to access the raw log data. For application logs, that means logging into the containers that host them and viewing the stdout and stderr streams directly from there. For Kubernetes logs, it means SSH'ing into the nodes and reading log files straight from the /var/log directory. If you are using Docker as the container runtime, the application logs are typically stored in /var/lib/docker/containers/.
Tip: Kubernetes administrators should pay particular attention to setting up a logrotate policy for container logs; if you are not careful, you may run out of disk space. Similarly, note that container logs disappear after a container is evicted from the node.
A slightly more centralized approach for accessing application logs is to use kubectl, which allows you to view logs for individual containers with a command like:
kubectl logs pod-name
This saves you from the hassle of having to access containers individually to view their logs, although it still only allows you to view log data for one pod at a time.
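A few common variations of the command make this slightly more convenient in day-to-day use (the pod, container, and label names below are placeholders):

```shell
# Stream logs as they are written
kubectl logs -f pod-name

# Logs from a specific container in a multi-container pod
kubectl logs pod-name -c container-name

# Logs from the previous (crashed) instance of a container
kubectl logs pod-name --previous

# Logs from all pods matching a label selector, in one stream
kubectl logs -l app=my-app --all-containers=true
```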
You can also use the kubectl get events command to view a list of what Kubernetes refers to as events. Kubernetes logs an event when certain actions are performed, like a container starting, or when problems are encountered, like a workload that runs out of resources.
Events provide a quick overview of cluster-level health without requiring you to dive into actual log files; the downside is that there is minimal context associated with each event, which makes it hard to use events alone to monitor cluster health and troubleshoot performance problems.
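In practice, events are usually viewed sorted by time, and can be narrowed down to warnings or to a single object (the pod name below is a placeholder):

```shell
# All events in the current namespace, newest last
kubectl get events --sort-by=.metadata.creationTimestamp

# Only warnings
kubectl get events --field-selector type=Warning

# Events for one pod, shown with additional context
kubectl describe pod pod-name
```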
Beyond these resources, Kubernetes itself provides no functionality for log aggregation, analysis or management. Nor does it store logs for you persistently. It expects you to use third-party logging solutions for those purposes.
Using Logging Aggregation with Kubernetes
The vanilla kubectl logs approach is clearly not scalable for large clusters and complex applications in production. For large-scale production environments, it is highly recommended to use a log aggregation system.
Kubernetes is compatible with a variety of third-party logging tools. If your logging solution of choice supports Linux and Linux-based applications, it almost certainly works with Kubernetes, too. Common logging systems and commercial systems have recipes to consume logs from Kubernetes.
That said, Kubernetes doesn't go out of its way to make it easy to collect log data and forward it to a third-party logging platform. Instead, you have to find a way to forward log data from your applications and/or Kubernetes itself to the logging platform you want to use to aggregate, analyze and store your log data.
Using logging agents to collect log data from Kubernetes itself is straightforward enough. Because those log files live in /var/log on the nodes, you can simply deploy logging agents directly to the nodes and collect log data from there.
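The usual deployment vehicle for such a node-level agent is a DaemonSet that mounts the host's log directories, so one agent pod runs on every node. The sketch below assumes Fluent Bit as the agent; the image tag, namespace, and mount paths are illustrative, not prescriptive:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: logging
spec:
  selector:
    matchLabels: {app: log-agent}
  template:
    metadata:
      labels: {app: log-agent}
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2   # illustrative agent image/tag
        volumeMounts:
        - {name: varlog, mountPath: /var/log, readOnly: true}
      volumes:
      - name: varlog
        hostPath: {path: /var/log}     # node's log directory
```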
For application logging, things are more complicated, because you can't deploy a conventional log agent directly inside a container to collect data from stdout and stderr. Instead, Kubernetes admins typically rely on one of three approaches for collecting application log data:
■ Node agent logging: As discussed earlier, this relies on an agent at the node level. Most common agents are now Kubernetes-aware and annotate log entries with container, pod, and namespace names, so that sorting and searching logs by pod or container name becomes easier. The agents are also deployed as Kubernetes artifacts. This is the most common approach.
■ Sidecar logging: Within each pod, you run a so-called sidecar container. The sidecar container hosts a logging agent, which is responsible for collecting log data from the other containers in the pod and forwarding it to an external log management platform or stack.
■ Direct-from-application logging: You can design your application in such a way that it streams log data directly to an external logging endpoint, without relying on an agent inside the pod to collect it. Because this approach in most cases would require changes to the application itself and the way it exposes log data, it's a less common solution.
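For the file-writing case mentioned earlier, a common sidecar pattern is to share a volume between the application container and a lightweight sidecar that tails the file to its own stdout, where the normal node-level log pipeline can pick it up. A minimal sketch, assuming the application writes to /var/log/app/app.log (the image names and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0            # hypothetical application image
    volumeMounts:
    - {name: app-logs, mountPath: /var/log/app}
  - name: log-tailer             # sidecar: re-exposes the file on stdout
    image: busybox:1.36
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
    volumeMounts:
    - {name: app-logs, mountPath: /var/log/app, readOnly: true}
  volumes:
  - name: app-logs
    emptyDir: {}                 # shared scratch volume for the log file
```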
How to Choose a Kubernetes Logging Solution
Because Kubernetes can be integrated with any mainstream, Linux-compatible logging stack, it may be hard to decide which logging solution is best. Here are some factors to consider:
■ No Lock-in: Kubernetes itself is open-source and gives you broad flexibility in designing cluster architecture. Ideally, your logging platform will also be flexible, so that it doesn't lock you into a certain technology stack or vendor ecosystem.
■ Log agent performance: If you run logging agents in sidecar containers, you add to the overall workload that your Kubernetes cluster has to support. For that reason, you'll want your logging agents to be lightweight and high-performance. Tip: certain agents are easy to configure to filter logs at the source, keeping the volume low.
■ Log aggregation and centralization: To get the most out of Kubernetes logs, you should be able not just to view logs for individual applications and components, but also to compare log data across the entire cluster. The capability to aggregate all logs into a central location is critical for this.
■ Log storage and rotation: A Kubernetes cluster that runs for years will generate a lot of logs. Since it won't be practical in many cases to store every log forever, you'll want a logging solution that makes it easy to define how long logs are stored. Ideally, you'll be able to configure these policies in a granular way so that you can keep certain types of logs longer than others if you need to. The ability to move older logs to a storage location where it costs less to keep them (such as a "cold" storage tier on a cloud storage service) is also beneficial, especially in situations where you need to keep Kubernetes logs around for years due to compliance requirements.
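The source-side filtering tip above can be illustrated with a Fluent Bit configuration fragment that drops debug-level records before they leave the node; the tag pattern and the "level" field name are assumptions about your log format, not fixed names:

```ini
# Fluent Bit grep filter: drop records whose "level" field is "debug",
# reducing shipped log volume at the source
[FILTER]
    Name     grep
    Match    kube.*
    Exclude  level debug
```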