Automating Log Management

Steve Waterworth
May 9, 2025

Grepr slips in like a shim between the log shippers and the aggregation backend. With a small configuration change, log shippers forward their logs to Grepr instead of the usual aggregation backend. Grepr automatically analyses each log entry and identifies similarity across all messages in real time. Noisy messages are summarised while unique messages are passed straight through. Nothing is discarded: all messages received by Grepr are persisted in low-cost storage.

Semantic Pipeline

As log messages arrive in Grepr they are processed by a pipeline that parses them into Grepr's internal structure. Each log message has the following structure (a minimal sketch follows the list):

  • ID: Globally unique identifier
  • Received Timestamp: When Grepr received the message
  • Event Timestamp: The timestamp from the log message
  • Tags: A set of key-value pairs used to filter and route messages e.g. host, service, environment, etc.
  • Attributes: Structured data and fields extracted from the message.
  • Message: The text of the message.
  • Severity: The OpenTelemetry standard for message severity: 1-4 TRACE, 5-8 DEBUG, 9-12 INFO, 13-16 WARN, 17-20 ERROR and 21-24 FATAL. Either derived from a message field or parsed out of the message text.
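
For illustration, here is a minimal Python sketch of such a record together with the OpenTelemetry severity-number mapping; the field names mirror the list above, while the class and function names are assumptions for the example rather than Grepr's actual internals.

    from dataclasses import dataclass, field
    from datetime import datetime

    def severity_text(severity_number: int) -> str:
        """Map an OpenTelemetry severity number (1-24) to its level name."""
        if not 1 <= severity_number <= 24:
            raise ValueError("severity number must be between 1 and 24")
        levels = ["TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL"]
        return levels[(severity_number - 1) // 4]

    @dataclass
    class LogRecord:
        id: str                       # globally unique identifier
        received_timestamp: datetime  # when Grepr received the message
        event_timestamp: datetime     # timestamp taken from the log message
        tags: dict = field(default_factory=dict)        # e.g. host, service, environment
        attributes: dict = field(default_factory=dict)  # structured fields extracted from the message
        message: str = ""             # the text of the message
        severity: int = 9             # OpenTelemetry severity number (9-12 is INFO)

    # severity_text(14) -> "WARN", severity_text(21) -> "FATAL"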

Masking

Once the message is in a standard form, the real work can begin. Masking automatically identifies and masks out frequently changing values such as numbers, UUIDs, timestamps, IP addresses, etc. This significantly improves the efficiency of our machine learning by normalising variable data into consistent patterns.
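
As a rough illustration, the sketch below masks the kinds of values mentioned above using Python regular expressions; the patterns and placeholder tokens are assumptions for the example, not Grepr's actual masking rules.

    import re

    # Mask the most specific patterns (UUIDs, timestamps, IP addresses) before
    # the generic number pattern, so placeholders do not clobber each other.
    MASKS = [
        (re.compile(r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
                    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"), "<uuid>"),
        (re.compile(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?Z?"), "<timestamp>"),
        (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<ip>"),
        (re.compile(r"\b\d+\b"), "<number>"),
    ]

    def mask(message: str) -> str:
        """Replace frequently changing values with stable placeholders."""
        for pattern, placeholder in MASKS:
            message = pattern.sub(placeholder, message)
        return message

    # mask("user 42 logged in from 10.0.0.7 at 2025-05-09T10:15:00Z")
    # -> "user <number> logged in from <ip> at <timestamp>"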

Clustering

Grepr uses sophisticated similarity metrics to group messages into patterns. The similarity threshold determines how closely messages must match to be considered part of the same pattern.
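
The exact similarity metrics are not spelled out here, so the following Python sketch only illustrates the idea: masked messages are compared token by token, and a message joins an existing pattern only when its similarity to the pattern's representative clears the threshold.

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Token-level similarity between two masked messages, in [0, 1]."""
        return SequenceMatcher(None, a.split(), b.split()).ratio()

    def cluster(messages, threshold=0.8):
        """Greedily assign each masked message to the first pattern it is close to."""
        patterns = []  # list of (representative message, members) pairs
        for msg in messages:
            for representative, members in patterns:
                if similarity(representative, msg) >= threshold:
                    members.append(msg)
                    break
            else:
                # No existing pattern was close enough; start a new one.
                patterns.append((msg, [msg]))
        return patterns

    # cluster(["request to <ip> failed with code <number>",
    #          "request to <ip> failed with error <number>",
    #          "cache warmed in <number> ms"])
    # -> two patterns: the two request failures together, the cache message on its own.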

Sampling

Once a pattern reaches a threshold, Grepr will either stop forwarding the messages matching that pattern or forward only a sampled subset of them. If the pattern has been configured to be sampled, Grepr uses a logarithmic sampling algorithm: in effect, the k-th message forwarded for a pattern is sent when the running count reaches base^k. With the base set to 2 and the deduplication threshold set to 4, the 4 messages already sent before the threshold was reached count as the first four samples (2^4 = 16), so the next sample is forwarded when the count reaches 2^5 = 32, then 64, 128, 256 and so on.
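
Here is a minimal Python sketch of that behaviour, assuming the k-th message forwarded for a pattern goes out once the running count reaches base^k; the class and method names are illustrative, not Grepr's API.

    class LogSampler:
        """Forward the first `threshold` messages, then sample logarithmically."""

        def __init__(self, base: int = 2, threshold: int = 4):
            self.base = base
            self.threshold = threshold
            self.count = 0      # messages seen for this pattern
            self.forwarded = 0  # messages actually forwarded

        def should_forward(self) -> bool:
            self.count += 1
            # Below the deduplication threshold every message passes through.
            if self.forwarded < self.threshold:
                self.forwarded += 1
                return True
            # Afterwards, forward only when the count reaches the next power of the base.
            if self.count >= self.base ** (self.forwarded + 1):
                self.forwarded += 1
                return True
            return False

    sampler = LogSampler(base=2, threshold=4)
    forwarded_at = [n for n in range(1, 300) if sampler.should_forward()]
    # forwarded_at == [1, 2, 3, 4, 32, 64, 128, 256]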

Summarising

At the end of each time slot, Grepr generates a concise summary for each clustered pattern, including the following extra attributes (a sketch of such a summary follows the list):

  • grepr.patternId: Unique identifier for the pattern
  • grepr.rawLogsUrl: Direct link to view all raw messages for this pattern
  • grepr.repeatCount: Count of the number of messages aggregated
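
For illustration only, a summary built this way might look like the sketch below; the grepr.* keys come from the list above, while the helper name, pattern id, URL and count are made up for the example.

    def summarise(pattern_id: str, pattern_text: str, repeat_count: int, raw_logs_url: str) -> dict:
        """Build a summary event for one clustered pattern at the end of a time slot."""
        return {
            "message": pattern_text,  # the masked pattern, standing in for its members
            "attributes": {
                "grepr.patternId": pattern_id,
                "grepr.rawLogsUrl": raw_logs_url,
                "grepr.repeatCount": repeat_count,
            },
        }

    # summarise("pattern-0001", "request to <ip> failed with code <number>", 1732,
    #           "https://example.invalid/raw-logs?pattern=pattern-0001")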

Exceptions

The machine learning in the semantic pipeline does a very good job of significantly reducing the volume of log data sent through to the aggregation backend without filtering out any essential data. However, there are always exceptions. Fortunately, there is a rules engine that works alongside the machine learning and allows you to configure and fine-tune which messages are filtered and which are allowed to pass straight through.
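
Grepr's rule syntax is not shown in this post, so the sketch below is purely conceptual: each rule pairs a condition with an action, and the first matching rule decides whether a message bypasses deduplication or is filtered, with everything else falling through to the machine-learning pipeline.

    # Conceptual sketch of a rules engine that runs alongside the ML pipeline.
    # The conditions and action names are illustrative, not Grepr's actual syntax.
    RULES = [
        # Always pass errors and above straight through to the backend.
        (lambda record: record["severity"] >= 17, "pass_through"),
        # Never forward noisy health-check chatter (it is still persisted).
        (lambda record: "healthcheck" in record["message"], "filter"),
    ]

    def apply_rules(record: dict, default: str = "deduplicate") -> str:
        """Return the action for a record; the first matching rule wins."""
        for condition, action in RULES:
            if condition(record):
                return action
        return default

    # apply_rules({"severity": 18, "message": "db connection failed"})     -> "pass_through"
    # apply_rules({"severity": 9, "message": "GET /healthcheck HTTP/1.1"}) -> "filter"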

All The Data When You Need It

When using Grepr to automatically manage log data, it's like having an engineer look at each message and decide which ones are useful and which ones may not be. Most log messages are only useful when investigating an issue; only a small subset is needed to verify that everything is working as it should. Why pay to have every message indexed and stored by your log aggregation backend? With Grepr you can keep all messages in low-cost storage, where they can be queried for reporting or to feed AI analysis. When an incident occurs, the relevant log messages can be quickly backfilled into the log aggregation backend to aid in the restoration of service. With this strategy, you get the benefits of automated log reduction without changing any of the configuration, analytics or dashboards your team has built on its logging tools.
