How Jitsu Cut Logging Costs by 90% While Managing Millions of Shipments Generating 400 Logs Each

Summer Lambert
September 30, 2025
Alt text: A smiling delivery driver wearing a green and black Jitsu-branded jacket carries a stack of cardboard boxes to a doorstep. Overlaid on the image is a delivery tracking card showing "Status: Out for Delivery" with a completed progress bar, along with fields for tracking code, scheduled delivery date, and delivery address.

Jitsu is a last-mile delivery company focused on providing fast, reliable, and affordable package delivery solutions for e-commerce businesses, particularly in urban areas. Jitsu utilizes proprietary technology for route optimization, driver communication, and real-time tracking, enhancing efficiency and customer experience. Their platform is built from multiple services operating on Kubernetes in the cloud.

Years ago, Jitsu chose Datadog for its comprehensive ability to leverage metrics, logs, and traces, and the Jitsu DevOps team remains quite happy with the solution. Over the years the team has built up around 50 custom logging dashboards for monitoring their operations and is well versed in leveraging Datadog for both business and technical operational use cases. However, as the Jitsu platform and business have grown, so have the logging costs.

Jitsu uses its logging system for three core use cases: 1) real-time application troubleshooting, 2) retrospective investigations of package delivery issues, and 3) continuous compliance validation. These capabilities are foundational to their operations, and their tooling and workflows are well established around Datadog. But therein lies both the challenge and the opportunity: how could they reduce costs without disrupting their established operations?

This is when Evan Robinson, CTO at Jitsu, found Grepr. “Jitsu handles millions of shipments per month, and for each shipment, we generate 400 logs. Well over 99% of our shipments are successful; we only need to review logs to understand why a shipment has gone wrong.” That creates a strong motivation to find a clever way to strip out the non-interesting log data and save money, while still being able to use the established processes and tooling. Evan continued, “We felt that if we could find a solution that balanced between sifting out the redundant and ‘non-interesting’ logs, and could automate that process of deciding what to keep, that would be a very nice approach.”
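To make the idea of "sifting out redundant logs" concrete, here is a minimal sketch of pattern-based log aggregation: variable tokens are masked so repetitive lines collapse into one exemplar plus a count, while warnings and errors pass through untouched. This is purely illustrative; the function names and masking rules are assumptions, not Grepr's actual implementation.

```python
import re
from collections import defaultdict

def template(message: str) -> str:
    """Mask variable tokens (hex ids, numbers) so similar logs share one pattern."""
    masked = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", message)
    masked = re.sub(r"\b\d+\b", "<NUM>", masked)
    return masked

def aggregate(logs):
    """Collapse repetitive logs: keep one exemplar per pattern plus a count,
    but always pass warnings/errors through verbatim."""
    kept = []
    counts = defaultdict(int)
    exemplar = {}
    for level, message in logs:
        if level in ("WARN", "ERROR"):
            kept.append((level, message))        # interesting: keep as-is
            continue
        pat = template(message)
        counts[pat] += 1
        if pat not in exemplar:
            exemplar[pat] = (level, message)     # first occurrence survives
    for pat, (level, message) in exemplar.items():
        kept.append((level, f"{message} (x{counts[pat]})"))
    return kept

logs = [
    ("INFO", "shipment 1001 scanned at hub 7"),
    ("INFO", "shipment 1002 scanned at hub 7"),
    ("ERROR", "shipment 1003 address not found"),
    ("INFO", "shipment 1004 scanned at hub 9"),
]
print(aggregate(logs))
```

Here the three routine scan lines collapse to a single exemplar with a count of 3, while the error line survives verbatim, which is the "decide what to keep automatically" idea in miniature.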

There was one additional requirement: the log set for each shipment transaction must be archived for 13 months to meet business requirements. Jitsu therefore needed to store logs as cheaply as possible while keeping them findable, to satisfy their compliance needs.

One of the ways Jitsu troubleshoots issues: when an alert fires due to errors in a trace, engineers go to the associated logs. With Grepr reducing log volume, what is the impact on that workflow? To mitigate it, Jitsu used two features: 1) the trace sampler, which passes through full logs for a fraction of the traces Grepr sees, and 2) triggered backfills: when Datadog alerts on a trace error, Grepr automatically backfills the logs for that trace into Datadog. 99% of the time, the logs needed for troubleshooting are already in Datadog, and when they are not, they are quick and easy to find in the Grepr Data Lake.
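The two mechanisms above can be sketched in a few lines. This is a hypothetical illustration of the concepts, not Grepr's API: a deterministic trace sampler (hashing the trace id keeps every log of a sampled trace together) and a backfill controller that replays a trace's logs from the archive when an alert names that trace.

```python
import hashlib

class TraceSampler:
    """Pass through full logs for a fixed fraction of traces.
    Hashing the trace id makes the decision deterministic, so all
    logs belonging to a sampled trace are kept together."""
    def __init__(self, sample_rate: float):
        self.sample_rate = sample_rate

    def keep(self, trace_id: str) -> bool:
        bucket = int(hashlib.sha256(trace_id.encode()).hexdigest(), 16) % 10_000
        return bucket < self.sample_rate * 10_000

class BackfillController:
    """On an alert referencing a trace, replay that trace's raw logs
    from the cheap archive back into the observability platform."""
    def __init__(self, archive):
        self.archive = archive  # trace_id -> list of raw log lines

    def on_alert(self, trace_id: str):
        return self.archive.get(trace_id, [])

sampler = TraceSampler(sample_rate=0.01)          # ~1% of traces kept in full
archive = {"trace-42": ["connect failed", "retry 1", "retry 2"]}
controller = BackfillController(archive)
print(controller.on_alert("trace-42"))            # alert restores the full trace logs
```

The design choice worth noting is determinism: sampling by hash rather than at random means a trace is either fully present or fully reduced, never half of each, which is what keeps the alert-to-logs workflow intact.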

Grepr delivered. The initial deployment took under an hour, and Jitsu saw log volume drop by over 90% within 15 minutes. Despite the immediate reduction, after months of use the impact on their Mean Time To Resolution (MTTR) was “negligible compared to the cost savings.” Storing logs in their own S3 bucket for compliance was significantly cheaper than storing them in Datadog, with Grepr providing efficient methods for finding specific logs in the archive when needed.

To sum up their feelings, Evan concluded, “Grepr allows us to find the needle in the haystack without paying for indexing the haystack!” We could not have said it better ourselves.
