OpenTelemetry is starting to gain critical mass thanks to its vendor neutrality, and having worked in the APM space for the last five years, I can see the appeal. Using OpenTelemetry libraries to instrument your code frees you from embedding vendor libraries in your codebase. The other challenge most customers face is balancing cost against visibility. Most APM solutions are effective but costly, and like flood insurance, an APM solution is invaluable when an incident hits but expensive to carry the rest of the time.
For this reason, you have to decide which applications are most critical, and you end up instrumenting only a handful to control cost. In a perfect world, cost would be no object and you could instrument every application. Sampling is an attempt to control volume, and most solutions today use head-based sampling. More advanced solutions have moved to tail-based sampling, which waits for a complete distributed transaction before indexing a sample of successful and unsuccessful transactions. This is where Cribl Stream + Cribl Search comes into play.
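To make the head-vs-tail distinction concrete, here is a minimal Python sketch. The field names (`status`, `trace_id`) are illustrative, not any particular SDK's schema:

```python
import random

def head_sample(trace_id: str, rate: float) -> bool:
    """Head-based: decide at the first span, before the outcome is known."""
    random.seed(trace_id)  # deterministic per trace, for the example
    return random.random() < rate

def tail_sample(spans: list[dict], rate: float) -> bool:
    """Tail-based: decide only after the whole trace has completed.
    Keep every failed trace; sample successes at the given rate."""
    if any(s["status"] == "ERROR" for s in spans):
        return True  # never drop an error
    return head_sample(spans[0]["trace_id"], rate)

# A failed trace always survives tail sampling, regardless of rate.
failed = [{"trace_id": "abc", "status": "ERROR"}]
ok     = [{"trace_id": "abc", "status": "OK"}]
print(tail_sample(failed, 0.01))  # True
```

The key difference: head sampling must gamble before it knows whether the trace will fail, while tail sampling sees the outcome first, which is why it can guarantee every error is kept.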
Stream allows you to calculate RED (Rate, Error, and Duration) metrics and send them to the monitoring tool of your choice for dashboarding, alerting, and correlation, while simultaneously sending 100% of your traces and spans to your own blob storage for long-term retention, analysis, and tail-based sampling. This approach offers a balanced APM solution: cost control without sacrificing observability.
In this example, I'm using Cribl Stream to replace the OTel Collector. You can send your traces directly to a Cribl.Cloud Stream gRPC endpoint, and we scale the infrastructure to meet the throughput of your transactions. Our goal here is choice: if you still want to use the OTel Collector, you can set Stream up behind it as an OpenTelemetry gRPC or HTTP endpoint. Next, use our OpenTelemetry Pack to convert your traces to RED metrics. The Pack generates these metrics along with service and resource_url dimensions for correlation, which also helps keep cardinality to a minimum.
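As a sketch, pointing an OpenTelemetry SDK directly at a Stream OTLP Source needs only the spec-defined exporter environment variables; the hostname below is a placeholder for your actual Stream endpoint, and 4317 is the standard OTLP/gRPC port:

```shell
# Standard OpenTelemetry SDK exporter settings (defined by the OTel spec).
# The endpoint is a placeholder for your Cribl Stream OTLP Source address.
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=https://your-stream-host.example.com:4317
```

Because these variables are part of the OpenTelemetry specification, no code changes are needed to redirect an already-instrumented application from a Collector to Stream.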
From there, you can send the metrics to your monitoring tool of choice for dashboarding and alerting. The OTel Pack generates the duration, hit rate, and error rate for each service and resource URL, letting you build a dashboard of leading indicators that alerts you when an issue arises.
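For illustration, the RED aggregation can be approximated in a few lines of Python. The span fields and the fixed 60-second window here are assumptions for the sketch, not the Pack's actual implementation:

```python
from collections import defaultdict
from statistics import median

def red_metrics(spans, window_seconds=60):
    """Aggregate raw spans into RED metrics keyed by (service, resource_url).
    Each span is assumed to look like:
      {"service": ..., "resource_url": ..., "duration_ms": ..., "status": ...}
    """
    groups = defaultdict(list)
    for s in spans:
        groups[(s["service"], s["resource_url"])].append(s)

    out = {}
    for key, grp in groups.items():
        errors = sum(1 for s in grp if s["status"] == "ERROR")
        out[key] = {
            "rate_per_s": len(grp) / window_seconds,                   # R: request rate
            "error_pct": 100.0 * errors / len(grp),                    # E: error rate
            "duration_ms_p50": median(s["duration_ms"] for s in grp),  # D: duration
        }
    return out

spans = [
    {"service": "checkout", "resource_url": "/pay", "duration_ms": 120, "status": "OK"},
    {"service": "checkout", "resource_url": "/pay", "duration_ms": 300, "status": "ERROR"},
]
print(red_metrics(spans)[("checkout", "/pay")])
```

Keeping the key to just service and resource_url is what keeps cardinality low: you get one time series per endpoint rather than one per trace.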
Metrics are now flowing into your TSDB. At the same time, we are sending all raw traces to cheaper blob storage. See my other blog post for possible lifecycle policies and retention strategies.
This is where Cribl Search comes into play. We use blob storage as a buffer, collecting 100% of all traces before tail-based sampling sends a subset to your APM tool of choice. Cheap, durable blob storage is a great way to keep costs under control: not only is the storage itself lower cost, but this high-volume data compresses at roughly 10:1. Once the traces have landed in blob storage, you can use Search to find slow or unsuccessful transactions, generate top lists of failed resource_urls, and drill deep into the details of failed transactions. Here you can see the URLs that fail most frequently, a starting point for finding the root cause of the failures.
This search is a top list of the slowest transactions by trace_id, the root-level key that ties all spans together. Pivoting from this list, you can drill into each specific transaction.
Pro Tip: Write traces to the local file system and use search-in-place to find your exceptions and stack traces.
From your monitoring solution, you can create dynamic Search queries that pivot from an alert detection to the exact transactions contributing to the incident. In this case, I'm generating an alert for high error rates by service, and I dynamically build the search string using template variables.
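That templating step can be sketched in Python. The query syntax, dataset name, field names, and URL below are illustrative placeholders, not exact Cribl Search semantics:

```python
from string import Template
from urllib.parse import quote

# Hypothetical payload your monitoring tool attaches to the alert.
alert = {"service": "checkout", "window_start": "2024-05-01T10:00:00Z"}

# Illustrative query template; substitute the alert's template variables.
query_tpl = Template(
    'dataset="otel_traces" service="$service" status="ERROR" '
    'earliest="$window_start" | limit 100'
)
query = query_tpl.substitute(alert)

# Embed the finished query in the alert as a deep link (placeholder URL).
link = "https://your-org.cribl.cloud/search?q=" + quote(query)
print(query)
```

The point is that the alert payload alone carries enough context (service name, time window) to construct a query that lands the responder directly on the failing transactions.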
Clicking the link in the alert brings you directly to the transactions causing the issue, so your SREs can go straight to the failures and drill into the root cause of the problem. The OTel span contains the offending stack trace; in this example, the service is failing with "UNAVAILABLE: No connection established."
The last piece of the puzzle is distributed sampling: sending a representative set of successful and unsuccessful transactions to your APM solution. Here you can see the same query as above, but with a sample rate of 1:100 applied.
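One common way to implement a deterministic 1:100 sample that still keeps every failure is to hash the trace_id. A Python sketch of the idea (not Cribl's implementation):

```python
import hashlib

def representative_sample(trace, rate_denominator=100):
    """Keep all failed traces, plus a deterministic 1-in-N slice of successful
    ones. Hashing trace_id makes the keep/drop decision consistent across
    workers with no coordination, unlike random sampling."""
    if trace["status"] == "ERROR":
        return True
    bucket = int(hashlib.sha256(trace["trace_id"].encode()).hexdigest(), 16)
    return bucket % rate_denominator == 0

traces = [{"trace_id": f"t{i}", "status": "OK"} for i in range(10_000)]
kept = sum(representative_sample(t) for t in traces)
print(kept)  # roughly 100 of the 10,000 successes survive
```

Because the decision is a pure function of trace_id, re-running the sample over the same blob-storage data always selects the same transactions, which keeps downstream analysis reproducible.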
Using this pattern, you can control which transaction types, and how many of each, make it into your APM solution. Appending | send to your query routes the results back through Stream to another analysis system, so representative samples can be selectively forwarded wherever they are needed.
In conclusion, using Cribl Stream and Cribl Search gives you the best of both worlds: 100% data capture and retention, with selective sampling of the traces you send to your APM tool. Alternatively, you can use Search alone to gain insight into your applications without sending data anywhere. This approach gives you control over trace volume, sends the right data to your APM solution, and keeps costs under control while preserving 100% observability. Ultimately, it eliminates the need to choose between cost and visibility. Ready to learn more? Check out our Sandbox to try Cribl Stream with sample data.