The ability to collect from myriad sources, send to just as many destinations, and transform your machine data in between is extremely useful. When evaluating observability pipeline tools, though, something else to keep in mind is the ability to store and replay your raw data at a later date.
Think about it: when (not if) there is a data breach, you’ll need access to the raw data so you can analyze it in ways you haven’t before (otherwise you would have caught the breach sooner). Or if you need to prove compliance with certain security standards back to a certain date, you’re going to want your raw data on hand to help.
How do you get at your raw data? Do you need to contact the log administrator? The backup/storage administrator? Did you get the right data the first time? No? So you need to contact them all again? No wonder some large companies don’t disclose breaches for months, if not years.
With the idea of storing and replaying your raw machine data in mind (“raw” meaning prior to transmutation, where you would pull out seemingly useless information), let’s look at how two of the major players in the space handle this task: fluentd and Cribl Stream.
Starting with fluentd, there is an architectural design you have to keep in mind while deploying it: once you send data to a destination, you can no longer make changes to that data; it’s gone. This means that if you want to send your raw machine data to, say, S3 for long-term storage prior to shaping and slimming it down, you have to run two fluentd instances: the first collects the data and ships it both to S3 and to the second fluentd instance, and the second instance is where you do all of your fun data shaping AND push to your actual destinations.
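A first-tier config for this pattern might look something like the sketch below, using fluentd's `copy` output to fan the raw stream out to the `s3` output plugin and to a `forward` output pointed at the second tier. The bucket name, region, path, and second-tier hostname are all placeholders, not values from a real deployment:

```
# First-tier fluentd: duplicate every event to S3 (raw archive)
# and to the second-tier instance that does the actual shaping.
<match **>
  @type copy
  <store>
    @type s3                      # fluent-plugin-s3 output
    s3_bucket my-raw-archive      # hypothetical bucket name
    s3_region us-east-1
    path raw/
  </store>
  <store>
    @type forward                 # ship unmodified events to tier two
    <server>
      host fluentd-tier2.example.com   # hypothetical second-tier host
      port 24224
    </server>
  </store>
</match>
```

The second tier would then carry its own full set of filter and match blocks for transformation and delivery, which is exactly the duplicated-infrastructure cost being described here.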
Wrapping all that into one sentence: You need two ‘layers’ of fluentd just to send copies of your raw data off to long-term storage prior to manipulation. (You would also need another layer if you want to pull your stored data out of storage, even if it’s post-transformation.)
On to Stream, where Replay is built into the product from the beginning. In Stream, sending to multiple destinations pre AND post data manipulation is not only doable but encouraged. To do this, we create a pipeline that doesn’t do anything to the data, just passthrough. Oh wait, it’s already there by default. Ok, then let’s add that pipeline to a route that pushes everything to S3.
Now that that’s out of the way, we can continue on with the rest of our data transmutation by adding more routes. So long as we didn’t select ‘Final’ on the raw-to-S3 route, the data stays in Stream and allows us to reuse/edit it in flight to send it to our final destinations.
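Conceptually, the route table ends up looking something like this YAML sketch. The field names follow Cribl's route configuration and `passthru` is the pipeline Stream ships by default, but the destination and pipeline names below are hypothetical:

```yaml
# Sketch of a Stream route table: the first route copies every event
# to S3 unmodified, and because it is not final, events continue on
# to later routes for shaping and delivery.
routes:
  - name: raw_to_s3
    filter: "true"              # match all events
    pipeline: passthru          # default pipeline; passes data through untouched
    output: s3_raw_archive      # hypothetical S3 destination
    final: false                # events keep flowing to the routes below
  - name: shaped_to_analytics
    filter: "true"
    pipeline: slim_and_shape    # hypothetical pipeline that trims/enriches
    output: analytics_dest      # hypothetical final destination
    final: true
```

One worker, one route table; the raw archive and the shaped delivery are just two rows in it.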
Ok, recap: Using fluentd, you architecturally need two ‘layers’ of fluentd running. The first layer ONLY sends to S3 and to your second layer, because once data is sent somewhere from fluentd, it leaves the pipeline. Conversely, with Stream you can simply use the included passthrough pipeline with a non-final route to take raw data and put it in S3. From there, you can continue to sanitize and slim down the data before sending it off to your other destinations.
All this is to say, when considering tools to use in your observability pipeline, consider the ability to send raw data to cheap storage. Stream seems like the easy choice to make. Oh by the way, next time we’ll talk about how different vendors handle pulling data from S3 in order to actually replay the raw data you just sent there. Hint: fluentd doesn’t have a certified S3 source plugin.
The fastest way to get started with Cribl Stream is to sign up at Cribl.Cloud. You can process up to 1 TB of throughput per day at no cost. Sign up and start using Stream within a few minutes.