June 15, 2020
Before I became an employee at Cribl, I was a prospective customer. For years, I and others at my former employer had struggled with the demands of our log analytics system: the yearly re-justification of the license cost, the ever-escalating infrastructure costs driven by constantly compounding data growth, and the inevitably destructive trade-offs we made around what we logged and what we ignored.
When I saw what Cribl LogStream could help us achieve, it brought a tear to my eye. Then, during the sales cycle, I was lucky enough to be exposed to Cribl’s roadmap, and heard about what they were at the time calling “Replay”. If the product made me cry, this made me sob in joy and relief. Ok, I didn’t really sob or even cry, but the idea immediately made me realize that we could *completely* reimagine how we were doing log analysis and retention.
Simply put, data collection is batch data ingestion. Batch processing is nothing new – it’s been around as long as computing has. But it is new to the log analysis world, which has been focused almost exclusively on real-time data: logs and metrics stream in and are analyzed in a timely manner.
Those systems are really optimized for near real-time access, but so many of us have also made them the system of record for our log data, so we store months or years of data in them just to meet a compliance requirement. Most of that data *never* gets analyzed – it’s like a data insurance policy: save it for some time in the future when you might need it. Unfortunately, due to data growth, it’s an insurance policy with an ever-escalating price tag.
Data Collection enables LogStream users to break that cycle. Instead, they can store *all* of their data in an inexpensive storage solution – like AWS S3 or even AWS Glacier – and *only* index data that is needed for analysis. You can see the potential benefits from this in my “When All I’ve Got is a Hammer” post. The Data Collection feature completes the vision that I describe there.
We were like many shops, with a variant of the type of logging environment you see on the left in place. We had high-performance SSD storage for the Hot/Warm data, and slightly lower-performance SSD or HDD for the Cold data. We had old SATA HDDs in place for “Frozen” data, which was not directly accessible by the logging system but could be “thawed” back into it (although this was typically a pretty slow and painful process done outside the logging system). This worked fine when we started, but it was a struggle to scale, both tactically and financially. Keeping performance up meant more reliance on high-cost SSDs and more and more compute, ultimately breaking the budget.
LogStream has had the ability to write to an archival store like S3 since its very first beta release. The ability to replay data from that datastore changes the game. Data Collection gives you a very fast and easy way to “warm up” archived data, replete with all of the power of LogStream pipelines, functions, etc.
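To make the mechanics concrete, here is a minimal sketch of the idea in plain Python rather than LogStream configuration: list the objects under a time-partitioned prefix in an archive bucket, filter them, and re-ingest only what matches. The bucket name, the `logs/YYYY/MM/DD/` prefix layout, and the `source` field are assumptions made up for this illustration, not Cribl’s actual layout or API.

```python
# Illustrative sketch (not Cribl's implementation): replaying a slice of a
# time-partitioned S3 archive. Bucket name, prefix layout, and the "source"
# filter are hypothetical placeholders.
import gzip
import json
from datetime import datetime, timedelta, timezone

import boto3

BUCKET = "example-log-archive"  # hypothetical archive bucket
s3 = boto3.client("s3")

def replay(start: datetime, end: datetime, source_filter: str):
    """Yield archived events whose time partition and 'source' field match."""
    day = start
    while day <= end:
        prefix = day.strftime("logs/%Y/%m/%d/")  # assumed partitioning scheme
        pages = s3.get_paginator("list_objects_v2").paginate(
            Bucket=BUCKET, Prefix=prefix
        )
        for page in pages:
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
                for line in gzip.decompress(body).splitlines():
                    event = json.loads(line)
                    if event.get("source") == source_filter:
                        yield event  # hand off to the analysis tier
        day += timedelta(days=1)

# Example: pull back one day of web-access logs for an investigation.
for event in replay(
    datetime(2020, 6, 1, tzinfo=timezone.utc),
    datetime(2020, 6, 1, tzinfo=timezone.utc),
    "access_combined",
):
    pass  # e.g. forward to the log analytics system
```

The point isn’t the code itself; it’s that the archive, not the analytics tier, becomes the system of record, and pulling data back is a scoped, filtered, on-demand job rather than a bulk “thaw.”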
So now, with LogStream in place and making use of Data Collection, a typical deployment can look more like this:
In this deployment, all data that’s coming through LogStream is being archived to the S3 bucket, which also has a Data Collector configured for it. This lets you be a lot more aggressive in filtering out data you don’t need for immediate troubleshooting or analysis. You can now convert log data to metrics (sending only the metrics on), drop superfluous fields, sample, or even drop logs or log lines that you decide are unnecessary. You can do all of this safe in the knowledge that if you need any of that data back in the system, it’s a simple matter of running a data collection job.
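As a rough illustration of that trade-off (again plain Python, not a LogStream pipeline), the sketch below drops a couple of superfluous fields, rolls every event up into a status-code metric, and keeps only errors plus a sample of the successes. The field names, the 500-status check, and the roughly 10:1 sample rate are all assumptions made up for the example.

```python
# Toy illustration of the reduction ideas described above (not a Cribl pipeline):
# drop noisy fields, sample low-value events, and roll logs up into a metric.
# Field names, the sample rate, and the status-code logic are assumptions.
import random
from collections import Counter

def reduce_stream(events):
    """Apply drop / sample / log-to-metrics steps to an iterable of dict events."""
    status_counts = Counter()  # metric aggregated from the logs
    kept = []
    for event in events:
        # 1) Drop superfluous fields before they reach the (expensive) index.
        for field in ("useragent_raw", "internal_trace_id"):
            event.pop(field, None)

        # 2) Aggregate every event into a metric, regardless of what we keep.
        status_counts[event.get("status", "unknown")] += 1

        # 3) Keep all server errors, but sample successes at roughly 10:1.
        if event.get("status", 0) >= 500 or random.random() < 0.1:
            kept.append(event)

    metrics = [{"metric": "http.status.count", "status": s, "value": n}
               for s, n in status_counts.items()]
    return kept, metrics

sample = [{"status": 200, "useragent_raw": "curl/7.68"},
          {"status": 503, "internal_trace_id": "abc123"}]
events_out, metrics_out = reduce_stream(sample)
```

Because the full-fidelity copy is already sitting in the archive, the only cost of filtering too aggressively is a collection job later, not lost data.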
As part of our 2.2 Release efforts, we’ve created an interactive sandbox course to help you learn how to use the Data Collection feature. This course takes about 30 minutes, offers an Accredible certificate upon completion, and is available at https://sandbox.cribl.io/course/data-collection.