Talk to an Expert ›I’ve been in the log data analytics space for years, and I have loved seeing the technology and methodologies change and evolve. One of my favorite changes has been the emergence of index-less solutions, and LogScale has a great solution here. If you haven’t heard of LogScale, you should check out their index-less log management solution for yourself (free up to 16 GB/day too). In today’s blog, I want to walk you through how to set up Cribl Stream v.3.5.1 and later to start sending data to LogScale, so you can get up and running fast.
I’ll show you how.
To start, let’s make sure you have a repository for this data set up in LogScale. For a little context, think of a repository as a collection of data, parsing rules, users, dashboards, searches, and alerts (if you want to learn more, I suggest starting here). This step has to come first because data inputs are tied to a repository.
Once you’ve created your repository (or if you already have one), you will need to get your ingest token. Do this by clicking on the repo you want to send data to; this will take you to the search page. In my example, I’ve created a repo called “infra-metrics”.
Click on the “infra-metrics” repo and navigate to “Settings” for the repo, located on the top bar.
Once on the settings page, go to the “Ingest Tokens” side tab.
We are going to create a new token called “Cribl-Stream”, without selecting an assigned parser. This is important (we’ll go into why after we set up the destination in Cribl Stream).
Once it’s saved, click on the eye icon (…or eye-con 😉) and copy the token.
Great! Now that we have LogScale set up to receive data, we just need to start sending!
So let’s head to the Stream “destinations” page and select a “LogScale HEC” destination.
We are going to add a new “LogScale HEC” destination. Feel free to name your output however you want, but we do need to change the endpoint. We are going to use the following URL structure:
If you have a custom endpoint: <URL for your LogScale deployment>:443/api/v1/ingest/hec
OR
If using the community edition: https://cloud.community.humio.com:443/api/v1/ingest/hec
Then paste in the ingest token we created earlier as the “HEC Auth Token”.
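Before wiring this up in Stream, you can sanity-check the endpoint and token yourself by posting a test event straight to the HEC endpoint. Here’s a minimal Python sketch, not anything Cribl ships: the URL and token are placeholders for your own values, and it assumes LogScale’s HEC endpoint accepts the ingest token as a bearer token.

```python
# Minimal sketch: post one test event to LogScale's Splunk-HEC-compatible
# ingest endpoint to verify the URL and ingest token before configuring
# the Stream destination. URL and token below are placeholders.
import requests

LOGSCALE_HEC_URL = "https://cloud.community.humio.com:443/api/v1/ingest/hec"
INGEST_TOKEN = "<your-ingest-token>"  # copied from the repo's Ingest Tokens page

payload = {
    "event": "hello from my Cribl Stream setup test",
    "sourcetype": "test",  # LogScale matches this against its parsers
}

resp = requests.post(
    LOGSCALE_HEC_URL,
    json=payload,
    # Assumption: the ingest token is sent as a bearer token.
    headers={"Authorization": f"Bearer {INGEST_TOKEN}"},
    timeout=10,
)
print(resp.status_code, resp.text)  # a 200 means the event was accepted
```

If that comes back with a 200, the same URL and token should work in the Stream destination config.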
Once you save it, open the configuration page back up and go to the “test” tab so we can send some sample data.
You should see the sample log go through without a hitch, and if you flip over to LogScale, the events should have already shown up.
Alright, we now have the destination set up in Stream; we just need to start sending data in! The destination is ready to be used like any other in Stream, and you just need to attach it to a route.
LogScale will map the Splunk-notation fields to LogScale notation without you needing to do much of anything. But this is where the parsers kick in: if you are having trouble with the format of events in LogScale, the parsers are where you need to start looking.
Inside your repo in LogScale you will see a tab named “Parsers”; if you open that page, you will see all the pre-built parsers.
If you click into any one of these parsers you will see the parsing logic or regex used for that data type.
When you leave the assigned parser empty, LogScale will try to match the sourcetype (what LogScale calls the “@Type”) to a parser. If your data is coming in malformed, this is the best place to start looking. An easy way to avoid parser issues altogether is to either create your own parser for the event in question or reformat events into JSON and set the sourcetype to “json”.
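To make that JSON workaround concrete, here’s a minimal sketch along the same lines as the earlier one (same placeholder URL and token, same bearer-token assumption): serialize the event body to JSON and set the sourcetype to “json” so LogScale hands it to its built-in JSON parser.

```python
# Minimal sketch of the JSON workaround: ship the event body as a JSON
# string with sourcetype "json" so LogScale's JSON parser handles it.
# URL and token are placeholders, as in the earlier sketch.
import json
import requests

LOGSCALE_HEC_URL = "https://cloud.community.humio.com:443/api/v1/ingest/hec"
INGEST_TOKEN = "<your-ingest-token>"

event = {"host": "web-01", "level": "error", "message": "disk usage at 91%"}

resp = requests.post(
    LOGSCALE_HEC_URL,
    json={
        "event": json.dumps(event),  # event body as a JSON string
        "sourcetype": "json",        # steer LogScale to the json parser
    },
    headers={"Authorization": f"Bearer {INGEST_TOKEN}"},  # assumption, as above
    timeout=10,
)
resp.raise_for_status()
```

Inside Stream itself, the equivalent would be a pipeline on your route that serializes the event to JSON and sets the sourcetype field (for example, with Serialize and Eval functions) before it hits the LogScale destination.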
That’s all for now! Thanks for joining me on this dive into how Cribl and LogScale integrate. We will be doing more with LogScale over the coming weeks, so stay tuned! And remember, the bird is the word.