When you’re using Cribl Stream and Cribl Edge to send data to hundreds of Splunk indexers through Load Balancing-enabled Destinations, it is sometimes necessary to analyze memory usage. In this blog post, we delve into buffer management, memory usage calculations, and mitigation strategies to help you optimize your configuration and avoid memory issues.
This post applies to SplunkLB Destinations, as well as any other Load Balancing-enabled Destination supported by Stream/Edge where we do not interface through a third-party library (as we do for AWS, Azure, and Kafka). As of publishing, the complete list of Destinations this post applies to is SplunkLB, TCP Syslog, TCP JSON, and Cribl TCP. However, in our experience, users don’t have hundreds of targets for any of those Destinations except SplunkLB, so I’ll reference only SplunkLB for brevity.
This post builds upon a previous blog post, Understanding SplunkLB Intricacies, where we discussed the various buffers each Sender uses inside a SplunkLB Destination and how Stream reacts to backpressure from LB Destinations. You can jump right in here, though, to learn how to use Stream efficiently with large numbers of Splunk indexers.
Check out that post first if you want more context on the load balancing design of the SplunkLB Destination, how the buffers are used, and why there are so many of them. It also includes diagrams, not repeated here, that show where in the flow those buffers reside.
There are two types of buffers that we’ll cover here: in-progress buffers and transit buffers. Up to two of each type can exist simultaneously, but at most three of them (two transit and one in-progress) actively store events at any point in time. When a fourth buffer is needed, it waits on hot standby, so it is still allocated in memory. For simplicity, we call it the fourth buffer, but it won’t always be the same buffer: the two in-progress buffers swap back and forth between being used (that is, being filled with events) and unused.
At lower throughput, SplunkLB Senders for a Worker Process won’t get much backpressure from indexers, so Stream may get by with just two buffers allocated: one in-progress and one transit buffer. Two is the absolute minimum.
At higher throughputs, especially in combination with network issues or Splunk indexers delaying receipt of TCP packets from Stream, Senders may need two of each buffer type simply to have more buffering capacity to absorb backpressure effects. Ultimately, this means you can expect 2–4 MB of buffer memory to be allocated per indexer connection for a given Worker Process.
So, what does this memory usage look like one level up from the Sender, at the Worker Process level? Let’s go over a few examples with low and high numbers of indexers, and the calculations that determine the memory usage.
Let’s say you created a SplunkLB Destination with 100 indexers (manually configured or using indexer discovery). Each Process will, by default, connect to each of those 100 indexers and will at least allocate 2MB total for two buffers. If necessary, up to two more buffers (2 MB more) will be allocated as data begins flowing.
This results in 200–400 MB of external memory being dedicated to this example SplunkLB Destination on a per-Process basis. This is in addition to the heap memory usable by each Process, which defaults to a maximum of 2 GB (allocated only as needed).
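The calculation above can be sketched as a small helper. This is a back-of-envelope estimate based on the 2 MB minimum / 4 MB maximum per indexer connection described in this post; the function and constant names are illustrative, not a Cribl API.

```python
# Back-of-envelope calculator for per-Process external buffer memory.
MIN_MB_PER_CONN = 2  # one in-progress + one transit buffer (1 MB each)
MAX_MB_PER_CONN = 4  # two of each buffer type under heavy backpressure

def buffer_memory_mb(num_indexers: int) -> tuple[int, int]:
    """Return (min, max) external buffer memory in MB for one Worker Process."""
    return num_indexers * MIN_MB_PER_CONN, num_indexers * MAX_MB_PER_CONN

# The 100-indexer example from above:
print(buffer_memory_mb(100))  # (200, 400)
```

Multiply the result by the number of Worker Processes per Node to see the host-level impact.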
If you have multiple SplunkLB Destinations configured, additional memory will be used for each of those. For more information about what external memory is and how it relates to heap memory, see this external page.
We have customers that have up to 300 indexers defined in a single SplunkLB Destination and then have two or more SplunkLB Destinations that have many of the same indexers defined. That’s on the high end for indexer count, so I’ll use it as an obvious example of high memory usage.
Since Stream treats each SplunkLB Destination as independent, each Process in that customer environment will connect to some indexers multiple times, because that’s what their configuration specifies. This results in a lot of memory usage just for Destination-related buffers.
Here is an example of a log event from a system whose configuration results in 1300 indexer connections across multiple SplunkLB Destinations (just over 300 indexers reused across 4 Destinations); it was reported to Cribl Support as having memory issues due to the host running out of memory. Note the large amount of external memory resulting from the 1300 connections. Processes on this Worker Node were crashing because the host had insufficient memory to accommodate all Processes and their needs.
{"time":"2023-12-10T16:44:07.012Z","cid":"w15","channel":"server","level":"info","message":"_raw stats","inEvents":191915,"outEvents":345407,"inBytes":140474707,"outBytes":223893933,"starttime":1702226581,"endtime":1702226638,"activeCxn":0,"openCxn":392,"closeCxn":392,"rejectCxn":0,"abortCxn":0,"pqInEvents":43891,"pqOutEvents":3664,"pqInBytes":25447295,"pqOutBytes":2383397,"droppedEvents":54570,"tasksStarted":0,"tasksCompleted":0,"pqTotalBytes":869425152,"pqBufferedEvents":0,"activeEP":7,"blockedEP":0,"cpuPerc":74.24,"eluPerc":60.49,"mem":{"heap":345,"ext":2634,"rss":3707}}
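As a quick sanity check, we can pull the `mem` figures out of an event like this and compare them against the 2–4 MB per connection rule of thumb. The sketch below trims the log line to the relevant fields; note that the reported 2634 MB of external memory sits right at the ~2600 MB floor for 1300 connections.

```python
import json

# Trimmed version of the "_raw stats" event shown above; memory figures are in MB.
log_line = '{"cid":"w15","message":"_raw stats","mem":{"heap":345,"ext":2634,"rss":3707}}'
mem = json.loads(log_line)["mem"]

connections = 1300            # indexer connections per Process in this config
floor_mb = connections * 2    # minimum: 2 MB per connection
ceiling_mb = connections * 4  # worst case: 4 MB per connection

print(f"ext={mem['ext']} MB, expected {floor_mb}-{ceiling_mb} MB")
# ext=2634 MB, expected 2600-5200 MB
```

When external memory lands inside (or above) that range, Destination buffers are the likely dominant consumer.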
In addition, duplicating indexers across Destinations creates memory usage that is, in a sense, redundant. While using the same indexers in multiple Destinations is sometimes unavoidable, the redundant memory can add up to a significant amount. If your Processes consume hundreds of megabytes of external memory, review your SplunkLB configurations to see if this might be the cause.
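To get a feel for the cost, here is a rough estimate of the redundant buffer memory in a setup like the customer example above. The counts (~325 unique indexers, 4 Destinations) are illustrative round numbers, not exact customer figures.

```python
# Rough estimate of "redundant" buffer memory when the same indexers are
# defined in multiple SplunkLB Destinations. Counts are illustrative.
unique_indexers = 325
destinations = 4

total_connections = unique_indexers * destinations       # 1300 per Process
redundant = total_connections - unique_indexers          # 975 duplicate connections

# Each redundant connection still costs 2-4 MB of buffer memory per Process.
print(redundant * 2, redundant * 4)  # 1950 3900 (MB)
```

In other words, roughly 2–4 GB per Worker Process in this scenario buys no additional unique indexer coverage.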
Since the memory segment used by these buffers is external rather than heap, there isn’t a setting to control the memory limit. There are ways to make the configuration more efficient – we’ll discuss those in the section below. If the configuration can’t be made more efficient to help control memory usage, ensure the Stream hosts have sufficient memory for Worker Processes to request as necessary if hundreds of Splunk indexer connections are being established per Process.
There are two ways to ensure the application uses less memory for SplunkLB Destinations with hundreds of indexer connections:
If the changes above can’t be applied, the worst-case scenario is that your Stream/Edge Nodes will need more RAM allocated. This makes RAM available when Worker Processes need it.
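If you do end up sizing host RAM, a rough worst-case formula under the assumptions stated in this post (2 GB default heap max per Process, up to 4 MB of external memory per indexer connection) might look like the sketch below. The function name and example figures are illustrative.

```python
# Rough host-sizing sketch: worst-case RAM needed by Worker Processes on one
# host, assuming the 2 GB default heap max and 4 MB/connection worst case.
def host_ram_gb(worker_processes: int, indexer_connections: int,
                heap_max_gb: float = 2.0) -> float:
    ext_gb = indexer_connections * 4 / 1024  # worst case: 4 MB per connection
    return worker_processes * (heap_max_gb + ext_gb)

# e.g. 8 Worker Processes, each connecting to 300 indexers:
print(host_ram_gb(8, 300))  # 25.375 GB
```

Leave additional headroom for the OS and any other services on the host.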
We’ve discussed just one aspect of your Stream configuration that can consume a large amount of memory, which can potentially be the largest part of the overall memory footprint of each Worker Process. Because the section of memory used by these buffers cannot be limited, you must take appropriate steps when configuring Stream to minimize memory usage. I hope this helps provide insight into better managing memory consumed by your LB-enabled destinations!
Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.
We offer free training, certifications, and a free tier across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started and continue to build and evolve. We also offer a variety of hands-on Sandboxes for those interested in how companies globally leverage our products for their data challenges.