Analyzing Historical Log Data with Chaos Sumo
In recent years, many companies have sprung up in the log analytics and monitoring market. It's no surprise: as increasingly complex software and services are released, the need for deep visibility into those systems grows. With the rise of cloud computing and serverless architectures, centralizing metrics and event data is key to understanding the state of a deployment, as well as diagnosing any issues that arise in real time.
Solutions are a mix of proprietary and open source stacks. Many of the latter are built around the ELK stack, and many of the former use Elasticsearch as their data store as well. Elasticsearch is wonderful for this use case, but its indices tend to explode in size as data ingestion grows, which leads many vendors to offer short retention periods: your log data is often accessible within their systems for only 7, 14, or maybe 30 days. After that window, many users have no choice but to delete or archive their data; they simply can't afford the cost or complexity of longer-term storage. Some let their data be deleted, some archive it. Deleted data has no value. Archived data has value, but at a cost.
Chaos Sumo seeks to bridge this gap, allowing search and analytics on top of your archived data while exposing the Elasticsearch API that we all love. Yes, this means you can use Kibana too.
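Because the service speaks the Elasticsearch API, any tooling built on that API should be able to query archived indices the same way it queries a live cluster. As a rough sketch of what such a request looks like (the endpoint and index name below are hypothetical placeholders, not real Chaos Sumo values), here is a standard Elasticsearch search request built with only the Python standard library:

```python
import json
import urllib.request

# Hypothetical endpoint and index name; substitute your own API URL.
ENDPOINT = "https://api.example.com"
INDEX = "archived-logs-2018.03"

# A standard Elasticsearch query DSL body: recent ERROR-level events.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-90d"}}}],
        }
    },
    "size": 10,
}

# Build the HTTP request against the _search endpoint without sending it.
request = urllib.request.Request(
    url=f"{ENDPOINT}/{INDEX}/_search",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would execute the search; Kibana and the
# official Elasticsearch clients issue requests of exactly this shape.
```

The point is that nothing here is vendor-specific: the query DSL and the `/{index}/_search` route are plain Elasticsearch conventions, which is what lets existing dashboards keep working on top of archived data.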
When it comes to choosing a cloud storage offering, object storage is king. More specifically, Amazon's Simple Storage Service (S3) is king. It offers the perfect platform for archiving your historical log data: S3 is highly durable, effectively unlimited in scale, and far cheaper than keeping the same data hot in an Elasticsearch cluster.
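As a sketch of what shipping logs to an S3 archive might look like, the snippet below packs a batch of log events into gzip-compressed newline-delimited JSON, a common format for log archives in object storage. The bucket name, key, and the boto3 upload call are illustrative assumptions and are left as a comment so the example stays self-contained:

```python
import gzip
import json

def pack_logs(events):
    """Serialize log events as newline-delimited JSON and gzip the result,
    a typical layout for log archives stored as objects."""
    raw = "\n".join(json.dumps(e, sort_keys=True) for e in events).encode("utf-8")
    return gzip.compress(raw)

events = [
    {"ts": "2018-03-01T12:00:00Z", "level": "INFO", "msg": "service started"},
    {"ts": "2018-03-01T12:00:05Z", "level": "ERROR", "msg": "upstream timeout"},
]
blob = pack_logs(events)

# Hypothetical upload step (requires boto3 and AWS credentials):
# boto3.client("s3").put_object(Bucket="my-log-archive",
#                               Key="logs/2018/03/01.ndjson.gz", Body=blob)

# Verify the archive round-trips losslessly before relying on it.
restored = [json.loads(line)
            for line in gzip.decompress(blob).decode("utf-8").splitlines()]
assert restored == events
```

Compressing before upload cuts both storage and transfer cost, and newline-delimited JSON keeps each event independently parseable, which matters when a query engine later scans the archive.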
Read the entire article here: Analyzing Historical Log Data with Chaos Sumo
via the fine folks at Chaos Sumo