The 5 Myths of Centralized Log Management

By Blue Matador on June 5, 2017


Centralized logging has been around for years, and in that time the DevOps practice has picked up a few misconceptions among developer and operations folks. The common myths surrounding log management may surprise you and your team.

So the legend goes, it took five storks to deliver the giant infant Paul Bunyan, who grew up to become the world's most famous lumberjack alongside his trusty blue ox companion, Babe. But Paul Bunyan isn't the only myth in logging. Its modern digital equivalent, centralized log management, has accumulated a fair share of myths of its own.

With that in mind, let's look at the myths that have surrounded log management over the last decade, and why they could be costing your company thousands of dollars and hundreds of hours of root cause analysis every year.

Myth 1: Log management isn’t necessary for small businesses

A small business or organization may think that because it is nimble and agile, it doesn't need log management. After all, how many things can go wrong when the customer base is manageable and your AWS servers are few? The truth is, even a single server needs log management.

Here's why: if your business is connected to the internet (and whose isn't?), you are exposed to a few common problems: hacking attempts by third parties, auditing requirements from government or industry regulations, and debugging when your servers inevitably go down during that 0.01% of downtime allowed for in your service-level agreement. When your company builds the log management habit in its early years, the practice pays dividends throughout the life of the company, much like learning a musical instrument or a sport as a child.

The dividends aren't just financial (downtime costs your company up to $5,600 per minute, according to Gartner); improved customer satisfaction, less downtime, and more business intelligence are all benefits of monitoring your logs in one place. Give your site reliability engineers access to your logs in a centralized, smart repository with predictive alerts built in, and you've already saved your company thousands of dollars, if not more, in operational overhead.

Myth 2: Server logging is difficult to implement

The uninitiated think that logging is hard if they've either (a) survived exposure to Splunk or (b) never tried centralized logging in the first place. In fact, the manual way of logging (grep, we're looking at you) with all your syslog files in different locations on different boxes is the real path of pain. It's also the reactive way of monitoring your logs: you only look at your log files when you already have a problem on your hands, a problem that could have been prevented in the first place with intelligence from a centralized log management tool.
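To make the pain concrete, here's a minimal sketch of the manual, reactive approach: scanning scattered log files for errors after something has already broken. The file paths and log contents here are made up for illustration; in reality these files would live on different boxes, each one an ssh session away.

```python
import re

# Hypothetical syslog files scattered across hosts (contents are made up).
logs = {
    "web-1:/var/log/app.log": "INFO started\nERROR timeout talking to db\n",
    "web-2:/var/log/app.log": "INFO started\nINFO healthy\n",
    "db-1:/var/log/app.log": "ERROR disk 98% full\nINFO vacuum done\n",
}

def grep_errors(files):
    """The manual way: scan every file, line by line, for ERROR entries."""
    hits = []
    for path, text in files.items():
        for line in text.splitlines():
            if re.search(r"\bERROR\b", line):
                hits.append((path, line))
    return hits

for path, line in grep_errors(logs):
    print(f"{path}: {line}")
```

Multiply this loop by dozens of hosts and log formats, and the appeal of shipping everything to one searchable place becomes obvious.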

You also don’t have to be a Linux expert to start with centralized log management (although it does help — the majority of the world’s servers run on Linux). Lumberjack, our centralized logging tool, installs in less than a minute with a copy-and-paste command generated from our web app. And setting it up to capture all your log files is easy with smart discovery. Many other logging tools are simple to set up and come with thorough documentation as well.

Myth 3: Centralized logging is difficult to learn

Many vendors have built-in log management tutorials so users can get up and running quickly. There’s also a swath of primers on YouTube.

As for our log management tool: if you know SQL, you already know Lumberjack's query language. But you don't need to know SQL, either. Even without advanced queries, the right logging tool gives your product managers, developers, and operations team direct access to insights that help them make better decisions.
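This post doesn't show Lumberjack's actual syntax, so purely as an illustration, here's what SQL-style querying over centralized logs feels like, using Python's built-in sqlite3 module and a made-up `logs` table:

```python
import sqlite3

# A made-up, in-memory "centralized" log store with a SQL interface.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (host TEXT, level TEXT, message TEXT)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?)",
    [
        ("web-1", "INFO", "request served in 42ms"),
        ("web-1", "ERROR", "upstream timeout"),
        ("web-2", "ERROR", "upstream timeout"),
        ("db-1", "WARN", "slow query: 1.8s"),
    ],
)

# If you know SQL, one query answers "which hosts are throwing errors?"
rows = conn.execute(
    "SELECT host, COUNT(*) FROM logs"
    " WHERE level = 'ERROR' GROUP BY host ORDER BY host"
).fetchall()
print(rows)  # [('web-1', 1), ('web-2', 1)]
```

The same `WHERE`/`GROUP BY` habits transfer directly to SQL-flavored log query languages; the only thing that changes is where the data lives.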

Myth 4: Log management is the old way of doing things

Many people put log management in the same era of computing as juggling floppy disks, setting jumpers in server racks, or cleaning out your system registry. Instead, they think that flashy application performance monitoring (APM) tools or expensive business intelligence suites give reliability engineers all the data they need to keep tabs on their servers. While those tools can tell you what happened, they don't tell you how or why it happened. Without log management, you're not getting the full story from your data.

On the contrary, log management is the newer way of keeping tabs on your server health, and the discipline has grown enormously in the last 10 years. Logs do tell you how and why downtime and other nasty events happened. For example, your APM software may reveal increased 5xx error rates, but your log management software can reveal that they were caused by a null pointer exception.

Myth 5: Log management is expensive

It's true that some log management tools are expensive. We did the unfathomable above and mentioned Splunk, the tool that shall not be named, and the biggest and most bloated logging tool out there. The fact is that most tools are reasonably priced. The largest factors in centralized log management pricing models are data ingest (how many GB of log files are sent to and analyzed by the software per month), search length (how far back your log files remain searchable), and archive (how long those files are stored for future retrieval).
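Because the model has only those three levers, estimating your bill is straightforward. The rates below are hypothetical placeholders, not any vendor's actual prices; the point is just how the cost composes:

```python
def monthly_cost(ingest_gb, search_days, archive_months):
    """Sketch of a typical log-management pricing model.

    ingest_gb:      GB of logs sent and analyzed per month
    search_days:    how far back logs remain searchable
    archive_months: how long logs are archived for future retrieval
    All per-unit rates below are made-up placeholders.
    """
    INGEST_PER_GB = 1.50          # hypothetical $/GB ingested
    SEARCH_PER_GB_DAY = 0.02      # hypothetical $/GB per searchable day
    ARCHIVE_PER_GB_MONTH = 0.03   # hypothetical $/GB per archived month
    return (ingest_gb * INGEST_PER_GB
            + ingest_gb * search_days * SEARCH_PER_GB_DAY
            + ingest_gb * archive_months * ARCHIVE_PER_GB_MONTH)

# 50 GB/month ingested, 14 days searchable, 12 months archived:
print(monthly_cost(50, 14, 12))  # 107.0
```

Plugging in your own volumes makes it easy to compare vendors on a like-for-like basis, since most quote against these same three dimensions.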

Affordable log management tools that follow these pricing factors include Papertrail and LogDNA. There are also open-source alternatives, such as the ELK stack. You'll find us in the affordable log management crowd as well.

Still believe in the 5 myths of log management? Try the MythBusters approach: conduct an experiment yourself and see if the myths hold any water. (Experimentation is just good science anyway. We support it, and we think most DevOps-minded people do, too.)
