The morning of Tuesday, February 28, 2017, started like most other mornings for DevOps professionals. That is, until the internet broke as a result of Amazon’s now (in)famous S3 outage on its Amazon Web Services platform, all resulting from increased error rates triggered by a typo entered into the command line.

You’ve probably read that an Amazon engineer used standard operating procedures to debug a billing server problem but accidentally took down a large subset of servers, cascading the problem into the largest service interruption in AWS’s 11-year history. And when AWS services go down, everyone feels it, from Netflix to Reddit to your IoT-connected smart light bulb.

The outage presented another headache for many users of log management tools: log data spikes caused by a flood of error messages. Just like with your personal cell phone bill, a spike in data usage can push you over your limit. Unlike your cell phone plan, which usually has some sort of overage protection (or at least a provision to pay for more data in a given billing cycle), log management software often enforces hard data caps that leave you with dropped logs, disabled search, and frustration.

This is because the logging industry is fixated on data caps, contracts, and suffering. Most products impose caps on data usage, which means major outages like Amazon’s leave you losing precious log data and the gems buried in it.

Does either of these messages sound familiar?

LICENSE VIOLATION: You have exceeded your license data limit too many times.
Usage is over limit. Your log data is not being indexed or retained. Searching disabled.

Don’t get us wrong. Some logging suites, such as Loggly, offer “overage protection.” First, you have to sign up for a higher, more expensive tier. Then you’re allowed “occasional” data spikes in a given billing cycle that still get indexed and retained even though they exceed your normal limit. But that protection is still capped at a total of 50GB and 3 spikes per month (3 is what they mean by “occasional”).

Meme: Dr. Evil saying Log Data "Overage Protection"

Our Lumberjack log management tool takes a different approach to pricing. There are never any service contracts (we’re looking at you, Splunk), nor is there a problem with overage because our affordable model automatically scales with your usage. It’s a pay-as-you-go model that encourages business growth because you don’t need to plan your operations around your logging tool’s limit (because there is none).

You’ve come to expect pay-as-you-go billing from AWS and Rackspace. Why should your monitoring tools be any different? Lumberjack’s pricing model follows the same pay-as-you-go standard you’re already used to. Our competitors, on the other hand, have it wrong.

Here’s why. Our usage-based pricing is based on three components: ingest, archive, search. Our platform is fully featured, for all your users, regardless of the amount you spend with us. Just imagine … no paywalls, no limitations, no upgrades — and no data limits. Check out our bill calculator here.
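To make the usage-based model concrete, here is a minimal sketch of how a bill like this could be computed. The split of the per-GB rate across ingest, archive, and search is purely hypothetical (this post only names the three components); the placeholder rates are chosen so that 15GB of each component adds up to the $706.50/mo. figure quoted further down.

# Minimal sketch of usage-based billing with three components: ingest, archive, search.
# The per-component rates are hypothetical placeholders, not published Lumberjack prices;
# they are picked so 15GB of each adds up to the $706.50/mo. quoted below.
HYPOTHETICAL_RATES_PER_GB = {
    "ingest": 30.00,
    "archive": 10.00,
    "search": 7.10,
}

def monthly_bill(usage_gb):
    """Sum each component's usage times its per-GB rate; no tiers, no caps."""
    return round(sum(HYPOTHETICAL_RATES_PER_GB[c] * gb for c, gb in usage_gb.items()), 2)

print(monthly_bill({"ingest": 15, "archive": 15, "search": 15}))  # 706.5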

Compare the step function of our competitor’s tiered plans with our linear pricing:

Splunk Lite (Monthly, Cloud-based)

15GB Monthly Plan = $1,140/mo.

Above 15GB, you must upgrade to a higher tier, even if you only need 16GB:

20GB Monthly Plan = $1,440/mo. (a difference of $300/mo. or $3,600 a year)

Lumberjack (Monthly, Cloud-based)

15GB Monthly Plan = $706.50/mo.

16GB Monthly Plan = $753.60/mo. (a difference of $47.10/mo., or $565.20 a year)

For reference, you could switch to the 20GB allowance with Lumberjack and still beat Splunk:

20GB Monthly Plan = $942/mo. (compared to Splunk’s $1,440/mo.)
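As a sanity check on those numbers, here is a short sketch that contrasts Splunk Lite’s step-function tiers with Lumberjack’s linear per-GB rate, using only the figures quoted in this post ($706.50 divided by 15GB works out to $47.10 per GB).

# Compare the quoted Splunk Lite tiers (a step function) with Lumberjack's
# linear per-GB pricing, using only the figures cited in this post.
SPLUNK_TIERS = [(15, 1140.00), (20, 1440.00)]  # (GB allowance, monthly price)
LUMBERJACK_PER_GB = 706.50 / 15                # $47.10/GB, from the 15GB plan

def splunk_monthly(gb):
    """You pay for the smallest quoted tier that covers your usage."""
    for allowance, price in SPLUNK_TIERS:
        if gb <= allowance:
            return price
    raise ValueError("usage exceeds the tiers quoted in this post")

def lumberjack_monthly(gb):
    """Linear pay-as-you-go: usage times the per-GB rate."""
    return round(gb * LUMBERJACK_PER_GB, 2)

for gb in (15, 16, 20):
    print(f"{gb}GB: Splunk ${splunk_monthly(gb):,.2f} vs. Lumberjack ${lumberjack_monthly(gb):,.2f}")
# 15GB: Splunk $1,140.00 vs. Lumberjack $706.50
# 16GB: Splunk $1,440.00 vs. Lumberjack $753.60
# 20GB: Splunk $1,440.00 vs. Lumberjack $942.00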

Ouch! That’s an expensive extra gigabyte with Splunk. How often do customers ride the line, discarding important logs, because they don’t want to upgrade? How many times does their bill jump a whole tier because they scaled up 10%? How many times have you wanted to downgrade, but didn’t, because it requires a phone call you know you’ll just reverse in a few months anyway?

Amazon admitted in their status message on that fateful February day that “[w]e build our systems with the assumption that things will occasionally fail.” Don’t let your customers be affected by services that you don’t control but which “occasionally fail.” Lumberjack is the best log management tool to help you investigate downtime from AWS outages and more.

Did we mention that Lumberjack’s pricing includes predictive alerts, which use artificial intelligence to predict when downtime will happen before you even start seeing error messages?

Request a beta invite today and get started on predictive, downtime-busting log management that doesn’t gouge you in pricing.

Interested in predictive alerts? Request a beta invite to Lumberjack today. Coming in May 2017.

Blue Matador Staff

Author Bio

Blue Matador is the AI-powered DevOps monitoring platform that solves the "Franken-monitor" effect, enabling organizations to have all their monitoring tools in one place.