AWS Cost Anomaly Alert: The Case of the Missing Discount

A story about investigating AWS cost anomalies, Amazon Q's AI debugging attempts, and discovering that the real culprit was... wait for it... (spoiler alert) expired Reserved Instances.

Recently, I started getting those lovely "AWS Cost Anomaly Detected" emails. You know the ones — they sound like the IRS of the cloud:

Dear AWS Customer,

You are receiving this alert because AWS Cost Anomaly Detection has detected higher than expected spending in your account. Below is a summary of ongoing cost anomalies with potential root cause(s) that have been detected or updated on 2025-10-22.

I opened the email, clicked the link, navigated to the correct AWS account, and clicked a few more times to get to the cost report. A couple of line items basically said: "EC2 and RDS costs increased by about $3 per day."

But wait, there's more. When I looked at the RDS details specifically, it showed: "RDS charges went up by 1090.48% in the last 9 days." 🤬

Now, $3 a day doesn't sound like much… but a 1090% increase? That's the kind of number that makes you check if someone spun up a production database in Tokyo by accident.

And if you've used AWS long enough, you know that's how it starts. It's the cloud equivalent of hearing a drip in your house — you know that's going to end with a plumber and a $6000 bill.


The Lowdown

The report didn't give much detail. Just that my EC2 instances were somehow racking up 48 hours' worth of charges every 24 hours. Cool. Love it when my servers discover time travel.

I only have two EC2 instances in an Auto Scaling group and a single RDS instance (not even Multi-AZ). So the EC2 math kind of made sense — two instances, two extra "days" of billing every day (not that I agreed with the charges, but at least the arithmetic added up).

At this point I had no idea what was going on with the RDS instance.

In any case, I had to figure out what the heck was going on.

Did I make any changes recently? Yes, but a minor one. A tiny, innocent-looking security group update. Could that really be it? The short answer is no, but keep reading to follow the process.

I'd added a self-reference rule to allow my instances to talk to each other. No big deal, right? Security groups are free, and it's a super common pattern.
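
For the curious, here's what that kind of rule looks like in code. This is a minimal boto3 sketch of the generic pattern, with a made-up security group ID, not my exact change:

    import boto3

    ec2 = boto3.client("ec2")
    SG_ID = "sg-0123456789abcdef0"  # hypothetical security group ID

    # Allow members of this security group to talk to each other
    # on any protocol and port (the classic "self-reference" rule).
    ec2.authorize_security_group_ingress(
        GroupId=SG_ID,
        IpPermissions=[
            {
                "IpProtocol": "-1",  # all protocols, all ports
                "UserIdGroupPairs": [
                    {"GroupId": SG_ID, "Description": "intra-SG traffic"}
                ],
            }
        ],
    )

The console or Terraform gets you the same result; the key detail is that the rule's source is the security group itself.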

But in AWS land, "no big deal" can easily be followed by a surprise on your bill.

Again, don't worry: the security group change wasn't the culprit. Although that kind of change could, in theory, open a path that leads to new data transfer charges... it didn't in my case.

I digress, but you have to follow the rabbit hole all the way down to find out.


Calling In the AI Cavalry (a.k.a. Amazon Q)

Naturally, I turned to Amazon Q, the new AI assistant in the AWS console, and asked it what was happening. Here's the summary of what it told me (paraphrased for sanity):

"You're right — security groups themselves don't cost anything. But maybe you accidentally unblocked traffic that was previously blocked, and now your VPC is generating new data transfer charges."

It even gave me a little cost breakdown, CSI-style:

  • Amazon VPC: +$1.01
  • EC2 Compute: +$2.07
  • RDS: +$1.89

And then came the grand theory:

"The self-reference might have enabled internal communication, database replication, backups, or secret background processes that were previously blocked."

In other words, I accidentally opened a portal and now my instances were gossiping behind my back, leaking data and making me pay for it.

The AI confirmed my fears... but keep reading, because it was wrong.


Testing a Theory

Okay, fair enough. Maybe I unblocked some chatty microservice traffic. So, I rolled back the change — deleted the self-reference rule — and waited for the next cost report.
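
The rollback, for the record, was just the mirror-image call (same hypothetical ID as in the earlier sketch):

    import boto3

    ec2 = boto3.client("ec2")
    SG_ID = "sg-0123456789abcdef0"  # same hypothetical security group ID

    # Remove the self-reference rule added earlier.
    ec2.revoke_security_group_ingress(
        GroupId=SG_ID,
        IpPermissions=[
            {
                "IpProtocol": "-1",
                "UserIdGroupPairs": [{"GroupId": SG_ID}],
            }
        ],
    )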

The charges didn't stop.

Now I'm thinking… great. I've created some sort of VPC feedback loop and my EC2s are stuck sending each other cat memes. Or even worse, I've been hacked and the uptick in traffic is someone stealing my data. Now I'm starting to panic.


The Real Root Cause

After taking a breather (panic attack avoided) and doing some digging, I discovered the real cause — and this is where it gets funny, slightly sad, a bit embarrassing, and a lot annoying:

My 3-year Reserved Instances had expired.

Both my EC2 Reserved Instances and my RDS Reserved Instance expired — all purchased three years ago, paid in full upfront like a responsible cloud architect.

That's right. Nothing to do with security groups, data transfer, or phantom network traffic. My discounts just ran out.

Those "anomaly" charges? They weren't anomalies at all — they were just the actual on-demand prices I hadn't paid in years.

And that 1090% RDS spike? That's what happens when a database goes from deeply discounted Reserved Instance pricing to full-price on-demand overnight.
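
If you want to check this in your own account, a quick boto3 sketch like the one below lists Reserved Instances and when they end. EC2 reservations expose an End timestamp directly; RDS reservations give you a StartTime plus a Duration in seconds, so you add the two:

    from datetime import timedelta

    import boto3

    ec2 = boto3.client("ec2")
    rds = boto3.client("rds")

    # EC2 Reserved Instances: 'End' is the expiration; 'State' flips
    # to 'retired' once the term is over.
    for ri in ec2.describe_reserved_instances()["ReservedInstances"]:
        print("EC2", ri["ReservedInstancesId"], ri["InstanceType"],
              ri["State"], ri["End"].date())

    # RDS Reserved Instances: expiration = StartTime + Duration (seconds).
    for ri in rds.describe_reserved_db_instances()["ReservedDBInstances"]:
        end = ri["StartTime"] + timedelta(seconds=ri["Duration"])
        print("RDS", ri["ReservedDBInstanceId"], ri["DBInstanceClass"],
              ri["State"], end.date())

Run that three years ago and this whole post doesn't exist.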


Lessons Learned (and a Tiny Rant)

So what did we learn?

  1. AWS Cost Anomaly Detection is great at spotting spikes — but not so great at explaining them.
  2. Amazon Q, bless its virtual heart, can't (yet) say, "Hey, your Reserved Instances expired. This is literally the entire reason your bill went up."
  3. Sometimes the scariest anomalies are just… time.
  4. Oh, and I should have set up an alert to notify me when my Reserved Instances were about to expire (see the sketch right after this list). Bad Deadpool!
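
On that last point, here's a rough sketch of the DIY alert I should have had: a Lambda function on a daily schedule (EventBridge or similar) that publishes to an SNS topic when a reservation is inside its last 30 days. The topic ARN and the 30-day window are my own made-up choices:

    from datetime import datetime, timedelta, timezone

    import boto3

    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ri-expiration-alerts"  # hypothetical
    WARN_WINDOW = timedelta(days=30)

    def lambda_handler(event, context):
        """Warn about Reserved Instances expiring within the next 30 days."""
        ec2 = boto3.client("ec2")
        rds = boto3.client("rds")
        sns = boto3.client("sns")
        now = datetime.now(timezone.utc)
        expiring = []

        # EC2: only active reservations have a future 'End' worth checking.
        for ri in ec2.describe_reserved_instances(
            Filters=[{"Name": "state", "Values": ["active"]}]
        )["ReservedInstances"]:
            if ri["End"] - now <= WARN_WINDOW:
                expiring.append(
                    f"EC2 {ri['ReservedInstancesId']} expires {ri['End']:%Y-%m-%d}"
                )

        # RDS: compute the end date from StartTime + Duration.
        for ri in rds.describe_reserved_db_instances()["ReservedDBInstances"]:
            if ri["State"] != "active":
                continue
            end = ri["StartTime"] + timedelta(seconds=ri["Duration"])
            if end - now <= WARN_WINDOW:
                expiring.append(
                    f"RDS {ri['ReservedDBInstanceId']} expires {end:%Y-%m-%d}"
                )

        if expiring:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="Reserved Instances expiring soon",
                Message="\n".join(expiring),
            )

(AWS also offers built-in reservation expiration alerts in the billing console, if you'd rather not run your own.)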

Pro Tip

If you ever get cost anomaly alerts and can't figure out why, check whether any Reserved Instances or Savings Plans have expired (or are about to) before digging anywhere else. It's the easiest $$ mystery you'll ever solve.


What Matters Most

So, at the end of the day:

  • I didn't break my VPC.
  • I didn't cause a billing black hole.
  • I didn't accidentally spin up a rogue database farm.
  • I just forgot that three years ago I bought both EC2 and RDS Reserved Instances on the same day like a responsible adult — and time finally caught up.

Next Steps

Now, excuse me while I go buy another set of 3-year reservations — for both EC2 AND RDS this time — before my wallet notices.

Happy DevOps/FinOps!


📚 Read Next:
RDS Reserved Instances: The Surprising Math Behind AWS Database Savings — Speaking of buying reservations... should you really pay everything upfront? The numbers might surprise you.
