Why S3 Lifecycle Rules Silently Do Nothing
S3 lifecycle rules are notorious for failing silently. Last month I spent three hours staring at a rule I’d configured in the AWS console, completely baffled why objects weren’t being deleted, and in the process I learned everything there is to know about this particular brand of quiet misery. Today, I will share it all with you.
The rule looked correct. Bucket name matched. Prefix was there. And yet nothing happened. No error. No CloudTrail warning. Just silence. That was a Tuesday afternoon I’ll never get back.
Here’s the thing about S3 lifecycle rules: they don’t fail loudly. They fail quietly. A misconfigured rule won’t throw an error on save. It won’t ping you. It will simply do nothing — and you’ll figure that out weeks later when your storage bill looks exactly the same as last month.
Three culprits show up constantly: versioning conflicts, prefix or filter mismatches, and the propagation delay that AWS documentation mentions in passing but most engineers miss entirely. Each one looks fine from the console. Behind the scenes, the rule either doesn’t apply to your objects or hasn’t started running yet. So, without further ado, let’s dive in.
Check Your Bucket Versioning Status First
Probably should have opened with this section, honestly. This is the single biggest gotcha with S3 lifecycle rules, and it’s invisible until you walk straight into it.
So what does versioning do to your lifecycle rules? In essence, it changes which objects the rule actually touches. In the worst case, it makes your entire expiration configuration functionally useless without a single error message to show for it.
If you enable versioning on a bucket, a rule set to expire objects after 30 days will not delete current object versions. Full stop. It only affects noncurrent versions — the old snapshots created when you overwrote or deleted something. That’s what makes versioning behavior so dangerous to miss.
Frustrated by this exact scenario, I checked my bucket’s versioning status using the AWS CLI with a straightforward command:
aws s3api get-bucket-versioning --bucket my-bucket --region us-east-1
The response told me everything. "Status": "Enabled" means your bucket is versioning every single object. "Status": "Suspended" means versioning was turned on and then off — older objects are versioned, new ones aren’t. No Status field at all? Versioning is off entirely.
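The three states can be told apart mechanically. Here’s a minimal Python sketch that classifies the JSON the CLI prints (the helper name `versioning_state` is my own, not an AWS API):

```python
import json

def versioning_state(response: dict) -> str:
    """Classify a get-bucket-versioning response.
    The Status field is absent entirely when versioning
    has never been enabled on the bucket."""
    status = response.get("Status")
    if status == "Enabled":
        return "enabled"    # every object write creates a version
    if status == "Suspended":
        return "suspended"  # old objects versioned, new ones not
    return "off"            # no Status key at all

# Responses shaped like the CLI output:
print(versioning_state(json.loads('{"Status": "Enabled"}')))  # enabled
print(versioning_state({}))                                   # off
```

The important detail is the last branch: an empty response is not an error, it is the “versioning off” state.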
Each state needs a different lifecycle setup:
- Versioning Off: Lifecycle expiration rules delete current objects once they hit the specified age. Straightforward.
- Versioning Enabled: Expiration rules ignore current versions entirely. You need a separate NoncurrentVersionExpiration action to clean up old versions. Don’t skip that part.
- Versioning Suspended: New objects are unversioned and get caught by standard expiration rules, but old versioned objects stick around until you add noncurrent version rules.
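For a versioning-enabled bucket, a rule that cleans up both current and noncurrent versions looks roughly like this (the rule ID, prefix, and day counts here are hypothetical; pick values that match your retention needs):

```json
{
  "Rules": [
    {
      "ID": "expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Expiration": { "Days": 30 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 7 }
    }
  ]
}
```

With versioning enabled, `Expiration` only adds a delete marker on current versions after 30 days; it is the `NoncurrentVersionExpiration` block that actually removes old data.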
My bucket had versioning enabled. My 30-day expiration rule was never going to touch my current objects. That was the whole problem — the entire thing. I added a noncurrent version expiration rule with a much shorter window and things finally started cleaning up. Don’t make my mistake.
Inspect the Actual Rule Configuration via CLI
The AWS console is fine for learning. It hides too much detail for debugging. Pull the raw lifecycle configuration directly:
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket --region us-east-1
This returns JSON showing every rule, filter, and action. Read it carefully; don’t skim it, actually read it.
The Filter block is where most people slip up. An empty filter applies the rule to every object in the bucket. A Prefix filter only applies to objects whose keys start with that exact prefix. Tag filters only apply to objects matching those tags precisely.
I’m apparently someone who stores logs under "Logs/" with a capital L, so my lowercase prefix filter of "logs/" never caught a single file. AWS didn’t complain. It just silently skipped every object for what I’m estimating was about two weeks.
Trailing slashes matter too. A prefix of "logs" matches "logs/", "logs-backup", and "logs2". If you mean only the folder structure, write "logs/" explicitly — that extra character does real work.
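Both gotchas come down to the same thing: a prefix filter is a plain, case-sensitive starts-with check on the object key. A tiny sketch (the `rule_applies` helper is mine, not an AWS API) makes the difference concrete:

```python
def rule_applies(prefix: str, key: str) -> bool:
    # S3 prefix filters are a case-sensitive startswith
    # match on the full object key. Nothing fancier.
    return key.startswith(prefix)

keys = ["Logs/app.log", "logs/app.log", "logs-backup/app.log", "logs2/x"]

# Trailing slash: only keys under the "folder" match.
print([k for k in keys if rule_applies("logs/", k)])  # ['logs/app.log']

# No trailing slash: sibling prefixes get swept in too.
print([k for k in keys if rule_applies("logs", k)])
# ['logs/app.log', 'logs-backup/app.log', 'logs2/x']
```

Note that "Logs/app.log" matches neither filter: case matters, and that was my entire two-week bug.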
Tag filters are even trickier. Objects must carry those exact tags to match the rule. A missing tag, a different value, one typo — rule doesn’t apply. Check that your tags are actually attached to the objects you’re targeting before you assume anything is working.
Transition Rules That Break Without an Error
Transition rules have minimum storage duration requirements that AWS enforces in ways that aren’t always obvious. S3 Standard-IA carries a 30-day minimum storage duration: if an object leaves Standard-IA before it has been there 30 days, whether deleted or transitioned onward, you’re billed for the full 30-day minimum regardless. The rule executes. You pay for it. Your objects move.
The real breakage happens when you chain transitions. Standard to S3-IA after 30 days, then to GLACIER after 60 days — that works fine. Flip the order, try to hit GLACIER at 30 days and IA at 60, and the rule becomes invalid. AWS discards it silently in some cases or processes only the first valid action. No warning. No flag in the console.
Always keep transition dates moving upward as you go to colder storage: Standard → IA at 30+ days → Glacier at 90+ days → Deep Archive at 180+ days. That sequence is non-negotiable.
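You can sanity-check a transition chain before saving it. This is a local sketch, not anything AWS runs for you; the coldness ranking and the `transitions_valid` helper are my own:

```python
# Rank storage classes from warm to cold (subset for illustration).
COLDNESS = {"STANDARD_IA": 0, "GLACIER": 1, "DEEP_ARCHIVE": 2}

def transitions_valid(transitions) -> bool:
    """A chain is valid when, ordered by day count, each step
    moves to a strictly colder storage class."""
    ordered = sorted(transitions, key=lambda t: t["Days"])
    ranks = [COLDNESS[t["StorageClass"]] for t in ordered]
    return all(a < b for a, b in zip(ranks, ranks[1:]))

ok = [{"Days": 30, "StorageClass": "STANDARD_IA"},
      {"Days": 90, "StorageClass": "GLACIER"}]

bad = [{"Days": 30, "StorageClass": "GLACIER"},
       {"Days": 90, "StorageClass": "STANDARD_IA"}]  # warmer later: invalid

print(transitions_valid(ok), transitions_valid(bad))  # True False
```

Running a check like this in CI before calling put-bucket-lifecycle-configuration catches the inverted-chain mistake while it is still cheap.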
How to Verify a Rule Is Actually Running
S3 doesn’t log lifecycle actions natively. There’s no “Lifecycle Execution Log” sitting somewhere you can check. I use three approaches instead; together they give real confidence that a rule is doing something.
Enable S3 CloudTrail data events: Set CloudTrail to log S3 API calls. Lifecycle-triggered deletions and transitions show up as internal AWS events. Look for DeleteObject and PutObject calls with source "AWS Internal" rather than a specific IAM user or role. That distinction matters.
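Once the data events land, filtering them is simple. The sketch below assumes simplified event records; real CloudTrail events carry many more fields, and the exact field holding the "AWS Internal" attribution can vary, so treat the `invokedBy` key here as an assumption to adapt to your actual logs:

```python
def lifecycle_events(records):
    """Keep object deletions/writes attributed to AWS itself
    rather than to a named IAM principal (hypothetical,
    simplified record shape)."""
    return [
        r for r in records
        if r["eventName"] in ("DeleteObject", "PutObject")
        and r.get("userIdentity", {}).get("invokedBy") == "AWS Internal"
    ]

sample = [
    {"eventName": "DeleteObject",
     "userIdentity": {"invokedBy": "AWS Internal"}},   # lifecycle action
    {"eventName": "DeleteObject",
     "userIdentity": {"type": "IAMUser", "userName": "alice"}},  # a human
]
print(len(lifecycle_events(sample)))  # 1
```

If that filter returns nothing after 48 hours, the rule almost certainly never fired.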
Use S3 Server Access Logging: Enable access logs and filter for lifecycle-related operations. Less precise than CloudTrail but functional if you haven’t set CloudTrail up yet.
Monitor with S3 Storage Lens: This dashboard shows storage class distribution over time. A working rule shows objects gradually shifting from Standard to IA or Glacier. A distribution that never changes means your rule isn’t running — full stop.
After fixing the versioning issue and correcting the prefix, I tested on a small, isolated prefix first using a separate test bucket. Created five objects with the test prefix, waited 24 hours, confirmed they disappeared. Only after that did I apply anything to production. Probably worth adding that step to your workflow permanently.
Final checklist before assuming your rule is broken:
- Verify versioning status and confirm your rule targets the right object versions.
- Pull the lifecycle configuration via CLI — check Filter syntax, prefix case, and tag values character by character.
- Confirm transition dates increase correctly if you’re chaining storage class changes.
- Wait 24–48 hours. AWS doesn’t apply rules the moment you save them.
- Enable CloudTrail or S3 Storage Lens to confirm the rule actually executed.