🎉 retentions 1.0 is out
After a long stretch of building, testing, and thinking a bit too hard about file deletion, I’m releasing retentions 1.0 — the first production-ready version.
retentions is a small, cross-platform CLI tool that applies backup-style retention rules to plain files.
It does not create backups.
It does not “clean up a bit”.
It decides what stays and what goes — deterministically, explainably, and with safety rails.
🧰 What 1.0 already does
Time-bucket retention (the “backup tool” way)
retentions keeps one representative file per time bucket, across multiple time scales:
- --hours/-h, --days/-d, --weeks/-w, --months/-m, --quarters/-q
- long-range variants: --week13 (quarter-style buckets based on 13 ISO weeks), --years/-y
Within each bucket, the newest file wins.
No tie-breakers. No magic.
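To make the bucket idea concrete, here is a minimal Python sketch of "one representative per bucket, newest wins" for daily buckets. The file names and timestamps are invented for illustration; this is the idea, not the tool's actual code.

```python
from datetime import datetime, timezone

def keep_per_day_bucket(files):
    """Pick one representative per day bucket: the newest file wins.

    `files` is a list of (name, unix_mtime) pairs; names and timestamps
    here are made up for illustration.
    """
    buckets = {}
    for name, mtime in files:
        day = datetime.fromtimestamp(mtime, tz=timezone.utc).date()
        # Newest file in each bucket wins; no tie-breakers.
        if day not in buckets or mtime > buckets[day][1]:
            buckets[day] = (name, mtime)
    return sorted(name for name, _ in buckets.values())

files = [
    ("db-2024-05-01T02.tar.gz", 1714528800),  # 2024-05-01 02:00 UTC
    ("db-2024-05-01T14.tar.gz", 1714572000),  # 2024-05-01 14:00 UTC
    ("db-2024-05-02T02.tar.gz", 1714615200),  # 2024-05-02 02:00 UTC
]
print(keep_per_day_bucket(files))
```

The two files from 2024-05-01 land in the same bucket, so only the 14:00 one survives; the 02:00 file from the next day is in its own bucket and stays.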
Always keep “the last N”
--last/-l N keeps the N most recent files, regardless of bucket rules.
Because sometimes the most important rule is simply:
“Don’t touch the latest backups.”
Practical limits, applied after retention
After retention rules decide what should be kept, optional limits can further constrain the result:
- --max-files/-f N
- --max-size/-s N (10.5M, 500G, …)
- --max-age/-a N relative to script start (7d, 3m, 1y, …)
Limits are explicit.
If something is removed at this stage, the reason is visible.
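A hand-wavy Python sketch of "limits applied after retention": the retention rules have already produced a keep-set, and a cap then trims it, oldest first, with a visible reason. The reporting format here is illustrative, not the tool's actual output.

```python
def apply_max_files(kept, max_files):
    """Hypothetical post-retention cap: if more than `max_files` files
    survive retention, drop the oldest ones and say why.

    `kept` is a list of (name, unix_mtime) pairs already selected by
    the retention rules.
    """
    by_age = sorted(kept, key=lambda f: f[1], reverse=True)  # newest first
    survivors, removed = by_age[:max_files], by_age[max_files:]
    for name, _ in removed:
        print(f"remove {name}: over --max-files {max_files}")
    return [name for name, _ in survivors]

print(apply_max_files([("x", 3), ("y", 2), ("z", 1)], 2))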
Matching, filtering, and protection
- glob matching by default
- regex matching via --regex-mode/-r
- --protect/-p PATTERN to explicitly shield files from deletion
If a file is protected, it stays protected — even if it is old, large, or unpopular.
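The match-then-protect pipeline can be sketched in a few lines of Python with the standard fnmatch and re modules. The names and patterns are invented; the one property that mirrors the tool is that protection wins unconditionally.

```python
import fnmatch
import re

def select_candidates(names, pattern, protect_regex):
    """Glob-match candidates, then drop anything matching the protect
    pattern. Protection wins unconditionally.
    """
    protect = re.compile(protect_regex)
    matched = [n for n in names if fnmatch.fnmatch(n, pattern)]
    return [n for n in matched if not protect.search(n)]

names = ["app-golden-2023.sql.gz", "app-2024-05-01.sql.gz", "notes.txt"]
print(select_candidates(names, "*.sql.gz", r"-golden-"))
```

The golden backup matches the glob but is shielded by the protect pattern, so only the dated backup remains a deletion candidate.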
Timestamp selection
Retention age can be calculated from:
- mtime (default)
- ctime
- atime
Choose consciously. The tool will not guess.
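These three timestamps are exactly what a plain stat call exposes; in Python they are the st_mtime, st_ctime, and st_atime fields of os.stat. A throwaway example:

```python
import os
import tempfile

# Write a throwaway file and read the three candidate timestamps.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"backup payload")
    path = f.name

st = os.stat(path)
print(f"mtime={st.st_mtime:.0f} ctime={st.st_ctime:.0f} atime={st.st_atime:.0f}")
os.remove(path)
```

On a fresh file the three values are close together; on a long-lived backup they can differ by months, which is why the choice matters.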
Safety features (because deletion is forever)
- lock file enabled by default (.retentions.lock)
- --dry-run/-X to preview deletions
- --list-only/-L for clean, script-friendly output
Also intentionally boring:
no recursion, no surprises, no “helpful” assumptions.
🧪 Three real CLI examples
1) Classic “dense recent, sparse older” (dry-run)
```
retentions /data/backups '*.tar.gz' -d 14 -w 8 -m 12 -y 3 -X -V info
```

Runs in dry-run mode and shows what would be deleted.
Nothing actually is.
2) Keep the newest no matter what, then cap by size (dry-run)
```
retentions /data/backups '*.zst' -d 7 -w 4 -l 10 -s 50G -X -V info
```

Also runs in dry-run mode.
The newest files are pinned first.
Only then does the size limit apply.
This order is intentional.
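A minimal Python sketch of that ordering: pin the newest files first, then fill the remaining size budget newest-first. Whether pinned files count against the budget is an assumption made here for illustration, not documented behavior.

```python
def pin_then_cap(files, last_n, max_bytes):
    """Sketch of 'pin newest, then cap by size'.

    `files` is a list of (name, unix_mtime, size_bytes) tuples; all
    values are made up for illustration. Pinned files are kept even if
    they alone exceed the cap (an assumption of this sketch).
    """
    newest_first = sorted(files, key=lambda f: f[1], reverse=True)
    pinned = newest_first[:last_n]           # --last: untouchable
    rest = newest_first[last_n:]
    budget = max_bytes - sum(f[2] for f in pinned)
    kept, used = list(pinned), 0
    for f in rest:                           # newest-first under the cap
        if used + f[2] <= budget:
            kept.append(f)
            used += f[2]
    return [name for name, _, _ in kept]

files = [("a", 3, 40), ("b", 2, 30), ("c", 1, 30)]
print(pin_then_cap(files, 1, 75))
```

With a 75-byte cap, "a" is pinned (40 bytes), "b" fits in the remaining 35-byte budget, and "c" does not.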
3) Regex matching, protected “golden” backups, list-only output
```
retentions /data/backups -r ignorecase '.*\.(sql|dump)\.gz$' -p '.*-golden-.*' -m 6 -y 2 -L '\0'
```

Produces a clean delete-set suitable for piping —
and still refuses to touch anything explicitly protected.
🧭 Why 1.0 matters
retentions 1.0 is not a clever script.
It is a decision engine.
Buckets are explicit.
Limits are explicit.
Protection is explicit.
Months later, you can still answer:
“Why does this file exist?”
That alone already puts it ahead of most find | rm one-liners.