Amazon S3 Files: S3 as Native File System

Amazon just slapped a file system on S3. It's clever, but don't ditch your EFS yet.

AWS S3 Files dashboard showing NFS-mounted S3 bucket with file operations

Key Takeaways

  • S3 Files uses EFS to provide NFS access to existing S3 data without migration.
  • Great for shared workloads like ML and AI, but limited to AWS compute—no local mounts.
  • Performance tiers hot/cold data; strong on latency and throughput, but adds EFS costs.

S3 files. Now with actual files.

Amazon’s behemoth object store—500 trillion objects strong—finally pretends to be a file system. AWS dropped S3 Files Monday, letting you mount S3 buckets via NFS v4.1 from EC2, ECS, EKS, Fargate, even Lambda. No data migration needed. Your petabytes stay put, accessible via old S3 APIs too. Sounds handy? Sure. Revolutionary? Pump the brakes.

Sébastien Stormacq, AWS evangelist, boasts:

“the first and only cloud object store that offers fully-featured, high-performance file system access to your data.”

First and only? Cute. Open-source hacks like s3fs-fuse have limped along for years. Sure, they’re pokey on locking or renames. But AWS’s own Mountpoint for S3—launched last year—already sped up reads. This? It’s EFS under the hood, not pure S3 magic.

Does Amazon S3 Files Solve S3’s Eternal File Headache?

Here’s the thing. S3 was born object storage. Infinite scale, dirt cheap. Devs dreamed bigger—file systems on top. FUSE tools translated POSIX calls to S3 APIs. Result? Laggy messes. Renames became copy-delete dances. No atomic locks. ObjectiveFS tried metadata tricks elsewhere. Still, clunky.
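The rename dance is easy to see in miniature. The sketch below stands in a plain dict for the object store (real FUSE shims issue calls like boto3's copy_object and delete_object instead); the point is that a "rename" built from two separate operations can never be atomic:

```python
# Minimal sketch of why rename hurts on plain S3: objects are immutable
# key/value blobs, so a POSIX rename() must be emulated as copy-then-delete.
# The "bucket" here is an in-memory dict standing in for real S3 API calls.
bucket = {"reports/draft.csv": b"col1,col2\n1,2\n"}

def fuse_style_rename(bucket: dict, src: str, dst: str) -> None:
    """Emulate rename the way FUSE shims like s3fs must: two S3 calls.
    If the client dies between them, both keys (or neither) may exist."""
    bucket[dst] = bucket[src]   # step 1: server-side copy
    del bucket[src]             # step 2: delete the original

fuse_style_rename(bucket, "reports/draft.csv", "reports/final.csv")
print(sorted(bucket))  # only the new key remains
```

On a real NFS mount, rename is a single atomic server-side operation; that gap is exactly what the FUSE generation could never close.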

AWS flirted before. S3 File Gateway? Hybrid on-prem toy. Mountpoint? Read-heavy speed demon, but no in-place edits—writes only to new objects. Now S3 Files layers EFS—AWS’s NFS workhorse—onto S3 buckets. Sub-ms latencies for hot data. Thousands of concurrent clients. Intelligent prefetch. Tiered caching: hot files in the EFS speed lane, cold ones served straight from S3’s throughput glory.
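The tiering behaves roughly like an LRU cache fronting a durable store. A toy model below; the class name, cache size, and eviction policy are my illustrative assumptions, not AWS's published mechanics:

```python
# Toy model of hot/cold tiering: recently touched files are served from a
# fast "EFS" cache; everything else falls through to the slower "S3" tier
# and gets promoted on first read. Both tiers are plain dicts here.
from collections import OrderedDict

class TieredStore:
    def __init__(self, s3: dict, cache_slots: int = 2):
        self.s3 = s3                 # durable cold tier (stand-in for S3)
        self.cache = OrderedDict()   # hot tier, LRU-evicted (stand-in for EFS)
        self.cache_slots = cache_slots

    def read(self, key: str) -> bytes:
        if key in self.cache:        # hot path: serve from cache
            self.cache.move_to_end(key)
            return self.cache[key]
        data = self.s3[key]          # cold path: fetch from object store
        self.cache[key] = data       # promote to the hot tier
        if len(self.cache) > self.cache_slots:
            self.cache.popitem(last=False)  # evict least-recently-used
        return data

store = TieredStore({"a": b"1", "b": b"2", "c": b"3"})
store.read("a"); store.read("b"); store.read("c")  # third read evicts "a"
print("a" in store.cache, "c" in store.cache)
```

The real service layers prefetch heuristics on top, but the economic shape is the same: the hot tier is fast and priced like EFS, the cold tier cheap and priced like S3.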

Smart architecture. But it’s no S3 API glow-up. Data lives in S3; metadata and caching live in EFS. Developers get NFS mounts inside AWS. No local desktop Finder folder. No cross-cloud mounts. Locked to AWS compute. That’s the catch they’re not yelling about.

And performance? AWS claims ~1 ms reads. Fine for ML pipelines, AI agents scribbling Python files, prod apps sharing data. But sprawl those workloads and EFS costs stack up for metadata, even as S3 stays cheap underneath. Hybrid win? Maybe. Or just EFS reskinning your S3 bills.

Why Bother When EFS Exists?

AWS admits the absurdity. “Enough file storage services to entertain cloud architects,” Stormacq quips. EFS for shared files. FSx for Lustre. EBS for blocks. So why bolt NFS onto S3?

Shared access to existing S3 data. No copying to EFS. Train ML on S3 lakes without egress fees or ETL hell. Agentic AI—those hypey autonomous bots—needs files it can mutate collaboratively. Multiple Lambdas? EC2 swarms? NFS without the FUSE fragility.

But peek behind. It’s EFS caching S3. Hot data migrates automatically. Cold? Direct S3 blasts. Users tweak prefetch: full file or metadata only. General-purpose buckets only—no intelligent-tiering quirks.

Critics—and I’m one—smell PR spin. “Native file system”? Nah. It’s federated. EFS owns the semantics. S3’s just dumb durable backend. Historical parallel: Remember NetWare? Everyone bolted NFS on everything in the ’90s. S3 Files feels like that—object storage’s midlife crisis, chasing file system respect.

My bold prediction: This cannibalizes EFS for S3-heavy shops. But won’t kill it. Latency purists stick to pure EFS. Costs? Watch AWS tweak pricing to blur lines. Devs win short-term. Long-term? More lock-in glue.

One gripe. No local mounts. Dream of S3 as home NAS? Dead. That’s Storage Gateway turf—on-prem only. Cross-cloud? Forget it. Azure Blob or GCP? Roll your own FUSE forever.

The Developer Traps Hiding in Plain Sight

Performance shines for mixed workloads. Sequential scans? S3 throughput rules. Random edits? EFS cache delivers. But scale to exabytes and metadata balloons in EFS. Bills creep.

Full POSIX? Not quite. NFS v4.1 covers creates, reads, updates, deletes. Locking? Yes. But the cache layer lurks underneath: S3 itself went strongly consistent back in 2020, yet a hot copy in EFS and an object rewritten through the raw S3 API could still race on cold-data fetches.
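For what locking does cover: NFS locks surface to applications through the standard POSIX interfaces, so code written against a local path should carry over to a mount. A minimal sketch, using an ordinary local temp file as a stand-in for a mounted path (the path is illustrative):

```python
# Advisory whole-file locking via the ordinary POSIX interface.
# On an NFS v4.1 mount the same fcntl.flock() call coordinates writers
# across machines; here a local temp file stands in for the mount.
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.log")
with open(path, "w") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # exclusive advisory lock
    f.write("only one writer at a time\n")
    fcntl.flock(f, fcntl.LOCK_UN)   # release before closing

print(open(path).read().strip())
```

Advisory means cooperating processes must all take the lock; nothing stops a client that skips it, and nothing stops a writer going through the S3 API directly.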

Unique insight: This echoes the storage wars of yore. Like when EMC bolted file systems on SANs—promising unity, delivering complexity. AWS repeats history. S3 Files bridges object-file chasm, but creates EFS dependency. Your “S3 file system”? Now EFS-shaped.

Preview now. GA soon. Free tier? Unclear. Bet on per-mount fees atop S3/EFS.

Look, it’s progress. S3 evolves. But hyping it as making “the world’s biggest object store a file system”? Overreach. It’s a file system window on object glass. Useful. Not transformative.



Frequently Asked Questions

What is Amazon S3 Files?

AWS feature mounting S3 buckets as NFS v4.1 file systems for AWS compute. Data stays in S3, caching via EFS.

Can I mount S3 Files on my local machine?

No. AWS compute only—no desktop or other clouds.

Does S3 Files replace EFS or Mountpoint?

Enhances S3 access. EFS for pure files, Mountpoint for reads. Pick per workload.

Written by Priya Sundaram

Hardware and infrastructure reporter. Tracks GPU wars, chip design, and the compute economy.


Originally reported by The New Stack
