Local Storage vs SAN/NAS in Bare Metal
When you're working in the cloud, storage is usually invisible. You spin up a VM and it's just there. But bare metal doesn't give you that illusion — you're the one wiring things together. That means you're forced to decide early: do you store data locally on the server itself, or do you pull in shared storage over the network?
It sounds like a simple technical choice, but it has ripple effects everywhere: complexity, cost, performance, and how much pain you'll be in when something fails.
Local Storage: Direct, Fast, and Easy to Understand
There's something refreshingly straightforward about local disks. You install the OS, format a drive, and go. Nothing needs to be mapped across the network. Nothing's abstracted behind layers of protocols or services. You know where your data is — right there inside the box.
Performance is generally excellent, especially with modern NVMe drives. Latency is low, throughput is predictable, and there's no chance that a blip on a network switch takes down your I/O. And when something breaks, it's usually just that server. You're not dragging down an entire cluster because one target dropped off.
For standalone workloads — databases, telemetry processors, log collectors — local storage often hits the sweet spot. It's fast, it's simple, and it doesn't rely on anything beyond that server's own hardware.
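To make that concrete, here's a minimal Python sketch, assuming a typical Linux bare metal host, that lists the block devices the server sees directly in /sys/block. The device-name filters are assumptions; adjust them for your hardware.

```python
# Minimal sketch: enumerate local block devices the way the OS sees them,
# by reading sysfs. Assumes a Linux host; device-name filters are guesses
# about a typical bare metal box.
from pathlib import Path

def local_block_devices():
    """Yield (name, size_gib, is_rotational) for disks visible in /sys/block."""
    for dev in sorted(Path("/sys/block").iterdir()):
        name = dev.name
        # Skip pseudo-devices; keep NVMe, SATA/SAS (sd*), virtio (vd*), etc.
        if name.startswith(("loop", "ram", "zram")):
            continue
        sectors = int((dev / "size").read_text())              # 512-byte sectors
        rotational = (dev / "queue" / "rotational").read_text().strip() == "1"
        yield name, sectors * 512 / 1024**3, rotational

if __name__ == "__main__":
    for name, size_gib, rotational in local_block_devices():
        kind = "HDD" if rotational else "SSD/NVMe"
        print(f"{name}: {size_gib:.1f} GiB ({kind})")
```

Everything it reports is right there in the chassis, which is exactly the appeal.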
Shared Storage: When the Infrastructure Needs to Flex
As soon as you're building systems that need to fail over cleanly or share state across multiple machines, local storage starts to feel limiting. You can't fail over a database if its data is tied to a single box. You can't have multiple frontend servers writing to the same directory unless you have something that mediates that access.
That's where SAN and NAS start to make sense. They move your storage into its own layer — accessible from multiple machines, resilient (assuming you've done the work), and centralized. You can boot from it, mount it, migrate workloads on top of it. This approach is key for implementing proper storage redundancy and failover.
But it's rarely free in terms of complexity. You're dealing with storage fabrics, iSCSI targets, jumbo frame tuning, and monitoring another critical piece of infrastructure. If you don't have solid practices and visibility around it, shared storage can feel like you've built one big single point of failure… just farther away.
What We're Actually Comparing
To ground the conversation, here's how local storage, NAS, and SAN typically compare:
| Storage Type | How It Works | Feels Like to the OS | Common Protocols | Ideal For |
|---|---|---|---|---|
| Local Storage | Disks installed in the server | Native block devices | SATA, NVMe | High-speed local workloads |
| NAS | File storage over the network | Remote-mounted directories | NFS, SMB | Shared files, backups |
| SAN | Block storage over a network fabric | Appears as a disk | iSCSI, Fibre Channel | Clustered apps, HA setups |
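The "Feels Like to the OS" column is worth dwelling on, and a small sketch can show it. This Python, assuming a Linux host, walks /proc/mounts and labels each mount; note that a SAN LUN is indistinguishable from a local disk at this level, which is the whole point of block storage over a fabric.

```python
# Small sketch mirroring the "Feels Like to the OS" column: walk /proc/mounts
# and label each mount as network file storage (NAS-style) or a plain block
# device. A SAN LUN shows up here as an ordinary disk, just like a local drive.
NETWORK_FS = {"nfs", "nfs4", "cifs", "smb3", "glusterfs", "ceph"}
PSEUDO_FS = {"proc", "sysfs", "tmpfs", "devtmpfs", "devpts", "cgroup2",
             "overlay", "securityfs", "debugfs", "tracefs", "squashfs"}

def classify_mounts(mounts_file: str = "/proc/mounts"):
    """Yield (mount_point, fstype, label) for each interesting mount."""
    with open(mounts_file) as f:
        for line in f:
            source, mount_point, fstype, *_ = line.split()
            if fstype in PSEUDO_FS:
                continue
            if fstype in NETWORK_FS:
                label = "network file storage (NAS-style)"
            elif source.startswith("/dev/"):
                label = "block device (local disk or SAN LUN)"
            else:
                label = "other"
            yield mount_point, fstype, label

if __name__ == "__main__":
    for mount_point, fstype, label in classify_mounts():
        print(f"{mount_point:<25} {fstype:<8} {label}")
```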
It's Never Just One or the Other
In reality, most bare metal operators blend both.
You might boot off SAN but keep scratch space on local SSD. You might run your persistent services off shared volumes, but keep your cache layers local. Even in the same rack, you'll have boxes doing wildly different things depending on what's needed — and the same server might use both models at once.
It's not about being purist. It's about knowing the trade-offs and deploying storage where it actually makes sense for that workload.
The Real Decision Isn't Speed — It's Complexity vs Control
Local storage gives you autonomy. If the box is alive, the storage is too. Shared storage gives you flexibility and resilience — but only if you're willing to architect for it. Both have their place, but you can't assume that “enterprise-grade” means “better.”
To help visualize those trade-offs:
| Consideration | Local Storage | Shared Storage (SAN/NAS) |
|---|---|---|
| Setup Complexity | Simple — just the server | Requires fabric, tuning, maintenance |
| Performance | Predictable and fast | Variable — depends on backend + network |
| Scalability | Tied to the server | Independent from compute |
| Failure Scope | One server at a time | Can impact multiple systems |
| Cost | Lower up front | Higher total cost (but shared) |
Plenty of well-intentioned teams overengineer their storage stack chasing uptime, only to find that they've created more operational overhead than their actual workloads justify. Meanwhile, other teams try to scale on local disks alone and hit a wall when the second node can't pick up where the first left off.
What matters is the lifecycle of the workload: does it need to survive machine failure? Be migrated? Shared? Backed up centrally? If so, centralized storage starts to look good — but only if your team can actually support it.
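Those lifecycle questions can even be written down as a checklist. The sketch below is not a sizing tool, just the same reasoning expressed as code; every input is an assumption you'd answer for your own workload and team.

```python
# Back-of-the-napkin version of the lifecycle questions above: not a real
# decision engine, just the checklist made explicit. Inputs are assumptions.
def suggest_storage(survives_host_failure: bool,
                    needs_live_migration: bool,
                    shared_by_multiple_hosts: bool,
                    team_can_operate_a_fabric: bool) -> str:
    """Return a rough recommendation based on workload lifecycle needs."""
    wants_shared = (survives_host_failure or needs_live_migration
                    or shared_by_multiple_hosts)
    if wants_shared and team_can_operate_a_fabric:
        return "shared storage (SAN/NAS)"
    if wants_shared:
        return "shared storage eventually, once the team can actually support it"
    return "local disks"

if __name__ == "__main__":
    print(suggest_storage(True, False, False, True))    # -> shared storage (SAN/NAS)
    print(suggest_storage(False, False, False, False))  # -> local disks
```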
Final Thoughts
Bare metal makes storage choices unavoidable. It's one of the first architectural forks in the road that actually matters — and unlike virtualization, you don't get a hypervisor to hide your mistakes.
If in doubt, start with local disks. They're fast, simple, and hard to get wrong. Move to shared storage when you know why you need it — not just because it looks better in a diagram.