RustFS Review: The High-Performance Open Source S3 Object Storage
Good morning everyone! I'm Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the world of open source and the technology that I love—and that I know you love too.
First of all, I want to wish you a fantastic start to the year. We are officially in 2026, and I hope you celebrated in the best way possible. With the new year comes new technology, and today I want to talk about a project that I recently had to get involved with for a customer. We needed to replace a very famous S3 solution—MinIO.
The Problem with MinIO (and Why We Need an Alternative)
Let's be honest for a second. MinIO was the standard for a long time. But, alas, like many open-source solutions that eventually go corporate, things changed. I don't wish any ill will toward multinationals or companies playing the stock market game, but when a project starts as open source, it should stay truly open.
Recently, the freedom to use MinIO has become... complicated. It felt like a slap in the face to the user base that helped build it. But as one door closes, another opens. I tipped my hat to the old king and found a new contender that is doing something very special.
Enter RustFS: High-Performance Object Storage
The project is called RustFS. As the name suggests, it is a high-performance, distributed object storage system written in Rust. It is designed to be a drop-in replacement for MinIO and AWS S3.
Why is the "written in Rust" part important? Rust guarantees memory safety without a garbage collector, so it avoids the garbage-collection pauses that can affect runtimes like Go (the language MinIO is written in), which in theory means smoother, more predictable performance. RustFS aims to be:
- Lightweight: It runs as a single binary or container, making it perfect for containerized environments.
- Compatible: It offers full S3 compatibility. If your application works with AWS S3 or MinIO, it should work with RustFS with just an endpoint change.
- Fast: It claims to be twice as fast as MinIO, specifically when handling small files (images, unstructured data).
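Because only the endpoint changes, pointing an existing S3 workflow at RustFS can be as simple as the sketch below. The host, port, and credentials here are placeholders for your own deployment, and I'm assuming the standard AWS CLI:

```shell
# Store the RustFS credentials in a dedicated AWS CLI profile
# (placeholder values -- use the access keys from your own deployment)
aws configure set aws_access_key_id rustadmin --profile rustfs
aws configure set aws_secret_access_key rustadmin --profile rustfs

# Same S3 commands as before -- only the endpoint is different
aws s3 mb s3://my-backups --endpoint-url http://192.168.1.50:9000 --profile rustfs
aws s3 ls --endpoint-url http://192.168.1.50:9000 --profile rustfs
```

If your application uses an SDK instead of the CLI, the equivalent change is usually a single "endpoint URL" setting in the client configuration.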
Key Features
Although the project only started around 2023, it has already matured significantly. Here are some features that really stood out to me:
- S3 Compatibility: Implements the most important AWS S3 features.
- Performance Optimization: Reduces latency and increases throughput, which is critical for AI projects handling terabytes of data.
- Memory Safety: Thanks to Rust, it avoids common memory management issues.
- Distributed Architecture: While you can run it on a single node (which I did for testing), it is designed to run across multiple nodes and disks for resilience using erasure coding.
- Web Console: It comes with a built-in UI to manage buckets, users, and policies.
How to Deploy RustFS with Docker
Getting RustFS up and running is incredibly simple, especially if you use Docker. However, there is one critical detail you must remember: RustFS runs as a non-root user.
You need to create a data directory on the host and assign its ownership to the user ID the container runs as (UID 10001). If you skip this step, you will run into permission errors.
Here is the configuration I used:
# Create data directory and fix permissions (UID 10001)
mkdir -p /opt/rustfs-data
chown -R 10001:10001 /opt/rustfs-data
# Run RustFS Container
# Run the RustFS container (9000 = S3 API, 9001 = Web UI)
docker run -d \
  --name rustfs \
  -p 9000:9000 \
  -p 9001:9001 \
  -v /opt/rustfs-data:/data \
  rustfs/rustfs:latest
Note on Ports: Port 9000 is for the S3 API (where your apps connect), and port 9001 is for the Web Interface.
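If you prefer Docker Compose, the same deployment can be sketched as the fragment below (same image, ports, and data path as the `docker run` command above; remember the UID 10001 ownership on the host directory still applies):

```yaml
services:
  rustfs:
    image: rustfs/rustfs:latest
    container_name: rustfs
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # Web UI
    volumes:
      - /opt/rustfs-data:/data
    restart: unless-stopped
```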
Exploring the Web Interface
Once the container is running, you can access the console at http://your-ip:9001. The default credentials are usually rustadmin / rustadmin (make sure to change these immediately!).
The console is surprisingly rich. You can:
- Browse Buckets: Create and manage your storage buckets.
- Set Policies: Configure access policies, encryption, and tags.
- Versioning & Object Lock: Essential for data protection. You can even set "WORM" (Write Once, Read Many) policies for compliance.
- Lifecycle Management: This is a feature I love. You can define "cold" data that hasn't been accessed in a while and automatically move it to a secondary storage tier.
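Besides the console, lifecycle rules can also be pushed through the standard S3 API, assuming RustFS honors `put-bucket-lifecycle-configuration` (it implements the most important S3 features, so this is a reasonable expectation, not something I have verified). The bucket name, prefix, and endpoint below are placeholders:

```shell
# lifecycle.json: a hypothetical rule expiring old backup objects after 90 days
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Expiration": { "Days": 90 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backups \
  --endpoint-url http://192.168.1.50:9000 \
  --lifecycle-configuration file://lifecycle.json
```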
I also tested it with an external client, Cyberduck, and it worked flawlessly using the generated access keys.
Backup and Resilience
For my test, I used an open-source backup tool (similar to Restic or Kopia) to push encrypted chunks of data from my PC to RustFS. It handled the workload perfectly.
While I haven't tested the full distributed architecture with multiple nodes yet, resilience comes from erasure coding: objects are split into data and parity shards spread across disks, so even if a drive fails (or several, depending on the parity level), your data can be reconstructed.
Conclusion: Is it the Future?
RustFS is positioning itself as a serious enterprise open-source solution. It's not the only one out there—projects like Garage (developed by a French team) are also great, though Garage is more focused on edge computing and low-end hardware.
However, if you are looking for a high-performance, drop-in replacement for MinIO that respects the open-source spirit, RustFS is absolutely worth trying. I'll be testing it more in the coming months for my client's backup repository, and I'll definitely share more feedback.
Let me know in the comments if you are using other S3 solutions or if you plan to give this one a spin!
That's all for today. Have a great start to the year, and I'll see you next week!
Dimitri Bellini
Quadrata Channel
📺 Subscribe to the Channel: Quadrata on YouTube
💬 Join the Community: ZabbixItalia Telegram Channel