Storage Isolation: Pros, Cons, & Money


When designing storage solutions for your users, you may come across one challenge: storage isolation. How do you ensure that user A cannot see or delete user B's data?

There are two strategies: First, you can build physically separated customer environments, including the storage targets. Second, you can rely on a logical layer of separation and share storage resources.

In this article, we will compare the two so you can understand the benefits and trade-offs of each approach.

Storage Isolation — Physically Separated Environments

Consider the following simple video encoding workflow:

In the first stage, you receive raw content, which can arrive via S3. The second stage works on that very same data, for example, transcoding the content into different formats. The third stage delivers the content, for instance by acting as origin storage for a CDN.

All three stages have one thing in common: They work on the very same data. That implies one important characteristic for the storage system: It must be accessible by many clients at the same time. One example that fulfills this need is NFS.
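To make this tangible, here is a minimal sketch of how such shared storage could be wired up in Kubernetes (the server address, path, names, and sizes are all placeholders): an NFS-backed PersistentVolume with the ReadWriteMany access mode, which lets many pods on many nodes mount the same data simultaneously.

    # Hypothetical NFS-backed volume; server, path, names, and sizes are placeholders.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: media-nfs-pv
    spec:
      capacity:
        storage: 10Ti
      accessModes:
        - ReadWriteMany        # many pods on many nodes can mount it at once
      nfs:
        server: nfs.example.internal
        path: /exports/media
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: media-nfs-pvc
    spec:
      storageClassName: ""     # bind statically to the PV above, not a default class
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Ti
      volumeName: media-nfs-pv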

Out of the three, stage two is the most interesting: A transcoding job requires a lot of CPU resources for a short period of time. This makes it a perfect fit for an orchestrator like Kubernetes. When a new video file arrives, Kubernetes creates several containers running the transcoding job. Once transcoding is finished, CPU resources can be scaled down again to reduce costs.
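A minimal sketch of such a scale-out transcoding job, assuming a hypothetical worker image and script (image name, command, and resource numbers are placeholders); it reuses the shared claim from the sketch above:

    # Hypothetical transcoding Job; in a real pipeline each pod would pick
    # its own file or segment from the shared volume.
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: transcode-job
    spec:
      parallelism: 4           # several pods work through the backlog in parallel
      completions: 4
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: transcoder
              image: ffmpeg-worker:latest              # placeholder worker image
              command: ["transcode.sh", "/media/raw", "/media/encoded"]
              resources:
                requests:
                  cpu: "4"     # transcoding is CPU-hungry for a short time
              volumeMounts:
                - name: media
                  mountPath: /media
          volumes:
            - name: media
              persistentVolumeClaim:
                claimName: media-nfs-pvc               # the shared storage from above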

Stage three is, again, very simple: Data needs to be read by some software (NGINX being a classic web server example) and delivered to the users.

Now, switch the perspective from architecture to operations: You probably want to cover changes to this kind of infrastructure with proper change management. For example, introducing new transcoding software is something that needs thorough testing. That is not necessarily an issue: Infrastructure today is built following immutable infrastructure patterns and can be rebuilt or reproduced with very little effort. Usually it is a one-liner that can be fired off by some automation software.

A practical example is writing a simple Helm chart that builds a system like the one depicted above and running “helm install production <myHelmChart>”. Then, build another instance of that infrastructure with “helm install testing <myHelmChart>”.
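As a sketch, assuming the chart exposes a parameter for the storage target (the parameter name and values are hypothetical), the two releases would differ only in their values files:

    # values-production.yaml (hypothetical chart parameter)
    storageTarget: media-production

    # values-testing.yaml
    storageTarget: media-testing

Adding “-f values-production.yaml” or “-f values-testing.yaml” to the two helm install commands above then yields two identical stacks that differ only in where they store their data.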

That changes the picture to something like this:

As you can see, the infrastructure pattern is exactly the same. There is only one small difference: The storage target is not the same. And there is a very good reason for that: You do not want to deliver test content (probably cat videos) to users who are paying for the latest blockbuster.

Needing two different storage targets gives you two choices. The first is to use separate storage clusters, which provides a very good level of storage isolation but is very costly. The second is to use a storage solution that can identify and differentiate between production and development clients.

Storage Isolation — Logical Layer Separation

With the second choice, NFS reaches its limits. Access control for NFS is usually based on IP addresses, and that is something you cannot rely on in a containerized world. NFS is also not the best choice from a performance point of view: In the second stage of our example, the transcoding process, you will only be fast if a large number of clients can access the storage in parallel. NFS suffers from bottlenecks because all requests must traverse a single node.

The solution to these bottlenecks is a parallel file system, which distributes read and write traffic across the whole storage cluster.

Let’s summarize what we just discussed. To be able to support modern transcoding pipelines, your storage should:

  • Support multi-tenancy for security reasons
  • Be a parallel storage system for performance reasons

There are some things on the storage system side that could make your life easier. Storage provisioning should be possible from within Kubernetes, so that it can be automated as well. And of course, it would be nice not to have to run an S3 proxy inside Kubernetes. What if your storage system already included it?
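As a sketch of what that automation could look like, assuming your storage system ships a CSI driver (the provisioner name and all other names here are placeholders): with a StorageClass in place, a claim is all a pipeline needs to request fresh storage.

    # Hypothetical StorageClass; the provisioner is a placeholder for
    # whatever CSI driver your storage system provides.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: shared-media
    provisioner: csi.example.com
    ---
    # Any new claim against that class is provisioned automatically.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: media-dynamic
    spec:
      storageClassName: shared-media
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Ti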

We can say that importing content is a solved problem. We can also say that reducing complexity is always a good idea. So why not use the very same object storage for delivering content? That simplifies things again. Let’s paint it in nice Quobyte colors:

You can see in the image that the Kubernetes side is not as complex as before. The basic idea behind that is simple: Let Kubernetes do what it is good at (scheduling dynamic resources) and leave the rest to the storage system.

Now, we have a longer checklist for our storage system to comply with:

  • An object storage interface for data ingest and content delivery
  • Native POSIX file access for transcoding jobs
  • Unique access credentials that identify the storage payload at the ingest stage (cat videos vs. blockbusters)
  • The same identity mechanism for native file access during transcoding
  • The same file space for object storage and native clients (we do not want to copy or duplicate our content)
  • Parallel file access for many transcoding clients
  • Kubernetes integration
  • The ability to organize different tenants

Quobyte is a highly parallel file system that offers all the capabilities outlined in our checklist. You can safely ingest cat videos into your testing cluster while all customers get the content they expect. Quobyte also offers direct access for all clients while using a unified space to store data. This means data can be reached via S3, native mounts (Windows, Linux, macOS), or even, if really needed, via NFS. Besides that, Quobyte ensures data integrity and service availability at the highest levels.

Storage Isolation Options

First, there is the traditional way: a separate testing infrastructure targeting an isolated storage system, plus a production stack running in a different place. On the “pro” side, you get strong layers of isolation. On the “con” side, you pay for two storage solutions and for two infrastructure environments (such as Kubernetes clusters). And whenever you need a new environment, you will need a new storage cluster.

The other alternative is to rely on logical separation. You can run different workloads isolated in Kubernetes namespaces, and you can separate storage access on a logical level as well. When a user mounts a development storage unit (a.k.a. a volume), they are forced to authenticate. With those credentials, there is no way to access production storage at all, simply because the two volumes belong to different tenants.
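A minimal sketch of that pattern, reusing the hypothetical CSI provisioner from above with one StorageClass per tenant (all names remain placeholders):

    # Hypothetical per-tenant setup: credentials issued for the "testing"
    # tenant simply cannot reach volumes in "production".
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: storage-testing
    provisioner: csi.example.com    # placeholder CSI driver, as above
    parameters:
      tenant: testing               # hypothetical driver parameter selecting the tenant
    ---
    # Claims in the testing namespace reference only their own tenant's class.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: media
      namespace: testing
    spec:
      storageClassName: storage-testing
      accessModes: [ReadWriteMany]
      resources:
        requests:
          storage: 1Ti

A mirrored pair (a "storage-production" class, claimed from the production namespace) completes the picture; the two releases from the Helm example would each reference their own class.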

Built on that foundation, you can scale production workloads and replicate infrastructure as needed.

Want to Know More About Quobyte?

Schedule a call with us to learn more about Quobyte and our Editions

Contact us!

Originally posted on Quobyte’s blog on December 28, 2021.
