Multiple Hard Drive Best Practices: Partitions, Libraries, Data and Mitigating Streaming Bottlenecks in an NVMe and SSD setup

A bizarre and unnecessarily complex setup. 😂
People still try to make this rocket science.
I can't even remember the last time I partitioned a drive. Folders work just as well.
Some people have a habit of putting VSTs on a separate drive. The DLLs themselves take up hardly any space; it's usually the folders holding presets and the like elsewhere on the system that do. I'm not referring to sample libraries.
 
Hello Wisest Ones;

I have just purchased a new computer and am in the setup phase. You know - the one where you re-think your entire setup and how to do it better from scratch :)

I am a producer for TV and film, so I often run fairly large setups that run the gamut from full orchestral templates to hybrid productions of (a few dozen) audio tracks plus the other "usual suspect" VSTs (Kontakt, Omnisphere, Spitfire, etc.).

I am currently musing on best practices for drive partitioning and data location for maximum speed and efficiency and would love your thoughts.

I will have three drives: 2x 4TB NVMe and 1x 2TB SSD. My first draft of partitioning is based on my current setup and looks like this:



Drive 1: NVMe 1 (3.7 TB)
OS : 600 GB
Users: 800 GB
Projects 1: 1000 GB
Projects 2: 1000 GB

Drive 2: NVMe 2 (3.7 TB)
Instrs 1: 1200 GB
Instrs 2: 1200 GB
Instrs 3: 800 GB
Loops: 500 GB

Drive 3: SSD 1 (1.8 TB)
Aux: 500 GB
Restore: 500 GB



The storage amounts are fine by me, and will give me room for growth over a projected 8-year horizon - but I would love to have your thoughts specifically on the placement of the INSTRUMENTS partitions (i.e., the sample data for my VSTis), as most of the bandwidth usage will come from there.

Let me explain further:

With all my previous systems (containing HDDs and SSDs), I have put each Instrument Drive (1, 2 and 3) on different PHYSICAL drives. This allows for more parallel bandwidth when streaming samples, and is an excellent optimization hack (FYI!).

However, this new system is a different story with the NVMe M.2 drive technology being (arguably) over 10 times faster than my older SATA 3 setup. So I am wondering:

1. Is the bandwidth of each NVMe drive so massive that there is no way I could bottleneck the streaming, so I can just put all my instruments/samples on the same drive?
2. Or should I play it safe and stick with the (arguably not yet antiquated) best practice of parallel streaming from different physical drives?

I will be running an AMD 7950X with 64 GB of DDR5 RAM. I have a robust backup system and am not concerned with RAID setups or drive lifespan.

Any thoughts appreciated - thanks! :)
Ok. I'm going back to your original post.

My point of reference: we run multiple workstations (music composition and sound post) in a facility structure. So, YMMV, but I use a lot of the facility thinking within my own personal composition setup.

Projects first. These need the least bandwidth, and are arguably the most important part of your setup from the point of view of backups. We now run ALL projects off a 10GbE NAS. The NAS takes care of daily snapshot backups, and a fully redundant secondary NAS mirrors both the storage pool AND the snapshot pool. Finally, snapshots are backed up to our own private cloud.
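
For anyone who wants to roll something similar by hand rather than via a NAS GUI, the scheme is just scheduled snapshots plus replication. Here's a minimal sketch in Python - the dataset name (tank/projects) and SSH alias (backup-nas) are placeholders, and it simply shells out to the standard zfs commands:

```python
#!/usr/bin/env python3
# Rough sketch: daily ZFS snapshot on the primary pool, mirrored to a
# second box. Dataset/host names are placeholders for illustration.
import subprocess
from datetime import date

DATASET = "tank/projects"   # hypothetical project dataset
REMOTE = "backup-nas"       # hypothetical secondary NAS (SSH alias)
snap = f"{DATASET}@daily-{date.today().isoformat()}"

# 1. Take today's snapshot.
subprocess.run(["zfs", "snapshot", snap], check=True)

# 2. Replicate to the mirror NAS. (A real setup would use incremental
#    sends with -i and prune old snapshots; omitted to keep this short.)
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["ssh", REMOTE, "zfs", "recv", "-F", DATASET],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```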

At home, I am now starting to run my projects off a similar NAS, and I back that up internally to snapshots, then mirror that all on the backup NAS at the studios, which also goes to the (private) cloud.

While I love FreeNAS/TrueNAS, we run QNAP Hero with lots of precautions to mitigate hacks (which have been a problem for others not running firewalls / exposing their NAS to the wider web). Our particular approach is to run pfSense on its own hardware, and we ask those connecting to our system to use a similar approach. Similar thinking to: https://www.servethehome.com/inexpe...firewall-box-review-intel-j4125-i225-pfsense/

You don't need to worry about partitions etc for projects and the like. I understand the thinking behind this from 10+ years ago (and why so many folk did it) but it is likely needlessly overcomplicating things.

As for the NAS, choose a file system that you know will be able to be opened on new hardware in the future. We are using ZFS.

1GbE is completely fine (but not speedy) for a single user on a single NAS. However, you will notice slower save times compared to working locally. We use 10GbE (and a solid-state storage pool) and it's superb for speed, with absolutely no trouble running multiple studios with large sessions all at the same time. For home use, I'd look at inexpensive 2.5GbE.
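
To put rough numbers on that save-time difference (back-of-envelope only - I'm assuming ~80% of line rate is usable after protocol overhead, and a hypothetical 2GB session file):

```python
# Back-of-envelope: time to save a large session over different links.
session_gb = 2.0                         # hypothetical session size, GB
overhead = 0.8                           # assume ~80% of line rate is usable
for name, gbps in [("1GbE", 1.0), ("2.5GbE", 2.5), ("10GbE", 10.0)]:
    usable_mb_s = gbps * 125 * overhead  # Gbit/s -> MB/s, minus overhead
    print(f"{name}: ~{session_gb * 1000 / usable_mb_s:.0f} s")
# Prints roughly: 1GbE ~20 s, 2.5GbE ~8 s, 10GbE ~2 s
# -- hence "completely fine, but not speedy" for a single user.
```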

So why do I say this? Well, a NAS is just more robust than connected or internal storage. It's designed so that all backups happen no matter which computer is connected to it. Change your computer and you don't need to set anything up - just connect to the network. You can then also control things like outside connections (if you want access to your data remotely, etc.). Management of the data is super easy, and you can also protect the data if needed (more important in a multi-user environment).

It also makes setting up redundancy within the pools simple. Redundancy doesn't equal backup; it just means you don't need to stop working when a drive fails. Zero downtime is the key when you have a delivery for a Netflix show that is mixing tomorrow!

Then samples.

I've written on here multiple times that I use single large NVMe drives these days. PCIe Gen 3 8TB drives are awesome for samples. I use an external Thunderbolt system, but internal is cool too. The way I see it: sure, I'd like more space, but this is enough for the next few years, and by then I'll be able to get a single 16TB drive. Indeed, I have a single 15TB U.2 drive sitting next to me right now for some other tests, which would actually make an AMAZING sample drive with the right setup. Way too expensive for most composition purposes (but worth it for other data purposes).

And if you want to gradually expand a single sample volume (which I recommend rather than spreading samples over multiple drives etc), then look into disk spanning / amalgamation software for sure. Some of it is amazing. Look into things that use file systems that are modern AND well supported.

These days, while it is a PAIN to rebuild sample libraries from scratch, it also isn't nearly as hard as it used to be. Indeed, I'm about to embark on doing exactly that for my Mac Studio system. With Kontakt 7, I'm going to organise things differently to how I've done it in the past. That's just my OCD kicking in - completely unnecessary, and more a symptom of not having ideas for a theatre score that I really need to get moving on...

Modern SSDs - be they SATA III, NVMe, U.2 or the like - are all amazing for samples. I doubt anyone would see any sort of bottleneck even running a SATA III based SSD system. And if you go NVMe, I don't see anything in the near future (other than growing capacity on single drives) that would force you to change or upgrade your system.

I have run sessions with thousands of voices at the same time. We run internal tests for qualifying systems in our studios, using our own custom Kontakt sample libs. We have never got close to the limits of NVMe drives under ANY stress test. All the bottlenecks are way back down the line, in the single core (zero core) needs of real-time audio systems. Old film mixes used to run off multiple computers. Many places still do that. However, even the largest Hollywood film could stream off a single drive with no stress these days. Most have moved to NAS-based storage, which is only 10GbE. That's not as fast as ANY NVMe (and caps out at roughly two parallel SATA III SSDs).
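
The arithmetic behind that is easy to sanity-check yourself. A quick sketch with assumed figures (worst-case uncompressed streaming; the drive numbers are typical spec-sheet values, not measurements):

```python
# Worst-case streaming bandwidth for N voices, uncompressed 24-bit/48kHz.
voices = 2000                      # an extreme orchestral session
per_voice_mb_s = 3 * 48_000 / 1e6  # 3 bytes/sample * 48k samples/s ~ 0.144 MB/s
total = voices * per_voice_mb_s    # ~288 MB/s for 2000 voices

# Nominal sequential throughput in MB/s (typical figures):
drives = {"SATA III SSD": 550, "10GbE NAS": 1250,
          "PCIe 3.0 NVMe": 3500, "PCIe 4.0 NVMe": 7000}
for name, mb_s in drives.items():
    print(f"{name}: {mb_s / total:.1f}x headroom")
# Even 2000 voices needs only ~288 MB/s -- about half of ONE SATA SSD, and
# a rounding error for NVMe. (And 1250/550 ~ 2.3 is the "10GbE caps out at
# about 2 parallel SATA III SSDs" point above.)
```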

Backup for samples doesn't need to be as robust as projects, as most of the time it's just a matter of waiting for new downloads. I keep archives of all downloads here locally on the network to speed things up, but I don't (rightly or wrongly) keep a full backup of my sample drive any longer. I have two complete computer systems I compose from, each with its own sample drive. The laptop has an 8TB internal NVMe, and the Mac Studio uses an 8TB NVMe drive in an enclosure. I just carry the external NVMe drive with me as a backup when travelling with the laptop. This backup does NOT work for some non-Kontakt libs, though - thus having archives of the installers close at hand (and I can access them remotely if for some reason I can't re-download from a sample company!). Would I like a better solution? Sure! But the complexity is already enough, and I've weighed up the risks.

Anyway. There's my 2c. I'm getting back to building my new sample drive and hopefully making some noise.
 
You linked directly to an Apple support page explaining how to set up RAID 0, 1 and JBOD
Scroll down to concatenated disk set. Not sure why they call it JBOD, because that's the default (meaning that if you connect a bunch of drives to your computer, they're automatically seen as just a bunch of drives).

I'm not sure where you got the idea that I'm trying to combine drives between Windows and macOS, but I'm not. I'm just looking for a quick and easy solution for making any disks I want part of a larger "virtual JBOD" so that I can refer everything to the same place - like DrivePool seems to do. Actually, I just picked up a 4TB drive, so space isn't an issue externally, just internally.
Okay, never mind then. Maybe I'm confusing this with another thread, or another post in which someone asked for the same thing on Mac.

I’m failing to see your point?

You have never used DrivePool, and it's nothing like what you used, on any level.
I have no problem with you doing whatever works for you.

My point is simply this: if you use software that does stuff behind the OS' back, you're asking for trouble next time you change anything - based on my experience. The computer is very likely to get confused.

I'm not sure why you find it so annoying that I'd say that!
 
DrivePool works at kernel level, so again, your post is nonsense.

I’m annoyed as you’re posting disinformation.
 
DrivePool works at kernel level, so again, your post is nonsense.
The kernel level is at the very root of the OS, I think?

What happens when you need to update to the next OS version? Does anyone know?

Is that just being paranoid, or is it a rational fear? I admit that both are possible. :)


I’m annoyed as you’re posting disinformation.
No, I'm posting misinformation!

Disinformation implies that it's deliberate misdirection propaganda.

(Sometimes I can't help being an ass. :) )

But it's information that my *opinion*... well, you know my opinion. Files and folders seem like a perfectly good system to me.

Anyway, I still love you whether or not we agree about this historically important matter.
 
An OS update makes no difference.

You could reformat your OS, install DrivePool, connect the drives from the previous Windows build, and the pool would be recreated automatically.

Do some research on how DrivePool works before creating problems that do not exist!
 
Wow - @colony nofi - this is great information, and very similar to my setup, to which a NAS is also integral. We use it primarily with Qsync, to make sure files are synced locally to all respective workstations (including a mobile laptop). All installations and files are mirrored, and that works incredibly well. Of course, local snapshots and remote backup are part of the strategy as well.

I also have a similar take now on partitions (nay) versus folders (yea) versus drive pooling (maybe eventually, but not needed for years to come). Currently, I have 8x'd my storage going from SATA 3 to NVMe, so I figure I am OK for 5+ years, given my past growth.

Thanks so much for the in-depth thoughts! :)
 
I was skeptical of DrivePool as well - but after my own research into it, its feature set and user reviews, it looks *extremely* robust, well-thought out and forward thinking. Although I don't think I have a current need for it, I will be testing it out on my home system.
 
So, on Qsync:
It's interesting that you use it for workstations, as our tests showed much less risk in just running off the server itself. No multiple copies - just whatever is on the server. And the server handles it extremely well.

Qsync can be good for a shared "documents" folder for a company / group of people. We use dropbox for that for legacy reasons. Actually, your mention of qsync has me thinking I'll look at this more closely again. I like having ALL our data in one place.
(my dropbox IS backed up to the NAS - this is extremely inefficient - I know!)

I.e., anyone who wants to open a project duplicates the project file, renames it according to our system, and that project is labelled to show others that it is being edited. We have an AppleScript that handles all of that fairly seamlessly.

For anyone interested - this is post-pro related, but there are ideas in here for collaborative long-form composition as well.

We are also developing a workflow where there is one master project, and only "sections" are copied out to an editor. Let me try to explain.
Old-school long-form editing means you have dialog, atmos, sfx, music edit, foley (and sometimes multiples of these) in separate sessions, which are only brought together come pre-mix time.
Nowadays folk want preview and work-in-progress mixes ALL the time.
There are many ways of doing this - and we are looking at just one of the methods.
Essentially, we have one master session.
It starts as a template with no audio in it. It is loaded with temp audio, EDL data and AAF data from the edit, plus pictures.
At the end of each day (or the start), each section (dialog, foley, sfx etc) is rendered out to a stereo file as it is in that session. This is extremely easy these days with the way DAWs for post are set up.
If I am a dialog editor, I run a script that essentially imports the dialog tracks as they are into a new session, as well as the stereo mixes for the other sections (and temp audio) + the video (which is always the latest video).
That way, I'm hearing everything "in place". I can edit dialog at approximately correct levels (keeping an eye on EBU meters all the way), knowing that things are in a good ballpark all the time. I can hear progress on fx / atmos as I go - so if something isn't jelling, I can go talk to the other editor about how it might work better.
At the end of the day, I check the tracks back into the session. This involves the old tracks being moved to another set of "old" tracks with their own dated playlist; those are turned off. The new tracks are turned on, and the process repeats daily.
Someone who is premixing will see that the dialog tracks are "checked out" (by the way they are disabled and replaced by a stereo render). If a mix is needed urgently, it can be made super quickly using the stereo stems. No one is after masterful mixes at this stage - just good indications.
And the mixer can use separate tracks to do level correcting between the sections using VCAs, which can be printed back to other tracks later if required.
No wasted time - all premix decisions CAN be applied to edited tracks if that's wanted, or if it's quick and nasty, the rides can be thrown away.
For TV where budgets are not high, this is a complete game changer. It means a small facility that hasn't been given huge budgets can spend more time on the creative. Which is a win for EVERYONE!
 
@colony nofi This is so interesting.

Regarding Qsync, to give you some context, because my situation is a bit different than yours: I am doing composition and production for television, so streaming samples and VSTs over the network is a no-go for me.

I have two workstations, and all project files, assets, recorded audio, tools etc. are synced in real time to the server. The server serves as a central hub for file access if need be, but functions more as a snapshot backup machine, project archive and media server. It is also backed up daily to a remote location. No backups happen on the production machines - all backups are done from the central file server.

The other advantage to having multiple copies locally on each workstation is redundancy - if a system goes down, the other one is ready to go.

Obviously, there are some files that are not shared (licensed VSTs etc), but using the network only for sync, cloud and backup gives me a *very* robust production workflow on however many machines I want to sync up. Naturally, each production machine has the exact same file structure.

To speak to your check-in-and-out system, we use a very similar method: if a technician is working on a file, it is copied and renamed with our nomenclature, and a CHECKED OUT suffix is added. It's ghetto, but man, it works - you can instantly see on *all* machines who is working on what, where and when.
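
For the curious, the whole trick really is just copy-and-rename - anything that shows up in the synced folder is instantly visible everywhere. A minimal sketch (the paths and naming scheme here are made up for illustration, not our actual nomenclature):

```python
# Sketch of the "check out" convention: copy the project file and tag the
# working copy so every synced machine can see who has it and since when.
# Paths/naming are illustrative only.
import shutil
from datetime import date
from pathlib import Path

def check_out(project: Path, tech: str) -> Path:
    working = project.with_name(
        f"{project.stem}_{date.today():%Y%m%d}_{tech}_CHECKED OUT{project.suffix}")
    shutil.copy2(project, working)  # the original stays untouched as the master
    return working

# e.g. check_out(Path("Q:/Clients/ShowName/Ep101_Cue03.ptx"), "tim")
```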

The Qsync system also works great when including a laptop in the equation - everything done remotely is instantly synced across all machines. If I save a great new track stack or reverb preset, it's all accessible on all machines.

I will be updating my QNAP machine this year - I absolutely love it, and it has become a stable and secure backbone of my studio and business. Highly recommended! :)
 
So interesting.

Thanks for that info.

I am personally a composer, running VERY similarly to you in that I have two complete systems which I work from - and a third for an assistant if ever that is required (I don't get that luxury much). My two working systems are a Mac Studio Ultra and a new 16" laptop. The laptop is great if I'm on a theatre project or on site for an installation; for film/TV I mainly stick to the studio.

And of course, samples are all tied to each machine with local storage - internal on the laptop, Thunderbolt NVMe for the studio. However, project files I run directly off the 10GbE NAS without any issues at all. No need to run Qsync, and everything is always there. I've run horrendously large orchestral mix sessions for immersive installations (think very high speaker channel counts) with hundreds of recorded tracks plus 90+ designed instrument tracks, and I don't notice auto-saves; opening speed is sweet. It would be faster internally, but not by much.

But I'm going over in my head why it MIGHT be a good idea to use Qsync like you do for this workflow. Obviously it won't be used for the post studios :).

A downside is just the need for additional internal storage - and I like having LOTS of old projects on hand. I keep 5 or 6 years' worth on the NAS at the moment, and unfortunately that is a tonne of data, mostly due to recording sessions and mix sessions for Dolby Atmos / immersive media (where bounces are many, many tracks, often with long durations and many versions). I was shocked that a recent museum immersive project was 250GB in size at the end. I never thought as a composer I'd generate that much data! But I also get sent a lot of "too hard" projects for some reason...
If it wasn't for the storage requirements, I might just go for it. After I finish this current theatre project, I think I'll have a little break (like that ever really happens) and give it a go then. That's March. We will see.

Thanks for the interesting conversation. There's lots here for others, which is one of the beauties of this forum.
 
Very interesting indeed sir!

I highly recommend Qsync to sync all "active" files on all machines (Projects, Libraries, Resources etc) - then you just store your ARCHIVES on the NAS without syncing.

To provide some context: my current Client folder is about 650GB in size. This gets synced in real time across all my machines - no big deal, obviously, because the initial data seed was made years ago.

Good luck to you, and thanks for all the thoughts!
 
Just upgraded storage using DrivePool

Let's wake this thread up :) I owe @easyrider a big thank-you for suggesting DrivePool. Awesome add-on s/w ($30) that has given my new storage solution very flexible expansion options with zero OS ill effects. I admit to being a computer nerd so it was quite easy for me to qualify the solution.

My system is a Dell Optiplex 5000* + Win10. I just upgraded from 1TB of internal SSD + a slow USB spinning Seagate backup drive to 1TB of fast internal SSD for programs/documents/vids + 3TB of fast internal NVMe SSD for samples + 2 TB of slower/cheaper USB NVMe SSD (actual 950 MB/s) for backups.
(* Dell is NOT a good choice for realtime apps as they preload tons of crap Dell services that literally kill realtime performance --I "fixed" this but that is another story...)

Before the upgrade my samples were scattered in multiple folders on C:; afterwards they are all folders on a 3TB DrivePool V: volume, which combines a fast (actual 3500 MB/s) 2TB SSD on a PCIe NVMe expansion card with a 1TB partition on the motherboard's upgraded, faster 2TB SSD (actual 6400 MB/s) -- the base 1TB partition is for programs/docs. When it comes time to get more storage I have several internal and external options, but they will all simply expand the size of V:

For backups now I'm using a single DrivePool volume X: that has a medium speed (cheaper) 2TB SSD in a dual Orico NVMe enclosure connected to a USB 3.2 Gen 2x1 port. X: has 2 backup folders: "X:\Samples backup" and "X:\Documents backup." Expanding backup storage in various ways will simply increase the size of X:

I can provide details, but the main thing here is that using DrivePool has, among other things, made adding future storage (via internal NVMe/SATA-3 or external USB) child's play to implement.

Monitoring SSD status

I'm using Hard Disk Sentinel Pro ($33) (vid) to monitor the status (temp, SSD life, drive health, etc.) of _all_ internal and external disks in real time. Awesome tool, with tons of features and notification and disk-test options.

Any q's fire away.
 
Just a little heads up - it might be system related, but it's worth mentioning: I purchased DrivePool a while back and it totally slowed down the patch loading time in Kontakt. Maybe it is because DrivePool added some management overhead, since I'm using 10 SSDs. Anyway, I went back to my old method of adding symlinks, which restored the initial loading speed.
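
For anyone unfamiliar with the symlink method: each library stays on its own physical drive, but they all appear under one virtual folder, so Kontakt (or anything else) only ever sees a single path. A rough sketch of the idea - drive letters and names are examples only, and on Windows, creating symlinks needs admin rights or Developer Mode:

```python
# Gather libraries scattered across drives under one virtual root via symlinks.
# Paths are examples only.
import os

VIRTUAL_ROOT = r"D:\SampleLibraries"       # the one path the sampler sees
libraries = {
    "Spitfire":   r"E:\Libs\Spitfire",
    "Orchestral": r"F:\Libs\Orchestral",
}

os.makedirs(VIRTUAL_ROOT, exist_ok=True)
for name, target in libraries.items():
    link = os.path.join(VIRTUAL_ROOT, name)
    if not os.path.lexists(link):          # don't clobber existing links
        os.symlink(target, link, target_is_directory=True)
```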
 
I highly recommend Qsync to sync all "active" files on all machines (Projects, Libraries, Resources etc) - then you just store your ARCHIVES on the NAS without syncing.
@TimRideout, from your experience, are there any quirks with Qsync that should be noted? I've been testing it this way for a few days and it seems kind of great. It has replaced most of my manual and scheduled sync jobs with a real-time solution.

My current testing phase is using my library and project drives (NTFS) on my Mac via Paragon NTFS. So far, Kontakt performance is indistinguishable. I am still waiting on a response from Paragon about how deletes and new files are tracked between the indexing methods of each OS... this part is the final puzzle piece for an effective cross-platform workflow.

CAUTION: For those trying this kind of Mac/PC hybrid project/library drive, here are 3 major cautions:

1) Use safe unmounting in Windows; it can be done easily from the taskbar tray. Or, if using an OWC dock, they offer a free app.
2) DO NOT delete the trashes folder created by macOS.
3) DO NOT use APFS on Windows (unless set to read-only). Use NTFS on Mac via Paragon NTFS. Beware that some macOS updates may break the NTFS driver... don't rush to update unless there's a relevant security reason. Check for Paragon updates first if possible.
 
I took some time to adjust and lightly document the storage and backup scheme. Below is a concise visual of the setup, which allows taking projects remote while syncing changes to the main studio through a VPN tunnel, with automatic scheduled backup of all project/asset data. All SSDs except the Backup Drive. Other 'passive' HDD backups exist, but are not shown in this 'active' setup.

- QNAP NAS hosts all projects/assets/renders. Renders and published data are automatically sent to OMV NAS.
- OMV NAS allows selective public access to renders via Plex Media Server, SMB, and experimental web services.
- Desktop hosts sound libraries locally, and gets project data from QNAP.
- Laptop/Mac share an external enclosure with all projects and libraries for remote work. Projects sync with QNAP over WireGuard VPN when connected to internet (QSync).



[Image: JBLONGZ DATA TOPOLOGY-2.png (data topology diagram)]
Wish there was a QNAP with RAID4.
 
I purchased DrivePool a while back and it totally slowed down the patch loading time in Kontakt... went back to my old method of adding symlinks.
That's good to know!
 