Thanks so much for facilitating this, Mike! It sounds like there are a few technical bumps in the road to iron out based on what I'm reading.
I managed to successfully pull down all the files as manual groupings of ZIP files (generated on Google's end to let me download multiple files at once). Granted, I was downloading within 10 minutes of the email going out, so I may have beaten the rush; everything was smooth and painless as far as downloading went.
However, Google only allowed download sizes of 2GB - at least, it would only ZIP files on the fly up to a 2GB maximum for multi-file downloads, as opposed to posting/serving single 10GB files. So the steps I took were:
1. Select 15-17 files or so at a time
2. Select "Download", then download "As-Is" and ZIP them.
3. The estimated duration was over 5 minutes, so I told Google to "Email Link"
4. Rinse and repeat 1-3 the whole way down the list
5. Downloaded all ZIP files as they came in, about 16x 2GB ZIPs total
6. Unzip each one, check against the Google Drive master copy
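For anyone repeating steps 5-6 on their end, here's a rough sketch of how the unzip-and-check pass could be scripted. This is just an illustration, not part of the actual process I followed: the directory names and the function name are made up, and "checking" here means a CRC integrity test of each archive (comparing against the Drive master copy would still need a file listing or checksums from that side).

```python
import zipfile
from pathlib import Path

def unzip_and_verify(download_dir: str, out_dir: str) -> list[str]:
    """Unzip every downloaded archive and return the names of any that
    fail a CRC integrity check (hypothetical helper for illustration)."""
    bad = []
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for archive in sorted(Path(download_dir).glob("*.zip")):
        with zipfile.ZipFile(archive) as zf:
            # testzip() returns the first corrupt member's name, or None if clean
            if zf.testzip() is not None:
                bad.append(archive.name)
                continue
            zf.extractall(out)
    return bad
```

With roughly 16 of those 2GB ZIPs, a loop like this saves a lot of clicking, and the returned list tells you which archives to re-download.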
So maybe posting ZIP (or RAR) files would be best in the future, even if broken up into 5GB - 10GB files (not spanned archives, just grouped into parts). Having users download files one at a time, or forcing Google to number-crunch ZIP creation for each user, creates a lot of unnecessary server bandwidth overhead and could be contributing to the problem overall.
Maybe there's a better solution than Google Drive going forward? Only pondering, since it seems Drive isn't designed to handle this type of load. Maybe Gobbler? I've never tried it at a mass scale like this, but from having used it and understanding how it functions, it seems built to push this amount of data, and it allows direct download links to be sent (avoiding publicly available links - it even lets you choose whether a link is private or public). It also handles things somewhat like Aspera: to us users it looks like we're uploading a ton of individual files, but on the server side it's all contained in a single Package, making the retrieval process just as easy. I believe it even allows server-side compression geared for audio data. It seems like an alternative worth considering that mimics Aspera without the overhead/infrastructure cost of actually running Aspera faspex.
Regarding the M/S question, please let us know if you find which file it is again, I'm curious as well.
Thanks, and hope this feedback helps!