I've read the very thorough documentation on rclone's website, and between Google Drive and OneDrive for Business... it appears OdFB has a lot more restrictions in terms of file names, file size, and number of files. This has me concerned, as I have the decision-making ability to select either Google Workspace or M365 E3 as our provider. I intend to use rclone via TrueNAS' implementation to back up to one of these providers.
How does rclone handle the scenario here?
Does rclone now avoid putting more than 50K files in a single directory by chunking files into another directory, or is this still a risk? Or has TrueNAS mitigated this somehow?
What is your rclone version (output from rclone version)
Which cloud storage system are you using? (eg Google Drive)
Haven't decided yet, leaning towards OdFB.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
that is a tough question to answer without knowing your use-case.
onedrive is well known for throttling connections; there are many posts in the forum about that.
the rclone back-off algorithm, which i believe is exponential, has issues, perhaps a bug. @ole is working on that.
gdrive has many limitations, as you have read.
if i had a choice, i would use neither.
i use a combination of:
wasabi, an s3 clone, known for hot storage
aws s3 glacier deep archive, for cold storage, $1.01/TB/month
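fwiw, a minimal sketch of what those two remotes can look like in rclone.conf - the remote names and keys are placeholders, the endpoint and storage class are the standard published values:

```
[wasabi]
type = s3
provider = Wasabi
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.wasabisys.com

[aws-deep]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-east-1
storage_class = DEEP_ARCHIVE
```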
and i see that @ncw is going to post, so i will defer to him....
I haven't run across any throttling on OneDrive for Business yet - the speeds are not great, for sure. This was tested using Cyberduck. Speeds were much faster on Google Drive (300-400 Mbit/s down) vs. about 50 Mbit/s down on OneDrive, on a 1000/1000 fiber link.
I do believe some folders have over 100,000 files within them (a Linux export of log and config files), so will rclone gracefully handle that folder or skip it entirely? Unfortunately, in TrueNAS/FreeNAS the ability to see what's occurring is hidden until the cloud sync task either completes or errors out hard.
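If you want to check how big those folders actually are before committing to a provider, rclone can count them from the shell on the TrueNAS box; a quick sketch (the /mnt/tank/logs path is hypothetical - substitute your dataset):

```
# total size and object count for the tree
rclone size /mnt/tank/logs

# file count only
rclone lsf -R --files-only /mnt/tank/logs | wc -l
```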
I use both and they both work well and fairly similarly - each with its issues (in my use case!)
I don't think the OneDrive throttling is a major issue as long as you keep --transfers and --checkers at their defaults (and perhaps lower them a bit for jobs running more than a couple of hours).
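For example, a conservative long-running job might look like this - a sketch, where the remote name odfb: and the paths are placeholders (the defaults are --transfers 4 and --checkers 8):

```
rclone sync /mnt/tank/backup odfb:backup \
  --transfers 2 --checkers 4 \
  --log-file /var/log/rclone.log --log-level INFO
```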
Google Drive has other limits, such as a maximum upload per day - which also hits the forum frequently.
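If you do end up on Google Drive, there is a flag to make the daily upload limit error fatal instead of letting the job retry pointlessly (the remote name gdrive: is a placeholder):

```
rclone copy /mnt/tank/backup gdrive:backup --drive-stop-on-upload-limit
```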
The important thing to understand about both OneDrive and Google Drive is that they are priced per size only, and therefore both come with limitations on the speed and number of requests you can make within an hour/day (that is, fair usage limitations that are enforced by some kind of rate limiting or throttling).
If you want unlimited speed and up/download capacity, then you typically get to pay per request (one way or the other).
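One way to stay on the right side of those limits is to cap the request rate yourself rather than waiting to be throttled; a sketch (the value 8 is an illustrative guess, not a documented limit):

```
rclone copy /mnt/tank/backup odfb:backup --tpslimit 8
```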
The improvements I am testing are basically improving/fixing the back-off when a cloud storage provider signals that a user has been running with too many rclone sessions, --transfers, or --checkers for too long. So, currently, the trick is just to always stay under the throttling limits.
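In the meantime, the existing retry knobs are worth knowing about; these flags exist today, and the values below are just illustrative:

```
rclone sync /mnt/tank/backup odfb:backup \
  --retries 3 --low-level-retries 10 --retries-sleep 30s
```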
Some important things to also consider when selecting a storage provider (the sketch after this list shows how to measure most of them):
Total size of backup
Total number of folders
Total number of files
Changes to be uploaded per day (number of files and their typical size)
Need for versioning/snapshots/malware protection etc.
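A quick way to gather most of these numbers from the local dataset (the paths are hypothetical):

```
# total size and file count
rclone size /mnt/tank/backup

# folder count
rclone lsf -R --dirs-only /mnt/tank/backup | wc -l

# files changed in the last day (rough proxy for daily upload volume)
find /mnt/tank/backup -type f -mtime -1 | wc -l
```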
I see it as a good example that there are better speeds and fewer limitations when you also pay for requests/uploads (one way or the other).
But perhaps ThePrez (implicitly) meant:
My best answer to that specific question is: I don't know. The issue is from 2018 and related to the OneDrive API, so it may have been fixed by Microsoft or somebody else since. Test it if it's important! (Ditto for Google Drive.)
I see the minimum storage duration as a price on uploads/requests in real usage scenarios - just a different way of pricing that favors the backup customers they want to attract. It may become expensive if you back up (the same) large files with minimal changes daily, which I do.
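A rough illustration, assuming Deep Archive's 180-day minimum storage duration and the ~$1/TB/month figure mentioned above (check current AWS pricing; the daily 1 TB churn is a made-up scenario):

```
# re-upload a 1 TB backup image daily, deleting yesterday's copy:
# each deleted copy is still billed for the 180-day minimum (~6 months)
# 30 copies/month x 6 months x ~$1/TB/month = ~$180/month
# vs. ~$1/month to simply keep 1 TB stored untouched
```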