Which is more reliable for rclone - Google Drive or OneDrive for Business?

What is the problem you are having with rclone?

I've read the very thorough documentation on rclone's website, and comparing Google Drive with OneDrive for Business, it appears OdFB has a lot more restrictions in terms of file names, file sizes, and number of files. This has me concerned, as I have the decision-making authority to select either Google Workspace or M365 E3 as our provider. I intend to use rclone via TrueNAS's implementation to back up to one of these providers.

How does rclone handle this scenario?

Does rclone now avoid this limit by splitting files across more than one directory once a folder exceeds 50K, or is this still a risk? Or has TrueNAS mitigated it somehow?

What is your rclone version (output from rclone version)

1.53.1

Which cloud storage system are you using? (eg Google Drive)

Haven't decided yet, leaning towards OdFB.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Paste command here

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Paste log here

Thanks!

hello and welcome to the forum,

that is a tough question to answer without knowing your use-case.

onedrive is well known for throttling connections; there are many posts in the forum about that.
the rclone back-off algorithm, which i believe is exponential back-off, has issues, perhaps a bug.
@ole is working on that.
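
until that lands, a common workaround is to cap the request rate so the throttling rarely kicks in. a rough sketch - the remote name onedrive01: and the paths are made up, and 10 is just a conservative starting point, not an official figure:

rclone sync /mnt/tank/data onedrive01:backup --tpslimit 10 --transfers 4 --checkers 8 -vv --log-file=sync.log

--tpslimit caps the number of http transactions per second across the whole job.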

gdrive has many limitations, as you have read.

if i had a choice, i would use neither.

i use a combination of:

  • wasabi, an s3 clone, known for hot storage
  • aws s3 glacier deep archive, for cold storage, $1.01/TB/month

and i see that @ncw is going to post, so i will defer to him....

If you are backing up from Windows, the file name restrictions are the same as Windows.

Both work pretty well; however, people probably have more problems with throttling on onedrive, though whether this applies to the business variant, I don't know.

This is only a problem if you have 100,000 files in a single directory, which is very unusual. If you have 100,000 files spread around a normal directory structure, that isn't a problem.
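
If you want to check in advance, rclone itself can count them - the path here is just an example. This lists only the files immediately inside one directory and counts them:

rclone lsf --files-only /path/to/dir | wc -l

rclone size /path/to/dir will give you the total object count and size recursively, if you want the overall picture as well.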

Thanks everyone.

I haven't run across any throttling on OneDrive for Business yet - the speeds are not great, for sure. This was tested using Cyberduck. Speeds were much faster on Google Drive (300-400 Mbps down) vs. about 50 Mbps down on a 1000/1000 fiber link.

I do believe some folders have over 100,000 files within them (a linux export of log and config files), so will rclone gracefully handle that folder or skip it entirely? Unfortunately, in TrueNAS/FreeNAS the ability to see what's occurring is hidden until the cloud sync task either completes or errors out hard.

Thanks

Hi Theprez,

I use both, and they both work well and somewhat similarly - each with its own issues (in my use case!)

I don’t think the OneDrive throttling is a major issue as long as you keep --transfers and --checkers at defaults (and perhaps lower them a bit for jobs running more than a couple of hours).
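
For illustration, a long-running job might look like this - the remote name onedrive: and the path are placeholders, and the values just sit below the defaults (--transfers 4, --checkers 8):

rclone sync /mnt/tank/data onedrive:backup --transfers 2 --checkers 4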

Google Drive has other limits, such as a maximum upload per day - which also hits the forum frequently.

The important thing to understand about both OneDrive and Google Drive is that they are priced per size only, and therefore both come with limitations on the speed and number of requests you can make within an hour/day (that is, fair-usage limitations enforced by some kind of rate limiting or throttling).

If you want unlimited speed and up/download capacity, then you typically get to pay per request (one way or the other).

The improvements I am testing are basically improving/fixing the back-off when a cloud storage provider signals that a user has been running with too many rclone sessions, --transfers or --checkers for too long. So currently the trick is just to always stay under the throttling limits.

Some important things to also consider when selecting storage provider:

  • Total size of backup
  • Total number of folders
  • Total number of files
  • Changes to be uploaded per day (number of files and their typical size)
  • Need for versioning/snapshots/malware protection etc.

Speeds vary by the size of files and length of the job, so make sure you test a realistic scenario.
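
A simple way to do that - the paths and remote name here are placeholders - is to copy a representative sample and watch the live throughput, then run the same command again on unchanged data to see how quickly the provider handles the checking load:

rclone copy /mnt/tank/sample remote:speedtest --progress
rclone copy /mnt/tank/sample remote:speedtest --progress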

thank you for the detailed replies and excellent points


yes, rclone can handle that; only testing will prove whether the backend provider can handle it.

i have a set of local folders that i use for various tests.

just now, i took a folder with 100,000 files and synced it to wasabi.
each file is one byte in size.
tested on a 1Gbps symmetrical link, 4ms latency.

rclone sync D:\files\01 wasabi01:test100000 --transfers=256 --checkers=256

the first sync took 38 seconds, during which all local files were copied to wasabi.
the second sync, in which no local files changed, took 27 seconds.
no pacer/throttling messages in the debug log.
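
for reference, this is a quick way to scan a debug log for throttling - assuming the log was written with -vv --log-file=rclone.log:

grep -i pacer rclone.log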

Nice test @asdffdsa

I see it as a good example that there are better speeds and fewer limitations when you also pay for requests/uploads (one way or the other).

But perhaps ThePrez (implicitly) meant the earlier question about the 50K files-per-folder limit.

My best answer to that specific question is: I don't know. The issue is from 2018 and related to the OneDrive API, so it may have been fixed by Microsoft or somebody else. Test it if important! (ditto for Google Drive)

To be clear: I see the minimum storage duration as a price on uploads/requests in real usage scenarios - just a different way of pricing, one that favors the backup customers they want to attract. It may become expensive if you back up (the same) large files with minimal changes daily, which I do.

PS: I suggest you update your rclone to the latest version (1.56.2). There have been many fixes and improvements since 1.53.1.
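
If you are unsure what you are running, rclone version --check will show your installed version alongside the latest release. Note that on TrueNAS rclone is bundled with the system, so updating it there presumably means updating TrueNAS itself - worth verifying for your setup.

rclone version --check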
