Boost upload speeds to OneDrive

What is the complete command you are using? --onedrive-chunk-size requires a value. It also depends on how much RAM your device has.

You could check with a network traffic app. On Linux you can use nload; on Windows, Task Manager lets you check the same thing.

Oh sorry, I was using --onedrive-chunk-size 200
Yeah, I was trying to figure it out and realized the docs say chunks are buffered in memory, and my device doesn't have that much RAM, hence the crash.

Windows tells me I'm getting 18 Mb/s up through the client, which isn't great. I'm testing on both Windows and Linux, but I would think I'd get better speeds with rclone on Linux. The other device is a Raspberry Pi, and I know it uploads faster for my media server for sure, with at least 5 up there.

Yeah, that might be the problem. Maybe try '--onedrive-chunk-size 40M'; it is a valid multiple of 320 KiB and not that big a number. If you still run out of RAM, try 20M, which would be double the size of the default 10M.
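A quick sanity check worth doing before picking a value: OneDrive requires the chunk size to be a multiple of 320 KiB, and rclone parses sizes in binary units. This sketch verifies that 40M qualifies before you pass it to something like 'rclone copy src onedrive: --onedrive-chunk-size 40M' (remote name is a placeholder):

```shell
chunk=$((40 * 1024 * 1024))    # 40M in binary bytes, as rclone parses sizes
granule=$((320 * 1024))        # 320 KiB upload granularity required by OneDrive
echo $((chunk % granule))      # 0 means the chunk size is a valid multiple
```

Any candidate chunk size can be checked the same way; a non-zero remainder means rclone will reject it for the OneDrive backend.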

If Windows Task Manager shows 18 Mbit/s, that equals 2.25 MByte/s, so it might also be a unit interpretation issue: I believe rclone shows bytes, not bits, in its speeds (might be wrong). Have you run a speed test on your Raspberry Pi to see what real internet speeds you can get?

Correct. And binary units: 1M is 1024*1024 bytes, for both size and speed.
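The two unit conversions above can be worked through in integer shell arithmetic (KiB keeps it exact enough to compare against what rclone displays):

```shell
bits=$((18 * 1000 * 1000))     # 18 Mbit/s as Task Manager reports it
bytes=$((bits / 8))            # 2250000 byte/s, i.e. 2.25 decimal MByte/s
echo $((bytes / 1024))         # ~2197 KiB/s, roughly 2.1 MiB/s in rclone's binary units
echo $((1024 * 1024))          # rclone's 1M: 1048576 bytes
```

So the same transfer shows up as 18 in Task Manager and only about 2.1 in rclone, purely from the bits-vs-binary-bytes difference.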

The speed test shows 13 up.

So that would make sense with me only seeing 1.6-ish MByte/s. I guess in reality I really am fine.

I have found that you generally get the best speeds to OneDrive with the default parameters. You may sometimes get better performance with --checkers 16 (for a short while). You can see more details in this post:

You really need to have a special situation to start tweaking the parameters (like the number of transfers).

What are the characteristics of your data? How many folders? How many files in each? Typical file size? Total size? How many of the folders and files are already present in the target (to be checked and then skipped)?

Interesting. I'd rather have 1 transfer at a time so it just focuses on 1 upload. I know rclone does its checks before copying (or any other command) to catch errors, but I'd rather have an error with 1 file than, say, 4.

I don't know exactly how much, but for now I'd say 2 TB of data. I wouldn't say a lot of subfolders; most are movie backups I ripped from my library. So something like:

Movies>2006> (multiple movies within the year) > movie.mp4 w/ maybe a poster

Files range between 800 MB and 3 GB.

I'm starting from a fresh account and uploading everything to the cloud.

Great, it seems like you already have found the optimal settings for the job. These are my thoughts for inspiration:

2 TByte of data at approx. 1 GByte per file is approx. 2,000 files, so the default of 8 checkers is fine. If you primarily had many folders with small files, then more checkers might be better.

Your 15 Mbit/s connection has a theoretical max upload around 1.5 MByte/s (using 8 bits/byte + 25% overhead). My rule of thumb is to divide by 10.
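The divide-by-10 rule of thumb above follows from 8 bits per byte plus roughly 25% protocol overhead; working in KByte keeps the shell arithmetic integral:

```shell
link_mbit=15                       # advertised uplink in Mbit/s
echo $((link_mbit * 1000 / 10))    # 1500 KByte/s, i.e. ~1.5 MByte/s realistic ceiling
```

Plugging in any other link speed gives a quick expected ceiling to compare against what rclone actually reports.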

You are seeing 1.6 MByte/s, so 1 transfer is enough to saturate your connection. That is sufficient because you primarily have large files (e.g. videos); you would need more transfers if you had many small files (e.g. photos or office documents) or a faster connection. Repeating myself for new readers: using more than 4 transfers may trigger OneDrive throttling that will be slower than 4 transfers.

I fully agree, in your situation it is a good idea to reduce transfers to 1 (or maybe 2).

One thing I do not understand is your --max-transfer=1G when your files are between 0.8 and 3 GByte - I guess it is just for your initial testing.


Thanks for the suggestions. Still learning a lot with this app; it's the best for remote setups.

So I have it set to this right now but will change to 5G or higher. I have an ongoing script that checks the time of day; if it's, let's say, midnight, I want it to start running backups until 12 in the afternoon. I use it as a soft cutoff flag as well: once it uploads the 1G, it rechecks the time, and if it's still within the window, it keeps uploading!
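That loop could be sketched like this; in_window, the local path, and the "onedrive:backup" remote are placeholders of mine, not taken from the actual script:

```shell
# True while the hour is between 00 and 11 (midnight to noon)
in_window() {
    [ "$1" -ge 0 ] && [ "$1" -lt 12 ]
}

# Each pass uploads one slice, then rechecks the clock (the soft cutoff)
if in_window "$(date +%H)"; then
    echo "inside window: would run rclone copy /local/media onedrive:backup --max-transfer 5G"
else
    echo "outside window: wait for midnight"
fi
```

Wrapping the if in a loop gives the "upload a slice, recheck the time" behavior described above; --max-transfer stops each rclone run softly once the slice limit is reached.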

I get it, I do something similar myself. Perhaps --max-duration is better; that will do a soft stop even if you have been throttled/slowed down.

You may get some additional inspiration from the latest posts in this throttling thread.

Please share if you observe a significant slowdown after some hours. There seems to be a throttling limit that I haven’t fully understood yet :face_with_monocle:


Ooh, this flag might be better since it stops all new transfers after X amount of time.

Will do; I'll report if I see any throttling. With my speeds, I don't think I would see any, BUT I'll keep watch of the logs!

Thanks!

I agree, but if you do then it is a strong indication of a limit on time (not GByte) :wink:

It would be very odd for a cloud service to throttle based on upload time though, don't you think?

I do, but I have been surprised before and therefore keep an open mind while investigating.

Hmm, so using --max-duration=10m gives this on my transfers:

2021-07-02 00:47:26 ERROR : Attempt 1/3 failed with 4 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtdUkir7jnFA8G1XAyncNUiPAIyxRqp5nj23sWGy8fbYBghu5aXhZAV9_-GVuDBZhbYCgyNGCEpz-5Jj9NCDcM": context deadline exceeded

I haven't seen that before; is it transient or reproducible?
Note: I do not use --max-duration myself; I slice my data by --min-age and --max-age.

Well, I had it run for 10 minutes. The first time it reaches the threshold the error occurs, then it continues uploading. It will then display the error again.

Sorry, --max-duration may not behave like I expected as suggested by this issue:

Ahh, welp, that's unfortunate. Thanks for the attempt! I'll just use my original command then.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.