One Drive - Failed to copy / Unauthenticated / Expired Token / The access token has expired

OneDrive via rclone is slower than OneDrive via web, and both are much slower than Wasabi.

In the same time period, 6m47s:
--- OneDrive via web uploaded the entire 10GiB
--- OneDrive via rclone uploaded just 4.980 GiB

--- Wasabi uploaded the entire 10GiB file in 1m34.5s

rclone copy d:\files\10GiB\10GiB.file onedrivevb:zork --progress --onedrive-chunk-size=100Mi --log-level=DEBUG --log-file=onedrive.speed.txt
Transferred:        4.980 GiB / 10 GiB, 50%, 10.842 MiB/s, ETA 7m54s
Transferred:            0 / 1, 0%
Elapsed time:      6m48.3s
rclone copy d:\files\10GiB\10GiB.file wasabi01:zork --progress --s3-chunk-size=32Mi --s3-upload-concurrency=32 --s3-disable-checksum --log-level=DEBUG --log-file=wasabi.speed.txt
Transferred:           10 GiB / 10 GiB, 100%, 66.459 MiB/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:      1m34.5s

Hmm, this is strange. No idea why your rclone upload to OneDrive is so slow compared to the web interface and there doesn't seem to be an easy way to find the root cause or a fix.

My single-file rclone upload speed to OneDrive is comparable to my upload speed in the OneDrive web interface, and both are faster than Jojos' - probably pure luck with ISP and OneDrive datacenter.

Guys,

I spent the last few days doing some more tests. In one of them I installed rclone on a Windows system that is on my network and has access to the Linux server data that I want to back up, and the result was exactly the same: the upload does not exceed 2.5 MB/s.

I also tried creating a client ID and key, following this documentation:

Microsoft OneDrive

All the setup worked, but the result was still the same.

In the midst of these tests, one thing I noticed is that, for example, if I have a 10GB file and upload it as a single file, OneDrive limits the upload and leaves it at an average of 2.5 MB/s. However, if I split this file into four 2.5GB files, OneDrive uploads each of them at 2.5 MB/s, so the total upload shown in the progress is 10 MB/s. It is as if there is no limit on the overall upload speed, but rather an upload limit per file.
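
For illustration, that observation can be reproduced with something like the following sketch (the file name, remote name onedrive: and part size are assumptions; note that split does need temporary disk space for the parts):

  # Split the 10GB test file into four ~2.5GB parts (this needs space for the parts)
  split -b 2560M 10GB.file 10GB.file.part.

  # Upload the parts in parallel; rclone's default of 4 transfers matches the 4 parts
  rclone copy . onedrive:test --include "10GB.file.part.*" --transfers=4 --progress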

I'm really running out of ideas. Unfortunately, although using another cloud has already been recommended, this is the resource I have available at the moment and I'm stuck with it, so I'm looking for the best possible solution to this problem.

It's really very strange because, as I mentioned earlier, this only happens with OneDrive, so it's really not a network problem but a OneDrive one.

Thanks in advance for everyone's help.

I see similar behavior, but at considerably higher speeds. That is:

  • No significant effect from using my own client ID for OneDrive

  • There is a variable upload speed/rate limit both per file and per account; the latter is typically 2-8 times higher than the former. The ratio varies with both the data and the time of day.

Important side note for parallel uploads:

  • OneDrive seems to have a hard limit of 4 parallel uploads (corresponding to the default of --transfers=4); starting more transfers will result in decreased speed and long pauses (seen as HTTP 429 "too many requests")
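
In practice that means leaving --transfers at its default. A minimal, hedged example (the source path and remote name are assumptions):

  # Stay at (or below) 4 parallel uploads to avoid the HTTP 429 throttling
  rclone copy /data/backup onedrive:backup --transfers=4 --progress
  # If 429s still appear, the global --tpslimit flag can slow the request rate further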

Here is what I would consider/try in your situation:

Upload the backup using the rclone chunker remote with a chunk size of 50GB and then verify with rclone check --download … and/or a full download and compare. This will give you maximum upload/download speed with a minimum number of chunks, and it also solves your initial issue. The drawbacks are added risk and complexity.

Disclaimer: I only know the chunker backend from the documentation:
https://rclone.org/chunker/
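
For reference, a minimal chunker setup might look roughly like this (the remote names, paths and chunk size are assumptions based on the documentation above, not a tested configuration):

  # rclone.conf: a chunker remote wrapping the existing OneDrive remote
  [onedrive_chunked]
  type = chunker
  remote = onedrive:backup
  chunk_size = 50G

  # Upload through the chunker, then verify by re-downloading and comparing
  rclone copy /data/backup onedrive_chunked: --progress
  rclone check /data/backup onedrive_chunked: --download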

Hello again folks.

I was on vacation and ended up not following up on the solution for my case.

It took me a long time but I found a way, and I'll try to explain it without much ado.

At least in my case, I found that OneDrive limited my speed not for the connection as a whole, but rather to between 2.5-3 MB/s per file uploaded.

No matter the system, method, configuration, or client ID, it stayed that way.

Remember that my problem was that I had files that were too big, so the upload took much longer than 24 hours, and this ran into a time limit on OneDrive's side that also cannot be removed: it blocks the connection when the permission token needs to be refreshed.

For example, a single 15GB file would have an average upload speed of 2.5-3 MB/s, but a 15GB file divided into six 2.5GB parts would upload each of those parts at the same speed. So, with the help of the --transfers=6 parameter, I was able to upload the files simultaneously, and in total I got my 15-18 MB/s upload.

The only problem with this is that I also didn't have much disk space to simply split this 214GB file with WinRAR, because even though the compressed files are smaller, it would still use at least 50% extra space on top of the original file (space that I don't have).

So, with a lot of effort (my setup has the data divided between Windows and Linux systems, which caused several code and compatibility errors), I developed a script using Python and Unix shell that does the following (a rough sketch of the idea follows the list):

  • Run the wbadmin command to generate a bare-metal image of my entire system;

  • Adapt the use of the tail command together with truncate, so that each part that is created is removed from the original file as it goes. That way I end up with 6 parts of the original file, without using extra storage space and without affecting the integrity of the file;

  • Then upload the parts to OneDrive with rclone using the --transfers=6 parameter;

  • After the upload is complete, reverse the tail and truncate by using the cat command to merge the local parts back into a single file, again without using more disk space than necessary.
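
To make that concrete, here is a rough sketch of the tail/truncate idea in shell (the file name, part size and remote path are hypothetical; the real scripts also handle the wbadmin/Windows side and credentials, which are omitted here):

  # Assumed: a single large image file produced by wbadmin, accessible from Linux.
  # Run this from the directory containing the file.
  FILE=backup.vhdx
  PART_SIZE=$((40 * 1024 * 1024 * 1024))   # 40 GiB per part (example value)

  # Split in place: repeatedly copy the last PART_SIZE bytes into a part file,
  # then truncate them off the original, so only one part's worth of extra
  # space is needed at any time instead of a full second copy.
  i=0
  while [ "$(stat -c %s "$FILE")" -gt "$PART_SIZE" ]; do
      size=$(stat -c %s "$FILE")
      tail -c "$PART_SIZE" "$FILE" > "$FILE.part$i"
      truncate -s $((size - PART_SIZE)) "$FILE"
      i=$((i + 1))
  done

  # Upload the remaining head plus all parts in parallel.
  rclone copy . onedrive:backup --include "$FILE*" --transfers=6 --progress

  # Reassemble locally: append the parts back in reverse creation order.
  while [ "$i" -gt 0 ]; do
      i=$((i - 1))
      cat "$FILE.part$i" >> "$FILE"
      rm "$FILE.part$i"
  done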

Obviously I've summarized everything that was done and the methods I used, but the result is that an upload that was running at a measly 2.5 MB/s and would have taken more than 36 hours to complete was reduced to 3 hours.

I appreciate everyone's help; it was a long road, but it was a great learning experience, even more so with the support of wise people like you.

@ncw @Ole @asdffdsa

Note: if someone needs help with a script for a similar case in the future, you can send me a DM and I can share the scripts I developed in more detail. It would be complicated to post them here because they contain many tokens and secrets, and I would have to spend a lot of time redacting them without affecting their functionality.

Glad you got it sorted :slight_smile:

Note that the chunker backend will do that splitting for you and it doesn't need any extra disk space so you might want to take a look at that.

