One Drive - Failed to copy / Unauthenticated / Expired Token / The access token has expired

What is the problem you are having with rclone?

Hi everyone, I recently discovered this wonderful tool that is rclone. After a lot of reading I got excellent results and worked out different approaches for my cloud backups. The only problem I'm having is the following:

One of my scripts creates a complete image of my Windows Server and then, after the image is created, rclone uploads it to the One Drive cloud.

The system image is 213 GB. Initially the upload would stop with an error saying the file had been modified, or something like that. After some research I found that One Drive has this problem where it somehow changes the file sizes, so rclone does not consider the folder correctly synchronized, which causes the error and cancels the upload; I managed to solve that by adding --ignore-size to the rclone sync command. The problem I am facing now is this error:

"Failed to copy: unauthenticated: expiredToken: The access token has expired. It's valid from '2/20/2022 12:26:23 PM' and to '2/21/2022 12:26:23 PM'."

I'm not sure what is causing this problem; it seems that because the file is so big and takes more than 24 hours to finish, the token expires...

I would like a way to upload this file without problems. I don't know if there is some option I can add to the command so that the token doesn't expire, or, failing that, some way to split the file. I've seen that there is something called "chunker", but I didn't understand it and couldn't find examples of how it could apply to my situation.

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0

  • os/version: ubuntu 21.10 (64 bit)
  • os/kernel: 5.13.0-28-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

One Drive

hello and welcome to the forum,

what tool do you use to create the system image?

not a command, but a storage system.
any file copied to a chunker remote will be uploaded as a number of parts.
tho it is still beta.

not ideal, but you could split the file locally and rclone copy the parts.

Hello again,

To create the system image I use the built-in Windows Server command:

wbadmin start backup -backupTarget:path-to-save-image -allcritical -systemstate -quiet

And it works fine.

But don't you think that could work for me? If I understand correctly, I would need to create another "remote", using the chunker storage system?

I thought about that too, but it would also take too long, and it would require more storage space than I have to create another copy of this image, even zipped...

As for the error I mentioned, is it really about the One Drive token expiring after 24 hours? Is there no way to change this, or is it something fixed by Microsoft itself?

Any other ideas?

imho, given the small size of the file, use another provider which does not use tokens.
or a provider that, if it uses tokens, has fast upload speeds.

correct, local -> chunker remote -> onedrive remote.

the docs for the crypt remote offer more detail (chunker wraps another remote in the same way):
"A remote of type crypt does not access a storage system directly, but instead wraps another remote, which in turn accesses the storage system."

as far as i know, the 24 hours is hard-coded by microsoft.
https://docs.microsoft.com/en-us/azure/active-directory/develop/refresh-tokens#token-timeouts

maybe @ole has a suggestion?

Hi @Dorkens and @asdffdsa,

213 GB in 24 hours is roughly 213,000 MB / 86,400 s ≈ 2.5 MByte/sec, or roughly 20 Mbit/s (Mbps). This is significantly slower than the speeds I typically see.

I therefore suggest we take a step back and clarify the reason for the long transfer time; perhaps we can reduce it.

It would therefore be helpful to see the following information:

  • The available upload speed as shown by https://www.speedtest.net/ or similar.
  • The config used as shown by rclone config show yourOneDrive: (please redact the token and any secrets)
  • The rclone command and parameters used to copy the backup to OneDrive
  • A full debug log as output using your rclone command with these additional parameters: --log-file=yourLogFile.txt --stats-one-line --stats=15m -vv and without -p or --progress to get the stats into the logfile.

Some things to look for in the debug log (see the grep sketch after the list):

  • Other errors (search for “ERROR”)
  • Retries after errors (search for “Attempt”)
  • Upload speed changing over time (grep/search for lines containing “ETA”)
  • OneDrive throttling (grep/search for lines containing “Too many requests” OR “pacer:”)
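A quick way to scan for all of these at once could be something like this (assuming the log file name used above):

  grep -E "ERROR|Attempt|ETA|Too many requests|pacer:" yourLogFile.txt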

If affected by OneDrive throttling then be aware that it is triggered by all activities performed by the account (not just rclone) and has a 24 hour memory (cool down period), so it could be caused by other (previous) activities.

Hello @Ole,

You're right, and as you can see:

image

Except for the constant variance in the upload speed; I also find this upload speed strange, because the server is wired and I've never had any problems with the connection.

Here,

[One Drive]
type = onedrive
token = {"access_token":
"token_type":"Bearer","refresh_token":
"expiry":"2022-02-22T10:41:08.376058577-03:00"}
drive_id =
drive_type = business

sudo rclone sync --ignore-size -P -v /local-path/ remote:/cloud-path/

As I said before, I already had other failures in this upload, which was resolved with the --ignore-size parameter.

I'll owe you that one for now, because right now I'm making one more attempt: right before starting the file upload I re-ran the rclone config reconnect command, to test whether the authentication really expires after 24 hours. The real problem is that, due to my upload speed and all this variance, the upload takes longer than 24 hours.

One thing that crossed my mind now: I'm using the rclone sync command; would there be any difference with the rclone copy command, or would I end up having the same problem? I don't remember exactly the difference between the two. If I'm not mistaken, rclone sync is similar to the famous rsync, which I liked a lot because the upload is done incrementally, and in my opinion that makes it easier to organize the backups. I imagine I opted for rclone sync because it also seemed safer to do it incrementally, but in the end it wouldn't really need to be done that way. So, using the rclone copy command, would there be any difference in this copy, especially in terms of upload speed?

Anyway, thank you for your attention, and I'm open to any new ideas.

Thank you very much for now.

Great info and nice simple command; --ignore-size is also part of many of my OneDrive commands.

Hmm, I am puzzled by this. Your full upload capacity is around 15 MiB/s and rclone only transfers around 2 MiB/s; this could be OneDrive throttling or something else. My OneDrive upload speed is approx. 10 times higher; I do however have smaller files and less to transfer. This is where I would focus until we have an explanation/solution.

The very simple explanation is that rclone sync is an rclone copy followed by a deletion of the target files that aren’t present in the source, so it probably makes no difference in your situation. Neither of them uses incremental transfer or resumes failed uploads, so you will need to upload the full 213GB backup every time you make a backup (unless your backup tool creates incremental files). This is an important part of the reasoning behind my focus on speeding it up.
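If you want to try it, the copy equivalent would simply be your own command with the subcommand swapped (nothing else changes):

  sudo rclone copy --ignore-size -P -v /local-path/ remote:/cloud-path/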

I just noticed that your rclone version shows Ubuntu, and you are talking about a Windows backup. Are you executing rclone in WSL or similar? Any reason not to execute rclone in the host OS (Windows)?

Thanks again for the information given.

I understood what you meant. In my case, I use sync as a kind of incremental backup when there is more than one file, usually smaller ones: sync removes files from the destination that are no longer in the source and replaces only the files that have been modified since the last copy, so for me it works as a sort of incremental backup where it isn't necessary to redo a full backup of a given directory.

However, as previously mentioned, there would be no need to use sync to back up this server image.

I don't know if I understood correctly: do you use OneDrive via an app, outside of rclone, and is that why you get that speed? Or am I wrong, and did I make some configuration mistake that could be causing this loss of speed?

The reason for that is simple: here we have several servers, mainly for applications and Active Directory. I created a dedicated data server to organize all the data and backups of those servers, mainly because they are dated machines without much free storage space. This data server is Linux-based and joins the domain via a Linux application called CID (Closed In Directory), which lets me access the data on the Linux server from any of the other servers directly, as if it were a storage device, and at the same time I use this same server to make each department's folders available to their respective users.

In short: I use rclone on Linux because, when I create an image of a Windows server, for example, it is saved directly on my Linux data server, and it seemed more correct to install rclone directly where the data is located, reducing any slowness caused by the network.

perhaps something like this (rough sketch below):

while more chunks

  1. use dd to read a chunk of the .vhd(x) file to standard output
  2. rclone rcat that chunk to onedrive
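an untested sketch of that loop; the source path, chunk size and remote path are just examples:

  #!/bin/bash
  # stream a large file to onedrive in 4GiB parts without using extra local disk space
  SRC=/local-path/backup.vhdx
  CHUNK_MB=4096                                       # 4GiB per part, counted in 1MiB blocks
  SIZE_MB=$(( ($(stat -c%s "$SRC") + 1048575) / 1048576 ))
  part=0
  skip=0
  while [ "$skip" -lt "$SIZE_MB" ]; do
      # read one chunk and pipe it straight into rclone rcat (rcat uploads from stdin)
      dd if="$SRC" bs=1M skip="$skip" count="$CHUNK_MB" status=none \
          | rclone rcat "remote:/cloud-path/backup.vhdx.part$part"
      part=$((part + 1))
      skip=$((skip + CHUNK_MB))
  done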

I've never heard of that command. How and where exactly can I use it, and how does it work? Is this command part of rclone? I didn't find much information about it, let alone how it would apply to my situation.

About rcat: from what I understand, the idea would be to use the dd command to create the chunks of the file, and then rcat would take those chunks and upload them to a single file, is that it?

But in that case, could you explain rcat a bit better?

And still about dd: to create the chunks, would I have to do it at the moment I create the Windows image with wbadmin, or would it be after the image is created?

Ty again !

my rclone rcat suggestion is a variation of the chunker remote.

dd is a common unix command, it has been around for decades.
imho, if you do not know what dd is,
then it is best to use the rclone chunker remote

tho the problems with chunker are
--- still in beta
--- would need a script to upload the chunks in batches; if you upload all the chunks to slow onedrive in one run,
might still have an issue with the token.

or use another provider
i use veeam for server backups to local backup servers,
then upload the recent files to wasabi, an s3 clone known for hot storage.
older backup files go to aws s3 glacier deep archive.

or, if i were in your case, punished with onedrive or slow internet,
i would do a variation of something i have been doing for many years.
this also works well with 100GB blu-ray discs for offsite backup.

  1. use winrar with recovery records to split the large file into parts, perhaps 4GB dvd size.
  2. upload the parts over several runs of rclone (rough sketch after this list).
  3. delete the local parts
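an untested sketch of those three steps, assuming rar is installed on the linux box (paths and names are just examples):

  mkdir -p /local-path/parts
  rar a -m0 -v4g -rr /local-path/parts/backup /local-path/backup.vhdx   # -m0 store only, -v4g 4GB volumes, -rr recovery record
  rclone copy /local-path/parts/ remote:/cloud-path/parts/ --ignore-size -v
  rm /local-path/parts/*.rar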

tho i have not tested this, there is no reason it will not work.
in this way, it might use less disk space.

  1. use rclone mount --vfs-cache-mode=writes (example mount command below)
  2. use winrar with recovery records to split the large file into chunks, perhaps 4GB dvd size.
    have winrar save those to the mount
  3. rclone will upload the chunks and remove them from the vfs file cache.
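the mount for step 1 could look something like this (mount point and remote path are just examples):

  mkdir -p /mnt/onedrive-parts
  rclone mount remote:/cloud-path/parts /mnt/onedrive-parts --vfs-cache-mode writes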

Probably me being a bit too short and implicit. I mean: My upload speed to OneDrive Personal Premium is approximately 10 times higher than yours, that is around 20 MiByte/sec. I see similar speeds when using rclone and the native OneDrive client on Windows. I do however have smaller files and less to transfer, but it is still a big difference.

It could be due to different OneDrive datacenters, differences due to the amount of data, ISP throttling/prioritization of traffic, or something completely different. I hope the debug log will shed some light on this.

Good choice!

Echoing my picture to be 100% sure I got it right:

  • The rclone and speedtest in the above screendump were both running on the same OS and HW (Ubuntu on the new Linux server)?
  • There are no other clients, software or rclone jobs accessing the Microsoft account used for your OneDrive?

hi,
not sure exactly how tokens work with rclone, as i do not use such backends much.

if rclone is running and a token expires, does rclone not renew it?
do the file transfers fail hard,
or
does the file transfer complete, but no new files can be transferred?

does rclone only renew tokens at the start of each command?
or
what?

Rclone should be renewing the oauth token in the background with onedrive, and normally it only lasts 1 hour, not 24 hours, so I wonder if this is a different token, maybe the upload session that is expiring.

If you had a log with -vv of the failure that would be very useful.

@asdffdsa ,

I don't know if I understood correctly, but is rclone rcat related to rclone cat? Is it the case that one of them concatenates the files and the other puts them together again? Do you have an example of how these commands work?

About the dd command, isn't it exclusively for creating images of Linux systems? Otherwise I could create an image already divided into parts, or even use it to divide a previously created file; that way it would be possible to do a variation of your idea, but using the dd command itself instead of winrar.

@Ole ,

Yes, I set up VNC access to my Linux server so that I can use its GUI when I need something visual beyond an SSH connection.

No, the account is being accessed exclusively by rclone.

@ncw Hello! Thank you for this incredible tool and your support.

As I said before, I am finishing another attempt to upload the complete file after running the rclone config reconnect command. Soon it will be 24 hours since the start and I will be able to check whether the failure occurs again; as soon as it does, I will run it again in a way that lets me capture the log.

About that: I've never needed to create logs, so I'm not familiar with these parameters. The command I'm currently using is sudo rclone sync --ignore-size -P -v /local-path/ remote:/cloud-path/ and, as @Ole said earlier, I could use the following parameters to get the log:

But what would be the correct way to add these parameters to my existing command without causing errors, for example regarding the -P that @Ole also mentioned?

And how do I find and access these logs after the process ends?

Again, thank you all for your help and clarification.

Note: at this exact moment 24 hours have passed, and this is what my upload status looks like right now.

image

The speed starts to fluctuate and decrease, and the ETA just keeps increasing and then decreasing again, but I am sure I will get the error again soon.

Edit : Exactly 24 hours later, the error actually appears :

It doubles the upload size, and starts all over again.

I suggest you first try this command a couple of times on a 200 MB test file to get acquainted with the command and rclone logging:

sudo rclone sync --ignore-size /local-testpath/ remote:/cloud-testpath/ --log-file=/local-home/yourLogFile.txt --stats-one-line --stats=15s -vv

then you can quickly ask if things are unclear/unexpected.

--stats=15s just needs to be replaced by --stats=15m for the big upload :wink:

PS: Try inspecting your small test log for the symptoms I listed above (ERROR, Attempt, ETA, throttling).

as per @ncw, we really need to see a full debug log,
and figure out what type of onedrive token you are using.

imho, 213GB is not a very large amount of data.
find a provider like wasabi, which does not use tokens and can easily saturate a 1Gbps internet connection.

lacking that:

  1. dd + rclone rcat was just a suggestion.
    since you are not very familiar with dd and such tools, i cannot recommend it.
  2. winrar is rock stable and has recovery records; i have pounded on it for over 10 years.
    it can split a large file quickly when choosing not to compress the file.

i have access to a 1TB office365 plan and still choose .......
@ole and @ncw know much more about ......

I don’t think this issue is related to tokens, and Wasabi probably also has some upper limit on the time it will keep a connection/file/object open for writing.

I don’t like it when you use nicknames and talk badly about things you don’t like.

geez, i was the one that made the first mention of you in this topic,
and i even paid you a compliment:
"@ole and @ncw know much more about"

fwiw, my response was to the OP.

thanks for calling me out in public for the thought crime of
calling onedrive "zerodrive".
next time, private message me.

edit:
i have redacted zerodrive to ....
i have redacted not ... trust it. to ...

i went out of my way to share a bunch of ideas for the OP.
and even if i might choose not to use some of them myself, they might spark more ideas.

the OP has an important backup file to upload.
i was sharing my first-hand experience, and that of many other fellow rcloners, trying and failing to use onedrive as a reliable backup solution.
--- pacer issues
--- token issues.
--- slow transfer speeds
--- variable transfer speeds based on business hours, night time, weekends.
--- having to use --ignore-size