Getting an error when running a OneDrive Personal sync, a cryptcheck, or basically anything. The error says "InvalidAuthenticationToken: Unable to initialize RPS" on random files during the process, not for every file. rclone is doing this on three different systems. I've tried refreshing the token and I've closed and reopened rclone, but it still comes back to the same thing.
What is your rclone version (output from rclone version)
1.54.1
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Windows Server 2019
Which cloud storage system are you using? (eg Google Drive)
Microsoft OneDrive Personal
The command you were trying to run (eg rclone copy /tmp remote:tmp)
2021/04/18 16:04:15 DEBUG : Dossiers/0 - Archive/XXX c MP/4 - Pièces/Copie intégrale de la procédure.pdf: Unchanged skipping
2021/04/18 16:04:15 ERROR : Dossiers/0 - Archive/XXX c MP/6 - Copies pénales/D: error reading destination directory: couldn't list files: InvalidAuthenticationToken: Unable to initialize RPS
2021/04/18 16:04:16 DEBUG : Dossiers/0 - Archive/XXX c MP/4 - Pièces/Bulletin solde décembre 2018.pdf: Size and modification time the same (differ by 0s, within tolerance 1s)
As sweh noticed in another topic, it might be a wider issue:
Hello everyone, I'm having the same issue with OneDrive Personal in the sync mode from OneDrive to my machine.
OS: Debian 10 (Buster)
rclone version 1.55.0
Issue first noted on April 16; last confirmed April 19.
I'm syncing about 80,000 files and getting about 20-60 "Unable to initialize RPS" errors on random files on each run. rclone still syncs files if any difference is found, but because errors appear it repeats the cycle 3 times and then reports the sync as unsuccessful.
Tried:
refreshing the token, did not help,
generating and using my own client ID and key (see the config sketch after this list), did not help,
slowing the operation down with --tpslimit 0.2, which reduced the rate to about 100 file checks per minute but did not help; RPS errors still appear randomly in a similar proportion.
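For reference, a OneDrive remote configured with a personal client ID looks roughly like this in rclone.conf (the client_id, client_secret and drive_id values here are placeholders, and the token blob is abbreviated):

[onedrive]
type = onedrive
client_id = <your-app-client-id>
client_secret = <your-app-client-secret>
token = {"access_token":"...","token_type":"Bearer","expiry":"..."}
drive_id = <your-drive-id>
drive_type = personal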
Temporary workaround that does not really fix anything but makes life a bit easier (full example command after this list):
set --ignore-errors to allow rclone to delete files in the destination when syncing, because it won't otherwise if a single error appears,
set --retries 1 to limit the sync to a single pass, to avoid retrying the default 3 times with the same result each time.
Reading this forum I see this issue happens from time to time. I hope it is not an rclone bug and that MS fixes it soon.
How many files are changing, how many directories are they in, and what are their sizes?
In my case my tree is around 1800 files, and I typically update only 40-80 files per day, each approx. 5 GB, and delete a similar number (rotating backups). So the number of API calls rclone makes to the Graph API won't be that many.
Do you upload 80,000 files each time, or just check that many?
In my case ~80K is the total number of files. Nothing too special: personal pictures, music, etc., with an overall size of about 600 GB. Each folder can hold up to a few hundred files. Each day a small number of items (fewer than 100) may change, or there may be no change at all.
I'm running sync daily to keep my local drive in sync with the cloud. I assume each run checks all the files (or at least their hashes), which means each file/folder is queried via the OneDrive API. Without --tpslimit the whole sync operation may take 15-20 minutes even when there is nothing (or almost nothing) to transfer.
if you sync on a daily basis, you might want to add --max-age=25h (example below).
if a local file has been modified within the last 25 hours, then rclone will check the corresponding file in the remote, if it exists, and decide what to do.
if the file does not exist in the remote, then rclone will copy it.
if a local file's modtime is older than 25 hours, then rclone does nothing.
so instead of ~80k checks, rclone would perform just ~100.
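a sketch of what that daily command could look like (remote name and path are placeholders):

rclone sync /home/user/data onedrive: --max-age=25h -v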
if someone moves a file to a different folder without modifying it, rclone will skip it and the move will not be reflected in the cloud remote.
if someone moves a file into the directory from a different directory without modifying it, that file will also be skipped.
renaming a file does not count as a modification, so rclone will also skip old files that have merely been renamed.
the --max-age flag has its uses for filtering, certainly, but i haven't found it a particularly good fit for making large directory syncs more efficient.