What is the problem you are having with rclone?
As I transfer my GSuite data to Dropbox I've been getting a number of errors that look like this. Does anyone know what the cause might be?
What is your rclone version (output from `rclone version`)
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Which cloud storage system are you using? (eg Google Drive)
The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
rclone copy GEnc:Movies/00-MatchedMovies DropBoxEnc:Movies --transfers=4 --stats=3s -vv --ignore-existing --max-size=49G
The rclone config contents with secrets removed.
A log from the command with the `-vv` flag
2020/10/20 13:07:21 DEBUG : pacer: low level retry 7/10 (error lookup_failed/incorrect_offset/..)
2020/10/20 13:07:21 DEBUG : pacer: Rate limited, increasing sleep to 303.75ms
2020/10/20 13:07:21 DEBUG : pacer: low level retry 8/10 (error lookup_failed/incorrect_offset/...)
2020/10/20 13:07:21 DEBUG : pacer: Rate limited, increasing sleep to 607.5ms
2020/10/20 13:07:22 INFO :
Transferred: 1.648T / 101.607 TBytes, 2%, 64.813 MBytes/s, ETA 2w4d17h13m17s
Errors: 5 (retrying may help)
Transferred: 45 / 10057, 0%
Elapsed time: 7h26m16.4s
* 10 Cloverfield Lane (2… ramyDoVi@PTP.mp4].mp4: 26% /43.600G, 17.089M/s, 31m47s
* 30 Days of Night Dark …HDMA.5.1-R2D2.mkv].mkv:100% /17.961G, 7.417M/s, -
* 40 Days and 40 Nights …MUX-FraMeSToR.mkv].mkv: 81% /15.799G, 17.939M/s, 2m46s
* 50+50 (2011)/50+50 (20…264.Remux-decibeL].mkv: 78% /20.181G, 14.895M/s, 4m54s
2020/10/20 13:07:22 DEBUG : pacer: low level retry 9/10 (error lookup_failed/incorrect_offset/.)
2020/10/20 13:07:22 DEBUG : pacer: Rate limited, increasing sleep to 1.215s
2020/10/20 13:07:22 DEBUG : pacer: low level retry 10/10 (error lookup_failed/incorrect_offset/...)
2020/10/20 13:07:22 DEBUG : pacer: Rate limited, increasing sleep to 2s
2020/10/20 13:07:22 ERROR : 30 Days of Night Dark Days (2010)/30 Days of Night Dark Days (2010) - 1080p - 5.1 DTS-HD Master Audio [30.Days.Of.Night.Dark.Days.2010.1080p.Blu-ray.ReMuX.AVC.DTS-HDMA.5.1-R2D2.mkv].mkv: Failed to copy: upload failed: lookup_failed/incorrect_offset/...
hi, what version of rclone are you running?
make sure to use latest stable, v1.53.1
not sure you know about this.
if you are copying from crypt remote to crypt remote and using the same passwords for both, you might want to try this
Incorrect offset is something to do with a large file upload going wrong.
Do you have more of that log? I'd like to see all of it if possible. You can PM me a link if you want.
I actually considered doing that. I'm only 5TB in; should I start fresh with the same password to keep things easy?
I've never used sync before though.
All I need to do is figure out what the obfuscated path of the folder I'm trying to copy is, and then set up a crypted remote with a matching pass and salt?
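A minimal sketch of that approach, assuming the crypt remotes `GEnc:` and `DropBoxEnc:` wrap underlying remotes named `GDrive:` and `DropBox:` with the ciphertext in a folder called `encrypted` (those underlying names and paths are assumptions for illustration — check your own config):

```shell
# Find the obfuscated (encrypted) name of the folder you want to copy.
# --crypt-show-mapping logs the mapping between plain and encrypted names.
rclone ls GEnc:Movies -v --crypt-show-mapping

# Copy the raw ciphertext between the underlying remotes, so nothing is
# decrypted and re-encrypted in flight. This only decrypts correctly on
# the destination if both crypt remotes use the same password AND salt
# (password2). <obfuscated-dir> is a placeholder for the name found above.
rclone copy "GDrive:encrypted/<obfuscated-dir>" "DropBox:encrypted/<obfuscated-dir>" -vv
```

The trade-off: copying ciphertext avoids the decrypt/re-encrypt step, but the checksum caveat below still applies since the two backends use different hash types.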
Sure thing, I can do that. I got the same error on a sub-10GB file too, so it seems to be less about size and more about uploads going wrong in general.
One question though: this lists my obfuscated file names and the resulting un-obfuscated ones. There's no risk of my password getting out since there's a salt, correct?
i would do that, as there is no need to decrypt and re-encrypt all over again, and that can introduce another layer of potential problems.
the more important reason is that rclone can checksum the files.
in this case, gdrive and dropbox use different types of checksums, so it might not help.
@ncw, can you comment?
rclone lsd GEnc: --recursive --crypt-show-mapping
the only way that would happen is if someone could get a copy of the rclone config file.
and you should encrypt that https://rclone.org/docs/#configuration-encryption
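A short sketch of what encrypting the config looks like in practice (the password value here is obviously a placeholder):

```shell
# Interactive: add a password to the rclone config file
rclone config
# then choose: s) Set configuration password

# Once the config is encrypted, non-interactive runs (cron jobs, scripts)
# can supply the password via an environment variable -- this assumes the
# environment itself is private, since the value is visible to the session:
export RCLONE_CONFIG_PASS='your-config-password'
rclone listremotes
```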
I tested sync on a small folder and it seems to work; no word on the errors yet though. Will try a folder with files from 200MB to 90GB and see what happens. So far the larger transfers tend to fail more, but small ones still occasionally get the same error, so it's not entirely size related.
Interesting. I just transferred 26T from gdrive to dropbox and had zero issues or errors. My largest single file was a hair over 60GB.
I'm currently transferring an empty 101GB file from local to dropbox to verify that there isn't a 100GB limit.
Also, I'm not using my home connection (which is symmetrical gigabit), I'm using a gigabit VPS hosted by OVH running nothing but the rclone transfer.
vps is less reliable than a dedicated computer, all the more so if that vps is shared.
i have verizon fios symmetrical gigabit which has no data cap.
so given a choice, i always use that.
i transfer 100GB+ files on a weekly basis, almost never have a problem, very rare.
and my vps, good for fluff like plex, vpn and proxy server.
I've never had an issue with OVH VPSes nor my home internet. However, my main home server is already doing 1-4TB a day upload (with about the same or more down), so it's pretty busy most of the time.
ok. i have never seen that error before.
now, best to PM @ncw with the log file.
Thanks for the log @LizWatchesStuff
With a bit of grepping I can see that the error starts like this
2020/10/20 19:55:23 DEBUG : pacer: low level retry 1/10 (error path/malformed_path/)
2020/10/20 19:55:24 DEBUG : pacer: low level retry 2/10 (error lookup_failed/incorrect_offset/..)
2020/10/20 19:55:24 DEBUG : pacer: low level retry 3/10 (error lookup_failed/incorrect_offset/)
2020/10/20 19:55:25 DEBUG : pacer: low level retry 4/10 (error lookup_failed/incorrect_offset/.)
2020/10/20 19:55:26 DEBUG : pacer: low level retry 5/10 (error lookup_failed/incorrect_offset/..)
2020/10/20 19:55:26 DEBUG : pacer: low level retry 6/10 (error lookup_failed/incorrect_offset/..)
2020/10/20 19:55:27 DEBUG : pacer: low level retry 7/10 (error lookup_failed/incorrect_offset/...)
2020/10/20 19:55:28 DEBUG : pacer: low level retry 8/10 (error lookup_failed/incorrect_offset/...)
2020/10/20 19:55:28 DEBUG : pacer: low level retry 9/10 (error lookup_failed/incorrect_offset/.)
2020/10/20 19:55:31 DEBUG : pacer: low level retry 10/10 (error lookup_failed/incorrect_offset/.)
2020/10/20 19:55:31 DEBUG : 023l8e8qdvhgq05a7qc375pop7je17ufvht2gb8h3uk006udfaos0eiteu5nmnk6rb46ajd9iaqcc/pap92aaiutuj4nn94q6cj6akv3g7tacefjvt6e78rd6q9gu2deig7pgu79f3j5vejjm7uvfsav7gltts63olsspsnbthgtrub1hn2p33r1n556k0unsumodhokfualvkkkpm9s80vpqstoqerpbmd3uj6llb8tascpe8ap8c5a7hj46pgjugq3tgjggj58bkkcivgbjc7as2h8cf6iov7srobg8tt1n13pe5fia4h8mrecrrtepdfblj05sj5qg6as9kvh73t87nc1jqo9vt0css78: Received error: upload failed: lookup_failed/incorrect_offset/. - low level retry 1/10
So it looks like the root cause of the error is the `path/malformed_path/` error at the start of that retry sequence.
I'm not sure what causes that, but I did notice in your log that you have duplicates in the source. It might be worth running `rclone dedupe` on that first, as Dropbox can't have duplicates.
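A cautious way to approach that, using the source path from the original copy command (a sketch, not a prescription):

```shell
# Preview what dedupe would change without touching anything:
rclone dedupe --dry-run --dedupe-mode rename GEnc:Movies/00-MatchedMovies -v

# Then run it for real in the default interactive mode, where every
# delete or rename has to be confirmed at a prompt:
rclone dedupe GEnc:Movies/00-MatchedMovies
```

`--dry-run` is a global rclone flag, so it works with dedupe the same as with copy or sync.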
rclone dedupe is totally non-destructive, right? I'd be livid if I messed up and lost data.
Hmm, so where do we go from here? I'm getting that error on tiny files too. I capped max size to 5GB and have let it run for 13 hours. So far 924 files have copied with 36 failures.
Is it possible I need to do something as drastic as download problematic files from google locally and upload from there?
"dedupe interactively finds files with duplicate names and offers to delete all but one or rename them to be different"
"--dedupe-mode rename - removes identical files then renames the rest to be different"
Just to be clear, what does that mean? How can it remove identical files while also renaming them, or have I just not had enough coffee this morning?
in this case, it is good to be very paranoid.
have a good read of this https://rclone.org/commands/rclone_dedupe/.
based on posts in the forum, it works well; never seen a bug report or other problems.
there was one post about human error due to lack of coffee in the morning..
the one time i needed to do a dedupe on a cloud remote where rclone would not help,
i did rclone mount on that cloud remote and used a tool i was familiar with.
Also, for the super paranoid: run `rclone dedupe -i remote:` and rclone will ask you lots of questions before doing anything.
So I did interactive mode, but it's going to be very obnoxious given the number of renames (one bdmv folder had like 300 duplicate playlists lmao). So, to be clear: 'dedupe-mode rename' will not delete files, it will only append a number, correct? It'll be like me telling interactive mode 'r' each time?
rename is the same as pressing `r` each time.
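So the non-interactive equivalent of that whole session would be something like (path taken from the earlier copy command; note that per the docs quoted above, rename mode does delete byte-identical copies before renaming the rest):

```shell
# Answer 'r' to everything automatically: identical duplicates are
# removed, remaining same-named files get a numeric suffix.
rclone dedupe --dedupe-mode rename GEnc:Movies/00-MatchedMovies -v

# Run with --dry-run first if you want to see the renames before committing.
```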