ReadFileHandle.Read seek failed: failed to authenticate decrypted block - bad password?

Hello,

I have 3 plexdrive instances that mount an encrypted Gdrive storage (obfuscated):
/mnt/plexdrive/mount1
/mnt/plexdrive/mount2
/mnt/plexdrive/mount3

They are pooled with mergerfs 2.28.3 under /mnt/plexdrive/pool with these settings in fstab:

/mnt/plexdrive/mount1=RO:/mnt/plexdrive/mount2=RO:/mnt/plexdrive/mount3=RO /mnt/plexdrive/pool fuse.mergerfs ro,async_read=false,sync_read,use_ino,allow_other,auto_cache,func.getattr=all,category.action=all,category.create=all,category.search=eprand,dev,suid 0 0

If I copy an encrypted file directly from the pool /mnt/plexdrive/pool, the file copies without any issue (the file is still encrypted, but it copies to 100%).
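
A minimal sketch of that test (placeholder paths; the real obfuscated names are much longer): copy the still-encrypted file straight off the mergerfs pool and compare it byte for byte with the same file on one of the branches.

cp /mnt/plexdrive/pool/<obfuscated path>/<obfuscated file> /tmp/test.enc
cmp /tmp/test.enc /mnt/plexdrive/mount1/<obfuscated path>/<obfuscated file>   # exits 0 if identical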

The issue begins when I add an rclone cache and crypt layer over /mnt/plexdrive/pool (so over the mergerfs pool):

[pool_c]
type = cache
remote = /mnt/plexdrive/pool
plex_url = https://127.0.0.1:32400
plex_username =
plex_password =
info_age = 1m
chunk_total_size = 250G
chunk_size = 10M
plex_insecure = true
db_path = /cache/pool
chunk_path = /cache/pool
db_purge = false
writes = false
tmp_upload_path = /cache/pool/upload
tmp_wait_time = 3s
plex_token =

[pool_d]
type = crypt
remote = pool_c:/
filename_encryption = obfuscate
directory_name_encryption = true
password =
password2 =

and the mount in fstab:

#rclonefs#pool_d:/ /mnt/vault fuse config=/data/program/rclone/config/pool.conf,allow-other,allow-non-empty,read-only,daemon,umask=0,buffer-size=128M,poll-interval=15s,timeout=1h,cache-read-retries=1,attr-timeout=1000h,dir-cache-time=1000h,log-level=DEBUG,log-file=/var/log/rclone_pool_d.log 0 0
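
For clarity, and assuming the #rclonefs# fstab helper passes these options straight through to rclone mount, that entry is roughly equivalent to:

rclone mount pool_d:/ /mnt/vault \
  --config /data/program/rclone/config/pool.conf \
  --allow-other --allow-non-empty --read-only --daemon --umask 0 \
  --buffer-size 128M --poll-interval 15s --timeout 1h \
  --cache-read-retries 1 --attr-timeout 1000h --dir-cache-time 1000h \
  --log-level DEBUG --log-file /var/log/rclone_pool_d.log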

Now when I read a decrypted file from /mnt/vault (crypt over cache, cache over the mergerfs pool), I get this kind of error in the rclone log:

2019/11/04 10:10:00 DEBUG : : ChunkedReader.openRange at 10420224 length 134217728
2019/11/04 10:10:00 DEBUG : : moving offset set from 0 to 0
2019/11/04 10:10:00 DEBUG : : cache reader closed 32
2019/11/04 10:10:00 DEBUG : : moving offset set from 0 to 10422800
2019/11/04 10:10:00 DEBUG : : cache reader closed 10488352
2019/11/04 10:10:00 DEBUG : : ReadFileHandle.Read seek failed: failed to authenticate decrypted block - bad password?
2019/11/04 10:10:00 ERROR : : ReadFileHandle.Read error: failed to authenticate decrypted block - bad password?
2019/11/04 10:10:00 DEBUG : (r)}: >Read: read=0, err=failed to authenticate decrypted block - bad password?

My file fails to copy at 5%.

If I try the same without pooling with mergerfs, I don't have the issue.
I have read that async_read=false has to be set, which is done in my fstab.
I have also tried adding sync_read: same issue.
I have read here and there that it could be an incompatibility between rclone and mergerfs, with offsets that could differ when seeking, but I haven't found any solution.

Does anyone have an idea?

thank you

I forgot to mention that my 3 plexdrive mounts are exactly the same (they are synced with rclone).
The issue appears whatever file I choose...

What does this show for you?

felix@gemini:~$ mergerfs -V
mergerfs version: 2.28.3
FUSE library version: 2.9.7-mergerfs_2.29.0
fusermount3 version: 3.4.1
using FUSE kernel interface version 7.29

Hello, here is the output

mergerfs version: 2.28.3
FUSE library version: 2.9.7-mergerfs_2.29.0
fusermount version: 2.9.7
using FUSE kernel interface version 7.29

I also have to mention that the issue doesn't happen when switching to category.search=epall.
The idea of eprand is to read chunks randomly from the 3 mounts, a kind of RAID0-style reading, to avoid an API ban and to speed up reading...

What are you trying to do with mergerfs and the 3 drives? What's the purpose?

In this order:
1. 3 plexdrive mounts that access the obfuscated storage (advantage: better reading with plexdrive)
2. 1 mergerfs that "merges" the 3 plexdrive mounts containing identical data, mounted with "category.search=eprand", meaning chunks are read randomly from 3 identical drives, which should divide API access by 3
3. a cache on top of this pool
I was hoping to get faster speeds and fewer API calls with the random reads on mergerfs split over the 3 plexdrive mounts (see the sketch of the stack below)...
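
To summarize the stack described above (the bottom layer is what is actually read from):

Gdrive (obfuscated data)
  -> plexdrive x3   (/mnt/plexdrive/mount1..3, identical content)
  -> mergerfs pool  (/mnt/plexdrive/pool, category.search=eprand)
  -> rclone cache   (pool_c, 10M chunks)
  -> rclone crypt   (pool_d, obfuscate)
  -> FUSE mount     (/mnt/vault, decrypted view)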

Thank you

Not sure what you mean by API ban as that isn't a thing.

You have a daily upload and download quota of 750GB and 10TB respectively.

API quotas are per user and 1 billion per day so that's pretty hard to hit.

I'm not sure what your setup brings other than overcomplexity. The problem you hit is in the way the cache requests chunks: the workers are expecting things to happen sequentially.

If you randomize that, you are going to get some odd results.

The error you have usually means a bad password or salt, as all of them have to be the same.
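
One way to rule that out is to print the crypt section and confirm the same obscured password/password2 values are in use everywhere (this prints the stored, obscured values):

rclone --config /data/program/rclone/config/pool.conf config show pool_d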

Does the same file work via a rclone ls on all 3 remotes? Can you share that output?
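For example (placeholder paths, substitute the actual obfuscated directory and file):

rclone ls /mnt/plexdrive/mount1/<obfuscated path>
rclone ls /mnt/plexdrive/mount2/<obfuscated path>
rclone ls /mnt/plexdrive/mount3/<obfuscated path>
md5sum /mnt/plexdrive/mount{1,2,3}/<obfuscated path>/<obfuscated file>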

Copying the file from the merged obfuscated directory works; the problem appears when copying from the decrypted side...

Yes, copying from the 3 mounts works (obfuscated file), and even from the pool (mergerfs).
The issue occurs when I add the rclone decryption layer...

Awesome, Can you share the output?

What kind of output? How?

Whatever commands you are running to validate "it works".

A simple copy in Midnight Commander...

I have no idea what midnight commander is.

Can you run the actual command on the CLI and share what you are doing?

rclone copy <x> <y>

and show the output? We can't see your screen so it's impossible to troubleshoot an issue unless you can share what you are doing with output.

It could also be corrupted files, which seems possible given the setup...

The file copies fine without mergerfs and the eprand setting...
The file copies fine when I copy the obfuscated one...
It seems to happen on the decrypted one; I suppose rclone decrypts and jumps through the 3 plexdrive mounts and gets stuck on the second jump. Kind of an offset difference or something similar...

Hello, here are the 2 outputs of copying the same file with rclone, with the cache cleared after each try.
The first one is the mount with mergerfs func.open=epff (which works) and the second with func.open=eprand (which fails).

First one, which is OK (epff):

rclone --config /data/program/rclone/config/pool.conf copy pool_d:/medias/videos/camera/Archives/Captures/2004.11.27_15-00-59.dv /tmp --log-level=DEBUG
2019/11/05 12:17:11 DEBUG : rclone: Version "v1.50.1" starting with parameters ["rclone" "--config" "/data/program/rclone/config/pool.conf" "copy" "pool_d:/medias/videos/camera/Archives/Captures/2004.11.27_15-00
-59.dv" "/tmp" "--log-level=DEBUG"]
2019/11/05 12:17:11 DEBUG : Using config file from "/data/program/rclone/config/pool.conf"
2019/11/05 12:17:11 DEBUG : pool_c: wrapped local:/mnt/plexdrive/pool_c/115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ at root 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60
-55-04.zR
2019/11/05 12:17:11 INFO : pool_c: Cache DB path: /cache/pool_c/pool_c.db
2019/11/05 12:17:11 INFO : pool_c: Cache chunk path: /cache/pool_c/pool_c
2019/11/05 12:17:11 INFO : pool_c: Chunk Memory: true
2019/11/05 12:17:11 INFO : pool_c: Chunk Size: 10M
2019/11/05 12:17:11 INFO : pool_c: Chunk Total Size: 250G
2019/11/05 12:17:11 INFO : pool_c: Chunk Clean Interval: 1m0s
2019/11/05 12:17:11 INFO : pool_c: Workers: 4
2019/11/05 12:17:11 INFO : pool_c: File Age: 1m0s
2019/11/05 12:17:11 INFO : pool_c: Upload Temp Rest Time: 3s
2019/11/05 12:17:11 INFO : pool_c: Upload Temp FS: /cache/pool_c/upload
2019/11/05 12:17:11 DEBUG : Adding path "cache/expire" to remote control registry
2019/11/05 12:17:11 DEBUG : Adding path "cache/stats" to remote control registry
2019/11/05 12:17:11 DEBUG : Adding path "cache/fetch" to remote control registry
2019/11/05 12:17:11 DEBUG : Cache remote pool_c:115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ: new object '226.7559.66.72_60-55-04.zR'
2019/11/05 12:17:11 DEBUG : 226.7559.66.72_60-55-04.zR: find: error: couldn't open parent bucket for 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ
2019/11/05 12:17:11 DEBUG : 226.7559.66.72_60-55-04.zR: find: not found in local cache fs
2019/11/05 12:17:11 DEBUG : 226.7559.66.72_60-55-04.zR: find: cached object
2019/11/05 12:17:11 DEBUG : 2004.11.27_15-00-59.dv: Need to transfer - File not found at Destination
2019/11/05 12:17:11 DEBUG : 2004.11.27_15-00-59.dv: Starting multi-thread copy with 2 parts of size 135.875M
2019/11/05 12:17:11 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 2/2 (142475264-284832000) size 135.762M starting
2019/11/05 12:17:11 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
2019/11/05 12:17:11 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 0
2019/11/05 12:17:11 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 1/2 (0-142475264) size 135.875M starting
2019/11/05 12:17:11 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
2019/11/05 12:17:11 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 0
2019/11/05 12:17:12 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 1
2019/11/05 12:17:12 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 1
2019/11/05 12:17:12 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 2
2019/11/05 12:17:12 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 2
2019/11/05 12:17:13 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 3
2019/11/05 12:17:13 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 3
2019/11/05 12:17:13 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 0
2019/11/05 12:17:13 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 0
2019/11/05 12:17:13 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 32
2019/11/05 12:17:13 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 142510080
2019/11/05 12:17:13 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 0
2019/11/05 12:17:13 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 0
2019/11/05 12:17:14 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 1
2019/11/05 12:17:14 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 1
2019/11/05 12:17:14 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 2
2019/11/05 12:17:14 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 2
2019/11/05 12:17:15 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 3
2019/11/05 12:17:15 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 3
2019/11/05 12:17:15 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 4
2019/11/05 12:17:15 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 4
2019/11/05 12:17:15 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 125829120
2019/11/05 12:17:15 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 10485760
2019/11/05 12:17:16 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 5
2019/11/05 12:17:16 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 20971520: chunk retry storage: 0
2019/11/05 12:17:16 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 136314880
2019/11/05 12:17:16 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 20971520
2019/11/05 12:17:16 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 146800640: chunk retry storage: 0
2019/11/05 12:17:16 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 31457280: chunk retry storage: 0
2019/11/05 12:17:16 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 146800640
2019/11/05 12:17:16 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 31457280
2019/11/05 12:17:17 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 157286400: chunk retry storage: 0
2019/11/05 12:17:17 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 41943040: chunk retry storage: 0
2019/11/05 12:17:17 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 157286400
2019/11/05 12:17:17 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 41943040
019/11/05 12:17:17 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 167772160: chunk retry storage: 0
019/11/05 12:17:17 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 52428800: chunk retry storage: 0
019/11/05 12:17:17 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 167772160
019/11/05 12:17:17 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 52428800
019/11/05 12:17:18 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 178257920: chunk retry storage: 0
019/11/05 12:17:18 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 62914560: chunk retry storage: 0
019/11/05 12:17:18 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 178257920
019/11/05 12:17:18 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 188743680: chunk retry storage: 0
019/11/05 12:17:18 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 62914560: chunk retry storage: 1
019/11/05 12:17:19 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 188743680: chunk retry storage: 1
019/11/05 12:17:19 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 62914560: chunk retry storage: 2
019/11/05 12:17:19 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 62914560
019/11/05 12:17:19 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 188743680
019/11/05 12:17:19 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 199229440: chunk retry storage: 0
019/11/05 12:17:19 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 73400320: chunk retry storage: 0
019/11/05 12:17:20 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 199229440: chunk retry storage: 1
019/11/05 12:17:20 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 199229440
019/11/05 12:17:20 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 73400320: chunk retry storage: 1
019/11/05 12:17:20 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 73400320
019/11/05 12:17:20 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 209715200: chunk retry storage: 0
019/11/05 12:17:20 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 83886080: chunk retry storage: 0
019/11/05 12:17:20 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 83886080
019/11/05 12:17:21 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 209715200: chunk retry storage: 1
019/11/05 12:17:21 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 209715200
019/11/05 12:17:21 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 94371840: chunk retry storage: 0
019/11/05 12:17:21 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 94371840
019/11/05 12:17:21 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 220200960: chunk retry storage: 0
019/11/05 12:17:21 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 104857600: chunk retry storage: 0
019/11/05 12:17:21 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 220200960
019/11/05 12:17:22 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 230686720: chunk retry storage: 0
019/11/05 12:17:22 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 104857600: chunk retry storage: 1
019/11/05 12:17:22 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 104857600
019/11/05 12:17:22 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 230686720: chunk retry storage: 1
019/11/05 12:17:22 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 230686720
019/11/05 12:17:22 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 115343360: chunk retry storage: 0
019/11/05 12:17:23 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 115343360
019/11/05 12:17:23 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 241172480: chunk retry storage: 0
019/11/05 12:17:23 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 1/2 (0-142475264) size 135.875M finished
019/11/05 12:17:23 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 142510080
019/11/05 12:17:23 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 241172480: chunk retry storage: 1
019/11/05 12:17:24 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 241172480
019/11/05 12:17:24 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 251658240: chunk retry storage: 0
019/11/05 12:17:24 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 251658240: chunk retry storage: 1
019/11/05 12:17:25 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 251658240: chunk retry storage: 2
019/11/05 12:17:25 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 251658240: chunk retry storage: 3
019/11/05 12:17:25 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 251658240
019/11/05 12:17:26 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 262144000: chunk retry storage: 0
019/11/05 12:17:26 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 262144000
019/11/05 12:17:26 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 272629760: chunk retry storage: 0
019/11/05 12:17:27 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 272629760: chunk retry storage: 1
019/11/05 12:17:27 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 272629760
019/11/05 12:17:27 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 283115520: chunk retry storage: 0
019/11/05 12:17:27 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: partial downloaded chunk 270M
019/11/05 12:17:28 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 2/2 (142475264-284832000) size 135.762M finished
019/11/05 12:17:28 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 284901584
019/11/05 12:17:28 DEBUG : 2004.11.27_15-00-59.dv: Finished multi-thread copy with 2 parts of size 135.875M
019/11/05 12:17:28 INFO : 2004.11.27_15-00-59.dv: Multi-thread Copied (new)
2019/11/05 12:17:28 INFO  :
Transferred:    271.637M / 271.637 MBytes, 100%, 16.112 MBytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            1 / 1, 100%
Elapsed time:       16.8s
2019/11/05 12:17:28 DEBUG : 9 go routines active
2019/11/05 12:17:28 DEBUG : rclone: Version "v1.50.1" finishing with parameters ["rclone" "--config" "/data/program/rclone/config/pool.conf" "copy" "pool_d:/medias/videos/camera/Archives/Captures/2004.11.27_15-0
0-59.dv" "/tmp" "--log-level=DEBUG"]
2019/11/05 12:17:28 INFO : plex: stopped Plex watcher
2019/11/05 12:17:28 DEBUG : Cache remote pool_c:115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ: Services stopped

Second one, which fails (eprand):

rclone --config /data/program/rclone/config/pool.conf copy pool_d:/medias/videos/camera/Archives/Captures/2004.11.27_15-00-59.dv /tmp --log-level=DEBUG
2019/11/05 12:19:50 DEBUG : rclone: Version "v1.50.1" starting with parameters ["rclone" "--config" "/data/program/rclone/config/pool.conf" "copy" "pool_d:/medias/videos/camera/Archives/Captures/2004.11.27_15-00
-59.dv" "/tmp" "--log-level=DEBUG"]
2019/11/05 12:19:50 DEBUG : Using config file from "/data/program/rclone/config/pool.conf"
2019/11/05 12:19:50 DEBUG : pool_c: wrapped local:/mnt/plexdrive/pool_c/115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ at root 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60
-55-04.zR
2019/11/05 12:19:50 INFO : pool_c: Cache DB path: /cache/pool_c/pool_c.db
2019/11/05 12:19:50 INFO : pool_c: Cache chunk path: /cache/pool_c/pool_c
2019/11/05 12:19:50 INFO : pool_c: Chunk Memory: true
2019/11/05 12:19:50 INFO : pool_c: Chunk Size: 10M
2019/11/05 12:19:50 INFO : pool_c: Chunk Total Size: 250G
2019/11/05 12:19:50 INFO : pool_c: Chunk Clean Interval: 1m0s
2019/11/05 12:19:50 INFO : pool_c: Workers: 4
2019/11/05 12:19:50 INFO : pool_c: File Age: 1m0s
2019/11/05 12:19:50 INFO : pool_c: Upload Temp Rest Time: 3s
2019/11/05 12:19:50 INFO : pool_c: Upload Temp FS: /cache/pool_c/upload
2019/11/05 12:19:50 DEBUG : Adding path "cache/expire" to remote control registry
2019/11/05 12:19:50 DEBUG : Adding path "cache/stats" to remote control registry
2019/11/05 12:19:50 DEBUG : Adding path "cache/fetch" to remote control registry
2019/11/05 12:19:50 DEBUG : Cache remote pool_c:115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ: new object '226.7559.66.72_60-55-04.zR'
2019/11/05 12:19:50 DEBUG : 226.7559.66.72_60-55-04.zR: find: error: couldn't open parent bucket for 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ
2019/11/05 12:19:50 DEBUG : 226.7559.66.72_60-55-04.zR: find: not found in local cache fs
2019/11/05 12:19:50 DEBUG : 226.7559.66.72_60-55-04.zR: find: cached object
2019/11/05 12:19:50 DEBUG : 2004.11.27_15-00-59.dv: Need to transfer - File not found at Destination
2019/11/05 12:19:50 DEBUG : 2004.11.27_15-00-59.dv: Starting multi-thread copy with 2 parts of size 135.875M
2019/11/05 12:19:50 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 1/2 (0-142475264) size 135.875M starting
2019/11/05 12:19:50 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 2/2 (142475264-284832000) size 135.762M starting
2019/11/05 12:19:50 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
2019/11/05 12:19:50 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
2019/11/05 12:19:50 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 0
2019/11/05 12:19:50 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 0
2019/11/05 12:19:50 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 1
2019/11/05 12:19:50 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 1
2019/11/05 12:19:51 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 2
2019/11/05 12:19:51 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 2
2019/11/05 12:19:51 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 3
2019/11/05 12:19:51 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 3
2019/11/05 12:19:52 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 4
2019/11/05 12:19:52 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 0: chunk retry storage: 4
2019/11/05 12:19:52 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 0
2019/11/05 12:19:52 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 0
2019/11/05 12:19:53 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 32
2019/11/05 12:19:53 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 142510080
2019/11/05 12:19:53 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 0
2019/11/05 12:19:53 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 0
2019/11/05 12:19:53 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 1
2019/11/05 12:19:53 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 1
2019/11/05 12:19:54 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 2
2019/11/05 12:19:54 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 2
2019/11/05 12:19:54 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 3
2019/11/05 12:19:54 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 3
2019/11/05 12:19:55 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 4
2019/11/05 12:19:55 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 10485760: chunk retry storage: 4
2019/11/05 12:19:55 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 125829120
2019/11/05 12:19:55 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 10485760
2019/11/05 12:19:55 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 136314880: chunk retry storage: 5
2019/11/05 12:19:55 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 20971520: chunk retry storage: 0
2019/11/05 12:19:55 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 136314880
2019/11/05 12:19:55 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 20971520
2019/11/05 12:19:56 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 146800640: chunk retry storage: 0
2019/11/05 12:19:56 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 31457280: chunk retry storage: 0
2019/11/05 12:19:56 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 146800640
2019/11/05 12:19:56 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 146836512
2019/11/05 12:19:56 DEBUG : 2004.11.27_15-00-59.dv: Reopening on read failure after 4259840 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
2019/11/05 12:19:56 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
019/11/05 12:19:56 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 32
019/11/05 12:19:56 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 146770960
019/11/05 12:19:56 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 146836512
019/11/05 12:19:56 DEBUG : 2004.11.27_15-00-59.dv: Reopen failed after 4259840 bytes read: failed to authenticate decrypted block - bad password?
019/11/05 12:19:56 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 2/2 failed: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:56 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 31457280: chunk retry storage: 1
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: 31457280: chunk retry storage: 2
019/11/05 12:19:57 DEBUG : worker-0 <226.7559.66.72_60-55-04.zR>: downloaded chunk 31457280
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 31464992
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Reopening on read failure after 31391744 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 32
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 31399440
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 31464992
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Reopen failed after 31391744 bytes read: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 1/2 failed: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 ERROR : 2004.11.27_15-00-59.dv: Failed to copy: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 ERROR : Attempt 1/3 failed with 3 errors and: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : Cache remote pool_c:115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ: new object '226.7559.66.72_60-55-04.zR'
019/11/05 12:19:57 DEBUG : 226.7559.66.72_60-55-04.zR: find: warm object: 226.7559.66.72_60-55-04.zR, expiring on: 2019-11-05 12:20:50.491151404 +0100 CET
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Sizes differ (src 284832000 vs dst 146735104)
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Starting multi-thread copy with 2 parts of size 135.875M
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 2/2 (142475264-284832000) size 135.762M starting
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 1/2 (0-142475264) size 135.875M starting
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 32
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 142510080
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 146836512
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Reopening on read failure after 4259840 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 32
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 146770960
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 146836512
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Reopen failed after 4259840 bytes read: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 2/2 failed: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 10553904
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 1/2 failed: context canceled
019/11/05 12:19:57 ERROR : 2004.11.27_15-00-59.dv: Failed to copy: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 ERROR : Attempt 2/3 failed with 3 errors and: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : Cache remote pool_c:115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ: new object '226.7559.66.72_60-55-04.zR'
019/11/05 12:19:57 DEBUG : 226.7559.66.72_60-55-04.zR: find: warm object: 226.7559.66.72_60-55-04.zR, expiring on: 2019-11-05 12:20:57.630000454 +0100 CET
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Sizes differ (src 284832000 vs dst 146735104)
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Starting multi-thread copy with 2 parts of size 135.875M
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 2/2 (142475264-284832000) size 135.762M starting
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 1/2 (0-142475264) size 135.875M starting
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 32
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 142510080
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 146836512
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Reopening on read failure after 4259840 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 0
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 32
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: moving offset set from 0 to 146770960
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 146836512
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: Reopen failed after 4259840 bytes read: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 2/2 failed: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 DEBUG : 115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ/226.7559.66.72_60-55-04.zR: cache reader closed 11996048
019/11/05 12:19:57 DEBUG : 2004.11.27_15-00-59.dv: multi-thread copy: stream 1/2 failed: context canceled
019/11/05 12:19:57 ERROR : 2004.11.27_15-00-59.dv: Failed to copy: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 ERROR : Attempt 3/3 failed with 3 errors and: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 Failed to copy with 3 errors: last error was: multpart copy: read failed: failed to authenticate decrypted block - bad password?
019/11/05 12:19:57 INFO : plex: stopped Plex watcher
019/11/05 12:19:58 DEBUG : Cache remote pool_c:115.xpotlD/138.ErmnxB/105.dbnfsb/53.YPAFGTCQ/71.TrGKLIvJ: Services stopped

Thanks for any suggestion :slight_smile:

If you are using the cache backend, things are going to be slow as that only uses 1 worker for anything non-Plex.

If you are randomly hitting 1 of the 3 remotes, it's not going to find things, as you've started a request on one remote and the second remote has no idea about it, so you are requesting a chunk it isn't aware of.

It may work without the cache backend, as it seems your goal is to have each request go to 1 of 3 backends, but with the cache it will fail as it's trying to split out the work.
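
If you want to keep mergerfs in the picture, a non-random search policy is probably the way to go; you already reported that category.search=epall avoids the error, so a sketch of the fstab entry (only the search policy changed from your original, otherwise untested) would be:

/mnt/plexdrive/mount1=RO:/mnt/plexdrive/mount2=RO:/mnt/plexdrive/mount3=RO /mnt/plexdrive/pool fuse.mergerfs ro,async_read=false,sync_read,use_ino,allow_other,auto_cache,func.getattr=all,category.action=all,category.create=all,category.search=epall,dev,suid 0 0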

Do you have 3 separate GSuite users configured for each backend?