What is the problem you are having with rclone?
I have an rclone remote which is a gdrive crypt mounted to my server. The mount works perfectly; however, when I use rclone move to transfer files to a local folder, I am getting unexpected EOF on all files.
Example file: anime/X.mkv
File is not erroring on the mount:
luke@download:/mnt/rclone_teamdrive/anime$ du -sh X.mkv
357M X.mkv
ffmpeg -i also works fine; however, when using rclone move:
2023/06/19 18:32:04 DEBUG : anime/X.mkv: Reopen failed after 186908672 bytes read: unexpected EOF
2023/06/19 18:32:04 DEBUG : anime/X.mkv: multi-thread copy: stream 2/2 failed: multipart copy: read failed: unexpected EOF
2023/06/19 18:32:04 DEBUG : anime/X.mkv: Received error: multipart copy: read failed: unexpected EOF - low level retry 1/10
Run the command 'rclone version' and share the full output of the command.
rclone v1.62.2
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-73-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.2
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Google Drive w/ Crypt
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone move \
teamdrive_crypt: /mnt/storage/ \
--bwlimit=50M \
--stats=10s \
--transfers=8 \
--checkers=8 \
--delete-empty-src-dirs \
--verbose=4
The rclone config contents with secrets removed.
[teamdrive]
type = drive
client_id =
client_secret =
scope = drive
token =
team_drive =
root_folder_id =
[teamdrive_crypt]
type = crypt
remote = teamdrive:media
filename_encryption = standard
directory_name_encryption = true
password =
A log from the command with the -vv flag
This is also happening with the same file:
2023/06/19 18:31:39 DEBUG : anime/X.mkv: Sizes differ (src 373952512 vs dst 312999936)
2023/06/19 18:31:39 DEBUG : anime/X.mkv: Starting multi-thread copy with 2 parts of size 178.375Mi
2023/06/19 18:31:39 DEBUG : anime/X.mkv: multi-thread copy: stream 2/2 (187039744-373952512) size 178.254Mi starting
2023/06/19 18:31:39 DEBUG : anime/X.mkv: multi-thread copy: stream 1/2 (0-187039744) size 178.375Mi starting
Post the full logfile - if it always happens with one file, just try to move only that file.
This happens repeatedly with many files. I've left this running for hours and it always stays at Transferred: 0/.... with the checker count constantly growing - I don't think there are any other anomalies in the log file.
Let's start with some sanity checks.
Run rclone config reconnect teamdrive: - then, when authenticating in the web browser, do you see the rclone logo and name? If yes, it means your client_id/secret are being ignored and you are using the rclone shared "profile".
I'm seeing my own OAuth app, so not the shared profile. I am pretty confident that the remote works, as the mount is actively in use (and I can run ffmpeg on the "bad" file); it's only when using move that it has trouble.
Thanks for your help so far!
Would you mind trying the same with copy instead of move?
Perfect - so there is nothing wrong with move - it is something with gdrive.
I've actually just noticed that when trying to copy the file manually from the mount, I get the same error in the mount log. I just tried a reconnect and restart too - I'm mighty confused.
cp anime/X.mkv ~
cp: error reading 'X.mkv': Input/output error
Jun 19 19:47:56 download rclone[900426]: ERROR : anime/X.mkv: ReadFileHandle.Read error: unexpected EOF
Jun 19 19:47:56 download rclone[900426]: ERROR : IO error: unexpected EOF
Jun 19 19:47:56 download rclone[900426]: ERROR : anime/X.mkv: ReadFileHandle.Release error: file already closed
Jun 19 19:47:56 download rclone[900426]: ERROR : IO error: file already closed
Should have spotted this earlier, sorry.
What about unmounting and then trying the copy/move? In theory it should not matter... but theory is one thing, life is another.
Same results! 
Is there any extra information I can provide to help?
Well... like you, I'm just trying to get some clues.
For me it looks like gdrive is, for some reason, throttling very aggressively - this is why I thought that maybe the mount was doing some big uploads, and why I first asked about the client_ID. Not that I know what is going on, but maybe going to the extreme and setting transfers/checkers to 1 could shed some light?
If it works, then we know that it is throttling.
Can you please recreate the issue and post the full debug log?
Not sure where you are seeing any throttling at all.
Is that a Shared Drive / Personal Account / EDU account / business account?
Shared drive on a business account, I am the owner of the drive
Shared drives have quite odd limits and I'm not sure exactly what they report back in terms of exceeding a quota.
What is odd is that you start to download it, but it rate limits out.
* anime/Alice Gear Aegis…s Cafe! HDTV-1080p.mkv: 39% /356.629Mi, 32.560Mi/s, 6s
* anime/Assassination Cl…eriod Bluray-1080p.mkv: 5% /1.178Gi, 21.604Mi/s, 52s
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 19.364413ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 79.919515ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 148.920071ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 182.144065ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 259.002576ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 330.912621ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 429.017463ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 441.402005ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 496.311271ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 449.186328ms
2023/06/19 20:09:11 DEBUG : pacer: Reducing sleep to 401.409729ms
2023/06/19 20:09:12 DEBUG : pacer: Reducing sleep to 104.634411ms
2023/06/19 20:09:12 DEBUG : pacer: Reducing sleep to 0s
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 31.745518ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 129.492522ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 154.452775ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 154.415776ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 214.541939ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 275.510795ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 363.587953ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 444.708398ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 457.791615ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 506.978462ms
2023/06/19 20:09:13 DEBUG : pacer: Reducing sleep to 334.86221ms
2023/06/19 20:09:14 DEBUG : pacer: Reducing sleep to 21.783779ms
2023/06/19 20:09:14 DEBUG : pacer: Reducing sleep to 0s
2023/06/19 20:09:14 DEBUG : pacer: Reducing sleep to 54.483729ms
2023/06/19 20:09:14 DEBUG : pacer: Reducing sleep to 150.759258ms
2023/06/19 20:09:15 DEBUG : pacer: Reducing sleep to 18.490506ms
until it fails.
Either a bad file (which would be odd) or perhaps a quota issue on said file.
Try removing the streams and see if that works with 1 file:
--multi-thread-streams 1
I wasn't aware extra limits were imposed per file as well. Is it possible to set up a configuration where rclone will simply delete these "bad" files from gdrive if it's unable to download them? I have thousands of files that I need to download to local storage, and I feel like a good few thousand could be hit by this odd limit.
That's the challenge - none of this is actually documented, so it's all guessing and what you hear other people say. I migrated away from Google Drive as I couldn't stand the upload/download limits, and the support wasn't helpful.
Not that I'm aware of. I'd test by waiting ~24 hours and then trying to copy a bad one to see if the error is still there.