lbi_allan
(Allan)
February 27, 2023, 10:36am
1
Running rclone sync causes IO errors.
Command used:
rclone sync spaces-sgp1:mybucket1 /mnt/vol2 --checkers 256 --transfers 256 --progress --fast-list --size-only
Log:
root@sydney:/mnt/vol2# rclone sync spaces-sgp1:mybucket1 /mnt/vol2 --checkers 256 --transfers 256 --progress --fast-list --size-only
2023-02-27 07:18:50 ERROR : 52231/: Failed to copy: failed to open source object: NoSuchKey:
status code: 404, request id: tx00000000000001d4ec082-0063fc595a-285cf3d6-sgp1b, host id:
2023-02-27 07:18:54 ERROR : Local file system at /mnt/vol2: not deleting files as there were IO errors
2023-02-27 07:18:54 ERROR : Local file system at /mnt/vol2: not deleting directories as there were IO errors
2023-02-27 07:18:54 ERROR : Attempt 1/3 failed with 3 errors and: failed to open source object: NoSuchKey:
status code: 404, request id: tx00000000000001d4ec082-0063fc595a-285cf3d6-sgp1b, host id:
Transferred: 2.276M / 2.276 MBytes, 100%, 3.198 MBytes/s, ETA 0s
2023-02-27 07:20:04 ERROR : 52231/: Failed to copy: failed to open source object: NoSuchKey:
status code: 404, request id: tx00000000000001d4e5352-0063fc59a4-28667c7c-sgp1b, host id:
Yes, I am using the latest version of rclone, which is v1.61.1.
The cloud storage is a DigitalOcean Space, and it is being synced to a mounted DigitalOcean Volume.
rclone sync spaces-sgp1:mybucket1 /mnt/vol2 --checkers 256 --transfers 256 --progress --fast-list --size-only
The rclone config contents with secrets removed.
[spaces-sgp1]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id =
secret_access_key =
endpoint = sgp1.digitaloceanspaces.com
acl = private
Not sure why folks fight using the template, but here it is again.
Run the command 'rclone version' and share the full output of the command.
STOP and READ :
Do not type in "Latest".
Do not just type in a version number. Run the command and share the full output.
A log from the command with the -vv flag
Paste log here
It looks like a file was deleted/removed from the source, but a debug log would show that better. If you run the same command with -vv and share the debug log, we can see what's going on.
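For example (just a sketch, reusing the remote and paths from your post; the log file path is arbitrary), rerunning with -vv and writing the output to a file keeps the full debug log so it can be shared:

rclone sync spaces-sgp1:mybucket1 /mnt/vol2 --size-only -vv --log-file /tmp/rclone-debug.log

You could also check whether the key from the error still exists on the source, e.g. rclone lsf spaces-sgp1:mybucket1/52231/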
asdffdsa
(jojothehumanmonkey)
February 27, 2023, 1:43pm
3
hello and welcome to the forum,
imho, DO s3 spaces has lots of rate limiting and other limitations.
i would test without --checkers 256 --transfers 256 --fast-list
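for example (a sketch only, same remote and paths as the original command; --tpslimit 10 is just an illustrative value to stay under the provider's rate limits):

rclone sync spaces-sgp1:mybucket1 /mnt/vol2 --progress --size-only --tpslimit 10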
system
(system)
Closed
March 29, 2023, 1:43pm
4
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.