Rclone does not exit early when the local destination has a write error

What is the problem you are having with rclone?

I am doing a basic S3 -> local directory copy.
My local directory is an SMB mount on macOS. Sometimes macOS will unmount the directory while rclone is running (it is a big S3 bucket to clone).

This makes rclone error out, but it keeps chugging along. I just got a call from my 5G operator asking if everything is OK with my home network, because I am using 3TB of data a day. I guess rclone just keeps trying to re-download the whole bucket. :smiley:

I went through all the documentation but I couldn't find any flag that would force it to exit early. Any suggestions are appreciated. Thank you for your amazing work!

Run the command 'rclone version' and share the full output of the command.

rclone version
rclone v1.68.1
- os/version: darwin 14.6.1 (64 bit)
- os/kernel: 23.6.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.23.1
- go/linking: dynamic
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Scaleway S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --progress copy s3scaleway:bucket-name "/Volumes/smb-mounted-share"

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

rclone config redacted
[scaleway]
type = s3
provider = Scaleway
access_key_id = XXX
secret_access_key = XXX
region = nl-ams
endpoint = s3.nl-ams.scw.cloud
acl = private
storage_class = GLACIER
### Double check the config for sensitive info before posting publicly

A log from the command that you were trying to run with the -vv flag

I can try to run it again, but maybe this will do for now: one run with progress and the error?

Transferring:
 *  fragrance/f/r/fragonard-sorenza_upscaled.png: transferring
 *              fragrance/l/i/ligno_upscaled.png:  0% /3.495Mi, 0/s, -
 *            fragrance/i/m/impulse_upscaled.png: transferring
 * fragrance/l/a/lattafa-…t-al-musk_upscaled.png: transferring
2024/10/15 20:35:10 ERROR : fragrance/i/n/in-fieri-the-jetty.jpeg: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
Transferred:              0 B / 14.375 GiB, 0%, 0 B/s, ETA -
Errors:             22062 (retrying may help)
Checks:             86579 / 86579, 100%
Transferred:            0 / 10012, 0%
Elapsed time:   2h32m12.9s
Transferring:
 *  fragrance/f/r/fragonard-sorenza_upscaled.png: transferring
 *            fragrance/i/m/impulse_upscaled.png: transferring
 * fragrance/l/a/lattafa-…t-al-musk_upscaled.png: transferring
 *         fragrance/i/n/in-fieri-the-jetty.jpeg:  0% /180.644Ki, 0/s, -
Transferred:              0 B / 14.375 GiB, 0%, 0 B/s, ETA -
Errors:             22062 (retrying may help)
Checks:             86579 / 86579, 100%
Transferred:            0 / 10012, 0%
Elapsed time:   2h32m13.0s
Transferring:
 *  fragrance/f/r/fragonard-sorenza_upscaled.png: transferring
 *            fragrance/i/m/impulse_upscaled.png: transferring
 * fragrance/l/a/lattafa-…t-al-musk_upscaled.png: transferring
 *        fragrance/a/q/aqua-di-aix_upscaled.png: transferring

welcome to the forum,

once rclone has downloaded a file, and that file has not changed, rclone will not re-download it.


that is a permission error, not really a rclone error.
need to fix that before running rclone.


how big?
rclone size s3scaleway:bucket-name


--retries and --low-level-retries


really need to use a debug log, to see what rclone is doing
--log-level=DEBUG --log-file=/path/to/rclone.log

that is a permission error, not really a rclone error.
need to fix that before running rclone.

As I wrote, this happens in the middle of the run.

I have a script that mounts the SMB share and checks if it's writable:
(it's an Ansible template, so ignore that the variable is not expanded)
#!/bin/bash
BACKUP_VOLUME=/Volumes/smbsharemount

echo "Mounting {{truenas_share}}"
osascript <<EOF
mount volume "{{truenas_share}}"
EOF
echo "Mounted"

if [ ! -w "${BACKUP_VOLUME}" ]; then
    echo "${BACKUP_VOLUME} is not writeable"
    exit 1
else
    echo "${BACKUP_VOLUME} check passed, we can write there"
fi

The SMB share may disconnect; macOS then unmounts it and the directory stops existing in the middle of rclone's run, so there is nothing I can do to check for that up front.

Rclone first succeeds with some files, then the SMB share disconnects and the volume disappears. Rclone then fails while trying to mkdir in /Volumes/ (which is a no-op), so it keeps failing and continuing with the rest of the files.

I would expect rclone to just exit if it cannot save a file at the local destination. I saw the flags --retries and --low-level-retries, but the defaults for these should already stop rclone from continuing.
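Until rclone itself can abort here, the behaviour can be approximated with a wrapper that watches the mount and kills rclone as soon as the destination stops being writable. This is only a sketch, not an rclone feature: the mount point, the poll interval, and the `RUN_BACKUP` guard variable are all assumptions to adapt.

```shell
#!/usr/bin/env bash
# Watchdog sketch: run the copy in the background and abort it as soon as
# the destination directory stops being writable (e.g. macOS unmounted it).
set -u

BACKUP_VOLUME="/Volumes/smb-mounted-share"   # assumed mount point

# True (exit 0) while the destination exists and is writable.
is_dest_writable() {
    [ -d "$1" ] && [ -w "$1" ]
}

# Usage: run_with_watchdog <dest-dir> <command...>
run_with_watchdog() {
    local dest="$1"; shift
    "$@" &                              # start the copy in the background
    local pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        if ! is_dest_writable "$dest"; then
            echo "destination $dest vanished, aborting transfer" >&2
            kill "$pid"
            wait "$pid" 2>/dev/null
            return 1
        fi
        sleep 5                         # poll interval; tune as needed
    done
    wait "$pid"                         # propagate the copy's exit code
}

# Guarded so the functions can be sourced without starting a transfer.
if [ "${RUN_BACKUP:-0}" = 1 ]; then
    run_with_watchdog "$BACKUP_VOLUME" \
        rclone copy s3scaleway:bucket-name "$BACKUP_VOLUME" --progress
fi
```

Note this does not change how rclone counts errors; it simply stops the process from burning bandwidth once the volume is gone.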

Please see verbose logs:

2024/10/15 21:13:53 DEBUG : global/2/4/2406f68a-7e5d-4bce-8a00-43814f2a4965_97ce2bc0-7527-4763-87a8-11fae811a85c.png: Need to transfer - File not found at Destination
2024/10/15 21:13:53 ERROR : global/0/0/004d4cf3-5e26-48a0-91da-c237f8db7737_849c01c1-f2ca-4cf8-b89e-888c80600010.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:53 DEBUG : global/0/5/056a517e-0dd5-4a26-a510-dffe388908f8_a96d5e2a-4acf-43dd-b99c-10cd0cdf27ae.png: Need to transfer - File not found at Destination
2024/10/15 21:13:53 ERROR : global/0/0/004ed0ad-1339-4100-a0e5-a076750e3d0d_52e70f29-24e5-48dd-9ad1-4d471a751cd3.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:53 DEBUG : global/1/5/150c0f02-b5f5-4f61-8020-19770d517830_455ad6a4-8a7d-4587-94c3-fac27e100b9e.png: Need to transfer - File not found at Destination
2024/10/15 21:13:53 ERROR : global/0/0/004f7fb6-66d4-44c0-94a6-1644ee8aea05_1b56f022-fd57-4135-968d-086949fa0732.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:53 DEBUG : global/3/3/335b1a66-d52d-4649-960b-a42d092fafc7_1ccbd348-037d-46e2-a185-380c38e50e01.png: Need to transfer - File not found at Destination
2024/10/15 21:13:54 ERROR : global/0/0/00501b17-c9a5-47be-8c25-8cd463a21541_fruits-vegetables-and-nuts.webp: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:54 ERROR : global/0/0/00502342-7463-48a2-b0df-61723d172c04_513112bf-4051-4b9f-9144-a54682e162f9.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:54 DEBUG : global/7/0/70069f3a-2eb7-48d6-8fd6-40eb6bb3e795_eb94ee5c-fb6c-4a24-8b90-a063a5d3e1bb.png: Need to transfer - File not found at Destination
2024/10/15 21:13:54 DEBUG : global/5/1/51093b12-f634-4396-98ae-b73a50cc27da_047d8ff6-1870-4841-8580-66e435c93e76.png: Need to transfer - File not found at Destination
2024/10/15 21:13:54 ERROR : global/0/0/0050a915-4d28-4fe9-a820-d3e36eda1ae1_02088394-0ab4-4eeb-bc10-ca1cfe12ef73.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:54 DEBUG : global/6/0/6005e41e-4400-4c61-8db3-e4f231c3e33f_3598675a-cb5b-45f5-9627-943118a3254a.png: Need to transfer - File not found at Destination
2024/10/15 21:13:54 ERROR : global/0/0/00512866-c3f6-4190-846a-887306b7e2e3_691e8037-01d0-420b-a493-8cd24c4c0c0a.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:54 DEBUG : global/4/3/430a0e82-a7c8-4e78-b96e-3c84332767f1_8bcbcb4c-7c9b-4f89-8ecd-e26599af4b39.png: Need to transfer - File not found at Destination
2024/10/15 21:13:54 ERROR : global/0/0/00535ab0-58d8-4c43-8ad3-d1ed1de90f4a_95813902-8385-4a24-9869-108822f62d60.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:54 DEBUG : global/2/4/2406f815-58ee-4e64-81e5-a47201d24491_e0006cfd-6d50-46c5-b4b6-575568ba122f.png: Need to transfer - File not found at Destination
2024/10/15 21:13:54 ERROR : global/0/0/00520bc5-1d86-4b88-929a-98e85e59b29e_989d3272-2970-442d-b7ef-c78c0e522919.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:54 DEBUG : global/0/5/056abd53-1c54-4de9-8769-9ebd36cc688f_f2d415b5-9e80-489c-a0f0-e259c5c2f51e.png: Need to transfer - File not found at Destination
2024/10/15 21:13:55 ERROR : global/0/0/0053731a-33d5-46a5-b389-3e6e4f0230a8_pomegranate-blossom.webp: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/15 21:13:55 DEBUG : global/1/5/150c3a46-476d-4e7c-bb27-ea6516e6917e_1ae74b1e-f5f6-438a-86fc-327b3c9773f2.png: Need to transfer - File not found at Destination
2024/10/15 21:13:55 ERROR : global/0/0/0053bf5c-e7e1-4418-b8e0-ca9c42f876bc_036e8114-94a6-46ee-908c-d66277532e96.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
rclone size scaleway:fraghead-production
Total objects: 123.405k (123405)
Total size: 296.648 GiB (318523764626 Byte)

that is a small number of files and a small amount of data; it does not come close to 3TB of data.
rclone has not downloaded any data.


well, the defaults are what they are, nothing to be done about that now.
did you try the flags? did they not help?

i believe it is common for copy tools to try and copy as many files as possible, despite errors on individual files.

My backups run every few hours. I have other buckets with similar data. It easily adds up to 3TB or more of data if it downloads all of them into the ether.

I did try the flags, and rclone still keeps going:

rclone --low-level-retries 1 --retries 1 --log-level=DEBUG --log-file=/Users/admin/rclone.log --progress copy scaleway:fraghead-production "${BACKUP_VOLUME}fraghead/s3/fraghead-production/"
2024/10/16 08:02:39 DEBUG : global/6/0/6000102d-d3a3-4b04-be3c-374961d1897f_16f9a506-f4bc-4b4b-ac81-4a8f4107df75.png: Need to transfer - File not found at Destination
2024/10/16 08:02:39 ERROR : global/0/0/000b0be5-9ce2-4cbd-9e37-735460c440f6_d45c20db-c5d9-40f7-bc4c-b9146e44a3bb.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/16 08:02:39 DEBUG : ucm16go8fk0002770h9iof2y7n/5/2/529e8ca6-a9e3-496d-8d20-709664e5718c_notes-category.jpeg: Need to transfer - File not found at Destination
2024/10/16 08:02:39 ERROR : global/0/0/00106b57-b851-4a89-884b-15ec7522b2e1_1a6cf1ff-375e-4fa1-ae3f-36145f69856c.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/16 08:02:40 ERROR : global/0/0/000fc829-b84b-461b-8a03-9bc7e4e4151b_912e8a9e-b275-4d12-b1af-72a8b2009e71.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/16 08:02:40 DEBUG : ucm16go8fk0002770h9iof2y7n/7/0/70d8fddb-a197-4c3f-96e4-508bc05bc92b_givenchy-gentleman-cologne-edt-dukhi-dlya-muzhchin.jpg: Need to transfer - File not found at Destination
2024/10/16 08:02:40 ERROR : global/0/0/00115c85-9d52-4d70-8f17-5ccb494c22c7_7022fe21-81b8-42d5-8aaf-5940a928da9c.png: Failed to copy: mkdir /Volumes/tarsnap-backups: permission denied
2024/10/16 08:02:40 DEBUG : global/5/1/51012881-7f2a-4ead-bc66-3618b1694207_1e6b0b29-1231-4898-985f-87ed301ae426.png: Need to transfer - File not found at Destination
2024/10/16 08:02:40 INFO  : Signal received: interrupt
2024/10/16 08:02:40 INFO  : Exiting...

It is, if the error is individual to the file, but if the whole destination is not writable, then it should just stop.

Similarly to when your destination is an S3 bucket: if the S3 bucket doesn't exist, it should stop; if a single file fails to copy, it should keep going.

Not sure if this is possible here, but I would gladly accept erroring out on the first failed file if we cannot distinguish whether the whole directory is dead.

I am happy to make a pull request if you could tell me what solution you think would be best.

I think it is a good idea to have an option to abort either on the first error or on some specific ones.

The situation you described should be avoidable.

Maybe to start, add a flag "--abort-on-hard-error" (or similar) that aborts the whole transfer when "permission denied" happens? Other errors can be added later.

And/or a flag "--abort-on-first-error"? Primitive, but it would work in many cases.

Let's see what others think.

This will not solve the problem for me, as I do have some files that cannot be transferred (weird names in S3 that are not accepted by the ZFS filesystem I copy them to), so those need to fail silently.

In my opinion there should be a distinction between "that error is for a single file only" vs "the whole destination is not accessible". I haven't tried, but what happens if I try to copy 100k files into a non-existent S3 bucket?

The flag you proposed would be useful to have either way, for people who want a 100% perfect copy.

And what error does this situation produce? "permission denied" as well?

No, some other ones. I don't have the logs now, but I will try to reproduce.

I think a good option is, on a file error, to check whether the destination:

  1. is local file system
  2. has write permissions

and if not, just hard-fail the whole transfer, since if you cannot write one file, you won't be able to write another one.
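While no such flag exists in rclone yet, the proposed abort-on-specific-error behaviour can be approximated in user space by scanning the rclone log and killing the process when a destination-wide error appears. A sketch only: the error pattern below is taken from the logs in this thread, and the log path and PID handling are assumptions left to the caller.

```shell
#!/usr/bin/env bash
# User-space sketch of "--abort-on-hard-error": scan the rclone log and
# stop the transfer when a destination-wide error shows up. The pattern
# matches the "mkdir ... permission denied" errors from this thread.
set -u

HARD_ERROR_PATTERN='Failed to copy: mkdir .*: permission denied'

# True (exit 0) if a log line looks like a destination-wide failure
# rather than a per-file one.
is_hard_error() {
    printf '%s\n' "$1" | grep -Eq "$HARD_ERROR_PATTERN"
}

# Usage: watch_log_and_abort <logfile> <rclone-pid>
watch_log_and_abort() {
    local log="$1" rclone_pid="$2"
    tail -n 0 -F "$log" | while IFS= read -r line; do
        if is_hard_error "$line"; then
            echo "hard error in log, aborting rclone (pid $rclone_pid)" >&2
            kill "$rclone_pid"
            break
        fi
    done
}
```

Run this alongside `rclone ... --log-file=...` and pass it rclone's PID; per-file failures (like the un-copyable ZFS names above) will not match the pattern and are left alone.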

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.