Files copied corrupted

What is the problem you are having with rclone?

I performed a large rclone sync of a directory structure (about 3 TB) from one system to another, reading from a filesystem mounted locally, read-only, via apfs_fuse. The end result was inconsistent: many files, such as JPG images, are now identified as file type "data" and are presumably corrupt. With thousands of files involved, I have no realistic way to determine the extent of the corruption.

I'm puzzled as to how this could happen, especially for a purely local operation.
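The closest I can get to measuring the damage is probably something like the following (a sketch, assuming the apfs_fuse mount is still available; the /tmp report paths are just examples). rclone check compares the two trees without copying anything and can write mismatching paths out to report files:

# Compare source and destination and record any differences
rclone check "/apfs/root/" "/srv/dev-disk-by-uuid-6a3e9c4d-4b2f-46a8-a2c3-c9c4f9e6ecc0/" \
    --exclude ".SpotLight-V100/**" \
    --differ /tmp/differing-files.txt \
    --missing-on-dst /tmp/missing-files.txt

# Count how many files actually differ or are missing
wc -l /tmp/differing-files.txt /tmp/missing-files.txt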

What is your rclone version (output from rclone version)

I have been using the latest version of rclone, freshly built from source. This incident occurred a few weeks ago.

Which OS you are using and how many bits (eg Windows 7, 64 bit)

I am on macOS Big Sur (up to date).

Which cloud storage system are you using? (eg Google Drive)

None; this was a local copy from an APFS filesystem mounted via apfs_fuse to an ext4 destination.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/bin/rclone sync --verbose --transfers 35 \
    --copy-links \
    --checkers 8 --contimeout 60s --timeout 300s --retries 3 \
    --exclude ".SpotLight-V100/**" \
    --low-level-retries 10 --drive-acknowledge-abuse \
    --stats 1s "/apfs/root/" "/srv/dev-disk-by-uuid-6a3e9c4d-4b2f-46a8-a2c3-c9c4f9e6ecc0/"

The rclone config contents with secrets removed.

[ no config was necessary for local operation ]

Without any log file, there isn't much help to offer.

If you can recreate the issue with a log file, we can take a look.
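For example, re-running your command with --verbose swapped for -vv and a --log-file added (a sketch; the log path is just an example) would capture the detail we need:

# -vv turns on full debug output; --log-file writes it to disk so it can
# be attached here instead of scrolling past in the terminal.
/usr/bin/rclone sync -vv --transfers 35 \
    --copy-links \
    --checkers 8 --contimeout 60s --timeout 300s --retries 3 \
    --exclude ".SpotLight-V100/**" \
    --low-level-retries 10 --drive-acknowledge-abuse \
    --stats 1s "/apfs/root/" "/srv/dev-disk-by-uuid-6a3e9c4d-4b2f-46a8-a2c3-c9c4f9e6ecc0/" \
    --log-file /tmp/rclone-sync.log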

For local file systems, I wouldn't use rclone as I'd just rsync.
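Something along these lines would be the rough rsync equivalent (a sketch, not a drop-in replacement for every rclone flag above):

# -a preserves permissions/times, -L follows symlinks (like --copy-links),
# and -c forces checksum comparison on a re-run so silently corrupted
# files get re-copied rather than skipped on size/mtime alone.
rsync -avLc --exclude ".SpotLight-V100/" \
    "/apfs/root/" "/srv/dev-disk-by-uuid-6a3e9c4d-4b2f-46a8-a2c3-c9c4f9e6ecc0/"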

Is the data corrupted, or is it just the metadata? Rclone isn't very good at preserving macOS metadata and resource forks. Normally this doesn't matter, since cloud providers don't support them either, but it might in a local -> local copy.

ext4 doesn't support resource forks either, does it?
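One way to tell the two apart (a sketch; the file path below is a placeholder, and this assumes the apfs_fuse mount is still reachable and exposes extended attributes) is to compare the raw file contents directly, since resource forks normally surface as a com.apple.ResourceFork extended attribute rather than as part of the data itself:

SRC="/apfs/root/path/to/example.jpg"            # placeholder path
DST="/srv/dev-disk-by-uuid-6a3e9c4d-4b2f-46a8-a2c3-c9c4f9e6ecc0/path/to/example.jpg"

# Byte-for-byte comparison of the file data; a difference here means real
# corruption, not just lost metadata.
cmp "$SRC" "$DST" && echo "contents identical" || echo "contents differ"

# List extended attributes on the source (xattr on macOS, getfattr on
# Linux); a com.apple.ResourceFork entry is metadata ext4 cannot keep.
xattr -l "$SRC" 2>/dev/null || getfattr -d -m - "$SRC"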
