Sync --track-renames is not tracking the renames

What is the problem you are having with rclone?

Using rclone sync to sync online (1fichier backend) changes to local with --track-renames, but renamed files are not renamed locally: the file with the original name is deleted, and a file with the new name is created.
I know that 1fichier has very limited timestamps (only the file creation date is recorded, and it is reset to the current timestamp even on a rename), but the hash of the file stays the same, so locally it should still work like a real rename. I tried adding the hash type supported by 1fichier specifically (--hash whirlpool), with the same result. I also tried adding --check-first, but no luck.

Why? BTW, as far as I can see, rclone doesn't have the metadata overlay that was planned a few years ago. Is there a third-party solution to store file dates transparently even if the remote doesn't support them?

Run the command 'rclone version' and share the full output of the command.

rclone v1.68.1
- os/version: Microsoft Windows 11 Pro 22H2 (64 bit)
- os/kernel: 10.0.22621.3155 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.23.1
- go/linking: static
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

1fichier

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync hasher1fichier:/Progz/GM9/gdt_data/ E:\Progz\GM9\gdt_data\ --modify-window=5s --exclude-from e:\Progz\GM9\rcloexclude.txt --track-renames --hash whirlpool --check-first --fast-list --no-update-modtime -P -v

Please run 'rclone config redacted' and share the full output.

[1fichier]
type = fichier
api_key = XXX

[backupkript1fichier]
type = crypt
remote = 1fichier:/bigtar/
password = XXX
password2 = XXX

[hasher1fichier]
type = hasher
remote = backupkript1fichier:/
hashes = md5,sha1,whirlpool

You're so close!

You need --track-renames-strategy hash (docs).
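That is, your command from above with the strategy flag added (untested on my side, but --track-renames-strategy is a documented sync flag):

rclone sync hasher1fichier:/Progz/GM9/gdt_data/ E:\Progz\GM9\gdt_data\ --modify-window=5s --exclude-from e:\Progz\GM9\rcloexclude.txt --track-renames --track-renames-strategy hash --hash whirlpool --check-first --fast-list --no-update-modtime -P -v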

I hope that helps.

Also, test with --dry-run (I'm pretty sure that will show the renames), and use more verbose logging to see what's happening.

That doesn't work; on a dry run I see delete and copy instead of rename.

Run again with -vv (as per the original template) and see what’s happening

I ran it again; that's why I wrote that delete-copy happens again, because that's what I saw.

I've renamed a few files in the cloud just as a test, but no rename happened when syncing to local, only deletes and copies.

Create a small test case with a few files. Run the sync with the -vv option. Post all the output here. Then we can have a look. Otherwise this is just a story and nobody wants to play guessing games.
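For example, something like this (a sketch; the file and folder names are placeholders I made up):

rclone copy testfile.bin hasher1fichier:/renametest/ -v
rclone moveto hasher1fichier:/renametest/testfile.bin hasher1fichier:/renametest/renamed.bin -v
rclone sync hasher1fichier:/renametest/ E:\renametest\ --track-renames --track-renames-strategy hash --dry-run -vv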

Created one, and the rename works. It also works on the whole backup if just a few files are renamed. But with more (currently about 70 files), copy-delete happens.

OK, I've found it!

If the rename in the cloud happens on a different computer (so the hasher .bolt files are not in sync), OR without the hasher overlay, and for the sync with the command above I use the hasher overlay, then the renamed files are copied and then deleted, as there is no entry yet in the hasher .bolt file for the "new" (renamed) files.

I think that means rclone checks all available hashes, but if there are no hashes yet for the renamed files, it doesn't create them and find the possible pair to rename; it just copies and deletes.

So if hasher is not used anywhere, the renames are detected correctly. If hasher is used, then... see above.
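If that's the cause, pre-seeding the hasher database before the sync should avoid it. A sketch (unverified on 1fichier; --download is a standard hashsum flag, and the hasher docs say hashes are cached whenever a file is read in full, but note this downloads every file that has no cached hash yet):

rclone hashsum whirlpool hasher1fichier:/Progz/GM9/gdt_data/ --download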

Glad you found it.

You also wasted our time and a good bit of your own by (repeatedly) ignoring requests for debug logs and other information. I strongly suspect they would have narrowed it down very, very quickly (assuming they were unadulterated), because someone would have asked about the hasher.

In the future, please follow the prompts before posting or don't bother.

What logs are you talking about? Nothing in the logs about this problem points to the cause of this mystery.
I tried many times (with -vv too); then, while I was gaming, I had the thought that it was something around the hashes, so I tried...

If you need hashes and you are using crypt, what I do is use chunker. It can be used not only for splitting large files but also for storing hashes of all files. The splitting part can effectively be turned off too if it is not needed.

In your case:

[1fichier]
type = fichier
api_key = XXX

[backupkript1fichier]
type = crypt
remote = 1fichier:/bigtar/
password = XXX
password2 = XXX

[backupchunker1fichier]
type = chunker
remote = backupkript1fichier:
chunk_size = 1P
hash_type = sha1all
name_format = *.rcc###

This way hashes are stored together with the files, and it does not matter that you access your content from different machines.
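For example, to confirm that hashes are available after uploading through the chunker (hashsum is a standard rclone command; note that only files uploaded via the chunker remote get stored hashes):

rclone hashsum sha1 backupchunker1fichier: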

If you had posted the config, the logs, and the command that you ran, as requested, this would have come up a lot sooner. Either way, you were asked to show them, and you ignored it.

Don't you see the topic starter message? All of these were posted, except the logs, as they are meaningless.

Thanks, this could work, but I've read the "caveat" section, which can be a problem, as the remote (1fichier) has a rather brutal API rate limit, so this could mean more 30-second bans from the service :sweat_smile:
And the reason I need the hashes is that 1fichier uses only Whirlpool hashes, while everything else uses SHA-1 or MD5.
