How to minimize bandwidth w.r.t. renames during sync

It does mention hashes. Modtime support isn't binary - what a backend reports is a precision. Very large precisions are effectively ignored, so rclone sets the precision to 100y for backends which don't support modtimes.

```
$ rclone backend features TestDrive:
{
	"Name": "TestDrive",
	"Root": "",
	"String": "Google drive root ''",
	"Precision": 1000000,
	"Hashes": [
	"Features": {
		"About": true,
		"BucketBased": false,
		"BucketBasedRootOK": false,
		"CanHaveEmptyDirectories": true,
		"CaseInsensitive": false,
		"ChangeNotify": true,
		"CleanUp": true,
		"Command": true,
		"Copy": true,
		"DirCacheFlush": true,
		"DirMove": true,
		"Disconnect": false,
		"DuplicateFiles": true,
		"GetTier": false,
		"IsLocal": false,
		"ListR": true,
		"MergeDirs": true,
		"Move": true,
		"OpenWriterAt": false,
		"PublicLink": true,
		"Purge": true,
		"PutStream": true,
		"PutUnchecked": true,
		"ReadMimeType": true,
		"ServerSideAcrossConfigs": false,
		"SetTier": false,
		"SetWrapper": false,
		"UnWrap": false,
		"UserInfo": false,
		"WrapFs": false,
		"WriteMimeType": true
```

yeah, my bad, I was looking under Features.

oh, I see, thanks for the explanation, it wasn't really clear from the docs.

okay, so... bottom line:

interested users can check out the --track-renames and --track-renames-strategy flags.
NOTE: these flags rely on either hashes or modtimes being supported by both remotes.
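For reference, here is roughly how those flags look on the command line (the remote names and paths are placeholders; check `rclone help flags` for the authoritative details):

```shell
# Detect renames by matching file hashes, so a renamed file is
# moved server-side instead of being re-uploaded. This needs a
# hash type common to both remotes.
rclone sync source:path dest:path --track-renames

# If the remotes share no common hash but both support precise
# modtimes, select a different matching strategy explicitly.
rclone sync source:path dest:path --track-renames --track-renames-strategy modtime
```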

too bad for me, but case closed.


I did think of doing some other strategies, maybe "leaf" to deal with the very common case of just moving files around rather than renaming the file itself. Would that be useful?

Also rclone could potentially do a "sample" option where it reads the first 1k of the file, makes a hash and uses that.
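The "sample" idea above can be sketched in a few lines. This is not rclone code - just a hypothetical illustration of fingerprinting a file from its first 1k, paired with the size to reduce collisions between files sharing a common header:

```python
import hashlib
from pathlib import Path

def sample_hash(path: Path, sample_size: int = 1024) -> str:
    """Hash only the first sample_size bytes of a file.

    A cheap fingerprint for rename tracking: files whose leading
    bytes differ get different keys without reading either file
    in full.
    """
    with path.open("rb") as f:
        head = f.read(sample_size)
    return hashlib.md5(head).hexdigest()

def rename_key(path: Path) -> tuple[int, str]:
    """Combine size and sample hash; two files that collide on
    both are very likely the same file under a new name."""
    return (path.stat().st_size, sample_hash(path))
```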

those seem reasonable strategies to me! :smiley:

I'll add my two cents:
why not create a local hash/modtime database?
it would be useful for tracking renames, and for extending some functionality to remotes that don't support hashes or modtimes, like Mega.
moreover, folks using the crypt remote interface with their cloud providers only through rclone, never via the web or other methods, so rclone could be confident that nobody modified the files behind its back.
rclone could be set up to periodically hashcheck the files, downloading them if needed, to make sure this local db stays up to date.
does this make sense?
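To make the suggestion concrete, here is a minimal sketch of such a local database using sqlite. Everything here (schema, function names) is hypothetical, not anything rclone actually implements; a rename would show up as a known hash appearing under a new path:

```python
import hashlib
import os
import sqlite3

def open_db(db_path: str) -> sqlite3.Connection:
    """Open (or create) the local cache of path -> size/mtime/hash."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS files (
               path  TEXT PRIMARY KEY,
               size  INTEGER,
               mtime REAL,
               md5   TEXT
           )"""
    )
    return conn

def record(conn: sqlite3.Connection, path: str) -> None:
    """Hash a file and upsert its entry into the cache."""
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    conn.execute(
        "INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
        (path, st.st_size, st.st_mtime, digest),
    )
    conn.commit()

def find_by_hash(conn: sqlite3.Connection, digest: str) -> list[str]:
    """All known paths with this hash - candidates for a rename."""
    rows = conn.execute("SELECT path FROM files WHERE md5 = ?", (digest,))
    return [row[0] for row in rows]
```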

Local caches of things are on the roadmap you'll be pleased to know.

There is rather a lot of groundwork to lay first but I'll be starting that after I've done with the VFS changes.


I did --track-renames-strategy leaf here if you want to have a go

Also I note that --track-renames-strategy size will work for you if (and it is a big if) all the files you rename have different sizes! (uploaded in 15-30 mins)
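For anyone trying the build, usage would look something like this (paths are placeholders; my understanding is that strategies can also be combined with commas, but check the flag docs to confirm):

```shell
# "leaf" matches on the file name alone, catching the common case
# of files moved between directories without being renamed.
rclone sync source:path dest:path --track-renames --track-renames-strategy leaf

# "size" matches on size alone - only safe when every renamed
# file has a unique size.
rclone sync source:path dest:path --track-renames --track-renames-strategy size
```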


I'll think about it, thanks! I'll report here if I do.

I have tens of thousands of files, so I wouldn't be comfortable assuming that. :smiley:


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.