Is the only way to move a file, as opposed to deleting and re-uploading, to use the move command? I plan on setting up rclone sync with a daemon so that any file added to a certain directory is automatically synced to Google Drive. However, I have other automated processes that may move a file from one directory to another, still within the folder structure that rclone is watching. Will rclone see this as a move, or will it delete and re-upload? What about renames? If it deletes and re-uploads, is there any way around this? I am not using crypt currently, but I’d also like to know if the answer is different with crypt.
If you want rclone to do this then add the --track-renames flag. Note that --track-renames doesn’t work with crypt.
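For example, a minimal sketch (the paths here are taken from the command shared later in this thread):

```shell
# Sync with rename/move detection: files that only moved or were renamed
# within the tree are moved server-side instead of deleted and re-uploaded.
rclone sync --track-renames Z:\Stuff google:Backup/Stuff
```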
One more question. I am doing a sync with --track-renames but the process takes forever. I’m guessing it is either calculating the checksum locally or downloading the remote file and hashing that. Which is it? This is Google Drive, btw. I have a lot of large files. Will it hash every time I do a sync with --track-renames, or will it cache it locally or something?
--track-renames just involves comparing remote hashes which is usually quick.
What other options are you using? Are you using --checksum? That will run a checksum on all your local files.
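To illustrate the difference (paths are from the command shown later in the thread):

```shell
# Default comparison: size + modtime only; no up-front local hashing.
rclone sync Z:\Stuff google:Backup/Stuff

# With --checksum: every local file is hashed and compared against the
# hash stored on the remote, which can take a long time for big files.
rclone sync --checksum Z:\Stuff google:Backup/Stuff
```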
I’ll have to double-check, as I likely lost track of what I was doing, but I believe I tried it both with --checksum and without. To clarify: this time I renamed a file locally that was already uploaded. I have a directory of files that sometimes get renamed or moved around, and I am trying to set things up so that I can move and rename files locally and run a sync without re-uploading them. Both times it took about half an hour to reconcile the rename. To me, this indicates it was downloading the file, as it doesn’t take my machine that long to hash the file locally. This was all done with a dry run.
Can you share the command line you are using? I can’t really say what was happening without it!
This is the one that is syncing now. I tried several different things but I think this is one that gave me issues.
rclone sync --delete-after --size-only --verbose --transfers 1 --checkers 8 --bwlimit 300k --contimeout 60s --timeout 300s --retries 3 --low-level-retries 10 --exclude /sort/** --stats 1s Z:\Stuff google:Backup/Stuff
That should be doing modtime-only checks; however, if the modtime doesn’t match but the size does, it will do a checksum test to work out whether the file should be uploaded or just have its modtime set.
How does the checksum work on Google’s side? Is it just stored as an attribute or something? Is it rclone that calculates the hash and stores it when it uploads a file or does Google hash everything it stores? I imagine it’s the former but I don’t really know. It just seems to be downloading because a hash shouldn’t take that long. I’ll have to take a closer look after the sync is finished as I tried a few things and need to clarify the results. I think I got a little lost with my testing.
Google calculates it when you upload the file and stores it as an attribute. rclone fetches it when it gets info about the file.
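A quick way to confirm this, assuming the remote is named `google:` as in the command above:

```shell
# Print the MD5 checksums Google Drive has stored for each file.
# This only reads metadata; file contents are not downloaded.
rclone md5sum google:Backup/Stuff
```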
So does rclone cache checksums of local files or does it recalculate on every sync? If I have terabytes of stuff in a folder, rename a file, and then do a sync with --track-renames, will it hash everything that is local to compare hashes with google to see what was renamed?
I think it caches the checksums but I’m not 100% sure…
If you are syncing from local -> google then the only hash comparisons will be using hashes retrieved from Google, so no local hashes.
That said, if you are using the cache backend too for uploads, I’m not 100% sure how that is going to interact with things.
How would I find that out? Where does it store the cache?
I don’t quite follow. How does it know whether a file has been renamed or moved on the remote end if it doesn’t compare the remote file hash with the local file hash? Comparing on size and modtime seems to take forever too. I don’t know if this is because it takes Google longer to return the modtimes. I’ll have to run some checks to make sure I didn’t screw anything up.
I checked the source - it does cache the hashes.
I’ve reviewed the way sync --track-renames works… When it is syncing files and a file needs to be uploaded, it saves it for the end; likewise, when a file would have been deleted on the destination because it is missing in the source, it saves that for the end. At the end it tries to match up those source and destination files to see if it could do a rename rather than an upload and a delete.
In this checking phase it will be comparing the source hash and the destination hash, so what I said earlier was wrong. If you rename a lot of big files on the remote (or locally), it will have to hash them locally, which might take a long time.
Sorry for the confusion!
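One way to observe that matching phase is a dry run after renaming a file locally (flags and paths taken from the command earlier in the thread):

```shell
# After a local rename, a dry run shows whether rclone plans a
# server-side move (rename detected) or a delete + re-upload.
rclone sync --dry-run -v --track-renames Z:\Stuff google:Backup/Stuff
```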
With --track-renames this should be quick; however, --track-renames will still do its local hashing regardless.