Only update mod-time without copying file

Backstory:
I just did a full copy from acd to gdrive, by copying the raw encrypted data from acd to gdrive using a VPS. After the full acd outage I would like to move to gdrive as my main backup source and do the backups to acd and other services from a VPS.

I started by trying to upload to gdrive from my local NAS setup. I noticed that by copying acd -> gdrive the mod-times got somewhat broken on gdrive, and rclone wants to re-upload everything from my local NAS.

Is there a way to just update the mod-time on gdrive to reflect the local nas mod time?

Question:
So in essence the reverse of --no-update-modtime, something like --only-update-modtime.
As a workaround I can still upload using the --size-only flag.
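
For illustration, a rough sketch of that workaround; the local path and remote name below are placeholders, not from the thread:

rclone sync /path/to/nas/data gdrive-crypt:backup --size-only -v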

Hi,

Is there any update on this? I am running into exactly the same issue and cannot find a solution.

Regards,
JP

The mod times get updated to Google… except if you are syncing the inside of a crypt. If you are syncing the actual crypted files then it should update the times to match what was in ACD. You may be running into a problem where ACD, GD, and local have different timestamp precision. Check if your dates are reasonably close, and then you can play with the '--modify-window' parameter.
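
For reference, a rough example of how that parameter is used when the timestamps only differ by precision; the path, remote name, and window value here are placeholders:

rclone sync /path/to/nas/data gdrive-crypt:backup --modify-window 1s -v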

EDIT: I forgot that ACD doesn't support modification of times, so it only has the upload time. The above won't really work well because now GD has the same times as Amazon, so you'll need to sync the same way as with Amazon. :frowning: size-only.

What if you took the opposite approach and updated the local file system to match the remote instead?

You can grab all the times with a simple lsl in rclone like below.

rclone lsl robgs-cryptp: 2>&1 | awk '{print "touch -m -c --date=\""$2" "$3"\" /path/dir/"$4}' | tee update.sh

Hang on… this breaks with spaces in file names. I'll post a better one.

I did it by patching the rclone source. In fact I only reverted this commit and ran it using --dry-run.


This seems to work, but you have to test it. It should update the timestamps locally to match the remote. That's the reverse of what you asked, but if the timestamps are only important for syncing, this may work for you.

rclone lsl robgs-cryptp: 2>&1 | awk -F' ' '{ORS=""; print "touch -m -c --date=\""$2" "$3"\" \"/path/dir/"; $1=$2=$3=""; sub(/^ +/, "", $0); print $0"\"\n"}'
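
If the awk quoting gets fiddly, here is an untested shell-loop sketch of the same idea; the remote name and local root are the same placeholders as above:

rclone lsl robgs-cryptp: | while read -r size date time path; do
  # read puts everything after the third field into $path, so names with spaces survive
  touch -m -c --date="$date $time" "/path/dir/$path"
done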

Same issue here… Hopefully there is a workaround!

Did you do this against a crypt remote? I can see this working against a regular Google remote, but I'm not sure about crypt.

I was just going to drop in and request the same thing; my copy from a non-crypted ACD to a crypted Google Drive utterly munged the timestamps - they're off by anything from minutes to years. It would be great if rclone could compare checksums and, if they match, fix the timestamp to match the source.

I got it by hacking the source (thanks @luggi for the idea!). You have to revert the commit as suggested by @luggi, but you also have to comment out the hash checking and RUN WITH --DRY-RUN.

// Check if the hashes are the same
//same, hash, _ := CheckHashes(src, dst)
//if !same {
//	Debugf(src, "%v differ", hash)
//	return false
//}
//if hash == HashNone {
//	// if couldn't check hash, return that they differ
//	return false
//}

If someone wants a precompiled Linux binary, I'm happy to provide it. Don't use it for anything but this, and please test before you start mangling your stuff! I just updated 3 destinations at around 3TB each. Beautiful.

I did exactly the same and it worked great! I don't have my version anymore; I already deleted it. Thanks for helping me out!

I'd be interested in the binary; I'm not familiar with Go, and I have no idea what's required from a compiler/toolchain perspective. Unless Nick wants to add it as a feature (but I'd understand if there's no point in that, since this is likely a one-time issue).

https://drive.google.com/open?id=0Bzx0f6SXRnPXNFREblNhLWxJMU0

Just remember that when you use --dry-run it won't copy any files, but it WILL update your mod times even though you specified dry-run. Good luck.

./rclonem --checkers=50 --transfers=50 -v copy /data/Media1 zonegd-cryptp:Media --dry-run


Heh, I need to work on my reading comprehension - sadly, changing the local timestamps isn't an option for me. I actually rely on when a file was created/modified, so I have to change it on the remote. If that's not possible at all then I'll just settle for "--checksum".

The binary will do exactly that. It will modify the remote times to match the local times. Just use the dry-run option as in my example.

Ah, I misunderstood the earlier comments (twice!). Sure enough, this resets the modification times, although man does the Google Drive rate limiting interfere! It's going to take me days to fix it all, but at least then it will be fixed once and for all. 100% worth it!

Days. Heh heh, if only… I had 5.5 million files on ACD that were copied over with bad dates. With Google's rate limiting, this looks like it'll take months.

Mine did 210,000 files in about 2 hours. Maybe increase your checkers/transfers? I also use my own API instead of the default. Maybe that helped me.

I saw massive numbers of pacer messages in the verbose log; I'll try with my own API key, as I doubt throwing more threads at rate limiting will help. Thanks for the API key suggestion; the feeling in multiple posts here is that slow small-file performance is just a fact of life on Google Drive.

I don't think so personally. I get really good performance. BUT make sure you request an increase of your API limits. The limits are set to 1,000 requests per user per 100 seconds, with a global limit of 10,000 for all users. Since you'll be just 'one' user using your API, you can request that the 1,000 be increased to the global limit of 10,000, and they pretty much approve it without much issue. I get VERY FEW pacer messages and I run with like 50-100 checkers to do the above job. There is still some throttling that only allows so many requests per second, but it's far more manageable if you can get the USER API throttling under control IMO.
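
As a rough sketch of that setup (the exact config prompts depend on your rclone version, and the paths/remote below are the ones from the earlier example):

# 1. Create your own OAuth client in the Google API console and ask for a
#    higher per-user quota.
# 2. Edit the drive remote so it uses your own client_id / client_secret
#    instead of rclone's shared defaults:
rclone config

# 3. Then run the job with more checkers/transfers, e.g.:
rclone copy /data/Media1 zonegd-cryptp:Media --checkers=75 --transfers=25 -v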