Only update mod-time without copying file

I did it by patching the rclone source. In fact, I only reverted this commit and ran it using --dry-run.


This seems to work, but you should test it. It should update the timestamps locally to match the remote. That's the reverse of what you asked, but if the timestamps only matter for syncing, this may work for you.

rclone lsl robgs-cryptp: 2>&1 | awk '{ORS=""; print "touch -m -c --date=\""$2" "$3"\" \"/path/dir/"; $1=$2=$3=""; sub(/^ +/, "", $0); print $0"\"\n"}'
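To see what that pipeline produces, here is a minimal sketch run on one made-up `rclone lsl` line (fields are size, date, time, path; the `/path/dir/` prefix and the filename are placeholders). Note I use `sub(/^ +/, ...)` rather than `gsub(/ /, ...)` so that filenames containing spaces survive:

```shell
# One fake listing line in `rclone lsl` format: size, date, time, path.
line='12345 2017-05-17 10:22:33.000000000 Media/example.mkv'
# The awk filter emits a `touch` command that sets the local file's
# mod-time to the remote one.
echo "$line" | awk '{ORS=""; print "touch -m -c --date=\""$2" "$3"\" \"/path/dir/"; $1=$2=$3=""; sub(/^ +/, "", $0); print $0"\"\n"}'
# prints: touch -m -c --date="2017-05-17 10:22:33.000000000" "/path/dir/Media/example.mkv"
```

Piping the generated commands into `sh` would then apply them; `-c` keeps `touch` from creating files that don't exist locally.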

Same issue here…Hopefully there is a workaround!

Did you do this against a crypt remote? I can see this working against a regular Google remote, but I'm not sure about crypt.

I was just going to drop in and request the same thing; my copy from a non-crypted ACD to a crypted Google Drive utterly munged the timestamps - they’re off by anything from minutes to years. It would be great if rclone could compare checksums and if they match, fix the timestamp to match the source.

I got it by hacking the source (thanks @luggi for the idea!). You have to revert the commit as suggested by @luggi, but you also have to comment out the hash checking and RUN WITH --DRY-RUN.

// Check if the hashes are the same
// same, hash, _ := CheckHashes(src, dst)
// if !same {
//     Debugf(src, "%v differ", hash)
//     return false
// }
// if hash == HashNone {
//     // if couldn't check hash, return that they differ
//     return false
// }

If someone wants a precompiled Linux binary, I'm happy to provide it. Don't use it for anything but this, and please test before you start mangling your stuff! I just updated 3 destinations at around 3 TB each. Beautiful.

I did exactly the same and it worked great! I don't have my version anymore; I already deleted it. Thanks for helping me out!

I’d be interested in the binary; I’m not familiar with go, and I have no idea what’s required from a compiler/toolchain perspective. Unless Nick wants to add it as a feature (but I’d understand if there’s no point in that since this is likely a one-time issue).

https://drive.google.com/open?id=0Bzx0f6SXRnPXNFREblNhLWxJMU0

Just remember that when you use --dry-run it won't copy any files, but it WILL update your mod times even though you specified dry-run. Good luck.

./rclonem --checkers=50 --transfers=50 -v copy /data/Media1 zonegd-cryptp:Media --dry-run


Heh, I need to work on my reading comprehension - sadly, changing the local timestamps isn't an option for me - I actually rely on when a file was created/modified, so I have to change it on the remote. If that's not possible at all then I'll just settle for --checksum.

The binary will do exactly that: it will modify the remote times to match the local times. Just use the dry-run option as in my example.

Ah, I misunderstood (twice!) from the earlier comments. Sure enough, this resets the modification times, although man does the Google Drive rate limiting interfere! It’s going to take me days to fix it all, but at least then it will be fixed once and for all. 100% worth it!

Days. Heh heh, if only… I had 5.5 million files on ACD that were copied over with bad dates. With Google’s rate limiting, this looks like it’ll take months.

Mine did 210,000 files in about 2 hours. Maybe increase your checkers/transfers? I also use my own API key instead of the default. Maybe that helped me.

I saw massive numbers of pacer messages in the verbose log; I’ll try with my own API key as I doubt throwing more threads at rate limiting will help. Thanks for the API key suggestion; the feeling on multiple posts here is that slow small file performance is just a fact of life on Google Drive.

I don't think so, personally. I get really good performance. BUT make sure you request an increase of your API limits. The limits are set to 1,000 requests per user per 100 seconds, with a global limit of 10,000 for all users. Since you'll be just 'one' user using your API, you can request that the 1,000 be increased to the global limit of 10,000, and they pretty much approve it without much issue. I get very few pacer messages, and I run with like 50-100 checkers to do the above job. There is still some throttling that only allows so many requests per second, but it's far more manageable if you can get the per-user API throttling under control, IMO.

Requested, and also increased the checkers/transfers. Seeing much better performance even without the quota increase (at least for now).

Where do you have a 1k per user per 3 seconds limit? I only see a 10k per user per 100 second limit?

I mistyped and have corrected it now. I meant per 100 seconds. Sorry for the confusion.

Thanks! Isn't increasing that limit a little suspicious? I was quite comfortably pushing 200 MBytes/s from GDrive to ACD last night (I've got an old security profile that is still working).

I don't think so. I was honest in my request: I use rclone to push files to GD as a backup and I hit the 1,000 per user limit easily. Since I am the only user, I'd like it increased to the whole 10,000 rate. They approved it, no issue.

It's not like I'm doing anything illegal. I'm using a tool to back up my files… What would be suspicious?