Only update mod-time without copying file

Did you do this against a crypt remote? I can see this working against a regular Google Drive remote, but I'm not sure about crypt.

I was just going to drop in and request the same thing; my copy from a non-crypted ACD to a crypted Google Drive utterly munged the timestamps - they’re off by anything from minutes to years. It would be great if rclone could compare checksums and if they match, fix the timestamp to match the source.

I got it by hacking the source (thanks @luggi for the idea!). You have to revert the commit as suggested by @luggi, but you also have to comment out the hash checking and RUN WITH --dry-run.

// Check if the hashes are the same
//same, hash, _ := CheckHashes(src, dst)
//if !same {
//    Debugf(src, "%v differ", hash)
//    return false
//}
//if hash == HashNone {
//    // if couldn't check hash, return that they differ
//    return false
//}
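
For anyone who wants to see what the patched check effectively boils down to, here's a rough Go sketch. It is not rclone's actual code; the Object interface, memObject type and equalSketch function are names I made up for illustration. With the hash check commented out, two files of equal size are treated as identical and only the destination's mod time gets corrected.

// Illustrative sketch only: Object, memObject and equalSketch are names
// made up for this post, not rclone's real internals.
package main

import (
    "fmt"
    "time"
)

// Object is a minimal stand-in for an rclone file object.
type Object interface {
    Size() int64
    ModTime() time.Time
    SetModTime(t time.Time) error
}

// memObject is a toy in-memory implementation for the demo below.
type memObject struct {
    size    int64
    modTime time.Time
}

func (o *memObject) Size() int64                  { return o.size }
func (o *memObject) ModTime() time.Time           { return o.modTime }
func (o *memObject) SetModTime(t time.Time) error { o.modTime = t; return nil }

// equalSketch mimics the patched behaviour: with the hash check commented
// out, two objects of equal size are treated as identical, and only the
// destination's mod time is corrected when it differs from the source.
func equalSketch(src, dst Object) bool {
    if src.Size() != dst.Size() {
        return false // genuinely different, a real copy would be needed
    }
    if !src.ModTime().Equal(dst.ModTime()) {
        // Hashes are assumed equal; just repair the timestamp on dst.
        if err := dst.SetModTime(src.ModTime()); err != nil {
            fmt.Println("failed to set modification time:", err)
        }
    }
    return true
}

func main() {
    src := &memObject{size: 1024, modTime: time.Date(2015, 3, 1, 12, 0, 0, 0, time.UTC)}
    dst := &memObject{size: 1024, modTime: time.Now()} // wrong timestamp on the "remote"
    equalSketch(src, dst)
    fmt.Println("dst mod time fixed to:", dst.ModTime())
}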

If someone wants a precompiled Linux binary, I'm happy to provide it. Don't use it for anything but this, and please test before you start mangling your stuff! I just updated 3 destinations at around 3TB each. Beautiful.

I did exactly the same and it worked great! I don't have my version anymore; I already deleted it. Thanks for helping me out!

I'd be interested in the binary; I'm not familiar with Go, and I have no idea what's required from a compiler/toolchain perspective. Unless Nick wants to add it as a feature (though I'd understand if there's no point, since this is likely a one-time issue).

https://drive.google.com/open?id=0Bzx0f6SXRnPXNFREblNhLWxJMU0

Just remember that with --dry-run it won't copy any files, but it WILL update your mod times even though you specified dry-run. Good luck.

./rclonem --checkers=50 --transfers=50 -v copy /data/Media1 zonegd-cryptp:Media --dry-run


Heh, I need to work on my reading comprehension. Sadly, changing the local timestamps isn't an option for me; I actually rely on when a file was created/modified, so I have to change it on the remote. If that's not possible at all, then I'll just settle for --checksum.

The binary will do exactly that. It will modify the remote times to match the local times. Just use the dry-run option, as in my example.

Ah, I misunderstood (twice!) from the earlier comments. Sure enough, this resets the modification times, although man does the Google Drive rate limiting interfere! It’s going to take me days to fix it all, but at least then it will be fixed once and for all. 100% worth it!

Days. Heh heh, if only… I had 5.5 million files on ACD that were copied over with bad dates. With Google’s rate limiting, this looks like it’ll take months.

Mine did 210,000 files in about 2 hours. Maybe increase your checkers/transfers? I also use my own API key instead of the default one. Maybe that helped me.
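
For a rough sense of scale, 210,000 files in about 2 hours is roughly 29 metadata updates per second, which only happens if plenty of them are in flight at once. The toy worker pool below is my own sketch, not rclone code (the 20ms per call is an invented number); it just shows why raising --checkers/--transfers speeds up a job that copies no data and is all small API calls.

// Toy model of --checkers, not rclone code: the whole mod-time job is small
// metadata requests, so throughput scales with how many run in parallel
// (until the API rate limit pushes back). The 20ms per call is invented.
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    const files = 1000
    const checkers = 50 // analogous to --checkers=50

    jobs := make(chan int)
    var wg sync.WaitGroup

    start := time.Now()
    for i := 0; i < checkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for range jobs {
                // Stand-in for one "set modification time" API call.
                time.Sleep(20 * time.Millisecond)
            }
        }()
    }
    for f := 0; f < files; f++ {
        jobs <- f
    }
    close(jobs)
    wg.Wait()
    fmt.Printf("%d fake mod-time updates with %d checkers took %s\n",
        files, checkers, time.Since(start).Round(time.Millisecond))
}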

I saw massive numbers of pacer messages in the verbose log; I’ll try with my own API key as I doubt throwing more threads at rate limiting will help. Thanks for the API key suggestion; the feeling on multiple posts here is that slow small file performance is just a fact of life on Google Drive.

I don't think so, personally. I get really good performance. BUT make sure you request an increase of your API limits. The limits are set to 1,000 requests per user per 100 seconds, with a global limit of 10,000 for all users. Since you'll be just 'one' user of your API, you can request that the 1,000 be increased to the global limit of 10,000, and they pretty much approve it without much issue. I get VERY FEW pacer messages, and I run with 50-100 checkers to do the above job. There is still some throttling that only allows so many requests per second, but it's far more manageable if you can get the per-user API throttling under control, IMO.
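
To put those quotas in perspective, here's a back-of-the-envelope estimate of my own (assuming roughly one API call per mod-time fix and ignoring listings and retries): 1,000 requests per 100 seconds is about 10 requests/second, and 10,000 is about 100/second, so the 5.5-million-file job mentioned earlier would take days at the default rate and well under a day at the raised one.

// Back-of-the-envelope only: assumes roughly one API call per mod-time fix
// and ignores listings, retries and the per-second throttle.
package main

import (
    "fmt"
    "time"
)

func main() {
    const files = 5500000 // the 5.5 million ACD files mentioned above

    quotas := map[string]float64{
        "default 1,000 req per user / 100s": 10,  // ~10 requests per second
        "raised 10,000 req per user / 100s": 100, // ~100 requests per second
    }
    for name, perSec := range quotas {
        d := time.Duration(float64(files)/perSec) * time.Second
        fmt.Printf("%s -> about %.1f days\n", name, d.Hours()/24)
    }
}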

Requested, and also increased the checkers/transfers. Seeing much better performance even without the quota increase (at least for now).

Where do you have a 1k per user per 3 seconds limit? I only see a 10k per user per 100 second limit?

I mistyped; I've corrected it now. I meant per 100 seconds. Sorry for the confusion.

Thanks! Isn't increasing that limit a little suspicious? I was quite comfortable pushing 200 MBytes/s from GDrive to ACD last night (I've got an old security profile that is still working).

I don't think so. I was honest in my request: I use rclone to push files to GD as a backup and I hit the 1,000 per-user limit easily. Since I'm the only user, I asked for it to be increased to the whole 10,000 rate. They approved it, no issue.

It's not like I'm doing anything illegal. I'm using a tool to back up my files… What would be suspicious?

Sounds reasonable. Still worried they will enforce the 1TB limit on accounts with fewer than 5 users.

Heh, worked for a brief while and then my user quota was crushed.

Failed to set modification time: googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded

2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 1.591569721s (1 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 2.139585917s (2 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 4.204764712s (3 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 8.050264449s (4 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 16.303989579s (5 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 16.909156333s (6 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 16.429879825s (7 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 16.184312746s (8 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 16.706343734s (9 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 16.476771718s (10 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 16.158377901s (11 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:32 DEBUG : pacer: Rate limited, sleeping for 16.384065704s (12 consecutive low level retries)
2017/05/24 13:10:32 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:33 DEBUG : pacer: Rate limited, sleeping for 16.11208761s (13 consecutive low level retries)
2017/05/24 13:10:33 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
2017/05/24 13:10:34 DEBUG : pacer: Rate limited, sleeping for 16.264975024s (14 consecutive low level retries)
2017/05/24 13:10:34 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded)
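
Those sleep times are classic exponential backoff: the pacer roughly doubles the wait after each consecutive 403 (1.6s, 2.1s, 4.2s, 8s...) until it caps out around 16 seconds with some jitter. Here's a minimal sketch of that pattern; it's only an illustration under those assumptions, not rclone's actual pacer implementation.

// Minimal exponential-backoff sketch matching the pattern in the log above:
// double the sleep after each consecutive 403, cap it at ~16s, add a little
// jitter. This is only an illustration, not rclone's pacer code.
package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

var errRateLimited = errors.New("googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded")

var calls int

// call is a stand-in for one Drive API request; it gets throttled a few
// times before finally succeeding.
func call() error {
    calls++
    if calls < 5 {
        return errRateLimited
    }
    return nil
}

func main() {
    const (
        minSleep = 1 * time.Second
        maxSleep = 16 * time.Second
        retries  = 10
    )
    sleep := minSleep
    for attempt := 1; attempt <= retries; attempt++ {
        if err := call(); err == nil {
            fmt.Println("request succeeded on attempt", attempt)
            return
        }
        // Up to ~10% jitter so retries from many checkers don't all line up.
        wait := sleep + time.Duration(rand.Int63n(int64(sleep)/10))
        fmt.Printf("pacer sketch: rate limited, sleeping for %v (attempt %d/%d)\n", wait, attempt, retries)
        time.Sleep(wait)
        if sleep *= 2; sleep > maxSleep {
            sleep = maxSleep
        }
    }
    fmt.Println("giving up after", retries, "low level retries")
}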

Get it increased to 10,000 and that goes away; you're likely hitting the 1,000 limit, which is easy to do.

In https://console.developers.google.com you'll notice that your errors go WAY down once you get the increase: from around a 25-30% error rate to around 3%.