Fails to sync gdrive to mega

What is the problem you are having with rclone?

I'm trying to sync from gdrive: to mega: and it's mostly working, but it fails on one specific file. It is a simple text file generated by:

rclone mount --vfs-cache-mode=full probus-7: /mnt/probus/g-suite/7-policies-procedures-manuals
date > /mnt/probus/g-suite/7-policies-procedures-manuals/new-file.txt 

I ran an initial sync (successfully) with:

rclone --verbose sync probus-7: mega:test

However, if I re-run the 'date' command to update the gdrive version, the sync always fails to pick up the change:

(see final output below)

Obviously, the old and new file sizes are the same.

It's as if 'rclone sync' is only comparing file size and ignoring the timestamp.

--checksum fails as gdrive and mega 'don't share a hash'

--ignore-size fails in the same way.

I've tried unmounting all the rclone mounts and

rm -rf ~/.cache/rclone

... but it doesn't help.

If I look at the Google Drive web interface, I can see the new version of the file.
In the mega web interface, I only see the old version of the file.

What is your rclone version (output from rclone version)

rclone v1.55.0-DEV
- os/type: linux
- os/arch: amd64
- go/version: go1.15.8
- go/linking: dynamic
- go/tags: none

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Fedora 33, 64-bit Linux on Intel

The rclone config contents with secrets removed.


[probus-7]
type = drive
scope = drive
token = {"access_token":"...","token_type":"Bearer","refresh_token":"...","expiry":"2021-05-21T14:26:32.494002052+10:00"}
team_drive = 0AKf_MVaFc9...

[mega]
type = mega
user = membership@....org.au
pass = YRsHOyjax0pas....


A log from the command with the -vv flag

$ rclone -vv sync probus-7: mega:test
<7>DEBUG : Using config file from "/home/XXX/.config/rclone/rclone.conf"
<7>DEBUG : rclone: Version "v1.55.0-DEV" starting with parameters ["rclone" "-vv" "sync" "probus-7:" "mega:test"]
<7>DEBUG : rclone: systemd logging support activated
<7>DEBUG : Creating backend with remote "probus-7:"
<7>DEBUG : Creating backend with remote "mega:test"
<7>DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
<7>DEBUG : pacer: Rate limited, increasing sleep to 1.016715876s
<7>DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
<7>DEBUG : pacer: Rate limited, increasing sleep to 2.14937854s
<7>DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
<7>DEBUG : pacer: Rate limited, increasing sleep to 4.94574382s
<7>DEBUG : pacer: Reducing sleep to 0s
<7>DEBUG : new-file.txt: Sizes identical
<7>DEBUG : new-file.txt: Unchanged skipping
...
<7>DEBUG : mega root 'test': Waiting for transfers to finish
<7>DEBUG : Waiting for deletions to finish
<6>INFO  : There was nothing to transfer
<6>INFO  : 
Transferred:   	        0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks:                91 / 91, 100%
Elapsed time:        20.2s

<7>DEBUG : 25 go routines active

This is what will be happening:

  • gdrive and mega don't share a hash
  • mega doesn't support setting modtimes, so rclone can't do a modtime-based sync (see the listing sketch below)
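
You can see the modtime half of this from plain listings. This is just a diagnostic sketch using the remotes from the original post; rclone lsl prints size, modification time and path, and mega will generally report the time the file was uploaded rather than the timestamp it had on the source:

rclone lsl probus-7:
rclone lsl mega:test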

You might find that this flag is an acceptable workaround:

-u, --update Skip files that are newer on the destination.
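
For example, the original sync would become something like this (same remotes as before, just with the extra flag):

rclone --verbose sync --update probus-7: mega:test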

Wow! That worked. Thank you so much!

As for why it worked, I'm going to have to have a deep think, as the description of -u doesn't explain it - why would 'skip[ping] files that are newer on the destination' make the updated file transfer?

If I can't understand what is going on, it makes me a bit uneasy relying on this for our (secondary) backup. It may be better for me to consider a cloud provider that is more compatible with gdrive than mega is - we are a club of elderly, retired people without much of a budget.

Great!

Skipping the files that are newer on the destination is the same as saying...

Rclone transfers files for which the timestamps on the source are newer than those on the destination. If the timestamps are the same, it won't transfer them.

So, provided that files get a new timestamp whenever they are updated, this will work fine.
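
The one case to watch is a tool that rewrites a file without changing its modification time. If that ever happens, you can bump the timestamp yourself before syncing - a hypothetical example using the mount path from the first post:

touch /mnt/probus/g-suite/7-policies-procedures-manuals/new-file.txt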

Understood.

For the best syncs you want a provider which supports ModTime in the overview table. For the best checking of checksums it also wants to support the MD5 hash. Unfortunately mega supports neither.
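
If you want an occasional end-to-end verification despite the missing shared hash, rclone can compare the actual file contents by downloading from both sides. It is slow, but it works between any pair of remotes - a sketch using the same paths:

rclone check --download probus-7: mega:test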
