Rclone error syncing files that came from shared drive

Hello,

I am having issues syncing files that came from a shared drive. I did a copy, using the Google Drive GUI (or rclone copy), to a shared drive.
I then copied the data from that shared drive, using another account (with permissions on that drive), to that account's main drive.
Then I try to sync both drives AGAIN, using rclone sync, and although all files exist on both ends, rclone says that they either do not exist or have different sizes - visually inspected, they have exactly the same byte count - and proceeds to do a full sync.
Also, I did pass the --dry-run flag, just to show what would happen without doing any actual sync.

This is what triggers this:

a) From GCD 1 I transfer data to a shared drive that I created on the same Google account.
b) I go to the GUI and add a user from another Google account to it.
c) I copy files to the shared drive on account 1, and then, after they have been copied to the shared drive, I go to account 2 and move them to its main Google Drive, removing them from the shared drive.
d) Afterwards I try to do an rclone sync from account 1 to account 2 - same folders, same files - and they are reported as missing or as having a size mismatch, although the size is the same, and if I copy them locally, both copies have the same md5sum.

Also, if I do an rclone mount on both drives and then do an rsync between them, rsync says that the files are the same and does not process them.
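
For reference, a sketch of that mount-and-rsync comparison (the mount points and rsync flags are assumptions; the exact invocation isn't shown in the thread):

# Mount both remotes read-only in the background
rclone mount SHARED01: /mnt/drive1 --read-only --daemon
rclone mount GGL: /mnt/drive2 --read-only --daemon
# rsync dry-run: compares size and mtime, lists anything it would transfer
rsync -avn /mnt/drive1/Datastore/ /mnt/drive2/Datastore/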

What am I missing? Is this a bug? An ownership/creation issue?

What is your rclone version (output from rclone version)

Rclone version:

rclone v1.54.0

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Linux/CentOS

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync -vvvP --drive-server-side-across-configs=true --ignore-checksum --bwlimit 7M --drive-stop-on-upload-limit --update --transfers 10 --dry-run SHARED01:Datastore GGL:Datastore

The rclone config contents with secrets removed.

[SHARED01]
type = drive
client_id = SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
client_secret = XXXXXXXXXXXXXXXXXXXXXX
token = {"access_token":"XXXXXXXXXXXXXXXXXXXXX"}
root_folder_id = XXXXXXXXXXXXXXXXXXXX

[GGL]
type = drive
client_id = SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
client_secret = XXXXXXXXXXXXXXXXXXXXXX
token = {"access_token":"XXXXXXXXXXXXXXXXXXXXX"}
root_folder_id = XXXXXXXXXXXXXXXXXXXX

A log from the command with the -vv flag

+ /usr/bin/rclone sync -vvvP --drive-server-side-across-configs=true --ignore-checksum  --bwlimit 7M --drive-stop-on-upload-limit --update --transfers 10 --dry-run SHARED01:Datastore GGL:Datastore
2021/02/28 19:28:21 DEBUG : rclone: Version "v1.54.0" starting with parameters ["/usr/bin/rclone" "sync" "-vvvP" "--drive-server-side-across-configs=true" "--ignore-checksum"  "--bwlimit" "7M" "--drive-stop-on-upload-limit" "--update" "--transfers" "10" "--dry-run" "SHARED01:Datastore" "GGL:Datastore"]
2021/02/28 19:28:21 DEBUG : Using config file from "/root/.rclone.conf"
2021/02/28 19:28:21 INFO  : Starting bandwidth limiter at 7MBytes/s
2021/02/28 19:28:21 DEBUG : Creating backend with remote "SHARED01:Datastore"
2021/02/28 19:28:22 DEBUG : Creating backend with remote "SHARED01:Datastore/l700b9jlt4gmh199ggegj3iaq0/7cjsta0tip0mmkq1pls2msqlrs"
2021/02/28 19:28:24 DEBUG : Creating backend with remote "GGL:Datastore"
2021/02/28 19:28:24 DEBUG : Creating backend with remote "GGL:Datastore/f1252rc8m9tad85heebc9d570o/8q102jpp56f16hpmk8hu8bnc7o"
2021/02/28 19:28:24 DEBUG : Google drive root 'GGL:Datastore/f1252rc8m9tad85heebc9d570o/8q102jpp56f16hpmk8hu8bnc7o': root_folder_id = "0AAlyJM_OH3LAUk9PVA" - save this in the config to speed up startup

2021-02-28 19:28:29 NOTICE: AAAAA001.DAT: Skipped copy as --dry-run is set (size 377.357M)
2021-02-28 19:28:29 NOTICE: AAAAA002.DAT: Skipped copy as --dry-run is set (size 395.522M)
2021-02-28 19:28:29 NOTICE: AAAAA003.DAT: Skipped copy as --dry-run is set (size 599.988M)
2021-02-28 19:28:29 NOTICE: AAAAA004.DAT: Skipped copy as --dry-run is set (size 463.446M)

Weird stuff like this is sometimes caused by duplicate files, so I'd suggest using rclone dedupe to check/fix duplicate files first.
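
A minimal dedupe pass might look like this (remote name taken from the sync command above; --dry-run first so nothing is changed until you've reviewed the output):

# List what dedupe would do without changing anything
rclone dedupe --dry-run -v SHARED01:Datastore
# Then resolve duplicates, e.g. keeping the newest copy of each
rclone dedupe newest SHARED01:Datastore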

Already did that, but retried with heavier verbosity via -vvvv.
This was the output. There is something else here:

2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/9s22u1nnabsuk8q5bsh7bv874ipm9oaufh38optsc2d45fo62f6vdi76mrd76g1qsh7um5fbt55so: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/8iuc9si1d6st3f9unjrl4e6q0t5fsv2pkdnatel496l5l78kj2vdaidgpv6ugcbd174cn2gtafib8: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/65puqaarq7f8jqj1osefgsqka8l77rijmmoh0l719qpejpnv0o2l4735onctae9t1hk6ai919dq7a: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/8u9l24btrdr8s79n96drbp60qfnmv9t73artnk6tmu4qerhj9m0cf33vioa81c9vsjkk2vicc7rrg: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/ll2abfg5c53f3b6itiejplac4q02g5q5btuhpvq3gl7pjau3qvvhdnancmkvugtb8b006u5474nas: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/b8ib5r4m9q6h7fqu116oblnoh5s9ce4euk2kcr0ol4ma8khtnim8qvgochnlkisgglf8ifj2hiq6q: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/ichepmjpia1u5lutmmbvv1krmiin7j6jphju4p86rrebm238enuoj6mrvj2imumdbetih03e5a932: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/5ji3d1lr722mkv8n2b1cslj3cbbj2qvq4j5a8pf0pgqlkr8nj05cv4tqnqevn74up8cqbq0i25qgc: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/cgdhndcu89ausgjmhqkjcpb2dcj14bdrf29dshogs0u2mjscb0d1bl23l4a6can2uls969abeguf2: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/ofnb9105lac5movarbk5c15u6g53v7tk9u2cbkc8v49g4qtkc67aoert0kptl72ihss7q77597co8: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/fv6pa2vcjkgu4lop1uc9gicfei57f0qohl3hf7nahjm9aagpbi9iim53mf5m615arbj246sm3to6s: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/dr7j6cnnptp3n6nmhj8ov2sq4vei0ib3djjuqk6pu6jtu9mmj3i3o43l153vrgd3aaolmeee06ec2: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/b7k6d3fntojd51p6jvjp2jiob5lpcncignqpjo5o30oohp102ev22gbh5heg9srbebgd46q0ks2to: Skipping undecryptable file name: Bad PKCS#7 padding - too long
2021/03/01 10:42:10 DEBUG : 3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/ed8ejv0158shii812383pbv5pgne86odb1fbj10v2sta00j2huvc8cq8812gfa7j9i1170he9g8hk: Skipping undecryptable file name: Bad PKCS#7 padding - too long

At first I thought it could be a bad password, but the truth is that if I mount the share via rclone mount, it works like a charm.
Also, I've confirmed that after moving the "3imb8dnh1muc1kunm3g56og29k/7hli28klrdkc5bh1nhvlk19rik/h0atji1qv1kevmspg35no21tbk245j5mmn9eofuai03lkpke4rqg/" folder to another place, a directory would disappear - let's say directory A - and if I put it back, it would show up in the rclone mount with a correct human-readable name.
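
One way to test the bad-password theory directly might be rclone cryptdecode, which tries to decrypt a name with the crypt remote's current key (here 'secret:' stands in for the crypt remote name, which isn't shown in this thread):

# If this prints a readable name, the current password/salt can decrypt it;
# if it fails, the name was encrypted with a different key
rclone cryptdecode secret: 3imb8dnh1muc1kunm3g56og29k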

Try the dedupe on the underlying remote, not on the crypt wrapper... though I didn't see a crypt config in your config file above?
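
To illustrate the distinction (assuming 'secret:' is a crypt remote wrapping SHARED01:Datastore; both names are placeholders):

# Dedupe the underlying drive remote, where the duplicate objects actually live
rclone dedupe --dry-run -v SHARED01:Datastore
# Not the crypt wrapper, which only sees the decrypted view:
# rclone dedupe --dry-run -v secret: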

Hello again,

Re-running the dedupe now. It's an 80 TB drive, so it will take a while.

About the crypt:

[Datastore]
type = drive
client_id = blablablablausr.apps.googleusercontent.com
client_secret = blablablalbla
scope = drive
token = {"access_token":"XXXXXXX","expiry":"2021-03-01T13:04:24.XXXX"}

It just ended, and the issue remains. I had a few duplicates but nothing weird.

Tried on another underlying remote that was copied the same way, and the issue also appears there.
Is there a flag or command that can do a check like chkdsk/fsck on both source and destination?
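
(For what it's worth, the closest rclone equivalent is probably rclone check, which compares source and destination file by file; a sketch, reusing the remotes from the sync command above:

# Compare sizes and hashes between the two remotes without transferring anything
rclone check SHARED01:Datastore GGL:Datastore -vv
# --download re-reads the actual data instead of trusting the remote hashes
rclone check --download SHARED01:Datastore GGL:Datastore -vv
)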

Pick one file that has a problem and copy that with -vv and share the output.

This is what I get if I do an md5sum within the rclone mount.
PS: the file names and paths were redacted for privacy.

[root@lxcvsan03]# md5sum /mnt/arquivos_old/SHARE03/Series/DATASET003/"MyFileDat.DAT"
5e3746f7a3c6dcfd042cdc1a3db88344 /mnt/arquivos_old/SHARE03/Series/DATASET003/"MyFileDat.DAT"

[root@lxcvsan03]# md5sum /mnt/arquivos_new/SHARE03/Series/DATASET003/"MyFileDat.DAT"
5e3746f7a3c6dcfd042cdc1a3db88344 /mnt/arquivos_new/SHARE03/Series/DATASET003/"MyFileDat.DAT"

  • /usr/bin/rclone sync -vv --dry-run --ignore-existing --config /root/ola.conf 'XXXXX:/SHARE03/Series/DATASET003/MyFileDat.DAT' 'YYYYY:MEDIA/SHARE03/Series/DATASET003/MyFileDat.DAT'
    2021/03/01 18:36:13 DEBUG : rclone: Version "v1.54" starting with parameters ["/usr/bin/rclone" "sync" "-vv" "--dry-run" "--ignore-existing" "--config" "/root/ola.conf" "XXXXX:/SHARE03/Series/DATASET003/MyFileDat.DAT" "YYYYY:MEDIA/SHARE03/Series/DATASET003/MyFileDat.DAT"]
    2021/03/01 18:36:13 DEBUG : Using config file from "/root/ola.conf"
    2021/03/01 18:36:13 DEBUG : Creating backend with remote "XXXXX:/SHARE03/Series/DATASET003/MyFileDat.DAT"
    2021/03/01 18:36:13 DEBUG : Creating backend with remote "GEEKCD:API1SHARE1/l700b9jlt4gmh199ggegj3iaq0/7cjsta0tip0mmkq1pls2msqlrs/ghuqljpao7l23i3592mbqn4mnc/q5nrh55bg7u8976f2b9qfll7n0n74mpbdpi27n6k624pflmqnte0b08lvssfo23mnrpak0ebvhvvt4b38h4600unknqgacq84hd0040"
    2021/03/01 18:36:13 DEBUG : Google drive root 'XXXXXXXX:/l700b9jlt4gmh199ggegj3iaq0/7cjsta0tip0mmkq1pls2msqlrs/ghuqljpao7l23i3592mbqn4mnc/q5nrh55bg7u8976f2b9qfll7n0n74mpbdpi27n6k624pflmqnte0b08lvssfo23mnrpak0ebvhvvt4b38h4600unknqgacq84hd0040': root_folder_id = "0AIyG7DBivvwCUk9PVA" - save this in the config to speed up startup
    2021/03/01 18:36:20 DEBUG : fs cache: adding new entry for parent of "GEEKCD:API1SHARE1/l700b9jlt4gmh199ggegj3iaq0/7cjsta0tip0mmkq1pls2msqlrs/ghuqljpao7l23i3592mbqn4mnc/q5nrh55bg7u8976f2b9qfll7n0n74mpbdpi27n6k624pflmqnte0b08lvssfo23mnrpak0ebvhvvt4b38h4600unknqgacq84hd0040", "GEEKCD:API1SHARE1/l700b9jlt4gmh199ggegj3iaq0/7cjsta0tip0mmkq1pls2msqlrs/ghuqljpao7l23i3592mbqn4mnc"
    2021/03/01 18:36:20 DEBUG : Creating backend with remote "YYYYYYYY:MEDIA/SHARE03/Series/DATASET003/MyFileDat.DAT"
    2021/03/01 18:36:20 DEBUG : Creating backend with remote "NUNESHIGGS-ONLINE:GSTORE014/ds0qk7tl4a5qch3gjg8m42632o/vt7e9d4kfloisb0rdjbm2ecmlk/9n9b0jknoddsvi3e6k6vls4944/vt7is83rig96b06se4jticro4g/03ci4pgi3pnt91udqd0qcvij14gdofpu2umnep4sdsehq8r2m4v1tedu22knhek9o63am24fjqk8s73jlrc8fo7ahgmqt41k173ae6g"
    2021/03/01 18:36:20 DEBUG : Google drive root 'GSTORE014/ds0qk7tl4a5qch3gjg8m42632o/vt7e9d4kfloisb0rdjbm2ecmlk/9n9b0jknoddsvi3e6k6vls4944/vt7is83rig96b06se4jticro4g/03ci4pgi3pnt91udqd0qcvij14gdofpu2umnep4sdsehq8r2m4v1tedu22knhek9o63am24fjqk8s73jlrc8fo7ahgmqt41k173ae6g': root_folder_id = "0AAlyJM_OH3LAUk9PVA" - save this in the config to speed up startup
    2021/03/01 18:36:21 DEBUG : MyFileDat.DAT: Need to transfer - File not found at Destination
    2021/03/01 18:36:21 NOTICE: MyFileDat.DAT: Skipped copy as --dry-run is set
    2021/03/01 18:36:21 INFO :
    Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
    Transferred: 1 / 1, 100%
    Elapsed time: 8.2s

If you just want to copy a single file, then you want the destination to be the directory "YYYYY:MEDIA/SHARE03/Series/DATASET003/" - however, don't use sync here; use copy.

So try

/usr/bin/rclone copy -vv --dry-run --ignore-existing --config /root/ola.conf 'XXXXX:/SHARE03/Series/DATASET003/MyFileDat.DAT' 'YYYYY:MEDIA/SHARE03/Series/DATASET003/'

I want to do a sync of two shares; the single file was per your request.
Also, if I do a copy, the --ignore-existing flag gets ignored and it copies all files within that directory.
What really bugs me is the error: DEBUG : MyFileDat.DAT: Need to transfer - File not found at Destination

I'm trying to see the specific reason rclone copies the file, so if you can run a single-file copy, that would be superb.

That will be in the debug log.

As an example, here's a run of mine where the hash matches and rclone just updates the mod time.

felix@gemini:~$ rclone copy /etc/hosts GD: -vv
2021/03/01 14:41:15 DEBUG : rclone: Version "v1.54.0" starting with parameters ["rclone" "copy" "/etc/hosts" "GD:" "-vv"]
2021/03/01 14:41:15 DEBUG : Creating backend with remote "/etc/hosts"
2021/03/01 14:41:15 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2021/03/01 14:41:15 DEBUG : fs cache: adding new entry for parent of "/etc/hosts", "/etc"
2021/03/01 14:41:15 DEBUG : Creating backend with remote "GD:"
2021/03/01 14:41:15 DEBUG : hosts: Modification times differ by 450h16m30.171421056s: 2021-01-23 11:49:37.053578944 -0500 EST, 2021-02-11 11:06:07.225 +0000 UTC
2021/03/01 14:41:15 DEBUG : hosts: MD5 = e9b49c993fe22326c398ecea2fd9b219 OK
2021/03/01 14:41:16 INFO  : hosts: Updated modification time in destination
2021/03/01 14:41:16 DEBUG : hosts: Unchanged skipping
2021/03/01 14:41:16 INFO  :
Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks:                 1 / 1, 100%
Elapsed time:         1.5s

2021/03/01 14:41:16 DEBUG : 4 go routines active

If you can pick one file that you aren't sure why it gets recopied, that's the log we are looking for rather than running a whole sync. Once we identify why, we can proceed with the sync.

  • /usr/bin/rclone sync -vv --dry-run --ignore-existing --config /root/ola.conf 'XXXXX:/SHARE03/Series/DATASET003/MyFileDat.DAT' 'YYYYY:MEDIA/SHARE03/Series/DATASET003/MyFileDat.DAT'
    Here it is:

2021/03/01 18:36:13 DEBUG : rclone: Version "v1.54" starting with parameters ["/usr/bin/rclone" "sync" "-vv" "--dry-run" "--ignore-existing" "--config" "/root/ola.conf" "XXXXX:/SHARE03/Series/DATASET003/MyFileDat.DAT" "YYYYY:MEDIA/SHARE03/Series/DATASET003/MyFileDat.DAT"]
2021/03/01 18:36:13 DEBUG : Using config file from "/root/ola.conf"
2021/03/01 18:36:13 DEBUG : Creating backend with remote "XXXXX:/SHARE03/Series/DATASET003/MyFileDat.DAT"
2021/03/01 18:36:13 DEBUG : Creating backend with remote "GEEKCD:API1SHARE1/l700b9jlt4gmh199ggegj3iaq0/7cjsta0tip0mmkq1pls2msqlrs/ghuqljpao7l23i3592mbqn4mnc/q5nrh55bg7u8976f2b9qfll7n0n74mpbdpi27n6k624pflmqnte0b08lvssfo23mnrpak0ebvhvvt4b38h4600unknqgacq84hd0040"
2021/03/01 18:36:13 DEBUG : Google drive root 'XXXXXXXX:/l700b9jlt4gmh199ggegj3iaq0/7cjsta0tip0mmkq1pls2msqlrs/ghuqljpao7l23i3592mbqn4mnc/q5nrh55bg7u8976f2b9qfll7n0n74mpbdpi27n6k624pflmqnte0b08lvssfo23mnrpak0ebvhvvt4b38h4600unknqgacq84hd0040': root_folder_id = "0AIyG7DBivvwCUk9PVA" - save this in the config to speed up startup
2021/03/01 18:36:20 DEBUG : fs cache: adding new entry for parent of "GEEKCD:API1SHARE1/l700b9jlt4gmh199ggegj3iaq0/7cjsta0tip0mmkq1pls2msqlrs/ghuqljpao7l23i3592mbqn4mnc/q5nrh55bg7u8976f2b9qfll7n0n74mpbdpi27n6k624pflmqnte0b08lvssfo23mnrpak0ebvhvvt4b38h4600unknqgacq84hd0040", "GEEKCD:API1SHARE1/l700b9jlt4gmh199ggegj3iaq0/7cjsta0tip0mmkq1pls2msqlrs/ghuqljpao7l23i3592mbqn4mnc"
2021/03/01 18:36:20 DEBUG : Creating backend with remote "YYYYYYYY:MEDIA/SHARE03/Series/DATASET003/MyFileDat.DAT"
2021/03/01 18:36:20 DEBUG : Creating backend with remote "NUNESHIGGS-ONLINE:GSTORE014/ds0qk7tl4a5qch3gjg8m42632o/vt7e9d4kfloisb0rdjbm2ecmlk/9n9b0jknoddsvi3e6k6vls4944/vt7is83rig96b06se4jticro4g/03ci4pgi3pnt91udqd0qcvij14gdofpu2umnep4sdsehq8r2m4v1tedu22knhek9o63am24fjqk8s73jlrc8fo7ahgmqt41k173ae6g"
2021/03/01 18:36:20 DEBUG : Google drive root 'GSTORE014/ds0qk7tl4a5qch3gjg8m42632o/vt7e9d4kfloisb0rdjbm2ecmlk/9n9b0jknoddsvi3e6k6vls4944/vt7is83rig96b06se4jticro4g/03ci4pgi3pnt91udqd0qcvij14gdofpu2umnep4sdsehq8r2m4v1tedu22knhek9o63am24fjqk8s73jlrc8fo7ahgmqt41k173ae6g': root_folder_id = "0AAlyJM_OH3LAUk9PVA" - save this in the config to speed up startup
2021/03/01 18:36:21 DEBUG : MyFileDat.DAT: Need to transfer - File not found at Destination
2021/03/01 18:36:21 NOTICE: MyFileDat.DAT: Skipped copy as --dry-run is set
2021/03/01 18:36:21 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Transferred: 1 / 1, 100%
Elapsed time: 8.2s

Please note that the file exists on the destination, and I can even do an md5sum of both files within an rclone mount.
This is also happening on another share.
Could this be something related to ownership of the files?

You need to remove the file name at the end - rclone treats the destination path as a directory, so with the file name there it looks for MyFileDat.DAT inside a directory called MyFileDat.DAT, which is why the log says "File not found at Destination".

So it should be:

/usr/bin/rclone sync -vv --dry-run --ignore-existing --config /root/ola.conf 'XXXXX:/SHARE03/Series/DATASET003/MyFileDat.DAT' 'YYYYY:MEDIA/SHARE03/Series/DATASET003/'

Just moved the files to another folder, went for a coffee, and moved them back. It is working, from what I gather.
I also cleaned up my .rclone.conf file.

I'll keep you posted. Thanks for your help!!

Hello,

Just to follow up on this: it did not happen again with this copy.
With another copy it appeared again, so I did the same thing:

a) Moved the files to another location within the drive.
b) Waited about 15 minutes.
c) Moved them back.
d) Profit.

I don't know if Google is doing something on their end that would cause this, but it appears that the solution is to move the files to another place, wait a while, move them back, and then it should work.
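
A minimal sketch of that workaround as a script (the remote, folder names, and wait time are placeholders based on what worked above):

#!/bin/bash
# Move the affected folder aside on the destination remote
# (a move within the same remote is done server-side)
rclone move 'GGL:Datastore/ProblemFolder' 'GGL:Datastore/_quarantine/ProblemFolder' -v
# Give Google Drive time to settle - roughly the 15 minutes that worked here
sleep 900
# Move it back, then re-check with a dry-run sync
rclone move 'GGL:Datastore/_quarantine/ProblemFolder' 'GGL:Datastore/ProblemFolder' -v
rclone sync --dry-run -v SHARED01:Datastore GGL:Datastore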
