Directory shows up as file

Hello,

OK, I am about 80% sure that this actually isn't an rclone problem by itself. However, this community seems to be the most knowledgeable on this topic, and hopefully someone here has an idea of what happened.

I am mounting a crypted GD remote containing several folders. While trying to access content from one of these folders I noticed some problems, and when looking into the folder structure of the remote, everything looks as usual except for the directory I wanted to access, which is now showing as a file with 0 bytes. Name, date and permissions seem to be correct. On Ubuntu, the object has lost its "d" marker and I can of course not access it.

[EDIT]
The particular remote is mounted on different machines running different versions of rclone. On two machines running the latest beta, I see the problem as described above. On one machine running rclone 1.44, the directory appears properly with all its contents.

Is there a known issue, or is my behaviour just wrong and stupid? What data would be helpful here?

hello,
very strange,

so each machine is running the exact same command, but different versions of rclone?
about the config file,
does each machine have its own config file, created locally on each machine,
or is there just one config file that you copied to each machine?
can you run a rclone lsd on each machine?

This is the output from the windows machine where it is working properly, running the older version of rclone:

-1 2019-05-04 18:40:53 -1 Audiobooks

Output of local dir command:

04.05.2019 18:40 Audiobooks

This is the linux machine showing the directory as file:

-1 2019-05-04 18:40:53 -1 Audiobooks

-rw-r--r-- 1 root root 0 May 6 2019 Audiobooks

And the second windows machine showing the problem:

-1 2019-05-04 18:40:53 -1 Audiobooks

06.05.2019 10:05 0 Audiobooks

An rclone lsd command for the affected directory shows its contents on each of the three machines. But accessing the content, e.g. from the Plex library it is attached to, is not possible.

The rclone.conf is stored locally on each of the machines, but it is the same version on all of them.

Would you mind trying with the latest version? v1.51.0

Right now i am using:
rclone v1.51.0-176-g1f50b709-beta

  • os/arch: windows/amd64
  • go version: go1.14.2

I am not seeing an update available.

Yep, that's the latest.

2 things to consider:

  • Can you run a dedupe to see if any duplicates exist?
  • Is there any chance that you have both a file & directory with the same name?
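The second point can be checked without the mount by looking at plain `rclone lsf` output, where directories are listed with a trailing slash. A minimal sketch of that check (the sample entries below are hypothetical):

```python
# Detect names that appear both as a file and as a directory in
# `rclone lsf` output, where directories carry a trailing "/".

def name_collisions(lsf_lines):
    files = {line for line in lsf_lines if not line.endswith("/")}
    dirs = {line.rstrip("/") for line in lsf_lines if line.endswith("/")}
    return sorted(files & dirs)

# Hypothetical lsf output for a remote's root:
entries = ["Audiobooks", "Audiobooks/", "Comedy/", "mountcheck"]
print(name_collisions(entries))  # -> ['Audiobooks']
```

If this returns anything, a zero-byte file is shadowing a directory of the same name.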

Should the dedupe be run explicitly on the affected folder? That would mean I have to run it on the Windows machine where the folder is still accessible.

I doubt that there is another Audiobooks object located anywhere on this remote.

The parent folder, if it exists, otherwise the whole remote.

I could find a single duplicate entry which has been deleted.

After remounting, the initial problem still exists.

[EDIT]
I just found time to downgrade to v1.43-027-gdeda0936-beta on the Windows machine where the folder was showing as corrupted, and after a re-mount it's back as it should be, in all its glory.

So the next thing would probably be stepping forward through some of the releases in between to find out where exactly the problem occurs.

:disappointed:

Can you post the logs from running the following command with both the rclone versions:

rclone lsf <parent_folder>: -vv --dump responses

The <parent_folder> refers to the parent of the corrupted folder.

i noticed a problem in 1.51.0 where the lsf output would get confused about a directory entry:
a file of zero bytes would appear in the lsf file output but was actually a folder.
that was/is with s3 backends, not sure that applies to gdrive.
also not sure this is your problem.

in this post, i demonstrated the problem and how to test for it.

ncw created a beta version that fixed the issue.
perhaps that fix was not merged into the latest beta.

@darthShadow
Please find the logs here:
1.43: https://pastebin.com/xxxLE6Uy
1.51: https://pastebin.com/aZvUg97J

The console output for both versions seems to show the problem by itself, as it lists the affected entry twice:

C:\Tools\rclone>rclone lsf gdrive_audio: -vv --dump responses --log-file rc-test-143.log --config rclone.conf
Audiobooks
Audiobooks/
Comedy/
Domian/
Sampler/
System Volume Information/
mountcheck

This output is identical for both versions.

I would doubt that, as the lsd/lsf commands always show the correct contents within the affected folder. It's only the mount that differs between older and newer versions of rclone.
Also, I am using 1.51.0.176 and the one mentioned in the thread was ...14x.

Are you using shared-with-me or anything else in the rclone.conf for that remote?

The output seems to suggest you have duplicates still and need to run dedupe.

Can you run the dedupe with -vv and share the output?

It should spit out something like:

felix@gemini:~$ rclone dedupe GD: -vv --fast-list
2020/04/23 09:40:40 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "dedupe" "GD:" "-vv" "--fast-list"]
2020/04/23 09:40:40 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2020/04/23 09:40:40 INFO  : Google drive root '': Looking for duplicates using interactive mode.
2020/04/23 09:41:43 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2020/04/23 09:41:43 DEBUG : pacer: Rate limited, increasing sleep to 1.129236937s
2020/04/23 09:41:44 DEBUG : pacer: Reducing sleep to 0s
2020/04/23 09:42:27 DEBUG : 19 go routines active
2020/04/23 09:42:27 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rclone" "dedupe" "GD:" "-vv" "--fast-list"]
felix@gemini:~$

The remote itself is pretty straightforward, nothing fancy and no sharing stuff attached to it.

Job output looks like this:

rclone dedupe gdrive_audio:/ -vv
2020/04/23 16:30:19 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "dedupe" "gdrive_audio:/" "-vv"]
2020/04/23 16:30:19 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020/04/23 16:30:20 INFO : Encrypted drive 'gdrive_audio:/': Looking for duplicates using interactive mode.
2020/04/23 16:31:59 DEBUG : 19 go routines active
2020/04/23 16:31:59 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rclone" "dedupe" "gdrive_audio:/" "-vv"]
root@v22019043885288555:/etc/nginx/sites-available# rclone dedupe gdrive_audio:/ -vv
2020/04/23 16:42:12 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "dedupe" "gdrive_audio:/" "-vv"]
2020/04/23 16:42:12 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020/04/23 16:42:13 INFO : Encrypted drive 'gdrive_audio:/': Looking for duplicates using interactive mode.
2020/04/23 16:42:30 DEBUG : gdrive: Loaded invalid token from config file - ignoring
2020/04/23 16:42:31 DEBUG : gdrive: Saved new token in config file
2020/04/23 16:43:00 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 16:43:00 DEBUG : pacer: Rate limited, increasing sleep to 1.946274593s
2020/04/23 16:43:00 DEBUG : pacer: Reducing sleep to 0s
2020/04/23 16:43:00 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 16:43:00 DEBUG : pacer: Rate limited, increasing sleep to 1.621486415s
2020/04/23 16:43:00 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 16:43:00 DEBUG : pacer: Rate limited, increasing sleep to 2.720193218s
2020/04/23 16:43:00 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 16:43:00 DEBUG : pacer: Rate limited, increasing sleep to 4.139570548s
2020/04/23 16:43:00 DEBUG : pacer: Reducing sleep to 0s
2020/04/23 16:43:15 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 16:43:15 DEBUG : pacer: Rate limited, increasing sleep to 1.808305801s
2020/04/23 16:43:15 DEBUG : pacer: Reducing sleep to 0s
2020/04/23 16:43:18 DEBUG : 21 go routines active
2020/04/23 16:43:18 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rclone" "dedupe" "gdrive_audio:/" "-vv"]

Can you run the dedupe on the non encrypted remote?

It found lots of duplicates in the first run, but now I am getting 403s for rate limit exceeded. I haven't had these in years.

403s are not an issue. When dedupe runs, it hammers the API; the 403 is just telling rclone to slow down, and it will retry.
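The "Rate limited, increasing sleep ... Reducing sleep to 0s" lines in the logs above reflect that back-off behaviour. A toy sketch of the idea (not rclone's actual pacer implementation):

```python
import random

# Toy pacer: grow the sleep on each rate-limit error, reset it once a
# call succeeds. rclone's real pacer is more sophisticated than this.
class Pacer:
    def __init__(self, min_sleep=0.0, base=1.0):
        self.sleep = min_sleep
        self.min_sleep = min_sleep
        self.base = base

    def on_rate_limited(self):
        # Roughly double the sleep, with jitter, like the log's
        # "increasing sleep to 1.129236937s" lines.
        self.sleep = max(self.base, self.sleep * 2) * (1 + random.random() * 0.2)
        return self.sleep

    def on_success(self):
        # Corresponds to the "Reducing sleep to 0s" lines.
        self.sleep = self.min_sleep

p = Pacer()
p.on_rate_limited()   # sleep grows after a 403
p.on_success()
print(p.sleep)        # -> 0.0
```

So a burst of 403s during dedupe only slows the run down; the requests are retried, not lost.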

Once it's all deduped, can you let me know the results?

Good to know. I thought they were that bad "ban" thing from the good old days, when everyone was going for those cache remotes for Plex.

Here's the output:

rclone dedupe --dedupe-mode rename -vv --fast-list gdrive:/
2020/04/23 18:53:20 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "dedupe" "--dedupe-mode" "rename" "-vv" "--fast-list" "gdrive:/"]
2020/04/23 18:53:20 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020/04/23 18:53:20 INFO : Google drive root '': Looking for duplicates using rename mode.
2020/04/23 18:56:00 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 18:56:00 DEBUG : pacer: Rate limited, increasing sleep to 1.620466944s
2020/04/23 18:56:00 DEBUG : pacer: Reducing sleep to 0s
2020/04/23 18:56:00 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 18:56:00 DEBUG : pacer: Rate limited, increasing sleep to 1.183041963s
2020/04/23 18:56:00 DEBUG : pacer: Reducing sleep to 0s
2020/04/23 18:56:03 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 18:56:03 DEBUG : pacer: Rate limited, increasing sleep to 1.437224957s
2020/04/23 18:56:03 DEBUG : pacer: Reducing sleep to 0s
2020/04/23 18:56:11 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 18:56:11 DEBUG : pacer: Rate limited, increasing sleep to 1.220888214s
2020/04/23 18:56:11 DEBUG : pacer: Reducing sleep to 0s
^C
root@v22019043885288555:/# rclone dedupe --dedupe-mode rename -vv --fast-list gdrive:/
2020/04/23 20:07:39 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "dedupe" "--dedupe-mode" "rename" "-vv" "--fast-list" "gdrive:/"]
2020/04/23 20:07:39 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020/04/23 20:07:39 INFO : Google drive root '': Looking for duplicates using rename mode.
2020/04/23 20:10:48 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 20:10:48 DEBUG : pacer: Rate limited, increasing sleep to 1.292728501s
2020/04/23 20:10:48 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project
quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and
adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=779687654375, userRateLimitExceeded)
2020/04/23 20:10:48 DEBUG : pacer: Rate limited, increasing sleep to 2.838820748s
2020/04/23 20:10:49 DEBUG : pacer: Reducing sleep to 0s
2020/04/23 20:12:46 DEBUG : 19 go routines active
2020/04/23 20:12:46 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rclone" "dedupe" "--dedupe-mode" "rename" "-vv" "--fast-list" "gdrive:/"]

The mount is still showing different contents on 1.43 and 1.51.

Does dedupe even handle a file and folder with the same name?
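One way to inspect this without relying on dedupe is `rclone lsjson` on the unencrypted remote, whose JSON output carries an IsDir flag per entry (and, at least on Google Drive, an ID). A file and a folder sharing a name then show up as two entries with the same Path. A sketch, with made-up sample records standing in for real lsjson output:

```python
import json
from collections import defaultdict

# Group hypothetical `rclone lsjson` entries by Path and report paths
# that exist both as a file and as a directory.
sample = json.loads("""
[
 {"Path": "Audiobooks", "IsDir": true, "ID": "dir-id-1"},
 {"Path": "Audiobooks", "IsDir": false, "Size": 0, "ID": "file-id-2"},
 {"Path": "Comedy", "IsDir": true, "ID": "dir-id-3"}
]
""")

by_path = defaultdict(list)
for entry in sample:
    by_path[entry["Path"]].append(entry)

for path, entries in by_path.items():
    kinds = {e["IsDir"] for e in entries}
    if kinds == {True, False}:
        ids = [e.get("ID") for e in entries]
        print(f"{path}: both file and directory, IDs {ids}")
```

Knowing the IDs of the two conflicting objects would at least make clear which one is the zero-byte impostor.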

It's not a great comparison between 1.43, which is years old, and 1.51. There have been major changes since then in terms of encoding, plus just the sheer mass of other changes.

Let's focus on the current version and let's see what we can do.

That looks to be a clean run, so I'm guessing you renamed a bunch of things in the previous runs?

Let's also not use the mount and just work with what we are seeing in the ls commands.

What does rclone lsf gdrive:audio show now?