Rclone mount hanging on long listing with a Digital Ocean Space

What is the problem you are having with rclone?

rclone mount appears to freeze on a Digital Ocean Space with a large number of files in the top-level directory.

So... it is weird. If I do

$ rclone mount -vv --read-only  --vfs-cache-mode=full do-space:gdrive /home/craig/mounts/gdrive

I see the full listing in the log, and ls /home/craig/mounts/gdrive shows the listing as expected.

If I do

$ rclone mount -v --read-only  --vfs-cache-mode=full do-space:gdrive /home/craig/mounts/gdrive

then I see the initial debug output and then the ls /home/craig/mounts/gdrive just hangs.

If I do

$ rclone mount  --read-only  --vfs-cache-mode=full do-space:gdrive /home/craig/mounts/gdrive

then the ls /home/craig/mounts/gdrive hangs in this case too.

So, the only time I can ls the mounted directory and get output is when I use -vv on the rclone mount command.

I have several other mounts to other Digital Ocean Spaces (using essentially the same command) and they all work, so I am confident the command is correct (or correct enough).

The thing that is different about this one Space is that there are about 700 files in the top-level directory, but I am not sure why that would matter.

Run the command 'rclone version' and share the full output of the command.

$ rclone --version
rclone v1.57.0
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-97-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.2
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Digital Ocean Spaces

The command you were trying to run (eg rclone copy /tmp remote:tmp)

 rclone mount --read-only  --vfs-cache-mode=full do-space:<spacename> /tmp/tt

The rclone config contents with secrets removed.

[do-space]
type = s3
provider = DigitalOcean
access_key_id = <access>
secret_access_key = <key>
endpoint = nyc3.digitaloceanspaces.com
acl = private



A log from the command with the -vv flag

$ rclone mount -vv --read-only --vfs-cache-mode=full do-space:gdrive /home/craig/mounts/gdrive
2022/03/02 19:17:24 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "mount" "-vv" "--read-only" "--vfs-cache-mode=full" "do-space:gdrive" "/home/craig/mounts/gdrive"]
2022/03/02 19:17:24 DEBUG : Creating backend with remote "do-space:gdrive"
2022/03/02 19:17:24 DEBUG : Using config file from "/home/craig/.config/rclone/rclone.conf"
2022/03/02 19:17:24 INFO : S3 bucket gdrive: poll-interval is not supported by this remote
2022/03/02 19:17:24 DEBUG : vfs cache: root is "/home/craig/.cache/rclone"
2022/03/02 19:17:24 DEBUG : vfs cache: data root is "/home/craig/.cache/rclone/vfs/do-space/gdrive"
2022/03/02 19:17:24 DEBUG : vfs cache: metadata root is "/home/craig/.cache/rclone/vfsMeta/do-space/gdrive"
2022/03/02 19:17:24 DEBUG : Creating backend with remote "/home/craig/.cache/rclone/vfs/do-space/gdrive"
2022/03/02 19:17:24 DEBUG : Creating backend with remote "/home/craig/.cache/rclone/vfsMeta/do-space/gdrive"
2022/03/02 19:17:24 DEBUG : S3 bucket gdrive: Mounting on "/home/craig/mounts/gdrive"
2022/03/02 19:17:24 INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2022/03/02 19:17:24 DEBUG : : Root:
2022/03/02 19:17:24 DEBUG : : >Root: node=/, err=

hi,

does rclone ls do-space:gdrive output the files?

700 files is a very small number of files.

the debug log you posted is for the command that works - the one with -vv.
can you post the log from the command that does not work?

or to get a deeper look into what rclone is doing, add this to the command
--dump=headers --retries=1 --low-level-retries=1 --log-level=DEBUG --log-file=rclone.log

My bad...

Here is the output:

rclone mount -v --read-only  --vfs-cache-mode=full do-space:gdrive /home/craig/mounts/gdrive
2022/03/02 21:45:45 INFO  : S3 bucket gdrive: poll-interval is not supported by this remote
2022/03/02 21:45:45 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)

That is all I get. The last line repeats every minute.

The odd thing, too, is that if I ^C the rclone mount command, the ls /home/craig/mounts/gdrive then prints some of the listing, but only a subset.

rclone ls do-space:gdrive

outputs the files, no problem.

I tried the command

rclone mount  --dump=headers --retries=1 --low-level-retries=1 --log-level=DEBUG --log-file=rclone.log --read-only  --vfs-cache-mode=full do-space:gdrive /home/craig/mounts/gdrive

and the ls /home/craig/mounts/gdrive gave me output and didn't block.

I tried

rclone mount  --retries=1 --low-level-retries=1  --read-only  --vfs-cache-mode=full do-space:gdrive /home/craig/mounts/gdrive

and the ls /home/craig/mounts/gdrive blocked again. I tried without --vfs-cache-mode too; that didn't work either.

Try adding --use-server-modtime to the rclone mount - does that make a difference? Without it, rclone has to HEAD each object to find its modtime, which can take quite a long time and (I'm guessing) might trigger rate limits on DO.
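
For example, a minimal sketch - this is just the mount command from earlier in the thread with the flag added:

# --use-server-modtime takes the modified time from the listing itself,
# so rclone doesn't need to issue a HEAD request per object
rclone mount --read-only --vfs-cache-mode=full --use-server-modtime \
    do-space:gdrive /home/craig/mounts/gdrive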

If you run the mount with -vv you'll get more of an idea of what is going on; I suspect you'll see lots of low-level retries.

Bingo... wow. What a difference. It now works great.

So, I have a bunch of DO Spaces all mounted the same way. Why would just this Space have a problem? Would it be the number of files having to be checked to find the modtime?

[... and thank you]

this is strange:
--- that with rclone mount, using -vv outputs the list of files but using -v does not?
--- that only when killing the rclone mount does rclone list some of the files?
--- that when using -vv there are no low-level retries in the rclone debug log?

though I do remember testing DO Spaces - I found it slow and very expensive.

https://docs.digitalocean.com/products/spaces/resources/performance-tips/
"We don’t recommend Spaces for use with filesystem-on-S3 services, like S3FS, S3QL, FuseOveramazon, S3FSite, goofys, YAS3FS, ObjectiveFS, or S3Backer."

Great.

It is probably the number of files in a single directory that is the problem.

It's quite possible that it started listing and then you got rate limited halfway through.

DO Spaces has lots of rate limiting - so much so that I stopped running the integration tests against it because they never completed.
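
If rate limiting is the culprit, one workaround worth trying (a guess on my part, not something verified against DO Spaces) is capping rclone's request rate with the global --tpslimit flag:

# limit rclone to ~10 HTTP transactions per second; the value 10 is
# a guess - tune it to whatever DO Spaces will tolerate
rclone mount --read-only --vfs-cache-mode=full --use-server-modtime \
    --tpslimit 10 do-space:gdrive /home/craig/mounts/gdrive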
