Rclone mount of a large S3 bucket folder will not open from Windows Explorer and no operations can be performed in that folder

What is the problem you are having with rclone?

I have a folder in an S3 bucket containing 50 GB of data, and I mount it as a Windows drive using the command below:

rclone mount remotename:bucketname/foldername S: --vfs-cache-mode full --dir-cache-time 10s

The issue I am now facing is that I am not able to access that folder from the CLI, Rclone Browser, or Windows Explorer.

In Rclone Browser it takes around 20 minutes to open that folder, and Windows Explorer shows "Not Responding". So how can I list files, copy files, and do other operations?

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.1

Which cloud storage system are you using? (eg Google Drive)

AWS S3 bucket

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount remotename:bucketname/foldername S: --vfs-cache-mode full --dir-cache-time 10s

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

2024/02/13 09:00:22 DEBUG : /: Statfs:
2024/02/13 09:00:22 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Readlink:
2024/02/13 09:00:22 DEBUG : /: >Readlink: linkPath="", errc=-40
2024/02/13 09:00:22 DEBUG : /: Getxattr: name="non-existant-a11ec902d22f4ec49003af15282d3b00"
2024/02/13 09:00:22 DEBUG : /: >Getxattr: errc=-40, value=""
The service rclone has been started.
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Opendir:
2024/02/13 09:00:22 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2024/02/13 09:00:22 DEBUG : /: >Opendir: errc=0, fh=0x0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: Statfs:
2024/02/13 09:00:22 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2024/02/13 09:00:22 DEBUG : /: Releasedir: fh=0x0
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: >Releasedir: errc=0
2024/02/13 09:00:22 DEBUG : /: Opendir:
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Opendir:
2024/02/13 09:00:22 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2024/02/13 09:00:22 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2024/02/13 09:00:22 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2024/02/13 09:00:22 DEBUG : /: >Opendir: errc=0, fh=0x0
2024/02/13 09:00:22 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2024/02/13 09:00:22 DEBUG : /: Releasedir: fh=0x0
2024/02/13 09:00:22 DEBUG : /: >Releasedir: errc=0
2024/02/13 09:00:22 DEBUG : /: >Opendir: errc=0, fh=0x1
2024/02/13 09:00:22 DEBUG : /: Releasedir: fh=0x1
2024/02/13 09:00:22 DEBUG : /: >Releasedir: errc=0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Opendir:
2024/02/13 09:00:22 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2024/02/13 09:00:22 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2024/02/13 09:00:22 DEBUG : /: >Opendir: errc=0, fh=0x0
2024/02/13 09:00:22 DEBUG : /: Releasedir: fh=0x0
2024/02/13 09:00:22 DEBUG : /: >Releasedir: errc=0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Opendir:
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2024/02/13 09:00:22 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2024/02/13 09:00:22 DEBUG : /: >Opendir: errc=0, fh=0x0
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Releasedir: fh=0x0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Opendir:
2024/02/13 09:00:22 DEBUG : /: >Releasedir: errc=0
2024/02/13 09:00:22 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2024/02/13 09:00:22 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2024/02/13 09:00:22 DEBUG : /: >Opendir: errc=0, fh=0x0
2024/02/13 09:00:22 DEBUG : /: Releasedir: fh=0x0
2024/02/13 09:00:22 DEBUG : /: >Releasedir: errc=0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2024/02/13 09:00:22 DEBUG : /: >Getattr: errc=0
2024/02/13 09:00:22 DEBUG : /: Opendir:
2024/02/13 09:00:22 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2024/02/13 09:00:22 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2024/02/13 09:00:22 DEBUG : /: >Opendir: errc=0, fh=0x0
2024/02/13 09:00:22 DEBUG : /: Releasedir: fh=0x0
2024/02/13 09:00:22 DEBUG : /: >Releasedir: errc=0

There is no indication of any issues in your log - which is good.

Before opening a folder rclone has to read its contents - if you have a lot of items, a high-latency internet connection and/or a slow S3 provider, then it takes time.
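
To get a feel for how many objects are involved, you can run a standard rclone command against the same path you are mounting (remote, bucket, and folder names taken from your command above):

rclone size remotename:bucketname/foldername

It reports the total number of objects and their size, which gives a rough idea of how long the initial directory listing will take.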

And you are making this issue worse by instructing rclone to re-read it on access every 10 seconds with --dir-cache-time 10s. Change it to --dir-cache-time 9999h and you will only have to wait a long time once.

In addition, add the --vfs-fast-fingerprint and --fast-list flags. That should make things faster.
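
Putting the suggestions together with your original command, the mount would look something like this (remote, bucket, folder, and drive letter are just the placeholders from your post):

rclone mount remotename:bucketname/foldername S: --vfs-cache-mode full --dir-cache-time 9999h --vfs-fast-fingerprint --fast-list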

Yeah, I understand that no issue is detected in the logs, but I am still facing this problem.

I have mounted my S3 drive here for flexible use. My internet speed is also good, and the folder is 50 GB.

If I open the data locally it works fine, but if I open it through the rclone mount directory, Explorer goes to "Not Responding".

I have also added those two flags to my command and updated --dir-cache-time, so please let me know the solution for this.

Again - maybe it is just slow. The added flags can improve it, but they won't suddenly make everything fly. Wait and see if the folder opens - it may just take a long time.

If it is still an issue, then post all the details and the FULL log file, not only a fragment.

OK, by using --vfs-fast-fingerprint and --dir-cache-time 9999h it takes a long time once, and after that it works fine - meaning if I revisit the folder it does not take a long time again.

So my only remaining question is whether this is the final solution. You asked me to use --dir-cache-time 9999h; with that, anything I upload to the folder is reflected in S3, but if I perform any operations directly on S3 they are not reflected in the mounted folder - that is why I used 10s.

Correct. You can't have both at the same time. Choose a value that is acceptable to you - 1h? The other option to make operations faster is to arrange your data so that there are not too many files in one folder.
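
As a sketch, a compromise value only changes the cache time in the same command as before:

rclone mount remotename:bucketname/foldername S: --vfs-cache-mode full --dir-cache-time 1h --vfs-fast-fingerprint --fast-list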

OK, great. One last question related to this 1h: should I get the same response from Windows Explorer when opening the folder as with 9999h - that is, facing the delay only the first time, and not every time?

I think so, yes. I am not using Windows myself, so I can't test it.

When dealing with a slow S3 provider, add this to your command. It helped with loading folders that have large numbers of items in them.

--s3-list-chunk 1000000
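
For illustration, it would be appended to the mount command like any other flag (though see the next reply about the provider-side limit on list size):

rclone mount remotename:bucketname/foldername S: --vfs-cache-mode full --dir-cache-time 9999h --vfs-fast-fingerprint --fast-list --s3-list-chunk 1000000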

Hi, are you sure about that?
From what I understand, there is a hard limit of 1000.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.