Reads and writes on folders with many files are very slow

What is the problem you are having with rclone?

I have rclone mounting an AWS S3 bucket on a few computers, all synced with this config:
--vfs-case-insensitive --vfs-cache-mode full --write-back-cache --vfs-write-back 10s --vfs-cache-max-age 300s --vfs-cache-poll-interval 60s --cache-dir %temp% --network-mode --dir-cache-time 5s --no-console

But reads and writes on folders with many files are very slow; it takes a long time whenever I need to save a file in one of those folders or read something from it.
What can I do to make it a lot faster? Is there some config that needs to be changed?

What is your rclone version (output from rclone version)

Latest version

Which OS you are using and how many bits (eg Windows 7, 64 bit)

All computers on Windows 10

Which cloud storage system are you using? (eg Google Drive)

AWS S3

hi,

what version of rclone?

if you cannot post the entire debug log, at least post the top lines.

best to test first using defaults and no extra flags, and when adding a flag make sure you know what it does.
for example,
--write-back-cache does nothing on windows

Version 1.55.1

Ok, I will remove the --write-back-cache
I read about --no-modtime ("Don't read/write the modification time (can speed things up)").
Will this work for me, speeding up reads and writes of files in folders with a lot of files?

only way to know is to try.
the --no-modtime should help with listing of files, not writing/reading the contents.
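
for example, with those changes the full mount command might look something like this, where the remote name, bucket and drive letter are just placeholders:

rclone mount remote:bucket X: --vfs-case-insensitive --vfs-cache-mode full --vfs-write-back 10s --vfs-cache-max-age 300s --vfs-cache-poll-interval 60s --cache-dir %temp% --network-mode --dir-cache-time 5s --no-modtime --no-console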

what is the total size of all the files?

I've just tested, and it works very well using --no-modtime.
Now the files load in Explorer very quickly and there are no lags saving files.
I have about 35 GB of files, none of them big.
But it's ok for now; I did a few tests on all the computers and everything seems to be working fine.

that is a very small amount of data.

i have found the fastest backend for random access is wasabi, an s3 clone known for hot storage.

I've never used Wasabi. I need something dependable here for my work because of sensitive client data and the new law covering that data here in Brazil.
I need to be able to grant my employees access only to specific folders, and I need reports on writes and deletes by those users, so I can track which employee created or deleted a specific file. I also need versioning.
Can Wasabi provide those things?

wasabi has versioning, and compliance settings like aws s3.
MFA delete protection.
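
for example, versioning can be switched on with the standard aws cli pointed at the wasabi endpoint, bucket name made up:

aws s3api put-bucket-versioning --bucket examplebucket --versioning-configuration Status=Enabled --endpoint-url https://s3.wasabisys.com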

with aws, how do you grant control to a specific file, using s3 bucket policies?
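
for example, a bucket policy that limits one IAM user to a single folder might look something like this, account id, user, bucket and prefix all made up:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOnlyTheFolder",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/employee1" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::examplebucket",
      "Condition": { "StringLike": { "s3:prefix": "clients/*" } }
    },
    {
      "Sid": "ReadWriteInsideTheFolder",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/employee1" },
      "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject" ],
      "Resource": "arn:aws:s3:::examplebucket/clients/*"
    }
  ]
}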

as for the sensitive client data, do you use some form of encryption?

Actually, today all the employees have access to everything; we are a small company. But in the future, with new employees and a different hierarchy, we might need to restrict folder access.

And I use the S3 encryption, nothing else.

well, it seems that your problem has been solved so no need for wasabi.
tho wasabi is much cheaper than aws s3.

wasabi offers only SSE-C encryption, which is what i use.
that way the backend does not have the encryption keys, same as a rclone crypt remote.
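
a minimal sketch of what the sse-c options look like in the config file, remote name and key are placeholders:

[wasabi-ssec]
type = s3
provider = Wasabi
access_key_id = XXX
secret_access_key = XXX
region = us-east-1
endpoint = s3.wasabisys.com
sse_customer_algorithm = AES256
sse_customer_key = XXX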

I'm trying Wasabi now, on a 30-day trial.
But I can't get the mount working correctly. It mounts normally, but when I try to open the location I get an error:
"ERROR : IO error: AccessDenied: Access Denied"

I don't know why; my user has full access permissions.
I'm using the same config as for my AWS S3: I just created a bucket on Wasabi, used that name, and mounted on another drive letter. The rest is the same.

as i cannot see into your computer, you need to post a full debug log.

and before using rclone mount, make sure rclone ls works

2021/07/19 17:51:59 Failed to ls: AccessDenied: Access Denied
status code: 403, request id: XXXXXXXXXXXXXXXXXXX, host id: /XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Getting this error on rclone ls remote:/path

i cannot see into your computer,

  • post a full debug log
  • post the config file, redact id/secrets

How can I generate a debug log?

add -vv to the command, for example,

rclone ls remote:path -vv

rclone ls jmfcloud:jmfcloud -vv
2021/07/19 18:04:41 DEBUG : Using config file from "c:\rclone\rclone.conf"
2021/07/19 18:04:41 DEBUG : rclone: Version "v1.55.1" starting with parameters ["rclone" "ls" "jmfcloud:jmfcloud" "-vv"]
2021/07/19 18:04:41 DEBUG : Creating backend with remote "jmfcloud:jmfcloud"
2021/07/19 18:04:42 DEBUG : 6 go routines active
2021/07/19 18:04:42 Failed to ls: AccessDenied: Access Denied
status code: 403, request id: 8A2E1902CA5E7BBF, host id: Ujx/UtA0p2IhJLogjpR+4Z3QafQM5Zed9AsoIr0pryAiDSSZqDoHaco6XRQ9a5nVyfCypbTDy+QE

rclone.conf

[jmfcloud]
type = s3
provider = Wasabi
env_auth = false
access_key_id = XXX
secret_access_key = XXX
region = us-east-1
endpoint = s3.wasabisys.com
acl = bucket-owner-full-control

try rclone ls jmfcloud: -vv

2021/07/19 18:11:16 DEBUG : Using config file from "c:\rclone\rclone.conf"
2021/07/19 18:11:16 DEBUG : rclone: Version "v1.55.1" starting with parameters ["rclone" "ls" "jmfcloud:" "-vv"]
2021/07/19 18:11:16 DEBUG : Creating backend with remote "jmfcloud:"
2021/07/19 18:11:18 DEBUG : 6 go routines active
2021/07/19 18:11:18 Failed to ls: AccessDenied: Access Denied
status code: 403, request id: 3AE2CE972CBEA27D, host id: N078/JycV0djK6o+dCpv9ASdzXvVHQHUVKSuQOGrZp2SwPE34hGZ3SENmiEsAm1dL83OoAur3nkr

not sure what is going on,

are you using the root id/secret or did you create an IAM user?

rclone ls jmfcloud: --dump=bodies --retries=1 --low-level-retries=1 --log-level=DEBUG --log-file=rclone.log
and post rclone.log