The server started by the rclone serve s3 command appears to handle only one request at a time.
For example, while listing a folder containing a large number of files with rclone ls rclone_s3_server:/my-files, it is not possible to connect to the server again and run, say, rclone copy -P doc.pdf rclone_s3_server:/my-files until the first command has completed.
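Something like the following timing sketch makes the overlap easier to see (the localhost:8080 address and the bucket name are just placeholders for however the server was started; the requests are unsigned, so the bodies may only be auth errors, but the timings still show whether the second request waits for the first):

```go
// Minimal sketch: fire a slow listing and a cheap probe concurrently and
// print how long each takes. Address and bucket name are placeholders.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func timedGet(wg *sync.WaitGroup, label, url string) {
	defer wg.Done()
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		fmt.Printf("%s: error after %v: %v\n", label, time.Since(start), err)
		return
	}
	resp.Body.Close()
	fmt.Printf("%s: status %s after %v\n", label, resp.Status, time.Since(start))
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	// Slow request: listing a bucket with many objects.
	go timedGet(&wg, "list ", "http://localhost:8080/my-files")
	// Give the listing a head start, then fire a request that should be cheap.
	time.Sleep(200 * time.Millisecond)
	go timedGet(&wg, "probe", "http://localhost:8080/")
	wg.Wait()
}
```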
Run the command 'rclone version' and share the full output of the command.
rclone v1.68.1
os/version: gentoo 2.15 (64 bit)
os/kernel: 6.1.67-gentoo-whatbox (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.23.1
go/linking: dynamic
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
S3 + local filesystem
The command you were trying to run (eg rclone copy /tmp remote:tmp)
My earlier description was not clear enough.
What I meant is that any subsequent request has to wait for the previous one to complete.
Request handling appears to be single-threaded.
I couldn't find any locking in cmd/serve/s3.
The only mutexes appear to be in the gofakes3 s3mem backend, which rclone doesn't seem to use.
Even unauthenticated requests are blocked until the previous action is finished. I'm not sure if this is related to the --auth-proxy option, which I use to create backends on the fly.
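For reference, the auth proxy is just an external program that receives the login as JSON on stdin and prints a backend config as JSON on stdout. A minimal sketch is below; the local backend and the /srv/<user> layout are placeholders, not my actual proxy:

```go
// Rough sketch of an --auth-proxy program: rclone passes the credentials as
// JSON on stdin and expects a backend config as JSON on stdout. The "local"
// backend type and the /srv/<user> root are placeholders.
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

type login struct {
	User string `json:"user"`
	Pass string `json:"pass"`
}

func main() {
	var in login
	if err := json.NewDecoder(os.Stdin).Decode(&in); err != nil {
		log.Fatalf("decoding login: %v", err)
	}
	// A real proxy would verify the password here before answering.
	out := map[string]string{
		"type":  "local",
		"_root": filepath.Join("/srv", in.User),
	}
	if err := json.NewEncoder(os.Stdout).Encode(out); err != nil {
		log.Fatalf("encoding config: %v", err)
	}
}
```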
I don't think there is any locking in the auth proxy itself, but possibly in how it is called.
If you can get a goroutine dump at the point where it is blocking the auth calls, that will show what is going on. The easiest way is to kill the serve s3 process with SIGQUIT, or use the remote control (see the rc docs).
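As a rough illustration of what to look for (this toy program is just a sketch, not rclone's code): request /slow and / at the same time, send the process SIGQUIT with kill -QUIT <pid>, and the dump printed to stderr will show one goroutine sleeping inside the handler while the other is parked in sync.(*Mutex).Lock. That is the signature that would confirm a held lock in the serve s3 dump.

```go
// Toy server demonstrating what a goroutine dump of a contended lock looks
// like. Hit /slow and / concurrently, then `kill -QUIT <pid>` to see the
// waiting goroutine stuck in sync.(*Mutex).Lock.
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
	"time"
)

var mu sync.Mutex

func handler(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	defer mu.Unlock()
	if r.URL.Path == "/slow" {
		time.Sleep(time.Minute) // stand-in for a very long directory listing
	}
	fmt.Fprintln(w, "handled", r.URL.Path)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe("localhost:8081", nil))
}
```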
Looking at the gofakes3 code, this lock is being held throughout the HTTP transaction, which is wrong and is causing the problem, as the lock stays held for the whole of the very long directory listing.
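Schematically the difference is something like the sketch below (an illustration of the pattern only, not the actual gofakes3 code): the lock should only protect the shared state, not the whole slow listing.

```go
// Illustration of the locking pattern discussed above, not the actual
// gofakes3 code: the point is how long the mutex is held, not the S3 details.
package main

import (
	"fmt"
	"io"
	"os"
	"sync"
	"time"
)

type bucket struct {
	mu      sync.Mutex
	objects []string // shared state that does need protecting
}

// Broken pattern: the lock is held for the whole transaction, so a long,
// slow listing blocks every other request against the server.
func (b *bucket) listHoldingLock(w io.Writer) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, o := range b.objects {
		time.Sleep(10 * time.Millisecond) // stand-in for slow per-entry work
		fmt.Fprintln(w, o)
	}
}

// Fixed pattern: take a snapshot under the lock, release it, then do the
// slow work, so concurrent requests only contend for a moment.
func (b *bucket) list(w io.Writer) {
	b.mu.Lock()
	snapshot := append([]string(nil), b.objects...)
	b.mu.Unlock()
	for _, o := range snapshot {
		time.Sleep(10 * time.Millisecond)
		fmt.Fprintln(w, o)
	}
}

func main() {
	b := &bucket{objects: []string{"doc.pdf", "notes.txt"}}
	b.listHoldingLock(os.Stdout)
	b.list(os.Stdout)
}
```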
I've had a go at fixing this - can you give it a try please?