What is the problem you are having with rclone?
Downloads are too slow under highly concurrent access.
Run the command 'rclone version' and share the full output of the command.
rclone v1.59.2
- os/version: debian 11.5 (64 bit)
- os/kernel: 5.4.0-125-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.18.6
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Storj Gateway-ST (essentially MinIO)
The command you were trying to run (eg rclone copy /tmp remote:tmp)
No single command is involved; I'm using rclone mount.
The rclone config contents with secrets removed.
[cdn]
type = s3
provider = Other
access_key_id = %hidden%
secret_access_key = %hidden%
region = minio
endpoint = http://127.0.0.1:7777
acl = private
chunk_size = 64M
A log from the command with the -vv flag
Not applicable here.
I'm stuck with my HLS/DASH streaming implementation: I cannot serve enough clients at the same time and plateau at about 50 Mbit/s in total, although my system can handle 5000 Mbit/s. In my opinion, this bottleneck comes from rclone. I use rclone to mount an S3 bucket on a Linux machine, and a web server (nginx) then serves the files from that mount via a location block like this:
location /hls {
    alias /srv/hls/;  # rclone mount point
    auth_jwt_enabled on;
    add_header 'cache-control' 'no-cache';
    add_header 'access-control-allow-credentials' 'true';
    add_header 'access-control-allow-origin' $allow_origin;
    add_header 'access-control-expose-headers' 'content-encoding,content-length,content-range,date';
    add_header 'access-control-allow-headers' 'authorization,accept,accept-encoding,accept-language,access-control-request-headers,access-control-request-method,cache-control,connection,dnt,host,pragma,sec-fetch-dest,sec-fetch-mode,sec-fetch-site,sec-gpc,te,user-agent';

    # Trick preflight OPTIONS requests of media players like VideoJS
    if ($request_method = OPTIONS) {
        add_header 'access-control-allow-credentials' 'true';
        add_header 'access-control-allow-headers' 'authorization,accept,accept-encoding,accept-language,access-control-request-headers,access-control-request-method,cache-control,connection,dnt,host,pragma,sec-fetch-dest,sec-fetch-mode,sec-fetch-site,sec-gpc,te,user-agent';
        add_header 'access-control-allow-methods' 'GET, HEAD, OPTIONS';
        add_header 'access-control-allow-origin' $allow_origin;
        add_header 'content-length' 0;
        return 204;
    }

    if ($invalid_referer) {
        return 403;
    }

    if ($request_method !~ ^(GET|HEAD|OPTIONS)$) {
        return 405;
    }
}
The problem is that as soon as a client pulls a file, rclone opens only a single connection for it, so I get a maximum of 2-3 MB/s through that one connection, which kinda sucks... As far as I can tell, this behavior sits deep inside the rclone code: rclone mount does not support multi-part downloads; only the plain rclone commands do. Long story short, rclone mount does not give me the performance I need for this project, speaking purely from a software perspective. I also tried s3fs, which supports multi-part uploads but not downloads... which brings me to the same outcome.
Is there any chance of getting multi-part downloads for a mounted S3 volume? That would really help me improve throughput.
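To illustrate what I mean by a multi-part download, here is a rough sketch (not rclone code; the URL, object size, and part size are made up for the example): it splits one object into byte ranges and fetches them over parallel connections with ranged GETs, which is what I'd like the mount to do internally on reads.

```python
# Sketch: parallel ranged GETs against an S3/HTTP endpoint, using only the
# Python standard library. Hypothetical URL and sizes, for illustration only.
import concurrent.futures
import urllib.request

def part_ranges(size, part_size):
    """Split an object of `size` bytes into inclusive (start, end) byte ranges."""
    return [(start, min(start + part_size, size) - 1)
            for start in range(0, size, part_size)]

def fetch_multipart(url, size, part_size=16 * 2**20, workers=4):
    """Download `url` with one ranged GET per part, over parallel connections."""
    def fetch_one(rng):
        req = urllib.request.Request(
            url, headers={"Range": f"bytes={rng[0]}-{rng[1]}"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves order, so the parts concatenate back correctly
        return b"".join(pool.map(fetch_one, part_ranges(size, part_size)))

# e.g. fetch_multipart("http://127.0.0.1:7777/vod/seg.ts", size=64 * 2**20)
# would issue four 16 MiB ranged GETs in parallel instead of one stream.
```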
This is how I mount the rclone volume:
rclone mount cdn:vod /srv/hls \
  --use-server-modtime \
  --async-read \
  --no-modtime \
  --umask 0000 \
  --buffer-size 16M \
  --dir-cache-time 180s \
  --poll-interval 0m30s \
  --write-back-cache \
  --vfs-cache-max-age 43200s \
  --vfs-cache-mode full \
  --vfs-read-ahead 2M \
  --vfs-read-chunk-size 16M \
  --cache-dir /cache/vod \
  --max-read-ahead 512Ki \
  --transfers 1000 \
  --checkers 1000 \
  --drive-chunk-size 2M \
  --volname vod \
  --daemon
Kind regards and thanks in advance