Rclone mount and egress

What is the problem you are having with rclone?

I'm getting insanely high egress rates when using rclone mount with S3-compatible storage for a media streaming application.

I run a media streaming application for personal use (Ampache). I use IDrive e2 (S3-compatible) to store my media files (circa 200GB) and mount the buckets with rclone mount. Ampache runs a daily catalog update job over this rclone-mounted media repository, and I got a pretty high bill after a few days of use. I'm trying to understand what is causing those high egress rates, because the catalog update job just looks for modified/added files to update the catalog. I suspect the catalog update job causes the entire mass of data to be downloaded (egressed) every time to the server running Ampache. Is that supposed to happen? Am I using the wrong parameters for my application? Are there rclone mount parameters to change/limit this behavior?
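To put rough numbers on it, I've been snapshotting the network interface counters before and after the job (a crude method: it assumes the interface is eth0 and that nothing else on the box is downloading much at the same time):

```
# rough per-scan egress estimate; assumes the NIC is eth0 and little
# other inbound traffic while the scan runs
before=$(cat /sys/class/net/eth0/statistics/rx_bytes)
# ... trigger the Ampache catalog update and wait for it to finish ...
after=$(cat /sys/class/net/eth0/statistics/rx_bytes)
echo "~$(( (after - before) / 1024 / 1024 )) MiB downloaded during the scan"
```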

Thanks in advance.

Run the command 'rclone version' and share the full output of the command.

rclone v1.61.0

  • os/version: arch 22.0.0 (64 bit)
  • os/kernel: 5.15.85-1-MANJARO (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.19.4
  • go/linking: dynamic
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

IDrive e2 (S3-compatible)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --verbose idrive:bucket /mnt/idrive/bucket --ignore-checksum --allow-other --allow-non-empty --drive-pacer-min-sleep 10ms --drive-pacer-burst 200 --vfs-cache-mode writes --bwlimit-file 32M


The rclone config contents with secrets removed.

Paste config here


A log from the command with the -vv flag

Paste log here

Check out the tip in the rclone S3 docs about avoiding HEAD requests to read the modification time.

Isn't this tip regarding avoiding HEAD requests valid only for sync or copy?

that is for gdrive, not S3

No, it is for mounts too. I'd recommend --use-server-modtime for consistent time stamps.
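On S3-type remotes, reading rclone's own modtime normally costs a HEAD request per object, while --use-server-modtime reuses the LastModified value that already comes back in the bucket listing. You can see the difference with something like this (a sketch; --dump headers is a real rclone flag, but the grep pattern is just one way to count the requests):

```
# list with per-object modtimes; on S3 this normally issues one HEAD per object
rclone lsl idrive:bucket --dump headers 2>&1 | grep -c 'HEAD /'

# same listing using the server's LastModified; the HEADs should mostly disappear
rclone lsl idrive:bucket --use-server-modtime --dump headers 2>&1 | grep -c 'HEAD /'
```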

You might want to increase your directory caching time, --dircache 24h say. This will mean rclone won't pick up changes made outside of rclone for 24h, but that may be OK.

I would get rid of --ignore-checksum unless you have a specific reason for it.

You mean --dir-cache-time ?

Yes, that's what he means.


Still no go. Still insanely high egress rates 🫠

(screenshot attached: Screenshot_20230120_220413)

rclone mount --verbose s3:media /data/media --use-server-modtime --allow-other --allow-non-empty --log-file /log/rclone.log --vfs-cache-mode=writes --vfs-fast-fingerprint --cache-dir /cache --dir-cache-time 24h

What are you expecting to happen?

Rclone only does what your application/use case asks as it doesn't transfer things on its own.

Don't stream as much?

Hi,
That's what I'm trying to find out. It's a personal media application; I stream no more than 1GB daily. However, it's the daily catalog update task (in which Ampache checks whether there are modifications/additions to the media catalog) that is causing this behavior, and I have no clue why. My media catalog is about 200GB and I'm getting circa 8% of it in egress traffic... Looking at the Ampache source code, I can only see some directory listings and the search for new and updated files...
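One thing I plan to test is what a single tag read costs through the mount, since a scanner mostly needs the first bytes of each file (the path below is just an example from my library):

```
# read only the first 64 KiB of one track through the mount, roughly what a
# tag scanner needs from the front of a file, then check the rclone debug log
# to see how large a range was actually fetched from the bucket
dd if='/data/media/some-album/track01.mp3' of=/dev/null bs=64K count=1
```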

Seems like that’s how it works as it’s downloading a lot.

Capture a debug log and you can see it, but I'm not sure that will do much other than confirm how it works.
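Something like this would do it; -vv and --log-file are the real flags, while the grep patterns below are only a rough way to skim the result:

```
# remount with debug logging (keep the rest of your flags as they are)
rclone mount s3:media /data/media -vv --log-file /log/rclone.log

# after the catalog job has run, skim which files were opened and how often
grep -i 'open' /log/rclone.log | head
grep -ci 'read' /log/rclone.log
```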

2023/01/21 19:28:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:29:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:30:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:31:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:32:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:33:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:34:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:35:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:36:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:37:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:38:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:39:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:40:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/01/21 19:41:12 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)

After shutting down Ampache and keeping the mount up, the logs are full of those messages. And surprisingly, the egress rates are still increasing (at a slower pace, of course). So is data being egressed just because of the rclone mount?

A debug log would show you what's going on; that's why we ask for it.

A mount with no activity is very, very little traffic.
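If you want to confirm that from rclone's side instead of the provider's billing graph, start the mount with the remote control enabled and poll its transfer counters; --rc and core/stats are real rclone features, the rest is a sketch:

```
# mount with the remote control API enabled (listens on localhost:5572 by default)
rclone mount s3:media /data/media --rc --log-file /log/rclone.log &

# later: ask the running rclone how many bytes it has actually transferred
rclone rc core/stats
```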

So, could you please recommend a set of parameters to use as little bandwidth as possible, considering that the files in this repository rarely change (it's just my personal music collection) and file additions occur infrequently?
The daily catalog update is a default application behavior that can be easily changed.
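Something along these lines is what I had in mind after reading the VFS docs (just my guess; the cache size and age values are placeholders I picked for a mostly-static 200GB collection):

```
rclone mount s3:media /data/media \
  --use-server-modtime \
  --vfs-cache-mode full \
  --vfs-cache-max-size 250G \
  --vfs-cache-max-age 720h \
  --dir-cache-time 168h \
  --cache-dir /cache \
  --allow-other \
  --log-file /log/rclone.log
```

My understanding is that with --vfs-cache-mode full, whatever a scan reads once stays in /cache, so later scans would be served locally instead of being downloaded again.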

Dunno; without seeing the debug log, it's tough to guess what the issue might be.

Something like the attached file, after remounting with:

rclone mount --log-level 'DEBUG' s3:media /data/media --use-server-modtime --allow-other --allow-non-empty --log-file /log/rclone.log --vfs-cache-mode=writes --vfs-fast-fingerprint --cache-dir /cache --dir-cache-time 168h

rclone.log (2.6 MB)

Any chance you can share the full log? It's missing the starting part 🙁

It's also about 12 seconds of a log so nothing really to see.

Hi, I updated the attached log file with the first 200,000 lines.

Did you try with the server mod time? If you're not using modtime, your tool may constantly think the media is new and reread it?
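A quick check would be to stat the same file on two different days (the path is just an example) and see whether the reported modtime drifts; if it changes between scans, the scanner will treat the file as modified and reread it:

```
# run once now and once after the next remount / cache expiry, then compare
stat -c '%y  %n' /data/media/some-album/track01.mp3
```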