Today I noticed that my VPS has huge incoming traffic with my setup (rclone + Emby). The VPS shows 2.5TB outgoing and 23TB incoming traffic.
First step: I checked the traffic meter in Netdata while streaming:
As you can see, the incoming traffic is much higher than the outgoing...
I guess rclone reads files multiple times while streaming. Does anybody have a similar setup who can check their traffic history?
--umask 0007 \
--uid 120 \
--gid 120 \
--vfs-read-wait 60ms \
--buffer-size 2G \
--timeout 30m \
--dir-cache-time 96h \
--drive-chunk-size 128M \
--vfs-cache-max-age 72h \
--vfs-cache-mode writes \
--vfs-cache-max-size 100G \
--vfs-read-chunk-size 128M \
--vfs-read-chunk-size-limit 2G \
--user-agent "GoogleDriveFS/22.214.171.124 (Windows;OSVer=10.0.19041;)"
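For context, those flags would sit inside a full mount command roughly like this — a sketch only; the remote name (`gdrive:`) and mount point (`/mnt/media`) are placeholders I've assumed, not from the thread:

```
# Hypothetical full invocation; remote and mount point are placeholders.
rclone mount gdrive: /mnt/media \
  --umask 0007 --uid 120 --gid 120 \
  --vfs-read-wait 60ms --buffer-size 2G --timeout 30m \
  --dir-cache-time 96h --drive-chunk-size 128M \
  --vfs-cache-mode writes --vfs-cache-max-age 72h --vfs-cache-max-size 100G \
  --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 2G \
  --user-agent "GoogleDriveFS/22.214.171.124 (Windows;OSVer=10.0.19041;)"
```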
@ncw can this happen because of the
failed to wait for in-sequence read issue?
After disabling the async feature (
--async-read=false) it looks much better:
I think it is a combination of this and the very big buffer.
So whenever rclone gets one of those
failed to wait for in-sequence read errors it will do a seek, which means chucking away the buffer.
I have tried a few mount options. With
--buffer-size 0 and
--async-read=false I can improve the incoming traffic, but it is still ten times higher than the outgoing traffic.
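To make the experiment concrete, the changed options would be applied like this — a sketch, again assuming the placeholder remote `gdrive:` and mount point `/mnt/media`:

```
# Same mount as before, with only the two diagnostic options changed.
# Remote and mount point are assumed placeholders.
rclone mount gdrive: /mnt/media \
  --buffer-size 0 \
  --async-read=false
  # (all other flags from the original mount left unchanged)
```

Setting --buffer-size 0 removes the large in-memory read-ahead buffer, so a discarded buffer after a seek no longer throws away gigabytes of already-downloaded data.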
Looks like the VFS mount + Emby reads files multiple times. I don't know why.
It would be nice if someone with the same setup could help.
I'd probably focus on Emby, as rclone is just serving whatever Emby is asking for.
You can check their logs and see what's going on.
You most likely have an addon scraping files, a configuration doing analysis, someone syncing large amounts of stuff, etc.
Thanks for the note about the logs.
I found it in the nginx reverse proxy log! I had turned
proxy_buffering on in the nginx reverse proxy config file, so it was a configuration mistake in nginx.
It seems the proxy buffer was requesting the files incessantly.
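For anyone hitting the same thing, the fix in the nginx reverse proxy config would look roughly like this — a sketch; the location block layout and the upstream port (8096, Emby's default) are my assumptions, only the proxy_buffering directive comes from this thread:

```nginx
# Inside the server block that proxies to Emby (layout assumed).
location / {
    proxy_pass http://127.0.0.1:8096;  # Emby's default HTTP port (assumption)
    proxy_buffering off;               # was 'on' - caused nginx to re-request files incessantly
}
```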
Happy Easter, everybody.
This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.