Well, not invented, but that’s my domain name and an A record I have registered to my IP.
@Animosity022 Another couple of queries:
- Isn’t the 1 minute blip between the rclone move and rclone picking up the change an issue for Sonarr or Radarr if they are scanning the disk at that time?
- Have you tried rclone move directly to the mount instead of moving to the remote and then waiting for the mount to pick it up via the poll interval? Any possible issues you can see with this approach?
Yep, I guess that could happen, but I’ve never noticed the issue to be honest.
I use the rclone move to the remote so I can limit the tps and a few other things to ensure it transfers the way I want. If I didn’t want to use rclone move, I can just “mv”.
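For reference, a move command along those lines might look roughly like this. The paths, remote name, and limit values below are illustrative assumptions, not the poster's exact settings:

```shell
# Illustrative sketch of throttled uploads to the remote (values are assumptions).
# --tpslimit caps Google Drive API transactions per second,
# --transfers limits parallel uploads,
# --min-age skips files that may still be in the middle of being written.
rclone move /data/local/media gcrypt:media \
  --tpslimit 3 --transfers 2 --min-age 15m --log-level INFO
```

The point of doing this outside the mount is that the upload traffic never competes with the mount's own read activity.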
Can’t you still have the same options when copying from a local directory to the mount? That should work the same way as the current behaviour but also make it more reliable since there is never an occasion when the file is not present at that path (either local or remote).
I mean, you are trying to solve a problem that I’ve never experienced.
You can have the same options but I’m not impacting my mount by doing it outside.
If I was really that concerned about the minute impact, I’d just add a line to refresh that directory with a rc command when it was done to make it near instant.
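That refresh is a single rc call against the running mount. A minimal sketch, assuming the mount was started with `--rc` and using a made-up directory path relative to the remote root:

```shell
# Illustrative: tell a running rclone mount (started with --rc) to re-read
# one directory immediately instead of waiting for the poll interval.
# "Movies/SomeFilm" is a placeholder path, not from the poster's setup.
rclone rc vfs/refresh dir=Movies/SomeFilm
```

Run right after the move completes, this makes the new files visible on the mount near instantly.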
Got it. Thanks.
So, just one more thing: what is your download directory set to in Deluge? The /data/local/torrents folder or the
For me, it’s in /gmedia/torrents so I can take advantage of the hard linking in mergerfs and move it locally.
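For context, a mergerfs setup that allows that kind of hard linking might look roughly like this. The branch paths and option set are assumptions based on common configurations, not necessarily the exact ones used here:

```shell
# Illustrative mergerfs mount: local disk branch first, rclone mount branch second.
# Hard links only work within a single filesystem/branch, so keeping the torrent
# download directory and the local media copy on the same /local branch is what
# makes hard linking (instead of copying) possible.
mergerfs /local:/gdrive-mount /gmedia \
  -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff
```

With `category.create=ff` (first found), new files land on the first branch, i.e. local disk, from where they can later be uploaded.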
In the last couple of days I've been getting:
ReadFileHandle.Read error: low level retry 1/10
every time I play a movie through my mount.
Any idea what the reason could be?
Not much really as those are network errors to your Google Drive and it will automatically retry. They pop up here and there. Are you using your own API key as well?
No, I don't. Would this be a scenario that works better with my own API key?
It’s always better if you can to use your own API key as you have your own limits/quotas.
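Once you have created a client ID and secret in the Google Cloud console, they end up in the remote's section of rclone.conf, roughly like this (all values below are placeholders):

```ini
# Sketch of an rclone.conf drive remote using your own API credentials.
# Every value here is a placeholder, not real configuration.
[gdrive]
type = drive
client_id = <your-client-id>.apps.googleusercontent.com
client_secret = <your-client-secret>
scope = drive
token = {"access_token":"..."}
```

You can enter the client_id and client_secret when prompted during `rclone config`, then re-authorize the remote so the token is issued against your own project.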
@Animosity022 what is the size like of your media library stored in Gdrive? I’m noticing a lot of lag and slow startups lately while using a VFS/unionfs setup with your mount settings. Before it was a lot faster, so I’m trying to troubleshoot what’s going on.
My library is:
rclone about gcrypt:
Used: 49.987T, Trashed: 0, Other: 57.134M
What kernel version are you running? There are a number of threads that people reported slowness on Ubuntu with specific kernel versions due to a bug.
I'm running it on Unraid, so that is Linux based. I'll have to start from scratch and see how I can improve it. Your library is much bigger than mine, but somehow mine is unbearably sluggish.
If you are using unionfs, you need to make sure it has the sync_read option on the mount. I’m not familiar with Unraid myself.
Yeah I’m already using that. Current settings are:
rclone mount --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 100M --log-level INFO gdrive_crypt: /mnt/user/mount_rclone/Gdrive --stats 1m &
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/Gdrive=RW:/mnt/user/mount_rclone/Gdrive=RO /mnt/user/mount_unionfs/Gdrive
Which should be good, right? I've had this working before, but now my library is a lot bigger in terms of series and thus folders. So I think that is the problem, but I don't know how to speed up the folder/directory listings.
I would think that would be fine. Are you seeing any errors in the rclone logs? You could turn them up to debug and capture some logs and share that back.
Are you using your own API key as well? That’s the only other thing that I can think of that might cause a problem.
I'll have to see if I can find errors. Currently I'm isolating the specific parts: I've disabled Plex and have a clean Emby install filling its library. So far playback seems faster again, so maybe the Emby instance was corrupt, or maybe Plex is putting too much strain on the VFS mount.
No, I'm not. I'm trying to find a good tutorial on how to get my own API key and use it in rclone. Hopefully that solves the problem. Thanks for helping again!
There is a section here describing how to do it:
That helps with some of the quota stuff as using the shared one, you can bang into high use periods and such.
I use unraid as well and I’m max 5s to start for pretty much anything on a 300/300 connection. Maybe reduce your chunk size for a faster start time - I’m at 32M:
rclone mount --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs
I also run
rclone rc --timeout=1h vfs/refresh recursive=true
after mounting, as I read somewhere that it populates the dir-cache.
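For that rc call to reach the mount, the mount itself has to expose the remote-control API. A minimal sketch (the remote name and extra flags are illustrative, taken from the command earlier in the thread):

```shell
# The mount must be started with --rc for vfs/refresh to be able to reach it.
rclone mount --rc --allow-other --dir-cache-time 72h \
  gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

# Then pre-warm the whole directory cache recursively; on a large library
# this can take a while, hence the generous timeout.
rclone rc --timeout=1h vfs/refresh recursive=true
```

With the cache pre-populated, directory listings come from memory for the duration of --dir-cache-time instead of hitting the Drive API on first access.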
What’s your Transcode default throttle buffer set to in Plex? Mine’s at 300