Mounting rclone remotes to use like a local drive

What is the problem you are having with rclone?

I want to mount Google Drives with rclone so they can be used like a local drive, both for media consumption and so the hosted apps can read and write normally.

What is your rclone version (output from rclone version)

rclone v1.56.0
  • os/kernel: 5.4.0-80-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.16.5
  • go/linking: static

Which OS you are using and how many bits (eg Windows 7, 64 bit)

  • os/version: ubuntu 20.04 (64 bit)

Which cloud storage system are you using? (eg Google Drive)

Google Drive (unlimited)

The Way I Mounted My Drives

With the Rclone Browser mount option, everything else left at the defaults.

How I Plan To Mount It

Setting up cron jobs after exporting the commands from Rclone Browser with the right options, or just using the flags you think I should use from this post.

Problems I am encountering:

  1. When uploading / writing to the mounted location, the local drive also gets filled up and used. Is this rclone compensating for cloud drive instability, or is something wrong in my mount settings?
  2. I want Jellyfin or Plex to read the media on the drives normally and access/read it as they would on a normal local drive. Do I need extra options to make this happen?
  3. I have a group called media that I would like to have access to the contents mounted in the mount dir. Do I just have to change permissions on the mount dir, or set something when mounting with rclone?

PS: Why isn't Google Drive recommended as the best option out there? What are we expecting Google to do that would make it worse?

Thank you for taking a look!

  1. For rclone to emulate a local file system, it uses a local cache; the total amount of space it uses can be tweaked (see the sketch after this list). That is documented at https://rclone.org/commands/rclone_mount/#vfs-file-caching
  2. That depends. For streaming media, some rcloners use a VFS cache; I do not.
    I have Emby on a Pi 4 with little free local disk space and a very fast internet connection.
  3. For Google Drive and Plex, you can get the rclone settings from
    https://github.com/animosity22/homescripts/blob/master/systemd/rclone.service
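
As a minimal sketch of capping that cache (the mount point, cache dir, and limits here are placeholders, not recommendations):

rclone mount GD: /path/to/mountpoint --vfs-cache-mode writes --cache-dir /path/to/cache --vfs-cache-max-size 10G --vfs-cache-max-age 1h

--vfs-cache-max-size evicts the least recently used cached files once the cache passes that total, and --vfs-cache-max-age drops files not accessed within that window, so the local disk usage stays bounded.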

Here is the command I run; it connects to a seedbox using an SFTP server:
rclone mount remote: /home/ubuntu/rclone/mountpoint/remote --allow-other --read-only --dir-cache-time=10000h --buffer-size=200M --rc --rc-addr=:6001 --log-level=INFO --log-file=log.txt


I wouldn't say mine is very fast, but it does have 40 Gbps down and 2 Gbps up.

I can't use read-only, I suppose, since I am thinking of having apps write too.

Thank you for the insight. I will give it to some tests to see if I have any other problems.

As a side note, do you use the crypt function?

I would think that increases workload and delays read/write speeds, no?

Well, 40 Gbps, that is super fast internet; perhaps you mean 40 Mbps?

Most rcloners who stream media from the cloud use crypt; it does not increase workload or affect speeds.

Nope, I just rechecked; I do get 40 Gbps down. But again, streaming back to me is limited by the up speed.

Hmm, maybe I should also look into the crypt documentation then. Setting it up when mounting my drives would definitely help protect the data.
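
From a quick look at the docs, a crypt remote just wraps an existing remote. A rough sketch of the config section that rclone config would generate (the remote name and wrapped path are placeholders I made up, and the passwords are stored obscured by rclone):

[GDcrypt]
type = crypt
remote = GD:encrypted
password = *** obscured by rclone config ***
password2 = *** obscured by rclone config ***

You would then mount GDcrypt: instead of GD:, and rclone encrypts on upload and decrypts on read transparently.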

Streaming is about downloading; rclone is downloading the media files from the cloud to local...

Oh, let me elaborate. The VPS down speed is quick, so reading is going to be fast, no problem. But then when I access it through the web UIs or the Jellyfin client, it has to stream back to me, which is where it is limited by the up speed.

Oh, you are using a VPS; that explains the high internet speed.

What VPS provider gives 40 Gbps? That doesn't seem right.

Linode? https://www.linode.com/pricing/

Ah ok, that's shared though so there's not a chance on the planet that a single person is getting that.

A random speed test exceeded 8Gbit/s across the internet to an external host.

40Gbit is probably achievable between linodes in the same datacenter.

FWIW: https://www.linode.com/blog/linode/linode-cloud-ssds-double-ram-much-more/

On that page you linked:

The blog post is from 2014 when linode revamped their core network infrastructure. The "pricing" page I first linked to shows the current plans.

You linked it 🙂

So inbound is shared, and outbound is limited on the shared plans.

Ah thanks for finding that out. Makes a lot more sense. But still, 2 Gbps is more than enough for me for both up and down.

Hi! Great idea. I use crypt with rclone on Ubuntu to link my computer to my Google Drive. If you need help, I can help you.

Yes, I think I have found a good way, even though I did encounter a problem; I think I should open another thread to solve that one. Thank you for the offer.

So this is the solution I adopted:

  1. One systemd service per drive (a sketch of such a unit is below this list).
  2. The systemd service triggers a bash script with the mount command.
  3. Stopping the service triggers an unmount bash script.
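
As a rough sketch of one of those units (the unit name, script paths, and mount point below are my own placeholders, not tested verbatim):

[Unit]
Description=rclone mount for GD
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStart=/home/caffine/rclone/scripts/mount-gd.sh
ExecStop=/home/caffine/rclone/scripts/umount-gd.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

Here mount-gd.sh runs the rclone mount command in the foreground, and umount-gd.sh does a fusermount -uz on the mount point; the unit is enabled with systemctl enable --now rclone-gd.service.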

And I have personally adopted these options for now:

rclone mount GD: /desired-dir --allow-other --dir-cache-time 5000h --poll-interval 5s --umask 002 --cache-dir=/home/caffine/rclone/cache --vfs-read-chunk-size 1M --vfs-cache-mode writes --vfs-cache-max-size 20G --vfs-cache-max-age 2h --vfs-cache-poll-interval 5m --vfs-read-ahead 2G --log-file /home/caffine/rclone/log/rclone.log --log-level INFO

I am not adopting vfs/refresh with the remote control (rc) set up; even though I know of its benefits, I don't want to mess with the ports just yet. Once I am sure everything is set up nicely, I might.
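
If I do turn it on later, my understanding is it would look roughly like this (assuming rclone's default rc port of 5572):

rclone mount GD: /desired-dir --rc ...same flags as above...
rclone rc vfs/refresh recursive=true

The first command serves the remote control API on localhost:5572 by default, and the second primes the whole directory cache in one pass instead of waiting for it to fill lazily.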

I just realized, you doing this would mean:

  1. rclone mount on the VPS over SFTP
  2. the seedbox serving SFTP to rclone
  3. others connecting to you

The files have to make three hops before your intended person gets them.