Sharing rclone mount with Windows

I was hoping someone could give me some assistance with using rclone (with Google Drive) and Samba together, so that I can share the mount with computers on my network and they don’t need to have a mounted drive themselves.

Sadly I am getting poor/inconsistent results: on the Windows computers I frequently get unresponsive Explorer windows and transfers that just randomly stop (I use TeraCopy).

I’ve tried various configuration options, including a cache, and still have the same issues.

I’m not overly concerned about speed, more about just getting a reliable experience.

I am using the following options at present. I have 3 mount points just so I could try things out to see what worked well and what didn’t; two of them are cache remotes and one is a standard remote with no cache. I’m not using any encryption.

As you can probably see, I tested a few different flags, and also --vfs-cache-mode, to see if this improved things at all.

rclone mount --allow-non-empty --allow-other --umask 000 --bwlimit 8.5M --cache-db-purge --vfs-cache-mode full --transfers 4 --checkers 1 --contimeout 60s --timeout 300s --retries 50 --low-level-retries 20 --drive-chunk-size=32M --buffer-size=32M --drive-upload-cutoff=64M --verbose --progress --stats 5s gdcache1: /mnt/rclone/gdcache1

rclone mount --allow-non-empty --allow-other --umask 000 --bwlimit 8.5M --cache-db-purge --vfs-cache-mode full --transfers 4 --checkers 1 --contimeout 60s --timeout 300s --retries 50 --low-level-retries 20 --drive-chunk-size=32M --buffer-size=32M --drive-upload-cutoff=64M --verbose --progress --stats 5s gdcache2: /mnt/rclone/gdcache2

rclone mount --allow-non-empty --allow-other --umask 000 --bwlimit 8.5M --transfers 4 --checkers 1 --contimeout 60s --timeout 300s --retries 50 --low-level-retries 20 --drive-chunk-size=32M --buffer-size=32M --drive-upload-cutoff=64M --verbose --progress --stats 5s gdrive: /mnt/rclone/gdrive

I was curious, so I tried using plain old Windows Explorer for the transfer, but the same problems occur. The rclone mount also stops working via Samba, with Explorer saying it’s not accessible and asking me to check whether I am still connected to the network.

These are the contents of my .rclone.conf file.

[gdrive]
type = drive
client_id =
client_secret =
token = {"access_token":"*** REMOVED ***","token_type":"Bearer","refresh_token":"*** REMOVED ***","expiry":"2019-01-08T13:58:43.391217448Z"}

[gdcache1]
type = cache
remote = gdrive:/Path1
chunk_size = 32M
info_age = 1d
chunk_total_size = 10G
db_purge = true
read_retries = 50
workers = 4

[gdcache2]
type = cache
remote = gdrive:/Path2
chunk_size = 32M
info_age = 1d
chunk_total_size = 10G
db_purge = true
read_retries = 50
workers = 4

I’ve added my Samba configuration file below as well, in case it’s useful in some way.

# Global parameters
[global]
        interfaces = 127.0.0.0/8 enp30s0
        server string = %h server (Samba, Debian)
        log file = /var/log/samba/%m.log
        max log size = 50
        syslog = 0
        panic action = /usr/share/samba/panic-action %d
        usershare allow guests = Yes
        client min protocol = SMB2
        max xmit = 65535
        min receivefile size = 16384
        server min protocol = SMB2
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passdb backend = smbpasswd
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        passwd program = /usr/bin/passwd %u
        security = USER
        server role = standalone server
        unix password sync = Yes
        deadtime = 15
        socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536 SO_KEEPALIVE
        dns proxy = No
        idmap config * : backend = tdb
        hosts allow = 192.168.0. 192.168.1. 192.168.2. 127.0.0.1
        aio read size = 16384
        aio write size = 16384
        use sendfile = Yes

[homes]
        comment = Home Directories
        browseable = No
        create mask = 0775
        directory mask = 0775
        read only = No

[mnt]
        comment = "/mnt"
        path = /mnt
        hide unreadable = Yes
        create mask = 0644
        read only = No

What’s your rclone.conf look like? Are you using the cache backend? What version are you running?

--vfs-cache-mode full

means it downloads the entire file each time before you can use it.

--transfers 4 --checkers 1

These do nothing on a mount.

--bwlimit 8.5M 

You seem to be slowing it down with this too.

What does the rclone log show? Can you run it with debug logging when you get an error?

I added the contents of .rclone.conf to the original post.

[root@debian ~]$ rclone version
rclone v1.45
- os/arch: linux/amd64
- go version: go1.11.2

I added --vfs-cache-mode full because I was seeing errors about being unable to seek, and I didn’t think full was anything bad, not realising that it would have to download the full file.

I wasn’t aware they did nothing on a mount; that was advice someone else gave me. The --bwlimit 8.5M was added from another post on this forum so that I didn’t exceed the 750GB per day upload limit with Google Drive.

How do I run it with debug? I don’t see any errors in the logs when using --verbose.

What’s the reason you want to stick rclone in the middle and not just use Google Drive? It doesn’t seem like you are using encryption, so I’m trying to figure out what use case or problem you are trying to solve.

The 750GB daily upload limit is a thing, but the bwlimit also applies to your downloads, so that would make reading slow too.
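For reference, the arithmetic behind that 8.5M figure can be checked quickly (rclone’s M suffix on --bwlimit means MiB/s):

```shell
# What does a sustained 8.5 MiB/s upload add up to over one day?
awk 'BEGIN {
  mib = 8.5 * 86400                        # MiB uploaded per day
  printf "%.1f GB/day (decimal)\n", mib * 1048576 / 1e9
  printf "%.1f GiB/day (binary)\n", mib / 1024
}'
```

That lands around 770 GB (decimal) or 717 GiB (binary) per day, so 8.5M only stays under the 750 quota if Google counts in binary units; something like 8M would be safely under either way.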

You can run with either -vv or --log-level DEBUG; they are the same thing.
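For example, your existing mount command can be rerun with debug output going to a file (the log path is just an example):

```shell
rclone mount gdcache2: /mnt/rclone/gdcache2 \
  --allow-other \
  -vv --log-file=/var/log/rclone-debug.log
```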

Because I am using G Suite with Google Drive and am looking to take advantage of the unlimited storage, I’d like to be able to upload files and still have a level of access to them, without needing the amount of local storage required to hold the data. I want to use it with a Windows PC that is used as an HTPC, storing the recordings it makes in Google Drive.

I also can’t use the Google Drive app on Linux. I did try the Google Drive File Stream app on Windows, but my hard drive fills up to 100% and there is no option to limit the size of the cache in the app.

I did try the debug log you mentioned and there are no errors in it, so I presume it’s Samba, but I can’t see anything relevant in the Samba log either.

Is there a driving reason to use Samba? (Just offering an alternative.) You could set up rclone as a WebDAV server acting as a proxy to Google Drive, on a VPS somewhere or locally on your network, and then each Windows machine can connect to the WebDAV share. I do this on Linux and it works pretty well. Samba doesn’t really expect the underlying disk to be high-latency storage and can get pretty chatty, I believe.

No driving reason to use Samba; it was just what was already set up and working on the server in question. It had been working well pre-rclone for 3-4 years.

I’ve no knowledge/experience of using WebDAV or proxies; do you have any guides and/or tips on how to do this?

Really all you’d need is a server to set it up on: a central server located either on your local network or on a VPS somewhere. You can start rclone in serve mode like this:

rclone serve webdav remote: \
 --log-level INFO \
 --stats=0 \
 --server-read-timeout 60m \
 --server-write-timeout 60m \
 --checkers 20 \
 --transfers 20 \
 --vfs-cache-mode writes  \
 --vfs-read-chunk-size 50M \
 --vfs-read-chunk-size-limit 150M \
 --vfs-cache-max-age 0h0m30s

(You probably want to add a username/password to the above. See the rclone docs.)
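As a sketch, basic auth can be added with the --user and --pass flags (the credentials below are placeholders):

```shell
rclone serve webdav remote: \
  --addr :8080 \
  --user myuser \
  --pass mypassword \
  --vfs-cache-mode writes
```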

Then on each Windows machine, you just connect to the WebDAV server. Something like this:

The Windows machines will connect to the rclone serve webdav instance, and rclone will serve the G Suite content for you via HTTPS. (I’d suggest using HTTPS if this is on the internet, as you really don’t want plain-text passwords flying around with your personal files. You can specify certificates from free providers like letsencrypt.org. If it is on a safe local network, then regular HTTP is probably fine, depending on your needs.)
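For the HTTPS case, rclone serve accepts a certificate and key via --cert and --key; e.g. with Let’s Encrypt the command might look like this (paths and domain are placeholders):

```shell
rclone serve webdav remote: \
  --addr :8443 \
  --cert /etc/letsencrypt/live/example.com/fullchain.pem \
  --key /etc/letsencrypt/live/example.com/privkey.pem
```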

OK, so I have run the above command and all seemed well, but how do I find out what shares are running on the WebDAV server?

NOTICE: Cache remote gcache2:: WebDav Server started on http://127.0.0.1:8080/

How do I access that URL from a different machine? I tried changing 127.0.0.1 to the server’s actual address, but got permission denied.

You may need elevated privileges to start it. You should be able to bind it to the external IP of the machine with:

--addr X.X.X.X:PORT

or all IPs with

--addr :PORT

And there is only one share, which is the G Suite drive you’ve connected to. In Windows you’d simply specify the URL:

http://X.X.X.X:PORT/

OK, great, I’ve done that and it seems to be working. I haven’t copied any files to it yet, but I was hoping to use the mount with a drive letter. I guess that’s not possible with WebDAV?

I’m not sure in Windows. I believe you can map a drive to the WebDAV share; Google seems to imply that you can.
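As a sketch, mapping the WebDAV share to a drive letter from a Windows command prompt usually looks like this (address, port, and credentials are placeholders, and the Windows WebClient service must be running):

```shell
net use Z: http://X.X.X.X:8080/ mypassword /user:myuser /persistent:yes
```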

Might be useful

It’s a real shame I couldn’t get Samba and rclone to work as easily and as well, as I could assign a drive letter there.

Do you have any other suggestions?

I followed the first link and that is working as a drive letter; I just need to test it now for the issues I have been having.

Thanks for your help.

Sadly my issues have returned despite using WebDAV; transfers start well, then writes start failing.

I guess I am just going to have to give up on this and copy files to the physical drives on the machine instead, remembering to run a move/copy command to get them onto Google Drive.
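For that fallback, the move could be scripted and run from cron, something like the following (paths and the remote folder name are placeholders):

```shell
# move completed recordings up to Google Drive overnight
rclone move /mnt/recordings gdrive:Recordings \
  --bwlimit 8.5M --min-age 15m \
  -v --log-file=/var/log/rclone-move.log
```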

I was really hoping this was going to work well; I guess I need to spend some time trying to figure out Samba for improved performance.

With what errors? If you can share some logs, we can probably help out.

Share the rclone logs and check your Windows logs and Samba logs. Samba should work, as I’ve seen others do it. You need to narrow it down to performance or errors.

I believe it is down to performance, as the log files don’t contain errors (the rclone ones don’t, at least, only lots of DEBUG messages, and those just say that data is being written), so it must be Samba that is having the issue.

I used the following command for the mount.

rclone mount --allow-non-empty --allow-other --vfs-cache-mode writes --contimeout 60s --timeout 300s --retries 50 --low-level-retries 20 --drive-chunk-size=32M --buffer-size=32M --drive-upload-cutoff=64M --verbose --verbose --progress --stats 5s --umask 000 --log-file=/var/log/rclone.log gdcache2: /mnt/rclone/gdcache2

Here are the logs from the linux debian 9 server.
https://goo.gl/WY8y24

I was hoping to add them to a pastebin-like service, but the files were too large.

The logs seem pretty light in terms of errors. You have a few 403s, but those retry and are kind of normal at times. Are you using your own API key? That might help with those.
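With your own key (created in the Google API Console), the drive section of the config would just get the two client fields filled in; the values below are placeholders:

```ini
[gdrive]
type = drive
client_id = your-client-id.apps.googleusercontent.com
client_secret = your-client-secret
```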

It does look pretty good from an rclone perspective.

No, I am using the API key that rclone ships with instead.

Yes, I noticed the same: very few errors. Sadly the connection still drops during transfers, with the server being inaccessible for a few minutes via the NetBIOS name, and the same using the IP address or hostname.

The above is my main issue, as I am unable to leave transfers running unattended since they can fail at any time. When they do, I am back to square one: no new files are added, and although some seem to copy without errors, those files aren’t there when Samba does come back.