Help! (Noob) I want to mount OneDrive as a local drive

Hi guys, what I want is to mount my OneDrive Business account as a local drive, something like RaiDrive, but with a read/write cache, so I can write files normally while rclone synchronizes in the background, as my internet is 30/6 Mbit.

OneDrive Files On-Demand is good, but the problem is that you need to download the entire file to use it, which is not so convenient.

I was searching for something that does this for free and found exactly none (other than rclone), although I've found some paid ones that do exactly or almost what I want, like ExpanDrive or Stablebit CloudDrive.

It would be nice to have some help configuring this properly, as rclone seems a little complicated.

Also, I use Windows 10.

You can do what you say you want in rclone.

We can help you with the setup, but you should first try to follow the documented instructions - then, if you run into problems or find something confusing, come back and ask pointed questions.

Here is where you should start:

Install rclone:
https://rclone.org/downloads/
(extract and run it anywhere you want, there is no "installation" for this, just an exe file)

Configure a OneDrive remote:
https://rclone.org/onedrive/

Mount your remote to work as a "normal drive":
https://rclone.org/commands/rclone_mount/
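
As a rough example of that last step on Windows (a sketch only - "MyOnedriveRemote" is just whatever you named the remote during configuration, and O: is any free drive letter; note the mount docs above also explain that Windows needs WinFsp installed for this):

rclone mount MyOnedriveRemote: O:

Leave that window open while you use the drive; Ctrl+C unmounts it.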

Once this works at a basic level we can talk more about setting up caching, optimizing, or other things - depending on your specific needs (describing your most common use-case is a good idea).


I have a dying 750GB drive, so I want to back up my files to OneDrive and use it as a secondary local disk. I'll install some software, watch movies, play some games... like a normal drive, but my internet connection alone is not enough, so I'll need some caching to speed things up.

I've already tried the trials of those paid programs and they worked great.

Yep! It worked, now I can use it as a local drive (although it is slow). What do I do now?

Well, let's get that dying 750GB disk safe first...

Go into CMD and type:
cd X:\RclonePath (C:\rclone for myself)
then type:
rclone sync X: MyOnedriveRemote:/NewFolder -P

That will copy everything from your failing drive to the cloud. It's going to take a while on a 6 Mbit upload, but you can stop that command at any time and start it again - rclone will smartly continue where it left off.

Before you ask - yes, you could do this via the mount too, but this way may be a little faster, and I also wanted to show you the command in case you want to use it in a recurring script.

When that is done you can pretty much keep using the failing drive and just regularly run the same command to keep the data in the cloud updated (it can be fully automated too, obviously, if you want - via a script and Task Scheduler). Doing this while still having a physical mirror locally is always going to be the fastest way.
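
For example, a minimal batch script you could point Task Scheduler at might look like this (a sketch only - the paths and remote name are the placeholders from above, and sync.log is just a suggested log file):

@echo off
REM Keep the cloud copy of the failing drive up to date
C:\rclone\rclone.exe sync X: MyOnedriveRemote:/NewFolder --log-file C:\rclone\sync.log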

But if you want to look at more general caching, then the only rclone system that can currently do read-caching is the cache backend:
https://rclone.org/cache/

Later on, the VFS cache (the caching layer inside the mount itself) will likely get this feature too and may make the older cache backend redundant, but for now that's how you can do it. Read up and ask for clarifications as needed.

EDIT: Forgot you were using Onedrive and not Gdrive - fixed post to reflect that.


Hello again, I have migrated to Linux (Solus). I've created a cache remote and mounted it on /var/tmp/CacheRclone, but it doesn't seem to work: I can only use the files once they have finished uploading. What am I missing here?

Not sure I fully understand the question you ask...
Could you elaborate?

It would also help me assist you if you shared your rclone.conf and the mount command you use.

Please note that you should redact any sensitive information in the config before sharing. Most importantly client secret and any crypt-keys if you use crypt.


[OneDrive5TB1]
type = onedrive
token = {"access_token":"xxx","expiry":"xxx"}
drive_id = xxx
drive_type = business

[Cache]
type = cache
remote = OneDrive5TB1:
info_age = 10y
chunk_total_size = 32G

Ok, the configuration looks fine.
Could you share the mount command as well?

Your last post there was not needed, by the way. Your configuration file summarizes all of that, so you can probably just remove that post since it's so long.


this? "rclone mount Cache: /var/tmp/CacheRclone"

Yes.
And that command looks ok too.
The only thing is that /var/tmp/ seems like a bit of a weird location to mount...
I'm not a huge Linux geek, but isn't /media or /mnt usually the right place for this sort of thing?
I don't know if it really matters on a technical level - but I thought I'd comment on it at least.

And what exactly is your problem? You cannot play files that you are currently uploading?
That seems like normal behaviour, as they won't show up on the cloud drive until they have been uploaded. If you enabled --vfs-cache-mode writes in your mount command you would get that ability though, as new files show up as soon as they enter the local cache.

Maybe the confusion is that you expected the cache backend to do write-caching? It does not do that, unfortunately (even though you'd expect it to, given the --cache-writes flag). My recommendation is to use the VFS cache (which is part of the mount) for that, and if you want the data to stick around for a while you might want to set longer expiry timers, because the default is fairly aggressive and will evict old data within an hour or so.
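
For example, something along these lines (a sketch using the remote and mount point from your setup; the timer and size values are just suggestions):

rclone mount Cache: /var/tmp/CacheRclone --vfs-cache-mode writes --vfs-cache-max-age 720h --vfs-cache-max-size 32G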


i tried "rclone mount OneDrive5TB1: /home/user/OneDrive5TB1 --vfs-cache-mode full --vfs-cache-max-age 1000000h --vfs-cache-max-size 32768m"

and it seems to work more like a buffer: the files are written locally first, but I can do nothing with them until they have fully uploaded, while the read-cache seems to work the way I expected.

Maybe there is some kind of alternative? It doesn't need to mount a drive; maybe something like OneDrive Files On-Demand, but with streaming and a read-cache, where it synchronizes a folder and shows placeholders for the files that are in the cloud.

What I really want:

1: Being able to just paste a file and not have to wait 999 hours to open it.

2: Being able to open a file without waiting for it to fully download.

3: Freeing up space by removing the least-used files locally once the folder reaches a certain size, like 32GB.

4: Being able to see the files available remotely, like the OneDrive placeholders.

With the current approach 2, 3 and 4 are fine, but 1 isn't.

Don't use cache mode full. Full will force whole files to be downloaded before you can open them.
This is not recommended for most use-cases.
Use --vfs-cache-mode writes instead if you want write-caching.

You can use both at the same time, though, if you want.
rclone mount Cache: /home/user/OneDrive5TB1 --vfs-cache-mode writes
would mean you have both write-caching (from the VFS) and read-caching (from the cache backend).
Which combination you want is up to you. They do not interfere with each other.

  1. For that you probably want to use --vfs-cache-mode writes. Then it will look to you like files get copied to OneDrive really fast (but really they upload in the background). You can use them immediately, even while they are still uploading.

  2. As long as you don't use vfs cache mode FULL, this is not a problem :slight_smile: rclone fully supports streaming. No problem to open a 100GB 4K movie in a few seconds if you want... (no caches needed to do this)

  3. This is no problem. Just set the max-size for the cache (or caches if you use both types).
    --cache-chunk-total-size 32G for the cache-backend (read-cache).
    --vfs-cache-max-size 32G for the VFS cache (write cache, using "writes" mode)

  4. This will happen no matter what. If you use one or both cache systems, they are "transparent", so they make it look like the files are on OneDrive even when they are also in the cache.

To me it sounds like you want to use --vfs-cache-mode writes and --vfs-cache-max-size 32G.
It does not sound like you necessarily need the cache backend, but if your internet speed is slow then having that too can help. That is up to you.
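
Putting that together, a mount along these lines should behave the way you describe (a sketch built from the names in your config; adjust the size to your free disk space):

rclone mount Cache: /home/user/OneDrive5TB1 --vfs-cache-mode writes --vfs-cache-max-size 32G

(Mounting Cache: rather than OneDrive5TB1: is what gives you the read-cache on top; its size is already capped by the chunk_total_size = 32G in your config.)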

I hope that clarified things. It looks like your main problem was just using cache mode full. Let me know if you have more questions :slight_smile:

PS: If you want to control where rclone saves the cache files, there are options for that too - see the documentation. Otherwise they get saved to the default location (which is probably on the system drive - not always ideal if you have a small system SSD).
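
For instance (a sketch - /mnt/bigdisk/rclone-cache is just a placeholder path):

rclone mount Cache: /home/user/OneDrive5TB1 --vfs-cache-mode writes --cache-dir /mnt/bigdisk/rclone-cache

--cache-dir moves the VFS cache files; the cache backend also has its own --cache-db-path and --cache-chunk-path flags if you want to move those as well.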


I've used "rclone mount Cache: /home/user/OneDrive5TB1 --vfs-cache-mode writes"

Still the same issue: if I copy, for example, a 100MB file, it stops at 99MB and then I need to wait for the upload. What am I missing???

Maybe it's something related to Nautilus, or some bug in the OS that blocks the file while rclone is using it or something?

Or maybe rclone for some reason can't read and write at the same time.

Edit: I tested, and I can't open ANY file while rclone is syncing.

Check if you are using an old version - because I thought we got rid of this...
run this command
rclone version

(Many Linux repositories carry badly out-of-date rclone versions... I highly recommend installing with the install script from the main site instead.)
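
That is, something like this (check rclone.org for the current install instructions, but the script lives at this URL):

curl https://rclone.org/install.sh | sudo bash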


rclone version
rclone v1.49.2

  • os/arch: linux/amd64
  • go version: go1.13

There have been a few updates since then - and it never hurts to stay fully updated (all configs will keep working). That said, it's not that old, so I did not expect to see this happen for you, hmm...

@ncw Could you confirm for me that this should be a non-issue now? It's certainly not something I can replicate.

While I would like to get to the bottom of this, I can think of one alternative method you can use in the meantime (hopefully just temporarily, because IMO it is not the best way).

rclone mount Cache: /home/user/OneDrive5TB1 --cache-tmp-upload-path /home/somefolder
The temp upload path does something similar to cache mode writes (but in the cache backend). The main difference is that it does not keep files around after they have finished uploading.

This method will make the OS think the transfer is complete as soon as the file lands in the temp-upload folder (i.e. as fast as your HDD can copy) - try it.

The main reason I would not ideally recommend it is that the cache backend still has some small bugs here and there, and it's not being updated anymore the way the VFS is.


I reinstalled rclone and the same thing was happening, but the workaround solved the problem. :grinning:

I'm using a Pentium E5200 with 2GB of RAM (Solus 4.0 Fortitude).

Now what do I need to do to automatically mount OneDrive on startup?

The very basic way:
Put your mount command into a script and run that script via cron on startup. I won't elaborate on how to use cron because there are a gazillion guides on Google and I'm not a Linux expert anyway. Making the script literally just means opening a text editor, putting #!/bin/bash at the top and your mount command on the second line - then all you need is for cron to trigger it on startup.
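
A minimal sketch of such a script (using the mount command from earlier in the thread; the script path is just a placeholder):

#!/bin/bash
# Mount the cache-wrapped OneDrive remote with write caching, detached in the background
rclone mount Cache: /home/user/OneDrive5TB1 --vfs-cache-mode writes --vfs-cache-max-size 32G --daemon

Then in your crontab (crontab -e) add:

@reboot /home/user/rclone-mount.sh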

The proper way:
Set the mount up as a systemd service. This has some added benefits. You can pretty much go and steal Animosity's template for this, as he shares all his Linux scripts - then just make some minor edits to the flags to meet your needs.
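
For reference, a stripped-down unit along those lines might look something like this (a sketch only, not Animosity's actual template - the paths, remote name and user are the ones from this thread):

[Unit]
Description=rclone mount of OneDrive (via the cache remote)
After=network-online.target

[Service]
Type=simple
User=user
# Mount with write caching; fusermount -u cleanly unmounts on stop
ExecStart=/usr/bin/rclone mount Cache: /home/user/OneDrive5TB1 --vfs-cache-mode writes --vfs-cache-max-size 32G
ExecStop=/bin/fusermount -u /home/user/OneDrive5TB1
Restart=on-failure

[Install]
WantedBy=default.target

Save it as something like /etc/systemd/system/rclone-onedrive.service (placeholder name), then enable it with systemctl enable --now rclone-onedrive.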

Hopefully that gets you on the right track :slight_smile:

If you are referring to Windows FUSE locking up while transfers complete, then I don't think this is fixed yet.

I made 1/2 a fix but decided the proper fix is this: https://github.com/rclone/rclone/issues/3186