Mounting Google Drive - Potential Pitfalls

So far, rclone mount ... works well for mounting gdrive and uploading new files.

Let's say I've mounted the remote gdrive to /mnt/gdrive. Then, let's say I have a 2 GB file on my local drive and I want to store it in gdrive. On my Linux machine, I simply issue the move command: mv /home/user/2gbfile /mnt/gdrive. All is well, and it automatically begins uploading. So far so good.
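
For reference, the mount and the move look roughly like this on my end (the remote name gdrive: and the paths are just placeholders for my setup):

    rclone mount gdrive: /mnt/gdrive --daemon
    mv /home/user/2gbfile /mnt/gdrive/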

Here's where I get a little nervous. If my internet goes out or my machine powers down, what happens to the 2gbfile? To the Linux host, the file lives in /mnt/gdrive/, but does the file actually live there if the upload was interrupted by a reboot or an internet issue?

Thanks in advance! Any suggestions are welcome.

As long as you aren't involving any caching system, this is 100% safe. A move operation will basically do this:

  • Copy the file to remote (yes copy, not move)
  • Verify that it has arrived safely (you can use --checksum too if you are very paranoid, although this is really not needed for most use-cases, as the transport layer also has error detection, so in-flight corruption is very rare)
  • Finally, delete the local file

So if your power goes out before the file arrives, then the half-finished cloud file will not be saved and your local file will still exist. It will be as if the command was never performed at all. No problem :slight_smile:
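
If you want that same behaviour without going through the mount at all, a plain rclone move does exactly those steps for you. Something like this (remote name and paths are just examples, and -P only adds a progress display):

    rclone move /home/user/2gbfile gdrive:backup --checksum -P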

I was just reading about vfs file caching here: https://rclone.org/commands/rclone_mount/#file-caching. Is this what you're talking about?

Thank you so much for the explanation!

Yes. If you use a write cache on a mount (which generally helps a lot with compatibility for some OS operations and is usually a good idea), then the file gets moved to the write cache first before it gets sent to the cloud. If it makes it into the cache but the power goes out before it can arrive at the cloud, then I think there is a certain risk that the file can get stuck in the cache (and eventually be cleaned up as junk if left there, leading to the potential for data loss). It will no longer be in its original place then, and currently I don't think the VFS can remember what it was supposed to move from the cache to the cloud after an unexpected shutdown, because I don't think it has any persistence yet.
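
To be concrete, the situation I mean is a mount started with the write cache enabled, i.e. something along these lines (the flags are standard rclone mount flags, the paths are placeholders):

    rclone mount gdrive: /mnt/gdrive --vfs-cache-mode writes --daemon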

Of course, if this happens you can just go into your cache folder and retrieve the files that didn't make it over yet before the cache cleans up old junk, and nothing will be lost - but this system could do with being a little more robust. Note that I am not 100% certain that there isn't already a failsafe in place here, so take this with a grain of salt; I may be overstating the problem. I still need to research this a bit and speak to Nick about it. Assuming it's a real issue, a simple persistent log file of "not yet uploaded" files should be all that is needed to solve it, and if so I will make an issue for it to be improved in future :slight_smile:
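
If you ever need to do that manual recovery: unless you changed it with --cache-dir, the VFS cache sits under rclone's default cache directory, so on a typical Linux setup the stranded files should be somewhere like this ("gdrive" being whatever your remote is called):

    ls ~/.cache/rclone/vfs/gdrive/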

I'd love to get some more feedback from you on this! This is already so insightful.

Keeping a simple db of in/out from the cache would be super helpful.

What @thestigma said is true - you'll find the file in the VFS cache. There is an issue to make this a bit more robust: https://github.com/rclone/rclone/issues/3186 - note the "persisting the queue" part of it.

The most reliable way of getting stuff into the cloud is to use rclone move. That will definitely do the right thing in the face of internet errors etc. The problem with using the mount is that the interface from the kernel -> rclone via FUSE is not quite what you want for reliable cloud transfers; it was designed for local disk transfers.

Or alternatively use --vfs-cache-mode off and all the uploads will happen synchronously.
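
In other words, one of these two patterns (remote name and paths are just illustrative):

    rclone move /downloads gdrive:media -P                  # reliable, retries on errors
    rclone mount gdrive: /mnt/gdrive --vfs-cache-mode off   # writes go straight to the cloud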

Very nice to have confirmation on this - thanks!

But as I said, this problem seems like it would be fairly easy to solve, assuming you aren't completely against having a persistent component in the mount. Do you agree with that, and should I turn this into a GitHub issue for you?

I think it is covered by the issue I posted - as part of the writeback I'll want to implement some kind of DB of files that haven't been uploaded yet. I'm hoping to find time to do that for 1.50.

I was originally going this route, but then I noticed it would break my workflow. So maybe you can help with a workaround.

I use Radarr to manage the download process of files. When a download completes, Radarr handles the renaming and moving of files to their final destination. I'm sure you're familiar with Radarr, but just in case: Radarr keeps monitoring that file in its final destination. If the file goes missing, or if a better version than the previous download becomes available, it will grab a new copy.

So let's say I have
/downloads/
/watch/
gdrive:media/

The goal is to get a downloaded file to end up in gdrive:media/ which I want mounted to /mnt/gdrive. So here's the flow:

When a download completes, the file lives in /downloads. In order to get it into gdrive, I set up /watch to be the source folder for the rclone move command. Now all I need to do is set Radarr to handle the file and put it in /watch so it'll get uploaded (via a script that runs periodically). So far so good.
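
For what it's worth, the periodic script is nothing fancy - basically just a cron entry like this (the schedule and the --min-age safety margin are my own choices; --min-age just skips files that are still being written):

    # crontab: upload anything sitting in /watch every 15 minutes
    */15 * * * * rclone move /watch gdrive:media --min-age 5m -v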

Here's the tricky part: when rclone move finishes the move operation (and subsequently deletes the source file), Radarr no longer sees the file in its final destination folder (/watch), and so its logic is to download the file again.

Now I'm stuck in this loop. I really want to use the move command, because I absolutely agree that it's the best option, but I don't know how to solve this "destination loop" problem.

mergerfs is your friend for this.

I copy everything locally and upload at night. For Sonarr/Radarr/Plex/Emby, they are none the wiser as the paths never change.
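
The rough shape of it, as a sketch (the mergerfs options are the ones I recall using - double-check them against the mergerfs docs before copying):

    rclone mount gdrive: /mnt/gdrive --daemon
    mergerfs /local/media:/mnt/gdrive /mnt/media -o use_ino,category.create=ff,dropcacheonclose=true
    # Sonarr/Radarr/Plex all point at /mnt/media
    # nightly cron job: rclone move /local/media gdrive:media -v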

I have my setup documented here:

I was thinking this was going to be the answer; I just couldn't wrap my mind around it all! I'll take a look at your docs and report back.

Sorry, must have overlooked your link. Yes, I agree - and I seem to have already commented on it.
Something to look forward to then :slight_smile:

Your setup is very nice. It's going to take me a while to translate what you've done and execute it, but I freaking love how you've taken a flow and made it work in such a powerful way.
