Can you upload to an rclone mount without caching?

In that case, I'll give it a try once we get that information.

Thank you both very much for spending so much time today to help me with this issue. I hope that the coronavirus spares you both. I have two quick questions about rclone and caching if that's okay.

(1) How much HDD usage will this flag cause me: --vfs-cache-mode writes. For example, if I were to upload 2TB of data via rclone mount, would that mean that (in total) my HDD (where the cache is saved) would have 2TB go through it?

(2) What local resources does transferring data from one cloud to another cloud use? Would it use my HDD at all, as in download the data from one cloud to my HDD and then upload it again to another cloud? Or would it perhaps only use CPU and RAM?

Can I just send you my HDDs by mail, so that you can upload it for me to the cloud? :slight_smile: This would probably be the fastest way...

so far you have only tested one cloud provider, gdrive.
perhaps get a free trial of wasabi and do some testing.
might learn something...

Yes, 2TB will be written (assuming you use a mount, because this flag only applies to mounts). How much of this data you retain (for faster re-access to those files at a later date) is entirely up to you. You can set a maximum age after which files get removed from the cache, a maximum total size, or both. The minimum size for the cache (temporarily) is the size of the largest file(s) you are currently transferring at any given moment (i.e. 4 or 5 files, probably).
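For example, a mount that caches writes but prunes the cache aggressively might look something like this (the limits are just placeholder values to adjust to your own disk):

rclone mount gdrive: z: --vfs-cache-mode writes --vfs-cache-max-age 24h --vfs-cache-max-size 50G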

An upload via CLI or web interface or basically any other option that isn't a mount will have no need for a write cache (because they don't try to support complicated read+write commands to begin with - not because they are "better"). A simple upload script is easy to make if you just want to "dump" files to cloud from a particular folder - see the sketch below. For more elegant (and complex) setups you can use the new multiwrite union remote... but I am getting way off topic here. Ask if you want more info on these things :stuck_out_tongue:
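As a minimal sketch (the folder path and remote name are placeholders), such a "dump" script can be a single command run manually or on a schedule:

rclone move C:\upload gdrive1:dump -v

rclone move deletes the local files once they have transferred successfully; use rclone copy instead if you want to keep the local copies.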

You can make do without write-cache if you only do simple copy/move uploads, even on a mount. Anything else (like the read+write access that a program might expect to be able to do) is not possible without write-cache. This is not so much a fault of rclone but rather an unfortunate consequence of the way cloud systems are designed. If you have a (spinning) HDD then I'd generally recommend using the write-cache anyway, because HDDs don't really get worn out in that sense - not to any significant degree, at least.

Cloud-to-cloud you only use the network, some (low) CPU, and a bit of RAM. No data needs to touch your HDD at all. In fact that is optimal in that scenario - and it is how it would work by default.

Between certain types of remotes (like Gdrive to Gdrive) you can even do transfers server-side, without even using your own bandwidth (I just synced 2 drives at about 4GB/sec... Google servers be scary).
Ask for more details if you want them. Trying to not ramble on too much here... :slight_smile:

Hell yes, I would like to know more about it, but honestly I'm a little embarrassed to torture you like this for hours and hours... You've been so generous with your time today (and @asdffdsa as well) that I'm also a little bit concerned that you'll want to charge me a pretty sum at the end of all this. :slight_smile: So, now you know the real reason why I didn't want to disclose my location details.

On a more serious note, though, both of the things that you mentioned sound extremely interesting and I would like to know more, but gdrive to gdrive transfers are something that I know I'll need very soon...

Will using rclone copy gdrive1:/ gdrive2:/ do the server-side only copy you mentioned or is there more to it? I guess the CLI needs to be operational during the transfer, not that it would be problematic at those speeds...

That will work, but if you want to server-side it rather than pipe the data through your local PC then you need to do one of the following:

  • either add this line to the rclone.conf file (under both Gdrives I'd recommend, as part of their blocks): server_side_across_configs = true (this is then on by default)
  • or use it as a temporary one-operation flag: --drive-server-side-across-configs

This will make rclone try to server-side whenever it is possible.
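As a concrete example using the remote names from this thread (assuming both remotes are already set up), a server-side copy would then be:

rclone copy gdrive1: gdrive2: --drive-server-side-across-configs -v

If the prerequisites below are met, Google copies the files internally instead of routing them down and back up through your own connection.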

There are also a few small prerequisites you need to meet for server-side - ones that you need to set up once.
The destination drive needs to have permission to read the source drive.

To explain how to do this, I need to know if you use a personal Gdrive (like the free one) or a Teamdrive/shared drive from a Gsuite (a common source for "unlimited" drives).
The methods differ a bit (and teamdrives are much easier to set up for this, but it can be done on both).

This is my current rclone.conf file with only one remote set up:

[gdrive1]
type = drive
client_id = XXXXXXXXXXXXXXXXXXXXXXXXXX
client_secret = XXXXXXXXXXXXXXXXXXXXXXXXXX
scope = drive
token = {"access_token":"XXXXXXXX","token_type":"Bearer","refresh_token":"XXXXXXXX","expiry":"2020-03-25T00:07:08.968815+01:00"}
root_folder_id = XXXXXXXXXXXXXX

<- *Do I add server_side_across_configs = true here?*

Will rclone ask me to renew the personal client IDs after they expire?

I'll be using Shared Drives for this.

No worries, I added some extra characters originally too, but I later decided to change it to all XXXXXX to make it look more elegant. :slightly_smiling_face:

@thestigma @asdffdsa Sorry guys, it seems that this forum won't allow me to bother you anymore today (You’ve reached the maximum number of replies a new user can create on their first day. Please wait 15 hours before trying again.). Thank you once again and I hope to speak to you later about this!

you need to delete that post, you MUST NOT share your private id and secrets.
you need to redact that info NOW!!!!

Well, a union (especially the vastly improved multiwrite union currently available in the latest beta) lets you do a lot of stuff like:

  • Combine many locations (local, clouds and other locations) into a single logical location.
  • Set up data to be handled exactly like you want. Mirror data to all places, for example - or "upload" all data to a local folder where it can be handled by a script (a personal favorite). Multiwrite union basically mirrors the functionality of mergerFS on Linux, if you are familiar with that at all. The sky's the limit now. This stuff does require a bit of understanding just to know what you really want it to do in the first place, so it might be overwhelming if you are a beginner (but hey - feel free to ask if needed).
    Documentation is here when you've covered the easier stuff and want to think about more complex setups later :slight_smile:
    https://tip.rclone.org/union/
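As a rough sketch (the remote name, local path, and wrapped folder are just examples - see the documentation above for the policy settings), a multiwrite union block in rclone.conf could look like:

[myunion]
type = union
upstreams = C:\buffer gdrive1:backup

Mounting or copying to myunion: would then treat both locations as one merged logical drive.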

Yep - directly after the last line (the root folder one).
It doesn't really matter exactly where you put it, as long as you don't place it in some other remote's block (which is demarcated by [remotename] if you have several).
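With the config you posted, the end of the block would then look like:

root_folder_id = XXXXXXXXXXXXXX
server_side_across_configs = true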

I note you don't have an encryption remote.
If you are security sensitive, you might want to consider this.
There are very few drawbacks to using encryption in rclone from a technical perspective.
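For reference, a crypt remote is just another block in rclone.conf that wraps an existing remote. A minimal sketch (the names and wrapped folder are placeholders, and the two password values are normally generated and obscured for you by rclone config):

[gcrypt]
type = crypt
remote = gdrive1:encrypted
password = XXXXXXXX
password2 = XXXXXXXX

You would then upload to gcrypt: instead of gdrive1:, and rclone encrypts/decrypts transparently on the fly.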

rclone will do so automatically. If you don't use the remote for several weeks it could expire, and you might need to re-authorize it via rclone config (edit the remote, keep all the settings, and just redo the re-auth step).

If a service account is set up (authorization baked into a single file, used often on server systems) it would never expire, if that is important to you. But it's not like re-authing is difficult to do if needed.
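A sketch of what that looks like in a drive remote's config block (the file path is a placeholder - the JSON key file is something you download from the Google developer console):

[gdrive1]
type = drive
scope = drive
service_account_file = C:\rclone\service-account.json

When service_account_file is set, rclone authorizes with that key file instead of the interactive token.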

Then whatever account (email address) you use in rclone must be a member of both teamdrives, and the user must at least have read permissions on the source and write/delete permissions on the destination. In teamdrives this is easy to set up (on google drive website), but ask if you run into any problems.

Happy to help :smiley:

It depends entirely on the command used to do the copy... If it doesn't do open, write, close but instead does something fancier like truncate or seek then --vfs-cache-mode writes will be needed.

But failing to truncate is not actually a critical error, correct? From my (very limited) understanding of this stuff, it basically means you may get some empty padding at the end of the file, right? I'm not sure if that results in checksums being wrong or not - and thus having more serious knock-on consequences.

Also, in my experience, truncate seems to be called in any copy to a mount, regardless of whether it is a very simple sequential transfer - even to a mounted local remote. In my testing of this, though, all files still CRC-checked perfectly afterwards, so I'm not really sure what rclone is complaining about in this case.

Whether or not to use encryption is currently my main quandary. The problem is, and please correct me if I'm wrong here, that encryption methods are not cross-compatible between different backup programs, so once I commit to encrypting TBs of data I cannot simply change software (e.g., rclone to Air Explorer or vice versa). This might be a particular problem if I don't manage to solve my slow upload (and, as it turns out, also download) speed for individual files via the rclone CLI.

Would it be possible to somehow decrypt filenames in Chrome, so that I could download via Chrome (and get max download speed)? Obviously, when I look into my gdrive in Chrome all the filenames are encrypted, so I don't know what's what. I can see the decrypted filenames in rclone CLI, rclone Browser or rclone mount, but as you know I get only about 20-25% of my upload/download...

The best I can figure at the moment is to move the files that I want to download into a separate folder with rclone Browser, download the files within that folder via Chrome, copy the files to a folder that I mount in rclone as an encrypted remote, and then decrypt to another local folder/remote.

I used rclone mount gdrive: z: --vfs-cache-mode off and then drag & drop. This results in the following error messages in the rclone CLI:

2020/03/25 16:22:08 ERROR : Best settings for GD mount read speed- - question - rclone forum.url: WriteFileHandle: Truncate: Can't change size without --vfs-cache-mode >= writes
2020/03/25 16:22:08 ERROR : Best settings for GD mount read speed- - question - rclone forum.url: WriteFileHandle: ReadAt: Can't read and write to file without --vfs-cache-mode >= minimal
2020/03/25 16:22:13 ERROR : Best settings for GD mount read speed- - question - rclone forum.url: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes

Also, when I try to copy a larger file, like 2GB, with rclone mount gdrive: z: --vfs-cache-mode writes, I only get to see the Windows copy window for as long as the file is being copied to the cache. After that it disappears, and when I click anything in Windows Explorer, the window freezes and I have to restart explorer.exe. Using rclone mount gdrive: z: --vfs-cache-mode off doesn't have that problem, although the transfer is slow (but that's probably related to the other issue I mentioned).

Those errors mean:

  • the program attempted to run truncate() on an existing file
  • the program opened the file for both READ and WRITE
  • the program opened the file with O_TRUNC (to truncate the file) on an existing file

Rclone has workarounds for some of these things - if you truncate a file to 0 size or to its current size then it shouldn't trigger the ERROR. I can't tell exactly what was happening without a DEBUG log. It might be that those workarounds aren't working on Windows for some reason.

The file will likely take a while to upload? Does it go back to normal after the upload has finished?
