Gcrypt Mount hangs while opening folder

Waiting for the new Windows 10 cmd which supports emojis :rofl::rofl:

Stig, I cannot believe how fast this is compared to "dir /s". Priming my entire 550TB mount took a little over two minutes compared to at least an hour. Thanks so much for this tip :heart_eyes:


@thestigma This system is so much better. I am getting full download and upload speeds just by copy-pasting files to and from the cloud drive. Earlier I used to get very bad speeds, so I always had to use the copy command through the rclone cmd to do transfers at full speed! Also, newly uploaded files now show up without any hassle; earlier I used to refresh the cache by renaming folders.

@thestigma One thing I wanted to ask: is there any way to properly unmount the drive, or should I simply close the cmd window? Earlier I used to press Ctrl+C to terminate the command, but that does not seem to work with this script (although I do not know if even that is the proper way).

try
    start rclone
instead of
    start /b rclone

If you use start /b on anything, you are asking it to run in the background.
In that case I think you can no longer reach it with CTRL+C, as that only applies to foreground tasks.
If you wanted CTRL+C, you'd have to re-jigger the script to make the mount run in the foreground in some window.

For example, in my own setup I have a "bootup" window that starts first, launches a new window for the mount (start, no /b), then runs the warmup for it and closes. The mount window remains CTRL+C-accessible, since the mount is a foreground task in that window.
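In case it's useful, here is roughly what that looks like as a .cmd - just a sketch, where the remote name gcrypt:, the drive letter X: and the warmup command are example values, so adjust them to your own setup:

    @echo off
    rem bootup.cmd (example name): launch the mount in its own foreground window...
    start "rclone mount" rclone mount gcrypt: X: --rc
    rem give the mount a few seconds to come up
    timeout /t 5 > nul
    rem ...then prime the directory cache (the fast "priming" mentioned earlier) and close
    rclone rc vfs/refresh recursive=true
    exit

The mount sits in the foreground of its own new window, so CTRL+C works there, while this bootup window closes itself once the warmup is done. Note the mount needs --rc for the vfs/refresh call to reach it.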

But generally it is perfectly fine to just close the window.
We recently had a long back and forth about hard-kill vs soft-kill to close rclone and in short:
(1) Hard-killing seems to be the intended method so far and very rarely causes issues, although soft-kill support was seemingly added to WinFSP recently, I think asdffdsa told me.
(2) Methods to induce a soft-kill (CTRL+C) can be a little bit convoluted. It is possible to do by all means, but it becomes non-trivial once you want it to happen programmatically or using no windows (such as in a system service setup) because obviously you no longer have manual access - and thus you either have to simulate a CTRL+C or else induce a "natural end of script" to happen on some signal.

I can advise you on how to make that happen if you want - but chances are you don't need it. If you don't notice wrong behaviour from simply closing the window, you can assume it's fine. (The worst thing that could really happen is that a mount-point might get temporarily stuck or something - no risk of data loss or anything critical.)

All right then, seems there's no point messing with it; I will just keep closing the window directly.

Lastly, now that your basic functionality is coming online you may want to look into optimization.
I'd be happy to help in that, and I know Gdrive well from my own use.

For example, increasing your chunk size (for Gdrive in your config) to 64M would net you a very substantial increase in upload speed on larger files - getting you closer to maxing out your upstream - no matter if you direct-copy to the mount or use a script for it.

Just be aware the tradeoff is that each active transfer can use that much memory, so for example 4x64M = 256M. Technically more is better, but each doubling brings diminishing returns and only helps for files at least that large. 64M is a good balance. 128M is ideal but a high price. 256M is very rarely worth it - maybe a couple of percent faster in some cases - but if you have more RAM than you know what to do with, then by all means go ahead...
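For reference, it is just one line in your rclone.conf under the Gdrive remote (assuming it is called gdrive - keep whatever other lines are already in that section), or the equivalent flag on the command line (the paths here are placeholders):

    [gdrive]
    type = drive
    chunk_size = 64M

    rclone copy D:\uploads gcrypt:uploads --drive-chunk-size 64M

The flag version also works when you go through the crypt remote, since it applies to the underlying Gdrive backend.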

Other neat topics include: getting to understand what the write-cache (VFS) can do for you, knowing about server-side transfers, knowing what --track-renames does if you do large sync jobs, and perhaps even setting a bandwidth limit to prevent your regular use from being choked out by rclone when it's working heavily. All of these can have very significant impacts. Feel free to pick my brain if you are interested :slight_smile:
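For the write-cache part specifically, it is just an extra flag on the mount - a sketch, again assuming a gcrypt: remote mounted on X::

    rclone mount gcrypt: X: --vfs-cache-mode writes

With that, writes get spooled to a local cache first and uploaded from there, which makes the mount behave much better with programs that write or modify files in place.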

I will give the 64M and 128M chunk size values a shot and see how they affect my RAM usage. I am already using server-side transfers to back up my whole drive to another drive in case my account gets flagged or something like that. And as far as I know, track-renames does not work on encrypted drives. I am interested in the bandwidth limit thing though. Can I limit it dynamically, meaning that when I am trying to stream something it streams at full speed, but at other times it is limited when maybe a download or something is going on in another app?

Also will chunk size make a difference in downloading / streaming speeds?

Unfortunately dynamic limiting is not possible currently, and I rather doubt it will be anytime soon.
At that point you are really talking about QoS (Quality of Service), and that's a pretty complex problem to solve because there are so many traffic streams to coordinate. That's something you would have to handle in an (advanced/powerful) router, basically.

That said - if you enable --rc (the remote control interface) then you can change your --bwlimit on the fly without having to interrupt rclone. That can be useful.
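Roughly like this - the rates and the remote/drive letter are just example values:

    rclone mount gcrypt: X: --rc --bwlimit 8M
    rem then, from another cmd window, adjust the limit on the running instance:
    rclone rc core/bwlimit rate=4M
    rem or remove the limit entirely:
    rclone rc core/bwlimit rate=off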

I'd say it's generally good to set it to maybe 80-90% of your bandwidth, just to make sure that simple stuff like browsing doesn't feel sluggish and gaming doesn't get ping spikes. If you start a YouTube video or something, it will demand equal priority to any rclone transfer, so it's going to get about 1/5 of the bandwidth (if using 4 transfers), which is probably enough on a good connection. That's just down to how TCP works without any special QoS layer.
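As a concrete (made-up) example: on a 100 Mbit/s uplink, which is roughly 12.5 MByte/s, something like

    --bwlimit 10M

lands you at around 80%. --bwlimit can also take a schedule if you would rather only throttle during certain hours (something along the lines of --bwlimit "07:00,5M 23:00,off"), but check the docs for the exact syntax.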

As for track-renames, the reason it "doesn't work" on encrypted files is that hashes on encrypted and unencrypted files won't match, and it needs those to do its job.
But... if you pre-encrypt your data before upload then it will work fine (as the hashes then match), and that is very much doable if you use a script for the data you need to sync.
If you want that then you probably have to explain some more details of your specific use-case and I can suggest some ways to do it. I do this myself, and depending on your situation --track-renames can save you a buttload of re-uploading.
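To give you a rough idea of what I mean already (all remote names and paths below are placeholders - the key points are that the local crypt remote uses the exact same password/salt as your gcrypt remote, and that the final sync targets the same Gdrive folder gcrypt points at):

    [localcrypt]
    type = crypt
    remote = D:\staging-enc
    password = <same obscured password as gcrypt>
    password2 = <same obscured salt as gcrypt>

    rclone sync D:\media localcrypt:
    rclone sync D:\staging-enc gdrive:encrypted --track-renames

The first sync encrypts your data into the local staging folder; the second uploads the already-encrypted files, and since local disk and Gdrive both provide matching MD5s for those files, --track-renames can detect moves and renames instead of re-uploading them.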

Thing is that once I upload things to the drive I delete them from my local storage, so I don't use sync commands that much; I mostly just use the copy command. The only place where I use sync is for the server-side copy from one drive to another, where I copy the gdrive and not the gcrypt.

Regarding the bandwidth limit: I typically upload at night, so using it alongside other services like YouTube is not that crucial for me. I have not felt any sluggishness in general browsing, and I do not game that much anymore these days.

--track-renames is ideal for this, yes - under the assumption that your locations share the same crypt keys, that is. Then simply reorganizing the file structure on your source won't necessitate a re-upload of all that data.

Yeah, the crypt keys are the same for both, so --track-renames works fine for me in that case.

@thestigma Hi,
I am trying to make the script run automatically in the background without any cmd window or anything like that. I think you mentioned you have a setup for that - can you help me do this?

Ideally, the best-case scenario would be that the script loads up minimized in the notification tray, and if I click on the icon there the cmd window opens, and on minimizing it goes back to the notification tray... Something like that.

you can create a task in task scheduler and check the 'hidden' box
and you can also run rclone as the SYSTEM user, which is helpful for rclone mount
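something like this, with the path to your mount script adjusted:

    schtasks /create /tn "rclone mount" /tr "C:\scripts\mount.cmd" /sc onstart /ru SYSTEM

running as SYSTEM means no window pops up at all; the 'hidden' checkbox itself is set in the task's properties in the gui.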

But then what will be the process for closing/stopping the script?
Endtask using task manager?

  1. task manager
  2. taskkill.exe /im rclone.exe

All right thanks!

another option is to create a .cmd file, then create a shortcut to the .cmd file and for 'run', choose 'minimized'; then the .cmd file will show as an icon in the taskbar.
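the .cmd itself can be as simple as this (remote, drive letter and flags are just examples - use whatever you settled on above):

    @echo off
    rclone mount gcrypt: X: --vfs-cache-mode writes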
