Rclone to Google Drive with Cryptomator


For a long time, I have wanted to have access to all my encrypted files in Google Drive on iOS. Currently I upload them with crypt, which works great, but since rclone is unavailable on iOS, I can't access my files that way (I do have a secondary Android phone, but since I prefer iOS, I want them accessible on that platform).

So after googling for a long time, I came across Cryptomator. It allows me to create encrypted "vaults" locally and mount them as drives. They also have an iOS app.

So I thought of mounting Google Drive with rclone, creating a Cryptomator vault in the mounted drive, and then using rclone to copy the files from my hard drive into the Cryptomator vault.
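The intended pipeline, sketched as shell commands (the remote name, mount point, and vault path here are illustrative placeholders, not from an actual setup):

    # 1. Mount Google Drive locally via rclone
    rclone mount gdrive: ~/gdrive-mount --vfs-cache-mode full &

    # 2. Create a Cryptomator vault inside ~/gdrive-mount with the Cryptomator app,
    #    then unlock it so it appears as a local volume (e.g. /Volumes/MyVault)

    # 3. Copy local files into the unlocked vault with rclone
    rclone copy ~/my-files /Volumes/MyVault --progress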

Unfortunately, this is not working well because of how unreliable Google Drive is (bear with me: I know I could switch to a different service like B2 or even Wasabi, but the monthly cost for 8 TB is way out of budget right now, not to mention the Cryptomator iOS app only supports Dropbox, Drive, and OneDrive).

The biggest problem seems to be that the mount disconnects under "heavy" load. Rclone starts printing errors along the lines of "Device not configured".

I have created a cache remote to wrap my standard, non-encrypted Drive remote, and I mount it like this:

    rclone mount aibanezkautschme_Cache: gdrive2 --vfs-cache-mode full
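For reference, the cache remote wrapping the Drive remote looks roughly like this in rclone.conf (a sketch; the OAuth token and any tuning options are omitted):

    [aibanezkautschme]
    type = drive
    scope = drive

    [aibanezkautschme_Cache]
    type = cache
    remote = aibanezkautschme: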

So I have two questions. One: has anyone here had any success using rclone and Cryptomator this way, i.e. using rclone to mount Drive and then creating the Cryptomator vault inside the Drive mount? And two: are there any other parameters I could try to improve the reliability of the rclone mount when using Drive? I'm aware Drive has been having issues since yesterday, but I'm not in an affected zone (none of my normal rclone sync logs show an excess of 4xx errors).


I know there are other people using different crypto wrappers like this. It should work...

:frowning: Does it disconnect when uploading big files? That is a known problem on macOS, which has a flag to work around it by increasing the timeout:

  --daemon-timeout duration                Time limit for rclone to respond to kernel (not supported by all OSes).
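For example, a longer timeout can be added to the existing mount command like so (10m is just an illustrative value; pick whatever suits your workload):

    rclone mount aibanezkautschme_Cache: gdrive2 \
        --vfs-cache-mode full \
        --daemon-timeout 10m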

Thanks, I’ll try using that flag. Though it’s worth mentioning that it does this with any file; I’m trying to copy 8 TB of data in many different file sizes.

This may also be worth mentioning: I used the app Gemini to find duplicate files in a B2 remote with no problem, but when trying to do the same with a Drive remote, it also disconnects mid-analysis. This is the kind of thing that makes me suspect it’s a specific bug with the Google Drive remote, but we’ll know better once I test that flag.

EDIT: OK, running with rclone mount aibanezkautschme: gdrive2 --daemon-timeout 100000m --vfs-cache-mode full, let's see how it works.

EDIT 2: Didn't last long:

2019/05/16 11:35:39 ERROR : Artbooks/Series and Games/H/Hanasaku Iroha/Hanasaku Iroha Official Visual Book/045.jpg: Failed to copy: mkdir /volumes/qneqIzL3QFqp_0/Artbooks: device not configured


:frowning: Can you post the log from the rclone mount with -vv when that happens please?


I run all my commands with log generation using --log-level=DEBUG. Does this work?


This is a small sample, but it happens very shortly after running the command. Let me know if you need a longer log, but everything below it is Failed to copy: mkdir /volumes/qneqIzL3QFqp_0/Artbooks: device not configured for a bunch of files.

EDIT: That's the copy command. Give me a second to get a mount log.


OK so, there doesn't seem to be a specific error in the mount log.


It looks like the mount reports "device not configured" when it is indeed sending a large chunk.

Command used:

    rclone mount aibanezkautschme: gdrive2 --daemon-timeout 100000m --vfs-cache-mode full -vv

What's interesting is that the mount is still reporting Sending chunk, but the rclone copy script stopped working long before due to device not configured. In fact, these are the last few lines of the copy script's log:

2019/05/16 12:24:58 DEBUG : Videos: Excluded
2019/05/16 12:24:58 ERROR : : error reading destination directory: failed to open directory "": open /volumes/qneqIzL3QFqp_0: device not configured
2019/05/16 12:24:58 INFO  : Local file system at /volumes/qneqIzL3QFqp_0: Waiting for checks to finish
2019/05/16 12:24:58 INFO  : Local file system at /volumes/qneqIzL3QFqp_0: Waiting for transfers to finish
2019/05/16 12:24:58 ERROR : Local file system at /volumes/qneqIzL3QFqp_0: not deleting files as there were IO errors
2019/05/16 12:24:58 ERROR : Local file system at /volumes/qneqIzL3QFqp_0: not deleting directories as there were IO errors
2019/05/16 12:24:58 ERROR : Attempt 3/3 failed with 2 errors and: not deleting files as there were IO errors
2019/05/16 12:24:58 Failed to sync: not deleting files as there were IO errors


This is very likely caused by the kernel disconnecting the mount, or maybe OSXFUSE disconnecting the mount. Now why it has disconnected the mount is a different question!

Here is another thread with a similar issue: Sonarr / Radarr Hangs when using cache temp upload path, which is when I introduced --daemon-timeout.

One thing that occurs to me is that you might need a newer version of OSXFUSE for the daemon timeout to work.


I have updated my OSXFUSE version. I think I also enabled OSXFUSE debugging, so hopefully we will be able to tell what's going on.
