Good morning to everyone.
I have been using rclone for years to sync my 12 TB local file server (based on Ubuntu) with Google Drive, to preserve my data in case of a NAS failure. I have encryption enabled.
I am running out of space on the file server and, instead of buying additional disks, I am planning to delete several TB of rarely used data from the file server and keep it only in Google Drive.
To keep those files available over SMB/CIFS, I would like to mount the remote Google Drive read/write on the Linux box and export the mount with Samba.
Most of the files are movies, pictures and music.
Now the question:
can you suggest best practices to do that?
thanks
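In case it helps anyone following along, here is a minimal sketch of the setup being asked about. The remote name `gdrive-crypt` and the mount point `/mnt/gdrive` are illustrative, not from the original post; `--vfs-cache-mode writes` is generally recommended when exporting an rclone mount over Samba, since SMB clients open files for read/write.

```shell
# Mount the (encrypted) Google Drive remote; names are illustrative.
# --allow-other lets the smbd process access the FUSE mount.
rclone mount gdrive-crypt: /mnt/gdrive \
    --vfs-cache-mode writes \
    --allow-other \
    --daemon

# Hypothetical Samba share definition (append to /etc/samba/smb.conf,
# then reload Samba):
cat >> /etc/samba/smb.conf <<'EOF'
[gdrive]
   path = /mnt/gdrive
   read only = no
EOF
```

Note that `--allow-other` also requires `user_allow_other` to be enabled in `/etc/fuse.conf`.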
I now have my Google Drive remote mounted and exported over CIFS.
I am performing some tests...
With caching disabled, copying data to the CIFS share runs at roughly 200 Mbps, which is my upload speed to the internet.
With caching enabled, the copy runs at full speed, but when it is almost complete (99%) I have to wait a long time for it to reach 100%.
Let me explain: if the copy without caching takes 1 minute, then with the cache enabled it takes a few seconds to reach 99%, and then another minute to go from 99% to 100%.
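If this is the VFS cache at work, the pause near 100% would be the cached writes still uploading to Google Drive in the background. A sketch of flags that make this visible and bound the cache size (remote and mount names are illustrative, and the size/age values are examples, not recommendations):

```shell
# Illustrative remote/mount names; cache limits are example values.
rclone mount gdrive-crypt: /mnt/gdrive \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 1h \
    -v
# With -v, the log shows the background uploads that account for the
# wait between 99% and 100%.
```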
Just updated the Rclone binaries to the latest version.
I had a problem with fuse3: I am running Ubuntu 18, which does not officially support fuse3, so I worked around it with a symlink and... now the behaviour is correct.
SOLVED
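For reference, a sketch of that kind of symlink workaround, assuming the issue is that an rclone build expecting fuse3 looks for a `fusermount3` binary, which Ubuntu 18.04 does not ship (the exact paths may differ on your system):

```shell
# Assumption: rclone wants fusermount3, but Ubuntu 18.04 only ships the
# fuse2 fusermount; point the expected name at the existing binary.
sudo ln -s /bin/fusermount /usr/local/bin/fusermount3
```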
Anyway... I ran out of space on my local disk. Searching for the reason... it was the cache, which grew until the disk was full.
I ran the mount with the --cache-db-purge option, but the local cache is still there, with hundreds of big files.
What is the correct and official way to completely delete the local cache?
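While waiting for an official answer, a sketch of manual cleanup under the assumption that the data lives in rclone's default cache directory (these paths are assumptions; if you set `--cache-dir`, look there instead, and always stop the mount first so nothing is still writing to the cache):

```shell
# Unmount first so the cache is no longer in use (mount point illustrative).
fusermount -u /mnt/gdrive

# Default cache locations (assumptions; adjust if --cache-dir was used):
rm -rf ~/.cache/rclone/cache-backend   # chunks from the cache backend
rm -rf ~/.cache/rclone/vfs             # files from the VFS cache
```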