Hi, I have a client's web application with 15 GB of documents linked to it; it runs on Ubuntu.
The client wants his employees to access the documents from their own desktop computers for some particular file actions, so having the web app store the documents directly on his Google Drive sounds like a simple solution. They will install the Google Drive application, and the drive will be shared with them.
So what do you think about having an rclone mount linked to Google Drive for the web app's documents?
As I test things, my plan would be:
- Transfer all current web app documents stored locally on the server to a Google Drive folder mounted on the server with rclone. To be safe, and since copying everything to Google Drive seems like a long job (lots of small files), I would use rsync locally between the two folders while keeping the web app running against the original documents directory.
- The next night, maybe, do a last rsync (fewer new files to upload this time), then switch the web app to the mounted directory.
- I will add a crontab entry to remount the directory at each reboot.
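The plan above could look something like this. The remote name `gdrive:` and all paths are placeholders for illustration, not taken from the actual setup:

```shell
# Mount the Google Drive remote (remote name and paths are hypothetical)
rclone mount gdrive:webapp-docs /mnt/gdrive-docs --daemon

# Initial bulk copy while the web app keeps using its original directory
rsync -a --info=progress2 /var/www/app/documents/ /mnt/gdrive-docs/

# The next night: a final catch-up rsync, then repoint the web app
rsync -a --info=progress2 /var/www/app/documents/ /mnt/gdrive-docs/

# Remount at boot via crontab (crontab -e):
# @reboot rclone mount gdrive:webapp-docs /mnt/gdrive-docs --daemon
```

Because rsync is incremental, the second run only transfers files added or changed since the first pass, which keeps the final switchover window short.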
With my local environment tests I see that the app is a bit slower when uploading documents and photos, but it seems to work quite well. The app is used by few people (fewer than 10 at a time). My tests are quite small, with a much smaller batch of files, and it works well.
So how reliable is this solution?
- Are there any limits I should know about?
- Are there any ways files could get lost?
- If my directory gets unmounted, how will the sync work afterwards when I set the mount back up? Will all new files created on the Ubuntu machine automatically be sent to Google Drive?
An rclone mount isn't as reliable as a local disk. There is networking, the Internet, Google Drive, rate limits, etc., all of which conspire against it.
What --vfs-cache-mode are you using?
If you restart rclone mount while it is uploading stuff, stuff can get lost.
When the mount is missing, the directory will be empty and any files placed there won't get uploaded to Google.
Do the clients need to upload data? If you can make the mount read-only, that would be the most reliable solution.
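For reference, a read-only mount just adds the `--read-only` flag (remote name and path are again placeholders):

```shell
# Serve the documents read-only; any write attempt fails at the FUSE layer
rclone mount gdrive:webapp-docs /mnt/gdrive-docs --read-only --daemon
```

This removes the whole class of "upload lost when the mount dies" problems, since nothing is ever queued for upload through the mount.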
Hi, thanks for the answer. I left it at the default value during my tests, so it is off. After reading up, I guess full would be necessary, as many people could access files from many places.
I don't have much concern about disk space, so I guess this would be the right cache option.
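If disk space ever does become a concern, a `full` cache mount can be capped. The limits below are illustrative values, not requirements:

```shell
# Full VFS cache with illustrative limits on cache size and age
rclone mount gdrive:webapp-docs /mnt/gdrive-docs \
  --vfs-cache-mode full \
  --vfs-cache-max-size 20G \
  --vfs-cache-max-age 24h \
  --daemon
```

With `full`, both reads and writes go through the local cache, which is the closest an rclone mount gets to local-disk semantics.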
From the web interface they can drop documents, delete documents and download documents. They can't edit them from there. Editing will be done by people opening certain types of files with their Google Drive application on their desktops.
I think it could work, but you'll have to do lots of testing to make sure it is suitable.
rclone mount isn't a perfect file system emulation so there are some apps which don't work very well with it. Things work much better with --vfs-cache-mode writes though.
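In practice that recommendation is a one-flag change to the mount command (remote and path are placeholders):

```shell
# Buffer writes on local disk; files are uploaded to the remote once closed
rclone mount gdrive:webapp-docs /mnt/gdrive-docs --vfs-cache-mode writes --daemon
```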
Hi, after some tests with --vfs-cache-mode writes, I see the cache works fine. But the problem I was wondering about does happen: if I kill the mount while the web app is uploading files, the user sees his files uploaded in the interface, and they are there in the cache directory, but since I killed the mount before the upload to the remote finished, I end up with, for example, 10 files in my cache and only 8 on Google Drive.
So after that, if I remount with rclone, the files missing on Google Drive don't seem to be retried. Is there any option to set for this?
Not yet... This is something I hope to find time to work on for 1.51.