Is it possible to store the timestamps of files and directories that have been copied through webdav in some sort of datafile? The timestamps in this datafile would then always replace the timestamps reported by webdav whenever rclone performs an action on this webdav server.
I ask this because I have a 1 TB cloud storage with only webdav access. No matter what I try (plain webdav or ownCloud/Nextcloud mode), the webdav server always sets the timestamp to the date and time of upload.
I connect to this webdav server using rclone mount; the remote is a crypt backend layered on top of webdav.
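For context, a minimal sketch of that kind of setup (the remote names, URL, and mount point are assumptions, not my actual config):

```shell
# Hypothetical rclone.conf entries: a webdav remote with a crypt layer on top
# [dav]
# type = webdav
# url = https://example.com/remote.php/webdav
# vendor = nextcloud
#
# [dav-crypt]
# type = crypt
# remote = dav:encrypted

# Mount the crypt remote locally
rclone mount dav-crypt: /mnt/cloud
```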
The last few days I've been testing rclone in combination with securefs (works) and cryfs (usually doesn't work). Both are fuse filesystems that encrypt the data and also store the timestamps in a separate file. So timestamps are always preserved, whether or not the cloud storage supports them.
It is slow, though, because each file has a meta file containing its timestamp, and this meta file is also stored in the cloud instead of locally. So doing an ls of a big directory inside an rclone mount requires downloading the meta files of every file in that directory.
But still, the combination rclone with securefs may finally enable me to use this 1TB cloud storage that has been sitting idle for almost 2 years.
Okay, after some more testing, both securefs and cryfs work… but they are very fragile. If rclone crashes (or perhaps also when the connection to the cloud server is lost), the whole securefs/cryfs filesystem may become unmountable or appear empty because some important file(s) were not uploaded. I think those important files contain the directory and file names, because securefs/cryfs rename directories and files and also use a different directory structure (all files live in the root directory, with no subdirectories).
Also when there is no crash/disconnection, the upload to the cloud will only start after securefs/cryfs is unmounted. This is when using rclone cache with offline uploading: copy files to -> (securefs mountpoint) -> (rclone mount mountpoint) -> (cloud server).
Where does rclone store the files before it gets uploaded to the cloud?
The upload will start immediately, and as more files come in they will keep uploading. I think what you're looking for is a way to gracefully stop new files entering the queue and finish what is remaining in it?
Yes, finish the remaining files without unmounting the securefs mountpoint first. The incoming files are fully under my control, so that’s not a problem. I want the mountpoint to be mounted all the time, not just for uploading a backup once a day. But never unmounting the mountpoint means that a lot of new and updated files are in the vfs-cache directory waiting and risking getting lost when there’s a system crash.
What I experience is this:
I copy files to the securefs mountpoint; the cache-tmp-upload-path is empty;
Nothing gets uploaded; the cache-tmp-upload-path is still empty;
I unmount the securefs mountpoint;
The cache-tmp-upload-path is filled with files; then after a few seconds the files are uploaded.
This is using the offline uploading of the cache backend, maybe I should test without the cache backend (only vfs cache).
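For that test, a hedged sketch of the same mount without the cache backend, relying only on the VFS write cache (paths are assumptions):

```shell
# Mount the crypt remote with the VFS cache in writes mode, no cache backend.
# --cache-dir controls where the vfs cache files are kept locally.
rclone mount crypt: /mnt/rclone \
  --vfs-cache-mode writes \
  --cache-dir /tmp/cache-vfs
```

With --vfs-cache-mode writes, files copied into the mount are written to the local cache first and uploaded as soon as they are closed, which may avoid the unmount-first behaviour seen with the cache backend.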
What order do you have your fuse mounts mounted? Can you share the order you mount things, including securefs and rclone, and their locations?
As soon as a file is copied to the rclone mount, rclone caches it immediately and begins the upload. If nothing is getting uploaded prior to unmounting securefs, then the data isn't reaching the rclone mount, which leads me to believe that you have securefs mounted on top of the rclone mount?
The order of everything you have mounted would help.
Would manually moving or copying from the vfs-cache to the cloud confuse rclone?
For example, using the setup from a few messages above, doing an hourly cronjob with the next command: rclone move /tmp/cache-vfs/vfs/pcloud/crypted pcloud:crypted
Or, if move would confuse rclone when it looks for those files, maybe copy instead?
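A sketch of the cron job being considered (whether rclone tolerates files disappearing from under its VFS cache directory is exactly the open question; the --min-age filter is my addition, to skip files that may still be open for writing):

```shell
# Hypothetical crontab entry: hourly, move files from the local vfs cache
# directly to the remote, skipping anything modified in the last 15 minutes.
0 * * * * rclone move /tmp/cache-vfs/vfs/pcloud/crypted pcloud:crypted --min-age 15m
```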