Using a separate file for timestamps for WebDAV

Is it possible to store the timestamps of files and directories that have been copied through WebDAV in some sort of data file? The timestamps in this data file would then always replace the timestamps reported by the WebDAV server whenever rclone performs an action on it.

I ask because I have 1 TB of cloud storage with only WebDAV access. No matter what I try (plain WebDAV or the ownCloud/Nextcloud variants), the WebDAV server always sets the timestamp to the date and time of upload.

I connect to this WebDAV server using rclone mount, and the remote is a crypt remote on top of a webdav remote.

I suggest adding a backend (like crypt) that keeps split files plus metadata, in order to cope with broken WebDAV servers.

Example:

FileName.ext.metadata
FileName.ext.chunk1
FileName.ext.chunk2
FileName.ext.chunk3
FileName.ext.chunk4

FileName.ext.metadata would contain information like timestamps and an index of the chunks (with offsets).
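As a rough sketch, the metadata file might look something like this (the JSON layout and field names are just an illustration, not an existing rclone format; the offsets assume 10 MiB chunks):

{
  "name": "FileName.ext",
  "modtime": "2019-03-01T12:34:56Z",
  "size": 41943040,
  "chunks": [
    { "file": "FileName.ext.chunk1", "offset": 0 },
    { "file": "FileName.ext.chunk2", "offset": 10485760 },
    { "file": "FileName.ext.chunk3", "offset": 20971520 },
    { "file": "FileName.ext.chunk4", "offset": 31457280 }
  ]
}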

With such a backend in place, we could stack it on top of any other backend (webdav, crypt, …).

This is just an idea.

ciao

luigi

WebDAV doesn’t support setting the modification time as part of the protocol; the Nextcloud and ownCloud support for it is an extension.
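For reference, that extension is selected via the vendor option of rclone’s webdav backend; a minimal config sketch (the remote name and URL here are examples):

[mywebdav]
type = webdav
url = https://example.com/remote.php/webdav/
vendor = nextcloud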

There are some existing issues with ideas along these lines. Keeping everything in sync is reasonably tricky, though.

The last few days I’ve been testing rclone in combination with securefs (works) and cryfs (usually doesn’t work). Both are FUSE filesystems that encrypt data and also keep the timestamps in a separate file, so timestamps are always preserved whether or not the cloud storage supports them.

It is slow, though, because each file has a meta file holding its timestamp, and this meta file is also stored in the cloud rather than locally. So doing an ls of a big directory inside an rclone mount requires downloading the meta files of every file in that directory.

But still, the combination rclone with securefs may finally enable me to use this 1TB cloud storage that has been sitting idle for almost 2 years.

Okay, after some more testing: both securefs and cryfs work, but they are very fragile. If rclone crashes (or perhaps also when the connection to the cloud server is lost), the whole securefs/cryfs filesystem may become unmountable or appear empty because some important file(s) were not uploaded. I think those files hold the directory and file names, since securefs/cryfs rename all directories and files and use a different directory structure (everything in the root directory, no subdirectories).

Also, even without a crash or disconnection, the upload to the cloud only starts after securefs/cryfs is unmounted. This is when using the cache backend with offline uploading:
copy files to -> (securefs mountpoint) -> (rclone mount mountpoint) -> (cloud server).
Where does rclone store the files before they get uploaded to the cloud?

I believe all of that is done in memory, unless you’ve specified a cache backend or VFS in writes mode, which would then copy to the cache first.

Maybe not one metadata file per file, but one metadata file for the whole directory.

Okay, so do I have to unmount securefs before the duration in vfs-cache-max-age is reached, or does vfs-cache-max-age only apply to the VFS in full mode?

Is there a method to force the upload of those files without unmounting securefs?

The upload will happen immediately, but if more files come in, they will continue. I think what you’re looking for is a way to gracefully stop more files entering the queue and finish what remains in it?

Yes, finish the remaining files without unmounting the securefs mountpoint first. The incoming files are fully under my control, so that’s not a problem. I want the mountpoint mounted all the time, not just for uploading a backup once a day. But never unmounting it means a lot of new and updated files sit waiting in the vfs-cache directory, at risk of being lost in a system crash.

What I experience is this:

  1. I copy files to the securefs mountpoint; the cache-tmp-upload-path is empty;
  2. Nothing gets uploaded; the cache-tmp-upload-path is still empty;
  3. I unmount the securefs mountpoint;
  4. The cache-tmp-upload-path is filled with files; then, after a few seconds, the files are uploaded.

This is using the offline uploading feature of the cache backend; maybe I should test without the cache backend (VFS cache only).
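A simple way to observe this is to keep an eye on the temp upload path while copying (illustrative; any interval works):

watch -n 5 'ls -lR /tmp/cache-tmp'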

In what order do you have your FUSE mounts mounted? Can you share the order in which you mount things, including securefs and rclone, and their locations?

As soon as a file is copied to the rclone mount, rclone caches it immediately and begins the upload. If nothing is being uploaded before securefs is unmounted, then the data isn’t making it to the rclone mount, which leads me to believe that you have securefs mounted on top of the rclone mount?

The order of everything you have mounted would help.

Yes, securefs is mounted on top of the rclone mount: its encrypted backing directory /tmp/cloud/crypted lives inside the rclone mount /tmp/cloud, and the plaintext view is at /tmp/plain.

First command:

# plain remote with the VFS write cache
rclone mount pcloud: /tmp/cloud \
  --cache-dir /tmp/cache-vfs --vfs-cache-mode writes \
  --write-back-cache --no-modtime

or

# cache backend in front of the remote, with offline uploading
rclone mount -v cached_pcloud: /tmp/cloud \
  --cache-dir /tmp/cache-vfs --vfs-cache-mode writes \
  --cache-db-path /tmp/cache-db --cache-chunk-path /tmp/cache-db \
  --cache-tmp-upload-path /tmp/cache-tmp \
  --write-back-cache --no-modtime

Second command:

# mount the securefs volume whose encrypted files live inside the rclone mount
securefs mount -b -o nonempty,kernel_cache \
  /tmp/cloud/crypted /tmp/plain

Copy some files:

cp -p *.gif /tmp/plain

When you’re overlaying securefs onto rclone, rclone isn’t getting the data you’re copying because you’re essentially hiding it under securefs.
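A quick hypothetical check, reusing the paths above: copy a file straight into the rclone mount, bypassing securefs; if the rclone layer itself is fine, it should start uploading immediately:

cp -p test.gif /tmp/cloud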

Ah okay, that explains it.

Would manually moving or copying files from the vfs-cache to the cloud confuse rclone?

For example, using the setup from a few messages above, running an hourly cron job with this command:
rclone move /tmp/cache-vfs/vfs/pcloud/crypted pcloud:crypted
Or, if move confuses rclone when it later looks for these files, maybe using copy instead?
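As a concrete (hypothetical) crontab entry, that idea would look like:

0 * * * * rclone move /tmp/cache-vfs/vfs/pcloud/crypted pcloud:crypted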

I really wouldn’t recommend that.

What about this?

The underlying filesystem would be the rclone mount. It says it caches all metadata persistently.