Is there a way to force sequential vfs cache flush?

I mount a remote internxt store via their local WebDAV proxy interface. The goal is to be able to do multi-volume tar backups “directly” to the cloud store.

This works reasonably well, although I had to put rclone into a write-IO-restricted process group to limit the speed at which it can write to the local VFS cache to roughly the cloud upload speed, because the cache size limits seem to be more of a suggestion than a hard limit.
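One way to set up such a write-throttled process group is a transient systemd scope with an IO bandwidth cap on the device backing the VFS cache. This is only a sketch of that approach; the cache path, remote name `internxt:`, mount point, and the 10M cap are all assumptions:

```shell
#!/bin/sh
# Sketch: start the rclone mount inside a transient systemd scope that caps
# write bandwidth on the filesystem holding the VFS cache, so the cache
# cannot fill faster than uploads drain it (requires cgroup v2).
# All paths, the remote name, and the 10M figure are placeholders.
throttled_mount() {
    systemd-run --scope \
        -p "IOWriteBandwidthMax=/var/cache/rclone 10M" \
        rclone mount internxt: /mnt/internxt \
            --vfs-cache-mode writes \
            --vfs-cache-max-size 10G
}
```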

All that works.

Except that when I have file.nnn.tar through file.mmm.tar, they aren’t uploaded in sequence, even though they are created and written to strictly sequentially.

So file.015.tar may end up with an older creation date than file.009.tar, simply because, for whatever unknown reason, rclone chose to push the younger file to the cloud store before the older one.

While I don’t care about absolute creation dates, the relative temporal order should be preserved, but it isn’t.

Example:

2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 15:39 filesystem.2025-02-13_15:32:54.000.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 15:50 filesystem.2025-02-13_15:32:54.001.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 15:53 filesystem.2025-02-13_15:32:54.002.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 16:00 filesystem.2025-02-13_15:32:54.003.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 16:08 filesystem.2025-02-13_15:32:54.004.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 16:14 filesystem.2025-02-13_15:32:54.005.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 16:20 filesystem.2025-02-13_15:32:54.006.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 16:34 filesystem.2025-02-13_15:32:54.007.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 16:39 filesystem.2025-02-13_15:32:54.008.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 16:41 filesystem.2025-02-13_15:32:54.009.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 16:58 filesystem.2025-02-13_15:32:54.010.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 17:01 filesystem.2025-02-13_15:32:54.011.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 17:08 filesystem.2025-02-13_15:32:54.012.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 20:30 filesystem.2025-02-13_15:32:54.013.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 20:41 filesystem.2025-02-13_15:32:54.014.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 20:26 filesystem.2025-02-13_15:32:54.015.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 20:24 filesystem.2025-02-13_15:32:54.016.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 20:15 filesystem.2025-02-13_15:32:54.017.tar
2097160 -rw-r--r-- 1 root root 2147491840 Feb 13 20:26 filesystem.2025-02-13_15:32:54.018.tar
1008130 -rw-r--r-- 1 root root 1032325120 Feb 13 18:04 filesystem.2025-02-13_15:32:54.019.tar

AFAIK no such functionality exists today.

But I am not sure why to use mount here at all… Wouldn’t a simple rclone copy/move achieve the same?

Or rclone rcat? You could pipe tar output directly to your remote.
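Something along these lines, as a minimal sketch; the destination path `internxt:backups/full.tar` is just a placeholder:

```shell
#!/bin/sh
# Sketch: stream a tar archive straight to the remote via rclone rcat,
# so nothing has to be spooled on the local disk first.
# The destination "internxt:backups/full.tar" is an assumed remote path.
stream_backup() {
    tar -cf - "$1" | rclone rcat "internxt:backups/full.tar"
}
```

Since this produces a single stream rather than numbered volumes, it would sidestep the upload-ordering problem entirely, at the cost of per-volume granularity.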

Copying won’t work, because that means the backup must first hit the local drive, which means I would need more than 50% free disk space.
I still want to be able to make a full system backup even with 90% of disk space used.

rcat I may have to look into.

There’s also the issue that I’d like to eventually not run this from the command line or from some easily forgotten custom script, but with the standard mechanism provided by webmin’s file system backup module.

For my concrete issue, the monotonically ascending dates are mostly about avoiding confusion and keeping things logically coherent, since for restoring it’s the sequentially numbered file names that matter. But in general, the relative age of files should be preserved, I think. So in a way, to me, this has an almost bug-like quality.

Also, internxt’s WebDAV proxy throws enough errors when litmus-tested that e.g. davfs2 won’t work with it. So I’d have to test whether it even allows append operations, or only copying of entire files. And if it’s the latter, rcat would need to cache the entire backup before being able to cat/copy the file.

There’s of course also the option of using the --info-script hook in tar, to first copy the file to the remote and then delete it, before returning control to tar for the next volume… :thinking:
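A hedged sketch of what such a hook could look like, loosely following the multi-volume script example from the GNU tar manual. GNU tar exports TAR_ARCHIVE, TAR_VOLUME and TAR_FD to the script; the remote path, the "-N.tar" naming scheme, and the invocation are assumptions:

```shell
#!/bin/bash
# next-volume.sh: sketch of a tar --info-script hook, invoked as e.g.
#   tar -c -M -F ./next-volume.sh -f filesystem-1.tar /path/to/backup
# GNU tar exports TAR_ARCHIVE (the volume just written), TAR_VOLUME (the
# number of the next volume) and TAR_FD (a descriptor to write the next
# volume's name to). Remote path and "-N.tar" naming are assumptions.
if [ -n "$TAR_ARCHIVE" ]; then
    # strip the trailing "-N.tar" volume suffix to get the base name
    base=$(expr "$TAR_ARCHIVE" : '\(.*\)-.*')

    # push the just-finished volume and remove the local copy, so volumes
    # reach the remote strictly in creation order
    rclone moveto "$TAR_ARCHIVE" "internxt:backup/${TAR_ARCHIVE##*/}"

    # hand tar the name of the next volume
    echo "${base:-$TAR_ARCHIVE}-$TAR_VOLUME.tar" >&"$TAR_FD"
fi
```

Since tar blocks until the script returns, each upload adds to the total backup time, but the volumes would reach the remote strictly in order.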

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.