So basically I went with the following setup, which works quite all right:
I send my media to the local-media drive, after which I run an rclone move ... command. After a minute or two the files show up in remote-media and all is well.
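For reference, the move step might look something like this (a sketch only — the remote name `remote:media` and the flags are my assumptions, not the original command):

```shell
# Move everything from the local staging folder up to the cloud remote.
# "remote:media" is a hypothetical rclone remote path; adjust to your setup.
# --min-age avoids moving files that are still being written to.
rclone move /home/user/local-media remote:media --min-age 15m
```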
I wanted to go the extra mile on this and decided I needed a single folder to read from and write to:
unionfs-fuse -o cow -o allow_other /home/user/local-media=RW:/home/user/remote-media=RO /home/user/media/
This allows me to just read and write to the media folder. Reads are redirected to the remote version and writes go to the local folder. All my programs are happy; they have no clue that files get moved around behind the scenes.
However, I would like the option to remove files from my media folder. So basically I would like to somehow pipe the delete operation through to remote-media, as that's a mounted drive and rclone supports delete operations on it.
The point behind all this is the well-known media download tools that will upgrade media to a better version when one is found.
Does anybody have any suggestions on this? I'm sure I could turn it into a short tutorial if I can figure this out. I think it would be helpful for others as well!
I seem to have figured this one out, more or less. My problems were - again - caused by root/user privileges. Rclone's mount directory is owned by root, and my user doesn't have enough privileges on it to delete files (write-protected files).
Disclaimer: I've tested this on small files for now; I'll test it on actual data later on and report back my findings.
I am now able to delete files from the fused directory. Unionfs keeps a record of deleted filenames, which it excludes from its view. This means that any future files bearing the same name are unfortunately excluded as well — keep this in mind. Unionfs calls this a "whiteout", by the way.
When I have some time I'll take a look at writing a bash script that can extract whiteouted files from unionfs' bookkeeping. Those files should then be deleted from Amazon cloud to keep everything tidy.
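A sketch of what such a script could look like. This assumes unionfs-fuse's usual bookkeeping (a `.unionfs` directory in the RW branch, with whiteouts marked by a `_HIDDEN~` suffix) and a hypothetical rclone remote name; the demo at the end runs against a throwaway fixture, and the delete commands are only printed so nothing is removed until you've verified them:

```shell
#!/bin/sh
# Turn unionfs-fuse whiteout markers into rclone delete commands.
list_whiteout_deletes() {
    branch=$1    # the RW branch, e.g. /home/user/local-media
    remote=$2    # hypothetical rclone remote prefix, e.g. "remote:media/"
    find "$branch/.unionfs" -name '*_HIDDEN~' 2>/dev/null | while read -r w; do
        # Strip the metadata prefix and the _HIDDEN~ suffix to get
        # the path of the file as it exists on the remote.
        rel=${w#"$branch/.unionfs/"}
        rel=${rel%_HIDDEN~}
        # Only print the command; pipe to sh once you trust the output.
        printf 'rclone delete %s%s\n' "$remote" "$rel"
    done
}

# Demo on a throwaway fixture instead of the real branch.
demo=$(mktemp -d)
mkdir -p "$demo/.unionfs/tv"
touch "$demo/.unionfs/tv/old-episode.mkv_HIDDEN~"
list_whiteout_deletes "$demo" "remote-media:"
```

Printing instead of executing makes it easy to review (or dry-run in cron) before wiring it up to the actual remote.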
Lastly, there is renaming. On rename, files get moved back to local-media under their new filename, and the old filename is again recorded as a whiteout. Basically we're just copying the entire file under a new name. It would again be good practice to delete the old file from Amazon cloud and re-upload the new one. You probably have to be a bit cautious when renaming a lot of files at once.
Hey @Blaapje, what perms did you end up setting on rclone’s mount directory to make this work? I am having the exact same issue.
I eventually went with a cronjob that would scan all the files in the .cache folder inside the read/write folder. I would then remove the files from the mounted drive with an rm -f command and clean the cache folder.
I would share my script, but my VPS got wiped after a crash a while back. At the moment I just let the cache folder grow.
It might be worth trying:
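Presumably a mount along these lines — a sketch mirroring the command from earlier in the thread, with both branches marked writable and the cow option dropped (paths assumed unchanged):

```shell
# Both branches RW: deletes can reach the remote branch directly,
# and without cow, edits aren't copied from remote to local first.
unionfs-fuse -o allow_other /home/user/local-media=RW:/home/user/remote-media=RW /home/user/media/
```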
Theoretically all new writes should flow to the first folder listed, while you keep the ability to write to the second folder (and thus delete from it). Be sure to also remove the cow option, which enables copy-on-write and copies the entire file from remote to local before editing.
Hope it helps!
Awesome, thanks! This helped me figure it out. What I had actually done was copy the .unionfs/ folder once or twice up to the cloud share, which meant it was RO in the fuse drive. Stupid mistake… always something to do with perms.