Rclone invoke user script/command after each file copy/move/delete

Maybe I’m in a minority, but if rclone had the ability to invoke a user script/command after each file copied/moved/deleted during a sync or other operation, I think it would be quite handy.

Parameters could include what was just done ("delete", "copy", "move") and the source and dest paths of the item in question.

I wanted to do something like:
1. rclone copy all files in a particular directory tree more than 10 minutes old and bigger than 3MB to dirs on my gdrive
2. replace the local source file with a symlink to the just-copied file in a mounted /CloudGDrive
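For what it's worth, the filtering half of this is already covered by rclone's real --min-age and --min-size flags; it's the symlink-replacement half that needs the hook. Here's a rough sketch of that step, run on dummy files under /tmp/demo (all paths are made up for illustration):

```shell
# The rclone half (not run here) would be something like:
#   rclone copy --min-age 10m --min-size 3M /data gdrive:data
# The hook would then do the symlink replacement, sketched on dummy files:
mkdir -p /tmp/demo/src /tmp/demo/CloudGDrive
echo "fake episode" > "/tmp/demo/src/S01E01.mkv"
# stand-in for the rclone copy of this one file
cp "/tmp/demo/src/S01E01.mkv" "/tmp/demo/CloudGDrive/S01E01.mkv"
# replace the local source with a symlink to the mounted cloud copy
ln -sf "/tmp/demo/CloudGDrive/S01E01.mkv" "/tmp/demo/src/S01E01.mkv"
ls -l "/tmp/demo/src/S01E01.mkv"
```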

I could possibly do it by parsing rclone log output, but this struck me as an all-round neater way to do it.

Hands up, I have had a glass or two of wine this evening so this may be complete bollocks, in which case apologies for wasting your time!



It sounds useful, though I think instead of an rclone command of sorts I’d go the route of using xargs compatible output and use xargs to do a command for each input rclone spits out as an output.
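Something like this (printf stands in here for rclone's hypothetical one-path-per-line output; with xargs -I the input is split on newlines, so paths containing spaces survive):

```shell
# rclone would print one processed path per line; printf fakes that here
printf '%s\n' "tv/some show/S01E01.mkv" "tv/some show/S01E02.mkv" \
  | xargs -I{} echo "post-process: {}"
```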


Making the rclone log parsable (with an option most likely) has been something I’ve been thinking about. That would enable, for instance, people writing GUIs to have better control over what rclone is doing.

@Xap - for new features I try writing the docs first - I often find that clarifies my thinking about new features. What do you think the docs for a --post-operation command would look like?

Hey Nick,

I’m not sure whether just the source file path is enough (it would have been for my intended usage); the following assumes it is.

--post-operation command
Invokes COMMAND with a single parameter which is the source file or directory that was just copied/moved/deleted. This is called after each source file/directory has been processed.

For example, COMMAND could be a string like 'echo "Just copied"' and the effect would be as if you had called
echo "Just copied" 'foo:/tv/some show/S01E01.Episode 1.mkv'

Given that the rclone command itself specifies the sync/move/copy/delete, it seems superfluous to pass the operation to COMMAND when it can be inferred, or passed explicitly as part of COMMAND; e.g. --post-operation 'myscript deleted' would end up invoking myscript deleted 'path/to my file/gone' .

I think you’d probably want the source path and the dest path.

Also I think you’d want to say what the operation was, because some things like move can be a copy then a delete, and --backup-dir means that a sync will involve move, copy and delete.

So I’d think

copy source dest
move source dest
delete dest
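A hypothetical hook script matching those three call shapes (the calling convention here is just the proposal above, not anything rclone actually does) could be a simple case dispatch:

```shell
# post_op: hypothetical hook matching the proposed calling convention:
#   copy SOURCE DEST | move SOURCE DEST | delete DEST
post_op() {
  case "$1" in
    copy)   echo "copied $2 -> $3" ;;
    move)   echo "moved $2 -> $3" ;;
    delete) echo "deleted $2" ;;
    *)      echo "unknown operation: $1" >&2; return 1 ;;
  esac
}
post_op copy "local:/tv/ep.mkv" "gdrive:/tv/ep.mkv"
```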

Rclone has some other primitives as well like Move and MoveDir.

If rclone were to make a parsable log output (perhaps to a socket or a file descriptor) like that, then you could just attach one process to parse it, which would be more efficient, and you wouldn’t have to worry about synchronisation (i.e. having many copies of your post-process command started at once).

I like the idea of having a machine parsable log output. I’d probably choose JSON though - would that be any use to you?

Hmm, I was looking for something that could be parsed by a quick-and-dirty shell script, but if JSON it is, then JSON it is…
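Even JSON lines yield to a quick-and-dirty sed, assuming a one-object-per-line format. The field names below are pure invention for illustration, not rclone's actual schema:

```shell
# one hypothetical log line, one JSON object per line
line='{"op":"copy","src":"tv/show/S01E01.mkv","dst":"gdrive:tv/show/S01E01.mkv"}'
# pull out the "op" and "src" values with sed capture groups
op=$(echo "$line" | sed -n 's/.*"op":"\([^"]*\)".*/\1/p')
src=$(echo "$line" | sed -n 's/.*"src":"\([^"]*\)".*/\1/p')
echo "$op $src"
```

A real consumer would be better off with jq or a JSON-aware language, since this breaks on values containing escaped quotes.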

Well if we do a JSON output for the logs then it would be trivial to add a csv format output too I expect.

Can you please make a new issue on github about this - a machine parsable log output for rclone - so I don’t forget.

I think that would be my preferred option for this.

Done. #2180.


It would be great for nextcloud/owncloud:

sudo -u www-data php occ files:scan --path="/dir/modified/sent/from/rclone"

So, when some sync, move or copy is made, rclone would execute this line

sudo -u www-data php occ files:scan --path="arg"

and this arg would be the directory whose content changed after the sync/copy/move/creation completed.

Because nextcloud/owncloud does not operate directly on the files on disk, it keeps a mysql database of all files on the mounted share, something like the cache in rclone.
To update this database there is the occ script, which scans all the files and writes them to mysql.
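If the hook ever materialises, a nextcloud-side wrapper could rescan just the parent directory of each changed path instead of everything. A sketch, with an OCC variable standing in for the real sudo -u www-data php occ invocation so it runs anywhere:

```shell
OCC="echo occ"   # replace with: sudo -u www-data php occ
# rescan: trigger a nextcloud files:scan limited to the changed file's directory
rescan() {
  dir=$(dirname "$1")
  $OCC files:scan --path="$dir"
}
rescan "/dir/modified/sent/from/rclone/file.txt"
```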

So when I have 5TB on a mounted drive, with plenty of folders inside it, and some of those folders are changed through an external application (not through nextcloud), and later I do an rclone sync, nextcloud would be able to update only what was changed and not rescan the whole 5TB again.

If I updated it through the nextcloud web interface, nextcloud would update the database automatically, but such browsing through nextcloud is slow and inconvenient.

If I do it through another webdav/ftp to a temporary VPS drive and later run rclone sync, it is much better, but I then have to update the nextcloud database for the changes to be visible in the nextcloud web interface.

I hope that makes it clear.


Or maybe, @ncw, does rclone detect changes on the remote?
If so, it could automatically run a command after a change happened.
Does rclone monitor changed files automatically?

something like this --on-change mentioned here:

Is rclone able to monitor changes through the vfs/cache .db?

On some remotes, but not on nextcloud/owncloud