Rclone rc and DEBUG logging (Elastic Logstash and log aggregation)

Hi,

I'm using rclone rc in the Alpine Docker image (rclone/rclone:1.70.2) with the global log level set to DEBUG. I've been writing a logstash.conf to capture when jobs are posted, parsing the 'parameters map' and 'reply map' DEBUG lines, with a particular focus on the sync/copy and operations/hashsum functions. I pass the _async and _group settings (i.e. to run jobs in the background and assign a group reference).
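
For context, this is roughly how I post a job (the remote names and the group value are placeholders, and auth flags are omitted):

```sh
# Post an async sync/copy to the rc server, tagging it with a group reference.
curl -s -X POST 'http://localhost:5572/sync/copy' \
  -H 'Content-Type: application/json' \
  -d '{"srcFs":"remote:src","dstFs":"remote:dst","_async":true,"_group":"task-42"}'
# Reply map: {"jobid": 1}
```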

QUESTION:
Is it possible to capture the associated _group parameter, if present, in subsequent related log events? For example, given the current output:

```
2025/07/30 11:33:03 DEBUG : test.file: md5 = 1e2db57dd6527ad4f8f281ab028d2c70 OK
2025/07/30 11:33:03 INFO : test.file: Copied (new)
```

something like this instead, with the group included:

```
2025/07/30 11:33:03 DEBUG GROUP : test.file: md5 = 1e2db57dd6527ad4f8f281ab028d2c70 OK
2025/07/30 11:33:03 INFO GROUP : test.file: Copied (new)
```

This would serve as a unique reference for reliable logging, which currently I can only achieve by running jobs synchronously and processing them in order (so no concurrent jobs!). Essentially, the _group would be captured by Logstash and stored as the task_id on which event aggregation occurs: a common field across all related events. The 'parameters map' contains the _group, the 'reply map' responds with [jobid:x] [group:x], and the DEBUG/INFO/ERROR events (or any other log event) would present the _group value alongside 'Copied (new)', 'Multithread-copied (new)', etc. Fields such as srcFs and dstFs could then be appended to the log event from the initial 'parameters map'.
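
The group already works as an identifier server-side, which is why it seems a natural task_id; a sketch, assuming the rc server and placeholder group name from the example above:

```sh
# Fetch transfer stats scoped to a single group; the reply only covers
# activity tagged with that _group.
curl -s -X POST 'http://localhost:5572/core/stats' \
  -H 'Content-Type: application/json' \
  -d '{"group":"task-42"}'
```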

In addition to this, is there a way of outputting operations/hashsum file calculations to the log in real time (as happens with a sync/copy)? Processing the job/status output of a hashsum operation is incredibly difficult, especially when the hashsum array is very large!
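
For reference, this is how I currently run a hashsum and collect its result (again, the remote and group names are placeholders):

```sh
# Start the hashsum as a background job in the same group.
curl -s -X POST 'http://localhost:5572/operations/hashsum' \
  -H 'Content-Type: application/json' \
  -d '{"fs":"remote:dst","hashType":"md5","_async":true,"_group":"task-42"}'
# -> {"jobid": 2}

# Poll until "finished" is true; the entire hashsum array then arrives as one
# blob in the job output, rather than streaming per-file as it is calculated.
curl -s -X POST 'http://localhost:5572/job/status' \
  -H 'Content-Type: application/json' \
  -d '{"jobid": 2}'
```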

Kind regards,
Luke
