Rclone lsf --recursive: how to speed up the process and increase performance

What is the problem you are having with rclone?

rclone lsf on a local filesystem (local directory) is taking a long time. Are there any flags I can add to increase its processing speed and make it more performant?

Run the command 'rclone version' and share the full output of the command.

rclone v1.66.0

  • os/version: debian bookworm/sid (64 bit)
  • os/kernel: 6.5.0-28-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.22.1
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Local Filesystem

The command you were trying to run (eg rclone copy /tmp remote:tmp)


rclone lsf --format "hstp" --hash "SHA1" --files-only --recursive --csv --links --human-readable --exclude-from "~/Downloads/Rclone/exclude-from-file.txt" --ignore-case --log-file "~/Downloads/Rclone/rclone-lsf-log-file.log" --verbose=2 "/media/laweitech/MY-DRIVE/Z-LARGE-DIRECTORY" > ~/Downloads/Rclone/local-metadata.csv

A log from the command that you were trying to run with the -vv flag


2024/05/07 15:32:00 DEBUG : rclone: Version "v1.66.0" starting with parameters ["rclone" "lsf" "--format" "hstp" "--hash" "SHA1" "--files-only" "--recursive" "--csv" "--config" "/home/laweitech/.config/rclone/rclone.conf" "--links" "--human-readable" "--exclude-from" "/home/laweitech/Downloads/Rclone/exclude-from-file.txt" "--ignore-case" "--log-file" "/home/laweitech/Downloads/Rclone/rclone-lsf-log-file.log" "--verbose=2" "/media/laweitech/MY-DRIVE/Z-LARGE-DIRECTORY"]
2024/05/07 15:32:00 DEBUG : Creating backend with remote "/media/laweitech/MY-DRIVE/Z-LARGE-DIRECTORY"
2024/05/07 15:32:00 DEBUG : Using RCLONE_CONFIG_PASS password.
2024/05/07 15:32:00 DEBUG : Using config file from "/home/laweitech/.config/rclone/rclone.conf"
2024/05/07 15:32:00 DEBUG : local: detected overridden config - adding "{b6816}" suffix to name
2024/05/07 15:32:00 DEBUG : fs cache: renaming cache item "/media/laweitech/MY-DRIVE/Z-LARGE-DIRECTORY" to be canonical "local{b6816}:/media/laweitech/MY-DRIVE/Z-LARGE-DIRECTORY"
2024/05/07 15:38:14 DEBUG : Scoop/persist/bitwarden/bitwarden-appdata/Cache: Excluded
2024/05/07 15:39:58 DEBUG : Scoop/persist/brave/User Data/GrShaderCache: Excluded


Further Information

/media/laweitech/MY-DRIVE/Z-LARGE-DIRECTORY contains 1,348,080 items, totalling 415.0 GB.

After running the above command, it takes 4 hours, 35 minutes, and 29 seconds to complete and save the results to local-metadata.csv.

Is there any way I can reduce the processing time to something lower than 4 hours, 35 minutes, and 29 seconds? Do I need to add flags like --fast-list?

what is taking a long time: the basic listing of files, calculating the sha1 hash, or something else?
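
For example, one way to check (a rough sketch; the path is the same as in the original command, and --hash is dropped so only the directory walk is timed):

time rclone lsf --format "stp" --files-only --recursive "/media/laweitech/MY-DRIVE/Z-LARGE-DIRECTORY" > /dev/null

If that finishes in minutes while the full command takes hours, the SHA1 calculation, not the listing, is what dominates.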

you could try the flag and see what happens?

as far as I know, the local backend does not support ListR, so --fast-list would not help.

rclone backend features local | grep "ListR"
                "ListR": false,

You request a hash. The local backend requires every file to be read in order to calculate it. For many files and a lot of data this can be very slow indeed. The only way to speed it up is to use a faster disk and a faster CPU.
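
As a rough back-of-the-envelope check: 415 GB read in 4 h 35 m 29 s (16,529 seconds) works out to roughly 415,000 MB / 16,529 s ≈ 25 MB/s of sustained throughput, which suggests the run is limited by how fast the drive can feed data into the SHA1 calculation rather than by the directory listing itself.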

What you could try is to add a hasher overlay and only calculate hashes once.
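
A minimal sketch of what that could look like in rclone.conf (the remote name "hashed-drive" and the wrapped path are just examples; check the hasher backend docs for the full option list):

[hashed-drive]
type = hasher
remote = /media/laweitech/MY-DRIVE
hashes = sha1
max_age = off

rclone lsf --format "hstp" --hash "SHA1" --files-only --recursive --csv "hashed-drive:Z-LARGE-DIRECTORY"

The first listing still has to read every file, but later runs can reuse the cached SHA1s for files whose size and modification time have not changed.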

Or use a tool like cshatag (GitHub - rfjakob/cshatag: Detect silent data corruption under Linux) to store sha256 hashes in extended attributes. Then you do not need rclone at all.
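
For example (a sketch, assuming the filesystem on MY-DRIVE supports extended attributes and that cshatag is installed):

find "/media/laweitech/MY-DRIVE/Z-LARGE-DIRECTORY" -type f -exec cshatag {} +

Each file still has to be read once to compute its sha256, but afterwards the stored value can be read back cheaply, with something like getfattr -n user.shatag.sha256 FILE, without touching the file contents again.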

How you approach it depends on your exact workflow and requirements.


@asdffdsa I mean the whole process takes too long to complete. Thank you for the information.

Thanks @kapitainsky for your response. I am trying out your suggestions to find which works best for my case.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.