What is the problem you are having with rclone?
None, really. I want to ask whether I can check lots of BIG video files faster. The defaults are very slow, and I need to check over 12TB of big video files (they are VHS recordings).
I'm copying first with --ignore-checksum to download all the remote data as fast as possible, and then I want to run the checking process.
This is all in a shell script with the different paths I want to copy locally.
I read the help output of rclone check but didn't see any option that does what I want.
I suppose the next best thing is to just trust the checkers, since they can compare by size and date, right? That might be enough.
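If size comparison is enough, rclone check does have a --size-only flag that skips hash computation entirely. A sketch, reusing the paths and the --checkers value from my copy command (dropboxCheck.log is just a placeholder log name):

```shell
# Compare files by size only, without downloading or hashing anything.
rclone check REMOTE:/ ./destination --size-only --checkers=16 -P --log-file=dropboxCheck.log
```
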
Thanks for the suggestions.
On a side note, would a "partial checksum" be a good idea?
As far as I know, cryptographic hash functions change radically even if a single bit differs. Would hashing just 1MB of the source/destination files be enough?
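To illustrate the idea (this is not an rclone feature, just a sketch using standard tools): hash only the first 1 MiB of each file and compare. This would catch truncation or corruption near the start of a file, but not bit errors later on.

```shell
#!/bin/sh
# Hypothetical "partial checksum": hash only the first 1 MiB of a file.
# Catches damage in the hashed region only -- purely an illustration.
partial_sum() {
    head -c 1048576 "$1" | md5sum | cut -d' ' -f1
}

# Example: compare a source file and its copy.
printf 'hello video data' > /tmp/src.bin
cp /tmp/src.bin /tmp/dst.bin

a=$(partial_sum /tmp/src.bin)
b=$(partial_sum /tmp/dst.bin)
[ "$a" = "$b" ] && echo MATCH || echo MISMATCH
```
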
Run the command 'rclone version' and share the full output of the command.
# rclone version
rclone v1.68.2
- os/version: slackware 15.0+ (64 bit)
- os/kernel: 6.1.106-Unraid (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.3
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Dropbox Business.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone copy REMOTE:/ ./destination -P -v --create-empty-src-dirs --transfers=4 --checkers=16 --low-level-retries=20 --log-file=dropboxDownload.log --ignore-checksum
rclone check REMOTE:/ ./destination -P --log-file=dropboxDownload.log
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
# rclone config redacted
[REMOTE]
type = dropbox
token = XXX
A log from the command that you were trying to run with the -vv flag
Doesn't apply.