Rclone 1.48 release

Rclone 1.48 has been released (after a slightly extended release period!). Find it here: https://rclone.org/downloads/

Highlights:

  • rclone serve sftp - serve any rclone backend over sftp
  • Multithread downloads from any backend to local storage
  • Server side copy for B2
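
The new `serve sftp` command from the highlights can be tried out like this; the remote name, port and credentials below are placeholders, not values from the release notes:

```shell
# Serve any rclone remote over SFTP on port 2022.
# "mydrive:", the user and the password are placeholders - use your own.
rclone serve sftp mydrive: --addr :2022 --user demo --pass secret
```

Any SFTP client (or another rclone) can then connect to port 2022 and browse the remote.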

Thank you to all the contributors to this release, those who contributed code or doc fixes (27 people!) or made issues or answered questions in the forum - your help is much appreciated!

v1.48.0 - 2019-06-15

  • New commands
    • serve sftp: Serve an rclone remote over SFTP (Nick Craig-Wood)
  • New Features
    • Multi threaded downloads to local storage (Nick Craig-Wood)
      • controlled with --multi-thread-cutoff and --multi-thread-streams
    • Use rclone.conf from rclone executable directory to enable portable use (albertony)
    • Allow sync of a file and a directory with the same name (forgems)
      • this is common on bucket based remotes, eg s3, gcs
    • Add --ignore-case-sync for forced case insensitivity (garry415)
    • Implement --stats-one-line-date and --stats-one-line-date-format (Peter Berbec)
    • Log an ERROR for all commands which exit with non-zero status (Nick Craig-Wood)
    • Use go-homedir to read the home directory more reliably (Nick Craig-Wood)
    • Enable creating encrypted config through external script invocation (Wojciech Smigielski)
    • build: Drop support for go1.8 (Nick Craig-Wood)
    • config: Make config create/update encrypt passwords where necessary (Nick Craig-Wood)
    • copyurl: Honor --no-check-certificate (Stefan Breunig)
    • install: Linux skip man pages if no mandb (didil)
    • lsf: Support showing the Tier of the object (Nick Craig-Wood)
    • lsjson
      • Added EncryptedPath to output (calisro)
      • Support showing the Tier of the object (Nick Craig-Wood)
      • Add IsBucket field for bucket based remote listing of the root (Nick Craig-Wood)
    • rc
      • Add --loopback flag to run commands directly without a server (Nick Craig-Wood)
      • Add operations/fsinfo: Return information about the remote (Nick Craig-Wood)
      • Skip auth for OPTIONS request (Nick Craig-Wood)
      • cmd/providers: Add DefaultStr, ValueStr and Type fields (Nick Craig-Wood)
      • jobs: Make job expiry timeouts configurable (Aleksandar Jankovic)
    • serve dlna reworked and improved (Dan Walters)
    • serve ftp: add --ftp-public-ip flag to specify public IP (calistri)
    • serve restic: Add support for --private-repos in serve restic (Florian Apolloner)
    • serve webdav: Combine serve webdav and serve http (Gary Kim)
    • size: Ignore negative sizes when calculating total (Garry McNulty)
  • Bug Fixes
    • Make move and copy individual files obey --backup-dir (Nick Craig-Wood)
    • If --ignore-checksum is in effect, don't calculate checksum (Nick Craig-Wood)
    • moveto: Fix case-insensitive same remote move (Gary Kim)
    • rc: Fix serving bucket based objects with --rc-serve (Nick Craig-Wood)
    • serve webdav: Fix serveDir not being updated with changes from webdav (Gary Kim)
  • Mount
    • Fix poll interval documentation (Animosity022)
  • VFS
    • Make WriteAt for non cached files work with non-sequential writes (Nick Craig-Wood)
  • Local
    • Only calculate the required hashes for big speedup (Nick Craig-Wood)
    • Log errors when listing instead of returning an error (Nick Craig-Wood)
    • Fix preallocate warning on Linux with ZFS (Nick Craig-Wood)
  • Crypt
    • Make rclone dedupe work through crypt (Nick Craig-Wood)
    • Fix wrapping of ChangeNotify to decrypt directories properly (Nick Craig-Wood)
    • Support PublicLink (rclone link) of underlying backend (Nick Craig-Wood)
    • Implement Optional methods SetTier, GetTier (Nick Craig-Wood)
  • B2
    • Implement server side copy (Nick Craig-Wood)
    • Implement SetModTime (Nick Craig-Wood)
  • Drive
    • Fix move and copy from TeamDrive to GDrive (Fionera)
    • Add notes that cleanup works in the background on drive (Nick Craig-Wood)
    • Add --drive-server-side-across-configs to default back to the old server side copy semantics (Nick Craig-Wood)
    • Add --drive-size-as-quota to show storage quota usage for file size (Garry McNulty)
  • FTP
    • Add FTP List timeout (Jeff Quinn)
    • Add FTP over TLS support (Gary Kim)
    • Add --ftp-no-check-certificate option for FTPS (Gary Kim)
  • Google Cloud Storage
    • Fix upload errors when uploading pre 1970 files (Nick Craig-Wood)
  • Jottacloud
    • Add support for selecting device and mountpoint. (buengese)
  • Mega
    • Add cleanup support (Gary Kim)
  • Onedrive
    • More accurately check if root is found (Cnly)
  • S3
    • Support S3 Accelerated endpoints with --s3-use-accelerate-endpoint (Nick Craig-Wood)
    • Add config info for Wasabi's EU Central endpoint (Robert Marko)
    • Make SetModTime work for GLACIER while syncing (Philip Harvey)
  • SFTP
    • Add About support (Gary Kim)
    • Fix about parsing of df results so it can cope with negative results (Nick Craig-Wood)
    • Send custom client version and debug server version (Nick Craig-Wood)
  • WebDAV
    • Retry on 423 Locked errors (Nick Craig-Wood)

Awesome. Just gets better and better!

As always - love your work guys!

Very much appreciated

Morphy

Thanks again to Nick and the contributors, it's fantastic work. :clap: :clap:

And in this version, one feature in particular is especially useful for my use case.

Thanks!!!! I love rclone, I'm from r/DataHoarder/

Lovely stuff! Well done to all involved. :clap: :clap: :clap:

Brilliant! Thank you Nick, fionera, Garry, Gary and all

Multi threaded downloads to local storage (Nick Craig-Wood)

  • controlled with --multi-thread-cutoff and --multi-thread-streams

Does this new feature also work for "rclone mount" commands?
How does this feature work with --vfs-cache?

Thanks for reply

If you are using --vfs-cache-mode writes or full and the file is big enough and needs to be downloaded rather than streamed, then yes it will work.
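
To make that concrete, a mount set up along these lines would pick up multi-threaded downloads whenever the cache fully downloads a file over the cutoff; the remote name and mount point are placeholders, and the cutoff/streams values shown are just the defaults:

```shell
# Mount with a writes cache; files bigger than the cutoff that the
# cache downloads (rather than streams) use multi-threaded downloads.
# "remote:" and /mnt/remote are placeholders.
rclone mount remote: /mnt/remote \
  --vfs-cache-mode writes \
  --multi-thread-cutoff 250M \
  --multi-thread-streams 4
```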

Would it make sense to use it while streaming high-bitrate stuff (e.g. 4K) to get higher network speeds, or doesn't it help for streaming?

Btw, once again a huge thank you to you and all the supporters of this awesome project. :slight_smile:

Can I just say, the new multi-threaded copy is insanely fast. This is what I now get on my seedbox slot (which sits on a 20Gb/s connection) when downloading from GSuite:

  2019/06/19 09:12:42 INFO : xxxxxx.mkv: Multi-thread Copied (new)
  2019/06/19 09:12:42 INFO :
  Transferred:   	    4.404G / 4.404 GBytes, 100%, 382.210 MBytes/s, ETA 0s
  Errors:                 0
  Checks:                 0 / 0, -
  Transferred:            1 / 1, 100%
  Elapsed time:       11.7s

Before, I was maxing out at about 25-30 MB/s.

Fantastic improvement - thanks!

That is amazingly fast! Did you try tuning these? Increasing streams may help further.

  --multi-thread-cutoff SizeSuffix   Use multi-thread downloads for files above this size. (default 250M)
  --multi-thread-streams int         Max number of streams to use for multi-thread downloads. (default 4)

It won't really help for streaming.
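
As a purely illustrative example of tuning those two flags, lowering the cutoff and raising the stream count might look like this (the remote name and paths are placeholders):

```shell
# Download with 8 parallel streams for any file over 100M,
# showing progress with -P. "gdrive:" and the paths are placeholders.
rclone copy gdrive:backups/big.mkv /data/local -P \
  --multi-thread-cutoff 100M \
  --multi-thread-streams 8
```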

Using --multi-thread-streams=8 on a 24GB file, I got

Transferred:   	   24.434G / 24.434 GBytes, 100%, 457.509 MBytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            1 / 1, 100%
Elapsed time:       54.6s

It initially ran at 550MB/s but I think that's the limit of the disk I'm on, as it was showing 100% utilisation while I was running this.

So possibly if you're using fast SSDs, you may be able to beat this...

Wow, 0.5 GByte/s!

:slight_smile:

Does this mean I can create hashes of incremental changes to the local file system?
Like, on the first run there are 20 files; I add 10 more, and on the next run only those 10 files are hashed and updated in the output file?

Congrats and thank you for the multi-threaded downloads. My 1Gb fiber link is happy!

Alas no. What it means is that rclone now calculates only the one hash it needs, rather than MD5, SHA1, Dropbox etc. as it did under some circumstances.
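
For instance, asking for a single hash type now computes just that hash for local files; the path below is a placeholder:

```shell
# Only the requested hash (MD5 here) is calculated for each local file,
# instead of every supported hash type as before.
rclone md5sum /path/to/dir
```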

I'm using Rclone Browser. How would I add those new flags for multi-threading? Right now I use this:

-v -vv --fast-list --drive-chunk-size 128M --verbose --drive-acknowledge-abuse

thanks !

You don't have to do anything: rclone will use multi-threaded downloads by default for files bigger than 250M, using up to 4 streams.

You could put either of these options in if you wanted to change the defaults

  --multi-thread-cutoff SizeSuffix   Use multi-thread downloads for files above this size. (default 250M)
  --multi-thread-streams int         Max number of streams to use for multi-thread downloads. (default 4)
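
Putting that together with the flags from the question above, the full option string might look like this (the cutoff and stream values here are purely illustrative, not recommendations):

```shell
-v -vv --fast-list --drive-chunk-size 128M --verbose --drive-acknowledge-abuse \
  --multi-thread-cutoff 100M --multi-thread-streams 8
```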