Info about v1.54

Hi,

Can we start a discussion on this?

  • new commands and/or flags?
  • documentation?
  • how end-users can start testing it for bugs?

You can check GitHub as that is where all the items are:

The next milestone has things tagged for it.

Thanks, but that link has 181 pages going back to 2014.

I was hoping for a more concise list of the major changes for 1.54.

What would you like to see?

I have on the list:

  • next part of VFS caching so we can serialize objects to disk - this will build the foundations for metadata caching in the VFS (which probably won't be in 1.54)
  • a VFS backend so you can wrap a backend in a VFS and use its caching
  • move bandwidth accounting to the edge to make it more accurate and have separate RX and TX limits (see the sketch just after this list).
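
For context, here is a hedged sketch of how the single overall limit works today (the remote name, paths and the 8M value are just illustrative, not from this thread); the separate RX and TX limits above would let the two directions be capped independently:

    # today one --bwlimit value covers upload and download together
    rclone sync /local/photos gdrive:photos --bwlimit 8M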

Lots of other stuff which other people are working on.

To see the most up-to-date list of changes/updates for 1.54 you should check out https://github.com/rclone/rclone/milestone/41 every once in a while.

What does this mean?

This means moving the bandwidth accounting to the HTTP layer and doing it when rclone reads data from and writes data to the network. Currently rclone does it in the middle, when passing data from the source FS to the dest FS.

This will make it more accurate (at the moment it fills buffers, which makes spikes) and will also include listings, which it doesn't at the moment.

I have issues with the VFS maxing out my gigabit link on every file. With the default settings for VFS cache mode full I can't control the downloading from the remote so that it only downloads at the speed needed for Emby to play a movie. I will try the --bwlimit option, which I have just found. The issue, as I see it, is that rclone downloads at max speed to get the file into the cache as fast as it can, which is different from the old cache backend I also use.
Is bandwidth accounting something that will help in this case?

That should help for overall bandwidth limits. You can also use --bwlimit-file to set a per-file bandwidth limit.
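
As a hedged sketch (the remote name, mount point and values are assumptions, not taken from this thread), the two limits can be combined on a mount like this:

    # cap all traffic at 40 MiB/s overall and each individual file at 10 MiB/s
    rclone mount gdrive: /mnt/gdrive --bwlimit 40M --bwlimit-file 10M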

The VFS layer will download as fast as it can to fill up --buffer-size and satisfy --vfs-read-ahead; after that it will download at the speed at which the user is reading the file. At least that is what is supposed to happen!
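
To make that concrete, here is an illustrative mount (remote name, mount point and values are examples, not recommendations) showing the knobs involved:

    # the VFS fetches quickly until --buffer-size plus --vfs-read-ahead is filled,
    # then drops back to the rate the reading application (e.g. Emby) asks for
    rclone mount gdrive: /mnt/gdrive --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 128M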

Thanks @ncw

I started using the --bwlimit-file option and it works fine when downloading from the remote. Today I started uploading to the mounted remote and it seems --bwlimit-file is not being complied with: I see 80-90 MB/s upload while the limit is set to 40M. Could it be a question of threads or something? Cheers

It might be internal buffering allowing the external bandwidth to peak. --bwlimit-file guarantees the long-term average, not the peaks, unfortunately.

Which backend are you uploading to? And how are you doing the upload?

I'm uploading to gdrive through the mount. I'm using the lfm file manager on Ubuntu Linux 18 LTS.

Can you try a big file and graph its bandwidth usage to see if the average is right? Or look at the logs to see how long the transfer takes?
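
For example (the file names and paths here are made up), you could time a single large upload through the mount and divide the size by the elapsed time, or run the same transfer with rclone directly and watch its stats:

    # time a large upload via the mount; size / elapsed time gives the real average rate
    time cp big-file.mkv /mnt/gdrive/

    # or let rclone report its own throughput every 10 seconds for comparison
    rclone copy big-file.mkv gdrive:test -vv --stats 10s --bwlimit-file 40M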
