Jottacloud, my cloud supplier, limits upload speeds considerably on transfers for me. I made a jottacloud -> crypt -> chunker share in the hope that it would be a solution, but I see that I misinterpreted how this actually works.
This is my running command:
(I am aware that the chunk size and transfer session limit will make for some very heavy cache use, but please ignore this.)
Sensitive details omitted.
[localstorage]
type = local

[jottacloud]
type = jottacloud
configVersion = 1

[jottacloud_encrypted]
type = crypt
remote = jottacloud:encrypted
md5_memory_limit = 10M

[jottacloud_encrypted_chunker]
type = chunker
remote = jottacloud_encrypted:
chunk_size = 2G
Is there a way to allow parallel uploads of chunks? I imagine prioritizing larger files first (as done currently with "--order-by size,mixed,30"), in combination with smaller chunks, e.g. 200 MB, would be a good solution. A 10 GB file would then be allocated 50 transfer sessions and would upload in parallel, 50x-ing my transfer speed.
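Roughly what I am imagining, as a sketch (the chunk size, transfer count and source path are just example values I picked, not tested settings):

[jottacloud_encrypted_chunker]
type = chunker
remote = jottacloud_encrypted:
chunk_size = 200M

rclone copy /data jottacloud_encrypted_chunker: --transfers 50 --order-by size,mixed,30

(As it turns out further down the thread, this would not actually help as-is: chunker uploads a single file's chunks sequentially, so --transfers only spreads sessions across separate source files.)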
For reference, I have a 1 Gbps up/down internet connection, and the read speed of my storage matches this. As it is currently, my cloud provider limits me to about 4 Mbps per session, leaving my 50 GB files to take quite a while. I will access these shares from several platforms, and thus I do not wish to rely on any extra encapsulation methods to solve this issue.
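For a rough sense of scale (my own back-of-the-envelope figure, using decimal units): 50 GB is about 400 000 Mbit, and 400 000 Mbit / 4 Mbps = 100 000 s, i.e. roughly 28 hours per 50 GB file at the throttled rate, versus well under ten minutes at the full 1 Gbps line speed.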
I have no experience with chunker or Jottacloud, but --transfers 100 would cause very serious throttling by OneDrive, and thereby effectively reduce the transfer speed.
This thread has some extra info on my experiences with OneDrive.
This help article suggests that Jottacloud could have similar (fair usage) limitations. The limitations are most likely per account (like OneDrive), in which case it doesn't help to use chunker or increase --transfers.
Do you have recent tests/statistics to validate that your change to --transfers improves your transfer speed (and not the opposite)?
What is your upload speed if you do a plain upload (no extra flags/options)?
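For example, something like this (my illustration; the path and destination are placeholders, and it may be worth testing both the plain jottacloud: remote and the full chunker chain):

rclone copy /path/to/some-large-file jottacloud:speedtest --progress

(--progress only displays the live transfer rate; it does not change transfer behaviour.)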
I chose a slightly high number as I figured it would be "enough". I figured that it would probably cause more overhead if I were to chunk everything into 100 MB files, and that I could save a lot of wasted processing power etc. if I only chunked really big files.
In my mind, it would be "logical" for rclone to consider each chunk a separate upload and allocate more sessions to it (reasoning being: why would it not? I guess it comes from the structure of rclone / chunker's transfer/session implementation, and that the chunker remote is being tasked to transfer x upstream files in parallel, not to use x parallel transfers per file.)
Is this something that could be easily reconfigured or implemented?
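To make my mental model concrete (my own sketch; the chunk naming follows the chunker defaults as I understand them, and the names would additionally be obscured by the crypt layer before reaching Jottacloud): a 10 GB file stored with chunk_size = 2G would end up as something like

bigfile.bin.rclone_chunk.001
bigfile.bin.rclone_chunk.002
bigfile.bin.rclone_chunk.003
bigfile.bin.rclone_chunk.004
bigfile.bin.rclone_chunk.005

but those five chunks are written one after another within the single transfer slot that bigfile.bin occupies, so --transfers only runs more files in parallel, not more chunks of the same file.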
You are right; it is all about finding the right balance for your usage and data.
I don’t think so. You can create a forum issue (type: enhancement) to propose this as a new feature, but I doubt it will be implemented in the foreseeable future. Take a look at the open issues/enhancements. I see many things with higher need/value/priority.