Jottacloud, per-transfer speed limits, chunker workaround?

Hello,

Jottacloud, my cloud provider, limits upload speed considerably per transfer for me. I set up a jottacloud -> crypt -> chunker remote in the hope that it would be a solution, but I see that I misinterpreted how this actually works.

This is my running command:
(I am aware that the chunk size and transfer session limit will make for very heavy cache use, but please ignore this.)

rclone copy I:\ jottacloud_encrypted_chunker:\LEGACY_BACKUP_1 --transfers 100 --retries 3 -vv --progress --log-file=rclone_log.txt --order-by size,mixed,30

Remote configuration (sensitive details omitted):

[localstorage]
type = local

[jottacloud]
type = jottacloud
configVersion = 1

[jottacloud_encrypted]
type = crypt
remote = jottacloud:encrypted
md5_memory_limit = 10M

[jottacloud_encrypted_chunker]
type = chunker
remote = jottacloud_encrypted:
chunk_size = 2G

Is there a way to allow for parallel uploads of chunks? I imagine that prioritizing larger files first (as done currently with "--order-by size,mixed,30"), in combination with smaller chunks, e.g. 200 MB, would be a good solution. A 10 GB file would then have 50 transfer sessions allocated and would upload in parallel, 50x-ing my transfer speed.
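
For illustration, the only change from my config above would be the chunk size on the chunker remote (the 200M value is just my hypothetical example):

[jottacloud_encrypted_chunker]
type = chunker
remote = jottacloud_encrypted:
chunk_size = 200M

The hope was that --transfers 100 would then keep up to 100 of those 200 MB chunks in flight at once.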

For reference, I have a 1G up/down internet connection, and the read speed of my storage matches it. As it is, my cloud provider limits me to about 4 Mbit/s per session, so a single 50 GB file takes roughly 50 × 8000 / 4 ≈ 100,000 seconds, i.e. about 28 hours. I will access these remotes from several platforms, and thus I do not wish to rely on any extra encapsulation methods to solve this issue.

Thank you very much for reading through my post.

hello and welcome to the forum,

It depends on what counts as a session.

I have no experience with chunker or Jottacloud, but --transfers 100 would cause very serious throttling on OneDrive, and thereby effectively reduce the transfer speed.

This thread has some extra info on my experiences with OneDrive.

This help article suggests that Jottacloud could have similar (fair usage) limitations. The limitations are most likely per account (like OneDrive), in which case it doesn't help to use chunker or to increase --transfers.

Do you have recent tests/statistics to validate that your change of --transfers improves your transfer speed (and not the opposite)?

What is your upload speed if you do a plain (no extra flags/options):

   rclone copy I:\FolderWith10FilesOf1GB jottacloud:FolderWith10FilesOf1GB
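
and then compare that baseline (the default is --transfers 4) against a couple of higher values, something like this (hypothetical target folders; --stats 10s just makes the throughput easier to read off):

   rclone copy I:\FolderWith10FilesOf1GB jottacloud:FolderWith10FilesOf1GB_t10 --transfers 10 --stats 10s
   rclone copy I:\FolderWith10FilesOf1GB jottacloud:FolderWith10FilesOf1GB_t100 --transfers 100 --stats 10s

If the throttling is per account, the aggregate speed should be roughly the same in all three runs.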

I tried the tests as you described now; you can see that it normalises at 560 kilobytes per second:

[screenshot of transfer statistics omitted]

Good illustration, I got it - I hope :wink:

I don't think it is possible to prioritize the individual transfers of chunks, only whole files, as you already know.

It may be a naive question, but why don't you reduce the chunk size to something smaller? That would allow your smaller files to blend in.

I chose a fairly high number as I figured it would be "enough". I figured that chunking everything into 100 MB files would probably cause more overhead, and that I could save a lot of wasted processing power etc. if I only chunked the really big files.

In my mind, it would be "logical" for rclone to consider each chunk a separate upload and allocate more sessions to it (reasoning being: why would it not?). I guess this comes from the structure of rclone's / chunker's transfer/session implementation: the chunker remote is tasked with transferring x upstream files in parallel, not with using x parallel transfers.

Is this something that could easily be reconfigured or implemented?

You are right, it is all about finding the right balance for your usage and data.

I don’t think so. You can create a forum issue (type: enhancement) to propose this as a new feature, but I doubt it will be implemented in the foreseeable future. Take a look at the open issues/enhancements. I see many things with higher need/value/priority.
