Sorry if this is a silly question, but I am really having a difficult time understanding how the chunk size and upload cutoff arguments should be used.
I have some large files I want to upload to Backblaze (backup archives 500GB+ in size). Here is the command I have constructed:
I expected the -v output to show 250M chunks of the files uploading fairly quickly (1 Gbps connection), but I do not. Any insight would be helpful!
This means that you want the b2 chunks to be 250M. Remember these are buffered in memory, so you'll need --transfers worth of them, i.e. 6 × 250M = 1.5G of memory.
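For a rough sense of that buffering cost, here is a quick arithmetic sketch (the 6 transfers and 250M chunk size are assumed from the flags under discussion; the calculation is just transfers × chunk size):

```shell
# Rough memory estimate for chunked b2 uploads:
# each in-flight transfer buffers one full chunk in RAM.
transfers=6        # assumed --transfers value
chunk_mib=250      # assumed --b2-chunk-size in MiB
echo "$((transfers * chunk_mib)) MiB buffered"   # 6 * 250 = 1500 MiB, about 1.5G
```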
This means that for files < 5000M you want them transferred as single files rather than via chunked upload.
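For reference, the flags being discussed would look something like this in a full command (the source paths and bucket name here are placeholders I've made up; the flag values are the ones from the thread):

```shell
# Hypothetical invocation matching the settings discussed:
# 250M chunks, single-file uploads below 5000M, 6 parallel transfers.
rclone sync /data/archives b2:my-bucket \
  --b2-chunk-size 250M \
  --b2-upload-cutoff 5000M \
  --transfers 6 \
  -v
```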
Thank you kind sir. I was worried I had misinterpreted some of the documentation. I'll try with -vv to see if I get the type of feedback I am looking for.
The large files seem to spend a lot of time being read from disk without anything transferring. I am assuming this process has to read the entire file on disk and check it against the cloud version before the transfer will start?
I am getting closer, but I am still getting an error on one of the larger files I am trying to upload. The documentation at B2 seems to say they support files up to 10TB, and this file is well short of that.
File size too big: 5235605504 (400 bad_request)
2018/08/14 12:46:08 ERROR : B2 bucket Veeam1: not deleting files as there were IO errors
2018/08/14 12:46:08 ERROR : B2 bucket Veeam1: not deleting directories as there were IO errors
2018/08/14 12:46:08 ERROR : Attempt 3/3 failed with 1 errors and: File size too big: 5235605504 (400 bad_request)
2018/08/14 12:46:08 Failed to sync: File size too big: 5235605504 (400 bad_request)
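A sanity check on the numbers may explain this (this is my assumption about what is happening, not something stated in the thread): rclone interprets 5000M as 5000 MiB, while B2's limit for a single (non-chunked) upload is 5 GB decimal, so a file can fall under the cutoff yet still be too big for a single upload:

```shell
# Why 5235605504 bytes could be rejected as "File size too big":
cutoff_bytes=$((5000 * 1024 * 1024))   # --b2-upload-cutoff 5000M = 5000 MiB
b2_single_limit=5000000000             # B2 single-file upload limit: 5 GB (decimal)
file_bytes=5235605504                  # size from the error message
[ "$file_bytes" -lt "$cutoff_bytes" ] && echo "under the rclone cutoff: tried as one upload"
[ "$file_bytes" -gt "$b2_single_limit" ] && echo "over B2's 5 GB single-upload limit: rejected"
```

If that is right, lowering --b2-upload-cutoff (so this file is forced onto the chunked upload path) should avoid the 400 bad_request.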