Copying Files Within a B2 Bucket

Here is the command I am trying to run:

```
C:\rclone>rclone move :remote1/BB2-DeploymentsTest/CBB_S9SERVER02 :remote1/BB2-DeploymentsTest/MBS-c78ef851-0588-453b-ac8c-7191d9a55544/CBB_S9SERVER02
```

And here is my config file:

```
[remote1]
type = b2
account = #########
key = ##########
```

#### What is the problem you are having with rclone?

I get this error when executing the command:

```
2020/05/26 09:45:28 Failed to create file system for ":remote1/BB2-DeploymentsTest/CBB_S9SERVER02": config name contains invalid characters - may only contain 0-9, A-Z ,a-z ,_ , - and space
```

#### What is your rclone version (output from rclone version)

rclone-v1.51.0-windows-amd64

#### Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 10 64bit

#### Which cloud storage system are you using? (eg Google Drive)

Backblaze B2

#### The command you were trying to run (eg rclone copy /tmp remote:tmp)

```
C:\rclone>rclone move :remote1/BB2-DeploymentsTest/CBB_S9SERVER02 :remote1/BB2-DeploymentsTest/MBS-c78ef851-0588-453b-ac8c-7191d9a55544/CBB_S9SERVER02
```

#### The rclone config contents with secrets removed.  


```
[remote1]
type = b2
account = #########
key = ##########
```


#### A log from the command with the `-vv` flag  


```
2020/05/26 09:55:20 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "move" ":remote1/BB2-DeploymentsTest/CBB_S9SERVER02" ":remote1/BB2-DeploymentsTest/MBS-c78ef851-0588-453b-ac8c-7191d9a55544/CBB_S9SERVER02" "-vv"]
2020/05/26 09:55:20 Failed to create file system for ":remote1/BB2-DeploymentsTest/CBB_S9SERVER02": config name contains invalid characters - may only contain 0-9, A-Z ,a-z ,_ , - and space
```

It should be:

```
rclone move remote1:BB2-DeploymentsTest/CBB_S9SERVER02 remote1:BB2-DeploymentsTest/MBS-c78ef851-0588-453b-ac8c-7191d9a55544/CBB_S9SERVER02 --dry-run
```

I believe. Notice the colons after the remote name and not before.
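
Not mentioned in the thread, but a quick way to sanity-check the remote name and colon placement is to list the buckets on the remote first:

```
C:\rclone>rclone lsd remote1:
```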

Thank you Rob! I knew I had to be doing something really dumb. This is my first attempt at using RClone, I really appreciate the quick help!

No problem. I highly suggest you test with --dry-run BEFORE you run anything.

Got it... I ran it with the dry-run switch and got good-looking info. I then ran it for real and it looks like it copied things very well. This was just test data to start out with; I have a 30TB bucket that I need to move this way, and I am working my way up to copying it once I am confident in my use of the commands. I think I will try a larger test sample of data next.

So far most things are working well with rclone, but I have run into one (expected) issue. When it gets to files that are over 5GB I get an error:

```
Failed to move: Copy source too big: 5899789057 (400 bad_request)
```

Is there a provision in rclone that will allow moving files over 5GB within a bucket as a server-side copy? Looking at the B2 b2_copy_part API call, I believe it is possible; I am wondering whether this can all be done within rclone, or whether it is necessary to use B2's own tools to accomplish it.

Ok, so I have found the chunker overlay and set up a test of one directory with a >5GB file in it, and it seems to be transferring just fine. My one concern, which I am not sure how to check, is confirming that this is a server-side copy and not a download and re-upload.
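
The thread doesn't show the chunker overlay config, but for reference it would look roughly like this in rclone.conf (the remote name "overlay", the source path, and the chunk size here are assumptions for illustration):

```
[overlay]
type = chunker
# wraps the existing B2 remote; point this at the source bucket/path (placeholder)
remote = remote1:SOURCE-BUCKET/SOURCE-FOLDER
# files larger than this are split into chunks on upload (value assumed)
chunk_size = 2G
```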

Well, I spoke too soon. The command ran for hours and seemed to download and upload way over 5GB of data from my PC. I had Glasswire running, and while running the sync on the chunker overlay it consumed 32GB of data. Here is the result of the command I had to eventually CTRL+C out of:

```
C:\rclone>rclone sync overlay: remote1:S9NAS01-Test/MBS-c78ef851-0588-453b-ac8c-7191d9a55544/CBB_S9SERVER02 --progress
Transferred:       25.860G / 30.082 GBytes, 86%, 1.851 MBytes/s, ETA 38m56s
Transferred:            5 / 6, 83%
Elapsed time:   3h58m28.9s
Transferring:
 * G$/CBBTesting/180920_F…LE TRANSFER TEST 2.zip: 23% /5.495G, 1.495M/s, 48m11s
```

Looks like I am going to have to find a different way of doing this.

Is it possible to confirm whether this task cannot be done in rclone and would require B2's native tools?

There is a bug about this which I didn't have time to fix for the 1.52 release.

If I had a go at it, would you be up for testing?


Hey Nick,

Sure, I would be happy to test it out.

Would this allow the server-side copying of files (both over and under the 5GB B2 limit) between folders in the same B2 Bucket?

Yes it would.

Below 5GB works fine now. Above 5GB is where the problem is!

I've had a go at fixing this here. This adds a new flag --b2-copy-cutoff - above this limit, files will be server-side copied in --b2-copy-cutoff sized chunks. I set the default quite conservatively to 4G, but it might be quicker to set it lower. In my testing with copying 1GB-ish files, setting it to 100M made it quicker... so some experimentation with this value would be useful!

I've also reworked the multipart transfer code - I think it is all working, but please report bugs!

https://beta.rclone.org/branch/v1.52.0-004-g257a890f-fix-3991-b2-copy-beta/ (uploaded in 15-30 mins)
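
For anyone following along, an illustrative invocation with the beta's new flag might look like the line below; the 1G value is just one point to try in the experimentation suggested above, not a recommendation:

```
C:\rclone>rclone move remote1:BB2-DeploymentsTest/CBB_S9SERVER02 remote1:BB2-DeploymentsTest/MBS-c78ef851-0588-453b-ac8c-7191d9a55544/CBB_S9SERVER02 --b2-copy-cutoff 1G -vv
```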

Hi guys, sorry to barge in, but would this also cause me to see the following error when trying to make a copy of a large file on B2? I.e. make a copy within the same bucket but in a different "folder"?

```
2020/06/03 10:14:13 ERROR : file.bin: Failed to copy: Copy source too big: 30074455444 (400 bad_request)
```

Hey Nick,

I really apologize for the delay. Other demands got in the way and I just got a chance to test out the beta version you sent over.

It appears as if the function to copy large files is now being used; when it gets to those files it does not give errors, and it shows them transferring:

(image removed)

When I check the destination of that file entry in the B2 GUI, it shows it as "Started Large File":

(image removed)

But I think it is getting held up somewhere and is not able to complete the process: it finishes all the smaller files and then seems to get stuck on the larger files. The move task keeps running and does not seem to show any progress happening in the GUI:

(image removed)

But when I check the B2 GUI, the bucket size keeps getting larger and larger the longer I let it go:

(image removed)

Oddly enough, the bucket size seems to keep growing out of proportion to the folders themselves. The destination folder seems to be getting larger than the source, but the sizes of the two do not seem to add up to the bucket size it is reporting.

I hope these notes help you out. If you need me to try anything else please feel free to hit me up.

(I hope this image is readable. I got to the end of this and tried to post it, and I got an error that said I could only post one image. So, I made the whole thing an image and added it here. I spent a bunch of time making good notes and did not want to waste it!)

So rclone looks like it is running and, according to the B2 website, stuff is being transferred, but the stats aren't updating...

Did rclone complete Ok in the end?

I've just checked the code for server-side copy - it doesn't report progress as it goes along, since normally it is very quick. Making that work would need some infrastructure rclone hasn't got.

I actually just CTRL+C'd out of it as it had been running for 4 hours. It did not seem to show any progress in the CLI in the last 2-3 hours, so I assumed it was not going to finish. This test set of data was 41GB, so I assumed that would have been plenty of time to complete.

Would that require the use of the Chunker tool?

I think it is probably working but not showing you progress properly. Could you try setting --transfers 1 and trying again one file at a time - see if we can get one large file to complete?
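
For reference, that suggestion would look roughly like this, reusing the paths from earlier in the thread (a sketch, not necessarily the exact command that was run):

```
C:\rclone>rclone move remote1:BB2-DeploymentsTest/CBB_S9SERVER02 remote1:BB2-DeploymentsTest/MBS-c78ef851-0588-453b-ac8c-7191d9a55544/CBB_S9SERVER02 --transfers 1 --progress -vv
```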

Nick,

Success!

The first time I tried it, I used just one 6GB file in the bucket and it went right through in about 4.5 minutes. I then tried it against the last job I had cancelled: it had already moved all the files that were under 5GB, and this run moved all the rest over. I then reset the whole bucket to how it originally was and ran it once more with the --transfers 1 switch, and it completed the whole 33GB in about 55 minutes.

I tested out my backup software and everything works in the migration for this test data set. Thank you so much for taking the time to help me work through it. The more I use rclone the more I really like it. It is a really elegant solution to a complex problem, thank you for all your work.

My full data sets that I need to transfer range from 3TB to 34TB. I am getting ready to begin that migration, but I wanted to see if you would recommend using the same --transfers 1 switch on a larger data set like 34TB?

It seems that even if the job gets stopped we can easily just run it again and finish off moving the files that have not yet been moved. Is there anything I should keep an eye out for if running this for an extended period of time on such a large data set?

Thanks again for everything. I really appreciate your time!

Jason

Glad it is working for you @asdf1234

You shouldn't need to set --transfers 1 - I was only getting you to set that for debugging. B2 works very efficiently with more transfers. The default is 4 - you can try that or go larger.
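
So for the larger migration, the command might look something like the line below; the bucket and folder names are placeholders, and 8 is just an example of going above the default of 4:

```
C:\rclone>rclone move remote1:SOURCE-BUCKET/SOURCE-FOLDER remote1:SOURCE-BUCKET/DEST-FOLDER --transfers 8 --progress
```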

That is correct. Rclone is designed to be run as many times as you want or need.

I think it should work just fine :slight_smile:


Thanks again Nick.

Should I continue to use the Beta version you sent over, or should I use the latest distribution you have published?