Azure container to container

I'm trying to use rclone to copy the contents of a container in one storage account to a container with the same name in a second storage account. I get the error: nothing to do as source and destination are the same.

I’m using:
rclone copy --azureblob-sas-url SOURCESAS :azureblob:CONTAINER-X --azureblob-sas-url TARGETSAS :azureblob:CONTAINER-X

Running rclone lsd with each individual SAS URL works.

I think you'll need to make a different remote for each one - the detection for source and destination being the same doesn't take the --azureblob-sas-url flag into account. Note also that the flag will apply to both :azureblob: remotes and the other occurrence will be ignored, which probably isn't what you wanted.
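For example, something like this in the config file - two named remotes, each carrying its own SAS URL via the azureblob backend's sas_url option (the names src and dst are placeholders):

[src]
type = azureblob
sas_url = SOURCESAS

[dst]
type = azureblob
sas_url = TARGETSAS

and then:

rclone copy src:CONTAINER-X dst:CONTAINER-X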

Hi NCW, I just tested using a different container as the target (with a new SAS URL). It then starts complaining about the container name in the SAS URL not matching, so it does look like the second --azureblob-sas-url is ignored.

Yes, I'm pretty sure the second --azureblob-sas-url will be ignored. Ideally the flag library rclone uses would warn about this. I did make an issue about it a while back.

Note that if you don't want to make a second remote in the config file, you can synthesize remotes from environment variables too.
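A sketch of that approach, using rclone's RCLONE_CONFIG_<NAME>_<OPTION> environment variables (SRC and DST are placeholder remote names):

# Define two on-the-fly remotes entirely from the environment.
export RCLONE_CONFIG_SRC_TYPE=azureblob
export RCLONE_CONFIG_SRC_SAS_URL='SOURCESAS'
export RCLONE_CONFIG_DST_TYPE=azureblob
export RCLONE_CONFIG_DST_SAS_URL='TARGETSAS'
rclone copy SRC:CONTAINER-X DST:CONTAINER-X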

Hello

I tried to do the same thing, but with another account, and nothing happens.
What I want to do is:

  • copy from a container in one account to another container in another account
  • and apply an --include 'filename*' filter (a sketch of the filtered command appears at the end of this post)

I tried this command, but nothing happens:

./rclone copy -v -P 'account1:container1' 'account2:container2'

Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 0
Checks: 0 / 0, -
Transferred: 0 / 0, -
Elapsed time: 2m56s^C

It works if I copy a single file:
./rclone copy -v -P 'account1:container1/file1' 'account2:container2'

and copying from local to a container works too:
./rclone copy -v -P '/folder' 'account2:container2'

So I'm pretty sure it's not a configuration problem.
Am I missing something in the command?
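For reference, the filtered copy mentioned at the top of this post would presumably look something like this, with 'filename*' standing in for the real pattern:

./rclone copy -v -P --include 'filename*' 'account1:container1' 'account2:container2'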

It looks OK to me. If you do rclone ls account1:container1 and the same for account2:container2, does it list the files in the containers?

If you do a copy with -vv and without -P, what does it print?
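That is, something along these lines - --log-file is optional but makes the debug output easier to share:

./rclone copy -vv --log-file rclone.log 'account1:container1' 'account2:container2'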

When I run rclone ls, it works on both containers (the second one was empty since it's the target, but I tried with a file too).

By the way, my source container has 81 million blobs (append blobs),
and my parameters are at their defaults (5000 blobs per listing page).

Without -P, the transfer still seems to do nothing:

Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 0
Checks: 0 / 0, -
Transferred: 0 / 0, -
Elapsed time: 1m0s

2019/06/27 13:27:40 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 0
Checks: 0 / 0, -
Transferred: 0 / 0, -
Elapsed time: 2m0s

2019/06/27 13:28:40 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 0
Checks: 0 / 0, -
Transferred: 0 / 0, -
Elapsed time: 3m0s

OK, I tried a smaller source container (26,315 blobs) and it works.
I wonder if rclone tries to list/check all the blobs before doing the copy, and whether that process takes a long time when there are millions of files?

How many files do you have in a directory? Rclone lists the files one directory at a time, compares the source and destination, then moves on.

I know azureblob storage doesn't really have directories, but that is how rclone sees it, since it is built to interface with filesystems.

So do you have millions of files in one directory? That will be very slow to get going and use a lot of memory.
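A quick way to check is rclone size, which reports the total object count - though it too has to walk the whole container, so expect it to take a while on a large one:

rclone size account1:container1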

Yes, I have 81 million files in the source directory and no files in the target.
I guess the listing takes some time, since it fetches 5000 files at a time - that's why I don't see any progress.
It would be great if the copy started with the first fetch.

Millions of files in a single directory is rclone's weak spot. It needs to read the whole directory in first so that it can sync the source and destination. That is going to take a lot of memory too.
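One possible workaround - a sketch only, assuming the blob names are spread across leading characters (the prefix set here is illustrative) - is to split the job into several filtered copies. Each pass still lists the whole container, but it should hold far fewer entries in memory at once:

# Hypothetical prefix split: one filtered copy per leading character.
for p in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
  rclone copy --include "${p}*" account1:container1 account2:container2
done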
