Migration strategy - 50+ SharePoint sites

Hi All,

I'm evaluating rclone for a project we're working on: migrating 50 or so SharePoint sites to matching Google Workspace Shared Drives, and I'm looking for some advice from the seasoned users here.

I have used rclone on individual migrations, but as we are migrating 50, I'd like to be able to run several migrations simultaneously on a single machine.

For example we'll have:
SharePoint A > Google Shared Drive A
SharePoint B > Google Shared Drive B
etc.

Clearly each will take a different amount of time, as they contain different quantities (count and volume) of data. Spinning up one machine per migration might be efficient, but monitoring 50 machines would be a challenge.

Looking at the docs it seems the config file is global to the machine, so a machine can only use one config at a time. Is that correct?

What's the best strategy to run migrations A, B ... n at the same time?

Thanks!

You can use --config to have a different config file.
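
For example, with one config file per site pair (the paths and remote names below are just placeholders):

```
rclone sync sharepoint: shareddrive: --config /srv/rclone/site-a.conf --progress
rclone sync sharepoint: shareddrive: --config /srv/rclone/site-b.conf --progress
```

Each process reads its own file, so they don't step on each other.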

You can also have multiple remotes in a single config file.
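
With everything in one file, `rclone config` would leave you with something roughly like this (remote names are examples, and the drive IDs and OAuth tokens are placeholders that rclone fills in for you):

```
[sp-a]
type = onedrive
drive_type = documentLibrary
drive_id = <sharepoint-site-a-drive-id>
token = <oauth token>

[gdrive-a]
type = drive
scope = drive
team_drive = <shared-drive-a-id>
token = <oauth token>

[sp-b]
type = onedrive
drive_type = documentLibrary
drive_id = <sharepoint-site-b-drive-id>
token = <oauth token>

[gdrive-b]
type = drive
scope = drive
team_drive = <shared-drive-b-id>
token = <oauth token>
```

Then `rclone sync sp-a: gdrive-a:` and `rclone sync sp-b: gdrive-b:` can run completely independently.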

You can do multiple syncs at one time by running multiple rclone processes. You could even kick off all 50 at once.
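
In a shell that can be as simple as backgrounding each sync - a sketch, assuming remotes named as in the config above:

```bash
#!/usr/bin/env bash
# Start every sync as its own rclone process, each with its own log,
# then wait for them all to finish.
for n in a b c; do                      # ...extend to all 50 pairs
    rclone sync "sp-$n:" "gdrive-$n:" \
        --log-file "migrate-$n.log" --log-level INFO &
done
wait
```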

Or if you wanted to be more sophisticated you could kick the syncs off using the remote control API and monitor them like that. This is probably more work than you need for a one-off 50-site migration.
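
If you did go the remote control route, the rough shape would be: start a daemon with `rclone rcd`, start each sync as an async job, then poll its status. A sketch only (see the rc docs for the full parameter list):

```bash
# start the rc daemon - no auth, local only, fine for a one-off box
rclone rcd --rc-no-auth --rc-addr localhost:5572 &

# start a sync as a background job; this returns a job id, e.g. {"jobid": 1}
rclone rc sync/sync srcFs=sp-a: dstFs=gdrive-a: _async=true

# check on it later
rclone rc job/status jobid=1
```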

Is this a one-off migration, or will you need to keep the data in sync afterwards?

How much data is there roughly? And what is the largest sharepoint site?


Amazing, thanks!

One-off migration; they are moving to Google.

800 GB largest

Total ~1238 GB

I'm thinking of using a cloud machine to do the SharePoint > Google migration.

We also have a set of on-prem drives to migrate, about 50 of them and 10 TB total, so we'll be doing a mix to get it all across. For those we'd install rclone on that server, or one in the same rack, to minimise transfer.

That is what I would do. It's relatively cheap. You probably want to use a Google VM so you don't pay for egress to Google Drive.

I'd probably write a script which does all 50 migrations in sequence and do an initial rclone sync, however long that takes. You'd then run the script again to pick up any last-minute changes - this will be much quicker - and you can set the SharePoint sites to read-only for the last transfer, run the script one final time, then declare the drives open for business.
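
As a sketch, the sequential script could be little more than a loop over the site/drive pairs (remote names are placeholders):

```bash
#!/usr/bin/env bash
# Run each site-to-drive sync one after another, logging each separately.
# Extend the list to cover all 50 pairs.
pairs=(
  "sp-a: gdrive-a:"
  "sp-b: gdrive-b:"
)

for pair in "${pairs[@]}"; do
  read -r src dst <<< "$pair"
  echo "=== syncing $src -> $dst ==="
  rclone sync "$src" "$dst" --log-file "${src%:}.log" --log-level INFO \
      || echo "!!! $src failed, carrying on"
done
```

Running the same script again for the catch-up passes just re-syncs whatever changed, which is why the later runs are so much quicker.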

It might be that running 50 migrations in sequence takes too long - if that is the case then I'd run a limited number in parallel. The easiest way to do this is with xargs --max-procs - see this SO answer and the follow-ups for more ideas: How to limit number of threads/sub-processes used in a function in bash - Stack Overflow
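
Something along these lines (pairs.txt is a made-up file with one "source destination" pair per line) keeps four syncs running at any one time:

```bash
# pairs.txt holds one "sp-x: gdrive-x:" pair per line
xargs --arg-file=pairs.txt --max-args=2 --max-procs=4 \
    rclone sync --log-level INFO
```

xargs appends each pair of arguments to the command, so every invocation becomes an independent `rclone sync source dest`, and it starts the next pair as soon as a slot frees up.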
