I want to do a server side copy of my files from one google team drive to another. I am the manager of both team drives.
I am completely inexperienced when it comes to Rclone or running commands, which means I don't know how to do even the most basic of things. I only managed to mount my team drive because of a step-by-step guide on reddit.
Is there a beginner's step-by-step guide on how to copy over my files?
Step1 - Set up basic remotes for both locations.
How to set up remotes is covered by the documentation page: https://rclone.org/drive/
Step2 - Enable server-side transfers. This can be done two different ways.
Use the flag --drive-server-side-across-configs somewhere in your command
Put this line in both Gdrive blocks in your rclone.conf file: server_side_across_configs = true
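For reference, after that edit both blocks in your rclone.conf would look roughly like this (the remote names and team_drive IDs here are just placeholders - yours will be whatever the config wizard filled in):

```
[MyRemoteName1]
type = drive
scope = drive
team_drive = 0ABCxxxxxxxxxxxxxxxxx
server_side_across_configs = true

[MyRemoteName2]
type = drive
scope = drive
team_drive = 0DEFxxxxxxxxxxxxxxxxx
server_side_across_configs = true
```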
Step3 - Transfer files normally using the regular move/copy/sync commands
Rclone will detect if it can use server-side (assuming it is enabled) and use that instead of piping the data through your local computer. If you have verbose output on, it will indicate this in the output - but of course it will also be obvious from just looking at your network graph...
If you are completely fresh off the boat - an example of how to copy from X to Y: rclone copy MyRemoteName1:/backupfolder MyRemoteName2:/backupfolder -P -v
(-P gives you a progress indicator and -v will show verbose output, neither are required but they can be useful to see what is going on while you are still unfamiliar with the process).
Give that a try and come back to ask questions once you get stuck on something.
I am more than willing to help - but there is little point in me explaining how to set up a basic remote when there is both a guide and a built-in configuration menu to help you do this. I'd much prefer answering more specific questions if you get stuck.
Thanks, I'll try it out tomorrow and let you know how it went. I already know how to set up remotes. Quick question though: should I set up a separate API key for the new remote, or use the same one I'm using for the first team drive?
Ideally it may be best performance-wise to have separate API keys, but I think that will rarely matter in practice if you are mainly going to back up to these locations.
The benefit of using the same one across all remotes is that making sure the required data access for the user is in place is much easier. Otherwise you have to make sure to share access for all backup accounts back to the primary user. Teamdrives are easier to manage in this regard.
Tried it out, and it works perfectly! Thanks a lot, man. I'm assuming I'll still run into the 750GB daily limit - will it just throw up an error if I do? And when I start it up the next day, do I have to add anything so it doesn't try to recopy anything it's already copied? Or will it just automatically skip those items?
Rclone will just skip any items it sees has already been copied, so you can start and stop at any time (it will not resume half-finished files though, those have to be started again).
It will throw a "user rate limit exceeded" or something close to that if you go above the limit.
(@Animosity022 - server side copy does not reference or use the bwlimit)
You can also use these two to manage this if you want:
--bwlimit 8.5M (limit speed to 8.5 MB/s, or about 68 Mbit/s, which is the speed at which you can upload 24/7 and never reach the limit)
--max-transfer 730G (stop the process once 730GB of data has transferred. Note that rclone does not track whether you already used part of your quota today, so this only helps to the degree that you know your quota for the day is mostly unused).
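The 8.5M number comes from simple arithmetic: 750 GB of quota spread over the 86400 seconds in a day. You can sanity-check it yourself in a plain shell (no rclone needed):

```shell
# 750 GB/day divided by 86400 seconds/day, expressed in MB/s
awk 'BEGIN { printf "%.2f\n", 750 * 1000 / 86400 }'
# prints 8.68 - so --bwlimit 8.5M keeps you safely under the daily quota
```

Rounding down to 8.5M rather than using the exact 8.68 leaves a little headroom for retries and API overhead.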
But if you run into the limit nothing bad happens... rclone will just keep retrying a lot and eventually error out on transferring some files. Simply running the same command again when you have more quota to use will transfer any files that errored out because of this.
Rclone does not keep a record of what it did last time.
It asks the server what status the files have and goes "hmm, ok, that means I need to transfer these files now to make them the same". It compares on filename, size, modtime and also hashsum (if available).
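As a side note, if you ever want to explicitly verify that source and destination really match rather than trusting the skip logic, rclone's check command does that same comparison without transferring anything (the remote and folder names below are just example placeholders):

```
rclone check MyRemoteName1:/backupfolder MyRemoteName2:/backupfolder -v
```

It will report any files that differ or are missing on either side.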
I'm new here and I hope I'm not doing something wrong writing here
I downloaded rclone because of this post, I mean, I need a program to server-side copy files from a Team Drive (or "Shared Drive", the new name) to another
So, I followed this post
Edited the rclone.config file
Your setup seems correct. Make sure you actually saved the config and restarted the rclone command afterwards for the changes to be active. Also make sure that rclone is using the correct config. Sometimes if you use a non-standard location it can be easy to get confused and have 2 sets of config files.
Before you run the command, you can run rclone config file to see the location of the one which is currently being used.
You can also try using the flag variant I noted above, but this should have the exact same result and there is no point in setting both. It would only really help if indeed you were having some confusion around which config file you were using because the flag is set independently that way.
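For example, to rule out any config confusion you could first print the config location and then run the copy with the flag set explicitly (remote and folder names are placeholders):

```
rclone config file

rclone copy MyRemoteName1:/backupfolder MyRemoteName2:/backupfolder --drive-server-side-across-configs -P -v
```

With the flag on the command line, server-side is enabled regardless of which config file gets picked up.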
Harry is correct about the access stuff - but I think if rclone did not have proper access and server-side is enabled it would error out on permissions rather than silently fall back to copying via local, so I don't actually think this is the problem.
According to NCW, server-side transfers are not guaranteed to work between all types of accounts (thus why it is off by default), but I have never seen or heard confirmed reports of it failing yet.
That is all I can really think of... Most likely this is just some trivial mistake. Double-check all your basics.
I think that is about how to set up a personal API key.
That is not relevant to this problem. Any API key from any account can be used as long as it is authorized to access the data.
What is important in this situation is that the Teamdrive user who has write-access to destination also has read-access to source. Otherwise there is no way for that user to ask the Google server to copy that data. When you do it via your own PC it does not matter if the different places use different accounts without cross-access - but when you server-side transfer it must be this way.
Normally Content Manager should give sufficient permissions to read & write - but I have seen some situations where it actually doesn't work unless you have full Manager access. It is a bit weird, and I'm not sure why this is. Might be a bug - but you should at least be aware of it so you don't get frustrated trying to find out why something doesn't work. I think Manager is mostly required if you actually move files (ie. delete on source) rather than just copy, especially if the account that is deleting is not the same one that uploaded the data. I will leave it at that.