Is it safe to use “move” directly, instead of “copy” or “sync”, to improve efficiency when moving millions of small files in a complex directory structure from ACD to Google Drive?

I’m moving data from my ACD to Google Drive using “sync”. The speed was great at first, but it keeps getting slower and shows no sign of recovering. I tried creating my own client ID and secret, and that worked at first, but it no longer helps. I’ve tried creating a new ID and secret, creating a new project, switching to another Google account and doing the same thing, and shutting down the VM and waiting another 24 hours. But the best speed I can get now is around 2 MB/s, much slower than the 20 MB/s of my first transfer.

I’m worried because I have 33 TB in my ACD, spread across millions of small files in complicated directories, and I only have about six months to move them. I believe the speed issue is caused by the drive itself, not by the different IDs, projects, or Google accounts. Even so, “sync” and “copy” will still check every file. So I need to change my strategy to improve the speed, and I’m thinking of using “move” directly, since I’ll be saying goodbye to ACD anyway.

My only concern is whether this is safe. My data is encrypted by Arq Backup, so the directory structure is very complicated; if a single file fails to copy, or the directory structure gets altered, I guess it could make the entire backup unreadable.

Any ideas on this, or any suggestions to improve the speed? Much appreciated!

PS: I just tried some “move” runs and noticed that empty folders are not moved??

Empty folders are not deleted by default.
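Depending on your rclone version there are flags for handling empty directories explicitly; a rough sketch, assuming your version supports these flags and using acd: and gdrive:fromACD as example remote names:

```bash
# Recreate empty source directories on the destination, and remove
# empty directories left behind on the source after the move.
# Both flags are version-dependent; check rclone move --help.
rclone move acd: gdrive:fromACD \
    --create-empty-src-dirs \
    --delete-empty-src-dirs \
    -v
```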

Move, copy and sync will not transfer files any faster than one another. The only overhead is re-verifying lots of files over and over when you restart, but if you leave it to transfer in one go, that won’t matter.

There is a limit on how many files can be moved/edited/deleted on Google Drive, which I think is your problem. Are you using --transfers=200, for example?

Thanks for your reply Qwatuz! I definitely want to leave it running in one go, but I’m afraid I can’t in the current situation: I have to restart over and over to get higher speeds out of “sync”, and I’m afraid the same thing will happen with “move”. My concern is just: if I need to restart a “move” command, could it cause any data loss? The docs say it only deletes a source file after it has been successfully transferred, so I guess it should be fine, but I wanted to see if anybody has experience with this.
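In the meantime, I figure I can at least preview what a restarted “move” would actually do with a dry run first; a quick sketch with my remotes:

```bash
# Preview a restarted move: -v lists every action rclone would
# take, and --dry-run guarantees nothing is transferred or deleted.
rclone move acd: gdrive:fromACD --dry-run -v
```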

Yes, I did use very high transfers and checkers; I tried 20, 40, 75, 200, 250… but found that 75 normally gives the highest rate, so I’m staying at 75 for now.

Last night I tried shutting down the VM and restarting it (and rclone) after 12 o’clock California time, and the speed recovered for a while, but only for 2~3 hours. I’ll try the same thing tonight and see if there’s a pattern.

Thanks again!

What is the average file size, what is the FULL command you’re using, and what speed are you getting with rclone on these transfers?

Average size, I would say, is about 14 MB… and there are 33 TB of them in total. The full command I’m using is:

rclone sync acd: gdrive:fromACD --transfers 50 --checkers 75 --stats 1s -v

My speed could reach 18~20 MB/s at first and last for about 2 hours, before finally dropping to around 100 KB/s or less.

But my first and second days with rclone were awesome (on the second day I changed the client ID and secret after noticing a speed drop about 6 hours into the first day’s run). I transferred 700 GB on each of those days; now the best I can get is around 200 GB per day, despite trying everything back and forth as I mentioned in my first post.

PS: Correction to my first reply: I’m staying at 50 transfers, not 75.

Okay, some things you can do:

Use --checkers 300 --transfers=150

Use a new server for the transfer (are you using a VPS?). Try out Google Compute: https://cloud.google.com/compute/ They give you $300 of free credit when you sign up, and sending data TO Google Drive is free. Make a VPS with 2 cores and ~6 GB RAM.

Be warned: you can only upload around 750 GB per day to Google Drive.

And sync/move/copy will give the same results if you have enough transfers/checkers.
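Putting that together, something like this (your remote names from above; --max-transfer is an assumption that your rclone version supports it, so the run stops cleanly at the daily cap):

```bash
# Higher parallelism, plus a clean stop at Google Drive's roughly
# 750 GB/day upload cap (--max-transfer is version-dependent).
rclone sync acd: gdrive:fromACD \
    --transfers=150 \
    --checkers=300 \
    --max-transfer=750G \
    --stats 1s -v
```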

Thanks Qwatuz, I’ll definitely try that many checkers and transfers. I’ll pause the current run and try it now.

Yes, I am using Google Compute; I should have mentioned that in my first post. I’m on an 8-core machine (n1-highcpu-8) with 7.2 GB of memory in us-east. Would switching to 2 cores and ~6 GB RAM actually be better than 8 cores and 7.2 GB?

Oh, that’s the limit? I’ll keep that in mind, and okay, I might stick with “sync” here since I believe it’s safer.
PS: I tried switching back and forth between new and old VMs, within one Google account and across different Google accounts in different projects, to no avail.

I appreciate your help Qwatuz!

Your cores/RAM are more than needed, but it won’t make things worse; it’s fine as is.

Move is 100% safe as the files are checked for successful transfer before deleting the source.
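If you want extra reassurance, you can split it into explicit steps; a sketch with your remotes, where the final move only deletes source files whose copies already verify on the destination:

```bash
# 1) Copy everything; the source is never touched.
rclone copy acd: gdrive:fromACD --transfers=150 --checkers=300 -v

# 2) Verify source and destination match (sizes, and hashes where
#    both sides support them).
rclone check acd: gdrive:fromACD -v

# 3) Move: already-matching files are not re-uploaded, and each
#    source file is deleted only once its destination copy is confirmed.
rclone move acd: gdrive:fromACD -v
```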

You can try deleting the GCE instance and re-creating it in another zone (US-West, for example). Sometimes GCE instances get poor speeds, but usually not as poor as 20 Mbps.

You may be hitting the files-per-second limit, in which case there is little you can do beyond pacing the requests (see the sketch below).
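If it is a per-second cap, pacing rclone’s API calls might at least smooth things out; assuming your rclone build has --tpslimit:

```bash
# Limit rclone to ~5 API transactions per second; the right value
# is a guess, so adjust it while watching the transfer rate.
rclone sync acd: gdrive:fromACD --tpslimit 5 --transfers=50 --checkers=75 -v
```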

Okay, I restarted rclone with 300 checkers and 150 transfers. It ran at 7~8 MB/s for several minutes and has now settled at 2 MB/s. Better than nothing. I’ll try US-West once the speed drops into the KB/s range.

Yes, I think you’re right, and I’m thinking the same thing; that’s why I’m trying to find a pattern. The Google Drive API docs seem to say every limit is a daily quota, so for the next few days I’ll shut down the VM, wait until after midnight, and restart, to see if that’s the pattern.

One day I kept the VM running for 30 hours; the speed dropped to 200 KB/s at around the 3rd hour, but it hadn’t recovered by the 27th or 28th hour (24 hours after the 3rd hour). So right now I’m stopping rclone and restarting the VM, since I guess keeping rclone running doesn’t refresh the daily limit.

I’ll keep you updated. It might be useful information, I guess :slight_smile: