Recommendations for ACD->GSuite/Other transfers

Is there a way to limit the sync to a certain subfolder, for those of us with way more data than the 10TB disk on GCE? I will need to move the data in 10TB chunks.

you can just request a quota raise.
asked for 20TB, got it hassle-free in 2 minutes.
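
Once the quota is raised, the disk and the filesystem still need growing; something like this should do it (the disk name is a placeholder and the zone is just the one used later in this thread, adjust both):

gcloud compute disks resize transfer-disk --size=20TB --zone=europe-west1-d
sudo resize2fs /dev/sdb    # assuming an ext4 data disk attached as /dev/sdb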


I’m currently using the Google Compute Engine method as well, thanks for that @Philip_Koninghofer.
Instead of the three parallel screen sessions I used one screen session with the command:

exec 6>&1   # duplicate stdout to fd 6 so tee can still echo progress from inside $(...)
num_procs=10
output="go"
while [ "$output" ]; do   # repeat until a pass finds nothing left to sync
  output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/Media/" -name "*.cloud*" -print0 \
    | xargs -0 -n 1 -P "$num_procs" "$HOME/.odrive-agent/bin/odrive.py" sync \
    | tee /dev/fd/6)
done

If you get a "permission denied" error, just chmod a+x ~/.odrive-agent/bin/odrive.py and try again.

This starts 10 threads for syncing, according to the author of the topic on the odrive forums, and gives me higher speed than the other method. I’m averaging around 700Mbit/s right now (checked with nload), but sometimes hit peaks of 1.3Gbit/s. According to a calculator, an 8TB transfer will take around 24 to 30 hours, so I hope the data is transferred by the end of the weekend and I can start uploading it to G-Drive again.
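
That estimate holds up to a quick back-of-the-envelope check (assuming a steady 700 Mbit/s):

# 8 TB = 64,000 Gbit; at 0.7 Gbit/s that is ~91,000 s, i.e. ~25 hours
echo "scale=1; (8 * 8 * 10^12) / (700 * 10^6) / 3600" | bc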

Edit: Just to let you know, I tried other methods as well, since I thought my expired odrive trial would cause issues on Linux too, but it seems there is no issue with syncing the whole drive when using the agent. Also, neither multcloud, cloudhq, nor Amazon’s own client were able to download as fast as Google Compute Engine, so I guess it’s the best bet right now.


currently people (including me) get about 100MB/s, which is 800Mbit/s.
120MB/s is even 960Mbit/s.

anyhow, don’t you think 10 threads is a bit aggressive? it seems 10 vs. 3 or 4 does not give that much speed advantage, but would hammer the Amazon API three times harder.

does anyone have experience with ACD blocking an account for excessive API usage? what limit do you recommend?

This has the advantage of only running find once and dispatching to multiple odrive calls, so it avoids "This file doesn't exist" errors.

Unfortunately I’m getting a significant amount of these:

Unable to sync mqvqvijjrp02d0pqnr6q...nec6fpbh8hago0.cloud.  Amazon Cloud Drive internal server error.

I suspect those are the “file name too long” issues, but I won’t know whether they’re persistent errors until the rest syncs and I’m only left with those. I’m a bit worried about how reliable odrive really is in this scenario.

I see.

I am also worried a bit.
BUT I used to see them a lot when using odrive about a year ago. Maybe they are just not as smooth at handling API timeouts as rclone is.
Currently I am thinking/hoping that another run will just pick them up, since the paths do not seem that long to me (in my case).

If so, there is still the possibility of downloading the missing files another way, since odrive is nice enough to print the paths accordingly.

since I already have about 50% of my 15TB moved to gdrive I am not that worried. I would have guessed that it would have affected more files/paths if that were the case. we will see in a few hours. :)

@Saviq keep me updated.

I’m also receiving a couple of internal server errors from ACD, but I think a second run will fix this; as far as I know, non-transferred files still have the .cloud extension on them, so that would work.
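
A quick way to check whether anything is left over after a run (using the same mount path as the loop above) is to count the remaining placeholders:

find "$HOME/odrive-agent-mount/Amazon Cloud Drive/Media/" -name "*.cloud*" | wc -l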

Last September I did my initial transfer from home (6TB) to ACD with odrive on Windows without any issues, so I have faith that this will work. 0.8TB of 8TB transferred…

i just browsed around the odrive forums and my best guess is API timeouts for now. it really looks like they are not that good at handling them over at odrive, which would mean the errors are only temporary and most likely random. fingers crossed for my second sync run :)

Just realized there’s one more advantage there: checksums, which are still very much a problem with rclone’s crypt. With encfs under rclone, syncing with checksums will Just Work™.
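
The rough shape of that setup, for anyone curious: encfs in reverse mode exposes an encrypted view of the plaintext, so rclone uploads ciphertext whose server-side MD5s are directly comparable. The directory names and the gdrive remote below are placeholders, not from this thread:

encfs --reverse "$HOME/plain-media" "$HOME/cipher-view"   # encrypted view of the plaintext (prompts for a password on first run)
rclone sync "$HOME/cipher-view" gdrive:encrypted          # uploads ciphertext, so MD5s match what Google stores
rclone check "$HOME/cipher-view" gdrive:encrypted         # checksum verification now works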


So, a cheaper method (in terms of not using all your $300 credits): I spun up a Windows Server 2012 instance with basically no storage, installed ExpanDrive, and am doing a copy from ACD to Gdrive. I’m getting around 40-50MB/s. One advantage: you don’t have to rclone to Gdrive later, since the copy already goes straight from ACD to GDrive. Downside: you can only do 10TB blocks at a time, because ExpanDrive presents both drives as 10TB each. It’s an alternative if you don’t want to worry about Linux; not sure which is faster in the end.

multcloud also shows only 10TB each (gdrive and acd),
but I guess this is just the GUI;
it can do more.

Rclone tells me “the system is not functioning” after a lot of transfers from ExpanDrive. Do you have the same problem?

How are you guys getting 700Mbit/s? Here is my average download using the 10-thread method:

For me, it depends on the file size. I have some .mov home videos that are 40GB or so, and during those transfers I hit the top speed. On folders with a lot of smaller files the speeds are lower, although all my transfers are now quite fast.

I’m running an n1-highcpu-2 (2 vCPUs, 1.8 GB RAM) on Google Compute Engine in europe-west1-d,
since my ACD is in Germany.
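
For anyone wanting to reproduce that, provisioning a similar instance boils down to roughly this (the instance name and disk size are placeholders; the zone is the one mentioned above):

gcloud compute instances create acd-transfer \
  --machine-type=n1-highcpu-2 \
  --zone=europe-west1-d \
  --boot-disk-size=10TB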

After the night, speeds went up:

maybe 10 threads is too many, or maybe it’s the region you provisioned your instance in.
i am running europe-west1-c

Yeah, I am in us-east1-d

What is your upload method to Google Drive? Is this also using odrive? And if so, what is the proper way to do it: just mv the Amazon folder to the Google folder and sync?

Not sure what the right US location is if your ACD is located in the US as well. I’m now getting lower speeds too though; probably just per-file variation, since it looks like the larger files have already been fetched.

I’m not yet uploading to G-Drive; I will try rclone first. Not sure what the right way would be with odrive; I’ll let you know how I did it once all my data is downloaded to the GCE instance.
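
If it ends up being rclone, something along these lines is probably the starting point (assumes a gdrive remote was already set up with rclone config; the remote name and paths are placeholders based on the mount layout used earlier in the thread):

rclone copy --transfers=8 --checkers=16 \
  "$HOME/odrive-agent-mount/Amazon Cloud Drive/Media" gdrive:Media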

Can someone point me to a guide for using odrive to sync between ACD and gdrive on Linux? I have 30TB I need to get off of ACD.
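
Until someone posts a full guide: the Linux setup used earlier in this thread boils down to roughly the following, paraphrased from odrive’s own Linux CLI agent docs (the auth key placeholder comes from your odrive account page). After mounting, the find/xargs loop near the top of the thread does the actual downloading:

od="$HOME/.odrive-agent/bin"
curl -L "https://dl.odrive.com/odrive-py" --create-dirs -o "$od/odrive.py"   # CLI client
curl -L "https://dl.odrive.com/odriveagent-lnx-64" | tar -xvzf- -C "$od/"    # agent binary
nohup "$od/odriveagent" > /dev/null 2>&1 &                                   # run the agent in the background
python "$od/odrive.py" authenticate <your-auth-key>
mkdir -p "$HOME/odrive-agent-mount"
python "$od/odrive.py" mount "$HOME/odrive-agent-mount" /                    # mount all linked storage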