Recommendations for ACD->GSuite/Other transfers

I split the log, it's too big:

  1. https://pastebin.com/mUUwHucC
  2. https://pastebin.com/badYnwDi

Another question: which Linux distro do you use on Google Compute?

since you are using root try:

rm -rf /root/.odrive-agent
od="/root/.odrive-agent/bin" && curl -L "http://dl.odrive.com/odrive-py" --create-dirs -o "$od/odrive.py" && curl -L "http://dl.odrive.com/odriveagent-lnx-64" | tar -xvzf- -C "$od/" && curl -L "http://dl.odrive.com/odrivecli-lnx-64" | tar -xvzf- -C "$od/"
/root/.odrive-agent/bin/odriveagent

If this still does not work, please show the output of:
ls -lah /root/.odrive-agent

I use Debian 8 Jessie (I think it was the default for my account).

It's not working:

root@vpsxxx:~# nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &
[9] 2086
root@vpsxxx:~# nohup "root/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &
[10] 2087
[9] Exit 127 nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1
root@vpsxxx:~# ls -lah /root/.odrive-agent
total 20K
drwxr-xr-x 4 root root 4.0K May 19 15:10 .
drwx------ 9 root root 4.0K May 19 15:10 ..
-rw-r--r-- 1 root root 74 May 19 15:10 .oreg
drwxr-xr-x 4 root root 4.0K May 19 15:10 db
drwxr-xr-x 2 root root 4.0K May 19 15:10 log

Do you have curl installed?

What does 'curl' return?

Without curl, step 1 doesn't work, and curl is installed.

What does 'curl' return? I need the exact command from you.

root@vpsxxx:~# apt-get install curl
Reading package lists... Done
Building dependency tree
Reading state information... Done
curl is already the newest version (7.47.0-1ubuntu2.2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
[11]+ Exit 127 nohup "/root/.odrive-agent/bin/odriveagent" > /dev/null 2>&1
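The Exit 127 above means the shell could not execute the binary, and the earlier ls shows no bin/ directory at all, so the download/extract step never completed. A quick sanity check before retrying nohup (a hypothetical helper; paths follow the install location used in this thread):

```shell
#!/bin/sh
# Hypothetical helper: check whether an odrive agent install directory
# contains the pieces the nohup launch needs, and report what is missing.
check_agent() {
    dir="$1"
    ok=1
    [ -x "$dir/odriveagent" ] || { echo "missing: odriveagent"; ok=0; }
    [ -f "$dir/odrive.py" ]   || { echo "missing: odrive.py"; ok=0; }
    if [ "$ok" -eq 1 ]; then echo "agent install looks complete"; fi
}

# usage: check_agent /root/.odrive-agent/bin
```

If it reports missing files, re-run the curl/tar install step and watch its output for errors instead of launching the agent straight away.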

Re:

ACD BAN/Alternatives discussion (error HTTP code 429)

Responding to @alneven in this thread:

I'm seeing this every now and then when running that same command, but I think it's because I'm running 4 streams in parallel; there is inevitably some clashing as one stream tries to download a .cloud file that has already been downloaded in another stream.
Once the first full run is finished, I’ll run the command again to see if it picks any of these up again.

I have deleted everything,
re-done the find .cloudf,
got the whole folder and file structure,
and now the find .cloud is running and the transfer is working:

rx: 534.63 Mbit/s 7878 p/s tx: 3.57 Mbit/s 6851 p/s
rx: 620.94 Mbit/s 8574 p/s tx: 3.86 Mbit/s 7435 p/s
rx: 609.50 Mbit/s 7944 p/s tx: 3.55 Mbit/s 6843 p/s
rx: 616.13 Mbit/s 8247 p/s tx: 3.63 Mbit/s 7003 p/s
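The recovery steps above (expand the .cloudf folder placeholders first, then download the .cloud files) can be sketched as a small shell helper. This is an illustration, not the thread's exact commands; the helper name is made up, and the mount and odrive.py paths in the usage note are taken from other posts here:

```shell
#!/bin/sh
# two_pass_sync MOUNT SYNC_CMD
# Pass 1: .cloudf placeholders expand into folders that may contain more
# placeholders, so repeat until find comes back empty.
# Pass 2: download the actual .cloud file placeholders.
two_pass_sync() {
    mount="$1"; sync_cmd="$2"
    while [ -n "$(find "$mount" -name '*.cloudf' -print | head -n 1)" ]; do
        find "$mount" -name '*.cloudf' -exec "$sync_cmd" sync {} \;
    done
    find "$mount" -name '*.cloud' -exec "$sync_cmd" sync {} \;
}

# usage (paths as used elsewhere in the thread):
# two_pass_sync "$HOME/odrive-agent-mount" "$HOME/.odrive-agent/bin/odrive.py"
```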

Is there a way to limit the sync to a certain subfolder, for those of us with way more data than the 10TB disk on GCE? I will need to move the data in 10TB chunks.
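One way to attack the question above without a bigger disk: point find at a single subtree instead of the whole mount, so each pass only pulls one chunk. A minimal sketch (the helper name is made up; paths in the usage note follow the mount layout used in this thread):

```shell
#!/bin/sh
# sync_subtree SUBTREE SYNC_CMD
# Run the placeholder sync only inside the given subtree, leaving
# placeholders elsewhere untouched.
sync_subtree() {
    subtree="$1"; sync_cmd="$2"
    find "$subtree" -name '*.cloud*' -exec "$sync_cmd" sync {} \;
}

# usage:
# sync_subtree "$HOME/odrive-agent-mount/Amazon Cloud Drive/Media" \
#              "$HOME/.odrive-agent/bin/odrive.py"
```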

You can just request a quota raise.
I asked for 20TB and got it hassle-free in 2 minutes.


I’m currently using the Google Compute Engine method as well, thanks for that @Philip_Koninghofer.
Instead of the three parallel screen sessions, I used one screen session with this command:

exec 6>&1;num_procs=10;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/Media/" -name "*.cloud*" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done

If you get a permission denied error just chmod a+x ~/.odrive-agent/bin/odrive.py and try again.

This starts 10 threads for syncing, according to the author of the topic on the odrive forums, and gives me higher speed than the other method. I'm averaging around 700Mbit/s right now (checked with nload), but sometimes hit peaks of 1.3Gbit. According to a calculator, an 8TB transfer will take around 24-30 hours, so I hope the data is transferred by the end of the weekend and I can start uploading it to G-Drive again.

Edit: Just to let you know, I tried other methods as well, since I thought my expired odrive trial would cause issues on Linux too, but it seems there is no issue with syncing the whole drive when using the agent. Neither MultCloud, cloudHQ, nor Amazon's own client was able to download as fast as Google Compute Engine, so I guess it's the best bet right now.
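For readability, the one-liner above can be unrolled into an equivalent function (same loop: re-run the parallel sync until a pass produces no output). The helper name and parameters are made up for illustration; the `tee /dev/fd/6` that mirrored progress to the terminal is dropped here, and `-r` (GNU xargs) skips the run when nothing is left:

```shell
#!/bin/sh
# sync_until_done MOUNT SYNC_CMD [PROCS]
# Repeat the parallel placeholder sync until a pass prints nothing,
# i.e. no placeholder was left to sync.
sync_until_done() {
    mount="$1"; sync_cmd="$2"; procs="${3:-10}"
    output="go"
    while [ -n "$output" ]; do
        output=$(find "$mount" -name '*.cloud*' -print0 \
                   | xargs -0 -r -n 1 -P "$procs" "$sync_cmd" sync)
    done
}
```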


Currently people (including me) get about 100MB/s, which is 800Mbit/s;
some get 120MB/s, i.e. 960Mbit/s.

Anyhow, don't you think 10 threads is a bit aggressive? It seems 10 vs. 3 or 4 does not give that much of a speed advantage, but would hammer the Amazon API three times harder.

Does anyone have experience with ACD blocking an account for excessive API usage? What limit do you recommend?
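Nobody in the thread knows Amazon's actual limits, but a simple client-side throttle is easy to sketch: fewer workers plus a per-file delay. Everything here (the helper name, the default of 4 workers and a 1-second delay) is a guess for illustration, not a documented ACD limit:

```shell
#!/bin/sh
# throttled_sync MOUNT SYNC_CMD [PROCS] [DELAY]
# Cap the number of parallel workers and sleep before each sync call so
# the ACD API sees a gentler request rate.
throttled_sync() {
    mount="$1"; sync_cmd="$2"
    THROTTLE_DELAY="${4:-1}"
    export THROTTLE_DELAY
    find "$mount" -name '*.cloud*' -print0 \
      | xargs -0 -r -n 1 -P "${3:-4}" \
              sh -c 'sleep "$THROTTLE_DELAY"; "$0" sync "$1"' "$sync_cmd"
}
```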

This has the advantage of only running find once and dispatching to multiple odrive calls, so it avoids "This file doesn't exist" errors.

Unfortunately I’m getting a significant amount of these:

Unable to sync mqvqvijjrp02d0pqnr6q...nec6fpbh8hago0.cloud.  Amazon Cloud Drive internal server error.

I suspect those are the “file name too long” issues, but won’t know they’re persistent errors until the rest syncs and I’m only left with those. I’m a bit worried how reliable odrive really is in this scenario.

I see.

I am also a bit worried.
BUT I used to see them a lot when using odrive about a year ago. Maybe they are just not as smooth at handling API timeouts as rclone is.
Currently I am thinking/hoping that another run will just pick them up, since the paths do not seem that long to me (in my case).

If so, there is still the possibility of using another way to download the missing files, since odrive is nice enough to print the paths accordingly.

Since I already have about 50% of my 15TB moved to gdrive, I am not that worried. I would have guessed it would have affected more files/paths if that were the case. We will see in a few hours. :slight_smile:

@Saviq, keep me updated.

I'm also receiving a couple of internal server errors from ACD, but I think the second run will fix this; as far as I know, non-transferred files still have the .cloud extension on them, so that would work.

Last September I did my initial transfer from home (6TB) to ACD with odrive on Windows without any issues, so I have faith that this will work. 0.8TB of 8TB transferred…

I just browsed around the odrive forums and my best guess for now is API timeouts. It really looks like odrive is not that good at handling them, which would mean the errors are only temporary and most likely random. Fingers crossed for my second sync run :slight_smile:

Just realized there's one more advantage there: checksums, which are still very much a problem with rclone's crypt. With encfs under rclone, syncing with checksums will Just Work™.


So, a cheaper method (in terms of not using all your $300 in credits): I spun up a 2012 Server with basically no storage, installed ExpanDrive, and am doing a copy from ACD to GDrive. I'm getting around 40-50MB/s. A few things here: you don't have to rclone to GDrive later, since you're already copying from ACD to GDrive. Downside: you can only do 10TB blocks at a time, because ExpanDrive presents both drives as 10TB each. It's an alternative if you don't want to worry about Linux; not sure which is faster in the end.

MultCloud also shows only 10TB each (GDrive and ACD),
but I guess this is just the GUI;
it can do more.

Rclone tells me "the system is not functioning" after a lot of transfers from ExpanDrive. Do you have the same problem?

How are you guys getting 700Mbit/s? Here is my average download using the 10-thread method.