I have a company. So…
I cannot start the agent:
root@vpsxxx:~# nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &
[1] 1218
I tried this:
root@vpsxxx:~# nohup "$HOME/.odrive-agent/bin/odriveagent"
nohup: ignoring input and appending output to 'nohup.out'
nohup: failed to run command '/root/.odrive-agent/bin/odriveagent': No such file or directory
[1]+ Exit 127 nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1
This is step 2 from the install guide on odrive.
Is this a 64-bit VPS? If so, switch to the 64-bit tab in the odrive install commands.
If that's not it, what is the content of your /root/.odrive-agent/bin directory?
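If the binary is there but exec still fails with "No such file or directory", that usually means the kernel cannot find the binary's loader (for example a 32-bit binary without 32-bit libraries installed). A quick check, assuming the default install path:
file /root/.odrive-agent/bin/odriveagent   # shows whether it is a 32- or 64-bit ELF
ldd /root/.odrive-agent/bin/odriveagent    # any "not found" lines point at missing libraries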
Yeah, I tested this on my VPS (Ubuntu 16.04 LTS x64) and I tried the 64-bit tab too. In this directory I have 3 files:
- odrive
- odrive.py
- odriveagent
I have the same issue on Google Compute!
Try chmod +x on the odriveagent file. If this does not help:
sudo apt-get install strace
strace -f -s 99 $HOME/.odrive-agent/bin/odriveagent
Post the log to pastebin and link it here.
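(If the trace gets long, strace can write it straight to a file with -o, which saves splitting the output by hand; the filename here is just an example:)
strace -f -s 99 -o odrive-trace.log $HOME/.odrive-agent/bin/odriveagent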
I split the log, it's too big.
Another question: which Linux distro do you use on Google Compute?
Since you are using root, try:
rm -rf /root/.odrive-agent
od="/root/.odrive-agent/bin" && curl -L "http://dl.odrive.com/odrive-py" --create-dirs -o "$od/odrive.py" && curl -L "http://dl.odrive.com/odriveagent-lnx-64" | tar -xvzf- -C "$od/" && curl -L "http://dl.odrive.com/odrivecli-lnx-64" | tar -xvzf- -C "$od/"
/root/.odrive-agent/bin/odriveagent
If this still does not work, please show the output of:
ls -lah /root/.odrive-agent
I use Debian 8 Jessie (I think it was the default for my account).
It's not working:
root@vpsxxx:~# nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &
[9] 2086
root@vpsxxx:~# nohup "/root/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &
[10] 2087
[9] Exit 127 nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1
root@vpsxxx:~# ls -lah /root/.odrive-agent
total 20K
drwxr-xr-x 4 root root 4.0K May 19 15:10 .
drwx------ 9 root root 4.0K May 19 15:10 ..
-rw-r--r-- 1 root root 74 May 19 15:10 .oreg
drwxr-xr-x 4 root root 4.0K May 19 15:10 db
drwxr-xr-x 2 root root 4.0K May 19 15:10 log
Do you have curl installed?
What does 'curl' return?
Without curl, the first step would not have worked, and curl is installed.
What does 'curl' return? I need the exact command from you.
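(For example, either of these shows whether curl is present and working:)
which curl       # prints the path if curl is on the PATH
curl --version   # prints the installed version and supported protocols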
root@vpsxxx:~# apt-get install curl
Reading package lists... Done
Building dependency tree
Reading state information... Done
curl is already the newest version (7.47.0-1ubuntu2.2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
[11]+ Exit 127 nohup "/root/.odrive-agent/bin/odriveagent" > /dev/null 2>&1
Re: Responding to @alneven in this thread:
I'm seeing this every now and then when running that same command, but I think it's because I'm running 4 streams in parallel: there is inevitably some clashing when one stream tries to download a .cloud file that has already been downloaded in another stream.
Once the first full run is finished, I’ll run the command again to see if it picks any of these up again.
I have deleted everything,
re-done the find on .cloudf,
got the whole folder and file structure,
and now the find on .cloud is running and the transfer is working (the two passes are sketched below):
rx: 534.63 Mbit/s 7878 p/s tx: 3.57 Mbit/s 6851 p/s
rx: 620.94 Mbit/s 8574 p/s tx: 3.86 Mbit/s 7435 p/s
rx: 609.50 Mbit/s 7944 p/s tx: 3.55 Mbit/s 6843 p/s
rx: 616.13 Mbit/s 8247 p/s tx: 3.63 Mbit/s 7003 p/s
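(For reference, the two passes mentioned above are roughly the following; the mount path is an assumption, adjust it to your own:)
# pass 1: expand the folder structure by syncing the .cloudf placeholders
find "$HOME/odrive-agent-mount/" -name "*.cloudf" -exec "$HOME/.odrive-agent/bin/odrive.py" sync {} \;
# pass 2: download the actual files by syncing the .cloud placeholders
find "$HOME/odrive-agent-mount/" -name "*.cloud" -exec "$HOME/.odrive-agent/bin/odrive.py" sync {} \;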
Is there a way to limit the sync to a certain subfolder, for those of us with way more data than the 10TB disk on GCE? I will need to move the data in 10TB chunks.
You can just request a quota raise.
I asked for 20TB, got it hassle-free in 2 minutes.
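(That said, the find-based approach can also be scoped to a subfolder by pointing find at it; the path below is hypothetical:)
find "$HOME/odrive-agent-mount/Amazon Cloud Drive/Media" -name "*.cloud*" -exec "$HOME/.odrive-agent/bin/odrive.py" sync {} \;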
I'm currently using the Google Compute Engine method as well, thanks for that @Philip_Koninghofer.
Instead of the three parallel screen sessions I used one screen session with the command:
exec 6>&1;num_procs=10;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/Media/" -name "*.cloud*" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done
If you get a permission denied error, just chmod a+x ~/.odrive-agent/bin/odrive.py and try again.
This starts 10 threads for syncing, according to the author of the topic on the odrive forums, and gives me higher speed than the other method. I'm averaging around 700 Mbit/s right now (checked with nload), but sometimes hit peaks of 1.3 Gbit/s. According to a calculator, an 8TB transfer will take around 24 to 30 hours, so I hope the data is transferred by the end of the weekend and I can start uploading it to G-Drive again.
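(For readability, the one-liner above breaks down roughly like this; MOUNT and num_procs are the knobs to adjust:)
MOUNT="$HOME/odrive-agent-mount/Amazon Cloud Drive/Media"
ODRIVE="$HOME/.odrive-agent/bin/odrive.py"
num_procs=10   # parallel sync workers; lower this to go easier on the API
exec 6>&1      # duplicate stdout so the tee inside $(...) still prints live
output="go"
while [ "$output" ]; do
    # rerun until find turns up no more .cloud/.cloudf placeholders
    output=$(find "$MOUNT" -name "*.cloud*" -print0 \
        | xargs -0 -n 1 -P "$num_procs" "$ODRIVE" sync \
        | tee /dev/fd/6)
done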
Edit: Just to let you know, I tried other methods as well, since I thought my expired odrive trial would cause issues on Linux too, but there seems to be no issue with syncing the whole drive when using the agent. Also, neither multcloud, cloudhq nor Amazon's own client were able to download as fast as Google's Compute Engine, so I guess it's the best bet right now.
Currently people (including me) get about 100 MB/s, which is 800 Mbit/s,
or even 120 MB/s, i.e. 960 Mbit/s.
Anyhow, don't you think 10 threads is a bit aggressive? It seems 10 vs. 3 or 4 threads does not give that much of a speed advantage, but would hammer the Amazon API three times harder.
Does anyone have experience with ACD blocking an account for excessive API usage? What limit do you recommend?
This has the advantage of only running find once and dispatching to multiple odrive calls, so it avoids "This file doesn't exist" errors.
Unfortunately I'm getting a significant number of these:
Unable to sync mqvqvijjrp02d0pqnr6q...nec6fpbh8hago0.cloud. Amazon Cloud Drive internal server error.
I suspect those are the "file name too long" issues, but I won't know whether they're persistent errors until the rest syncs and I'm only left with those. I'm a bit worried about how reliable odrive really is in this scenario.
I see.
I am also a bit worried.
BUT I used to see them a lot when using odrive about a year ago. Maybe they are just not as smooth at handling API timeouts as rclone is.
Currently I am thinking/hoping that another run will just pick them up, since the paths do not seem that long to me (in my case).
If so, there is still the possibility of using another way to download the missing files, since odrive is nice enough to print the paths accordingly.
Since I already have about 50% of my 15TB moved to gdrive, I am not that worried. I would have guessed that it would have affected more files/paths if that were the case. We will see in a few hours.
@Saviq keep me updated.
I'm also receiving a couple of internal server errors from ACD, but I think the second run will fix this; as far as I know, non-transferred files still have the .cloud extension on them, so that should work.
Last September I did my initial transfer from home (6TB) to ACD with odrive on Windows without any issues, so I have faith that this will work. 0.8TB of 8TB transferred…
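(A quick way to see how much is left is to count the remaining placeholder files; the mount path is an assumption:)
find "$HOME/odrive-agent-mount/" -name "*.cloud*" | wc -l   # remaining un-synced placeholders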
I just browsed around the odrive forums and my best guess is API timeouts for now. It really looks like they are not that good at handling them at odrive, which would mean the errors are only temporary and most likely random. Fingers crossed for my second sync run.