ACD BAN/Alternatives discussion (error HTTP code 429)

When I run the resize2fs command it says:
'The filesystem is already 536870655 (4k) blocks long'

I used Ubuntu 16.04.
Could I just make a snapshot, then delete the VM and make a new one that uses the full 30TB?

@David_M: what I did on google compute was the following

I ordered a VM in my region with a 30GB system SSD disk plus an additional 8TB HDD, running Ubuntu 16.04.
Once the instance was ready I formatted the 2nd disk, /dev/sdb, and mounted it into /mnt/disc/xxx, which I then bind-mounted to the folder ~/odrive-agent-mount (mkdir first).
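The format / mount / bind-mount steps above, as a hedged sketch (the device name /dev/sdb, the ext4 choice, and the /mnt/disks/data mount point are assumptions; the poster's actual /mnt/disc/xxx path is abbreviated, so a placeholder is used here):

```shell
# Sketch only: assumes the second disk is /dev/sdb and is safe to format.
sudo mkfs -t ext4 /dev/sdb                    # filesystem directly on the disk
sudo mkdir -p /mnt/disks/data                 # hypothetical mount point
sudo mount /dev/sdb /mnt/disks/data           # mount the data disk
mkdir -p "$HOME/odrive-agent-mount"           # folder odrive will use
sudo mount --bind /mnt/disks/data "$HOME/odrive-agent-mount"  # bind it into place
```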

The odrive setup is easy: you have to install, authenticate, start, and mount.
Install:

od="$HOME/.odrive-agent/bin" && curl -L "http://dl.odrive.com/odrive-py" --create-dirs -o "$od/odrive.py" && curl -L "http://dl.odrive.com/odriveagent-lnx-64" | tar -xvzf- -C "$od/" && curl -L "http://dl.odrive.com/odrivecli-lnx-64" | tar -xvzf- -C "$od/"

Run:

nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &

Sign up for an auth code:
https://www.odrive.com/account/authcodes

Auth:

python "$HOME/.odrive-agent/bin/odrive.py" authenticate 00000000-0000-0000-0000-000000000000-00000000

Mount odrive:

python "$HOME/.odrive-agent/bin/odrive.py" mount "$HOME/odrive-agent-mount" /

Now the folder you created is ready for sync.

After that you have to use the script from @Philip_Konighofer.

for example

nano odrive.sh

Copy and paste the RAW part of this link (end of the page):
https://pastebin.com/KDB1XVYR

Save it (Ctrl+O, Ctrl+X), and:

chmod +x odrive.sh
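The pastebin script itself isn't reproduced here, but a minimal sketch of the kind of loop such a script performs (expanding .cloudf folder placeholders until the whole tree of .cloud file placeholders exists) might look like this; the paths match the mount used above:

```shell
#!/bin/bash
# Hypothetical sketch, NOT the actual pastebin script: repeatedly sync
# *.cloudf folder placeholders until none remain, so the full directory
# tree of *.cloud file placeholders gets created.
MOUNT="$HOME/odrive-agent-mount"
AGENT="$HOME/.odrive-agent/bin/odrive.py"

while :; do
    remaining=$(find "$MOUNT" -type f -name '*.cloudf' | wc -l)
    [ "$remaining" -eq 0 ] && break
    echo "expanding $remaining folder placeholders..."
    find "$MOUNT" -type f -name '*.cloudf' -exec python "$AGENT" sync {} \;
done
```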

After this you have to run it in screen:

screen -S job1
./odrive.sh

Ctrl+A, D

screen -S job2
./odrive.sh

Ctrl+A, D

screen -S job3
./odrive.sh

Ctrl+A, D
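The three sessions can also be started in one go using screen's detached mode (-dm); a small sketch, assuming odrive.sh sits in the current directory:

```shell
# Start three detached screen sessions, each running the sync script.
for i in 1 2 3; do
    screen -dmS "job$i" ./odrive.sh
done
screen -ls    # list the running sessions
```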

You can always reattach to a screen and see the progress:

screen -r job1

Leave it again with Ctrl+A, D.

After all 3 jobs are done, just enter this line on every screen again:

find ~/odrive-agent-mount/ -type f -name "*.cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync {} \;

which will do the DOWNLOAD.
This will take a while, depending on how many TB you have…
(more details, and how to verify that everything was downloaded, can be found here: post 524)
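A quick way to check progress yourself is to count the remaining placeholder files; when both counts reach zero, everything has been expanded and downloaded (paths as in the posts above):

```shell
# Remaining file placeholders (not yet downloaded):
find ~/odrive-agent-mount/ -type f -name '*.cloud' | wc -l
# Remaining folder placeholders (not yet expanded):
find ~/odrive-agent-mount/ -type f -name '*.cloudf' | wc -l
```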

Once it's done and you have downloaded everything from the AMAZON cloud, you can just run rclone as usual for the copy. I would use COPY first.

Something like this:

rclone copy \
  --transfers 16 \
  --low-level-retries 1 \
  --checkers=32 \
  --exclude "*.cloud" \
  --exclude "*.cloudf" \
  --retries 10 \
  --contimeout 10m \
  --size-only \
  --max-size 50000M \
  --verbose \
  --log-file=Log_gdrive_movies.txt \
  /home/XXX/odrive-agent-mount/acd/Plex/huUHudnndjee766dejNNjeje/ gdrive:Plex/huUHudnndjee766dejNNjeje

where XXX is your username on Google Compute, and "huUHudnndjee766dejNNjeje" is your encrypted Movies or TV-Series folder inside the Plex folder (as you created it based on the enzTV howto document).

You can also run this while the download from ACD is still going; it will skip the temp files (.cloud and .cloudf).
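Once the copy has finished, rclone check can compare the local tree against the remote; a sketch using the same (placeholder) paths as the copy example, with --size-only to match its comparison mode:

```shell
# Compare source and destination; differences are written to the log file.
rclone check \
  --size-only \
  --exclude "*.cloud" \
  --exclude "*.cloudf" \
  --log-file=Log_check_movies.txt \
  /home/XXX/odrive-agent-mount/acd/Plex/huUHudnndjee766dejNNjeje/ \
  gdrive:Plex/huUHudnndjee766dejNNjeje
```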


wow thank you alneven : )

I deleted the 19TB VM and created a new one using the full 30TB.
Followed your directions exactly and have the three screens running atm ; )

fingers crossed…

Does anyone have any tips on how much you can transfer in a day to Google Drive without getting a ban?

I transferred 2.14 TB yesterday (but it was not running 24h).
And now I’m transferring with tx: 1.02 Gbit/s 25881 p/s
I guess there is no ban on upload
Only on API requests/downloads

I uploaded 15TB yesterday without complaints.

I thought when I created a new Ubuntu 16 VM and made the hard drive 30TB that it would use it all.

Alas, now when I run df -h with the three screens running… it shows the main partition is only 2TB : (

I tried expanding it before deleting the last VM and starting again… but it was a no-go.

I got this error:

The filesystem is already 16775167 blocks long. Nothing to do!

Any ideas on how to expand the main partition?

I think it's because it's an MSDOS partition table, which is limited to 2TB.

I deleted this partition and created a new one with gpart.

Ughh… so basically I'd have to start all over again? : (

No way to resize it?

Is there any way to make a snapshot and restore it into the bigger partition?

I did it before i started.

Maybe there is a way to change it and keep your data? But I don't know.


OK… made a new VM… and added a separate disk… now I see a blank sdb that's 36TB.

Now I just have to figure out how to mount it and change the instructions/commands above to point to that mount…

Too tired to keep at it… will have to do it when I wake up : ) : (

So close and yet so far.

When you ssh into the VM you’ll be in your home folder. Before you do anything it is highly recommended to run byobu (then Enter). This will keep the session alive even if you lose connection to the terminal/reboot etc. It will allow you to create multiple screens (like tabs) that you can flip through in the same terminal. This will also come in handy when you come to start pulling down the ACD content since you can run multiple instances at once via different screens. If you ever close the terminal window (or putty etc), run byobu as soon as you log back in and it’ll pull up the screens you had before you left.

In byobu, Ctrl+A, C creates a new tab and Ctrl+A, N cycles through them (next) or Ctrl+A, P to cycle back. When you create your first screen it’ll ask you to choose a type (just pick 1). When you start syncing/downloading it will tie up the screen you’re using so having multiple tabs is very very useful so you can run other commands (check d/l speeds/monitor disk usage).

The attached disk should be /dev/sdb. You can run sudo parted -l to verify.

First run sudo parted -s -a optimal /dev/sdb mklabel gpt -- mkpart primary ext4 1 -1. That will write the GPT so you can bypass the 2TB limit and create a primary partition that spans the whole disk.

Next, you need to run sudo mkfs -t ext4 /dev/sdb1 to make the filesystem.

Now you should be able to mount the disk. Since the default location for the odrive mount is a folder called odrive-agent-mount in your home folder, we’ll stick to that. Run mkdir odrive-agent-mount to create it.

Finally, mount the attached disk to that location with sudo mount /dev/sdb1 ~/odrive-agent-mount.

You can run sudo parted -l again to see the disk info. The partition table should show gpt.
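One caveat: a plain mount does not survive a reboot. If you want the disk to come back automatically, an /etc/fstab entry is one way (a sketch; the UUID is used so the entry survives device renaming, and nofail keeps the VM booting even if the disk is detached):

```shell
# Look up the filesystem UUID and append an fstab entry for it.
UUID=$(sudo blkid -s UUID -o value /dev/sdb1)
echo "UUID=$UUID /home/$USER/odrive-agent-mount ext4 defaults,nofail 0 2" \
  | sudo tee -a /etc/fstab
sudo mount -a    # verify the entry mounts cleanly
```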

Now you can go ahead and install the sync agent (likely the 64bit version). After install, run nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &. If it doesn’t launch properly then you’ve likely installed the wrong variant (32/64).

You can now skip down to here. You only need to do the first 4 steps. For the fourth step, you don’t need to make the mount directory since we’ve already done that. Just skip that and do the mounting part.

Before you start any syncing, if you have folders with lots of small files (ROMs, comics, music files), they can be pretty slow to sync. If you have another way to upload those then you might want to do so to speed up the odrive sync/downloading. I found that I was able to download about 10 times as much when pulling down Films/TV as I was when it was downloading lots of small files.

So, if you have them stored elsewhere and don’t mind deleting them from ACD, I would do so. If you want to keep them on ACD but don’t want to download them to the VM, just go through the sync stage (next paragraph) but cd into the ~/odrive-agent-mount/Amazon Cloud Drive folder and delete any folders there you want to skip. I just went into that folder and ran rm -rf Comics and rm -rf Music to delete those two folders so they wouldn’t be pulled down.

From there you can start with Philip’s info (post #525) to sync the folder structures/placeholders. Depending on the number of folders and their depth this could take a while (it took about an hour for me at 9.6TB and quite a number of nested folders/files, like comics/ROMs etc).

Once they’ve all synced you can run that find ~/odrive-agent-mount/ -type f -name "*.cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync {} \; command shown at the bottom of his post. You can have multiple instances of this running (just Ctrl+A, C to create a new tab) and run it there too. I used 4 at once although how many is optimal I couldn’t say. You probably want 2 at least.
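As an alternative to juggling several tabs by hand, the same find can be parallelized with xargs; a sketch where -P 4 runs four sync processes at once (matching the four instances mentioned above):

```shell
# Run four odrive sync workers in parallel over all *.cloud placeholders.
# -print0 / -0 handles filenames containing spaces safely.
find ~/odrive-agent-mount/ -type f -name '*.cloud' -print0 \
  | xargs -0 -n 1 -P 4 python "$HOME/.odrive-agent/bin/odrive.py" sync
```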


wow… thank you… worked perfectly : )

Actually, now that I try to run the three jobs…

I get this error:

find: '/home/xxx/odrive-agent-mount/lost+found': Permission denied

I tried chmod 777 /dev/sdb1 /home/xxx/odrive-agent-mount

thinking that might fix it… alas… doing it as sudo didn't work either…

What step did I miss here?

Responding to my own question:
OK, I fixed this issue by trying another method (made a "converter" and mounted through SFTP, so I get the same dir/speed as before). I've now dumped odrive because of slow directory downloads; it was too complicated anyway. Now my change is transparent to the system.

cd into that odrive-agent-mount folder and sudo rm -rf lost+found.


Never mind, I found Rclone has been banned from Amazon Drive

This thread seems to have digressed into using some other tools?

Can someone tell me what is the latest on using rclone to access acd? Is it a dead-end for now?

Thanks a lot


Another successful report of using Google: signing up to the cloud, firing up a VM using the $300 credit.

Rather than Linux I went with Windows Server 2012 and installed the Amazon Drive client. Partial selective sync to stay under the 2TB, upload using rclone.

Works a treat ;-).

So, my 4 threads are done.
Transferred 6.9TB from ACD to Google Compute.

Now I started the "once again" run and I got a few of these:

Amazon Cloud Drive internal server error

It is also downloading some files; maybe some were missed across the threads…

as this also produces output:

find ~/odrive-agent-mount/ -type f -name "*.cloud"

Waiting for all *.cloud files to finish…

Update, 3 hours later:
now all my files are synced
and uploading to gdrive


A big thank you to Philip, alneven and Gavin…

Following all your instructions… I have odrive/Google Compute working…

Transferring at 120MB/s… around 600GB an hour it seems ; )

FYI… I have 7 screens running… seems to be the sweet spot for me anyway.