ACD BAN/Alternatives discussion (error HTTP code 429)

If you are transferring to Google, you may want to just sync Amazon to Google directly using rclone; you get about 3.32 GBit/s using Google Cloud compute.
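A minimal sketch of that direct sync (the remote names amazon: and google: are assumptions, i.e. whatever you called the remotes in rclone config; --transfers is whatever your VM can sustain):

rclone sync -v --transfers=20 amazon:/ google:/    # direct ACD-to-GDrive sync, no local disk needed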

https://forum.rclone.org/t/guide-rclone-with-acd-using-drivesink-token/2421/10

If you're using byobu (don't know about any of the others) then it shows RAM usage in the bottom bar. Keep an eye on it (you can use htop or similar in a separate screen too). When I was using 6 syncs at a time the RAM crept up and I got booted out of the VM. I had to go to the dashboard, stop/start the VM and launch byobu again. Luckily it was all where I left off (minus the killed sync processes). Back down to four again.
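If you're not watching byobu's bar, a rough scriptable alternative (just a sketch; the 200 MB threshold is an arbitrary assumption):

# Log available memory every 60 s and warn when it drops below 200 MB.
while true; do
    avail=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
    echo "$(date '+%H:%M:%S') available: ${avail} MB"
    [ "$avail" -lt 200 ] && echo "WARNING: low memory, consider stopping a sync"
    sleep 60
done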

When using multiple sync instances, I found it more efficient to target each one at its own directory rather than just point them all at the mount folder, e.g.:

find ~/odrive-agent-mount/Films/ -type f -name "*.cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync {} \;
find ~/odrive-agent-mount/TV/ -type f -name "*.cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync {} \;
find ~/odrive-agent-mount/Games/ -type f -name "*.cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync {} \;
find ~/odrive-agent-mount/Music/ -type f -name "*.cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync {} \;

or else they bump into one another and waste time on errors/skipped attempts.
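If you want all four going at once, one way to launch them in parallel (a sketch using the directory names from the example above; adjust to your own layout):

#!/bin/bash
# One odrive sync pass per top-level directory, backgrounded so the
# instances never walk the same subtree.
for dir in Films TV Games Music; do
    find ~/odrive-agent-mount/"$dir"/ -type f -name "*.cloud" \
        -exec python "$HOME/.odrive-agent/bin/odrive.py" sync {} \; &
done
wait    # block until all four passes finish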

EDIT: I can't stress the RAM thing enough. I lost 4 TB of transferred data because I got locked out of the VM due to maxed-out RAM usage, had to reboot via the dash, and my mounted 10 TB disk got corrupted. I could have recovered it, but the time it would have taken was measured in days, not hours. Quicker just to start over.

EDIT2: Again with the RAM: when making the VM, choose 7.5 GB RAM. It will work with 1.8 GB, but you will need to monitor it quite closely and you won't be able to average anywhere near as high speeds. If you choose anything other than 1.8 but less than 7.5, it won't let you attach a disk bigger than 3 TB. Just a warning for those with more than 3 TB to transfer.

With 1.8 I was averaging ~30 MB/s with 3-4 sync instances. With 7.5 I can basically have dozens of instances, so something is always coming down, and I'm averaging more like 90 MB/s. While 7.5 is more costly (although it's still coming out of the $300 credit), the much higher speeds mean you'll finish a lot quicker and can move on to using the credit for something else.

What machine type are you using, in which region? I tried a us-central 4 vCPU machine, but I get only a fraction of your throughput…

I have transferred 7 TB with Google compute. I had a VM with 2 CPUs and 1.8 GB RAM, with a 30 GB SSD and an 8 TB HDD.
It cost me €37.21 of the free trial money.

You should consider doing the transfer with the DriveSink auth and a direct rclone sync amazon:/ google:/.

It's very fast, doesn't use disk space (stretching how long the free $300 stays useful) and doesn't use much RAM (around 600 MB on my system) for 20 transfers at the same time.

But isn't this abusing the client ID/secret of another application, potentially causing trouble for the owner/author of the DriveSink application?

And in my (short) test the rclone + DriveSink throughput was only about 50-70 MB/s, substantially slower than the "temp Google disk" to GDrive with rclone approach, which gives me about 150 MB/s. I have a US ACD and GDrive account and tried Google central/east/west instances; no difference in speed.

What I am currently using is the temp Google disk with all the ACD data mounted read-only in 3 machines, each running an rclone copy. Gives me about 150 MB/s per machine, so it should not be more expensive than a one-machine transfer in the end. Only 3 times faster :slight_smile:

Guys, is it possible to get rclone to work with ACD using my own security profile?

Well… besides the work that needs to be done, I'm so much happier with my Google cloud than with ACD… OK, 20 dollars more a year, but that's worth the price.
At least the dropouts during playback of Ultra HD and (most of the time) Full HD files are completely gone.

Luckily I went with a VM with more RAM than recommended (7.5 GB I think);
must be why I haven't had this problem…

@chkk …that's an idea… but you will lose the overview a bit… especially when you try to separate your transfers across 3 machines… but if you only have 3 root dirs and take one on each machine… easy, fast thingy.

Same transfer from root on all machines; rclone / GDrive seem to sort that out pretty well. Will do one final single-machine pass to make sure nothing was lost.
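That final pass could be rclone's built-in verification plus a top-up copy (a sketch; remote names assumed as in the earlier posts):

# Compare sizes/hashes on both sides, then re-copy anything that is missing.
rclone check -v amazon:/ google:/
rclone copy -v amazon:/ google:/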

Can you explain how you did the transfer? Did you use the ACD Windows app, then mount Google Drive via the rclone beta?

For anyone currently directly syncing amazon:/ and google:/ with rclone/DriveSink, using the script to get a new token, I've made a very simple loop :slight_smile:

#!/bin/bash
for i in {1..48}
do
    /usr/bin/python3 /where/is/drivesink    # refresh the ACD token
    echo "loop $i"
    /usr/bin/timeout 3540 rclone sync -v --transfers=20 amazon:/ google:/
    mail -s "Upload $i $(date)" your email <<< 'Update !'    # replace "your email" with your address (needs mailutils)
done

It refreshes the token, starts an upload, stops it after 59 minutes, refreshes the token, and so on, 48 times; it also mails you at each loop if you have mailutils.

@interferon I did it somewhat more simply than most…
took a Windows VPS, downloaded everything with the original ACD software for Windows… also mounted the drive with ExpanDrive and did a manual copy in case I missed some files (I don't trust the ACD software),
and finally just uploaded it all to gcloud with rclone.
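That last rclone step would presumably look something like this (the local dump path is a hypothetical placeholder):

# Push the locally downloaded ACD dump up to Google Drive.
rclone copy -v /path/to/acd-dump google:/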

Finished it 2 hours ago… roughly 30 TB in 3-4 days.

Probably one from Google…

Does ExpanDrive allow you to directly download from ACD to Google Drive? (i.e. I don't want to create a 1 TB hard-drive VPS; I'd rather use a 150 GB SSD VPS)

Looking into https://hubic.com/en/offers/storage-10tb

Try chmod -R 777 /dev/sdb1 /home/xxx/odrive-agent-mount
(recursive)

Depends on your application. Do you need your data fast? Is 1.2 MB/s enough?

I am using the 2 vCPU / 1.8 GB RAM machine with 500 checkers and 50 transfers; anything beyond or below that gets me less than 2 GBit/s.
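In rclone-flag terms that combination would be something like (a sketch; remote names assumed as in the earlier posts):

# 500 checkers / 50 transfers, the reported sweet spot; tune either and watch throughput.
rclone sync -v --checkers=500 --transfers=50 amazon:/ google:/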