acd_cli is back, rclone next?

Just tried that (amazon --> acd_cli --> rclone --> gdrive); it runs pretty well!

I’m kind of tempted to spin up a more expensive Scaleway instance for faster data movement.

Any examples of the mount commands would really help me out :wink:

I totally get the concept, mind; it makes perfect sense.

Linux is not my forte.

acd_cli mount /FOLDER
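I’m no authority, but here’s a rough sketch of the mount commands, assuming acd_cli is already installed and authorized (the OAuth step from the setup guide linked below). The paths are just examples:

```shell
# Rough sketch -- assumes acd_cli is installed and already authorized.
# Paths are examples; use whatever mount point you like.
mkdir -p ~/ACD        # FUSE needs the mount point to exist
acd_cli sync          # refresh the local node cache first
acd_cli mount ~/ACD   # mount Amazon Cloud Drive at ~/ACD
ls ~/ACD              # your cloud files should now show up

# when you're done:
acd_cli umount ~/ACD
```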

Acd_cli setup guide: https://github.com/yadayada/acd_cli/blob/master/docs/setup.rst

Usage guide: https://github.com/yadayada/acd_cli/blob/master/docs/usage.rst

Rclone: https://rclone.org/install/

Rclone usage: https://rclone.org/docs/

Rclone mount: https://rclone.org/commands/rclone_mount/

I am getting 25 MB/s migrating an acd_cli FUSE mount to Backblaze B2 via rclone, which is all I need to get my digital camera backup out of ACD before I cancel it, with gusto.

I’m no expert but here’s a quick summary of the process after I got acd_cli running.

mkdir ACD
acd_cli mount /home/<name>/ACD/
tmux
rclone --stats=5s --transfers=12 copy Local:/home/<name>/ACD/ Google:/SORT/

Ctrl-B, D

I’m not sure if I needed the mkdir or if mounting would have created the directory for me. Then I can just log off and leave it running. I check in every now and then to make sure it’s going OK. Nine hours so far and no issues.

Currently it’s running at 40 MB/s. Really nice.

I agree, this is working OK as we speak… I’m getting the same speeds.

Runs like hell - getting nearly the full 200 Mbit/s in and out on Scaleway …

acd_cli mounted -> ENCFS -> RClone => gdrive (encrypted with rclone)
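I’m not sure exactly how the EncFS layer is wired up in that chain, but if the goal is just client-side encryption before Google Drive, rclone’s own crypt remote type can do it without EncFS. A sketch, assuming a crypt remote named `gdrive-crypt:` has already been created with `rclone config` (the remote name and path are made up):

```shell
# Assumes: acd_cli mounted at ~/ACD, and an rclone remote of type
# "crypt" named gdrive-crypt: that wraps a Google Drive remote.
# (Set it up interactively with: rclone config)
rclone --stats=5s --transfers=8 copy ~/ACD gdrive-crypt:/backup/
# rclone encrypts file contents (and optionally names) on the fly
```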

Has anyone tried running two acd_cli instances on two different servers? That would speed up the copy process … 8)

I tried, but my token refresh broke after a few seconds …

You can get the octo-core (I think it’s the C2L) on Scaleway for a couple of days for less than a dollar, and it has an 800 MB/s cap. Also, the CPU power really helps with moving files.

It’s a great solution to get the job done really quickly, then downgrade to a C2S or something.


I ran three Vultr instances concurrently in order to transfer my files from ACD to my Gdrive. They were each getting about 45 MB/s. So yes, it works!
The really impressive speed came when I used those same instances to copy from one Gdrive to another. Each one was getting 150 MB/s.
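A sketch of that Gdrive-to-Gdrive copy, assuming two Google Drive remotes are already configured (the names `GdriveA:` and `GdriveB:` and the paths are made up):

```shell
# Copy between two Google Drive remotes. The data flows through the
# VPS, which is why fast instances like Vultr/Scaleway help here.
rclone --stats=10s --transfers=8 copy GdriveA:/SORT/ GdriveB:/SORT/
```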


Perfect! Many thanks

Chugging along at 95 MB/s.

Quick question: I’m running in a screen session and will just leave it. However, should the transfer get terminated for whatever reason, will that copy command skip existing data and resume the transfer if I restart it? Or should I add --ignore-existing?

--ignore-existing --ignore-checksum should work fine!
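For context: plain `rclone copy` is already resumable — on a re-run it skips files that already exist on the destination with matching size and checksum/modtime. The flags just make the re-scan cheaper, since checksumming a FUSE mount means re-reading every file. A re-run of the earlier command might look like this (`<name>` left as in the original post):

```shell
# Restarting the transfer after an interruption:
#   --ignore-existing  skips anything already on the destination by name
#   --ignore-checksum  avoids re-reading source files just to hash them
rclone --stats=5s --transfers=12 \
  --ignore-existing --ignore-checksum \
  copy Local:/home/<name>/ACD/ Google:/SORT/
```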

Is acd_cli still working for you guys?

Hmm, no. I’m getting error 429 (rate exceeded),
but this is happening everywhere, not just with acd_cli.

I’m getting nonstop 429s as well, for the last 18 hours or so.

Still getting rate exceeded under acd_cli …

I’m getting 429 errors from https://www.amazon.com/clouddrive/folder/root

Amazon Cloud Drive is just borked entirely across the board today, regardless of API.

"
Amazon Drive is having problems. We’re addressing the issue and will be back as quickly as possible.
"

at least that website message is what I think their website uses in place of 429.

Got the error, but logging back in got me back on track. And third-party API keys still work for some apps, so all is fine :slight_smile: