Recommendations for ACD->GSuite/Other transfers

The code does two things: the first command syncs (expands) your folders, and the second downloads the files. They run one after the other so that all of your folders are picked up before any downloading starts. It runs 20 processes at a time; the speed you get depends on how fast your connection is. On Google Cloud it maxes out at about 1.63 Gbit/s, roughly 203.75 MB/s.
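A minimal sketch of the two passes, run in order (the mount path is odrive's default; adjust num_procs to taste):

# pass 1: expand every folder placeholder (*.cloudf) first
exec 6>&1;num_procs=20;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/" -name "*.cloudf" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done

# pass 2: then download the actual files (*.cloud)
exec 6>&1;num_procs=20;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/" -name "*.cloud" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done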

You need to figure out your paths. If you are syncing just a single folder, you need to drill down to the parent folder that you want to sync. In this case, are you syncing ZZeus or skhr4jjs6pqc2l603qd5jtgaqg? I don't know your path structure; only you do. I assumed ZZeus is your parent, skhr4jjs6pqc2l603qd5jtgaqg is your subdirectory, and blah is your file, so you point find at the subdirectory for both *.cloudf and *.cloud.

The line I gave you is the same in both cases, except one is for *.cloud and one is for *.cloudf. The reason is that I did all my folder syncs first and then the downloads; other people did the sync and download at the same time, but I didn't want to do that.

Each folder within ZZeus is the encrypted TvShows, movies, etc… so the path for one of my encrypted folders inside is

"$HOME/odrive-agent-mount/Amazon Cloud Drive/ZZeus (2017-05-21T06_35_46.359)/tl9ieuis27sium0r04gvip96qs/"

But even attempting to use the first part of your code with these paths, I get these errors:

usage: odrive.py sync [-h] placeholderPath
odrive.py sync: error: too few arguments

I don't understand why this is erroring, because this works completely fine:

exec 6>&1;num_procs=10;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/" -name "*.cloud" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done

Although this appears to be downloading everything, which is not what I want.

I highlighted why it is downloading everything: look at the path. Just change the path to the full path of the files/folders you do want to download, and make sure -name keeps the wildcard, i.e. -name "*.cloud". For some reason your system is not accepting the chained commands, i.e. command && command.
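For example, restricted to one subfolder (YourFolder/YourSubfolder is a placeholder; substitute your real on-disk path):

exec 6>&1;num_procs=10;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/YourFolder/YourSubfolder/" -name "*.cloud" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done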

When I specify a path, it gives me errors:

exec 6>&1;num_procs=10;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/ZZeus/skhr4jjs6pqc2l603qd5jtgaqg/" -name ".cloudf" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done
find: '/home/terry/odrive-agent-mount/Amazon Cloud Drive/ZZeus/skhr4jjs6pqc2l603qd5jtgaqg/': No such file or directory
usage: odrive.py sync [-h] placeholderPath
odrive.py sync: error: too few arguments

It does the same for .cloud or .cloudf, FYI.
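One thing worth checking (my reading of the "No such file or directory" error, going by the path quoted earlier): odrive names the mounted folder with a timestamp suffix, e.g. "ZZeus (2017-05-21T06_35_46.359)", so a plain "ZZeus" does not exist on disk. A quick listing shows the exact names to paste into the find path:

ls "$HOME/odrive-agent-mount/Amazon Cloud Drive/"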

change to -name "*.cloudf"

And to actually download the file, you need to search for "*.cloud"; notice the wildcard (the "star").

exec 6>&1;num_procs=10;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/ZZeus/skhr4jjs6pqc2l603qd5jtgaqg/" "*.cloudf" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done
find: '/home/terry/odrive-agent-mount/Amazon Cloud Drive/ZZeus/skhr4jjs6pqc2l603qd5jtgaqg/': No such file or directory
find: '*.cloudf': No such file or directory
usage: odrive.py sync [-h] placeholderPath
odrive.py sync: error: too few arguments

You removed the "-name"; you need that.
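Putting both fixes together, the folder-sync pass should look like this (keep -name and the wildcard; the path must be the real on-disk one, timestamp suffix included):

exec 6>&1;num_procs=10;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/ZZeus/skhr4jjs6pqc2l603qd5jtgaqg/" -name "*.cloudf" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done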

The best option is just using rclone, still from GCE.

Amazed by that speed.


I never finished transferring my files, so I decided to try rclone on Google Compute with 20 transfers… wow, double the speed.

So my suggestion is:
Set up rclone using the drivesink method with the Python script (https://forum.rclone.org/t/guide-rclone-with-acd-using-drivesink-token/2421/6), then sync the drives with --transfers=20, and you get roughly double the speed.
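The sync itself is then a one-liner (acd: and gdrive: are placeholder remote names; use whatever you called yours in rclone config):

rclone sync acd: gdrive: --transfers=20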

To automate the token refresh, set it up as an hourly cron job.
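For example, add a line like this with crontab -e (the script path is a placeholder for whatever refresh command the drivesink guide gives you):

0 * * * * /usr/bin/python /home/youruser/acd_token_refresh.py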

Overnight my sync stopped, and when I restarted it:

root@instance-1:~# exec 6>&1;num_procs=10;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Amazon Cloud Drive/crypt/hbgthh3vkbn66414cbem8nic5c/" -name "*.cloud" -print0 | xargs -0 -n 1 -P $num_procs "$HOME/.odrive-agent/bin/odrive.py" sync | tee /dev/fd/6); done
usage: odrive.py sync [-h] placeholderPath
odrive.py sync: error: too few arguments

Does anyone have an idea?

This looks like it finished the sync.
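My reading of why it ends this way (an assumption, not confirmed in the thread): once find matches nothing, xargs still runs odrive.py sync once with no arguments, which produces exactly that "too few arguments" usage error, and since nothing was printed to stdout the while loop then exits. GNU xargs can suppress that empty run with -r (--no-run-if-empty):

find "$HOME/odrive-agent-mount/Amazon Cloud Drive/crypt/hbgthh3vkbn66414cbem8nic5c/" -name "*.cloud" -print0 | xargs -0 -r -n 1 -P 10 "$HOME/.odrive-agent/bin/odrive.py" sync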

Does it work right?

For me, even after the token is refreshed, rclone gives me errors.
https://forum.rclone.org/t/guide-rclone-with-acd-using-drivesink-token/2421/28?u=easy90rider

What kind of VM-instance did you create?

I'm using Windows Server + ExpanDrive to have my ACD available, then using rclone to upload that to Google Drive. It worked last night with a few errors but kept going for the smaller files. Now, on the bigger files, I'm getting 100% errors: "Failed to copy: read (encrypted file name here): A device attached to the system is not functioning."

What is causing this error? I can access both ACD and Google Drive just fine. I went and got my own client ID and secret to use with Rclone.

Yep, I received the same thing. That means everything is transferred: once there are no .cloud files left, you get this error.

So a pretty small one.

Using rclone copy --transfers 15 --checkers 5, IIRC (the copy is still in progress, so I can't see the exact commands).
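Spelled out, that would be something along these lines (acd: and gdrive: are placeholders for your own remote names):

rclone copy acd: gdrive: --transfers 15 --checkers 5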

How did you manage to get client ID and secret for ACD?

I finished copying all of my data from ACD to the VM and I'm getting ready to upload to my Google Drive accounts. I have one that is legit and one that I bought from eBay. I just want to make sure I don't incur any large charges for data egress. The Google Compute pricing page says that egress to Google products is free, but I don't know where the domain for the eBay account is located/registered; does that matter? I appreciate the help!

Egress from Google Compute to Google/G Suite Drive is free.

Edit: Stay below 100 MB/s when uploading, or you will get a 24-hour lockout: Google locks you out for a day if you upload more than 10 TB in 24 hours.
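At 100 MB/s you move at most 100 MB × 86,400 s ≈ 8.6 TB in a day, safely under that 10 TB cap. rclone can enforce the limit itself with --bwlimit (the source path and remote name here are placeholders):

rclone copy /path/to/local gdrive: --transfers 15 --bwlimit 100M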
