ACD oauth proxy broken!

Hello,
For now I use the trial version of NetDrive on a Windows VM, which mounts a virtual drive A: corresponding to my ACD.
Then I use rclone like this:
rclone sync A:\NAS-Cloud-Dedicated GGD1:\NAS-Cloud-Dedicated --config rclone.conf -v

Here I get a very good average network throughput, on the order of 300 Mb/s.


AWESOME … I used to use NetDrive to connect to a COTS NAS back in the day; it brings back old memories! :slight_smile: That was before I determined I needed a SuperMicro SC836 in my closet (yes, they belong in datacenters, and getting the heat/noise under control in a 59-cubic-foot closet was a PITA).

For now I use the trial version of NetDrive on a Windows VM, which mounts a virtual drive A: corresponding to my ACD.

Thanks so much for your kind reply. I’m giving it a shot now. I too am using a VM (and always have with rclone).

Then I use rclone like this:
rclone sync A:\NAS-Cloud-Dedicated GGD1:\NAS-Cloud-Dedicated --config rclone.conf -v

  • My A drive will be ACD …
  • I’m assuming your GGD1 is a remote configured using rclone config? And I can continue using the local path as illustrated below?
  • May I ask what the --config rclone.conf does?

An example of the rclone sync command I used to run is presented below (always executed from a cmd prompt in the rclone folder):

  • I’m assuming the source is replaced by the Netdrive mount?
  • I’m assuming the destination remains the same (local FreeNAS server, and no need to configure it as a remote or drive in NetDrive)?

Here I get a very good average network throughput, on the order of 300 Mb/s.
IMPRESSIVE. :grinning:

What % of your uplink is that using?

Thanks again so much. I’ll report back with my findings.

Sheesh … Netdrive isn’t kidding when they say connecting may take a long time!

_My A drive will be ACD …
OK
_I’m assuming your GGD1 is a remote configured using rclone config? And I can continue using the local path as illustrated below?
Yes. In my case A: is the local drive that corresponds to the NetDrive mount in Windows.
_And I can continue using the local path as illustrated below?
Yes, the destination can also be a local directory.
_May I ask what the --config rclone.conf does?
It lets me point rclone at my configuration file.
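For example (the config path here is just a hypothetical Windows location), pointing rclone at a config file in a non-default place looks like:

rclone lsd GGD1: --config C:\Users\me\rclone.conf

Without --config, rclone simply uses its default config file location.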
_I’m assuming the source is replaced by the Netdrive mount?
YES
_I’m assuming the destination remains the same (local FreeNAS server, and no need to configure it as a remote or drive in NetDrive)?
Yes, it can be whatever you want; in my case I send it back to a Google Drive account.
_COMMAND LINE rclone sync:
Why limit rclone with the parameter --transfers 1?
I set it to 10 to go faster; my backup contains many small files.
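For reference, that is just the sync command from earlier with the flag added, something like:

rclone sync A:\NAS-Cloud-Dedicated GGD1:\NAS-Cloud-Dedicated --config rclone.conf --transfers 10 -v

--transfers controls how many files are copied in parallel, which helps with lots of small files; --transfers 1 serializes everything.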
_What % of your uplink is that using?
The internet link on this machine is gigabit; more precisely, 1000 Mb/s down and 500 Mb/s up.

Good luck with your test ;)

Why limit rclone with the parameter --transfers 1?

I think it was recommended to me at some point to avoid rate limiting (too many requests/day - not that I ever understood that as my pipe is only 100 Mbps)

I set it to 10 to go faster; my backup contains many small files.

Mine are all 8 GB in size (maybe that makes it make more sense)?

HRM…

rclone sync "e:\Images_MS\QNAP_TVS-871\VirtualizationStation\VirtualizationStation 4" "\\sullynas\Data4\Temp\Images_MS\QNAP_TVS-871\VirtualizationStation\VirtualizationStation 4" --acd-templink-threshold 0 --transfers 1 --low-level-retries 10 --stats 60s --log-file="VirtualizationStation 4.txt" -vv

No dice … any thoughts on why I’d get a read failure from the NetDrive mount? I can run rclone lsd e: just fine …

2018/01/31 14:00:47 DEBUG : Using config file from "C:\Users\user\.config\rclone\rclone.conf"
2018/01/31 14:00:47 DEBUG : rclone: Version "v1.39-070-g38f82984" starting with parameters ["rclone" "sync" "e:\Images_MS\QNAP_TVS-871\VirtualizationStation\VirtualizationStation 4" "\\sullynas\Data4\Temp\Images_MS\QNAP_TVS-871\VirtualizationStation\VirtualizationStation 4" "--acd-templink-threshold" "0" "--transfers" "1" "--low-level-retries" "10" "--stats" "60s" "--log-file=VirtualizationStation 4.txt" "-vv"]
2018/01/31 14:00:49 INFO : Local file system at \\?\UNC\sullynas\Data4\Temp\Images_MS\QNAP_TVS-871\VirtualizationStation\VirtualizationStation 4: Modify window is 100ns
2018/01/31 14:01:49 INFO :
Transferred: 0 Bytes (0 Bytes/s)
Errors: 0
Checks: 0
Transferred: 0
Elapsed time: 1m2.1s
Transferring:
TVS-871_VirtualizationStation_4.7z.001: 0% /7.938G, 0/s, -
2018/01/31 14:01:52 NOTICE: TVS-871_VirtualizationStation_4.7z.001: Removing partially written file on error: read \\?\E:\Images_MS\QNAP_TVS-871\VirtualizationStation\VirtualizationStation 4\TVS-871_VirtualizationStation_4.7z.001: The request could not be performed because of an I/O device error.
2018/01/31 14:01:52 ERROR : TVS-871_VirtualizationStation_4.7z.001: Failed to copy: read \\?\E:\Images_MS\QNAP_TVS-871\VirtualizationStation\VirtualizationStation 4\TVS-871_VirtualizationStation_4.7z.001: The request could not be performed because of an I/O device error.
2018/01/31 14:02:49 INFO :
Transferred: 0 Bytes (0 Bytes/s)
Errors: 1
Checks: 0
Transferred: 0
Elapsed time: 2m2.1s
Transferring:
TVS-871_VirtualizationStation_4.7z.002: 0% /7.938G, 0/s, -

Notes: No edit to config file.
ACD mounted as E:

Test with read-only access, and specify a drive letter on a local drive.

If you try to copy a file from E: to another local disk, is it OK?
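For example (the source file and destination folder here are hypothetical; any small file on the mount will do), something like this should show whether reads from the mount work at all:

rclone copy "E:\some\small-file.txt" "C:\temp\netdrive-test" -vv

A plain copy in Windows Explorer tests the same thing.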

Changed to a local drive. Attempted to DL a 1 KB file using Windows Explorer, and I get an error from Explorer about the request not being able to be performed because of an I/O error.

Here is the nd3svc_acd.log, which I believe is relevant.

[2018/01/31 15:34:08.329] [DEBUG ] [ 7676] [FILESYSTEM] [E:explorer.exe] FileSystem::ReleaseDir : \Arq Backup Data
[2018/01/31 15:34:08.329] [DEBUG ] [ 7676] [FILESYSTEM] [E:explorer.exe] FileSystem::ReleaseDir => AERROR::SUCCESS : \Arq Backup Data
[2018/01/31 15:34:08.329] [DEBUG ] [ 7672] [FILESYSTEM] [E:explorer.exe] FileSystem::ReleaseDir :
[2018/01/31 15:34:08.329] [DEBUG ] [ 7672] [FILESYSTEM] [E:explorer.exe] FileSystem::ReleaseDir => AERROR::SUCCESS :
[2018/01/31 15:34:08.329] [DEBUG ] [ 7680] [FILESYSTEM] [E:explorer.exe] FileSystem::OpenDir : \Arq Backup Data
[2018/01/31 15:34:08.329] [DEBUG ] [ 7680] [FILESYSTEM] [E:explorer.exe] FileSystem::OpenDir => AERROR::SUCCESS : \Arq Backup Data
[2018/01/31 15:34:08.329] [DEBUG ] [ 7684] [FILESYSTEM] [E:explorer.exe] FileSystem::GetAttr : \Arq Backup Data/README.TXT
[2018/01/31 15:34:08.329] [DEBUG ] [ 7684] [FILESYSTEM] [E:explorer.exe] FileSystem::GetAttr => AERROR::SUCCESS : \Arq Backup Data/README.TXT
[2018/01/31 15:34:08.329] [DEBUG ] [ 7676] [FILESYSTEM] [E:explorer.exe] FileSystem::ReleaseDir : \Arq Backup Data
[2018/01/31 15:34:08.329] [DEBUG ] [ 7676] [FILESYSTEM] [E:explorer.exe] FileSystem::ReleaseDir => AERROR::SUCCESS : \Arq Backup Data
[2018/01/31 15:34:08.329] [DEBUG ] [ 7680] [FILESYSTEM] [E:explorer.exe] FileSystem::Open : \Arq Backup Data\README.TXT
[2018/01/31 15:34:08.329] [DEBUG ] [ 7680] [FILESYSTEM] [E:explorer.exe] FileSystem::Open : \Arq Backup Data\README.TXT found(jibberish)
[2018/01/31 15:34:08.329] [DEBUG ] [ 7680] [FILESYSTEM] [E:explorer.exe] Handle 3 for READ created
[2018/01/31 15:34:08.329] [DEBUG ] [ 7680] [FILESYSTEM] [E:explorer.exe] FileSystem::Open => AERROR::SUCCESS : \Arq Backup Data\README.TXT
[2018/01/31 15:34:08.345] [DEBUG ] [ 7684] [FILESYSTEM] [E:msmpeng.exe] FileSystem::Read : 3, 0, 176
[2018/01/31 15:34:08.345] [DEBUG ] [ 7684] [FILESYSTEM] [E:msmpeng.exe] handle found : arq backup data/readme.txt
[2018/01/31 15:34:08.345] [MESSAGE ] [ 7684] [CACHE ] [readme.txt] Read : 000002A256ED0550, 0, 176
[2018/01/31 15:34:08.345] [DEBUG ] [ 7684] [CACHE ] [readme.txt] Queue empty offset 0 to 65536(65536 bytes)
[2018/01/31 15:34:08.345] [MESSAGE ] [ 7684] [CACHE ] [readme.txt] Read-ahead : 0, 65536
[2018/01/31 15:34:08.345] [DEBUG ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read [jibberish] offset 0, length 176
[2018/01/31 15:34:08.345] [DEBUG ] [ 6576] [PROTOCOL ] AmazonCloudDrive::GetRedirectURL current timestamp [1517430848]
[2018/01/31 15:34:08.345] [DEBUG ] [ 6576] [PROTOCOL ] AmazonCloudDrive::GetRedirectURL try remove expired entries
[2018/01/31 15:34:08.345] [DEBUG ] [ 6576] [PROTOCOL ] AmazonCloudDrive::GetRedirectURL lookup [jibberish]
[2018/01/31 15:34:08.626] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> HTTPS GET >> 429 : LINk - 293ms
[2018/01/31 15:34:08.626] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> 5 time(s) backoff left
[2018/01/31 15:34:09.642] [DEBUG ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> Backoff done. Retry
[2018/01/31 15:34:09.736] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> HTTPS GET >> 429 : LINk - 109ms
[2018/01/31 15:34:09.736] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> 4 time(s) backoff left
[2018/01/31 15:34:11.751] [DEBUG ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> Backoff done. Retry
[2018/01/31 15:34:11.814] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> HTTPS GET >> 429 : LINk - 73ms
[2018/01/31 15:34:11.814] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> 3 time(s) backoff left
[2018/01/31 15:34:15.829] [DEBUG ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> Backoff done. Retry
[2018/01/31 15:34:15.892] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> HTTPS GET >> 429 : LINk - 73ms
[2018/01/31 15:34:15.892] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> 2 time(s) backoff left
[2018/01/31 15:34:17.064] [DEBUG ] [ 6540] [CACHE ] [CACHE] [C:\ProgramData\NetDrive3_cache_\acd] Total cache : 0(limits : 4,294,967,296)
[2018/01/31 15:34:23.907] [DEBUG ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> Backoff done. Retry
[2018/01/31 15:34:23.985] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> HTTPS GET >> 429 : LINk - 79ms
[2018/01/31 15:34:23.985] [MESSAGE ] [ 6576] [PROTOCOL ] AmazonCloudDrive::Read >> 1 time(s) backoff left

Is it the latest NetDrive v3?

I really love rclone, and I’m sorry that Amazon doesn’t like it.

In order to get a second copy of my ACD, I’ve:

  • installed the odrive CLI client, configured it, and mounted my odrive in a directory
  • forced the sync of directories (.cloudf) so that I (or a script) can find the files (.cloud)
  • written a little script that syncs a file (i.e. downloads it), moves it, and loops over the rest of the files (a rough sketch is below)
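A minimal sketch of that loop (the paths and the exact odrive CLI invocation are assumptions; adjust them to however your agent is installed):

#!/bin/bash
# Minimal sketch, not my exact script. Assumptions: the odrive CLI is on
# PATH as "odrive", and "odrive sync <file>.cloud" downloads the real file
# next to the placeholder. SRC is the odrive mount, DEST is the target.
SRC="$HOME/odrive-agent-mount/Amazon Cloud Drive"
DEST="/mnt/backup/acd-copy"

find "$SRC" -name '*.cloud' -print0 | while IFS= read -r -d '' placeholder; do
  odrive sync "$placeholder" || continue   # download this one file
  real="${placeholder%.cloud}"             # downloaded file loses the .cloud suffix
  rel="${real#"$SRC"/}"                    # path relative to the mount
  mkdir -p "$DEST/$(dirname "$rel")"
  mv "$real" "$DEST/$rel"                  # moving it keeps local disk usage low
done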

It works well and it isn’t complicated at all. Here is the real source of inspiration for this method:

https://www.reddit.com/r/PlexACD/comments/6e4cwl/tutorial_how_to_transfer_your_data_from_amazon/

PS: because I only sync files with odrive (that is, download them) and then move them, the files are progressively deleted from my ACD. If you want to leave them in your ACD, I think you have to unsync them instead. I’ve never tried; I think it’s a paid option in odrive.

I hope my post helps!

Do the steps in this tutorial still work? How long ago did you follow it? I’m about to follow it, and I’m worried I’ll run into some sort of rate-limit errors and then fail to complete the task within my free GCC credits.

I installed the ODrive client on a Raspberry Pi at home this January, a few days after Amazon revoked the keys. It has downloaded 200 GB every night since. So yes, the ODrive part works.

I don’t know about the Google Cloud Platform part, but you don’t have to stick to GCP. You could operate from your home, rent a dedicated server, a cheap VPS, etc. It really depends on your purpose (getting data back home, transferring to another cloud …), your budget, the time frame you have, …

I only wanted to point out another solution to connect your ACD and download files from it.


That sounds like a really simple solution to the issue. Any chance you would mind sharing the script you’re using? It would make this so much simpler for me.

So I’m doing this:
https://www.reddit.com/r/PlexACD/comments/6e4cwl/tutorial_how_to_transfer_your_data_from_amazon/

So far, syncing the .cloudf files has been a MAJOR pain: after 2-3 hours or roughly 200-400k files the odrive agent crashes, and unless I’m paying attention, time just wastes away until I come back. rclone was so much nicer; it never froze, because it had smart ways to handle each of Amazon’s flaws.

I’m hoping that when I move to syncing the .cloud files it’ll go more smoothly, because I’m hoping the crashes are caused by making too many API requests too quickly (I’ve synced 1.5 million .cloudf files out of 3 million or so).

So yeah, also @tanka, feel free to use the Reddit guide. One thing I did try was using the screen and xargs commands on the .cloudf sync, but I’m not sure if it really helped or not; I think the performance differences were mainly due to off-peak versus peak usage hours more so than the number of xargs processes I used.
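For what it’s worth, the variant I tried looked roughly like this (not the guide’s exact command; the mount path and the parallelism count are just what I used, run inside a screen session so it survives a dropped connection):

find "$HOME/odrive-agent-mount" -name '*.cloudf' -print0 | xargs -0 -n 1 -P 25 odrive sync   # expand 25 folder placeholders at a time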

I agree with you: rclone is almost perfect! I still use it when I’m not connecting to Amazon.

I’m sorry the odrive solution was so painful for you. Like you said, you probably hit an API limit. I’m not at the same scale as you: I’ve only synced 50 directories and I have 4,000 files (big files).

I hope you manage to get your files back with one solution or another. :wink:

I still have roughly 40 days left. Unless Amazon screws me, I’ll be fine, even if I have headaches (I hope).

edit: So, odrive has a few flaws, but by using xargs with 25 parallel processes I was able to make it so that a random crash every hour or two was no big deal at all (as that would still leave 20-odd commands running). At first this didn’t work well, but raising my RAM to 5 GB basically solved all my problems and it’s running great now.

With difficulty, I read through all the comments and came to these conclusions. Please correct me if I’ve misunderstood anything.

  1. The ACD oauth proxy is still broken as of this day, 2018-02-04.
  2. You can copy/move data from ACD to GD (Google Drive) with ODrive running on GCP (Google Cloud Platform).
  3. You will run into small problems here and there, and there is a fairly high limit on the amount of data you can copy/move, but there are also fixes that allow you to muddle through.
  4. @left1000 kindly refers us to detailed instructions on how to do this:
    https://www.reddit.com/r/PlexACD/comments/6e4cwl/tutorial_how_to_transfer_your_data_from_amazon/

The instructions seem detailed enough for me to follow, though it will take some effort and a lot of care.

With 893 GB and 80 days left, I should be OK.

Not only is rclone an amazing product, but the rclone pals, ncw and all, are also great companions to keep. Thank you all!

Ugh, I seem to perhaps be doomed once again.
“Amazon Cloud Drive enforces a limit on the rate that requests can be made to their service. You have made too many requests recently and hit this limit. Please wait a few minutes before making additional requests to Amazon Cloud Drive.”

I’ve had this error for roughly the past 8 hours! Hopefully this is just a 24-hour temp ban (or shorter); if so, I might still make it. ATM I’m at 2 TB of 21 TB transferred, but 2 million of 2 million files queued for transfer successfully.
(Also note I did not confirm this: the transfer got stuck 8 hours ago, and that’s the error message I see now. It’s entirely possible odrive screwed up, stopped transferring 8 hours ago, but kept hammering through API requests for no reason the entire time. I’ll update if/when it starts working again.)

If you’re following the guide I linked to, I highly recommend using the screen and xargs method for the .cloudf syncs if you have anywhere over 100,000 files to sync. Doing so will automatically restart the command, meaning you don’t have to run it once for each depth of subdirectory; you might only have to run it once. (Something like the loop below.)
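Roughly what I mean (the mount path and the parallelism count are just examples): keep re-running the expansion until no .cloudf placeholders are left, so each pass picks up the next level of subdirectories.

while find "$HOME/odrive-agent-mount" -name '*.cloudf' | grep -q . ; do
  find "$HOME/odrive-agent-mount" -name '*.cloudf' -print0 | xargs -0 -n 1 -P 25 odrive sync
done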

893 GB and 80 days left would be super easy to do via this method, because the cost would be so low that you could run GCC for a month, so no amount of throttling should be able to stop you, @carlyunanliu.

edit: Waited an hour, still getting the message. Not sure what to do.
edit2: I called them. They said my account is not locked; accounts flagged as locked get 48-hour bans. The supervisor’s guess was that my ban would be at most 24 hours. That’s doable, as long as it doesn’t happen again, but I’ve made 2.2 million out of the 4 million needed API requests so far, by my estimation.

As a follow-up, I would guess that either Amazon’s limit is somewhere around 2.2 million API requests per 48 hours, OR when the odrive agent crashes it generates an endless stream of useless API requests (it does peg my CPU usage at 100% until I kill the agent, whereas normal operation is 30-50%).

edit3: 24 hours after these events, it’s working again. So that answers that.

edit4: After only another 1.8 TB and 9 more hours, I got the error from ACD again. This is starting to make the task seem impossible. On the bright side, the logs suggest that this ban may have lasted only 30-60 minutes and was issued 5 or 6 times this morning while I was AFK.
edit5: Just checked and found no evidence in the odrive logs to support my previous sentence, despite the evidence existing in the network traffic logs. I might have to wait 24 or 48 hours and resume with 1 connection at a time, I guess. I really cannot explain this second ban; I was roughly 90% less active in how many requests I tried to make.
edit6: This makes my total actual data recovered 3.54 TB after 3.5 days, with roughly 1.5 of those days spent banned. So I’m probably going to change from a 21 TB disk to a 5 TB disk in order to buy myself an extra couple of weeks. ACD is really not making this easy.

Sorry for the double post, but this is kind of a new topic, and it ENTIRELY renders my previous post moot.

It looks like Google Drive’s 750 GB per day upload limit is not bypassed by uploading from a Google Cloud disk.
I’d read before that this limit was bypassed when uploading from a Google Cloud Compute persistent disk, which made sense: they’re both in the same building/server farm/etc. At least I was pretty sure both were under us-east-1.

Anyone else experienced this? Or the opposite? Is this somehow a coincidence? Doubtful.

“error googleapi: Error 403: User rate limit exceeded, userRateLimitExceeded”

The reason I’m asking this is because 6-8 months ago this trick absolutely worked for a lot of people.
As you can see here:
https://www.reddit.com/r/PlexACD/comments/6e4cwl/tutorial_how_to_transfer_your_data_from_amazon/
Google may have closed the loophole, although limiting the bandwidth of transfers occurring within the same building is a little odd. It’s also possible that Drive disks and Cloud disks are no longer in the same building, or that Drive disks were moved from us-east-1 to us-east-4 or us-central-1. If that were true, I could move my cloud disk to another US server location and get this trick working for me.

However, if this method hasn’t worked for anyone at all in months, I should resign myself to that fact and not bother experimenting.

The downside is that I’ll have to monitor this annoying process EVERY DAY for a month. The upside is that I can limit my ACD usage and stop getting ACD bans.

From what I can remember, for a while there was no upload limit to Google Drive; this is something Google brought in later, although I can’t remember exactly when it happened.
So it does not matter where you upload from: you will always be limited to around 750 GB per day.
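If it helps, one rough way to live with that cap is to throttle rclone so a continuous transfer stays just under the quota; 750 GB over 24 hours works out to roughly 8.7 MB/s. Something like this (the remote name and paths are placeholders for whatever yours are called):

rclone sync /local/acd-copy gdrive:acd-copy --bwlimit 8.5M --transfers 4 -v
# 8.5 MB/s x 86,400 s/day ≈ 734 GB/day, just under the 750 GB quota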

So I just tested it, and I was able to manually upload a file to Google Drive using the website, yet rclone seems to be stopped after 750 GB. I wonder if it’s an API limit, and whether I’d get another 750 GB using Air Explorer or odrive? Or even another computer and rclone again?