Proxy for Amazon Cloud Drive


#64

With this proxy for Amazon, is it also possible to use the crypt feature?

Thanks,


#65

No reason why you couldn’t…
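As a sketch of what that could look like: a crypt remote layered on top of the proxied ACD remote in rclone.conf. The remote names, paths, and values below are placeholders, not real credentials — `rclone config` generates the real (obscured) values for you.

```ini
# Hypothetical ~/.config/rclone/rclone.conf sketch (values are placeholders)

# The ACD remote set up via the auth proxy
[acd]
type = amazon cloud drive
token = {"access_token":"..."}

# A crypt remote wrapping a folder on the ACD remote
[amazonEncrypted]
type = crypt
remote = acd:encrypted
filename_encryption = standard
password = ...
```

After that, pointing rclone at `amazonEncrypted:` encrypts on upload and decrypts on download transparently.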


#66

Hi there,
This proxy method is working marvelously, but I still cannot get any log output:
rclone sync --transfers=20 "/MySourcePath" amazonEncrypted:"MyDestinationPath" --log-file=/MyLogPath/rclone.log

rclone.log is empty.
Any idea?


#67

Hi,

thanks for this, it works very well. As I just broke a 4TB HDD I’m very happy that I can restore the data from Amazon this easily again.
I was just wondering why I have to enter my Amazon login and password on the proxy page rather than directly on Amazon. The OAuth protocol shouldn't require this. In fact, an error occurred (something about enabling cookies) and I was then sent to the login form directly on amazon.com; logging in there worked and created the token.
Well, I just used a throwaway password and changed it back afterwards, but it would feel much more secure if I didn't have to enter my login outside of Amazon.
I just realized one can also enter garbage in the first login form and then be forwarded to Amazon, where one can log in securely.


#68

Try using the -v flag
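For example, taking the sync command from the earlier post (the paths are that post's placeholders), adding -v should make rclone write per-file INFO messages to the log file instead of leaving it empty:

```shell
# -v raises the log level to INFO so transfers are actually logged;
# without it, only errors end up in the file given to --log-file
rclone sync --transfers=20 -v \
  "/MySourcePath" amazonEncrypted:"MyDestinationPath" \
  --log-file=/MyLogPath/rclone.log
```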


#69

That isn’t what happens for me - I get redirected to the amazon page directly. The proxy should be invisible.


#70

Indeed, it appears now that I have to add --verbose while it was not necessary before.


#71

Hmm.
I tried it now on Linux and Windows, and in both cases the first login form is served by the auth proxy; see the address bar in the screenshot.
It's like that with the newest beta.
I'm also a bit afraid that Amazon will again take something like this as a reason to block a key.


#72

Sorry, yes that is the way it is supposed to work.

If it didn’t then it would leak the client_id at that point (in the URL).

The oauthproxy translates the client_id and client_secret on the fly.


#73

If it didn’t then it would leak the client_id at that point (in the URL).
The oauthproxy translates the client_id and client_secret on the fly.

Yes, but at this stage there is only supposed to be the client_id in the URL, not the secret.
There shouldn’t be a problem publishing that?


#74

I can’t find the docs at the moment which document exactly what Amazon said about keeping your credentials secret. My reading of the docs (which I can’t find now) was that both should be kept secret.

You can make an issue about this at https://github.com/ncw/oauthproxy if you’d like. I’m not strongly motivated to fix it though as Amazon haven’t exactly been my friend recently - PRs appreciated :wink:


#75

Hi,

I'm using rclone v1.37-024-g1ecf2bcb on Windows and I can't reach the token page.
Here is the page:


#76

@ncw - thanks for releasing this. I stumbled upon rclone a little while ago after fumbling through the stock ACD GUI client. It has sure made my life a lot easier.

But to everyone: Two questions I have going down this route:

  1. Realistically, what’s the estimated life-time for this proxy to continue working for access to ACD? I read that it’s running in Google App Engine thanks to the good graces of a fellow rclone user. But at the end of the day, someone has to be paying for that App Engine instance to run. What happens when that special someone stops paying the bill? (i.e. Do I only have another week / month / year to get all my data out of ACD in this way?)

  2. Can someone help me understand exactly what security risk is involved? Since this runs in Google App Engine, is it just a matter of someone adding some code to proxy server to willingly sniff out credentials, and then use those credentials to access all my data? I admit I’m being a bit naive here and am still trying to read up and understand how the authentication process works (I guess that ultimately, I still have my own username / password, but that still gets passed to the proxy during authentication, right?)

Thanks in advance!


#77

For #1. History tells us that it can go away at any minute, but it could also be there forever. Plan accordingly. To be frank, though, any of these providers can 'go away' at any minute. I've seen instances where Google has locked accounts permanently through automated means and refused to unlock them. I personally have at least two providers holding my critical data. I doubt someone (Nick?) would simply stop paying the bill; the more likely scenario is that Amazon doesn't like it and bans it.

For #2. If someone does modify the code to capture your credentials, they could access your data. You aren't being naive; you're asking good, legitimate questions. I believe this very reason is why @ncw didn't have a proxy to begin with: he didn't want a single point of failure and the increased risk of credentials passing through an intermediate device. That being said, these Google App Engine instances are pretty secure, so as long as you trust the owner, you're safe. But there is some element of trust involved. If it were compromised (and you knew about it, which may be a problem), it would be easy to deauthorize it.


#78

Hello. After I finish setting up a remote, can I go back to the stable version, or do I need to stay on the beta?

Thanks for your answer.


#79

You can use 1.37 now - I’ll adjust the text


#80

Hi, yesterday I got this issue while copying data from Amazon Cloud Drive to Google Drive:

2017/07/30 12:59:10 ERROR : abc_file.mp4: Failed to copy: failed to open source object: Get https://content-na.drive.amazonaws.com/cdproxy/nodes/Kb2o4T7sQ2-Q0RIHf4oeOg/content: dial tcp 52.200.231.248:443: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

2017/07/30 13:03:43 Failed to create file system for “acd:foldername”: failed to get endpoints: Get https://drive.amazonaws.com/drive/v1/account/endpoint: dial tcp 176.32.102.133:443: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

This error repeated many times during the copying process, with many different IP addresses, and then the copy ended with many files not copied.

I'm using rclone v1.37 (not the beta) over RDP (Remote Desktop Protocol) on Windows Server 2012.

Sorry for my poor English. Thanks so much!


#81

Likely you overwhelmed the networking with lots of open connections. Dial down the number of --transfers and --checkers, or increase the OS limits (not sure how to do this on Windows).
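For instance (the flag values and the `gdrive:` remote name below are just illustrative, not from the original post), something like this opens far fewer simultaneous sockets than --transfers=20:

```shell
# Lower --transfers (parallel file copies) and --checkers (parallel
# file comparisons) to reduce the number of open connections.
# Remote names and values here are illustrative.
rclone copy acd:foldername gdrive:foldername --transfers 2 --checkers 4 -v
```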

See this discussion for more info.


#82

I copied all my data from ACD to Drive. :heart_eyes:
Now I want to export all the filenames in ACD to a list. Rclone Browser has an export function, and I tried it with the .CSV format (with size and datetime), but it sorts the filenames randomly. I want them sorted alphabetically.
Do you know how I can do that?
Thanks so much!


#83

On Linux you can simply run the CSV through sort. On Windows you could open it in a program like Excel and sort it there.
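A minimal sketch on Linux, assuming the export puts the filename in the first comma-separated column (the sample rows below are made up for illustration):

```shell
# Create a small sample CSV in the same shape: filename,size,datetime
# (these rows are invented, not real export data)
cat > files.csv <<'EOF'
zebra.mp4,1024,2017-07-30
alpha.mp4,2048,2017-07-29
mango.mp4,512,2017-07-28
EOF

# Sort alphabetically on the first (filename) column only
sort -t, -k1,1 files.csv > sorted.csv
cat sorted.csv
```

Here -t, sets the comma as the field separator and -k1,1 restricts the sort key to the first column.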