Rclone has been banned from Amazon Drive

What is the name of the software?

ACD is only available for private use. What do you mean by “service providers”?

Bootleg ISP service providers. Here in this corner of Europe I know of 7 or 8 that use it as a backup for their backup.

This. Only bandwidth counts (us downloading from ACD, mostly). Storage space is peanuts for AWS or Google.

@ncw Nick, can you provide an update? Did Amazon ever get back to you to open up an account and get new credentials, or are they still keeping you locked out?

Hi,
Could you help me do this from Windows?
Thank you!

I have tried to download my files using the ACD app.
It hung sometimes and ran multiple instances (?) of itself.
I wrote to Amazon.

Letter from ACD support:
I’m really sorry to hear that you’re disappointed with the ACD and you want to get back Rclone.
Currently, we’re collating these sorts of feedback and forwarding it to our technical team.
They’ll surely look into it and try to re-launch Rclone in its next updates.
Please allow some time to our technical team to get this updated.
If you still have any concerns, please feel free to write back to us using the following link.

I suppose it would be useful for the Rclone author, ncw, with all respect for his work on that project, to contact Amazon and propose close cooperation in solving the problem.

I guess it is solvable,
because other applications have not been banned and still work! (odrive, for example).


If they do re-enable it (we can only hope), that’s my secondary storage backup sorted.

Noticed @ncw got interviewed for
this article

:slight_smile:

Yeah: “There’s speculation that Amazon may be having second thoughts about promising unlimited storage for $5 per month.” … No shizzle Sherlock!

It’s a great way of getting rid of loss-making customers: make the service unusable and let them cancel. The desktop app is rubbish, crashes on every PC I have installed it on, and the tech support is simply not interested. So rclone makes the service usable, which is not what they seem to want, I guess.

The ban is not storage-related. At their level, 100 PB of data is as trivial as 10 PB, so for them 100 TB is peanuts.

Sure it is. Storage costs money, even if it is not much. Thousands of users abusing this costs money. Why would anybody give anything away for free?!

Disks don’t grow on trees; they need power, cooling, maintenance and replacement.
The best-priced consumer disk works out to about 27 €/TB. Even if it costs them only 20% of the consumer price, it still costs them money!
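As a back-of-the-envelope sketch (every number below is my own assumption layered on the ~27 €/TB figure above, not Amazon’s real costs or architecture), the raw disk bill for one heavy user alone swallows years of the $5/month fee:

```python
# Back-of-the-envelope storage cost; all figures are assumptions,
# not Amazon's actual pricing or storage design.
consumer_eur_per_tb = 27.0   # ~27 EUR/TB consumer disk price quoted above
provider_discount = 0.20     # assume the provider pays 20% of consumer price
replication_factor = 3       # assume each byte is stored 3x for durability
stored_tb = 100              # a hypothetical heavy rclone user

disk_cost_eur = stored_tb * consumer_eur_per_tb * provider_discount * replication_factor
months_of_fees = disk_cost_eur / 5   # vs. the ~$5/month unlimited plan (treating EUR ~ USD)
print(f"Disk cost for {stored_tb} TB: ~{disk_cost_eur:.0f} EUR "
      f"(~{months_of_fees:.0f} months of fees just for the drives)")
```

Even under those generous assumptions the drives alone take decades of subscription fees to pay off, before power, cooling and bandwidth.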


Do you really think 10-100 TB of data from a few users would be anything to worry about?
YouTube was estimated at 10-15 EB of disk space across about 17 datacenters in 2015... and do you know what 2 years means in the tech industry?

Sorry, the geometry of modern datacenters doesn’t mean 1,000 4 TB disks equal 4 PB raw. You are thinking on a home-datacenter scale.
Deduplication works at scale; the bigger, the better.
Given enough data and enough storage, you could offer almost infinite logical storage on finite physical storage.

The only issue is power consumption, for feeding all these disks and all the ACs cooling them (the reason why free cooling is such a buzzword now).

However, bandwidth is NOT free, and it’s one of the primary costs for them. That’s why all of them (Google, AWS, etc.) have their own multi-fibre backbones and network complexes.

So, if you abuse their network (an rclone FUSE mount), that WILL effectively cost them money.
If you hog the CPUs on their ACD frontends with pulls and GETs like mad, as the FUSE mounts do, that WILL cost them money.

That (besides the very SERIOUS security issue) is the problem.
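Roughly what that request load looks like: every small read a FUSE mount serves can become its own ranged HTTP GET against the provider’s frontends. A minimal sketch with a purely hypothetical URL (this is not ACD’s real API, and rclone’s actual mount does chunked reads and caching):

```python
# Sketch only: how random reads through a mount map onto ranged HTTP GETs.
# The URL is hypothetical; this is not Amazon Drive's real API.
import requests

def read_range(url: str, offset: int, length: int) -> bytes:
    # One small read == one GET the provider's frontends have to serve.
    headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.content

# Seeking around a large media file produces a burst of such requests:
# for offset in range(0, 1_000_000_000, 128 * 1024):
#     read_range("https://example.com/big-file.mkv", offset, 128 * 1024)
```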


Yeah, because deduplication over multiple hosts, multiple datacenters and multiple continents is so easy-peasy. You would also wait an almost infinite time until you get your file reassembled.

Maybe Google should hire you.

De-duplication works only with non-encrypted data. Random blocks and, for that matter, random-looking blocks of data may even exist in “duplicate” in those vast amounts of data (that depends on the chosen block size), but as long as all data patterns are (expected to be) randomly distributed, you need at least as many bits to address the “copy” as you save by deduplication. Yes, deduplication works great at scale with unencrypted data; but insofar as rclone users use encryption, this argument doesn’t hold here.

All this is and has been studied in great depth in information theory.
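A small sketch of that point, using a toy keyed stream cipher so it stays self-contained (this is NOT rclone’s or ACD’s real crypto, just an illustration): the same plaintext block encrypted under two different keys/nonces fingerprints as two distinct blocks, so hash-based dedup saves nothing.

```python
# Toy illustration (stdlib only; NOT rclone's or ACD's actual crypto):
# the same plaintext block encrypted under two different keys/nonces
# produces unrelated ciphertext, so hash-based dedup sees two unique blocks.
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream, purely for illustration.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, nonce: bytes, block: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(block, keystream(key, nonce, len(block))))

block = b"the same 4 MiB movie chunk " * 1000         # identical plaintext
c1 = encrypt(os.urandom(32), os.urandom(16), block)   # user A's upload
c2 = encrypt(os.urandom(32), os.urandom(16), block)   # user B's upload

print("plaintext dedupes: ", hashlib.sha256(block).digest() == hashlib.sha256(block).digest())  # True
print("ciphertext dedupes:", hashlib.sha256(c1).digest() == hashlib.sha256(c2).digest())        # False
```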


Those sure look like huge-ass storage appliances:

That’s from 2009. And I really don’t understand what you want to prove with that.

so?

If you don’t have a whole bunch of disks connected directly to the system, IOPS and latency become an issue. If you distribute among many small blades with just a few disks, you need costly interconnects. Therefore dedup becomes problematic. There are many expert opinions available on the web stating that Google, AWS, etc. do not use dedup at large scale.

Deduplication works at a block level. Check the algorithms and data schematics.
You may not be deduping 200 whole movies, but you could be deduping a block from a porn movie against 2 episodes of Star Trek.
It’s all about the content.
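For anyone curious, here is a minimal fixed-size-block sketch of that idea (production systems typically use content-defined chunking and distributed indexes; nothing here claims to be how Amazon actually stores data):

```python
# Minimal fixed-size block deduplication; a sketch of the general technique,
# not Amazon's actual storage design.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024      # 4 MiB blocks, an arbitrary choice
store: dict[str, bytes] = {}      # fingerprint -> the single physical copy

def ingest(data: bytes) -> list[str]:
    """Split data into blocks, store each unique block once, return the recipe."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # physical write only if the block is new
        recipe.append(fp)             # the logical file is just a list of fingerprints
    return recipe

# Two files sharing a block (the "same scene in two different uploads" case):
shared = b"A" * BLOCK_SIZE
r1 = ingest(shared + b"B" * BLOCK_SIZE)
r2 = ingest(shared + b"C" * BLOCK_SIZE)
print("logical blocks:", len(r1) + len(r2), "| physical blocks:", len(store))  # 4 | 3
```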