Issue with Mega

hello and welcome to the forum,

this behaviour is documented and a workaround is offered.
https://rclone.org/mega/#failure-to-log-in

Hi!
Thanks for taking the time to respond to my question! :slight_smile: I read the page you kindly sent me by email, and right now I am planning a strategy to apply the proposed workaround. Thanks again!

Regards,
Marco.

if you need help with that strategy, start a new topic in Help and Support.
if you get a working solution, start a new topic and share it....

Hello there!
Just to share my experience with this issue...
It has been about a week since my Mega account got blocked because of this issue. Today I tried to manually mount one single folder using the CLI (which should be fine, not that many concurrent requests to the Mega servers...), but I got blocked again. After that, I tried to log in using the MEGAsync Linux client and it was painfully slow, although it finally worked.

So, you see, from my point of view and experience... I think (as the Mega support team wrote to me) they check the database server logs and, when they see that someone is using rclone, they block the account if they see fit, no matter whether there are fewer than 90 concurrent connection requests...

What makes me stay with rclone is that being able to mount the remote folder locally lets me access all the files at once and, for example, listen to my favourite music files. By contrast, the MEGAsync client only lets you load one file at a time (it gets tedious to repeat that process for every music file, no way). Today I wrote to the support team to let them know about that, despite realizing they probably won't pay attention to it...

I will try again next week, adding a 5-second sleep between requests in my bash scripts... Perhaps that way it will work. I only have 8 folders in my Mega account, that's all.
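
Something like this is what I have in mind, just a sketch; the remote name (mega:) and the folder names below are placeholders for my real ones, and rclone's --tpslimit flag can also slow down the request rate:

```
#!/usr/bin/env bash
# Copy my folders one by one, pausing between them so the requests
# don't hit the Mega servers in a burst. Remote and folder names are placeholders.
folders=("Music" "Photos" "Documents")

for f in "${folders[@]}"; do
    rclone copy "mega:${f}" "/home/marco/mega/${f}" --tpslimit 1
    sleep 5   # wait 5 seconds before touching the next folder
done
```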

Thanks for your support, regards,
Marco.

Of course we are tempted to use Mega given its pricing.
But trust me, Mega.nz is rubbish.

Encryption.

You know I like that one. Symmetric encryption is to be avoided when you really want to secure your files.
duplicity does the job right: asymmetric encryption, using a public and private key.
Mega sells it as zero-knowledge storage, except for the problem that you store your private key in their software...
When you encrypt with a public key, the private key never has to be exposed.
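
For example, with duplicity the backup is encrypted to a GPG public key, and the matching private key never has to touch the backup machine or the provider; the key ID and paths here are just placeholders:

```
# Encrypt each backup volume to a GPG public key with duplicity.
# Only the public key is needed to back up; the private key is only
# needed to restore, so it can stay offline.
# Key ID, source path and target URL are placeholders.
duplicity --encrypt-key 0xABCD1234 /home/me/documents file:///mnt/backup/documents
```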

Hashes.

When you transfer files you want to verify them against corruption and detect whether they have been altered. During transfer there is of course some inline hashing of the blocks to avoid corruption; however, every decent sysadmin checks every file again in a second step anyway. Good luck with that given Mega's poorly designed API.
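
With rclone, for example, that second step could look roughly like this; as far as I know Mega exposes no usable checksums, so --download makes rclone fetch the files back and compare them byte for byte (remote and paths are placeholders):

```
# Second-step verification after a transfer.
# --download re-fetches the remote files and compares them byte for byte,
# which works even when the backend provides no usable hashes.
rclone check /home/me/documents mega:documents --download
```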

Timestamps

When I transfer my files I want to see exactly the same modification date and time on them.
WebDAV has issues with that. Going directly through the API might work, since Syncovery seems to manage to set the right timestamp...
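
A quick way to eyeball the timestamps with rclone, for example (remote name and paths are placeholders): rclone lsl prints size, modification time and path, so the two listings can be compared directly:

```
# Compare modification times between the local copy and the remote.
rclone lsl /home/me/documents > local.txt
rclone lsl webdav-remote:documents > remote.txt
diff local.txt remote.txt
```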

Sessions

Where you have a decent login mechanism and roles in S3 cloud space, with Mega you are left messing around
with 2FA. The only software I have seen so far that works with the API and 2FA is Syncovery
on my (crappy) Windows box.

Speed

I would expect faster transfer speeds to Mega, but in Europe this seems to be a dream...

Pricing

Mega is cheap; however, when you transfer files there is a traffic limit tied to the amount of storage you have! This means that if you have 1 TB of space, once you have used 1 TB of traffic the red lights go off...

Finally

My setup is as follows. I use S3 cloud space from Wasabi (yes, I pay twice as much). On top of that, I run s3ql so I have a POSIX filesystem with encryption, compression and deduplication! I use a big, permanent cache on a 16 TB ZFS filesystem on FreeBSD. God, I like ZFS. That filesystem ROCKS.

So in this setup, ZFS protects me against bitrot to some extent, although it only holds the s3ql cache anyway. s3ql gives me a mountable space on Wasabi S3, so I have a POSIX filesystem. It compresses, deduplicates and encrypts, and it fetches blocks that are not in the cache on the fly. The big cache is used to back up with borgbackup, or to access files that are needed often....
When it crashes, s3ql checks the cache against the files in S3... so I like that as well...
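
Roughly, the s3ql side looks like this; the bucket name, cache directory and sizes are placeholders, the credentials are assumed to live in ~/.s3ql/authinfo2, and the cache directory sits on the ZFS pool:

```
# Create the filesystem on a Wasabi bucket (s3c = generic S3-compatible backend).
mkfs.s3ql "s3c://s3.wasabisys.com:443/my-bucket/s3ql"

# Mount it with a big persistent cache on the ZFS pool (cachesize is in KiB).
mount.s3ql --cachedir /tank/s3ql-cache --cachesize 536870912 \
    --compress zlib-6 --allow-other \
    "s3c://s3.wasabisys.com:443/my-bucket/s3ql" /mnt/s3ql

# After an unclean shutdown, fsck.s3ql reconciles the local cache
# with what is stored in the bucket.
fsck.s3ql --cachedir /tank/s3ql-cache "s3c://s3.wasabisys.com:443/my-bucket/s3ql"
```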

Once it is mounted I share it on the network using NFS or SMB...
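
On FreeBSD the NFS side is only a couple of lines, roughly like this (the path and the client network are placeholders):

```
# Export the mounted s3ql filesystem read-only to the local network.
echo '/mnt/s3ql -ro -network 192.168.1.0/24' >> /etc/exports
service mountd reload   # make mountd re-read /etc/exports
```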

In this setup my files are fully encrypted, they are on S3 space, I back them up, and I can use
rclone, rsync, borgbackup, streaming, etc. And if needed I can replicate the bucket to other S3 spaces (see the sketch below)...
And nobody can see the cute .ss of my girlfriend in my files.
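
Replicating the bucket is a one-liner with rclone, for example (the remote and bucket names are placeholders):

```
# Mirror the Wasabi bucket to another S3-compatible provider.
rclone sync wasabi:my-bucket other-s3:my-bucket --checksum
```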

I pay twice as much (but do I really, given that Wasabi does not charge for egress?), yes, but I am pretty happy I can kick Mega back to New Zealand.

I will not bash rclone, since it is a great product; however, in my fair opinion it should have been written in
Python. That would have solved a lot of issues. But that is my humble opinion.

hello and welcome to the forum,

you posted this as a suspected bug, so what is the rclone bug you are having trouble with?

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.