S3 Versions + MFA delete to protect from accidental destruction of remote backup

Thanks for the great tool rclone!

I would like to protect my remote backup from:

  1. myself accidentally uploading/overwriting with bad data and/or deleting data
  2. an intruder doing the same

One possible way to do this is to enable versioning on the bucket and enable MFA delete. Permanent deletes can then only be performed with additional security outside of the rclone config.

If I could list the versions using rclone, then I could easily see if there have been any mistaken/unauthorized uploads. It seems like b2 versions are supported, but does rclone support S3 versions?

Thanks!

I didn’t know about S3 versions! They seem quite similar to b2 versions…

Can you please make a new issue on github so I can record this request?

Nice! Should I create an issue for MFA delete too?
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html

Perhaps it is as easy as adding the MFA token on the command line:
rclone delete --mfa "20899872 301749" remote:

and the token gets added to the HTTP header:
x-amz-mfa: 20899872 301749

It is up to the rclone user to know their MFA device number and, of course, the current token.

//Fredrik

Sure, why not!

That looks reasonably straight forward. Fancy helping to implement it?

Well, theoretically, it would be as simple as the diff below. Unfortunately, since I have never programmed in Go before, I cannot even build my own source. Meh.
I tried all the commands on the web page and some others. The only command that actually builds my source is go build s3/s3.go, but that will not rebuild the rclone binary.

diff --git a/s3/s3.go b/s3/s3.go
index 7df10b1..81f0777 100644
--- a/s3/s3.go
+++ b/s3/s3.go
@@ -231,6 +231,7 @@ var (
        // Flags
        s3ACL          = fs.StringP("s3-acl", "", "", "Canned ACL used when creating buckets and/or storing objects in S3")
        s3StorageClass = fs.StringP("s3-storage-class", "", "", "Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)")
+       s3MFA          = fs.StringP("s3-mfa", "", "", "MFA string to authorize deletes")
 )
 
 // Fs represents a remote s3 server
@@ -1037,6 +1038,9 @@ func (o *Object) Remove() error {
                Bucket: &o.fs.bucket,
                Key:    &key,
        }
+       if *s3MFA != "" {
+               req.SetMFA(*s3MFA)
+       }
        _, err := o.fs.c.DeleteObject(&req)
        return err
 }
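
If this built, usage would presumably be the same as the earlier suggestion, just with the new flag name:

rclone delete --s3-mfa "20899872 301749" remote: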

That diff looks reasonable :slight_smile:

Change to the top level rclone directory and type go build to build rclone in the current directory, or better, type go install to install rclone in $GOPATH/bin (which is very likely ~/go/bin). go install is preferred since it does incremental builds.

I forked rclone on github, then cloned it to my laptop. Then I introduced an explicit error in s3.go to check whether it actually tries to compile my broken code. Then from the top rclone dir:

go build // Brief pause, then nothing

go install // go install: no install location for directory /home/…/rclone outside GOPATH

Trying instead:
go install github.com/weetmuts/rclone

cd /home/…/go/src/github.com/weetmuts/rclone
Introduce error in s3.go
go build // Nothing
go install // Does recreate the binary, but no compilation error

Hmm, it seems like import "github.com/ncw/rclone/…" etc. in rclone.go is not relative to the actual cloned github repository, but instead an absolute reference to your repository. Thus, even if I am standing in my own forked clone, it will build your code instead. Which explains why my code is not recompiled.

Ok, if I do my changes in a clone of your repository (instead of in a fork of your repository), then it is easy to rebuild. Simply "go install".

However, the --s3-mfa flag is not that useful until rclone also supports supplying the version id in the delete request, listing the versions, and so on.
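
For reference, this is roughly what that support would involve, using the aws-sdk-go library that rclone already uses. Just a sketch from my side: the bucket, key, version id and MFA string are placeholders, and it assumes credentials and a region are already configured in the environment.

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    bucket := "mybucketname" // placeholder
    key := "TheFileName"     // placeholder
    // Placeholder MFA string in the "<device serial or arn> <token>" format.
    mfa := "arn:aws:iam::123456789012:mfa/my-mfa-device 123456"

    svc := s3.New(session.Must(session.NewSession()))

    // List every version (and delete marker) stored for the key.
    err := svc.ListObjectVersionsPages(&s3.ListObjectVersionsInput{
        Bucket: aws.String(bucket),
        Prefix: aws.String(key),
    }, func(page *s3.ListObjectVersionsOutput, lastPage bool) bool {
        for _, v := range page.Versions {
            fmt.Printf("version %s of %s (%v)\n", *v.VersionId, *v.Key, *v.LastModified)
        }
        for _, d := range page.DeleteMarkers {
            fmt.Printf("delete marker %s of %s\n", *d.VersionId, *d.Key)
        }
        return true
    })
    if err != nil {
        log.Fatal(err)
    }

    // Permanently delete one specific version; this is the request that needs
    // the x-amz-mfa header once MFA delete is enabled on the bucket.
    _, err = svc.DeleteObject(&s3.DeleteObjectInput{
        Bucket:    aws.String(bucket),
        Key:       aws.String(key),
        VersionId: aws.String("fB0KdXcWyaNRisFKzIUZWbS3xZRPTP7Z"), // placeholder
        MFA:       aws.String(mfa),
    })
    if err != nil {
        log.Fatal(err)
    }
}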

In any case, I have tested S3 MFA delete with versioning using the aws cli. These are my experiences so far.

  1. For the moment it seems that only the MFA device registered with the owner of the bucket can be used.
    Even though I create the buckets through an IAM user, I have to use my root owner's MFA, since the buckets seem to default to being owned by the root account. I do not know if this is lack of skill on my side or intended.

The root owner's MFA device number can be found under "My Security Credentials -> MFA" (not in IAM users, where all the other MFA devices can be found). It looks like this: arn:aws:iam::123456789012:mfa/root-account-mfa-device

  2. How to add versioning and MFA delete to a bucket. This requires supplying an MFA token! (A rough Go equivalent is sketched further down.)
    aws s3api put-bucket-versioning --bucket mybucketname --versioning-configuration MFADelete=Enabled,Status=Enabled --mfa 'arn:aws:iam::123456789012:mfa/root-account-mfa-device 064546'

Unfortunately, since I have to use the owner/root account, I have to get an access_key and secret for my root account again (after having properly deleted them when I started using IAM users). If I do not do this, the aws cli will complain about an inconsistency between the access key and the MFA token.

There is no web-ui support for anything MFA delete related! We are on the bleeding edge here.

  3. Now you can use the bucket with versioning. I.e. all versions are maintained and deletes simply insert a delete marker. Yes, you can "delete" but not "DELETE".

  4. To delete a version do:
    aws s3api delete-object --bucket mybucket_name --key 'TheFileName' --version-id fB0KdXcWyaNRisFKzIUZWbS3xZRPTP7Z --mfa 'arn:aws:iam::123456789012:mfa/root-account-mfa-device 983013'

If you delete the delete marker (by specifying its version id) the file re-appears. To fully delete everything, you have to go through each version and delete it.
You cannot delete anything through the web-ui. There is no support for popping up an MFA prompt there. Might appear in the future, who knows.

So enable MFA delete with caution! It is not that easy to reverse or clean up yet. The necessary tools are somewhat lacking.

Hopefully there is (or will be) a way to change the bucket owner to an IAM user, i.e. to the same user I normally use for rclone. Then this would work rather nicely, with support in rclone for deleting a batch of versions from a file listing them, and for listing all versions in the bucket.
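
For completeness, the put-bucket-versioning step from point 2 above would look roughly like this via aws-sdk-go. Again only a sketch; the bucket name and MFA string are the same placeholders as in the cli examples, and it needs the root credentials plus the root MFA just like the cli does.

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    svc := s3.New(session.Must(session.NewSession()))

    // Equivalent of:
    //   aws s3api put-bucket-versioning --bucket mybucketname \
    //     --versioning-configuration MFADelete=Enabled,Status=Enabled \
    //     --mfa '<device arn> <token>'
    _, err := svc.PutBucketVersioning(&s3.PutBucketVersioningInput{
        Bucket: aws.String("mybucketname"),
        MFA:    aws.String("arn:aws:iam::123456789012:mfa/root-account-mfa-device 064546"),
        VersioningConfiguration: &s3.VersioningConfiguration{
            MFADelete: aws.String(s3.MFADeleteEnabled),
            Status:    aws.String(s3.BucketVersioningStatusEnabled),
        },
    })
    if err != nil {
        log.Fatal(err)
    }
}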

When I was learning how to do this a few months back, what worked for me:

mkdir $HOME/GOLANG
export GOPATH=$HOME/GOLANG
go get github.com/ncw/rclone

This will download and build from upstream, all in $GOPATH.

Now $HOME/GOLANG/src/github.com/ncw/rclone/ has the source that you can hack on and compile and so on.

I then had my own github fork and created a branch on it (if I remembered). Then I'd diff the two trees to work out what to copy into my own branch, to commit back, to create the pull request.

There’s probably better ways of doing it, but this worked for me!
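
A probably nicer variant, assuming your fork lives at github.com/weetmuts/rclone (adjust the names to taste), is to skip the tree-diffing and push branches from the upstream checkout straight to the fork:

cd $GOPATH/src/github.com/ncw/rclone
git remote add myfork git@github.com:weetmuts/rclone.git
git checkout -b s3-mfa
(hack, commit)
git push myfork s3-mfa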

Thanks sweh! Yeah, that seems to be the way to do it.

Ok, it seems like MFA delete is really deprecated and not useful at all.

  1. it has to be used with the root account and you have to have an access_key and secret for
    the root account! Blech.

https://forums.aws.amazon.com/thread.jspa?messageID=344462&#344462

  2. You can only delete a single object per token. If you get a new token every 30 seconds, that is two files per minute! You can disable MFA delete, do all the deletes and then re-enable it. But that leaves a hole while it is disabled…

There is supposedly a way to create MFA-authenticated sessions for IAM users, which perhaps could be used to require MFA for the bucket's delete-object operation.

http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html

If I manage to solve this, I will update this thread again.

With go 1.8 (?) GOPATH defaults to $HOME/go so you don’t even need to set that.

:frowning:

Great!

I’ve been playing with the AWS CLI tool at work for general purpose Amazon cloud stuff (creating VPCs, networks, EC2 instances, talking to S3, etc etc). Part of that is also working out how 2FA works with the CLI.

In general, the Access key and secret key are the entry point. So you start with that.

You can then do aws sts get-caller-identity to work out the AWS ARN for your user. It will end in :user/FOOBAR; you want the "FOOBAR" part.

This can be used to get the 2FA token ID with aws iam list-mfa-devices --user-name FOOBAR. You want the serial number from this.

Now you can use your 2FA token: aws sts get-session-token --serial-number SERIAL --token-code YOUR2FAVALUE

The returned JSON will contain a SecretAccessKey, SessionToken, and AccessKeyId.

You set the environment variables

export AWS_SESSION_TOKEN=<SessionToken>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>
export AWS_ACCESS_KEY=<AccessKeyID>

These environment variables override the access/secret keys in your configuration and are valid for a period of time (eg 12 hours; the expiration date is in the returned JSON as well).

It’s a lot of work, but it’s scriptable.

I dunno if rclone can use environment variables to override config files in this way; if not then it’ll need updating.

Adding a command to rclone so it does this handshake (eval $(rclone getaws2fa)) might make life easier as well, but then you'll have to worry about the different shell types (sh, csh) and platforms (Windows).

It can if you set env_auth in the config.

The shell stuff is relatively easy to solve, I'd say. It could also output an rclone config file, or make or update a remote in the config file, which might make more sense for rclone use.

Unlike normal oauth tokens these ones can’t be renewed; on expiration 2FA needs to be entered again, so if you rewrite the config then make sure you use different config options and don’t override the primary access/secret keys - they’re still needed to get new 2FA tokens :slight_smile:

I see! Making a new remote would be the best plan then probably.

Until then, setting the env_auth = true option for the s3 remote picks up the environment variables automatically, so the caller just needs to work out how to set the variables themselves.
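
For example, a remote along these lines (the remote name is just an example) will pick up whatever AWS_* variables are set instead of keys stored in the config file:

[s3mfa]
type = s3
env_auth = true

So after the get-session-token exports above, something like rclone ls s3mfa:mybucket should run with the temporary session credentials.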

This almost calls out for a set of “helper” scripts; the first script could be for s3 MFA, but maybe others would become useful in the future!

I would tend to write helper stuff in go to make it portable to all the platforms rclone runs on.

If you were to write a script for this, then I can have a go at porting it to go!

Here’s some code :slight_smile: https://github.com/sweharris/aws-cli-mfa

Nice!

If I were to port that to go, do you reckon you could have a go at debugging it?

I don't have an AWS account with "must MFA" and I can't break the one I have access to!
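
To give us something concrete to start from, the core of the port would presumably be something like this. Completely untested from my side, and the output is sh-style exports only:

package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/sts"
)

func main() {
    if len(os.Args) != 3 {
        fmt.Fprintln(os.Stderr, "usage: aws2fa <mfa-device-serial-or-arn> <token-code>")
        os.Exit(1)
    }

    // Assumes the long-lived access/secret key and a region are configured in
    // the environment or ~/.aws/ files; they are used to sign this request.
    svc := sts.New(session.Must(session.NewSession()))

    out, err := svc.GetSessionToken(&sts.GetSessionTokenInput{
        SerialNumber: aws.String(os.Args[1]),
        TokenCode:    aws.String(os.Args[2]),
    })
    if err != nil {
        fmt.Fprintln(os.Stderr, "get-session-token failed:", err)
        os.Exit(1)
    }

    // Print sh-style exports so the caller can do: eval $(aws2fa <serial> <token>)
    c := out.Credentials
    fmt.Printf("export AWS_ACCESS_KEY_ID=%s\n", *c.AccessKeyId)
    fmt.Printf("export AWS_SECRET_ACCESS_KEY=%s\n", *c.SecretAccessKey)
    fmt.Printf("export AWS_SESSION_TOKEN=%s\n", *c.SessionToken)
}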