My regular backup flow for ~125 GB from Google Drive to S3 via EC2 looks like this:
- Google Takeout (10 GB ZIPs) → Google Drive (/Takeout) → EC2 (IPv6-only host, used to verify contents) → S3 (Glacier Deep Archive)
I go this route for a few reasons:
- cloud-to-cloud is extremely fast: under an hour vs. days over my home ISP
- the EC2 step lets me verify the archives
- it's easy to restart or correct if something goes wrong
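The flow above can be sketched end-to-end with rclone. The remote names (gcloud, s3-backup) match the commands later in this post; the local directory name is hypothetical:

```shell
# 1. Pull the Takeout ZIPs from Google Drive onto the EC2 host
rclone copy gcloud:/Takeout ./2023-11-30-backup

# 2. Verify the archive contents locally (e.g. unzip -t on each ZIP)

# 3. Push the verified archives to S3
rclone copy --progress ./2023-11-30-backup s3-backup:xxx-backup/
```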
I used IPv6 on my EC2 instance since AWS is now charging ~$5/mo for a public IPv4 address.
rclone had a few compatibility issues with IPv6:
- OAuth authorization on Google Drive assumed IPv4
- the default Amazon S3 endpoint does not support IPv6
Here is a guide to using rclone on an IPv6-only server (no IPv4 addressing).
rclone assumes the OAuth redirect URL will be IPv4. To work around this, I ran rclone from my desktop and copied the tokens to the EC2 instance.
I tried updating the OAuth server & token URLs to use IPv6, but that had no effect. The blocker was that SSH could host the port-forward over IPv6, but the auth redirect URL was set by rclone to an IPv4 loopback address.
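A minimal sketch of the workaround, assuming a Google Drive remote named gcloud and an instance reachable as ec2-host (both hypothetical names). rclone keeps its tokens in its config file, ~/.config/rclone/rclone.conf by default:

```shell
# On the desktop (which has IPv4): complete the OAuth flow in a browser
rclone config   # create and authorize the "gcloud" Google Drive remote

# Copy the config, including the OAuth tokens, to the IPv6-only instance
ssh ec2-host 'mkdir -p ~/.config/rclone'
scp ~/.config/rclone/rclone.conf ec2-host:~/.config/rclone/rclone.conf
```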
Connecting to Google Drive
The rclone backend for Google Drive supports IPv6 out of the box; no special configuration was needed.
rclone copy gcloud:/Takeout ./
Performance was very good, at around 60 MB/s (about 30 minutes for 120 GB).
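Since the EC2 step exists to verify the archives, a quick integrity check over the downloaded ZIPs might look like this (the Takeout filename pattern is an assumption):

```shell
# unzip -t reads every entry and checks its CRC without extracting
for f in takeout-*.zip; do
  unzip -tq "$f" > /dev/null && echo "OK: $f" || echo "CORRUPT: $f"
done
```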
Connecting to S3
The Amazon S3 API does not support IPv6 on its default endpoints; "dual-stack" endpoints must be used instead.
Configure rclone as usual, note your desired region (e.g. us-west-2), and update the S3 endpoint in your rclone config:
region = us-west-2
location_constraint = us-west-2
endpoint = s3.dualstack.us-west-2.amazonaws.com
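Put together, the S3 remote section in rclone.conf might look like this. The remote name s3-backup and storage_class = DEEP_ARCHIVE (for Glacier Deep Archive) are assumptions; credentials are omitted:

```ini
[s3-backup]
type = s3
provider = AWS
env_auth = true
region = us-west-2
location_constraint = us-west-2
endpoint = s3.dualstack.us-west-2.amazonaws.com
storage_class = DEEP_ARCHIVE
```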
Testing The Transfer
rclone copy --progress 2023-11-30-backup s3-backup:xxx-backup/
rclone should copy to S3 as usual. If it hangs, use the AWS CLI (configured with dual-stack endpoints, as shown below) to confirm you can reach the S3 endpoints and list buckets.
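Another quick connectivity check from the instance is to list buckets through the same remote; for an S3 remote, rclone lsd lists top-level "directories", i.e. buckets:

```shell
rclone lsd s3-backup:
# If this also hangs, the endpoint is the problem, not the copy itself
```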
[optional] Testing Using AWS CLI
Add the following to ~/.aws/config (nested under the profile's s3 key):
[default]
s3 =
    use_dualstack_endpoint = true
    addressing_style = virtual
aws s3api list-buckets
# buckets should list here