Advice Requested on Long Running AWS Sessions for S3 Copies

As far as I can tell right now, AWS has two limits on session duration:

  • A hard limit of 1 hour where role chaining is involved
  • A hard limit of 12 hours where role chaining is not involved

In either case we have requirements for transfers to and from S3 that exceed the maximum limits.
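For context, a minimal sketch of minting such temporary credentials with boto3 (the role ARN and session name are placeholders); the DurationSeconds ceiling is where the two limits show up:

```python
# Sketch: requesting temporary credentials with boto3 (role ARN is a placeholder).
# AssumeRole accepts DurationSeconds up to the role's configured maximum session
# duration, which AWS caps at 12 hours (43200 s); if the caller's own credentials
# came from role chaining, AWS rejects anything over 1 hour (3600 s).
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/S3CopyRole",  # placeholder
    RoleSessionName="rclone-s3-copy",
    DurationSeconds=43200,  # 12 h; must be <= 3600 under role chaining
)["Credentials"]

# creds["AccessKeyId"], creds["SecretAccessKey"], creds["SessionToken"] and
# creds["Expiration"] are what get handed to rclone, e.g. via the AWS_*
# environment variables or --s3-session-token.
```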

Are there capabilities or techniques using rclone that will either:

  1. Expand these limits, or
  2. Allow relatively automatable recovery and restart when a transfer times out?

How are you configuring these - with --s3-session-token (or the config file equivalent)?

It would be possible using the remote control API to swap out the --s3-session-token - would that help?

What happens at the moment?

Restarting rclone with new auth should be enough to get it running again, at some cost of rclone working out where it got to.

How are you configuring these - with --s3-session-token (or the config file equivalent)?
The timeouts I listed are the longest ones that AWS can support.

In this AWS Doc, in the "Role Chaining" section, you'll see the 1-hour and 12-hour limits for chained and non-chained sessions ... and these limits are hard ones, regardless of what I set.

Yes, it might well. I'm not sure what the remote control API is, but I can certainly write code to call it from, say, JS or Python. Most of the solutions that come to mind involve changing sessions underneath a running job, but it needs to be done in a way that the job doesn't die, perhaps with timeout parameters and perhaps with retries. Safety is a big concern here, so I'd prefer to keep the extended-session ability tightly scoped to a specific file transfer rather than to a user's command-line session!
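For the record, something like this Python sketch is what I have in mind, assuming rclone is started with the rc server enabled (--rc plus --rc-user/--rc-pass), and assuming a config/update call on the remote is actually picked up by the running transfer, which is the part I'd need confirmed:

```python
# Sketch: pushing refreshed temporary credentials into a running rclone via the
# remote control API. Assumes rclone was started with something like:
#   rclone copy /data/outbound s3remote:my-bucket --rc --rc-user u --rc-pass p
# "s3remote" and the role ARN are placeholders.
import boto3
import requests

RC_URL = "http://localhost:5572"   # default rc listen address
RC_AUTH = ("u", "p")               # matches --rc-user / --rc-pass

def refresh_session(remote_name: str, role_arn: str) -> None:
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn,
        RoleSessionName="rclone-refresh",
        DurationSeconds=43200,
    )["Credentials"]
    # config/update rewrites the named remote's settings; whether the in-flight
    # transfer picks the new token up without a restart is the open question.
    resp = requests.post(
        f"{RC_URL}/config/update",
        auth=RC_AUTH,
        json={
            "name": remote_name,
            "parameters": {
                "access_key_id": creds["AccessKeyId"],
                "secret_access_key": creds["SecretAccessKey"],
                "session_token": creds["SessionToken"],
            },
        },
        timeout=30,
    )
    resp.raise_for_status()

refresh_session("s3remote", "arn:aws:iam::123456789012:role/S3CopyRole")
```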

That's a good idea too, but presumably I'd have to wrap rclone in some kind of program that kills it and restarts it every 11 hours or so? Must that be done via the API, or is there a capability in the program itself?

I've done some more testing and verified (contrary to my now-deleted post) that 12-hour long-running sessions do indeed expire, leaving rclone with recurring messages like:

2023/02/19 00:00:23 DEBUG : pacer: low level retry 1/2 (error ExpiredToken: The provided token has expired.

It is possible to then kill the process, get new temp creds for another 12 hours, manually import them into the environment, and restart rclone. That's not a viable solution when done by hand.
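Automating that cycle is possible, but it amounts to wrapping rclone in something like the sketch below (the role ARN, paths and the 11-hour cycle are placeholders, and it assumes the S3 remote uses env_auth so it reads the AWS_* variables); each restart also pays the cost mentioned above of rclone working out where it got to.

```python
# Sketch: restart rclone with fresh 12-hour credentials before each session
# expires. Role ARN, paths and timings are placeholders; the S3 remote is
# assumed to have env_auth = true so it picks up the AWS_* variables.
import os
import subprocess

import boto3

ROLE_ARN = "arn:aws:iam::123456789012:role/S3CopyRole"   # placeholder
RCLONE_CMD = ["rclone", "copy", "/data/outbound", "s3remote:my-bucket/outbound", "-v"]
RESTART_AFTER = 11 * 3600   # restart comfortably before the 12-hour expiry

def fresh_env() -> dict:
    creds = boto3.client("sts").assume_role(
        RoleArn=ROLE_ARN,
        RoleSessionName="rclone-s3-copy",
        DurationSeconds=43200,
    )["Credentials"]
    env = os.environ.copy()
    env.update({
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    })
    return env

while True:
    proc = subprocess.Popen(RCLONE_CMD, env=fresh_env())
    try:
        proc.wait(timeout=RESTART_AFTER)
        break                      # rclone finished within this session
    except subprocess.TimeoutExpired:
        proc.terminate()           # stop before the token expires...
        proc.wait()                # ...then loop and restart with new creds
```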

If the --s3-profile option worked with explicit endpoints, which it appears not to (see --s3-profile failing when explicit s3 endpoint is present - #4 by robinpc), this problem might be easily resolved, because apparently using a named profile from the CLI also caches and automatically renews the temporary credentials (per this longish post, search for "cache").

So maybe, just maybe, getting --s3-endpoint working with --s3-profile will solve both the "need endpoints" and the "need refreshable creds" problems.
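For concreteness, the sort of invocation I have in mind once that fix lands, with the profile name, endpoint and paths as placeholders (and the assumption that the SDK-side refresh really does carry through to a long rclone run is exactly what would need testing):

```python
# Sketch: the eventual invocation if --s3-profile and --s3-endpoint worked
# together. The "longrunner" profile in ~/.aws/config would name a role_arn
# and source_profile, so the AWS SDK caches and renews the temporary
# credentials itself instead of rclone holding one fixed session token.
import subprocess

subprocess.run(
    [
        "rclone", "copy", "/data/outbound", "s3remote:my-bucket/outbound",
        "--s3-profile", "longrunner",                          # placeholder profile
        "--s3-endpoint", "https://s3.example.amazonaws.com",   # placeholder endpoint
        "-v",
    ],
    check=True,
)
```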

Any progress?

I have replied on the other thread. One fix for two problems sounds attractive!
