How are you configuring these - with --s3-session-token (or the config file equivalent)?
The timeouts I listed are the longest ones that AWS can support.
In this AWS doc, in the "Role Chaining" section, you'll see the one-hour and 12-hour limits for chained and non-chained sessions ... and these limits are hard ones, regardless of what I set.
Yes, it might well. I'm not sure what the remote control API is, but I can certainly write code to call it from, say, JS or Python. Most of the solutions that come to mind involve swapping sessions underneath a running job, but it needs to be done in a way that the job doesn't die -- perhaps with timeout parameters and perhaps with retries. Safety is a big concern here, so I'd prefer to keep the extended-session ability tightly scoped to a specific file transfer, rather than to a user's command-line session!
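For what it's worth, if I understand it right the remote control API is just HTTP: when rclone is started with `--rc` (or run as `rclone rcd`) it serves JSON commands on `localhost:5572`, which is easy to hit from Python. A minimal sketch, assuming the default rc address and no rc authentication configured (the fields I read out of `core/stats` are just illustrative):

```python
# Minimal sketch: poll a running rclone instance over its remote control API.
# Assumes rclone was started with --rc (default address localhost:5572, no auth).
import requests

RC_URL = "http://localhost:5572"

def rc(command: str, **params):
    """POST a command to the rclone rc API and return the decoded JSON reply."""
    resp = requests.post(f"{RC_URL}/{command}", json=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    stats = rc("core/stats")                     # global transfer statistics
    print("bytes transferred:", stats.get("bytes"))
    print("active transfers:", stats.get("transferring", []))
```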
That's a good idea too, but presumably I'd have to wrap rclone in some kind of program that kills it and restarts it every 11 hours or so? Must that be done via the API, or is there a capability in the program itself?
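On the wrapper idea: since rclone skips files that are already complete at the destination, a crude outer loop that re-runs the same copy until it exits cleanly, refreshing credentials before each pass, would be one way to do it. A rough sketch, where `refresh_credentials()` is a hypothetical helper (real code would call STS, as in the snippet further down):

```python
# Rough sketch of a wrapper that restarts rclone before the 12-hour credential
# limit, relying on rclone to skip files already copied on earlier passes.
import os
import subprocess
import time

ELEVEN_HOURS = 11 * 3600

def refresh_credentials() -> dict:
    """Hypothetical helper: return fresh temporary AWS credentials as env vars."""
    raise NotImplementedError  # e.g. call STS assume_role; see the later snippet

def run_transfer(src: str, dst: str) -> None:
    while True:
        env = {**os.environ, **refresh_credentials()}
        proc = subprocess.Popen(["rclone", "copy", src, dst, "-v"], env=env)
        try:
            if proc.wait(timeout=ELEVEN_HOURS) == 0:
                return                      # transfer finished cleanly
        except subprocess.TimeoutExpired:
            proc.terminate()                # stop before the creds expire
            proc.wait()
        time.sleep(5)                       # then loop: new creds, resume copy
```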
Done some more testing ... verified (contrary to my now-deleted post) that 12-hour long-running sessions do indeed expire, leaving rclone repeating messages like: 2023/02/19 00:00:23 DEBUG : pacer: low level retry 1/2 (error ExpiredToken: The provided token has expired.
It is then possible to kill the process, get new temporary credentials for another 12 hours, manually import them into the environment, and restart rclone, but that's not a viable solution.
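The "get new temp creds" step could at least be scripted; something like the following, assuming a role whose maximum session duration has been raised to 12 hours (the role ARN is a placeholder) and an rclone remote configured with `env_auth = true` so it reads the standard AWS environment variables:

```python
# Sketch: fetch fresh 12-hour temporary credentials via STS and expose them
# as the environment variables rclone's S3 backend reads when env_auth is on.
import os
import boto3

def refresh_credentials(role_arn: str) -> dict:
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn,                    # placeholder ARN, not a real role
        RoleSessionName="rclone-transfer",
        DurationSeconds=12 * 3600,           # only allowed if the role permits 12h
    )["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }

if __name__ == "__main__":
    os.environ.update(refresh_credentials("arn:aws:iam::123456789012:role/transfer"))
    # ...then (re)start rclone in this environment, e.g. via subprocess.
```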
If the --s3-profile option worked with endpoints (see --s3-profile failing when explicit s3 endpoint is present - #4 by robinpc), which it appears not to, this problem might be easily resolved, because apparently using a named profile from the CLI also caches and automatically renews the temporary credentials (per this longish post; search for "cache").
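For context, the kind of named profile that gets the CLI/SDK to cache and auto-refresh temporary credentials is a role profile in ~/.aws/config, something like this (profile name and role ARN are placeholders), which rclone would then be pointed at with `--s3-profile transfer-role`:

```ini
# ~/.aws/config -- profile name and role ARN are placeholders
[profile transfer-role]
role_arn = arn:aws:iam::123456789012:role/transfer
source_profile = default
region = us-east-1
```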
So maybe, just maybe, getting S3 endpoints working with --s3-profile will solve both the "need endpoints" and the "need refreshable creds" problems.
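If that ever works, the target setup would presumably look something like this (remote name, profile name, and endpoint are placeholders, and the whole point of the linked issue is that this combination currently fails):

```ini
# rclone.conf -- remote name and endpoint are placeholders
[mys3]
type = s3
provider = AWS
env_auth = true
endpoint = https://s3.some-private-endpoint.example.com
```

```
rclone copy /data mys3:my-bucket --s3-profile transfer-role -v
```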