Unable to auth via docker

What is the problem you are having with rclone?

I cannot access the HTTP server within Docker, even with the port forwarded.

I did come across "Can't access 127.0.0.1:port when using docker", which indicates it should work in 1.58 (though without any reference), but 1.58 doesn't seem to work for me.

Run the command 'rclone version' and share the full output of the command.

Using the official docker image:

rclone v1.58.1
- os/version: alpine 3.15.4 (64 bit)
- os/kernel: 5.4.0-120-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.18.1
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

docker exec -it sync_gdrive_ohn rclone authorize "drive" "eyJj......asdf"

The rclone config contents with secrets removed.

[GDrive_setup]
type = drive
client_id = NOPE 
client_secret = NOPE
scope = drive

A log from the command with the -vv flag

2022/06/28 17:46:37 NOTICE: Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
2022/06/28 17:46:37 NOTICE: Please go to the following link: http://127.0.0.1:53682/auth?state=ihcW2NL0mmsjOc_8lW3chA
2022/06/28 17:46:37 NOTICE: Log in and authorize rclone for access
2022/06/28 17:46:37 NOTICE: Waiting for code...

Other Input

I noticed when messing with the web GUI that it was necessary to set --rc-addr :5572 to bind to [::] or 0.0.0.0... There doesn't seem to be any such option for the authorize server, and thus I believe it is not possible to reach the server in the container, even with forwarding.

Any workaround suggestion would be great, or am I simply missing something? Other than volumes and ports, there are no relevant compose settings, and I can see the port forwarded fine in docker ps.
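For concreteness, the compose file boils down to just the image, volume, and port mapping, roughly like this (the service and volume names here are illustrative placeholders, not my exact file):

```yaml
services:
  sync_gdrive_ohn:
    image: rclone/rclone:1.58.1
    volumes:
      - ./config:/config/rclone
    ports:
      # forward the rclone authorize callback port to the host
      - "53682:53682"
```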

I did target 1.57.0 via docker and was able to get the classic Google URL instead of the local authorize server URL, but this is a bit more rigmarole than expected.

hello and welcome to the forum,

so you want to create and/or authorize a remote from inside docker,
instead of following the rclone docs?

have you seen
https://forum.rclone.org/t/it-is-possible-to-run-rclone-inside-the-docker-container-and-mount-the-folder-to-host/31268/3

Not sure why you'd:

  1. welcome me, and then immediately berate me
  2. make a statement declaring I'm not following the rclone docs... It's literally right here, and here.
  3. point me to a thread that has absolutely nothing to do with the topic.

The only relevant portion was the port mention, which I already covered:

Other than volumes and ports, there are no relevant compose settings, and I can see the port forwarded fine in docker ps.

Use the remote auth procedure outlined here: Remote Setup

PS please be kind in your responses - no one is being paid to help here and everyone is trying their best.

PS please be kind in your responses - no one is being paid to help here and everyone is trying their best.

See, here's where we disagree....

His/her demeanor and response read as a direct rebuke born of exhaustion and jadedness, and regardless of the setting I will NOT tolerate being talked to and treated this way.

Not only that, but this is now the second response completely neglecting to read what I posted.

I not only pointed out that I'm executing rclone authorize exactly as detailed in the docs for - and get this - HEADLESS REMOTE AUTHORIZATION, but I literally pointed to the exact same docs you did.

This is exactly TWO responses that literally provide ZERO help, don't address the issue, and neglect the information given.

Once again, I'll be extremely clear - THIS IS NOT ASSISTANCE, AND NO ONE HAS ATTEMPTED TO HELP YET.

I've re-read @asdffdsa's post quite a few times and even slept on it, and I really don't see what you are seeing. It looks like he was trying to be helpful, as that's how I read it.

I count two people that have attempted to help you, but you don't like their answers. @ncw really is the most patient / helpful person I've seen in probably about 25 years of being on the internet and using forums, and provides the most amazing help I've ever seen. He's probably got more patience and thoughtfulness than me. The bold / capital letters are exactly what you are talking about when you say you're being berated, and really not called for.

That being said, your answer is actually given in both posts already.

When you have an OAuth process, there is an expectation on the response URL: it must match the link listed in the output for the authorization to work.

http://127.0.0.1:53682

This is listed in the app registration, so it must match 100% for the auth process to work. That is why, if you have a system/server without a browser, you have to follow the remote setup process. You can't bind it to 0.0.0.0, because if the request comes from anything other than 127.0.0.1, it will not work per Google's OAuth requirements.
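The rule can be sketched as a toy check (this is an illustration of the general OAuth exact-match behaviour, not rclone's actual code):

```python
# Illustration only: OAuth providers compare the redirect URI in the
# request against the registered one as an exact string match.
REGISTERED_REDIRECT = "http://127.0.0.1:53682/"

def redirect_allowed(requested: str) -> bool:
    # No host normalisation happens: "localhost" and "0.0.0.0" are not
    # treated as equal to "127.0.0.1", even when they reach the same
    # listener on the machine.
    return requested == REGISTERED_REDIRECT

print(redirect_allowed("http://127.0.0.1:53682/"))  # True
print(redirect_allowed("http://0.0.0.0:53682/"))    # False
print(redirect_allowed("http://localhost:53682/"))  # False
```

So even if the container listens on all interfaces, a callback arriving at any address other than the registered one gets rejected by the provider side of the flow.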

I'm not a huge docker user and don't have a large amount of container expertise, so understand that. You might be able to map the port from the docker container to the host.

    ports:
      - "53682:53682"

In my case, I have 3579 mapped from a container and I can connect to localhost on the same port. This does require a browser to be running on the host though; as my Linux box is headless, I can't do this.

tcp        0      0 0.0.0.0:3579            0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:3579          127.0.0.1:37936         ESTABLISHED
tcp        0      0 172.18.0.1:48146        172.18.0.3:3579         ESTABLISHED
tcp        0      0 127.0.0.1:37936         127.0.0.1:3579          ESTABLISHED
tcp6       0      0 :::3579                 :::*                    LISTEN
felix@gemini:/opt/docker$ telnet localhost 3579
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
^]
telnet> q
Connection closed.

With all that being said, it's probably easier to do the remote setup that was linked as it's been shared and I won't relink it.

That requires you to pick advanced as noted in the docs, and then just paste back in the code you get from a machine with a browser.


I'm just going to drop the attention to the toxic jaded behavior since it seems to be distracting, and not the point of the post anyway...

When you have an OAuth process, there is an expectation on the response URL: it must match the link listed in the output for the authorization to work.

As for the response you gave, I appreciate this tidbit of information, which I had suspected. I simply could not find clarity in Google's docs, and the rclone docs make no mention that the request needs to come from 127.0.0.1, or that proxying should be avoided.

I've literally scanned forums, github issues, and changelogs, and done general googling to attempt to resolve this, and across those various locations it's difficult to know what is and isn't possible when it's not documented. Responses in forums and issues both indicate it works in docker "once configured properly", so I wasn't sure if this was a regression or user error. Turns out it's more user error than anything, but with a heavy emphasis on "well, how am I supposed to know this if it's not mentioned anywhere?"

My actual understanding, based on Google's OAuth2 docs (and having implemented OAuth plenty of times before), is that the client can define the redirect_uri when making the request - so this might be more a matter of how it was implemented than an actual limitation.
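To illustrate what I mean: the redirect_uri travels as an ordinary query parameter in the authorization request, so it is chosen by the client application (the endpoint, client_id, and state values below are placeholders for illustration, not rclone's actual registration):

```python
from urllib.parse import urlencode

# Placeholder values; rclone hard-codes its own choices internally.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/auth"

params = {
    "client_id": "YOUR_CLIENT_ID",              # placeholder
    "redirect_uri": "http://127.0.0.1:53682/",  # chosen by the client app
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/drive",
    "state": "some-random-state",               # placeholder
}

# The provider will only honour the request if redirect_uri matches
# one of the URIs registered for the client_id.
auth_url = AUTH_ENDPOINT + "?" + urlencode(params)
print(auth_url)
```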

In the end, this still isn't the result or expectation I have as an end-user, and I authenticate with OAuth through 8 other docker services (not related to rclone) just fine, including with other Google OAuth2 Apps.


And to be clear, I realize this is a newer feature and being worked on, which is why I sought help and provided all the information I did. This is also why I took offense to others speaking condescendingly, as well as neglecting to thoughtfully read the provided information.

Google allows you/me/any developer to define a redirect URI for the authorization process to be allowed.

Rclone is expecting this to be a browser-based system and that I am authorizing locally.

Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine

y) Yes (default)
n) No
y/n>
2022/06/29 08:09:11 NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=_8gzj2Buq8TKJhzGZa16uQ
2022/06/29 08:09:11 NOTICE: Log in and authorize rclone for access
2022/06/29 08:09:11 NOTICE: Waiting for code...

So I get a browser launched, and as part of OAuth the redirect URI is already set in rclone to be http://127.0.0.1:53682, as that's a choice rclone made in defining its OAuth process for Google.

If you have a headless machine/docker/whatever, you can use the remote setup and run the oAuth part on another machine that does have a browser.

For this to work, you will need rclone available on a machine that has
a web browser available.
For more help and alternate methods see: https://rclone.org/remote_setup/
Execute the following on the machine with the web browser (same rclone
version recommended):
	rclone authorize "drive"
Then paste the result.
Enter a value.

So I can launch that on any machine with a browser, get a token and paste it back over to the headless machine.

Each app can work based on what is best for that app. Without examples and what they are doing, it's really hard to compare/contrast the use case of 8 unknown apps compared to rclone. Choices are always made in terms of flexibility and ease of use. I personally use headless machines all the time and just use the remote setup if I have to as it's a one time thing and you never have to touch it again. There are always edge use cases or certain things that may not work out as trying to please 100% usually has poor results.

Those apps you mention may have a "looser" process on security or made choices with the assumption of always being run in a docker. I recently migrated all my stuff to containers locally but still see little value in putting rclone in a container so I don't. That's my personal use case / truth which is great for me, but clearly not great for you as you want to put it in a container. We're both right as our 'truths' are our use cases and what matters for us the most.

I'll never claim to be perfect at all times, as I get punchy here and there as well, but the one person I've yet to see make a condescending comment in my time moderating and reading posts is @ncw.

He goes out of his way to answer every question, every time someone pings him and always goes way above what I would ever imagine in terms of being a really thoughtful and helpful human being. He teaches folks countless times how to contribute and develop code offering guidance.

So yes, it would be best to drop that part of the post and focus on the question at hand.

If I own the service (as I'm hosting it, I do), I am the process. There's no assumptions, I'm explicitly setting these values through configuration.

That's my personal use case / truth which is great for me, but clearly not great for you as you want to put it in a container. We're both right as our 'truths' are our use cases and what matters for us the most.

Honestly, I prefer containers for everything that isn't core OS, so I can operate virtually host-OS-agnostic. There's also the concept of an OS where docker replaces systemd/initrc/sysinit/etc, such as RancherOS, or environments like Portainer/Unraid that are popular to use. I could also potentially have ephemeral systems like game servers, or shared content that is accessible ad hoc but needs regular backup. There are tons of reasons to run in a container.

That really doesn't give me any specifics or context on what you are doing and how you are doing it though. What are you setting those values to? What's the use case? What's the flow?

Rclone's user base is quite spread out, and consistency / ease of use are definitely high on the list, so you'll see that redirect URI shared across a plethora of cloud providers, as each provider has unique rules/requirements for how they do their setup and application registration.

Dropbox, for example, uses the same redirect URI as Google for consistency across providers.

My Caddy/Github OAuth flow is a bit different and requires an external redirect URI, since Caddy is making the call back. There are quite a number of flavors and use cases for redirect URIs; rclone chose a method that needs to be a bit more provider agnostic and generally user friendly across a cornucopia of providers. To the best of my knowledge, that's how we got the specific redirect URI and the localhost portion, so it's the same across the landscape.

What's the flow?

It's OAuth... you know, the RFC?

I'm pretty sure we've all heard of "sensible defaults", right? Why is this even continuing to be a conversation when fully valid use cases have been given (even though none should be needed)? The burden of proof is now on you to explain why it's unnecessary, since there are multiple posts about this throughout the forum and the issue tracker alike. This to me is nothing but stubbornness and hard-headed rebuttals that aren't addressing the issue.

I've given enough technical input backed with documentation, to where this should be a more productive conversation than what I'm receiving.


Consider the conversation killed. I'm over it, and will use an alternative solution.

As I try to have a good conversation, you continue to berate and condescend to me for trying to understand and help you, with repeated personal attacks on me - which is the same thing you jumped out of the box accusing two other folks of doing.

Good luck with the alternate solution.