I ran a service in a Docker container and exposed container port 8080 on host port 6000.
Command:
docker run \
-d \
--rm \
--name keycloak \
-p 6000:8080 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=admin \
quay.io/keycloak/keycloak \
-b 0.0.0.0 \
-Djboss.http.port=8080
Result of docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
71c6a8ea6529 quay.io/keycloak/keycloak "/opt/jboss/tools/do…" About an hour ago Up About an hour 8443/tcp, 0.0.0.0:6000->8080/tcp keycloak
Result of docker inspect keycloak
"Ports": {
"8080/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "6000"
}
],
"8443/tcp": null
},
Result of ps aux | grep docker
root 1481 0.0 0.5 1600328 83560 ? Ssl 18:17 0:02 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 2995 0.0 0.0 549300 4448 ? Sl 18:18 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 6000 -container-ip 172.17.0.2 -container-port 8080
root 3009 0.0 0.0 109104 5140 ? Sl 18:18 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/71c6a8ea6529bdcb1a04d5fa73b5ca0053a4d012905d592b6b342f1b0e8c9047 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
When I use curl, it can reach the service inside the container: curl -v http://localhost:6000/auth
* Trying 127.0.0.1:6000...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 6000 (#0)
> GET /auth/ HTTP/1.1
> Host: localhost:6000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Cache-Control: no-cache, must-revalidate, no-transform, no-store
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: SAMEORIGIN
< Referrer-Policy: no-referrer
< Content-Security-Policy: frame-src 'self'; frame-ancestors 'self'; object-src 'none';
< Date: Sun, 30 Aug 2020 17:44:03 GMT
< Connection: keep-alive
< X-Robots-Tag: none
< Strict-Transport-Security: max-age=31536000; includeSubDomains
< X-Content-Type-Options: nosniff
< Content-Type: text/html;charset=utf-8
< Content-Length: 4070
When I try the same in the Google Chrome browser, I get an error:
This site can’t be reached
The webpage at http://localhost:6000/auth/ might be temporarily down or it may have moved permanently to a new web address.
ERR_UNSAFE_PORT
Why can't the Google Chrome browser access the docker service using localhost and the exposed port?
Port 6000 is used by X11 by default and is therefore on Chrome's built-in list of unsafe, blocked ports.
You need to change it to a port Chrome considers safe, or start Chrome like this:
chrome --explicitly-allowed-ports=6000
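If changing the port is an option instead, remapping the container onto a host port Chrome treats as safe avoids the flag entirely. A minimal sketch based on the original command, with host port 8081 picked arbitrarily (any free, non-blocked port works):
docker run \
-d \
--rm \
--name keycloak \
-p 8081:8080 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=admin \
quay.io/keycloak/keycloak \
-b 0.0.0.0 \
-Djboss.http.port=8080
The browser URL then becomes http://localhost:8081/auth and no Chrome flag is needed.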
Related
Using this docker-compose file:
version: '3'
services:
hello:
image: nginxdemos/hello
ports:
- 7080:80
tool:
image: wbitt/network-multitool
tty: true
networks:
default:
name: test-network
If I curl from the host, it works.
❯ curl -s -o /dev/null -v http://192.168.1.102:7080
* Expire in 0 ms for 6 (transfer 0x8088b0)
* Trying 192.168.1.102...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x8088b0)
* Connected to 192.168.1.102 (192.168.1.102) port 7080 (#0)
> GET / HTTP/1.1
> Host: 192.168.1.102:7080
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.23.1
< Date: Sun, 10 May 2071 00:06:00 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Expires: Sun, 10 May 2071 00:05:59 GMT
< Cache-Control: no-cache
<
{ [6 bytes data]
* Connection #0 to host 192.168.1.102 left intact
If I try to contact another container from within the network, it fails.
❯ docker exec -it $(gdid tool) curl -s -o /dev/null -v http://hello
* Could not resolve host: hello
* Closing connection 0
Is this intended behaviour? I thought containers on the same network (created with docker-compose) are meant to be able to talk to each other by service name?
I am bringing the containers up with docker-compose up -d
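Not an answer, but two sanity checks that usually narrow this down. Both are standard Docker commands; <tool-container> stands for whatever $(gdid tool) resolves to, and nslookup is assumed to be present in the network-multitool image:
# list which containers are actually attached to test-network
docker network inspect test-network --format '{{range .Containers}}{{.Name}} {{end}}'
# check whether the service name resolves at all from inside the tool container
docker exec -it <tool-container> nslookup hello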
I have a curl request to a SharePoint server. The exact same command works on the Ubuntu host but does not work correctly inside the Ubuntu-based Docker container. The request takes quite a long time, and when run from the container I receive 401 Unauthorized after exactly 50 seconds. On the host it works fine. If the request takes less than 50 seconds, it works on both systems. Any ideas?
The curl is:
curl -k -v --http1.1 --ntlm --negotiate -u john:ABCabc123 -H "Content-type:application/json" -H "X-RequestDigest:0xD014B3ADC4C93DC83F204FAA953830CDD534A6DB13ECAE0CF40F4E7ECAA6E45E877B94D0F8A214940E5BFE5B5BA82AE9CAFA5974345A0EA96FEA9C91932AB5EB,13 Aug 2019 10:42:21 -0000" -d '{"query":{"ViewXml":"<View><Query></Query></View>"}}' -X POST "https://myapp/coll/9630bbe88ab246cd993f0085204a796a/_api/Web/Lists/GetByTitle('1')/GetItems?$select=Id,ContentType/Name&$expand=ContentType"
And wrong response after 50 seconds:
< HTTP/1.1 401 Unauthorized
< Server: Microsoft-IIS/10.0
< WWW-Authenticate: NTLM TlRMTVNTUAACAAAACQAJADgAAAAGgokC5MF+J9Y9IxEAAAAAAAAAAKoAqgBBAAAACgA5OAAAAA9HUkVEU1BERVYCABIARwBSAEUARABTAFAARABFAFYAAQASAFMAUAAxADYARABFAFYAMQA5AAQAGgBnAHIAZQBkAHMAcABkAGUAdgAuAGwAbwBjAAMALgBzAHAAMQA2AGQAZQB2ADEAOQAuAGcAcgBlAGQAcwBwAGQAZQB2AC4AbABvAGMABQAaAGcAcgBlAGQAcwBwAGQAZQB2AC4AbABvAGMABwAIAL57xWXGUdUBAAAAAA==
< SPRequestGuid: 04cdf99e-219a-f0fb-63ce-01c13ef289a5
< request-id: 04cdf99e-219a-f0fb-63ce-01c13ef289a5
< X-FRAME-OPTIONS: SAMEORIGIN
< SPRequestDuration: 3
< SPIisLatency: 0
< X-Powered-By: ASP.NET
< MicrosoftSharePointTeamServices: 16.0.0.4783
< X-Content-Type-Options: nosniff
< X-MS-InvokeApp: 1; RequireReadOnly
< Date: Tue, 13 Aug 2019 11:01:02 GMT
< Content-Length: 0
Additional info:
Some parts of the Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2.6-bionic AS runtime
ENTRYPOINT ["dotnet", "Core.API.dll"]
Some parts of the docker-compose file:
version: "3.7"
services:
core_api:
container_name: core_debug
network_mode: host
build:
dockerfile: ./Dockerfile.lnx
network: host
image: core_api
env_file:
- Deployment/LocalVM/sonar.env
ports:
- "80:80"
- "44364:80"
- "8080:80"
tmpfs:
- /run
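Not a diagnosis, just a hedged thing to rule out: NTLM authenticates the underlying TCP connection, so anything that silently drops that connection while it sits idle waiting for the slow response will make the retried request look unauthenticated (hence the 401). Enabling TCP keep-alive probes on the curl side is cheap to try; the 30-second interval below is an arbitrary choice:
curl --keepalive-time 30 -k -v --http1.1 --ntlm --negotiate -u john:ABCabc123 ...   # rest of the original command unchanged
As an aside, the ports: mappings in the compose file have no effect when network_mode: host is used, so they can be dropped.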
I was debugging an issue in my cluster: kubectl commands time out inside the kube-addon-manager pod, while the equivalent curl command works fine.
bash-4.3# kubectl get node --v 10
I1119 16:35:55.506867 54 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.10.5 (linux/amd64) kubernetes/32ac1c9" http://localhost:8080/api
I1119 16:36:25.507550 54 round_trippers.go:405] GET http://localhost:8080/api in 30000 milliseconds
I1119 16:36:25.507959 54 round_trippers.go:411] Response Headers:
I1119 16:36:25.508122 54 cached_discovery.go:124] skipped caching discovery info due to Get http://localhost:8080/api: dial tcp: i/o timeout
Equivalent curl command output
bash-4.3# curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.10.5 (linux/amd64) kubernetes/32ac1c9" http://localhost:8080/api
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /api HTTP/1.1
> Host: localhost:8080
> Accept: application/json, */*
> User-Agent: kubectl/v1.10.5 (linux/amd64) kubernetes/32ac1c9
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Mon, 19 Nov 2018 16:43:00 GMT
< Content-Length: 134
<
{"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"172.16.1.13:6443"}]}
* Connection #0 to host localhost left intact
I also tried running a Docker container with host network mode; the kubectl command still times out.
kube-addon-manager.yaml
apiVersion: v1
kind: Pod
metadata:
name: kube-addon-manager
namespace: kube-system
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
labels:
component: kube-addon-manager
spec:
hostNetwork: true
containers:
- name: kube-addon-manager
image: gcr.io/google-containers/kube-addon-manager:v8.6
imagePullPolicy: IfNotPresent
command:
- /bin/bash
- -c
- /opt/kube-addons.sh
resources:
requests:
cpu: 5m
memory: 50Mi
volumeMounts:
- mountPath: /etc/kubernetes/
name: addons
readOnly: true
volumes:
- name: addons
hostPath:
path: /etc/kubernetes/
It seems like in your config you are trying to talk to port 8080, which is the insecure port of the kube-apiserver.
You can try starting your kube-apiserver with this option:
--insecure-port
The default for the insecure port is 8080. Note that this option might be deprecated in the future.
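For reference, a rough sketch of what that looks like on a kube-apiserver command line of that era (these flags existed around v1.10 but have since been deprecated and removed, so treat this as version-dependent):
kube-apiserver \
--insecure-bind-address=127.0.0.1 \
--insecure-port=8080 \
...   # plus your existing flags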
Also, keep in mind that the kube-addon-manager is part of the legacy add-ons.
Update
The details in this question are getting long, but I think it narrows down to this:
For some reason the host name matters to Nginx when it's trying to figure out whether to proxy the request. If the host name is set to git.example.com the request does not seem to go through, but if it's set to 203.0.113.2 then it goes through. Why does the host name matter?
Filed an issue with Nginx, and one with Docker Compose.
Start of original question
When I type the IP address of the reverse proxy directly into my browser's address bar, it does perform the redirect.
When I use a URL that is resolved via the /etc/hosts entry 203.0.113.2 git.example.com, the "Welcome to Nginx" page is shown instead. Any ideas? This is the configuration:
server {
listen 203.0.113.2:80 default_server;
server_name 203.0.113.2 git.example.com;
proxy_set_header X-Real-IP $remote_addr; # pass on real client IP
location / {
proxy_pass http://203.0.113.1:3000;
}
}
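One hedged way to see which server blocks Nginx actually loaded (including any default.conf baked into the image, a common reason requests fall through to the stock welcome page) is to dump the effective configuration from the running proxy container; the container name below is a placeholder for whatever docker ps shows:
docker exec -it <gogs-nginx-container> nginx -T | grep -E 'server_name|listen'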
This is the docker-compose.yml file that is used to launch the whole thing:
version: '3'
services:
gogs-nginx:
build: ./proxy
ports:
- "80:80"
networks:
mk1net:
ipv4_address: 203.0.113.2
gogs:
image: gogs/gogs
ports:
- "3000:3000"
volumes:
- gogs-data:/data
networks:
mk1net:
ipv4_address: 203.0.113.3
volumes:
gogs-data:
external: true
networks:
mk1net:
ipam:
config:
- subnet: 203.0.113.0/24
One interesting thing is that I can navigate to for example:
http://203.0.113.2/issues
The log for the above URL is:
gogs-nginx_1 | 203.0.113.1 - - [07/Oct/2018:11:28:06 +0000] "GET / HTTP/1.1" 200 38825 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
If I then replace 203.0.113.2 with git.example.com (so that the URL ends up being http://git.example.com/issues), I get Nginx's "404 Not Found" page, and the log says:
gogs-nginx_1 | 2018/10/07 11:31:34 [error] 8#8: *10 open() "/usr/share/nginx/html/issues" failed (2: No such file or directory), client: 203.0.113.1, server: localhost, request: "GET /issues HTTP/1.1", host: "git.example.com"
If I only use http://git.example.com as the URL I get the NGINX welcome page, and the following log:
gogs-nginx_1 | 203.0.113.1 - - [07/Oct/2018:11:34:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
It looks like Nginx understands that the request is meant for the proxy, because it logs the proxy network's IP, but it does not proxy the request to the backend and instead returns a 304 ...
Using Curl to perform requests
Using curl with a host name parameter that targets the proxy like this:
curl -H 'Host: git.example.com' -si http://203.0.113.2
Results in the Nginx welcome page:
ole#mki:~/Gogs/.gogs/docker$ curl -H 'Host: git.example.com' -si http://203.0.113.2
HTTP/1.1 200 OK
Server: nginx/1.15.1
Date: Sun, 07 Oct 2018 17:09:11 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 03 Jul 2018 13:27:08 GMT
Connection: keep-alive
ETag: "5b3b79ac-264"
Accept-Ranges: bytes
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
But if I change the host name to the ip address like this:
curl -H 'Host: 203.0.113.2' -si http://203.0.113.2
Then the proxy works as it should:
ole#mki:~/Gogs/.gogs/docker$ curl -H 'Host: 203.0.113.2' -si http://203.0.113.2
HTTP/1.1 302 Found
Server: nginx/1.15.1
Date: Sun, 07 Oct 2018 17:14:46 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 34
Connection: keep-alive
Location: /user/login
Set-Cookie: lang=en-US; Path=/; Max-Age=2147483647
Set-Cookie: i_like_gogits=845bb09d69587b81; Path=/; HttpOnly
Set-Cookie: _csrf=neGgBfG4LdOcdrdeA0snHjVGz4s6MTUzODkzMjQ4NjE5MzEzNzI3OQ%3D%3D; Path=/; Expires=Mon, 08 Oct 2018 17:14:46 GMT; HttpOnly
Set-Cookie: redirect_to=%252F; Path=/
Found.
I am sorry, but I couldn't work out what is happening on your side, because the information is sometimes confusing and sometimes incomplete. Stack Overflow has a great explanation of what makes a good question (How to create a Minimal, Complete, and Verifiable example), so I have simply implemented a minimal example of the kind of system you are likely building.
Below I provide all the files and show a test run as well.
File #1: docker-compose.yml
gogs:
image: gogs/gogs
web:
build: .
ports:
- 8000:80
links:
- gogs
I have an outdated Docker version on my computer and I do not want to bother with Docker networking, so I have just connected both containers using Docker links. This is the most important part: the link ensures that (1) our web container depends on gogs, and (2) we are able to reference the gogs IP from inside web simply as gogs. Docker resolves the name to the IP assigned to the container.
Since I want a minimal example, I've skipped everything else as irrelevant, for example the volume.
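If you want to convince yourself that the link-based name resolution works, a quick check from inside the web container will do; g_web_1 is the container name docker ps reports further below, and ping is assumed to be available in the nginx image:
docker exec -it g_web_1 cat /etc/hosts   # the link should add a gogs entry here
docker exec -it g_web_1 ping -c 1 gogs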
File #2: Dockerfile
Newer Compose versions support config options specified right in docker-compose.yml, but I need a custom Dockerfile instead. It's trivial:
FROM nginx:stable-alpine
COPY gogs.conf /etc/nginx/conf.d
File #3: gogs.conf
And finally we need Nginx configuration for proxy:
server {
listen 80 default_server;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location / {
proxy_pass http://gogs:3000;
}
}
You may notice that here we refer to the other container simply by the name gogs, and we need to know which port number it exposes. We know: 3000.
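If you are ever unsure which ports an image exposes, Docker can tell you; a quick check (the format string is just one way to print it):
docker inspect --format '{{json .Config.ExposedPorts}}' gogs/gogs
# prints something like {"22/tcp":{},"3000/tcp":{}}, matching the docker ps output below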
Running
$ docker-compose build
$ docker-compose up
It's up and running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f74293df630 g_web "nginx -g 'daemon off" 2 minutes ago Up 26 seconds 0.0.0.0:8000->80/tcp g_web_1
dfa2dbaa6074 gogs/gogs "/app/gogs/docker/sta" 2 minutes ago Up 26 seconds 22/tcp, 3000/tcp g_gogs_1
The web container is exposed to the world on port 8000.
Tests
by IP
Let's request it by IP:
$ curl -si http://192.168.99.100:8000/
HTTP/1.1 302 Found
Server: nginx/1.14.0
Date: Sun, 07 Oct 2018 15:13:55 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 31
Connection: keep-alive
Location: /install
Set-Cookie: lang=en-US; Path=/; Max-Age=2147483647
Set-Cookie: i_like_gogits=50411f542e2ae8f8; Path=/; HttpOnly
Set-Cookie: _csrf=ZJxRPqnqayIbpAYgZ22zrPIOaSo6MTUzODkyNTIzNTQ2NTg5MDE1NA%3D%3D; Path=/; Expires=Mon, 08 Oct 2018 15:13:55 GMT; HttpOnly
Found.
Corresponding log file:
web_1 | 192.168.99.1 - - [07/Oct/2018:15:14:24 +0000] "GET / HTTP/1.1" 302 31 "-" "curl/7.61.1" "-"
gogs_1 | [Macaron] 2018-10-07 15:14:24: Started GET / for 192.168.99.1
gogs_1 | [Macaron] 2018-10-07 15:14:24: Completed GET / 302 Found in 199.519µs
gogs_1 | 2018/10/07 15:14:24 [TRACE] Session ID: 38d06d393a9e9d21
gogs_1 | 2018/10/07 15:14:24 [TRACE] CSRF Token: Xth986dFWhhj8w8vBdIqRZu4SbI6MTUzODkyNTI2NDYxMDYzNzAyNA==
I can see from the log that (1) both containers work and they were used to process the request; (2) 192.168.99.1 is my host's IP address, which means "gogs" successfully gets a real request IP via X-Forwarded-For.
by domain name
OK, let's request using a domain name:
$ curl -H 'Host: g.example.com' -si http://192.168.99.100:8000/
Trust me, this is sufficient: Host is the HTTP header used to pass the domain name, and any browser does the same thing under the hood.
and the corresponding log file is --
gogs_1 | [Macaron] 2018-10-07 15:32:49: Started GET / for 192.168.99.1
gogs_1 | [Macaron] 2018-10-07 15:32:49: Completed GET / 302 Found in 618.701µs
gogs_1 | 2018/10/07 15:32:49 [TRACE] Session ID: 81f64d97e9c3dd1e
gogs_1 | 2018/10/07 15:32:49 [TRACE] CSRF Token: X5QyHM4LMIfn8OSJD1gwSSEyXV46MTUzODkyNjM2OTgyODQyMjExMA==
web_1 | 192.168.99.1 - - [07/Oct/2018:15:32:49 +0000] "GET / HTTP/1.1" 302 31 "-" "curl/7.61.1" "-"
No changes, everything works as expected.
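For completeness: on a newer Docker the same idea works without legacy links, because services on the same Compose network resolve each other by service name. A hedged sketch of the equivalent compose file (untested here; gogs.conf stays exactly the same, with proxy_pass http://gogs:3000):
version: '3'
services:
  gogs:
    image: gogs/gogs
  web:
    build: .
    ports:
      - 8000:80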
I am trying to create a local Swarm environment based on 'dind' images. Below are the steps to recreate the environment:
docker network create --attachable --subnet 10.0.0.0/16 tools_network
docker run -d --privileged --name swarm-manager-1 --hostname swarm-manager-1 --network tools_network --ip 10.0.0.3 -p 42421:2375 docker:17.03.1-dind
docker --host localhost:42421 swarm init --advertise-addr 10.0.0.3
docker run -d --privileged --name swarm-worker-1 --hostname swarm-worker-1 --network tools_network --ip 10.0.0.4 -p 42423:2375 docker:17.03.1-dind
docker --host localhost:42423 swarm join --token <swarm-token> 10.0.0.3:2377
After that I add an nginx-based proxy:
docker run -d --name swarm-proxy --network tools_network -v $(pwd)/temp-proxy:/etc/nginx:ro -p 80:80 nginx:stable-alpine
with the following contents of nginx.conf in the 'temp-proxy' folder:
events {
worker_connections 1024;
}
http {
upstream swarm {
server 10.0.0.3;
server 10.0.0.4;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
location / {
proxy_pass http://swarm;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
The service I use for testing purposes is launched by:
docker --host localhost:42421 service create --name test-web --publish 80:80 yeasy/simple-web
What I expect at this stage, based on the routing mesh documentation, is that curl localhost will return the result from the deployed service. However, I get a 502 Bad Gateway response, with the following log messages from the proxy:
[error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 10.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://10.0.0.4:80/", host: "localhost"
[error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 10.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://10.0.0.3:80/", host: "localhost"
10.0.0.1 - - "GET / HTTP/1.1" 502 173 "-" "curl/7.47.0"
I get the same result (connection refused) when curling from a Docker container attached to tools_network.
Running netstat -l from inside one of the swarm nodes shows that they don't listen on port 80:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.11:38943 0.0.0.0:* LISTEN
tcp 0 0 :::2375 :::* LISTEN
tcp 0 0 :::2377 :::* LISTEN
tcp 0 0 :::7946 :::* LISTEN
udp 0 0 127.0.0.11:35094 0.0.0.0:*
udp 0 0 0.0.0.0:4789 0.0.0.0:*
udp 0 0 :::7946 :::*
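As a side note, two standard Docker CLI checks (run against the manager's published daemon port) usually separate "the service is not actually running" from "the routing mesh did not publish the port":
# is the test-web task running, or stuck in pending/rejected?
docker --host localhost:42421 service ps test-web
# did the ingress routing mesh publish port 80 at all?
docker --host localhost:42421 service inspect test-web --format '{{json .Endpoint.Ports}}'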
The questions are:
Is there something wrong with my configuration? What is it?
If not, what steps should I take to get to the problem's root?
After a lot of investigation I found the problem.
Docker inside the dind images couldn't initialize properly due to missing kernel modules. I actually saw the errors in the container logs (docker logs swarm-manager-1) but didn't pay attention to them.
So the solution for me was launching the swarm nodes like this:
docker run -d --privileged --name swarm-manager-1 --hostname swarm-manager-1 --network tools_network --ip 10.0.0.3 -p 42421:2375 -v /lib/modules:/lib/modules:ro docker:17.03.1-dind
where the /lib/modules mount is the piece that was missing.
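Presumably the worker node needs the same mount; a sketch mirroring the original worker command with only the volume added, plus a quick verification that both nodes come up clean:
docker run -d --privileged --name swarm-worker-1 --hostname swarm-worker-1 --network tools_network --ip 10.0.0.4 -p 42423:2375 -v /lib/modules:/lib/modules:ro docker:17.03.1-dind
# check the daemon logs and the swarm membership afterwards
docker logs swarm-worker-1
docker --host localhost:42421 node ls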