I am running several services on my CentOS 7 Linux server. Nginx and netdata are being run as root and are working well.
I started Portainer as a Docker container:
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer
I can connect to the Portainer port locally with telnet localhost 9000. But, when I try to telnet ip 9000 from an external client PC on the same network, it doesn't connect.
The Linux server does not have a firewall. Nginx, netdata, and myapp, which are not running in Docker, can all be reached fine. In short, every service running directly on the server is accessible, but the service inside the Docker container is not.
What do I need to change to be able to reach the container?
You have to disable IPv6.
Add these lines to /etc/sysctl.conf:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
Then apply the changes:
sysctl -p
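If disabling IPv6 system-wide feels too heavy, a narrower thing worth trying (a sketch, not part of the original answer) is binding the published ports to IPv4 explicitly when starting the container:

```shell
# Remove the old container, then publish the ports on 0.0.0.0 (IPv4 only),
# so the proxy does not bind only to the IPv6 wildcard address.
docker rm -f portainer

docker run -d \
  -p 0.0.0.0:8000:8000 \
  -p 0.0.0.0:9000:9000 \
  --name=portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer
```

You can then check what address the port is actually bound to on the host with `ss -tlnp | grep 9000`.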
Related
Very basic question about proxying inside docker containers in OSX: I am running a toxiproxy docker container with:
docker run --name proxy -p 8474:8474 -p 19200:19200 shopify/toxiproxy
and an Elasticsearch container:
docker run --name es -p 9200:9200 elasticsearch:6.8.6
I want toxiproxy to proxy traffic arriving at `localhost:19200` to the Elasticsearch container on port 9200. I configure toxiproxy with:
curl -XPOST "localhost:8474/proxies" -d "{ \"name\": \"proxy_es\", \"listen\": \"0.0.0.0:19200\", \"upstream\": \"localhost:9200\", \"enabled\": true}"
Now, I would expect that:
curl -XGET localhost:19200/_cat
would point me to the Elasticsearch endpoint. But I get:
curl: (52) Empty reply from server
Any idea why this is wrong? How can I fix it?
From inside the toxiproxy container, localhost:9200 does not resolve to the es container.
This is because, by default, these containers are attached to the default bridge network, where localhost refers to the container's own loopback interface. It does not resolve to the localhost of the host machine (where docker-machine is running).
You can make this work by adding --net=host to use the host network. A better approach would be to create a new network and run all the containers in that network.
docker run --name proxy -p 8474:8474 -p 19200:19200 --net host shopify/toxiproxy
docker run --name es -p 9200:9200 --net host elasticsearch:6.8.6
With host networking, localhost resolves the same way inside both containers and on the host.
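The user-defined network approach mentioned above could look roughly like this (a sketch; the network name `esnet` is an assumption, and the upstream uses the `es` container name, which is resolvable on a user-defined network):

```shell
# Containers on a user-defined bridge network can reach each other
# by container name, unlike on the default bridge.
docker network create esnet

docker run -d --name es --net esnet -p 9200:9200 elasticsearch:6.8.6
docker run -d --name proxy --net esnet -p 8474:8474 -p 19200:19200 shopify/toxiproxy

# Point the proxy at the es container by name instead of localhost.
curl -XPOST "localhost:8474/proxies" -d '{
  "name": "proxy_es",
  "listen": "0.0.0.0:19200",
  "upstream": "es:9200",
  "enabled": true
}'
```

After this, `curl localhost:19200/_cat` from the host should go through toxiproxy to Elasticsearch.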
I need to setup nginx-proxy container to forward requests to the container with my app. I use the following commands to start containers:
# app
docker run -d -p 8080:2368 \
--name app \
app
# nginx
docker run -d -p 80:8080 \
--name nginx-proxy \
jwilder/nginx-proxy
But when I try to access port 80 on my server, I get ERR_CONNECTION_REFUSED. Clearly the nginx container is not forwarding the port I want, because I can access the app directly on server port 8080.
I tried using network like this:
# network
docker network create -d bridge net
# app
docker run -d -p 8080:2368 \
--name app \
--network net \
app
# nginx
docker run -d -p 80:8080 \
--name nginx-proxy \
--network net \
jwilder/nginx-proxy
But the result seems to be the same.
I need to understand how to make nginx container proxy requests from server port 80 to my app.
It looks like your app is running on port 2368, which users should not need to reach directly, so the app container's port does not need to be published.
You are right to create a bridge network and attach both containers to it.
You need to remove the port mapping from the app container and change the port mapping of the nginx-proxy container from 80:8080 to 80:80.
You also need to set up nginx-proxy to proxy requests from port 80 to app:2368.
This way, users hitting port 80 on the host machine running Docker will be proxied to your app.
The VIRTUAL_HOST env var with the domain name for the app container was required to let nginx proxy requests to the app container. No network setup or port forwarding is needed with this approach. Here is the working setup I came up with:
# app
docker run -d \
--name app \
-e VIRTUAL_HOST=mydomain.com \
app
# nginx
docker run -d -p 80:80 \
--name nginx-proxy \
jwilder/nginx-proxy
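Since jwilder/nginx-proxy routes requests by the HTTP Host header, one way to verify this setup from the host without DNS for mydomain.com (a small sketch, not part of the original answer) is to set the header manually:

```shell
# nginx-proxy matches VIRTUAL_HOST against the Host header,
# so we can test locally by sending it explicitly.
curl -H "Host: mydomain.com" http://localhost/
```

If the proxy is wired up correctly, this returns the app's response instead of nginx's default 503 page.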
The docker daemon is running on an Ubuntu machine. I'm trying to start up a zookeeper ensemble in a swarm. The zookeeper nodes themselves can talk to each other. However, from the host machine, I don't seem to be able to access the published ports.
If I start the container with -
docker run \
-p 2181:2181 \
--env ZOO_MY_ID=1 \
--env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
zookeeper
It works like a charm. On my host machine I can say echo conf | nc localhost 2181 and zookeeper says something back.
However if I do,
docker service create \
-p 2181:2181 \
--env ZOO_MY_ID=1 \
--env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
zookeeper
and run the same command echo conf | nc localhost 2181,
it just gets stuck. I don't even get a new prompt on my terminal.
This works just as expected on the Docker Playground on the official Zookeeper Docker Hub page, so I expect it should work for me too.
But... If I docker exec -it $container sh and then try the command in there, it works again.
Aren't published ports supposed to be accessible even by the host machine for a service?
Is there some trick I'm missing about working with overlay networks?
Try using docker service create --publish 2181:2181 instead.
I believe the container backing the service is not directly exposed and has to go through the Swarm networking.
Otherwise, inspect your service to check which ports are published: docker service inspect <service_name>
Source: documentation
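Two things that may help diagnose this (a sketch; the service name `zoo1` is an assumption): narrowing the inspect output to the published ports, and publishing in host mode, which bypasses the Swarm ingress routing mesh entirely:

```shell
# Show only the published ports of the service.
docker service inspect --format '{{json .Endpoint.Ports}}' zoo1

# Publish in host mode: the port is bound directly on the node
# running the task, skipping the ingress overlay network.
docker service create \
  --name zoo1 \
  --publish published=2181,target=2181,mode=host \
  --env ZOO_MY_ID=1 \
  zookeeper
```

If `nc localhost 2181` works with mode=host but not with the default ingress mode, the problem is in the routing mesh rather than in Zookeeper itself.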
I am trying to connect from an application container to a database container in two situations, one succeeds, one doesn't.
There are two containers on my dockerhost:
mysql container with port 3306 connected to 3356 on dockerhost
application container
At work, dockerhost has IP-address 10.0.2.15, at home, dockerhost has IP-address 192.168.8.11 (hostname -I).
In both situations, I want to connect to the database container from the app container with host 10.0.2.15/192.168.8.11 and port 3356.
When I do this at work (Windows network, Vagrant/Virtualbox dockerhost), this is no problem. I can 'telnet 10.0.2.15 3356' from the app container and connect to the db container.
When I do this at home (Ubuntu), it is impossible to connect. The only way is to use the docker ip address of the db container (172.17.0.2) with port 3306. However, I can ping 192.168.8.11.
The scripts to start the containers are identical; I do not use --add-host, so the dockerhost IP-address is not in /etc/hosts.
Any suggestions?
OK, let's use Docker to run three database instances:
docker run --name mysqldb1 -e MYSQL_ROOT_PASSWORD=changeme -d mysql
docker run --name mysqldb2 -e MYSQL_ROOT_PASSWORD=changeme -d mysql
docker run --name mysqldb3 -e MYSQL_ROOT_PASSWORD=changeme -d mysql
Each one will have a different IP address on my host machine:
$ for i in mysqldb1 mysqldb2 mysqldb3
> do
> docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $i
> done
172.17.0.2
172.17.0.3
172.17.0.4
Repeat this on your machine and you'll very likely have different IP addresses.
So how is this problem fixed?
The older approach (deprecated in Docker 1.9) is to use links. The following commands show how environment variables are set within your linked application container (the one using the database):
$ docker run -it --rm --link mysqldb1:mysql mysql env
..
MYSQL_PORT_3306_TCP_ADDR=172.17.0.2
$ docker run -it --rm --link mysqldb2:mysql mysql env
..
MYSQL_PORT_3306_TCP_ADDR=172.17.0.3
$ docker run -it --rm --link mysqldb3:mysql mysql env
..
MYSQL_PORT_3306_TCP_ADDR=172.17.0.4
And the following demonstrates how links are also created:
$ docker run -it --rm --link mysqldb1:mysql mysql grep mysql /etc/hosts
172.17.0.2 mysql 2a12644351a0 mysqldb1
$ docker run -it --rm --link mysqldb2:mysql mysql grep mysql /etc/hosts
172.17.0.3 mysql 89140cbf68c7 mysqldb2
$ docker run -it --rm --link mysqldb3:mysql mysql grep mysql /etc/hosts
172.17.0.4 mysql 27535e8848ef mysqldb3
So you can just refer to the other container using the "mysql" hostname or the "MYSQL_PORT_3306_TCP_ADDR" environment variable.
In Docker 1.9 there is a more powerful networking feature that enables containers to be linked across hosts.
http://docs.docker.com/engine/userguide/networking/dockernetworks/
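With that networking feature, the databases above can be reached by container name instead of via link environment variables (a sketch; the network name `mynet` is an assumption):

```shell
# Containers on the same user-defined network resolve each other by name.
docker network create mynet

docker run --name mysqldb1 -e MYSQL_ROOT_PASSWORD=changeme -d --net mynet mysql

# A client container on the same network connects using the hostname "mysqldb1";
# no --link and no hard-coded 172.17.x.x address needed.
docker run -it --rm --net mynet mysql mysql -h mysqldb1 -u root -p
```

Unlike links, this also survives container restarts that change the IP address, since the name is resolved at connection time.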
You can use my container, which acts as a NAT gateway to the Docker host, without any manual setup: https://github.com/qoomon/docker-host
I created an Apache webserver as a Docker container but want to access it as localhost in a browser on Windows.
I can access the webserver with the boot2docker private IP address, which is 192.168.59.103, but would like to access it as localhost, i.e. 127.0.0.1.
Following is my Docker Container setup
Running Boot2docker on Oracle VM
Exposed ports: "EXPOSE 80 443" in the Dockerfile
Command used to run the container:
docker run --net=host --name=webserver1 -v /home/data:/data/www/www.samplewebserber.com -v `password`:/scripts -d folder/serverfolder /scripts/run.sh
boot2docker actually creates a VM with a Linux core using VirtualBox, and 192.168.59.103 is the IP of that VM.
So you need to set up a port forward for that VM.
Note that forwarding host port 80 needs elevated permissions, so I use 8080 instead in this example.
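The port forward for the VirtualBox VM can be added from the host with VBoxManage (a sketch; boot2docker's VM is usually named boot2docker-vm, and 8080 is used on the host side as noted above):

```shell
# Add a NAT port-forwarding rule: host 127.0.0.1:8080 -> VM port 80.
# modifyvm works while the VM is stopped:
VBoxManage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8080,,80"

# controlvm applies the same rule to a running VM:
VBoxManage controlvm boot2docker-vm natpf1 "http,tcp,127.0.0.1,8080,,80"
```

After this, http://127.0.0.1:8080 on the host reaches port 80 inside the boot2docker VM.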
If you want to reach ports 80 and 443 on localhost, you need to perform two actions:
First, when you create your container, you must specify the port mapping explicitly. If you run docker run with the -P option, the ports listed in the Dockerfile's EXPOSE are published to random ports in the Boot2Docker environment. To map them specifically, you must run:
docker run \
--net=host \
--name=webserver1 \
-v /home/data:/data/www/www.samplewebserber.com \
-v `password`:/scripts \
-d -p 80:80 -p 443:443 \
folder/serverfolder \
/scripts/run.sh
And in order to map the Boot2Docker ports to your host environment, as Joe Niland's link suggested, you must do port forwarding using SSH tunneling:
boot2docker ssh -L 80:localhost:80
boot2docker ssh -L 443:localhost:443
You can change the port mappings if you wish.