How to modify the Docker image config shown by the inspect command

I created a Docker image for OpenVPN. But when I use the docker inspect command to get the config of this image, I always see this setting in ContainerConfig:
"ContainerConfig": {
"Hostname": "cfd8618fa650",
"ExposedPorts": {
"11194/tcp": {}
},
This is not good, because every time I run this image it will expose port 11194 automatically, even if I don't want it to. Does anyone know how to remove this config?

Note that 11194 is the port this OpenVPN image uses by default (the standard OpenVPN port is 1194), so it's quite normal that it's exposed by Docker.
Anyway, if you have the Dockerfile, you can obviously build a new image, removing the EXPOSE 11194 line from the Dockerfile itself.
But if you run an image pulled directly from a repo, or you can't rebuild the image, the port will stay exposed; you can, however, bind it to a specific IP.
Because the -p port mapping format can be
ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
you can bind the host port to a single address (e.g. localhost) instead of to the whole world, for example
docker run -p 127.0.0.1:11194:11194 ...
So port 11194 (or whatever port number you assign locally) will be reachable from localhost only.
Otherwise you can close the port with iptables or another firewall.
The article Docker and IPtables explains Docker port binding, iptables forwarding rules, etc. quite well.
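For instance, a minimal sketch of the firewall route (assuming the external interface is eth0; DOCKER-USER is the chain recent Docker versions reserve for user rules, evaluated before Docker's own forwarding rules):
# drop external traffic to the published OpenVPN port
iptables -I DOCKER-USER -i eth0 -p tcp --dport 11194 -j DROP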

Docker expose port internals

In Docker we all know how to expose ports: the EXPOSE instruction declares them at build time, and the -p or -P option publishes them at runtime. When we use "docker inspect" or "docker port" to see the port mappings, these configs are pulled from /var/lib/docker/containers/container-id/config.v2.json.
The question I have is: when we expose a port, how does Docker actually change the port in the container, say for Apache or Nginx? The installation can be anywhere in the OS or file path, so how does Docker find the correct conf file (Apache's /etc/httpd/conf/httpd.conf) to change? I suppose Docker does this on the "Listen 80" or "Listen 443" line in the httpd.conf file. Or is my whole understanding of Docker at stake? :)
Any help is appreciated.
"docker" does not change anything in the internal configuration of the container (or the services it provides).
There are three different points where you can configure ports:
the service itself (for instance nginx) inside the image/container
EXPOSE xxxx in the Dockerfile (i.e. at build time of the image)
docker run -p 80:80 (or the respective equivalent for docker compose) (i.e. at the runtime of the container)
All three are (in principle) independent of each other, i.e. you can have completely different values in each of them. But in practice, you will have to align them with each other to get a working system.
We know that EXPOSE xxxx in the Dockerfile doesn't actually publish any port at runtime, but just tells the Docker service that this specific container will listen on port xxxx at runtime. You can see it as a sort of documentation for that image. So it's your responsibility as creator of the Dockerfile to provide the correct value here, because anyone using that image will probably rely on it.
But regardless of what port you have EXPOSEd (or not; EXPOSE is completely optional), you still have to publish that port when you run the container (for instance via -p aaaa:xxxx when using docker run).
Now let us assume you have an nginx image which has the nginx service configured to listen on port 8000. Regardless of what you define with EXPOSE or -p aaaa:xxxx, that nginx service will always listen on port 8000 only and nothing else.
So if you now run your container with docker run -p 80:80, the runtime will bind port 80 of the host to port 80 of the container. But as there is no service listening on port 80 within the container, you simply won't be able to contact your nginx service on port 80. And you also won't be able to connect to nginx on port 8000, because it hasn't been published.
So in a typical setup, if the service in the container is configured to listen on port 8000, you should also EXPOSE 8000 in your Dockerfile and use docker run -p aaaa:8000 to bind port aaaa of your host machine to port 8000 of your container, so that you can reach the nginx service via http://hostmachine:aaaa
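To make the three layers concrete, here is a minimal sketch (the image name mynginx and host port 8080 are made up for illustration):
# service level: nginx.conf inside the image contains "listen 8000;"
# image level: the Dockerfile documents that port with
#   EXPOSE 8000
# runtime level: publish host port 8080 to container port 8000
docker run -d -p 8080:8000 mynginx
# the nginx service is now reachable via the published host port
curl http://localhost:8080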

Custom configuration for redis container

I want to adapt my Redis settings via a custom conf file and followed the documentation for the implementation. Running my container with the following command throws no error - so far so good.
docker run --name redis-container --net redis -v .../redis:/etc/redis -d redis redis-server /etc/redis/redis.conf
To check whether my config file is read, I switched the default port 6379 to port 6380, but looking at my Docker ports via docker ps still shows the default 6379 as my port.
Is there a difference between the Redis port itself and the container port, or where is my problem located?
The standard Redis image Dockerfile contains the line
EXPOSE 6379
Once a port has been exposed this way, there is no way to un-expose it. Exposing a port has fairly few practical effects in modern Docker, but the most visible is that 6379/tcp will show up in the docker ps output for each exposed port even if it's not separately published (docker run -p). There's no way to remove this port number from the docker ps output.
Docker's port system (the EXPOSE directive and the docker run -p option) are a little bit disconnected from what the application inside the container is actually doing. In your case the container is configured to expose port 6379 but the process is actually listening on port 6380; Docker has no way of knowing these don't match up. Changing the application configuration won't change the container configuration, and vice versa.
As a practical matter you don't usually need to change application ports. Since this Redis will be the only thing running in its container and its corresponding isolated network namespace, it can't conflict with other Redises on the host or in other containers. If you need to remap it on the host, you can use different port numbers for -p; the second number must match what the process is listening on (and Docker can't auto-detect or check this) but the first can be any port.
docker run -p 6380:6379 ... redis
If you're trying to check whether your configuration has had an effect, running CONFIG GET via redis-cli could be a more direct way to ask what the server's configuration is.
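For instance, a sketch of that check against the container from the question (assuming redis.conf sets port 6380, so redis-cli inside the container must also be pointed at 6380):
docker exec -it redis-container redis-cli -p 6380 CONFIG GET port
# 1) "port"
# 2) "6380"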

prevent Docker from exposing port on host

If I start a container using -p 80 for example, Docker will assign a random host port.
Every time Docker assigns a port, it also adds an iptables rule to open this port to the world. Is it possible to prevent this behaviour?
Note: I am using an nginx load balancer to fetch the content; I really don't need my application to be reachable on two different ports.
You can specify both interface and port as follows:
-p ip:hostPort:containerPort
or
-p ip::containerPort
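For example, to publish container port 80 on the loopback interface only, with a random host port (nginx here just stands in for your image):
docker run -d -p 127.0.0.1::80 nginx
The mapping then shows up in docker ps as 127.0.0.1:49154->80/tcp (the host port number will vary) and is not reachable from other machines.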
Another solution is to run nginx inside a container and to use container linking, without exposing other services at all.
The iptables feature is a startup parameter for the Docker daemon. Look for the Docker daemon conf file in your installation, add --iptables=false, and Docker will never touch your iptables.
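On most modern Linux installs the daemon configuration lives in /etc/docker/daemon.json, so a sketch of the equivalent setting would be:
{
  "iptables": false
}
Then restart the daemon (e.g. systemctl restart docker). Be aware that with iptables disabled, Docker no longer manages any forwarding rules, so published ports stop working until you add the rules yourself.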

Letting a container know its exposed ports in Docker

Let's say I start a container, exposing a random port to 80 like this: docker run -d -p 80 --name my_container username/container.
Is there any way to tell my_container on which host port its port 80 was published?
Edit: My situation:
I'm running Apache to serve some static HTML files, and a Go API server, in this container; I'm exposing both services. The static pages request data from the API server via JavaScript in the user's browser, but to do that, the clients need to know on which port the API server is made available. Is this the appropriate way to do this?
I don't think there is an easy way to tell, from inside the container, the host port on which its port 80 was published, but I also believe there is a good reason for that: making the container depend on this would make it depend on its containing environment, which goes against Docker's logic.
If you really need this, you could pass the host port as an environment variable to the container using the -e flag (assuming the host port is fixed), or rely on a hack such as mounting the Docker socket in the container (-v /var/run/docker.sock:/var/run/docker.sock) and have it "inspect itself" (which is similar to what progrium/ambassadord does to implement its omni mode).
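A sketch of the environment variable approach (PUBLIC_PORT is a made-up name that your application would have to read):
docker run -d -p 8080:80 -e PUBLIC_PORT=8080 --name my_container username/container
Inside the container, the Go API server (or the page it serves) can then read PUBLIC_PORT and hand it to the browser-side JavaScript.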
Maybe you should clarify why you need this information in the first place and perhaps there's a simpler solution that can help you achieve that.
You can run docker ps which will show the ports, for example
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
containerid ubuntu:14.04 /bin/bash 14 seconds ago Up 13 seconds 0.0.0.0:49153->80/tcp my_container
In this instance it is 49153.
Also docker inspect will tell you lots about your container, including the port mappings
$ docker inspect my_container | grep HostPort
"HostPort": "49153"

Remote access to webserver in docker container

I've started using docker for dev, with the following setup:
Host machine - ubuntu server.
Docker container - webapp w/ tomcat server (using https).
As far as host-container access goes - everything works fine.
However, I can't manage to access the container's webapp from a remote machine (though still within the same network).
When running
docker port <container-id> 443
the output is as expected, so Docker's port binding seems fine:
172.16.*.*:<random-port>
Any ideas?
Thanks!
I figured out what I missed, so here's a simple flow for accessing a Docker container's webapp from remote machines:
Step #1: Bind physical host ports (e.g. 22, 443, 80, ...) to the container's virtual ports.
possible syntax:
docker run -p 127.0.0.1:443:3444 -d <docker-image-name>
(see docker docs for port redirection with all options)
Step #2: Redirect the host's physical port to the container's allocated virtual port. Possible (Linux) syntax:
iptables -t nat -A PREROUTING -i <host-interface-device> -p tcp --dport <host-physical-port> -j REDIRECT --to-port <container-virtual-port>
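For example, filling in the placeholders with the values from step #1 (eth0 is an assumption for the host's network interface):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3444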
That should cover the basic use case.
Good luck!
Correct me if I'm wrong, but as far as I'm aware the Docker host creates a private network for its containers which is inaccessible from the outside. That said, your best bet would probably be to access the container at {host_IP}:{mapped_port}.
If your container was built with a Dockerfile that has an EXPOSE statement, e.g. EXPOSE 443, then you can start the container with the -P option (as in "publish" or "public"). The port will be made available to connections from remote machines:
$ docker run -d -P mywebservice
If you didn't use a Dockerfile, or if it didn't have an EXPOSE statement (it should!), then you can also do an explicit port mapping:
$ docker run -d -p 80 mywebservice
In both cases, the result will be a publicly-accessible port:
$ docker ps
9bcb… mywebservice:latest … 0.0.0.0:49153->80/tcp …
Last but not least, you can force the port number if you need to:
$ docker run -d -p 8442:80 mywebservice
In that case, connecting to your Docker host IP address on port 8442 will reach the container.
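A quick check from a remote machine on the same network would then be (sketch; substitute your actual Docker host address):
curl http://<docker-host-ip>:8442/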
There are some alternatives for accessing Docker containers from an external device (on the same network); check out this post for more information: http://blog.nunes.io/2015/05/02/how-to-access-docker-containers-from-external-devices.html
