Custom configuration for redis container - docker

I want to adapt my Redis settings via a custom conf file and followed the documentation for the implementation. Running my container with the following command throws no error - so far so good:
docker run --name redis-container --net redis -v .../redis:/etc/redis -d redis redis-server /etc/redis/redis.conf
To check whether my config file is read, I switched the default port 6379 to port 6380, but looking at my Docker ports via docker ps still shows the default 6379.
Is there a difference between the redis port itself and the container port or where is my problem located?

The standard Redis image Dockerfile contains the line
EXPOSE 6379
Once a port has been exposed this way, there is no way to un-expose it. Exposing a port has fairly few practical effects in modern Docker, but the most visible is that each exposed port (here 6379/tcp) shows up in the docker ps output, even if it's not separately published (docker run -p). There's no way to remove this port number from the docker ps output.
Docker's port system (the EXPOSE directive and the docker run -p option) is a little disconnected from what the application inside the container is actually doing. In your case the container is declared to expose port 6379, but the process is actually listening on port 6380; Docker has no way of knowing these don't match. Changing the application configuration won't change the container configuration, and vice versa.
As a practical matter you don't usually need to change application ports. Since this Redis will be the only thing running in its container and its corresponding isolated network namespace, it can't conflict with other Redis instances on the host or in other containers. If you need to remap it on the host, you can use different port numbers with -p; the second number must match what the process is actually listening on (and Docker can't auto-detect or check this), but the first can be any free host port.
docker run -p 6380:6379 ... redis
If you're trying to check whether your configuration has had an effect, running CONFIG GET via redis-cli is a more direct way to ask the server what its configuration is.
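For example, reusing the container and network names from the question (and assuming the config file really sets port 6380), a throwaway client container can ask the server directly:

docker run -it --rm --net redis redis redis-cli -h redis-container -p 6380 CONFIG GET port

If the custom config was picked up, this should print something like 1) "port" 2) "6380".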

Related

Docker networks: How to get container1 to communicate with server in container2

I have 2 containers on a Docker bridge network. One of them has an Apache server that I am using as a reverse proxy to forward users to the server in the other container. The other container has a server that is listening on port 8081. I have verified that both containers are on the same network, and when I log into an interactive shell on each container, I tested successfully that I am able to ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to ping the actual server in the other container.
The IP address of the container with the server is 172.17.0.2.
How I create the Docker network:
docker network create -d bridge jakeypoo
How I start the containers:
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach the server be
http://172.17.0.2:8081/
?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. If the service inside the second container is listening on port 8080, use that port number. Remappings from docker run -p options don't apply to container-to-container traffic, and you don't need a -p option at all to communicate between containers.
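As a quick sanity check (assuming a shell and wget are available inside the idpproxy image, which may not hold for very minimal images), you can hit the other service by name:

docker exec -it idpproxy sh -c 'wget -qO- http://geoserver:8080/'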
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is reachable under its Compose service name. You do not need to manually set networks: or container_name: options, and, as with the docker run -p option, Compose ports: are not required for container-to-container communication; they only control access from the host. Networking in Compose in the Docker documentation describes this further.)
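A minimal docker-compose.yml sketch for the setup above (the image names are taken from the question; everything else is illustrative, and the ports: entries matter only for access from the host):

version: "3.8"
services:
  idpproxy:
    image: idpproxy:latest
    ports:
      - "8080:8080"
  geoserver:
    image: geoserver:1.1.0
    ports:
      - "8081:8080"

With this, Apache inside idpproxy can reach http://geoserver:8080/ by service name on the network Compose creates.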
Most probably this is the reason: when you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside itself.
Try to use Docker Compose if you can use it in your context. See this link:
https://docs.docker.com/compose/

Docker expose port internals

In Docker we all know how to expose ports: the EXPOSE instruction publishes, and the -p or -P option exposes during runtime. When we use "docker inspect" or "docker port" to see the port mappings, these configs are pulled from /var/lib/docker/containers/container-id/config.v2.json.
The question I have is: when we expose a port, how does Docker actually change the port in the container, say for Apache or Nginx? The installation could be anywhere in the OS or file path, so how does Docker find the correct conf file (Apache's /etc/httpd/conf/httpd.conf) to change, supposing Docker edits the "Listen 80" or "Listen 443" line in httpd.conf? Or is my whole understanding of Docker at stake? :)
Any help is appreciated.
"docker" does not change anything in the internal configuation of the container (or the services it provides).
There are three different points where you can configure ports:
the service itself (for instance nginx) inside the image/container
EXPOSE xxxx in the Dockerfile (i.e. at build time of the image)
docker run -p 80:80 (or the respective equivalent for docker compose) (i.e. at runtime of the container)
All three are (in principle) independent of each other, i.e. you can have completely different values in each of them. But in practice, you will have to align them with each other to get a working system.
As we know, EXPOSE xxxx in the Dockerfile doesn't actually publish any port at runtime; it just tells the Docker service that containers from this image will listen on port xxxx at runtime. You can see this as a sort of documentation for the image. So it's your responsibility as the creator of the Dockerfile to provide the correct value here, because anyone using that image will probably rely on it.
But regardless of what port you have EXPOSEd (or not; EXPOSE is completely optional), you still have to publish that port when you run the container (for instance via -p aaaa:xxxx with docker run).
Now let us assume you have an nginx image whose nginx service is configured to listen on port 8000. Regardless of what you define with EXPOSE or -p aaaa:xxxx, that nginx service will always listen on port 8000 and nothing else.
So if you now run your container with docker run -p 80:80, the runtime will bind port 80 of the host to port 80 of the container. But as there is no service listening on port 80 within the container, you simply won't be able to contact your nginx service on port 80. And you also won't be able to connect to nginx on port 8000, because it hasn't been published.
So in a typical setup, if your service in the container is configured to listen on port 8000, you should also EXPOSE 8000 in your Dockerfile and use docker run -p aaaa:8000 to bind port aaaa of your host machine to port 8000 of your container, so that you will be able to connect to the nginx service via http://hostmachine:aaaa.
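To make that concrete, here is a sketch; the image name, config path, and port numbers are illustrative, not from the question:

# Dockerfile: the copied config makes nginx listen on port 8000,
# and EXPOSE documents that fact for users of the image
FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf    # assumed to contain "listen 8000;"
EXPOSE 8000

# Build and run, publishing host port 8080 to container port 8000:
docker build -t my-nginx .
docker run -d -p 8080:8000 my-nginx
curl http://localhost:8080/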

Containers started with docker-compose inside another container are unreachable

I'm using a dedicated container for running generic project-related shell scripts, in order to avoid having to test scripts on multiple environments (mac, win, ubuntu, debian...) and to minimize software requirements on the host OS. Even docker-compose commands are run from the console container. /var/run/docker.sock is bind-mounted from the host.
Everything else seems to be working fine, but for example if I run docker-compose up traefik inside the console container, traefik starts normally but is unreachable both from the host and even from another container on the same network. If docker-compose up traefik is run from the host OS (Windows 10), traefik becomes reachable as expected. I suspect this has something to do with how Docker or docker-compose handles networking, but I'm not completely sure. I did check that regardless of how I start the traefik container, the same ports appear instantly in NirSoft CurrPorts (a sort of GUI for netstat).
Is there any way (and how) to fix this?
EDIT
I realised that this must somehow be an error on my part, since dockerized Docker GUIs exist, and they presumably have no problem bringing up containers that are accessible from the host and the outside world.
Now I'm wondering whether this might be a simple configuration error, either in my docker(-compose) settings or somewhere else on my host machine, or whether GUIs like Portainer go through some extra steps in order to expose the started containers to the host.
For development purposes we usually map Traefik's port to 80, so I will assume the same in your case. Let's assume that you are running the Traefik container on port 80, mapped to port 80 of its host. But from your Traefik container's point of view, its "host machine" is nothing but the container used for running the scripts. And port 80 of that shell-script container is not mapped to the actual host machine. I suspect this is where the port mappings and containers got mixed up.
To make your setup work, you should deploy your containers with the port mappings shown below. To simplify the answer:
docker run -t -d -p 80:80 shellScriptImage
docker run -t -d -p 80:80 traefik (run this one from inside the shell-script container)
By doing this you can access the traefik container from the outside.

start a docker LAMP image with apache bound to non-standard port

I am new to Docker and am using https://github.com/mattrayner/docker-lamp
I've read about the docker run command but still don't quite get the -p option. Is there a way to make it tell Apache to listen on a non-standard port?
I have succeeded in starting it on the default port 80, then re-configuring/re-loading Apache from within the container to bind itself to port 8080. But in that scenario I can't access the container's Apache from outside via localhost:8080. (If that makes sense.)
I simply want to develop something using PHP 5.6 without disturbing anything else on my local setup, which is running PHP 7.0. If there's another way to achieve the same end, I'm good with that too.
The -p or --publish option is a host:container port mapping, designed specifically so that you don't have to change what may already be running inside the container.
If the container is already running on port 80 but you want to access it externally (from your host or laptop) via port 8080, then you can simply run with -p 8080:80, which will map your host port 8080 to the container port 80.
Multiple containers can run and use port 80 on the same host (since the containers each have their own IP address on the Docker network). But a given host port can only be published by one container at a time.
For example, if you had 3 containers you wanted to run and all of them were listening on port 80, you could start the first with -p 8080:80, the second with -p 8082:80, and the third with -p 8084:80.
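In commands (some-image is a placeholder for your actual image):

docker run -d -p 8080:80 --name web1 some-image
docker run -d -p 8082:80 --name web2 some-image
docker run -d -p 8084:80 --name web3 some-image

All three containers listen on port 80 internally; only the host-side numbers differ.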
The -p section of https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose goes into this a bit deeper.

prevent Docker from exposing port on host

If I start a container using -p 80, for example, Docker will assign a random host port.
Every time Docker assigns a port, it also adds an iptables rule to open this port to the world. Is it possible to prevent this behaviour?
Note: I am using an nginx load balancer to fetch the content; I really don't need my application associated with two different ports.
You can specify both interface and port as follows:
-p ip:hostPort:containerPort
or
-p ip::containerPort
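For instance, binding to the loopback interface keeps the port reachable from the host (e.g. for your nginx load balancer running on the same machine) without opening it to the world; my-app-image is a placeholder:

docker run -d -p 127.0.0.1:8080:80 my-app-image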
Another solution is to run nginx inside a container and to use container linking without exposing the other services at all.
The iptables feature is a startup parameter for the Docker daemon. Look for the Docker daemon conf file in your installation and add --iptables=false; Docker will then never touch your iptables rules.
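On current installations, the equivalent setting can go in /etc/docker/daemon.json (restart the daemon afterwards):

{
  "iptables": false
}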
