Is there a docker daemon equivalent to kubectl port-forward? - docker

Let's say you have a long docker-compose file with a lot of containers that speak to one another inside of a docker network. Let's call this a "stack". You want to launch this stack 3 times, each with a slightly different config. To do that you might say:
docker-compose -p pizza up
docker-compose -p pie up
docker-compose -p soda up
But this would fail if you have any ports exposed to the host:
nginx:
  image: nginx:alpine
  ports:
    - "80:80"
  networks:
    - my_app_net
It would fail because only one container can publish to host port 80 at a time.
One alternative is to define that port declaration in different files and use different ports:
$ cat pizza.yml
services:
  nginx:
    ports:
      - "8001:80"
$ cat pie.yml
services:
  nginx:
    ports:
      - "8002:80"
$ cat soda.yml
services:
  nginx:
    ports:
      - "8003:80"
docker-compose -f docker-compose.yml -f pizza.yml -p pizza up
docker-compose -f docker-compose.yml -f pie.yml -p pie up
docker-compose -f docker-compose.yml -f soda.yml -p soda up
That works because each stack publishes nginx's port 80 on a different host port. That's fine, but it's a little annoying because we have to stop and restart the stack to change the mapping.
How do we do this without publishing the port or stopping/starting the stack?
If this were a kubernetes cluster, we could use kubectl to do this with a port-forward like so:
kubectl port-forward replicaset/nginx-75f59d57f4 8001:80
That approach fits my situation a little better because we don't want to stop the stack to see what's going on inside it. We can start the port-forward, look around, and then go away.
Is there an equivalent for docker?

You can start another container on the same network that is running something like socat to forward the ports:
docker run --rm -it -p 8001:80 --net pizza_default \
  nicolaka/netshoot \
  socat TCP6-LISTEN:80,fork TCP:nginx:80
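Used as a throwaway forwarder, this gives you roughly the kubectl port-forward workflow: start it, look around, then tear it down while the stack keeps running. A sketch of that usage, assuming the compose project pizza (so the network is pizza_default) and a service named nginx as in the example above:
docker run --rm -d --name pizza-forward -p 8001:80 --net pizza_default \
  nicolaka/netshoot \
  socat TCP-LISTEN:80,fork TCP:nginx:80
curl -s http://localhost:8001     # peek at nginx inside the stack
docker stop pizza-forward         # forwarder is removed; the stack keeps running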
A more automated example of this is seen with docker-publish, which handles spinning up a separate network, attaching containers, and automatically stopping the forwarder when the target container exits by sharing the same PID namespace.

How do I convert docker-compose configuration to dockerfile

I am a bit confused. I was trying to convert a docker-compose file for Elasticsearch and Kibana into a Dockerfile, but the networking and connectivity parts are confusing me. Can anyone help me with the conversion and a bit of explanation?
Thanks a lot!
version: "3.0"
services:
elasticsearch:
container_name: es-container
image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
environment:
- xpack.security.enabled=true
- "discovery.type=single-node"
networks:
- es-net
ports:
- 9200:9200
kibana:
container_name: kb-container
image: docker.elastic.co/kibana/kibana:6.5.4
environment:
- ELASTICSEARCH_HOSTS=http://es-container:9200
networks:
- es-net
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
es-net:
driver: bridge
Docker Compose and Dockerfiles are completely different things. The Dockerfile is a configuration file used to create Docker images. The docker-compose.yml file is a configuration file used by Docker Compose to launch Docker containers using Docker images.
To launch the above containers without using Docker Compose you could run:
docker network create es-net
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" -p 9200:9200 --network es-net --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://es-container:9200 -p 5601:5601 --network es-net --name kb-container docker.elastic.co/kibana/kibana:6.5.4
Alternatively, you could run the containers on the host's network stack (rather than the es-net network). Kibana would then be able to talk to Elasticsearch on localhost:
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" --network host --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://localhost:9200 --network host --name kb-container docker.elastic.co/kibana/kibana:6.5.4
(I haven't actually run these so the commands might need some tweaking).
In that docker-compose.yml file, the only things that could be built into an image at all are the environment variables, and there's not much benefit to hard-coding your deployment configuration like this. In particular, you cannot force the eventual container name or manually specify the eventual networking configuration in an image.
If you're looking for a compact self-contained description of what to run that you can redistribute, the docker-compose.yml is it. Don't try to send around images, or focus on trying to have a single container; instead, distribute the docker-compose.yml file as the way to run your application. I'd consider Compose a standard enough tool that anyone who has Docker already has it and knows how to run docker-compose up -d.
# How to run this application on a different system
# (with Docker and Compose preinstalled):
here$ scp docker-compose.yml there:
here$ ssh there
there$ sudo docker-compose up -d

How to set up alertmanager.service for running in docker container

I am running Prometheus in a Docker container, and I want to configure Alertmanager to send me an email when a service is down. I created alert_rules.yml and prometheus.yml, and I run everything with the following command, mounting both YAML files into the container at /etc/prometheus:
docker run -d -p 9090:9090 --add-host host.docker.internal:host-gateway -v "$PWD/prometheus.yml":/etc/prometheus/prometheus.yml -v "$PWD/alert_rules.yml":/etc/prometheus/alert_rules.yml prom/prometheus
Now, I also want prometheus to send me an email when an alert comes up, and that's where I encounter some problems. I configured my alertmanager.yml as follows:
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h
  receiver: email-me
receivers:
  - name: 'gmail'
    email_configs:
      - to: 'my_email@gmail.com'
        from: 'askonlinetraining@gmail.com'
        smarthost: smtp.gmail.com:587
        auth_username: 'my_email@gmail.com'
        auth_identity: 'my_email@gmail.com'
        auth_password: 'the_password'
I actually don't know if the smarthost parameter is configured correctly, since I can't find any documentation about it or about which values it should contain.
I also created an alertmanager.service file:
[Unit]
Description=AlertManager Server Service
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/local/bin/alertmanager \
  --config.file /etc/alertmanager.yml

[Install]
WantedBy=multi-user.target
I think something here is messed up: I think the first parameter I pass to ExecStart is a path that doesn't exist in the container, but I have no idea how to replace it.
I tried mounting the last two files into the docker container in the same directory where I mount the first two yml files by using the following command:
docker run -d -p 9090:9090 --add-host host.docker.internal:host-gateway -v "$PWD/prometheus.yml":/etc/prometheus/prometheus.yml -v "$PWD/alert_rules.yml":/etc/prometheus/alert_rules.yml -v "$PWD/alertmanager.yml":/etc/prometheus/alertmanager.yml -v "$PWD/alertmanager.service":/etc/prometheus/alertmanager.service prom/prometheus
But the mailing alert is not working, and I don't know how to fix the configuration so all of this runs smoothly in a Docker container. As I said, I suppose the main problem is the ExecStart command in alertmanager.service, but maybe I'm wrong. I can't find anything helpful online, so I would really appreciate some help.
The best practice with containers is to aim to run a single process per container.
In your case, this suggests one container for prom/prometheus and another for prom/alertmanager.
You can run these using docker as:
docker run \
  --detach \
  --name=prometheus \
  --volume=${PWD}/prometheus.yml:/etc/prometheus/prometheus.yml \
  --volume=${PWD}/rules.yml:/etc/alertmanager/rules.yml \
  --publish=9090:9090 \
  prom/prometheus:v2.26.0 \
  --config.file=/etc/prometheus/prometheus.yml

docker run \
  --detach \
  --name=alertmanager \
  --volume=${PWD}/alertmanager.yml:/etc/alertmanager/alertmanager.yml \
  --publish=9093:9093 \
  prom/alertmanager:v0.21.0
A good tool when you run multiple containers is Docker Compose, in which case your docker-compose.yml could be:
version: "3"
services:
prometheus:
restart: always
image: prom/prometheus:v2.26.0
container_name: prometheus
command:
- --config.file=/etc/prometheus/prometheus.yml
volumes:
- ${PWD}/prometheus.yml:/etc/prometheus/prometheus.yml
- ${PWD}/rules.yml:/etc/alertmanager/rules.yml
expose:
- "9090"
ports:
- 9090:9090
alertmanager:
restart: always
depends_on:
- prometheus
image: prom/alertmanager:v0.21.0
container_name: alertmanager
volumes:
- ${PWD}/alertmanager.yml:/etc/alertmanager/alertmanager.yml
expose:
- "9093"
ports:
- 9093:9093
and you could:
docker-compose up
In either case, you can then browse:
Prometheus on the host's port 9090, i.e. localhost:9090
Alertmanager on the host's port 9093, i.e. localhost:9093
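One piece that's easy to miss: Prometheus only delivers alerts to an Alertmanager it has been configured to reach. A minimal sketch of the relevant prometheus.yml fragment, assuming the service name alertmanager from the compose file above (adjust the target if your service is named differently):
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager:9093"]
rule_files:
  - /etc/alertmanager/rules.yml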

Dynamically add docker container ip in Dockerfile ( redis)

How do I dynamically add a container's IP to another Dockerfile? I am running two containers: a) Redis, b) a Java application.
I need to pass the Redis URL at run time to my Java arguments.
Currently I am manually checking the Redis IP, copying it into the Dockerfile, and then building a new image of the Java application with that IP.
docker run --name my-redis -d redis
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-redis
In the Dockerfile (Java application):
CMD ["-Dspring.redis.host=172.17.0.2", "-jar", "/apps/some-0.0.1-SNAPSHOT.jar"]
Can I use a script to update the Dockerfile, or can I use an environment variable?
You can assign a static IP address to your Docker container when you run it, following these steps:
1 - Create a custom network:
docker network create --subnet=172.17.0.0/16 redis-net
2 - Run the Redis container on that network and assign it the IP address:
docker run --net redis-net --ip 172.17.0.2 --name my-redis -d redis
Now the my-redis container has the static IP address 172.17.0.2, and you no longer need to inspect it.
3 - Now run the Java application container on the same network:
docker run --net redis-net my-java-app
Of course you can optimize the solution, e.g. by using environment variables or whatever is convenient for your setup.
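For example, a hypothetical sketch that hands the fixed address to the application container as an environment variable (REDIS_HOST is an invented name; the image's entrypoint would have to read it):
docker run --net redis-net -e REDIS_HOST=172.17.0.2 my-java-app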
More info can be found in the official docs (search for --ip):
docker run
docker network
Edit (add docker-compose):
I just found out that it is also possible to assign static IPs using docker-compose, and this answer gives an example of how.
Here is a similar example, just in case:
version: '3'
services:
  redis:
    container_name: redis
    image: redis:latest
    restart: always
    networks:
      vpcbr:
        ipv4_address: 172.17.0.2
  java-app:
    container_name: java-app
    build: <path to Dockerfile>
    networks:
      vpcbr:
        ipv4_address: 172.17.0.3
    depends_on:
      - redis
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 172.17.0.0/16
          gateway: 172.17.0.1
official docs: https://docs.docker.com/compose/networking/
hope this helps you find your way.
You should put your containers on the same network. Then at runtime you can refer to a container by its name: a container's name is its hostname on the network, so at runtime it resolves to the container's IP address.
Follow these steps:
First, create a network for the containers:
docker network create my-network
Start redis: docker run -d --network=my-network --name=redis redis
Edit the Java application's Dockerfile: replace -Dspring.redis.host=172.17.0.2 with -Dspring.redis.host=redis and build it again (a sketch follows after these steps).
Finally, start the Java application container: docker run -it --network=my-network your_image. Optionally you can give the container a name, but it is not required, since you do not access the Java application's container from the Redis container.
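A minimal sketch of what the edited Dockerfile could look like. The base image is an illustrative assumption, and the REDIS_HOST variable is invented so the host defaults to redis but can still be overridden at run time; the jar path comes from the question:
# Hypothetical base image; adjust to whatever your build uses
FROM eclipse-temurin:11-jre
COPY some-0.0.1-SNAPSHOT.jar /apps/some-0.0.1-SNAPSHOT.jar
# Default to the Redis container's name; override with: docker run -e REDIS_HOST=...
ENV REDIS_HOST=redis
ENTRYPOINT ["sh", "-c", "java -Dspring.redis.host=${REDIS_HOST} -jar /apps/some-0.0.1-SNAPSHOT.jar"]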
Alternatively, you can use a docker-compose file. By default, docker-compose creates a network for the services it runs. I am not aware of your full setup, so I will provide a sample docker-compose.yml that illustrates the main concept.
version: "3.7"
services:
redis:
image: redis
java_app_image:
image: your_image_name
Either way, the Java application can reach the Redis container dynamically via the container's hostname instead of a static IP.

Running Kudu in a docker and master to tserver two-way connection / circular link issues - docker composition

How can you run Kudu under Docker, when it requires two containers (one for the master and one for the tserver) that need to connect to each other by DNS?
Kudu can be run under Docker using the following commands:
docker run --name kudu-master --hostname kudu-master --detach --publish 8051:8051 --publish 7051:7051 kunickiaj/kudu master
and:
docker run --name kudu-tserver --hostname kudu-tserver --detach --publish 8050:8050 --publish 7050:7050 --link kudu-master --env KUDU_MASTER=kudu-master kunickiaj/kudu tserver
However, the above defines a one-way link, from kudu-tserver to kudu-master, and not vice versa.
For Kudu to function correctly, both kudu-master and kudu-tserver need to be able to connect to each other.
How can the Docker containers be configured, so that the two way link works?
The --link parameter of docker run is a legacy feature that may eventually be removed.
You can raise multiple Docker containers and connect them to each other using docker-compose.
To get this working, create a folder named kudu and place the following docker-compose.yml file under it:
version: '3'
services:
  kudu-master:
    image: "kunickiaj/kudu"
    hostname: kudu-master
    ports:
      - "8051:8051"
      - "7051:7051"
    command:
      master
    networks:
      kudu_network:
        aliases:
          - kudu-master
  kudu-tserver:
    image: "kunickiaj/kudu"
    hostname: kudu-tserver
    ports:
      - "8050:8050"
      - "7050:7050"
    environment:
      - KUDU_MASTER=kudu-master
    command:
      tserver
    networks:
      kudu_network:
        aliases:
          - kudu-tserver
networks:
  kudu_network:
This file defines two services (kudu-master and kudu-tserver) and a network on which both have aliases that are visible to the rest of the network, i.e. to each other.
Then run docker-compose using the following command line:
docker-compose -f "filePathToYourDockerComposeYmlFile" up -d
or, if you want to recreate the Docker containers:
docker-compose -f "filePathToYourDockerComposeYmlFile" up -d --force-recreate
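To confirm the two-way link works, you can resolve each container from the other by its alias (assuming ping is available in the kunickiaj/kudu image; substitute nslookup or getent hosts if it is not):
docker-compose -f "filePathToYourDockerComposeYmlFile" exec kudu-master ping -c 1 kudu-tserver
docker-compose -f "filePathToYourDockerComposeYmlFile" exec kudu-tserver ping -c 1 kudu-master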
Other useful commands:
To stop the containers:
docker-compose -f "filePathToYourDockerComposeYmlFile" stop
To remove the containers:
docker-compose -f "filePathToYourDockerComposeYmlFile" rm -f

I need to run many apache2.0 servers on different docker container and give each one port number

I am quite new to Docker, and I need to run 8 Apache 2.0 servers in separate Docker containers, giving each container a port number, using Compose.
I found an Apache 2.0 image and created a container with this command:
docker create -t -i lamsley/apache2.0
How can I create many web servers and give each one a port number so that I can access them over the internet?
With just Docker you can run:
docker run --name server1 -d -p 8000:80 lamsley/apache2.0
docker run --name server2 -d -p 8001:80 lamsley/apache2.0
...
It's easier with Docker Compose:
version: '2'
services:
  httpd1:
    image: lamsley/apache2.0
    container_name: httpd1
    ports:
      - "8000:80"
  httpd2:
    image: lamsley/apache2.0
    container_name: httpd2
    ports:
      - "8001:80"
  ...
But I strongly suggest you learn Docker first, because these snippets are simplistic. You need to know about volumes to pass in the content to be served, etc. Why use lamsley/apache2.0 when you can use the official httpd image? You can also build your own custom image. The possibilities are endless, and it is fun.
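For instance, a hedged sketch of serving local content with the official httpd image, whose document root is /usr/local/apache2/htdocs (the ./site1 directory is an assumed layout):
version: '2'
services:
  httpd1:
    image: httpd:alpine
    ports:
      - "8000:80"
    volumes:
      - ./site1:/usr/local/apache2/htdocs:ro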
To learn about Docker Compose:
https://docs.docker.com/compose/
To learn about volumes:
https://docs.docker.com/engine/tutorials/dockervolumes/
