Flink could not connect to JobManager in Docker

I've started a Flink JobManager in Docker using docker run --rm --name=jobmanager --network flink-network --publish 8081:8081 --env FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager" apache/flink:1.16.0-java11 jobmanager, and I could visit the Flink UI at 127.0.0.1:8081.
Then I built a docker image using Dockerfile:
FROM flink:1.16.0-java11
...
CMD ./bin/flink run --python /usr/local/flink_driver.py
Then I tried to run this image using docker run -d --name=flink-driver --network flink-network --env FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager" flink_test:latest, but the container exits with errors:
...
Caused by: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /0.0.0.0:8081
...
I've passed jobmanager.rpc.address: jobmanager to the container, but it is still trying to connect to 0.0.0.0:8081. How should I correct my Docker usage?
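One probable cause, offered as a hedged guess: the official flink image's entrypoint script applies FLINK_PROPERTIES only for its built-in modes (jobmanager, taskmanager, and so on); a pass-through command such as flink run is executed without that configuration step, so the CLI falls back to the default REST address. A minimal sketch of a workaround, assuming the JobManager is reachable as jobmanager on flink-network, is to pass the address to the CLI directly:
# Hypothetical fix: point the Flink CLI at the JobManager's REST endpoint
# instead of relying on FLINK_PROPERTIES being applied by the entrypoint.
CMD ./bin/flink run -m jobmanager:8081 --python /usr/local/flink_driver.py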

Related

How to specify the port to map to on the host machine when you build the image with Dockerfile

When I run Docker from command line I do the following:
docker run -it -d --rm --hostname rabbit1 --name rabbit1 -p 127.0.0.1:8000:5672 -p 127.0.0.1:8001:15672 rabbitmq:3-management
I publish the ports with -p in order to see the connection on the host.
How can I do this automatically with a Dockerfile?
The Dockerfile provides the instructions used to build the docker image.
The docker run command provides instructions used to run a container from a docker image.
How can I do this automatically with a Dockerfile
You don't.
Port publishing is something you configure only when starting a container.
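The closest the Dockerfile itself gets is the EXPOSE instruction, which only documents which ports the container listens on; it publishes nothing by itself. Paired with docker run -P, Docker publishes every exposed port to a random host port. A small sketch for the RabbitMQ image above:
# In the Dockerfile: documentation only, no publishing happens here
EXPOSE 5672 15672
# At run time: -P publishes all EXPOSEd ports to random host ports
docker run -d -P --name rabbit1 rabbitmq:3-management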
You can't specify ports in a Dockerfile, but you can use Docker Compose to achieve that.
Docker Compose is a tool for running multi-container applications on Docker.
An example docker-compose.yml with ports:
version: "3.8"
services:
  rabbit1:
    image: rabbitmq:3-management
    container_name: rabbit1
    ports:
      - "127.0.0.1:8000:5672"
      - "127.0.0.1:8001:15672"
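With that file in place, the same port publishing happens automatically every time you bring the stack up:
docker compose up -d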

Docker not exposing port when using command line

I am trying to start an ASP.NET Core container hosting a website.
It does not expose the ports when using the following command line:
docker run my-image-name -d -p --expose 80
or
docker run my-image-name -d -p 80
Upon startup, the log shows:
Now listening on: http://[::]:80
So I assume the application is not bound to a specific address.
But it does work when using the following Docker Compose file:
version: '0.1'
services:
  website:
    container_name: "aspnetcore-website"
    image: aspnetcoredocker
    ports:
      - '80:80'
    expose:
      - '80'
You need to make sure to pass all options (-d -p 80) to the docker command before naming the image as described in the docker run docs. The notation is:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
So please try the following:
docker run -d -p 80 my-image-name
Otherwise the parameters are treated as the command and args inside the container: they are handed to the image's entrypoint instead of being parsed by the docker command itself. In your example, the Docker daemon never receives -d or -p 80, so no port is mapped to the host. You can also see this from the missing -d: the command runs in the foreground and the logs appear in your terminal.
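Note also that -p 80 alone publishes container port 80 to a random host port (visible via docker ps). To pin the host port, mirroring the ports: '80:80' entry in the compose file above, give both sides explicitly:
docker run -d -p 80:80 my-image-name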

Pumba container exiting without any error

I am trying to set up Pumba on my Docker Swarm cluster. I tried using docker service create, docker stack deploy, and a simple docker run command with the following parameters:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock gaiaadm/pumba:master Pumba kill --signal SIGTERM
docker service create --constraint 'node.role == manager' --mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock gaiaadm/pumba:master --with-registry-auth
docker-compose.yaml is:
version: "3.4"
services:
pumba:
image: gaiaadm/pumba:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock
deploy:
replicas: 3
command: ["pumba", "kill","re2:^customer-api*","--signal", "SIGTERM"]
and created the above compose file for stack deploy.
But in all cases the Pumba container just kills the containers matching customer-api* from the compose file above, then exits and restarts because Swarm restores the service's desired state.
I need the container to keep running.
I am new to Docker and Pumba, so any help or direction will be really appreciated.
Thanks in advance.
I am able to solve the problem using the following service create command:
docker service create --name PUMBA --mode=global --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock gaiaadm/pumba:master pumba --random --interval 10s kill re2:"^customer-api*" --signal SIGTERM
I deployed it in global mode and changed the Pumba command; after this, Pumba no longer kills itself and the container keeps running. The --interval 10s flag is what keeps it alive: it puts Pumba into recurring mode, so it runs the kill command on a schedule instead of executing once and exiting.
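For completeness, an untested sketch of the same fix as a stack file, translating the service create flags above into compose syntax:
version: "3.4"
services:
  pumba:
    image: gaiaadm/pumba:master
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      # global mode: one Pumba instance per node, like --mode=global
      mode: global
    command: ["pumba", "--random", "--interval", "10s", "kill", "re2:^customer-api*", "--signal", "SIGTERM"]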

owncloud docker via nginx proxy with lets encrypt

I had a running setup of containers for owncloud, jwilder/nginx-proxy and JrCs/docker-letsencrypt-nginx-proxy-companion for my own cloud. Since I couldn't change the settings afterwards to accept files larger than 2MB, I tried to set the whole thing up completely new.
Yet, for some reason I can't even get the standard configuration (without the 2MB limit) working again...
Could you help me here real quick?
First, I started the nginx-proxy:
docker run -d -p 80:80 -p 443:443 --name MY_PROXY_NAME1 -v /path/to/my/certs:/etc/nginx/certs:ro -v /etc/nginx/vhost.d -v /usr/share/nginx/html -v /var/run/docker.sock:/tmp/docker.sock:ro --label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy jwilder/nginx-proxy
Second, I started the lets encrypt companion:
docker run -d --name MY_PROXY_NAME2 -v /path/to/my/certs:/etc/nginx/certs:rw -v /var/run/docker.sock:/var/run/docker.sock:ro --volumes-from MY_PROXY_NAME1 jrcs/letsencrypt-nginx-proxy-companion
Checking afterwards, both containers seem to run:
someID jrcs/letsencrypt-nginx-proxy-companion "/bin/bash /app/en..." 6 minutes ago Up 6 minutes MY_PROXY_NAME2
someID jwilder/nginx-proxy "/app/docker-entry..." 7 minutes ago Up 7 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp MY_PROXY_NAME1
But when starting any test container or the owncloud container, the browser simply tells me no secure connection could be established...
Here an example on how I started the owncloud container:
docker run -v /path/to/my/data:/usr/share/webapps/owncloud/data --link mysql:mysql -e "VIRTUAL_HOST=my.domain.com" -e "LETSENCRYPT_HOST=my.domain.com" -e "LETSENCRYPT_EMAIL=my@email.com" --name Owncloud -d --expose 80 --expose 443 owncloud:latest
Do you have any ideas why it isn't working? I'm getting furious, since I had it running at some point and can't figure out why it's not working this time... (if I start the old proxy and owncloud containers I can still reach them, so it's not a network problem or other issue)
So thanks a lot!
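A first diagnostic step, assuming the container names used above: check whether the companion actually obtained certificates, since the proxy cannot terminate TLS without them:
# Look for ACME/Let's Encrypt errors in the companion's output
docker logs MY_PROXY_NAME2
# Check whether certificates for my.domain.com were written
docker exec MY_PROXY_NAME1 ls /etc/nginx/certs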

Reconstruct docker run command parameters from container

What's the best way to reconstruct docker run command parameters from an existing docker container? I could use docker inspect and work from the info found there. Is there any better way?
Not super easy, but you can do it by formatting the output from docker inspect. For a container started with this command:
> docker run -d -v ~:/home -p 8080:80 -e NEW_VAR=x --name web3 nginx:alpine sleep 10m
You can pull out the volumes, port mapping, environment variables, container name, image name and command with:
> docker inspect -f "V: {{.Mounts}} P: {{.HostConfig.PortBindings}} E:{{.Config.Env}} NAME: {{.Name }} IMAGE: {{.Config.Image}} COMMAND: {{.Path}} {{.Args}}" web3
That gives you the output:
V: [{ /home/scrapbook /home true rprivate}] P: map[80/tcp:[{ 8080}]] E:[NEW_VAR=x PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NGINX_VERSION=1.11.5] NAME: /web3 IMAGE: nginx:alpine COMMAND: sleep [10m]
Which is a start.
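For output that's easier to parse, the json template function emits each field as proper JSON (same web3 container as above; the field choice is just an example), giving output along these lines:
> docker inspect -f "{{json .HostConfig.PortBindings}}" web3
{"80/tcp":[{"HostIp":"","HostPort":"8080"}]}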
Docker Captain Adrian Mouat has an excellent blog post on formatting the output: Docker Inspect Template Magic.
See also this answer which links to a tool which programmatically derives the docker run command from a container.
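If the linked tool is the community runlike utility (an assumption; the link itself is not preserved here), usage is a one-liner:
# Install, then print a docker run command reconstructed from the container
> pip install runlike
> runlike -p web3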
