I am running the progrium/consul container with the gliderlabs/registrator container. I would like to automatically create health checks for any container that is registered to Consul by Registrator, and then use those health checks to know whether a container has stopped running. I have read that there is a way to do this by adding environment variables, but everything I have found has been far too vague, such as the post below:
how to define HTTP health check in a consul container for a service on the same host?
So I am supposed to set some environmental variables:
ENV SERVICE_CHECK_HTTP=/howareyou
ENV SERVICE_CHECK_INTERVAL=5s
Do I set them inside my progrium/consul container or my gliderlabs/registrator container? Would I set them by just adding flags to my docker run command, like this?
docker run ...... -e SERVICE_CHECK_HTTP=howareyou -e SERVICE_CHECK_INTERVAL=5s ......
Note: for some reason, adding the above environment variables to the docker run command for my Registrator just caused Consul to think my nodes were failing (no acks received).
I got Consul Health Checks and Gliderlabs Registrator working in three ways with my Spring Boot apps:
Put the environment variables in the Dockerfile with ENV or LABEL
Put the environment variables using -e with docker run
Put the environment variables into docker-compose.yml under "environment" or "labels"
Dockerfile
In your Dockerfile:
ENV SERVICE_NAME MyApp
ENV SERVICE_8080_CHECK_HTTP /health
ENV SERVICE_8080_CHECK_INTERVAL 60s
The /health endpoint here comes from the Spring Boot Actuator library, which I simply added to the pom.xml of my Spring Boot application. You can, however, use any other endpoint as well.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
docker run
docker run -d -e "SERVICE_NAME=myapp" -e "SERVICE_8080_CHECK_HTTP=/health" -e "SERVICE_8080_CHECK_INTERVAL=10s" -p 8080:8080 --name MyApp myapp
Make sure that you are using the correct HTTP server port and that it is accessible. In my case, Spring Boot uses 8080 by default.
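The port-prefixed variable names follow Registrator's SERVICE_&lt;port&gt;_&lt;option&gt; convention shown above, so the same pattern can be scripted for any port. A minimal sketch (the echo makes it a dry run; myapp and the port number are placeholders):

```shell
# Dry run: print the docker run command that wires up a Registrator HTTP check
# for a given container port, using the SERVICE_<port>_CHECK_* convention above.
port=8080
check_flags="-e SERVICE_${port}_CHECK_HTTP=/health -e SERVICE_${port}_CHECK_INTERVAL=10s"
echo docker run -d $check_flags -p "${port}:${port}" myapp
```

Dropping the echo runs the container for real; if the app listens on a different port, only the port variable needs to change.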
Docker Compose
Add the health check information under either the "environment" or "labels" properties:
myapp:
  image: apps/myapp
  restart: always
  environment:
    - SERVICE_NAME=MyApp
    - SERVICE_8080_CHECK_HTTP=/health
    - SERVICE_8080_CHECK_INTERVAL=60s
  ports:
    - "8080:8080"
Starting Consul Server
docker run -d -p "8500:8500" -h "consul" --name consul gliderlabs/consul-server -server -bootstrap
The "gliderlabs/consul-server" image enables the Consul UI by default, so you don't have to pass any extra flags for it.
Then start Registrator
docker run -d \
    --name=registrator \
    -h $(docker-machine ip dockervm) \
    -v=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:v6 -resync 120 -deregister on-success \
    consul://$(docker-machine ip dockervm):8500
The "resync" and "deregister" parameters ensure that Consul and Registrator stay in sync.
Related
When I run Docker from command line I do the following:
docker run -it -d --rm --hostname rabbit1 --name rabbit1 -p 127.0.0.1:8000:5672 -p 127.0.0.1:8001:15672 rabbitmq:3-management
I publish the ports with -p in order to see the connection on the host.
How can I do this automatically with a Dockerfile?
The Dockerfile provides the instructions used to build the docker image.
The docker run command provides instructions used to run a container from a docker image.
How can I do this automatically with a Dockerfile
You don't.
Port publishing is something you configure only when starting a container.
You can't specify published ports in a Dockerfile, but you can use Docker Compose to achieve that.
Docker Compose is a tool for running multi-container applications on Docker.
Example of a docker-compose.yml with ports:
version: "3.8"
services:
  rabbit1:
    image: rabbitmq:3-management
    container_name: rabbit1
    ports:
      - 8000:5672
      - 8001:15672
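What a Dockerfile can do is document the container ports with EXPOSE. That doesn't publish anything by itself, but it lets docker run -P publish every exposed port to a random host port. A sketch (the ports are taken from the rabbitmq example; the base image already exposes them, so this is purely illustrative):

```dockerfile
FROM rabbitmq:3-management
# EXPOSE only records the ports in the image metadata; it does not publish them.
EXPOSE 5672 15672
```

With an image built from this, docker run -d -P maps 5672 and 15672 to random host ports (inspect them with docker port &lt;container&gt;), while a fixed mapping like 8000:5672 still requires -p or a compose file.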
I have a MySQL container that I define with a docker-compose.yml file like so:
version: "3.7"
services:
  mydb:
    image: mysql:8
    container_name: my_db_local
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: 12345
      MYSQL_DATABASE: my_db_local
      MYSQL_USER: someuser
      MYSQL_PASSWORD: somepassword
    volumes:
      - ./my-db-data:/var/lib/mysql
If I run docker-compose up -d I see that it spins up pretty quickly and I am able to connect to it from a SQL client running on my host machine (I connect to it at 0.0.0.0:3306).
I also have a containerized Java Spring Boot application that I manage with the following Dockerfile:
FROM openjdk:8-jdk-alpine as cce
COPY application.yml application.yml
COPY build/libs/myservice.jar myservice.jar
HEALTHCHECK CMD curl --fail https://localhost:9200/healthCheck || exit 1
EXPOSE 443
ENTRYPOINT [ \
    "java", \
    "-Dspring.config=.", \
    "-Ddb.hostAndPort=0.0.0.0:3306", \
    "-Ddb.name=my_db_local", \
    "-Ddb.username=someuser", \
    "-Ddb.password=somepassword", \
    "-jar", \
    "myservice.jar" \
]
I can build this image like so:
docker build . -t myorg/myservice
And then run it like so:
docker run -d -p 9200:9200 myorg/myservice
When I run it, it quickly dies on startup because it cannot connect to the MySQL container (which it uses as its database). Clearly the MySQL container is running, since I can connect to it from my host with a SQL client. So it's pretty obvious my network/port settings are awry, either in the Docker Compose file or, more likely, inside my Spring Boot app's Dockerfile. I just don't know enough about Docker to figure out where the misconfiguration could be. Any ideas?
The database host is not 0.0.0.0; that address is IPv4 for "listen on all interfaces", and some OSes interpret it as connecting back to a local interface, none of which works in a container. Container networks are namespaced, so the container has its own network interface, separate from the host and from the other containers.
To connect between containers, you need to run them on the same Docker network; that network needs to be user-created (not the default bridge network named "bridge"); you connect by container name or network alias; and you connect to the container port, not the host-published port.
What that looks like:
ENTRYPOINT [ \
    "java", \
    "-Dspring.config=.", \
    "-Ddb.hostAndPort=mydb:3306", \
    "-Ddb.name=my_db_local", \
    "-Ddb.username=someuser", \
    "-Ddb.password=somepassword", \
    "-jar", \
    "myservice.jar" \
]
and:
docker run -d -p 9200:9200 --net $network_name_of_mysql myorg/myservice
mydb will work because Compose automatically creates a network alias for the service name. There's no need to define a container_name in Compose for this, and you often don't want one, so that multiple projects can start independently and so a service can be scaled to multiple containers.
Note that it's bad practice to include configuration like the database connection data in the image itself. You'll want to move this into an external config file that's mounted into the container, into environment variables, or into the compose file.
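As a sketch of that advice (the service and variable names here are hypothetical, and this assumes the compose file from the question), the app could join the same compose project and read its connection settings from the environment instead of having them baked into the image:

```yaml
# docker-compose.yml (fragment): adding the app next to mydb puts both services
# on the same compose-created network, so the hostname "mydb" resolves there.
  myservice:
    image: myorg/myservice
    ports:
      - 9200:9200
    environment:
      DB_HOST_AND_PORT: mydb:3306   # the app would need to read these instead
      DB_NAME: my_db_local          # of the values hard-coded in ENTRYPOINT
    depends_on:
      - mydb
```

This also solves the networking question from the answer above in one step, since compose attaches both services to the project network automatically.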
I am trying to start an ASP.NET Core container hosting a website.
It does not expose the ports when I use the following command line:
docker run my-image-name -d -p --expose 80
or
docker run my-image-name -d -p 80
Upon startup, the log will show :
Now listening on: http://[::]:80
So I assume the application is not bound to a specific address.
But it does work when using the following Docker Compose file:
version: '0.1'
services:
  website:
    container_name: "aspnetcore-website"
    image: aspnetcoredocker
    ports:
      - '80:80'
    expose:
      - '80'
You need to make sure to pass all options (-d -p 80) to the docker command before the image name, as described in the docker run docs. The notation is:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
So please try the following:
docker run -d -p 80 my-image-name
Otherwise the parameters are used as the command/args inside the container. In other words, your image's entrypoint runs with the additional arguments -d -p 80, instead of those options being passed to the docker command itself. So in your example the Docker daemon simply never receives -d and -p 80, and thus does not map the port to the host. You can also see this from the fact that, without -d, the command runs in the foreground and you see the logs in your terminal.
I have a Hadoop cluster running on Docker for testing. I've installed ZooKeeper, but I can't start the ZooKeeper service from outside Docker with the docker exec command.
After installing zookeeper in the hadoop cluster, I'm able to start each zookeeper node connecting to each of them and executing: bin/zkServer.sh start
But when I try to launch the command from outside Docker, with docker exec -u hadoop -it nodemaster /home/hadoop/zookeeper/bin/zkServer.sh start, it fails.
It looks like the config (CLASSPATH) variables are not properly set when the command is run through docker exec.
The only way to add config parameters to the zkServer.sh command is the --config flag, but that only points at the ZooKeeper conf directory.
I've executed zkServer.sh with the print-cmd option from inside and from outside, and from outside the CLASSPATH does not seem to be OK.
Those are the zkServer commands working fine inside docker:
hadoop@nodemaster:~$ /home/hadoop/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
hadoop@nodemaster:~$ /home/hadoop/zookeeper/bin/zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
hadoop@nodemaster:~$ exit
And this is the same command launched from outside docker:
$ docker exec -u hadoop -it nodemaster /home/hadoop/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... FAILED TO START
This is the zkServer.sh command with the print-cmd option:
Inside container:
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
"/usr/local/openjdk-8/bin/java" -Dzookeeper.log.dir="/home/hadoop/zookeeper/logs" -Dzookeeper.log.file="zookeeper--server-nodemaster.log" -Dzookeeper.root.logger="INFO,CONSOLE" -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError='kill -9 %p' -cp "/home/hadoop/zookeeper/bin/../zookeeper-server/target/classes: ...
... CLASSPATH WITH ALL THE STUFF .....
/hive/lib/zookeeper-3.4.6.jar:/home/hadoop/hive/lib/*.jar" -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain "/home/hadoop/zookeeper/bin/../conf/zoo.cfg" > "/home/hadoop/zookeeper/logs/zookeeper--server-nodemaster.out" 2>&1 < /dev/null
Outside container:
docker exec -u hadoop -it nodemaster /home/hadoop/zookeeper/bin/zkServer.sh --config /home/hadoop/zookeeper/conf print-cmd
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/conf/zoo.cfg
"/usr/local/openjdk-8/bin/java" -Dzookeeper.log.dir="/home/hadoop/zookeeper/logs" -Dzookeeper.log.file="zookeeper--server-nodemaster.log" -Dzookeeper.root.logger="INFO,CONSOLE" -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError='kill -9 %p' -cp "/home/hadoop/zookeeper/bin/../zookeeper-server/target/classes:/home/hadoop/zookeeper/bin/../build/classes:/home/hadoop/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/home/hadoop/zookeeper/bin/../build/lib/*.jar:/home/hadoop/zookeeper/bin/../lib/*.jar:/home/hadoop/zookeeper/bin/../zookeeper-*.jar:/home/hadoop/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/home/hadoop/zookeeper/conf:" -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain "/home/hadoop/zookeeper/conf/zoo.cfg" > "/home/hadoop/zookeeper/logs/zookeeper--server-nodemaster.out" 2>&1 < /dev/null
So, it looks like the classpath is not correct when launching the command from outside.
I've tried to put the command inside a script, but I get the same error.
docker exec -u hadoop -it nodemaster /home/hadoop/zookeeper/zoo_init.sh
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/conf/zoo.cfg
Error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain
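One hedged explanation for the difference: docker exec starts the process with only the container's configured environment and does not run a login shell, so anything exported from ~/.bashrc or /etc/profile inside the container (such as the extra Hive-jar CLASSPATH entries seen in the inside-container output) never gets set. The mechanism can be reproduced locally without Docker (the file path and jar name below are placeholders):

```shell
# A profile file that exports CLASSPATH, as a container's ~/.bashrc might.
echo 'export CLASSPATH=/opt/jars/zookeeper.jar' > /tmp/fake_profile.sh
unset CLASSPATH

# Plain child shell, like `docker exec ... zkServer.sh`: profile never sourced.
bash -c 'echo "plain: [$CLASSPATH]"'                          # plain: []
# Sourcing the profile first, as an interactive login session would:
bash -c '. /tmp/fake_profile.sh; echo "sourced: [$CLASSPATH]"'  # sourced: [/opt/jars/zookeeper.jar]
```

If that is the cause here, wrapping the command in a login shell, e.g. docker exec -u hadoop -it nodemaster bash -lc '/home/hadoop/zookeeper/bin/zkServer.sh start', should yield the same classpath as the interactive session.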
I'm trying to publish a tmpnb server, but am stuck. Following the Quickstart at http://github.com/jupyter/tmpnb, I can run the server locally and access it at 172.17.0.1:8000.
However, I can't access the server remotely. I've tried adding -p 8000:8000 when I create the proxy container with the following command:
docker run -it -p 8000:8000 --net=host -d -e CONFIGPROXY_AUTH_TOKEN=$TOKEN --name=proxy jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999
I tried to access the server by typing the machine's IP address followed by :8000, but my browser still returns "This site can't be reached."
The logs for proxy are:
docker logs --details 45d836f98450
08:33:20.981 - info: [ConfigProxy] Proxying http://*:8000 to http://127.0.0.1:9999
08:33:20.988 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
To verify that I can access other servers running on the same machine, I tried the following: docker run -d -it --rm -p 8888:8888 jupyter/minimal-notebook, and I was able to access it remotely at the machine's IP address on port 8888.
What am I missing?
I'm working on an Ubuntu 16.04 machine with Docker 17.03.0-ce
Thanks
Create a file named docker-compose.yml with the following content; you can then launch the containers with docker-compose up. The images will be pulled automatically.
httpproxy:
  image: jupyter/configurable-http-proxy
  environment:
    CONFIGPROXY_AUTH_TOKEN: 716238957362948752139417234
  container_name: tmpnb-proxy
  net: "host"
  command: --default-target http://127.0.0.1:9999
  ports:
    - 8000:8000

tmpnb_orchestrate:
  image: jupyter/tmpnb
  net: "host"
  container_name: tmpnb_orchestrate
  environment:
    CONFIGPROXY_AUTH_TOKEN: $TOKEN$
  volumes:
    - /var/run/docker.sock:/docker.sock
  command: python orchestrate.py --command='jupyter notebook --no-browser --port {port} --ip=0.0.0.0 --NotebookApp.base_url=/{base_path} --NotebookApp.port_retries=0 --NotebookApp.token="" --NotebookApp.disable_check_xsrf=True'
A solution is available in the github.com/jupyter/tmpnb README.md file. At the end of the file, under the heading "Development", three commands are listed:
git clone https://github.com/jupyter/tmpnb.git
cd tmpnb
make dev
These commands clone the tmpnb repository, cd into it, and run the "dev" target from the makefile it contains. On my machine, entering those commands created a notebook on a temporary server that I could access remotely. Beware that "make dev" deletes potentially conflicting Docker containers as part of the launch process.
Some insight into how this works can be gained by looking inside the makefile: when the configurable-http-proxy image is run, both ports 8000 and 8001 are published, and the tmpnb image is run with CONFIGPROXY_ENDPOINT=http://proxy:8001.