I have a MySQL container that I define with a docker-compose.yml file like so:
version: "3.7"
services:
mydb:
image: mysql:8
container_name: my_db_local
command: --default-authentication-plugin=mysql_native_password
restart: always
ports:
- 3306:3306
environment:
MYSQL_ROOT_PASSWORD: 12345
MYSQL_DATABASE: my_db_local
MYSQL_USER: someuser
MYSQL_PASSWORD: somepassword
volumes:
- ./my-db-data:/var/lib/mysql
If I run docker-compose up -d I see that it spins up pretty quickly and I am able to connect to it from a SQL client running on my host machine (I connect to it at 0.0.0.0:3306).
I also have a containerized Java Spring Boot application that I manage with the following Dockerfile:
FROM openjdk:8-jdk-alpine as cce
COPY application.yml application.yml
COPY build/libs/myservice.jar myservice.jar
HEALTHCHECK CMD curl --fail https://localhost:9200/healthCheck || exit 1
EXPOSE 443
ENTRYPOINT [ \
  "java", \
  "-Dspring.config=.", \
  "-Ddb.hostAndPort=0.0.0.0:3306", \
  "-Ddb.name=my_db_local", \
  "-Ddb.username=someuser", \
  "-Ddb.password=somepassword", \
  "-jar", \
  "cim-service.jar" \
]
I can build this image like so:
docker build . -t myorg/myservice
And then run it like so:
docker run -d -p9200:9200 myorg/myservice
When I run it, it quickly dies on startup because it cannot connect to the MySQL container (which it uses as its database). Clearly the MySQL container is running, since I can connect to it from my host with a SQL client. So it's pretty obvious my network/port settings are awry, either in the Docker Compose file or, more likely, in my Spring Boot app's Dockerfile. I just don't know enough about Docker to figure out where the misconfiguration is. Any ideas?
The database host is not 0.0.0.0. That address is IPv4 for "listen on all interfaces", and some OSs interpret it as connecting back to a local interface, neither of which will work from inside a container. Container networks are namespaced, so the container has its own network interface, separate from the host and from the other containers.
To connect between containers, you need to run the containers on the same Docker network; that network needs to be user-created (not the default bridge network named "bridge"); you connect by the container name or network alias; and you connect to the container port, not the host-published port.
What that looks like:
ENTRYPOINT [ \
  "java", \
  "-Dspring.config=.", \
  "-Ddb.hostAndPort=mydb:3306", \
  "-Ddb.name=my_db_local", \
  "-Ddb.username=someuser", \
  "-Ddb.password=somepassword", \
  "-jar", \
  "cim-service.jar" \
]
and:
docker run -d -p9200:9200 --net $network_name_of_mysql myorg/myservice
mydb will work because Compose automatically creates a network alias for the service name. There's no need to define a container_name in Compose for this, and you often don't want one, so that multiple projects can be started side by side and so a service can be scaled to more than one container.
Note that it's a bad practice to include configuration like the database connection data in the image itself. You'll want to move this into an external config file that's mounted into the container, into environment variables, or into the compose file.
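A minimal sketch of what that could look like in the compose file, assuming the Spring Boot app is changed to read these values from environment variables (the DB_* names here are hypothetical; use whatever your application actually expects):

version: "3.7"
services:
  mydb:
    image: mysql:8
    # ... same as before ...
  myservice:
    image: myorg/myservice
    ports:
      - 9200:9200
    environment:
      # hypothetical variable names; the app must be written to read them
      DB_HOST_AND_PORT: mydb:3306
      DB_NAME: my_db_local
      DB_USERNAME: someuser
      DB_PASSWORD: somepassword

Both services land on the project's default network, so myservice can reach the database at mydb:3306 without any connection details baked into the image.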
Related
I'm using one Docker container for PHP and another for SQL. I also have a Makefile to run commands in an instance of the PHP container. This is the Makefile entry I use for command execution, and I would like it to use the SQL container I already have.
command:
docker run --rm \
--volume=${PWD}/code:/code \
--volume=${PWD}/json:/json:rw \
--volume=${PWD}/file:/file:rw \
own_php:latest \
time php /code/public/index_hex.php ${page}
If I try to execute this command from the Makefile, I get the following error:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed:
Name does not resolve
This is the docker-compose.yml I have in my project:
version: '3'
services:
  sql:
    image: mariadb
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./init-db:/docker-entrypoint-initdb.d
      - ./.mysql-data:/var/lib/mysql/
  crawler:
    build:
      context: './docker-base'
    depends_on:
      - sql
    volumes:
      - ./json:/json:rw
      - ./file:/file:rw
      - ./code:/code
But if I bring the containers up with docker-compose and run the command from inside the crawler container, it executes fine.
Is it possible for a docker run --rm container to use another container?
Docker Compose creates a network for each compose file, and you have to attach the container you start manually with docker run to that network to be able to reach other containers on it. The network will usually be named something like directoryname_default, based on the name of the directory holding the compose file, and it will show up in the docker network ls listing.
You should run something like
docker run --rm --net directoryname_default ...
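For example, assuming the compose file lives in a directory named myproject (so the network is called myproject_default; confirm the name with docker network ls), the Makefile command could look roughly like this:

docker run --rm \
  --net myproject_default \
  --volume=${PWD}/code:/code \
  --volume=${PWD}/json:/json:rw \
  --volume=${PWD}/file:/file:rw \
  own_php:latest \
  time php /code/public/index_hex.php ${page}

Inside that network the PHP code should connect to the host sql on port 3306 (the service name and the container port), not to localhost:3307.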
I have a Postgres database in one container and a Java application in another container. The Postgres database is accessible on port 1310 on localhost, but the Java container is not able to access it.
I tried this command:
docker run modelpolisher_java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg
But it gives error java.net.UnknownHostException: biggdb.
Here is my docker-compose.yml file:
version: '3'
services:
  biggdb:
    container_name: modelpolisher_biggdb
    build: ./docker/bigg_docker
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bigg
    ports:
      - "1310:5432"
  java:
    container_name: modelpolisher_java
    build: ./docker/java_docker
    stdin_open: true
    tty: true
Dockerfile for biggdb:
FROM postgres:11.4
RUN apt update &&\
apt install wget -y &&\
# Create directory '/bigg_database_dump/' and download bigg_database dump as 'database.dump'
wget -P /bigg_database_dump/ https://modelpolisher.s3.ap-south-1.amazonaws.com/bigg_database.dump &&\
rm -rf /var/lib/apt/lists/*
COPY ./scripts/restore_biggdb.sh /docker-entrypoint-initdb.d/restore_biggdb.sh
EXPOSE 1310:5432
Can somebody please tell me what changes I need to make in the docker-compose.yml, or in the command, so that the java container can access the ports of the biggdb (Postgres) container?
The two containers have to be on the same Docker-internal network to be able to talk to each other. Docker Compose automatically creates a network for you and attaches containers to that network. If you docker run a container alongside that, you need to find that network's name.
Run
docker network ls
This will list the Docker-internal networks you have. One of them will be named something like bigg_default, where the first part is (probably) your current directory name. Then when you actually run the container, you can attach to that network with
docker run --net bigg_default ...
Consider setting a command: in your docker-compose.yml file to pass these arguments when you docker-compose up. If the --host option is in your code and doesn't come from a framework, passing settings like this via environment variables can be a little easier to manage than command-line arguments.
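A sketch of what that could look like for the java service (the flags are copied from your docker run command; whether the application also accepts them via environment variables depends on your code):

java:
  build: ./docker/java_docker
  depends_on:
    - biggdb
  command: java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg

With that in place, docker-compose up starts both services on the same Compose-created network, so the biggdb hostname resolves without any extra docker run flags.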
Since you use docker-compose to bring up the two containers, they already share a common network. To take advantage of that, you should use docker-compose run rather than docker run. Also, pass the service name (java), not the container name (modelpolisher_java), to docker-compose run.
So just use the following command to run your jar:
docker-compose run java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg
I'm trying to publish a tmpnb server, but am stuck. Following the Quickstart at http://github.com/jupyter/tmpnb, I can run the server locally and access it at 172.17.0.1:8000.
However, I can't access the server remotely. I've tried adding -p 8000:8000 when I create the proxy container with the following command:
docker run -it -p 8000:8000 --net=host -d -e CONFIGPROXY_AUTH_TOKEN=$TOKEN --name=proxy jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999
I tried to access the server by typing the machine's IP address:8000 but my browser still returns "This site can't be reached."
The logs for proxy are:
docker logs --details 45d836f98450
08:33:20.981 - info: [ConfigProxy] Proxying http://*:8000 to http://127.0.0.1:9999
08:33:20.988 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
To verify that I can access other servers running on the same machine, I tried the following command: docker run -d -it --rm -p 8888:8888 jupyter/minimal-notebook and was able to access it remotely at the machine's IP address:8888.
What am I missing?
I'm working on an Ubuntu 16.04 machine with Docker 17.03.0-ce
Thanks
Create a file named docker-compose.yml with the following content, then launch the containers with docker-compose up. Since the images are pulled directly from Docker Hub, build-related errors are avoided.
httpproxy:
  image: jupyter/configurable-http-proxy
  environment:
    CONFIGPROXY_AUTH_TOKEN: 716238957362948752139417234
  container_name: tmpnb-proxy
  net: "host"
  command: --default-target http://127.0.0.1:9999
  ports:
    - 8000:8000

tmpnb_orchestrate:
  image: jupyter/tmpnb
  net: "host"
  container_name: tmpnb_orchestrate
  environment:
    CONFIGPROXY_AUTH_TOKEN: $TOKEN$
  volumes:
    - /var/run/docker.sock:/docker.sock
  command: python orchestrate.py --command='jupyter notebook --no-browser --port {port} --ip=0.0.0.0 --NotebookApp.base_url=/{base_path} --NotebookApp.port_retries=0 --NotebookApp.token="" --NotebookApp.disable_check_xsrf=True'
A solution is available from the github.com/jupyter/tmpnb README.md file. At the end of the file under the heading "Development" three commands are listed:
git clone https://github.com/jupyter/tmpnb.git
cd tmpnb
make dev
These commands clone the tmpnb repository, cd into it, and run the "dev" target from the makefile contained in the repository. On my machine, entering those commands created a notebook on a temporary server that I could access remotely. Beware that the "make dev" command deletes potentially conflicting Docker containers as part of the launch process.
Some insight into how this works can be gained by looking inside the makefile. When the configurable-http-proxy image is run, both ports 8000 and 8001 are published, and the tmpnb image is run with CONFIGPROXY_ENDPOINT=http://proxy:8001.
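As a rough, hypothetical approximation of what the makefile does (the exact flags live in the tmpnb repository's makefile, which is the authoritative source):

# approximate equivalent of `make dev`; flags simplified
docker run -d --name=proxy \
  -p 8000:8000 -p 8001:8001 \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN \
  jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999

# the real makefile also passes an --command=... option to orchestrate.py,
# as shown in the compose file earlier in this thread
docker run -d --name=tmpnb --link proxy:proxy \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN \
  -e CONFIGPROXY_ENDPOINT=http://proxy:8001 \
  -v /var/run/docker.sock:/docker.sock \
  jupyter/tmpnb \
  python orchestrate.py

Publishing port 8000 on the proxy container (rather than relying on --net=host) is what makes the server reachable from outside the machine.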
I'm unsure whether something obvious escapes me or it's just not possible, but I'm trying to compose an entire application stack with images from Docker Hub.
One of them is mysql, and its image supports adding custom configuration files through volumes and running .sql files from a mounted directory.
But I have these files on the machine where I'm running docker-compose, not on the Docker host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this?
Option A: Include the files inside your image. This is less than ideal, since you are mixing configuration files with your image (which should really only contain your binaries, not your config), but it satisfies the requirement to use only docker-compose to send the files.
This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:
version: '2'
services:
  my-db-app:
    build: db/.
    image: custom-db
And db/Dockerfile would look like:
FROM mysql:latest
COPY ./sql /sql
The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.
Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However it's the preferred way to include files from outside of the image into the container. You can populate the volume across the network by using the docker CLI and input redirection along with a command like tar to pack and unpack those files being sent over stdin:
tar -cC sql . | docker run --rm -i -v sql-files:/sql \
  busybox /bin/sh -c "tar -xC /sql"
Run that via a script and then have that same script bounce the db container to reload that config.
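A minimal sketch of such a script, assuming the compose service is named my-db-app and the container already mounts the sql-files volume at /sql:

#!/bin/sh
# Repack the local ./sql directory into the named volume over stdin,
# then restart the database service so it picks up the new files.
tar -cC sql . | docker run --rm -i -v sql-files:/sql busybox /bin/sh -c "tar -xC /sql"
docker-compose restart my-db-app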
Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:
# create a reusable volume
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
Option D: With swarm mode, you can include files as configs. This allows configuration files, which would normally need to be pushed to every node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single-node swarm mode cluster, so this option is available even if you only have a single node. This option does require that each of your sql files is added as a separate config. The docker-compose.yml would look like:
version: '3.4'
configs:
  sql_file_1:
    file: ./file_1.sql
services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 444
Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
If you cannot use volumes (e.g. you want a stateless docker-compose.yml and are deploying to a remote machine), you can have the config file written by the command.
Example for nginx config in official image:
version: "3.7"
services:
nginx:
image: nginx:alpine
ports:
- 80:80
environment:
NGINX_CONFIG: |
server {
server_name "~^www\.(.*)$$" ;
return 301 $$scheme://$$1$$request_uri ;
}
server {
server_name example.com
...
}
command:
/bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""
The environment variable could also be saved in an .env file, you can use Compose's extend feature, or you can load it from the shell environment (wherever you fetched it from):
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution
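A small sketch of the .env route, with a made-up variable name for illustration; docker-compose reads this file automatically from the project directory and substitutes ${SERVER_NAME} anywhere it appears unescaped in docker-compose.yml:

# .env, next to docker-compose.yml
SERVER_NAME=example.com

Note the escaping in the compose file above: a doubled $$ is passed through literally for nginx to see, while a single $ is substituted by Compose.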
To get the original command (CMD) of a container (for the entrypoint, inspect .Config.Entrypoint instead):
docker container inspect [container] | jq --raw-output .[0].Config.Cmd
To investigate which file to modify, this usually works:
docker exec --interactive --tty [container] sh
This is how I'm doing it with volumes:
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts
I think you have to do this in a compose file:
volumes:
  - src/file:dest/path
As a more recent update to this question: with a Docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which is in turn backed by AWS EFS for persistence).
version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
With Compose V2 you can simply do (as in the documentation):
docker compose cp src [service:]dest
Before V2 you can use the workaround with docker cp explained in the associated issue:
docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql
I'm setting up a salt-master to run in a Docker container. I'm using docker-compose to build and run the container. When I start the container I get:
salt_master | [WARNING ] Unable to bind socket, error: [Errno 99] Cannot assign requested address
salt_master | The ports are not available to bind
salt_master exited with code 4
Any idea why this port cannot be bound, and how I can fix it?
I'm setting the following in /etc/salt/master:
interface: 192.168.99.100
...since this is the IP of my docker-machine (I'm running Docker Toolbox on OS X):
docker-machine ip default
> 192.168.99.100
Contents of my Dockerfile:
FROM centos:7
RUN rpm --import https://repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub
RUN touch /etc/yum.repos.d/saltstack.repo
RUN echo "[saltstack-repo]" >> /etc/yum.repos.d/saltstack.repo
RUN echo "name=SaltStack repo for RHEL/CentOS \$releasever" >> /etc/yum.repos.d/saltstack.repo
RUN echo "baseurl=https://repo.saltstack.com/yum/redhat/\$releasever/\$basearch/latest" >> /etc/yum.repos.d/saltstack.repo
RUN echo "enabled=1" >> /etc/yum.repos.d/saltstack.repo
RUN echo "gpgcheck=1" >> /etc/yum.repos.d/saltstack.repo
RUN echo "gpgkey=https://repo.saltstack.com/yum/redhat/\$releasever/\$basearch/latest/SALTSTACK-GPG-KEY.pub" >> /etc/yum.repos.d/saltstack.repo
RUN yum clean expire-cache
RUN yum update -y
RUN yum install -y virt-what
RUN yum install -y salt-master salt-minion salt-ssh salt-syndic salt-cloud
EXPOSE 4505
EXPOSE 4506
Contents of docker-compose.yml
image:
  build: salt
  container_name: salt_master_image

master:
  image: saltmaster_image
  container_name: salt_master
  hostname: salt-master
  ports:
    - "4505:4505"
    - "4506:4506"
  volumes:
    - ./salt/assets/etc/salt:/etc/salt
    - ./salt/assets/var/cache/salt:/var/cache/salt
    - ./salt/assets/var/logs/salt:/var/logs/salt
    - ./salt/assets/srv/salt:/srv/salt
  command: /usr/bin/salt-master --log-file=/var/logs/salt/salt-master.log --log-file-level=debug
In order to build and run I execute:
docker-compose build
docker-compose up
If I leave out interface: 192.168.99.100 from /etc/salt/master, I don't get these errors. But then the log says Starting the Salt Publisher on tcp://0.0.0.0:4505 which is not what I want.
I don't see a need to have the data as a separate volume in your case. Can you please change your compose file as follows and give it a go:
image:
  build: salt
  container_name: salt_master_image

master:
  image: salt_master_image
  container_name: salt_master
  hostname: salt-master
  restart: always
  ports:
    - "4505:4505"
    - "4506:4506"
  volumes:
    - ./salt/assets/etc/salt:/etc/salt
    - ./salt/assets/var/cache/salt:/var/cache/salt
    - ./salt/assets/var/logs/salt:/var/logs/salt
    - ./salt/assets/srv/salt:/srv/salt
  command: /usr/bin/salt-master --log-file=/var/logs/salt/salt-master.log --log-file-level=debug
Edited with more details:
I would suggest you connect to the container interactively and see what is happening inside it. To start your Docker container interactively, use the following command:
docker run -it --name salt_master -p 4505:4505 -p 4506:4506 -v ./salt/assets/etc/salt:/etc/salt -v ./salt/assets/var/cache/salt:/var/cache/salt -v ./salt/assets/var/logs/salt:/var/logs/salt -v ./salt/assets/srv/salt:/srv/salt salt_master_image /bin/bash
With the above command you will get a bash shell inside the container; then you can execute the command manually:
/usr/bin/salt-master --log-file=/var/logs/salt/salt-master.log --log-file-level=debug
This will help you see what is going right and what is not, so you can take the necessary action.
The container's IP address is not 192.168.99.100. This is the IP address of the Docker host.
The IP address of the container can be obtained by inspecting the running container: docker inspect salt_master | grep IPAddress. This reveals that the IP address of the container can be e.g. 172.17.0.2.
When defining interface: 172.17.0.2 in /etc/salt/master, the service starts up without errors and the following can be found in the log:
Starting the Salt Publisher on tcp://172.17.0.2:4505
Since port 4505 was mapped to the Docker host, this service can now be reached through 192.168.99.100:4505, which means that salt-minions should be able to contact the salt-master via this address, by setting master: 192.168.99.100 in /etc/salt/minion on the minions.
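For completeness, a minimal sketch of the minion-side setting (assuming the Docker host really is reachable at 192.168.99.100):

# /etc/salt/minion on each minion
master: 192.168.99.100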
EDIT: Since the container's IP address is bound to change, it's not safe to assume it will always be 172.17.0.2. Per #Phani's suggestion it seemed better to use interface: 172.0.0.1 instead, but it turns out that doesn't work either; use interface: 0.0.0.0.