Executing docker with php and then using another container in the command - docker

I'm using one Docker container for PHP and another for SQL. I also have a Makefile to run commands in an instance of the PHP container. This is the entry I use for command execution, and I would like it to use the SQL container I have.
command:
	docker run --rm \
		--volume=${PWD}/code:/code \
		--volume=${PWD}/json:/json:rw \
		--volume=${PWD}/file:/file:rw \
		own_php:latest \
		time php /code/public/index_hex.php ${page}
If I try to execute this command from the Makefile, I get the following error:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed:
Name does not resolve
This is the docker-compose file I have in my project:
version: '3'
services:
  sql:
    image: mariadb
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./init-db:/docker-entrypoint-initdb.d
      - ./.mysql-data:/var/lib/mysql/
  crawler:
    build:
      context: './docker-base'
    depends_on:
      - sql
    volumes:
      - ./json:/json:rw
      - ./file:/file:rw
      - ./code:/code
But if I bring the containers up with docker-compose and run the command from inside the container, it works fine.
Is it possible for a docker run --rm container to use another container?

Docker Compose creates a network for each Compose file, and you have to manually attach your docker run container to that network to be able to reach other containers on it. The network will usually be named something like directoryname_default, based on the name of the directory holding the Compose file, and it will show up in the docker network ls listing.
You should run something like
docker run --rm --net directoryname_default ...
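Applied to the original Makefile command, that would look something like this (a sketch; directoryname_default is a placeholder, check docker network ls for the real name):
docker run --rm \
	--net directoryname_default \
	--volume=${PWD}/code:/code \
	--volume=${PWD}/json:/json:rw \
	--volume=${PWD}/file:/file:rw \
	own_php:latest \
	time php /code/public/index_hex.php ${page}
Inside that network, the PHP code should connect to the database at host sql on port 3306 (the service name and container port), not localhost:3307.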

Related

error while removing network: <network> id has active endpoints

I am trying to run docker-compose down in a Jenkins job:
"sudo docker-compose down --remove-orphans"
I have used the --remove-orphans flag with docker-compose down, but it still gives the error below:
Removing network abc
error while removing network: network id ************ has active endpoints
Failed command with status 1: sudo docker-compose down --remove-orphans
Below is my docker-compose file:
version: "3.9"
services:
abc:
image: <img>
container_name: 'abc'
hostname: abc
ports:
- "5****:5****"
- "1****:1***"
volumes:
- ~/.docker-conf/<volume>
networks:
- <network>
container-app-1:
image: <img2>
container_name: 'container-app-1'
hostname: 'container-app-1'
depends_on:
- abc
ports:
- "8085:8085"
env_file: ./.env
networks:
- <network>
networks:
<network>:
driver: bridge
name: <network>
To list your networks, run docker network ls. You should see your <network> there. Then get the containers still attached to that network with (replacing your network name at the end of the command):
docker network inspect \
--format '{{range $cid,$v := .Containers}}{{printf "%s: %s\n" $cid $v.Name}}{{end}}' \
"<network>"
For the various returned container ids, you can check why they haven't stopped (inspecting the logs, making sure they are part of the compose project, etc.), or manually stop them if they aren't needed anymore with (replacing <cid> with your container id):
docker container stop "<cid>"
Then you should be able to stop the compose project.
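If you just want to force everything off the network and you are sure the attached containers are safe to stop, the two steps can be combined into one pipeline (a sketch building on the inspect command above):
docker network inspect \
	--format '{{range $cid,$v := .Containers}}{{$cid}} {{end}}' \
	"<network>" | xargs -r docker container stop
After that, sudo docker-compose down --remove-orphans should be able to remove the network.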
There is also a situation where no containers are attached at all, but the error still occurs. In that case, systemctl restart docker helped me.
This can also happen when you have a db instance running in a separate container that uses the same network. In this case, the command
docker container stop "<cid>"
will stop the container. You can find the container id that is using the network with the command provided by @BMitch:
docker network inspect \
--format '{{range $cid,$v := .Containers}}{{printf "%s: %s\n" $cid $v.Name}}{{end}}' \
"<network>"
But in my case, when I did that, it also made that postgres instance "orphaned". Then I ran
docker-compose up -d --remove-orphans
After that, I booted up a new db instance (postgres) using the docker-compose file and mapped the data directory of the new instance to the data directory of the previous db instance:
volumes:
  - './.docker/postgres/:/docker-entrypoint-initdb.d/'
  - ~/backup/postgress:/var/lib/postgresql/data
My problem was solved only by restarting Docker and then deleting the container manually from Docker Desktop.

How to find volume files from host while inside docker container?

In a docker-compose.yml file I have defined the following service:
php:
  container_name: php
  build:
    context: ./container/php
    dockerfile: Dockerfile
  networks:
    - saasnet
  volumes:
    - ./services:/var/www/html
    - ./logs/php:/usr/local/etc/php-fpm.d/zz-log.conf
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=database"
It all builds fine, and another service (nginx) using the same volume mapping (./services:/var/www/html) finds php as expected, so it all works in the browser. So far, so good.
But now I want to go into the container because I want to run composer install from a certain directory inside the container. So I go into the container using:
docker run -it php bash
And I find myself in the container at /var/www/html, where I expect to be able to navigate as if I were in the ./services directory on my host machine, but ls at this point inside the container shows no files at all.
What am I missing or not understanding about how this works?
Your problem is that you are not specifying the volume on your run command; docker run is not aware of your docker-compose.yml. If you want to run it with all the options specified there, you need to either use docker-compose run, or pass all the options to docker run:
docker-compose run php bash
docker run -it -e DB_PORT=3306 -e DB_HOST=database -v "$(pwd)/services:/var/www/html" -v "$(pwd)/logs/php:/usr/local/etc/php-fpm.d/zz-log.conf" php bash
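If the service is already running from docker-compose up, you can also enter the existing container instead of starting a new one with docker-compose exec. A sketch (my-project is a placeholder for wherever your composer.json lives):
docker-compose exec php bash
# then, inside the container:
cd /var/www/html/my-project && composer install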

How do I specify the "model_config_file" variable to tensorflow-serving in docker-compose?

I will preface this by saying that I am inexperienced with docker and docker-compose. I am trying to convert my docker run ... command to a docker-compose.yml file, however, I cannot get the models.config file to be found.
I am able to correctly run a tensorflow-serving docker container using the following docker run ... command:
docker run -t --rm \
  -p 8501:8501 \
  -v "$(pwd)/models/:/models/" \
  tensorflow/serving \
  --model_config_file=/models/models.config \
  --model_config_file_poll_wait_seconds=60
This works, and the models.config file is found in the container at /models/models.config as expected.
The tensorflow-serving pages do not mention anything about docker-compose; however, I would much rather use it than a docker run ... command. My attempt at a docker-compose file is:
version: '3.3'
services:
  server:
    image: tensorflow/serving
    ports:
      - '8501:8501'
    volumes:
      - './models:/models'
    environment:
      - 'model_config_file=/models/models.config'
      - 'model_config_file_poll_wait_seconds=60'
Using this docker-compose file, the container runs; however, the environment variables seem to be completely ignored, so I'm not sure if this is how I should set them. The container looks in the default location for the models.config file, doesn't find it there, and so it does not load the configuration defined in models.config.
So, how can I define these values, or run a tensorflow-serving container, using docker-compose?
I appreciate any help.
Thanks
So I have come across a solution elsewhere that I haven't found in any of the threads/posts that talk about tensorflow/serving, so I will post my answer here.
Adding those options under a command section, as follows, works:
version: '3.3'
services:
  server:
    image: tensorflow/serving
    ports:
      - '8501:8501'
    volumes:
      - './models:/models'
    command:
      - '--model_config_file=/models/models.config'
      - '--model_config_file_poll_wait_seconds=60'
I don't know a lot about Docker, so I don't know if this is an obvious answer, but I didn't find a solution even after a lot of Googling.
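For what it's worth, one way to check that the config file was actually picked up is to query the model status endpoint after starting the stack (my_model is a placeholder for a model name defined in your models.config):
docker-compose up -d
curl http://localhost:8501/v1/models/my_model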
The problem is that the option --model_config_file is not an environment variable. It is an argument that you can pass to the command set as the entrypoint of the tensorflow/serving image.
If we look at the Dockerfile that was used to build the image, we can see:
# Create a script that runs the model server so we can use environment variables
# while also passing in arguments from the docker command line
RUN echo '#!/bin/bash \n\n\
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \
"$@"' > /usr/bin/tf_serving_entrypoint.sh \
&& chmod +x /usr/bin/tf_serving_entrypoint.sh
ENTRYPOINT ["/usr/bin/tf_serving_entrypoint.sh"]
This script accepts various arguments (you can see them by running docker run -t --rm tensorflow/serving --help).
As far as I know, the only environment variables used by TF Serving are MODEL_NAME and MODEL_BASE_PATH, as seen in the Dockerfile above; model_config_file and model_config_file_poll_wait_seconds are just arguments to the CLI serving executable.
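For comparison, the environment-variable route only covers the single-model case. A sketch (my_model is a placeholder; MODEL_BASE_PATH defaults to /models in the stock image):
docker run -t --rm -p 8501:8501 \
	-v "$(pwd)/models/my_model:/models/my_model" \
	-e MODEL_NAME=my_model \
	tensorflow/serving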
To achieve the result you are after, you could override the entrypoint of the docker image by setting your own entrypoint in the docker-compose.yml:
version: '3.3'
services:
  server:
    image: tensorflow/serving
    ports:
      - '8501:8501'
    volumes:
      - './models:/models'
    entrypoint:
      - /usr/bin/tf_serving_entrypoint.sh
      - --model_config_file=/models/models.config
      - --model_config_file_poll_wait_seconds=60
(Note: I did not test that docker-compose.yml file.)

Access port of one container from another container

I have a Postgres database in one container and a Java application in another container. The Postgres database is accessible on port 1310 on localhost, but the Java container is not able to access it.
I tried this command:
docker run modelpolisher_java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg
But it gives the error java.net.UnknownHostException: biggdb.
Here is my docker-compose.yml file:
version: '3'
services:
  biggdb:
    container_name: modelpolisher_biggdb
    build: ./docker/bigg_docker
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bigg
    ports:
      - "1310:5432"
  java:
    container_name: modelpolisher_java
    build: ./docker/java_docker
    stdin_open: true
    tty: true
Dockerfile for biggdb:
FROM postgres:11.4
RUN apt update &&\
apt install wget -y &&\
# Create directory '/bigg_database_dump/' and download bigg_database dump as 'database.dump'
wget -P /bigg_database_dump/ https://modelpolisher.s3.ap-south-1.amazonaws.com/bigg_database.dump &&\
rm -rf /var/lib/apt/lists/*
COPY ./scripts/restore_biggdb.sh /docker-entrypoint-initdb.d/restore_biggdb.sh
EXPOSE 5432
Can somebody please tell me what changes I need to make in the docker-compose.yml, or in the command, so that the java container can access the ports of the biggdb (postgres) container?
The two containers have to be on the same Docker-internal network to be able to talk to each other. Docker Compose automatically creates a network for you and attaches containers to that network. If you docker run a container alongside that, you need to find that network's name.
Run
docker network ls
This will list the Docker-internal networks you have. One of them will be named something like bigg_default, where the first part is (probably) your current directory name. Then, when you actually run the container, you can attach to that network with:
docker run --net bigg_default ...
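Putting that together with your original command (a sketch; bigg_default is a guess, confirm the name with docker network ls):
docker run --net bigg_default modelpolisher_java \
	java -jar ModelPolisher-noDB-1.7.jar \
	--host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg
Note that inside the Compose network you connect to the service name biggdb on the container port 5432, not the published host port 1310.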
Consider setting a command: in your docker-compose.yml file to pass these arguments when you docker-compose up. If the --host option is in your own code and doesn't come from a framework, passing settings like these via environment variables can be a little easier to manage than command-line arguments.
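That could look roughly like this in the docker-compose.yml (a sketch, not tested against this project):
java:
  container_name: modelpolisher_java
  build: ./docker/java_docker
  command: java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg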
Since you use docker-compose to bring up the two containers, they already share a common network. To take advantage of that, you should use docker-compose run and not docker run. Also, pass the service name (java), not the container name (modelpolisher_java), in the docker-compose run command.
So just use the following command to run your jar:
docker-compose run java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg

Executing docker run command from config file

I have several arguments in my docker run command, like:
docker run --rm -v /apps/hastebin/data:/app/data --name hastebin -d -p 7777:7777 -e STORAGE_TYPE=file rlister/hastebin
Can I put all these arguments in a default/config file so that I don't have to specify them explicitly in the run command?
You can try Docker Compose.
With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration.
In your case the docker-compose.yml file will look like:
version: '2'
services:
  hastebin:
    image: rlister/hastebin
    ports:
      - "7777:7777"
    volumes:
      - /apps/hastebin/data:/app/data
    environment:
      - STORAGE_TYPE=file
And you can start the service with docker-compose up.
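The original command also used -d and --name; the closest equivalents are detached mode and container_name: (a sketch):
# add under the hastebin service to mirror --name hastebin:
#   container_name: hastebin
docker-compose up -d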
