Use files on the host system with running containers using docker compose

I want to use an Excel file and some folders with my container.
I am using volumes, but I don't know what the problem is with my compose file.
seleniumhub:
  image: selenium/hub
  ports:
    - "4444:4444"
firefoxnode:
  image: selenium/node-firefox-debug
  ports:
    - "5901:5900"
  links:
    - "seleniumhub:hub"
  shm_size: '2gb'
  environment:
    - "NODE_MAX_SESSION=2"
    - "NODE_MAX_INSTANCES=2"
chromenode2:
  image: selenium/node-chrome-debug
  ports:
    - "5902:5900"
  links:
    - "seleniumhub:hub"
  shm_size: '2gb'
  environment:
    - "NODE_MAX_SESSION=2"
    - "NODE_MAX_INSTANCES=2"
test:
  image: raveena1/dilsel
  ports:
    - 4579
  links:
    - "seleniumhub:hub"
  container_name: mywebcontainer
  volumes:
    - /$$(pwd)/Newfolder/Config/framework-config.properties:/var/lib/docker/
I want to use the above property file in my container. How can I achieve this?

I don't think docker-compose can interpret bash commands inside a compose file. However, what you can do is use an environment variable. In your case, you might want to use $PWD.
[...]
volumes:
  - $PWD/Newfolder/Config/framework-config.properties:/var/lib/docker/
[...]
This will interpret the environment variable $PWD (which resolves to your current working directory) and mount that file to /var/lib/docker/.
Below is an example of using an environment variable in docker-compose:
docker-compose.yml:
test:
  image: debian:stretch-slim
  ports:
    - 4579
  container_name: mywebcontainer
  volumes:
    - $PWD/:/current_directory_of_host
  entrypoint: "ls -l /current_directory_of_host"
Start this container with docker-compose up. You should see a listing of the files in your current working directory.
You can also use a custom environment variable: CUSTOM_ENV=$(pwd) docker-compose up. This forwards CUSTOM_ENV to docker-compose, where it can be used in your docker-compose.yml.
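For example, a minimal sketch of how the forwarded variable could then be referenced inside the compose file (CUSTOM_ENV and the /config mount target are just placeholder names here, not part of the original setup):
test:
  image: debian:stretch-slim
  container_name: mywebcontainer
  volumes:
    # CUSTOM_ENV must be set in the calling shell, e.g. CUSTOM_ENV=$(pwd)
    - ${CUSTOM_ENV}/Newfolder/Config:/config
  entrypoint: "ls -l /config"
Started with CUSTOM_ENV=$(pwd) docker-compose up, the listing should show the host's Newfolder/Config contents.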

Related

docker compose: I want to use my local directory C:/html for /var/www/html

Hello, I want to publish the file "C:\html\index.php" from my local folder with docker-compose.yml.
On localhost I get the typical Apache "It works" page, but I do not get the content of my local folder. What am I doing wrong?
Here is my docker-compose file:
version: "3"
services:
# --- MySQL 5.7
#
mysql:
container_name: "dstack-mysql"
image: bitnami/mysql:5.7
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_USER=admin
- MYSQL_PASSWORD=root
ports:
- '3306:3306'
php:
container_name: "dstack-php"
image: bitnami/php-fpm:8.1
# --- Apache 2.4
#
apache:
container_name: "dstack-apache"
image: bitnami/apache:2.4
ports:
- '80:8080'
- '443:8443'
depends_on:
- php
volumes:
- C:/html:/var/www/html
phpmyadmin:
container_name: "dstack-phpmyadmin"
image: bitnami/phpmyadmin:latest
depends_on:
- mysql
ports:
- '81:8080'
- '8143:8443'
environment:
- DATABASE_HOST=host.docker.internal
volumes:
dstack-mysql:
driver: local
Update:
volumes:
  - ./html:/var/www/html
doesn't work either.
I want a web development Docker environment where I edit C:\html\index_hello.html on my computer and see my changes in the browser at localhost:8080. My expectation is that I can open http://localhost:8080/index_hello.html in the browser. Did I do something wrong? Do I have to edit other files, e.g. apache.conf?
I would suggest avoiding hardcoded directories and using relative ones instead.
If you place your docker-compose.yml into your C:/html folder and then change your volume to read:
volumes:
  - .:/var/www/html
and run the following:
cd C:/html
docker-compose up -d
you are telling docker-compose to use ., meaning the current directory.
If you put the docker-compose.yml in the C:/ directory instead, you can change the volume to:
volumes:
  - ./html:/var/www/html
then the docker compose command should remain the same.
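Putting that together with the compose file from the question, and assuming docker-compose.yml sits directly in C:/html, the apache service would look roughly like this sketch:
apache:
  container_name: "dstack-apache"
  image: bitnami/apache:2.4
  ports:
    - '80:8080'   # host port 80 maps to the container's port 8080
    - '443:8443'
  depends_on:
    - php
  volumes:
    - .:/var/www/html   # . resolves to C:/html because the compose file lives there
Note that with the '80:8080' mapping the page is served on host port 80, so you would open http://localhost/index_hello.html rather than port 8080.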

Issue in docker compose with volume undefined

I get the below error when I run docker-compose up; any pointers as to why I am getting it?
service "mysqldb-docker" refers to undefined volume mysqldb: invalid compose project
Also, is there a way to pass the $ENV value on the CLI to docker-compose up? Currently I have an ENV variable that specifies dev, uat or prod and that I use to pick the db name. Are there better alternatives to do this other than creating a .env file explicitly for it?
version: '3.8'
services:
  mysqldb-docker:
    image: '8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3309:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-$ENV
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3309/reco-tracker-$ENV"
    depends_on: [mysqldb-docker]
You must define volumes at the top level like this:
version: '3.8'
services:
  mysqldb-docker:
    # ...
    volumes:
      - mysqldb:/var/lib/mysql
volumes:
  mysqldb:
You can pass environment variables from your shell straight through to a service's containers with the environment key by not giving them a value:
https://docs.docker.com/compose/environment-variables/#pass-environment-variables-to-containers
web:
  environment:
    - ENV
but from my tests you can't write $ENV in the compose file and expect it to read your environment.
For that you need to call docker-compose this way:
docker-compose run -e ENV web python console.py
See this: https://docs.docker.com/compose/environment-variables/#set-environment-variables-with-docker-compose-run
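Applied to the compose file from the question, a minimal sketch of that pass-through would look like this (ENV is listed without a value, so the container receives whatever ENV is set to in the shell that runs docker-compose):
reco-tracker-docker:
  image: 'reco-tracker-docker:v1'
  environment:
    # no "=value" here: the value comes from the calling shell, e.g. ENV=dev
    - ENV
The application inside the container can then read ENV itself and build the database name from it.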

Docker image working on pull but not via the image directive in a yml file?

I have a Docker image on a GitLab registry.
When I (after logging in on the target machine) run
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running and reachable. Things like php artisan config:clear are working. When I enter the container, everything looks fine.
But I don't have any other services running. So I had the idea to create a yml file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created but then fails, exiting with code 0 and no further message.
If I add commands in my yml like php artisan config:clear, the error gets even less clear for me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then do docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I left a volume directive in place which overwrites my entire application with an empty directory.
You can just leave that out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the network of the containers by listing the networks with docker network ls,
then, when the list is shown, inspecting the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
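If you want to rule networking out entirely, one option is to declare an explicit network and attach both services to it. A minimal sketch of that idea (the network name appnet is just a placeholder):
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    networks:
      - appnet
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    container_name: my-app
    depends_on:
      - mysql
    networks:
      - appnet
networks:
  appnet:
Inside that network the app reaches the database at hostname mysql (or my-mysql) on port 3306, so a DB host setting in .env.docker that points at localhost would need to be changed accordingly.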

How to see another container from one container in Docker?

I have a compose file as follows:
redis:
  image: redis
  ports:
    - "6379:6379"
php:
  build: .
  image: php:fpm
  volumes:
    - ./code:/var/www/html
  links:
    - redis:redis
  networks:
    - code-network
I'm entering the php container with the following command:
docker exec -it php_id /bin/bash
but I can't run the redis-cli command in this container. What do I need to do to be able to run it?
I added the links parameter to the compose file, but it didn't help.
You are putting the php-fpm container in a network of its own. Here is a fixed compose file:
version: "3"
services:
redis:
image: redis
ports:
- "6379:6379"
php:
build: .
image: php:fpm
volumes:
- ./code:/var/www/html
networks:
- code-network
- default
networks:
code-network:
See this for more info on compose networking.
About the redis-cli issue: you'd need to add the appropriate repository in the php-fpm container and then install it. As you are using the php:fpm image, you probably want to use Redis with some PHP application, so you don't need Debian's redis-cli package, but rather the PHP extension.
See this post for more info.
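If you go the PHP-extension route, a minimal sketch would be to build your own image on top of php:fpm; the Dockerfile referenced by build: . is assumed to contain something like RUN pecl install redis && docker-php-ext-enable redis (the phpredis extension):
php:
  build: .                   # Dockerfile based on php:fpm that installs phpredis
  image: my-php-fpm-redis    # arbitrary local tag, not an official image
  volumes:
    - ./code:/var/www/html
  networks:
    - code-network
    - default
From PHP the Redis server is then reachable at host redis on port 6379 (the compose service name), not at localhost.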

Docker Compose file can't get .env variables

I am using docker-compose to run a Traefik container. The domain of this container should be set by an environment file, but every time I start the service it says:
WARNING: The DOMAIN variable is not set. Defaulting to a blank string
My compose-file setup:
version: '3.5'
networks:
  frontend:
    name: frontend
  backend:
    name: backend
services:
  Traefik:
    image: traefik:latest
    command: --api --docker --acme.email="test#test.de"
    restart: always
    container_name: Traefik
    networks:
      - backend
      - frontend
    env_file: ./env.env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.toml:/traefik.toml
      - ./traefik/acme.json:/acme.json
    ports:
      - "80:80"
      - "443:443"
    labels:
      - "traefik.docker.network=frontend"
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:traefik.${DOMAIN}"
      - "traefik.port=8080"
      - "traefik.protocol=http"
My env.env file setup:
DOMAIN=fiture.de
Thanks for your help!
env_file: ./env.env
The file env.env isn't loaded to parse the compose file; it is loaded to add environment variables within the container being run. At the point docker processes the above instruction, the yaml file has already been loaded and the variables have already been expanded.
If you are using docker-compose to deploy containers on a single node, you can rename the file to .env and docker-compose will load variables from that file before parsing the compose file.
If you are deploying with docker stack deploy, then you need to import the environment variables into your shell yourself. An example of doing that in bash looks like:
set -a && . ./env.env && set +a && docker stack deploy ...
