I have a Docker container that runs a simple web application. That container is linked to two other containers by Docker Compose with the following docker-compose.yml file:
version: '2'
services:
  mongo_service:
    image: mongo
    command: mongod
    ports:
      - '27017:27017'
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '8000:8000'
Once the web container starts, I want to write some content to /bitnami/tomcat/data/ on the tomcat_service container. I tried just writing to that disk location from within the web container but am getting an exception:
No such file or directory: '/bitnami/tomcat/data/'
Does anyone know what I can do to be able to write to the tomcat_service container from the web container? I'd be very grateful for any advice others can offer on this question!
You have to use Docker volumes if you want one service to write data that another service can read. If web writes to /someFolderName, the same files will exist in tomcat_service under /bitnami/tomcat/data/.
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    volumes:
      - my_shared_data:/bitnami/tomcat/data/
  web:
    volumes:
      - my_shared_data:/someFolderName
volumes:
  my_shared_data:
Data in volumes persists and will still be available the next time you re-create the containers. You should always use Docker volumes for data written inside containers that needs to persist.
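For example, once both containers are running (and assuming the web image includes a shell), a quick sketch of how to check that the shared volume works:

# write a file from the web container
docker-compose exec web sh -c 'echo hello > /someFolderName/test.txt'

# read it back from the tomcat_service container
docker-compose exec tomcat_service cat /bitnami/tomcat/data/test.txt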
Related
I'm using Docker Swarm. When I attempt to deploy my application with this command
docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml chatappapi
it shows the following error: services.chat-app-api Additional property pull_policy is not allowed
Why does this happen, and how do I solve it?
docker-compose.yml
version: "3.9"
services:
nginx:
image: nginx:stable-alpine
ports:
- "5000:80"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
chat-app-api:
build: .
image: username/myapp
pull_policy: always
volumes:
- ./:/app
- /app/node_modules
environment:
- PORT= 5000
- MAIL_USERNAME=${MAIL_USERNAME}
- MAIL_PASSWORD=${MAIL_PASSWORD}
- CLIENT_ID=${CLIENT_ID}
- CLIENT_SECRET=${CLIENT_SECRET}
- REDIRECT_URI=${REDIRECT_URI}
- REFRESH_TOKEN=${REFRESH_TOKEN}
depends_on:
- mongo-db
mongo-db:
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: 'username'
MONGO_INITDB_ROOT_PASSWORD: 'password'
ports:
- "27017:27017"
volumes:
- mongo-db:/data/db
volumes:
mongo-db:
docker-compose.prod.yml
version: "3.9"
services:
nginx:
ports:
- "80:80"
chat-app-api:
deploy:
mode: replicated
replicas: 8
restart_policy:
condition: any
update_config:
parallelism: 2
delay: 15s
build:
context: .
args:
NODE_ENV: production
environment:
- NODE_ENV=production
- MONGO_USER=${MONGO_USER}
- MONGO_PASSWORD=${MONGO_PASSWORD}
- MONGO_IP=${MONGO_IP}
command: node index.js
mongo-db:
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
Information
docker-compose version 1.29.2
Docker version 20.10.8
Ubuntu 20.04.2 LTS
Thanks in advance.
Your problem line is in docker-compose.yml:
chat-app-api:
  build: .
  image: username/myapp
  pull_policy: always # <== this is the bad line, delete it
The compose file reference for the version 3 format (which docker stack deploy validates against) doesn't include pull_policy, because:
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.
I think pull_policy may have been supported by compose at some point. It's worth keeping the latest compose file documentation open to refer to while you're developing, since things can and do change fairly frequently with compose.
If you want to ensure that the most recent version of an image is pulled onto all servers in a swarm, run docker compose -f ./docker-compose.yml pull on each server in turn (docker stack doesn't yet have functionality to do this across an entire swarm).
As an aside: I wouldn't combine two .yml files with a single docker stack command without a very good reason to do so.
You are mixing docker-compose and docker swarm ideas in the same files.
It is probably worth breaking your project up into 3 files:
docker-compose.yml
This would contain just the basic service definitions common to both compose and swarm.
docker-compose.override.yml
Conveniently, both docker-compose and docker compose read this file automatically. It should contain any "ports:", "depends_on:", and "build:" directives, and any convenience volumes used for development.
stack.production.yml
The override file to be used in stack deployments should contain (a) everything understood by swarm but not by compose, and (b) everything required for production.
Here you would use configs: or even secrets: rather than volume mappings to local folders to inject content into containers. Rather than relying on ports: directives, you would install an ingress router such as Traefik on the swarm, and so on.
With this arrangement, docker compose can be used to develop and build your compose stack locally, and docker stack deploy won't have to be exposed to compose syntax it doesn't understand.
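As a rough sketch of that layout (contents are illustrative, not taken from your project; only the chat-app-api service is shown):

docker-compose.yml (shared definitions):

version: "3.9"
services:
  chat-app-api:
    image: username/myapp

docker-compose.override.yml (development only; picked up automatically by docker compose):

version: "3.9"
services:
  chat-app-api:
    build: .
    volumes:
      - ./:/app

stack.production.yml (swarm only; used with docker stack deploy):

version: "3.9"
services:
  chat-app-api:
    deploy:
      replicas: 8

Locally you would run docker compose up --build (which merges the first two files automatically); for the swarm you would run docker stack deploy -c docker-compose.yml -c stack.production.yml chatappapi.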
pull_policy is supported in the latest versions of docker-compose (it is part of the Compose Specification).
To upgrade your docker-compose refer to:
https://docs.docker.com/compose/install/
The spec for more info:
https://github.com/compose-spec/compose-spec/blob/master/spec.md#pull_policy
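For reference, the spec shows pull_policy as a per-service property, roughly like this (note that docker stack deploy may still reject it even if a newer docker-compose accepts it):

services:
  chat-app-api:
    image: username/myapp
    pull_policy: always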
I am trying to learn Kong using docker-compose. I am able to run Kong + Konga and create services, but whenever I do docker-compose down and then up again, I lose all my data:
kong:
  container_name: kong
  image: kong:2.1.4-alpine
  restart: unless-stopped
  networks:
    kong-net:
      ipv4_address: 172.1.1.40
  volumes:
    - kong_data:/usr/local/kong/declarative
  environment:
    KONG_DATABASE: postgres
    KONG_PG_HOST: kong-database
    KONG_PG_USER: kong
    KONG_PG_PASSWORD: password
    KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
    KONG_DB_UPDATE_FREQUENCY: 1m
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    KONG_ADMIN_ERROR_LOG: /dev/stderr
  depends_on:
    - kong-migration
  ports:
    - "8001:8001"
    - "8444:8444"
    - "8000:8000"
    - "8443:8443"
It looks like the volume mapping is not working. Please help.
If you want to keep data after your Kong docker-compose stack is taken down, it is better to run Kong in database mode.
You then create a persistent volume for the database, and it will keep your changes.
According to the Kong manual, two types of database are supported: PostgreSQL and Cassandra.
PostgreSQL is my choice for a small project, since I'm not planning for the kind of huge horizontal scale Cassandra is designed for.
As you will find in the manual, starting your project with Docker and a database is very simple.
But remember to add a volume to your database service, because the sample in the manual does not define one.
For PostgreSQL you can add -v /custom/mount:/var/lib/postgresql/data to the docker run command,
or use a named volume in compose:
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    restart: unless-stopped
    image: postgres:latest
    environment:
      - POSTGRES_USER=your_db_user
      - POSTGRES_DB=kong
      - POSTGRES_PASSWORD=your_db_password
    volumes:
      - postgres-data:/var/lib/postgresql/data
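A sketch of how the Kong service could then point at that database by service name (names and credentials are illustrative and would need to match your own setup):

kong:
  image: kong:2.1.4-alpine
  environment:
    KONG_DATABASE: postgres
    KONG_PG_HOST: postgres      # the database service name from above
    KONG_PG_USER: your_db_user
    KONG_PG_PASSWORD: your_db_password
  depends_on:
    - postgres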
You should use a Docker volume to persist data.
As the reference says:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers
The first step is to create the volume that your containers will use to share and persist data:
docker volume create new-volume
The second step is to use that volume in docker-compose (in your case).
A single docker compose service with a volume looks like this:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
On the first invocation of docker-compose up the volume will be created. The same volume will be reused on following invocations.
A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
external: true
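Usage sketch (the volume name myapp is just an example): the data survives docker-compose down, because named volumes are only removed if you pass -v explicitly.

docker volume create myapp
docker-compose up -d
docker-compose down      # the named volume is kept
docker-compose up -d     # the same data is still there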
I have the following compose file, in which the Jenkins container writes generated HTML data to the host drive and the Nginx container reads that data from the host drive. I'm using Ubuntu Server 18.04 on AWS.
The problem is that I can read the contents of jenkins/workspace/allure-report only once. After the HTML data is updated, it becomes inaccessible to Nginx, which then returns a 403 status code.
I have tried all the solutions I could find, but nothing works. The only (ugly) workaround is to restart the Nginx container after every update of the HTML data. I don't like this approach and am looking for a built-in Docker feature to resolve it.
What didn't help: sharing a volume directly between the containers without going through the host drive, using the rslave option, and using a separate Docker volume as a buffer between the two containers... I believe it should be much easier!
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: "jenkins/jenkins"
    ports:
      - "8088:8080"
      - "50000:50000"
    env_file:
      - variables.env
    volumes:
      - ./jenkins:/var/jenkins_home
  selenoid:
    container_name: selenoid
    network_mode: bridge
    image: "aerokube/selenoid"
    # default directory for browsers.json is /etc/selenoid/
    command: -listen :4444 -conf /etc/selenoid/browsers.json -video-output-dir /opt/selenoid/video/ -timeout 3m
    ports:
      - "4444:4444"
    env_file:
      - variables.env
    volumes:
      - $PWD:/etc/selenoid/ # assumed current dir contains browsers.json
      - /var/run/docker.sock:/var/run/docker.sock
  selenoid-ui:
    container_name: selenoid-ui
    network_mode: bridge
    image: "aerokube/selenoid-ui"
    links:
      - selenoid
    ports:
      - "8080:8080"
    env_file:
      - variables.env
    command: ["--selenoid-uri", "http://selenoid:4444"]
  nginx:
    container_name: nginx
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./jenkins/workspace/allure-report:/usr/share/nginx/html:ro,rslave
Found the solution: the easiest way to get access to the dynamic data is to use volumes_from in the container you want to read from.
When I configured my compose file like that, I faced another issue: the 403 status was gone, but the data was static. That was my fault, though; I wasn't using the "cp -r" command correctly, so my data had been copied only once.
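For reference, a minimal sketch of what that looks like with the version '2' format (only the relevant parts are shown; note that volumes_from mounts the Jenkins paths at the same locations inside the Nginx container, so Nginx would have to serve from /var/jenkins_home/... in this case):

jenkins:
  image: "jenkins/jenkins"
  volumes:
    - ./jenkins:/var/jenkins_home
nginx:
  image: "nginx"
  volumes_from:
    - jenkins:ro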
I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here are the docker-compose.yml files.
app1:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app1_db:/var/lib/mssql/data
volumes:
app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that each app is spinning up its own db instance, when in reality I just want one that is used by all my apps?
The ports section in a docker-compose file binds a container port to a host port, which is what causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. This way, docker-compose can bring both up, and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges; see the sketch below.
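As a sketch of what that change looks like in app2's compose file (only the affected parts are shown; everything else stays as you have it):

web:
  ports:
    - "3001:3000"   # remapped so it doesn't clash with app1's web on host port 3000
mssql:
  image: 'microsoft/mssql-server-linux'
  # no ports: section, so 1433 is not published on the host
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=P455w0rd!
  volumes:
    - app2_db:/var/lib/mssql/data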
How docker-compose up works:
Suppose your app is in a directory called myapp, and your docker-compose.yml defines two services, web and db.
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
Your current configuration creates two volumes, two database containers and two apps. If you make them run on the same network and run the database as a single container, it will work.
I don't think you actually want two database containers and two volumes.
Approach 1:
Use a single docker-compose.yml:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
Approach 2:
To isolate things further while still running them as separate compose files, create three compose files that share a network.
docker-compose.yml for the database, with a network:
version: "3"
services:
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
networks:
- test_network
volumes:
app_docker_db
networks:
test_network:
docker-compose.yml for app1:
Remove the database container and add the lines below to your compose file.
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app's docker-compose file.
There are many other options for structuring docker-compose files; see "Configure the default network" and "Use a pre-existing network" in the Compose networking documentation, and the sketch below.
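For the pre-existing network option, the network is created once on the host before either project is started (the name is just an example):

docker network create my-pre-existing-network
# then, from each project's directory:
docker-compose up -d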
You're exposing the same port (1433) twice to the host machine (this is what "ports: ..." does). This is not possible, as both containers would try to bind the same port on your host; that's what the error message says.
I think the most common approach in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this, your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker compose files is that both applications are published on the same host port. This is not possible for the same reason. I would suggest changing one of them to "3001:3000", so you can access that application on host port 3001.
I'm using docker compose for a web application that I'm creating with asp.net core, postgres and redis. I have everything set up in compose to connect to postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with redis, I get an exception. After doing some research, it turns out this exception is a known issue and the workaround is to use the IP address of the machine instead of a host name. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file
version: "3"
services:
postgres:
image: 'postgres:9.5'
env_file:
- '.env'
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- '5433:5432'
redis:
image: 'redis:3.0-alpine'
command: redis-server --requirepass devpassword
volumes:
- 'redis:/var/lib/redis/data'
ports:
- '6378:6379'
web:
build: .
env_file:
- '.env'
ports:
- "8000:80"
volumes:
- './src/edb/Controllers:/app/Controllers'
- './src/edb/Views:/app/Views'
- './src/edb/wwwroot:/app/wwwroot'
- './src/edb/Lib:/app/Lib'
volumes:
postgres:
redis:
OK, I found the answer. It was something I had been trying, but I didn't realize the address may change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the ID of your container and run docker inspect {container_id}; the output includes the IP address you can use to reach it from the other running containers.
The reason I was confused is that the address may change when the containers are started, so I had to guess what the IP address was going to be before starting the containers. Luckily, after 5 tries I guessed correctly.
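If it helps, docker inspect can also print just the IP address with a format string (standard Docker CLI; {container_id} is a placeholder as above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' {container_id}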