Docker-Compose binding volumes, file sharing issue - docker

I am new to Docker and have a problem that I hope you can help with.
I have defined multiple services (HTTPSERV, IMED, etc.) in my docker-compose file; each service contains Python code and a Dockerfile to run it. The Dockerfile also copies the required files into a host path defined in docker-compose. HTTPSERV and IMED must share a text file and expose it to an external user sending a GET request to HTTPSERV.
In docker-compose I have defined a local host directory and bound it to a named volume. The services and Dockerfiles are meant to share each service's files and run.
As soon as I run docker-compose, the first service copies its files into the path directory ("src") and changes the permissions of the "src" folder, preventing the other services from copying their files. The subsequent services then fail to find the appropriate files and the whole orchestration fails.
version: "3.9"
networks:
default:
ipam:
config:
- subnet: 172.28.0.2/20
services:
httpserv:
user: root
container_name: httpserver
build: ./HTTPSERV
volumes:
- myapp:/httpApp:rw
networks:
default:
ipv4_address: 172.28.0.5
ports:
- "8080:3000"
rabitQ:
user: root
container_name: rabitQ
image: rabbitmq:3.8-management
networks:
default:
ipv4_address: 172.28.0.4
ports:
- "9000:15672"
imed:
user: root
container_name: IMED-Serv
build: ./IMED
volumes:
- myapp:/imed:rw
networks:
- default
# restart: on-failure
orig:
user: root
container_name: ORIG-Serv
build: ./ORIG
volumes:
- myapp:/orig:rw
networks:
- default
# restart: on-failure
obse:
container_name: OBSE-Serv
build: ./OBSE
volumes:
- myapp:/obse:rw
networks:
- default
# restart: on-failure
depends_on:
- "httpserv"
links:
- httpserv
volumes:
myapp:
driver: local
driver_opts:
type: none
o: bind
device: /home/dockerfiles/hj/a3/src
The content of the Dockerfile is similar for most of the services and is as follows:
FROM python:3.8-slim-buster
WORKDIR /imed
COPY . .
RUN pip install --no-cache-dir -r imed-requirements.txt
RUN chmod 777 ./imed.sh
CMD ["./imed.sh"]
The code runs as root, and the UserID and GroupID are set.
I also tried anonymous volumes, but the same issue happens.

In Docker it's often better to avoid "sharing files" as a first-class concept. Imagine running this in a clustered system like Kubernetes; if you have three copies of each service, and they're running in a cloud of a hundred systems, "sharing files" suddenly becomes difficult.
Given the infrastructure you've shown here, you have a couple of straightforward options:
Add an HTTP endpoint to update the file. You'd need to protect this endpoint in some way, since you don't want external callers accessing it; maybe filter it in an Nginx reverse proxy, or use a second HTTP server in the same process. The service that has the updated content would then call something like
r = requests.post('http://webserv/internal/file.txt', data=contents)
r.raise_for_status()
Add an HTTP endpoint to the service that owns the file. When the Web server service starts up, and periodically after that, it makes a request
r = requests.get('http://imed/file.txt')
You already have RabbitMQ in this stack; add a RabbitMQ consumer in a separate thread, and post the updated file content to a RabbitMQ topic.
There's some potential trouble with the "push" models if the Web server service restarts, since it won't be functional until the other service sends it the current version of the file.
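For the RabbitMQ option, here is a minimal sketch assuming the pika client library; the queue name file-updates is made up for illustration, and the broker hostname matches the rabitQ service above.

import threading
import pika

LATEST = {"content": b""}   # in-process copy of the shared file

def consume_file_updates():
    # Runs inside the web-server service, in a background thread,
    # and keeps LATEST up to date with whatever was last published.
    conn = pika.BlockingConnection(pika.ConnectionParameters(host="rabitQ"))
    channel = conn.channel()
    channel.queue_declare(queue="file-updates")

    def on_message(ch, method, properties, body):
        LATEST["content"] = body

    channel.basic_consume(queue="file-updates",
                          on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()

threading.Thread(target=consume_file_updates, daemon=True).start()

def publish_file(contents: bytes):
    # Runs in the service that owns the file, whenever the content changes.
    conn = pika.BlockingConnection(pika.ConnectionParameters(host="rabitQ"))
    channel = conn.channel()
    channel.queue_declare(queue="file-updates")
    channel.basic_publish(exchange="", routing_key="file-updates", body=contents)
    conn.close()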
If you really do want to do this with a shared filesystem, I'd recommend creating a dedicated directory and dedicated volume to do this. Do not mount a volume over your entire application code. (It will prevent docker build from updating the application, and in the setup you've shown, you're mounting the same application-code volume over every service, so you're running four copies of the same service instead of four different services.)
Adding in the shared volume, but removing a number of other unnecessary options, I'd rewrite the docker-compose.yml file as:
version: '3.9'
services:
  httpserv:
    build: ./HTTPSERV
    ports:
      - "8080:3000"
    volumes:
      - shared:/shared
  rabitQ:
    image: rabbitmq:3.8-management
    hostname: rabitQ # (RabbitMQ specifically needs this setting)
    ports:
      - "9000:15672"
  imed:
    build: ./IMED
    volumes:
      - shared:/shared
  orig:
    build: ./ORIG
  obse:
    build: ./OBSE
    depends_on:
      - httpserv
volumes:
  shared:
I'd probably have the producing service unconditionally copy the file into the volume on startup, and have the consuming service block on it being present. (Don't depend on Docker to initialize the volume; it doesn't work on Docker bind mounts, or on Kubernetes, or if the underlying image has changed.) So long as the files are world-readable the two services' user IDs don't need to match.
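A minimal sketch of that startup handshake, assuming the shared volume is mounted at /shared in both services and that the producer's copy of the file lives at /imed/file.txt inside its image (both paths are illustrative):

import shutil
import time
from pathlib import Path

SHARED_FILE = Path("/shared/file.txt")

def publish_on_startup():
    # Producing service: unconditionally (re)copy the file into the volume,
    # so the volume's contents never go stale after an image rebuild.
    shutil.copy("/imed/file.txt", SHARED_FILE)

def wait_for_file(timeout=60.0, poll=1.0):
    # Consuming service: block until the producer has populated the volume.
    deadline = time.monotonic() + timeout
    while not SHARED_FILE.exists():
        if time.monotonic() >= deadline:
            raise TimeoutError(f"{SHARED_FILE} never appeared")
        time.sleep(poll)
    return SHARED_FILE.read_text()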

Related

Accessing apps living in different Docker containers being parts of different docker-compose services

I have two django apps. Both are run as part of two different docker-compose files.
App 1 docker-compose.yml file:
services:
  django:
    build: .
    command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
    ports:
      - "8013:8000"
    volumes:
      - ./:/app
    depends_on:
      - db
App 2 docker-compose.yml file
services:
  django:
    build: .
    container_name: "web"
    command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
    ports:
      - "8003:8000"
    volumes:
      - ./:/app
    depends_on:
      - db
So basically, my goal is to call App 2's Django endpoint from App 1. To do this, in App 1's code I use the URL http://web:8003/app2_endpoint
Also, I have ALLOWED_HOSTS=['*'] in both projects.
Yet I end up with a "Max retries exceeded" error.
I also came across this question, but I failed to figure out the solution for my case.
If you don't specify a custom Docker network in your compose files, each compose file creates a separate network for itself. So containers from separate compose projects can't see each other.
The solution is to use the same Docker network in both compose files. Something like:
services:
  ...
networks:
  default:
    external: true
    name: YOUR_DOCKER_NETWORK
And add it to the other compose file too.
This tells Compose to use an external Docker network named YOUR_DOCKER_NETWORK as the default network.
Note that you have to create this network yourself, because it's external:
docker network create YOUR_DOCKER_NETWORK
You can also use custom networks
Docs: https://docs.docker.com/compose/networking/
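With both projects attached to the shared network, a small sketch of the call from App 1 (note that cross-container requests target the container port from the runserver command, 8000 here, not the host-published 8003; web is App 2's container_name):

import requests

# "web" resolves via Docker's DNS on the shared network; use the container
# port (8000), not the port published to the host (8003).
r = requests.get("http://web:8000/app2_endpoint", timeout=5)
r.raise_for_status()
print(r.text)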

Access docker-compose volume from host machine (copy files "into" container and access container files from outside)

I'm trying to run Jira from Docker Compose (as of now this wouldn't strictly be necessary, but later on I will add additional services). So far almost everything works.
version: '3'
services:
  jira-local-standalone:
    container_name: jira-local-standalone
    image: atlassian/jira-software:8.5.0
    volumes:
      - ./work/jira-home:/var/atlassian/application-data/jira
    ports:
      - 8080:8080
    environment:
      TZ: Europe/Berlin
      JVM_RESERVED_CODE_CACHE_SIZE: 1024m
      JVM_MINIMUM_MEMORY: 512m
      JVM_MAXIMUM_MEMORY: 2048m
    networks:
      - jiranet
networks:
  jiranet:
    driver: bridge
The only thing that does not work is this: I want the Jira home to be accessible via the file manager on my host machine, to access logs etc. and to "upload" import files. So I want to copy files to the specified location on my host and have them available to Jira inside my container. More or less like some sort of shared folder.
I expected the directory ./work/jira-home to be populated during startup (creating the contents of /var/atlassian/application-data/jira inside ./work/jira-home and writing logfiles and such). But the ./work/jira-home folder remains empty.
How can I expose the jira-home from inside the container to my host machine?
Thanks and best regards, Sebastian

Docker for local development with multiple environment

I'm looking to use Docker to emulate the minimum of our current cloud environment. We have about 10 services (each with its own MySQL 8 database, Redis, php-fpm and nginx). Currently each has a docker-compose.yml per repository, but they can't talk to each other; if I want to test a feature where one service needs to talk to another, I'm out of luck.
My first approach was to create a Dockerfile per service (and run them all together using a new docker-compose.yml), using Debian, but I didn't get far: I was able to install nginx (plus php-fpm and dependencies), but when I got to the databases it got weird, and I had a feeling this isn't the right way of doing it.
Is there a way to have one docker-compose.yml "include" each of the services' docker-compose.yml files?
Is there a better approach to this? Or should I just stick with the Dockerfiles and run them all on the same network using docker-compose?
TL;DR;
You can configure docker-compose using external networks to communicate with services from other projects or (depending on your project) use the -f command-line option / COMPOSE_FILE environment variable to specify the path of the compose file(s) and bring all of the services up inside the same network.
Using external networks
Given the below tree with project a and b:
.
├── a
│   └── docker-compose.yml
└── b
    └── docker-compose.yml
Project a's docker-compose sets a name for the default network:
version: '3.7'
services:
  nginx:
    image: 'nginx'
    container_name: 'nginx_a'
    expose:
      - '80'
networks:
  default:
    name: 'net_a'
And project b is configured to use its own named network net_b plus the pre-existing external network net_a:
version: '3.7'
services:
  nginx:
    image: 'nginx'
    container_name: 'nginx_b'
    expose:
      - '80'
    networks:
      - 'net_a'
      - 'default'
networks:
  default:
    name: 'net_b'
  net_a:
    external: true
... exec'ing into the nginx_b container, we can reach the nginx_a container.
Note: this is a minimalist example. The external network must exist before trying to bring up an environment that is configured with the pre-existing network. Rather than modifying the existing projects' docker-compose.yml files, I'd suggest using overrides.
This configuration gives the nginx_b container a foot in both networks.
Using the -f command-line option
Using the -f command-line option acts as an override. It will not work with the above compose files as both specify an nginx service (docker-compose will override / merge the first nginx service with the second).
Using the modified docker-compose.yml for project a:
version: '3.7'
services:
  nginx_a:
    container_name: 'nginx_a'
    image: 'nginx'
    expose:
      - '80'
... and for project b:
version: '3.7'
services:
  nginx_b:
    image: 'nginx'
    container_name: 'nginx_b'
    expose:
      - '80'
... we can bring both of the environments up inside the same network: docker-compose -f a/docker-compose.yml -f b/docker-compose.yml up -d (or set COMPOSE_FILE=a/docker-compose.yml:b/docker-compose.yml).

Docker: Write to disk of linked container

I have a Docker container that runs a simple web application. That container is linked to two other containers by Docker Compose with the following docker-compose.yml file:
version: '2'
services:
  mongo_service:
    image: mongo
    command: mongod
    ports:
      - '27017:27017'
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '8000:8000'
Once the web container starts, I want to write some content to /bitnami/tomcat/data/ on the tomcat_service container. I tried just writing to that disk location from within the web container but am getting an exception:
No such file or directory: '/bitnami/tomcat/data/'
Does anyone know what I can do to be able to write to the tomcat_service container from the web container? I'd be very grateful for any advice others can offer on this question!
You have to use Docker volumes if you want one service to write files that another service can read. If web writes to someFolderName, the same file will exist in tomcat_service.
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    volumes:
      - my_shared_data:/bitnami/tomcat/data/
  web:
    volumes:
      - my_shared_data:/someFolderName
volumes:
  my_shared_data:
Data in volumes persists and will still be available the next time you re-create the containers. You should always use Docker volumes when writing data inside containers.
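For example, a minimal sketch from inside the web container under the compose file above (the file name report.txt is made up): anything written below /someFolderName lands in the shared volume and therefore shows up under /bitnami/tomcat/data/ in tomcat_service.

from pathlib import Path

# /someFolderName is the shared volume's mount point inside the web container.
target = Path("/someFolderName/report.txt")
target.parent.mkdir(parents=True, exist_ok=True)  # the mount point exists; subdirs may not
target.write_text("content for tomcat_service to pick up\n")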

docker - multiple databases on local

I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here is the docker-compose.yml files
app1:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app1_db:/var/lib/mssql/data
volumes:
app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that each app spins up its own db instance, when in reality I just want one that is used by all my apps?
The ports section in a docker-compose file binds a container port to a host port, which causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. This way, docker-compose can bring both stacks up, and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges.
How docker-compose up works:
Suppose your app is in a directory called myapp, with a docker-compose.yml defining web and db services.
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
Your current configuration creates two volumes, two database containers and two apps. If you make them run on the same network and run the database as a single container, it will work.
I don't think you actually want two database containers and two volumes.
Approach 1:
Use a single docker-compose.yml:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
Approach 2:
To isolate things further while still running them as separate compose files, create three compose files that share a network.
docker-compose.yml for the database, with the network:
version: "3"
services:
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
networks:
- test_network
volumes:
app_docker_db
networks:
test_network:
docker-compose.yml for app1:
Remove the database container and add the lines below to your compose file.
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app, adjusting its docker-compose file accordingly.
There are many other ways to structure docker-compose files; see "Configure the default network" and "Use a pre-existing network" in the Compose networking docs.
You're publishing the same port (1433) to the host machine twice (this is what ports: does). That's not possible, since it would bind the same port on your host twice (that's what the error message says).
I think the most common way in these cases is to link your dbs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker-compose files is that both applications publish the same host port. This is not possible for the same reason. I would suggest changing one of them to "3001:3000", so you can access that application on port 3001.
