I am using docker-compose to create a multi-container environment with one MongoDB instance and two Python applications. The trouble is that when I change my files locally, docker-compose up doesn't reflect the changes. What am I doing wrong?
My project structure:
.
├── docker-compose.yml
├── form
│ ├── app.py
│ ├── Dockerfile
│ ├── requirements.txt
│ ├── static
│ └── templates
│ │ ├── form_action.html
│ │ └── form_sumbit.html
├── notify
│ ├── app.py
│ ├── Dockerfile
│ └── requirements.txt
└── README
The Dockerfiles for the two apps are pretty similar. One is given below:
FROM python:2.7
ADD . /notify
WORKDIR /notify
RUN pip install -r requirements.txt
Here is my docker-compose.yml file:
version: '3'
services:
  db:
    image: mongo:3.0.2
    container_name: mongo
    networks:
      db_net:
        ipv4_address: 172.16.1.1
  web:
    build: form
    command: python -u app.py
    ports:
      - "5000:5000"
    volumes:
      - form:/form
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.2
  notification:
    build: notify
    command: python -u app.py
    volumes:
      - notify:/notify
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.3
networks:
  db_net:
    external: true
volumes:
  form:
  notify:
Here is the output of docker volume ls:
local form
local healthcarereminder_form
local healthcarereminder_notify
local notify
[My understanding so far: you can see there are two instances each of form and notify, one with the project folder name prepended. So Docker might be looking for changes in a different volume. I am not sure.]
If you're trying to mount a host directory in the docker-compose file, do not declare notify as a named volume under the top-level volumes: key.
Instead, treat it like a local folder:
notification:
  build: notify
  command: python -u app.py
  volumes:
    # this points to a relative ./notify directory.
    - ./notify:/notify
  environment:
    ....
volumes:
  form:
  # do not declare the volume here.
  # notify:
When you declare a volume at the bottom of the docker-compose file, Docker creates a special internal directory meant to be shared between containers. Here are more details: https://docs.docker.com/engine/tutorials/dockervolumes/#add-a-data-volume
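Putting it together, a minimal sketch of the corrected docker-compose.yml under the layout above (both named volumes replaced with bind mounts):

version: '3'
services:
  db:
    image: mongo:3.0.2
    container_name: mongo
    networks:
      db_net:
        ipv4_address: 172.16.1.1
  web:
    build: form
    command: python -u app.py
    ports:
      - "5000:5000"
    volumes:
      - ./form:/form       # bind mount: host edits are visible in the container
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.2
  notification:
    build: notify
    command: python -u app.py
    volumes:
      - ./notify:/notify   # bind mount instead of a named volume
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.3
networks:
  db_net:
    external: true
# no top-level volumes: block needed once both services use bind mounts

The stale named volumes from the docker volume ls output above (form, notify, healthcarereminder_form, healthcarereminder_notify) can then be removed with docker volume rm once no container uses them.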
Related
I'm trying to use krakend's flexible configuration, but I can't get it started in a simple way.
ERROR parsing the configuration file: loading flexible-config settings:
2022-07-19T08:48:21.279006680Z - "config/settings/dev": open "config/settings/dev": no such file or directory
I'm just trying to load a configuration file with a simple variable to test the gateway, but I'm not assigning that variable anywhere for now.
dev/env.json
{
  "port": 8080
}
Here is my docker-compose.yaml configuration:
shared-gateway:
  build:
    context: ${PWD}/.docker/krakend
  container_name: 'shared-gateway'
  restart: "unless-stopped"
  volumes:
    - ${PWD}/.docker/krakend/:/etc/krakend/
  ports:
    - "9191:8080"
  networks:
    - network-gateway
  environment:
    - FC_ENABLE=1
    - FC_SETTINGS="config/settings/dev"
  command: ['run', '-c', '/etc/krakend/krakend.json']
Dockerfile
FROM devopsfaith/krakend:2.0.5
COPY krakend.json /etc/krakend/krakend.json
Here is my directory tree:
.
├── Dockerfile
├── config
│ ├── partials
│ ├── settings
│ │ ├── dev
│ │ │ └── env.json
│ │ └── prod
│ └── templates
└── krakend.json
When I start the container, it tells me that it can't find the directory
ERROR parsing the configuration file: loading flexible-config settings:
2022-07-19T09:25:12.390870759Z - "config/settings/dev": open "config/settings/dev": no such file or directory
Does anyone know where I'm going wrong or have an example of how to use krakend's flexible-configuration with docker?
It seems you either don't copy the "config" directory into "/etc/krakend/" in your Docker image, or don't mount it from outside in your docker-compose file. I believe the image's working directory is "/etc/krakend", so make sure your config folder is available under that directory before the "run" command starts.
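For example, a minimal sketch of the image-build route, assuming the config tree from the question sits next to the Dockerfile inside the build context:

FROM devopsfaith/krakend:2.0.5
# bake the flexible-configuration sources into the image
COPY config /etc/krakend/config
COPY krakend.json /etc/krakend/krakend.json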
The problem is that the config folder is not present in your Docker image. I would suggest using this Dockerfile example, which uses Flexible Configuration and does exactly what you want:
https://www.krakend.io/docs/deploying/docker/
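Alternatively, since the compose file above already mounts ${PWD}/.docker/krakend/ over /etc/krakend/, it may be enough to point FC_SETTINGS at the mounted path. Note that with the list syntax for environment:, the quotes in FC_SETTINGS="config/settings/dev" are passed literally to the process, so an unquoted value is safer. A sketch under those assumptions:

shared-gateway:
  volumes:
    - ${PWD}/.docker/krakend/:/etc/krakend/
  environment:
    - FC_ENABLE=1
    # absolute, unquoted path inside the container
    - FC_SETTINGS=/etc/krakend/config/settings/dev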
I am trying to dockerize a microservice-based application. The API is built with NestJS and MySQL. The following is the directory structure:
.
├── docker-compose.yml
├── api
│ ├── src
│ ├── Dockerfile
│ ├── package.json
│ ├── package-lock.json
│ ├── ormconfig.js
│ └── .env
├── payment
│ ├── src
│ ├── Dockerfile
│ ├── package.json
│ └── package-lock.json
├── notifications
│ ├── src
│ ├── Dockerfile
│ ├── package.json
│ └── package-lock.json
The following is the Dockerfile inside the api directory
FROM node:12.22.3
WORKDIR /usr/src/app
COPY package*.json .
RUN npm install
CMD ["npm", "run", "start:dev"]
Below is the docker-compose.yml file. Please note that the details for payment & notifications are not yet added to the docker-compose file.
version: '3.7'
networks:
  server-network:
    driver: bridge
services:
  api:
    image: api
    build:
      context: .
      dockerfile: api/Dockerfile
    command: npm run start:dev
    volumes:
      - ".:/usr/src/app"
      - "/usr/src/app/node_modules"
    networks:
      - server-network
    ports:
      - '4000:4000'
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    container_name: api_db
    restart: always
    environment:
      MYSQL_DATABASE: api
      MYSQL_ROOT_USER: root
      MYSQL_PASSWORD: 12345
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3307:3306"
    volumes:
      - api_db:/var/lib/mysql
    networks:
      - server-network
volumes:
  api_db:
Now, when I try to start the application using docker-compose up, I get the following error:
no such file or directory, open '/usr/src/app/package.json'
UPDATE
Tried removing the volumes and it didn't help either. I also tried to see what is in the api container by listing the contents of the directory, running:
docker-compose run api ls /usr/src/app
and it shows the following contents in the folder
node_modules package-lock.json
Any help is much appreciated.
Your build: { context: } directory is set wrong.
The image build mechanism uses a build context to send files to the Docker daemon. The dockerfile: location is relative to this directory; within the Dockerfile, the left-hand side of any COPY (or ADD) directives is always interpreted as relative to this directory (even if it looks like an absolute path; and you can't step out of this directory with ..).
For the setup you show, where you have multiple self-contained applications, the easiest thing is to set context: to the directory containing the application.
build:
  context: api
  dockerfile: Dockerfile # the default value
Or, if you are using the default value for dockerfile, an equivalent shorthand
build: api
You need to set the build context to a parent directory if you need to share files between images (see How to include files outside of Docker's build context?). In this case, all of the COPY instructions need to be qualified with the subdirectory in the combined source tree.
# Dockerfile, when context: .
COPY api/package*.json ./
RUN npm ci
COPY api/ ./
You should not normally need the volumes: you show. These have the core effect of (1) replacing the application in the image with whatever's on the local system, which could be totally different, and then (2) replacing its node_modules directory with a Docker anonymous volume, which will never be updated to reflect changes in the package.json file. In this particular setup you also need to be very careful that the volume mappings match the filesystem layout. I would recommend removing the volumes: block here; use a local Node for day-to-day development, maybe configuring it to point at the Docker-hosted database.
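For the local-development loop that paragraph describes, a hedged sketch (the DATABASE_* variable names are placeholders for whatever ormconfig.js actually reads):

cd api
npm install
# point the local app at the MySQL container published on host port 3307
DATABASE_HOST=127.0.0.1 DATABASE_PORT=3307 npm run start:dev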
If you also remove things that are set in the Dockerfile (command:) and things Compose can provide reasonable defaults for (image:, container_name:, networks:) you could reduce the docker-compose.yml file to:
version: '3.8'
services:
  api: # without volumes:, networks:, image:, command:
    build: api # the corrected directory-only shorthand
    ports:
      - '4000:4000'
    depends_on:
      - mysql
  mysql: # without container_name:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: api
      MYSQL_ROOT_USER: root
      MYSQL_PASSWORD: 12345
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3307:3306"
    volumes:
      - api_db:/var/lib/mysql
volumes:
  api_db:
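One thing the trimmed file still needs is a way for the api service to reach MySQL: within the Compose network, the service name mysql doubles as the hostname. A hedged fragment (again, the variable names are placeholders for whatever ormconfig.js reads):

api:
  build: api
  environment:
    DATABASE_HOST: mysql   # Compose service name resolves via the built-in DNS
    DATABASE_PORT: "3306"  # the container port, not the published 3307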
Assume my current public IP is 101.15.14.71. I have a domain called example.com which I configured using Cloudflare, and I created multiple DNS entries pointing to my public IP.
Eg:
1) new1.example.com - 101.15.14.71
2) new2.example.com - 101.15.14.71
3) new3.example.com - 101.15.14.71
Now, here's my example project structure:
├── myapp
│ ├── app
│ │ └── main.py
│ ├── docker-compose.yml
│ └── Dockerfile
├── myapp1
│ ├── app
│ │ └── main.py
│ ├── docker-compose.yml
│ └── Dockerfile
└── traefik
├── acme.json
├── docker-compose.yml
├── traefik_dynamic.toml
└── traefik.toml
Here I have two FastAPI services (i.e., myapp and myapp1).
Here's the example code I have in main.py in both myapp and myapp1. It's exactly the same except for the return statement:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_main():
    return {"message": "Hello world for my project myapp"}
Here's my Dockerfile for myapp and myapp1. Both are exactly the same, except that I deploy myapp on port 7777 and myapp1 on port 7778 in different containers:
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && apt upgrade -y
RUN apt install -y -q build-essential python3-pip python3-dev
# python dependencies
RUN pip3 install -U pip setuptools wheel
RUN pip3 install gunicorn fastapi uvloop httptools "uvicorn[standard]"
# copy required files
RUN bash -c 'mkdir -p /app'
COPY ./app /app
# -b 0.0.0.0:7777 in the myapp Dockerfile, -b 0.0.0.0:7778 in the myapp1 Dockerfile
ENTRYPOINT /usr/local/bin/gunicorn \
    -b 0.0.0.0:7777 \
    -w 1 \
    -k uvicorn.workers.UvicornWorker app.main:app \
    --chdir /app
Here's my docker-compose.yml file for myapp and myapp1. Here also both are exactly the same; the only differences are the service name, backend, host rule, and port:
services:
  myapp: # I use this line for the myapp docker-compose file
  myapp1: # I use this line for the myapp1 docker-compose file
    build: .
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik_public"
      - "traefik.backend=myapp" # I use this line for the myapp docker-compose file
      - "traefik.backend=myapp1" # I use this line for the myapp1 docker-compose file
      - "traefik.frontend.rule=Host:new2.example.com" # I use this for the myapp compose file
      - "traefik.frontend.rule=Host:new3.example.com" # I use this for the myapp1 compose file
      - "traefik.port=7777" # I use this line for the myapp docker-compose file
      - "traefik.port=7778" # I use this line for the myapp1 docker-compose file
    networks:
      - traefik_public
networks:
  traefik_public:
    external: true
Now coming to the traefik folder:
acme.json: I created it empty (with nano acme.json), then ran chmod 600 acme.json for proper permissions.
traefik_dynamic.toml
[http]
  [http.routers]
    [http.routers.route0]
      entryPoints = ["web"]
      middlewares = ["my-basic-auth"]
      service = "api@internal"
      rule = "Host(`new1.example.com`)"
      [http.routers.route0.tls]
        certResolver = "myresolver"
  [http.middlewares.test-auth.basicAuth]
    users = [
      ["admin:your_encrypted_password"]
    ]
traefik.toml
[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http]
      [entryPoints.web.http.redirections]
        [entryPoints.web.http.redirections.entryPoint]
          to = "websecure"
          scheme = "https"
  [entryPoints.websecure]
    address = ":443"

[api]
  dashboard = true

[certificatesResolvers.myresolver.acme]
  email = "reallygoodtraefik@gmail.com"
  storage = "acme.json"
  [certificatesResolvers.myresolver.acme.httpChallenge]
    entryPoint = "web"

[providers]
  [providers.docker]
    watch = true
    network = "web"
  [providers.file]
    filename = "traefik_dynamic.toml"
docker-compose.yml
services:
  traefik:
    image: traefik:latest
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json
      - ./traefik_dynamic.toml:/traefik_dynamic.toml
    networks:
      - web
networks:
  web:
Those are the details of my files. What I am trying to achieve here is:
I want to set up traefik and the traefik dashboard with basic authentication, and deploy my two FastAPI services:
myapp on 7777, which I need to access via new2.example.com
myapp1 on 7778, which I need to access via new3.example.com
the traefik dashboard, which I need to access via new1.example.com
All of these should be HTTPS and have certificate auto-renewal enabled.
I put all this together from online articles for the latest version of traefik, but it is not working. I used docker-compose to build and deploy traefik, and opened the API dashboard. It asks for user and password (the basic auth I set up); I entered the user details I configured in traefik_dynamic.toml, but they are not accepted.
Where did I go wrong? Please help me correct the mistakes in my configuration. I am really interested in learning more about this.
Error Update:
traefik_1 | time="2021-06-16T01:51:16Z" level=error msg="Unable to obtain ACME certificate for domains \"new1.example.com\": unable to generate a certificate for the domains [new1.example.com]: error: one or more domains had a problem:\n[new1.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new1.example.com/.well-known/acme-challenge/mu85LkYEjlvnbDI-wM2xMaRFO1QsPDNjepTDb47dWF0 [2606:4700:3032::6815:55c4]: 404\n" rule="Host(`new1.example.com`)" routerName=api#docker providerName=myresolver.acme
traefik_1 | time="2021-06-16T01:51:19Z" level=error msg="Unable to obtain ACME certificate for domains \"new2.example.com\": unable to generate a certificate for the domains [new2.example.com]: error: one or more domains had a problem:\n[new2.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new2.example.com/.well-known/acme-challenge/ykiCAEpJeQ1qgVdeFtSRo3q-ATTwgKdRdGHUs2kgIsY [2606:4700:3031::ac43:d1e9]: 404\n" providerName=myresolver.acme routerName=myapp1#docker rule="Host(`new2.example.com`)"
traefik_1 | time="2021-06-16T01:51:20Z" level=error msg="Unable to obtain ACME certificate for domains \"new3.example.com\": unable to generate a certificate for the domains [new3.example.com]: error: one or more domains had a problem:\n[new3.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new3.example.com/.well-known/acme-challenge/BUZWuWdNd2XAXwXCwkeqe5-PHb8cGV8V6UtzeLaKryE [2606:4700:3031::ac43:d1e9]: 404\n" providerName=myresolver.acme routerName=myapp#docker rule="Host(`new3.example.com`)"
You only need one docker-compose file for all the services, and there is no need to define one for each container.
The project structure you should be using is something like:
├── docker-compose.yml
├── myapp
│ ├── .dockerignore
│ ├── Dockerfile
│ └── app
│ └── main.py
├── myapp1
│ ├── .dockerignore
│ ├── Dockerfile
│ └── app
│ └── main.py
└── traefik
├── acme.json
└── traefik.yml
When creating containers, unless they are to be used for development purposes, it is recommended to not use a full-blown image, like ubuntu. Specifically for your purposes I would recommend a python image, such as python:3.7-slim.
Not sure if you are using this for development or production purposes, but you could also use volumes to mount the app directories inside the containers (especially useful if you are using this for development), and only use one Dockerfile for both myapp and myapp1, customizing it via environment variables.
Since you are already using traefik's dynamic configuration, I will do most of the setup for the container configuration via docker labels in the docker-compose.yml file.
Your Dockerfiles for myapp and myapp1 will be very similar at this point, but I've kept them separate, since you may need to make changes to them depending on the requirements of your apps in the future. I've used an environment variable for the port, which allows you to change the port from your docker-compose.yml file.
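For instance, a hedged fragment that overrides the port for myapp1 at the compose level (environment: simply overrides the ENV default baked into the image):

myapp1:
  build: ./myapp1
  environment:
    PORT: 7778  # overrides the ENV PORT default from the Dockerfile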
You can use the following Dockerfile (./myapp/Dockerfile and ./myapp1/Dockerfile):
FROM python:3.7-slim
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
RUN pip3 install -U pip setuptools wheel && \
pip3 install gunicorn fastapi uvloop httptools "uvicorn[standard]"
COPY . /app
# PORT=7778 for myapp1 (Dockerfiles do not support trailing comments on instructions)
ENV PORT=7777
ENTRYPOINT /usr/local/bin/gunicorn -b 0.0.0.0:$PORT -w 1 -k uvicorn.workers.UvicornWorker app.main:app --chdir /app
Note: you should really be using something like poetry or a requirements.txt file for your app dependencies.
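A minimal sketch of the requirements.txt route, assuming the dependency list from the Dockerfile above moves into ./myapp/requirements.txt (gunicorn, fastapi, uvloop, httptools, uvicorn[standard]):

# copy and install dependencies before COPY . /app, so this layer caches well
COPY requirements.txt /app/requirements.txt
RUN pip3 install -U pip setuptools wheel && \
    pip3 install -r /app/requirements.txt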
The .dockerignore file (./myapp/.dockerignore and ./myapp1/.dockerignore) should contain:
Dockerfile
This is because the whole directory is copied into the container, and you don't need the Dockerfile in there.
Your main traefik config (./traefik/traefik.yml) can be something like:
providers:
  docker:
    exposedByDefault: false

global:
  checkNewVersion: false
  sendAnonymousUsage: false

api: {}

accessLog: {}

entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: "websecure"
          scheme: "https"
  websecure:
    address: ":443"

ping:
  entryPoint: "websecure"

certificatesResolvers:
  myresolver:
    acme:
      caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
      email: "example@example.com"
      storage: "/etc/traefik/acme.json"
      httpChallenge:
        entryPoint: "web"
Note: The above acme config will use the staging letsencrypt server. Make sure all the details are correct, and remove caServer after you've tested that everything works, in order to communicate with the letsencrypt production server.
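Before the first run, acme.json must exist with restrictive permissions (the same chmod 600 as in the original setup), or traefik will refuse to store certificates in it:

touch ./traefik/acme.json
chmod 600 ./traefik/acme.json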
Your ./docker-compose.yml file should be something like:
version: "3.9"
services:
myapp:
build:
context: ./myapp
dockerfile: ./Dockerfile
image: myapp
depends_on:
- traefik
expose:
- 7777
labels:
- "traefik.enable=true"
- "traefik.http.routers.myapp.tls=true"
- "traefik.http.routers.myapp.tls.certResolver=myresolver"
- "traefik.http.routers.myapp.entrypoints=websecure"
- "traefik.http.routers.myapp.rule=Host(`new2.example.com`)"
- "traefik.http.services.myapp.loadbalancer.server.port=7777"
myapp1:
build:
context: ./myapp1
dockerfile: ./Dockerfile
image: myapp1
depends_on:
- traefik
expose:
- 7778
labels:
- "traefik.enable=true"
- "traefik.http.routers.myapp1.tls=true"
- "traefik.http.routers.myapp1.tls.certResolver=myresolver"
- "traefik.http.routers.myapp1.entrypoints=websecure"
- "traefik.http.routers.myapp1.rule=Host(`new3.example.com`)"
- "traefik.http.services.myapp1.loadbalancer.server.port=7778"
traefik:
image: traefik:v2.4
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik/traefik.yml:/etc/traefik/traefik.yml
- ./traefik/acme.json:/etc/traefik/acme.json
ports:
- 80:80
- 443:443
labels:
- "traefik.enable=true"
- "traefik.http.routers.api.tls=true"
- "traefik.http.routers.api.tls.certResolver=myresolver"
- "traefik.http.routers.api.entrypoints=websecure"
- "traefik.http.routers.api.rule=Host(`new1.example.com`)"
- "traefik.http.routers.api.service=api#internal"
- "traefik.http.routers.api.middlewares=myAuth"
- "traefik.http.middlewares.myAuth.basicAuth.users=admin:$$apr1$$4zjvsq3w$$fLCqJddLvrIZA.CCoGE2E." # generate with htpasswd. replace $ with $$
You can generate the password by using the command:
htpasswd -n admin | sed 's/\$/\$\$/g'
Note: If you need a literal dollar sign in the docker-compose file you need to use $$ as documented here.
Issuing docker-compose up in the directory should bring all the services up, and working as expected.
The above should work for you based on the details you have provided, but can be further improved at multiple points, depending on your needs.
Moreover, having the credentials for the traefik dashboard in the docker-compose.yml file is probably not the best; you may want to use docker secrets for it. You can also add healthchecks, and consider placing myapp and myapp1 in a separate internal network.
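A hedged sketch of the docker-secrets route, assuming a local ./traefik/usersfile generated with htpasswd; traefik's basicAuth middleware can read credentials from a file via the usersfile option instead of a label:

services:
  traefik:
    # ...image, ports, volumes as above...
    secrets:
      - traefik_users
    labels:
      - "traefik.http.routers.api.middlewares=myAuth"
      # credentials come from the mounted secret, not from a label
      - "traefik.http.middlewares.myAuth.basicAuth.usersfile=/run/secrets/traefik_users"
secrets:
  traefik_users:
    file: ./traefik/usersfile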
If you want to get further into it, I propose that you start with Get started with Docker Compose and also read: Dockerfile reference and Compose file version 3 reference
I have a Golang project I am working on, with multiple microservices in the same code repository. My directory structure is roughly as follows:
├── pkg
├── cmd
│ ├── servicea
│ └── serviceb
├── internal
│ ├── servicea
│ └── serviceb
├── Makefile
├── scripts
│ └── protogen.sh
├── vendor
│ └── ...
├── go.mod
├── go.sum
└── readme.md
The main.go files for the respective services are in cmd/servicex/main.go.
I've put the individual Dockerfiles for the services in cmd/servicex.
Roughly, this is what my Dockerfile looks like:
FROM golang:1.15.6
ARG version
COPY go.* <repo-path>
COPY pkg/ <repo-path>/pkg/
COPY internal/servicea internal/servicea
COPY vendor/ <repo-path>/vendor/
COPY cmd/servicea/ <repo-path>/cmd/servicea/
WORKDIR <repo-path>/cmd/servicea/
RUN GO111MODULE=on GOFLAGS=-mod=vendor CGO_ENABLED=0 GOOS=linux go build -v -ldflags "-X <repo-path>/cmd/servicea/main.version=$version" -a -installsuffix cgo -o servicea .
FROM alpine:3.12
RUN apk --no-cache add ca-certificates
WORKDIR /servicea/
COPY --from=0 <repo-path>/cmd/servicea .
EXPOSE 50051
ENTRYPOINT ["/servicea/servicea"]
I am using Scylla as my DB for this service and gRPC is the protocol for communication.
This is my docker-compose.yml for this service.
version: '3'
services:
  db:
    container_name: servicedb
    image: scylladb/scylla
    hostname: db
    environment:
      GET_HOST_FROM: dns
      SCYLLA_USER: <user>
      SCYLLA_PASS: <password>
    ports:
      - 9042:9042
    networks:
      - serviceanet
  servicea:
    container_name: servicea
    image: servicea-production:latest
    hostname: servicea
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      GET_HOSTS_FROM: dns
    networks:
      - serviceanet
    volumes:
      - .:<repo-path>
    ports:
      - 50051:50051
    depends_on:
      - db
    links:
      - db
    labels:
      kompose.service.type: LoadBalancer
networks:
  serviceanet:
    driver: bridge
I am using kompose to generate the corresponding kubernetes yaml files.
However, when I run the compose locally or try to deploy it on minikube/GKE, my service instance is not able to connect to my DB and I get an error like this:
failed to create scylla session, gocql: unable to create session: control: unable to connect to initial hosts: dial tcp 127.0.0.1:9042: connect: connection refused
Otherwise, if I run a local scylla docker instance with the following command:
docker run --name some-scylla -p 9042:9042 -d scylladb/scylla --broadcast-address 127.0.0.1 --listen-address 0.0.0.0 --broadcast-rpc-address 127.0.0.1
and then do a go run cmd/servicea/main.go, my service seems to be running and the API endpoints are working (verified with Evans).
127.0.0.1 (localhost) is the host/container on which your service itself is running. In the case of multiple containers (whether using docker-compose or k8s), they have different IP addresses, and 127.0.0.1 corresponds to a different host depending on where you are connecting from. In your gocql initialization, provide the db address using a configuration/environment variable. docker-compose automatically configures a hostname for the db container, and with k8s you can use its service discovery mechanism.
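A hedged compose fragment along those lines (SCYLLA_HOST is a placeholder name; the Go side would read it, for example with os.Getenv, and pass it to the gocql cluster config instead of 127.0.0.1):

servicea:
  environment:
    GET_HOSTS_FROM: dns
    SCYLLA_HOST: db   # the db service name resolves inside the compose network
  depends_on:
    - db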
I would like different Docker projects for my closely related projects server_1 and server_2 to live in one folder that I can build/deploy simultaneously using Docker Compose.
Example project directory:
.
├── common_files
│ ├── grpc_pb2_grpc.py
│ ├── grpc_pb2.py
│ └── grpc.proto
├── docker-compose.yml
├── flaskui
│ ├── Dockerfile
│ └── flaskui.py
├── server_1
│ ├── Dockerfile
│ └── server_1.py
├── server_2
│ ├── Dockerfile
│ └── server_2.py
└── server_base.py
Two questions I am hoping have one common solution:
How can I make it so I only have the common dependency common_files/ in only one place?
How can I use the common code server_base.py in both server projects?
I've tried importing using relative directories in my project Python scripts, like from ..common_files import grpc_pb2, but I get ValueError: attempted relative import beyond top-level package.
I've considered using read_only volume mounting in docker-compose.yml, but that doesn't explain how to reference the common_files from within a project file like flaskui/Dockerfile.
You need to mount your local directory that contains the grpc files and server_base.py as a volume in your server_1 and server_2 containers. That way, there is a single source of truth (your local directory) and you can use them from both your containers.
You can add the volumes definition in your docker-compose.yml file for your containers. Here's a bare-bones compose file I created for your use-case:
version: "3"
services:
server_1:
image: tutum/hello-world
ports:
- "8080:8080"
container_name: server_1
volumes:
- ./common_files:/common_files
server_2:
image: tutum/hello-world
ports:
- "8081:8081"
container_name: server_2
volumes:
- ./common_files:/common_files
common_files is the folder in your local directory that has server_base.py along with the grpc files, which you want to mount as volumes into the containers that need them. These are called host volumes, since you are mounting local files from your host as volumes for your containers.
With this setup, when you exec into server_1, you can see that there's a common_files folder sitting in the / directory. Similarly for server_2.
You can exec into server_1 using docker-compose exec server_1 /bin/sh
You can also read up more on the documentation for Docker volumes.
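To tie this back to the import error in the question, one hedged option is to put the mounted folder on PYTHONPATH so the shared modules import as top-level names instead of via relative imports (the extra single-file mount for server_base.py is an assumption, since that file lives at the project root rather than inside common_files):

server_1:
  image: tutum/hello-world
  environment:
    PYTHONPATH: /common_files   # lets `import grpc_pb2` and `import server_base` work
  volumes:
    - ./common_files:/common_files
    - ./server_base.py:/common_files/server_base.py:ro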