How to share and run an app with docker-compose

After spending hours trying to make it happen, I just can't make it work. I'm desperate for help, as I couldn't find any questions related to my issue.
I've developed a Node.js web app for my university. The IT department needs me to prepare a Docker image shared on Docker Hub (although I chose GitHub Packages) and a docker-compose file so it can be run easily. I tried to host the app on my Raspberry Pi, but when I pull the image (with docker-compose.yaml, Dockerfile and .env present) it fails during the build process:
npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
and during the compose up process:
pi@raspberrypi:~/projects $ docker-compose up
Starting mysql ... done
Starting backend ... done
Attaching to mysql, backend
backend | exec /usr/local/bin/docker-entrypoint.sh: exec format error
mysql | 2022-09-22 08:04:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
mysql | 2022-09-22 08:04:48+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mysql | 2022-09-22 08:04:48+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
mysql | '/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
backend exited with code 1
I executed bash inside my Docker container (on my dev machine), so I'm sure that the /usr/src/app folder structure matches my app's folder structure.
What's wrong with my solution? Should I provide more files than just docker-compose.yaml, Dockerfile and .env?
Dockerfile:
FROM node:18-alpine
WORKDIR /usr/src/app
COPY . ./
RUN npm i && npm cache clean --force
RUN npm run build
ENV NODE_ENV production
CMD [ "node", "dist/main.js" ]
EXPOSE ${PORT}
docker-compose.yaml:
version: "3.9"
services:
  backend:
    command: npm run start:prod
    container_name: backend
    build:
      context: .
      dockerfile: Dockerfile
    image: ghcr.io/rkn-put/web-app/docker-backend/test
    ports:
      - ${PORT}:${PORT}
    depends_on:
      - mysql
    environment:
      - NODE_ENV=${NODE_ENV}
      - PORT=${PORT}
      - ORIGIN=${ORIGIN}
      - DB_HOST=${DB_HOST}
      - DB_PORT=${DB_PORT}
      - DB_NAME=${DB_NAME}
      - DB_USERNAME=${DB_USERNAME}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_SYNCHRONIZE=${DB_SYNCHRONIZE}
      - EXPIRES_IN=${EXPIRES_IN}
      - SECRET=${SECRET}
      - GMAIL_USER=${GMAIL_USER}
      - GMAIL_CLIENT_ID=${GMAIL_CLIENT_ID}
      - GMAIL_CLIENT_SECRET=${GMAIL_CLIENT_SECRET}
      - GMAIL_REFRESH_TOKEN=${GMAIL_REFRESH_TOKEN}
      - GMAIL_ACCESS_TOKEN=${GMAIL_ACCESS_TOKEN}
  mysql:
    image: mysql:latest
    container_name: mysql
    hostname: mysql
    restart: always
    ports:
      - ${DB_PORT}:${DB_PORT}
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${DB_NAME}
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_PASSWORD=${DB_PASSWORD}
    volumes:
      - ./mysql:/var/lib/mysql
    cap_add:
      - SYS_NICE

Even if this is not a complete solution, there are multiple things that you should fix (and understand), and then things should work.
You say: "but when I pull the image (with docker-compose.yaml, Dockerfile and .env present) it fails during build process". This is actually where the biggest confusion happens: if you pull, there should be no build anymore.
You build locally, you push with docker-compose push, and the image that you have on GitHub is ready to use. Because of this, on the target machine (where you want to run the project) you don't need to build anymore, and therefore you don't need a Dockerfile anymore.
The docker-compose.yml that you deliver should not have the build section for your app, only the image name, so that docker-compose knows where to pull the image from.
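For example, the backend service in the delivered file could be reduced to something like this sketch (image name and variables taken from the question; no build section):

```yaml
services:
  backend:
    image: ghcr.io/rkn-put/web-app/docker-backend/test
    ports:
      - ${PORT}:${PORT}
    depends_on:
      - mysql
```

With only the image key present, docker-compose up pulls the published image instead of attempting a local build.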
Locally (in your development environment) you should have the same docker-compose.yml without the build section, but also a file docker-compose.override.yml that should look like:
version: "3.9"
services:
  backend:
    build:
      context: .
docker-compose automatically merges docker-compose.yml and docker-compose.override.yml when it finds the latter. That's also why it is important not to deliver the override file.
This alone should make your application work on the target machine. Remember, all you need there is docker-compose.yml (no build section) and the .env file.
Other points that you might want to address:
dockerfile: Dockerfile - not needed, since that is the default
command: npm run start:prod - if you overwrite the command anyway, why not just put it this way in the Dockerfile? If you have a good reason to do this, then leave it
EXPOSE ${PORT} - you are not declaring PORT anywhere in your Dockerfile. Just make your app run on port 80 and expose port 80
read the docs and save yourself some typing: if the environment variables have the same names as in .env, docker-compose is clever enough to pick them up if you only declare their names
don't expose the MySQL port on the host: ${DB_PORT}:${DB_PORT}
consider using a named volume for MySQL instead of a folder. If you use a folder, maybe place it in a different location so that you don't delete it by mistake
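Putting several of these points together, a trimmed version of the file might look like the following sketch (untested; it assumes the app is changed to listen on port 80, bare variable names are picked up from .env, and a named volume holds the MySQL data):

```yaml
services:
  backend:
    image: ghcr.io/rkn-put/web-app/docker-backend/test
    ports:
      - 80:80
    depends_on:
      - mysql
    environment:
      - NODE_ENV
      - DB_HOST
      - DB_PORT
      - DB_NAME
      - DB_USERNAME
      - DB_PASSWORD
  mysql:
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD
      - MYSQL_DATABASE=${DB_NAME}
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_PASSWORD=${DB_PASSWORD}
    volumes:
      - mysql_data:/var/lib/mysql
volumes:
  mysql_data:
```

The named volume survives docker-compose down and lives under Docker's own storage, so it cannot be deleted by accident while cleaning up the project folder.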

Related

How can I add a file to my volume without writing a new file to the host?

I'm trying to run a Next.js project inside docker-compose. To take advantage of hot-reloading, I'm mounting in the entire project to the Docker image as a volume.
So far, so good!
This is where things are starting to get tricky: For this particular project, it turns out Apple Silicon users need a .babelrc file included in their dockerized app, but NOT in the files on their computer.
All other users do not need a .babelrc file at all.
To sum up, this is what I'd like to be able to do:
hot reload project (hence ./:/usr/src/app/)
have an environment variable write content to /usr/src/app/.babelrc.
not have a .babelrc in the host's project root.
My attempt at solving this was including the .babelrc under ci-cd/.babelrc in the host file system.
Then I tried mounting the file as a volume, like - ./ci-cd/.babelrc:/usr/src/app/.babelrc. But then a .babelrc file gets written back to the root of the project on the host filesystem.
I also tried including COPY ./ci-cd/.babelrc /usr/src/app/.babelrc in the Dockerfile, but it seems to be overwritten by docker-compose's volumes property.
Here's my Dockerfile:
FROM node:14
WORKDIR /usr/src/app/
COPY package.json .
RUN npm install
And the docker-compose.yml:
version: "3.8"
services:
  # Database image
  psql:
    image: postgres:13
    restart: unless-stopped
    ports:
      - 5432:5432
  # image for next.js project
  webapp:
    build: .
    command: >
      bash -c "npm run dev"
    ports:
      - 3002:3002
    expose:
      - 3002
    depends_on:
      - testing-psql
    volumes:
      - ./:/usr/src/app/

Flask in docker working very slowly and not synch files

I have two problems with a Flask app in Docker. The application works slowly and freezes after the last request finishes (for example: the first route works fine, but clicking another link/page next freezes the app; if I go to the homepage via the URL and load the page again, it works OK). Outside Docker the app works very fast.
The second problem is that Docker does not sync files in the container after I change them.
# Dockerfile
FROM python:3.9
# set work directory
WORKDIR /base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update
RUN pip install --upgrade pip
COPY ./requirements.txt /base/requirements.txt
COPY ./base_app.py /base/base_app.py
COPY ./config.py /base/config.py
COPY ./certs/ /base/certs/
COPY ./app/ /base/app/
COPY ./tests/ /base/tests/
RUN pip install -r requirements.txt
# docker-compose
version: '3.3'
services:
  web:
    build: .
    command: tail -f /dev/null
    volumes:
      - ${PWD}/app/:/usr/src/app/
    networks:
      - flask-network
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    networks:
      - flask-network
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always
networks:
  flask-network:
    driver: bridge
You have a couple of significant errors in the code you show.
The first problem is that your application doesn't run at all: the Dockerfile is missing the CMD line that tells Docker what to run, and you override it in the Compose setup with a meaningless tail command. You should generally set this in the Dockerfile:
CMD ["./base_app.py"]
You can remove most of the Compose settings you have. You do not need command: (it's in the Dockerfile), volumes: (what you have is ineffective, and the code is in the image anyway), or networks: (Compose provides a network named default; delete all of the networks: blocks in the file).
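With those settings removed, a trimmed compose file might look like this sketch (it assumes the Dockerfile gains the CMD above and the Flask app listens on port 5000):

```yaml
version: '3.3'
services:
  web:
    build: .
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always
```

Everything else (the command to run, the code, the dependencies) lives in the image built from the Dockerfile.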
The second problem is that Docker does not sync files in the container after I change them.
I don't usually recommend trying to do actual development in Docker. You can tell Compose to start just the database:
docker-compose up -d flaskdb
and then you can access it from the host (PGHOST=localhost, PGPORT=5432). This means you can use an ordinary non-Docker Python virtual environment for development.
If you do want to try to use volumes: to simulate a live development environment (you talk about performance; this specific path can be quite slow on non-Linux hosts) then you need to make sure the left side of volumes: is the host directory with your code (probably .), the right side is the container directory (your Dockerfile uses /base), and your Dockerfile doesn't rearrange, modify, or generate the files at all (the bind mount hides all of it).
# don't run the application in the image; use the Docker infrastructure
# to run something else
volumes:
  # v-------- left side: host path (matches COPY source directory)
  - .:/base
  # ^^^^-- right side: container path (matches WORKDIR/destination directory)

Why is docker-compose running the same command and using the wrong Dockerfile?

I've got a simple Node / React project. I'm trying to use Docker to create two containers, one for the server, and one for the client, each with their own Dockerfile in the appropriate directory.
docker-compose.yml
version: '3.9'
services:
  client:
    image: node:14.15-buster
    build:
      context: ./src
      dockerfile: Dockerfile.client
    ports:
      - '3000:3000'
      - '45799:45799'
    volumes:
      - .:/app
    tty: true
  server:
    image: node:14.15-buster
    build:
      context: ./server
      dockerfile: Dockerfile.server
    ports:
      - '3001:3001'
    volumes:
      - .:/app
    depends_on:
      - redis
    links:
      - redis
    tty: true
  redis:
    container_name: redis
    image: redis
    ports:
      - '6379'
src/Dockerfile.client
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:client
server/Dockerfile.server
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:server
After building and starting the containers, both containers run the same command, seemingly at random. Either both run yarn start:server or yarn start:client. The logs clearly detail duplicate startup commands and ports being used. Requests to either port 3000 (client) or 3001 (server) confirm that the same one is being used in both containers. If I change the command in both Dockerfiles to echo the respective filename (Dockerfile.server! or Dockerfile.client!), startup reveals only one Dockerfile being used for both containers. I am also running the latest version of Docker on Mac.
What is causing docker-compose to use the same Dockerfile for both containers?
After a lengthy and painful bout of troubleshooting, I narrowed the issue down to duplicate image references. image: node:14.15-buster for each service in docker-compose.yml and FROM node:14.15-buster in each Dockerfile.
Why this would cause this behavior is unclear, but after removing the image references in docker-compose.yml and rebuilding / restarting, everything works as expected.
When you run docker-compose build with both image and build properties set on a service, it will build an image according to the build property and then tag the image according to the image property.
In your case, you have two services building different images and tagging them with the same tag node:14.15-buster. One will overwrite the other.
This probably has the additional unintended consequence of causing your next image to be built on top of the previously built image instead of the true node:14.15-buster.
Then when you start the service, both containers will use the image tagged node:14.15-buster.
From the docs:
If you specify image as well as build, then Compose names the built image with the webapp and optional tag specified in image
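Concretely, a fix in line with this explanation is either to drop the image: lines entirely or to give each service its own distinct tag, so the two builds no longer overwrite each other (the tag names below are hypothetical):

```yaml
services:
  client:
    build:
      context: ./src
      dockerfile: Dockerfile.client
    image: myproject/client:latest
  server:
    build:
      context: ./server
      dockerfile: Dockerfile.server
    image: myproject/server:latest
```

Each service still builds from its own Dockerfile, but the resulting images now get unique names, and the base image node:14.15-buster is left untouched.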

Go applications fails and exits when running using docker-compose, but works fine with docker run command

I am running all of these operations on a remote server, a VM running Ubuntu 16.04.5 x64.
My Go project's Dockerfile looks like:
FROM golang:latest
ADD . $GOPATH/src/example.com/myapp
WORKDIR $GOPATH/src/example.com/myapp
RUN go build
#EXPOSE 80
#ENTRYPOINT $GOPATH/src/example.com/myapp/myapp
ENTRYPOINT ./myapp
#CMD ["./myapp"]
When I run the docker container using docker-compose up -d, the Go application exits and I see this in the docker logs:
myapp_1 | /bin/sh: 1: ./myapp: Exec format error
docker_myapp_1 exited with code 2
If I locate the image using docker images and run the image like:
docker run -it 75d4a95ef5ec
I can see that my golang applications runs just fine:
viper environment is: development
HTTP server listening on address: ":3005"
When I googled this error, some people suggested compiling with special flags, but I am running this container on the same Ubuntu host, so I am really confused why this isn't working under Docker.
My docker-compose.yml looks like:
version: "3"
services:
  openresty:
    build: ./openresty
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - myapp
    env_file:
      - '.env'
    restart: always
  myapp:
    build: ../myapp
    volumes:
      - /home/deploy/apps/myapp:/go/src/example.com/myapp
    ports:
      - "3005:3005"
    depends_on:
      - db
      - redis
      - memcached
    env_file:
      - '.env'
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - "/home/deploy/v/redis:/data"
    restart: always
  memcached:
    image: memcached
    ports:
      - "11211:11211"
    restart: always
  db:
    image: postgres:9.4
    volumes:
      - "/home/deploy/v/pgdata:/var/lib/postgresql/data"
    restart: always
Your docker-compose.yml file says:
volumes:
  - /home/deploy/apps/myapp:/go/src/example.com/myapp
which means your host system's source directory is mounted over, and hides, everything that the Dockerfile builds. ./myapp is then the host's copy of the myapp executable, and if the host differs (maybe you have a macOS or Windows host) that will cause this error.
This is a popular setup for interpreted languages where developers want to run their application without running a normal test-build-deploy sequence, but it doesn't really make sense for a compiled language like Go where you don't have a choice. I'd delete this block entirely.
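With that block deleted, the myapp service from the compose file above would simply be:

```yaml
myapp:
  build: ../myapp
  ports:
    - "3005:3005"
  depends_on:
    - db
    - redis
    - memcached
  env_file:
    - '.env'
```

The container then runs the binary that was compiled inside the image during docker-compose build, not whatever happens to be in the host directory.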
The Go container stops running because of this:
WORKDIR $GOPATH/src/example.com/myapp
RUN go build
#EXPOSE 80
#ENTRYPOINT $GOPATH/src/example.com/myapp/myapp
ENTRYPOINT ./myapp
You are switching directories to $GOPATH/src/example.com/myapp, where you build your app; however, your entrypoint is pointing to the wrong location.
To solve this, you either copy the app into the root directory and keep the same ENTRYPOINT command or you copy the application to a different location and pass the full path such as:
ENTRYPOINT /my/go/app/location
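One way to make the binary's location unambiguous is to build it to a fixed absolute path and point the ENTRYPOINT there. A sketch (not the poster's exact layout):

```dockerfile
FROM golang:latest
WORKDIR /go/src/example.com/myapp
COPY . .
# Build to a fixed absolute path so the ENTRYPOINT
# does not depend on the current working directory
RUN go build -o /usr/local/bin/myapp .
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Using the exec form of ENTRYPOINT also avoids the /bin/sh wrapper seen in the error message.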

Dockerfile and docker-compose not updating with new instructions

When I try to build a container using docker-compose like so
nginx:
  build: ./nginx
  ports:
    - "5000:80"
the COPY instruction isn't working when my Dockerfile simply looks like this
FROM nginx
#Expose port 80
EXPOSE 80
COPY html /usr/share/nginx/test
#Start nginx server
RUN service nginx restart
What could be the problem?
It seems that when using the docker-compose command, it caches an intermediate image that it doesn't show you, and it constantly reuses that cache without ever updating it correctly.
Sadly, the documentation regarding this is poor. The way to fix it is to build first with no cache and then bring it up, like so:
docker-compose build --no-cache
docker-compose up -d
I had the same issue, and a one-liner that does it for me is:
docker-compose up --build --remove-orphans --force-recreate
--build does the biggest part of the job and triggers the build.
--remove-orphans is useful if you have changed the name of one of your services. Otherwise, you might get a leftover warning telling you about the old, now wrongly named, service dangling around.
--force-recreate is a little drastic but will force the recreation of the containers.
Reference: https://docs.docker.com/compose/reference/up/
Warning: I could do this on my project because I was toying around with really small container images. Recreating everything, every time, could take significant time depending on your situation.
If you need docker-compose to copy files on every up command, I suggest declaring a volumes option for your service in the compose.yml file. It will persist your data and also copy files from that folder into the container.
More info here: volume-configuration-reference
server:
  image: server
  container_name: server
  build:
    context: .
    dockerfile: server.Dockerfile
  env_file:
    - .envs/.server
  working_dir: /app
  volumes:
    - ./server_data:/app # <= here it is
  ports:
    - "9999:9999"
  command: ["command", "to", "run", "the", "server", "--some-options"]
Optionally, you can add the following section to the end of the compose.yml file and switch to a named volume. The data in that volume will then persist: it will not be removed by the docker-compose stop command or the docker-compose down command. To remove the volume you will need to run the down command with an additional -v flag:
docker-compose down -v
For example, including volumes:
services:
  server:
    image: server
    container_name: server
    build:
      context: .
      dockerfile: server.Dockerfile
    env_file:
      - .envs/.server
    working_dir: /app
    volumes:
      - server_data:/app # <= named volume, matching the declaration below
    ports:
      - "9999:9999"
    command: ["command", "to", "run", "the", "server", "--some-options"]
volumes: # at the root level, the same as services
  server_data:
