I am trying to write a Dockerfile and docker-compose.yml for a webapp that uses Elasticsearch. I have connected Elasticsearch to the webapp and exposed it to the host. However, before the webapp runs I need to create the Elasticsearch indices and fill them. I have two scripts to do this, data_scripts/createElasticIndex.js and data_scripts/parseGenesToElastic.js. I tried adding these to the Dockerfile with
CMD [ "node", "data_scripts/createElasticIndex.js"]
CMD [ "node", "data_scripts/parseGenesToElastic.js"]
CMD ["npm", "start"]
but after I run docker-compose up no indices are created. How can I fill Elasticsearch before running the webapp?
Dockerfile:
FROM node:11.9.0
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY package*.json ./
# Install any needed packages specified in requirements.txt
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
#
RUN npm build
RUN npm i natives
# Bundle app source
COPY . .
# Make port 80 available to the world outside this container
EXPOSE 80
# Run app.py when the container launches
CMD [ "node", "data_scripts/createElasticIndex.js"]
CMD [ "node", "data_scripts/parseGenesToElastic.js"]
CMD [ "node", "servers/PredictionServer.js"]
CMD [ "node", "--max-old-space-size=8192", "servers/PWAServerInMem.js"]
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: webapp
ports:
- "1337:1337"
- "4000:85"
depends_on:
- redis
- elasticsearch
networks:
- redis
- elasticsearch
volumes:
- "/data:/data"
environment:
- "discovery.zen.ping.unicast.hosts=elasticsearch"
- ELASTICSEARCH_URL=http://elasticsearch:9200"
- ELASTICSEARCH_HOST=elasticsearch
redis:
image: redis
networks:
- redis
ports:
- "6379:6379"
expose:
- "6379"
elasticsearch:
image: elasticsearch:2.4
ports:
- 9200:9200
- 9300:9300
expose:
- "9200"
- "9300"
networks:
- elasticsearch
networks:
redis:
driver: bridge
elasticsearch:
driver: bridge
A Docker container only ever runs one command. When your Dockerfile has multiple CMD lines, only the last one has any effect, and the rest are ignored. (ENTRYPOINT here is just a different way to provide the single command; if you specify both ENTRYPOINT and CMD then the entrypoint becomes the main process and the command is passed as arguments to it.)
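For example, only the last CMD here survives in the image metadata (a minimal sketch, not your image):
FROM busybox
# this CMD is ignored: a later CMD replaces it
CMD ["echo", "first"]
# this is the only command the container actually runs
CMD ["echo", "second"]
Building this and running the container prints only "second".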
Given the example you show, I'd run this in three steps:
Start only the database
docker-compose up -d elasticsearch
Run the "seed" jobs. For simplicity I'd probably run them locally
ELASTICSEARCH_URL=http://localhost:9200 node data_scripts/createElasticIndex.js
(using localhost since the script runs directly on the physical host, and the port published by the container) but if you prefer you can also run them via the Docker setup
docker-compose run web node data_scripts/createElasticIndex.js
Once the database is set up, start your whole application
docker-compose up -d
This will leave the running Elasticsearch unaffected, and start the other containers.
An alternate pattern, if you're confident you want to run these "seed" or migration jobs on every single container start, is to write an entrypoint script. The basic pattern here is to start your server via CMD as you have it now, but to write a script that does the first-time setup, ending in exec "$@" to run the command, and make that script your container's ENTRYPOINT. This could look like
#!/bin/sh
# I am entrypoint.sh
# Stop immediately if any of these scripts fail
set -e
# Run the migration/seed jobs
node data_scripts/createElasticIndex.js
node data_scripts/parseGenesToElastic.js
# Run the CMD / `docker run ...` command
exec "$#"
# I am Dockerfile
FROM node:11.9.0
...
# if not already copied and executable:
COPY entrypoint.sh ./
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["npm", "start"]
Since the entrypoint script really is just a shell script, you can use arbitrary logic in it, for instance only running the seed jobs when the main command is the npm server (if [ "$1" = npm ]; then ... fi) but not for debugging shells (docker run --rm -it myimage bash); see the sketch below.
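A sketch of that conditional version (assuming the CMD begins with npm, as above):
#!/bin/sh
set -e
# only run the seed jobs when the main command is the npm server,
# not for e.g. an interactive debugging shell
if [ "$1" = npm ]; then
  node data_scripts/createElasticIndex.js
  node data_scripts/parseGenesToElastic.js
fi
exec "$@"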
Your Dockerfile also looks like you might be trying to start three different servers (PredictionServer.js, PWAServerInMem.js, and whatever npm start starts); you can run these in three separate containers from the same image by overriding command: in each docker-compose.yml service block, as sketched below.
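That could look like this (a sketch; the extra service names are illustrative):
services:
  web:
    image: webapp
    command: ["npm", "start"]
  prediction:
    image: webapp
    command: ["node", "servers/PredictionServer.js"]
  pwa:
    image: webapp
    command: ["node", "--max-old-space-size=8192", "servers/PWAServerInMem.js"]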
Your docker-compose.yml will be simpler if you remove the networks: (unless it's vital to you that your Elasticsearch and Redis can't talk to each other; it usually isn't) and the expose: declarations (which do nothing, especially in the presence of ports:).
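With those removed, a trimmed-down version of your file could look like this (a sketch):
version: "3"
services:
  web:
    image: webapp
    ports:
      - "1337:1337"
      - "4000:85"
    depends_on:
      - redis
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  redis:
    image: redis
    ports:
      - "6379:6379"
  elasticsearch:
    image: elasticsearch:2.4
    ports:
      - "9200:9200"
      - "9300:9300"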
I faced the same issue, and I started my journey using the same approach posted here.
I was redesigning some queries, which required frequent changes to the index settings and property mappings, plus changes to the dataset I was using as an example.
I searched for a Docker image that I could easily add to my docker-compose file to let me change anything in either the index settings or the example dataset. Then I could simply run docker-compose up, and I'd see the changes in my local Kibana.
I found nothing, so I ended up creating one on my own. I'm sharing it here because it could be an answer, and I really hope it helps someone else with the same issue.
You can use it as follows:
elasticsearch-seed:
  container_name: elasticsearch-seed
  image: richardsilveira/elasticsearch-seed
  environment:
    - ELASTICSEARCH_URL=http://elasticsearch:9200
    - INDEX_NAME=my-index
  volumes:
    - ./my-custom-index-settings.json:/seed/index-settings.json
    - ./my-custom-index-bulk-payload.json:/seed/index-bulk-payload.json
Point your index settings file - which should contain both the index settings and the type mappings, as usual - and your bulk payload file, which should contain your example data.
There are more instructions at the elasticsearch-seed GitHub repository.
It could even be used in E2E and integration test scenarios running in CI pipelines.
After spending hours trying to make it happen, I just can't make it work. I'm desperate for help, as I couldn't find any questions related to my issue.
I've developed a Node.js web app for my university. The IT department needs me to prepare a Docker image shared on Docker Hub (although I chose GitHub Packages) and a docker-compose file so it can be run easily. I tried to host the app on my Raspberry Pi, but when I pull the image (with docker-compose.yaml, Dockerfile and .env present) it fails during the build process:
npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
and during compose up process:
pi@raspberrypi:~/projects $ docker-compose up
Starting mysql ... done
Starting backend ... done
Attaching to mysql, backend
backend | exec /usr/local/bin/docker-entrypoint.sh: exec format error
mysql | 2022-09-22 08:04:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
mysql | 2022-09-22 08:04:48+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mysql | 2022-09-22 08:04:48+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
mysql | '/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
backend exited with code 1
I executed bash inside my Docker container (on my dev machine) so I'm sure that /usr/src/app folder structure matches my app folder structure.
What's wrong with my solution? Should I provide more files than just docker-compose.yaml, Dockerfile and .env?
Dockerfile:
FROM node:18-alpine
WORKDIR /usr/src/app
COPY . ./
RUN npm i && npm cache clean --force
RUN npm run build
ENV NODE_ENV production
CMD [ "node", "dist/main.js" ]
EXPOSE ${PORT}
docker-compose.yaml:
version: "3.9"
services:
backend:
command: npm run start:prod
container_name: backend
build:
context: .
dockerfile: Dockerfile
image: ghcr.io/rkn-put/web-app/docker-backend/test
ports:
- ${PORT}:${PORT}
depends_on:
- mysql
environment:
- NODE_ENV=${NODE_ENV}
- PORT=${PORT}
- ORIGIN=${ORIGIN}
- DB_HOST=${DB_HOST}
- DB_PORT=${DB_PORT}
- DB_NAME=${DB_NAME}
- DB_USERNAME=${DB_USERNAME}
- DB_PASSWORD=${DB_PASSWORD}
- DB_SYNCHRONIZE=${DB_SYNCHRONIZE}
- EXPIRES_IN=${EXPIRES_IN}
- SECRET=${SECRET}
- GMAIL_USER=${GMAIL_USER}
- GMAIL_CLIENT_ID=${GMAIL_CLIENT_ID}
- GMAIL_CLIENT_SECRET=${GMAIL_CLIENT_SECRET}
- GMAIL_REFRESH_TOKEN=${GMAIL_REFRESH_TOKEN}
- GMAIL_ACCESS_TOKEN=${GMAIL_ACCESS_TOKEN}
mysql:
image: mysql:latest
container_name: mysql
hostname: mysql
restart: always
ports:
- ${DB_PORT}:${DB_PORT}
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=${DB_NAME}
- MYSQL_USER=${DB_USERNAME}
- MYSQL_PASSWORD=${DB_PASSWORD}
volumes:
- ./mysql:/var/lib/mysql
cap_add:
- SYS_NICE
Even if this is not a single clear solution, there are multiple things that you should fix (and understand), and then things should work.
You say: "but when I pull the image (with docker-compose.yaml, Dockerfile and .env present) it fails during build process". This is actually where the biggest confusion happens. If you pull, there should be no build anymore.
You build locally, you push with docker-compose push, and the image you have on GitHub is ready to use. Because of this, on the target machine (where you want to run the project) you don't need to build anymore - therefore you don't need a Dockerfile anymore.
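A sketch of that workflow on your development machine (assuming you are already logged in to the registry):
# build all images defined in docker-compose.yml, then push them
docker-compose build
docker-compose push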
The docker-compose.yml that you deliver should not have the build section for your app. Only the image name so that docker-compose knows where to pull the image from.
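The delivered file would keep only the image reference, roughly like this (a sketch trimmed to the relevant keys):
version: "3.9"
services:
  backend:
    image: ghcr.io/rkn-put/web-app/docker-backend/test
    ports:
      - ${PORT}:${PORT}
    depends_on:
      - mysql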
In local (your development environment) you should have the same docker-compose.yml without the build section, but also a file docker-compose.override.yml that should look like:
version: "3.9"
services:
backend:
build:
context: .
docker-compose automatically merges docker-compose.yml and docker-compose.override.yml when it finds the second one. That's also why it is important to not deliver the override file.
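You can check what Compose will actually use by printing the merged configuration:
# prints the effective configuration after merging both files
docker-compose config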
Only this should make your application work on the target machine. Remember all you need there is docker-compose.yml (no build section) and the .env.
Other points that you might want to address:
dockerfile: Dockerfile - not needed, since that is the default
command: npm run start:prod - if you always overwrite the command, why not just put it this way in the Dockerfile? If you have a good reason to override it, then leave it
EXPOSE ${PORT} - you are not declaring PORT anywhere in your Dockerfile. Just make the app run on port 80 and expose port 80
Read the docs and save yourself some typing: if the env variables have the same names as in .env, docker-compose is clever enough to pick them up if you only declare them (see the sketch after this list)
Don't expose the mysql ports on the host: ${DB_PORT}:${DB_PORT}
Consider using a named volume for mysql instead of a folder. If you use a folder, maybe place it in a different location so that you don't delete it by mistake
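A sketch of that environment shorthand (assuming the same variable names exist in .env, per the behavior described above):
services:
  backend:
    environment:
      - NODE_ENV
      - PORT
      - DB_HOST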
I have two problems with a Flask app in Docker. The application runs slowly and freezes after it finishes the last request (for example: the first route works fine, but clicking another link/page freezes the app; if I go to the homepage via the URL and load the page again, it works OK). Outside Docker the app runs very fast.
The second problem is that Docker doesn't sync files in the container after I change them.
# Dockerfile
FROM python:3.9
# set work directory
WORKDIR /base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update
RUN pip install --upgrade pip
COPY ./requirements.txt /base/requirements.txt
COPY ./base_app.py /base/base_app.py
COPY ./config.py /base/config.py
COPY ./certs/ /base/certs/
COPY ./app/ /base/app/
COPY ./tests/ /base/tests/
RUN pip install -r requirements.txt
# docker-compose
version: '3.3'
services:
  web:
    build: .
    command: tail -f /dev/null
    volumes:
      - ${PWD}/app/:/usr/src/app/
    networks:
      - flask-network
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    networks:
      - flask-network
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always
networks:
  flask-network:
    driver: bridge
You have a couple of significant errors in the code you show.
The first problem is that your application doesn't run at all: the Dockerfile is missing the CMD line that tells Docker what to run, and you override it in the Compose setup with a meaningless tail command. You should generally set this in the Dockerfile:
CMD ["./base_app.py"]
You can remove most of the Compose settings you have. You do not need command: (it's in the Dockerfile), volumes: (what you have is ineffective and the code is in the image anyways), or networks: (Compose provides a network named default; delete all of the networks: blocks in the file).
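With those removed, the Compose file could shrink to something like this (a sketch):
version: '3.3'
services:
  web:
    build: .
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ./postgres_database:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always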
The second problem is that Docker doesn't sync files in the container after I change them.
I don't usually recommend trying to do actual development in Docker. You can tell Compose to just start the database
docker-compose up -d flaskdb
and then you can access it from the host (PGHOST=localhost, PGPORT=5432). This means you can use an ordinary non-Docker Python virtual environment for development.
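A sketch of that workflow (assuming your app reads the standard libpq environment variables):
# start only the database container
docker-compose up -d flaskdb
# develop in an ordinary virtual environment on the host
python -m venv venv
. venv/bin/activate
pip install -r requirements.txt
# point the app at the database's published port
PGHOST=localhost PGPORT=5432 python base_app.py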
If you do want to try to use volumes: to simulate a live development environment (you talk about performance; this specific path can be quite slow on non-Linux hosts) then you need to make sure the left side of volumes: is the host directory with your code (probably .), the right side is the container directory (your Dockerfile uses /base), and your Dockerfile doesn't rearrange, modify, or generate the files at all (the bind mount hides all of it).
# don't run the application in the image; use the Docker infrastructure
# to run something else
volumes:
# v-------- left side: host path (matches COPY source directory)
- .:/base
# ^^^^-- right side: container path (matches WORKDIR/destination directory)
I've got a simple Node / React project. I'm trying to use Docker to create two containers, one for the server, and one for the client, each with their own Dockerfile in the appropriate directory.
docker-compose.yml
version: '3.9'
services:
  client:
    image: node:14.15-buster
    build:
      context: ./src
      dockerfile: Dockerfile.client
    ports:
      - '3000:3000'
      - '45799:45799'
    volumes:
      - .:/app
    tty: true
  server:
    image: node:14.15-buster
    build:
      context: ./server
      dockerfile: Dockerfile.server
    ports:
      - '3001:3001'
    volumes:
      - .:/app
    depends_on:
      - redis
    links:
      - redis
    tty: true
  redis:
    container_name: redis
    image: redis
    ports:
      - '6379'
src/Dockerfile.client
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:client
server/Dockerfile.server
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:server
After building and starting the containers, both containers run the same command, seemingly at random. Either both run yarn start:server or yarn start:client. The logs clearly detail duplicate startup commands and ports being used. Requests to either port 3000 (client) or 3001 (server) confirm that the same one is being used in both containers. If I change the command in both Dockerfiles to echo the respective filename (Dockerfile.server! or Dockerfile.client!), startup reveals only one Dockerfile being used for both containers. I am also running the latest version of Docker on Mac.
What is causing docker-compose to use the same Dockerfile for both containers?
After a lengthy and painful bout of troubleshooting, I narrowed the issue down to duplicate image references. image: node:14.15-buster for each service in docker-compose.yml and FROM node:14.15-buster in each Dockerfile.
Why this would cause this behavior is unclear, but after removing the image references in docker-compose.yml and rebuilding / restarting, everything works as expected.
When you run docker-compose build with both image and build properties set on a service, it will build an image according to the build property and then tag the image according to the image property.
In your case, you have two services building different images and tagging them with the same tag node:14.15-buster. One will overwrite the other.
This probably has the additional unintended consequence of causing your next image to be built on top of the previously built image instead of the true node:14.15-buster.
Then when you start the service, both containers will use the image tagged node:14.15-buster.
From the docs:
If you specify image as well as build, then Compose names the built image with the webapp and optional tag specified in image
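If you do want to keep image: alongside build:, give each service a distinct tag instead of reusing the base image's name - a sketch (the tag names are illustrative):
services:
  client:
    build:
      context: ./src
      dockerfile: Dockerfile.client
    image: myapp-client:latest
  server:
    build:
      context: ./server
      dockerfile: Dockerfile.server
    image: myapp-server:latest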
When I try to build a container using docker-compose like so
nginx:
  build: ./nginx
  ports:
    - "5000:80"
the COPY instruction isn't working when my Dockerfile simply looks like this
FROM nginx
#Expose port 80
EXPOSE 80
COPY html /usr/share/nginx/test
#Start nginx server
RUN service nginx restart
What could be the problem?
It seems that docker-compose up doesn't rebuild the image by default: it keeps reusing the previously built image, which never picks up your changes.
Sadly the documentation regarding something like this is poor. The way to fix this is to build it first with no cache and then bring it up, like so:
docker-compose build --no-cache
docker-compose up -d
I had the same issue, and a one-liner that does it for me is:
docker-compose up --build --remove-orphans --force-recreate
--build does the biggest part of the job and triggers the build.
--remove-orphans is useful if you have changed the name of one of your services. Otherwise, you might have a warning leftover telling you about the old, now wrongly named service dangling around.
--force-recreate is a little drastic but will force the recreation of the containers.
Reference: https://docs.docker.com/compose/reference/up/
Warning: I could do this on my project because I was toying around with really small container images. Recreating everything, every time, could take significant time depending on your situation.
If you need docker-compose to copy files every time on the up command, I suggest declaring a volumes option for your service in the compose.yml file. It will persist your data and also copy files from that folder into the container.
More info in the volume configuration reference.
server:
  image: server
  container_name: server
  build:
    context: .
    dockerfile: server.Dockerfile
  env_file:
    - .envs/.server
  working_dir: /app
  volumes:
    - ./server_data:/app # <= here it is
  ports:
    - "9999:9999"
  command: ["command", "to", "run", "the", "server", "--some-options"]
Optionally, you can add the following section to the end of the compose.yml file. It will keep that folder persisted: the data in it will not be removed by the docker-compose stop or docker-compose down commands. To remove the folder you will need to run the down command with the additional flag -v:
docker-compose down -v
For example, including volumes:
services:
  server:
    image: server
    container_name: server
    build:
      context: .
      dockerfile: server.Dockerfile
    env_file:
      - .envs/.server
    working_dir: /app
    volumes:
      - ./server_data:/app # <= here it is
    ports:
      - "9999:9999"
    command: ["command", "to", "run", "the", "server", "--some-options"]
volumes: # at the root level, the same as services
  server_data:
I have this folder structure:
/home/me/composetest
/home/me/composetest/mywildflyimage
Inside composetest I have this docker-compose.yml:
web:
  image: test/mywildfly
  container_name: wildfly
  ports:
    - "8080:8080"
    - "9990:9990"
Inside mywildflyimage I have this Dockerfile:
FROM jboss/wildfly
EXPOSE 8080 9990
ADD standalone.xml /opt/jboss/wildfly/standalone/configuration/
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
If I run
docker build -t test/mywildfly .
docker-compose up
everything works great, and the management interface is bound to 0.0.0.0 (the -bmanagement 0.0.0.0 part of the CMD command).
If I change my docker-compose.yml:
web:
  build: mywildflyimage
  container_name: wildfly
  ports:
    - "8080:8080"
    - "9990:9990"
and run
docker-compose up
It still boots, but the admin part is not bound to 0.0.0.0 anymore (this is the default behaviour for the image I inherited from).
Why does it stop working when I use the build command in the docker-compose.yml?
EDIT: It seems that it is ignoring all my Dockerfile commands.
Run docker-compose build after changing docker-compose.yml, and then docker-compose up.
Before you type docker-compose up, you should build images with docker-compose build [options] [SERVICE...].
Options:
--force-rm Always remove intermediate containers.
--no-cache Do not use cache when building the image.
--pull Always attempt to pull a newer version of the image.
In your case, ex: docker-compose build --no-cache web