I have a Python app and a Java app that I want to run inside containers.
I have a folder named pass-hash with:
--h2o-start: a folder containing the Dockerfile I use to start h2o.jar, which runs a machine learning server.
--model-generator: a folder containing passhash.py and a data.csv file.
The passhash.py app calls h2o.import_file("/var/my-data/data.csv"), which reads the data.csv file from the my-data folder I created in the container and generates a POJO file from it.
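For context, a minimal sketch of what passhash.py might look like (only the import_file() path comes from the question; the model type, target column, and POJO output path are assumptions):
# passhash.py -- hypothetical sketch
import h2o
from h2o.estimators import H2OGradientBoostingEstimator
# Connect to the H2O server started by the h2o-start container
h2o.init(ip="h2o-start", port=54321)
# Read the shared CSV (this is the path from the question)
data = h2o.import_file("/var/my-data/data.csv")
# Train a model -- GBM and the last column as target are assumptions here
model = H2OGradientBoostingEstimator()
model.train(y=data.columns[-1], training_frame=data)
# Export the model as a POJO for the Java app to consume
model.download_pojo(path="/var/my-data")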
The h2o-start Dockerfile contains:
FROM openjdk:8
ADD h2o.jar h2o.jar
EXPOSE 54321
EXPOSE 54322
ENTRYPOINT ["java", "-jar", "h2o.jar"]
The model-generator Dockerfile contains:
FROM python:2.7-slim
WORKDIR /model-generator
ADD . /model-generator
RUN mkdir /var/my-data
COPY data.csv /var/my-data
RUN chmod 777 /var/my-data/data.csv
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 8080
ENV NAME World
CMD ["python", "passhash.py"]
The docker-compose.yml file contains:
version: "3"
services:
  h2o-start:
    image: milanpanic2/h2o-start
    build:
      context: ./h2o-start
    ports:
      - "54321:54321"
      - "54322:54322"
    volumes:
      - "home/data"
  model-generator:
    image: milanpanic2/model-generator
    build:
      context: ./model-generator
    ports:
      - "8080:8080"
    depends_on:
      - "h2o-start"
    volumes:
      - "csvdata:/var/my-data"
volumes:
  csvdata:
Docker volumes are designed to share folders between the host machine and Docker containers. If you copy a file to the volume's location on the host machine, it automatically becomes available inside the containers.
The syntax for a Docker volume mount is as follows:
-v /home/data:/data
In the syntax above, /home/data is a folder on the host machine, and /data is the corresponding folder inside the Docker container.
If you copy a file into the /home/data folder on the host machine, it automatically becomes available inside the container's /data folder.
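For instance, as a plain docker run command (the image and container names here are just placeholders):
docker run -v /home/data:/data --name my-container my-image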
Hope this is clear to you.
If you are using docker-compose, add a volumes entry as below:
volumes:
  - /home/data:/data
For example:
version: '3'
services:
  app:
    image: nginx:alpine
    ports:
      - 80:80
    volumes:
      - /home/data:/data
I don't know; that is the solution I came up with. Can you tell me a better solution for my problem? My problem is: I have a Python app that uses a data.csv file to generate a POJO machine learning model. When I specify the path to the data file for this Python app, it throws an exception saying the file doesn't exist. I also have another app, written in Java, that uses the generated POJO file and makes predictions based on that data. The Java app also updates the data.csv file every day. I want each app (microservice) to run in a separate container, but I want them both to use the data.csv file.
To answer this: you need to use volumes.
Try the code below.
This is your docker-compose file:
version: "3"
services:
h2o-start:
image: milanpanic2/h2o-start
build: context: ./h2o-start
ports:
- "54321:54321" - "54322:54322"
volumes:
- /home/data:/var/my-data
model-generator:
image: milanpanic2/model-generator
build: context: ./model-generator
ports:
- "8080:8080"
depends_on:
- "h2o-start"
volumes:
- /home/data:/var/my-data
This is your Dockerfile:
FROM python:2.7-slim
WORKDIR /model-generator
ADD . /model-generator
RUN mkdir /var/my-data
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 8080
ENV NAME World
CMD ["python", "passhash.py"]
And where is your Java Dockerfile?
Now just create a default data.csv file and copy it to the /home/data location on your host machine.
Then run the application and let me know.
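As a sketch of what the missing Java service could look like (predictor.jar and the folder name predictor are hypothetical), it would mount the same host folder so it can read the POJO and update data.csv:
FROM openjdk:8
ADD predictor.jar predictor.jar
ENTRYPOINT ["java", "-jar", "predictor.jar"]
And its entry in docker-compose.yml:
  predictor:
    build:
      context: ./predictor
    depends_on:
      - "model-generator"
    volumes:
      - /home/data:/var/my-data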
If you mean adding files at docker build time, take a look at the ADD and COPY instructions.
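For example, in a Dockerfile (COPY does a plain copy from the build context; ADD can additionally fetch URLs and auto-extract local tar archives):
COPY data.csv /var/my-data/data.csv
ADD archive.tar.gz /var/my-data/
The archive.tar.gz name is just an illustration; ADD would unpack it into /var/my-data/.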
I'm trying to run a Next.js project inside docker-compose. To take advantage of hot-reloading, I'm mounting the entire project into the Docker container as a volume.
So far, so good!
This is where things are starting to get tricky: For this particular project, it turns out Apple Silicon users need a .babelrc file included in their dockerized app, but NOT in the files on their computer.
All other users do not need a .babelrc file at all.
To sum up, this is what I'd like to be able to do:
hot reload project (hence ./:/usr/src/app/)
have an environment variable write content to /usr/src/app/.babelrc.
not have a .babelrc in the host's project root.
My attempt at solving this was to include the .babelrc under ci-cd/.babelrc in the host file system.
Then I tried mounting the file as a volume, like - ./ci-cd/.babelrc:/usr/src/app/.babelrc. But then a .babelrc file gets written back to the root of the project in the host filesystem.
I also tried including COPY ./ci-cd/.babelrc /usr/src/app/.babelrc in the Dockerfile, but it seems to be overwritten by docker-compose's volumes property.
Here's my Dockerfile:
FROM node:14
WORKDIR /usr/src/app/
COPY package.json .
RUN npm install
And the docker-compose.yml:
version: "3.8"
services:
# Database image
psql:
image: postgres:13
restart: unless-stopped
ports:
- 5432:5432
# image for next.js project
webapp:
build: .
command: >
bash -c "npm run dev"
ports:
- 3002:3002
expose:
- 3002
depends_on:
- testing-psql
volumes:
- ./:/usr/src/app/
I have two problems with a Flask app in Docker. The application runs slowly and freezes after the last request finishes (for example: the first route works fine, but clicking a link to another page makes the app freeze; if I go back to the homepage via the URL and load the page again, it works OK). Outside Docker the app runs very fast.
The second problem is that Docker does not sync files into the container after I change them.
# Dockerfile
FROM python:3.9
# set work directory
WORKDIR /base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update
RUN pip install --upgrade pip
COPY ./requirements.txt /base/requirements.txt
COPY ./base_app.py /base/base_app.py
COPY ./config.py /base/config.py
COPY ./certs/ /base/certs/
COPY ./app/ /base/app/
COPY ./tests/ /base/tests/
RUN pip install -r requirements.txt
# docker-compose
version: '3.3'
services:
  web:
    build: .
    command: tail -f /dev/null
    volumes:
      - ${PWD}/app/:/usr/src/app/
    networks:
      - flask-network
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    networks:
      - flask-network
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always
networks:
  flask-network:
    driver: bridge
You have a couple of significant errors in the code you show.
The first problem is that your application doesn't run at all: the Dockerfile is missing the CMD line that tells Docker what to run, and you override it in the Compose setup with a meaningless tail command. You should generally set this in the Dockerfile:
CMD ["./base_app.py"]
You can remove most of the Compose settings you have. You do not need command: (it's in the Dockerfile), volumes: (what you have is ineffective, and the code is in the image anyway), or networks: (Compose provides a network named default; delete all of the networks: blocks in the file).
Second problem is docker not synch files in container after change files.
I don't usually recommend trying to do actual development in Docker. You can tell Compose to just start the database
docker-compose up -d flaskdb
and then you can access it from the host (PGHOST=localhost, PGPORT=5432). This means you can use an ordinary non-Docker Python virtual environment for development.
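For example, under that workflow (the virtual-environment commands are generic, not from the question; how the database address is actually passed depends on the app's config.py):
docker-compose up -d flaskdb
python3 -m venv venv && . venv/bin/activate
pip install -r requirements.txt
PGHOST=localhost PGPORT=5432 python base_app.py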
If you do want to try to use volumes: to simulate a live development environment (you talk about performance; this specific path can be quite slow on non-Linux hosts) then you need to make sure the left side of volumes: is the host directory with your code (probably .), the right side is the container directory (your Dockerfile uses /base), and your Dockerfile doesn't rearrange, modify, or generate the files at all (the bind mount hides all of it).
# don't run the application in the image; use the Docker infrastructure
# to run something else
volumes:
# v-------- left side: host path (matches COPY source directory)
- .:/base
# ^^^^-- right side: container path (matches WORKDIR/destination directory)
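Putting those pieces together, a trimmed-down Compose file for this setup might look like this (a sketch based on the advice above, not a drop-in file):
version: '3.3'
services:
  web:
    build: .            # CMD now comes from the Dockerfile
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always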
I'm making an image to host a PHP application. I'm using COPY to populate /var/www/html with the app files and creating a VOLUME /var/www/html to be able to mount the dir on the host and edit files like config.
But:
When I mount the volume in docker-compose.yml, the directory is empty.
When I omit the volumes entry in docker-compose.yml and connect to the container's shell, the directory /var/www/html is filled.
I have already read tons of examples and documentation but, honestly, I don't know what is wrong.
dockerfile:
FROM php:8.0-apache-buster
LABEL description="OCOMON 3.2 Frontend (noDB)"
RUN docker-php-ext-install pdo_mysql
WORKDIR /var/www/html/ocomon
COPY ./ocomon .
VOLUME /var/www/html/ocomon
docker-compose.yml:
version: '3.5'
services:
  ocomon:
    image: ocomon:3.2
    container_name: ocomon
    volumes:
      - ./volumes/ocomon/ocomon:/var/www/ocomon
    ports:
      - 4682:80
Assuming your host's directory is ${PWD}/www/html, you need only provide the volumes value in docker-compose.yml, and it should be:
Dockerfile:
FROM php:8.0-apache-buster
LABEL description="OCOMON 3.2 Frontend (noDB)"
RUN docker-php-ext-install pdo_mysql
WORKDIR /var/www/html/ocomon
COPY ./www/html/ocomon .
and:
docker-compose.yml:
version: '3.5'
services:
  ocomon:
    image: ocomon:3.2
    container_name: ocomon
    volumes:
      - ${PWD}/www/html:/var/www/html
    ports:
      - 4682:80
Explanation
The Dockerfile VOLUME instruction creates a volume mount point in the image. By following WORKDIR (which always creates the path if it does not exist) with VOLUME, you shadow whatever was in, or created by, WORKDIR. NOTE: this happens during image build.
Docker Compose's volumes: mounts a directory from the host into the container. The syntax is volumes: - ${HOST_PATH}:${CONTAINER_PATH}. NOTE: this happens when the container runs.
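A small annotated Dockerfile makes the timing concrete (a sketch of the build in question, not new behavior):
FROM php:8.0-apache-buster
WORKDIR /var/www/html/ocomon   # build time: creates the directory
COPY ./ocomon .                # build time: populates it
VOLUME /var/www/html/ocomon    # build time: only declares a mount point
At run time, a compose entry like - ${PWD}/www/html:/var/www/html then mounts the host directory over that path, shadowing whatever the image put there.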
I am trying to start a Docker container using a Redis DB from a persistent copy saved on my local machine.
I currently have a Docker container loading Redis with a volume using this docker-compose.yml, but it misses my redis.conf (which contains the loadmodule command), located in the volume alongside the RDB file:
version: '3'
services:
  redis:
    image: redis
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
This begins to load the RDB but crashes out because the data uses the time series module.
I can load a separate Docker container with a fresh Redis DB that has the time series module loaded, using the following Dockerfile. My issue is that I can't figure out how to do both at the same time!
Is there some way of calling a Dockerfile from a docker-compose.yml, or of declaring the volume in the Dockerfile?
That, or should I be creating my own image that I can call in the docker-compose.yml?
Any help would be appreciated; I'm honestly just going round in circles, I think.
dockerfile
# BUILD redisfab/redistimeseries:${VERSION}-${ARCH}-${OSNICK}
ARG REDIS_VER=6.0.1
# stretch|bionic|buster
ARG OSNICK=buster
# ARCH=x64|arm64v8|arm32v7
ARG ARCH=x64
#----------------------------------------------------------------------------------------------
FROM redisfab/redis:${REDIS_VER}-${ARCH}-${OSNICK} AS builder
ARG REDIS_VER
ADD ./ /build
WORKDIR /build
RUN ./deps/readies/bin/getpy2
RUN ./system-setup.py
RUN make fetch
RUN make build
#----------------------------------------------------------------------------------------------
FROM redisfab/redis:${REDIS_VER}-${ARCH}-${OSNICK}
ARG REDIS_VER
ENV LIBDIR /usr/lib/redis/modules
WORKDIR /data
RUN mkdir -p "$LIBDIR"
COPY --from=builder /build/bin/redistimeseries.so "$LIBDIR"
EXPOSE 6379
CMD ["redis-server", "--loadmodule", "/usr/lib/redis/modules/redistimeseries.so"]
EDIT:
OK, a slight improvement: I can call a redis-timeseries image in the docker-compose.yml:
services:
  redis:
    image: redislabs/redistimeseries
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
This is a start; however, I still need to increase the maximum number of databases, which I have done through redis.conf in the past.
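Since the redis.conf already lives in the mounted /data folder, one possible fix (a sketch; untested) is to point the server at it with command:, which overrides the image's default startup command:
services:
  redis:
    image: redislabs/redistimeseries
    container_name: "redis"
    command: ["redis-server", "/data/redis.conf"]
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
Because that redis.conf contains the loadmodule line, the module still loads, and directives such as databases can be set in the same file.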
You can just have docker-compose build your Dockerfile directly. Assume your docker-compose file is in a folder called myproject. Also assume your Dockerfile is in a folder called myredis, and that myredis is inside the myproject folder. Then you can replace this line in your docker-compose file:
image: redis
with:
build: ./myredis
That will build and use your custom image.
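In context, the compose file from the question would become something like this (a sketch; the ./myredis path assumes the layout described above):
version: '3'
services:
  redis:
    build: ./myredis
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data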
I am trying to containerize the front-end of my website and automate its deployment. My goal is to be able to have a new image be generated and hosted when a change is pushed, and have the server automatically fetch it and restart the container. Here are the steps that I am taking:
I create the image by first building my Node application and then bundling the distribution and nginx configuration files into the latest linuxserver/letsencrypt image. This is the Dockerfile:
# Use the NodeJS image as builder
FROM node:alpine AS builder
# Create the workspace
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy the package file and source code
COPY package.json /usr/src/app
COPY . ./
# Install dependencies
RUN npm install
# Build the application
RUN npm run build
# The nginx server, this builds the final image
FROM linuxserver/letsencrypt
# Copy the nginx configuration
COPY ./config/nginx.conf /config
# Copy the output of the builder
COPY --from=builder /usr/src/app/dist /config/www
# Inform Docker to listen on port 443 and 80
EXPOSE 443 80
This image is uploaded to GitHub's package registry and I poll for updates using Watchtower.
The image is started using this docker-compose file:
version: "3"
services:
...
frontend:
image: [IMAGE]
container_name: frontend
cap_add:
- NET_ADMIN
environment:
- PUID=1000
- PGID=1000
- TZ=[TIMEZONE]
- URL=[URL]
- SUBDOMAINS=www,
- VALIDATION=http
ports:
- 443:443
- 80:80
volumes:
- ./frontend:/config
restart: unless-stopped
...
The issue is that the files that were packaged into the image using the COPY instruction are being overwritten when I use the following line in my docker-compose:
volumes:
  - ./frontend:/config
If I remove that section from my docker-compose file, everything works fine; however, this is not a solution because that folder stores important data.
I have read that mounting a volume completely overwrites any previous data; however, I like that I can easily load the image onto my server with all the required files already embedded. Is there anything I can do to fix my issue, or am I misusing/misunderstanding Docker images?
I have tried setting the volume to read-only as suggested here; however, this did not work and instead caused the container to continually stop and restart.
I have also briefly read about bind mounts and am wondering if they would be of any use.
This behavior is expected. Docker mounts work the same way as Linux mounts: the source directory is mounted over the target directory, hiding the target's previous contents.
My suggestion is to use another destination directory for your volume, e.g.
volumes:
  - ./frontend:/someotherdir
And then adjust your nginx configuration to look for JS files there.
I found out that I could retain the data in the image by first creating a named volume (when an empty named volume is mounted for the first time, Docker copies the image's existing content at that path into it):
volumes:
  frontend_data:
And then mounting the container to that volume:
services:
  frontend:
    ...
    volumes:
      - frontend_data:/config
    ...