I'm trying to install cron via my Dockerfile so that docker-compose can create a dedicated cron container (built from the same image, but with a different entrypoint) that regularly spins up another container to run a script, then removes it. I'm following the "Separating Cron From Your Application Services" section of this guide: https://www.cloudsavvyit.com/9033/how-to-use-cron-with-your-docker-containers/
I know that order of operations is important, and I wonder if I have it misconfigured in my Dockerfile:
FROM swift:5.3-focal as build
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -q update \
&& apt-get -q dist-upgrade -y \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
RUN apt-get update && apt-get install -y cron
COPY example-crontab /etc/cron.d/example-crontab
RUN chmod 0644 /etc/cron.d/example-crontab &&\
crontab /etc/cron.d/example-crontab
COPY ./Package.* ./
RUN swift package resolve
COPY . .
RUN swift build --enable-test-discovery -c release
WORKDIR /staging
RUN cp "$(swift build --package-path /build -c release --show-bin-path)/Run" ./
RUN [ -d /build/Public ] && { mv /build/Public ./Public && chmod -R a-w ./Public; } || true
RUN [ -d /build/Resources ] && { mv /build/Resources ./Resources && chmod -R a-w ./Resources; } || true
# ================================
# Run image
# ================================
FROM swift:5.3-focal-slim
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true && \
apt-get -q update && apt-get -q dist-upgrade -y && rm -r /var/lib/apt/lists/*
RUN useradd --user-group --create-home --system --skel /dev/null --home-dir /app vapor
WORKDIR /app
COPY --from=build --chown=vapor:vapor /staging /app
USER vapor:vapor
EXPOSE 8080
ENTRYPOINT ["./Run"]
CMD ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
This is the relevant portion of my docker-compose file:
services:
app:
image: prizmserver:latest
build:
context: .
environment:
<<: *shared_environment
volumes:
- $PWD/.env:/app/.env
links:
- db:db
ports:
- '8080:8080'
# user: '0' # uncomment to run as root for testing purposes even though Dockerfile defines 'vapor' user.
command: ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
cron:
image: prizmserver:latest
entrypoint: /bin/bash
command: ["cron", "-f"]
This is my example-scheduled-task.sh:
#!/bin/bash
timestamp=`date +%Y/%m/%d-%H:%M:%S`
echo "System path is $PATH at $timestamp"
And this is my crontab file:
*/5 * * * * /usr/bin/sh /example-scheduled-task.sh
My script example-scheduled-task.sh and my crontab example-crontab live in the same application folder as this Dockerfile and docker-compose.yml.
Why won't my cron container launch?
In a multi-stage build, only the final FROM stage is used to generate the resulting image.
For example, in the Dockerfile below, a.txt can only be seen in the first stage; it can't be seen in the final image.
Dockerfile:
FROM python:3.9-slim-buster
WORKDIR /tmp
RUN touch a.txt
RUN ls /tmp
FROM ubuntu:16.04
RUN ls /tmp
Execution:
# docker build -t abc:1 . --no-cache
Sending build context to Docker daemon 2.048kB
Step 1/6 : FROM python:3.9-slim-buster
---> c2f204720fdd
Step 2/6 : WORKDIR /tmp
---> Running in 1e6ed4ef521d
Removing intermediate container 1e6ed4ef521d
---> 25282e6f7ed6
Step 3/6 : RUN touch a.txt
---> Running in b639fcecff7e
Removing intermediate container b639fcecff7e
---> 04985d00ed4c
Step 4/6 : RUN ls /tmp
---> Running in bfc2429d6570
a.txt
tmp6_uo5lcocacert.pem
Removing intermediate container bfc2429d6570
---> 3356850a7653
Step 5/6 : FROM ubuntu:16.04
---> 065cf14a189c
Step 6/6 : RUN ls /tmp
---> Running in 19755da110b8
Removing intermediate container 19755da110b8
---> 890f13e709dd
Successfully built 890f13e709dd
Successfully tagged abc:1
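If a file from an earlier stage is needed in the final image, it has to be carried forward explicitly with COPY --from. A minimal sketch of the same example:

```dockerfile
FROM python:3.9-slim-buster AS first
WORKDIR /tmp
RUN touch a.txt

FROM ubuntu:16.04
# explicitly copy the artifact produced in the first stage
COPY --from=first /tmp/a.txt /tmp/a.txt
RUN ls /tmp
```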
Back to your example: you install cron and copy the crontab in the swift:5.3-focal build stage, but the final stage is based on swift:5.3-focal-slim, which won't have cron or any crontab.
EDIT:
For your setup, the compose service for cron also needs to be updated as follows:
cron:
image: prizmserver:latest
entrypoint: cron
command: ["-f"]
cron doesn't need /bin/bash to start it; overriding the entrypoint with cron directly does the trick.
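Putting it together, a sketch of what the runtime stage could look like with cron moved into it (untested; it assumes the same example-crontab file from the question):

```dockerfile
FROM swift:5.3-focal-slim
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true && \
    apt-get -q update && \
    apt-get -q install -y cron && \
    rm -rf /var/lib/apt/lists/*
# install the schedule in the final image, not in the build stage
COPY example-crontab /etc/cron.d/example-crontab
RUN chmod 0644 /etc/cron.d/example-crontab && \
    crontab /etc/cron.d/example-crontab
# ... rest of the runtime stage (useradd, WORKDIR, COPY --from=build, etc.) unchanged
```

Note that cron normally needs to run as root, so the cron service may also need to override the USER vapor directive (e.g. user: '0' in the compose file).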
I have been trying for almost 3 weeks to build and run a Meteor app (bundle) using Docker. I have tried all the major recommendations in the Meteor forums, on Stack Overflow, and in the official documentation with no success. First I tried to make the bundle locally and put it inside the Docker image, with awful results; then I realized what I need to do is build the bundle inside Docker using a multi-stage Dockerfile. Here is the one I am using right now:
FROM chneau/meteor:alpine as meteor
USER 0
RUN mkdir -p /build /app
RUN chown 1000 -R /build /app
WORKDIR /app
COPY --chown=1000:1000 ./app .
COPY --chown=1000:1000 ./app/packages.json ./packages/
RUN rm -rf node_modules/*
RUN rm -rf .meteor/local/*
USER 1000
RUN meteor update --packages-only
RUN meteor npm ci
RUN meteor build --architecture=os.linux.x86_64 --directory /build
FROM node:lts-alpine as mid
USER 0
RUN apk add --no-cache python make g++
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=meteor /build/bundle .
USER 1000
WORKDIR /app/programs/server
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm i
FROM node:lts-alpine
USER 0
ENV TZ=America/Santiago
RUN apk add -U --no-cache tzdata && cp /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=mid /app .
USER 1000
ENV LC_ALL=C.UTF-8
ENV ROOT_URL=http://localhost/
ENV MONGO_URL=mongodb://locahost:21027/meteor
ENV PORT=3000
EXPOSE 3000
ENTRYPOINT [ "/usr/local/bin/node", "/app/main.js" ]
If I build with docker build -t my-image:v1 . and then run my app with docker run -d --env-file .dockerenv --publish 0.0.0.0:3000:3000 --name my-bundle my-image:v1, it exposes port 3000, but when I try to navigate to http://127.0.0.1:3000 in my browser it redirects to https://localhost. If I do docker exec -u 0 -it my-bundle sh, then apk add curl, then curl 127.0.0.1:3000, I can see the Meteor app running inside Docker.
Has anyone had this issue before? Am I missing some configuration? My bundle also works fine outside Docker with node main.js in the bundle folder, and I can visit http://127.0.0.1:3000 with my browser.
Thanks in advance
My api-server Dockerfile is the following:
FROM node:alpine
WORKDIR /src
COPY . .
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN yarn install
CMD yarn start:dev
After docker-compose up -d, I tried:
$ docker exec -it api-server sh
/src # curl 'http://localhost:3000/'
sh: curl: not found
Why is the command curl not found?
My host is Mac OS X.
The node:alpine image doesn't come with curl; you need to add an installation instruction to your Dockerfile:
RUN apk --no-cache add curl
The full example based on your Dockerfile would be:
FROM node:alpine
WORKDIR /src
COPY . .
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN apk --no-cache add curl
RUN yarn install
CMD yarn start:dev
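A quick way to verify the rebuilt image (api-server:dev is a hypothetical tag):

```shell
docker build -t api-server:dev .
# should print the path to the curl binary instead of "curl: not found"
docker run --rm api-server:dev sh -c 'command -v curl'
```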
I'm using Docker v19. I have this at the end of my web/Dockerfile ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y dos2unix
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
WORKDIR /app/
COPY requirements.txt requirements.txt
COPY entrypoint.sh entrypoint.sh
RUN tr -d '\r' < entrypoint.sh > /app/entrypoint2.sh
RUN ls /app/entrypoint2.sh
RUN ls /app/
RUN python -m pip install -r requirements.txt
RUN ls /app/entrypoint.sh
RUN dos2unix /app/entrypoint.sh
RUN ls /app/entrypoint.sh
RUN chmod +x /app/*.sh
RUN ls ./
ENTRYPOINT ["./entrypoint2.sh"]
However, when I run "docker-compose up" (which references the above), the entrypoint file can't be found, which is baffling because the line above ("ls ./") shows that it exists ...
Step 14/19 : RUN ls /app/entrypoint.sh
---> Running in db8c11ce3fad
/app/entrypoint.sh
Removing intermediate container db8c11ce3fad
---> c23e69de2a86
Step 15/19 : RUN dos2unix /app/entrypoint.sh
---> Running in 9e5bbd1c0b9a
dos2unix: converting file /app/entrypoint.sh to Unix format...
Removing intermediate container 9e5bbd1c0b9a
---> 32a069690845
Step 16/19 : RUN ls /app/entrypoint.sh
---> Running in 8a53e70f219b
/app/entrypoint.sh
Removing intermediate container 8a53e70f219b
---> 5444676f45fb
Step 17/19 : RUN chmod +x /app/*.sh
---> Running in 5a6b295217c8
Removing intermediate container 5a6b295217c8
---> 8b5bfa4fd75a
Step 18/19 : RUN ls ./
---> Running in 9df3acb7deb7
entrypoint.sh
entrypoint2.sh
requirements.txt
Removing intermediate container 9df3acb7deb7
---> 009f8bbe18c8
Step 19/19 : ENTRYPOINT ["./entrypoint2.sh"]
---> Running in 41a7e28641a7
Removing intermediate container 41a7e28641a7
---> 34a7d4fceb8b
Successfully built 34a7d4fceb8b
Successfully tagged maps_web:latest
WARNING: Image for service web was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
...
Creating maps_web_1 ... error
ERROR: for maps_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./entrypoint2.sh\": stat ./entrypoint2.sh: no such file or directory": unknown
How do I tell Docker how to reference that entrypoint file? The relevant section of the docker-compose.yml file is below:
web:
restart: always
build: ./web
ports: # to access the container from outside
- "8000:8000"
env_file: .env
environment:
DEBUG: 'true'
command: /usr/local/bin/gunicorn directory.wsgi:application --reload -w 2 -b :8000
volumes:
- ./web/:/app
depends_on:
- mysql
Based on the provided Dockerfile and docker-compose file, you are doing the following:
Copying the files (entrypoint + requirements) to /app
Installing the needed packages
Starting the container with a volume that overwrites the content of /app, which is what causes the issue.
To solve the issue, you have to do one of the following:
Copy all the data from ./web into the Docker image and remove the volume.
Dockerfile: add the following lines:
WORKDIR /app/
COPY ./web /app
docker-compose: remove the lines below:
volumes:
- ./web/:/app
The second option is to change the path of the entrypoint so that it does not conflict with the volume:
Dockerfile
RUN tr -d '\r' < entrypoint.sh > /entrypoint2.sh
RUN chmod +x /entrypoint2.sh
ENTRYPOINT ["/entrypoint2.sh"]
(Note the absolute path in ENTRYPOINT: a relative ./entrypoint2.sh would resolve under /app, which the volume hides.)
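The underlying behavior can be seen directly: a bind mount completely replaces the image's directory at runtime, so whatever the Dockerfile put in /app becomes invisible. Roughly (using the maps_web tag from the build log):

```shell
# without a mount, the files baked into the image are visible
docker run --rm maps_web ls /app
# with an (empty) host directory mounted over /app, they are hidden
mkdir -p /tmp/empty
docker run --rm -v /tmp/empty:/app maps_web ls /app
```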
Using Docker Toolbox, I've been trying for the past few days to update my container to run on Heroku.
I can't seem to update the code in the container.
Here are some of the things I've tried:
in Docker file change COPY . /app to ADD . /app
Removed the docker machine and created a VirtualBox machine:
`docker-machine rm default`
`docker-machine create --driver virtualbox default`
Built/ran the Docker image:
`docker build --no-cache -t appname .`
`docker run -it -p 8888:8080 appname`
Also tried docker build --no-cache .
Docker File
FROM python:3.6
# create and set working directory
RUN mkdir /app
WORKDIR /app
# Add current directory code to working directory
ADD . /app/
# set default environment variables
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
ENV PORT 8080
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
COPY . /app
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install ta-lib
EXPOSE 8080
CMD gunicorn appname.wsgi:application --bind 0.0.0.0:$PORT
I have this multi-stage Dockerfile. I build a program in the build image, tar up the contents, copy the tarball into the main image, and untar it. Once the container starts and I go into it, I can no longer find the files. However, using "ls" commands during the build I'm able to see that they were copied over and extracted. I don't know if this has anything to do with the fact that I mount the root directory of the application as a volume; I did that to speed up the builds after making code changes.
docker-compose.yml
version: "3"
services:
web:
build: .
ports:
- "5000:5000"
- "5432:5432"
volumes:
- ".:/code"
environment:
- PORT=5000
# TODO: Should be set to 0 for production
- PYTHONUNBUFFERED=1
Dockerfile
# Build lab-D
FROM gcc:8.2.0 as builder
RUN apt-get update && apt-get install -y libxerces-c-dev
WORKDIR /lab-d/
RUN git clone https://github.com/lab-d/lab-d.git
WORKDIR /lab-d/lab-d/
RUN autoreconf -if
RUN ./configure --enable-silent-rules 'CFLAGS=-g -O0 -w' 'CXXFLAGS=-g -O0 -w' 'LDFLAGS=-g -O0 -w'
RUN make
RUN make install
WORKDIR /lab-d/
RUN ls
RUN tar -czf labd.tar.gz lab-d
# Main Image
FROM library/python:3.7-stretch
RUN apt-get update && apt-get install -y python3 python3-pip \
postgresql-client \
# lab-D requires this library
libxerces-c-dev \
# For VIM
apt-file
RUN apt-file update && apt-get install -y vim
RUN pip install --upgrade pip
COPY requirements.txt /
RUN pip3 install --trusted-host pypi.org -r /requirements.txt
RUN pwd
RUN ls .
COPY --from=builder /lab-d/labd.tar.gz /code/labd.tar.gz
WORKDIR /code
RUN pwd
RUN ls .
RUN tar -xzf labd.tar.gz
RUN ls .
run pwd
RUN ls .
CMD ["bash", "start.sh"]
docker-compose build --no-cache
...
Step 19/29 : RUN pwd
---> Running in a856867bf69a
/
Removing intermediate container a856867bf69a
---> f1ee3dca8500
Step 20/29 : RUN ls .
---> Running in ee8da6874808
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
requirements.txt
root
run
sbin
srv
sys
tmp
usr
var
Removing intermediate container ee8da6874808
---> e8aec80955c9
Step 21/29 : COPY --from=builder /lab-d/labd.tar.gz /code/labd.tar.gz
---> 72d14ab4e01f
Step 22/29 : WORKDIR /code
---> Running in 17873e785c17
Removing intermediate container 17873e785c17
---> 57e8361767ca
Step 23/29 : RUN pwd
---> Running in abafd210abcb
/code
Removing intermediate container abafd210abcb
---> c6f430e1b362
Step 24/29 : RUN ls .
---> Running in 40b9e85261c2
labd.tar.gz
Removing intermediate container 40b9e85261c2
---> f9ee8e04d065
Step 25/29 : RUN tar -xzf labd.tar.gz
---> Running in 6e60ce7e1886
Removing intermediate container 6e60ce7e1886
---> 654d3c791798
Step 26/29 : RUN ls .
---> Running in 0f445b35f399
lab-d
labd.tar.gz
Removing intermediate container 0f445b35f399
---> 7863a15534b1
Step 27/29 : run pwd
---> Running in 9658c6170bde
/code
Removing intermediate container 9658c6170bde
---> 8d8e472a1b95
Step 28/29 : RUN ls .
---> Running in 19da5b77f5b6
lab-d
labd.tar.gz
Removing intermediate container 19da5b77f5b6
---> 140645efadbc
Step 29/29 : CMD ["bash", "start.sh"]
---> Running in 02b006bdf868
Removing intermediate container 02b006bdf868
---> 28d819321035
Successfully built 28d819321035
Successfully tagged -server_web:latest
start.sh
#!/bin/bash
# Start the SQL Proxy (Local-only)
pwd
ls .
./cloud_sql_proxy -instances=api-project-123456789:us-central1:sq=tcp:5432 \
-credential_file=./config/google_service_account.json &
ls .
# Override with CircleCI for other environments
cp .env.development .env
ls .
python3 -u ./server/server.py
In your Dockerfile, you
COPY --from=builder /lab-d/labd.tar.gz /code/labd.tar.gz
WORKDIR /code
RUN tar -xzf labd.tar.gz
But then your docker-compose.yml specifies
volumes:
- ".:/code"
That causes the current directory on the host to be mounted over /code in the container, and every last bit of work your Dockerfile does is hidden.
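One way to keep the extracted artifact visible (a sketch, assuming the rest of the Dockerfile stays the same) is to unpack it somewhere the ".:/code" mount doesn't cover, such as /opt:

```dockerfile
# /opt is outside the ".:/code" bind mount, so these files survive at runtime
COPY --from=builder /lab-d/labd.tar.gz /opt/labd.tar.gz
RUN tar -xzf /opt/labd.tar.gz -C /opt && rm /opt/labd.tar.gz
WORKDIR /code
```

Alternatively, drop the volumes: entry from docker-compose.yml when the live-reload behavior isn't needed.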