I have a Dockerfile which looks something like this
FROM node:8.15.0-alpine
# ARGs and LABELs...
RUN apk add --no-cache curl
COPY src/scripts scripts/
COPY src/scripts/update_mmdb.sh /etc/periodic/weekly/update_mmdb
RUN /etc/periodic/weekly/update_mmdb
RUN mkdir -p /root/myapp/bin
COPY src/docker/bin/program_running_in_a_separate_process /root/myapp/bin/
WORKDIR /root/myapp
# Some more COPY-ing
COPY src/myapp/docker/cmd.sh /
CMD /cmd.sh
When I go to run my container I get an ENOENT error when my app attempts to spin up program_running_in_a_separate_process.
I investigated by running docker exec -it myapp sh and attempting to run the binary directly. I was able to stat it successfully:
# stat program_running_in_a_separate_process
File: program_running_in_a_separate_process
Size: 14689327 Blocks: 28696 IO Block: 4096 regular file
Device: 39h/57d Inode: 15073498 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2020-01-14 01:08:09.000000000
Modify: 2020-01-14 01:08:09.000000000
Change: 2020-01-14 01:08:09.000000000
It came up when I ran ls from /root/myapp/, and I even ran chmod +x to be sure. But sure enough, when I go to run program_running_in_a_separate_process, I get sh: ./program_running_in_a_separate_process: not found.
How could this be?
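One way to investigate further is to check which program interpreter the binary requests; a "not found" error for a file that clearly exists usually means that interpreter (for example a glibc loader on a musl-based Alpine image) is missing from the image. A sketch, run inside the container, assuming the file and binutils packages can be installed for diagnostics:
# Inside the container (docker exec -it myapp sh):
apk add --no-cache file binutils
file ./program_running_in_a_separate_process
readelf -l ./program_running_in_a_separate_process | grep -i interpreter
# If the interpreter listed (an ld-linux-*.so path) does not exist in the image,
# sh reports "not found" even though the binary itself is present.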
Problem Description
I am unable to see any output from the cron job when I run docker-compose logs -f cron after running docker-compose up.
When I attached to the container using VS Code, I navigated to /var/log/cron.log and ran cat, and saw no output. Curiously, when I run crontab -l I see * * * * * /bin/sh get_date.sh as the output.
Description of Attempted Solution
Here is how I organized the project (it is over-engineered at the moment, for the sake of later extensibility):
├── config
│ └── crontab
├── docker-compose.yml
├── Dockerfile
├── README.md
└── scripts
└── get_date.sh
Here are the details on the above; the contents are simple. It is also my attempt to use a lean python:3.8-slim-buster Docker image so I can run bash or Python scripts (the latter not yet attempted):
crontab
* * * * * /bin/sh get_date.sh
get_date.sh
#!/bin/sh
echo "Current date and time is " "$(date +%D-%H:%M)"
docker-compose.yml
version: '3.8'
services:
  cron:
    build:
      context: .
      dockerfile: ./Dockerfile
Dockerfile
FROM python:3.8-slim-buster
#Install cron
RUN apt-get update \
&& apt-get install -y cron
# Copying script file into the container.
COPY scripts/get_date.sh .
# Giving executable permission to the script file.
RUN chmod +x get_date.sh
# Adding crontab to the appropriate location
ADD config/crontab /etc/cron.d/my-cron-file
# Giving executable permission to crontab file
RUN chmod 0644 /etc/cron.d/my-cron-file
# Running crontab
RUN crontab /etc/cron.d/my-cron-file
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Creating entry point for cron
CMD ["cron", "tail", "-f", "/var/log/cron.log"]
Things Attempted
I am new to getting cron working in a container environment. I am not getting any error messages, so I am not sure how to debug this issue beyond describing the behavior.
I have changed the content of crontab from * * * * * root bash get_date.sh to the above. I also checked Stack Overflow and found a similar issue here, but no clear solution was proposed as far as I could tell.
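So far the only quick checks I can think of are along these lines (a sketch; the service name cron matches the compose file above, and the commands assume only tools already in the image):
# What is PID 1 actually running? (output is NUL-separated)
docker-compose exec cron cat /proc/1/cmdline
# Is the crontab installed where I expect it?
docker-compose exec cron crontab -l
docker-compose exec cron ls -l /etc/cron.d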
Thanks kindly in advance.
References
Stackoverflow discussion on running cron inside of container
How to run cron inside of containers
You have several issues that are preventing this from working:
Your attempt to run tail is a no-op: with your CMD as written you're simply running the command cron tail -f /var/log/cron.log. In other words, you're running cron and providing tail -f /var/log/cron.log as arguments. If you want to run cron followed by the tail command, you would need to write it like this:
CMD ["sh", "-c", "cron && tail -f /var/log/cron.log"]
While the above will both start cron and run the tail command, you still won't see any log output...because Debian cron doesn't log to a file; it logs to syslog. You won't see any output in /var/log/cron.log unless you have a syslog daemon installed, configured, and running.
I would suggest this as an alternative:
Fix your syntax in config/crontab; for files installed in /etc/cron.d, you need to provide the username:
* * * * * root /bin/sh /usr/local/bin/get_date.sh
I'm also being explicit about the path here, rather than assuming our cron job and the COPY command have the same working directory.
There's another problem here: this script outputs to stdout, but that won't go anywhere useful (cron generally takes output from your cron jobs and then emails it to root). We can explicitly send the output to syslog instead:
* * * * * root /bin/sh /usr/local/bin/get_date.sh | logger
We don't need to make get_date.sh executable, since we're explicitly running it with the sh command.
We'll use busybox for a syslog daemon that logs to stdout.
That all gets us:
FROM python:3.8-slim-buster
# Install cron and busybox
RUN apt-get update \
&& apt-get install -y \
cron \
busybox
# Copying script file into the container.
COPY scripts/get_date.sh /usr/local/bin/get_date.sh
# Adding crontab to the appropriate location
COPY config/crontab /etc/cron.d/get_date
# Creating entry point for cron
CMD sh -c 'cron && busybox syslogd -n -O-'
If we build an image from this, start a container, and leave it running for a while, we see as output:
Sep 22 00:17:52 701eb0bd249f syslog.info syslogd started: BusyBox v1.30.1
Sep 22 00:18:01 701eb0bd249f authpriv.err CRON[7]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Sep 22 00:18:01 701eb0bd249f authpriv.info CRON[7]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 22 00:18:01 701eb0bd249f cron.info CRON[8]: (root) CMD (/bin/sh /usr/local/bin/get_date.sh | logger)
Sep 22 00:18:01 701eb0bd249f user.notice root: Current date and time is 09/22/22-00:18
Sep 22 00:18:01 701eb0bd249f authpriv.info CRON[7]: pam_unix(cron:session): session closed for user root
Sep 22 00:19:01 701eb0bd249f authpriv.err CRON[12]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Sep 22 00:19:01 701eb0bd249f authpriv.info CRON[12]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 22 00:19:01 701eb0bd249f cron.info CRON[13]: (root) CMD (/bin/sh /usr/local/bin/get_date.sh | logger)
Sep 22 00:19:01 701eb0bd249f user.notice root: Current date and time is 09/22/22-00:19
Sep 22 00:19:01 701eb0bd249f authpriv.info CRON[12]: pam_unix(cron:session): session closed for user root
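If you keep the docker-compose.yml from the question unchanged apart from this Dockerfile, one way to build and follow that output might be (assuming the service is still named cron):
docker-compose build
docker-compose up -d
docker-compose logs -f cron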
Here, I deployed 2 containers with the --scale flag:
docker-compose up -d --scale gitlab-runner=2
Two containers are deployed, named scalecontainer_gitlab-runner_1 and scalecontainer_gitlab-runner_2 respectively.
I want to map a different volume for each container:
/srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
Getting this error:
WARNING: The DOCKER_SCALE_NUM variable is not set. Defaulting to a blank string.
Is there any way I can map a different volume to each container?
services:
  gitlab-runner:
    image: "gitlab/gitlab-runner:latest"
    restart: unless-stopped
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - /srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
version: "3.5"
I don't think you can; there's an open feature request for this here. Below I will try to describe an alternative method for getting what you want.
Try creating a symbolic link from within the container that links to the directory you want. You can determine the "number" of the container after it's constructed by reading the container name from docker API and taking the final segment. To do this you have to mount the docker socket into the container, which has big security implications.
Setup
Here is a simple script to get the number of the container (Credit Tony Guo).
get-name.sh
#!/bin/sh
# Ask the Docker API (over the mounted socket) for this container's metadata.
DOCKERINFO=$(curl -s --unix-socket /run/docker.sock http://docker/containers/$HOSTNAME/json)
# The container name ends with the scale index, e.g. ..._app_2 -> 2
ID=$(python3 -c "import sys, json; print(json.loads(sys.argv[1])[\"Name\"].split(\"_\")[-1])" "$DOCKERINFO")
echo "$ID"
Then we have a simple entrypoint file which gets the container number, creates the specific config directory if it doesn't exist, and links its specific config directory to a known location (/etc/config in this example).
entrypoint.sh
#!/bin/sh
# Get the number of this container
NAME=$(get-name)
CONFIG_DIR="/config/config_${NAME}"
# Create a config dir for this container if none exists
mkdir -p "$CONFIG_DIR"
# Create a sym link from a well known location to our individual config dir
ln -s "$CONFIG_DIR" /etc/config
exec "$#"
Next we have a Dockerfile to build our image. We need to set the entrypoint and install curl and python3 for it to work, and also copy in our get-name.sh script.
Dockerfile
FROM alpine
COPY entrypoint.sh entrypoint.sh
COPY get-name.sh /usr/bin/get-name
RUN apk update && \
apk add \
curl \
python3 \
&& \
chmod +x entrypoint.sh /usr/bin/get-name
ENTRYPOINT ["/entrypoint.sh"]
Last, a simple compose file that specifies our service. Note that the docker socket is mounted, as well as ./config which is where our different config directories go.
docker-compose.yml
version: '3'
services:
app:
build: .
command: tail -f
volumes:
- /run/docker.sock:/run/docker.sock:ro
- ./config:/config
Example
# Start the stack
$ docker-compose up -d --scale app=3
Starting volume-per-scaled-container_app_1 ... done
Starting volume-per-scaled-container_app_2 ... done
Creating volume-per-scaled-container_app_3 ... done
# Check config directory on our host, 3 new directories were created.
$ ls config/
config_1 config_2 config_3
# Check the /etc/config directory in container 1, see that it links to the config_1 directory
$ docker exec volume-per-scaled-container_app_1 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_1
# Container 2
$ docker exec volume-per-scaled-container_app_2 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_2
# Container 3
$ docker exec volume-per-scaled-container_app_3 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_3
Notes
I think gitlab/gitlab-runner has its own entrypoint file, so you may need to chain them (a sketch follows below).
You'll need to adapt this example to your specific setup/locations.
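For the gitlab-runner case, one generic way to chain entrypoints is to keep the wrapper above (adjusting its symlink target from /etc/config to /etc/gitlab-runner) and hand the image's original entrypoint/command back to it through exec "$@". The sketch below assumes a Debian/Ubuntu-based gitlab/gitlab-runner image; check the real upstream ENTRYPOINT and CMD with docker inspect gitlab/gitlab-runner:latest rather than taking them from here:
FROM gitlab/gitlab-runner:latest
COPY entrypoint.sh /entrypoint-wrapper.sh
COPY get-name.sh /usr/bin/get-name
# curl and python3 are needed by get-name.sh
RUN apt-get update && apt-get install -y curl python3 && \
    chmod +x /entrypoint-wrapper.sh /usr/bin/get-name
# The wrapper ends with `exec "$@"`, so supply the upstream entrypoint and
# command as the container command (for example via `command:` in compose).
ENTRYPOINT ["/entrypoint-wrapper.sh"]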
I have a container running a custom Apache image. The image has these entries:
COPY healthcheck.sh /root/healthcheck.sh
RUN chmod +x /root/healthcheck.sh
HEALTHCHECK --interval=600s --timeout=5s CMD /root/healthcheck.sh
This setup used to work fine, but after I updated Docker to 20.10.7 the healthcheck started giving problems. The container is now 'unhealthy'.
I checked the logs for the error; it turns out the healthcheck.sh script cannot be found:
"ExitCode": 127,
"Output": "/bin/sh: /root/healthcheck.sh: not found\n"
I tried running the /root/healthcheck.sh script manually (the file does exist), both from inside and outside the container, and it runs fine; it returns a 0 exit code.
Stat of the file:
File: /root/healthcheck.sh
Size: 147 Blocks: 8 IO Block: 4096 regular file
Device: 34h/52d Inode: 4661497 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I set the permissions to 777 as a test, but it doesn't matter.
Any ideas?
Please add sh to your HEALTHCHECK
HEALTHCHECK --interval=600s --timeout=5s CMD sh /root/healthcheck.sh
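If the rest of your Dockerfile uses the exec form, the equivalent should be (same script path as in the question):
HEALTHCHECK --interval=600s --timeout=5s CMD ["sh", "/root/healthcheck.sh"]
Invoking sh explicitly means the check no longer depends on the script's shebang line or executable bit being usable inside the image.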
My goal is for /ssc/bin/put-and-submit.sh to be executable. I looked at another question, but do not think it applies.
FROM perl:5.20
ENV PERL_MM_USE_DEFAULT 1
RUN cpan install Net::SSL inc:latest
RUN mkdir /ssc
COPY /ssc /ssc
RUN chmod a+rx /ssc/bin/*.sh
ENTRYPOINT ["/ssc/bin/put-and-submit.sh"]
stat /ssc/bin/put-and-submit.sh
File: '/ssc/bin/put-and-submit.sh'
Size: 1892 Blocks: 8 IO Block: 4096 regular file
Device: 7ah/122d Inode: 293302 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2021-01-27 04:14:43.000000000 +0000
Modify: 2021-01-27 04:14:43.000000000 +0000
Change: 2021-01-27 04:52:44.700000000 +0000
Birth: -
I read the question below, and believe that circumstance arises when another layer is added and overwrites the previous one. In my case, I start with a Perl image, add a few CPAN libraries, copy a few files, and then ask it to change permissions.
Dockerfile "RUN chmod" not taking effect
I remember I had this problem too, and it basically only worked when I simply replaced the default /usr/local/bin/docker-php-entrypoint WITHOUT issuing the ENTRYPOINT instruction (to use a custom entrypoint script).
So in your case you would have to find out which default entrypoint file the perl image is using (it must also be in /usr/local/bin) and perhaps replace that.
Sorry it's not the exact "right" solution, but in my case it worked out fine and was good enough.
So what I'm doing for example for my PHP-FPM containers is the following (note that ENTRYPOINT is commented out):
COPY docker-entrypoint.sh /usr/local/bin/docker-php-entrypoint
RUN chmod +x /usr/local/bin/docker-php-entrypoint
# ENTRYPOINT ["/usr/local/bin/docker-php-entrypoint"]
Just in case, my sh script looks like this (only starts supervisor):
#!/bin/sh
set -e
echo "Starting supervisor service"
exec supervisord -c /etc/supervisor/supervisord.conf
I hope this gets you somewhere mate, cheers
I have a problem with Docker, which does not seem to persist the effects of commands launched via RUN.
Here is my Dockerfile:
FROM jenkins:latest
RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
RUN cat /var/jenkins_home/.profile
And here is the output :
Sending build context to Docker daemon 373.8 kB
Step 1 : FROM jenkins:latest
 ---> fc39417bd5fb
Step 2 : RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> c614b13d9d83
Step 3 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 8a16a0c92f67
Step 4 : RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> f6ca5d5bdc64
Step 5 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 3372c3275b1b
Step 6 : RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
 ---> Running in 79842be2c6e3
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
bar
 ---> 28559b8fe041
Removing intermediate container 79842be2c6e3
Step 7 : RUN cat /var/jenkins_home/.profile
 ---> Running in c694e0cb5866
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
 ---> b7e47d65d65e
Removing intermediate container c694e0cb5866
Successfully built b7e47d65d65e
Do you guys know why "foo" file is not persisted on step 3? Why ".bash_logout" file is recreated on step 5? Why "bar" is not in my ".profile" file anymore on step 7?
And of course, if I start a container based on this image, none of my modifications are persisted... so my Dockerfile is... useless. Any clue?
The reason those changes are not persisted is that they are inside a volume: the Jenkins Dockerfile marks /var/jenkins_home/ as a VOLUME.
Information inside volumes is not persisted during docker build; more precisely, each build step creates a new volume based on the image's content, discarding the volume that was used in the previous build step.
How to resolve this?
I think the best way to resolve this is to:
Add the files you want to modify inside jenkins_home in a different location inside the image, e.g. /var/jenkins_home_overrides/
Create a custom entrypoint based on, or "wrapping", the default entrypoint script, which copies the content of your jenkins_home_overrides to jenkins_home the first time the container is started (a sketch follows below).
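A minimal sketch of such a wrapper, assuming the overrides live in /var/jenkins_home_overrides/ and that the stock entrypoint script is /usr/local/bin/jenkins.sh as in the repository linked below:
#!/bin/bash
# Copy the overrides into the jenkins_home volume on first start only.
if [ ! -e /var/jenkins_home/.overrides_applied ]; then
    cp -r /var/jenkins_home_overrides/. /var/jenkins_home/
    touch /var/jenkins_home/.overrides_applied
fi
# Hand off to the image's original entrypoint script.
exec /usr/local/bin/jenkins.sh "$@"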
Actually...
And just as I wrote that up: it looks like the official Jenkins image already supports this out of the box;
https://github.com/jenkinsci/docker/blob/683b0d6ed17016ee3211f247304ef2f265102c2b/jenkins.sh#L5-L23
According to the documentation, you need to add your files to the /usr/share/jenkins/ref/ directory, and those will be copied to /var/jenkins_home upon start.
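For example, a minimal sketch of that approach for the toto file from the question (whether an existing file such as .profile gets replaced depends on the copy rules in the linked jenkins.sh, so check that script for its .override convention):
FROM jenkins:latest
# Files placed under /usr/share/jenkins/ref/ are copied into /var/jenkins_home
# by the image's startup script when the container starts.
RUN echo "foo" > /usr/share/jenkins/ref/toto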
Also see https://issues.jenkins-ci.org/browse/JENKINS-24986