Add at command to Docker

Hello, I want to add the at command to a Docker container. I am using Alpine Linux.
I tried to use apk add at and apk add atd; both give me the same error:
ERROR: unsatisfiable constraints: atd (missing):
  required by: world[atd]
Is there a way to fix that, or is there a way to use apt-get instead, since at exists for apt-get?

Looks like at is available as-is: apk add at
This Dockerfile works fine for me:
FROM alpine:latest
RUN apk add at
CMD at --help
example run:
$ docker build -t at_command_line -f Dockerfile .
$ docker run at_command_line:latest
at: unrecognized option: -
Usage: at [-V] [-q x] [-f file] [-u username] [-mMlbv] timespec ...
       at [-V] [-q x] [-f file] [-u username] [-mMlbv] -t time
       at -c job ...
       atq [-V] [-q x]
       at [ -rd ] job ...
       atrm [-V] job ...
       batch

I would just add to @ujlbu4's answer that you need to run the at daemon, atd, once your container is up and running, or else the jobs will sit in the queue without being executed.
Example Dockerfile:
FROM python:alpine
RUN apk add at
ENTRYPOINT ["atd"]
If you don't run atd you may see the following:
$ docker exec -it my_running_container /bin/sh
# echo "echo hi" | at now + 1 minutes
warning: commands will be executed using /bin/sh
job 6 at Mon Jun 21 18:11:00 2021
Can't open /var/run/atd.pid to signal atd. No atd running?
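Once atd is running, a quick way to confirm that queued jobs actually get executed is to schedule one and watch it with atq; this is just a sketch, reusing the container name from the example above, with a made-up output path:
$ docker exec -it my_running_container /bin/sh
# echo "date > /tmp/at-test" | at now + 1 minutes
# atq                          # the job is listed as pending
# sleep 70; cat /tmp/at-test   # once atd has run the job, the output file exists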

Related

How to run an sh script in a Dockerfile?

When running an sh script from a Dockerfile, I got the following error:
./upload.sh: 5: ./upload.sh: sudo: not found ./upload.sh: 21:
./upload.sh: Bad substitution
sudo chmod 755 upload.sh # line 5
version=$(git rev-parse --short HEAD)
echo "version $version"
echo "Uploading file"
for path in $(find public/files -name "*.txt"); do
echo "path $path"
WORDTOREMOVE="public/"
echo "WORDTOREMOVE $WORDTOREMOVE"
# cause of the error
newpath=${path//$WORDTOREMOVE/} # Line 21
echo "new path $path"
url=http://localhost:3000/${newpath}
...
echo "Uploading file"
...
done
Dockerfile
FROM node:10-slim
EXPOSE 3000 4001
WORKDIR /prod/code
...
COPY . .
RUN ./upload.sh
RUN npm run build
CMD ./DockerRun.sh
Any idea?
If anyone faces the same issue, here is how I fixed it:
chmod +x upload.sh
git update-index --chmod=+x upload.sh (mandatory if you pushed the file to the remote branch before changing its permission)
The docker image you are using (node:10-slim) has no sudo installed, because this image runs its processes as the root user:
docker run -it node:10-slim bash
root@68dcffceb88c:/# id
uid=0(root) gid=0(root) groups=0(root)
root@68dcffceb88c:/# which sudo
root@68dcffceb88c:/#
When your Dockerfile runs RUN ./upload.sh, it will run:
sudo chmod 755 upload.sh
Using sudo inside the container fails because sudo is not installed; there is also no need for sudo inside the container, because all of the commands already run as root.
Simply remove the sudo from line number 5.
If you wish to update the running PATH variable, run:
PATH=$PATH:/directorytoadd/bin
This will append the directory /directorytoadd/bin to the current PATH.
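As for the Bad substitution on line 21: ${path//$WORDTOREMOVE/} is a bash-only expansion, and the error prefix (./upload.sh: 21:) shows the script is being interpreted by plain /bin/sh. A minimal sketch of a POSIX-sh alternative, using the same variable names as the question:
# ${var#pattern} strips a leading prefix and is POSIX, unlike the bash-only ${var//pattern/};
# it is enough here because every $path produced by find starts with "public/"
newpath=${path#$WORDTOREMOVE}
echo "new path $newpath"
Alternatively, since node:10-slim is Debian-based and ships bash, giving upload.sh a #!/bin/bash shebang should also keep the original syntax working.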

Is it possible to add an installer, run it and delete it during one build step in Docker?

I'm trying to create a Docker image from a pretty large installer binary (300+ MB). I want to add the installer to the image, install it, and delete the installer. This doesn't seem to be possible:
COPY huge-installer.bin /tmp
RUN /tmp/huge-installer.bin
RUN rm /tmp/huge-installer.bin # <- has no effect on the image size
Using multiple build stages doesn't seem to solve this, since I need to run the installer in the final image. If I could execute the installer directly from a previous build stage, without copying it, that would solve my problem, but as far as I know that's not possible.
Is there any way to avoid including the full weight of the installer in the final image?
I ended up solving this by using the built-in HTTP server in Python to make the project directory available to the image over HTTP.
Inside the Dockerfile, I can run commands like this, piping scripts directly to bash using curl:
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
Or save binaries, run them and delete them in one step:
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
./binary-${INSTALLER_VERSION}.bin && \
rm binary-${INSTALLER_VERSION}.bin
I use a Makefile to start the server and stop it after the build, but you can use a build script instead.
Here's a Makefile example:
SHELL := bash
IMAGE_NAME := app-test
VERSION := 1.0.0
SERVER_PORT := 8580
.ONESHELL:
.PHONY: build
build:
	# Kills the HTTP server when the build is done
	function cleanup {
		pkill -f "python3 -m http.server.*${SERVER_PORT}"
	}
	trap cleanup EXIT
	# Starts an HTTP server that makes the contents of the project directory
	# available to the image
	python3 -m http.server -b 127.0.0.1 ${SERVER_PORT} &>/dev/null &
	sleep 1
	EXTRA_ARGS=""
	# Allows skipping the build cache by setting NO_CACHE=1
	if [[ -n $$NO_CACHE ]]; then
		EXTRA_ARGS="--no-cache"
	fi
	docker build $$EXTRA_ARGS \
		--network host \
		--build-arg SERVER_PORT=${SERVER_PORT} \
		-t ${IMAGE_NAME}:latest \
		.
	docker tag ${IMAGE_NAME}:latest ${IMAGE_NAME}:${VERSION}
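For reference, the SERVER_PORT build-arg only becomes visible inside the Dockerfile if it is declared there with ARG. A minimal sketch of the Dockerfile side, where the base image, installer name, and INSTALLER_VERSION default are placeholders and curl is assumed to be available in the image:
FROM debian:stable-slim
ARG SERVER_PORT
ARG INSTALLER_VERSION=1.0.0
# the installer is streamed from the host-side HTTP server, so it never lands in an image layer
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
This works because docker build is invoked with --network host, so 127.0.0.1 during the build refers to the host where the Python server is listening.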
I think the best way is to download the bin from a website and then run it:
RUN wget -O /tmp/huge-installer.bin http://myweb/huge-installer.bin && chmod +x /tmp/huge-installer.bin && /tmp/huge-installer.bin && rm /tmp/huge-installer.bin
This way your image layers will not contain the binary you downloaded, because the download, execution, and removal all happen in a single RUN instruction.
I didn't test it thoroughly, but wouldn't an approach like this be viable? (Besides LinPy's answer, which is much easier if that option is available to you.)
Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /tmp/entrypoint.sh
RUN \
echo "I am an image that can run your huge installer binary!" \
&& echo "I will only function when you give it to me as a volume mount."
ENTRYPOINT [ "/tmp/entrypoint.sh" ]
entrypoint.sh:
#!/bin/sh
/tmp/your-installer # install your stuff here
while true; do
echo "installer finished, commit me now!"
sleep 5
done
Then run:
$ docker build -t foo-1 .
$ docker run --rm --name foo-1 -d -v $(pwd)/your-installer:/tmp/your-installer foo-1
$ docker logs -f foo-1
# once it echoes "commit me now!", run the next command
$ docker commit foo-1 foo-2
$ docker stop foo-1
Since the installer was only mounted as a volume, the image foo-2 should not contain it anymore. You could also go and build another Dockerfile based on foo-2 to change the entrypoint, for example.
Cf. docker commit
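If all you want to change afterwards is the entrypoint, docker commit can also do that directly via its --change flag, so the extra Dockerfile is optional; a small sketch (the new entrypoint below is just an example):
$ docker commit --change 'ENTRYPOINT ["/bin/sh"]' foo-1 foo-2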

Run dbus-daemon inside Docker container

I am trying to create a Docker container with a custom D-Bus bus running inside.
I configured my Dockerfile as follow:
FROM ubuntu:16.04
COPY myCustomDbus.conf /etc/dbus-1/
RUN apt-get update && apt-get install -y dbus
RUN dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf
After building, the socket is created but it is flagged as a regular file, not as a socket, and I cannot use it as a bus:
-rwxrwxrwx 1 root root 0 Mar 20 07:25 myCustomDbus.sock
If I remove this file and run the dbus-daemon command again in a terminal, the socket is successfully created:
srwxrwxrwx 1 root root 0 Mar 20 07:35 myCustomDbus.sock
I am not sure if it is a D-Bus problem or a docker one.
Instead of using the RUN instruction, you should use ENTRYPOINT to run a startup script.
The Dockerfile should look like this:
FROM ubuntu:14.04
COPY myCustomDbus.conf /etc/dbus-1/
COPY run.sh /etc/init/
RUN apt-get update && apt-get install -y dbus
ENTRYPOINT ["/etc/init/run.sh"]
And run.sh :
#!/bin/bash
dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf --print-address
You should use a startup script: the RUN instruction is executed only at image build time, so a daemon started there is no longer running when the container starts.
My run.sh:
if ! pgrep -x "dbus-daemon" > /dev/null
then
# export DBUS_SESSION_BUS_ADDRESS=$(dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address | cut -d, -f1)
# or:
dbus-daemon --config-file=/usr/share/dbus-1/system.conf
# and put in Dockerfile:
# ENV DBUS_SESSION_BUS_ADDRESS="unix:path=/var/run/dbus/system_bus_socket"
else
echo "dbus-daemon already running"
fi
if ! pgrep -x "/usr/lib/upower/upowerd" > /dev/null
then
/usr/lib/upower/upowerd &
else
echo "upowerd already running"
fi
With this in place, Chrome runs with --use-gl=swiftshader without errors.
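To verify from inside the container that the daemon started by the entrypoint is actually serving the socket, something like this should work; the container name and socket path are placeholders, since the path depends on your myCustomDbus.conf:
docker exec -it mycontainer bash -c '
  export DBUS_SESSION_BUS_ADDRESS=unix:path=/var/run/dbus/myCustomDbus.sock   # placeholder path
  dbus-send --session --print-reply --dest=org.freedesktop.DBus / org.freedesktop.DBus.ListNames'
A reply listing the connected bus names means the daemon is up and the socket is usable.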

How to work around "the input device is not a TTY" when using grunt-shell to invoke a script that calls docker run?

When issuing grunt shell:test, I'm getting the warning "the input device is not a TTY" and don't want to have to use -f (--force):
$ grunt shell:test
Running "shell:test" (shell) task
the input device is not a TTY
Warning: Command failed: /bin/sh -c ./run.sh npm test
the input device is not a TTY
Use --force to continue.
Aborted due to warnings.
Here's the Gruntfile.js command:
shell: {
  test: {
    command: './run.sh npm test'
  }
}
Here's run.sh:
#!/bin/sh
# should use the latest available image to validate, but not LATEST
if [ -f .env ]; then
RUN_ENV_FILE='--env-file .env'
fi
docker run $RUN_ENV_FILE -it --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"
Here's the relevant package.json scripts with command test:
"scripts": {
"test": "mocha --color=true -R spec test/*.test.js && npm run lint"
}
How can I get grunt to make docker happy with a TTY? Executing ./run.sh npm test outside of grunt works fine:
$ ./run.sh npm test
> yaktor@0.59.2-pre.0 test /app
> mocha --color=true -R spec test/*.test.js && npm run lint
[snip]
105 passing (3s)
> yaktor@0.59.2-pre.0 lint /app
> standard --verbose
Remove the -t from the docker run command:
docker run $RUN_ENV_FILE -i --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"
The -t tells docker to allocate a pseudo-TTY, which won't work if you don't have a TTY yourself and try to attach to the container (the default when you don't pass -d).
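If you want run.sh to keep working both interactively and from grunt (or cron), a hedged variant is to request a TTY only when stdin actually is one:
#!/bin/sh
if [ -f .env ]; then
  RUN_ENV_FILE='--env-file .env'
fi
# allocate a pseudo-TTY only when we actually have one; [ -t 0 ] tests whether stdin is a terminal
TTY_FLAG=''
if [ -t 0 ]; then
  TTY_FLAG='-t'
fi
docker run $RUN_ENV_FILE -i $TTY_FLAG --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"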
This solved an annoying issue for me. The script had these lines:
docker exec -it $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file
mutt -s "File is here" someone@somewhere.com < /var/tmp/temp.file
The script would run fine when run directly, and the mail would arrive with the correct output. However, when run from cron (crontab -e), the mail would arrive with no content. I tried many things around permissions, shells, paths, etc. However, no joy!
Finally found this:
*/20 * * * * scriptblah.sh > $HOME/cron.log 2>&1
And on that cron.log file found this output:
the input device is not a TTY
Search led me here. And after I removed the -t, it's working great now!
docker exec -i $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file

Extending local Dockerfile

I'm trying to base a Dockerfile on another local one.
$ ls -lR
total 0
-rw-r--r-- 1 me me 42 14 Apr 10:42 Dockerfile
drwxr-xr-x 3 me me 42 14 Apr 10:42 prod
./prod:
total 0
-rw-r--r-- 1 me me 42 14 Apr 10:42 Dockerfile
$ cat prod/Dockerfile
FROM ../Dockerfile
...
$ docker build - < prod/Dockerfile
unable to process Dockerfile: unable to parse repository info: repository name component must match "a-z0-9(?:[._]a-z0-9)*"
I know that FROM expects an image and not a path.
How can I extend the base Dockerfile from prod/Dockerfile?
Dockerfiles don't extend Dockerfiles but images; the FROM line is not an "include" statement.
So, if you want to "extend" another Dockerfile, you need to build the original Dockerfile as an image and extend that image.
For example:
Dockerfile1:
FROM alpine
RUN echo "foo" > /bar
Dockerfile2:
FROM myimage
RUN echo "bar" > /baz
Build the first Dockerfile (since it's called Dockerfile1, use the -f option, as docker defaults to looking for a file called Dockerfile) and tag it as myimage:
docker build -f Dockerfile1 -t myimage .
# Sending build context to Docker daemon 3.072 kB
# Step 1 : FROM alpine
# ---> d7a513a663c1
# Step 2 : RUN echo "foo" > /bar
# ---> Running in d3a3e5a18594
# ---> a42129418da3
# Removing intermediate container d3a3e5a18594
# Successfully built a42129418da3
Then build the second Dockerfile, which extends the image you just built, and tag the resulting image as myextendedimage:
docker build -f Dockerfile2 -t myextendedimage .
# Sending build context to Docker daemon 3.072 kB
# Step 1 : FROM myimage
# ---> a42129418da3
# Step 2 : RUN echo "bar" > /baz
# ---> Running in 609ae35fe135
# ---> 4ea44560d4b7
# Removing intermediate container 609ae35fe135
# Successfully built 4ea44560d4b7
To check the results, run a container from the image and verify that both files (/bar and /baz) are in the image:
docker run -it --rm myextendedimage sh -c "ls -la ba*"
# -rw-r--r-- 1 root root 4 Apr 14 10:18 bar
# -rw-r--r-- 1 root root 4 Apr 14 10:19 baz
I suggest reading the user guide, which explains how to work with images and containers.
Take a look at multi-stage builds; they could help you:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
https://blog.alexellis.io/mutli-stage-docker-builds/
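For instance, a minimal multi-stage sketch (the stage names base and prod are made up for illustration) keeps both steps in a single Dockerfile by letting the second stage start FROM the first one by name:
FROM alpine AS base
RUN echo "foo" > /bar

# the second stage extends the first one
FROM base AS prod
RUN echo "bar" > /baz
Building with docker build --target prod -t myextendedimage . then produces an image containing both /bar and /baz.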
I wrote a simple bash script for this. It works as follows:
Example structure:
|
|_ Dockerfile (base)
|_ prod
   |_ Dockerfile (extended)
Dockerfile(extended):
FROM ../Dockerfile
...
Run the script:
./script.sh prod
It merges your base Dockerfile with the extended one and builds the merged file.
Script:
#!/bin/bash
# Read the FROM line of the extended Dockerfile and extract the path it points to
fromLine=$(head -n 1 $1/Dockerfile)
read -a fromLineArray <<< $fromLine
extPath=${fromLineArray[1]}
# Drop the FROM line, prepend the base Dockerfile, and build the merged result
tail -n +2 "$1/Dockerfile" > strippedDocker
cat $1/$extPath strippedDocker > resDocker
rm strippedDocker
docker build - < resDocker
rm resDocker
I'm using conditionals:
Dockerfile
Install sudo only on local build.
FROM ubuntu:latest
ARG APP_ENVIRONMENT=local
RUN apt-get update && bash -c "set -ex ; \
    apt-get install -y $([ ${APP_ENVIRONMENT} = local ] \
        && echo 'curl sudo' \
        || echo 'curl' \
    )"
CMD bash -c "set -ex ; \
    [ ${APP_ENVIRONMENT} = local ] \
        && { app debug ; exit $? ; } \
        || { app start ; exit $? ; } \
    "
Build
# Production
docker build \
-t my-image \
--build-arg APP_ENVIRONMENT='prod' \
.
# Local
docker build \
-t my-image \
.
Docker Compose
version: "3.7"
services:
app:
build:
context: .
args:
APP_ENVIRONMENT: "${APP_ENVIRONMENT:-local}"
If you use Docker 20.10+, you can do this:
# syntax = edrevo/dockerfile-plus
INCLUDE+ ../Dockerfile
RUN ...
The INCLUDE+ instruction gets imported by the first line in the Dockerfile. You can read more about the dockerfile-plus at https://github.com/edrevo/dockerfile-plus
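Note that the # syntax directive is only honored when BuildKit is used, so on Docker 20.10 you may need to enable it explicitly; a small usage sketch (the image tag is made up):
DOCKER_BUILDKIT=1 docker build -t myextendedimage ./prod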
