I am trying to build a Docker image based on centos/systemd. In my Dockerfile I am running a command that depends on systemd running, and it fails with the following error:
Failed to get D-Bus connection: Operation not permitted
error: %pre(mod-php-7.1-apache2-zend-server-7.1.7-16.x86_64) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package mod-php-7.1-apache2-zend-server-7.1.7-16.x86_64
How can I get the intermediate containers to run with --privileged and with -v /sys/fs/cgroup:/sys/fs/cgroup:ro mapped?
If I comment out the installer, just run the container, and execute the installer manually, it works fine.
Here is the Dockerfile:
FROM centos/systemd
COPY ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz /opt
RUN tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/
RUN /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic
If your installer needs systemd running, I think you will need to launch a container with the base centos/systemd image, manually run the commands, and then save the result using docker commit. The base image ENTRYPOINT and CMD are not run while child images are getting built, but they do run if you launch a container and make your changes. After manually executing the installer, run docker commit {my_intermediate_container} {my_image}:{my_version}, replacing the bits in curly braces with the container name/hash, your desired image name, and image version.
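A rough sketch of that workflow, using placeholder container/image names and the flags mentioned in the question:
# start a throwaway container from the base image, with systemd able to run
docker run -d --privileged --name zs_build \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos/systemd
# copy in and run the installer against the live systemd
docker cp ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz zs_build:/opt/
docker exec zs_build tar -xf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/
docker exec zs_build /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic
# save the result as a new image
docker commit zs_build my-zend-server:9.1.0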
You might also be able to change your Dockerfile to launch init before running your installer. I am not sure if that will work here in the context of building an image, but that would look like:
FROM centos/systemd
COPY ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz /opt
RUN tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/ \
&& /usr/sbin/init \
&& /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic
A LAMP stack inside a Docker container does not need systemd - I have made this work with the docker-systemctl-replacement script. It is able to start and stop a service according to what's written in the *.service file. You could try it with whatever ZendServer normally does outside a Docker container.
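A rough sketch of how that replacement is usually wired in (the exact file name and location of the script should be checked against the docker-systemctl-replacement project's README):
FROM centos
# hypothetical: systemctl.py downloaded from the docker-systemctl-replacement project
COPY systemctl.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl
# RUN steps and the running container can now call systemctl (start/stop/enable)
# without a real systemd or D-Bus being present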
The problem I'm experiencing is that I want to execute the "change directory" command in my Docker image, but every time I execute a RUN instruction in my Dockerfile, Docker removes the intermediate container.
DOCKERFILE
This happens when I build from the Dockerfile above.
How can I prevent Docker from doing that?
docker build --rm=false
Remove intermediate containers after a successful build (default true)
The current paths are different for Dockerfile and RUN (inside container).
Each RUN command starts from the Dockerfile path (e.g. '/').
When you do RUN cd /app, the "inside path" changes, but not the "Dockerfile path". The next RUN command will again be run at '/'.
To change the "Dockerfile path", use WORKDIR (see reference), for example WORKDIR /opt/firefox.
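For example, a minimal sketch (the image and paths are only illustrative):
FROM ubuntu:14.04
# every following instruction starts in /opt
WORKDIR /opt
RUN ls
# and now in /opt/firefox (WORKDIR creates the directory if it does not exist)
WORKDIR /opt/firefox
RUN ls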
The alternative would be chaining the executed RUN commands, as EvgeniySharapov pointed out: RUN cd opt; ls; cd firefox; ls
on multiple lines:
RUN cd opt; \
ls; \
cd firefox; \
ls
(To clarify: It doesn't matter that Docker removes intermediate containers, that is not the problem in this case.)
By default, docker build removes intermediate containers after a successful build; building with --no-cache additionally prevents cached layers from being reused, which can increase build times when you run the build multiple times. Alternatively, you can combine multiple shell commands into a single RUN instruction, using \ to continue lines.
More tips can be found here.
I want to create a new image from the JDK image. Building it works, and when I run my new image it returns a container ID, but I can't see the process info with docker ps. This is my Dockerfile:
# specified jdk version
FROM openjdk:7-jre
# env
ENV APP_HOME /usr/src/KOAL-OCSP
ENV PATH $APP_HOME:$PATH
# copy my app in .zip to /usr/src
COPY myapp.zip /usr/src/
# unzip copy file
RUN unzip /usr/src/myapp.zip
WORKDIR $APP_HOME
#port
expose 80
#run the setup script of my app when start container
CMD ["service.sh" "start"]
service.sh is a setup script in my app's root directory. I want the script to be executed automatically when the new self-built image is run.
I suspect that the container has executed and exited successfully. The container stays alive only as long as the processes you started with the service.sh script are still running.
In your case, service.sh has executed and exited, causing the container to exit.
To view all containers, use docker ps -a
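If service.sh only launches background processes and returns, a common workaround (just a sketch; it assumes the script is reachable on the PATH, as in the question) is to follow it with a blocking command so that PID 1 keeps running:
CMD ["sh", "-c", "service.sh start && tail -f /dev/null"]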
Update:
The error /bin/sh: 1: ./service.sh: not found indicates that the service.sh script is not found under $APP_HOME inside the Docker image. Make sure you add it under $APP_HOME using
ADD service.sh $APP_HOME
CMD ["service.sh" "start"]
The above is not valid JSON; it's missing a comma in the array, so Docker will execute it as a string, which will fail since ["service.sh" will not be found as a command to run.
If you use docker ps -a you will see a list of all containers, including exited ones. From there, you can use docker logs $(docker ps -lq) to see the logs of the last container you tried to run. And you can docker inspect $(docker ps -lq) to see all the details about the last container it tried to run, including the exit code.
To get past your current error, correct your syntax with the missing comma:
CMD ["service.sh", "start"]
For the next problem you are seeing, a "not found" error can indicate:
The command doesn't exist inside your container (at the expected location). In your scenario, make sure it is included in /usr/src/KOAL-OCSP that you unzip in your image.
The shell script does exist, but calls a binary on the first line that doesn't exist in your image. E.g. if you call #!/bin/bash but only have /bin/sh in your container. This also happens when you edit the files on a Windows system and have ^M linefeeds that become part of the name of the binary that the container is looking for (/bin/sh^M instead of /bin/sh).
For binaries, this can happen if the executable you are running has library dependencies that do not exist inside your container. For example, if you build in a glibc environment and run the container with the musl libc environment of Alpine, this same error message will appear.
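A couple of quick checks along those lines (the image name is a placeholder; the path is the one from the question):
# is the script where the image expects it, and is it executable?
docker run --rm -it my_image ls -l /usr/src/KOAL-OCSP/service.sh
# does the first line end in a Windows ^M carriage return?
docker run --rm -it my_image sh -c 'head -n 1 /usr/src/KOAL-OCSP/service.sh | cat -A'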
FROM centos
RUN yum -y update
ENV zk=dx
RUN mkdir $zk
After building the image and running the following command:
docker run -it -e zk="hifi" <image ID>
I get a directory named dx, not hifi.
Can anyone help me set a Dockerfile variable from the docker run command?
It behaves this way because:
The RUN commands in the Dockerfile are executed when the Docker image is built (like almost all Dockerfile instructions) - ie. when you run docker build
The docker run command runs when the container is run from the image.
So when you run docker run and set the value to "hifi", the image already exists which has a directory called "dx" in it. The directory creation task has already been performed - updating the environment variable to "hifi" won't change it.
You cannot set a Dockerfile build variable at run time. The build has already happened.
Incidentally, you're overwriting the value of the zk variable right before you create the directory. Even if you did successfully pass "hifi" into the docker build, it would be overwritten and the folder would always be called "dx".
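If the goal is for the directory name to follow the value passed at docker run time, one option (a sketch, not part of the original question) is to create it from the entrypoint instead of at build time:
FROM centos
RUN yum -y update
ENV zk=dx
# create the directory when the container starts, so -e zk=hifi takes effect
ENTRYPOINT ["/bin/sh", "-c", "mkdir -p \"$zk\" && exec \"$@\"", "--"]
CMD ["/bin/bash"]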
I have a docker image which installs grunt, but when I try to run it, I get an error:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
If I run bash in interactive mode, grunt is available.
What am I doing wrong?
Here is my Dockerfile:
# https://registry.hub.docker.com/u/dockerfile/nodejs/ (builds on ubuntu:14.04)
FROM dockerfile/nodejs
MAINTAINER My Name, me#email.com
ENV HOME /home/web
WORKDIR /home/web/site
RUN useradd web -d /home/web -s /bin/bash -m
RUN npm install -g grunt-cli
RUN npm install -g bower
RUN chown -R web:web /home/web
USER web
RUN git clone https://github.com/repo/site /home/web/site
RUN npm install
RUN bower install --config.interactive=false --allow-root
ENV NODE_ENV development
# Port 9000 for server
# Port 35729 for livereload
EXPOSE 9000 35729
CMD ["grunt"]
This was the first result on google when I pasted my error message, and it's because my arguments were out of order.
The image name has to come after all of the options.
Bad:
docker run <image_name> -v $(pwd):/src -it
Good:
docker run -v $(pwd):/src -it <image_name>
When you use the exec form for a command (e.g. CMD ["grunt"], a JSON array with double quotes), it is executed without a shell. This means that shell processing, such as environment variable substitution in the command string, does not happen.
If you specify your command as a regular string (e.g. CMD grunt) then the string after CMD will be executed with /bin/sh -c.
More info on this is available in the CMD section of the Dockerfile reference.
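A small sketch of the difference:
# exec form: run directly, without a shell, so $HOME is passed through literally
CMD ["echo", "$HOME"]
# shell form: wrapped in /bin/sh -c, so $HOME is expanded
CMD echo $HOME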
I found the same problem. I did the following:
docker run -ti devops -v /tmp:/tmp /bin/bash
When I change it to
docker run -ti -v /tmp:/tmp devops /bin/bash
it works fine.
For some reason, I get that error unless I add the "bash" clarifier. Even adding "#!/bin/bash" to the top of my entrypoint file didn't help.
ENTRYPOINT [ "bash", "entrypoint.sh" ]
There are several possible reasons for an error like this.
In my case, it was due to the executable file (docker-entrypoint.sh from the Ghost blog Dockerfile) lacking the executable file mode after I'd downloaded it.
Solution: chmod +x docker-entrypoint.sh
I had the same problem, After lots of googling, I couldn't find out how to fix it.
Suddenly I noticed my stupid mistake :)
As mentioned in the docs, the last part of docker run is the command you want to run and its arguments after loading up the container.
NOT THE CONTAINER NAME !!!
That was my embarrassing mistake.
A Docker container might be built without a shell (e.g. https://github.com/fluent/fluent-bit-docker-image/issues/19).
In this case, you can copy-in a statically compiled shell and execute it, e.g.
docker create --name temp-busybox busybox:1.31.0
docker cp temp-busybox:/bin/busybox busybox
docker cp busybox mycontainerid:/busybox
docker exec -it mycontainerid /busybox sh
In the error message shown:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
It is complaining that it cannot find the executable grunt serve, not that it could not find the executable grunt with the argument serve. The most likely explanation for that specific error is running the command with the json syntax:
[ "grunt serve" ]
in something like your compose file. That's invalid since the json syntax requires you to split up each parameter that would normally be split by the shell on each space for you. E.g.:
[ "grunt", "serve" ]
The other possible way you can get both of those into a single parameter is if you were to quote them into a single arg in your docker run command, e.g.
docker run your_image_name "grunt serve"
and in that case, you need to remove the quotes so it gets passed as separate args to the run command:
docker run your_image_name grunt serve
For others seeing this, the executable file not found means that Linux does not see the binary you are trying to run inside your container with the default $PATH value. That could mean lots of possible causes, here are a few:
Did you remember to include the binary inside your image? If you run a multi-stage image, make sure that binary install is run in the final stage. Run your image with an interactive shell and verify it exists:
docker run -it --rm your_image_name /bin/sh
Your path when shelling into the container may be modified for the interactive shell, particularly if you use bash, so you may need to specify the full path to the binary inside the container, or you may need to update the path in your Dockerfile with:
ENV PATH=$PATH:/custom/dir/bin
The binary may not have execute bits set on it, so you may need to make it executable. Do that with chmod:
RUN chmod 755 /custom/dir/bin/executable
The binary may include dynamically linked libraries that do not exist inside the image. You can use ldd to see the list of dynamically linked libraries. A common reason for this is compiling with glibc (most Linux environments) and running with musl (provided by Alpine):
ldd /path/to/executable
If you run the image with a volume, that volume can overlay the directory where the executable exists in your image. Volumes do not merge with the image, they get mounted in the filesystem tree same as any other Linux filesystem mount. That means files from the parent filesystem at the mount point are no longer visible. (Note that named volumes are initialized by docker from the image content, but this only happens when the named volume is empty.) So the fix is to not mount volumes on top of paths where you have executables you want to run from the image.
If you run a binary for a different platform, and haven't configured binfmt_misc with the --fix-binary option, qemu will be looking for the interpreter inside the container filesystem namespace instead of the host filesystem. See this Ubuntu bug report for more details on this issue.
If the error is from a shell script, the issue is often with the first line of that script (e.g. the #!/bin/bash). Either the command doesn't exist inside the image for a reason above, or the file is not saved as ascii or utf8 with Linux linefeeds. You can attempt dos2unix to fix the linefeeds, or check your git and editor settings.
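For the line-ending case, a typical fix looks like this (the script path is illustrative):
# convert CRLF line endings to LF in place
dos2unix entrypoint.sh
# or keep shell scripts LF-only in git via a .gitattributes entry:
#   *.sh text eol=lf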
In my case I had the parameters in the wrong order; all switches must come before the image name.
I got this error message when I was building an Alpine-based image:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "bash": executable file not found in $PATH: unknown
In my docker-compose file, I had a command directive that executed the command using bash, and bash does not come with the Alpine base image.
command: bash -c "python manage.py runserver 0.0.0.0:8000"
Then I realized this and executed the command using sh (shell) instead.
It worked for me.
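In the compose file that means, roughly:
command: sh -c "python manage.py runserver 0.0.0.0:8000"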
The problem is glibc, which is not part of the Alpine base image.
After adding it, everything worked for me :)
Here are the steps to get glibc:
apk --no-cache add ca-certificates wget
wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.28-r0/glibc-2.28-r0.apk
apk add glibc-2.28-r0.apk
Referring to the title:
My mistake was passing variables via --env-file during docker run. Among others, the file contained a PATH extension: PATH=$PATH:something, which caused the PATH variable to literally contain $PATH:something (variable resolution had not been performed) instead of /usr/bin...:something.
I couldn't make the resolution work through --env-file, so the only way I see this working is by using ENV in the Dockerfile.
I ran into this issue using docker-compose. None of the solutions here or on this related question resolved my issue. Ultimately, what worked for me was clearing all cached Docker artifacts with docker system prune -a and restarting Docker.
To make it work, add symlinks in /usr/bin:
ln -s $(which node) /usr/bin/node
ln -s $(which npm) /usr/bin/npm
Here's my problem: I want to build a chroot environment inside a docker container. The problem is that debootstrap cannot run, because it cannot mount proc in the chroot:
W: Failure trying to run: chroot /var/chroot mount -t proc proc /proc
(in the log the problem turns out to be: mount: permission denied)
If I run the container with --privileged, it (of course) works...
I'd really really really like to debootstrap the chroot in the Dockerfile (much much cleaner). Is there a way I can get it to work?
Thanks a lot!
You could use the fakechroot variant of debootstrap, like this:
fakechroot fakeroot debootstrap --variant=fakechroot ...
Cheers!
No, this is not currently possible.
Issue #1916 (which concerns running privileged operations during docker build) is still open. There was discussion at one point of adding a command-line flag and a RUNP command, but neither of these has been implemented.
Adding --cap-add=SYS_ADMIN --security-opt apparmor:unconfined to the docker run command works for me.
See moby/moby issue 16429
This still doesn't work (2018-05-31).
Currently the only option is debootstrap followed by docker import - Import from a local directory
# mkdir /path/to/target
# debootstrap bionic /path/to/target
# tar -C /path/to/target -c . | docker import - ubuntu:bionic
debootstrap version 1.0.107, which has been available since Debian 10 Buster (July 2019) or from Debian 9 Stretch-Backports, has native support for Docker and allows building a Debian root image without requiring privileges.
Dockerfile:
FROM debian:buster-slim AS builder
RUN apt-get -qq update \
&& apt-get -q install --assume-yes debootstrap
ARG MIRROR=http://deb.debian.org/debian
ARG SUITE=sid
RUN debootstrap --variant=minbase "$SUITE" /work "$MIRROR"
RUN chroot /work apt-get -q clean
FROM scratch
COPY --from=builder /work /
CMD ["bash"]
docker build -t my-debian .
docker build -t my-debian:bullseye --build-arg SUITE=bullseye .
There is a fun workaround, but it involves running Docker twice.
The first time, using a standard docker image like ubuntu:latest, only run the first stage of debootstrap by using the --foreign option.
debootstrap --foreign bionic /path/to/target
Then keep it from doing anything that would require privileges (and is not needed anyway) by modifying the functions that will be used in the second stage.
sed -i '/setup_devices ()/a return 0' /path/to/target/debootstrap/functions
sed -i '/setup_proc ()/a return 0' /path/to/target/debootstrap/functions
The last step for that docker run is to have the container tar itself up into a directory that is mounted as a volume.
tar --exclude='dev/*' -cvf /guestpath/to/volume/rootfs.tar -C /path/to/target .
Ok, now prep for a second run. First load your tar file as a docker image.
cat /hostpath/to/volume/rootfs.tar | docker import - my_image:latest
Then, run docker using FROM my_image:latest and run the second debootstrap stage.
/debootstrap/debootstrap --second-stage
That might be obtuse, but it does work without requiring --privileged. You are effectively replacing running chroot with running a second Docker container.
This does not address the OP's requirement of doing chroot in a container without --privileged set, but it is an alternative method that may be of use.
See Docker Moby for heterogeneous rootfs builds. It creates a native temp directory and creates a rootfs in it using debootstrap, which needs sudo. THEN it creates a docker image using
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/bash"]
This is a common recipe for running a pre-made rootfs in a docker image. Once the image is built, it does not need special permissions. AND it's supported by the docker devel team.
Short answer: without privileged mode, no, there isn't a way.
Docker is targeted at microservices and is not a drop-in replacement for virtual machines. Having multiple installations in one container is definitely not congruent with that. Why not use multiple Docker containers instead?