Dockerfile
FROM centos:7
ENV container docker
VOLUME ["/sys/fs/cgroup"]
RUN yum -y update
RUN yum install -y httpd
RUN systemctl start httpd.service
ADD . /code
WORKDIR /code
docker-compose.yml
version: '2'
services:
  web:
    privileged: true
    build: .
    ports:
      - "80:80"
    volumes:
      - .:/code
command
docker-compose build
error:
Step 6 : RUN systemctl start httpd.service
 ---> Running in 5989c6576ac9
Failed to get D-Bus connection: Operation not permitted
ERROR: Service 'web' failed to build: The command '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1
Note: I'm running on Windows 7 :(
Any tips?
As explained in the centos Docker image repository, systemd is not active by default. In order to use systemd, you will need to include text similar to the example Dockerfile below:
FROM centos:7
MAINTAINER "you" <your#email.here>
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
This Dockerfile deletes a number of unit files which might cause issues. From here, you are ready to build your base image.
$ docker build --rm -t local/c7-systemd .
In order to use the systemd enabled base container created above, you will need to change your Dockerfile to:
FROM local/c7-systemd
ENV container docker
VOLUME ["/sys/fs/cgroup"]
RUN yum -y update
RUN yum install -y httpd
# systemctl start cannot work during docker build (no systemd is running yet);
# enable the unit instead and let /usr/sbin/init start it at container boot
RUN systemctl enable httpd.service
ADD . /code
WORKDIR /code
CMD ["/usr/sbin/init"]
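Systemd also needs the host's cgroup hierarchy at runtime, so run the resulting container with it mounted read-only. A sketch, with local/c7-systemd-httpd as an assumed tag for this image:
docker build --rm -t local/c7-systemd-httpd .
docker run -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 local/c7-systemd-httpd
With the docker-compose.yml from the question, adding /sys/fs/cgroup:/sys/fs/cgroup:ro under volumes achieves the same thing.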
For the same issue I have created docker-systemctl-replacement, which will do the steps that "systemctl start httpd.service" would do, but without the need for a running systemd. Just say "systemctl.py start httpd.service" and see if that works.
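A minimal sketch of what that looks like in a Dockerfile, assuming the script is vendored under files/docker/ as in that project's own examples:
# replace the real systemctl with the standalone python script
COPY files/docker/systemctl.py /usr/bin/systemctl
RUN yum install -y httpd
RUN systemctl enable httpd.service
# the script then acts as a small init replacement (PID 1)
# that brings up the enabled services
CMD ["/usr/bin/systemctl"]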
Related
I tried to install httpd inside a Docker container and to restart it via systemctl, but I'm getting an error like the one below.
As per my analysis, systemctl is not enabled by default in Docker base images; experts suggested configuring it like below.
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
I tried the same but still no luck. I also tried using centos/systemd as my base image, but no luck either. This is the sample Dockerfile I'm trying:
FROM centos:7
RUN yum -y install httpd php php-mysql php-gd mariadb-server php-xml php-intl mysql wget
RUN systemctl restart httpd.service
RUN systemctl enable httpd.service
RUN systemctl start mariadb
RUN systemctl enable mariadb
# RUN mysql -u root -p -u root
EXPOSE 80
Can anyone please advise me on this? Is there any other way to achieve the same thing?
References:
Docker CentOS systemctl not permitted
https://forums.docker.com/t/systemctl-status-is-not-working-in-my-docker-container/9075/2
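As for doing it a different way: a common alternative is to drop systemd altogether and run httpd in the foreground as the container's main process. A sketch:
FROM centos:7
RUN yum -y install httpd && yum clean all
EXPOSE 80
# -D FOREGROUND keeps httpd as PID 1 so the container stays up
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
MariaDB would then run in its own container instead of being started with systemctl during the build.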
I have TypeScript code that reads the contents of a directory and has to delete them one by one at some intervals.
Everything works fine locally. I made a Docker container for my code and wanted to achieve the same purpose; however, I realized that the directory contents are the same ones that existed at the time the image was built.
As far as I understand, the connection between the Docker container and the local file system is missing.
I have been wandering around bind and volume options, and I came across the following simple tutorial:
How To Share Data Between the Docker Container and the Host
According to the previous tutorial, theoretically, I would be able to achieve my goal:
If you make any changes to the ~/nginxlogs folder, you’ll be able to see them from inside the Docker container in real-time as well.
However, I followed exactly the same steps but still couldn't see the changes made locally reflected in the docker container, or vice versa.
My question is: How can I access my local file system from a docker container to read/write/delete files?
Update
This is my Dockerfile:
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
nodejs; \
apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
An easy way is to volume mount on the docker run command:
docker run -it -v /<Source Dir>/:/<Destination Dir> <image_name> bash
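For example, with the Windows host path used later in this answer (myimage is a placeholder tag):
docker run -it -v E:\dirToMap:/vol1 myimage bash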
Another way is to use docker-compose.
Put your Dockerfile and docker-compose.yaml in the same directory. The main focus is the volumes mapping:
volumes:
  - E:\dirToMap:/vol1
docker-compose.yaml
version: "3"
services:
ampervue:
build:
context: ./
image: <Image Name>
container_name: ampervueservice
volumes:
- E:\dirToMap:/vol1
ports:
- 8080:8080
And add a VOLUME instruction in the Dockerfile:
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
nodejs; \
apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
VOLUME /vol1
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
and run the following command to bring the container up:
docker-compose -f "docker-compose.yaml" up -d --build
The examples below come directly from the docs. The --mount and -v variants produce the same result; you can't run them both unless you remove the devtest container after running the first one.
with -v:
docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest
with --mount:
docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app nginx:latest
This is where you have to type your two different paths:
-v /path/from/your/host:/path/inside/the/container
<-------host------->:<--------container------->
--mount type=bind,source=/path/from/your/host,target=/path/inside/the/container
<-------host-------> <--------container------->
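One caveat when applying this to the Dockerfile in the question: the image copies the code into /work, so bind-mounting the host folder onto /work itself would hide those copied files. A sketch that mounts the watched folder one level below instead (sizecalc and the watched path are placeholders):
docker run -d --name sizecalc -v "$(pwd)"/watched:/work/watched sizecalc
and point the TypeScript code at /work/watched.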
I'm trying to create a Dockerfile (the base OS must be CentOS) that will install MariaDB, start it, and keep it running, so that I can use the container in GitLab to run my (Java) integration tests. This is what I have so far:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
EXPOSE 8080
EXPOSE 3306
# install mariadb
RUN yum -y install mariadb
RUN yum -y install mariadb-server
RUN systemctl start mariadb
ENTRYPOINT tail -f /dev/null
The error I'm getting is
Failed to get D-Bus connection: Operation not permitted
You can do something like this:
FROM centos/mariadb-102-centos7
USER root
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
You can mount your code folder into this container and execute it with docker exec.
It is recommended, however, that you use two different containers: one for the DB and one for your code. You can then pass the code container the env vars required to connect to the DB container.
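A sketch of that two-container layout in docker-compose (the env var names are placeholders; check which variables your image and code actually honor):
version: '2'
services:
  db:
    image: centos/mariadb-102-centos7
    environment:
      - MYSQL_ROOT_PASSWORD=secret
  tests:
    build: .
    depends_on:
      - db
    environment:
      # the hostname "db" resolves to the db container on the compose network
      - DB_HOST=db
      - DB_PORT=3306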
Nothing is running by default in containers, including systemd, so you cannot use systemd to start MariaDB.
If we reference the official mariadb Dockerfile, we can see that you can start MariaDB by adding CMD ["mysqld"] to your Dockerfile.
You must also make sure to install MariaDB in your container with RUN yum -y install mariadb-server mariadb-client, as it is not installed by default either.
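Putting both points together, a minimal sketch for a CentOS base (paths are the CentOS 7 defaults; mysqld_safe stays in the foreground, which is what keeps the container alive):
FROM centos:7
RUN yum -y install mariadb-server mariadb && yum clean all
# initialize the data directory at build time
RUN mysql_install_db --user=mysql --datadir=/var/lib/mysql
EXPOSE 3306
USER mysql
CMD ["/usr/bin/mysqld_safe", "--datadir=/var/lib/mysql"]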
I am pretty new to Docker and was following the documentation found here, trying to deploy several containers inside dind using docker-compose 1.14.0. I get the following:
docker run -v /home/dudarev/compose/:/compose/ --privileged docker:dind /compose/docker-compose
/usr/local/bin/dockerd-entrypoint.sh: exec: line 21: /compose/docker-compose: not found
Did I miss something?
There is an official Docker image for docker-compose on Docker Hub; just use that.
Follow these steps:
Create a directory on the host: mkdir /root/test
Create a docker-compose.yaml file with the following contents:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: redis
Run the docker run command below to run docker-compose inside the container:
docker run -itd -v /var/run/docker.sock:/var/run/docker.sock -v /root/test/:/var/tmp/ docker/compose:1.24.1 -f /var/tmp/docker-compose.yaml up -d
NOTE: Here the /var/tmp directory inside the container contains the docker-compose.yaml file, so I have used the -f option to specify the complete path of the YAML file. Also, docker.sock is mounted from the host into the container.
Hope this helps.
Add docker-compose installation to your Dockerfile before executing docker run.
For example, if you have an Ubuntu-based image, add to your Dockerfile:
RUN aptitude -y install docker-compose
RUN ln -s /usr/local/bin/docker-compose /compose/docker-compose
This is because your entrypoint appears to look for docker-compose in the /compose folder, while docker-compose is installed in /usr/local/bin by default.
If you want a concrete docker-compose version (for example 1.20.0-rc2):
RUN curl -L https://github.com/docker/compose/releases/download/1.20.0-rc2/docker-compose-`uname -s`-`uname -m` -o /compose/docker-compose
RUN chmod +x /compose/docker-compose
Here is a full Dockerfile to run docker-compose inside Docker:
FROM ubuntu:21.04
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN apt-get install -y curl
RUN curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
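To actually drive the host's Docker daemon from inside that image, mount the Docker socket and your project directory when running it; a sketch (compose-inside is a placeholder tag):
docker build -t compose-inside .
docker run -v /var/run/docker.sock:/var/run/docker.sock -v "$(pwd)":/work -w /work compose-inside docker-compose up -d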
I have a container that is being built that only really contains memcached, and I want it to start once the container is built.
This is my current Docker file -
FROM centos:7
MAINTAINER Some guy <someguy@guysome.org>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN yum install -y memcached
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 11211/tcp 11211/udp
CMD ["/usr/bin/memcached"]
#CMD ["/usr/bin/memcached -u root"]
#CMD ["/usr/bin/memcached", "-D", "FOREGROUND"]
The container builds successfully, but when I try to run the container using the command
docker run -d -i -t -P <image id>, I cannot see the container in the list that docker ps returns.
I attempted to have my memcached service run the same way as my httpd container, but I cannot pass in the argument using the -D flag (since it's already a daemon, I'm guessing). This is how my httpd CMD was set up -
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Locally, if I run the command /usr/bin/memcached -u root it runs as a process, but when I try it in the container CMD it tells me that it cannot find the specified file (having to do with the -u root section, I am guessing).
Setting the CMD to /bin/bash still did not allow the service to start either.
How can I have my memcached service run and allow it to be seen when I run docker ps, so that I can open a bash section inside of it?
Thanks.
memcached will run in the foreground by default, which is what you want. The -d option would run memcached as a daemon which would cause the container to exit immediately.
The Dockerfile looks overly complex; try this:
FROM centos:7
RUN yum update -y && yum install -y epel-release && yum install -y memcached && yum clean all
EXPOSE 11211
CMD ["/usr/bin/memcached","-p","11211","-u","memcached","-m","64"]
Then you can do what you need
$ docker build -t me/memcached .
<snipped build>
$ CID=$(docker create me/memcached)
$ docker start $CID
4ac5afed0641f07f4694c30476cef41104f6fd864c174958b971822005fd292a
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ac5afed0641 me/memcached "/usr/bin/memcached -" About a minute ago Up 4 seconds 11211/tcp jovial_bardeen
$ docker exec $CID ps -ef
UID PID PPID C STIME TTY TIME CMD
memcach+ 1 0 0 01:03 ? 00:00:00 /usr/bin/memcached -p 11211 -u memcached -m 64
root 10 0 2 01:04 ? 00:00:00 ps -ef
$ docker exec -ti $CID bash
[root@4ac5afed0641 /]#
Or skip your Dockerfile if it actually only runs memcached and use:
docker run --name my-memcache -d memcached
At least to get your basic set-up going, and then you can update that official image as needed.
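If you need the port published to the host or a different memory limit, the same image accepts the usual flags, e.g.:
docker run --name my-memcache -d -p 11211:11211 memcached memcached -m 64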