I have a container being built that really only contains memcached, and I want memcached to start when the container runs.
This is my current Dockerfile -
FROM centos:7
MAINTAINER Some guy <someguy@guysome.org>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN yum install -y memcached
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 11211/tcp 11211/udp
CMD ["/usr/bin/memcached"]
#CMD ["/usr/bin/memcached -u root"]
#CMD ["/usr/bin/memcached", "-D", "FOREGROUND"]
The container builds successfully, but when I try to run it with
docker run -d -i -t -P <image id>, I cannot see the container in the list returned by docker ps.
I attempted to have my memcached service run the same way as my httpd container, but I cannot pass in the -D flag (since it's already a daemon, I'm guessing). This is how my httpd CMD was set up -
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Locally, if I run the command /usr/bin/memcached -u root it runs as a process, but when I try it in the container CMD it informs me that it cannot find the specified file (having to do with the -u root section, I am guessing).
Setting the CMD to /bin/bash did not allow the service to start either.
How can I have my memcached service run and be visible when I run docker ps, so that I can open a bash session inside of it?
Thanks.
memcached will run in the foreground by default, which is what you want. The -d option would run memcached as a daemon which would cause the container to exit immediately.
The Dockerfile looks overly complex; try this:
FROM centos:7
RUN yum update -y && yum install -y epel-release && yum install -y memcached && yum clean all
EXPOSE 11211
CMD ["/usr/bin/memcached","-p","11211","-u","memcached","-m","64"]
Then you can do what you need
$ docker build -t me/memcached .
<snipped build>
$ CID=$(docker create me/memcached)
$ docker start $CID
4ac5afed0641f07f4694c30476cef41104f6fd864c174958b971822005fd292a
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ac5afed0641 me/memcached "/usr/bin/memcached -" About a minute ago Up 4 seconds 11211/tcp jovial_bardeen
$ docker exec $CID ps -ef
UID PID PPID C STIME TTY TIME CMD
memcach+ 1 0 0 01:03 ? 00:00:00 /usr/bin/memcached -p 11211 -u memcached -m 64
root 10 0 2 01:04 ? 00:00:00 ps -ef
$ docker exec -ti $CID bash
[root@4ac5afed0641 /]#
Or skip your Dockerfile if it actually only runs memcached and use:
docker run --name my-memcache -d memcached
At least to get your basic set-up going, and then you can update that official image as needed.
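For example, a quick smoke test of the official image might look like this (the container name is illustrative, and the last step assumes nc is available on the host):

```shell
# Run the official memcached image in the background,
# publishing the default port to the host
docker run --name my-memcache -d -p 11211:11211 memcached

# The container should now appear in the process list
docker ps --filter name=my-memcache

# Talk to it over the memcached text protocol;
# "STAT ..." lines mean the server is answering
printf 'stats\r\nquit\r\n' | nc localhost 11211
```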
Related
I tried to install httpd inside a Docker container and tried to restart httpd via systemctl in the container, but I'm getting an exception like the one below.
As per my analysis, systemd is not enabled by default in the Docker base images; experts suggested configuring it like below.
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
I tried the same but still no luck. As another option I tried using centos/systemd as my base image, but no luck either. This is the sample Dockerfile I'm trying:
FROM centos:7
RUN yum -y install httpd php php-mysql php-gd mariadb-server php-xml php-intl mysql wget
RUN systemctl restart httpd.service
RUN systemctl enable httpd.service
RUN systemctl start mariadb
RUN systemctl enable mariadb
# RUN mysql -u root -p -u root
EXPOSE 80
Can anyone please advise me on this? Are there any other possibilities to achieve the same thing in a different way?
References:
Docker CentOS systemctl not permitted
https://forums.docker.com/t/systemctl-status-is-not-working-in-my-docker-container/9075/2
I'm having a Dockerfile
FROM centos:7
ENV container docker
RUN yum -y update && yum -y install net-tools && yum -y install initscripts && yum -y install epel-release && yum -y install nginx && yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# expose Nginx ports http/https
EXPOSE 80 443
RUN curl https://www.sheldonbrown.com/web_sample1.html > /usr/share/nginx/index.html
RUN mv /usr/share/nginx/index.html /usr/share/nginx/html/index.html -f
RUN mkdir -p /local/nginx
RUN touch /local/nginx/start.sh
RUN chmod 755 /local/nginx/start.sh
RUN echo "#!/bin/bash" >> /local/nginx/start.sh
RUN echo "nginx" >> /local/nginx/start.sh
RUN echo "tail -f /var/log/nginx/access.log" >> /local/nginx/start.sh
ENTRYPOINT ["/bin/bash", "-c", "/local/nginx/start.sh"]
I'm building it with docker build -t "my_nginx" .
And then running it with docker run -i -t --rm -p 8888:80 --name nginx "my_nginx"
http://localhost:8888/ shows the page, but no logging is shown.
If I press Ctrl-C, nginx is stopped but the tail of the log is shown.
If I press Ctrl-C again, the container is gone.
Question: how can I let nginx run AND show the tail of the log (which is preferably also visible using the docker logs command)?
The easiest way to accomplish this is just to use the Docker Hub nginx image, which deals with this for you. A Dockerfile could be as little as:
FROM nginx:1.19
# Specifically use ADD because it will fetch a URL
ADD https://www.sheldonbrown.com/web_sample1.html /usr/share/nginx/html/index.html
If you look at its Dockerfile, it actually uses symbolic links to send Nginx's "normal" logs to the container stdout:
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
That gets around most of the mechanics in your Dockerfile: you can just run nginx as the main container command without using a second process to tail the logs. You can basically trim out the entire last half and get:
FROM centos:7
ENV container docker
RUN yum -y install epel-release \
&& yum -y install net-tools initscripts nginx \
&& yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf \
&& ln -s /dev/stdout /var/log/nginx/access.log \
&& ln -s /dev/stderr /var/log/nginx/error.log
RUN curl -o /usr/share/nginx/html/index.html https://www.sheldonbrown.com/web_sample1.html
EXPOSE 80 443
CMD ["nginx"]
Yet another possibility here (with any of these images) is to mount your own volume over the container's /var/log/nginx directory. That gives you your own host-visible directory of logs that you can inspect at your convenience.
mkdir logs
docker run -v $PWD/logs:/var/log/nginx -d ...
less logs/access.log
(In the shell script you construct in the Dockerfile, you use the Nginx daemon off directive to run as a foreground process, which means the nginx command will never exit on its own. That in turn means the script never advances to the tail line, which is why you don't get logs out.)
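Incidentally, instead of appending daemon off; to nginx.conf with echo, the same directive can be passed on the nginx command line via -g, which keeps the config file untouched. A sketch of the equivalent Dockerfile line:

```shell
# Dockerfile fragment (sketch): run nginx in the foreground
# without editing nginx.conf
CMD ["nginx", "-g", "daemon off;"]
```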
You should remove the tail command from the Dockerfile, run the container with docker run -it -d --rm -p 8888:80 --name nginx "my_nginx", and then use docker logs -f nginx.
I'm trying to create a Dockerfile (the base OS must be CentOS) that will install MariaDB, start it, and keep it running, so that I can use the container in GitLab to run my integration tests (Java). This is what I have so far:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
EXPOSE 8080
EXPOSE 3306
# install mariadb
RUN yum -y install mariadb
RUN yum -y install mariadb-server
RUN systemctl start mariadb
ENTRYPOINT tail -f /dev/null
The error I'm getting is
Failed to get D-Bus connection: Operation not permitted
You can do something like this:
FROM centos/mariadb-102-centos7
USER root
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
You can mount your code folder into this container and execute it with docker exec.
It is recommended however you use two different containers: one for the db and one for your code. You can then pass the code container the env vars required to connect to the db container.
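A sketch of that two-container setup using a user-defined network (the test image name, the env var names consumed by the test container, and the password are all illustrative; the MYSQL_* variables are the ones documented for the centos/mariadb image):

```shell
# Put both containers on a shared network so the tests
# can reach the db by its container name
docker network create testnet

# Database container (credentials here are placeholders)
docker run -d --name db --network testnet \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=testdb \
  centos/mariadb-102-centos7

# Test container: pass connection details as env vars
# ("my-java-tests" and the command are placeholders for your build)
docker run --rm --network testnet \
  -e DB_HOST=db -e DB_PORT=3306 \
  my-java-tests ./gradlew integrationTest
```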
Nothing is running by default in containers, including systemd, so you cannot use systemctl to start mariadb.
If we reference the official mariadb Dockerfile, we can see that you can start mariadb by adding CMD ["mysqld"] to your Dockerfile.
You must also make sure to install mariadb in your container with RUN yum -y install mariadb-server mariadb-client, as it is not installed by default either.
I'm trying to start services (gunicorn, nginx) from my Dockerfile, but I get an error.
This is my Dockerfile:
FROM centos:centos7
RUN yum -y install epel-release
RUN yum -y --enablerepo=base clean metadata
RUN yum -y install nginx
RUN yum -y install python-pip
RUN pip install --upgrade pip
RUN yum -y install systemd;
RUN yum clean all;
COPY . /
RUN pip install --no-cache-dir -r requirements.txt
RUN ./manage.py makemigrations
RUN ./manage.py migrate
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
#Gunicorn
RUN cp gunicorn_systemd /etc/systemd/system/gunicorn.service
RUN systemctl start gunicorn
RUN systemctl enable gunicorn
And this is my build command
docker build -t guni ./
Any help, please?
You are trying to interact with systemd in your build script:
RUN systemctl start gunicorn
There are a number of problems here. First, trying to "start" a service as part of the build process doesn't make any sense: you're building an image, not starting a container.
Secondly, you're trying to interact with systemd, but you're never starting systemd, and it is unlikely that you want to [1]. Since a docker container is typically a "single process" environment, you don't need any init-like process supervisor to start things for you. You just need to arrange to run the necessary command yourself.
Taking Apache httpd as an example, rather than running:
systemctl start httpd
You would run:
httpd -DFOREGROUND
This runs the webserver and ensures that it stays in the foreground (the Docker container will exit when the foreground process exits). You can surely do something similar with gunicorn.
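For gunicorn, a sketch of the equivalent Dockerfile line might look like this (the module path and bind address are placeholders for your application):

```shell
# Dockerfile fragment (sketch): run gunicorn in the foreground as PID 1;
# "myproject.wsgi:application" and the port are assumptions
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi:application"]
```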
Your container is also missing a CMD or ENTRYPOINT directive, so it's not going to do anything when you run it unless you provide an explicit command, and that's probably not the behavior you want.
[1] If you really think you need systemd, you would need to arrange to start it when the container starts (e.g, CMD /sbin/init), but systemd is not something that runs well in an unprivileged container environment. It's possible but I wouldn't recommend it.
I installed a Docker image - CentOS 7 - on my Ubuntu machine, but the ssh service is not found, so I can't run the service.
[root@990e92224a82 /]# yum install openssh-server openssh-clients
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: mirror.dhakacom.com
* extras: mirror.dhakacom.com
* updates: mirror.dhakacom.com
Package openssh-server-6.6.1p1-31.el7.x86_64 already installed and latest version
Package openssh-clients-6.6.1p1-31.el7.x86_64 already installed and latest version
Nothing to do
[root@990e92224a82 /]# ss
ssh ssh-agent ssh-keygen sshd ssltap
ssh-add ssh-copy-id ssh-keyscan sshd-keygen
How can I remotely log in to the Docker image?
You have to put the following instructions in a Dockerfile.
RUN yum install -y sudo wget telnet openssh-server vim git ncurses-term
RUN useradd your_account
RUN mkdir -p /home/your_account/.ssh && chown -R your_account /home/your_account/.ssh/
# Create known_hosts
RUN touch /home/your_account/.ssh/known_hosts
COPY files/authorized_keys /home/your_account/.ssh/
COPY files/config /home/your_account/.ssh/
COPY files/pam.d/sshd /etc/pam.d/sshd
RUN touch /home/your_account/.ssh/environment
RUN chown -R your_account /home/your_account/.ssh
RUN chmod 400 -R /home/your_account/.ssh/*
RUN chmod 700 -R /home/your_account/.ssh/known_hosts
RUN chmod 700 /home/your_account/.ssh/environment
# Enable sshd
COPY files/sshd_config /etc/ssh/
RUN ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
# Add a account into sudoers and this account doesn't need to type his password
COPY files/sudoers /etc/
COPY files/start.sh /root/
I had to remove "pam_nologin.so" from the file /etc/pam.d/sshd, because when I upgraded openssh-server to openssh-server-6.6.1p1-31.el7, pam_nologin.so disallowed remote login for any user, even when the file /etc/nologin does not exist.
start.sh
#!/bin/bash
/usr/sbin/sshd -E /tmp/sshd.log
Start centos container
docker run -d -t -p $(sshPort):22 --name $(containerName) $(imageName) /bin/bash
docker exec -d $(containerName) bash -c "sh /root/start.sh"
Login container
ssh -p $(sshPort) $(Docker ip)
To extend @puritys' answer:
You could do this in the Dockerfile instead.
Last in the file:
ENTRYPOINT /usr/sbin/sshd -E /tmp/sshd.log && /bin/bash
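A variant worth considering (an assumption on my part, not from the answer above): run sshd itself in the foreground with -D, so the container's lifetime is tied to sshd rather than to a bash process:

```shell
# Dockerfile fragment (sketch): sshd stays in the foreground as PID 1;
# -E still mirrors the log to a file
ENTRYPOINT ["/usr/sbin/sshd", "-D", "-E", "/tmp/sshd.log"]
```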
Then you will only need to run:
docker run -d -t -p $(sshPort):22 --name $(containerName) $(imageName) /bin/bash