How to view the logs of a container? - docker

I have this Dockerfile:
FROM centos:7
ENV container docker
RUN yum -y update && yum -y install net-tools && yum -y install initscripts && yum -y install epel-release && yum -y install nginx && yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# expose Nginx ports http/https
EXPOSE 80 443
RUN curl https://www.sheldonbrown.com/web_sample1.html > /usr/share/nginx/index.html
RUN mv /usr/share/nginx/index.html /usr/share/nginx/html/index.html -f
RUN mkdir -p /local/nginx
RUN touch /local/nginx/start.sh
RUN chmod 755 /local/nginx/start.sh
RUN echo "#!/bin/bash" >> /local/nginx/start.sh
RUN echo "nginx" >> /local/nginx/start.sh
RUN echo "tail -f /var/log/nginx/access.log" >> /local/nginx/start.sh
ENTRYPOINT ["/bin/bash", "-c", "/local/nginx/start.sh"]
I'm building it with docker build -t "my_nginx" .
And then running it with docker run -i -t --rm -p 8888:80 --name nginx "my_nginx"
http://localhost:8888/ shows the page, but no logging is shown.
If I press Ctrl-C, nginx is stopped and only then is the tail of the log shown.
If I press Ctrl-C again, the container is gone.
Question: how can I keep nginx running AND show the tail of the log (preferably also visible using the docker logs command)?

The easiest way to accomplish this is just to use the Docker Hub nginx image, which handles this for you. Your Dockerfile could then be as short as:
FROM nginx:1.19
# Specifically use ADD because it will fetch a URL
ADD https://www.sheldonbrown.com/web_sample1.html /usr/share/nginx/html/index.html
If you look at its Dockerfile, you'll see it uses symbolic links to send Nginx's "normal" log files to the container's stdout and stderr:
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
That gets around most of the mechanics in your Dockerfile: you can just run nginx as the main container command without needing a second process to tail the logs. You can basically trim out the entire last half and get:
FROM centos:7
ENV container docker
RUN yum -y install epel-release \
&& yum -y install net-tools initscripts nginx \
&& yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf \
&& ln -s /dev/stdout /var/log/nginx/access.log \
&& ln -s /dev/stderr /var/log/nginx/error.log
RUN curl -o /usr/share/nginx/html/index.html https://www.sheldonbrown.com/web_sample1.html
EXPOSE 80 443
CMD ["nginx"]
Yet another possibility here (with any of these images) is to mount your own volume over the container's /var/log/nginx directory. That gives you your own host-visible directory of logs that you can inspect at your convenience.
mkdir logs
docker run -v $PWD/logs:/var/log/nginx -d ...
less logs/access.log
(In the shell script you construct in the Dockerfile, you use the Nginx daemon off directive to run as a foreground process, which means the nginx command will never exit on its own. That in turn means the script never advances to the tail line, which is why you don't get logs out.)
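If you do want to keep the start.sh approach instead, one minimal sketch (assuming you drop the "daemon off;" line so nginx can daemonize and the script actually reaches the tail) would be:
#!/bin/bash
# nginx forks into the background (no "daemon off;" in nginx.conf)
nginx
# tail becomes the container's foreground process, so the access log
# shows up on stdout and via "docker logs"
exec tail -f /var/log/nginx/access.log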

You should remove the tail command from the Dockerfile, run the container with docker run -it -d --rm -p 8888:80 --name nginx "my_nginx" and then use docker logs -f nginx.

Related

How To Start MariaDB And Keep it Running Centos Based Docker Image

I'm trying to create a Dockerfile (the base OS must be CentOS) that will install mariadb, start mariadb, and keep mariadb running, so that I can use the container in GitLab to run my integration tests (Java). This is what I have so far:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
EXPOSE 8080
EXPOSE 3306
# install mariadb
RUN yum -y install mariadb
RUN yum -y install mariadb-server
RUN systemctl start mariadb
ENTRYPOINT tail -f /dev/null
The error I'm getting is
Failed to get D-Bus connection: Operation not permitted
You can do something like this:
FROM centos/mariadb-102-centos7
USER root
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
You can mount your code folder into this container and execute it with docker exec.
It is recommended, however, that you use two different containers: one for the DB and one for your code. You can then pass the code container the env vars required to connect to the DB container.
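A minimal docker-compose sketch of that two-container setup could look like this (the service names, credentials, and the DB_* variables your test code reads are all hypothetical; adjust the MYSQL_* variables to whatever your MariaDB image expects):
version: '3'
services:
  db:
    image: "centos/mariadb-102-centos7"
    environment:
      MYSQL_USER: "test"            # hypothetical credentials for the test database
      MYSQL_PASSWORD: "test"
      MYSQL_DATABASE: "integration"
  tests:
    build: "."                      # your Java test image, built from a Dockerfile like the one above
    environment:
      DB_HOST: "db"                 # the service name resolves on the compose network
      DB_PORT: "3306"
    depends_on:
      - db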
Nothing is running by default in containers, including systemd, so you cannot use systemd to start mariadb.
If we reference the official mariadb Dockerfile, we can see that you can start mariadb by adding CMD ["mysqld"] to your Dockerfile.
You must also make sure to install mariadb in your container with RUN yum -y install mariadb-server mariadb-client, as it is not installed by default either.

Starting Gunicorn Service in Dockerfile: Failed to get D-Bus connection: Operation not permitted

I'm trying to start services (gunicorn, nginx) with my Dockerfile, but I get that error.
This is my Dockerfile:
FROM centos:centos7
RUN yum -y install epel-release
RUN yum -y --enablerepo=base clean metadata
RUN yum -y install nginx
RUN yum -y install python-pip
RUN pip install --upgrade pip
RUN yum -y install systemd;
RUN yum clean all;
COPY . /
RUN pip install --no-cache-dir -r requirements.txt
RUN ./manage.py makemigrations
RUN ./manage.py migrate
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
#Gunicorn
RUN cp gunicorn_systemd /etc/systemd/system/gunicorn.service
RUN systemctl start gunicorn
RUN systemctl enable gunicorn
And this is my build command
docker build -t guni ./
Any help, please?
You are trying to interact with systemd in your build script:
RUN systemctl start gunicorn
There are a number of problems here. First, trying to "start" a service as part of the build process doesn't make any sense: you're building an image, not starting a container.
Secondly, you're trying to interact with systemd, but you're never starting systemd, and it is unlikely that you want to [1]. Since a docker container is typically a "single process" environment, you don't need any init-like process supervisor to start things for you. You just need to arrange to run the necessary command yourself.
Taking Apache httpd as an example, rather than running:
systemctl start httpd
You would run:
httpd -DFOREGROUND
This runs the webserver and ensures that it stays in the foreground (the Docker container will exit when the foreground process exits). You can surely do something similar with gunicorn.
Your container is also missing a CMD or ENTRYPOINT directive, so it's not going to do anything when you run it unless you provide an explicit command, and that's probably not the behavior you want.
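For example, the end of the Dockerfile could look roughly like this (a sketch, assuming a Django project whose WSGI module is myproject.wsgi; adjust the module path and port to your project):
# run gunicorn in the foreground as the container's main process
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi:application"]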
[1] If you really think you need systemd, you would need to arrange to start it when the container starts (e.g., CMD /sbin/init), but systemd is not something that runs well in an unprivileged container environment. It's possible, but I wouldn't recommend it.
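For completeness, if you do go the systemd route with something like CMD /sbin/init, the container generally has to be started roughly like this (a sketch; the exact flags depend on your Docker version and the host's cgroup setup):
docker run -d --name guni-systemd \
  --tmpfs /run --tmpfs /tmp \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  guni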

docker container can't use `service sshd restart`

I am trying to build a hadoop Dockerfile.
In the build process, I added:
&& apt install -y openssh-client \
&& apt install -y openssh-server \
&& ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa \
&& cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys \
&& chmod 0600 ~/.ssh/authorized_keys \
&& sed -i '/\#AuthorizedKeysFile/ d' /etc/ssh/sshd_config \
&& echo "AuthorizedKeysFile ~/.ssh/authorized_keys" >> /etc/ssh/sshd_config \
&& /etc/init.d/ssh restart
I assumed that when I ran this container:
docker run -it --rm hadoop/tag bash
I would be able to:
ssh localhost
But I got an error:
ssh: connect to host localhost port 22: Connection refused
If I run this manually inside the container:
/etc/init.d/ssh restart
# or this
service ssh restart
Then I can get connected. I am thinking that this means the sshd restart didn't work.
I am using FROM java in the Dockerfile.
The build process only builds an image. Processes that are run at that time (using RUN) are no longer running after the build, and are not started again when a container is launched using the image.
What you need to do is get sshd to start at container runtime. The simplest way to do that is using an entrypoint script.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["whatever", "your", "command", "is"]
entrypoint.sh:
#!/bin/sh
# Start the ssh server
/etc/init.d/ssh restart
# Execute the CMD
exec "$#"
Rebuild the image using the above, and when you use it to start a container, it should start sshd before running your CMD.
You can also change the base image you start from to something like Phusion baseimage if you prefer. It makes it easy to start services like syslogd and sshd that you may wish the container to have running.

RUN command not called in Dockerfile

Here is my Dockerfile:
# Pull base image (Ubuntu)
FROM dockerfile/python
# Install socat
RUN \
cd /opt && \
wget http://www.dest-unreach.org/socat/download/socat-1.7.2.4.tar.gz && \
tar xvzf socat-1.7.2.4.tar.gz && \
rm -f socat-1.7.2.4.tar.gz && \
cd socat-1.7.2.4 && \
./configure && \
make && \
make install
RUN \
start-stop-daemon --quiet --oknodo --start --pidfile /run/my.pid --background --make-pidfile \
--exec /opt/socat-1.7.2.4/socat PTY,link=/dev/ttyMY,echo=0,raw,unlink-close=0 \
TCP-LISTEN:9334,reuseaddr,fork
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
I build an image from this Dockerfile and run it like this:
docker run -d -t -i myaccount/socat_test
After this, I log into the container and check whether the socat daemon is running, but it is not. I just started playing around with Docker. Am I misunderstanding the concept of a Dockerfile? I would expect Docker to run the socat command when the container starts.
You are confusing RUN and CMD. The RUN instruction is meant to build up the image, as you correctly did with the first one. The second RUN is also executed when building the image, but not when running the container. If you want to execute commands when the container is started, you ought to use the CMD instruction.
More information can be found in the Dockerfile reference. For instance, you could also use ENTRYPOINT instead of CMD.
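For instance, socat stays in the foreground on its own, so the second RUN and the final CMD could be collapsed into a single instruction, roughly like this (a sketch using the paths from your Dockerfile; the start-stop-daemon wrapper is dropped entirely):
# run socat as the container's main foreground process
CMD ["/opt/socat-1.7.2.4/socat", "PTY,link=/dev/ttyMY,echo=0,raw,unlink-close=0", "TCP-LISTEN:9334,reuseaddr,fork"]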

Start sshd automatically with docker container

Given:
container based on ubuntu:13.10
installed ssh (via apt-get install ssh)
Problem: each time I start the container I have to run sshd manually with service ssh start
Tried: update-rc.d ssh defaults, but it does not help.
Question: how to setup container to start sshd service automatically during container start?
Just try:
ENTRYPOINT service ssh restart && bash
in your Dockerfile; it works fine for me!
More details here: How to automatically start a service when running a docker container?
Here is a Dockerfile which installs ssh server and runs it:
# Build Ubuntu image with base functionality.
FROM ubuntu:focal AS ubuntu-base
ENV DEBIAN_FRONTEND noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Setup the default user.
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
RUN echo 'ubuntu:ubuntu' | chpasswd
USER ubuntu
WORKDIR /home/ubuntu
# Build image with Python and SSHD.
FROM ubuntu-base AS ubuntu-with-sshd
USER root
# Install required tools.
RUN apt-get -qq update \
&& apt-get -qq --no-install-recommends install vim-tiny=2:8.1.* \
&& apt-get -qq --no-install-recommends install sudo=1.8.* \
&& apt-get -qq --no-install-recommends install python3-pip=20.0.* \
&& apt-get -qq --no-install-recommends install openssh-server=1:8.* \
&& apt-get -qq clean \
&& rm -rf /var/lib/apt/lists/*
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\zeListenAddress/\1/g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\zeHostKey .*ssh_host_.*_key/\1/g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
# Generate and configure user keys.
USER ubuntu
RUN ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
#COPY --chown=ubuntu:root "./files/authorized_keys" /home/ubuntu/.ssh/authorized_keys
# Setup default command and/or parameters.
EXPOSE 22
CMD ["/usr/bin/sudo", "/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
Build with the following command:
docker build --target ubuntu-with-sshd -t ubuntu-with-sshd .
Then run with:
docker run -p 2222:22 ubuntu-with-sshd
To connect to container via local port, run: ssh -v localhost -p 2222.
To check for container IP address, use docker ps and docker inspect.
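For example (use whichever container name or ID docker ps reports):
docker ps                       # find the container name or ID
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-name>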
Here is an example docker-compose.yml file:
---
version: '3.4'
services:
  ubuntu-with-sshd:
    image: "ubuntu-with-sshd:latest"
    build:
      context: "."
      target: "ubuntu-with-sshd"
    networks:
      mynet:
        ipv4_address: 172.16.128.2
    ports:
      - "2222:22"
    privileged: true # Required for /usr/sbin/init
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.16.128.0/24
To run, type:
docker-compose up --build
I think the correct way to do it would be to follow Docker's instructions for dockerizing an SSH service.
In relation to the specific question, the following lines added at the end of the Dockerfile will achieve what you were looking for:
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Dockerize a SSHD service
I have created a Dockerfile that runs sshd inside. I think it is not secure, but for testing/development in a DMZ it could be OK:
FROM ubuntu:20.04
USER root
# change root password to `ubuntu`
RUN echo 'root:ubuntu' | chpasswd
ENV DEBIAN_FRONTEND noninteractive
# install ssh server
RUN apt-get update && apt-get install -y \
openssh-server sudo \
&& rm -rf /var/lib/apt/lists/*
# workdir for ssh
RUN mkdir -p /run/sshd
# generate server keys
RUN ssh-keygen -A
# allow root to login
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
EXPOSE 22
# run ssh server
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
You can probably just start the ssh server when starting your container. Something like this:
docker run ubuntu /usr/sbin/sshd -D
Check out this official tutorial.
This is what I did:
FROM nginx
# install gosu
# seealso:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
apt-get update; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
ENV myenv='default'
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
ENV AIRFLOW_HOME=/usr/local/airflow
RUN mkdir $AIRFLOW_HOME
RUN groupadd --gid 8080 airflow
RUN useradd --uid 8080 --gid 8080 -ms /bin/bash -d $AIRFLOW_HOME airflow
RUN echo 'airflow:mypass' | chpasswd
EXPOSE 22
CMD ["/entrypoint.sh"]
Inside entrypoint.sh:
echo "starting ssh as root"
gosu root service ssh start &
#gosu root /usr/sbin/sshd -D &
echo "starting tail user"
exec gosu airflow tail -f /dev/null
Well, I used the following command to solve that
docker run -i -t mycentos6 /bin/bash -c '/etc/init.d/sshd start && /bin/bash'
First, log in to your container and write an initialization script /bin/init as follows:
# execute in the container
cat <<EOT >> /bin/init
#!/bin/bash
service ssh start
while true; do sleep 1; done
EOT
Then make sure the root user is permitted to log in via ssh:
# execute in the container
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Commit the container to a new image after exiting from the container:
# execute in the server
docker commit <YOUR_CONTAINER> <ANY_REPO>:<ANY_TAG>
From now on, as long as you run your container with the following command, the ssh service will be automatically started.
# execute in the server
docker run -it -d --name <NAME> <REPO>:<TAG> /bin/init
docker exec -it <NAME> /bin/bash
Done.
You can try a more elegant way to do that with phusion/baseimage-docker
https://github.com/phusion/baseimage-docker#readme
