I'm having a recurring issue trying to set up a Docker container so that it stays running.
Here is a sample of the Dockerfile that I want to use:
RUN wget -O /usr/local/nexus-2.11.3-01-bundle.tar.gz http://www.sonatype.org/downloads/nexus-2.11.3-01-bundle.tar.gz
WORKDIR /usr/local
RUN tar xvzf /usr/local/nexus-2.11.3-01-bundle.tar.gz
RUN ln -s nexus-2.11.3-01 nexus
ENV NEXUS_HOME /usr/local/nexus
ENV RUN_AS_USER root
CMD ["/usr/local/nexus/bin/nexus", "start"]
EXPOSE 8081
Basically, when I build this and then run it, the container just dies; docker ps reports no running containers.
As far as I know (please correct me if I'm wrong...), a Docker container should stay running as long as there's a process with PID 1. Would the commands above run as PID 1, and if so, how can I force the nexus start command to use it? Or just keep the container alive...
The output of docker logs nexus is:
****************************************
WARNING - NOT RECOMMENDED TO RUN AS ROOT
****************************************
Starting Nexus OSS...
Started Nexus OSS.
It seems to suggest that Nexus has started, but then again when I do a docker ps, I don't see it running.
If the process running with PID 1 exits, the container is automatically stopped.
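You can see this behaviour in isolation; pid1-demo is just a throwaway name for the sketch, and /bin/true exits immediately:
docker run -d --name pid1-demo ubuntu:14.04 /bin/true
docker ps -a -f name=pid1-demo
The second command shows the container with an Exited (0) status almost as soon as it was started.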
You can check the sonatype/nexus repository here, which uses the concept of a Launcher. Here is how they keep the container from exiting:
...
RUN mkdir -p /opt/sonatype/nexus \
&& curl --fail --silent --location --retry 3 \
https://download.sonatype.com/nexus/professional-bundle/nexus-professional-${NEXUS_VERSION}-bundle.tar.gz \
| gunzip \
| tar x -C /tmp nexus-professional-${NEXUS_VERSION} \
&& mv /tmp/nexus-professional-${NEXUS_VERSION}/* /opt/sonatype/nexus/ \
&& rm -rf /tmp/nexus-professional-${NEXUS_VERSION}
RUN useradd -r -u 200 -m -c "nexus role account" -d ${SONATYPE_WORK} -s /bin/false nexus
...
EXPOSE 8081
WORKDIR /opt/sonatype/nexus
USER nexus
ENV CONTEXT_PATH /
ENV MAX_HEAP 768m
ENV MIN_HEAP 256m
ENV JAVA_OPTS -server -XX:MaxPermSize=192m -Djava.net.preferIPv4Stack=true
ENV LAUNCHER_CONF ./conf/jetty.xml ./conf/jetty-requestlog.xml
CMD java \
-Dnexus-work=${SONATYPE_WORK} -Dnexus-webapp-context-path=${CONTEXT_PATH} \
-Xms${MIN_HEAP} -Xmx${MAX_HEAP} \
-cp 'conf/:lib/*' \
${JAVA_OPTS} \
org.sonatype.nexus.bootstrap.Launcher ${LAUNCHER_CONF}
Since it is an open repository, you can refer to it directly if you like.
A quick guess from the logs is that running /usr/local/nexus/bin/nexus start would start it as a daemon.
That would cause another process to spawn and the one that started the daemon would exit, terminating the container.
One solution is to start the process in the foreground instead of as a daemon, but I couldn't find an option to do this in your Nexus case.
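That said, the wrapper scripts bundled with many Java apps (Nexus 2.x ships a Tanuki Java Service Wrapper script) usually accept a console subcommand that keeps the process in the foreground; if yours does, the fix could be as small as:
CMD ["/usr/local/nexus/bin/nexus", "console"]
Running the script with no arguments usually prints the supported subcommands, so that is worth checking first.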
Another is to use something like supervisord as the CMD and have it start your process.
Is it normal to lose all data, installed applications, and created folders inside a container when executing docker-compose stop my_image and docker-compose start my_image?
I'm creating the containers with docker-compose up --scale my_image=4
update no. 1
My containers have an sshd server running in them. When I connect to a container and execute touch test.txt, I see that the file was created.
However, after executing docker-compose stop my_image and docker-compose start my_image, the container is empty and ls -l shows that test.txt is absent.
update no. 2
my Dockerfile
FROM oraclelinux:8.5
RUN (yum update -y; \
yum install -y openssh-server openssh-clients initscripts wget passwd tar crontabs unzip; \
yum clean all)
RUN (ssh-keygen -A; \
sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config; \
sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config; \
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config; \
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config)
RUN (mkdir -p /root/.ssh/; \
echo "StrictHostKeyChecking=no" > /root/.ssh/config; \
echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config)
RUN echo "root:oraclelinux" | chpasswd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 22
my docker-compose
version: '3.9'
services:
  my_image:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 30000-30007:22
When I connect to a container:
Execute touch test.txt
Execute docker-compose stop my_image
Execute docker-compose start my_image
Execute ls -l
I see no file test.txt (in fact, the folder is empty)
update no. 3
entrypoint.sh
#!/bin/sh
# Start the ssh server
/usr/sbin/sshd -D
# Execute the CMD
exec "$#"
Other details
When containers are all up and running, I choose a container running
on a specific port, say port 30001, then using putty I connect to that specific container,
execute touch test.txt
execute ls -l
I do see that the file was created
I execute docker-compose stop my_image
I execute docker-compose start my_image
I connect via putty to port 30001
I execute ls -l
I see no file (folder is empty)
I try other containers to see if file exists inside one of them, but
I see no file present.
So, after some brutal brute-force debugging, I realized that I lose data
only when I fail to disconnect from ssh before stopping / restarting the
container. When I do disconnect first, the data does not disappear after a stop / restart.
I have created 2 containers in Docker. However, one of them is visible and the other is not.
Context:
I created one container by pulling the Jenkins Docker image; it is up and running and can be seen with the docker ps command.
Then I tried to build an image to be consumed by the second container.
The Dockerfile I wrote in vi to build that image:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
chmod 600 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
The Dockerfile worked: docker-compose build successfully built the image from it.
Once it was built, I tried to start it using:
[jenkins@localhost jenkins-data]$ docker-compose up -d
jenkins is up-to-date
Starting remote-host ... done
After this, when I run:
[jenkins@localhost jenkins-data]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c1ee0507091 jenkins/jenkins "/sbin/tini -- /usr/…" 5 days ago Up 5 minutes 0.0.0.0:8080->8080/tcp, 50000/tcp jenkins
It's only showing one running container; the remote-host container is not visible.
Is there any way to confirm whether the remote-host container is actually running, or whether there is some issue?
I'm new to Docker and Jenkins, so any lead is highly appreciated. Thank you.
docker ps only shows running containers.
Using docker ps -a you see both running and stopped containers.
See Docker documentation about ps.
Probably the remote-host container is not running anymore.
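To confirm, and to see why it exited, you can inspect the stopped container (assuming it is named remote-host, as in the compose output above):
docker ps -a -f name=remote-host
docker logs remote-host
docker inspect -f '{{.State.ExitCode}}' remote-host
The logs show whatever the CMD printed before it exited, which in a case like this could be, for example, sshd complaining about missing host keys.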
The container stopped because the main process launched by the CMD command detached and became a daemon.
The main process has to stay attached to the terminal; the -D flag in CMD /usr/sbin/sshd -D is precisely what keeps sshd from detaching, so it should stay (and check docker logs for why sshd still exited). Or you can follow this approach:
run sshd in detached mode and use a while sleep loop to keep the container running
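A minimal sketch of that second approach: without -D, sshd forks into the background, and the while loop keeps the shell alive as PID 1:
CMD /usr/sbin/sshd && while true; do sleep 30; done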
I have containers for multiple Atlassian products; JIRA, Bitbucket and Confluence. When I'm trying to access the running containers I'm usually using:
docker exec -it -u root ${DOCKER_CONTAINER} bash
With this command I can get in as usual, but after running a script that extracts and compresses log files, I can't access that one container anymore.
Excerpt from the 'clean up script'
This is the first point of failure; the script runs once a week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines against the Bitbucket container, the result is:
unable to find user root: no matching entries in passwd file
It's failing on the 'docker cp' command, but only against the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' user (defined in the Dockerfile) and 'root'.
I was able to copy /etc/passwd out of the container, and it contains all of the expected users. When trying to access by UID, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_HOME}/shared \
&& chmod -R 700 ${BITBUCKET_HOME} \
&& chown -R ${user}:${group} ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
&& curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
&& chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
&& chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/
${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times; even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that. I think the script may have removed or disabled the root user.
This issue is caused by a Docker engine bug, but it is tracked privately; Docker is asking users to restart the engine!
It seems that the bug is likely more than two years old!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing his best to get more funding.
It's a long-standing issue; I replicated it on versions from my old 1.10.3 up to at least 1.17.
As mentioned by @sorin, the Docker forum says running docker stop and then docker start fixes the problem, but that is hardly a long-term solution...
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution from the same forum post, mentioned here by @ObranZoltan, might work for you, but it does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error
I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the containers by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
See here to watch docker stopping the wrong container:
Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER j@eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:"3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps with a name filter:
docker ps -f name=nginx$
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bd84b6057b8 nginx "nginx -g 'daemon ..." 30 seconds ago Up 28 seconds 80/tcp, 443/tcp nginx
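The same filter can drive the deployment script's stop step; a sketch using the nginx name from above (note that docker stop errors out if the filter matches nothing):
docker stop $(docker ps -q -f name=nginx$)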
According to the docs, --filter ancestor=... can match the wrong containers if their images are in any way descended from other images.
So, to be sure my images are separate right from the start, I added this line to the start of my Dockerfile, after the FROM and MAINTAINER commands:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then, in my build scripts, after copying the Dockerfile to the distribution folder, I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
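Labels are another way to get the same isolation without editing the Dockerfile per environment; this is just a sketch, with env=dev as a made-up label for the example:
docker run -d --label env=dev "$serverImageName"
docker stop $(docker ps -q -f label=env=dev)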
We have Docker running on one machine and a workstation running on another machine.
I want to bootstrap the Docker container from the workstation, so our image needs to be ssh-enabled.
How do I make a Docker image ssh-enabled?
Before you add ssh, you should check whether docker exec will be sufficient for what you need. (doc link)
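For example, to get a shell in a running container (mycontainer is a placeholder name):
docker exec -it mycontainer /bin/bash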
If you do need SSH, the following Dockerfile should help (copied from the Docker docs):
# sshd
#
# VERSION 0.0.2
FROM ubuntu:14.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Using the CMD command in your Dockerfile will indeed enable ssh
CMD ["/usr/sbin/sshd", "-D"]
But there is a huge downside: if you already have a CMD command (one that starts MySQL, for example), you are facing a problem that Docker does not resolve easily, because you can only use one CMD in a Dockerfile. There is a workaround, though, using Supervisor. You tell the Dockerfile to install Supervisor:
RUN apt-get install -y openssh-server supervisor
Using Supervisor, you can start as many processes as you want on container startup. These processes are defined in a supervisor.conf file (the name is arbitrary) located in the directory with your Dockerfile. In your Dockerfile, you tell Docker to copy this file during the build:
ADD supervisor-base.conf /etc/supervisor.conf
Then you tell Docker to start Supervisor when the container starts; Supervisor, in turn, starts all of the processes listed in the supervisor.conf file mentioned above:
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
Your supervisor.conf file may look like this:
[supervisord]
nodaemon=true
[program:sshd]
directory=/usr/local/
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
redirect_stderr=true
There is one issue to be careful about: Supervisor needs to start as root, otherwise it will throw errors. So if your Dockerfile defines a user to run the container as (e.g. USER jboss), you should put USER root at the end of your Dockerfile so that Supervisor starts as root. In your supervisor.conf file you then simply define a user for each process:
[program:wildfly]
user=jboss
command=/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
[program:chef]
user=chef
command=/bin/bash -c chef-2.1/bin/start.sh
Of course, these users need to be defined beforehand in your Dockerfile, e.g.:
RUN groupadd -r -f jboss -g 2000 && useradd -u 2000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss
You can learn about Supervisor+Docker+SSH in more detail in this article.
Notice: this answer promotes a tool I've written.
Some answers here suggest placing an SSH server inside your container. Conceptually, running multiple processes in one container is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/). A more favorable solution is one that involves multiple containers, each running its own process/service. Linking them together results in a coherent application.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with any container, without that container even knowing about ssh. The only requirement is that the container has bash.
The following example starts an SSH server (in a container named 'sshd-web-server1') attached to a running container named 'web-server1'.
docker run -ti --name sshd-web-server1 -e CONTAINER=web-server1 -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker \
jeroenpeeters/docker-ssh
You connect to the SSH server with your ssh client of choice, just as you normally would.
Be advised: Docker-SSH is currently still under development, but it does work! Please let me know what you think.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
You can find prebuilt images with SSH installed, for instance CentOS (tutum/centos) and Debian (tutum/debian).
And here are the Dockerfiles used to build them:
https://github.com/tutumcloud/tutum-centos/blob/master/Dockerfile
https://github.com/tutumcloud/tutum-debian/blob/master/Dockerfile