I'm trying to extend the CouchDB Docker image to pre-populate CouchDB (with initial databases, design documents, etc.).
In order to create a database named db, I first tried this initial Dockerfile:
FROM couchdb
RUN curl -X PUT localhost:5984/db
but the build failed since the couchdb service is not yet started at build time. So I changed it to this:
FROM couchdb
RUN service couchdb start && \
    sleep 3 && \
    curl -s -S -X PUT localhost:5984/db && \
    curl -s -S localhost:5984/_all_dbs
Note:
the sleep was the only way I found to make it work; it did not work with curl's --connect-timeout option,
the second curl is only there to check that the database was created.
The build seems to work fine:
$ docker build . -t test3 --no-cache
Sending build context to Docker daemon 6.656kB
Step 1/2 : FROM couchdb
---> 7f64c92d91fb
Step 2/2 : RUN service couchdb start && sleep 3 && curl -s -S -X PUT localhost:5984/db && curl -s -S localhost:5984/_all_dbs
---> Running in 1f3b10080595
Starting Apache CouchDB: couchdb.
{"ok":true}
["db"]
Removing intermediate container 1f3b10080595
---> 7d733188a423
Successfully built 7d733188a423
Successfully tagged test3:latest
What is weird is that when I now start a container from it, database db does not seem to have been saved into the test3 image:
$ docker run -p 5984:5984 -d test3
b34ad93f716e5f6ee68d5b921cc07f6e1c736d8a00e354a5c25f5c051ec01e34
$ curl localhost:5984/_all_dbs
[]
Most of the standard Docker database images include a VOLUME line that prevents creating a derived image with prepopulated data. For the official couchdb image you can see the relevant line in its Dockerfile. Unlike the relational-database images, this image doesn’t have any support for scripts that run at first startup.
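For illustration, the effect comes from a declaration roughly like the following in the base image (a sketch; the exact path varies by image version, so check the actual Dockerfile):

# Anything a derived image's RUN step writes under this path goes into an
# anonymous volume that is discarded when the build container is removed.
VOLUME /opt/couchdb/data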
That means you need to do the initialization from the host or from another container. If you can directly interact with it using its HTTP API, then this could look like:
# Start the container
docker run -d -p 5984:5984 -v ... couchdb
# Wait for it to be up
for i in $(seq 20); do
  if curl -s http://localhost:5984 >/dev/null 2>&1; then
    break
  fi
  sleep 1
done
# Create the database
curl -XPUT http://localhost:5984/db
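If you want the created database to survive container restarts, mount a named volume at the data path. A sketch; the path below is an assumption, so check the VOLUME declaration of the image you use:

# reuse the named volume "couchdb-data" for the database files
docker run -d -p 5984:5984 -v couchdb-data:/opt/couchdb/data couchdb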
Related
I am trying to configure a Bitbucket CI pipeline to run tests. Stripping out the details, I have a Makefile which looks as follows to run some form of integration tests.
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME}
	godog
	docker-compose -f ${DOCKER_COMPOSE_FILE} down
The Docker Compose file defines a single web server with its ports exposed.
Pipeline looks as follows:
- step: &integration-testing
    name: Run integration tests
    script: # do this to make go modules work with a private repo
      - apk add libc-dev py-pip python-dev libffi-dev openssl-dev gcc libc-dev make bash
      - pip install docker-compose
      - git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
      - go get github.com/onsi/ginkgo/ginkgo
      - go get github.com/onsi/gomega/...
      - go get github.com/DATA-DOG/godog/cmd/godog
      - make build-only && make test-e2e
I am facing two separate issues, and for both I have not been able to find a solution.
I keep getting connection refused when the tests are run.
To elaborate on the above: docker-compose brings up a server with a proper host:port mapping ("127.0.0.1:10077:10077"). The godog command is intended to run the tests by querying the server. This, however, always ends in connection refused. This link has a possible solution, so I am exploring that.
The pipeline almost always runs commands before the container is up. I've tried fixing this by changing the invocation to:
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && sleep 10 && docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
However, the container always finishes coming up only after the sleep has passed (and then almost instantaneously).
Example:
Creating oracle-go ...
Sleep 10
docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
docker exec -i oracle-go godog
Creating oracle-go ... done
Error response from daemon: Container 7bab5322203756b972e7f0a3c6e5827413279914a68c705221b8af7daadc1149 is not running
Please let me know if there is a way around it.
If I understood your question correctly, you want to wait for the server to start before running tests.
Instead of manually sleeping, you should use wait-for-it.sh (or an alternative). See the relevant Docker docs for more information.
For example:
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && \
	bash wait-for-it.sh <HOST>:<PORT> -- docker exec -i oracle-go godog && \
	docker-compose -f ${DOCKER_COMPOSE_FILE} down
Change <HOST> and <PORT> to your service's host name and port respectively. Alternatively, you could use wait-for-it.sh in your Docker Compose command or the like.
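If you control the Compose file, another option is to give the server a healthcheck and run godog from a second service that only starts once the server is healthy (supported by Compose file format 2.1). A sketch, where the e2e image name and the health URL are assumptions, and the tests would reach the server as oracle-go:10077 on the Compose network:

version: "2.1"
services:
  oracle-go:
    build: .
    ports:
      - "127.0.0.1:10077:10077"
    healthcheck:
      # assumes curl is present in the image and the server answers plain HTTP
      test: ["CMD", "curl", "-f", "http://localhost:10077/"]
      interval: 2s
      retries: 30
  e2e:
    image: oracle-go-tests   # hypothetical image with godog installed
    command: godog
    depends_on:
      oracle-go:
        condition: service_healthy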
Docker version 17.11.0-ce, build 1caf76c
I need to run Ansible during docker build time to build and deploy some Java projects to WildFly, so that when I run the Docker image everything is already set up. However, Ansible needs SSH access to localhost. So far I have been unable to make it work. I've tried different Docker images and ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have at the moment:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That will run a process in the background using a shell. As soon as the shell that started the process returns from running the background command, it exits with no more input, and the container used for that RUN command terminates. The only thing saved from a RUN is the change to the filesystem. You do not save running processes, environment variables, or shell state.
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
    ssh -tt root@127.0.0.1
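A variant of the same idea with an explicit delay and host-key checking disabled (both additions are guesses at what this build needs, not something verified here):

RUN /usr/sbin/sshd -d & \
    sleep 2 && \
    ssh -tt -o StrictHostKeyChecking=no root@127.0.0.1 'echo connected'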
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
There are multiple problems in that Dockerfile.
First of all, you can't start a background process in one RUN statement and expect it to still be running in another RUN. Each statement of a Dockerfile runs in a different container, so processes don't persist between them.
Another issue is that 127.0.0.1 is not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.
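As an aside (untested here): recent Ansible versions also ship a docker connection plugin that provisions a running container over docker exec, with no sshd at all. A minimal sketch, where mybuild is a hypothetical container name and site.yml a placeholder playbook:

# inventory.ini -- connect via the docker connection plugin instead of SSH
mybuild ansible_connection=docker

Then, from the host:

ansible-playbook -i inventory.ini site.yml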
I have containers for multiple Atlassian products: JIRA, Bitbucket and Confluence. When I'm trying to access the running containers I'm usually using:
docker exec -it -u root ${DOCKER_CONTAINER} bash
With this command I'm normally able to get in as usual, but after running a script to extract and compress log files, I can't access that one container anymore.
Excerpt from the 'clean up script'
This is the first point of failure. The script runs once each week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
  docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines against the Bitbucket container, the result is:
unable to find user root: no matching entries in passwd file
It's failing on the 'docker cp' command, but only against the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' user (defined in the Dockerfile) and 'root'.
I was able to copy /etc/passwd out of the container, and it contains all of the users as expected. When trying to access by uid, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
    && groupadd -g ${gid} ${group} \
    && useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
    && mkdir -p ${BITBUCKET_HOME}/shared \
    && chmod -R 700 ${BITBUCKET_HOME} \
    && chown -R ${user}:${group} ${BITBUCKET_HOME} \
    && mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
    && curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
    && chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
    && chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/
${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times, and even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that. I think the script may have removed or disabled the root user.
This issue is caused by a Docker engine bug which is tracked privately; Docker is asking users to restart the engine!
It seems the bug is likely more than two years old!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing his best to get more funding.
It's a long-standing issue; I've replicated it from my old version 1.10.3 up to at least 1.17.
As mentioned by @sorin, the Docker forum says running docker stop and then docker start fixes the problem, but that is hardly a long-term solution...
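For reference, that workaround amounts to nothing more than:

# temporary fix from the forum thread: restart the affected container
docker stop <container_name_or_hash>
docker start <container_name_or_hash>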
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution from the same forum post, mentioned here by @ObranZoltan, might work for you, but it does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error
I'm having a recurring issue while trying to set up a Docker container so that it stays running.
Here is a sample of the Dockerfile that I am wanting to use:
RUN wget -O /usr/local/nexus-2.11.3-01-bundle.tar.gz http://www.sonatype.org/downloads/nexus-2.11.3-01-bundle.tar.gz
WORKDIR /usr/local
RUN tar xvzf /usr/local/nexus-2.11.3-01-bundle.tar.gz
RUN ln -s nexus-2.11.3-01 nexus
ENV NEXUS_HOME /usr/local/nexus
ENV RUN_AS_USER root
CMD ["/usr/local/nexus/bin/nexus", "start"]
EXPOSE 8081
Basically, when I build this and then run it, the container just dies; a docker ps shows no running containers.
As far as I know (please correct me if I'm wrong), a Docker container stays running only as long as there is a process with PID 1. Would any of the previous commands run as PID 1, and if so, how can I force the nexus start command to use it? Or just keep the container alive...
The contents of a docker logs nexus gives:
****************************************
WARNING - NOT RECOMMENDED TO RUN AS ROOT
****************************************
Starting Nexus OSS...
Started Nexus OSS.
It seems to suggest that Nexus has started, but then again when I do a docker ps, I don't see it running.
If the process running as PID 1 exits, the container is automatically stopped. You can check the sonatype/nexus repository, which keeps the Java process in the foreground using its Launcher class.
Here is how they avoid the container exiting:
...
RUN mkdir -p /opt/sonatype/nexus \
  && curl --fail --silent --location --retry 3 \
    https://download.sonatype.com/nexus/professional-bundle/nexus-professional-${NEXUS_VERSION}-bundle.tar.gz \
  | gunzip \
  | tar x -C /tmp nexus-professional-${NEXUS_VERSION} \
  && mv /tmp/nexus-professional-${NEXUS_VERSION}/* /opt/sonatype/nexus/ \
  && rm -rf /tmp/nexus-professional-${NEXUS_VERSION}
RUN useradd -r -u 200 -m -c "nexus role account" -d ${SONATYPE_WORK} -s /bin/false nexus
...
EXPOSE 8081
WORKDIR /opt/sonatype/nexus
USER nexus
ENV CONTEXT_PATH /
ENV MAX_HEAP 768m
ENV MIN_HEAP 256m
ENV JAVA_OPTS -server -XX:MaxPermSize=192m -Djava.net.preferIPv4Stack=true
ENV LAUNCHER_CONF ./conf/jetty.xml ./conf/jetty-requestlog.xml
CMD java \
  -Dnexus-work=${SONATYPE_WORK} -Dnexus-webapp-context-path=${CONTEXT_PATH} \
  -Xms${MIN_HEAP} -Xmx${MAX_HEAP} \
  -cp 'conf/:lib/*' \
  ${JAVA_OPTS} \
  org.sonatype.nexus.bootstrap.Launcher ${LAUNCHER_CONF}
Since it is an open repository, you can directly refer to their repo, if you like.
A quick guess from the logs is that running /usr/local/nexus/bin/nexus start would start it as a daemon.
That would cause another process to spawn and the one that started the daemon would exit, terminating the container.
One solution is to start the process not as a daemon, but I couldn't find an option to do this in your Nexus case.
Another is to use something like supervisord as the CMD for Docker, and have it start your process.
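On the first option: the Nexus 2.x bundle ships a Tanuki wrapper script, and those scripts usually accept a console command that keeps the process in the foreground. A sketch, not verified against this particular Nexus version:

# run the wrapper in the foreground so it stays PID 1
# ("console" instead of "start"; confirm your bundle supports it)
CMD ["/usr/local/nexus/bin/nexus", "console"]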
We have Docker running on one machine and a workstation running on another machine.
I want to bootstrap the Docker container from the workstation, so our image should be SSH-enabled.
How do I make a Docker image SSH-enabled?
Before you add SSH, you should check whether docker exec will be sufficient for what you need (see the docker exec documentation).
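For example, to get a shell in a running container without any SSH daemon:

# interactive shell inside a running container; no sshd required
docker exec -it <container_name> /bin/bash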
If you do need SSH, the following Dockerfile should help (copied from Docker docs):
# sshd
#
# VERSION 0.0.2
FROM ubuntu:14.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Using the CMD instruction in your Dockerfile will indeed enable SSH:
CMD ["/usr/sbin/sshd", "-D"]
But there is a huge downside. If you already have a CMD command (that starts MySQL, for example), then you are facing a problem not easily resolved in Docker: you can use only one CMD in a Dockerfile. There is a workaround, though, using Supervisor. You tell the Dockerfile to install Supervisor:
RUN apt-get install -y openssh-server supervisor
Using Supervisor, you can start as many processes as you want on container startup. These processes are defined in a supervisor.conf file (the naming is arbitrary) located in the directory with your Dockerfile. In your Dockerfile, you tell Docker to copy this file during the build:
ADD supervisor-base.conf /etc/supervisor.conf
Then you tell Docker to start Supervisor when the container starts (when Supervisor starts, it will also start all processes listed in the supervisor.conf file mentioned above):
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
Your supervisor.conf file may look like this:
[supervisord]
nodaemon=true
[program:sshd]
directory=/usr/local/
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
redirect_stderr=true
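Putting the pieces of this answer together, the relevant Dockerfile fragment would look roughly like this (the base image and the mkdir for sshd's runtime directory are assumptions):

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server supervisor
# sshd needs its privilege-separation directory to exist
RUN mkdir -p /var/run/sshd
ADD supervisor-base.conf /etc/supervisor.conf
CMD ["supervisord", "-c", "/etc/supervisor.conf"]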
There is one issue to be careful about: Supervisor needs to start as root, otherwise it will throw errors. So if your Dockerfile defines a user to start the container with (e.g. USER jboss), you should put USER root at the end of your Dockerfile so that Supervisor starts as root. In your supervisor.conf file you then simply define a user for each process:
[program:wildfly]
user=jboss
command=/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
[program:chef]
user=chef
command=/bin/bash -c chef-2.1/bin/start.sh
Of course, these users need to be pre-defined in your Dockerfile, e.g.:
RUN groupadd -r -f jboss -g 2000 && useradd -u 2000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss
You can learn more about Supervisor+Docker+SSH in more details in this article.
Notice: this answer promotes a tool I've written.
Some answers here suggest placing an SSH server inside your container. Conceptually, running multiple processes in one container is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/). A more favorable solution is one that involves multiple containers, each running its own process/service. Linking them together results in a coherent application.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container, without that container even knowing about ssh. The only requirement is that the container has bash.
The following example would start an SSH server attached to a container with name 'sshd-web-server1'.
docker run -ti --name sshd-web-server1 -e CONTAINER=web-server1 -p 2222:22 \
  -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker \
  jeroenpeeters/docker-ssh
You connect to the SSH server with your ssh client of choice, just as you normally would.
Be advised: Docker-SSH is currently still under development, but it does work! Please let me know what you think.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
You can find prebuilt images with SSH installed, for instance the CentOS-based tutum/centos and the Debian-based tutum/debian.
And the Dockerfiles used to build them:
https://github.com/tutumcloud/tutum-centos/blob/master/Dockerfile
https://github.com/tutumcloud/tutum-debian/blob/master/Dockerfile