docker-compose stop / start my_image - docker

Is it normal to lose all data, installed applications and created folders inside a container when executing docker-compose stop my_image and docker-compose start my_image?
I'm creating the containers with docker-compose up --scale my_image=4
update no. 1
My containers have an sshd server running in them. When I connect to a container and execute touch test.txt, I see that the file was created.
However, after executing docker-compose stop my_image and docker-compose start my_image, the container is empty and ls -l shows that test.txt is gone.
update no. 2
my Dockerfile
FROM oraclelinux:8.5
RUN (yum update -y; \
yum install -y openssh-server openssh-clients initscripts wget passwd tar crontabs unzip; \
yum clean all)
RUN (ssh-keygen -A; \
sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config; \
sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config; \
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config; \
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config)
RUN (mkdir -p /root/.ssh/; \
echo "StrictHostKeyChecking=no" > /root/.ssh/config; \
echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config)
RUN echo "root:oraclelinux" | chpasswd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 22
my docker-compose
version: '3.9'
services:
  my_image:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 30000-30007:22
When I connect to a container:
I execute touch test.txt
I execute docker-compose stop my_image
I execute docker-compose start my_image
I execute ls -l
I see no test.txt file (in fact, the folder is empty)
update no. 3
entrypoint.sh
#!/bin/sh
# Start the ssh server (-D keeps sshd in the foreground,
# so the line below is only reached once sshd exits)
/usr/sbin/sshd -D
# Execute the CMD
exec "$@"
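For reference, a more conventional pattern (a sketch, not the setup from the question) is either to make sshd the sole foreground process, or to let it daemonize and then hand PID 1 over to the CMD:

#!/bin/sh
# Option A: run sshd as the container's long-lived foreground process
exec /usr/sbin/sshd -D

#!/bin/sh
# Option B: let sshd background itself (no -D), then exec the CMD so it becomes PID 1
/usr/sbin/sshd
exec "$@"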
Other details
When the containers are all up and running, I choose a container running on a specific port, say port 30001, and connect to it using PuTTY, then:
I execute touch test.txt
I execute ls -l
I see that the file was created
I execute docker-compose stop my_image
I execute docker-compose start my_image
I connect via PuTTY to port 30001
I execute ls -l
I see no file (folder is empty)
I try the other containers to see if the file exists inside one of them, but
it is not there.
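To double-check which replica is actually behind a given published port before and after the restart, something like this helps (a sketch; my_image is the service name from the compose file above):

# list the scaled containers and their published ports
docker-compose ps
# show which host port is mapped to port 22 of replica no. 2
docker-compose port --index=2 my_image 22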

So, after some brutal brute-force debugging, I realized that I lose data
only when I fail to disconnect from ssh before stopping/restarting the
container. When I do disconnect first, the data does not disappear after a stop/start.
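If the data also has to survive the containers being recreated (not just stopped and started), the usual approach is a named volume; a minimal sketch against the compose file above, with /root as an illustrative mount point:

version: '3.9'
services:
  my_image:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 30000-30007:22
    volumes:
      - my_data:/root   # note: with --scale, all replicas share this one volume
volumes:
  my_data: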

Related

Trying to copy a script into a detached Docker container, and execute it with docker exec

Right now I am setting my Docker instance running with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that instance:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to be able to execute that script with what I expected to be:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach myself to the Docker instance then I can run the script myself but that sort of defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container you need to also allocate a pseudo-TTY if you want to see the results.
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.
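If you really do want to keep the exec detached with -d, one workaround (a sketch, not part of the answer above) is to redirect the script's output to the container's main process, which is what docker logs reads:

# send the script's stdout/stderr to PID 1's stdout/stderr inside the container
docker exec -d docker_verify sh -c "/script.sh > /proc/1/fd/1 2> /proc/1/fd/2"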

How to run a crontab job on an Elasticsearch Docker image

I want to run a crontab job on an Elasticsearch Docker image, and here is my Dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:6.6.0
ENV PATH=$PATH:/usr/share/elasticsearch/bin
RUN yum -y update
RUN yum -y install crontabs
RUN echo -e "root\nelasticsearch" > /etc/cron.allow
RUN echo "" >> /etc/cron.allow
RUN chmod -R 644 /etc/cron.d
RUN cat /etc/cron.allow
RUN chown -R elasticsearch /etc/cron.d
RUN chmod -R 755 /etc/cron.d
RUN chown -R elasticsearch /var/spool/cron
RUN chmod -R 744 /var/spool/cron
RUN chown -R elasticsearch /etc/crontab
RUN chmod -R 744 /etc/crontab
RUN chown -R elasticsearch /etc/cron.d
RUN chmod -R 744 /etc/cron.d
COPY ./purge.sh /usr/share/elasticsearch
RUN ls -l /etc/crontab
RUN ls -l /etc/cron.d
RUN touch /usr/share/elasticsearch/cron.log
ADD ./cron /etc/cron.d/cron_test
RUN chmod 0644 /etc/cron.d/cron_test
RUN cd /etc/cron.d && cat cron_test
RUN chown -R elasticsearch /etc/cron.d/cron_test
RUN ls -l /etc/cron.d/cron_test
RUN crontab /etc/cron.d/cron_test
RUN crontab -l
RUN cd /var/spool/cron && ls
USER elasticsearch
ENTRYPOINT elasticsearch
CMD crond start && pgrep cron && tail -f && tail -f /usr/share/elasticsearch/cron.log
EXPOSE 9200 9300
After building this Dockerfile and running the container, I hit the problem below (screenshot omitted).
In this step of the Dockerfile
RUN cd /var/spool/cron && ls
only root is shown. How can I get the elasticsearch user in it?
my cron file present locally
*/1 * * * * echo "Hello world" >> /usr/share/elasticsearch/cron.log
*/1 * * * * elasticsearch /usr/share/elasticsearch/purge.sh
my purge.sh file
curl -XPOST "http://localhost:9200/hydro_dashboard_index/_delete_by_query" -H 'Content-Type: application/json' -d'
{
  "query": {
    "range": {
      "query_service_entry_time": {
        "lt": "now-14d"
      }
    }
  }
}'
It's usually considered a better practice to run only one process in a container. Since the thing you're trying to run in cron is just making an HTTP request to elasticsearch, there's nothing about it that needs to run in the same container, or even in Docker at all.
If your host is running a standard Linux distribution with a standard cron daemon, the absolute easiest thing to do is just to stash this purge script somewhere on your host and run it via the host's cron service. If you know cron and elasticsearch are on the same host and you start the container with a -p 9200:9200 option to publish the standard elasticsearch HTTP port, the script should work unmodified.
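A sketch of that host-side setup, assuming the purge script is saved at /opt/scripts/purge.sh on the host (the path, schedule, and single-node setting are illustrative):

# publish Elasticsearch's HTTP port so the host can reach it on localhost:9200
docker run -d -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.6.0
# host crontab entry (crontab -e): run the purge script once a day at 01:00
0 1 * * * /opt/scripts/purge.sh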
If absolutely everything must run in Docker, you might search Docker Hub for a prebuilt cron image (there are a couple, though none look especially actively maintained). You also might be able to use the minimal set of tools in the busybox image; its documentation can be a little light. Still, the basic approach you'd need to take looks like:
Find or build a Docker image that contains only cron and curl – no Elasticsearch, no actual crontabs, just the programs themselves.
If you're manually docker running containers, docker network create some_network (with any name and default options), and run both the Elasticsearch and cron containers with --net some_network.
In the curl commands, use the docker run --name of the Elasticsearch container, or the name of its Docker Compose services: block, as a hostname; localhost will always mean "this container".
Put the crontabs and support scripts in some directory on your host, and inject them into the cron container with the docker run -v option (that is, treat them as configuration).
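A minimal Compose sketch of that layout (the service names, cron image, and crontab path are illustrative assumptions, not from the original answer; Compose puts both services on a shared network automatically):

version: '3.9'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
  cron:
    build: ./cron                              # an image containing only cron and curl
    volumes:
      - ./cron/crontab:/etc/crontabs/root:ro   # crontab injected as configuration (busybox-style path)

Inside that crontab, reach Elasticsearch by its service name rather than localhost, e.g. curl -XPOST "http://elasticsearch:9200/hydro_dashboard_index/_delete_by_query" ...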

Why does docker "--filter ancestor=imageName" find the wrong container?

I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the container by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER j#eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps
docker ps -f name=nginx$
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS             NAMES
3bd84b6057b8   nginx   "nginx -g 'daemon ..."   30 seconds ago   Up 28 seconds   80/tcp, 443/tcp   nginx
According to the docs --filter ancestor could be finding the wrong containers if they are in any way children of other containers.
So to be sure my images are separate right from the start I added this line to the start of my dockerfile, after the FROM and MAINTAINER commands:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then in my build scripts after copying the dockerfile to the distribution folder I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
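For completeness, the stop-by-image-name step then looks something like this (a sketch; the exact command in the linked answer may differ):

# stop every container whose image, or an ancestor of its image, matches the name
docker stop $(docker ps -q --filter "ancestor=$serverImageName")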

Dockerfile entrypoint not launching in interactive mode

I'm absolutely new to Docker. I want a Docker container to do git pull / git push / resolve conflicts on some repositories.
I've created this Dockerfile
FROM needcaffeine/git
RUN apt-get update
RUN mkdir -p /root/.ssh
ADD .ssh/id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN echo "Host github.com\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config
VOLUME ["/root/repos"]
ENTRYPOINT ["bash"]
but when I run it with
docker run my-tag-for-that-image
It doesn't give me an interactive prompt even though I'm using bash as the ENTRYPOINT.
Try passing -it to docker run, i.e.:
docker run -it my-tag-for-that-image
From docker help run:
-i, --interactive=false Keep STDIN open even if not attached
-t, --tty=false Allocate a pseudo-TTY

Keeping Docker container alive running Java application

I'm having a recurring issue while trying to set up a Docker container so that it stays running.
Here is a sample of the Dockerfile that I am wanting to use:
RUN wget -O /usr/local/nexus-2.11.3-01-bundle.tar.gz http://www.sonatype.org/downloads/nexus-2.11.3-01-bundle.tar.gz
WORKDIR /usr/local
RUN tar xvzf /usr/local/nexus-2.11.3-01-bundle.tar.gz
RUN ln -s nexus-2.11.3-01 nexus
ENV NEXUS_HOME /usr/local/nexus
ENV RUN_AS_USER root
CMD ["/usr/local/nexus/bin/nexus", "start"]
EXPOSE 8081
Basically when I build this, and then run it, the container just dies, and doing a docker ps command returns that there are no running containers.
As far as I know (please correct me if I'm wrong...), the Docker container should stay running as long as there's a process with a PID of 1. Would the commands above run as PID 1, and if so, how can I force the nexus start command to use it? Or just keep the container alive...
The contents of a docker logs nexus gives:
****************************************
WARNING - NOT RECOMMENDED TO RUN AS ROOT
****************************************
Starting Nexus OSS...
Started Nexus OSS.
It seems to suggest that Nexus has started, but then again when I do a docker ps, I don't see it running.
If the process running with PID 1 exits, the container is automatically stopped. You can look at the sonatype/nexus repository, which uses the concept of a Launcher.
Here is how they avoid the container exiting:
...
RUN mkdir -p /opt/sonatype/nexus \
&& curl --fail --silent --location --retry 3 \
https://download.sonatype.com/nexus/professional-bundle/nexus-professional-${NEXUS_VERSION}-bundle.tar.gz \
| gunzip \
| tar x -C /tmp nexus-professional-${NEXUS_VERSION} \
&& mv /tmp/nexus-professional-${NEXUS_VERSION}/* /opt/sonatype/nexus/ \
&& rm -rf /tmp/nexus-professional-${NEXUS_VERSION}
RUN useradd -r -u 200 -m -c "nexus role account" -d ${SONATYPE_WORK} -s /bin/false nexus
...
EXPOSE 8081
WORKDIR /opt/sonatype/nexus
USER nexus
ENV CONTEXT_PATH /
ENV MAX_HEAP 768m
ENV MIN_HEAP 256m
ENV JAVA_OPTS -server -XX:MaxPermSize=192m -Djava.net.preferIPv4Stack=true
ENV LAUNCHER_CONF ./conf/jetty.xml ./conf/jetty-requestlog.xml
CMD java \
-Dnexus-work=${SONATYPE_WORK} -Dnexus-webapp-context-path=${CONTEXT_PATH} \
-Xms${MIN_HEAP} -Xmx${MAX_HEAP} \
-cp 'conf/:lib/*' \
${JAVA_OPTS} \
org.sonatype.nexus.bootstrap.Launcher ${LAUNCHER_CONF}
Since it is an open repository, you can directly refer to their repo, if you like.
A quick guess from the logs is that running /usr/local/nexus/bin/nexus start would start it as a daemon.
That would cause another process to spawn and the one that started the daemon would exit, terminating the container.
One solution is to start the process not as a daemon, but I couldn't find an option to do this in your nexus case.
Another is to use something like supervisord as the CMD for Docker, and have it start your process.
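Another common workaround (not from the answer above; the log path is an assumption about the Nexus 2 bundle layout) is to let nexus start daemonize and keep PID 1 alive by tailing its log:

# in the Dockerfile: start the daemon, then block on its log so PID 1 never exits
CMD /usr/local/nexus/bin/nexus start && tail -F /usr/local/nexus/logs/wrapper.log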
