I want to run a crontab job in an Elasticsearch Docker image, and here is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.6.0
ENV PATH=$PATH:/usr/share/elasticsearch/bin
RUN yum -y update
RUN yum -y install crontabs
RUN echo -e "root\nelasticsearch" > /etc/cron.allow
RUN echo "" >> /etc/cron.allow
RUN chmod -R 644 /etc/cron.d
RUN cat /etc/cron.allow
RUN chown -R elasticsearch /etc/cron.d
RUN chmod -R 755 /etc/cron.d
RUN chown -R elasticsearch /var/spool/cron
RUN chmod -R 744 /var/spool/cron
RUN chown -R elasticsearch /etc/crontab
RUN chmod -R 744 /etc/crontab
RUN chown -R elasticsearch /etc/cron.d
RUN chmod -R 744 /etc/cron.d
COPY ./purge.sh /usr/share/elasticsearch
RUN ls -l /etc/crontab
RUN ls -l /etc/cron.d
RUN touch /usr/share/elasticsearch/cron.log
ADD ./cron /etc/cron.d/cron_test
RUN chmod 0644 /etc/cron.d/cron_test
RUN cd /etc/cron.d && cat cron_test
RUN chown -R elasticsearch /etc/cron.d/cron_test
RUN ls -l /etc/cron.d/cron_test
RUN crontab /etc/cron.d/cron_test
RUN crontab -l
RUN cd /var/spool/cron && ls
USER elasticsearch
ENTRYPOINT elasticsearch
CMD crond start && pgrep cron && tail -f && tail -f /usr/share/elasticsearch/cron.log
EXPOSE 9200 9300
After building this Dockerfile and running the container, I am seeing the following issue. At this step in the Dockerfile:
RUN cd /var/spool/cron && ls
it shows only root. How can I get the elasticsearch user in it as well?
My cron file (present locally):
*/1 * * * * echo "Hello world" >> /usr/share/elasticsearch/cron.log
*/1 * * * * elasticsearch /usr/share/elasticsearch/purge.sh
My purge.sh file:
curl -XPOST "http://localhost:9200/hydro_dashboard_index/_delete_by_query" -H 'Content-Type: application/json' -d'
{
  "query": {
    "range": {
      "query_service_entry_time": {
        "lt": "now-14d"
      }
    }
  }
}'
It's usually considered a better practice to run only one process in a container. Since the thing you're trying to run in cron is just making an HTTP request to elasticsearch, there's nothing about it that needs to run in the same container, or even in Docker at all.
If your host is running a standard Linux distribution with a standard cron daemon, the absolute easiest thing to do is just to stash this purge script somewhere on your host and run it via the host's cron service. If you know cron and elasticsearch are on the same host and you start the container with a -p 9200:9200 option to publish the standard elasticsearch HTTP port, the script should work unmodified.
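For example (a sketch; the script path, log file, and schedule are my assumptions):
# on the host: run Elasticsearch with its HTTP port published
docker run -d --name elasticsearch -p 9200:9200 -e discovery.type=single-node docker.elastic.co/elasticsearch/elasticsearch:6.6.0
# on the host: crontab -e – purge old documents every night at 02:00
0 2 * * * /opt/scripts/purge.sh >> /var/log/es-purge.log 2>&1
Because purge.sh talks to http://localhost:9200, it works unmodified in this setup.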
If absolutely everything must run in Docker, you might search Docker Hub for a prebuilt cron image (there are a couple, though none look especially actively maintained). You also might be able to use the minimal set of tools in the busybox image; its documentation can be a little light. Still, the basic approach you'd need to take looks like this (a minimal sketch follows the list):
1. Find or build a Docker image that contains only cron and curl – no Elasticsearch, no actual crontabs, just the programs themselves.
2. If you're manually docker running containers, docker network create some_network (with any name and default options), and run both the Elasticsearch and cron containers with --net some_network.
3. In the curl commands, use the docker run --name of the Elasticsearch container, or the name of its Docker Compose services: block, as a hostname; localhost will always mean "this container".
4. Put the crontabs and support scripts in some directory on your host, and inject them into the cron container with the docker run -v option (that is, treat them as configuration).
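A minimal sketch of those steps, assuming an image named es-purge-cron built from a ./cron-image directory, a network named some_network, and a ./cron-config directory on the host holding the crontab and purge.sh (all of these names are placeholders, and the crontab uses the plain user-crontab format that busybox crond expects):
# cron-image/Dockerfile – just cron (busybox crond) and curl, no crontabs baked in
FROM alpine:3.18
RUN apk add --no-cache curl
CMD ["crond", "-f", "-l", "2"]
# cron-config/root – the crontab, e.g. run the purge script every minute
*/1 * * * * /purge.sh
# wiring it together on the host
docker build -t es-purge-cron ./cron-image
docker network create some_network
docker run -d --name elasticsearch --net some_network -e discovery.type=single-node docker.elastic.co/elasticsearch/elasticsearch:6.6.0
docker run -d --name es-cron --net some_network -v "$PWD/cron-config/root:/etc/crontabs/root:ro" -v "$PWD/cron-config/purge.sh:/purge.sh:ro" es-purge-cron
Inside purge.sh, the curl call would then target http://elasticsearch:9200/... instead of localhost.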
Is it normal to lose all data, installed applications and created folders inside a container when executing docker-compose stop my_image and docker-compose start my_image?
I'm creating containers with docker-compose up --scale my_image=4.
Update no. 1
My containers have an sshd server running in them. When I connect to a container and execute touch test.txt, I see that the file was created.
However, after executing docker-compose stop my_image and docker-compose start my_image, the container is empty and ls -l shows that test.txt is absent.
Update no. 2
My Dockerfile:
FROM oraclelinux:8.5
RUN (yum update -y; \
yum install -y openssh-server openssh-clients initscripts wget passwd tar crontabs unzip; \
yum clean all)
RUN (ssh-keygen -A; \
sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config; \
sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config; \
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config; \
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config)
RUN (mkdir -p /root/.ssh/; \
echo "StrictHostKeyChecking=no" > /root/.ssh/config; \
echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config)
RUN echo "root:oraclelinux" | chpasswd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 22
My docker-compose file:
version: '3.9'
services:
  my_image:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 30000-30007:22
When I connect to a container, I do the following:
Execute touch test.txt
Execute docker-compose stop my_image
Execute docker-compose start my_image
Execute ls -l
I see no file test.txt (in fact I see that the folder is empty)
Update no. 3
My entrypoint.sh:
#!/bin/sh
# Start the ssh server
/usr/sbin/sshd -D
# Execute the CMD
exec "$@"
Other details
When the containers are all up and running, I choose a container running on a specific port, say port 30001, and connect to that specific container using PuTTY.
I execute touch test.txt, then ls -l, and I do see that the file was created.
I execute docker-compose stop my_image and then docker-compose start my_image.
I connect via PuTTY to port 30001 again and execute ls -l.
I see no file (the folder is empty).
I try other containers to see if the file exists inside one of them, but I see no file present.
So, after brutal brute-force debugging, I realized that I lose data only when I fail to disconnect from ssh before stopping/restarting the container. When I do disconnect first, the data does not disappear after stopping/restarting.
I'm trying to use a Docker container to build a project that uses Rust, building as my own user. I have a Dockerfile that installs Rust in $HOME/.cargo, and then I docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host to the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
# install library dependencies
apt-get install [... a bunch of stuff ...] && \
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged -v /mnt:/mnt -v /dev:/dev --volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw --workdir /home/stefan/<path/to/project> --name <container-name> -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro -u 1000 <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a workaround possible to be able to map the source code from a folder under $HOME on the host, but keep $HOME from the container?
I'm on Ubuntu 18.04, Docker 19.03.12, x86-64.
The Dockerfile is reading a variable from your physical machine; that user doesn't exist inside the container. Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \
I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But didn't get very far, because inside the container, the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
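For reference, USER can only resolve names that already exist in the image's /etc/passwd, so one workaround is to create the account during the build itself; a sketch, with the user name and IDs as placeholders:
ARG BUILD_USER=stefan
ARG BUILD_UID=1000
ARG BUILD_GID=1000
RUN groupadd -g $BUILD_GID $BUILD_USER && \
    useradd -m -u $BUILD_UID -g $BUILD_GID -s /bin/bash $BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER
That bakes a fixed UID into the image, which may or may not be acceptable.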
So what I ended up doing was to docker run as my user, and run the rust install from there - that is, from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work - there is no /home/$USER in the container; mapping /etc/passwd and /etc/group in the container teaches it about the user, but does not create any directory. I could've mapped $HOME from the host, but then the container would control the rust versions on the host, and would not be that self contained. I also ended up needing to install rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that.
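A sketch of what that looks like in the build script (the install prefix is a placeholder; it just needs to be writable by the build user, e.g. somewhere under the mounted project directory, and RUST_VERSION is assumed to be set in the environment):
# keep rustup and cargo out of the unwritable $HOME
export RUSTUP_HOME="$PWD/.toolchain/rustup"
export CARGO_HOME="$PWD/.toolchain/cargo"
# install the toolchain there without touching $HOME or any shell profile
curl https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain "$RUST_VERSION"
export PATH="$CARGO_HOME/bin:$PATH"
cargo build --release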
I have containers for multiple Atlassian products: JIRA, Bitbucket and Confluence. When I'm trying to access the running containers I usually use:
docker exec -it -u root ${DOCKER_CONTAINER} bash
With this command I'm able to access as usual, but after running a script to extract and compress log files, I can't access that one container anymore.
Excerpt from the 'clean up script'
This is the first point of failure, and the script is running once each week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines against the Bitbucket container, the result is:
unable to find user root: no matching entries in passwd file
It's failing on the docker cp command, but only against the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' user (defined in the Dockerfile) and 'root'.
I was able to copy /etc/passwd out of the container, and it contains all of the users as expected. When trying to access by uid, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_HOME}/shared \
&& chmod -R 700 ${BITBUCKET_HOME} \
&& chown -R ${user}:${group} ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
&& curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
&& chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
&& chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/
${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times; even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that. I think the script may have removed or disabled the root user.
This issue is caused by a Docker engine bug which is tracked privately; Docker is asking users to restart the engine!
It seems that the bug is likely to be older than two years!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing his best to get more funding.
It's a long-standing issue, replicated on my old version 1.10.3 and up to at least 1.17.
As mentioned by @sorin, the Docker forum says running docker stop and then docker start fixes the problem, but that is hardly a long-term solution...
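For completeness, that workaround is just a stop followed by a start of the affected container, e.g. with the ${DOCKER_CONTAINER} variable from the script above:
docker stop "${DOCKER_CONTAINER}" && docker start "${DOCKER_CONTAINER}"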
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution from the same forum post, mentioned here by @ObranZoltan, might work for you, but does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error
I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the container by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
See here to watch docker stopping the wrong container:
Here is the Dockerfile:
FROM ubuntu:14.04
MAINTAINER j#eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p 3000:3000 -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps with a name filter:
docker ps -f name=nginx$
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bd84b6057b8 nginx "nginx -g 'daemon ..." 30 seconds ago Up 28 seconds 80/tcp, 443/tcp nginx
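The same name filter can then drive the stop step of a deployment script, for example (a sketch; nginx-dev is the example name from above):
docker stop $(docker ps -q -f 'name=nginx-dev$')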
According to the docs, --filter ancestor could be finding the wrong containers if their images are in any way children of other images.
So to be sure my images are separate right from the start, I added this line to the start of my Dockerfile, after the FROM and MAINTAINER commands:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then in my build scripts after copying the dockerfile to the distribution folder I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
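With the images now distinct per environment, stopping by image name amounts to something like this (a sketch using the $serverImageName variable from the build step above):
docker ps -q --filter "ancestor=$serverImageName" | xargs -r docker stop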
I'm having a recurring issue while trying to set up a Docker container so that it stays running.
Here is a sample of the Dockerfile that I am wanting to use:
RUN wget -O /usr/local/nexus-2.11.3-01-bundle.tar.gz http://www.sonatype.org/downloads/nexus-2.11.3-01-bundle.tar.gz
WORKDIR /usr/local
RUN tar xvzf /usr/local/nexus-2.11.3-01-bundle.tar.gz
RUN ln -s nexus-2.11.3-01 nexus
ENV NEXUS_HOME /usr/local/nexus
ENV RUN_AS_USER root
CMD ["/usr/local/nexus/bin/nexus", "start"]
EXPOSE 8081
Basically when I build this, and then run it, the container just dies, and doing a docker ps command returns that there are no running containers.
As far as I know (please correct me if I'm wrong...), the Docker container should stay running as long as there is a process with a PID of 1. Would any of the previous commands run as PID 1, and if so, how can I force the nexus start command to use it? Or just keep the container alive...
The contents of a docker logs nexus gives:
****************************************
WARNING - NOT RECOMMENDED TO RUN AS ROOT
****************************************
Starting Nexus OSS...
Started Nexus OSS.
It seems to suggest that Nexus has started, but then again when I do a docker ps, I don't see it running.
If the process running with PID 1 exits, then the container is automatically stopped. You can check the sonatype/nexus repository here, which uses the concept of a Launcher.
Here is how they avoid having the container exit:
...
RUN mkdir -p /opt/sonatype/nexus \
&& curl --fail --silent --location --retry 3 \
https://download.sonatype.com/nexus/professional-bundle/nexus-professional-${NEXUS_VERSION}-bundle.tar.gz \
| gunzip \
| tar x -C /tmp nexus-professional-${NEXUS_VERSION} \
&& mv /tmp/nexus-professional-${NEXUS_VERSION}/* /opt/sonatype/nexus/ \
&& rm -rf /tmp/nexus-professional-${NEXUS_VERSION}
RUN useradd -r -u 200 -m -c "nexus role account" -d ${SONATYPE_WORK} -s /bin/false nexus
...
EXPOSE 8081
WORKDIR /opt/sonatype/nexus
USER nexus
ENV CONTEXT_PATH /
ENV MAX_HEAP 768m
ENV MIN_HEAP 256m
ENV JAVA_OPTS -server -XX:MaxPermSize=192m -Djava.net.preferIPv4Stack=true
ENV LAUNCHER_CONF ./conf/jetty.xml ./conf/jetty-requestlog.xml
CMD java \
-Dnexus-work=${SONATYPE_WORK} -Dnexus-webapp-context-path=${CONTEXT_PATH} \
-Xms${MIN_HEAP} -Xmx${MAX_HEAP} \
-cp 'conf/:lib/*' \
${JAVA_OPTS} \
org.sonatype.nexus.bootstrap.Launcher ${LAUNCHER_CONF}
Since it is an open repository, you can directly refer to their repo, if you like.
A quick guess from the logs is that running /usr/local/nexus/bin/nexus start starts it as a daemon.
That would cause another process to spawn, and the one that started the daemon would exit, terminating the container.
One solution is to start the process not as a daemon, but I couldn't find an option to do this in your Nexus case.
Another is to use something like supervisord as the CMD to Docker, and then have it start your process.
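A rough sketch of the supervisord route, with paths taken from the Dockerfile above (the package installation depends on your base image, and the assumption that the bundled wrapper script offers a foreground console mode should be verified, e.g. by running ./nexus with no arguments):
# Dockerfile additions
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"]
# supervisord.conf
[supervisord]
nodaemon=true
[program:nexus]
; run Nexus in the foreground so supervisord can supervise it
command=/usr/local/nexus/bin/nexus console
autorestart=true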