How can "sbt runAll" command stay alive? Does "sbt runAll" command have parameters for it to stay alive?
I created a Lagom sample project and then a Docker image by installing Java 8 and sbt 1.2.1. My Dockerfile is at the end of my question.
When I run the Docker command
docker run -p 57798:57797 -d lagom-hello-world
sbt goes through its initialization and everything appears OK. However, the sbt process exits as soon as it finishes initializing. How can I keep "sbt runAll" alive?
Contents of my Dockerfile:
FROM openjdk:8
RUN \
curl -L -o sbt-1.2.1.deb http://dl.bintray.com/sbt/debian/sbt-1.2.1.deb && \
dpkg -i sbt-1.2.1.deb && \
rm sbt-1.2.1.deb && \
apt-get update && \
apt-get install sbt && \
sbt sbtVersion
EXPOSE 57797
WORKDIR /app
ADD . /app
CMD sbt runAll
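A hedged guess at the cause, based on how Lagom's development mode behaves: runAll waits for console input ("Press enter to stop") and shuts down when stdin is closed, and docker run -d attaches no stdin, so sbt exits once startup finishes. If that is the cause, keeping stdin open and allocating a terminal should keep it alive:
docker run -it -p 57798:57797 -d lagom-hello-world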
Related
I have built a Docker image which calls a bash script and processes some files in a specific folder, but I can't get it to run. I have tried different approaches but cannot find where the issue is. Building an image with
docker build -t user/mycontainer .
works, but the bash script doesn't run when I
docker run mycontainer `pwd`:/app
Instead it produces an error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/path/c/to/where/i/run/teh/docker:/app": stat /path/c/to/where/i/run/teh/docker:/app: no such file or directory: unknown.
When I run docker ps there aren't any containers, but with docker ps -a the container I just built appears.
I try running
docker exec -it <container id build> bash
and I get a different error response from daemon:
Container <container id build> is not running
The Docker image seems to build all the dependencies, and the bash script works when I run it separately on my local machine within the folder.
My Dockerfile looks like this:
FROM alpine:latest
# Create directory in container image for app code
RUN mkdir -p /app
# Copy app code (.) to /app in container image
COPY . /app
# Set working directory context
ARG TIPPECANOE_RELEASE="1.36.0"
RUN apk add --no-cache sudo git g++ make libgcc libstdc++ sqlite-libs sqlite-dev zlib-dev bash \
&& addgroup sudo && adduser -G sudo -D -H tippecanoe && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& cd /root \
&& git clone https://github.com/mapbox/tippecanoe.git tippecanoe \
&& cd tippecanoe \
&& git checkout tags/$TIPPECANOE_RELEASE \
&& cd /root/tippecanoe \
&& make \
&& make install \
&& cd /root \
&& rm -rf /root/tippecanoe \
&& apk del git g++ make sqlite-dev
RUN chmod +x ./app/bash.sh
USER tippecanoe
WORKDIR /app
CMD ["./bash.sh"]
I built a Docker image using a Dockerfile with Python and some libraries inside (my project code is not in the image). In my local working directory there are some scripts to be run in the container. So, here is what I did:
$ cd /path/to/my_workdir
$ docker run -it --name test -v `pwd`:`pwd` -w `pwd` my/code:test python src/main.py --config=test --results-dir=/home/me/Results
The command python src/main.py --config=test --results-dir=/home/me/Results is what I want to run inside the Docker container.
However, it returns:
/home/docker/miniconda3/bin/python: /home/docker/miniconda3/bin/python: cannot execute binary file
How can I fix it and run my code?
Here is my Dockerfile
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
MAINTAINER Me <me@me.com>
RUN apt update -yq && \
apt install -yq curl wget unzip git vim cmake sudo
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b && rm Miniconda3-latest-Linux-x86_64.sh
ENV PATH /home/docker/miniconda3/bin:$PATH
RUN pip install absl-py==0.5.0 atomicwrites==1.2.1 attrs==18.2.0 certifi==2018.8.24 chardet==3.0.4 cycler==0.10.0 docopt==0.6.2 enum34==1.1.6 future==0.16.0 idna==2.7 imageio==2.4.1 jsonpickle==1.2 kiwisolver==1.0.1 matplotlib==3.0.0 mock==2.0.0 more-itertools==4.3.0 mpyq==0.2.5 munch==2.3.2 numpy==1.15.2 pathlib2==2.3.2 pbr==4.3.0 Pillow==5.3.0 pluggy==0.7.1 portpicker==1.2.0 probscale==0.2.3 protobuf==3.6.1 py==1.6.0 pygame==1.9.4 pyparsing==2.2.2 pysc2==3.0.0 pytest==3.8.2 python-dateutil==2.7.3 PyYAML==3.13 requests==2.19.1 s2clientprotocol==4.10.1.75800.0 sacred==0.8.1 scipy==1.1.0 six==1.11.0 sk-video==1.1.10 snakeviz==1.0.0 tensorboard-logger==0.1.0 torch==0.4.1 torchvision==0.2.1 tornado==5.1.1 urllib3==1.23
USER docker
ENTRYPOINT ["/bin/bash"]
Try making the file executable before running it, as John mentioned, by adding this to the Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
RUN chmod +x /usr/local/share/main.py # <-- just add this line
# note: the COPY above places main.py directly under /usr/local/share/
CMD ["/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]
You can run a Python script in Docker by adding this to your Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
CMD ["src/main.py", "--config=test --results-dir=/home/me/Results"]
I'm just testing out Docker, so this might be a pretty simple question, but I cannot figure out why it's not doing what I expect.
I created a pretty simple Dockerfile for testing, just to build a simple image that installs some packages, clones a git repo and builds its requirements:
FROM ubuntu:18.04
ENV PYTHONEXEC=python3 \
PIPEXEC=pip \
VIRTUALENVEXEC=virtualenv \
GITREPO=https://github.com/test/test.git \
REPODIR=test
RUN apt-get update && apt-get install -y git \
python3 \
python3-dev \
python3-virtualenv \
python-virtualenv \
qt5-default \
libcurl4-openssl-dev \
libxml2 \
libxml2-dev \
libxslt1-dev \
libssl-dev \
virt-viewer
RUN mkdir -p /app
WORKDIR /app
RUN git clone $GITREPO $REPODIR \
&& $VIRTUALENVEXEC -p $PYTHONEXEC venv \
&& . venv/bin/activate \
&& cd $REPODIR \
&& $PIPEXEC install -r requirements.txt
CMD ["sleep", "1000000"]
Then I build the image with:
docker build -t gitapp:latest .
This works so far. However, if I specify a -e parameter on the docker container run command, it seems not to be picked up by the last RUN command.
So if I run docker container run -d -e "REPODIR=blah" gitapp, I expect it to be cloned in /app/blah, but it's still cloned in the /app/test directory.
When I run docker container exec -it -e "REPODIR=blah" <container-id> env I get:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=2f6ba38341d6
TERM=xterm
REPODIR=blah
PYTHONEXEC=python3
PIPEXEC=pip
VIRTUALENVEXEC=virtualenv
GITREPO=https://github.com/test/test.git
HOME=/root
So it seems that the variable is indeed passed to the container. Then why isn't it passed to the last RUN command so that it clones the repo into the right directory? Am I missing something basic here?
When you execute docker run you are instructing the container to execute the Dockerfile's CMD or ENTRYPOINT command. Dockerfile instructions above the entrypoint have already been executed during the build and are not executed again at runtime.
That's exactly why your GitHub repo is cloned into the directory defined initially in the Dockerfile and not into the one passed to the run command with the -e flag.
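As an aside, if REPODIR only needed to vary per build rather than per run, a build argument would do the job, since ARG values are substituted while the image is being built. A rough sketch:
# in the Dockerfile, declare REPODIR as a build argument instead of an ENV
ARG REPODIR=test
RUN git clone $GITREPO $REPODIR
# then pass a different value at build time:
docker build --build-arg REPODIR=blah -t gitapp:latest .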
A workaround would be to alter your image's entrypoint. You may transfer this part
RUN git clone $GITREPO $REPODIR \
&& $VIRTUALENVEXEC -p $PYTHONEXEC venv \
&& . venv/bin/activate \
&& cd $REPODIR \
&& $PIPEXEC install -r requirements.txt
to a bash script (let's call it myscript.sh) that will be executed as the image's entrypoint. Copy this file during the build process to a preferred location, ensure it has the executable flag set, and edit your Dockerfile accordingly:
CMD ["/path_to_script/myscript.sh" ]
This, however, has the caveat that the script will be executed each time the container is started, in contrast with your current setup, possibly leading to a delay depending on the content of myscript.sh.
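A minimal sketch of what myscript.sh could look like, reusing the variables already set with ENV in the Dockerfile (the final sleep mirrors the original CMD so the container stays up):
#!/bin/bash
set -e
git clone "$GITREPO" "$REPODIR"
$VIRTUALENVEXEC -p $PYTHONEXEC venv
. venv/bin/activate
cd "$REPODIR"
$PIPEXEC install -r requirements.txt
exec sleep 1000000
with the corresponding Dockerfile lines replacing the original RUN git clone ... block:
COPY myscript.sh /app/myscript.sh
RUN chmod +x /app/myscript.sh
CMD ["/app/myscript.sh"]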
In Jenkins I installed the Docker build step plugin.
In Jenkins, I created a job and in it executed a Docker command with "build image" selected. The image is created using the Dockerfile. The Dockerfile is:
FROM ubuntu:latest
#OS Update
RUN apt-get update
RUN apt-get -y install git git-core unzip python-pip make wget build-essential python-dev libpcre3 libpcre3-dev libssl-dev vim nano net-tools iputils-ping supervisor curl supervisor
WORKDIR /home/wipro
#Mongo Setup
RUN curl -O http://downloads.mongodb.org/linux/mongodb-linux-x86_64-3.0.2.tgz && tar -xzvf mongodb-linux-x86_64-3.0.2.tgz && cd mongodb-linux-x86_64-3.0.2/bin && cp * /usr/bin/
#RUN mongod --dbpath /home/azureuser/CI_service/data/ --logpath /home/azureuser/CI_service/log.txt --logappend --noprealloc --smallfiles --port 27017 --fork
#Node Setup
#RUN curl -O https://nodejs.org/dist/v0.12.7/node-v0.12.7.tar.gz && tar -xzvf node-v0.12.7.tar.gz && cd node-v0.12.7
#RUN cd /opt/node-v0.12.7 && ./configure && make && make install
#RUN cp /usr/local/bin/node /usr/bin/ && cp /usr/local/bin/npm /usr/bin/
RUN wget https://nodejs.org/dist/v0.12.7/node-v0.12.7-linux-x64.tar.gz
RUN cd /usr/local && sudo tar --strip-components 1 -xzf /home/wipro/node-v0.12.7-linux-x64.tar.gz
RUN npm install forever -g
#CI SERVICE
ADD prod /home//
ADD servicestart.sh /home/
RUN chmod +x /home/servicestart.sh
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["sh", "/home/servicestart.sh"]
EXPOSE 80
EXPOSE 27017
Then I tried to create the container, and the container was created.
When I tried to start the container, it did not run.
When I checked with the command
docker ps -a
it shows the status as Created only.
It's not in a Running or Exited state.
The output of docker ps -a is:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ac762c4dc84 d85c2d90be53 "sh /home/servi" 15 hours ago Created hungry_liskov
7d8864940515 d85c2d90be53 "sh /home/servi" 16 hours ago Created ciservice
How to start the container using jenkins?
It depends on your container's main command (ENTRYPOINT + CMD).
A Created state (for a non-data-volume container) means the main command failed to execute.
Try a docker logs <container_id> to see if there is any error message recorded.
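For example, with the container IDs from your docker ps -a output:
docker start 8ac762c4dc84
docker logs 8ac762c4dc84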
CMD ["sh", "/home/servicestart.sh"] should be:
CMD ["/home/servicestart.sh"]
(The Ubuntu base image defines no ENTRYPOINT; with the shell form of CMD, Docker already wraps the command in sh -c, so there is no need for an explicit "sh". The exec form above just requires servicestart.sh to start with a shebang line and to be executable, which the chmod +x in the Dockerfile already handles.)
Here is my Dockerfile:
# Pull base image (Ubuntu)
FROM dockerfile/python
# Install socat
RUN \
cd /opt && \
wget http://www.dest-unreach.org/socat/download/socat-1.7.2.4.tar.gz && \
tar xvzf socat-1.7.2.4.tar.gz && \
rm -f socat-1.7.2.4.tar.gz && \
cd socat-1.7.2.4 && \
./configure && \
make && \
make install
RUN \
start-stop-daemon --quiet --oknodo --start --pidfile /run/my.pid --background --make-pidfile \
--exec /opt/socat-1.7.2.4/socat PTY,link=/dev/ttyMY,echo=0,raw,unlink-close=0 \
TCP-LISTEN:9334,reuseaddr,fork
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
I build an image from this Dockerfile and run it like this:
docker run -d -t -i myaccount/socat_test
After this I log into the container and check whether the socat daemon is running, but it is not. I just started playing around with Docker. Am I misunderstanding the concept of a Dockerfile? I would expect Docker to run the socat command when the container starts.
You are confusing RUN and CMD. The RUN instruction is meant to build up the Docker image, as you correctly did with the first one. The second RUN is also executed when building the image, but not when the container is run. If you want to execute commands when the container is started, you ought to use the CMD instruction.
More information can be found in the Dockerfile reference. For instance, you could also use ENTRYPOINT instead of CMD.
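Applied to the Dockerfile above, a minimal sketch: drop the second RUN block and make socat the container's main process (it runs in the foreground by default, and make install should have put it under /usr/local/bin, configure's default prefix):
# Define default command: run socat in the foreground as the main process.
CMD ["socat", "PTY,link=/dev/ttyMY,echo=0,raw,unlink-close=0", "TCP-LISTEN:9334,reuseaddr,fork"]
The container then stays up exactly as long as socat does.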