Dockerfile accept multiple args and choose an action - docker

I have a Python image that launches a web app, and I'm wondering if it's possible to run pytest from the container - I would like to choose whether I want to run the app or run the tests.
Is that possible?
My dockerfile looks like:
FROM python:3.7-slim-buster
COPY ./ ./x
WORKDIR ./x
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["gunicorn", "-b", "0.0.0.0:5000", "--log-level=info", "app:app"]
Is it possible to run something like docker run x --someargumenttolaunchtests?

You can set an ARG value in your Dockerfile, which is an argument that you provide at build time. If you want to provide a value at run time, you can set an environment variable via docker run -e some_environment=value.
Then, with a bash script, you can choose what you want to run: the script does the if some_environment = ... then etc. You would have to write this bash script ahead of time and either COPY it into your image or bind-mount it at run time.
So here is an example of a bash script.
#!/bin/bash
ENVIRONMENT="$some_environment"
if [ "$ENVIRONMENT" = "test" ]; then
    python run_test.py
else
    python main.py
fi
Before I forget, you also need to make this bash script executable.
So in your Dockerfile:
COPY ./bash_script.sh /app
WORKDIR /app
RUN chmod u+x bash_script.sh
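With that in place, you would point the image's ENTRYPOINT (or CMD) at bash_script.sh and pick the mode at run time with -e. A minimal sketch of the run commands, assuming the image is tagged x and the variable is named some_environment as above:
# run the web app (the script's else branch)
docker run -p 5000:5000 x
# run the test suite instead
docker run -e some_environment=test x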

You can also completely override the container's startup command and skip gunicorn entirely. Since the image only defines a CMD, anything you pass after the image name replaces it:
docker run --rm -it myimagename pytest

Related

How to use env variables set from build phase in run. (Docker)

I want to preface this by saying that I am very new to Docker and have just gotten my feet wet with it. In the Dockerfile that I build the image from, I install a program that sets some env variables. Here is my Dockerfile for context.
FROM python:3.8-slim-buster
COPY . /app
RUN apt-get update
RUN apt-get install wget -y
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/install_mvGenTL_Acquire.sh
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/mvGenTL_Acquire-x86_64_ABI2-2.40.0.tgz
RUN chmod +x ./install_mvGenTL_Acquire.sh
RUN ./install_mvGenTL_Acquire.sh -u
RUN apt-get install -y python3-opencv
RUN pip3 install USSCameraTools
WORKDIR /app
CMD python3 main.py
After running docker build, the "install_mvGenTL_Acquire.sh" script has set env variables inside the image. I need these variables to be set when executing docker run. But when I check the env variables after running the image, they are not set. I know I can pass them in directly, but I would like to use the ones that are set by the install during the build.
Any help would be greatly appreciated, thanks!
For running a bash script when your container starts:
Make a script.sh file:
#!/bin/bash
your commands here
If you are using an Alpine image, you must use #!/bin/sh instead of #!/bin/bash on the first line of your script.
Now, in your Dockerfile, copy your script into the image and use the ENTRYPOINT instruction to run it when the container starts:
.
.
.
COPY script.sh /
RUN chmod +x /script.sh
.
.
.
ENTRYPOINT ["/script.sh"]
Notice that the ENTRYPOINT instruction must use the script's path inside the image.
Now whenever you start a container, the script.sh file will be executed.
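Applied to the question, the entrypoint script can source whatever environment file the installer wrote at build time and then exec the container's command, so the variables are present at run time. A minimal sketch, assuming the installer drops its settings into a file such as /etc/profile.d/acquire.sh - that path is an assumption, check where install_mvGenTL_Acquire.sh actually writes its variables:
#!/bin/bash
# Load the environment the installer created at build time (path is an assumption)
. /etc/profile.d/acquire.sh
# Hand off to whatever command the image was started with, e.g. python3 main.py
exec "$@"
With ENTRYPOINT ["/script.sh"] and CMD ["python3", "main.py"] in the Dockerfile, Docker passes the CMD to the script as "$@", so the app still starts as before.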

How to run multiple ENTRYPOINT script in docker [duplicate]

I'm trying to build a custom tcserver Docker image, but I'm having some problems starting the web server and Tomcat.
As far as I understand I should use ENTRYPOINT to run the commands I want.
The question is, is it possible to run multiple commands with ENTRYPOINT?
Or should I create a small bash script to start all?
Basically what I would like to do is:
ENTRYPOINT /opt/pivotal/webserver/instance1/bin/httpdctl start && /opt/pivotal/webserver/instance2/bin/httpdctl start && /opt/pivotal/pivotal-tc-server-standard/standard-4.0.1.RELEASE/tcserver start instance1 -i /opt/pivotal/pivotal-tc-server-standard && /opt/pivotal/pivotal-tc-server-standard/standard-4.0.1.RELEASE/tcserver start instance2 -i /opt/pivotal/pivotal-tc-server-standard
But I don't know if that is a good practice or if that would even work.
In case you want to run many commands at the entrypoint, the best idea is to create a bash script, for example commands.sh, like this:
#!/bin/bash
mkdir /root/.ssh
echo "Something"
cd tmp
ls
...
Then, in your Dockerfile, set the entrypoint to the commands.sh file (which will execute and run all your commands):
COPY commands.sh /scripts/commands.sh
RUN ["chmod", "+x", "/scripts/commands.sh"]
ENTRYPOINT ["/scripts/commands.sh"]
After that, each time you start your container, commands.sh will be executed and run all the commands you need. You can take a look at an example here: https://github.com/dangminhtruong/drone-chatwork
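Applied to the tc Server setup from the question, a commands.sh sketch could start both web server instances and both tc Server instances and then keep the container in the foreground. The start commands are taken from the question; keeping PID 1 alive with tail -f /dev/null is just one common choice, not something the original answer prescribes:
#!/bin/bash
set -e
# Start both Pivotal Web Server instances
/opt/pivotal/webserver/instance1/bin/httpdctl start
/opt/pivotal/webserver/instance2/bin/httpdctl start
# Start both tc Server instances
/opt/pivotal/pivotal-tc-server-standard/standard-4.0.1.RELEASE/tcserver start instance1 -i /opt/pivotal/pivotal-tc-server-standard
/opt/pivotal/pivotal-tc-server-standard/standard-4.0.1.RELEASE/tcserver start instance2 -i /opt/pivotal/pivotal-tc-server-standard
# The start commands daemonize, so keep the container's main process alive
exec tail -f /dev/null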
You can use something like this:
ENTRYPOINT ["/bin/sh", "-c" , "<command A> && <command B> && <command C>"]
You can use the npm concurrently package.
For example:
ENTRYPOINT ["npx","concurrently","command1","command2"]
It will run them in parallel.

How to access build args in ENTRYPOINT dockerfile

I am trying to deploy an app in Payara Micro based on the Payara Docker image, and I need to pass one argument, snapshotversion, to ENTRYPOINT (basically I want to access the build args in the ENTRYPOINT) in exec form, since the exec form of ENTRYPOINT is preferred. My Dockerfile is as follows:
FROM payara/micro:5.193.1
ARG snapshotversion
ENV snapshotvs=$snapshotversion
RUN jar xf payara-micro.jar
COPY /service/war/target/app-emailverification-service-war-${snapshotversion}.war ${DEPLOY_DIR}/
COPY ojdbc6.jar ${PAYARA_HOME}/
COPY --chown=payara domain.xml /opt/payara/MICRO-INF/domain/domain.xml
RUN cd /opt/payara/MICRO-INF/domain && ls -lrt
#ENTRYPOINT ["java", "-jar", "/opt/payara/payara-micro.jar", "--deploy", "/opt/payara/deployments/app-service-war-$snapshotvs.war", "--domainConfig", "/opt/payara/MICRO-INF/domain/domain.xml","--addLibs", "/opt/payara/ojdbc6.jar"]
ENTRYPOINT java -jar /opt/payara/payara-micro.jar --deploy /opt/payara/deployments/app-service-war-$snapshotvs.war --domainConfig /opt/payara/MICRO-INF/domain/domain.xml --addLibs /opt/payara/ojdbc6.jar
The commented ENTRYPOINT does not work. The container logs say invalid deployment. What am I missing here? Also, how can I use CMD with this? Can someone post an example?
The commented line doesn't work because it is the exec form of ENTRYPOINT, which doesn't invoke a shell (/bin/sh -c), so variable substitution doesn't happen.
If you want to use an exec form and environment variables you need to specify it directly:
ENTRYPOINT ["sh", "-c", "your command with env variable"]
To your question about how can you use CMD with this, for example like this:
ENTRYPOINT ["sh", "-c"]
CMD ["your command with env variable"]
You mentioned that you want to use build args in the ENTRYPOINT instruction. That's not really possible, because neither ARG nor ENV is expanded in the exec form of ENTRYPOINT or CMD: https://docs.docker.com/engine/reference/builder/#environment-replacement, https://docs.docker.com/engine/reference/builder/#scope
Also, you could take a look at the great page of best practices for writing Dockerfiles, and at its guidance on ENTRYPOINT specifically.
Two suggestions that complement each other:
If you're COPYing a file into the image, you can give it a fixed name inside the image. That avoids this problem.
WORKDIR /opt/payara
COPY service/war/target/app-emailverification-service-war-${snapshotversion}.war deployments/app-service.war
If you have a particularly long or involved command that you're trying to make be the main container process, wrap it in a shell script. You want to make sure to exec the main container process to avoid some trouble around signal handling (resulting in docker stop pausing for 10 seconds and then hard-killing your actual process).
#!/bin/sh
exec java \
-jar /opt/payara/payara-micro.jar \
--deploy /opt/payara/deployments/app-service.war \
--domainConfig /opt/payara/MICRO-INF/domain/domain.xml \
--addLibs /opt/payara/ojdbc6.jar
COPY launch.sh ./
RUN chmod +x launch.sh
CMD ["/opt/payara/launch.sh"]
In this second case, it's a shell script, so you can have ordinary shell variable substitutions.
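For completeness, the build-time value is still supplied with --build-arg; the image tag and version below are placeholders, not values from the question:
# pass the ARG declared in the Dockerfile at build time
docker build --build-arg snapshotversion=1.0-SNAPSHOT -t emailverification-service .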

Docker - ASP.NET Core 2.2 application and SSH

I'm trying to configure my Docker container so it's possible to ssh into it (the container will be run on Azure). I managed to create an image that lets a user ssh into a container created from that image. The Dockerfile looks like this (it's not mine, I found it on the internet):
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
EXPOSE 2222
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY sshd_config /etc/ssh
RUN echo 'root:Docker' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
I'm using mcr.microsoft.com/dotnet/core/sdk:2.2-stretch because it's what I need later on to run the application.
Having the Dockerfile above, I run docker build . -t ssh. I can confirm that it's possible to ssh into a container created from ssh image with following instructions:
docker run -d -p 0.0.0.0:2222:22 --name ssh ssh
ssh root@localhost -p 2222
My application's Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Application.WebAPI/Application.WebAPI.csproj", "Application.WebAPI/"]
COPY ["Processing.Dependency/Processing.Dependency.csproj", "Processing.Dependency/"]
COPY ["Processing.QueryHandling/Processing.QueryHandling.csproj", "Processing.QueryHandling/"]
COPY ["Model.ViewModels/Model.ViewModels.csproj", "Model.ViewModels/"]
COPY ["Core.Infrastructure/Core.Infrastructure.csproj", "Core.Infrastructure/"]
COPY ["Model.Values/Model.Values.csproj", "Model.Values/"]
COPY ["Sql.Business/Sql.Business.csproj", "Sql.Business/"]
COPY ["Model.Events/Model.Events.csproj", "Model.Events/"]
COPY ["Model.Messages/Model.Messages.csproj", "Model.Messages/"]
COPY ["Model.Commands/Model.Commands.csproj", "Model.Commands/"]
COPY ["Sql.Common/Sql.Common.csproj", "Sql.Common/"]
COPY ["Model.Business/Model.Business.csproj", "Model.Business/"]
COPY ["Processing.MessageBus/Processing.MessageBus.csproj", "Processing.MessageBus/"]
COPY [".Processing.CommandHandling/Processing.CommandHandling.csproj", "Processing.CommandHandling/"]
COPY ["Processing.EventHandling/Processing.EventHandling.csproj", "Processing.EventHandling/"]
COPY ["Sql.System/Sql.System.csproj", "Sql.System/"]
COPY ["Application.Common/Application.Common.csproj", "Application.Common/"]
RUN dotnet restore "Application.WebAPI/Application.WebAPI.csproj"
COPY . .
WORKDIR "/src/Application.WebAPI"
RUN dotnet build "Application.WebAPI.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Application.WebAPI.csproj" -c Release -o /app
FROM ssh AS final
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Application.WebApi.dll"]
As you can see, I'm using the ssh image as the base image in the final stage. Even though I was able to ssh into a container created from the ssh image, I'm unable to ssh into a container created from the latter Dockerfile. Here is the docker-compose.yml I'm using in order to ease starting the container:
version: '3.7'
services:
  application.webapi:
    image: application.webapi
    container_name: webapi
    ports:
      - "0.0.0.0:5000:80"
      - "0.0.0.0:2222:22"
    build:
      context: .
      dockerfile: Application.WebAPI/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=docker
When I run docker exec -it webapi bash and execute service ssh status, I get [FAIL] sshd is not running ... failed! - but when I do service ssh start and try to ssh into the container, it works. Unfortunately this approach is not acceptable; the ssh daemon should start itself on container startup.
I tried using cron and other things available on Debian, but it's a slim image and systemd is not available there - I'm also not fond of installing hundreds of packages on slim images.
Do you have any ideas what could be wrong here?
You have conflicting startup command definitions in your final image. Note that CMD does not simply run a command in your image, it defines the startup command, and has a complex interaction with ENTRYPOINT (in short: if both are present, CMD just supplies extra arguments to ENTRYPOINT).
You can see the table of possibilities in the Dockerfile documentation: https://docs.docker.com/engine/reference/builder/. In addition, there's a bonus complication when you mix and match CMD and ENTRYPOINT in different layers:
Note: If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
As far as I know, you can't get what you want just by layering images. You will need to create a startup script in your final image that both runs sshd -D and then runs dotnet Application.WebApi.dll.
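A minimal sketch of such a startup script, here called start.sh (the name is arbitrary); it starts sshd in the background the same way you already did by hand, then execs the app as the main process:
#!/bin/bash
# Start the SSH daemon in the background
service ssh start
# Run the ASP.NET Core app as the foreground (PID 1) process
exec dotnet Application.WebApi.dll
In the final stage you would COPY the script in, chmod +x it, and point ENTRYPOINT (or CMD) at it instead of at dotnet directly.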

Docker with Go cli project

I use the following Dockerfile, which works as expected.
The project is a CLI, and when I run the command docker run -it cli
I get an error from the CLI (which is OK, since the entrypoint just runs fzr: ENTRYPOINT ["./fzr"]).
Typically I run it on my machine like fzr -help or fzr version etc.
I want to be able to run commands inside the container, like fzr -help and fzr version, when I use a command like docker run -it cli. How can I do that?
FROM golang:1.10.5 AS build-env
ADD https://github.com/golang/dep/releases/download/v0.4.2/dep-linux-amd64 /usr/bin/dep
RUN chmod +x /usr/bin/dep
RUN mkdir -p $GOPATH/src/github.com/fzr
WORKDIR $GOPATH/src/github.com/fzr
COPY Gopkg.toml Gopkg.lock ./
# install project dep
RUN dep ensure
COPY . ./
RUN go build -o /fzr
FROM scratch
COPY --from=build-env /fzr ./
ENTRYPOINT ["./fzr"]
TL;DR:
docker run -it cli version
If you set ENTRYPOINT to your binary, then everything you pass after the image name will be used as arguments to that binary. If for some reason you need to override the entrypoint, use the --entrypoint flag of docker run.
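So, with the image tagged cli as in the question, the fzr sub-commands are simply appended after the image name:
# equivalent to running "fzr version" or "fzr -help" locally
docker run -it cli version
docker run -it cli -help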
