I'm trying to build the project in our GitLab pipeline and then copy it into a Docker container, which gets deployed to my company's OpenShift environment.
My Dockerfile is below; it does a RUN ls to show that all the files were copied in correctly:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk AS builder
USER root
WORKDIR /App
EXPOSE 5000
RUN dotnet --info
COPY MyApp/bin/publish/ .
RUN pwd
RUN ls
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
ENTRYPOINT ["dotnet", "MyApp.dll"]
It builds and deploys correctly, but when OpenShift tries to run it the logs show this error:
Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET program, but dotnet-MyApp.dll does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
I've double-checked the spelling and case multiple times, so what else could be causing this issue?
Your Dockerfile builds two images:
The first one is:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk AS builder
USER root
WORKDIR /App
EXPOSE 5000
RUN dotnet --info
COPY MyApp/bin/publish/ .
RUN pwd
RUN ls
And the second one is:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
ENTRYPOINT ["dotnet", "MyApp.dll"]
The second image is the one you are actually trying to run, and it doesn't include a COPY --from=builder .... That means it doesn't contain your application at all. It's expected that dotnet complains, because your DLL is not present. You can confirm that by doing an ls in your second stage:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
RUN pwd
RUN ls
ENTRYPOINT ["dotnet", "MyApp.dll"]
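A corrected second stage would pull the published output from the builder stage. A sketch (assuming the publish output stays under the /App working directory used in the first stage):

```dockerfile
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
WORKDIR /App
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
# bring the application over from the first stage
COPY --from=builder /App ./
ENTRYPOINT ["dotnet", "MyApp.dll"]
```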
Aside: if you have already published your application as framework-dependent, you can probably run it on the smaller registry.access.redhat.com/ubi8/dotnet-60-runtime image instead of the SDK image.
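That runtime-only variant might look like this (a sketch; it assumes a framework-dependent publish and that the image is reachable from your environment):

```dockerfile
# runtime-only image: smaller than the SDK image, enough to run a framework-dependent app
FROM registry.access.redhat.com/ubi8/dotnet-60-runtime
WORKDIR /App
COPY --from=builder /App ./
ENTRYPOINT ["dotnet", "MyApp.dll"]
```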
Related
My Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:3.1
ENV ASPNETCORE_URLS=http://*:5000
ENV ASPNETCORE_ENVIRONMENT="production"
EXPOSE 5000
WORKDIR /app
COPY ./dist .
ENTRYPOINT ["dotnet", "JustLogin.API.dll"]
The image builds successfully via the command prompt, but when I try to run it via Visual Studio it throws an error; via Docker Desktop it doesn't show any error, but the site still doesn't run.
The dist folder was inside a child folder, but the project was building for the root path by default.
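In other words, the COPY source path has to match where the build actually writes its output. For example (ClientApp is a hypothetical child-folder name):

```dockerfile
# hypothetical: point COPY at the child folder that actually contains dist
COPY ./ClientApp/dist .
```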
I am having difficulties with ARG & ENV in Docker after upgrading to Docker version 20.10.7, build f0df350, on Windows 10.
I have made a Dockerfile to show the issue:
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
ARG node_build=production
ENV node_build_env=${node_build}
FROM node:12.18.3 AS node-build
WORKDIR /root
RUN echo $node_build_env > test.txt
FROM base AS final
WORKDIR /app
COPY --from=node-build /root/test.txt ./
My goal here is that an ARG can be set and will then be set as an environment variable inside the container, with a default value if none is given.
In this Dockerfile I am attempting to write the environment variable node_build_env to a text file and then copy it to the final layer. The problem, though, is that the file is empty.
To re-create, these are the commands I am using:
docker build -t testargs:test .
docker run -it --rm testargs:test /bin/bash
cat test.txt
The file is empty. However if I run:
docker build -t testargs:test . --target node-build
and then manually run the command:
echo $node_build_env > test.txt
It works and the value production is written into the file.
Why does it work when I do it manually but not as part of the RUN command?
You are using multi-stage builds.
Your ARG & ENV belong to the base stage, and you're not using the base stage in your node-build stage.
That means there is no node_build_env value in node-build, hence the following line creates an empty test.txt file:
RUN echo $node_build_env > test.txt
However, your final stage uses the base stage, which means it has access to the node_build_env variable. So after building your image with docker build -t testargs:test ., open an interactive session in that container and execute the following command:
echo $node_build_env
You will see production printed in the terminal.
I believe this will help you solve the problem. Cheers 🍻 !!!
Edit: this is the working version:
ARG node_build=production
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
FROM node:12.18.3 AS node-build
ARG node_build
ENV node_build_env=$node_build
WORKDIR /root
RUN echo $node_build_env > test.txt
FROM base AS final
WORKDIR /app
COPY --from=node-build /root/test.txt ./
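Rebuilding with the question's own commands should now show the value in the file:

```shell
docker build -t testargs:test .
docker run -it --rm testargs:test /bin/bash
# inside the container, test.txt now contains "production":
cat /app/test.txt
```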
I have an Angular - Flask app that I'm trying to dockerize with the following Dockerfile:
FROM node:latest as node
COPY . /APP
COPY package.json /APP/package.json
WORKDIR /APP
RUN npm install
RUN npm install -g @angular/cli@7.3.9
CMD ng build --base-href /static/
FROM python:3.6
WORKDIR /root/
COPY --from=0 /APP/ .
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["app.py"]
On building and running the image, the console gives no errors. However, it seems to be stuck. What could be the issue here?
Is it because they are in different directories?
Since I'm dockerizing Flask as well as Angular, how can I put both in the same directory? (Right now one is in /APP and the other in /root.)
Or should I put the two in separate containers and use a docker-compose.yml file?
In that case, how do I write the file? My Flask app actually calls my Angular app and both run on the same port, so I'm not sure running two different containers is a good idea.
I am also providing the commands that I use to build and run the image for reference:
docker image build -t prj .
docker container run --publish 5000:5000 --name prj prj
In the first stage of the Dockerfile (the Angular build), use RUN instead of CMD.
CMD only sets the default command to run when a container starts from the final image; it is not executed while the image is being built.
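A sketch of the first stage with the build moved into a RUN step (otherwise the stage is the same as in the question):

```dockerfile
FROM node:latest as node
COPY . /APP
COPY package.json /APP/package.json
WORKDIR /APP
RUN npm install
RUN npm install -g @angular/cli@7.3.9
# RUN executes at image-build time, so dist/ exists when the next stage copies /APP
RUN ng build --base-href /static/
```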
I have created a Jenkins slave with a work directory. I then have a Maven Java application with a Dockerfile.
Dockerfile
#### BUILD image ###
FROM maven:3-jdk-11 as builder
RUN mkdir -p /build
WORKDIR /build
COPY pom.xml /build
RUN mvn -B dependency:resolve dependency:resolve-plugins
COPY /src /build/src
RUN mvn package
### RUN ###
FROM openjdk:11-slim as runtime
EXPOSE 8080
ENV APP_HOME /app
ENV JAVA_OPTS=""
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/config
RUN mkdir $APP_HOME/log
RUN mkdir $APP_HOME/src
VOLUME $APP_HOME/log
VOLUME $APP_HOME/config
WORKDIR $APP_HOME
COPY --from=builder /build/src $APP_HOME/src
COPY --from=builder /build/target $APP_HOME/target
COPY --from=builder /build/target/*.jar app.jar
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar app.jar" ]
The Jenkins slave sees this Dockerfile and executes it. It builds the target folder; in the target folder, I have JaCoCo to show code coverage.
The Jenkins slave workspace is unable to see that target folder to show code coverage on the Jenkins JaCoCo page. How can I make it visible? I tried volumes in my docker-compose file as shown:
docker-compose.yml
version: '3'
services:
  my-application-service:
    image: 'my-application-image:v1.0'
    build: .
    container_name: my-application-container
    ports:
      - 8090:8090
    volumes:
      - /home/bob/Desktop/project/jenkins/workspace/My-Application:/app
However, I am not able to get the target folder into the workspace. How can I let the Jenkins slave see this JaCoCo code coverage?
It looks like you have a volume mapped from /home/bob/Desktop/project/jenkins/workspace/My-Application on the slave to /app in the Docker image, so normally, if you copy your JaCoCo reports back to /app/..., they should appear on the Jenkins slave.
But the volume mapping only works for the first stage of your Docker build process. As soon as you start your second stage (after FROM openjdk:11-slim as runtime), there is no more mapped volume. So these lines copy the data to the target image's /app dir, not the original mapped directory:
COPY --from=builder /build/src $APP_HOME/src
COPY --from=builder /build/target $APP_HOME/target
I think you could make it work if you do this in the first stage:
RUN cp -r /build/target /app/target
In your second stage you probably only need the built jar file, so you only need:
COPY --from=builder /build/target/*.jar app.jar
An alternative could be to extract the data from the Docker image itself after it has been built, but that needs some command-line kung fu:
docker cp $(docker ps -a -f "label=image=$IMAGE" -f "label=build=$BUILDNR" -f "status=exited" --format "{{.ID}}"):/app/target .
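A simpler variant of the same idea (a sketch; the image name is assumed from the compose file above) uses docker create plus docker cp, which works without label filters:

```shell
# build the image, create a stopped container from it, copy the report out, clean up
docker build -t my-application-image:v1.0 .
id=$(docker create my-application-image:v1.0)
docker cp "$id":/app/target ./target
docker rm "$id"
```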
See also https://medium.com/@cyril.delmas/how-to-keep-the-build-reports-with-a-docker-multi-stage-build-on-jenkins-507200f4007f
If I am understanding correctly, you want the contents of $APP_HOME/target to be visible from your Jenkins slave? If so, you may need to rethink your approach.
Jenkins runs a Docker build which builds your app and outputs your code coverage report under $APP_HOME/target; however, since you are building inside Docker, those files end up in the image rather than being available to the slave.
I'd consider running code coverage outside the Docker build, or you may have to do something hacky: run this build, then copy the files from the image into your ${WORKSPACE}, which I believe requires running a container first, copying the files out, and destroying the container afterwards.
I'm trying to create a Docker image, but it's not working. I'm running the following Docker command from the CLI and getting this error:
(from the <repo root>/src/Services/Accounts/Accounts.Api)
> docker build .
Error message:
Step 7/18 : COPY ["src/Services/Accounts/Accounts.Api/Accounts.Api.csproj", "src/Services/Accounts/Accounts.Api/"]
COPY failed: stat /var/lib/docker/tmp/docker-builder937056687/src/Services/Accounts/Accounts.Api/Accounts.Api.csproj: no such file or directory
and here's my Dockerfile
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
RUN ls -al
COPY ["src/Services/Accounts/Accounts.Api/Accounts.Api.csproj", "src/Services/Accounts/Accounts.Api/"]
COPY ["src/Services/Accounts/Accounts.Api/NuGet.config", "src/Services/Accounts/Accounts.Api/"]
RUN dotnet restore "src/Services/Accounts/Accounts.Api/Accounts.Api.csproj"
COPY . .
WORKDIR "/src/src/Services/Accounts/Accounts.Api"
RUN dotnet build "Accounts.Api.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Accounts.Api.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Hornet.Accounts.Api.dll"]
I've also tried running this from the repo root directory:
> docker build .\src\Services\Accounts\Accounts.Api
Why is it trying to find my files in some /var/lib/docker/tmp/blah folder?
Further info:
Windows 10 OS
Docker CE with Linux Containers
VSCode
I think we can solve this by clarifying how the build context works and how to specify the location of the Dockerfile.
The Docker build command usage looks like this:
docker build [OPTIONS] PATH
Your build context is the location that you define with PATH; Docker can use the files in the build context for the build. (You cannot use files outside the build context for the build.)
In your Dockerfile's COPY statements, you specify the source files' locations relative to the root of your repo. This implies that:
You should run your build command from the <root of your repo> with PATH . like this:
docker build .
You have not specified the path of your Dockerfile in the question, but it seems to be in a subfolder of your repo. To make your build work, issue the docker build command from the <root of your repo> like this:
docker build -f <path to Dockerfile> .
Your Dockerfile must be in the build context.
I think getting the build context and the location of the Dockerfile right will fix your build.
Also remember to use the -t flag to tag your image.
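Putting it together, a sketch (the Dockerfile path is an assumption; adjust it to wherever yours actually lives):

```shell
# run from the repo root so the COPY paths in the Dockerfile resolve inside the build context
docker build -f src/Services/Accounts/Accounts.Api/Dockerfile -t accounts-api .
```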
Why is it trying to find my files in some /var/lib/docker/tmp/blah folder?
As you probably know, Docker for Windows runs a Linux VM under the hood to build Linux-based containers. This location simply refers to a path inside that VM.