I'm building a .NET Core application that I would like to deploy via an Azure DevOps build pipeline. The pipeline will build, test and deploy using Docker containers.
I have successfully built the first docker image for my application using the following Dockerfile and am now attempting to run it on my local machine before using it in the pipeline:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
MAINTAINER yummylumpkins <yummy@lumpkins.com>
WORKDIR /app
COPY . ./
RUN dotnet publish MyAPIApp -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyAPIApp.dll"]
Running this image in a Docker container locally crashes, because my application uses AzureServiceTokenProvider() to fetch a token from Azure Services that is then used to fetch secrets from Azure Key Vault.
The local Docker container the image runs in is not authorized to access Azure Services.
The docker container error output looks like this:
---> Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProviderException: Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried the following 3 methods to get an access token, but none of them worked.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried to get token using Managed Service Identity. Access token could not be acquired. Connection refused
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried to get token using Visual Studio. Access token could not be acquired. Environment variable LOCALAPPDATA not set.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried to get token using Azure CLI. Access token could not be acquired. /bin/bash: az: No such file or directory
After doing a lot of research (and getting some positive feedback here) it would appear that the best way to authorize the locally running docker container is to build the image on top of an Azure CLI base image from Microsoft, then use az login --service-principal -u <app-url> -p <password-or-cert> --tenant <tenant> somewhere during the build/run process to authorize the local docker container.
I have successfully pulled the Azure CLI image from Microsoft (docker pull mcr.microsoft.com/azure-cli) and can run it via docker run -it mcr.microsoft.com/azure-cli. The container runs with the Azure CLI command line and I can log in via bash, but that's as far as I've come.
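Before wiring this into the Dockerfile, the same service-principal login can be verified interactively in that container (all IDs below are placeholders):
docker run -it mcr.microsoft.com/azure-cli
# then, inside the container:
az login --service-principal -u <app-id> -p <password-or-cert> --tenant <tenant-id>
# if the login succeeded, this should print an access token for Key Vault
az account get-access-token --resource https://vault.azure.net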
The next step would be to layer this Azure CLI image into my previous Dockerfile during the image build, but I am unsure how to do this. I've tried the following:
# New base image is now Azure CLI
FROM mcr.microsoft.com/azure-cli
RUN az login -u yummylumpkins -p yummylumpkinspassword
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
MAINTAINER yummylumpkins <yummy@lumpkins.com>
WORKDIR /app
COPY . ./
RUN dotnet publish MyAPIApp -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyAPIApp.dll"]
But this still doesn't work; the process results in the same error mentioned above (I think because the login does not persist when the new .NET Core layer is added). My question is: how would I explicitly build the Azure CLI image into my Dockerfile/image-building process with an az login command to authorize the Docker container, persist that authorization, and then set a command to run the app (MyAPIApp.dll) with the persisted authorization?
Or, am I taking the completely wrong approach with this? Thanks in advance for any feedback.
Posting an update with an answer here just in case anyone else has a similar problem. I haven't found any other solutions to this so I had to make my own. Below is my Dockerfile. Right now the image is sitting at 1GB so I will definitely need to go through and optimize, but I'll explain what I did:
#1 Install .NET Core SDK Build Environment
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
#2 Build YummyApp
COPY . ./
RUN dotnet publish YummyAppAPI -c Release -o out
#3 Install Ubuntu Base Image
FROM ubuntu:latest
MAINTAINER yummylumpkins <yummy@lumpkins.com>
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
#4 Install package dependencies & .NET Core SDK
RUN apt-get update \
&& apt-get install -y apt-transport-https \
&& apt-get update \
&& apt-get install -y curl bash dos2unix wget dpkg \
&& wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb \
&& dpkg -i packages-microsoft-prod.deb \
&& apt-get install -y software-properties-common \
&& apt-get update \
&& add-apt-repository universe \
&& apt-get update \
&& apt-get install -y apt-transport-https \
&& apt-get update \
&& apt-get install -y dotnet-sdk-3.1 \
&& apt-get update \
&& rm packages-microsoft-prod.deb
#5 Copy project files from earlier SDK build
COPY --from=build-env /app/out .
#6 Install Azure CLI for AppAuthorization
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
#7 Login to Azure Services and run application
COPY entrypoint.sh ./
RUN dos2unix entrypoint.sh && chmod +x entrypoint.sh
CMD ["/app/entrypoint.sh"]
Step 1 - Install .NET Core SDK Build Environment: We start with using .NET Core SDK as a base image to build my application. It should be noted that I have a big app with one solution and multiple project files. The API project is dependent on the other projects.
Step 2 - Build YummyApp: We copy the entire project structure from our local directory to our working directory inside the docker image (/app). Just in case anyone is curious, my project is a basic API app. It looks like this:
[YummyApp]
|-YummyAppDataAccess
|  |-YummyAppDataAccess.csproj
|-YummyAppInfrastructure
|  |-YummyAppInfrastructure.csproj
|-YummyAppAPI
|  |-YummyAppAPI.csproj
|-YummyAppServices
|  |-YummyAppServices.csproj
|-YummyApp.sln
After we copy everything over, we build/publish a Release configuration of the app.
Step 3 - Install Ubuntu Base Image: We start a new layer using Ubuntu. I initially tried to go with Alpine Linux but found it almost impossible to install the Azure CLI on it without some really hacky workarounds, so I went with Ubuntu for ease of installation.
Step 4 - Install package dependencies & .NET Core SDK: Inside the Ubuntu layer we set our work directory and install/update a number of packages, including the .NET Core SDK. Note that I needed to install dos2unix for a shell script file that runs later on; I explain why below.
Note: I initially tried to install .NET Core Runtime only as it is more lightweight and would bring this image down to about 700MB (from 1GB) but for some reason when I tried to run my application at the end of the file (Step 7) I was getting an error saying that no runtime was found. So I went back to the SDK.
Step 5 - Copy project files from earlier SDK Build: To save space (about 1GB worth), I copied the built project files from the first 'build' stage over to this Ubuntu layer.
Step 6 - Install Azure CLI: To authorize my application to fetch a token from Azure Services, I normally use Microsoft.Azure.Services.AppAuthentication. This package provides a method called AzureServiceTokenProvider() which (via my IDE) authorizes my application to connect to Azure Services and get a token that is then used to access Azure Key Vault. This whole issue started because my application is unable to do this from within a Docker container, because Azure doesn't recognize the request coming from the container itself.
So in order to work around this, we need to login via az login in the Azure CLI, inside the container, before we start the app.
Step 7 - Login to Azure Services and run application: Now it's showtime. I had two different problems to solve here. I had to figure out how to execute az login and dotnet YummyAppAPI.dll when this container would be fired up. But Dockerfiles only allow one ENTRYPOINT or CMD to be executed at runtime, so I found a workaround. By making a shell script file (entrypoint.sh) I was able to put both commands into this file and then execute that one file.
After setting this up, I was getting an error with entrypoint.sh that read something like: entrypoint.sh: executable file not found in $PATH. It turned out I had to change the permissions of this file using chmod, because otherwise my Docker container was unable to access it. That made the file visible, but it still wouldn't execute; I was receiving another error: standard_init_linux.go:211: exec user process caused "no such file or directory"
After some more digging, it turns out that this problem happens when you try to use a .sh file created in Windows on a Linux-based system. So I had to install dos2unix to convert this file to something Linux compatible. I also had to make sure the file was formatted correctly. For anyone curious, this is what my entrypoint.sh looks like:
#!/bin/sh
set -e
az login -u yummy@lumpkins.com -p ItsAlwaysYummy
dotnet /app/YummyAppAPI.dll
exec "$#"
Note: The login and password are hard-coded... I know this is bad practice (in fact, it's terrible); however, this is only for my local machine and will never see production. The next step would be to introduce environment variables with a service principal login. Since this deployment will eventually happen in the Azure DevOps pipeline, I can inject those ENV vars straight into the DevOps pipeline YAML so that all of this happens without me ever punching in credentials; they will come straight from the Key Vault where they are stored.
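For illustration, the entrypoint.sh would then look roughly like this (the AZURE_* variable names are placeholders of my own choosing, passed in with docker run -e or from the pipeline):
#!/bin/sh
set -e
# Credentials injected at run time, e.g. docker run -e AZURE_CLIENT_ID=... (variable names are arbitrary)
az login --service-principal -u "$AZURE_CLIENT_ID" -p "$AZURE_CLIENT_SECRET" --tenant "$AZURE_TENANT_ID"
dotnet /app/YummyAppAPI.dll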
Lastly, the size of this container is huge (1GB) and it does need to be optimized if it will be updated/built regularly. I will continue working on that but I am open to suggestions on how best to do that moving forward.
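One direction I still want to try (untested, so only a sketch): base the final stage on the ASP.NET Core runtime image instead of Ubuntu plus the full SDK, and install only the Azure CLI on top of it. My earlier runtime-only failure may simply have been because I installed dotnet-runtime rather than aspnetcore-runtime.
# Alternative final stage on the ASP.NET Core runtime image (Debian-based) instead of Ubuntu + SDK
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
COPY --from=build-env /app/out .
# Azure CLI via the same install script used above (curl is not in the slim image, so install it first)
RUN apt-get update \
    && apt-get install -y curl \
    && curl -sL https://aka.ms/InstallAzureCLIDeb | bash \
    && rm -rf /var/lib/apt/lists/*
# dos2unix step omitted here; still needed if entrypoint.sh is edited on Windows
COPY entrypoint.sh ./
RUN chmod +x entrypoint.sh
CMD ["/app/entrypoint.sh"]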
Thanks again all.
Related
I am trying to make my application work in a Linux container. It will eventually be deployed to Azure Container Instances. I have absolutely no experience with containers whatsoever, and I am getting lost in the documentation and examples.
I believe the first thing I need to do is create a Docker image for my project. I have installed Docker Desktop.
My project has this structure:
MyProject
MyProject.Core
MyProject.Api
MyProject.sln
Dockerfile
The contents of my Dockerfile is as follows.
#Use Ubuntu Linux as base
FROM ubuntu:22.10
#Install dotnet6
RUN apt-get update && apt-get install -y dotnet6
#Install LibreOffice
RUN apt-get -y install default-jre-headless libreoffice
#Copy the source code
WORKDIR /MyProject
COPY . ./
#Compile the application
RUN dotnet publish -c Release -o /compiled
#ENV PORT 80
#Expose port 80
EXPOSE 80
ENTRYPOINT ["dotnet", "/compiled/MyProject.Api.dll"]
#ToDo: Split build and deployment
Now when I try to build the image using command prompt I am using the following command
docker build - < Dockerfile
This all processed okay up until the dotnet publish command where it errors saying
Specify a project or solution file
Now I have verified that this command works fine when run outside of the Dockerfile. I suspect something is wrong with the copy? Again, I have tried variations of paths for the WORKDIR, but I just can't figure out what is wrong.
Any advice is greatly appreciated.
Thank you SiHa in the comments for providing a solution.
I made the following change to my docker file.
WORKDIR app
Then I use the following command to build (note that the image tag must be lowercase).
docker build -t imagename -f FileName .
The image now creates successfully. I am able to run this in a container.
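For anyone hitting the same thing: docker build - < Dockerfile sends only the Dockerfile itself as the build context, so COPY . ./ has nothing to copy. Passing the project directory (the trailing .) as the context is what fixes it; a full command (the tag below is just an example and must be lowercase) looks like:
# The trailing "." passes the current directory as build context so COPY . ./ has files to copy
docker build -t myproject-api -f Dockerfile .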
What works:
I have a php-fpm Docker container hosting a PHP application that uses Composer to manage dependencies. Jenkins builds the container, which also runs composer install, and pushes it to the registry.
What should work:
I want to include a private package from Git with Composer, which requires authentication. Therefore the container has to be in possession of secrets that should not be leaked to the container registry.
How can I install composer packages from private repositories without exposing the secrets to the registry?
What won't work:
Letting Jenkins run composer install: the dev environment needs the dependencies installed while building the image.
Copying the SSH key in and out during the build, as that would save it into the layers.
What other options do I have?
There might be better solutions out there, but mine was to use Docker multi-stage builds so that the build happens in an early stage that is not included in the final image. That way the container registry never sees the secrets. To verify that, I used dive.
Please see the Dockerfile below
FROM php-fpm AS build
# The SSH key only exists in this build stage, which never gets pushed to the registry
COPY ./id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
# The git host may also need to be added to /root/.ssh/known_hosts for non-interactive cloning
RUN wget https://raw.githubusercontent.com/composer/getcomposer.org/76a7060ccb93902cd7576b67264ad91c8a2700e2/web/installer -O - -q | php -- --quiet --install-dir=/usr/local/bin --filename=composer
COPY ./src /var/www/html
WORKDIR /var/www/html
RUN composer install

FROM php-fpm
# Only the resolved vendor folder is copied into the final image; the key never leaves the build stage
COPY --from=build /var/www/html/vendor /var/www/html/vendor
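To double-check the result as described above, build and then walk the layers with dive (the image tag is arbitrary):
docker build -t my-php-app .
# Inspect every layer of the final image; /root/.ssh/id_rsa should not show up in any of them
dive my-php-app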
I created a Docker container for talking to the Google API using Go. I started off using a SCRATCH container and got the error certificate signed by unknown authority; upon changing to ubuntu/alpine I still get the error.
resp, err := client.Get("https://www.googleapis.com/oauth2/v3/userinfo")
Any help solving this issue would be great. I can run the code fine on my mac.
Having done some research I can see the issue
https://github.com/golang/go/issues/24652
but I don't know if this is directly related or if I need to share some certificate with the container.
With scratch, you need to include the trusted certificates in addition to your application inside the image. E.g. if you have the ca-certificates.crt in your project to inject directly:
FROM scratch
ADD ca-certificates.crt /etc/ssl/certs/
ADD main /
CMD ["/main"]
If you are using a multi stage build and only want the certificates packaged by the distribution vendor, that looks like:
FROM golang:alpine as build
# Redundant, current golang images already include ca-certificates
RUN apk --no-cache add ca-certificates
WORKDIR /go/src/app
COPY . .
RUN CGO_ENABLED=0 go-wrapper install -ldflags '-extldflags "-static"'
FROM scratch
# copy the ca-certificates.crt from the build stage
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/bin/app /app
ENTRYPOINT ["/app"]
You can use a self-signed certificate, especially for Ubuntu.
Before you begin, you should have a non-root user configured with sudo privileges. You can learn how to set up such a user account by following an initial server setup guide for Ubuntu 16.04.
I have a Go application that I build into a binary and distribute as a Docker image.
Currently, I'm using ubuntu as my base image, but this causes an issue where if a user tries to use a Timezone other than UTC or their local timezone, they get an error stating:
pod error: panic: open /usr/local/go/lib/time/zoneinfo.zip: no such file or directory
This error is caused because Go's time.LoadLocation function requires that file.
I can think of two ways to fix this issue:
Continue using the ubuntu base image, but in my Dockerfile add the command: RUN apt-get install -y tzdata
Use one of Golang's base images, eg. golang:1.7.5-alpine.
What would be the recommended way? I'm not sure if I need to or should be using a Golang image since this is the container where the pre-built binary runs. My understanding is that Golang images are good for building the binary in the first place.
I prefer to use a multi-stage build. In the first stage you use a dedicated golang build container to install all dependencies and build the application. In the second stage you copy the binary file into an empty alpine container. This allows having all the required tooling and a minimal Docker image at the same time (in my case 6MB instead of 280MB).
Example of Dockerfile:
# build stage
FROM golang:1.8
ADD . /src
RUN set -x && \
    cd /src && \
    go get -t -v github.com/lisitsky/go-site-search-string && \
    CGO_ENABLED=0 GOOS=linux go build -a -o goapp

# final stage
FROM alpine
WORKDIR /app
COPY --from=0 /src/goapp /app/
ENTRYPOINT ["/app/goapp"]
EXPOSE 8080
Since not all OS images have the timezone data installed, this is what I did to make it work:
ADD https://github.com/golang/go/raw/master/lib/time/zoneinfo.zip /usr/local/go/lib/time/zoneinfo.zip
The full example of a multi-stage Docker image for adding timezone data is here
This is more of a vote, but apt-get is what we (my company's tech group) do in situations like this. It gives us complete control over the hierarchy of images, but this is assuming you may have future images based on this one.
You can use the system's tzdata. Or you can copy $GOROOT/lib/time/zoneinfo.zip into your image, which is a trimmed version of the system one.
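For illustration, the two options as Dockerfile fragments (pick one; base images are just examples, and zoneinfo.zip is assumed to have been copied from $GOROOT/lib/time/ on the build machine):
# Option 1: install the distribution's timezone database
FROM ubuntu
RUN apt-get update && apt-get install -y tzdata && rm -rf /var/lib/apt/lists/*

# Option 2: ship Go's trimmed zoneinfo.zip at the path the runtime falls back to
FROM alpine
COPY zoneinfo.zip /usr/local/go/lib/time/zoneinfo.zip
# Alternatively, set ZONEINFO to point at the zip wherever you place it
# ENV ZONEINFO=/usr/local/go/lib/time/zoneinfo.zip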
In our project, we have an ASP.NET Core project with an Angular2 client. At Docker build time, we launch:
FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN apt-get -qq update ; apt-get -qqy --no-install-recommends install \
git \
unzip
RUN curl -sL https://deb.nodesource.com/setup_7.x | bash -
RUN apt-get install -y nodejs build-essential
RUN ["dotnet", "restore"]
RUN npm install
RUN npm run build:prod
RUN ["dotnet", "build"]
EXPOSE 5000/tcp
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "run"]
Since restoring the npm packages is essential to be able to build the Angular2 client using npm run build, our Docker image is HUGE, almost 2GB, while the built Angular2 client itself is only 1.7MB.
Our app does nothing fancy: simple web API writing to MongoDB and displaying static files.
In order to improve the size of our image, is there any way to exclude paths that are useless at run time? For example node_modules or any .NET Core source?
dotnet restore may pull down a lot, especially if you have multiple target platforms (Linux, Mac, Windows).
Depending on how your application is configured (i.e. as portable .NET Core app or as self-contained), it can also pull the whole .NET Core Framework for one, or multiple platforms and/or architectures (x64, x86). This is mainly explained here.
When "Microsoft.NETCore.App" : "1.0.0" is defined, without the type platform, then then complete framework will be fetched via nuget. Then if you have multiple runtimes defined
"runtimes": {
"win10-x64": {},
"win10-x86": {},
"osx.10.10-x86": {},
"osx.10.10-x64": {}
}
it will get native libraries for all of these platforms too, and not only in your project directory but also in ~/.nuget and the npm cache, in addition to node_modules in your project, plus eventual copies in your wwwdata.
However, this is not how Docker works: everything you execute inside the Dockerfile is written to the container's virtual filesystem. That's why you see these issues.
You should follow my previous comment on your other question:
Run dotnet restore, dotnet build and dotnet publish outside the Dockerfile, for example in a bash or PowerShell/batch script.
Once finished, copy the content of the publish folder into your container with
dotnet publish
docker build bin\Debug\netcoreapp1.0\publish ... (your other parameters here)
This will generate the publish files on your file system, containing only the required DLL files, Views and wwwroot content, without all the other build files, artifacts, caches or source, and will run the Docker build from the bin\Debug\netcoreapp1.0\publish folder.
You also need to change your Dockerfile to copy the published files instead of running the build commands during container building.
Scott uses this Dockerfile for his example in his blog:
FROM ... # Your base image here
ENTRYPOINT ["dotnet", "YourWebAppName.dll"] # Application to run
ARG source=. # An argument from outside; here it holds the path on the real filesystem
WORKDIR /app
ENV ASPNETCORE_URLS http://+:82 # Define the port it should listen on
EXPOSE 82
COPY $source . # Copy the files from the defined folder (here bin\Debug\netcoreapp1.0\publish) into the Docker container
This is the recommended approach for building docker containers. When you run the build commands inside, all the build and publish artifacts remain in the virtual file system and the docker image grows unexpectedly.
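Assuming the layout above, the build is then driven from outside the Dockerfile, roughly like this (run from the project root, with the Dockerfile placed in the publish folder; names and configuration are just examples):
# e.g. from a PowerShell/batch script at the project root
dotnet restore
dotnet publish -c Debug
# Build with the publish output as the context, so COPY $source . only sees published files
docker build -t yourwebappname bin\Debug\netcoreapp1.0\publish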