Docker dotnet restore private feed fails

I have the following dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine3.16 as build
WORKDIR /app
RUN apk add --no-cache bash
RUN wget -qO- https://aka.ms/install-artifacts-credprovider.sh | bash
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS '{"endpointCredentials": [{"endpoint":"https://<myprivatefeed>/_packaging/<myName>/nuget/v3/index.json", "password":"<PAT>"}]}'
COPY . .
RUN dotnet restore
RUN dotnet publish -o /app/published-app
FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine3.16 as runtime
WORKDIR /app
COPY --from=build /app/published-app /app
ENTRYPOINT [ "dotnet", "/app/ApplicationConfigurationApi.WebApi.dll" ]
but when I try to build an image I get the following error:
/app/ApplicationConfigurationApi.WebApi/ApplicationConfigurationApi.WebApi.csproj : error NU1301: Unable to load the service index for source https://<myprivatefeed>/_packaging/<myName>/nuget/v3/index.json. [/app/ApplicationConfigurationApi.sln]
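For reference, the PAT does not have to be hardcoded in the ENV line; the same configuration can be passed in at build time. A minimal sketch, assuming a hypothetical FEED_ACCESSTOKEN build argument:
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"https://<myprivatefeed>/_packaging/<myName>/nuget/v3/index.json\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
built with docker build --build-arg FEED_ACCESSTOKEN=<PAT> . — the credentials seen by the restore are identical either way, so this is a hygiene improvement rather than a fix.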
I tried to copy my GitLab *.crt (downloaded from Chrome) into the container by adding these instructions:
...
COPY . .
COPY ./mycert.crt /usr/local/share/ca-certificates/mycert.crt
RUN cat /usr/local/share/ca-certificates/mycert.crt >> /etc/ssl/certs/mycert.crt && \
apk --no-cache add \
curl
RUN update-ca-certificates
RUN dotnet restore
...
I also tried adding (without the certificate) this RUN line:
...
COPY . .
RUN dotnet nuget update source "gitlab" --username "<my-userName>" --password "<PAT>" --store-password-in-clear-text --valid-authentication-types basic
RUN dotnet restore
...
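Note that dotnet nuget update source assumes a source named "gitlab" is already defined in nuget.config; if it were not, the equivalent line (sketched here with the same placeholders) would use dotnet nuget add source instead:
RUN dotnet nuget add source "https://<myprivatefeed>/_packaging/<myName>/nuget/v3/index.json" --name "gitlab" --username "<my-userName>" --password "<PAT>" --store-password-in-clear-text --valid-authentication-types basic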
Using this feed on my host machine does not cause any issue and I can perform the restore correctly.
I tried 'dotnet restore --verbosity detailed' and the output seems to show that the feed has been persisted successfully:
NuGet Config files used:
/app/nuget.config
/root/.nuget/NuGet/NuGet.Config
Feeds used:
https://api.nuget.org/v3/index.json
https://<myprivatefeed>/_packaging/<myName>/nuget/v4/index.json
NuGet packages coming from api.nuget.org are fetched successfully; the ones from my private feed are not.
docker version output is:
Server: Docker Desktop 4.15.0 (93002)
Engine:
Version: 20.10.21
API version: 1.41 (minimum version 1.12)
Go version: go1.18.7
Git commit: 3056208
Built: Tue Oct 25 18:00:19 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.10
GitCommit: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
The dotnet solution targets net6.0.
----UPDATE
Here is the Dockerfile updated with some of the suggestions from the comments below:
FROM mcr.microsoft.com/dotnet/sdk:6.0-focal as build
WORKDIR /app
RUN wget -O - https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
RUN wget -qO- https://aka.ms/install-artifacts-credprovider.sh | bash
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED true
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\":\"${MY-PRIVATE-FEED-BASE-URL}\", \"username\":\"${USERNAME}\", \"password\":\"${PAT}\"}]}"
COPY . .
RUN echo | openssl s_client -host <my-private-feed-base-url> -port 443 -prexit -showcerts > tmpfile
RUN echo | sed -n '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' tmpfile > /usr/local/share/ca-certificates/<my-private-feed-base-url>.crt
RUN apt-get install -y ca-certificates
RUN chmod 644 /usr/local/share/ca-certificates/<my-private-feed-base-url>.crt && update-ca-certificates
RUN dotnet restore
RUN dotnet publish -o /app/published-app
FROM mcr.microsoft.com/dotnet/aspnet:6.0-focal as runtime
WORKDIR /app
COPY --from=build /app/published-app /app
ENTRYPOINT [ "dotnet", "/app/ApplicationConfigurationApi.WebApi.dll" ]
The error is the same as with the first dockerfile.
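One further debugging step that can be added to the build stage (a sketch, assuming curl is installed first with apt-get update && apt-get install -y curl) is to hit the feed index directly and inspect the TLS handshake and HTTP status:
RUN curl -v -u "<my-userName>:<PAT>" -o /dev/null https://<myprivatefeed>/_packaging/<myName>/nuget/v3/index.json
A certificate error here points at the CA store, while a 401/403 points at the PAT or the username.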
I will also attach a screenshot of the solution structure (maybe it could be helpful).
----END UPDATE
I also tried the following solutions, but none of them worked:
Nuget package restore error in Docker Compose build
NuGet in Docker: Error NU1301: Unable to load the service index for source - Sequence contains no elements
Restore NuGet Packages from a Private Feed when building Docker Containers
Docker "Dotnet Restore" fails with private package with .net 6.0 SDK
Thanks in advance,
Dave.

You probably did not install the certificate for your private feed.
For Debian-based docker images you can use the following snippet in your Dockerfile to download and install the certificate:
RUN echo | openssl s_client -host <private-feed-domain> -port 443 -prexit -showcerts > tmpfile
RUN echo | sed -n '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' tmpfile > /usr/local/share/ca-certificates/<private-feed-domain>.crt
RUN apt-get install -y ca-certificates
RUN chmod 644 /usr/local/share/ca-certificates/<private-feed-domain>.crt && update-ca-certificates
Disclaimer:
Use this snippet only if you are in charge of the destination, otherwise it's a security risk.
For a more secure approach, download your CA manually (if it's a chained one, the root and any intermediate CAs as well), verify it and copy it into your docker container:
RUN apt-get install -y ca-certificates
COPY <private-feed-domain>.crt /usr/local/share/ca-certificates/<private-feed-domain>.crt
RUN chmod 644 /usr/local/share/ca-certificates/<private-feed-domain>.crt && update-ca-certificates
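To confirm the certificate is actually trusted after update-ca-certificates, a verification step can be added before the restore; a minimal sketch, assuming openssl is present in the image:
RUN echo | openssl s_client -connect <private-feed-domain>:443 -verify_return_error > /dev/null
This RUN fails the build if the chain still does not verify, which surfaces the problem before dotnet restore does.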

Related

Cannot run installed tool in Dockerfile even though it's there

I installed diesel-cli in a Dockerfile:
FROM alpine:latest
ENV PATH="/root/.cargo/bin:${PATH}"
RUN apk update
RUN apk add postgresql curl gcc musl-dev libpq-dev bash
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
WORKDIR /app
RUN cargo install diesel_cli --no-default-features --features postgres
COPY . .
EXPOSE 8000
CMD [ "docker/entrypoint.sh"]
That works fine. The entrypoint.sh is:
#!/bin/bash
export PATH="/root/.cargo/bin:${PATH}"
ls /root/.cargo/bin/diesel
bash -c "/root/.cargo/bin/diesel setup"
The strange thing is that the ls shows that the diesel binary is there, but when running the docker container it still says:
bash: line 1: /root/.cargo/bin/diesel: No such file or directory
I also tried calling diesel right from the Dockerfile with the same result.
Why can't I run diesel this way?
See comment by The Fool!
Using a different base image resolves the problem:
FROM debian:bullseye-slim
ENV PATH="/root/.cargo/bin:${PATH}"
RUN apt update -y
RUN apt install postgresql curl gcc libpq-dev bash -y
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
WORKDIR /app
# This may take a minute
RUN cargo install diesel_cli --no-default-features --features postgres
COPY . .
# provision the database
EXPOSE 8000
CMD [ "docker/entrypoint.sh"]

Getting "Additional property ssh is not allowed" error when specifying ssh-agent in docker-compose

I'm trying to build a Python docker image which pip-installs from a private repository using ssh; the details are in a requirements.txt file.
I've spent a long time reading guides from StackOverflow as well as the official Docker documentation on the subject ...
https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds
https://docs.docker.com/compose/compose-file/build/#ssh
... and have come up with a Dockerfile which builds and runs fine when using:
$ docker build --ssh default -t build_tester .
However, when I try to do the same in a docker-compose.yml file, I get the following error:
$ docker-compose up
services.build-tester.build Additional property ssh is not allowed
This is the same even when enabling buildkit:
$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose up
services.build-tester.build Additional property ssh is not allowed
Project structure
- docker-compose.yml
- build_files
- Dockerfile
- requirements.txt
- app
- app.py
Dockerfile
# syntax=docker/dockerfile:1.2
FROM python:bullseye as builder
RUN mkdir -p /build/
WORKDIR /build/
RUN apt-get update; \
apt-get install -y git; \
rm -rf /var/lib/apt/lists/*
RUN mkdir -p -m 0600 ~/.ssh; \
ssh-keyscan -H github.com >> ~/.ssh/known_hosts
RUN python3 -m venv env; \
env/bin/pip install --upgrade pip
COPY requirements.txt .
RUN --mount=type=ssh \
env/bin/pip install -r requirements.txt; \
rm requirements.txt
FROM python:slim as runner
RUN mkdir -p /app/
WORKDIR /app/
COPY --from=builder /build/ .
COPY app/ .
CMD ["env/bin/python", "app.py"]
docker-compose.yml
services:
build-tester:
container_name: build-tester
image: build-tester
build:
context: build_files
dockerfile: Dockerfile
ssh:
- default
If I remove ...
ssh:
- default
... the docker-compose up command builds the image OK, but the app obviously doesn't run, as app.py doesn't have the required packages installed from pip.
I'd really like to be able to get this working in this way if possible so any advice would be much appreciated.
OK - so it ended up being a very simple fix... I just needed to ensure docker-compose was updated to version 2.6 on my Mac.
For some reason brew wasn't updating my docker cask properly, so it was still running a package from early January 2022. It seems --ssh compatibility was added sometime between then and now.
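A quick way to confirm which Compose is actually in use (the build.ssh key needs a reasonably recent Compose v2 release):
docker-compose --version
docker compose version
If the two report different versions, there are two different Compose installations on the PATH.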

Dockerfile not recognized when trying to create it

I am trying to build a Docker image from a dockerfile command I received from the previous developer:
bash-5.1$ ls
data_collection demo.py examples requirements.txt start.py
demonstrateur.ipynb Dockerfile README.md serious_game test
bash-5.1$ docker build Dockerfile .
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
I also tried with
bash-5.1$ docker build -t serious-game:0.0.1 -t serious-game:latest Dockerfile .
and I already completely reinstalled Docker by following this tutorial, but it gave the same error.
Here is my Dockerfile content:
bash-5.1$ cat Dockerfile
FROM nvidia/cuda:10.2-base-ubuntu18.04
MAINTAINER me
EXPOSE 5555
EXPOSE 8886
ENV DEBIAN_FRONTEND noninteractive
ENV WD=/home/serious-game/
WORKDIR ${WD}
# Add git and ssh
RUN apt-get -y update && \
apt-get -y upgrade && \
apt-get -y install git ssh pkg-config python3-pip python3-opencv
# Python dependencies
COPY requirements.txt /requirements.txt
RUN cd / && \
python3 -m pip install --upgrade pip && \
pip3 install -r requirements.txt
CMD ["start.py"]
If you are trying to build an image from a local Dockerfile, and your current bash location is the same folder where the Dockerfile resides, all you have to do is:
docker build .
As written in the docs, Docker uses the file named Dockerfile by default. If you want to specify the file you can use the option --file or -f of the docker build command.
In your case you can just use the following to solve your problem:
docker build -t serious-game:0.0.1 -t serious-game:latest .
But if you want to specify another file named TestDockerfile (example for testing):
docker build -t serious-game:0.0.1 -t serious-game:latest -f TestDockerfile .

How to install Unix ODBC drivers to a unix docker instance?

I'm trying to connect my .net core application, hosted on a unix docker container to an external Vertica database.
It works fine from a Windows client because there are Vertica drivers for Windows, but there isn't an equivalent driver for Vertica under unix.
When I try to run a Query against Vertica I get the following error:
Dependency unixODBC with minimum version 2.3.1 is required. Unable to load shared library 'libodbc.so.2' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibodbc.so.2.so:
My docker file looks like this
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app
ARG DEBIAN_FRONTEND=noninteractive
# Copy csproj and restore as distinct layers
COPY ./*.sln ./
COPY ./MyApp/*.csproj ./MyApp/
RUN dotnet restore MyApp.sln
COPY . ./
RUN dotnet publish MyApp.sln -c Release -f=netcoreapp2.1 -o out
RUN cp /app/MyApp/*.yml /app/MyApp/out
RUN cp /app/*.ini /app/MyApp/out
#ODBC
FROM microsoft/dotnet:aspnetcore-runtime
RUN apt-get update
RUN apt-get install -y apt-utils
RUN curl -O -k https://www.vertica.com/client_drivers/9.1.x/9.1.1-0/vertica-client-9.1.1-0.x86_64.tar.gz
RUN tar vzxf vertica-client-9.1.1-0.x86_64.tar.gz && rm vertica-client-9.1.1-0.x86_64.tar.gz
RUN apt-get install -y unixodbc-dev
ADD odbc.ini /root/odbc.ini
ADD odbcinst.ini /root/odbcinst.ini
ADD vertica.ini /root/vertica.ini
ENV VERTICAINI=/root/vertica.ini
ENV ODBCINI=/root/odbc.ini
RUN echo "$VERTICAINI $ODBCINI"
WORKDIR /app
COPY --from=build-env /app/MyApp/out .
ENTRYPOINT ["dotnet", "MyApp.dll"]

Docker - "The command '/bin/sh -c apt-get install nodejs' returned a non-zero code: 100"

I am trying to build a Docker image for my API with the following Dockerfile:
FROM microsoft/dotnet AS build-env
ARG source
RUN echo "source: $source"
WORKDIR /app
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get install nodejs
RUN node -v
RUN npm -v
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
#Copy everything else & build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "API_App.dll"]
However, when I run the docker build command, I keep getting the following error:
Unable to locate package nodejs
The command '/bin/sh -c apt-get install nodejs' returned a non-zero code: 100
Can someone tell me why I am getting this error?
Node Version: 8.11.3
npm Version: 5.6.0
You may occasionally experience some cache issues when the live repositories you’re pulling data from have changed.
To fix this, modify the Dockerfile to do a cleanup and update of the sources before you install any new packages.
...
# clean and update sources
RUN apt-get clean && apt-get update
...
This answer is from digitalocean-issue
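A closely related pattern (an additional sketch, not from the linked answer) is to keep the cleanup, update and install in a single RUN layer, so a cached apt-get update layer can never go stale relative to the install step:
RUN apt-get clean && apt-get update && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/*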
