I have a private repository that I use as a dependency in my frontend React app. Currently, when I install it, I use a fine-grained PAT that grants me access to that GitHub repository, noted in the .env file as:
PERSONAL_ACCESS_TOKEN: blablabla
I understand that it is not safe to put the personal access token inside the .env file, as anyone else could use it, and since it ends up in the environment, other people can access it there as well.
Even when I did use it in the .env file, it still did not install the package and gave me the following error:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
What would be the best practice for having this token validated and used to install the private dependency when running docker build to create the Docker image?
My current Dockerfile is as follows:
# Build stage:
FROM node:14-alpine AS build
RUN apk update && apk upgrade && \
apk add --no-cache bash git openssh
# set working directory
WORKDIR /react
# install app dependencies
# (copy _just_ the package.json here so Docker layer caching works)
COPY ./package*.json yarn.lock ./
RUN yarn install --network-timeout 1000000
# build the application
COPY . .
RUN yarn build
# Final stage:
FROM node:14-alpine
# set working directory
WORKDIR /react
# get the build tree
COPY --from=build /react/build/ ./build/
# Install `serve` to run the application.
RUN npm install -g serve
EXPOSE 3000
# run the application
# ENTRYPOINT ["npx"]
CMD ["serve", "-s", "build"]
I use GitHub Actions to deploy the actual production version; would I need to simply set the token up as an Actions secret?
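One approach (a sketch, not necessarily the only acceptable practice) is to keep the token out of the image entirely and hand it to the build as a BuildKit secret, so it is only visible to the single RUN step that mounts it and never ends up in a layer or in the final image. The secret id gh_token, the file gh_token.txt, and the x-access-token username in the URL rewrite below are assumptions for illustration; you may need to adjust the username/token format GitHub expects for a fine-grained PAT. The syntax directive must be the very first line of the Dockerfile:
# syntax=docker/dockerfile:1
# ...same build stage as above, up to COPY ./package*.json yarn.lock ./ ...
# The token is mounted only for this RUN step and is not written to any image layer.
RUN --mount=type=secret,id=gh_token \
    git config --global url."https://x-access-token:$(cat /run/secrets/gh_token)@github.com/".insteadOf "https://github.com/" && \
    yarn install --network-timeout 1000000 && \
    git config --global --unset url."https://x-access-token:$(cat /run/secrets/gh_token)@github.com/".insteadOf
Built locally with:
DOCKER_BUILDKIT=1 docker build --secret id=gh_token,src=./gh_token.txt -t my-react-app .
In GitHub Actions you would indeed store the PAT as an Actions secret and pass it through to the same --secret flag (or to the secrets input of docker/build-push-action) at build time, rather than committing it to the .env file.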
Context
I'm juggling between a Dockerfile and docker-compose to figure out the best security practice for deploying my Docker image and pushing it to the Docker registry so everyone can use it.
Currently, I have a FastAPI application that uses an AWS API token for an AWS Service. I'm trying to figure out a solution that can work in both Docker for Windows (GUI) and Docker for Linux.
In the Docker Desktop GUI on Windows it's clear that, after I pull the image from the registry, I can add the API tokens to the environment of the image and spin up a container.
I need to know
When it comes to Docker for Linux, I'm trying to figure out a way to build an image with an AWS API token either via Dockerfile or docker-compose.yml.
Things I tried
Followed the solution from this blog
As I said earlier, if I do something like what's mentioned in the blog, it's fine for my personal use, but a user who pulls my Docker image from the registry will also end up with my AWS secrets. How do I handle this situation in a better way?
Current state of Dockerfile
FROM python:3.10
# Set the working directory to /src
WORKDIR /src
# Copy the current directory contents into the container at /src
COPY ./ /src
# Install any needed packages specified in requirements.txt
#RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Run main.py when the container launches
CMD ["python", "main.py"]
I have a Dockerfile like below:
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Create directory to store our application
WORKDIR /app
## The following three commands adapted from Dockerfile snippet at
## https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds
# Install ssh client and git
RUN apt-get update && apt-get upgrade -y && apt-get install -y openssh-client git
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
# clone my-repo, authenticating using client's ssh-agent.
RUN --mount=type=ssh git clone git@gitlab.com:mycompany/data-products/my-repo.git /app/
# set up python (conda) environment to run application
RUN conda create --name recenv --file conda-linux-64.lock
# run my-package with the conda environment we just created.
CMD ["conda", "run", "-n", "recenv", "python", "-m", "my_package.train" "path/to/gcp/service/account.json"]
This dockerfile builds successfully with docker build . --no-cache --tag my-package --ssh default but fails (as expected) on docker run my-package:latest with:
FileNotFoundError: [Errno 2] No such file or directory: path/to/gcp/service/account.json
So I've gotten the SSH secrets management working, and the RUN ... git clone step uses my SSH/RSA credentials successfully. But I'm having trouble using my other secret, my GCP service account JSON file. The difference is that I only need the SSH secret in a RUN step, whereas I need the GCP service account secret in my CMD step.
Everything I've read, such as the Docker docs page on using the --secret flag and the tutorials and SO answers I've found, covers how to pass in a secret to be used in a RUN step, but not in the CMD step. And I need to pass my GCP service account JSON file to my CMD step.
I could just COPY the file into my container, but from my reading that's supposedly not a great solution from a security standpoint.
What is the recommended, secure way of passing a secret json file to the CMD step of a docker container?
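One option (a sketch, not necessarily the officially recommended way) is to leave the file out of the image entirely and bind-mount it read-only when the container starts, so the secret lives only on the host and inside the running container, never in an image layer. The host path below is an assumption; the target path just has to match the argument your CMD already passes to my_package.train:
docker run --rm \
  --mount type=bind,source="$HOME/secrets/service-account.json",target=/app/path/to/gcp/service/account.json,readonly \
  my-package:latest
Docker Compose and Swarm also provide secrets that are surfaced to the running container under /run/secrets/<name>, which achieves the same thing declaratively; either way, the JSON never becomes part of the image you ship.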
I want to run Docker locally, but my internet service provider does content filtering, so it doesn't work correctly.
My Dockerfile:
FROM node:15.14
COPY . .
ENV NODE_ENV=development
RUN npm install --unsafe-perm
RUN npm run build
RUN npm i -g pm2
I tried docker run x and got the error:
certificate signed by unknown authority.
Can somebody please tell me how to solve it?
I found a solution.
I added these lines to the Dockerfile:
ADD your_provider_certificate_sign_path local_path
RUN cat local_path | sh
ENV NODE_EXTRA_CA_CERTS=/etc/ca-bundle.crt
ENV REQUESTS_CA_BUNDLE=/etc/ca-bundle.crt
ENV SSL_CERT_FILE=/etc/ca-bundle.crt
and it ran successfully.
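For reference, a more conventional sketch of the same idea, assuming you have the provider's root certificate as a PEM file named provider-ca.crt in the build context (the file name and paths are assumptions) and that the base image ships the ca-certificates package, as the Debian-based node images do:
FROM node:15.14
# Add the provider's root CA to the system trust store.
COPY provider-ca.crt /usr/local/share/ca-certificates/provider-ca.crt
RUN update-ca-certificates
# Make Node (and tools that honour this variable) trust the extra CA as well.
ENV NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/provider-ca.crt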
I'm building a .NET Core application that I would like to deploy via Azure Devops build pipeline. The pipeline will build, test and deploy using Docker containers.
I have successfully built the first docker image for my application using the following Dockerfile and am now attempting to run it on my local machine before using it in the pipeline:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
MAINTAINER yummylumpkins <yummy@lumpkins.com>
WORKDIR /app
COPY . ./
RUN dotnet publish MyAPIApp -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyAPIApp.dll"]
Running this image in a Docker container locally crashes, because my application uses AzureServiceTokenProvider() to fetch a token from Azure Services that is then used to fetch secrets from Azure Key Vault.
The local Docker container that the image runs in does not have the authorization to access Azure Services.
The docker container error output looks like this:
---> Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProviderException: Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried the following 3 methods to get an access token, but none of them worked.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried to get token using Managed Service Identity. Access token could not be acquired. Connection refused
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried to get token using Visual Studio. Access token could not be acquired. Environment variable LOCALAPPDATA not set.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/XXXX-XXXX-XXXX-XXXX. Exception Message: Tried to get token using Azure CLI. Access token could not be acquired. /bin/bash: az: No such file or directory
After doing a lot of research (and getting some positive feedback here) it would appear that the best way to authorize the locally running docker container is to build the image on top of an Azure CLI base image from Microsoft, then use az login --service-principal -u <app-url> -p <password-or-cert> --tenant <tenant> somewhere during the build/run process to authorize the local docker container.
I have successfully pulled the Azure CLI image from Microsoft (docker pull mcr.microsoft.com/azure-cli) and can run it via docker run -it mcr.microsoft.com/azure-cli. The container runs with the Azure CLI command line and I can log in via bash, but that's as far as I've come.
The next step would be to layer this Azure CLI image into my previous Dockerfile during the image build, but I am unsure how to do this. I've tried the following:
# New base image is now Azure CLI
FROM mcr.microsoft.com/azure-cli
RUN az login -u yummylumpkins -p yummylumpkinspassword
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
MAINTAINER yummylumpkins <yummy@lumpkins.com>
WORKDIR /app
COPY . ./
RUN dotnet publish MyAPIApp -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyAPIApp.dll"]
But this still doesn't work; the process still results in the same error mentioned above (I think because the login does not persist when the new .NET Core stage is added). My question is: how would I explicitly build the Azure CLI image into my Dockerfile/image-building process with an az login command to authorize the Docker container, persist that authorization, and then set a command to run the app (MyAPIApp.dll) with the persisted authorization?
Or, am I taking the completely wrong approach with this? Thanks in advance for any feedback.
Posting an update with an answer here just in case anyone else has a similar problem. I haven't found any other solutions to this so I had to make my own. Below is my Dockerfile. Right now the image is sitting at 1GB so I will definitely need to go through and optimize, but I'll explain what I did:
#1 Install .NET Core SDK Build Environment
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
#2 Build YummyApp
COPY . ./
RUN dotnet publish YummyAppAPI -c Release -o out
#3 Install Ubuntu Base Image
FROM ubuntu:latest
MAINTAINER yummylumpkins <yummy@lumpkins.com>
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
#4 Install package dependencies & .NET Core SDK
RUN apt-get update \
&& apt-get install -y apt-transport-https \
&& apt-get update \
&& apt-get install -y curl bash dos2unix wget dpkg \
&& wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb \
&& dpkg -i packages-microsoft-prod.deb \
&& apt-get install -y software-properties-common \
&& apt-get update \
&& add-apt-repository universe \
&& apt-get update \
&& apt-get install -y apt-transport-https \
&& apt-get update \
&& apt-get install -y dotnet-sdk-3.1 \
&& apt-get update \
&& rm packages-microsoft-prod.deb
#5 Copy project files from earlier SDK build
COPY --from=build-env /app/out .
#6 Install Azure CLI for AppAuthorization
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
#7 Login to Azure Services and run application
COPY entrypoint.sh ./
RUN dos2unix entrypoint.sh && chmod +x entrypoint.sh
CMD ["/app/entrypoint.sh"]
Step 1 - Install .NET Core SDK Build Environment: We start with the .NET Core SDK as a base image to build my application. It should be noted that I have a big app with one solution and multiple project files; the API project is dependent on the other projects.
Step 2 - Build YummyApp: We copy the entire project structure from our local directory to our working directory inside the docker image (/app). Just in case anyone is curious, my project is a basic API app. It looks like this:
[YummyApp]
|-YummyAppDataAccess
|  |-YummyAppDataAccess.csproj
|-YummyAppInfrastructure
|  |-YummyAppInfrastructure.csproj
|-YummyAppAPI
|  |-YummyAppAPI.csproj
|-YummyAppServices
|  |-YummyAppServices.csproj
|-YummyApp.sln
After we copy everything over, we build/publish a Release configuration of the app.
Step 3 - Install Ubuntu Base Image: We start a new stage using Ubuntu. I initially tried to go with Alpine Linux but found it almost impossible to install the Azure CLI on it without some really hacky workarounds, so I went with Ubuntu for ease of installation.
Step 4 - Install package dependencies & .NET Core SDK: Inside the Ubuntu stage we set our work directory and install/update a number of packages, including the .NET Core SDK. It should be noted that I needed to install dos2unix for a shell script file I had to run later on; I will explain why below.
Note: I initially tried to install only the .NET Core Runtime, as it is more lightweight and would bring this image down to about 700MB (from 1GB), but for some reason when I tried to run my application at the end of the file (Step 7) I got an error saying that no runtime was found. So I went back to the SDK.
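For what it's worth, the likely cause (an assumption on my part, not something verified in this setup) is that an ASP.NET Core app needs the ASP.NET Core shared runtime rather than the base .NET runtime, so a runtime-only image would install something like this, using the Microsoft package feed configured in Step 4:
RUN apt-get update && apt-get install -y aspnetcore-runtime-3.1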
Step 5 - Copy project files from earlier SDK build: To save space (about 1GB worth), I copied the built project files from the first 'build image' over to this Ubuntu stage.
Step 6 - Install Azure CLI: In order to authorize my application to fetch a token from Azure Services, I normally use Microsoft.Azure.Services.AppAuthentication. This package provides a method called AzureServiceTokenProvider() which (via my IDE) authorizes my application to connect to Azure Services and get a token that is then used to access the Azure Key Vault. This whole issue started because my application is unable to do this from within a Docker container, because Azure doesn't recognize the request coming from the container itself.
So, to work around this, we need to log in via az login in the Azure CLI, inside the container, before we start the app.
Step 7 - Login to Azure Services and run application: Now it's showtime. I had two different problems to solve here. I had to figure out how to execute az login and dotnet YummyAppAPI.dll when this container would be fired up. But Dockerfiles only allow one ENTRYPOINT or CMD to be executed at runtime, so I found a workaround. By making a shell script file (entrypoint.sh) I was able to put both commands into this file and then execute that one file.
After setting this up, I was getting an error from entrypoint.sh that read something like this: entrypoint.sh: executable file not found in $PATH. I found out that I had to change the permissions of this file using chmod, because otherwise my Docker container was unable to access it. That made the file visible, but it still wouldn't execute; I was receiving another error: standard_init_linux.go:211: exec user process caused "no such file or directory"
After some more digging, it turns out this problem happens when you try to use a .sh file created on Windows (with CRLF line endings) on a Linux-based system. So I had to install dos2unix to convert the file to something Linux-compatible, and I had to make sure the file was formatted correctly. For anyone curious, this is what my entrypoint.sh looks like:
#!/bin/sh
set -e
az login -u yummy@lumpkins.com -p ItsAlwaysYummy
dotnet /app/YummyAppAPI.dll
exec "$#"
Note: The login and password are hard-coded... I know this is bad practice (in fact, it's terrible), but this is only for my local machine and will never see production. The next step would be to introduce environment variables with a service principal login. Since this deployment will eventually happen in the Azure DevOps pipeline, I can inject those environment variables straight into the DevOps pipeline YAML so that all of this happens without me ever punching in credentials; they will come straight from the Key Vault where they are stored.
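As a sketch of that next step (the variable names below are assumptions; any names work as long as they match what the pipeline or docker run injects), the entrypoint could read a service principal from the environment instead of hard-coding credentials:
#!/bin/sh
set -e
# AZURE_CLIENT_ID, AZURE_CLIENT_SECRET and AZURE_TENANT_ID are supplied at run time,
# e.g. via docker run -e ... or pipeline variables, and are never baked into the image.
az login --service-principal -u "$AZURE_CLIENT_ID" -p "$AZURE_CLIENT_SECRET" --tenant "$AZURE_TENANT_ID"
exec dotnet /app/YummyAppAPI.dll
Using exec means the dotnet process replaces the shell, so signals from Docker (stop, restart) reach the app directly.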
Lastly, the size of this container is huge (1GB) and it does need to be optimized if it will be updated/built regularly. I will continue working on that but I am open to suggestions on how best to do that moving forward.
Thanks again all.