Build a Docker image from a remote repository (GitHub Actions, GitLab CI) with env and secrets from another remote repo? - docker

I have a (private) repository on GitHub with my project and an integrated GitHub Actions workflow that builds a Docker image and pushes it directly to GHCR.
But I have a problem with storing secrets and passing them into the image at build time. I have the following structure in my project:
.git (gitignored)
.env (gitignored)
config (gitignored) / config files (jsons)
src (git) / other folders and files
As you can see, I have a .env file and a config folder. Both store data or files that are not in the repo but are required in the built environment.
So I'd like to ask: is there any option to avoid committing these files to my main remote repo (even though it's private) and instead pull them in during the build stage within GitHub Actions?
It's not a problem to publish the env & config files somewhere else, privately, in a separate private remote repo. The point is not to push these files to the main private repo, because its RBAC logic doesn't allow me to restrict access to selected files.
P.S. Advice on using GitLab CI or Bitbucket, if you know how to solve the problem there, is also appreciated. Don't be shy to share it!

It seems that this question is a bit hot, so here is the answer I found.
The example shown below is based on a Node.js/NestJS app and pulls a private remote repo from GitHub.
In my case, the scenario was about pulling config files and other secrets from a separate private repo and merging them with the project during the container build. This option isn't about the security of secrets inside the container itself; it's about making one part of the project (the repo with the business logic) available to other developers, who won't see the credentials and configs from the separate private repo in your development repo, while keeping that secret repo under separate access permissions.
First of all, you need your personal access token (PAT); on GitHub you can find it under Settings → Developer settings → Personal access tokens.
As for GitLab, the flow is the same; you'll need to pass a token from somewhere in the settings. Also, a piece of advice: create not just one but two Dockerfiles before testing this.
Why HTTPS instead of SSH? With SSH you'd also need to pass the SSH keys and configure the client correctly, which is a bit more complicated because of CRLF/LF line-ending formats, the crypto algorithms supported by SSH, and so on.
# it could be Go, PHP, what-ever
FROM node:17
# you will need your GitHub token from settings
# we will pass it to build env via GitHub action
ARG CR_PAT
ENV CR_PAT=$CR_PAT
# username and name of the separate private repo
ARG github_username
ARG repo_name
# update OS in build container
RUN apt-get update
RUN apt-get install -y git
# workdir app, it is a cd (directory)
WORKDIR /usr/src/app
# installing nest library
RUN npm install -g @nestjs/cli
# config git with credentials
# we will use https since it is much easier to config instead of ssh
RUN git config --global url."https://${github_username}:${CR_PAT}@github.com/".insteadOf "https://github.com/"
# cloning the repo to WORKDIR
RUN git clone https://github.com/${github_username}/${repo_name}.git
# we move all files from pulled repo to root of WORKDIR
# including files named with dot at the beginning (like .env)
RUN mv ${repo_name}/* ${repo_name}/.[^.]* . && rmdir ${repo_name}/
# node.js stuff
COPY package.json ./
RUN yarn install
COPY . .
RUN nest build app
# start the built app (adjust the path to your build output)
CMD ["node", "dist/main.js"]
As a result, you'll get a fully built container with your code merged with the files and code from the separate repo we pull from.
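For a quick local test you can exercise the same Dockerfile with plain docker build; in a GitHub Actions workflow you would pass the token from a repository secret (e.g. secrets.CR_PAT) as a build argument in exactly the same way. A minimal sketch; the image tag and placeholder values are made up:

docker build \
  --build-arg CR_PAT="$CR_PAT" \
  --build-arg github_username=your-github-user \
  --build-arg repo_name=your-private-config-repo \
  -t my-app .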

Related

Dockerfile, RUN git clone does not create required files/folders

English is not my native language, so I apologize in advance for the errors that are very much possible.
I'm new to Docker and Linux in general, and I'm trying to learn right now.
I have a task: I need to create a Dockerfile based on tomcat:9.0-alpine and, in that same file, clone a provided repository. Then I need to build an image from this Dockerfile, run a container, and visit index.html in a browser, where I will see a specific page.
This is how my Dockerfile looks:
FROM tomcat:9.0-alpine
RUN apk update
RUN apk add git
RUN git clone https://github.com/a1qatraineeship/docker_task.git $TOMCAT_HOME/webapps/whateverApp/
When I build an image from this, I see that the repo was cloned into the directory that I specified:
#7 [4/4] RUN git clone https://github.com/a1qatraineeship/docker_task.git $TOMCAT_HOME/webapps/aquaApp/
#7 sha256:72b802c3b98dad7151daeba7db257b7b1f1089dc26fb5809fee52f881e19edb5
#7 0.319 Cloning into '/webapps/whateverApp'...
#7 DONE 1.9s
But when I run the container and go to http://localhost:8888/whateverApp/, I get "404 - Not Found".
If I go to http://localhost:8888, I see the Tomcat default page, so Tomcat is definitely working.
If I bash into the container and go to /webapps, I don't see the folder I specified (whateverApp) there.
What am I missing? Everything seems to work fine and no errors are thrown, yet I don't see the supposedly cloned repository. The examples I saw made no mention of any access restrictions or the like. Even my teacher said the whole Dockerfile would essentially consist of 4 lines, and the only thing I really need to find out is where to clone the repo for everything to work properly. But if it doesn't clone at all, how can I verify that I placed the files in the right place?
The problem is your environment variable: TOMCAT_HOME is empty when the image is built, so the repo is cloned to /webapps at the filesystem root instead of into Tomcat's webapps directory. Define it explicitly:
FROM tomcat:9.0-alpine
ENV TOMCAT_HOME=/usr/local/tomcat
RUN apk update
RUN apk add git
RUN git clone https://github.com/a1qatraineeship/docker_task.git $TOMCAT_HOME/webapps/whateverApp/
and with that ENV it should work. Good luck!
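To check the fix, build and run the image and hit the app path from the question; the image tag here is made up, and the port mapping publishes Tomcat's default 8080 as the 8888 used above:

docker build -t tomcat-task .
docker run -d -p 8888:8080 tomcat-task
# the clone now lands in /usr/local/tomcat/webapps/whateverApp,
# so http://localhost:8888/whateverApp/ serves the repo's index.html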

How to configure docker specific image dependencies which are managed in a different source code repository

How do you configure Docker-specific artifact dependencies that are managed in a different source code repository? My Docker image depends on jar files (say, project-auth) and configuration (say, project-theme) that are actually maintained in a different repository than the Docker image.
What would be the best way to copy these dependencies into a Docker image (say, a project-deploy repo) prior to building the image? I.e., in the above case, project-deploy needs jar files and configuration, which currently need to be mounted as a volume from the current folder.
I don't want these committed, as the dependencies tend to get stale, and I want the Docker image creation to be part of the build process itself.
You can use Docker multi-stage builds for this purpose.
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
For example:
Suppose that the source code for the dependencies is in the repo "https://github.com/demo/demo.git".
Using multi-stage builds, you can create a stage in which you clone the Git repo and create the dependency jar (or anything else you need) at build time.
Finally, you copy the jar into your final image.
# Use any base image. I took centos here
FROM centos:7 as builder
# Install only those packages which are required.
RUN yum install -y maven git \
 && git clone <YOUR_GIT_REPO_URL> /myfolder
WORKDIR /myfolder
# Create the jar at build time. You can update this step according to your project requirements.
RUN mvn clean package
# From here our normal Dockerfile steps starts.
FROM centos:7
# Add all the necessary steps required to build your image
.
.
.
# This is how you copy the jar created in the builder stage above into your final docker image.
COPY --from=builder SOURCE_PATH DESTINATION_PATH
Please refer to the Docker documentation on multi-stage builds to get a better understanding of them.
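For illustration, a filled-in version of the same sketch might look like the following; the demo repo URL comes from the example above, while the jar path, Java package, and final-stage setup are assumptions:

# Stage 1: clone the dependency repo and build the jar
FROM centos:7 as builder
RUN yum install -y maven git \
 && git clone https://github.com/demo/demo.git /myfolder
WORKDIR /myfolder
RUN mvn clean package

# Stage 2: copy only the built artifact into the final image
FROM centos:7
RUN yum install -y java-1.8.0-openjdk-headless
COPY --from=builder /myfolder/target/demo.jar /app/demo.jar
CMD ["java", "-jar", "/app/demo.jar"]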

How can I cache a nix derivation's dependencies when built via Docker?

FROM nixos/nix@sha256:af330838e838cedea2355e7ca267280fc9dd68615888f4e20972ec51beb101d8
# FROM nixos/nix:2.3
ADD . /build
WORKDIR /build
RUN nix-build
ENTRYPOINT /build/result/bin/app
I have the very simple Dockerfile above that can successfully build my application. However, each time I modify any of the files within the application directory (.), it has to rebuild from scratch and download all the Nix store dependencies again.
Can I somehow grab a "list" of the store dependencies downloaded and then add them in at the beginning of the Dockerfile, for the purpose of caching them independently (with the ultimate goal of saving time and bandwidth)?
I'm aware I could build this Docker image using Nix natively, which has its own caching functionality (well, the Nix store), but I'm trying to keep this buildable in a non-Nix environment (hence using Docker).
I suggest splitting the source in two parts. The idea is to create a separate Docker layer with the dependencies only, which changes rarely:
FROM nixos/nix:2.3
ADD ./default.nix /build/default.nix
# if you have any other Nix files, put them in the ./nix subdirectory
ADD ./nix /build/nix
WORKDIR /build
# now let's download all the dependencies
RUN nix-shell --run exit
# At this point, Docker has cached all the dependencies. We can perform the build.
ADD . /build
RUN nix-build
ENTRYPOINT /build/result/bin/app
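With this layering, a rebuild after touching only application sources reuses the cached dependency layer; a rough usage sketch (the image tag is made up):

docker build -t app .   # first build: downloads all Nix store dependencies
# edit application sources (but not default.nix or ./nix), then:
docker build -t app .   # the nix-shell layer is replayed from the Docker cache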

Safe way to include a NuGet private source in a Docker container

I'm setting up a Docker container for my ASP.NET Core server and need a safe way to restore NuGet packages before building and running the project.
I've managed to mount a drive containing a new NuGet.config file created solely for this purpose, as my team doesn't include the config file as part of the Git repository, but it feels wrong.
As the official Docker images for the .NET Core runtime/SDK don't include nuget itself, some have suggested downloading a Windows image just to run nuget source add, but that seems terrible as well.
My Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS base
WORKDIR /app
EXPOSE 5050
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
#Config file needs to be in root of solution or in User/share
RUN dotnet restore "MyProject.csproj"
Adding a private NuGet source should be achievable without downloading a 2 GB Windows image or copying an existing config file that includes the password.
Have a nuget.config file that lists only the package sources, not the credentials, and commit it in your repo with your source code.
Use cross-platform authentication providers to let devs and CI machines authenticate to your private feeds.
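As an illustration of that approach, here is a sketch using the cross-platform Azure Artifacts Credential Provider; it assumes an Azure DevOps feed, and the feed URL, project file, and build-arg name are placeholders:

FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
# install the cross-platform credential provider
# (assumes curl is available in the SDK image; install it first if not)
RUN sh -c "$(curl -fsSL https://aka.ms/install-artifacts-credprovider.sh)"
# pass the feed token as a build argument instead of baking it into nuget.config
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"https://pkgs.dev.azure.com/yourorg/_packaging/yourfeed/nuget/v3/index.json\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
COPY . .
RUN dotnet restore "MyProject.csproj"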
Isn't setting the NuGet source path good enough?
RUN dotnet restore -s https://api.nuget.org/v3/index.json -s https://my-local-private-nuget-source/nuget/nuget

Best practice to cache downloaded resources between builds

I am building a web application that I'd like to deploy as a Docker container. The application depends on a set of assets stored in a separate Git repository. The reason for using a separate repository is that its history is much larger than the current checkout, and we'd like to be able to throw that history away without touching the history of the repository containing the source code.
In the example below, containing only the relevant parts, I'm passing the assets repository's commit ID into the build process using a file:
FROM something:something
# [install Git and stuff]
COPY ["assets_git_id", "/root/"]
RUN git clone --bare git://lala/assets.git /root/assets.git \
 && mkdir -p /srv/app/assets \
 && git --git-dir=/root/assets.git --work-tree=/srv/app/assets checkout $(cat /root/assets_git_id) . \
 && rm -r /root/assets.git
# [set up the rest of the application]
The problem here is that whenever that ID changes, the whole repository is cloned during the build process and most of the data is thrown away.
What is the canonical way to reduce the wasted resources in such a case? Ideally, I'd like access to a directory from inside the container during the build whose contents are kept between multiple runs of the same build. The RUN script could then just update the repository and copy the relevant data from it instead of cloning the whole repository each time.
Use multi-stage builds:
# Stage 1
FROM something:something as GitSource
# [install Git]
RUN git clone --bare git://lala/assets.git /root/assets.git
COPY ["assets_git_id", "/root/"]
RUN git --git-dir=/root/assets.git fetch origin
RUN mkdir -p /srv/app/assets
RUN git --git-dir=/root/assets.git --work-tree=/srv/app/assets checkout $(cat /root/assets_git_id) .
# Stage 2
FROM something:something
COPY --from=GitSource /srv/app/assets /srv/app/assets
# [set up the rest of the application]
The final image discards whatever you do in stage 1, except what is copied into stage 2.
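If BuildKit is available, a cache mount gets even closer to the persistent build-time directory the question asks for; a sketch under that assumption, reusing the question's repo URL and ID file:

# syntax=docker/dockerfile:1
FROM something:something
# [install Git and stuff]
COPY ["assets_git_id", "/root/"]
# /root/assets.git persists in BuildKit's build cache between builds,
# so after the first build only new objects are fetched
RUN --mount=type=cache,target=/root/assets.git \
    (git --git-dir=/root/assets.git fetch origin \
     || git clone --bare git://lala/assets.git /root/assets.git) \
 && mkdir -p /srv/app/assets \
 && git --git-dir=/root/assets.git --work-tree=/srv/app/assets checkout $(cat /root/assets_git_id) .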
