What is the proper way to handle conan profiles in a container? - docker

I want to pack a service in a container. The service is built with Conan. How would you put the Conan profiles (which live in a separate repo from the service) into ~/.conan/profiles? I suppose a git clone inside the container is not the best solution (private SSH keys etc.). Copying a profile from the host into the container is not ideal either (the profile would have to sit inside the service folder and then be moved to the container's ~/.conan/profiles).
Is pinning conan profile repo as a submodule to the service the only option?
service
- conan-profiles
- rest of the service files
And in the Dockerfile
COPY conan-profiles/ /root/.conan/profiles/
COPY . /service
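Putting the pieces together, a fuller Dockerfile for the submodule layout in the question might look like the sketch below. The base image and profile name are assumptions for illustration, not from the question:

```dockerfile
# Hypothetical base image with Conan preinstalled -- substitute whatever
# toolchain image the service actually builds against.
FROM conanio/gcc11

# The conan-profiles submodule is checked out alongside the service files,
# so a plain COPY puts it where Conan looks for profiles.
COPY conan-profiles/ /root/.conan/profiles/
COPY . /service
WORKDIR /service

# "linux-release" is an illustrative profile name from the copied repo.
RUN conan install . --profile=linux-release
```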

Related

Using a Docker Image to Provide Static File Content

I thought I had seen examples of this before, but cannot find anything in my Docker-tagged bookmarks or in the Docker in Action book.
The system I am currently working on has a Docker container providing an nginx webserver, and a second container providing a Java application server. The Java app-server also handles the static content of the site (HTML, JS, CSS). Because of this, the building and deployment of changes to the non-Java part is tightly coupled to the app-server. I am trying to separate it from that.
My goal is to be able to produce a third container that provides only the (static) web-application files, something that can be easily updated and swapped out as needed. I have been reading up on Docker volumes, but I'm not 100% certain that is the solution I need here. I simply want a container with a number of files, that provides these files at some given mount-point to the nginx container, in a read-only fashion.
The rough steps would be:
Start with a node.js image
Copy the content from the local instance of the git repo
Run yarn install and yarn build on the content, creating a build/ directory in the process
Copy this to somewhere stand-alone
Result in a container that contains only the results of the "build" step
Is this something that can be done? Or is there a better way to architect this in terms of the Docker ecosystem?
A Docker container fundamentally runs a program; it's not just a collection of files. Probably the thing you want to do with these files is serve them over HTTP, and so you can combine a multi-stage build with a basic Nginx server to do this. A Dockerfile like this would be fairly routine:
FROM node AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY ./ ./
RUN yarn build
FROM nginx
COPY --from=build /app/dist/ /usr/share/nginx/html/
This could be its own container, or it could be the same Nginx you have as a reverse proxy already.
Other legitimate options include combining this multi-stage build with your Java application container, so you have a single server but the asset build pipeline is in Docker; or running this build sequence completely outside of Docker and uploading it to some sort of external service that can host your static files (Amazon S3 can work for this, if you're otherwise in an AWS environment).
Docker volumes are not a good match for something you would build and deploy. Uploading content into a named volume can be tricky (more so if you eventually move this to Kubernetes) and it's more straightforward to version an image than a collection of files in a volume.

VScode remote-container extension to docker container - build results root

I am using an Ubuntu host (22.04) which runs a Docker container in which I defined my build environment (compiler, toolchain, USB devices). I created a volume share so that I can access the git repo on my host from inside the container.
The problem is that when I compile a project and then need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (the only user in my Docker environment). Thus, I need to chmod specific files before I can access them on my host, which is annoying.
I tried to run the Docker image with a user name, but then VScode is no longer able to install anything when it connects to the container.
Is there a way to get an active user in my container, and still allow VScode remote-container to install extensions on connecting to the container? Or is there a better way to avoid chmodding all build results?
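For reference, one commonly used route is to let the Dev Containers extension switch to a non-root user via devcontainer.json. This is only a sketch: the user name "builder" is an assumption and must match a user actually created in the Dockerfile.

```json
{
  "name": "build-env",
  "build": { "dockerfile": "Dockerfile" },
  // Run as this (non-root) user inside the container; VS Code then
  // installs its server and extensions for this user instead of root.
  "remoteUser": "builder",
  // On Linux, remap the container user's UID/GID to the host user's,
  // so build artifacts in the shared volume are owned by the host user.
  "updateRemoteUserUID": true
}
```

With the UID remapped, files written into the mounted repo no longer need chmodding on the host.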

How to install git SSH keys in a docker container to access gitlab

This is my use-case. I have a node application with a lot of dependencies. One of the dependencies comes from another git repo. When I try to build the container it fails, for obvious reasons, since it does not have the SSH keys to access that repository. What is the best way to pull the repository and build the Docker container?
Method #1: Put username/password in the repository URL:
git clone https://username:password@example.com/username/repository.git
Method #2: Copy the SSH key and related config file in Dockerfile:
# In Dockerfile
COPY sshkey /root/.ssh/sshkey
COPY sshconfig /root/.ssh/sshconfig
Method #3: Bind-mount the SSH key and related config file when running the container:
docker run -v "$(pwd)/sshkey":/root/.ssh/sshkey -v "$(pwd)/sshconfig":/root/.ssh/sshconfig ...
Be careful of any potential security risks.
Use a volume to "copy" the ssh keys to the place where node will look for them during the build process within the container.

Path interpretation in a Dockerfile

I want to run a container, by mounting on the fly my ~/.ssh path (so as to be able to clone some private gitlab repositories).
The
COPY ~/.ssh/ /root/.ssh/
directive did not work out, because the Dockerfile interprets paths relative to the build context, which is staged in a tmp dir it creates for the build, e.g.
/var/lib/docker/tmp/docker-builder435303036/
So my next shot was to try and take advantage of the ARG directive as follows:
ARG CURRENTUSER
COPY /home/$CURRENTUSER/.ssh/ /root/.ssh/
and run the build as:
docker build --build-arg CURRENTUSER=pkaramol <whatever follows ...>
However, I am still faced with the same issue:
COPY failed: stat /var/lib/docker/tmp/docker-builder435303036/home/pkaramol/.ssh: no such file or directory
1: How to make Dockerfile access a specific path inside my host?
2: Is there a better pattern for accessing private git repos from within ephemeral running containers, than copying my .ssh dir? (I just need it for the build process)
Docker Build Context
A build for a Dockerfile can't access paths outside the "build context" directory. This is the last argument to docker build, normally ".". The docker build command tars up the build context and sends it to the Docker daemon, which builds the image from it. Only files within the build context can be referenced in the build. To include a user's .ssh directory, you would need to base the build in the .ssh directory itself, or in a parent directory such as /home/$USER.
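The tar behaviour can be demonstrated without Docker at all. The sketch below (temporary paths are illustrative) simulates what docker build sends to the daemon for two different context directories:

```shell
# Simulate the build context: docker build tars up the context directory,
# so only files under it are visible to COPY/ADD.
mkdir -p /tmp/ctxdemo/home/user/.ssh /tmp/ctxdemo/project
echo "dummy-key" > /tmp/ctxdemo/home/user/.ssh/id_rsa
echo "FROM scratch" > /tmp/ctxdemo/project/Dockerfile

# Context = ./project: the tarball contains only the project files,
# so the .ssh directory can never be COPY'd from this context.
tar -cf /tmp/context.tar -C /tmp/ctxdemo/project .

# Context = the parent directory: now the .ssh dir is inside the tarball,
# and a "COPY home/user/.ssh/ /root/.ssh/" would work.
tar -cf /tmp/context2.tar -C /tmp/ctxdemo .
```

Listing the two tarballs shows that id_rsa only appears in the second one, which is exactly why basing the context at a parent directory makes the file reachable.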
Build Secrets
COPYing or ADDing credentials at build time is a bad idea, as the credentials are saved in the image layers for anyone with access to the image to see. There are a couple of caveats: you can flatten the image layers after removing the sensitive files during the build, or use a multi stage build (17.05+) that copies only non-sensitive artefacts into the final image.
Using ENV or ARG is also bad as the secrets will end up in the image history.
There is a long and involved GitHub issue about secrets that covers most of the variations on the idea. It's long, but worth reading through the comments.
The two main solutions are to obtain secrets via the network or a volume.
Volumes are not available in standard builds, so that makes them tricky.
Docker has added secrets functionality, but it is only available at container run time for swarm-based services.
Network Secrets
Custom
The secrets GitHub issue has a neat little netcat example:
nc -l 10.8.8.8 8080 < $HOME/.ssh/id_rsa &
Then, in a single RUN step in the Dockerfile, use curl to collect the key, use it, and remove it:
RUN set -uex; \
    curl -s http://10.8.8.8:8080 > /root/.ssh/id_rsa; \
    chmod 600 /root/.ssh/id_rsa; \
    ssh -i /root/.ssh/id_rsa root@wherever priv-command; \
    rm /root/.ssh/id_rsa;
To make unsecured network services accessible, you might want to add an alias IP address to your loopback interface so your build container or local services can access it, but no one external can.
HTTP
Simply running a web server with your keys mounted could suffice.
docker run -d \
-p 10.8.8.8:80:80 \
-v /home/me/.ssh:/usr/share/nginx/html:ro \
nginx
You may want to add TLS or authentication depending on your setup and security requirements.
Hashicorp Vault
Vault is a tool built specifically for managing secrets, and it goes well beyond the requirements of a Docker build. It's written in Go and is also distributed as a container.
Build Volumes
Rocker
Rocker is a custom Docker image builder that extends Dockerfiles with some new functionality. The MOUNT directive it adds allows you to mount a volume at build time.
Packer
The Packer Docker Builder also allows you to mount arbitrary volumes at build time.

Private Github repositories in dockerized rails application during build

I dockerized a new Rails 5 app with docker-compose.yml and I'm forwarding my ssh-agent socket into the container within the compose file.
If I build and run via docker-compose this is working fine, I can access the ssh key.
However, if I add bundle install to the build process, which fetches from private Git repositories and needs the SSH key, it's of course not yet available.
How can I solve this?
My current Dockerfile and docker-compose.yml files are:
https://gist.github.com/solars/d9ffbc4c570e9a128d6b0254268d785a
Thank you!
You need an ~/.ssh/id_rsa in the image to clone a private repo during the build.
One way: copy your own id_rsa into the Docker image (into ~/.ssh/).
Another way: create a separate, temporary id_rsa (for example, generate it inside a Docker container and copy the resulting .ssh folder to your local machine). Have the Dockerfile copy this new .ssh folder into the image (at ~/) every time it builds. Then add the new .ssh/id_rsa.pub to your GitHub account: Settings -> SSH and GPG keys -> New SSH key.
How it works: each time you build an image, the Dockerfile copies the same .ssh folder into it, so the id_rsa stays the same, and its id_rsa.pub has already been added to your GitHub account. You can therefore always clone your private repo from your Docker container.
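The "temporary key" variant can be sketched as below. The key path is illustrative; the generated .pub file is what gets pasted into the GitHub account settings, while the private half is what the Dockerfile copies into the image's ~/.ssh/:

```shell
# Generate a dedicated deploy key used only for image builds, so the
# personal key never enters the image. (Path /tmp/deploy_key is
# illustrative; -N "" gives an empty passphrase for non-interactive use.)
rm -f /tmp/deploy_key /tmp/deploy_key.pub
ssh-keygen -t rsa -b 2048 -N "" -q -f /tmp/deploy_key

# /tmp/deploy_key.pub goes into GitHub: Settings -> SSH and GPG keys.
# /tmp/deploy_key is the file the Dockerfile copies to ~/.ssh/id_rsa
# so that bundle install can clone the private repositories.
cat /tmp/deploy_key.pub
```

Because the key only grants access to the repositories it is registered for, leaking it in an image layer is less damaging than leaking a personal key, though the earlier warnings about baking credentials into images still apply.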
