Private GitHub repositories in a dockerized Rails application during build - ruby-on-rails

I dockerized a new Rails 5 app with docker-compose.yml and I'm forwarding my ssh-agent socket into the container within the compose file.
If I build and run via docker-compose this is working fine, I can access the ssh key.
However, if I add bundle install to the build process, which fetches from private Git repositories and needs the SSH key, it's of course not yet available.
How can I solve this?
My current Dockerfile and docker-compose.yml files are:
https://gist.github.com/solars/d9ffbc4c570e9a128d6b0254268d785a
Thank you!

You need ~/.ssh/id_rsa to clone private repo into docker image.
One way: copy your own id_rsa into the docker image (at ~/.ssh/).
Another: create a separate, dedicated id_rsa (for example, generate it in a docker container and copy the resulting .ssh folder back to your local machine). Copy that .ssh folder into the image (at ~/) every time you build the image with your Dockerfile. Then add the new .ssh/id_rsa.pub to your GitHub account under Settings -> SSH and GPG keys -> New SSH key.
Why this works: every time you build a new image, the Dockerfile copies the same .ssh folder into it, so the id_rsa stays the same and its id_rsa.pub is already registered with your GitHub account. You can therefore always clone your private repo from your docker container.
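A minimal Dockerfile sketch of this approach (the key location, base image, and gem files are illustrative; note that a key baked in this way remains readable in the image layers, so use a low-privilege deploy key):

```dockerfile
FROM ruby:2.5

# copy the dedicated deploy key generated for this image
COPY .ssh/id_rsa /root/.ssh/id_rsa
# SSH refuses keys with loose permissions, and pre-trusting
# github.com avoids an interactive host-key prompt during the build
RUN chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts

# bundle install can now fetch gems from private Git repositories
COPY Gemfile Gemfile.lock ./
RUN bundle install
```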

Related

What is the proper way to handle conan profiles in a container?

I want to pack a service in a container. The service is built with conan. How would you put the conan profiles (a separate repo from the service) into ~/.conan/profiles? I suppose a git clone inside the container is not the best solution (private SSH keys, etc.). Copying the conan profiles from the host to the container is not ideal either (the profiles would have to live inside the service folder and then be moved to the container's ~/.conan/profiles).
Is pinning conan profile repo as a submodule to the service the only option?
service
- conan-profiles
- rest of the service files
And in the Dockerfile
COPY conan-profiles/ /root/.conan/profiles/
COPY . /service

How to install git SSH keys in a docker container to access gitlab

This is my use-case. I have a node application with a lot of dependencies. One of the dependencies is from another git repo. When I try to build the container, it fails for obvious reasons, since it does not have the SSH keys to access the repository. What is the best way to pull the repository and build the docker container?
Method #1: Put username/password in the repository URL:
git clone https://username:password@example.com/username/repository.git
Method #2: Copy the SSH key and related config file in Dockerfile:
# In Dockerfile
COPY sshkey /root/.ssh/sshkey
COPY sshconfig /root/.ssh/sshconfig
Method #3: Bind-mount the SSH key and related config file when running the container:
docker run -v /path/to/sshkey:/root/.ssh/sshkey -v /path/to/sshconfig:/root/.ssh/sshconfig ...
Be careful of any potential security risks.
Use a volume to "copy" the ssh keys to the place where node will look for them during the build process within the container.
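For example (paths and the image name are illustrative; the :ro flag keeps the keys read-only inside the container):

```shell
docker run --rm \
  -v "$HOME/.ssh/id_rsa:/root/.ssh/id_rsa:ro" \
  -v "$HOME/.ssh/config:/root/.ssh/config:ro" \
  mynodeimage npm install
```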

Path interpretation in a Dockerfile

I want to run a container, by mounting on the fly my ~/.ssh path (so as to be able to clone some private gitlab repositories).
The
COPY ~/.ssh/ /root/.ssh/
directive did not work out, because the Dockerfile interpreted paths relative to a tmp dir it creates for the builds, e.g.
/var/lib/docker/tmp/docker-builder435303036/
So my next shot was to try and take advantage of the ARGS directive as follows:
ARG CURRENTUSER
COPY /home/$CURRENTUSER/.ssh/ /root/.ssh/
and run the build as:
docker build --build-arg CURRENTUSER=pkaramol <whatever follows ...>
However, I am still faced with the same issue:
COPY failed: stat /var/lib/docker/tmp/docker-builder435303036/home/pkaramol/.ssh: no such file or directory
1: How to make Dockerfile access a specific path inside my host?
2: Is there a better pattern for accessing private git repos from within ephemeral running containers, than copying my .ssh dir? (I just need it for the build process)
Docker Build Context
A build for a Dockerfile can't access specific paths outside the "build context" directory. This is the last argument to docker build, normally .. The docker build command tars up the build context and sends it to the Docker daemon to build the image from. Only files that are within the build context can be referenced in the build. To include a user's .ssh directory, you would need to either base the build in the .ssh directory, or in a parent directory like /home/$USER.
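For example, assuming the project lives in ~/myproject, you could base the build in your home directory and point at the Dockerfile with -f, so that both the project and ~/.ssh are inside the context (this sends your whole home directory to the daemon, so a .dockerignore is advisable):

```shell
cd "$HOME"
docker build -f myproject/Dockerfile -t myimage .
# inside the Dockerfile, paths are now relative to $HOME:
#   COPY .ssh/ /root/.ssh/
#   COPY myproject/ /app/
```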
Build Secrets
COPYing or ADDing credentials at build time is a bad idea, as the credentials are saved in the image's layers for anyone with access to the image to see. There are a couple of exceptions: you can flatten the image layers after removing the sensitive files during the build, or use a multi-stage build (17.05+) that copies only non-sensitive artefacts into the final image.
Using ENV or ARG is also bad as the secrets will end up in the image history.
There is a long and involved github issue about build secrets that covers most of the variations on the idea. It's long, but worth reading through the comments.
The two main solutions are to obtain secrets via the network or a volume.
Volumes are not available in standard builds, so that makes them tricky.
Docker has added secrets functionality, but this is only available at container run time for swarm-based containers.
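Since this answer was written, BuildKit (Docker 18.09+) has added build-time secret and SSH agent mounts, which largely solve this problem; a sketch, with a placeholder repo URL:

```dockerfile
# syntax=docker/dockerfile:1
FROM ruby:2.5
# trust github.com's host key so the clone does not prompt
RUN mkdir -p -m 0700 /root/.ssh && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts
# the host's ssh-agent is available only during this RUN step;
# no key material is written into the image layers
RUN --mount=type=ssh \
    git clone git@github.com:you/private-repo.git /app
```

The image is then built with DOCKER_BUILDKIT=1 docker build --ssh default . so the host's running ssh-agent is forwarded into the build.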
Network Secrets
Custom
The secrets github issue has a neat little netcat example.
nc -l 10.8.8.8 8000 < $HOME/.ssh/id_rsa &
And using curl to collect it in the Dockerfile, use it, and remove it, all in the one RUN step:
RUN set -uex; \
curl -s http://10.8.8.8:8000 > /root/.ssh/id_rsa; \
chmod 600 /root/.ssh/id_rsa; \
ssh -i /root/.ssh/id_rsa root@wherever priv-command; \
rm /root/.ssh/id_rsa;
To keep such an unsecured network service from being reachable externally, you might want to add an alias IP address to your loopback interface, so that your build container or local services can access it but no one external can.
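On Linux that alias can be added like this (10.8.8.8 matching the address used above; requires root):

```shell
# add 10.8.8.8 as an alias on the loopback interface
sudo ip addr add 10.8.8.8/32 dev lo
# on macOS the equivalent is: sudo ifconfig lo0 alias 10.8.8.8
```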
HTTP
Simply running a web server with your keys mounted could suffice.
docker run -d \
-p 10.8.8.8:80:80 \
-v /home/me/.ssh:/usr/share/nginx/html:ro \
nginx
You may want to add TLS or authentication depending on your setup and security requirements.
Hashicorp Vault
Vault is a tool built specifically for managing secrets. It goes beyond the requirements of a Docker build. It's written in Go and is also distributed as a container.
Build Volumes
Rocker
Rocker is a custom Docker image builder that extends Dockerfiles to support some new functionality. The MOUNT command they added allows you to mount a volume at build time.
Packer
The Packer Docker Builder also allows you to mount arbitrary volumes at build time.

how to copy dir from remote host to docker image

I am trying to build a docker image and I have a Dockerfile with all the necessary commands, but during the build I need to copy one directory from a remote host into the image. If I put an scp command in the Dockerfile, I would also have to put the password in the Dockerfile, which I don't want to do.
Does anyone have a better solution for this? Any suggestion would be appreciated.
I'd say there are at least two options for dealing with that:
Option 1:
If you can execute scp before running docker build this may turn out to be the easiest option:
Run scp -r somewhere:remote_dir ./local_dir
Add COPY ./local_dir some_path to your Dockerfile
Run docker build
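The three steps above might look like this (the host, paths, and image name are placeholders):

```shell
# 1. copy the directory from the remote host into the build context
scp -r user@remote-host:/remote/dir ./local_dir
# 2. the Dockerfile then contains: COPY ./local_dir /opt/app/dir
# 3. build as usual
docker build -t myimage .
```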
Option 2: If you have to execute scp during the build:
Start some key-value store such as etcd before the build
Place a correct SSH key (it cannot be password-protected) temporarily in the key-value store
Within a single RUN command (to avoid leaving secrets inside the image):
retrieve the SSH key from the key-value store;
put it in ~/.ssh/id_rsa or start an ssh-agent and add it;
retrieve the directory with scp
remove the SSH key
Remove the key from the key-value store
The second option is a bit convoluted, so it may be worth creating a wrapper script that retrieves the required secrets, runs any command, and removes the secrets.
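Such a wrapper might be sketched as follows (the etcd key name and paths are hypothetical; the trap guarantees cleanup even if the wrapped command fails):

```shell
#!/bin/sh
# with-build-key: fetch an SSH key from etcd, run a command, always clean up
set -eu
KEY_FILE="$HOME/.ssh/build_key"

cleanup() {
    rm -f "$KEY_FILE"                      # remove the local copy of the key
    etcdctl del /build/ssh-key || true     # remove it from the key-value store
}
trap cleanup EXIT

etcdctl get /build/ssh-key --print-value-only > "$KEY_FILE"
chmod 600 "$KEY_FILE"

"$@"    # e.g. scp -i "$KEY_FILE" -r somewhere:remote_dir ./local_dir
```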
You can copy a directory into an (even running) container after build time:
On the remote machine, copy from the remote host to the docker host with:
scp -r /your/dir/ <user-at-docker-host>@<docker-host>:/some/remote/directory
On the docker machine, copy from the docker host into the docker container:
docker cp /some/remote/directory <container-name>:/some/dir/within/docker/
Of course you can also do step 1 from your docker machine if you prefer, by simply adapting the source and target of the scp command.
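The two steps can also be combined into a single pipeline with no intermediate copy, since docker cp accepts a tar stream on stdin (the host and container names are placeholders):

```shell
# stream the remote directory as a tar archive straight into the container
ssh user@remote-host tar -C /your/dir -cf - . \
  | docker cp - <container-name>:/some/dir/within/docker/
```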

docker private image can not be pulled in centos7

I created a docker image and pushed it to Docker Hub, then changed it to private.
On my Mac, I can pull it after running "docker login" and entering all the info.
But on CentOS 7 (a VM), this no longer works: the private repository cannot be found. I have to change the repo from private to public before I can pull the image.
Why does this happen? What do I need to do to pull a private repository from Docker Hub?
Thanks
Create a new file named .netrc:
# vim .netrc
machine github.com
login <your github token>
Add those two lines, passing in your GitHub token.
Then copy the .netrc file into the container by including this line in the Dockerfile; it passes the credentials into the docker container and lets you pull more than one private repository:
COPY .netrc /root/
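A quick way to create the file with the permissions git and curl expect (the token value is a placeholder you must replace):

```shell
# write a .netrc that git (via curl) will consult for github.com over HTTPS
cat > "$HOME/.netrc" <<'EOF'
machine github.com
login <your github token>
EOF
chmod 600 "$HOME/.netrc"   # keep credentials readable only by the owner
```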
