GitLab CI ssh registry login - docker

I have a GitLab project gitlab.com/my-group/my-project which has a CI pipeline that builds an image and pushes it to the project's GitLab registry registry.gitlab.com/my-group/my-project:tag. I want to deploy this image to Google Compute Engine, where I have a VM running docker.
Easy enough to do it manually by ssh'ing into the VM, then docker login registry.gitlab.com and docker run ... registry.gitlab.com/my-group/my-project:tag. Except the docker login command is interactive, which is a no-go for CI. It can accept a username and password on the command line, but that hardly feels like the right thing to do, even if my login info is in a secret variable (storing my GitLab login credentials in a GitLab secret variable?...)
This is the intended workflow on the Deploy stage of the pipeline:
Either install the gcloud tool or use an image with it preinstalled
gcloud compute ssh my-gce-vm-name --quiet --command \
"docker login registry.gitlab.com && docker run registry.gitlab.com/my-group/my-project:tag"
Since the gcloud command would be running within the GitLab CI Runner, it could have access to secret variables, but is that really the best way to log in to the GitLab Registry over ssh from GitLab?

I'll answer my own question in case anyone else stumbles upon it. GitLab creates ephemeral access tokens for each build of the pipeline that give the user gitlab-ci-token access to the GitLab Registry. The solution was to log in as the gitlab-ci-token user in the build.
.gitlab-ci.yml (excerpt):
deploy:
  stage: deploy
  before_script:
    - gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com/my-group/my-project -u gitlab-ci-token -p $CI_BUILD_TOKEN"
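For completeness, a fuller sketch of what the deploy job could look like (the instance name and image tag are placeholders, the image line assumes the job is already authenticated against GCP, and on current GitLab versions the per-job token is exposed as CI_JOB_TOKEN rather than the older CI_BUILD_TOKEN):

deploy:
  stage: deploy
  image: google/cloud-sdk        # or any image with gcloud preinstalled
  script:
    # log in on the VM with the per-job token, then run the freshly built image
    - gcloud compute ssh my-instance-name --quiet --command "docker login registry.gitlab.com -u gitlab-ci-token -p $CI_JOB_TOKEN && docker run -d registry.gitlab.com/my-group/my-project:tag"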

The docker login command creates a local configuration file at $HOME/.docker/config.json in which your credentials are stored; it looks like this (also see the documentation on this):
{
  "auths": {
    "<registry-url>": {
      "auth": "<credentials>"
    }
  }
}
As long as the config.json file is present on your host and your credentials (which in this case are simply stored as base64("<username>:<password>")) do not change, there is no need to run docker login on every build or to store your credentials as variables for your CI job.
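To illustrate the format of that auth value, using made-up credentials deploy / s3cret:

echo -n "deploy:s3cret" | base64
# -> ZGVwbG95OnMzY3JldA==   (the value that would appear in the auth field for this registry)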
My suggestion would be to simply ensure that the config.json file is present on your target machine (either by running docker login once manually or by deploying the file using whatever configuration management tool you like). This saves you from handling the login and managing credentials within your build pipeline.
Regarding the SSH login per se: this should work just fine. If you really want to eliminate the SSH login, you could set up the Docker engine on your target machine to listen on an external socket, configure authentication and encryption using TLS client certificates as described in the official documentation, and talk to the remote server's Docker API directly from within the build job:
variables:
  DOCKER_HOST: "tcp://<target-server>:2376"
  DOCKER_TLS_VERIFY: "1"
script:
  - docker run registry.gitlab.com/my-group/my-project:tag
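The job also needs the client certificates somewhere the Docker CLI can find them. A minimal sketch, assuming the CA certificate, client certificate and client key are kept in CI variables named TLS_CA_CERT, TLS_CLIENT_CERT and TLS_CLIENT_KEY (the variable names are illustrative; ca.pem, cert.pem and key.pem are the file names Docker expects under DOCKER_CERT_PATH):

variables:
  DOCKER_HOST: "tcp://<target-server>:2376"
  DOCKER_TLS_VERIFY: "1"
  DOCKER_CERT_PATH: "$CI_PROJECT_DIR/docker-certs"
before_script:
  - mkdir -p "$DOCKER_CERT_PATH"
  # write the certificates from CI variables to the path the Docker CLI reads them from
  - echo "$TLS_CA_CERT" > "$DOCKER_CERT_PATH/ca.pem"
  - echo "$TLS_CLIENT_CERT" > "$DOCKER_CERT_PATH/cert.pem"
  - echo "$TLS_CLIENT_KEY" > "$DOCKER_CERT_PATH/key.pem"
script:
  - docker run registry.gitlab.com/my-group/my-project:tag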

We had the same "problem" on other hosting providers. Our solution is to use a custom script that runs on the target machine and can be called via a REST API endpoint (secured by Basic Auth or whatever you prefer).
That way you can simply trigger the remote host to do the docker login and upgrade your service without granting SSH access from GitLab CI.
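A rough sketch of what triggering such an endpoint could look like from the pipeline, assuming a hypothetical /hooks/deploy endpoint on the target host and Basic Auth credentials stored in a CI variable named DEPLOY_HOOK_AUTH:

deploy:
  stage: deploy
  script:
    # the script behind the endpoint performs the docker login / pull / run on the host itself
    - curl --fail -X POST -u "$DEPLOY_HOOK_AUTH" "https://my-host.example.com/hooks/deploy?image=registry.gitlab.com/my-group/my-project:tag"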

Related

Pulling Docker image from private protected repository within bash script

I am trying to write a bash script to automate the setup of a multi-container environment.
Each container is built from images pulled from a private protected repository.
The problem is that when the script runs docker-compose up for the first time, access to the repository is denied, as if it does not know that I have already done docker login before running the script.
If I docker pull an image manually, that image no longer causes a problem when the script builds its container. But when the script has to docker pull on its own from a Dockerfile definition, it gets access denied.
Considering that I would like this script to be portable to other devs' environments, how can I get it to access the repository using the credentials each dev will already have set on their computer with docker login?
You can do something like:
#!/bin/bash
cat ~/pwd.txt | docker login <servername> -u <username> --password-stdin
docker pull <servername>/<image>:<tag>
This reads the password from pwd.txt and logs in to the specified server.
In case you have multiple servers you want to log in to, you can try:
#!/bin/bash
serverlist="server1.com server2.com"
for server in $serverlist; do
  cat ~/${server}_pwd.txt | docker login $server -u <username> --password-stdin
done
docker pull <servername>/<image>:<tag>
This reads the passwords from files like server1.com_pwd.txt.
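If the goal is to reuse whatever credentials each developer has already stored with docker login, one option is to prompt for a login only when the registry has no entry yet. A rough sketch (the registry hostname is a placeholder, and the check does not cover credential-helper setups):

#!/bin/bash
registry="registry.example.com"   # placeholder for your private registry

# docker login stores credentials in ~/.docker/config.json;
# only prompt for a login if that registry has no entry there yet
if ! grep -q "$registry" ~/.docker/config.json 2>/dev/null; then
  docker login "$registry"
fi

docker-compose up -d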

Accessing Stored Jenkins credentials from Docker container

I am trying to run a Google Cloud CLI command in a Jenkins pipeline:
gcloud auth activate-service-account --key-file=user.json
I am currently using the Google Cloud SDK Docker image.
I have the private key stored as a credential on the Jenkins server; when I run the command directly on the agent I can authenticate to the account. Now I want to run the command inside a Docker container.
How can I access the private key stored in Jenkins from the Docker container?
I tried to access it directly and got the following error message:
ERROR: gcloud crashed (ValueError): No key could be detected.
Any assistance would be helpful.
I use a scripted pipeline.
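One common pattern is to store the key as a "Secret file" credential and bind it with withCredentials around the containerized step. A rough sketch for a scripted pipeline, assuming a hypothetical credential ID gcp-sa-key and the google/cloud-sdk image:

node {
    // 'gcp-sa-key' is a hypothetical "Secret file" credential ID in Jenkins
    withCredentials([file(credentialsId: 'gcp-sa-key', variable: 'GC_KEY')]) {
        docker.image('google/cloud-sdk').inside {
            // the workspace (and its @tmp directory holding the bound file) is mounted into the container
            sh 'gcloud auth activate-service-account --key-file="$GC_KEY"'
        }
    }
}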

Pulling from google container registry in Jenkins scripted pipeline on compute engine vm

I've set up Jenkins on a Google Cloud Compute Engine VM. Docker is installed, and I'm successfully using a scripted pipeline to pull and run public Docker images. I can't seem to pull from Google Container Registry though, and I can't find any examples of how to do this in a scripted pipeline. Here's my Jenkinsfile:
node {
    checkout scm
    docker.image('mysql:5.7').withRun('--env MYSQL_DATABASE=my_db --env MYSQL_ROOT_PASSWORD=password -p 3306:3306') { c ->
        docker.image('mysql:5.7').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        /* Fails here */
        docker.image('gcr.io/my-project/my-image').withRun("--link ${c.id}:db --env MYSQL_HOST=localhost --env MYSQL_USER=root --env MYSQL_PWD=password --env MYSQL_DB=my_db --network=host")
    }
}
It seems like since I'm on a Compute Engine VM, there shouldn't need to be any credential configuration for Jenkins (clearly I'm wrong). I've run gcloud auth configure-docker on the VM, and I can easily ssh in and pull the image I want from gcr.io with a simple docker pull. When Jenkins tries to pull, though, I get Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
Any ideas?
Edit:
I've discovered that wrapping my pull step with docker.withRegistry() works, but this requires me to add my gcloud credentials via the Jenkins interface. It seems strange that I need to do this, since Jenkins is already running on a Compute Engine VM that has the correct auth and Docker correctly configured to pull from gcr.io. Is there some special way Jenkins (or the Docker Pipeline plugin) runs Docker such that it somehow doesn't have the same authentication Docker has when run manually on the VM?
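For reference, the workaround looked roughly like this (the credentials ID is a placeholder for whatever was registered in Jenkins):

docker.withRegistry('https://gcr.io', 'my-gcr-credentials-id') {
    docker.image('gcr.io/my-project/my-image').pull()
}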
Cracked this, and it was a bit silly. While I did indeed set up auth correctly for my own user on the VM, I did not do this for the jenkins user. After ssh-ing into the VM, I needed to do:
sudo su jenkins
gcloud auth configure-docker
This adds the gcloud config for Docker to the jenkins user's home directory. Then you have no need for withRegistry or any additional Jenkins credential configuration. Nice and clean if you are doing this on a VM.
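For reference, gcloud auth configure-docker essentially registers gcloud as a Docker credential helper in that user's ~/.docker/config.json, roughly like this (the exact list of registries may differ by SDK version):

{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}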
It looks like you're running into some auth issues with Jenkins & Docker on a GCE VM.
This document may help [1]; also, have you had a chance to look into a credential helper [2]?
[1] https://googleapis.dev/python/google-api-core/latest/auth.html#using-google-compute-engine
[2] https://cloud.google.com/container-registry/docs/advanced-authentication#helpers

Is it possible to use Access Tokens to login & push images to Docker Hub from Travis CI?

This is my .travis.yml
sudo: required
services:
  - docker
....
....
# login to docker
- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
- docker push <username>/<image-name>
Instead of using my Docker Hub password, I generated an Access Token at https://hub.docker.com/settings/security and set them up in Travis CI like so:
(Screenshot: Travis CI environment variables)
However, I get the following output in my build.
denied: requested access to the resource is denied
Turns out this is possible. Docker just re-uses the "password" mechanics for the access token, which seems misleading and inconsistent with similar types of tools.
From the Docker documentation:
At the password prompt, enter the personal access token.
For Travis CI specifically, expose your username and your access token as environment variables (DOCKER_ID and DOCKER_PASSWORD in the config above) and let docker login read the token from the password variable.
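If you prefer keeping the values in the repository rather than in the web UI, they can also be added as encrypted variables with the Travis CLI (variable names matching the ones used in the login command above; the values are placeholders):

travis encrypt DOCKER_ID=<your-username> --add env.global
travis encrypt DOCKER_PASSWORD=<your-access-token> --add env.global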

Docker in Docker unable to push

I'm trying to execute docker commands inside of a Docker container (don't ask why). To do so I start up a container by running:
sudo docker run -v /var/run/docker.sock:/var/run/docker.sock -it my_docker_image
I am able to run all of the docker commands (pull, login, images, etc.) but when I try to push to my remote (GitLab) registry I get denied access. Yes, I did do a docker login and was able to successfully log in.
When looking at the GitLab logs I see an error telling me no access token was sent with the push. After I do a docker login I see a /root/.docker/config.json with the remote URL and a string of random characters (my credentials in base64, I believe). I'm using an access token as my password because I have MFA enabled on my GitLab server.
Appreciate the help!
I ended up resolving the issue by using docker:stable as my runner image. Not quite sure what the problem was with the centos:centos7 image.
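For reference, a minimal GitLab CI job along those lines (assumes the runner either binds the Docker socket or provides a docker:dind service; the image tag is illustrative):

build:
  image: docker:stable
  stage: build
  script:
    # log in with the per-job token, then build and push to the project's registry
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"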
