Passing an auth key to a container registry in Jenkins Job Builder

I am trying to push a built docker container to a private registry and am having difficulty understanding how to pass the key safely and securely. I am able to successfully connect and push my container if I "build with parameters" in the Jenkins UI and just paste in my key.
This is my YAML file; my templates take care of most other things:
- project:
    name: 'merge-monitor'
    github_project: 'merge-monitor'
    value_stream: 'enterprise'
    hipchat_rooms:
      - ''
    defaults: clojure-project-var-defaults
    docker_registry: 'private'
    jobs:
      - '{value_stream}_{name}_docker-build': # build docker images
          wrappers:
            - credentials-binding:
                - text:
                    credential-id: our-credential-id
                    variable: DOCKER_REGISTRY_PASSWORD
I have read through the docs, and maybe I am missing something about credentials-binding, but I thought I simply had to reference the secret I had saved in Jenkins by its credential ID and pass it as a variable to be used as my password.
Thank you in advance for the help

The issue here turned out to be completely different from what I was searching for. We simply needed to grant our worker user push permissions within our own container registry before it could push.
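For reference, once the binding works, the injected variable is typically consumed in a shell builder roughly like this (the registry URL and user name below are placeholders, not values from the original job):
builders:
  - shell: |
      # Log in using the password injected by the credentials-binding wrapper
      echo "$DOCKER_REGISTRY_PASSWORD" | docker login my.private.registry.example.com \
          --username our-ci-user --password-stdin
      docker push my.private.registry.example.com/merge-monitor:latest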

Related

Bitbucket auto deploy to Linux server (DigitalOcean Droplet)

I have encountered a problem while attempting to deploy my code to a Droplet server (running Ubuntu) using BitBucket Pipeline.
I have set the necessary environment variables (SSH_PRIVATE_KEY, SSH_USER, SSH_HOST) and added the public key of the SSH_PRIVATE_KEY to the ~/.ssh/authorized_keys file on the server. When I manually deploy from the server, there are no issues with cloning or pulling. However, during the automatic CI deployment stage, I am encountering the error shown in the attached image.
This is my .yml configuration.
Thanks for your help in advance.
To refer to the values of the variables defined in the configuration, your script should use $VARIABLE_NAME, not VARIABLE_NAME, as the latter is treated as a literal string.
- pipe: atlassian/ssh-run:latest
  variables:
    SSH_KEY: $SSH_KEY
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"
Also, note that some pitfalls exist when using an explicit $SSH_KEY; it is generally easier and safer to use the default key provided by Bitbucket, see Load key "/root/.ssh/pipelines_id": invalid format
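For context, a minimal bitbucket-pipelines.yml deployment step wrapping this pipe might look like the following (the branch name and deployment environment are assumptions, not taken from the original configuration):
pipelines:
  branches:
    main:
      - step:
          name: Deploy to dev
          deployment: test
          script:
            # SSH_KEY, SSH_USER and SSH_HOST are repository or deployment variables set in Bitbucket
            - pipe: atlassian/ssh-run:latest
              variables:
                SSH_KEY: $SSH_KEY
                SSH_USER: $SSH_USER
                SERVER: $SSH_HOST
                COMMAND: "/app/deploy_dev01.sh"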

GitLab runner ignoring DOCKER_AUTH_CONFIG when credential helper specified

We have a GitLab CI pipeline that currently pulls images from our internal Docker registry, authenticated using a variable defined in .gitlab-ci.yml:
variables:
  ...
  DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}}'
This works fine.
We are trying to add a step to the end of the pipeline, to push our built Docker images to an Amazon ECR registry. We have installed the amazon-ecr-credential-helper on our runner instances, and given them the correct IAM permissions to be able to push to these registries. We have changed the .gitlab-ci.yml variable to:
DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}, "credHelpers": { "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"}}'
However, this causes the runner to fail to authenticate to our internal registry, so it cannot pull the images in which our jobs run. Whereas previously we would see in our pipeline jobs' logs:
Authenticating with credentials from $DOCKER_AUTH_CONFIG
... we are no longer seeing this. We're not even getting to the step where we want to push to ECR.
We have added a wrapper script around the credential helper, to log all the ins and outs to a file, and try and debug what is happening. However, it appears as if the helper isn't getting called at all, as there is nothing in the log file.
What can we do to try and get this working?
Our problems here boiled down to a number of causes:
Since we referenced the credential helper in DOCKER_AUTH_CONFIG, we needed the helper installed on the machine spawning the runners. (We use the docker+machine runner.) This machine also needed IAM permissions. Without this, it just gave up on the DOCKER_AUTH_CONFIG variable completely (a questionable decision if you ask me...)
In order to authenticate from within the jobs and push the images to ECR, we needed to configure the helper there too. We did this by modifying our spawner's config.toml file to add a volume /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login. (We also mounted the log directory and our helper wrapper.) In the docker push command, we added a --config docker-config flag and wrote an appropriate config out to docker-config/config.json.
Finally, our job image was docker/compose, and our verbose wrapper was written in bash, which isn't included in that image, so that was another silent failure. 😖.
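To make that last step concrete, the push stage ended up looking roughly like this (the image name and registry host are placeholders; the credHelpers entry mirrors the one in DOCKER_AUTH_CONFIG above):
# Write a job-local Docker config that routes ECR auth through the credential helper
mkdir -p docker-config
cat > docker-config/config.json <<'EOF'
{"credHelpers": {"<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"}}
EOF

# Push using that config; docker-credential-ecr-login must be on PATH inside the job
docker --config docker-config push <account-id>.dkr.ecr.<region>.amazonaws.com/our-image:latest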

GCP manage kubernetes autodeploy image path

In my project on GCP I set up an automated deploy for a specific deployment on my Kubernetes cluster. At the end of the procedure an image path like:
gcr.io/direct-variety-325450/cc-mirror:$COMMIT_SHA
was created.
If I look in my GCP "Container Registry" I see images with tags like c15c5019183ded74814d570a9a33d2f95ecdfb32
Now my question is:
How can I specify the latest image in my deployment.yaml file if there is no latest or other stable tag?
...
spec:
  containers:
  - name: django
    image: ????
...
If I put:
gcr.io/direct-variety-325450/cc-mirror:$COMMIT_SHA
or:
gcr.io/direct-variety-325450/cc-mirror
I get an error:
Cannot download Image, Image does not exist
What do I have to put into the image: entry of my deployment.yaml?
So many thanks in advance
Manuel
TL;DR: you need to explicitly specify the image tag (the latest commit SHA) in your deployment.
In fact, Kubernetes automates a lot of things for you. You declare what you want, and Kubernetes compares its current state with your desired state and performs actions to reconcile them.
If you don't specify the image tag, Kubernetes will compare your wish (no tag) with the current state of the cluster (no tag) and, because they are equal, it will do nothing.
Now, how to automate the deployment of a new tag? No magic here: you need a placeholder in your deployment.yaml file, and you run sed on the file to replace the placeholder with the real value.
Then apply the updated file.
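A minimal sketch of that substitution, assuming a placeholder called IMAGE_TAG in the manifest (the placeholder name is arbitrary):
# deployment.yaml contains a placeholder in the image field:
#   image: gcr.io/direct-variety-325450/cc-mirror:IMAGE_TAG
# In the deploy step, substitute the real commit SHA and apply the updated file:
sed -i "s/IMAGE_TAG/${COMMIT_SHA}/g" deployment.yaml
kubectl apply -f deployment.yaml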

Jenkins X use secrets in Preview environments

I'm using Jenkins X for microservice build / deployment. In each environment there are shared secrets used across microservices (client keys etc.) which are injected into deployment.yaml as environment variables using valueFrom and secretKeyRef. This works well in Production and Staging where the namespaces are well known, but since Preview generates a new namespace each time, these secrets will not exist there. Is there a way to copy secrets from another, known namespace, or is there a better approach?
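For reference, the injection described above typically takes this shape in the deployment.yaml (the secret and key names below are placeholders):
env:
  - name: CLIENT_KEY
    valueFrom:
      secretKeyRef:
        name: shared-client-keys   # must already exist in the deployment's namespace
        key: client-key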
You can create another namespace called jx-preview to store preview-specific secrets, and add this line after the jx preview command in your Jenkinsfile:
sh "kubectl get secret {secret_name} --namespace={from_namespace} --export -o yaml | kubectl apply --namespace=jx-$ORG-$PREVIEW_NAMESPACE -f -"
Not sure if this is the best way though
We've got a command to link services from one namespace to another - for example, to link services from staging to your preview environment via jx step link services.
It would be nice to add a similar command to copy secrets from a namespace in the same way. I've raised an issue to track this new feature.
Another option is to create your own Job in charts/preview/templates/myjob.yaml, have that job create whatever Secrets you need however you want, and then annotate it so that it's triggered as a post-install hook of your Preview chart.
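A rough sketch of such a hook, assuming the shared secret lives in a jx-preview namespace and the job's service account is allowed to read it (all names below are illustrative):
# charts/preview/templates/myjob.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: copy-preview-secrets
  annotations:
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: copy-secrets
        image: bitnami/kubectl:latest
        command: ["/bin/sh", "-c"]
        args:
        - |
          # Copy the shared secret from the known namespace into this preview namespace,
          # stripping cluster-specific metadata before applying
          kubectl get secret my-shared-secret --namespace=jx-preview -o yaml \
            | sed -e '/namespace:/d' -e '/resourceVersion:/d' -e '/uid:/d' -e '/creationTimestamp:/d' \
            | kubectl apply --namespace="$POD_NAMESPACE" -f -
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace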

when pushing docker image to private docker registry, having trouble marking it 'public' via my script (but can do via web ui)

I am pushing a docker image to a private docker registry, and am having trouble marking it 'public' via a script.
For this discussion, I'm guessing the content of the Dockerfile doesn't matter... so let's assume I have the following in my current working directory:
Dockerfile
FROM ubuntu
RUN touch /tmp/foo
I build like this:
docker build -t my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04 .
Then, I am doing my push like this:
docker push my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04
Next, I navigate to the web site that allows me to manage my private registry (at the URL http://my.private.docker.registry.com).
I look at my image, and I see it has a padlock icon next to it, indicating that it is private. I can manually unlock it from the web UI, but I'd like to know if there are any options to docker's push command that will allow me to mark the image as 'public' without manual intervention.
One thing I tried was setting global settings for my namespace such that all new repos would be readable/writable by all users. Specifically: I went into the Docker web UI for my private registry and, for the namespace 'foo', I tried adding default permissions (for any newly created repos) such that all users will have 'write' access to any new repo pushed under the 'foo' namespace.
However, even after doing the above, when I pushed a new image to my private registry under namespace foo, that image was still marked with the padlock. I looked up the command-line options for 'docker push', and I did not see any option that looked like it would affect the visibility of the image at the time of push.
Thanks in advance for your help!
-chris
So, according to the folks who manage the Docker registry at the company I'm at now: there is no command line way to enable permissions for users other than the repository creator to have write access to that repo. You have to go to the web UI and manually mark the repo 'public', and you have to add permissions for each user (although it is possible to have groups of users, and then add a whole group -- this still is clunky because new employees have to be manually added to the group).
I find it hard to believe that there's no command-line way... but this is what our experts say. If there are other experts out there who have a better idea, please chime in! Otherwise I will do it manually through the web UI (grrrrRRrr).
