We have a GitLab CI pipeline that currently pulls images from our internal Docker registry, authenticated using a variable defined in .gitlab-ci.yml:
variables:
  ...
  DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}}'
This works fine.
We are trying to add a step to the end of the pipeline, to push our built Docker images to an Amazon ECR registry. We have installed the amazon-ecr-credential-helper on our runner instances, and given them the correct IAM permissions to be able to push to these registries. We have changed the .gitlab-ci.yml variable to:
DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}, "credHelpers": { "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"}}'
However, this causes the runner to fail to authenticate to our internal registry, so it cannot pull the images in which our jobs run. Whereas previously we would see in our pipeline jobs' logs:
Authenticating with credentials from $DOCKER_AUTH_CONFIG
... we are no longer seeing this. We're not even getting to the step where we want to push to ECR.
We have added a wrapper script around the credential helper, to log all the ins and outs to a file and try to debug what is happening. However, it appears the helper isn't getting called at all, as there is nothing in the log file.
What can we do to try and get this working?
Our problems here boiled down to a number of causes:
Since we referenced the credential helper in DOCKER_AUTH_CONFIG, we needed the helper installed on the machine spawning the runners. (We use the docker+machine runner.) This machine also needed IAM permissions. Without this, it just gave up on the DOCKER_AUTH_CONFIG variable completely (a questionable decision if you ask me...)
In order to authenticate from within the jobs and push the images to ECR, we needed to configure the helper there too. We did this by modifying our spawner's config.toml file to add a volume /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login. (We also mounted the log directory and our helper wrapper.) In the docker push command, we added a --config docker-config flag, and wrote out an appropriate config to docker-config/config.json. (A sketch of this push step is shown after these points.)
Finally, our job image was docker/compose, and our verbose wrapper was written in bash, which isn't included in that image, so that was another silent failure. 😖.
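For reference, here is a minimal sketch of that push step; the ECR registry host and image name are placeholders, and it assumes /usr/bin/docker-credential-ecr-login is available inside the job container via the config.toml volume mount described above:

# Write a Docker client config that routes ECR authentication through the helper
mkdir -p docker-config
cat > docker-config/config.json <<'EOF'
{"credHelpers": {"<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"}}
EOF

# Point docker at that config directory just for this push
docker --config docker-config push <account-id>.dkr.ecr.<region>.amazonaws.com/my-image:latest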
I have encountered a problem while attempting to deploy my code to a Droplet server (running Ubuntu) using BitBucket Pipeline.
I have set the necessary environment variables (SSH_PRIVATE_KEY, SSH_USER, SSH_HOST) and added the public key of the SSH_PRIVATE_KEY to the ~/.ssh/authorized_keys file on the server. When I manually deploy from the server, there are no issues with cloning or pulling. However, during the automatic CI deployment stage, I am encountering the error shown in the attached image.
This is my .yml configuration.
Thanks for your help in advance.
To refer to the values of the variables defined in the configuration, your script should use $VARIABLE_NAME, not VARIABLE_NAME, as the latter is just the literal string.
- pipe: atlassian/ssh-run:latest
  variables:
    SSH_KEY: $SSH_KEY
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"
Also, note that some pitfalls exist when using an explicit $SSH_KEY; it is generally easier and safer to use the default key provided by Bitbucket, see Load key "/root/.ssh/pipelines_id": invalid format.
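If you rely on the repository's default key instead, the pipe can be used without an explicit SSH_KEY at all. A sketch, assuming the key pair is generated under Repository settings > SSH keys and its public key is added to the server's authorized_keys:

- pipe: atlassian/ssh-run:latest
  variables:
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"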
I have a docker image that receives a set of environment variables to customize its execution.
A simple example would be a web-server, that has stuff like client secret for OAuth2, a secret to sign cookies, etc.
The whole app is containerized on a docker image, that receives (runtime) environment variables.
I distribute that docker image on a private registry, and I would like to document that image, so that users can understand how they can customize the image.
Is it possible to ship annotations as part of the docker image so that, for example, running docker describe my_image outputs markdown to stdout?
I could of course use a static page on the web for documentation, but the user would still need to know where that documentation could be found, and the whole distribution would be more complex this way (e.g. the documentation changes with the image tag).
Any ideas?
There is no silver bullet here as far as I know. All solutions below work, but they require the user to be informed of how to retrieve the documentation.
There is no standard way of doing it.
The Open Container Initiative has created an image spec annotation standard suggesting that:
A link to more information about the image should be provided in a label called org.opencontainers.image.documentation.
A description of the software packaged inside the container should be provided in a label called org.opencontainers.image.description.
According to OCI, one of the variations of option 1 below is correct.
Option 1: Providing a link in a label (Preferred by OCI)
Assuming the Dockerfile and related assets are version controlled in a git repository that is publicly accessible (for example on GitHub), that git repository could also contain a README.md file. If you have a pipeline hooked up to the repo that builds and publishes the Docker image to a registry automatically, you could set up the docker build command to add a label with a link to the documentation as follows:
# Get the current commit id
commit=$(git rev-parse HEAD)
# Build docker image and attach a link to the Readme as a label
docker build -t myimagename:myversion \
  --label "org.opencontainers.image.documentation=https://github.com/<user>/<repo>/blob/$commit/README.md" .
This solution links to the documentation for that particular commit, versioned alongside your Dockerfile. It does, however, require the user to have internet access to read the documentation.
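For example, a user who has the image locally can read that link back with docker inspect (image name as in the build command above):

docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.documentation" }}' \
  myimagename:myversion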
Option 1b: Providing full documentation in a label (Preferred by OCI)
A variation of option 1 where the full documentation is serialized and put into the label (there are no length restrictions on labels). This way the documentation is bundled with the image itself.
As Jorge Leitao pointed out in the comments, the image annotation spec from OCI specifies the name of such a label as org.opencontainers.image.description
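A rough sketch of that, assuming the README.md file sits next to the Dockerfile at build time:

docker build -t myimagename:myversion \
  --label "org.opencontainers.image.description=$(cat README.md)" .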
Option 2: Bundling documentation inside image
If you prefer to actually bundle the Readme.md file inside the image, to make it independent of any external web page, consider the following:
Upon build, make sure to copy the Readme.md file to the docker image
Also create a simple shell script describe that cats the Readme.md
describe
#!/usr/bin/env sh
cat /docs/Readme.md
Dockerfile additions
...
COPY Readme.md /docs/Readme.md
COPY describe /opt/bin/describe
RUN chmod +x /opt/bin/describe
ENV PATH="/opt/bin:${PATH}"
...
A user that has your Docker image can now run the following command to have the markdown sent to stdout:
docker run myimage:version describe
This solution bundles the documentation for this particular version of the image inside the image itself, and it can be retrieved without any external dependencies.
I'm using Jenkins X for microservice build / deployment. In each environment there are shared secrets used across microservices (client keys etc.) which are injected into deployment.yaml as environment variables using valueFrom and secretKeyRef. This works well in Production and Staging where the namespaces are well known, but since preview generates a new namespace each time, these secrets will not exist. Is there a way to copy secrets from another, known, namespace, or a better approach?
You can create another namespace called jx-preview to store preview specific secrets, and add this line after the jx preview command in your Jenkinsfile
sh "kubectl get secret {secret_name} --namespace={from_namespace} --export -o yaml | kubectl apply --namespace=jx-$ORG-$PREVIEW_NAMESPACE -f -"
Not sure if this is the best way though
We've got a command to link services from one namespace to another - such as linking services from staging to your preview environment via jx step link services.
It would be nice to add a similar command to copy secrets from a namespace in the same way. I've raised an issue to track this new feature.
Another option is to create your own Job in charts/preview/templates/myjob.yaml, create whatever Secrets you need in that Job however you want, and then annotate it so that it's triggered as a post-install hook of your Preview chart; a sketch of such a Job is below.
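A minimal sketch of such a hook Job, reusing the kubectl pipeline from above. The secret name, the source namespace jx-staging, and the RBAC setup are assumptions you would need to adapt: the Job's service account must be allowed to read the secret in the source namespace and create secrets in the preview namespace.

# charts/preview/templates/myjob.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: copy-preview-secrets
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: copy-secrets
          image: bitnami/kubectl:latest   # any image that ships kubectl works
          command: ["/bin/sh", "-c"]
          args:
            - >
              kubectl get secret my-shared-secret --namespace=jx-staging --export -o yaml
              | kubectl apply --namespace={{ .Release.Namespace }} -f -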
I am trying to push a built docker container to a private registry and am having difficulty understanding how to pass the key safely and securely. I am able to successfully connect and push my container if I "build with parameters" in the Jenkins UI and just paste in my key.
This is my yaml file, and my templates to take care of most other things:
- project:
    name: 'merge-monitor'
    github_project: 'merge-monitor'
    value_stream: 'enterprise'
    hipchat_rooms:
      - ''
    defaults: clojure-project-var-defaults
    docker_registry: 'private'
    jobs:
      - '{value_stream}_{name}_docker-build':  # build docker images
          wrappers:
            - credentials-binding:
                - text:
                    credential-id: our-credential-id
                    variable: DOCKER_REGISTRY_PASSWORD
I have read through the docs, and maybe I am missing something about credentials-binding, but I thought I simply had to refer to the key I had saved in Jenkins by its name and pass the key in as a variable to use as my password.
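(For illustration only: consuming that bound variable in a shell build step would look something like the following, with the registry URL, username, and image name as placeholders.)

echo "$DOCKER_REGISTRY_PASSWORD" | docker login my.private.registry --username ci-user --password-stdin
docker push my.private.registry/merge-monitor:latest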
Thank you in advance for the help
The issue here was completely different from what I was searching for. We simply needed to give our worker permissions as a user within our own container registry before it would have push access.
I am pushing a docker image to a private docker registry, and am having trouble marking it 'public' via a script.
For this discussion, I'm guessing the content of the Dockerfile doesn't matter... so let's assume I have the following in my current working directory:
Dockerfile
FROM ubuntu
RUN touch /tmp/foo
I build like this:
docker build -t my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04 .
Then, I am doing my push like this:
docker push my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04
Next, I navigate to the web site that allows me to manage my private registry (at the URL http://my.private.docker.registry.com).
I look at my image, and I see it has a padlock icon next to it, indicating that it is private. I can manually unlock it from the web UI, but I'd like to know if there are any options to docker's push command that will allow me to mark the image as 'public' without manual intervention.
One thing I tried was setting global settings for my namespace such that all new repos would be readable/writable by all users. Specifically: I went into the Docker web UI for my private registry and, for the namespace 'foo', I tried adding default permissions (for any newly created repos) such that all users will have 'write' access to any new repo pushed under the 'foo' namespace.

However, even after doing the above, when I pushed a new image to my private registry under namespace foo, that image was still marked with the padlock. I looked up the command line options for 'docker push', and I did not see any option that looked like it would affect the visibility of the image at the time of push.
Thanks in advance for your help!
-chris
So, according to the folks who manage the Docker registry at the company I'm at now: there is no command line way to grant write access to users other than the repository creator. You have to go to the web UI and manually mark the repo 'public', and you have to add permissions for each user (although it is possible to have groups of users and then add a whole group -- this is still clunky because new employees have to be manually added to the group).
I find it hard to believe that there's no command line way, but this is what our experts say. If there are other experts out there who have a better idea, please chime in! Otherwise I will do it manually through the web UI (grrrrRRrr).