drone.io use private repo for pulling FROM image - docker

I have to use a private image from AWS or GCP for my build process in Drone.
The simplest Dockerfile example:
FROM ***.dkr.ecr.eu-central-1.amazonaws.com/***:latest
That means I have to log in, which works fine. My .drone.yml example:
steps:
- name: docker
  privileged: true
  image: revenuehack/drone-ecr-auth
  environment:
    AWS_ACCESS_KEY_ID:
      from_secret: aws_access_id
    AWS_SECRET_ACCESS_KEY:
      from_secret: aws_key
    AWS_REGION: eu-central-1
  commands:
  - aws ecr get-login --region $AWS_REGION --no-include-email | sh
But now I have to pull the image and use it in different steps of the CI process. Other questions suggest binding docker.sock, like here. That does not feel right to me; I'd rather have some sort of service for that. Is that possible? Also, this binding does not work:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
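For reference, Drone 1.0 does not accept that short-form bind mount; host volumes are declared at the bottom of the pipeline and referenced by name in the step, and the repository has to be marked trusted. A minimal sketch, assuming a step image that ships the docker CLI:

kind: pipeline
name: default

steps:
- name: use-host-docker
  image: docker    # assumption: any image containing the docker CLI works here
  volumes:
  - name: dockersock
    path: /var/run/docker.sock
  commands:
  - docker pull ***.dkr.ecr.eu-central-1.amazonaws.com/***:latest

volumes:
- name: dockersock
  host:
    path: /var/run/docker.sock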

Option 1: Global Registry
Personally, I would suggest using the drone-registry-plugin; it works great for keeping access to the ECR repos live:
https://github.com/drone/drone-registry-plugin
A caveat to this suggestion: it mimics the v0.8 global registries feature, so every pipeline in the installation would be able to access the registry.
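The plugin is fed a list of registry credentials (mounted as a file or secret); for ECR, the usual approach per the plugin's docs is to supply the access key and secret key as username and password, which you should double-check there. A minimal sketch of that credentials list, reusing the registry address from the question (the key values are placeholders):

- address: ***.dkr.ecr.eu-central-1.amazonaws.com
  username: <aws_access_key_id>
  password: <aws_secret_access_key>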
Option 2: Local docker config.json
Since this could technically be used by anyone as well, I'm not sure what you gain over the registry plugin, but here is the reference:
https://discourse.drone.io/t/how-to-pull-private-images-with-1-0/3155
This option involves placing a .docker/config.json onto the agents using cloud-init or some other mechanism; in the individual pipelines you can then add another root-level YAML block, image_pull_secrets:
kind: pipeline
name: default

steps:
- name: someStep
  image: some.registry.dev/some-image:latest

image_pull_secrets:
- dockerconfigjson
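For completeness, the dockerconfigjson secret is just the contents of a standard Docker config.json; a minimal sketch, with a placeholder for the base64-encoded username:password pair:

{
  "auths": {
    "some.registry.dev": {
      "auth": "<base64 of username:password>"
    }
  }
}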

Related

How can I run a docker container in ansible?

This is my YAML file:
- name: Start jaegar daemon services
  docker:
    name: jaegar-logz
    image: logzio/jaeger-logzio:latest
    state: started
    env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
    ports:
      - "5775:5775"
      - "6831:6831"
      - "6832:6832"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "14250:14250"
      - "9411:9411"

- name: Wait for jaegar services to be up
  wait_for: delay=60 port=5775
Can Ansible discover the Docker image from the Docker Hub registry by itself?
Does this actually start the Jaeger daemons, or does it just build the image? If it's the latter, how can I run the container?
The Docker image is from here: https://hub.docker.com/r/logzio/jaeger-logzio
Assuming you are using Docker CE:
You should be able to run it according to this documentation from Ansible. Do note, however, that this module is deprecated in Ansible 2.4 and above, as the documentation itself states. Use the docker_container task instead if you want to run containers. The links are available in said documentation page.
As far as your questions go:
Can Ansible discover the Docker image from the Docker Hub registry by itself?
That depends on the client machine you run it on. By default, Docker points to its own Docker Hub registry unless you specifically log in to another repository. If you use the public repo (which it looks like you do, judging from your link) and the client can reach that repo online, you should be fine.
Does this actually start the Jaeger daemons, or does it just build the image? If it's the latter, how can I run the container?
According to the docker_container documentation, you should be able to run the container directly from this task, which means you are good to go.
P.S.: The image parameter on that page tells us that:
Repository path and tag used to create the container. If an image is not found or pull is true, the image will be pulled from the registry. If no tag is included, 'latest' will be used.
In other words, with a small adjustment to your task you should be fine.
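For illustration, a minimal sketch of the same play rewritten with docker_container (names, image, and ports are taken from the question; the env values are quoted because the module expects string values):

- name: Start jaegar daemon services
  docker_container:
    name: jaegar-logz
    image: logzio/jaeger-logzio:latest
    state: started
    env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
    published_ports:
      - "5775:5775"
      - "6831:6831"
      - "6832:6832"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "14250:14250"
      - "9411:9411"

- name: Wait for jaegar services to be up
  wait_for:
    delay: 60
    port: 5775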

How do I build a docker-compose container from a git-resource in Concourse CI?

I am currently trying to build and deploy a dockerized Go project, pulled from a Git repo, using Concourse.
To give you some background about my current setup:
I have two AWS Lightsail instances set up, both of them running Concourse in a Docker container.
One of those instances serves the web node; the other acts as a worker node, which connects to the web node.
My current pipeline looks like this:
resources:
  - name: zsu-wasserlabor-api-repo
    type: git
    webhook_token: TOP_SECRET
    source:
      uri: git@github.com:lennartschoch/zsu-wasserlabor-api
      branch: master
      private_key: TOP_SECRET

jobs:
  - name: build-api
    plan:
      - get: zsu-wasserlabor-api-repo
        trigger: true
      - task: build
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: alpine}
          inputs:
            - name: zsu-wasserlabor-api-repo
          run:
            path: sh
            args:
              - -c
              - |
                cd zsu-wasserlabor-api-repo
                docker-compose build
The problem is that docker-compose is not installed.
I feel like I am doing something fundamentally wrong. Could anyone give me a hint?
Best,
Lennart
The pipeline described above specifies that it should use the alpine image, which doesn't have docker-compose on it. Thus, you will need to find an image that has docker-compose installed on it, but even then, there are additional steps you will need to take to make it work in Concourse (see this link for more details).
Fortunately, someone has made an image available that takes care of the additional steps, with a sample pipeline that you can find here: https://github.com/meAmidos/dcind
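For orientation, a trimmed sketch of a task based on that dcind image, adapted to the repository name from the question (the helper script path and the start_docker function are taken from that repository's README, so verify them there):

- task: compose-build
  privileged: true
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: amidos/dcind}
    inputs:
      - name: zsu-wasserlabor-api-repo
    run:
      path: sh
      args:
        - -exc
        - |
          source /docker-lib.sh   # helper shipped in the dcind image
          start_docker            # brings up a Docker daemon inside the task
          cd zsu-wasserlabor-api-repo
          docker-compose build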
That being said, if you are simply trying to build a Docker image, you can use the docker-image-resource instead and just specify the Dockerfile.
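A minimal sketch of that alternative, assuming the repository root contains a Dockerfile and using placeholder registry details and credentials; the put step replaces the shell task above:

resources:
  - name: zsu-wasserlabor-api-image
    type: docker-image
    source:
      repository: some.registry.dev/zsu-wasserlabor-api   # placeholder registry/repo
      username: ((registry-username))
      password: ((registry-password))

jobs:
  - name: build-api
    plan:
      - get: zsu-wasserlabor-api-repo
        trigger: true
      - put: zsu-wasserlabor-api-image
        params:
          build: zsu-wasserlabor-api-repo   # directory containing the Dockerfile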

Mixing local and remote Docker images repo?

I work on a Kubernetes cluster based CI-CD pipeline.
The pipeline runs like this:
An EC2 machine has Docker.
Jenkins runs as a container.
"Builder image" with Java, Maven etc is built.
Then this builder image is run to build an app image(s)
Then the app is run in kubernetes AWS cluster (using Helm).
Then the builder image is run with params to run Maven-driven tests against the app.
Now part of these steps doesn't require the image to be pushed. E.g. the builder image can be cached or disposed at will - it would be rebuilt if needed.
So these images are named like mycompany/mvn-builder:latest.
This works fine when used directly through Docker.
When Kubernetes and Helm come in, they want the image URIs and try to fetch them from the remote repo. So using the "local" name mycompany/mvn-builder:latest doesn't work:
Error response from daemon: pull access denied for collab/collab-services-api-mvn-builder, repository does not exist or may require 'docker login'
Technically, I could name it <AWS-repo-ID>/mvn-builder and push it, but that breaks the possibility of running all this locally in minikube, because it's quite hard to stay authenticated against the silly AWS 12-hour token (remember, it all runs in a cluster).
Is it possible to mix the remote repo and local cache? In other words, can I have Docker look at the remote repository and, if the image is not found or the pull fails (see above), fall back to the cached image?
So that if I use foo/bar:latest in a Kubernetes resource, it would try to fetch, find out that it can't, and then use the local foo/bar:latest?
I believe an initContainer would do that, provided it has access to /var/run/docker.sock (and your cluster allows such a thing): it can conditionally pull (or docker load) the image, so that when the "main" container starts, the image is always cached.
Approximately like this:
spec:
  initContainers:
    - name: prime-the-cache
      image: docker:18-dind
      command:
        - sh
        - -c
        - |
          if something_awesome; then
            docker pull from/a/registry
          else
            docker load -i some/other/path
          fi
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
          readOnly: true
  containers:
    - name: primary
      image: a-local-image
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
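For the cached image to actually be used, the primary container also needs an imagePullPolicy that accepts local images; a minimal addition to the spec above:

  containers:
    - name: primary
      image: a-local-image
      imagePullPolicy: IfNotPresent   # or Never, to skip remote pulls entirely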

How to build docker image from .drone.yml?

I have a (.drone.yml) test file from which I want to build a Docker image. According to the documentation I have to build it using drone.
I tried this tutorial (https://www.digitalocean.com/community/tutorials/how-to-perform-continuous-integration-testing-with-drone-io-on-coreos-and-docker) and several other tutorials, but I failed.
Can anyone please show me a simple way to build from .drone.yml?
Thank you
Note that this answer applies to drone version 0.5
You can use the Docker plugin to build and publish a Docker image at the successful completion of your build. You add the Docker plugin as a step in your build pipeline section of the .drone.yml file:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
  publish:
    image: plugins/docker
    repo: foo/bar
In many cases you will want to limit execution of this step to certain branches. This can be done by adding runtime conditions:
  publish:
    image: plugins/docker
    repo: foo/bar
    when:
      branch: master
You will need to provide drone with credentials to your Docker registry in order for drone to publish. These credentials can be declared directly in the yaml file, although storing these values in plain text in the yaml is generally not recommended:
  publish:
    image: plugins/docker
    repo: foo/bar
    username: johnsmith
    password: pa55word
    when:
      branch: master
You can alternatively provide your credentials using the built-in secret store. Secrets can be added to the secret store on a per-repository basis using the Drone command line utility:
export DRONE_SERVER=http://drone.server.address.com
export DRONE_TOKEN=...

drone secret add \
  octocat/hello-world DOCKER_USERNAME johnsmith

drone secret add \
  octocat/hello-world DOCKER_PASSWORD pa55word

drone sign octocat/hello-world
Secrets are then interpolated in your yaml at runtime:
  publish:
    image: plugins/docker
    repo: foo/bar
    username: ${DOCKER_USERNAME}
    password: ${DOCKER_PASSWORD}
    when:
      branch: master

Kubernetes AWS deployment can not set docker credentials

I set up a Kubernetes cluster on AWS using the kube-up script, with one master and two minions. I want to create a pod that uses a private Docker image, so I need to add my credentials to the Docker daemon of each minion in the cluster. But I don't know how to log into the minions created by the AWS script. What is the recommended way to pass credentials to the Docker daemons of each minion?
Probably the best method for you is imagePullSecrets: you create a secret (holding your docker config), which will be used for the image pull. Read more about the different ways of using a private registry at http://kubernetes.io/docs/user-guide/images/#using-a-private-registry
Explained here: https://kubernetes.io/docs/concepts/containers/images/
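A minimal sketch of that approach (the secret name regcred and the image are placeholders; the secret would be created beforehand with kubectl create secret docker-registry, holding the registry credentials):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
    - name: main
      image: <your-private-registry>/<image>:latest   # placeholder
  imagePullSecrets:
    - name: regcred   # secret of type kubernetes.io/dockerconfigjson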
There are 3 options for imagePullPolicy: Always, IfNotPresent and Never.
1) example of yaml:
...
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
2) By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.
This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.
All pods will have read access to any pre-pulled images.
