This is my .yml file:

- name: Start jaeger daemon services
  docker:
    name: jaeger-logz
    image: logzio/jaeger-logzio:latest
    state: started
    env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
    ports:
      - "5775:5775"
      - "6831:6831"
      - "6832:6832"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "14250:14250"
      - "9411:9411"

- name: Wait for jaeger services to be up
  wait_for: delay=60 port=5775
Can Ansible discover the Docker image from the Docker Hub registry by itself?
Does this actually start the Jaeger daemons, or does it just build the image? If it's the latter, how can I run the container?
The Docker image is from here - https://hub.docker.com/r/logzio/jaeger-logzio
Assuming you are using Docker CE, you should be able to run this according to this documentation from Ansible. Do note, however, that this module is deprecated in Ansible 2.4 and above, as the documentation itself states. Use the docker_container task if you want to run containers instead; the links are available in said documentation.
As far as your questions go:
Can Ansible discover the Docker image from the Docker Hub registry by itself?
This depends on the client machine you run it on. By default, Docker points to its own Docker Hub registry unless you specifically log in to another repository. If you use the public repo (which it looks like you do, from your link) and the client can reach that repo online, you should be fine.
Does this actually start the Jaeger daemons, or does it just build the image? If it's the latter, how can I run the container?
According to the docker_container documentation, you should be able to run the container directly from this task, which means you are good to go.
P.S.: The image parameter on that page tells us that:
Repository path and tag used to create the container. If an image is
not found or pull is true, the image will be pulled from the registry.
If no tag is included, 'latest' will be used.
In other words, with a small adjustment to your task you should be fine.
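To illustrate, here is a rough sketch of how the task might look rewritten against docker_container. It keeps your names and variables; the pull option is my addition, and the whole thing should be checked against the module documentation for your Ansible version:

- name: Start jaeger daemon services
  docker_container:
    name: jaeger-logz
    image: logzio/jaeger-logzio:latest
    state: started
    pull: true                      # fetch the image from Docker Hub if it is not present
    env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
    published_ports:
      - "5775:5775"
      - "6831:6831"
      - "6832:6832"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "14250:14250"
      - "9411:9411"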
I am trying to run a script (unit tests) that uses Docker behind the scenes on a CI system. The script works as expected on DroneCI, but after switching to Cloud Build it is not clear how to set up DinD.
For DroneCI I basically use DinD as shown here; my question is, how do I translate that setup to Google Cloud Build? Is it even possible?
I searched the internet for the Cloud Build syntax for DinD and couldn't find anything.
Cloud Build lets you create Docker container images from your source code. The Cloud SDK provides the builds subcommand for using this service easily.
For example, here is a simple command to build a Docker image:
gcloud builds submit -t gcr.io/my-project/my-image
This command sends the files in the current directory to Google Cloud Storage; one of the Cloud Build VMs then fetches the source code, runs docker build, and uploads the image to Container Registry.
By default, Cloud Build runs the docker build command to build the image. You can also customize the build pipeline with custom build steps. Since you can use any arbitrary Docker image as a build step, and the source code is available, you can run unit tests as a build step; by doing so, you always run the tests with the same Docker image. There is a demonstration repository at cloudbuild-test-runner-example, and this tutorial uses it as part of its instructions.
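As a rough sketch of that idea (the test image, test command, and image name below are placeholders, not taken from your project), a cloudbuild.yaml with a unit-test step followed by the image build could look like this:

steps:
  # Run the unit tests inside an arbitrary builder image; Cloud Build runs steps
  # sequentially, so a failing test step stops the build before the image is built.
  - id: unit-tests
    name: 'python:3.9'                       # placeholder: any image with your test toolchain
    entrypoint: 'python'
    args: ['-m', 'unittest', 'discover']

  # Build the application image only after the tests have passed
  - id: build-image
    name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/my-project/my-image', '.']

# Push the built image to Container Registry at the end of the build
images:
  - 'gcr.io/my-project/my-image'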
I would also recommend having a look at these informative links covering similar use cases:
Running Integration test on Google cloud build
Google cloud build pipeline
I managed to figure out a way to run Docker-in-Docker (DinD) in Cloud Build. To do that, we need to launch a service in the background with docker-compose. Your docker-compose.yml file should look something like this:

version: '3'
services:
  dind-service:
    image: docker:<dind-version>-dind
    privileged: true
    ports:
      - "127.0.0.1:2375:2375"
      - "127.0.0.1:2376:2376"
networks:
  default:
    external:
      name: cloudbuild
In my case I had no problem using versions 18.03 or 18.09; later versions should also work. Secondly, it is important to attach the container to the cloudbuild network, so that the dind container is on the same network as every container spawned during your build steps.
To start the service you need to add a step to your cloudbuild.yml file.
- id: start-dind
  name: docker/compose
  args: ['-f', 'docker-compose.yml', 'up', '-d', 'dind-service']
To validate that the dind service works as expected, you can just create a ping step.
- id: 'Check service is listening'
  name: gcr.io/cloud-builders/curl
  args: ["dind-service:2375"]
  waitFor: [start-dind]
Now, if that works, you can run your script as normal with dind in the background. What is important is to pass the DOCKER_HOST environment variable so that the Docker client can locate the Docker engine.

- id: my-script
  name: my-image
  script: myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'
Take note that any container spawned by your script will live inside dind-service, so if you need to make requests to it, target http://dind-service rather than http://localhost. Moreover, if you use private images you will need some form of authentication before running your script. For that, run gcloud auth configure-docker --quiet before your script starts, and make sure your Docker image has gcloud installed. This creates the authentication credentials required to run your app. The credentials are saved in a path relative to the $HOME variable, so make sure your app is able to access it; you might run into problems if you use tox, for example.
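To make that concrete, a variation of the script step above with the authentication call added up front might look like the following (my-image and myscript are still the placeholders from before, and the image is assumed to have gcloud installed):

- id: my-script
  name: my-image
  script: |
    gcloud auth configure-docker --quiet   # writes Docker credentials under $HOME
    myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'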
I would like to run a K8s CronJob (or controller) to mirror copies of the actual container images used to an external (ECR) Docker repo. How can I do the equivalent of:
docker pull localimage
docker tag localimage newlocation
docker push newlocation
Kubernetes doesn't have any way to push or rename images, or to manually pull images beyond declaring them in a Pod spec.
The Docker registry system has its own HTTP API. One possibility is, when you discover a new image, manually make the API calls to pull and push it. In this context you wouldn't specifically need to "tag" the image since the image repository, name, and tag only appear as URL components. I'm not specifically aware of a prebuilt tool that can do this, though I'd be very surprised if nobody's built it.
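By way of a very rough sketch only (the registry hosts and image name below are placeholder assumptions, authentication is omitted entirely, and real code must also copy every layer and config blob), the manifest-level flow against the Registry HTTP API v2 looks roughly like this:

SRC=registry-1.docker.io                               # assumed source registry
DST=123456789012.dkr.ecr.us-east-1.amazonaws.com       # hypothetical ECR registry
NAME=some/image
TAG=latest

# 1. Fetch the image manifest from the source registry
curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://$SRC/v2/$NAME/manifests/$TAG" -o manifest.json

# 2. For each layer (and the config blob) listed in manifest.json:
#      GET  https://$SRC/v2/$NAME/blobs/<digest>     (download)
#      POST https://$DST/v2/$NAME/blobs/uploads/     (start an upload, returns an upload URL)
#      PUT  <upload-url>&digest=<digest>             (finish the upload)

# 3. Push the manifest to the destination registry under the same tag
curl -s -X PUT -H "Content-Type: application/vnd.docker.distribution.manifest.v2+json" \
  --data-binary @manifest.json "https://$DST/v2/$NAME/manifests/$TAG"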
If you can't do this, then the only reliable way to get access to some Docker daemon in Kubernetes is to run one yourself. In this scenario you don't need access to "the real" container system, just somewhere you can run the specific docker commands you list, so you're not limited by the container runtime Kubernetes uses. The one big "but" here is that the Docker daemon must run in a privileged container, which your local environment may not allow.
It's a little unusual to run two containers in one Pod but this is a case where it makes sense. The Docker daemon can run as a prepackaged separate container, tightly bound to its client, as a single unit. Here you don't need persistent storage or anything else that might want the Docker daemon to have a different lifecycle than the thing that's using it; it's just an implementation detail of the copy process. Carefully Googling "docker in docker" kubernetes finds write-ups like this or this that similarly describe this pattern.
By way of illustration, here's a way you might do this in a Kubernetes Job:
apiVersion: batch/v1
kind: Job
metadata: { ... }
spec:
  template:
    spec:
      restartPolicy: Never          # a Job's pod template must use Never or OnFailure
      containers:
        - name: cloner
          image: docker:latest      # just the client and not a daemon
          env:
            - name: IMAGE
              value: some/image:tag
            - name: REGISTRY
              value: registry.example.com
            - name: DOCKER_HOST
              value: tcp://localhost:2375   # pointing at the other container
          command:
            - /bin/sh
            - -c
            - |-
              docker pull "$IMAGE"
              docker tag "$IMAGE" "$REGISTRY/$IMAGE"
              docker push "$REGISTRY/$IMAGE"
              docker rmi "$IMAGE" "$REGISTRY/$IMAGE"
        - name: docker
          image: docker:dind
          securityContext:
            privileged: true        # <-- could be a problem with your security team
          volumeMounts:
            - name: dind-storage
              mountPath: /var/lib/docker
      volumes:
        - name: dind-storage
          emptyDir: {}              # won't outlive this Pod and that's okay
In practice I suspect you'd want a single long-running process to manage this, maybe running the DinD daemon as a second container in your controller's Deployment.
I looked at other questions but can't find a solution. I am setting up CI in GitLab and use GitLab's shared runners. In the build stage I use a Docker image as the base image, but when I run a docker command it says:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I looked at this topic but still don't understand what I should do.
.gitlab-ci.yml :
stages:
  - test
  - build
  - deploy

job_1:
  image: python:3.6
  stage: test
  script:
    - sh ./sh_script/install.sh
    - python manage.py test -k

job_2:
  image: docker:stable
  stage: build
  before_script:
    - docker info
  script:
    - docker build -t my-docker-image .
I know that the GitLab runner must be registered to use Docker and share /var/run/docker.sock, but how do I do this when using GitLab's own shared runners?
Ahh, that's my lovely topic - using Docker for GitLab CI. The problem you are experiencing is better known as docker-in-docker.
Before configuring it, you may want to read this brilliant post: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
That will give you some understanding of what the problem is and which solution fits you best. Generally there are two major approaches: actually installing a Docker daemon inside Docker, or sharing the host's daemon with the containers. Which approach to choose depends on your needs.
In gitlab you can go in several ways, I will just share our experience.
Way 1 - using docker:dind as a service.
It is pretty simple to set up. Just add docker:dind as a shared service to your gitlab-ci.yml file and use the docker:latest image for your jobs.
image: docker:latest # this sets the default image for jobs

services:
  - docker:dind
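Applied to your job_2, this could look roughly like the following; the DOCKER_HOST value and the TLS settings are assumptions that depend on how the dind service on the shared runner is configured, so treat them as a starting point:

job_2:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # assumption: dind reachable on the non-TLS port
    DOCKER_TLS_CERTDIR: ""           # assumption: TLS disabled for the dind service
  script:
    - docker info
    - docker build -t my-docker-image .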
Pros:
simple to set up.
simple to run - your source code is available to your job by default in the current directory, because it is pulled directly onto your Docker runner.
Cons: you have to configure a Docker registry to act as a cache for that service, otherwise your Dockerfiles will be built from scratch each time your pipeline starts. For me that is unacceptable, because it can take more than an hour depending on the number of containers you have.
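One common mitigation, shown here purely as a sketch and not something from the original setup, is to use your project's registry as a layer cache with docker build --cache-from, relying on GitLab's predefined CI variables:

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
script:
  # reuse layers from the last pushed image, if any, instead of building from scratch
  - docker pull "$CI_REGISTRY_IMAGE:latest" || true
  - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:latest" .
  - docker push "$CI_REGISTRY_IMAGE:latest"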
Way 2 - sharing the host docker daemon's /var/run/docker.sock
We set up our own Docker executor with a Docker daemon and shared the socket by adding it to the /etc/gitlab-runner/config.toml file. Thus we made our machine's Docker daemon available to the Docker CLI inside the containers. Note - you DON'T need privileged mode for the executor in this case.
After that we can use both docker and docker-compose in our custom Docker images. Moreover, we don't need a special registry cache, because in this case all containers share the executor's local image store.
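For illustration, the relevant part of /etc/gitlab-runner/config.toml might look roughly like this (the runner name and default image are placeholders, and the usual url/token fields are omitted):

[[runners]]
  name = "docker-socket-runner"      # placeholder name
  executor = "docker"
  [runners.docker]
    image = "docker:latest"          # placeholder default job image
    privileged = false               # not needed when sharing the host socket
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]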
Cons:
You need to somehow pass the sources to your containers in this case, because they are mounted only on the Docker executor, not in the containers launched from it. We settled on cloning them with a command like git clone $CI_REPOSITORY_URL --branch $CI_COMMIT_REF_NAME --single-branch /project
I have created a stack which contains one container (service) on Rancher.
This container was created from an image which is hosted in a GitLab CI project registry.
I want to force Rancher to download a new version of this image and upgrade the container.
I want to do this from a .gitlab-ci.yml script.
Here is an extract of my .gitlab-ci.yml:
(Please note I have set the RANCHER_ACCESS_KEY, RANCHER_SECRET_KEY and RANCHER_URL secret variables in the GitLab web interface.)
deploiement:
  stage: deploiement
  tags: [dockerrunnertag]
  image: tagip/rancher-cli
  script:
    - rancher --debug up -d --stack "mystack"
    - rancher --debug up -d --force-upgrade --pull --stack "mystack" --confirm-upgrade app
My problem is that GitLab automatically copies my source code into this tagip/rancher-cli container.
This container is temporary; I just want to run it in order to trigger an action on the Rancher server.
How can I disable this source-code fetching?
Thanks
Yes it is possible.
Simply add the GIT_STRATEGY variable to your deployment job.
variables:
  GIT_STRATEGY: none
Reference: https://gitlab.com/gitlab-org/gitlab-ce/issues/21337
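Applied to the job from the question, that could look something like this; setting the variable at the job level keeps the default checkout behaviour for the other jobs:

deploiement:
  stage: deploiement
  tags: [dockerrunnertag]
  image: tagip/rancher-cli
  variables:
    GIT_STRATEGY: none   # skip fetching the source code for this job only
  script:
    - rancher --debug up -d --stack "mystack"
    - rancher --debug up -d --force-upgrade --pull --stack "mystack" --confirm-upgrade app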
I work on a Kubernetes-cluster-based CI/CD pipeline.
The pipeline runs like this:
An EC2 machine has Docker.
Jenkins runs as a container.
A "builder image" with Java, Maven, etc. is built.
Then this builder image is run to build the app image(s).
Then the app is run in a Kubernetes cluster on AWS (using Helm).
Then the builder image is run with params to run Maven-driven tests against the app.
Now, part of these steps doesn't require the image to be pushed, e.g. the builder image can be cached or disposed of at will - it would be rebuilt if needed.
So these images are named like mycompany/mvn-builder:latest.
This works fine when used directly through Docker.
When Kubernetes and Helm come in, they want the image URIs and try to fetch them from the remote repo. So using the "local" name mycompany/mvn-builder:latest doesn't work:
Error response from daemon: pull access denied for collab/collab-services-api-mvn-builder, repository does not exist or may require 'docker login'
Technically, I can name it <AWS-repo-ID>/mvn-builder and push it, but that breaks the possibility of running all this locally in minikube, because it's quite hard to stay authenticated against the silly AWS 12-hour token (remember, it all runs in a cluster).
Is it possible to mix the remote repo and a local cache? In other words, can I have Docker look at the remote repository and, if the image is not found or the pull fails (see above), fall back to the cached image?
So that if I use foo/bar:latest in a Kubernetes resource, it will try to fetch it, find out that it can't, and take the local foo/bar:latest?
I believe an initContainer would do that, provided it had access to /var/run/docker.sock (and your cluster allows such a thing) by conditionally pulling (or docker load-ing) the image, such that when the "main" container starts, the image will always be cached.
Approximately like this:
spec:
  initContainers:
    - name: prime-the-cache
      image: docker:18-dind
      command:
        - sh
        - -c
        - |
          if something_awesome; then
            docker pull from/a/registry
          else
            docker load -i some/other/path
          fi
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
          readOnly: true
  containers:
    - name: primary
      image: a-local-image
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock