Docker daemon start using Ansible

I am working on an Ansible playbook that starts the Docker daemon, starts a Docker container, and then runs docker exec; after starting the container I need to start some services inside it.
I have installed and configured Docker Engine and am working with some Docker containers on remote machines. I start the Docker daemon with a specific path, because I need to store my volumes and containers within that path:
$ docker daemon -g /test/docker
My issue: when I start the Docker daemon via Ansible it does start, but the play never moves on to the next task; the daemon just keeps running in the foreground.
---
- hosts: webservers
  remote_user: root
  # Apache Subversion dnf -y install python-pip
  tasks:
    - name: Start Docker Daemon
      shell: docker -d -g /test/docker
      become: yes
      become_user: root
    - name: Start testing docker machine
      command: docker start testing
      async: True
      poll: 0
I tried async to start the process, but it is not working for me.
How can I get Ansible to move on to the next task after starting the Docker daemon?

In order to start the Docker daemon you should use the Ansible service module:
- name: Ensure docker daemon is running
  service:
    name: docker
    state: started
  become: true
Any Docker daemon customisation should be placed in /etc/docker/daemon.json, as described in the official documentation. In your case the file would look like:
{
  "graph": "/test/docker"
}
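Note that on newer Docker releases the graph key has been deprecated in favour of data-root, so on a recent engine the equivalent file would be:
{
  "data-root": "/test/docker"
}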
In order to interact with containers, use the Ansible docker_container module:
- name: Ensure my docker container is running
  docker_container:
    name: testing
    image: busybox
    state: started
  become: true
Try to avoid doing anything in Ansible with the shell module, since it can cause headaches down the line.
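For instance, the daemon customisation above can be pushed from Ansible with the copy module instead of a shell command; a minimal sketch (the task names and the registered variable are illustrative):
- name: Place the docker daemon configuration
  copy:
    content: |
      {
        "graph": "/test/docker"
      }
    dest: /etc/docker/daemon.json
  become: true
  register: daemon_config

- name: Restart docker if its configuration changed
  service:
    name: docker
    state: restarted
  become: true
  when: daemon_config.changed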

You can also start Docker and other services automatically when booting the machine. For that you can use the systemd module in Ansible like this:
- name: Enable docker.service
  systemd:
    name: docker.service
    daemon_reload: true
    enabled: true

- name: Enable containerd.service
  systemd:
    name: containerd.service
    daemon_reload: true
    enabled: true
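If you also want the services running right away, not only enabled for the next boot, the same module takes a state parameter, e.g.:
- name: Enable and start docker.service
  systemd:
    name: docker.service
    state: started
    enabled: true
    daemon_reload: true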

Related

Why can't I reach this helm chart defined app? (in Minikube)

I've used helm create helloworld-chart to create an application using a local Docker image I created. I think the issue is that I have the ports all messed up.
DOCKER PIECES
--------------------------
Dockerfile
FROM busybox
ADD index.html /www/index.html
EXPOSE 8008
CMD httpd -p 8008 -h /www; tail -f /dev/null
(I also have an index.html file in the same directory as my Dockerfile)
Create Docker Image (and publish locally)
docker build -t hello-world .
I then ran this with docker run -p 8080:8008 hello-world and verified I am able to reach it from localhost:8080. (I then stopped that docker container)
I also verified this image was in docker locally with docker image ls and got the output:
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
hello-world   latest   8640a285e98e   20 minutes ago   1.23MB
HELM PIECES
--------------------------
Created a helm chart via helm create helloworld-chart.
Edited the files:
values.yaml
# ...elided because left the same as default...
image:
  repository: hello-world
  tag: latest
  pullPolicy: IfNotPresent
# ...elided because left the same as default...
service:
  name: hello-world
  type: NodePort # Chose this because MiniKube doesn't have LoadBalancer installed
  externalPort: 30007
  internalPort: 8008
  port: 80
service.yaml
# ...elided because left the same as default...
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.internalPort }}
      nodePort: {{ .Values.service.externalPort }}
deployment.yaml
# ...elided because left the same as default...
spec:
  # ...elided because left the same as default...
  containers:
    ports:
      - name: http
        containerPort: {{ .Values.service.internalPort }}
        protocol: TCP
I verified this "looked" correct with both helm lint helloworld-chart and helm template ./helloworld-chart
HELM AND MINIKUBE COMMANDS
--------------------------
# Packaging my helm
helm package helloworld-chart
# Installing into Kubernetes (Minikube)
helm install helloworld helloworld-chart-0.1.0.tgz
# Getting an external IP
minikube service helloworld-helloworld-chart
When I do that, it gives me an external IP like http://172.23.13.145:30007 and opens it in a browser, but it just says the site cannot be reached. What do I have mismatched?
UPDATE/MORE INFO
---------------------------------------
When I check the pod, it's in a CrashLoopBackOff state. However, I see nothing in the logs:
kubectl logs -f helloworld-helloworld-chart-6c886d885b-grfbc
Logs:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
I'm not sure why it's exiting.
The issue was that Minikube was actually looking in the public Docker image repo and finding something also called hello-world. It was not finding my docker image since "local" to minikube is not local to the host computer's docker. Minikube has its own docker running internally.
You have to add your image to minikube's local repo: minikube cache add hello-world:latest.
You need to change the pull policy: imagePullPolicy: Never
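An alternative worth noting (equivalent in effect, not from the answer above): point your shell at minikube's internal Docker daemon and build the image there, so no cache step is needed:
# Build directly against minikube's internal Docker daemon
eval $(minikube docker-env)
docker build -t hello-world .
Either way, imagePullPolicy: Never is still needed so the kubelet never tries to pull hello-world from Docker Hub.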

Injecting host network into container in CircleCI

I have this CircleCI configuration.
version: 2
jobs:
  build:
    docker:
      - image: docker:18.09.2-git
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
        name: elasticsearch
    working_directory: ~/project
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: test
          command: |
            docker run --rm \
              --network host \
              byrnedo/alpine-curl \
              elasticsearch:9200
I'm looking for a way to allow my new container to access the elasticsearch port 9200. With this configuration, elasticsearch is not even a known host name.
Creating an extra network is not possible; I get this error message: container sharing network namespace with another container or host cannot be connected to any other network
Host networking seems to work only in the primary image.
How could I do this?
That will not work. Containers started during a build via the docker run command run on a remote Docker engine; they cannot talk to the containers running as part of the executor over TCP, since the two are isolated. Only docker exec works across that boundary.
The solution will ultimately depend on your end goal, but one option might be to remove the Elasticsearch image/container from the executor and use Docker Compose to get both images to talk to each other within the build.
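A rough sketch of that Compose option, using the images from the question (the compose file itself and the single-node discovery setting are my additions, not from the answer; you would also need to wait for Elasticsearch to become ready before curling it):
# docker-compose.yml (hypothetical)
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    environment:
      - discovery.type=single-node
  test:
    image: byrnedo/alpine-curl
    command: http://elasticsearch:9200
    depends_on:
      - elasticsearch
On the default Compose network the service name elasticsearch resolves from the test container, which is exactly what the host-network attempt was missing.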

Docker on Windows 10 Home - connect to the Docker engine from inside a Docker container

When creating a Jenkins Docker container, it is very useful to be able to connect to the Docker daemon. That way, I can run docker commands inside the Jenkins container.
For example, after starting the Jenkins Docker container, I would like to docker exec -it container-id bash into it and run docker ps.
On Linux you can bind-mount /var/run/docker.sock. On Windows this seems not to be possible; the solution is to use named pipes. So, in my docker-compose.yml file, I tried to mount the named pipe.
version: '2'
services:
  jenkins:
    image: jenkins-docker
    build:
      context: ./
      dockerfile: Dockerfile_docker
    ports:
      - "8080:8080"
      - "50000:50000"
    networks:
      - jenkins
    volumes:
      - jenkins_home:/var/jenkins_home
      - \\.\pipe\docker_engine:\\.\pipe\docker_engine
      # - /var/run/docker.sock:/var/run/docker.sock
      # - /path/to/postgresql/data:/var/run/postgresql/data
      # - etc.
Starting docker-compose with this file, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
How can I set up the docker-compose file so that I can use docker.sock (or Docker) inside the started container?
On Linux you can use something like volumes: /var/run/docker.sock:/var/run/docker.sock. This does not work in a Windows environment: when you share that folder (/var) into the Oracle VM VirtualBox VM, it never gets an IP.
You can expose the daemon on tcp://localhost:2375 without TLS in the settings. This way you can configure Jenkins to use the Docker API instead of the socket. I encourage you to read this article by Nick Janetakis about "Understanding how the Docker Daemon and the Docker CLI work together".
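With the daemon exposed over TCP like that, the pipe mount can be dropped and replaced by an environment variable; a minimal sketch for the compose file above (host.docker.internal is Docker Desktop's alias for the host machine, an assumption that may not hold on Docker Toolbox):
services:
  jenkins:
    image: jenkins-docker
    environment:
      - DOCKER_HOST=tcp://host.docker.internal:2375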
There are also several Docker plugins for Jenkins that allow this connection.
Also, you can find additional information in the Docker plugin documentation on wiki.jenkins.io:
def dockerCloudParameters = [
  connectTimeout: 3,
  containerCapStr: '4',
  credentialsId: '',
  dockerHostname: '',
  name: 'docker.local',
  readTimeout: 60,
  serverUrl: 'unix:///var/run/docker.sock', // <-- Replace here by the tcp address
  version: ''
]
EDIT 1:
I don't know if it is useful, but the Docker daemon on Windows is located at C:\ProgramData\docker according to the Docker daemon configuration doc.
EDIT 2:
You need to explicitly tell the container to use the host network, because you want to expose both Jenkins and the Docker API.
Following this documentation, you only have to add --network=host (or network_mode: 'host' in docker-compose) to your container/service. For further information, you can read this article to understand the purpose of this network mode.
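In compose syntax that is a single line on the service (a sketch; note that published ports are ignored in host mode):
services:
  jenkins:
    image: jenkins-docker
    network_mode: 'host'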
My first try was to start a Docker environment using the "Docker Quickstart Terminal". This is a good solution when running Docker commands within that environment.
But installing a complete CI/CD Jenkins environment via Docker means that WITHIN the Jenkins Docker container you need to access the Docker daemon. After trying many solutions and reading many posts, this did not work. @Paul Rey, thank you very much for trying all kinds of routes.
A good solution is to get an Ubuntu virtual machine and install it via Oracle VM VirtualBox. It is then VERY IMPORTANT to install Docker via this official description.
Before installing Docker, you of course need to install curl, Git, etc.
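For reference, the quickest route in that official description is the convenience script; a sketch (the repository-based install documented there gives more control over versions):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh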

How to pass docker run parameter via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a Logstash container, but I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via a Kubernetes pod. My pod looks like:
spec:
  containers:
    - name: logstash-logging
      image: "logstash:latest"
      command: ["logstash", "-f", "/config-dir/logstash.conf"]
      volumeMounts:
        - name: configs
          mountPath: /config-dir/logstash.conf
How can I run the Docker container with the --log-driver=gelf parameter via Kubernetes? Thanks.
Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction for logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for the Docker daemon in the per-node configuration/Salt template.
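In practice that means configuring each node's Docker daemon rather than the pod spec, e.g. in /etc/docker/daemon.json (the gelf-address below is a placeholder for your own Graylog/Logstash endpoint):
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://graylog.example.com:12201"
  }
}
The daemon has to be restarted after the change, and every container on that node then uses the driver.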

Weave + Ansible Docker Module

I'm using weave to launch some containers which form a database cluster. I have gotten this working manually on two hosts in EC2 by doing the following:
$HOST1> weave launch
$HOST2> weave launch $HOST1
$HOST1> eval $(weave env)
$HOST2> eval $(weave env)
$HOST1> docker run --name neo-1 -d -P ... my/neo4j-cluster
$HOST2> docker run --name neo-2 -d -P ... my/neo4j-cluster
$HOST3> docker run --name neo-1 -d -P -e ARBITER=true ... my/neo4j-cluster
I can check the logs and everything starts up OK.
When using Ansible I can get the above to work using the command module and an environment variable:
- name: Start Neo Arbiter
  command: 'docker run --name neo-2 -d -P ... my/neo4j-cluster'
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
As that's basically all eval $(weave env) does.
But when I use the docker module for ansible, even with the docker_url parameter set to the same thing you see above with DOCKER_HOST, DNS does not resolve between hosts. Here's what that looks like:
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True

OR

- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
Neither of those works. DNS does not resolve, so the servers never start. I do have other server options (like SERVER_ID for neo4j) set; they are just not shown here for simplicity.
Anyone run into this? I know the docker module for ansible uses docker-py and stuff. I wonder if there's some type of incompatibility with weave?
EDIT
I should mention that when the containers launch they actually show up in WeaveDNS and appear to have been added to the system. I can ping the local hostname of each container as long as I am on the same host. From the other host, though, I cannot ping the containers on the first host. This is despite them registering in WeaveDNS (weave status dns) and weave status showing the correct number of peers and established connections.
This could be caused by the client sending a HostConfig struct in the Docker start request, which is not really how you're supposed to do it but is supported by Docker "for backwards compatibility".
Weave has been fixed to cope, but the fix is not in a released version yet. You could try the latest snapshot version if you're brave.
You can probably kludge it by explicitly setting the DNS resolver to the docker bridge IP in your containers' config - weave has an undocumented helper weave docker-bridge-ip to find this address, and it generally won't change.
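On the plain docker run path that kludge would look roughly like this (a sketch; whether the Ansible docker module of that era exposes a matching dns option depends on the module version):
docker run --dns $(weave docker-bridge-ip) --name neo-3 -d -P ... my/neo4j-cluster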
