How to run Kafka from a Jenkins Pipeline using Groovy and the Kubernetes plugin?

I couldn't find anything this specific around the internet, so I kindly ask for your help with this one :)
Context
I have defined a podTemplate with a few containers, using containerTemplate:
ubuntu:trusty (14.04 LTS)
postgres:9.6
and finally, wurstmeister/kafka:latest
Doing some Groovy coding in the Pipeline, I install several dependencies into my ubuntu:trusty container, such as the latest Git, Golang 1.9, etc., and I also check out my project from GitHub.
After all those dependencies are dealt with, I manage to compile, run migrations (which means Postgres is up and running and my app is connected to it), and spin up my app just fine, until it complains that Kafka is not running because it couldn't connect to any broker.
Debugging sessions
After some debug sessions I have ps aux'ed each and every container to make sure all the services I needed were running in their respective containers, such as:
container(postgres) {
    sh 'ps aux'              // Shows Postgres, as expected
}
container(linux) {
    sh 'ps aux | grep post'  // Does not show Postgres, as expected
    sh 'ps aux | grep kafka' // Does not show Kafka, as expected
}
container(kafka) {
    sh 'ps aux'              // Does NOT show any Kafka running
}
I have also set the KAFKA_ADVERTISED_HOST_NAME variable to 127.0.0.1 as explained in the image docs, without success, using the following code:
containerTemplate(
    name: kafka,
    image: 'wurstmeister/kafka:latest',
    ttyEnabled: true,
    command: 'cat',
    envVars: [
        envVar(key: 'KAFKA_ADVERTISED_HOST_NAME', value: '127.0.0.1'),
        envVar(key: 'KAFKA_AUTO_CREATE_TOPICS_ENABLE', value: 'true'),
    ]
)
Questions
The image documentation (https://hub.docker.com/r/wurstmeister/kafka/) is explicit about starting a Kafka cluster with docker-compose up -d.
1) How do I actually do that with this Kubernetes plugin + Docker + Groovy + Pipeline combo in Jenkins?
2) Do I actually need to do that? The Postgres image docs (https://hub.docker.com/_/postgres/) also mention running the instance with docker run, but I didn't need to do that at all, which makes me think that containerTemplate is probably doing it automatically. So why is it not doing this for the Kafka container?
Thanks!

So... the problem is with this image and the way Kubernetes works with it.
Kafka does not start because you override the Docker CMD with command: 'cat', which means start-kafka.sh never runs.
Because of that I suggest using a different image. The template below worked for me.
containerTemplate(
    name: 'kafka',
    image: 'quay.io/jamftest/now-kafka-all-in-one:1.1.0.B',
    resourceRequestMemory: '500Mi',
    ttyEnabled: true,
    ports: [
        portMapping(name: 'zookeeper', containerPort: 2181, hostPort: 2181),
        portMapping(name: 'kafka', containerPort: 9092, hostPort: 9092)
    ],
    command: 'supervisord -n',
    envVars: [
        containerEnvVar(key: 'ADVERTISED_HOST', value: 'localhost')
    ]
),
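As a quick sanity check (just a sketch, not part of the original answer; it mirrors the ps aux debugging from the question, assumes the container is still named 'kafka', and assumes nc is available in the image), you can verify from the pipeline that ZooKeeper and the broker are actually up before starting your app:
container('kafka') {
    // supervisord should have started both ZooKeeper and the Kafka broker
    sh 'ps aux | grep -i zookeeper | grep -v grep'
    sh 'ps aux | grep -i kafka | grep -v grep'
    // optionally wait until the broker port accepts connections
    sh 'for i in $(seq 1 30); do nc -z localhost 9092 && break; sleep 2; done'
}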

Related

Running Karate UI tests with “driverTarget” in GitLab CI

Question was:
I would like to run Karate UI tests using the driverTarget options to test my Java Play app which is running locally during the same job with sbt run.
I have a simple assertion to check for a property, but whenever the test runs I keep getting "description":"TypeError: Cannot read property 'getAttribute' of null". This is my karate-config.js:
if (env === 'ci') {
    karate.log('using environment:', env);
    karate.configure('driverTarget',
        {
            docker: 'justinribeiro/chrome-headless',
            showDriverLog: true
        });
}
This is my test scenario:
Scenario: test 1: some test
    Given driver 'http://localhost:9000'
    And waitUntil("document.readyState == 'complete'")
    Then match attribute('some selector', 'some attribute') == 'something'
My guess is that because justinribeiro/chrome-headless is running in its own container, localhost:9000 is different in the container compared to what's running outside of it.
Is there any workaround for this? thanks
A Docker container cannot reach a port on the host via localhost, as the question already guessed: "My guess is that because justinribeiro/chrome-headless is running in its own container, localhost:9000 is different in the container compared to what's running outside of it."
To get around this and have the Docker container communicate with the app running on a localhost port of the host, use the special hostname host.docker.internal.
Change to make:
From: Given driver 'http://localhost:9000'
To: Given driver 'http://host.docker.internal:9000'
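If you want to keep the feature file free of environment-specific hosts, one option (just a sketch, not from the original answer; baseUrl is a made-up variable defined in karate-config.js) is to choose the host per environment in karate-config.js:
// sketch: pick the app URL per environment; 'config' is the object returned by karate-config.js
if (env === 'ci') {
    config.baseUrl = 'http://host.docker.internal:9000';
} else {
    config.baseUrl = 'http://localhost:9000';
}
The scenario can then start with Given driver baseUrl and work both locally and in CI.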
Additionally, I was able to use the ptrthomas/karate-chrome image in CI (GitLab) by inserting the following inside my gitlab-ci.yml file:
stages:
  - uiTest

featureOne:
  stage: uiTest
  image: docker:latest
  cache:
    paths:
      - .m2/repository/
  services:
    - docker:dind
  script:
    - docker run --name karate --rm --cap-add=SYS_ADMIN -v "$PWD":/karate -v "$HOME"/.m2:/root/.m2 ptrthomas/karate-chrome &
    - sleep 45
    - docker exec -w /karate karate mvn test -DargLine='-Dkarate.env=docker' -Dtest=testParallel
  allow_failure: true
  artifacts:
    paths:
      - reports
      - ${CLOUD_APP_NAME}.log
My karate-config.js file looks like this:
if (karate.env == 'docker') {
    karate.configure('driver', {
        type: 'chrome',
        showDriverLog: true,
        start: false,
        beforeStart: 'supervisorctl start ffmpeg',
        afterStop: 'supervisorctl stop ffmpeg',
        videoFile: '/tmp/karate.mp4'
    });
}

Local Vault using docker-compose

I'm having big trouble running Vault in docker-compose.
My requirements are:
running as a daemon (so it restarts when I restart my Mac)
secrets being persisted between container restarts
no human intervention between restarts (unsealing, etc.)
using a generic token
My current docker-compose
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
    ports:
      - "8200:8200"
    volumes:
      - ./storagedc/vault/file:/vault/file
However, when the container restarts, I get this log:
==> Vault server configuration:
Api Address: http://0.0.0.0:8200
Cgo: disabled
Cluster Address: https://0.0.0.0:8201
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v1.2.1
Error initializing Dev mode: Vault is already initialized
Is there any recommendation on that matter?
I'm going to pseudo-code an answer to work around the problems specified, but please note that this is a massive hack and should NEVER be deployed in production, as a hard-coded master key and a single unseal key are COLOSSALLY INSECURE.
So, you want a test vault server, with persistence.
You can accomplish this, but it will need a little bit of work because of the default behavior of the vault container: if you just start it, it starts in dev mode, which won't allow for persistence. Just adding persistence via the environment variable won't solve that problem entirely, because it will conflict with the default start mode of the container.
So we need to replace the container's entrypoint script with something that does what we want it to do instead.
First we copy the script out of the container:
$ docker create --name vault vault:1.2.1
$ docker cp vault:/usr/local/bin/docker-entrypoint.sh .
$ docker rm vault
For simplicity, we're going to edit the file and mount it into the container using the docker-compose file. I'm not going to make it really functional - just enough to get it to do what's desired. The entire point here is a sample, not something that is usable in production.
My customizations all start at about line 98 - first we launch a dev-mode server in order to record the unseal key, then we terminate the dev mode server.
# Here's my customization:
if [ ! -f /vault/unseal/sealfile ]; then
    # start in dev mode, in the background, to record the unseal key
    su-exec vault vault server \
        -dev -config=/vault/config \
        -dev-root-token-id="$VAULT_DEV_ROOT_TOKEN_ID" \
        2>&1 | tee /vault/unseal/sealfile &
    while ! grep -q 'core: vault is unsealed' /vault/unseal/sealfile; do
        sleep 1
    done
    kill %1
fi
Next we check for supplemental config. This is where the extra config goes for disabling TLS, and for binding the appropriate interface.
if [ -n "$VAULT_SUPPLEMENTAL_CONFIG" ]; then
echo "$VAULT_SUPPLEMENTAL_CONFIG" > "$VAULT_CONFIG_DIR/supplemental.json"
fi
Then we launch vault in 'release' mode:
if [ "$(id -u)" = '0' ]; then
set -- su-exec vault "$#"
"$#"&
Then we get the unseal key from the sealfile:
    unseal=$(sed -n 's/Unseal Key: //p' /vault/unseal/sealfile)
    if [ -n "$unseal" ]; then
        while ! vault operator unseal "$unseal"; do
            sleep 1
        done
    fi
We just wait for the process to terminate:
    wait
    exit $?
fi
There's a full gist for this on github.
Now the docker-compose.yml for doing this is slightly different from your own:
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    command: [ 'vault', 'server', '-config=/vault/config' ]
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
      VAULT_SUPPLEMENTAL_CONFIG: '{"ui":true, "listener": {"tcp":{"address": "0.0.0.0:8200", "tls_disable": 1}}}'
      VAULT_ADDR: "http://127.0.0.1:8200"
    ports:
      - "8200:8200"
    volumes:
      - ./vault:/vault/file
      - ./unseal:/vault/unseal
      - ./docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh
    cap_add:
      - IPC_LOCK
The command is the command to execute; it is what ends up in the "$@" & part of the modified script.
I've added VAULT_SUPPLEMENTAL_CONFIG for the non-dev run. It needs to specify the listener address, and it needs to turn off TLS. I also added the ui, so I can access it at http://127.0.0.1:8200/ui. Handling this variable is part of the changes I made to the script.
Because this is all local, for test purposes, I'm mounting ./vault as the data directory, ./unseal as the place to record the unseal key, and ./docker-entrypoint.sh as the entrypoint script.
I can docker-compose up this and it launches a persistent Vault. There are some errors in the log as the script tries to unseal before the server has launched, but it works, and the data persists across multiple docker-compose runs.
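A quick way to check the result (a sketch, not part of the original answer; it assumes the vault CLI is also installed on the host):
$ docker-compose up -d
$ docker-compose logs -f vault-dev                  # watch the background unseal loop finish
$ VAULT_ADDR=http://127.0.0.1:8200 vault status     # "Sealed" should report false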
Again, to mention that this is completely unsuitable for any form of long-term use. You're better off using docker's own secrets engine if you're doing things like this.
I'd like to suggest a simpler solution for local development with docker-compose:
Vault is always unsealed
Vault UI is enabled and accessible at http://localhost:8200/ui/vault on your dev machine
Vault has a predefined root token which can be used by services to communicate with it
docker-compose.yml
vault:
  hostname: vault
  container_name: vault
  image: vault:1.12.0
  environment:
    VAULT_ADDR: "http://0.0.0.0:8200"
    VAULT_API_ADDR: "http://0.0.0.0:8200"
  ports:
    - "8200:8200"
  volumes:
    - ./volumes/vault/file:/vault/file:rw
  cap_add:
    - IPC_LOCK
  entrypoint: vault server -dev -dev-listen-address="0.0.0.0:8200" -dev-root-token-id="root"
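For example (a sketch, not part of the original answer; secret/hello is just an illustrative path), any service or developer can then talk to Vault using the predefined root token:
$ export VAULT_ADDR=http://localhost:8200
$ export VAULT_TOKEN=root
$ vault kv put secret/hello foo=bar    # write a test secret
$ vault kv get secret/hello            # read it back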

configure docker variables with ansible

I have a Docker image for an FTP server in a repository. This image will be used on several machines, and I need to deploy the container and change the PORT variable depending on the destination machine.
This is my image (I've deleted the lines for the proftpd installation because they are not relevant to this case):
FROM alpine:3.5
ARG vcs_ref="Unknown"
ARG build_date="Unknown"
ARG build_number="1"
LABEL org.label-schema.vcs-ref=$vcs_ref \
org.label-schema.build-date=$build_date \
org.label-schema.version="alpine-r${build_number}"
ENV PORT=10000
COPY assets/port.conf /usr/local/etc/ports.conf
COPY replace.sh /replace.sh
#It is for a proFTPD Server
CMD ["/replace.sh"]
My port.conf file (irrelevant information also removed for this case):
# This is a basic ProFTPD configuration file (rename it to
# 'proftpd.conf' for actual use. It establishes a single server
# and a single anonymous login. It assumes that you have a user/group
# "nobody" and "ftp" for normal operation and anon.
ServerName "ProFTPD Default Installation"
ServerType standalone
DefaultServer on
# Port 21 is the standard FTP port.
Port {{PORT}}
.
.
.
And the replace.sh script is:
#!/bin/bash
set -e
[ -z "${PORT}" ] && echo >&2 "PORT is not set" && exit 1
sed -i "s#{{PORT}}#$PORT#g" /usr/local/etc/ports.conf
/usr/local/sbin/proftpd -n -c /usr/local/etc/proftpd.conf
... Is there any way to avoid using replace.sh and have Ansible replace the PORT variable in the /usr/local/etc/proftpd.conf file inside the container?
My actual ansible script for container is:
- name: (ftpd) Run container
  docker_container:
    name: "myimagename"
    image: "myimage"
    state: present
    pull: true
    restart_policy: always
    env:
      "PORT": "{{ myportUsingAnsible }}"
    networks:
      - name: "{{ network }}"
Summing up, all I need is to use Ansible to replace the configuration variable instead of using a shell script that replaces variables before starting services. Is that possible?
Many thanks
You are using the docker_container module, which needs a pre-built image. The file port.conf is baked inside the image. What you need to do is set a static port inside this file: inside the container, you always use the static port 21, and depending on the machine, you map this port onto a different port using Ansible.
Inside port.conf always use port 21
# Port 21 is the standard FTP port.
Port 21
The ansible task would look like:
- name: (ftpd) Run container
  docker_container:
    name: "myimagename"
    image: "myimage"
    state: present
    pull: true
    restart_policy: always
    networks:
      - name: "{{ network }}"
    ports:
      - "{{ myportUsingAnsible }}:21"
Now when you connect to the container, you need to use <hostname>:{{ myportUsingAnsible }}. This is the standard Docker way of doing things: the port inside the image is static, and you change the port mappings based on the available ports that you have.
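As a sanity check (a sketch, not part of the original answer; it uses the stock wait_for module and assumes the play targets the machine that published the port), you can verify the mapped port is reachable after starting the container:
- name: (ftpd) Wait for the mapped FTP port to become reachable
  wait_for:
    host: "127.0.0.1"
    port: "{{ myportUsingAnsible }}"
    timeout: 30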

Ansible w/ Docker - Show current Container state

I'm working on a little Ansible project in which I'm using Docker containers.
I'll keep my question short:
I want to get the state of a running Docker container!
What I mean by that is that I want to get the current state of the container that Docker shows you when using the "docker ps" command.
Examples would be:
Up
Exited
Restarting
I want to get one of those results for a specific container, but without using the command or shell module!
KR
As of Ansible 2.8 you can use the docker_container_info module, which essentially returns the output of docker inspect <container>:
- name: Get infos on container
  docker_container_info:
    name: my_container
  register: result

- name: Does container exist?
  debug:
    msg: "The container {{ 'exists' if result.exists else 'does not exist' }}"

- name: Print the status of the container
  debug:
    msg: "The container status is {{ result.container['State']['Status'] }}"
  when: result.exists
With my Docker version, State contains this:
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8235,
"ExitCode": 0,
"Error": "",
"StartedAt": "2019-01-25T14:10:08.3206714Z",
"FinishedAt": "0001-01-01T00:00:00Z"
}
See https://docs.ansible.com/ansible/2.8/modules/docker_container_info_module.html for more details.
Unfortunately, none of the modules around docker can currently "List containers".
I did the following as a workaround to grab the status:
- name: get container status
  shell: docker ps -a -f name={{ container }} --format {%raw%}"table {{.Status}}"{%endraw%} | awk 'FNR == 2 {print}' | awk '{print $1}'
  register: status
The result will then be available in the status variable.
This worked for me:
- name: Get container status
  shell: docker inspect --format={{ '{{.State.Running}}' }} {{ container_name }}
  register: status

# Start the container if it is not running
- name: Start the container if it is in stopped state
  shell: docker start {{ container_name }}
  when: status.stdout != "true"
Edit: If you are running Ansible 2.8+ you can use docker_container_info. See David Pärsson's answer for details.
Here is one way to craft it using the docker_container module (note that it will create the container if it does not exist):
- name: "Check if container is running"
docker_container:
name: "{{ container_name }}"
state: present
register: container_test_started
ignore_errors: yes
- set_fact:
container_exists: "{{ container_test_started.ansible_facts is defined }}"
- set_fact:
container_is_running: "{{ container_test_started.ansible_facts is defined and container_test_started.ansible_facts.docker_container.State.Status == 'running' }}"
container_is_paused: "{{ container_test_started.ansible_facts is defined and container_test_started.ansible_facts.docker_container.State.Status == 'paused' }}"
For me the gotcha was that if the container doesn't exist, ansible_facts is not defined. If it does exist, though, then it contains basically the whole docker inspect <container> output, so I navigate that for the status.
If you just need to short-circuit, a simpler alternative would be to move the desired set_fact'd value into a failed_when on the docker_container task, as sketched below.
I do it through set_fact to keep my options open for forking behavior elsewhere, e.g. stop, do a task, then put things back how they were.
I included paused because it is commonly forgotten as a state :)
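A minimal sketch of that failed_when variant (not from the original answer, reusing the names from the tasks above):
- name: "Fail unless the container is already running"
  docker_container:
    name: "{{ container_name }}"
    state: present
  register: container_test_started
  failed_when: >-
    container_test_started.ansible_facts is not defined or
    container_test_started.ansible_facts.docker_container.State.Status != 'running'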
There is an Ansible module docker_image_facts which gives you information about images. You are looking for something that would be docker_container_facts, which does not currently exist. Good idea though.
The question is not clear, but generally speaking you can use Ansible with Docker in two ways:
by using the docker module of Ansible (http://docs.ansible.com/ansible/latest/docker_module.html):
- name: data container
  docker:
    name: mydata
    image: busybox
    state: present
    volumes:
      - /data
by calling Ansible inside a Dockerfile:
FROM centos:7
RUN yum -y install epel-release && yum -y install ansible
COPY playbook.yml inventory ./
RUN ansible-playbook -i inventory playbook.yml
Your question is slightly unclear.
My best guess: you want the output of 'docker ps'. The first thing that comes to mind is to use the Ansible command module, but you don't want to use it.
Then there are a few Docker modules:
docker - This is the original Ansible module for managing the Docker container life cycle.
docker_container - the Ansible module to manage the life cycle of Docker containers.
You can look into their options and parameters to find exactly what you're looking for.
Here's the complete list of Ansible modules for managing/using Docker.

Rancher CLI update loadbalancer

I'm using Rancher over Kubernetes to create our test/dev environment. First of all, it's a great tool and I'm amazed at how it simplifies the management of such environments.
That said, I have an issue (which is probably more a lack of knowledge about Rancher). I'm trying to automate the deployment via Jenkins, and as we will have several stacks in our test environment, I want to dynamically update the load balancer instances from Jenkins with the Rancher CLI to add routes for new environments.
At the moment, I just try to run this command:
rancher --url http://myrancher_server:8080 --access-key <key> --secret-key <secret> --env dev-test stack create kubernetes-ingress-lbs -r loadbalancer-rancher-service.yml
My docker-compose.yml file is like the following:
version: '2'
services:
  frontend:
    image: 172.19.51.97:5000/frontend
  dev-test-lb:
    image: rancher/load-balancer-service
    ports:
      - "82:8086"
    links:
      - frontend:frontend
My rancher compose file is like this:
version: '2'
services:
  dev-test-lb:
    scale: 4
    lb_config:
      port_rules:
        - source_port: 82
          path: /products
          target_port: 8086
          service: products
        - source_port: 82
          path: /
          target_port: 4201
          service: frontend
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
Now when I execute this I get the following response:
Bad response statusCode [422]. Status [422 status code 422]. Body: [code=NotUnique, fieldName=name, baseType=error] from [http://myrancher_server:8080/v2-beta/projects/1a21/stacks]
Obviously I can't edit an existing stack with a service that already exists. Do you know if it's best practice to do it like that? I checked the help, and I only see the "create" action on "rancher stack", so I'm wondering if we can update?
My rancher server is v1.5.10 and all my rancher agents and Kubernetes drivers are up-to-date.
Thanks a lot for your help fellows :)
OK, just for information, I found that this is possible via the REST API of Rancher.
Check the following link: http://docs.rancher.com/rancher/v1.2/en/api/v2-beta/api-resources/service/
I didn't find that at first because the Googling I did was all about the Rancher CLI. As the CLI is still in beta, it can't do the same things as the REST API.
Basically, just send an update resource query:
PUT rancherserver/v2-beta/projects/1a12/services/
{
    "description": "Loadbalancer for our test env",
    "lbConfig": {
        "portRules": [
            {
                "hostname": "",
                "protocol": "http",
                "sourcePort": "80",
                "targetPort": "4200",
                "path": "/"
            }
        ]
    },
    "name": "kubernetes-ingress-lbs"
}
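From Jenkins, that update could be sent with a plain curl call along these lines (a sketch, not part of the original answer; <project-id>, <service-id> and the key variables are placeholders to fill in, and loadbalancer-update.json is the JSON body above saved to a file):
curl -u "$RANCHER_ACCESS_KEY:$RANCHER_SECRET_KEY" \
     -X PUT \
     -H 'Content-Type: application/json' \
     -d @loadbalancer-update.json \
     "http://myrancher_server:8080/v2-beta/projects/<project-id>/services/<service-id>"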
