Ansible: define a variable's value depending on another variable (Docker)

Today I have a loop that allows me to start multiple Docker containers:
- name: start container current
  docker_container:
    name: "{{ item.name }}"
    image: "{{ item.name }}:{{ item.version }}"
    state: started
    recreate: true
    ports:
      - "{{ item.ports }}"
    volumes:
      - /opt/application/i99/{{ item.type }}/logs:/opt/application/i99/{{ item.type }}/logs
    env_file: /opt/application/i99/{{ item.type }}/{{ item.name }}/{{ item.name }}-PreProd-config.list
    env:
      LOG_FILE_WS: "/opt/application/i99/{{ item.type }}/logs/{{ hostname }}_WS.log"
  with_items:
    - { name: 'backend', ports: '8000:8000', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
    - { name: 'connecteur', ports: '8400:8400', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
    - { name: 'api-alerting', ports: '8100:8100', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
    - { name: 'api-tracking', ports: '8200:8200', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
I have an extra variable {{ RCD_APIS }} that contains a comma-separated list of all my container names. I would like to loop over that list, define the following variables conditionally based on the name, and run the containers.
Variables to define: ports, type, version.
I want to do something like this:
- name: start container current
  docker_container:
    name: "{{ item }}"
    image: "{{ item }}:{{ version }}"
    state: started
    user: adi99api
    recreate: true
    ports:
      - "{{ ports }}"
    volumes:
      - /opt/application/i99/{{ type }}/logs:/opt/application/i99/{{ type }}/logs
    env_file: /opt/application/i99/{{ type }}/{{ item }}/{{ name }}-PreProd-config.list
    env:
      LOG_FILE_WS: "/opt/application/i99/{{ type }}/logs/{{ hostname }}_WS.log"
  with_items: "{{ RCD_APIS.split(',') }}"
  when: ( item == "backend", ports: '8000:8000', type: 'current', version: '{{RCD_VERSION_CURRENT}}') or
        ( item == "connecteur", ports: '8400:8400', type: 'pilote', version: '{{RCD_VERSION_PILOTE}}')

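A when: clause can only evaluate a boolean condition; it cannot assign ports, type or version per item. Instead, keep the per-container settings in a dictionary keyed by the container name and look them up inside the task: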
# in a vars file, or a `vars` section
---
docker_containers_config:
  backend:
    ports: '8000:8000'
    type: current
    version: '{{RCD_VERSION_CURRENT}}'
  connecteur:
    ports: '8400:8400'
    type: current
    version: '{{RCD_VERSION_CURRENT}}'
  api-alerting:
    ports: '8100:8100'
    type: 'current'
    version: '{{RCD_VERSION_CURRENT}}'
  api-tracking:
    ports: '8200:8200'
    type: 'current'
    version: '{{RCD_VERSION_CURRENT}}'
# In your tasks
- name: start container current
  docker_container:
    name: "{{ item }}"
    image: "{{ item }}:{{ docker_containers_config[item].version }}"
    state: started
    recreate: true
    ports:
      - "{{ docker_containers_config[item].ports }}"
    volumes:
      - /opt/application/i99/{{ docker_containers_config[item].type }}/logs:/opt/application/i99/{{ docker_containers_config[item].type }}/logs
    env_file: /opt/application/i99/{{ docker_containers_config[item].type }}/{{ item }}/{{ item }}-PreProd-config.list
    env:
      LOG_FILE_WS: "/opt/application/i99/{{ docker_containers_config[item].type }}/logs/{{ hostname }}_WS.log"
  with_items: "{{ RCD_APIS.split(',') }}"
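This assumes RCD_APIS is a comma-separated string and that the version variables are defined somewhere, e.g. in group_vars or passed as extra vars (the values below are only placeholders):
# group_vars/all.yml (hypothetical values)
RCD_APIS: "backend,connecteur,api-alerting,api-tracking"
RCD_VERSION_CURRENT: "1.0.0"
RCD_VERSION_PILOTE: "1.1.0"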

Related

How to use loops in a CloudBees CasC bundle for Controller configuration?

Here is what I'm trying to use:
## Items.yaml
removeStrategy:
  rbac: SYNC
  items: NONE
items:
  - kind: multibranch
    name: "{{ item.name }}"
    projectFactory:
      workflowBranchProjectFactory:
        scriptPath: Jenkinsfile
    sourcesList:
      - branchSource:
          source:
            bitbucket:
              repoOwner: "{{ item.owner }}"
              serverUrl: https://github.com
              credentialsId: github_account
              id: github_id
              repository: "{{ item.repo }}"
loop:
  - { name: 'x', owner: 'y' , repo: 'z' }
  - { name: 'a', owner: 'b' , repo: 'c' }
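One way to get a loop into the bundle (a sketch only, not from the original thread) is to render Items.yaml with Ansible's template module, since a CasC bundle itself has no loop construct. The template name items.yaml.j2, the casc_items variable and the destination path below are assumptions:
# items.yaml.j2 (Jinja2 template, hypothetical file name)
removeStrategy:
  rbac: SYNC
  items: NONE
items:
{% for item in casc_items %}
  - kind: multibranch
    name: "{{ item.name }}"
    projectFactory:
      workflowBranchProjectFactory:
        scriptPath: Jenkinsfile
    sourcesList:
      - branchSource:
          source:
            bitbucket:
              repoOwner: "{{ item.owner }}"
              serverUrl: https://github.com
              credentialsId: github_account
              id: github_id
              repository: "{{ item.repo }}"
{% endfor %}

# task that renders the bundle file (destination path is hypothetical)
- name: render CasC items bundle
  template:
    src: items.yaml.j2
    dest: /var/casc-bundle/items.yaml
  vars:
    casc_items:
      - { name: 'x', owner: 'y', repo: 'z' }
      - { name: 'a', owner: 'b', repo: 'c' }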

Filebeat + Kubernetes + Elasticsearch does not save specific fields

I created a namespace to collect logs with Filebeat and save them to Elasticsearch.
Why aren't the Kubernetes fields saved to Elasticsearch, as in the example below?
Kubernetes fields:
"kubernetes" : {
"labels" : {
"app" : "MY-APP",
"pod-template-hash" : "959f54cd",
"serving" : "true",
"version" : "1.0",
"visualize" : "true"
},
"pod" : {
"uid" : "e20173cb-3c5f-11ea-836e-02c1ee65b375",
"name" : "MY-APP-959f54cd-lhd5p"
},
"node" : {
"name" : "ip-xxx-xx-xx-xxx.ec2.internal"
},
"container" : {
"name" : "istio"
},
"namespace" : "production",
"replicaset" : {
"name" : "MY-APP-959f54cd"
}
}
Currently it is being saved like this:
"_source" : {
"#timestamp" : "2020-01-23T12:33:14.235Z",
"ecs" : {
"version" : "1.0.0"
},
"host" : {
"name" : "worker-node1"
},
"agent" : {
"hostname" : "worker-node1",
"id" : "xxxxx-xxxx-xxx-xxxx-xxxxxxxxxxxxxx",
"version" : "7.1.1",
"type" : "filebeat",
"ephemeral_id" : "xxxx-xxxx-xxxx-xxxxxxxxxxxxx"
},
"log" : {
"offset" : xxxxxxxx,
"file" : {
"path" : "/var/lib/docker/containers/xxxx96ec2bfd9a3e4f4ac83581ad90/7fd55e1249aa009df3f8e3250c967bbe541c9596xxxxxac83581ad90-json.log"
}
},
"stream" : "stdout",
"message" : "xxxxxxxx",
"input" : {
"type" : "docker"
}
}
My filebeat config follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
        multiline.pattern: '^[[:space:]]'
        multiline.negate: false
        multiline.match: after
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true
    processors:
      - add_cloud_metadata:
      - add_kubernetes_metadata:
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      protocol: "http"
    setup.ilm.enabled: false
    ilm.enabled: false
    xpack.monitoring:
      enabled: true
The DaemonSet is shown below:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      hostNetwork: true
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat-oss:7.1.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: xxxxxxxxxxxxx
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
Before applying the config to Kubernetes, I removed every existing Filebeat index from Elasticsearch.
As already stated in my comment, it looks like your ConfigMap is missing the paths: to the containers' logs. It should be something like this:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
Compare your config file with this one.
I hope it helps.
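For reference, here is a minimal sketch of the filebeat-inputs ConfigMap that the DaemonSet mounts, using the container input plus the add_kubernetes_metadata processor; the exact keys are assumed from the stock 7.x manifest, not from this thread. Note that reading /var/log/containers/*.log also requires mounting /var/log from the host, because those entries are symlinks into /var/lib/docker/containers:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"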
I had the same problem and resolved it by removing the hostNetwork: true setting from the DaemonSet. With hostNetwork enabled, the pod name was the same as the node name, which you can see in the Filebeat startup log.

Redirection in Envoy

I'm testing an Envoy configuration in my staging environment. I need to redirect to a custom page "/oops" whenever a 5xx error occurs while calling test.com. The path "http://test1.com/oops" is accessible directly. Can anybody suggest ideas?
static_resources:
  listeners:
  - name: test-listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 30000 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          access_log:
          - name: envoy.file_access_log
            config:
              path: "/dev/stdout"
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["test1.com"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: test1_service }
            - name: local_service2
              domains: ["test2.com"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: google_service }
          http_filters:
          - name: envoy.router
  clusters:
  - name: test1_service
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [{ socket_address: { address: 172.17.0.3, port_value: 80 }}]
  - name: google_service
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [{ socket_address: { address: google.com, port_value: 80 }}]
You can use HTTP routing in Envoy to accomplish this. Also refer to the route.VirtualHost config.
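As an illustration of the route-level redirect the answer refers to (a sketch only; the field names come from Envoy's RedirectAction, and the matched prefix is hypothetical), a virtual host route can issue a redirect like this. Note that a plain route matches the incoming request rather than the upstream response code, so sending 5xx responses to /oops needs additional handling:
routes:
- match: { prefix: "/old-report" }        # hypothetical prefix that should redirect
  redirect: { path_redirect: "/oops" }
- match: { prefix: "/" }
  route: { cluster: test1_service }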

Docker Hub in Kubernetes gives "unauthorized: incorrect username or password" with correct credentials

I'm trying to pull a private image from Docker Hub and every time I get the error "ImagePullBackOff". Using describe on the pods I see the error "unauthorized: incorrect username or password". I created the secret in the cluster using the following guide: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ with the CLI method and the correct credentials (I checked and I can log in on the website with them). This is my YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-typescript
  labels:
    app: app-typescript
spec:
  selector:
    matchLabels:
      app: app-typescript
  replicas: 1
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: app-typescript
    spec:
      containers:
      - name: api
        image: dockerhuborg/api:latest
        imagePullPolicy: Always
        env:
        - name: "ENV_TYPE"
          value: "production"
        - name: "NODE_ENV"
          value: "production"
        - name: "MONGODB_URI"
          value: "mongodb://mongo-mongodb/db"
        ports:
        - containerPort: 4000
      imagePullSecrets:
      - name: regcred
I found a solution. Apparently the problem is that Docker Hub uses different domains for login and for pulling containers, so you must edit the secret created with the kubectl command and replace the base64 of .dockerconfigjson with a base64-encoded version of this JSON (I know I may have added too many domains, but I have been trying to fix this for about two days and no longer have the patience to find the exact ones):
{
  "auths": {
    "https://index.docker.io/v1/": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "auth.docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "registry.docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "https://registry-1.docker.io/v2/": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "registry-1.docker.io/v2/": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "registry-1.docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "https://registry-1.docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    }
  }
}
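If editing the base64 by hand is error-prone, an alternative sketch (not from the original answer) is to recreate the regcred secret declaratively and let Kubernetes do the encoding via stringData; the https://index.docker.io/v1/ entry is normally the one Docker Hub authentication actually uses:
# regcred secret, with the dockerconfigjson supplied as plain text (placeholder credentials)
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "https://index.docker.io/v1/": {
          "username": "user",
          "password": "password",
          "email": "yourdockeremail@gmail.com",
          "auth": "base64 of string user:password"
        }
      }
    }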

Jenkins kubernetes-plugin: set idle timeout in a pipeline

How do I set "Time in minutes to retain slave when idle" and "Max number of instances" in a pipeline when configuring podTemplate?
I see these two config options under System -> Cloud -> Kubernetes, but I use a pipeline and couldn't figure out how to set them there.
My pipeline currently looks like this:
podTemplate(label: 'docker-go',
    containers: [
        containerTemplate(
            name: 'jnlp',
            image: 'docker.mydomain.com/library/jnlp-slave:2.62',
            command: '',
            args: '${computer.jnlpmac} ${computer.name}',
        ),
        containerTemplate(name: 'docker', image: 'docker.mydomain.com/library/docker:1.12.6', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'golang', image: 'docker.mydomain.com/library/golang:1.8.3', ttyEnabled: true, command: '')
    ],
    volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]
) {
    def image_tag = "docker.mydomain.com/deploy-demo/demo-go:v0.1"
    def workdir = "/go/src/demo-go"
    node('docker-go') {
        stage('setup') {
        }
        stage('clone') {
        }
        stage('compile') {
        }
        stage('build and push image') {
        }
    }
}
OK, I figured it out. Add these two parameters:
idleMinutes: 10
instanceCap: 10
podTemplate(label: 'docker-go',
    containers: [
        containerTemplate(
            name: 'jnlp',
            image: 'docker.mydomain.com/library/jnlp-slave:2.62',
            command: '',
            args: '${computer.jnlpmac} ${computer.name}',
        ),
        containerTemplate(name: 'docker', image: 'docker.mydomain.com/library/docker:1.12.6', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'golang', image: 'docker.mydomain.com/library/golang:1.8.3', ttyEnabled: true, command: '')
    ],
    volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')],
    idleMinutes: 10,
    instanceCap: 10
) {
    def image_tag = "docker.mydomain.com/deploy-demo/demo-go:v0.1"
    def workdir = "/go/src/demo-go"
    node('docker-go') {
        stage('setup') {
        }
        stage('clone') {
        }
        stage('compile') {
        }
        stage('build and push image') {
        }
    }
}
