Here is what I'm trying to use --
## Items.yaml
removeStrategy:
  rbac: SYNC
  items: NONE
items:
  - kind: multibranch
    name: "{{ item.name }}"
    projectFactory:
      workflowBranchProjectFactory:
        scriptPath: Jenkinsfile
    sourcesList:
      - branchSource:
          source:
            bitbucket:
              repoOwner: "{{ item.owner }}"
              serverUrl: https://github.com
              credentialsId: github_account
              id: github_id
              repository: "{{ item.repo }}"
loop:
  - { name: 'x', owner: 'y', repo: 'z' }
  - { name: 'a', owner: 'b', repo: 'c' }
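Note that loop: is an Ansible task keyword, not part of the JCasC items schema, so it cannot live inside items.yaml itself (and a bitbucket: source pointing at https://github.com looks inconsistent; the GitHub Branch Source plugin uses a github: key instead). If the goal is to have Ansible render one items file per repository, a minimal sketch of a templating task follows; the src/dest paths are assumptions, not from the question:

- name: Render a Jenkins items file per repository
  ansible.builtin.template:
    src: items.yaml.j2
    dest: "/var/jenkins_home/casc_configs/items-{{ item.name }}.yaml"
  loop:
    - { name: 'x', owner: 'y', repo: 'z' }
    - { name: 'a', owner: 'b', repo: 'c' }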
How can I use two anyOf blocks? What I want is that both conditions must be met before the job executes.
stage("Spring App") {
when {
anyOf {
environment name: 'key1', value: 'true'
environment name: 'key2', value: 'true'
}
anyOf {
environment name: 'key3', value: 'string1'
environment name: 'key4', value: 'string2'
environment name: 'key5', value: 'string3'
}
}
I need both conditions to be met. How do I put an AND between the two anyOf blocks?
You can wrap everything into the allOf condition, like this:
allOf {
    anyOf {
        environment name: 'key1', value: 'true'
        environment name: 'key2', value: 'true'
    }
    anyOf {
        environment name: 'key3', value: 'string1'
        environment name: 'key4', value: 'string2'
        environment name: 'key5', value: 'string3'
    }
}
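For completeness, here is a minimal sketch of the whole stage with the combined condition, so the stage only runs when both anyOf groups match (the echo step is a placeholder, not from the original question):

pipeline {
    agent any
    stages {
        stage("Spring App") {
            when {
                allOf {
                    anyOf {
                        environment name: 'key1', value: 'true'
                        environment name: 'key2', value: 'true'
                    }
                    anyOf {
                        environment name: 'key3', value: 'string1'
                        environment name: 'key4', value: 'string2'
                        environment name: 'key5', value: 'string3'
                    }
                }
            }
            steps {
                echo 'Both anyOf groups matched, running the job'
            }
        }
    }
}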
The website is built using Gatsby with Netlify CMS in Bitbucket. The error shows up when I try to change something on the custom page using Netlify CMS (live), but it works perfectly on the local setup. This confuses me; I don't know what is happening or why.
Here's my config.yml
backend:
  name: bitbucket
  repo: repo-name
  branch: master
  auth_type: implicit
  app_id: app-id
  commit_messages:
    create: "Create {{collection}} “{{slug}}”"
    update: "Update {{collection}} “{{slug}}”"
    delete: "Delete {{collection}} “{{slug}}”"
    uploadMedia: "[skip ci] Upload “{{path}}”"
    deleteMedia: "[skip ci] Delete “{{path}}”"

local_backend: true
publish_mode: editorial_workflow
media_folder: static/img
public_folder: /img
(I skipped some of it because it is too long, but here is the setup of the custom page.)
- name: "pages"
label: "Pages"
label_singular: "Page"
create: true
files:
- file: "src/pages/resources/i-mop-xl-operator-resources.md"
label: "i-mop xl operator resources"
name: "i-mop xl operator resources"
fields:
- {
label: "Template Key",
name: "templateKey",
widget: "hidden",
default: "i-mop-xl-operator-resources",
}
- { label: Title, name: title, widget: string }
- { label: Heading, name: heading, widget: string }
- { label: Description, name: description, widget: string }
- {
label: "Seo Description",
name: "seodescription",
widget: "string",
}
- { label: "Seo Keyword", name: "seokeyword", widget: "string" }
- {
label: "Seo Title",
name: "seotitle",
widget: "string",
required: false,
}
And the error is saying:
P.S. this is only happening on the new pages that I've created.
So I've found the solution to this, and I think it is pretty simple: the name should not contain a blank space.
From this:
- file: "src/pages/resources/i-mop-xl-operator-resources.md"
  label: "i-mop xl operator resources"
  name: "i-mop xl operator resources"
To this:
- file: "src/pages/resources/i-mop-xl-operator-resources.md"
  label: "i-mop xl operator resources"
  name: "i-mop-xl-operator-resources"
I'm trying to send customized Slack notifications about the status of GitHub Actions workflows. I've built my custom messages with the help of the https://github.com/8398a7/action-slack integration.
However, I would also like to send the GitHub Actions artifact that is generated with each workflow run to my Slack channel. Is that something that can be done?
Below is the YAML file that I'm using:
...
env:
  SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

jobs:
  smoke-test:
    runs-on: ubuntu-latest
    container:
      image: dannydainton/htmlextra
    steps:
      - name: Cloning Repository
        id: initializing
        uses: actions/checkout@master

      - name: Executing API test suite
        run: newman run "postman_collection.json" --environment "postman_environment.json" --reporters cli,htmlextra --reporter-htmlextra-export report.html

      - if: failure()
        uses: actions/upload-artifact@v2
        id: generating-report
        with:
          name: reports
          path: report.html

      - uses: 8398a7/action-slack@v3
        if: success()
        with:
          status: custom
          custom_payload: |
            {
              text: "Test Execution Passed",
              attachments: [{
                color: 'good',
                text: `Test Execution for ${process.env.AS_WORKFLOW} workflow has SUCCEEDED! :heavy_check_mark:`,
                fields: [{
                  title: "Test Type",
                  value: 'Smoke Test',
                  short: false
                },
                {
                  title: "Overall APIs Status",
                  value: "Healthy :heavy_check_mark:",
                  short: false
                },
                {
                  title: "Repository",
                  value: `${process.env.AS_REPO}`,
                  short: false
                },
                {
                  title: "Author",
                  value: `${process.env.AS_AUTHOR}`,
                  short: false
                },
                {
                  title: "Execution Time",
                  value: `${process.env.AS_TOOK}`,
                  short: false
                },
                {
                  title: "Number of Requests",
                  value: '39',
                  short: true
                },
                {
                  title: "Number of Assertions",
                  value: '64',
                  short: true
                }]
              }]
            }

      - uses: 8398a7/action-slack@v3
        if: failure()
        with:
          status: custom
          custom_payload: |
            {
              text: 'Test Execution Failed :siren_alert::siren_alert:',
              attachments: [{
                color: 'danger',
                text: `Test Execution for ${process.env.AS_WORKFLOW} workflow has FAILED! :x:`,
                fields: [{
                  title: "Test Type",
                  value: 'Smoke Test',
                  short: true
                },
                {
                  title: "Repository",
                  value: `${process.env.AS_REPO}`,
                  short: true
                },
                {
                  title: "Author",
                  value: `${process.env.AS_AUTHOR}`,
                  short: false
                }],
                actions: [{
                  name: "report",
                  text: "View Report",
                  type: "button",
                  value: `https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}`
                }]
              }]
            }
So I need to pass the artifact to the step which sends a Slack notification on failure.
Thanks
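For what it's worth, a Slack incoming webhook cannot carry a file, so one possible approach (an assumption, not something the action-slack integration does for you) is to upload report.html directly with Slack's files.upload Web API in a separate step. SLACK_BOT_TOKEN and the channel ID below are hypothetical placeholders:

      - name: Upload report file to Slack
        if: failure()
        run: |
          # report.html is still in the workspace because this runs in the same job,
          # so there is no need to re-download the uploaded artifact
          curl -sS -H "Authorization: Bearer ${{ secrets.SLACK_BOT_TOKEN }}" \
               -F channels=C0123456789 \
               -F title="Smoke test report" \
               -F file=@report.html \
               https://slack.com/api/files.upload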
I created a namespace to collect logs with Filebeat and save them to Elasticsearch.
Why aren't the Kubernetes fields being saved to Elasticsearch, as in the example below?
Kubernetes fields:
"kubernetes" : {
  "labels" : {
    "app" : "MY-APP",
    "pod-template-hash" : "959f54cd",
    "serving" : "true",
    "version" : "1.0",
    "visualize" : "true"
  },
  "pod" : {
    "uid" : "e20173cb-3c5f-11ea-836e-02c1ee65b375",
    "name" : "MY-APP-959f54cd-lhd5p"
  },
  "node" : {
    "name" : "ip-xxx-xx-xx-xxx.ec2.internal"
  },
  "container" : {
    "name" : "istio"
  },
  "namespace" : "production",
  "replicaset" : {
    "name" : "MY-APP-959f54cd"
  }
}
Currently it is being saved like this:
"_source" : {
"#timestamp" : "2020-01-23T12:33:14.235Z",
"ecs" : {
"version" : "1.0.0"
},
"host" : {
"name" : "worker-node1"
},
"agent" : {
"hostname" : "worker-node1",
"id" : "xxxxx-xxxx-xxx-xxxx-xxxxxxxxxxxxxx",
"version" : "7.1.1",
"type" : "filebeat",
"ephemeral_id" : "xxxx-xxxx-xxxx-xxxxxxxxxxxxx"
},
"log" : {
"offset" : xxxxxxxx,
"file" : {
"path" : "/var/lib/docker/containers/xxxx96ec2bfd9a3e4f4ac83581ad90/7fd55e1249aa009df3f8e3250c967bbe541c9596xxxxxac83581ad90-json.log"
}
},
"stream" : "stdout",
"message" : "xxxxxxxx",
"input" : {
"type" : "docker"
}
}
My filebeat config follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
        multiline.pattern: '^[[:space:]]'
        multiline.negate: false
        multiline.match: after
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:
      - add_kubernetes_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      protocol: "http"

    setup.ilm.enabled: false
    ilm.enabled: false
    xpack.monitoring:
      enabled: true
The DaemonSet is shown below:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      hostNetwork: true
      terminationGracePeriodSeconds: 30
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat-oss:7.1.1
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: xxxxxxxxxxxxx
            - name: ELASTICSEARCH_PORT
              value: "9200"
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: inputs
              mountPath: /usr/share/filebeat/inputs.d
              readOnly: true
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: inputs
          configMap:
            defaultMode: 0600
            name: filebeat-inputs
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
Before applying this config to Kubernetes, I removed every existing Filebeat registry from Elasticsearch.
As already stated in my comment, it looks like your ConfigMap is missing the paths: to the containers' logs. It should be something like this:
type: container
paths:
  - /var/log/containers/*${data.kubernetes.container.id}.log
Compare your config file with this one.
I hope it helps.
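For context, a filebeat-inputs ConfigMap along those lines might look like the sketch below; the matchers block mirrors the stock Filebeat Kubernetes manifest and is an assumption, not part of the original answer. Note that the container input reads /var/log/containers/*.log, so /var/log must also be mounted into the DaemonSet pod:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: container
      paths:
        - /var/log/containers/*${data.kubernetes.container.id}.log
      processors:
        # Attach kubernetes.* fields by matching each log file back to its pod
        - add_kubernetes_metadata:
            matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"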
I had the same problem and resolved it by removing the hostNetwork: true configuration from the DaemonSet. With host networking enabled, the pod name was the same as the node name; you can see this in the Filebeat startup log.
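In the DaemonSet above, that amounts to deleting a single line from the pod spec (sketch):

spec:
  template:
    spec:
      serviceAccountName: filebeat
      # hostNetwork: true   # removed: with host networking the pod picks up the node's hostname
      terminationGracePeriodSeconds: 30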
Today I have a loop that allows me to start multiple Docker containers:
- name: start container current
  docker_container:
    name: "{{ item.name }}"
    image: "{{ item.name }}:{{ item.version }}"
    state: started
    recreate: true
    ports:
      - "{{ item.ports }}"
    volumes:
      - /opt/application/i99/{{ item.type }}/logs:/opt/application/i99/{{ item.type }}/logs
    env_file: /opt/application/i99/{{ item.type }}/{{ item.name }}/{{ item.name }}-PreProd-config.list
    env:
      LOG_FILE_WS: "/opt/application/i99/{{ item.type }}/logs/{{ hostname }}_WS.log"
  with_items:
    - { name: 'backend', ports: '8000:8000', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
    - { name: 'connecteur', ports: '8400:8400', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
    - { name: 'api-alerting', ports: '8100:8100', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
    - { name: 'api-tracking', ports: '8200:8200', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
I have an extra variable {{ RCD_APIS }} that contains a list of all my container names. I would like to loop over that list, define the following variables conditionally on the name, and run the containers.
Vars to define: ports, type, version.
I want to do something like this:
- name: start container current
  docker_container:
    name: "{{ item }}"
    image: "{{ item }}:{{ version }}"
    state: started
    user: adi99api
    recreate: true
    ports:
      - "{{ ports }}"
    volumes:
      - /opt/application/i99/{{ type }}/logs:/opt/application/i99/{{ type }}/logs
    env_file: /opt/application/i99/{{ type }}/{{ item }}/{{ name }}-PreProd-config.list
    env:
      LOG_FILE_WS: "/opt/application/i99/{{ type }}/logs/{{ hostname }}_WS.log"
  with_items: "{{ RCD_APIS.split(',') }}"
  when: ( item == "backend", ports: '8000:8000', type: 'current', version: '{{RCD_VERSION_CURRENT}}') or
        ( item == "connecteur", ports: '8400:8400', type: 'pilote', version: '{{RCD_VERSION_PILOTE}}')
# in a vars file, or a `vars` section
---
docker_containers_config:
  backend:
    ports: '8000:8000'
    type: current
    version: '{{RCD_VERSION_CURRENT}}'
  connecteur:
    ports: '8400:8400'
    type: current
    version: '{{RCD_VERSION_CURRENT}}'
  api-alerting:
    ports: '8100:8100'
    type: 'current'
    version: '{{RCD_VERSION_CURRENT}}'
  api-tracking:
    ports: '8200:8200'
    type: 'current'
    version: '{{RCD_VERSION_CURRENT}}'
# In your tasks
- name: start container current
  docker_container:
    name: "{{ item }}"
    image: "{{ item }}:{{ docker_containers_config[item].version }}"
    state: started
    recreate: true
    ports:
      - "{{ docker_containers_config[item].ports }}"
    volumes:
      - /opt/application/i99/{{ docker_containers_config[item].type }}/logs:/opt/application/i99/{{ docker_containers_config[item].type }}/logs
    env_file: /opt/application/i99/{{ docker_containers_config[item].type }}/{{ item }}/{{ item }}-PreProd-config.list
    env:
      LOG_FILE_WS: "/opt/application/i99/{{ docker_containers_config[item].type }}/logs/{{ hostname }}_WS.log"
  with_items: "{{ RCD_APIS.split(',') }}"
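With this layout, RCD_APIS only needs to be supplied as a comma-separated extra var. A hypothetical invocation (playbook name and version are placeholders, not from the original answer):

ansible-playbook deploy.yml \
  -e "RCD_APIS=backend,connecteur,api-alerting,api-tracking" \
  -e "RCD_VERSION_CURRENT=1.0.0"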