How to exclude certain resource types from the Velero backup schedule defined in the Helm chart - velero

I need to exclude certain resource types from the Velero backup schedule defined in the Helm chart. Specifically, I need the following resources to be excluded:
--exclude-resources=orders.acme.cert-manager.io,challenges.acme.cert-manager.io,certificaterequests.cert-manager.io
This is my schedule:
staging:
  labels:
    backup: staging
  schedule: "#every 24h"
  template:
    ttl: "480h"
    includedNamespaces:
      - staging
What should the key/value be to achieve this? Will the following work?
staging:
  labels:
    backup: staging
  schedule: "#every 24h"
  template:
    ttl: "480h"
    includedNamespaces:
      - staging
    excludeResources:
      - orders.acme.cert-manager.io
      - challenges.acme.cert-manager.io
      - certificaterequests.cert-manager.io
Or should it be excludedResources? And should the value be a single comma-separated string or an array?
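For reference, the end state I'm after, assuming the chart passes template straight through into the Schedule spec (where, if I'm reading the Velero API docs correctly, the field is named excludedResources and takes a list), would have the template block look like:

template:
  ttl: "480h"
  includedNamespaces:
    - staging
  # assumption: excludedResources mirrors the --exclude-resources CLI flag, as a list of resource names
  excludedResources:
    - orders.acme.cert-manager.io
    - challenges.acme.cert-manager.io
    - certificaterequests.cert-manager.io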

Related

serverless deploy error - Resource handler returned message: "Lambda function xxxxxxxx could not be found"

Hi, can anyone help me with how to deploy Serverless with a specific stage? I have one app with two stages, dev and prod. Deploying to dev works fine and succeeds, but deploying the prod stage always fails with the error below:
Error:
UPDATE_FAILED: FilterOptionLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "Lambda function xxxxxxx-api-prod-xxxxxx could not be found" (RequestToken: ee621797-de45-aa3f-118b-8f512d4a5f62, HandlerErrorCode: NotFound)
I tried commenting out all functions and leaving just one function to test the deploy, but received another error, shown below:
Error:
UPDATE_FAILED: EnterpriseLogAccessIamRole (AWS::IAM::Role)
Unable to retrieve Arn attribute for AWS::Logs::LogGroup, with error message Resource of type 'AWS::Logs::LogGroup' with identifier '{"/properties/LogGroupName":"/aws/lambda/xxxxx-api-prod-api"}' was not found.
Here is my serverless.yml:
org: xxxxxx
app: comeby-api
service: comeby-scheduler-api
frameworkVersion: "3"

custom:
  serverless-offline:
    noPrependStageInUrl: true
  myEnvironment:
    MESSAGE:
      prod: "This is production environment"
      staging: "This is staging environment"
      dev: "This is development environment"

useDotenv: true

provider:
  name: aws
  runtime: nodejs14.x
  region: ap-southeast-1
  stage: prod

functions:
  api:
    handler: handler.handler
    events:
      - httpApi: "*"
  # Alikhsan
  SyncAlikhsanSB2:
  SyncAlikhsanAMT:
  SyncAlikhsanASG:
  SyncAlikhsanIOI:
  SyncAlikhsanJSB:
  SyncAlikhsanSPY:
  # Sync Product
  Shopify:
  SyncSenheng:
  SyncXilnix:
  Puma:
  # Anything
  FilterOption:
  AriadneMaps:
    handler: scheduler/update/AriadneMaps.handler
    description: "Update Ariadne Maps (to view report of total visitor of specific store) in Database"
    memorySize: 512
    timeout: 900
    events:
      - schedule:
          rate: cron(00 22 * * ? *)
          enabled: true
      - http:
          path: /cron/ariadne
          method: get
  SendEmailUpdateProduct:
  ReportPurchasing:
  UpdateProductPricePuma:
  UpdateFootFallCam:

plugins:
  # - serverless-dotenv-plugin
  - serverless-offline
  - serverless-offline-scheduler
I am guessing from those UPDATE_FAILED errors that you are using the same serverless file for both dev and prod deployments. Based on this assumption, you may have to provide separate service names for the two deployments. If you have already deployed to the dev environment with the service name comeby-scheduler-api, the next deployment for the prod stage with the same service name will try to override the previous deployment.
In my case, I tackled this using 2 separate serverless configuration files (one for dev and the other for prod). For dev deployment, my config file serverless-dev.yml looks like the following.
service: service-dev

provider:
  name: aws
  role: arn:aws:iam::<aws-account-id>:role/<my-lambda-role-name>
  region: <region>
  runtime: python3.8
  environment:
    DB_HOST: <host>
    DB_PASSWORD: <pass>
    DB_PORT: <port>
    DB_DATABASE: <db_name>
    DB_USER: <db_user>

plugins:
  - serverless-python-requirements
  - serverless-secrets-plugin
  - serverless-api-compression

package:
  patterns:
    - '!venv/**'
    - '!__pycache__/**'
    - '!node_modules/**'
    - '!test/**'

functions:
  Lambda1:
    handler: lambda_file_name.handler_function_name
    memorySize: 512
    timeout: 900
    events:
      - s3:
          bucket: <bucket_name_for_this_lambda_trigger>
          event: s3:ObjectCreated:*
          rules:
            - prefix: <filter_trigger_file_prefix>
            - suffix: <filter_trigger_file_suffix>
          existing: <true if an existing s3 bucket, false otherwise>
Whereas for prod, the serverless-prod.yml file is:
service: service-prod

provider:
  name: aws
  role: arn:aws:iam::<aws-account-id>:role/<my-lambda-role-name>
  region: <region>
  runtime: python3.8

... rest is similar
My deployment commands for these separate stages are:
sls deploy -s dev -c serverless-dev.yml
sls deploy -s prod -c serverless-prod.yml
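If you would rather keep a single configuration file, one option (a sketch, assuming a Serverless Framework version that resolves ${opt:...} variables) is to stop hardcoding stage: prod in the provider block and read the stage from the CLI flag instead:

provider:
  name: aws
  runtime: nodejs14.x
  region: ap-southeast-1
  # take the stage from --stage/-s on the command line, defaulting to dev
  stage: ${opt:stage, 'dev'}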

How to use custom ingest pipelines with docker autodiscover

I'm having a hard time using custom Elasticsearch ingest pipelines with Filebeat's Docker autodiscovery. I've started out with custom processors in my filebeat.yml file, however I would prefer to shift this to custom ingest pipelines I've created.
Firstly, here is my configuration using custom processors that works to provide custom grok-like processing for my Servarr app Docker containers (identified by applying a label to them in my docker-compose.yml file). The processor copies the 'message' field to 'log.original', uses dissect to extract 'log.level', 'log.logger' and overwrite 'message'. The final processor is a JavaScript function used to convert the log.level to lowercase (overkill perhaps, but humour me).
Filebeat configuration:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true

processors:
  - if:
      equals:
        docker.container.labels.co_elastic_logs/custom_processor: servarr
    then:
      - copy_fields:
          fields:
            - from: message
              to: log.original
          fail_on_error: false
          ignore_missing: true
      - dissect:
          tokenizer: "[%{log.level}] %{log.logger}: %{message}"
          field: message
          target_prefix: ""
          overwrite_keys: true
          ignore_failure: true
      - script:
          lang: javascript
          id: lowercase
          source: >
            function process(event) {
              var level = event.Get("log.level");
              if (level != null) {
                event.Put("log.level", level.toString().toLowerCase());
              }
            }

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
Excerpt from docker-compose.yml file...
lidarr:
  image: ghcr.io/linuxserver/lidarr:latest
  container_name: lidarr
  labels:
    co.elastic.logs/custom_processor: "servarr"
And an example log line (in json):
{"log":"[Info] DownloadDecisionMaker: Processing 100 releases \n","stream":"stdout","time":"2021-08-07T10:10:49.125702754Z"}
This works well, and achieves my aims of extracting fields, but ideally I'd like to use Elasticsearch's (more powerful) ingest pipelines instead, and live with a cleaner filebeat.yml, so I created a working ingest pipeline "filebeat-7.13.4-servarr-stdout-pipeline" like so (ignore the fact that for now, this only does the grokking):
[
  {
    "grok": {
      "field": "message",
      "patterns": [
        "\\[%{LOGLEVEL:log.level}\\] %{WORD:log.logger}: %{GREEDYDATA:message}"
      ],
      "trace_match": true,
      "ignore_missing": true
    }
  }
]
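For completeness, that processors array is what goes into the pipeline body when registering it in Dev Tools; roughly like this (the description string is my own placeholder):

PUT _ingest/pipeline/filebeat-7.13.4-servarr-stdout-pipeline
{
  "description": "Grok Servarr stdout logs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "\\[%{LOGLEVEL:log.level}\\] %{WORD:log.logger}: %{GREEDYDATA:message}"
        ],
        "trace_match": true,
        "ignore_missing": true
      }
    }
  ]
}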
I tested the pipeline against existing documents (not ones that have had my custom processing applied, I should note). The pipeline worked against all the documents I tested it against in the Kibana interface.
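(The same check can be repeated outside the Kibana UI with the simulate API; the sample document below just reuses the message from the example log line:)

POST _ingest/pipeline/filebeat-7.13.4-servarr-stdout-pipeline/_simulate
{
  "docs": [
    { "_source": { "message": "[Info] DownloadDecisionMaker: Processing 100 releases" } }
  ]
}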
So now I come to shift my Filebeat config to use this pipeline for containers with my custom_processor label. This is the filebeat.yml I came up with, which is apparently valid and works for the most part, but doesn't apply the grokking:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition.equals:
            docker.container.labels.co_elastic_logs/custom_processor: servarr
          config:
            pipeline: filebeat-7.13.4-servarr-stdout-pipeline

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
If I use Filebeat's inbuilt modules for my other containers, such as nginx, by applying labels as in the example below, the inbuilt module pipelines are used:
nginx-repo:
  image: nginx:latest
  container_name: nginx-repo
  mem_limit: 2048m
  environment:
    - VIRTUAL_HOST=repo.***.***.***,repo
    - VIRTUAL_PORT=80
    - HTTPS_METHOD=noredirect
  networks:
    - default
    - proxy
  labels:
    co.elastic.logs/module: "nginx"
    co.elastic.logs/fileset.stdout: "access"
    co.elastic.logs/fileset.stderr: "error"
What am I doing wrong here? The logs still end up in Elasticsearch and Kibana, and are processed, but my grok isn't applied, new fields aren't created, and the 'message' field is unchanged.
EDIT: In response to one of the comments linking to a post on the elastic forums, which suggested both the path(s) and the pipeline need to be made explicit, I tried the following filebeat.yml autodiscovery excerpt, which also fails to work (but is apparently valid config):
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition:
            equals:
              docker.container.labels.co_elastic_logs/custom_processor: "servarr"
          config:
            - type: docker
              containers:
                ids:
                  - "${data.docker.container.id}"
              stream: all
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/${data.docker.container.id}-json.log
              pipeline: filebeat-7.13.4-servarr-stdout-pipeline
I tried with the docker.container.labels.co_elastic_logs/custom_processor value both quoted and unquoted. I have the same behaviour where the logs end up in Elasticsearch / Kibana, but they are processed as if they skipped my ingest pipeline.
We're using Kubernetes instead of Docker with Filebeat, but our config might still help you out.
We have autodiscover enabled and have all pod logs sent to a common ingest pipeline, except for logs from any Redis pod. Those use the Redis module and send their logs to Elasticsearch via one of two custom ingest pipelines, depending on whether they are normal Redis logs or slowlog Redis logs. This is configured in the following block:
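That block isn't included above, but one way to express this kind of per-fileset routing (the pipeline names are placeholders, and the event.dataset values assume the Redis module's defaults) is with conditional pipeline rules on the Elasticsearch output:

output.elasticsearch:
  pipelines:
    # placeholder pipeline names; the redis module sets event.dataset per fileset
    - pipeline: redis-log-custom-pipeline
      when.equals:
        event.dataset: "redis.log"
    - pipeline: redis-slowlog-custom-pipeline
      when.equals:
        event.dataset: "redis.slowlog"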
All other detected pod logs get sent into a common ingest pipeline using the following catch-all configuration in the "output" section:
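Again, the original block isn't shown here, but a catch-all of that sort is just the default pipeline setting on the Elasticsearch output (the pipeline name is a placeholder):

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  # applied to every event that no more specific pipeline rule has claimed
  pipeline: general-pod-logs-pipeline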
Something else that we do is add the name of the ingest pipeline to ingested documents using the "set" processor:
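That processor is a small set step near the end of each pipeline; a sketch, with placeholder field name and pipeline value:

{
  "set": {
    "field": "ingest_pipeline",
    "value": "general-pod-logs-pipeline"
  }
}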
This has proven to be really helpful when diagnosing whether or not a pipeline was actually executed when viewing an event document in Kibana.

Prometheus and Alertmanager - route based on env label

I'm trying to configure Alertmanager so that it sends alerts to the right channels based on the value of a specific label. I have three Slack channels (dev/staging/prod), and I want alerts coming from instances that have the "env" label set to dev to be sent to the dev Slack channel. Staging and prod would obviously work in the same manner. Here is part of my config:
global:
  resolve_timeout: 1m
  slack_api_url: 'https://slack-url'

route:
  group_by: [...]
  receiver: 'default'
  routes:
    - match:
        env: 'prod'
      receiver: 'slack-notifications-prod'
    - match:
        env: 'staging'
      receiver: 'slack-notifications-staging'
    - match:
        env: 'dev'
      receiver: 'slack-notifications-dev'

receivers:
  - name: 'default'
  - name: 'slack-notifications-prod'
    ...
  - name: 'slack-notifications-staging'
    ...
  - name: 'slack-notifications-dev'
    ...
The slack-notifications receivers are all the same and they only differ in one thing, which is the appropriate channel name.
Current behaviour: All alerts are sent to the prod slack channel
Expected behaviour: Alerts from "dev" env are sent to dev channel, "staging" to staging channel, and "prod" to prod channel.
Alertmanager sees these labels just fine (judging from the info from alertmanager webUI).
Turns out my config was fine; I was using a webhook URL that was tied to only one Slack channel, which I wasn't aware of.
You have to add the continue: true attribute to the first match:
global:
  resolve_timeout: 1m
  slack_api_url: 'https://slack-url'

route:
  group_by: [...]
  receiver: 'default'
  routes:
    - match:
        env: 'prod'
      receiver: 'slack-notifications-prod'
      continue: true
    - match:
        env: 'staging'
      receiver: 'slack-notifications-staging'
    - match:
        env: 'dev'
      receiver: 'slack-notifications-dev'

receivers:
  - name: 'default'
  - name: 'slack-notifications-prod'
    ...
  - name: 'slack-notifications-staging'
    ...
  - name: 'slack-notifications-dev'
    ...
Alertmanager evaluates child routes until there are no routes left or no routes at a given level match the current alert. In that case, Alertmanager takes the configuration of the current node being evaluated.
The continue attribute defines whether sibling routes (belonging to the same level) should still be evaluated after a route on that level has already matched.
https://devconnected.com/alertmanager-and-prometheus-complete-setup-on-linux/
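For completeness, each of the slack-notifications receivers elided above boils down to a slack_configs entry whose only difference is the channel; a sketch (the channel name is a placeholder):

receivers:
  - name: 'slack-notifications-dev'
    slack_configs:
      - channel: '#alerts-dev'
        send_resolved: true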

Deploying Jenkins to AWS using cloudformation and secrets manager

My objective is to build Jenkins as a docker image and deploy it to AWS Elastic Beanstalk.
To build the docker image I am using the Configuration as Code plugin and injecting all secrets via environment variables in the Dockerfile.
What I am trying to figure out now is how to automate this deployment using CloudFormation or CodePipeline.
My question is:
Can I fetch secrets from AWS Secrets Manager using either CloudFormation or CodePipeline and inject them as environment variables in the deployment to Elastic Beanstalk?
Not sure why you want to do things this way in general, but couldn't you just use the AWS CLI to get the secrets from Secrets Manager directly from your Elastic Beanstalk instance?
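Something along these lines, assuming the instance profile is allowed secretsmanager:GetSecretValue, with a placeholder secret name:

aws secretsmanager get-secret-value \
  --secret-id jenkins/admin-creds \
  --query SecretString \
  --output text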
CloudFormation templates can recover secrets from Secrets Manager. It is somewhat ugly, but it works pretty well. In general, I use a security.yaml nested stack to generate secrets for me in SM, then recover them in other stacks.
I can't speak too much to EB, but if you are deploying that through CF, then this should help.
Generating a secret in SM (CF security.yaml):
Parameters:
  DeploymentEnvironment:
    Type: String
    Description: Deployment environment, e.g. prod, stage, qa, dev, or userdev
    Default: "dev"
  ...

Resources:
  ...
  RegistryDbAdminCreds:
    Type: 'AWS::SecretsManager::Secret'
    Properties:
      Name: !Sub "RegistryDbAdminCreds-${DeploymentEnvironment}"
      Description: "RDS master uid/password for artifact registry database."
      GenerateSecretString:
        SecretStringTemplate: '{"username": "artifactadmin"}'
        GenerateStringKey: "password"
        PasswordLength: 30
        ExcludeCharacters: '"#/\+//:*`"'
      Tags:
        - Key: AppName
          Value: RegistryDbAdminCreds
Using the secret in another yaml:
Parameters:
  DeploymentEnvironment:
    Type: String
    Description: Deployment environment, e.g. prod, stage, qa, dev, or userdev
    Default: "dev"
  ...

Resources:
  DB:
    Type: 'AWS::RDS::DBInstance'
    DependsOn: security
    Properties:
      Engine: postgres
      DBInstanceClass: db.t2.small
      DBName: quilt
      MasterUsername: !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:password}}'
      StorageType: gp2
      AllocatedStorage: "100"
      PubliclyAccessible: true
      DBSubnetGroupName: !Ref SubnetGroup
      MultiAZ: true
      VPCSecurityGroups:
        - !GetAtt "network.Outputs.VPCSecurityGroup"
      Tags:
        - Key: Name
          Value: !Join [ '-', [ !Ref StackName, "dbinstance", !Ref DeploymentEnvironment ] ]
The trick is in !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:username}}' and !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:password}}'
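For Elastic Beanstalk specifically, the same dynamic reference should also work inside an environment-variable option setting on the environment resource; a sketch I have not tested, with placeholder resource and secret names:

JenkinsEnvironment:
  Type: 'AWS::ElasticBeanstalk::Environment'
  Properties:
    ApplicationName: !Ref JenkinsApplication
    SolutionStackName: <your-docker-solution-stack>
    OptionSettings:
      # exposes the secret to the app as an environment variable
      - Namespace: aws:elasticbeanstalk:application:environment
        OptionName: JENKINS_ADMIN_PASSWORD
        Value: !Sub '{{resolve:secretsmanager:JenkinsAdminCreds-${DeploymentEnvironment}:SecretString:password}}'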

Jenkins Job Builder: Project Level Variables

Within JJB, you can define project-level variables like this:
- defaults:
    name: global
    git_url: "git#....."

- project:
    name: some-test
    jobs:
      - test-{name}

- job-template:
    name: test-{name}
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
My question: must I hardcode the value of git_url at the defaults level, or can I use some JJB mechanism to bring it in at job load/execution time?
The reason I ask is that the YAML file that contains these JJB jobs can be used to define TEST, QA and PROD. It would be nice to just point at a properties file that contains the value for git_url and any other global variable values. I took a look at http://docs.openstack.org/infra/jenkins-job-builder/definition.html?highlight=default#defaults and did not see any such mechanism.
If I understand your question correctly, there are two other approaches available within the context of a single YAML file.
Approach 1: Set git_url at the project level
- project:
    name: some-test
    git_url: "git#dogs.net:woof/bark.git"
    jobs:
      - test-{name}:

- job-template:
    name: test-{name}
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
Here git_url is set at the project level. This approach allows you to define a second project with a different value for git_url, for example:
- project:
    name: some-other-test
    git_url: "git#cats.net:meow/meow.git"
    jobs:
      - test-{name}:
Approach 2: Set git_url at the job-template instance level
- project:
    name: some-test
    jobs:
      - test-{name}:
          git_url: "git#....."

- job-template:
    name: test-{name}
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
Here git_url is set on the actual instance of the job-template where it is specified. If your job-template had more than just {name} in its name, this would allow you to create multiple instances of it in the list of jobs at the project level, for example:
- project:
    name: some-test
    git_url: "git#....."
    jobs:
      - test-{name}-{type}:
          type: 'cat'
      - test-{name}-{type}:
          type: 'dog'

- job-template:
    name: test-{name}-{type}
    display-name: 'Test for {type} projects'
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
Thoughts on TEST vs QA vs PROD
You also mentioned that you would like some kind of external properties file to differentiate between TEST, QA, and PROD environments. To address this, let's consider four different files (project.yaml, defaults/TEST.yaml, defaults/QA.yaml, and defaults/PROD.yaml) whose contents are enumerated below.
project.yaml
- project:
    name: some-test
    jobs:
      - test-{name}:
defaults/TEST.yaml
- defaults:
    name: global
    git_url: "git#dogs.net:woof/test.git"
defaults/QA.yaml
- defaults:
    name: global
    git_url: "git#dogs.net:woof/qa.git"
defaults/PROD.yaml
- defaults:
    name: global
    git_url: "git#dogs.net:woof/prod.git"
Okay so these aren't great examples because you probably wouldn't have a different git repository for each environment, but I don't want to complicate things by straying too far from your original example.
With JJB you can specify more than one YAML file on the command line (I don't want to complicate the example or its explanation, but you can also specify directories full of JJB yaml). To differentiate between TEST, QA, and PROD deployments of your Jenkins job you can then do something like:
jenkins-jobs project.yaml:defaults/TEST.yaml
For your test environment.
jenkins-jobs project.yaml:defaults/QA.yaml
For your qa environment.
jenkins-jobs project.yaml:defaults/PROD.yaml
For your prod environment.
Hope that helps.
