I am new to CloudbaseInit. I have set up an image with CloudbaseInit, and I can build a machine with a new password and an expanded HDD size, all OK (using the command nova boot). But I want to use heat stack-create with a heat template file:
heat_template_version: 2013-05-23

description: dtb test hottest, for test add parameters.

parameters:
  flavor:
    type: string
    label: paasflavor
    description: paasflavor flavor to be used
    default: c1m2h90
  availability_zone:
    type: string
    description: The Availability Zone to launch the instance.
    default: nova
  name:
    type: string
    description: name of host.

resources:
  server1_port1:
    type: OS::Neutron::Port
    properties:
      network_id: 70c1faf0-51f6-4cb9-b324-7bc2cc6fab5b

  server1:
    type: OS::Nova::Server
    properties:
      name: { get_param: name }
      image: template_win2008
      flavor: { get_param: flavor }
      availability_zone: { get_param: availability_zone }
      networks:
        - port: { get_resource: server1_port1 }
      user_data:
        echo 11 > \"c:\\yp\\333"\n,

outputs:
  server1_ip:
    description: Private IP address of server1
    value: { get_attr: [ server1, first_address ] }
The machine builds OK. But when I log in to it and go to c:\yp, I find nothing. I think the user_data is wrong and the command did not run. I have tried several other ways of writing the user_data part, but they all failed. I have never built a Windows machine with a heat template before.
Add a pipe (|) after user_data so the script is treated as a literal block:
user_data: |
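For reference, here is a rough sketch of how the whole user_data section of server1 could look with the block scalar. The #ps1_sysnative header tells Cloudbase-Init to run the script with 64-bit PowerShell, user_data_format: RAW may or may not be needed depending on your image, and the commands themselves are only illustrative:

      user_data_format: RAW
      user_data: |
        #ps1_sysnative
        mkdir c:\yp
        echo 11 > c:\yp\333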
I'm having a hard time using custom Elasticsearch ingest pipelines with Filebeat's Docker autodiscovery. I've started out with custom processors in my filebeat.yml file; however, I would prefer to shift this to custom ingest pipelines I've created.
Firstly, here is my configuration using custom processors, which works to provide custom grok-like processing for my Servarr app Docker containers (identified by applying a label to them in my docker-compose.yml file). The processors copy the 'message' field to 'log.original', use dissect to extract 'log.level' and 'log.logger' and to overwrite 'message'. The final processor is a JavaScript function used to convert the log.level to lowercase (overkill perhaps, but humour me).
Filebeat configuration:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true

processors:
  - if:
      equals:
        docker.container.labels.co_elastic_logs/custom_processor: servarr
    then:
      - copy_fields:
          fields:
            - from: message
              to: log.original
          fail_on_error: false
          ignore_missing: true
      - dissect:
          tokenizer: "[%{log.level}] %{log.logger}: %{message}"
          field: message
          target_prefix: ""
          overwrite_keys: true
          ignore_failure: true
      - script:
          lang: javascript
          id: lowercase
          source: >
            function process(event) {
              var level = event.Get("log.level");
              if(level != null) {
                event.Put("log.level", level.toString().toLowerCase());
              }
            }

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
Excerpt from docker-compose.yml file...
lidarr:
  image: ghcr.io/linuxserver/lidarr:latest
  container_name: lidarr
  labels:
    co.elastic.logs/custom_processor: "servarr"
And an example log line (in json):
{"log":"[Info] DownloadDecisionMaker: Processing 100 releases \n","stream":"stdout","time":"2021-08-07T10:10:49.125702754Z"}
This works well, and achieves my aims of extracting fields, but ideally I'd like to use Elasticsearch's (more powerful) ingest pipelines instead, and live with a cleaner filebeat.yml, so I created a working ingest pipeline "filebeat-7.13.4-servarr-stdout-pipeline" like so (ignore the fact that for now, this only does the grokking):
[
  {
    "grok": {
      "field": "message",
      "patterns": [
        "\\[%{LOGLEVEL:log.level}\\] %{WORD:log.logger}: %{GREEDYDATA:message}"
      ],
      "trace_match": true,
      "ignore_missing": true
    }
  }
]
I tested the pipeline against existing documents (not ones that have had my custom processing applied, I should note). The pipeline worked against all the documents I tested it against in the Kibana interface.
So now I come to shift my Filebeat config to use this pipeline for containers with my custom_processor label. This is the filebeat.yml I came up with, which is apparently valid and works for the most part, but doesn't apply the grokking:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition.equals:
            docker.container.labels.co_elastic_logs/custom_processor: servarr
          config:
            pipeline: filebeat-7.13.4-servarr-stdout-pipeline

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
If I use Filebeat's built-in modules for my other containers, such as nginx, by applying labels as in the example below, the built-in module pipelines are used:
nginx-repo:
  image: nginx:latest
  container_name: nginx-repo
  mem_limit: 2048m
  environment:
    - VIRTUAL_HOST=repo.***.***.***,repo
    - VIRTUAL_PORT=80
    - HTTPS_METHOD=noredirect
  networks:
    - default
    - proxy
  labels:
    co.elastic.logs/module: "nginx"
    co.elastic.logs/fileset.stdout: "access"
    co.elastic.logs/fileset.stderr: "error"
What am I doing wrong here? The logs still end up in Elasticsearch and Kibana, and are processed, but my grok isn't applied, new fields aren't created, and the 'message' field is unchanged.
EDIT: In response to one of the comments linking to a post on the elastic forums, which suggested both the path(s) and the pipeline need to be made explicit, I tried the following filebeat.yml autodiscovery excerpt, which also fails to work (but is apparently valid config):
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition:
            equals:
              docker.container.labels.co_elastic_logs/custom_processor: "servarr"
          config:
            - type: docker
              containers:
                ids:
                  - "${data.docker.container.id}"
              stream: all
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/${data.docker.container.id}-json.log
              pipeline: filebeat-7.13.4-servarr-stdout-pipeline
I tried with the docker.container.labels.co_elastic_logs/custom_processor value both quoted and unquoted. I have the same behaviour where the logs end up in Elasticsearch / Kibana, but they are processed as if they skipped my ingest pipeline.
We're using Kubernetes instead of Docker with Filebeat but maybe our config might still help you out.
We have autodiscover enabled and have all pod logs sent to a common ingest pipeline, except for logs from any Redis pod, which use the Redis module and send their logs to Elasticsearch via one of two custom ingest pipelines depending on whether they're normal Redis logs or slowlog Redis logs. This is configured in the following block:
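Roughly, the Redis side looks like the sketch below; it follows the standard Kubernetes autodiscover template pattern, but treat the label, paths and names as illustrative rather than our literal config (the routing to the two custom pipelines happens in the output section, shown after the next paragraph):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      templates:
        # Pods labelled app=redis are handled by the Redis module instead of the generic container input.
        - condition:
            equals:
              kubernetes.labels.app: redis
          config:
            - module: redis
              log:
                input:
                  type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log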
All other detected pod logs get sent to a common ingest pipeline using the following catch-all configuration in the "output" section:
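Again only a sketch (the pipeline names are illustrative): the pipelines rules pick the custom Redis pipelines by event.dataset, and anything that doesn't match falls back to the plain pipeline setting, which acts as the catch-all.

output.elasticsearch:
  hosts: ['elasticsearch:9200']
  pipelines:
    # Normal Redis logs and slowlog entries each get their own custom ingest pipeline.
    - pipeline: redis-log-pipeline
      when.equals:
        event.dataset: redis.log
    - pipeline: redis-slowlog-pipeline
      when.equals:
        event.dataset: redis.slowlog
  # Everything that doesn't match a rule above goes through the common pipeline.
  pipeline: common-log-pipeline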
Something else that we do is add the name of the ingest pipeline to ingested documents using the "set" processor:
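For example, the last processor of the common pipeline could be a set like the one below (the target field name is arbitrary; pick whatever fits your mappings):

{
  "set": {
    "field": "event.pipeline",
    "value": "common-log-pipeline"
  }
}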
This has proven to be really helpful when diagnosing whether or not a pipeline was actually executed when viewing an event document in Kibana.
I have declared a mapping named StageMap in my sam.yaml file:
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31

Parameters:
  ProjectName:
    Type: String
  SubProjectName:
    Type: String
  Stage:
    Type: String
    AllowedValues:
      - dev
      - test
      - preprod
      - prod
  ...

Mappings:
  StageMap:
    dev:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6/invocations
    test:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-test-AuthorizerFunction-UQ1EQ2SP5W6G/invocations
    preprod:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-preprod-AuthorizerFunction-UQ1W6EQ2SP5G/invocations
    prod:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-prod-AuthorizerFunction-5STBUB61RR2YJ/invocations
I would like to use this mapping in my swagger.yaml. I have tried the following:
...
x-amazon-apigateway-authorizer:
  type: request
  authorizerUri:
    Fn::FindInMap:
      - 'StageMap'
      - Ref: 'Stage'
      - 'AuthorizerArn'
I also tried this solution, but I got the error "Every Mappings attribute must be a String or a List".
Can you please let me know how to access one of the values in the mapping in the swagger.yaml? Thanks!
I found the following in the AWS SAM docs:
You cannot include parameters, pseudo parameters, or intrinsic functions in the Mappings section.
So I changed:
AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6/invocations
To:
AuthorizerFunctionName: auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6
And in the swagger.yaml I used the following:
x-amazon-apigateway-authorizer:
  type: request
  authorizerUri:
    Fn::Sub:
      - arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${AuthorizerFunctionName}/invocations
      - AuthorizerFunctionName:
          Fn::FindInMap:
            - 'StageMap'
            - Ref: 'Stage'
            - 'AuthorizerFunctionName'
I want to use Swagger to document our internal Jenkins jobs and parameters, so that everybody knows what the body has to look like to properly trigger a Jenkins job over the API.
I am writing the API documentation in a swagger.yml file. Where I am totally struggling is documenting nested objects.
Jenkins needs the parameters in JSON. The curl request looks like this:
curl --request POST \
  --url https://myjenkins.com/job/demojob/build \
  --form 'json={
    "parameter": [
      {
        "name": "FilePath",
        "value": "E:\\Jenkins"
      },
      {
        "name": "FileName",
        "value": "JenkinsAPI.txt"
      },
      {
        "name": "FileContent",
        "value": "I am a file created by jenkins through API"
      },
      {
        "name": "Computer",
        "value": "myhost"
      }
    ]
  }'
I was able to create a yml file that contains something like the following, but it does not resemble what I need at all.
Can somebody point me in the right direction, or give me an example?
paths:
  /job/demojob/build:
    post:
      summary: triggers the job
      parameters:
        - name: parameters
          in: body
          required: true
        - name: filePath
          in: body
          schema:
            type: string
            default: "C:\\fancyfolder"
Please check the yml file below with the Swagger property mappings; you can change ApiResponse according to your requirements:
openapi: 3.0.1
info:
  title: Swagger Jenkins
  description: 'This is a sample server Jenkins server. You can find out more about Swagger'
  version: 1.0.0
servers:
  - url: https://jenkins.com
tags:
  - name: build
    description: Everything about your build
paths:
  /job/demojob/build:
    post:
      tags:
        - build
      summary: build the given branch
      operationId: build
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/RequestParameters'
      responses:
        200:
          description: successful operation
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ApiResponse'
components:
  schemas:
    RequestParameters:
      type: array
      items:
        $ref: '#/components/schemas/Parameters'
    Parameters:
      type: object
      properties:
        name:
          type: string
        value:
          type: string
          description: Order Status
    ApiResponse:
      type: object
      properties:
        code:
          type: integer
          format: int32
        type:
          type: string
        message:
          type: string
My objective is to build Jenkins as a docker image and deploy it to AWS Elastic Beanstalk.
To build the docker image I am using the Configuration as Code plugin and injecting all secrets via environment variables in the Dockerfile.
What I am trying to figure out now is how to automate this deployment using CloudFormation or CodePipeline.
My question is:
Can I fetch secrets from AWS Secrets Manager using either CloudFormation or CodePipeline and inject them as environment variables in the deployment to Elastic Beanstalk?
Not sure why you want to do things this way in general, but couldn't you just use the AWS CLI to get the secrets from Secrets Manager directly from your Elastic Beanstalk instance?
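For example, something along these lines from the instance (the secret name here is just a placeholder):

aws secretsmanager get-secret-value \
  --secret-id jenkins/admin-credentials \
  --query SecretString \
  --output text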
CloudFormation templates can retrieve secrets from Secrets Manager. It is somewhat ugly, but it works pretty well. In general, I use a security.yaml nested stack to generate secrets for me in Secrets Manager, then retrieve them in other stacks.
I can't speak too much to EB, but if you are deploying that through CF, then this should help.
Generating a secret in SM (CF security.yaml):
Parameters:
  DeploymentEnvironment:
    Type: String
    Description: Deployment environment, e.g. prod, stage, qa, dev, or userdev
    Default: "dev"
  ...

Resources:
  ...
  RegistryDbAdminCreds:
    Type: 'AWS::SecretsManager::Secret'
    Properties:
      Name: !Sub "RegistryDbAdminCreds-${DeploymentEnvironment}"
      Description: "RDS master uid/password for artifact registry database."
      GenerateSecretString:
        SecretStringTemplate: '{"username": "artifactadmin"}'
        GenerateStringKey: "password"
        PasswordLength: 30
        ExcludeCharacters: '"#/\+//:*`"'
      Tags:
        - Key: AppName
          Value: RegistryDbAdminCreds
Using the secret in another yaml:
Parameters:
  DeploymentEnvironment:
    Type: String
    Description: Deployment environment, e.g. prod, stage, qa, dev, or userdev
    Default: "dev"
  ...

Resources:
  DB:
    Type: 'AWS::RDS::DBInstance'
    DependsOn: security
    Properties:
      Engine: postgres
      DBInstanceClass: db.t2.small
      DBName: quilt
      MasterUsername: !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:password}}'
      StorageType: gp2
      AllocatedStorage: "100"
      PubliclyAccessible: true
      DBSubnetGroupName: !Ref SubnetGroup
      MultiAZ: true
      VPCSecurityGroups:
        - !GetAtt "network.Outputs.VPCSecurityGroup"
      Tags:
        - Key: Name
          Value: !Join [ '-', [ !Ref StackName, "dbinstance", !Ref DeploymentEnvironment ] ]
The trick is in !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:username}}' and !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:password}}'
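For Elastic Beanstalk specifically, the same resolve trick should also work for environment variables set through OptionSettings. A rough sketch, reusing the secret above; the resource, application, platform and variable names are placeholders:

  JenkinsEnvironment:
    Type: 'AWS::ElasticBeanstalk::Environment'
    Properties:
      ApplicationName: !Ref JenkinsApplication
      SolutionStackName: '64bit Amazon Linux 2 v3.4.9 running Docker'  # pick a current Docker platform
      OptionSettings:
        # The aws:elasticbeanstalk:application:environment namespace sets environment variables on the instances.
        - Namespace: 'aws:elasticbeanstalk:application:environment'
          OptionName: DB_PASSWORD
          Value: !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:password}}'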
I would like to add an AWS resource conditionally, based on the presence of an env var. I tried serverless-cloudformation-parameter-setter, but I get a generic error on deployment and I don't see what I need to do to fix it.
I'm trying to deploy a simple lambda + SQS stack. If an env var is defined, I also want to subscribe the queue to the topic denoted by that env var; if the var is not defined, I want to skip that part entirely and deploy just the lambda and the queue.
This is what I tried:
plugins:
  - serverless-cloudformation-parameter-setter

provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1

functions:
  update:
    handler: index.update
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - Queue
              - Arn

custom:
  cf-parameters:
    SourceTopicArn: "${env:UPDATE_SNS_ARN}"

resources:
  Parameters:
    SourceTopicArn:
      Type: string
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: SourceTopicArn
      Properties:
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Ref: Queue
The error I receive is: "The CloudFormation template is invalid: Template format error: Unrecognized parameter type: string".
If I remove all the parameter stuff, it works fine.
The Type has to be String, not string. See the supported parameter data types section in the docs.
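With that change, the parameter block from the question becomes:

resources:
  Parameters:
    SourceTopicArn:
      Type: String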