Retrieve first subnet ID from AWS SSM Parameter Store - Serverless

I have an SSM parameter named vpc-subnet-ids whose value is a comma-separated string like "subnet1,subnet2".
I want to get the first subnet in my serverless.yaml file. I am using this, but it is not working:
SubnetId: !Split [",", !Ref ${ssm:/vpc-subnet-ids}[0]]
I tried this as well: !Select [1, !Split [",", ${ssm:/vpc-subnet-ids}]]. This gives:
missed comma between flow collection entries
Serverless Framework: 3.26.0
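A sketch of a likely fix, assuming the parameter is a String type that resolves to a plain comma-separated value: the YAML error comes from the unquoted ${...} inside the flow sequence (YAML tries to parse the braces as a flow mapping), so quote the variable and select index 0 for the first subnet:
# Quoting keeps YAML from parsing ${...} as a flow mapping;
# Fn::Select / Fn::Split then run on the resolved string.
SubnetId: !Select [0, !Split [",", "${ssm:/vpc-subnet-ids}"]]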


Neo4j Python REST API

Query via Python REST API
message: Invalid input: ':'
Hello,
I am starting a query via my Python Neo4j API, but the code is not working and results in the error message above. The same query works in the Neo4j Desktop app.
Why does it work in the Neo4j Desktop app, but not via my API query? Why is the : before param a problem?
I am new to Python and Neo4j, please help.
Kind regards.
Below is the syntax for passing parameters with the Neo4j Python driver. Unfortunately, you cannot parameterize labels or relationship types. If you need to pass labels (like Human:Moviestar), you can build the query string in Python instead; a sketch of that follows the example below.
name = "Tom Cruise"
placeOfBirth = "Syracuse, New York, United States"
query = "Create (n:Human:Moviestar { name: $name, placeOfBirth: $placeOfBirth})"
session = driver.session()
result = session.run(query, name=name, placeOfBirth=placeOfBirth)
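And a minimal sketch of the string-building workaround for dynamic labels mentioned above (the labels value is illustrative; only interpolate trusted values, since this bypasses the driver's parameter handling):
labels = "Human:Moviestar"  # illustrative; labels cannot be parameterized
query = f"CREATE (n:{labels} {{ name: $name }})"
result = session.run(query, name="Tom Cruise")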
I see that you have been working with the database through the browser application. Commands prefixed with ":", such as :params or :connect, are browser commands and are not valid Cypher. Instead, in Python, pass your parameters as the second argument to session.run() (or to your transaction function), then use variable substitution in your Cypher query.
params = {"name": "Tom Hanks" }
with driver.session as session:
result = session.run ("MATCH (p:person) where p.name = $name return p", params)

How to add PTR record for EC2 Instance's Private IP in CDK?

I have two private hosted zones created for populating A records and PTR records corresponding to my EC2 instance's private IP. Yes, it's the private IP that I need: this subnet is routed to our corporate data center, so we need non-cryptic hostnames and consistent reverse lookups on them within the account.
I've got the forward lookup working well, but I'm confused about how exactly it should work for the reverse lookup on the IP. Assume my CIDR is 192.168.10.0/24, where the EC2 instances will be created.
const fwdZone = new aws_route53.PrivateHostedZone(this, "myFwdZone", {
  zoneName: "example.com",
  vpc: myVpc,
});
const revZone = new aws_route53.PrivateHostedZone(this, "myRevZone", {
  zoneName: "10.168.192.in-addr.arpa",
  vpc: myVpc,
});
I'm later creating the A record by referencing the EC2 instance's instancePrivateIp property. This worked well.
const myEc2 = new aws_ec2.Instance(this, "myEC2", {...});
new aws_route53.RecordSet(this, "fwdRecord", {
  zone: fwdZone,
  recordName: "myec2.example.com",
  recordType: aws_route53.RecordType.A,
  target: aws_route53.RecordTarget.fromIpAddresses(myEc2.instancePrivateIp),
});
However, when I tried to create the PTR record for the same, I ran into some trouble. I needed to extract the fourth octet and specify it as the recordName:
new aws_route53.RecordSet(this, "revRecord", {
zone: revZone,
recordName: myEc2.instancePrivateIp.split('.')[3],
recordType: aws_route53.RecordType.PTR,
target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});
The CDK-synthesized CloudFormation template looks odd as well, especially the token syntax:
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name: ${Token[TOKEN.10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Is this the right way to achieve this? If I specify the recordName as just the privateIp, the synthesized template ends up doing something else, which I can see is incorrect too:
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name:
      Fn::Join:
        - ""
        - - Fn::GetAtt:
              - myEC2123A01BC
              - PrivateIp
          - .10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Answering the CDK part of your question: the original error was because you were performing string manipulation on an unresolved token. Your CDK code runs before any resources are provisioned. This has to be the case, since it generates the CloudFormation template that will be submitted to CloudFormation to provision the resources. So when the code runs, the instance does not exist, and its IP address is not knowable.
CDK still allows you to access unresolved properties, returning a Token instead. You can pass this token around and it will be resolved to the actual value during deployment.
To perform string manipulation on a token, you can use CloudFormation's built-in functions, since they run during deployment, after the token has been resolved.
Here's what it would look like:
recordName: Fn.select(0, Fn.split('.', myEc2.instancePrivateIp))
As you found out yourself, you were also selecting the wrong octet of the IP address, so the actual solution replaces 0 with 3 in the Fn.select call.
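Put together, the reverse record could look like this (a sketch assuming the Fn helpers from aws-cdk-lib and the revZone defined above):
import { Fn, aws_route53 } from "aws-cdk-lib";

// Select the fourth octet at deploy time, after the token resolves.
new aws_route53.RecordSet(this, "revRecord", {
  zone: revZone,
  recordName: Fn.select(3, Fn.split(".", myEc2.instancePrivateIp)),
  recordType: aws_route53.RecordType.PTR,
  target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});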
References:
https://docs.aws.amazon.com/cdk/v2/guide/tokens.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib-readme.html#intrinsic-functions-and-condition-expressions

How to set the scope using Google Operators in Airflow

I have a task using the GCSToGoogleSheetsOperator in Airflow, where I'm trying to add data to a sheet.
I have added the service credential email to the sheet I want to edit, with editor privileges, and received this error:
googleapiclient.errors.HttpError:
<HttpError 403 when requesting
https://sheets.googleapis.com/v4/spreadsheets/<SHEET_ID>/values/Sheet1?valueInputOption=RAW&includeValuesInResponse=false&responseValueRenderOption=FORMATTED_VALUE&responseDateTimeRenderOption=SERIAL_NUMBER&alt=json
returned "Request had insufficient authentication scopes.".
Details: "[{
'#type': 'type.googleapis.com/google.rpc.ErrorInfo',
'reason': 'ACCESS_TOKEN_SCOPE_INSUFFICIENT',
'domain': 'googleapis.com',
'metadata': {
'service': 'sheets.googleapis.com',
'method': 'google.apps.sheets.v4.SpreadsheetsService.UpdateValues'}
}]>
I can't update the sheet, but the GCS and BigQuery operators work fine.
My connection configuration looks like the following:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fopt%2Fairflow%2Fcredentials%2Fgoogle_credential.json
I tried following the instructions to add the scope https://www.googleapis.com/auth/spreadsheets, which URL-encoded looks like:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fopt%2Fairflow%2Fcredentials%2Fgoogle_credential.json&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets
Now, operators which previously worked error out like this:
google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/my-project/jobs?prettyPrint=false: Request had insufficient authentication scopes.
And the GCSToGoogleSheetsOperator still errors out like this:
google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/download/storage/v1/b/my-bucket/o/folder%2Fobject.csv?alt=media: Insufficient Permission: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
How can I set the permissions correctly to use the BigQuery, GCS, and Sheets operators together?
Adding a scope seems to make it ignore the IAM roles, so it's either one or the other.
The service account had the roles needed to access GCS and BigQuery, but by adding the scope https://www.googleapis.com/auth/spreadsheets, the service ignored the privileges granted by the roles and looked only at the ones specified by the scopes.
So, to recover them, you must add both the spreadsheets and cloud-platform scopes (or stricter scopes): cloud-platform provides access to GCS and BigQuery, and spreadsheets provides access to the Google Sheets API.
If you set your connection using environment variables, you have to URL-encode the arguments. To create a GOOGLE_CLOUD connection, you need something like this, shown unencoded first:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=/abs/path_to_file/credential.json&extra__google_cloud_platform__scope=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spreadsheets
The encoded version, which is the one you actually have to use, percent-encodes the / (%2F), : (%3A), and , (%2C) characters:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fabs%2Fpath_to_file%2Fcredential.json&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform%2Chttps%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets
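To avoid encoding by hand, here is a small sketch using Python's urllib (the key path is illustrative):
from urllib.parse import quote

key_path = "/abs/path_to_file/credential.json"  # illustrative path
scopes = ",".join([
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/spreadsheets",
])

# safe="" forces '/', ':' and ',' to be percent-encoded too
print(
    "AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?"
    f"extra__google_cloud_platform__key_path={quote(key_path, safe='')}"
    f"&extra__google_cloud_platform__scope={quote(scopes, safe='')}"
)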

How to set passwords in configmap

I want to configure passwords in configmap.yaml instead of deployment.yaml. I am able to set the username and other variables. Attaching the configmap.yaml I worked on:
kind: ConfigMap
apiVersion: v1
metadata:
  name: poc-configmapconfiguration-configmap
data:
  Environment: [[.Environment]]
  dockerRegistryUrl: [[.Env.dockerRegistryUrl]]
  CassandraSettings__CassandraPassword:
    valueFrom:
      secretKeyRef:
        name: abcd-passwords
        key: "[[ .Environment ]]-abcd-cassandra-password"
As already suggested, it is better practice to use Secrets to store passwords.
Secrets obscure your data using Base64 encoding, so it is good practice to use Secrets for confidential data over ConfigMaps.
If you run kubectl explain on the ConfigMap.data field to get more details from the CLI itself, you can see that it accepts a map of strings:
$ kubectl explain ConfigMap.data
KIND:     ConfigMap
VERSION:  v1

FIELD:    data <map[string]string>

DESCRIPTION:
     Data contains the configuration data. Each key must consist of alphanumeric
     characters, '-', '_' or '.'. Values with non-UTF-8 byte sequences must use
     the BinaryData field. The keys stored in Data must not overlap with the
     keys in the BinaryData field, this is enforced during validation process.
So the YAML structure you used should throw an error at creation time, something like:
invalid type for io.k8s.api.core.v1.ConfigMap.data
Refer to this GitHub issue for the corresponding feature request, which was closed with no support planned:
https://github.com/kubernetes/kubernetes/issues/79224
It would be more common to use a Secret, as you can see from the secretKeyRef; an equivalent configMapRef exists, however, and can be used in the same way. A sketch of the Secret-based pattern follows.
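A minimal sketch of that pattern (names and values are illustrative): store the password in a Secret, then reference it from the container spec in deployment.yaml rather than from the ConfigMap:
apiVersion: v1
kind: Secret
metadata:
  name: abcd-passwords
type: Opaque
stringData:
  cassandra-password: changeme  # illustrative value
---
# in the Deployment's container spec:
env:
  - name: CassandraSettings__CassandraPassword
    valueFrom:
      secretKeyRef:
        name: abcd-passwords
        key: cassandra-password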

What is the best way to use the AWS CDK to get the CIDR of the VPC?

I've started using the AWS CDK to stand up a new VPC, but I am struggling to query other existing VPCs and their CIDR ranges; I want to ensure that my new VPC does not overlap with existing CIDR ranges. The return string is not something I can understand. Could you provide an example of how to query for a list of CIDR ranges in subnets?
Thanks.
If you are trying to reference an existing VPC in your CDK stack, you should use the VpcNetwork.import static method, which doesn't require you to specify the CIDR blocks of the VPC.
You will need other information specified in VpcNetworkRefProps, which shouldn't be too hard to obtain from the AWS Console or the AWS CLI:
Something like:
const externalVpc = VpcNetwork.import(this, 'ExternalVpc', {
  vpcId: 'vpc-bd5656d4',
  availabilityZones: [ 'us-east-1a', 'us-east-1b' ],
  publicSubnetIds: [ 'subnet-1111aaaa', 'subnet-2222bbbb' ],
  privateSubnetIds: [ 'subnet-8368fbce', 'subnet-8368abcc' ],
});
We are looking at making this easier (see #506).
