How to set passwords in configmap - docker

I want to configure passwords in the configmap.yaml instead of deployment.yaml. I am able to set the username and other variables. Attaching the configmap.yaml file I worked on:
kind: ConfigMap
apiVersion: v1
metadata:
  name: poc-configmapconfiguration-configmap
data:
  Environment: [[.Environment]]
  dockerRegistryUrl: [[.Env.dockerRegistryUrl]]
  CassandraSettings__CassandraPassword:
    valueFrom:
      secretKeyRef:
        name: abcd-passwords
        key: "[[ .Environment ]]-abcd-cassandra-password"

As already suggested, it is better practice to use Secrets to store passwords.
Secrets obscure your data using Base64 encoding, so it is good practice to use Secrets rather than ConfigMaps for confidential data.
If you run an explain on the ConfigMap.data field to get more details from the CLI itself, you can see that it only accepts a map of strings:
$ kubectl explain ConfigMap.data
KIND:     ConfigMap
VERSION:  v1

FIELD:    data <map[string]string>

DESCRIPTION:
     Data contains the configuration data. Each key must consist of alphanumeric
     characters, '-', '_' or '.'. Values with non-UTF-8 byte sequences must use
     the BinaryData field. The keys stored in Data must not overlap with the
     keys in the BinaryData field, this is enforced during validation process.
So the above YAML structure you used should throw an error at creation time, something like:
invalid type for io.k8s.api.core.v1.ConfigMap.data
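By contrast, a ConfigMap that keeps only plain string values passes validation. A minimal sketch based on the manifest in the question, with the password entry removed:

kind: ConfigMap
apiVersion: v1
metadata:
  name: poc-configmapconfiguration-configmap
data:
  # plain string values only - nested valueFrom blocks are not allowed here
  Environment: [[.Environment]]
  dockerRegistryUrl: [[.Env.dockerRegistryUrl]]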
Refer to this GitHub issue for such a feature request, which has already been closed with no support planned:
https://github.com/kubernetes/kubernetes/issues/79224

It would be more common to use a Secret, as you can see from the secretKeyRef; however, an equivalent configMapKeyRef exists and can be used in the same way.
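As an illustration (not part of the original answers), assuming the Secret abcd-passwords from the question already exists, the password can be injected through deployment.yaml instead. The container name and image below are placeholders:

# deployment.yaml (snippet) - container name and image are hypothetical
spec:
  containers:
    - name: abcd-app
      image: abcd-app:latest
      env:
        - name: CassandraSettings__CassandraPassword
          valueFrom:
            secretKeyRef:
              name: abcd-passwords
              key: "[[ .Environment ]]-abcd-cassandra-password"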

Related

Retrieve first security group from aws ssm parameter store

I have SSM Parameter with name vpc-subnet-ids and value is comma separated string like: "subnet1,subnet2"
I want to get the first subnet using the serverless.yaml file. I am using this, but it is not working:
SubnetId: !Split [",", !Ref ${ssm:/vpc-subnet-ids}[0]]
I tried this as well: !Select [1, !Split [",", ${ssm:/vpc-subnet-ids}]]. This gives:
missed comma between flow collection entries in
Serverless framework: 3.26.0

How to add PTR record for EC2 Instance's Private IP in CDK?

I have two private hosted zones created for populating A records and PTR records corresponding to my EC2 instance's private IP. Yes, it's the private IP that I need. This subnet is routed to our corporate data center, so we need non-cryptic hostnames and consistent reverse lookup on them within the account.
I've got the forward lookup working well; however, I'm confused about how exactly it should work for the reverse lookup on the IP. Assume my CIDR is 192.168.10.0/24, where the EC2 instances will be created.
const fwdZone = new aws_route53.PrivateHostedZone(
  this, "myFwdZone", {
    zoneName: "example.com",
    vpc: myVpc,
  });
const revZone = new aws_route53.PrivateHostedZone(
  this, "myRevZone", {
    zoneName: "10.168.192.in-addr.arpa",
    vpc: myVpc,
  }
);
I'm later creating the A record by referencing the EC2 instance's privateIp property. This worked well.
const myEc2 = new aws_ec2.Instance(this, 'myEC2', {...})

new aws_route53.RecordSet(this, "fwdRecord", {
  zone: fwdZone,
  recordName: "myec2.example.com",
  recordType: aws_route53.RecordType.A,
  target: aws_route53.RecordTarget.fromIpAddresses(
    myEc2.instancePrivateIp
  ),
});
However, when I try to create the PTR record for the same instance, I run into some trouble. I needed to extract the fourth octet and specify it as the recordName:
new aws_route53.RecordSet(this, "revRecord", {
  zone: revZone,
  recordName: myEc2.instancePrivateIp.split('.')[3],
  recordType: aws_route53.RecordType.PTR,
  target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});
The CDK synthesized CloudFormation template looks odd as well, especially the token syntax.
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name: ${Token[TOKEN.10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Is this the right way to achieve this? If I specify the recordName as just the private IP, the synthesized template ends up doing something else, which I can see is incorrect too.
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name:
      Fn::Join:
        - ""
        - - Fn::GetAtt:
              - myEC2123A01BC
              - PrivateIp
          - .10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Answering the CDK part of your question: the original error was because you were performing string manipulation on an unresolved token. Your CDK code runs before any resources are provisioned. This has to be the case, since it generates the CloudFormation template that will be submitted to CloudFormation to provision the resources. So when the code runs, the instance does not exist, and its IP address is not knowable.
CDK still allows you to access unresolved properties, returning a Token instead. You can pass this token around and it will be resolved to the actual value during deployment.
To perform string manipulation on a token, you can use CloudFormation's built-in functions, since they run during deployment, after the token has been resolved.
Here's what it would look like:
recordName: Fn.select(0, Fn.split('.', myEc2.instancePrivateIp))
As you found out yourself, you were also selecting the wrong octet of the IP address, so the actual solution would include replacing 0 with 3 in the call.
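Putting it together, a sketch of the revRecord block from the question (assuming Fn is imported from aws-cdk-lib) might look like:

import { Fn } from 'aws-cdk-lib';

new aws_route53.RecordSet(this, "revRecord", {
  zone: revZone,
  // split the resolved IP at deploy time and take the fourth octet
  recordName: Fn.select(3, Fn.split('.', myEc2.instancePrivateIp)),
  recordType: aws_route53.RecordType.PTR,
  target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});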
References:
https://docs.aws.amazon.com/cdk/v2/guide/tokens.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib-readme.html#intrinsic-functions-and-condition-expressions

AWS CDK DocDB::DBCluster fails with 'not a valid password'

I am trying to use the AWS CDK (Java) to create a DocumentDB instance.
This works with a "simple" plaintext password, but fails when I try to use a DatabaseSecret and a password stored in Secrets Manager.
The error I get is this:
1:44:42 PM | CREATE_FAILED | AWS::DocDB::DBCluster | ApiDocDb15EB2C21
The parameter MasterUserPassword is not a valid password. Only printable ASCII characters besides '/', '#', '"', ' ' may be used.
(Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue; Request ID: c786d247-8ff2-4f30-9a8a-5065fc89d3d1; Proxy: null)
which is clear enough, but it continues to happen even if I set the password to something such as simplepassword, so I am now somewhat confused as to what I am supposed to fix.
Here is the code, mostly adapted from the DocDB documentation:
String id = String.format(DOCDB_PASSWORD_ID);
return DatabaseSecret.Builder.create(scope, id)
    .secretName(store.getSsmSecretName())
    .encryptionKey(passwordKey)
    .username(store.getAdminUser())
    .build();
where the ssmSecretName is the name of the secret in Secrets Manager:
└─( aws secretsmanager get-secret-value --secret-id api-db-admin-pwd
ARN: arn:aws:secretsmanager:us-west-2:<ACCT>:secret:api-db-admin-pwd-HHxpFf
Name: api-db-admin-pwd
SecretString: '{"api-db-admin-pwd":"simplepassword"}'
This is the code used to build the DbCluster:
DatabaseCluster dbCluster = DatabaseCluster.Builder.create(scope, id)
    .dbClusterName(properties.getDbName())
    .masterUser(Login.builder()
        .username(properties.getAdminUser())
        .kmsKey(passwordKey)
        .password(masterPassword.getSecretValue())
        .build())
    .vpc(vpc)
    .vpcSubnets(ISOLATED_SUBNETS)
    .securityGroup(dbSecurityGroup)
    .instanceType(InstanceType.of(InstanceClass.MEMORY5, InstanceSize.LARGE))
    .instances(properties.getReplicas())
    .storageEncrypted(true)
    .build();
The question I have is: should I use a DatabaseSecret, or just retrieve the password from Secrets Manager and be done with it?
A sub-question, then: what is one supposed to use the DatabaseSecret for?
(NOTE -- this is the same class, almost, as in the rds package; but here I am using the docdb package)
Thanks for any suggestion!
Turns out that the DatabaseSecret creates a key/value pair as the secret:
{
  "username": <value of username()>,
  "password": <generated>
}
However, the call to Login.password() completely ignores this and treats the whole JSON body as the password (so the double quotes trip it).
The trick is to use DatabaseSecret.secretValueFromJson("password") in the call to Login.password() and it works just fine.
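Applied to the cluster definition above, the masterUser block would look roughly like this (a sketch, assuming masterPassword is the DatabaseSecret created earlier):

.masterUser(Login.builder()
    .username(properties.getAdminUser())
    .kmsKey(passwordKey)
    // take only the generated "password" field from the secret's JSON body
    .password(masterPassword.secretValueFromJson("password"))
    .build())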
This is (incidentally) inconsistent with the behavior of rds.DatabaseCluster and the rds.Credentials class (which take a JSON SecretValue and parse it correctly for the "password" field).
Leaving it here in case others stumble on this, as there really is NO information out there.

How to use a Google Secret in a deployed Cloud Run Service (managed)?

I have a running Cloud Run service, user-service. For test purposes, I passed client secrets via environment variables as plain text. Now that everything is working fine, I'd like to use a secret instead.
In the "Variables" tab of the "Edit Revision" option I can declare environment variables, but I have no idea how to pass in a secret. Do I just need to pass the secret name like ${my-secret-id} in the value field of the variable? There is no documentation on how to use secrets in this tab, only a hint at the top:
Store and consume secrets using Secret Manager
Which is not very helpful in this case.
You can now read secrets from Secret Manager as environment variables in Cloud Run. This means you can audit your secrets, set permissions per secret, version secrets, etc, and your code doesn't have to change.
You can point to the secrets through the Cloud Console GUI (console.cloud.google.com) or make the configuration when you deploy your Cloud Run service from the command-line:
gcloud beta run deploy SERVICE --image IMAGE_URL --update-secrets=ENV_VAR_NAME=SECRET_NAME:VERSION
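As a concrete (hypothetical) example for the user-service in the question, assuming a secret named my-client-secret already exists in Secret Manager:

gcloud beta run deploy user-service --image gcr.io/PROJECT_ID/user-service \
  --update-secrets=CLIENT_SECRET=my-client-secret:latest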
Six-minute video overview: https://youtu.be/JIE89dneaGo
Detailed docs: https://cloud.google.com/run/docs/configuring/secrets
UPDATE 2021: There is now a Cloud Run preview for loading secrets to an environment variable or a volume. https://cloud.google.com/run/docs/configuring/secrets
The question is now answered; however, I have been experiencing a similar problem using Cloud Run with Java & Quarkus and a native image created using GraalVM.
While Cloud Run is a really interesting technology, at the time of writing it lacks the ability to load secrets through the Cloud Run configuration. This has certainly added complexity in my app when doing local development.
Additionally, Google's documentation is really quite poor. The quick-start lacks a clear Java example for getting a secret[1] without it being set in the same method - I'd expect this to have been the most common use case!
The Javadoc itself seems to be largely autogenerated, with protobuf language everywhere. There are various similarly named methods like getSecret, getSecretVersion and accessSecretVersion.
I'd really like to see some improvement from Google around this. I don't think it is asking too much for dedicated teams to make libraries for common languages with proper documentation.
Here is a snippet that I'm using to load this information. It requires the GCP Secret library and also the GCP Cloud Core library for loading the project ID.
public String getSecret(final String secretName) {
    LOGGER.info("Going to load secret {}", secretName);
    // SecretManagerServiceClient should be closed after request
    try (SecretManagerServiceClient client = buildClient()) {
        // Latest is an alias to the latest version of a secret
        final SecretVersionName name = SecretVersionName.of(getProjectId(), secretName, "latest");
        return client.accessSecretVersion(name).getPayload().getData().toStringUtf8();
    }
}

private String getProjectId() {
    if (projectId == null) {
        projectId = ServiceOptions.getDefaultProjectId();
    }
    return projectId;
}

private SecretManagerServiceClient buildClient() {
    try {
        return SecretManagerServiceClient.create();
    } catch (final IOException e) {
        throw new RuntimeException(e);
    }
}
[1] - https://cloud.google.com/secret-manager/docs/reference/libraries
Google has documentation for the Secret Manager client libraries that you can use in your API.
This should help you do what you want:
https://cloud.google.com/secret-manager/docs/reference/libraries
Since you haven't specified a language, here is a Node.js example of how to access the latest version of your secret using your project ID and secret name. The reason I add this is that the documentation is not clear on the string you need to provide as the name.
const [version] = await this.secretClient.accessSecretVersion({
  name: `projects/${process.env.project_id}/secrets/${secretName}/versions/latest`,
});
return version.payload.data.toString()
Be sure to allow secret manager access in your IAM settings for the service account that your api uses within GCP.
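One way to grant that access from the command line, with placeholder secret and service-account names:

gcloud secrets add-iam-policy-binding SECRET_NAME \
  --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
  --role="roles/secretmanager.secretAccessor"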
I kinda found a way to use secrets as environment variables.
The following doc (https://cloud.google.com/sdk/gcloud/reference/run/deploy) states:
Specify secrets to mount or provide as environment variables. Keys starting with a forward slash '/' are mount paths. All other keys correspond to environment variables. The values associated with each of these should be in the form SECRET_NAME:KEY_IN_SECRET; you may omit the key within the secret to specify a mount of all keys within the secret. For example: '--update-secrets=/my/path=mysecret,ENV=othersecret:key.json' will create a volume with secret 'mysecret' and mount that volume at '/my/path'. Because no secret key was specified, all keys in 'mysecret' will be included. An environment variable named ENV will also be created whose value is the value of 'key.json' in 'othersecret'. At most one of these may be specified.
Here is a snippet of Java code to get all secrets of your Cloud Run project. It requires the com.google.cloud/google-cloud-secretmanager artifact.
Map<String, String> secrets = new HashMap<>();
String projectId;
String url = "http://metadata.google.internal/computeMetadata/v1/project/project-id";
HttpURLConnection conn = (HttpURLConnection) (new URL(url).openConnection());
conn.setRequestProperty("Metadata-Flavor", "Google");
try {
    InputStream in = conn.getInputStream();
    projectId = new String(in.readAllBytes(), StandardCharsets.UTF_8);
} finally {
    conn.disconnect();
}

Set<String> names = new HashSet<>();
try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
    ProjectName projectName = ProjectName.of(projectId);
    ListSecretsPagedResponse pagedResponse = client.listSecrets(projectName);
    pagedResponse
        .iterateAll()
        .forEach(secret -> { names.add(secret.getName()); });
    for (String secretName : names) {
        String name = secretName.substring(secretName.lastIndexOf("/") + 1);
        SecretVersionName nameParam = SecretVersionName.of(projectId, name, "latest");
        String secretValue = client.accessSecretVersion(nameParam).getPayload().getData().toStringUtf8();
        secrets.put(secretName, secretValue);
    }
}
Cloud Run support for referencing Secret Manager Secrets is now at general availability (GA).
https://cloud.google.com/run/docs/release-notes#November_09_2021

How to Authenticate Google Vision/Cloud Using ENV Variable in Ruby on Rails

My app is hosted on Heroku, so I'm trying to figure out how to use the JSON Google Cloud provides (to authenticate) as an environment variable, but so far I can't get authenticated.
I've searched Google and Stack Overflow and the best leads I found were:
Google Vision API authentication on heroku
How to upload a json file with secret keys to Heroku
Both say they were able to get it to work, but they don't provide code that I've been able to get to work. Can someone please help me? I know it's probably something stupid.
I'm currently just trying to test the service in my product model leveraging this sample code from Google. Mine looks like this:
def self.google_vision_labels
  # Imports the Google Cloud client library
  require "google/cloud/vision"

  # Your Google Cloud Platform project ID
  project_id = "foo"

  # Instantiates a client
  vision = Google::Cloud::Vision.new project: project_id

  # The name of the image file to annotate
  file_name = "http://images5.fanpop.com/image/photos/27800000/FOOTBALL-god-sport-27863176-2272-1704.jpg"

  # Performs label detection on the image file
  labels = vision.image(file_name).labels

  puts "Labels:"
  labels.each do |label|
    puts label.description
  end
end
I keep receiving this error,
RuntimeError: Could not load the default credentials. Browse to
https://developers.google.com/accounts/docs/application-default-credentials for more information
Based on what I've read, I tried placing the JSON contents in secrets.yml (I'm using the Figaro gem) and then referring to it in a Google.yml file based on the answer in this SO question.
In application.yml, I put (I overwrote some contents in this post for security):
GOOGLE_APPLICATION_CREDENTIALS: {
  "type": "service_account",
  "project_id": "my_project",
  "private_key_id": "2662293c6fca2f0ba784dca1b900acf51c59ee73",
  "private_key": "-----BEGIN PRIVATE KEY-----\n #keycontents \n-----END PRIVATE KEY-----\n",
  "client_email": "foo-labels#foo.iam.gserviceaccount.com",
  "client_id": "100",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/get-product-labels%40foo.iam.gserviceaccount.com"
}
and in config/google.yml, I put:
GOOGLE_APPLICATION_CREDENTIALS = ENV["GOOGLE_APPLICATION_CREDENTIALS"]
also, tried:
GOOGLE_APPLICATION_CREDENTIALS: ENV["GOOGLE_APPLICATION_CREDENTIALS"]
I have also tried changing these variable names in both files from GOOGLE_APPLICATION_CREDENTIALS to GOOGLE_CLOUD_KEYFILE_JSON and VISION_KEYFILE_JSON, based on this Google page.
Can someone please, please help me understand what I'm doing wrong in referencing/creating the environmental variable with the JSON credentials? Thank you!
It's really annoying that Google decides to buck de facto credential standards by storing secrets via a file instead of a series of environment variables.
That said, my solution to this problem is to create a single .env variable GOOGLE_API_CREDS.
I paste the raw JSON blob into the .env and then remove all newlines. Then in the application code I use JSON.parse(ENV.fetch('GOOGLE_API_CREDS')) to convert the JSON blob into a real hash:
The .env file:
GOOGLE_API_CREDS={"type": "service_account","project_id": "your_app_name", ... }
Then in the application code (Google OCR client as an example):
Google::Cloud::Vision::ImageAnnotator.new(credentials: JSON.parse(ENV.fetch('GOOGLE_API_CREDS')))
Cheers
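Applying that to the label-detection code from the question, a hedged sketch (not from the original answer, and assuming the label_detection helper available in recent google-cloud-vision releases):

require "google/cloud/vision"
require "json"

# Build the client with credentials parsed from the environment variable
image_annotator = Google::Cloud::Vision::ImageAnnotator.new(
  credentials: JSON.parse(ENV.fetch("GOOGLE_API_CREDS"))
)

# Run label detection on the image URL used in the question
response = image_annotator.label_detection(
  image: "http://images5.fanpop.com/image/photos/27800000/FOOTBALL-god-sport-27863176-2272-1704.jpg"
)
response.responses.each do |res|
  res.label_annotations.each { |label| puts label.description }
end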
Building on Dylan's answer, I found that I needed to use an extra line to configure the credentials as follows:
Google::Cloud::Language.configure {|gcl| gcl.credentials = JSON.parse(ENV['GOOGLE_APP_CREDS'])}
because the .new(credentials: ...) method was not working for Google::Cloud::Language.
I had to look in the (sparse) Ruby reference section of Google Cloud Language.
And yeah... storing secrets in a file is quite annoying, indeed.
I had the same problem with Google Cloud Speech, using the "Getting Started" doc from Google.
The above answers helped a great deal, coupled with updating my Google Speech Gem to V1 (https://googleapis.dev/ruby/google-cloud-speech-v1/latest/Google/Cloud/Speech/V1/Speech/Client.html)
I simply use a StringIO object so that Psych thinks that it's an actual file that I read:
google:
  service: GCS
  project: ''
  bucket: ''
  credentials: <%= StringIO.new(ENV['GOOGLE_CREDENTIALS']) %>
