aws cdk to use non-default profile - aws-cdk

I don't want to use my default AWS profile and account for CDK development, so I created a new account and a new profile cdkprof using aws configure --profile cdkprof.
I have verified in the ~/.aws/credentials and ~/.aws/config files that the new profile was created correctly. Running export AWS_PROFILE=cdkprof && aws configure list && aws sts get-caller-identity returns my profile details correctly.
I have also exported
CDK_DEFAULT_REGION,
CDK_DEFAULT_ACCOUNT,
AWS_DEFAULT_REGION,
AWS_PROFILE,
AWS_SECRET_ACCESS_KEY and
AWS_ACCESS_KEY_ID
and these are available as environment variables in bash.
However, when I try to run:
$ npx cdk bootstrap --profile cdkprof
I get the error:
Unable to resolve AWS account to use. It must be either configured when you define your CDK or through the environment
How do I use my new profile and account with the cdk commands?
Thanks.

By default, CDK commands use the default AWS CLI profile. However, you can specify a named profile for a project by adding it to the file that configures the CDK CLI. For a TypeScript project, this is cdk.json at the project root:
{
  "app": "npx ts-node --prefer-ts-exts bin/project.ts",
  "profile": "cdkprof",
  "context": {
    ...
  }
}
Then, your named profile will be used when you run commands such as cdk bootstrap.
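Alternatively, the account can be resolved explicitly on the command line. A minimal sketch, assuming the profile is named cdkprof and has a region configured:

```shell
# Look up the account and region from the named profile, then bootstrap
# with an explicit environment so the CDK doesn't have to guess.
export AWS_PROFILE=cdkprof
CDK_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
CDK_REGION=$(aws configure get region)
npx cdk bootstrap "aws://${CDK_ACCOUNT}/${CDK_REGION}" --profile cdkprof
```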

The profile name in the credentials and config files should be the same. Note that in ~/.aws/config, non-default profiles use a [profile NAME] section header, and INI values are unquoted. For example:
~/.aws/credentials:
[cdk]
aws_access_key_id = xxxxxxx
aws_secret_access_key = xxxxxxx
~/.aws/config:
[profile cdk]
region = us-east-1

This worked for me: cdk bootstrap aws://[ACCOUNT-NUMBER]/[REGION] --profile [YOUR PROFILE]

Related

How to pass in AWS environmental variables to Nextflow for use in Docker container

I would like to run a Nextflow pipeline inside a Docker container. As part of the pipeline I would like to push to and pull from AWS. To that end, I need to pass AWS credentials into the container, but I do not want to write them into the image.
Nextflow has an option to pass environment variables into the Docker scope via the envWhitelist option, however I have not been able to find an example of the correct syntax for doing this.
I have tried the following syntax and get an access denied error, suggesting that I am not passing in the variables properly.
docker {
    enabled = true
    envWhitelist = "AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID"
}
I explicitly passed these variables into my environment and I can see them using printenv.
Does this syntax seem correct? Thanks for any help!
Usually you can just keep your AWS security credentials in a file called ~/.aws/credentials:
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not defined in the environment, Nextflow will attempt to retrieve credentials from your ~/.aws/credentials or ~/.aws/config files.
Alternatively, you can declare your AWS credentials in your nextflow.config (or in a separate config profile) using the aws scope:
aws {
    accessKey = '<YOUR S3 ACCESS KEY>'
    secretKey = '<YOUR S3 SECRET KEY>'
    region = '<REGION IDENTIFIER>'
}
You could also use an IAM Instance Role to provide your credentials.
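For the envWhitelist route, the variables must actually exist in the environment of the shell that launches Nextflow. A minimal sketch (the key values are placeholders; per the Nextflow docs, envWhitelist also accepts a list of names instead of a comma-separated string):

```shell
# Export the credentials in the launching shell so envWhitelist can
# forward them into the task containers (values are placeholders).
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=examplesecretkey
nextflow run main.nf -with-docker
```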

docker-compose - how to provide credentials or API key in order to pull image from private repository?

I have a private repo to which I upload images outside of Docker.
image: example-registry.com:4000/test
I have that defined in my docker-compose file.
How can I provide credentials or an API key in order to pull from that repository? Is it possible to do this without executing the "docker login" command, or is it required to always execute it prior to the docker-compose command?
I have an API key which I use, for example, to call the REST API from PowerShell or other tools.
Can I use that somehow in order to avoid the "docker login" command constantly?
Thank you
docker login creates or updates the ~/.docker/config.json file for you. With just the login part, it looks like:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "REDACTED"
    }
  }
}
There can be many other things in this file; see the documentation for details.
So to answer your question, you can avoid the login command by distributing this file instead. Something like:
1. Create a dedicated token (you shouldn't have multiple usages per token) at https://hub.docker.com/settings/security
2. Move your current config elsewhere if it exists: mv ~/.docker/config.json /tmp
3. Execute docker login -u YOUR-ACCOUNT, using the token as the password
4. Copy the generated ~/.docker/config.json, which you can then distribute to your server(s). This file is as much a secret as your password, so don't make it public!
5. Move your original config back: mv /tmp/config.json ~/.docker/
Having the file as a secret that you distribute isn't much different from entering the docker login command, though, especially if you have some scripting to do it.
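If you do script it, a token can be fed to docker login non-interactively via --password-stdin, so it never shows up in shell history or process listings. A sketch, assuming the token is held in an environment variable named DOCKER_TOKEN and using the registry host from the question:

```shell
# Non-interactive registry login with a token from the environment
# (DOCKER_TOKEN, the account name, and the registry host are assumptions).
echo "$DOCKER_TOKEN" | docker login example-registry.com:4000 \
    --username YOUR-ACCOUNT --password-stdin
docker-compose pull
```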

Serverless offline not taking environment variables

I am new to serverless and would like your help to figure out what I am doing wrong.
In my local development, after using sls offline --config cusom.yml I am unable to retrieve secrets. After a bit of debugging, I found that the credentials object is null.
However, when I invoke it separately using the pure JavaScript aws-sdk (not with Serverless), I am able to retrieve the secrets and the credentials object is prepopulated. Please let me know if you have any suggestions on why this is not working with sls offline.
Do you have the following files locally?
~/.aws/credentials
~/.aws/config
These files serve as the credentials source if you don't write them in your code. Most libraries and the AWS CLI rely on them for access:
$ cat ~/.aws/credentials
[default]
aws_secret_access_key = your_aws_secret_access_key
aws_access_key_id = your_aws_access_key_id
$ cat ~/.aws/config
[default]
region = us-east-1 # or your preferred region
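If you'd rather not touch your real home directory while testing, the AWS SDKs and CLI also honor the AWS_SHARED_CREDENTIALS_FILE and AWS_CONFIG_FILE environment variables. A minimal sketch with placeholder key values:

```shell
# Write scratch credentials/config files and point the SDK/CLI at them.
# The key values below are placeholders, not real credentials.
tmpdir=$(mktemp -d)

cat > "$tmpdir/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = examplesecretkey
EOF

cat > "$tmpdir/config" <<'EOF'
[default]
region = us-east-1
EOF

export AWS_SHARED_CREDENTIALS_FILE="$tmpdir/credentials"
export AWS_CONFIG_FILE="$tmpdir/config"

# Anything launched from this shell now reads these files instead.
grep region "$AWS_CONFIG_FILE"
```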

Enable CI secret variables inside ecosystem.config?

How can I use secret variables inside my ecosystem.config.js?
So this is inside my gitlab-ci.yml file. I can access secret variables via "$...":
....
- echo "$AWS_SSH_PRIVATE_KEY" | ssh-add -
- ssh-add <(echo "$RUNNER_SSH_PRIVATE_KEY")
...
script:
- pm2 deploy ecosystem.config.js production
My ecosystem.config looks like this:
apps: [{
    name: 'test',
    script: './test.js',
    env_production: {
        NODE_ENV: 'production'
    },
    env: {
        "test_ENV": "$MY_SECRET_VARIABLE" // not working
    }
}],
So I want to set env variables to make them available inside node via process.env.
How can I achieve this?
This might work! I just found it, but haven't tested yet.
https://github.com/icehaunter/pm2-better-deploy
It adds a save_env deployment setup config key, which can be an array of strings or an object. All elements are filled from environment variables in the environment where the deployment command was run, and saved into the config on the deployment server. This allows you to "pass" environment variables, such as secrets, from the GitLab runner instance to the pm2 instance.

Connecting to different ARN/Role/Amazon Account when trying to deploy

I have previously had Serverless installed on a server, and then when I tried to edit the function and package it back up to edit the zip file I broke it, so I have to start all over. So to begin this issue: I had Serverless running and was using it with this package - https://github.com/adieuadieu/serverless-chrome/tree/master/examples/serverless-framework/aws
When I sudo npm run deploy, I get the ServerlessError:
ServerlessError: User: arn:aws:sts::XXX:assumed-role/EC2CodeDeploy/i-268b1acf is not authorized to perform: cloudformation:DescribeStackResources on resource: arn:aws:cloudformation:us-east-1:YYY:stack/aws-dev/*
I'm not sure why it is trying to connect to a Role and not an IAM. So I check the Role, and it is in an entirely different AWS account than the account I've configured. Let's call this Account B.
When it comes to configuration, I've installed the AWS CLI and entered the key, ID, and region for my Account A in AWS, not touching Account B whatsoever. When I run aws s3 ls I see the correct S3 buckets for the account with that key/ID/region, so I know the CLI is working with the correct account. Sounds good. I check the ~/.aws/credentials file and it just has one profile, [default], which seems normal. No other profiles are in there. I copied this over to the ~/.aws/config file so now both files are the same. Works great.
I then go into my SSH where I've installed serverless, and run npm run deploy and it gives me the same message above. I think maybe somehow it is not using the correct account for whatever reason. So I manually set the access key and secret with the following commands:
serverless config credentials --provider aws --key XXX --secret YYY
It tells me there already is a profile in the aws creds file, so I then add --o to the end to overwrite. I run sudo npm run deploy and still same error.
I then run this command to manually set a profile in the creds for serverless, with the profile name matching the IAM user name:
serverless config credentials --provider aws --key XXX --secret YYY --profile serverless-agent
Where "serverless-agent" is the name of my IAM user I've been trying to use to deploy. I run this, it tells me there already is an existing profile in the aws creds file so I run it with --o and it tells me the aws file is now updated. In bash I go to Vim the file and I only see the single "[default]" settings, as if nothing has changed. I run sudo npm run deploy and it gives me the same Error.
I then go and manually set the access and secret:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
I run sudo npm run deploy and it gives me the same Error.
I even removed the AWS CLI and the directory that holds the credentials and config files - and when I manually set my account creds via serverless config it tells me there already is a profile set up in my aws file, prompting me to use the overwrite command - how is this possible when the file is literally not on my computer?
So I then think that serverless itself has a cache or something, calling the wrong file or whatever for creds, so I uninstall serverless via sudo npm uninstall -g serverless so that I can start from zero again. I then do all of the above steps and more all over again, and nothing has changed. Same error message.
I do have Apex.run set up, but that should be using my AWS CLI config file so I'm not sure if that is causing any problems. But then again I've no clue of anything deep on this subject, and I can't find any ability to remove Apex itself in their docs.
In the package I am trying to deploy, I do not have a profile:XXX set in the serverless.yml file, because I've read if you do not then it just defaults to the [default] profile you have set in the aws creds file on your computer. Just to check, I go into the serverless.yml file and set the profile: default, and the error I now get when I run npm run deploy is
Profile default does not exist
How is that possible when I have the "default" profile set in my creds file? So I remember that previously I ran the serverless config credentials command and added the profile name serverless-agent to it (yet it didn't save in the aws creds file as I mentioned above), so I add that profile name to the serverless.yml file just to see if this works, and get the same error of "Profile default does not exist".
So back to the error message. The Role is in an account not even related to the IAM user in my aws creds. Without knowing a lot about this, it's as if the Serverless config via SSH isn't correct or something. Is it using old creds I had set up in Apex.run? Why is the aws creds file not updated with the profile when I manually set it via the serverless config command? I am using the same user account (but with a new key and secret) that I used a few weeks ago when I correctly deployed and my Lambda and API were set up for me on AWS. Boy do I miss those times, and I wish I hadn't messed up my existing Lambda functions without setting a version number first, forcing me to start all over.
I am so confused. Any help would be greatly appreciated.
If you are using an IAM role, then you have to assume that role explicitly (for example with the AWS CLI or PowerShell) rather than relying on user credentials.
I was facing the same issue earlier, when we moved from a user to a role.
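A sketch of assuming the role explicitly with the AWS CLI (the role ARN is reconstructed from the error message, and the session name is a placeholder), then exporting the temporary credentials so the deploy picks them up:

```shell
# Assume the role, then export the temporary credentials for the deploy
# (the role ARN and session name are placeholders).
creds=$(aws sts assume-role \
    --role-arn arn:aws:iam::XXX:role/EC2CodeDeploy \
    --role-session-name serverless-deploy \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)
npm run deploy
```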
