Default credentials can not be used to assume new style deployment roles - aws-cdk

Following the pipelines README to set up a deployment pipeline, I ran
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
aws://[ACCOUNT_ID]/us-west-2
to create the necessary roles. I assumed the roles would automatically allow sts:AssumeRole from my account principal. However, when I run cdk deploy I get the following warning:
current credentials could not be used to assume
'arn:aws:iam::[ACCOUNT_ID]:role/cdk-hnb659fds-file-publishing-role-[ACCOUNT_ID]-us-west-2',
but are for the right account. Proceeding anyway.
I have root credentials in ~/.aws/credentials.
Looking at the deploy role policy, I don't see any sts permissions. What am I missing?

You will need to add permission to assume the role to the credentials from which you are trying to execute cdk deploy, for example with a statement like the one below. (Note that the arn:aws-cn partition is for the China regions; use arn:aws for standard regions such as us-west-2.)
{
    "Sid": "assumerole",
    "Effect": "Allow",
    "Action": [
        "sts:AssumeRole",
        "iam:PassRole"
    ],
    "Resource": [
        "arn:aws-cn:iam::*:role/cdk-readOnlyRole",
        "arn:aws-cn:iam::*:role/cdk-hnb659fds-deploy-role-*",
        "arn:aws-cn:iam::*:role/cdk-hnb659fds-file-publishing-*"
    ]
}
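If it helps, here is a minimal sketch of attaching that statement as an inline policy to the IAM user whose credentials you use locally. The user, policy, and file names are hypothetical, and the statement above has to be wrapped in a standard policy document ({"Version": "2012-10-17", "Statement": [ ... ]}) inside the file:
aws iam put-user-policy \
  --user-name my-cdk-user \
  --policy-name cdk-assume-deploy-roles \
  --policy-document file://cdk-assume-roles.json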

The first thing you need to do is enable verbose mode to see what is actually happening:
cdk deploy --verbose
If you see a message similar to the one below, continue with step 2. Otherwise, you need to address the problem by understanding the error message.
Could not assume role in target account using current credentials User: arn:aws:iam::XXX068599XXX:user/cdk-access is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX068599XXX:role/cdk-hnb659fds-deploy-role-XXX068599XXX-us-east-2 . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
Check the S3 buckets related to CDK and the CloudFormation stacks in the AWS Console, and delete them manually.
Enable the new style bootstrapping by one of the methods mentioned here.
Bootstrap the stack using the command below. It should then create all the required roles automatically.
cdk bootstrap --trust=ACCOUNT_ID --cloudformation-execution-policies=arn:aws:iam::aws:policy/AdministratorAccess --verbose
NOTE: If you are working with Docker image assets, make sure you have set up your repository before you deploy. New style bootstrapping does not create the repos automatically for you, as mentioned in this comment.
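For example, you could create the asset repository up front with the AWS CLI; the repository name here is a placeholder:
aws ecr create-repository --repository-name my-app/lambda-assets --region us-west-2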

This may be of use to somebody... The issue could be a mismatch of regions. I spotted it in verbose mode: the roles had been created for us-east-1, but I had specified eu-west-2 in the bootstrap, which for some reason had not taken effect. The solution was to set the region explicitly by adding AWS_REGION=eu-west-2 before the cdk deploy command.
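For example (the stack name here is hypothetical):
AWS_REGION=eu-west-2 npx cdk deploy MyStack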

I ran into a similar error. The critical part of my error was
failed: Error: SSM parameter /cdk-bootstrap/<>/version not found.
I had to re-run using the new bootstrap method, which creates the SSM parameter. To run the new bootstrap method, first set CDK_NEW_BOOTSTRAP via export CDK_NEW_BOOTSTRAP=1, as sketched below.
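A minimal sketch of the two commands, using placeholder account and region values:
export CDK_NEW_BOOTSTRAP=1
npx cdk bootstrap \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
  aws://ACCOUNT_ID/us-west-2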

Don't forget to run cdk bootstrap with those credentials against your account [ACCOUNT_ID].

For me, the problem was that I was using expired temporary credentials from AWS SSO. The error message is misleading: it says
current credentials could not be used to assume 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1', but are for the right account. Proceeding anyway.
(To get rid of this warning, please upgrade to bootstrap version >= 8)
However, applying the --verbose flag as suggested above showed the real problem:
Assuming role 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1'.
Assuming role failed: The security token included in the request is expired
Could not assume role in target account using current credentials The security token included in the request is expired . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
Getting the latest SSO credentials fixed the problem.
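For reference, refreshing the SSO credentials and re-running the deploy looks roughly like this (the profile name is a placeholder):
aws sso login --profile my-sso-profile
npx cdk deploy --profile my-sso-profile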

After deploying with --verbose I could see it was a clock issue in my case:
Assuming role failed: Signature expired: 20220428T191847Z is now earlier than 20220428T192528Z (20220428T194028Z - 15 min.)
I resolved the clock issue on Ubuntu using:
sudo ntpdate ntp.ubuntu.com
which then resolves the cdk issue.

Related

Unable to define params variables in Serverless console

According to the Serverless documentation, I should be able to define params within the dashboard/console. But when I navigate there, the inputs are disabled.
I've tried following the instructions to update via CLI, with: serverless deploy --param="domain=myapp.com" --param="key=value". The deploy runs successfully (I get a ✔ Service deployed to... message with no errors), but nothing appears in my dashboard. Likewise, when I run a command to check whether there are any params stored: serverless param list, I get
Running "serverless" from node_modules
No parameters stored
Passing --param flags will not upload the parameters to the Dashboard/Console; it only exposes them in your configuration so you can access them with ${param:<param-name>}. To the best of my knowledge, it is not possible to set Dashboard parameters with the CLI; you need to set them manually via the UI.
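For example, a minimal serverless.yml sketch of consuming such a parameter (the key and value are hypothetical):
# serverless.yml
service: my-service
provider:
  name: aws
custom:
  appDomain: ${param:domain}  # resolves to "myapp.com" when deploying with --param="domain=myapp.com"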
It was a permissions problem. The owner of the account updated the permissions and I was able to update the inputs.

Problem enabling Keycloak read-only user attributes

I've attempted to enable Read-only user attributes in Keycloak as per the docs: https://www.keycloak.org/docs/latest/server_admin/
However the documented configuration does not actually prevent a user from changing their attributes.
I'm using Keycloak 15.0.0 with the regular Docker image from Docker Hub.
I made a .cli file and added it to my Docker image, built from:
FROM jboss/keycloak:15.0.0
ADD RESTRICT_USER_ATTRIBUTES.cli /opt/jboss/startup-scripts/
With contents of RESTRICT_USER_ATTRIBUTES.cli:
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
run-batch
stop-embedded-server
The .cli file is processed according to the log. I can exec into the Docker instance and check the configuration using jboss-cli.sh.
But the end user can still freely edit myUserAttribute using Postman or another tool.
What am I doing wrong here?
I just had this issue, and it seems the documentation is out-of-date.
They changed the provider name, probably in 15.0.0.
Try changing your cli script to:
# ...
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
# ...
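To double-check what the SPI configuration actually ended up as inside the container, something like the following should work; the jboss-cli.sh path and the read-resource invocation are assumptions based on the jboss/keycloak image layout:
/opt/jboss/keycloak/bin/jboss-cli.sh --connect \
  --command="/subsystem=keycloak-server/spi=userProfile:read-resource(recursive=true)"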

Deploying code on lambda failed using serverless

I was trying to deploy code on Lambda using serverless deploy and got the error below. I tried multiple solutions available online, but they didn't work.
Error -
Serverless: Packaging service...
Serverless Error ---------------------------------------
The specified bucket does not exist
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: darwin
Node Version: 8.12.0
Serverless Version: 1.31.0
When you deploy your Serverless application, it uses the service attribute (defined in your serverless.yaml) as a unique identifier of your application in CloudFormation.
That said, you may have a conflict if you change the name of the bucket without removing the stack. For example:
You deploy your application with the bucket called myBucket.
The CloudFormation stack will be created with this info.
You change this name to myBucketPlus and try to deploy.
Serverless will try to clean up myBucketPlus from the last deploy before pushing the new one.
But wait! myBucketPlus does not exist.
As you did not describe exactly what you did, I tried to give an example, but it could be something else.
Also, you could try removing and deploying again, as sketched below.
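A rough sketch of that last suggestion (stage and region are placeholders; serverless remove tears down the stack so the next deploy recreates everything, including the deployment bucket):
serverless remove --stage dev --region us-east-1
serverless deploy --stage dev --region us-east-1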
The best way to resolve this issue is:
Execute the command below to see the Lambda information, which also provides the S3 bucket name, region, endpoint info, etc.; you only need the bucket name and region in this case.
sls info -v
Create the bucket in the intended region (see the example command below).
Done.
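A hedged sketch of that bucket-creation step with the AWS CLI, using placeholder values in place of the bucket name and region reported by sls info -v:
aws s3 mb s3://my-service-deployment-bucket --region us-east-1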

Google Cloud Storage: Output path does not exist or is not writeable

I am trying to follow this simple Dataflow example from the Google Cloud site.
I have successfully installed the Dataflow pipeline plugin and the gcloud SDK (as well as Python 2.7). I have also set up a project on Google Cloud and enabled billing and all the necessary APIs, as specified in the instructions above.
However, when I go to the run configurations and change the Pipeline Arguments tab to select BlockingDataflowPipelineRunner, after creating a bucket and setting my project ID, hitting run gives me:
Caused by: java.lang.IllegalArgumentException: Output path does not exist or is not writeable: gs://my-cloud-dataflow-bucket
at com.google.cloud.dataflow.sdk.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:146)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.verifyPathIsAccessible(DataflowPathValidator.java:79)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.validateOutputFilePrefixSupported(DataflowPathValidator.java:62)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.fromOptions(DataflowPipelineRunner.java:255)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.fromOptions(BlockingDataflowPipelineRunner.java:82)
... 9 more
I have used my terminal to execute 'gcloud auth login' and I see in the browser that I am successfully logged in.
I am really not sure what I have done wrong here. Can anyone confirm if this is a known issue with using dataflow pipeline and google buckets?
Thanks!
I had a similar issue with GCS bucket permissions, even though I certainly had write permissions and I could upload files into the bucket.
What solved the problem for me was acquiring the roles/dataflow.admin role for the project I was submitting the pipeline to.
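Granting that role can be done with gcloud; a sketch using placeholder project and user values:
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:you@example.com" \
  --role="roles/dataflow.admin"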
When submitting pipelines to the Google Cloud Dataflow Service, the pipeline runner on your local machine uploads files, which are necessary for execution in the cloud, to a "staging location" in Google Cloud Storage.
The pipeline runner on your local machine seems to be unable to write the required files to the staging location provided (gs://my-cloud-dataflow-bucket). It could be that the location doesn't exist, or that it belongs to a different GCP project than you authenticated against, or that there are more specific permissions set on that bucket, etc.
You can start debugging the issue via the gsutil command-line tool. For example, try running gsutil ls gs://my-cloud-dataflow-bucket to attempt to list the contents of the bucket. Then, try to upload via the gsutil cp command. This will perhaps produce enough information to root-cause the issue you are facing.
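A small sketch of that debugging sequence (the local file path is arbitrary):
gsutil ls gs://my-cloud-dataflow-bucket          # can you list the bucket at all?
echo probe > /tmp/probe.txt
gsutil cp /tmp/probe.txt gs://my-cloud-dataflow-bucket/probe.txt   # can you write to it?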
Try providing the zone parameter; it worked in my case with a similar error. And of course, export the GOOGLE_APPLICATION_CREDENTIALS environment variable before running your app (a sketch of the export follows the snippet below).
...
-Dexec.args="--runner=DataflowRunner \
--gcpTempLocation=gs://bucket/tmp \
--zone=bucket-zone \
...
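And the GOOGLE_APPLICATION_CREDENTIALS export mentioned above, with a placeholder key path:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json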
Got the same error. Fixed it by setting GOOGLE_APPLICATION_CREDENTIALS using the key file with write permissions in ~/.bash_profile on Mac.
I realised I needed to use a specific acl command via gsutil. Setting my account to have owner permissions did not do the job. Instead using:
gsutil acl set public-read-write gs://my-bucket-name-here
worked in this case. Hope this helps someone!

Jenkins: How to Change LDAP Password

My institution requires me to periodically change my LDAP password.
In the past, I was able to perform the following steps to change my password:
Create a Base64 encoded password at http://www.base64encode.org/
Edit /var/lib/jenkins/config.xml and change <managerPassword/>.
However, the recent version of Jenkins no longer use <managerPassword/>. Instead, I'm seeing <managerPasswordSecret/>.
I'm not sure how to generate the new secret password, so I did the following:
Backup /var/lib/jenkins/config.xml first.
Edit /var/lib/jenkins/config.xml and change <useSecurity/> to false.
Restart Jenkins service.
Go to Jenkins.
Enable LDAP Security.
Enter new LDAP password.
Save it.
Open up /var/lib/jenkins/config.xml and copy <managerPasswordSecret/>.
Restore backup config file.
Replace <managerPasswordSecret/> with the new value.
This is incredibly convoluted.
Is there a more straightforward way for me to maintain my LDAP password change in the future?
Thanks much!
None of the above solutions worked for me with a newer version of Jenkins (2.78). What did work was putting the managerPasswordSecret in without any encryption. Once I ran Jenkins, the password got encrypted for me.
You can still use <managerPassword>.
Generate the new encoded password with
perl -e 'use MIME::Base64; print encode_base64("yourNewPassword");'
In your config.xml, find <hudson>/<securityRealm>/<managerPasswordSecret>. Change <managerPasswordSecret> to <managerPassword> (both the opening and closing tags) and put the encoded value from the perl command above between them. Save the file.
Restart Jenkins.
Log in and, using the UI, reset the LDAP manager password to the same yourNewPassword. config.xml should now be back to <managerPasswordSecret>.
If you are paranoid (like me), restart Jenkins again to use the newly modified config.xml.
I was trying to do the same thing, and this is a simple solution (run it from the Jenkins script console):
import com.trilead.ssh2.crypto.Base64;
import javax.crypto.Cipher;
import jenkins.security.CryptoConfidentialKey;
import hudson.util.Secret;
// Use the same confidential key that hudson.util.Secret uses internally
CryptoConfidentialKey KEY = new CryptoConfidentialKey(Secret.class.getName());
Cipher cipher = KEY.encrypt();
// Marker that Jenkins appends so it can later verify that decryption succeeded
String MAGIC = "::::MAGIC::::";
String VALUE_TO_ENCRYPT = ""; // put the plain-text password here
println(new String(Base64.encode(cipher.doFinal((VALUE_TO_ENCRYPT + MAGIC).getBytes("UTF-8")))));
Decoding is simpler:
println(hudson.util.Secret.decrypt(HashFromConfigXmlHere));
Edit your config.xml file by hand.
If your Jenkins uses a <managerPasswordSecret> set of tags, put the new plain text password in there and Jenkins will read it. Once Jenkins starts up, go to the Configure System > Configure Global Security page and click Save. That will update that field with the encrypted version.
The current easiest and fastest solution (it just worked for me) is from CloudBees: simply enter the new password into the password field in config.xml as plain text (not encrypted) and Jenkins will read it correctly. Once you start Jenkins, just re-save the Manage Jenkins -> Configure Global Security page.
https://support.cloudbees.com/hc/en-us/articles/221230028-Changing-LDAP-Password
I tried the solution provided by alkuzad and it's working fine. Just to clarify: you can't use the Jenkins web console when the LDAP user password is expired. So what I did is as follows (I have the Groovy script plugin in Jenkins; I also granted run-script access to the anonymous user - not a good idea, but it's the way I initially found to resolve this recurring issue):
Downloaded jenkins-cli.jar.
Put the code above in GroovyPasswordClass.txt (don't forget to use the new password in place of VALUE_TO_ENCRYPT in the code).
Started the Jenkins server (Jenkins must be running).
Ran the command below from the command prompt:
java -jar jenkins-cli.jar -s groovy GroovyPasswordClass.txt
This will print the encrypted password.
Better Option
Well, later I found a better way to handle authentication when the directory service provider is MS Active Directory. In that case, instead of the LDAP plugin, I used the Active Directory plugin for authentication. I found this better because:
1) Responses are faster when using the Active Directory plugin instead of the generic LDAP-protocol-based plugin.
2) The Active Directory plugin uses the user account with which the Jenkins service was started, so there is no need to configure any user account in Jenkins. You will therefore never end up in a situation where your Jenkins login stops working because the user configured for LDAP has an expired password.
Hope this will help others trying to resolve this issue.
