According to the Serverless documentation, I should be able to define params within the dashboard/console. But when I navigate there, the inputs are disabled:
I've tried following the instructions to update them via the CLI with serverless deploy --param="domain=myapp.com" --param="key=value". The deploy runs successfully (I get a ✔ Service deployed to... message with no errors), but nothing appears in my dashboard. Likewise, when I run serverless param list to check whether any params are stored, I get:
Running "serverless" from node_modules
No parameters stored
Passing param flags will not upload the parameters to the Dashboard/Console; it only exposes them in your configuration so you can access them with ${param:<param-name>}. To the best of my knowledge, it is not possible to set Dashboard parameters via the CLI; you need to set them manually via the UI.
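For reference, a param passed on the command line can then be read in serverless.yml roughly like this (the custom key name below is just an example, not from the original question):
# deployed with: serverless deploy --param="domain=myapp.com"
custom:
  appDomain: ${param:domain}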
It was a permissions problem. The owner of the account updated the permissions and I was able to update the inputs.
I've attempted to enable Read-only user attributes in Keycloak as per the docs: https://www.keycloak.org/docs/latest/server_admin/
However, the documented configuration does not actually prevent a user from changing their attributes.
I'm using Keycloak 15.0.0 with the regular Docker image from Docker Hub.
I made a .cli file and added it to my Docker image, built from:
FROM jboss/keycloak:15.0.0
ADD RESTRICT_USER_ATTRIBUTES.cli /opt/jboss/startup-scripts/
With contents of RESTRICT_USER_ATTRIBUTES.cli:
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
run-batch
stop-embedded-server
The .cli file is processed according to the log, and I can exec into the Docker container and check the configuration using jboss-cli.sh.
But the end user can freely edit myUserAttribute using Postman or another tool.
What am I doing wrong here?
I just had this issue, and it seems the documentation is out of date.
They changed the provider name, probably in 15.0.0.
Try changing your cli script to:
# ...
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
# ...
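To verify the override actually took effect, you can exec into the running container and read the SPI back with jboss-cli.sh, along these lines (the paths assume the stock jboss/keycloak image):
/opt/jboss/keycloak/bin/jboss-cli.sh --connect
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:read-resource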
Following the pipelines README to set up a deployment pipeline, I ran
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
aws://[ACCOUNT_ID]/us-west-2
to create the necessary roles. I assumed the roles would automatically allow sts:AssumeRole from my account principal. However, when I run cdk deploy I get the following warning:
current credentials could not be used to assume
'arn:aws:iam::[ACCOUNT_ID]:role/cdk-hnb659fds-file-publishing-role-[ACCOUNT_ID]-us-west-2',
but are for the right account. Proceeding anyway.
I have root credentials in ~/.aws/credentials.
Looking at the deploy role policy, I don't see any sts permissions. What am I missing?
You will need to grant the credentials you use to run cdk deploy permission to assume the role, with a policy statement like this (one way to attach it is sketched below the JSON):
{
"Sid": "assumerole",
"Effect": "Allow",
"Action": [
"sts:AssumeRole",
"iam:PassRole"
],
"Resource": [
"arn:aws-cn:iam::*:role/cdk-readOnlyRole",
"arn:aws-cn:iam::*:role/cdk-hnb659fds-deploy-role-*",
"arn:aws-cn:iam::*:role/cdk-hnb659fds-file-publishing-*"
]
}
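One way to attach such a statement to the IAM user you deploy with, sketched with placeholder user and file names (wrap the statement above in a full policy document with Version and Statement first):
aws iam put-user-policy \
  --user-name my-cdk-user \
  --policy-name cdk-assume-bootstrap-roles \
  --policy-document file://cdk-assume-policy.json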
1. The first thing you need to do is enable verbose mode to see what is actually happening:
cdk deploy --verbose
If you see a message similar to the one below, continue with step 2. Otherwise, you need to address the problem indicated by the error message.
Could not assume role in target account using current credentials User: arn:aws:iam::XXX068599XXX:user/cdk-access is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX068599XXX:role/cdk-hnb659fds-deploy-role-XXX068599XXX-us-east-2. Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
2. Check the S3 buckets related to CDK and the CloudFormation stacks in the AWS Console, and delete them manually.
3. Enable the new-style bootstrapping by one of the methods mentioned here.
4. Bootstrap the stack using the command below. It should then create all the required roles automatically.
cdk bootstrap --trust=ACCOUNT_ID --cloudformation-execution-policies=arn:aws:iam::aws:policy/AdministratorAccess --verbose
NOTE: If you are working with Docker image assets, make sure you have set up your repository before you deploy. New-style bootstrapping does not create the repos automatically for you, as mentioned in this comment.
This may be of use to somebody... The issue could be a mismatch of regions. I spotted it in verbose mode: the roles had been created for us-east-1 even though I had specified eu-west-2 in the bootstrap, which for some reason had not taken effect. The solution was to set the region explicitly (by adding AWS_REGION=eu-west-2 before the cdk deploy command).
I ran into a similar error. The critical part of my error was
failed: Error: SSM parameter /cdk-bootstrap/<>/version not found.
I had to re-run using the new bootstrap method that creates the SSM parameter. To run the new bootstrap method, first set CDK_NEW_BOOTSTRAP via export CDK_NEW_BOOTSTRAP=1.
Don't forget to run cdk bootstrap with those credentials against your account [ACCOUNT_ID].
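Putting the two together, the re-bootstrap looks roughly like this (account ID and region are placeholders):
export CDK_NEW_BOOTSTRAP=1
npx cdk bootstrap \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
  aws://[ACCOUNT_ID]/[REGION]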
For me, the problem was that I was using expired temporary credentials from AWS SSO. What made it confusing is that the error message is misleading: it says
current credentials could not be used to assume 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1', but are for the right account. Proceeding anyway.
(To get rid of this warning, please upgrade to bootstrap version >= 8)
However, applying the --verbose flag as suggested above showed the real problem:
Assuming role 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1'.
Assuming role failed: The security token included in the request is expired
Could not assume role in target account using current credentials The security token included in the request is expired . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
Getting the latest SSO credentials fixed the problem.
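Refreshing the SSO credentials is typically just a matter of (profile name is a placeholder):
aws sso login --profile my-sso-profile
cdk deploy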
After deploying with --verbose I could see it was a clock issue in my case:
Assuming role failed: Signature expired: 20220428T191847Z is now earlier than 20220428T192528Z (20220428T194028Z - 15 min.)
I resolved the clock issue on Ubuntu using:
sudo ntpdate ntp.ubuntu.com
which then resolved the CDK issue.
I'm using Next.js with Vercel. This is my .env.local file:
# Created by Vercel CLI
VERCEL="1"
VERCEL_ENV="development"
VERCEL_URL=""
VERCEL_GIT_PROVIDER=""
VERCEL_GIT_REPO_SLUG=""
VERCEL_GIT_REPO_OWNER=""
VERCEL_GIT_REPO_ID=""
VERCEL_GIT_COMMIT_REF=""
VERCEL_GIT_COMMIT_SHA=""
VERCEL_GIT_COMMIT_MESSAGE=""
VERCEL_GIT_COMMIT_AUTHOR_LOGIN=""
VERCEL_GIT_COMMIT_AUTHOR_NAME=""
I have a component that is trying to access process.env.NEXT_PUBLIC_VERCEL_ENV to make sure it is running in the development environment.
This is what I'm getting when running npm run dev.
Logs from the server:
The logs above make perfect sense, since it's running on the local server to render the pages.
But when my client code tries to do the same, I'm getting:
This is how I'm trying to access it:
console.log(`process.env.NEXT_PUBLIC_VERCEL_ENV: ${JSON.stringify(process.env.NEXT_PUBLIC_VERCEL_ENV)}`);
console.log(`process.env.VERCEL_ENV: ${JSON.stringify(process.env.VERCEL_ENV)}`);
On the client, VERCEL_ENV should be undefined, but NEXT_PUBLIC_VERCEL_ENV should be development, right?
What could be happening?
UPDATE
I even tried adding NEXT_PUBLIC_VERCEL_ENV="development" to the .env.local file, but so far the result is the same.
NEXT_PUBLIC_VERCEL_ENV="development"
You will have access to this everywhere (both in the browser and on the server).
VERCEL_ENV="development"
You only have access to this on the server; in the browser it will show undefined.
Please note that after you add or change anything in the .env.local file, you have to restart your dev server; Next.js inlines the NEXT_PUBLIC_ variables into the client bundle at build/startup, so otherwise they will show up as undefined when you console.log them.
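As a minimal sketch of the check the question is after (the component name is made up), once the dev server has been restarted:
// components/EnvCheck.jsx -- hypothetical component
export default function EnvCheck() {
  // NEXT_PUBLIC_* values are inlined into the client bundle when the build/dev server starts
  const isDev = process.env.NEXT_PUBLIC_VERCEL_ENV === "development";
  return <p>{isDev ? "Running in development" : "Not development"}</p>;
}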
Make sure Automatically expose System Environment Variables is checked in your Project Settings.
More info in docs
I'm not sure how to ask this question, but I'd like to put it here and hear some suggestions.
So far, I use a "DB_LINK" variable that holds the Mongo database URL in my config.json file. My Node app uses this variable to connect to Mongo. But config.json also gets checked into git, which we don't want, because we don't want to check passwords into git.
For local development, I use a local.json file that has all these configs and is not checked into git (it has a .gitignore entry). So it works fine in my local dev environment; the challenge is that when Jenkins pushes the code to TEST, it has to run the test cases (which need the DB_LINK value) before deployment happens. That is when I need the DB_LINK variable to be passed in by Jenkins.
Here is what I have done so far:
In the Jenkins configuration, under 'predefined parameters', I added DB_LINK=myMongoLink to the parameters list.
But this value is not being handed over to my Node app. Any suggestions on how to achieve what I am trying to do?
OK, I figured this out. Before the change, I used to run my test cases from Jenkins like this:
npm run test
but now,
DBLink=myDB npm run test
This way the DBLink variable is handed over to the Node app as an environment variable, and the test cases were able to run. Before this change,
I used to pass DBLink=myDB from a config file.
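On the Node side, the app can then prefer the environment variable and fall back to the local config, roughly like this (file and key names are assumptions, not the actual project layout):
// config.js -- sketch only
let localConfig = {};
try {
  localConfig = require('./local.json'); // present only in local dev, git-ignored
} catch (e) {
  // no local.json on Jenkins; rely on the environment variable instead
}
const dbLink = process.env.DBLink || localConfig.DB_LINK;
module.exports = { dbLink };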
I am trying to follow this simple Dataflow example from the Google Cloud site.
I have successfully installed the Dataflow pipeline plugin and the gcloud SDK (as well as Python 2.7). I have also set up a project on Google Cloud and enabled billing and all the necessary APIs, as specified in the instructions above.
However, when I go to the run configurations and change the Pipeline Arguments tab to select BlockingDataflowPipelineRunner, after creating a bucket and setting my project ID, hitting run gives me:
Caused by: java.lang.IllegalArgumentException: Output path does not exist or is not writeable: gs://my-cloud-dataflow-bucket
at com.google.cloud.dataflow.sdk.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:146)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.verifyPathIsAccessible(DataflowPathValidator.java:79)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.validateOutputFilePrefixSupported(DataflowPathValidator.java:62)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.fromOptions(DataflowPipelineRunner.java:255)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.fromOptions(BlockingDataflowPipelineRunner.java:82)
... 9 more
I have used my terminal to execute 'gcloud auth login' and I see in the browser that I am successfully logged in.
I am really not sure what I have done wrong here. Can anyone confirm if this is a known issue with using Dataflow pipelines and Google Cloud Storage buckets?
Thanks!
I had a similar issue with GCS bucket permissions, though I certainly had write permissions and I could upload files into the bucket.
What solved the problem for me was acquiring the roles/dataflow.admin role on the project I was submitting the pipeline to.
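Granting that role can be done from the command line, for example (project ID and account email are placeholders):
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:me@example.com" \
  --role="roles/dataflow.admin"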
When submitting pipelines to the Google Cloud Dataflow Service, the pipeline runner on your local machine uploads files, which are necessary for execution in the cloud, to a "staging location" in Google Cloud Storage.
The pipeline runner on your local machine seems to be unable to write the required files to the staging location provided (gs://my-cloud-dataflow-bucket). It could be that the location doesn't exist, or that it belongs to a different GCP project than you authenticated against, or that there are more specific permissions set on that bucket, etc.
You can start debugging the issue with the gsutil command-line tool. For example, try running gsutil ls gs://my-cloud-dataflow-bucket to attempt to list the contents of the bucket. Then, try to upload a file via the gsutil cp command. This will probably produce enough information to root-cause the issue you are facing.
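Concretely, something along these lines (the local file name is just an example):
# can the credentials you are using list the bucket?
gsutil ls gs://my-cloud-dataflow-bucket
# and write to it?
gsutil cp some-local-file.txt gs://my-cloud-dataflow-bucket/staging-test.txt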
Try providing the zone parameter; it worked in my case with a similar error. And of course, export the GOOGLE_APPLICATION_CREDENTIALS environment variable before running your app.
...
-Dexec.args="--runner=DataflowRunner \
--gcpTempLocation=gs://bucket/tmp \
--zone=bucket-zone \
...
Got the same error. Fixed it by setting GOOGLE_APPLICATION_CREDENTIALS to the key file with write permissions, in ~/.bash_profile on Mac.
I realised I needed to use a specific ACL command via gsutil. Setting my account to have owner permissions did not do the job. Instead, using:
gsutil acl set public-read-write gs://my-bucket-name-here
worked in this case. Hope this helps someone!