I'm getting an error when trying to access a bucket using Cloud Composer...
[2019-03-18 11:50:00,651] {models.py:1594} ERROR - 404 GET https://www.googleapis.com/storage/v1/b/gs://xxxx-cloud-composer?projection=noAcl: Not Found
def Ian_Log_Message():
    from google.cloud import storage
    from airflow import models
    import logging

    logging.info('Hello Ian')
    # The Airflow variable holds the bucket name
    gcs_bucket = models.Variable.get('gcs_bucket')
    logging.info('gcs_bucket - ' + gcs_bucket)
    storage_client = storage.Client()
    bucket_results_out = storage_client.get_bucket(gcs_bucket)
The bucket exists and it logs the correct bucket.
I have set a service account against the environment.
The service account has the following permissions:
BigQuery Admin
Composer Administrator
Environment and Storage Object Administrator
Composer Worker
Security Reviewer
Service Account Actor
Storage Admin
I also set the service account as an owner against the bucket to see if that helped.
A 404 error indicates that the service couldn't find the given GCS bucket. Looking at the GCS API spec and your error message, it appears that you shouldn't pass gs://{bucket}; models.Variable.get('gcs_bucket') should return just the bucket name (e.g., foo as opposed to gs://foo).
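If the Airflow variable may contain the full URI, here is a minimal sketch of normalizing it before the lookup (the variable name gcs_bucket comes from the question; the helper function itself is just an illustration):
from google.cloud import storage
from airflow import models

def get_bucket_from_variable():
    raw_value = models.Variable.get('gcs_bucket')
    # get_bucket() expects a bare bucket name, so strip a leading gs:// if present.
    bucket_name = raw_value[len('gs://'):] if raw_value.startswith('gs://') else raw_value
    return storage.Client().get_bucket(bucket_name)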
Related
I am getting the error below in the logs while installing Informatica server 10.4; this is happening while I am in the create domain menu.
When I check the logs, I see this error:
OutPut : [ICMD_10033] Command [generateEncryptionKey] failed with error [[INFASETUP_10000] [FrameworkUtils_0006] The encryption key file cannot be generated. [[FrameworkUtils_0022] Failed to find user name [WORKGROUP\SYSTEM] during Informatica service startup, and so cannot grant read and write permissions on the node configuration directory to the user. Verify that the user that started the Informatica service is valid. If you are a Local System User, you can ignore this message as you inherit the read-write permissions.]..].
I don't understand where this [WORKGROUP\SYSTEM] user name is coming from.
Following the pipelines readme to set up a deployment pipeline, I ran
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
aws://[ACCOUNT_ID]/us-west-2
to create the necessary roles. I assumed the roles would automatically allow sts:AssumeRole from my account principal. However, when I run cdk deploy I get the following warning:
current credentials could not be used to assume
'arn:aws:iam::[ACCOUNT_ID]:role/cdk-hnb659fds-file-publishing-role-[ACCOUNT_ID]-us-west-2',
but are for the right account. Proceeding anyway.
I have root credentials in ~/.aws/credentials.
Looking at the deploy role policy, I don't see any sts permissions. What am I missing?
You will need to grant the credentials from which you run cdk deploy permission to assume the role, with a policy statement like this:
{
"Sid": "assumerole",
"Effect": "Allow",
"Action": [
"sts:AssumeRole",
"iam:PassRole"
],
"Resource": [
"arn:aws-cn:iam::*:role/cdk-readOnlyRole",
"arn:aws-cn:iam::*:role/cdk-hnb659fds-deploy-role-*",
"arn:aws-cn:iam::*:role/cdk-hnb659fds-file-publishing-*"
]
}
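To confirm the policy took effect, here is a minimal sketch that tries to assume the CDK deploy role directly (assuming boto3 is installed; the role ARN below is a placeholder you would replace with your own account ID and region):
import boto3

# Placeholder ARN: substitute your own account ID and region suffix.
ROLE_ARN = "arn:aws:iam::123456789012:role/cdk-hnb659fds-deploy-role-123456789012-us-west-2"

sts = boto3.client("sts")
# If this call succeeds, the current credentials are allowed to assume the deploy role.
response = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="cdk-permission-check")
print(response["AssumedRoleUser"]["Arn"])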
The first thing you need to do is enable verbose mode to see what is actually happening.
cdk deploy --verbose
If you see a message similar to the one below, continue with step 2. Otherwise, you need to address the problem shown in the error message.
Could not assume role in target account using current credentials User: arn:aws:iam::XXX068599XXX:user/cdk-access is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX068599XXX:role/cdk-hnb659fds-deploy-role-XXX068599XXX-us-east-2 . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
Check the S3 buckets related to CDK and the CloudFormation stacks in the AWS Console and delete them manually.
Enable the new-style bootstrapping by one of the methods mentioned here.
Bootstrap the stack using the command below; it should then create all the required roles automatically.
cdk bootstrap --trust=ACCOUNT_ID --cloudformation-execution-policies=arn:aws:iam::aws:policy/AdministratorAccess --verbose
NOTE: If you are working with Docker image assets, make sure you have set up your repository before you deploy. New-style bootstrapping does not create the repos automatically for you, as mentioned in this comment.
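If you do need to create the ECR repository yourself, here is a minimal boto3 sketch (the repository name is a placeholder; use whatever name your image assets are configured to push to):
import boto3

# Placeholder name: replace with the repository your deployment expects.
REPO_NAME = "my-cdk-container-assets"

ecr = boto3.client("ecr")
# Creates the repository; raises RepositoryAlreadyExistsException if it already exists.
ecr.create_repository(repositoryName=REPO_NAME)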
This may be of use to somebody... The issue could be a mismatch of regions. I spotted it in verbose mode: the roles had been created for us-east-1, but I had specified eu-west-2 in the bootstrap, which for some reason had not taken effect. The solution was to set the region (by adding AWS_REGION=eu-west-2 before the cdk deploy command).
I ran into a similar error. The critical part of my error was
failed: Error: SSM parameter /cdk-bootstrap/<>/version not found.
I had to re-run using the new bootstrap method that creates the SSM parameter. To run the new bootstrap method, first set CDK_NEW_BOOTSTRAP via export CDK_NEW_BOOTSTRAP=1.
Don't forget to run cdk bootstrap with those credentials against your account [ACCOUNT_ID].
For me, the problem was expired credentials: I was using temporary credentials from AWS SSO that had expired. The error message is misleading, though; it says
current credentials could not be used to assume 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1', but are for the right account. Proceeding anyway.
(To get rid of this warning, please upgrade to bootstrap version >= 8)
However, applying the --verbose flag as suggested above showed the real problem:
Assuming role 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1'.
Assuming role failed: The security token included in the request is expired
Could not assume role in target account using current credentials The security token included in the request is expired . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
Getting the latest SSO credentials fixed the problem.
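A quick way to check whether the credentials in your current shell are still valid before digging further (a sketch, assuming boto3 is available):
import boto3
from botocore.exceptions import ClientError

try:
    # get_caller_identity fails fast if the session token has expired.
    identity = boto3.client("sts").get_caller_identity()
    print("Credentials are valid for:", identity["Arn"])
except ClientError as err:
    print("Credentials problem:", err)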
After deploying with --verbose I could see it was a clock issue in my case:
Assuming role failed: Signature expired: 20220428T191847Z is now earlier than 20220428T192528Z (20220428T194028Z - 15 min.)
I resolved the clock issue on Ubuntu using:
sudo ntpdate ntp.ubuntu.com
which then resolved the CDK issue.
I was trying to deploy code to Lambda using serverless deploy and got the error below. I tried multiple solutions available online, but none of them worked.
Error -
Serverless: Packaging service...
Serverless Error ---------------------------------------
The specified bucket does not exist
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: darwin
Node Version: 8.12.0
Serverless Version: 1.31.0
When you deploy your Serverless application, it uses the service attribute (defined in your serverless.yaml) as a unique identifier of your application in CloudFormation.
That said, you may run into a conflict if you change the name of the bucket without removing the stack. For example:
You deploy your application with the bucket called myBucket.
The CloudFormation stack is created using this info.
You change this name to myBucketPlus and try to deploy.
Serverless tries to clean up myBucketPlus from the last deploy before pushing the new one.
But wait! myBucketPlus does not exist.
Since you did not describe exactly what you did, I tried to give an example, but it could be something else.
You could also try removing the stack and deploying again.
The best way to resolve this issue is:
Execute the command below to see the Lambda information; it also shows the S3 bucket name, region, endpoint info, etc., but you only need the bucket name and region in this case.
sls info -v
Create the bucket in the intended region (see the sketch after these steps).
Done.
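If you prefer to script that bucket-creation step, here is a minimal boto3 sketch (the bucket name and region are placeholders; use the values reported by sls info -v):
import boto3

# Placeholders: use the bucket name and region reported by `sls info -v`.
BUCKET_NAME = "my-service-serverlessdeploymentbucket-example"
REGION = "us-east-1"

s3 = boto3.client("s3", region_name=REGION)
if REGION == "us-east-1":
    # us-east-1 rejects an explicit LocationConstraint.
    s3.create_bucket(Bucket=BUCKET_NAME)
else:
    s3.create_bucket(
        Bucket=BUCKET_NAME,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )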
I have migrated my iOS app's Parse DB to Azure, but have not finalised it yet.
I have changed the endpoint to the Azure URL and it's almost fine; I can log in with my previous accounts.
But when I query anything to fetch or update data, it gives me errors in response like:
[Error]: {"code":1,"message":"Internal server error."} (Code: 1, Version: 1.13.0)
[Error]: Response status code was unacceptable: 404
[Error]: This user is not allowed to access non-existent class: user (Code: 119, Version: 1.13.0)
Can anybody help me here: will it only be possible to update or fetch data from Azure after I finalise the Parse migration, or is there still something I need to update in my code?
I had this issue. You just need to go to the Parse Dashboard and create the class.
I found the solution in the Azure documentation.
I was getting an error when fetching Parse's PFFile objects, and for that I needed to add the file key in the Azure settings. The value for the key can be found in the Azure settings.
(image of Azure settings)
You can create a class in the Parse Dashboard, which will fix the problem. I was not able to open the Parse Dashboard in the browser (only remote HTTPS connections were allowed), so I had to start it locally and was finally able to open the dashboard. I then created the class there and it fixed my problem.
To launch the dashboard locally run the following commands
npm install -g parse-dashboard
and
parse-dashboard --appId yourAppId --masterKey yourMasterKey --serverURL "https://example.com/parse" --appName optionalName
I am trying to follow this simple Dataflow example from the Google Cloud site.
I have successfully installed the Dataflow pipeline plugin and the gcloud SDK (as well as Python 2.7). I have also set up a project on Google Cloud and enabled billing and all the necessary APIs, as specified in the instructions above.
However, when I go to the run configurations and change the Pipeline Arguments tab to select BlockingDataflowPipelineRunner, after creating a bucket and setting my project ID, hitting run gives me:
Caused by: java.lang.IllegalArgumentException: Output path does not exist or is not writeable: gs://my-cloud-dataflow-bucket
at com.google.cloud.dataflow.sdk.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:146)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.verifyPathIsAccessible(DataflowPathValidator.java:79)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.validateOutputFilePrefixSupported(DataflowPathValidator.java:62)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.fromOptions(DataflowPipelineRunner.java:255)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.fromOptions(BlockingDataflowPipelineRunner.java:82)
... 9 more
I have used my terminal to execute 'gcloud auth login' and I see in the browser that I am successfully logged in.
I am really not sure what I have done wrong here. Can anyone confirm if this is a known issue with using dataflow pipeline and google buckets?
Thanks!
I had a similar issue with GCS bucket permissions, though I certainly had write permissions and I could upload files into the bucket.
What solved the problem for me was being granted the roles/dataflow.admin role on the project I was submitting the pipeline to.
When submitting pipelines to the Google Cloud Dataflow Service, the pipeline runner on your local machine uploads files, which are necessary for execution in the cloud, to a "staging location" in Google Cloud Storage.
The pipeline runner on your local machine seems to be unable to write the required files to the staging location provided (gs://my-cloud-dataflow-bucket). It could be that the location doesn't exist, or that it belongs to a different GCP project than you authenticated against, or that there are more specific permissions set on that bucket, etc.
You can start debugging the issue with the gsutil command-line tool. For example, try running gsutil ls gs://my-cloud-dataflow-bucket to list the contents of the bucket, then try to upload a file via the gsutil cp command. This will probably produce enough information to root-cause the issue you are facing.
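If you'd rather check from Python, here is a minimal sketch that verifies the staging bucket is both readable and writable with the credentials your pipeline will use (the bucket name is the one from the error message; the test object name is just an illustration):
from google.cloud import storage

BUCKET_NAME = "my-cloud-dataflow-bucket"  # the staging bucket from the error message

client = storage.Client()
bucket = client.get_bucket(BUCKET_NAME)   # raises NotFound/Forbidden if inaccessible

# Write and delete a small test object to confirm write access.
blob = bucket.blob("dataflow-staging-access-check.txt")
blob.upload_from_string("ok")
blob.delete()
print("Bucket is readable and writable:", BUCKET_NAME)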
Try providing the zone parameter; it worked in my case with a similar error. And of course, export the GOOGLE_APPLICATION_CREDENTIALS environment variable before running your app.
...
-Dexec.args="--runner=DataflowRunner \
--gcpTempLocation=gs://bucket/tmp \
--zone=bucket-zone \
...
Got the same error. I fixed it by setting GOOGLE_APPLICATION_CREDENTIALS to a key file with write permissions, in ~/.bash_profile on Mac.
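If you want to confirm that a particular service-account key actually has storage access before wiring it into the environment, here is a small sketch (the key path is a placeholder):
from google.cloud import storage

# Placeholder path: point this at the service-account key you exported.
KEY_PATH = "/path/to/service-account-key.json"

client = storage.Client.from_service_account_json(KEY_PATH)
# Listing the buckets confirms the key is valid and has storage access.
for bucket in client.list_buckets():
    print(bucket.name)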
I realised I needed to use a specific acl command via gsutil. Setting my account to have owner permissions did not do the job. Instead using:
gsutil acl set public-read-write gs://my-bucket-name-here
worked in this case. Hope this helps someone!