I previously had Serverless installed on a server, and when I tried to edit a function and package it back up into the zip file I broke it, so I have to start all over. So to begin this issue: I had Serverless running and was using it with this package - https://github.com/adieuadieu/serverless-chrome/tree/master/examples/serverless-framework/aws
When I run sudo npm run deploy, I get this ServerlessError:
ServerlessError: User: arn:aws:sts::XXX:assumed-role/EC2CodeDeploy/i-268b1acf is not authorized to perform: cloudformation:DescribeStackResources on resource: arn:aws:cloudformation:us-east-1:YYY:stack/aws-dev/*
I'm not sure why it is trying to use a Role and not an IAM user. So I check the Role, and it is in an entirely different AWS account than the one I've configured. Let's call this Account B.
When it comes to configuration, I've installed the AWS CLI and entered the key, ID, and region for my Account A. I haven't touched Account B whatsoever. When I run aws s3 ls I see the correct S3 buckets for the account matching that key/ID/region, so I know the CLI is working with the correct account. Sounds good. I check the ~/.aws/credentials file and it has just one profile, [default], which seems normal. No other profiles are in there. I copied this over to the ~/.aws/config file so now both files are the same. Works great.
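For reference, a [default] profile in ~/.aws/credentials is laid out roughly like this (the values here are placeholders, not my real keys):
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY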
I then go into my SSH session on the server where I've installed Serverless, run npm run deploy, and it gives me the same message above. I think maybe it is somehow not using the correct account for whatever reason, so I manually set the access key and secret with the following command:
serverless config credentials --provider aws --key XXX --secret YYY
It tells me there already is a profile in the aws creds file, so I add --o to the end to overwrite. I run sudo npm run deploy and still get the same error.
I then run this command to manually set a profile in the creds for serverless, with the profile name matching the IAM user name:
serverless config credentials --provider aws --key XXX --secret YYY --profile serverless-agent
Where "serverless-agent" is the name of my IAM user I've been trying to use to deploy. I run this, it tells me there already is an existing profile in the aws creds file so I run it with --o and it tells me the aws file is now updated. In bash I go to Vim the file and I only see the single "[default]" settings, as if nothing has changed. I run sudo npm run deploy and it gives me the same Error.
I then go and manually set the access and secret:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
I run sudo npm run deploy and it gives me the same Error.
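(One aside worth sketching, though I'm not certain it applies here: sudo normally resets the environment, so exported variables may never reach the npm process unless they are passed through explicitly, e.g.:
sudo -E npm run deploy
or
sudo AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY npm run deploy)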
I even removed the AWS CLI and the directory that holds the credentials and config files - and when I manually set my account creds via serverless config, it still tells me there is already a profile set up in my aws file, prompting me to use the overwrite flag. How is this possible when the file is literally not on my computer?
So I then think that serverless itself has a cache or something and is reading the wrong file for creds, so I uninstall serverless via sudo npm uninstall -g serverless so that I can start from zero again. I then do all of the above steps, and more, all over again, and nothing has changed. Same error message.
I do have Apex (apex.run) set up, but that should be using my AWS CLI config file, so I'm not sure if that is causing any problems. Then again, I don't know anything deep about this subject, and I can't find any way to remove Apex itself in their docs.
In the package I am trying to deploy, I do not have a profile: XXX set in the serverless.yml file, because I've read that if you don't, it just defaults to the [default] profile you have set in the aws creds file on your computer. Just to check, I go into the serverless.yml file and set profile: default, and the error I now get when I run npm run deploy is
Profile default does not exist
How is that possible when I have the "default" profile set in my creds file? Then I remember that previously I ran the serverless config credentials command and added the profile name serverless-agent to it (yet it didn't save in the aws creds file, as I mentioned above), so I add that profile name to the serverless.yml file just to see if it works, and I get the same error of "Profile default does not exist".
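For reference, the profile is normally referenced under the provider block of serverless.yml, roughly like this (the service name and region here are placeholders):
service: my-service
provider:
  name: aws
  region: us-east-1
  profile: serverless-agent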
So back to the error message. The Role is in an account not even related to the IAM user in my aws creds. Without knowing a lot about this, it's as if the serverless config on the server isn't correct or something. Is it using old creds I had set up in Apex? Why is the aws creds file not updated with the profile when I manually set it with the serverless config command? I am using the same user account (but with a new key and secret) that I used a few weeks ago when I correctly deployed and my Lambda and API were set up for me on AWS. Boy do I miss those times, and I wish I hadn't messed up my existing Lambda functions without setting a version number first, forcing me to start all over.
I am so confused. Any help would be greatly appreciated.
If you are using an IAM role, then you have to use that IAM role by assuming it, for example from PowerShell or the AWS CLI.
I was also facing the same issue earlier, when we moved from a user to a role.
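A minimal sketch of that with the AWS CLI (which also works from PowerShell); the role ARN and session name are placeholders, and the three exported values come from the Credentials block of the response:
aws sts assume-role --role-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME --role-session-name deploy-session
export AWS_ACCESS_KEY_ID=<AccessKeyId from the response>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the response>
export AWS_SESSION_TOKEN=<SessionToken from the response>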
I am trying to set up Fastlane for iOS certificate and profile syncing on my MacBook. When I execute the command sudo fastlane match development --readonly, I get the error below.
What is it actually? I guess the terminal is blocking the prompt where I could enter the match password. I do not understand why I face this issue; I also have access to the repository.
Normally, the password would be cached in your Credential Storage (osxkeychain on Mac), but for your current user.
And you are executing that command as root.
Check:
if you have a credential helper set (with your account)
git config credential.helper
if your credentials are stored
git ls-remote https://...
# if a popup is displayed, enter your credentials there
if the same command would work if executed as you (instead of root)
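For example, try the same command without sudo, so it runs as your user and can reach your keychain:
fastlane match development --readonly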
I assume you used SSH with the old MacBook; are you now trying with HTTPS?
Btw, it is not recommended to use the local fastlane directly; see Bundler.
I'm trying to delete gcloud (Cloud Composer) environments. One did not create successfully (no associated Airflow or bucket) and one did. When I attempt to delete, I get an error message (after a really long time) of RPC Skipped due to required preoperation not finished yet. The logs don't provide any valuable information, and I wasn't able to find anything wrong in the cluster. The only solution I have found so far is to delete the entire project, but I would prefer not to. Any suggestions would be greatly appreciated!
Follow the steps below to delete the environment's resources manually:
Delete the GKE cluster that corresponds to the environment
Delete the Google Storage bucket used by the environment
Delete the related deployment with:
gcloud deployment-manager deployments delete <DEPLOYMENT_NAME> --delete-policy=ABANDON
Then try again to delete the Composer environment with:
gcloud composer environments delete <ENVIRONMENT_NAME> --location <LOCATION>
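If the deployment or environment names are not known, they can usually be listed first:
gcloud deployment-manager deployments list
gcloud composer environments list --locations <LOCATION>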
I would like to share what worked for me in case someone else runs into this problem, as I followed all the steps above and still could not delete the Composer environment.
My 'gcloud composer environments list' command was returning '0', but I could see my environment was still in the console view, and when I tried to delete it I would get the same error message as honlicious. Additionally, I ran 'gcloud projects add-iam-policy-binding' to try to give my Compute Engine service account the composer.serviceAgent role, but this still did not resolve my issue. What eventually worked was disabling the Cloud Composer API and then re-enabling it. This removed the old environment I had been unable to delete.
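For reference, the disable/enable cycle can be done from the console (APIs &amp; Services) or roughly like this; note that disabling may require --force if other resources still depend on the API:
gcloud services disable composer.googleapis.com
gcloud services enable composer.googleapis.com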
I got this issue when I tried to create and delete Cloud Composer with Terraform.
I had created a Service Account separately from the Composer environment, and this led to the Service Account being deleted first during a terraform destroy operation.
So the correct order is:
Delete Composer environment
Delete Composer’s Service Account
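A rough way to enforce that order with targeted destroys (the resource addresses below are placeholders for whatever names your configuration uses):
terraform destroy -target=google_composer_environment.example
terraform destroy -target=google_service_account.composer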
I have tried looking for an answer to this problem for over a month. Apparently I am the only one in the world who has ever had this issue... Here are the console commands I am giving over SSH. I am logged into the only account with permissions, and it has FULL permissions over the project... I am stumped.
sudo gcloud compute copy-files /home/lestado/trs.key adminuser@the-real-strategy-vm:/etc/ssl/private/
This is the response I am getting:
Did you mean zone [us-central1-f] for instances:
[['[the-real-strategy-vm]']]?
Do you want to continue (Y/n)? y
ERROR: (gcloud.compute.copy-files) Could not fetch instance:
- Insufficient Permission
Set your project name before you start copying files.
gcloud config set project <project_name>
The complete process will be:
Login: gcloud init
Set project: gcloud config set project <project_name>
Upload using the root user for the instance: gcloud compute copy-files ./ root@instance:/etc/ssl/pvt
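It can also help to double-check which account and project gcloud is actually using, since running under sudo may pick up a different configuration than your own user:
sudo gcloud auth list
sudo gcloud config list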
After executing rhc setup and then entering my hostname, I always get this error message.
Steps that I've done:
1- Installed Ruby 1.9.3
2- Installed rhc using gem: gem install rhc
3- Executed rhc setup
It seems that this is some kind of bug.
But there is another way: manually generate SSH public-private key pairs and upload them to OpenShift.
1- Generate new SSH keys
C:\> ssh-keygen
It will ask you where to save the key files; just press "Enter" -> this will generate a key pair named "id_rsa" in "C:\Users\YOU\.ssh".
Also press Enter when asked for a passphrase, to keep it empty.
2- Upload your Public Key to OpenShift
C:\> rhc sshkey add id_rsa C:\Users\YOU\.ssh\id_rsa.pub
It will then ask you for your credentials on OpenShift; once that's done, your public key is uploaded to OpenShift.
3- Configuring SSH to use the Generated private key when connecting to your APP
a- Make sure you have an environment variable "HOME" pointing to "C:/Users/YOU" (SSH looks for its configuration under "%HOME%\.ssh"); if not, create one
b- Open "C:/Users/YOU/.ssh/"; if you find a config file there, open it; if not, create one by running the following command:
touch config
Now add the following lines to the config file:
Host ChooseAName
HostName APPName-NameSpace.rhcloud.com
IdentityFile ~/.ssh/id_rsa
Save and close.
4- Now connecting to your App:
First, get the command that enables you to connect remotely to your app on the rhcloud server; you can get it from the OpenShift web console.
Enter that in your command line and you will be connected through a secure shell to your APP on rhcloud.
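Putting it together, the resulting config entry and connection might look roughly like this (the host alias, app name, namespace, and user hash are placeholders; the hash comes from the ssh command shown in the web console):
Host myapp
HostName APPName-NameSpace.rhcloud.com
User 5xxxxxxxxxxxxxxxxxxxxxxx
IdentityFile ~/.ssh/id_rsa
Then connect with:
ssh myapp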
In my case, it was because I was typing
rhc setup --server=my_app_domain
but in fact the relevant server was the OpenShift Enterprise server that was hosting my domain. When I used that server, it worked fine.
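For example (the host below is just a placeholder for your Enterprise broker):
rhc setup --server=openshift.mycompany.example.com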
I am following the Rails tutorial, and I am at the point where it instructs you to deploy the app to Heroku for the second time. I have successfully deployed an app in the past, but it will not work now.
I get this error: Permission denied (public key)
fatal: could not read from remote repository.
The remote exists and is correct, and when I use "heroku keys" my key appears. I can add a new stack to heroku as well. I also tried re-adding the key, and that did not work.
I'm very confused; none of the solutions I have found have worked.
Sounds like you need to configure your SSH keys (usually located at ~/.ssh). Are you using GitHub? If so, your SSH keys should already be set up (you won't be able to push to github.com without setting them up).
If you haven't already set up your SSH keys, follow the instructions from GitHub to do so.
Once your SSH keys are set up, running git push heroku should do the trick. Make sure Heroku is set up correctly by following the instructions from the tutorial.
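A minimal sequence, assuming the default key location, would be roughly:
ssh-keygen -t rsa
heroku keys:add ~/.ssh/id_rsa.pub
git push heroku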
You are probably not deploying as the same user you deployed the first app as. If you are in a Linux environment, this probably means you deployed as root one time and tried to deploy as a regular user the other time; maybe you used sudo.
Or possibly you deleted your SSH public keys... or maybe you changed the permissions of your SSH keys.
I am not highly rated enough to comment, so please navigate to ~/.ssh and type "ls -l" so I can see your permissions. Then navigate one directory up to ~/ and type "ls -la" so I can see the permissions on the .ssh folder itself, and post both outputs.
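For reference, typical SSH key permissions look like this, and can be restored with chmod if they differ:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub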