azuread Error: Cannot import non-existent remote object - terraform-provider-azure

I'm trying to import an existing Azure Active Directory resource into the Terraform state. I used the following:
terraform import azuread_service_principal.example 00000000-0000-0000-0000-000000000000
The 00000000-0000-0000-0000-000000000000 is the object_id of the resource above.
But when I run the command, I get this error:
Error: Cannot import non-existent remote object
Do I need to do anything special in my script before I run this command?

I tested the same in my lab, and importing the service principal using the objectId from the portal returns the same "cannot import non-existent remote object" error.
Solution: Run the Azure CLI command below for the service principal you want to import to get its objectId:
az ad sp list --display-name "Your Service Principal Name"
After getting the objectId of the service principal, run the terraform import command using the objectId obtained from the CLI, not the objectId from the portal, and it will be imported successfully:
terraform import azuread_service_principal.example your-service-principal-objectId
Note: The Object ID shown in the portal refers to the objectId of the application rather than the objectId of the service principal.
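A sketch combining the two steps above, using a JMESPath query to pull the objectId directly. Note that newer Azure CLI versions expose the object ID in the id field, while older versions used objectId, so adjust the query to your CLI version; the display name is a placeholder:

```shell
# Look up the service principal's object ID via the Azure CLI
# (field may be named objectId instead of id on older CLI versions)
SP_OBJECT_ID=$(az ad sp list --display-name "Your Service Principal Name" \
  --query "[0].id" -o tsv)

# Import using the CLI-reported object ID, not the one from the portal blade
terraform import azuread_service_principal.example "$SP_OBJECT_ID"
```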
## My Main.tf File
provider "azuread" {
  version = "=0.7.0"
}

resource "azuread_service_principal" "example" {
}

I just had the same error with the user resource instead of a service principal.
My mistake was that I was still logged in to another tenant with az login on the command line when importing the user with terraform import.
After logging in to the correct tenant, the user's objectId was the same in the portal as on the command line with az ad user show --id <upn>.
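A quick way to check which tenant the Azure CLI is currently using before importing; the tenant ID in the second command is a placeholder:

```shell
# Show the tenant the Azure CLI is currently logged in to
az account show --query tenantId -o tsv

# If it is the wrong tenant, log in to the correct one (placeholder tenant ID)
az login --tenant 00000000-0000-0000-0000-000000000000
```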

Related

How to get list of Jenkins credentials using curl?

I have a Jenkins instance that contains multiple credentials in different scopes.
How do I get the list of Jenkins credentials using curl?
I tried to fetch them using:
curl -u [USERNAME]:[PASSWORD] -X GET http://[JENKINS_URL]/credentials/store/system/domain/_/api/json
But I'm getting the output below, which doesn't contain the credential IDs or names:
{"class":"com.cloudbees.plugins.credentials.CredentialsStoreAction$DomainWrapper","credentials":[{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{}],"description":"Credentials that should be available irrespective of domain specification to requirements matching.","displayName":"Global credentials (unrestricted)","fullDisplayName":"System » Global credentials (unrestricted)","fullName":"system/","global":true,"urlName":"_"}
How do I get the credential IDs and names that I see in Jenkins?
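One approach sometimes suggested is to use the remote API's tree parameter to request specific fields explicitly; a sketch, where whether the fields come back populated depends on the Credentials plugin version and your permissions ([USERNAME], [PASSWORD], and [JENKINS_URL] are placeholders as in the question):

```shell
# Request only the id and displayName of each credential via the tree parameter
curl -u [USERNAME]:[PASSWORD] \
  "http://[JENKINS_URL]/credentials/store/system/domain/_/api/json?tree=credentials[id,displayName]"
```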

WorkerExtensions.csproj trying to access private feed

WorkerExtensions.csproj : error NU1301: Unable to load the service index for source...
WorkerExtensions.csproj was trying to access our private feed; since it did not have permission to do so, it failed with the above error.
How can it be resolved?
nuget.exe sources update -Name "xxpackages" -UserName xxx -Password token
By running the above command in the package manager console, I successfully resolved this error.
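If you are using the dotnet CLI rather than nuget.exe, the equivalent would be something like the following sketch; the source name and credentials are the same placeholders as in the answer above:

```shell
# Update the stored credentials for the private feed
# (xxpackages, xxx, and token are placeholders)
dotnet nuget update source xxpackages --username xxx --password token --store-password-in-clear-text
```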

Error while trying to assign a custom role "Secret Reader" to an object ID for an Azure Key Vault

Can anyone tell me why I am getting this error when I run this command to assign the custom role "Secret Reader" to a guest account's object ID:
az role assignment create --role "Secret Reader" --assignee-object-id "12526c57-c91b-405b-9068-2b582b23e83a" --scope "/subscriptions/Not-putting this-here/resourceGroups/pallabdev/providers/Microsoft.KeyVault/vaults/testhalvault"
The error I get is:
request failed: Error occurred in request., InvalidSchema: No connection adapters were found for 'C:/Program Files/Git/subscriptions/Not-Putting-This-Here/resourceGroups/pallabdev/providers/Microsoft.KeyVault/vaults/testhalvault/providers/Microsoft.Authorization/roleDefinitions?$filter=roleName%20eq%20%27Secret%20Reader%27&api-version=2018-01-01-preview'
From the error message, I suppose you ran the command in Git Bash on Windows. I can reproduce this on my side as well; it is caused by the automatic path conversion of resource IDs in Git Bash (similar issue here).
To solve this, set the environment variable MSYS_NO_PATHCONV=1, or set it temporarily when running the command:
$ MSYS_NO_PATHCONV=1 az role assignment create --role "Secret Reader" --assignee-object-id "12526c57-c91b-405b-9068-2b582b23e83a" --scope "/subscriptions/Not-putting this-here/resourceGroups/pallabdev/providers/Microsoft.KeyVault/vaults/testhalvault"
I had the same problem and simply ran the command in Windows PowerShell instead of Git Bash, and it worked like a charm.

Argo artifact passing can't save output

I am trying to run the artifact passing example from Argoproj. However, I am getting the following error:
failed to save outputs: verify serviceaccount platform:default has necessary privileges
This error appears in the first step (generate-artifact) itself.
Selecting the generate-artifact component and clicking YAML shows the following line highlighted.
Nothing appears when clicking LOGS.
I need to understand the correct sequence of steps for running the YAML file so that this error does not appear and the artifacts are passed. I could not find many resources on this issue other than this page, where it is discussed on the Argo repository.
All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName or, if omitted, the default service account of the workflow's namespace.
Here the default service account of that namespace does not seem to have been granted any roles by default.
Try granting a role to the "default" service account in the namespace:
kubectl create rolebinding argo-default-binding \
--clusterrole=cluster-admin \
--serviceaccount=platform:default \
--namespace=platform
Since the default service account now gets full access via the cluster-admin role, the example should work.
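Alternatively, instead of widening the default service account, the workflow can reference a dedicated service account explicitly via workflow.spec.serviceAccountName; a sketch, where argo-sa is a placeholder name that would need equivalent RBAC grants:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  # argo-sa is a placeholder; it must be bound to a role with the needed permissions
  serviceAccountName: argo-sa
  entrypoint: artifact-example
```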

Connecting to different ARN/Role/Amazon Account when trying to deploy

I previously had Serverless installed on a server, and then, when I tried to edit the function and package it back up into the zip file, I broke it, so I have to start all over. To begin this issue: I had Serverless running and was using it with this package - https://github.com/adieuadieu/serverless-chrome/tree/master/examples/serverless-framework/aws
When I run sudo npm run deploy, I get the ServerlessError:
ServerlessError: User: arn:aws:sts::XXX:assumed-role/EC2CodeDeploy/i-268b1acf is not authorized to perform: cloudformation:DescribeStackResources on resource: arn:aws:cloudformation:us-east-1:YYY:stack/aws-dev/*
I'm not sure why it is trying to connect to a role and not an IAM user. So I check the role, and it is in an entirely different AWS account than the account I've configured. Let's call this Account B.
As for configuration, I've installed the AWS CLI and entered the key, ID, and region of my Account A in AWS, not touching Account B whatsoever. When I run aws s3 ls I see the correct S3 buckets for the account with that key/ID/region, so I know the CLI is working with the correct account. Sounds good. I check the ~/.aws/credentials file and it has just one profile, [default], which seems normal; no other profiles are in there. I copied this over to the ~/.aws/config file, so now both files are the same. Works great.
I then go into my SSH session where I've installed Serverless and run npm run deploy, and it gives me the same message as above. I think maybe it is somehow not using the correct account for whatever reason. So I manually set the access key and secret with the following command:
serverless config credentials --provider aws --key XXX --secret YYY
It tells me there is already a profile in the AWS credentials file, so I add --o to the end to overwrite. I run sudo npm run deploy and get the same error.
I then run this command to manually set a profile in the credentials file for Serverless, with the profile name matching the IAM user name:
serverless config credentials --provider aws --key XXX --secret YYY --profile serverless-agent
where "serverless-agent" is the name of the IAM user I've been trying to deploy with. I run this, it tells me there is already an existing profile in the AWS credentials file, so I run it with --o and it tells me the AWS file is now updated. In Bash I open the file in Vim and I only see the single [default] section, as if nothing has changed. I run sudo npm run deploy and it gives me the same error.
I then go and manually set the access key and secret:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
I run sudo npm run deploy and it gives me the same error.
I even removed the AWS CLI and the directory that holds the credentials and config files, and when I manually set my account credentials via serverless config, it tells me there is already a profile set up in my AWS file, prompting me to use the overwrite flag. How is this possible when the file is literally not on my computer?
So I then think that Serverless itself has a cache or something and is reading the wrong file for credentials, so I uninstall Serverless via sudo npm uninstall -g serverless so that I can start from zero again. I then do all of the above steps, and more, all over again, and nothing has changed. Same error message.
I do have Apex.run set up, but that should be using my AWS CLI config file, so I'm not sure whether it is causing any problems. Then again, I have no deep knowledge of this subject, and I can't find any way to remove Apex itself in their docs.
In the package I am trying to deploy, I do not have profile: XXX set in the serverless.yml file, because I've read that if you don't, it just defaults to the [default] profile set in the AWS credentials file on your computer. Just to check, I go into the serverless.yml file and set profile: default, and the error I now get when I run npm run deploy is:
Profile default does not exist
How is that possible when I have the [default] profile set in my credentials file? Then I remember that I previously ran the serverless config credentials command and added the profile name serverless-agent (which, as mentioned above, didn't get saved to the AWS credentials file), so I add that profile name to the serverless.yml file just to see if it works, and I get the same "Profile default does not exist" error.
So back to the error message. The role belongs to an account not even related to the IAM user in my AWS credentials. Without knowing much about this, it's as if the Serverless configuration over SSH isn't correct. Is it using old credentials I set up in Apex.run? Why is the AWS credentials file not updated with the profile when I set it manually with the serverless config command? I am using the same user account (but with a new key and secret) that I used a few weeks ago, when I deployed correctly and my Lambda and API were set up for me on AWS. Boy, do I miss those times, and I wish I hadn't messed up my existing Lambda functions without setting a version number first, forcing me to start all over.
I am so confused. Any help would be greatly appreciated.
If the machine is running under an IAM role (as the assumed-role ARN in the error suggests), you have to use that IAM role by assuming it explicitly, for example from PowerShell or the AWS CLI, rather than relying on the instance's credentials.
I was facing the same issue earlier, when we moved from an IAM user to a role.
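A sketch of assuming a role from the AWS CLI and exporting the temporary credentials so Serverless picks them up; the role ARN and session name below are placeholders:

```shell
# Assume the target role and capture the temporary credentials
# (the role ARN and session name are placeholders)
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/deploy-role \
  --role-session-name serverless-deploy \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)

# Export them so the Serverless Framework (and the AWS SDK) use this role
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
```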
