Service Principal Creation by Terraform doesn't provide password/secret in the output - terraform-provider-azure

When generating a Service Principal in Azure manually, I'm provided a password as a result of the operation.
That's not the case, however, if I create the service principal with Terraform; the password is not among the outputs of this resource:
+ azuread_service_principal.k8s_principal
      id:             <computed>
      application_id: "${azuread_application.app.application_id}"
      display_name:   <computed>
Is there anything I missed? Why does the Terraform behavior differ in the output compared to the CLI?

The password is a required input to the azuread_service_principal_password block, so you can generate a random password and export it yourself. The complete Terraform code looks something like this:
resource "azuread_application" "app" {
name = "${local.application_name}"
}
# Create Service Principal
resource "azuread_service_principal" "app" {
application_id = "${azuread_application.app.application_id}"
}
resource "random_string" "password" {
length = 32
special = true
}
# Create Service Principal password
resource "azuread_service_principal_password" "app" {
end_date = "2299-12-30T23:00:00Z" # Forever
service_principal_id = "${azuread_service_principal.app.id}"
value = "${random_string.password.result}"
}
output "sp_password" {
value = "${azuread_service_principal_password.app.value}"
sensitive = true
}

For those using a newer version of Terraform: you don't need to preset the password; the following code works fine:
resource "azuread_service_principal_password" "auth_pwd" {
service_principal_id = azuread_service_principal.auth.id
}
output "auth_client_secret" {
value = azuread_service_principal_password.auth_pwd.value
description = "output password"
sensitive = true
}
Then you can run the following CLI command to retrieve the password:
terraform output -raw auth_client_secret
Tested on Terraform 1.0.10 with the hashicorp/azuread provider 2.11.

In the Terraform documentation, the azuread_service_principal block only defines the argument application_id and the attributes id and display_name, so those are the only values you can see. Likewise, the azuread_service_principal_password block only exports the key ID for the service principal password; you still cannot see the real password.
In the Azure CLI, az ad sp create-for-rbac has an optional --password parameter, which is why you see the password in its output.
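For comparison, this is roughly what the CLI flow looks like (the app name is a placeholder and the exact output fields vary between CLI versions); the generated secret is printed once, at creation time only:
az ad sp create-for-rbac --name my-app
# Example output (values redacted):
# {
#   "appId": "...",
#   "displayName": "my-app",
#   "password": "...",
#   "tenant": "..."
# }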

Related

Azure DevOps PAT API to be able to list all tokens in organization

I need to obtain the list of all tokens in the organization.
I used the token to make a call to https://vssps.dev.azure.com/{organization}/_apis/tokens/pats?api-version=6.1-preview.1
My permissions in DevOps are set as Collection Administrator.
The received response was:
{"$id":"1","innerException":null,"message":"The requested operation is not allowed.","typeName":"Microsoft.TeamFoundation.Framework.Server.InvalidAccessException, Microsoft.TeamFoundation.Framework.Server","typeKey":"InvalidAccessException","errorCode":0,"eventId":3000}
Is there some lack of permissions, or do I need to set up something else to get the list of tokens in the organization?
You don't mention how you get your token or your criteria for the authentication flow, but I will share my adventure, which started similarly to yours.
I got your exact error while following this guide: https://learn.microsoft.com/en-gb/azure/devops/organizations/accounts/manage-personal-access-tokens-via-api?view=azure-devops
The token I got from that Python code just didn't work.
Then I found this code instead: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/aad/app-aad-token#--username-password-flow-programmatic
Using the same app registration from the link above, I copied my scope and tenant ID from the dysfunctional code into this new code, then went to the app registration --> Authentication --> set "Allow public client flows" to Yes.
I ran the script after supplying the credentials, and this time the token worked.
Dumping the code for future reference:
# Given the client ID and tenant ID for an app registered in Azure,
# along with an Azure username and password,
# provide an Azure AD access token and a refresh token.
# If the caller is not already signed in to Azure, the caller's
# web browser will prompt the caller to sign in first.

# pip install msal
from msal import PublicClientApplication
import sys

# You can hard-code the registered app's client ID and tenant ID here,
# along with the Azure username and password,
# or you can provide them as command-line arguments to this script.
client_id = '<client-id>'
tenant_id = '<tenant-id>'
username = '<username>'
password = '<password>'

# Do not modify this variable. It represents the programmatic ID for
# Azure Databricks along with the default scope of '/.default'.
scope = [ '2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default' ]

# Check for too few or too many command-line arguments.
if (len(sys.argv) > 1) and (len(sys.argv) != 5):
    print("Usage: get-tokens-for-user.py <client ID> <tenant ID> <username> <password>")
    exit(1)

# If the registered app's client ID and tenant ID along with the
# Azure username and password are provided as command-line variables,
# set them here.
if len(sys.argv) > 1:
    client_id = sys.argv[1]
    tenant_id = sys.argv[2]
    username = sys.argv[3]
    password = sys.argv[4]

app = PublicClientApplication(
    client_id = client_id,
    authority = "https://login.microsoftonline.com/" + tenant_id
)

acquire_tokens_result = app.acquire_token_by_username_password(
    username = username,
    password = password,
    scopes = scope
)

if 'error' in acquire_tokens_result:
    print("Error: " + acquire_tokens_result['error'])
    print("Description: " + acquire_tokens_result['error_description'])
else:
    print("Access token:\n")
    print(acquire_tokens_result['access_token'])
    print("\nRefresh token:\n")
    print(acquire_tokens_result['refresh_token'])
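Once the script prints an access token, you can retry the endpoint from the question with it. Here is a minimal sketch using the requests package (the organization name and token are placeholders, and it assumes you swapped in your own scope and tenant ID as described above):
# Sketch only: call the PAT endpoint from the question with the token
# printed by the script above.
# Requires: pip install requests
import requests

organization = "your-organization"            # placeholder
access_token = "<access-token-from-above>"    # placeholder

url = ("https://vssps.dev.azure.com/" + organization +
       "/_apis/tokens/pats?api-version=6.1-preview.1")
response = requests.get(url, headers={"Authorization": "Bearer " + access_token})
print(response.status_code)
print(response.json())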

How to load Jenkins username and password as env vars in a Jenkinsfile

In the Jenkins credentials store I have several types of credentials.
One of them, called my_password, is of the type "Secret text", which I can access in a Jenkinsfile like so:
environment {
  my_env_var = credentials('my_password')
}
Now I have created a credential of type "Username with password" called user_and_pass, in which I can set up both fields in the same credential.
How can I access both params at the same time and load them into env variables?
I was thinking of something like:
environment {
  my_user = credentials('user_and_pass').someFunctionThatReturnsUser()
  my_pass = credentials('user_and_pass').someFunctionThatReturnsPass()
}
but I don't think it works like that.
When you get back the credentials from a "Username with password" secret, you get one string containing the username and the password separated by a colon, in the format username:password.
Check whether usernamePassword works for you, as in the snippet below.
It is from the Jenkins Credentials Binding plugin.
withCredentials([usernamePassword(credentialsId: 'mycreds',
                                  usernameVariable: 'USERNAME',
                                  passwordVariable: 'PASSWORD')]) {
  sh 'cf login some.awesome.url -u $USERNAME -p $PASSWORD'
}
In the Jenkins dashboard, click Manage Jenkins, then Manage Credentials under the Security section, then click the system store to create global credentials. That credential ID (SSH-Centos7 in my case) can be used as below:
stage('Example SSH Username with password') {
  environment {
    SSH_CREDS = credentials('SSH-Centos7')
  }
}
Recent documentation is available in the official Jenkins documentation.
As far as I know, we have two methods to extract data from a credential of type Username with password:
by means of the Groovy step withCredentials();
by means of the credentials() helper.
withCredentials()
Syntax for extracting creds via withCredentials:
withCredentials([usernamePassword(credentialsId: 'your-credentials-id',
                                  passwordVariable: 'PASSWORD_VAR',
                                  usernameVariable: 'USERNAME_VAR')]) {
  // your script can access $PASSWORD_VAR and $USERNAME_VAR
  // as environment variables
  //
  // note: PASSWORD_VAR and USERNAME_VAR are just aliases; you may change them to whatever you like
}
If the syntax looks too complicated and tedious to you, use the Pipeline Snippet Generator to build it for you.
credentials()
Syntax for extracting creds via credentials():
environment {
  CREDS = credentials('your-credentials-id')
}
steps {
  // your code can access
  // username as $CREDS_USR
  // and password as $CREDS_PSW
}
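Putting it together for the credential from the question, here is a minimal declarative pipeline sketch (user_and_pass is the ID from the question; the cf login step just mirrors the earlier example and is only illustrative):
pipeline {
  agent any
  environment {
    // For "Username with password" credentials, Jenkins also derives
    // CREDS_USR and CREDS_PSW automatically and masks them in the log.
    CREDS = credentials('user_and_pass')
  }
  stages {
    stage('use-creds') {
      steps {
        sh 'cf login some.awesome.url -u $CREDS_USR -p $CREDS_PSW'
      }
    }
  }
}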
Which method to use?
It depends on the credential type. For Username with password you can use either method, whichever you like.
The credentials() helper supports the following types (as of the end of 2022):
secret text;
username and password;
secret file.
For the rest of the credential types you have to use withCredentials().
Check out the official docs for more details.

Trying to set up a mail server in OpenBSD: doveadm auth login fails

I set up an OpenBSD 7.0 instance on Vultr in order to get a mail server running with Dovecot and OpenSMTPD. I (mostly) followed the instructions here and here and a bit here.
I set it up to use virtual mail, creating the files /etc/mail/virtual and /etc/mail/credentials with a single virtual user: 'user@domain.ca::vmail:2000:2000:/var/vmail/domain.ca/user::userdb_mail=maildir:/var/vmail/domain.ca/user'
I created the encrypted password with 'smtpctl encrypt' and pasted it where it should be in the credentials file.
However, running 'doveadm auth login user@domain.ca' fails.
In /var/log/maillog I get:
Jan 25 14:06:58 vultrBSD dovecot: auth-worker(165): conn unix:auth-worker (pid=44111,uid=518): auth-worker<1>: bsdauth(user@domain.ca): unknown user
Jan 25 14:06:58 vultrBSD dovecot: auth: passwd-file(user@domain.ca): Password mismatch
I know the password is correct, and I tried changing it and pasting in a new one that I created with 'smtpctl encrypt', but I still get the same error. The /etc/mail/credentials file is set to 0440 and owned by _smtpd:_dovecot. Even temporarily setting it to 0777 doesn't help.
I can send mail to the server from another account, and I see that it shows up in /var/vmail/domain.ca/user/new, but I am unable to connect my Thunderbird client to the server. Attempting to set up a new mail account in Thunderbird doesn't work; Thunderbird rejects the password (although it does detect the correct protocols and ports, IMAP/SMTP).
Here is the local.conf file in /etc/dovecot:
auth_debug_passwords = yes
auth_mechanisms = plain
first_valid_uid = 2000
first_valid_gid = 2000
mail_location = maildir:/var/vmail/%d/%n
mail_plugin_dir = /usr/local/lib/dovecot
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext imapsieve vnd.dovecot.imapsieve
mbox_write_locks = fcntl
mmap_disable = yes
namespace inbox {
  inbox = yes
  location =
  mailbox Archive {
    auto = subscribe
    special_use = \Archive
  }
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox Junk {
    auto = subscribe
    special_use = \Junk
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox Trash {
    auto = subscribe
    special_use = \Trash
  }
  prefix =
}
plugin {
  imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.sieve
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_name = Junk
  imapsieve_mailbox2_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_from = Junk
  imapsieve_mailbox2_name = *
  sieve = file:~/sieve;active=~/.dovecot.sieve
  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment
  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve
  sieve_plugins = sieve_imapsieve sieve_extprograms
}
protocols = imap sieve
service imap-login {
  inet_listener imaps {
    port = 993
  }
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
  inet_listener sieve_deprecated {
    port = 2000
  }
}
ssl_cert = </etc/ssl/domain.ca.fullchain.pem
ssl_key = </etc/ssl/private/domain.ca.key
userdb {
  args = username_format=%u /etc/mail/credentials
  driver = passwd-file
  name =
}
passdb {
  args = scheme=CRYPT username_format=%u /etc/mail/credentials
  driver = passwd-file
  name =
}
protocol imap {
  mail_plugins = " imap_sieve"
}
Has anyone else experienced this and know of a fix?
Thanks.
Hashed strings, including passwords, typically involve several layers besides the base hashing algorithm. Two different implementations (Dovecot vs. smtpd) using the same hashing algorithm will output two different hashes given the same input (the password).
This is due to what are called salt and pepper. A salt is a randomly generated string, sometimes seeded from user data, which is inserted into the password in a way dictated by the implementation (Dovecot or smtpd) before the password is hashed.
Similarly, a pepper is a string dictated by the implementation and inserted into the password before hashing. This combination of salting and peppering creates a unique hash per implementation, which makes storing passwords safer: an attacker can't simply compare hashes from several sites or programs to crack user passwords and break into all instances of that password simultaneously.
This is why you can't reuse a password hash stored by one program to unlock the same password when used by another program, even if both programs use identical hashing algorithms.
The fix should be to set up the credentials individually for each program and not reuse each other's hashes.
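To see the salt part concretely, here is a minimal sketch using the Python bcrypt package (just an illustration; it is not the code smtpctl or Dovecot actually run, though smtpctl encrypt produces bcrypt-style hashes by default): hashing the same password twice produces two different strings because each call generates a fresh random salt.
# Sketch only: same password, different hashes, because of the random salt.
# Requires: pip install bcrypt
import bcrypt

password = b"correct horse battery staple"

hash_one = bcrypt.hashpw(password, bcrypt.gensalt())
hash_two = bcrypt.hashpw(password, bcrypt.gensalt())

print(hash_one)               # e.g. b'$2b$12$...'
print(hash_two)               # a different string: the salt is embedded in the hash
print(hash_one == hash_two)   # False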

KeyVault -> Databricks automatic integration

I have followed Create an Azure Key Vault-backed secret scope to integrate Databricks with Key Vault and all works ok. Unfortunately this requires manual intervention, which breaks our 'full automated infrastructure' approach. Is there any way to automate this step?
UPDATE: You create a Databricks-backed secret scope using the Databricks CLI (version 0.7.1 and above). Alternatively, you can use the Secrets API.
It does not appear that Azure Key Vault-backed secret scope creation has a publicly available API call, unlike Databricks-backed secret scope creation. This is backed up by the 'Note' on the secret scopes doc page:
Creating an Azure Key Vault-backed secret scope is supported only in the Azure Databricks UI. You cannot create a scope using the Secrets CLI or API.
A request for the feature you are asking for was made last year, but no ETA was given.
I took a look at the request made by the UI page. While the form data is simple enough, the headers and security measures make programmatic access impractical. If you are dead-set on automating this part, you could use one of those tools that automate the cursor around the screen and click things for you.
Now it is possible, but you can't use a service principal token; it must be a user token, which hinders automation.
Refer to Microsoft Docs:
https://learn.microsoft.com/en-us/azure/databricks/security/secrets/secret-scopes#create-an-azure-key-vault-backed-secret-scope-using-the-databricks-cli
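For reference, the command described in those docs looks roughly like this (names and IDs are placeholders; the flags come from the legacy Databricks CLI and may differ in newer releases). Run it while logged in with az login as your own user, since a service principal token will not work:
databricks secrets create-scope --scope my-kv-scope \
  --scope-backend-type AZURE_KEYVAULT \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>" \
  --dns-name "https://<vault-name>.vault.azure.net/"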
You can use the Databricks Terraform provider to create a secret scope backed by Azure Key Vault. But because of Azure limitations, it should be done using a user's AAD token (usually obtained via the Azure CLI). Here is a working snippet for creating the secret scope from an existing Key Vault:
terraform {
  required_providers {
    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.2.9"
    }
  }
}

provider "azurerm" {
  version = "2.33.0"
  features {}
}

data "azurerm_databricks_workspace" "example" {
  name                = var.workspace_name
  resource_group_name = var.resource_group
}

provider "databricks" {
  azure_workspace_resource_id = data.azurerm_databricks_workspace.example.id
}

data "azurerm_key_vault" "example" {
  name                = var.keyvault_name
  resource_group_name = var.resource_group
}

resource "databricks_secret_scope" "example" {
  name = data.azurerm_key_vault.example.name
  keyvault_metadata {
    resource_id = data.azurerm_key_vault.example.id
    dns_name    = data.azurerm_key_vault.example.vault_uri
  }
}

variable "resource_group" {
  type        = string
  description = "Resource group to deploy"
}

variable "workspace_name" {
  type        = string
  description = "The name of DB Workspace"
}

variable "keyvault_name" {
  type        = string
  description = "The name of the Key Vault"
}
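A possible way to run the snippet, assuming you authenticate as your own user through the Azure CLI as noted above (the -var values are placeholders):
az login
terraform init
terraform apply \
  -var="resource_group=<rg-name>" \
  -var="workspace_name=<databricks-workspace>" \
  -var="keyvault_name=<key-vault-name>"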

Google Docs: Cannot export/download user's document using administrative access/impersonation (forbidden 403) in python

I have read this thoroughly: https://developers.google.com/google-apps/documents-list/#using_google_apps_administrative_access_to_impersonate_other_domain_users
I have googled this to death.
So far I have been able to:
1. Authorise with:
   - ClientLogin
   - OAuth tokens (using my domain key)
2. Retrieve document feeds for all users in the domain (authorised either way in #1)
I am using the "entry" from the feed to export/download documents, and I always get Forbidden for other users' documents that are not shared with the admin. The feed query I am using looks like:
https://docs.google.com/feeds/userid@mydomain.com/private/full/?v=3
(I have tried with and without the ?v=3)
I have also tried adding the xoauth_requestor_id (which I have also seen in posts as xoauth_requestor), both on the URI and as a client property: client.xoauth_requestor_id = ...
Code fragments:
Client Login (using administrator credentials):
client.http_client.debug = cfg.get('HTTPDEBUG')
client.ClientLogin( cfg.get('ADMINUSER'), cfg.get('ADMINPASS'), 'HOSTED' )
OAuth:
client.http_client.debug = cfg.get('HTTPDEBUG')
client.SetOAuthInputParameters( gdata.auth.OAuthSignatureMethod.HMAC_SHA1, cfg.get('DOMAIN'), cfg.get('APPS.SECRET') )
oatip = gdata.auth.OAuthInputParams( gdata.auth.OAuthSignatureMethod.HMAC_SHA1, cfg.get('DOMAIN'), cfg.get('APPS.SECRET') )
oat = gdata.auth.OAuthToken( scopes = cfg.get('APPS.%s.SCOPES' % section), oauth_input_params = oatip )
oat.set_token_string( cfg.get('APPS.%s.TOKEN' % section) )
client.current_token = oat
Once the feed is retrieved:
# pathname eg whatever.doc
client.Export(entry, pathname)
# have also tried
client.Export(entry, pathname, extra_params = { 'v': 3 } )
# and tried
client.Export(entry, pathname, extra_params = { 'v': 3, 'xoauth_requestor_id': 'admin@mydomain.com' } )
Any suggestions, or pointers as to what I am missing here?
Thanks
You were very close to having a correct implementation. In your example above, you had:
client.Export(entry, pathname, extra_params = { 'v': 3, 'xoauth_requestor_id': 'admin@mydomain.com' } )
xoauth_requestor_id must be set to the user you're impersonating. Also, what you need is 2-legged OAuth 1.0a with xoauth_requestor_id set either in the token or in the client.
import gdata.docs.client
import gdata.gauth
import tempfile

# Replace with values from your Google Apps domain admin console
CONSUMER_KEY = ''
CONSUMER_SECRET = ''

# Set this to the user you're impersonating, NOT the admin user
username = 'userid@mydomain.com'

# tempfile.mkstemp() returns (fd, path); we only need the path
destination = tempfile.mkstemp()[1]

token = gdata.gauth.TwoLeggedOAuthHmacToken(
    CONSUMER_KEY, CONSUMER_SECRET, username)

# Setting xoauth_requestor_id in the DocsClient constructor is not required
# because we set it in the token above, but I'm showing it here in case your
# token is constructed via some other mechanism and you need another way to
# set xoauth_requestor_id.
client = gdata.docs.client.DocsClient(
    auth_token=token, xoauth_requestor_id=username)

# Replace this with the resource your application needs
resource = client.GetAllResources()[0]
client.DownloadResource(resource, destination)

print 'Downloaded %s to %s' % (resource.title.text, destination)
Here is the reference in the source code to the TwoLeggedOAuthHmacToken class:
http://code.google.com/p/gdata-python-client/source/browse/src/gdata/gauth.py#1062
And here are the references in the source code that provide the xoauth_requestor_id constructor parameter (read these in order):
http://code.google.com/p/gdata-python-client/source/browse/src/atom/client.py#42
http://code.google.com/p/gdata-python-client/source/browse/src/atom/client.py#179
http://code.google.com/p/gdata-python-client/source/browse/src/gdata/client.py#136
