Get CodeArtifact authorization token from a Bitbucket Pipelines run

I'm using Bitbucket as a source control service and I'm interested in starting to use its Pipelines capability to build and deploy my app. I'm using AWS CodeArtifact to host my Java artifacts.
The thing I'm struggling with is how to authenticate to AWS CodeArtifact from Bitbucket Pipelines.
How do I run
aws sso login --profile XXXX
export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token ....
Is there a best practice for dealing with this?

I think exporting the CODEARTIFACT_AUTH_TOKEN env var is fine. For the initial authentication to AWS, you probably want to take a look at Bitbucket's OIDC capabilities:
https://bitbucket.org/blog/bitbucket-pipelines-and-openid-connect-no-more-secret-management
https://support.atlassian.com/bitbucket-cloud/docs/deploy-on-aws-using-bitbucket-pipelines-openid-connect/
Essentially, you set up an identity provider in your AWS account that lets your pipelines assume a role just by declaring:
- step:
    name: My pipeline
    oidc: true
    ...
(and also exporting an AWS_ROLE_ARN somewhere)
Identities and the assumed roles can be set up with granular clearance levels per repository, deployment stage, etc.
Setting up an OIDC identity provider can be cumbersome. You might be interested in giving https://registry.terraform.io/modules/calidae/bitbucket-oidc/aws/latest a look, even if you aren't using Terraform.
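To make this concrete, here is a minimal sketch of what a step could look like once the identity provider and role exist. The region, role ARN, account id, domain and Maven command below are placeholders, not values from the question:
- step:
    name: Build and publish to CodeArtifact
    oidc: true
    script:
      # Bitbucket exposes the step's OIDC token as BITBUCKET_STEP_OIDC_TOKEN;
      # the AWS CLI picks it up through the web-identity environment variables
      - export AWS_REGION=eu-west-1
      - export AWS_ROLE_ARN=arn:aws:iam::123456789012:role/bitbucket-pipelines-codeartifact
      - export AWS_WEB_IDENTITY_TOKEN_FILE=$(pwd)/web-identity-token
      - echo $BITBUCKET_STEP_OIDC_TOKEN > $(pwd)/web-identity-token
      # no aws sso login needed; the usual token export now works
      - export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token --domain my-domain --domain-owner 123456789012 --query authorizationToken --output text)
      - mvn -s settings.xml deploy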

Related

Creating container registry from Azure Bicep and deploying image to this registry in the same build pipeline in Azure Devops

I'm running into an issue in Azure DevOps, and I have two questions about it. I have an Azure Bicep template that deploys a bunch of resources in a resource group within my Azure subscription.
One of these resources is an Azure Container Registry (ACR) to which I want to push a certain image when the image code is updated. What I am essentially trying to achieve is a single multi-stage Azure build pipeline in which
the resources are deployed via Azure Bicep, after which
I build and push the image to the ACR automatically.
The issue here is that to push an image to the ACR, a service connection needs to be made in Azure DevOps, which can only happen through the portal after the Azure Bicep pipeline has run. I have found that I can use an Azure CLI command, az devops service-endpoint create, to create a connection from a .json file on the command line. This essentially means I could maybe add a .json file, but I would not have the right credentials until after the Bicep build, and I would probably have to expose sensitive Azure account information in my .json file to create the connection (if that is even possible).
This leaves me with two questions:
In practice, is this something one would do, or does it make more sense to just have two pipelines: one for the infrastructure-as-code and one for the application code? I would think it is preferable to be able to deploy everything in one go, but I am quite new to DevOps and can't really find an answer to this question.
Is there any way to achieve this securely in a single Azure DevOps pipeline?
Answer to Q1.
In my experience, infrastructure and application code have always been kept separate. We generally want to split those two so that they are easier to manage. For example, you might want to test a new feature of the ACR separately, like new requirements for adding firewall rules to your ACR, or maybe changing replication settings, without rebuilding/pushing a new image every time.
On the other hand, the BAU pipeline involves building new images daily or weekly. One action is a one-off thing; the other is business as usual. You usually just want to build the ACR and forget about it, only referencing it when required.
In addition, the ACR could eventually be used for images of many other application pipelines you create in the future, so you don't really want to tie it to a specific application pipeline. If you want a future-proof solution, I'd suggest keeping them separate and then having different pipelines for different application builds.
It's generally best to keep core infrastructure resources code separate from the BAU stuff.
Answer to Q2.
I don't know the specifics of how you're running your pipeline, but regarding exposing the sensitive content, there are two best-practice ways I would handle it:
Keep the file with the sensitive content as a secure file in the pipeline library and retrieve it when required.
Keep the content or any secrets in an Azure Key Vault and read them during your pipeline run.
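As a rough sketch of what both options can look like in YAML (the secure file name, Key Vault name and service connection name below are placeholders):
steps:
  # Option 1: pull a secure file from the pipeline library
  - task: DownloadSecureFile@1
    name: serviceConnectionJson
    inputs:
      secureFile: 'service-connection.json'   # name of the secure file in the library
  # Option 2: read secrets from an Azure Key Vault
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: '<service connection name>'
      KeyVaultName: '<key vault name>'
      SecretsFilter: '*'        # or a comma-separated list of secret names
      RunAsPreJob: true         # make the secrets available to every step in the job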
I completely agree with the accepted answer about not doing everything in the same pipeline.
That said, ACR supports RBAC, and you could grant the service principal running your pipeline the AcrPush role. This way you would remove the need to create another service connection:
// container registry name
param registryName string

// role to assign
param roleId string = '8311e382-0749-4cb8-b61a-304f252e45ec' // AcrPush role

// object id of the service principal
param principalId string

resource registry 'Microsoft.ContainerRegistry/registries@2021-12-01-preview' existing = {
  name: registryName
}

// Create the role assignment
resource registryRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(subscription().subscriptionId, resourceGroup().name, registryName, roleId, principalId)
  scope: registry
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleId)
    principalId: principalId
  }
}
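As a rough sketch of how the principalId parameter could be fed in from a pipeline (the file and resource names are placeholders; addSpnToEnvironment exposes the service connection's app id to the script, and a recent Azure CLI returns the object id in the id field):
- task: AzureCLI@2
  displayName: Deploy ACR role assignment
  inputs:
    azureSubscription: <service connection name>
    scriptType: pscore
    scriptLocation: inlineScript
    addSpnToEnvironment: true
    inlineScript: |
      # resolve the object id of the service connection's service principal
      $objectId = az ad sp show --id $env:servicePrincipalId --query id -o tsv
      az deployment group create `
        --resource-group <resource group> `
        --template-file acr-role-assignment.bicep `
        --parameters registryName=<acr name> principalId=$objectId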
In subsequent pipelines, you could then log in and buildAndPush to the container registry without manually creating a service connection or storing any other secrets:
steps:
  ...
  - task: AzureCLI@2
    displayName: Connect to container registry
    inputs:
      azureSubscription: <service connection name>
      scriptType: pscore
      scriptLocation: inlineScript
      inlineScript: |
        az acr login --name <azure container registry name>
  - task: Docker@2
    displayName: Build and push image
    inputs:
      command: buildAndPush
      repository: <azure container registry name>.azurecr.io/<repository name>
      ...
My answer is really about not having to create an extra set of credentials that you would also have to maintain separately.

How to integrate kubernetes cloud plugin with jenkins

I am trying to integrate Jenkins with Kubernetes secrets in a dedicated namespace, but even after creating the service account and secret, I still see Test Connection failures.
You need to create a Jenkins global credential with the secret so the cluster can be authenticated. Try using the default namespace initially. Also double-check your Kubernetes URL by running kubectl cluster-info.
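As a rough sketch of the cluster-side setup (the namespace and service account names are assumptions, and kubectl create token requires kubectl 1.24+):
# create a service account for Jenkins in the dedicated namespace
kubectl create namespace jenkins
kubectl create serviceaccount jenkins -n jenkins
# issue a token to store in the Jenkins "Secret text" credential
kubectl create token jenkins -n jenkins --duration=8760h
# the control plane URL printed here goes into the Kubernetes cloud configuration
kubectl cluster-info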

What is the best way to login to the Azure Devops CLI from a Release Pipeline?

I am using the Azure DevOps CLI in one of my pipelines. In order to use the CLI I first need to log in (authenticate). Unlike with the REST API, I can't use the OAuth token that is available to me.
So here's my understanding of my options:
I can do an "az login" using a PAT that I map to this environment variable:
AZURE_DEVOPS_EXT_PAT
THIS IS THE WAY I'm doing it now.
Apparently you can use a Service Principal. I like this the most because I should theoretically be able to have this principal apply to everyone on my team. Is that correct?
Use "az login" with a user/password. This is least desirable way to doing it because it involves passing around credentials. Too messy.
Although my pipeline has the OAuth token exposed (System.AccessToken), it cannot be used by the CLI. For example, if I try to assign the value of the OAuth token to AZURE_DEVOPS_EXT_PAT, it fails (AZURE_DEVOPS_EXT_PAT=$System.AccessToken).
Questions:
Is it possible to use the OAuth token to log in to the CLI?
Is the Service Principal the best way to go?
Additional Info:
I do not have subscriptions only a tenant-id, we're not creating any Azure resources, we're an AWS shop that happens to be using ADO only for CICD.
Use az devops login instead of az login
From your pipeline use:
- script: echo $ACCESS_TOKEN | az devops login
  env:
    ACCESS_TOKEN: $(System.AccessToken)
A few interesting notes:
Secrets (like System.AccessToken) are not available to scripts unless you pass them in explicitly as environment variables.
the System.AccessToken variable is the default access token of the build agent
there is a project-specific build agent and a project-collection build agent. The one you use is actually controlled by the 'limit access to current project scope' flag in the Pipeline settings for the project.
you may need to elevate permissions for the build agent if you're trying to manipulate objects. For example, you could grant the Create Tag permission on a repository if you wanted the build agent to update the repository.
you can also create your own PAT token with permissions that you specify.
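On the AZURE_DEVOPS_EXT_PAT point from the question: instead of piping the token into az devops login, you can also map it straight into that variable. A sketch (the organization URL is a placeholder):
- script: az devops project list --organization https://dev.azure.com/<your organization>
  displayName: Run an Azure DevOps CLI command
  env:
    # mapping the secret explicitly is what makes it visible to the CLI
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)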

Configure Jenkins CI build to use TFVC hosted in Azure DevOps

We recently migrated from an on-premise TFS server to Azure DevOps. Our team uses TFVC for source control, and I'm getting the following exception when Jenkins polls for new check-ins:
FATAL: This server requires federated authentication but no mechanism was available to handle it.
com.microsoft.tfs.core.exceptions.TFSFederatedAuthException: This server requires federated authentication but no mechanism was available to handle it.
Given the exception class name is TFSFederatedAuthException I suspect Azure is expecting some sort of OAuth integration, but Jenkins doesn't appear to support that for TFVC.
All I did was change the Collection URL for that Jenkins build to https://dev.azure.com/MyCompany. The Project path remains the same, and I verified this, because I was able to re-map all of my TFVC branches in Visual Studio by just pointing to the different collection URL and keeping the same project path.
This Jenkins server is internal with no public facing IP address or host name.
How can I allow Jenkins to poll a TFVC repository hosted in Azure DevOps in order to trigger a CI build in Jenkins?
Why not use Azure Pipelines? That's a much bigger migration effort at the moment, and I'm just trying to solve a short-term problem.
Using Azure Pipelines is my long-term goal, but I first need to figure out how our automated tests can use an Oracle database, because all data is deleted before each test is executed using Selenium.
Azure DevOps uses OAuth to communicate by default, so putting in your username and password won't work. Instead, the trick is to generate a Personal Access Token (I suspect the Code|Read+Write scope should do it) and pass that in.
For the username, pass in a single dot (.); for the password, use your generated personal access token. Give the token a meaningful name so you know which one is about to expire once you get the email notification.

Working with jenkins credentials

I want to know how to create credentials that can be used by Jenkins and by jobs running in Jenkins to connect to 3rd party services.
You should specify which 3rd party service you want to work with.
Below is an example of credentials with bitbucket
I am currently working with Jenkins ver. 1.568.
By default, there's a Credentials feature. If you want to add a credential, just click Add Credentials. For example, I'd like to add an SSH Username with private key credential so I can use it when checking out code from Bitbucket.
Credentials plugin - provides a centralized way to define credentials that can be used by your Jenkins instance, plugins and build jobs.
Credentials Binding plugin - allows you to configure your build jobs to inject credentials as environment variables.
The third party plugins need to be installed in your Jenkins instance. For example, Assembla Auth Plugin allows you to authenticate to an Assembla repository.
Which 3rd party services are you working with?
Instead of using the SSH Username with private key option, you can simply use the Username with password option.
