I have a series of build pipelines in Azure DevOps that use the Docker@2 task to buildAndPush an image from a Dockerfile into our Azure Container Registry. The pipeline needs a Service Connection for the ACR, but at the moment I also supply the FQDN of the same ACR as a variable so that the image can be tagged and pushed correctly.
In the interests of DRY, is there any way to access the Service Connection and extract this information instead? I haven't found much information about what adding a Service Connection actually does on the build agent - presumably there is some file of credentials/properties created somewhere...
I'm running into an issue in Azure DevOps, and I have two questions about it. I have an Azure Bicep template that deploys a number of resources in a resource group within my Azure subscription.
One of these resources is an Azure Container Registry (ACR) to which I want to push a certain image whenever the image code is updated. What I am essentially trying to achieve is a single multi-stage Azure build pipeline in which
The resources are deployed via Azure Bicep, after which
I build and push the image to the ACR automatically
Now the issue here is that pushing an image to the ACR requires a service connection in Azure DevOps, which can only be created through the portal after the Bicep pipeline has run. I have found that I can use an Azure CLI command, az devops service-endpoint create, to create a connection from a .json file on the command line, which means I could perhaps add such a .json file. However, I would not have the right credentials until after the Bicep deployment, and I would probably have to expose sensitive Azure account information in the JSON file to create the connection (if that is even possible).
This leaves me with two questions:
In practice, is this something that one would do, or does it make more sense to just have two pipelines: one for the infrastructure-as-code and one for the application code? I would think that it is preferable to be able to deploy everything in one go, but I am quite new to DevOps and can't really find an answer to this question.
Is there any way this could still be achieved securely in a single Azure DevOps pipeline?
Answer to Q1.
From my experience, infrastructure and application have always been kept separate. We generally want to split those two so that they are easier to manage. For example, you might want to test a new feature of the ACR separately, like new requirements for adding firewall rules to your ACR, or maybe changing replication settings, without rebuilding/pushing a new image every time.
On the other hand, the business-as-usual (BAU) pipeline involves building new images daily or weekly. One action is a one-off; the other is BAU. You usually just want to build the ACR once and forget about it, only referencing it when required.
In addition, the ACR could eventually be used for images from many other application pipelines you might have in the future, so you don't really want to tie it to a specific application pipeline. If you want a future-proof solution, I'd suggest keeping them separate and then having different pipelines for different application builds.
It's generally best to keep core infrastructure resources code separate from the BAU stuff.
Answer to Q2.
I don't know the specifics of how you're running your pipeline, but from what I understand, regarding exposing the sensitive content, there are two best-practice ways I would handle this, sketched below:
Keep the file with the sensitive content as a secure file in the pipeline library and then retrieve it when required.
Keep the content or any secrets in an Azure Key Vault and read them during your pipeline run.
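A minimal sketch of both options in pipeline YAML, assuming a secure file named service-connection.json and a Key Vault holding secrets named acr-username and acr-password (all of these names are placeholders, not values from the question):

steps:
  # Option 1: pull a secure file from the pipeline Library
  - task: DownloadSecureFile@1
    name: connectionJson
    displayName: Download the file with the sensitive content
    inputs:
      secureFile: service-connection.json   # hypothetical secure file name

  # Option 2: read secrets from an Azure Key Vault at runtime
  - task: AzureKeyVault@2
    displayName: Fetch secrets from Key Vault
    inputs:
      azureSubscription: <service connection name>
      KeyVaultName: <key vault name>
      SecretsFilter: 'acr-username,acr-password'   # hypothetical secret names

The downloaded file is then available at $(connectionJson.secureFilePath), and each fetched Key Vault secret becomes a pipeline variable of the same name for the rest of the job.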
I completely agree with the accepted answer about not doing everything in the same pipeline.
However, ACR supports RBAC, and you could grant the service principal running your pipeline the AcrPush role. This way you would remove the need to create another service connection:
// container registry name
param registryName string

// role to assign
param roleId string = '8311e382-0749-4cb8-b61a-304f252e45ec' // AcrPush role

// object id of the service principal
param principalId string

// reference to the existing container registry
resource registry 'Microsoft.ContainerRegistry/registries@2021-12-01-preview' existing = {
  name: registryName
}

// create the role assignment scoped to the registry
resource registryRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(subscription().subscriptionId, resourceGroup().name, registryName, roleId, principalId)
  scope: registry
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleId)
    principalId: principalId
  }
}
In subsequent pipelines, you could then log in and buildAndPush to the container registry without manually creating a service connection or storing any other secrets:
steps:
  ...
  - task: AzureCLI@2
    displayName: Connect to container registry
    inputs:
      azureSubscription: <service connection name>
      scriptType: pscore
      scriptLocation: inlineScript
      inlineScript: |
        az acr login --name <azure container registry name>

  - task: Docker@2
    displayName: Build and push image
    inputs:
      command: buildAndPush
      repository: <azure container registry name>.azurecr.io/<repository name>
      ...
My answer is really about not having to create an extra set of credentials that you would also have to maintain separately.
Well, how can I create a pipeline to build and release a Docker Compose project with Azure DevOps through the graphical interface (GUI)? I am not an expert in DevOps, but I have this challenge in my work.
I would point you toward a great guide by Microsoft; it's for Java applications, but you can get what you need out of it.
Solution in general:
Open the Azure Portal. Select + Create a resource and search for Container Registry. Select Create. In the Create Container Registry dialog, enter a name for the service, select the resource group and location, and click Review + Create. Once the validation succeeds, click Create.
In your CI build you need two tasks: one for the build/compose, where you provide your docker-compose file, and another to publish the image to your Azure Container Registry. You will use the "same task" for both.
This container registry is where you store the outputs of your builds, similar to artifacts in traditional CI builds. This is where you publish your application from during a release to on-prem or cloud.
You can read more about the parameters you need to provide and the settings in details in the guide.
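As a rough sketch of what those two tasks could look like in YAML (the guide itself walks through the GUI; the service connection name, registry name and compose file path below are placeholders, not values from the guide):

steps:
  - task: DockerCompose@0
    displayName: Build services
    inputs:
      containerregistrytype: Azure Container Registry
      azureSubscriptionEndpoint: <service connection name>
      azureContainerRegistry: <registry name>.azurecr.io
      dockerComposeFile: docker-compose.yml
      action: Build services
      additionalImageTags: $(Build.BuildId)   # tag images with the build id

  - task: DockerCompose@0
    displayName: Push services
    inputs:
      containerregistrytype: Azure Container Registry
      azureSubscriptionEndpoint: <service connection name>
      azureContainerRegistry: <registry name>.azurecr.io
      dockerComposeFile: docker-compose.yml
      action: Push services
      additionalImageTags: $(Build.BuildId)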
P.S. Here is an example of how to dockerize an existing .NET Core application.
How do you build and release your Docker Compose project locally?
Normally, you can copy the docker-compose CLI and Docker CLI commands that you execute locally into shell script tasks (such as Bash, PowerShell, etc.) in the pipeline you set up on Azure DevOps.
Of course, there are also the available Docker Compose task and Docker task.
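For instance, a minimal sketch of the script-based approach, assuming an Azure service connection and an ACR to push to (all names are placeholders, not values from the question):

steps:
  - task: AzureCLI@2
    displayName: Build and push with docker-compose
    inputs:
      azureSubscription: <service connection name>
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # log in to the registry, then run the same commands you would run locally;
        # the image: entries in docker-compose.yml must include the registry prefix
        az acr login --name <azure container registry name>
        docker-compose -f docker-compose.yml build
        docker-compose -f docker-compose.yml push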
I have a Jenkins pipeline which builds a Docker image of a Spring Boot application and pushes that image to AWS ECR. We have created an ECS cluster which takes this image from the ECR repository and runs a container using an ECS task and service.
We created the ECS cluster manually. But now I want that whenever a new image is pushed by my CI/CD to the ECR repository, it takes the new image, creates a new task definition and runs it automatically. What are the ways to achieve this?
But now I want that whenever a new image is pushed by my CI/CD to the ECR repository, it takes the new image, creates a new task definition and runs it automatically. What are the ways to achieve this?
As far as this step is concerned, it would be easier to do with AWS CodePipeline, as there is no out-of-the-box feature in Jenkins that can detect changes to an ECR image.
The completed pipeline detects changes to your image, which is stored in the Amazon ECR image repository, and uses CodeDeploy to route and deploy traffic to an Amazon ECS cluster and load balancer. CodeDeploy uses a listener to reroute traffic to the port of the updated container specified in the AppSpec file. The pipeline is also configured to use a CodeCommit source location where your Amazon ECS task definition is stored. In this tutorial, you configure each of these AWS resources and then create your pipeline with stages that contain actions for each resource.
tutorials-ecs-ecr-codedeploy
build-a-continuous-delivery-pipeline-for-your-container-images-with-amazon-ecr-as-source
If you are looking to do this in Jenkins, then you have to manage these things at your end.
Here are the steps:
Push image to ECR
Re-use the image name and create a task definition in your Jenkins job using the aws-cli or ecs-cli with the same image name
Create the service with the new task definition
You can look for details here
set-up-a-build-pipeline-with-jenkins-and-amazon-ecs
We ended up at the same conclusion, as there was no exact tooling matching this scenario. So we developed a little "gluing" tool from a few other open-source ones, and recently open-sourced it as well:
https://github.com/GuccioGucci/yoke
Please have a look, since we're sharing templates for Jenkins, as it's our pipeline orchestrator as well.
I have a GKE cluster with a running Jenkins master. I am trying to start a build. I am using a pipeline with a slave configured by the Kubernetes plugin (pod templates). I have a custom image for my Jenkins slave published in GCR (private access). I have added credentials (a Google service account) for my GCR to Jenkins. Nevertheless, Jenkins/Kubernetes is failing to start up a slave because the image can't be pulled from GCR. When I use public images (jnlp) there is no issue.
But when I try to use the image from GCR, Kubernetes says:
Failed to pull image "eu.gcr.io/<project-id>/<image name>:<tag>": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Although the pod is running in the same project as the GCR.
I would expect Jenkins to start the slave even if I use an image from GCR.
Even if the pod is running in a cluster in the same project, it is not authenticated by default.
You stated that you've already set up the service account, and I assume that there's a key provisioned on the Jenkins server.
If you're using the Google OAuth Credentials Plugin you can then also use the Google Container Registry Auth Plugin to authenticate to a private GCR repository and pull the image.
I'm new to the CI/CD process.
We have a setup where a Spring Boot application is deployed through Jenkins in Docker on the same machine.
We have been searching the internet for how to deploy an application to another server; the only lead we have found is the SSH agent, and I understand SSH is only for communication.
Could we have a complete example of how to deploy to another server, and what other preventive measures should be taken into account?
Kindly guide us
In your Jenkins pipeline you need to define a stage for publishing the Docker image, and in your infrastructure you need a repository that stores your artifacts and Docker images.
Repositories I know of are Nexus and JFrog Artifactory.
So your server1, at the end of the pipeline, will upload the stable Docker image to Nexus.
To run the Docker images on another server (without using an orchestrator) you may use Ansible, for example with a playbook like the sketch below.
On the net you can find a lot of sources, for example: https://www.codementor.io/mamytianarakotomalala/how-to-deploy-docker-container-with-ansible-on-debian-8-mavm48kw0
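To give an idea, here is a minimal Ansible playbook sketch under assumptions of my own (a web inventory group, a Nexus Docker registry at nexus.example.com:8082, and the community.docker collection installed; none of these come from the question):

- hosts: web
  become: true
  vars:
    registry: nexus.example.com:8082                                   # hypothetical Nexus Docker registry
    image: "{{ registry }}/myapp:{{ app_version | default('latest') }}"
  tasks:
    - name: Log in to the private registry
      community.docker.docker_login:
        registry_url: "{{ registry }}"
        username: "{{ registry_user }}"
        password: "{{ registry_password }}"

    - name: Pull the released image
      community.docker.docker_image:
        name: "{{ image }}"
        source: pull

    - name: Run (or replace) the application container
      community.docker.docker_container:
        name: myapp
        image: "{{ image }}"
        state: started
        recreate: true
        published_ports:
          - "8080:8080"

Jenkins would then call ansible-playbook (for example via the Ansible plugin or a shell step) as the last stage, after the image has been uploaded to Nexus.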