I am trying to clone a simple, private repo in GitLab, build an image and push it to the repo's container registry using a Tekton pipeline. I have configured a service account that refers to a secret that uses basic auth. The clone task works fine, but the Kaniko build/push task keeps failing with an authentication issue (error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again). My initial assumption was that I could use the same sa/secret for both git and docker. I have tried several permutations, including separate secrets with Tekton annotations, a separate sa for the build/push task, and other types of auth such as dockerconfigjson and token, all with no luck. For such a fundamental pair of tasks, I'm surprised I can't find an easy answer.
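For reference, the shape of the setup I've been experimenting with looks roughly like this (names, hosts and annotations are just one of the permutations I mentioned, not a known-good config):

apiVersion: v1
kind: Secret
metadata:
  name: gitlab-creds              # placeholder name
  annotations:
    tekton.dev/git-0: https://gitlab.com
    tekton.dev/docker-0: https://registry.gitlab.com
type: kubernetes.io/basic-auth
stringData:
  username: <gitlab username>
  password: <gitlab password or token>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot                 # placeholder name, referenced by the PipelineRun
secrets:
  - name: gitlab-creds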
Related
I'm running into an issue in Azure DevOps. I have two questions regarding the issue. The issue is that I have an Azure Bicep template that deploys a bunch of resources in a resource group within my Azure subscription.
What I am essentially trying to achieve is a single multi-stage Azure build pipeline in which
The resources are deployed via Azure Bicep, after which
I build and push the image to the ACR automatically
Now the issue here is that to push an image to ACR, a service connection needs to be made in Azure DevOps, which can only happen through the portal after the Azure Bicep pipeline has run. I have found that I can use an Azure CLI command, az devops service-endpoint create, to create a connection from a .json file on the command line, which essentially means I could maybe add a .json file. However, I would not have the right credentials until after the Azure Bicep deployment, and I would probably have to expose sensitive Azure account information in my .json file to create the connection (if that is even possible).
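For reference, the command I found looks roughly like this (the file, organization and project names are placeholders):

az devops service-endpoint create \
  --service-endpoint-configuration ./service-connection.json \
  --organization https://dev.azure.com/<organization> \
  --project <project>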
This leaves me with two questions:
In practice, is this something that one would do, or does it make more sense to just have two pipelines: one for the infrastructure-as-code and one for the application code? I would think that it is preferable to be able to deploy everything in one go, but I am quite new to DevOps and can't really find an answer to this question.
Is there any way that this would still be possible to achieve securely in a single Azure DevOps pipeline?
Answer to Q1.
From my experience, infrastructure and application have always been kept separate. We generally want to split those two so that they're easier to manage. For example, you might want to test a new feature of the ACR separately, like new requirements for adding firewall rules to your ACR, or maybe changing replication settings, without rebuilding/pushing a new image every time.
On the other hand, the BAU (business-as-usual) pipeline involves building new images daily or weekly. One action is a one-off; the other is ongoing. You usually just want to build the ACR once and forget about it, only referencing it when required.
In addition, the ACR could eventually be used for images of many other application pipelines you would have in the future, so you don't really want to tie it to a specific application pipeline. If you want a future-proof solution, I'd suggest keeping them separate and then having different pipelines for different application builds.
It's generally best to keep core infrastructure resources code separate from the BAU stuff.
Answer to Q2.
I don't know the specifics of how you're running your pipeline, but from what I understand, regarding exposing the sensitive content, there are two ways (best practice) I would handle this:
Keep the file with the sensitive content as a secure file in the pipeline library and retrieve it when required.
Keep the content or any secrets in an Azure Key Vault and read them during your pipeline run (a rough sketch of both options follows).
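As a rough YAML sketch (the vault, file and secret names are placeholders; DownloadSecureFile@1 and AzureKeyVault@2 are the standard Azure Pipelines tasks for these two options):

steps:
  # Option 1: pull a file from the pipeline library's secure files
  - task: DownloadSecureFile@1
    name: connectionConfig
    inputs:
      secureFile: service-connection.json   # uploaded under Pipelines > Library > Secure files
  # Option 2: read secrets from an Azure Key Vault
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: <service connection name>
      KeyVaultName: <key vault name>
      SecretsFilter: '*'                    # or a comma-separated list of secret names
  # Key Vault secrets then become pipeline variables, e.g. $(mySecretName),
  # and the downloaded file is available at $(connectionConfig.secureFilePath)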
I completely agree with the accepted answer about not doing everything in the same pipeline.
That said, ACR supports RBAC, and you could grant the service principal running your pipeline the AcrPush role. This way you would remove the need to create another service connection:
// container registry name
param registryName string

// role to assign
param roleId string = '8311e382-0749-4cb8-b61a-304f252e45ec' // AcrPush role

// object id of the service principal
param principalId string

resource registry 'Microsoft.ContainerRegistry/registries@2021-12-01-preview' existing = {
  name: registryName
}

// Create role assignment
resource registryRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(subscription().subscriptionId, resourceGroup().name, registryName, roleId, principalId)
  scope: registry
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleId)
    principalId: principalId
  }
}
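A possible way to wire this up (file and parameter names are placeholders; the object id is that of the service principal behind the pipeline's existing ARM service connection):

az deployment group create \
  --resource-group <resource group> \
  --template-file acr-role-assignment.bicep \
  --parameters registryName=<registry name> principalId=<service principal object id>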
In subsequent pipelines, you can then log in and buildAndPush to the container registry without having to manually create a service connection or store any other secrets:
steps:
  ...
  - task: AzureCLI@2
    displayName: Connect to container registry
    inputs:
      azureSubscription: <service connection name>
      scriptType: pscore
      scriptLocation: inlineScript
      inlineScript: |
        az acr login --name <azure container registry name>
  - task: Docker@2
    displayName: Build and push image
    inputs:
      command: buildAndPush
      repository: <azure container registry name>.azurecr.io/<repository name>
      ...
My answer is really about not having to create an extra set of credentials that you would also have to maintain separately.
I've followed this GitLab tutorial link to connect my Jenkins server to GitLab.
Everything went fine, and I've:
created a personal access token in my GitLab profile
created a GitLab API token credential using my GitLab access token in the Jenkins system configuration, as stated in the tutorial
created a freestyle Jenkins job and chosen my GitLab connection from the dropdown
checked the Build when a change is pushed to GitLab checkbox
checked the Accepted Merge Request Events and Closed Merge Request Events checkboxes
generated a secret token from the above freestyle project
used the freestyle Jenkins project secret token to create a webhook in the GitLab project repository integration settings
Till there everything went fine.
Then I added and pushed code, including a Jenkinsfile, to my GitLab repository and went to the Jenkins web UI to view the build status. The pipeline showed green, saying build success, while nothing actually happened: no code had been retrieved from GitLab (as shown in the attached console output screenshot), so no Jenkinsfile was executed and no error message was shown.
I tried to run the build manually from the web UI, but got the same result, and there is no way to trigger my pipeline on git push events from GitLab.
I thought maybe I should select Git in the Source Code Management section (I left it set to None, as the tutorial doesn't mention it), but if I choose Git as SCM I cannot select my GitLab API token credentials; it seems we cannot use the GitLab plugin (API token) and the Git plugin for the same build project.
So how should I proceed to be able to build my Jenkins project from GitLab with a Jenkinsfile, using the GitLab API token?
Does the GitLab tutorial miss some useful steps?
OK, I think I understand the issue now.
There are two sets of credentials: the GitLab API token for access to GitLab webhooks, and a separate one for cloning the git repo during builds.
So we can't use the GitLab API token for cloning the repository. For this you have to use either an SSH key or a username/password combination. Furthermore, this dropdown is part of the git plugin, not the GitLab plugin.
So the GitLab plugin can't control which credentials are offered in this dropdown.
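As a minimal Groovy sketch of the cloning side (the URL and credential id are placeholders; the credential is a plain Username/Password entry in Jenkins, not the GitLab API token):

checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    userRemoteConfigs: [[
        url: 'https://gitlab.com/<group>/<project>.git',
        credentialsId: 'gitlab-username-password'   // Username/Password credential used only for git
    ]]
])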
I am trying to start a VM that already exists in Google Cloud from my Jenkins, to use it as a slave. The reason is that if I start from the template of this VM, I need to do a few things before I can use my Jenkins code.
Does anyone know how to start VMs that already exist in my VM pool in Google Cloud via Jenkins?
There might be two approaches to this, depending on the operations you need to run beforehand on your machine that are preventing you from just recreating it.
First, and possibly the most straightforward given the restriction that the machine already exists, would be talking directly to the GCE API in order to list and start the machine from Jenkins (using a build step).
Basically you can make requests to the GCE API to do operations with your instances. I suggest doing this using gcloud from within the Jenkins master node as it'll save you having to write your own client. It's straightforward as you only have to "install" it in your master and you can make it work safely using a service account.
Below is the outline of this approach:
Download the cloud-sdk to your master node following these release instructions.
You can do this once outside of Jenkins or directly in the build step; it doesn't matter as long as Jenkins and its user are able to get the binary.
Create the service account, generate authentication keys and give it permissions to interact with GCE.
Using a service account is the way to go as you can restrict its permissions to the operations that are relevant for you.
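If it helps, creating and keying such a service account could look roughly like this (project and account names are placeholders; roles/compute.instanceAdmin.v1 is one role that covers starting instances):

# Create a dedicated service account for Jenkins
gcloud iam service-accounts create jenkins-vm-starter --project <project-id>
# Allow it to manage (start/stop) Compute Engine instances
gcloud projects add-iam-policy-binding <project-id> \
    --member "serviceAccount:jenkins-vm-starter@<project-id>.iam.gserviceaccount.com" \
    --role "roles/compute.instanceAdmin.v1"
# Generate a key file that the Jenkins user can read
gcloud iam service-accounts keys create /var/lib/jenkins/gce-key.json \
    --iam-account jenkins-vm-starter@<project-id>.iam.gserviceaccount.com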
Once you get the service account that will be bound to your gcloud client, you'll need to set it up in Jenkins. You might want to do this in a build step (I'm using Groovy here but it should be easy to translate it to the UI):
stage('Start_machine') {
    steps {
        // Considering that you already installed gcloud in this node, but you can also curl it from here if that's convenient
        sh '''
            # You can set this as a scoped env var in Jenkins or just hard-code it
            gcloud config set project ${GOOGLE_PROJECT_ID}
            # This needs a json key file location accessible by Jenkins like: --key-file /var/lib/jenkins/..key.json
            gcloud auth activate-service-account --key-file ${GOOGLE_SERVICE_ACCOUNT_KEY}
            # Check the reference on this command: https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
            gcloud compute instances start my_existing_instance
            echo "Instance started"
        '''
    }
    post {
        always {
            echo "Result: ${currentBuild.result}"
        }
    }
}
Wrapping up: You basically create a service account that has the permissions to start your instances. Download a client that can interact with the GCE API (gcloud), authenticate it and start the instance, all from within your pipeline.
The second approach might be easier if there were no constraints regarding the preexisting machine.
Jenkins has a plugin for Compute Engine that will automatically spin up new workers whenever needed.
I know that you need to do some previous operations before Jenkins sends work to these slave machines. However, I want to bring to your attention that this plugin also supports startup scripts.
So there's always the option to preload your operations there before the machine takes off and by the time it's ready, you might have everything done.
Hope this helps.
I am trying to build a Jenkins job (trigger builds remotely) on a Docker image build, but all I am getting on Docker Hub is the following:
HISTORY
ID Status Date & Time
7345... ! ERROR 10/12/17 10:03
Reason (I assume): Docker is not authenticated to post to the Jenkins URL.
Question: How can I trigger the job automatically when an image gets pushed to docker hub?
Pull and run the Watchtower Docker image to poll any third-party public Docker image on Docker Hub or Quay that you need (typically a base image of your own containers). Here's how. "Polling" here does not mean crudely pulling the whole image every 5 minutes or so - Watchtower monitors periodically for changes in the image, downloading only the checksum (SHA digest) most of the time (when there are no changes relative to the locally cached image).
Install the Build Token Root Plugin in your Jenkins server and set it up to receive Slack-formatted notifications secured with a token to trigger builds remotely or - safer - locally (those triggers will be coming from the Watchtower container, not Slack). Here's how.
Set up Watchtower to post Slack messages to your Jenkins endpoint upon every change in the image(s) (tags) that you want (a rough sketch of this wiring follows this list). Here's how.
Optionally, if your scale is so large that you could end up overloading and bringing down the entire Docker Hub with a flood of HTTP GET requests (should the time triggers go wrong and turn into a tight loop), make sure to build in some safety checks on top of Watchtower to "watch the watchman".
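A rough sketch of how the first three steps could be wired together (host, job and token are placeholders; the environment variables are Watchtower's Slack notification settings, and the URL is the Build Token Root plugin's buildByToken endpoint):

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_MONITOR_ONLY=true \
  -e WATCHTOWER_NOTIFICATIONS=slack \
  -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL="https://<jenkins host>/buildByToken/build?job=<job name>&token=<token>" \
  containrrr/watchtower <image to watch>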
You can try the following plugin: https://wiki.jenkins.io/display/JENKINS/CloudBees+Docker+Hub+Notification, which claims to do what you're looking for.
You can configure a webhook in Docker Hub which will trigger the Jenkins build.
Docker Hub webhooks targeting your Jenkins server endpoint require making periodic copies of the image to another repo that you own [see my other answer with the Docker Hub -> Watchtower -> Jenkins integration through Slack notifications].
More details
You need to set up a cron job with periodic polling (docker pull) of the source repo to [docker] pull its 'latest' tag, and if a change is detected, re-tag it as your own and [docker] push to a repo you own (e.g. a "clone" of the source Docker Hub repo) where you have set up a webhook targeting your Jenkins build endpoint.
Then and only then (in a repo you own) will Jenkins plugins such as Docker Hub Notification Trigger work for you.
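A minimal sketch of such a cron script, with placeholder repo names (the upstream image and the mirror repo are assumptions):

#!/bin/sh
# Pull the upstream image and compare its digest with the last one we saw
SRC=<upstream image>:latest
DST=<your mirror repo>:latest    # the repo you own, with the Docker Hub webhook configured
docker pull "$SRC"
NEW_DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' "$SRC")
if [ "$NEW_DIGEST" != "$(cat /var/tmp/last_digest 2>/dev/null)" ]; then
    docker tag "$SRC" "$DST"
    docker push "$DST"           # the push fires the webhook that triggers Jenkins
    echo "$NEW_DIGEST" > /var/tmp/last_digest
fi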
Polling for Dockerfile / release changes
As a substitute for polling the registry for image changes (which need not generate much network traffic thanks to the local cache of docker images), you can also poll the source Dockerfile on GitHub using wget. For instance, the Dockerfiles of the official Docker Hub images are here. If the GitHub repo makes releases, you can also get push notifications of them using the GitHub Watch > Releases Only feature, provided they have CI docker builds. Docker images will usually be available with a delay after code releases, even with complete automation, so image polling is more reliable.
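A minimal sketch of that Dockerfile-polling idea (the URL is a placeholder):

#!/bin/sh
URL=https://raw.githubusercontent.com/<org>/<repo>/master/Dockerfile
wget -qO /tmp/Dockerfile.new "$URL"
if ! cmp -s /tmp/Dockerfile.new /tmp/Dockerfile.last; then
    mv /tmp/Dockerfile.new /tmp/Dockerfile.last
    # a change was detected: trigger the Jenkins build here (e.g. with curl)
fi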
Other projects
There was also a proposal for a 2019 Google Summer of Code project called Polling Docker Registries for Image Changes that tried to solve this problem for Jenkins users (incl. apparently Google), but sadly it was not taken up by participants.
Run a cron job with a periodic docker search to list all tags in the docker image of interest (here's the script). Note that this script requires the substitution of the jannis/jq image with an existing image (e.g. docker run --rm -i imega/jq).
Save resulting tags list to a file, and monitor it for changes (e.g. with inotifywait).
Fire a POST request using curl to your Jenkins server's endpoint using Generic Webhook Trigger plugin.
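For that last step, the curl call could look roughly like this (host and token are placeholders; /generic-webhook-trigger/invoke is the plugin's endpoint):

curl -X POST "https://<jenkins host>/generic-webhook-trigger/invoke?token=<token>"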
Cautions:
for efficiency reasons this tags listing script should be limited to a few (say, 3) top pages or simple repos with a few tags,
image tag monitoring relies on tags being updated correctly (automatically) after each image change, rather than being stuck in the past, like, say, Ubuntu tags (e.g. trusty-20190515 was updated a few days ago, in late November, without a change to its mid-May tag name).
GitLab CI pulls the docker image every time for every task (stage). This operation wastes a lot of time. I want to optimize it if possible.
I see two places to work with:
1. explicitly configure CI stages to reuse the same docker machine;
2. use the docker machine from the previous commit when building the next commit (if no changes were made to the configuration file).
This kind of configuration can be specified through the pull_policy setting on the runner itself.
As Jakub highlighted in the comments to the question, on shared runners on GitLab.com the policy is set to always, so the runner will always download a fresh copy of the image, even if the same copy is available locally.
This is due to security reasons.
You can find confirmation of that in the docs:
This pull policy should be used if your Runner is publicly available
and configured as a shared Runner in your GitLab instance. It is the
only pull policy that can be considered as secure when the Runner will
be used with private images.
The security implication is that if the runner checks a local image first, a non-authorized user could get access to a private docker image by guessing its name.
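On your own (non-shared) runners, that pull policy lives in the runner's config.toml; a minimal sketch (runner name and default image are placeholders):

[[runners]]
  name = "my-runner"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    # reuse a locally cached image instead of pulling it for every job
    pull_policy = "if-not-present"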