Does the AWS ECR plugin for Jenkins support authenticating public repositories on AWS ECR?
Something like this:
// Uploading Docker images into an AWS ECR public repo
stage('Pushing to ECR') {
    steps {
        script {
            docker.withServer('tcp://<Docker-Host>:2376', 'Docker_Server_Auth') {
                docker.withRegistry('https://public.ecr.aws/<alias>', 'ecr-public:us-east-1:Aws_Credentials') {
                    myImage.push("v13")
                }
            }
        }
    }
}
The above pipeline fails to authenticate and throws the error 'failed to authenticate'.
With a private repo, the plugin works and authenticates, as in the pipeline below:
// Pushing to an ECR private repo
stage('Pushing to ECR') {
    steps {
        script {
            docker.withServer('tcp://<docker-host>:2376', 'Docker_Server_Auth') {
                docker.withRegistry('https://<acc-id>.dkr.ecr.ap-south-1.amazonaws.com', 'ecr:ap-south-1:Aws_Credentials') {
                    myImage.push("v13")
                }
            }
        }
    }
}
Using sh 'aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/' works inside the pipeline.
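For reference, a minimal sketch of that shell-based workaround inside a stage. This assumes 'Aws_Credentials' is a Jenkins username/password credential holding an access key ID and secret key, and it omits the withServer wrapper for brevity:
stage('Pushing to ECR Public') {
    steps {
        script {
            // Assumption: 'Aws_Credentials' stores the AWS access key as
            // username and the secret key as password; the AWS CLI picks
            // these up from the standard environment variables.
            withCredentials([usernamePassword(credentialsId: 'Aws_Credentials',
                                              usernameVariable: 'AWS_ACCESS_KEY_ID',
                                              passwordVariable: 'AWS_SECRET_ACCESS_KEY')]) {
                // ECR Public auth tokens are always issued from us-east-1
                sh 'aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws'
                // Assumes myImage was tagged as public.ecr.aws/<alias>/<repo>
                myImage.push("v13")
            }
        }
    }
}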
Is there any way to authenticate an ECR public repo with the ECR plugin, the way we do with a private repo?
Related
I'm trying to push a Docker image from Jenkins to DockerHub using a declarative pipeline. The DockerHub credentials are stored in Vault, and I wish to use the Docker plugin in my pipeline syntax.
The following attempts were successful:
If I store the DockerHub credentials in Jenkins, the pipeline works fine with the following code snippet:
stage('Publish the Docker Image on DockerHub') {
    steps {
        script {
            docker.withRegistry('', 'dockerhub-credentials') {
                dockerImage.push()
            }
        }
    }
}
If I store the DockerHub credentials in Vault and use shell commands to log in, the pipeline also works, with the code snippet below:
stage('Publish the Docker Image on DockerHub') {
    steps {
        withVault(
            configuration: [
                timeout: 60,
                vaultCredentialId: 'vault-jenkins-approle-creds',
                vaultUrl: 'http://172.31.32.203:8200'
            ],
            vaultSecrets: [[
                engineVersion: 2,
                path: 'secret/credentials/dockerhub',
                secretValues: [
                    [envVar: 'DOCKERHUB_USERNAME', vaultKey: 'username'],
                    [envVar: 'DOCKERHUB_PASSWORD', vaultKey: 'password']
                ]
            ]]
        ) {
            script {
                // Single quotes: let the shell expand the variables so the
                // secrets are not interpolated into the Groovy string
                sh 'docker login -u $DOCKERHUB_USERNAME -p $DOCKERHUB_PASSWORD'
                sh 'docker push <docker-hub-repo>'
            }
        }
    }
}
Now, my question is: how do I pass the username and password credentials (obtained in 2) into the docker.withRegistry() method (used in 1)?
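One hedged sketch, for illustration: docker.withRegistry() only accepts a Jenkins credentials ID, not raw username/password values, so a common workaround is to log in with the CLI inside withVault and then call the plugin's push directly:
stage('Publish the Docker Image on DockerHub') {
    steps {
        withVault(
            configuration: [
                timeout: 60,
                vaultCredentialId: 'vault-jenkins-approle-creds',
                vaultUrl: 'http://172.31.32.203:8200'
            ],
            vaultSecrets: [[
                engineVersion: 2,
                path: 'secret/credentials/dockerhub',
                secretValues: [
                    [envVar: 'DOCKERHUB_USERNAME', vaultKey: 'username'],
                    [envVar: 'DOCKERHUB_PASSWORD', vaultKey: 'password']
                ]
            ]]
        ) {
            script {
                // Authenticate the Docker CLI first; the subsequent
                // plugin push then reuses the stored login session
                sh 'echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin'
                dockerImage.push()
            }
        }
    }
}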
I want to connect to my server, clone the project from my repo onto that server, and run a docker build. Could you show me a template? With my own code it only does the build on Jenkins; it doesn't connect to the server to do the pull.
pipeline {
    agent any
    stages {
        stage('Pulling our project') {
            steps {
                withCredentials([gitUsernamePassword(credentialsId: 'GitlabCred')]) {
                    sh 'git pull origin jks'
                }
            }
        }
        stage('Building our project') {
            agent any
            steps {
                sh 'docker compose up -d --build'
            }
        }
    }
}
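A hedged template sketch: run the pull and build on the remote server over SSH instead of on the Jenkins node. This assumes the SSH Agent plugin, a hypothetical SSH key credential 'Server_SSH_Key', and placeholder host/user/path values:
pipeline {
    agent any
    stages {
        stage('Pull and build on the remote server') {
            steps {
                // 'Server_SSH_Key', user@<server-ip>, and /opt/myproject are
                // illustrative placeholders; substitute your own values
                sshagent(credentials: ['Server_SSH_Key']) {
                    sh '''
                        ssh -o StrictHostKeyChecking=no user@<server-ip> "
                            cd /opt/myproject &&
                            git pull origin jks &&
                            docker compose up -d --build
                        "
                    '''
                }
            }
        }
    }
}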
I am running moto server using the command
moto_server ecr -p 5000 -H 0.0.0.0
I created an ECR repo using the moto server with the command,
aws ecr create-repository --repository-name test --endpoint-url http://localhost:5000 --region us-east-1
Now, can anybody please help with how to push a Docker image to this ECR repo created using the moto server?
After going through the code, I found that the put-image command actually adds an image to the repository. In real AWS ECR, this command registers the metadata of an image uploaded via docker push.
# aws ecr put-image --repository-name test --region us-east-1 --image-manifest test --endpoint-url http://localhost:5000 --image-tag v1
{
"image": {
"registryId": "012345678910",
"repositoryName": "test",
"imageId": {
"imageDigest": "sha256:a6698ae96409579a4f8ac96f5e5f276467b3f62d184c9e6db537daeedb9dd939",
"imageTag": "v1"
},
"imageManifest": "test"
}
}
# aws ecr list-images --repository-name test --region us-east-1 --endpoint-url http://localhost:5000
{
"imageIds": [
{
"imageDigest": "i don't know",
"imageTag": "v1"
}
]
}
Luckily, my code doesn't need to push or pull images from the ECR repo; this method might not work if yours does.
I have the below pipeline.
pipeline {
    agent any
    environment {
        PROJECT_ID = "*****"
        IMAGE = "gcr.io/$PROJECT_ID/node-app"
        BRANCH_NAME_NORMALIZED = "${BRANCH_NAME.toLowerCase().replace("/", "_")}"
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t ${IMAGE}:${BRANCH_NAME_NORMALIZED} .'
            }
        }
        stage('Push') {
            steps {
                withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                    sh 'gcloud auth activate-service-account --key-file=${GC_KEY}'
                }
                sh 'gcloud auth configure-docker'
                sh 'docker push $IMAGE:${BRANCH_NAME_NORMALIZED}'
            }
        }
        stage('Deploy') {
            steps {
                withDockerContainer(image: "gcr.io/google.com/cloudsdktool/cloud-sdk", toolName: 'latest') {
                    withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                        sh 'gcloud auth activate-service-account --key-file=${GC_KEY}'
                        sh 'gcloud container clusters get-credentials k8s --region us-central1 --project ${DEV_PROJECT}'
                        sh 'kubectl get pods'
                    }
                }
            }
        }
    }
}
In the Deploy stage it gives the following error:
gcloud auth activate-service-account --key-file=****
WARNING: Could not setup log file in /.config/gcloud/logs, (Error: Could not create directory [/.config/gcloud/logs/2020.02.05]: Permission denied.
Please verify that you have permissions to write to the parent directory.)
ERROR: (gcloud.auth.activate-service-account) Could not create directory [/.config/gcloud]: Permission denied.
Please verify that you have permissions to write to the parent directory.
I can't understand where this command wants to create the directory: in the Docker container or on the host machine? Has anyone run into a similar problem?
A better approach would be to log in to GKE via a Kubernetes service account with a token and a kubeconfig file, instead of activating a Google service account.
This has several advantages, including Kubernetes RBAC support and controlling the blast radius should your credentials be compromised. You can read more about using RBAC Authorization here.
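For illustration, a minimal sketch of that approach, assuming a hypothetical Jenkins secret-file credential 'kubeconfig-file' containing a kubeconfig whose user entry carries the Kubernetes service-account token:
stage('Deploy') {
    steps {
        // Assumption: 'kubeconfig-file' is a Jenkins "secret file" credential
        // holding a kubeconfig authenticated via a service-account token
        withCredentials([file(credentialsId: 'kubeconfig-file', variable: 'KUBECONFIG')]) {
            sh 'kubectl --kubeconfig="$KUBECONFIG" get pods'
        }
    }
}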
You can set where gcloud stores its configs using the environment variable CLOUDSDK_CONFIG:
environment {
CLOUDSDK_CONFIG = "${env.WORKSPACE}"
}
I had the same problem and that worked for me.
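For context, a hedged sketch of where that setting lands in the pipeline above: a top-level environment block, so every stage (including the cloud-sdk container in Deploy) gets a writable config directory:
pipeline {
    agent any
    environment {
        // Point gcloud at the writable Jenkins workspace instead of /.config
        CLOUDSDK_CONFIG = "${env.WORKSPACE}"
    }
    stages {
        stage('Deploy') {
            steps {
                withDockerContainer(image: "gcr.io/google.com/cloudsdktool/cloud-sdk", toolName: 'latest') {
                    withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                        sh 'gcloud auth activate-service-account --key-file=${GC_KEY}'
                    }
                }
            }
        }
    }
}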
Below is the Jenkins Groovy DSL for setting the Terraform path and retrieving the service principal credentials to run terraform init and terraform plan.
When run against Terraform 0.12, I get the error below, even though I tested the same Azure service principal credentials from the pipeline in a Jenkins freestyle job, where az login worked fine.
+ terraform init -input=false
Initializing modules...
Initializing the backend...

Error: Error building ARM Config: Error populating Client ID from the Azure CLI: No Authorization Tokens were found - please re-authenticate using `az login`.
pipeline {
    agent any
    stages {
        stage('Set Terraform path') {
            steps {
                script {
                    def tfHome = tool name: 'Terraform'
                    env.PATH = "${tfHome}:${env.PATH}"
                }
                sh 'terraform version'
            }
        }
        stage('Provision infrastructure') {
            steps {
                dir('environments/dev') {
                    withCredentials([azureServicePrincipal('xx-xxx-subscription-azure-sp')]) {
                        sh 'az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID'
                        sh 'terraform init -input=false'
                        sh 'terraform plan -out=tfplan -input=false'
                    }
                    // sh 'terraform destroy -auto-approve'
                }
            }
        }
    }
}
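For what it's worth, a hedged sketch of an alternative that sidesteps the az CLI token cache entirely: the azurerm provider can authenticate from ARM_* environment variables, and the azureServicePrincipal binding already exposes the matching AZURE_* variables, so the login step can be dropped:
stage('Provision infrastructure') {
    steps {
        dir('environments/dev') {
            withCredentials([azureServicePrincipal('xx-xxx-subscription-azure-sp')]) {
                // Map the Jenkins binding variables onto the ARM_* variables
                // the azurerm provider reads, so Terraform needs no az login
                sh '''
                    export ARM_CLIENT_ID=$AZURE_CLIENT_ID
                    export ARM_CLIENT_SECRET=$AZURE_CLIENT_SECRET
                    export ARM_TENANT_ID=$AZURE_TENANT_ID
                    export ARM_SUBSCRIPTION_ID=$AZURE_SUBSCRIPTION_ID
                    terraform init -input=false
                    terraform plan -out=tfplan -input=false
                '''
            }
        }
    }
}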