I am using Ansible to check the status of several Jenkins servers. The playbook that I have created checks the disk space, uptime, and Jenkins version perfectly fine. However, when I tried to add a task that prints out a list of the installed Jenkins plugins for each server using the jenkins_script module, I kept receiving a 403 error.
Playbook:
- name: Obtaining a list of Jenkins Plugins
  jenkins_script:
    script: 'println(Jenkins.instance.pluginManager.plugins)'
    url: 'http://server.com:8080/'
    user: '*****'
    password: '*****'
Output:
fatal: [server]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "args": null,
            "force_basic_auth": true,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "script": "println(Jenkins.instance.pluginManager.plugins)",
            "url": "http://server.com:8080/",
            "url_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "url_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "user": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "validate_certs": true
        }
    },
    "msg": "HTTP error 403 HTTP Error 403: No valid crumb was included in the request"
}
I believe I have narrowed down the issue: it looks like I wasn't providing a crumb. I have since generated the crumb, but there is no 'crumb' argument for the jenkins_script module. Does anyone know how to successfully provide a crumb?
Will gladly clarify anything stated above if needed, and any assistance is greatly appreciated.
https://github.com/ansible/ansible/pull/20207
If you're on Ansible 2.3, the changes have already been committed; all you have to do is make sure CSRF protection ('Prevent Cross Site Request Forgery exploits') is enabled on the Jenkins servers (Manage Jenkins > Configure Global Security).
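If you can't upgrade Ansible yet, one workaround is to fetch the crumb yourself and call Jenkins' scriptText endpoint directly. A minimal sketch in Python (the URL, user, and password are placeholders; crumbIssuer and scriptText are standard Jenkins endpoints, though newer Jenkins versions may additionally tie crumbs to sessions):

```python
import json
import urllib.parse
import urllib.request

def crumb_header(crumb_json: str) -> dict:
    """Turn the /crumbIssuer/api/json response into a request header dict."""
    data = json.loads(crumb_json)
    return {data["crumbRequestField"]: data["crumb"]}

def run_script(base_url: str, user: str, password: str, script: str) -> str:
    """Fetch a crumb, then POST a Groovy script to the scriptText endpoint."""
    pw_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    pw_mgr.add_password(None, base_url, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(pw_mgr))
    with opener.open(base_url + "/crumbIssuer/api/json") as resp:
        headers = crumb_header(resp.read().decode())
    req = urllib.request.Request(
        base_url + "/scriptText",
        data=urllib.parse.urlencode({"script": script}).encode(),
        headers=headers,
    )
    with opener.open(req) as resp:
        return resp.read().decode()

# Example call (placeholder credentials):
# print(run_script("http://server.com:8080", "admin", "secret",
#                  "println(Jenkins.instance.pluginManager.plugins)"))
```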
I recently created a new repository in AWS ECR, and I'm attempting to push an image. I'm copy/pasting the directions provided via the "View push commands" button on the repository page. I'll copy those here for reference:
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com
("Login succeeded")
docker build -t myorg/myapp .
docker tag myorg/myapp:latest 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
However, when I get to the docker push step, I see:
> docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
The push refers to repository [123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp]
a53c8ed5f326: Retrying in 1 second
78e16537476e: Retrying in 1 second
b7e38d172e62: Retrying in 1 second
f1ff72b2b1ca: Retrying in 1 second
33b67aceeff0: Retrying in 1 second
c3a550784113: Waiting
83fc4b4db427: Waiting
e8ade0d39f19: Waiting
487d5f9ec63f: Waiting
b24e42eb9639: Waiting
9262398ff7bf: Waiting
804aae047b71: Waiting
5d33f5d87bf5: Waiting
4e38024e7e09: Waiting
EOF
I'm wondering if this has something to do with the permissions/policies associated with this repository. Right now there are no statements attached to this repository. Is that the missing part? If so, what would that statement look like? I've tried this, but it had no effect:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPutImage",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789:root"
            },
            "Action": "ecr:PutImage"
        }
    ]
}
Bonus Points:
I eventually want to use this in a CDK CodeBuildAction. I was getting the same error as above, so I checked whether I was getting the same result in my local terminal, which I was. So if the policy statement needs to be different for use in the CDK CodeBuildAction, those details would be appreciated as well.
Thank you in advance for any advice.
I was having the same problem when trying to upload the image manually using the AWS and Docker CLIs. I was able to fix it by going to ECR -> Repositories -> Permissions and adding a new policy statement with Principal: * and the following actions:
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
Be sure to add more restrictive principals. I was just trying to see if permissions were the problem in this case and sure enough they were.
The accepted answer works correctly in resolving the issue. However, as mentioned in that answer, allowing Principal: * is risky and can get your ECR compromised.
Be sure to add specific principal(s), i.e. IAM users/roles, so that only those users/roles are allowed to execute the mentioned actions. The following JSON policy can be added under Amazon ECR >> Repositories >> (select the required repository) >> Permissions >> Edit policy JSON to get this resolved quickly:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AccountNumber>:role/<RoleName>"
            },
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:BatchGetImage",
                "ecr:CompleteLayerUpload",
                "ecr:GetDownloadUrlForLayer",
                "ecr:InitiateLayerUpload",
                "ecr:PutImage",
                "ecr:UploadLayerPart"
            ]
        }
    ]
}
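If you prefer to script this rather than edit the policy in the console, the same document can be generated and applied via the AWS CLI's set-repository-policy. A sketch in Python (the role ARN and repository name are placeholders):

```python
import json

# The same push/pull actions listed in the policy above.
ECR_PUSH_ACTIONS = [
    "ecr:BatchCheckLayerAvailability",
    "ecr:BatchGetImage",
    "ecr:CompleteLayerUpload",
    "ecr:GetDownloadUrlForLayer",
    "ecr:InitiateLayerUpload",
    "ecr:PutImage",
    "ecr:UploadLayerPart",
]

def push_policy(role_arn: str) -> str:
    """Build a repository policy JSON allowing role_arn to push and pull."""
    return json.dumps(
        {
            "Version": "2008-10-17",
            "Statement": [
                {
                    "Sid": "AllowPushPull",
                    "Effect": "Allow",
                    "Principal": {"AWS": role_arn},
                    "Action": ECR_PUSH_ACTIONS,
                }
            ],
        },
        indent=2,
    )

# Write the policy to a file, then apply it with:
#   aws ecr set-repository-policy --repository-name myorg/myapp \
#       --policy-text file://policy.json
```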
I had this issue when the repository didn't exist in ECR - I assumed that pushing would create it, but it didn't.
Creating it before pushing solved the problem.
It turns out it was a missing/misconfigured policy. I was able to get it working within CodeBuild by adding a role with the AmazonEC2ContainerRegistryPowerUser managed policy:
new CodeBuildAction({
  actionName: "ApplicationBuildAction",
  input: this.applicationSourceOutput,
  outputs: [this.applicationBuildOutput],
  project: new PipelineProject(this, "ApplicationBuildProject", {
    vpc: this.codeBuildVpc,
    securityGroups: [this.codeBuildSecurityGroup],
    environment: {
      buildImage: LinuxBuildImage.STANDARD_5_0,
      privileged: true,
    },
    environmentVariables: {
      ECR_REPO_URI: {
        value: ECR_REPO_URI,
      },
      ECR_REPO_NAME: {
        value: ECR_REPO_NAME,
      },
      AWS_REGION: {
        value: this.region,
      }
    },
    buildSpec: BuildSpec.fromObject({
      version: "0.2",
      phases: {
        pre_build: {
          commands: [
            "echo 'Logging into Amazon ECR...'",
            "aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI",
            "COMMIT_HASH=$(echo \"$CODEBUILD_RESOLVED_SOURCE_VERSION\" | head -c 8)"
          ]
        },
        build: {
          commands: [
            "docker build -t $ECR_REPO_NAME:latest ."
          ]
        },
        post_build: {
          commands: [
            "docker tag $ECR_REPO_NAME:latest $ECR_REPO_URI/$ECR_REPO_NAME:latest",
            "docker tag $ECR_REPO_NAME:latest $ECR_REPO_URI/$ECR_REPO_NAME:$COMMIT_HASH",
            "docker push $ECR_REPO_URI/$ECR_REPO_NAME:latest",
            "docker push $ECR_REPO_URI/$ECR_REPO_NAME:$COMMIT_HASH",
          ]
        }
      }
    }),
    // * * ADDED THIS ROLE HERE * *
    role: new Role(this, "application-build-project-role", {
      assumedBy: new ServicePrincipal("codebuild.amazonaws.com"),
      managedPolicies: [ManagedPolicy.fromAwsManagedPolicyName("AmazonEC2ContainerRegistryPowerUser")]
    })
  }),
});
In my case, the repo was not created on ECR. Creating it fixed it.
The same message ("Retrying in ... second(s)" in a loop) may be seen when running "docker push" without first creating the corresponding repo in ECR ("myorg/myapp" in your example). Run:
aws ecr create-repository --repository-name myorg/myapp --region us-west-2
The problem may be that your IAM user does not have permission for full access to ECR, so attach a policy granting that (e.g. the AmazonEC2ContainerRegistryFullAccess managed policy) to your IAM user.
For anyone running into this issue: my problem was having the wrong AWS profile/account configured in my AWS CLI.
Run aws configure and add the keys of an account that has access to the ECR repository.
If you have multiple AWS accounts using the cli, then check out this solution.
Just had this problem. It was permission related. In my case I was using CDKv2, which assumes a specific role in order to upload assets. Because the user I was deploying as did not have permission to assume that role, it failed. The hint was these warning messages that appeared during the deploy:
current credentials could not be used to assume 'arn:aws:iam::12345:role/cdk-abcde1234-image-publishing-role-12345-ap-southeast-2', but are for the right account. Proceeding anyway.
current credentials could not be used to assume 'arn:aws:iam::12345:role/cdk-abcde1234-file-publishing-role-12345-ap-southeast-2', but are for the right account. Proceeding anyway.
Yes, updating the permissions on your ECR repo would fix it, but since CDK is supposed to maintain this for you, the proper solution is to allow your user to assume the CDK role so you don't need to mess with ECR permissions yourself.
In my case I did this by granting the sts:AssumeRole permission for the resource arn:aws:iam::*:role/cdk-*. This allowed my user to assume both the file upload role and the image upload role.
After granting this permission, the CDK errors about being unable to assume the role went away, and I was able to deploy successfully.
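For reference, that grant can be expressed as an IAM policy along these lines (a sketch; attach it to the deploying user):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::*:role/cdk-*"
        }
    ]
}
```

This covers both the cdk-*-image-publishing-role and cdk-*-file-publishing-role mentioned in the warnings above.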
For me, the problem was that the repository name on ECR had to be the same as the name of the app/repository I was pushing. Tried all fixes here, didn't work. This did!
Browse ECR -> Repositories -> Permissions
Edit JSON Policy.
Add these actions.
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
And add "*" in Resources.
Save it.
You're good to go; now you can push the image to ECR.
If you have an MFA enforcement policy on your account, that might be the problem, because you need a session token to perform actions. Take a look at this AWS document on how to get a token via the CLI.
I was uploading from an EC2 instance and had forgotten to specify the region for the AWS CLI. The login was successful, but the docker push command kept retrying, even though I had set the correct permissions on the ECR repo side. This line fixed the issue for me:
aws configure set default.region us-west-1
In my case I had used the wrong AWS credentials, and running aws configure with the correct credentials resolved the issue.
The steps I followed are:
Ansible logs in as the root user
Update server packages
Create a user called deploy
Clone a Git repository from bitbucket.org
I want to clone the repository as the deploy user into his home directory, using the SSH agent forwarding method.
But the issue is that I am not able to get permissions even through SSH forwarding, and the error says the user doesn't have rights to access the repository.
My inventory file:
[production]
rails ansible_host=(my host ip) ansible_user=ubuntu
My ansible.cfg file looks like this:
[ssh_connection]
pipelining=True
ssh_args = -o ForwardAgent=true
My playbook looks like this:
---
- hosts: production
  remote_user: root
  become: yes
  tasks:
    - name: Update all packages to latest version
      apt:
        upgrade: dist

    # (add deploy user tasks here)

    - name: APP | Clone repo
      git:
        repo: git@github.com:e911/Nepali-POS-Tagger.git
        dest: /home/deploy/myproject
        accept_hostkey: true
        force: true
      become: yes
      become_user: deploy
      tags: app
My deploy user is created, but for some reason I cannot clone the repository as the deploy user; it does not have access rights. I have researched and think this is because the SSH keys are not being forwarded: when I log in as ubuntu and switch to the deploy user, the forwarded agent keys are not available to deploy. But I have not found a solution for this.
How do you solve this? Or what am I doing wrong here?
Here is the error snippet:
fatal: [rails]: FAILED! => {
    "changed": false,
    "cmd": "/usr/bin/git clone --origin origin '' /home/deploy/myproject",
    "invocation": {
        "module_args": {
            "accept_hostkey": true,
            "archive": null,
            "bare": false,
            "clone": true,
            "depth": null,
            "dest": "/home/deploy/myproject",
            "executable": null,
            "force": true,
            "gpg_whitelist": [],
            "key_file": null,
            "recursive": true,
            "reference": null,
            "refspec": null,
            "remote": "origin",
            "repo": "git@github.com:e911/Nepali-POS-Tagger.git",
            "separate_git_dir": null,
            "ssh_opts": null,
            "track_submodules": false,
            "umask": null,
            "update": true,
            "verify_commit": false,
            "version": "HEAD"
        }
    },
    "msg": "",
    "rc": 128,
    "stderr": "Cloning into '/home/deploy/myproject'...\ngit@github.com: Permission denied (publickey).\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n",
    "stderr_lines": [
        "Cloning into '/home/deploy/myproject'...",
        "git@github.com: Permission denied (publickey).",
        "fatal: Could not read from remote repository.",
        "",
        "Please make sure you have the correct access rights",
        "and the repository exists."
    ],
    "stdout": "",
    "stdout_lines": []
}
I have tried the solutions here: Ansible and Git Permission denied (publickey) at Git Clone, but they were of no help.
There is an alternative solution, using HTTPS instead of SSH:
For GitHub:
Generate a Token from link: https://github.com/settings/tokens
Give permission with scope: repo (full control of private repositories)
Use that token: git+https://<TOKEN>:x-oauth-basic@github.com/<ORGANIZATION>/<REPO>.git#<BRANCH>
For BitBucket:
Generate a random Password for your repo from link: https://bitbucket.org/account/settings/app-passwords
Give permission with scope Repositories: Read
Use that password to clone your repo as: git clone https://<USERNAME>:<GENERATED_PASSWORD>@bitbucket.org/<ORGANIZATION>/<REPO>.git
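If you go this route with Ansible's git module, the task from the playbook might look like this (a sketch; the token is a placeholder):

```yaml
- name: APP | Clone repo over HTTPS
  git:
    repo: 'https://<TOKEN>:x-oauth-basic@github.com/e911/Nepali-POS-Tagger.git'
    dest: /home/deploy/myproject
    force: true
  become: yes
  become_user: deploy
```

This avoids the agent-forwarding problem entirely, since no SSH key needs to reach the deploy user.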
Hope this could be an alternative for the solution.
In my company, I'm running a pipeline-as-code project in which my Jenkinsfile gets a dynamic IP from a shell script and injects it into a PrivateIP environment variable. The next step invokes a custom (in-house developed) plugin that accepts a "servers" argument as IP(s), but apparently does not parse it correctly, because the error output indicates an unresolvable host.
I've echoed the PrivateIP variable immediately above the plugin step, and it definitely outputs the correct value.
The plugin works if given a hard-coded value for the IP, but fails if given anything dynamic. Built-ins such as dir don't have similar problems. I haven't been able to get hold of the plugin developer to report the issue, nor have I gotten any responses to my issue. Is this typical for custom plugins? I've seen some documentation in the plugin developer docs that suggests only the initial environment stage is respected in pipeline plugins, and otherwise a @StepContextParameter is needed to get a contextual environment.
stage('Provision') {
    environment {
        PrivateIP = """${sh(
            returnStdout: true,
            script: '${WORKSPACE}/cicd/parse-ip.sh'
        )}"""
    }
    steps {
        echo "Calling Playbook. PrivateIP: ${PrivateIP}"
        customPluginName env: 'AWS',
            os: 'Linux',
            parameter: '',
            password: '',
            playbook: 'provision.yaml',
            servers: PrivateIP,
            gitBranch: '{my branch}',
            gitUrl: '{URL}',
            username: '{custom user}'
    }
}
I'd expect the variable to be respected and execute an Ansible Playbook successfully.
Error
>>> fatal: [ansible_ssh_user={custom user}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname ansible_ssh_user={custom user}: Name or service not known\r\n", "unreachable": true}
If this is in fact default behavior of custom plugins (not necessarily a bug), what are good workarounds?
I'm trying to build a new Debian image with Packer, but the build process halts at ==> openstack: Waiting for server to become ready..., while Packer's build instance is stuck in the Spawning state.
(Edit: my last test build was stuck for ~45 minutes, and exited with this error message: Build 'openstack' errored: Error waiting for server ({uuid}) to become ready: unexpected state 'ERROR', wanted target '[ACTIVE]')
The source image is a cloud image of Debian, and my template file looks like this:
{
    "variables": {
        "os_auth_url": " ( Keystone URL ) ",
        "os_domain_name": " ( Domain Name ) ",
        "os_tenant_name": " ( Project Name ) ",
        "os_region_name": " ( Region Name ) "
    },
    "builders": [
        {
            "type": "openstack",
            "flavor": "b.tiny",
            "image_name": "packer-openstack-{{timestamp}}",
            "source_image": "cd8da3bf-66cd-4847-8970-447533b86b30",
            "ssh_username": "debian",
            "username": "{{user `username`}}",
            "password": "{{user `password`}}",
            "identity_endpoint": "{{user `os_auth_url`}}",
            "domain_name": "{{user `os_domain_name`}}",
            "tenant_name": "{{user `os_tenant_name`}}",
            "region": "{{user `os_region_name`}}",
            "floating_ip_pool": "internet",
            "security_groups": [
                "deb_test_uni"
            ],
            "networks": [
                "a4151f4e-fd88-4df8-97e1-2b113f149ef8",
                "71b10496-2617-47ae-abbc-36239f0863bb"
            ]
        }
    ]
}
The username and password fields are being added by a separate file, located on the (Jenkins) build server.
The build process managed to get past this point once, but exited with an SSH timeout error. I have no idea why that happened, or why only that once.
Is there anything blindingly obvious that I'm missing? Or has anyone else suffered the same problem, but managed to find a solution?
Thanks in advance!
It turns out that, in my case, there was nothing I (personally) could do. It was neither the Packer template nor the environment variables (as I suspected it might be), but a fault in the server-side configuration.
I'm sorry that I don't know what the bug or fix was, as I wasn't the one who found or fixed the problem, but knowing that it can be a good idea to double-check the server setup might help someone in the future.
I followed this tutorial to set up a Jenkins job to run whenever a push is made to the GitLab repository. I tested the webhook and I can see that the job is triggered. However, I don't see anything in the payload.
Just wondering, if anyone has ever tried to read the payload received from gitlab webhook?
The Jenkins GitLab plugin sends POST parameters to Jenkins whenever an event occurs in the GitLab repo.
You can add an env step in the Jenkins console to see which GitLab parameters are exported to the Jenkins environment. Then you can print or use the required variables, e.g.:
echo $gitlabSourceRepoURL
echo $gitlabAfter
echo $gitlabTargetBranch
echo $gitlabSourceRepoHttpUrl
echo $gitlabMergeRequestLastCommit
echo $gitlabSourceRepoSshUrl
echo $gitlabSourceRepoHomepage
echo $gitlabBranch
echo $gitlabSourceBranch
echo $gitlabUserEmail
echo $gitlabBefore
echo $gitlabSourceRepoName
echo $gitlabSourceNamespace
echo $gitlabUserName
The tutorial you have mentioned talks about GitHub webhooks. GitLab and GitHub are two separate products. So, the documentation or links for GitHub webhooks will not apply to GitLab webhooks.
GitLab invokes the webhook URL with a JSON payload in the request body that carries a lot of information about the GitLab event that led to the webhook invocation. For example, the GitLab webhook push event payload carries the following information in it:
{
    "object_kind": "push",
    "before": "95790bf891e76fee5e1747ab589903a6a1f80f22",
    "after": "da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
    "ref": "refs/heads/master",
    "checkout_sha": "da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
    "user_id": 4,
    "user_name": "John Smith",
    "user_username": "jsmith",
    "user_email": "john@example.com",
    "user_avatar": "https://s.gravatar.com/avatar/d4c74594d841139328695756648b6bd6?s=80",
    "project_id": 15,
    "project": {
        "id": 15,
        "name": "Diaspora",
        "description": "",
        "web_url": "http://example.com/mike/diaspora",
        "avatar_url": null,
        "git_ssh_url": "git@example.com:mike/diaspora.git",
        "git_http_url": "http://example.com/mike/diaspora.git",
        "namespace": "Mike",
        "visibility_level": 0,
        "path_with_namespace": "mike/diaspora",
        "default_branch": "master",
        "homepage": "http://example.com/mike/diaspora",
        "url": "git@example.com:mike/diaspora.git",
        "ssh_url": "git@example.com:mike/diaspora.git",
        "http_url": "http://example.com/mike/diaspora.git"
    },
    "repository": {
        "name": "Diaspora",
        "url": "git@example.com:mike/diaspora.git",
        "description": "",
        "homepage": "http://example.com/mike/diaspora",
        "git_http_url": "http://example.com/mike/diaspora.git",
        "git_ssh_url": "git@example.com:mike/diaspora.git",
        "visibility_level": 0
    },
    "commits": [
        {
            "id": "b6568db1bc1dcd7f8b4d5a946b0b91f9dacd7327",
            "message": "Update Catalan translation to e38cb41.",
            "timestamp": "2011-12-12T14:27:31+02:00",
            "url": "http://example.com/mike/diaspora/commit/b6568db1bc1dcd7f8b4d5a946b0b91f9dacd7327",
            "author": {
                "name": "Jordi Mallach",
                "email": "jordi@softcatala.org"
            },
            "added": ["CHANGELOG"],
            "modified": ["app/controller/application.rb"],
            "removed": []
        },
        {
            "id": "da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
            "message": "fixed readme",
            "timestamp": "2012-01-03T23:36:29+02:00",
            "url": "http://example.com/mike/diaspora/commit/da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
            "author": {
                "name": "GitLab dev user",
                "email": "gitlabdev@dv6700.(none)"
            },
            "added": ["CHANGELOG"],
            "modified": ["app/controller/application.rb"],
            "removed": []
        }
    ],
    "total_commits_count": 4
}
The Jenkins GitLab plugin makes this webhook payload information available in the Jenkins Global Variable env. The available env variables are as follows:
gitlabBranch
gitlabSourceBranch
gitlabActionType
gitlabUserName
gitlabUserEmail
gitlabSourceRepoHomepage
gitlabSourceRepoName
gitlabSourceNamespace
gitlabSourceRepoURL
gitlabSourceRepoSshUrl
gitlabSourceRepoHttpUrl
gitlabMergeRequestTitle
gitlabMergeRequestDescription
gitlabMergeRequestId
gitlabMergeRequestIid
gitlabMergeRequestState
gitlabMergedByUser
gitlabMergeRequestAssignee
gitlabMergeRequestLastCommit
gitlabMergeRequestTargetProjectId
gitlabTargetBranch
gitlabTargetRepoName
gitlabTargetNamespace
gitlabTargetRepoSshUrl
gitlabTargetRepoHttpUrl
gitlabBefore
gitlabAfter
gitlabTriggerPhrase
Just as you would read Jenkins job parameters from Jenkins Global Variable params in your job pipeline script, you could read webhook payload fields from Jenkins Global Variable env:
echo "My Jenkins job parameter is ${params.MY_PARAM_NAME}"
echo "One of Jenkins job webhook payload field is ${env.gitlabTargetBranch}"
Hope the above information helps solve your problem.
Yes, I did it. And it works for some scenarios.
If you use /gitlab/buildnow, you can have access to payload objects. All of them.
But you have to declare them under "This build is parameterized".
Then you can access them by name, like ${AUTHOR_NAME}.
Doc: https://github.com/elvanja/jenkins-gitlab-hook-plugin#parameterized-projects
But please note that if you use /gitlab/notifycommit, it will not work, since there is a gap (the poll) between triggering Jenkins and actually starting the job. All payload data in this situation is empty.
Also be careful using /gitlab/buildnow, because you cannot control whether or not to build, for example when Maven commits back some files and a build is not supposed to be triggered.
What I did was write a little tool in Python to receive all GitLab notifications; this tool talks back to GitLab and Jenkins to fire (or not fire) jobs and collect statuses.
My start point:
How do I receive Github Webhooks in Python (the last answer, not the chosen one).
I started developing it 2 days ago. It's done, but I am still validating it.
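A receiver along those lines can be sketched with just the Python standard library (the port and the trigger logic are placeholders, not the author's actual tool):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_push(payload: dict) -> dict:
    """Pull out the fields a build-trigger decision usually needs."""
    return {
        "branch": payload.get("ref", "").rsplit("/", 1)[-1],
        "repo": payload.get("project", {}).get("path_with_namespace"),
        "commits": payload.get("total_commits_count", 0),
    }

class GitLabHook(BaseHTTPRequestHandler):
    def do_POST(self):
        # GitLab sends the event as a JSON body on a POST request.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        info = parse_push(payload)
        # Decide here whether to call the Jenkins build URL for info["branch"].
        self.send_response(200)
        self.end_headers()

# To run the receiver:
# HTTPServer(("", 8000), GitLabHook).serve_forever()
```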