Attached are screenshots of the Ansible configuration on Jenkins. Trying to invoke an ansible-playbook from Jenkins, I get the below error:
[test-ansible-on-remote] $ sshpass ******** /usr/bin/ansible-playbook /var/jenkins_home/workspace/test-ansible-on-remote/test.yml -i 40.68.3.120 -f 5 -u bmiadmin -k
FATAL: command execution failed
java.io.IOException: Cannot run program "sshpass" (in directory "/var/jenkins_home/workspace/test-ansible-on-remote"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at hudson.Proc$LocalProc.<init>(Proc.java:250)
at hudson.Proc$LocalProc.<init>(Proc.java:219)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:937)
at hudson.Launcher$ProcStarter.start(Launcher.java:455)
at hudson.Launcher$ProcStarter.join(Launcher.java:466)
at org.jenkinsci.plugins.ansible.CLIRunner.execute(CLIRunner.java:49)
at org.jenkinsci.plugins.ansible.AbstractAnsibleInvocation.execute(AbstractAnsibleInvocation.java:290)
at org.jenkinsci.plugins.ansible.AnsiblePlaybookInvocation.execute(AnsiblePlaybookInvocation.java:31)
at org.jenkinsci.plugins.ansible.AnsiblePlaybookBuilder.perform(AnsiblePlaybookBuilder.java:261)
at org.jenkinsci.plugins.ansible.AnsiblePlaybookBuilder.perform(AnsiblePlaybookBuilder.java:232)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:79)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonito
Am I missing anything in the configuration?
sshpass needs to be installed in the Jenkins Docker image itself; Ansible uses it to make password-based SSH connections (the -k flag) to the target hosts.
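For example, you can bake it into a custom image. A minimal Dockerfile sketch, assuming a Debian-based base image such as jenkins/jenkins:lts:

FROM jenkins/jenkins:lts
USER root
# sshpass is what Ansible shells out to for password-based SSH (the -k flag)
RUN apt-get update && apt-get install -y sshpass && rm -rf /var/lib/apt/lists/*
USER jenkins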
I upgraded my Docker Desktop to version 3.2.1 (61626) and chose to use WSL 2. Since then I cannot run local builds of AWS CodeBuild because the AWS configuration is not being found. I run the command below from a Windows Terminal tab using Ubuntu 20 installed from the store:
./codebuild_build.sh -i aws/codebuild/standard:5.0 -a ./ -s ./ -b ./buildspec.yml -c ~/.aws
That command works with the version of Docker that uses Hyper-V. After the upgrade to WSL 2, I get the error:
agent_1 | [Container] 2021/03/05 21:04:05 Phase complete: DOWNLOAD_SOURCE State: FAILED
agent_1 | [Container] 2021/03/05 21:04:05 Phase context status code: Decrypted Variables Error Message: MissingRegion: could not find region configuration
The docker command that is generated is the following:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/standard:5.0" -e "ARTIFACTS=/mnt/c/[redacted]" -e "SOURCE=/mnt/c/[redacted]" -e "BUILDSPEC=/mnt/c/[redacted]" -e "AWS_CONFIGURATION=NONE" -e "INITIATOR=[redacted]" amazon/aws-codebuild-local:latest
Edit:
Running the command from Git Bash, the generated command is:
winpty docker run -it -v //var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/standard:5.0" -e "ARTIFACTS=//C/[redacted]" -e "SOURCE=//C/[redacted]" -e "BUILDSPEC=//C/[redacted]" -e "AWS_CONFIGURATION=//C/Users/[redacted]/.aws" -e "INITIATOR=[redacted]" amazon/aws-codebuild-local:latest
But it also fails, with the error:
agent_1 | [Container] 2021/03/05 22:17:43 Phase complete: DOWNLOAD_SOURCE State: FAILED
agent_1 | [Container] 2021/03/05 22:17:43 Phase context status code: YAML_FILE_ERROR Message: stat /codebuild/output/srcDownload/src/buildspec.pr.yml: no such file or directory
With the previous Docker version, the variable AWS_CONFIGURATION had the path to my .aws folder. I have tried -c //c/Users/[myProfile]/.aws and -c /mnt/c/Users/[myProfile]/.aws, but AWS_CONFIGURATION is always NONE.
Is there a configuration I'm missing, or do I need to add an extra step with WSL 2?
Edit:
I installed Ubuntu 18 and it failed in the same way.
I was having a similar problem. I realized that since I had to run docker as root using the sudo command, my home directory was now /root instead of /home/<username>.
There may be a better way around this, but I symlinked the folder /home/<username>/.aws to /root/.aws.
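In shell terms (keeping the username placeholder from above), that symlink is just:

$ sudo ln -s /home/<username>/.aws /root/.aws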
Also, you could pass the variables AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, and AWS_ACCESS_KEY_ID in through an environment file using the -e flag of the codebuild_build.sh command.
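A sketch of such a file; the file name and placeholder values are assumptions, and adding AWS_REGION is my guess given the MissingRegion error above:

# env.vars -- pass it with: ./codebuild_build.sh ... -e env.vars
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
AWS_SESSION_TOKEN=<your-session-token>
AWS_REGION=<your-region>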
I am trying to run the gitlab pipeline jobs locally in order to test and debug.
Here is what I did:
Installed gitlab-runner on my local machine.
sudo gitlab-runner exec docker --docker-privileged --builds-dir /tmp/builds --docker-volumes /home/fox/Work/docker/core-application:/core-application Rspec
This gives:
Runtime platform arch=amd64 os=linux pid=632331 revision=8fa89735 version=13.6.0
Running with gitlab-runner 13.6.0 (8fa89735)
Preparing the "docker" executor
Using Docker executor with image docker:19.03.6 ...
Starting service docker:19.03.6-dind ...
Pulling docker image docker:19.03.6-dind ...
Using docker image sha256:a33335bfe8302f4d8a7688bc1fa539f2aba787ec724119be53adc4681702a3e7 for docker:19.03.6-dind with digest docker@sha256:a4f33d003b7ec9133c2a1ff61f4e80305b329c0fa8b753140b9ab2808f28328c ...
WARNING: Service docker:19.03.6-dind is already created. Ignoring.
Waiting for services to be up and running...
*** WARNING: Service runner--project-0-concurrent-0-aef5122f9d27e6f0-docker-0 probably didn't start properly.
Health check error:
service "runner--project-0-concurrent-0-aef5122f9d27e6f0-docker-0-wait-for-service" timeout
Health check container logs:
....
*********
Pulling docker image docker:19.03.6 ...
Using docker image sha256:6512892b576811235f68a6dcd5fbe10b387ac0ba3709aeaf80cd5cfcecb387c7 for docker:19.03.6 with digest docker@sha256:3eb67443c54436650bd4f1e97ddf9ab1797d75e15d685c791f6c6397edaa6d82 ...
Preparing environment
Running on runner--project-0-concurrent-0 via fox...
Getting source from Git repository
Fetching changes...
Initialized empty Git repository in /tmp/builds/project-0/.git/
Created fresh repository.
fatal: not a git repository: /home/fox/Work/docker/core-application/../.git/modules/core-application
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
ERROR: Failed to cleanup volumes
ERROR: Job failed: exit code 1
FATAL: exit code 1
Then I tried to do it with a gitlab-runner image on the local machine:
docker run --name=runner --privileged -t --rm -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/.gitlab-runner/:/etc/gitlab-runner -v ${PWD}:${PWD} --workdir $PWD gitlab/gitlab-runner exec docker --builds-dir /tmp/builds/ Rspec
I get:
Runtime platform arch=amd64 os=linux pid=7 revision=8fa89735 version=13.6.0
fatal: not a git repository: /home/fox/Work/docker/core-application/../.git/modules/core-application
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
fatal: not a git repository: /home/fox/Work/docker/core-application/../.git/modules/core-application
FATAL: exit status 128
Here is what the gitlab documentation says:
If you want to use the docker executor with the exec command, use that
in context of docker-machine shell or boot2docker shell. This is
required to properly map your local directory to the directory inside
the Docker container.
There are no examples. I googled it, but nothing turned up around docker-machine and gitlab-runner.
Can someone tell me how to do it correctly? Any sample?
Thanks.
You have to execute the command from the root folder of your git repository/project:
$ ls -a
./
../
.git/
.gitignore
.gitlab-ci.yml
Makefile
$ gitlab-runner exec docker <job_name>
You are using docker commands, which you shouldn't, as gitlab-runner already uses Docker under the hood.
See $ gitlab-runner exec docker --help for more info on the command.
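Applied to this question's setup, that would be something like:

$ cd /home/fox/Work/docker/core-application
$ sudo gitlab-runner exec docker Rspec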
I have been working through the Docker book and I am now learning about CI. I tried to run this script within the execute shell of my build:
# Build the image to be used for this job.
IMAGE=$(sudo docker build . | tail -1 | awk '{ print $NF }')
# Build the directory to be mounted into Docker.
MNT="$WORKSPACE/.."
# Execute the build inside Docker.
CONTAINER=$(sudo docker run -d -v $MNT:/opt/project/ $IMAGE /bin/ bash -c 'cd /opt/project/workspace; rake spec')
# Attach to the container so that we can see the output.
sudo docker attach $CONTAINER
# Get its exit code as soon as the container stops.
RC=$(sudo docker wait $CONTAINER)
# Delete the container we've just used.
sudo docker rm $CONTAINER
# Exit with the same value as that with which the process exited.
exit $RC
Running this script ends in the build failing. It shows these two errors:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
and
sudo docker run -d -v /private/var/jenkins_home/jobs/${Docker_test_job}/workspace/..:/opt/project/ /bin/ bash -c cd /opt/project/workspace; rake spec
docker: invalid reference format.
See 'docker run --help'.
+ CONTAINER=
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Finished: FAILURE
I don't understand how to fix this, as I've been following the instructions in the book. I tried using $PWD to work around the issue, but that didn't work either.
Actually, the jenkins user does not have permission to run docker commands. To fix this, add the jenkins user to the docker group:
sudo usermod -aG docker jenkins
Then restart your Jenkins server so the group change takes effect.
Be aware of the warning in the Docker docs: "The docker group grants privileges equivalent to the root user." Consider how this impacts the security of your system.
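A quick way to verify the change (the systemd service name is an assumption and may differ on your system):

$ groups jenkins                   # should now include "docker"
$ sudo systemctl restart jenkins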
First of all, I couldn't find the answer here on SO (this is the closest post).
I have EC2 running Ubuntu. First I installed Jenkins, and then Docker.
It's not "DinD" (Docker-in-Docker); Docker is installed directly on the host alongside Jenkins.
My project has a Jenkinsfile in which I'm running some docker commands.
It's supposed to use a Docker container (the gradle image), share a volume, and build the project.
The final .war will be on the host file system.
The problem is: the gradle process inside the container can't write to the host's folder.
Here's my Jenkinsfile (one of countless tries):
#!/usr/bin/groovy
node {
checkout scm
stage 'Gradle'
sh 'sudo docker run --rm -v "$PWD":/api -w /api gradle gradle clean build --stacktrace'
}
The important line of the stacktrace:
Caused by: org.gradle.api.UncheckedIOException: Failed to create parent directory '/api/.gradle' when creating directory '/api/.gradle/4.3/fileHashes'
Solved!
I opened a shell in a container started with the same -v mount (note that docker exec has no -v flag, so this has to be docker run):
$ docker run --rm -it -v "$PWD":/api -w /api gradle bash
Then I tried to check the current user:
$ whoami
# gradle
So, the solution is to run the container as root:
sh 'docker run -u root --rm -v "$PWD":/api -w /api gradle gradle clean build'
This is root inside the container only; Jenkins itself doesn't need root access.
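Putting it together, the Jenkinsfile stage from the question would become something like:

#!/usr/bin/groovy
node {
    checkout scm
    stage 'Gradle'
    // run as root inside the container so gradle can write to the mounted volume
    sh 'docker run -u root --rm -v "$PWD":/api -w /api gradle gradle clean build --stacktrace'
}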
I am using docker for testing my playbooks.
I created a container, and now when I run the below command inside the container it gives me the error shown below:
ansible-playbook jenkins.yml
Error:
[root#db1e9105692d jenkins-playbook]# ansible-playbook jenkins.yml -k -vvv
SSH password:
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
<localhost> ESTABLISH CONNECTION FOR USER: root
<localhost> REMOTE_MODULE setup
<localhost> EXEC sshpass -d4 ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o StrictHostKeyChecking=no -o GSSAPIAuthentication=no -o PubkeyAuthentication=no -o ConnectTimeout=10 localhost /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1454580537.38-114451000565344 && echo $HOME/.ansible/tmp/ansible-tmp-1454580537.38-114451000565344'
EXEC previous known host file not found for localhost
fatal: [localhost] => SSH Error: ssh: connect to host localhost port 22: Connection refused
while connecting to 127.0.0.1:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [jenkins | Include OS-Specific variables] *******************************
<localhost> ESTABLISH CONNECTION FOR USER: root
fatal: [localhost] => One or more undefined variables: 'ansible_os_family' is undefined
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/jenkins.retry
localhost : ok=0 changed=0 unreachable=2 failed=0
But if I run this command on the host machine, it runs fine. Do I need to do anything so that the connection does not get refused on port 22 inside the docker container?
Please do not consider the below line as the reason for the error. Ansible simply executed a few more lines before aborting; since fact gathering could not run, this variable ended up undefined.
fatal: [localhost] => One or more undefined variables: 'ansible_os_family' is undefined
In your container, run your playbook locally:
$ ansible-playbook jenkins.yml -c local -k -vvv
Do you have connection=local defined for localhost? It's trying to connect via SSH, which cannot work because you probably do not have sshd running in your container.
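Alternatively, you can pin the connection type in your inventory so you don't need the -c flag every time; a minimal sketch (the inventory path is an assumption):

# /etc/ansible/hosts
localhost ansible_connection=local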