How to Install Jenkins using Apache Brooklyn and Chef

I want to install Jenkins on a VM using Chef (and Apache Brooklyn). The blueprint being used is:
name: chef-jenkins
location:
  jclouds:aws-ec2:
    region: xyz
services:
- type: chef:jenkins
  cookbook_urls:
    jenkins: .../jenkins.tgz
    runit: .../runit.tgz
    apt: .../apt.tgz
    yum: .../yum20150407-59421-1bw7bou.tar.gz
  launch_run_list: [ "jenkins::start" ]
  service_name: jenkinsd
The service_name parameter is incorrect.
Running this throws an error "Failure running task ssh: run chef for launch (jSUGhBph): SSH task ended with exit code 1 when 0 was required, in Task[ssh: run chef for launch:jSUGhBph]: run chef for launch".
What else am I missing? Is it possible to run a simple chef recipe (e.g. https://gist.github.com/nstielau/978920) directly?

The error message you saw indicates that one of the shell commands that Brooklyn tried to run on the cloud server failed; in particular, the "run chef for launch" command. To find out why this failed, use the Activity tab:
1. The tree view on the left contains the whole application. Expand this.
2. This will reveal the entities that make up the application. This blueprint has only one, called jenkins (chef) - click on this.
3. Click on the Activity tab. This will show you the list of tasks. One of them will have the status Failed - click on this.
4. Tasks can have subtasks, so you may see another task list, with one in status Failed - keep following this trail until you come to the last failed task.
5. If this is an SSH task, you will have links to download stdout and stderr - you can inspect these to find out exactly why the shell command failed.
You can also find a section on troubleshooting in the Apache Brooklyn user guide which may help diagnose other problems.
I've taken your blueprint and made a few modifications; this is now working on my Ubuntu 14.04 image:
name: chef-jenkins
location: vagrant-trusty
services:
- type: chef:jenkins
  cookbook_urls:
jenkins: data:application/octet-stream;charset=UTF-8;base64,H4sIAC/fwVUAA+1b3W/bOBLvM/8Kwn5Qmybytw0YONxl06Dw3jYp4uzeQ68wKImWGEuklqSSeB/ub98hJTmOs63hbOy9bfmDLdHkaPgxnBE5HN9QvmBctV7tEe12ezQYYHsflvd2t1/eK+BOdzBs9wajfqeL251er917hQf7bFSNQmkioSmShQmR0RfpgGw+/wqfqh+r+98EN5X8f6ByoeYspXuoA8Zj2O9/Rf6DUS3/fsfKf9DptV/h9h7a8gTfufyVKGRIcSPROlfjVksVOZUZkQuq/TChc5+JBkIZ1SQimqC/urkOL4xa/42sWcyF3IMB2KL/nVFnuKn/vd7Q6f8h0MQfC42N4VetiEkaaiEZVVgnRGOViCKNcEBxOTUizDiUMGUfwHcJ5bjIU0EixmPUxELCI0TCD6wFEFIciiwrONNLrJimPtD8xPiKPYy8xndMJ9hrYg8TWT5AuVY+AtrLKY4pp5JoqNq2ETdR8w+B/HfT2RQaT9EkFPyfiIukyH1RaEQTnRRZoPwoQNerFLCfnk6nawyRr4hSJyEBXTDF5+8m15dX08dVov82j5AP36P/oSNf3X0iJ799hlRAFujq/JfJdHJ5ga5P30+PkM40iRU6ms3TZUYW1D96SMMTOsuluEG+ucKwQ+1Uaxg6hbJFNvdTEUMjmvjs8sPHyU/n73Bzs+uI2O4d+QK++TI07TFJGEJzTaEzcI/SFK70HqpsyUiELdO1a6pMVRtd8++IDhOJfKlyGiJzaR2Vtzm714WEOQLdgmfhNqekznlfgOIY8aCPUoQ2YUb37MMDf+THDJrasjdzKSeUTWYiKkC0Ng2iAzNkk0RryYJCmxJ1y83DwW8SqoNEEpd3yIcEVGaXLwlN55sTBK0WNg8pGNtwgUIhFgF8bZ+y3HA5q7OecDm7vLi+mvzw8/Xk4r3tnJYEJrLcIERnIiU8otJWWBPZHyCWsgh6U+UbRr+QWBKuNyVxW2ajqrge02tJbtnj1iFf20x/maXPeD3X9r9+w/syeHEbs2391++MNtb/8B7oOvt/CHCSUbwOr5oRHspgkupylldF12DRTwudCPmoeEYhnWJvKYp/0XuS5aBiYIQ8lLKQcrXG3yNpOpMsTjQwiKgKJcs1E7wqnXAY5TRVrTNrBox5wavmpILHs/Vnvk5+S6Vacbbc237Hb5tqc8ojBW3J9dqvG3JLvO9vfVvr/9X56bsP5372ZQV4Prbu/4ab+7/+YDBy+n8INGuNQej68t3lGJ9zDQpfrt3KtyFeV7qESljGfX968q2i1n9Y+bOc7scPuIP/r9b/UXfk/H+HQC1/u8rfUx3PkH/Xyf8wqOVvd3V7quM58h/0nfwPgUfyhwU9hf2medG/5FzYXf7Dwcjp/0HwRflHdE6K9EVsws7y77a7XXf+dxBslb+i0myl/8zyYHf5d/ttZ/8Pgl3kX2XNzI9d/ITtLfv/Xr+zIf9Bbzhw+/9DQNJfCyYp9oxQZwlNcyo9VPnmArpyB47HlfQ9HAmEMG7i6Wpm4Mrrp3BIuDkumouCR5hoS2eOls3J8orcFzKGDWd59DzTS9h3+onOUqCGD9PYiwTwUiKjOmE8tlUaF55asBx7VzRPSUjLgyh7epRRwoFuXqTYzGPlATXlEUJw+asH+P8cu+j/2hTZ6Zhgm/63B91N/e92nP/vIHjQ/5WcQf0V1XgckHABCnSMx/Sehs7l903ikf/nuQq+BVv1f7i5/xu1+x2n/4fASv9NAFCp/U+yWkF9uv4dHpB943ik/yZSZw9OwN39P4Nue+j2f4fAU/m//EnQ7vIfDvttJ/9D4Mvyf+5u/ym2vf9H7c39f2fYde//g6C5FvaGL0hGx+NVRABs8UH2kFFNBVQS50sbwYNfh29wF2w1XgsLOsanaYqvbIQPvqJ2UxH5CD3bzRAK2JPea+z9xwSb
EmD+EBdoA0Yr+mMsoJjjgpsa2JzRCOcp0XMhs5oXxinVr8dmWTOTBX9TuxUwhl8mzOkf+AzKyk6X3o0rW+BzeveI0odmQWlMX9c9iGal3ryxdNb5YBLGmVHTKqyKMKRKzYs0XXoPtdcNwk0YSqbqsFsuNJaEKWr6RaUUcp23uZnvn5T/5vl/NZwvGga4Rf87/d5D/E8XdAfs/8j8/8fp//7xdf2/srPikQVApUPP/Ffkji2YXxGfhMy69SKmQO2WrR/PL/49uZi2qhg9xuO3P5aUbwV/+3NQcF2gar6bKW/+dnKimPEiHpsg8JyYcPMqilzARVrdfAhKmpRORuABKo4r3vh13baY6aQITBhia85BtCK1f3E4qZr7xrfGzFZvnI3c05jxMC0iWtaTskASGwlv2JPIRLhXtXgK34hA4VtGgEUd5U54BA9xaD3YJqP31ocZU71qW5FjQ2QsCDArGzDxIIPY7gUFA6U3RBs91QnovwKjl5ds7/gxVsJGYnlGSATHQkRlNL1pZS4Y12Bzq/5Uhgk3SK4bTzJN2GMDISiDrFwopoVcQnbZ5kZppArJyn8IwcDmi/iJzGnACG8A4YIutxO2Huf68FDDWvosF9zE/uNPjYBxIpetxmcoIKGNPBuDEKzFQzkJFySmD600Dit5y0K62XCYVrmQhuNYaZEf47EdJbhLalNr/D+tyignQUo/O++xg4ODg4ODg4ODg4ODg4ODg4ODg4PD3xa/A3Nc9FMAUAAA
    apt: https://github.com/opscode-cookbooks/apt/archive/v2.7.0.tar.gz
    java: https://github.com/agileorbit-cookbooks/java/archive/v1.29.0.tar.gz
  launch_run_list: [ "jenkins::default" ]
  service_name: jenkins
  launch_attributes:
    java:
      jdk_version: 7
This blueprint is ready-to-run (except for changing the location to match your requirements). Here are some of the changes I have made:
- Full URLs for the three cookbooks
- The jenkins data URL contains a bare-bones cookbook containing the recipe you referred to in a gist, tarred-and-gzipped
- For the jenkins cookbook, I had to specify depends "apt" and depends "java" in metadata.rb, otherwise I got strange Chef errors
- Fixed launch_run_list to refer to the correct recipe name
- Fixed service_name to refer to the correct service name
- Added launch_attributes to configure the Java cookbook to install Java 7 (the default of Java 6 is not supported by Jenkins)
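For reference, the metadata.rb of that bare-bones cookbook would look roughly like this (the cookbook name and version here are illustrative; the depends lines are the ones mentioned above):

```ruby
# metadata.rb of the bare-bones jenkins cookbook (name/version illustrative)
name    'jenkins'
version '0.1.0'

# Without these two lines, Chef raised confusing errors at converge time
depends 'apt'
depends 'java'
```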

Related

Ansible AWX: Unable to choose a playbook when tried creating AWX Job template

I tried creating a job template in the AWX web interface.
The list of playbooks is not displayed on the interface, although the project has been downloaded from git and is visible in the directory ~/var/lib/awx/projects.
My environment:
CentOS 8
AWX 17.0.1
Ansible 2.9.17
docker-compose 1.28.2
The YAML file must not be empty and must have the right syntax.
After that, always sync the project and then check again.
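One quick way to run that syntax check locally before syncing the project (this assumes Ansible is installed where you run it; site.yml is a placeholder for your playbook file name):

```shell
# Fails with a parse error if the playbook is empty or malformed
ansible-playbook --syntax-check site.yml
```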
I had this same challenge when creating a new Job Template on an Ansible AWX setup on Ubuntu 20.04.
I created some playbooks in a repository that was added to a project. I then created a Job Template and selected the Project, but none of the playbooks were available to select.
My playbook looked this way:
- name: Check if SSHD is running
  command: systemctl status ssh
  ignore_errors: yes
Here's how I fixed it:
I added hosts to the playbook (and moved the command into a tasks list, where Ansible expects it) and it showed up on the Job template for me to select. So my playbook was modified to look this way:
- name: Check if SSHD is running
  hosts: all
  tasks:
    - name: Check sshd status
      command: systemctl status ssh
      ignore_errors: yes
Note: The value for hosts could be something else. For me, it was all.
Resources: Playbooks aren’t showing up in the “Job Template” drop-down
That's all.
I hope that helps
I had the same problem. Once I made sure my git repo was up-to-date, I manually typed in the playbook name in the awx template creation screen, and my playbook was selected.

Deploying Docker images using Ansible

After reviewing this amazing forum, I thought it's time to join in...
I'm having an issue with a playbook that deploys multiple Dockers.
My Ansible version is: 2.5.1
My Python version is 3.6.9
My Linux images are Ubuntu 18.04 from the site OSboxes.
Docker service is installed and running on both of the machines.
According to this website, all you need to do is follow the instructions and everything will work perfectly. :)
https://www.techrepublic.com/article/how-to-deploy-a-container-with-ansible/
(The playbook I use is in the link above)
But after following the steps and using the playbook, I've got this error:
TASK [Pull default Docker image] ******************************************************************************************************
fatal: [192.168.1.38]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_image) module: source Supported parameters include: api_version, archive_path, buildargs, cacert_path, cert_path, container_limits, debug, docker_host, dockerfile, filter_logger, force, http_timeout, key_path, load_path, name, nocache, path, pull, push, repository, rm, ssl_version, state, tag, timeout, tls, tls_hostname, tls_verify, use_tls"}
I'd be happy for your support on this issue.
The source: pull option was added in Ansible 2.8. Since you are using Ansible 2.5.1, that option is not available.
You can either use a later version, 2.8 or above, or just remove that line from your playbook and it should work:
- name: Pull default Docker image
  docker_image:
    name: "{{ default_container_image }}"
You won't have the guarantee that the image has been newly pulled from a registry. If that's important in your case, you can remove any locally cached version of the image first:
- name: Remove Docker image
  docker_image:
    name: "{{ default_container_image }}"
    state: absent

- name: Pull default Docker image
  docker_image:
    name: "{{ default_container_image }}"
So according to the docs for the docker_image module in Ansible 2.5, there is indeed no source parameter.
Nevertheless, the docs for version 2.9 tell us it was "added in 2.8"! So you have to update your Ansible version to be able to run the linked playbook as-is. That's your best option.
Otherwise, another option would be to keep your version 2.5 and simply remove line 38:
(-) source: pull
But I don't know what the default behaviour was before 2.8, so I cannot guarantee that it will do what you expect!
Finally, got this playbook to sing! :)
I did the following:
Upgraded the Ansible version, so now it's running on version 2.9.15.
My python3 version is 3.6.9.
After upgrading Ansible to the version I mentioned above, I got an error message: Failed to import the required python library (Docker SDK for Python (python >= 2.7) or docker-py (python 2.6)) on osboxes (this is my machine) python...
So, after Googling this error, I found this URL:
https://neutrollized.blogspot.com/2018/12/cannot-have-both-docker-py-and-docker.html
So, I decided to remove the docker packages from my machines, including the python ones installed using pip (I used pip list to see if a docker package was installed, and removed it using pip uninstall).
After removing them from my machines, I added one more play to the playbook to install docker-compose (that's what solved my problem, and it took care of the python versions).
Just follow the URL I attached in my answer.
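The pip cleanup amounts to something like the following (a sketch, assuming pip3 manages the Python modules your Ansible interpreter uses; the conflicting package names come from the linked post):

```shell
# docker-py and docker are conflicting pip packages; remove both,
# then install only the maintained Docker SDK for Python
pip3 uninstall -y docker docker-py
pip3 install docker
```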
According to the error message, a parameter is being passed to the docker_image module which is not (yet) implemented for that module. The error message also already lists the parameters which are available, the same as in the documentation for the module.
Another possible reason might be that the line indentation for some of the parameters isn't correct.

kubectl set image throws error: "the server doesn't have a resource type deployment"

Environment: Win 10 Home, gcloud SDK v240.0, kubectl added as a gcloud SDK component, Jenkins 2.169
I am running a Jenkins pipeline in which I call a windows batch file as a post-build action.
In that batch file, I am running:
kubectl set image deployment/py-gmicro py-gmicro=%IMAGE_NAME%
I get this
error: the server doesn't have a resource type deployment
However, if I run the batch file directly from the command prompt, it works fine. Looks like it has an issue only if I run it from Jenkins.
I looked at a similar thread on Stack Overflow; however, that user was using Bitbucket (instead of Jenkins).
Also, there was no accepted answer on that thread, and I cannot continue on that thread since I am not allowed to comment (50 reputation required).
This was just answered on this thread.
I've had this error fixed by explicitly setting the namespace as an argument, e.g.:
kubectl set image -n foonamespace deployment/ms-userservice.....
Reference:
https://www.mankier.com/1/kubectl-set-image#--namespace
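If the batch step still fails under Jenkins, a common cause is that the Jenkins service runs as a different user and so does not see the same kubeconfig as your interactive command prompt; pointing kubectl at it explicitly is worth trying. In the sketch below, the kubeconfig path and namespace are assumptions for illustration, while the KUBECONFIG variable and -n flag are standard kubectl mechanisms:

```batch
REM Illustrative batch-file step: make the cluster context explicit.
REM Adjust the kubeconfig path and namespace to your setup.
set KUBECONFIG=C:\Users\jenkins\.kube\config
kubectl set image -n foonamespace deployment/py-gmicro py-gmicro=%IMAGE_NAME%
```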

Bitbucket Server Installation Error

I'm attempting to install Bitbucket server on my linux server. I'm following the steps here. I'm stuck at step 3. I've installed Bitbucket server, and now when trying to "Setup Bitbucket Server" I'm not able to access it from my browser.
I've done the following:
Using SSH, I went to the directory containing /atlassian/bitbucket/4.4.1/.
I ran the command bin/start-bitbucket.sh.
It gives the following message:
Starting Atlassian Bitbucket as current user
-------------------------------------------------------------------------------
JAVA_HOME "/usr/local/jdk" does not point to a valid Java home directory.
-------------------------------------------------------------------------------
----------------------------------------------------------------------------------
Bitbucket is being run with a umask that contains potentially unsafe settings.
The following issues were found with the mask "u=rwx,g=rwx,o=rx" (0002):
- access is allowed to 'others'. It is recommended that 'others' be denied
all access for security reasons.
- write access is allowed to 'group'. It is recommend that 'group' be
denied write access. Read access to a restricted group is recommended
to allow access to the logs.
The recommended umask for Bitbucket is "u=,g=w,o=rwx" (0027) and can be
configured in setenv.sh
----------------------------------------------------------------------------------
Using BITBUCKET_HOME: /home/wbbstaging/atlassian/application-data/bitbucket
Using CATALINA_BASE: /home/wbbstaging/atlassian/bitbucket/4.4.1
Using CATALINA_HOME: /home/wbbstaging/atlassian/bitbucket/4.4.1
Using CATALINA_TMPDIR: /home/wbbstaging/atlassian/bitbucket/4.4.1/temp
Using JRE_HOME: /usr/local/jdk
Using CLASSPATH: /home/wbbstaging/atlassian/bitbucket/4.4.1/bin/bitbucket-bootstrap.jar:/home/wbbstaging/atlassian/bitbucket/4.4.1/bin/bootstrap.jar:/home/wbbstaging/atlassian/bitbucket/4.4.1/bin/tomcat-juli.jar
Using CATALINA_PID: /home/wbbstaging/atlassian/bitbucket/4.4.1/work/catalina.pid
Existing PID file found during start.
Removing/clearing stale PID file.
Tomcat started.
Success! You can now use Bitbucket at the following address:
http://localhost:7990/
If you cannot access Bitbucket at the above location within 3 minutes, or encounter any other issues starting or stopping Atlassian Bitbucket, please see the troubleshooting guide at:
I try to access http://myserveraddress:7990, but I receive an ERR_CONNECTION_REFUSED message. Is it because of the message JAVA_HOME "/usr/local/jdk" does not point to a valid Java home directory?
My server is running:
CentOS Linux release 7.2.1511
And I'm attempting to install
Bitbucket Server 4.4.1
Make sure you have Java installed by running java -version (note: the --version flag only exists on Java 9 and later).
If it's not installed, start there. If it is installed, verify where by running find / -name java.
Open /root/.bash_profile in your text editor. (I prefer the vi editor.)
Paste in the two lines given below (noting that my version may be different from what you see):
export JAVA_HOME=/usr/java/jdk1.7.0_21
export PATH=/usr/java/jdk1.7.0_21/bin:$PATH
Now load the Java variables without a system restart (on system restart they are set by default):
source /root/.bash_profile
Now check the Java version and the JAVA_HOME and PATH variables. They should show the correct information as you have set it:
java -version
echo $JAVA_HOME
echo $PATH
Below is my system’s root bash_profile file
[root@localhost ~]# cat /root/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export JAVA_HOME=/usr/java/jdk1.7.0_21
export PATH=/usr/java/jdk1.7.0_21/bin:$PATH
[root@localhost ~]#

Continuous delivery of infrastructure projects

I'm using Jenkins to achieve Continuous Delivery on some infrastructure projects. At the moment a Jenkins master-slave model is used, where the jobs are always built by some slave and not by the master. My intention is to build and run test-kitchen and leibniz tests using LXC. All the requirements are met: vagrant-lxc, lxc boxes, leibniz and test-kitchen are configured, and everything works OK on my PC or that of any other of my team members. But when it comes to running the job through master-slave on Jenkins, there seem to be some problems with the environment. In detail:
1. When I run "which lxc-create" as part of a build step, it works and shows /usr/bin/lxc-create as it should, but
2. When it runs kitchen test, it fails showing:
+ kitchen test
-----> Starting Kitchen (v1.1.1)
-----> Cleaning up any prior instances of <default-ubuntu-1204>
-----> Destroying <default-ubuntu-1204>...
Finished destroying <default-ubuntu-1204> (0m0.00s).
-----> Testing <default-ubuntu-1204>
-----> Creating <default-ubuntu-1204>...
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: Failed to complete #create action: [Expected process to exit with [0], but received '1'
---- Begin output of vagrant up --no-provision --provider=lxc ----
STDOUT:
STDERR: The `lxc` package does not seem to be installed or is not accessible on the PATH.
---- End output of vagrant up --no-provision --provider=lxc ----
Ran vagrant up --no-provision --provider=lxc returned 1]
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
It seems some environment variable is missing, or something weird is going on. It's worth saying that sshing into the slave and building manually works fine, so it is not a setup problem but a master-slave environment transmission problem, or I'm missing something crucial in the configuration. Can anyone provide some help?
lxc version: 1.0.0
vagrant-lxc: 0.8.0
jenkins: 1.5.49
UPDATE 1:
Here is my kitchen configuration:
---
driver:
  name: vagrant
  require_chef_omnibus: false
  require_chef_berkshelf: true
  customize:
    memory: 1024
provisioner:
  name: chef_solo
platforms:
  - name: ubuntu-12.04
    driver:
      box: "ubuntu-12.04"
      box_url: "http://dl.company.com/ubuntu1204-lxc-amd64.box"
      provider: lxc
suites:
....
Solved the issue! The error message raised by Jenkins was wrong; I added the Jenkins user to sudoers with NOPASSWD and it worked just fine. I figured it out because on my personal PC, creating a container always asks for the sudo password.
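For reference, the sudoers change described is typically a drop-in like the one below, assuming the slave's service account is named jenkins (edit it with visudo; a blanket NOPASSWD rule is convenient but broad, so in production you may want to restrict it to the lxc and vagrant binaries):

```shell
# /etc/sudoers.d/jenkins (illustrative; create with: visudo -f /etc/sudoers.d/jenkins)
jenkins ALL=(ALL) NOPASSWD: ALL
```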
