Starting Cassandra from Jenkins

I'm trying to start a Cassandra instance (0.8.10) from Jenkins (latest version, 1.463).
Inside a "free-style project" job, I have a "Execute shell" build step, where I have tried a couple of approaches:
.../tools/apache-cassandra-0.8.10/bin/cassandra -f
and
.../tools/apache-cassandra-0.8.10/bin/cassandra
The first approach starts Cassandra fine, but the build never finishes; Jenkins just keeps running it. If I stop the build, the Cassandra process dies as well.
The second approach fails because the Cassandra process dies as soon as the build finishes.
I have also tried:
.../tools/apache-cassandra-0.8.10/bin/cassandra -f &
which is admittedly a hack, and it doesn't work anyway.
Any ideas on how to start Cassandra from Jenkins?

Try using nohup with &. Also redirect stdout and stderr to a file or to /dev/null:
nohup .../tools/apache-cassandra-0.8.10/bin/cassandra -f > /dev/null 2>&1 &
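Note that even a daemonized process may still be reaped by Jenkins's ProcessTreeKiller when the build ends; a common workaround is to override BUILD_ID in the same build step so Jenkins no longer associates the process with the build. A sketch of the complete "Execute shell" step under that assumption:
# BUILD_ID=dontKillMe stops the Jenkins ProcessTreeKiller from
# reaping the daemonized Cassandra process when the build finishes.
BUILD_ID=dontKillMe nohup .../tools/apache-cassandra-0.8.10/bin/cassandra -f > /dev/null 2>&1 &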

Related

Is there a way to turn off the output Docker prints after "docker stop [container]" and "docker start [container]"?

I'm building my own backup script at the moment, and as part of it I want to turn my Docker instances on and off from a shell script.
So far this has been working perfectly; the only gripe I have is that after it stops or starts a Docker instance, it prints the container ID to the shell, and I would love to get rid of that:
docker start [containers]
4977db52f155
8063645c1a41
5b56a8ad3c72
65a0df7e8896
You can redirect the output to /dev/null.
docker stop [containers] > /dev/null 2>&1
or
docker stop [containers] &>/dev/null
https://www.cyberciti.biz/faq/how-to-redirect-output-and-errors-to-devnull/
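For example, a backup script might wrap the calls like this (a sketch; the container names and backup path are placeholders):
#!/bin/sh
# Silence the container IDs that docker start/stop print on stdout,
# but leave stderr alone so real errors stay visible.
docker stop db-container web-container > /dev/null
tar czf /backup/volumes-$(date +%F).tar.gz /var/lib/docker/volumes
docker start db-container web-container > /dev/null
Redirecting only stdout (rather than &>/dev/null) is a deliberate choice here: if a container fails to stop, the error message still reaches the terminal.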

How to define exit point for Docker Stop

I'm running an application in Tomcat. It generates some temporary files while it runs, and when we shut Tomcat down using shutdown.sh or stop it from IntelliJ, those temp files are converted to CSV. This is a built-in feature of my application.
Now when I run the same application in Docker, the temp files are not converted to CSV as I expect.
I can see ENTRYPOINT in the Dockerfile, which indicates the startup script for the container. I don't know how docker stop works internally, what it triggers when we execute docker stop <container name>, or where I should define a script to run on docker stop.
I also tried docker stop -t 10 <container name>, thinking that Tomcat might be taking time to shut down and Docker might be killing the Tomcat process too early. But no luck.
I think Docker is killing Tomcat when I execute docker stop.
Any guidance will be appreciated.
First, understand how signals reach the process with PID 1 in a container: docker stop sends SIGTERM to PID 1 and, if the container has not exited after the timeout, follows up with SIGKILL. If Tomcat is launched from a shell script without exec, the shell is PID 1, the SIGTERM never reaches Tomcat, and Tomcat is killed without running its shutdown hooks.
Steps to perform on your system:
Step 1: Check which script file your ENTRYPOINT runs.
Step 2: In that script (e.g. entry.sh), find the line where you start Tomcat.
Step 3: Start Tomcat with exec sh catalina.sh run.
Step 4: Check with docker exec <container_name> ps -ef that Tomcat runs as PID 1.
Step 5: Rebuild the Docker image.
Step 6: docker stop will now deliver the shutdown signal to Tomcat and work as expected.
To observe the logs, use docker logs -f <container_name>
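A minimal sketch of the result (the script name entry.sh comes from the steps above; the Tomcat install path is an assumption):
#!/bin/sh
# entry.sh -- do any setup work, then exec replaces this shell with
# Tomcat, so Tomcat becomes PID 1 and receives the SIGTERM directly.
exec sh /usr/local/tomcat/bin/catalina.sh run
In the Dockerfile, use the exec form so no extra shell wraps the script:
ENTRYPOINT ["/entry.sh"]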

Jenkins inside container

Before someone shouts at me that Jenkins has an official Docker image: I know, I'm just playing with and testing anything I can think of.
I have a container (php:7.2-apache) in which I'm installing Jenkins, and I have a problem starting the service to run Jenkins.
I have tried to start the service with CMD service jenkins start, but when I run the container with docker-compose up -d the log shows me this, and afterwards the container is stopped with exit code 0:
test_1 | Correct java version found
test_1 | Starting Jenkins Automation Server: jenkins.
Could someone help me with this? I'm curious to know the reason.
The reason is that you are starting Jenkins as a service, which runs in the background.
A Docker container lives only as long as the process run by its CMD is alive. In this case, that process simply starts the jenkins service and exits, so the container exits as soon as the command service jenkins start finishes.
Take a look at the jenkins.sh script used by the official jenkins image to start jenkins.
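One way to keep the container alive is to run Jenkins in the foreground as the container's main process rather than as a service. A minimal sketch, assuming the war lands where the Debian package puts it (your path may differ):
# Run the war directly: java stays in the foreground as PID 1,
# so the container keeps running until Jenkins itself exits.
CMD ["java", "-jar", "/usr/share/jenkins/jenkins.war"]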

Configuring Jenkins SSH options to slave nodes

I am running Jenkins on Ubuntu 14.04 (Trusty Tahr) with slave nodes via SSH. We're able to communicate with the nodes to run most commands, but when a command requires a tty input, we get the classic
the input device is not a TTY
error. In our case, it's a docker exec -it command.
So I'm searching through loads of information about Jenkins, trying to figure out how to configure the connection to the slave node to enable the -t option and force tty allocation, and I'm coming up empty. How do we make this happen?
As far as I know, you cannot give -t to the ssh that Jenkins fires up (which makes sense, as Jenkins is inherently detached). From the documentation:
When the SSH slaves plugin connects to a slave, it does not run an interactive shell. Instead it does the equivalent of your running "ssh slavehost command..." a few times...
However, you can defeat this in your build scripts (see the sketch after this list) by...
looping back to yourself: ssh -t localhost command
using a local PTY generator: script --return -c "command" /dev/null
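Applied to the docker exec case from the question, either workaround might look like this (the container name and the command are placeholders):
# Option 1: loop back over ssh with -t so a pseudo-terminal is allocated
ssh -t localhost "docker exec -it my-container ./run-tests.sh"
# Option 2: let script(1) fabricate the PTY locally instead
script --return -c "docker exec -it my-container ./run-tests.sh" /dev/null
The script variant has the advantage of not needing passwordless ssh to localhost on every slave.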

How to have all Jenkins slave tasks executed with nice?

We have a number of Jenkins jobs which may get executed over Jenkins slaves. Is it possible to globally set the nice level of Jenkins tasks to make sure that all Jenkins tasks get executed with a higher nice level?
Yes, that's possible. The "trick" is to start the slave agent with the proper nice level already; all Jenkins processes running on that slave will inherit that.
Jenkins starts the slave agent via ssh, effectively running a command like
cd /path/to/slave/root/dir && java -jar slave.jar
On the Jenkins node config page, you can define a "Prefix Start Slave Command" and a "Suffix Start Slave Command" to have this nice-d. Set as follows:
Prefix Start Slave Command: nice -n -10 sh -c '
Suffix Start Slave Command: '
With that, the slave startup command becomes
nice -n -10 sh -c 'cd "/path/to/slave/root/dir" && java -jar slave.jar'
This assumes that your login shell is a Bourne shell. For csh, you will need a different syntax. Also note that this may fail if your slave root path contains blanks.
I usually prefer to "Launch slave via execution of command on the Master" and invoke ssh myself from within a shell wrapper. That way you can select the cipher and client of your choice, and niceness can be set without Prefix/Suffix kludges and without whitespace pitfalls.
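A sketch of such a wrapper (the host name, path, and niceness value are placeholders; note that negative values like -10 raise priority and require root, while positive values lower it and do not):
#!/bin/sh
# Hypothetical launch command configured on the master: ssh to the
# slave and start the agent under nice there.
exec ssh build-slave01 "cd /path/to/slave/root/dir && exec nice -n 10 java -jar slave.jar"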
