We are starting to integrate our build and automated testing process into a Jenkins pipeline, and I have an issue with starting Rails servers.
First of all, our pipeline runs a "Servers Configuration" stage containing "Config" steps 0, 1, and 2, followed by a "QA" stage.
In each "Config" step (0, 1, 2), I start a different Rails app on a specific port using rails s -p XXXX -d, and right after that command I run lsof -i:XXXX and I DO see the server running.
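In other words, each Config step boils down to something like this (port 3000 is just an example):
rails s -p 3000 -d    # start the app daemonized on its port
lsof -i:3000          # at this point the server IS listed as running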
But in the QA stage I want to use the servers I started in the Servers Configuration stage, and instead I get connection refused in our tests; also, when I access the machines the apps ran on, I no longer see them running, even though I used -d to daemonize them.
Any ideas? It seems like the Rails servers ran only for the duration of the Servers Configuration stage and were then shut down. Is that possible, and if so, how do I handle it?
Thanks!
I'm getting org.apache.catalina.LifecycleException when running the Shopizer project in Eclipse. Shopizer uses Struts2, and I'm running it on Tomcat 7. The bad part is that the error comes and goes: sometimes it appears, sometimes it doesn't. When it does, restarting the server doesn't help, and I have to restart the whole system to get things working again.
Open Task Manager.
Kill the javaw.exe process (a command-line equivalent is sketched below).
Then restart the server after cleaning the Tomcat directory and the project.
It should work.
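A sketch of the kill step, in case Task Manager is not handy (run from an elevated command prompt):
taskkill /F /IM javaw.exe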
I updated some plugins and restarted Jenkins, but now it says:
Please wait while Jenkins is restarting
Your browser will reload automatically when Jenkins is ready.
It is taking too long (I have been waiting for 40 minutes now). I have only one project with around 20 builds. I have restarted Jenkins many times before and it worked fine, but now it is stuck.
Is there any way to kill/suspend Jenkins to avoid this wait?
I had a very similar issue when using Jenkins' built-in restart function. To fix it I killed the service (with crossed fingers), but somehow it kept serving the "Please wait" page. I guess that page is served by a separate thread, but since I could not see any running Java or Jenkins processes, I restarted the server to stop it.
After the reboot Jenkins worked, but it was not updated. To make it work I ran the update again and restarted the Jenkins service manually; that took less than a minute and worked just fine.
Jenkins seems to have a number of bugs related to restarting, at least one of them unresolved: jenkins issue
Windows ONLY...
None of the solutions here worked, and restarting the server was not an option. If you are in the same situation:
I had to kill java.exe and restart the Jenkins service. After I did this, Jenkins reloaded several times and then went back to normal.
I was stuck on the Jenkins restarting page for about 10 minutes until I did this.
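For reference, the two steps were roughly the following, assuming Jenkins is installed as a Windows service named "jenkins" (run from an elevated prompt):
taskkill /F /IM java.exe
net stop jenkins
net start jenkins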
Hope this helps.
Running this in the command line helped me:
service jenkins restart
I had a similar issue after updating plugins from the plugin update page with the "restart Jenkins" option checked; Jenkins showed only the waiting message for a long time.
I solved it by restoring the .bak files to .jpi for the plugins I had tried to update.
I did the following in my Jenkins:
cd $JENKINS_HOME/plugins/
sudo mv git.bak git.jpi
.
. (more plugin files)
.
sudo mv ldap.bak ldap.jpi
sudo /sbin/service jenkins restart
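The same restore can be written as a loop over every plugin that has a .bak file (just a sketch of the commands above):
cd $JENKINS_HOME/plugins/
for f in *.bak; do sudo mv "$f" "${f%.bak}.jpi"; done   # restore each pre-update archive
sudo /sbin/service jenkins restart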
Check Event Viewer.
I found that my Java died.
Faulting application java.exe, version 7.0.250.17, time stamp 0x51c4b3fd, faulting module ntdll.dll, version 6.0.6002.18541, time stamp 0x4ec3e39f, exception code 0xc0000374, fault offset 0x000abc4f, process id 0x1188, application start time 0x01cee4f42968bc81.
Finally I found that it's a Jenkins 1.540 problem. Don't use that version.
https://issues.jenkins-ci.org/browse/JENKINS-20630
I faced the same issue after upgrading some plugins on Windows. Looking at jenkins.err.log, I saw this error:
Exception in thread "main" java.io.IOException: Jenkins has failed to create a temporary file in C:\Users\builder\AppData\Local\Temp\
at Main.extractFromJar(Main.java:350)
at Main._main(Main.java:194)
at Main.main(Main.java:91)
Caused by: java.io.IOException: There is not enough space on the disk
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(Unknown Source)
at Main.extractFromJar(Main.java:347)
... 2 more
The problem was that the TEMP folder of the jenkins user contained lots of temporary files. After cleaning that folder, Jenkins restarted correctly.
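For reference, the cleanup can be scripted from a command prompt running as that user; the path is the one from the log above:
del /f /q "C:\Users\builder\AppData\Local\Temp\*"
rem subfolders need an extra "rd /s /q" per directory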
I just performed a restart on the server. That fixed the issue!
In a command prompt, execute this:
C:\>service jenkins restart
Or
you can open the Services console (Win + R, then services.msc), search for Jenkins, and click Restart.
For me, the cause seemed to be having lots of old job build logs hanging around. To clean them up, I ran:
cd $JENKINS_HOME/jobs
# deletes every numbered build directory under each job's builds/ folder
find . -type d -name builds | xargs -n 1 bash -c 'rm -rf "$0"/[1-9]*'
Then I stopped and started Jenkins again, and it came up within a minute.
Credit to: https://stackoverflow.com/a/39230597/2255242
This is an old thread, but my personal recommendation is to WAIT before attempting to do anything (such as restarting the service).
I once wasted hours trying to fix something that turned out not to be an issue in the first place. In the end I messed things up and lost a lot of time.
Just because you see errors in the logs doesn't necessarily mean that you need to take action.
The upgrade took about 45 minutes in the end for me. All I did at one point was refresh my browser window. It can take a while.
Just my opinion.
On Windows 10: stopping Jenkins with the service command from the command line reported failure to stop the service, but I was able to stop it from services.msc (run as administrator). The updates were applied. Sorry, no definitive answer from me; YMMV.
I used TCPView and killed the processes using port 8080. Basically they were all java.exe processes belonging to Jenkins. I killed them all and restarted the Jenkins service.
Try restarting it from the Windows Services console; that should work.
I observed the same issue after installing a plugin and opting to restart Jenkins when no jobs were running.
When I looked at the Jenkins server process, it was running fine with no issues.
After restarting the Jenkins service with the command below and reloading the browser, Jenkins was up.
sudo service jenkins restart
If Jenkins is taking an unusually long time to restart, the best recourse is to check the generated logs to see what may be wrong. However, even that may be of little help, because many plugins try to be "quiet" by default even while they are furiously working to load content. So if all else fails, you may have to resort to manually disabling plugins.
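A low-risk way to disable a plugin without uninstalling it is to drop a .disabled marker file next to its archive in $JENKINS_HOME/plugins and restart; the plugin file name below is illustrative:
touch "$JENKINS_HOME/plugins/jobConfigHistory.jpi.disabled"
# deleting the marker file and restarting re-enables the plugin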
However, here is a free tip: some plugins are known to be messy. For example, we observed the Job Config History plugin writing hundreds of thousands of records for both job configuration changes AND agent changes. Removing this plugin and deleting the configHistory folder fixed one problem where our startup literally took more than 4 hours.
In our case, the problem was that we were launching ephemeral agents (via Docker and/or Kubernetes). Each new "agent" was treated as a configuration change. With thousands of agents per day, it didn't take long to fill a substantial part of the disk with history that was never effectively cleared.
Other plugins leak data in this way too. You can also create self-inflicted wounds, e.g. by using a standalone process to remove "obsolete" files. One example where we were bitten was a process that tried to discard old build records but did an incomplete job, "warring" with the running Jenkins process. Jenkins will break its neck trying to load a build.xml record that is empty or incomplete.
Three more tips:
You can install the Monitoring plugin. Often when the Jenkins UI proper hadn't started, we could still reach /monitoring (see the probe sketch after these tips).
Likewise, /userContent can often be loaded even when the rest of the UI is not fully up.
Don't rule out bad actors. It takes just one aggressive script that tries, e.g., to load the entire build history and ship it back via REST calls to effectively deny service to all other UI users.
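A quick way to probe those two endpoints while the UI appears hung (host and port are illustrative):
curl -s -o /dev/null -w '/monitoring -> HTTP %{http_code}\n' http://localhost:8080/monitoring
curl -s -o /dev/null -w '/userContent -> HTTP %{http_code}\n' http://localhost:8080/userContent/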
I fixed a file named hudson.model.UpdateCenter.xml, located in /var/lib/jenkins.
I changed the URL to https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json
Finally, I restarted Jenkins. That solved my problem.
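For reference, the edit plus restart can be done in two commands; the sed pattern assumes the file stores the address in a <url> element, so check your copy first:
sudo sed -i 's#<url>.*</url>#<url>https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json</url>#' /var/lib/jenkins/hudson.model.UpdateCenter.xml
sudo service jenkins restart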
I just got a chance to test-drive a VPS for a week and decided to try Grails on it. The problem is that it shuts itself down.
Details:
VPS - 512 MB RAM, Ubuntu 12.10 x64 (no particular reason for x64)
Oracle Java 7u17
Latest GVM 0.9.5
Grails 2.2.1
What I did was follow this tutorial, http://grails.org/Quick+Start, which is very basic. Everything went smoothly until I ran grails run-app.
After initialization it showed as running for about 5 seconds, and I could even start loading the page, but then the terminal suddenly showed Killed. This is what the terminal showed:
root@jp:/var/grails/my-project# grails run-app
| Running Grails application
Killed
There was no input during that time whatsoever. Any ideas on the cause of this problem?
You should only ever run Grails with the run-app command when developing locally. The reason is that run-app starts your Grails app with a lot of dynamic behavior that is great for rapid development but horrible performance-wise for running on an actual server.
Refer to Grails' User Guide on how best to deploy your application:
http://grails.org/doc/latest/guide/gettingStarted.html#deployingAnApplication
As the docs above state, the correct way to run your Grails app in production is to deploy it to a servlet container. Tomcat is a good place to start, since Grails uses it by default when running locally. You may also need to play around with the VM flags of your servlet container, depending on your environment (again, the docs give a few suggestions here).
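A minimal sketch of that route, assuming the Ubuntu tomcat7 package; the WAR name, version, and memory values are illustrative:
grails war    # builds target/my-project-0.1.war
sudo cp target/my-project-0.1.war /var/lib/tomcat7/webapps/
# on a 512 MB VPS, also cap the container's heap in /etc/default/tomcat7, e.g.:
#   JAVA_OPTS="-Xmx256m -XX:MaxPermSize=128m"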
You can redirect the output of your command if it gets killed immediately in your terminal:
grails run-app > output.txt
Then open output.txt, and from there you can dissect the problem.
In my case, it was an incorrect JAVA_HOME directory.
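If output.txt turns out to be empty, a bare "Killed" usually means the kernel's OOM killer ended the JVM, which is worth checking on a 512 MB VPS:
dmesg | grep -iE 'killed process|out of memory'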
Hope it helps.
The Integrity app is working fine for me in my OS X dev environment. I've deployed an instance to an Ubuntu server for my production setup, and I'm able to set up a new project. But when I trigger a manual build to test a first build, the build record is created and the build never runs.
I've added a bunch of logging to my application and have been able to track the point of failure to where the build job is added in ThreadPool#add. Everything appears to run fine up to the point where the job is added to the build pool, but the pool isn't actually running anything, despite being spawned and no exceptions being raised.
The environment I'm running is Ubuntu 11.04, RVM & Ruby 1.9.2-p290, Passenger / Apache, and Integrity from master with SQLite3 and the ThreadedBuilder.
UPDATE:
I found an article indicating this may be an issue with Apache & Passenger not loading the Ruby environment properly. That appears to be the case: in dev I'm just running bundle exec rackup, while in production I was trying to use Passenger. So on the production machine I started an instance of Integrity using bundle exec rackup, which does indeed run the builds, except that it didn't properly find the bundler gem as it should have. I'm sure I can track down a fix for that somehow.
So essentially the issue I am having is with running Integrity under Passenger rather than rackup. The article that pointed me in this direction didn't work with its solution of getting Ruby into the Apache environment, though. Can anyone help me determine how to properly run Integrity with Passenger?
The issue was in the way Passenger handles threading. By switching from the ThreadedBuilder to the DelayedBuilder, which uses DelayedJob for builds, I was able to use Passenger as the web server.