WebSphere server startup problem - websphere-6.1

When I start my WebSphere Application Server 6.1 in debug mode, I get the following error in RAD:
Server WebSphere Application Server v6.1 at localhost was unable to start within 1800 seconds. If the server requires more time, try increasing the timeout in the server editor.
Please help me resolve this.

I resolved this issue by raising the startup timeout from 1800 to 2000 seconds in the WebSphere server settings.
To do this:
1) Double-click the WebSphere server in RAD.
2) Click the "Timeouts" link.
3) Change the startup limit to something higher than the current value.
In my case, I changed it from 1800 to 2000 seconds.

Try deleting all breakpoints and then starting the server; afterwards, re-add the breakpoints you need. I get this error when working with Eclipse and Tomcat, and this solution works for me.

Related

InfluxDB server not listening on 8086

I started InfluxDB. The meta server starts on port 8088 and I see a series of [wal] logs. When I try to connect to the server using the influx command, it throws:
Failed to connect to http://localhost:8086
Please check your connection settings and ensure 'influxd' is running.
The server is running in the background. I had been writing continuously, and then I restarted the server; after restarting I am not able to connect to it. I also tried connecting an hour after the restart to make sure it was not due to some startup tasks.
What could be the reason for this?
The database had a huge number of series, and it took more than 2 hours for the meta server to come up fully. The HTTP listener only came up after these initial startup tasks finished.
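When a restart involves long startup tasks like this, it can help to poll the server instead of retrying by hand. A minimal sketch, assuming the default port 8086 and InfluxDB's /ping health endpoint (which returns 204 No Content once the HTTP listener is up); the class name, timeout, and interval values are illustrative:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class InfluxPing {
    // Poll the /ping endpoint until it answers 204 or the deadline passes.
    static boolean waitForInflux(String base, long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(base + "/ping").openConnection();
                conn.setConnectTimeout(2000);
                conn.setReadTimeout(2000);
                if (conn.getResponseCode() == 204) {
                    return true; // HTTP listener is up
                }
            } catch (Exception e) {
                // Listener not up yet (e.g. connection refused); keep polling.
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException ie) {
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("listener up: "
                + waitForInflux("http://localhost:8086", 3000, 500));
    }
}
```

Running this in a loop (or from a startup script) makes it obvious when the listener finally comes up, rather than guessing at an hour mark.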

Xdebug timeout while running debug session (PhpFarm | phpFcgi)

I run the Apache web server inside a Docker container.
To be able to use multiple PHP versions, I use phpfarm inside this container.
After I configured Xdebug and connected it to PhpStorm, I wondered why the debug session always finished with a 500 error in the browser.
The timeout occurred roughly 40-50 seconds after I requested the web page.
The solution was to set the timeout for the server in the vhost file for each PHP version:
FcgidIOTimeout 300
With this parameter the timeout is set to 300 seconds.
Don't forget to restart or reload the web server.
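For reference, the directive sits inside the per-version vhost. A sketch of what that can look like; the server name, document root, and port are placeholders, not part of the original setup:

```apacheconf
<VirtualHost *:80>
    ServerName php56.example.local
    DocumentRoot /var/www/html

    # Let a FastCGI request that is paused in the debugger run for up to
    # 300 seconds before mod_fcgid gives up and returns a 500.
    FcgidIOTimeout 300
</VirtualHost>
```

Apply the same change to each PHP version's vhost, then reload Apache.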

AWS Deployment with Rails - Inaccessible

Could you tell me what is happening with my AWS server? For the past 3 weeks, whenever I deploy my RoR app to AWS (using the Elastic Beanstalk tool), I hit a strange issue.
Deployment time is reasonable (about 10-15 minutes), and the server's health status stays green. But after that, the server is inaccessible. This state lasts about 3-4 hours! Then everything is OK and the server runs fast and smoothly. I don't understand why the health status remains unchanged while this error is happening. All I can do is refresh the browser periodically until it works.
I don't think my application is big enough to justify that; deployment takes only about 20 minutes locally (in production mode).
Here are some errors I found while the server hangs:
"An error occured while starting up the preloader."
"Gateway timeout" when loading application.js (using Chrome's debugger)
"Bad gateway" when loading application.js (using Chrome's debugger)
Please give me some advice on solving this; I have been stuck on this issue for a long time.
Thanks

Apache doesn't use all bandwith

I'm using Apache 2.4.1 on Windows, and I'm trying to optimize the loading speed of my website www.xgclan.com with http://www.webpagetest.org.
I noticed that the download time is quite long in the report.
Today I downloaded the Windows 8 preview to my server to host as a mirror, put it on my Apache server, and tried downloading it over my home connection; the speed was only 500 KB/s.
My server has a 100 Mb/s duplex connection, and Task Manager indicates that only 7% of the bandwidth is being used.
I have 120 Mb/s down at home, and I ran a speed test to make sure it's not an issue with my home connection.
Downloading works fine on the server itself, so I think it's an issue with Apache or Windows Server 2008 R2.
Can anybody help me so I can use my full 100 Mb/s upload?
This issue was caused by EnableSendfile on; after turning it off, I was able to use the full connection speed.
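In the Apache configuration that change looks like the following. EnableSendfile is a core directive; the EnableMMAP line is an optional extra knob for the same class of static-file delivery problems, not something the original answer mentions:

```apacheconf
# httpd.conf -- have Apache read and send static files itself instead of
# delegating to the kernel's sendfile path, which can throttle throughput
# on some Windows network stacks.
EnableSendfile off

# Optional: memory-mapped file delivery can cause similar issues.
EnableMMAP off
```

Restart Apache after changing either directive.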

Hyperic JMX monitoring threads not closing

I'm using Tomcat 6 and Hyperic HQ for monitoring via JMX.
The issue is the following:
Hyperic, over time, opens hundreds of JMX connections and never closes them. After a few hours our Tomcat server is using 100% CPU without doing anything.
Once I stop the Hyperic agent, Tomcat goes back to 0-1% CPU.
Here is what we are seeing in VisualVM:
http://forums.hyperic.com/jiveforums/servlet/JiveServlet/download/1-11619-37096-2616/Capture.PNG
I don't know whether this is a Hyperic issue, but I wonder if there is an option to fix it via Tomcat/Java configuration. The reason I'm unsure whether it is a Hyperic or a Tomcat/Java configuration issue is that when we use Hyperic against another standard Java daemon, it doesn't have the same connection leak.
The JMX beans are exposed using Spring, and everything works well when connecting with JMX clients (JConsole/VisualVM). When I close the client, I see the number of connections drop by one.
Is there anything we can do to fix this via Java configuration (e.g. forcing it to close a connection that has been open for more than X seconds)?
One more thing: in Tomcat we see (from time to time) the following message while Hyperic is running:
Mar 7, 2011 11:30:00 AM ServerCommunicatorAdmin reqIncoming
WARNING: The server has decided to close this client connection.
Thanks
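That ServerCommunicatorAdmin WARNING is the connector server timing out an idle client on its own, which suggests a server-side idle timeout already exists and can be tuned. A hedged sketch of how that can be configured when you create the connector server yourself: `jmx.remote.x.server.connection.timeout` is a JDK implementation-specific environment attribute (milliseconds), and the registry port and timeout value here are illustrative, not taken from the original setup:

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class JmxIdleTimeout {
    // Environment map asking the RMI connector server to drop client
    // connections that stay idle longer than the given number of millis.
    static Map<String, Object> idleTimeoutEnv(long millis) {
        Map<String, Object> env = new HashMap<>();
        env.put("jmx.remote.x.server.connection.timeout", millis);
        return env;
    }

    public static void main(String[] args) throws Exception {
        LocateRegistry.createRegistry(9999); // RMI registry for the JNDI lookup
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnectorServer server = JMXConnectorServerFactory
                .newJMXConnectorServer(url, idleTimeoutEnv(60_000L), mbs);
        server.start();
        System.out.println("connector active: " + server.isActive());
        server.stop();
    }
}
```

If the connector is wired up by Spring instead, the same map can usually be supplied as the connector server's environment properties; either way, this only mitigates the leak on the server side, as the agent is still the one opening the connections.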
