The server was running fine for the last few months, then suddenly yesterday and today it just stopped responding and returned a 503 error. It works when I restart the server, but within a few hours it stops again and returns the 503 error. The problem is I don't know what causes it from time to time. I checked the error log and the file is very large; could that be the cause of the sudden errors on the Tomcat server?
Any help will do.
Best to tail the log files (e.g. tail -f logs/catalina.out) when the error occurs and debug from there. If the log files are too large, back them up, truncate them to zero (e.g. truncate -s 0 logs/catalina.out), restart Tomcat, and try again.
Hi, I have a few NodeJS servers on GCP Run, and when there are a lot of requests, Google adds more instances. I've noticed that when they are added, we temporarily get a lot of 503 errors, though it's possible the instances are being added because of the 503 errors rather than the other way around.
Things I have tried to fix this:
Reduced the concurrency from 1000 to 500 to 300. We are still getting 503 errors, but in some cases fewer.
Used health checks to ping an endpoint to make sure Express is running before traffic is sent (see the config sketch below). This might also have helped, but we are still getting quite a few 503 errors.
I expect that we should be able to fix this so that we don't get any 503 errors, but we are still getting quite a few.
What else should I try?
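For reference, here is roughly what the relevant part of the Cloud Run service YAML looks like (a minimal sketch; the service name, image, port, and /healthz path are placeholders, not our real config):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-node-service            # placeholder name
spec:
  template:
    spec:
      containerConcurrency: 300    # reduced from 1000 -> 500 -> 300
      containers:
        - image: gcr.io/my-project/my-node-service   # placeholder image
          ports:
            - containerPort: 8080
          startupProbe:            # don't route traffic until Express answers
            httpGet:
              path: /healthz       # placeholder health endpoint
              port: 8080
            periodSeconds: 2
            failureThreshold: 15   # allows up to ~30s for a slow cold start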
Update #1:
It takes about 10 to 20 seconds for an instance to start.
Update #2:
I noticed that the 503 errors happen in groups, and that the failing requests are not just a few milliseconds long, which means instances are already running when the 503 errors occur. So GCP Run is likely adding instances because of the 503 errors; the 503 errors are not happening because instances are being added.
What can cause a 503 error all of a sudden for an instance that is already running?
I have an MVC project in React and .NET.
My server and client run fine locally, but when I run the app from IIS the server always returns a 500 error.
Does anyone have an idea how to work out what's wrong, and why the server always returns 500?
Finally, I ran the application on localhost on the server machine in order to see verbose error information (by default, IIS only shows detailed error pages to local requests).
I right-clicked the 500 response in the Network tab (after opening the F12 dev tools) and chose "Open in new tab".
That helped me understand why the server always returned 500.
For a few days now, our Jenkins server has been returning "HTTP ERROR 404 Not Found" from Jetty. The interesting behavior is that if I reload the page a number of times (5-20 times), the Jenkins UI suddenly appears, but on the next click it is "HTTP ERROR 404 Not Found" again. Jenkins runs in a container on k3s. The Jenkins logs do not show any issues, and the Java process does not crash. I tried the latest Jenkins version and a few older ones (all Alpine-based). Until last week it had been working for several months without problems. Any ideas?
The problem here was the Traefik ingress configuration. I used
- path: /jenkins
which worked fine for many months but stopped working after a reboot. When I changed it to
- path: /
it worked again. I do not understand why Traefik's behavior changed, but if someone runs into the same issue, maybe this post helps.
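For context, this is roughly where that line sits in the Ingress manifest (a minimal sketch against the networking.k8s.io/v1 API; the ingress class annotation, service name, and port are placeholders, not necessarily what your cluster uses):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    kubernetes.io/ingress.class: traefik   # placeholder; your class may differ
spec:
  rules:
    - http:
        paths:
          - path: /            # changed from /jenkins
            pathType: Prefix
            backend:
              service:
                name: jenkins  # placeholder service name
                port:
                  number: 8080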
When I start the Phusion Passenger Standalone web server (version 5.0.2), I see the following error in the log (even though everything works fine otherwise):
ServerKit/Server.h:892 ]: [Client 1-1] Disconnecting client with error: client socket write error: Broken pipe (errno=32)
Any idea what might be causing it?
Note: I start the server with foreman start and stop it with Ctrl-C.
Passenger author here. Actually, the issue maxd linked to has nothing to do with it.
The "Disconnecting client with error: client socket write error: Broken pipe" is a harmless informational message. It's quite normal, but I forgot to give it a lower logging level. I will do that in the next release. You can safely ignore this message. Nothing bad is going on.
Could you tell me what is happening with the AWS server? For the last 3 weeks, whenever I deploy my RoR app to the AWS server (using the Elastic Beanstalk tool), I run into a strange issue.
Deployment time is quite good (about 10-15 minutes), and the server health is still green. But after that, the server is inaccessible. This state lasts about 3-4 hours! Then everything is OK, and the server runs fast and smoothly. I totally don't understand why the server health remains unchanged even though this error is happening. All I can do is refresh the browser periodically until it runs.
I don't think my application is big enough to justify behavior like that; deploying locally (production mode) only takes me about 20 minutes.
Here are some errors I found while the server hangs:
"An error occurred while starting up the preloader."
"Gateway timeout" when loading application.js (seen in the Chrome debugger)
"Bad gateway" when loading application.js (seen in the Chrome debugger)
Please give me some advice on how to solve this. I have been stuck on this issue for a long time.
Thanks