Xdebug times out while running a debug session (PhpFarm | phpFcgi) - Docker

I run an Apache web server inside a Docker container.
To be able to use multiple PHP versions, I use phpfarm inside this container.
After configuring Xdebug and connecting it to PhpStorm, I wondered why the debug session always finished with a 500 error in the browser.
The timeout occurred roughly 40-50 seconds after requesting the web page.
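For reference, a typical Xdebug 2 setup for PhpStorm in a containerized environment like this might look roughly as follows (host, port and the use of host.docker.internal are assumptions, not taken from the original post; Xdebug 3 renames these settings to xdebug.mode, xdebug.client_host and xdebug.client_port):

; php.ini of the phpfarm PHP version being debugged
zend_extension = xdebug.so
xdebug.remote_enable = 1
xdebug.remote_host = host.docker.internal   ; or the Docker host's IP
xdebug.remote_port = 9000                   ; PhpStorm's default Xdebug port
xdebug.remote_autostart = 1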

The solution was to set the timeout for the server in the vhost file for each PHP version:
FcgidIOTimeout 300
With this parameter, the timeout is set to 300 seconds.
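The directive goes into the per-version vhost configuration; a minimal sketch might look like this (server name, paths and the placeholder comment are illustrative, not from the original setup):

<VirtualHost *:80>
    ServerName example.local
    DocumentRoot /var/www/example
    # phpfarm / mod_fcgid wrapper directives for this PHP version go here
    # Give long-running requests (e.g. a paused debug session) up to 300 s
    # before mod_fcgid stops waiting for the FastCGI process and returns a 500
    FcgidIOTimeout 300
</VirtualHost>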
Don't forget to restart or reload the web server.

Related

How to debug my requests from a Docker image?

I run my application that grabs data from an external API in a Docker container (alpine). I use Docker Desktop 4.1.1 on macOS Monterey 12.5.
Every now and then my app needs to refresh its auth token. Everything works well.
But sometimes I get timeouts on requests to refresh the token (let's say it's auth.example.com).
I think auth.example.com might be rate limiting those calls but:
It works with no problem when I request the same thing from my host (outside Docker) at the same time it is timing out in a container
After I restart Docker it works right away from inside a container
Issue disappears after some (random?) time. Sometimes it's 30 minutes, sometimes it's hours
I tested it from different containers made from different, clean (Debian, alpine, Ubuntu) images - calls to auth.example.com are timing out from all of them
I tried telnet auth.example.com 443 and it times out inside Docker but works fine from my host
At the same time telnet google.com 443 works well from inside my containers
I tried running hundreds of those requests from my host in a loop to see if it gets blocked, but it doesn't (and my app inside a container makes that request maybe once an hour)
It seems like Docker is adding something to the request that allows auth.example.com to filter those requests, maybe?
But I tried sending requests from inside my container and from my host to RequestBin and all headers look the same.
I tried using mitmproxy and Proxyman to watch the requests but auth.example.com uses SSL pinning and I was not able to configure it properly.
I don't know how to debug that further. Any ideas?
(I am using Spotify's API, with Spotipy library, and calls that time out are made to accounts.spotify.com).
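For reference, a sketch of the kind of host-vs-container comparison described above (image names and commands are illustrative, not taken from the original post):

# from the macOS host
curl -sv https://accounts.spotify.com -o /dev/null

# from a clean, throwaway container while the issue is happening
docker run --rm alpine nslookup accounts.spotify.com
docker run --rm curlimages/curl -sv https://accounts.spotify.com -o /dev/null

# inspect Docker's network settings if the container-side calls hang
docker network inspect bridge

If DNS resolution or the TCP connect behaves differently in the container than on the host, that points at Docker Desktop's network/DNS layer (which runs inside a VM on macOS) rather than rate limiting by the remote API.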

Openshift HTTP times out after 60 seconds

I have a server set up with Flask.
Everything works fine locally; HTTP requests can take longer than 60 seconds to resolve.
But when I deploy the server on Openshift, any request that takes longer than 60 seconds will time out automatically.
I have already changed the route timeout on OpenShift to 10m, but that is not working. Any ideas?
haproxy.router.openshift.io/timeout: 10m
The issue seems to be related to the way I set up OpenShift. I am using an IBM Cloud VPC environment, which has its own connection timeout:
https://cloud.ibm.com/docs/vpc?topic=vpc-advanced-traffic-management#connection-timeouts
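For completeness, that annotation is normally applied to the route like this (the route name is a placeholder); in this setup the IBM Cloud VPC connection timeout linked above still applies on top of it:

oc annotate route my-route --overwrite haproxy.router.openshift.io/timeout=10m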

Influxdb server not listening on 8086

I started InfluxDB. The meta server starts on port 8088 and I am seeing a series of [wal] logs. When I try to connect to the server using the influx command, it throws:
Failed to connect to http://localhost:8086
Please check your connection settings and ensure 'influxd' is running.
The server is running in the background. I had been writing continuously and then restarted the server; after restarting I am not able to connect to it. I also tried connecting an hour after the restart to make sure it was not due to some startup tasks.
What could be the reason for this?
The database had a huge number of series, and it took more than 2 hours for the meta server to come up fully. The HTTP listener came up only after those initial startup tasks finished.
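For InfluxDB 1.x, a quick way to tell whether the HTTP listener is actually up (as opposed to the process merely running) is to check the port and follow the startup log; the exact commands and log location depend on the installation:

# is anything listening on 8086 yet?
ss -ltn | grep 8086

# follow startup progress on a systemd-based install
journalctl -u influxdb -f

# the listener's address lives in the [http] section of influxdb.conf
# bind-address = ":8086"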

Nginx + unicorn (rails) often gives "Connection refused" in nginx error log

At work we're running some high-traffic sites in Rails. We often get a problem with the following being spammed in the nginx error log:
2011/05/24 11:20:08 [error] 90248#0: *468577825 connect() to unix:/app_path/production/shared/system/unicorn.sock failed (61: Connection refused) while connecting to upstream
Our setup is nginx on the frontend server (load balancing), and unicorn on our 4 app servers. Each unicorn is running with 8 workers. The setup is very similar to the one GitHub uses.
Most of our content is cached, and when a request hits nginx it looks for the page in memcached and serves that if it can find it - otherwise the request goes to Rails.
I can solve the above issue - SOMETIMES - by doing a pkill of the unicorn processes on the servers, followed by:
cap production unicorn:check (removing all the pid's)
cap production unicorn:start
Do you guys have any clue how I can debug this issue? We don't have any significantly high load on our database server when these problems occur.
Something killed your unicorn process on one of the servers, or it timed out. Or you have an old app server in your upstream app_server { } block that is no longer valid. Nginx will retry it from time to time. The default is to re-try another upstream if it gets a connection error, so hopefully your clients didn't notice anything.
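For context, the upstream block that answer refers to typically looks like this in a unicorn setup (socket path taken from the error message above; everything else is a placeholder):

upstream app_server {
    # "Connection refused" means nothing was accepting on this socket
    # at the moment nginx tried to connect
    server unix:/app_path/production/shared/system/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_server;
    }
}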
I don't think this is an nginx issue for me; restarting nginx didn't help. It seems to be gunicorn... A quick and dirty way to avoid this is to recycle the gunicorn instances when the system is not being used, say at 1 AM if that is an acceptable maintenance window. I run gunicorn as a service that will come back up if killed, so a pkill script takes care of the recycle/respawn:
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
exec /var/web/proj/server.sh
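The recycle itself can be a nightly cron entry that kills the workers and lets the respawn stanza above bring them back (the schedule and process pattern below are examples, not from the original answer):

# /etc/cron.d/recycle-gunicorn
0 1 * * * root pkill -f gunicorn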
I am starting to wonder if this is at all related to memory allocation. I have MongoDB running on the same system and it reserves all the memory for itself but it is supposed to yield if other applications require more memory.
Other things worth trying are getting rid of eventlet or other dependent modules when running gunicorn. uWSGI can also be used as an alternative to gunicorn.

WebSphere server startup problem

When I start my WebSphere server 6.1 in debug mode, I get the following error in RAD:
Server WebSphere Application Server v6.1 at localhost was unable to start within 1800 seconds. If the server requires more time, try increasing the timeout in the server editor.
Please help me to resolve this.
I resolved this issue by increasing the startup timeout limit from 1800 seconds to 2000 seconds in the WebSphere server settings.
To do this:
1) Double-click the WebSphere server in RAD.
2) Click the "Timeouts" link.
3) Change the startup limit to something higher than before.
In my case, I changed it from 1800 seconds to 2000 seconds.
Try deleting all breakpoints and then starting the server; then put back the breakpoints you need. I got this error working with Eclipse and Tomcat, and this solution worked for me.

Resources