How to change the socket timeout value in Jira?

We are trying to change the socket timeout in Jira because some of the REST API calls take too long to respond, which leaves us with a Request Timeout error.
To change it, we tried the following, but none of the attempts worked:
We made changes in the General Configuration settings by following this article.
We followed another article and changed the JVM_SUPPORT_RECOMMENDED_ARGS parameter to increase the socket timeout; these are our observations:
When we set JVM_SUPPORT_RECOMMENDED_ARGS to 20000 ms (20 s), anything that takes longer than 20 s fails; everything below that works fine.
When we set JVM_SUPPORT_RECOMMENDED_ARGS to any value between 2 min and 14 min, delays above 50 s give an error; project creation works fine for delays of up to 50 s.
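Roughly, we append the argument in setenv.sh like this (the property name below is only a placeholder, not the exact property we used):
# in <jira-install>/bin/setenv.sh -- "-Dplaceholder.socket.timeout" stands in for the real property
JVM_SUPPORT_RECOMMENDED_ARGS="${JVM_SUPPORT_RECOMMENDED_ARGS} -Dplaceholder.socket.timeout=120000"
export JVM_SUPPORT_RECOMMENDED_ARGS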
The snapshot of the setenv.sh file where we made our changes:
Please suggest how to increase the socket timeout so that we do not get a Request Timeout for a delay of around 2 minutes.
Any suggestions would be helpful.

If you use unproxied access to your Jira (i.e. via the Tomcat bundled with Jira), then you also have to change the timeout in the Tomcat configuration: <jira-install>/conf/server.xml.
The relevant setting is on Tomcat's connector:
<Connector port="8080" connectionTimeout="20000" ...
                       ^^^^^^^^^^^^^^^^^^^^^^^^^
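To cover delays of around 2 minutes you would simply raise that same attribute, for example (180000 ms is just a value with some margin, pick whatever fits your case):
<Connector port="8080" connectionTimeout="180000" ...
Tomcat has to be restarted for changes to server.xml to take effect.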
Notes:
Also feel free to contact official Atlassian Support if you have a paid subscription.
Also take a look at the logs to find out why project creation takes so long and times out (that is not normal; creating a project should be a matter of seconds).

Related

Production website becomes unresponsive on certain pages

I have a weird issue that just started popping up for our customers. The portal they've been using for years has started freezing on some of the pages the user navigates to. I tried restarting the IIS server, the site within it, and the application pool the site is running under. No difference.
In Chrome DevTools I can see that it is always one of these three calls that takes a long time to complete:
When it happens, one of those three calls reports that the request has not finished, like this:
When the call eventually completes, I can see that the Content Download took 3.8 minutes. Not sure whether it is relevant or not, but it is always 3.8 minutes:
Did anyone else encounter a similar situation? Is there a suggestion on how to figure out what is suddenly triggering this type of behaviour?
TIA,
Ed
Edit: The resource that fails to load after 3.8 minutes always generates a net::ERR_CONNECTION_RESET error:
Edit 2: Thanks to all of you trying to help. A little update: I was able to isolate the problem to the server not serving some of the files, either *.css or *.js. The setup is two identical servers placed behind a load balancer. Apparently, the load-balancer software was recently updated, and right after that we started having these issues. I am working closely with our client's IT department, trying to figure out how the newer version triggered all this drama.

Configuring a timeout in IIS 8.5

I've got a site with an export feature. This feature can export parts of the database, up to a full database export. It has been optimized a lot but still needs some 90-180 seconds to finish. When debugging, timeouts aren't an issue, but live I receive a 504 Gateway Timeout error after about 90 seconds. I am guessing that IIS gets tired of waiting for the backend to respond and returns a 504. Is there any way to specify a longer timeout, e.g. 5 minutes?
I've got an old executionTimeout setting set to 3600, which doesn't seem to do much any more (I believe it's an IIS <7 setting).
I've also tried this suggestion from another Stack Overflow question:
<configuration>
  <system.applicationHost>
    <webLimits connectionTimeout="00:01:00"
               dynamicIdleThreshold="150"
               headerWaitTimeout="00:00:30"
               minBytesPerSecond="500"/>
  </system.applicationHost>
</configuration>
The above doesn't work; it breaks the config file. Is it supposed to work?
Main question: how/can I increase waiting time in IIS to avoid 504s?
This wasn't an IIS issue. I believe the old way of providing a timeout worked. We've got a reverse proxy with a short timeout; changing that to a longer timeout effectively resolved the issue.
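For anyone else landing here: the old executionTimeout setting mentioned above is the ASP.NET-level knob; it is given in seconds and is only enforced when compilation debugging is off. A minimal web.config sketch (values are illustrative, not our actual configuration):
<configuration>
  <system.web>
    <!-- executionTimeout is in seconds and only applies when debug="false" -->
    <compilation debug="false" />
    <httpRuntime executionTimeout="300" />
  </system.web>
</configuration>
The <system.applicationHost>/<webLimits> snippet from the question, on the other hand, can only be defined at server level in applicationHost.config, which is most likely why pasting it into web.config broke the configuration.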

HBase 0.98.1 Put operations never time out

I am using version 0.98.1 of the HBase server and client. My application has strict response-time requirements. As far as HBase is concerned, I would like to abort an HBase operation if its execution exceeds 1 or 2 seconds. Such a task timeout is useful in case a region server is non-responsive or has crashed.
I tried configuring
1) HBASE_RPC_TIMEOUT_KEY = "hbase.rpc.timeout";
2) HBASE_CLIENT_RETRIES_NUMBER = "hbase.client.retries.number";
However, the Put operations never time out (I am using sync flush). The operations return only after the Put is successful.
I looked through the code and found that the receiveGlobalFailure function in the AsyncProcess class keeps resubmitting the task without any check on the retries. This is in version 0.98.1.
I do see that in 0.99.1 there have been some changes to the AsyncProcess class that might do what I want. I have not verified it, though.
My questions are:
1) Is there any other configuration I missed that can give me the desired functionality?
2) Do I have to use the 0.99.1 client to solve my problem? Does 0.99.1 actually solve it?
3) If I have to use the 0.99.1 client, do I also have to use a 0.99.1 server, or can I keep my existing 0.98.1 region servers?
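For reference, this is roughly how I am wiring those settings into the client (simplified; the table, family and qualifier names are placeholders):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class FastFailPut {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.rpc.timeout", 2000);        // abort a single RPC after 2 s
        conf.setInt("hbase.client.retries.number", 1); // do not keep resubmitting
        // hbase.client.operation.timeout is a further key that newer clients honour as an
        // overall cap; I have not confirmed whether the 0.98.1 client applies it.

        HConnection connection = HConnectionManager.createConnection(conf);
        HTableInterface table = connection.getTable("my_table"); // placeholder table name
        try {
            Put put = new Put(Bytes.toBytes("row-1"));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
            table.put(put); // expected to fail fast with the settings above, but it never times out
        } finally {
            table.close();
            connection.close();
        }
    }
}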

Icinga - check_yum - Socket Timeout?

I'm using the check_yum plugin in my Icinga monitoring environment to check whether security-critical updates are available. This works quite well, but sometimes I get a "CHECK_NRPE: Socket timeout after xx seconds." while executing the check. Currently my NRPE timeout is 30 seconds.
If I re-schedule the check a few times, or execute it directly from my Icinga server with a higher NRPE timeout value, everything works fine, at least after a few executions of the check. All other checks via NRPE are not throwing any errors, so I think there is no general problem with my NRPE config or the plugins I'm using. Is there some explanation for this strange behaviour of the check_yum plugin? Maybe some caching issue on the monitored servers?
First, be sure you are using the 1.0 version of this check from: https://code.google.com/p/check-yum/downloads/detail?name=check_yum_1.0.0&can=2&q=
The changes I've seen in that version could fix this issue, depending on its root cause.
Second, if your server(s) are not configured to use all 'local' cache repos, then this check will likely time out before the 30-second deadline, because 1) the amount of data from the refresh/update is pretty large and may take a long time to download from remote (including RH proper) servers, and 2) most of the 'official' update servers tend to go offline a lot.
The best solution I've found is to have a cron job perform your update check at a set interval (I use weekly) and write a log file containing the security patches the system(s) require. Then use a Nagios check, via a simple shell script, to see if said file has any new items in it, along the lines sketched below.
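A rough sketch of that approach (paths, the schedule, and the match pattern are placeholders, not taken from my actual setup):
# /etc/cron.d/yum-security -- weekly, on the monitored host; needs the yum security plugin installed
0 3 * * 1  root  yum --security check-update > /var/log/yum-security.log 2>&1

#!/bin/sh
# NRPE-side check script: go CRITICAL if the weekly log lists pending security updates
LOG=/var/log/yum-security.log
[ -f "$LOG" ] || { echo "UNKNOWN: $LOG missing"; exit 3; }
COUNT=$(grep -c -E '\.(noarch|x86_64|i686)[[:space:]]' "$LOG")
if [ "$COUNT" -gt 0 ]; then
    echo "CRITICAL: $COUNT pending security update(s)"; exit 2
fi
echo "OK: no pending security updates"; exit 0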

My app fails to connect to the server sometimes

I've been helplessly observing this problem for a couple months now, and have decided this is my best shot.
I'm not sure what the cause of the problem is, but I can list some of the things I'm doing. I have an iOS app that uses AFNetworking to connect to a remote server hosted by Google App Engine using HTTP POST requests.
Now, everything works great, but sometimes, very sporadically and randomly, I get failed requests. The activity indicator spins and spins for about a minute, and I get no feedback at the end - just a failed request. I check my server logs and I don't see any errors. After the failed request, I try again, and it works fine. It works fine for the whole day. And then, at some other random time, the issue repeats itself, sometimes spinning for 10 seconds before failing, sometimes for a minute.
Generally, what could possibly be the cause of this? Is it normal for some connections to fail randomly? Is it something on my end?
But the weird thing is that while the app is running on my iPhone, the indicator is spinning, and it's trying to connect, I can try connecting from the iOS Simulator and the connection works just fine. I try again on the iPhone, and it doesn't work.
If I close the app completely and start it again, then it works again. So it sounds like it may be a software issue rather than a connection issue, but then again I have no evidence or data whatsoever.
I know it's vague, but I'm hoping someone may have had a similar problem. Anything helps.
There is a known issue with instance start-up on GAE for Java. You can star the issue at http://code.google.com/p/googleappengine/issues/detail?id=7706.
The same problem was reported for Python, but there it is not such a big problem.
I think you should check the logging level you use on App Engine and monitor all your calls. Instance start-up usually takes more time, so you will be able to see how much time is spent on start-up and whether it really is a timeout problem.
For the Java version you could try changing the log level to debug, which java.util.logging calls FINE:
.level = FINE
in your logging.properties file. It will give you more information about the instance start-up process.
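On GAE for Java that file only takes effect if appengine-web.xml points at it; assuming the file sits in WEB-INF, the reference looks like this:
<system-properties>
  <property name="java.util.logging.config.file" value="WEB-INF/logging.properties"/>
</system-properties>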
