I am getting "Session expired" in the Perfino UI too often.
Is it possible to increase this timeout? I did not find any related settings in /opt/perfino/perfino.properties.
This is related to WebSockets; unfortunately, there is currently no option to increase this timeout. The communication mechanism is being reworked, so this message will no longer be displayed starting with perfino 2.1.
We have found extreme memory use by an instance of com.vaadin.data.provider.DataCommunicator.
Version: 8.5.2
The application is an event-driven UI over WebSockets.
It appears that this sometimes happens when a user suspends their computer and Vaadin hasn't destroyed the session yet.
A heap dump shows that the Atmosphere connection is still in the CONNECTED state.
The Vaadin heartbeat interval is 60 seconds.
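For reference, here is roughly where that interval lives in our setup; a minimal Vaadin 8 sketch (class names are placeholders). We are wondering whether closeIdleSessions would help here, since it lets sessions expire when only heartbeats are coming in:

import javax.servlet.annotation.WebServlet;

import com.vaadin.annotations.VaadinServletConfiguration;
import com.vaadin.server.VaadinRequest;
import com.vaadin.server.VaadinServlet;
import com.vaadin.ui.Label;
import com.vaadin.ui.UI;

// Minimal sketch: a UI is cleaned up after three missed heartbeats, and
// closeIdleSessions = true additionally lets the whole session expire when
// only heartbeats (no real user activity) arrive within the session timeout.
public class MyUI extends UI {
    @Override
    protected void init(VaadinRequest request) {
        setContent(new Label("placeholder UI"));
    }

    @WebServlet(urlPatterns = "/*", asyncSupported = true)
    @VaadinServletConfiguration(
            ui = MyUI.class,
            productionMode = true,
            heartbeatInterval = 60,   // seconds, matches what we run now
            closeIdleSessions = true)
    public static class Servlet extends VaadinServlet {
    }
}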
Please advise what other info might be useful. This doesn't happen consistently enough to reproduce.
The last couple of days I've tried to debug a network error from d00m. I'm starting to run out of ideas/leads, and my hope is that other SO users have experience that might be useful. I hope to be able to provide all relevant information, but I'm not personally in control of the server environments.
The whole thing started when users noticed a couple of "network errors" in our app. The errors seemed to occur randomly, without any noticeable pattern related to internet connectivity, iOS version or backend updates. The two errors that occur behind the scenes are:
Error Domain=NSURLErrorDomain Code=-1001 "The request timed out."
and more frequently:
Error Domain=kCFErrorDomainCFNetwork Code=-1005 "The network connection was lost."
After debugging for a couple of days, I've managed to reproduce these errors (occurring at random) by firing approx. 10 random (GET and POST) requests at our backend with a random sleep between each request (1-20 seconds). However, the errors only occur in periods. What I've experienced over the last couple of days is that when a "period of errors" starts, I get one of the two errors roughly every first or second run of the code (i.e. an error rate of about 1 in 10 or 1 in 20 requests). This error rate continues for a couple of hours, then the errors disappear for a couple of hours, and then it starts all over.
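Stripped of the iOS specifics, the harness does nothing more than this (sketched here in plain Java rather than our actual NSURLSession code; the endpoint URL is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Random;

// Hypothetical stand-in for the test loop: ~10 random GET/POST requests
// with a random 1-20 second pause between them.
public class Repro {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        Random rnd = new Random();
        for (int i = 0; i < 10; i++) {
            HttpRequest.Builder b = HttpRequest.newBuilder(
                    URI.create("https://backend.example.com/api/ping")); // placeholder
            HttpRequest req = rnd.nextBoolean()
                    ? b.GET().build()
                    : b.POST(HttpRequest.BodyPublishers.ofString("{}")).build();
            try {
                HttpResponse<String> res =
                        client.send(req, HttpResponse.BodyHandlers.ofString());
                System.out.println(req.method() + " -> " + res.statusCode());
            } catch (Exception e) {
                System.out.println(req.method() + " failed: " + e); // the "network error"
            }
            Thread.sleep((1 + rnd.nextInt(20)) * 1000L); // random 1-20 s pause
        }
    }
}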
Some quick facts about the setup:
Happens on device and simulator
Happens on iOS 8.4 and iOS 7.1 - although v. 8.4 is the main one I use for testing.
We use NSURLSession for our network requests. We also have AFNetworking included (updated to the latest version), but we only use its Security part for SSL pinning. Even with SSL pinning turned off entirely, the errors still occur.
Some findings I've written down during the last couple of days:
It seems to happen only on our production environments, which have a somewhat different configuration than our staging environments. This led me to think it might be related to the keep-alive bug discussed here and here. However, our ops department has set up a new staging environment that sends the same keep-alive header as the production environments, and this did not make the error occur on staging.
The Android version of our app was unable to reproduce the error using the same request setup. Furthermore, we've received no customer reports of "network errors" in the Android app.
My gut feeling says it's related to the server environment and the HTTP implementation in iOS. However, I'm unable to track down a convincing pattern that proves anything. I've built the same setup as a simple Rails script, and when the next "error period" occurs, I'll be ready to try to reproduce the problem outside of iOS land. I'll update the question when this happens.
I'm not looking for solutions involving resetting Wi-Fi settings, shutting down the simulator or similar, as I do not see these as feasible solutions for a production environment. I've also considered implementing the retry-loop fix mentioned in the GitHub issue, but I see that as a last resort.
Please let me know if you need any more information.
In my experience, those sorts of problems usually point to massive packet loss, particularly over a cell network, where minor variations in multipath interference and other issues can make the difference between reliably passing traffic and not.
Another possibility that comes to mind is poor-quality NAT implementations, in the unlikely event that your server's timeout interval is long enough to cause the NAT to give up on the TCP connection.
Either way, the only way to know for sure what's happening is to take a packet trace. To do that, connect a Mac to the Internet via a wired connection, enable network sharing over Wi-Fi, and connect the iOS device to that Wi-Fi network. Then run Wireshark and tell it to monitor the bridge interface. Instructions here:
http://www.howtogeek.com/104278/how-to-use-wireshark-to-capture-filter-and-inspect-packets/
From there, you should be able to see exactly what is being sent and when. That will probably go a long way towards understanding why it is failing.
OK, I lost a lot of time investigating a similar issue.
Error -1005 can be caused by a known iOS bug, and there are a couple of fixes. For example, add the header "Connection" with the value "close".
More info
Error -1001 is a different story. In my case the problem was a strange (bad?) firewall on the server. It was banning the device when there were many (not even that many) requests in a short period of time.
I believe you can do an easy test to see whether you are facing a similar issue (a rough standalone sketch follows at the end of this answer):
Send a lot of requests in a loop (say, 50 in 1 second; how many depends on the firewall settings).
Close/kill the app (this will close the connections to the server).
(Optional) Wait a while (let's say 60 seconds).
Start the app again and try to send a request.
If you now get a timeout for every subsequent request, you probably have the same issue and you should talk to the server guys.
PS: if you don't have access to the server, you can tell the user to toggle Wi-Fi on the device to break out of that timeout loop. It could be a last resort in some cases.
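For reference, the burst-and-retry part of that test as a standalone script (a rough Java sketch; the endpoint is a placeholder, and closing/killing the app is a manual step when you test the real app):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Hypothetical firewall probe: hammer the endpoint, wait, then check
// whether a single follow-up request now times out (which would suggest
// the firewall has banned the client).
public class FirewallProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                        URI.create("https://backend.example.com/api/ping")) // placeholder
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();

        // Burst: ~50 requests as fast as possible.
        for (int i = 0; i < 50; i++) {
            try {
                client.send(req, HttpResponse.BodyHandlers.discarding());
            } catch (Exception ignored) {
                // failures during the burst are expected if a ban kicks in
            }
        }

        // Wait a while.
        Thread.sleep(60_000);

        // Does a single request now time out?
        try {
            HttpResponse<Void> res = client.send(req, HttpResponse.BodyHandlers.discarding());
            System.out.println("OK: " + res.statusCode());
        } catch (Exception e) {
            System.out.println("Timed out: " + e); // likely the firewall issue
        }
    }
}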
We are in the process of implementing MSMQ for quick storage of messages so that we can process them in disconnected mode. Typical usage of any message broker.
One of the administration requirements is to send an automatic notification to administrators/developers if the count of unprocessed messages in the queue reaches 1000.
Can it be done out of the box? If yes, then how?
If not, do I need to write a Windows service (or some sort of scheduler) to check the count every x seconds?
Any suggestions or past experience are welcome.
The only (partially) built-in solution would be to set up the MSMQ Queue performance counter, which gives you this information for private queues on the server.
There are a number of other solutions, including a SCOM management pack and some third-party tools like evtools, or you could roll your own using System.Messaging.
Hope this is of help.
There's a commercial solution for this: QueueMonitor.
Disclaimer: I'm the author of that software.
Edit
A few tips for this scenario:
Set the message's UseDeadLetterQueue property to true - this way, if there's any issue with delivering a message, at least it won't be lost but will be moved to the system's dead-letter queue.
Set the message's Recoverable property to true - it does reduce performance, but for this kind of long-running scenario there's too much risk that a restart or failure would lose messages that are only stored in memory.
If messages are no longer valid after some period, you can use TimeToReachQueue to have them deleted automatically.
I have the following scenario:
LR Portal 6.1.20 EE GA2 behind IBM WebSEAL
Staged sites
A custom portlet which needs to publish its contents from staging to live
The custom portlet publishes its contents with a class that extends BasePortletDataHandler and overrides the following methods:
doExportData
doImportData
doDeleteData
isAlwaysExportable
isPublishToLiveByDefault
isAlwaysStaged
This works quite well in development, where there is no WebSEAL: in the control panel, you go to "site pages" and invoke "publish to live".
In production, however, we get WebSEAL timeouts whenever this process takes more than 2 minutes. The process keeps running in the background, but the user has no way of telling whether it's done, whether it worked or whether it failed. He gets no feedback about it whatsoever.
Is there a way to implement a custom portlet for the control panel that takes care of these problems? How do I get/track the status of the process, and how do I keep the session alive?
I don't have any experience with Liferay, but I administer WebSEAL daily, so I can approach your question from that angle. You can increase the timeouts on individual junctions. I have encountered similar scenarios with applications in the past; we have had to go up to a 300-second timeout.
[junction:junction_name]
http-timeout = 300
https-timeout = 300
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=%2Fcom.ibm.itame.doc_6.1.1%2Fam611_webseal_admin95.htm
You may also need to increase the server timeouts:
[server]
client-connect-timeout = 300
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/topic/com.ibm.itame.doc_6.1.1/am611_webseal_admin94.htm?path=3_10_3_3_1_4_0_6_5#http-https-timeouts
The problem is that the application doesn't send any data over the TCP connection while it works, so WebSEAL times out the connection. Unless you can change the way your application works, you'll have to increase the timeouts. Preferably, you would use AJAX or a similar technique to have the client routinely query the server for a status once the procedure is kicked off. However, I once had a customer integrating with us who couldn't change their application code, so I was forced to increase the timeouts for them as well.
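If you can change the application, a minimal sketch of that pattern (plain Java; all names are hypothetical, and the actual wiring into a Liferay portlet, e.g. via a resource URL the page polls, is left out):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: run the publish in the background and let the
// browser poll for a status, so HTTP traffic keeps flowing through
// WebSEAL while the job runs.
public class PublishStatusHolder {
    private static final ExecutorService EXEC = Executors.newSingleThreadExecutor();
    private static final AtomicReference<String> STATUS = new AtomicReference<>("IDLE");

    public static void startPublish(Runnable publishJob) {
        STATUS.set("RUNNING");
        EXEC.submit(() -> {
            try {
                publishJob.run();                    // the staging-to-live publish
                STATUS.set("DONE");
            } catch (Exception e) {
                STATUS.set("FAILED: " + e.getMessage());
            }
        });
    }

    // Called from the AJAX/resource handler the page polls every few seconds;
    // each poll also keeps the WebSEAL junction from idling out.
    public static String currentStatus() {
        return STATUS.get();
    }
}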
My application has a long-running request that takes over a minute. If I'm using Chrome or Firefox, I just need to be patient. If I use IE, however, at the one-minute mark I get a popup saying I've reached a network connection timeout.
Why is that?
The default Internet Explorer timeout is 1 minute. Since your process is a long-running one, IceFaces doesn't send the response in time and the request times out.
You can avoid this by spawning a new thread for your long-running process and returning the response immediately. IceFaces has plenty of polling and push options available to let your client know when the long-running process is done.
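A minimal sketch of that idea (a generic JSF-style backing bean; names are hypothetical, and the IceFaces poll/push wiring itself is left out):

import java.util.concurrent.CompletableFuture;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;

// Hypothetical backing bean: the action returns immediately, the work runs
// in the background, and the page polls isDone() (e.g. with a poll component
// or push) to learn when the job has finished.
@ManagedBean
@SessionScoped
public class LongTaskBean {
    private volatile boolean done;

    public String start() {
        done = false;
        CompletableFuture.runAsync(() -> {
            doLongRunningWork();   // placeholder for the real minute-plus job
            done = true;
        });
        return null; // re-render the view right away, well under IE's one-minute limit
    }

    public boolean isDone() {
        return done;
    }

    private void doLongRunningWork() {
        try {
            Thread.sleep(90_000);  // stand-in for the actual work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}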