Why is my Android app instance re-created? - android-lifecycle

In my Android application I subclass the Application class and start a service. The service has a thread that monitors idle time; if the application remains idle for more than one minute, the service notifies the application, which then locks down the user session.
Here is the code. Even though I am not sending any broadcast yet, I am facing this issue.
import android.app.Application;
import android.content.Intent;
import android.util.Log;

public class MainApp extends Application
{
    @Override
    public void onCreate()
    {
        Log.i("Sharp:MainApp", "Application - OnCreate");
        super.onCreate();
        startInactivityMonitorService();
    }

    public void startInactivityMonitorService()
    {
        // Start the activity monitoring service.
        Log.i("Sharp:MainApp", "Starting the activity monitoring service...");
        Intent intent = new Intent(getApplicationContext(), InactivityMonitorService.class);
        intent.putExtra(InactivityMonitorService.IDLE_TIMEOUT, 15000);
        startService(intent);
        Log.i("Sharp:MainApp", "Completed starting the activity monitoring service.");
    }
}
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.os.SystemClock;

public class InactivityMonitorService extends Service
{
    public static final String IDLE_TIMEOUT = "IDLE_TIMEOUT";

    // Fields referenced below; their declarations were omitted from the original snippet.
    private int idleTimeout;
    private volatile long lastUserInteractionTime;
    private volatile boolean stopMonitoring;
    private InactivityMonitorThread monitorThread;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId)
    {
        // The NullPointerException reported below is thrown here when intent is null.
        idleTimeout = intent.getExtras().getInt(InactivityMonitorService.IDLE_TIMEOUT);
        if (monitorThread == null) {
            monitorThread = new InactivityMonitorThread();
            monitorThread.start();
        }
        // Return value not shown in the original snippet; START_STICKY assumed.
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent)
    {
        // Not a bound service.
        return null;
    }

    private class InactivityMonitorThread extends Thread
    {
        public void run()
        {
            do {
                long idleDuration = System.currentTimeMillis() - lastUserInteractionTime;
                // Check if we have exceeded the idle timeout.
                if (idleDuration > idleTimeout) {
                    // Send broadcast message
                }
                SystemClock.sleep(1000);
            } while (!stopMonitoring);
        }
    }
}
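For reference, the snippet above never shows how lastUserInteractionTime gets refreshed. One typical approach (purely illustrative; UserActivityTracker and reportUserInteraction are hypothetical names, not part of my code) is to report every touch from a shared base Activity:
import android.app.Activity;

// Hypothetical helper holding the timestamp the monitor thread would compare against.
class UserActivityTracker
{
    static volatile long lastUserInteractionTime = System.currentTimeMillis();

    static void reportUserInteraction()
    {
        lastUserInteractionTime = System.currentTimeMillis();
    }
}

public class BaseActivity extends Activity
{
    @Override
    public void onUserInteraction()
    {
        super.onUserInteraction();
        // Invoked by the framework for every touch, key or trackball event
        // dispatched to this Activity.
        UserActivityTracker.reportUserInteraction();
    }
}
In that setup the monitor thread would read UserActivityTracker.lastUserInteractionTime instead of a field local to the service.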
After starting the application, I leave it and keep it in the background. After a few seconds, onCreate of my Application is called again; it tries to start the service and the app crashes!
The crash is caused by a NullPointerException in onStartCommand of the service. Below is the log I received. What concerns me more is: why does my application get re-created?
The following two lines are the evidence that the application is re-created:
Sharp:MainApp(14627): Application - OnCreate
Sharp:MainApp(14627): Starting the activity monitoring service...
10-20 12:07:48.365: I/SurfaceTextureClient(579): [STC::queueBuffer] (this:0x5e364890) fps:0.98, dur:1015.74, max:1015.74, min:1015.74
10-20 12:07:48.365: I/BufferQueue(130): [StatusBar](this:0x42610018,api:1) [queue] fps:0.98, dur:1016.44, max:1016.44, min:1016.44
10-20 12:07:48.366: W/Trace(579): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.366: W/Trace(579): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.370: I/BufferQueue(130): [StatusBar](this:0x42610018,api:1) [release] fps:0.99, dur:1008.45, max:1008.45, min:1008.45
10-20 12:07:48.371: I/SurfaceFlinger(130): [SurfaceFlinger] fps:14.868297,dur:1008.86,max:84.01,min:22.94
10-20 12:07:48.589: V/Provider/Settings(489): from settings cache , name = read_external_storage_enforced_default , value = null
10-20 12:07:48.589: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.589: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.590: D/dalvikvm(131): threadid=2: exiting
10-20 12:07:48.590: D/dalvikvm(131): threadid=2: bye!
10-20 12:07:48.591: D/dalvikvm(131): threadid=3: exiting
10-20 12:07:48.591: D/dalvikvm(131): threadid=3: bye!
10-20 12:07:48.591: D/dalvikvm(131): threadid=4: exiting
10-20 12:07:48.591: D/dalvikvm(131): threadid=4: bye!
10-20 12:07:48.591: D/dalvikvm(131): pre gc
10-20 12:07:48.591: D/dalvikvm(131): Zygote::ForkAndSpecialize +
10-20 12:07:48.594: D/dalvikvm(131): Zygote::ForkAndSpecialize : 14627
10-20 12:07:48.594: D/dalvikvm(131): create interp thread : stack size=32KB
10-20 12:07:48.595: D/dalvikvm(131): create new thread
10-20 12:07:48.595: D/dalvikvm(131): new thread created
10-20 12:07:48.595: D/dalvikvm(131): update thread list
10-20 12:07:48.596: D/dalvikvm(14627): Zygote::ForkAndSpecialize : 0
10-20 12:07:48.596: D/dalvikvm(14627): zygote get new systemTid : 14627
10-20 12:07:48.596: D/dalvikvm(14627): Late-enabling CheckJNI
10-20 12:07:48.597: D/dalvikvm(14627): threadid=2: interp stack at 0x5a858000
10-20 12:07:48.598: D/dalvikvm(14627): threadid=3: interp stack at 0x5a960000
10-20 12:07:48.598: D/jdwp(14627): prepping for JDWP over ADB
10-20 12:07:48.598: D/dalvikvm(131): threadid=2: interp stack at 0x5a758000
10-20 12:07:48.598: D/dalvikvm(131): threadid=2: created from interp
10-20 12:07:48.598: D/dalvikvm(131): start new thread
10-20 12:07:48.598: D/dalvikvm(131): create interp thread : stack size=32KB
10-20 12:07:48.598: D/dalvikvm(131): create new thread
10-20 12:07:48.598: D/dalvikvm(131): new thread created
10-20 12:07:48.598: D/dalvikvm(131): update thread list
10-20 12:07:48.599: D/dalvikvm(131): threadid=2: notify debugger
10-20 12:07:48.599: D/dalvikvm(131): threadid=2 (ReferenceQueueDaemon): calling run()
10-20 12:07:48.599: D/jdwp(14627): ADB transport startup
10-20 12:07:48.599: D/dalvikvm(14627): Elevating priority from 0 to -8
10-20 12:07:48.599: D/dalvikvm(14627): threadid=4: interp stack at 0x5aa68000
10-20 12:07:48.600: D/jdwp(14627): JDWP: thread running
10-20 12:07:48.600: D/jdwp(14627): acceptConnection
10-20 12:07:48.600: D/ADB_SERVICES(284): Adding socket 19 pid 14627 to jdwp process list
10-20 12:07:48.600: D/jdwp(14627): trying to receive file descriptor from ADB
10-20 12:07:48.600: D/dalvikvm(14627): threadid=5: interp stack at 0x5d08f000
10-20 12:07:48.601: D/dalvikvm(14627): zygote get thread init done
10-20 12:07:48.601: D/dalvikvm(131): threadid=3: interp stack at 0x5a860000
10-20 12:07:48.601: D/dalvikvm(131): threadid=3: created from interp
10-20 12:07:48.601: D/dalvikvm(131): start new thread
10-20 12:07:48.601: D/dalvikvm(131): create interp thread : stack size=32KB
10-20 12:07:48.601: D/dalvikvm(131): create new thread
10-20 12:07:48.601: D/dalvikvm(131): new thread created
10-20 12:07:48.601: D/dalvikvm(131): update thread list
10-20 12:07:48.601: D/dalvikvm(131): threadid=3: notify debugger
10-20 12:07:48.602: D/dalvikvm(131): threadid=3 (FinalizerDaemon): calling run()
10-20 12:07:48.602: D/dalvikvm(14627): create interp thread : stack size=32KB
10-20 12:07:48.602: D/dalvikvm(14627): create new thread
10-20 12:07:48.602: D/dalvikvm(14627): new thread created
10-20 12:07:48.602: D/dalvikvm(14627): update thread list
10-20 12:07:48.602: D/dalvikvm(14627): threadid=6: interp stack at 0x5d097000
10-20 12:07:48.602: D/dalvikvm(14627): threadid=6: created from interp
10-20 12:07:48.603: D/dalvikvm(14627): start new thread
10-20 12:07:48.603: D/dalvikvm(14627): create interp thread : stack size=32KB
10-20 12:07:48.603: D/dalvikvm(14627): create new thread
10-20 12:07:48.603: D/dalvikvm(14627): new thread created
10-20 12:07:48.603: D/dalvikvm(14627): update thread list
10-20 12:07:48.603: D/dalvikvm(14627): threadid=6: notify debugger
10-20 12:07:48.603: D/dalvikvm(14627): threadid=6 (ReferenceQueueDaemon): calling run()
10-20 12:07:48.603: W/ADB_SERVICES(284): create_local_service_socket() name=jdwp:14627
10-20 12:07:48.603: W/ADB_SERVICES(284): looking for pid 14627 in JDWP process list return fds0(20) fds1(21)
10-20 12:07:48.603: W/ADB_SERVICES(284): trying to write to JDWP socket=19 pid=14627 count=1 out_fds=21
10-20 12:07:48.604: D/jdwp(14627): received file descriptor 39 from ADB
10-20 12:07:48.604: D/dalvikvm(14627): threadid=7: interp stack at 0x5d19f000
10-20 12:07:48.604: D/dalvikvm(14627): threadid=7: created from interp
10-20 12:07:48.604: D/dalvikvm(131): threadid=4: interp stack at 0x5a968000
10-20 12:07:48.605: D/dalvikvm(131): threadid=4: created from interp
10-20 12:07:48.605: D/dalvikvm(131): start new thread
10-20 12:07:48.605: D/dalvikvm(131): threadid=4: notify debugger
10-20 12:07:48.605: D/dalvikvm(131): threadid=4 (FinalizerWatchdogDaemon): calling run()
10-20 12:07:48.606: I/ActivityManager(489): Start proc com.acs.sharp.app for service com.acs.sharp.app/com.acs.android.fwk.background.InactivityMonitorService: pid=14627 uid=10099 gids={50099, 1028}
10-20 12:07:48.606: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.606: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.608: D/jdwp(14627): processIncoming
10-20 12:07:48.610: D/jdwp(14627): processIncoming
10-20 12:07:48.610: D/jdwp(14627): handlePacket : cmd=0x1, cmdSet=0xC7, len=0x13, id=0x4000005A, flags=0x0, dataLen=0x8
10-20 12:07:48.612: D/dalvikvm(14627): start new thread
10-20 12:07:48.612: D/dalvikvm(14627): create interp thread : stack size=32KB
10-20 12:07:48.612: D/dalvikvm(14627): create new thread
10-20 12:07:48.612: D/dalvikvm(14627): new thread created
10-20 12:07:48.612: D/dalvikvm(14627): update thread list
10-20 12:07:48.612: D/jdwp(14627): processIncoming
10-20 12:07:48.612: D/jdwp(14627): handlePacket : cmd=0x1, cmdSet=0xC7, len=0x17, id=0x4000005B, flags=0x0, dataLen=0xC
10-20 12:07:48.613: D/jdwp(14627): processIncoming
10-20 12:07:48.613: D/jdwp(14627): handlePacket : cmd=0x1, cmdSet=0xC7, len=0x13, id=0x4000005C, flags=0x0, dataLen=0x8
10-20 12:07:48.614: D/jdwp(14627): processIncoming
10-20 12:07:48.614: D/jdwp(14627): handlePacket : cmd=0x1, cmdSet=0xC7, len=0x13, id=0x4000005D, flags=0x0, dataLen=0x8
10-20 12:07:48.614: D/dalvikvm(14627): threadid=7: notify debugger
10-20 12:07:48.614: D/dalvikvm(14627): threadid=7 (FinalizerDaemon): calling run()
10-20 12:07:48.615: D/dalvikvm(14627): threadid=8: interp stack at 0x5d2a7000
10-20 12:07:48.615: D/dalvikvm(14627): threadid=8: created from interp
10-20 12:07:48.615: D/dalvikvm(14627): start new thread
10-20 12:07:48.620: D/dalvikvm(14627): threadid=8: notify debugger
10-20 12:07:48.620: D/dalvikvm(14627): threadid=8 (FinalizerWatchdogDaemon): calling run()
10-20 12:07:48.623: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.626: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.629: D/jdwp(14627): sendBufferedRequest : len=0x3D
10-20 12:07:48.635: D/dalvikvm(14627): threadid=9: interp stack at 0x5d6ad000
10-20 12:07:48.636: D/dalvikvm(14627): threadid=10: interp stack at 0x5d7b5000
10-20 12:07:48.637: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.637: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.638: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.638: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.638: V/ActivityManager(489): Binding process pid 14627 to record ProcessRecord{423159c8 14627:com.acs.sharp.app/u0a10099}
10-20 12:07:48.638: V/ActivityManager(489): New death recipient com.android.server.am.ActivityManagerService$AppDeathRecipient#42b99a70 for thread android.os.BinderProxy#42b341b8
10-20 12:07:48.639: V/ActivityManager(489): New app record ProcessRecord{423159c8 14627:com.acs.sharp.app/u0a10099} thread=android.os.BinderProxy#42b341b8 pid=14627
10-20 12:07:48.639: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.639: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.642: I/ActivityManager(489): No longer want com.mediatek.atci.service (pid 12317): empty for 1800s
10-20 12:07:48.643: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.644: W/Trace(489): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.647: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.650: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.650: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.650: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.650: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.651: D/jdwp(14627): sendBufferedRequest : len=0x3D
10-20 12:07:48.664: D/dalvikvm(14627): open_cached_dex_file : /data/app/com.acs.sharp.app-1.apk /data/dalvik-cache/data#app#com.acs.sharp.app-1.apk#classes.dex
10-20 12:07:48.670: I/Sharp:MainApp(14627): Application - OnCreate
10-20 12:07:48.670: I/Sharp:MainApp(14627): Starting the activity monitoring service...
10-20 12:07:48.671: I/Sharp:MainApp(14627): Completed starting the activity monitoring service.
10-20 12:07:48.672: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.672: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.674: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.674: W/Trace(14627): Unexpected value from nativeGetEnabledTags: 0
10-20 12:07:48.675: D/AndroidRuntime(14627): Shutting down VM
10-20 12:07:48.675: W/dalvikvm(14627): threadid=1: thread exiting with uncaught exception (group=0x413569a8)
10-20 12:07:48.676: E/AndroidRuntime(14627): FATAL EXCEPTION: main
10-20 12:07:48.676: E/AndroidRuntime(14627): java.lang.RuntimeException: Unable to start service com.acs.android.fwk.background.InactivityMonitorService#415fa958 with null: java.lang.NullPointerException
10-20 12:07:48.676: E/AndroidRuntime(14627): at android.app.ActivityThread.handleServiceArgs(ActivityThread.java:2822)
10-20 12:07:48.676: E/AndroidRuntime(14627): at android.app.ActivityThread.access$1900(ActivityThread.java:156)
10-20 12:07:48.676: E/AndroidRuntime(14627): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1437)
10-20 12:07:48.676: E/AndroidRuntime(14627): at android.os.Handler.dispatchMessage(Handler.java:99)
10-20 12:07:48.676: E/AndroidRuntime(14627): at android.os.Looper.loop(Looper.java:153)
10-20 12:07:48.676: E/AndroidRuntime(14627): at android.app.ActivityThread.main(ActivityThread.java:5297)
10-20 12:07:48.676: E/AndroidRuntime(14627): at java.lang.reflect.Method.invokeNative(Native Method)
10-20 12:07:48.676: E/AndroidRuntime(14627): at java.lang.reflect.Method.invoke(Method.java:511)
10-20 12:07:48.676: E/AndroidRuntime(14627): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:833)
10-20 12:07:48.676: E/AndroidRuntime(14627): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:600)
10-20 12:07:48.676: E/AndroidRuntime(14627): at dalvik.system.NativeStart.main(Native Method)
10-20 12:07:48.676: E/AndroidRuntime(14627): Caused by: java.lang.NullPointerException
10-20 12:07:48.676: E/AndroidRuntime(14627): at com.acs.android.fwk.background.InactivityMonitorService.onStartCommand(InactivityMonitorService.java:108)
10-20 12:07:48.676: E/AndroidRuntime(14627): at android.app.ActivityThread.handleServiceArgs(ActivityThread.java:2805)
10-20 12:07:48.676: E/AndroidRuntime(14627): ... 10 more

It looks like the problem is caused by the Advanced Task Killer running on my device, which is scheduled to kill applications and services every 30 seconds. I will un-tick my app in it and do more rigorous testing to confirm this.
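Regardless of what kills the process, onStartCommand should not assume a non-null Intent: when Android restarts a sticky service after its process was killed, it can deliver a null Intent, which is what makes intent.getExtras() throw the NullPointerException shown above. A minimal defensive sketch (field names reused from the question; DEFAULT_IDLE_TIMEOUT is an assumed fallback, not from the original code):
private static final int DEFAULT_IDLE_TIMEOUT = 15000; // assumed fallback; matches the extra MainApp sends

@Override
public int onStartCommand(Intent intent, int flags, int startId)
{
    // A sticky service restarted by the system may receive a null intent,
    // so never dereference intent (or its extras) unconditionally.
    if (intent != null) {
        idleTimeout = intent.getIntExtra(IDLE_TIMEOUT, DEFAULT_IDLE_TIMEOUT);
    } else {
        idleTimeout = DEFAULT_IDLE_TIMEOUT;
    }
    if (monitorThread == null) {
        monitorThread = new InactivityMonitorThread();
        monitorThread.start();
    }
    return START_STICKY;
}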

Related

How to handle RabbitMQ error shortstr_size in Erlang?

I am using the Erlang client library to connect to a local RabbitMQ server with the default connection parameters. I hit this issue roughly once every 24 hours, and my Erlang application is not able to handle it.
My error logger shows these messages:
2017-08-06 12:45:03.883 [error] <0.3739.0>#rabbit_framing_amqp_0_9_1:shortstr_size:210 CRASH REPORT Process <0.3739.0> with 0 neighbours crashed with reason: bad argument in call to erlang:size([]) in rabbit_framing_amqp_0_9_1:shortstr_size/1 line 210
2017-08-06 12:45:03.883 [error] <0.3736.0> Supervisor {<0.3736.0>,amqp_channel_sup} had child writer started with rabbit_writer:start_link(#Port<0.2798>, 1, 131072, rabbit_framing_amqp_0_9_1, <0.3738.0>, {<<"client 127.0.0.1:56646 -> 127.0.0.1:5672">>,1}) at <0.3739.0> exit with reason badarg in context child_terminated
2017-08-06 12:45:03.900 [error] <0.3736.0> Supervisor {<0.3736.0>,amqp_channel_sup} had child writer started with rabbit_writer:start_link(#Port<0.2798>, 1, 131072, rabbit_framing_amqp_0_9_1, <0.3738.0>, {<<"client 127.0.0.1:56646 -> 127.0.0.1:5672">>,1}) at <0.3739.0> exit with reason reached_max_restart_intensity in context shutdown
2017-08-06 12:45:04.514 [error] <0.3754.0>#rabbit_framing_amqp_0_9_1:shortstr_size:210 CRASH REPORT Process <0.3754.0> with 0 neighbours crashed with reason: bad argument in call to erlang:size([]) in rabbit_framing_amqp_0_9_1:shortstr_size/1 line 210
2017-08-06 12:45:04.514 [error] <0.3751.0> Supervisor {<0.3751.0>,amqp_channel_sup} had child writer started with rabbit_writer:start_link(#Port<0.2819>, 1, 131072, rabbit_framing_amqp_0_9_1, <0.3753.0>, {<<"client 127.0.0.1:49559 -> 127.0.0.1:5672">>,1}) at <0.3754.0> exit with reason badarg in context child_terminated
2017-08-06 12:45:04.515 [error] <0.3751.0> Supervisor {<0.3751.0>,amqp_channel_sup} had child writer started with rabbit_writer:start_link(#Port<0.2819>, 1, 131072, rabbit_framing_amqp_0_9_1, <0.3753.0>, {<<"client 127.0.0.1:49559 -> 127.0.0.1:5672">>,1}) at <0.3754.0> exit with reason reached_max_restart_intensity in context shutdown
2017-08-06 12:45:04.846 [error] <0.3768.0>#rabbit_framing_amqp_0_9_1:shortstr_size:210 CRASH REPORT Process <0.3768.0> with 0 neighbours crashed with reason: bad argument in call to erlang:size([]) in rabbit_framing_amqp_0_9_1:shortstr_size/1 line 210
2017-08-06 12:45:04.846 [error] <0.3765.0> Supervisor {<0.3765.0>,amqp_channel_sup} had child writer started with rabbit_writer:start_link(#Port<0.2821>, 1, 131072, rabbit_framing_amqp_0_9_1, <0.3767.0>, {<<"client 127.0.0.1:60413 -> 127.0.0.1:5672">>,1}) at <0.3768.0> exit with reason badarg in context child_terminated
2017-08-06 12:45:04.846 [error] <0.3765.0> Supervisor {<0.3765.0>,amqp_channel_sup} had child writer started with rabbit_writer:start_link(#Port<0.2821>, 1, 131072, rabbit_framing_amqp_0_9_1, <0.3767.0>, {<<"client 127.0.0.1:60413 -> 127.0.0.1:5672">>,1}) at <0.3768.0> exit with reason reached_max_restart_intensity in context shutdown
2017-08-06 12:45:05.154 [error] <0.3782.0>#rabbit_framing_amqp_0_9_1:shortstr_size:210 CRASH REPORT Process <0.3782.0> with 0 neighbours crashed with reason: bad argument in call to erlang:size([]) in rabbit_framing_amqp_0_9_1:shortstr_size/1 line 210
2017-08-06 12:45:05.154 [error] <0.3779.0> Supervisor {<0.3779.0>,amqp_channel_sup} had child writer started with rabbit_writer:start_link(#Port<0.2823>, 1, 131072, rabbit_framing_amqp_0_9_1, <0.3781.0>, {<<"client 127.0.0.1:36301 -> 127.0.0.1:5672">>,1}) at <0.3782.0> exit with reason badarg in context child_terminated
2017-08-06 12:45:05.154 [error] <0.3779.0> Supervisor {<0.3779.0>,amqp_channel_sup} had child writer started with rabbit_writer:start_link(#Port<0.2823>, 1, 131072, rabbit_framing_amqp_0_9_1, <0.3781.0>, {<<"client 127.0.0.1:36301 -> 127.0.0.1:5672">>,1}) at <0.3782.0> exit with reason reached_max_restart_intensity in context shutdown
2017-08-06 12:45:05.484 [error] <0.3796.0>#rabbit_framing_amqp_0_9_1:shortstr_size:210 CRASH REPORT Process <0.3796.0> with 0 neighbours crashed with reason: bad argument in call to erlang:size([]) in rabbit_framing_amqp_0_9_1:shortstr_size/1 line 210
2017-08-06 12:45:05.484 [error] <0.3793.0> Supervisor {<0.3793.0>,amqp_channel_sup} had child writer started with rabbit_writer:start_link(#Port<0.2825>, 1, 131072, rabbit_framing_amqp_0_9_1, <0.3795.0>, {<<"client 127.0.0.1:34055 -> 127.0.0.1:5672">>,1}) at <0.3796.0> exit with reason badarg in context child_terminated
I am using this library:
https://github.com/jbrisbin/amqp_client
because it has rebar support, and I am using rebar for my project.
Maybe I have written something wrong, as I am very new to Erlang and this is my first live Erlang project. I am using ranch as the TCP acceptor, and my client handler is a gen_fsm.
Here is the code snippet I am using:
init({Ref, Socket, Transport, Mod, _Opts = []}) ->
    process_flag(trap_exit, true),
    ok = ranch:accept_ack(Ref),
    ok = Transport:setopts(Socket, [{active, once}]),
    {ok, {RemoteIp, _Port}} = inet:peername(Socket),
    lager:info("New Client Connection From ~w Socket ~w", [RemoteIp, Socket]),
    {ok, RabbitConnection} = amqp_connection:start(#amqp_params_network{}),
    {ok, RabbitChannel} = amqp_connection:open_channel(RabbitConnection),
    InitTimerRef = erlang:start_timer(30000, self(), session_init_timer_laps),
    gen_fsm:enter_loop(?MODULE, [], open,
                       #state{socket = Socket, transport = Transport, buffer = <<>>, mod = Mod,
                              timers = #session_timers{session_init_timer = InitTimerRef},
                              sequence_number = 1, rabbitmq_conn = RabbitConnection,
                              rabbitmq_channel = RabbitChannel, remote_ip = RemoteIp}).
Also, whenever I get this issue, my ranch socket listener throws the error below:
<0.815.0> Ranch acceptor reducing accept rate: out of file descriptors
You ran out of file descriptors, which means you are opening too many resources without closing them (most likely connections).
You can increase your file descriptor limit, but you need to monitor your client and check the resources it is using.
NOTE: the client you are using is a bit old; the official RabbitMQ AMQP client is on the Hex repository (https://hex.pm/packages/amqp_client).
You should use it in your project:
{amqp_client, "3.6.10"}

My RoR + Nginx + Passenger + AWS setup is returning 502 Bad Gateway

I am developing the backend for an Android service with RoR + Nginx + Passenger and Amazon Web Services.
The server was running fine, but yesterday it suddenly went down.
I've tried to solve the problem, but I couldn't.
Here's my problem.
First, the EC2 instance's log/production.log doesn't record any log entries.
I get a 502 Bad Gateway error message in my Android app's log.
Second, all requests, both HTTP and HTTPS, are answered with 502 Bad Gateway.
Third, my AWS load balancer is logging 50X errors.
I guess this problem is related to the ELB, but I don't know how to solve it.
I have an SSL certificate, and I have only one EC2 instance.
Any help will be appreciated.
Edited
2017-02-22 00:22:30.9139 25313/7f222067b700 age/Cor/Con/CheckoutSession.cpp:269 ]: [Client 1-356202] Returning HTTP 503 due to: Request queue full (configured max. size: 100)
[ 2017-02-22 00:24:00.3704 25313/7f221ee09700 age/Cor/App/Implementation.cpp:304 ]: Could not spawn process for application /home/ec2-user/popcake: An error occurred while starting up the preloader: it did not write a startup response in time.
Error ID: 10045ede
Error details saved to: /tmp/passenger-error-JS1Bnt.html
Message from application: An error occurred while starting up the
preloader: it did not write a startup response in time. Please read this article for more information about this problem.<br>
<h2>Raw process output:</h2>
(empty)
[ 2017-02-22 00:24:00.4527 25313/7f222067b700 age/Cor/Con/CheckoutSession.cpp:285 ]: [Client 1-353433] Cannot checkout session because a spawning error occurred. The identifier of the error is 10045ede. Please see earlier logs for details about the error.
[ 2017-02-22 00:30:29.0872 25313/7f222067b700 age/Cor/CoreMain.cpp:532 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown)
[ 2017-02-22 00:30:29.1173 25313/7f2227bfc840 age/Cor/CoreMain.cpp:901 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected...
[ 2017-02-22 00:30:29.1174 25313/7f221fe7a700 Ser/Server.h:817 ]: [ApiServer] Freed 0 spare client objects
[ 2017-02-22 00:30:29.1174 25313/7f221fe7a700 Ser/Server.h:464 ]: [ApiServer] Shutdown finished
[ 2017-02-22 00:30:29.2400 25318/7f77237e2700 age/Ust/UstRouterMain.cpp:422 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown)
[ 2017-02-22 00:30:29.2741 25318/7f772a775840 age/Ust/UstRouterMain.cpp:492 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected...
[ 2017-02-22 00:30:29.3043 25318/7f77237e2700 Ser/Server.h:464 ]: [UstRouter] Shutdown finished
[ 2017-02-22 00:30:29.3042 25318/7f7722fe1700 Ser/Server.h:817 ]: [UstRouterApiServer] Freed 0 spare client objects
[ 2017-02-22 00:30:29.3048 25318/7f7722fe1700 Ser/Server.h:464 ]: [UstRouterApiServer] Shutdown finished
[ 2017-02-22 00:30:29.3720 25318/7f772a775840 age/Ust/UstRouterMain.cpp:523 ]: Passenger UstRouter shutdown finished
[ 2017-02-22 00:30:29.4023 25313/7f222067b700 age/Cor/CoreMain.cpp:532 ]: Signal received. Gracefully shutting down... (send signal 1 more time(s) to force shutdown)
If you look at the log you added, it says "Request queue full": basically you have some APIs that are long-running, and 50x is returned once the queue fills up.
Restarting nginx might fix the issue for now, but you need to look at the response times of your APIs, find the slow ones, and fix them; otherwise this will be a recurring issue and you will have to restart nginx every day.

UWSGI count processes killed with signal 9 (indirectly counting invocations of OOM killer)

I'm running UWSGI on a server and trying to track when worker processes get OOMed without using dmesg since that requires root privileges. In this environment, if a child was killed with SIGKILL, it's a safe assumption that the OOM killer did that.
UWSGI reports in its logs what signal a child was killed with. This issue (https://github.com/unbit/uwsgi/issues/25) shows an example of logs where a child was reported to have exited with signal 9.
Example:
Oct 20 18:54:28 localhost app: DAMN ! worker 2 (pid: 16100) died, killed by signal 9 :( trying respawn ...
Here's the line of code in UWSGI that's responsible for this message:
if (WIFSIGNALED(waitpid_status)) {
    uwsgi_log("DAMN ! worker %d (pid: %d) died, killed by signal %d :( trying respawn ...\n", thewid, (int) diedpid, (int) WTERMSIG(waitpid_status));
}
https://github.com/unbit/uwsgi/blob/65a8d676f3e63a04b07fdcb4e1f92bb6502f024d/core/master.c#L1074
Is there a way to count the number of child processes killed with SIGKILL and surface it as a metric in the metrics subsystem? I'm also wondering whether a child that exceeds the harakiri timeout is counted as being killed by a signal.
UWSGI does seem to keep a per-worker signal count (e.g. "signals": 0), but I'm not sure exactly what that field is counting.
Example from same GitHub issue:
"pid": 11360,
"requests": 294,
"respawn_count": 38,
"rss": 226373632,
"running_time": 628263,
"signals": 0,
"status": "cheap",
"tx": 5178,
"vsz": 380694528

ODBC crashes after 8 hours of inactivity

I use odbc to interface with MySQL. I start odbc with the following code:
ConnectString = "Driver={MySQL ODBC 5.1 Driver};Server=localhost;Database=mydb; User=userdb;Password=pwddb;Option=3;",
case odbc:connect(ConnectString, [{scrollable_cursors, off}]) of ...
After 8 hours of inactivity (more or less), odbc crashes:
=CRASH REPORT==== 22-Jun-2012::02:09:27 ===
crasher:
initial call: odbc:init/1
pid: <0.113.0>
registered_name: []
exception exit: {stopped,{'EXIT',<0.108.0>,killed}}
in function gen_server:terminate/6 (gen_server.erl, line 737)
ancestors: [odbc_sup,<0.111.0>]
messages: [{'EXIT',#Port<0.967>,normal}]
links: [<0.112.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 377
stack_size: 24
reductions: 2237
neighbours:
Is a connection time-limited?
MySQL has a variable, wait_timeout, that controls how long the server will wait for a client to do something. The default value is 28800 seconds, which happens to be exactly 8 hours, so you might want to check that setting in your server configuration and increase it.
Other than that, you should let your worker terminate and have the supervisor restart it as normal, or (if using a gen_server or gen_fsm) set up a timeout that issues a query or a ping every hour or so, to keep the connection, and thus the worker, alive.
Best!

nginx + passenger + rails 3.1 = 502 bad gateway?

I have the latest Nginx running with Passenger, SQLite and Rails 3.1. Somehow, after Passenger has been running for a while, I start getting "502 Bad Gateway" errors when visiting my website.
Here is a snippet from my Nginx error log:
2011/06/27 08:55:33 [error] 20331#0: *11270 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xx.x, server: www.example.com, request: "GET / HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "example.com"
2011/06/27 08:55:47 [info] 20331#0: *11273 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: xxx.xxx.xx.x, server: www.example.com, request: "GET / HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "example.com"
Here is my passenger-status --show=backtraces output:
Thread 'Client thread 7':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 10':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 11':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 12':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 13':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 14':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 15':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 16':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 17':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 18':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 19':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 20':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 21':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 22':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 23':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'Client thread 24':
in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
in 'void Client::threadMain()' (HelperAgent.cpp:603)
Thread 'MessageServer thread':
in 'void Passenger::MessageServer::mainLoop()' (MessageServer.h:537)
Thread 'MessageServer client thread 35':
in 'virtual bool Passenger::BacktracesServer::processMessage(Passenger::MessageServer::CommonClientContext&, boost::shared_ptr<Passenger::MessageServer::ClientContext>&, const std::vector<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&)' (BacktracesServer.h:47)
in 'void Passenger::MessageServer::clientHandlingMainLoop(Passenger::FileDescriptor&)' (MessageServer.h:470)
This is what my passenger-memory-stats shows:
---------- Nginx processes ----------
PID PPID VMSize Private Name
-------------------------------------
16291 1 35.4 MB 0.1 MB nginx: master process /home/apps/.nginx/sbin/nginx
16292 16291 36.0 MB 0.8 MB nginx: worker process
16293 16291 35.8 MB 0.5 MB nginx: worker process
16294 16291 35.8 MB 0.5 MB nginx: worker process
16295 16291 35.8 MB 0.5 MB nginx: worker process
### Processes: 5
### Total private dirty RSS: 2.46 MB
----- Passenger processes ------
PID VMSize Private Name
--------------------------------
16251 87.0 MB 0.3 MB PassengerWatchdog
16254 100.4 MB 1.3 MB PassengerHelperAgent
16256 41.6 MB 5.7 MB Passenger spawn server
16259 134.8 MB 0.8 MB PassengerLoggingAgent
18390 770.4 MB 17.1 MB Passenger ApplicationSpawner: /home/apps/manager/current
18415 853.3 MB 147.7 MB Rack: /home/apps/manager/current
18424 790.5 MB 57.2 MB Rack: /home/apps/manager/current
18431 774.7 MB 18.7 MB Rack: /home/apps/manager/current
### Processes: 8
### Total private dirty RSS: 248.85 MB
It seems there is an issue with the communication between Passenger and Nginx?
Also, looking at the Rails logs, it is clear that the request never reaches Rails at all, as there are no log entries for the visits that get the 502 error. So my initial suspicion that something is wrong with a Rack middleware cannot be right.
The "V" in VM is for Virtual. See also answers on other SO questions, e.g. Virtual Memory Usage from Java under Linux, too much memory used.
That top 147 MB does not hint of anything unusual whatsoever. Your 502 errors mean something else is wrong with the worker processes from Passenger's point of view. You should check your Rails & Nginx log files for clues, and perhaps passenger-status --show=backtraces.
Try setting passenger_spawn_method conservative; apparently there are issues with Passenger's default forking settings and Rails 3.1.
I just ran into the same deadly "502 Bad Gateway" error reported by nginx. My web stack is Ubuntu 12.04 + Rails 3.2.9 + Passenger 3.0.18 + nginx 1.2.4, and it took me 2 hours to find the root cause:
My Rails application does not need database support, so I simply removed the gem 'sqlite3' from the Gemfile. That works fine in development mode, but leads to 502 Bad Gateway in production mode.
After adding gem 'sqlite3' back to the Gemfile, the 502 Bad Gateway error disappeared.
I had the same problem and in my case it helped to increase the passenger_max_pool_size setting in the Nginx configuration file.
Maybe you can also take a look at the following postings, which also helped me find this solution:
http://whowish-programming.blogspot.com/2011/10/nginx-502-bad-gateway.html
https://groups.google.com/forum/?fromgroups#!topic/phusion-passenger/fgj6_4VdlLo and
https://groups.google.com/forum/?fromgroups#!topic/phusion-passenger/wAQDCrFHHgE
It was the same for me in Rails 4, but I fixed it by adding a secret_key_base in config/secrets.yml:
production:
  secret_key_base: # add yours here

Resources