Lighttpd error sockets disabled, out-of-fds - fastcgi

I have been using lighttpd for a long time, since it is very fast for busy web applications running PHP over FastCGI.
Two or three months ago I started to occasionally get this error:
2015-07-06 17:58:55: (server.c.1398) [note] sockets enabled again
2015-07-06 17:58:55: (server.c.1446) [note] sockets disabled, out-of-fds
2015-07-06 17:58:59: (server.c.1398) [note] sockets enabled again
2015-07-06 17:58:59: (server.c.1446) [note] sockets disabled, out-of-fds
2015-07-06 17:59:05: (server.c.1398) [note] sockets enabled again
2015-07-06 17:59:05: (server.c.1446) [note] sockets disabled, out-of-fds
2015-07-06 17:59:10: (server.c.1398) [note] sockets enabled again
2015-07-06 17:59:10: (server.c.1446) [note] sockets disabled, out-of-fds
2015-07-06 17:59:14: (server.c.1398) [note] sockets enabled again
2015-07-06 17:59:14: (server.c.1446) [note] sockets disabled, out-of-fds
2015-07-06 17:59:18: (server.c.1398) [note] sockets enabled again
2015-07-06 17:59:18: (server.c.1446) [note] sockets disabled, out-of-fds
2015-07-06 17:59:22: (server.c.1398) [note] sockets enabled again
2015-07-06 17:59:22: (server.c.1446) [note] sockets disabled, out-of-fds
2015-07-06 17:59:26: (server.c.1398) [note] sockets enabled again
2015-07-06 17:59:26: (server.c.1446) [note] sockets disabled, out-of-fds
I was using version 1.4.29, and after upgrading to 1.4.35 the problem persists. Since then I have looked into many different solutions and found nothing that helps.
Some relevant config info:
h2. lighttpd.conf
server.username = "lighttpd"
server.groupname = "lighttpd"
server.event-handler = "linux-sysepoll"
server.max-fds = 4096 #same as ulimit -n
server.max-connections = 2048
server.stat-cache-engine = "simple"
server.max-keep-alive-idle = 5
server.max-keep-alive-requests = 4
server.max-read-idle = 30
server.max-write-idle = 360
h2. fast-cgi.conf
server.modules += ( "mod_fastcgi" )
fastcgi.server = ( ".php" =>
  ( "php-local" =>
    (
      "socket" => "/tmp/php-fastcgi-1.socket",
      # "socket" => "/tmp/php-fastcgi-1.socket"+var.PID,
      "bin-path" => "/usr/bin/php-cgi",
      "max-procs" => 30,
      "broken-scriptfilename" => "enable",
    )
  ),
  ( "php-tcp" =>
    (
      "host" => "127.0.0.1",
      "port" => 9999,
      "check-local" => "disable",
      "broken-scriptfilename" => "enable",
    )
  ),
  ( "php-tcp2" =>
    (
      "host" => "127.0.0.1",
      "port" => 9998,
      "check-local" => "disable",
      "broken-scriptfilename" => "enable",
    )
  ),
  ( "php-num-procs" =>
    (
      "socket" => "/tmp/php-fastcgi-2.socket",
      "bin-path" => "/usr/bin/php-cgi",
      "max-procs" => 30,
      "bin-environment" => (
        "PHP_FCGI_CHILDREN" => "30",
        "PHP_FCGI_MAX_REQUESTS" => "2048",
      ),
      "broken-scriptfilename" => "enable",
    )
  ),
)
This server is dedicated to that application.
It has been running for more than four years with at least 2 lighttpd PHP applications handling large volumes of traffic; the record was in 2013, with 14,000 unique visitors and 85,000 page views, and even at that peak we had no problems with this limit.
Today it averages 8,000 visitors and 44,000 page views per day.
What's wrong?

lighttpd 1.4 keeps an internal count of (some of) the file descriptors it has opened, and it disables the server sockets when that internal count (cur_fds + want_fds -- line 1408 of src/server.c) reaches 90% of server.max-fds from lighttpd.conf.
It is possible (unsubstantiated) that the internal count does not match the actual number of file descriptors in use.
If this happens again, would you check how many fds are actually used by the lighttpd server process? If the lighttpd server process has pid 1234, then check the number of open fds of a process on Linux with
ls -1 /proc/1234/fd/ | wc -l
If the result is much less than 90% of server.max-fds (which you have set to 4096), that would suggest a bug in lighttpd's internal fd count. You can go to http://redmine.lighttpd.net/projects/lighttpd/issues and register for an account; then you'll be able to submit a bug report.
Again, if this is the problem, you might also look at your application to see how frequently it is encountering fatal errors, dying, and being restarted.
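If you want to watch this live the next time it happens, here is a rough shell sketch along those lines (assuming the parent lighttpd process and the server.max-fds = 4096 from the config above; adjust both if yours differ):
PID=$(pgrep -o lighttpd)         # oldest lighttpd process, i.e. the server itself
LIMIT=4096                       # server.max-fds from lighttpd.conf
THRESHOLD=$((LIMIT * 90 / 100))  # lighttpd disables its server sockets around this count
while sleep 5; do
  FDS=$(ls -1 /proc/"$PID"/fd/ | wc -l)
  echo "$(date '+%F %T') open fds: $FDS of ~$THRESHOLD before sockets are disabled"
done
If the logged count stays far below the threshold while the "sockets disabled, out-of-fds" messages keep appearing, that supports the internal-count theory above; if it climbs close to the threshold, the fds really are being consumed (for example by hung FastCGI backends).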

Related

UWSGI Works Within Network But Not Over Domain

I have a Raspberry Pi running NGINX and uWSGI, serving a web page and an API via uWSGI.
The web page works fine, both locally and from the web.
The API works locally, but not via the web. My guess is that it's either the router or the NGINX configuration (see the diagnostic sketch after the uWSGI output below).
I am using Cloudflare for DNS, and everything appears fine there.
I can GET / POST locally using Postman, but not via the web address. I would greatly appreciate any ideas on where to look.
Output from uwsgi is:
*** Starting uWSGI 2.0.20 (32bit) on [Sat May 14 12:35:08 2022] ***
compiled with version: 8.3.0 on 06 October 2021 05:59:48
os: Linux-5.10.103-v7l+ #1529 SMP Tue Mar 8 12:24:00 GMT 2022
nodename: xxx
machine: armv7l
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /var/www/xxx.xxx/public
detected binary path: /home/pi/.local/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 12393
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :9090 fd 4
spawned uWSGI http 1 (pid: 3176)
uwsgi socket 0 bound to TCP address 127.0.0.1:34881 (port auto-assigned) fd 3
Python version: 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0xd5c950
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 64408 bytes (62 KB) for 1 cores
*** Operational MODE: single process ***
<<<<<<<<<<<<<<<< Loaded script >>>>>>>>>>>>>>>>
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0xd5c950 pid: 3175 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 3175, cores: 1)
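Since the web page is reachable from outside but the API is not, one way to narrow down which layer is failing is to test each hop separately from a shell. This is only a diagnostic sketch: example.com and /api/ are placeholders for the real domain and API route, and port 9090 is taken from the uWSGI output above.
# 1) Hit uWSGI directly on the Pi (bypasses NGINX, the router and Cloudflare)
curl -i http://127.0.0.1:9090/api/
# 2) Hit NGINX locally on the Pi, presenting the public host name
curl -i -H "Host: example.com" http://127.0.0.1/api/
# 3) Hit the public domain from a machine outside the local network
curl -i https://example.com/api/
The first step that fails points at the layer to fix: the uWSGI app itself, the NGINX server/location block that should route the API, or the port forwarding / Cloudflare proxy settings.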

Use a pooled connection factory with Spring and ActiveMQ Artemis failover to handle re-sending messages

I am using a pooled connection factory to connect to an ActiveMQ Artemis high availability cluster.
The code below shows my current implementation.
@Bean
public ConnectionFactory jmsConnectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl, username, password);
    return connectionFactory;
}

@Bean
public JmsPoolConnectionFactory pooledConnectionFactoryOnline() {
    JmsPoolConnectionFactory poolingFactory = new JmsPoolConnectionFactory();
    poolingFactory.setConnectionFactory(jmsConnectionFactory());
    poolingFactory.setMaxConnections(3);
    poolingFactory.setConnectionIdleTimeout(0);
    return poolingFactory;
}

@Bean
public JmsTemplate jmsTemplateOnline() {
    JmsTemplate jmsTemplate = new JmsTemplate();
    jmsTemplate.setConnectionFactory(pooledConnectionFactoryOnline());
    jmsTemplate.setDefaultDestinationName(QUEUE);
    return jmsTemplate;
}
The implementation of the connection factory pool above is from org.messaginghub.pooled.jms.JmsPoolConnectionFactory (but I encountered similar issues with org.springframework.jms.connection.CachingConnectionFactory)
and the connection string used for the failover case is (tcp://broker1:61616,tcp://broker2:62616)?ha=true&reconnectAttempts=-1.
Also, my HA-policy configuration for the master broker can be seen below:
<connectors>
  <connector name="broker1-connector">tcp://broker1:61616</connector>
  <connector name="broker2-connector">tcp://broker2:61616</connector>
</connectors>
<ha-policy>
  <replication>
    <master>
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>
<cluster-connections>
  <cluster-connection name="myhost1-cluster">
    <connector-ref>broker1-connector</connector-ref>
    <retry-interval>500</retry-interval>
    <use-duplicate-detection>true</use-duplicate-detection>
    <static-connectors>
      <connector-ref>broker2-connector</connector-ref>
    </static-connectors>
  </cluster-connection>
</cluster-connections>
and for the slave broker, respectively:
<ha-policy>
  <replication>
    <slave>
      <allow-failback>true</allow-failback>
    </slave>
  </replication>
</ha-policy>
Logs for the master broker are provided below:
2021-01-24 21:05:56,093 INFO [org.apache.activemq.artemis.core.server] AMQ221082: Initializing metrics plugin org.apache.activemq.artemis.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin with properties: {}
2021-01-24 21:05:56,266 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server
2021-01-24 21:05:56,288 INFO [org.apache.activemq.artemis.core.server] AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)
2021-01-24 21:05:58,987 INFO [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing /var/lib/artemis/data/bindings/oldreplica.94
2021-01-24 21:05:58,994 INFO [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/bindings to /var/lib/artemis/data/bindings/oldreplica.96
2021-01-24 21:05:59,001 INFO [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing /var/lib/artemis/data/journal/oldreplica.94
2021-01-24 21:05:59,058 INFO [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/journal to /var/lib/artemis/data/journal/oldreplica.96
2021-01-24 21:05:59,062 INFO [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing /var/lib/artemis/data/paging/oldreplica.94
2021-01-24 21:05:59,068 INFO [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/paging to /var/lib/artemis/data/paging/oldreplica.96
2021-01-24 21:05:59,135 INFO [org.apache.activemq.artemis.core.server] AMQ221013: Using NIO Journal
2021-01-24 21:05:59,140 WARN [org.apache.activemq.artemis.core.server] AMQ222007: Security risk! Apache ActiveMQ Artemis is running with the default cluster admin user and default password. Please see the cluster chapter in the ActiveMQ Artemis User Guide for instructions on how to change this.
2021-01-24 21:05:59,149 INFO [org.apache.activemq.artemis.core.server] AMQ221057: Global Max Size is being adjusted to 1/2 of the JVM max size (-Xmx). being defined as 16,089,350,144
2021-01-24 21:05:59,300 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
2021-01-24 21:05:59,303 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
2021-01-24 21:05:59,305 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
2021-01-24 21:05:59,306 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
2021-01-24 21:05:59,306 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
2021-01-24 21:05:59,307 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
2021-01-24 21:05:59,463 INFO [org.apache.activemq.artemis.core.server] AMQ221109: Apache ActiveMQ Artemis Backup Server version 2.13.0 [null] started, waiting live to fail before it gets active
2021-01-24 21:05:59,555 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin
2021-01-24 21:05:59,638 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized artemis-plugin plugin
2021-01-24 21:06:00,447 INFO [io.hawt.HawtioContextListener] Initialising hawtio services
2021-01-24 21:06:00,471 INFO [io.hawt.system.ConfigManager] Configuration will be discovered via system properties
2021-01-24 21:06:00,474 INFO [io.hawt.jmx.JmxTreeWatcher] Welcome to hawtio 1.5.12 : http://hawt.io/ : Don't cha wish your console was hawt like me? ;-)
2021-01-24 21:06:00,478 INFO [io.hawt.jmx.UploadManager] Using file upload directory: /var/lib/artemis/tmp/uploads
2021-01-24 21:06:00,501 INFO [io.hawt.web.AuthenticationFilter] Starting hawtio authentication filter, JAAS realm: "activemq" authorized role(s): "amq" role principal classes: "org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal"
2021-01-24 21:06:00,535 INFO [io.hawt.web.JolokiaConfiguredAgentServlet] Jolokia overridden property: [key=policyLocation, value=file:/var/lib/artemis/etc/jolokia-access.xml]
2021-01-24 21:06:00,572 INFO [io.hawt.web.RBACMBeanInvoker] Using MBean [hawtio:type=security,area=jmx,rank=0,name=HawtioDummyJMXSecurity] for role based access control
2021-01-24 21:06:00,824 INFO [io.hawt.system.ProxyWhitelist] Initial proxy whitelist: [localhost, 127.0.0.1, 172.23.0.7, 42546424839f]
2021-01-24 21:06:01,245 INFO [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://0.0.0.0:8161
2021-01-24 21:06:01,245 INFO [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://0.0.0.0:8161/console/jolokia
2021-01-24 21:06:01,245 INFO [org.apache.activemq.artemis] AMQ241004: Artemis Console available at http://0.0.0.0:8161/console
2021-01-24 21:06:03,263 INFO [org.apache.activemq.artemis.core.server] AMQ221024: Backup server ActiveMQServerImpl::serverUUID=b96ecec9-e13e-11ea-8a4f-0242ac170006 is synchronized with live-server.
2021-01-24 21:06:09,763 INFO [org.apache.activemq.artemis.core.server] AMQ221031: backup announced
2021-01-24 21:06:09,806 WARN [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure to broker2/broker2:61616 has been detected: AMQ219015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
2021-01-24 21:06:09,806 WARN [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure to broker2/broker2:61616 has been detected: AMQ219015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
2021-01-24 21:06:09,875 INFO [org.apache.activemq.artemis.core.server] AMQ221037: ActiveMQServerImpl::serverUUID=b96ecec9-e13e-11ea-8a4f-0242ac170006 to become 'live'
2021-01-24 21:06:09,897 WARN [org.apache.activemq.artemis.core.client] AMQ212004: Failed to connect to server.
2021-01-24 21:06:10,553 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address DLQ supporting [ANYCAST]
2021-01-24 21:06:10,554 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue DLQ on address DLQ
2021-01-24 21:06:10,555 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address ExpiryQueue supporting [ANYCAST]
2021-01-24 21:06:10,555 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue ExpiryQueue on address ExpiryQueue
2021-01-24 21:06:10,803 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
2021-01-24 21:06:10,865 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61616 for protocols [CORE,MQTT,AMQP,STOMP,HORNETQ,OPENWIRE]
and similarly for the slave broker:
2021-01-24 21:05:59,975 INFO [org.apache.activemq.artemis.core.server] AMQ221025: Replication: sending NIOSequentialFile /var/lib/artemis/data/journal/activemq-data-1262.amq (size=10,485,760) to replica.
2021-01-24 21:06:01,346 INFO [org.apache.activemq.artemis.core.server] AMQ221025: Replication: sending NIOSequentialFile /var/lib/artemis/data/journal/activemq-data-1261.amq (size=10,485,760) to replica.
2021-01-24 21:06:02,253 INFO [org.apache.activemq.artemis.core.server] AMQ221025: Replication: sending NIOSequentialFile /var/lib/artemis/data/bindings/activemq-bindings-1191.bindings (size=1,048,576) to replica.
2021-01-24 21:06:02,363 INFO [org.apache.activemq.artemis.core.server] AMQ221025: Replication: sending NIOSequentialFile /var/lib/artemis/data/bindings/activemq-bindings-1196.bindings (size=1,048,576) to replica.
2021-01-24 21:06:02,451 INFO [org.apache.activemq.artemis.core.server] AMQ221025: Replication: sending NIOSequentialFile /var/lib/artemis/data/bindings/activemq-bindings-1189.bindings (size=1,048,576) to replica.
2021-01-24 21:06:09,756 INFO [org.apache.activemq.artemis.core.server] AMQ224100: Timed out waiting for large messages deletion with IDs [], might not be deleted if broker crashes atm
2021-01-24 21:06:09,756 INFO [org.apache.activemq.artemis.core.server] AMQ224100: Timed out waiting for large messages deletion with IDs [], might not be deleted if broker crashes atm
2021-01-24 21:06:09,756 INFO [org.apache.activemq.artemis.core.server] AMQ224100: Timed out waiting for large messages deletion with IDs [], might not be deleted if broker crashes atm
2021-01-24 21:06:09,756 INFO [org.apache.activemq.artemis.core.server] AMQ224100: Timed out waiting for large messages deletion with IDs [], might not be deleted if broker crashes atm
2021-01-24 21:06:10,046 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.13.0 [b96ecec9-e13e-11ea-8a4f-0242ac170006] stopped, uptime 6 hours 32 minutes
2021-01-24 21:06:10,046 INFO [org.apache.activemq.artemis.core.server] AMQ221039: Restarting as Replicating backup server after live restart
2021-01-24 21:06:10,050 INFO [org.apache.activemq.artemis.core.server] AMQ221000: backup Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)
2021-01-24 21:06:10,053 INFO [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing /var/lib/artemis/data/bindings/oldreplica.101
2021-01-24 21:06:10,059 INFO [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/bindings to /var/lib/artemis/data/bindings/oldreplica.103
2021-01-24 21:06:10,060 INFO [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing /var/lib/artemis/data/journal/oldreplica.101
2021-01-24 21:06:10,110 INFO [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/journal to /var/lib/artemis/data/journal/oldreplica.103
2021-01-24 21:06:10,111 INFO [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing /var/lib/artemis/data/paging/oldreplica.100
2021-01-24 21:06:10,117 INFO [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/paging to /var/lib/artemis/data/paging/oldreplica.102
2021-01-24 21:06:10,120 INFO [org.apache.activemq.artemis.core.server] AMQ221013: Using NIO Journal
2021-01-24 21:06:10,121 WARN [org.apache.activemq.artemis.core.server] AMQ222007: Security risk! Apache ActiveMQ Artemis is running with the default cluster admin user and default password. Please see the cluster chapter in the ActiveMQ Artemis User Guide for instructions on how to change this.
2021-01-24 21:06:10,124 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
2021-01-24 21:06:10,127 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
2021-01-24 21:06:10,127 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
2021-01-24 21:06:10,127 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
2021-01-24 21:06:10,127 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
2021-01-24 21:06:10,128 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
2021-01-24 21:06:11,138 INFO [org.apache.activemq.artemis.core.server] AMQ221109: Apache ActiveMQ Artemis Backup Server version 2.13.0 [null] started, waiting live to fail before it gets active
2021-01-24 21:06:14,559 INFO [org.apache.activemq.artemis.core.server] AMQ221024: Backup server ActiveMQServerImpl::serverUUID=b96ecec9-e13e-11ea-8a4f-0242ac170006 is synchronized with live-server.
2021-01-24 21:06:14,594 INFO [org.apache.activemq.artemis.core.server] AMQ221031: backup announced
When I try to test failover and stop the master broker, I can see that my client gets a connection exception, which I am trying to
handle in order not to lose any messages. I stop the Docker container using docker stop (which first sends a SIGTERM signal to the running container and, after a timeout period, a SIGKILL signal). Since I know that all traffic will be redirected to the slave broker, my approach is the following:
@Autowired
JmsPoolConnectionFactory poolFactory;

try {
    jmsTemplateOnline.convertAndSend(QUEUE, message);
} catch (JmsException e) {
    try (Connection connection = poolFactory.createConnection();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageProducer producer = session.createProducer(new ActiveMQQueue(QUEUE))) {
        producer.send(messageConverter.toMessage(message, session));
    } catch (Exception jmsException) {
        jmsException.printStackTrace();
    }
}
Basically, since the connections from the pool failed, my understanding is that they should be discarded and new connections created against the slave
broker, so getting a new connection would let me send my message. What happens is:
[Thread-4 (ActiveMQ-client-global-threads)] [WARN ] org.apache.activemq.artemis.core.client - AMQ212037: Connection failure to /broker1:61616 has been detected: AMQ219015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
[Thread-1 (ActiveMQ-client-global-threads)] [WARN ] org.apache.activemq.artemis.core.client - AMQ212037: Connection failure to /broker1:61616 has been detected: AMQ219015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
[Thread-2 (ActiveMQ-client-global-threads)] [WARN ] org.apache.activemq.artemis.core.client - AMQ212037: Connection failure to /broker1:61616 has been detected: AMQ219015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
This is the exception I get before trying to re-send my message
[http-nio-8080-exec-1] [INFO ] Uncategorized exception occurred during JMS processing; nested exception is javax.jms.JMSException: AMQ219016: Connection failure detected. Unblocking a blocking call that will never get a response
Although in some tests I was able to send my message, there were cases where sending failed with the exception below:
[http-nio-8080-exec-1] [INFO ]
javax.jms.IllegalStateException: AMQ219018: Producer is closed
at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.checkClosed(ClientProducerImpl.java:301)
at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:123)
at org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.doSendx(ActiveMQMessageProducer.java:483)
at org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:220)
at org.messaginghub.pooled.jms.JmsPoolMessageProducer.sendMessage(JmsPoolMessageProducer.java:194)
at org.messaginghub.pooled.jms.JmsPoolMessageProducer.send(JmsPoolMessageProducer.java:88)
at org.messaginghub.pooled.jms.JmsPoolMessageProducer.send(JmsPoolMessageProducer.java:77)
...
Caused by: ActiveMQObjectClosedException[errorType=OBJECT_CLOSED message=AMQ219018: Producer is closed]
... 108 more
24-01-2021 16:07:27[http-nio-8080-exec-1] [WARN ] o.messaginghub.pooled.jms.JmsPoolSession - Caught exception trying close() when putting session back into the pool, will invalidate. javax.jms.IllegalStateException: Session is closed
javax.jms.IllegalStateException: Session is closed
My main issue is finding a way to not lose any messages during the failover process. Can you point out what I am doing wrong and how I could handle this case in a better way?

MySQL Dockerfile

I have a MySQL Dockerfile as follows:
FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD password
ENV MYSQL_DATABASE database
ENV MYSQL_USER root
ENV MYSQL_PASSWORD mysql007
ENV COMPOSE_CONVERT_WINDOWS_PATHS 1
COPY init.sql /docker-entrypoint-initdb.d/
RUN chmod a+x /docker-entrypoint-initdb.d/init.sql && chown root:root /docker-entrypoint-initdb.d/init.sql
EXPOSE 3306
CMD ["mysqld"]
I build the image using the following command in the Docker terminal:
docker build --build-arg http_proxy=<value> --build-arg https_proxy=<value> -f mySQL_Dockerfile -t mysql .
Whenever I run the resulting image it gives me the following log and exits:
Initializing database
2017-03-14T08:58:57.139375Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-03-14T08:58:57.538155Z 0 [Warning] InnoDB: New log files created, LSN=45790
2017-03-14T08:58:57.667334Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2017-03-14T08:58:57.726216Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 757aecb4-0894-11e7-9b9f-0242ac110002.
2017-03-14T08:58:57.729630Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2017-03-14T08:58:57.732463Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2017-03-14T08:59:02.299741Z 1 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:02.299874Z 1 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:02.299916Z 1 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:02.299950Z 1 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:02.300070Z 1 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
Database initialized
Initializing certificates
Generating a 2048 bit RSA private key
.....+++
.......................................+++
unable to write 'random state'
writing new private key to 'ca-key.pem'
-----
Generating a 2048 bit RSA private key
...............+++
..........+++
unable to write 'random state'
writing new private key to 'server-key.pem'
-----
Generating a 2048 bit RSA private key
..............................+++
............+++
unable to write 'random state'
writing new private key to 'client-key.pem'
-----
Certificates initialized
MySQL init process in progress...
2017-03-14T08:59:05.114256Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-03-14T08:59:05.118407Z 0 [Note] mysqld (mysqld 5.7.17) starting as process 77 ...
2017-03-14T08:59:05.121736Z 0 [Note] InnoDB: PUNCH HOLE support available
2017-03-14T08:59:05.121870Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-03-14T08:59:05.121904Z 0 [Note] InnoDB: Uses event mutexes
2017-03-14T08:59:05.121942Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-03-14T08:59:05.121958Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2017-03-14T08:59:05.121974Z 0 [Note] InnoDB: Using Linux native AIO
2017-03-14T08:59:05.122514Z 0 [Note] InnoDB: Number of pools: 1
2017-03-14T08:59:05.122715Z 0 [Note] InnoDB: Using CPU crc32 instructions
2017-03-14T08:59:05.124223Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2017-03-14T08:59:05.135947Z 0 [Note] InnoDB: Completed initialization of buffer pool
2017-03-14T08:59:05.137985Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2017-03-14T08:59:05.150094Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2017-03-14T08:59:05.161065Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2017-03-14T08:59:05.161215Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2017-03-14T08:59:05.337319Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2017-03-14T08:59:05.341242Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2017-03-14T08:59:05.341492Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2017-03-14T08:59:05.345441Z 0 [Note] InnoDB: Waiting for purge to start
2017-03-14T08:59:05.395911Z 0 [Note] InnoDB: 5.7.17 started; log sequence number 2534561
2017-03-14T08:59:05.397917Z 0 [Note] Plugin 'FEDERATED' is disabled.
2017-03-14T08:59:05.405692Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2017-03-14T08:59:05.431196Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2017-03-14T08:59:05.431990Z 0 [Warning] CA certificate ca.pem is self signed.
2017-03-14T08:59:05.434311Z 0 [Note] InnoDB: Buffer pool(s) load completed at 170314 8:59:05
2017-03-14T08:59:05.444568Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:05.444699Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:05.444743Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:05.444773Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:05.446269Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:05.451149Z 0 [Note] Event Scheduler: Loaded 0 events
2017-03-14T08:59:05.451397Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
2017-03-14T08:59:05.451433Z 0 [Note] Beginning of list of non-natively partitioned tables
2017-03-14T08:59:05.460184Z 0 [Note] End of list of non-natively partitioned tables
2017-03-14T08:59:05.460451Z 0 [Note] mysqld: ready for connections.
Version: '5.7.17' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server (GPL)
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
2017-03-14T08:59:08.290703Z 5 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:08.290960Z 5 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:08.291016Z 5 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:08.291115Z 5 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2017-03-14T08:59:08.291157Z 5 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'
Any idea what the problem could be? It says the root user creation failed and exits!
Thanks.
Turns out the suggested answer works but on a different version of the mysql image. It doesn't seem to be working for mysql:latest. I changed to mysql:5.7 and the suggested solution worked just fine.
I encountered the same problem, and the fix is easy: if you want to use root as your user, just set MYSQL_ROOT_PASSWORD and don't set MYSQL_USER or MYSQL_PASSWORD; check out this post.
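For reference, the CREATE USER failure in the log appears to come from the entrypoint creating the MYSQL_USER account, which collides with the root account it already sets up when MYSQL_USER is root. A quick way to confirm the fix without rebuilding is to run the stock image with only the root-related variables set (a sketch: the container name and host path are placeholders, and mysql:5.7 follows the version note above):
docker run -d --name mysql-test \
  -e MYSQL_ROOT_PASSWORD=password \
  -e MYSQL_DATABASE=database \
  -v "$PWD/init.sql:/docker-entrypoint-initdb.d/init.sql" \
  -p 3306:3306 \
  mysql:5.7
The equivalent change to the Dockerfile above is simply deleting the MYSQL_USER and MYSQL_PASSWORD lines.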

Migrating an app in OpenShift with "create --from-app" doesn't load my old application [Grails]

I have a Grails application on OpenShift, and I recently upgraded my plan from free to silver, so I need to migrate my application from non-scalable to scalable. To do this I used the command rhc create-app --from-app app --scaling. After a while the new app is created, but when I open it, it doesn't load my application, just the default Tomcat welcome page.
Update: The logs of the new app show this:
==> app-root/logs/haproxy_ctld.log <==
I, [2015-04-18T01:02:28.949500 #163665] INFO -- : Starting haproxy_ctld
I, [2015-04-18T08:28:39.200616 #496980] INFO -- : Starting haproxy_ctld
==> app-root/logs/haproxy.log <==
[WARNING] 107/010224 (167630) : config : log format ignored for proxy 'express' since it has no log address.
[WARNING] 107/012317 (167630) : Server express/local-gear is DOWN for maintenance.
[ALERT] 107/012317 (167630) : proxy 'express' has no server available!
[WARNING] 107/082838 (496963) : config : log format ignored for proxy 'stats' since it has no log address.
[WARNING] 107/082838 (496963) : config : log format ignored for proxy 'express' since it has no log address.
[WARNING] 107/082838 (496963) : Server express/local-gear is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 107/082838 (496963) : proxy 'express' has no server available!
[WARNING] 107/082851 (496963) : Server express/local-gear is UP, reason: Layer7 check passed, code: 200, info: "HTTP status check returned code <3C>200<3E>", check duration: 2ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING] 107/085928 (496963) : Server express/local-gear is DOWN for maintenance.
[ALERT] 107/085928 (496963) : proxy 'express' has no server available!
==> app-root/logs/jbossews.log <==
Apr 18, 2015 8:28:47 AM org.apache.catalina.startup.Catalina start
INFO: Server startup in 3727 ms
Apr 18, 2015 8:59:28 AM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler ["http-bio-127.7.39.1-8080"]
Apr 18, 2015 8:59:28 AM org.apache.catalina.core.StandardService stopInternal
INFO: Stopping service Catalina
Apr 18, 2015 8:59:28 AM org.apache.coyote.AbstractProtocol stop
INFO: Stopping ProtocolHandler ["http-bio-127.7.39.1-8080"]
Apr 18, 2015 8:59:28 AM org.apache.coyote.AbstractProtocol destroy
INFO: Destroying ProtocolHandler ["http-bio-127.7.39.1-8080"]

WampServer fails to initialize

I am trying to start WampServer and it gives me an error. I tried changing port 80 to 8080, but that did not work. I also changed Skype's port (see the port check sketch after the logs below).
Does anyone know what the problem is?
Error Log
2014-12-12 09:25:39 4460 [Note] Plugin 'FEDERATED' is disabled.
2014-12-12 09:25:40 4460 [Note] InnoDB: Using atomics to ref count buffer pool pages
2014-12-12 09:25:40 4460 [Note] InnoDB: The InnoDB memory heap is disabled
2014-12-12 09:25:40 4460 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2014-12-12 09:25:40 4460 [Note] InnoDB: Compressed tables use zlib 1.2.3
2014-12-12 09:25:40 4460 [Note] InnoDB: Not using CPU crc32 instructions
2014-12-12 09:25:40 4460 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2014-12-12 09:25:40 4460 [Note] InnoDB: Completed initialization of buffer pool
2014-12-12 09:25:40 4460 [Note] InnoDB: Highest supported file format is Barracuda.
2014-12-12 09:25:48 4460 [Note] InnoDB: 128 rollback segment(s) are active.
2014-12-12 09:25:49 4460 [Note] InnoDB: Waiting for purge to start
2014-12-12 09:25:49 4460 [Note] InnoDB: 5.6.17 started; log sequence number 1626213
2014-12-12 09:25:49 4460 [Note] Server hostname (bind-address): '*'; port: 3306
2014-12-12 09:25:49 4460 [Note] IPv6 is available.
2014-12-12 09:25:49 4460 [Note] - '::' resolves to '::';
2014-12-12 09:25:49 4460 [Note] Server socket created on IP: '::'.
2014-12-12 09:25:49 4460 [Note] Event Scheduler: Loaded 0 events
2014-12-12 09:25:49 4460 [Note] wampmysqld: ready for connections.
Version: '5.6.17' socket: '' port: 3306 MySQL Community Server (GPL)
This is the last event in the Apache error log:
[Tue Dec 09 06:46:54.906225 2014] [mpm_winnt:notice] [pid 5160:tid 292] AH00422: Parent: Received shutdown signal -- Shutting down the server.
[Tue Dec 09 06:46:58.906453 2014] [mpm_winnt:notice] [pid 4620:tid 312] AH00364: Child: All worker threads have exited.
[Tue Dec 09 06:47:00.838564 2014] [mpm_winnt:notice] [pid 5160:tid 292] AH00430: Parent: Child process 4620 exited successfully.
But today (15/12) there is nothing new in the log.
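Since MySQL comes up fine and the Apache log simply stops, the usual suspect is another process still holding Apache's port. A quick check from a Windows command prompt (a sketch: :80 assumes you switched the port back from 8080, and 1234 is a placeholder for whatever PID netstat reports):
netstat -ano | findstr :80
REM note the owning PID in the last column, then see which program it is:
tasklist /FI "PID eq 1234"
Common culprits are Skype, IIS / http.sys (shown as "System", PID 4) or another local web server; stopping or re-porting that program usually lets the WAMP Apache service start.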
