RabbitMQ can't start due to an error in the log formatter config - docker

I'm using RabbitMQ 3.8.5-management with the following config:
log.file = rabbit.log
log.dir = /var/log/rabbitmq
log.file.level = info
log.file.formatter = json
log.file.rotation.date = $D0
I get the following error:
12:45:12.131 [error] You've tried to set log.file.formatter, but there is no setting with that name.
12:45:12.134 [error] Did you mean one of these?
12:45:12.182 [error] log.file.level
12:45:12.182 [error] log.file
12:45:12.182 [error] log.file.rotation.date
12:45:12.182 [error] Error preparing configuration in phase transform_datatypes:
12:45:12.183 [error] - Conf file attempted to set unknown variable: log.file.formatter
According to the documentation, log.file.formatter should work - what is wrong?
I checked the RabbitMQ documentation.
I checked other SO posts.
I entered the container and removed the setting - RabbitMQ starts without it.

JSON logging and the log.file.formatter setting were added in the RabbitMQ 3.9.0 release.
Try upgrading if possible.
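A minimal sketch of the upgrade, assuming the official image is run via Docker Compose and the same rabbitmq.conf is mounted (the service name, ports, and the exact 3.9 tag are illustrative):

services:
  rabbitmq:
    # log.file.formatter / JSON logging is only recognized from 3.9.0 onward
    image: rabbitmq:3.9-management
    volumes:
      - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro
    ports:
      - "5672:5672"
      - "15672:15672"

With a 3.9+ image, the rabbitmq.conf shown above, including log.file.formatter = json, should be accepted as-is.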

Related

Upgrading docker artifactory pro 6.X to 7.X

I have a docker-compose setup running Artifactory Pro v6.9.0.
In my compose file I have two services:
image: docker.bintray.io/jfrog/artifactory-pro:6.9.0
image: docker.io/library/postgres:9.6.11
I was able to upgrade to 6.23.13 without any problem just by changing the version of the image.
When I try the same thing with any 7.X version (after upgrading to at least 6.10 as the doc says), I have errors.
For example, trying 7.21.3, I get these warnings:
2022-08-01T08:41:30.343L [tomct] [WARNING] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - A docBase [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/access.war] inside the host appBase has been specified, and will be ignored
2022-08-01T08:41:30.343L [tomct] [WARNING] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - A docBase [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/artifactory.war] inside the host appBase has been specified, and will be ignored
...
2022-08-01T08:41:37.344Z [jfrt ] [WARN ] [ce1b2553475da56b] [c.z.h.p.ProxyConnection:182 ] [ocalhost-startStop-2] - HikariCP Main - Connection org.apache.derby.impl.jdbc.EmbedConnection#1597179442 (XID = 24), (SESSIONID = 3), (DATABASE = {db.home}), (DRDAID = null) marked as broken because of SQLSTATE(0A000), ErrorCode(20000)
java.sql.SQLFeatureNotSupportedException: Feature not implemented: No details.
and these errors:
08:41:34,803 |-ERROR in ch.qos.logback.core.joran.action.AppenderAction - Could not create an Appender of type [org.artifactory.usage.appender.UsageTrafficTimeBasedRollingFileAppender]. ch.qos.logback.core.util.DynamicClassLoadingException: Failed to instantiate type org.artifactory.usage.appender.UsageTrafficTimeBasedRollingFileAppender
...
2022-08-01T08:41:37.113Z [jfrt ] [ERROR] [ce1b2553475da56b] [d.d.l.DbDistributeLocksDao:506] [ocalhost-startStop-2] - Unable to detect database version Unable to get connection from unique lock data source
2022-08-01T08:41:37.353Z [jfrt ] [ERROR] [ce1b2553475da56b] [tifactoryHomeConfigListener:55] [ocalhost-startStop-2] - Failed initializing Home. Caught exception:
java.lang.IllegalStateException: Could not find database table: db_properties
After reading the docs, I do not clearly understand whether I have to download the new docker-compose package from JFrog. I tried, but the config.sh script asks for an external database and never asks about reusing the existing data directory.
Thanks for the help.
Not sure if this is still relevant for you, but I ran into the same problem. As far as I can see, there are new DB connection environment variables that must be set. The error message is just a symptom: without the new DB connection variables, Artifactory falls back to the embedded Apache Derby database. Something like this is needed to fix it (in the Docker Compose config of the Artifactory container):
environment:
- JF_SHARED_DATABASE_TYPE=postgresql
- JF_SHARED_DATABASE_USERNAME=${POSTGRESQL_USERNAME}
- JF_SHARED_DATABASE_PASSWORD=${POSTGRESQL_PASSWORD}
- JF_SHARED_DATABASE_URL=jdbc:postgresql://postgresql:5432/artifactory
- JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver
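For context, a sketch of how that block could sit in the compose file, assuming the PostgreSQL service is named postgresql (it must match the host in JF_SHARED_DATABASE_URL), the credentials come from an .env file, and the 7.x image is pulled from releases-docker.jfrog.io; volume mappings for existing data are omitted here:

services:
  postgresql:
    image: docker.io/library/postgres:9.6.11
    environment:
      # hypothetical database name/credentials, reused by Artifactory below
      - POSTGRES_DB=artifactory
      - POSTGRES_USER=${POSTGRESQL_USERNAME}
      - POSTGRES_PASSWORD=${POSTGRESQL_PASSWORD}
  artifactory:
    image: releases-docker.jfrog.io/jfrog/artifactory-pro:7.21.3
    depends_on:
      - postgresql
    environment:
      - JF_SHARED_DATABASE_TYPE=postgresql
      - JF_SHARED_DATABASE_USERNAME=${POSTGRESQL_USERNAME}
      - JF_SHARED_DATABASE_PASSWORD=${POSTGRESQL_PASSWORD}
      - JF_SHARED_DATABASE_URL=jdbc:postgresql://postgresql:5432/artifactory
      - JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver

If you prefer configuration files over environment variables, the same settings can instead go into system.yaml under shared.database, which is what the JF_SHARED_DATABASE_* variables map onto.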

Failed to start the VM error when starting a Dataflow SQL job

Getting the following error when I try to launch a Dataflow SQL job:
Failed to start the VM, launcher-____, used for launching because of status code: INVALID_ARGUMENT, reason: Error: Message: Invalid value for field 'resource.networkInterfaces[0].network': 'global/networks/default'. The referenced network resource cannot be found. HTTP Code: 400.
This issue just started today.
Adding the default network solved the issue.
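If the default network was deleted (or the project was created with default network creation disabled), one way to restore it is with gcloud; a sketch, assuming an auto-mode VPC is acceptable:

# recreate the auto-mode VPC named "default" that the launcher VM references
gcloud compute networks create default --subnet-mode=auto

Note that recreating the network does not recreate the default firewall rules, so a rule allowing traffic between Dataflow workers on that network (e.g. tcp:12345-12346) may still need to be added.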

enable local cache on emqx

I see the documentation at https://docs.emqx.io/broker/v3/en/guide.html#emq-x-bridge-cache-configuration, and it says you can enable a file-based cache for when the network fails, since emqx is not doing this for me right now.
When I set that parameter on emqx 3.0.0.0, for example, it fails on start and the log file says the setting is not declared:
You've tried to set bridge.xxx.queue.replayq_seg_bytes, but there is no setting with that name.
2020-03-03T19:43:22.777171+03:00 [error] Did you mean one of these?
2020-03-03T19:43:22.962094+03:00 [error] bridge.$name.mqueue_type
2020-03-03T19:43:22.962572+03:00 [error] bridge.$name.clean_start
2020-03-03T19:43:22.962760+03:00 [error] bridge.$name.start_type
2020-03-03T19:43:23.102793+03:00 [error] Error generating configuration in phase transform_datatypes
2020-03-03T19:43:23.103040+03:00 [error] Conf file attempted to set unknown variable: bridge.aps.queue.replayq_seg_bytes
Do you know if it's a problem with my version of emqx, or possibly a problem with the syntax?
Thanks in advance
Greetings
It's a syntax error.
bridge.xxx.queue.replayq_seg_bytes
means: apply the queue.replayq_seg_bytes setting to the bridge named xxx.
Does a matching bridge definition, such as
bridge.mqtt.xxx.address = 127.0.0.1:1883
exist? By the way, EMQ X v4.0.6 is recommended.
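As an illustration of the shape that syntax takes on EMQ X 4.x (where the replayq options are documented), the queue.* keys hang off a bridge definition with the same name in etc/plugins/emqx_bridge_mqtt.conf; the name aps is taken from the error above, and the paths and sizes are placeholders:

## MQTT bridge named "aps" - the queue.* keys must match an existing bridge name
bridge.mqtt.aps.address = 127.0.0.1:1883
bridge.mqtt.aps.clientid = bridge_aps
bridge.mqtt.aps.start_type = auto
## file-backed queue so messages are cached on disk while the network is down
bridge.mqtt.aps.queue.replayq_dir = /var/lib/emqx/replayq/aps/
bridge.mqtt.aps.queue.replayq_seg_bytes = 10MB
bridge.mqtt.aps.queue.max_total_size = 5GB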

Weird kibana error - invalid code -- missing end-of-block

I just started seeing this error on my Kibana server:
read err { Error: invalid code -- missing end-of-block at
InflateRaw.zlibOnError (zlib.js:153:15) errno: -3, code:
'Z_DATA_ERROR' }
There is no helpful information in the corresponding logs:
{"type":"log","#timestamp":"2019-01-22T13:46:34Z","tags":
["license","info","xpack"],"pid":17310,"message":"Imported license
information from Elasticsearch for the [monitoring] cluster: mode:
basic | status: active"}
However, in the browser there is an error: Kibana server is not ready yet.
I have no idea how to tackle this!
UPDATE
I have seen an additional error in the Elasticsearch logs that might suggest the cause of the failure:
[2019-01-24T11:15:47,216][INFO ][o.e.c.m.MetaDataIndexTemplateService] [cloudraid01] adding template [.management-beats] for index patterns [.management-beats]
This seems to be related to metricbeats.

Flume agentSink "Unable to load output format plugin class"

I'm getting the following error and I have no idea why. If I change the sink to "console", it works fine. I'm just trying to recreate an example from the Flume documentation, except across two different nodes. This is using CDH3.
2011-10-20 17:41:13,046 [main] WARN text.FormatFactory: Unable to load output format plugin class - Class not found
2011-10-20 17:41:13,065 [main] INFO agent.FlumeNode: Loading spec from command line: 'foo:console|agentSink("somehost",35853);'
2011-10-20 17:41:13,228 [main] WARN agent.FlumeNode: Caught exception loading node:null
I'm trying to run Flume as follows:
flume node_nowatch -1 -s -n foo -c 'foo:console|agentSink("somehost",35853);'
Thanks in advance.
