Upgrading Docker Artifactory Pro 6.x to 7.x with docker-compose

I have Artifactory Pro v6.9.0 running with docker-compose.
My compose file has two services:
image: docker.bintray.io/jfrog/artifactory-pro:6.9.0
image: docker.io/library/postgres:9.6.11
I was able to upgrade to 6.23.13 without any problem just by changing the image version.
When I try the same thing with any 7.x version (after first upgrading to at least 6.10, as the docs say), I get errors.
For example, trying 7.21.3, I get these warnings:
2022-08-01T08:41:30.343L [tomct] [WARNING] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - A docBase [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/access.war] inside the host appBase has been specified, and will be ignored
2022-08-01T08:41:30.343L [tomct] [WARNING] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - A docBase [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/artifactory.war] inside the host appBase has been specified, and will be ignored
...
2022-08-01T08:41:37.344Z [jfrt ] [WARN ] [ce1b2553475da56b] [c.z.h.p.ProxyConnection:182 ] [ocalhost-startStop-2] - HikariCP Main - Connection org.apache.derby.impl.jdbc.EmbedConnection#1597179442 (XID = 24), (SESSIONID = 3), (DATABASE = {db.home}), (DRDAID = null) marked as broken because of SQLSTATE(0A000), ErrorCode(20000)
java.sql.SQLFeatureNotSupportedException: Feature not implemented: No details.
and these errors:
08:41:34,803 |-ERROR in ch.qos.logback.core.joran.action.AppenderAction - Could not create an Appender of type [org.artifactory.usage.appender.UsageTrafficTimeBasedRollingFileAppender]. ch.qos.logback.core.util.DynamicClassLoadingException: Failed to instantiate type org.artifactory.usage.appender.UsageTrafficTimeBasedRollingFileAppender
...
2022-08-01T08:41:37.113Z [jfrt ] [ERROR] [ce1b2553475da56b] [d.d.l.DbDistributeLocksDao:506] [ocalhost-startStop-2] - Unable to detect database version Unable to get connection from unique lock data source
2022-08-01T08:41:37.353Z [jfrt ] [ERROR] [ce1b2553475da56b] [tifactoryHomeConfigListener:55] [ocalhost-startStop-2] - Failed initializing Home. Caught exception:
java.lang.IllegalStateException: Could not find database table: db_properties
After reading the docs, I do not clearly understand whether I have to download the new docker-compose package from JFrog. I've tried it, but config.sh only asks about an external database; there is no question about reusing the existing image's data directory.
Thanks for any help.

Not sure if this is still relevant for you, but I ran into the same problem. As far as I can see, there are new DB connection environment variables that must be set. The error message is just a symptom: without the new DB connection environment variables, Artifactory falls back to an in-memory database (Apache Derby). Something like this is needed to fix it (in the docker-compose config of the Artifactory container):
environment:
- JF_SHARED_DATABASE_TYPE=postgresql
- JF_SHARED_DATABASE_USERNAME=${POSTGRESQL_USERNAME}
- JF_SHARED_DATABASE_PASSWORD=${POSTGRESQL_PASSWORD}
- JF_SHARED_DATABASE_URL=jdbc:postgresql://postgresql:5432/artifactory
- JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver
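For reference, here is a minimal docker-compose sketch of how these variables could slot into the existing two-service setup. The image reference, volume path and ${...} credential variables are assumptions based on the question, not a verified configuration; note that the hostname in JF_SHARED_DATABASE_URL must match the name of the PostgreSQL service.
services:
  postgresql:
    image: docker.io/library/postgres:9.6.11
    environment:
      - POSTGRES_DB=artifactory
      - POSTGRES_USER=${POSTGRESQL_USERNAME}
      - POSTGRES_PASSWORD=${POSTGRESQL_PASSWORD}
  artifactory:
    image: releases-docker.jfrog.io/jfrog/artifactory-pro:7.21.3
    depends_on:
      - postgresql
    environment:
      - JF_SHARED_DATABASE_TYPE=postgresql
      - JF_SHARED_DATABASE_USERNAME=${POSTGRESQL_USERNAME}
      - JF_SHARED_DATABASE_PASSWORD=${POSTGRESQL_PASSWORD}
      - JF_SHARED_DATABASE_URL=jdbc:postgresql://postgresql:5432/artifactory
      - JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver
    volumes:
      - ./artifactory:/var/opt/jfrog/artifactory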

Related

RabbitMQ can't start due to an error in the log formatter config

I'm using RabbitMQ 3.8.5-management with the following config:
log.file = rabbit.log
log.dir = /var/log/rabbitmq
log.file.level = info
log.file.formatter = json
log.file.rotation.date = $D0
I get the following error:
12:45:12.131 [error] You've tried to set log.file.formatter, but there is no setting with that name.
12:45:12.134 [error] Did you mean one of these?
12:45:12.182 [error] log.file.level
12:45:12.182 [error] log.file
12:45:12.182 [error] log.file.rotation.date
12:45:12.182 [error] Error preparing configuration in phase transform_datatypes:
12:45:12.183 [error] - Conf file attempted to set unknown variable: log.file.formatter
According to the documentation, log.file.formatter should work, so what is wrong?
I have checked the RabbitMQ documentation.
I have checked other SO posts.
I entered the container and removed the setting; it works without it.
It looks like JSON logging and the log.file.formatter setting were added in the RabbitMQ 3.9.0 release.
Try upgrading if possible.
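As a rough sketch (the image tag and config mount path are assumptions, not taken from the question), bumping the image to a 3.9 release lets the existing config stay exactly as it is:
docker run -d --name rabbitmq \
  -v "$(pwd)/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro" \
  rabbitmq:3.9-management
# rabbitmq.conf is unchanged from the question; log.file.formatter = json
# is only recognized from RabbitMQ 3.9.0 onwards.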

Missing Manifest from docker

The remote Docker repository seems to be configured correctly, but when I try to pull any image through my virtual Docker repository, it always fails with this trace:
2020-04-01T16:20:32.969Z [jfrt ] [INFO ] [b6e7232e6d2e0cb4] [DockerV2VirtualRepoHandler:117] [http-nio-8081-exec-8] - Fetching docker manifest for repo 'thanosio/thanos' and tag 'latest'
2020-04-01T16:20:33.874Z [jfrt ] [ERROR] [b6e7232e6d2e0cb4] [.DockerV2RemoteRepoHandler:448] [http-nio-8081-exec-8] - Missing Manifest from docker-via-intranet 'v2/thanosio/thanos/manifests/latest' not found at docker-via-intranet:thanosio/thanos/latest/list.manifest.json
2020-04-01T16:20:34.703Z [jfrt ] [ERROR] [b6e7232e6d2e0cb4] [.DockerV2RemoteRepoHandler:448] [http-nio-8081-exec-8] - Missing Manifest from docker-remote 'v2/thanosio/thanos/manifests/latest' not found at docker-remote:thanosio/thanos/latest/list.manifest.json
2020-04-01T16:20:35.545Z [jfrt ] [ERROR] [b6e7232e6d2e0cb4] [.DockerV2RemoteRepoHandler:448] [http-nio-8081-exec-8] - Missing Manifest from quay-io 'v2/thanosio/thanos/manifests/latest' not found at quay-io:thanosio/thanos/latest/list.manifest.json
DockerHub requires token authentication. You should check the "Enable Token Authentication" box. After doing this, try to pull an image you have never pulled before (since JFrog Container Registry caches 404s for a period of time). You can also go to the Advanced settings and set the missed metadata retrieval cache period to zero (instead of waiting for the cache period to expire).
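Once token authentication is enabled, a quick way to verify is to pull a not-yet-cached image through the virtual repository. The hostname and repository key below are placeholders for your own setup (the exact path also depends on how your reverse proxy exposes Docker repositories):
docker login artifactory.example.com
docker pull artifactory.example.com/docker-virtual/thanosio/thanos:latest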

Starting Zabbix Server within Docker replaces strings in the config file with nothing, or totally ignores them

Some strings in the config file are replaced with nothing, and others are totally ignored, such as the name of a new DB added for testing purposes.
First I tried to add roughly 250 hosts on top of the ~250 already added, and the Zabbix server shut down. I restarted it, and in the docker logs I saw this:
6:20191014:091840.201 using configuration file: /etc/zabbix/zabbix_server.conf
6:20191014:091840.223 current database version (mandatory/optional): 04020000/04020001
6:20191014:091840.223 required mandatory version: 04020000
6:20191014:091840.484 __mem_malloc: skipped 7 asked 108424 skip_min 304 skip_max 12192
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): out of memory (requested 108424 bytes)
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): please increase CacheSize configuration parameter
6:20191014:091840.484 === memory statistics for configuration cache ===
The solution for this problem was to increase CacheSize in zabbix_server.conf. Okay, that's not a problem, so I pushed a new config to the Zabbix server and restarted it... and the server stopped again right after starting, with the logs reporting the same problem. When I read the config inside the container, I saw that the strings I had corrected to match my wishes were missing. The strings are simply deleted.
My config:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
StartPollers=5
StartIPMIPollers=5
StartPollersUnreachable=5
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
CacheSize=512M
HistoryCacheSize=512M
HistoryIndexCacheSize=512M
TrendCacheSize=512m
ValueCacheSize=256M
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
And this is what I get after starting the Zabbix server:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
Any suggestions on how to rule the world and not get carted off by the doctors?
With Docker you need to pass configuration parameters in the docker-compose.yml file, or in your docker run command using -e.
For example, from my docker-compose.yml file:
zabbix-server:
image: zabbix/zabbix-server-pgsql:ubuntu-4.2.6
environment:
ZBX_MAXHOUSEKEEPERDELETE: 5000
ZBX_STARTPOLLERS: 15
ZBX_CACHESIZE: 8M
ZBX_STARTDBSYNCERS: 4
ZBX_HISTORYCACHESIZE: 16M
ZBX_TRENDCACHESIZE: 4M
ZBX_VALUECACHESIZE: 8M
ZBX_LOGSLOWQUERIES: 3000
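For the docker run route mentioned above, a rough equivalent using the asker's values might look like this. The network wiring and database variables are assumptions based on the question's config; the ZBX_* and POSTGRES_* names come from the official zabbix/zabbix-server-pgsql image:
docker run -d --name zabbix-server \
  --link postgres-server:postgres-server \
  -e DB_SERVER_HOST=postgres-server \
  -e POSTGRES_USER=zabbix \
  -e POSTGRES_PASSWORD=zabbix \
  -e POSTGRES_DB=zabbix_pwd \
  -e ZBX_CACHESIZE=512M \
  -e ZBX_HISTORYCACHESIZE=512M \
  -e ZBX_VALUECACHESIZE=256M \
  zabbix/zabbix-server-pgsql:ubuntu-4.2.6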
Another way to work with Zabbix:
https://hub.docker.com/r/monitoringartist/zabbix-3.0-xxl/

Error: Request to Elasticsearch failed

Kibana version: 5.4.2
Elasticsearch version: 5.4.2
Logstash version: 5.4.2
Server OS version: Red Hat Linux; Kibana, Logstash and Elasticsearch run in Docker containers.
Logs do not display in ELK. What is the problem, and how can I fix the error?
Discover: Request to Elasticsearch failed:
{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}
Error: Request to Elasticsearch failed:
{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}
at ip-address:5601/bundles/kibana.bundle.js?v=15117:28:10760
at Function.Promise.try (http://:5601/bundles/commons.bundle.js?v=15117:82:22203)
at ip-address:5601/bundles/commons.bundle.js?v=15117:82:21573
at Array.map (native)
at Function.Promise.map (http://ip-address:5601/bundles/commons.bundle.js?v=15117:82:21528)
at callResponseHandlers (http://:5601/bundles/kibana.bundle.js?v=15117:28:10376)
at ip-address:5601/bundles/kibana.bundle.js?v=15117:27:29944
at processQueue (ip-address:5601/bundles/commons.bundle.js?v=15117:38:23621)
at ip-address:5601/bundles/commons.bundle.js?v=15117:38:23888
at Scope.$eval (ip-address:5601/bundles/commons.bundle.js?v=15117:39:4619)
If you have multiple websites, go to
STORES > CONFIGURATION > CATALOG > CATALOG > CATALOG SEARCH > Elasticsearch Index Prefix
and set a different prefix for each website (any distinct value will do). Then reindex and flush the cache on each store.
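For the reindex and cache flush, the standard Magento CLI commands (run from the Magento root of each installation; catalogsearch_fulltext is the usual catalog search indexer ID) would be roughly:
php bin/magento indexer:reindex catalogsearch_fulltext
php bin/magento cache:flush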
Another official solution:
Based on the bug filed in my previous reply, I modified the following file to fix the search problem.
./vendor/magento/module-elasticsearch/Model/Adapter/FieldMapper/Product/FieldProvider/FieldType/Converter.php
private const ES_DATA_TYPE_DOUBLE = 'double';
--> private const ES_DATA_TYPE_FLOAT = 'float';
Are the indices being created? What does this show?
GET /_cat/indices/
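From the command line, assuming Elasticsearch listens on localhost:9200, the same check looks like:
curl -s 'http://localhost:9200/_cat/indices?v'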

Homebrew: Can't start Elasticsearch

I'm in big trouble: I can't start Elasticsearch, and I need it to run my Rails app locally. Please tell me what's going on. I installed Elasticsearch in the normal fashion, then ran the following:
elasticsearch --config=/usr/local/opt/elasticsearch/config/elasticsearch.yml
But it shows the following error:
[2015-11-01 20:36:50,574][INFO ][bootstrap] es.config is no longer supported. elasticsearch.yml must be placed in the config directory and cannot be renamed.
I tried several alternative ways of running it, like:
elasticsearch -f -D
But then I get the following error, and I can't find anything useful to solve it. It seems to be related to file permissions, but I'm not sure:
java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: sun.misc.Launcher$AppClassLoader#33909752
at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
at org.joda.time.DateTimeUtils.getChronology(DateTimeUtils.java:266)
at org.joda.time.format.DateTimeFormatter.selectChronology(DateTimeFormatter.java:968)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:672)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:560)
at org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:644)
at org.elasticsearch.Build.<clinit>(Build.java:51)
at org.elasticsearch.node.Node.<init>(Node.java:135)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
[2015-11-01 20:40:57,602][INFO ][node ] [Centurius] version[2.0.0], pid[22063], build[de54438/2015-10-22T08:09:48Z]
[2015-11-01 20:40:57,605][INFO ][node ] [Centurius] initializing ...
Exception in thread "main" java.lang.IllegalStateException: failed to load bundle [] due to jar hell
Likely root cause: java.security.AccessControlException: access denied ("java.io.FilePermission" "/usr/local/Cellar/elasticsearch/2.0.0/libexec/antlr-runtime-3.5.jar" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkRead(SecurityManager.java:888)
at java.util.zip.ZipFile.<init>(ZipFile.java:210)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:103)
at org.elasticsearch.bootstrap.JarHell.checkJarHell(JarHell.java:173)
at org.elasticsearch.plugins.PluginsService.loadBundles(PluginsService.java:340)
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:113)
at org.elasticsearch.node.Node.<init>(Node.java:144)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
Thanks for your help.
There are some changes around libexec in the Elasticsearch Homebrew installation, and that is why it fails to start. There is a PR #45644 currently being worked on. Until the PR gets accepted, you can use its formula to fix the Elasticsearch installation.
First uninstall the earlier/older version, then edit the Elasticsearch formula:
$ brew edit elasticsearch
and use the formula from the PR.
Then run brew install elasticsearch; it should work fine.
To start Elasticsearch, just do:
$ elasticsearch
The --config option is no longer valid. For a custom config directory, use path.conf:
$ elasticsearch --path.conf=/usr/local/opt/elasticsearch/config
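Putting the steps together, the whole sequence looks roughly like this (a sketch assuming the default Homebrew prefix /usr/local):
$ brew uninstall elasticsearch
$ brew edit elasticsearch      # replace the contents with the formula from the PR
$ brew install elasticsearch
$ elasticsearch --path.conf=/usr/local/opt/elasticsearch/config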
