On a ClickHouse Docker server, I am trying to use the clickhouse-jdbc-bridge to access data on a MariaDB Docker server, but I am met with a "NamedDataSource [jmdb-1] does not exist!" error.
I followed the basic jdbc-bridge instructions at https://clickhouse.com/docs/en/integrations/jdbc/jdbc-with-clickhouse
The jdbc-bridge is running:
root@c3410b41cd87:/# ps -aux | grep jdbc
root 251 0.3 1.3 6349608 108076 ? Sl 13:28 0:17 java -jar /clickhouse-jdbc-bridge/clickhouse-jdbc-bridge-2.0.7-shaded.jar
root 529 0.0 0.0 6300 716 pts/0 S+ 14:59 0:00 grep --color=auto jdbc
Here is my datasource file:
root@c3410b41cd87:/clickhouse-jdbc-bridge/config/datasources# cat jmdb-1.json
{
  "jmdb-1": {
    "driverUrls": [
      "https://dlm.mariadb.com/2325871/Connectors/java/connector-java-3.0.5/mariadb-java-client-3.0.5.jar"
    ],
    "jdbcUrl": "jdbc:mariadb://10.5.1.11:13306",
    "username": "mariadb",
    "password": "maria1234"
  }
}
Here is my failing query with the resulting error:
SELECT * FROM jdbc('jmdb-1', 'mdb', 'mdb_sf1_region');
Received exception from server (version 22.5.1):
Code: 86. DB::Exception: Received from 10.5.1.31:9000. DB::Exception: Received error from remote server /columns_info?connection_string=jmdb-1&schema=mdb&table=mdb_sf1_region&external_table_functions_use_nulls=true. HTTP status code: 500 Internal Server Error, body: NamedDataSource [jmdb-1] does not exist!.
I assume that ClickHouse can reach the jdbc-bridge because of the error, but I have no clue why the bridge does not find the datasource file. Any help appreciated!
The error message from the ClickHouse server when using the clickhouse-jdbc-bridge is misleading.
At startup, the clickhouse-jdbc-bridge loads the datasources and tries to connect with them. Only successfully connected datasources get registered! So the clickhouse-jdbc-bridge does find the datasource file, but because there is an error in the file or the connection is not working, the datasource does not get loaded, and the error returned to the ClickHouse server says that the datasource does not exist.
One can query all registered datasources via:
SELECT * FROM jdbc('','show datasources');
If one does not find the datasource there, it is likely that there is a typo in the datasource file or that the server the datasource points to is not up.
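As a quick sanity check (the IP and port below are taken from the jdbcUrl in the question; run it in the container where the bridge is running), one can first verify that the MariaDB port is reachable at all:
timeout 3 bash -c '</dev/tcp/10.5.1.11/13306' && echo "MariaDB port reachable" || echo "MariaDB port NOT reachable"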
On startup of the clickhouse-jdbc-bridge, one can also read the console output for further error messages about the datasource.
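For example, one can stop the background instance and start the bridge in the foreground (jar path taken from the ps output in the question) to watch the datasource loading and any connection errors as they happen:
java -jar /clickhouse-jdbc-bridge/clickhouse-jdbc-bridge-2.0.7-shaded.jar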
I've configured some clients to use NSCA to send data to Nagios, since I'm not able to connect from the server to the clients. I was able to achieve pretty much everything I wanted, but one last thing is not exactly how I would like it.
The host check / status information is not including the message I want. This is my nsclient.ini config:
[/settings/scheduler/schedules]
host_check = Check_OK
;host_check = Check_OK “OK”
And in the nagios server I see these messages:
Jun 30 06:52:02 localhost nagios: EXTERNAL COMMAND: PROCESS_HOST_CHECK_RESULT;client;0;No message
Jun 30 06:57:16 localhost nagios: EXTERNAL COMMAND: PROCESS_HOST_CHECK_RESULT;client;0;No message
Jun 30 07:02:31 localhost nagios: EXTERNAL COMMAND: PROCESS_HOST_CHECK_RESULT;client;0;No message
Jun 30 07:11:17 localhost nagios: EXTERNAL COMMAND: PROCESS_HOST_CHECK_RESULT;client;3;Invalid command line: unrecognised option 'OK'
Jun 30 07:11:17 localhost nagios: Error: External command failed -> PROCESS_HOST_CHECK_RESULT;client;3;Invalid command line: unrecognised option 'OK'
Jun 30 07:11:17 localhost nagios: External command [1656573077] PROCESS_HOST_CHECK_RESULT;client;3;Invalid command line: unrecognised option 'OK' returned error Command failed
So, when I use the "OK" I get the '3 Invalid command line: unrecognised option OK' result, and when I don't use any message I get '0 No message'.
Any thoughts on what I'm doing wrong here?
nsclient version = NSCP-0.5.2.35-x64
Thanks!
It gave me a hard time to figure out, but after deciding to check the documentation I tried a couple of other things and ended up with this way to get that message passed along:
host_check = check_ok message=OK
This way, in the Nagios UI I can see the Status Information for the host as "OK", just as I was expecting.
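For reference, the complete scheduler section in nsclient.ini then looks like this (section name taken from the question; the message text is just an example):
[/settings/scheduler/schedules]
host_check = check_ok message=OK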
After proceeding with the installation of the WhatsApp Business API (developer single instance) in Docker on Windows 10 Enterprise, I'm facing the following error message when calling https://192.168.43.200:8080/v1/health from Postman.
Error message:
{
  "meta": {
    "version": "v2.33.3",
    "api_status": "stable"
  },
  "errors": [
    {
      "code": 1014,
      "title": "Internal error",
      "details": "php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution. Please check if wacore is running: wacore:6252"
    }
  ]
}
Looking at the log files, it seems that the core is listening on a port that is different from the one expected by the web container.
---> Web log
[2021-02-24 12:46:38.560338] app.INFO: [064af96616514f6f8b41fc530047db4b] Matched route "{route}". {"route":"GET_v1_health","route_parameters":{"_controller":"WhatsApp\Controller\HealthController::getHealth","_route":"GET_v1_health"},"request_uri":"https://192.168.43.200:8080/v1/health","method":"GET"} []
[2021-02-24 12:46:38.587929] app.INFO: [064af96616514f6f8b41fc530047db4b] Guard authentication successful! {"token":"[object] (Symfony\Component\Security\Guard\Token\PostAuthenticationGuardToken: PostAuthenticationGuardToken(user="admin", authenticated=true, roles="ROLE_ADMIN"))","authenticator":"WhatsApp\Security\TokenAuthenticator"} []
[2021-02-24 12:47:14.646964] app.INFO: [064af96616514f6f8b41fc530047db4b] Response: {"meta":{"version":"v2.33.3","api_status":"stable"},"errors":[{"code":1014,"title":"Internal error","details":"php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution. Please check if wacore is running: wacore:6252"}]} []
[2021-02-24 12:47:14.650236] app.INFO: [064af96616514f6f8b41fc530047db4b] Request GET_/v1/health returns 500 in 36269.15 ms [] []
===================================================================================
Core log
D 2021-02-24 12:10:39.282 UTC 28 apiendpointmanager.cpp:190] Endpoint "healthcheck" is listening on address "0.0.0.0" port 6253 req_id=Main
D 2021-02-24 12:10:39.282 UTC 29 apiendpointmanager.cpp:190] Endpoint "control" is listening on address "0.0.0.0" port 6252 req_id=Main
===================================================================================
No changes were made to docker-compose.yml. It is the same as the one on GitHub (https://github.com/WhatsApp/WhatsApp-Business-API-Setup-Scripts), except that the network mode was changed from "bridge" to "nat" since I'm using Windows.
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
9d811d5d3283 Default Switch ics local
27dc22b69113 nat nat local
4e2733cd792d none null local
$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8d7000856b95 docker.whatsapp.biz/web:v2.33.3 "/opt/whatsapp/bin/w…" 17 hours ago Exited (4294967295) 6 minutes ago postgres_waweb_1
909781cdb775 docker.whatsapp.biz/coreapp:v2.33.3 "/opt/whatsapp/bin/w…" 17 hours ago Up 5 minutes 6250-6253/tcp postgres_wacore_1
7d68b7a61cad postgres:10.6 "docker-entrypoint.s…" 17 hours ago Up 6 minutes 5432/tcp, 33060/tcp, 0.0.0.0:33060->3306/tcp postgres_db_1
219b1e393f21 nginx "/docker-entrypoint.…" 42 hours ago Exited (4294967295) 41 hours ago nostalgic_jennings
The current WA_API_VERSION is 2.33.3.
The database used is Postgres 10.6.
Looking at a similar question answered by @WeiyanWang (How to access wacore container using WhatsApp Business API), I tried to execute the same command in Postgres, but with no success.
Regards,
After some investigation into the scenario described, I found some settings mistakes in Docker on Windows.
To fix these problems, follow these steps:
Reset the Docker settings to the original installation defaults
Choose the option "Switch to Linux containers..."
Reinstall the WhatsApp Business API following the documentation
Note: It is not necessary to change the network settings from "bridge" to "nat". I only changed the "waweb" port mapping from 9090:443 to 8080:443.
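For illustration, that waweb port mapping in docker-compose.yml would look roughly like this (service name and image tag taken from the question's setup; the rest of the service definition is omitted):
waweb:
  image: docker.whatsapp.biz/web:v2.33.3
  ports:
    - "8080:443"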
I'm having a lot of trouble installing MySQL 5.7 on Mac Mojave (ran 'brew install mysql@5.7').
On the initial install, I got a message saying postinstall was not completed successfully (please see the message below).
So, after I deleted everything in the directory /usr/local/var/mysql (which MySQL says is not empty), I STILL get the same message when re-running the postinstall command... (which is quite annoying: it seems MySQL is populating the data dir and then complaining that it is not empty?!)
[08:02:48][~/tmp]#brew postinstall mysql@5.7
==> Postinstalling mysql@5.7
==> /usr/local/Cellar/mysql@5.7/5.7.28/bin/mysqld --initialize-insecure --user=gert --basedir=/usr/local/Cellar/mysql@5.7/5.7.28 --datadir=/usr/local/var/my
Last 15 lines from /Users/gert/Library/Logs/Homebrew/mysql@5.7/post_install.01.mysqld:
2019-12-09 08:03:39 +0200
/usr/local/Cellar/mysql@5.7/5.7.28/bin/mysqld
--initialize-insecure
--user=gert
--basedir=/usr/local/Cellar/mysql@5.7/5.7.28
--datadir=/usr/local/var/mysql
--tmpdir=/tmp
2019-12-09T06:03:39.151987Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2019-12-09T06:03:39.154025Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2019-12-09T06:03:39.154074Z 0 [ERROR] Aborting
Trying to start MySQL as root gives an error:
[08:04:41][~/tmp]#sudo /usr/local/opt/mysql@5.7/bin/mysql.server start
Password: Starting MySQL ..... ERROR! The server quit without updating PID file (/var/run/mysqld/mysqld.pid).
I've been banging my head against the wall for days now trying to follow Stack Overflow posts like MySql server startup error 'The server quit without updating PID file', none of which are working...
My my.cnf:
[mysqld]
# Only allow connections from localhost
#bind-address = 127.0.0.1
#SO posts said to comment out the above ...
pid-file = /var/run/mysqld/mysqld.pid #Checked, this folder + file exists, with write permissions
Try using a data dir outside the mysql directory, i.e. if mysql is in /usr/local/mysql, use a data dir such as /var/data:
root@photon [ /var ]# /usr/local/mysql/bin/mysqld --initialize-insecure --user=mysql --datadir=/var/data
2020-02-22T21:42:27.121230Z 0 [System] [MY-013169] [Server] /usr/local/mysql/bin/mysqld (mysqld 8.0.19) initializing of server in progress as process 820
2020-02-22T21:42:35.018238Z 5 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
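Once the initialization succeeds, a minimal sketch for starting the server against that data dir (same paths and user as in the example above) would be:
/usr/local/mysql/bin/mysqld --user=mysql --datadir=/var/data &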
The Zabbix server either deletes strings I add to its config file or totally ignores them (for example, the name of a new DB added for testing purposes).
First I tried to add about 250 hosts on top of the ~250 already added, and the Z-server shut down. I restarted it, and in the docker logs I saw this:
6:20191014:091840.201 using configuration file: /etc/zabbix/zabbix_server.conf
6:20191014:091840.223 current database version (mandatory/optional): 04020000/04020001
6:20191014:091840.223 required mandatory version: 04020000
6:20191014:091840.484 __mem_malloc: skipped 7 asked 108424 skip_min 304 skip_max 12192
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): out of memory (requested 108424 bytes)
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): please increase CacheSize configuration parameter
6:20191014:091840.484 === memory statistics for configuration cache ===
The solution to this problem was to increase CacheSize in zabbix_server.conf. Okay, that's not a problem, so after this I pushed a new config to the Z-server and restarted it... → and the Z-server stops right after starting, and the logs show the same problem. After reading the config inside the container, I saw that the strings I had corrected to match my wishes were missing O_o. The strings are deleted.
My config:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
StartPollers=5
StartIPMIPollers=5
StartPollersUnreachable=5
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
CacheSize=512M
HistoryCacheSize=512M
HistoryIndexCacheSize=512M
TrendCacheSize=512m
ValueCacheSize=256M
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
And this is what I get after starting the Z-server:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
Any suggestions on how to rule the world and not get captured by the doctors?
With Docker you need to pass the config parameters in the docker-compose.yml file, or in your docker run command using -e.
For example, from my docker-compose.yml file:
zabbix-server:
  image: zabbix/zabbix-server-pgsql:ubuntu-4.2.6
  environment:
    ZBX_MAXHOUSEKEEPERDELETE: 5000
    ZBX_STARTPOLLERS: 15
    ZBX_CACHESIZE: 8M
    ZBX_STARTDBSYNCERS: 4
    ZBX_HISTORYCACHESIZE: 16M
    ZBX_TRENDCACHESIZE: 4M
    ZBX_VALUECACHESIZE: 8M
    ZBX_LOGSLOWQUERIES: 3000
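The same parameters can also be passed with docker run and -e, for example (image tag as in the compose snippet above, cache sizes taken from the question's config, container name is just an example):
docker run -d --name zabbix-server \
  -e ZBX_CACHESIZE=512M \
  -e ZBX_HISTORYCACHESIZE=512M \
  -e ZBX_VALUECACHESIZE=256M \
  zabbix/zabbix-server-pgsql:ubuntu-4.2.6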
Another way to work with zabbix:
https://hub.docker.com/r/monitoringartist/zabbix-3.0-xxl/
I'm trying to use graylog2 to collect logs from Docker containers. The docs say that only the UDP GELF input is supported for this purpose.
I'm using docker-compose to run the graylog server. See gist for all files used: https://gist.github.com/olegabr/7f5190c453bb63c71dabf151d2373c2f.
And I'm using this command to test it:
sendip -p ipv4 -is 127.0.0.1 -p udp -us 5070 -ud 12201 -d '{"version": "1.1","host":"example.org","short_message":"Short message","full_message":"Backtrace here\n\nmore stuff","level":1,"_user_id":9001,"_some_info":"foo","_some_env_var":"bar"}' -v 127.0.0.1
The server receives this message, but it cannot process it. I see the following in the graylog2 logs:
2016-12-09 11:53:20,125 WARN : org.graylog2.bindings.providers.DefaultStreamProvider - Unable to load default stream, tried 1 times, retrying every 500ms. Processing is blocked until this succeeds.
2016-12-09 11:53:25,129 WARN : org.graylog2.bindings.providers.DefaultStreamProvider - Unable to load default stream, tried 11 times, retrying every 500ms. Processing is blocked until this succeeds.
etc., many more similar lines.
The API call curl http://admin:123456@127.0.0.1:9000/api/count/total returns
{"events":0}
In the server logs I see that the default stream was initialized:
mongo_1 | 2016-12-09T11:51:12.522+0000 I INDEX [conn3] build index on: graylog.pipeline_processor_pipelines_streams properties: { v: 2, unique: true, key: { stream_id: 1 }, name: "stream_id_1", ns: "graylog.pipeline_processor_pipelines_streams" }
graylog_1 | 2016-12-09 11:51:13,408 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration] periodical, running forever.
graylog_1 | 2016-12-09 11:51:13,424 INFO : org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration - Legacy default stream has no connections, no migration needed.
graylog_1 | 2016-12-09 11:51:13,487 INFO : org.graylog2.migrations.V20160929120500_CreateDefaultStreamMigration - Successfully created default stream: All messages
graylog_1 | 2016-12-09 11:51:13,653 INFO : org.graylog2.migrations.V20161125142400_EmailAlarmCallbackMigration - No streams needed to be migrated.
graylog_1 | 2016-12-09 11:51:13,662 INFO : org.graylog2.migrations.V20161125161400_AlertReceiversMigration - No streams needed to be migrated.
graylog_1 | 2016-12-09 11:51:13,672 INFO : org.graylog2.migrations.V20161130141500_DefaultStreamRecalcIndexRanges - Cluster not connected yet, delaying migration until it is reachable.
So why can it not be loaded when the message arrives? And why is it needed in the first place?
I've tried to find similar reports on the web, but with no success.
This has nothing to do with the UDP input per se.
Graylog 2.2.0-beta.1 is broken and shouldn't be used. Please downgrade to Graylog 2.1.2 (the latest stable version) or wait for Graylog 2.2.0-beta.2.
See https://groups.google.com/forum/#!searchin/graylog2/docker|sort:date/graylog2/gCycC3_K3vU/EL-Lz_uNDQAJ for a related post on the Graylog mailing list.
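If you run Graylog from docker-compose (as in the linked gist), pinning the image back to a 2.1.x release is one way to downgrade; a minimal sketch (the image name and exact tag are assumptions, check Docker Hub for the current 2.1 stable tag):
graylog:
  image: graylog2/server:2.1.2-1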
Same trouble here.
I just set up Graylog and configured a GELF UDP input on port 12209,
then tested it twice with:
docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12209 busybox echo Hello Graylog
In the UI I saw:
2 messages in process buffer
2 unprocessed messages are currently in the journal, in 1 segments.
0 messages have been appended in the last second, 0 messages have been read in the last second.
and I'm still getting:
2016-12-09 12:41:23,715 INFO : org.graylog2.inputs.InputStateListener - Input [GELF UDP/584aa67308813b00010d009e] is now RUNNING
2016-12-09 12:41:43,666 WARN : org.graylog2.bindings.providers.DefaultStreamProvider - Unable to load default stream, tried 1 times, retrying every 500ms. Processing is blocked until this succeeds.
Has anyone found a solution?