I am trying to connect Neo4j and Elasticsearch Docker containers using GraphAware, and I get what looks like a docker-to-docker connection issue.
org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:9201 [/127.0.0.1] failed: Connection refused (Connection refused)
I can access both containers' data over HTTP using a browser. At this stage I do not know where the issue actually lies, since I am quite new to these technologies.
Versions I am using:
neo4j: 3.5.3
Elasticsearch: 6.6.1
graphaware-neo4j-to-elasticsearch-3.5.2.53.11.jar
graphaware-server-community-all-3.5.2.jar
graphaware-uuid-3.5.2.53.17.jar
docker: 18.09.3
Here is my docker-compose file:
version: '3.3'
services:
  neo4j:
    restart: always
    image: neo4j:3.5.3
    container_name: neo4j
    environment:
      - NEO4J_AUTH=none
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_dbms_connector_http_listen__address=:7474
      - NEO4J_dbms_connector_https_listen__address=:6477
      - NEO4J_dbms_connector_bolt_listen__address=:7687
      - NEO4J_dbms_memory_heap_initialSize=16G
      - NEO4J_dbms_memory_heap_maxSize=16G
    volumes:
      - /home/leag/drive53/neo4j/data:/data
      - ./neo4j/conf:/conf
      - ./neo4j/plugins:/plugins
      - /home/leag/drive53/imports:/import
    ports:
      - 7474:7474
      - 7687:7687
    networks:
      - neo-ela
  elastic:
    build: .
    container_name: elastic_container
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - neo4j="http://neo4j"
    ports:
      - 9201:9201
    networks:
      - neo-ela
networks:
  neo-ela:
    driver: bridge
The neo4j.conf file:
# This setting should only be set once for registering the framework and all the used submodules
dbms.unmanaged_extension_classes=com.graphaware.server=/graphaware
com.graphaware.runtime.enabled=true
#UIDM becomes the module ID:
com.graphaware.module.UIDM.1=com.graphaware.module.uuid.UuidBootstrapper
#optional, default is "uuid". (only if using the UUID module)
com.graphaware.module.UIDM.uuidProperty=uuid
#optional, default is all nodes:
#com.graphaware.module.UIDM.node=hasLabel('Label1') || hasLabel('Label2')
#optional, default is uuidIndex
com.graphaware.module.UIDM.uuidIndex=uuidIndex
#prevents the whole db from being assigned new uuids if the uuid module is set up together with neo4j2es
com.graphaware.module.UIDM.initializeUntil=0
#ES becomes the module ID:
com.graphaware.module.ES.2=com.graphaware.module.es.ElasticSearchModuleBootstrapper
#URI of Elasticsearch; elastic works as well
com.graphaware.module.ES.uri=127.0.0.1
#Port of Elasticsearch
com.graphaware.module.ES.port=9201
#optional, protocol of Elasticsearch connection, defaults to http
com.graphaware.module.ES.protocol=http
#optional, Elasticsearch index name, default is neo4j-index
com.graphaware.module.ES.index=neo4j-index
#optional, node property key of a property that is used as unique identifier of the node. Must be the same as com.graphaware.module.UIDM.uuidProperty (only if using UUID module), defaults to uuid
#use "ID()" to use native Neo4j IDs as Elasticsearch IDs (not recommended)
com.graphaware.module.ES.keyProperty=uuid
#optional, whether to retry if a replication fails, defaults to false
com.graphaware.module.ES.retryOnError=false
#optional, size of the in-memory queue that queues up operations to be synchronised to Elasticsearch, defaults to 10000
com.graphaware.module.ES.queueSize=10000
#optional, size of the batch size to use during re-initialization, defaults to 1000
com.graphaware.module.ES.reindexBatchSize=2000
#optional, specify which nodes to index in Elasticsearch, defaults to all nodes
#com.graphaware.module.ES.node=hasLabel('Label1')
#optional, specify which node properties to index in Elasticsearch, defaults to all properties
#com.graphaware.module.ES.node.property=key != 'age'
#optional, specify whether to send updates to Elasticsearch in bulk, defaults to true (highly recommended)
com.graphaware.module.ES.bulk=true
#optional, read explanation below, defaults to 0
com.graphaware.module.ES.initializeUntil=0
and elasticsearch.yml
network.publish_host: 127.0.0.1
network.host: 127.0.0.1
transport.tcp.port: 9300
http.port: 9201
http.host: 127.0.0.1
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
The problem, most likely, is that your Neo4j is trying to connect to an Elasticsearch on 127.0.0.1, which, from inside the Neo4j container, points back at the Neo4j container itself:
#URI of Elasticsearch; elastic works as well
com.graphaware.module.ES.uri=127.0.0.1
#Port of Elasticsearch
com.graphaware.module.ES.port=9201
Elasticsearch is not running in the same container, so you should avoid using 127.0.0.1 in these config files. Swap the localhost address for the local DNS name of the ES service, which Docker's embedded DNS resolves to the Elasticsearch container:
#URI of Elasticsearch; elastic works as well
com.graphaware.module.ES.uri=elastic
#Port of Elasticsearch
com.graphaware.module.ES.port=9201
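To sanity-check the fix, you can verify from inside the Neo4j container that the elastic service name resolves and that Elasticsearch answers on port 9201. A quick sketch (it assumes getent and wget are available in the neo4j image):

```shell
# Docker's embedded DNS resolves Compose service names on the shared
# neo-ela bridge network:
docker exec neo4j getent hosts elastic

# Hit Elasticsearch's HTTP port by service name from inside the container:
docker exec neo4j wget -qO- http://elastic:9201
```

If the second command returns the Elasticsearch banner JSON, the GraphAware module should be able to reach it with the same uri/port settings.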
Related
I'm using a K3s cluster in a docker(-compose) container in my CI/CD pipeline to test my application code. However, I have a problem with the cluster's certificate: I need to communicate with the cluster using the external address. My docker-compose file looks as follows:
version: '3'
services:
  server:
    image: rancher/k3s:v0.8.1
    command: server --disable-agent
    environment:
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - k3s-server:/var/lib/rancher/k3s
      # get the kubeconfig file
      - .:/output
    ports:
      # - 6443:6443
      - 6080:6080
      - 192.168.2.110:6443:6443
  node:
    image: rancher/k3s:v0.8.1
    tmpfs:
      - /run
      - /var/run
    privileged: true
    environment:
      - K3S_URL=https://server:6443
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
    ports:
      - 31000-32000:31000-32000
volumes:
  k3s-server: {}
accessing the cluster from python gives me
MaxRetryError: HTTPSConnectionPool(host='192.168.2.110', port=6443): Max retries exceeded with url: /apis/batch/v1/namespaces/mlflow/jobs?pretty=True (Caused by SSLError(SSLCertVerificationError("hostname '192.168.2.110' doesn't match either of 'localhost', '172.19.0.2', '10.43.0.1', '172.23.0.2', '172.18.0.2', '172.23.0.3', '127.0.0.1', '0.0.0.0', '172.18.0.3', '172.20.0.2'")))
Here are my two (three) questions:
How can I add additional IP addresses to the certificate generation? I was hoping the --bind-address in the server command would trigger that.
How can I fall back to plain HTTP? Providing an --http-listen-port didn't achieve the expected result.
Any other suggestions for how I can enable communication with the cluster?
Changing the Python code is not really an option, as I would like to keep the code unaltered for testing. (Falling back to HTTP works via kubeconfig.)
The solution is to use the --tls-san parameter:
server --disable-agent --tls-san 192.168.2.110
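In the compose file above, that means extending the server service's command so the generated certificate includes the external IP as a subject alternative name. A hedged sketch of the workflow:

```shell
# docker-compose.yml, "server" service — request an extra TLS SAN:
#   command: server --disable-agent --tls-san 192.168.2.110

# If a certificate was already generated, it lives in the k3s-server
# volume; wiping it forces regeneration (this also wipes cluster state):
docker-compose down -v
docker-compose up -d
```

After recreating the stack, the kubeconfig written to ./ should carry a certificate that is valid for 192.168.2.110, so the Python client can verify TLS without code changes.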
I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog transform and forward these messages to Elasticsearch.
I found a nice article on the configuration on https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup: it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is:
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in docker on a single machine. I use below docker compose file to start the stack.
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - 9200:9200
    networks:
      - logging-network
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.1
    depends_on:
      - logstash
    ports:
      - 5601:5601
    networks:
      - logging-network
  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    environment:
      - TZ=UTC
      - xpack.security.enabled=false
    ports:
      - 514:514/tcp
      - 514:514/udp
    volumes:
      - ./rsyslog.conf:/etc/rsyslog.conf:ro
      - rsyslog-work:/work
      - rsyslog-logs:/logs
volumes:
  rsyslog-work:
  rsyslog-logs:
networks:
  logging-network:
    driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitely
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
constant(value="{")
constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",") property(name="$!all-json" position.from="2")
# closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not: the rsyslog service in your compose file has no networks: entry, so it is attached to the default network instead of logging-network. Second, after running the containers on the same network, log in to the rsyslog container and check whether Elasticsearch is reachable on port 9200.
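A quick way to check both points (the name filters below are assumptions; adjust them to whatever docker ps shows for your stack):

```shell
# 1. List the networks each container is attached to. The rsyslog service
#    in the compose file above has no "networks:" entry, so it ends up on
#    the default network rather than "logging-network".
RSYSLOG=$(docker ps -qf name=rsyslog)
ES=$(docker ps -qf name=elasticsearch)
docker inspect -f '{{json .NetworkSettings.Networks}}' "$RSYSLOG"
docker inspect -f '{{json .NetworkSettings.Networks}}' "$ES"

# 2. Once both share a network, check from inside the rsyslog container
#    whether Elasticsearch answers on port 9200 by service name:
docker exec "$RSYSLOG" wget -qO- http://elasticsearch:9200
```

Once the service name resolves, omelasticsearch can be pointed at server="elasticsearch" instead of its localhost default.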
I am using the EFK stack to build a monitoring system. According to the Docker logging driver documentation, I can add a customized tag to enrich the metadata of the container log.
Here is my docker-compose file:
version: "3.3"
services:
  watcher:
    image: image_name
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]
    logging:
      driver: fluentd
      options:
        tag: "docker/{{.ImageName}}"
    networks:
      - elastic
Here is my Fluent-bit configuration:
[SERVICE]
    Flush        5
    Daemon       Off
    Log_Level    debug
    Parsers_File /conf/parsers.conf

[INPUT]
    Name Forward
    Port 24224

[OUTPUT]
    Name  es
    Match *
    Host  elasticsearch
    Port  9200
    Index fluent_bit
    Type  json
As you can see, I have already added tag: "docker/{{.ImageName}}" to the docker-compose file, and the container was restarted as well. The log I get in Kibana should include such a tag, but here is the log I got:
#timestamp:Mar 18, 2020 # 15:35:23.000 container_id:06dde90cb998c78962e321c8396c1f992119450a6868eecb7fa14c5b348670b1 container_name:/test_container source:stderr log:2020-03-18 14:35:23 - INFO - module: __main__ - action: Watcher is started - Watcher Start _id:RmcS7nABi-qh6YwdCII3 _type:json _index:fluent_bit _score: -
There are still only the container name and container id in the metadata and nothing more. Can anybody tell me what could be the reason for this?
I just found where the problem comes from.
When you add a tag option to the log driver, it is not automatically included in the Fluent Bit/Fluentd output. Include_Tag_Key must also be set to true in the output section:
[OUTPUT]
    Name            es
    Match           *
    Host            elasticsearch
    Port            9200
    Index           fluent_bit
    Type            json
    Include_Tag_Key true
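To confirm the tag now reaches Elasticsearch, you can pull a document straight from the fluent_bit index (host and port as mapped in the stack; the name of the added field is controlled by the Tag_Key output option):

```shell
# Fetch one document from the fluent_bit index; after Include_Tag_Key is
# enabled, each record should carry the forwarded tag, e.g. "docker/<image>":
curl -s 'http://localhost:9200/fluent_bit/_search?size=1&pretty'
```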
I'm trying to set up an ejabberd server via Docker so I can use Pidgin to chat with my teammates.
I have the following docker compose file:
version: "2"
services:
  ejabberd:
    image: rroemhild/ejabberd
    ports:
      - "5222:5222"
      - "5269:5269"
      - "5280:5280"
    environment:
      - ERLANG_NODE=ejabberd
      - XMPP_DOMAIN=localhost
      - EJABBERD_ADMINS=admin
      - EJABBERD_USERS=admin:pass1 user1:pass2 user2:pass3
    volumes:
      - ssl:/opt/ejabberd/ssl
      - backup:/opt/ejabberd/backup
      - upload:/opt/ejabberd/upload
      - database:/opt/ejabberd/database
volumes:
  ssl:
  backup:
  upload:
  database:
Whenever I try to launch ejabberd I get this error:
ejabberd_1 | 05:52:58.912 [critical] Node name mismatch: I'm [ejabberd@986834bd1bc8], the database is owned by [ejabberd@319f85780c99]
ejabberd_1 | 05:52:58.912 [critical] Either set ERLANG_NODE in ejabberdctl.cfg or change node name in Mnesia
Is there something I'm missing?
Because the container's hostname changes every time Docker recreates it, you'll need to override the ERLANG_NODE setting in /etc/ejabberd/ejabberdctl.cfg. For example:
ERLANG_NODE=ejabberd@mypermanenthostname
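With the compose file above, that translates to pinning both the container hostname and the node name. The hostname value below is an assumption; any stable name works:

```shell
# docker-compose.yml, "ejabberd" service:
#   hostname: ejabberd-host
#   environment:
#     - ERLANG_NODE=ejabberd@ejabberd-host

# If Mnesia was already initialised under the old node name, recreating the
# database volume clears the mismatch (this destroys existing ejabberd data):
docker-compose down -v
docker-compose up -d
```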
If others are attempting to migrate an ejabberd instance, they can do something similar, but the instructions they really want are:
https://docs.ejabberd.im/admin/guide/managing/#ad-hoc-commands
I have set up a MySQL cluster on my PC using the mysql/mysql-cluster image from Docker Hub, and it starts up fine. However, when I try to connect to the cluster from outside Docker (via the host machine) using ClusterJ, it doesn't connect.
Initially I was getting the following error: Could not alloc node id at 127.0.0.1 port 1186: No free node id found for mysqld(API)
So I created a custom mysql-cluster.cnf, very similar to the one distributed with the docker image, but with a new api endpoint:
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[ndb_mgmd]
NodeId=1
hostname=192.168.0.2
datadir=/var/lib/mysql
[ndbd]
NodeId=2
hostname=192.168.0.3
datadir=/var/lib/mysql
[ndbd]
NodeId=3
hostname=192.168.0.4
datadir=/var/lib/mysql
[mysqld]
NodeId=4
hostname=192.168.0.10
[api]
This is the configuration used for clusterJ setup:
com.mysql.clusterj.connect:
  host: 127.0.0.1:1186
  database: my_db
Here is the docker-compose config:
version: '3'
services:
  # Sets up the MySQL cluster ndb_mgmd process
  database-manager:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.2
    command: ndb_mgmd
    ports:
      - "1186:1186"
    volumes:
      - /c/Users/myuser/conf/mysql-cluster.cnf:/etc/mysql-cluster.cnf
  # Sets up the first MySQL cluster data node
  database-node-1:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.3
    command: ndbd
    depends_on:
      - database-manager
  # Sets up the second MySQL cluster data node
  database-node-2:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.4
    command: ndbd
    depends_on:
      - database-manager
  # Sets up the first MySQL server process
  database-server:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.10
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_DATABASE=my_db
      - MYSQL_USER=my_user
    command: mysqld
networks:
  database_net:
    ipam:
      config:
        - subnet: 192.168.0.0/16
When I try to connect to the cluster I get the following error: '127.0.0.1:1186' nodeId 0; Return code: -1 error code: 0 message: .
I can see that the app running ClusterJ is registered to the cluster, but then it disconnects. Here is an excerpt from the Docker MySQL manager logs:
database-manager_1 | 2018-05-10 11:18:43 [MgmtSrvr] INFO -- Node 3: Communication to Node 4 opened
database-manager_1 | 2018-05-10 11:22:16 [MgmtSrvr] INFO -- Alloc node id 6 succeeded
database-manager_1 | 2018-05-10 11:22:16 [MgmtSrvr] INFO -- Nodeid 6 allocated for API at 10.0.2.2
Any help solving this issue would be much appreciated.
Here is what happens when the ClusterJ application connects to ndb_mgmd.
You connect to the MGM server on port 1186. In this connection you
will get the configuration. This configuration contains the IP addresses
of the data nodes. To connect to the data nodes ClusterJ will try to
connect to 192.168.0.3 and 192.168.0.4. Since ClusterJ is outside Docker,
I presume those addresses point to some different place.
The management server will also provide a dynamic port to use when
connecting to the NDB data node. It is a lot easier to manage this
by setting ServerPort for NDB data nodes. I usually use 11860 as
ServerPort, 2202 is also popular to use.
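A hedged sketch of what that could look like in the mysql-cluster.cnf above, using 11860 as the fixed port:

```ini
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
; Fix the data-node transporter port instead of letting the management
; server hand out a dynamic one:
ServerPort=11860
```

Each ndbd container would then also need that port reachable from outside Docker (e.g. published or forwarded per data node) so an external ClusterJ client can open the transporter connection the management server tells it about.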
I am not sure how you mix a Docker environment with an external
environment. I assume it is possible to solve somehow by setting
up proper IP translation tables in the correct places.