Elasticsearch unable to discover the cluster's nodes - Docker

I have a cluster of 4 nodes, 2 masters and 2 slaves, all running in Docker containers.
The config of the master is below:
network:
  host: 0.0.0.0
  publish_host: myIP
http:
  port: 9201
transport.tcp.port: 9301
path:
  logs: "/usr/share/elasticsearch/logs"
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Authorization, Content-Length
cluster.name: lpfrlogsconcentration
node:
  name: "node-1"
  master: true
  data: false
  ingest: true
discovery.zen.ping.unicast.hosts:
  - myIP:9301
  - myIP:9302
  - myIP:9303
  - myIP:9304
And for the slaves, it is:
network:
  host: 0.0.0.0
  publish_host: myIP
http:
  port: 9203
transport.tcp.port: 9303
path:
  logs: "/usr/share/elasticsearch/logs"
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Authorization, Content-Length
cluster.name: lpfrlogsconcentration
node:
  name: "node-3"
  master: false
  data: true
  ingest: true
discovery.zen.ping.unicast.hosts:
  - myIP:9301
  - myIP:9302
  - myIP:9303
  - myIP:9304
But when I run the containers, I get this error on nodes 1 & 2 (the masters):
[2018-06-14T08:41:02,744][INFO ][o.e.d.z.ZenDiscovery ]
[node-1] failed to send join request to master
[{node-2}{UNxVYf5mQUSUV3Z9h6c7Pw}{4xJAijDHTfCNfdjyUDLvtQ}{221.128.56.131}{221.128.56.131:9302}],
reason [RemoteTransportException[[node-2][172.17.0.2:9302][internal:discovery/zen/join]];
nested: NotMasterException[Node [{node-2}{UNxVYf5mQUSUV3Z9h6c7Pw}{4xJAijDHTfCNfdjyUDLvtQ}
{221.128.56.131}{221.128.56.131:9302}] not master for join request]; ],
tried [3] times
Can you help me debug this?
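Not from the original post, but a quick sanity check is to ask each node which transport address it actually advertises; the error above shows node-2 announced both as 221.128.56.131:9302 and as the container-internal 172.17.0.2:9302. A minimal sketch, assuming the HTTP ports 9201-9204 are reachable from the Docker host (myIP is the question's placeholder):

# transport publish address of every node the master knows about
curl -s 'http://myIP:9201/_nodes/transport?pretty' | grep publish_address
# compare with what a data node reports
curl -s 'http://myIP:9203/_nodes/transport?pretty' | grep publish_address

Also note that with discovery.zen.minimum_master_nodes: 2 and only two master-eligible nodes, both masters must be able to reach each other on their advertised transport ports before either can be elected.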

Related

Envoy proxy: 503 Service Unavailable [duplicate]

State of Services:
Client (nuxt) is up on http://localhost:3000 and the client sends
requests to http://localhost:8080.
Server (django) is running on 0.0.0.0:50051.
Docker is also up:
78496fef541f   5f9773709483   "/docker-entrypoint.…"   29 minutes ago   Up 29 minutes   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp, 10000/tcp   envoy
envoy.yaml configuration:
I configured the envoy.yaml file as follows:
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      load_assignment:
        cluster_name: cluster_0
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 0.0.0.0
                      port_value: 50051
Error:
But the following error occurs, and it seems the requests never reach the Django server on 0.0.0.0:50051.
503 Service Unavailable
grpc-message: upstream connect error or disconnect/reset before
headers. reset reason: connection failure, transport failure reason:
delayed connect error: 111
I encountered the same error. Here is my setup:
Envoy: running in Docker with a listener on port 8080 that forwards to port 9090
Next.js web client: sends requests to the Envoy proxy on port 8080
Node gRPC server: listens on port 9090
I'm starting everything in a local environment.
Based on this example about configuring the Envoy proxy, which refers to this issue, I changed the upstream address in envoy.yaml to host.docker.internal.
Just refer to this section if you want to try it:
clusters:
  - name: backend_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    load_assignment:
      cluster_name: cluster_0
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: host.docker.internal  # changed; this was 0.0.0.0 before
                    port_value: 9090
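One caveat that is not in the original answer: on Linux, host.docker.internal does not exist inside containers by default. A minimal sketch, assuming Docker 20.10+ and that Envoy is started directly with docker run (the image tag and paths are illustrative):

# map host.docker.internal to the host's gateway IP inside the Envoy container
docker run --add-host=host.docker.internal:host-gateway \
  -v $(pwd)/envoy.yaml:/etc/envoy/envoy.yaml:ro \
  -p 8080:8080 envoyproxy/envoy:v1.22.0

With docker-compose, the equivalent is an extra_hosts entry of "host.docker.internal:host-gateway" on the envoy service.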

Ejabberd 21.12 MQTT configuration

I have a fresh install of Ejabberd 21.12 on Ubuntu 20.04. So far I have not been able to mosquitto_sub or mosquitto_pub a test message.
mosquitto_sub -h mydomain.xyz -t "test/topic" -d -p 1883
Client (null) sending CONNECT
Client (null) received CONNACK (5)
Connection error: Connection Refused: not authorised.
Client (null) sending DISCONNECT
The relevant (I think) parts of ejabberd.yml look like this:
-
  port: 5280
  ip: "::"
  module: ejabberd_http
  request_handlers:
    "/admin": ejabberd_web_admin
    "/mqtt": mod_mqtt
-
  port: 1883
  module: mod_mqtt
  backlog: 1000
-
  port: 8883
  module: mod_mqtt
  backlog: 1000
  tls: true

mod_mqtt: {}
I tried defining publisher and subscriber in acl but that didn't seem to change anything.
I did:
ufw allow 1883
ufw allow 8883
Thanks for any help or insight.
------EDIT----------
Still having the same issue, but I have tried adding access rules just to fill out the MQTT configuration.
I have tried changing the syntax of the ACL section many times, but every variant results in the same auth failure. Here is my yml:
###
###' ejabberd configuration file
###
### The parameters used in this configuration file are explained at
###
### https://docs.ejabberd.im/admin/configuration
###
### The configuration file is written in YAML.
### *******************************************************
### ******* !!! WARNING !!! *******
### ******* YAML IS INDENTATION SENSITIVE *******
### ******* MAKE SURE YOU INDENT SECTIONS CORRECTLY *******
### *******************************************************
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
###

hosts:
  - DOMAIN.XYZ

loglevel: debug

certfiles:
  - "/opt/ejabberd/conf/server.pem"
#  - "/etc/letsencrypt/live/DOMAIN.XYZ/fullchain.pem"
#  - "/etc/letsencrypt/live/DOMAIN.XYZ/privkey.pem"
  - "/opt/ejabberd/conf/fullchain.pem"
  - "/opt/ejabberd/conf/privkey.pem"

ca_file: "/opt/ejabberd/conf/cacert.pem"

listen:
  -
    port: 5222
    ip: "0.0.0.0"
    module: ejabberd_c2s
    max_stanza_size: 262144
    shaper: c2s_shaper
    access: c2s
    starttls_required: true
  -
    port: 5269
    ip: "0.0.0.0"
    module: ejabberd_s2s_in
    max_stanza_size: 524288
  -
    port: 5443
    ip: "0.0.0.0"
    module: ejabberd_http
    tls: true
    request_handlers:
      "/admin": ejabberd_web_admin
      "/api": mod_http_api
      "/bosh": mod_bosh
      "/captcha": ejabberd_captcha
      "/upload": mod_http_upload
      "/ws": ejabberd_http_ws
      "/oauth": ejabberd_oauth
  -
    port: 5280
    ip: "0.0.0.0"
    module: ejabberd_http
    request_handlers:
      "/admin": ejabberd_web_admin
      "/mqtt": mod_mqtt
  -
    port: 1883
    module: mod_mqtt
    backlog: 1000
  -
    port: 8883
    module: mod_mqtt
    backlog: 1000
    tls: true

s2s_use_starttls: optional

acl:
  local:
    user_regexp: ""
  loopback:
    ip:
      - 127.0.0.0/8
      - ::1/128
      - ::FFFF:127.0.0.1/128
  admin:
    user:
      - admin@DOMAIN.XYZ
  publisher:
    user:
      "broker": "DOMAIN.XYZ"
  subscriber:
    user:
      "broker": "DOMAIN.XYZ"

access_rules:
  local:
    allow: local
  c2s:
    deny: blocked
    allow: all
  announce:
    allow: admin
  configure:
    allow: admin
  muc_create:
    allow: local
  pubsub_createnode:
    allow: local
  trusted_network:
    allow: loopback

api_permissions:
  "console commands":
    from:
      - ejabberd_ctl
    who: all
    what: "*"
  "admin access":
    who:
      access:
        allow:
          - acl: loopback
          - acl: admin
      oauth:
        scope: "ejabberd:admin"
        access:
          allow:
            - acl: loopback
            - acl: admin
    what:
      - "*"
      - "!stop"
      - "!start"
  "public commands":
    who:
      ip: 127.0.0.1/8
    what:
      - status
      - connected_users_number

shaper:
  normal: 1000
  fast: 50000

shaper_rules:
  max_user_sessions: 10
  max_user_offline_messages:
    5000: admin
    100: all
  c2s_shaper:
    none: admin
    normal: all
  s2s_shaper: fast

max_fsm_queue: 10000

acme:
  contact: "mailto:cedar@disroot.org"
  ca_url: "https://acme-v02.api.letsencrypt.org"

modules:
  mod_adhoc: {}
  mod_admin_extra: {}
  mod_announce:
    access: announce
  mod_avatar: {}
  mod_blocking: {}
  mod_bosh: {}
  mod_caps: {}
  mod_carboncopy: {}
  mod_client_state: {}
  mod_configure: {}
  mod_disco: {}
  mod_fail2ban: {}
  mod_http_api: {}
  mod_http_upload:
    put_url: https://@HOST@:5443/upload
  mod_last: {}
  mod_mam:
    ## Mnesia is limited to 2GB, better to use an SQL backend
    ## For small servers SQLite is a good fit and is very easy
    ## to configure. Uncomment this when you have SQL configured:
    ## db_type: sql
    assume_mam_usage: true
    default: never
  mod_mqtt:
    access_publish:
      "#":
        - allow: publisher
        - deny
    access_subscribe:
      "#":
        - allow: subscriber
        - deny
  mod_muc:
    access:
      - allow
    access_admin:
      - allow: admin
    access_create: muc_create
    access_persistent: muc_create
    access_mam:
      - allow
    default_room_options:
      allow_subscription: true  # enable MucSub
      mam: false
  mod_muc_admin: {}
  mod_offline:
    access_max_user_messages: max_user_offline_messages
  mod_ping: {}
  mod_privacy: {}
  mod_private: {}
  mod_proxy65:
    access: local
    max_connections: 5
  mod_pubsub:
    access_createnode: pubsub_createnode
    plugins:
      - flat
      - pep
    force_node_config:
      ## Avoid buggy clients to make their bookmarks public
      storage:bookmarks:
        access_model: whitelist
  mod_push: {}
  mod_push_keepalive: {}
  mod_register:
    ## Only accept registration requests from the "trusted"
    ## network (see access_rules section above).
    ## Think twice before enabling registration from any
    ## address. See the Jabber SPAM Manifesto for details:
    ## https://github.com/ge0rg/jabber-spam-fighting-manifesto
    ip_access: trusted_network
  mod_roster:
    versioning: true
  mod_s2s_dialback: {}
  mod_shared_roster: {}
  mod_stream_mgmt:
    resend_on_timeout: if_offline
  mod_vcard: {}
  mod_vcard_xupdate: {}
  mod_version:
    show_os: false

### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
Even if Ejabberd is set up to allow "anonymous" access, it seems to require a username AND PASSWORD (that it doesn't check) when connecting over MQTT. This command works for me:
mosquitto_sub -h mydomain.xyz -t "test/topic" -d -p 1883 -u foo -P bar
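If anonymous login is not actually enabled for the vhost, registering a real account and authenticating with its JID should also work. A sketch, not from the original answer: the account name and password are placeholders, and it assumes ejabberd's mod_mqtt expects the full JID as the MQTT username.

# register an example account on the vhost (run as the ejabberd user / from the install's bin directory)
ejabberdctl register broker DOMAIN.XYZ somepassword
# subscribe using the full JID as the MQTT username
mosquitto_sub -h mydomain.xyz -t "test/topic" -d -p 1883 -u broker@DOMAIN.XYZ -P somepassword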

Unable to start envoy in docker-compose

I have this envoy.yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ['*']
                      routes:
                        - match: { prefix: '/' }
                          route:
                            cluster: echo_service
                            timeout: 0s
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: '*'
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: '1728000'
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: echo_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      load_assignment:
        cluster_name: cluster_0
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: node-server
                      port_value: 9090
This file is copied from this official example.
But when I try to go ahead with the docs
$ docker-compose pull node-server envoy commonjs-client
$ docker-compose up node-server envoy commonjs-client
I get this:
ERROR: No such service: node-server
If I run docker-compose pull envoy, I get ERROR: No such service: envoy.
What did I miss?
It seems my assumption in the comments was wrong. This repository contains Dockerfiles with a build context to create the images when running the docker compose command.
In your example, the command:
docker-compose pull node-server envoy commonjs-client
should check whether the images are available locally. If they are not, Compose should be able to build them.
What confuses me is that you pointed to a docker-compose.yaml file stashed away deep in the examples folder. If you ran the command from there, I can see why you'd get the error: the relative path to the envoy Dockerfile is ./net/grpc/gateway/docker/envoy/Dockerfile, which is not accessible from the echo example's location.
It should, however, be accessible from your project root (i.e. the directory of this file: https://github.com/grpc/grpc-web/blob/master/docker-compose.yml). Have you tried running it from there?
FYI: what should happen after a pull is Compose notifying you that the image cannot be found in your local repository and then proceeding to create it from the Dockerfile it finds in the path relative to the root (./net/grpc/gateway/docker/envoy/Dockerfile).
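As a concrete sketch of the suggestion above (assuming the services are named node-server, envoy and commonjs-client in the repository's top-level docker-compose.yml, as in the question):

# run from the root of the grpc-web checkout, where docker-compose.yml lives
git clone https://github.com/grpc/grpc-web.git
cd grpc-web
docker-compose up --build node-server envoy commonjs-client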

Connecting react web client to gRPC server through Envoy docker doesn't work

I have a gRPC server in Scala Play Framework which exposes the gRPC hello world example service on port 9000. I'm trying to connect it with a React web client. It seems that I'm having connection issues with the Envoy proxy, which is deployed in a Docker container on a Mac.
I'm always getting the same error, which I believe means that Envoy is not able to connect to the backend:
code: 2
message: "Http response at 400 or 500 level"
metadata: Object { }
My Dockerfile to build Envoy is:
FROM envoyproxy/envoy:v1.12.2
COPY ./envoy.yaml /etc/envoy/envoy.yaml
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml -l trace --log-path /tmp/envoy_info.log
And I'm building it using this script:
echo --- Building my-envoy docker image ---
docker build -t my-envoy:1.0 .
echo --- Running my-envoy docker image ---
docker run -d -p 8080:8080 -p 9901:9901 --network=host my-envoy:1.0
Envoy configuration is defined in this yaml file:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin:
                          - "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.grpc_web
                  - name: envoy.cors
                  - name: envoy.router
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      hosts: [{ socket_address: { address: host.docker.internal, port_value: 9000 }}]
I could not find anything relevant in the Envoy log file except these lines:
[2020-12-15 01:28:18.747][8][debug][upstream] [source/common/upstream/logical_dns_cluster.cc:72] starting async DNS resolution for host.docker.internal
[2020-12-15 01:28:18.747][8][trace][upstream] [source/common/network/dns_impl.cc:160] Setting DNS resolution timer for 5000 milliseconds
[2020-12-15 01:28:18.748][8][trace][upstream] [source/common/network/dns_impl.cc:160] Setting DNS resolution timer for 5000 milliseconds
[2020-12-15 01:28:18.749][8][debug][upstream] [source/common/upstream/logical_dns_cluster.cc:79] async DNS resolution complete for host.docker.internal
[2020-12-15 01:28:21.847][8][debug][main] [source/server/server.cc:175] flushing stats
[2020-12-15 01:28:23.751][8][debug][upstream] [source/common/upstream/logical_dns_cluster.cc:72] starting async DNS resolution for host.docker.internal
[2020-12-15 01:28:23.751][8][trace][upstream] [source/common/network/dns_impl.cc:160] Setting DNS resolution timer for 5000 milliseconds
[2020-12-15 01:28:23.753][8][trace][upstream] [source/common/network/dns_impl.cc:160] Setting DNS resolution timer for 5000 milliseconds
[2020-12-15 01:28:23.753][8][debug][upstream] [source/common/upstream/logical_dns_cluster.cc:79] async DNS resolution complete for host.docker.internal
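Not part of the original post, but since the admin interface is bound to port 9901 in this config, the /clusters admin endpoint shows which address Envoy resolved for greeter_service and whether connections to it are failing:

# per-cluster upstream addresses, connect failures and health flags
curl http://localhost:9901/clusters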

Envoy proxy isn't forwarding to listed address?

So I have a dapr service hosted at: 192.168.1.34:50459
I'm trying to have it communicate with my web application using grpc-web. To do so I have an envoy proxy (according to https://github.com/grpc/grpc-web/issues/347).
My envoy.yaml file is as follows:
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 4949 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.grpc_web
                  - name: envoy.cors
                  - name: envoy.router
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      hosts: [{ socket_address: { address: 192.168.1.34, port_value: 50459 }}]
It should listen at 0.0.0.0:4949, and forward it to 192.168.1.34:50459
But when I start this proxy with
docker run -d -v envoy.yaml:/etc/envoy/envoy.yaml:ro -p 4949:4949 -p 50459:50459 envoyproxy/envoy:v1.15.0
It routes it to 0.0.0.0:50459
Does anyone know how to resolve this?
I don't know much about Envoy, but can you configure it to log to a mounted path and see if the route is actually configured? Also, I assume you're setting --dapr-grpc-port explicitly in the dapr run command? Have you tried setting it to a different port in case of collisions?
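For illustration only (the app ID, app port and app command below are placeholders, not taken from the original post), pinning the Dapr sidecar's gRPC port explicitly looks like this:

# expose the Dapr sidecar's gRPC endpoint on the port Envoy forwards to
dapr run --app-id myapp --app-port 5000 --dapr-grpc-port 50459 -- dotnet run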
