Open Distro for Elasticsearch single-node cluster not working - elasticsearch-opendistro
I am new to Open Distro for Elasticsearch. I was excited to try this new open-source project, but I am unable to set up a single-node cluster. I am using all default settings after following https://opendistro.github.io/for-elasticsearch-docs/, yet the node still will not come up.
My elasticsearch.yml is as below:
cluster.name: my-application
node.name: elk1
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.1.1
http.port: 9200
discovery.zen.minimum_master_nodes: 1
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: true
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
- CN=kirk,OU=client,O=client,L=test,C=DE
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
discovery.type: single-node
node.max_local_storage_nodes: 1
My kibana.yml is as below. I copied the *.pem files from /etc/elasticsearch/ to /etc/kibana/.
server.port: 5601
server.host: delk1
elasticsearch.hosts: ["https://elk1:9200"]
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
server.ssl.enabled: true
server.ssl.key: /etc/kibana/esnode-key.pem
server.ssl.certificate: /etc/kibana/esnode.pem
The error in the Elasticsearch log is as below:
[2020-03-09T18:57:08,052][INFO ][c.a.o.s.c.ConfigurationRepository] [elk1] Background init thread started. Install default config?: true
[2020-03-09T18:57:08,074][INFO ][c.a.o.s.c.ConfigurationRepository] [elk1] Will create .opendistro_security index so we can apply default config
[2020-03-09T18:57:08,208][INFO ][o.e.g.GatewayService ] [elk1] recovered [2] indices into cluster_state
[2020-03-09T18:57:09,416][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:09,460][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,461][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,461][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for config while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,461][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for roles while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,461][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,462][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for tenants while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,483][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:09,595][INFO ][o.e.c.r.a.AllocationService] [elk1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[security-auditlog-2020.03.09][0]]]).
[2020-03-09T18:57:11,990][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:14,499][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:17,006][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:17,411][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for config while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for roles while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for tenants while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:19,512][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:22,021][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:24,533][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:25,413][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,413][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,413][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for config while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,413][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for roles while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,414][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,414][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for tenants while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS] (index=.opendistro_security and type=null)
The error in the Kibana log is as below:
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"state_session_storage_redirect","version":"kibana","description":"When using the state:storeInSessionStorage setting with the short-urls, we need some way to get the full URL's hashed states into sessionStorage, this app will grab the URL from the injected state and and put the URL hashed states into sessionStorage before redirecting the user."},"message":"Initializing plugin state_session_storage_redirect@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"status_page","version":"kibana"},"message":"Initializing plugin status_page@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"tile_map","version":"kibana"},"message":"Initializing plugin tile_map@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:tile_map@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"author":"Rashid Khan <rashid@elastic.co>","name":"timelion","version":"kibana"},"message":"Initializing plugin timelion@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:timelion@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"ui_metric","version":"kibana"},"message":"Initializing plugin ui_metric@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:ui_metric@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"markdown_vis","version":"kibana"},"message":"Initializing plugin markdown_vis@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:markdown_vis@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"metric_vis","version":"kibana"},"message":"Initializing plugin metric_vis@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:metric_vis@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"table_vis","version":"kibana"},"message":"Initializing plugin table_vis@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:table_vis@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"tagcloud","version":"kibana"},"message":"Initializing plugin tagcloud@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:tagcloud@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"author":"Yuri Astrakhan<yuri@elastic.co>","name":"vega","version":"kibana"},"message":"Initializing plugin vega@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:vega@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["error","elasticsearch","admin"],"pid":16997,"message":"Request error, retrying\nGET https://elk1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => unable to verify the first certificate"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["server","uuid","uuid"],"pid":16997,"message":"Resuming persistent Kibana instance UUID: 329fdcc3-8105-489d-be69-c4e6397cb9a6"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"Unable to revive connection: https://elk1:9200/"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:elasticsearch@7.4.2","error"],"pid":16997,"state":"red","message":"Status changed from yellow to red - No Living connections","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"No living connections"}
{"type":"log","@timestamp":"2020-03-09T19:09:50Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2020-03-09T19:09:50Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"Unable to revive connection: https://elk1:9200/"}
{"type":"log","@timestamp":"2020-03-09T19:09:50Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"No living connections"}
{"type":"log","@timestamp":"2020-03-09T19:09:52Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}
....
{"type":"log","@timestamp":"2020-03-09T19:12:03Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2020-03-09T19:12:03Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"Unable to revive connection: https://elk1:9200/"}
{"type":"log","@timestamp":"2020-03-09T19:12:03Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"No living connections"}
{"type":"log","@timestamp":"2020-03-09T19:12:06Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2020-03-09T19:12:06Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"Unable to revive connection: https://elk1:9200/"}
{"type":"log","@timestamp":"2020-03-09T19:12:06Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"No living connections"}
I am not sure where I am going wrong. Any pointers with an example would help. Thanks in advance.
In the log message, you have:
Not yet initialized (you may need to run securityadmin)
In that case, you need to run securityadmin.sh to initialize the security index, for example:
/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh \
-cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ \
-icl -nhnv -cacert /etc/elasticsearch/certs/root-ca.pem \
-cert /etc/elasticsearch/certs/admin.pem \
-key /etc/elasticsearch/certs/admin-key.pem \
-h localhost
More info:
https://opendistro.github.io/for-elasticsearch-docs/old/0.9.0/docs/security/generate-certificates/#run-securityadminsh
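After securityadmin.sh completes, you can confirm that the security index is initialized by calling the cluster health API over HTTPS. The snippet below only assembles the command; the host, port, and the demo admin:admin credentials are assumptions to adjust for your setup:

```shell
# Hypothetical values -- match ES_HOST to network.host in elasticsearch.yml
ES_HOST=localhost
ES_PORT=9200

# -k skips certificate verification, acceptable only with the demo certs
HEALTH_CMD="curl -k -u admin:admin https://${ES_HOST}:${ES_PORT}/_cluster/health?pretty"
echo "${HEALTH_CMD}"
```

If initialization succeeded, running the printed command should return a JSON health body (status yellow or green on a single node) rather than an "Open Distro Security not initialized" error.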
Related
Local MQTT broker not reading the .conf file while bridging to cloud MQTT broker
I am trying to connect my local MQTT broker to the DIoTY cloud broker. I followed https://www.losant.com/blog/how-to-configure-mosquitto-bridge-to-losant and made all the configuration file changes as required. My /etc/mosquitto/mosquitto.conf looks like:
# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
I made a separate cloud.conf file in conf.d:
# Config file for mosquitto
# See mosquitto.conf(5) for more information.
user mosquitto
max_queued_messages 200
message_size_limit 0
allow_zero_length_clientid true
allow_duplicate_messages false
listener 1883
autosave_interval 900
autosave_on_changes false
persistence true
persistence_file mosquitto.db
allow_anonymous true
connection dioty
address mqtt.dioty.co:1883
bridge_attempt_unsubscribe false
remote_username *******
remote_password *******
start_type automatic
bridge_protocol_version mqttv311
notifications false
try_private true
bridge_insecure false
cleansession false
topic # in 0
The Mosquitto log after starting the broker is as follows:
1608537228: mosquitto version 1.6.12 starting
1608537228: Config loaded from /etc/mosquitto/mosquitto.conf.
1608537228: Opening ipv4 listen socket on port 1883.
1608537228: Opening ipv6 listen socket on port 1883.
1608537228: mosquitto version 1.6.12 running
1608539039: Saving in-memory database to /var/lib/mosquitto/mosquitto.db.
My impression is that my local MQTT broker is not reading the .conf file. How can I fix this?
You are using the Losant configuration to configure the DIoTY broker, which won't work because they are different brokers. To save credentials in the Mosquitto config, first generate the password file using mosquitto_passwd:
mosquitto_passwd -c /etc/mosquitto/passwd USER PASSWORD
Then add the password file location to the Mosquitto config and set allow_anonymous to false:
allow_anonymous false
password_file /etc/mosquitto/passwd
That's it; now you just need to publish or subscribe:
mosquitto_pub -h localhost -t "test" -m "hello world"
mosquitto_sub -h localhost -t test
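Putting that together, the relevant parts of a conf.d file might look like the sketch below. The bridge directives and DIoTY address come from the question; the placeholder credentials and passwd path are assumptions to replace with your own:

```conf
# Local listener -- now requires authentication
listener 1883
allow_anonymous false
password_file /etc/mosquitto/passwd

# Bridge to the DIoTY cloud broker; bridge credentials are set with
# remote_username / remote_password, not the local password file
connection dioty
address mqtt.dioty.co:1883
remote_username YOUR_DIOTY_USER
remote_password YOUR_DIOTY_PASSWORD
start_type automatic
bridge_protocol_version mqttv311
notifications false
try_private true
cleansession false
topic # in 0
```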
MQTT Broker does not receive any messages
I'm trying to connect my Tasmota switch over MQTT. I have installed Mosquitto on a virtual machine; here is the configuration.
/etc/mosquitto/mosquitto.conf:
# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
/etc/mosquitto/acl:
# weewx readwrite to the loop
user tasmota
#topic weather/#
/etc/mosquitto/conf.d/myconfig.conf:
allow_anonymous true
password_file /etc/mosquitto/passwd
persistence false
protocol mqtt
acl_file /etc/mosquitto/acl
The service is running and the port is up. This is the configuration of my switch. I'm trying to look at messages with:
mosquitto_sub -h 10.11.0.106 -t '#'
I also tried adding a user and password, but I don't get any output. I can see in the log that the connection is established:
1579896351: Config loaded from /etc/mosquitto/mosquitto.conf.
1579896351: Opening ipv4 listen socket on port 1883.
1579896351: Opening ipv6 listen socket on port 1883.
1579896351: New connection from 10.10.0.137 on port 1883.
1579896351: New client connected from 10.10.0.137 as mosqsub|19705-warmachin (c1, k60).
1579896358: Socket error on client mosqsub|19705-warmachin, disconnecting.
1579896358: New connection from 10.10.0.137 on port 1883.
1579896358: New client connected from 10.10.0.137 as mosqsub|19775-warmachin (c1, k60).
1579896361: New connection from 10.11.1.51 on port 1883.
1579896361: New client connected from 10.11.1.51 as DVES_6CA231 (c1, k30, u'tasmota').
1579896361: New connection from 10.11.1.52 on port 1883.
1579896361: New client connected from 10.11.1.52 as DVES_301DDC (c1, k30, u'tasmota').
1579896362: New connection from 10.11.1.54 on port 1883.
1579896362: New client connected from 10.11.1.54 as DVES_350992 (c1, k30, u'tasmota').
Did I miss something, or am I misunderstanding something completely? Please help.
As thrashed out in the comments, your ACL file does not enable any topics for either the anonymous user or the tasmota user. Once ACLs are enabled, you must explicitly define every topic you want users to be able to access.
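For illustration, an ACL file along those lines might look like the sketch below. The topic names are assumptions based on Tasmota's default tele/cmnd/stat prefixes, not taken from the question:

```conf
# Rules before the first "user" line apply to anonymous clients
topic read #

# Rules for the authenticated tasmota user
user tasmota
topic readwrite tele/#
topic readwrite cmnd/#
topic readwrite stat/#
```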
Mosquitto "SSL is disabled"
I have set up Mosquitto MQTT with SSL on port 8883. However, when I try to connect I get the error "Error: A TLS error occurred". I looked at the Mosquitto logs and I see "SSL is disabled", which I don't think is expected. All the certificates are valid. My config file (note: mqtt.test.com is not the actual host):
autosave_interval 1800
# Persistence Settings
persistence true
persistence_file mosquitto.db
persistence_location /tmp/
connection_messages true
# Logging Settings
log_timestamp true
log_dest file /home/ubuntu/mqtt/mosquitto/mosquitto.log
log_type debug
# Port Settings
listener 1883
# Only needed if Websockets
listener 8033
protocol websockets
certfile /etc/letsencrypt/live/mqtt.test.com/cert.pem
cafile /etc/letsencrypt/live/mqtt.taggle.com/chain.pem
keyfile /etc/letsencrypt/live/mqtt.test.com/privkey.pem
listener 8883
certfile /etc/letsencrypt/live/mqtt.test.com/cert.pem
cafile /etc/letsencrypt/live/mqtt.test.com/chain.pem
keyfile /etc/letsencrypt/live/mqtt.test.com/privkey.pem
There is no need to build Mosquitto from source to use the auth_plugin; you only need access to the matching source bundle for the version of the broker you have installed. When you built Mosquitto, you most likely didn't have the OpenSSL dev packages installed to allow the build to link against OpenSSL, or you built Mosquitto with make WITH_TLS=no. Double-check that you followed all the instructions in the readme.md that comes with the source and that you installed all the prerequisite packages.
Mosquitto PSK encryption not working
I'm trying to set up a PSK-encrypted bridge connection with Mosquitto, following this tutorial. I'm using two Docker containers, one as a bridge and another as a server, on different computers. The connection works fine with no encryption. For the subscriptions to the topics I'm using Node-RED. This is the configuration file for the server:
port 1883
persistence true
persistence_location /mosquitto/data/
#persistence_file mosquitto.db
#cleansession false
#clientid nodered
listener 8883
psk_hint broker-server
psk_file /mosquitto/certs/psk_file.txt
log_type all
log_dest file /mosquitto/log/mosquitto.log
connection_messages true
log_timestamp true
allow_anonymous true
#password_file /mosquitto/config/passwd
For the bridge connection I have two files. mosquitto.conf:
#include_dir /etc/mosquitto/conf.d
# GENERAL CONFIGURATION BROKER
# ----------------------------------------------------------------
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_type all
log_dest file /etc/mosquitto/log/mosquitto.log
include_dir /etc/mosquitto/bridges
# ----------------------------------------------------------------
# SECURITY (comm. Nordic -> RPI): Password
#password_file /etc/mosquitto/passwd
allow_anonymous true
And bridge.conf:
# =================================================================
# Bridges to Node Red
# =================================================================
# IP address
#connection client-bridgeport
connection bridge-01
address 192.168.1.34:8883
bridge_identity bridgeport
bridge_psk 123456789987654321
# -----------------------------------------------------------------
# TOPICS
topic # out 1 ""
topic # in 1 ""
# ------------------------------------------------------------------
# Setting protocol version explicitly
#bridge_protocol_version mqttv311
#bridge_insecure false
# Bridge connection name and MQTT client Id,
# enabling the connection automatically when the broker starts.
cleansession false
remote_clientid broker-server
start_type automatic
#notifications false
log_type all
In the server's log file I can see the following error:
Socket error on client unknown, disconnecting.
And on the bridge side I see the following error:
Bridge broker-server sending CONNECT
Socket error on client local.broker-server, disconnecting.
I don't know what I'm doing wrong. If I remove the encryption, everything works fine.
It seems that the default Mosquitto image on Docker Hub was not built with PSK encryption included, as shown in this post. I had to build my own image, installing Mosquitto as follows:
RUN apt-get -y update && \
    apt-get -y install mosquitto mosquitto-clients
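A minimal Dockerfile along those lines might look like the sketch below; the base image, config path, and exposed ports are assumptions, not from the answer:

```dockerfile
# Sketch: Debian's packaged mosquitto is linked against OpenSSL with TLS-PSK support
FROM debian:bullseye-slim

RUN apt-get -y update && \
    apt-get -y install --no-install-recommends mosquitto mosquitto-clients && \
    rm -rf /var/lib/apt/lists/*

# Ship the broker configuration with the image
COPY mosquitto.conf /etc/mosquitto/mosquitto.conf

EXPOSE 1883 8883
# mosquitto stays in the foreground by default, as a container entrypoint should
CMD ["mosquitto", "-c", "/etc/mosquitto/mosquitto.conf"]
```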
Mosquitto server not able to connect from outside network
I followed the TLS configuration on the official Mosquitto website and generated all the certificates and keys:
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
listener 1883
port 8883
cafile /etc/mosquitto/ca_certificates/ca.crt
certfile /etc/mosquitto/ca_certificates/server.crt
keyfile /etc/mosquitto/ca_certificates/server.key
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
It is working fine locally, but I am not able to connect from outside my network. Can someone explain what I am doing wrong here? Am I missing something? Thank you.