Ejabberd 21.12 MQTT configuration - mqtt

I have a fresh install of Ejabberd 21.12 on Ubuntu 20.04. So far I have not been able to mosquitto_sub or mosquitto_pub a test message.
mosquitto_sub -h mydomain.xyz -t "test/topic" -d -p 1883
Client (null) sending CONNECT
Client (null) received CONNACK (5)
Connection error: Connection Refused: not authorised.
Client (null) sending DISCONNECT
The relevant (I think) parts of ejabberd.yml look like this:
port: 5280
ip: "::"
module: ejabberd_http
request_handlers:
"/admin": ejabberd_web_admin
"/mqtt": mod_mqtt
-
port: 1883
module: mod_mqtt
backlog: 1000
-
port: 8883
module: mod_mqtt
backlog: 1000
tls: true
mod_mqtt: {}
I tried defining publisher and subscriber in acl but that didn't seem to change anything.
I did:
ufw allow 1883
ufw allow 8883
Thanks for any help or insight.
------EDIT----------
Still having the same issue, but I have tried adding access rules just to fill out the MQTT configuration.
I have tried changing the syntax of the ACL section many times, but every attempt results in the same authentication failure. Here is my yml:
###
###  ejabberd configuration file
###
### The parameters used in this configuration file are explained at
###
### https://docs.ejabberd.im/admin/configuration
###
### The configuration file is written in YAML.
### *******************************************************
### ******* !!! WARNING !!! *******
### ******* YAML IS INDENTATION SENSITIVE *******
### ******* MAKE SURE YOU INDENT SECTIONS CORRECTLY *******
### *******************************************************
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
###
hosts:
- DOMAIN.XYZ
loglevel: debug
certfiles:
- "/opt/ejabberd/conf/server.pem"
# - "/etc/letsencrypt/live/DOMAIN.XYZ/fullchain.pem"
# - "/etc/letsencrypt/live/DOMAIN.XYZ/privkey.pem"
- "/opt/ejabberd/conf/fullchain.pem"
- "/opt/ejabberd/conf/privkey.pem"
ca_file: "/opt/ejabberd/conf/cacert.pem"
listen:
-
port: 5222
ip: "0.0.0.0"
module: ejabberd_c2s
max_stanza_size: 262144
shaper: c2s_shaper
access: c2s
starttls_required: true
-
port: 5269
ip: "0.0.0.0"
module: ejabberd_s2s_in
max_stanza_size: 524288
-
port: 5443
ip: "0.0.0.0"
module: ejabberd_http
tls: true
request_handlers:
"/admin": ejabberd_web_admin
"/api": mod_http_api
"/bosh": mod_bosh
"/captcha": ejabberd_captcha
"/upload": mod_http_upload
"/ws": ejabberd_http_ws
"/oauth": ejabberd_oauth
-
port: 5280
ip: "0.0.0.0"
module: ejabberd_http
request_handlers:
"/admin": ejabberd_web_admin
"/mqtt": mod_mqtt
-
port: 1883
module: mod_mqtt
backlog: 1000
-
port: 8883
module: mod_mqtt
backlog: 1000
tls: true
s2s_use_starttls: optional
acl:
local:
user_regexp: ""
loopback:
ip:
- 127.0.0.0/8
- ::1/128
- ::FFFF:127.0.0.1/128
admin:
user:
- admin@DOMAIN.XYZ
publisher:
user:
"broker" : "DOMAIN.XYZ"
subscriber:
user:
"broker" : "DOMAIN.XYZ"
access_rules:
local:
allow: local
c2s:
deny: blocked
allow: all
announce:
allow: admin
configure:
allow: admin
muc_create:
allow: local
pubsub_createnode:
allow: local
trusted_network:
allow: loopback
api_permissions:
"console commands":
from:
- ejabberd_ctl
who: all
what: "*"
"admin access":
who:
access:
allow:
acl: loopback
acl: admin
oauth:
scope: "ejabberd:admin"
access:
allow:
acl: loopback
acl: admin
what:
- "*"
- "!stop"
- "!start"
"public commands":
who:
ip: 127.0.0.1/8
what:
- status
- connected_users_number
shaper:
normal: 1000
fast: 50000
shaper_rules:
max_user_sessions: 10
max_user_offline_messages:
5000: admin
100: all
c2s_shaper:
none: admin
normal: all
s2s_shaper: fast
max_fsm_queue: 10000
acme:
contact: "mailto:cedar#disroot.org"
ca_url: "https://acme-v02.api.letsencrypt.org"
modules:
mod_adhoc: {}
mod_admin_extra: {}
mod_announce:
access: announce
mod_avatar: {}
mod_blocking: {}
mod_bosh: {}
mod_caps: {}
mod_carboncopy: {}
mod_client_state: {}
mod_configure: {}
mod_disco: {}
mod_fail2ban: {}
mod_http_api: {}
mod_http_upload:
put_url: https://@HOST@:5443/upload
mod_last: {}
mod_mam:
## Mnesia is limited to 2GB, better to use an SQL backend
## For small servers SQLite is a good fit and is very easy
## to configure. Uncomment this when you have SQL configured:
## db_type: sql
assume_mam_usage: true
default: never
mod_mqtt:
access_publish:
"#":
- allow: publisher
- deny
access_subscribe:
"#":
- allow: subscriber
- deny
mod_muc:
access:
- allow
access_admin:
- allow: admin
access_create: muc_create
access_persistent: muc_create
access_mam:
- allow
default_room_options:
allow_subscription: true # enable MucSub
mam: false
mod_muc_admin: {}
mod_offline:
access_max_user_messages: max_user_offline_messages
mod_ping: {}
mod_privacy: {}
mod_private: {}
mod_proxy65:
access: local
max_connections: 5
mod_pubsub:
access_createnode: pubsub_createnode
plugins:
- flat
- pep
force_node_config:
## Avoid buggy clients to make their bookmarks public
storage:bookmarks:
access_model: whitelist
mod_push: {}
mod_push_keepalive: {}
mod_register:
## Only accept registration requests from the "trusted"
## network (see access_rules section above).
## Think twice before enabling registration from any
## address. See the Jabber SPAM Manifesto for details:
## https://github.com/ge0rg/jabber-spam-fighting-manifesto
ip_access: trusted_network
mod_roster:
versioning: true
mod_s2s_dialback: {}
mod_shared_roster: {}
mod_stream_mgmt:
resend_on_timeout: if_offline
mod_vcard: {}
mod_vcard_xupdate: {}
mod_version:
show_os: false
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8

Even if Ejabberd is set up to allow "anonymous" access, it seems to require a username AND PASSWORD (that it doesn't check) when connecting over MQTT. This command works for me:
mosquitto_sub -h mydomain.xyz -t "test/topic" -d -p 1883 -u foo -P bar
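For a full round-trip test under the ACL-based mod_mqtt config above, here is a minimal sketch. It assumes a registered ejabberd account matching the publisher/subscriber ACLs (broker@DOMAIN.XYZ) and a placeholder password; ejabberd expects the MQTT username to be the full JID of an existing account.
# Terminal 1: subscribe (account must match the "subscriber" ACL)
mosquitto_sub -h mydomain.xyz -p 1883 -t "test/topic" -d -u "broker@DOMAIN.XYZ" -P changeme
# Terminal 2: publish (account must match the "publisher" ACL)
mosquitto_pub -h mydomain.xyz -p 1883 -t "test/topic" -m "hello" -d -u "broker@DOMAIN.XYZ" -P changeme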

Related

Traefik failing TLS handshake with Let's Encrypt Certificate

I am attempting to have Traefik serve as a reverse proxy for services running in Docker containers. I've been following the documentation that Traefik provides and have a small docker environment configured via docker compose that successfully serves data via HTTP. Traefik sits behind HAProxy running in TCP mode forwarding packets received from the Internet to Traefik.
However when I tried to add a new router for serving the same content via HTTPS, I receive the following esoteric (to me) error when I run a curl directed to https://my.domain.tld/: error:1408F10B:SSL routines:ssl3_get_record:wrong version number
Full curl output:
curl -v https://my.domain.tld/
* Trying <IP Address of Domain>...
* TCP_NODELAY set
* Connected to my.domain.tld (<IP Address of Domain>) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
When I attempt to browse to the site via Firefox (web browser) I receive an error code of SSL_ERROR_RX_RECORD_TOO_LONG. When googling this error I was unable to find a post that seemed to have my specific issue.
Below is the docker-compose for the setup I am using to configure the applications
version: "3.9"
secrets:
cloudflare_dns_token:
file: ./secrets/cf_dns_api_token.txt
networks:
socket_proxy:
name: socket_proxy
driver: bridge
ipam:
config:
- subnet: 192.168.0.0/24
container_bridge:
name: container_bridge
driver: bridge
ipam:
config:
- subnet: 192.168.1.0/24
services:
socket-proxy:
image: tecnativa/docker-socket-proxy
container_name: socket-proxy
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
networks:
socket_proxy:
ipv4_address: 192.168.0.2 # Static IP address
environment:
EVENTS: 1
PING: 1
VERSION: 1
CONTAINERS: 1
NETWORKS: 1
traefik:
# The official v2 Traefik docker image
image: traefik:v2.8.1
container_name: traefik-proxy
command:
# Log Level for Traefik
- "--log.level=DEBUG"
# Enables the web UI
- "--api.insecure=true"
# Traefik enables docker as the provider to look for services
- "--providers.docker=true"
# Traefik will use the Docker Socket proxy to communicate with the docker socket
- "--providers.docker.endpoint=tcp://192.168.0.2:2375"
# Traefik will not expose services if they aren't labeled for export
- "--providers.docker.exposedByDefault=false"
# Port where Traefik will listen for web (http) traffic for routing
- "--entrypoints.web.address=:80"
# Port where Traefik will listen for web secure (https) traffic for routing
- "--entrypoints.websecure.address=:443"
# Trust Proxy Protocol Packets from only the listed IP address
- "--entryPoints.web.proxyProtocol.trustedIPs=10.0.8.1/32"
# Trust Proxy Protocol Packets from only the listed IP address
- "--entryPoints.websecure.proxyProtocol.trustedIPs=10.0.8.1/32"
# Enable a ACME DNS challenge named "letsencrypt"
- "--certificatesresolvers.letsencrypt.acme.dnschallenge=true"
# Tell Traefik which provider to use for DNS Challenge
- "--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare"
# Staging environment for let's encrypt for testing
- "--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
# Email to provide to let's encrypt
- "--certificatesresolvers.letsencrypt.acme.email=${EMAIL}"
# Tell Traefik to store the certificate on a path under our volume
- "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
networks:
# Tells Traefik to connect to both the socket proxy network and the container bridge network where the other containers will be connected
socket_proxy:
container_bridge:
ports:
# The HTTP port
- "80:80"
# The HTTPS port
- "443:443"
# The Web UI (enabled by --api.insecure=true)
- "8080:8080"
secrets:
- "cloudflare_dns_token"
environment:
# To Be Removed Need Secret Working Properly
- "CF_DNS_API_TOKEN=${CF_DNS_TOKEN}"
#- "CF_DNS_API_TOKEN=/run/secrets/cloudflare_dns_token"
volumes:
# Create a letsencrypt dir within the folder where the docker-compose file is
- "./letsencrypt:/letsencrypt"
whoami:
image: traefik/whoami
container_name: whoami-server
networks:
container_bridge:
labels:
# Tells Traefik to proxy to the service (container)
- "traefik.enable=true"
#####################################################################
#
# Labels for HTTPS Proxying
#
#####################################################################
# Explicitly stating 'whoami-secure' route is HTTPS
- "traefik.http.routers.whoami-secure.tls=true"
# Rule for determining when to route requests to this service for the secure http router
- "traefik.http.routers.whoami-secure.rule=Host(`whoami.${FQDN}`)"
# Entry point for requests to this service for the secure http router
- "traefik.http.routers.whoami-secure.entrypoints=websecure"
# Uses the Host rule to define which certificate to issue
- "traefik.http.routers.whoami-secure.tls.certresolver=letsencrypt"
#####################################################################
#
# Labels for HTTP Proxying
#
#####################################################################
# Rule for determining when to route requests to this service for the unsecure http router
- "traefik.http.routers.whoami.rule=Host(`whoami.${FQDN}`)"
# Entry point for requests to this service for the unsecure http router
- "traefik.http.routers.whoami.entrypoints=web"
My expectation is that Traefik would gracefully handle the request via HTTPS and manage the TLS handshake without issue. I can confirm that Traefik is able to successfully generate a certificate via Let's Encrypt DNS Challenge for Cloudflare. I am using the Let's Encrypt staging environment at the moment so I did expect an error about the certificate being served as invalid, but it seems that TLS handshake errors out before a determination of validity.
EDIT #1: Running OpenSSL and Wireshark
OpenSSL returns the following when I run openssl s_client -connect my.domain.tld:443
CONNECTED(00000003)
140330304906560:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:331:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 315 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
Wireshark logs show the following:
3 TCP packets preceding the first TLSv1 Client Hello
The Client Hello is acknowledged by the server
Then the server returns a HTTP 400 error - Bad Request
Wireshark dump of error
Transmission Control Protocol, Src Port: 443, Dst Port: 50085, Seq: 1, Ack: 518, Len: 207
Source Port: 443
Destination Port: 50085
[Stream index: 0]
[Conversation completeness: Complete, WITH_DATA (63)]
[TCP Segment Len: 207]
Sequence Number: 1 (relative sequence number)
Sequence Number (raw): 1488204866
[Next Sequence Number: 208 (relative sequence number)]
Acknowledgment Number: 518 (relative ack number)
Acknowledgment number (raw): 500352812
0101 .... = Header Length: 20 bytes (5)
Flags: 0x018 (PSH, ACK)
000. .... .... = Reserved: Not set
...0 .... .... = Nonce: Not set
.... 0... .... = Congestion Window Reduced (CWR): Not set
.... .0.. .... = ECN-Echo: Not set
.... ..0. .... = Urgent: Not set
.... ...1 .... = Acknowledgment: Set
.... .... 1... = Push: Set
.... .... .0.. = Reset: Not set
.... .... ..0. = Syn: Not set
.... .... ...0 = Fin: Not set
[TCP Flags: ·······AP···]
Window: 501
[Calculated window size: 64128]
[Window size scaling factor: 128]
Checksum: 0x1155 [unverified]
[Checksum Status: Unverified]
Urgent Pointer: 0
[Timestamps]
[SEQ/ACK analysis]
TCP payload (207 bytes)
Hypertext Transfer Protocol
[Expert Info (Warning/Security): Unencrypted HTTP protocol detected over encrypted port, could indicate a dangerous misconfiguration.]
[Unencrypted HTTP protocol detected over encrypted port, could indicate a dangerous misconfiguration.]
[Severity level: Warning]
[Group: Security]
HTTP/1.1 400 Bad request\r\n
[Expert Info (Chat/Sequence): HTTP/1.1 400 Bad request\r\n]
[HTTP/1.1 400 Bad request\r\n]
[Severity level: Chat]
[Group: Sequence]
Response Version: HTTP/1.1
Status Code: 400
[Status Code Description: Bad Request]
Response Phrase: Bad request
Content-length: 90\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/html\r\n
\r\n
[HTTP response 1/1]
File Data: 90 bytes
5 packets later the connection is reset
Wireshark dump of connection reset
Transmission Control Protocol, Src Port: 443, Dst Port: 50085, Seq: 208, Len: 0
Source Port: 443
Destination Port: 50085
[Stream index: 0]
[Conversation completeness: Complete, WITH_DATA (63)]
[TCP Segment Len: 0]
Sequence Number: 208 (relative sequence number)
Sequence Number (raw): 1488205073
[Next Sequence Number: 208 (relative sequence number)]
Acknowledgment Number: 0
Acknowledgment number (raw): 0
0101 .... = Header Length: 20 bytes (5)
Flags: 0x004 (RST)
000. .... .... = Reserved: Not set
...0 .... .... = Nonce: Not set
.... 0... .... = Congestion Window Reduced (CWR): Not set
.... .0.. .... = ECN-Echo: Not set
.... ..0. .... = Urgent: Not set
.... ...0 .... = Acknowledgment: Not set
.... .... 0... = Push: Not set
.... .... .1.. = Reset: Set
[Expert Info (Warning/Sequence): Connection reset (RST)]
[Connection reset (RST)]
[Severity level: Warning]
[Group: Sequence]
.... .... ..0. = Syn: Not set
.... .... ...0 = Fin: Not set
[TCP Flags: ·········R··]
Window: 0
[Calculated window size: 0]
[Window size scaling factor: 128]
Checksum: 0x6dc9 [unverified]
[Checksum Status: Unverified]
Urgent Pointer: 0
[Timestamps]
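A quick way to confirm what the capture already suggests (plain HTTP being spoken on the TLS port) is to send plain HTTP to port 443 yourself. This is only a diagnostic sketch using the hostname from the question; <traefik-host> is a placeholder for the Docker host's address:
# If this returns an ordinary HTTP response (like the 400 seen in Wireshark),
# whatever answers on 443 is not terminating TLS for this connection.
curl -v http://my.domain.tld:443/
# Compare against Traefik's websecure entrypoint directly, bypassing HAProxy:
curl -vk --resolve whoami.my.domain.tld:443:<traefik-host> https://whoami.my.domain.tld/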

Envoy proxy: 503 Service Unavailable [duplicate]

State of Services:
Client (nuxt) is up on http://localhost:3000 and the client sends
requests to http://localhost:8080.
Server (django) is running on 0.0.0.0:50051.
Docker is also up:
78496fef541f 5f9773709483 "/docker-entrypoint.…" 29 minutes ago Up 29 minutes 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp, 10000/tcp envoy
envoy.yaml Configurations:
I configured the envoy.yaml file as follows:
static_resources:
listeners:
- name: listener_0
address:
socket_address: { address: 0.0.0.0, port_value: 8080 }
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match: { prefix: "/" }
route:
cluster: greeter_service
max_stream_duration:
grpc_timeout_header_max: 0s
cors:
allow_origin_string_match:
- prefix: "*"
allow_methods: GET, PUT, DELETE, POST, OPTIONS
allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
max_age: "1728000"
expose_headers: custom-header-1,grpc-status,grpc-message
http_filters:
- name: envoy.filters.http.grpc_web
- name: envoy.filters.http.cors
- name: envoy.filters.http.router
clusters:
- name: greeter_service
connect_timeout: 0.25s
type: logical_dns
http2_protocol_options: {}
lb_policy: round_robin
load_assignment:
cluster_name: cluster_0
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 0.0.0.0
port_value: 50051
Error:
But the following error occurs, and it seems the requests do not reach the Django server at 0.0.0.0:50051.
503 Service Unavailable
grpc-message: upstream connect error or disconnect/reset before
headers. reset reason: connection failure, transport failure reason:
delayed connect error: 111
I have encountered the same error. Here are my conditions:
Envoy: running on Docker with a listener on port 8080, redirecting to port 9090
Next.js web client: sends requests to the Envoy proxy on port 8080
Node gRPC server: listens on port 9090
I'm starting everything in a local environment.
Based on this example about configuring the Envoy proxy, which refers to this issue, I changed the address in the Envoy proxy to host.docker.internal in envoy.yaml.
Just refer to this section if you want to try it:
clusters:
- name: backend_service
connect_timeout: 0.25s
type: logical_dns
http2_protocol_options: {}
lb_policy: round_robin
load_assignment:
cluster_name: cluster_0
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: host.docker.internal # changed; it was 0.0.0.0 before
port_value: 9090
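One note: on Linux hosts, host.docker.internal is not defined by default, so it may need to be mapped when the Envoy container is started. A sketch, assuming Docker 20.10+ and the stock Envoy image (image tag and mounts are illustrative):
docker run --rm -p 8080:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v "$(pwd)/envoy.yaml:/etc/envoy/envoy.yaml" \
  envoyproxy/envoy:v1.22-latest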

Ejabberd Oauth: Access denied

I installed and configured an Ejabberd XMPP server. I tested connecting to the server from a mobile app and exchanging messages. Now I want to enable OAuth (I need to integrate OAuth token generation into my own Node.js REST API: login on my REST API = login against my DB + Ejabberd OAuth token generation). I want to avoid making two login calls (my REST API + Ejabberd) by generating the token within my REST API login and using this token in my Android/iOS apps.
When I open this url:
http://my-server:5280/oauth/authorization_token?response_type=token&client_id=Client1&redirect_uri=http://my-server:3000/ejabberd&scope=get_roster+sasl_auth
I get a login form; after filling it in with valid credentials I get redirected to this URL:
http://my-server:5280/oauth/authorization_token
with an empty response: ERR_EMPTY_RESPONSE
For reference, here is my configuration file:
hosts:
- "test"
- "157.245.128.100"
loglevel: 5
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
log_rate_limit: 100
certfiles:
- "/opt/ejabberd/conf/server.pem"
#- "/opt/ejabberd/proxym.dev.pem"
## - "/etc/letsencrypt/live/localhost/fullchain.pem"
## - "/etc/letsencrypt/live/localhost/privkey.pem"
ca_file: "/opt/ejabberd/conf/cacert.pem"
listen:
-
port: 5222
ip: "::"
module: ejabberd_c2s
max_stanza_size: 262144
shaper: c2s_shaper
access: c2s
starttls_required: false
-
port: 5269
ip: "::"
module: ejabberd_s2s_in
max_stanza_size: 524288
-
port: 5443
ip: "::"
module: ejabberd_http
tls: false
request_handlers:
"/admin": ejabberd_web_admin
"/api": mod_http_api
"/bosh": mod_bosh
"/captcha": ejabberd_captcha
"/upload": mod_http_upload
"/ws": ejabberd_http_ws
#"/oauth": ejabberd_oauth
-
port: 5280
ip: "::"
module: ejabberd_http
request_handlers:
"/admin": ejabberd_web_admin
"/oauth": ejabberd_oauth
"/api": mod_http_api
-
port: 1883
ip: "::"
module: mod_mqtt
backlog: 1000
s2s_use_starttls: optional
#disable_sasl_mechanisms: ["X-OAUTH2"]
acl:
local:
user_regexp: ""
loopback:
ip:
- 127.0.0.0/8
- ::1/128
- ::FFFF:127.0.0.1/128
admin:
user:
- "admin#test"
access_rules:
local:
allow: local
c2s:
deny: blocked
allow: all
announce:
allow: admin
configure:
allow: admin
muc_create:
allow: local
pubsub_createnode:
allow: local
trusted_network:
allow: loopback
api_permissions:
"console commands":
from:
- ejabberd_ctl
who: all
what: "*"
"admin access":
who:
access:
allow:
acl: loopback
acl: admin
oauth:
scope: "ejabberd:admin"
access:
allow:
acl: loopback
acl: admin
what:
- "*"
- "!stop"
- "!start"
"public commands":
who:
ip: 127.0.0.1/8
what:
- status
- connected_users_number
commands_admin_access:
who: all
commands:
what:
- "user"
- "admin"
- "open"
shaper:
normal: 1000
fast: 50000
shaper_rules:
max_user_sessions: 10
max_user_offline_messages:
5000: admin
100: all
c2s_shaper:
none: admin
normal: all
s2s_shaper: fast
max_fsm_queue: 10000
acme:
contact: "mailto:admin#test"
ca_url: "https://acme-v01.api.letsencrypt.org"
modules:
mod_adhoc: {}
mod_admin_extra: {}
mod_announce:
access: announce
mod_avatar: {}
mod_blocking: {}
mod_bosh: {}
mod_caps: {}
mod_carboncopy: {}
mod_client_state: {}
mod_configure: {}
mod_disco: {}
mod_fail2ban: {}
mod_http_api: {}
mod_http_upload:
put_url: https://@HOST@:5443/upload
docroot: /home/upload
mod_last: {}
mod_mam:
## Mnesia is limited to 2GB, better to use an SQL backend
## For small servers SQLite is a good fit and is very easy
## to configure. Uncomment this when you have SQL configured:
## db_type: sql
assume_mam_usage: true
default: never
mod_mqtt: {}
mod_muc:
access:
- allow
access_admin:
- allow: admin
access_create: muc_create
access_persistent: muc_create
access_mam:
- allow
default_room_options:
allow_subscription: true # enable MucSub
mam: false
mod_muc_admin: {}
mod_offline:
access_max_user_messages: max_user_offline_messages
mod_ping: {}
mod_privacy: {}
mod_private: {}
mod_proxy65:
access: local
max_connections: 5
mod_pubsub:
access_createnode: pubsub_createnode
plugins:
- flat
- pep
force_node_config:
## Avoid buggy clients to make their bookmarks public
storage:bookmarks:
access_model: whitelist
mod_push: {}
mod_push_keepalive: {}
mod_register:
## Only accept registration requests from the "trusted"
## network (see access_rules section above).
## Think twice before enabling registration from any
## address. See the Jabber SPAM Manifesto for details:
## https://github.com/ge0rg/jabber-spam-fighting-manifesto
ip_access: trusted_network
mod_roster:
versioning: true
mod_s2s_dialback: {}
mod_shared_roster: {}
mod_stream_mgmt:
resend_on_timeout: if_offline
mod_vcard: {}
mod_vcard_xupdate: {}
mod_version:
show_os: false
# mod_admin_extra: {}
#commands_admin_access: configure
#commands:
# - add_commands:
# - user
#oauth_expire: 3600
#oauth_access: all
commands_admin_access:
- allow:
- user: "admin#test"
commands:
- add_commands: [user, admin, open]
oauth_expire: 31536000
oauth_access: all
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
### host_config:
sql_type: mysql
sql_server: "157.245.128.100"
sql_database: "ejabberd"
sql_username: "ejabberd"
sql_password: "password"
sql_port: 3306
auth_method: sql
default_db: sql
Please check that you have the following configuration added in your configuration YAML.
commands_admin_access:
- allow:
- user: "admin#localhost" # your user name.
commands:
- add_commands: [user, admin, open]
oauth_access:
- allow:
- user:
- "admin#localhost" # add your user name
oauth_expire: 86400
Please follow the link here:
https://docs.ejabberd.im/developer/ejabberd-api/simple-configuration/
You might have missed some of the configuration above; I also ran into a similar problem and found that I was missing part of it.
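Once commands_admin_access and oauth_access are in place, you can sanity-check token issuance without the browser flow by calling the /oauth/token handler directly. A hedged sketch with the password grant (host, credentials, and scopes are examples; exact parameters can vary between ejabberd versions):
curl -v -X POST http://my-server:5280/oauth/token \
  -d "grant_type=password" \
  -d "username=admin@localhost" \
  -d "password=mypassword" \
  -d "scope=get_roster sasl_auth"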

ejabberd live/start is not working - returning error - "Failed to start ejabberd application: Configuration error: duplicated option: listen"

I have cloned the ejabberd repo and followed all the installation instructions.
When I start the server using the command
ejabberdctl live
I get the following error.
Erlang/OTP 22 [erts-10.5.2] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1]
Eshell V10.5.2 (abort with ^G)
(ejabberd@localhost)1> 16:31:01.163 [notice] Changed loghwm of /usr/local/var/log/ejabberd/error.log to 100
16:31:01.163 [notice] Changed loghwm of /usr/local/var/log/ejabberd/ejabberd.log to 100
16:31:01.204 [info] Loading configuration from /usr/local/etc/ejabberd/ejabberd.yml
16:31:01.247 [critical] Failed to start ejabberd application: Configuration error: duplicated option: listen
I have not modified the default configuration file, which is the same as ejabberd.yml.example.
The port 5222 is not being used and there is no duplicate port or listen in the config file.
The configuration file -
###
### ejabberd configuration file
###
### The parameters used in this configuration file are explained at
###
### https://docs.ejabberd.im/admin/configuration
###
### The configuration file is written in YAML.
### *******************************************************
### ******* !!! WARNING !!! *******
### ******* YAML IS INDENTATION SENSITIVE *******
### ******* MAKE SURE YOU INDENT SECTIONS CORRECTLY *******
### *******************************************************
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
###
hosts:
- localhost
loglevel: 4
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
log_rate_limit: 100
## If you already have certificates, list them here
# certfiles:
# - /etc/letsencrypt/live/domain.tld/fullchain.pem
# - /etc/letsencrypt/live/domain.tld/privkey.pem
listen:
-
port: 5222
ip: "::"
module: ejabberd_c2s
max_stanza_size: 262144
shaper: c2s_shaper
access: c2s
starttls_required: false
protocol_options:
- no_sslv2
- no_tlsv1_3
-
port: 5269
ip: "::"
module: ejabberd_s2s_in
max_stanza_size: 524288
-
port: 5443
ip: "::"
module: ejabberd_http
tls: true
request_handlers:
/admin: ejabberd_web_admin
/api: mod_http_api
/bosh: mod_bosh
/captcha: ejabberd_captcha
/upload: mod_http_upload
/ws: ejabberd_http_ws
-
port: 5280
ip: "::"
module: ejabberd_http
request_handlers:
/admin: ejabberd_web_admin
/.well-known/acme-challenge: ejabberd_acme
-
port: 1883
ip: "::"
module: mod_mqtt
backlog: 1000
s2s_use_starttls: optional
acl:
local:
user_regexp: ""
loopback:
ip:
- 127.0.0.0/8
- ::1/128
access_rules:
local:
allow: local
c2s:
deny: blocked
allow: all
announce:
allow: admin
configure:
allow: admin
muc_create:
allow: local
pubsub_createnode:
allow: local
trusted_network:
allow: loopback
api_permissions:
"console commands":
from:
- ejabberd_ctl
who: all
what: "*"
"admin access":
who:
access:
allow:
acl: loopback
acl: admin
oauth:
scope: "ejabberd:admin"
access:
allow:
acl: loopback
acl: admin
what:
- "*"
- "!stop"
- "!start"
"public commands":
who:
ip: 127.0.0.1/8
what:
- status
- connected_users_number
shaper:
normal: 1000
fast: 50000
shaper_rules:
max_user_sessions: 10
max_user_offline_messages:
5000: admin
100: all
c2s_shaper:
none: admin
normal: all
s2s_shaper: fast
modules:
mod_adhoc: {}
mod_admin_extra: {}
mod_announce:
access: announce
mod_avatar: {}
mod_blocking: {}
mod_bosh: {}
mod_caps: {}
mod_carboncopy: {}
mod_client_state: {}
mod_configure: {}
mod_disco: {}
mod_fail2ban: {}
mod_http_api: {}
mod_http_upload:
put_url: https://@HOST@:5443/upload
mod_last: {}
mod_mam:
## Mnesia is limited to 2GB, better to use an SQL backend
## For small servers SQLite is a good fit and is very easy
## to configure. Uncomment this when you have SQL configured:
## db_type: sql
assume_mam_usage: true
default: always
mod_mqtt: {}
mod_muc:
access:
- allow
access_admin:
- allow: admin
access_create: muc_create
access_persistent: muc_create
access_mam:
- allow
default_room_options:
mam: true
mod_muc_admin: {}
mod_offline:
access_max_user_messages: max_user_offline_messages
mod_ping: {}
mod_privacy: {}
mod_private: {}
mod_proxy65:
access: local
max_connections: 5
mod_pubsub:
access_createnode: pubsub_createnode
plugins:
- flat
- pep
force_node_config:
## Avoid buggy clients to make their bookmarks public
storage:bookmarks:
access_model: whitelist
mod_push: {}
mod_push_keepalive: {}
mod_register:
## Only accept registration requests from the "trusted"
## network (see access_rules section above).
## Think twice before enabling registration from any
## address. See the Jabber SPAM Manifesto for details:
## https://github.com/ge0rg/jabber-spam-fighting-manifesto
ip_access: trusted_network
mod_roster:
versioning: true
mod_s2s_dialback: {}
mod_shared_roster: {}
mod_stream_mgmt:
resend_on_timeout: if_offline
mod_vcard: {}
mod_vcard_xupdate: {}
mod_version:
show_os: false
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
It looks like you have duplicates:
-
port: 5443
ip: "::"
module: ejabberd_http
tls: true
request_handlers:
/admin: ejabberd_web_admin
/api: mod_http_api
/bosh: mod_bosh
/captcha: ejabberd_captcha
/upload: mod_http_upload
/ws: ejabberd_http_ws
And
-
port: 5280
ip: "::"
module: ejabberd_http
request_handlers:
/admin: ejabberd_web_admin
/.well-known/acme-challenge: ejabberd_acme
with the same paths and modules but different ports... So try removing one of these blocks. I know you use different ports, but I don't see such duplicates in the default configuration... Hope this is helpful.
I finally figured out the issue. I had earlier installed the web_presence module, and it had a yml file with a listen section in it. That was causing the issue. After removing the module from the .ejabberd_modules folder, it works perfectly fine.
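If you hit the same error, one way to track down stray fragments that redefine listen is to grep the contributed-modules directory alongside the main config. A sketch, assuming the default ~/.ejabberd_modules location from a source install:
# Any extra yml that declares its own "listen:" section gets merged into the
# main configuration and can trigger "duplicated option: listen"
grep -rn "^listen:" ~/.ejabberd_modules /usr/local/etc/ejabberd 2>/dev/null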

scdf2 uaa request failed redirect to dashboard from login

Using the Kubernetes deployer, I cannot log into SCDF2 when applying UAA service security... using the SCDF 2.1.2 image version.
I get a loop between /login and /login?code=xxx from the UAA service because, I think, SCDF2 cannot get the token.
The process:
1) Initial launch of the UAA server.
A UAA service running in a k8s pod, using the following config
(applying https://github.com/making/uaa-on-kubernetes/blob/master/k8s/uaa.yml).
It needs a secret deployed with the cert and key.
When I created the CSR, the CN value for the certificate was "uaa-service",
which is a valid hostname.
Then uaa-service uses HTTPS and the certs:
apiVersion: v1
kind: Service
metadata:
name: uaa-service
labels:
app: uaa
spec:
type: LoadBalancer
ports:
- port: 8443
nodePort: 8443
name: uaa
selector:
app: uaa
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: uaa
spec:
replicas: 1
selector:
matchLabels:
app: uaa
template:
metadata:
labels:
app: uaa
spec:
initContainers:
- image: openjdk:8-jdk-slim
name: pem-to-keystore
volumeMounts:
- name: keystore-volume
mountPath: /keystores
- name: uaa-tls
mountPath: /uaa-tls
command:
- sh
- -c
- |
openssl pkcs12 -export \
-name uaa-tls \
-in /uaa-tls/tls.crt \
-inkey /uaa-tls/tls.key \
-out /keystores/uaa.p12 \
-password pass:foobar
keytool -importkeystore \
-destkeystore /keystores/uaa.jks \
-srckeystore /keystores/uaa.p12 \
-deststoretype pkcs12 \
-srcstoretype pkcs12 \
-alias uaa-tls \
-deststorepass changeme \
-destkeypass changeme \
-srcstorepass foobar \
-srckeypass foobar \
-noprompt
containers:
- name: uaa
image: making/uaa:4.13.0
command:
- sh
- -c
- |
mv /usr/local/tomcat/webapps/uaa.war /usr/local/tomcat/webapps/ROOT.war
catalina.sh run
ports:
- containerPort: 8443
volumeMounts:
- name: uaa-config
mountPath: /uaa
readOnly: true
- name: server-config
mountPath: /usr/local/tomcat/conf/server.xml
subPath: server.xml
readOnly: true
- name: keystore-volume
mountPath: /keystores
readOnly: true
env:
- name: _JAVA_OPTIONS
value: "-Djava.security.policy=unlimited -Djava.security.egd=file:/dev/./urandom"
readinessProbe:
httpGet:
path: /healthz
port: 8443
scheme: HTTPS
initialDelaySeconds: 90
timeoutSeconds: 30
failureThreshold: 50
periodSeconds: 60
livenessProbe:
httpGet:
path: /healthz
port: 8443
scheme: HTTPS
initialDelaySeconds: 90
timeoutSeconds: 30
periodSeconds: 60
failureThreshold: 50
volumes:
- name: uaa-config
configMap:
name: uaa-config
items:
- key: uaa.yml
path: uaa.yml
- key: log4j.properties
path: log4j.properties
- name: server-config
configMap:
name: uaa-config
items:
- key: server.xml
path: server.xml
- name: keystore-volume
emptyDir: {}
- name: uaa-tls
secret:
secretName: uaa-tls
# kubectl create secret tls uaa-tls --cert=uaa-service.crt --key=uaa-service.key
---
apiVersion: v1
kind: ConfigMap
metadata:
name: uaa-config
data:
server.xml: |-
<?xml version='1.0' encoding='utf-8'?>
<Server port="-1">
<Listener className="org.apache.catalina.startup.VersionLoggerListener" />
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
<Service name="Catalina">
<Connector class="org.apache.coyote.http11.Http11NioProtocol" protocol="HTTP/1.1" connectionTimeout="20000"
scheme="https"
port="8443"
SSLEnabled="true"
sslEnabledProtocols="TLSv1.2"
ciphers="TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
secure="true"
clientAuth="false"
sslProtocol="TLS"
keystoreFile="/keystores/uaa.jks"
keystoreType="PKCS12"
keyAlias="uaa-tls"
keystorePass="changeme"
bindOnInit="false"/>
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
connectionTimeout="20000"
port="8989"
address="127.0.0.1"
bindOnInit="true"/>
<Engine name="Catalina" defaultHost="localhost">
<Host name="localhost"
appBase="webapps"
unpackWARs="true"
autoDeploy="false"
failCtxIfServletStartFails="true">
<Valve className="org.apache.catalina.valves.RemoteIpValve"
remoteIpHeader="x-forwarded-for"
protocolHeader="x-forwarded-proto" internalProxies="10\.\d{1,3}\.\d{1,3}\.\d{1,3}|192\.168\.\d{1,3}\.\d{1,3}|169\.254\.\d{1,3}\.\d{1,3}|127\.\d{1,3}\.\d{1,3}\.\d{1,3}|172\.1[6-9]{1}\.\d{1,3}\.\d{1,3}|172\.2[0-9]{1}\.\d{1,3}\.\d{1,3}|172\.3[0-1]{1}\.\d{1,3}\.\d{1,3}"/>
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access" suffix=".log" rotatable="false" pattern="%h %l %u %t "%r" %s %b"/>
</Host>
</Engine>
</Service>
</Server>
log4j.properties: |-
PID=????
log4j.rootCategory=INFO, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=[%d{yyyy-MM-dd HH:mm:ss.SSS}] uaa%X{context} - ${PID} [%t] .... %5p --- %c{1}: %m%n
log4j.category.org.springframework.security=INFO
log4j.category.org.cloudfoundry.identity=INFO
log4j.category.org.springframework.jdbc=INFO
log4j.category.org.apache.http.wire=INFO
uaa.yml: |-
logging:
config: "/uaa/log4j.properties"
require_https: true
scim:
groups:
zones.read: Read identity zones
zones.write: Create and update identity zones
idps.read: Retrieve identity providers
idps.write: Create and update identity providers
clients.admin: Create, modify and delete OAuth clients
clients.write: Create and modify OAuth clients
clients.read: Read information about OAuth clients
clients.secret: Change the password of an OAuth client
scim.write: Create, modify and delete SCIM entities, i.e. users and groups
scim.read: Read all SCIM entities, i.e. users and groups
scim.create: Create users
scim.userids: Read user IDs and retrieve users by ID
scim.zones: Control a user's ability to manage a zone
scim.invite: Send invitations to users
password.write: Change your password
oauth.approval: Manage approved scopes
oauth.login: Authenticate users outside of the UAA
openid: Access profile information, i.e. email, first and last name, and phone number
groups.update: Update group information and memberships
uaa.user: Act as a user in the UAA
uaa.resource: Serve resources protected by the UAA
uaa.admin: Act as an administrator throughout the UAA
uaa.none: Forbid acting as a user
uaa.offline_token: Allow offline access
oauth:
clients:
uaa_admin:
authorities: clients.read,clients.write,clients.secret,uaa.admin,scim.read,scim.write,password.write
authorized-grant-types: client_credentials
override: true
scope: 'cloud_controller.read,cloud_controller.write,openid,password.write,scim.userids,dataflow.view,dataflow.create,dataflow.manage'
secret: uaa_secret
id: uaa_admin
user:
authorities:
- openid
- scim.me
- cloud_controller.read
- cloud_controller.write
- cloud_controller_service_permissions.read
- password.write
- scim.userids
- uaa.user
- approvals.me
- oauth.approvals
- profile
- roles
- user_attributes
- uaa.offline_token
issuer:
uri: https://uaa-service:8443
login:
url: https://uaa-service:8443
entityBaseURL: https://uaa-service:8443
entityID: cloudfoundry-saml-login
saml:
nameID: 'urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified'
assertionConsumerIndex: 0
signMetaData: true
signRequest: true
socket:
connectionManagerTimeout: 10000
soTimeout: 10000
authorize:
url: https://uaa-service:8443/oauth/authorize
uaa:
# The hostname of the UAA that this login server will connect to
url: https://uaa-service:8443
token:
url: https://uaa-service:8443/oauth/token
approvals:
url: https://uaa-service:8443/approvals
login:
url: https://uaa-service:8443/authenticate
limitedFunctionality:
enabled: false
whitelist:
endpoints:
- /oauth/authorize/**
- /oauth/token/**
- /check_token/**
- /login/**
- /login.do
- /logout/**
- /logout.do
- /saml/**
- /autologin/**
- /authenticate/**
- /idp_discovery/**
methods:
- GET
- HEAD
- OPTIONS
I think the important values to remember are (I'm in doubt about SAML):
issuer:
uri: https://uaa-service:8443
login:
url: https://uaa-service:8443
entityBaseURL: https://uaa-service:8443
authorize:
url: https://uaa-service:8443/oauth/authorize
uaa:
# The hostname of the UAA that this login server will connect to
url: https://uaa-service:8443
token:
url: https://uaa-service:8443/oauth/token
approvals:
url: https://uaa-service:8443/approvals
login:
url: https://uaa-service:8443/authenticate
OK, the pod is deployed and running. Remember port 8443 for the uaa-service actions.
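Before moving on, it may be worth confirming that the Tomcat connector really presents the uaa-service certificate generated from the CSR. A diagnostic sketch, run from a container or pod that resolves uaa-service (the same /etc/hosts trick used below works here too):
# Should print a certificate whose subject CN is uaa-service
openssl s_client -connect uaa-service:8443 -servername uaa-service </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates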
2) Update the UAA config for the admin and user accounts and role mappings.
Because I cannot get the uaac gem installed... I run a Docker image with the uaac client:
docker run --rm -it cf-uaac bash
then
>>>> I need to add the IP of the uaa-service pod to the Docker image
#echo "10.42.0.1 uaa-service" >> /etc/hosts
#uaac --skip-ssl-validation target https://uaa-service:8443
Unknown key: Max-Age = 86400
Target: http://uaa-service:8443
#uaac token client get uaa_admin -s uaa_secret
Unknown key: Max-Age = 86400
Successfully fetched token via client credentials grant.
Target: http://uaa-service:8443
Context: uaa_admin, from client uaa_admin
>>> OK, I got a uaa_admin token to create the admin user, groups, etc.
>>> Check again that the token is valid
# uaac token decode
Note: no key given to validate token signature
jti: 8067e0122b20433ab817f684e7335d30
sub: uaa_admin
authorities: clients.read password.write clients.secret clients.write uaa.admin scim.write scim.read
scope: clients.read password.write clients.secret clients.write uaa.admin scim.write scim.read
client_id: uaa_admin
cid: uaa_admin
azp: uaa_admin
grant_type: client_credentials
rev_sig: 7216b9b8
iat: 1565017183
exp: 1565060383
iss: http://uaa-service:8443/oauth/token
zid: uaa
aud: scim uaa_admin password clients uaa**
#uaac user add admin -p password --emails admin@mk.com
root@bf98436ccc82:/# uaac user add admin -p password --emails admin@mk.com
user account successfully added
root@bf98436ccc82:/# uaac user add user -p password --emails user@mk.com
user account successfully added
=========================================================================================================================================
root#bf98436ccc82:/# uaac group add "dataflow.view"
id: 9796f596-e540-4f3b-a32c-90b1bac5d0cc
meta
version: 0
created: 2019-08-05T15:00:01.014Z
lastmodified: 2019-08-05T15:00:01.014Z
members:
schemas: urn:scim:schemas:core:1.0
displayname: dataflow.view
zoneid: uaa
root#bf98436ccc82:/# uaac group add "dataflow.create"
id: c798e762-bcae-4d1f-8eef-2f7083df2d45
meta
version: 0
created: 2019-08-05T15:00:01.495Z
lastmodified: 2019-08-05T15:00:01.495Z
members:
schemas: urn:scim:schemas:core:1.0
displayname: dataflow.create
zoneid: uaa
root#bf98436ccc82:/# uaac group add "dataflow.manage"
id: 47aeba32-db27-456c-aa12-d5492127fe1f
meta
version: 0
created: 2019-08-05T15:00:01.986Z
lastmodified: 2019-08-05T15:00:01.986Z
members:
schemas: urn:scim:schemas:core:1.0
displayname: dataflow.manage
zoneid: uaa
=========================================================================================================================================
root@bf98436ccc82:/# uaac member add dataflow.view admin
success
root@bf98436ccc82:/# uaac member add dataflow.create admin
success
root@bf98436ccc82:/# uaac member add dataflow.manage admin
success
=========================================================================================================================================
root@bf98436ccc82:/# uaac member add dataflow.view user
success
root@bf98436ccc82:/# uaac member add dataflow.create user
success
root@bf98436ccc82:/# uaac member add dataflow.manage user
success
>>> Now, mapping admin to the dataflow UAA client
>>> Important
>>> The redirect URL MUST BE THE SAME as in the original HTTP request
>>> scdf2-data-flow-skipper:8844
>>> this is my login URI to the SCDF2 dashboard
>>> I can't connect directly to the pod ... SSH tunnels instead ..
# uaac client add dataflow \
--name dataflow \
--scope cloud_controller.read,cloud_controller.write,openid,password.write,scim.userids,dataflow.view,dataflow.create,dataflow.manage \
--authorized_grant_types password,authorization_code,client_credentials,refresh_token \
--authorities uaa.resource \
--redirect_uri http://scdf2-data-flow-server:8844/login\
--autoapprove openid \
--secret dataflow
#uaac client add skipper \
--name skipper \
--scope cloud_controller.read,cloud_controller.write,openid,password.write,scim.userids,dataflow.view,dataflow.create,dataflow.manage \
--authorized_grant_types password,authorization_code,client_credentials,refresh_token \
--authorities uaa.resource \
--redirect_uri http://scdf2-data-flow-skipper:8844/login \
--autoapprove openid \
--secret skipper
>>>> Using curl to get a valid token and check that the URIs are OK
curl -k -v -d"username=admin&password=password&client_id=dataflow&grant_type=client_credentials" -u "dataflow:dataflow" https://uaa-service:8443/oauth/token
* Expire in 0 ms for 6 (transfer 0x5632e4386dd0)
* Trying 10.42.0.1...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5632e4386dd0)
* Connected to uaa-service (10.42.0.1) port 8443 (#0)
* Server auth using Basic with user 'dataflow'
> POST /oauth/token HTTP/1.1
> Host: uaa-service:8443
> Authorization: Basic ZGF0YWZsb3c6ZGF0YWZsb3c=
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Length: 81
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 81 out of 81 bytes
< HTTP/1.1 200
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: DENY
< X-Content-Type-Options: nosniff
< Cache-Control: no-store
< Pragma: no-cache
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Mon, 05 Aug 2019 15:02:21 GMT
<
* Connection #0 to host uaa-service left intact
{"access_token":"eyJhbGciOiJIUzI1NiIsImtpZCI6ImxlZ2FjeS10b2tlbi1rZXkiLCJ0eXAiOiJKV1QifQ.eyJqdGkiOiJlNmU3YzNiOWVkMmM0ZmI5ODQ5OWE3MmQ2N2EzMjMyYSIsInN1YiI6ImRhdGFmbG93IiwiYXV0aG9yaXRpZXMiOlsidWFhLnJlc291cmNlIl0sInNjb3BlIjpbInVhYS5yZXNvdXJjZSJdLCJjbGllbnRfaWQiOiJkYXRhZmxvdyIsImNpZCI6ImRhdGFmbG93IiwiYXpwIjoiZGF0YWZsb3ciLCJncmFudF90eXBlIjoiY2xpZW50X2NyZWRlbnRpYWxzIiwicmV2X3NpZyI6IjFkMmUwMjVjIiwiaWF0IjoxNTY1MDE3MzQxLCJleHAiOjE1NjUwNjA1NDEsImlzcyI6Imh0dHA6Ly91YWEtc2VydmljZTo4MDgwL29hdXRoL3Rva2VuIiwiemlkIjoidWFhIiwiYXVkIjpbImRhdGFmbG93IiwidWFhIl19.G2f8bIMbUWJOz8kcZYtU37yYhTtMOEJlsrvJFINnUjo","token_type":"bearer","expires_in":43199,"scope":"uaa.resource","jti":"e6e7c3b9ed2c4fb98499a72d67a3232a"}root#bf98436ccc82:/#
At this point it seems that the UAA server is running OK and I can reach it from a "docker" process... let's continue using pods ...
3) Deploy Skipper and SCDF2 using UAA security.
Skipper and SCDF2 are deployed using the same values (with changes to the client_id values, of course):
LOGGING_LEVEL_ROOT: DEBUG
KUBERNETES_NAMESPACE: (v1:metadata.namespace)
SERVER_PORT: 8080
SPRING_CLOUD_CONFIG_ENABLED: false
SPRING_CLOUD_DATAFLOW_FEATURES_ANALYTICS_ENABLED: false
SPRING_CLOUD_KUBERNETES_SECRETS_ENABLE_API: true
SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED: true
SPRING_CLOUD_KUBERNETES_SECRETS_PATHS: /etc/secrets
SPRING_CLOUD_KUBERNETES_CONFIG_NAME: scdf2-data-flow-server
SPRING_CLOUD_SKIPPER_CLIENT_SERVER_URI: http://${SCDF2_DATA_FLOW_SKIPPER_SERVICE_HOST}/api
SPRING_CLOUD_DATAFLOW_SERVER_URI: http://${SCDF2_DATA_FLOW_SERVER_SERVICE_HOST}:${SCDF2_DATA_FLOW_SERVER_SERVICE_PORT}
SPRING_CLOUD_DATAFLOW_SECURITY_CF_USE_UAA: true
SECURITY_OAUTH2_CLIENT_CLIENT_ID: dataflow
SECURITY_OAUTH2_CLIENT_CLIENT_SECRET: dataflow
SECURITY_OAUTH2_CLIENT_SCOPE: openid
SPRING_CLOUD_DATAFLOW_SECURITY_AUTHORIZATION_MAP_OAUTH_SCOPES: true
SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI: https://uaa-service:8443/oauth/token
SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI: https://uaa-service:8443/oauth/authorize
SECURITY_OAUTH2_RESOURCE_USER_INFO_URI: https://uaa-service:8443/userinfo
SECURITY_OAUTH2_RESOURCE_TOKEN_INFO_URI: https://uaa-service:8443/check_token
SPRING_APPLICATION_JSON: { "com.sun.net.ssl.checkRevocation": "false", "maven": { "local-repository": "myLocalrepoMK", "remote-repositories": { "mk-repository": {"url": "http://${NEXUS_SERVICE_HOST}:${NEXUS_SERVICE_PORT}/repository/maven-releases/","auth": {"username": "admin","password": "admin123"}},"spring-repo": {"url": "https://repo.spring.io/libs-release","auth": {"username": "","password": ""}},"spring-repo-snapshot": {"url": "https://repo.spring.io/libs-snapshot/","auth": {"username": "","password": ""}}}} }
Using 8443 for pod-to-pod communication ...
And the Skipper and SCDF2 config maps:
management:
endpoints:
web:
base-path: /management
security:
roles: MANAGE
spring:
cloud:
dataflow:
security:
authorization:
map-oauth-scopes: true
role-mappings:
ROLE_CREATE: dataflow.create
ROLE_DEPLOY: dataflow.deploy
ROLE_DESTROY: dataflow.destoy
ROLE_MANAGE: dataflow.manage
ROLE_MODIFY: dataflow.modify
ROLE_SCHEDULE: dataflow.schedule
ROLE_VIEW: dataflow.view
enabled: true
rules:
# About
- GET /about => hasRole('ROLE_VIEW')
# Audit
- GET /audit-records => hasRole('ROLE_VIEW')
- GET /audit-records/** => hasRole('ROLE_VIEW')
# Boot Endpoints
- GET /management/** => hasRole('ROLE_MANAGE')
At this point I wonder: why can't I see a login mapping defined anywhere?
I deploy Skipper and SCDF2, and the first problem is that all the health checks return 401 .. ok ... let's continue ...
The request makes no progress after:
http://scdf2-data-flow-server:8844/login?code=ETFX6qfQMw&state=Fudfts
It does not get past the /login page from SCDF2 and on to the dashboard.
The request hangs at:
http://scdf2-data-flow-server:8844/login&response_type=code&scope=openid&state=5HST0f
I think the whole UAA flow completes and then it redirects back to the login of the SCDF security model.
Login and loop.
But what is happening?
The login request arrives at SCDF2, SCDF2 checks with UAA that everything is correct, and the request comes back to be processed as a new request in SCDF2, which sends another request to the UAA server ...
Then I restart SCDF with debug logging ...
The request is now:
GET /login?code=W7luipeEGG&state=7yiI9S HTTP/1.1
and logging :
2019-08-12 15:37:58.413 DEBUG 1 --- [nio-8080-exec-5] o.a.tomcat.util.net.SocketWrapperBase : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#39c5463b:org.apache.tomcat.util.net.NioChannel#6160a9db:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:58562]], Read from buffer: [0]
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.net.NioEndpoint : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#39c5463b:org.apache.tomcat.util.net.NioChannel#6160a9db:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:58562]], Read direct from socket: [593]
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.coyote.http11.Http11InputBuffer : Received [GET /login?code=W7luipeEGG&state=7yiI9S HTTP/1.1
Host: scdf2-data-flow-server:8844
Connection: keep-alive
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36
DNT: 1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Sec-Fetch-Site: none
Accept-Encoding: gzip, deflate
Accept-Language: es-ES,es;q=0.9,en-US;q=0.8,en;q=0.7
Cookie: JSESSIONID=077168452F9CCF4378715DC3FE20D4B2
]
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.t.util.http.Rfc6265CookieProcessor : Cookies: Parsing b[]: JSESSIONID=077168452F9CCF4378715DC3FE20D4B2
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.catalina.connector.CoyoteAdapter : Requested cookie session id is 077168452F9CCF4378715DC3FE20D4B2
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.c.authenticator.AuthenticatorBase : Security checking request GET /login
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] org.apache.catalina.realm.RealmBase : No applicable constraints defined
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.c.authenticator.AuthenticatorBase : Not subject to any constraint
2019-08-12 15:37:58.415 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.http.Parameters : Set encoding to UTF-8
2019-08-12 15:37:58.415 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.http.Parameters : Decoding query null UTF-8
2019-08-12 15:37:58.416 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.http.Parameters : Start processing with input [code=W7luipeEGG&state=7yiI9S]
2019-08-12 15:37:58.425 ERROR 1 --- [nio-8080-exec-5] o.s.c.c.s.OAuthSecurityConfiguration : An error occurred while accessing an authentication REST resource.
But with debug logging enabled, I can now see:
019-08-12 15:37:58.416 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.http.Parameters : Start processing with input [code=W7luipeEGG&state=7yiI9S]
2019-08-12 15:37:58.425 ERROR 1 --- [nio-8080-exec-5] o.s.c.c.s.OAuthSecurityConfiguration : An error occurred while accessing an authentication REST resource.
org.springframework.security.authentication.BadCredentialsException: Could not obtain access token
at org.springframework.security.oauth2.client.filter.OAuth2ClientAuthenticationProcessingFilter.attemptAuthentication(OAuth2ClientAuthenticationProcessingFilter.java:107)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:212)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:158)
at
org.springframework.security.oauth2.client.filter.OAuth2ClientAuthenticationProcessingFilter.attemptAuthentication(OAuth2ClientAuthenticationProcessingFilter.java:105)
... 66 common frames omitted
Caused by: org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://uaa-service:8443/oauth/token": sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target; nested exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:744)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:691)
at org.springframework.security.oauth2.client.token.OAuth2AccessTokenSupport.retrieveToken(OAuth2AccessTokenSupport.java:137)
... 72 common frames omitted
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:965)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1621)
... 88 common frames omitted
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:392)
... 94 common frames omitted
2019-08-12 15:37:58.426 DEBUG 1 --- [nio-8080-exec-5] o.a.c.c.C.[Tomcat].[localhost] : Processing ErrorPage[errorCode=0, location=/error]
2019-08-12 15:37:58.427 DEBUG 1 --- [nio-8080-exec-5] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Disabling the response for further output
OK, now we get:
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
It seems that the JVM needs more entries in its cacerts or similar ...
So, how can I add the new CA cert from the uaa-server to the JVM of SCDF2?
Is that the next step to get SCDF2 working with UAA?
What am I doing wrong?
Do I need to add the uaa-service cert to the JVM of the running SCDF2 pod?
Please help !!!
And the problem was:
In the server deployment, I removed the following config:
#- name: SECURITY_OAUTH2_CLIENT_SCOPE
#  value: 'openid'
Do not apply any configuration of the scope anywhere.
Because if the scope is omitted or null, all scopes will be assigned to the client, and no confirmation of third-party permissions is needed ...
Warning: a lot of the samples you can find use this scope setting .. have they been tested?
Do not apply any UAA config to Skipper .... only the CA cert for UAA into the JKS.
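For the trust-store part ("only the CA cert for UAA into the JKS"), a sketch of importing the uaa-service certificate into the JVM trust store inside the SCDF/Skipper image. The certificate filename is an assumption, and the cacerts path shown is the JDK 8 layout (newer JDKs drop the jre/ segment):
# Trust the UAA certificate so the "PKIX path building failed" error goes away
keytool -importcert -noprompt \
  -alias uaa-tls \
  -file uaa-service.crt \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit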
