I've tried to follow the steps here to configure ejabberd OAuth, but it failed. My ejabberd.yml looks like this:
-
  port: 5280
  module: ejabberd_http
  request_handlers:
    "/websocket": ejabberd_http_ws
    "/log": mod_log_http
    # OAuth support:
    "/oauth": ejabberd_oauth
    # ReST API:
    "/api": mod_http_api
    ## "/pub/archive": mod_http_fileserver
  web_admin: true
  http_bind: true
  ## register: true
  captcha: true
Note: I've restarted ejabberd.
URL that I used (this is the page where I entered User, Server and Password): http://mytestsite.com:5280/oauth/authorization_token?response_type=token&client_id=Client1&redirect_uri=http://mytestsite.com&scope=user_get_roster+sasl_auth
I was redirected to https://mytestsite.com/?error=access_denied&state=&gws_rd=ssl
According to the tutorial, once /oauth and /api are enabled in the .yml file, that URL should redirect me to something like http://mytestsite.com/?access_token=RHIT8DoudzOctdzBhYL9bYvXz28xQ4Oj&token_type=bearer&expires_in=3600&scope=user_get_roster+sasl_auth&state=
You must define the oauth_access parameter in the ejabberd.yml config file; otherwise, no one can create an OAuth token.
We will update the documentation to make it more accurate on that part.
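As a minimal sketch (the values are illustrative; point oauth_access at whatever ACL matches your own policy instead of all):

# Top-level options in ejabberd.yml
oauth_access: all    # who is allowed to create OAuth tokens
oauth_expire: 3600   # token lifetime in seconds (matches the expires_in above)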
I'm trying to send logs from Winlogbeat to my ELK stack.
I installed my ELK stack with Docker and configured TLS on it.
I did everything according to the official guide and it worked for my host.
However, when copying the same winlogbeat directory to my Event Collector server, it did not work (all files are the same including the yml file).
When trying to run winlogbeat.exe setup -e, I got the following error: 'error connecting to elasticsearch at "https://elastic-host:9200" Get "https://elastic-host:9200" Winlogbeat setup error: x509 certificate is valid for elastic-host ip, not elastic-host ip' (same IPs). The CA is already added to the trusted root certificates. Everything is configured the same as on the host; on the host it works, on the server it doesn't. (The ELK server and the EVC are in the same segment, so there shouldn't be any firewall drops.)
My .yml (same file on host and EVC server):
On the host it also works without the SSL settings, and the traffic is still encrypted due to the TLS that I configured on the Docker cluster. So I'm not sure the SSL settings are needed (but I wanted to include them in case they are).
# This file is an example configuration file highlighting only the most common
# options. The winlogbeat.reference.yml file from the same directory contains
# all the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/winlogbeat/index.html

# ======================== Winlogbeat specific options =========================

# event_logs specifies a list of event logs to monitor as well as any
# accompanying options. The YAML data type of event_logs is a list of
# dictionaries.
#
# The supported keys are name (required), tags, fields, fields_under_root,
# forwarded, ignore_older, level, event_id, provider, and include_xml. Please
# visit the documentation for the complete details of each option.
# https://go.es.io/WinlogbeatConfig

winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h

  - name: System

  - name: Security
    processors:
      - script:
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js

  - name: Microsoft-Windows-Sysmon/Operational
    processors:
      - script:
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js

  - name: Windows PowerShell
    event_id: 400, 403, 600, 800
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: Microsoft-Windows-PowerShell/Operational
    event_id: 4103, 4104, 4105, 4106
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: ForwardedEvents
    tags: [forwarded]
    processors:
      - script:
          when.equals.winlog.channel: Security
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js
      - script:
          when.equals.winlog.channel: Microsoft-Windows-Sysmon/Operational
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js
      - script:
          when.equals.winlog.channel: Windows PowerShell
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js
      - script:
          when.equals.winlog.channel: Microsoft-Windows-PowerShell/Operational
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

# ====================== Elasticsearch template settings =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.101.129:5601"
  protocol: https
  username: "elastic"
  password: "passwd"

setup.kibana.ssl.enabled: true
setup.kibana.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
setup.kibana.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
setup.kibana.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Winlogbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.101.129:9200"]
  username: "elastic"
  password: "passwd"

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

output.elasticsearch.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
output.elasticsearch.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
output.elasticsearch.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"

# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Winlogbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Winlogbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the winlogbeat.
#instrumentation:
    # Set to true to enable instrumentation of winlogbeat.
    #enabled: false

    # Environment in which winlogbeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
In your output, you need to specify ssl.verification_mode: certificate.
In your example, it looks like it is the Kibana configuration that has a certificate specified on it:
setup.kibana.ssl.enabled: true
setup.kibana.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
setup.kibana.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
setup.kibana.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"
setup.kibana.ssl.verification_mode: certificate
Older versions of winlogbeat will need ssl.verification_mode: none instead.
See SSL/TLS configuration documentation at https://www.elastic.co/guide/en/beats/winlogbeat/current/configuration-ssl.html
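For completeness, here is a sketch of the same setting applied to the Elasticsearch output, reusing the paths from the question. verification_mode: certificate verifies the certificate chain but skips hostname verification, which is what the x509 hostname mismatch in your error calls for:

# Verify the CA chain, but skip hostname/IP verification
output.elasticsearch.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
output.elasticsearch.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
output.elasticsearch.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"
output.elasticsearch.ssl.verification_mode: certificate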
I want to transfer images between users in my chat application. I am using an ejabberd server for chat. As far as I can tell, the module that does this is mod_http_upload, i.e. HTTP File Upload (XEP-0363).
I am not able to figure out how to implement this. Any help in figuring out how to do it would be much appreciated.
In order to use this module, add the following settings to your ejabberd.yml file:

listen:
  # add the following lines in the listen section
  -
    module: ejabberd_http
    port: 5443
    tls: true
    certfile: "/etc/ejabberd/example.com.pem"
    request_handlers:
      "": mod_http_upload

access: # add the following lines in the access section
  soft_upload_quota:
    all: 1000 # MiB
  hard_upload_quota:
    all: 1100 # MiB

modules: # add the following lines in the modules section
  mod_http_upload:
    docroot: "/home/xmpp/upload"
    put_url: "http://@HOST@:5443"

Note that mod_http_upload expands the @HOST@ placeholder to the virtual host name.
The client requests an upload slot from the server and then uploads the file with an HTTP PUT to the returned URL (under your configured put_url, http://@HOST@:5443), just as you would from any HTTP client, e.g. Ruby on Rails. For more detail about the module's configuration, check this link:
https://github.com/processone/ejabberd-contrib/blob/master/mod_http_upload/README.txt
After uploading the file, you can send the link (URL) to the other user so they can download it.
In my case I used HTTPS and it worked.
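As a minimal sketch of that HTTPS variant (it assumes you reuse the certfile from the TLS listener above, or your own certificate):

modules:
  mod_http_upload:
    docroot: "/home/xmpp/upload"
    put_url: "https://@HOST@:5443"  # served by the tls: true listener on 5443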
I have two simple services:
transactions-core-service and transactions-api-service.
transactions-api-service invokes transactions-core-service to return a list of transactions. transactions-api-service is wrapped in a Hystrix command.
Both are registered in the Eureka server with the service IDs below:
TRANSACTIONS-API-SERVICE n/a (1) (1) UP (1) - 192.168.2.12:transactions-api-service:8083
TRANSACTIONS-CORE-SERVICE n/a (1) (1) UP (1) - 192.168.2.12:transactions-core-service:8087
Below is the Zuul server:
@SpringBootApplication
@Controller
@EnableZuulProxy
public class ZuulApplication {
    public static void main(String[] args) {
        new SpringApplicationBuilder(ZuulApplication.class).web(true).run(args);
    }
}
Zuul Configurations:
===============================================
info:
  component: Zuul Server

server:
  port: 8765

endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false

zuul:
  ignoredServices: "*"
  routes:
    transactions-api-service:
    path: transactions/accounts/**
    serviceId: transactions-api-service

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

logging:
  level:
    ROOT: INFO
    org.springframework.web: DEBUG
===============================================
When I try to invoke transactions-api-service through Zuul with the URL http://localhost:8765/transactions/accounts/123/transactions/786, I get a ZuulException:
2016-02-13 11:29:29.050  WARN 4936 --- [nio-8765-exec-1] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering
com.netflix.zuul.exception.ZuulException: Forwarding error
    at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:131) ~[spring-cloud-netflix-core-1.1.0.M3.jar:1.1.0.M3]
    at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.run(RibbonRoutingFilter.java:76) ~[spring-cloud-netflix-core-1.1.0.M3.jar:1.1.0.M3]
    ......
If I invoke transactions-api-service individually (with localhost/accounts/123/transactions/786), it works fine.
Am I missing any configuration on Zuul?
You need to change the Zuul execution timeout by adding this property to the application.yml of the Zuul server:

# Increase the Hystrix timeout to 60s (globally)
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 60000
Please refer to this thread on the Spring Cloud Netflix issue tracker: https://github.com/spring-cloud/spring-cloud-netflix/issues/321
I faced the same issue. In my case, Zuul was using service discovery. As a solution, the configuration below worked like a charm:
ribbon.ReadTimeout=60000
See the Ribbon configuration documentation for details on this property.
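If your Zuul server uses application.yml rather than application.properties, the equivalent would be (a straightforward translation of the line above, shown here for convenience):

# equivalent of ribbon.ReadTimeout=60000
ribbon:
  ReadTimeout: 60000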
You have an incorrect indentation. Instead of:

zuul:
  ignoredServices: "*"
  routes:
    transactions-api-service:
    path: transactions/accounts/**
    serviceId: transactions-api-service

It should be:

zuul:
  ignoredServices: "*"
  routes:
    transactions-api-service:
      path: transactions/accounts/**
      serviceId: transactions-api-service

In the first form, path and serviceId sit at the same level as the route name instead of being nested under it, so the route definition is effectively empty.
You can use these settings to avoid the 500 error:
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=1000000
zuul.host.connect-timeout-millis=10000
zuul.host.socket-timeout-millis=1000000
If your Zuul gateway uses a discovery service for service lookup, you can disable the Hystrix timeout, or increase it, as below:

# Disable the Hystrix timeout globally (for all services)
hystrix.command.default.execution.timeout.enabled: false

# Disable the timeout for a particular service
hystrix.command.<serviceName>.execution.timeout.enabled: false

# Increase the Hystrix timeout to 60s (globally)
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000

# Increase the Hystrix timeout to 60s (per service)
hystrix.command.<serviceName>.execution.isolation.thread.timeoutInMilliseconds: 60000
I was having the same issue with the Zuul server; it was resolved with the properties below.
Let's say you have two clients, clientA and clientB:
for clientA, spring.application.name=clientA and server.port=1111,
and for clientB, spring.application.name=clientB and server.port=2222, in their respective application.properties files.
You want to connect these two services to the Zuul server, which is running on port 8087.
Add the properties below to your Zuul server's application.properties file:
spring.application.name=gateway-service
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
clientA.ribbon.listOfServers=http://localhost:1111
clientB.ribbon.listOfServers=http://localhost:2222
server.port=8087
Note: I am using a Eureka client with my Zuul server; you can skip that part. I'm adding this solution in case it's helpful for someone.
I am registering an account from Android on an ejabberd server, but I am getting a 403 auth error while creating the account.
Here is my ejabberd.yml. Can anyone tell me what I am missing?
admin:
  user:
    - "xyz": "my-ip"
loopback:
  ip:
    - "127.0.0.0/8"
    - "my-ip"
register:
  all: allow
trusted_network:
  all: allow
I have found the solution. In your configuration file, ejabberd.yml, you need:

mod_register:
  access_from: allow
  access: register
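For context, a minimal sketch of where this sits in ejabberd.yml; the register rule here mirrors the one already shown in the question, and the rest of the file is assumed:

access_rules:
  register:
    all: allow   # who may register accounts (from the question's config)

modules:
  mod_register:
    access_from: allow   # allow in-band registration from remote clients
    access: register     # points at the register access rule above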
I am using GitLab Omnibus 7.10.0 on RHEL 6.6. I have enabled LDAP using the following configuration:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
  label: 'FOO COM Active Directory (LDAP)'
  host: 'ad.server.foo.com'
  port: 3268
  uid: 'someuser'
  method: 'plain' # "tls" or "ssl" or "plain"
  bind_dn: 'CN=My Whole. Name,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com'
  password: 'thepassword'
  active_directory: true
  allow_username_or_email_login: false
  block_auto_created_users: false
  base: 'DC=ad,DC=server,DC=foo,DC=com'
  user_filter: ''
  # ## EE only
  # group_base: ''
  # admin_group: ''
  # sync_ssh_keys: false
  #
# secondary: # NOT FILLED OUT
EOS
My problem is that I can't get users to authenticate via LDAP. I'm not sure if the configuration is wrong or if I need to do something on the server side (which I have no direct access to). When I run
gitlab-rake gitlab:ldap:check RAILS_ENV=production
I get this
Checking LDAP ...
LDAP users with access to your GitLab server (only showing the first 100 results)
Server: ldapmain
Checking LDAP ... Finished
I can search for individual users using Java with this account (my personal account), or with another account used by a different application, but I can't get AD working with GitLab. I got the bind_dn "My Whole. Name" by running this command on a Windows box:
gpresult -r
I have also tried a bind_dn of:
uid=myADaccountname,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com
and
myADaccountname#ad.server.foo.com
but I still have the same problem.
For Active Directory, the uid should be the name of the LDAP attribute that usernames are matched against, not an actual account name:

uid: 'sAMAccountName'

GitLab connects using the user specified in the bind_dn, with the given password.
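Applied to the configuration in the question, only the uid line changes (a sketch keeping the other values from the original snippet):

main:
  label: 'FOO COM Active Directory (LDAP)'
  host: 'ad.server.foo.com'
  port: 3268
  uid: 'sAMAccountName'   # the attribute name, not a username
  method: 'plain'
  bind_dn: 'CN=My Whole. Name,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com'
  password: 'thepassword'
  active_directory: true
  base: 'DC=ad,DC=server,DC=foo,DC=com'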
Since GitLab 9.5.1, the uid now requires array brackets ([ ]).
See this issue: https://gitlab.com/gitlab-org/gitlab-ce/issues/37120
This might just be a bug which will be fixed.
I had to update the value for Active Directory from the answer above to:
uid: ['sAMAccountName']