I am using GitLab Omnibus 7.10.0 on RHEL 6.6. I have enabled LDAP using the following configuration:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
  label: 'FOO COM Active Directory (LDAP)'
  host: 'ad.server.foo.com'
  port: 3268
  uid: 'someuser'
  method: 'plain' # "tls" or "ssl" or "plain"
  bind_dn: 'CN=My Whole. Name,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com'
  password: 'thepassword'
  active_directory: true
  allow_username_or_email_login: false
  block_auto_created_users: false
  base: 'DC=ad,DC=server,DC=foo,DC=com'
  user_filter: ''
  # ## EE only
  # group_base: ''
  # admin_group: ''
  # sync_ssh_keys: false
  #
# secondary: # NOT FILLED OUT
EOS
My problem is that I can't get users to authenticate via LDAP. I'm not sure if the configuration is wrong or if I need to do something on the server side (to which I have no direct access). When I run
gitlab-rake gitlab:ldap:check RAILS_ENV=production
I get this
Checking LDAP ...
LDAP users with access to your GitLab server (only showing the first 100 results)
Server: ldapmain
Checking LDAP ... Finished
I can search for individual users using Java with this account (my personal account) or another account for a different application, but I can't get AD working with GitLab. I got the bind_dn "My Whole. Name" by running this command on a Windows box:
gpresult -r
I have also tried a bind_dn of:
uid=myADaccountname,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com
and
myADaccountname@ad.server.foo.com
but I still have the same problem.
For Active Directory, the uid should be:
uid: 'sAMAccountName'
GitLab should connect using the user specified in the bind_dn, with the given password.
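Applied to the configuration in the question, only the uid line changes; the sketch below copies the other values from the question and is untested:
main:
  label: 'FOO COM Active Directory (LDAP)'
  host: 'ad.server.foo.com'
  port: 3268
  uid: 'sAMAccountName' # the AD attribute GitLab matches logins against, not a specific account
  method: 'plain'
  bind_dn: 'CN=My Whole. Name,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com'
  password: 'thepassword'
  active_directory: true
  base: 'DC=ad,DC=server,DC=foo,DC=com'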
Since GitLab 9.5.1, the uid value needs to be wrapped in an array ([ ]).
See this issue: https://gitlab.com/gitlab-org/gitlab-ce/issues/37120
This might just be a bug which will be fixed.
I had to update the Active Directory value from the answer above to:
uid: ['sAMAccountName']
I'm trying to send logs from Winlogbeat to my ELK stack.
I installed my ELK stack with Docker and configured TLS on it.
I did everything according to the official guide and it worked for my host.
However, when I copied the same winlogbeat directory to my Event Collector server, it did not work (all files are the same, including the yml file).
When trying to run "winlogbeat.exe setup -e" I got the following error: 'error connecting to elasticsearch at "https://elastic-host:9200" Get "https://elastic-host:9200" Winlogbeat setup error: x509 certificate is valid for elastic-host ip, not elastic-host ip' (the two IPs in the message are identical). The CA is already added to the trusted root certificates. Everything is configured the same as on the host: on the host it works, on the server it doesn't. (The ELK server and the EVC are in the same segment, so there shouldn't be any firewall drops.)
My .yml (same file on host and EVC server):
On the host it also works without the ssl settings, and the traffic is still encrypted thanks to the TLS I configured on the Docker cluster, so I'm not sure the ssl settings are needed (but I wanted to include them in case they are).
# This file is an example configuration file highlighting only the most common
# options. The winlogbeat.reference.yml file from the same directory contains
# all the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/winlogbeat/index.html
# ======================== Winlogbeat specific options =========================
# event_logs specifies a list of event logs to monitor as well as any
# accompanying options. The YAML data type of event_logs is a list of
# dictionaries.
#
# The supported keys are name (required), tags, fields, fields_under_root,
# forwarded, ignore_older, level, event_id, provider, and include_xml. Please
# visit the documentation for the complete details of each option.
# https://go.es.io/WinlogbeatConfig
winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h

  - name: System

  - name: Security
    processors:
      - script:
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js

  - name: Microsoft-Windows-Sysmon/Operational
    processors:
      - script:
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js

  - name: Windows PowerShell
    event_id: 400, 403, 600, 800
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: Microsoft-Windows-PowerShell/Operational
    event_id: 4103, 4104, 4105, 4106
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: ForwardedEvents
    tags: [forwarded]
    processors:
      - script:
          when.equals.winlog.channel: Security
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js
      - script:
          when.equals.winlog.channel: Microsoft-Windows-Sysmon/Operational
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js
      - script:
          when.equals.winlog.channel: Windows PowerShell
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js
      - script:
          when.equals.winlog.channel: Microsoft-Windows-PowerShell/Operational
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js
# ====================== Elasticsearch template settings =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.101.129:5601"
  protocol: https
  username: "elastic"
  password: "passwd"

setup.kibana.ssl.enabled: true
setup.kibana.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
setup.kibana.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
setup.kibana.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Winlogbeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.101.129:9200"]
  username: "elastic"
  password: "passwd"

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

output.elasticsearch.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
output.elasticsearch.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
output.elasticsearch.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"

# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Winlogbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Winlogbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the winlogbeat.
#instrumentation:
# Set to true to enable instrumentation of winlogbeat.
#enabled: false
# Environment in which winlogbeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
In your output, you need to specify ssl.verification_mode: certificate.
For your example, it looks like it is the Kibana output that has a certificate specified on it:
setup.kibana.ssl.enabled: true
setup.kibana.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
setup.kibana.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
setup.kibana.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"
setup.kibana.ssl.verification_mode: certificate
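Since the Elasticsearch output in the question sets certificates the same way, the matching line would presumably be needed there as well (a sketch mirroring the settings above):
output.elasticsearch.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
output.elasticsearch.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
output.elasticsearch.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"
output.elasticsearch.ssl.verification_mode: certificate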
Older versions of winlogbeat will need ssl.verification_mode: none instead.
See SSL/TLS configuration documentation at https://www.elastic.co/guide/en/beats/winlogbeat/current/configuration-ssl.html
I am trying to connect from a Ruby on Rails application to AWS Keyspaces (AWS Cassandra), but I cannot manage to do it. I use the cequel gem and generated the config/cequel.yml, which contains something similar to the following:
development:
  host: "CONTACT_POINT"
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  max_retries: 3
  retry_delay: 0.5
  newrelic: true
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'
  replication:
    class: NetworkTopologyStrategy
    datacenter1: 3
    datacenter2: 2
  durable_writes: false
(The credentials were used in another app, where they work as expected.)
When I try to run:
rake cequel:keyspace:create
I get the following errors:
Cassandra::Errors::NoHostsAvailable: All attempted hosts failed: x.xxx.xxx.xxx (Cassandra::Errors::ServerError: Internal Server Error)
Set the datacenter to us-east-1 and drop the replication definition.
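A sketch of the adjusted cequel.yml under those two changes; the datacenter key is an assumption about how cequel exposes the Ruby driver's DC-aware load-balancing option, so verify it against the gem's documentation:
development:
  host: "CONTACT_POINT"
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  max_retries: 3
  retry_delay: 0.5
  newrelic: true
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'
  # AWS Keyspaces places all nodes in a single datacenter named after the region
  datacenter: 'us-east-1' # assumed pass-through to the driver; not a documented cequel key
  durable_writes: false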
I have an umbrella project composed of:
gateway - a Phoenix application
core - the business model layer
notifications - a dedicated app for delivering SMS, email, etc.
users - user management, role system, and authentication
The three components are connected via AMQP, so they can send and receive messages between them.
We are using Docker and drone.io, hosted on Google Cloud's Kubernetes Engine. What happens is that, randomly, the application raises the following exception while running my tests:
{\"status\":\"error\",\"message\":\"%DBConnection.OwnershipError{message: \\\"cannot find ownership process for #PID<0.716.0>.\\\\n\\\\nWhen using ownership, you must manage connections in one\\\\nof the four ways:\\\\n\\\\n* By explicitly checking out a connection\\\\n* By explicitly allowing a spawned process\\\\n* By running the pool in shared mode\\\\n* By using :caller option with allowed process\\\\n\\\\nThe first two options require every new process to explicitly\\\\ncheck a connection out or be allowed by calling checkout or\\\\nallow respectively.\\\\n\\\\nThe third option requires a {:shared, pid} mode to be set.\\\\nIf using shared mode in tests, make sure your tests are not\\\\nasync.\\\\n\\\\nThe fourth option requires [caller: pid] to be used when\\\\nchecking out a connection from the pool. The caller process\\\\nshould already be allowed on a connection.\\\\n\\\\nIf you are reading this error, it means you have not done one\\\\nof the steps above or that the owner process has crashed.\\\\n\\\\nSee Ecto.Adapters.SQL.Sandbox docs for more information.\\\"}\",\"code\":0}"
It's JSON, since we exchange AMQP messages in our tests. For instance:
# Given the module FindUserByEmail
defmodule Users.Services.FindUserByEmail do
  use GenAMQP.Server,
    event: "find_user_by_email",
    conn_name: Application.get_env(:gen_amqp, :conn_name)

  alias User.Repo
  alias User.Models.User, as: UserModel

  def execute(payload) do
    %{"email" => email} = Poison.decode!(payload)

    resp =
      case Repo.get_by(UserModel, email: email) do
        %UserModel{} = user -> %{status: :ok, response: user}
        nil -> ErrorHelper.err(2011)
        true -> ErrorHelper.err(2012)
        {:error, %Ecto.Changeset{} = changeset} -> ViewHelper.translate_errors(changeset)
      end

    {:reply, Poison.encode!(resp)}
  end
end
# and the test
defmodule Users.Services.FindUserByEmailTest do
  use User.ModelCase

  alias GenAMQP.Client

  def execute(payload) do
    resp = Client.call_with_conn(@conn_name, "find_user_by_email", payload)
    data = Poison.decode!(resp, keys: :atoms)
    assert data.status == "ok"
  end
end
The following is my .drone.yml file:
pipeline:
  unit-tests:
    image: bitwalker/alpine-elixir-phoenix:1.6.1
    environment:
      RABBITCONN: amqp://user:pass@localhost:0000/unit_testing
      DATABASE_URL: ecto://username:password@postgres/testing
    commands:
      - mix local.hex --force && mix local.rebar --force
      - mix deps.get
      - mix compile --force
      - mix test
The mix.exs file in every app contains the following aliases:
defp aliases do
  [
    "test": ["ecto.create", "ecto.migrate", "test"]
  ]
end
All our model_case files contain this configuration:
setup tags do
  :ok = Sandbox.checkout(User.Repo)

  unless tags[:async] do
    Sandbox.mode(User.Repo, {:shared, self()})
  end

  :ok
end
How can I debug this? It only occurs while testing code inside the container. Could the issue be related to the container's resources?
Any tip or hint would be appreciated.
Try setting ownership_timeout and timeout to large numbers in your config/test.exs:
config :app_name, User.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: ...,
  password: ...,
  database: ...,
  hostname: ...,
  pool: Ecto.Adapters.SQL.Sandbox,
  timeout: 120_000, # I think the default is 5000
  pool_timeout: 120_000,
  ownership_timeout: 120_000 # I think the default is 5000
I need your help. In my app I'm using the devise_ldap_authenticatable gem for Devise.
After configuring devise.rb in initializers and ldap.yml, I still can't find records in LDAP that I'm sure I should find (I can find them via an LDAP explorer using the same configuration and credentials). Can you please advise what I'm doing wrong in my config that keeps the search from working? Which attribute should I change? (It's not a problem with authentication but with finding records in LDAP.)
My config:
authorizations: &AUTHORIZATIONS
  ##allow_unauthenticated_bind: false
  ##group_base: DC=zzz,DC=yyy,DC=xxx
  # Requires config.ldap_check_group_membership in devise.rb be true
  # Can have multiple values, must match all to be authorized
  ##required_groups:
    # If only a group name is given, membership will be checked against "uniqueMember"
    ##- cn=admins,ou=groups,dc=zzz,dc=yyy,dc=xxx
    ##- cn=users,ou=groups,dc=zzz,dc=yyy,dc=xxx
    # If an array is given, the first element will be the attribute to check against, the second the group name
    ##- ["moreMembers", "cn=users,ou=groups,dc=test,dc=com"]
  ## Requires config.ldap_check_attributes in devise.rb to be true
  ## Can have multiple attributes and values, must match all to be authorized
  ##require_attribute:
    ##objectClass: inetOrgPerson
    ##authorizationRole: postsAdmin

## Environment
development:
  host: host_name
  port: 389
  attribute: mail
  base: DC=zzz,DC=yyy,DC=xxx
  admin_user: user_name
  admin_password: *******
  ssl: false
In my log I can see:
LDAP: LDAP dn lookup: mail=login
LDAP: LDAP search for login: mail=login
LDAP: LDAP search yielded 0 matches
LDAP: Authorizing user mail=login,DC=xxx,DC=zzz,DC=yyy
LDAP: Not authorized because not authenticated.
Many thanks,
T
I'm using the GitLab CE Omnibus package (gitlab_7.7.2-omnibus.5.4.2.ci-1_amd64) on a clean Debian (debian-7.8.0-amd64) installation.
I followed the installation process on https://about.gitlab.com/downloads/ and everything works fine.
I modified /etc/gitlab/gitlab.rb to use a single LDAP server for authentication, which also worked as expected.
But when I tried to use a secondary LDAP connection, "gitlab-ctl reconfigure" gave me this output:
---- Begin output of /opt/gitlab/bin/gitlab-rake cache:clear ----
STDOUT:
STDERR: rake aborted!
Devise::OmniAuth::StrategyNotFound: Could not find a strategy with name `Ldapsecondary'. Please ensure it is required or explicitly set it using the :strategy_class option .
Tasks: TOP => cache:clear => environment
(See full trace by running task with --trace)
---- End output of /opt/gitlab/bin/gitlab-rake cache:clear ----
So, the problem is that I can use the LDAP connection 'main' but not the connection 'secondary'.
Is there any possibility of using two different LDAP connections in the CE edition at once?
I'm new to Ruby [on Rails]. I found something in /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/ldap/config.rb but I'm not able to debug anything.
Here are my settings in /etc/gitlab/gitlab.rb
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-EOS # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
  label: 'First Company'
  host: '192.168.100.1'
  port: 389
  uid: 'sAMAccountName'
  method: 'tls' # "tls" or "ssl" or "plain"
  bind_dn: 'debian@firstcompany.local'
  password: 'Passw0rd'
  active_directory: true
  allow_username_or_email_login: false
  base: 'dc=firstcompany,dc=local'
  user_filter: '(&(objectClass=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))'
  ## EE only
  group_base: ''
  admin_group: ''
  sync_ssh_keys: false
secondary: # 'secondary' is the GitLab 'provider ID' of second LDAP server
  label: 'Second Company'
  host: '192.168.200.1'
  port: 389
  uid: 'sAMAccountName'
  method: 'tls' # "tls" or "ssl" or "plain"
  bind_dn: 'debian@secondcompany.local'
  password: 'Passw0rd'
  active_directory: true
  allow_username_or_email_login: false
  base: 'dc=secondcompany,dc=local'
  user_filter: '(&(objectClass=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))'
  ## EE only
  group_base: ''
  admin_group: ''
  sync_ssh_keys: false
EOS
Thank you very much!
Multiple LDAP servers are an EE feature, so setting the config in CE won't do anything. You can see the feature in the GitLab documentation.
With GitLab 14.7 (January 2022, seven years later), this is now possible! (for hosted instances)
LDAP failover support
You can now specify multiple hosts (using hosts) in your GitLab LDAP configuration.
GitLab will use the first reachable host. This ensures continuity of access to GitLab should one of your LDAP hosts become unresponsive.
Thanks to Mathieu Parent for the contribution!
See Documentation and Issue.
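In gitlab.rb, that would look something like this (a minimal sketch based on the documentation; the label, hostnames, ports, and other values are placeholders, not taken from the question):
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS'
main:
  label: 'LDAP'
  hosts: # tried in order; the first reachable host is used
    - ['ldap1.example.com', 636]
    - ['ldap2.example.com', 636]
  uid: 'sAMAccountName'
  encryption: 'simple_tls' # newer GitLab versions use 'encryption' rather than 'method'
  base: 'dc=example,dc=com'
EOS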