FreeRADIUS + Active Directory + Google Authenticator

I've been trying to make VPN users authenticate with 2FA (Google Authenticator). At the moment I have Cisco ISE, a FreeRADIUS server, and Active Directory. What I want to achieve: when a user connects to the VPN, Cisco ISE asks the RADIUS server to authenticate the user, and the RADIUS server authenticates the user against Active Directory. If the user authenticates successfully, the FreeRADIUS server must then ask the user for an OTP. My configuration is:
/etc/raddb/sites-enabled/default
server default {
    listen {
        type = auth
        ipaddr = 1.1.1.1
        port = 0
        limit {
            max_connections = 16
            lifetime = 0
            idle_timeout = 30
        }
    }
    listen {
        ipaddr = *
        port = 0
        type = acct
    }
    authorize {
        filter_username
        preprocess
        chap
        mschap
        digest
        suffix
        eap {
            ok = return
        }
        files
        -sql
        ldap
        if ((ok || updated) && User-Password && !control:Auth-Type) {
            update {
                control:Auth-Type := ldap
            }
        }
        expiration
        logintime
        pap
    }
    authenticate {
        Auth-Type PAP {
            pap
        }
        Auth-Type CHAP {
            chap
        }
        Auth-Type MS-CHAP {
            mschap
        }
        mschap
        digest
        Auth-Type LDAP {
            ldap
        }
        eap
    }
    preacct {
        preprocess
        acct_unique
        suffix
        files
    }
    accounting {
        detail
        unix
        -sql
        exec
        attr_filter.accounting_response
    }
    session {
    }
    post-auth {
        if (Google-Password) {
            update request {
                pam
            }
        }
        else {
            update reply {
                &Google-Password = "%{Google-Password}"
            }
        }
        update {
            &reply: += &session-state:
        }
        -sql
        exec
        remove_reply_message_if_eap
        Post-Auth-Type REJECT {
            -sql
            attr_filter.access_reject
            eap
            remove_reply_message_if_eap
        }
        Post-Auth-Type Challenge {
        }
    }
    pre-proxy {
    }
    post-proxy {
        eap
    }
}
/etc/raddb/clients.conf
client CISCO_ISE {
    ipaddr = 1.1.1.2
    proto = *
    secret = testing123
    require_message_authenticator = no
    nas_type = other
    limit {
        max_connections = 16
        lifetime = 0
        idle_timeout = 30
    }
}
/etc/raddb/mods-config/files/authorize
DEFAULT Framed-Protocol == PPP
    Framed-Protocol = PPP,
    Framed-Compression = Van-Jacobson-TCP-IP
DEFAULT Hint == "CSLIP"
    Framed-Protocol = SLIP,
    Framed-Compression = Van-Jacobson-TCP-IP
DEFAULT Hint == "SLIP"
    Framed-Protocol = SLIP
/etc/pam.d/radiusd
auth requisite pam_google_authenticator.so forward_pass
With this configuration, the FreeRADIUS server asks for username and password, but after AD authentication the server doesn't ask for the one-time password.

Solved the issue. For those configuring the exact same setup: you need to use the State attribute, which works much like a session ID or cookie. If the request has a State attribute, change the authentication method to PAM, which will check the token. If the request doesn't have a State attribute, it's a first-time request, and you authenticate it via Active Directory.
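A minimal unlang sketch of that State-based flow, reusing the module names from the configuration above (the Reply-Message text and the randstr-generated State value are illustrative, not taken from the original config):
authorize {
    ...
    if (&State) {
        # Second round trip: the user is answering the OTP prompt
        update control {
            &Auth-Type := PAM
        }
    }
    elsif (&User-Password && !&control:Auth-Type) {
        # First round trip: check the password against Active Directory
        update control {
            &Auth-Type := LDAP
        }
    }
    ...
}
authenticate {
    # PAM runs pam_google_authenticator via /etc/pam.d/radiusd
    Auth-Type PAM {
        pam
    }
    ...
}
post-auth {
    if (!&State) {
        # First factor succeeded: send Access-Challenge with a State
        # attribute instead of accepting immediately
        update reply {
            &Reply-Message := "Enter your one-time password"
            &State := "%{randstr:aaaaaaaaaaaaaaaa}"
        }
        update control {
            &Response-Packet-Type := Access-Challenge
        }
    }
    ...
}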

Related

dovecot deliver does not use same user-id for auto-indexing in FTS as it does for IMAP searches

Using Dovecot 2.3.7.2 with Solr 8.11.2, when I do:
doveadm search -u user mailbox INBOX subject "something"
I get multiple mail IDs.
When I start a manual IMAP session, log in as that user, select INBOX, and try the command:
. search subject "something"
it returns zero mail IDs. This is consistent across all searches using IMAP: no results are returned, no matter what I search for.
Further investigation shows that the Solr search via doveadm uses just the 'username', whereas the IMAP search uses the full email address (and finds nothing).
Worse, with auto-update of the FTS turned on, the user-id used when indexing arriving mail is the domain-less user-id.
Is there a way to change this behaviour, or at least make it consistent?
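A quick way to see the mismatch side by side (user@example.com stands in for the real address) is to run the same doveadm search under both forms of the user-id and compare the results:
doveadm search -u user mailbox INBOX subject "something"
doveadm search -u user@example.com mailbox INBOX subject "something"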
The dovecot -n command returns:
# 2.3.7.2 (3c910f64b): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.7.2 ()
# OS: Linux 5.4.0-125-generic x86_64 Ubuntu 20.04.5 LTS
# Hostname: WITHELD
mail_location = maildir:~/Mail
mail_plugins = " fts fts_solr virtual"
mail_privileged_group = mail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
namespace {
  location = virtual:~/Mail/virtual
  prefix = virtual.
  separator = .
}
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  mailbox virtual.All {
    comment = All my messages
    special_use = \All
  }
  prefix =
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
passdb {
  driver = pam
}
plugin {
  fts = solr
  fts_autoindex = yes
  fts_enforced = yes
  fts_solr = url=http://localhost:8983/solr/dovecot/
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
protocols = " imap lmtp sieve pop3 sieve"
service imap {
  vsz_limit = 4 G
}
service index-worker {
  vsz_limit = 2 G
}
service indexer-worker {
  vsz_limit = 2 G
}
service lmtp {
  inet_listener lmtp {
    address = 127.0.0.1
    port = 24
  }
}
ssl_cert = </etc/letsencrypt/live/WITHELD/fullchain.pem
ssl_client_ca_dir = /etc/ssl/certs
ssl_dh = # hidden, use -P to show it
ssl_key = # hidden, use -P to show it
userdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
userdb {
  driver = passwd
}
protocol lmtp {
  mail_plugins = " fts fts_solr virtual sieve"
  postmaster_address = WITHELD
}
protocol lda {
  mail_plugins = " fts fts_solr virtual sieve"
}
protocol imap {
  mail_max_userip_connections = 40
}

FreeRADIUS 3.0.1 Unlang Policy to Dynamically match User -> Client -> LDAP group

I have my RADIUS working fine with AD + Google OTP. What I am trying to accomplish now is to specify user-to-client-to-AD-group mappings in a policy and/or unlang within post-auth.
How it works today:
client performs request
Radius sends first half of the password to AD
Radius sends the second half of the password to Google OTP
If both come back good, then auth is successful
Post-auth does some checking: if the user is a member of the AD group -> assign class -> accept; OR if not part of the AD group -> reject
The part I need assistance with: I have over 30 sites with equipment in each one. We distinguish our users based on per-site access, e.g. NetworkAdmin01 is allowed to access site01 but not site02.
So the only way I can think of doing this:
Each site has its own virtual server (VS)
Each client has the "virtual server" attribute set
Within each VS there is post-auth unlang like:
if (LDAP-Group == "NetworkAdmins_site01") {
    [do something] (update control, update reply, etc.)
}
else {
    reject
}
This setup would require me to have 30+ VS running on the RADIUS server, which is not manageable.
If I were able to run this within a few VS (separated by equipment vendor), I'd want the post-auth to grant/assign based on:
if (%{client:shortname} =~ /regex/) {        # grab the portion of the variable between "." (site01)
    if (LDAP-Group =~ /regex/) {             # grab the portion of the variable after the last "_" (site01)
        if (%{0} == %{1}) {
            if (LDAP-Group == NetworkAdmins_site01) {
                update reply {
                    Juniper-Local-User-Name := "admins_group"
                }
            }
            else {
                update control {
                    Auth-type := "Reject"
                }
            }
        }
    }
}
After a lot of looking around, it appears that runtime dynamic variables are the biggest limitation to building this kind of policy/rule, so I went a different direction: I basically matched the NAS-IP-Address to the IP subnet/site I expect the request to come from.
This is placed into the post-auth section of each VS. Not the most manageable approach when you have 30+ sites, but it's the best I could find at this point (without running 30+ VS).
# SITE01 site
if (&NAS-IP-Address < 10.1.0.0/16) {
    if (LDAP-Group == "Radius_NetworkAdmins_SITE01") {
        update reply {
            Juniper-Local-User-Name := "ad-super-users"
        }
    }
    elsif (LDAP-Group == "Radius_NetworkAdminsRO_SITE01") {
        update reply {
            Juniper-Local-User-Name := "ad-readonly-users"
        }
    }
}
# SITE02 site (chained with elsif so the final else only rejects
# requests that matched no site at all)
elsif (&NAS-IP-Address < 10.2.0.0/16) {
    if (LDAP-Group == "Radius_NetworkAdmins_SITE02") {
        update reply {
            Juniper-Local-User-Name := "ad-super-users"
        }
    }
    elsif (LDAP-Group == "SG_Uni_Radius_NetworkAdminsRO_SITE02") {
        update reply {
            Juniper-Local-User-Name := "ad-readonly-users"
        }
    }
}
else {
    update reply {
        Reply-Message := "Not authorized to access this system"
    }
    update control {
        Auth-Type := "Reject"
    }
}
Post-Auth-Type REJECT {
    -sql
    attr_filter.access_reject
    eap
    remove_reply_message_if_eap
}
Post-Auth-Type Challenge {
}
pre-proxy {
}
post-proxy {
    eap
}

EMQX HTTP ACL auth - broker isn't available

I use EMQ X Broker v4.0.1. Simple HTTP auth works fine, but when I try to use HTTP ACL auth it doesn't work for me, despite the settings being very similar. When I try to connect to the broker via Eclipse Paho I get an error with status code 3, which means the broker isn't available. I turned on emqx_auth_http from the dashboard. These are my EMQX settings for HTTP ACL auth:
emqx.conf
listener.tcp.external = 1884
plugins/emqx_auth_http.conf
auth.http.auth_req = http://127.0.0.1:8991/mqtt/auth
auth.http.auth_req.method = post
auth.http.auth_req.params = clientid=%c,username=%u,password=%P
auth.http.super_req = http://somesite.com/mqtt/superuser
auth.http.super_req.method = post
auth.http.super_req.params = clientid=%c,username=%u
auth.http.acl_req = http://somesite/mqtt/acl
auth.http.acl_req.method = post
auth.http.acl_req.params = access=%A,username=%u,clientid=%c,ipaddr=%a,topic=%t,mountpoint=%m
auth.http.request.retry_times = 3
auth.http.request.retry_interval = 1s
auth.http.request.retry_backoff = 2.0
The endpoints (http://somesite.com/mqtt/superuser, http://somesite/mqtt/acl) are working fine and I can access them from the Postman app. Maybe you can tell me where I've done something wrong, in my configuration or somewhere else?
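For reference, the ACL endpoint can also be exercised the way EMQX would call it (values below are placeholders; this assumes the plugin's default form-encoded POST, which is worth verifying against your EMQX version):
curl -i -X POST http://somesite/mqtt/acl -d 'access=1&username=testuser&clientid=mqttjs_test&ipaddr=127.0.0.1&topic=t/1&mountpoint='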
Maybe you need to provide your HTTP server code.
HTTP response status 200 is OK.
HTTP response status 4xx is unauthorized.
HTTP response status 200 with the body "ignore" means break.
This is from a project that just passed the test:
egg-iot-with-mqtt
/**
 * Auth
 */
router.post('/mqtt/auth', async (ctx, next) => {
  const { clientid, username, password } = ctx.request.body
  // Mock
  // 200 means ok
  if (clientid === '' || 'your condition') {
    ctx.body = ''
  } else {
    // 4xx unauthorized
    ctx.status = 401
  }
})
/**
 * ACL
 */
router.post('/mqtt/acl', async (ctx, next) => {
  /**
   * Request Body
   * access: 1 | 2, 1 = sub, 2 = pub
   * access in body now is string !!!
   * {
   *   access: '1',
   *   username: 'undefined',
   *   clientid: 'mqttjs_bf980bf7',
   *   ipaddr: '127.0.0.1',
   *   topic: 't/1',
   *   mountpoint: 'undefined'
   * }
   */
  const info = ctx.request.body
  console.log(info)
  if (info.topic === 't/2') {
    // 200 is ok
    ctx.body = ''
  } else {
    // 4xx is unauthorized
    ctx.status = 403
  }
})

Nomad constraint "${attr.vault.version} version >= 0.6.1" to access vault

I'm trying to deploy a Nomad job which has a template that fetches some secrets from Vault.
My problem is that it keeps giving this placement failure because of a constraint I can't understand:
Constraint ${attr.vault.version} version >= 0.6.1 filtered 1 node
Nomad config
datacenter = "dc1"
data_dir = "/var/lib/nomad"
advertise {
  # Defaults to the first private IP address.
  http = "10.134.43.195"
  rpc = "10.134.43.195"
  serf = "10.134.43.195"
}
server {
  enabled = true
  bootstrap_expect = 1
  server_join {
    retry_join = ["provider=digitalocean api_token=[SECRET] tag_name=nomad_auto_join"]
  }
}
client {
  enabled = true
}
# Consul is installed locally and clustered
consul {
  address = "http://127.0.0.1:8500"
  server_auto_join = true
  client_auto_join = true
  auto_advertise = true
}
vault {
  enabled = true
  address = "http://vault.service.consul:8200"
  token = "[NOMAD_VAULT_TOKEN]"
  create_from_role = "nomad-cluster"
  task_token_ttl = "1h"
}
autopilot {
  cleanup_dead_servers = true
  last_contact_threshold = "200ms"
  max_trailing_logs = 250
  server_stabilization_time = "10s"
  enable_redundancy_zones = false
  disable_upgrade_migration = false
  enable_custom_upgrades = false
}
telemetry {
  publish_allocation_metrics = true
  publish_node_metrics = true
  prometheus_metrics = true
}
Nomad token to access Vault
NOMAD_VAULT_TOKEN is generated with this command:
vault token create -policy nomad-server -period 72h -orphan
Vault policy nomad-server
The nomad-server Vault policy is as follows:
# Allow creating tokens under "nomad-cluster" token role.
path "auth/token/create/nomad-cluster" {
  capabilities = ["update"]
}
# Allow looking up "nomad-cluster" token role.
path "auth/token/roles/nomad-cluster" {
  capabilities = ["read"]
}
# Allow looking up the token passed to Nomad to validate the token has the
# proper capabilities. This is provided by the "default" policy.
path "auth/token/lookup-self" {
  capabilities = ["read"]
}
# Allow looking up incoming tokens to validate they have permissions to access
# the tokens they are requesting. This is only required if
# `allow_unauthenticated` is set to false.
path "auth/token/lookup" {
  capabilities = ["update"]
}
# Allow revoking tokens that should no longer exist. This allows revoking
# tokens for dead tasks.
path "auth/token/revoke-accessor" {
  capabilities = ["update"]
}
# Allow checking the capabilities of our own token. This is used to validate the
# token upon startup.
path "sys/capabilities-self" {
  capabilities = ["update"]
}
# Allow our own token to be renewed.
path "auth/token/renew-self" {
  capabilities = ["update"]
}
# This is where needed secrets are fetched from
path "kv/*" {
  capabilities = ["update", "read", "create"]
}
Nomad job definition
My Nomad job definition is:
job "api" {
datacenters = ["dc1"]
type = "service"
group "api" {
count = 1
update {
max_parallel = 1
min_healthy_time = "30s"
healthy_deadline = "10m"
progress_deadline = "11m"
auto_revert = true
}
task "api" {
driver = "docker"
config {
image = "registry.gitlab.com/[GROUP]/[PROJECT]/${ENVIRONMENT}:${BUILD_NUMBER}"
port_map {
nginx = 80
}
auth {
username = "${REGISTRY_USER}"
password = "${REGISTRY_PASS}"
}
force_pull = true
hostname = "api"
}
vault {
policies = ["kv"]
change_mode = "signal"
change_signal = "SIGINT"
// env = "false"
}
template {
data = <<EOT
APP_NAME={{ key "services/api/app/${ENVIRONMENT}/APP_NAME" }}
APP_ENV={{ key "services/api/app/${ENVIRONMENT}/APP_ENV" }}
APP_KEY={{with secret "kv/services/api/app/${ENVIRONMENT}"}}{{.Data.APP_KEY.value}}{{end}}
APP_DEBUG={{ key "services/api/app/${ENVIRONMENT}/APP_DEBUG" }}
APP_URL={{ key "services/api/app/${ENVIRONMENT}/APP_URL" }}
LOG_CHANNEL={{ key "services/api/log/${ENVIRONMENT}/LOG_CHANNEL" }}
DB_CONNECTION={{ key "services/api/db/${ENVIRONMENT}/DB_CONNECTION" }}
DB_HOST={{ key "services/api/db/${ENVIRONMENT}/DB_HOST" }}
DB_PORT={{ key "services/api/db/${ENVIRONMENT}/DB_PORT" }}
DB_DATABASE={{with secret "kv/services/api/db/${ENVIRONMENT}/DB_DATABASE"}}{{.Data.value}}{{end}}
DB_USERNAME={{with secret "kv/services/api/db/${ENVIRONMENT}/DB_USERNAME"}}{{.Data.value}}{{end}}
DB_PASSWORD={{with secret "kv/services/api/db/${ENVIRONMENT}/DB_PASSWORD"}}{{.Data.value}}{{end}}
BROADCAST_DRIVER={{ key "services/api/broadcast/${ENVIRONMENT}/BROADCAST_DRIVER" }}
CACHE_DRIVER={{ key "services/api/cache/${ENVIRONMENT}/CACHE_DRIVER" }}
SESSION_DRIVER={{ key "services/api/session/${ENVIRONMENT}/SESSION_DRIVER" }}
SESSION_LIFETIME={{ key "services/api/session/${ENVIRONMENT}/SESSION_LIFETIME" }}
QUEUE_DRIVER={{ key "services/api/queue/${ENVIRONMENT}/QUEUE_DRIVER" }}
REDIS_HOST={{ key "services/api/redis/${ENVIRONMENT}/REDIS_HOST" }}
REDIS_PORT={{ key "services/api/redis/${ENVIRONMENT}/REDIS_PORT" }}
REDIS_PASSWORD={{with secret "kv/services/api/redis/${ENVIRONMENT}/REDIS_PASSWORD"}}{{.Data.value}}{{end}}
MAIL_DRIVER={{ key "services/api/mail/${ENVIRONMENT}/MAIL_DRIVER" }}
MAIL_HOST={{ key "services/api/mail/${ENVIRONMENT}/MAIL_HOST" }}
MAIL_PORT={{ key "services/api/mail/${ENVIRONMENT}/MAIL_PORT" }}
MAIL_USERNAME={{ key "services/api/mail/${ENVIRONMENT}/MAIL_USERNAME" }}
MAIL_PASSWORD={{ key "services/api/mail/${ENVIRONMENT}/MAIL_PASSWORD" }}
MAIL_ENCRYPTION={{ key "services/api/mail/${ENVIRONMENT}/MAIL_ENCRYPTION" }}
MAIL_FROM_ADDRESS={{ key "services/api/mail/${ENVIRONMENT}/MAIL_FROM_ADDRESS" }}
MAIL_FROM_NAME={{ key "services/api/mail/${ENVIRONMENT}/MAIL_FROM_NAME" }}
MAILGUN_DOMAIN={{ key "services/api/mailgun/${ENVIRONMENT}/MAILGUN_DOMAIN" }}
MAILGUN_SECRET={{with secret "kv/services/api/mailgun/${ENVIRONMENT}/MAILGUN_SECRET"}}{{.Data.value}}{{end}}
DO_SPACES_KEY={{with secret "kv/services/api/spaces/${ENVIRONMENT}/DO_SPACES_KEY"}}{{.Data.value}}{{end}}
DO_SPACES_SECRET={{with secret "kv/services/api/spaces/${ENVIRONMENT}/DO_SPACES_SECRET"}}{{.Data.value}}{{end}}
DO_SPACES_ENDPOINT={{ key "services/api/spaces/${ENVIRONMENT}/DO_SPACES_ENDPOINT" }}
DO_SPACES_REGION={{ key "services/api/spaces/${ENVIRONMENT}/DO_SPACES_REGION" }}
DO_SPACES_BUCKET={{ key "services/api/spaces/${ENVIRONMENT}/DO_SPACES_BUCKET" }}
JWT_SECRET={{with secret "kv/services/api/jwt/${ENVIRONMENT}"}}{{.Data.JWT_SECRET}}{{end}}
EOT
        destination = "custom/.env"
        // change_mode = "signal"
        // change_signal = "SIGINT"
        env = true
      }
      service {
        name = "api"
        tags = [
          "urlprefix-${ENVIRONMENT_URL}/"
        ]
        port = "nginx"
        check {
          type = "tcp"
          port = "nginx"
          interval = "10s"
          timeout = "2s"
        }
      }
      resources {
        cpu = 500
        memory = 256
        network {
          mbits = 100
          port "nginx" {}
        }
      }
    }
  }
}
Nomad logs
In Nomad logs, I can check that it correctly gets a token from Vault:
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.333Z [INFO ] client.fingerprint_mgr.vault: Vault is available
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.334Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.354Z [DEBUG] nomad.vault: starting renewal loop: creation_ttl=72h0m0s
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.375Z [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs="[arch cgroup consul cpu host network nomad signal storage vault]"
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.380Z [DEBUG] nomad.vault: successfully renewed server token
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.380Z [INFO ] nomad.vault: successfully renewed token: next_renewal=35h59m59.999975432s
I'm stuck here; if anyone can provide any insight, I'd really appreciate it!
[EDIT]
I'm using Vault v1.2.0-rc1.
Sometimes this error means that Vault is still sealed, or has become sealed after a restart of Vault.
This can be tested by doing a dig on Vault's DNS address, for example vault.service.consul.
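A quick sketch of those checks (sealed Vault nodes drop out of the Consul service catalog, so an empty DNS answer is a red flag):
dig +short vault.service.consul    # empty answer: no healthy/unsealed Vault registered
vault status                       # reports the Sealed true/false state directly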
According to the docs: "Note: Vault integration requires Vault version 0.6.2 or higher."
Your error message backs that up -- Constraint ${attr.vault.version} version >= 0.6.1 filtered 1 node. Normally, constraints are things that you specify in the Nomad job spec, but in this case it looks like it's coming from Nomad itself.
I think you'll need to upgrade Vault to 0.6.2.
I've upgraded Vault to version v1.2.3 and it started working. (Most likely the -rc1 prerelease suffix was the culprit: version constraints like >= 0.6.1 generally treat prerelease builds as not satisfying the comparison, even though 1.2.0-rc1 is far newer than 0.6.2.)

Configure FreeRADIUS to only support EAP TTLS PAP

I have an external RADIUS server that only supports PAP. I have configured FreeRADIUS 2.2.4 to proxy the PAP request inside an EAP-TTLS tunnel (from a WiFi access point configured for WPA2 Enterprise) to this RADIUS server, and I tested it with eapol_test. I can manually configure a PC or Mac to only send EAP-TTLS+PAP, but this is not really desirable.
When unconfigured WPA2 Enterprise clients connect, they try PEAP, LEAP, and EAP-MD5. I disabled most of the other EAP types, but it seems that I need at least one other EAP type set as default_eap_type in the TTLS block. The non-commented part of my eap.conf is below:
eap {
    default_eap_type = ttls
    timer_expire = 60
    ignore_unknown_eap_types = no
    cisco_accounting_username_bug = no
    max_sessions = 4096
    md5 {
    }
    tls {
        certdir = ${confdir}/certs
        cadir = ${confdir}/certs
        private_key_password = heythatsprivate
        private_key_file = ${certdir}/server.pem
        certificate_file = ${certdir}/server.pem
        dh_file = ${certdir}/dh
        random_file = /dev/urandom
        CA_path = ${cadir}
        cipher_list = "DEFAULT"
        make_cert_command = "${certdir}/bootstrap"
        ecdh_curve = "prime256v1"
        cache {
            enable = yes
            lifetime = 24 # hours
            max_entries = 255
        }
        verify {
        }
        ocsp {
            enable = no
            override_cert_url = yes
            url = "http://127.0.0.1/ocsp/"
        }
    }
    ttls {
        default_eap_type = md5
        copy_request_to_tunnel = no
        use_tunneled_reply = no
        virtual_server = "inner-tunnel"
    }
}
Is there a way to configure FreeRADIUS so that no EAP types are allowed inside TTLS, or to explicitly require PAP inside the tunnel?
Thanks,
-rohan
Try to set up a new server besides the default...
server my-server {
    authorize { ... }
    authenticate {
        eap
    }
    accounting { ... }
}
Then create an inner-tunnel for the second phase of authentication:
server my-tunnel {
    authorize {
        pap
    }
    ...
    authenticate {
        Auth-Type PAP {
            pap
        }
    }
    ...
}
You will need to modify your EAP configuration like this:
eap {
    default_eap_type = ttls
    ...
    ttls {
        default_eap_type = gtc
        copy_request_to_tunnel = yes
        use_tunneled_reply = yes
        virtual_server = "my-tunnel"
    }
    ...
}
Then specify, for each client, which server you want to use to process authentication requests:
client example {
    ipv6addr = x.x.x.x
    netmask = 32
    secret = *******
    shortname = example
    virtual_server = my-server
}
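To verify the end result, an eapol_test configuration along these lines forces EAP-TTLS with PAP as the inner method (the identity, password, and file name are placeholders; use your real shared secret):
network={
    key_mgmt=WPA-EAP
    eap=TTLS
    identity="testuser"
    password="testpassword"
    phase2="auth=PAP"
}
Run it with: eapol_test -c ttls-pap.conf -s testing123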
I'm sure this will enable what you want to do.
Regards,
-Hernan Garcia
