Configure FreeRADIUS to only support EAP TTLS PAP - freeradius

I have an external RADIUS server that only supports PAP. I have configured FreeRADIUS 2.2.4 to proxy the PAP request inside an EAP-TTLS tunnel (from a WiFi access point configured for WPA2 Enterprise) to this RADIUS server, and I tested it with eapol_test. I can manually configure a PC or Mac to only send EAP-TTLS+PAP but this is not really desirable.
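For reference, a minimal eapol_test network block for exercising TTLS with PAP inside the tunnel looks roughly like this; the identity, password, and CA path are placeholders, not the values actually used:
network={
    key_mgmt=WPA-EAP
    eap=TTLS
    identity="testuser"
    password="testpassword"
    ca_cert="/path/to/ca.pem"
    phase2="auth=PAP"
}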
When unconfigured WPA2 Enterprise clients connect, they try PEAP, LEAP, and EAP-MD5. I have disabled most of the other EAP types, but it seems that I still need at least one EAP type set as default_eap_type in the TTLS block. The non-commented part of my eap.conf is below:
eap {
default_eap_type = ttls
timer_expire = 60
ignore_unknown_eap_types = no
cisco_accounting_username_bug = no
max_sessions = 4096
md5 {
}
tls {
certdir = ${confdir}/certs
cadir = ${confdir}/certs
private_key_password = heythatsprivate
private_key_file = ${certdir}/server.pem
certificate_file = ${certdir}/server.pem
dh_file = ${certdir}/dh
random_file = /dev/urandom
CA_path = ${cadir}
cipher_list = "DEFAULT"
make_cert_command = "${certdir}/bootstrap"
ecdh_curve = "prime256v1"
cache {
enable = yes
lifetime = 24 # hours
max_entries = 255
}
verify {
}
ocsp {
enable = no
override_cert_url = yes
url = "http://127.0.0.1/ocsp/"
}
}
ttls {
default_eap_type = md5
copy_request_to_tunnel = no
use_tunneled_reply = no
virtual_server = "inner-tunnel"
}
}
Is there a way to configure FreeRADIUS so that there are no EAP types allowed inside TTLS or to explicitly require PAP inside the tunnel?
Thanks,
-rohan

Try setting up a new server besides the default...
server my-server {
authorize { ... }
authenticate {
eap
}
accounting { ... }
}
Then create an inner tunnel for the second phase of authentication:
server my-tunnel {
authorize {
pap
}
...
authenticate {
Auth-Type PAP {
pap
}
}
...
}
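If you also want to explicitly require PAP inside the tunnel (rather than relying on the pap module's auto-detection), a hedged sketch of the my-tunnel authorize section could look like the following; the EAP-Message check is my addition on top of the answer, not something FreeRADIUS requires:
authorize {
    # Refuse anything that arrives inside the TTLS tunnel as EAP instead of PAP
    if (EAP-Message) {
        reject
    }
    # pap sets Auth-Type = PAP itself when a User-Password is present;
    # forcing it here makes the PAP-only requirement explicit
    update control {
        Auth-Type := PAP
    }
    pap
}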
You will need to modify your EAP configuration like this:
eap {
default_eap_type = ttls
...
ttls {
default_eap_type = gtc
copy_request_to_tunnel = yes
use_tunneled_reply = yes
virtual_server = "my-tunnel"
}
...
}
Then specify, for each client, which virtual server you want to use to process its authentication requests:
client example {
ipaddr = x.x.x.x
netmask = 32
secret = *******
shortname = example
virtual_server = my-server
}
I'm sure this will enable what you want to do.
Regards,
-Hernan Garcia

Related

dovecot deliver does not use same user-id for auto-indexing in FTS as it does for IMAP searches

Using Dovecot 2.3.7.2 with Solr 8.11.2, when I do:
doveadm search -u user mailbox INBOX subject "something"
I get multiple mail IDs.
When I start a manual IMAP session, log in as that user, select INBOX, and try the command:
. search subject "something"
It returns zero mail IDs; this is consistent across all IMAP searches - no results are returned, no matter what I search for.
Further investigation shows that the Solr search via doveadm uses just the 'username', whereas the IMAP search uses the full email address (and finds nothing).
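As an illustration of the mismatch (the address below is hypothetical), the two forms of the lookup can be compared directly; if only the first returns IDs, the Solr index was built under the bare username rather than the full login name:
doveadm search -u user mailbox INBOX subject "something"
doveadm search -u user@example.com mailbox INBOX subject "something"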
Worse, with auto-update of the FTS turned on, the user ID used when indexing arriving mail is the domain-less one.
Is there a way to change this behaviour, or at least make it consistent?
The dovecot -n command returns:
# 2.3.7.2 (3c910f64b): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.7.2 ()
# OS: Linux 5.4.0-125-generic x86_64 Ubuntu 20.04.5 LTS
# Hostname: WITHELD
mail_location = maildir:~/Mail
mail_plugins = " fts fts_solr virtual"
mail_privileged_group = mail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
namespace {
location = virtual:~/Mail/virtual
prefix = virtual.
separator = .
}
namespace inbox {
inbox = yes
location =
mailbox Drafts {
special_use = \Drafts
}
mailbox Junk {
special_use = \Junk
}
mailbox Sent {
special_use = \Sent
}
mailbox "Sent Messages" {
special_use = \Sent
}
mailbox Trash {
special_use = \Trash
}
mailbox virtual.All {
comment = All my messages
special_use = \All
}
prefix =
}
passdb {
args = /etc/dovecot/dovecot-sql.conf.ext
driver = sql
}
passdb {
driver = pam
}
plugin {
fts = solr
fts_autoindex = yes
fts_enforced = yes
fts_solr = url=http://localhost:8983/solr/dovecot/
sieve = file:~/sieve;active=~/.dovecot.sieve
}
protocols = " imap lmtp sieve pop3 sieve"
service imap {
vsz_limit = 4 G
}
service index-worker {
vsz_limit = 2 G
}
service indexer-worker {
vsz_limit = 2 G
}
service lmtp {
inet_listener lmtp {
address = 127.0.0.1
port = 24
}
}
ssl_cert = </etc/letsencrypt/live/WITHELD/fullchain.pem
ssl_client_ca_dir = /etc/ssl/certs
ssl_dh = # hidden, use -P to show it
ssl_key = # hidden, use -P to show it
userdb {
args = /etc/dovecot/dovecot-sql.conf.ext
driver = sql
}
userdb {
driver = passwd
}
protocol lmtp {
mail_plugins = " fts fts_solr virtual sieve"
postmaster_address = WITHELD
}
protocol lda {
mail_plugins = " fts fts_solr virtual sieve"
}
protocol imap {
mail_max_userip_connections = 40
}

Azure Terraform Web App private Endpoint virtual network

I am trying to automate the deployment of an Azure virtual network and an Azure web app.
During the deployment of those resources, everything went just fine with no errors. So I wanted to try to activate the private endpoint on the web app. This is my configuration in Terraform.
resource "azurerm_virtual_network" "demo-vnet" {
name = "virtual-network-test"
address_space = ["10.100.0.0/16"]
location = var.location
resource_group_name = azurerm_resource_group.rg-testing-env.name
}
resource "azurerm_subnet" "front_end" {
name = "Front_End-Subnet"
address_prefixes = ["10.100.5.0/28"]
virtual_network_name = azurerm_virtual_network.demo-vnet.name
resource_group_name = azurerm_resource_group.rg-testing-env.name
delegation {
name = "testing-frontend"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
And on the web app itself, I set this configuration
resource "azurerm_app_service_virtual_network_swift_connection" "web-app-vnet" {
app_service_id = azurerm_app_service.app-test.example.id
subnet_id = azurerm_subnet.front_end.id
}
NOTE: On my first deployment, the swift connection failed because I had no delegation on the virtual network, so I had to add the delegation on the subnet to be able to run Terraform.
After putting all the configuration in place, I ran Terraform and everything ran smoothly, with no errors.
After completion, I checked my web app's Private Endpoint and it was just off.
Can anyone please explain what I am doing wrong here? I thought the swift connection was the block of code that activates the Private Endpoint, but apparently I am missing something else.
Just to confirm my reasoning, I tried to do the manual steps in the portal, but surprisingly I was not able to because I have the delegation on the subnet.
Thank you so much for any help and/or explanation you can offer to solve this issue.
I have used the code below to test the creation of a VNet and Web App with a private endpoint.
provider "azurerm" {
features{}
}
data "azurerm_resource_group" "rg" {
name = "ansumantest"
}
# Virtual Network
resource "azurerm_virtual_network" "vnet" {
name = "ansumanapp-vnet"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
address_space = ["10.4.0.0/16"]
}
# Subnets for App Service instances
resource "azurerm_subnet" "appserv" {
name = "frontend-app"
resource_group_name = data.azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.4.1.0/24"]
enforce_private_link_endpoint_network_policies = true
}
# App Service Plan
resource "azurerm_app_service_plan" "frontend" {
name = "ansuman-frontend-asp"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
kind = "Linux"
reserved = true
sku {
tier = "Premium"
size = "P1V2"
}
}
# App Service
resource "azurerm_app_service" "frontend" {
name = "ansuman-frontend-app"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
app_service_plan_id = azurerm_app_service_plan.frontend.id
}
#private endpoint
resource "azurerm_private_endpoint" "example" {
name = "${azurerm_app_service.frontend.name}-endpoint"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
subnet_id = azurerm_subnet.appserv.id
private_service_connection {
name = "${azurerm_app_service.frontend.name}-privateconnection"
private_connection_resource_id = azurerm_app_service.frontend.id
subresource_names = ["sites"]
is_manual_connection = false
}
}
# private DNS
resource "azurerm_private_dns_zone" "example" {
name = "privatelink.azurewebsites.net"
resource_group_name = data.azurerm_resource_group.rg.name
}
#private DNS Link
resource "azurerm_private_dns_zone_virtual_network_link" "example" {
name = "${azurerm_app_service.frontend.name}-dnslink"
resource_group_name = data.azurerm_resource_group.rg.name
private_dns_zone_name = azurerm_private_dns_zone.example.name
virtual_network_id = azurerm_virtual_network.vnet.id
registration_enabled = false
}
Requirements:
As you can see from the above code, the Private Endpoint, Private DNS, and Private DNS Link blocks are required to create the private endpoint and enable it for the App Service.
The App Service Plan needs to be a Premium plan to support a private endpoint.
The subnet used by the private endpoint should have enforce_private_link_endpoint_network_policies = true set, otherwise it will error with a message that the subnet has private endpoint network policies enabled and that they must be disabled for the subnet to be used by a private endpoint.
The DNS zone name should only be privatelink.azurewebsites.net, as you are creating a private endpoint for a web app.
Outputs:
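As a hedged illustration (the attribute path is my assumption based on the azurerm provider, not something shown in the answer above), a Terraform output block can surface the endpoint's private IP after apply:
output "webapp_private_endpoint_ip" {
  # Private IP assigned to the first service connection of the endpoint defined above
  value = azurerm_private_endpoint.example.private_service_connection[0].private_ip_address
}
Running terraform output webapp_private_endpoint_ip after the apply should print the IP if the endpoint was actually created.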

Freeradius + Active Directory + Google Authenticator

I've been trying to make VPN users authenticate with 2FA (Google Authenticator). At the moment I have Cisco ISE, a FreeRADIUS server, and Active Directory. What I want to achieve is: when a user connects to the VPN (Cisco ISE), the server asks the RADIUS server to authenticate the user, and the RADIUS server authenticates the user against Active Directory. If the user is authenticated successfully, the FreeRADIUS server must then ask the user for an OTP. My configuration is:
/etc/raddb/sites-enabled/default
server default {
listen {
type = auth
ipaddr = 1.1.1.1
port = 0
limit {
max_connections = 16
lifetime = 0
idle_timeout = 30
}
}
listen {
ipaddr = *
port = 0
type = acct
}
authorize {
filter_username
preprocess
chap
mschap
digest
suffix
eap {
ok = return
}
files
-sql
ldap
if ((ok || updated) && User-Password && !control:Auth-Type){
update {
control:Auth-Type := ldap
}
}
expiration
logintime
pap
}
authenticate {
Auth-Type PAP {
pap
}
Auth-Type CHAP {
chap
}
Auth-Type MS-CHAP {
mschap
}
mschap
digest
Auth-Type LDAP {
ldap
}
eap
}
preacct {
preprocess
acct_unique
suffix
files
}
accounting {
detail
unix
-sql
exec
attr_filter.accounting_response
}
session {
}
post-auth {
if (Google-Password) {
update request {
pam
}
}
else {
update reply {
&Google-Password = "%{Google-Password}"
}
}
update {
&reply: += &session-state:
}
-sql
exec
remove_reply_message_if_eap
Post-Auth-Type REJECT {
-sql
attr_filter.access_reject
eap
remove_reply_message_if_eap
}
Post-Auth-Type Challenge {
}
}
pre-proxy {
}
post-proxy {
eap
}
}
/etc/raddb/clients.conf
client CISCO_ISE {
ipaddr = 1.1.1.2
proto = *
secret = testing123
require_message_authenticator = no
nas_type = other
limit {
max_connections = 16
lifetime = 0
idle_timeout = 30
}
}
/etc/raddb/mods-config/files/authorize
DEFAULT Framed-Protocol == PPP
Framed-Protocol = PPP,
Framed-Compression = Van-Jacobson-TCP-IP
DEFAULT Hint == "CSLIP"
Framed-Protocol = SLIP,
Framed-Compression = Van-Jacobson-TCP-IP
DEFAULT Hint == "SLIP"
Framed-Protocol = SLIP
/etc/pam.d/radiusd
auth requisite pam_google_authenticator.so forward_pass
With this configuration the FreeRADIUS server asks for the username and password, but after AD authentication the server doesn't ask for the one-time password.
Solved the issue. For those configuring the same setup: you need to use the State attribute, which works much like a session or a cookie. If the request has a State attribute, change the authentication method to PAM, which will check the token. If the request doesn't have a State attribute, it's a first-time request that you need to authenticate via Active Directory.
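A minimal sketch of that logic in the server's authorize section, assuming FreeRADIUS 3.x with the pam module enabled and listed under authenticate (this is an illustration, not the exact configuration used):
if (&State) {
    # Second round trip: the client is answering the OTP prompt, verify it via PAM
    update {
        control:Auth-Type := PAM
    }
}
else {
    # First round trip: authenticate the username/password against Active Directory
    update {
        control:Auth-Type := ldap
    }
}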

Nomad constraint "${attr.vault.version} version >= 0.6.1" to access vault

I'm trying to deploy a Nomad job which has a template that fetches some secrets from Vault.
My problem is that it keeps giving this placement failure because of a constraint, and I can't understand why:
Constraint ${attr.vault.version} version >= 0.6.1 filtered 1 node
Nomad config
datacenter = "dc1"
data_dir = "/var/lib/nomad"
advertise {
# Defaults to the first private IP address.
http = "10.134.43.195"
rpc = "10.134.43.195"
serf = "10.134.43.195"
}
server {
enabled = true
bootstrap_expect = 1
server_join {
retry_join = ["provider=digitalocean api_token=[SECRET] tag_name=nomad_auto_join"],
}
}
client {
enabled = true
}
# Consul is installed locally and clustered
consul {
address = "http://127.0.0.1:8500"
server_auto_join = true
client_auto_join = true
auto_advertise = true
}
vault {
enabled = true
address = "http://vault.service.consul:8200"
token = "[NOMAD_VAULT_TOKEN]"
create_from_role = "nomad-cluster"
task_token_ttl = "1h"
}
autopilot {
cleanup_dead_servers = true
last_contact_threshold = "200ms"
max_trailing_logs = 250
server_stabilization_time = "10s"
enable_redundancy_zones = false
disable_upgrade_migration = false
enable_custom_upgrades = false
}
telemetry {
publish_allocation_metrics = true
publish_node_metrics = true
prometheus_metrics = true
}
### Nomad Token to access Vault
NOMAD_VAULT_TOKEN is generated with this command:
vault token create -policy nomad-server -period 72h -orphan
Vault policy nomad-server
nomad-server vault policy is as such:
# Allow creating tokens under "nomad-cluster" token role.
path "auth/token/create/nomad-cluster" {
capabilities = ["update"]
}
# Allow looking up "nomad-cluster" token role.
path "auth/token/roles/nomad-cluster" {
capabilities = ["read"]
}
# Allow looking up the token passed to Nomad to validate the token has the
# proper capabilities. This is provided by the "default" policy.
path "auth/token/lookup-self" {
capabilities = ["read"]
}
# Allow looking up incoming tokens to validate they have permissions to access
# the tokens they are requesting. This is only required if
# `allow_unauthenticated` is set to false.
path "auth/token/lookup" {
capabilities = ["update"]
}
# Allow revoking tokens that should no longer exist. This allows revoking
# tokens for dead tasks.
path "auth/token/revoke-accessor" {
capabilities = ["update"]
}
# Allow checking the capabilities of our own token. This is used to validate the
# token upon startup.
path "sys/capabilities-self" {
capabilities = ["update"]
}
# Allow our own token to be renewed.
path "auth/token/renew-self" {
capabilities = ["update"]
}
# This is where the needed secrets are fetched from
path "kv/*" {
capabilities = ["update", "read", "create"]
}
Nomad job definition
My Nomad job definition is:
job "api" {
datacenters = ["dc1"]
type = "service"
group "api" {
count = 1
update {
max_parallel = 1
min_healthy_time = "30s"
healthy_deadline = "10m"
progress_deadline = "11m"
auto_revert = true
}
task "api" {
driver = "docker"
config {
image = "registry.gitlab.com/[GROUP]/[PROJECT]/${ENVIRONMENT}:${BUILD_NUMBER}"
port_map {
nginx = 80
}
auth {
username = "${REGISTRY_USER}"
password = "${REGISTRY_PASS}"
}
force_pull = true
hostname = "api"
}
vault {
policies = ["kv"]
change_mode = "signal"
change_signal = "SIGINT"
// env = "false"
}
template {
data = <<EOT
APP_NAME={{ key "services/api/app/${ENVIRONMENT}/APP_NAME" }}
APP_ENV={{ key "services/api/app/${ENVIRONMENT}/APP_ENV" }}
APP_KEY={{with secret "kv/services/api/app/${ENVIRONMENT}"}}{{.Data.APP_KEY.value}}{{end}}
APP_DEBUG={{ key "services/api/app/${ENVIRONMENT}/APP_DEBUG" }}
APP_URL={{ key "services/api/app/${ENVIRONMENT}/APP_URL" }}
LOG_CHANNEL={{ key "services/api/log/${ENVIRONMENT}/LOG_CHANNEL" }}
DB_CONNECTION={{ key "services/api/db/${ENVIRONMENT}/DB_CONNECTION" }}
DB_HOST={{ key "services/api/db/${ENVIRONMENT}/DB_HOST" }}
DB_PORT={{ key "services/api/db/${ENVIRONMENT}/DB_PORT" }}
DB_DATABASE={{with secret "kv/services/api/db/${ENVIRONMENT}/DB_DATABASE"}}{{.Data.value}}{{end}}
DB_USERNAME={{with secret "kv/services/api/db/${ENVIRONMENT}/DB_USERNAME"}}{{.Data.value}}{{end}}
DB_PASSWORD={{with secret "kv/services/api/db/${ENVIRONMENT}/DB_PASSWORD"}}{{.Data.value}}{{end}}
BROADCAST_DRIVER={{ key "services/api/broadcast/${ENVIRONMENT}/BROADCAST_DRIVER" }}
CACHE_DRIVER={{ key "services/api/cache/${ENVIRONMENT}/CACHE_DRIVER" }}
SESSION_DRIVER={{ key "services/api/session/${ENVIRONMENT}/SESSION_DRIVER" }}
SESSION_LIFETIME={{ key "services/api/session/${ENVIRONMENT}/SESSION_LIFETIME" }}
QUEUE_DRIVER={{ key "services/api/queue/${ENVIRONMENT}/QUEUE_DRIVER" }}
REDIS_HOST={{ key "services/api/redis/${ENVIRONMENT}/REDIS_HOST" }}
REDIS_PORT={{ key "services/api/redis/${ENVIRONMENT}/REDIS_PORT" }}
REDIS_PASSWORD={{with secret "kv/services/api/redis/${ENVIRONMENT}/REDIS_PASSWORD"}}{{.Data.value}}{{end}}
MAIL_DRIVER={{ key "services/api/mail/${ENVIRONMENT}/MAIL_DRIVER" }}
MAIL_HOST={{ key "services/api/mail/${ENVIRONMENT}/MAIL_HOST" }}
MAIL_PORT={{ key "services/api/mail/${ENVIRONMENT}/MAIL_PORT" }}
MAIL_USERNAME={{ key "services/api/mail/${ENVIRONMENT}/MAIL_USERNAME" }}
MAIL_PASSWORD={{ key "services/api/mail/${ENVIRONMENT}/MAIL_PASSWORD" }}
MAIL_ENCRYPTION={{ key "services/api/mail/${ENVIRONMENT}/MAIL_ENCRYPTION" }}
MAIL_FROM_ADDRESS={{ key "services/api/mail/${ENVIRONMENT}/MAIL_FROM_ADDRESS" }}
MAIL_FROM_NAME={{ key "services/api/mail/${ENVIRONMENT}/MAIL_FROM_NAME" }}
MAILGUN_DOMAIN={{ key "services/api/mailgun/${ENVIRONMENT}/MAILGUN_DOMAIN" }}
MAILGUN_SECRET={{with secret "kv/services/api/mailgun/${ENVIRONMENT}/MAILGUN_SECRET"}}{{.Data.value}}{{end}}
DO_SPACES_KEY={{with secret "kv/services/api/spaces/${ENVIRONMENT}/DO_SPACES_KEY"}}{{.Data.value}}{{end}}
DO_SPACES_SECRET={{with secret "kv/services/api/spaces/${ENVIRONMENT}/DO_SPACES_SECRET"}}{{.Data.value}}{{end}}
DO_SPACES_ENDPOINT={{ key "services/api/spaces/${ENVIRONMENT}/DO_SPACES_ENDPOINT" }}
DO_SPACES_REGION={{ key "services/api/spaces/${ENVIRONMENT}/DO_SPACES_REGION" }}
DO_SPACES_BUCKET={{ key "services/api/spaces/${ENVIRONMENT}/DO_SPACES_BUCKET" }}
JWT_SECRET={{with secret "kv/services/api/jwt/${ENVIRONMENT}"}}{{.Data.JWT_SECRET}}{{end}}
EOT
destination = "custom/.env"
// change_mode = "signal"
// change_signal = "SIGINT"
env = true
}
service {
name = "api"
tags = [
"urlprefix-${ENVIRONMENT_URL}/"
]
port = "nginx"
check {
type = "tcp"
port = "nginx"
interval = "10s"
timeout = "2s"
}
}
resources {
cpu = 500
memory = 256
network {
mbits = 100
port "nginx" {}
}
}
}
}
}
Nomad logs
In Nomad logs, I can check that it correctly gets a token from Vault:
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.333Z [INFO ] client.fingerprint_mgr.vault: Vault is available
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.334Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.354Z [DEBUG] nomad.vault: starting renewal loop: creation_ttl=72h0m0s
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.375Z [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs="[arch cgroup consul cpu host network nomad signal storage vault]"
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.380Z [DEBUG] nomad.vault: successfully renewed server token
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.380Z [INFO ] nomad.vault: successfully renewed token: next_renewal=35h59m59.999975432s
I'm stuck here; if anyone can provide any insight, I would really appreciate it!
[EDIT]
I'm using Vault v1.2.0-rc1.
Sometimes this error means that Vault is still sealed, or has become sealed after a restart of Vault.
This can be tested by doing a dig on Vault's DNS address, for example vault.service.consul.
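A hedged way to check both, assuming the Vault CLI and Consul DNS are reachable from the Nomad node:
# Reports whether this Vault instance is sealed
vault status
# Consul DNS only returns Vault instances whose health check is passing (i.e. unsealed)
dig vault.service.consul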
According to the docs: "Note: Vault integration requires Vault version 0.6.2 or higher."
Your error message backs that up -- Constraint ${attr.vault.version} version >= 0.6.1 filtered 1 node. Normally, constraints are things you specify in the Nomad job spec, but in this case it looks like it's coming from Nomad itself.
I think you'll need to upgrade Vault to at least 0.6.2.
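One hedged way to confirm what Nomad actually fingerprinted on a client (the node ID below is a placeholder):
# Look for the vault.version entry in the Attributes section of the output
nomad node status -verbose <node-id>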
I've upgraded Vault to version v1.2.3 and it started working.

Freeradius - No authenticate method found

When doing authorization via smbpasswd, the authentication fails with:
ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
The command I am using is:
radtest testusr test 127.0.0.1:18120 0 testing123
Does anybody know why? As far as I can tell, pap should be able to handle this. See the freeradius -X output below.
root@hinserv:/etc/freeradius/certs# freeradius -X
freeradius: FreeRADIUS Version 2.2.8, for host x86_64-pc-linux-gnu, built on Jul 26 2017 at 15:27:21
Copyright (C) 1999-2015 The FreeRADIUS server project and contributors.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
You may redistribute copies of FreeRADIUS under the terms of the
GNU General Public License.
For more information about these matters, see the file named COPYRIGHT.
Starting - reading configuration files ...
including configuration file /etc/freeradius/radiusd.conf
including configuration file /etc/freeradius/proxy.conf
including configuration file /etc/freeradius/clients.conf
including files in directory /etc/freeradius/modules/
including configuration file /etc/freeradius/modules/redis
including configuration file /etc/freeradius/modules/ldap
including configuration file /etc/freeradius/modules/detail.example.com
including configuration file /etc/freeradius/modules/counter
including configuration file /etc/freeradius/modules/rediswho
including configuration file /etc/freeradius/modules/checkval
including configuration file /etc/freeradius/modules/acct_unique
including configuration file /etc/freeradius/modules/otp
including configuration file /etc/freeradius/modules/cui
including configuration file /etc/freeradius/modules/inner-eap
including configuration file /etc/freeradius/modules/detail.log
including configuration file /etc/freeradius/modules/opendirectory
including configuration file /etc/freeradius/modules/preprocess
including configuration file /etc/freeradius/modules/realm
including configuration file /etc/freeradius/modules/policy
including configuration file /etc/freeradius/modules/mac2ip
including configuration file /etc/freeradius/modules/etc_group
including configuration file /etc/freeradius/modules/krb5
including configuration file /etc/freeradius/modules/dynamic_clients
including configuration file /etc/freeradius/modules/ntlm_auth
including configuration file /etc/freeradius/modules/attr_rewrite
including configuration file /etc/freeradius/modules/radrelay
including configuration file /etc/freeradius/modules/passwd
including configuration file /etc/freeradius/modules/perl
including configuration file /etc/freeradius/modules/replicate
including configuration file /etc/freeradius/modules/smbpasswd
including configuration file /etc/freeradius/modules/dhcp_sqlippool
including configuration file /etc/freeradius/modules/files
including configuration file /etc/freeradius/modules/echo
including configuration file /etc/freeradius/modules/exec
including configuration file /etc/freeradius/modules/unix
including configuration file /etc/freeradius/modules/pam
including configuration file /etc/freeradius/modules/chap
including configuration file /etc/freeradius/modules/ippool
including configuration file /etc/freeradius/modules/radutmp
including configuration file /etc/freeradius/modules/smsotp
including configuration file /etc/freeradius/modules/expr
including configuration file /etc/freeradius/modules/detail
including configuration file /etc/freeradius/modules/wimax
including configuration file /etc/freeradius/modules/soh
including configuration file /etc/freeradius/modules/sqlcounter_expire_on_login
including configuration file /etc/freeradius/modules/mac2vlan
including configuration file /etc/freeradius/modules/logintime
including configuration file /etc/freeradius/modules/attr_filter
including configuration file /etc/freeradius/modules/sradutmp
including configuration file /etc/freeradius/modules/pap
including configuration file /etc/freeradius/modules/sql_log
including configuration file /etc/freeradius/modules/expiration
including configuration file /etc/freeradius/modules/mschap
including configuration file /etc/freeradius/modules/linelog
including configuration file /etc/freeradius/modules/digest
including configuration file /etc/freeradius/modules/always
including configuration file /etc/freeradius/modules/cache
including configuration file /etc/freeradius/eap.conf
including configuration file /etc/freeradius/policy.conf
including files in directory /etc/freeradius/sites-enabled/
including configuration file /etc/freeradius/sites-enabled/inner-tunnel
including configuration file /etc/freeradius/sites-enabled/default
main {
user = "freerad"
group = "freerad"
allow_core_dumps = no
}
including dictionary file /etc/freeradius/dictionary
main {
name = "freeradius"
prefix = "/usr"
localstatedir = "/var"
sbindir = "/usr/sbin"
logdir = "/var/log/freeradius"
run_dir = "/var/run/freeradius"
libdir = "/usr/lib/freeradius"
radacctdir = "/var/log/freeradius/radacct"
hostname_lookups = no
max_request_time = 30
cleanup_delay = 5
max_requests = 1024
pidfile = "/var/run/freeradius/freeradius.pid"
checkrad = "/usr/sbin/checkrad"
debug_level = 0
proxy_requests = yes
log {
stripped_names = no
auth = no
auth_badpass = no
auth_goodpass = no
}
security {
max_attributes = 200
reject_delay = 1
status_server = yes
allow_vulnerable_openssl = no
}
}
radiusd: #### Loading Realms and Home Servers ####
proxy server {
retry_delay = 5
retry_count = 3
default_fallback = no
dead_time = 120
wake_all_if_all_dead = no
}
home_server localhost {
ipaddr = 127.0.0.1
port = 1812
type = "auth"
secret = "testing123"
response_window = 20
max_outstanding = 65536
require_message_authenticator = yes
zombie_period = 40
status_check = "status-server"
ping_interval = 30
check_interval = 30
num_answers_to_alive = 3
num_pings_to_alive = 3
revive_interval = 120
status_check_timeout = 4
coa {
irt = 2
mrt = 16
mrc = 5
mrd = 30
}
}
home_server_pool my_auth_failover {
type = fail-over
home_server = localhost
}
realm example.com {
auth_pool = my_auth_failover
}
realm LOCAL {
}
radiusd: #### Loading Clients ####
client localhost {
ipaddr = 127.0.0.1
require_message_authenticator = no
secret = "testing123"
nastype = "other"
}
radiusd: #### Instantiating modules ####
instantiate {
Module: Linked to module rlm_exec
Module: Instantiating module "exec" from file /etc/freeradius/modules/exec
exec {
wait = no
input_pairs = "request"
shell_escape = yes
timeout = 10
}
Module: Linked to module rlm_expr
Module: Instantiating module "expr" from file /etc/freeradius/modules/expr
Module: Linked to module rlm_expiration
Module: Instantiating module "expiration" from file /etc/freeradius/modules/expiration
expiration {
reply-message = "Password Has Expired "
}
Module: Linked to module rlm_logintime
Module: Instantiating module "logintime" from file /etc/freeradius/modules/logintime
logintime {
reply-message = "You are calling outside your allowed timespan "
minimum-timeout = 60
}
}
radiusd: #### Loading Virtual Servers ####
server { # from file /etc/freeradius/radiusd.conf
modules {
Module: Creating Auth-Type = digest
Module: Checking authenticate {...} for more modules to load
Module: Linked to module rlm_pap
Module: Instantiating module "pap" from file /etc/freeradius/modules/pap
pap {
encryption_scheme = "auto"
auto_header = yes
}
Module: Linked to module rlm_chap
Module: Instantiating module "chap" from file /etc/freeradius/modules/chap
Module: Linked to module rlm_mschap
Module: Instantiating module "mschap" from file /etc/freeradius/modules/mschap
mschap {
use_mppe = yes
require_encryption = no
require_strong = no
with_ntdomain_hack = no
allow_retry = yes
}
Module: Linked to module rlm_digest
Module: Instantiating module "digest" from file /etc/freeradius/modules/digest
Module: Linked to module rlm_pam
Module: Instantiating module "pam" from file /etc/freeradius/modules/pam
pam {
pam_auth = "radiusd"
}
Module: Linked to module rlm_unix
Module: Instantiating module "unix" from file /etc/freeradius/modules/unix
unix {
radwtmp = "/var/log/freeradius/radwtmp"
}
Module: Linked to module rlm_eap
Module: Instantiating module "eap" from file /etc/freeradius/eap.conf
eap {
default_eap_type = "md5"
timer_expire = 60
ignore_unknown_eap_types = no
cisco_accounting_username_bug = no
max_sessions = 1024
}
Module: Linked to sub-module rlm_eap_md5
Module: Instantiating eap-md5
Module: Linked to sub-module rlm_eap_leap
Module: Instantiating eap-leap
Module: Linked to sub-module rlm_eap_gtc
Module: Instantiating eap-gtc
gtc {
challenge = "Password: "
auth_type = "PAP"
}
Module: Linked to sub-module rlm_eap_tls
Module: Instantiating eap-tls
tls {
rsa_key_exchange = no
dh_key_exchange = yes
rsa_key_length = 512
dh_key_length = 512
verify_depth = 0
CA_path = "/etc/freeradius/certs"
pem_file_type = yes
private_key_file = "/etc/freeradius/certs/server.key"
certificate_file = "/etc/freeradius/certs/server.pem"
CA_file = "/etc/freeradius/certs/ca.pem"
private_key_password = "whatever"
dh_file = "/etc/freeradius/certs/dh"
random_file = "/dev/urandom"
fragment_size = 1024
include_length = yes
check_crl = no
check_all_crl = no
cipher_list = "DEFAULT"
make_cert_command = "/etc/freeradius/certs/bootstrap"
ecdh_curve = "prime256v1"
cache {
enable = no
lifetime = 24
max_entries = 255
}
verify {
}
ocsp {
enable = no
override_cert_url = yes
url = "http://127.0.0.1/ocsp/"
use_nonce = yes
timeout = 0
softfail = no
}
}
Module: Linked to sub-module rlm_eap_ttls
Module: Instantiating eap-ttls
ttls {
default_eap_type = "md5"
copy_request_to_tunnel = no
use_tunneled_reply = no
virtual_server = "inner-tunnel"
include_length = yes
}
Module: Linked to sub-module rlm_eap_peap
Module: Instantiating eap-peap
peap {
default_eap_type = "mschapv2"
copy_request_to_tunnel = no
use_tunneled_reply = no
proxy_tunneled_request_as_eap = yes
virtual_server = "inner-tunnel"
soh = no
}
Module: Linked to sub-module rlm_eap_mschapv2
Module: Instantiating eap-mschapv2
mschapv2 {
with_ntdomain_hack = no
send_error = no
}
Module: Checking authorize {...} for more modules to load
Module: Linked to module rlm_preprocess
Module: Instantiating module "preprocess" from file /etc/freeradius/modules/preprocess
preprocess {
huntgroups = "/etc/freeradius/huntgroups"
hints = "/etc/freeradius/hints"
with_ascend_hack = no
ascend_channels_per_line = 23
with_ntdomain_hack = no
with_specialix_jetstream_hack = no
with_cisco_vsa_hack = no
with_alvarion_vsa_hack = no
}
reading pairlist file /etc/freeradius/huntgroups
reading pairlist file /etc/freeradius/hints
Module: Linked to module rlm_realm
Module: Instantiating module "suffix" from file /etc/freeradius/modules/realm
realm suffix {
format = "suffix"
delimiter = "#"
ignore_default = no
ignore_null = no
}
Module: Linked to module rlm_files
Module: Instantiating module "files" from file /etc/freeradius/modules/files
files {
usersfile = "/etc/freeradius/users"
acctusersfile = "/etc/freeradius/acct_users"
preproxy_usersfile = "/etc/freeradius/preproxy_users"
compat = "no"
}
reading pairlist file /etc/freeradius/users
reading pairlist file /etc/freeradius/acct_users
reading pairlist file /etc/freeradius/preproxy_users
Module: Checking preacct {...} for more modules to load
Module: Linked to module rlm_acct_unique
Module: Instantiating module "acct_unique" from file /etc/freeradius/modules/acct_unique
acct_unique {
key = "User-Name, Acct-Session-Id, NAS-IP-Address, NAS-Identifier, NAS-Port"
}
Module: Checking accounting {...} for more modules to load
Module: Linked to module rlm_detail
Module: Instantiating module "detail" from file /etc/freeradius/modules/detail
detail {
detailfile = "/var/log/freeradius/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/detail-%Y%m%d"
header = "%t"
detailperm = 384
dirperm = 493
locking = no
log_packet_header = no
escape_filenames = no
}
Module: Linked to module rlm_attr_filter
Module: Instantiating module "attr_filter.accounting_response" from file /etc/freeradius/modules/attr_filter
attr_filter attr_filter.accounting_response {
attrsfile = "/etc/freeradius/attrs.accounting_response"
key = "%{User-Name}"
relaxed = no
}
reading pairlist file /etc/freeradius/attrs.accounting_response
Module: Checking session {...} for more modules to load
Module: Linked to module rlm_radutmp
Module: Instantiating module "radutmp" from file /etc/freeradius/modules/radutmp
radutmp {
filename = "/var/log/freeradius/radutmp"
username = "%{User-Name}"
case_sensitive = yes
check_with_nas = yes
perm = 384
callerid = yes
}
Module: Checking post-proxy {...} for more modules to load
Module: Checking post-auth {...} for more modules to load
Module: Instantiating module "attr_filter.access_reject" from file /etc/freeradius/modules/attr_filter
attr_filter attr_filter.access_reject {
attrsfile = "/etc/freeradius/attrs.access_reject"
key = "%{User-Name}"
relaxed = no
}
reading pairlist file /etc/freeradius/attrs.access_reject
} # modules
} # server
server inner-tunnel { # from file /etc/freeradius/sites-enabled/inner-tunnel
modules {
Module: Checking authenticate {...} for more modules to load
Module: Checking authorize {...} for more modules to load
Module: Linked to module rlm_passwd
Module: Instantiating module "smbpasswd" from file /etc/freeradius/modules/smbpasswd
passwd smbpasswd {
filename = "/etc/samba/smbpasswd"
format = "*User-Name:uuid:LM-Password:NT-Password:SMB-Account-CTRL-TEXT::"
delimiter = ":"
ignorenislike = no
ignoreempty = yes
allowmultiplekeys = no
hashsize = 100
}
rlm_passwd: nfields: 7 keyfield 0(User-Name) listable: no
Module: Checking session {...} for more modules to load
Module: Checking post-proxy {...} for more modules to load
Module: Checking post-auth {...} for more modules to load
} # modules
} # server
radiusd: #### Opening IP addresses and Ports ####
listen {
type = "auth"
ipaddr = *
port = 0
}
listen {
type = "acct"
ipaddr = *
port = 0
}
listen {
type = "auth"
ipaddr = 127.0.0.1
port = 18120
}
... adding new socket proxy address * port 49016
Listening on authentication address * port 1812
Listening on accounting address * port 1813
Listening on authentication address 127.0.0.1 port 18120 as server inner-tunnel
Listening on proxy address * port 1814
Ready to process requests.
rad_recv: Access-Request packet from host 127.0.0.1 port 44096, id=30, length=77
User-Name = "testusr"
User-Password = "test"
NAS-IP-Address = 192.168.200.36
NAS-Port = 0
Message-Authenticator = 0xfb35b4ee44829fd799ffe2ace59661d7
server inner-tunnel {
# Executing section authorize from file /etc/freeradius/sites-enabled/inner-tunnel
+group authorize {
++[chap] = noop
++[mschap] = noop
rlm_passwd: Unable to create uuid: 1003
++[smbpasswd] = ok
[suffix] No '#' in User-Name = "testusr", looking up realm NULL
[suffix] No such realm "NULL"
++[suffix] = noop
++update control {
++} # update control = noop
[eap] No EAP-Message, not doing EAP
++[eap] = noop
++[files] = noop
++[expiration] = noop
++[logintime] = noop
++[pap] = noop
+} # group authorize = ok
ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
Failed to authenticate the user.
} # server inner-tunnel
Using Post-Auth-Type Reject
# Executing group from file /etc/freeradius/sites-enabled/inner-tunnel
+group REJECT {
[attr_filter.access_reject] expand: %{User-Name} -> testusr
attr_filter: Matched entry DEFAULT at line 11
++[attr_filter.access_reject] = updated
+} # group REJECT = updated
Delaying reject of request 0 for 1 seconds
Going to the next request
Waking up in 0.9 seconds.
Sending delayed reject for request 0
Sending Access-Reject of id 30 to 127.0.0.1 port 44096
Waking up in 4.9 seconds.
It's because you've not provided a reference password to check the one in the request against.
To get this working you could add the following to the authorize section:
if (User-Name == 'testusr') {
update control {
Cleartext-Password := 'test'
}
}
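With that control entry in place, re-running the original test should now return an Access-Accept:
radtest testusr test 127.0.0.1:18120 0 testing123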
