Nomad constraint "${attr.vault.version} version >= 0.6.1" to access vault - token

I'm trying to deploy a Nomad job with a template that fetches some secrets from Vault.
My problem is that placement keeps failing because of a constraint I can't explain:
Constraint ${attr.vault.version} version >= 0.6.1 filtered 1 node
Nomad config
datacenter = "dc1"
data_dir = "/var/lib/nomad"
advertise {
# Defaults to the first private IP address.
http = "10.134.43.195"
rpc = "10.134.43.195"
serf = "10.134.43.195"
}
server {
enabled = true
bootstrap_expect = 1
server_join {
retry_join = ["provider=digitalocean api_token=[SECRET] tag_name=nomad_auto_join"],
}
}
client {
enabled = true
}
# Consul is installed locally and clustered
consul {
address = "http://127.0.0.1:8500"
server_auto_join = true
client_auto_join = true
auto_advertise = true
}
vault {
enabled = true
address = "http://vault.service.consul:8200"
token = "[NOMAD_VAULT_TOKEN]"
create_from_role = "nomad-cluster"
task_token_ttl = "1h"
}
autopilot {
cleanup_dead_servers = true
last_contact_threshold = "200ms"
max_trailing_logs = 250
server_stabilization_time = "10s"
enable_redundancy_zones = false
disable_upgrade_migration = false
enable_custom_upgrades = false
}
telemetry {
publish_allocation_metrics = true
publish_node_metrics = true
prometheus_metrics = true
}
Nomad token to access Vault
NOMAD_VAULT_TOKEN is generated with this command:
vault token create -policy nomad-server -period 72h -orphan
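As a sanity check, the token's policies and period can be confirmed before placing it in the Nomad server config:

# Optional: inspect the generated token (placeholder value as above)
vault token lookup [NOMAD_VAULT_TOKEN]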
Vault policy nomad-server
The nomad-server Vault policy is as follows:
# Allow creating tokens under the "nomad-cluster" token role.
path "auth/token/create/nomad-cluster" {
  capabilities = ["update"]
}

# Allow looking up the "nomad-cluster" token role.
path "auth/token/roles/nomad-cluster" {
  capabilities = ["read"]
}

# Allow looking up the token passed to Nomad to validate that the token has the
# proper capabilities. This is provided by the "default" policy.
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

# Allow looking up incoming tokens to validate they have permissions to access
# the tokens they are requesting. This is only required if
# `allow_unauthenticated` is set to false.
path "auth/token/lookup" {
  capabilities = ["update"]
}

# Allow revoking tokens that should no longer exist. This allows revoking
# tokens for dead tasks.
path "auth/token/revoke-accessor" {
  capabilities = ["update"]
}

# Allow checking the capabilities of our own token. This is used to validate the
# token upon startup.
path "sys/capabilities-self" {
  capabilities = ["update"]
}

# Allow our own token to be renewed.
path "auth/token/renew-self" {
  capabilities = ["update"]
}

# This is where the needed secrets are fetched from.
path "kv/*" {
  capabilities = ["update", "read", "create"]
}
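For completeness, the "nomad-cluster" token role referenced by create_from_role has to exist as well. A minimal sketch of creating it (the parameter values here are assumptions; adjust them to the policies your jobs actually request):

# Hypothetical token role backing create_from_role = "nomad-cluster"
vault write auth/token/roles/nomad-cluster \
    allowed_policies="kv" \
    orphan=true \
    renewable=true \
    explicit_max_ttl=0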
Nomad job definition
My Nomad job definition is:
job "api" {
datacenters = ["dc1"]
type = "service"
group "api" {
count = 1
update {
max_parallel = 1
min_healthy_time = "30s"
healthy_deadline = "10m"
progress_deadline = "11m"
auto_revert = true
}
task "api" {
driver = "docker"
config {
image = "registry.gitlab.com/[GROUP]/[PROJECT]/${ENVIRONMENT}:${BUILD_NUMBER}"
port_map {
nginx = 80
}
auth {
username = "${REGISTRY_USER}"
password = "${REGISTRY_PASS}"
}
force_pull = true
hostname = "api"
}
vault {
policies = ["kv"]
change_mode = "signal"
change_signal = "SIGINT"
// env = "false"
}
template {
data = <<EOT
APP_NAME={{ key "services/api/app/${ENVIRONMENT}/APP_NAME" }}
APP_ENV={{ key "services/api/app/${ENVIRONMENT}/APP_ENV" }}
APP_KEY={{with secret "kv/services/api/app/${ENVIRONMENT}"}}{{.Data.APP_KEY.value}}{{end}}
APP_DEBUG={{ key "services/api/app/${ENVIRONMENT}/APP_DEBUG" }}
APP_URL={{ key "services/api/app/${ENVIRONMENT}/APP_URL" }}
LOG_CHANNEL={{ key "services/api/log/${ENVIRONMENT}/LOG_CHANNEL" }}
DB_CONNECTION={{ key "services/api/db/${ENVIRONMENT}/DB_CONNECTION" }}
DB_HOST={{ key "services/api/db/${ENVIRONMENT}/DB_HOST" }}
DB_PORT={{ key "services/api/db/${ENVIRONMENT}/DB_PORT" }}
DB_DATABASE={{with secret "kv/services/api/db/${ENVIRONMENT}/DB_DATABASE"}}{{.Data.value}}{{end}}
DB_USERNAME={{with secret "kv/services/api/db/${ENVIRONMENT}/DB_USERNAME"}}{{.Data.value}}{{end}}
DB_PASSWORD={{with secret "kv/services/api/db/${ENVIRONMENT}/DB_PASSWORD"}}{{.Data.value}}{{end}}
BROADCAST_DRIVER={{ key "services/api/broadcast/${ENVIRONMENT}/BROADCAST_DRIVER" }}
CACHE_DRIVER={{ key "services/api/cache/${ENVIRONMENT}/CACHE_DRIVER" }}
SESSION_DRIVER={{ key "services/api/session/${ENVIRONMENT}/SESSION_DRIVER" }}
SESSION_LIFETIME={{ key "services/api/session/${ENVIRONMENT}/SESSION_LIFETIME" }}
QUEUE_DRIVER={{ key "services/api/queue/${ENVIRONMENT}/QUEUE_DRIVER" }}
REDIS_HOST={{ key "services/api/redis/${ENVIRONMENT}/REDIS_HOST" }}
REDIS_PORT={{ key "services/api/redis/${ENVIRONMENT}/REDIS_PORT" }}
REDIS_PASSWORD={{with secret "kv/services/api/redis/${ENVIRONMENT}/REDIS_PASSWORD"}}{{.Data.value}}{{end}}
MAIL_DRIVER={{ key "services/api/mail/${ENVIRONMENT}/MAIL_DRIVER" }}
MAIL_HOST={{ key "services/api/mail/${ENVIRONMENT}/MAIL_HOST" }}
MAIL_PORT={{ key "services/api/mail/${ENVIRONMENT}/MAIL_PORT" }}
MAIL_USERNAME={{ key "services/api/mail/${ENVIRONMENT}/MAIL_USERNAME" }}
MAIL_PASSWORD={{ key "services/api/mail/${ENVIRONMENT}/MAIL_PASSWORD" }}
MAIL_ENCRYPTION={{ key "services/api/mail/${ENVIRONMENT}/MAIL_ENCRYPTION" }}
MAIL_FROM_ADDRESS={{ key "services/api/mail/${ENVIRONMENT}/MAIL_FROM_ADDRESS" }}
MAIL_FROM_NAME={{ key "services/api/mail/${ENVIRONMENT}/MAIL_FROM_NAME" }}
MAILGUN_DOMAIN={{ key "services/api/mailgun/${ENVIRONMENT}/MAILGUN_DOMAIN" }}
MAILGUN_SECRET={{with secret "kv/services/api/mailgun/${ENVIRONMENT}/MAILGUN_SECRET"}}{{.Data.value}}{{end}}
DO_SPACES_KEY={{with secret "kv/services/api/spaces/${ENVIRONMENT}/DO_SPACES_KEY"}}{{.Data.value}}{{end}}
DO_SPACES_SECRET={{with secret "kv/services/api/spaces/${ENVIRONMENT}/DO_SPACES_SECRET"}}{{.Data.value}}{{end}}
DO_SPACES_ENDPOINT={{ key "services/api/spaces/${ENVIRONMENT}/DO_SPACES_ENDPOINT" }}
DO_SPACES_REGION={{ key "services/api/spaces/${ENVIRONMENT}/DO_SPACES_REGION" }}
DO_SPACES_BUCKET={{ key "services/api/spaces/${ENVIRONMENT}/DO_SPACES_BUCKET" }}
JWT_SECRET={{with secret "kv/services/api/jwt/${ENVIRONMENT}"}}{{.Data.JWT_SECRET}}{{end}}
EOT
destination = "custom/.env"
// change_mode = "signal"
// change_signal = "SIGINT"
env = true
}
service {
name = "api"
tags = [
"urlprefix-${ENVIRONMENT_URL}/"
]
port = "nginx"
check {
type = "tcp"
port = "nginx"
interval = "10s"
timeout = "2s"
}
}
resources {
cpu = 500
memory = 256
network {
mbits = 100
port "nginx" {}
}
}
}
}
}
Nomad logs
In Nomad logs, I can check that it correctly gets a token from Vault:
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.333Z [INFO ] client.fingerprint_mgr.vault: Vault is available
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.334Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.354Z [DEBUG] nomad.vault: starting renewal loop: creation_ttl=72h0m0s
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.375Z [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs="[arch cgroup consul cpu host network nomad signal storage vault]"
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.380Z [DEBUG] nomad.vault: successfully renewed server token
Oct 16 13:29:19 nomad-server-0 nomad: 2019-10-16T13:29:19.380Z [INFO ] nomad.vault: successfully renewed token: next_renewal=35h59m59.999975432s
I'm stuck here; if anyone can provide any insight, I would really appreciate it!
[EDIT]
I'm using Vault v1.2.0-rc1.

Sometimes this error means that Vault is still sealed, or has become sealed after a restart of Vault.
This can be tested by doing a dig on Vault's DNS address, for example vault.service.consul.
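A quick check might look like this (addresses taken from the question; adjust to your environment):

# A sealed or unhealthy Vault instance fails its Consul health check,
# so it should drop out of the DNS answer section
dig vault.service.consul

# Or ask Vault for its seal status directly
vault status -address=http://vault.service.consul:8200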

According to the docs: "Note: Vault integration requires Vault version 0.6.2 or higher."
Your error message backs that up -- Constraint ${attr.vault.version} version >= 0.6.1 filtered 1 node. Normally, constraints are things you specify in the job spec, but in this case it looks like it's coming from Nomad itself.
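For reference, the implicit constraint Nomad attaches to any job whose tasks use a vault stanza is equivalent to this job-spec snippet (shown for illustration only; you don't add it yourself, and the minimum version is taken straight from the error message above):

constraint {
  attribute = "${attr.vault.version}"
  operator  = "version"
  value     = ">= 0.6.1"
}

You can see which Vault version a client actually fingerprinted with nomad node status -verbose <node-id>, under the vault.version attribute.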
I think you'll need to upgrade Vault to at least 0.6.2.

I've upgraded Vault to version v1.2.3 and it started working -- most likely because v1.2.0-rc1 is a prerelease build, and prerelease versions generally don't satisfy a ">=" version constraint, so the node was filtered despite running a much newer Vault.

Related

Azure Event Hub using Terraform

I have a question about Terraform code for Azure Event Hub.
What security principles and policies do we need to take care of when deploying Azure Event Hub securely through Terraform? If possible, please share the Terraform code as well.
Thanks
I have checked a few docs but was unable to understand them.
I tried to reproduce this in my environment and created an Azure Event Hub using Terraform:
Terraform Code:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "venkyrg" {
name = "venkyrg1"
location = "West Europe"
}
resource "azurerm_eventhub_namespace" "example" {
name = "venkatnamespace"
location = azurerm_resource_group.venkyrg.location
resource_group_name = azurerm_resource_group.venkyrg.name
sku = "Standard"
capacity = 1
tags = {
environment = "Production"
}
}
resource "azurerm_eventhub" "example" {
name = "venkateventhub"
namespace_name = azurerm_eventhub_namespace.example.name
resource_group_name = azurerm_resource_group.venkyrg.name
partition_count = 2
message_retention = 1
}
#Event hub Policy creation
resource "azurerm_eventhub_authorization_rule" "example" {
name = "navi"
namespace_name = azurerm_eventhub_namespace.example.name
eventhub_name = azurerm_eventhub.example.name
resource_group_name = azurerm_resource_group.venkyrg.name
listen = true
send = false
manage = false
}
# Service Prinicipal Assignment
resource "azurerm_role_assignment" "pod-identity-assignment" {
scope = azurerm_resource_group.resourceGroup.id
role_definition_name = "Azure Event Hubs Data Owner"
principal_id = "74cca40a-1d7e-4352-a66c-217eab00cf33"
}
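If other code needs to consume this policy, the authorization rule exposes its connection strings as attributes; a minimal sketch:

output "eventhub_listen_connection_string" {
  value     = azurerm_eventhub_authorization_rule.example.primary_connection_string
  sensitive = true
}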
Terraform Apply:
Once the code has run, the resources are created successfully in Azure, along with the Event Hub policies.
Policy Status:
Azure Built-in roles for Azure Event Hubs
Reference: Azurerm-eventhub with Terraform

Running a Nomad job for a Docker container that Traefik can find

I'm currently running a docker container with Traefik as the load balancer using the following docker-compose file:
services:
  loris:
    image: bdlss/loris-grok-docker
    labels:
      - traefik.http.routers.loris.rule=Host(`loris.my_domain`)
      - traefik.http.routers.loris.tls=true
      - traefik.http.routers.loris.tls.certresolver=lets-encrypt
      - traefik.port=80
    networks:
      - web
It is working fairly well. As part of one of my first attempts at using Nomad, I simply want to be able to start this container using a Nomad job loris.nomad instead of the docker-compose file.
The Docker container labels and the network identification are quite important for Traefik to do the dynamic routing.
My question is: where can I put this label information and network information in the loris.nomad file so that it starts the container in the same way the docker-compose file currently does?
I've tried putting this information in the task's config stanza, but this doesn't work and I'm having trouble following the documentation. I've seen examples where an additional service stanza has been added, but I'm still not sure.
Here are the basics of the Nomad file I want to modify:
# loris.nomad
job "loris" {
  datacenters = ["dc1"]

  group "loris" {
    network {
      port "http" {
        to = 5004
      }
    }

    task "loris" {
      driver = "docker"

      config {
        image = "bdlss/loris-openjpeg-docker"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 512
      }
    }
  }
}
Any advice is much appreciated.
Well, the most appropriate option for running Traefik in Nomad and load-balancing between containers is using the Consul catalog (required for service discovery).
For this to run, you have to configure the Consul connection when you start Nomad. If you'd like to test things out locally, you can do this by simply running sudo nomad agent -dev-connect. Consul can be started with consul agent -dev -client="0.0.0.0".
Now you can simply provide your Traefik configuration using tags, as in the sketch below.
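For the loris container from the question, that would look roughly like this inside the task (router names and domain carried over from the question; adjust as needed):

service {
  name = "loris"
  port = "http"
  tags = [
    "traefik.enable=true",
    "traefik.http.routers.loris.rule=Host(`loris.my_domain`)",
    "traefik.http.routers.loris.tls=true",
    "traefik.http.routers.loris.tls.certresolver=lets-encrypt",
  ]
}

With the consulCatalog provider, Traefik reads these tags from the registered Consul service instead of from Docker labels.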
If you really need to run Traefik in Nomad with the Docker provider (which will cause issues in a clustered setup for sure), you can do the following:
First, you need to enable host path mounting in the Docker driver plugin (see the Nomad Docker driver documentation). You can place this configuration in an extra file such as extra.hcl, which looks like this:
plugin "docker" {
config {
volumes {
enabled = true
}
}
}
Now you can start Nomad with this extra setting: sudo nomad agent -dev-connect -config=extra.hcl. Then you can provide your Traefik settings in the config/labels block, like this (full job):
job "traefik" {
region = "global"
datacenters = ["dc1"]
type = "service"
group "traefik" {
count = 1
task "traefik" {
driver = "docker"
config {
image = "traefik:v2.3"
//network_mode = "host"
volumes = [
"local/traefik.yaml:/etc/traefik/traefik.yaml",
"/var/run/docker.sock:/var/run/docker.sock"
]
labels {
traefik.enable = true
traefik.http.routers.from-docker.rule = "Host(`docker.loris.mydomain`)"
traefik.http.routers.from-docker.entrypoints = "web"
traefik.http.routers.from-docker.service = "api#internal"
}
}
template {
data = <<EOF
log:
level: DEBUG
entryPoints:
traefik:
address: ":8080"
web:
address: ":80"
api:
dashboard: true
insecure: true
accessLog: {}
providers:
docker:
exposedByDefault: false
consulCatalog:
prefix: "traefik"
exposedByDefault: false
endpoint:
address: "10.0.0.20:8500"
scheme: "http"
datacenter: "dc1"
EOF
destination = "local/traefik.yaml"
}
resources {
cpu = 100
memory = 128
network {
mbits = 10
port "http" {
static = 80
}
port "traefik" {
static = 8080
}
}
}
service {
name = "traefik"
tags = [
"traefik.enable=true",
"traefik.http.routers.from-consul.rule=Host(`consul.loris.mydomain`)",
"traefik.http.routers.from-consul.entrypoints=web",
"traefik.http.routers.from-consul.service=api#internal"
]
check {
name = "alive"
type = "tcp"
port = "http"
interval = "10s"
timeout = "2s"
}
}
}
}
}
(There might be a setting to bind to 0.0.0.0; I defined those domains in my /etc/hosts to point to my main interface IP.)
You can test it with this modified webapp spec. (I didn't figure out how to map ports correctly, like container:80 -> host:<random>, but I think it is enough to show how complicated it gets :))
job "demo-webapp" {
datacenters = ["dc1"]
group "demo" {
count = 3
task "server" {
env {
// "${NOMAD_PORT_http}"
PORT = "80"
NODE_IP = "${NOMAD_IP_http}"
}
driver = "docker"
config {
image = "hashicorp/demo-webapp-lb-guide"
labels {
traefik.enable = true
traefik.http.routers.webapp-docker.rule = "Host(`docker.loris.mydomain`) && Path(`/myapp`)"
traefik.http.services.webapp-docker.loadbalancer.server.port = 80
}
}
resources {
network {
// Used for docker provider
mode ="bridge"
mbits = 10
port "http"{
// Used for docker provider
to = 80
}
}
}
service {
name = "demo-webapp"
port = "http"
tags = [
"traefik.enable=true",
"traefik.http.routers.webapp-consul.rule=Host(`consul.loris.mydomain`) && Path(`/myapp`)",
]
check {
type = "http"
path = "/"
interval = "2s"
timeout = "2s"
}
}
}
}
}
I hope this somehow answers your question.

FreeRADIUS + Active Directory + Google Authenticator

I've been trying to make VPN users authenticate with 2FA (Google Authenticator). At the moment I have Cisco ISE, a FreeRADIUS server, and Active Directory. What I want to achieve: when a user connects to the VPN (Cisco ISE), the server asks the RADIUS server for the user, and the RADIUS server authenticates the user against Active Directory. If the user is authenticated successfully, the FreeRADIUS server must then ask the user for an OTP. My configuration is:
/etc/raddb/sites-enabled/default
server default {
    listen {
        type = auth
        ipaddr = 1.1.1.1
        port = 0
        limit {
            max_connections = 16
            lifetime = 0
            idle_timeout = 30
        }
    }
    listen {
        ipaddr = *
        port = 0
        type = acct
    }
    authorize {
        filter_username
        preprocess
        chap
        mschap
        digest
        suffix
        eap {
            ok = return
        }
        files
        -sql
        ldap
        if ((ok || updated) && User-Password && !control:Auth-Type) {
            update {
                control:Auth-Type := ldap
            }
        }
        expiration
        logintime
        pap
    }
    authenticate {
        Auth-Type PAP {
            pap
        }
        Auth-Type CHAP {
            chap
        }
        Auth-Type MS-CHAP {
            mschap
        }
        mschap
        digest
        Auth-Type LDAP {
            ldap
        }
        eap
    }
    preacct {
        preprocess
        acct_unique
        suffix
        files
    }
    accounting {
        detail
        unix
        -sql
        exec
        attr_filter.accounting_response
    }
    session {
    }
    post-auth {
        if (Google-Password) {
            update request {
                pam
            }
        }
        else {
            update reply {
                &Google-Password = "%{Google-Password}"
            }
        }
        update {
            &reply: += &session-state:
        }
        -sql
        exec
        remove_reply_message_if_eap
        Post-Auth-Type REJECT {
            -sql
            attr_filter.access_reject
            eap
            remove_reply_message_if_eap
        }
        Post-Auth-Type Challenge {
        }
    }
    pre-proxy {
    }
    post-proxy {
        eap
    }
}
/etc/raddb/clients.conf
client CISCO_ISE {
    ipaddr = 1.1.1.2
    proto = *
    secret = testing123
    require_message_authenticator = no
    nas_type = other
    limit {
        max_connections = 16
        lifetime = 0
        idle_timeout = 30
    }
}
/etc/raddb/mods-config/files/authorize
DEFAULT Framed-Protocol == PPP
        Framed-Protocol = PPP,
        Framed-Compression = Van-Jacobson-TCP-IP

DEFAULT Hint == "CSLIP"
        Framed-Protocol = SLIP,
        Framed-Compression = Van-Jacobson-TCP-IP

DEFAULT Hint == "SLIP"
        Framed-Protocol = SLIP
/etc/pam.d/radiusd
auth requisite pam_google_authenticator.so forward_pass
With this configuration, the FreeRADIUS server asks for username and password, but after the AD authentication it doesn't ask for the one-time password.
Solved the issue. For those configuring the exact same setup: you need to use the State attribute, which works like a session ID or cookie. If the request has a State attribute, it is a follow-up request, so change the authentication method to PAM, which will check the token. If the request doesn't have a State attribute, it's a first-time request, which you need to authenticate via Active Directory.
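A minimal sketch of that State check in the authorize section (assuming the ldap and pam modules are enabled and an Auth-Type PAM { pam } block exists in authenticate; everything else as in the question):

authorize {
    ...
    if (&State) {
        # Follow-up request: the user is answering the OTP challenge,
        # so hand it to PAM / pam_google_authenticator
        update control {
            &Auth-Type := PAM
        }
    }
    else {
        # First request: authenticate username/password against AD via LDAP
        update control {
            &Auth-Type := ldap
        }
    }
    ...
}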

Icinga2 client host cluster-zone check command not going down (RED) when connection is lost

I have set up a single master with 2 client endpoints in my Icinga2 monitoring system, using Director with Top-Down mode.
I have also set up 2 client nodes that both accept configs and accept commands (hopefully this means I'm running Top Down Command Endpoint mode).
The service checks (disk/mem/load) for the 3 hosts are returning correct results. But my problem is:
According to the Top Down Command Endpoint example, host icinga2-client1 uses "hostalive" as the host check_command, e.g.:
object Host "icinga2-client1.localdomain" {
check_command = "hostalive" //check is executed on the master
address = "192.168.56.111"
vars.client_endpoint = name //follows the convention that host name == endpoint name
}
But one issue I have is that if the client1 Icinga process is not running, the host status stays GREEN, and all of the service statuses (disk/mem/load) stay GREEN as well, because the master is not getting any service check updates and the hostalive check command is still able to ping the node.
Under the Best Practice - Health Check section, it is mentioned to use the "cluster-zone" check command. I was expecting that with "cluster-zone" the host status would turn RED when the client node's Icinga process is stopped, but somehow this is not happening.
Does anyone have any idea?
My zone/host/endpoint configurations are as follows:
object Zone "icinga-master" {
endpoints = [ "icinga-master" ]
}
object Host "icinga-master" {
import "Master-Template"
display_name = "icinga-master [192.168.100.71]"
address = "192.168.100.71"
groups = [ "Servers" ]
}
object Endpoint "icinga-master" {
host = "192.168.100.71"
port = "5665"
}
object Zone "rick-tftp" {
parent = "icinga-master"
endpoints = [ "rick-tftp" ]
}
object Endpoint "rick-tftp" {
host = "172.16.181.216"
}
object Host "rick-tftp" {
import "Host-Template"
display_name = "rick-tftp [172.16.181.216]"
address = "172.16.181.216"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
object Zone "tftp-server" {
parent = "icinga-master"
endpoints = [ "tftp-server" ]
}
object Endpoint "tftp-server" {
host = "192.168.100.221"
}
object Host "tftp-server" {
import "Host-Template"
display_name = "tftp-server [192.168.100.221]"
address = "192.168.100.221"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
template Host "Host-Template" {
import "pnp4nagios-host"
check_command = "cluster-zone"
max_check_attempts = "5"
check_interval = 1m
retry_interval = 30s
enable_notifications = true
enable_active_checks = true
enable_passive_checks = true
enable_event_handler = true
enable_perfdata = true
}
Thanks,
Rick

Configure FreeRADIUS to only support EAP TTLS PAP

I have an external RADIUS server that only supports PAP. I have configured FreeRADIUS 2.2.4 to proxy the PAP request inside an EAP-TTLS tunnel (from a WiFi access point configured for WPA2 Enterprise) to this RADIUS server, and I tested it with eapol_test. I can manually configure a PC or Mac to only send EAP-TTLS+PAP, but this is not really desirable.
When unconfigured WPA2 Enterprise clients connect, they try PEAP, LEAP, and EAP-MD5. I disabled most of the other EAP types, but it seems that I need at least one other EAP type supported as default_eap_type in the ttls block. The non-commented part of my eap.conf is below:
eap {
    default_eap_type = ttls
    timer_expire = 60
    ignore_unknown_eap_types = no
    cisco_accounting_username_bug = no
    max_sessions = 4096

    md5 {
    }

    tls {
        certdir = ${confdir}/certs
        cadir = ${confdir}/certs
        private_key_password = heythatsprivate
        private_key_file = ${certdir}/server.pem
        certificate_file = ${certdir}/server.pem
        dh_file = ${certdir}/dh
        random_file = /dev/urandom
        CA_path = ${cadir}
        cipher_list = "DEFAULT"
        make_cert_command = "${certdir}/bootstrap"
        ecdh_curve = "prime256v1"

        cache {
            enable = yes
            lifetime = 24 # hours
            max_entries = 255
        }

        verify {
        }

        ocsp {
            enable = no
            override_cert_url = yes
            url = "http://127.0.0.1/ocsp/"
        }
    }

    ttls {
        default_eap_type = md5
        copy_request_to_tunnel = no
        use_tunneled_reply = no
        virtual_server = "inner-tunnel"
    }
}
Is there a way to configure FreeRADIUS so that no EAP types are allowed inside TTLS, or to explicitly require PAP inside the tunnel?
Thanks,
-rohan
Try to set up a new server besides the default...
server my-server {
    authorize { ... }
    authenticate {
        eap
    }
    accounting { ... }
}
Then create an inner-tunnel server for the second phase of authentication:
server my-tunnel {
    authorize {
        pap
    }
    ...
    authenticate {
        Auth-Type PAP {
            pap
        }
    }
    ...
}
You will need to modify your EAP configuration like this:
eap {
    default_eap_type = ttls
    ...
    ttls {
        default_eap_type = gtc
        copy_request_to_tunnel = yes
        use_tunneled_reply = yes
        virtual_server = "my-tunnel"
    }
    ...
}
Then specify, for each client, which server you want to use to process authentication requests:
client example {
    ipv6addr = x.x.x.x
    netmask = 32
    secret = *******
    shortname = example
    virtual_server = my-server
}
I'm sure this will enable what you want to do.
Regards,
-Hernan Garcia
