Hashi-UI and Nomad authentication - docker

I need advice on how to set up authentication for Hashi-UI, which I use to manage Nomad and Consul. I have a Debian 8 server where I installed Terraform and wrote a Terraform file that downloads and runs Nomad and Consul. That works, but when I access Hashi-UI there is no login, so everyone can reach it. I run Hashi-UI as a Nomad job, behind Nginx. How can I set up a user login, like I would with Apache?
My Nomad file:
job "hashi-ui" {
region = "global"
datacenters = ["dc1"]
type = "service"
update {
stagger = "30s"
max_parallel = 2
}
group "server" {
count = 1
task "hashi-ui" {
driver = "docker"
config {
image = "jippi/hashi-ui"
network_mode = "host"
}
service {
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
env {
NOMAD_ENABLE = 1
NOMAD_ADDR = "http://0.0.0.0:4646"
CONSUL_ENABLE = 1
CONSUL_ADDR = "http://0.0.0.0:8500"
}
resources {
cpu = 500
memory = 512
network {
mbits = 5
port "http" {
static = 3000
}
}
}
}
task "nginx" {
driver = "docker"
config {
image = "ygersie/nginx-ldap-lua:1.11.3"
network_mode = "host"
volumes = [
"local/config/nginx.conf:/etc/nginx/nginx.conf"
]
}
template {
data = <<EOF
worker_processes 2;
events {
worker_connections 1024;
}
env NS_IP;
env NS_PORT;
http {
access_log /dev/stdout;
error_log /dev/stderr;
auth_ldap_cache_enabled on;
auth_ldap_cache_expiration_time 300000;
auth_ldap_cache_size 10000;
ldap_server ldap_server1 {
url ldaps://ldap.example.com/ou=People,dc=example,dc=com?uid?sub?(objectClass=inetOrgPerson);
group_attribute_is_dn on;
group_attribute member;
satisfy any;
require group "cn=secure-group,ou=Group,dc=example,dc=com";
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 15080;
location / {
auth_ldap "Login";
auth_ldap_servers ldap_server1;
set $target '';
set $service "hashi-ui.service.consul";
set_by_lua_block $ns_ip { return os.getenv("NS_IP") or "127.0.0.1" }
set_by_lua_block $ns_port { return os.getenv("NS_PORT") or 53 }
access_by_lua_file /etc/nginx/srv_router.lua;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 31d;
proxy_pass http://$target;
}
}
}
EOF
destination = "local/config/nginx.conf"
change_mode = "noop"
}
service {
port = "http"
tags = [
"urlprefix-hashi-ui.example.com/"
]
check {
type = "tcp"
interval = "5s"
timeout = "2s"
}
}
resources {
cpu = 100
memory = 64
network {
mbits = 1
port "http" {
static = "15080"
}
}
}
}
}
}
Thank you for any advice.

Since you are using Nginx, you can easily enable authentication in Nginx itself. Here are some useful links:
Basic Auth using Nginx: http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html
LDAP Auth using Nginx: http://www.allgoodbits.org/articles/view/29
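If you don't need LDAP, plain HTTP basic auth is the lightest option. A minimal sketch of how it could fit the job above — the htpasswd path, the extra volume mount, and the sample entry are all assumptions for illustration, not from the Hashi-UI docs; generate a real entry with the htpasswd tool:

# extra template stanza in the "nginx" task, shipping a password file
template {
  # hypothetical entry; generate a real one with: htpasswd -nb admin <password>
  data        = "admin:$apr1$replaceme$notARealHash"
  destination = "local/config/htpasswd"
  change_mode = "noop"
}

Then mount it alongside nginx.conf ("local/config/htpasswd:/etc/nginx/htpasswd" in the task's volumes) and swap the auth_ldap directives in the location block for:

auth_basic           "Login";
auth_basic_user_file /etc/nginx/htpasswd;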
Interestingly, this problem is discussed in the HashiUI GitHub repo as well. Take a look at this approach too:
https://github.com/jippi/hashi-ui/blob/master/docs/authentication_example.md
Thanks,
Arul

Related

cannot access nomad docker task via local ip:port

With the following job config, curl NOMAD_IP_http:NOMAD_PORT_http cannot reach the http-echo service. There is no listening port on localhost for incoming requests.
Why, and how can I access the http-echo service?
job "job" {
datacenters = ["dc1"]
group "group" {
count = 2
network {
port "http" {}
}
service {
name = "http-echo"
port = "http"
tags = [
"http-echo",
]
check {
type = "http"
path = "/health"
interval = "30s"
timeout = "2s"
}
}
task "task" {
driver = "docker"
config {
image = "hashicorp/http-echo:latest"
args = [
"-listen", ":${NOMAD_PORT_http}",
"-text", "Hello and welcome to ${NOMAD_IP_http} running on port ${NOMAD_PORT_http}",
]
}
resources {}
}
}
}
UPDATE
After configuring the driver's network_mode, curl succeeds:
network_mode = "host"
You forgot to add ports at job -> group -> task -> config -> ports.
Now it works on the latest Nomad (v1.1.3+).
job "job" {
datacenters = ["dc1"]
group "group" {
count = 2
network {
port "http" {}
# or maps to container's default port
# port "http" {
# to = 5678
# }
#
}
service {
name = "http-echo"
port = "http"
tags = [
"http-echo",
]
check {
type = "http"
path = "/health"
interval = "30s"
timeout = "2s"
}
}
task "task" {
driver = "docker"
config {
image = "hashicorp/http-echo:latest"
args = [
"-listen", ":${NOMAD_PORT_http}",
"-text", "Hello and welcome to ${NOMAD_IP_http} running on port ${NOMAD_PORT_http}",
]
ports = ["http"]
}
resources {}
}
}
}
Then run docker ps; you will see the mapped port, and curl works.

Running a nomad job for a docker container that Traefik can find

I'm currently running a docker container with Traefik as the load balancer using the following docker-compose file:
services:
  loris:
    image: bdlss/loris-grok-docker
    labels:
      - traefik.http.routers.loris.rule=Host(`loris.my_domain`)
      - traefik.http.routers.loris.tls=true
      - traefik.http.routers.loris.tls.certresolver=lets-encrypt
      - traefik.port=80
    networks:
      - web
It is working fairly well. As part of one of my first attempts at using Nomad, I simply want to be able to start this container using a Nomad job (loris.nomad) instead of the docker-compose file.
The Docker container 'Labels' and the 'Network' identification are quite important for Traefik to do the dynamic routing.
My question is: where can I put this "label" information and "network" information in the loris.nomad file so that it starts the container in the same way that the docker-compose file currently does.
I've tried putting this information in the task.config stanza, but this doesn't work, and I'm having trouble following the documentation. I've seen examples where an additional "service" stanza has been added, but I'm still not sure.
Here's the basics of that nomad file I want to modify.
# loris.nomad
job "loris" {
datacenters = ["dc1"]
group "loris" {
network {
port "http" {
to = 5004
}
task "loris" {
driver = "docker"
config {
image = "bdlss/loris-openjpeg-docker"
ports = ["http"]
}
resources {
cpu = 500
memory = 512
}
}
}
}
Any advice is much appreciated.
Well, the most appropriate option for running Traefik in Nomad and load-balancing between containers is to use the Consul catalog (required for service discovery).
For this to run, you have to configure the Consul connection when you start Nomad. If you'd like to test things out locally, you can do this by simply running sudo nomad agent -dev-connect. Consul can be started with consul agent -dev -client="0.0.0.0".
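Outside of -dev mode, the Consul connection goes in the Nomad agent configuration — a minimal sketch, assuming a local Consul agent on its default port:

# agent.hcl (excerpt)
consul {
  address = "127.0.0.1:8500"
}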
Now you can simply provide your Traefik configuration using tags, as shown here.
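In other words, the routing rules move out of Docker labels and into the tags of the Nomad service stanza, which Traefik's consulCatalog provider picks up — a minimal sketch for the loris job from the question (the hostname is a placeholder):

service {
  name = "loris"
  port = "http"
  tags = [
    "traefik.enable=true",
    "traefik.http.routers.loris.rule=Host(`loris.my_domain`)",
  ]
}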
If you really need to run Traefik in Nomad with the Docker provider (which will certainly cause issues in a clustered setup), you can do the following:
First, you need to enable host path mounting in the docker plugin. See this and this. You can place your configuration in an extra file, like extra.hcl, which looks like this:
plugin "docker" {
config {
volumes {
enabled = true
}
}
}
Now you can start Nomad with this extra setting: sudo nomad agent -dev-connect -config=extra.hcl. Then you can provide your Traefik settings in the config/labels block, like so (in full):
job "traefik" {
region = "global"
datacenters = ["dc1"]
type = "service"
group "traefik" {
count = 1
task "traefik" {
driver = "docker"
config {
image = "traefik:v2.3"
//network_mode = "host"
volumes = [
"local/traefik.yaml:/etc/traefik/traefik.yaml",
"/var/run/docker.sock:/var/run/docker.sock"
]
labels {
traefik.enable = true
traefik.http.routers.from-docker.rule = "Host(`docker.loris.mydomain`)"
traefik.http.routers.from-docker.entrypoints = "web"
traefik.http.routers.from-docker.service = "api#internal"
}
}
template {
data = <<EOF
log:
level: DEBUG
entryPoints:
traefik:
address: ":8080"
web:
address: ":80"
api:
dashboard: true
insecure: true
accessLog: {}
providers:
docker:
exposedByDefault: false
consulCatalog:
prefix: "traefik"
exposedByDefault: false
endpoint:
address: "10.0.0.20:8500"
scheme: "http"
datacenter: "dc1"
EOF
destination = "local/traefik.yaml"
}
resources {
cpu = 100
memory = 128
network {
mbits = 10
port "http" {
static = 80
}
port "traefik" {
static = 8080
}
}
}
service {
name = "traefik"
tags = [
"traefik.enable=true",
"traefik.http.routers.from-consul.rule=Host(`consul.loris.mydomain`)",
"traefik.http.routers.from-consul.entrypoints=web",
"traefik.http.routers.from-consul.service=api#internal"
]
check {
name = "alive"
type = "tcp"
port = "http"
interval = "10s"
timeout = "2s"
}
}
}
}
}
(There might be a setting to bind to 0.0.0.0; I defined those domains in my /etc/hosts to point to my main interface IP.)
You can test it with this modified webapp spec. (I didn't figure out how to map ports correctly, like container:80 -> host:<random> — see the note after the spec — but I think this is enough to show how complicated it gets.)
job "demo-webapp" {
datacenters = ["dc1"]
group "demo" {
count = 3
task "server" {
env {
// "${NOMAD_PORT_http}"
PORT = "80"
NODE_IP = "${NOMAD_IP_http}"
}
driver = "docker"
config {
image = "hashicorp/demo-webapp-lb-guide"
labels {
traefik.enable = true
traefik.http.routers.webapp-docker.rule = "Host(`docker.loris.mydomain`) && Path(`/myapp`)"
traefik.http.services.webapp-docker.loadbalancer.server.port = 80
}
}
resources {
network {
// Used for docker provider
mode ="bridge"
mbits = 10
port "http"{
// Used for docker provider
to = 80
}
}
}
service {
name = "demo-webapp"
port = "http"
tags = [
"traefik.enable=true",
"traefik.http.routers.webapp-consul.rule=Host(`consul.loris.mydomain`) && Path(`/myapp`)",
]
check {
type = "http"
path = "/"
interval = "2s"
timeout = "2s"
}
}
}
}
}
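As an aside on the container:80 -> host:<random> mapping: newer Nomad versions (0.10+) express exactly that with a group-level network stanza, where `to` pins the container port while the host port stays dynamic — a sketch, not something I have verified in this setup:

group "demo" {
  network {
    port "http" {
      to = 80  # container listens on 80; the host side gets a random port
    }
  }

  task "server" {
    driver = "docker"

    config {
      image = "hashicorp/demo-webapp-lb-guide"
      ports = ["http"]  # wire the published port into the container
    }
  }
}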
I hope this somehow answers your question.

Multiple vue.js containers in kubernetes and terraform setup returns 404 or 502

I have an Ingress / Terraform / NGINX / Kubernetes setup that has issues with redirecting properly. It currently serves a vue.js frontend and a .NET Core backend, and both work online. However, when I add another vue.js instance, requests don't seem to get redirected to the new URL properly.
My terraform setup
resource "kubernetes_ingress" "ingress" {
metadata {
name = "ingress"
namespace = var.namespace_name
annotations = {
"nginx.ingress.kubernetes.io/force-ssl-redirect" = true
"nginx.ingress.kubernetes.io/from-to-www-redirect" = true
"nginx.ingress.kubernetes.io/ssl-redirect": true
"kubernetes.io/ingress.class": "nginx"
}
}
spec {
tls {
hosts = [var.domain_name, "*.${var.domain_name}"]
secret_name = "tls-secret"
}
rule {
host = var.domain_name
http {
path {
path = "/"
backend {
service_name = "frontend"
service_port = 80
}
}
path {
path = "/api"
backend {
service_name = "api"
service_port = 80
}
}
path {
path = "/backend/*"
backend {
service_name = "backend"
service_port = 80
}
}
path {
path = "/payment/*"
backend {
service_name = "payment"
service_port = 80
}
}
}
}
}
wait_for_load_balancer = true
}
When running kubectl describe, the following is returned:
Name:             ingress
Namespace:        [redacted]
Address:          [ip-address]
Default backend:  default-http-backend:80 (<none>)
TLS:
  tls-secret terminates [url-name],*.[url-name]
Rules:
  Host          Path        Backends
  ----          ----        --------
  [url-name]
                /           frontend:80 (10.244.0.97:80)
                /api        api:80 (10.244.0.121:80)
                /backend/   backend:80 (10.244.0.96:80)
                /payment/   payment:80 (10.244.0.32:80)
Annotations:
  nginx.ingress.kubernetes.io/force-ssl-redirect: true
  nginx.ingress.kubernetes.io/from-to-www-redirect: true
  nginx.ingress.kubernetes.io/ssl-redirect: true
I was thinking I might be missing proxy settings, but I have no idea how to redirect that. Furthermore, this entire solution is deployed with CI to DigitalOcean. I've tried various other configurations, such as removing the asterisk in the paths (/backend/), but this didn't change anything.
In the annotations, adding nginx.ingress.kubernetes.io/rewrite-target: / only broke the /api URL and didn't fix the others.
EDIT: adding "kubernetes.io/ingress.class": "nginx" like @Vitalii mentioned unfortunately did not fix the issue. The question has been updated for completeness' sake.
Adding nginx.ingress.kubernetes.io/rewrite-target: / was actually part of the solution. It did break the .NET C# API, which made me ask a separate question that can be found here. For consistency and the sake of future searches, the solution I used was as follows: apart from adding the rewrite-target line to my annotations, I changed the API path from
path {
  path = "/api(.*)"
  backend {
    service_name = "api"
    service_port = 80
  }
}
into
path {
  path = "/(api.*)"
  backend {
    service_name = "olc-api"
    service_port = 80
  }
}
With this, it matches /api to my .NET Core app, instead of trying to find the URL within the vue.js container(s).
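For reference, a capture group in an ingress-nginx path is normally paired with a rewrite-target that references the group, so the captured part survives the rewrite. A sketch of the combined annotation and path in the Terraform resource (the general pattern, not necessarily my exact final annotations):

metadata {
  annotations = {
    "kubernetes.io/ingress.class"                = "nginx"
    "nginx.ingress.kubernetes.io/rewrite-target" = "/$1"  # keeps whatever (api.*) captured
  }
}

spec {
  rule {
    http {
      path {
        path = "/(api.*)"
        backend {
          service_name = "olc-api"
          service_port = 80
        }
      }
    }
  }
}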

How to modify a file in a Docker container when deploying with Terraform and Kubernetes?

As part of a bigger module, I want to deploy an nginx container and replace its default nginx.conf. The new config should be built from Terraform resource data that is only generated at deployment time. Is there a way to do this?
I managed to replace the standard nginx.conf with a dynamically generated one following these steps:
Create a template config file with placeholders for dynamic data
Parse the file using Terraform's template_file data source
Store the parsed data in a ConfigMap and mount the map as a volume for the Nginx container
Step by step:
Create an nginx.conf template named nginx-conf.tpl:
events {
  worker_connections 4096; ## Default: 1024
}

http {
  server {
    listen 80;
    listen [::]:80;
    server_name ${server_name};

    location /_plugin/kibana {
      proxy_pass https://${elasticsearch_kibana_endpoint};
    }

    location / {
      proxy_pass https://${elasticsearch_endpoint};
    }
  }
}
Parse the nginx-conf.tpl template with the following Terraform code:
data "template_file" "nginx" {
template = "${file("${path.module}/nginx-conf.tpl")}"
vars = {
elasticsearch_endpoint = "${aws_elasticsearch_domain.example-name.endpoint}"
elasticsearch_kibana_endpoint = "${aws_elasticsearch_domain.example-name.kibana_endpoint}"
server_name = "${var.server_name}"
}
}
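Aside: on Terraform 0.12+ the built-in templatefile function renders the same template without a data source — a sketch; local.nginx_conf would then take the place of data.template_file.nginx.rendered below:

locals {
  nginx_conf = templatefile("${path.module}/nginx-conf.tpl", {
    elasticsearch_endpoint        = aws_elasticsearch_domain.example-name.endpoint
    elasticsearch_kibana_endpoint = aws_elasticsearch_domain.example-name.kibana_endpoint
    server_name                   = var.server_name
  })
}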
Create a ConfigMap and store the parsed template there with nginx.conf key:
resource "kubernetes_config_map" "nginx" {
metadata {
name = "nginx"
}
data = {
"nginx.conf" = data.template_file.nginx.rendered
}
}
Finally, mount the ConfigMap key as a container volume:
# ...
spec {
  # ...
  container {
    # ...
    volume_mount {
      name       = "nginx-conf"
      mount_path = "/etc/nginx"
    }
  }

  volume {
    name = "nginx-conf"

    config_map {
      name = "nginx"

      items {
        key  = "nginx.conf"
        path = "nginx.conf"
      }
    }
  }
}
# ...
That's it. Nginx server will start using the provided config.
Useful links: Kubernetes ConfigMap as a volume, Terraform template_file data source docs.

How to pull docker image from public registry with nomad job?

I'm using Nomad on GCE and I cannot pull Docker images from the public registry.
I can pull from the command line with docker pull gerlacdt/helloapp:v0.1.0, but when trying to run a Nomad job with a public-registry image, I get this error:
Failed to find docker auth for repo "gerlacdt/helloapp": docker-credential-gcr
Relevant files:
The /root/.docker/config.json file:
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  "credHelpers": {
    "asia.gcr.io": "gcr",
    "eu.gcr.io": "gcr",
    "gcr.io": "gcr",
    "staging-k8s.gcr.io": "gcr",
    "us.gcr.io": "gcr"
  }
}
The nomad client config:
datacenter = "europe-west1-c"
name       = "consul-clients-092s"
region     = "europe-west1"
bind_addr  = "0.0.0.0"

advertise {
  http = "172.27.3.132"
  rpc  = "172.27.3.132"
  serf = "172.27.3.132"
}

client {
  enabled = true

  options = {
    "docker.auth.config" = "/root/.docker/config.json"
    "docker.auth.helper" = "gcr"
  }
}

consul {
  address = "127.0.0.1:8500"
}
The job file:
job "helloapp" {
datacenters = ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
constraint {
attribute = "${attr.kernel.name}"
value = "linux"
}
# Configure the job to do rolling updates
update {
stagger = "10s"
max_parallel = 1
}
group "hello" {
count = 1
restart {
attempts = 2
interval = "1m"
delay = "10s"
mode = "fail"
}
# Define a task to run
task "hello" {
driver = "docker"
config {
image = "gerlacdt/helloapp:v0.1.0"
port_map {
http = 8080
}
}
service {
name = "${TASKGROUP}-service"
tags = [
# "traefik.tags=public",
"traefik.frontend.rule=Host:bla.zapto.org",
"traefik.frontend.entryPoints=http",
"traefik.tags=exposed"
]
port = "http"
check {
name = "alive"
type = "http"
interval = "10s"
timeout = "3s"
path = "/health"
}
}
resources {
cpu = 500 # 500 MHz
memory = 128 # 128MB
network {
mbits = 1
port "http" {
}
}
}
logs {
max_files = 10
max_file_size = 15
}
kill_timeout = "10s"
}
}
}
The complete error message from the Nomad client logs:
failed to initialize task "hello" for alloc "c845bdb9-500a-dc40-0f17-2b79fe4866f1": Failed to find docker auth for repo "gerlacdt/helloapp": docker-credential-gcr with input "gerlacdt/helloapp" failed with stderr:
