Securing the route on OpenShift - docker

I deployed an application on OpenShift and created a Route for it. I want the Route to be accessible only to a few users, not everyone, and I want to control which users can access it. Essentially, I want to provide an authentication scheme for the Route in OpenShift. How can I achieve this?

Unfortunately, OpenShift Routes do not have any authentication mechanisms built in. There are the usual TLS / subdomain / path-based routing features, but no authentication.
So your most straightforward path on OpenShift would be to deploy an additional reverse proxy as part of your application, such as nginx, traefik or haproxy:
                       +-------------+       +------------------+
                       |   reverse   |       |                  |
 --incoming traffic--->|    proxy    +------>| your application |
   from your Route     |             |       |                  |
                       +-------------+       +------------------+
For authentication methods, you then have multiple options, depending on which solution you choose to deploy (the simplest ones being Basic Auth or Digest Auth).

While Kubernetes ingress controllers usually embed such functionality, this is not the case with OpenShift. The recommended way would usually be to rely on an oauth-proxy, which integrates with the OpenShift API, allowing for easy integration with OpenShift users, while it can also integrate with third-party providers (Keycloak, LemonLDAP::NG, ...).
OAuth
Integrating with the OpenShift API, we would first create a ServiceAccount, such as the following:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.my-app: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"my-route-name"}}'
  name: my-app
  namespace: my-project
That annotation tells OpenShift how to send your users back to your application upon successful login: look for the Route named my-route-name.
We will also create a certificate, signed by the OpenShift PKI, that we'll later use to set up our OAuth proxy. This can be done by creating a Service such as the following:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: my-app-tls
  name: my-app
  namespace: my-project
spec:
  ports:
  - port: 8443
    name: oauth
  selector:
    [ your deployment selector placeholder ]
We can then add a sidecar container to our application Deployment:
[...]
  containers:
  - args:
    - -provider=openshift
    - -proxy-prefix=/oauth2
    - -skip-auth-regex=^/public/
    - -login-url=https://console.example.com/oauth/authorize
    - -redeem-url=https://kubernetes.default.svc/oauth/token
    - -https-address=:8443
    - -http-address=
    - -email-domain=*
    - -upstream=http://localhost:3000
    - -tls-cert=/etc/tls/private/tls.crt
    - -tls-key=/etc/tls/private/tls.key
    - -client-id=system:serviceaccount:my-project:my-app
    - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
    - -cookie-refresh=0
    - -cookie-expire=24h0m0s
    - -cookie-name=_oauth2_mycookiename
    name: oauth-proxy
    image: docker.io/openshift/oauth-proxy:latest
    [ resource requests/limits & liveness/readiness placeholder ]
    volumeMounts:
    - name: cert
      mountPath: /etc/tls/private
  [ your application container placeholder ]
  serviceAccount: my-app
  serviceAccountName: my-app
  volumes:
  - name: cert
    secret:
      secretName: my-app-tls
Assuming:
- console.example.com is your OpenShift public API FQDN
- ^/public/ is an optional path regexp that should not be subject to authentication
- http://localhost:3000 is the URL the OAuth proxy connects to; fix the port to whichever your application binds to. Also consider reconfiguring your application to bind to its loopback interface only, preventing accesses that would bypass your OAuth proxy
- my-project / my-app are your project and app names
Make sure your Route's TLS termination is set to Passthrough and that it sends its traffic to port 8443.
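For instance, a minimal Route sketch matching the above setup (the host value is a hypothetical placeholder, set it to your own FQDN):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route-name
  namespace: my-project
spec:
  host: my-app.apps.example.com   # hypothetical, adjust to your domain
  port:
    targetPort: oauth             # the 8443 port of the Service above
  tls:
    termination: passthrough      # let the oauth-proxy terminate TLS itself
  to:
    kind: Service
    name: my-app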
Check out https://github.com/openshift/oauth-proxy for an exhaustive list of options. The openshift-sar one could be interesting, filtering which users may log in based on their permissions over OpenShift objects, as sketched below.
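For example, the following extra proxy arg (a sketch; adjust the SAR attributes to whatever permission check makes sense for you) would only allow users who may get Services in my-project:
    - -openshift-sar={"namespace":"my-project","resource":"services","verb":"get"}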
Obviously, in some cases you would be deploying applications that can be configured to integrate with some OAuth provider directly: then there's no reason to use a proxy.
Basic Auth
If you do not want to integrate with OpenShift users, nor any kind of third-party provider, you may want to use an nginx sidecar container - there are plenty available on Docker Hub.
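As a minimal sketch of such a sidecar, assuming an htpasswd file is mounted at /etc/nginx/htpasswd and your application listens on localhost:3000 (both hypothetical values), the nginx.conf could look like:
events {}
http {
  server {
    # port your Service / Route should point to
    listen 8080;
    location / {
      # prompt for credentials, checked against the mounted htpasswd file
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/htpasswd;
      # forward authenticated traffic to the application container,
      # which shares the pod's network namespace
      proxy_pass http://localhost:3000;
    }
  }
}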
Custom Ingress
Also note that with OpenShift 3.x, cluster admins may customize the template used to generate the HAProxy configuration, patching their ingress deployment (see the sketch below). This is a relatively common way to fix some headers or to handle custom annotations for your users to set in their Routes. Look for TEMPLATE_FILE in https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html#env-variables.
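On 3.x that could look something like the following sketch, adapted from the 3.11 docs, assuming a ConfigMap customrouter holding your customized haproxy-config.template:
oc -n default set volume dc/router --add --overwrite \
    --name=config-volume --mount-path=/var/lib/haproxy/conf/custom \
    --source='{"configMap": {"name": "customrouter"}}'
oc -n default set env dc/router \
    TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template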
As of right now, I do not think OpenShift 4 ingress controllers allow for this, as they are driven by an operator with limited configuration options; see https://docs.openshift.com/container-platform/4.5/networking/ingress-operator.html.
Though, as with OpenShift 3, you may very well deploy your own ingress controllers, should you have to handle such edge cases.

Related

x-request-id header propagation in Keycloak

I am using Keycloak to implement the OAuth2 authorization code flow in a Kubernetes cluster governed by the Ambassador API gateway. I am using the Istio service mesh to add traceability and mTLS features to my cluster. One of these is Jaeger, which requires all services to forward the x-request-id header in order to link the spans into a specific trace.
When a request is sent, Istio's proxy attached to Ambassador generates the x-request-id and forwards the request to Keycloak for authorization; when the results are sent back to Ambassador, the header is dropped, and therefore Keycloak's Istio proxy generates a new x-request-id.
[screenshot: trace where the x-request-id is lost]
Is there a way I can force Keycloak to forward the x-request-id header if passed to it?
Update
Here are the environment variables (ConfigMap) associated with Keycloak:
kind: ConfigMap
apiVersion: v1
metadata:
  name: keycloak-envars
data:
  KEYCLOAK_ADMIN: "admin"
  KC_PROXY: "edge"
  KC_DB: "postgres"
  KC_DB_USERNAME: "test"
  KC_DB_DATABASE: "keycloak"
  PROXY_ADDRESS_FORWARDING: "true"
You may need to restart your Keycloak Docker container with the environment variable PROXY_ADDRESS_FORWARDING=true, e.g.:
docker run -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak

SCDF 2.1 on Kubernetes: security context config, non-root, no writable filesystem

I need to configure the SCDF 2 Skipper, SCDF, and app pods to run without root and with no writable pod filesystem.
I made these changes to the config YAMLs:
data:
  application.yaml: |-
    spring:
      cloud:
        skipper:
          server:
            platform:
              kubernetes:
                accounts:
                  default:
                    namespace: default
                    deploymentServiceAccountName: scdf2-server-data-flow
                    securityContext:
                      runAsUser: 2000
                      allowPrivilegeEscalation: false
                    limits:
                      Colla
And SCDF now starts and runs as user "2000" (there was a problem with a writable local Maven repo, fixed with an NFS PVC)...
But the app pods always start as root, not as user 2000.
I've changed the Skipper config with a securityContext... any clues?
Thanks
What you set as deploymentServiceAccountName is one of the Kubernetes deployer properties, used when deploying streaming applications or launching task applications.
It looks like the above configuration is not applied to your SCDF or Skipper server configuration properties, though it should at least get applied when deploying applications.
For the SCDF and Skipper servers themselves, you need to explicitly set serviceAccountName in your SCDF/Skipper server deployment configurations (not deploymentServiceAccountName: as its name suggests, deploymentServiceAccountName is internally converted into the actual serviceAccountName for the respective stream/task apps when they get deployed).
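For the apps themselves, a sketch of what might work, assuming your spring-cloud-deployer-kubernetes version supports the podSecurityContext deployer property (double-check the exact property names against your deployer release):
data:
  application.yaml: |-
    spring:
      cloud:
        skipper:
          server:
            platform:
              kubernetes:
                accounts:
                  default:
                    deploymentServiceAccountName: scdf2-server-data-flow
                    # assumption: deployer-level security context, applied to
                    # each stream/task app pod at deployment time
                    podSecurityContext:
                      runAsUser: 2000
                      fsGroup: 2000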
We got it. We use it in the Skipper/SCDF deployment, not in the pods' deployments.
As per your request:
in the SCDF / Skipper deployment config we have:
spec:
  containers:
  - name: {{ template "scdf.fullname" . }}-server
    image: {{ .Values.server.image }}:{{ .Values.server.version }}
    imagePullPolicy: {{ .Values.server.imagePullPolicy }}
    volumeMounts:
      ...
  serviceAccountName: {{ template "scdf.serviceAccountName" . }}
Are you telling me to change the SCDF/Skipper ConfigMap for tasks and streams? Or is there another property, in or before the deployment config?
What is the relation between the "serviceaccount" and the user running the process inside the pod?
How is the service account related to the process running as user "2000"?
I can't understand.
Please help, it is very important to run without root and without using the pod's local filesystem, except for "tmp" files.

Traefik v2 [how to route to a specific port]

I'm starting to migrate my backends to be compatible with Traefik v2.0.
The old configuration was:
labels:
  - traefik.port=8500
  - traefik.docker.network=proxy
  - traefik.frontend.rule=Host:consul.{DOMAIN}
I assumed the network is no longer necessary, and that the rule would change in the new Traefik to:
- traefik.http.routers.consul-server-bootstrap.rule=Host(`consul.scoob.thrust.com.br`)
But how do I specify that this should forward to my backend on port 8500, and not port 80, where the Traefik entrypoint was reached?
My goal would be to accomplish something like this:
https://docs.traefik.io/user-guide/cluster-docker-consul/#migrate-configuration-to-consul
Is it still possible?
I saw there was no --consul or storeconfig command in v2.0.
You need traefik.http.services.{SERVICE}.loadbalancer.server.port
labels:
  - "traefik.http.services.{SERVICE}.loadbalancer.server.port=8500"
  - "traefik.docker.network=proxy"
  - "traefik.http.routers.{SERVICE}.rule=Host(`{DOMAIN}`)"
Replace {SERVICE} with the name of your service.
Replace {DOMAIN} with your domain name.
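With the values from your old configuration filled in (using consul-server-bootstrap as the service name, as in your router example), that might look like:
labels:
  - "traefik.http.routers.consul-server-bootstrap.rule=Host(`consul.scoob.thrust.com.br`)"
  - "traefik.http.services.consul-server-bootstrap.loadbalancer.server.port=8500"
  - "traefik.docker.network=proxy"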
If you want to remove the proxy network, you'll need to look at https://docs.traefik.io/v2.0/providers/docker/#usebindportip

Spinnaker GateWay EndPoint

I'm working with Spinnaker to create a new CD pipeline.
I've deployed Halyard in a Docker container on my computer, and used it to deploy Spinnaker to Google Kubernetes Engine.
After all of that, I prepared a new Ingress YAML file, shown below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-cloud
  namespace: spinnaker
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: spin-deck
          servicePort: 9000
After accessing the Spinnaker UI via a public IP, I got the error shown below.
Error fetching applications. Check that your gate endpoint is accessible.
After all of that, I checked the docs and ran the commands shown below.
I checked the Service data on my K8s cluster:
spin-deck   NodePort   10.11.245.236   <none>   9000:32111/TCP   1h
spin-gate   NodePort   10.11.251.78    <none>   8084:31686/TCP   1h
For UI
hal config security ui edit --override-base-url "http://spin-deck.spinnaker:9000"
For API
hal config security api edit --override-base-url "http://spin-gate.spinnaker:8084"
After running these commands and redeploying Spinnaker, the error persisted.
How can I solve the problem of accessing the Spinnaker Gate from the UI?
--override-base-url should be populated without the port, as in the example below.
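For example, adapting the commands from the question (same hostnames, ports dropped):
hal config security ui edit --override-base-url "http://spin-deck.spinnaker"
hal config security api edit --override-base-url "http://spin-gate.spinnaker"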

Kubernetes certbot standalone not working

I'm trying to generate an SSL certificate with the certbot/certbot Docker container in Kubernetes. I am using a Job controller for this purpose, which looks like the most suitable option. When I run the standalone option, I get the following error:
Failed authorization procedure. staging.ishankhare.com (http-01):
urn:ietf:params:acme:error:connection :: The server could not connect
to the client to verify the domain :: Fetching
http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8:
Timeout during connect (likely firewall problem)
I've made sure that this isn't due to misconfigured DNS entries by running a simple nginx container, and it resolves properly. Following is my Job file:
apiVersion: batch/v1
kind: Job
metadata:
  #labels:
  #  app: certbot-generator
  name: certbot
spec:
  template:
    metadata:
      labels:
        app: certbot-generate
    spec:
      volumes:
      - name: certs
      containers:
      - name: certbot
        image: certbot/certbot
        command: ["certbot"]
        #command: ["yes"]
        args: ["certonly", "--noninteractive", "--agree-tos", "--staging", "--standalone", "-d", "staging.ishankhare.com", "-m", "me@ishankhare.com"]
        volumeMounts:
        - name: certs
          mountPath: "/etc/letsencrypt/"
        #- name: certs
        #  mountPath: "/opt/"
        ports:
        - containerPort: 80
        - containerPort: 443
      restartPolicy: "OnFailure"
and my service:
apiVersion: v1
kind: Service
metadata:
  name: certbot-lb
  labels:
    app: certbot-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 35.189.170.149
  ports:
  - port: 80
    name: "http"
    protocol: TCP
  - port: 443
    name: "tls"
    protocol: TCP
  selector:
    app: certbot-generator
the full error message is something like this:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for staging.ishankhare.com
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. staging.ishankhare.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8: Timeout during connect (likely firewall problem)
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: staging.ishankhare.com
Type: connection
Detail: Fetching
http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8:
Timeout during connect (likely firewall problem)
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
I've also tried running this as a simple Pod, but to no avail. Although I still feel running it as a Job to completion is the way to go.
First, be aware that your Job definition is valid, but the spec.template.metadata.labels.app: certbot-generate value does not match your Service definition's spec.selector.app: certbot-generator: one is certbot-generate, the other is certbot-generator. So the pod run by the Job controller is never added as an endpoint to the Service.
Adjust one or the other (see the snippet below), but they have to match, and that might just work :)
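For instance, fixing it on the Job side, so the pod template labels match the Service selector:
spec:
  template:
    metadata:
      labels:
        app: certbot-generator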
That said, I'm not sure that using a Service with a selector targeting the short-lived pods of a Job controller would work, nor with a simple Pod as you tested. The certbot-randomId pod created by the Job (or whatever simple pod you create) takes about 15 seconds total to run/fail, and the HTTP validation challenge is triggered after just a few seconds of the pod's life: it's not clear to me that this would leave enough time for Kubernetes proxying to be up between the Service and the pod.
We can safely assume the Service itself works, since you mentioned you tested DNS resolution, so you can easily rule out a timing issue by adding a sleep 10 (or more!) to give the pod more time to be added as an endpoint to the Service and proxied appropriately before certbot triggers the HTTP challenge. Just change your Job's command and args to:
command: ["/bin/sh"]
args: ["-c", "sleep 10 && certbot certonly --noninteractive --agree-tos --staging --standalone -d staging.ishankhare.com -m me@ishankhare.com"]
And here too, that might just work :)
That being said, I'd warmly recommend using cert-manager, which you can install easily through its stable Helm chart: the Certificate custom resource it introduces will store your certificate in a Secret, which makes it straightforward to reuse from whatever K8s resource, and it takes care of renewal automatically, so you can just forget about it all.
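As a minimal Certificate sketch, assuming cert-manager is installed with a ClusterIssuer named letsencrypt-staging (the names here are hypothetical, and the apiVersion depends on your cert-manager release):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: staging-ishankhare-com
  namespace: default
spec:
  # cert-manager creates and renews the keypair in this Secret
  secretName: staging-ishankhare-com-tls
  dnsNames:
  - staging.ishankhare.com
  issuerRef:
    name: letsencrypt-staging   # hypothetical ClusterIssuer
    kind: ClusterIssuer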
