Zuul routing with same endpoint but different services - netflix-zuul

I want to have routes as shown below:
zuul:
  prefix: /a
  stripPrefix: true
  routes:
    route1:
      path: /v1
      serviceId: service1
      stripPrefix: false
    route2:
      path: /v1
      serviceId: service2
      stripPrefix: false
But every time I request the endpoint v1 on service2, I get results from service1. How do I set the properties so that I can have the same endpoint on two different services?
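Zuul matches purely on the route's path pattern, so two routes with an identical path cannot be told apart and one of the services will always win. A common workaround, sketched below under the assumption that callers can use distinct prefixes, is to give each service its own path prefix and let Zuul strip it, so both backends still receive /v1:

zuul:
  prefix: /a
  stripPrefix: true            # strips the global /a prefix
  routes:
    route1:
      path: /service1/**       # external: /a/service1/v1
      serviceId: service1
      stripPrefix: true        # strips /service1, so service1 receives /v1
    route2:
      path: /service2/**       # external: /a/service2/v1
      serviceId: service2
      stripPrefix: true        # strips /service2, so service2 receives /v1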

Related

x-request-id header propagation in Keycloak

I am using Keycloak to implement the OAuth2 authorization code flow in a Kubernetes cluster governed by the Ambassador API gateway, and I am using the Istio service mesh to add traceability and mTLS features to my cluster. One of these is Jaeger, which requires all services to forward the x-request-id header in order to link spans into a specific trace.
When a request is sent, Istio's proxy attached to Ambassador generates the x-request-id and forwards the request to Keycloak for authorization. When the result is sent back to Ambassador, the header is dropped, and therefore the Istio proxy of Keycloak generates a new x-request-id. The following image shows the problem:
Here is a photo of the trace where I lost the x-request-id:
Is there a way I can force Keycloak to forward the x-request-id header if passed to it?
Update
Here are the environment variables (ConfigMap) associated with Keycloak:
kind: ConfigMap
apiVersion: v1
metadata:
  name: keycloak-envars
data:
  KEYCLOAK_ADMIN: "admin"
  KC_PROXY: "edge"
  KC_DB: "postgres"
  KC_DB_USERNAME: "test"
  KC_DB_DATABASE: "keycloak"
  PROXY_ADDRESS_FORWARDING: "true"
You may need to restart your Keycloak Docker container with the environment variable PROXY_ADDRESS_FORWARDING=true.
e.g.: docker run -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak
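For reference, here is a minimal sketch (the Deployment name, labels, and image are assumptions) of how such a ConfigMap is typically wired into the Keycloak container via envFrom, so that variables like PROXY_ADDRESS_FORWARDING actually reach the process:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:latest   # assumed image
        envFrom:
        - configMapRef:
            name: keycloak-envars                 # the ConfigMap shown above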

Securing the route on OpenShift

I deployed an application on OpenShift and created a Route for it. I want the Route to be accessible by only a few users, not everyone, and I want to control which users can access it. In other words, I want to provide an authentication scheme for the Route in OpenShift. How can I achieve this?
Unfortunately, OpenShift Routes do not have any authentication mechanisms built-in. There are the usual TLS / subdomain / path-based routing features, but no authentication.
So your most straightforward path on OpenShift would be to deploy an additional reverse proxy as part of your application, such as nginx, traefik or haproxy:
                        +-------------+        +------------------+
                        |   reverse   |        |                  |
 +--incoming traffic--->+    proxy    +------->+ your application |
    from your Route     |             |        |                  |
                        +-------------+        +------------------+
For authentication methods, you then have multiple options, depending on which solution you choose to deploy (the simplest ones being Basic Auth or Digest Auth).
While Kubernetes ingress controllers usually embed such functionality, this is not the case with OpenShift. The recommended way would usually be to rely on an oauth-proxy, which integrates with the OpenShift API, allowing for easy integration with OpenShift users -- while it could also be used to integrate with third-party providers (Keycloak, LemonLDAP-NG, ...).
OAuth
Integrating with the OpenShift API, we would first create a ServiceAccount, such as the following:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.my-app: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"my-route-name"}}'
  name: my-app
  namespace: my-project
That annotation tells OpenShift how to send your users back to your application upon successful login: look for a Route named my-route-name.
We will also create a certificate, signed by the OpenShift PKI, that we'll later use when setting up our OAuth proxy. This can be done by creating a Service such as the following:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: my-app-tls
  name: my-app
  namespace: my-project
spec:
  ports:
  - port: 8443
    name: oauth
  selector:
    [ your deployment selector placeholder ]
We may then add a sidecar container into our application deployment:
[...]
  containers:
  - args:
    - -provider=openshift
    - -proxy-prefix=/oauth2
    - -skip-auth-regex=^/public/
    - -login-url=https://console.example.com/oauth/authorize
    - -redeem-url=https://kubernetes.default.svc/oauth/token
    - -https-address=:8443
    - -http-address=
    - -email-domain=*
    - -upstream=http://localhost:3000
    - -tls-cert=/etc/tls/private/tls.crt
    - -tls-key=/etc/tls/private/tls.key
    - -client-id=system:serviceaccount:my-project:my-app
    - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
    - -cookie-refresh=0
    - -cookie-expire=24h0m0s
    - -cookie-name=_oauth2_mycookiename
    name: oauth-proxy
    image: docker.io/openshift/oauth-proxy:latest
    [ resource requests/limits & liveness/readiness placeholder ]
    volumeMounts:
    - name: cert
      mountPath: /etc/tls/private
  [ your application container placeholder ]
  serviceAccount: my-app
  serviceAccountName: my-app
  volumes:
  - name: cert
    secret:
      secretName: my-app-tls
Assuming:
console.example.com is your OpenShift public API FQDN
^/public/ is an optional path regexp that should not be subject to authentication
http://localhost:3000 is the URL the OAuth proxy connects to; fix the port to whichever your application binds to. Also consider reconfiguring your application to bind to its loopback interface only, preventing access that would bypass your OAuth proxy
my-project / my-app are your project & app names
Make sure your Route's TLS termination is set to Passthrough and that it sends its traffic to port 8443, for example:
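Here is a sketch of such a Route (the name matches the my-route-name referenced in the ServiceAccount annotation above):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route-name
  namespace: my-project
spec:
  port:
    targetPort: oauth          # the 8443 port named in the Service above
  tls:
    termination: passthrough
  to:
    kind: Service
    name: my-app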
Check out https://github.com/openshift/oauth-proxy for an exhaustive list of options. The openshift-sar one could be interesting: it filters which users may log in based on their permissions over OpenShift objects.
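For instance, a sketch based on the oauth-proxy README (the exact SubjectAccessReview below is an assumption to adapt): adding an argument like the following to the sidecar would only let in users who are allowed to get services in my-project:

  - args:
    [ ... existing oauth-proxy args ... ]
    - '-openshift-sar={"namespace":"my-project","resource":"services","verb":"get"}'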
Obviously, in some cases you would be deploying applications that could be configured to integrate with some OAuth provider: then there's no reason to use a proxy.
Basic Auth
In cases where you do not want to integrate with OpenShift users or any kind of third-party provider, you may want to use an nginx sidecar container - there are plenty available on Docker Hub.
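As a rough sketch (the ConfigMap name, listen port, and upstream port are assumptions), such a sidecar could be fed a configuration like this, enforcing Basic Auth against an htpasswd file mounted from a Secret before proxying to the application:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-basic-auth
  namespace: my-project
data:
  default.conf: |
    server {
      listen 8080;
      location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd/.htpasswd;   # mounted from a Secret
        proxy_pass           http://localhost:3000;           # your application
      }
    }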
Custom Ingress
Also note that with OpenShift 3.x, cluster admins may customize the template used to generate HAProxy configuration, patching their ingress deployment. This is a relatively common way to fix some headers, handle some custom annotations for your users to set in their Routes, ... Look for TEMPLATE_FILE in https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html#env-variables.
As of right now, I do not think OpenShift 4 ingress controllers would allow for this - as it is driven by an operator, with limited configuration options, see https://docs.openshift.com/container-platform/4.5/networking/ingress-operator.html.
Although, as with OpenShift 3, you may very well deploy your own ingress controllers, should you have to handle such edge cases.

Traefik v2 [how to route to specific port]

I'm trying to start migrating my backends to be compatible with Traefik v2.0.
The old configuration was:
labels:
  - traefik.port=8500
  - traefik.docker.network=proxy
  - traefik.frontend.rule=Host:consul.{DOMAIN}
I assumed the network label is not necessary anymore, and that the rule would change in the new Traefik to:
- traefik.http.routers.consul-server-bootstrap.rule=Host('consul.scoob.thrust.com.br')
But how do I specify that this should forward to my backend at port 8500, and not port 80 where the Traefik entrypoint was reached?
My goal would be to accomplish something like this:
https://docs.traefik.io/user-guide/cluster-docker-consul/#migrate-configuration-to-consul
Is it still possible?
I saw there is no --consul or storeconfig command in v2.0.
You need traefik.http.services.{SERVICE}.loadbalancer.server.port
labels:
  - "traefik.http.services.{SERVICE}.loadbalancer.server.port=8500"
  - "traefik.docker.network=proxy"
  - "traefik.http.routers.{SERVICE}.rule=Host(`{DOMAIN}`)"
Replace {SERVICE} with the name of your service.
Replace {DOMAIN} with your domain name.
If you want to remove the proxy network you'll need to look at https://docs.traefik.io/v2.0/providers/docker/#usebindportip
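For context, here is a minimal docker-compose sketch putting those labels together; the service name consul-server-bootstrap is taken from the question, while the image and the domain consul.example.com are placeholders to replace with your own:

version: "3.7"

services:
  consul-server-bootstrap:
    image: consul:latest        # assumed image
    networks:
      - proxy
    labels:
      - "traefik.http.routers.consul-server-bootstrap.rule=Host(`consul.example.com`)"
      - "traefik.http.services.consul-server-bootstrap.loadbalancer.server.port=8500"
      - "traefik.docker.network=proxy"

networks:
  proxy:
    external: true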

Spinnaker Gateway Endpoint

I'm working with Spinnaker to create a new CD pipeline.
I've deployed Halyard in a Docker container on my computer, and used it to deploy Spinnaker to Google Kubernetes Engine.
After that, I prepared a new Ingress YAML file, shown below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-cloud
  namespace: spinnaker
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: spin-deck
          servicePort: 9000
After accessing the Spinnaker UI via a public IP, I got the error shown below.
Error fetching applications. Check that your gate endpoint is accessible.
After that, I checked the docs and ran the commands shown below.
First, I checked the service data on my K8s cluster:
spin-deck NodePort 10.11.245.236 <none> 9000:32111/TCP 1h
spin-gate NodePort 10.11.251.78 <none> 8084:31686/TCP 1h
For UI
hal config security ui edit --override-base-url "http://spin-deck.spinnaker:9000"
For API
hal config security api edit --override-base-url "http://spin-gate.spinnaker:8084"
After running these commands and redeploying Spinnaker, the error repeated itself.
How can I solve the problem of accessing the Spinnaker gate from the UI?
--override-base-url should be populated without the port.
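One way to make port-less base URLs reachable (an assumption, not part of the original answer) is to expose spin-gate alongside spin-deck, for example on separate hostnames, and point the override URLs at those hosts. The hostnames below are hypothetical:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spinnaker
  namespace: spinnaker
spec:
  rules:
  - host: spinnaker.example.com          # hypothetical UI host
    http:
      paths:
      - backend:
          serviceName: spin-deck
          servicePort: 9000
  - host: gate.spinnaker.example.com     # hypothetical API host
    http:
      paths:
      - backend:
          serviceName: spin-gate
          servicePort: 8084

With something like this in place, the override base URLs would be set to http://spinnaker.example.com and http://gate.spinnaker.example.com respectively.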

How to connect to kubernetes-api from a browser?

I set up a Kubernetes cluster following this tutorial: https://coreos.com/kubernetes/docs/latest/deploy-master.html
When I open https://my_ip in the browser, I get Unauthorized.
What do I need to do to access the API?
kubectl config view:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/hhh/ca.pem
    server: https://192.168.0.139
  name: default-cluster
contexts:
- context:
    cluster: hhh-cluster
    user: hhh
  name: default-system
current-context: default-system
kind: Config
preferences: {}
users:
- name: cluster-hhh
  user:
    password: admin
    username: admin
- name: default-admin
  user:
    client-certificate: /home/hhh/admin.pem
    client-key: /home/hhh/admin-key.pem
Basic auth does not work.
Does basic auth work when using kubectl (it's unclear from your output which client credentials are working when connecting to your cluster's apiserver)?
Are you passing --basic-auth-file to your kube-apiserver process when starting it (see https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/server.go#L218)? If not, then basic auth will not work when connecting to your apiserver. If so, you can verify that it is working by running curl -k --user admin:admin https://192.168.0.139.
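As a hedged illustration (the file path, image placeholder, and surrounding flags are assumptions, and this flag only existed on older apiserver versions), the relevant fragment of a kube-apiserver static pod manifest might look like:

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: [ hyperkube image placeholder ]
    command:
    - /hyperkube
    - apiserver
    - --basic-auth-file=/etc/kubernetes/basic_auth.csv   # CSV lines of "password,user,uid"
    [ other apiserver flags placeholder ]
    volumeMounts:
    - name: etc-kubernetes
      mountPath: /etc/kubernetes
      readOnly: true
  volumes:
  - name: etc-kubernetes
    hostPath:
      path: /etc/kubernetes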
If you want (or need) to use client certificates from your browser, take a look at the instructions I put into this github issue about making it easier to configure.
