Limit access to AKS cluster IP from internal Azure traffic - azure-aks

How to limit access to AKS cluster IP from internal Azure traffic using NSG on the load balancer/application gateway?

You can limit access to your AKS API server by setting authorized IP ranges:
# Existing Cluster
az aks update \
--name myAKSCluster \
--resource-group myResourceGroup \
--api-server-authorized-ip-ranges 73.140.245.0/24
# New Cluster
az aks create \
--name MyAKSCluster \
--resource-group MyResourceGroup \
--api-server-authorized-ip-ranges 73.140.245.0/24
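To confirm the ranges were applied, you can query the cluster afterwards (a quick sketch, assuming the cluster name and resource group used above):
az aks show \
--name myAKSCluster \
--resource-group myResourceGroup \
--query apiServerAccessProfile.authorizedIpRanges \
--output tsv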
You can find good documentation here.
NOTE: this only affects access to the cluster's API server, i.e. kubectl, deployment pipelines, etc.
If you are talking about limiting the access to your ingress controller, please leave a comment.
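In the meantime, here is a rough sketch of how an NSG-based restriction on the ingress/load balancer ports might look, assuming a hypothetical NSG named myAKSNsg attached to the node subnet and 10.0.0.0/8 as the internal range to allow:
az network nsg rule create \
--resource-group myResourceGroup \
--nsg-name myAKSNsg \
--name AllowInternalHttp \
--priority 100 \
--direction Inbound \
--access Allow \
--protocol Tcp \
--source-address-prefixes 10.0.0.0/8 \
--destination-port-ranges 80 443
az network nsg rule create \
--resource-group myResourceGroup \
--nsg-name myAKSNsg \
--name DenyInternetHttp \
--priority 200 \
--direction Inbound \
--access Deny \
--protocol Tcp \
--source-address-prefixes Internet \
--destination-port-ranges 80 443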

Related

keycloak dynamic client registration registration_client_uri docker hostname override

I have successfully created a client inside Keycloak using Dynamic Client Registration.
The response body contains:
"registration_client_uri":"https://127.0.0.1:8443/auth/realms...",
This is because Keycloak is installed with Docker, and is fronted by NginX. I want to replace the IP address/port with the actual public hostname.
Where are the docs / configurations for this?
I started keycloak as follows:
docker run -itd --name keycloak \
--restart unless-stopped \
--env-file keycloak.env \
-p 127.0.0.1:8443:8443 \
--network keycloak \
jboss/keycloak:11.0.0 \
-Dkeycloak.profile=preview
And inside keycloak.env, I have set KEYCLOAK_HOSTNAME=example.com
Configure the env variable PROXY_ADDRESS_FORWARDING=true, because Keycloak is running behind an Nginx reverse proxy - https://hub.docker.com/r/jboss/keycloak/
Enabling proxy address forwarding
When running Keycloak behind a proxy, you will need to enable proxy address forwarding.
docker run -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak
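Applied to the docker run command from the question, that would look like this (everything else unchanged; alternatively, add PROXY_ADDRESS_FORWARDING=true to keycloak.env):
docker run -itd --name keycloak \
--restart unless-stopped \
--env-file keycloak.env \
-e PROXY_ADDRESS_FORWARDING=true \
-p 127.0.0.1:8443:8443 \
--network keycloak \
jboss/keycloak:11.0.0 \
-Dkeycloak.profile=preview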

Calling an application outside cluster from a pod

There is a web service app running on a Compute Engine instance, and a GKE cluster in the same network.
Is it possible for a pod in the cluster to call the web service app using the internal IP address of the web service app?
Your answer will be appreciated.
Thanks.
TL;DR
Yes, it's possible.
Assuming that you are talking about the internal IP address of your VM, you will need to create a firewall rule allowing traffic from the pod address range to your VM.
Example
Assuming that:
There is a Compute Engine instance named nginx, configured to serve on port 80.
There is a Kubernetes Engine cluster within the same network as your GCE instance.
You will need to check the pod IP address range of your GKE cluster. You can do it either in the Cloud Console (Web UI) or with a gcloud command like below:
$ gcloud container clusters describe CLUSTER-NAME --zone=ZONE | grep -i "clusterIpv4Cidr"
The firewall rule can be created either in the Cloud Console (Web UI) or with a gcloud command like below:
gcloud compute --project=PROJECT-ID firewall-rules create pod-to-vm \
--direction=INGRESS --priority=1000 --network=default \
--action=ALLOW --rules=tcp:80 --source-ranges=clusterIpv4Cidr \
--target-tags=pod-traffic
Disclaimer!
Enter the value from the last command (describe cluster) in place of clusterIpv4Cidr.
You will need to add pod-traffic to your VM's network tags!
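A quick way to do that with gcloud (a sketch, assuming the instance name nginx and zone ZONE from above):
gcloud compute instances add-tags nginx --zone=ZONE --tags=pod-traffic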
After that you can spawn a pod and check if you can communicate with your VM:
$ kubectl run -it ubuntu --image=ubuntu -- /bin/bash
$ apt update && apt install -y curl dnsutils
You can communicate with your VM with GKE pods by either:
IP address of your VM:
root@ubuntu:/# curl IP_ADDRESS
REDACTED
<p><em>Thank you for using nginx.</em></p>
REDACTED
Name of your VM (nginx):
root@ubuntu:/# curl nginx
REDACTED
<p><em>Thank you for using nginx.</em></p>
REDACTED
You can also check if the name is correctly resolved by running:
root@ubuntu:/# nslookup nginx
Server: DNS-SERVER-IP
Address: DNS-SERVER-IP#53
Non-authoritative answer:
Name: nginx.c.PROJECT_ID.internal
Address: IP_ADDRESS
Additional resources:
Stackoverflow.com: Unable to connect from gke to gce

Docker Swarm vs. Docker Cluster

I created a swarm cluster via
docker-machine -d azure --swarm --swarm-master --swarm-discovery token://SWARM_CLUSTER_TOKEN my-swarm-master
and
docker-machine -d azure --swarm --swarm-discovery token://SWARM_CLUSTER_TOKEN my-node-01
After that, I logged into cloud.docker.com - but when I click on Node Clusters or Nodes I can't see my swarm.
So is swarm (via command line) and cluster (via cloud.docker.com) not the same thing? What's the difference and when should I use which one?
Edit:
Yes, my Azure subscription is added in cloud.docker.com under Cloud Settings.
They are separate. The docker-machine commands you ran (starting with your first one) create a self-hosted swarm that you manage yourself. Docker Cloud creates an environment that is managed for you on Docker's infrastructure. Without access to the token used by your swarm, Docker Cloud won't know about its nodes.

Adding services in different consul clients running on same host

I've followed the section on Testing a Consul cluster on a single host using Consul. Three Consul servers were successfully added and are running on the same host for testing purposes. Afterwards, I also followed the tutorial and created a Consul client node4 to expose ports. Is it possible to add more services and bind them to one of those Consul clients?
Use the new 'swarm mode' instead of the legacy Swarm. Swarm mode doesn't require Consul: service discovery and the key/value store are now part of the Docker daemon. Here's how to create a highly available three-node cluster (3 masters).
Create three nodes
docker-machine create --driver vmwarefusion node01
docker-machine create --driver vmwarefusion node02
docker-machine create --driver vmwarefusion node03
Find the IP of node01
docker-machine ls
Set one as the initial swarm master
docker $(docker-machine config node01) swarm init --advertise-addr <ip-of-node01>
Retrieve the token to let other nodes join as master
docker $(docker-machine config node01) swarm join-token manager
This will print out something like
docker swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
Add the other two nodes to the swarm as masters
docker $(docker-machine config node02) swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
docker $(docker-machine config node03) swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
Examine the swarm
docker node ls
You should now be able to shut down the leader node and see another one take over as manager.
Best practice for Consul is to run one Consul agent per host, and when you want to talk to Consul, you always talk locally. In general, everything one Consul node knows, every other Consul node also knows, so you can just talk to your localhost Consul agent (127.0.0.1:8500) and do everything you need to do. When you add services, you add them to the local Consul node that runs the service's process. There are projects like Registrator (https://github.com/gliderlabs/registrator) that will automatically register services from running Docker containers, which makes life easier.
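For example, registering a service against the local agent's HTTP API could look like this (a sketch; the service name web and port 80 are placeholders):
curl --request PUT \
--data '{"Name": "web", "Port": 80}' \
http://127.0.0.1:8500/v1/agent/service/register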
Overall, welcome to Consul, it's great stuff!

Unable to connect instances of docker-machine to pull docker images running behind a corporate proxy

I'm trying to configure Docker swarm mode using docker-machine. I have set up a cluster of 4 nodes connected to each other. However, the instances created by docker-machine cannot reach the outside world to pull images, because they are running behind a corporate proxy.
When you create your boot2docker machines, you need to make sure you have set your corporate proxy:
docker-machine create -d virtualbox \
--engine-env HTTP_PROXY=http://example.com:8080 \
--engine-env HTTPS_PROXY=https://example.com:8080 \
--engine-env NO_PROXY=example2.com \
default
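To check that the proxy settings reached the daemon and that image pulls work, something like the following should do (a sketch, assuming the machine is named default as above):
docker-machine ssh default "docker info | grep -i proxy"
eval $(docker-machine env default)
docker pull hello-world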

Resources