Spent a long time debugging this issue
I am trying to deploy a Ghost Docker instance, backed by a MySQL database, on the GKE platform. Here are the deployment and service kube files for both. One by one, I kubectl apply -f <config.yml> each YAML file in the following order:
ssd-storageclass.yml - create an SSD storage class (a rough sketch of this and the MySQL claim follows this list).
pvc-mysql.yml - create a PersistentVolumeClaim for MySQL.
pvc-ghost.yml - create a PersistentVolumeClaim for Ghost.
deploy-mysql.yml - create a MySQL deployment.
service-mysql.yml - expose the MySQL instance.
deploy-ghost.yml - create a Ghost deployment.
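For illustration, the storage class and MySQL claim are along these lines (simplified; the names ssd and mysql-pvc and the 10Gi size here are just placeholders, not the actual values):

# ssd-storageclass.yml - SSD-backed storage class on GKE (sketch; names and size are placeholders)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
# pvc-mysql.yml - claim a volume from the ssd class for MySQL
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  storageClassName: ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi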
Once done, I expose the Ghost deployment via a LoadBalancer on port 80 and get an xx.xx.xx.xx IP address for the LB. I am able to access Ghost on the generated IP address.
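The LoadBalancer Service is along these lines (simplified sketch; the Service name and selector label are placeholders, and 2368 is Ghost's default container port):

# Sketch of the LoadBalancer Service exposing Ghost (name and labels are placeholders)
apiVersion: v1
kind: Service
metadata:
  name: ghost-lb
spec:
  type: LoadBalancer
  selector:
    app: ghost
  ports:
    - port: 80          # external port on the LB IP
      targetPort: 2368  # Ghost's default HTTP port inside the container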
Accessing the site this way works as long as I have the env variable url in deploy-ghost.yml set to "http://www.limosyn.com".
[...]
containers:
  env:
  [...]
  - name: database__connection__database
    value: mysql
  - name: url
    value: "http://www.limosyn.com"
[...]
Surprisingly, when I change the protocol from http to https, i.e. https://www.limosyn.com, I am no longer able to access the deployment on the LB-assigned IP. The problem goes away when I change back to http.
I have tried dozens of permutations and combinations, with and without https, doing clean deployments, etc. The situation remains the same: it never works with https.
Previously, I had the same infra deployed via docker-compose on a single VM instance with an https base URL, and that worked. I am facing this issue with Kubernetes only.
You can easily reproduce the scenario if you have a cluster lying around.
Would really appreciate a resolution
I am trying to automate the process of dynamically bringing up two containers in a Kubernetes cluster using open-source images. Since the images are third-party images, I have some limitations on what can be configured. I also need these containers to come up inside different pods.
For the sake of this discussion, I will call these containers a.domain.com and b.domain.com. Container A and container B need to communicate back and forth, and this communication is secured using TLS certificates.
To enable this communication, I have to add the following snippet to the spec of my Kubernetes deployment doc.
# deployment doc for a.domain.com
spec:
  hostAliases:
  - ip: <Insert IP address for b.domain.com>
    hostnames:
    - "b.domain.com"

# deployment doc for b.domain.com
spec:
  hostAliases:
  - ip: <Insert IP address for a.domain.com>
    hostnames:
    - "a.domain.com"
If this code is missing, I get the following errors:
Error on container a.domain.com: No such host - b.domain.com
Error on container b.domain.com: No such host - a.domain.com
Since both my containers have to come up together, I cannot hardcode the IP addresses in the YAML files.
Is there any way I can add a parameter to the deployment docs for these containers that allows me to deterministically pre-configure the IP addresses the pods use when they come up?
Posting the OP's comment as an answer (community wiki):
I finally figured it out. Using service-name.namespace instead of service-name.namespace.svc.cluster.local solved the issue for me.
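In other words, instead of pinning IPs with hostAliases, each pod can sit behind a Service and be addressed by its cluster DNS name. A minimal sketch, assuming hypothetical names svc-b / pod-b in a namespace called domain and TLS on port 443:

# Sketch: a Service fronting the pod behind b.domain.com
# (names, namespace, labels and port here are assumptions for illustration)
apiVersion: v1
kind: Service
metadata:
  name: svc-b
  namespace: domain
spec:
  selector:
    app: pod-b
  ports:
    - port: 443
      targetPort: 443

Pods in other namespaces can then reach it as svc-b.domain (the short service-name.namespace form mentioned above) without any IP being known in advance.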
I have a Dockerfile, and using that I have built the image and then used the EKS service to launch the containers. Now in my application, for logging purposes, I read environment variables like "container_instance" and "ec2_instance_id" and log them so that I can see in Elasticsearch which container or host EC2 machine a given log came from.
How can I set these two values as environment variables when I start my container?
In your Kubernetes pod spec, you can use the downward API to inject some of this information. For example, to get the Kubernetes node name, you can set:
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
The node name is typically a hostname for the node (this example in the EKS docs shows EC2 internal hostnames). You can't easily get things like an EC2 instance ID at a per-pod level.
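Other pod- and node-level fields can be injected the same way; a short sketch of a few commonly available fieldRef paths (the variable names are arbitrary):

env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name       # the pod's name
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace  # the pod's namespace
  - name: MY_HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP       # IP address of the node running the pod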
You also might configure logging globally at a cluster level. The Kubernetes documentation includes a packaged setup to route logs to Elasticsearch and Kibana. The example shown there includes only the pod name in the log message metadata, but you should be able to reconfigure the underlying fluentd to include additional host-level metadata.
I have built a .NET Core Azure Function using a ServiceBusTrigger. The function works fine when deployed in a regular App Service plan once the appropriate application settings, such as the Service Bus connection string, are configured.
However, I would prefer to host the function as a Docker container on Azure Kubernetes Service (AKS). I have AKS set up and have a number of .NET Core Docker containers running fine there, including some Azure Functions on TimerTriggers.
When I deploy the function using the ServiceBusTrigger, it fails to run properly and I get "Function host is not running." when I visit the function's IP address. I believe this is because the app settings are not being found.
The problem is that I do not know how to include them when hosting in the Docker/Kubernetes environment. I've tried including the appropriate ENV entries in the Dockerfile, but then I cannot find the corresponding values in the deployment YAML viewed via the Kubernetes dashboard after I've successfully run func deploy from PowerShell.
Most of the Microsoft documentation addresses the TimerTrigger and HttpTrigger cases, but I can find little on the ServiceBusTrigger when using Docker/Kubernetes.
So, how do I include the appropriate app settings with my deployment?
From this blog, Playing with Azure Functions kubernetes integration, you can find a description of how to add environment variables.
In the deployment.yml, add the env section (for example, AzureWebJobsStorage as an environment variable):
containers:
  - image: tsuyoshiushio/queuefunction-azurefunc
    imagePullPolicy: Always
    name: queuefunction-deployment
    env:
      - name: AzureWebJobsStorage
        value: YOUR_STORAGE_ACCOUNT_CONNECTION_STRING_HERE
    ports:
      - containerPort: 80
        protocol: TCP
Then apply it and it should work.
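Rather than putting connection strings in plain text in the deployment YAML, they can also be pulled from a Secret. A sketch, assuming a Secret named func-secrets with keys AzureWebJobsStorage and ServiceBusConnection (the secret and key names are illustrative, not from the original setup):

env:
  - name: AzureWebJobsStorage
    valueFrom:
      secretKeyRef:
        name: func-secrets            # hypothetical Secret name
        key: AzureWebJobsStorage
  - name: ServiceBusConnection
    valueFrom:
      secretKeyRef:
        name: func-secrets
        key: ServiceBusConnection     # key matching the connection setting your trigger expects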
I have a local Kubernetes cluster set up using the edge release of Docker (Mac). My pods use an env var that I've defined to be my DB's URL. These env vars are defined in a ConfigMap as:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  DB_URL: postgres://user@localhost/my_dev_db?sslmode=disable
What should I be using here instead of localhost? I need this env var to point to my local dev machine.
You can use the private LAN address of your computer, but please ensure that your database software is listening on all network interfaces and there is no firewall blocking incoming traffic.
If your LAN address is dynamic, you could use an internal DNS name pointing to your computer if your network setup provides one.
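For example, the ConfigMap would simply point at your machine's LAN address instead of localhost (a sketch; 192.168.1.50 is a placeholder, and on Docker Desktop the special name host.docker.internal may also resolve to the host):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  # 192.168.1.50 is a placeholder for your machine's LAN address;
  # on Docker Desktop, host.docker.internal may also work here
  DB_URL: postgres://user@192.168.1.50/my_dev_db?sslmode=disable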
Another option is to run your database inside the Kubernetes cluster: this way you could use its Service name as the hostname.
Option 1 - Local Networking Approach
If you are running minikube, I would recommend taking a look at the answers to this question: Routing an internal Kubernetes IP address to the host system
Option 2 - Tunneling Solution: Connect to an External Service
A very simple but slightly hacky solution would be to use a tunneling tool like ngrok: https://ngrok.com/
Option 3 - Cloud-native Development (run everything inside k8s)
If you plan to follow the suggestion of whites11, you could make your life a lot easier by using a Kubernetes-native dev tool such as DevSpace (https://github.com/covexo/devspace) or Draft (https://github.com/Azure/draft). Both work with minikube or other self-hosted clusters.
I've been working on using Rancher to manage our dashboard applications. Part of this has involved exposing multiple Kibana containers through the same load balancer, with one Kibana 3 container exposed on port 80.
I therefore want to send requests on specific ports (5602, 5603, 5604) to specific containers, so I set up the following docker-compose.yml config:
kibana:
  image: rancher/load-balancer-service
  ports:
    - 5602:5602
    - 5603:5603
    - 5604:5604
  links:
    - kibana3:kibana3
    - kibana4-logging:kibana4-logging
    - kibana4-metrics:kibana4-metrics
  labels:
    io.rancher.loadbalancer.target.kibana3: 5602=80
    io.rancher.loadbalancer.target.kibana4-logging: 5603=5601
    io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601
Everything works as expected, but I get sporadic 503s. When I go into the container and look at the haproxy.cfg, I see:
frontend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_frontend
        bind *:5603
        mode http
        default_backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
        mode http
        timeout check 2000
        option httpchk GET /status HTTP/1.1
        server cbc23ed9-a13a-4546-9001-a82220221513 10.42.60.179:5603 check port 5601 inter 2000 rise 2 fall 3
        server 851bdb7d-1f6b-4f61-b454-1e910d5d1490 10.42.113.167:5603
        server 215403bb-8cbb-4ff0-b868-6586a8941267 10.42.85.7:5601
The IPs listed are those of all three Kibana containers. The first server has a health check on it, but the others do not (Kibana 3 and Kibana 4.1 don't have a status endpoint). My understanding of the docker-compose config is that there should be only one server per backend, yet all three are listed. I assume this is at least partly behind the sporadic 503s, and removing the extra servers manually and restarting the haproxy service does seem to solve the problem.
Am I configuring the load balancer incorrectly, or is this worth raising as a GitHub issue with Rancher?
I posted on the Rancher forums, as that was suggested by Rancher Labs on Twitter: https://forums.rancher.com/t/load-balancer-sporadic-503s-with-multiple-port-bindings/2358
Someone from Rancher posted a link to a GitHub issue similar to what I was experiencing: https://github.com/rancher/rancher/issues/2475
In summary, the load balancers will rotate through all matching backends. There is a workaround involving "dummy" domains, which I've confirmed does work with my configuration, even if it is slightly inelegant.
labels:
  # Create a rule that forces all traffic to redis at port 3000 to have a hostname of bogus.com
  # This eliminates any traffic from port 3000 to be directed to redis
  io.rancher.loadbalancer.target.conf/redis: bogus.com:3000
  # Create a rule that forces all traffic to api at port 6379 to have a hostname of bogus.com
  # This eliminates any traffic from port 6379 to be directed to api
  io.rancher.loadbalancer.target.conf/api: bogus.com:6379
(^^ Copied from the Rancher GitHub issue, not my workaround)
I'm going to see how easy it would be to route by port and will raise a PR/GitHub issue, as I think it's a valid use case for an LB in this scenario.
Make sure that you are using the port initially exposed on the Docker container. For some reason, if you bind it to a different port, HAProxy fails to work. If you are using a container from Docker Hub that uses a port already taken on your system, you may have to rebuild that container to use a different port by routing it through a proxy like nginx.