I have built a .NET Core Azure Function using a ServiceBusTrigger. The function works fine when deployed in a regular App Service plan once the appropriate Application settings, such as the Service Bus connection string, are configured.
However, I would prefer to host the function as a Docker container on Azure Kubernetes Service (AKS). I have AKS set up and have a number of .NET Core Docker containers running fine there, including some Azure Functions on TimerTriggers.
When I deploy the function using the ServiceBusTrigger, it fails to run properly and I get "Function host is not running." when I visit the function's IP address. I believe this is because the app settings are not being found.
The problem is that I do not know how to include them when hosting in the Docker/Kubernetes environment. I've tried including the appropriate ENV entries in the Dockerfile, but then I cannot find the corresponding values in the deployment YAML viewed via the Kubernetes dashboard after I've successfully run func deploy from PowerShell.
Most of the Microsoft documentation addresses the TimerTrigger and HttpTrigger cases, but I can find little on the ServiceBusTrigger when using Docker/Kubernetes.
So, how do I include the appropriate app settings with my deployment?
From this blog: Playing with Azure Functions kubernetes integration, you can find a description of how to add environment variables.
In the deployment.yml, add an env section (for example, AzureWebJobsStorage as an environment variable):
containers:
- image: tsuyoshiushio/queuefunction-azurefunc
  imagePullPolicy: Always
  name: queuefunction-deployment
  env:
  - name: AzureWebJobsStorage
    value: YOUR_STORAGE_ACCOUNT_CONNECTION_STRING_HERE
  ports:
  - containerPort: 80
    protocol: TCP
Then apply it and it will work.
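For the ServiceBusTrigger specifically, a Kubernetes Secret keeps the connection string out of the plain deployment YAML. This is only a sketch: the Secret name func-secrets is made up, and the setting name ServiceBusConnection is an assumption that has to match whatever Connection name your trigger attribute references.

apiVersion: v1
kind: Secret
metadata:
  name: func-secrets
type: Opaque
stringData:
  ServiceBusConnection: YOUR_SERVICE_BUS_CONNECTION_STRING_HERE

Then reference it from the container spec alongside the storage setting:

env:
- name: AzureWebJobsStorage
  value: YOUR_STORAGE_ACCOUNT_CONNECTION_STRING_HERE
- name: ServiceBusConnection
  valueFrom:
    secretKeyRef:
      name: func-secrets
      key: ServiceBusConnection

Apply both with kubectl apply -f deployment.yml and the function host should pick the settings up as environment variables.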
I set up a Cloud Run instance with gRPC and HTTP2. It works well. But, I'd like to open a new port externally and route that traffic to gRPC over HTTPS.
This is the current YAML for the container:
- image: asia.gcr.io/assets-320007/server5c2b87a8444cb42e566e130a907015df7dd841b4
  ports:
  - name: h2c
    containerPort: 5000
  resources:
    limits:
      cpu: 1000m
      memory: 512Mi
I cannot add new ports because if I do, I get:
spec.template.spec.containers[0].ports should contain 0 or 1 port (field: spec.template.spec.containers[0].ports)
Also, the YAML doesn't specify a forwarding port. It seems to just assume that you would only ever set up one port, which automatically routes to the one open port on the Docker container. Is that true?
Note: it would be really nice if the YAML came with reference documentation or a schema. That way we could tell what all the possible permutations could be.
Yes, you can only expose one port for a Cloud Run service.
I also find this a curious limitation.
I'm deploying services that use gRPC and expose Prometheus metrics, and I have been able to multiplex both HTTP/2 and HTTP/1 services on a single port, but it requires additional work and is inconsistent with the Kubernetes model that conceptually underlies Cloud Run.
An excellent feature of GCP is comprehensive and current documentation. Here's Cloud Run Service.
NOTE: Found using the APIs Explorer (https://developers.google.com/apis-explorer) and then locating the Cloud Run Admin API.
There are some differences between these Knative types and the similar Kubernetes types. One approach I've used is to deploy a known-good service using e.g. gcloud and then compare your spec against the YAML produced by that service.
For example, off the top of my head, container ports can't have arbitrary names but must be e.g. http1 (see link).
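As a sketch of that compare-against-a-known-good-service approach (the service name and region below are placeholders, not values from the question):

# Export the YAML of a service that already works, e.g. one deployed from the console
gcloud run services describe my-service --region asia-northeast1 --format export > service.yaml
# Edit service.yaml locally, then push it back
gcloud run services replace service.yaml --region asia-northeast1

Diffing your hand-written YAML against the exported one quickly shows which fields and port names Cloud Run will actually accept.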
I've spent a long time debugging this issue.
I am trying to deploy a Ghost Docker instance, backed by a MySQL DB, on GKE. Here are the deployment and service kube files for both. One by one, I run kubectl apply -f <config.yml> for each YAML file in the following order:
ssd-storageclass.yml - create an ssd storage class.
pvc-mysql.yml - create a PVC Claim for MySQL.
pvc-ghost.yml - create a PVC Claim for ghost.
deploy-mysql.yml - create a MySQL deployment.
service-mysql.yml - expose MySQL instance.
deploy-ghost.yml - create ghost deployment.
Once done, I expose the ghost deployment via a load balancer on port 80 and get an xx.xx.xx.xx IP address for the LB. I am able to access Ghost on the generated IP address.
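For reference, a sketch of this kind of exposure command, assuming the deployment is named ghost and that Ghost listens on its default port 2368 (both assumptions, not the exact command from the question):

kubectl expose deployment ghost --type=LoadBalancer --port=80 --target-port=2368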
This works as long as I have the env variable url in deploy-ghost.yml set to "http://www.limosyn.com":
[...]
containers:
  env:
  [...]
  - name: database__connection__database
    value: mysql
  - name: url
    value: "http://www.limosyn.com"
[...]
Surprisingly, when I change the protocol from http to https, i.e. https://www.limosyn.com, I am no longer able to access the deployment on the LB-assigned IP. The problem goes away when I change back to http.
I have tried dozens of permutations and combinations, with and without https, doing clean deployments, etc. The situation remains the same: it never works with https.
Previously I had the same infra deployed via docker-compose on a single VM instance with an https base URL, and that worked. I am facing this issue with Kubernetes only.
You can easily reproduce the scenario if you have a cluster lying around.
I would really appreciate a resolution.
I'm creating an app that will have to communicate with a Kubernetes service via REST APIs. The service hosts a Docker image that's listening on port 8080 and responds with a JSON body.
I noticed that when I expose my deployment via:
kubectl expose deployment myapp --target-port=8080 --type=NodePort --name=app-service
It then creates a service named app-service.
To then test this locally, I obtain the IP:port for the created service via:
minikube service app-service --url
I'm using minikube for my local development efforts. I then get a response such as http://172.17.118.68:31970/, which, when I enter it in my browser, works fine (I get the JSON responses I'm expecting).
However, it seems the IP & port for that service are always different whenever I start this service up.
Which leads to my question: how is a mobile app supposed to find that new IP:port if it's subject to change? Is the common way to work around this to register that combination via a DNS server (such as Google Cloud's DNS system)?
Or am I missing a step here with setting up Kubernetes public services?
Which leads to my question: how is a mobile app supposed to find that new IP:port if it's subject to change?
minikube is not meant for production use; it is only meant for development purposes. You should create a real Kubernetes cluster and use a LoadBalancer-type service or an Ingress (for L7 traffic) to expose your service to the external world. Since you need to expose your backend REST API, an Ingress is a good choice.
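A minimal Ingress sketch for the service from the question; the host name and the assumption that the Service port is 8080 are illustrative only:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: api.example.com          # placeholder domain you control
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service      # the service created by kubectl expose
            port:
              number: 8080

With a DNS record for that host pointing at the ingress controller's external IP, the mobile app always talks to a stable name rather than a changing IP:port.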
I just tried setting up Kubernetes on my bare-metal server.
Previously I had successfully created my docker-compose setup.
There are several apps:
App A (Docker image name: a-service)
App B (Docker image name: b-service)
Inside applications A and B there are configs (actually there are apps A, B, C, D, etc.; lots of them).
The config file is something like this:
IPFORSERVICEA=http://a-service:port-number/path/to/something
IPFORSERVICEB=http://b-service:port-number/path/to/something
At least the above config works in docker-compose (the config lives at the app level, and each app needs to reach the other apps). Is there any way for me to access one Kubernetes Service from another service? I am planning to create one app per Deployment, and one Service for each Deployment.
Something like:
App -> Deployment -> Service (i.e. NodePort, ClusterIP)
Thanks!
Is there any way for me to access one Kubernetes Service from another service?
Yes, you just need to specify the DNS name of the service you want to connect to (type: ClusterIP works fine for this), in the form:
<service_name>.<namespace>.svc.cluster.local
Such a domain name will be correctly resolved into the internal IP address of the target service by the cluster's built-in DNS.
For example:
nginx-service.web.svc.cluster.local
where nginx-service is the name of your service and web is the app's namespace, so the service YAML definition can look like:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: web
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
  selector:
    app: nginx
  type: ClusterIP
See the official docs for more information.
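Applied to the asker's config, and assuming both services live in a namespace called apps (an assumption for illustration), the entries could become:

IPFORSERVICEA=http://a-service.apps.svc.cluster.local:port-number/path/to/something
IPFORSERVICEB=http://b-service.apps.svc.cluster.local:port-number/path/to/something

If the caller runs in the same namespace, the short names a-service and b-service also resolve, which keeps the config essentially identical to the docker-compose version.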
Use Kubernetes service discovery.
Service discovery is the process of figuring out how to connect to a service. While there is a service discovery option based on environment variables available, the DNS-based service discovery is preferable. Note that DNS is a cluster add-on, so make sure your Kubernetes distribution provides for one or install it yourself.
Service discovery by example
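For completeness, the environment-variable option mentioned in the quote injects variables of the form <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT into pods created after the Service exists. For the nginx-service example above it would look roughly like this (the cluster IP shown is a placeholder):

kubectl exec <some-pod> -n web -- env | grep NGINX_SERVICE
NGINX_SERVICE_SERVICE_HOST=10.96.0.123
NGINX_SERVICE_SERVICE_PORT=80

DNS is still the preferable option, since it does not depend on pod creation order.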
I have a local Kubernetes cluster set up using the edge release of Docker (Mac). My pods use an env var that I've defined to be my DB's URL. These env vars are defined in a ConfigMap as:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  DB_URL: postgres://user@localhost/my_dev_db?sslmode=disable
What should I be using here instead of localhost? I need this env var to point to my local dev machine.
You can use the private LAN address of your computer, but please ensure that your database software is listening on all network interfaces and that there is no firewall blocking incoming traffic.
If your LAN address is dynamic, you could use an internal DNS name pointing to your computer if your network setup provides one.
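Since the question mentions Docker for Mac, one concrete variant (hedged; behaviour depends on the Docker version) is the special host.docker.internal name, which resolves to the host machine, or simply your LAN IP:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  # host.docker.internal works on Docker for Mac; swap in your LAN IP otherwise
  DB_URL: postgres://user@host.docker.internal/my_dev_db?sslmode=disable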
Another option is to run your database inside the Kubernetes cluster: this way you could use its Service name as the hostname.
Option 1 - Local Networking Approach
If you are running minikube, I would recommend taking a look at the answers to this question: Routing an internal Kubernetes IP address to the host system
Option 2 - Tunneling Solution: Connect to an External Service
A very simple but slightly hacky solution would be to use a tunneling tool like ngrok: https://ngrok.com/
Option 3 - Cloud-native Development (run everything inside k8s)
If you plan to follow the suggestion of whites11, you could make your life a lot easier by using a Kubernetes-native dev tool such as DevSpace (https://github.com/covexo/devspace) or Draft (https://github.com/Azure/draft). Both work with minikube or other self-hosted clusters.