Do modules deployed via Azure IoT Edge runtime require an Azure IoT SDK client?

I have a service that's already Dockerized. The service listens on some ports and makes some outbound network calls. At the moment, updating the service requires someone to access the console remotely and manually replace the old container with the latest version.
After reading through the Azure IoT Edge documentation and the SDKs, it's not clear to me if an Azure IoT module MUST include an Azure IoT SDK. I know the Azure IoT SDK is necessary for passing messages, accessing the module twin, and probably more, but I don't need any of that at the moment for this specific use-case.
Can I reuse my existing Docker containers with Azure IoT Edge or would I need to add the Azure IoT SDK (because there's a health check or other internal requirement for the SDK)?

As you mentioned, the Azure IoT SDK is the recommended way to do messaging, access the module twin, etc., but it is optional.
If you just want the IoT Edge runtime to launch a Docker container that listens on local ports and performs outbound network calls, you can certainly do that. Nothing will get in your way.
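For illustration, this is roughly how such a container could appear as a module under the $edgeAgent desired properties of a deployment manifest. The sketch below expresses the JSON as a Python dict purely for readability; the module name, image, and port numbers are placeholders, not values from your setup.

```python
import json

# Hypothetical module entry for an existing, SDK-free container image.
# In a real deployment manifest this lives (as JSON) under
# modulesContent -> $edgeAgent -> properties.desired -> modules.
my_service_module = {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
        # Any container image works here; it does not need the Azure IoT SDK.
        "image": "myregistry.azurecr.io/myservice:1.2.3",
        # createOptions is a stringified Docker create payload; this one maps
        # container port 8080 to host port 8080 so the service keeps listening.
        "createOptions": json.dumps({
            "HostConfig": {"PortBindings": {"8080/tcp": [{"HostPort": "8080"}]}}
        }),
    },
}
print(json.dumps(my_service_module, indent=2))
```

The IoT Edge agent simply pulls the image, creates the container with those options, and restarts it according to restartPolicy; nothing inside the container has to talk to the edge hub.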

Related

Azure API Management service with external virtual network to Docker

I want to use the Azure API Management service (AMS) to expose an API created with R/Plumber, hosted in a Docker container running on an Ubuntu machine.
Scenario
With R/Plumber I created some APIs that I want to protect. Then I created a virtual machine on Azure with Ubuntu and installed Docker. The APIs run in a container that I deployed to the virtual machine with Docker, and I can access them over the internet.
On Azure I created an API Management service and added the APIs from the Swagger OpenAPI documentation.
Problem
I want to secure the APIs and expose only the AMS to the internet. My idea was to remove the public IP from the virtual machine and connect the API Management service to the API over a virtual network, using its internal IP (http://10.0.1.5:8000).
So, I tried to set up a virtual network. I clicked the menu, then External, and then on the row I selected a network. This virtual network has one network interface, which is the one the virtual machine is using.
When I save the changes, I have to wait a while and then I receive an error:
Failed to connect to management endpoint at azuks-chi-testapi-d1.management.azure-api.net:3443 for a service deployed in a virtual network. Make sure to follow guidance at https://aka.ms/apim-vnet-common-issues.
I read the following documentation, but I can't understand how to do what I want:
Azure API Management - External Type: gateway unable to access resources within the virtual network?
How to use Azure API Management with virtual networks
Is there any how-to I can follow? Any advice? What am I doing wrong?
Update
I tried to add more address space to the virtual network.
One of them (10.0.0.2/24) is delegated to API Management.
Then, in the network security group, I opened port 3443.
From the API Management service I still can't reach the server at its internal IP (10.0.2.5). What did I miss?
See the common network configuration issues documentation; it lists all dependencies that must be exposed for APIM to work. Make sure that your VNet allows ingress on port 3443 to the subnet where the APIM service is located. This configuration must be done on the VNet side, not in APIM.
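If you manage the NSG programmatically, a sketch like the following (Python, using the azure-mgmt-network and azure-identity packages; subscription, resource group, and NSG names are placeholders) could add that inbound rule. The same rule can just as easily be created in the portal or with the Azure CLI.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-apim-rg"           # placeholder
NSG_NAME = "apim-subnet-nsg"            # placeholder: NSG attached to the APIM subnet

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Allow the APIM control plane (ApiManagement service tag) to reach the
# management endpoint on TCP 3443 inside the VNet.
rule = SecurityRule(
    protocol="Tcp",
    direction="Inbound",
    access="Allow",
    priority=200,
    source_address_prefix="ApiManagement",
    source_port_range="*",
    destination_address_prefix="VirtualNetwork",
    destination_port_range="3443",
)

client.security_rules.begin_create_or_update(
    RESOURCE_GROUP, NSG_NAME, "Allow-APIM-Management-3443", rule
).result()
```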

How can I integrate my application with Kubernetes cluster running Docker containers?

This is more of a research question. If it does not meet the standards of SO, please let me know and I will ask elsewhere.
I am new to Kubernetes and have read a lot of documentation on the internet, but I was hoping someone could help answer a few basic questions.
I am trying to create an integration between Kubernetes (user applications running inside Docker containers, to be precise) and my application, which would act as a backup for certain data in the containers.
1. My application currently runs in AWS. Would the Kubernetes cluster need to run in AWS as well, or can it run on any cloud service, or even on-prem, as long as the APIs are available?
2. Does my application need to know anything more than the IP of the master node's API server to make POST/GET requests?
3. For authentication, can I use AD (my application uses AD today for a few things)? That would also give me role-based policies for each user. Or do I always have to use the Kubernetes TokenReview API for authentication?
4. Would the applications running in Kubernetes use the APIs I provide to communicate with my application?
5. Would my application use POST/GET to communicate with the Kubernetes master API server? Do I need to use kubectl for this and for #4 above?
Thanks for your help.
Your application doesn't need to run on the same server as the k8s cluster. There are several ways to connect to a k8s cluster, depending on your use case: you can expose the built-in k8s API using kubectl proxy, connect directly to the k8s API on the master, or expose services via a load balancer or node port.
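For example, with kubectl proxy running locally, any HTTP client can talk to the k8s REST API. A minimal sketch in Python (assuming the proxy's default address of 127.0.0.1:8001 and the requests package):

```python
import requests

# kubectl proxy forwards authenticated requests to the cluster's API server
# and listens on 127.0.0.1:8001 by default.
API = "http://127.0.0.1:8001"

# Plain GET against the built-in k8s API: list pods in the default namespace.
resp = requests.get(f"{API}/api/v1/namespaces/default/pods")
resp.raise_for_status()
for pod in resp.json()["items"]:
    print(pod["metadata"]["name"], pod["status"]["phase"])
```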
You would only need to know the IP for the master node if you're connecting to the cluster directly through the built-in k8s API, but in most cases you should only be using this API to internally administer your cluster. The preferred way of accessing k8s pods is to expose them via load balancer, which allows you to access a service on any node from a single IP. k8s also allows you to access a service with a nodePort from any k8s node (except the master) through a preassigned port.
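As an illustration of the service route, here is a hedged sketch using the official Kubernetes Python client to expose a set of pods behind a LoadBalancer service; the service name, label selector, and ports are placeholders:

```python
from kubernetes import client, config

# Assumes a local kubeconfig with access to the cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-backup-api"),       # placeholder name
    spec=client.V1ServiceSpec(
        type="LoadBalancer",                # use "NodePort" to expose on every node instead
        selector={"app": "my-backup-api"},  # must match the labels on your pods
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

v1.create_namespaced_service(namespace="default", body=service)
```

Once the cloud provider assigns an external IP, your AWS-hosted application can reach the pods through that single address without ever touching the master's API.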
TokenReview is only one of the k8s auth strategies. I don't know anything about Active Directory auth, but at a glance OpenID Connect tokens seem to support it. You should review whether or not you need to allow users direct access to the k8s API at all. Consider exposing services via LoadBalancer instead.
I'm not sure what you mean by this, but if you deploy your APIs as k8s deployments you can expose their endpoints through services to communicate with your external application however you like.
Again, the preferred way to communicate with k8s pods from external applications is through services exposed as load balancers, not through the built-in API on the k8s master. In the case of services, it's up to the underlying API to decide which kinds of requests it wants to accept.

How to send messages from a Mosquitto broker to Azure IoT Hub

I have installed Mosquitto as a local broker. Multiple devices send messages to this broker. I want every message arriving at the Mosquitto broker to be forwarded to Azure IoT Hub. Can anyone point me to a document describing how to do that?
You have several approaches:
You can use one of the device client SDKs (depending on which language you are using on your broker) to send data to IoT Hub.
You can use MQTT directly to communicate with IoT Hub, which requires you to do the specific things described in this document.
(I definitely recommend using the client SDK.)
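To make the first approach concrete, here is a minimal, hedged sketch of a bridge script in Python: it subscribes to the local Mosquitto broker with paho-mqtt and forwards every payload to IoT Hub with the azure-iot-device SDK. The connection string, topic filter, and broker address are placeholders, and the paho-mqtt 1.x constructor is assumed (2.x additionally requires a callback API version argument).

```python
import paho.mqtt.client as mqtt
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholders: device connection string from the IoT Hub portal and the
# topic filter your devices publish to on the local broker.
CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"
LOCAL_TOPIC = "sensors/#"

hub_client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
hub_client.connect()

def on_message(client, userdata, msg):
    # Forward each message received from the local Mosquitto broker to IoT Hub.
    hub_client.send_message(Message(msg.payload))

local = mqtt.Client()            # paho-mqtt 1.x style constructor
local.on_message = on_message
local.connect("localhost", 1883)
local.subscribe(LOCAL_TOPIC)
local.loop_forever()
```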

Azure Mobile App on Service Fabric

I am using the Offline Sync feature of Azure Mobile Apps and it is working as expected.
I am also running a Service Fabric cluster on Azure for other services.
Is there anything that would prevent me, technically or legally, from running the Azure Mobile App on Service Fabric (as opposed to running it on the Azure Mobile Apps host on Azure)?
Azure Mobile Apps uses three things:
Offline sync with a SQL Azure instance backend - fully supported on whatever container you choose
Push Registrations connected via App Service Push - will not be supported outside of Azure App Service
Authentication via server-flow or client-flow - will not be supported outside of Azure App Service
You don't have a problem legally - Azure Mobile Apps is an open-source project licensed under an OSS-friendly license. However, Auth and Push are going to be issues if you use them.

SSO in Cloud foundry with Gitlab User data

We have the following situation: an open-source Cloud Foundry installation and an independent GitLab installation. I would like to use the GitLab token to identify users in the CF UAA, meaning everybody registered in GitLab is able to use CF.
Is there a possibility to combine these two systems?
This is currently not supported by UAA.
We have plans to add support for generic OAuth 2.0-based external identity providers in the future.
