Failing to set up Azure IoT Edge runtime: Invalid hostname - azure-iot-edge

I have a Standard D2s v3 VM (2 vcpus, 8 GB memory) running on Azure with Python, Docker, and iotedgectl installed.
When I run
iotedgectl setup --connection-string "HostName=***.azure-devices.net;DeviceId=***;SharedAccessKey=***" --auto-cert-gen-force-no-passwords
I get the following error:
ERROR: Error parsing user input data: Invalid hostname. Hostname cannot be empty or greater than 64 characters: ****.nwq4jyrgm4zejiseat2enywp0h.fx.internal.cloudapp.net.
ERROR: Please fix any input values and re-run 'iotedgectl setup'
ERROR: Errors were observed. Return Code: 1
Any ideas?

The IoT Edge runtime requires a hostname to generate a TLS server certificate for the Edge Hub. This enables verifiable TLS connections between modules and leaf devices (for gateway scenarios). Per RFC 3280, the maximum length of the Common Name in an SSL certificate is 64 characters (search for ub-common-name-length).
This error indicates that the hostname exceeds this limit. By default, the iotedgectl tool detects and uses the hostname of the host machine. Unfortunately, Azure Windows VMs have very long hostnames.
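You can check the length of the name that would be auto-detected with a standard shell one-liner (wc -c counts the trailing newline, so subtract one):
hostname | wc -c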
To remedy this, you can set the hostname and bypass the auto detection like so:
iotedgectl setup --connection-string "<conn string>" --auto-cert-gen-force-no-passwords --edge-hostname <a shorter hostname>
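For example, with a hypothetical short name for the VM:
iotedgectl setup --connection-string "HostName=***.azure-devices.net;DeviceId=***;SharedAccessKey=***" --auto-cert-gen-force-no-passwords --edge-hostname myedgevm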
If you are interested in using IoT Edge as a gateway, there is more information here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-create-transparent-gateway

Related

Unsupported attribute 27 in CoA-Request from IP:PORT

I have a FreeRADIUS server set up on my Ubuntu VM. My Edgecore AP is connected to a MikroTik and a laptop. I have OpenWrt running on the AP, and I'm able to connect a client using WPA2 Enterprise encryption. I'm trying to send a CoA request from the VM, such as "Session-Timeout"; however, observing the logs on the AP, I receive the message I've included in the title. Is CoA completely unsupported, or does hostapd simply not understand the incoming request?
Dynamic Authorization Extensions (RFC 5176) are disabled by default in hostapd.
Set radius_das_port=3799 in your config to enable this feature.
https://web.mit.edu/freebsd/head/contrib/wpa/hostapd/hostapd.conf
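A minimal config fragment might look like this (the IP and secret are placeholders; radius_das_client authorizes a specific Dynamic Authorization Server to send CoA/Disconnect requests):
radius_das_port=3799
radius_das_client=192.168.1.10 sharedsecret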

MinIO + Docker - cannot use SSL certificate with new version (x509 doesn't contain any IP sans)

I'm running MinIO under Docker. I've been using a version that was released before the integration of the MinIO Console (circa July 2021). This was set up with an SSL certificate purchased from a third party, bound to my external web address (https://minio.example.com, for instance).
After running the new version of MinIO, RELEASE.2021-09-24T00-24-24Z, via Docker, I needed to update my config (the env variables for MINIO_ACCESS_KEY / MINIO_SECRET_KEY changed, for example). I've also added --console-address=":9001" to my config; MinIO is running on port 9000 for the main service.
The service runs fine for storing data, but accessing the web address gives the error:
x509: cannot validate certificate for 172.19.0.2 because it doesn't contain any IP SANs
I believe this is to do with MinIO looking at the internal Docker IP addresses and not finding them in the SSL certificate (there are no IPs in the certificate at all). I'm unable to find documentation explaining how to resolve this. Ideally, I don't want to get a new certificate that contains the IP address (external or internal!).
Can I change some of the Docker config so that MinIO will not try to check the IP addresses in the certificate?
To answer my own question, I re-read the quickstart guide more carefully (https://docs.min.io/docs/minio-quickstart-guide.html), noting the following:
Similarly, if your TLS certificates do not have the IP SAN for the MinIO server host, the MinIO Console may fail to validate the connection to the server. Use the MINIO_SERVER_URL environment variable and specify the proxy-accessible hostname of the MinIO server to allow the Console to use the MinIO server API using the TLS certificate.
For example: export MINIO_SERVER_URL="https://minio.example.net"
For me, this meant I needed to update my docker-compose.yml file, adding the MINIO_SERVER_URL env variable. It had to point to the data URL for MinIO, not the console URL (otherwise you get an error about "Expected element type <AssumeRoleResponse> but have <html>").
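Concretely, the relevant part of the compose file ended up looking something like this (a sketch with placeholder credentials; certificate mounting and data volumes are omitted):
services:
  minio:
    image: minio/minio:RELEASE.2021-09-24T00-24-24Z
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: <access key>
      MINIO_ROOT_PASSWORD: <secret key>
      MINIO_SERVER_URL: "https://minio.example.com"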
It now works fine.

Getting Neo4J running on OpenShift

I am trying to get the Bitnami Neo4j image running on OpenShift (testing on my local Minishift), but I am unable to connect. I am following the steps outlined in this issue (now closed); however, I cannot access the external IP for the load balancer.
Here are the steps I have taken:
1. Deploy image (bitnami/neo4j)
2. Create service for the load balancer, using the YAML supplied in the issue mentioned
3. Get the external IP address for the LB (oc get services)
The command in step 3 lists 2 of the same IP addresses, and when I attempt to go to this IP in my browser it times out.
I can create a route that points to port 7374 on the IP of the LB, but then I get the same error as reported in the aforementioned issue:
(ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browser's development console to determine the root cause of the failure.)
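For illustration, a LoadBalancer service of the kind described would look roughly like this (a sketch; the actual YAML I used is the one supplied in the linked issue):
apiVersion: v1
kind: Service
metadata:
  name: neo4j-lb
spec:
  type: LoadBalancer
  selector:
    app: neo4j
  ports:
    - name: http
      port: 7474
      targetPort: 7474
    - name: bolt
      port: 7687
      targetPort: 7687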
Configure Neo4j to accept non-local connections, e.g.:
dbms.connector.bolt.address=0.0.0.0:7687
Source: https://neo4j.com/developer/kb/explanation-of-error-websocket-connection-failure/
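In newer Neo4j 3.x releases the key was renamed; assuming the 3.1+ naming, the equivalent line would be:
dbms.connector.bolt.listen_address=0.0.0.0:7687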

DTLSv1_listen unable to accept second client in a docker container

I'm experiencing an issue with OpenSSL/DTLS server.
Environment: Docker container based on CentOS 7
OpenSSL version: OpenSSL 1.1.1d
A non-blocking DTLS server using DTLSv1_listen() on a UDP socket with SO_REUSEADDR is unable to accept a second client connection while it has already accepted a first connection and is serving it.
Only when the first client has finished is the second client connection accepted.
I have used dtls_udp_echo.c (taken from http://web.archive.org/web/20150617012520/http://sctp.fh-muenster.de/dtls-samples.html) to carry out the test and reproduce the issue.
The test application has been compiled and executed within a Docker container with CentOS 7 as the base image, but the behaviour has been observed with other base images too (e.g. Red Hat, Ubuntu, Debian, SLES).
The same application running on bare metal works without any issue.
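For clarity, the listening logic in question looks roughly like this (a sketch assuming it mirrors the dtls_udp_echo.c sample; error handling and the cookie callbacks that must be installed on the SSL_CTX are omitted):

/* Accept one DTLS client on a UDP socket with SO_REUSEADDR set. */
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <openssl/ssl.h>
#include <openssl/bio.h>

SSL *dtls_accept_one(SSL_CTX *ctx, int port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    const int on = 1;
    /* SO_REUSEADDR lets the per-client "connected" socket bind to the
       same local address/port later, as the echo sample does. */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* DTLS needs a datagram BIO, not a plain socket BIO. */
    BIO *bio = BIO_new_dgram(fd, BIO_NOCLOSE);
    SSL *ssl = SSL_new(ctx);
    SSL_set_bio(ssl, bio, bio);
    /* Cookie exchange is required for DTLSv1_listen(). */
    SSL_set_options(ssl, SSL_OP_COOKIE_EXCHANGE);

    BIO_ADDR *client = BIO_ADDR_new();
    /* Returns 1 once a ClientHello with a valid cookie arrives; the
       real server then connects a new socket to *client and keeps this
       one listening. The sample is non-blocking; this busy-waits for
       brevity. */
    while (DTLSv1_listen(ssl, client) <= 0)
        ;
    BIO_ADDR_free(client);
    return ssl;
}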
Is there any known compatibility issue between Docker and OpenSSL/DTLS?
Is there any specific configuration to be done to overcome this issue?
Best Regards

How to protect ConnectionStrings in Azure IoT Edge module code?

Typing the connection string in the configuration file (as shown in the official example: https://github.com/Azure-Samples/iot-edge-samples/blob/master/js/simple/gw.cloud.config.json#L38) doesn't seem right.
Environment variables may be provided to the modules by the Edge Runtime (https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/iot-edge/iot-edge-runtime.md) but as far as I can see there is no way to modify its behaviour.
The first document (https://github.com/Azure-Samples/iot-edge-samples/blob/master/js/simple/gw.cloud.config.json#L38) shows how to customize the IoT Edge runtime (gw.[local|cloud].config.json). You can update gw.cloud.config.json by replacing <IoT Hub device connection string> with your actual IoT Hub device connection string to establish the connection between the IoT Edge application and Azure IoT Hub.
Per the second document (https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/iot-edge/iot-edge-runtime.md), you can also configure the IoT Edge runtime by executing the following command. You will then find the connection string setting in C:\ProgramData\azure-iot-edge\config\config.json.
iotedgectl setup --connection-string "{device connection string}" --nopass
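For instance, to keep the string itself out of any checked-in file, you could pass it from an environment variable on the machine running the setup (a generic shell pattern, not something the linked docs prescribe):
iotedgectl setup --connection-string "$DEVICE_CONNECTION_STRING" --nopass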
