Configure IoTEdge module to receive messages on port 53000 - docker

I'm loosely following along with this article to develop and debug modules for IoTEdge:
https://learn.microsoft.com/en-us/azure/iot-edge/how-to-visual-studio-develop-module?view=iotedge-2020-11
The article leverages the iotedgehubdev tool, which is presumably where the configuration to expose port 53000 lives.
My question is: without using the simulator or the iotedgehubdev tool, how do I configure the port to allow messages to be sent using this type of syntax?
curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:53000/api/v1/messages
// Register callback to be called when a message is received by the module
await ioTHubModuleClient.SetInputMessageHandlerAsync("input1", PipeMessage, ioTHubModuleClient);
static async Task<MessageResponse> PipeMessage(Message message, object userContext)
{
    ....
}
Target environment: Ubuntu, IoTEdge 1.1.4, published via IoTHub pulled from ACR
Development: Windows 11, Visual Studio 2022, debug via SSH to docker module on Ubuntu
Once the module is up and running, I want to send a POST request to it from the Ubuntu machine hosting it. The module is being published from IoTHub.
I've looked across many articles for clues on how port 53000 is set up and listening, but haven't found anything helpful so far.
Appreciate the help.

Sending a message is easy once your code is running on the Simulator: you can send messages by issuing a curl request to the endpoint you received when starting the Simulator, e.g.:
curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:53000/api/v1/messages
I too looked across many articles for clues on how to set up port 53000 without using the simulator or the iotedgehubdev tool. If you want to work without them, you can reach out to Azure Support or raise a GitHub issue.
You can also refer to the article Azure IoT Edge Simulator — Easily run and test your IoT Edge application by Xavier Geerinck on Medium.

You have to build a custom API module that listens on the port, just as the iotedgehubdev utility does, in whichever language you are writing in.
Create a REST API.
Use the Azure Devices ModuleClient with IoTEdge enabled in the project file.
Create an output in your custom API module and send the message using the module client.
Create the route config in your deployment file and wire this module's output to the input of another module in the routes section.
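As a sketch of that route config, a route in the deployment manifest's $edgeHub section could look like this (the module and endpoint names here are placeholders, not from the original post):

```json
"routes": {
  "CustomApiToTarget": "FROM /messages/modules/CustomApiModule/outputs/output1 INTO BrokeredEndpoint(\"/modules/TargetModule/inputs/input1\")"
}
```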
Edit: don't forget the ExposedPorts and PortBindings sections in createOptions, e.g.:
"createOptions": {
  "ExposedPorts": {
    "9000/tcp": {}
  },
  "HostConfig": {
    "PortBindings": {
      "9000/tcp": [
        {
          "HostPort": "9000"
        }
      ]
    }
  }
}
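To illustrate the idea, here is a minimal sketch (not the iotedgehubdev implementation) of a listener that mimics the /api/v1/messages endpoint on the port exposed above; the ModuleClient forwarding step is left as a comment because it only works inside the IoT Edge runtime:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def parse_edge_message(raw: bytes):
    """Validate a payload shaped like iotedgehubdev's:
    {"inputName": "...", "data": ...} -> (input_name, data)."""
    payload = json.loads(raw)
    if "inputName" not in payload or "data" not in payload:
        raise ValueError("payload must contain 'inputName' and 'data'")
    return payload["inputName"], payload["data"]


class MessageHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/v1/messages":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        try:
            input_name, data = parse_edge_message(self.rfile.read(length))
        except ValueError:
            self.send_error(400, "bad payload")
            return
        # A real module would now wrap `data` in a Message and send it to an
        # output via the ModuleClient, so that edgeHub routes can pick it up.
        self.send_response(200)
        self.end_headers()


# To serve: HTTPServer(("0.0.0.0", 9000), MessageHandler).serve_forever()
```

The curl command from the question, pointed at port 9000, would then reach do_POST.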

Related

Why can I not run a Kafka connector?

Background
Firstly - a bit of background - I am trying to learn a bit more about Kafka and Kafka connect. In that vein I'm following along to an early release book 'Kafka Connect' by Mickael Maison and Kate Stanley.
Run Connectors
Very early on (Chapter 2 - components in a connect data pipeline) they give an example of 'How do you run connectors'. Note that the authors are not using Confluent. Here in the early stages, we are advised to create a file named sink-config.json and then create a topic called topic-to-export with the following line of code:
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
--create --replication-factor 1 --partitions 1 --topic topic-to-export
We are then instructed to "use the Connect REST API to start the connector with the configuration you created":
$ curl -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connectors/file-sink/config --data "@sink-config.json"
The Error
However, when I run this command it brings up the following error:
{"error_code":500,"message":"Cannot deserialize value of type `java.lang.String` from Object value (token `JsonToken.START_OBJECT`)\n at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 1, column: 36] (through reference chain: java.util.LinkedHashMap[\"config\"])"}
Trying to fix the error
Keeping in mind that I'm still trying to learn Kafka and Kafka Connect, a fairly simple search brought me to a post on Stack Overflow which seemed to suggest this should have been a POST, not a PUT. However, changing it to:
curl -d @sink-config.json -H "Content-Type: application/json" -X POST http://localhost:8083/connectors/file-sink/config
simply brings up another error:
{"error_code":405,"message":"HTTP 405 Method Not Allowed"}
I'm really not sure where to go from here as this 'seems' to be the way that you should be able to get a connector to run. For example, this intro to connectors by Baeldung also seems to specify this way of doing things.
Does anyone have any ideas what is going on? I'm not sure where to start...
First, thanks for taking a look at the early access version of our book.
You found a mistake in this example!
To start a connector, the recommended way is to use the PUT /connectors/file-sink/config endpoint; however, the example JSON we provided is not correct.
The JSON file should be something like:
{
  "name": "file-sink",
  "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
  "tasks.max": 1,
  "topics": "topic-to-export",
  "file": "/tmp/sink.out",
  "value.converter": "org.apache.kafka.connect.storage.StringConverter"
}
The mistake comes because there's another endpoint that can be used to start connectors, POST /connectors, and the JSON we provided is for that endpoint.
We recommend you use PUT /connectors/file-sink/config as the same endpoint can also be used to reconfigure connectors. In addition, the same JSON file can also be used with the PUT /connector-plugins/{connector-type}/config/validate endpoint.
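As a sketch of the corrected call (assuming Connect's REST API on localhost:8083, the same as the curl commands above), the equivalent of `curl -X PUT ... --data @sink-config.json` could be built like this:

```python
import json
import urllib.request

# Flat key -> value map, as expected by PUT /connectors/<name>/config.
sink_config = {
    "name": "file-sink",
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "tasks.max": 1,
    "topics": "topic-to-export",
    "file": "/tmp/sink.out",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
}


def build_config_request(name: str, config: dict) -> urllib.request.Request:
    """Build the PUT request that creates or reconfigures the connector."""
    return urllib.request.Request(
        f"http://localhost:8083/connectors/{name}/config",
        data=json.dumps(config).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )


# With a Connect worker running:
# urllib.request.urlopen(build_config_request("file-sink", sink_config))
```

Because the endpoint is idempotent, re-running the same PUT reconfigures the connector rather than failing.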
Thanks again for spotting the mistake and reporting it, we'll fix the example in the coming weeks. We'll also reply to your emails about the other questions shortly.

Twilio: an error occurs when deploying with a plugin

If I deploy using the command below:
twilio flex:plugins:deploy --changelog='first deploy'
the following error occurs. I don't understand the meaning of the path pointed to by the resource:
Error code 20404 from Twilio: The requested resource /Services/ZSXXXXXXXXXXXXXXXXXXXXXXX/Environments was not found. See https://www.twilio.com/docs/errors/20404 for more info.
This is the first deployment; nothing has been deployed yet.
What should I do?
twilio serverless:deploy
Using the above command, functions and assets are deployed on a serverless basis.
At that time, I had deleted the Services for the functions and assets that existed by default.
Are these default Services relevant for plugins?
Also, if they are related, where is the part to reset in the plugin?
When I contacted support, they told me to run the reset command.
curl https://flex-api.twilio.com/v1/Configuration \
-H "Content-Type: application/json" \
-d '{"account_sid":"ACCOUNT_SID", "serverless_service_sids": []}' \
-u ACCOUNT_SID:AUTH_TOKEN
After executing the above command, I was able to deploy again without problems.

Create repo on Bitbucket programmatically

I used to do
curl -k -X POST --user john@outlook.com:doe13 "https://api.bitbucket.org/1.0/repositories" -d "name=logoApp"
and it succeeded.
Now I get this error:
{"type": "error", "error": {"message": "Resource removed", "detail": "This API is no longer supported.\n\nFor information about its removal, please refer to the deprecation notice at: https://developer.atlassian.com/cloud/bitbucket/deprecation-notice-v1-apis/"}}
Does anyone know a way to do this now?
There's a difference between a success from curl (200 OK) and an error from the service you're trying to use. The error, however, mentions that you're trying to use the Cloud REST API version 1, which is deprecated effective 30 June 2018.
Read this for more information.
I don't use Bitbucket Server (a local option), and I think that has more features for this sort of thing.
For the public Bitbucket, you can still do it but it isn't documented.
The v1.0 API has been removed, and the new v2.0 API doesn't document a POST to a /repositories. Instead, you have to hit an endpoint that includes the repo that doesn't yet exist: /repositories/workspace/repo_slug
The JSON payload needs to know the project for the repo: look in the slug for a project that already exists. Fill in the user/team and repo name in the URL. And, you can make an application password so you aren't using your account password. This app password can limit the scope of what that access can do.
% curl -X POST --user 'user:app_pass' \
-H "Content-type: application/json" \
-d '{"project":{"key":"PROJ"}}' \
"https://api.bitbucket.org/2.0/repositories/USER/REPO"

How to create queue in Rabbitmq

I am creating a new image with rabbitmq as the base and trying to create a queue and exchange that will be reflected on the localhost URL once the server is up. I am able to create the queue manually within the rabbitmq container, but I want to achieve this through either the Dockerfile or entrypoint.sh. I want the exchange and queue to be available as soon as the rabbitmq server is up. Please suggest a way to achieve this; any sample example would be helpful.
Rabbitmq has a Management HTTP API. You can use this api to interact with rabbitmq.
You can create an exchange by doing a PUT request to http://localhost:15672/api/exchanges/${vhost}/${name}. Similarly, you can create a queue by
doing a PUT to http://localhost:15672/api/queues/${vhost}/${name}.
You can call these using curl in the entrypoint script.
You can use HareDu 2 like so:
var result = _container.Resolve<IBrokerObjectFactory>()
    .Object<Queue>()
    .Create(x =>
    {
        x.Queue("fake_queue");
        x.Configure(c =>
        {
            c.IsDurable();
            c.AutoDeleteWhenNotInUse();
            c.HasArguments(arg =>
            {
                arg.SetQueueExpiration(1000);
                arg.SetPerQueuedMessageExpiration(2000);
            });
        });
        x.Targeting(t =>
        {
            t.VirtualHost("HareDu");
            t.Node("Node1");
        });
    });
Here is a practical example with curl and the REST HTTP API which was already mentioned.
First of all, the HTTP REST API is a separate plugin; if it is not installed, you have to enable it with the command:
rabbitmq-plugins enable rabbitmq_management
if you want to install it in your Docker image, you can do:
RUN rabbitmq-plugins enable rabbitmq_management
Once the plugin is installed, you can just call the API with a curl command:
curl --location --request PUT 'http://localhost:15671/api/queues/%2F/TEST_QUEUE' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic Z3Vlc3Q6Z3Vlc3Q=' \
--data-raw '{
"auto_delete": false,
"durable": true,
"arguments": {}
}'
The Basic auth provided is guest/guest, the default user in the docker image.
The trick is that for the default vhost you have to put %2F (the URL-encoded equivalent of /) in your URL; this can cost you a lot of time if you try other variations.
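That %2F is simply the URL-encoded form of the default vhost name /, which any standard URL-encoding routine produces, e.g. (assuming the management plugin on its default port 15672):

```python
from urllib.parse import quote

vhost = "/"  # RabbitMQ's default virtual host
# safe="" so that "/" is not treated as a path separator and left literal
encoded = quote(vhost, safe="")
queue_url = f"http://localhost:15672/api/queues/{encoded}/TEST_QUEUE"
```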
Here are some references:
Management plugin documentation
HTTP API documentation
If you need a practical docker example, you can have a look at my Solace integration project, where I set up a RabbitMQ for testing purposes.

restify example TODO server not sending responses

I'm new at this and developing my first API server. I wanted to see an example of a POST request so I installed restify 3.0.3 and tried to run the TODO server example. I see the requests logged at the server but no response is sent. I'm using the sample curl requests provided and the server is running on Cloud9. Curl is running on windows 7.
For example, I've tried:
curl -isS http://test-atk9.c9.io | json
curl -isS http://test-atk9.c9.io/todo -X POST -d name=demo -d task="buy milk"
Can anyone help?
I saw the same behavior when using PostMan to exercise the example.
Setting the PostMan Header Accept:text/plain or Accept:application/json worked for me.
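The same fix applies outside PostMan: any client just needs to send the Accept header. A small sketch (using the question's hypothetical Cloud9 URL):

```python
import urllib.request

# The fix is simply declaring an Accept type the restify server can serve.
req = urllib.request.Request(
    "http://test-atk9.c9.io/todo",
    headers={"Accept": "application/json"},
)

# With the server running: urllib.request.urlopen(req)
```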
BTW: if you set Accept:text/html, you should receive a helpful response:
{
  "code": "NotAcceptableError",
  "message": "Server accepts: application/todo,application/json,text/plain,application/octet-stream,application/javascript"
}
Hope this helps.
