We are having some challenges using the Azure Portal to manage IoT Edge devices in development. I am posting in case someone can confirm these are known issues, or suggest possible workarounds.
The first inconsistency is that when we have no clients connected, the portal shows 1 in the connected client count field, even while it displays a warning that the device is disconnected from the hub.
The second (and more annoying) inconsistency is that the modules that are actually running don’t match the modules shown on the hub. When we run the docker ps command we see what we believe is the correct state, while the hub reports modules as “running” that aren’t there at all, and one as “pending deployment” that we can see is running.
We've also noticed that the EdgeAgent container is emitting a stream of warnings: [WRN] - Building state for computing patch failed with error 'Could not find type in JObject. type Newtonsoft.Json.JsonSerializationException.' We wonder whether this error is related; perhaps the agent is trying to send status updates back to the hub and failing somewhere along the line?
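In case it helps anyone reproduce the comparison, this is roughly how we've been cross-checking what the hub thinks is running against docker ps, by reading the reported properties of the $edgeAgent module twin. It's only a sketch: it assumes the azure-iot-hub Python package, the connection string and device ID are placeholders, and the twin layout is as we understand it.

```python
# Rough sketch: read what the hub reports as the module state for an edge device,
# via the $edgeAgent module twin. Placeholders must be filled in; treat the twin
# layout used here as an assumption rather than a guarantee.
from azure.iot.hub import IoTHubRegistryManager

IOTHUB_CONNECTION_STRING = "<service connection string>"
DEVICE_ID = "<edge device id>"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
twin = registry_manager.get_module_twin(DEVICE_ID, "$edgeAgent")
reported = twin.properties.reported or {}

# edgeAgent reports both the system modules (edgeAgent/edgeHub) and the custom modules
for section in ("systemModules", "modules"):
    for name, info in (reported.get(section) or {}).items():
        print(f"{name}: {info.get('runtimeStatus', 'unknown')}")

# Compare this output against `docker ps` on the device itself.
```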
We would be grateful for any comments or updates on portal status. I have some images supporting my message, but can't upload them because I am a Stack Overflow newbie.
Thanks for any guidance as to whether these are known issues with the portal at this stage, or if there is something we are doing incorrectly.
Dave
I'm currently trying to make the Connector device work with ROS. My first step was to take one of the robots from the "connector.wbt" world, place it in a new world, and switch the robot's controller argument to "ros", as one can see below.
When I run the simulation, all ROS services are advertised as expected. As the next step I enable the Connector via its corresponding service, which takes an integer (the presence check time step): the //presence_sensor/enable service, as shown below:
The topic is now enabled, but no messages are published, although I'm expecting messages of type "webots_ros/Int8Stamped", which should report zeros while no suitable connector is present.
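For reference, here is a minimal rospy sketch of what I am doing. The robot namespace and device names are abbreviated placeholders (the real webots_ros names include the robot name and process ID), so they need to be adapted to what rosservice list / rostopic list actually show:

```python
# Minimal sketch: enable the Connector's presence sensor and listen for its values.
# Service and topic names below are placeholders for my robot's namespace.
import rospy
from webots_ros.srv import set_int
from webots_ros.msg import Int8Stamped

rospy.init_node("connector_test")

# Enable the presence sensor with a sampling period of 32 ms
enable_srv = "/my_robot/connector/presence_sensor/enable"
rospy.wait_for_service(enable_srv)
enable = rospy.ServiceProxy(enable_srv, set_int)
enable(32)

# I would expect Int8Stamped messages here (0 = no suitable connector present)
def on_presence(msg):
    rospy.loginfo("presence: %d", msg.data)

rospy.Subscriber("/my_robot/connector/presence_sensor/value", Int8Stamped, on_presence)
rospy.spin()
```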
Does anyone have an idea whether the problem lies on my side? Does the Connector ROS interface work properly for you if you recreate the example above?
You are completely right; there was indeed a problem with the ROS Connector interface.
This has been fixed just now here:
https://github.com/omichel/webots/pull/672
This fix will be included in today's nightly build (R2019b revision 1, available from tomorrow morning), which you can find here:
https://github.com/omichel/webots/releases
Let us know if there are still problems.
As the title says, what's the correct way to configure an IoT Edge module to report data to Remote Monitoring?
I have a custom module running on an IoT Edge device that is working correctly (I can verify this by looking at the module's docker logs), but it isn't transmitting anything to the Remote Monitoring dashboard. The device is listed among the available devices in Azure Remote Monitoring, but it shows as offline. I suspect this is because the MessageSchema and MessageTemplate are not configured. I can't find any specific documentation on this topic; can anyone point me in the right direction?
Are you asking about the original V1 version of the Remote Monitoring solution, or the newer V2 version? If it's the original version, you would need to send a DeviceInfo structure (https://learn.microsoft.com/en-us/azure/iot-suite/iot-suite-v1-remote-monitoring-device-info#device-metadata) to the IoT Hub associated with the solution at least once. I haven't tried it yet, but it should work for the edge device (I don't think it would have an issue with the module concept). If it's the V2 version, I would need to investigate further.
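I haven't run this against an edge device, but as a rough sketch, sending that DeviceInfo message from plain Python would look something like the following. The field names are as I recall them from the linked doc, so please verify them against it; the connection string and device ID are placeholders, and it assumes the azure-iot-device package (from inside a module you would use IoTHubModuleClient instead).

```python
# Rough sketch of sending the V1 Remote Monitoring DeviceInfo registration message.
# Field names should be double-checked against the linked documentation; the
# connection string and device ID are placeholders.
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

device_info = {
    "ObjectType": "DeviceInfo",      # marks this as a device-info message for the solution
    "Version": "1.0",
    "IsSimulatedDevice": False,
    "DeviceProperties": {
        "DeviceID": "<device-id>",
        "HubEnabledState": True,
    },
    "Commands": [],                  # commands the device claims to support
}

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
msg = Message(json.dumps(device_info))
msg.content_type = "application/json"
msg.content_encoding = "utf-8"
client.send_message(msg)
client.disconnect()
```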
I am not sure where the root of my problem actually lies, so I will try to explain the bigger picture.
In short, the symptom: after upgrading Consul from 0.7.3 to 0.8.1, my agents (explained below) could no longer connect to the cluster leader due to duplicated node IDs (why that probably happens is also explained below).
I could neither fix it with https://www.consul.io/docs/agent/options.html#_disable_host_node_id nor fully understand why I run into this, and that's where the bigger picture and maybe even several different questions come from.
I have the following setup:
I run an application stack with about 8 containers for different services (different microservices, database types, and so on).
I use a single Consul server per stack (yes, the Consul server runs inside the software stack; there are reasons for this, because I need the stack to be offline-deployable and every stack lives by itself).
The Consul server handles registration, service discovery and also KV/configuration.
Important/questionable: every container has a Consul agent started with "consul agent -config-dir /etc/consul.d", connecting to this one server. The configuration looks like this, including references to other files with the encryption token / ACL token. Don't be confused by servicename(); it is replaced by an m4 macro at image build time.
The clients are secured by a gossip key and ACL keys
Important: All containers are on the same hardware node
The server configuration looks like this, in case it matters. In addition, the ACLs look like this, and the ACL master and client token/gossip JSON files are in that configuration folder.
Sorry if the above is TL;DR, but the reason for all the explanation is this multi-agent setup (or one agent per container).
My reasons for that:
I use tiller to configure the containers, so the dimploy gem will usually try to connect to localhost:8500. To accomplish that without making the Consul configuration extraordinarily complicated, I use this local agent, which then forwards the requests to the actual server and thus handles all the encryption-key/ACL negotiation.
I use several 'consul watch' tasks on the server to trigger re-configuration; they also run against localhost:8500 without any extra configuration.
That said, the reason I run one agent per container is the simplicity of letting local services talk to the Consul backend without really knowing about authentication, as long as they connect through 127.0.0.1:8500 (that being the level of security), as in the sketch below.
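To illustrate what I mean, here is a minimal sketch (not my real code) of what a local service inside a container does: it only ever talks to the local agent on 127.0.0.1:8500, and the agent forwards to the server and handles the gossip/ACL side. The service names and port are made up; it needs the 'requests' package.

```python
# Minimal sketch: a local service registers itself and discovers another service
# purely through the local Consul agent on 127.0.0.1:8500. Names/ports are examples.
import requests

LOCAL_AGENT = "http://127.0.0.1:8500"

# Register this container's service with the local agent
requests.put(
    f"{LOCAL_AGENT}/v1/agent/service/register",
    json={"Name": "billing-api", "Port": 8080},
).raise_for_status()

# Discover healthy instances of another service through the same local endpoint
resp = requests.get(
    f"{LOCAL_AGENT}/v1/health/service/billing-db", params={"passing": "true"}
)
resp.raise_for_status()
for entry in resp.json():
    svc = entry["Service"]
    print(svc["Address"] or entry["Node"]["Address"], svc["Port"])
```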
Final Question:
Is this multi-agent Consul setup actually designed to be used that way? The reason I ask is that, as far as I understand, the node-ID duplication issue I now get when starting 0.8.1 comes from "the host" being the same, i.e. the hardware node being identical for all Consul agents. Right?
Is my design wrong, or do I just need to generate my own node IDs from now on and everything is fine?
It seems this issue has been identified by HashiCorp and addressed in https://github.com/hashicorp/consul/blob/master/CHANGELOG.md#085-june-27-2017, where -disable-host-node-id has been set to true by default. The node ID is therefore no longer derived from the host hardware but is a random UUID, which solves the issue I had running several Consul agents on the same physical hardware.
So the way I deployed was fine.
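For anyone stuck on a release before 0.8.5, a small sketch of the workaround I considered: generate an explicit, unique node_id per agent at container start and drop it into the agent's config directory. The file path is just an example, and I'm assuming the node_id config option here.

```python
# Sketch: write a per-agent node_id into the Consul config directory at container
# start, so agents on the same physical host no longer collide. Path is an example.
import json
import uuid

config = {
    "node_id": str(uuid.uuid4()),  # unique per agent instead of derived from the host
}

with open("/etc/consul.d/05-node-id.json", "w") as f:
    json.dump(config, f)
```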
Hi members! I am Boniface M, a beginner in Android (university student).
My question: I am planning to develop an Android app/middleware that will act as a grid service, i.e. an app for grid computing. The application needs to be installed on 1..n devices. In this setup, one device must act as a server for all the others. Communication between the devices is via Wi-Fi under the permission of the server device, which is determined by a certain algorithm (no problem here).
The problem is: should I use a database to keep track of all the services a device is running that are accessible to other devices, or is there a way I can keep all this information directly and retrieve it on request from the app installed on another device?
And also: how can I share files via Wi-Fi, like over Bluetooth?
Thanks....
You're asking many questions in one, and I'm actually unsure what you mean overall. Here are a few links that are sure to be of some use...
http://developer.android.com/reference/android/os/Build.html - This class is good for finding out information about the device you're running on.
http://developer.android.com/reference/android/location/Criteria.html - Criteria might be useful; it lets you know what location-based services you have running.
Other than that, if you're looking to see whether particular things are running, check out this question: How to check if a service is running on Android?
If you're looking to keep a central hub of which devices have what available, etc., I suspect you're going to need a middleman. If it were me, I'd make HTTP requests to a server, to PHP scripts I had written, which would then read/write a MySQL database to get information about the other devices.
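I'd use PHP + MySQL as described above, but just to illustrate the "central registry" idea, here is the same concept sketched in Python/Flask with an in-memory dict instead of a real database; the endpoints and field names are invented for the example.

```python
# Illustration only: a tiny central registry ("middleman") that phones can POST
# their available services to, and query to see what other devices offer.
from flask import Flask, jsonify, request

app = Flask(__name__)
registry = {}  # device_id -> list of services that device exposes

@app.route("/register", methods=["POST"])
def register():
    # Each phone POSTs its ID and the services it currently offers
    data = request.get_json()
    registry[data["device_id"]] = data.get("services", [])
    return jsonify(status="ok")

@app.route("/devices", methods=["GET"])
def devices():
    # Any phone can ask which devices offer which services
    return jsonify(registry)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```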
If you want to share files via Wi-Fi, you're going to need an FTP server on the phone. There's an app, SwiFTP, which does this to some degree (phone -> PC), but the concept should be the same. Take a look at it; it's a starting point! http://www.tested.com/news/how-to-transfer-files-wirelessly-to-your-android-phone/53/
Again, I'm unsure EXACTLY what you're looking to do, but hopefully all of that is of some help. If it's not, leave me a comment and I'll try to assist you further.
Hope it helps!
We're using TFS Build Server to ensure that all files checked in by developers compile to a working source tree, because there's nothing worse than a broken build!
Anyway, we're having some problems with the drop location that Build Server wants to use; we keep getting this error:
TFS209011: Could not create drop location \\build-server\drops\project\BuildNumber. No more connections can be made to this remote computer at this time because there are already as many connections as the computer can accept
Since this is being used in a pilot program at the moment, we only have 2 projects using the Build Server. I've checked the network share, and the allowed number of connections is around 100, so I don't really understand what the problem is.
The problem only occasionally rears its head; quite often we won't see it for days, and then we'll get a bunch in a row.
I can't seem to find much info on this either.
I'm pretty good with TFS, but I'm a dev, not a network guy. I would GUESS that while the NETWORK SHARE itself allows 100 connections, the underlying server it is running on may have some sort of limitation of its own. Is that possible?
Have you checked event logs?
This problem seems specific enough that I would encourage you to post to the official Microsoft forums.
It looks like the problem has to do with our installation of Windows Server 2003: we have the "Web Edition" installed, and it is limited to just 10 concurrent connections.
I ended up posting on the MSDN forums, where I got this answer: http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=3967598&SiteID=1&mode=1