I'm new to Node-RED running on my Raspberry Pi 3 and have searched with no luck for an explanation of the different icons that appear on some built-in nodes in the Node-RED editor, for example the mqtt node. What is the meaning and purpose of the red and blue icons on this node?
The blue dot indicates that the node has undeployed changes.
The red triangle means the node is either missing required configuration data or has configuration data that does not validate.
I have been constructing a network diagram for months, and now, all of a sudden, when I right-click on the network diagram (on a node or on the background) I no longer get the context menu, so I cannot, for example, rename a node or create a new node. I don't know how this happened. Right-clicking on edges still produces a context menu, but with fewer options than normal. No error is produced otherwise. This seems to be the case for all networks within one network collection.
The context menus behave normally on other networks in a different network collection open in the same session.
Please help if you can suggest a fix or a way to produce more debugging information.
Thanks!
-Justin
I tried right-clicking on the network diagram (on the background and on nodes), but no context menu is produced for networks within that particular network collection. Other network collections seem to behave normally.
I am looking to dynamically set an Erlang node to 'hidden', or to set 'connect_all', after the node has already been started. Is this possible in Erlang?
There is an undocumented net_kernel:hidden_connect_node(NodeName) function that can be used on a per-connection basis to connect to NodeName without sharing the caller's connection details.
There is no guarantee of long-term support, but it is currently the only way to do this dynamically.
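As a rough sketch of the usage (the node names here are examples, run from a distributed Erlang shell):

```erlang
%% On node a@host: connect to b@host without advertising the connection.
%% A node connected this way appears in nodes(hidden) rather than nodes(),
%% so the connection is not propagated transitively to other nodes.
(a@host)1> net_kernel:hidden_connect_node('b@host').
true
(a@host)2> nodes(hidden).
['b@host']
```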
Thanks to #I GIVE TERRIBLE ADVICE (AND WRITE AWESOME ERLANG BOOKS) for sharing this gem of knowledge. I would also like to highlight how it has been particularly useful in my specific case:
Context:
I have several machines that host an Erlang node running my OTP application
The nodes are configured in a wireless peer-to-peer setup
For testing purposes, I would like to observe the behaviour of the cluster when multi-hop is required from a node A to another node B.
So far my best (and only) solution has been to physically move the nodes around so that they can only reach neighbours within range of their Wi-Fi antennas.
Bottom line: for those in situations similar to the one described above, this is a very handy function for clustering nodes without completely removing the default transitive behaviour.
I'm playing around with Node-RED (I'm still in the newbie stages).
I have around 20 ESP8266 modules taking temperature and humidity readings in various locations in and around my home.
The way I am doing it right now is to put a web server on each of my ESPs and have Node-RED poll them every 5 seconds. This is ugly in all respects, as repeating this 20 times hurts the eyes. I've set up two of them already, and it already looks bad.
My question is:
Is there a way I can give Node-RED a list of the devices (well, their IP addresses) and have Node-RED create my desired dashboard for all of them? It looks like I would need a "for-each" module, as well as something to automatically create a group in the dashboard for grouping the various gauges/charts for each sensor.
Not a lot of code to share so far, but I did create a gist for you to see, if you're interested in the webserver part for ESP8266:
Gist of how to Connect ESP8266 With Node-Red using Arduino/C
In advance, thanks for your tips and suggestions
No, at the moment you need to define each widget on the dashboard explicitly.
The closest you can get is to use the template node and pass in an array of values that can be rendered in a loop, but that will not work for the chart node or text nodes.
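To illustrate that workaround: a function node can gather the per-device readings into a single array payload for the template node. This is a minimal sketch; the IPs and values below are hard-coded placeholders, where a real flow would collect them from the polling messages (e.g. keyed by msg.topic in flow context).

```javascript
// Sketch of a Node-RED "function" node body. The readings object is a
// hard-coded placeholder standing in for values gathered from the
// polling messages (e.g. stored in flow context keyed by msg.topic).
const readings = {
    "192.168.1.10": { temp: 21.5, hum: 40 },
    "192.168.1.11": { temp: 19.0, hum: 55 }
};

// Flatten the readings into one array payload for a template node.
const msg = {
    payload: Object.keys(readings).map(ip => ({
        ip: ip,
        temp: readings[ip].temp,
        hum: readings[ip].hum
    }))
};
// In the actual function node, end with:  return msg;
```

A dashboard template node downstream can then render the list with an Angular ng-repeat, e.g. `<div ng-repeat="s in msg.payload">{{s.ip}}: {{s.temp}}</div>`; but, as noted above, the chart and text widgets cannot be generated this way.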
I am new to ejabberd clustering. I have been trying to set up an ejabberd cluster for the past week, but I still have not got it working.
After the clustering setup I got output like running db nodes = ['ejabberd@first.example.com','ejabberd@second.example.com'], which looks fine so far.
After that I logged in with the PSI+ client using the credentials username: one@first.example.com and password: xxxxx.
Then I stopped the ejabberd@first.example.com node, and my PSI+ client went down as well.
So why does it not automatically connect to my second server, ejabberd@second.example.com?
How can I set up ejabberd clustering so that if one node crashes, another node maintains the connection automatically?
Are you trying to set up one cluster, or federate two clusters? If just one cluster, they should share the same domain (either first.example.com or second.example.com).
Also, when there's a node failure, your client must reconnect (not sure what PSI does), and you need to have all nodes in your cluster behind a VIP so the reconnect attempt will find the next available node.
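For the single-cluster case, recent ejabberd versions let you join the second node to the first from the command line. This is a sketch (an ops fragment, not testable standalone); the node names are the ones from the question, and the commands assume ejabberd 16.02 or later:

```shell
# Run on the second machine while its own ejabberd node is running:
ejabberdctl join_cluster 'ejabberd@first.example.com'

# Verify that both nodes are now members of the cluster:
ejabberdctl list_cluster
```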
So I installed and configured SNMP on, let's say, node A. Let's also say that node A mounts a filesystem over NFS from storage node B. Finally, I have monitoring node C. When node C requests information via SNMP from node A, node A complies beautifully as long as nothing is wrong.
I've been running into this problem, though. Let's say that storage node B fails. If node C then requests info via SNMP from node A, the SNMP daemon on node A freezes because it can't reach the mount point.
This behaviour is very counterintuitive given the purpose of SNMP. SNMP is used for monitoring system stats: if something fails, I want to know that something failed. From node C's perspective in this case, it looks as if someone turned off node A (which is clearly not what happened).
I looked for configuration settings that would make SNMP skip over MIBs it can't read, but found nothing. Any ideas?
It turns out that the NFS timeout coupled with the SNMP daemon is an old bug. The way to fix it is to skip all the NFS mount points by using the following option in snmpd.conf:
skipNFSInHostResources yes
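For context, a sketch of applying it on a typical Linux net-snmp install (a config/ops fragment; the path and service name may differ by distro):

```shell
# Add the option to the snmpd configuration (path may vary per distro).
echo 'skipNFSInHostResources yes' | sudo tee -a /etc/snmp/snmpd.conf

# Restart the daemon so host-resources walks skip the NFS mounts.
sudo systemctl restart snmpd
```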