What is the fastest way of communicating from PC to a Controller using LabVIEW? - communication

I am working on a project wherein I need to communicate 8 boolean outputs to the controller based on the result generated by a program built using LabVIEW on a PC.
I have discussed this with a few colleagues, who suggest using the parallel port as a data bus with TTL signals to communicate with a microcontroller, which they say will give the maximum transfer speed.
I understand that it is a cost-effective solution, but will it be the fastest way to communicate with a microcontroller? Also, since the parallel port is a legacy technology that is no longer available on standard PCs, I would have to buy an additional PCI-E card with a parallel port interface.

Related

When to write a Custom Kernel Module

Problem Statement:
I have a very high bandwidth data link that is UDP-based. The source of this data is not configurable and sends a stream of UDP datagrams. We have code that uses the standard methods for receiving data on the UDP socket, and it works adequately. I wanted to know:
Does there exist an interface to extract multiple UDP datagrams at a time, to improve efficiency?
If one doesn't exist, does it make sense to create a kernel module to provide that capability?
I am a novice, and I wanted to understand what thought process has to happen before writing your own kernel module seems appropriate. I know that such a surgical procedure isn't meant to be done lightly, but there must be a set of criteria under which that action is prudent. Maybe not in my case, but in general.
HW / Kernel Module Perspective
A typical network adapter these days is capable of distributing received packets across multiple hardware Rx queues, thus letting the host run multiple software Rx queues bound to different CPU cores, reading out packets in parallel. From a single HW/SW queue perspective, the host may poll it for new packets (see Linux NAPI), with each poll ideally yielding a batch of packets; alternatively, the host may still use an interrupt-driven approach for Rx signalling, with interrupt coalescing turned on for improved efficiency.
Existing NIC drivers in Linux kernel strive to stick with the most performant techniques, and the kernel itself should be able to leverage all of that properly.
Userland / Application Perspective
There's the PACKET_MMAP interface provided by the Linux kernel for improved Rx/Tx efficiency on the application side. Long story short, an application can set up a memory buffer shared between kernel- and userspace and read incoming packets out of it, ideally in batches, or blocks, thus avoiding the costly kernel-to-userspace copies and context switches so customary when using regular methods.
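To make that concrete, here is a minimal sketch of what a TPACKET_V2 receive ring looks like from the application side. It is illustrative only: the interface name "eth0", the ring dimensions and the missing error handling are arbitrary choices for the sketch, not recommendations.

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>

int main(void)
{
    /* Raw AF_PACKET socket receiving all protocols. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

    int version = TPACKET_V2;
    setsockopt(fd, SOL_PACKET, PACKET_VERSION, &version, sizeof(version));

    /* Describe the shared ring: 64 blocks of 4 MiB, split into 2 KiB frames. */
    struct tpacket_req req;
    memset(&req, 0, sizeof(req));
    req.tp_block_size = 1 << 22;
    req.tp_frame_size = 1 << 11;
    req.tp_block_nr   = 64;
    req.tp_frame_nr   = (req.tp_block_size / req.tp_frame_size) * req.tp_block_nr;
    setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

    /* Map the ring once; the kernel writes frames, we only flip their status. */
    size_t ring_len = (size_t)req.tp_block_size * req.tp_block_nr;
    unsigned char *ring = mmap(NULL, ring_len, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);

    /* Bind to one interface; "eth0" is just a placeholder name. */
    struct sockaddr_ll ll;
    memset(&ll, 0, sizeof(ll));
    ll.sll_family   = AF_PACKET;
    ll.sll_protocol = htons(ETH_P_ALL);
    ll.sll_ifindex  = if_nametoindex("eth0");
    bind(fd, (struct sockaddr *)&ll, sizeof(ll));

    for (unsigned int i = 0; ; i = (i + 1) % req.tp_frame_nr) {
        struct tpacket2_hdr *hdr =
            (struct tpacket2_hdr *)(ring + (size_t)i * req.tp_frame_size);

        /* Sleep until the kernel has handed this frame to userspace. */
        while (!(hdr->tp_status & TP_STATUS_USER)) {
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            poll(&pfd, 1, -1);
        }

        /* The packet data starts tp_mac bytes into the frame. */
        unsigned char *pkt = (unsigned char *)hdr + hdr->tp_mac;
        printf("got %u bytes (first byte 0x%02x)\n", hdr->tp_len, pkt[0]);

        /* Hand the frame back to the kernel. */
        hdr->tp_status = TP_STATUS_KERNEL;
    }
}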
For added efficiency, the application may have multiple sockets bound to the NIC in separate threads / processes and demand that packet reception be load balanced across these sockets (see AF_PACKET fanout mode description).
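The fanout part is then just one more socket option applied to each per-thread AF_PACKET socket; a short sketch (the group id 42 is arbitrary):

#include <linux/if_packet.h>
#include <sys/socket.h>

/* Join fanout group 42 so the kernel load-balances received packets
 * across all sockets that are members of the same group. */
static int join_fanout_group(int fd)
{
    int group_id = 42;                                  /* arbitrary, 0..65535 */
    int arg = (group_id & 0xffff) | (PACKET_FANOUT_HASH << 16);
    return setsockopt(fd, SOL_PACKET, PACKET_FANOUT, &arg, sizeof(arg));
}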
DPDK Perspective
DPDK is a kernel-bypass framework that allows an application to seize full control of a network adapter by means of a vendor-specific poll-mode driver, or PMD, effectively running in userspace as part of the application and by its very nature not needing any kernel-to-userspace copies, context switches and, most likely, locking. Multi-queue receive operation, load balancing (round robin, RSS, you name it) and more cutting-edge offloads are likely to be available, too (it's vendor specific).
Summary
The short of it: given that multiple network acceleration techniques already exist, one need never write their own kernel module to solve the problem in question. By the looks of it, your application, which, as you say, uses standard methods, is not aware of the PACKET_MMAP technique, so I'd be tempted to suggest looking at that one closely. The DPDK approach might require that the application be effectively re-implemented from scratch, so I would first go for PACKET_MMAP as the low-hanging fruit.
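One more thing worth checking before any of the above (a standard Linux facility, not something covered in the answer itself): recvmmsg(2) already batches several datagrams per system call, so if your "standard methods" are per-datagram recvfrom() calls, trying it is a very cheap experiment. A minimal sketch with arbitrary buffer and batch sizes:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define BATCH 32

/* Pull up to BATCH datagrams out of an already-bound UDP socket in one call. */
static void drain_batch(int sockfd)
{
    static char bufs[BATCH][2048];
    struct iovec iov[BATCH];
    struct mmsghdr msgs[BATCH];

    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < BATCH; i++) {
        iov[i].iov_base = bufs[i];
        iov[i].iov_len  = sizeof(bufs[i]);
        msgs[i].msg_hdr.msg_iov    = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    int n = recvmmsg(sockfd, msgs, BATCH, 0, NULL);   /* blocks until >= 1 datagram */
    for (int i = 0; i < n; i++)
        printf("datagram %d: %u bytes\n", i, msgs[i].msg_len);
}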

How to discover the high-performance network interface on a linux HPC cluster?

I have a distributed program which communicates with ZeroMQ that runs on HPC clusters.
ZeroMQ uses TCP sockets, so by default on HPC clusters the communication goes over the administration network; I have therefore introduced an environment variable, read by my code, to force communication onto a particular network interface.
With Infiniband (IB), usually it is ib0. But there are cases where another IB interface is used for the parallel file system, or on Cray systems the interface is ipogif, on some non-HPC systems it can be eth1, eno1, p4p2, em2, enp96s0f0, or whatever...
The problem is that I need to ask the administrator of the cluster the name of the network interface to use, while codes using MPI don't need to because MPI "knows" which network to use.
What is the most portable way to discover the name of the high-performance network interface on a linux HPC cluster? (I don't mind writing a small MPI program for this if there is no simple way)
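For concreteness, the workaround described above (an environment variable naming the interface, fed straight into the ZeroMQ endpoint) might look roughly like the sketch below; the variable name MYAPP_IFACE and the port number are invented for the illustration:

#include <stdio.h>
#include <stdlib.h>
#include <zmq.h>

int main(void)
{
    /* MYAPP_IFACE is a hypothetical variable; fall back to all interfaces. */
    const char *iface = getenv("MYAPP_IFACE");
    if (iface == NULL)
        iface = "*";

    char endpoint[256];
    snprintf(endpoint, sizeof(endpoint), "tcp://%s:5555", iface);

    void *ctx  = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_REP);
    if (zmq_bind(sock, endpoint) != 0) {
        fprintf(stderr, "zmq_bind(%s): %s\n", endpoint, zmq_strerror(zmq_errno()));
        return 1;
    }
    printf("listening on %s\n", endpoint);

    zmq_close(sock);
    zmq_ctx_term(ctx);
    return 0;
}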
There is no simple way and I doubt a complete solution exists. For example, Open MPI comes with an extensive set of ranked network communication modules and tries to instantiate all of them, selecting in the end the one that has the highest rank. The idea is that ranks somehow reflect the speed of the underlying network and that if a given network type is not present, its module will fail to instantiate, so faced with a system that has both Ethernet and InfiniBand, it will pick InfiniBand as its module has higher precedence. This is why larger Open MPI jobs start relatively slowly, and it is definitely not foolproof - in some cases one has to intervene and manually select the right modules, especially if the node has several network interfaces or InfiniBand HCAs and not all of them provide node-to-node connectivity. This is usually configured system-wide by the system administrator or the vendor, and it is why MPI "just works" (pro tip: in a not-so-small number of cases it actually doesn't).
You may copy the approach taken by Open MPI and develop a set of detection modules for your program. For TCP, spawn two or more copies on different nodes, list their active network interfaces and the corresponding IP addresses, match the network addresses and bind on all interfaces on one node, then try to connect to it from the other node(s). Upon successful connection, run something like the TCP version of NetPIPE to measure the network speed and latency and pick the fastest network. Once you've gotten this information from the initial small set of nodes, it is very likely that the same interface is used on all other nodes too, since most HPC systems are as homogeneous as possible when it comes to their nodes' network configuration.
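The interface/address listing step of that approach is a few lines with getifaddrs(3); a minimal sketch that prints every interface carrying an IPv4 address (the candidate filtering and the NetPIPE-style speed test would sit on top of this):

#include <arpa/inet.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    struct ifaddrs *ifas, *ifa;

    if (getifaddrs(&ifas) != 0) {
        perror("getifaddrs");
        return 1;
    }

    /* Print each interface that has an IPv4 address, e.g. ib0, eth0, eno1, ... */
    for (ifa = ifas; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET)
            continue;

        char addr[INET_ADDRSTRLEN];
        struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
        inet_ntop(AF_INET, &sin->sin_addr, addr, sizeof(addr));
        printf("%-12s %s\n", ifa->ifa_name, addr);
    }

    freeifaddrs(ifas);
    return 0;
}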
If there is a working MPI implementation installed, you can use it to launch the test program. You may also enable debug logging in the MPI library and parse the output, but this will require that the target system has an MPI implementation supported by your log parser. Also, most MPI libraries use native InfiniBand or whatever high-speed network API there is and will not tell you which is the IP-over-whatever interface, because they won't use it at all (unless configured otherwise by the system administrator).
Q : What is the most portable way to discover the name of the high-performance network interface on a linux HPC cluster?
This seems to be in a gray zone - trying to solve a multi-faceted problem spanning the site-specific (technical) hardware interface naming and the non-technical, weakly administratively maintained, preferred ways of using those interfaces.
As-is State :
ZeroMQ can (as per RFC 37/ZMTP v3.0+) specify <hardware(interface)>:<port>/<service> details :
zmq_bind (server_socket, "tcp://eth0:6000/system/name-service/test");
And:
zmq_connect (client_socket, "tcp://192.168.55.212:6000/system/name-service/test");
yet it has, to my knowledge, no means to reverse-engineer the primary use of such an interface in the holistic context of the HPC site and its hardware configuration.
Seems to me, your idea of pre-testing the administrative mappings via an MPI-based tool first, and letting the ZeroMQ deployment use these externally detected (if indeed auto-detectable, as you assumed above) configuration details for a proper (preferred) interface usage, is a reasonable way forward.
The Safe Way to Go :
Asking the HPC-infrastructure Support Team (which is responsible for knowing all of the above and is trained to help scientific teams use the HPC in the most productive manner) would be my preferred way to go.
Disclaimer :
Sorry in case this did not satisfy your wish to read & auto-detect all the needed configuration details (a universal BlackBox-HPC-ecosystem detection and auto-configuration strategy would hardly be a trivial one-liner, I guess, would it?)

Configuration Topology with cooja/contiky

I'm trying to implement a tree topology with Cooja/Contiki. Looking through examples, I've not been able to find a good one showing what I need.
In short:
I need to implement a topology of this type (pictured below) with Cooja and Contiki; could someone give me some advice?
Thanks in advance
I don't really use the Contiki operating system; I have only ever used TinyOS, but a network topology such as the one you describe should be easily achievable.
For TinyOS, the mote-to-mote radio tutorial HERE will show you how two different sensor nodes can communicate with each other (a gateway is basically just a sensor node connected to a PC), and the mote-to-PC communication tutorial HERE will show you how a gateway node can forward information from itself to the PC it is connected to. When the network is running, you basically have a Java application listening on the USB port and receiving packets from the gateway node. Once a packet has been received by the Java application, you can send it on to an external network server.
It may sound difficult if you have never developed on TinyOS, but what you want to do is very common, so there will be complete programs in the tutorial section of a typical TinyOS distribution showing you how to achieve most of what you need. There should also be similar examples in Contiki.

What is the most suitable virtual machine software for sharing hardware ports (COM, LPT etc) at register level?

I'm using Delphi to develop real-time control software, and over the last couple of years I have done some work running older Windows installations under Microsoft's Virtual PC; it works fine for 'pure software' development (i.e. no or limited access to the outside world). Such tools seem able to work with network connections, but I have to maintain software which performs I/O via the parallel port (via a device driver). We also use USB I/O. In the past I've liked Microsoft's virtual tools because it takes time to install a new operating system and then (in my case) install Delphi and a load of libraries and components to provide development support. In those circumstances I've not been too bothered by my lack of access to the low-level I/O ports.
I want to up my game and I'm happy to pay for a good virtualisation tool IF I can have access from it to the outside world, i.e. I want to be able to configure it to allow access to my machine's parallel port and COM ports in the same way as if it were running natively. This access has to expose the ports in register terms, i.e. to 'see' a port at its base address ($0378 for LPT1 or $03F8 for COM1, for example) and to support I/O operations on those registers (via the appropriate kernel access), as my Windows 7 64-bit installation is able to do.
I see that there are a number of virtualisation solutions out there now, but it's quite hard to ascertain the capability of each at such a low level. Does anyone have any experience or knowledge in this area?
The VMware products would be suited best for this. You can add virtual serial and parallel ports and forward them to a physical port on the host, or even to a file or a named pipe.
You can also connect any USB device that is connected to the host machine.
This works with VMware Workstation, and it might work with the free VMware Player as well.
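For reference, the forwarding described above ends up as a handful of entries in the VM's .vmx file; the snippet below is only a sketch (key names and accepted values can differ between VMware versions and host operating systems, so it is safer to let the Workstation UI generate the real entries):

parallel0.present = "TRUE"
parallel0.fileType = "device"
parallel0.fileName = "LPT1"
parallel0.bidirectional = "TRUE"
serial0.present = "TRUE"
serial0.fileType = "device"
serial0.fileName = "COM1"

Note that this forwards the ports at the device level through VMware's virtual hardware; it does not hand the guest direct access to the host's physical I/O registers.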

Any successful profibus communications from .NET?

Has anyone successfully talked profibus from a .NET application?
If you did, what device/card did you use to accomplish this, what was the application, and did you use any kind of preexisting or available code?
We've not used Profibus, but we have used DeviceNET (another CAN-based protocol), Ethernet/IP and ControlNet, which all present similar challenges.
We've been doing this since the late 1990s and therefore rely mainly on our own code, using off-the-shelf hardware. The companies that have shown longevity during that period, as far as I remember, are:
AnyBus (HMS, www.anybus.com): we've recently started using their gateway products, as we can place fieldbus interfaces close to the hardware and then communicate over normal Ethernet (usually using Ethernet/IP, www.odva.org). This has the advantage of separating the hardware and the PC using only a network cable. The Ethernet/IP .NET classes were written by ourselves, as nothing much was on the market at the time; I'm sure a quick Google search would find suitable class libraries.
SST (www.mysst.com) have had fieldbus interfaces for more than a decade. The last SST card we used for DeviceNET still only had VB6 sample code. They offer a good selection of fieldbus support and different form factors, e.g. PC104, PCI, PCMCIA.
Beckhoff/Wago (www.beckhoff.com, www.wago.com): we typically use Beckhoff for the I/O more than for the interface cards, but again these are companies that have been around a long time. They also have products that expose I/O via OPC (another way for you to get I/O information without communicating directly with the hardware/device drivers).
I suggest not using OPC interfaces to the hardware directly (it's OK for communication of the form PC (.NET) -> PLC -> Profibus), as you need to ensure that the control system responds to loss of control from your .NET application. I'm assuming that you need a Profibus master here (not a slave), so as long as your control system is intrinsically fail-safe, loss of communication should mean the control system enters an "Idle" state and therefore most of the I/O returns to the fail-safe state.
We also try to ensure that we do not put safety-related code in .NET. Most of our .NET code is a user interface onto a PLC, but in some places we do control the fieldbus directly and ensure that hardware interlocks prevent unsafe operation, either using safety switches/relays or a small PLC whose only task is interlocking. And above all, make the system fail-safe! Loss of comms from the .NET code should shut down the automation to the fail-safe state.
We have used Steeplechase to connect our Profibus to our automated pick system.
http://www.phoenixcontact.com/automation/32131_31909.htm
Try this: http://libnodave.sourceforge.net

Resources