I have started to develop software for the Zynq 7020 SoC from Xilinx. I have finished several tutorials, and I have found that whenever I use a predefined block in the PL (for example, the GPIO controller), the associated software driver for that peripheral is generated automatically. Does anybody know whether this is true only for the predefined blocks, or also for user-developed blocks? Thanks.
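To illustrate what I mean by the generated driver: in a bare-metal application it is used roughly like the sketch below. The XPAR_* macro name is only an example; the real name in the generated xparameters.h depends on how the block is named in the block design.

/* Rough illustration of using the driver generated for the AXI GPIO block
 * in a standalone (bare-metal) application.  XPAR_AXI_GPIO_0_DEVICE_ID is a
 * placeholder; the actual macro comes from the generated xparameters.h. */
#include "xparameters.h"
#include "xgpio.h"

int main(void)
{
    XGpio gpio;

    /* Initialize the driver instance for the GPIO block in the PL */
    XGpio_Initialize(&gpio, XPAR_AXI_GPIO_0_DEVICE_ID);

    /* Configure channel 1 as all outputs and drive a value */
    XGpio_SetDataDirection(&gpio, 1, 0x00000000);
    XGpio_DiscreteWrite(&gpio, 1, 0x1);

    return 0;
}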
I have found a document that is, in my opinion, useful regarding the process of developing drivers for custom IP blocks.
The tools can't automatically write your driver, but if you package your IP block and driver in the right format you can do the same type of thing. Look here for some more info on how to do that: Is there a way to pass a design parameter from a custom IP to software
I'm a software developer but I'm a newbie to embedded software development.
I have a Zynq UltraScale board that has an AXI DMA in its hardware, and I want to access this DMA from Linux.
I know I should use the DMA Engine to access the DMA in Linux, and I found the following file, which is the Xilinx DMA driver, but I can't add it to my Qt project without errors; I get header-file-not-found errors.
drivers/dma/xilinx/xilinx_dma.c
I have scattered pieces of information about the DMA driver, the device tree, and the DMA Engine, but I know nothing about how to put them together to access the hardware DMA.
I built a PetaLinux project and added the DMA Engine and DMA Test Client options to its kernel.
I don't know whether adding the DMA Engine to the PetaLinux project is enough, or whether I need a driver as well.
I also don't know whether adding the hardware specification (the .xsa and .bit files) to the PetaLinux project is enough, or whether I need to add a device tree entry so that Linux detects the DMA.
I'm looking for a step-by-step tutorial on how to set up Linux and Qt Creator for accessing the DMA, or at least a clear roadmap to my goal.
Thank you in advance.
First of all, you are facing errors when adding xilinx_dma.c to the Qt project because this file is meant to be compiled as part of the kernel or as a kernel module.
Adding the DMA Engine to PetaLinux is not enough to work with the DMA from user space. The DMA Engine only provides a standardized API that lets different DMAs be integrated into the kernel. You need to add a client driver as well. Xilinx, as far as I know, provides a simple client driver called the DMA Proxy driver. It also includes some simple examples that show how you can access the DMA from user space. However, if your application needs high bandwidth, you should probably consider other options.
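To give an idea of what a client driver does underneath (and of what a proxy-style driver has to wrap behind a character device), here is a rough sketch of the kernel-space DMA Engine client flow. The channel name "tx_chan" is only an assumption; it would normally come from the dma-names property of your device tree node.

/* Sketch of a minimal DMA Engine client transfer in kernel space. */
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/err.h>

static void xfer_done(void *arg)
{
    complete((struct completion *)arg);    /* wake up the waiting thread */
}

static int do_one_tx_transfer(struct device *dev, void *buf, size_t len)
{
    struct dma_async_tx_descriptor *desc;
    struct dma_chan *chan;
    struct completion done;
    dma_addr_t dma_buf;
    int ret = 0;

    chan = dma_request_chan(dev, "tx_chan");    /* name from the device tree */
    if (IS_ERR(chan))
        return PTR_ERR(chan);

    dma_buf = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma_buf)) {
        ret = -ENOMEM;
        goto out_release;
    }

    desc = dmaengine_prep_slave_single(chan, dma_buf, len, DMA_MEM_TO_DEV,
                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
    if (!desc) {
        ret = -EIO;
        goto out_unmap;
    }

    init_completion(&done);
    desc->callback = xfer_done;        /* invoked when the transfer completes */
    desc->callback_param = &done;

    dmaengine_submit(desc);
    dma_async_issue_pending(chan);     /* actually start the transfer */
    wait_for_completion(&done);

out_unmap:
    dma_unmap_single(dev, dma_buf, len, DMA_TO_DEVICE);
out_release:
    dma_release_channel(chan);
    return ret;
}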
There is also an open-source client driver for the AXI DMA which achieves higher bandwidth than the DMA Proxy driver. Its user-space API also allows you to register a callback function that is called whenever a transaction is finished.
The third option is to implement the driver in user space. This can be done by defining the DMA as a UIO device in the device tree and accessing its register map directly from user space. In this case, you need to allocate some contiguous memory blocks in kernel space to avoid complications with the MMU, which cannot be dealt with from user space.
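As a rough sketch of the UIO route, assuming the DMA node is exposed as /dev/uio0 via the generic UIO driver: the device name, the 4 KiB map size and the register offset below are placeholders; the real register map is in the Xilinx AXI DMA product guide.

/* Minimal sketch of accessing a UIO-mapped register block from user space. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/uio0");
        return 1;
    }

    /* Map the first register region of the UIO device */
    volatile uint32_t *regs = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Example register access: read the 32-bit register at byte offset 0x0 */
    printf("reg[0x0] = 0x%08x\n", regs[0]);

    /* A blocking read() on fd would wait for the device interrupt (UIO semantics) */

    munmap((void *)regs, 0x1000);
    close(fd);
    return 0;
}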
I have a distributed program that communicates with ZeroMQ and runs on HPC clusters.
ZeroMQ uses TCP sockets, so by default on HPC clusters the communication will go over the administration network. I have therefore introduced an environment variable, read by my code, that forces communication onto a particular network interface.
With InfiniBand (IB), it is usually ib0. But there are cases where another IB interface is used for the parallel file system, on Cray systems the interface is ipogif, and on some non-HPC systems it can be eth1, eno1, p4p2, em2, enp96s0f0, or whatever...
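Concretely, the selection logic in my code looks something like the sketch below (the variable name MYAPP_IFACE and the socket type are just illustrations):

/* Sketch of the environment-variable override: if MYAPP_IFACE is set,
 * bind on that interface, otherwise fall back to all interfaces. */
#include <stdio.h>
#include <stdlib.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *socket = zmq_socket(ctx, ZMQ_REP);

    const char *iface = getenv("MYAPP_IFACE");   /* e.g. "ib0", "eth1", ... */
    char endpoint[256];
    snprintf(endpoint, sizeof(endpoint), "tcp://%s:6000",
             iface ? iface : "*");

    if (zmq_bind(socket, endpoint) != 0) {
        fprintf(stderr, "zmq_bind(%s) failed: %s\n",
                endpoint, zmq_strerror(zmq_errno()));
        return 1;
    }

    /* ... exchange messages ... */

    zmq_close(socket);
    zmq_ctx_term(ctx);
    return 0;
}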
The problem is that I need to ask the administrator of the cluster for the name of the network interface to use, whereas codes using MPI don't need to, because MPI "knows" which network to use.
What is the most portable way to discover the name of the high-performance network interface on a Linux HPC cluster? (I don't mind writing a small MPI program for this if there is no simple way.)
There is no simple way, and I doubt a complete solution exists. For example, Open MPI comes with an extensive set of ranked network communication modules and tries to instantiate all of them, selecting in the end the one that has the highest rank. The idea is that the ranks somehow reflect the speed of the underlying network and that, if a given network type is not present, its module will fail to instantiate; so, faced with a system that has both Ethernet and InfiniBand, it will pick InfiniBand because its module has higher precedence. This is why larger Open MPI jobs start relatively slowly, and it is definitely not foolproof: in some cases one has to intervene and manually select the right modules, especially if the node has several network interfaces or InfiniBand HCAs and not all of them provide node-to-node connectivity. This is usually configured system-wide by the system administrator or the vendor and is why MPI "just works" (pro tip: in a not-so-small number of cases it actually doesn't).
You may copy the approach taken by Open MPI and develop a set of detection modules for your program. For TCP, spawn two or more copies on different nodes, list their active network interfaces and the corresponding IP addresses, match the network addresses and bind on all interfaces on one node, then try to connect to it from the other node(s). Upon successful connection, run something like the TCP version of NetPIPE to measure the network speed and latency and pick the fastest network. Once you've gotten this information from the initial small set of nodes, it is very likely that the same interface is used on all other nodes too, since most HPC systems are as homogeneous as possible when it comes to their nodes' network configuration.
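For the interface-listing step, something as simple as getifaddrs(3) is enough; here is a sketch (the address matching, connecting and benchmarking described above would come on top of it):

/* Sketch: list the interfaces that are up and their IPv4 addresses. */
#include <arpa/inet.h>
#include <ifaddrs.h>
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>

int main(void)
{
    struct ifaddrs *ifaddr, *ifa;

    if (getifaddrs(&ifaddr) == -1) {
        perror("getifaddrs");
        return 1;
    }

    for (ifa = ifaddr; ifa != NULL; ifa = ifa->ifa_next) {
        /* Only consider interfaces that are up, non-loopback, with an IPv4 address */
        if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET)
            continue;
        if (!(ifa->ifa_flags & IFF_UP) || (ifa->ifa_flags & IFF_LOOPBACK))
            continue;

        char addr[INET_ADDRSTRLEN];
        struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
        inet_ntop(AF_INET, &sin->sin_addr, addr, sizeof(addr));
        printf("%-10s %s\n", ifa->ifa_name, addr);
    }

    freeifaddrs(ifaddr);
    return 0;
}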
If there is a working MPI implementation installed, you can use it to launch the test program. You may also enable debug logging in the MPI library and parse the output, but this will require that the target system has an MPI implementation supported by your log parser. Also, most MPI libraries use native InfiniBand or whatever high-speed network API there is and will not tell you which is the IP-over-whatever interface, because they won't use it at all (unless configured otherwise by the system administrator).
Q : What is the most portable way to discover the name of the high-performance network interface on a linux HPC cluster?
This seems to be in a gray zone: it tries to solve a multi-faceted problem that spans site-specific hardware (technical) interface naming and the sites' non-technical, weakly administratively maintained, preferred ways of use.
As-is State :
ZeroMQ can (as per RFC 37/ZMTP v3.0+) specify <hardware(interface)>:<port>/<service> details :
zmq_bind (server_socket, "tcp://eth0:6000/system/name-service/test");
And:
zmq_connect (client_socket, "tcp://192.168.55.212:6000/system/name-service/test");
yet it has, to my knowledge, no means to reverse-engineer the primary use of such an interface in the holistic context of the HPC site and its hardware configuration.
It seems to me that your idea is the way to go: pre-test the administrative mappings via an MPI-based tool first, and let the ZeroMQ deployment use these externally detected (if indeed auto-detectable, as you assumed above) configuration details for a proper (preferred) interface usage.
The Safe Way to Go :
Asking the HPC-infrastructure support team (who are responsible for knowing all of the above and are trained to help scientific teams use the HPC in the most productive manner) would be my preferred way to go.
Disclaimer :
Sorry if this does not help your wish to read & auto-detect all the needed configuration details (a universal BlackBox-HPC-ecosystem detection and auto-configuration strategy would hardly be a trivial one-liner, I guess, would it?)
I'm trying to implement a tree topology with Cooja/Contiki. Looking through the examples, I've not been able to find a good one that shows what I need.
In short:
I need to implement a topology of this type (picture below) with Cooja and Contiki. Is there someone who could give me some advice?
Thanks in advance
I don't really use the Contiki operating system; I have only ever used TinyOS, but a network topology such as the one you have should be easily achievable.
For TinyOS, the mote-to-mote radio tutorial HERE will show you how two different sensor nodes can communicate with each other (a gateway is basically just a sensor node connected to a PC), and the mote-to-PC communication tutorial HERE will show you how a gateway node can forward information from itself to the PC it is connected to. When the network is running, you basically have a Java application listening to the USB port and receiving packets from the gateway node. Once a packet has been received by the Java application, you can send it to an external network server.
It may sound difficult if you have never developed on TinyOS, but what you want to do is very common, so there will be complete programs in the tutorial section of a typical TinyOS distribution showing you how to achieve most of the things you need to achieve. There should also be similar examples in Contiki.
I am trying to write a small script that will help me automate some of my IT tasks related to VLAN management.
I do not want to log in to my switch via the command line; I want to send commands to it and get responses over the network.
Are there any alternatives? I have started to search the web, but so far I have not found anything.
I know SNMP is an option for gathering information, but I want to check other alternatives.
Thanks.
You can try the NETCONF configuration protocol; it is an RPC-like management protocol supported by Cisco and many other vendors.
SNMP is the only widely and commonly used option here.
You can use WMI to manage Windows-based infrastructure.
There is also the legacy syslog protocol (RFC 3164), which is UDP-based.
For traffic monitoring and billing purposes there are the NetFlow, sFlow, jFlow, IPFIX and RADIUS protocols.
There are some other protocols, but they are mostly proprietary.
So I'd suggest using SNMP, which is nowadays a de facto standard in the network monitoring domain.
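For example, reading a single value with the Net-SNMP C library looks roughly like the sketch below. The host name "myswitch" and the "public" community string are placeholders; real VLAN management would be SET requests against the vendor's VLAN/bridge MIBs.

/* Minimal Net-SNMP GET sketch. */
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    netsnmp_session session, *ss;
    netsnmp_pdu *pdu, *response = NULL;
    oid name[MAX_OID_LEN];
    size_t name_len = MAX_OID_LEN;

    init_snmp("vlan-tool");

    snmp_sess_init(&session);
    session.peername = strdup("myswitch");      /* placeholder host */
    session.version = SNMP_VERSION_2c;
    session.community = (u_char *)"public";     /* placeholder community */
    session.community_len = strlen("public");

    ss = snmp_open(&session);
    if (!ss) {
        snmp_perror("snmp_open");
        return 1;
    }

    /* Build a GET request for a single object */
    pdu = snmp_pdu_create(SNMP_MSG_GET);
    read_objid("SNMPv2-MIB::sysDescr.0", name, &name_len);
    snmp_add_null_var(pdu, name, name_len);

    if (snmp_synch_response(ss, pdu, &response) == STAT_SUCCESS &&
        response->errstat == SNMP_ERR_NOERROR) {
        print_variable(response->variables->name,
                       response->variables->name_length,
                       response->variables);
    } else {
        fprintf(stderr, "SNMP request failed\n");
    }

    if (response)
        snmp_free_pdu(response);
    snmp_close(ss);
    return 0;
}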
You might look at Expect as a scripting-language solution. It is commonly used to do exactly what you need:
log into device (with result cases)
execute commands
save config
logout
As you build out a script library, tasks become simpler, as you can do things like run scripts with parameters and have Expect do all the detail work.
See the Wikipedia article for an overview.
I have also used SNMP for this kind of thing, but the functionality is different: you use an SNMP read-write privilege to upload partial or complete configs, save the running config to flash, and/or save the config off-device.
Try the NETCONF+YANG protocol; it is currently the best option for network device configuration. More about SNMP alternatives:
https://bestmonitoringtools.com/top-snmp-alternatives-because-snmp-is-dying/
Has anyone successfully talked Profibus from a .NET application?
If you did, what device/card did you use to accomplish this, what was the application, and did you use any kind of preexisting or available code?
We've not used Profibus, but we have used DeviceNet (another CAN-based protocol), Ethernet/IP and ControlNet, which all have similar challenges.
We've been doing this since the late 1990s and therefore rely mainly on our own generated code using off-the-shelf hardware. The companies that have shown longevity during that period that I remember are:
AnyBus (HMS, www.anybus.com): we've recently started using their gateway products, as we can place fieldbus interfaces close to the hardware and then communicate over normal Ethernet (usually using Ethernet/IP, www.odva.org). This has the advantage of separating the hardware and the PC using only a network cable. The Ethernet/IP .NET classes were written by ourselves, as nothing much was on the market at the time. I'm sure a quick Google search would find suitable class libraries.
SST (www.mysst.com) have had fieldbus interfaces for more than a decade. The last SST card we used for DeviceNet still only had VB6 sample code. A good selection of fieldbus support and different form factors, e.g. PC104, PCI, PCMCIA.
Beckhoff/Wago (www.beckhoff.com, www.wago.com): we typically use Beckhoff for the I/O more than for the interface cards, but again a company that has been around a long time. They also have products that support exposing I/O via OPC (another way for you to get I/O information without communicating directly with the hardware/device drivers).
I suggest not using OPC interfaces to the hardware directly (it's OK for communication of the form PC (.NET) -> PLC -> Profibus), as you need to ensure that the control system responds to a loss of control from your .NET application. I'm assuming that you need a Profibus master here (not a slave), so as long as your control system is intrinsically fail-safe, loss of communication should mean the control system enters an "Idle" state and therefore most of the I/O will return to the fail-safe state.
We also try to ensure that we do not put safety-related code in .NET. Most of our .NET code is user interface on top of a PLC, but in some places we do control the fieldbus directly and ensure that hardware interlocks will prevent unsafe operation, either using safety switches/relays or a small PLC whose only task is interlocking. And above all, make the system fail-safe! Loss of comms from the .NET code should shut down the automation to the fail-safe state.
We have used Steeplechase to connect our Profibus to our automated pick system.
http://www.phoenixcontact.com/automation/32131_31909.htm
Try this: http://libnodave.sourceforge.net