Send data from PC to Raspberry Pi - OpenCV

I'm wondering if I can do something like this:
Do some image processing with OpenCV on my PC, do some math, and send the data to a Raspberry Pi running a PID controller which then drives servos, in real time.
Would UART be the best connection?

In principle you can use any available means of communication between the PC and the Raspberry Pi (UART, Ethernet, etc.).
However, you have to be careful about any timing requirements of the system you are controlling and check whether the communication rate is suitable.
For instance, a 9600 baud UART is fine for temperature control, as the dynamics of such systems are usually slow. If your servos control an inverted pendulum, however, that communication speed might make the system impossible to control.
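As an illustration of how little the PC side needs, here is a minimal sketch of writing setpoints to a USB-serial adapter with termios. The device path /dev/ttyUSB0, the 115200 baud rate and the one-setpoint-per-line text protocol are assumptions made up for this example; the real values would come from your OpenCV processing and whatever framing you agree on with the Pi.

    /* Minimal PC-side sketch: open a USB-serial port in raw 8N1 mode and
     * send one setpoint per line. Device path, baud rate and protocol are
     * assumptions for illustration only. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <termios.h>

    int main(void) {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);                 /* raw mode: no echo, no line editing */
        cfsetispeed(&tio, B115200);      /* 115200 baud is plenty for short setpoints */
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        /* In the real program this value would come from the OpenCV step. */
        double angle_setpoint = 42.5;
        char line[32];
        int n = snprintf(line, sizeof line, "%.2f\n", angle_setpoint);
        if (write(fd, line, n) != n) perror("write");

        close(fd);
        return 0;
    }

On the Pi side, the PID loop would read the corresponding serial device line by line and parse the setpoint before driving the servos.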

Related

CH340 with custom baud rate for Diesel Heater and Raspberry Pi

Is there a way (or trick) to set a CH340 to 25000 baud on Linux?
If not:
Is there a better chip which is able to do that 25000 baud (no FTDI please)?
Is there a completely different and better way to achieve my goal (see below)?
Background:
I would like to interface a Chinese diesel heater to a Raspberry Pi (OpenPlotter on a ship).
This has already been done as an open-source project with a µC (STM or ESP32): http://www.mrjones.id.au/afterburner/
There is a blue wire carrying half-duplex RS232/TTL at 25000 baud (8N1). This is the binary, proprietary communication interface to the heater.
I tried to use one of those Chinese USB-to-RS485 adapters which use a CH340C internally.
The Afterburner's interface and the USB-to-RS485 adapter's interface are similar (schematics of both were shown in the original post).
I think I should be able to use this interface without major modifications if I use "channel B" of the RS485 adapter with a pull-up resistor (worth a try).
My main problem at the moment is the baud rate. I think I would run into the same problem with a CP210x. I guess an FTDI232 might be able to handle this baud rate, but I will not use that chip at all.
My other idea was to use a NodeMCU (CH340 with ESP32/ESP8266), speak 25000 baud to the heater, and 28800 baud over the NodeMCU's serial-to-USB link. But I think doing it this way is a bit of an overkill.
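For what it's worth, the usual Linux trick for non-standard rates like 25000 baud is the BOTHER flag together with the TCGETS2/TCSETS2 ioctls, sketched below. Whether the CH340/CH341 driver actually honours an arbitrary rate, or rounds it to something close enough, is exactly the thing to verify; the device path is only illustrative.

    /* Sketch: request a non-standard 25000 baud on Linux via BOTHER and the
     * TCGETS2/TCSETS2 ioctls. Whether the CH340/CH341 driver honours the
     * exact rate has to be checked; /dev/ttyUSB0 is just an example path. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <asm/termbits.h>   /* struct termios2, BOTHER, CS8, ... (do not mix with termios.h) */

    int main(void) {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios2 tio;
        if (ioctl(fd, TCGETS2, &tio) < 0) { perror("TCGETS2"); return 1; }

        tio.c_cflag &= ~CBAUD;          /* drop the standard baud-rate bits          */
        tio.c_cflag |= BOTHER;          /* "other" rate, taken from c_ispeed/c_ospeed */
        tio.c_ispeed = 25000;
        tio.c_ospeed = 25000;
        tio.c_cflag &= ~(CSIZE | PARENB | CSTOPB | CRTSCTS);
        tio.c_cflag |= CS8 | CREAD | CLOCAL;   /* 8N1, no flow control           */
        tio.c_lflag = 0;                       /* raw mode for a binary protocol */
        tio.c_iflag = 0;
        tio.c_oflag = 0;

        if (ioctl(fd, TCSETS2, &tio) < 0) { perror("TCSETS2"); return 1; }

        /* Read back to see what the driver actually accepted. */
        if (ioctl(fd, TCGETS2, &tio) == 0)
            printf("driver reports %u/%u baud\n", tio.c_ispeed, tio.c_ospeed);

        close(fd);
        return 0;
    }

If the driver rounds the rate too far off, falling back to the NodeMCU/µC approach (as the Afterburner does) sidesteps the problem entirely.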

Using consumer cellphones to build a mesh network for IOT devices?

I have been looking into LoRaWAN for a low-cost waterproof asset tracker I am looking at building.
AFAIK, the primary benefits of LoRaWAN over, say, LTE-M or cellular are no connectivity costs and potentially lower power consumption.
What I'm wondering is: why can't we use our own cellphones as the "base station" that the IoT device talks to? We can do this with Bluetooth and WiFi, so why not cellular? Is it the LTE protocol that prevents it? Physics? What am I missing?
There are quite a few architectural reasons why peer-to-peer LTE isn't feasible, but the largest is probably the fact that in LTE the uplink and downlink use different modulation techniques.
In the downlink (the connection from the base stations (eNodeBs) to the User Equipment (our mobile phones)), Orthogonal Frequency-Division Multiple Access (OFDMA) is used; this means the phone listens on the RF interface for the OFDMA signal.
This works well; OFDMA is a great way of encoding data onto the air interface, but it has a very high peak-to-average power ratio, which means that if the UEs used OFDMA in the uplink (from the UE to the eNodeB) they would have awful battery life.
Instead, in the uplink LTE uses Single-Carrier Frequency-Division Multiple Access (SC-FDMA), which is much more power efficient and lets you talk all day, so the eNodeBs listen on their RF interface for SC-FDMA-modulated traffic.
This means our UEs (mobile phones) use one modulation scheme to send and a different one to receive, so they can't talk directly to one another: they can't send OFDMA-modulated data, only receive it, and vice versa.
Some more reading on OFDMA & SC-FDMA.
The LTE relay interface introduced as part of Release 10 allows the deployment of relay nodes (a kind of low-cost eNB) that are fixed and that use in-band LTE to extend the coverage of standard eNodeBs by one hop, improve signal quality and increase network capacity. A relay can be placed so that it converts one long hop into two shorter hops.
However, the approach of using a UE as a relay faces many challenges, as it burdens the UE with functional changes across layers (MAC, PHY, RRC, NAS): it has to take on additional functionality from the relay node/eNB, ranging from lower-layer signalling, coordination and mobility to forwarding. There might also be additional power consumption and antenna changes needed to support this, all of which adds to the cost of the UE.

WiFi Beacon Packets

I'm trying to write a simple C program with WinPcap that broadcasts a beacon packet and captures it on all nearby WiFi units. The code I'm using is very similar to the examples available at WinPcap[1].
The code runs fine if I create an ad-hoc network connection and join all the computers to it. However, this process of creating and joining an ad-hoc network is cumbersome. It would be much better if, regardless of which network each computer is on, the beacon packets were broadcast and captured once the code is running.
As simple as this problem might sound, after some searching it seems it cannot be done on Windows (short of rewriting drivers or maybe the kernel):
Raw WiFi Packets with WinPcap[2]
Sending packets without network connection[3]
Does winpcap/libpcap allow me to send raw wireless packets?[4]
Basically, it would be necessary to put the WiFi adapter into monitor mode, which is not supported on Windows[5]. Therefore, if the computers are not on the same network, the packets will be discarded.
1st Issue
I'm still intrigued: beacon and probe request packets are normal traffic on the network. How can they be sent and received constantly, yet the user is not allowed to write a program to do so? How do I reconcile that?
2nd Issue
Does anyone have experience with the Managed Wifi API[6]? I've heard that it might help.
3rd Issue
Acrylic WiFi[7] claims to have developed an NDIS driver which supports monitor mode under Windows. Does anyone have experience with this software? Is it possible to integrate it with C code?
4th Issue
Is it possible to code such a WiFi beacon on Linux? And on Android?
[1] www.winpcap.org/docs/docs_412/html/main.html
[2] stackoverflow.com/questions/34454592/raw-wifi-packets-with-winpcap/34461313?noredirect=1#comment56674673_34461313
[3] stackoverflow.com/questions/25631060/sending-packets-without-network-connection-wireless-adapter
[4] stackoverflow.com/questions/7946497/does-winpcap-libpcap-allow-me-to-send-raw-wireless-packets
[5] en.wikipedia.org/wiki/Monitor_mode#Operating_system_support
[6] managedwifi.codeplex.com/
[7] www.acrylicwifi.com/
A couple of your questions I will try to answer. Management and control packets are used for running a WiFi network and don't contain data; I would not call these normal packets. Windows used to (and I think still does) convert data packets into Ethernet frames and pass them up the stack. Beacon and probe request packets are not necessary for the TCP/IP stack to work, i.e. web browsers don't need beacon frames to fetch your web page. Most OSes need only minimal information from management/control packets to let a user interact with a WiFi adapter; most of these packets are useful only to the driver (and low-level OS components) for figuring out how to interact with the network. This way, WiFi adapters look and act like Ethernet adapters to higher-level OS components.
I've never had any experience with the Managed Wifi API or Acrylic, so I can't give you any feedback.
Most analyzers that capture and send packets do it in two or three separate modes, mainly because of hardware. WiFi adapters can be in listen mode (promiscuous mode and/or monitor mode) or adapter mode. To capture network traffic you need to listen and not send, i.e. if someone sends a packet while you are sending, you miss that traffic. To capture (or send) raw traffic you need a custom NDIS driver on Windows; on Linux many drivers already support it. Check out Wireshark or tshark: they use WinPcap to capture packets on Windows, and there are some adapters they recommend for packet capture.
Yes, it is possible to send a beacon on Linux, e.g. with Aireplay. I know it's possible to capture traffic on Android, but the device needs to be rooted or running custom firmware, which I would believe also means you can send custom packets. If you are simply trying to send a packet, it might be easier to capture some traffic in tshark or Wireshark and use something like Aireplay to resend it. You could also edit the packet with a hex editor to tune it to what you need.
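To make the Linux part concrete, below is a minimal sketch of injecting a beacon frame through a raw AF_PACKET socket on an interface that has already been put into monitor mode (e.g. with iw). The interface name mon0, the MAC address and the SSID are made up for illustration; it needs root, and whether injection works at all depends on the adapter's driver.

    /* Sketch: inject one 802.11 beacon frame on a Linux monitor-mode interface.
     * Interface name, MAC and SSID are illustrative; requires root and a driver
     * that supports packet injection. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <netpacket/packet.h>
    #include <linux/if_ether.h>
    #include <arpa/inet.h>

    int main(void) {
        const char *ifname = "mon0";    /* assumed monitor-mode interface */
        const char *ssid = "test-ssid"; /* assumed SSID                   */
        unsigned char frame[256];
        int len = 0;

        /* Minimal radiotap header: version 0, no optional fields, length 8. */
        const unsigned char radiotap[8] = { 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00 };
        memcpy(frame + len, radiotap, sizeof radiotap);
        len += sizeof radiotap;

        /* 802.11 management header: subtype beacon, broadcast destination. */
        const unsigned char hdr[24] = {
            0x80, 0x00,                         /* frame control: beacon        */
            0x00, 0x00,                         /* duration                     */
            0xff, 0xff, 0xff, 0xff, 0xff, 0xff, /* DA: broadcast                */
            0x02, 0x11, 0x22, 0x33, 0x44, 0x55, /* SA: locally administered MAC */
            0x02, 0x11, 0x22, 0x33, 0x44, 0x55, /* BSSID                        */
            0x00, 0x00                          /* sequence control             */
        };
        memcpy(frame + len, hdr, sizeof hdr);
        len += sizeof hdr;

        /* Fixed beacon fields: timestamp (8), beacon interval (2), capabilities (2). */
        unsigned char fixed[12] = { 0 };
        fixed[8] = 0x64;                        /* interval: 100 TU  */
        fixed[10] = 0x01;                       /* capabilities: ESS */
        memcpy(frame + len, fixed, sizeof fixed);
        len += sizeof fixed;

        /* SSID information element, then a minimal Supported Rates element. */
        frame[len++] = 0x00;
        frame[len++] = (unsigned char)strlen(ssid);
        memcpy(frame + len, ssid, strlen(ssid));
        len += strlen(ssid);
        frame[len++] = 0x01; frame[len++] = 0x01; frame[len++] = 0x82;  /* 1 Mbps (basic) */

        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_ll sll = { 0 };
        sll.sll_family = AF_PACKET;
        sll.sll_protocol = htons(ETH_P_ALL);
        sll.sll_ifindex = if_nametoindex(ifname);
        if (bind(fd, (struct sockaddr *)&sll, sizeof sll) < 0) { perror("bind"); return 1; }

        if (send(fd, frame, len, 0) < 0) perror("send");
        close(fd);
        return 0;
    }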

3G/4G peer to peer network for long-distance communication in remote area?

I'm working on an engineering project where I want a go-kart to maintain a direct connection with a base station. The base and go-kart can be separated by about a half mile (with lots of obstacles in between) which is too far for WiFi.
I'm thinking about using 3G/4G to directly connect the two. Does anyone have any resources or ideas that might help?
Or, alternatively, a better way to connect them? I'm just trying to send some sensor data (pretty low bandwidth) in real-time.
The biggest problem you face is the radio spectrum you are allowed to use. All 3G/4G spectrum is licensed to some firm, and they get really unhappy (e.g. have you hunted down and fined) when you transmit in their space.
I did find DASH7 which
is an open source wireless sensor networking standard … which operates in the 433 MHz unlicensed ISM band. DASH7 provides multi-year battery life, range of up to 2 km, indoor location with 1 meter accuracy, low latency for connecting with moving things, a very small open source protocol stack …
with a parts cost around US$ 10. This sounds like it satisfies your requirements and keeps the local constabulary from bothering you.
You could maybe use SMS between a modem on the kart and a mobile phone or modem at the base.
A mobile data connection like a telephone call isn't possible directly between the two; you have to make a data connection from the kart to a server in your operator's core network, identified by the APN. Then you can access IP addresses as for a normal internet connection - so the base computer would have to be a web server.

Converting TCP/IP traffic to Halfduplex in linux

I am developing a custom network driver for a PHY medium which doesn't support full-duplex mode.
I want to run TCP/IP traffic through this network driver, on top of this half-duplex PHY medium.
But TCP/IP traffic can be full duplex. I would like to implement some mechanism/algorithm in the driver so that it converts the TCP/IP traffic to half duplex in Linux.
Please let me know if this can be achieved, and how to do it.
So you are trying to write a driver which supports full duplex traffic on a card which actually does NOT support the feature...
Well, you must be aware that the networking subsystem is one of the largest subsystems in the kernel and one of the few which actually uses softirqs (because it always needs to scale appropriately in this age of multiprocessors), and it still had to resort to some trickery (NAPI) in order to manage the deluge of interrupt requests generated by the ever-increasing rates of present-day media. Why am I saying all this? Just to remind you of the real-life complexity involved in writing a 'regular' network driver, let alone a 'pseudo full duplex' driver.
Now, I believe what you essentially want is to give an illusion of 'full duplexity' to the TCP/IP stack (is it?), i.e. your driver presents itself as just another full-duplex driver, and any of its clients (be it the MAC layer or something like ethtool) can work with it (in terms of handing over and retrieving packets) in the same way as it would with, and expect results from, a 'regular' full-duplex driver.
If this is really the case, I wonder what good such an illusion would be. Perhaps you are just experimenting? In any case, TCP is full duplex by default anyway, and on a half-duplex medium the data rates are simply a bit lower (although not exactly half) than those obtained with a full-duplex adapter. I don't think it even matters at the higher layers (in terms of functionality) whether the medium is full or half duplex (except maybe in the MAC layer?); correct me if I'm wrong.
There were (and still are) quite a few half-duplex media in use, and many media support both full-duplex and half-duplex traffic. I fail to see how it would affect the clients of the driver (beyond lowering the overall data rate as the only tangible effect). You can look at pretty much any network driver in the kernel and see that it has ways to configure the adapter for either full or half duplex, and user space can toggle this, ethtool being one of the ways.
Anyway, you may want to have a look at, and perhaps take a few tips from, the Modbus driver (that bus being half duplex by default) here.
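To make the ethtool point concrete, here is a small user-space sketch of the legacy SIOCETHTOOL ioctl, which is roughly what `ethtool -s eth0 duplex half` does under the hood. The interface name eth0 is an assumption, and it only has an effect if the driver implements the corresponding ethtool get/set hooks.

    /* Sketch: read the current duplex setting and force half duplex from user
     * space via the legacy SIOCETHTOOL ioctl. Interface name is illustrative;
     * requires root and a driver that implements the ethtool hooks. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void) {
        struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET };
        struct ifreq ifr;
        memset(&ifr, 0, sizeof ifr);
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ecmd;

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("ETHTOOL_GSET"); return 1; }
        printf("current duplex: %s\n", ecmd.duplex == DUPLEX_FULL ? "full" : "half");

        ecmd.cmd = ETHTOOL_SSET;
        ecmd.duplex = DUPLEX_HALF;       /* force half duplex                 */
        ecmd.autoneg = AUTONEG_DISABLE;  /* autonegotiation would override it */
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) perror("ETHTOOL_SSET");

        close(fd);
        return 0;
    }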
I'm not sure how you're relating the MAC layer to the TCP layer. Duplex mode is an Ethernet-domain concept and it doesn't propagate to IP, nor to TCP. In Ethernet terms, duplexity means you can send and receive MAC frames either exclusively at different times (half duplex) or at the same time (full duplex).
The upper layers of the network stack are completely unaware of this (at least they should be). Consider the following example: you're sending a huge file over the network using FTP. On most normal network systems the stack would be FTP/TCP/IP/Ethernet. From FTP's perspective you have a virtual session, from TCP's a virtual pipe, from IP's you just know how to reach the end system, and from Ethernet's perspective you just know how to reach the next node in the network.
TCP doesn't care whether your packets are chopped up during transmission, nor whether a packet is delayed within a certain threshold because an incoming packet is arriving; it only cares about receiving confirmation that the packet made it to the final destination. I hope this shows my point.
