Receiving an input string using a MAX485 breakout board with an Arduino Uno - parsing

I am attempting to use my Arduino Uno's RX and TX pins to receive an ASCII character string from an RS485 device transmitting at 2400 baud with 0.100 s between transmissions, and then parse and output certain pieces of the string to a 16x2 LCD attached to the Arduino. I am getting some data strings coming in on the RX pin (a 0-5 VDC square wave, as I checked with my scope). Anyone with sample code to receive RS485 ASCII strings into a buffer would be helpful.

RS485, RS422, and RS232 are different schemes for the hardware link layer. By that I mean those specifications only describe what is on the wire. A transceiver chipset converts the wire signals back to the logic-level signals that are connected to the Arduino, or any other device. At the logic level the Arduino sees, any of the RS-___ signals will look the same.
A USART converts the bit stream to bytes (this can be software or hardware). The USART does not know about the signal levels on the wire; it operates solely on the logic-level bit stream. The Uno contains one hardware USART, available on the TX/RX pins.
So your code on the microcontroller does not need to be different for RS232 versus RS485. All the Serial code samples you see will work fine. You tell the Serial library the baud rate, stop bits, and parity, and you are done. Set the serial connection to 2400, and the Arduino will start seeing characters.
Caveat
RS485 is sometimes used in half-duplex mode. This means you cannot receive and transmit at the same time. If you are wiring for half duplex, then your code must be certain you are not transmitting while some other device is still transmitting.
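To make that concrete, here is a minimal receive-and-parse sketch. It is only a sketch under assumptions: it assumes the device terminates each string with a newline or carriage return, and that the LCD uses the common example wiring (pins 12, 11, 5, 4, 3, 2); adjust both to match your own protocol and hardware.

#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);   // RS, E, D4, D5, D6, D7 - assumed wiring

char buf[64];                            // holds one incoming ASCII string
uint8_t len = 0;

void setup() {
  Serial.begin(2400);                    // match the RS485 device: 2400 baud, 8N1
  lcd.begin(16, 2);
}

void loop() {
  while (Serial.available() > 0) {
    char c = Serial.read();
    if (c == '\r' || c == '\n') {        // end of one transmission (assumed terminator)
      if (len > 0) {
        buf[len] = '\0';
        lcd.clear();
        lcd.print(buf);                  // parse and print the pieces you need here
        len = 0;
      }
    } else if (len < sizeof(buf) - 1) {
      buf[len++] = c;                    // accumulate characters into the buffer
    }
  }
}

If the device sends no terminator, the 0.100 s pause between transmissions can serve instead: treat any gap longer than a few character times (one character at 2400 baud takes roughly 4 ms) as the end of a string.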

Related

Bluetooth Low Energy data transmission on iOS

I've recently been working on a project which uses Bluetooth Low Energy. I have implemented most of the communication protocol, but I started to have concerns that I don't actually know how the data transmission works and whether the solution I implemented will behave the same way with all devices.
So my main concern is: what data chunk is received when I get a notification from peripheral(_:didUpdateValueFor:error:)? Is it only as big as the negotiated MTU size? Or does iOS receive information about the chunk size and wait to receive it all before triggering peripheral(_:didUpdateValueFor:error:)?
When a peripheral sends chunks of, let's say, 100 bytes each, can I assume I will always get those 100 bytes in a single notification? Or could it be the last 50 bytes of the previous chunk and the first 50 bytes of the next one? That would be quite tricky, and it would be hard to detect where the beginning of my frame is.
I tried to find more information in Apple documentation but there is nothing about it.
My guess is that I always receive a single state of the characteristic. That would mean the chunking depends on the implementation on the peripheral side. But what if the characteristic is bigger than the MTU size?
First, keep in mind that sending streaming data over a characteristic is not what characteristics are designed for. The point of a characteristic is to represent some small (~20 bytes) piece of information like the current battery level, device name, or current heart rate. The idea is that a characteristic will change only when the underlying value changes. It was never designed to be a serial protocol. So your default assumption should be that it's up to you to manage everything about that.
You should not write more data to a characteristic than the value you get from maximumWriteValueLength(for:). Chunking is your job.
Each individual value you write will appear to the receiver atomically. Remember, these are intended to be individual values, not chunks out of a larger data stream, so it would make no sense to overlap values from the same characteristic. "Atomically" means it all arrives or none of it. So if your MTU can handle 100 bytes, and you write 100 bytes, the other side will receive 100 bytes or nothing.
That said, there is very little error detection in BLE, and you absolutely can drop packets. It's up to you to verify that the data arrived correctly.
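Since chunking and integrity checking are your job, one common approach is to prefix each message with its total length plus a simple checksum, cut it into MTU-sized pieces, and reassemble and verify on the other side. The sketch below is deliberately API-agnostic (plain C++ rather than CoreBluetooth), and the 4-byte header layout is an assumption of this example, not anything defined by BLE or GATT.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Frame = [2-byte payload length][2-byte additive checksum][payload bytes],
// then split into pieces no larger than the negotiated MTU / write length.
// Assumes payloads smaller than 64 KiB.
std::vector<std::vector<uint8_t>> makeChunks(const std::vector<uint8_t>& payload,
                                             std::size_t mtu) {
  uint16_t checksum = 0;
  for (uint8_t b : payload) checksum += b;            // trivial integrity check

  std::vector<uint8_t> framed;
  framed.push_back(payload.size() & 0xFF);            // length, little-endian
  framed.push_back((payload.size() >> 8) & 0xFF);
  framed.push_back(checksum & 0xFF);                  // checksum, little-endian
  framed.push_back((checksum >> 8) & 0xFF);
  framed.insert(framed.end(), payload.begin(), payload.end());

  std::vector<std::vector<uint8_t>> chunks;           // each fits in one write
  for (std::size_t i = 0; i < framed.size(); i += mtu)
    chunks.emplace_back(framed.begin() + i,
                        framed.begin() + std::min(framed.size(), i + mtu));
  return chunks;
}

The receiver appends incoming notifications until it has the advertised number of payload bytes, then recomputes the checksum; a mismatch, or a reassembly that never completes, means a dropped write and the message should be requested again.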
If you're able to target iOS 11+, do look at L2CAP, which is designed for serial protocols rather than using GATT.
If you can't do that, I recommend watching WWDC 2013 Session 703, which covers this use case in detail. (I am having trouble finding a link to it anymore, however.)

IR receiver powered by AndroidThings

Is it possible to implement an IR receiver on android-things?
1st idea:
Use GPIO as input and try to buffer changes and then parse the buffer to decode a message.
findings:
The GPIO listener mechanism is too slow to observe an IR signal.
Another way is to read the GPIO in an infinite loop, but all IR protocols depend strongly on timing, and Java (Dalvik) is not accurate enough here.
2nd idea
Use UART
findings:
It seems to be possible to adjust the baud rate to observe all the bits of a message, but the UART API requires setting the number of start bits etc., and this is a problem because IR protocols do not fit that scheme.
IMHO, at the moment UART is the only path, but it would be a huge workaround.
The overarching problem (as you've discovered) is that any non-realtime system will have difficulty parsing this input directly because of the timing constraints. This is a job best suited to a microcontroller where you can access a timer interrupt. Grab an inexpensive tinyAVR or PIC to manage the sensor for you.
You will also want to use a dedicated receiver sensor (you might already be doing this) to simplify parsing the signal. These sensors include a demodulator, which means you don't have to deal with the 38 kHz carrier and the input is converted into a more standard PWM wave.
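To make the microcontroller route concrete, here is a rough Arduino-style sketch of the usual technique with a demodulated receiver: timestamp every edge of the receiver output in an interrupt and hand the resulting mark/space durations to a decoder. The pin number, buffer size, and 20 ms end-of-frame gap are assumptions to adapt to your protocol.

const uint8_t IR_PIN = 2;                    // demodulated receiver output (e.g. a TSOP-style module)
volatile uint16_t durations[64];             // mark/space lengths in microseconds
volatile uint8_t edgeCount = 0;
volatile uint32_t lastEdge = 0;

void onEdge() {                              // interrupt: runs on every level change
  uint32_t now = micros();
  if (edgeCount < 64) durations[edgeCount++] = (uint16_t)(now - lastEdge);
  lastEdge = now;
}

void setup() {
  Serial.begin(115200);
  pinMode(IR_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(IR_PIN), onEdge, CHANGE);
}

void loop() {
  // A gap much longer than any pulse in the protocol marks the end of a frame.
  if (edgeCount > 0 && micros() - lastEdge > 20000UL) {
    noInterrupts();
    uint8_t n = edgeCount;
    uint16_t frame[64];
    for (uint8_t i = 0; i < n; i++) frame[i] = durations[i];
    edgeCount = 0;                           // ready for the next frame
    interrupts();
    for (uint8_t i = 0; i < n; i++) Serial.println(frame[i]);   // feed a protocol decoder instead
  }
}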
I believe you can't process the IR signal in Java because the pulses would be shorter than the achievable read resolution, at least on a Raspberry Pi. I'm confident you can get faster GPIO readings in C++ with the NDK on the Raspberry Pi. Though it's not officially supported, there are some tricks to enable it. See How to do GPIO on Android Things bypassing Java for how to write to GPIO in C. From there it should be trivial to read in a tight loop. I would still try to hook the trigger from Java, though, since so far I have no clear, easy idea of how to write/install interrupts in C.
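If you do go the native route, the timing loop itself is simple; the platform-specific part is how you actually read the pin from C/C++ (memory-mapped registers, or whatever the linked question describes). In the sketch below, read_gpio() is a placeholder for that access method, not a real Android Things or NDK API.

#include <time.h>
#include <cstddef>
#include <cstdint>
#include <vector>

extern int read_gpio();                      // placeholder: returns the pin level, 0 or 1

static int64_t now_ns() {                    // monotonic high-resolution timestamp
  timespec ts;
  clock_gettime(CLOCK_MONOTONIC, &ts);
  return int64_t(ts.tv_sec) * 1000000000LL + ts.tv_nsec;
}

// Busy-poll the pin and record the timestamp of every transition; run this on
// its own thread and convert the timestamps to pulse widths afterwards.
std::vector<int64_t> capture_edges(std::size_t maxEdges) {
  std::vector<int64_t> edges;
  int last = read_gpio();
  while (edges.size() < maxEdges) {
    int level = read_gpio();
    if (level != last) {
      edges.push_back(now_ns());
      last = level;
    }
  }
  return edges;
}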

Why do 802.11 Acknowledgement Frames have no source MAC?

Does anyone know why 802.11 acknowledgement frames have no source MAC address? I don't see one when I capture the packets with tcpdump or Wireshark on Linux with a monitor-mode and promiscuous-mode driver. How does the access point distinguish ACK frames from different 802.11 clients if there is no source MAC address in the frame?
I can see from all the captures that the ACK comes immediately after the frame is sent (around 10 to 30 microseconds), but that alone can't be enough to distinguish the source, can it? Maybe each frame has some kind of unique identifier and the ACK frame carries this ID? Or maybe there is identifying information in the encrypted payload, since the WLAN uses WPA-PSK mode?
No, there is nothing encrypted in 802.11 ACK frames.
802.11 is a contention-based protocol, i.e. the medium is shared in time by the different STAs and APs that are all working on the same channel [frequency]. Those who want to transmit compete for the medium, and the winner that gets the medium starts transmitting.
As per the 802.11 spec, once a frame is on the air, the medium must be free for the following "SIFS" period, i.e. no one is allowed to transmit.
At the end of the SIFS, the receiver of a unicast frame should transmit the ACK. This is the rule.
SIFS [Short Interframe Space] in 802.11 is ~10 microseconds for OFDM-based 802.11 implementations [802.11g/a]; for 802.11b it's ~20 microseconds, if my memory is correct. That's why you are seeing 10 or 30 microseconds between the TX and the ACK.
So everyone knows who is transmitting the ACK and whom the ACK is for. There is no need to include a source address; it's implicit.
Why is the source address not included?
To reduce the frame size and so save power.
Hope it helps. If you have more questions on this, please feel free to ask.
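For reference, the entire ACK frame as it appears on the air is only 14 bytes; written out as a packed struct (a sketch of the layout, not code from any driver) it is easy to see there is simply no field for a source address:

#include <cstdint>

#pragma pack(push, 1)
struct Dot11Ack {
  uint16_t frame_control;   // type = control, subtype = ACK
  uint16_t duration;        // NAV duration; 0 unless more fragments follow
  uint8_t  ra[6];           // Receiver Address: the station being acknowledged
  uint32_t fcs;             // frame check sequence (CRC-32)
};
#pragma pack(pop)

static_assert(sizeof(Dot11Ack) == 14, "an 802.11 ACK is 14 bytes on the air");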
ACK frames, like other 802.11 control frames, carry an RA (Receiver Address) in their MAC header (not to be exactly confused with just a "MAC address", see below), and that's all that's needed in this context.
TL;DR: In the 802.11 context, "MAC address", SA (Source Address), TA (Transmitter Address), RA (Receiver Address), DA (Destination Address), BSSID or whatnot all look like the 6-byte "MAC address" we're familiar with from other technologies, yet they should not be confused.
And now for the demolition of the "MAC address" concept in the 802.11 context.
802.11 acknowledgement frames are part of 802.11's own control frames, and 802.11 is "a set of media access control (MAC) and physical layer (PHY) specifications" (source).
What this means - and this is a very important concept to grasp when working with Wi-Fi - is that 802.11 in itself, including its management and control frames, has nothing to do with the "traditional" (say 802.3, aka Ethernet) PHY (layer 1) or MAC (layer 2) layers - it is a kind of its own.
802.3/Ethernet, to continue with this analogy - or rather counter-example - has no such thing as ACK frames, beacons, probe requests, RTS/CTS, auth/deauth, association etc., which are all types of 802.11 management or control frames. These are simply not needed with 802.3, for the most part because wired Ethernet is not a shared medium (that's the IEEE terminology) that can lead to unreliability/collisions, as the air is with 802.11/Wi-Fi.
The important consequence of this is that you should not expect a priori to meet other, more familiar concepts or data from other layer 1/2 technologies. Forget that, once and for all.
Sure, Wi-Fi looks like it carries MAC and IP and TCP and/or UDP or whatnot, and it does most of the time, but management and control frames such as ACKs are a different world - their own world. As it is, 802.11 could perfectly well be used - and maybe is used in some niche cases - to carry higher-level protocols other than TCP/IP. And its MAC concept, though it looks familiar with its 6 bytes, should not be confused in either form or use with the MAC of 802.3/Ethernet. To take another example, 802.15 aka Bluetooth also has a 6-byte MAC, yet that is again another thing.
And to take the a contrario example from 802.11 itself, 802.11 layer 1/2 beacon frames, for instance, carry info about the SSID, supported rates, frequency hopping (FH) parameter set, etc., that has no counterpart in other L1/2 technologies.
Now, embrace the complexity of what is/are "MAC addresses" in 802.11...
And this is why, to take an example from day-to-day use, pcap/tcpdump has such weird filters as wlan ra, wlan ta, wlan addr1, wlan addr2, wlan addr3, wlan addr4 - and the like for Wireshark capture and display filters.
Right, George!
In the MAC frame format the NAV timer is updated like this: while a data frame is being transmitted, the NAV is updated to cover
DATA frame + SIFS + ACK + SIFS
So the medium is reserved for this time; only one AP <---> station pair is talking and everyone else waits for the NAV period to expire, so there is no need to add a source address - it would just waste frame bytes.
I came across the same question and found this Stack Overflow question on the internet.
If you think about it, if station A is waiting for an ACK from station B, that means station A has pretty much secured/locked the medium for that long a time (see Jaydeep's answer), i.e. enough time for 2 SIFS + 1 ACK, assuming no follow-up transmission between these two stations.
Hence no other station is sending any frame (i.e. an ACK here), so there is no need to distinguish ACKs.
It's just station A waiting for the ACK from station B in that time window.

Microcontroller communication tasks in background

I'm using an ARM Cortex-M4 and I want to ask if it's possible to offload communication tasks from the main routine and let them run in the background.
For example, I'm using these peripherals on the ARM MCU:
ADC
I2C
UART
SPI
When adc_start(ADC); is called, the ADC starts its conversion in the background, so I don't need to wait until the ADC has finished the conversion; I can go on to the next instruction and read the ADC result later.
I want to ask if it's possible to do the same with the communication peripherals. I2C and SPI can be fast, but since these MCUs can reach 50 MHz and more, it's a waste of MCU speed if I need to wait until the I2C has finished transmitting at 400 kHz, or the SPI at 20 MHz, or worse with the UART. Also, if I am performing some task and I don't want it interrupted, I need to be able to unload the MCU from the peripheral interrupts and let the peripherals receive packets and buffer them until I need to read them.
Is something like this possible?
If I've understood the question correctly, you're looking for automatic, interrupt-based handling of fast communication peripherals such as the I2C and SPI. As far as I know, yes, it's achievable, at least on the Texas Instruments TIVA-based ARM Cortex-M4 series MCUs. It's quite a nifty little feature to have around when you're working on computationally intensive algorithms and don't want the CPU bogged down waiting for the SPI to finish its task.
For a good reference on programming the CORTEX M4 peripherals, I recommend keeping this book handy:
http://www.amazon.com/TI-ARM-Peripherals-Programming-Interfacing-ebook/dp/B00L9DRAI2
Table 6-7 in chapter 6 of the book details the interrupt vector table of the TM4C123G MCU (the one shipped with the TIVA LaunchPad). Interrupts 50 and 53 are the assignments for the SSI/SPI and I2C peripherals respectively. The process should be fairly straightforward once you unmask the right interrupts.
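Whatever chip you use, the usual shape of the solution is the same: the peripheral's receive interrupt drains incoming bytes into a ring buffer in the background, and the main loop reads from that buffer whenever it gets around to it. In this sketch the ISR name and uart_data_register() are placeholders, not TivaWare or CMSIS identifiers; the pattern is what matters.

#include <cstdint>

extern volatile uint8_t& uart_data_register();   // placeholder for the vendor's RX data register

static volatile uint8_t rxBuf[256];
static volatile uint8_t head = 0;                // written only by the interrupt handler
static volatile uint8_t tail = 0;                // read only by the main loop

extern "C" void UART_RX_IRQHandler() {           // runs in the background on every received byte
  rxBuf[head] = uart_data_register();
  head = (uint8_t)(head + 1);                    // 8-bit index wraps at 256 automatically
}

// Call from the main loop whenever convenient; returns false if no byte is waiting.
bool uart_read(uint8_t& out) {
  if (head == tail) return false;
  out = rxBuf[tail];
  tail = (uint8_t)(tail + 1);
  return true;
}

The same structure works for SPI and I2C receive interrupts; transmit goes the other way, with the ISR pulling from a buffer that the main code fills.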

Monitor packet losses using Wireshark

I'm using Wireshark to monitor network traffic to test new software installed on a router. The router itself lets other networks (4G, mobile devices through USB, etc.) connect to it and enhance that router's speed.
What I'm trying to do is disconnect the connected devices and discover whether there are any packet losses while doing this. I know I can simply use the filter "tcp.analysis.lost_segment" to track down lost packets, but how can I isolate the specific device that causes the packet loss? Or even know whether a loss happened because of a disconnected device?
Also, what is the most reliable method to test this with? Downloading a big file? Streaming a video?
All input is greatly appreciated
You can't detect lost packets solely with Wireshark or any other packet capture*.
Wireshark basically "records" what was seen on the line.
If a packet is lost, then by definition you will not see it on the line.
The * means I lied. Sort of. You can't detect them as such, but you can extremely strongly indicate them by taking simultaneous captures at/near both devices in the data exchange... then compare the two captures.
COMPUTER1<-->CAPTURE-MACHINE<-->NETWORK<-->CAPTURE-MACHINE<-->COMPUTER2
If you see the data leaving COMPUTER1, but never see it in the capture at COMPUTER2, there's your loss. (You could then move the capture machines one device closer on the network until you find the exact box/line losing your packets... or just analyze the devices in the network for e.g. configs, errors, etc.)
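One crude way to do that comparison, once each capture has been exported as one packet identifier per line (for example the IP ID or TCP sequence number), is a simple set difference. The file names and the choice of identifier are assumptions of this sketch:

#include <fstream>
#include <iostream>
#include <set>
#include <string>

// Prints identifiers seen in the sender-side capture but missing from the
// receiver-side capture, i.e. the candidates for packets lost in between.
int main() {
  std::set<std::string> received;
  std::string id;

  std::ifstream rx("capture_computer2.txt");     // assumed export from the COMPUTER2 side
  while (std::getline(rx, id)) received.insert(id);

  std::ifstream tx("capture_computer1.txt");     // assumed export from the COMPUTER1 side
  while (std::getline(tx, id))
    if (!received.count(id))
      std::cout << "possibly lost: " << id << '\n';
  return 0;
}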
Alternatively, if you know exactly when the packet was sent, you could not prove, but INDICATE, its absence with a capture that covers a minute or two before and after the packet was sent and does NOT contain that packet. Such an indicator may even stand on its own as sufficient to find the problem.

Resources