3G/4G peer-to-peer network for long-distance communication in a remote area?

I'm working on an engineering project where I want a go-kart to maintain a direct connection with a base station. The base and the go-kart can be separated by about half a mile (with lots of obstacles in between), which is too far for WiFi.
I'm thinking about using 3G/4G to directly connect the two. Does anyone have any resources or ideas that might help?
Or, alternatively, a better way to connect them? I'm just trying to send some sensor data (pretty low bandwidth) in real-time.

The biggest problem you face is finding radio spectrum that you are allowed to use. All 3G/4G spectrum is licensed to some firm, and they get really unhappy (e.g. they will have you hunted down and fined) when you transmit in their space.
I did find DASH7 which
is an open source wireless sensor networking standard … which operates in the 433 MHz unlicensed ISM band. DASH7 provides multi-year battery life, range of up to 2 km, indoor location with 1 meter accuracy, low latency for connecting with moving things, a very small open source protocol stack …
with a parts cost around US$ 10. This sounds like it satisfies your requirements and keeps the local constabulary from bothering you.

You could maybe use SMS between a modem on the kart and a mobile phone or modem at the base.
A direct mobile data connection between the two, like a telephone call, isn't possible; you have to make a data connection from the kart to a server in your operator's core network, identified by the APN. From there you can reach IP addresses as with a normal internet connection, so the base computer would have to be a web server.
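If you do go the cellular route, the kart side is simple once the modem gives you an IP connection. Here is a minimal sketch, assuming Python on the kart, a server you control at a made-up address, and made-up sensor fields; it just fires low-bandwidth readings at the server over UDP:

    import json
    import socket
    import time

    SERVER = ("your-server.example.com", 9999)   # hypothetical endpoint reachable via the APN

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def read_sensors():
        # Placeholder for the real sensor code on the kart.
        return {"speed_kmh": 23.4, "battery_v": 11.7, "ts": time.time()}

    while True:
        payload = json.dumps(read_sensors()).encode()
        sock.sendto(payload, SERVER)             # fire-and-forget is fine for low-rate telemetry
        time.sleep(0.2)                          # ~5 updates per second

The base station would then pull the readings from that server (or have the server push them on), since the kart cannot reach the base's cellular address directly.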

Related

Video transmission over wifi using UDP/packet injection

Hey Stack Overflow community :)
I'm looking into streaming video from a camera on an RC device to a computer using WiFi.
After considering all of the options, I'm left with two:
use UDP to transfer video in packets
use packet injection and packet sniffing on the receiving device.
I was wondering what the pros and cons of each method are (for this specific purpose of video transmission)?
After looking around I found many implementations of both, but nowhere do they explain why one is better than the other.
A few things I have not mentioned:
I know UDP does not have error correction, which can make the video weird; I don't care about the quality of the video as long as it is recognizable.
I don't want to use a connection-based protocol (TCP, etc.); I don't want to wait for a handshake when I get disconnected.
Thanks :)
I'm trying to do a similar thing. My take on this: when you use the WiFi cards in monitor mode (i.e. using packet sniffing/injection), you don't actually need to be associated with a network at all. Normally you would have to connect to an access point as a client and then communicate over UDP through that connection. In the injection case, the UDP messages are routed to the WiFi card and the packets are injected without being associated with any access point; any 'client' then just has to sniff, or listen, on that same channel to pick up the transmission. So the benefit is not only that UDP doesn't check for lost frames and so on, but also that you don't need to be connected to a network to receive the packets.
In my case this is preferable, since in the former case you need to associate with the AP, which typically requires more capable hardware on the receiver side (more range is needed for the association step, since you essentially have to send messages back to get connected).
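For the plain UDP-through-an-AP option, the sending side can be as simple as chopping each encoded frame into datagram-sized chunks. A rough sketch (my own illustration, not from either repo below; the receiver address and sizes are made up):

    import socket

    RECEIVER = ("192.168.1.10", 5600)    # hypothetical receiver on the AP's network
    CHUNK = 1400                         # stay under the typical MTU so datagrams aren't fragmented

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_frame(frame_id: int, data: bytes):
        # Split one encoded video frame into numbered chunks; the receiver can
        # simply drop the whole frame if any chunk goes missing (no retransmission).
        for seq, off in enumerate(range(0, len(data), CHUNK)):
            header = frame_id.to_bytes(4, "big") + seq.to_bytes(2, "big")
            sock.sendto(header + data[off:off + CHUNK], RECEIVER)

    send_frame(0, b"\x00" * 20000)       # example: one dummy 20 kB "frame"

The packet-injection approach (wifibroadcast, linked below) does roughly the same chunking, but writes the chunks into raw 802.11 frames on a monitor-mode interface instead of into a UDP socket.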
FYI, here are the links/repos I am using; they are also a reference for what I am talking about:
https://docs.px4.io/master/en/tutorials/video_streaming_wifi_broadcast.html
https://github.com/svpcom/wifibroadcast
I am using an off-the-shelf 'solution' in the short term, the Accsoon CineEye Air, which basically transmits HDMI 300 ft line of sight over WiFi. You need an Android phone to receive it, and I'm using the Vysor application (the paid version is $40) to mirror the screen to my desktop. It works, but the latency is still more than I want: at least 60 ms from the CineEye, so you can drive it around, but it's not as quick as DJI (around 30-40 ms), which is my goal.

How to solve modbus error on solar inverter?

I am currently working with one of Growatt's inverters, a 5 kVA residential inverter. It has two ports: one is RS-485, to which I have connected a smart energy meter to control back-power flowing to the grid, and on the other port a Growatt WiFi device was working. I wanted to use my own platform, so I followed the Growatt PV Inverter Modbus RS-485 RTU Protocol document and connected the wire via RS-232 to a Raspberry Pi to read the data and send it back to my server. The issue is that as soon as both devices start to work, the inverter starts showing an error. I cannot understand why it did not do this with the Growatt device. Is there any solution?
I want to ask some questions about your problem, to help you if I can :-).
Are you going to monitor the inverter data on your server? If yes, why don't you connect your server to the inverter directly (I mean by using a USB to RS-485 converter)?
What is your connection type? If it is Modbus-RTU you need to find the Modbus register map to get the data you want. You can find this in the inverter user manual.
Be careful about your Modbus communication configuration and settings (i.e. baud rate, slave ID, parity and byte size).
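For illustration, a sketch of reading the inverter over Modbus-RTU from the Raspberry Pi using the pymodbus library; the serial device path, baud rate, slave ID and register addresses below are assumptions, so check them against the Growatt protocol document and your converter:

    # Assumes pymodbus 2.x (in 3.x the import path changes and unit= becomes slave=).
    from pymodbus.client.sync import ModbusSerialClient

    client = ModbusSerialClient(
        method="rtu",
        port="/dev/ttyUSB0",   # assumption: your RS-485 adapter as seen by the Pi
        baudrate=9600,         # assumption: take the real value from the protocol document
        parity="N",
        stopbits=1,
        bytesize=8,
        timeout=1,
    )

    if client.connect():
        # The address/count below are placeholders; use the register map from the
        # "Growatt PV Inverter Modbus RS-485 RTU Protocol" document.
        result = client.read_input_registers(address=0, count=10, unit=1)
        if not result.isError():
            print(result.registers)
        client.close()

A mismatched baud rate, parity or slave ID between the Pi and the inverter is a common cause of communication errors like the one you describe.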

Using consumer cellphones to build a mesh network for IOT devices?

I have been looking into LoRaWAN for a low-cost, waterproof asset tracker I am looking at building.
AFAIK, the primary benefits of LoRaWAN over, say, LTE-M or cellular are: no connectivity costs and potentially lower power consumption.
What I'm wondering is: why can't we use our own cellphones as the "base station" that the IOT device talks with? We can do this with bluetooth and WiFi, why not cell? Is it the LTE protocol that prevents it? Physics? What am I missing?
There are quite a few architectural reasons why peer-to-peer LTE isn't feasible, but the largest is probably the fact that in LTE the uplink and downlink use different modulation techniques.
In the downlink (the connection from the base stations (eNodeBs) to the User Equipment (our mobile phones)), Orthogonal Frequency-Division Multiple Access (OFDMA) is used; this means the phone listens on the RF interface for the OFDMA signal.
This works well: OFDMA is a great way of encoding data onto the air interface, but it has a very high peak-to-average power ratio, which means that if the UEs used OFDMA in the uplink (from the UE to the eNodeB) they'd have awful battery life.
Instead, in the uplink LTE uses Single-Carrier Frequency Division Multiple Access (SC-FDMA), which is much more power efficient and allows you to talk all day, so the eNodeBs listen on their RF interface for the SC-FDMA modulated traffic.
This means our UEs (mobile phones) use one type of modulation to send and a different modulation scheme to receive, so they can't talk directly to one another: they can't send OFDMA modulated data, only receive it, and vice versa.
Some more reading on OFDMA & SC-FDMA.
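A rough numerical illustration of the PAPR point (my own sketch, with an arbitrary subcarrier count), comparing an OFDM-style multicarrier signal with the same QPSK symbols sent one at a time on a single carrier:

    import numpy as np

    rng = np.random.default_rng(0)
    n_subcarriers = 256

    # Random QPSK symbols, one per subcarrier.
    qpsk = (rng.choice([-1, 1], n_subcarriers) + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)

    # OFDM: the time-domain signal is the IFFT of all subcarriers added together.
    ofdm_time = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)   # scaled to unit average power

    # Single carrier: the same symbols sent one after another in time.
    sc_time = qpsk

    def papr_db(signal):
        power = np.abs(signal) ** 2
        return 10 * np.log10(power.max() / power.mean())

    print(f"OFDM PAPR:           {papr_db(ofdm_time):.1f} dB")   # typically around 10 dB here
    print(f"Single-carrier PAPR: {papr_db(sc_time):.1f} dB")     # 0 dB for constant-envelope QPSK

The transmitter's power amplifier has to be backed off by roughly the PAPR to stay linear, which is why an OFDMA uplink would be so hard on the battery.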
The LTE relay interface introduced as part of Release 10 allows the deployment of relay nodes (a kind of low-cost eNB) that are fixed and that use in-band LTE to extend the coverage of standard eNodeBs by one hop, improve signal quality and increase network capacity. Relays can be placed such that one long hop is converted into two shorter hops.
However, using a UE as the relay has many challenges, since the UE would become loaded with functional changes across layers (MAC, PHY, RRC, NAS): it would have to take on additional functionality from relay nodes/eNBs, ranging from lower-layer signalling, coordination and mobility through to forwarding. There would also be additional power consumption, and possibly antenna changes to support this, all of which adds to the cost of the UE.

Bluetooth mesh networking? [closed]

I had an idea and I was wondering if it was possible. I've googled it and can't seem to find any existing solutions. I was thinking of having a Bluetooth mesh network. The layout I'm hoping to achieve is to have one central station (a PC with a Bluetooth dongle) and then a bunch of Bluetooth modules (preferably these) that would all form a mesh network with the modules around them. Not all of them would be in range of the "central station" but would need to communicate with it through the other nodes. The Bluetooth modules would be hooked up to ATtiny85 chips, if that makes any difference. If you have any questions just ask.
Is this possible?
Is it possible with the above bluetooth module?
Would they all have to be set up individually or could there be some sort of neighbor discovery?
Would there be security risks?
What would the limitations on the size of the network be?
Where should I start?
CSR has delivered a BLE mesh network solution
http://www.csr.com/news/pr/2014/csr-mesh
Not sure if you have found a reasonable solution yet. I am new to BLE and was also thinking along the same lines of having a BLE mesh that could carry a signal over a few miles or so. This way, sensors can be placed in remote rural areas and, using multiple hops between sensors, the data can be relayed to the central controlling station. However, as of yet, I haven't seen a dual-mode sensor that can assume both roles as needed.
The other approach would be to make use of a TCP/IP bridge. This way, a device, which could be an iPhone or an Android phone, listens for the advertised data, creates an IP packet and sends it to the remote server. Obviously, for this to work you need a cellular data network available. But given ubiquitous data network or Wi-Fi coverage, this solution sounds more promising to me.
NOTE: Here http://www.bluetooth.com/Pages/low-energy-tech-info.aspx they talk about star topology though, below is the excerpt:
Topology – Bluetooth low energy technology uses a 32 bit access address on every packet for each slave, allowing billions of devices to be connected. The technology is optimized for one-to-one connections while allowing one-to-many connections using a star topology. With the use of quick connections and disconnections, data can move in a mesh-like topology without the complexities of maintaining a mesh network.
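A minimal sketch of the TCP/IP-bridge idea above, assuming a Linux/PC gateway running the bleak library instead of a phone (the library choice, server address and payload format are all my assumptions):

    import asyncio
    import json
    import socket

    from bleak import BleakScanner               # assumption: bleak is installed on the gateway

    SERVER = ("server.example.com", 9000)        # hypothetical remote collection server
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def on_advertisement(device, adv):
        # Forward each BLE advertisement we hear as a small UDP/IP packet.
        packet = {
            "addr": device.address,
            "rssi": adv.rssi,
            "mfg": {k: v.hex() for k, v in adv.manufacturer_data.items()},
        }
        sock.sendto(json.dumps(packet).encode(), SERVER)

    async def main():
        scanner = BleakScanner(detection_callback=on_advertisement)
        await scanner.start()
        await asyncio.sleep(60)                  # listen for a minute, then stop
        await scanner.stop()

    asyncio.run(main())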
Also have a look at FruityMesh. It is an open source implementation of a mesh network that is based on standard Bluetooth Low Energy 4.1 connections.
They use the Nordic nRF51 chipset in combination with the S130 SoftDevice.
Found on github: https://github.com/mwaylabs/fruitymesh/wiki
So Bluetooth - as clearly pointed out in the comments - is not designed for mesh networking. Nor, honestly, would you want to use it for that. It would be far too expensive, both in finances AND in processing time and battery power, to handle such an operation.
Instead, why not use XBee? https://www.sparkfun.com/search/results?term=xbee&what=products
These XBee modules are not only designed to do EXACTLY what you want, but they are low cost and HEAVILY documented.
A much better choice for your mesh.
Well, theoretically it should be possible to build mesh networking behaviour with BLE devices, though it has not been designed that way.
The idea would be to use the fact that BLE has been designed so it can work over disconnections.
So you could handle two connections with your device: one as a Bluetooth master and the other as a Bluetooth slave. You could run as a slave and listen to the next device's services to see if there is any event, and if there is, become a master and broadcast the event to the previous device until the event reaches the host. The tricky part would be tweaking the timings so it works quickly and smoothly.
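As a rough illustration of the relay logic each node would need in that scheme (my own sketch, with all of the BLE master/slave plumbing stubbed out):

    class MeshNode:
        """Relays events towards the host; duplicate suppression and a TTL stop floods looping."""

        def __init__(self, node_id, neighbours):
            self.node_id = node_id
            self.neighbours = neighbours      # nodes this one can reach directly
            self.seen = set()                 # message ids we've already relayed

        def handle_event(self, msg_id, payload, ttl=8):
            if msg_id in self.seen or ttl <= 0:
                return                        # drop duplicates and expired messages
            self.seen.add(msg_id)
            for n in self.neighbours:
                # In the real system this is "connect as master and write the event onward".
                n.handle_event(msg_id, payload, ttl - 1)

    # Tiny example: a chain host <- a <- b; an event raised at b reaches the host in two hops.
    host = MeshNode("host", [])
    a = MeshNode("a", [host])
    b = MeshNode("b", [a])
    b.handle_event(msg_id=1, payload=b"sensor-reading")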
Another way, which would be less of a hack, would be to build an ANT network for the mesh topology, while having BLE so each node can connect to Bluetooth-enabled devices. You could use something like the nRF51422 to do such a thing.
HTH
As I understand it, Bluetooth is designed to do data transmission with low power consumption. So, compared to 802.15.4, Bluetooth has a much shorter communication range, which means more devices may be needed to build a network. And I think BLE is just a name, just some code pre-programmed into the chip ROM; anyone can modify the BLE protocol if they have enough coding experience.

Converting TCP/IP traffic to half duplex in Linux

I am developing a custom network driver for a PHY medium which doesn't support full-duplex mode.
I want to run TCP/IP traffic over this network driver, on top of this half-duplex PHY medium.
But TCP/IP traffic can be full duplex. I would like to implement some mechanism/algorithm in this driver so that this custom network driver converts the TCP/IP traffic to half duplex in Linux.
Please let me know if this can be achieved or how to do it.
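For illustration only (my sketch, not something from the question): the arbitration such a driver needs boils down to a single lock that both directions must hold, so the transmit and receive paths never use the medium at the same time. In a real driver this would sit in the kernel around the hardware TX/RX paths; here it is shown in user space with placeholder TX/RX functions.

    import queue
    import threading
    import time

    medium_lock = threading.Lock()   # whoever holds this may use the half-duplex medium
    tx_queue = queue.Queue()         # frames handed down by the stack, waiting to go out

    def transmit(frame):             # placeholder for the real hardware TX path
        print("TX:", frame)

    def receive_frame():             # placeholder for the real hardware RX path
        time.sleep(0.005)            # pretend a frame takes a while to arrive

    def tx_worker():
        while True:
            frame = tx_queue.get()
            with medium_lock:        # wait until the receive path has released the medium
                transmit(frame)

    def rx_worker():
        while True:
            with medium_lock:        # block the transmit path while we "receive"
                receive_frame()
            time.sleep(0.01)         # medium idle between incoming frames

    threading.Thread(target=rx_worker, daemon=True).start()
    threading.Thread(target=tx_worker, daemon=True).start()
    for i in range(3):
        tx_queue.put(f"frame-{i}".encode())
    time.sleep(0.2)                  # let the demo run briefly before the program exits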
So you are trying to write a driver which supports full duplex traffic on a card which actually does NOT support the feature...
Well, you must be aware that the networking subsystem is one of the largest subsystems in the kernel and one of the few which actually uses softirqs (because it is always looking to scale appropriately in this day and age of multiprocessors), and it still had to resort to some trickery (NAPI) in order to manage the deluge of interrupt requests generated by the ever-increasing rates of present-day media. Why I'm saying all this is because I just want to remind you of the real-life complexities involved in writing a 'regular' network driver, let alone a 'pseudo full duplex' driver.
Now I believe what you pretty much want is to give an illusion of 'full duplexity' to the TCP/IP stack (is it?), i.e. your driver looks like just another full-duplex driver, and any of this driver's clients (be it the MAC layer or something like ethtool) can go have a ball with it (in terms of dumping and retrieving packets) in the same manner as it does with, and expects results out of, a 'regular full duplex' driver.
So if this is really the case, I wonder what good giving such an illusion might be? Perhaps you are just experimenting? In any case, TCP is full duplex by default anyway, and with a half-duplex medium the data rates are a bit lower (although not exactly half) than those obtained with a full-duplex adapter. I don't think it even matters at the higher layers (in terms of functionality) whether the medium is full or half duplex (except maybe in the MAC layer?); correct me if I'm wrong.
There were (and still are) quite a few half-duplex media in use, and many media support both full-duplex and half-duplex traffic. I fail to see how it will affect the clients of the driver (besides lowering the overall data rate as the only tangible effect), which means you can pretty much look at any network driver in the kernel and see that it has ways to configure the adapter to use either full or half duplex (and user space can toggle this, ethtool being one of the ways).
Anyway, you may want to have a look and perhaps take a few tips from the Modbus driver (the bus being half duplex by default) here.
I'm not sure how you're relating the MAC layer with the TCP layer. Duplex mode is an Ethernet-level concept and it doesn't propagate to IP, and not even to TCP. In Ethernet terms, duplexity means you can send or receive MAC frames only at different times (half duplex) or at the same time (full duplex).
The upper layers of the network stack are completely unaware of this (at least they should be). Consider the following example: you're sending a huge file over the network using FTP. Assuming a normal network system, the stack would be FTP/TCP/IP/Ethernet. From the FTP perspective you have a virtual session, from TCP you have a virtual pipe, from IP you just know how to reach the end system, and from the Ethernet perspective you just know how to reach the next node in the network.
TCP doesn't care whether your packets are chopped up during transmission, nor whether a packet is delayed within a certain threshold because an incoming packet is arriving. It only cares about receiving confirmation that the packet made it to the final destination. I hope this shows my point.
