Radio Frequency Hardware - communication

I have a project that consists of an autonomous robot which needs to communicate with a base station; the base station in turn has a GSM module to send data on to a website. My problem is the transmission of data between the robot and the base station, which will be over radio frequency. I am looking for hardware that allows me to send data from the robot to the base station, such as video frames and sensor data; the base station must also send data back to the robot, so bidirectional communication is required. The distance is about 5-6 km and we are currently assuming line of sight, i.e. very few obstacles. Can anyone help me choose the hardware for this radio frequency link?
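Whatever radio you end up choosing, it helps to sanity-check the link budget at 5-6 km before buying hardware. Below is a minimal free-space path-loss sketch in Python; the transmit power, antenna gains and receiver sensitivity are placeholder values, not recommendations, so substitute the numbers from the datasheets of the modules you are comparing.

    import math

    def fspl_db(distance_km, freq_mhz):
        # Free-space path loss in dB (distance in km, frequency in MHz).
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    # Placeholder link parameters -- replace with your radio's datasheet values.
    tx_power_dbm = 27.0          # ~500 mW transmitter
    tx_gain_dbi = 5.0            # robot-side antenna
    rx_gain_dbi = 12.0           # directional antenna at the base station
    rx_sensitivity_dbm = -95.0   # receiver sensitivity at the chosen data rate

    for freq_mhz in (433.0, 915.0, 2400.0, 5800.0):   # common ISM bands
        loss = fspl_db(6.0, freq_mhz)                 # worst case: 6 km
        rx_power = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - loss
        margin = rx_power - rx_sensitivity_dbm
        print(f"{freq_mhz:6.0f} MHz: FSPL {loss:5.1f} dB, margin {margin:5.1f} dB")

The margin is the headroom left for antenna misalignment, rain and multipath. Note that video needs far more bandwidth than telemetry, so the data rate (and therefore the receiver sensitivity) you plug in matters as much as the distance itself.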

Related

Is it possible to run a speaker recognition algorithm on a low-level embedded system (like a Cortex-M0)?

I'm trying to implement a hello-world algorithm in speaker recognition, GMM-UBM, on an end-device MCU. WiFi/BLE etc. are not available due to some limitations.
The UBM is pre-trained on a PC and then downloaded to flash memory that my MCU can access.
What will actually run on the MCU are only speaker enrollment and testing, i.e. a GMM model adaptation procedure and score (log-likelihood ratio) calculation.
Since the purpose of this system is to tell whether an input voice segment comes from the target speaker or not, a score threshold has to be selected. Currently, after speaker enrollment, I use a set of impostor voices (pre-saved in flash memory) to calculate the impostor scores against the target speaker. Assuming the impostor scores follow a 1-D Gaussian distribution, I can then pick the score threshold based on the acceptable false alarm rate.
But here's the problem: this threshold selection procedure is time-consuming, especially on a device with a CPU clock rate below 100 MHz. Imagine having to wait a minute or even longer for a response from the Google Assistant the first time you use it; that would be embarrassing.
My questions are:
Is my concept right, or is there some misunderstanding?
Is this the standard procedure for enrolling a speaker and selecting a threshold?
Is there some method to reduce the computing time of the threshold selection procedure, or even a pre-defined threshold that avoids the on-chip computation?
Thanks a billion!!
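For reference, here is a minimal sketch (in desktop Python rather than MCU C) of the Gaussian threshold selection described in the question: the threshold is placed so that an impostor score exceeds it only with the chosen false-alarm probability. The example scores are made-up placeholders; note that only the impostor scores' mean and standard deviation are needed, so they can be accumulated in a single pass over the impostor set.

    import statistics

    def threshold_from_impostors(impostor_scores, false_alarm_rate=0.01):
        # Assume impostor LLR scores are Gaussian; scores above the threshold
        # are accepted as the target speaker, so place the threshold where an
        # impostor exceeds it only with probability `false_alarm_rate`.
        mu = statistics.fmean(impostor_scores)
        sigma = statistics.stdev(impostor_scores)
        z = statistics.NormalDist().inv_cdf(1.0 - false_alarm_rate)
        return mu + z * sigma

    # Made-up impostor scores standing in for the pre-saved impostor set.
    scores = [-1.8, -2.1, -1.5, -2.4, -1.9, -2.0, -1.7, -2.2]
    print(threshold_from_impostors(scores, false_alarm_rate=0.01))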

Large-scale local network

For an application I need to extend WiFi range so that a Raspberry Pi mounted on a drone, away from the station, can connect to this WiFi network and stream video. What options are there for me to implement this network?
Suppose that the maximum distance between the drone (the RPi, which sends the video) and the station (a router or something similar connected to a PC, which receives the video) is 1 km.
First of all, your project sounds amazing and I would like to see it working with my own eyes :)
And to answer your questions:
1 km is quite a distance for all kinds of routers used at home or access points hidden inside buildings. Your only hope here is to set up multiple outdoor sector antennas (like THIS beauty from MikroTik) with CAPsMAN, or to use Ubiquiti devices with seamless/fast roaming, to cover the space where the drone will fly.
With this setup you can easily transfer large streams over a large area. That said, the maximum distance will also be affected by the number of other wireless networks in your vicinity.
Feel free to add more questions. We'll try our best to help you out.
And once it's done, please share some videos, photos, etc. with us :)
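On the streaming side (independent of which access points you pick), a minimal way to get frames from the Pi to the station is to JPEG-compress each frame and push it over a UDP socket. The sketch below assumes OpenCV on both ends; the IP address, port and quality settings are placeholders, and in practice a hardware H.264 pipeline (e.g. via GStreamer) will use the 1 km link far more efficiently.

    # sender.py -- runs on the Raspberry Pi (placeholder address/port):
    import cv2, socket

    STATION_IP, PORT = "192.168.1.10", 5005
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cap = cv2.VideoCapture(0)                 # Pi camera / USB camera

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 360))
        # JPEG-compress so each frame fits in a single UDP datagram (< 64 KB).
        ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 60])
        if ok and len(buf) < 65000:
            sock.sendto(buf.tobytes(), (STATION_IP, PORT))

    # receiver.py -- runs on the station PC:
    import cv2, socket
    import numpy as np

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5005))
    while True:
        data, _ = sock.recvfrom(65535)
        frame = cv2.imdecode(np.frombuffer(data, dtype=np.uint8), cv2.IMREAD_COLOR)
        if frame is not None:
            cv2.imshow("drone", frame)
            if cv2.waitKey(1) == 27:          # Esc to quit
                break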

How to locate objects based on specific motion in video using computer vision?

I am working on a problem where I have to track the usage of some machines in surveillance video (stock example video).
My question is what kind of modern machine learning techniques or traditional computer vision methods I can use. I don't want a model that works on a single frame at a time (like per-frame object recognition), because there are lots of occlusions and it is also hard to recognize some of the machines from a side viewpoint, so I want to exploit the machines' consistent motion across the input video.
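As a baseline that uses motion rather than per-frame appearance, classical background subtraction already localizes the moving machinery; the resulting masks (or tracks built on top of them) could then feed a temporal model if a learned approach is needed later. Below is a minimal OpenCV sketch; the file name, thresholds and minimum blob area are placeholders to tune for your footage.

    import cv2

    cap = cv2.VideoCapture("machines.mp4")    # placeholder input clip

    # MOG2 models the static background so persistent machine motion stands out.
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, varThreshold=25, detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:      # ignore small noise blobs
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("motion regions", frame)
        if cv2.waitKey(30) == 27:             # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()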

Application for gathering real-time data from sensors and composing multiple sources of data

I'm currently searching for an application to visualize data from different sensors. The idea is that a sensor picks up movement and sends the angle at which the object is located to the application. With about four of these sensors, the application should display the movement in their vicinity.
An example would be a car driving down a street. On both sides of the street there are sensors which pick up the angle (if the car were right in front of the sensor, the angle would be 90 degrees) and send it to the application. The application should then be able to take the input from the sensors and plot a moving car/object on a canvas.
So far I've found Power BI / Azure IoT Hub and Cumulocity, which gather sensor data but do not provide a way to transform it into the form described above.
Is there anything which is capable of this?
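If no off-the-shelf platform does the plotting step, the geometry itself is plain triangulation: two sensors at known positions each report a bearing, and intersecting the two rays gives the object's position to draw on the canvas. The sketch below is a minimal Python version with made-up sensor coordinates; angles are measured from the +x axis, so adapt it to your "90 degrees means straight ahead" convention.

    import math

    def intersect_bearings(p1, theta1_deg, p2, theta2_deg):
        # Intersect two bearing rays (angles measured from the +x axis, in degrees).
        # Each sensor reports only an angle; two sensors at known positions are
        # enough to triangulate the object's (x, y) position.
        x1, y1 = p1
        x2, y2 = p2
        d1 = (math.cos(math.radians(theta1_deg)), math.sin(math.radians(theta1_deg)))
        d2 = (math.cos(math.radians(theta2_deg)), math.sin(math.radians(theta2_deg)))
        # Solve p1 + t1*d1 = p2 + t2*d2 for t1 using Cramer's rule.
        det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
        if abs(det) < 1e-9:
            return None                       # rays are parallel, no fix
        rx, ry = x2 - x1, y2 - y1
        t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
        return (x1 + t1 * d1[0], y1 + t1 * d1[1])

    # Hypothetical layout: sensors on opposite sides of a 10 m wide street.
    sensor_a = (0.0, 0.0)
    sensor_b = (0.0, 10.0)
    print(intersect_bearings(sensor_a, 45.0, sensor_b, -45.0))   # roughly (5.0, 5.0)

With the estimated positions in hand, any plotting library or canvas widget can animate the moving object; the hard part the question describes is exactly this angle-to-position step.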

Comma.ai self-driving car neural network using client/server architecture in TensorFlow, why?

In comma.ai's self-driving car software they use a client/server architecture. Two processes are started separately, server.py and train_steering_model.py.
server.py sends data to train_steering_model.py via HTTP and sockets.
Why do they use this technique? Isn't this a complicated way of sending data? Wouldn't it be easier to have train_steering_model.py load the dataset by itself?
The document DriveSim.md in the repository links to a paper titled Learning a Driving Simulator. In the paper, they state:
Due to the problem complexity we decided to learn video prediction with separable networks.
They also mention the frame rate they used is 5 Hz.
While that sentence is the only one that addresses your question, and it isn't exactly crystal clear, let's break down the task in question:
Grab an image from a camera
Preprocess/downsample/normalize the image pixels
Pass the image through an autoencoder to extract a representative feature vector
Pass the autoencoder's output on to an RNN that will predict the proper steering angle
The "problem complexity" refers to the fact that they're dealing with a long sequence of large images that are (as they say in the paper) "highly uncorrelated." There are lots of different tasks that are going on, so the network approach is more modular - in addition to allowing them to work in parallel, it also allows scaling up the components without getting bottlenecked by a single piece of hardware reaching its threshold computational abilities. (And just think: this is only the steering aspect. The Logs.md file lists other components of the vehicle to worry about that aren't addressed by this neural network - gas, brakes, blinkers, acceleration, etc.).
Now let's fast forward to the practical implementation in a self-driving vehicle. There will definitely be more than one neural network operating onboard the vehicle, and each will need to be limited in size - microcomputers or embedded hardware, with limited computational power. So, there's a natural ceiling to how much work one component can do.
Tying all of this together is the fact that cars already operate using a network architecture - a CAN bus is literally a computer network inside of a vehicle. So, this work simply plans to farm out pieces of an enormously complex task to a number of distributed components (which will be limited in capability) using a network that's already in place.
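To make the decoupling concrete, here is a minimal two-process sketch of the same pattern (not comma.ai's actual protocol): one process owns and prepares the data, the other only trains on whatever batches arrive over the socket, so either side can be scaled, moved to another machine, or swapped out independently. The module names, port and random arrays are placeholders.

    # data_server.py -- hypothetical stand-in for the data-serving side.
    from multiprocessing.connection import Listener
    import numpy as np

    def batches():
        # Placeholder: real code would read camera frames + steering logs from disk.
        while True:
            frames = np.random.rand(32, 80, 160, 3).astype(np.float32)
            angles = np.random.rand(32, 1).astype(np.float32)
            yield frames, angles

    with Listener(("localhost", 6000), authkey=b"demo") as listener:
        with listener.accept() as conn:
            try:
                for batch in batches():
                    conn.send(batch)          # pickled and sent over the socket
            except (BrokenPipeError, ConnectionResetError):
                pass                          # trainer finished and disconnected

    # trainer.py -- hypothetical training side; it only consumes batches, so the
    # I/O-heavy data preparation never blocks the training loop.
    from multiprocessing.connection import Client

    with Client(("localhost", 6000), authkey=b"demo") as conn:
        for step in range(100):
            frames, angles = conn.recv()
            # model.train_on_batch(frames, angles) would go here.
            print("step", step, frames.shape, angles.shape)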
