How can I change the memory frequency on a Raspberry Pi 4, similar to how the CPU frequency can be changed? Scientific articles mention that this can be done from the BIOS, but I did not find any way to do that.
I'm seeing a significant difference in inference performance between my desktop CPU and when I run on the Neural Compute Stick 2 VPU - almost 500ms slower on VPU. This is the one line that takes the most time and has the biggest difference:
result = exec_net.infer( inputs={input_layer_ir: blob} )
My desktop is my gaming machine and has a nice fast Intel CPU. That said, is this the expected order of magnitude of difference between the VPU and CPU?
CPU inference is really fast, around 0.07 seconds, while the VPU takes around 0.5 seconds.
It's the road segmentation model from the Open Model Zoo samples.
Intel® Neural Compute Stick 2 (NCS 2) is a USB stick that offers you access to neural network functionality, without the need for large, expensive hardware. It is a plug-and-play device, so you are ready to start prototyping right away.
In terms of TFLOPS, the performance of the NCS 2 is still roughly a hundred times lower than that of well-known CPUs or GPUs. This behaviour is expected, so don't rely on it as an external device to replace the CPU plugin.
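To put numbers on a comparison like the one above, it helps to average many inference calls and discard warm-up runs, since the first calls to `exec_net.infer` often include one-off setup such as transferring the model to the stick. A minimal, generic timing helper, sketched here with a dummy workload because the OpenVINO objects from the question aren't available in a self-contained snippet:

```python
import time

def time_call(fn, n_warmup=3, n_runs=20):
    """Average the latency of a callable, discarding warm-up runs
    (first calls often include one-off setup cost)."""
    for _ in range(n_warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) / n_runs

# With the question's setup this would be something like:
#   avg = time_call(lambda: exec_net.infer(inputs={input_layer_ir: blob}))
# Here a dummy CPU workload stands in:
avg = time_call(lambda: sum(i * i for i in range(10_000)))
print(f"average latency: {avg * 1e3:.2f} ms")
```

Comparing the averaged numbers on both devices gives a fairer picture than timing a single call.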
I'm using a Raspberry Pi 4 B with ROS Melodic installed, and I have a Raspberry Pi Camera V2.1. I would like to send compressed video with as low a latency as possible to a microcontroller (ESP32) via sonar. Since the sonar link has very low bandwidth, low latency is important. I looked at this GitHub camera node raspberry pi camera node for Pi Camera V2, but the compressed video has a latency of more than 2 seconds. Is there any other way or approach to overcome the latency issue?
Thanks
High latency won't necessarily hurt a low-bandwidth system so long as the transmission is consistent. With any camera being processed through ROS there will almost always be some delay. The node above will probably be one of your best bets; however, there is also the usb_cam node. If neither of these is sufficient, you'll probably need to sit down and crunch the numbers to make sure you actually have enough bandwidth/processing power. Then you might want to look into creating your own video streaming node that's a little more tailor-made and lower overhead; I'd suggest GStreamer for this.
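As a sketch of the "crunch the numbers" step: every figure below (sonar throughput, resolution, frame rate, codec compression ratio) is an illustrative assumption, not a measurement, so plug in your own values.

```python
# Back-of-envelope check: does a compressed stream fit the sonar link?

def required_bitrate(width, height, fps, bits_per_pixel=12, compression_ratio=50):
    """Raw video bitrate divided by an assumed codec compression
    ratio, in bits per second. 12 bpp approximates YUV420 input."""
    raw = width * height * bits_per_pixel * fps
    return raw / compression_ratio

link_bps = 30_000  # assumed usable sonar throughput, bits/s (illustrative)
need = required_bitrate(320, 240, fps=2)
print(f"need {need / 1e3:.1f} kbit/s, link offers {link_bps / 1e3:.1f} kbit/s")
print("fits" if need <= link_bps else "does not fit: lower fps/resolution/quality")
```

If the required bitrate exceeds the link, no amount of node tuning will fix it; you have to drop resolution, frame rate, or quality first.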
First, I'm going to train a CNN model on my computer (an image classification program), then save it to be used on a Raspberry Pi.
After that, I'm going to give the Raspberry Pi some images and have it predict them using the trained model.
Finally, according to the result (the prediction), I want it to take an action.
So, is it possible to do that? If yes, what specifications should I keep in mind when I buy the Raspberry Pi?
It's completely possible.
Hardware
The following main hardware specs need to be considered when you're deploying your model on edge devices like the Raspberry Pi, Banana Pi, etc.:
Memory
Processing Speed
Memory - Random Access Memory (RAM). More RAM allows you to deploy bigger models on your edge device; as for processing speed, the CPU is the most important factor.
Raspberry Pi versions RAMs:
The Raspberry Pi 2 has 1 GiB of RAM.
The Raspberry Pi 3 has 1 GiB of RAM in the B and B+ models, and 512 MiB of RAM in the A+ model. The Raspberry Pi Zero and Zero W have 512 MiB of RAM.
The Raspberry Pi 4 is available with 2, 4 or 8 GiB of RAM. A 1 GiB model was originally available at launch in June 2019 but was discontinued in March 2020, and the 8 GiB model was introduced in May 2020.
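A rough way to check whether a model fits a given Pi's RAM is to estimate the weight footprint from the parameter count. The counts below are commonly quoted approximate figures, and activations plus runtime overhead come on top, so treat the result as a lower bound:

```python
def model_footprint_mib(n_params, bytes_per_param=4):
    """Approximate in-memory size of a model's weights in MiB,
    assuming float32 (4 bytes per parameter) unless quantized."""
    return n_params * bytes_per_param / 2**20

# Approximate, commonly cited parameter counts:
for name, params in [("MobileNetV2", 3_500_000),
                     ("ResNet-50", 25_600_000)]:
    print(f"{name}: ~{model_footprint_mib(params):.0f} MiB of weights")
```

Quantizing to 8-bit (`bytes_per_param=1`) cuts the footprint by 4x, which is why quantized efficient networks are the usual choice on 512 MiB boards.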
Model Optimization
If you already have a particular version of a Raspberry Pi, you can't change its capability, but you can optimize your model by updating your neural network. So consider using efficient networks such as EfficientNet, MobileNet, SqueezeNet, or GhostNet.
For object detection purposes, I have used a Raspberry Pi 2 B with Tiny YOLO and got quite a low FPS (frames per second).
I hope you can now decide which Raspberry Pi device is suitable for your task.
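For the questioner's last step, taking an action according to the prediction, a simple label-to-action dispatch is usually enough. The labels and action functions below are hypothetical placeholders; on a real Pi they would drive GPIO pins, motors, and so on:

```python
# Hypothetical mapping from a predicted class label to an action.

def open_gate():
    return "gate opened"

def sound_alarm():
    return "alarm sounded"

def do_nothing():
    return "no action"

ACTIONS = {"authorized": open_gate,
           "intruder": sound_alarm}

def act_on(prediction):
    """Run the action registered for the predicted label,
    falling back to a no-op for unknown labels."""
    return ACTIONS.get(prediction, do_nothing)()

print(act_on("intruder"))    # runs sound_alarm
print(act_on("background"))  # unknown label -> do_nothing
```

Keeping the mapping in a dict makes it easy to add classes later without touching the inference code.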
I would like to know if it is possible to run the OpenCV HOG Detector using a Raspberry Pi in real time using the Raspberry Pi camera.
Unfortunately not; even overclocked to 1000 MHz and with 64 MB for video, it's not enough.
On my old Mac with a 2.1 GHz dual-core Intel CPU and 2 GB of RAM I could barely get 8-12 FPS for a 640x480 stream.
I haven't tried OpenCV 3.0 (just 2.4.8) on the Raspberry Pi, so I don't have any soft cascades test results to share, but it sounds promising.
Another idea I can think of is using LBP cascades. You could start with a Haar cascade since there's one already for detecting bodies, so it would be easy to test, but LBP should be a bit faster. Perhaps you could train a cascade that works really well for a set environment.
Also, if it helps, you can use my little OpenCV wrapper for the PiCamera for tests. It basically returns frames from the Pi Camera module as cv::Mat.
I've had OpenCV running on a Pi, using a USB video grabber, as I am using a CCTV camera. I use Python.
It runs fine (for what I want to do), but you need to limit the resolution.
It's slower than a PC (2 GHz dual-core) but still works.
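If you need to pick a resolution, a first-order estimate is that per-frame cost scales with pixel count. Real pipelines also have fixed per-frame overhead, so treat this as an optimistic sketch rather than a guarantee:

```python
def estimated_fps(measured_fps, old_res, new_res):
    """Estimate FPS at a new resolution, assuming per-frame cost
    scales linearly with pixel count (a rough first-order model)."""
    old_px = old_res[0] * old_res[1]
    new_px = new_res[0] * new_res[1]
    return measured_fps * old_px / new_px

# If 640x480 gives ~8 FPS (as in the answer above), halving each
# dimension quarters the pixel count:
print(f"{estimated_fps(8, (640, 480), (320, 240)):.0f} FPS at 320x240")
```

Measuring at two resolutions and comparing against this model also reveals how much of your frame time is fixed overhead rather than pixel processing.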
I was wondering if there was a way that I could detect the exact frequency of a BLE signal with an iphone. I know it will be in the 2.4 GHz range but i would like to know the difference down to the 1 Hz range between the transmitted frequency and the received frequency. The difference would be caused by the doppler effect meaning that the central or the peripheral would have to be moving. Also is there an exact frequency that iphones transmit BLE at or does it depend on the iphone's antenna?
Bluetooth doesn't have one particular frequency it operates on. Via bluetooth.com:
Bluetooth technology operates in the unlicensed industrial, scientific and medical (ISM) band at 2.4 to 2.485 GHz, using a spread spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec.
… adaptive hopping among 79 frequencies at 1 MHz intervals gives a high degree of interference immunity and also allows for more efficient transmission within the spectrum.
So there'll be a wide spread of frequencies in use for even a single connection to a single device. There's hardware on the market like the Ubertooth that can do packet captures and spectrum analysis.
To my knowledge, iOS doesn't offer an API to find out this information. OS X does at some level, probably via SPI or an IOBluetooth API, because Apple's Hardware Tools (search for "Bluetooth") offer a way to monitor spectrum usage of Bluetooth Classic devices on OS X.
As to your desire to detect movement via the Doppler effect on the radios, my instincts say that it's going to be very, very difficult to do. I'm not sure what the exact mathematics behind it would look like, but you'll want to examine what the Doppler effect on a transmission at 2.4 GHz would be as a result of low-to-moderate rates of motion. (A higher rate of motion or relative speed, say, over a few tens of miles an hour, will quickly make Bluetooth the wrong radio technology to use because of its low transmit power.)
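To get a feel for the magnitude: for relative speeds far below the speed of light, the Doppler shift is approximately Δf = f·v/c. A quick calculation, assuming a mid-band 2.44 GHz carrier as an illustrative channel:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(carrier_hz, relative_speed_mps):
    """First-order Doppler shift for v << c: df = f * v / c."""
    return carrier_hz * relative_speed_mps / C

f = 2.44e9  # assumed mid-band Bluetooth channel, Hz
for v in (1.4, 13.4):  # roughly walking pace and ~30 mph, m/s
    print(f"{v} m/s -> {doppler_shift_hz(f, v):.0f} Hz shift")
```

Even at ~30 mph the shift is on the order of 100 Hz on a 2.4 GHz carrier, i.e. well under one part in ten million, which illustrates why resolving it through a consumer radio's frequency-hopping front end would be so difficult.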