Install multiple Dual Edge TPUs on a motherboard - google-coral

Is it possible to install multiple Dual Edge TPUs on a motherboard? I need to build a system that supports object detection on more than 100 video streams from cameras across a campus.

There is a PCIe adapter that can be used with one Dual Edge TPU. With this setup you could put as many Dual Edge TPU cards onto your motherboard as you have PCIe slots. I would subscribe to the GitHub issue to stay informed about additional options. Last time I talked with the creator of the adapter, he mentioned he was working on an adapter for the Raspberry Pi, and he also has plans to build an adapter that can take more than one Edge TPU.
Edit: There is a waiting list that has an option for an adapter card with 4 TPU slots.

Short answer: yes, you can, as long as you have enough supported slots, power, and a good cooling system, but it's not that easy!
Right now it's hard to find any cheap device that supports a dual-lane PCIe M.2 E-key slot; most expose only one lane, so only one of the two Edge TPU cores will work. According to the docs, each TPU can draw up to 3 A and heat up above 100 °C, so for each core you need to think about power, cooling, and a supported slot, which adds considerably to the price of the whole solution. The Dual Edge TPU is currently priced at $39.
For more computing power there is the ASUS AI accelerator board, a PCIe x16 card with the same Edge TPU cores in 8x or 16x configurations, giving 32 or 64 TOPS. The card includes cooling, and a PCIe slot is usually ready for high power consumption (as for GPUs). Of course you pay about 3x per core, but that avoids the cost of the problematic M.2 slots and cooling. I think it's still the best option.
You could also consider other devices with a better NPU, like something from the NVIDIA Jetson family, which are ready-to-use devices at several levels of price and performance (up to 32 TOPS). You can also cluster such devices with Kubernetes and add as many as you need.
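If you do end up with several Edge TPUs visible to the host, here is a minimal sketch (assuming the pycoral library is installed; the model filename is hypothetical) of how inference could be spread across them: each detected TPU gets its own interpreter, and camera streams are assigned round-robin.

```python
# Sketch: fan camera streams out across every Edge TPU the host can see.
# Assumes the pycoral runtime is installed; the model path is hypothetical.
from itertools import cycle

from pycoral.utils.edgetpu import list_edge_tpus, make_interpreter

MODEL = 'ssd_mobilenet_v2_coco_quant_edgetpu.tflite'  # hypothetical model file

# One interpreter per detected TPU; a Dual Edge TPU on a two-lane slot
# shows up as two entries, on a one-lane slot as only one.
tpus = list_edge_tpus()
interpreters = [make_interpreter(MODEL, device=f':{i}') for i in range(len(tpus))]
for interp in interpreters:
    interp.allocate_tensors()

# Round-robin assignment: stream k is always served by TPU (k mod N).
assignment = {stream_id: interp
              for stream_id, interp in zip(range(100), cycle(interpreters))}
```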

Related

Is it possible for multiple GPUs to work as one with more memory?

I have a deep learning workstation with 4 GPUs with 6 GB of memory each. Would it be possible to make a Docker container see the 4 GPUs as one but with 24 GB?
Thank you.
I haven't worked with Docker before, but I work a lot with CUDA on multiple GPUs. Since multiple GPUs are physically separate devices, working with them requires a lot of memory synchronization at the code level.
I don't think Docker can virtually merge all the GPU memory, as that would make the computation very complicated on the GPU side. Working with multiple GPUs requires custom kernels that synchronize with each other.
The best analogy I can offer is: "Can you get two bare-metal computers to merge their RAM and run Microsoft Word as if it were a single machine?"
Short answer: No.
Alternate answer: Yes, but it requires additional hardware, is expensive, and is probably incompatible with your existing hardware.
Explanation:
It is possible if your GPUs are connected using NVIDIA NVLink (take a look at the details here: https://www.nvidia.com/en-us/design-visualization/nvlink-bridges/).
NVLink is usually used for pairs of GPUs, e.g. GPU0 connected to GPU1 and GPU2 connected to GPU3; in that case the best you can obtain is 2 GPUs with doubled memory each.
Another option is a special InfiniBand module, installed in modern GPU servers by some vendors.
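To make the code-level synchronization point concrete, here is a minimal sketch (assuming PyTorch with at least two visible CUDA devices; the layer sizes are arbitrary) of model parallelism, which is the usual way to use the combined memory of several GPUs without pretending they are one device.

```python
# Sketch: you cannot pool 4x6 GB into one 24 GB device, but you can split
# a model across GPUs so each holds part of the weights (model parallelism).
# Assumes PyTorch with 2+ CUDA devices; layer sizes are arbitrary examples.
import torch
import torch.nn as nn

class ShardedNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Each half of the network lives on a different GPU.
        self.part1 = nn.Linear(4096, 4096).to('cuda:0')
        self.part2 = nn.Linear(4096, 10).to('cuda:1')

    def forward(self, x):
        x = torch.relu(self.part1(x.to('cuda:0')))
        # Activations are copied between GPUs explicitly; this transfer
        # is the "synchronization in code" the answer refers to.
        return self.part2(x.to('cuda:1'))

model = ShardedNet()
out = model(torch.randn(8, 4096))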

Can I use a 2.4 GHz rated parabolic antenna for 700 MHz Verizon 4G LTE data?

It seems 700 MHz is a long way from the range of a 2.4 GHz antenna, but I have seen people marketing similar-looking parabolic antennas that claim they are wide-spectrum and go down to 600 MHz and up to 5 GHz... I'm just not sure at what attenuation, though.
Will the same Verizon tower switch me from 700 MHz to one of their higher frequencies with better reception?
Verizon Wireless appears to utilize multiple 4G LTE frequencies, such as 2.1 GHz, 1.9 GHz, 1.7 GHz, 850 MHz, and 700 MHz. I don't know whether they would responsively switch me from 700 MHz to a higher frequency if my signal improves, though. This would be important if I need an antenna that works on more than one frequency.
How I know the band I'm using: I used my iPhone to figure out that I'm using band 13, which is 700 MHz, for communication with my local Verizon tower, by following these directions. I have LTE data miles away from the tower, but it's not a good connection, so I'm looking to get a highly directional antenna for my JetPack 7730L.
Specific use case
Here's the 900 mm wide (about 3 feet) parabolic antenna I'm looking at, so you have a specific example to pick on.
More than likely the answer to this is going to be no. Although it seems plausible that a 2.4 GHz antenna will be able to pick up or transmit frequencies that are in a similar range, the reality is that cell phone signals and WiFi are drastically different technologies.
Moreover, Stack Overflow is unlikely to provide a concrete answer to this question. You might have better luck in the Signal Processing area: https://dsp.stackexchange.com/
As a side note: the frequencies that Verizon uses depend on the environment and the other cell towers in the area. It isn't something that you can just change on your phone.
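For a rough sense of the attenuation question, here is a back-of-the-envelope sketch using the ideal parabolic-dish gain formula and an assumed 55% aperture efficiency. Keep in mind these are best-case numbers: a feed tuned for 2.4 GHz will couple far worse at 700 MHz than the dish geometry alone suggests.

```python
# Back-of-the-envelope sketch: ideal parabolic-dish gain at both frequencies.
# Real gain depends heavily on the feed design; the 55% aperture efficiency
# below is a common rule-of-thumb assumption, not a measured figure.
import math

C = 299_792_458        # speed of light, m/s
D = 0.9                # dish diameter, m (the 900 mm dish in the question)
EFFICIENCY = 0.55      # assumed aperture efficiency

def dish_gain_dbi(freq_hz):
    wavelength = C / freq_hz
    gain = EFFICIENCY * (math.pi * D / wavelength) ** 2
    return 10 * math.log10(gain)

print(f"700 MHz: {dish_gain_dbi(700e6):.1f} dBi")   # ~13.8 dBi, ideal case
print(f"2.4 GHz: {dish_gain_dbi(2.4e9):.1f} dBi")   # ~24.5 dBi, ideal case
```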

Wider device support for ARCore? (Note 8, hint hint)

Are there any plans to extend ARCore support to a wider range of devices? I'd love to try ARCore on my Samsung Note 8, but the samples I have seen so far don't work, as they are locked to the Pixel and Galaxy S8 devices.
From their news announcement:
ARCore will run on millions of devices, starting today with the Pixel and Samsung’s S8, running 7.0 Nougat and above. We’re targeting 100 million devices at the end of the preview. We’re working with manufacturers like Samsung, Huawei, LG, ASUS and others to make this possible with a consistent bar for quality and high performance.
Initial launch is loosely planned for this winter. Speaking as someone who works on VIO/SLAM systems (but not at Google), new flagship devices like the Note 8 are certainly powerful enough to run ARCore. The bigger difficulty is tight sensor integration. Running VIO requires camera calibration and IMU delay characterization (i.e., given an accelerometer timestamp and a camera timestamp, how do you align the measurements?). It's easy enough to do on a device-by-device basis, but a large-scale rollout requires significant device study as well as factory process changes.
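As a toy illustration of that alignment problem (all numbers below, including the 5 ms delay, are invented examples), interpolating the IMU track at a delay-corrected camera timestamp looks something like this:

```python
# Sketch of the alignment problem described above: given IMU samples and a
# camera frame timestamp, estimate the IMU reading at the moment of exposure.
# The 5 ms sensor delay is an invented example value; calibrating that
# per device is exactly the hard part of a large-scale rollout.
import numpy as np

imu_t = np.array([0.000, 0.005, 0.010, 0.015, 0.020])   # IMU timestamps, s
imu_ax = np.array([0.02, 0.05, 0.11, 0.09, 0.04])       # accel x, m/s^2

camera_t = 0.012          # frame timestamp from the camera clock, s
imu_delay = 0.005         # calibrated offset between the two clocks, s

# Interpolate the accelerometer track at the delay-corrected frame time.
aligned_ax = np.interp(camera_t - imu_delay, imu_t, imu_ax)
print(f"accel x at exposure: {aligned_ax:.3f} m/s^2")
```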

Advice on a GPU for Dell Precision T3500 for image processing

I am a grad student, and in our lab we have a Dell Precision T3500 (http://www.dell.com/us/business/p/precision-t3500/pd). We use it primarily for image processing research, and we need to use OpenCV 2.4.7's "ocl" module, i.e., its OpenCL bindings, to parallelize our work for some publications.
I looked at the workstation's specs, and they specify that we can get an NVIDIA Quadro 5000 or an AMD FirePro V7900 (the best from each manufacturer for this workstation).
This is where I am confused. Most of the reviews compare performance for CAD/CAM, Maya, and other software, but we will be writing our own code using OpenCV. Can anyone help me choose the better of these two GPUs? Or is there any way I can get a better GPU by upgrading the power supply?
We would greatly appreciate all the advice we can get at this stage!
Thank you very much.
If you are using OpenCL, I agree with DarkZeros: you should probably buy AMD hardware. NVIDIA supports OpenCL only grudgingly, as they want everyone to use CUDA.
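Whichever card you end up with, it's worth verifying what the driver actually exposes before writing OpenCV code against it. A quick sketch using pyopencl (assuming it's installed) lists every OpenCL platform and device:

```python
# Sketch: enumerate the OpenCL platforms and devices the installed driver
# exposes. Assumes the pyopencl package is installed.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name, '|', device.name,
              '|', device.global_mem_size // 2**20, 'MiB',
              '|', device.max_compute_units, 'compute units')
```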
Both of the cards you mention seem rather similar, with a theoretical maximum of around 1 TFLOPS. However, both of them are rather old and very expensive. If you are not bound by any purchasing agreement, I really recommend you buy a consumer card. The specs on dell.com only mean that if you purchase the computer from Dell you can select one of those GPUs for it; they do not limit what you can do afterwards.
Depending on the chassis, you could change your power supply. That would mean you could purchase something like this: http://www.amazon.com/XFX-RADEON-1000MHz-Graphics-R9290XENFC/dp/B00G2OTRMA. It has double the memory of either of those professional cards and over 5x the theoretical processing power.
To be fair, if you have the money to spend, the GTX Titan is still an excellent choice. It is about as fast as that AMD card, and you can use CUDA with it if you need to; considering how common CUDA is in scientific computing, it might be wise to go with that.
However, if you cannot switch your power supply, because it's a non-standard size or whatnot, then you are more limited. In that case you should search for pretty much the heftiest card that can run on 150 W. Even those have perhaps double the performance of the cards the computer was originally available with.

Detecting exact frequency of Bluetooth signal

I was wondering if there is a way I could detect the exact frequency of a BLE signal with an iPhone. I know it will be in the 2.4 GHz range, but I would like to know the difference, down to the 1 Hz range, between the transmitted frequency and the received frequency. The difference would be caused by the Doppler effect, meaning that the central or the peripheral would have to be moving. Also, is there an exact frequency that iPhones transmit BLE at, or does it depend on the iPhone's antenna?
Bluetooth doesn't have one particular frequency it operates on. Via bluetooth.com:
Bluetooth technology operates in the unlicensed industrial, scientific and medical (ISM) band at 2.4 to 2.485 GHz, using a spread spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec.
… adaptive hopping among 79 frequencies at 1 MHz intervals gives a high degree of interference immunity and also allows for more efficient transmission within the spectrum.
So there'll be a wide spread of frequencies in use for even a single connection to a single device. There's hardware on the market like the Ubertooth that can do packet captures and spectrum analysis.
To my knowledge, iOS doesn't offer an API to find out this information. OS X does at some level, probably via an SPI or an IOBluetooth API, because Apple's Hardware Tools (search for "Bluetooth") offer a way to monitor the spectrum usage of Bluetooth Classic devices on OS X.
As to your desire to detect movement via the Doppler effect on the radios, my instinct says that it's going to be very, very difficult to do. I'm not sure what the exact mathematics behind it would look like, but you'll want to examine what the Doppler effect on a 2.4 GHz transmission would be at low-to-moderate rates of motion. (A higher rate of motion or relative speed, say over a few tens of miles an hour, will quickly make Bluetooth the wrong radio technology to use because of its low transmit power.)
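For a rough sense of scale, here is a sketch using the standard Doppler formula with some assumed everyday speeds. The resulting shifts are tiny compared to the roughly ±150 kHz of carrier offset a BLE transmitter is already allowed, so they would vanish into ordinary oscillator error.

```python
# Sketch of why this is hard: Doppler shift at 2.4 GHz for everyday speeds.
# shift = f * v / c, for motion directly toward or away from the receiver.
F_CARRIER = 2.4e9          # Hz, nominal BLE band center
C = 299_792_458            # m/s

for label, v in [('walking, 1.4 m/s', 1.4),
                 ('cycling, 7 m/s', 7.0),
                 ('driving, 30 m/s', 30.0)]:
    shift = F_CARRIER * v / C
    print(f'{label}: {shift:.0f} Hz')   # ~11, ~56, ~240 Hz
```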
