How to connect a Nexys2 FPGA with a camera? Driver issue - image-processing

My project is to capture images and process them to move a wheelchair accordingly. I am using the Nexys2 FPGA board for this purpose. The Nexys2 has a USB port and the camera is also a USB camera, but I don't have the drivers in Verilog that would make the Nexys2 and the camera communicate with each other. Please help me, I'll be very grateful.

Well, if you manage to write a driver for a USB camera in Verilog, you can sell that for a lot of money :)
Sarcasm aside, there is NO WAY you can access a USB camera in Verilog unless you have a USB host implemented in your FPGA, a CPU controlling it, and a software driver for that camera.
There are alternatives to this: you can buy a camera which has an FPGA-friendly connector, like this one:
5 Mega Pixel Digital Camera Package
which comes with Verilog code that you can use in your project.

Sadly, the USB port on the Digilent Nexys 2 board does not have host capabilities; it can only act as a USB slave. The USB connection on the board is used for powering the board and for configuring the FPGA and the other on-board peripherals.
The newer Nexys 3 board has a second USB port, but it has the same issue in that it can only act in slave mode, and because of its configuration it can only be used for mouse and keyboard input.

Related

Multiple usb cameras on Raspberry Pi using OpenCV

I would like to connect 2 USB webcams to a Raspberry Pi and be able to get at least 1920 x 1080 frames at 10 fps using OpenCV. Has anyone done this, and does anyone know if it is possible? I am worried that the Pi has only one USB bus (USB 2.0) and might run into a USB bandwidth problem.
Currently I am using an Odroid, which has both a USB 2.0 and a USB 3.0 bus, so I can connect one camera to each without any problem.
What I have found in the past is that no matter what bandwidth options you select through OpenCV, the cameras try to take up as much bandwidth as they want.
This has led to multiple cameras on a single USB port being a no-no.
That being said, this will depend on your camera and is very likely worth testing. I regularly use Microsoft HD-3000 cameras and they do not like working on the same port, even on my beefy i7 laptop, because the limitation is in the USB host bandwidth and not in processing power.
I have had a similar development process to yours in the past, though, and selected an Odroid XU4 because it had multiple USB hosts for the cameras. It also means you have a metric tonne more processing power available and, more importantly, you can buy and use the on-board chip if you want to create a custom electronics design.
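If you do test it, a small sketch along the following lines is enough to see whether two cameras will stream at the same time. This is my own illustration rather than something from the original answer; the device indices, resolution and frame rate are assumptions, and requesting MJPG usually makes each camera reserve far less USB bandwidth than raw YUYV:

    // Two-camera bandwidth smoke test with OpenCV (C++).
    // Device indices, resolution, FPS and the MJPG request are assumptions.
    #include <opencv2/opencv.hpp>
    #include <cstdio>

    int main() {
        cv::VideoCapture cam0(0), cam1(1);
        cv::VideoCapture* cams[] = { &cam0, &cam1 };
        for (cv::VideoCapture* cam : cams) {
            if (!cam->isOpened()) { std::fprintf(stderr, "camera failed to open\n"); return 1; }
            // MJPG usually lowers the per-camera USB bandwidth reservation.
            cam->set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
            cam->set(cv::CAP_PROP_FRAME_WIDTH, 1920);
            cam->set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
            cam->set(cv::CAP_PROP_FPS, 10);
        }

        cv::Mat f0, f1;
        for (int i = 0; i < 100; ++i) {
            // If the bus runs out of bandwidth, one of these grabs starts failing.
            if (!cam0.read(f0) || !cam1.read(f1)) {
                std::fprintf(stderr, "frame grab failed at iteration %d\n", i);
                return 1;
            }
        }
        std::puts("both cameras delivered 100 frames");
        return 0;
    }

If both cameras open but one of them stops delivering frames as soon as the other starts streaming, you are hitting the host-bandwidth limit described above, not a CPU limit.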

ALSA, TinyALSA support in Android Things for Raspberry Pi 0.5.1-devpreview

The 0.5.1-devpreview BSP for the RPI3 comes with libtinyalsa.so and libalsautils.so, but seemingly no adb shell command-line support for audio.
We are designing a custom audio board (with an audio processor) for use with Android Things and Raspberry Pi, and we would typically use ALSA utilities and custom kernel drivers for accessing this board under Raspbian.
It is possible the default Android Things I2S peripheral drivers and Peripheral Manager support the stream interfaces we need (the same way the VoiceHat drivers were wrapped), but we have little to no information on the default drivers in the RPI3 BSP, and we don't have any information on how to override the default drivers in Android Things without a distro rebuild.
It seems silly to write a native C++ low-level peripheral driver when so many audio processor companies already provide ALSA-ready ASoC drivers for device source tree use.
Best practices for writing your own audio driver for Android Things?
The VoiceHat driver is one example of how to do a userspace audio driver.
If you're using a custom audio board, you should be aware of the audio chip the board uses. Looking at that chip's datasheet, you should be able to use the same peripheral I/O (UART, GPIO, I2C, SPI) to configure the connection and read/write data over the I2S bus.
In the Google Assistant sample, the app registers the VoiceHat at the beginning of the activity and unregisters it at the end of the activity.
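If you can get native (NDK) access to the PCM device - and that is an assumption about what the dev preview allows - the tinyalsa API shipped in libtinyalsa.so is small enough to try directly. A minimal playback sketch looks like this; the card/device numbers and the PCM configuration are placeholders for whatever your audio board actually exposes:

    // Minimal tinyalsa playback sketch (C API, usable from C++).
    // Card/device numbers and the PCM config are assumptions; adjust them
    // to match the I2S interface your custom audio board is wired to.
    #include <tinyalsa/asoundlib.h>
    #include <cstdio>
    #include <vector>

    int main() {
        pcm_config config = {};
        config.channels     = 2;
        config.rate         = 48000;
        config.period_size  = 1024;
        config.period_count = 4;
        config.format       = PCM_FORMAT_S16_LE;

        // Card 0, device 0 assumed; check /proc/asound on the target.
        pcm* handle = pcm_open(0, 0, PCM_OUT, &config);
        if (handle == nullptr || !pcm_is_ready(handle)) {
            std::fprintf(stderr, "pcm_open failed: %s\n",
                         handle ? pcm_get_error(handle) : "no handle");
            return 1;
        }

        // Write one period of silence as a smoke test.
        std::vector<short> silence(config.period_size * config.channels, 0);
        pcm_write(handle, silence.data(),
                  static_cast<unsigned int>(silence.size() * sizeof(short)));

        pcm_close(handle);
        return 0;
    }

Whether this works without a custom HAL on the dev preview is exactly the open question here, so treat it as an experiment rather than the supported path.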

How to connect TX and RX on the ESP8266 to USB pins D- and D+ on the NXP LPC1769?

I have a board (with an NXP LPC1769) hosting an application and connected to the PC via a USB cable. I use an application running on my PC, and communication is pretty straightforward (some ASCII commands are exchanged) and working as it should.
So, what I would like to achieve is to connect my favorite Wi-Fi module, the ESP8266, using its TX/RX pins to the USB connector (D- and D+) of the NXP LPC1769 instead of to my PC.
You may ask why I don't use the UART pins of the LPC1769. My answer: I would love to, but it requires quite a lot of code modification, which is not pleasant at this stage for me.
Pins P0.29 and P0.30 of the LPC1769 are connected to the USB connector.
Here is the existing schematic:
I would like to ask if this is even possible, and if possible, what are the options?
(I am inexperienced with NXP MCUs, still a work in progress, please bear with me).
Thank you.
No my friend, it is simply not possible to connect USB serial to the RX/TX pins of the ESP8266. First there is the hardware limitation: only asynchronous serial communication is possible with the ESP8266. That device has no USB host in it that can be programmed, so there is no way to do what you ask with that circuit alone. Nevertheless, I would suggest implementing a simple board with an FTDI device of your choice (an FT232R, for example) and doing the conversion from USB to asynchronous serial communication (RX/TX) directly.
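If you do eventually bite the bullet and use one of the LPC1769's UARTs for the ESP8266 instead, the code change may be smaller than you fear. Below is a rough polled-UART sketch using the CMSIS LPC17xx.h register names; the pin choice (P0.2/P0.3 = TXD0/RXD0), the PCLK value and the baud divisor are assumptions you would need to adapt to your own clock setup:

    // Rough polled UART0 sketch for the LPC1769 (CMSIS LPC17xx.h names).
    // Pin selection, PCLK and baud divisor are assumptions for illustration.
    #include "LPC17xx.h"

    static void uart0_init(void)
    {
        LPC_SC->PCONP       |= (1u << 3);                  // power up UART0
        LPC_PINCON->PINSEL0 &= ~((3u << 4) | (3u << 6));
        LPC_PINCON->PINSEL0 |=  ((1u << 4) | (1u << 6));   // P0.2 = TXD0, P0.3 = RXD0

        LPC_UART0->LCR = 0x83;   // 8N1, divisor latch open
        LPC_UART0->DLM = 0;
        LPC_UART0->DLL = 14;     // roughly 115200 baud assuming PCLK = 25 MHz
        LPC_UART0->LCR = 0x03;   // 8N1, divisor latch closed
        LPC_UART0->FCR = 0x07;   // enable and reset the FIFOs
    }

    static void uart0_putc(char c)
    {
        while (!(LPC_UART0->LSR & (1u << 5)))   // wait until THR is empty
            ;
        LPC_UART0->THR = c;
    }

    static char uart0_getc(void)
    {
        while (!(LPC_UART0->LSR & 1u))          // wait for received data
            ;
        return (char)LPC_UART0->RBR;
    }

    int main(void)
    {
        uart0_init();
        for (;;)
            uart0_putc(uart0_getc());           // echo whatever the ESP8266 sends
    }

Wiring-wise that is just the LPC1769 TXD0 to the ESP8266 RX and RXD0 to the ESP8266 TX (both are 3.3 V logic), which is far simpler than anything involving the USB D+/D- pair.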

Starter kit for bare-metal programming for Beaglebone

I plan to try some bare-metal TCP/IP stuff on a BeagleBone. TI StarterWare contains a TCP/IP stack, which is good. However, to flash my program to the BeagleBone I need a JTAG adapter and software. Which one should I buy/use? There are so many different JTAG debuggers; are they all equivalent?
One preliminary remark:
You don't really need a JTAG probe for downloading/running/flashing your program: you can load and execute using the u-boot loadb or load commands from the serial console, provided that your BeagleBone still has u-boot installed - the procedure for connecting a USB-to-TTL adapter is described here. I would strongly suggest buying the exact adapter featured in the article above on eBay if you don't have one.
In addition to the u-boot/serial adapter, you can connect your BeagleBone to your local network and download your application using the u-boot tftp commands. You can buy a USB-to-Ethernet adapter for a couple of dollars, plug it into your PC, then install a TFTP server such as Tftpd32 (Windows) or tftpd-hpa (Linux). You will then be able to connect your development PC directly to your BeagleBone.
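For reference, a typical session at the u-boot prompt looks roughly like this; the load address (0x80000000 is the start of DDR on the AM335x), the IP addresses and the file name are just examples:

    U-Boot# setenv ipaddr 192.168.1.20      # address of the BeagleBone
    U-Boot# setenv serverip 192.168.1.10    # address of the PC running the TFTP server
    U-Boot# tftp 0x80000000 app.bin         # fetch the bare-metal image over the network
    U-Boot# go 0x80000000                   # jump into it

    U-Boot# loadb 0x80000000                # alternative: send app.bin over the serial line (kermit)
    U-Boot# go 0x80000000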
In case your BeagleBone no longer has a working u-boot installed, you can still re-install it from the serial port:
This can be done by connecting both P8.44/SYS_BOOT3/LCD_DATA3/GPIO2_9 and P8.43/SYS_BOOT2/LCD_DATA2/GPIO2_8 to ground (two of P9.43/P9.44/P9.45/P9.46) using two 4.7 kΩ resistors, powering the BeagleBone with an external 5 V power supply (not over USB), and power-cycling the BeagleBone - power-cycling IS required; performing a 'reset' is not enough for the new SYSBOOT configuration to be taken into account.
You can then download u-boot from your PC using Tera Term: u-boot-spl.bin should be downloaded using X-modem, and u-boot.bin using Y-modem, as described in the 'Boot over UART' section of this TI wiki article.
This being said, a JTAG probe is always useful when debugging a bare metal application or the Linux kernel: as a hobbyist, I am using the EDU version of Segger J-link with my beaglebone (around USD 63). If you need it for commercial use, the price tag is around USD 400 I guess.
You will also need to have the TI 20 pin header soldered on your beaglebone - see section "Optional JTAG" of the beaglebone documentation.
I bought the Samtec FTR-110-03-G-D-06 connector, and am perfectly happy with it.
Please note the CircuitCo used to sell Beaglebone Blacks with the connector already soldered.
Finally, you will need an adapter to connect the TI 20 Pin connector to the standard 20 pin ARM JTAG connector used by the J-link.
To my knowledge, there are at least two solutions:
The J-Link TI-CTI-20 Adapter from Segger, which was my choice,
The BeagleBone Black JTAG Adapter Kit from Tin Can Tools.
The J-Link has software support for both Windows and Linux. I have been using it with StarterWare and my BeagleBone Black on both Windows and Linux systems with success to this day. It has been working fine with a bunch of different Cortex-M0+, M0, M3, and M4 parts as well.
Unfortunately, I have not experimented with other JTAG probes...
From what I have read, the JTAG emulator that allows you to use the free license to Code Composer Studio with the Beaglebone Black is the XDS100v2. Here is a link to it:
https://store.ti.com/TMDSEMU100V2U-20T-XDS100v2-JTAG-Emulator-20-pin-compact-TI-connector-P1848.aspx
I just bought one myself to use with the BBB. I have not tried it yet though.
You don't necessarily need JTAG to test your programs. You can build one, put it somewhere in your filesystem, and then, during boot, ask your bootloader (i.e. u-boot) to load it, jump there, and execute it.

Writing Device Drivers for a Microcontroller (any)

I am very enthusiastic about writing device drivers for microcontrollers (like PIC, Atmel, etc.).
Since I am a newbie in this controller-coding area, I just want to know whether writing device drivers for a controller is the same as writing them for Linux (or any other OS)?
Also, can anyone suggest an online tutorial on building device drivers?
Thanks,
If you are thinking about developing device drivers to interface your device with a host computer (probably over USB), then note that most microcontrollers nowadays implement default USB classes that rely on native host drivers.
A concrete example:
If you use a PIC18F4555, you can use the regular HID (human interface device) Windows driver to communicate with your microcontroller (provided you implement it correctly). No need to develop any driver.
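On the host side you then talk to the device through the operating system's generic HID support. As one illustration (not from the original answer), the cross-platform hidapi library can be used; the VID/PID and the report layout below are placeholders for whatever your firmware enumerates as:

    // Host-side sketch using the hidapi library to talk to an MCU that
    // enumerates as a generic HID device. VID/PID and the report contents
    // are placeholders for your own firmware.
    #include <hidapi/hidapi.h>
    #include <cstdio>

    int main() {
        if (hid_init() != 0)
            return 1;

        hid_device* dev = hid_open(0x04D8, 0x003F, nullptr);  // example Microchip VID/PID
        if (dev == nullptr) {
            std::fprintf(stderr, "HID device not found\n");
            return 1;
        }

        // 64-byte output report; byte 0 is the report ID (0 when unused).
        unsigned char out[65] = {0};
        out[1] = 0x42;                           // example command byte for the firmware
        hid_write(dev, out, sizeof(out));

        unsigned char in[64] = {0};
        int n = hid_read(dev, in, sizeof(in));   // blocking read of the reply
        if (n > 0)
            std::printf("got %d bytes, first = 0x%02X\n", n, in[0]);

        hid_close(dev);
        hid_exit();
        return 0;
    }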
Writing a device driver for an MCU is a pretty far cry from writing one for an OS. Most MCUs won't have an OS running on them at all. You'll generally end up writing some low-level interrupt service routines (ISRs) that fill up buffers, which your application software will end up emptying. You don't have to fit into any device driver paradigm that an OS has defined. You basically have to read the datasheet for the device you want to interface with and read from and write to its memory over whatever interface it uses (e.g. SPI, I2C, UART, etc.). Ultimately the device driver ought to provide intuitive function calls to the application software.
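As a concrete illustration of that "ISR fills a buffer, application empties it" pattern, here is a generic sketch; the uart_rx_isr hookup and the way the byte arrives from the hardware are hypothetical, since on a real MCU they come from the vendor's header and vector table:

    // Generic ring-buffer receive driver: the ISR produces bytes, the
    // application consumes them. The interrupt hookup is hypothetical.
    #include <stdint.h>
    #include <stdbool.h>

    #define RX_BUF_SIZE 64u                       // power of two keeps the index masking cheap

    static volatile uint8_t  rx_buf[RX_BUF_SIZE];
    static volatile uint32_t rx_head = 0;         // written only by the ISR
    static volatile uint32_t rx_tail = 0;         // written only by the application

    // Called from the UART receive interrupt; on real hardware the byte
    // would be read from the UART data register here.
    void uart_rx_isr(uint8_t byte_from_hw)
    {
        uint32_t next = (rx_head + 1u) & (RX_BUF_SIZE - 1u);
        if (next != rx_tail) {                    // drop the byte if the buffer is full
            rx_buf[rx_head] = byte_from_hw;
            rx_head = next;
        }
    }

    // Non-blocking read used by the application; returns true if a byte was available.
    bool uart_read_byte(uint8_t* out)
    {
        if (rx_tail == rx_head)
            return false;                         // buffer empty
        *out = rx_buf[rx_tail];
        rx_tail = (rx_tail + 1u) & (RX_BUF_SIZE - 1u);
        return true;
    }

The application then polls uart_read_byte() from its main loop (or a task) and parses the bytes, which is the "emptying" half described above.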
If you are using an AVR MCU like the ATmega, then you can use V-USB (https://www.obdev.at/products/vusb/index.html) for those MCUs that don't have built-in USB/HID hardware; it handles the USB signalling in interrupts, with the D+ and D- pins of the USB connected to digital I/O ports of the MCU.
The ATmega U2-series parts have their own USB communication port and HID support.
