Get input stream from another driver? Graphics tablet driver, output MIDI to use with the Web MIDI API

I have zero experience with writing drivers.
Before I commit to learning more about this subject, I need to ask whether my idea is feasible to implement, whether it's worth it, or whether there's a better way.
Is there a way to capture the graphics tablet driver's input stream from another driver (one the OS sees as a MIDI hardware device), decode it using the standard, documented graphics tablet protocols, and transform it into MIDI output (ideally without losing data)?
The only reason for doing all that is that the Web MIDI API has been supported in Chrome since stable release 49, while the Web USB API is still in the spec-drafting phase.
See Can I Use - MIDI.
I want to use the Web MIDI API to get my tablet's input from that MIDI driver and transform it back into a graphics tablet input stream (coordinates, pressure depth, etc.), in order to draw on a Canvas with an experience close to native apps that have full access to the graphics tablet's functionality.

The only way to get the USB MIDI driver to attach to the device would be for the device to report itself as a MIDI device; this would require you to modify the device's firmware. (Which is not something you can do from software on the PC, let alone from a web app.)
It would be possible to write your own device driver for the tablet that converts all events into MIDI messages (on Linux, this does not require a kernel driver but could be done with a user-space program, as sketched below), but this is not something you could do without experience.
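To make the user-space route concrete, here is a minimal Linux sketch, assuming the tablet shows up as an evdev node and using ALSA's sequencer API to publish a virtual MIDI port. The device path and the controller numbers (CC 20-22) are arbitrary choices of this example, and the 7-bit scaling throws away resolution; a real bridge would query the axis ranges with EVIOCGABS and could use 14-bit CC pairs (or pitch bend) to preserve more of the data:

    /* tablet2midi.c - hedged sketch: read tablet events from evdev and
     * republish them as MIDI CC messages on a virtual ALSA port. */
    #include <alsa/asoundlib.h>
    #include <linux/input.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* 1. Open the tablet's evdev node (path is an assumption; find
         *    yours under /dev/input/by-id/). */
        int fd = open("/dev/input/event5", O_RDONLY);
        if (fd < 0) { perror("open tablet"); return 1; }

        /* 2. Create a virtual MIDI source that other applications can
         *    subscribe to. */
        snd_seq_t *seq;
        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0) < 0)
            return 1;
        snd_seq_set_client_name(seq, "tablet2midi");
        int port = snd_seq_create_simple_port(seq, "tablet out",
            SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
            SND_SEQ_PORT_TYPE_MIDI_GENERIC);

        /* 3. Translate absolute-axis events into controller changes. */
        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == sizeof ev) {
            if (ev.type != EV_ABS)
                continue;
            int cc;   /* CC 20-22 are unassigned in the MIDI spec */
            switch (ev.code) {
            case ABS_X:        cc = 20; break;
            case ABS_Y:        cc = 21; break;
            case ABS_PRESSURE: cc = 22; break;
            default: continue;
            }
            snd_seq_event_t mev;
            snd_seq_ev_clear(&mev);
            snd_seq_ev_set_source(&mev, port);
            snd_seq_ev_set_subs(&mev);
            snd_seq_ev_set_direct(&mev);
            /* crude 7-bit scaling: assumes a ~16-bit axis range; real
             * code should query the range with EVIOCGABS */
            snd_seq_ev_set_controller(&mev, 0, cc, (ev.value >> 9) & 0x7f);
            snd_seq_event_output(seq, &mev);
            snd_seq_drain_output(seq);
        }
        return 0;
    }

Build it with: gcc tablet2midi.c -lasound. Once running, the "tablet2midi" port should show up like any other MIDI input, so Chrome's Web MIDI API can subscribe to it.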

Related

Xilinx - Vivado Project: VGA IO not working

I'm new to Xilinx Vivado. At the moment we just need to look at how Vivado and the SDK work, using the Zybo Zynq-7000 board. I searched on the internet and found a project with VGA IO. The mysterious thing is that I actually got it to work when I was at school, but due to the current situation we are not able to get much help, and I am now alone with it at home.
This is the project.
Firstly, I'd like to ask: what does the console output below tell me?
I generated the bitstream, then exported the hardware including the bitstream, and lastly launched the SDK. In SDK I programmed the FPGA and then ran the project with Launch on Hardware (System Debugger and GDB).
That's how I did it:
Image1
And the configurations:
Image2
And the output I am getting through the console is:
Image3
On to my main problem: I have connected all the required cables to the Zybo board, a USB cable from my laptop to the FPGA and a VGA cable from the FPGA to my monitor. The problem is that I am not getting any output on the monitor. Do I have to enable something so that the VGA output from the FPGA to the monitor works?
This ultimately boils down to standard debugging. I can only give a couple suggestions.
First, confirm that your design is working in simulation; check that your outputs, especially your sync signals, are working as expected.
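When checking those sync signals, it helps to compare against the standard 640x480 @ 60 Hz VGA timing. Here is a small C sanity check (a sketch; the constants are the standard numbers for that one mode, so adjust them if your design targets a different resolution):

    /* Sanity-check table for 640x480 @ ~60 Hz VGA timing.  Compare the
     * counter limits in your RTL against these totals; hsync and vsync
     * are active low in this mode. */
    #include <stdio.h>

    int main(void)
    {
        /* horizontal: visible, front porch, sync pulse, back porch (pixels) */
        int h[4] = { 640, 16, 96, 48 };
        /* vertical: visible, front porch, sync pulse, back porch (lines) */
        int v[4] = { 480, 10, 2, 33 };
        double pixel_clock_hz = 25.175e6;

        int h_total = h[0] + h[1] + h[2] + h[3];   /* expect 800 */
        int v_total = v[0] + v[1] + v[2] + v[3];   /* expect 525 */
        double refresh = pixel_clock_hz / ((double)h_total * v_total);

        printf("h_total=%d v_total=%d refresh=%.2f Hz\n",
               h_total, v_total, refresh);         /* ~59.94 Hz */
        return 0;
    }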
Next confirm that your IO constraints are set up correctly and that you are using the right IO pins on the board.
If those all seem correct, ideally you'd have access to a signal analyzer, but that sounds unlikely under current circumstances. As an alternative, you can look at using an ILA such as ChipScope to probe the signals and monitor them in hardware.
Last, and obviously, make sure all of the cables are connected correctly.
Good luck with the design.

ESP8266 programming without SDK

There are limitations in the ESP SDK libraries (which are not public), for example the length of received packets (112 bytes max) in promiscuous mode.
I tried reaching out to them to get some input and directions, but they seem to be replying with nonsense.
Is it possible to program the chip without the SDK, and thus make my own SDK and forget about their limitations?
The processor core in the esp8266 is an 'Xtensa'. The processor core, or let's just call it the processor, is what we program with C, C++ or assembler. The processor's instruction set is made public by the company (Tensilica, now part of Cadence), and once you have the instruction set you can program it directly, or build a compiler, and have complete freedom with the processor.
The processor core is not the complete product for us end-consumers. Companies like Espressif buy the intellectual-property rights to a processor-core design and build an end product by putting peripherals such as SPI, I2C, UART and, in the esp8266's case, the Wi-Fi transceiver around the processor core.
These peripherals are controlled digitally, and report back to the processor digitally, so the processor can interface with them, but their action can be either digital or analog. For UART, SPI, I2C and so on, Espressif has provided a datasheet that documents all the registers that can be used to control each peripheral. It's something like: write what you want to transfer to memory address X, then set bit Y at memory address Z to begin the transfer. For SPI, for example, there are registers to control speed, polarity, phase and so on for a transfer. Once you know how to control a peripheral at the low level, you can write high-level drivers; Espressif does provide those too, but you can write your own.
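As an illustration of that write-X-then-set-bit-Y-at-Z pattern, here is a hedged sketch in C. The SPI1 (HSPI) base address, register offsets and the SPI_USR start bit below are taken from community register maps for the esp8266, so treat every address and bit position as an assumption to be verified against the real documentation:

    /* Illustrative only: bare-register SPI transfer on the esp8266.
     * All addresses/bits are assumptions from community register maps. */
    #include <stdint.h>

    #define REG(addr)    (*(volatile uint32_t *)(addr))
    #define SPI1_BASE    0x60000100u   /* assumed HSPI base address      */
    #define SPI1_CMD     REG(SPI1_BASE + 0x00)
    #define SPI1_W0      REG(SPI1_BASE + 0x40)
    #define SPI_USR_BIT  (1u << 18)    /* assumed "start transfer" bit   */

    static void spi1_send32(uint32_t word)
    {
        while (SPI1_CMD & SPI_USR_BIT)  /* wait for previous transfer    */
            ;
        SPI1_W0  = word;                /* X: stage the data to send     */
        SPI1_CMD |= SPI_USR_BIT;        /* bit Y at Z: start the transfer */
    }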
For Wi-Fi, Espressif hasn't released how the peripheral can be interfaced with, so we have to depend on the compiled binaries that Espressif ships. You can run 'objdump -t' on 'lib/lib80211.a' to get at least the names of the routines that the Wi-Fi driver provides. You can call these routines from C or assembler code and go a little lower than Espressif intended, but to go any lower would require reverse engineering, manually understanding the low-level code in the compiled driver, and nobody's going to take on that risk and time drain.
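To show what calling one of those dumped symbols looks like, here is a sketch. The name ieee80211_send_mgmt is hypothetical, standing in for whatever 'objdump -t lib/lib80211.a' actually reveals on your SDK version, and its signature is a pure assumption that would have to be recovered from the disassembly:

    /* Hedged sketch: declare a symbol found via objdump yourself and
     * link against the closed library.  Name, arguments and even the
     * calling convention are unverified assumptions. */
    extern int ieee80211_send_mgmt(void *ni, int type, int arg);

    void poke_wifi_internals(void *node)
    {
        /* Calling an undocumented routine: behaviour is unverified. */
        (void)ieee80211_send_mgmt(node, 0x40, 0);
    }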

Access the whole video memory

I'm looking for a way to read the whole video memory that a video card outputs to a display. That also includes hardware-accelerated output, video playback, and output in fullscreen mode (which, I somehow feel, could be handled differently from windowed mode).
In short: I want to be able to capture everything that is going to be represented on a display.
I suppose that IF that's possible, it would be OS-dependent. The targets I'm interested in are Windows, OS X and Linux.
Do you have any hints?
For Windows, I guess you could take CamStudio, strip it down, and use it to record the screen, then do whatever you want with the output. Other than that, you could look into forensic kernel drivers for accessing RAM. It's not exactly as simple as a pointer pointing to the video memory anymore, haha.
Digital Rights Management, a requested feature of Windows, attempts to block your access to blocks of graphics-card frame-buffer memory. Using an open-source driver under Linux would seem to be the only way to access this memory, or, as mentioned earlier, some third-party software that knows some back doors or hacks or ways to locate other programs' frame-buffer space.
Unless, of course, you are trying to capture output from your own program (i.e. you are calling the video/graphics creation functions yourself); in that case there are APIs to manipulate display frames in DirectX and OpenGL, as in the sketch below.
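To illustrate the own-program case with OpenGL: a minimal sketch, assuming a context is current and you have just rendered a frame. glReadPixels copies the back buffer into client memory; it only sees your own context's output, not other windows:

    /* Capture the frame your own program just rendered with OpenGL. */
    #include <GL/gl.h>
    #include <stdlib.h>

    unsigned char *capture_frame(int width, int height)
    {
        unsigned char *pixels = malloc((size_t)width * height * 4);
        if (!pixels)
            return NULL;
        glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows   */
        glReadBuffer(GL_BACK);                 /* the frame just drawn  */
        glReadPixels(0, 0, width, height,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return pixels;  /* note: rows come back bottom-up in GL */
    }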
I think I found some resources that can help with capturing the display memory in Windows:
Fastest method of screen capturing
How to save backbuffer to file in DirectX 10?
http://betterlogic.com/roger/2010/07/fast-screen-capture/

Possible to stream video over 115kbps?

I need some advice from people experienced with streaming video.
I have a task to put together a system that takes video coming from RS-170 (composite) video cameras and displays it on an iPad. The catch is that no wireless (no Wi-Fi, no Bluetooth) is allowed. Only a wired interface.
The physical I/O options on an iPad are apparently extremely limited, but I did manage to come across a company named Redpark that makes an RS232-to-Lightning cable. So my proposed solution is to have the video feeds go into a box with software that digitizes and encodes the video and then sends it over RS232 to the iPad using that cable. The catch here is that the maximum bandwidth on that cable is 115 kbps.
My preliminary testing of this setup on a prototype system has been less than stellar so far. I set up two PCs, each with serial ports, and hooked them together with a null modem. I then set the baud rates of the ports to 115 kbps and attempted to stream a webcam video feed over the serial connection in real time using ffmpeg. The results weren't very encouraging, but I at least managed to get some sort of image to show up.
I guess I need to play around with the ffmpeg encoding options some more. But I need to ask: am I wasting my time with this idea, or should what I am asking here be possible?
For the SDA LQ ("low quality") standard we encode H.264 MP4 (using x264) with a 128 kbps video track. The hardware decoding on the iPad can play it. It is at most 320x240, 30 fps video. The quality depends heavily on the material: for mostly non-moving material it is watchable; if there is a lot of movement or lighting changes, you may not be able to make out much. You can check out some examples at the link. It is video-game footage, but some of it may be comparable to your application.
Without knowing more about your requirements (resolution, framerate, type of material), it is difficult to say more. However, given the right material, it is definitely possible to do it and have it be watchable (for some definitions of watchable).
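As a rough link-budget sanity check (a back-of-the-envelope sketch, assuming the cable runs as a 115200-baud serial link with standard 8N1 framing, i.e. 10 bits on the wire per payload byte):

    115200 baud / 10 bits per byte = 11520 bytes/s, or about 92 kbps of usable payload

So a 128 kbps track like the LQ preset above is already over budget; everything (video, container, any protocol overhead) would have to fit in roughly 80-90 kbps, which pushes you below the 320x240 / 30 fps figures.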

Reading from a text file in Dynamic C language

I'm using a Rabbit microcontroller. It uses the Dynamic C language.
How can I read a file from my PC and manipulate it or print it to the screen?
Standard C or C++ methods are not working here.
If you read the Rabbit manual, you can see that the file-system function calls within the Dynamic C language refer to files that are stored on local flash devices connected to the processor chip.
FAT version 1.02 supports SPI-based serial flash devices. FAT versions 2.01 and 2.05 also support SPI-based serial flash devices and require Dynamic C 9.01 or later. FAT version 2.05 introduces support for NAND flash devices. FAT version 2.10 extends μC/OS-II compatibility to make the FAT API reentrant from multiple tasks. FAT version 2.13 adds support for SD cards and requires Dynamic C 10.21 or later. In all versions of the FAT, a battery-backed write-back cache reduces wear on the flash device and a round-robin cluster assignment helps spread the wear over its surface.
There is no way the Rabbit can read or access a file on your PC directly. You must first provide a transfer mechanism to pass the file over from the PC to the flash storage device that you have designed into your hardware platform, and use the file-write function calls to store this data in the Rabbit file system. This would normally be done by transferring the data over a serial link using some protocol of your choice or invention, as in the sketch below.
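A minimal sketch of the Rabbit side of such a transfer, written against the Dynamic C serial and FAT libraries. The port choice (serial port A), baud rate, file name and the EOT end-of-transfer marker are all arbitrary assumptions of this example, and the library name and FAT calls should be checked against the FAT module manual for your Dynamic C version:

    #use "fat.lib"            // FAT module; exact lib name per your DC version

    #define AINBUFSIZE  255   // serial port A receive buffer size
    #define AOUTBUFSIZE 255

    void main()
    {
       FATfile file;
       int c;
       char ch;

       serAopen(115200L);                    // PC link at 115200 baud
       fat_AutoMount(FDDF_MOUNT_PART_ALL);   // mount the flash partition(s)
       fat_Open(fat_part_mounted[0], "PCFILE.TXT", FAT_FILE,
                FAT_CREATE, &file, NULL);

       for (;;) {
          c = serAgetc();                    // -1 when the buffer is empty
          if (c == -1)
             continue;
          if (c == 0x04)                     // EOT marks end-of-transfer here
             break;
          ch = (char)c;
          fat_Write(&file, &ch, 1);          // append the byte to the file
       }
       fat_Close(&file);
       serAclose();
    }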
Next, you seem to want to display some data on the screen. I assume that by this you mean the PC screen (although you could have a local screen connected to the Rabbit). Again, the Rabbit has no direct method of accessing that screen. You will have to write a PC application that takes data messages from the Rabbit, possibly over the serial interface (other interfaces may be available), and interprets them as instructions to display some text or formatting on the PC screen.
