How to capture TV tuner/webcam output - device driver

We are developing a tool for live video streaming in which we want to broadcast the output of a TV tuner over a network. Our project is at a nascent stage. Our main concern at the moment is how to capture the output of a TV tuner card. Please guide us on this, or provide a pointer where I can find details about the topic.
All help will be highly appreciated.
Thanks
Mawia
PS: The project is meant to be cross-platform, but at the moment guidance on even a single platform will be enough!

With Linux, Ubuntu specifically, a webcam's audio and video show up as device nodes /dev/audioX and /dev/videoY, where X and Y are numbers. It can be a bit tricky to find out which nodes the system has assigned to the device if more than one is present, but a quick 'ls /dev | grep audio' and 'ls /dev | grep video' should help narrow the search.
As for TV tuners I am unsure, though it seems likely that they would also appear under /dev.
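If the tuner exposes a Video4Linux2 node (most do), you can talk to it directly. Here's a minimal sketch in C that probes a device and reports whether it has a tuner; /dev/video0 is an assumption, so adjust the path to whatever node your card actually got:

    /* Probe a V4L2 device and print its capabilities. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);  /* assumed node; adjust */
        if (fd < 0) { perror("open"); return 1; }

        struct v4l2_capability cap;
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
            perror("VIDIOC_QUERYCAP");
            close(fd);
            return 1;
        }
        printf("driver: %s, card: %s\n", (char *)cap.driver, (char *)cap.card);
        if (cap.capabilities & V4L2_CAP_TUNER)
            printf("this device has a TV tuner\n");
        close(fd);
        return 0;
    }

From there, the usual V4L2 flow is to negotiate a format (VIDIOC_S_FMT), queue buffers, and read frames; the V4L2 API documentation covers the full capture loop.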

Related

Xilinx - Vivado Project: VGA IO not working

I'm new to Xilinx Vivado. At the moment we just need to look at how Vivado and the SDK work, using the Zybo Zynq-7000 board. I searched on the internet and found a project with VGA I/O. The mysterious thing is that I actually got it to work when I was at school, but due to the current situation we are not able to get much help, and I am now alone with it at home.
This is the project.
Firstly, I'd like to ask: what does the console output below tell me?
I generated the bitstream, exported the hardware including the bitstream, and finally launched the SDK. In the SDK I programmed the FPGA and then ran the project via Launch on Hardware (System Debugger and GDB).
That's how I did it:
Image1
And the configurations:
Image2
And the output I am getting through the console is:
Image3
To my main problem: I have connected all the required cables to the Zybo board, a USB cable from my laptop to the FPGA and a VGA cable from the FPGA to my monitor. The problem is that I am not getting any output on the monitor. Do I have to enable something so that the VGA path from the FPGA to the monitor works?
This ultimately boils down to standard debugging. I can only give a couple of suggestions.
First, confirm that your design is working in simulation; check that your outputs, especially your sync signals, are working as expected.
Next confirm that your IO constraints are set up correctly and that you are using the right IO pins on the board.
If those all seem correct, ideally you'd have access to a signal analyzer, but that sounds unlikely under the current circumstances. As an alternative, you can use an ILA such as ChipScope to probe the signals and monitor them in hardware.
Last, and obviously, make sure all of the cables are connected correctly.
Good luck with the design.
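P.S. On the simulation check above: if your design targets the standard 640x480@60 Hz VGA mode (an assumption; check your project's timing parameters), the expected numbers are easy to compute and compare against your simulated sync pulses. A quick C sketch:

    /* Expected 640x480@60 Hz VGA timings (standard 25.175 MHz pixel clock). */
    #include <stdio.h>

    int main(void)
    {
        const double pixclk = 25.175e6;          /* pixel clock in Hz */
        const int h[4] = {640, 16, 96, 48};      /* visible, front porch, sync, back porch */
        const int v[4] = {480, 10, 2, 33};       /* same, in lines */
        int htotal = h[0] + h[1] + h[2] + h[3];  /* 800 pixels per line */
        int vtotal = v[0] + v[1] + v[2] + v[3];  /* 525 lines per frame */

        printf("hsync pulse: %.2f us\n", h[2] / pixclk * 1e6);
        printf("line time  : %.2f us\n", htotal / pixclk * 1e6);
        printf("frame rate : %.2f Hz\n", pixclk / (htotal * vtotal));
        return 0;
    }

If the pulse widths or frame rate in simulation are off from these, the monitor will usually refuse to sync, which shows up as exactly the blank screen you describe.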

Reducing bandwidth usage of VLC audio playback from an SMB share

I'm looking for a way to reduce a Java-based media player's network bandwidth usage. During my research I found out that, with the transcode options, quality can be traded for lower data rates on streams. In my case the audio source is on a Samba network share (file-based, WAV only), and I'm not sure whether the transcode settings apply to it.
The source of my problem is that our customer's work site has only a 50 Mbit connection to their datacenter, and many clients (~10) have to be able to play back these audio files simultaneously. There is no QoS, as far as I know, and the network is used for other purposes too. Caching is not an option (it's a long story that I can't tell).
I would be really grateful if someone could clarify this for me. Can I lower the bandwidth requirements in this scenario by lowering the quality via transcoding?
I'm open to other suggestions too, if you have an idea.
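For scale, here is my rough back-of-the-envelope math as a C sketch (CD-quality WAV assumed, i.e. 44.1 kHz, 16-bit stereo; our actual files may differ):

    /* Raw WAV vs. a hypothetical 128 kbps transcode, for ~10 clients. */
    #include <stdio.h>

    int main(void)
    {
        const double wav_kbps = 44100 * 16 * 2 / 1000.0;  /* ~1411 kbps per stream */
        const double enc_kbps = 128.0;                    /* assumed transcode target */
        const int clients = 10;

        printf("WAV       : %.0f kbps x %d clients = %.1f Mbps\n",
               wav_kbps, clients, wav_kbps * clients / 1000);
        printf("transcoded: %.0f kbps x %d clients = %.1f Mbps\n",
               enc_kbps, clients, enc_kbps * clients / 1000);
        return 0;
    }

So raw WAV for ten clients would eat roughly 14 Mbps of the 50 Mbit link, while a 128 kbps transcode would need only about 1.3 Mbps total; my open question is whether the transcoding happens before or after the data crosses the network, since the player reads the file over SMB.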

CPU load of streaming vs file downloading when routing data

I'm using a Raspberry Pi 2 to route wifi-to-ethernet connections, so on the eth side I have a computer that connects to the internet through the Pi's wifi. On the Raspberry Pi I started htop to monitor the CPU load, then on the computer I opened Chrome and played a 20-minute 1080p video. The CPU load never seemed to go beyond 5%. After that I closed the YouTube tab and started downloading a 5 GB binary file from the first row here (https://testdebit.info/). There, I noticed the CPU load was much higher, around 10%!
Any explanation for such a difference?
It has to do with compression and how video is encoded. A normal file can be compressed, but nothing like a video stream.
A video stream can achieve very high compression due to the predictable characteristics of video; e.g., the picture usually changes little from one frame to the next. As such, video will send a whole frame (an I-frame) and then update it with just the changes (P-frames). It's even possible to do backward prediction (B-frames). Here's a Wikipedia reference.
Yes, I hear your next unspoken question: doesn't more compression mean more CPU time to decompress? That's true for a lot of compression types, such as the one used by zip files. But since raw video is not very information-dense over time, video compression techniques can, in essence, greatly reduce the amount of data sent with very little CPU usage.
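To make the I-frame/P-frame idea concrete, here is a toy sketch (real codecs such as H.264 are far more sophisticated, but the principle is the same): store one full frame, then encode only the pixels that changed.

    /* Toy illustration of I-frame vs. P-frame: full picture once, deltas after. */
    #include <stdio.h>

    #define W 8
    #define H 8

    int main(void)
    {
        unsigned char prev[H][W] = {0};  /* the "I-frame": a complete picture */
        unsigned char next[H][W] = {0};
        next[3][4] = 200;                /* one pixel changes between frames */

        int changed = 0;
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                if (next[y][x] != prev[y][x])
                    changed++;           /* a "P-frame" only needs these */

        printf("full frame: %d pixels, delta: %d pixel(s)\n", W * H, changed);
        return 0;
    }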
I hope this helps.

Access the whole video memory

I'm looking for a way to read the whole video memory that a video card outputs to a display. That also includes hardware-accelerated output, video playback, and output in fullscreen mode (which, I somehow feel, could be different from windowed mode).
In short: I want to be able to capture everything that is going to be represented on a display.
I suppose that IF that's possible, it would be OS-dependent. The targets I'm interested in are Windows, OS X, and Linux.
Do you have any hint?
For Windows, I guess you could take CamStudio, strip it down, and use it to record the screen, then do whatever you want with the output; other than that, you could look into forensic kernel drivers for accessing RAM. It's not exactly as simple as a pointer into video memory anymore, haha.
Digital Rights Management, a requested feature of Windows, attempts to block your access to blocks of graphics-card frame buffer memory. Using an open-source driver under Linux would seem to be the only way to access this memory, or, as mentioned earlier, some third-party software that knows some back doors or hacks or ways to locate other programs' frame buffer space.
Unless, of course, you are trying to capture output from your own program (i.e., you are calling the video/graphics creation functions yourself); in that case there are APIs to manipulate display frames in DirectX and OpenGL.
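For the capture-your-own-desktop case on Windows, one classic route is a plain GDI screen grab. A minimal sketch follows; note that GDI capture can miss hardware-accelerated or DRM-protected surfaces, which is exactly the limitation described above:

    /* Minimal GDI desktop grab. Build (MSVC): cl grab.c user32.lib gdi32.lib */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        int w = GetSystemMetrics(SM_CXSCREEN);
        int h = GetSystemMetrics(SM_CYSCREEN);

        HDC screen = GetDC(NULL);                  /* DC for the whole desktop */
        HDC mem = CreateCompatibleDC(screen);
        HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
        HGDIOBJ old = SelectObject(mem, bmp);

        BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);  /* copy visible desktop */
        printf("captured %dx%d desktop into an HBITMAP\n", w, h);

        SelectObject(mem, old);
        DeleteObject(bmp);
        DeleteDC(mem);
        ReleaseDC(NULL, screen);
        return 0;
    }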
I think I found some resources that can help with capturing the display memory on Windows:
Fastest method of screen capturing
How to save backbuffer to file in DirectX 10?
http://betterlogic.com/roger/2010/07/fast-screen-capture/

Possible to stream video over 115kbps?

I need some advice from people experienced with streaming video.
I have been tasked with putting together a system that takes video from RS-170 (composite) video cameras and displays it on an iPad. The catch is that no wireless (no Wi-Fi, no Bluetooth) is allowed; only a wired interface.
The physical I/O options on an iPad are apparently extremely limited, but I did manage to come across a company named Redpark that makes an RS232-to-Lightning cable. So my proposed solution is to have the video feeds go into a box with software that digitizes and encodes the video, and then sends it over RS232 to the iPad using that cable. The catch here is that the maximum bandwidth on that cable is 115 kbps.
My preliminary testing of this setup on a prototype system has been less than stellar so far. I set up two PCs, each with serial ports, and hooked them together with a null modem. I then set the baud rates of the ports to 115 kbps and attempted to stream a webcam video feed over the serial connection in real time using ffmpeg. The results weren't very encouraging, but I did at least manage to get some sort of image to show up.
I guess I need to play around with the ffmpeg encoding options some more. But I need to ask: am I wasting my time with this idea, or is what I'm attempting actually possible?
For the SDA LQ standard ("low quality") we encode H.264 MP4 (using x264) with a 128 kbps video track. The hardware decoder on the iPad can play it. It is at most 320x240, 30 fps video. The quality depends heavily on the material: for mostly static material it is watchable; if there is a lot of movement or lighting changes, you may not be able to make out much. You can check out some examples at the link. It's video-game footage, but some of it may be comparable to your application.
Without knowing more about your requirements (resolution, frame rate, type of material), it is difficult to say more. However, given the right material, it is definitely possible to do this and have it be watchable (for some definitions of watchable).
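One more thing worth checking in your math: 115200 baud is not 115 kbps of payload. With the usual 8N1 framing, each byte costs 10 bits on the wire, so the usable rate is closer to 92 kbps. A quick sketch of the per-frame budget (the frame rate is an assumption to play with):

    /* Usable payload over a 115200-baud link with 8N1 framing. */
    #include <stdio.h>

    int main(void)
    {
        const double baud = 115200.0;
        const double payload_bps = baud / 10.0 * 8.0;  /* 1 start + 8 data + 1 stop */
        const double fps = 10.0;                       /* assumed target frame rate */

        printf("payload: %.1f kbps\n", payload_bps / 1000);
        printf("budget : %.0f bits (~%.1f KB) per frame at %.0f fps\n",
               payload_bps / fps, payload_bps / fps / 8 / 1024, fps);
        return 0;
    }

At 10 fps that leaves on the order of a kilobyte per frame, which is why only very small, low-motion pictures survive the trip.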
