I'm designing a video capture card using a PCIe interface. The device will use FPGA IP from Xilinx, and there is a PCIe reference driver from Xilinx. Since the device is a video capture card, there should be an application to display the captured video.
I am assuming I will use either VLC or MPlayer to display the video data coming from the PCIe driver.
I should use media frameworks such as V4L2 and ALSA for video and audio respectively.
I will receive raw video and audio data from the FPGA through the PCI driver. The video will be in either planar YVU 4:2:2 or YVU 4:2:0 format.
From the above information, I understood that the driver should be a media (V4L2) driver exposing a /dev/video node, not a plain PCIe driver.
I have a few questions regarding this.
1) How do I make the driver compatible with the VLC/MPlayer applications?
2) What is the interface between VLC and a V4L2 driver, and which ioctls should I use for set and get?
(Suppose I want to set the resolution from the VLC app on the FPGA device using the V4L2 driver; see the sketch below.)
3) In what form do VLC/MPlayer accept input video data? Do I need to add any header (metadata) information to the raw video data or not? And do VLC/MPlayer accept planar or packed YUV?
4) As of now I'm assuming that ALSA will handle the audio part, but how and when is the audio driver invoked? And how do I maintain sync between audio and video?
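For question 2, a minimal sketch of the set/get calls involved (the /dev/video0 node name and the 1280x720 resolution are placeholder assumptions; VIDIOC_S_FMT / VIDIOC_G_FMT are the standard V4L2 ioctls a player such as VLC issues for format negotiation):

    /* Minimal V4L2 format negotiation: set a resolution, read back what the driver chose. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);          /* hypothetical device node */
        if (fd < 0) { perror("open"); return 1; }

        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width       = 1280;                /* example resolution */
        fmt.fmt.pix.height      = 720;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YVU420; /* planar YVU 4:2:0 */
        fmt.fmt.pix.field       = V4L2_FIELD_NONE;

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)         /* set format */
            perror("VIDIOC_S_FMT");

        if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0)        /* read back the accepted format */
            printf("driver accepted %ux%u\n", fmt.fmt.pix.width, fmt.fmt.pix.height);

        close(fd);
        return 0;
    }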
Regards,
Kulkarni.
Related
I would like to stream drone video from the controller (DJI Mavic Air 2 with an RC-N1 controller) via RTMP directly from the controller/my phone to my PC while I am out in the field, without any internet connection or an extra network. Is that somehow possible?
First, you still need a network. You can create a hotspot using a computer (laptop) or a device that connects to the RC controller (phone, tablet).
Secondly, you need an RTMP server located on this network. As a quick example, I can recommend MonaServer2; it is easy to install and run.
Thirdly, you need someone to "listen" to the server when the stream arrives. For example, you can use the VLC player: launch it and specify the RTMP stream as a source.
So, you start the RTMP server; let's say it is located at 192.168.0.1:1935. On your device connected to the RC controller, using the standard DJI Fly application or your own app developed with the Mobile SDK, select the option to start streaming and specify the address rtmp://192.168.0.1:1935/live. Next, launch VLC, select File -> Open Network Stream, and type rtmp://192.168.0.1:1935/live in the URL field. Now you will be able to watch the live stream in your VLC window.
This is the fastest and easiest way I can recommend.
Also, you can take raw H.264 frames from the camera, send them to a decoder, and do whatever you want with them. If you need more info about it, feel free to ask. Hope it helps!
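If you go the raw-frames route, here is a minimal decode loop using FFmpeg's libavcodec (my choice for illustration; the answer above doesn't name a particular decoder). It assumes the input is a raw Annex-B H.264 byte stream dumped to a file:

    /* Feed a raw Annex-B H.264 byte stream through libavcodec.
       Build: gcc h264dec.c -lavcodec -lavutil */
    #include <libavcodec/avcodec.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file.h264\n", argv[0]); return 1; }

        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        AVCodecContext *ctx  = avcodec_alloc_context3(codec);
        AVCodecParserContext *parser = av_parser_init(codec->id);
        avcodec_open2(ctx, codec, NULL);

        AVPacket *pkt  = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        FILE *in = fopen(argv[1], "rb");
        uint8_t buf[4096];
        size_t n;

        while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
            uint8_t *p = buf;
            while (n > 0) {
                /* The parser splits the byte stream into whole NAL-unit packets */
                int used = av_parser_parse2(parser, ctx, &pkt->data, &pkt->size,
                                            p, (int)n, AV_NOPTS_VALUE,
                                            AV_NOPTS_VALUE, 0);
                p += used;
                n -= (size_t)used;
                if (pkt->size && avcodec_send_packet(ctx, pkt) == 0)
                    while (avcodec_receive_frame(ctx, frame) == 0)
                        printf("decoded %dx%d frame\n", frame->width, frame->height);
            }
        }

        fclose(in);
        av_frame_free(&frame);
        av_packet_free(&pkt);
        av_parser_close(parser);
        avcodec_free_context(&ctx);
        return 0;
    }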
I want to detect and send/receive data from a smartphone in some vicinity without using the internet.
I've always thought it would be fun to do this with audio. Most modern ways of modulating a signal (like OFDM) will sound like a white noise hiss over audio, and you should be able to get a few KB/s in a normal room environment if the phones are close to each other.
It also has the benefit that the user can always tell when it's transmitting.
Multiple methods are possible.
You could use a private (isolated) local area network that is not connected to the internet. Either ethernet cabled or over WiFi.
AirDrop might not require an internet connection (a WAN-connected access point).
Bluetooth LE (BLE) communication doesn't require an internet connection. You could use an ESP32 or Raspberry Pi to read sensor data and have a mobile device connect over BLE to the ESP32 or Pi (or another mobile device).
You could use audio. Play FSK tones or Morse code on one device and receive and decode the audio modulations on another device. (I've tried both of these methods successfully; see the FSK sketch after this list.) Or you could use a speech synthesizer on one device and a voice transcription app on another.
You could use light. Flash the flashlight (or LED) on one device, and receive and decode the light pulse sequences using the video camera on another device. (There may be apps in the App Store that can do this.) Or display a bar code or QR code on one device and use the camera on another to decode the data in the bar code or QR code.
You could use MIDI: Bluetooth MIDI over BLE from device to device, or with MIDI cables, using a chain of Lightning to USB and USB to MIDI adapters.
You might be able to use vibrations from the Taptic engine on one device, and detect the vibration sequences using the motion sensor API on another device.
With many Android devices, you can connect a USB to serial port dongle, and use a long RS232 serial cable between devices.
With an iPhone, you could use a Lightning to Ethernet adapter, plus a fiber optic media converter, and send signals over several kilometers of (private) fiber optic cabling. etc.
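As a concrete illustration of the FSK-over-audio item above, here is a small tone generator. The 1200/2200 Hz tones and the 50 ms bit time are arbitrary choices of mine, picked to be easy to decode in a quiet room:

    /* Encode a string as audible FSK, one tone per bit, with continuous phase.
       Build: gcc fsk.c -lm    Play: aplay -f S16_LE -r 44100 fsk.raw */
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    #define RATE   44100
    #define BIT_MS 50        /* 50 ms per bit -> 20 bit/s */
    #define F_ZERO 1200.0    /* tone for a 0 bit (arbitrary) */
    #define F_ONE  2200.0    /* tone for a 1 bit (arbitrary) */

    int main(void)
    {
        const char *msg = "HELLO";
        FILE *out = fopen("fsk.raw", "wb");    /* raw 16-bit mono PCM */
        double phase = 0.0;

        for (const char *c = msg; *c; c++) {
            for (int bit = 7; bit >= 0; bit--) {
                double freq = ((*c >> bit) & 1) ? F_ONE : F_ZERO;
                for (int i = 0; i < RATE * BIT_MS / 1000; i++) {
                    phase += 2.0 * M_PI * freq / RATE;    /* continuous phase across bits */
                    int16_t sample = (int16_t)(30000.0 * sin(phase));
                    fwrite(&sample, sizeof(sample), 1, out);
                }
            }
        }
        fclose(out);
        return 0;
    }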
You might want to use the IR sensor on your phone via an IR sensor library (search for one in a search engine). If the phone does not have an IR sensor, you can use a QR code generator library to transfer your data.
You could use a Raspberry Pi (for example) to take readings from your sensor and store them. Make it run a web server and create its own Wi-Fi network (not connected to the internet) where you can access a web page that displays the readings. Or you can set it up so that the Pi logs into the Wi-Fi hotspot from your phone whenever available and then uploads the data or sends it in an email or whatever.
You can use a cellular module, for example the FONA 800 or 808 by Adafruit, to let your Pi talk to the internet via a SIM card from hologram.io, for example. The Pi can talk to the FONA in Python. But to be honest, that doesn't really answer your question with the proximity thing. If I were you, though, I would drop that requirement and do the following:
Read the data from the sensor and save it to a CSV file on the Pi (see the sketch below)
Once every hour (or whatever), connect to the internet via FONA/hologram.io SIM
Insert the data from the previous hour into a remote MySQL database
Use PHP or something to display the data from the database nicely, and access it via your phone
That way, you can have as many sensors as you want and access them all from your phone. As I said, the proximity thing is not relevant for me; it's easier, IMHO, to go through cellular (plus I wouldn't know how to do it over, let's say, Bluetooth).
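For what it's worth, the first step of that pipeline is just appending timestamped readings to a file. A sketch (read_sensor() is a hypothetical stub for whatever sensor interface you actually have, and the file path is an assumption):

    /* Append one timestamped sensor reading per line to a CSV file. */
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stub: replace with your real sensor read. */
    static double read_sensor(void) { return 23.5; }

    int main(void)
    {
        FILE *log = fopen("/home/pi/readings.csv", "a");   /* assumed path */
        if (!log) { perror("fopen"); return 1; }

        time_t now = time(NULL);
        char stamp[32];
        strftime(stamp, sizeof(stamp), "%Y-%m-%d %H:%M:%S", localtime(&now));

        fprintf(log, "%s,%.2f\n", stamp, read_sensor());   /* CSV row: timestamp,value */
        fclose(log);
        return 0;    /* run from cron, e.g. once a minute */
    }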
I need to stream a video via VLC from my computer, but I need to convert it from Ethernet to HDMI for further distribution. I have this product.
So, are there any methods to convert IP to HDMI while keeping the video stream compatible with that extender?
The product you reference, which is essentially a wireless HDMI link, takes a standard HDMI input.
You should only need to connect your HDMI output on your computer to the input on this device to have the video play on the screen connected to another instance of the product at the other end of the wireless HDMI connection.
Sorry if this question is obvious or a duplicate. My 30 minutes of research led me nowhere.
We have an iPhone app that live streams video from the device to our remote Wowza servers.
We're looking to integrate the Swivl (a motion-tracking tripod) into our product. It uses a wireless microphone that feeds into the 30-pin port of the iPhone. Swivl's SDK doesn't include anything about capturing audio from their hardware, so I assume that is handled by the iPhone itself.
If I use AVAudioRecorder, will it automatically route the audio input from the 30-pin port instead of the default microphone, or do I have to explicitly define the audio source?
Any clues help.
After a few tests, it seems that iOS automatically routes incoming audio signals.
There is no need to explicitly specify the source of the audio.
Straight from AVAudioRecorder documentation:
In iOS, the audio being recorded comes from the device connected by the user—built-in microphone or headset microphone, for example. In OS X, the audio comes from the system’s default audio input device as set by a user in System Preferences.
Let us say I have an audio iPhone app which takes input from the microphone.
Now, although I haven't tried this myself, I believe the user could use an external microphone that plugs into the headphone jack.
This means my audio unit could be receiving its input from the internal or the external microphone.
My guess is that iOS will automatically route from an external microphone if it is connected.
But what if I don't want that?
Is there a way to specify which microphone should be used?
I have looked in the Audio Session guide; I can find some settings regarding a Bluetooth headset, but that is as close as I can get. It appears that it is not possible, but I find that difficult to believe.
PS: I am also curious how it detects an external microphone. If I plug my headphones in, it should continue routing from the internal microphone (my headphones are just plain stereo headphones). But if I used my mobile phone's headset (with an extra band on the jack plug and a microphone built into the cable where the individual earpiece strands meet), I would expect it to pick up that source instead.
You have to use the AUHAL unit to set a specific input device as the default input, and then connect it with the AudioQueue.
Apple has a detailed Technical Note for that: Device input using the HAL Output Audio Unit
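The Technical Note targets OS X's HAL. A condensed sketch of the device-selection part (error checking omitted; the default-input query is just one way to obtain an AudioDeviceID, so substitute the ID of the microphone you actually want):

    /* Open the AUHAL, enable input, and bind it to a chosen input device.
       Build: clang auhal.c -framework AudioToolbox -framework CoreAudio */
    #include <AudioToolbox/AudioToolbox.h>
    #include <CoreAudio/CoreAudio.h>

    int main(void)
    {
        /* Find and open the AUHAL (HAL output) unit */
        AudioComponentDescription desc = {
            .componentType         = kAudioUnitType_Output,
            .componentSubType      = kAudioUnitSubType_HALOutput,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioUnit auhal;
        AudioComponentInstanceNew(comp, &auhal);

        /* Enable input on element 1, disable output on element 0 */
        UInt32 on = 1, off = 0;
        AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &on, sizeof(on));
        AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Output, 0, &off, sizeof(off));

        /* Query the system default input device as an example AudioDeviceID */
        AudioDeviceID device = kAudioObjectUnknown;
        UInt32 size = sizeof(device);
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDefaultInputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster,
        };
        AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                   0, NULL, &size, &device);

        /* Bind the chosen device to the AUHAL */
        AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_CurrentDevice,
                             kAudioUnitScope_Global, 0, &device, sizeof(device));

        AudioUnitInitialize(auhal);
        AudioOutputUnitStart(auhal);
        /* ...install an input callback and pull samples from here on... */
        return 0;
    }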