I have a stream of NumPy arrays as input, which I convert to JPEGs using OpenCV. I then stream the JPEGs to the browser with the multipart/x-mixed-replace; boundary=frame mimetype. Everything runs on the local machine.
I want to replace the JPEG stream with an RTSP stream. Which libs/tools/tech stack should I use?
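OpenCV itself cannot serve RTSP, so one common stack (a sketch, not the only option) is to run a separate RTSP server such as MediaMTX and push to it by piping raw BGR frames into an ffmpeg subprocess. The server URL, port, and frame geometry below are assumptions:

```python
import subprocess

def build_rtsp_push_cmd(width, height, fps, rtsp_url):
    """ffmpeg command that reads raw BGR frames from stdin and pushes RTSP."""
    return [
        "ffmpeg",
        "-f", "rawvideo",             # raw frames, no container
        "-pix_fmt", "bgr24",          # OpenCV's default channel order
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",                    # read the frames from stdin
        "-c:v", "libx264",
        "-preset", "ultrafast",
        "-tune", "zerolatency",
        "-f", "rtsp",
        rtsp_url,
    ]

cmd = build_rtsp_push_cmd(640, 480, 30, "rtsp://127.0.0.1:8554/cam")
# With an RTSP server listening, you would start the pipe like this:
#   proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
# and then write each numpy frame with proc.stdin.write(frame.tobytes())
```

The browser side then plays the RTSP stream via whatever client you choose; note that browsers do not play RTSP natively, so a WebRTC or HLS gateway (MediaMTX provides both) is usually part of this stack.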
I'd like to read video from an RTSP stream using OpenCV and immediately save the raw video to a file without decoding. We need high performance and might be processing as many as 500 cameras on one machine, so even though each stream is typically small (around 2% CPU), the roughly 10x extra CPU needed to decode every frame adds up. When I run the equivalent ffmpeg command line, it uses so little CPU that it shows 0.0%.
The recommendation I've seen before is to just use ffmpeg; the reasons I'd like OpenCV are:
A) We might need to do some image analysis in the future. It wouldn't be on all frames, just a small sample of them, so all frames still need to be saved to file without decoding, but some will separately be decoded.
B) It seems simpler to implement than ffmpeg (which in the future would need to pass select frames to OpenCV).
Edit: I've tried using VideoWriter::fourcc('X', '2', '6', '4'), and -1, in order to try to skip encoding. It seems to go ahead and re-encode anyway, even though the stream is already in that format.
Thoughts? Thanks!
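For the no-decode recording itself, ffmpeg's stream copy (-c copy) is the usual answer: it demuxes and remuxes packets without touching the codec, which is why the CPU usage stays near zero. As far as I know, OpenCV's VideoCapture/VideoWriter pair has no such pass-through mode (it always decodes to frames), which would explain why the fourcc tricks didn't help. A minimal sketch driving stream copy from Python; the camera URL and output name are placeholders:

```python
def build_copy_record_cmd(rtsp_url, outfile):
    """Record an RTSP stream to file without decoding (stream copy)."""
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",   # often more robust than UDP delivery
        "-i", rtsp_url,
        "-c", "copy",               # no decode, no re-encode
        "-an",                      # drop audio; remove this to keep it
        outfile,
    ]

cmd = build_copy_record_cmd("rtsp://camera.local/stream", "cam01.mkv")
# subprocess.Popen(cmd) would start one recorder; for 500 cameras you
# would launch one such process per camera and, separately, open only
# the sampled streams in OpenCV for analysis.
```

This keeps the analysis concern separate: the recorder never decodes, and the occasional frames you want to analyse can be decoded from the saved files or from a second, sampled connection.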
I have a video source which gives me a raw H.264 stream. I need to re-stream this live input so that it is cross-compatible and playable without any plugin. I've tried using ffmpeg + ffserver to produce a fragmented MP4, but unfortunately my iPhone won't play it.
Is there a way to make it (raw H.264 in an MP4 container) playable in iOS's Safari, or perhaps another cross-platform container?
PS: I'm using a Raspberry Pi 3 to host the ffmpeg processes, so I'm avoiding re-encoding tasks; instead I'm just trying to fit my raw H.264 into an iOS-compatible container and make it accessible through a media server.
For live streams you must use HTTP Live Streaming (HLS), with either traditional MPEG-TS segments or fMP4 segments on newer iOS versions (see Apple's HLS examples).
With FFmpeg you can use the hls muxer; its hls_segment_type option chooses between mpegts and fmp4.
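A command along these lines (a sketch; input name, output directory, and segment timings are placeholders) remuxes the existing H.264 into an fMP4 HLS playlist without re-encoding, which matters on a Raspberry Pi:

```python
def build_hls_cmd(input_url, out_dir):
    """Remux an H.264 input into fMP4 HLS segments, no re-encode."""
    return [
        "ffmpeg",
        "-i", input_url,
        "-c:v", "copy",                 # keep the raw H.264 as-is
        "-f", "hls",
        "-hls_segment_type", "fmp4",    # or "mpegts" for older iOS
        "-hls_time", "2",               # target segment length (seconds)
        "-hls_list_size", "5",          # rolling playlist for live use
        f"{out_dir}/stream.m3u8",
    ]

cmd = build_hls_cmd("input.h264", "/var/www/hls")
```

Serve the resulting .m3u8 and segments over plain HTTP and Safari on iOS will play the playlist URL natively.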
I am developing a live-streaming application where a mobile app (iOS/Android) records video and audio, then encodes the raw pixels to H.264 and AAC using VideoToolbox and AudioToolbox. These encoded streams are converted to PES (Packetized Elementary Stream) separately for video and audio. Now we are stuck on what to transfer to the server, PES or MPEG-TS: which one gives the minimum latency and packet loss, as in Periscope, Meerkat, Qik, UStream and other live-streaming applications?
For transmission, which network protocol is best suited: TCP or UDP?
And what is required at the server to receive these packets? I know FFmpeg will transcode and generate the segmented files (.ts) and the .m3u8 file for HLS streaming, but do we need any pipe before the FFmpeg layer?
Please give me some ideas in terms of which is best and what are pros and cons of each.
Thanks
Shiva.
I am writing a client-server application which does real-time video transmission from an Android phone to a server. The video captured from the phone camera is encoded using Android's built-in H.264 encoder and transmitted via a UDP socket. The frames are not RTP-encapsulated; I need to reduce the overhead and hence the delay.
On the receiver, I need to decode the incoming encoded frames. The data sent on the UDP socket contains not only the encoded frame but also some other frame-related information as part of its header. Each frame is encoded as a NAL unit.
I am able to retrieve the frames from the received packets as a byte array. I can save this byte array as a raw H.264 file and play it back using VLC, and everything works fine.
However, I need to do some processing on these frames and hence need to use them with OpenCV.
Can anyone help me with decoding a raw H.264 byte array in OpenCV?
Can ffmpeg be used for this?
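Whatever decoder you hand the data to (FFmpeg directly, or OpenCV's FFmpeg backend), a useful first step is locating the Annex-B start codes that delimit NAL units in the received byte arrays. A minimal sketch in Python, assuming 3- or 4-byte start codes (which Android's encoder emits); the trailing-zero heuristic is a simplification:

```python
def split_nal_units(data: bytes):
    """Split an Annex-B H.264 byte stream into NAL units (start codes stripped)."""
    units, start, i = [], None, 0
    while i < len(data) - 2:
        if data[i:i + 3] == b"\x00\x00\x01":
            if start is not None:
                # back up one byte if this was a 4-byte 00 00 00 01 code
                end = i - 1 if data[i - 1:i] == b"\x00" else i
                units.append(data[start:end])
            i += 3
            start = i
        else:
            i += 1
    if start is not None:
        units.append(data[start:])
    return units

# An SPS followed by an IDR slice, each behind a start code:
stream = b"\x00\x00\x00\x01\x67\x42\x00\x1e" + b"\x00\x00\x01\x65\x88\x84"
print([u[0] & 0x1F for u in split_nal_units(stream)])  # → [7, 5]
```

The low 5 bits of a NAL unit's first byte give its type (7 = SPS, 5 = IDR slice), which lets the receiver strip your custom per-frame header, re-assemble clean Annex-B data, and feed it to the decoder.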
Short answer: ffmpeg and ffplay will work directly. I remember OpenCV can be built on top of those two, so it shouldn't be difficult to use the FFmpeg backend to convert to cv::Mat. Follow the documents:
OpenCV can use the FFmpeg library (http://ffmpeg.org/) as a backend to record, convert and stream audio and video. FFmpeg is a complete, cross-platform solution. If you enable FFmpeg while configuring OpenCV, then CMake will download and install the binaries in OPENCV_SOURCE_CODE/3rdparty/ffmpeg/. To use FFmpeg at runtime, you must deploy the FFmpeg binaries with your application.
https://docs.opencv.org/3.4/d0/da7/videoio_overview.html
Last time, I had to play with the DJI PSDK, which only allows streaming H.264 to a UDP port (udp://192.168.5.293:23003).
So I wrote a simple ffmpeg interface to stream to the PSDK. I had to debug it beforehand, so I used ffplay to display the network stream and prove it was working. This is the command to show the stream; you would have to build on top of it to make it work as an OpenCV input:
ffplay -f h264 -i udp://192.168.1.45:23003
Please help me here.
I want to packetise and stream an HEVC-encoded bitstream that resides on a server (e.g. a Linux streaming server) to a client machine (Linux), where the RTP headers will be removed and the bitstream decoded to YUV using an HEVC decoder.
The stages will include:
Encode the raw YUV and obtain an HEVC bitstream --- server
Packetise/encapsulate the bitstream using RTP --- server
Stream the RTP packets over UDP
Remove RTP headers and store --- client
Decode the bitstream using an HEVC decoder --- client
At the moment, I have only encoded the videos.
I want to start by packetising the bitstream from the encoding process.
I would be grateful if someone could help me with information on how to extract HEVC NAL units and packetise them for streaming over a network.
Many thanks.
James
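The packetisation step can be sketched as follows: the RTP fixed header (RFC 3550) is 12 bytes, and RFC 7798 defines how HEVC NAL units go into the payload (a single-NAL-unit packet for anything under the MTU, fragmentation units otherwise). A sketch of the single-NAL-unit case; the payload type and SSRC below are arbitrary session values, not mandated numbers:

```python
import struct

def rtp_packet(nal_unit: bytes, seq: int, timestamp: int,
               ssrc: int = 0x12345678, payload_type: int = 96,
               marker: bool = False) -> bytes:
    """Wrap one NAL unit in an RTP packet (RFC 3550 header,
    single-NAL-unit payload per RFC 7798)."""
    byte0 = 2 << 6                              # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | payload_type   # M bit + payload type
    header = struct.pack("!BBHII", byte0, byte1,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)
    return header + nal_unit                    # start code is NOT sent

pkt = rtp_packet(b"\x40\x01\x0c\x01", seq=1, timestamp=90000, marker=True)
print(len(pkt))  # 12-byte header + 4-byte NAL unit → 16
```

The sequence number increments once per packet, the timestamp advances on a 90 kHz clock for video, and the marker bit is conventionally set on the last packet of an access unit; NAL units larger than the path MTU need the FU fragmentation scheme from RFC 7798 instead of this single-unit form.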