I would like to provide a stream of images via RTSP using Indy 10 components. I don't need details of the individual requests; that part is covered separately from what I need. But which Indy component should I use, and how should I use it? The stream will consist of images only, no sound.
Note that RTSP is very similar to HTTP, but with a different structure.
Indy does not have any RTSP or RTP/RTCP components, so you will have to implement those protocols from scratch. RTSP is a text-based protocol, so you can use TIdCmdTCPServer, though it may be better to derive from TId(Custom)TCPServer and override its DoExecute() method to avoid duplicated code (reading headers, processing URLs, etc), like TIdHTTPServer does. As for the images, you can use TIdUDPClient to send the RTP/RTCP packets as needed.
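As a rough sketch of how that could look, assuming a form with TIdCmdTCPServer and TIdUDPClient components dropped on it. The handler body, CSeq handling, and the JPEG payload are placeholders, not a complete RFC 2326/3550 implementation:

uses
  IdGlobal, IdContext, IdCommandHandlers, IdCmdTCPServer, IdUDPClient;

procedure TForm1.SetupRTSPServer;
var
  LCmd: TIdCommandHandler;
begin
  IdCmdTCPServer1.DefaultPort := 554; // standard RTSP port

  LCmd := IdCmdTCPServer1.CommandHandlers.Add;
  LCmd.Command := 'OPTIONS';
  LCmd.OnCommand := DoOptions;
  // register DESCRIBE/SETUP/PLAY/TEARDOWN handlers the same way

  IdCmdTCPServer1.Active := True;
end;

procedure TForm1.DoOptions(ASender: TIdCommand);
begin
  ASender.PerformReply := False; // RTSP replies don't fit Indy's stock reply format
  with ASender.Context.Connection.IOHandler do
  begin
    WriteLn('RTSP/1.0 200 OK');
    WriteLn('CSeq: 1'); // a real server echoes the client's CSeq header
    WriteLn('Public: OPTIONS, DESCRIBE, SETUP, PLAY, TEARDOWN');
    WriteLn('');
  end;
end;

// One RTP packet carrying a JPEG fragment (RFC 2435, payload type 26):
procedure TForm1.SendRTPPacket(const APayload: TIdBytes; AIsLast: Boolean);
var
  LPkt: TIdBytes;
begin
  SetLength(LPkt, 12 + Length(APayload));
  LPkt[0] := $80;                              // V=2, no padding/extension/CSRC
  LPkt[1] := Byte(26 or (Ord(AIsLast) shl 7)); // PT 26 = JPEG, marker bit on last fragment
  // bytes 2..11: sequence number, timestamp, SSRC - fill in per RFC 3550
  CopyTIdBytes(APayload, 0, LPkt, 12, Length(APayload));
  IdUDPClient1.SendBuffer(LPkt); // peer address/port negotiated during SETUP
end;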
I am working on an Adaptive AUTOSAR project, where input data (video) captured from a camera sensor needs to be transferred from a client machine to a server machine, which runs an object detection algorithm.
SOME/IP (service-oriented middleware over IP) is used as the middleware.
Is it possible to share a video file using the SOME/IP protocol?
If not, what is another method to share the video frames?
The problem is that you would need a very good connection between the two ECUs, and I doubt that even with Ethernet you can pass the data fast enough to maintain acceptable performance. It might make sense to preprocess the data before transmitting it somewhere else.
Transmission would be done as a byte stream with a segmentation/streaming protocol such as SOME/IP-TP, and you might think about compression if possible. UDP instead of TCP might also be a good idea, but consider the possible drawbacks of UDP (no delivery or ordering guarantees).
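To make the segmentation idea concrete, here is a minimal sketch in Delphi with Indy's TIdUDPClient. The 5-byte offset/more-flag header is invented for illustration; real SOME/IP-TP defines its own header layout in the AUTOSAR spec:

uses
  IdGlobal, IdUDPClient;

const
  SEG_PAYLOAD = 1392; // keep each datagram under a typical 1500-byte MTU

// Split one video frame into UDP datagrams, each carrying a 4-byte
// big-endian offset plus a 1-byte "more segments follow" flag.
procedure SendFrame(AUDP: TIdUDPClient; const AFrame: TIdBytes);
var
  Offset, Len: Integer;
  Seg: TIdBytes;
begin
  Offset := 0;
  while Offset < Length(AFrame) do
  begin
    Len := Length(AFrame) - Offset;
    if Len > SEG_PAYLOAD then
      Len := SEG_PAYLOAD;
    SetLength(Seg, 5 + Len);
    Seg[0] := Byte(Offset shr 24);
    Seg[1] := Byte(Offset shr 16);
    Seg[2] := Byte(Offset shr 8);
    Seg[3] := Byte(Offset);
    Seg[4] := Byte(Ord(Offset + Len < Length(AFrame))); // 1 = more to come
    CopyTIdBytes(AFrame, Offset, Seg, 5, Len);
    AUDP.SendBuffer(Seg); // Host/Port configured on the component
    Inc(Offset, Len);
  end;
end;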
Vector seems to provide a MICROSAR module called MICROSAR.AVB for Audio/Video Bridging.
But make sure the sensor/camera does not produce data faster than you can push it out over the network.
I'm looking to use an existing video player library for iOS apps with HLS support so that I can implement a player with some very specific networking behavior, as opposed to letting Apple decide the size and timing of requests. It needs to be customizable enough to support new networking policies: overriding request sizes, changing which files are requested, and reading data from an existing local cache file. In short, I'm attempting to override the networking portion that actually calls out and fetches the segments, so that I can feed in data from a partial cache as well as make specific algorithmic changes to the timing and size of external HTTP requests.
I've tried AVFoundation's AVAssetResourceLoaderDelegate protocol but, unless there is something I'm not seeing, there doesn't seem to be a way to override the outgoing requests and just feed bytes to the media player.
I've also looked into VLC, but unfortunately my current project is incompatible with the GPL license.
Ideally, there would be a way to directly feed bytes or complete segments to MPMoviePlayerController, but I can't find any way of accomplishing this in the API. The only method I'm aware of that works is using a local HTTP server, which I have been doing, but it seems overly complicated when all I'm really trying to do is override some internal networking code.
Any suggestions for another way of doing this?
From what I have gathered so far, Apple provides tools to make a Mac act as an HTTP Live Streaming server. But my goal is different: I want to make iDevices act as the HTTP Live Streaming server (for the local network only).
Can it be done at all?
Yes and no. Apple does not provide a way to stream encoded media data, so that part is 100% up to you. Also, Apple does not provide a way to access encoded frames directly (i.e. you can easily get an encoded file or the raw frames, but not easily get "encoded frames"). So you need to develop a way to get these encoded frames from the files for streaming, or encode the raw frames on the fly.
It may or may not fit your use case, but if you first write the streamer portion, you should be able to save small/short clips to disk and stream them out as they are created, with minimal overall latency.
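The serving half of that is plain HTTP file serving. A minimal sketch (shown with Indy's TIdHTTPServer for brevity; any embedded HTTP server on the device works the same way), assuming the encoder writes index.m3u8 and seg*.ts files into a directory held in an FSegmentDir field:

uses
  System.SysUtils, System.Classes, IdContext, IdCustomHTTPServer, IdHTTPServer;

procedure TStreamer.HTTPServerCommandGet(AContext: TIdContext;
  ARequestInfo: TIdHTTPRequestInfo; AResponseInfo: TIdHTTPResponseInfo);
var
  LFile: string;
begin
  // Map "/index.m3u8" or "/seg3.ts" onto the segment directory.
  // Real code must sanitize ARequestInfo.Document against "..".
  LFile := FSegmentDir +
    StringReplace(ARequestInfo.Document, '/', PathDelim, [rfReplaceAll]);
  if not FileExists(LFile) then
  begin
    AResponseInfo.ResponseNo := 404;
    Exit;
  end;
  if SameText(ExtractFileExt(LFile), '.m3u8') then
    AResponseInfo.ContentType := 'application/vnd.apple.mpegurl'
  else
    AResponseInfo.ContentType := 'video/mp2t';
  // Indy frees the stream after sending (FreeContentStream defaults to True).
  AResponseInfo.ContentStream :=
    TFileStream.Create(LFile, fmOpenRead or fmShareDenyNone);
end;

Opening with fmShareDenyNone matters here, since the encoder may still be appending the next segment while a client fetches the playlist.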
I am using the DirectShow filter for muxing VP8 and Vorbis.
And most importantly, I am sending (trying to send, actually) the WebM file in real time.
So there is no file being created.
As data is packed into WebM after being encoded, I send it off to the socket.
The file sink filter uses IStream to do the file I/O, and it heavily uses the seek operation, which I cannot use since I cannot seek on a socket.
Has anyone implemented this muxer, or does anyone know how to use it, so that the seek operation is not called?
Or maybe there is a version of the muxer with queues so that it supports fragmentation?
I am using the DirectShow filter provided by www.webmproject.org.
Implementing IStream on writers allows multiplexers to update cross-references in the written stream/file, so they don't have to write strictly sequentially, which is impossible for most container formats without creating huge buffers or temporary files.
Now, if you are creating the file at runtime to send it progressively over the network, which I suppose is what you are trying to achieve, you don't know what, where, and when the multiplexer is going to update in order to finalize the file: whether it is going to revisit data at the beginning of the file and update references, headers, etc.
You are supposed to create the full file first and then deliver it. Or you need to substitute the whole writer and deliver all writes onto the socket, including overwrites of already existing data. The most appropriate method for delivering real-time data over a network, however, is to not transfer files at all: the sender sends the individual streams, and receivers either use them as such or multiplex them into a file after receiving, if necessary.
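A sketch of the "substitute the whole writer" option, in Delphi terms since the multiplexer talks to an IStream: a TStream whose writes go out as (offset, length, payload) records on a TCP socket, wrapped in Delphi's TStreamAdapter to obtain the IStream. The receiver applies each record at its offset in its own copy of the file, so late header rewrites are replayed correctly. The record layout and class name are invented for illustration:

uses
  System.Classes, System.SysUtils, IdGlobal, IdTCPClient;

type
  TSocketWriteStream = class(TStream)
  private
    FClient: TIdTCPClient; // assumed already connected
    FPos, FSize: Int64;
  public
    constructor Create(AClient: TIdTCPClient);
    function Read(var Buffer; Count: Longint): Longint; override;
    function Write(const Buffer; Count: Longint): Longint; override;
    function Seek(const Offset: Int64; Origin: TSeekOrigin): Int64; override;
  end;

constructor TSocketWriteStream.Create(AClient: TIdTCPClient);
begin
  inherited Create;
  FClient := AClient;
end;

function TSocketWriteStream.Read(var Buffer; Count: Longint): Longint;
begin
  Result := 0; // write-only; the muxer should not read back
end;

function TSocketWriteStream.Seek(const Offset: Int64; Origin: TSeekOrigin): Int64;
begin
  // Seeks only move the local position; nothing is sent. The next Write
  // carries the absolute offset, so "seek back and rewrite the header"
  // becomes just another record on the wire.
  case Origin of
    soBeginning: FPos := Offset;
    soCurrent:   Inc(FPos, Offset);
    soEnd:       FPos := FSize + Offset;
  end;
  Result := FPos;
end;

function TSocketWriteStream.Write(const Buffer; Count: Longint): Longint;
var
  LData: TIdBytes;
begin
  if Count <= 0 then Exit(0);
  SetLength(LData, Count);
  Move(Buffer, LData[0], Count);
  FClient.IOHandler.Write(FPos);         // 8-byte absolute offset
  FClient.IOHandler.Write(Int32(Count)); // 4-byte payload length
  FClient.IOHandler.Write(LData);        // payload bytes
  Inc(FPos, Count);
  if FPos > FSize then FSize := FPos;
  Result := Count;
end;

// The writer filter gets:
//   TStreamAdapter.Create(TSocketWriteStream.Create(Client), soOwned)

Note this still transfers a file image rather than a playable live stream; as said above, sending the elementary streams and remuxing at the receiver is the cleaner approach.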
I was wondering if I can use the HTTP protocol to acquire an image stream from an RTSP camera? I am currently using the VLC Media ActiveX plugin to connect to and view the RTSP stream, but I would like to eliminate the ActiveX control and move to a more raw level of image acquisition. I recall seeing somewhere that it's possible to get these images using HTTP. I'd like to use the Indy TIdHTTP component to connect to the camera and acquire the image. I'm also assuming this would need some sort of speed control, such as a delay in between requests. However, it's also my understanding that these RTSP cameras have predefined frame rates, which clients using the standard RTSP protocol are supposed to follow.
Many cameras will allow you to grab snapshots with a URL that might look like:
http://user:password@camera/snapshot.jpg
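If your camera exposes such a URL, a minimal polling loop with TIdHTTP could look like this (the URL, credentials, and pacing are placeholders; in a GUI a TTimer is nicer than sleeping):

uses
  System.Classes, System.SysUtils, IdGlobal, IdHTTP;

procedure GrabSnapshots(const AURL: string; AIntervalMS, ACount: Integer);
var
  LHTTP: TIdHTTP;
  LJpeg: TMemoryStream;
  I: Integer;
begin
  LHTTP := TIdHTTP.Create(nil);
  LJpeg := TMemoryStream.Create;
  try
    LHTTP.Request.BasicAuthentication := True;
    LHTTP.Request.Username := 'user';     // placeholder credentials
    LHTTP.Request.Password := 'password';
    for I := 1 to ACount do
    begin
      LJpeg.Clear;
      LHTTP.Get(AURL, LJpeg); // e.g. 'http://camera/snapshot.jpg'
      LJpeg.Position := 0;
      // hand LJpeg to TJPEGImage.LoadFromStream / your display code here
      IndySleep(AIntervalMS); // crude pacing between requests
    end;
  finally
    LJpeg.Free;
    LHTTP.Free;
  end;
end;

Note that polling snapshots gives you whatever rate your loop achieves, not the camera's native frame rate; only a real RTSP session delivers frames at the camera's own pace.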
For a proper stream, you would need to use RTSP (there are Delphi RTSP clients), tunnelled over HTTP if your device supports the application/x-rtsp-tunnelled content type, or another streaming method your device supports.