How is GTP-U content dissected by Wireshark?
Is the content inside GTP-U also encrypted if I don't have IPsec over it?
I have some G.711 PCMU content inside a GTP tunnel as shown by Wireshark, but I am seeing two packets with the same content everywhere, only with different IPs.
I am not able to understand how this has been dissected, and whether the content is really encrypted or not: generally the frame size of PCMU content is 160 bytes without the RTP header, but here I am seeing only 32 bytes without the header.
Does anyone have any idea about this GTP-U content, or can you point me to some documents or resources to understand it?
Thanks
Nitin
GTP-U does not specify encryption for the payload; the inner packet is encapsulated as-is. If you need confidentiality on the transport network, you have to add it underneath, for example with IPsec.
To read about GTP, see 3GPP TS 29.060.
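To make "encapsulated as-is" concrete, here is a minimal sketch of the mandatory 8-byte GTP-U header from TS 29.060. The struct is illustrative only: the fields are big-endian on the wire, and the E/S/PN flag bits can add a 4-byte optional part before the payload.

#include <stdint.h>

/* Mandatory GTP-U header (TS 29.060). For a G-PDU (message type 0xFF)
   the bytes that follow are the inner user packet, e.g. IP/UDP/RTP,
   with no encryption applied by GTP-U itself. */
typedef struct {
    uint8_t  flags;        /* version (3 bits), PT, and E/S/PN flag bits */
    uint8_t  message_type; /* 0xFF = G-PDU, i.e. carries user data */
    uint16_t length;       /* length of everything after this 8-byte header */
    uint32_t teid;         /* tunnel endpoint identifier */
} gtpu_header_t;

Wireshark's GTP dissector reads exactly these fields and then hands the remaining bytes to the IP dissector, which is why you see the inner packet dissected as an ordinary IP/UDP/RTP stack.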
Related
I'm using MSE (Media Source Extensions) to play fragmented MP4 streams (H.264 video) in browsers.
The concept is working: there is a MediaSource and a SourceBuffer, I'm pushing data to the SourceBuffer, and the MediaSource is displayed on the HTML page correctly.
However, I've now found a stream which my configuration simply can't play.
I'd like to emphasize that my MSE configuration is good and has worked for every stream I've tried until now, so I'll skip the implementation details for the sake of simplicity.
There is an error message with a lot of details:
CHUNK_DEMUXER_ERROR_APPEND_FAILED: Invalid video decoder config: codec: h264, profile: h264 baseline, level: not available, alpha_mode: is_opaque, coded size: [0,0], visible rect: [0,0,0,0], natural size: [0,0], has extra data: false, encryption scheme: Unencrypted, rotation: 0°, flipped: 0, color space: {primaries:BT709, transfer:BT709, matrix:BT709, range:LIMITED}
It seems the video itself doesn't have the correct size information.
So the obvious question: (How) is it possible to configure the MediaSource's video decoder to update the stream's size (width and height) parameters?
This looks like a problem with the video bitstream of that particular piece of content; more specifically, with the decoder initialization config, which is usually contained in special NAL units (SPS and PPS) in the initialization segment(s).
You probably won't be able to work around that. To fix it, you would most likely have to rewrite those NAL units in the bitstream, which is not typically something to do on the client side; it's a content authoring issue.
Also, you might want to cross-validate with https://conformance.dashif.org/ or the dash.js reference player.
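If you want to verify this yourself before blaming the content, one option is to dump the avcC box of the offending initialization segment. A rough diagnostic sketch in C, assuming buf/len (hypothetical names) already hold the whole init segment in memory, with bounds checks omitted for brevity:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Locate the 'avcC' box and dump the SPS/PPS parameter sets it carries.
   The coded width/height the decoder complains about live inside the SPS. */
static void dump_avcc(const uint8_t *buf, size_t len) {
    for (size_t i = 0; i + 8 < len; i++) {
        if (memcmp(buf + i, "avcC", 4) != 0)
            continue;
        const uint8_t *p = buf + i + 4;  /* AVCDecoderConfigurationRecord */
        printf("profile=%u level=%u\n", p[1], p[3]);
        size_t nsps = p[5] & 0x1F;       /* low 5 bits = SPS count */
        p += 6;
        for (size_t s = 0; s < nsps; s++) {
            size_t sz = ((size_t)p[0] << 8) | p[1];
            printf("SPS %zu: %zu bytes\n", s, sz);
            p += 2 + sz;
        }
        size_t npps = *p++;
        for (size_t s = 0; s < npps; s++) {
            size_t sz = ((size_t)p[0] << 8) | p[1];
            printf("PPS %zu: %zu bytes\n", s, sz);
            p += 2 + sz;
        }
        return;
    }
    printf("no avcC box found\n");
}

If the SPS count comes out as zero, or there is no avcC box at all, that would match the "has extra data: false" part of the error message above.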
I am using the ESP8266 Arduino ConfigFile.ino sketch as an example of storing configuration settings on SPIFFS.
https://github.com/esp8266/Arduino/blob/master/libraries/esp8266/examples/ConfigFile/ConfigFile.ino
From this code segment, configFile cannot be larger than 1024 bytes:
size_t size = configFile.size();
if (size > 1024) {
  Serial.println("Config file size is too large");
  return false;
}
Why is 1024 bytes the limitation for config file size? If this is indeed a limitation, are there ways to overcome this limitation?
It's a limitation only in this particular example; the sketch is meant to serve as a basis for you to start developing your own configuration file code. Nothing is stopping you from creating a larger buffer for both the raw character data and the JsonBuffer (see the sketch below). I have several configuration files on production devices around 10-20 KB with no issues to report.
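For instance, a hedged variation on the example's loadConfig() read path; the 8192-byte cap is arbitrary, and DynamicJsonBuffer is the ArduinoJson 5 API used by that generation of the example:

size_t size = configFile.size();
if (size > 8192) {  // pick whatever your free heap comfortably allows
  Serial.println("Config file size is too large");
  return false;
}

// Heap-allocate the raw buffer instead of relying on a fixed-size array.
std::unique_ptr<char[]> buf(new char[size]);
configFile.readBytes(buf.get(), size);

// Let ArduinoJson grow its pool on the heap as well.
DynamicJsonBuffer jsonBuffer;
JsonObject& json = jsonBuffer.parseObject(buf.get());
if (!json.success()) {
  Serial.println("Failed to parse config file");
  return false;
}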
I have been using ffmpeg to decode every single frame that I receive from my IP cam. The brief code looks like this:
-(void) decodeFrame:(unsigned char *)frameData frameSize:(int)frameSize{
    AVFrame frame;
    AVPicture picture;
    AVPacket pkt;
    int got_picture = 0;
    // context must be an AVCodecContext* opened earlier with avcodec_open2()
    av_init_packet(&pkt);
    pkt.data = frameData;
    pkt.size = frameSize;
    avcodec_get_frame_defaults(&frame);
    avpicture_alloc(&picture, PIX_FMT_RGB24, targetWidth, targetHeight);
    avcodec_decode_video2(context, &frame, &got_picture, &pkt);
}
The code works fine, but it's software decoding. I want to improve decoding performance with hardware decoding. After lots of research, I believe it may be achievable with the AVFoundation framework.
The AVAssetReader class may help, but I can't figure out what comes next. Could anyone point out the following steps for me? Any help would be appreciated.
iOS does not provide direct public access to the hardware decode engine, because hardware is always used to decode H.264 video on iOS.
Therefore, session 513 gives you all the information you need to do frame-by-frame decoding on iOS. In short, per that session (a rough sketch follows the steps):
Generate individual network abstraction layer units (NALUs) from your H.264 elementary stream. There is much information on how this is done online. VCL NALUs (IDR and non-IDR) contain your video data and are to be fed into the decoder.
Re-package those NALUs according to the "AVCC" format, removing NALU start codes and replacing them with a 4-byte NALU length header.
Create a CMVideoFormatDescriptionRef from your SPS and PPS NALUs via CMVideoFormatDescriptionCreateFromH264ParameterSets().
Package NALU frames as CMSampleBuffers per session 513.
Create a VTDecompressionSessionRef, and feed VTDecompressionSessionDecodeFrame() with the sample buffers.
Alternatively, use AVSampleBufferDisplayLayer, whose -enqueueSampleBuffer: method obviates the need to create your own decoder.
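A rough sketch of steps 3 and 5, assuming sps/pps point at raw parameter set payloads with the start codes already stripped (all names here are illustrative, not from the session):

#import <VideoToolbox/VideoToolbox.h>

// Decoded frames arrive here; imageBuffer holds the decompressed picture.
static void DidDecompress(void *decompressionOutputRefCon,
                          void *sourceFrameRefCon,
                          OSStatus status,
                          VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer,
                          CMTime presentationTimeStamp,
                          CMTime presentationDuration) {
    // hand imageBuffer to your renderer (OpenGL, a CALayer's contents, ...)
}

// Build the format description (step 3) and the decoder session (step 5).
static VTDecompressionSessionRef CreateDecoder(const uint8_t *sps, size_t spsSize,
                                               const uint8_t *pps, size_t ppsSize) {
    const uint8_t *paramSets[2] = { sps, pps };
    size_t paramSizes[2] = { spsSize, ppsSize };

    CMVideoFormatDescriptionRef fmtDesc = NULL;
    OSStatus err = CMVideoFormatDescriptionCreateFromH264ParameterSets(
        kCFAllocatorDefault, 2, paramSets, paramSizes,
        4 /* AVCC length-header size */, &fmtDesc);
    if (err != noErr) return NULL;

    VTDecompressionOutputCallbackRecord callback = { DidDecompress, NULL };
    VTDecompressionSessionRef session = NULL;
    err = VTDecompressionSessionCreate(kCFAllocatorDefault, fmtDesc,
                                       NULL /* decoder spec */,
                                       NULL /* pixel buffer attrs */,
                                       &callback, &session);
    CFRelease(fmtDesc);
    return (err == noErr) ? session : NULL;
}

// Then, for each AVCC-framed access unit wrapped in a CMSampleBuffer:
// VTDecompressionSessionDecodeFrame(session, sampleBuffer, 0, NULL, NULL);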
Edit:
This link provides a more detailed explanation of how to decode H.264 step by step: stackoverflow.com/a/29525001/3156169
Original answer:
I watched session 513, "Direct Access to Video Encoding and Decoding", from WWDC 2014 yesterday, and got the answer to my own question.
The speaker says:
We have Video Toolbox (in iOS 8). Video Toolbox has been there on OS X for a while, but now it's finally populated with headers on iOS. This provides direct access to encoders and decoders.
So there is no way to do hardware decoding frame by frame in iOS 7, but it can be done in iOS 8.
Has anyone figured out how to directly access video encoding and decoding frame by frame in iOS 8?
I would like to add three "parts" to an NSInputStream: an NSString, the output from another stream, and then another NSString. The idea is the following:
The first and last NSStrings represent the beginning and end of a SOAP request, while the output from the stream is the result of loading a very large file and encoding it as a Base64 string. So, in the end, the final NSInputStream would hold the whole SOAP request, like this:
< soap beginning > < Base64 encoded data > < soap ending >
The reason I want the whole request to be held in an NSInputStream is two-fold:
I don't want to load the very large data file into memory.
I think this is the only way to enforce sending the final request in HTTP 1.1 chunks (which I need, because otherwise, if the request becomes too big, the server won't accept it). So, I know that doing this:
NSInputStream *dataStream = ....;
[request setHTTPBodyStream:dataStream];
ensures that the request will be sent as HTTP 1.1 chunks and not as one huge raw SOAP request.
So, I wonder how this can be achieved; namely, how do I "enqueue" things into an NSInputStream? Can it even be done? Is there an alternative way?
Just for reference, in Java this can be done as follows:
Vector<InputStream> streamVec = new Vector<InputStream>();
BufferedInputStream fStream = new BufferedInputStream(fileData.getInputStream());
Base64InputStream b64stream = new Base64InputStream(fStream, true);
String[] SOAPBody = GenerateSOAPBody(fileInfo).split("CUT_HERE");
streamVec.add(new ByteArrayInputStream(SOAPBody[0].getBytes()));
streamVec.add(b64stream);
streamVec.add(new ByteArrayInputStream(SOAPBody[1].getBytes()));
SequenceInputStream seqStream = new SequenceInputStream(streamVec.elements());
because Java has these objects available, but NSStreams in Objective-C look like very low-level objects and are very hard to work with.
Note: I completely rewrote the original question I asked 2 days ago, since I think the new edit explains the problem more clearly. I hope this makes it easier to comprehend, and maybe to answer.
UPDATE 2
Here is what I've been able to achieve so far. Instead of trying to enqueue into a stream, I am using a temp file: first I write the < soap beginning > to it, then I set up an input stream that reads the large file in chunks, encode each chunk as a Base64 string, and write the result to the same temp file; finally, when my stream closes, I write the < soap ending > to the temp file. Then I set up another input stream with the contents of this file, which I pass to the NSMutableURLRequest:
NSMutableURLRequest* request = [NSMutableURLRequest requestWithURL:url];
...
NSInputStream *dataStream = [NSInputStream inputStreamWithFileAtPath:_tempFilePath];
[request setHTTPBodyStream:dataStream];
This ensures HTTP 1.1 chunked transfer of the contents of the file. After the connection finishes, I delete the temp file.
This seems to work fine, but of course it is an annoying workaround: I don't want to write to a temp file when it all could (ideally) have been handled by streams. If anybody still has better suggestions, let me know :)
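For reference, a condensed sketch of the middle step of this workaround (bigFilePath/tempFilePath are hypothetical names, and error and short-write handling are omitted). The chunk size must be a multiple of 3 so that each chunk's Base64 output concatenates into one valid Base64 string with no padding until the very end; file-backed NSInputStreams normally fill the buffer except at EOF, which this relies on:

NSInputStream *fileStream = [NSInputStream inputStreamWithFileAtPath:bigFilePath];
NSOutputStream *tempStream = [NSOutputStream outputStreamToFileAtPath:tempFilePath append:YES];
[fileStream open];
[tempStream open];

uint8_t chunk[3 * 4096];   // multiple of 3, so no Base64 padding mid-stream
NSInteger n;
while ((n = [fileStream read:chunk maxLength:sizeof(chunk)]) > 0) {
    NSData *b64 = [[NSData dataWithBytes:chunk length:n] base64EncodedDataWithOptions:0];
    [tempStream write:(const uint8_t *)b64.bytes maxLength:b64.length];
}

[fileStream close];
[tempStream close];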
UPDATE 3
OK, another update is in order. While writing to a file seems to work, I am now hitting an unexpected issue with some of my requests failing to upload to the server. Specifically, everything goes according to plan: I read the contents of the temp file into a stream, set the HTTP body of my request to this stream, and it starts transmitting the HTTP 1.1 chunks as I want it to. But for some reason some packets get dropped and the final request (this is my guess) gets malformed and thus fails. I think the dropped packets are random, because I observe the problem on larger requests (the issue simply has more chances to show up), while my smaller requests usually go through just fine. This is of course a separate issue from the original one in this question. If anybody has a good idea what might be causing it, I asked about the problem here: Packets dropped during chunked HTTP 1.1 request sent by NSURLConnection
Your solution is an OK option, but you can do it with a stream. It means subclassing NSInputStream, and that isn't trivial, because there are a bunch of methods you need to implement.
Basically, your subclass would initially return the header bytes, then the bytes from the 'internal' stream of the file content, and then, when that's used up, the footer bytes. It means maintaining a record of how big the header and footer are and how much has been processed so far, but that isn't a big issue. A condensed sketch of the read path follows below.
There's an example of creating a subclass here which shows the tricky hidden methods you need to implement to get the stream subclass to work properly without throwing exceptions.
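Here is that condensed sketch of the core read path, assuming illustrative ivars _header/_footer (NSData), _fileStream (the inner NSInputStream) and _headerOffset/_footerOffset; the real subclass also needs open/close, status handling and the tricky hidden methods mentioned above:

- (NSInteger)read:(uint8_t *)buffer maxLength:(NSUInteger)len {
    NSUInteger total = 0;

    // Phase 1: serve bytes from the header until it is exhausted.
    if (_headerOffset < _header.length) {
        NSUInteger n = MIN(len, _header.length - _headerOffset);
        memcpy(buffer, (const uint8_t *)_header.bytes + _headerOffset, n);
        _headerOffset += n;
        total += n;
        if (total == len) return total;
    }

    // Phase 2: pass the read through to the inner file stream.
    if (_fileStream.streamStatus != NSStreamStatusAtEnd) {
        NSInteger n = [_fileStream read:buffer + total maxLength:len - total];
        if (n < 0) return n;  // propagate stream errors
        total += n;
        // Only fall through to the footer once the inner stream hits EOF.
        if (_fileStream.streamStatus != NSStreamStatusAtEnd) return total;
    }

    // Phase 3: serve the footer after the inner stream is done.
    if (_footerOffset < _footer.length) {
        NSUInteger n = MIN(len - total, _footer.length - _footerOffset);
        memcpy(buffer + total, (const uint8_t *)_footer.bytes + _footerOffset, n);
        _footerOffset += n;
        total += n;
    }
    return total;
}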
I need to compress data sent over a secure channel in my iOS app, and I was wondering if I could use TLS compression for that. I am unable to figure out whether Apple's TLS implementation, Secure Transport, supports it.
Does anyone know whether TLS compression is supported on iOS or not?
I tried to determine whether Apple's implementation of SSL/TLS supports compression, but I am afraid it does not.
At first I was hopeful that, since an errSSLPeerDecompressFail error code exists, there had to be a way to enable compression. But I could not find it.
The first obvious sign that Apple doesn't support compression came from several wire captures I took from my device (6.1) opening secure sockets on different ports. In all of them, the Client Hello packet reported only one compression method: null.
Then I looked at the latest available code for libsecurity_ssl from Apple. This is the implementation from Mac OS X 10.7.5, but something tells me the iOS one will be very similar, if not the same; it surely won't be more capable than the Mac OS X one.
You can find the following in the file sslHandshakeHello.c, lines 186-187 (SSLProcessServerHello):
if (*p++ != 0) /* Compression */
return unimpErr;
That error code reads a lot like "if the server sends any compression method other than null (0), we don't implement that, so fail".
Again, in the same file, line 325 (SSLEncodeClientHello):
*p++ = 0; /* null compression */
And nothing else around it (DEFLATE is method 1, according to RFC 3749).
Further below, lines 469, 476 and 482-483 (SSLProcessClientHello):
compressionCount = *(charPtr++);
...
/* Ignore list; we're doing null */
...
/* skip compression list */
charPtr += compressionCount;
I think it is pretty clear that this implementation only handles null compression: it is the only method sent in the Client Hello, the only one accepted in the Server Hello, and the compression list is ignored when a Client Hello is received (null must be implemented and offered by every client anyway).
So I think both you and I have to implement application-level compression. Good luck.
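If it helps, application-level compression on iOS can be as simple as running the payload through zlib before it goes into the secure channel. A minimal sketch, with error handling trimmed and the function name purely illustrative:

#include <stdlib.h>
#include <zlib.h>

/* Compress src[0..srcLen) with DEFLATE; returns a malloc'd buffer and
   stores the compressed size in *dstLen, or returns NULL on failure.
   The peer inflates the bytes after reading them from the channel. */
static Bytef *compress_payload(const Bytef *src, uLong srcLen, uLongf *dstLen) {
    *dstLen = compressBound(srcLen);   /* worst-case compressed size */
    Bytef *dst = malloc(*dstLen);
    if (dst && compress2(dst, dstLen, src, srcLen, Z_DEFAULT_COMPRESSION) != Z_OK) {
        free(dst);
        dst = NULL;
    }
    return dst;
}

iOS ships zlib, so this only needs linking against libz; the compressed buffer can then be written out through SSLWrite() or an NSOutputStream as usual.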