Right now, I am using the static librtmp.a library to open an RTMP connection between my iPhone and a server. When the record button is pressed, the camera starts taking input, and in captureOutput, AVAssetWriters running on different threads encode the video to H.264/AAC. The segments are then saved to a specific URL. I'm trying to take these processed frames and send them over my RTMP client using librtmp.
-(void)writeURL:(NSURL*)segmentURL {
    NSLog(@"Utilities are writing...");
    NSData *segmentData = [NSData dataWithContentsOfURL:segmentURL];
    const char* body = [segmentData bytes];
    NSLog(@"%i", [segmentData length]);
    NSLog(@"%s", body);
    RTMPPacket packet = _rtmp->m_write;
    RTMPPacket_Alloc(&packet, [segmentData length]);
    packet.m_headerType = RTMP_PACKET_SIZE_MEDIUM;
    packet.m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet.m_body = (char*)body;
    RTMPPacket_Dump(&packet);
    RTMP_SendPacket(_rtmp, &packet, TRUE);
    //RTMP_Write(_rtmp, packet.m_body, packet.m_nBodySize);
}
It crashes every time on the RTMPPacket_Alloc call and I'm unsure what to do. Is this the right way to approach sending the data over the network?
EDIT:
Sample output:
2013-03-31 22:53:16.163 videoAppPrototype[2567:907] Switching encoders
2013-03-31 22:53:16.179 videoAppPrototype[2567:1703] Encoder switch finished
2013-03-31 22:53:16.220 videoAppPrototype[2567:1703] Upload public.mpeg-4
2013-03-31 22:53:16.223 videoAppPrototype[2567:1703] Utilities are writing...
2013-03-31 22:53:16.230 videoAppPrototype[2567:1703] 339171
DEBUG: RTMP PACKET: packet type: 0x09. channel: 0x00. info 1: 0 info 2: 0. Body size: 0. body: 0x00
EDIT 2:
I changed my code to use RTMP_Write() instead of RTMP_SendPacket().
New Method:
-(void)writeURL:(NSURL*)segmentURL {
    NSLog(@"Utilities are writing...");
    NSData *segmentData = [NSData dataWithContentsOfURL:segmentURL];
    NSUInteger len = [segmentData length] / sizeof(unsigned char);
    Byte *byteData = (Byte*)malloc(len);
    memcpy(byteData, [segmentData bytes], len);
    free(byteData);
    NSLog(@"%i", [segmentData length]);
    NSLog(@"First write attempt...");
    RTMP_Write(_rtmp, (char *)byteData, len);
    NSLog(@"Successful?");
}
This now crashes at RTMP_Write, as shown in the stack trace:
If someone has any ideas or needs any more information, let me know!
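For reference, the usual librtmp pattern is to let RTMPPacket_Alloc own the body buffer and to copy the payload into it, rather than pointing m_body at memory librtmp did not allocate; the packet also needs m_nBodySize and a channel set before sending. A minimal sketch along those lines (the channel number is an assumption, and a raw MP4 segment would still have to be repackaged as FLV tags before an RTMP server will accept it):

- (void)writeURL:(NSURL *)segmentURL {
    NSData *segmentData = [NSData dataWithContentsOfURL:segmentURL];

    // _rtmp is the connected RTMP* from the question.
    RTMPPacket packet;
    memset(&packet, 0, sizeof(packet));

    // RTMPPacket_Alloc reserves header space in front of the body and points
    // packet.m_body into its own buffer; copy the payload into that buffer.
    if (!RTMPPacket_Alloc(&packet, (uint32_t)[segmentData length])) {
        NSLog(@"RTMPPacket_Alloc failed");
        return;
    }
    memcpy(packet.m_body, [segmentData bytes], [segmentData length]);

    packet.m_nBodySize   = (uint32_t)[segmentData length]; // was left at 0 before
    packet.m_headerType  = RTMP_PACKET_SIZE_MEDIUM;
    packet.m_packetType  = RTMP_PACKET_TYPE_VIDEO;
    packet.m_nChannel    = 0x04;               // assumed channel for video data
    packet.m_nTimeStamp  = 0;                  // fill in real timestamps per frame
    packet.m_nInfoField2 = _rtmp->m_stream_id; // send on the connected stream

    if (!RTMP_SendPacket(_rtmp, &packet, FALSE)) {
        NSLog(@"RTMP_SendPacket failed");
    }
    RTMPPacket_Free(&packet);
}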
Right now I'm investigating the possibility of implementing video streaming through the MultipeerConnectivity framework. For that purpose I'm using NSInputStream and NSOutputStream.
The problem is: I can't receive any picture so far. Right now I'm trying to pass a simple picture and show it on the receiver. Here's a little snippet of my code:
Sending the picture via NSOutputStream:
- (void)sendMessageToStream
{
    NSData *imgData = UIImagePNGRepresentation(_testImage);
    int img_length = (int)[imgData length];
    NSMutableData *msgData = [[NSMutableData alloc] initWithBytes:&img_length length:sizeof(img_length)];
    [msgData appendData:imgData];
    int msg_length = (int)[msgData length];
    uint8_t *readBytes = (uint8_t *)[msgData bytes];
    uint8_t buf[msg_length];
    (void)memcpy(buf, readBytes, msg_length);
    int stream_len = [_stream write:(uint8_t *)buf maxLength:msg_length];
    //int stream_len = [_stream write:(uint8_t *)buf maxLength:data_length];
    //NSLog(@"stream_len = %d", stream_len);
    _tmpCounter++;
    dispatch_async(dispatch_get_main_queue(), ^{
        _lblOperationsCounter.text = [NSString stringWithFormat:@"Sent: %ld", (long)_tmpCounter];
    });
}
The code above works totally fine. The stream_len value after writing equals 29627 bytes, which is the expected value because the image's size is around 29 KB.
Receiving the picture via NSInputStream:
- (void)readDataFromStream
{
    UInt32 length;
    if (_currentFrameSize == 0) {
        uint8_t frameSize[4];
        length = [_stream read:frameSize maxLength:sizeof(int)];
        unsigned int b = frameSize[3];
        b <<= 8;
        b |= frameSize[2];
        b <<= 8;
        b |= frameSize[1];
        b <<= 8;
        b |= frameSize[0];
        _currentFrameSize = b;
    }
    uint8_t bytes[1024];
    length = [_stream read:bytes maxLength:1024];
    [_frameData appendBytes:bytes length:length];
    if ([_frameData length] >= _currentFrameSize) {
        UIImage *img = [UIImage imageWithData:_frameData];
        NSLog(@"SETUP IMAGE!");
        _imgView.image = img;
        _currentFrameSize = 0;
        [_frameData setLength:0];
    }
    _tmpCounter++;
    dispatch_async(dispatch_get_main_queue(), ^{
        _lblOperationsCounter.text = [NSString stringWithFormat:@"Received: %ld", (long)_tmpCounter];
    });
}
As you can see, I'm trying to receive the picture in several steps, and here's why. When I try to read data from the stream, it always reads a maximum of 1095 bytes no matter what number I put in the maxLength: parameter. But when I send the picture in the first snippet of code, it sends absolutely fine (29627 bytes). By the way, the image's size is around 29 KB.
That's where my question comes up: why is that? Why does sending 29 KB via NSOutputStream work totally fine while receiving causes problems? And is there a solid way to make video streaming work through NSInputStream and NSOutputStream? I just didn't find much information about this technology; all I found were some simple things I already knew.
Here's an app I wrote that shows you how:
https://app.box.com/s/94dcm9qjk8giuar08305qspdbe0pc784
Build the project with Xcode 9 and run the app on two iOS 11 devices.
To stream live video, touch the Camera icon on one of two devices.
If you don't have two devices, you can run one copy of the app in the Simulator; however, you can only use the camera on the real device (the Simulator will display the broadcast video).
Just so you know: this is not the ideal way to stream real-time video between devices (it should probably be your last choice). Data packets (versus streaming) are way more efficient and faster.
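For comparison, the packet-based route through MultipeerConnectivity is a single call per frame. A sketch, assuming an established MCSession named session and a frameData NSData object:

// Sketch: send a frame as a discrete data packet instead of over a stream.
// Requires: @import MultipeerConnectivity;
NSError *error = nil;
BOOL sent = [session sendData:frameData
                      toPeers:session.connectedPeers
                     withMode:MCSessionSendDataUnreliable // acceptable for live frames
                        error:&error];
if (!sent) {
    NSLog(@"Send failed: %@", error);
}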
Regardless, I'm really confused by your NSInputStream-related code. Here's something that makes a little more sense, I think:
case NSStreamEventHasBytesAvailable: {
    // len is a global variable set to a non-zero value;
    // mdata is an NSMutableData object that is reset when a new input
    // stream is created.
    // displayImage is a block that accepts the image data and a reference
    // to the layer on which the image will be rendered
    uint8_t buf[len];
    len = [aStream read:buf maxLength:len];
    if (len > 0) {
        [mdata appendBytes:(const void *)buf length:len];
    } else {
        displayImage(mdata, wLayer);
    }
    break;
}
The output stream code should look something like this:
// data is an NSData object that contains the image data from the video
// camera;
// len is a global variable set to a non-zero value
// byteIndex is a global variable set to zero each time a new output
// stream is created
if (data.length > 0 && len >= 0 && (byteIndex <= data.length)) {
    len = (data.length - byteIndex) < DATA_LENGTH ? (data.length - byteIndex) : DATA_LENGTH;
    uint8_t bytes[len];
    [data getBytes:bytes range:NSMakeRange(byteIndex, len)];
    byteIndex += [oStream write:(const uint8_t *)bytes maxLength:len];
}
There's a lot more to streaming video than setting up the NSStream classes correctly—a lot more. You'll notice in my app, I created a cache for the input and output streams. This solved a myriad of issues that you would likely encounter if you don't do the same.
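As an illustration of what such a cache can look like on the receive side (a sketch, not the code from the linked app): an NSMutableData accumulates whatever the stream delivers and is drained one length-prefixed frame at a time.

// Sketch: a tiny receive-side cache for length-prefixed frames.
// _cache is an NSMutableData ivar and onFrame is a block property;
// both names are assumptions.
- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
    if (eventCode != NSStreamEventHasBytesAvailable) return;

    uint8_t buf[4096];
    NSInteger n = [(NSInputStream *)aStream read:buf maxLength:sizeof(buf)];
    if (n > 0) {
        [_cache appendBytes:buf length:(NSUInteger)n];
    }

    // Drain every complete frame currently sitting in the cache.
    while (_cache.length >= sizeof(uint32_t)) {
        uint32_t frameLen = 0;
        [_cache getBytes:&frameLen length:sizeof(frameLen)]; // little-endian, as sent
        if (_cache.length < sizeof(frameLen) + frameLen) break; // wait for more bytes

        NSData *frame = [_cache subdataWithRange:NSMakeRange(sizeof(frameLen), frameLen)];
        [_cache replaceBytesInRange:NSMakeRange(0, sizeof(frameLen) + frameLen)
                          withBytes:NULL
                             length:0];
        self.onFrame(frame);
    }
}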
I have never seen anyone successfully use NSStreams for video streaming... ever. For one thing, it's highly complex.
There are many different (and better) ways to stream video; I wouldn't go this route. I just took it on because no one else has been able to do it successfully.
I think the problem is in your assumption that all data will be available in the NSInputStream the whole time you are reading from it. An NSInputStream made from an NSURL object has an asynchronous nature, and it should be accessed accordingly using an NSStreamDelegate. You can look at the example in the README of POSInputStreamLibrary.
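The asynchronous setup that answer describes boils down to scheduling the stream on a run loop and reacting to delegate events instead of polling. A minimal sketch (fileURL is an assumed name):

// Sketch: event-driven NSInputStream instead of blocking reads.
NSInputStream *input = [NSInputStream inputStreamWithURL:fileURL];
input.delegate = self; // self conforms to NSStreamDelegate
[input scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[input open];
// Bytes are then consumed in -stream:handleEvent: when the stream posts
// NSStreamEventHasBytesAvailable, and teardown happens on
// NSStreamEventEndEncountered or NSStreamEventErrorOccurred.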
I am trying to get a video stream from a camera and show it in an iOS app. The camera sends the stream via a UDP port. That part is simple: I just open a GCDAsyncUdpSocket and start listening in the delegate. My problem starts when the data starts to arrive.
I have the NSData object that arrives at the delegate and, since the stream is H264, following the info from WWDC 2014 I split the NSData into smaller pieces, separating it by 0x000001. Those should be the NALUs, and for each one I get the type:
- (int)getNALUType:(NSData *)NALU {
    uint8_t *bytes = (uint8_t *)NALU.bytes;
    return bytes[0] & 0x1F;
}
If the type is 7 or 8, I create the SPS and PPS objects, and when I have both of them I create the format description with CMVideoFormatDescriptionCreateFromH264ParameterSets.
Finally, having the format description, I start adding the NALUs with type 1 and 5 (non-IDR and IDR pictures) to the AVSampleBufferDisplayLayer block buffer.
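For reference, the format-description step typically looks something like the following sketch; spsData and ppsData are assumed names for the start-code-stripped SPS and PPS NALUs:

#import <CoreMedia/CoreMedia.h>

// Sketch: build a CMVideoFormatDescriptionRef from collected SPS/PPS NALUs.
CMVideoFormatDescriptionRef formatDesc = NULL;
const uint8_t * const paramSets[2] = { spsData.bytes, ppsData.bytes };
const size_t paramSizes[2] = { spsData.length, ppsData.length };

OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
    kCFAllocatorDefault,
    2,          // parameter set count: SPS + PPS
    paramSets,
    paramSizes,
    4,          // NAL unit header (length prefix) size used in the AVCC samples
    &formatDesc);
if (status != noErr) {
    NSLog(@"Creating format description failed: %d", (int)status);
}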
I am using the code from here to process the NALUs:
https://github.com/niswegmann/H264Streamer/blob/master/H264Streamer/ViewController.m
and this is how I process the data that arrives from the stream:
-(void)udpSocket:(GCDAsyncUdpSocket *)sock didReceiveData:(NSData *)data fromAddress:(NSData *)address withFilterContext:(id)filterContext {
    UInt8 TxDataBytes[10];
    int TxDataIndex = 0;
    TxDataBytes[TxDataIndex++] = 0x00;
    TxDataBytes[TxDataIndex++] = 0x00;
    TxDataBytes[TxDataIndex++] = 0x01;
    NSData *NALUStartData = [NSData dataWithBytes:&TxDataBytes length:TxDataIndex];
    NSArray *NALUArray = [data componentsSplitByDataBytes:NALUStartData];
    for (NSData *NALU in NALUArray) {
        [self parseNALU:NALU];
    }
}
componentsSplitByDataBytes: is from this repo:
https://github.com/watr/NSData-SplitByData
I am pretty sure that my problem is how I get the NALUs from the NSData, but I don't know what else to try.
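One detail worth checking in the splitting step: H.264 streams commonly contain both 3-byte (0x000001) and 4-byte (0x00000001) start codes, and splitting only on the 3-byte form leaves a stray leading 0x00 on some NALUs. A sketch of a scanner that tolerates both (illustrative only, not the poster's code):

// Sketch: split an Annex B buffer into NALUs, accepting 3- and 4-byte
// start codes. Returns the payloads with the start codes stripped.
static NSArray<NSData *> *NALUsFromAnnexB(NSData *data) {
    NSMutableArray<NSData *> *units = [NSMutableArray array];
    const uint8_t *p = data.bytes;
    NSUInteger len = data.length;
    NSUInteger start = NSNotFound;

    for (NSUInteger i = 0; i + 3 <= len; ) {
        BOOL three = (p[i] == 0 && p[i + 1] == 0 && p[i + 2] == 1);
        BOOL four  = (i + 4 <= len && p[i] == 0 && p[i + 1] == 0 &&
                      p[i + 2] == 0 && p[i + 3] == 1);
        if (three || four) {
            if (start != NSNotFound && i > start) {
                [units addObject:[data subdataWithRange:NSMakeRange(start, i - start)]];
            }
            i += three ? 3 : 4;
            start = i; // payload begins right after the start code
        } else {
            i++;
        }
    }
    if (start != NSNotFound && len > start) {
        [units addObject:[data subdataWithRange:NSMakeRange(start, len - start)]];
    }
    return units;
}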
I've been working on this for a long time, but I am stuck.
I'm writing an iOS app that takes AES-encrypted data from a Go server-side application and decrypts it. I'm using CCCryptor for the decryption on the iOS side. However, I cannot, for the life of me, get plaintext out. There is a working Java/Android implementation, and it decrypts fine on the Go side, so I'm pretty sure it's to do with my CCCryptor settings.
I'm actually getting a success status (0) on decryption, but taking the output and doing an NSString initWithBytes gives me a null string.
Note: I'm only writing the iOS side.
Go code that encrypts:
func encrypt(key, text []byte) []byte {
    block, err := aes.NewCipher(key)
    if err != nil {
        panic(err)
    }
    b := encodeBase64(text)
    ciphertext := make([]byte, aes.BlockSize+len(b))
    iv := ciphertext[:aes.BlockSize]
    if _, err := io.ReadFull(rand.Reader, iv); err != nil {
        panic(err)
    }
    cfb := cipher.NewCFBEncrypter(block, iv)
    cfb.XORKeyStream(ciphertext[aes.BlockSize:], []byte(b))
    return ciphertext
}
Objective-C code that decrypts
+ (NSData *)decrypt:(NSData *)data withPassword:(NSString *)password {
    NSData *key = [password dataUsingEncoding:NSUTF8StringEncoding];
    size_t dataLength = [data length] - kCCBlockSizeAES128;
    NSData *iv = [data subdataWithRange:NSMakeRange(0, kCCBlockSizeAES128)];
    NSData *encrypted = [data subdataWithRange:NSMakeRange(kCCBlockSizeAES128, dataLength)];
    // See the doc: For block ciphers, the output size will always be less than or
    // equal to the input size plus the size of one block.
    // That's why we need to add the size of one block here
    // size_t bufferSize = dataLength + kCCBlockSizeAES128;
    // void *buffer = malloc(dataLength);
    NSMutableData *ret = [NSMutableData dataWithLength:dataLength + kCCBlockSizeAES128];
    size_t numBytesDecrypted = 0;
    CCCryptorStatus status = CCCrypt(kCCDecrypt, kCCAlgorithmAES,
                                     0x0000, // change to 0 solve the problem
                                     [key bytes],
                                     kCCKeySizeAES256,
                                     [iv bytes],
                                     [encrypted bytes], dataLength, /* input */
                                     [ret mutableBytes], [ret length], /* output */
                                     &numBytesDecrypted);
    NSLog(@"err: %d", status);
    NSLog(@"dataLength: %d, num: %d", (int)dataLength, (int)numBytesDecrypted);
    if (status == kCCSuccess) {
        // the returned NSData takes ownership of the buffer and will free it on deallocation
        return ret;
    }
    // free(buffer); // free the buffer
    return nil;
}
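One mismatch worth noting: Go's cipher.NewCFBEncrypter produces CFB-mode ciphertext, while CCCrypt with options 0 runs AES in CBC mode. Matching CFB on iOS requires the lower-level CCCryptorCreateWithMode API; a minimal sketch of that call, reusing the key, iv, and encrypted variables from above:

#include <CommonCrypto/CommonCryptor.h>

// Sketch: AES decryption in CFB mode to match Go's cipher.NewCFBEncrypter.
CCCryptorRef cryptor = NULL;
CCCryptorStatus status = CCCryptorCreateWithMode(kCCDecrypt,
                                                 kCCModeCFB,
                                                 kCCAlgorithmAES,
                                                 ccNoPadding,   // CFB is a stream mode, no padding
                                                 [iv bytes],
                                                 [key bytes], kCCKeySizeAES256,
                                                 NULL, 0,       // no tweak
                                                 0, 0,          // default rounds, no mode options
                                                 &cryptor);
if (status == kCCSuccess) {
    NSMutableData *plain = [NSMutableData dataWithLength:[encrypted length]];
    size_t moved = 0;
    status = CCCryptorUpdate(cryptor,
                             [encrypted bytes], [encrypted length],
                             [plain mutableBytes], [plain length],
                             &moved);
    [plain setLength:moved];
    CCCryptorRelease(cryptor);
    // Note: the Go code base64-encodes the plaintext before encrypting,
    // so `plain` would still need base64 decoding afterwards.
}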
My recommendation is to use RNCryptor; there are iOS and Go implementations available.
RNCryptor combines all the necessary cryptographic primitives for your needs including:
AES-256 encryption (Advanced Encryption Standard)
CBC mode (Cipher Block Chaining)
Password stretching with PBKDF2 (Password Based Key Derivation Function 2)
Password salting
Random IV (Initialization Vector)
Encrypt-then-hash HMAC (Authentication)
It has been extensively deployed and vetted.
It is all too easy to get cryptography wrong, and using RNCryptor will avoid the potential pitfalls.
If I had the cryptographic needs you have I would use it.
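If you do go that route, a typical round trip with RNCryptor's Objective-C API looks something like the following sketch, based on the library's documented class methods (password is an assumed variable):

#import "RNEncryptor.h"
#import "RNDecryptor.h"

// Sketch: password-based encrypt/decrypt with RNCryptor's data format.
NSData *plaintext = [@"secret message" dataUsingEncoding:NSUTF8StringEncoding];
NSError *error = nil;

NSData *encrypted = [RNEncryptor encryptData:plaintext
                                withSettings:kRNCryptorAES256Settings
                                    password:password
                                       error:&error];

NSData *decrypted = [RNDecryptor decryptData:encrypted
                                withPassword:password
                                       error:&error];

// The Go side would use the matching RNCryptor-go implementation so both
// ends agree on key derivation, IV handling, and the HMAC layout.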
So, basically I want to play some audio files (MP3 and CAF mostly), but the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
    CAStreamBasicDescription mDataFormat;
    AudioQueueRef mQueue;
    AudioQueueBufferRef mBuffers[kBufferNum];
    AudioFileID mAudioFile;
    UInt32 bufferByteSize;
    SInt64 mCurrentPacket;
    UInt32 mNumPacketsToRead;
    AudioStreamPacketDescription *mPacketDescs;
    bool mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    NSLog(@"HandleOutput");
    AQPlayerState *pAqData = (AQPlayerState *)aqData;
    if (pAqData->mIsRunning == false) return;
    UInt32 numBytesReadFromFile;
    UInt32 numPackets = pAqData->mNumPacketsToRead;
    AudioFileReadPackets(pAqData->mAudioFile,
                         false,
                         &numBytesReadFromFile,
                         pAqData->mPacketDescs,
                         pAqData->mCurrentPacket,
                         &numPackets,
                         inBuffer->mAudioData);
    if (numPackets > 0) {
        inBuffer->mAudioDataByteSize = numBytesReadFromFile;
        AudioQueueEnqueueBuffer(pAqData->mQueue,
                                inBuffer,
                                (pAqData->mPacketDescs ? numPackets : 0),
                                pAqData->mPacketDescs);
        pAqData->mCurrentPacket += numPackets;
    } else {
        // AudioQueueStop(pAqData->mQueue, false);
        // AudioQueueDispose(pAqData->mQueue, true);
        // AudioFileClose(pAqData->mAudioFile);
        // free(pAqData->mPacketDescs);
        // free(pAqData->mFloatBuffer);
        pAqData->mIsRunning = false;
    }
}
And here's my method:
- (void)playFile
{
    AQPlayerState aqData;
    // get the source file
    NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
    NSURL *url2 = [NSURL fileURLWithPath:p];
    CFURLRef srcFile = (__bridge CFURLRef)url2;
    OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
    CFRelease(srcFile);
    CheckError(result, "Error opening sound file");
    UInt32 size = sizeof(aqData.mDataFormat);
    CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
               "Error getting file's data format");
    CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
               "Error AudioQueueNewOutput");
    // we need to calculate how many packets we read at a time and how big a buffer we need
    // we base this on the size of the packets in the file and an approximate duration for each buffer
    {
        bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);
        // first check to see what the max size of a packet is - if it is bigger
        // than our allocation default size, that needs to become larger
        UInt32 maxPacketSize;
        size = sizeof(maxPacketSize);
        CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
                   "Error getting max packet size");
        // adjust buffer size to represent about a second of audio based on this format
        CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);
        if (isFormatVBR) {
            aqData.mPacketDescs = new AudioStreamPacketDescription[aqData.mNumPacketsToRead];
        } else {
            aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
        }
        printf("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
    }
    // if the file has a magic cookie, we should get it and set it on the AQ
    size = sizeof(UInt32);
    result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
    if (!result && size) {
        char *cookie = new char[size];
        CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
                   "Error getting cookie from file");
        CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
                   "Error setting cookie on queue");
        delete[] cookie;
    }
    aqData.mCurrentPacket = 0;
    for (int i = 0; i < kBufferNum; ++i) {
        CheckError(AudioQueueAllocateBuffer(aqData.mQueue,
                                            aqData.bufferByteSize,
                                            &aqData.mBuffers[i]),
                   "Error AudioQueueAllocateBuffer");
        HandleOutputBuffer(&aqData,
                           aqData.mQueue,
                           aqData.mBuffers[i]);
    }
    // set queue's gain
    Float32 gain = 1.0;
    CheckError(AudioQueueSetParameter(aqData.mQueue,
                                      kAudioQueueParam_Volume,
                                      gain),
               "Error AudioQueueSetParameter");
    aqData.mIsRunning = true;
    CheckError(AudioQueueStart(aqData.mQueue,
                               NULL),
               "Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain(), and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue?
When I start the queue, no callbacks are fired at all.
What am I doing wrong? Thanks for any ideas.
What you are trying to do here is a basic example of audio playback using Audio Queues. Without looking at your code in detail to see what's missing (that could take a while), I'd rather recommend that you follow the steps in this basic sample code, which does exactly what you're doing without the extras that aren't really relevant (for example, why are you setting the audio gain?).
Somewhere else you were trying to play audio using Audio Units. Audio Units are more complex than basic Audio Queue playback, and I wouldn't attempt them before being very comfortable with Audio Queues. You can look at this example project for a basic example of Audio Queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of online tutorials is that they add extra stuff and often mix it with Objective-C code, when Core Audio is purely C (i.e., the extra stuff won't add anything to the learning process). I strongly recommend going over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)
I'm almost done with this task, but I'm stuck at a point where I'm getting only a partial result.
I have a server (Linux or Windows) and a client (iOS) with a TCP/IP socket connection between them. I connect on form load in my iPhone simulator, so the connection between server and iPhone happens automatically as the application opens. The server sends back whatever data I send from the simulator and prints it in the log. But I'm not able to receive the whole response: for "Innovations" I receive maybe just "In" or "Innova", etc. Below are the code snippets.
void TCPClient()
{
    CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault, host, port, &readStream, &writeStream);
    [NSThread sleepForTimeInterval:2]; // Delay
    CFWriteStreamSetProperty(writeStream, kCFStreamPropertyShouldCloseNativeSocket, kCFBooleanTrue);
    if (!CFWriteStreamOpen(writeStream))
    {
        NSLog(@"Error Opening Socket");
    }
    else
    {
        UInt8 buf[] = "Innovations";
        int bytesWritten = CFWriteStreamWrite(writeStream, buf, strlen((char*)buf));
        NSLog(@"Written: %d", bytesWritten);
    }
    CFReadStreamSetProperty(readStream, kCFStreamPropertyShouldCloseNativeSocket, kCFBooleanTrue);
    if (!CFReadStreamOpen(readStream))
    {
        NSLog(@"Error reading");
    }
    else
    {
        UInt8 bufr[15];
        int bytesRead = CFReadStreamRead(readStream, bufr, strlen((char*)bufr));
        NSLog(@"Read: %d", bytesRead);
        NSLog(@"buffer: %s", bufr);
    }
}
Notice that in the read I did change the array size, but I still get the error. The same happens in the IBAction of a button: for every click I send data, and I don't get back the response for that same data.
Any valuable suggestions?
One error is that
int bytesRead = CFReadStreamRead(readStream, bufr,strlen((char*)bufr));
should be
int bytesRead = CFReadStreamRead(readStream, bufr, sizeof(bufr));
The last parameter of CFReadStreamRead is the capacity of the read buffer and determines the maximum number of bytes read. strlen((char*)bufr) is the length of the string currently in the buffer. You should also NULL-terminate the string in bufr before printing it.
With this modification, your program might work with short strings. But there will be problems as soon as you try to send or receive larger amounts of data: a socket write can write fewer bytes than you asked it to, and a socket read can return fewer bytes than you requested.
Have a look at the Stream Programming Guide, which describes how to register the socket streams with the run loop and handle stream events asynchronously.
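To illustrate the partial-write point, a blocking write loop over a CFWriteStream might look like this minimal sketch (error handling kept to the essentials):

#include <CoreFoundation/CoreFoundation.h>

// Sketch: keep writing until the whole buffer has been accepted.
// CFWriteStreamWrite may accept fewer bytes than requested on each call.
static Boolean WriteAll(CFWriteStreamRef stream, const UInt8 *buf, CFIndex length)
{
    CFIndex written = 0;
    while (written < length) {
        CFIndex n = CFWriteStreamWrite(stream, buf + written, length - written);
        if (n <= 0) {
            return false; // stream error or the stream was closed
        }
        written += n;
    }
    return true;
}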