I am trying to read a custom file from my Documents directory via NSInputStream to upload it to an FTP server. I'm using the exact same code as demonstrated in the SimpleFTPSample provided by Apple. It seems to work fine as long as the file is empty, but as soon as it contains data, it fails.
Here's my code. This is how I create the input stream, I tried it both ways:
//Approach one: Init with path
self.fileStream = [NSInputStream inputStreamWithFileAtPath:filePath];
//Approach two: Init with data
NSData* fileData = [NSData dataWithContentsOfFile:filePath];
self.fileStream = [NSInputStream inputStreamWithData:fileData];
If I init the stream with data, I get EXC_BAD_ACCESS (code=1, address=0x0) when invoking read on the stream (code snippet below); if I use the path, it jumps straight to "File read error".
filePath is @"/var/mobile/Applications/94292A2A-37FC-4D8E-BDA6-A26963932AE6/Documents/1395576645.cpn", and the NSData is returned properly with 806 bytes.
This is part of the stream:handleEvent: delegate method:
if (self.bufferOffset == self.bufferLimit) {
    NSInteger bytesRead;
    bytesRead = [self.fileStream read:self.buffer maxLength:kSendBufferSize];

    if (bytesRead == -1) {
        if (kPrintsToConsole) NSLog(@"File read error");
    } else if (bytesRead == 0) {
        [self stopSending];
    } else {
        self.bufferOffset = 0;
        self.bufferLimit = bytesRead;
    }
}
I'm kinda stuck. Hope you guys can help me out. I'm running iOS 7.1 and Xcode 5.1.
You need to call [self.fileStream open] before reading. This applies to both the file-based and the data-based approach.
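In other words, something along these lines before the first read (a minimal sketch using the names from the question):
// Open the source stream before the first -read:maxLength: call;
// an unopened stream is exactly what produces the -1 return (path-based)
// or the crash (data-based) described above.
self.fileStream = [NSInputStream inputStreamWithFileAtPath:filePath];
[self.fileStream open];

// ... the existing read code in stream:handleEvent: then works unchanged:
// bytesRead = [self.fileStream read:self.buffer maxLength:kSendBufferSize];

// and when the upload finishes:
[self.fileStream close];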
Right now I'm investigating the possibility of implementing video streaming through the MultipeerConnectivity framework. For that purpose I'm using NSInputStream and NSOutputStream.
The problem is that I can't receive any picture so far. Right now I'm trying to pass a simple picture and show it on the receiver. Here's a little snippet of my code:
Sending picture via NSOutputStream:
- (void)sendMessageToStream
{
    NSData *imgData = UIImagePNGRepresentation(_testImage);

    int img_length = (int)[imgData length];
    NSMutableData *msgData = [[NSMutableData alloc] initWithBytes:&img_length length:sizeof(img_length)];
    [msgData appendData:imgData];
    int msg_length = (int)[msgData length];

    uint8_t *readBytes = (uint8_t *)[msgData bytes];

    uint8_t buf[msg_length];
    (void)memcpy(buf, readBytes, msg_length);

    int stream_len = [_stream write:(uint8_t *)buf maxLength:msg_length];
    //int stream_len = [_stream write:(uint8_t *)buf maxLength:data_length];
    //NSLog(@"stream_len = %d", stream_len);

    _tmpCounter++;
    dispatch_async(dispatch_get_main_queue(), ^{
        _lblOperationsCounter.text = [NSString stringWithFormat:@"Sent: %ld", (long)_tmpCounter];
    });
}
The code above works totally fine. After writing, stream_len equals 29627 bytes, which is the expected value because the image's size is around 25-26 KB.
Receiving the picture via NSInputStream:
- (void)readDataFromStream
{
    UInt32 length;
    if (_currentFrameSize == 0) {
        uint8_t frameSize[4];
        length = [_stream read:frameSize maxLength:sizeof(int)];
        unsigned int b = frameSize[3];
        b <<= 8;
        b |= frameSize[2];
        b <<= 8;
        b |= frameSize[1];
        b <<= 8;
        b |= frameSize[0];
        _currentFrameSize = b;
    }

    uint8_t bytes[1024];
    length = [_stream read:bytes maxLength:1024];
    [_frameData appendBytes:bytes length:length];

    if ([_frameData length] >= _currentFrameSize) {
        UIImage *img = [UIImage imageWithData:_frameData];
        NSLog(@"SETUP IMAGE!");
        _imgView.image = img;
        _currentFrameSize = 0;
        [_frameData setLength:0];
    }

    _tmpCounter++;
    dispatch_async(dispatch_get_main_queue(), ^{
        _lblOperationsCounter.text = [NSString stringWithFormat:@"Received: %ld", (long)_tmpCounter];
    });
}
As you can see, I'm trying to receive the picture in several steps, and here's why: when I read data from the stream, it always reads a maximum of 1095 bytes, no matter what number I put in the maxLength: parameter. But when I send the picture in the first snippet of code, it is sent absolutely fine (29627 bytes). By the way, the image's size is around 29 KB.
That's where my question comes up: why is that? Why does sending 29 KB via NSOutputStream work totally fine while receiving causes problems? And is there a solid way to make video streaming work through NSInputStream and NSOutputStream? I just didn't find much information about this technology; all I found were some simple things which I knew already.
Here's an app I wrote that shows you how:
https://app.box.com/s/94dcm9qjk8giuar08305qspdbe0pc784
Build the project with Xcode 9 and run the app on two iOS 11 devices.
To stream live video, touch the Camera icon on one of two devices.
If you don't have two devices, you can run one app in the Simulator; however, you can only use the camera on the real device (the Simulator will display the broadcast video).
Just so you know: this is not the ideal way to stream real-time video between devices (it should probably be your last choice). Data packets (versus streaming) are way more efficient and faster.
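For example, to move a single finished image between peers, a plain MCSession data send is usually all you need (a rough sketch; `session` is assumed to be your connected MCSession, and `_testImage` / `_imgView` are the properties from the question):
// Sender: ship the whole PNG as one reliable packet instead of a stream.
NSData *imgData = UIImagePNGRepresentation(_testImage);
NSError *sendError = nil;
[session sendData:imgData
          toPeers:session.connectedPeers
         withMode:MCSessionSendDataReliable
            error:&sendError];

// Receiver: MCSessionDelegate hands you the complete payload in one piece.
- (void)session:(MCSession *)session didReceiveData:(NSData *)data fromPeer:(MCPeerID *)peerID
{
    UIImage *img = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        _imgView.image = img;
    });
}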
Regardless, I'm really confused by your NSInputStream-related code. Here's something that makes a little more sense, I think:
case NSStreamEventHasBytesAvailable: {
    // len is a global variable set to a non-zero value;
    // mdata is an NSMutableData object that is reset when a new input
    // stream is created.
    // displayImage is a block that accepts the image data and a reference
    // to the layer on which the image will be rendered
    uint8_t buf[len];
    len = [aStream read:buf maxLength:len];
    if (len > 0) {
        [mdata appendBytes:(const void *)buf length:len];
    } else {
        displayImage(mdata, wLayer);
    }
    break;
}
The output stream code should look something like this:
// data is an NSData object that contains the image data from the video camera;
// len is a global variable set to a non-zero value;
// byteIndex is a global variable set to zero each time a new output stream is created
if (data.length > 0 && len >= 0 && (byteIndex <= data.length)) {
    len = (data.length - byteIndex) < DATA_LENGTH ? (data.length - byteIndex) : DATA_LENGTH;
    uint8_t bytes[len];
    [data getBytes:bytes range:NSMakeRange(byteIndex, len)];
    byteIndex += [oStream write:(const uint8_t *)bytes maxLength:len];
}
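Both snippets are meant to be driven from the NSStreamDelegate callback, the read side from NSStreamEventHasBytesAvailable and the write side from NSStreamEventHasSpaceAvailable; a bare skeleton (my own sketch, not the code from the linked app) looks like this:
- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    switch (eventCode) {
        case NSStreamEventHasBytesAvailable:
            // input stream: the read-and-append code above goes here
            break;
        case NSStreamEventHasSpaceAvailable:
            // output stream: the chunked write above goes here, advancing byteIndex
            break;
        case NSStreamEventErrorOccurred:
        case NSStreamEventEndEncountered:
            [aStream close];
            [aStream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
            break;
        default:
            break;
    }
}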
There's a lot more to streaming video than setting up the NSStream classes correctly—a lot more. You'll notice in my app, I created a cache for the input and output streams. This solved a myriad of issues that you would likely encounter if you don't do the same.
I have never seen anyone successfully use NSStreams for video streaming... ever. For one thing, it's highly complex.
There are many different (and better) ways to stream video; I wouldn't go this route. I just took it on because no one else had been able to do it successfully.
I think the problem is in your assumption that all data will be available in the NSInputStream at all times while you are reading it. An NSInputStream made from an NSURL object has an asynchronous nature, and it should be accessed accordingly using an NSStreamDelegate. You can look at the example in the README of POSInputStreamLibrary.
I am trying to read an audio file (that is not supported by iOS) with ffmpeg and then play it using AVAudioPlayer. It took me a while to get ffmpeg built inside an iOS project, but I finally did using kewlbear/FFmpeg-iOS-build-script.
This is the snippet I have right now, after a lot of searching on the web, including stackoverflow. One of the best examples I found was here.
I believe this is all the relevant code. I added comments to let you know what I'm doing and where I need something clever to happen.
#import "FFmpegWrapper.h"
#import <AVFoundation/AVFoundation.h>
AVFormatContext *formatContext = NULL;
AVStream *audioStream = NULL;
av_register_all();
avformat_network_init();
avcodec_register_all();
// this is a file located on my NAS
int opened = avformat_open_input(&formatContext, "http://192.168.1.70:50002/m/NDLNA/43729.flac", NULL, NULL);

// couldn't open the file (avformat_open_input returns 0 on success and a negative error code on failure)
if (opened != 0) {
    avformat_close_input(&formatContext);
}
int streamInfoValue = avformat_find_stream_info(formatContext, NULL);
// can't open stream
if (streamInfoValue < 0)
{
    avformat_close_input(&formatContext);
}
// number of streams available
int inputStreamCount = formatContext->nb_streams;
for (unsigned int i = 0; i < inputStreamCount; i++)
{
    // I'm only interested in the audio stream
    if (formatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
    {
        // found the audio stream
        audioStream = formatContext->streams[i];
    }
}
if (audioStream == NULL) {
    // no audio stream
}
AVFrame* frame = av_frame_alloc();
AVCodecContext* codecContext = audioStream->codec;
codecContext->codec = avcodec_find_decoder(codecContext->codec_id);
if (codecContext->codec == NULL)
{
    av_free(frame);
    avformat_close_input(&formatContext);
    // no proper codec found
}
else if (avcodec_open2(codecContext, codecContext->codec, NULL) != 0)
{
    av_free(frame);
    avformat_close_input(&formatContext);
    // could not open the context with the decoder
}
// this is displaying: This stream has 2 channels and a sample rate of 44100Hz
// which makes sense
NSLog(#"This stream has %d channels and a sample rate of %dHz", codecContext->channels, codecContext->sample_rate);
AVPacket packet;
av_init_packet(&packet);
// this is where I try to store in the sound data
NSMutableData *soundData = [[NSMutableData alloc] init];
while (av_read_frame(formatContext, &packet) == 0)
{
    if (packet.stream_index == audioStream->index)
    {
        // Try to decode the packet into a frame
        int frameFinished = 0;
        avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet);

        // Some frames rely on multiple packets, so we have to make sure the frame is finished
        // before we can use it
        if (frameFinished)
        {
            // this is where I think something clever needs to be done
            // I need to store some bytes, but I can't figure out what exactly and what length?
            // should the length be multiplied by the number of channels?
            NSData *frameData = [[NSData alloc] initWithBytes:packet.buf->data length:packet.buf->size];
            [soundData appendData:frameData];
        }
    }

    // You *must* call av_free_packet() after each call to av_read_frame() or else you'll leak memory
    av_free_packet(&packet);
}
// first try to write it to a file, see if that works
// this is indeed writing bytes, but it is unplayable
[soundData writeToFile:@"output.wav" atomically:YES];

NSError *error;
// this is my final goal, playing it with the AVAudioPlayer, but this is giving unclear errors
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:soundData error:&error];
if (player == nil) {
    NSLog(@"%@", error.description); // Domain=NSOSStatusErrorDomain Code=1954115647 "(null)"
} else {
    [player prepareToPlay];
    [player play];
}
// Some codecs will cause frames to be buffered up in the decoding process. If the CODEC_CAP_DELAY flag
// is set, there can be buffered up frames that need to be flushed, so we'll do that
if (codecContext->codec->capabilities & CODEC_CAP_DELAY)
{
    av_init_packet(&packet);

    // Decode all the remaining frames in the buffer, until the end is reached
    int frameFinished = 0;
    while (avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet) >= 0 && frameFinished)
    {
    }
}
av_free(frame);
avcodec_close(codecContext);
avformat_close_input(&formatContext);
I never really found a solution to this specific problem, but I ended up using ap4y/OrigamiEngine instead.
The main reason I wanted to use FFmpeg was to play unsupported audio files (FLAC/OGG) on iOS and tvOS, and OrigamiEngine does the job just fine.
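For anyone who does stick with the FFmpeg route: the decoded samples live in the AVFrame, not in the AVPacket, so the bytes to append inside if (frameFinished) come from the frame. A rough, untested sketch, assuming the decoder outputs a packed (interleaved) sample format:
// Inside if (frameFinished): copy the decoded, interleaved PCM out of the frame.
int bytesPerSample = av_get_bytes_per_sample(codecContext->sample_fmt);
int dataSize = frame->nb_samples * codecContext->channels * bytesPerSample;
[soundData appendBytes:frame->data[0] length:dataSize];

// Caveats: the result is raw PCM, and AVAudioPlayer expects a real container
// (e.g. a WAV header in front of the samples), so writing these bytes straight
// to "output.wav" stays unplayable. Planar sample formats keep each channel in
// its own plane (frame->data[0..channels-1]) and would need interleaving first.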
I'm trying to perform a deep copy of a CMSampleBufferRef for an audio and video connection. I need to use this buffer for delayed processing. Can somebody help here by pointing me to some sample code?
Thanks
I solved this problem.
I needed access to the sample data for a long period of time, and tried many ways:
CVPixelBufferRetain -----> program broken
CVPixelBufferPool -----> program broken
CVPixelBufferCreateWithBytes ----> it can solve the problem, but it will reduce performance; Apple does not recommend doing it this way
CMSampleBufferCreateCopy ---> it is OK, and Apple recommends it.
Apple's documentation puts it like this: "To maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped. If your application is causing samples to be dropped by retaining the provided CMSampleBuffer objects for too long, but it needs access to the sample data for a long period of time, consider copying the data into a new buffer and then calling CFRelease on the sample buffer (if it was previously retained) so that the memory it references can be reused."
Ref: https://developer.apple.com/reference/avfoundation/avcapturefileoutputdelegate/1390096-captureoutput
That might be what you need:
#pragma mark - captureOutput
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (connection == m_videoConnection) {
        /* if you have not read m_sampleBuffer yet, you must CFRelease it here,
           otherwise it causes samples to be dropped */
        if (m_sampleBuffer) {
            CFRelease(m_sampleBuffer);
            m_sampleBuffer = nil;
        }
        OSStatus status = CMSampleBufferCreateCopy(kCFAllocatorDefault, sampleBuffer, &m_sampleBuffer);
        if (noErr != status) {
            m_sampleBuffer = nil;
        }
        NSLog(@"m_sampleBuffer = %p sampleBuffer = %p", m_sampleBuffer, sampleBuffer);
    }
}
#pragma mark - get CVPixelBufferRef to use for a long period of time
- (ACResult)readVideoFrame:(CVPixelBufferRef *)pixelBuffer {
    while (1) {
        dispatch_sync(m_readVideoData, ^{
            if (!m_sampleBuffer) {
                _readDataSuccess = NO;
                return;
            }

            CMSampleBufferRef sampleBufferCopy = nil;
            OSStatus status = CMSampleBufferCreateCopy(kCFAllocatorDefault, m_sampleBuffer, &sampleBufferCopy);
            if (noErr == status)
            {
                CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBufferCopy);
                *pixelBuffer = buffer;
                _readDataSuccess = YES;
                NSLog(@"m_sampleBuffer = %p", m_sampleBuffer);
                CFRelease(m_sampleBuffer);
                m_sampleBuffer = nil;
            }
            else {
                _readDataSuccess = NO;
                CFRelease(m_sampleBuffer);
                m_sampleBuffer = nil;
            }
        });

        if (_readDataSuccess) {
            _readDataSuccess = NO;
            return ACResultNoErr;
        }
        else {
            usleep(15 * 1000);
            continue;
        }
    }
}
Then you can use it like this:
- (void)getCaptureVideoDataToEncode {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        while (1) {
            CVPixelBufferRef buffer = NULL;
            ACResult result = [videoCapture readVideoFrame:&buffer];
            if (ACResultNoErr == result) {
                ACResult error = [videoEncode encoder:buffer outputPacket:&streamPacket];
                if (buffer) {
                    CVPixelBufferRelease(buffer);
                    buffer = NULL;
                }
                if (ACResultNoErr == error) {
                    NSLog(@"encode success");
                }
            }
        }
    });
}
I did this. CMSampleBufferCreateCopy can indeed deep copy, but a new problem appears: the captureOutput delegate doesn't work anymore.
I am trying to read a JSON file containing contact-info objects consisting of NSString types and NSMutableArrays. Currently, I am using NSData to read the whole file and then parsing through it. I have used Stig's example as mentioned here: SBJson4Parser Example
SBJson4ValueBlock block = ^(id obj, BOOL *stop) {
    NSLog(@"Found: %@", @([obj isKindOfClass:[NSDictionary class]]));
    //contactsData *contact = obj;
    NSDictionary *contact = obj;
    NSLog(@"Contact: %@", contact);
    /* NSString *fName, *lName;
    fName = [contact objectForKey:@"mFirstName"];
    lName = [contact objectForKey:@"mLastName"];
    NSLog(@"First Name: %@", fName);
    NSLog(@"Last Name: %@", lName);
    */
};

SBJson4ErrorBlock eh = ^(NSError *err) {
    NSLog(@"Oops: %@", err);
};

NSLog(@"Parse work");
id parser = [SBJson4Parser multiRootParserWithBlock:block
                                       errorHandler:eh];

//uint8_t buf[1024];
//unsigned int len = 0;
NSLog(@"Trying to push stream to data");
//[inputStream read:buf maxLength:len];
NSData *data = [NSData dataWithContentsOfFile:filePath options:NSUTF8StringEncoding error:NULL];
//id data = [json da:NSUTF8StringEncoding];

SBJson4ParserStatus status = [parser parse:data];
NSLog(@"Status: %u", status);
These days people seem to have hundreds or even thousands of contacts, thanks to social networks. Will this lead to a larger memory footprint on an iOS device? If so, how do I parse a single object from a stream? If I have to use a delegate, an example would be greatly appreciated.
Please note that I am new to the world of iOS development as well as Objective-C.
The structure of the JSON file in question:
{
    "mAddresses": [],
    "mContactPhoto": "",
    "mDisplayName": ",tarun,,,,israni,,",
    "mPhoneNumberList": [
        {
            "mLabel": "_$!<Home>!$_",
            "mNumber": "(988) 034-5678",
            "mType": 1
        }
    ]
}{
    "mAddresses": [],
    "mContactPhoto": "",
    "mDisplayName": ",Sumit,,,,Kumar,,",
    "mPhoneNumberList": [
        {
            "mLabel": "_$!<Home>!$_",
            "mNumber": "(789) 034-5123",
            "mType": 1
        }
    ]
}
Your solution looks like it should work to me. If your file is so big that you don't want to hold it all in memory, i.e. you want to avoid this line:
NSData *data = [NSData dataWithContentsOfFile:filePath options:NSUTF8StringEncoding error:NULL];
then you can use an NSInputStream (untested, but hopefully you get the gist):
id parser = [SBJson4Parser multiRootParserWithBlock:block
                                       errorHandler:eh];

NSInputStream *is = [NSInputStream inputStreamWithFileAtPath:filePath];
[is open];

// buffer to read from the input stream
uint8_t buf[1024];
NSInteger bytesRead;

// read from the input stream until it is empty or an error occurs;
// better error handling is left as an exercise for the reader
while ((bytesRead = [is read:buf maxLength:sizeof buf]) > 0) {
    SBJson4ParserStatus status = [parser parse:[NSData dataWithBytes:buf length:bytesRead]];
    NSLog(@"Status: %u", status);
    // handle parser errors here
}

[is close];
However, you still have to read and parse the whole file to guarantee that you find a particular contact. There is no way to read just a specific contact this way. If that is something you do often, you may want to store your contacts in a different way that supports that scenario better. One way would be to use, e.g., SQLite, as sketched below.
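For example, with the plain C SQLite API (linking libsqlite3) you could index contacts by name and pull back just the one you need. This is only a bare-bones sketch; the table layout is made up for illustration:
#import <sqlite3.h>

sqlite3 *db = NULL;
sqlite3_open([dbPath UTF8String], &db);   // dbPath: wherever you keep the database file

const char *sql = "SELECT mDisplayName, mNumber FROM contacts WHERE mDisplayName LIKE ?";
sqlite3_stmt *stmt = NULL;
if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
    sqlite3_bind_text(stmt, 1, "%israni%", -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        NSString *name = [NSString stringWithUTF8String:(const char *)sqlite3_column_text(stmt, 0)];
        NSString *number = [NSString stringWithUTF8String:(const char *)sqlite3_column_text(stmt, 1)];
        NSLog(@"Matched contact: %@ (%@)", name, number);
    }
}
sqlite3_finalize(stmt);
sqlite3_close(db);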
I have a .NET web service response containing a byte[] entry, among other fields.
The data is a PDF file.
I extract the Dictionary from the received data with:
[NSJSONSerialization JSONObjectWithData]
After that, I use the following code to convert the byte[] to NSData.
I then save the result to disk (see last line).
When opening the resulting PDF file, I get the following error:
"failed to find PDF header: `%PDF' not found."
NSArray *byteArray = [rootDictionary objectForKey:@"file"];

unsigned c = byteArray.count;
uint8_t *bytes = malloc(sizeof(*bytes) * c);

unsigned i;
for (i = 0; i < c; i++)
{
    NSString *str = [byteArray objectAtIndex:i];
    int byte = [str intValue];
    bytes[i] = (uint8_t)byte;
}
NSData* data = [NSData dataWithBytes:(const void *)byteArray length:sizeof(unsigned char)*c];
//Save to disk using svc class.
NSString *localPath = [svc saveReport:data ToFile:[rootDictionary objectForKey:@"name"]];
I also tried converting the byte[] to a base64 NSString (on the service side) and then back to NSData in my app, which worked (** mostly), but I was told that it's sloppy code.
** When pulling multiple PDFs asynchronously at the same time, some of the reports received as base64 strings were also corrupted.
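(For reference, the base64 route on the app side boils down to the NSData base64 initializer that shipped with iOS 7; roughly like this, not my exact code:)
// Assuming the service returns the file as a base64 string under the same "file" key.
NSString *base64String = [rootDictionary objectForKey:@"file"];
NSData *pdfData = [[NSData alloc] initWithBase64EncodedString:base64String
                                                      options:NSDataBase64DecodingIgnoreUnknownCharacters];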
PS. Please let me know if I must supply the code from my svc class as well, but I don't think the problem is there.
Edit:
I created a new web service method which takes a byte[] as input, then modified my iOS app to send the byteArray variable back to the service, where it gets saved to a file.
The resulting PDF is a valid file readable by Adobe, meaning there is no corruption during transfer.
Thank you!
OK, I finally sorted this out after going through my code with a fine-tooth comb (inspired by snadeep.gvn from http://www.raywenderlich.com/forums/viewtopic.php?f=2&p=38590#p38590).
I made a stupid mistake, which I had overlooked 100+ times.
This line of code:
NSData* data = [NSData dataWithBytes:(const void *)byteArray length:sizeof(unsigned char)*c];
Should change to:
NSData* data = [NSData dataWithBytes:(const void *)bytes length:sizeof(unsigned char)*c];
Good times, now I can finally get some sleep :-)
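One small aside on the snippet above: bytes is malloc'd and never freed, so the conversion leaks. A slightly tidier variant of the same loop (appending into NSMutableData so nothing has to be freed) might look like this:
NSArray *byteArray = [rootDictionary objectForKey:@"file"];
NSMutableData *data = [NSMutableData dataWithCapacity:byteArray.count];
for (id value in byteArray) {
    uint8_t byte = (uint8_t)[value intValue];   // works for NSString or NSNumber entries
    [data appendBytes:&byte length:1];
}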