I receive an EXC_BAD_ACCESS (code=1, address=0x0) when I call the function vuRenderControllerUpdateVideoBackgroundTexture in Vuforia 10.
None of the parameters I passed are NULL/nullptr, though.
- (void)renderFrame {
if(![[self session]cameraIsStarted]) {
[[self view]setHidden:YES];
return;
}
[[self view]setHidden:NO];
CAMetalLayer* layer = (CAMetalLayer*) view.layer;
if(!_videoTexture) {
[self viewDidChangeSize:[layer drawableSize]];
}
id<CAMetalDrawable> drawable = [layer nextDrawable];
if(!drawable) {
return;
}
VuState* state = nullptr;
vuEngineAcquireLatestState([session engine], &state);
if(vuStateHasCameraFrame(state) == VU_FALSE) {
vuStateRelease(state);
return;
}
id<MTLCommandBuffer> commandBuffer = [[self commandQueue] commandBuffer];
MTLRenderPassDescriptor* backgroundPassDescriptor = [[MTLRenderPassDescriptor alloc] init];
MTLRenderPassColorAttachmentDescriptor* colorAttachment = [backgroundPassDescriptor colorAttachments][0];
[colorAttachment setTexture:[drawable texture]];
[colorAttachment setLoadAction:MTLLoadActionClear];
[colorAttachment setClearColor:MTLClearColorMake(0.0,0.0,0.0,1.0)];
[colorAttachment setStoreAction:MTLStoreActionStore];
[[backgroundPassDescriptor depthAttachment] setTexture:[self videoDepthTexture]];
[[backgroundPassDescriptor depthAttachment]setLoadAction:MTLLoadActionClear];
[[backgroundPassDescriptor depthAttachment]setStoreAction:MTLStoreActionStore];
[[backgroundPassDescriptor depthAttachment]setClearDepth:0.0f];
id<MTLRenderCommandEncoder> backgroundEncoder = [commandBuffer renderCommandEncoderWithDescriptor:backgroundPassDescriptor];
int textureUnitData = 0;
VuRenderVideoBackgroundData backgroundData;
backgroundData.renderData = &backgroundEncoder;
backgroundData.textureUnitData = &textureUnitData;
backgroundData.textureData = &_videoTexture;
VuResult result = vuRenderControllerUpdateVideoBackgroundTexture([session renderController], state, &backgroundData); // crash here
At the call site, the variables look like this (see the attached debugger screenshot):
Am I missing an undocumented precondition?
I am not familiar with Vuforia, so I'm not sure if this will help.
The 0x0 address is indeed likely to be a NULL pointer dereference.
I don't see the contents of state in your image. Did you make sure state is no longer NULL?
Otherwise, check whether there are methods to properly instantiate the VuState and VuRenderVideoBackgroundData objects; sometimes just declaring them is not enough.
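For what it's worth, here is a minimal sketch of the extra guards I would add before the call. It only reuses the calls already shown in your code; the VU_SUCCESS check and the nil-texture check are my assumptions about what the SDK dereferences internally, not something I can confirm from the Vuforia docs.
// Sketch only: defensive checks before vuRenderControllerUpdateVideoBackgroundTexture.
// Assumes the same session/_videoTexture/backgroundEncoder variables as in the question.
VuState* state = nullptr;
if (vuEngineAcquireLatestState([session engine], &state) != VU_SUCCESS || state == nullptr) {
    // state was never populated; passing it on would dereference NULL inside the SDK
    return;
}
if (vuStateHasCameraFrame(state) == VU_FALSE) {
    vuStateRelease(state);
    return;
}
// &_videoTexture is never NULL, but if _videoTexture itself is nil the struct
// effectively points at a nil texture, which could explain an access at address 0x0.
if (!_videoTexture || !backgroundEncoder) {
    vuStateRelease(state);
    return;
}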
I have been getting a libyuv crash recently.
I have tried a lot of things, but nothing helped.
Please help, or give me some ideas on how to achieve this. Thanks!
I have an iOS project (Objective-C). One of its functions encodes the video stream.
My idea is:
Step 1: Start a timer (20 FPS; a sketch of this timer is shown just before the code below)
Step 2: Copy and get the bitmap data
Step 3: Transfer the bitmap data to YUV I420 (libyuv)
Step 4: Encode to the h264 format (Openh264)
Step 5: Send the h264 data with RTSP
All of these functions run in the foreground.
It works well for 3~4 hours.
But it always crashes after 4+ hours.
I checked the CPU (39%) and memory (140 MB); they are stable (no memory leak, no CPU spike, etc.).
I have tried a lot of things with no success (including adding try-catch to my project and checking the data size before this line runs).
I found that it runs longer if I lower the frame rate (20 FPS -> 15 FPS).
Do I need to add something after encoding each frame?
Could someone help me or give me some ideas about this? Thanks!
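Step 1 is the only part not covered by the code below. A minimal sketch of such a GCD timer looks like this (the queue name and the copyCurrentBitmapData helper are purely illustrative, not my actual code):
// Step 1 sketch: a 20 FPS GCD timer driving processSDLFrame:.
dispatch_queue_t encodeQueue = dispatch_queue_create("encoder.timer", DISPATCH_QUEUE_SERIAL);
dispatch_source_t frameTimer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, encodeQueue);
dispatch_source_set_timer(frameTimer,
                          dispatch_time(DISPATCH_TIME_NOW, 0),
                          NSEC_PER_SEC / 20,    // fire every 1/20 s
                          NSEC_PER_SEC / 100);  // small leeway for the scheduler
dispatch_source_set_event_handler(frameTimer, ^{
    NSData *bitmap = [self copyCurrentBitmapData]; // hypothetical Step 2 helper
    [self processSDLFrame:bitmap];
});
// Keep a strong reference to frameTimer (e.g. in an ivar); otherwise it is
// deallocated and the timer stops firing.
dispatch_resume(frameTimer);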
// This function runs in a GCD timer
- (void)processSDLFrame:(NSData *)_frameData {
if (mH264EncoderPtr == NULL) {
[self initEncoder];
return;
}
int argbSize = mMapWidth * mMapHeight * 4;
NSData *frameData = [[NSData alloc] initWithData:_frameData];
if ([frameData length] == 0 || [frameData length] != argbSize) {
NSLog(#"Incorrect frame with size : %ld\n", [frameData length]);
return;
}
SFrameBSInfo info;
memset(&info, 0, sizeof (SFrameBSInfo));
SSourcePicture pic;
memset(&pic, 0, sizeof (SSourcePicture));
pic.iPicWidth = mMapWidth;
pic.iPicHeight = mMapHeight;
pic.uiTimeStamp = [[NSDate date] timeIntervalSince1970];
@try {
libyuv::ConvertToI420(
static_cast<const uint8 *>([frameData bytes]), // sample
argbSize, // sample_size
mDstY, // dst_y
mStrideY, // dst_stride_y
mDstU, // dst_u
mStrideU, // dst_stride_u
mDstV, // dst_v
mStrideV, // dst_stride_v
0, // crop_x
0, // crop_y
mMapWidth, // src_width
mMapHeight, // src_height
mMapWidth, // crop_width
mMapHeight, // crop_height
libyuv::kRotateNone, // rotation
libyuv::FOURCC_ARGB); // fourcc
} @catch (NSException *exception) {
NSLog(@"libyuv::ConvertToI420 - exception: %@", exception.reason);
return;
}
pic.iColorFormat = videoFormatI420;
pic.iStride[0] = mStrideY;
pic.iStride[1] = mStrideU;
pic.iStride[2] = mStrideV;
pic.pData[0] = mDstY;
pic.pData[1] = mDstU;
pic.pData[2] = mDstV;
if (mH264EncoderPtr == NULL) {
NSLog(#"OpenH264Manager - encoder not initialized");
return;
}
int rv = -1;
@try {
rv = mH264EncoderPtr->EncodeFrame(&pic, &info);
} @catch (NSException *exception) {
NSLog( @"NSException caught - mH264EncoderPtr->EncodeFrame" );
NSLog( @"Name: %@", exception.name);
NSLog( @"Reason: %@", exception.reason );
[self deinitEncoder];
return;
}
if (rv != cmResultSuccess) {
NSLog(#"OpenH264Manager - encode failed : %d", rv);
[self deinitEncoder];
return;
}
if (info.eFrameType == videoFrameTypeSkip) {
NSLog(#"OpenH264Manager - drop skipped frame");
return;
}
// handle buffer data
int size = 0;
int layerSize[MAX_LAYER_NUM_OF_FRAME] = { 0 };
for (int layer = 0; layer < info.iLayerNum; layer++) {
for (int i = 0; i < info.sLayerInfo[layer].iNalCount; i++) {
layerSize[layer] += info.sLayerInfo[layer].pNalLengthInByte[i];
}
size += layerSize[layer];
}
uint8 *output = (uint8 *)malloc(size);
size = 0;
for (int layer = 0; layer < info.iLayerNum; layer++) {
memcpy(output + size, info.sLayerInfo[layer].pBsBuf, layerSize[layer]);
size += layerSize[layer];
}
// alloc new buffer for streaming
NSData *newData = [NSData dataWithBytes:output length:size];
// Send the data with RTSP
sendData( newData );
// free output buffer data
free(output);
}
[Jan/08/2020 Update]
I reported this issue on the libyuv issue tracker:
https://bugs.chromium.org/p/libyuv/issues/detail?id=853
The Googler gave me some feedback:
ARGBToI420 does no allocations. It's similar to a memcpy with a source and destination and a number of pixels to convert.
The most common issues with it are:
1. The destination buffer has been deallocated. Try adding validation that the YUV buffer is valid: write to the first and last byte of each plane.
This often occurs on shutdown when threads don't shut down in the order you were hoping. A mutex guarding the memory could help.
2. The destination is an odd size and the allocator did not allocate enough memory. When allocating the U and V planes, use (width + 1) / 2 for the width/stride and (height + 1) / 2 for the height, and allocate stride * height bytes. You could also use an allocator that verifies there are no overreads or overwrites, or a sanitizer like ASan/MSan.
When screen casting, windows are usually a multiple of 2 pixels on Windows and Linux, but I have seen macOS use an odd pixel count.
As a test you could wrap the function with temporary buffers. Copy the ARGB to a temporary ARGB buffer.
Call ARGBToI420 to a temporary I420 buffer.
Copy the I420 result to the final I420 buffer.
That should give you a clue which buffer/function is failing.
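For my own reference, here is a rough sketch of the allocation described in point 2, using the same mDstY/mDstU/mDstV/mStride* names as my code (illustrative only, not my actual init code):
// Point 2 sketch: round the chroma planes up so odd frame sizes get enough memory.
int chromaWidth  = (mMapWidth + 1) / 2;
int chromaHeight = (mMapHeight + 1) / 2;

mStrideY = mMapWidth;
mStrideU = chromaWidth;
mStrideV = chromaWidth;

mDstY = (uint8 *)malloc((size_t)mStrideY * mMapHeight);
mDstU = (uint8 *)malloc((size_t)mStrideU * chromaHeight);
mDstV = (uint8 *)malloc((size_t)mStrideV * chromaHeight);

// Point 1 sketch: touch the first and last byte of each plane so an invalid
// buffer crashes here rather than inside ConvertToI420.
mDstY[0] = mDstY[(size_t)mStrideY * mMapHeight   - 1] = 0;
mDstU[0] = mDstU[(size_t)mStrideU * chromaHeight - 1] = 0;
mDstV[0] = mDstV[(size_t)mStrideV * chromaHeight - 1] = 0;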
I will try them.
I am trying to read an audio file (that is not supported by iOS) with ffmpeg and then play it using AVAudioPlayer. It took me a while to get ffmpeg built inside an iOS project, but I finally did using kewlbear/FFmpeg-iOS-build-script.
This is the snippet I have right now, after a lot of searching on the web, including stackoverflow. One of the best examples I found was here.
I believe this is all the relevant code. I added comments to let you know what I'm doing and where I need something clever to happen.
#import "FFmpegWrapper.h"
#import <AVFoundation/AVFoundation.h>
AVFormatContext *formatContext = NULL;
AVStream *audioStream = NULL;
av_register_all();
avformat_network_init();
avcodec_register_all();
// this is a file located on my NAS
int opened = avformat_open_input(&formatContext, "http://192.168.1.70:50002/m/NDLNA/43729.flac", NULL, NULL);
// can't open file
if (opened != 0) {
avformat_close_input(&formatContext);
}
int streamInfoValue = avformat_find_stream_info(formatContext, NULL);
// can't open stream
if (streamInfoValue < 0)
{
avformat_close_input(&formatContext);
}
// number of streams available
int inputStreamCount = formatContext->nb_streams;
for(unsigned int i = 0; i<inputStreamCount; i++)
{
// I'm only interested in the audio stream
if(formatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
// found audio stream
audioStream = formatContext->streams[i];
}
}
if(audioStream == NULL) {
// no audio stream
}
AVFrame* frame = av_frame_alloc();
AVCodecContext* codecContext = audioStream->codec;
codecContext->codec = avcodec_find_decoder(codecContext->codec_id);
if (codecContext->codec == NULL)
{
av_free(frame);
avformat_close_input(&formatContext);
// no proper codec found
}
else if (avcodec_open2(codecContext, codecContext->codec, NULL) != 0)
{
av_free(frame);
avformat_close_input(&formatContext);
// could not open the context with the decoder
}
// this is displaying: This stream has 2 channels and a sample rate of 44100Hz
// which makes sense
NSLog(#"This stream has %d channels and a sample rate of %dHz", codecContext->channels, codecContext->sample_rate);
AVPacket packet;
av_init_packet(&packet);
// this is where I try to store in the sound data
NSMutableData *soundData = [[NSMutableData alloc] init];
while (av_read_frame(formatContext, &packet) == 0)
{
if (packet.stream_index == audioStream->index)
{
// Try to decode the packet into a frame
int frameFinished = 0;
avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet);
// Some frames rely on multiple packets, so we have to make sure the frame is finished before
// we can use it
if (frameFinished)
{
// this is where I think something clever needs to be done
// I need to store some bytes, but I can't figure out what exactly and what length?
// should the length be multiplied by the of the number of channels?
NSData *frameData = [[NSData alloc] initWithBytes:packet.buf->data length:packet.buf->size];
[soundData appendData: frameData];
}
}
// You *must* call av_free_packet() after each call to av_read_frame() or else you'll leak memory
av_free_packet(&packet);
}
// first try to write it to a file, see if that works
// this is indeed writing bytes, but it is unplayable
[soundData writeToFile:#"output.wav" atomically:YES];
NSError *error;
// this is my final goal, playing it with the AVAudioPlayer, but this is giving unclear errors
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:soundData error:&error];
if(player == nil) {
NSLog(@"%@", error.description); // Domain=NSOSStatusErrorDomain Code=1954115647 "(null)"
} else {
[player prepareToPlay];
[player play];
}
// Some codecs will cause frames to be buffered up in the decoding process. If the CODEC_CAP_DELAY flag
// is set, there can be buffered up frames that need to be flushed, so we'll do that
if (codecContext->codec->capabilities & CODEC_CAP_DELAY)
{
av_init_packet(&packet);
// Decode all the remaining frames in the buffer, until the end is reached
int frameFinished = 0;
while (avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet) >= 0 && frameFinished)
{
}
}
av_free(frame);
avcodec_close(codecContext);
avformat_close_input(&formatContext);
I never really found a solution to this specific problem, but I ended up using ap4y/OrigamiEngine instead.
The main reason I wanted to use FFmpeg was to play unsupported audio files (FLAC/OGG) on iOS and tvOS, and OrigamiEngine does the job just fine.
I'm trying to perform a deep copy of a CMSampleBufferRef for an audio/video connection. I need to use this buffer for delayed processing. Can somebody help here by pointing to some sample code?
Thanks
I solved this problem.
I needed access to the sample data for a long period of time.
I tried many ways:
CVPixelBufferRetain -----> program broken
CVPixelBufferPool -----> program broken
CVPixelBufferCreateWithBytes ----> it can solve the problem, but it reduces performance; Apple does not recommend doing it this way
CMSampleBufferCreateCopy ---> it is OK, and Apple recommends it.
From the Apple documentation: To maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped. If your application is causing samples to be dropped by retaining the provided CMSampleBuffer objects for too long, but it needs access to the sample data for a long period of time, consider copying the data into a new buffer and then calling CFRelease on the sample buffer (if it was previously retained) so that the memory it references can be reused.
REF:https://developer.apple.com/reference/avfoundation/avcapturefileoutputdelegate/1390096-captureoutput
This might be what you need:
#pragma mark - captureOutput
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
if (connection == m_videoConnection) {
/* If you have not yet consumed m_sampleBuffer, you must CFRelease it here;
   otherwise it causes samples to be dropped. */
if (m_sampleBuffer) {
CFRelease(m_sampleBuffer);
m_sampleBuffer = nil;
}
OSStatus status = CMSampleBufferCreateCopy(kCFAllocatorDefault, sampleBuffer, &m_sampleBuffer);
if (noErr != status) {
m_sampleBuffer = nil;
}
NSLog(#"m_sampleBuffer = %p sampleBuffer= %p",m_sampleBuffer,sampleBuffer);
}
}
#pragma mark - get CVPixelBufferRef to use for a long period of time
- (ACResult) readVideoFrame: (CVPixelBufferRef *)pixelBuffer{
while (1) {
dispatch_sync(m_readVideoData, ^{
if (!m_sampleBuffer) {
_readDataSuccess = NO;
return;
}
CMSampleBufferRef sampleBufferCopy = nil;
OSStatus status = CMSampleBufferCreateCopy(kCFAllocatorDefault, m_sampleBuffer, &sampleBufferCopy);
if ( noErr == status)
{
CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBufferCopy);
// The Get call does not transfer ownership, so retain the pixel buffer for the
// caller (it is released later with CVPixelBufferRelease), then release the
// temporary sample-buffer copy.
CVPixelBufferRetain(buffer);
CFRelease(sampleBufferCopy);
*pixelBuffer = buffer;
_readDataSuccess = YES;
NSLog(@"m_sampleBuffer = %p ", m_sampleBuffer);
CFRelease(m_sampleBuffer);
m_sampleBuffer = nil;
}
else{
_readDataSuccess = NO;
CFRelease(m_sampleBuffer);
m_sampleBuffer = nil;
}
});
if (_readDataSuccess) {
_readDataSuccess = NO;
return ACResultNoErr;
}
else{
usleep(15*1000);
continue;
}
}
}
Then you can use it like this:
-(void)getCaptureVideoDataToEncode{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(){
while (1) {
CVPixelBufferRef buffer = NULL;
ACResult result= [videoCapture readVideoFrame:&buffer];
if (ACResultNoErr == result) {
ACResult error = [videoEncode encoder:buffer outputPacket:&streamPacket];
if (buffer) {
CVPixelBufferRelease(buffer);
buffer = NULL;
}
if (ACResultNoErr == error) {
NSLog(#"encode success");
}
}
}
});
}
I did this, and CMSampleBufferCreateCopy can indeed deep copy.
But a new problem appeared: the captureOutput delegate doesn't work anymore.
I'm trying to backtrace the usermode stack of a thread during a minifilter callback function.
Assuming that I'm in the same context as the calling thread, getting the thread's stack address from its TEB/TIB and processing the addresses on that stack should allow me to backtrace it.
Since the addresses I'm getting do not correspond to the usermode modules I expect to be making the system call, there must be something I'm doing wrong.
I would be glad if you could point me in the right direction.
Thanks in advance.
Following is the code that is reading the stack content:
pTEB = (PVOID *)((char *)pThread + 0x20);
// Read TIB
pTib = (NT_TIB*)pTEB;
stackBottom = (PVOID*)pTib->StackLimit;
stackTop = (PVOID*)pTib->StackBase;
LogDbgView(("stackBottom=%p, stackTop=%p",stackBottom, stackTop));
if (!MyReadMemory(IoGetCurrentProcess(), stackBottom, buf, stackTop-stackBottom))
{
LogDbgView(("Read Memory = %x",buf));
LogDbgView(("Read Memory = %x",buf+8));
LogDbgView(("Read Memory = %x",buf+16));
LogDbgView(("Read Memory = %x",buf+24));
}
Below are the functions that gets the module names and addresses:
PVOID LoadModulesInformation()
{
PVOID pSystemInfoBuffer = NULL;
__try
{
NTSTATUS status = STATUS_INSUFFICIENT_RESOURCES;
ULONG SystemInfoBufferSize = 0;
status = ZwQuerySystemInformation(SystemModuleInformation,
&SystemInfoBufferSize,
0,
&SystemInfoBufferSize);
if (!SystemInfoBufferSize)
return NULL;
pSystemInfoBuffer = (PVOID)ExAllocatePool(NonPagedPool, SystemInfoBufferSize*2);
if (!pSystemInfoBuffer)
return NULL;
memset(pSystemInfoBuffer, 0, SystemInfoBufferSize*2);
status = ZwQuerySystemInformation(SystemModuleInformation,
pSystemInfoBuffer,
SystemInfoBufferSize*2,
&SystemInfoBufferSize);
if (NT_SUCCESS(status))
{
return pSystemInfoBuffer;
}
}
__except(EXCEPTION_EXECUTE_HANDLER)
{
}
return NULL;
}
PUNICODE_STRING findModuleName(PVOID addr, PVOID pSystemInfoBuffer, ULONG Tag FILE_AND_LINE_ARGS)
{
PVOID pModuleBase = NULL;
PCHAR pCharRet=NULL;
PUNICODE_STRING pus = NULL;
__try
{
if (pSystemInfoBuffer != NULL && MmIsAddressValid(addr))
{
PSYSTEM_MODULE_ENTRY pSysModuleEntry = ((PSYSTEM_MODULE_INFORMATION)(pSystemInfoBuffer))->Module;
ULONG i;
for (i = 0; i <((PSYSTEM_MODULE_INFORMATION)(pSystemInfoBuffer))->Count; i++)
{
if ((pSysModuleEntry[i].Base <= addr) && (pSysModuleEntry[i].Size < ((ULONG)addr - (ULONG)pSysModuleEntry[i].Base)))
{
pCharRet = pSysModuleEntry[i].ImageName+pSysModuleEntry[i].PathLength;
break;
}
}
}
}
__except(EXCEPTION_EXECUTE_HANDLER)
{
pCharRet = NULL;
}
if (pCharRet)
{
pus = UtlpCharToUnicode(pCharRet, TRUE, TRUE, Tag FILE_AND_LINE_PARAMS);
}
else
{
pus = UtlpCharToUnicode("UNKNOWN", TRUE, TRUE, Tag FILE_AND_LINE_PARAMS);
}
return pus;
}
pTEB = (PVOID *)((char *)pThread + 0x20);
Don't do this at all, even in a PoC. The structure layout changes from time to time. Use PsGetProcessPeb instead (see how here: https://code.google.com/p/arkitlib/source/browse/trunk/ARKitDrv/Ps.c).
LogDbgView(("Read Memory = %x",buf));
You don't read memory at the buf address here; you only print the value of buf itself. In C you must dereference the pointer to read the memory it points to, like this:
LogDbgView(("Read Memory = %p", *(PVOID *)buf));
LogDbgView(("Read Memory = %x",buf+8));
To be able to compile for x86 as well as for x64, you must avoid hard-coded offsets like this. Instead, use sizeof(PVOID):
LogDbgView(("Read Memory = %p", *(PVOID *)((PUCHAR)buf + sizeof(PVOID))));
I am working on an iPhone application. I am trying to send a message containing a URL over SMTP via the Gmail server, using the CFNetwork framework. Sometimes the mail is sent without problems, but many times I get an EXC_BAD_ACCESS exception at this line:
if (CFWriteStreamCanAcceptBytes(outputStream))
Class 1: HSK_CFUtilities
CFIndex CFWriteStreamWriteFully(CFWriteStreamRef outputStream, const uint8_t* buffer, CFIndex length)
{
CFIndex bufferOffset = 0;
CFIndex bytesWritten;
while (bufferOffset < length)
{
if (CFWriteStreamCanAcceptBytes(outputStream))
{
bytesWritten = CFWriteStreamWrite(outputStream, &(buffer[bufferOffset]), length - bufferOffset);
if (bytesWritten < 0)
{
// Bail!
return bytesWritten;
}
bufferOffset += bytesWritten;
}
else if (CFWriteStreamGetStatus(outputStream) == kCFStreamStatusError)
{
return -1;
}
else
{
// Pump the runloop
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.0, true);
}
}
return bufferOffset;
}
Class 2: SKPSMTPMessage, in method parseBuffer
case kSKPSMTPWaitingSendSuccess:
{
if ([tmpLine hasPrefix:#"250 "])
{
sendState = kSKPSMTPWaitingQuitReply;
NSString *quitString = #"QUIT\r\n";
DEBUGLOG(#"C: %#", quitString);
if (CFWriteStreamWriteFully((CFWriteStreamRef)outputStream, (const uint8_t *)[quitString UTF8String], [quitString lengthOfBytesUsingEncoding:NSUTF8StringEncoding]) < 0)
{
error = [outputStream streamError];
encounteredError = YES;
}
else
{
[self startShortWatchdog];
}
}
I was wondering if you could give me a hint about this? I would appreciate any help. Thank you in advance. Best regards.
Because it was an EXC_BAD_ACCESS error: in my case, when I declared a strong property for the SKPSMTPMessage in the relevant .h file and used that SKPSMTPMessage object as a class-level (shared) instance, it worked.
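For illustration, a declaration along these lines (the class and property names here are hypothetical) keeps ARC from deallocating the message while the stream callbacks are still running:
// Hypothetical example: hold the in-flight message strongly instead of in a local variable.
@interface MailSender : NSObject
@property (nonatomic, strong) SKPSMTPMessage *smtpMessage;
@end

// ...
self.smtpMessage = [[SKPSMTPMessage alloc] init];
// configure recipients, subject, parts, delegate, etc.
[self.smtpMessage send];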
Here's a great link on what causes EXC_BAD_ACCESS and how to track down the root problem:
Lou Franco's Understanding EXC_BAD_ACCESS