Adding the GPUImage framework to an iOS project - ios

The hours are turning into days trying to add the GPUImage framework to an iOS project. Now that I've got it working, I'm trying the sample live video filtering code from the Sunset Lake Software page. The app fails to build with the following red error: 'Use of undeclared identifier 'thresholdFilter''
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 768, 1024)];
// problem here
[videoCamera addTarget:thresholdFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
Using Xcode 6.0.1 and testing the app on an iPad 2 with iOS 8.0.2. If required, I can post screenshots of how I embedded the framework.

First, the code written in my initial blog post announcing the framework should not be copied to use with the modern version of the framework. That initial post was written over two years ago, and does not reflect the current state of the API. In fact, I've just deleted all of that code from the original post and directed folks to the instructions on the GitHub page which are kept up to date. Thanks for the reminder.
Second, the problem you're describing above is that you're attempting to use a variable called thresholdFilter without ever defining such a variable. That's not a problem with the framework; the compiler simply has no idea what you're referring to.
Third, the above code won't work for another reason: you're not holding on to your camera instance. You're defining it locally, instead of assigning it to an instance variable of your encapsulating class. This will lead to ARC deallocating that camera as soon as your above setup method is complete, leading to a black screen or to a crash. You need to create an instance variable or property and assign your camera to that, so that a strong reference is made to it.
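As a rough sketch of both fixes, assuming a stock luminance threshold filter is what was intended (the class, method, and property names here are illustrative, not from the original post):

@interface MyViewController ()
@property (strong, nonatomic) GPUImageVideoCamera *videoCamera;
@property (strong, nonatomic) GPUImageLuminanceThresholdFilter *thresholdFilter;
@end

@implementation MyViewController

- (void)setupCamera
{
    // Strong properties keep ARC from deallocating the camera and filter
    // as soon as this setup method returns.
    self.videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];

    // Define the filter the code refers to; here a stock luminance threshold filter.
    self.thresholdFilter = [[GPUImageLuminanceThresholdFilter alloc] init];

    GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 768, 1024)];
    [self.view addSubview:filteredVideoView];

    // Chain camera -> filter -> view, then start capture.
    [self.videoCamera addTarget:self.thresholdFilter];
    [self.thresholdFilter addTarget:filteredVideoView];
    [self.videoCamera startCameraCapture];
}

@end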

Related

Add a stamp annotation using PSPDFKit (iOS, Objective-C)

I am using the PSPDFKit framework and I am unable to add a stamp annotation. I have implemented the following:
[pdfController.annotationStateManager toggleState:PSPDFAnnotationStringStamp];
NSMutableArray<PSPDFStampAnnotation *> *defaultStamps = [NSMutableArray array];
for (NSString *stampSubject in @[@"Great!", @"Stamp", @"Like"]) {
    PSPDFStampAnnotation *stamp = [[PSPDFStampAnnotation alloc] initWithSubject:stampSubject];
    stamp.boundingBox = CGRectMake(0.f, 0.f, 200.f, 70.f);
    [defaultStamps addObject:stamp];
}
PSPDFStampAnnotation *imageStamp = [[PSPDFStampAnnotation alloc] init];
imageStamp.image = [UIImage imageNamed:@"abc.jpg"];
imageStamp.boundingBox = CGRectMake(0.f, 0.f, imageStamp.image.size.width/4.f, imageStamp.image.size.height/4.f);
[defaultStamps addObject:imageStamp];
[PSPDFStampViewController setDefaultStampAnnotations:defaultStamps];
but I have no output.
Peter here, Founder and CEO of PSPDFKit, the PDF SDK for iOS, Android, Web, macOS and (soon) Windows. The best way to reach our support team is directly via our support page. Support is part of your license subscription.
You're setting default stamps for the PSPDFStampViewController. Can you post a screenshot of how things look? You're replacing the defaults here (APPROVED, REJECTED, and so on) with your own images, which is valid and works.
Note that you only need to call this once, and it needs to be called before you switch/toggle the stamp mode, so your current code will not work.
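As a rough sketch, reusing the defaultStamps array and pdfController from your snippet, the calls would be ordered like this:

// Register the custom default stamps once, before stamp mode is ever toggled.
[PSPDFStampViewController setDefaultStampAnnotations:defaultStamps];

// Only then switch the annotation state manager into stamp mode.
[pdfController.annotationStateManager toggleState:PSPDFAnnotationStringStamp];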
Please also make sure you use the latest version so we can rule out any old bugs or incompatibilities. As of writing this, it's Xcode 8.3 and PSPDFKit 6.6 (click for release blog post).
Stamps only show up if you have the annotation component licensed - if you ping us on support my team can check what components you have to make sure that's not the problem here.
If you're just trying to programmatically add annotations via our framework, check out this guide article instead.

Xcode UI Testing - can't find element that was just recorded

I'm trying out Xcode UI testing. I've just recorded a simple test and replayed it and it's failing on the first step. The code is:
XCUIApplication *app = [[XCUIApplication alloc] init];
XCUIElementQuery *scrollViewsQuery = app.scrollViews;
[[scrollViewsQuery.otherElements containingType:XCUIElementTypeStaticText identifier:@"First Page"].element tap];
The line that's failing is the last one, and the error message is 'UI Testing Failure - No matches found for ScrollView'
Why is this failing? How can I interact with this element in this view?
Try referencing the static text directly, instead of through the otherElements accessor.
[scrollViewsQuery.staticTexts[@"First Page"] tap];
Is your application a menu bar app by chance? I've found that XCUITest has bugs when dealing with finding elements belonging to menu bar apps specifically.
Try making sure that in your "MYAPP-Info.plist" file, the entry "Application is agent (UIElement)" is set to "NO".

Capture an image of a skin rash in ResearchKit

I am working on an Apple ResearchKit application for lupus patients. I have already added some surveys and a walking activity task.
Now I need to capture an image of a skin rash at frequent intervals, save it inside the app only (not in the Photos app), and compare the newest image with the last image taken.
I need to know whether I can use ResearchKit to do this. How can I open the iPhone camera and capture an image using ResearchKit? I know image comparison is a task outside ResearchKit, but my first priority is capturing the image in ResearchKit. Is this possible with ResearchKit, or do I have to do this task outside the scope of RK?
Please provide me with any code or any link if available.
Thanks in advance
@prateek ResearchKit has an Image Capture step which can do what you require. You'll also have to declare an output directory for the captured image in your task view controller. Sample code below.
ORKImageCaptureStep *imageCaptureStep = [[ORKImageCaptureStep alloc] initWithIdentifier:@"ImageCaptureStep"];
imageCaptureStep.title = /*Title for the step*/;
// Wrap the step in a task; ORKTaskViewController expects a task, not a bare step.
ORKOrderedTask *task = [[ORKOrderedTask alloc] initWithIdentifier:@"ImageCaptureTask" steps:@[imageCaptureStep]];
ORKTaskViewController *taskViewController = [[ORKTaskViewController alloc] initWithTask:task taskRunUUID:nil];
taskViewController.delegate = self;
taskViewController.outputDirectory = /*where to store your image*/;
And don't forget to implement the "ORKTaskViewControllerDelegate" for the task view controller.
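A minimal sketch of that delegate method, assuming you dismiss the task view controller yourself and that the result handling shown is only illustrative:

- (void)taskViewController:(ORKTaskViewController *)taskViewController
       didFinishWithReason:(ORKTaskViewControllerFinishReason)reason
                     error:(NSError *)error
{
    if (reason == ORKTaskViewControllerFinishReasonCompleted) {
        // The captured photo is written under taskViewController.outputDirectory;
        // the corresponding file result can be found in taskViewController.result.
    }
    [taskViewController dismissViewControllerAnimated:YES completion:nil];
}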

CVPixelBufferRef as a GPU Texture

I have one (or possibly two) CVPixelBufferRef objects I am processing on the CPU and then placing the results onto a final CVPixelBufferRef. I would like to do this processing on the GPU using GLSL instead, because the CPU can barely keep up (these are frames of live video). I know this is possible "directly" (i.e. by writing my own OpenGL code), but from the (absolutely impenetrable) sample code I've looked at, it's an insane amount of work.
Two options seem to be:
1) GPUImage: This is an awesome library, but I'm a little unclear on whether I can do what I want easily. The first thing I tried was requesting OpenGL ES-compatible pixel buffers using this code:
@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
   (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : [NSNumber numberWithBool:YES] };
Then transferring data from the CVPixelBufferRef to GPUImageRawDataInput as follows:
// setup:
_foreground = [[GPUImageRawDataInput alloc] initWithBytes:nil size:CGSizeMake(0, 0) pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte];
// call for each frame:
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer), CVPixelBufferGetHeight(foregroundPixelBuffer))];
However, my CPU usage goes from 7% to 27% on an iPhone 5S just with that line (no processing or anything). This suggests there's some copying going on on the CPU, or something else is wrong. Am I missing something?
2) OpenFrameworks: OF is commonly used for this type of thing, and OF projects can easily be set up to use GLSL. However, two questions remain about this solution: 1. Can I use openFrameworks as a library, or do I have to rejigger my whole app just to use the OpenGL features? I don't see any tutorials or docs that show how I might do this without starting from scratch and creating an OF app. 2. Is it possible to use a CVPixelBufferRef as a texture?
I am targeting iOS 7+.
I was able to get this to work using the GPUImageMovie class. If you look inside this class, you'll see that there's a private method called:
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime
This method takes a CVPixelBufferRef as input.
To access this method, declare a class extension that exposes it inside your own class:
@interface GPUImageMovie ()
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime;
@end
Then initialize the class, set up the filter, and pass it your video frame:
GPUImageMovie *gpuMovie = [[GPUImageMovie alloc] initWithAsset:nil]; // <- call initWithAsset even though there's no asset
// to initialize internal data structures
// connect filters...
// Call the method we exposed
[gpuMovie processMovieFrame:myCVPixelBufferRef withSampleTime:kCMTimeZero];
One thing: you need to request your pixel buffers with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in order to match what the library expects.
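For what it's worth, a sketch of requesting that format from an AVCaptureVideoDataOutput, assuming that's where your buffers come from:

// Ask the capture pipeline for bi-planar full-range YCbCr frames so the
// resulting CVPixelBufferRefs match what the library expects.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };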

Use GPUImage with the Marmalade EDK

I'm trying to write a GPUImage extension for the Marmalade framework. For this I used the official documentation and the Marmalade Extension Development Kit (EDK). I wrote some sample code and compiled it with:
mkb s3egpuimage_iphone.mkb --arm --release --compiler clang
It compiles fine; I get the library and headers, link them with the Marmalade deployment tool, and the link completes fine. But when I install the IPA on an iPod touch and run this code, the application either freezes or crashes. The freeze or crash starts when I call:
[videoCamera startCameraCapture]
Of course, I initialized videoCamera with
[[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
and set up a simple target:
textureOutput = [[GPUImageTextureOutput alloc] init];
...
[videoCamera addTarget:textureOutput];
[videoCamera startCameraCapture];
NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
unlink([pathToMovie UTF8String]);
NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
movieWriter.shouldPassthroughAudio = YES;
videoCamera.audioEncodingTarget = movieWriter;
[movieWriter startRecording];
I've thought about this, but I don't understand it. Could you help me, please?
Again, I'd comment if I could, but... so this is a partial answer.
It's worth looking through the log to see if any messages come up that you weren't expecting. You haven't shown the s4e file, but here are a few things to consider:
1) At the lower level, are you running on the OS thread (either by stating it in the s4e file or by rolling your own)? Find out which thread the API should be accessed on, and be consistent - don't mix and match.
2) If you are on the OS thread, look out for any exceptions. (The Marmalade code that calls across to the OS thread does not like unhandled exceptions.)
3) The API that calls across threads uses varargs (...). This looks powerful, but there are known issues with varargs and we'd now advise against it - the issues relate to 64-bit and similar alignment problems. I suggest creating a parameter block for each function and passing that instead, as sketched below.
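A rough sketch of that parameter-block approach, with purely illustrative names (not actual Marmalade API):

// One plain struct per extension call instead of a varargs list; the whole
// block is passed across the OS-thread boundary by pointer.
typedef struct
{
    int   sessionPreset;   // e.g. an enum standing in for AVCaptureSessionPreset640x480
    int   cameraPosition;  // front or back camera
    void *textureOutput;   // opaque handle for the GPUImageTextureOutput
} s3eGPUImageStartCaptureParams;

// The extension entry point takes a single pointer, which marshals cleanly
// across threads and avoids the 64-bit varargs alignment issues mentioned above.
void s3eGPUImageStartCapture_platform(const s3eGPUImageStartCaptureParams *params)
{
    // ... forward the fields to the Objective-C camera setup here ...
}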
If you find any more feel free to post.
