For the life of me, I can't render an image to the iPhone simulator screen. I've simplified my code as much as possible.
The following code is in ViewController.m, a class that extends GLKViewController and also conforms to GLKViewDelegate.
- (void)viewDidLoad {
[super viewDidLoad];
/*Setup EAGLContext*/
self.context = [self createBestEAGLContext];
[EAGLContext setCurrentContext:self.context];
/*Setup View*/
GLKView *view = [[GLKView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
view.context = self.context;
view.delegate = self;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
self.view = view;
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
/*Setup GLK effect*/
self.effect = [[GLKBaseEffect alloc] init];
self.effect.transform.projectionMatrix = GLKMatrix4MakeOrtho(0, 320, 480, 0, -1, 1);
glClearColor(0.5, 1, 1, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
NSDictionary * options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
GLKTextureLoaderOriginBottomLeft,
nil];
NSError * error;
NSString *path = [[NSBundle mainBundle] pathForResource:@"soccerball" ofType:@"jpg"];
GLKTextureInfo * textureInfo = [GLKTextureLoader textureWithContentsOfFile:path options:options error:&error];
if (textureInfo == nil) {
NSLog(#"Error loading file: %#", [error localizedDescription]);
}
TexturedQuad newQuad;
newQuad.bl.geometryVertex = CGPointMake(0, 0);
newQuad.br.geometryVertex = CGPointMake(textureInfo.width, 0);
newQuad.tl.geometryVertex = CGPointMake(0, textureInfo.height);
newQuad.tr.geometryVertex = CGPointMake(textureInfo.width, textureInfo.height);
newQuad.bl.textureVertex = CGPointMake(0, 0);
newQuad.br.textureVertex = CGPointMake(1, 0);
newQuad.tl.textureVertex = CGPointMake(0, 1);
newQuad.tr.textureVertex = CGPointMake(1, 1);
self.effect.texture2d0.name = textureInfo.name;
self.effect.texture2d0.enabled = YES;
GLKMatrix4 modelMatrix = GLKMatrix4Identity;
modelMatrix = GLKMatrix4Translate(modelMatrix, 100, 200, 0);
self.effect.transform.modelviewMatrix = modelMatrix;
[self.effect prepareToDraw];
long offset = (long)&(newQuad);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (void *) (offset + offsetof(TexturedVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (void *) (offset + offsetof(TexturedVertex, textureVertex)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
and some of the structs used...
typedef struct {
CGPoint geometryVertex;
CGPoint textureVertex;
} TexturedVertex;
typedef struct {
TexturedVertex bl;
TexturedVertex br;
TexturedVertex tl;
TexturedVertex tr;
} TexturedQuad;
Right now the only thing that is working is
glClearColor(0.5, 1, 1, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
This does adjust the background colour, but the 'soccerball' image never appears.
Any help is greatly appreciated.
EDIT: The TexturedVertex CGPoints were incorrect, so I fixed them. The problem still persists.
Solution:
The TexturedVertex struct must not use CGPoint, but rather GLKVector2.
This is because of a conversion issue with the float values stored in these points: OpenGL ES expects single-precision floats (GL_FLOAT), but CGPoint stores CGFloat values, which are double precision on 64-bit hardware, so the vertex data gets misread. Furthermore, this problem only occurs from iOS 7.0 onward, on 64-bit devices such as the iPhone 5S.
Refer to here for more detail on the issue.
OpenGL ES Shaders and 64-bit iPhone 5S
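For reference, a minimal sketch of the corrected structs, with the quad setup using GLKVector2Make in place of CGPointMake:
typedef struct {
GLKVector2 geometryVertex; // single-precision floats, matching GL_FLOAT
GLKVector2 textureVertex;
} TexturedVertex;
typedef struct {
TexturedVertex bl;
TexturedVertex br;
TexturedVertex tl;
TexturedVertex tr;
} TexturedQuad;
// e.g. newQuad.bl.geometryVertex = GLKVector2Make(0, 0);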
I found this tutorial http://codethink.no-ip.org/wordpress/archives/673#comment-118063 via the SO question Screen capture video in iOS programmatically. It was a bit outdated for current iOS, so I updated it, and I am very close to having it work, but stitching the UIImages together into a video just isn't working right now.
Here is how I call the method in viewDidLoad
[captureView performSelector:@selector(startRecording) withObject:nil afterDelay:1.0];
[captureView performSelector:@selector(stopRecording) withObject:nil afterDelay:5.0];
and captureView is an IBOutlet connected to my view.
And then I have the class ScreenCapture.h & .m
Here is .h
@protocol ScreenCaptureViewDelegate <NSObject>
- (void) recordingFinished:(NSString*)outputPathOrNil;
@end
@interface ScreenCaptureView : UIView {
//video writing
AVAssetWriter *videoWriter;
AVAssetWriterInput *videoWriterInput;
AVAssetWriterInputPixelBufferAdaptor *avAdaptor;
//recording state
BOOL _recording;
NSDate* startedAt;
void* bitmapData;
}
//for recording video
- (bool) startRecording;
- (void) stopRecording;
//for accessing the current screen and adjusting the capture rate, etc.
@property(retain) UIImage* currentScreen;
@property(assign) float frameRate;
@property(nonatomic, assign) id<ScreenCaptureViewDelegate> delegate;
@end
And here is my .m
@interface ScreenCaptureView(Private)
- (void) writeVideoFrameAtTime:(CMTime)time;
@end
@implementation ScreenCaptureView
@synthesize currentScreen, frameRate, delegate;
- (void) initialize {
// Initialization code
self.clearsContextBeforeDrawing = YES;
self.currentScreen = nil;
self.frameRate = 10.0f; // 10 frames per second
_recording = false;
videoWriter = nil;
videoWriterInput = nil;
avAdaptor = nil;
startedAt = nil;
bitmapData = NULL;
}
- (id) initWithCoder:(NSCoder *)aDecoder {
self = [super initWithCoder:aDecoder];
if (self) {
[self initialize];
}
return self;
}
- (id) init {
self = [super init];
if (self) {
[self initialize];
}
return self;
}
- (id)initWithFrame:(CGRect)frame {
self = [super initWithFrame:frame];
if (self) {
[self initialize];
}
return self;
}
- (CGContextRef) createBitmapContextOfSize:(CGSize) size {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (size.width * 4);
bitmapByteCount = (bitmapBytesPerRow * size.height);
colorSpace = CGColorSpaceCreateDeviceRGB();
if (bitmapData != NULL) {
free(bitmapData);
}
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL) {
fprintf (stderr, "Memory not allocated!");
return NULL;
}
context = CGBitmapContextCreate (bitmapData,
size.width,
size.height,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo) kCGImageAlphaNoneSkipFirst);
if (context == NULL) {
free(bitmapData);
fprintf(stderr, "Context not created!");
return NULL;
}
CGContextSetAllowsAntialiasing(context, NO);
CGColorSpaceRelease( colorSpace );
return context;
}
static int frameCount = 0; //debugging
- (void) drawRect:(CGRect)rect {
NSDate* start = [NSDate date];
CGContextRef context = [self createBitmapContextOfSize:self.frame.size];
//not sure why this is necessary...image renders upside-down and mirrored
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
CGContextConcatCTM(context, flipVertical);
[self.layer renderInContext:context];
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage* background = [UIImage imageWithCGImage: cgImage];
CGImageRelease(cgImage);
self.currentScreen = background;
//debugging
if (frameCount < 40) {
NSString* filename = [NSString stringWithFormat:@"Documents/frame_%d.png", frameCount];
NSString* pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename];
[UIImagePNGRepresentation(self.currentScreen) writeToFile: pngPath atomically: YES];
frameCount++;
}
//NOTE: to record a scrollview while it is scrolling you need to implement your UIScrollViewDelegate such that it calls
// 'setNeedsDisplay' on the ScreenCaptureView.
if (_recording) {
float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
[self writeVideoFrameAtTime:CMTimeMake((int)millisElapsed, 1000)];
}
float processingSeconds = [[NSDate date] timeIntervalSinceDate:start];
float delayRemaining = (1.0 / self.frameRate) - processingSeconds;
CGContextRelease(context);
//redraw at the specified framerate
[self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:delayRemaining > 0.0 ? delayRemaining : 0.01];
}
- (void) cleanupWriter {
avAdaptor = nil;
videoWriterInput = nil;
videoWriter = nil;
startedAt = nil;
if (bitmapData != NULL) {
free(bitmapData);
bitmapData = NULL;
}
}
- (void)dealloc {
[self cleanupWriter];
}
- (NSURL*) tempFileURL {
NSString* outputPath = [[NSString alloc] initWithFormat:@"%@/%@", [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0], @"output.mp4"];
NSURL* outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
NSFileManager* fileManager = [NSFileManager defaultManager];
if ([fileManager fileExistsAtPath:outputPath]) {
NSError* error;
if ([fileManager removeItemAtPath:outputPath error:&error] == NO) {
NSLog(#"Could not delete old recording file at path: %#", outputPath);
}
}
return outputURL;
}
-(BOOL) setUpWriter {
NSError* error = nil;
videoWriter = [[AVAssetWriter alloc] initWithURL:[self tempFileURL] fileType:AVFileTypeQuickTimeMovie error:&error];
NSParameterAssert(videoWriter);
//Configure video
NSDictionary* videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithDouble:1024.0*1024.0], AVVideoAverageBitRateKey,
nil ];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:self.frame.size.width], AVVideoWidthKey,
[NSNumber numberWithInt:self.frame.size.height], AVVideoHeightKey,
videoCompressionProps, AVVideoCompressionPropertiesKey,
nil];
videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
NSParameterAssert(videoWriterInput);
videoWriterInput.expectsMediaDataInRealTime = YES;
NSDictionary* bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
avAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput sourcePixelBufferAttributes:bufferAttributes];
//add input
[videoWriter addInput:videoWriterInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:CMTimeMake(0, 1000)];
return YES;
}
- (void) completeRecordingSession {
[videoWriterInput markAsFinished];
// Wait for the video
int status = videoWriter.status;
while (status == AVAssetWriterStatusUnknown) {
NSLog(#"Waiting...");
[NSThread sleepForTimeInterval:0.5f];
status = videoWriter.status;
}
@synchronized(self) {
[videoWriter finishWritingWithCompletionHandler:^{
[self cleanupWriter];
BOOL success = YES;
id delegateObj = self.delegate;
NSString *outputPath = [[NSString alloc] initWithFormat:@"%@/%@", [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0], @"output.mp4"];
NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
NSLog(@"Completed recording, file is stored at: %@", outputURL);
if ([delegateObj respondsToSelector:@selector(recordingFinished:)]) {
[delegateObj performSelectorOnMainThread:@selector(recordingFinished:) withObject:(success ? outputURL : nil) waitUntilDone:YES];
}
}];
}
}
- (bool) startRecording {
bool result = NO;
@synchronized(self) {
if (! _recording) {
result = [self setUpWriter];
startedAt = [NSDate date];
_recording = true;
}
}
return result;
}
- (void) stopRecording {
@synchronized(self) {
if (_recording) {
_recording = false;
[self completeRecordingSession];
}
}
}
-(void) writeVideoFrameAtTime:(CMTime)time {
if (![videoWriterInput isReadyForMoreMediaData]) {
NSLog(#"Not ready for video data");
}
else {
@synchronized (self) {
UIImage *newFrame = self.currentScreen;
CVPixelBufferRef pixelBuffer = NULL;
CGImageRef cgImage = CGImageCreateCopy([newFrame CGImage]);
CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
int status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, avAdaptor.pixelBufferPool, &pixelBuffer);
if(status != 0){
//could not get a buffer from the pool
NSLog(#"Error creating pixel buffer: status=%d", status);
}
// set image data into pixel buffer
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels); //XXX: will work if the pixel buffer is contiguous and has the same bytesPerRow as the input data
if(status == 0){
BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
if (!success)
NSLog(#"Warning: Unable to write buffer to video");
}
//clean up
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CVPixelBufferRelease( pixelBuffer );
CFRelease(image);
CGImageRelease(cgImage);
}
}
}
As you can see, in the drawRect method I save all the images, and they look great on their own. But when I try to make the video, the output is a single distorted still frame, slanted and skewed, even though the saved frame images look normal.
My question is: what is going wrong when the video is being made?
Thanks for the help and your time; I know this is a long question.
I found this post after hitting the same issue: certain resolutions caused exactly the same video artifact when I wanted to create a CVPixelBufferRef from a CGImageRef (coming from a UIImage).
The very short answer in my case was that I had hard-wired the bytes per row to be 4 times the width, which used to work all the time. Now I query the CVPixelBuffer itself to get this value, and poof, problem solved!
Code that created the problem was this:
CGContextRef context = CGBitmapContextCreate(pxdata, w, h, 8, 4*w, rgbColorSpace, bitMapInfo);
Code that fixed the problem was this:
CGContextRef context = CGBitmapContextCreate(
pxdata, w, h,
8, CVPixelBufferGetBytesPerRow(pxbuffer),
rgbColorSpace,bitMapInfo);
And in both cases, the bitMapInfo was set as follows:
CGBitmapInfo bitMapInfo = kCGImageAlphaPremultipliedFirst; // According to Apple's docs, this is safe: June 26, 2014
Pixel buffer adaptors only work with certain pixel sizes of images, so you are probably going to need to change the size of your images. You can imagine that what's happening in your video is that the writer is trying to write your, let's say, 361x241 images into a 360x240 space: each row starts with the last pixel of the previous row, so the picture ends up diagonally skewed like you see. Check the Apple docs for supported dimensions; I believe I used 480x320, which is supported. You can use this method to resize your images:
+(UIImage *)scaleImage:(UIImage*)image toSize:(CGSize)newSize {
CGRect scaledImageRect = CGRectZero;
// Aspect-fit the image inside newSize, centred.
CGFloat aspectWidth = newSize.width / image.size.width;
CGFloat aspectHeight = newSize.height / image.size.height;
CGFloat aspectRatio = MIN(aspectWidth, aspectHeight);
scaledImageRect.size.width = image.size.width * aspectRatio;
scaledImageRect.size.height = image.size.height * aspectRatio;
scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0f;
scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0f;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
[image drawInRect:scaledImageRect];
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
I think this is because the pixel buffer's bytes per row does not match the UIImage's bytes per row. In my case (iPhone 6, iOS 8.3) the UIImage is 568 x 320 and CFDataGetLength is 727040, so the bytes per row is 2272. But the pixel buffer's bytes per row is 2304. I think this extra 32 bytes is padding so that the pixel buffer's bytes per row is divisible by 64. I'm not yet sure how to force the pixel buffer to match the input data, or vice versa, across all devices.
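A minimal sketch of a row-by-row copy that works around the mismatch, assuming the same variable names as the writeVideoFrameAtTime: code above and that the pixel buffer's base address is already locked:
size_t srcBytesPerRow = CGImageGetBytesPerRow(cgImage);
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t rowCount = CGImageGetHeight(cgImage);
const UInt8 *srcBytes = CFDataGetBytePtr(image);
uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
// Copy only the meaningful part of each row, skipping the pixel buffer's row padding.
size_t bytesPerRowToCopy = MIN(srcBytesPerRow, dstBytesPerRow);
for (size_t row = 0; row < rowCount; row++) {
memcpy(destPixels + row * dstBytesPerRow, srcBytes + row * srcBytesPerRow, bytesPerRowToCopy);
}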
I've struggled a lot with this case. I tried many ways to create a video from an image array, but the result was almost the same as yours.
The problem was in the CVPixelBuffer: the buffer I was creating from the image was not correct.
But finally I got it working.
Main function to create a video at a path from an array:
You just have to input the array of images and the fps, and size can be equal to the size of the images (if you want).
fps = number of images in the array / desired duration
For example: fps = 90 / 3 = 30
- (void)getVideoFrom:(NSArray *)array
toPath:(NSString*)path
size:(CGSize)size
fps:(int)fps
withCallbackBlock:(void (^) (BOOL))callbackBlock
{
NSLog(#"%#", path);
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
fileType:AVFileTypeMPEG4
error:&error];
if (error) {
if (callbackBlock) {
callbackBlock(NO);
}
return;
}
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecTypeH264,
AVVideoWidthKey: [NSNumber numberWithInt:size.width],
AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL; // created per frame below; the adaptor's pool is not needed here
CMTime presentTime = CMTimeMake(0, fps);
int i = 0;
while (1)
{
if(writerInput.readyForMoreMediaData){
presentTime = CMTimeMake(i, fps);
if (i >= [array count]) {
buffer = NULL;
} else {
buffer = [self pixelBufferFromCGImage:[array[i] CGImage] size:CGSizeMake(480, 320)];
}
if (buffer) {
//append buffer
BOOL appendSuccess = [self appendToAdapter:adaptor
pixelBuffer:buffer
atTime:presentTime
withInput:writerInput];
NSAssert(appendSuccess, @"Failed to append");
CVPixelBufferRelease(buffer); // release the buffer created in pixelBufferFromCGImage
i++;
} else {
//Finish the session:
[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(#"Successfully closed video writer");
if (videoWriter.status == AVAssetWriterStatusCompleted) {
if (callbackBlock) {
callbackBlock(YES);
}
} else {
if (callbackBlock) {
callbackBlock(NO);
}
}
}];
NSLog(@"Done");
break;
}
}
}
}
Function to get CVPixelBuffer from CGImage
-(CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image size:(CGSize)imageSize
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace,
(int)kCGImageAlphaNoneSkipFirst);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
Function to append to adapter
-(BOOL)appendToAdapter:(AVAssetWriterInputPixelBufferAdaptor*)adaptor
pixelBuffer:(CVPixelBufferRef)buffer
atTime:(CMTime)presentTime
withInput:(AVAssetWriterInput*)writerInput
{
while (!writerInput.readyForMoreMediaData) {
usleep(1);
}
return [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
}
I am trying to draw only part of my sprite sheet, but it just scales the whole image instead of drawing 1/3 of the image's width. How can I crop the image so only 1/3 shows? Code is below (iOS 7, OpenGL ES 2).
- (GLKMatrix4) modelMatrix {
GLKMatrix4 modelMatrix = GLKMatrix4Identity;
modelMatrix = GLKMatrix4Translate(modelMatrix, x, y, 0);
return modelMatrix;
}
- (void)setSprite:(NSString *)fileName effect:(GLKBaseEffect *)newEffect {
// 1
self.effect = newEffect;
// 2
NSDictionary * options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
GLKTextureLoaderOriginBottomLeft,
nil];
// 3
NSError * error;
NSString *path = [[NSBundle mainBundle] pathForResource:fileName ofType:nil];
// 4
self.textureInfo = [GLKTextureLoader textureWithContentsOfFile:path options:options error:&error];
if (self.textureInfo == nil) {
NSLog(#"Error loading file: %#", [error localizedDescription]);
return ;
}
TexturedQuad newQuad;
newQuad.bl.geometryVertex = CGPointMake(0, 0);
newQuad.br.geometryVertex = CGPointMake(self.textureInfo.width, 0);
newQuad.tl.geometryVertex = CGPointMake(0, self.textureInfo.height);
newQuad.tr.geometryVertex = CGPointMake(self.textureInfo.width, self.textureInfo.height);
newQuad.bl.textureVertex = CGPointMake(0, 1);
newQuad.br.textureVertex = CGPointMake(1, 1);
newQuad.tl.textureVertex = CGPointMake(0, 0);
newQuad.tr.textureVertex = CGPointMake(1, 0);
self.quad = newQuad;
}
- (void)render {
// 1
self.effect.texture2d0.name = self.textureInfo.name;
self.effect.texture2d0.enabled = YES;
// 2
y++;
self.effect.transform.modelviewMatrix = self.modelMatrix;
[self.effect prepareToDraw];
// 3
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
// 4
TexturedQuad q;
q.bl.textureVertex = CGPointMake(0, 1);
q.br.textureVertex = CGPointMake(1, 1);
q.tl.textureVertex = CGPointMake(0, 0);
q.tr.textureVertex = CGPointMake(1, 0);
q.bl.geometryVertex = CGPointMake(0, 0);
q.br.geometryVertex = CGPointMake(self.textureInfo.width/3, 0);
q.tl.geometryVertex = CGPointMake(0, self.textureInfo.height);
q.tr.geometryVertex = CGPointMake(self.textureInfo.width/3, self.textureInfo.height);
long offset2 =(long)&q;
//long offset = (long)&_quad;
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (void *) (offset2 + offsetof(TexturedVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (void *) (offset2 + offsetof(TexturedVertex, textureVertex)));
// 5
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
Update:
TexturedQuad newQuad;
newQuad.bl.geometryVertex = CGPointMake(0, 0);
newQuad.br.geometryVertex = CGPointMake(self.textureInfo.width/3, 0);
newQuad.tl.geometryVertex = CGPointMake(0, self.textureInfo.height);
newQuad.tr.geometryVertex = CGPointMake(self.textureInfo.width/3, self.textureInfo.height);
newQuad.bl.textureVertex = CGPointMake(0, 1);
newQuad.br.textureVertex = CGPointMake(.3, 1);
newQuad.tl.textureVertex = CGPointMake(0, 0);
newQuad.tr.textureVertex = CGPointMake(.3, 0);
That is what texture coordinates are for. Your textureVertex values currently map the full extent of the texture (0.0 to 1.0) onto the quad. You need to change some of those values to 1/3 or 2/3. For example, to show only the left third:
newQuad.bl.textureVertex = CGPointMake(0, 1);
newQuad.br.textureVertex = CGPointMake(1.0/3.0, 1);
newQuad.tl.textureVertex = CGPointMake(0, 0);
newQuad.tr.textureVertex = CGPointMake(1.0/3.0, 0);
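To show the middle third instead, under the same convention:
newQuad.bl.textureVertex = CGPointMake(1.0/3.0, 1);
newQuad.br.textureVertex = CGPointMake(2.0/3.0, 1);
newQuad.tl.textureVertex = CGPointMake(1.0/3.0, 0);
newQuad.tr.textureVertex = CGPointMake(2.0/3.0, 0);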
I'm using GLKView to render some sprites in an iOS app.
My question is, how can I remove/draw only parts of one image? For example, I have a background, and on top of it an image (both sprites). I want to take some random rectangles out of the image on top, so the background will be visible in those rectangles. Is that possible?
I'm creating my textures like this:
- (id)initWithFile:(NSString *)fileName effect:(GLKBaseEffect *)effect position:(GLKVector2)position{
if ((self = [super init])) {
self.effect = effect;
NSDictionary * options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
GLKTextureLoaderOriginBottomLeft,
nil];
NSError * error;
NSString *path = [[NSBundle mainBundle] pathForResource:fileName ofType:nil];
self.textureInfo = [GLKTextureLoader textureWithContentsOfFile:path options:options error:&error];
self.contentSize = CGSizeMake(self.textureInfo.width, self.textureInfo.height);
TexturedQuad newQuad;
newQuad.bl.geometryVertex = CGPointMake(0, 0);
newQuad.br.geometryVertex = CGPointMake(self.textureInfo.width, 0);
newQuad.tl.geometryVertex = CGPointMake(0, self.textureInfo.height);
newQuad.tr.geometryVertex = CGPointMake(self.textureInfo.width, self.textureInfo.height);
newQuad.bl.textureVertex = CGPointMake(0, 0);
newQuad.br.textureVertex = CGPointMake(1, 0);
newQuad.tl.textureVertex = CGPointMake(0, 1);
newQuad.tr.textureVertex = CGPointMake(1, 1);
self.quad = newQuad;
self.position = position;
self.frameHeight = self.textureInfo.height;
}
return self;
}
And then render them like this
- (void)render {
self.effect.texture2d0.name = self.textureInfo.name;
self.effect.texture2d0.enabled = YES;
self.effect.transform.modelviewMatrix = self.modelMatrix;
[self.effect prepareToDraw];
long offset = (long)&_quad;
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (void *) (offset + offsetof(TexturedVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (void *) (offset + offsetof(TexturedVertex, textureVertex)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
Typically this is done with a second texture that is an alpha map. The alpha texture has regions that are fully opaque and regions that are fully transparent, and in the shader its alpha channel is multiplied by the image texture's alpha to produce the final output.
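Since this question uses GLKBaseEffect rather than a custom shader, here is a hedged sketch of the same idea using the second texture unit; alphaMaskInfo is an assumed GLKTextureInfo for the mask, and the mask also needs its own coordinates supplied via GLKVertexAttribTexCoord1:
self.effect.texture2d0.name = self.textureInfo.name; // the sprite image
self.effect.texture2d0.enabled = YES;
self.effect.texture2d1.name = alphaMaskInfo.name; // hypothetical mask texture
self.effect.texture2d1.enabled = YES;
self.effect.texture2d1.envMode = GLKTextureEnvModeModulate; // multiplies with texture2d0's output
glEnable(GL_BLEND); // needed so transparent regions show the background
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);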
I am displaying 3 objects with the help of GLKit. However, when I am applying textures to these objects, only one texture is being used for all three.
The code I am using is as follows:
- (void)setUpGL{
NSLog(#"i : %d, %d, %d",i,j,k);
firstPlayerScore = 0;
secondPlayerScore = 0;
staticBall = YES;
isSecondPlayer = NO;
self.boxPhysicsObjects = [NSMutableArray array];
self.spherePhysicsObjects = [NSMutableArray array];
self.immovableBoxPhysicsObjects = [NSMutableArray array];
self.cylinderPhysicsObjects = [NSMutableArray array];
self.secondPlayerCylinderPhysicsObjects = [NSMutableArray array];
self.sphereArray = [NSMutableArray array];
GLKView *view = (GLKView *)self.view;
NSAssert([view isKindOfClass:[GLKView class]], @"View controller's view is not a GLKView");
view.drawableDepthFormat = GLKViewDrawableDepthFormat16;
view.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:view.context];
self.baseEffect = [[GLKBaseEffect alloc] init];
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
//glGenBuffers(1, &_vertexBuffer);
//glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
//glBufferData(GL_ARRAY_BUFFER, (i+j)*sizeof(float), sphereVerts, GL_STATIC_DRAW);
glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
self.baseEffect.light0.enabled = GL_TRUE;
self.baseEffect.light0.ambientColor = GLKVector4Make(0.7f, 0.7f, 0.7f, 1.0f);
[self addImmovableBoxPhysicsObjects];
[self addRandomPhysicsSphereObject];
//[self addFirstPlayerCylinderObject];
//[self addSecondPlayerCylinderObject];
//[self scheduleAddRandomPhysicsSphereObject:nil];
}
- (void)addRandomPhysicsObject{
if(random() % 2 == 0)
{
[self addRandomPhysicsBoxObject];
}
else
{
[self addRandomPhysicsSphereObject];
}
}
- (void)setUpBox{
CGImageRef image = [[UIImage imageNamed:#"outUV2.PNG"] CGImage];
textureInfo1 = [GLKTextureLoader textureWithCGImage:image options:nil error:NULL];
self.baseEffect.texture2d0.name = textureInfo1.name;
self.baseEffect.texture2d0.enabled = YES;
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer( GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), final_meshVerts);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), final_meshNormals);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 2*sizeof(float), final_meshTexCoords);
//glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
}
- (void)drawPhysicsBoxObjects{
//self.baseEffect.texture2d0.target = textureInfo1.target;
PAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
GLKMatrix4 savedModelviewMatrix = self.baseEffect.transform.modelviewMatrix;
for(PPhysicsObject *currentObject in self.boxPhysicsObjects){
self.baseEffect.transform.modelviewMatrix =
GLKMatrix4Multiply(savedModelviewMatrix,[appDelegate physicsTransformForObject:currentObject]);
[self.baseEffect prepareToDraw];
glDrawArrays(GL_TRIANGLES, 0, final_meshNumVerts);
}
self.baseEffect.light0.diffuseColor = GLKVector4Make(1.0f, 1.0f, 1.0f, 1.0f);// Alpha
for(PPhysicsObject *currentObject in self.immovableBoxPhysicsObjects){
self.baseEffect.transform.modelviewMatrix = GLKMatrix4Multiply(savedModelviewMatrix, [appDelegate physicsTransformForObject:currentObject]);
[self.baseEffect prepareToDraw];
glDrawArrays(GL_TRIANGLES,0, final_meshNumVerts);
}
self.baseEffect.transform.modelviewMatrix = savedModelviewMatrix;
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect{
static float a = 0;
a = a+0.1;
//NSLog(#"a : %f",a);
self.baseEffect.transform.modelviewMatrix = GLKMatrix4MakeLookAt(
0, 9.8, 10.0, // Eye position
0.0, 1.0, 0.0, // Look-at position
0.0, 1.0, 0.0); // Up direction
const GLfloat aspectRatio = (GLfloat)view.drawableWidth / (GLfloat)view.drawableHeight;
self.baseEffect.transform.projectionMatrix =
GLKMatrix4MakePerspective(GLKMathDegreesToRadians(35.0f),aspectRatio,0.2f,200.0f); // Far arbitrarily far enough to contain scene
self.baseEffect.light0.position = GLKVector4Make(0.6f, 1.0f, 0.4f, 0.0f);
[self.baseEffect prepareToDraw];
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
[self drawPhysicsSphereObjects];
[self drawPhysicsBoxObjects];
//[self drawPhysicsCylinderObjects];
}
- (void)addRandomPhysicsSphereObject{
PAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
PPhysicsObject *anObject = nil;
if([self.spherePhysicsObjects count] < PMAX_NUMBER_BLOCKS)
{
NSLog(#"if");
anObject = [[PPhysicsObject alloc] init];
}
else
{
NSLog(#"else");
anObject = [self.spherePhysicsObjects objectAtIndex:0];
[self.spherePhysicsObjects removeObjectAtIndex:0];
}
[self.spherePhysicsObjects addObject:anObject];
[appDelegate physicsRegisterSphereObject:anObject
position:GLKVector3Make(0,3.5,-2)
mass:0.0f];
[self setUpSphere];
/*[appDelegate physicsSetVelocity:GLKVector3Make(
random() / (float)RAND_MAX * 2.0f - 1.0f,
0.0f,
random() /(float)RAND_MAX * 2.0f - 1.0f)
forObject:anObject];*/
}
- (void)setUpSphere{
CGImageRef image = [[UIImage imageNamed:#"basketball.png"] CGImage];
textureInfo = [GLKTextureLoader textureWithCGImage:image options:nil error:NULL];
self.baseEffect.texture2d0.name = textureInfo.name;
self.baseEffect.texture2d0.enabled = YES;
glEnableVertexAttribArray( GLKVertexAttribPosition);
glVertexAttribPointer( GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), newbasketballVerts);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), newbasketballNormals);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 2*sizeof(float), newbasketballTexCoords);
//glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
}
- (void)drawPhysicsSphereObjects{
NSLog(#"draw");
/*static int x = 1;
if (x>20) {
x=20;
}
matrix = GLKMatrix4Identity;
matrix = GLKMatrix4MakeTranslation(0.1 * (x++), 0.0, 0.0);*/
//self.baseEffect.texture2d0.target = textureInfo2.target;
PAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
GLKMatrix4 savedModelviewMatrix = self.baseEffect.transform.modelviewMatrix;
/*CGImageRef image = [[UIImage imageNamed:@"basketball.png"] CGImage];
GLKTextureInfo *textureInfo = [GLKTextureLoader textureWithCGImage:image options:nil error:NULL];
self.baseEffect.texture2d0.name = textureInfo.name;
self.baseEffect.texture2d0.target = textureInfo.target;*/
self.baseEffect.light0.diffuseColor = GLKVector4Make(1.0f, 1.0f, 1.0f, 1.0f);
//glVertexPointer(3, GL_FLOAT, 0, sphereVerts);
//glNormalPointer(GL_FLOAT, 0, sphereNormals);
//glTexCoordPointer(2, GL_FLOAT, 0, final_meshTexCoords);
/*glGenBuffers(1, &ballVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, ballVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(MeshVertexData), MeshVertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(arrowVertexData), 0);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_TRUE, sizeof(arrowVertexData), (void *)offsetof(arrowVertexData, normal));
glBindVertexArrayOES(arrowVertexArray);*/
//glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
//glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 2*sizeof(float), newbasketballTexCoords);
if (!isSecondPlayer) {
for(PPhysicsObject *currentObject in self.spherePhysicsObjects)
{NSLog(#"first");
self.baseEffect.transform.modelviewMatrix =
GLKMatrix4Multiply(savedModelviewMatrix, [appDelegate physicsTransformForObject:currentObject]);
[self.baseEffect prepareToDraw];
glDrawArrays(GL_TRIANGLES, 0, newbasketballNumVerts);
//glDrawArrays(GL_TRIANGLES, 0, sizeof(MeshVertexData) / sizeof(arrowVertexData));
}
}
else{
for(PPhysicsObject *currentObject in self.secondSpherePhysicsObjects)
{
self.baseEffect.transform.modelviewMatrix =
GLKMatrix4Multiply(savedModelviewMatrix, [appDelegate physicsTransformForObject:currentObject]);
[self.baseEffect prepareToDraw];
glDrawArrays(GL_TRIANGLES, 0, newbasketballNumVerts);
//glDrawArrays(GL_TRIANGLES, 0, sizeof(MeshVertexData) / sizeof(arrowVertexData));
}
}
//glBindBuffer(GL_ARRAY_BUFFER, 0);
//glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
//glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
self.baseEffect.transform.modelviewMatrix = savedModelviewMatrix;
}
Why is this only using one texture for all three, and not three different textures, one for each object? How can I fix this?
I have achieved a scene where the moon moves around the earth, with different textures for the earth and the moon. Under the GLKit framework, the code looks like this:
-(void)viewDidLoad
{
//......
// Setup Earth texture
CGImageRef earthImageRef =
[[UIImage imageNamed:#"Earth512x256.jpg"] CGImage];
earthTextureInfo = [GLKTextureLoader
textureWithCGImage:earthImageRef
options:[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
GLKTextureLoaderOriginBottomLeft, nil]
error:NULL];
// Setup Moon texture
CGImageRef moonImageRef =
[[UIImage imageNamed:#"Moon256x128.png"] CGImage];
moonTextureInfo = [GLKTextureLoader
textureWithCGImage:moonImageRef
options:[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
GLKTextureLoaderOriginBottomLeft, nil]
error:NULL];
//......
}
Then, draw the earth and the moon:
- (void)drawEarth
{
//setup texture
self.baseEffect.texture2d0.name = earthTextureInfo.name;
self.baseEffect.texture2d0.target = earthTextureInfo.target;
//
GLKMatrixStackPush(self.modelviewMatrixStack);
GLKMatrixStackRotate( // Rotate (tilt Earth's axis)
self.modelviewMatrixStack,
GLKMathDegreesToRadians(SceneEarthAxialTiltDeg),
1.0, 0.0, 0.0);
GLKMatrixStackRotate( // Rotate about Earth's axis
self.modelviewMatrixStack,
GLKMathDegreesToRadians(earthRotationAngleDegrees),
0.0, 1.0, 0.0);
self.baseEffect.transform.modelviewMatrix =
GLKMatrixStackGetMatrix4(self.modelviewMatrixStack);
//draw earth
[self.baseEffect prepareToDraw];
glBindVertexArrayOES(_vertexArray);
glDrawArrays(GL_TRIANGLES, 0, sphereNumVerts);
//pop
GLKMatrixStackPop(self.modelviewMatrixStack);
self.baseEffect.transform.modelviewMatrix =
GLKMatrixStackGetMatrix4(self.modelviewMatrixStack);
}
- (void)drawMoon
{
self.baseEffect.texture2d0.name = moonTextureInfo.name;
self.baseEffect.texture2d0.target = moonTextureInfo.target;
GLKMatrixStackPush(self.modelviewMatrixStack);
GLKMatrixStackRotate( // Rotate to position in orbit
self.modelviewMatrixStack,
GLKMathDegreesToRadians(moonRotationAngleDegrees),
0.0, 1.0, 0.0);
GLKMatrixStackTranslate(// Translate to distance from Earth
self.modelviewMatrixStack,
0.0, 0.0, SceneMoonDistanceFromEarth);
GLKMatrixStackScale( // Scale to size of Moon
self.modelviewMatrixStack,
SceneMoonRadiusFractionOfEarth,
SceneMoonRadiusFractionOfEarth,
SceneMoonRadiusFractionOfEarth);
GLKMatrixStackRotate( // Rotate Moon on its own axis
self.modelviewMatrixStack,
GLKMathDegreesToRadians(moonRotationAngleDegrees),
0.0, 1.0, 0.0);
//
self.baseEffect.transform.modelviewMatrix =
GLKMatrixStackGetMatrix4(self.modelviewMatrixStack);
//draw moon
[self.baseEffect prepareToDraw];
glBindVertexArrayOES(_vertexArray);
glDrawArrays(GL_TRIANGLES, 0, sphereNumVerts);
GLKMatrixStackPop(self.modelviewMatrixStack);
self.baseEffect.transform.modelviewMatrix =
GLKMatrixStackGetMatrix4(self.modelviewMatrixStack);
}
To use multiple textures, you will need to do:
effect.texture2d0.name = firstTexture.name;
[effect prepareToDraw];
[self renderFirstObject];
effect.texture2d0.name = secondTexture.name;
[effect prepareToDraw];
[self renderSecondObject];
or something similar. If you have lots of objects, I recommend using texture atlases and then doing batch rendering using:
glDrawElements(GL_TRIANGLES, totalIndicies, GL_UNSIGNED_SHORT, indices);
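As a fuller hedged sketch of that batched draw, using this document's TexturedVertex layout; atlasTextureInfo, vertices, totalIndicies, and indices here are assumptions:
self.effect.texture2d0.name = atlasTextureInfo.name; // one atlas shared by every sprite
self.effect.texture2d0.enabled = YES;
[self.effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
// vertices holds every quad's TexturedVertex data; indices lists two triangles per quad
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (char *)vertices + offsetof(TexturedVertex, geometryVertex));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (char *)vertices + offsetof(TexturedVertex, textureVertex));
glDrawElements(GL_TRIANGLES, totalIndicies, GL_UNSIGNED_SHORT, indices);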
When I tried using a separate glDrawArrays call for every single object, the framerate of my app dipped to around 10 fps.
In your code, the reason it was using one texture for all objects is that you never changed effect.texture2d0.name to the texture you need before drawing each object. If I were to change your code, it would be:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect{
static float a = 0;
a = a+0.1;
//NSLog(#"a : %f",a);
self.baseEffect.transform.modelviewMatrix = GLKMatrix4MakeLookAt(
0, 9.8, 10.0, // Eye position
0.0, 1.0, 0.0, // Look-at position
0.0, 1.0, 0.0); // Up direction
const GLfloat aspectRatio = (GLfloat)view.drawableWidth / (GLfloat)view.drawableHeight;
self.baseEffect.transform.projectionMatrix =
GLKMatrix4MakePerspective(GLKMathDegreesToRadians(35.0f),aspectRatio,0.2f,200.0f); // Far arbitrarily far enough to contain scene
self.baseEffect.light0.position = GLKVector4Make(0.6f, 1.0f, 0.4f, 0.0f);
[self.baseEffect prepareToDraw];
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
self.baseEffect.texture2d0.name = textureInfo.name;
[self.baseEffect prepareToDraw];
[self drawPhysicsSphereObjects];
self.baseEffect.texture2d0.name = textureInfo1.name;
[self.baseEffect prepareToDraw];
[self drawPhysicsBoxObjects];
//[self drawPhysicsCylinderObjects];
}
Of course this is a simplification, without the vertex attribute array setup.
One thing I did for this problem was to make one single image with all the textures in it, so now I give only one texture to my GLKBaseEffect object.
But if anyone has an answer for multiple objects with multiple textures using GLKit, please let me know.
Thank you.
One solution would be to separate your drawing calls so that first you draw all objects that use texture A, then all objects that use texture B and so on.
There is also the texture atlas alternative described here: https://stackoverflow.com/a/8230592/64167.
I am playing around with learning more OpenGL ES, and I may have a way to do this.
In my case I have N quads, each with an individual texture. In glkView:drawInRect:, for each quad I want to draw, I set new texture properties on the baseEffect before drawing it, call prepareToDraw on the base effect and on the quad, and then render the quad.
Here is some pseudocode for what I mean:
for (int i = 0; i < quads.count; i++) {
baseEffect.texture2d0.name = textureInfo[i].name;
baseEffect.texture2d0.target = textureInfo[i].target;
[baseEffect prepareToDraw];
[quads[i] prepareToDraw];
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // 4 vertices drawn as a strip make the quad
}
This is working ok for me so far.