C String in iOS looks blurry

I have been trying to render text in an arc. The text renders as expected, but it looks blurry. How can I fix this issue?
- (UIImage *)createMenuRingWithFrame:(CGRect)frame
{
    NSArray *sections = [[NSArray alloc] initWithObjects:@"daily", @"yearly", @"monthly", @"weekly", nil];
    CGRect imageSize = frame;
    float perSectionDegrees = 360 / [sections count];
    float totalRotation = 135;
    float fontSize = ((frame.size.width / 2) / 2) / 2;
    self.menuItemsFont = [UIFont fontWithName:@"Avenir" size:fontSize];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, imageSize.size.width, imageSize.size.height, 8,
                                                 4 * imageSize.size.width, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGPoint centerPoint = CGPointMake(imageSize.size.width / 2, imageSize.size.height / 2);
    double radius = (frame.size.width / 2) - 2;
    for (int index = 0; index < [sections count]; index++)
    {
        BOOL textRotationDown = NO;
        NSString *menuItemText = [sections objectAtIndex:index];
        CGSize textSize = [menuItemText sizeWithAttributes:@{NSFontAttributeName: self.menuItemsFont}];
        char *menuItemTextChar = (char *)[menuItemText cStringUsingEncoding:NSASCIIStringEncoding];
        if (totalRotation > 200.0 && totalRotation <= 320.0) {
            textRotationDown = YES;
        } else {
            textRotationDown = NO;
        }
        float x = centerPoint.x + radius * cos(DEGREES_TO_RADIANS(totalRotation));
        float y = centerPoint.y + radius * sin(DEGREES_TO_RADIANS(totalRotation));
        CGContextSaveGState(context);
        CFStringRef font_name = CFStringCreateWithCString(NULL, "Avenir", kCFStringEncodingMacRoman);
        CTFontRef font = CTFontCreateWithName(font_name, fontSize, NULL);
        CFStringRef keys[] = { kCTFontAttributeName };
        CFTypeRef values[] = { font };
        CFDictionaryRef font_attributes = CFDictionaryCreate(kCFAllocatorDefault, (const void **)&keys, (const void **)&values,
                                                             sizeof(keys) / sizeof(keys[0]),
                                                             &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFRelease(font_name);
        CFRelease(font);
        CFStringRef string = CFStringCreateWithCString(NULL, menuItemTextChar, kCFStringEncodingMacRoman);
        CFAttributedStringRef attr_string = CFAttributedStringCreate(NULL, string, font_attributes);
        CTLineRef line = CTLineCreateWithAttributedString(attr_string);
        CGContextTranslateCTM(context, x, y);
        CGContextRotateCTM(context, DEGREES_TO_RADIANS(totalRotation - (textRotationDown ? 275 : 90)));
        CGContextSetTextPosition(context, 0 - (textSize.width / 2), 0 - (textSize.height / (textRotationDown ? 20 : 4)));
        CTLineDraw(line, context);
        CFRelease(line);
        CFRelease(string);
        CFRelease(attr_string);
        CGContextRestoreGState(context);
        totalRotation += perSectionDegrees;
    }
    CGImageRef contextImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return [UIImage imageWithCGImage:contextImage];
}

One problem is that you are not allowing for screen resolution. Make your bitmap context twice as big, or three times as big; multiply all the values appropriately (this is easiest if you just apply a scale CTM at the outset); and then at the end, instead of calling imageWithCGImage:, call imageWithCGImage:scale:orientation:, setting the corresponding scale.
If you had created your context with UIGraphicsBeginImageContextWithOptions, that would have happened automatically (if you had provided a third argument of zero), or you could explicitly have set the third argument to provide a scale for the context and hence the image derived from it. But by building your context manually, you threw away the capacity to provide it with a scale.
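A minimal sketch of that change, reusing the question's variable names and assuming the drawing loop in between stays exactly the same: the context is created at pixel size, a scale CTM is applied once at the outset so all other coordinates can stay in points, and the resulting CGImage is wrapped with the matching scale.
CGFloat screenScale = [UIScreen mainScreen].scale;   // 2.0 or 3.0 on Retina devices
size_t pixelWidth = (size_t)(imageSize.size.width * screenScale);
size_t pixelHeight = (size_t)(imageSize.size.height * screenScale);
CGContextRef context = CGBitmapContextCreate(NULL, pixelWidth, pixelHeight, 8,
                                             4 * pixelWidth, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
// Scale once up front; everything drawn afterwards can keep using point coordinates.
CGContextScaleCTM(context, screenScale, screenScale);

// ... draw the ring text exactly as in the question ...

CGImageRef contextImage = CGBitmapContextCreateImage(context);
UIImage *ringImage = [UIImage imageWithCGImage:contextImage
                                         scale:screenScale
                                   orientation:UIImageOrientationUp];
CGImageRelease(contextImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
Alternatively, UIGraphicsBeginImageContextWithOptions(frame.size, NO, 0) sets all of this up for you: a scale argument of zero means "use the screen scale", and UIGraphicsGetImageFromCurrentImageContext() then returns an image already tagged with that scale.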

Related

glReadPixels returns incorrect image for iPhone 6, but works ok for iPad and iPhone 5

I'm using the following code to read an image from an OpenGL ES scene:
- (UIImage *)drawableToCGImage
{
    CGRect myRect = self.bounds;
    NSInteger myDataLength = myRect.size.width * myRect.size.height * 4;
    glFinish();
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    int width = myRect.size.width;
    int height = myRect.size.height;
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer2);
    for (int y1 = 0; y1 < height; y1++) {
        for (int x1 = 0; x1 < width * 4; x1++) {
            buffer[(height - 1 - y1) * width * 4 + x1] = buffer2[y1 * 4 * width + x1];
        }
    }
    free(buffer2);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * myRect.size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(myRect.size.width, myRect.size.height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                        colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
It works perfectly on iPad and older iPhone models, but I've noticed that on iPhone 6 (both device and simulator) the output looks like monochrome glitches.
What could it be?
Also, here is my code for CAEAGLLayer properties:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                @YES, kEAGLDrawablePropertyRetainedBacking,
                                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
Could somebody shed some light on this crazy magic, please?
Thanks to @MaticOblak, I've figured out the problem.
The buffer was filled incorrectly because the float values of the rect size were not rounded correctly (and, as it happens, only the iPhone 6 dimensions are affected). Integer values should be used instead.
UPD: my issue was fixed with the following code:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = viewport[3];
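As a sketch of how that fix slots into the method above (assuming the rest of drawableToCGImage stays as posted), every size is derived from the integer viewport dimensions instead of from self.bounds, so the buffer length, bytesPerRow, and the CGImageCreate dimensions all agree exactly:
// Integer pixel dimensions straight from GL; avoids float rounding of self.bounds.
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = viewport[3];
NSInteger myDataLength = width * height * 4;          // exact byte count
GLubyte *buffer = (GLubyte *)malloc(myDataLength);
GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer2);

// ... flip the rows from buffer2 into buffer as in the original loop ...

int bytesPerRow = 4 * width;                          // integer, matches the glReadPixels layout
CGImageRef imageRef = CGImageCreate(width, height, 8, 32, bytesPerRow,
                                    colorSpaceRef, bitmapInfo, provider,
                                    NULL, NO, renderingIntent);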

How to take screenshot of entire screen on a Jailbroken iOS Device?

I need to take a screenshot of the whole screen, including the status bar. I use CARenderServerRenderDisplay to achieve this; it works correctly on iPad but not on iPhone 6 Plus. In the part of the code marked with *, if I set width = screenSize.width * scale and height = screenSize.height * scale, it crashes; if I instead swap them to width = screenSize.height * scale and height = screenSize.width * scale, it works, but the image it produces is wrong.
I've tried a lot but can't find the reason. Does anyone know what's going on? I hope I've described it clearly enough.
- (void)snapshot
{
    CGFloat scale = [UIScreen mainScreen].scale;
    CGSize screenSize = [UIScreen mainScreen].bounds.size;
    //*********** the place where the problem appears
    size_t width = screenSize.height * scale;
    size_t height = screenSize.width * scale;
    //***********
    size_t bytesPerElement = 4;
    OSType pixelFormat = 'ARGB';
    size_t bytesPerRow = bytesPerElement * width;
    size_t surfaceAllocSize = bytesPerRow * height;
    NSDictionary *properties = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithBool:YES], kIOSurfaceIsGlobal,
                                [NSNumber numberWithUnsignedLong:bytesPerElement], kIOSurfaceBytesPerElement,
                                [NSNumber numberWithUnsignedLong:bytesPerRow], kIOSurfaceBytesPerRow,
                                [NSNumber numberWithUnsignedLong:width], kIOSurfaceWidth,
                                [NSNumber numberWithUnsignedLong:height], kIOSurfaceHeight,
                                [NSNumber numberWithUnsignedInt:pixelFormat], kIOSurfacePixelFormat,
                                [NSNumber numberWithUnsignedLong:surfaceAllocSize], kIOSurfaceAllocSize,
                                nil];
    IOSurfaceRef destSurf = IOSurfaceCreate((__bridge CFDictionaryRef)(properties));
    IOSurfaceLock(destSurf, 0, NULL);
    CARenderServerRenderDisplay(0, CFSTR("LCD"), destSurf, 0, 0);
    IOSurfaceUnlock(destSurf, 0, NULL);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, IOSurfaceGetBaseAddress(destSurf), (width * height * 4), NULL);
    CGImageRef cgImage = CGImageCreate(width, height, 8,
                                       8 * 4, IOSurfaceGetBytesPerRow(destSurf),
                                       CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                       provider, NULL, YES, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
}
If you are in a jailbroken environment, you can use the private UIKit function _UICreateScreenUIImage:
OBJC_EXTERN UIImage *_UICreateScreenUIImage(void);
// ...
- (void)takeScreenshot {
    UIImage *screenImage = _UICreateScreenUIImage();
    // do something with your screenshot
}
This method uses CARenderServerRenderDisplay for faster rendering of the entire device screen. It replaces the UICreateScreenImage and UIGetScreenImage methods that were removed in the arm64 version of the iOS 7 SDK.
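If the screenshot needs to go to disk rather than the photo album, a minimal sketch (the temporary-directory path below is just an example location, not something the API requires):
- (void)saveScreenshotToDisk {
    UIImage *screenImage = _UICreateScreenUIImage();
    NSData *pngData = UIImagePNGRepresentation(screenImage);
    // Example destination; any writable path works.
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"screen.png"];
    [pngData writeToFile:path atomically:YES];
}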

RTCI420Frame object to a image or texture

I'm working on a WebRTC app for iOS. My goal is to record video from the WebRTC objects.
I have the RTCVideoRenderer delegate, which provides me with this method:
- (void)renderFrame:(RTCI420Frame *)frame {
}
My question is: how can I convert an RTCI420Frame object into something useful for displaying an image or saving it to disk?
RTCI420Frames use the YUV420 format. You can easily convert them to RGB using OpenCV, then convert them to a UIImage. Make sure you #import <RTCI420Frame.h>
- (void)processFrame:(RTCI420Frame *)frame {
    cv::Mat mYUV((int)frame.height + (int)frame.chromaHeight, (int)frame.width, CV_8UC1, (void *)frame.yPlane);
    cv::Mat mRGB((int)frame.height, (int)frame.width, CV_8UC1);
    cvtColor(mYUV, mRGB, CV_YUV2RGB_I420);
    UIImage *image = [self UIImageFromCVMat:mRGB];
}
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,
                                        cvMat.rows,
                                        8,
                                        8 * cvMat.elemSize(),
                                        cvMat.step[0],
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
You may want to do this on a separate thread, especially if you are doing any video processing. Also, remember to use the .mm file extension so you can use C++.
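A rough illustration of that threading advice, offered as a sketch rather than WebRTC API guidance: self.processingQueue is a hypothetical serial dispatch queue owned by the class, and processFrame: is the OpenCV method shown above.
- (void)renderFrame:(RTCI420Frame *)frame {
    // Keep the CPU-heavy YUV-to-RGB conversion off the thread that delivers frames.
    dispatch_async(self.processingQueue, ^{
        [self processFrame:frame];   // the OpenCV conversion shown above
    });
}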
If you don't want to use OpenCV, it is possible to do it manually. The following code kind of works, but the colors are messed up and it crashes after a few seconds.
int width = (int)frame.width;
int height = (int)frame.height;
uint8_t *data = (uint8_t *)malloc(width * height * 4);
const uint8_t *yPlane = frame.yPlane;
const uint8_t *uPlane = frame.uPlane;
const uint8_t *vPlane = frame.vPlane;
for (int i = 0; i < width * height; i++) {
    int rgbOffset = i * 4;
    uint8_t y = yPlane[i];
    uint8_t u = uPlane[i / 4];
    uint8_t v = vPlane[i / 4];
    uint8_t r = y + 1.402 * (v - 128);
    uint8_t g = y - 0.344 * (u - 128) - 0.714 * (v - 128);
    uint8_t b = y + 1.772 * (u - 128);
    data[rgbOffset] = r;
    data[rgbOffset + 1] = g;
    data[rgbOffset + 2] = b;
    data[rgbOffset + 3] = UINT8_MAX;
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(gtx);
UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
free(data);
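For reference, here is a hedged sketch of a manual conversion that addresses the two most likely problems in the snippet above: in I420 the chroma planes are subsampled 2x2, so they should be indexed by (row/2, col/2) rather than i/4, and the intermediate values need clamping before being stored in a byte. It assumes tightly packed planes with no per-row padding; if the frame exposes strides/pitches, use those for the indexing instead. It also lets Core Graphics own the pixel buffer and releases every CG object, since the missing releases are a plausible cause of the crash after a few seconds.
int width = (int)frame.width;
int height = (int)frame.height;
int chromaWidth = (width + 1) / 2;                       // chroma is subsampled 2x2 in I420
const uint8_t *yPlane = frame.yPlane;
const uint8_t *uPlane = frame.uPlane;
const uint8_t *vPlane = frame.vPlane;
// Let Core Graphics allocate and own the RGBA buffer; nothing to free by hand.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
uint8_t *data = (uint8_t *)CGBitmapContextGetData(gtx);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(gtx);
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        int y = yPlane[row * width + col];
        int cIndex = (row / 2) * chromaWidth + (col / 2); // index chroma by (row/2, col/2)
        int u = uPlane[cIndex] - 128;
        int v = vPlane[cIndex] - 128;
        int r = (int)(y + 1.402 * v);
        int g = (int)(y - 0.344 * u - 0.714 * v);
        int b = (int)(y + 1.772 * u);
        uint8_t *px = data + row * bytesPerRow + col * 4;
        px[0] = (uint8_t)MAX(0, MIN(255, r));             // clamp to 0-255
        px[1] = (uint8_t)MAX(0, MIN(255, g));
        px[2] = (uint8_t)MAX(0, MIN(255, b));
        px[3] = UINT8_MAX;                                // opaque alpha
    }
}
CGImageRef cgImage = CGBitmapContextCreateImage(gtx);
UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
// Release the CG objects each frame so memory does not pile up.
CGImageRelease(cgImage);
CGContextRelease(gtx);
CGColorSpaceRelease(colorSpace);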

CoreText and right alignment

I'm writing a class to generate PDFs; I'll publish it when it's finished!
I'm unable to align text to the right with CTParagraphStyle; the text always stays on the left. How is that possible? What am I getting wrong?
- (void)addText:(NSString *)text color:(UIColor *)color fontSize:(CGFloat)size floating:(BOOL)floating {
    CGContextSaveGState(pdfContext);
    // Prepare font
    CTFontRef font = CTFontCreateWithName(CFSTR("Verdana"), size, NULL);
    // Font color
    CGColorRef fontColor = [color CGColor];
    // Paragraph
    CTTextAlignment alignment = kCTRightTextAlignment;
    CTParagraphStyleSetting settings[] = {
        {kCTParagraphStyleSpecifierAlignment, sizeof(alignment), &alignment}
    };
    CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(settings, sizeof(settings) / sizeof(settings[0]));
    // Create an attributed string
    CFStringRef keys[] = { kCTFontAttributeName, kCTParagraphStyleAttributeName, kCTForegroundColorAttributeName };
    CFTypeRef values[] = { font, paragraphStyle, fontColor };
    CFDictionaryRef attr = CFDictionaryCreate(NULL, (const void **)&keys, (const void **)&values,
                                              sizeof(keys) / sizeof(keys[0]), &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFAttributedStringRef attrString = CFAttributedStringCreate(NULL, (CFStringRef)text, attr);
    CFRelease(paragraphStyle);
    CFRelease(attr);
    // Draw the string
    CTLineRef line = CTLineCreateWithAttributedString(attrString);
    CGContextSetTextPosition(pdfContext, xPadding, [self relativeHeight:currentHeight + size]);
    CTLineDraw(line, pdfContext);
    // Clean up
    CFRelease(line);
    CFRelease(attrString);
    CFRelease(font);
    CGContextRestoreGState(pdfContext);
    if (floating == NO) {
        currentHeight += size;
    }
}
Remove CTLineDraw() and its related code and use CTFrameDraw() instead. A single CTLine has no frame width to lay out against, so the paragraph alignment never takes effect; a framesetter lays the text out inside a path of known width, which is what right alignment needs.
Try this:
// Create the Core Text framesetter using the attributed string.
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)attrString);
// Create the Core Text frame using our current view rect bounds.
UIBezierPath *path = [UIBezierPath bezierPathWithRect:self.bounds];
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), [path CGPath], NULL);
CTFrameDraw(frame, pdfContext);
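As a sketch of how this could slot into the addText:... method above (pageWidth and the one-line-high rect are assumptions about the question's PDF layout, not known values, and any coordinate-flip handling the class already applies to its PDF context is ignored here), the frame rect is what gives right alignment something to align against, and the Core Text objects are released afterwards:
// Lay the attributed string out inside a rect spanning the printable width,
// so kCTRightTextAlignment has a right edge to align to.
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(attrString);
CGRect textRect = CGRectMake(xPadding,
                             [self relativeHeight:currentHeight + size],
                             pageWidth - 2 * xPadding,   // assumed printable width
                             size * 1.5);                // room for one line of text
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, NULL, textRect);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, NULL);
CTFrameDraw(frame, pdfContext);
// Clean up the Core Text and Core Graphics objects.
CFRelease(frame);
CGPathRelease(path);
CFRelease(framesetter);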
