I am completing the final part of localization for a project. The translated text has come back to me split between .txt and .docx formats.
The .txt content works fine once entered into Localizable.strings, but the text copied from the Word document doesn't.
This is what I've tried so far:
save the .docx as .txt and let Word pick the encoding
save the .txt as Korean (Mac OS X),
then copy this text into Xcode and reinterpret as Korean (Mac OS X), then
convert to UTF-16
I have tried many options to convert to UTF-16, but just can't seem to crack it.
Any ideas would be much appreciated.
Here is the localized help view implementation:
helpText = [NSArray arrayWithObjects:
[NSDictionary dictionaryWithObjectsAndKeys:
NSLocalizedString(@" The Actions Tab", nil), kHelpTextKeyString,
@"Arial", kHelpTextKeyFontName,
[NSNumber numberWithInt:20], kHelpTextKeyFontSize,
[[UIColor blackColor] CGColor], kHelpTextKeyColor,
CGRectCreateDictionaryRepresentation(CGRectMake(30.0, 55.0, 200.0, 28.0)), kHelpTextKeyRect,
nil],
[NSDictionary dictionaryWithObjectsAndKeys:
[NSArray arrayWithObjects:
NSLocalizedString(@"
- (void)displaySelectedHelpImage:(UIImage *)orgImage withTextArray:(NSArray *)textArr {
CGImageRef cgImage = [orgImage CGImage];
int pixelsWide = CGImageGetWidth(cgImage);
int pixelsHigh = CGImageGetHeight(cgImage);
int bitsPerComponent = CGImageGetBitsPerComponent(cgImage);//8; // fixed
int bitsPerPixel = CGImageGetBitsPerPixel(cgImage);//bitsPerComponent * numberOfCompnent;
int bytesPerRow = CGImageGetBytesPerRow(cgImage);//(pixelsWide * bitsPerPixel) // 8; // bytes
int byteCount = (bytesPerRow * pixelsHigh);
CGColorSpaceRef colorSpace = CGImageGetColorSpace(cgImage);//CGColorSpaceCreateDeviceRGB();
// Allocate data
NSMutableData *data = [NSMutableData dataWithLength:byteCount];
// Create a bitmap context
CGContextRef context = CGBitmapContextCreate([data mutableBytes], pixelsWide, pixelsHigh, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast); //kCGImageAlphaPremultipliedLast);//kCGImageAlphaNoneSkipLast); //kCGImageAlphaOnly);
// Set the blend mode to copy to avoid any alteration of the source data or to invert to invert image
CGContextSetBlendMode(context, kCGBlendModeCopy);
// Set alpha
CGContextSetAlpha(context, 1.0);
// Color image
//CGContextSetRGBFillColor(context, 1 ,1, 1, 1.0);
//CGContextFillRect(context, CGRectMake(0.0, 0.0, pixelsWide, pixelsHigh));
// Draw the image to extract the alpha channel
CGContextDrawImage(context, CGRectMake(0.0, 0.0, pixelsWide, pixelsHigh), cgImage);
// add text to image
// Changes the origin of the user coordinate system in a context
//CGContextTranslateCTM (context, pixelsWide, pixelsHigh);
// Rotate context upright
//CGContextRotateCTM (context, -180. * M_PI/180);
for (NSDictionary *dic in textArr) {
CGContextSelectFont (context,
//todo
[[dic objectForKey:kHelpTextKeyFontName] UTF8String],
[[dic objectForKey:kHelpTextKeyFontSize] intValue],
kCGEncodingMacRoman);
CGContextSetCharacterSpacing (context, 2);
CGContextSetTextDrawingMode (context, kCGTextFillStroke);
CGColorRef color = (CGColorRef)[dic objectForKey:kHelpTextKeyColor];
CGRect rect;
CGRectMakeWithDictionaryRepresentation((CFDictionaryRef)[dic objectForKey:kHelpTextKeyRect], &rect);
CGContextSetFillColorWithColor(context, color);
CGContextSetStrokeColorWithColor(context, color);
if ([[dic objectForKey:kHelpTextKeyString] isKindOfClass:[NSArray class]]) {
for (NSString *str in [dic objectForKey:kHelpTextKeyString]) {
// Note: kCGEncodingMacRoman and the default C-string encoding cannot
// represent Korean text, which is likely why these strings fail to draw.
CGContextShowTextAtPoint(context,
rect.origin.x,
pixelsHigh - rect.origin.y,
[str cStringUsingEncoding:[NSString defaultCStringEncoding]],
[str length]);
rect.origin.y += [[dic objectForKey:kHelpTextKeyFontSize] intValue];
}
}
}
}
For anyone facing this issue: it was solved by drawing the strings with the Core Text framework instead, which handles text outside the MacRoman encoding.
What do the Word documents contain? What do you mean by "doesn't work"?
If they contain strings, couldn't you simply append them to the existing Localizable.strings file? Since that file works, there is no encoding issue there; you could just copy/paste the strings from Word into the Localizable.strings file in Xcode.
I am trying to get a list of all the colors in an image in Objective-C. Note: I am completely new to Objective-C; I've done some Swift work in the past, but not really Objective-C.
I pulled in a library that, more or less, is supposed to pull all the colors as part of its code. I've modified it to look like this (the callback at the end is from React Native; the path argument is just a string of the path):
RCT_EXPORT_METHOD(getColors:(NSString *)path options:(NSDictionary *)options callback:(RCTResponseSenderBlock)callback) {
UIImage *originalImage = [UIImage imageWithContentsOfFile:path ];
UIImage *image =
[UIImage imageWithCGImage:[originalImage CGImage]
scale:0.5
orientation:(UIImageOrientationUp)];
CGImageRef cgImage = [image CGImage];
NSUInteger width = CGImageGetWidth(cgImage);
NSUInteger height = CGImageGetHeight(cgImage);
// Allocate storage for the pixel data
unsigned char *rawData = (unsigned char *)malloc(height * width * 4);
// Create the color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Set some metrics
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
// Create context using the storage
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// Release the color space
CGColorSpaceRelease(colorSpace);
// Draw the image into the storage
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
// We are done with the context
CGContextRelease(context);
// determine the colours in the image
NSMutableArray * colours = [NSMutableArray new];
float x = 0;
float y = 0;
for (int n = 0; n<(width*height); n++){
int index = (bytesPerRow * y) + x * bytesPerPixel;
int red = rawData[index];
int green = rawData[index + 1];
int blue = rawData[index + 2];
int alpha = rawData[index + 3];
NSArray * a = [NSArray arrayWithObjects:[NSString stringWithFormat:@"%i", red], [NSString stringWithFormat:@"%i", green], [NSString stringWithFormat:@"%i", blue], [NSString stringWithFormat:@"%i", alpha], nil];
[colours addObject:a];
y++;
if (y==height){
y=0;
x++;
}
}
free(rawData);
callback(@[[NSNull null], colours]);
Now, this code seems fairly simple: it should iterate over each pixel and add each color to an array, which is then returned to React Native via the callback.
However, the response to the call is always an empty array.
I'm not sure why that is. Could it be due to where the images are located (they're on AWS S3), or something in the algorithm? The code looks right to me, but it's entirely possible that I'm missing something due to unfamiliarity with Objective-C.
I ran your code in an empty project and it performs as expected using an image loaded from the assets library. Is it possible that the UIImage *originalImage = [UIImage imageWithContentsOfFile:path]; call uses an invalid path? You can easily validate that by simply logging the value of the read image:
UIImage * originalImage = [UIImage imageWithContentsOfFile: path];
NSLog(@"image read from file %@", originalImage);
If the image was not read properly from the file, you will get an empty colours array, as the width and height will be zero and there will be nothing to loop over.
Also, to avoid the array being modified after your function has returned, it is generally good practice to return a copy of a mutable object, or an immutable object (i.e. NSArray instead of NSMutableArray):
callback(@[[NSNull null], [colours copy]]);
Hope this helps
The issue was ultimately that the image download method was returning null - not sure why.
So I took this:
UIImage *originalImage = [UIImage imageWithContentsOfFile:path ];
I changed it to this:
NSData * imageData = [[NSData alloc] initWithContentsOfURL: [NSURL URLWithString: path]];
UIImage *originalImage = [UIImage imageWithData: imageData];
And now my image downloads just fine and the rest of the script works great.
I need to take a screenshot of the whole screen, including the status bar. I use CARenderServerRenderDisplay to achieve this; it works correctly on iPad but goes wrong on iPhone 6 Plus. As the *-marked part of the code shows, if I set width = screenSize.width * scale and height = screenSize.height * scale, it crashes; if I instead swap them, width = screenSize.height * scale and height = screenSize.width * scale, it runs but produces a garbled image.
I've tried a lot but found no reason. Does anyone know why? I hope I've described it clearly enough.
- (void)snapshot
{
CGFloat scale = [UIScreen mainScreen].scale;
CGSize screenSize = [UIScreen mainScreen].bounds.size;
//*********** the place where problem appears
size_t width = screenSize.height * scale;
size_t height = screenSize.width * scale;
//***********
size_t bytesPerElement = 4;
OSType pixelFormat = 'ARGB';
size_t bytesPerRow = bytesPerElement * width;
size_t surfaceAllocSize = bytesPerRow * height;
NSDictionary *properties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kIOSurfaceIsGlobal,
[NSNumber numberWithUnsignedLong:bytesPerElement], kIOSurfaceBytesPerElement,
[NSNumber numberWithUnsignedLong:bytesPerRow], kIOSurfaceBytesPerRow,
[NSNumber numberWithUnsignedLong:width], kIOSurfaceWidth,
[NSNumber numberWithUnsignedLong:height], kIOSurfaceHeight,
[NSNumber numberWithUnsignedInt:pixelFormat], kIOSurfacePixelFormat,
[NSNumber numberWithUnsignedLong:surfaceAllocSize], kIOSurfaceAllocSize,
nil];
IOSurfaceRef destSurf = IOSurfaceCreate((__bridge CFDictionaryRef)(properties));
IOSurfaceLock(destSurf, 0, NULL);
CARenderServerRenderDisplay(0, CFSTR("LCD"), destSurf, 0, 0);
IOSurfaceUnlock(destSurf, 0, NULL);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, IOSurfaceGetBaseAddress(destSurf), (width * height * 4), NULL);
CGImageRef cgImage = CGImageCreate(width, height, 8,
8*4, IOSurfaceGetBytesPerRow(destSurf),
CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst |kCGBitmapByteOrder32Little,
provider, NULL, YES, kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:cgImage];
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
}
If you are on a Jailbroken environment, you can use the private UIImage method _UICreateScreenUIImage:
OBJC_EXTERN UIImage *_UICreateScreenUIImage(void);
// ...
- (void)takeScreenshot {
UIImage *screenImage = _UICreateScreenUIImage();
// do something with your screenshot
}
This method uses CARenderServerRenderDisplay for faster rendering of the entire device screen. It replaces the UICreateScreenImage and UIGetScreenImage methods that were removed in the arm64 version of the iOS 7 SDK.
I have been trying to render text in an arc. The text renders as expected, but it looks blurry. How can I fix this issue?
- (UIImage*) createMenuRingWithFrame:(CGRect)frame
{
NSArray* sections = [[NSArray alloc] initWithObjects:@"daily", @"yearly", @"monthly", @"weekly", nil];
CGRect imageSize = frame;
float perSectionDegrees = 360 / [sections count];
float totalRotation = 135;
float fontSize = ((frame.size.width/2) /2)/2;
self.menuItemsFont = [UIFont fontWithName:@"Avenir" size:fontSize];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, imageSize.size.width, imageSize.size.height, 8, 4 * imageSize.size.width, colorSpace,(CGBitmapInfo) kCGImageAlphaPremultipliedFirst);
CGPoint centerPoint = CGPointMake(imageSize.size.width / 2, imageSize.size.height / 2);
double radius = (frame.size.width / 2)-2;
for (int index = 0; index < [sections count]; index++)
{
BOOL textRotationDown = NO;
NSString* menuItemText = [sections objectAtIndex:index];
CGSize textSize = [menuItemText sizeWithAttributes:@{NSFontAttributeName: self.menuItemsFont}];
char* menuItemTextChar = (char*)[menuItemText cStringUsingEncoding:NSASCIIStringEncoding];
if (totalRotation>200.0 && totalRotation <= 320.0) {
textRotationDown = YES;
}
else
textRotationDown= NO;
float x = centerPoint.x + radius * cos(DEGREES_TO_RADIANS(totalRotation));
float y = centerPoint.y + radius * sin(DEGREES_TO_RADIANS(totalRotation));
CGContextSaveGState(context);
CFStringRef font_name = CFStringCreateWithCString(NULL, "Avenir", kCFStringEncodingMacRoman);
CTFontRef font = CTFontCreateWithName(font_name, fontSize, NULL);
CFStringRef keys[] = { kCTFontAttributeName };
CFTypeRef values[] = { font };
CFDictionaryRef font_attributes = CFDictionaryCreate(kCFAllocatorDefault, (const void **)&keys, (const void **)&values, sizeof(keys) / sizeof(keys[0]), &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFRelease(font_name);
CFRelease(font);
CFStringRef string = CFStringCreateWithCString(NULL, menuItemTextChar, kCFStringEncodingMacRoman);
CFAttributedStringRef attr_string = CFAttributedStringCreate(NULL, string, font_attributes);
CTLineRef line = CTLineCreateWithAttributedString(attr_string);
CGContextTranslateCTM(context, x, y);
CGContextRotateCTM(context, DEGREES_TO_RADIANS(totalRotation - (textRotationDown?275:90)));
CGContextSetTextPosition(context,0 - (textSize.width / 2), 0 - (textSize.height / (textRotationDown?20:4)));
CTLineDraw(line, context);
CFRelease(line);
CFRelease(string);
CFRelease(attr_string);
CGContextRestoreGState(context);
totalRotation += perSectionDegrees;
}
CGImageRef contextImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
return [UIImage imageWithCGImage:contextImage];
}
One problem is that you are not allowing for screen resolution. Make your bitmap context twice as big, or three times as big; multiply all the values appropriately (this is easiest if you just apply a scale CTM at the outset); and then at the end, instead of calling imageWithCGImage:, call imageWithCGImage:scale:orientation:, setting the corresponding scale.
If you had created your context with UIGraphicsBeginImageContextWithOptions, that would have happened automatically (if you had provided a third argument of zero), or you could explicitly have set the third argument to provide a scale for the context and hence the image derived from it. But by building your context manually, you threw away the capacity to provide it with a scale.
I have these methods for drawing a table and populating it. What I want is to change the color of one word in each column, but I don't know how to do it. Can somebody help me, please? Any help will be appreciated.
In my app the user can select some attributes from a segmented control... I want to export what he has chosen into a PDF, like a table. So on each line one word will be selected.
-(void)drawTableDataAt:(CGPoint)origin
withRowHeight:(int)rowHeight
andColumnWidth:(int)columnWidth
andRowCount:(int)numberOfRows
andColumnCount:(int)numberOfColumns
{
int padding = 1;
NSArray* headers = [NSArray arrayWithObjects:@"Grand", @"Taile ok", @"Petit", nil];
NSArray* invoiceInfo1 = [NSArray arrayWithObjects:@"Extra", @"Bon", @"Ordi", nil];
NSArray* invoiceInfo2 = [NSArray arrayWithObjects:@"Gras", @"Etat", @"Maigre", nil];
NSArray* invoiceInfo3 = [NSArray arrayWithObjects:@"Cru", @"Propre", @"Sale", nil];
NSArray* invoiceInfo4 = [NSArray arrayWithObjects:@"PLourd", @"PMoyen", @"PLeger", nil];
NSArray* invoiceInfo5 = [NSArray arrayWithObjects:@"CSup", @"CEgal", @"CInf", nil];
NSArray* allInfo = [NSArray arrayWithObjects:headers, invoiceInfo1, invoiceInfo2, invoiceInfo3, invoiceInfo4, invoiceInfo5, nil];
for(int i = 0; i < [allInfo count]; i++)
{
NSArray* infoToDraw = [allInfo objectAtIndex:i];
for (int j = 0; j < numberOfColumns; j++)
{
int newOriginX = origin.x + (j*columnWidth);
int newOriginY = origin.y + ((i+1)*rowHeight);
CGRect frame = CGRectMake(newOriginX + padding, newOriginY + padding, columnWidth, rowHeight);
[self drawText:[infoToDraw objectAtIndex:j] inFrame:frame];
}
}
}
-(void)drawText:(NSString*)textToDraw inFrame:(CGRect)frameRect
{
CFStringRef stringRef = (__bridge CFStringRef)textToDraw;
// Prepare the text using a Core Text Framesetter
CFAttributedStringRef currentText = CFAttributedStringCreate(NULL, stringRef, NULL);
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(currentText);
CGMutablePathRef framePath = CGPathCreateMutable();
CGPathAddRect(framePath, NULL, frameRect);
// Get the frame that will do the rendering.
CFRange currentRange = CFRangeMake(0, 0);
CTFrameRef frameRef = CTFramesetterCreateFrame(framesetter, currentRange, framePath, NULL);
CGPathRelease(framePath);
// Get the graphics context.
CGContextRef currentContext = UIGraphicsGetCurrentContext();
// Put the text matrix into a known state. This ensures
// that no old scaling factors are left in place.
CGContextSetTextMatrix(currentContext, CGAffineTransformIdentity);
// Core Text draws from the bottom-left corner up, so flip
// the current transform prior to drawing.
CGContextTranslateCTM(currentContext, 0, frameRect.origin.y*2);
CGContextScaleCTM(currentContext, 1.0, -1.0);
// Draw the frame.
CTFrameDraw(frameRef, currentContext);
CGContextScaleCTM(currentContext, 1.0, -1.0);
CGContextTranslateCTM(currentContext, 0, (-1)*frameRect.origin.y*2);
CFRelease(frameRef);
CFRelease(stringRef);
CFRelease(framesetter);
}
Based on the comments on the question, you mentioned that the words will never change. You could potentially create a whole bunch of if/else statements checking every selected word against every word in an array; the snippet below should be a more efficient alternative. It may need some tweaking, or even a loop over your chosen words, but it should point you in the right direction:
//declare your textToDraw as a new NSString
NSString *str = textToDraw;
//split str into words separated by whitespace
NSArray *words = [str componentsSeparatedByCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
//check whether your selected word exists in the array
BOOL wordExists = [words containsObject:yourSelectedWord];
CTFramesetterRef framesetter = NULL;
//if the word exists, make it red
if (wordExists) {
    //find the word's character range in the full string
    //(not its index in the words array)
    NSRange wordRange = [str rangeOfString:yourSelectedWord];
    NSMutableAttributedString *currentText = [[NSMutableAttributedString alloc] initWithString:str];
    //Core Text reads kCTForegroundColorAttributeName with a CGColor value
    [currentText addAttribute:(__bridge id)kCTForegroundColorAttributeName
                        value:(__bridge id)[UIColor redColor].CGColor
                        range:wordRange];
    framesetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)currentText);
}
This will match your selected word against the right word in the string and highlight it red.