Cocos2d-x CCLabelTTF invisible - iOS

In my game I use CCLabelTTF to display the score the player made. It was working fine back at the end of July, and I've changed nothing in my code, but since then there have been updates:
- iOS upgraded (6.1 to 7.0)
- OS X updated
- cocos2d-x updated
- Xcode updated
I'm not using Helvetica fonts.
I have floating text to show the score: if I kill a terrorist, a "+10" string floats up and then disappears. If I write " +10 " (with surrounding spaces) it's visible; otherwise it's not.
I've tried changing the text alignment in CCImage.mm from UITextAlignmentLeft to the NS equivalent, and uncommenting these lines:
if( [font isKindOfClass:[UIFont class]] )
{
    [str drawInRect:CGRectMake(0, startH, dim.width, dim.height) withFont:font lineBreakMode:(UILineBreakMode)UILineBreakModeWordWrap alignment:align];
}
I read about these modifications on the cocos2d-x forum; there was a bug back then, and these were the solution. No luck for me.
The weird part is that on my game scene one of the labels is visible, but only in the iPhone simulator. Starting from this, I think it must be an alignment/wrapping problem.

I met the same issue and found a solution that works for me; try this.
Modify _initWithString in CCImage.mm, at line:
CGContextRef context = CGBitmapContextCreate(data, dim.width, dim.height, 8, dim.width * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
change it to:
CGContextRef context = CGBitmapContextCreate(data, (int)dim.width, (int)dim.height, 8, (int)dim.width * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
Just three (int) casts.
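If it helps readability, the same fix can be written with named temporaries (just a sketch; data, dim, and colorSpace are the variables already in scope in _initWithString):

// dim is a CGSize, so its width/height are floating-point;
// CGBitmapContextCreate expects integral size_t values.
size_t texWidth  = (size_t)dim.width;
size_t texHeight = (size_t)dim.height;
CGContextRef context = CGBitmapContextCreate(data, texWidth, texHeight, 8, texWidth * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);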

Maybe you can try it like this:
CCLabelTTF* YourClassName::stringNewLine(string orgStr, CCSize sizeTable, const char* fontName, float fontSize)
{
    CCLabelTTF *m_label_content = CCLabelTTF::create("hello", fontName, fontSize, sizeTable, kCCTextAlignmentCenter, kCCVerticalTextAlignmentCenter);
    m_label_content->setString(orgStr.c_str());
    return m_label_content;
}
And use it like this:
CCLabelTTF *ttf = stringNewLine("test", CCSizeMake(200, 200), "Arial", 28);

Related

Drawing text with an MKOverlayRenderer

I'm trying to render text on a map using an MKOverlayRenderer. I have an existing, functional MKOverlayRenderer rendering a set of points, so my only problem is rendering a piece of text for each point within the '-(void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context' function.
All solutions I have found through SO and Google use annotations or UILabels. But I want to have the text drawing code in the same location as the code rendering the points. Also there are about 10,000 points, though I'm ensuring it's not rendering them all at the same time through zoom and bounds checking. I am reasonably sure I don't want to create 10,000 objects with the other solutions.
This is the current test code I have to try to render one of the 'Test Text' items. It is a combination of some of the methods I have found on the net to try to render something.
CGPoint* point = self.pointList.pointArray + pointIndex;
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextSelectFont(context, "Helvetica", 20.f, kCGEncodingFontSpecific);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGAffineTransform xform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, 0.0);
CGContextSetTextMatrix(context, xform);
CGContextShowTextAtPoint(context, point->x, point->y, "Test Text 1", 11);
CGContextShowTextAtPoint(context, 10, 10, "Test Text 2", 11);
CGContextShowText(context, "Test Text 4", 11);
UIFont* font = [UIFont fontWithName:@"Helvetica" size:12.0];
[@"Test Text 3" drawAtPoint:*point withFont:font];
This is my first SO question, so sorry if it isn't quite right.
Edit: I just saw the text when zoomed in as far as I can go, so I realise I haven't been accounting for the zoom scale. I assume I need to apply a scale transform before rendering to account for it. I haven't solved it yet, but I think I am on my way.
I have solved it. Sorry for posting this, but I was at my wit's end and thought I needed help.
The line that rendered was:
CGContextShowTextAtPoint(context, point->x, point->y, "Test Text 1", 11);
Which is a deprecated function, but I don't know any other way to render to a specific context.
To fix it, the affine transform became:
CGAffineTransform xform = CGAffineTransformMake(1.0 / zoomScale, 0.0, 0.0, -1.0 / zoomScale, 0.0, 0.0);
The other error was that the 'select font' call needed to become:
CGContextSelectFont(context, "Helvetica", 12.f, kCGEncodingMacRoman);
I had copied the other encoding from some example code I had seen on the net, but it caused the text to render with the wrong characters.
If there is still a way I can do it without using the deprecated CGContextShowTextAtPoint function I would still love to know.
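For anyone else looking: one non-deprecated route is Core Text, which can draw a CTLine into an arbitrary CGContext. A minimal sketch, assuming context, point, and zoomScale are the variables from drawMapRect:zoomScale:inContext: above (untested in an overlay, so treat it as a starting point):

CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 12.0, NULL);
CFStringRef keys[] = { kCTFontAttributeName };
CFTypeRef values[] = { font };
CFDictionaryRef attrs = CFDictionaryCreate(NULL, (const void **)keys, (const void **)values, 1, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFAttributedStringRef attrStr = CFAttributedStringCreate(NULL, CFSTR("Test Text 1"), attrs);
CTLineRef line = CTLineCreateWithAttributedString(attrStr);

// Same flip-and-scale as the affine transform above, applied via the text matrix
CGContextSetTextMatrix(context, CGAffineTransformMakeScale(1.0 / zoomScale, -1.0 / zoomScale));
CGContextSetTextPosition(context, point->x, point->y);
CTLineDraw(line, context);

CFRelease(line);
CFRelease(attrStr);
CFRelease(attrs);
CFRelease(font);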

How to display dashed lines in Objective-C?

I have this code to display a grid with dashed lines. At runtime on an iPhone 5 and below it shows fine, but if I run the app on an iPhone 5s there's no grid. I tested in the iPhone Simulator and on real devices, and the same thing happens.
Here's the code:
if (self.dashLongitude) {
    CGFloat lengths[] = {3.0, 3.0};
    CGContextSetLineDash(context, 0.0, lengths, 2);
}
// other stuff here
CGContextSetLineDash(context, 0, nil, 0);
So, can anyone help?
EDIT: Hey guys, I solved the issue using the same code I posted here, but in a different method. So I now have two methods: one just for drawing the grid and another to draw the line with the data, and finally I got everything working.
The code looks fine.
Looking into the code, I see you are setting nil in the context; nil is used to remove the dash pattern.
Try using:
CGFloat dash[] = {2.0, 2.0};
CGContextSetLineDash(context, 0.0, dash, 2);
where you create or stroke your line (underneath).
CGContextSetLineDash(context, 0, NULL, 0);
is used to remove that dash pattern.
Instead of setting nil in CGContextSetLineDash, just wrap your code in CGContextSaveGState/CGContextRestoreGState to preserve the context state before applying the line dash:
CGContextSaveGState(context);
CGFloat dash[] = {2.0, 2.0};
CGContextSetLineDash(context, 0.0, dash, 2);
// Draw some lines here
CGContextRestoreGState(context);
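Putting that together for the grid case, a rough sketch (rect and step are my own placeholders for the view bounds and grid spacing):

CGContextSaveGState(context);
CGFloat dash[] = {3.0, 3.0};
CGContextSetLineDash(context, 0.0, dash, 2);
// Draw the dashed vertical grid lines
for (CGFloat x = CGRectGetMinX(rect); x < CGRectGetMaxX(rect); x += step) {
    CGContextMoveToPoint(context, x, CGRectGetMinY(rect));
    CGContextAddLineToPoint(context, x, CGRectGetMaxY(rect));
}
CGContextStrokePath(context);
CGContextRestoreGState(context);
// Anything drawn after this point (e.g. the data line) gets a solid stroke again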

CCLabelTTF invisible issue in iOS 7.0

I'm using cocos2d-x 2.0.4 for my game.
CCLabelTTF works well on both device and simulator on iOS 6, but when I test it on iOS 7.0 it doesn't work.
Here is my code.
int nScore = 10;
char str[50];
sprintf(str, "SCORE : %d", nScore);
CCLabelTTF *lbl = CCLabelTTF::create(str, "Marker Felt", 50);
lbl->setPosition(ccp(size.width*0.5, size.height*0.88));
lbl->setColor(ccRED);
this->addChild(lbl);
The score doesn't show now, but it does on iOS 6.
One more strange problem: if I change the above code like this, it works.
CCLabelTTF *lbl = CCLabelTTF::create("SCORE", "Marker Felt", 50);
lbl->setPosition(ccp(size.width*0.5, size.height*0.88));
lbl->setColor(ccRED);
this->addChild(lbl);
But if I change the code like below again, it doesn't work (invisible).
CCLabelTTF *lbl = CCLabelTTF::create("Score", "Marker Felt", 50);
lbl->setPosition(ccp(size.width*0.5, size.height*0.88));
lbl->setColor(ccRED);
this->addChild(lbl);
Maybe it's a case-sensitivity issue.
Finally, the code below doesn't work either, even though the text is upper case. I only added the number 10.
CCLabelTTF *lbl = CCLabelTTF::create("SCORE : 10", "Marker Felt", 50);
lbl->setPosition(ccp(size.width*0.5, size.height*0.88));
lbl->setColor(ccRED);
this->addChild(lbl);
Any help will be appreciated.
Thanks in advance.
I ran into this same problem while using cocos2d-x 2.1.3. I found this link stating that the issue is a bug that affects labels in iOS 7. In order to fix the issue, you'll need to either update the engine or merge this pull request manually.
Upgrade your cocos2d-x version; this is fixed in 3.0. If you are using an older cocos2d-x version, then change this statement in CCImage.mm:

CGContextRef context = CGBitmapContextCreate(data, dim.width, dim.height, 8, dim.width * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

to:

CGContextRef context = CGBitmapContextCreate(data, (size_t)dim.width, (size_t)dim.height, 8, (size_t)dim.width * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

Text/font rendering in OpenGLES 2 (iOS - CoreText?) - options and best practice?

There are many questions on OpenGL font rendering, many of them are satisfied by texture atlases (fast, but wrong), or string-textures (fixed-text only).
However, those approaches are poor and appear to be years out of date (what about using shaders to do this better/faster?). For OpenGL 4.1 there's this excellent question looking at "what should you use today?":
What is state-of-the-art for text rendering in OpenGL as of version 4.1?
So, what should we be using on iOS GL ES 2 today?
I'm disappointed that there appears to be no open-source (or even commercial) solution. I know a lot of teams suck it up and spend weeks of dev time re-inventing this wheel, gradually learning how to kern and space etc. (ugh) - but there must be a better way than re-writing the whole of "fonts" from scratch?
As far as I can see, there are two parts to this:
How do we render text using a font?
How do we display the output?
For 1 (how to render), Apple provides MANY ways to get the "correct" rendered output - but the "easy" ones don't support OpenGL (maybe some of the others do - e.g. is there a simple way to map CoreText output to OpenGL?).
For 2 (how to display), we have shaders, we have VBOs, we have glyph-textures, we have lookup-textures, and other techniques (e.g. the OpenGL 4.1 stuff linked above?)
Here are the two common OpenGL approaches I know of:
Texture atlas (render all glyphs once, then render 1 x textured quad per character, from the shared texture)
This is wrong, unless you're using a 1980s era "bitmap font" (and even then: texture atlas requires more work than it may seem, if you need it correct for non-trivial fonts)
(fonts aren't "a collection of glyphs": there's a vast amount of positioning, layout, wrapping, spacing, kerning, styling, colouring, weighting, etc. Texture atlases fail)
Fixed string (use any Apple class to render correctly, then screenshot the backing image-data, and upload as a texture)
In human terms, this is fast. In frame-rendering, this is very, very slow. If you do this with a lot of changing text, your frame rate goes through the floor
Technically, it's mostly correct (not entirely: you lose some information this way) but hugely inefficient
I've also seen the following, but heard both good and bad things about them:
Imagination/PowerVR "Print3D" (link broken) (from the guys that manufacture the GPU! But their site has moved/removed the text rendering page)
FreeType (requires pre-processing, interpretation, lots of code, extra libraries?)
...and/or FTGL http://sourceforge.net/projects/ftgl/ (rumors: slow? buggy? not updated in a long time?)
Font-Stash http://digestingduck.blogspot.co.uk/2009/08/font-stash.html (high quality, but very slow?)
1.
Within Apple's own OS / standard libraries, I know of several sources of text rendering. NB: I have used most of these in detail on 2D rendering projects; my statements about them producing different rendering output are based on direct experience.
CoreGraphics with NSString
Simplest of all: render "into a CGRect"
Seems to be a slightly faster version of the "fixed string" approach people recommend (even though you'd expect it to be much the same)
UILabel and UITextArea with plain text
NB: they are NOT the same! Slight differences in how they render the same text
NSAttributedString, rendered to one of the above
Again: renders differently (the differences I know of are fairly subtle and classified as "bugs"; see various SO questions about this)
CATextLayer
A hybrid between iOS fonts and old C rendering. Uses the "not fully" toll-free-bridged CFFont / UIFont, which reveals some more rendering differences / strangeness
CoreText
... the ultimate solution? But a beast of its own...
I did some more experimenting, and it seems that CoreText might make for a perfect solution when combined with a texture atlas and Valve's signed-distance textures (which can turn a bitmap glyph into a resolution-independent hi-res texture).
...but I don't have it working yet, still experimenting.
UPDATE: Apple's docs say they give you access to everything except the final detail: which glyph + glyph layout to render (you can get the line layout, and the number of glyphs, but not the glyph itself, according to the docs). For no apparent reason, this core piece of info is apparently missing from CoreText (if so, that makes CT almost worthless. I'm still hunting to see if I can find a way to get the actual glyphs + per-glyph data)
UPDATE 2: I now have this working properly with Apple's CT (but no distance-textures yet), but it ends up as 3 class files, 10 data structures, about 300 lines of code, plus the OpenGL code to render it. Too much for an SO answer :(.
The short answer is: yes, you can do it, and it works, if you:
Create a CTFramesetter
Create a CTFrame for a theoretical 2D frame
Create a CGContext that you'll convert to a GL texture
Go through glyph-by-glyph, allowing Apple to render to the CGContext
Each time Apple renders a glyph, calculate the bounding box (this is HARD), and save it somewhere
And save the unique glyph-ID (this will be different for e.g. "o", "f", and "of" (one glyph!))
Finally, send your CGContext up to GL as a texture
When you render, use the list of glyph-IDs that Apple created, and for each one use the saved info, and the texture, to render quads with texture-co-ords that pull individual glyphs out of the texture you uploaded.
This works, it's fast, it works with all fonts, it gets all font layout and kerning correct, etc.
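To make the glyph-by-glyph part (steps 4-6) concrete, here is a minimal sketch of the Core Text walk, assuming frame is the CTFrame from step 2 (error handling and the atlas bookkeeping omitted):

CFArrayRef lines = CTFrameGetLines(frame);
CFIndex lineCount = CFArrayGetCount(lines);
CGPoint lineOrigins[lineCount];
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), lineOrigins);

for (CFIndex i = 0; i < lineCount; i++) {
    CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
    CFArrayRef runs = CTLineGetGlyphRuns(line);
    for (CFIndex j = 0; j < CFArrayGetCount(runs); j++) {
        CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runs, j);
        CFIndex glyphCount = CTRunGetGlyphCount(run);
        CGGlyph glyphs[glyphCount];
        CGPoint positions[glyphCount]; // relative to the line origin
        CTRunGetGlyphs(run, CFRangeMake(0, 0), glyphs);
        CTRunGetPositions(run, CFRangeMake(0, 0), positions);
        // glyphs[k] is the unique glyph ID to key the saved bounding
        // boxes / texture coords on; lineOrigins[i] + positions[k]
        // gives the layout position to render its quad at.
    }
}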
1.
Create any string with NSMutableAttributedString.
let mabstring = NSMutableAttributedString(string: "This is a test of characterAttribute.")
mabstring.beginEditing()
var matrix = CGAffineTransform(rotationAngle: CGFloat(GLKMathDegreesToRadians(0)))
let font = CTFontCreateWithName("Georgia" as CFString, 40, &matrix)
mabstring.addAttribute(kCTFontAttributeName as String, value: font, range: NSRange(location: 0, length: 4))
var number: Int8 = 2
let kdl = CFNumberCreate(kCFAllocatorDefault, .sInt8Type, &number)!
mabstring.addAttribute(kCTStrokeWidthAttributeName as String, value: kdl, range: NSRange(location: 0, length: mabstring.length))
mabstring.endEditing()
2.
Create the CTFrame. The rect is calculated from mabstring by CTFramesetterSuggestFrameSizeWithConstraints.
let framesetter = CTFramesetterCreateWithAttributedString(mabstring)
let path = CGMutablePath()
path.addRect(rect)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
3.
Create bitmap context.
let imageWidth = Int(rect.width)
let imageHeight = Int(rect.height)
var rawData = [UInt8](repeating: 0, count: Int(imageWidth * imageHeight * 4))
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let bitsPerComponent = 8
let bytesPerRow = Int(rect.width) * 4
let context = CGContext(data: &rawData, width: imageWidth, height: imageHeight, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: rgbColorSpace, bitmapInfo: bitmapInfo.rawValue)!
4.
Draw CTFrame in bitmap context.
CTFrameDraw(frame, context)
Now we have the raw pixel data in rawData. Creating an OpenGL texture, an MTLTexture, or a UIImage from rawData all work.
For example, to an OpenGL texture (cf. "Convert an UIImage in a texture"):
Set up your texture:
GLuint textureID;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)imageWidth, (GLsizei)imageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, rawData);
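One caveat I'd add (standard GL ES 2 behaviour, not something the steps above cover): with no mipmaps, the default minification filter leaves the texture incomplete (it samples as black), and a non-power-of-two texture also needs clamped wrapping, so set the parameters explicitly after binding:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);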
To an MTLTexture:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: Int(imageWidth), height: Int(imageHeight), mipmapped: true)
let device = MTLCreateSystemDefaultDevice()!
let texture = device.makeTexture(descriptor: textureDescriptor)
let region = MTLRegionMake2D(0, 0, Int(imageWidth), Int(imageHeight))
texture.replace(region: region, mipmapLevel: 0, withBytes: &rawData, bytesPerRow: bytesPerRow)
To a UIImage:
let providerRef = CGDataProvider(data: NSData(bytes: &rawData, length: rawData.count * MemoryLayout.size(ofValue: UInt8(0))))
let renderingIntent = CGColorRenderingIntent.defaultIntent
let imageRef = CGImage(width: imageWidth, height: imageHeight, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: bytesPerRow, space: rgbColorSpace, bitmapInfo: bitmapInfo, provider: providerRef!, decode: nil, shouldInterpolate: false, intent: renderingIntent)!
let image = UIImage.init(cgImage: imageRef)
I know this post is old, but I came across it while trying to do exactly this in my application. In my search, I came across this sample project:
http://metalbyexample.com/rendering-text-in-metal-with-signed-distance-fields/
It is a perfect implementation of CoreText with OpenGL using the techniques of texture atlasing and signed distance fields. It has greatly helped me achieve the results I wanted. Hope this helps someone else.

CGPathCreateCopyByStrokingPath equivalent on iOS4?

I found CGPathCreateCopyByStrokingPath on iOS 5.0 quite convenient to use, but it is only available on iOS 5 and later.
Is there any simple way to achieve the same path copying on iOS 4?
I use this, which is compatible across iOS 5 and iOS 4+. It works 100% if you use the same fill + stroke color. Apple's docs are a little shady about this - they say "it works if you fill it", but they don't say "it goes a bit wrong if you stroke it" - yet it seems to go slightly wrong in that case. YMMV.
// pathFrameRange: you have to provide something "at least big enough to
// hold the original path"
static inline CGPathRef CGPathCreateCopyByStrokingPathAllVersionsOfIOS( CGPathRef incomingPathRef,
    CGSize pathFrameRange, const CGAffineTransform* transform,
    CGFloat lineWidth, CGLineCap lineCap, CGLineJoin lineJoin, CGFloat miterLimit )
{
    CGPathRef result;

    if( CGPathCreateCopyByStrokingPath != NULL )
    {
        /** REQUIRES IOS5!!! */
        result = CGPathCreateCopyByStrokingPath( incomingPathRef, transform,
                                                 lineWidth, lineCap, lineJoin, miterLimit );
    }
    else
    {
        CGSize sizeOfContext = pathFrameRange;
        UIGraphicsBeginImageContext( sizeOfContext );
        CGContextRef c = UIGraphicsGetCurrentContext();

        CGContextSetLineWidth(c, lineWidth);
        CGContextSetLineCap(c, lineCap);
        CGContextSetLineJoin(c, lineJoin);
        CGContextSetMiterLimit(c, miterLimit);

        CGContextAddPath(c, incomingPathRef);
        CGContextReplacePathWithStrokedPath(c);
        result = CGContextCopyPath(c);

        UIGraphicsEndImageContext();
    }

    return result;
}
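Usage is then a one-liner; a sketch (path, the 100x100 bound, and the hit-test are my own example values, not part of the wrapper):

CGPathRef stroked = CGPathCreateCopyByStrokingPathAllVersionsOfIOS(path, CGSizeMake(100.0f, 100.0f), NULL, 2.0f, kCGLineCapRound, kCGLineJoinRound, 10.0f);
// e.g. hit-test against the stroke outline rather than the fill:
BOOL onStroke = CGPathContainsPoint(stroked, NULL, touchPoint, false);
CGPathRelease(stroked);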
Hmmm -- don't know if this qualifies as "simple", but check out Ed's method in this SO post.
