I have some image data contained in a Graphics object. How can I get it out and put it into a Bitmap object or an EncodedImage object? Thanks.
You can obtain a Graphics object from a Bitmap image and draw on it. Take a look at Graphics.create for OS 4.7 and above and the Graphics constructor for previous versions.
In the Delphi IDE, how do you assign a graphic to a TImage?
In VCL, in the Object Inspector, you use the "Picture" property of the TImage. But in FMX, I don't see "Picture" in the Object Inspector, or anything else like it ("Bitmap", "Graphic", etc.).
Please have mercy. I have 20+ years experience in Delphi VCL, but I'm a raw newbie in FireMonkey!
The property you're looking for is MultiResBitmap. Its use is covered in the documentation under Using Multi-Resolution Bitmaps. The portion pertaining to TImage:
In TImage controls. TImage controls keep a TFixedMultiResBitmap multi-resolution bitmap in the MultiResBitmap property. TFixedMultiResBitmap is a descendant of TCustomMultiResBitmap. A TFixedMultiResBitmap multi-resolution bitmap can contain any number of bitmap items having different scales. On each device, TImage retrieves the most appropriate bitmap to display from the bitmap collection in the TFixedMultiResBitmap multi-resolution bitmap and refers to the obtained bitmap with the Bitmap property. The obtained bitmap depends on the device resolution and the scales of the bitmap items kept in the TFixedMultiResBitmap multi-resolution bitmap. If a multi-resolution bitmap does not contain a bitmap item having exactly the scale required by a particular screen, then FireMonkey automatically stretches or zooms out the bitmap item having the most appropriate scale. For information about how this bitmap is obtained, see Bitmap. Keep in mind that each bitmap item takes up resources in the application's executable on all platforms (even if some bitmap item is never used on a particular platform).
I would like to render text in iOS to a texture so that I can draw it using OpenGL. I am using this code:
// Measure the string, round the context up to power-of-two dimensions, and draw.
CGSize textSize = [m_string sizeWithAttributes:m_attribs];
CGSize frameSize = CGSizeMake(NextPowerOf2((NSInteger)(MAX(textSize.width, textSize.height))), NextPowerOf2((NSInteger)textSize.height));
UIGraphicsBeginImageContextWithOptions(frameSize, NO /*opaque*/, 1.0 /*scale*/);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextSetTextDrawingMode(currentContext, kCGTextFillStroke);
CGContextSetLineWidth(currentContext, 1);
[m_string drawAtPoint:CGPointMake(0, 0) withAttributes:m_attribs];
When I try to use kCGTextFillStroke or kCGTextStroke I get this:
When I try to use kCGTextFill I get this:
Is there any way to get simple, clean, single-line text like this? (Taken from rendering on OS X.)
This looks like an issue with resolution, but never mind that...
Since you are using iOS, I suggest you use a UI component, UILabel for instance. Then set any parameters on the label you wish, including line break mode, number of lines, attributed text, fonts... You may call sizeToFit to get the minimum possible size of the label. You do not add the label to any other view; instead you create a UIImage from the view (you have quite a few answers for that on SO). Once you have the image, you can simply copy the raw RGBA data to the texture (again, loads of answers on how to get the RGBA data from a UIImage). And that is it. You might also want to check the content scale for Retina @2x and @3x devices, or handle those manually by increasing the font sizes by the corresponding factors.
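A minimal sketch of that approach (assuming the m_string/m_attribs names from the question, and that QuartzCore is available for renderInContext:):

#import <QuartzCore/QuartzCore.h>

UILabel *label = [[UILabel alloc] initWithFrame:CGRectZero];
label.attributedText = [[NSAttributedString alloc] initWithString:m_string attributes:m_attribs];
label.numberOfLines = 1;
[label sizeToFit]; // shrink the label to the minimum size that fits the text

// Render the label's layer into an image context; a scale of 0 picks up the device's screen scale.
UIGraphicsBeginImageContextWithOptions(label.bounds.size, NO, 0);
[label.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *labelImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();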
This procedure might seem like a workaround, and much slower than using Core Graphics directly, but the truth is quite far from that:
Creating a context with a size and options creates an RGBA buffer, the same as for a CGImage (the UIImage only wraps it).
Core Graphics is used to draw the view into the UIImage, so the procedure is essentially the same under the hood.
You still need to copy the data to the texture, but that is true in both cases. A small downside is that in order to access the raw RGBA data from the image you will need to copy (duplicate) the raw data somewhere along the way, but that is a relatively quick operation and most likely the same happens in your procedure.
So this procedure may consume a bit more resources (not much, and possibly even less), but you get unlimited power when it comes to drawing text.
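For the texture half, a minimal sketch of pulling the raw RGBA bytes out of a UIImage and uploading them to an OpenGL ES 2.0 texture (labelImage is the image from the sketch above; everything else is standard Core Graphics and GL):

#import <OpenGLES/ES2/gl.h>

CGImageRef cgImage = labelImage.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);

// Redraw the image into a bitmap context we own, so we get a plain RGBA buffer.
void *pixels = calloc(width * height * 4, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// Non-power-of-two textures in ES 2.0 require clamp-to-edge wrapping and no mipmaps.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixels);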
Well, eventually I rendered to a texture with doubled size and converted it to a UIImage with scale = 2, thereby taking advantage of the Retina display.
UIImage* pTheImage = UIGraphicsGetImageFromCurrentImageContext();
UIImage* pScaledImage = [UIImage imageWithCGImage:pTheImage.CGImage scale:2 orientation:pTheImage.imageOrientation];
Then I just use it as a texture for OpenGL drawing.
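For reference, the same Retina handling can also be done up front by giving the image context a scale when it is created (an alternative sketch, not the exact code used above):

UIGraphicsBeginImageContextWithOptions(frameSize, NO, [UIScreen mainScreen].scale);
// ... draw the text exactly as before ...
UIImage *pTheImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();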
I'm creating a drawing app, and I need the end result to be saved as a PNG image. But I then need to be able to edit the image with further drawing.
Is a framebuffer object the way to go here? Rendering into an offscreen texture?
It depends on how you want to edit the image afterwards. There are two parts to your question:
1) Saving the image as a PNG
2) Editing the image after drawing to it
1) It is straightforward to save a framebuffer drawing as a PNG. There is a similar question for OpenGL ES 1.x (http://stackoverflow.com/questions/5062978/how-can-i-dump-opengl-renderbuffer-to-png-or-jpg-image) that should be a good base to work from; a sketch of the idea follows after part 2.
2) It depends on how soon you want to edit the image. If you are editing the image continuously throughout the program, keep everything in memory in the framebuffer and only write to a PNG when you are done editing. If you need to draw on top of the image at a later time (for instance, when you reopen the program), you can save it as a PNG and then load that PNG as a texture for a new framebuffer when you want to edit the image again. When you draw into this new framebuffer, you will be drawing on top of the texture (which was your previous image).
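For part 1, here is a hedged sketch of dumping the currently bound framebuffer to PNG data on iOS (SavePNGFromFramebuffer is a hypothetical helper; width and height are the framebuffer's dimensions):

NSData *SavePNGFromFramebuffer(GLsizei width, GLsizei height) {
    // Read back the raw RGBA pixels from the bound framebuffer.
    GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Wrap the pixels in a CGImage, then encode as PNG.
    // Note: glReadPixels returns rows bottom-up, so the result may need a vertical flip.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, width * height * 4, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                       kCGImageAlphaPremultipliedLast, provider,
                                       NULL, NO, kCGRenderingIntentDefault);
    NSData *png = UIImagePNGRepresentation([UIImage imageWithCGImage:cgImage]);

    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    free(pixels);
    return png;
}

Loading the PNG back for part 2 mirrors the texture-upload sketch earlier on this page: draw the UIImage into a bitmap context and hand the buffer to glTexImage2D, then attach that texture to the new framebuffer.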
I am decoding a video using ffmpeg, converting it from the YUV420 color space to the RGBA color space, wrapping it in a CGImage, and rendering it to the screen. The video plays correctly. With Instruments I see that a third of the CPU cycles are spent on another conversion (the function is called CGSConvertBGRX8888toRGBA8888). Why is this second color space conversion necessary, and why is there no conversion if I load, for example, a PNG image and draw it the same way?
Code for the CGImage creation:
http://pastebin.com/CqePhPzG
Thanks!
It looks like the byte ordering of the source image doesn't match the destination. Also, I believe X is non-premultiplied alpha whereas A is premultiplied alpha. Try changing the byte-ordering settings you pass when creating your image or bitmap image context. HTH.
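As an illustration of that (a sketch with assumed names: buffer, width, height, and bytesPerRow come from your ffmpeg conversion, and the exact fix depends on the pastebin code), building the bitmap in little-endian 32-bit premultiplied-first order, i.e. BGRA in memory, is the layout iOS compositing generally prefers and typically avoids the extra conversion pass:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Little-endian 32-bit with premultiplied-first alpha = BGRA byte order in memory.
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst;
CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                         colorSpace, bitmapInfo);
CGImageRef image = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);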
The TImageList of Delphi 2009 has support for PNG images, added via the imagelist editor. Is there any way to extract a TPngImage from a TImageList while preserving the alpha channel?
What I want to do is actually to extract the images from one TImageList, make a disabled version of them and then add them to another TImageList. During this operation I would of course like to preserve the alpha channel of the PNG images.
I did something like this with Delphi 2006.
TImageList contains a protected method, GetImages. It can be accessed using the "protected hack":
type
  TGetImageImageList = class(TImageList) // Please use a better name!
  end;

You can cast the imagelist to TGetImageImageList to get at GetImages:

begin
  TGetImageImageList(ImageList).GetImages(Index, Bitmap, Mask);
end;
Bitmap contains the image, and Mask is a black-and-white bitmap that determines the transparent sections.
You can now change the bitmap and store it back using:
function Add(Image, Mask: TBitmap): Integer;
I hope this gives you enough pointers to explore further.