I'm trying to create an appearance stream for a markup highlight annotation by writing raw content-stream commands to an output stream. However, I could not pinpoint the correct classes to use, and so far I have not found a post on how to do it.
This is what I have:
ByteArrayOutputStream os = new ByteArrayOutputStream();
os.write("/Form Do".getBytes("ISO_8859_1"));
// ... other commands to draw the highlighted rectangle coordinates ...
PdfStream stream = pdfAppearance.getFormXObject(0);
stream.writeContent(os);
Are there other classes needed for writing raw commands?
And correct me if I am wrong, but it looks like drawing the rectangle coordinates directly in the PdfAppearance will have no effect, since a form XObject seems to be required, and the rectangle fills are drawn there instead.
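For what it's worth, in iText 5 a highlight appearance can usually be built without hand-writing bytes, because PdfAppearance is a PdfContentByte subclass that exposes the drawing operators directly (and setLiteral() if you really want raw commands). A minimal sketch, assuming iText 5.x, an open PdfWriter named writer, and made-up coordinates:
import com.itextpdf.text.BaseColor;
import com.itextpdf.text.Rectangle;
import com.itextpdf.text.pdf.PdfAnnotation;
import com.itextpdf.text.pdf.PdfAppearance;

// Rectangle and quad points for the highlighted region (placeholders).
Rectangle rect = new Rectangle(100, 700, 200, 720);
float[] quad = {100, 720, 200, 720, 100, 700, 200, 700};
PdfAnnotation highlight = PdfAnnotation.createMarkup(
        writer, rect, "comment", PdfAnnotation.MARKUP_HIGHLIGHT, quad);

// The appearance is itself a content stream; whatever is drawn here
// becomes the form XObject that viewers render for the annotation.
PdfAppearance app = PdfAppearance.createAppearance(
        writer, rect.getWidth(), rect.getHeight());
app.setColorFill(BaseColor.YELLOW);
app.rectangle(0, 0, rect.getWidth(), rect.getHeight());
app.fill();
// Raw operators can still be injected if needed:
// app.setLiteral("1 1 0 rg 0 0 100 20 re f\n");

highlight.setAppearance(PdfAnnotation.APPEARANCE_NORMAL, app);
writer.addAnnotation(highlight);
If this sketch holds, it also confirms the suspicion above: the appearance's form XObject is where the rectangle fill has to be drawn, and setAppearance() is what wires it up.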
I am trying to develop my own mini game engine in Apple Metal on a Mac, and I am stuck at the point where I want to render text on the GPU. I do not have much graphics programming experience, so I am not sure how to approach it. I stumbled upon an article by Warren Moore on using signed distance fields, but I lack the graphics background to understand it completely and implement it myself. The blog post has a code sample written in Objective-C, which unfortunately I do not know. Is there a Swift version of it? Or can someone explain, or give pointers on, how to render text in Metal?
I have been down this road before. I think you might find SceneKit useful if you are after 3D text.
If you are OK with using SceneKit to drive your rendering: use SCNText with an SCNView.
If you have your own command buffer, and you can get away with blending your text on top of the rest of your graphics, you can still use SCNText: an SCNRenderer's render(atTime:viewport:commandBuffer:passDescriptor:) method will encode a scene's render commands onto your command buffer, as sketched below.
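A hedged sketch of that encoding step (device, scene, commandBuffer, and passDescriptor are assumed to exist in your engine):
import SceneKit
import QuartzCore

let renderer = SCNRenderer(device: device, options: nil)
renderer.scene = scene
// Encode SceneKit's draw calls onto your own command buffer; the pass
// descriptor's load actions control whether the text blends on top of
// what you have already rendered.
renderer.render(atTime: CACurrentMediaTime(),
                viewport: CGRect(x: 0, y: 0, width: 1024, height: 768),
                commandBuffer: commandBuffer,
                passDescriptor: passDescriptor)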
If you want to avoid SceneKit's rendering process entirely, I would recommend this: create an SCNText inside an SCNTransaction, like so:
import SceneKit
import ModelIO
import MetalKit

// Wrap geometry creation in a transaction so it is committed immediately.
SCNTransaction.begin()
let sceneText = SCNText(string: text, extrusionDepth: extrusionDepth)
SCNTransaction.commit()

// Bridge SceneKit -> Model I/O -> MetalKit to get GPU-ready buffers.
let mdlMesh = MDLMesh(scnGeometry: sceneText, bufferAllocator: yourBufferAllocator)
let mesh = try MTKMesh(mesh: mdlMesh, device: MTLCreateSystemDefaultDevice()!)
This MTKMesh will have three vertex buffers: the first (index 0) is a list of positions in packed_float3 format, the second (index 1) a list of normals in packed_float3 format, and the third (index 2) a list of texture coordinates in packed_float2 format. Just make sure to reflect that in your vertex shader. It will have 1-5 submeshes with their own index buffers, corresponding, I believe, to front, back, front chamfer, back chamfer, and extrusion side.
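For example, a vertex descriptor matching that layout might look like the following (a sketch; the strides assume tightly packed buffers):
import MetalKit

let vd = MTLVertexDescriptor()
vd.attributes[0].format = .float3      // packed_float3 positions, buffer 0
vd.attributes[0].bufferIndex = 0
vd.attributes[0].offset = 0
vd.attributes[1].format = .float3      // packed_float3 normals, buffer 1
vd.attributes[1].bufferIndex = 1
vd.attributes[1].offset = 0
vd.attributes[2].format = .float2      // packed_float2 UVs, buffer 2
vd.attributes[2].bufferIndex = 2
vd.attributes[2].offset = 0
vd.layouts[0].stride = MemoryLayout<Float>.stride * 3
vd.layouts[1].stride = MemoryLayout<Float>.stride * 3
vd.layouts[2].stride = MemoryLayout<Float>.stride * 2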
Now, if you are after 2D text, you can either use the method above with an extrusionDepth close to zero, or harness Core Text directly to do the font metrics and render textured quads with a font atlas texture, as the commenter suggested.
The ability to read Objective-C is certainly useful as well, but you may not need it for this problem specifically. I have kept the explanations brief since I don't know your exact goal, but I can provide more detail on any of these methods on request.
I need to get a polygon comment into a PDF and revise its shape. I am able to do so now by merging the PDF with a blank PDF containing just the polygon; I can then update the vertices and the rect.
However, the polygon still shows its old shape when the new PDF is opened, even though it refreshes after a few clicks on the shape. I need this fixed, and found it is probably caused by the data stream in the annotation object, which seems to still contain the old polygon shape. I cannot figure out how to overwrite that before saving the new PDF. I used code similar to the snippet below to update the vertices and the rect, but cannot figure out how to update the data stream.
from PyPDF2.generic import ArrayObject, FloatObject, NameObject

annot.getObject().update({
    NameObject('/Rect'): ArrayObject([
        FloatObject(min(xcoords)), FloatObject(min(ycoords)),
        FloatObject(max(xcoords)), FloatObject(max(ycoords)),
    ])
})
I would appreciate any information.
In case someone has a similar problem, I just wanted to share my solution.
I did not find a way to update the stream data; however, I was able to get rid of the "ghost" shape by removing that entry from the annotation object entirely:
annot.getObject().pop('/AP')
Without that ghost shape, the annotation polygon displays properly! I am not sure of all the details, but '/AP' appears to hold the annotation's cached appearance stream, which would explain the ghost: once it is removed, viewers regenerate the appearance from the updated vertices.
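Putting both steps together, a sketch against the older PyPDF2 API used above (annot, xcoords, and ycoords come from my earlier code):
from PyPDF2.generic import ArrayObject, FloatObject, NameObject

obj = annot.getObject()
# Update the bounding box to the new vertex extents.
obj.update({
    NameObject('/Rect'): ArrayObject([
        FloatObject(min(xcoords)), FloatObject(min(ycoords)),
        FloatObject(max(xcoords)), FloatObject(max(ycoords)),
    ])
})
# Drop the cached appearance stream so viewers redraw from the vertices.
if '/AP' in obj:
    obj.pop('/AP')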
Another way to title this could be: how do I crop a t-shirt out of a pattern image? (The image is big enough; we have huge printers.)
Everyone,
Could you please guide me on the commands to cut an image? I'm new to ImageMagick and did not see anything that handled my issue exactly (only partially).
We get pre-sized image files to go on a men's or women's t-shirt (soon a dress). To save spraying all the ink, we would like to cut the image to predefined coordinates for each size and type.
How do I first determine all the coordinates (given the template files) and create commands for each size and style?
Will this work in .NET?
I can also work with PHP and C++ if you prefer.
Here is a link to an example of what needs to be done: http://www.relationship1.com/help.zip
Inside help.zip are:
STYLE1__XS_TEMPLATE_FILE_PDF.pdf – outline of what needs to be cropped out for this size and style
STYLE1_XS_asset_before.png – the image before
STYLE1_XS_TEMPLATE_FILE_PHOTOSHOP_METHOD.ODS – a file they currently use to do this manually in Photoshop (there is a hidden layer in here)
STYLE1_XS_TFILE_PDF_WITH_IMAGE_AND_TEMPLATE.pdf – what the step would look like right before it's completed
STYLE1_XS_TT_asset_completed.png – the desired output
The following is the answer:
using ImageMagick;

static void Main(string[] args)
{
    // Mask: black where the cut should be, white where the asset remains.
    using (MagickImage mask = new MagickImage(@"c:\help\maskO2_XS.png"))
    // The pattern file.
    using (MagickImage image = new MagickImage(@"c:\help\STYLE1_XS_asset_before.png"))
    {
        mask.Resize(image.Width, image.Height);
        image.Composite(mask, CompositeOperator.Bumpmap);
        image.Write(@"c:\help\file_out.png");
    }
}
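If a transparent cut is preferred over a dark one, so the printer lays down no ink at all in the cut area, a variant worth trying (an untested sketch, same file names assumed) is to copy the mask into the image's alpha channel instead:
using ImageMagick;

using (MagickImage mask = new MagickImage(@"c:\help\maskO2_XS.png"))
using (MagickImage image = new MagickImage(@"c:\help\STYLE1_XS_asset_before.png"))
{
    mask.Resize(image.Width, image.Height);
    // CopyAlpha uses the mask's grayscale values as the alpha channel:
    // white areas stay opaque, black areas become fully transparent.
    image.Alpha(AlphaOption.Set);
    image.Composite(mask, CompositeOperator.CopyAlpha);
    image.Write(@"c:\help\file_out.png");
}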
I'm working on a graphing application which I wrote using Core Graphics. I have a buffer that accumulates data, and I render it on the screen. It's super slow, and I want to avoid going to OpenGL if possible. According to the profiler, drawing my graph data is what's killing me: it consists of a number of points which are converted to a path, followed by the AddPath and DrawPath calls.
This is what I want to do; my question is how best to implement it using layers, views, etc.
I have a grid and some text. I want this to be rendered in a CALayer (or some other layer/view?) and only update when required (the graph is rescaled).
Only a portion of the data needs to be refreshed. I want to take the previous screen buffer, erase a rectangle's worth of data (or cover it with a white box) and then draw only the portion of the graphs that have changed.
I then want to merge the background layer with the foreground graphs to generate the composite image. This requires the graph layer to have a transparent background so as not to obscure the grid.
I've looked at using CALayer as a sublayer, but it doesn't seem to provide a simple way to draw a line. CAShapeLayer seems a bit better, but it looks like it can only draw a single path, and I want the grid to be composed of multiple lines.
What's the best approach and combination of objects to allow me to do this?
Thanks,
Reza
I'd use a CGLayerRef to draw the path into. For each new point, I'd draw just the new segment. When the graph reached full width, I'd create a new CGLayerRef and start drawing the new line segments into that.
What happens to the previous layer as it's drawn over by the new one depends on how your graph is displayed, but you could clear the section now underneath the new layer (using CGContextSetBlendMode(context, kCGBlendModeClear);), or blend them together in some other way.
Redrawing the layers each time you change the lines they contain is relatively cheap compared to redrawing all of the line segments themselves.
Technically, there would also be CALayers used to manage drawing the CGLayerRefs to the screen (via the drawLayer:inContext: delegate method), but all of the line drawing is done in the CGLayerRef's context, and then the CGLayerRef is drawn as a whole into the CALayer's context (CGContextDrawLayerInRect(context, frame, backingCGLayer);).
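A minimal Swift sketch of that approach (GraphView and its single-segment bookkeeping are my own names; scaling, the grid layer, and the full-width rollover are omitted):
import UIKit

final class GraphView: UIView {
    private var graphLayer: CGLayer?      // cached, already-drawn segments
    private var lastPoint: CGPoint?
    private var pendingPoint: CGPoint?

    func append(_ point: CGPoint) {
        pendingPoint = point
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }

        // Create the backing CGLayer from the destination context so it
        // matches the screen's resolution and color space.
        if graphLayer == nil {
            graphLayer = CGLayer(ctx, size: bounds.size, auxiliaryInfo: nil)
        }
        guard let layer = graphLayer, let layerCtx = layer.context else { return }

        // Stroke only the newest segment into the cached layer.
        if let p = pendingPoint {
            if let last = lastPoint {
                layerCtx.move(to: last)
                layerCtx.addLine(to: p)
                layerCtx.strokePath()
            }
            lastPoint = p
            pendingPoint = nil
        }

        // Blitting the whole layer is cheap compared to re-stroking
        // every accumulated segment on each pass.
        ctx.draw(layer, in: bounds)
    }
}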
How do I create a bitmap font image that contains characters from multiple regions and is correctly interpreted by XNA content pipeline?
I want to add some special characters to my bitmap font image, but I don't know how to do it correctly.
UPD: I think I'm getting closer to my answer. The Sprite Font Texture content processor looks for non-magenta squares in the image, and probably uses an XML settings file, as with normal sprite fonts, to map each square to a corresponding symbol. I should probably edit that XML file for my custom texture, but I don't know where to find it yet.
There is no XML file.
You have to create a custom content processor. Inherit that processor from FontTextureProcessor and override the GetCharacterForIndex method.
Have your method return the character for the specified index in your texture.
The default implementation simply returns FirstCharacter + index. Yours can use whatever logic it likes. (I guess you could even make it parse an XML file for the data.)
(Note that, for a single region, you can specify what FirstCharacter is in the properties for the "Sprite Font Texture" content processor, in the properties window (F4) for that content file.)
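A sketch of such a processor; the Characters string here is a placeholder, and must list your glyphs in the exact order they appear in the texture:
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Processors;

[ContentProcessor(DisplayName = "Multi-Region Font Texture")]
public class MultiRegionFontProcessor : FontTextureProcessor
{
    // Glyphs in texture order; mix regions however the source image does.
    private const string Characters =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
        "ÄÖÜ" +       // Latin extras
        "АБВГД";      // Cyrillic block, etc.

    protected override char GetCharacterForIndex(int index)
    {
        return Characters[index];
    }
}
After building, you would select this processor instead of "Sprite Font Texture" in the content file's properties.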