I am trying to render a string over an image chosen by the user via the PhotoChooserTask. I have seen various replies to similar questions, but none of them have nailed it.
This is what I have come up with -
void photochoosertask_Completed(object sender, PhotoResult e)
{
    if (e.TaskResult == TaskResult.OK)
    {
        System.Windows.Media.Imaging.BitmapImage bmp = new System.Windows.Media.Imaging.BitmapImage();
        bmp.SetSource(e.ChosenPhoto);
        image1.Source = bmp;

        string steamer = "SO!";

        System.Windows.Media.Imaging.WriteableBitmap bmps = new System.Windows.Media.Imaging.WriteableBitmap(bmp);
        RenderString(bmps, steamer);
    }
}

private void RenderString(System.Windows.Media.Imaging.WriteableBitmap bitmap, string steamer)
{
    textBlock1.Text = steamer;
    bitmap.Render(textBlock1, null);
    bitmap.Invalidate();
}
The code, however, doesn't work. I am most likely making a major mistake. Any help appreciated, thanks!
According to the documentation:
If an empty transform is supplied [i.e. the null you're passing as the second parameter], the bits representing the element show up at the same offset as if they were placed within their parent.
So if I understand what's happening correctly (and I probably don't), your textBlock1 element is being rendered with the same offset as it has in your parent form. It may be that textBlock1 is so far down from the top and left that it doesn't show up in your writeable bitmap.
BTW, I'm not familiar with WriteableBitmap, but what you're doing (putting text into a UI element and then rendering that element onto your bitmap) seems like a strange way to add text to a bitmap.
I just figured it out. Thought I should post the solution code here, might help somebody - someday :)
// set up a writeable bitmap with the required dimensions
System.Windows.Media.Imaging.WriteableBitmap wbmps = new System.Windows.Media.Imaging.WriteableBitmap(x, y);

// set up a transform; we'll use a ScaleTransform and keep things simple here, 1x on both axes
ScaleTransform transform = new System.Windows.Media.ScaleTransform();
transform.ScaleX = 1;
transform.ScaleY = 1;

// render the image on the WriteableBitmap, then follow it up by rendering a string
wbmps.Render(imageelement, transform);

// now render the string, which is equivalent to TextBlock.Text
wbmps.Render(textblock, transform);

// finally, redraw the WriteableBitmap to complete the rendering
wbmps.Invalidate();
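If the text needs to land at a specific spot on the bitmap rather than at its parent's offset, a TranslateTransform can be passed to Render() instead. A minimal sketch, assuming the same wbmps and textblock as above and an arbitrary (10, 10) position:

// position the text at (10, 10) on the bitmap (hypothetical offsets)
System.Windows.Media.TranslateTransform offset = new System.Windows.Media.TranslateTransform();
offset.X = 10; // assumed distance from the left edge
offset.Y = 10; // assumed distance from the top edge

wbmps.Render(textblock, offset);

// redraw to apply the render
wbmps.Invalidate();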
I'm learning p5.js. I've tried the following code to draw a circle each time I move the mouse, with a fill color taken from an image at the mouse position.
let img;

function setup() {
  createCanvas(400, 400);
  loadImage('https://upload.wikimedia.org/wikipedia/commons/e/ef/Hayao_Miyazaki.jpg', img => {
    image(img, 0, 0);
  });
  noStroke();
}

function draw() {
  let c = get(mouseX, mouseY);
  fill(c);
  circle(mouseX, mouseY, 30);
}
But it seems to take the color from the canvas, not from the image. Because of that, if you don't move your mouse fast enough the color doesn't change at all, and even when you do, the range of colors is much more limited; in other words, it's not what I intended.
I can get the colors right if I put the loadImage() call inside the draw() function, but then only one circle at a time is visible.
Maybe I should store every pixel of the image in an array and read the values from that array, without using get()? Is that possible?
I think I'm missing something simple, please help.
Use img.get(mouseX, mouseY) to get values from the image, not the whole canvas.
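Put together with the original sketch, that could look like the following; this variant moves the load into preload() (my choice here, not the only way; the callback style from the question works too) so img is ready before draw() runs:

let img;

// load the image before setup()/draw() start
function preload() {
  img = loadImage('https://upload.wikimedia.org/wikipedia/commons/e/ef/Hayao_Miyazaki.jpg');
}

function setup() {
  createCanvas(400, 400);
  image(img, 0, 0);
  noStroke();
}

function draw() {
  // sample the image, not the canvas, so previously drawn circles don't affect the color
  let c = img.get(mouseX, mouseY);
  fill(c);
  circle(mouseX, mouseY, 30);
}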
I too thought that img.get(mouseX, mouseY); would work, and #mevfy-y also said it, so it might work?!
I'm using the Embarcadero RAD Studio C++Builder XE7 compiler. In an application project, I'm using both the Windows GDI and GDI+ to draw on several device contexts.
My drawing content is something like this:
In the above sample, the text background and the user picture are drawn with GDI+. The user picture is also clipped with a rounded path. All the other items (the text and the emojis) are drawn with GDI.
When I draw to the screen DC, all works fine.
Now I want to draw on a printer device context. The one I use for my tests is the new "Export to PDF" printer device available in Windows 10. I prepare my device context to draw on an A4 viewport this way:
HDC GetPrinterDC(HWND hWnd) const
{
    // initialize the print dialog structure, set PD_RETURNDC to return a printer device context
    ::PRINTDLG pd = {0};
    pd.lStructSize = sizeof(pd);
    pd.hwndOwner   = hWnd;
    pd.Flags       = PD_RETURNDC;

    // get the printer DC to use (PrintDlg() returns FALSE if the user cancels)
    if (!::PrintDlg(&pd))
        return NULL;

    return pd.hDC;
}
...
bool Print()
{
    HDC hDC = NULL;

    try
    {
        hDC = GetPrinterDC(Application->Handle);

        const TSize srcPage(793, 1123);
        const TSize dstPage(::GetDeviceCaps(hDC, PHYSICALWIDTH), ::GetDeviceCaps(hDC, PHYSICALHEIGHT));
        const TSize pageMargins(::GetDeviceCaps(hDC, PHYSICALOFFSETX), ::GetDeviceCaps(hDC, PHYSICALOFFSETY));

        ::SetMapMode(hDC, MM_ISOTROPIC);
        ::SetWindowExtEx(hDC, srcPage.Width, srcPage.Height, NULL);
        ::SetViewportExtEx(hDC, dstPage.Width, dstPage.Height, NULL);
        ::SetViewportOrgEx(hDC, -pageMargins.Width, -pageMargins.Height, NULL);

        ::DOCINFO di = {sizeof(::DOCINFO), config.m_FormattedTitle.c_str()};
        ::StartDoc(hDC, &di);

        // ... the draw function is executed here ...

        ::EndDoc(hDC);
        return true;
    }
    __finally
    {
        if (hDC)
            ::DeleteDC(hDC);
    }
}
The draw function executed between the StartDoc() and EndDoc() calls is exactly the same as the one I use to draw on the screen. The only difference is that I added a global clipping rect around the whole page, to keep the drawing from overlapping the page margins when the size is too big, e.g. when I repeat the above drawing several times under the first one. (This is experimental; later I will add a page cutting process, but that is not the question for now.)
Here are my clipping functions:
int Clip(const TRect& rect, HDC hDC)
{
    // save current device context state
    int savedDC = ::SaveDC(hDC);

    HRGN pClipRegion = NULL;

    try
    {
        // reset any previous clip region
        ::SelectClipRgn(hDC, NULL);

        // create clip region
        pClipRegion = ::CreateRectRgn(rect.Left, rect.Top, rect.Right, rect.Bottom);

        // select new canvas clip region
        if (::SelectClipRgn(hDC, pClipRegion) == ERROR)
        {
            DWORD error = ::GetLastError();
            ::OutputDebugString((UnicodeString(L"Unable to select clip region - error - ") + ::IntToStr(int(error))).c_str());
        }
    }
    __finally
    {
        // delete clip region (it was copied internally by SelectClipRgn())
        if (pClipRegion)
            ::DeleteObject(pClipRegion);
    }

    return savedDC;
}
void ReleaseClip(int savedDC, HDC hDC)
{
    if (!savedDC)
        return;

    if (!hDC)
        return;

    // restore previously saved device context
    ::RestoreDC(hDC, savedDC);
}
As mentioned above, I expected a clipping around my page. However, the result is just a blank page. If I bypass the clipping functions, everything is printed correctly, except that the drawing may overlap the page margins. On the other hand, if I apply the clipping to an arbitrary rect on my screen, all works fine.
What am I doing wrong with my clipping? Why is the page completely broken when I enable it?
So I found what the issue was. Niki was close to the solution. The clipping functions are always applied to the page in device pixels, ignoring the coordinate system and the units defined by the viewport.
In my case, the values passed to the CreateRectRgn() function were wrong, because they remained untransformed by the viewport, even though the clipping was applied after the viewport was set in the device context.
This made the issue difficult to identify, because the clipping appeared to be transformed when reading the code, as it was applied after the viewport, just before the drawing was processed.
I don't know if this is a GDI bug or intended behavior, but unfortunately I have never seen this detail mentioned in any of the documents I read about clipping, although it seems to me important to know that clipping isn't affected by the viewport.
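For illustration, one way to account for this is to run the logical rect through ::LPtoDP() before building the region, so the region is created in the device units the clipping functions expect. A minimal, untested sketch (the ClipLogical name is mine):

int ClipLogical(const TRect& rect, HDC hDC)
{
    // convert the logical (viewport) coordinates to device pixels, since
    // CreateRectRgn()/SelectClipRgn() ignore the current map mode
    POINT pts[2] = {{rect.Left, rect.Top}, {rect.Right, rect.Bottom}};
    ::LPtoDP(hDC, pts, 2);

    // save the device context state, as in the original Clip() function
    int savedDC = ::SaveDC(hDC);

    HRGN pClipRegion = ::CreateRectRgn(pts[0].x, pts[0].y, pts[1].x, pts[1].y);
    ::SelectClipRgn(hDC, pClipRegion);

    // the region was copied internally, so it can be deleted right away
    ::DeleteObject(pClipRegion);

    return savedDC;
}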
I'm saving a PDF file with CGPDF under iOS 10. For this I load an existing PDF page and write it to a new file with a context. While doing so, the rotation information gets lost, and the resulting PDF file has all pages re-arranged at 0°.
let writeContext: CGContext = CGContext(finalPDFURL, mediaBox: nil, nil)!
// Loop through all pages
let page: CGPDFPage = ...
var mediaBox = page.getBoxRect(.mediaBox)
writeContext.beginPage(mediaBox: &mediaBox)
writeContext.drawPDFPage(page)
writeContext.endPage()
// Loop finished
writeContext.closePDF()
Then I came up with this code, which handles rotation just fine but seems to draw the content with a slight offset. Using it with a PDF that has text or anything else close to the margins results in cut-off content. Later I also tried setting x, y, etc. on the pageInfo dict, but I guess I misunderstood something there; see the 2nd question below.
let page: CGPDFPage = ...

// Set the rotation
var pageDict = [String: Int32]()
pageDict["Rotate"] = page.rotationAngle

writeContext.beginPDFPage(pageDict as CFDictionary?)
writeContext.drawPDFPage(page)
writeContext.endPDFPage()
So my questions:
1) How to use the first approach but with rotation support? Or the second one, but without cropping of content?
2) Where would I find a complete listing of all available pageInfo key-value pairs for this method? https://developer.apple.com/reference/coregraphics/cgcontext/1456578-beginpdfpage
Thanks!
The question is old, but I hope the following answer will be useful for someone in the future. It preserves the rotation information of the original PDF page.
How to use the first approach but with rotation support?
let writeContext: CGContext = CGContext(finalPDFURL, mediaBox: nil, nil)!

// Loop through all pages
let page: CGPDFPage = ...
var mediaBox = page.getBoxRect(.mediaBox)
writeContext.beginPage(mediaBox: &mediaBox)

let m = page.getDrawingTransform(.mediaBox, rect: mediaBox, rotate: 0, preserveAspectRatio: true)

// The following 3 lines apply the rotations so that the page looks in the right direction
writeContext.translateBy(x: 0.0, y: mediaBox.size.height)
writeContext.scaleBy(x: 1, y: -1)
writeContext.concatenate(m)

writeContext.drawPDFPage(page)
writeContext.endPage()
// Loop finished

writeContext.closePDF()
I got feedback from a fellow iOS coder who suggested the following principle:
These three steps should allow you to recreate the original page in the destination while maintaining the page rotation field: (1) set the source page rotation in the destination page via the page dictionary, (2) set that rotation (or possibly the rotation * -1?) in the CGContext you're drawing into, and finally (3) explicitly set the media box in the destination to be identical to the source (no rotation).
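A rough, untested sketch of steps (1) and (3), reusing the "Rotate" key from the question's pageDict (whether CGPDFContext honors that key is an assumption carried over from this thread) and kCGPDFContextMediaBox for the box:

var mediaBox = page.getBoxRect(.mediaBox)
let boxData = NSData(bytes: &mediaBox, length: MemoryLayout<CGRect>.size)

let pageInfo: [String: Any] = [
    kCGPDFContextMediaBox as String: boxData, // (3) media box identical to the source
    "Rotate": page.rotationAngle              // (1) preserve the /Rotate entry
]

writeContext.beginPDFPage(pageInfo as CFDictionary)
// drawPDFPage() replays the raw page content; with the /Rotate entry copied over,
// a conforming viewer should display it rotated as before, so step (2) may turn
// out to be unnecessary (the open question in the feedback above)
writeContext.drawPDFPage(page)
writeContext.endPDFPage()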
I'm using xtk to visualize medical data in a WebGL canvas. Currently I'm playing around with this lesson:
lesson 10
This library is pretty good, but not very well documented. I want to get rid of that GUI and add some mouse events. If I load the mesh from the GUI, how can I add a mouse event to the mesh? I actually don't know where to start; it's a little bit confusing to get started with this library...
I tried
mesh.click(function() {
    alert("yes");
})
or
mesh.mousedown(function() {
    alert("yes");
})
Objects rendered in WebGL are not part of the DOM, and as such don't generate events like DOM elements do. This means that for events like these you have to implement the mouse interaction code yourself.
Traditionally in WebGL/OpenGL this process is known as "picking", and there are several decent resources for it online. (For example: http://webgldemos.thoughtsincomputation.com/engine_tests/picking) The core process is something like this, though:
1. For each pickable object in your scene, assign it a color. Put this in a lookup table somewhere.
2. Re-render the entire scene to a texture, rendering each pickable object with its assigned color.
3. Once the scene is rendered, determine your mouse coordinates and read back the color of the texture at that X/Y.
4. Fetch the object associated with that color from your lookup table. This is the object your mouse cursor is pointing at!
As you can see, while not conceptually difficult, this method involves several mid-level WebGL topics, such as rendering to a texture, and as such is not usually recommended for beginners. I'm not sure whether there are any features in xtk to assist with this (honestly, I had never heard of the library before your post), but I would guess this is something you'll have to implement on your own.
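To make the lookup-table idea concrete, here is a minimal sketch of the color round-trip (plain JavaScript; the helper names are mine):

// pack a numeric object ID into an RGB color for the picking pass
function idToColor(id) {
  return [
    ((id >> 16) & 0xff) / 255, // red channel
    ((id >> 8) & 0xff) / 255,  // green channel
    (id & 0xff) / 255          // blue channel
  ];
}

// unpack the ID from a pixel read back with gl.readPixels()
function colorToId(pixel) {
  return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
}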
DOM events are not supported, but you can do it with xtk. Check out this JSFiddle:
http://jsfiddle.net/haehn/r7Ugf/
// create and initialize a 3D renderer
var r = new X.renderer3D();
r.init();

// create a cube and a sphere
cube = new X.cube();
sphere = new X.sphere();
sphere.center = [-20, 0, 0];

r.interactor.onMouseMove = function() {
  // grab the current mouse position
  var _pos = r.interactor.mousePosition;

  // pick the current object
  var _id = r.pick(_pos[0], _pos[1]);

  if (_id != 0) {
    // grab the object and turn it red
    r.get(_id).color = [1, 0, 0];
  } else {
    // no object under the mouse
    cube.color = [1, 1, 1];
    sphere.color = [1, 1, 1];
  }

  r.render();
}

r.interactor.onMouseDown = function(left, middle, right) {
  // only observe right mouse clicks
  if (!right) return;

  // grab the current mouse position
  var _pos = r.interactor.mousePosition;

  // pick the current object
  var _id = r.pick(_pos[0], _pos[1]);

  if (_id == sphere.id) {
    // turn the sphere green
    sphere.color = [0, 1, 0];
    r.render();
  }
}

r.add(cube);   // add the cube to the renderer
r.add(sphere); // and the sphere as well
r.render();    // ..and render it
Easy, no?
XTK implements picking the way Toji explained (i.e. with a framebuffer where every object is rendered in a different RGBA "color"). It will work as long as you have fewer than 255^4 objects, so almost always. There are other methods, like unprojecting, but they would be longer, I think.
So with X.renderer.pick and X.renderer.get you can find the object under the mouse and change its properties. For the moment, however, you can only change visualization properties (see the setGetter and setSetter in every class); you cannot move an X.object (since the X.object._transform attribute is private and there is no getter/setter for it yet).
That would be something interesting to deal with: adding a getter/setter pair for X.object's transform would allow, for example, a user to put medical objects (modeled by a mesh or something else) in the scene and position them to measure distances, or to see whether something will fit for an operation. Wouldn't that be a good idea, Haehn? And it's a minor change to the framework.
I'm building a map editor for a project and need to draw a hexagon and fill it with a solid color. I have the shape correct, but for the life of me I can't figure out how to fill it. I suspect it may be due to whether the thing is a Shape, Sprite, or UIComponent. Here is what I have for the polygon itself:
import com.Polygon;
import mx.core.UIComponent;

public class greenFillOne extends UIComponent {

    public var hexWidth:Number = 64;
    public var hexLength:Number = 73;

    public function greenFillOne() {
        var hexPoly:Polygon = new Polygon;
        hexPoly.drawPolygon(40, 6, 27 + (hexWidth * .25), 37, 0x499b0e, 1, 30);
        addChild(hexPoly);
    }
}
The Polygon class isn't a standard Adobe library, so I don't know the specifics. However, assuming it uses the standard Flash drawing API, it should be no problem to add some code to extend the function. You just need to make sure you call graphics.beginFill before the graphics.lineTo / graphics.moveTo calls, and then finish with graphics.endFill.
e.g.,
var g:Graphics = someShape.graphics;
g.beginFill(0xFF0000, .4); // red, .4 opacity
g.moveTo(x1, y1);
g.lineTo(x2, y2);
g.lineTo(x3, y3);
g.lineTo(x1, y1);
g.endFill();
This will draw a triangle filled with red at .4 opacity.
I'll put this here because answering it as a comment to Glenn goes past the character limit. My ActionScript file extends UIComponent. When I created a variable hexPoly:Polygon = new Polygon; it would render the outline of the hex but would not fill it, no matter what I did. I examined Polygon.as and duplicated the methods, but as a Sprite, and it worked. So I need to figure out how to wrap the polygon as a sprite, or just leave it as is.
var hexPoly:Sprite = new Sprite;
hexPoly.graphics.beginFill(0x4ea50f, 1);
hexPoly.graphics.moveTo(xCenter + (hexWidth * .25) + Math.sin(radians(330)) * radius, offset + (radius - Math.cos(radians(330)) * radius));
hexPoly.graphics.lineTo(xCenter + (hexWidth * .25) + Math.sin(radians(30)) * radius, offset + (radius - Math.cos(radians(30)) * radius));
hexPoly.graphics.lineTo(xCenter + (hexWidth * .25) + Math.sin(radians(90)) * radius, offset + (radius - Math.cos(radians(90)) * radius));
hexPoly.graphics.lineTo(xCenter + (hexWidth * .25) + Math.sin(radians(150)) * radius, offset + (radius - Math.cos(radians(150)) * radius));
hexPoly.graphics.lineTo(xCenter + (hexWidth * .25) + Math.sin(radians(210)) * radius, offset + (radius - Math.cos(radians(210)) * radius));
hexPoly.graphics.lineTo(xCenter + (hexWidth * .25) + Math.sin(radians(270)) * radius, offset + (radius - Math.cos(radians(270)) * radius));
hexPoly.graphics.endFill();
addChild(hexPoly);