I have a simple Xamarin Forms page that contains a SkiaSharp Canvas. I am allowing the user to load a floor plan bitmap to use as the background for the Canvas, and they can add smaller bitmaps to be dragged around the canvas wherever they need them.
I am following this guidance on the Microsoft page for touch manipulation.
https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/graphics/skiasharp/transforms/touch
Everything works perfectly if I use it on a single device size. When I launch it on a different device size, the resolution and Canvas size are different, and it is causing the smaller bitmaps to be placed inconsistently.
Here is the result I am seeing. The left is an iPhone 13 Pro Max, and the right is an iPhone 13 Pro. As you can see, the result is off when I view it on different size devices.
Here is the Grid that contains the SKCanvas:
<Grid BackgroundColor="White" Grid.Row="1">
    <skia:SKCanvasView x:Name="canvasView" PaintSurface="OnCanvasViewPaintSurface" />
    <Grid.Effects>
        <tt:TouchEffect Capture="True" TouchAction="OnTouchEffectAction" />
    </Grid.Effects>
</Grid>
Here is the PaintSurface handler:
void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs args)
{
    SKImageInfo info = args.Info;
    SKSurface surface = args.Surface;
    SKCanvas canvas = surface.Canvas;

    canvas.Clear();

    // set background
    SKRect dest = new SKRect(0, 0, info.Width, info.Height);
    canvas.DrawBitmap(backgroundBitmap, dest, BitmapStretch.Uniform,
                      BitmapAlignment.Start, BitmapAlignment.Start);

    // Set the coordinates for the pin bitmap
    var transX = 1005;
    var transY = 1189;
    bitmap.Matrix = SKMatrix.CreateTranslation(transX, transY);

    // Display the bitmap
    bitmap.Paint(canvas);

    // Display the matrix in the lower-right corner
    SKSize matrixSize = matrixDisplay.Measure(bitmap.Matrix);
    matrixDisplay.Paint(canvas, bitmap.Matrix,
        new SKPoint(info.Width - matrixSize.Width,
                    info.Height - matrixSize.Height));
}
I've been through several posts on Stack Overflow as well as in the Microsoft forums. I can't figure out a way to make this show consistently on different size devices.
Does anyone have a suggestion on how I can keep this smaller bitmap in the same location on the background/SKCanvas regardless of device size?
I appreciate any feedback.
The coordinates of the bitmap are not supposed to be hard-coded; you need to adjust the x/y values according to the background canvas's size.
Something like this:
var transX = info.Width - 20;
var transY = info.Height - 20;
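For example, a minimal sketch of that idea: store the pin position as fractions of the canvas instead of absolute pixels, and convert back to pixels inside the PaintSurface handler. The pinFractionX/pinFractionY fields and their 0.45f/0.60f values are made-up placeholders; backgroundBitmap and bitmap are the same objects as in the question.

// Pin position expressed as fractions of the canvas (placeholder values)
float pinFractionX = 0.45f;
float pinFractionY = 0.60f;

void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs args)
{
    SKImageInfo info = args.Info;
    SKCanvas canvas = args.Surface.Canvas;

    canvas.Clear();

    SKRect dest = new SKRect(0, 0, info.Width, info.Height);
    canvas.DrawBitmap(backgroundBitmap, dest, BitmapStretch.Uniform,
                      BitmapAlignment.Start, BitmapAlignment.Start);

    // Convert the device-independent fractions to pixels for this particular canvas
    float transX = pinFractionX * info.Width;
    float transY = pinFractionY * info.Height;
    bitmap.Matrix = SKMatrix.CreateTranslation(transX, transY);
    bitmap.Paint(canvas);
}

Note that because the background is drawn with BitmapStretch.Uniform, it may not fill the whole canvas; if it is letter-boxed, apply the same fractions to the rectangle the background actually occupies rather than to the full canvas.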
I'm using the Embarcadero RAD Studio C++Builder XE7 compiler. In an application project, I'm using both the Windows GDI and GDI+ to draw on several device contexts.
My drawing content is something like this:
In the above sample, the text background and the user picture are drawn with GDI+. The user picture is also clipped with a rounded path. All the other items (the text and the emojis) are drawn with GDI.
When I draw to the screen DC, all works fine.
Now I want to draw on a printer device context. The one I use for my tests is the new "Export to PDF" printer device available in Windows 10. I prepare my device context to draw on an A4 viewport this way:
HDC GetPrinterDC(HWND hWnd) const
{
    // initialize the print dialog structure, set PD_RETURNDC to return a printer device context
    ::PRINTDLG pd = {0};
    pd.lStructSize = sizeof(pd);
    pd.hwndOwner   = hWnd;
    pd.Flags       = PD_RETURNDC;

    // get the printer DC to use
    ::PrintDlg(&pd);
    return pd.hDC;
}
...
bool Print()
{
    HDC hDC = NULL;

    try
    {
        hDC = GetPrinterDC(Application->Handle);

        const TSize srcPage(793, 1123);
        const TSize dstPage(::GetDeviceCaps(hDC, PHYSICALWIDTH), ::GetDeviceCaps(hDC, PHYSICALHEIGHT));
        const TSize pageMargins(::GetDeviceCaps(hDC, PHYSICALOFFSETX), ::GetDeviceCaps(hDC, PHYSICALOFFSETY));

        ::SetMapMode(hDC, MM_ISOTROPIC);
        ::SetWindowExtEx(hDC, srcPage.Width, srcPage.Height, NULL);
        ::SetViewportExtEx(hDC, dstPage.Width, dstPage.Height, NULL);
        ::SetViewportOrgEx(hDC, -pageMargins.Width, -pageMargins.Height, NULL);

        ::DOCINFO di = {sizeof(::DOCINFO), config.m_FormattedTitle.c_str()};
        ::StartDoc(hDC, &di);

        // ... the draw function is executed here ...

        ::EndDoc(hDC);
        return true;
    }
    __finally
    {
        if (hDC)
            ::DeleteDC(hDC);
    }
}
The draw function executed between the StartDoc() and EndDoc() calls is exactly the same as the one I use to draw on the screen. The only difference is that I added a global clipping rect around the whole page, to prevent the drawing from overlapping the page margins when the content is too big, e.g. when I repeat the above drawing several times under the first one. (This is experimental; later I will add a page-cutting process, but that is not the question for now.)
Here are my clipping functions:
int Clip(const TRect& rect, HDC hDC)
{
    // save current device context state
    int savedDC = ::SaveDC(hDC);

    HRGN pClipRegion = NULL;

    try
    {
        // reset any previous clip region
        ::SelectClipRgn(hDC, NULL);

        // create clip region
        pClipRegion = ::CreateRectRgn(rect.Left, rect.Top, rect.Right, rect.Bottom);

        // select new canvas clip region
        if (::SelectClipRgn(hDC, pClipRegion) == ERROR)
        {
            DWORD error = ::GetLastError();
            ::OutputDebugString((UnicodeString(L"Unable to select clip region - error - ") + ::IntToStr(int(error))).c_str());
        }
    }
    __finally
    {
        // delete clip region (it was copied internally by SelectClipRgn())
        if (pClipRegion)
            ::DeleteObject(pClipRegion);
    }

    return savedDC;
}
void ReleaseClip(int savedDC, HDC hDC)
{
    if (!savedDC)
        return;

    if (!hDC)
        return;

    // restore previously saved device context
    ::RestoreDC(hDC, savedDC);
}
As mentioned above, I expected a clipping around my page. However, the result is just a blank page. If I bypass the clipping functions, everything is printed correctly, except that the drawing may overlap the page margins. On the other hand, if I apply the clipping to an arbitrary rect on my screen, everything works fine.
What am I doing wrong with my clipping? Why is the page completely broken when I enable it?
So I found the issue. Niki was close to the solution. The clipping functions always seem to be applied to the page in device pixels, ignoring the coordinate system and the units defined by the viewport.
In my case, the values passed to the CreateRectRgn() function were wrong, because they remained untransformed by the viewport, even though the clipping was applied after the viewport was set on the device context.
This made the issue difficult to identify, because the clipping appeared to be transformed when reading the code, as it was applied after the viewport, just before the drawing was processed.
I don't know if this is a GDI bug or intended behavior, but unfortunately I have never seen this detail mentioned in any of the documents I read about clipping, although it seems important to know that the clipping isn't affected by the viewport.
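If the clip rectangle still needs to be expressed in logical (viewport) units, one possible fix is to convert its corners to device coordinates with LPtoDP() before creating the region. This is only a minimal sketch, assuming the viewport/window extents have already been set on the DC as in Print() above:

int Clip(const TRect& rect, HDC hDC)
{
    // save current device context state
    int savedDC = ::SaveDC(hDC);

    // regions work in device coordinates, so convert the logical rect first
    POINT corners[2];
    corners[0].x = rect.Left;
    corners[0].y = rect.Top;
    corners[1].x = rect.Right;
    corners[1].y = rect.Bottom;
    ::LPtoDP(hDC, corners, 2);

    HRGN pClipRegion = ::CreateRectRgn(corners[0].x, corners[0].y,
                                       corners[1].x, corners[1].y);
    ::SelectClipRgn(hDC, pClipRegion);

    // the region was copied by SelectClipRgn(), so it can be deleted right away
    ::DeleteObject(pClipRegion);

    return savedDC;
}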
Problem:
Zooming in on an image by scaling and moving it using a matrix causes the app to run out of memory and crash.
Additional Libraries used:
Gestouch - https://github.com/fljot/Gestouch
Description:
In my Flex Mobile app I have an Image inside a Group with pan/zoom enabled using the Gestouch library. The zoom works to an extent but causes the app to die (not freeze, just exit) with no error message after a certain zoom level.
This would be manageable except I can’t figure out how to implement a threshold to stop the zoom at, as it crashes at a different zoom level almost every time. I also use dynamic images so the source of the image could be any size or resolution.
They are usually JPEGs ranging from about 800x600 to 9000x6000 and are downloaded from a server, so they cannot be packaged with the app.
According to the AS3 docs, there is no longer a limit to the size of a BitmapData object, so that shouldn't be the issue.
“Starting with AIR 3 and Flash player 11, the size limits for a BitmapData object have been removed. The maximum size of a bitmap is now dependent on the operating system.”
The group is used as a marker layer for overlaying pins on.
The crash mainly happens on iPad Mini and older Android devices.
Things I have tried already tried:
1.Using Adobe Scout to pin point when the memory leak occurs.
2.Debugging to find the exact height and width of the marker layer and image at the time of crash.
3.Setting a max zoom variable based on the size of the image.
4.Cropping the image on zoom to only show the visible area. ( crashes on copyPixels function and BitmapData.draw() function )
5.Using imagemagick to make lower quality images ( small images still crash )
6.Using imagemagick to make very low res image and make a grid of smaller images . Displaying in the mobile app using a List and Tile layout.
7.Using weak references when adding event listeners.
Any suggestions would be appreciated.
Thanks
private function layoutImageResized(e:Event):void
{
    markerLayer.scaleX = markerLayer.scaleY = 1;
    markerLayer.x = markerLayer.y = 0;

    var scale:Number = Math.min(width / image.sourceWidth, height / image.sourceHeight);
    image.scaleX = image.scaleY = scale;

    _imageIsWide = (image.sourceWidth / image.sourceHeight) > (width / height);

    // centre image
    if (_imageIsWide)
    {
        markerLayer.y = (height - image.sourceHeight * image.scaleY) / 2;
    }
    else
    {
        markerLayer.x = (width - image.sourceWidth * image.scaleX) / 2;
    }

    // set max scale
    _maxScale = scale * _maxZoom;
}
private function onGesture(event:org.gestouch.events.GestureEvent):void
{
    trace("Gesture start");

    // if the user starts moving around while the add Pin option is up
    // the state will be changed and the menu will disappear
    if (currentState == "addPin")
    {
        return;
    }

    const gesture:TransformGesture = event.target as TransformGesture;
    ////trace("gesture state is ", gesture.state);

    if (gesture.state == GestureState.BEGAN)
    {
        currentState = "zooming";
        imgOldX = image.x;
        imgOldY = image.y;
        oldImgWidth = markerLayer.width;
        oldImgHeight = markerLayer.height;

        if (!_hidePins)
        {
            showHidePins(false);
        }
    }

    var matrix:Matrix = markerLayer.transform.matrix;

    // Pan
    matrix.translate(gesture.offsetX, gesture.offsetY);
    markerLayer.transform.matrix = matrix;

    if ((gesture.scale != 1 || gesture.rotation != 0)
        && ((markerLayer.scaleX < _maxScale && markerLayer.scaleY < _maxScale) || gesture.scale < 1)
        && gesture.scale < 1.4)
    {
        storedScale = gesture.scale;

        // Zoom
        var transformPoint:Point = matrix.transformPoint(markerLayer.globalToLocal(gesture.location));
        matrix.translate(-transformPoint.x, -transformPoint.y);
        matrix.scale(gesture.scale, gesture.scale);
        /** THIS IS WHERE THE CRASH HAPPENS **/
        matrix.translate(transformPoint.x, transformPoint.y);
        markerLayer.transform.matrix = matrix;
    }
}
I would say it's not a good idea to work with an image as large as 9000x6000 on mobile devices.
I suppose you are trying to implement some sort of map navigation, so you need to zoom hugely into some areas.
My solution would be to split that 9000x6000 image into 2048x2048 pieces, then compress them using the png2atf utility with mipmaps enabled.
Then you can use Starling to easily load these ATF images, add them to Stage3D, and manage them easily.
In the case of a 9000x6000 image you'll get about 15 2048x2048 pieces. Having them all on the stage at one time might sound heavy, but thanks to the mipmaps only tiny thumbnails of the image are kept in memory until they are zoomed, so you'll never run out of memory, provided you remove invisible pieces from the stage from time to time while zooming in and return them on zoom out.
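As a rough sketch of the tiling math only (sourceWidth, sourceHeight and tileSize are placeholder values; cutting the tiles and loading the resulting ATF textures with Starling is left out):

// flash.geom.Rectangle; placeholder dimensions for a 9000x6000 source and 2048x2048 tiles
var sourceWidth:int = 9000;
var sourceHeight:int = 6000;
var tileSize:int = 2048;

var cols:int = int(Math.ceil(sourceWidth / tileSize));   // 5
var rows:int = int(Math.ceil(sourceHeight / tileSize));  // 3
trace(cols * rows + " tiles");                           // 15

for (var row:int = 0; row < rows; row++)
{
    for (var col:int = 0; col < cols; col++)
    {
        // region of the source image covered by this tile (edge tiles are smaller)
        var tileRect:Rectangle = new Rectangle(
            col * tileSize,
            row * tileSize,
            Math.min(tileSize, sourceWidth - col * tileSize),
            Math.min(tileSize, sourceHeight - row * tileSize));
        // ...cut this rect out of the source, run it through png2atf,
        // and load the resulting .atf as a Starling texture at runtime
    }
}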
I've created an app with the latest version of AIR (16.0.0.272) and I'm trying to scale the content to fit any iPhone resolution from the 4 to the 6 Plus.
When debugging on the desktop, I see everything perfectly scaled as I change the stage dimensions to test my solution. Trying it on my iPhone 5, I can't see the scaled content.
If I comment out the resize function, everything is OK (I see all the content, not scaled, obviously).
This is my solution:
private function resize(e:Event = null):void
{
    w = stageW();
    h = stageH();

    var initW:Number = 640;
    var initH:Number = 960;

    //bg.img.width = w;
    trace(w + "/" + h);

    main.y = 62;
    //main.x = w * .5 - 320;

    bg.width = w;
    bg.height = h;
    menu.bg.width = w;
    menu.bg.height = h;

    var divisor:Number = 640 / w;
    main.width = Math.floor(main.width / divisor);
    main.height = Math.floor(main.height / divisor);
}
I have tried delaying the resize call to test it on the iPhone, but even after 2000 ms I can't see anything.
After this I tried a listener, addEventListener(Event.ADDED_TO_STAGE, init);, calling the resize above at the end of my UI-building operations.
I can't figure out why resizing a MovieClip that contains my app content makes it disappear on the iPhone but not on the desktop.
Hoping for a solution.
Thanks in advance
Capabilities.screenResolutionX and Capabilities.screenResolutionY are what you should use for screen width and height on apps.
Note: the X and Y don't change with rotation - they'll stay the same when the screen rotates - so your code to get the screen dimensions will look like this:
if (bStagePortrait) {
    iScreenWidth = Capabilities.screenResolutionX;
    iScreenHeight = Capabilities.screenResolutionY;
} else {
    iScreenWidth = Capabilities.screenResolutionY;
    iScreenHeight = Capabilities.screenResolutionX;
}
Also use this code to prevent app from resizing or moving:
stage.align = StageAlign.TOP_LEFT;
stage.scaleMode = StageScaleMode.NO_SCALE;
For a variety of reasons, your resize function could be getting called more than once.
main.width = Math.floor(main.width / divisor);
main.height = Math.floor(main.height / divisor);
will continuously make the main clip smaller. It may be better to use a constant or a fixed number for the size calculation. I usually do something with scaling instead, though:
main.scaleX = main.scaleY = percentStageScaled;
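For example, a sketch of that scale-based version, assuming a 640-pixel design width and the iScreenWidth value from the snippet above:

// compute the scale once from the design width (640) and apply it uniformly,
// so repeated calls to resize() don't keep shrinking the clip
var percentStageScaled:Number = iScreenWidth / 640;
main.scaleX = main.scaleY = percentStageScaled;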
My game is designed for landscape only (screen resolution 1024*768 - iPad resolution).
My game scene is rendered correctly on every platform supported by PlayN (Android, HTML, etc.). The problem is with iOS only.
I have prepared all the resources for the iPad screen resolution, set up the info.plist file, and registered the platform using IOSPlatform.register(app, IOSPlatform.SupportedOrients.LANDSCAPES);
When I run my application, everything regarding device orientation is correct, but the game scene isn't fully rendered. It is rendered for a 768*768 resolution (not all of the scene objects are visible - only objects that fall inside the 768*768 rect are visible) and the remaining screen space is black.
I've investigated the issue in the following way:
1. Applied a scale transform to the rootLayer (to make sure the entire scene is rendered): PlayN.graphics().rootLayer().setScale(0.75f, 0.75f);
Result: the game scene fits in the 768*768 rect and I can see all game scene objects.
2. Applied a translate transform to the rootLayer (to make sure PlayN doesn't render the scene outside of the 768*768 rect): PlayN.graphics().rootLayer().setTranslation(1024.0f - 768.0f, 0.0f);
Result: the game scene is translated, but objects that do not belong to the 768*768 screen rect are not visible.
My guess is that PlayN prepares its drawing context for the 768*1024 screen resolution (the default iPad orientation resolution). When it renders the screen, the objects located outside of the 768*1024 rectangle are clipped (not rendered).
Any help or ideas what can cause such strange behavior would be very appreciated.
Thanks!
The problem is the following:
iOS doesn't change the main window frame and root view frame after device rotation (while Android does - I've checked).
The PlayN code expects the view size to change from 768*1024 (default orientation) to 1024*768 after rotation.
Actual result: PlayN transforms the game scene using a transform matrix but doesn't change the OpenGL framebuffer size (the framebuffer size is still 768*1024).
I've fixed this issue by adding the following piece of code to IOSGLContext:
@Override
public void setSize (int width, int height)
{
    if (UIDeviceOrientation.LandscapeLeft == orient || UIDeviceOrientation.LandscapeRight == orient)
    {
        Console.WriteLine("Swap IOSGLContext width and height for " + UIDeviceOrientation.wrap(orient));
        viewWidth = height;
        viewHeight = width;
    }
    else
    {
        viewWidth = width;
        viewHeight = height;
    }

    super.setSize(viewWidth, viewHeight);
}
Now everything works as expected!
If you have any other ideas on how to fix this issue, I will be glad to discuss them.
Thanks!
Download the latest PlayN 1.5 snapshot (from https://github.com/threerings/playn, not from Google Code). This issue is fixed in that version.
I want to be able to draw images to the viewport in my 3ds Max plugin.
The GraphicsWindow class has functions for drawing 3D objects in the viewport, but these drawing calls are limited by the current viewport and graphics render limits.
This is undesirable, as the image I want to draw should always be drawn no matter what graphics mode 3ds Max is in or what hardware is used. Further, I am only drawing 2D images, so there is no need to draw them in a 3D context.
I have managed to get the HWND of the viewport, and the Max SDK has the function
DrawIconButton();
I have tried using this function, but it does not work properly: the image flickers randomly with user interaction and disappears when there is no interactivity.
I have implemented this in the RedrawViewsCallback function; however, the DrawIconButton() function is not documented and I am not sure if this is the correct way to implement it.
Here is the code I am using to draw the image:
void Sketch_RedrawViewsCallback::proc(Interface* ip)
{
    Interface10* ip10 = GetCOREInterface10();
    ViewExp* viewExp = ip10->GetActiveViewport();
    ViewExp10* currentViewport;

    if (viewExp != NULL)
    {
        currentViewport = reinterpret_cast<ViewExp10*>(viewExp->Execute(ViewExp::kEXECUTE_GET_VIEWEXP_10));
    }
    else
    {
        return;
    }

    GraphicsWindow* gw = currentViewport->getGW();
    HWND ViewportWindow = gw->getHWnd();
    HDC hdc = GetDC(ViewportWindow);

    HBITMAP bitmapImage = LoadBitmap(hInstance, MAKEINTRESOURCE(IDB_BITMAP1));
    Rect rbox(IPoint2(0, 0), IPoint2(48, 48));
    DrawIconButton(hdc, bitmapImage, rbox, rbox, true);

    DeleteObject(bitmapImage);  // free the bitmap so it isn't leaked on every redraw
    ReleaseDC(ViewportWindow, hdc);
    ip->ReleaseViewport(currentViewport);
}
I could not find a way to draw directly to the viewport window; however, I have solved the problem by using a transparent modeless dialog box.
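For reference, here is one possible way such an overlay could be set up with plain Win32 calls. This is only a sketch: IDD_VIEWPORT_OVERLAY and OverlayDlgProc are hypothetical names, not part of the 3ds Max SDK, and the dialog is made transparent with a colour key.

// Hypothetical modeless dialog used as a transparent overlay above the viewport.
HWND CreateViewportOverlay(HINSTANCE hInstance, HWND viewportWindow)
{
    HWND overlay = ::CreateDialogParam(hInstance,
                                       MAKEINTRESOURCE(IDD_VIEWPORT_OVERLAY),
                                       viewportWindow,
                                       OverlayDlgProc,
                                       0);

    // make the dialog layered and treat magenta as fully transparent
    LONG exStyle = ::GetWindowLong(overlay, GWL_EXSTYLE);
    ::SetWindowLong(overlay, GWL_EXSTYLE, exStyle | WS_EX_LAYERED);
    ::SetLayeredWindowAttributes(overlay, RGB(255, 0, 255), 0, LWA_COLORKEY);

    // show it without stealing focus from the viewport
    ::ShowWindow(overlay, SW_SHOWNOACTIVATE);
    return overlay;
}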
Maybe a complete redraw will solve the issue: ForceCompleteRedraw().