localToGlobal Adobe AIR - iOS

I'm building an Adobe AIR app and seem to be having an issue with localToGlobal. I've been following this guide to set up my app for multiple screen sizes:
http://www.adobe.com/devnet/air/articles/multiple-screen-sizes.html
But I keep running into an issue with localToGlobal. I have used most of the code from the "Scaling and centering an interface" section, and my app is adjusting its size properly.
I have converted all my localToGlobal calls and multiplied the x and y by the new scale value, but I'm not getting the result I need. It still works fine at the original iPhone 4/4S resolution, but when I test at the 3GS resolution my line intersections, which rely specifically on the localToGlobal code, no longer work.
Here's the code for converting one of these local points:
var p:Point = localToGlobal(_lineStart);
p = parent.globalToLocal(p);
p.x *= GameHolderSun.scaleValue;
p.y *= GameHolderSun.scaleValue;
return p;
Determine Intersection:
var p:Point = determineLineIntersection(_laser.get2ndLastPoint(), _laser.getLastPoint(), _reflectors[i].lineStart, _reflectors[i].lineEnd);
if (p != null) {
    _laser.addNewLinePoint(p, _reflectors[i].rotation);
}
Line Intersection function I grabbed online:
private function determineLineIntersection(A:Point, B:Point, E:Point, F:Point, as_seg:Boolean = true):Point
{
    var ip:Point;
    var a1:Number;
    var a2:Number;
    var b1:Number;
    var b2:Number;
    var c1:Number;
    var c2:Number;

    a1 = B.y - A.y;
    b1 = A.x - B.x;
    c1 = B.x * A.y - A.x * B.y;
    a2 = F.y - E.y;
    b2 = E.x - F.x;
    c2 = F.x * E.y - E.x * F.y;

    var denom:Number = a1 * b2 - a2 * b1;
    if (denom == 0) {
        return null;
    }

    ip = new Point();
    ip.x = (b1 * c2 - b2 * c1) / denom;
    ip.y = (a2 * c1 - a1 * c2) / denom;
    ip.x *= GameHolderSun.scaleValue;
    ip.y *= GameHolderSun.scaleValue;

    if (as_seg)
    {
        if (Math.pow(ip.x - B.x, 2) + Math.pow(ip.y - B.y, 2) > Math.pow(A.x - B.x, 2) + Math.pow(A.y - B.y, 2)) return null;
        if (Math.pow(ip.x - A.x, 2) + Math.pow(ip.y - A.y, 2) > Math.pow(A.x - B.x, 2) + Math.pow(A.y - B.y, 2)) return null;
        if (Math.pow(ip.x - F.x, 2) + Math.pow(ip.y - F.y, 2) > Math.pow(E.x - F.x, 2) + Math.pow(E.y - F.y, 2)) return null;
        if (Math.pow(ip.x - E.x, 2) + Math.pow(ip.y - E.y, 2) > Math.pow(E.x - F.x, 2) + Math.pow(E.y - F.y, 2)) return null;
    }
    return ip;
}
Hope someone can help!! I can give more information if needed!!

I believe your issue is DPI, at least if you are setting applicationDPI. Stage values do not compensate for the forced DPI change.
To fix this, you have two options:
1) For stage.mouseX/mouseY and stage.stageHeight/stageWidth, you can just use the FlexGlobals.topLevelApplication.systemManager properties. I believe SystemManager has mouseX/mouseY properties, and you can find the new width/height in its screen property.
2) Account for the DPI change yourself. It's relatively simple to do, and for localToGlobal I think this is what you will have to do:
var x:Number = 200; // this number is in default DPI size
var appDPI:Number = FlexGlobals.topLevelApplication.applicationDPI;
var deviceDPI:Number = Capabilities.screenDPI; // actual DPI of the device

// Adobe doesn't use the actual DPI; it uses either 160, 240, or 320
// and maps the device into those buckets using ranges
if (deviceDPI < 200) {
    deviceDPI = 160;
}
else if (deviceDPI >= 200 && deviceDPI < 280) {
    deviceDPI = 240;
}
else if (deviceDPI >= 280) {
    deviceDPI = 320;
}

var newX:Number = x / (deviceDPI / appDPI);
I highly suggest you create a Util class and put this into a (maybe static) function, as you will likely end up using it often. The x value can be any number, as long as it has not already been adjusted for the DPI change.
Hope that helps. I beat my head on a rock trying to figure out why my stage.stageWidth and stage.stageHeight values were incorrect back in July.
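For what it's worth, here is a rough, untested sketch of what such a utility might look like; the class and method names (DpiUtil, adjust) are placeholders I made up, and it assumes the same 160/240/320 bucketing described above:

package
{
    import flash.system.Capabilities;
    import mx.core.FlexGlobals;

    public class DpiUtil
    {
        // Converts a value expressed at the application's default DPI into the
        // equivalent value at the device's bucketed DPI.
        public static function adjust(value:Number):Number
        {
            var appDPI:Number = FlexGlobals.topLevelApplication.applicationDPI;
            var deviceDPI:Number = Capabilities.screenDPI;

            // Map the reported DPI into the 160/240/320 buckets
            if (deviceDPI < 200)
                deviceDPI = 160;
            else if (deviceDPI < 280)
                deviceDPI = 240;
            else
                deviceDPI = 320;

            return value / (deviceDPI / appDPI);
        }
    }
}

Usage would then just be something like var newX:Number = DpiUtil.adjust(200);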

Related

Zoom and pan two images simultaneously in opencv

I have two images of similar size that show similar scenes. How can I show the two images in two frames so that panning or zooming in the left image pans and zooms the right one as well? I don't want to concatenate the images, though.
Is there a solution for this? Either Python or C++ OpenCV is fine.
About zooming in and out:
The basic idea is to decide how much the scale changes on each mouse-wheel event. Once you know the current scale (relative to the original image) and the region of the image you want to show on screen, you can work out the position and size of the corresponding rectangle on the scaled image and draw that region.
In my GitHub repo, checking OnMouseWheel () and RefreshSrcView () in Fastest_Image_Pattern_Matching/ELCVMatchTool/ELCVMatchToolDlg.cpp may give you what you want.
About showing two images simultaneously with the same region:
Use two picture boxes with the MFC framework (or another UI builder), or use two cv::namedWindow () windows without a framework.
Part of the code:
BOOL CELCVMatchToolDlg::OnMouseWheel (UINT nFlags, short zDelta, CPoint pt)
{
    POINT pointCursor;
    GetCursorPos (&pointCursor);
    ScreenToClient (&pointCursor);
    // TODO: add your message handler code here and/or call the default

    if (zDelta > 0)
    {
        if (m_iScaleTimes == MAX_SCALE_TIMES)
            return TRUE;
        else
            m_iScaleTimes++;
    }
    if (zDelta < 0)
    {
        if (m_iScaleTimes == MIN_SCALE_TIMES)
            return TRUE;
        else
            m_iScaleTimes--;
    }

    CRect rect;
    //GetWindowRect (rect);
    GetDlgItem (IDC_STATIC_SRC_VIEW)->GetWindowRect (rect); // important

    if (m_iScaleTimes == 0)
        g_dCompensationX = g_dCompensationY = 0;

    int iMouseOffsetX = pt.x - (rect.left + 1);
    int iMouseOffsetY = pt.y - (rect.top + 1);

    double dPixelX = (m_hScrollBar.GetScrollPos () + iMouseOffsetX + g_dCompensationX) / m_dNewScale;
    double dPixelY = (m_vScrollBar.GetScrollPos () + iMouseOffsetY + g_dCompensationY) / m_dNewScale;

    m_dNewScale = m_dSrcScale * pow (SCALE_RATIO, m_iScaleTimes);

    if (m_iScaleTimes != 0)
    {
        int iWidth = m_matSrc.cols;
        int iHeight = m_matSrc.rows;

        m_hScrollBar.SetScrollRange (0, int (m_dNewScale * iWidth - m_dSrcScale * iWidth) - 1 + BAR_SIZE);
        m_vScrollBar.SetScrollRange (0, int (m_dNewScale * iHeight - m_dSrcScale * iHeight) - 1 + BAR_SIZE);

        int iBarPosX = int (dPixelX * m_dNewScale - iMouseOffsetX + 0.5);
        m_hScrollBar.SetScrollPos (iBarPosX);
        m_hScrollBar.ShowWindow (SW_SHOW);
        g_dCompensationX = -iBarPosX + (dPixelX * m_dNewScale - iMouseOffsetX);

        int iBarPosY = int (dPixelY * m_dNewScale - iMouseOffsetY + 0.5);
        m_vScrollBar.SetScrollPos (iBarPosY);
        m_vScrollBar.ShowWindow (SW_SHOW);
        g_dCompensationY = -iBarPosY + (dPixelY * m_dNewScale - iMouseOffsetY);

        // scroll thumb size
        SCROLLINFO infoH;
        infoH.cbSize = sizeof (SCROLLINFO);
        infoH.fMask = SIF_PAGE;
        infoH.nPage = BAR_SIZE;
        m_hScrollBar.SetScrollInfo (&infoH);

        SCROLLINFO infoV;
        infoV.cbSize = sizeof (SCROLLINFO);
        infoV.fMask = SIF_PAGE;
        infoV.nPage = BAR_SIZE;
        m_vScrollBar.SetScrollInfo (&infoV);
        // scroll thumb size
    }
    else
    {
        m_hScrollBar.SetScrollPos (0);
        m_hScrollBar.ShowWindow (SW_HIDE);
        m_vScrollBar.SetScrollPos (0);
        m_vScrollBar.ShowWindow (SW_HIDE);
    }

    RefreshSrcView ();
    return CDialogEx::OnMouseWheel (nFlags, zDelta, pt);
}
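If you would rather not pull in MFC, here is a rough, untested sketch of the same idea with two plain cv::namedWindow() windows kept in sync. This is my own illustration rather than code from the repository above; it assumes both images have the same size, that left.png/right.png are placeholder file names, and that the GUI backend reports wheel events (cv::EVENT_MOUSEWHEEL and cv::getMouseWheelDelta() are only delivered by some backends, e.g. the Windows one):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

struct ViewState
{
    cv::Mat left, right;   // assumed to have the same size
    double scale = 1.0;    // current zoom factor (1.0 = whole image visible)
    cv::Point2d center;    // center of the visible region, in image coordinates
};

static void render(const ViewState& s)
{
    const cv::Size view(s.left.cols, s.left.rows);

    // Size of the region visible at the current scale, clamped to the image
    int rw = std::min(s.left.cols, std::max(1, (int)std::lround(view.width / s.scale)));
    int rh = std::min(s.left.rows, std::max(1, (int)std::lround(view.height / s.scale)));
    int rx = std::clamp((int)std::lround(s.center.x - rw / 2.0), 0, s.left.cols - rw);
    int ry = std::clamp((int)std::lround(s.center.y - rh / 2.0), 0, s.left.rows - rh);
    const cv::Rect roi(rx, ry, rw, rh);

    // Show the same region of both images, scaled back up to the window size
    cv::Mat shownL, shownR;
    cv::resize(s.left(roi), shownL, view, 0, 0, cv::INTER_LINEAR);
    cv::resize(s.right(roi), shownR, view, 0, 0, cv::INTER_LINEAR);
    cv::imshow("left", shownL);
    cv::imshow("right", shownR);
}

static void onMouse(int event, int x, int y, int flags, void* userdata)
{
    auto* s = static_cast<ViewState*>(userdata);
    if (event == cv::EVENT_MOUSEWHEEL)
    {
        // Zoom in or out around the current center
        const double step = cv::getMouseWheelDelta(flags) > 0 ? 1.25 : 0.8;
        s->scale = std::clamp(s->scale * step, 1.0, 32.0);
        render(*s);
    }
    else if (event == cv::EVENT_LBUTTONDOWN)
    {
        // Crude pan: re-center the view on the clicked point
        s->center.x += (x - s->left.cols / 2.0) / s->scale;
        s->center.y += (y - s->left.rows / 2.0) / s->scale;
        render(*s);
    }
}

int main()
{
    ViewState s;
    s.left = cv::imread("left.png");    // placeholder file names
    s.right = cv::imread("right.png");
    s.center = cv::Point2d(s.left.cols / 2.0, s.left.rows / 2.0);

    cv::namedWindow("left");
    cv::namedWindow("right");
    cv::setMouseCallback("left", onMouse, &s);   // the left window drives both views

    render(s);
    cv::waitKey(0);
    return 0;
}

Because both windows are redrawn from the same ROI, whatever zoom or pan you apply to the left view is mirrored on the right one.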

How to speed up YUV conversion for a fast SkiaSharp camera preview?

I'm writing some code to render a camera preview using SkiaSharp. This is cross-platform, but I came across a problem while writing the implementation for Android.
I needed to convert YUV_420_888 to RGB8888 because that's what SkiaSharp supports, and with the help of this thread I somehow managed to show decent-quality images on my SkiaSharp canvas. The problem is the speed. At best I can get about 8 fps, but usually it's just 4 or 5 fps. It turned out the biggest factor is the conversion. I now have about 3 versions of my ToRGB converter. I've even ended up trying unsafe code and parallel loops. I'll just show you my best one yet.
private unsafe byte[] ToRgb(byte[] yValuesArr, byte[] uValuesArr,
    byte[] vValuesArr, int uvPixelStride, int uvRowStride)
{
    var width = PixelSize.Width;
    var height = PixelSize.Height;
    var rgb = new byte[width * height * 4];

    var partitions = Partitioner.Create(0, height);
    Parallel.ForEach(partitions, range =>
    {
        var (item1, item2) = range;
        Parallel.For(item1, item2, y =>
        {
            for (var x = 0; x < width; x++)
            {
                var yIndex = x + width * y;
                var currentPosition = yIndex * 4;
                var uvIndex = uvPixelStride * (x / 2) + uvRowStride * (y / 2);

                fixed (byte* rgbFixed = rgb)
                fixed (byte* yValuesFixed = yValuesArr)
                fixed (byte* uValuesFixed = uValuesArr)
                fixed (byte* vValuesFixed = vValuesArr)
                {
                    var rgbPtr = rgbFixed;
                    var yValues = yValuesFixed;
                    var uValues = uValuesFixed;
                    var vValues = vValuesFixed;

                    var yy = *(yValues + yIndex);
                    var uu = *(uValues + uvIndex);
                    var vv = *(vValues + uvIndex);

                    var rTmp = yy + vv * 1436 / 1024 - 179;
                    var gTmp = yy - uu * 46549 / 131072 + 44 - vv * 93604 / 131072 + 91;
                    var bTmp = yy + uu * 1814 / 1024 - 227;

                    rgbPtr = rgbPtr + currentPosition;
                    *rgbPtr = (byte) (rTmp < 0 ? 0 : rTmp > 255 ? 255 : rTmp);
                    rgbPtr++;
                    *rgbPtr = (byte) (gTmp < 0 ? 0 : gTmp > 255 ? 255 : gTmp);
                    rgbPtr++;
                    *rgbPtr = (byte) (bTmp < 0 ? 0 : bTmp > 255 ? 255 : bTmp);
                    rgbPtr++;
                    *rgbPtr = 255;
                }
            }
        });
    });
    return rgb;
}
You can also find it in my repo, along with the part where I render the output to SkiaSharp.
For a preview size of 1440x1080 running on my phone, this code takes about 120 ms to finish. Even if all the other parts are optimized, the most I can get from that is 8 fps. And no, it's not my hardware, because the built-in camera app runs smoothly. By the way, 1440x1080 is the output of my ChooseOptimalSize algorithm that I got from the mono-droid examples of Android's Camera2 API. I don't know if it's the best approach or whether it lacks logic for detecting the fps and sizing down the preview to make things faster.
Does SkiaSharp support GPU drawing? If you connect the camera to a SurfaceTexture, you can use the preview frames as GL textures and render them efficiently into an OpenGL scene.
Even if not, you may still get faster results by sending the frames to the GPU and reading them back to the CPU with something like glReadPixels, as that will do the RGB conversion on the GPU.
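Even if you stay on the CPU, it may be worth restructuring the loop in the question: the four fixed pins run once per pixel, and a Parallel.For is nested inside the Parallel.ForEach, both of which add overhead. Below is a rough, untested sketch with the pinning hoisted out of the loop and a single level of parallelism; the names and the conversion math mirror the method above, only the structure changes:

private unsafe byte[] ToRgbFaster(byte[] yValuesArr, byte[] uValuesArr,
    byte[] vValuesArr, int uvPixelStride, int uvRowStride)
{
    var width = PixelSize.Width;
    var height = PixelSize.Height;
    var rgb = new byte[width * height * 4];

    Parallel.ForEach(Partitioner.Create(0, height), range =>
    {
        // Pin the buffers once per row range instead of once per pixel
        fixed (byte* rgbFixed = rgb)
        fixed (byte* yFixed = yValuesArr)
        fixed (byte* uFixed = uValuesArr)
        fixed (byte* vFixed = vValuesArr)
        {
            for (var y = range.Item1; y < range.Item2; y++)
            {
                var yRow = yFixed + width * y;
                var uvRowOffset = uvRowStride * (y / 2);
                var outPtr = rgbFixed + width * y * 4;

                for (var x = 0; x < width; x++)
                {
                    int yy = yRow[x];
                    int uvIndex = uvRowOffset + uvPixelStride * (x / 2);
                    int uu = uFixed[uvIndex];
                    int vv = vFixed[uvIndex];

                    int r = yy + vv * 1436 / 1024 - 179;
                    int g = yy - uu * 46549 / 131072 + 44 - vv * 93604 / 131072 + 91;
                    int b = yy + uu * 1814 / 1024 - 227;

                    *outPtr++ = (byte)(r < 0 ? 0 : r > 255 ? 255 : r);
                    *outPtr++ = (byte)(g < 0 ? 0 : g > 255 ? 255 : g);
                    *outPtr++ = (byte)(b < 0 ? 0 : b > 255 ? 255 : b);
                    *outPtr++ = 255;
                }
            }
        }
    });
    return rgb;
}

Whether that alone gets you to a smooth preview is hard to say; rendering the frames as GL textures, as suggested above, is still likely the faster route.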

XamForms.Controls.Calendar Special Dates Text Blurry

I'm using XamForms.Controls.Calendar in my Xamarin.Forms project. Everything works great, but the text entered in the "Text" section of the SpecialDates property appears blurred on the iOS platform. How can I solve it?
The iOS DrawText method below behaves the same way; despite researching it intensively, I could not solve the blur problem.
Screenshot: https://imgur.com/a/E0UZnD9
protected void DrawText(CGContext g, Pattern p, CGRect r)
{
    if (string.IsNullOrEmpty(p.Text)) return; // Copperplate-Light" KohinoorTelugu-Light
    string fontname = Device.Idiom == TargetIdiom.Tablet ? "KohinoorTelugu-Light" : "GillSans-UltraBold"; // GillSans-UltraBold
    var bounds = p.Text.StringSize(UIFont.FromName(fontname, p.TextSize));
    // "GillSans-UltraBold

    var al = (int)p.TextAlign;
    var x = r.X;
    if ((al & 2) == 2) // center
    {
        x = r.X + (int)Math.Round(r.Width / 2.0) - (int)Math.Round(bounds.Width / 2.0);
    }
    else if ((al & 4) == 4) // right
    {
        x = (r.X + r.Width) - bounds.Width - 2;
    }

    var y = r.Y + (int)Math.Round(bounds.Height / 2.0) + 2;
    if ((al & 16) == 16) // middle
    {
        y = r.Y + (int)Math.Ceiling(r.Height / 2.0) + (int)Math.Round(bounds.Height / 5.0);
    }
    else if ((al & 32) == 32) // bottom
    {
        y = (r.Y + r.Height) - 2;
    }

    g.SaveState();
    g.SetShouldAntialias(true);
    g.SetShouldSmoothFonts(true);
    g.SetAllowsFontSmoothing(true);
    g.SetAllowsAntialiasing(true);
    g.SetShouldSubpixelPositionFonts(false);
    g.InterpolationQuality = CGInterpolationQuality.High;
    g.SetAllowsSubpixelPositioning(false);
    g.SetAllowsFontSubpixelQuantization(false);
    g.InterpolationQuality = CGInterpolationQuality.High;
    g.TranslateCTM(0, Bounds.Height);
    g.ScaleCTM(1, -1);
    g.SetStrokeColor(p.TextColor.ToCGColor());
    g.DrawPath(CGPathDrawingMode.EOFillStroke);
    g.SetFillColor(p.TextColor.ToCGColor());
    g.SetTextDrawingMode(CGTextDrawingMode.FillStrokeClip);
    g.SelectFont(fontname, p.TextSize, CGTextEncoding.MacRoman);
    g.ShowTextAtPoint(x, Bounds.Height - y, p.Text);
    g.RestoreState();
}
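One thing that may be worth checking, offered only as a guess since it is not visible in the snippet: if the pattern is rasterized into an off-screen image context created with UIGraphics.BeginImageContext, that context has a scale of 1.0, so everything drawn into it gets upscaled on a Retina screen and looks soft no matter which anti-aliasing flags are set. Creating the context at the device scale usually fixes that kind of blur. A minimal sketch, where pattern and rect are placeholders standing in for the Pattern p and CGRect r above:

// Create the off-screen context at the device scale (passing 0 for the scale
// means "use the screen scale") instead of the 1.0 that BeginImageContext uses.
UIGraphics.BeginImageContextWithOptions(rect.Size, false, 0f);
CGContext g = UIGraphics.GetCurrentContext();

DrawText(g, pattern, rect);   // the method shown above

UIImage rendered = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();

If the drawing happens directly in a view's Draw override instead, the equivalent check is that the view's ContentScaleFactor (and its layer's ContentsScale) matches UIScreen.MainScreen.Scale.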

How to change size of bubbles in Highcharts' 3D scatter chart?

I need to change the size of the bubbles (points) by supplying a 4th value in the data points of Highcharts' 3D scatter chart. I couldn't find any way to do this. Can anyone help?
It seems this is not supported out of the box. However, in a thread in the Highcharts forum, a wrapper is shown that allows a 4th value, w, to be used as the size of the bubble (see http://jsfiddle.net/uqLfm1k6/1/):
(function (H) {
    H.wrap(H.seriesTypes.bubble.prototype, 'getRadii', function (proceed, zMin, zMax, minSize, maxSize) {
        var math = Math,
            len,
            i,
            pos,
            zData = this.zData,
            wData = this.userOptions.data.map(function (e) { return e.w; }), // ADDED
            radii = [],
            options = this.options,
            sizeByArea = options.sizeBy !== 'width',
            zThreshold = options.zThreshold,
            zRange = zMax - zMin,
            value,
            radius;

        // Set the shape type and arguments to be picked up in drawPoints
        for (i = 0, len = zData.length; i < len; i++) {
            // value = zData[i]; // DELETED
            value = this.chart.is3d() ? wData[i] : zData[i]; // ADDED

            // When sizing by threshold, the absolute value of z determines
            // the size of the bubble.
            if (options.sizeByAbsoluteValue && value !== null) {
                value = Math.abs(value - zThreshold);
                zMax = Math.max(zMax - zThreshold, Math.abs(zMin - zThreshold));
                zMin = 0;
            }

            if (value === null) {
                radius = null;
            // Issue #4419 - if value is less than zMin, push a radius that's
            // always smaller than the minimum size
            } else if (value < zMin) {
                radius = minSize / 2 - 1;
            } else {
                // Relative size, a number between 0 and 1
                pos = zRange > 0 ? (value - zMin) / zRange : 0.5;
                if (sizeByArea && pos >= 0) {
                    pos = Math.sqrt(pos);
                }
                radius = math.ceil(minSize + pos * (maxSize - minSize)) / 2;
            }
            radii.push(radius);
        }
        this.radii = radii;
    });
}(Highcharts));
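For completeness, here is a rough sketch of how the series data might then be supplied, with each point carrying an extra w value that the wrapper reads in 3D mode. This is my own illustration rather than code from the linked fiddle, and it assumes the highcharts-more and highcharts-3d modules are loaded:

Highcharts.chart('container', {
    chart: {
        type: 'bubble',
        options3d: { enabled: true, alpha: 10, beta: 30, depth: 250 }
    },
    series: [{
        // x, y, z position the point; w is picked up by the wrapper above
        // and used for the bubble size
        data: [
            { x: 1, y: 2, z: 3, w: 10 },
            { x: 2, y: 4, z: 1, w: 25 },
            { x: 3, y: 1, z: 5, w: 40 }
        ]
    }]
});

Note that the wrapper only reads w when chart.is3d() is true; in a regular 2D bubble chart it falls back to sizing by z.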

How to encode a .wmv file that XNA DirectShow will play properly?

I'm playing around with XNA DirectShow to stream a video from a file rather than loading it into my project (I'm fully aware of the XNA MediaPlayer class, by the way). It plays the sample video it came with without a problem. When I try to make my own .wmv from a series of PNG files using ffmpeg, the video plays but is all blue (it should be mostly yellow). Wrong pixel format? Wrong codec? I'm certainly no expert in these waters.
The sample video is apparently a VC-1 WMV3, and I don't think I can replicate that? What encoding/codec/file format should I be using?
Also! If transparent video background is possible, that would be amazing. Is it?
OK, I've solved it: I simply switched the order in which pixels are assigned when DirectShow creates its output texture. In the VideoPlayer class I changed UpdateBuffer to:
private void UpdateBuffer()
{
    int waitTime = avgTimePerFrame != 0 ? (int)((float)avgTimePerFrame / 10000) : 20;
    int samplePosRGBA = 0;
    int samplePosRGB24 = 0;

    while (true)
    {
        for (int y = 0, y2 = videoHeight - 1; y < videoHeight; y++, y2--)
        {
            for (int x = 0; x < videoWidth; x++)
            {
                samplePosRGBA = (((y2 * videoWidth) + x) * 4);
                samplePosRGB24 = ((y * videoWidth) + x) * 3;

                // make transparent if pixel matches a certain colour
                if (WhiteTransparent && bgrData[samplePosRGB24 + 2] > 200 && bgrData[samplePosRGB24 + 1] > 200 && bgrData[samplePosRGB24 + 0] > 200)
                {
                    // transparent pixel
                    videoFrameBytes[samplePosRGBA + 0] = 0;
                    videoFrameBytes[samplePosRGBA + 1] = 0;
                    videoFrameBytes[samplePosRGBA + 2] = 0;
                    videoFrameBytes[samplePosRGBA + 3] = 0;
                }
                else
                {
                    // modified pixel format order - switch the 2,1,0 on the right for other formats..
                    videoFrameBytes[samplePosRGBA + 0] = bgrData[samplePosRGB24 + 2];
                    videoFrameBytes[samplePosRGBA + 1] = bgrData[samplePosRGB24 + 1];
                    videoFrameBytes[samplePosRGBA + 2] = bgrData[samplePosRGB24 + 0];
                    videoFrameBytes[samplePosRGBA + 3] = alphaTransparency;
                }
            }
        }

        frameAvailable = false;
        while (!frameAvailable)
        {
            Thread.Sleep(waitTime);
        }
    }
}
This also displays any white areas as transparent in the final image when a bool I added to the class, WhiteTransparent, is true. Crude, I know, but it's doing the trick for me. Just use the lines in the else branch if that's not desired.
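As for the encoding question itself, I can't say definitively which codecs a given DirectShow filter graph will accept, but as a starting point an ffmpeg command along these lines turns a PNG sequence into a WMV with the wmv2 encoder (the frame-name pattern and frame rate are placeholders to adjust):

ffmpeg -framerate 30 -i frame%04d.png -c:v wmv2 -q:v 3 output.wmv

As far as I know the common WMV profiles don't carry an alpha channel, which is presumably why keying out white pixels at render time, as above, is the practical way to get a transparent background.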
