Adobe AIR Stage.contentsScaleFactor always 1 on iPhone 4S and iPad mini - iOS

My application is just a template AIR Mobile AS3 project from FlashDevelop: an application.xml file and a Main class.
In the Main class, I create a text field with the stage.contentsScaleFactor value as its text after the first Event.RESIZE:
import flash.text.TextField;
// Show the stage size and the reported scale factor on screen
var textField:TextField = new TextField();
textField.appendText("Size: " + stage.stageWidth + " x " + stage.stageHeight + "\n");
textField.appendText("Scale: " + stage.contentsScaleFactor + "\n");
addChild(textField);
On my iPhone with Retina support, I get
Size: 960 x 640
Scale: 1
for
<requestedDisplayResolution>high</requestedDisplayResolution>
and
Size: 480 x 320
Scale: 1
for
<requestedDisplayResolution>standard</requestedDisplayResolution>.
Almost the same for iPad,
Size: 2048 x 1536
Scale: 1
for high, and
Size: 1024 x 768
Scale: 1
for standard.
I'm compiling with the latest AIR SDK 18.0.0.142 (beta) and -swf-version=29.
I get the same results with the release AIR 18 SDK.
With the AIR 14 SDK and -swf-version=25, I get garbage values for the size (it looks like my SWF width and height multiplied by what contentsScaleFactor should be), but contentsScaleFactor is still 1.
Edit:
I have come across various questions around the web that mention contentsScaleFactor (for example, this one). They claim that contentsScaleFactor should be 2 on a Retina display.
This is how the property is documented:
Specifies the effective pixel scaling factor of the stage. This value
is usually 1 on standard screens and 2 on HiDPI (a.k.a Retina)
screens. When the stage is rendered on HiDPI screens the pixel
resolution is doubled; even if the stage scaling mode is set to
StageScaleMode.NO_SCALE. Stage.stageWidth and Stage.stageHeight
continue to be reported in classic pixel units. Note: this value can
change dynamically depending if the stage is on a HiDPI or standard
screen.
Also, with
<requestedDisplayResolution>standard</requestedDisplayResolution>
and configureBackBuffer with wantsBestResolution set to true, I still get a 1024×768 buffer on the iPad. I verified that by drawing a sharp 256×256 texture on a 128×128 quad; the result is blurry. When I do the same with
<requestedDisplayResolution>high</requestedDisplayResolution>
I get a sharp-looking image of the same physical size.
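For reference, here is roughly how such a back-buffer test is set up; a minimal Stage3D sketch, not the exact code from the project above (the event wiring is boilerplate I'm assuming):
import flash.display.Stage3D;
import flash.display3D.Context3D;
import flash.events.Event;
var s3d:Stage3D = stage.stage3Ds[0];
s3d.addEventListener(Event.CONTEXT3D_CREATE, onContextCreated);
s3d.requestContext3D();
function onContextCreated(e:Event):void
{
    var context:Context3D = (e.target as Stage3D).context3D;
    // The fifth argument is wantsBestResolution: request the full native (Retina) back buffer
    context.configureBackBuffer(stage.stageWidth, stage.stageHeight, 0, true, true);
}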
My actual questions are:
Is contentsScaleFactor supposed to be 2 on a Retina iPad/iPhone? If so, are there some compiler/packager options I'm missing?
How can I determine, for requestedDisplayResolution=standard, that my stage was scaled?
And if contentsScaleFactor doesn't work on mobile, what is this property for? Should it work, for example, on a Mac with a Retina display?
Edit 2:
On a Mac with a Retina display, contentsScaleFactor works just fine, reporting 2.

Use Capabilities.screenDPI to get the screen density, and divide stage.stageWidth by screenDPI. On an iPad you'll get around 8 (one DPI unit is roughly two fingers of real estate); the DPI gives you your density. Now compare both: a non-Retina iPad is 8 DPI units at 132 pixels per inch, a Retina iPad is 8 DPI units at 264 pixels per inch (HD), etc.
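A minimal sketch of this approach (the 200 dpi threshold below is an illustrative assumption, not a documented cutoff):
import flash.system.Capabilities;
var dpi:Number = Capabilities.screenDPI;
var widthInDpiUnits:Number = stage.stageWidth / dpi; // roughly 8 on an iPad
var isHighDensity:Boolean = dpi >= 200;              // 132 dpi non-Retina iPad vs. 264 dpi Retina iPad
trace("screenDPI: " + dpi);
trace("width in DPI units: " + widthInDpiUnits);
trace("high-density screen: " + isHighDensity);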

From the comments:
Yes, that is how I'm doing it now for iOS devices, but unfortunately,
Capabilities.screenDPI is unreliable in general. It is always 72 on
desktop, some "random" values on Android, and doesn't take into
account the current monitor's DPI.
Maybe it's not so much dots per inch that you need; instead you are looking for pixels per inch?
DPI applies when printing ink dots on paper. PPI means how many pixels there are per real-world inch. Don't worry, even the Wikipedia article admits "... It has become commonplace to refer to PPI as DPI, even though PPI refers to input resolution" (so now we understand that DPI is printing resolution).
This means you should forget Capabilities.screenDPI and instead start including the Capabilities.screenResolutionX (display width) and Capabilities.screenResolutionY (display height) numbers in your calculations.
Here's some code for feedback; hopefully it will be useful to you in some way. PS: double-check the Flash traces by holding a ruler against the computer screen.
import flash.display.StageAlign;
import flash.display.StageScaleMode;
import flash.events.Event;
import flash.system.Capabilities;

var scrn_W : uint = Capabilities.screenResolutionX;
var scrn_H : uint = Capabilities.screenResolutionY;
var scrn_DPI : uint = Capabilities.screenDPI;
var init_stageW : int = stage.stageWidth;
var init_stageH : int = stage.stageHeight;

// Diagonal in pixels
var scrn_Dg_Pix : Number = int( Math.sqrt( scrn_W * scrn_W + scrn_H * scrn_H ) );
// Diagonal in inches -- rough estimate that simply assumes about 100 pixels per inch,
// since the true physical screen size is not available from Capabilities
var scrn_Diag : Number = scrn_Dg_Pix / 100;
var scrn_PPI : uint = scrn_Dg_Pix / scrn_Diag;
// Dot pitch: millimetres per pixel
var dot_Pitch : Number = ( scrn_Diag * 25.4 ) / scrn_Dg_Pix;
var scrn_Inch_W : Number = scrn_W / scrn_PPI;
var scrn_Inch_H : Number = scrn_H / scrn_PPI;

var result_Num : Number = 0;
var temp_Pix_Num : Number = 0;
var testInch : Number = 0;
var my_PixWidth : Number = 0;

stage.scaleMode = StageScaleMode.NO_SCALE;
stage.align = StageAlign.TOP_LEFT;
stage.addEventListener(Event.RESIZE, resizeListener); // just in case

////////////////////
//// FUNCTIONS
function inch_toPixels( in_Num : Number ) : Number
{
    temp_Pix_Num = in_Num;
    result_Num = scrn_PPI * temp_Pix_Num;
    return result_Num;
}
function pixels_toInch( in_Num : Number ) : Number
{
    temp_Pix_Num = in_Num;
    result_Num = temp_Pix_Num / scrn_PPI;
    return result_Num;
}
function cm_toInch( in_CM : Number ) : Number
{
    // inch = cm * 0.393700787 (centimetres to inches)
    result_Num = in_CM * 0.393700787;
    return result_Num;
}
function inch_toCM( in_Inch : Number ) : Number
{
    // cm = inch * 2.54 (inches to centimetres)
    result_Num = in_Inch * 2.54;
    return result_Num;
}
function resizeListener( e : Event ) : void
{
    // Handle Stage resize here (i.e. app window scale / drag / minimize etc.)
    //trace("new stageWidth: " + stage.stageWidth + " new stageHeight: " + stage.stageHeight);
}

///////////////
//// TRACES
trace("Stage Init Width : " + init_stageW);
trace("Stage Init Height : " + init_stageH);
trace("Screen Width : " + scrn_W);
trace("Screen Height : " + scrn_H);
trace("Screen DPI : " + scrn_DPI);
trace("Screen PPI : " + scrn_PPI);
trace("Screen Diag : " + scrn_Diag);
trace("Screen Diag Pix : " + scrn_Dg_Pix);
trace("Dot Pitch : " + dot_Pitch);
trace("Disp Width (inches) : " + scrn_Inch_W);
trace("Disp Height (inches) : " + scrn_Inch_H);

How about making use of Application.applicationDPI and Application.runtimeDPI to calculate the actual scale factor?
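A minimal sketch of that idea, assuming a Flex-based mobile project (applicationDPI and runtimeDPI live on the spark Application class, so they are not available in a pure AS3 project like the one in the question):
import mx.core.FlexGlobals;
// runtimeDPI is the DPI bucket Flex detects on the device at runtime;
// applicationDPI is the bucket the application was authored for (set in MXML).
// Their ratio is the scale factor Flex applies to the application.
var app:Object = FlexGlobals.topLevelApplication;
var scaleFactor:Number = Number(app.runtimeDPI) / Number(app.applicationDPI);
trace("runtimeDPI: " + app.runtimeDPI + ", applicationDPI: " + app.applicationDPI + ", scale: " + scaleFactor);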

Related

maximum response of image

I am very new to MATLAB. I am working on retinal image analysis and I want to find the maximum response of an image. I have 12 images. I have to compare all of these images, find the maximum pixel value at each point, and write it to a new image, so that my new image is formed by the maximum pixels. I am using MATLAB for this.
For example, if the pixel values in my first image are
[9 8 6 3 2]
and in my second image are
[5 6 7 9 0]
then my new, third image should be
[9 8 7 9 2] % compare the two images pixel by pixel
i.e. I compare 9 with 5 and write the maximum value, 9, to my new image; next, with 8 and 6, I take 8 since it is the maximum.
This is just my own idea. Can this be done? How do I do it? I have to compare 10 images all together and create an 11th image.
So far I have tried this:
A = getimage();
I = A(:,:,2);
lambda = 8;
theta = 0;
psi = [0, pi/2];
gamma = 0.5;
bw = 1;
N = 12;
angle = 0;
theMaxValues = zeros(1, N);
img_in = I;
img_out = zeros(size(img_in,1), size(img_in,2), N);
for n = 1:N
    gb = gabor_fn(bw, gamma, psi(1), lambda, theta);
    %theMaxValues(n) = max(gb(:));
    % I tried this way:
    matrix(:,:,1) = gb; % but it gives me an error
    theta = angle + pi/180;
    angle = angle + 15;
end
[overallMax, index] = max(theMaxValues);
thetaOfMax = theta(index);
final_gb = gabor_fn(bw, gamma, psi(1), lambda, thetaOfMax);
figure;
imshow(final_gb);
title('final image');
You can add all of your images into one matrix, where the third dimension represents the index of your image:
matrix(:,:,1) % first image
matrix(:,:,2) % second image, and so on
Then you can simply search for the maximum value at each pixel along the third dimension using
C = max(matrix,[],3)

Photoshop script: How to find the distance of a graphic's center relative to the upper left corner of the canvas?

My problem is this:
Is it possible to measure, with a Photoshop script (I use CS5.1), the exact (x, y) of the center of a graphic (as shown in the image), relative to the upper left corner of the canvas (0, 0)? What approach should I follow? Does anyone have an idea? (Each graphic is on its own layer, and I want to take the measurement for each graphic, layer by layer, in order to build the layout in Corona.)
Yes, in Photoshop, click on "Image" in the navigation menu, then choose Image Size. Take the width and divide by 2, take the height and divide by 2.
To find the coordinates of the centre of the image you need to find the layer bounds, which will tell you the left, top, right and bottom values of the image. From this we can work out the width and height of the image and its centre (measured from the top left of the Photoshop document).
//pref pixels
app.preferences.rulerUnits = Units.PIXELS;
// call the source document
var srcDoc = app.activeDocument;
// get current width values
var W = srcDoc.width.value;
var H = srcDoc.height.value;
var X = srcDoc.activeLayer.bounds[0];
var Y = srcDoc.activeLayer.bounds[1];
var X1 = srcDoc.activeLayer.bounds[2];
var Y1 = srcDoc.activeLayer.bounds[3];
var selW = parseFloat((X1-X));
var selH = parseFloat((Y1-Y));
var posX = Math.floor(parseFloat((X+X1)/2));
var posY = Math.floor(parseFloat((Y+Y1)/2));
alert(X + ", " + Y + ", " + X1 + ", " + Y1 + "\n" + "W: " + selW + ", H: " + selH + "\nPosition " + posX + "," + posY);

How to render WebGL content using high DPI devices?

What is the right way to setup a WebGL to render to all native pixels on a high dots-per-inch display (such as a macbook retina or pixel chromebook)?
For WebGL it's relatively simple.
var desiredCSSWidth = 400;
var desiredCSSHeight = 300;
var devicePixelRatio = window.devicePixelRatio || 1;
canvas.width = desiredCSSWidth * devicePixelRatio;
canvas.height = desiredCSSHeight * devicePixelRatio;
canvas.style.width = desiredCSSWidth + "px";
canvas.style.height = desiredCSSHeight + "px";
See http://www.khronos.org/webgl/wiki/HandlingHighDPI
There are conformance tests that check these rules are followed; specifically, the browser is not allowed to change the size of the backing store of a WebGL canvas.
For a regular 2D canvas it's less simple, but that was not the question asked.

Get luminance from iOS camera

I want to get the luminance from the camera. I've checked a solution that computes the average luminance
obtaining-luminosity-from-an-ios-camera
but the camera automatically sets the exposure, so if I turn the camera toward a source of light (e.g. a bulb), at first the luminance is very high, but after a while the value is lower, because the exposure settings have changed. I've tested locking the exposure, but this is not a good solution, because if the exposure is locked while the image from the camera is dark, then a small source of light is counted as a very high value. Is there any way to get an absolute value of luminance?
I've checked the application Light Detector and it works well: the exposure changes, but the luminance value stays stable.
Regards, Adam
Do you know how to create a custom Core Image filter that returns the output of a CIKernel (or CIColorKernel) object? If not, you should, and I'd be happy to provide you with easy-to-understand instructions for doing that.
Assuming you do, here's the OpenGL ES code that will return only the luminance values of an image it processes:
vec4 rgb2hsl(vec4 color)
{
    // Compute min and max component values
    float MAX = max(color.r, max(color.g, color.b));
    float MIN = min(color.r, min(color.g, color.b));
    // Make sure MAX > MIN to avoid division by zero later
    MAX = max(MIN + 1e-6, MAX);
    // Compute luminosity
    float l = (MIN + MAX) / 2.0;
    // Compute saturation
    float s = (l < 0.5 ? (MAX - MIN) / (MIN + MAX) : (MAX - MIN) / (2.0 - MAX - MIN));
    // Compute hue
    float h = (MAX == color.r ? (color.g - color.b) / (MAX - MIN) : (MAX == color.g ? 2.0 + (color.b - color.r) / (MAX - MIN) : 4.0 + (color.r - color.g) / (MAX - MIN)));
    h /= 6.0;
    h = (h < 0.0 ? 1.0 + h : h);
    return vec4(h, s, l, color.a);
}
kernel vec4 hsl(sampler image)
{
    // Get a pixel from the image and unpremultiply its alpha
    vec4 pixel = unpremultiply(sample(image, samplerCoord(image)));
    // Convert to HSL; display only the luminance value
    return premultiply(vec4(vec3(rgb2hsl(pixel).b), 1.0));
}
The above is OpenGL ES code written originally by Apple developers; I modified it to display only the luminance values.

Finding the largest 16:9 rectangle within another rectangle

I am working on this Lua script and I need to be able to find the largest 16:9 rectangle within another rectangle that doesn't have a specific aspect ratio. So can you tell me how I can do that? You don't have to write Lua - pseudocode works too.
Thanks!
This is what I have tried, and I can confirm that it won't work on outer rects with a lower aspect ratio:
if wOut > hOut then
    wIn = wOut
    hIn = (wIn / 16) * 9
else
    hIn = hOut
    wIn = (hIn / 9) * 16
end
heightCount = originalHeight / 9;  // integer division intended
widthCount = originalWidth / 16;   // integer division intended
if (heightCount == 0 || widthCount == 0)
    throw "No 16/9 rectangle";
recCount = min(heightCount, widthCount);
targetHeight = recCount * 9;
targetWidth = recCount * 16;
So far, any rectangle with left = 0..(originalWidth - targetWidth) and top = 0..(originalHeight - targetHeight) and width = targetWidth and height = targetHeight should satisfy your requirements.
Well, your new rectangle can be described as:
h = w / (16/9)
w = h * (16/9)
Your new rectangle should then be based on the width of the outer rectangle, so:
h = w0 / (16/9)
w = w0
Depending on how Lua works with numbers, you might want to make sure it is using real division as opposed to integer division - last time I looked was 2001, and my memory is deteriorating faster than coffee gets cold, but I seem to remember all numbers being floats anyway...
