NativeScript 8 iOS shadow cornerRadius bug

Here is an example with a test shadow (black) on a white element with cornerRadius 17 dip.
The shadow takes on a radius that is larger than the element it belongs to: visually, the shadow looks like it has a cornerRadius of 50%, when it should be 17 dip.
Expected result: the cornerRadius of the shadow should match the cornerRadius of the element.
This reproduces when the shadow is applied through styles:
box-shadow: 0 20 0 #000;
to an element with:
border-radius: 17;
package.json
"#nativescript/core": "8.3.6",
"#nativescript/ios": "8.2.3",
If you apply the shadow directly to the native view, it works as it should:
const nsView = args.object;
const nsColorShadow = new Color('black');
const nsColorBg = new Color('white');
// Grab the native UIView behind the NativeScript view
const iosView = nsView.ios;
// Let the shadow render outside the view's bounds
iosView.layer.masksToBounds = false;
iosView.layer.shadowColor = nsColorShadow.ios.CGColor;
iosView.layer.shadowOpacity = 1;
iosView.layer.shadowRadius = 0;
iosView.layer.cornerRadius = 17;
iosView.layer.backgroundColor = nsColorBg.ios.CGColor;
iosView.layer.shadowOffset = CGSizeMake(0, 20);

I think this is fixed in core@8.4.2: https://github.com/NativeScript/NativeScript/pull/10142

iosView.layer.cornerRadius is in dp (device pixels), not dip (device-independent pixels). When you write border-radius: 17; you are setting the value in dip. You could instead write border-radius: 17px; to get the same result you had when setting the layer manually.
The reason you see it stuck at 50% is that 50% is the maximum value for border-radius, and 17 dip converted to dp is larger than 50% of this element, so it clips to 50%.
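To make the claimed unit mismatch concrete, here is a rough sketch (my reading of this answer, not verified against the core source); Screen comes from @nativescript/core:
import { Screen } from '@nativescript/core';
// border-radius: 17 is parsed as 17 dip...
const dipRadius = 17;
// ...which gets multiplied by the screen scale (e.g. 3 on recent iPhones),
// so the layer would receive a radius of ~51 device pixels, easily more
// than 50% of a small element, at which point it clips to 50%.
const pxRadius = dipRadius * Screen.mainScreen.scale;
console.log(pxRadius);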

Related

Issue when drawing image on canvas on iPad

I'm working on an HTML5 project that will run in a WKWebView on iPad.
I'm doing everything programmatically.
My WKWebView frame takes the full screen and the viewport is 1180x820,
so I set my canvas width and height to 1180x820 too.
When I display my original 1920x1080 image on the canvas with the drawImage function,
_canvasContext.drawImage(imgMenu, 0, 0, 1920, 1080);
the image does not fit on the screen (obviously) but is displayed really well (not blurred).
So I rescale the image when drawing it, still with the drawImage function:
_canvasContext.drawImage(imgMenu, 0, 0, 1920/2, 1080/2);
The image now fits on the screen, but it is blurred.
The downscaling is really, really bad (it could be much better; this example is not even the worst one).
I already tried the following parameters:
_canvasContext.imageSmoothingEnabled = true;
_canvasContext.webkitImageSmoothingEnabled = true;
_canvasContext.mozImageSmoothingEnabled = true;
_canvasContext.imageSmoothingQuality = "high";
It does not help.
Maybe I'm doing something wrong; I don't understand what to do.
The screen resolution of my iPad is 2360x1640, so displaying a 1920x1080 picture should not be a problem.
If anyone could help me, that would save my life :)
Best regards,
Alex
This is one of those annoying and confusing things about canvas. What you need to do is size the canvas using devicePixelRatio. This increases the actual size of the canvas to match the pixel density of the device: it could be 2, 1.5, etc.; a Retina screen is often 2.
For drawing your image, the smart way is to support any image size and fit it into the canvas area (usually scaling down). Note that with a tiny image, this code will scale up and lose resolution.
const IMAGE_URL = 'https://unsplash.it/1920/1080';

const canvas = document.createElement('canvas');
const _canvasContext = canvas.getContext('2d');

// You will probably want this, unless you want to support many screen
// sizes, in which case you will actually want the window size:
/*
const width = 1180;
const height = 820;
*/
const width = window.innerWidth;
const height = window.innerHeight;
const ratio = window.devicePixelRatio;

// Size the canvas to use the pixel ratio
canvas.width = Math.round(width * ratio);
canvas.height = Math.round(height * ratio);

// Downsize the canvas with CSS - making it retina compatible
canvas.style.width = width + 'px';
canvas.style.height = height + 'px';

document.body.appendChild(canvas);

_canvasContext.fillStyle = 'gray';
_canvasContext.fillRect(0, 0, canvas.width, canvas.height);

function drawImage(url) {
    const img = new Image();
    img.addEventListener('load', () => {
        // Find a good scale value to fit the image on the canvas
        const scale = Math.min(
            canvas.width / img.width,
            canvas.height / img.height
        );
        // Calculate image size and padding
        const scaledWidth = img.width * scale;
        const scaledHeight = img.height * scale;
        const padX = scaledWidth < canvas.width ? canvas.width - scaledWidth : 0;
        const padY = scaledHeight < canvas.height ? canvas.height - scaledHeight : 0;
        _canvasContext.drawImage(img, padX / 2, padY / 2, scaledWidth, scaledHeight);
    });
    img.src = url;
}

drawImage(IMAGE_URL);
body, html {
    margin: 0;
    padding: 0;
}
As mentioned in a comment in the code snippet: if you want your canvas to always use 1180x820, be sure to change the width and height variables:
const width = 1180;
const height = 820;
For the purposes of the snippet I used the window size, which may be better for you if you wish to support other device sizes.

How to set pure red pixel full screen on iPhone OLED

I am studying the pixel arrangement of the iPhone OLED screen.
I use the following code:
for (int i = 0; i < height; i++)
{
    for (int j = 0; j < width; j++)
    {
        data[(i*width + j)*4]   = (Byte)255; // R (assuming RGBA byte order)
        data[(i*width + j)*4+1] = (Byte)0;   // G
        data[(i*width + j)*4+2] = (Byte)0;   // B
        data[(i*width + j)*4+3] = (Byte)255; // A
    }
}
in the view controller. But when I set the iPhone X screen to full red, the screen pixels seen under a microscope are not all red: green pixels can also be seen.
What I want to achieve is that when the screen is set to pure red, the iPhone X displays only red pixels, with no green or blue pixels lit.
How can I solve this problem?
Try using a system color:
view.backgroundColor = UIColor.systemRed
iOS offers a range of system colors that automatically adapt to vibrancy and changes in accessibility settings like Increase Contrast and Reduce Transparency.
https://developer.apple.com/design/human-interface-guidelines/ios/visual-design/color/

Image auto-cropping when rotating in OpenCV.js

I'm using OpenCV.js to rotate an image to the left and right, but it gets cropped when I rotate it.
This is my code:
let src = cv.imread('img');
let dst = new cv.Mat();
let dsize = new cv.Size(src.rows, src.cols);
let center = new cv.Point(src.cols/2, src.rows/2);
let M = cv.getRotationMatrix2D(center, 90, 1);
cv.warpAffine(src, dst, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());
cv.imshow('canvasOutput', dst);
src.delete(); dst.delete(); M.delete();
Here is an example: a source image, the result I want (the whole image rotated), and what is actually returned (the rotated image cropped).
What should I do to fix this problem?
P.S. I don't know how to use any language except JavaScript.
A bit late, but given the scarcity of OpenCV.js material I'll post the answer:
The function cv.warpAffine crops the image because it only applies the mathematical transformation, as documented in OpenCV and other sources. If you wish to rotate to an arbitrary angle, you'll need to calculate the padding needed to compensate for that.
If you only need rotations in multiples of 90 degrees, you can use cv.rotate as follows:
cv.rotate(src, dst, cv.ROTATE_90_CLOCKWISE);
Here src is the matrix with your source image, dst is the destination matrix (which can be created empty with let dst = new cv.Mat();), and cv.ROTATE_90_CLOCKWISE is the rotate flag indicating the angle of rotation. There are three options (a full usage sketch follows the list):
cv.ROTATE_90_CLOCKWISE
cv.ROTATE_180
cv.ROTATE_90_COUNTERCLOCKWISE
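Putting that together with the element ids from the question ('img' and 'canvasOutput'), and assuming cv.rotate is whitelisted in your opencv.js build (the next answer notes it may not be):
let src = cv.imread('img');
let dst = new cv.Mat();
// Rotate left; use cv.ROTATE_90_CLOCKWISE to rotate right instead
cv.rotate(src, dst, cv.ROTATE_90_COUNTERCLOCKWISE);
cv.imshow('canvasOutput', dst);
src.delete(); dst.delete();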
You can find which OpenCV functions are implemented in OpenCV.js in the repository's opencv_js.config.py file; if a function is whitelisted there, it works in opencv.js even if it is not covered in the opencv.js tutorial.
Information about how to use each function is in the general documentation. The order of the parameters generally follows the C++ signature (don't be distracted by the obscure C++ vector type syntax), and the names of flags (like the rotate flag) are usually shown in the Python signature.
I was also experiencing this issue, so I had a look into @fernando-garcia's answer; however, I couldn't see that rotate had been implemented in opencv.js, so it seems the fix in the post @dan-mašek linked is the best solution for this, although the functions required are slightly different.
This is the solution I came up with (note: I haven't tested this exact code and there is probably a more elegant/efficient way of writing it, but it gives the general idea; it will also only work with images rotated by multiples of 90°):
const canvas = document.getElementById('canvas');
const image = cv.imread(canvas);
let output = new cv.Mat();
const size = new cv.Size();
size.width = image.cols;
size.height = image.rows;
// To add transparent borders
const scalar = new cv.Scalar(0, 0, 0, 0);

let center;
let padding;
let height = size.height;
let width = size.width;

if (height > width) {
    center = new cv.Point(height / 2, height / 2);
    padding = (height - width) / 2;
    // Pad out the left and right before rotating to make the width the same as the height
    cv.copyMakeBorder(image, output, 0, 0, padding, padding, cv.BORDER_CONSTANT, scalar);
    size.width = height;
} else {
    center = new cv.Point(width / 2, width / 2);
    padding = (width - height) / 2;
    // Pad out the top and bottom before rotating to make the height the same as the width
    cv.copyMakeBorder(image, output, padding, padding, 0, 0, cv.BORDER_CONSTANT, scalar);
    size.height = width;
}

// Do the rotation
const rotationMatrix = cv.getRotationMatrix2D(center, 90, 1);
cv.warpAffine(
    output,
    output,
    rotationMatrix,
    size,
    cv.INTER_LINEAR,
    cv.BORDER_CONSTANT,
    new cv.Scalar()
);

let rectangle;
if (height > width) {
    rectangle = new cv.Rect(0, padding, height, width);
} else {
    /* These arguments might not be in the right order, as my solution only
     * needed height > width, so I've just assumed this is the order they'll
     * need to be in for width >= height.
     */
    rectangle = new cv.Rect(padding, 0, height, width);
}

// Crop the image back to its original dimensions
output = output.roi(rectangle);
cv.imshow(canvas, output);

Painting with transparency issue

I'm using CGLayers to implement a "painting" technique similar to a Photoshop airbrush, and I have run into something strange. When I use transparency and overpaint an area, the color never reaches full intensity (if the alpha value is below 0.5).

My application uses a circular "airbrush" pattern with opacity falloff at the edges, but I have reproduced the problem using just a semi-transparent white square. When the opacity level is less than 0.5, the overpainted area never reaches the pure white of the source layer. I probably wouldn't have noticed, but I'm using the result of the painting as a mask, and not being able to get pure white causes problems. Any ideas what's going on here? Target is iOS SDK 5.1.
Below is the resultant color after drawing the semi-transparent square many times over a black background:

opacity   color
-------   -----
1.0       255
0.9       255
0.8       255
0.7       255
0.6       255
0.5       255
0.4       254
0.3       253
0.2       252
0.1       247
Simplified code that shows the issue:
- (void)drawRect:(CGRect)rect
{
    CGContextRef viewContext = UIGraphicsGetCurrentContext();

    // Create grey gradient to compare final blend color
    CGRect lineRect = CGRectMake(20, 20, 1, 400);
    float greyLevel = 1.0;
    for (int i = 0; i < 728; i++)
    {
        CGContextSetRGBFillColor(viewContext, greyLevel, greyLevel, greyLevel, 1);
        CGContextFillRect(viewContext, lineRect);
        lineRect.origin.x += 1;
        greyLevel -= 0.0001;
    }

    // Create semi-transparent white square
    CGSize whiteSquareSize = CGSizeMake(40, 40);
    CGLayerRef whiteSquareLayer = CGLayerCreateWithContext(viewContext, whiteSquareSize, NULL);
    CGContextRef whiteSquareContext = CGLayerGetContext(whiteSquareLayer);
    CGContextSetAlpha(whiteSquareContext, 1.0f); // just to make sure
    CGContextSetRGBFillColor(whiteSquareContext, 1, 1, 1, 0.3); // ??? color never reaches pure white if alpha < 0.5
    CGRect whiteSquareRect = CGRectMake(0, 0, whiteSquareSize.width, whiteSquareSize.height);
    CGContextFillRect(whiteSquareContext, whiteSquareRect);

    // "Paint" with layer a bazillion times
    CGContextSetBlendMode(viewContext, kCGBlendModeNormal); // just to make sure
    CGContextSetAlpha(viewContext, 1.0); // just to make sure
    for (int strokeNum = 0; strokeNum < 100; strokeNum++)
    {
        CGPoint drawPoint = CGPointMake(0, 400);
        for (int x = 0; x < 730; x++)
        {
            CGContextDrawLayerAtPoint(viewContext, drawPoint, whiteSquareLayer);
            drawPoint.x++;
        }
    }
}
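One plausible explanation for the stalled values in the table (an assumption on my part, not a confirmed diagnosis of the CGLayer pipeline): if each source-over pass is quantized to 8 bits, the per-pass increment alpha * (255 - dst) eventually rounds to zero and the accumulated value stops short of 255. A toy simulation of that idea (real Core Graphics also premultiplies alpha, which this model ignores, so the exact stall values differ):
function repeatedBlend(alpha, passes) {
    let dst = 0; // start on the black background
    for (let i = 0; i < passes; i++) {
        // source-over: src*alpha + dst*(1 - alpha), rounded to 8 bits per pass
        dst = Math.round(255 * alpha + dst * (1 - alpha));
    }
    return dst;
}
console.log(repeatedBlend(0.5, 100)); // 255 - reaches pure white
console.log(repeatedBlend(0.1, 100)); // 251 - stalls short of 255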

WebGL gl.viewport change

I have a problem keeping canvas resizing and gl.viewport in sync.
Let's say I start with a 300x300 canvas and initialize gl.viewport to the same size (gl.viewport(0, 0, 300, 300)).
After that, I run tests in the browser's console:
I change the size of my canvas using jQuery, calling something like $("#scene").width(200).height(200).
After this, I call my resizeWindow function:
function resizeWindow(width, height) {
    var ww = width === undefined ? w.gl.viewportWidth : width;
    var h = height === undefined ? w.gl.viewportHeight : height;
    h = h <= 0 ? 1 : h;
    w.gl.viewport(0, 0, ww, h);
    mat4.identity(projectionMatrix);
    mat4.perspective(45, ww / h, 1, 1000.0, projectionMatrix);
    mat4.identity(modelViewMatrix);
}
This function is supposed to synchronize the viewport with the required dimensions.
Unfortunately, after this call my gl.viewport covers only a part of my canvas.
Could anyone tell me what is going wrong?
There is no such thing as gl.viewportWidth or gl.viewportHeight.
If you want to set your perspective matrix, you should use canvas.clientWidth and canvas.clientHeight as your inputs to perspective. Those will give you the correct results regardless of what size the browser scales the canvas to, for example if you let the canvas auto-scale with CSS:
<canvas style="width: 100%; height:100%;"></canvas>
...
var width = canvas.clientWidth;
var height = Math.max(1, canvas.clientHeight); // prevent divide by 0
mat4.perspective(45, width / height, 1, 1000, projectionMatrix);
As for the viewport, use gl.drawingBufferWidth and gl.drawingBufferHeight. That's the correct way to find the size of your drawing buffer:
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
Just to be clear, there are several things conflated here:
canvas.width, canvas.height = size you requested the canvas's drawingBuffer to be
gl.drawingBufferWidth, gl.drawingBufferHeight = size you actually got. In 99.99% of cases this will be the same as canvas.width, canvas.height.
canvas.clientWidth, canvas.clientHeight = size the browser is displaying your canvas.
To see the difference
<canvas width="10" height="20" style="width: 30px; height: 40px"></canvas>
or
canvas.width = 10;
canvas.height = 20;
canvas.style.width = "30px";
canvas.style.height = "40px";
In these cases canvas.width will be 10, canvas.height will be 20, canvas.clientWidth will be 30, canvas.clientHeight will be 40. It's common to set canvas.style.width and canvas.style.height to a percentage so that the browser scales it to fit whatever element it is contained in.
On top of that, there are the two things you brought up:
viewport = generally you want this to be the size of your drawingBuffer
aspect ratio = generally you want this to be the size your canvas is scaled to
Given those definitions the width and height used for viewport is often not the same as the width and height used for aspect ratio.
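Putting both pieces together, here is a minimal sketch of a corrected resizeWindow under the question's own setup (w.gl, projectionMatrix, modelViewMatrix, and mat4 are the names from the question; gl.canvas is the canvas the context was created from):
function resizeWindow() {
    var gl = w.gl;
    // Viewport: match the actual drawing buffer
    gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
    // Aspect ratio: match the size the canvas is displayed at
    var aspect = gl.canvas.clientWidth / Math.max(1, gl.canvas.clientHeight);
    mat4.identity(projectionMatrix);
    mat4.perspective(45, aspect, 1, 1000.0, projectionMatrix);
    mat4.identity(modelViewMatrix);
}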
