Any way to make an image round in Konvajs?

I have a Konva image. How do I set radius or border-radius?
In the docs the Image class has width and height, but I want to set a radius (border-radius). There is a Circle class that can have an image as its fill, but when I use that I need to specify the dimensions for each image so that it zooms into and crops the correct location.
<v-image
:config="{
x: 50,
y: 50,
image: image,
shadowBlur: 5
}"
/>
The Image class should have a Radius property, just like the Circle class. Is there an alternative way to do this that I am missing?

If you want rounded corners, or even a complete circle, you need to use a clipping function applied to your group or shape.
See Konva docs example here
The important part is that, when creating your group or shape, you include the definition of the clipFunc. The example below creates 2 overlapping circles; see the Konva docs for the working code.
var group = new Konva.Group({
clipFunc: function(ctx) {
ctx.arc(250, 120, 50, 0, Math.PI * 2, false);
ctx.arc(150, 120, 60, 0, Math.PI * 2, false);
}
});
For rounded corners see the answer to a similar question with a working code snippet here.
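For the rounded-corner case specifically, here is a minimal sketch of a clipFunc that traces a rounded rectangle with canvas arcTo calls. The coordinates and radius below are made-up illustration values, and roundedRectClip is just a helper name I chose:

```javascript
// Trace a rounded-rectangle clipping path on the clip context.
// x/y/width/height/radius are hypothetical values for illustration.
function roundedRectClip(ctx, x, y, width, height, radius) {
  ctx.beginPath();
  ctx.moveTo(x + radius, y);
  ctx.arcTo(x + width, y, x + width, y + height, radius);
  ctx.arcTo(x + width, y + height, x, y + height, radius);
  ctx.arcTo(x, y + height, x, y, radius);
  ctx.arcTo(x, y, x + width, y, radius);
  ctx.closePath();
}

// Usage with Konva (assuming `image` is a loaded HTMLImageElement):
// var group = new Konva.Group({
//   clipFunc: function (ctx) {
//     roundedRectClip(ctx, 50, 50, 100, 100, 15);
//   }
// });
// group.add(new Konva.Image({ x: 50, y: 50, image: image }));
```

Anything drawn into the group, including the image, is then clipped to the rounded rectangle.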

Related

jspdf: how to clip region/shapes?

I am trying to draw some shapes and lines in a restricted area.
http://raw.githack.com/MrRio/jsPDF/master/docs/jsPDF.html#clip
The docs say there is a clip() method, but how exactly can I use it? And after I have clipped some shapes, is it possible to draw outside the clipping region again (i.e. back to normal)?
A clear example would be appreciated. Thank you.
I am using jspdf in combo with jspdf-autotable.
After some trial and error, I have found a solution:
pdf.saveGraphicsState(); // pdf.internal.write('q');
// draw clipping objects
pdf.rect(0, 0, 100, 100, null); // important: style parameter is null!
pdf.clip();
// draw objects that need to be clipped
pdf.setFillColor('#f54f44');
pdf.rect(0, 0, 100, 100, 'F');
pdf.setFillColor('#54ff54');
pdf.rect(50, 50, 100, 100, 'F');
// restores the state to where there was no clipping
pdf.restoreGraphicsState(); // pdf.internal.write('Q');
// these objects will not be clipped
pdf.setFillColor('#001100');
pdf.rect(80, 80, 100, 100, 'F');
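If this pattern comes up often, the save/clip/restore sequence can be wrapped in a small helper so the restore is never forgotten. This is only a sketch built from the calls shown above; withClip is a hypothetical name, not part of the jsPDF API:

```javascript
// Run `drawClipPath` to define the clipping region (drawn with the
// null style, as above), clip, draw the clipped content, then restore
// the graphics state so that later drawing is unclipped again.
function withClip(pdf, drawClipPath, drawClipped) {
  pdf.saveGraphicsState();
  drawClipPath(pdf);
  pdf.clip();
  drawClipped(pdf);
  pdf.restoreGraphicsState();
}

// Usage (pdf being a jsPDF instance):
// withClip(pdf,
//   p => p.rect(0, 0, 100, 100, null),  // clipping shape
//   p => { p.setFillColor('#f54f44'); p.rect(0, 0, 100, 100, 'F'); });
```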

Unexpected behaviour with CIKernel

I made this example to show the problem. It takes one pixel from the texture at a hardcoded coordinate and uses it as the result for every pixel in the shader, so I expect the whole image to be a single color. With small images it works perfectly, but with big images I get a strange result. For example, this image is 7680x8580 and you can see 4 squares:
Here is my code
kernel vec4 colorKernel(sampler source)
{
vec4 key = sample(source, samplerTransform(source, vec2(100., 200.)));
return key;
}
Here is how I init Kernel:
override var outputImage: CIImage? {
return colorFillKernel.apply(
extent: CGRect(origin: CGPoint.zero, size: inputImage!.extent.size),
roiCallback:
{
(index, rect) in
return rect
},
arguments: [
inputImage])
}
Also, this code shows image properly, without changes and squares:
vec2 dc = destCoord();
return sample(source, samplerTransform(source, dc));
The public documentation says "Core Image automatically splits large images into smaller tiles for rendering, so your callback may be called multiple times.", but I can't find any explanation of how to handle this situation. I have kaleidoscopic effects, and from any tile I need to be able to read pixels from other tiles as well...
I think the problem occurs due to a wrongly defined region of interest in combination with tiling.
In the roiCallback, Core Image asks which area of the input image (at index, in case you have multiple inputs) your kernel needs to look at in order to produce the given region (rect) of the output image. The reason this is a closure is tiling:
If the processed image is too large, Core Image breaks it down into multiple tiles, renders those tiles separately, and stitches them together again afterward. For each tile, Core Image asks what part of the input image your kernel needs to read to produce that tile.
So for your input image, the roiCallback might be called something like four times (or even more) during rendering, for example with the following rectangles:
CGRect(x: 0, y: 0, width: 4096, height: 4096) // top left
CGRect(x: 4096, y: 0, width: 3584, height: 4096) // top right
CGRect(x: 0, y: 4096, width: 4096, height: 4484) // bottom left
CGRect(x: 4096, y: 4096, width: 3584, height: 4484) // bottom right
This is an optimization mechanism of Core Image. It wants to only read and process the pixels that are needed to produce a given region of the output. So it's best to adapt the ROI as best as possible to your use case.
Now the ROI depends on the kernel. There are basically four scenarios:
Your kernel has a 1:1 mapping between input pixel and output pixel. So in order to produce an output color value, it needs to read the pixel at the same position from the input image. In this case, you just return the input rect in your roiCallback. (Or even better, you use a CIColorKernel that is made for this use case.)
Your kernel performs some kind of convolution and not only requires the input pixel at the same coordinate as the output but also some region around it (for instance for a blur operation). Your roiCallback could look like this then:
let inset = self.radius // like radius of CIGaussianBlur
let roiCallback: CIKernelROICallback = { _, rect in
return rect.insetBy(dx: -inset, dy: -inset)
}
Your kernel always needs to read a specific region of the input, regardless of which part of the output is rendered. Then you can just return that specific region in the callback:
let roiCallback: CIKernelROICallback = { _, _ in CGRect(x: 100, y: 200, width: 1, height: 1) }
The kernel always needs access to the whole input image. This is for example the case when you use some kind of lookup table to derive colors. In this case, you can just return the extent of the input and ignore the parameters:
let roiCallback: CIKernelROICallback = { _, _ in inputImage.extent }
For your example, scenario 3 should be the right choice. For your kaleidoscopic effects, I assume you need a certain region of source pixels around the destination coordinate in order to produce an output pixel. In that case it would be best to calculate the size of that region and use a roiCallback like in scenario 2.
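Putting that together, here is a sketch of how the outputImage above could look under scenario 3 (the kernel only ever reads the hardcoded pixel at (100, 200)):

```swift
override var outputImage: CIImage? {
    guard let input = inputImage else { return nil }
    return colorFillKernel.apply(
        extent: input.extent,
        // The kernel only reads the single pixel at (100, 200),
        // so the ROI is that 1x1 rect for every requested output tile.
        roiCallback: { _, _ in CGRect(x: 100, y: 200, width: 1, height: 1) },
        arguments: [input])
}
```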
P.S.: Using the Core Image Kernel Language (CIKernel(source: "<code>")) is super duper deprecated now. You should consider writing your kernels in the Metal Shading Language instead. Check out this year's WWDC talk to learn more. 🙂

Shape/mask on UIImage?

I was wondering how this can be achieved in Xcode? It's part of a profile, so the header is an actual UIImageView, but I'm not sure how to achieve the curve below it. Any ideas?
Say the grey area is built out of a bottom grey rectangle and, on top of it, another rectangle holding your arc. Then we could do something like this:
Create a UIBezierPath in the shape of a circle:
let path = UIBezierPath(ovalIn:CGRect(x: 0, y: view.bounds.height/2, width: view.bounds.width, height: view.bounds.height)).cgPath
Apply it to the top rectangle.
let overlay = CAShapeLayer()
overlay.path = path
overlay.fillColor = UIColor.gray.cgColor
overlay.shouldRasterize = true
view.layer.addSublayer(overlay)
This will create a perfect circle, but you can tweak the CGRect to your liking in order to get the shape you want!
It can be achieved by giving the header view a UIBezierPath, but if you don't want to do that, I found a cool way of doing it.
Your header view contains an image (let's call it HeaderImage).
Make an image of the desired shape (let's call it MaskImage) and load it into an image view:
let path = UIImageView.init(image: #imageLiteral(resourceName: "MaskImage"))
Then apply this mask to the header image:
HeaderImage.mask = path
Hope it works for you.

How to do transforms on a CALayer?

Before writing this question, I've
had experience with Affine transforms for views
read the Transforms documentation in the Quartz 2D Programming Guide
seen this detailed CALayer tutorial
downloaded and run the LayerPlayer project from Github
However, I'm still having trouble understanding how to do basic transforms on a layer. Finding explanations and simple examples for translate, rotate and scale has been difficult.
Today I finally decided to sit down, make a test project, and figure them out. My answer is below.
Notes:
I only do Swift, but if someone else wants to add the Objective-C code, be my guest.
At this point I am only concerned with understanding 2D transforms.
Basics
There are a number of different transforms you can do on a layer, but the basic ones are
translate (move)
scale
rotate
To do transforms on a CALayer, you set the layer's transform property to a CATransform3D type. For example, to translate a layer, you would do something like this:
myLayer.transform = CATransform3DMakeTranslation(20, 30, 0)
The word Make is used in the name for creating the initial transform: CATransform3DMakeTranslation. Subsequent transforms that are applied omit the Make. See, for example, this rotation followed by a translation:
let rotation = CATransform3DMakeRotation(CGFloat.pi * 30.0 / 180.0, 20, 20, 0)
myLayer.transform = CATransform3DTranslate(rotation, 20, 30, 0)
Now that we have the basis of how to make a transform, let's look at some examples of how to do each one. First, though, I'll show how I set up the project in case you want to play around with it, too.
Setup
For the following examples I set up a Single View Application and added a UIView with a light blue background to the storyboard. I hooked up the view to the view controller with the following code:
import UIKit
class ViewController: UIViewController {
var myLayer = CATextLayer()
@IBOutlet weak var myView: UIView!
override func viewDidLoad() {
super.viewDidLoad()
// setup the sublayer
addSubLayer()
// do the transform
transformExample()
}
func addSubLayer() {
myLayer.frame = CGRect(x: 0, y: 0, width: 100, height: 40)
myLayer.backgroundColor = UIColor.blue.cgColor
myLayer.string = "Hello"
myView.layer.addSublayer(myLayer)
}
//******** Replace this function with the examples below ********
func transformExample() {
// add transform code here ...
}
}
There are many different kinds of CALayer, but I chose to use CATextLayer so that the transforms will be more clear visually.
Translate
The translation transform moves the layer. The basic syntax is
CATransform3DMakeTranslation(_ tx: CGFloat, _ ty: CGFloat, _ tz: CGFloat)
where tx is the change in the x coordinates, ty is the change in y, and tz is the change in z.
Example
In iOS the origin of the coordinate system is in the top left, so if we wanted to move the layer 90 points to the right and 50 points down, we would do the following:
myLayer.transform = CATransform3DMakeTranslation(90, 50, 0)
Notes
Remember that you can paste this into the transformExample() method in the project code above.
Since we are just going to deal with two dimensions here, tz is set to 0.
The red line in the image above goes from the center of the original location to the center of the new location. That's because transforms are done in relation to the anchor point and the anchor point by default is in the center of the layer.
Scale
The scale transform stretches or squishes the layer. The basic syntax is
CATransform3DMakeScale(_ sx: CGFloat, _ sy: CGFloat, _ sz: CGFloat)
where sx, sy, and sz are the numbers by which to scale (multiply) the x, y, and z coordinates respectively.
Example
If we wanted to halve the width and triple the height, we would do the following
myLayer.transform = CATransform3DMakeScale(0.5, 3.0, 1.0)
Notes
Since we are only working in two dimensions, we just multiply the z coordinates by 1.0 to leave them unaffected.
The red dot in the image above represents the anchor point. Notice how the scaling is done in relation to the anchor point. That is, everything is either stretched toward or away from the anchor point.
Rotate
The rotation transform rotates the layer around the anchor point (the center of the layer by default). The basic syntax is
CATransform3DMakeRotation(_ angle: CGFloat, _ x: CGFloat, _ y: CGFloat, _ z: CGFloat)
where angle is the angle in radians that the layer should be rotated and x, y, and z are the axes about which to rotate. Setting an axis to 0 cancels a rotation around that particular axis.
Example
If we wanted to rotate a layer clockwise 30 degrees, we would do the following:
let degrees = 30.0
let radians = CGFloat(degrees * Double.pi / 180)
myLayer.transform = CATransform3DMakeRotation(radians, 0.0, 0.0, 1.0)
Notes
Since we are working in two dimensions, we only want the xy plane to be rotated around the z axis. Thus we set x and y to 0.0 and set z to 1.0.
This rotated the layer in a clockwise direction. We could have rotated counterclockwise by setting z to -1.0.
The red dot shows where the anchor point is. The rotation is done around the anchor point.
Multiple transforms
In order to combine multiple transforms we could use concatenation like this
CATransform3DConcat(_ a: CATransform3D, _ b: CATransform3D)
However, we will just do one after another. The first transform will use the Make in its name. The following transforms will not use Make, but they will take the previous transform as a parameter.
Example
This time we combine all three of the previous transforms.
let degrees = 30.0
let radians = CGFloat(degrees * Double.pi / 180)
// translate
var transform = CATransform3DMakeTranslation(90, 50, 0)
// rotate
transform = CATransform3DRotate(transform, radians, 0.0, 0.0, 1.0)
// scale
transform = CATransform3DScale(transform, 0.5, 3.0, 1.0)
// apply the transforms
myLayer.transform = transform
Notes
The order that the transforms are done in matters.
Everything was done in relation to the anchor point (red dot).
A Note about Anchor Point and Position
We did all our transforms above without changing the anchor point. Sometimes it is necessary to change it, though, like if you want to rotate around some other point besides the center. However, this can be a little tricky.
The anchor point and the position both refer to the same point on the layer, but the anchor point is expressed in the unit coordinate space of the layer's bounds (default is (0.5, 0.5)) while the position is expressed in the superlayer's coordinate system. They can be set like this
myLayer.anchorPoint = CGPoint(x: 0.0, y: 1.0)
myLayer.position = CGPoint(x: 50, y: 50)
If you only set the anchor point without changing the position, then the frame changes so that the position will be in the right spot. Or more precisely, the frame is recalculated based on the new anchor point and old position. This usually gives unexpected results. The following two articles have an excellent discussion of this.
About the anchorPoint
Translate rotate translate?
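As a sketch of the usual workaround, a small helper (setAnchorPoint is a hypothetical name) can move the anchor point and compensate the position in one step, using the unit-coordinate relationship described above, so the layer does not visibly jump:

```swift
import UIKit

// Change the anchor point without shifting the layer on screen:
// offset the position by the anchor-point delta, scaled to the
// layer's bounds, so the frame stays where it was.
func setAnchorPoint(_ newAnchor: CGPoint, for layer: CALayer) {
    let oldAnchor = layer.anchorPoint
    let size = layer.bounds.size
    layer.position.x += (newAnchor.x - oldAnchor.x) * size.width
    layer.position.y += (newAnchor.y - oldAnchor.y) * size.height
    layer.anchorPoint = newAnchor
}
```

After calling this, a rotation transform will pivot around the new anchor point while the layer starts from the same visual spot.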
See also
Border, rounded corners, and shadow on a CALayer
Using a border with a Bezier path for a layer

Using drawImage to draw a filled rectangle

If I want to draw a filled rectangle, I could do
g.fillRect(0, 0, 100, 100)
But if I want to draw the same filled rectangle with drawImage, what do I do? I tried the below:
g.drawImage(null, 0,0,100, 100, Color.BLACK, null);
And it does not work. I am not going to use an image, but must the Image still be non-null? What would be the correct way to do this?
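As far as I know, Graphics.drawImage does nothing when the image argument is null, so a real Image is required (fillRect remains the simpler tool for this). A sketch of the closest drawImage equivalent is to fill a 1x1 BufferedImage with the colour and let drawImage stretch it over the target rectangle; the class and method names here are just illustration:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class FilledRectViaDrawImage {
    // Build a 1x1 image of the given color; drawImage can then
    // stretch it over any rectangle, mimicking fillRect.
    static BufferedImage solidPixel(Color color) {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, color.getRGB());
        return img;
    }

    public static void main(String[] args) {
        BufferedImage canvas = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = canvas.createGraphics();
        // Stretch the 1x1 red pixel over the 100x100 target rectangle.
        g.drawImage(solidPixel(Color.RED), 0, 0, 100, 100, null);
        g.dispose();
    }
}
```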
