CraftyJS w: and h: attributes don't work on images?

I'm using the CraftyJS library and tried the following to spawn a pixel image and scale it up:
Crafty.e("2D, DOM, Image")
  .attr({x: 0, y: 0, w: 400, h: 400})
  .image("assets/test.png");
However, it seems to stay at its original size. How can I scale it up programmatically?

You should set the width and height attributes after the image call, because .image() sets them to the image's own width and height by default.
Crafty.e("2D, DOM, Image")
  .image("assets/test.png")
  .attr({w: 400, h: 400});
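Putting the two together, a minimal sketch (same path and target size as the question): position the entity first, load the image, then override the size that .image() applied:
Crafty.e("2D, DOM, Image")
  .attr({x: 0, y: 0})        // position first
  .image("assets/test.png")  // this resets w and h to the image's natural size
  .attr({w: 400, h: 400});   // then scale up to the size you want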

Related

How to retrieve clipping region's dataURL?

I am trying to retrieve a clipping region's dataURL, but whenever rects lie outside of the region, the resulting image has a margin (I guess it is trying to include those rects as well).
Here is an example of the behavior: https://codesandbox.io/embed/clip-area-x24ci
(Click on the "export" button or move the black rectangle outside of the clipping region)
Is it possible to retrieve just the clipping region's content?
You can pass x, y, width, and height to the node.toDataURL() function:
const clipArea = {
  x: 100,
  y: 100,
  height: 200,
  width: 200
};
const dataURL = node.toDataURL({
  pixelRatio: 2,
  ...clipArea
});
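As a more self-contained sketch (not from the original answer), assuming node is the Konva.Stage holding your shapes, exporting only the clip area could look like this:
// Sketch only: reuses the clipArea object defined above; stage plays the role of node.
const stage = new Konva.Stage({ container: 'container', width: 400, height: 400 });
const layer = new Konva.Layer();
stage.add(layer);

// a rectangle that partially sticks out of the clip area
layer.add(new Konva.Rect({ x: 250, y: 250, width: 100, height: 100, fill: 'black' }));
layer.draw();

// export only the 200x200 clip region; anything outside it is left out of the image
const clippedDataURL = stage.toDataURL({ pixelRatio: 2, ...clipArea });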

Konva scale down group

I have an 850x600px Konva stage, and a picture created from lots of SVG images. I want to scale down the whole stage/layer/group so that it fits on a phone screen.
I've tried setting the scale to less than 1 (for example, 0.5) but I can't seem to get it to work.
Which would be the best approach to achieve this?
Config example:
var stage = new Konva.Stage({
  container: 'container',
  width: 850,
  height: 600,
  scale: 0.5
});
You can't set scale to a plain number like 0.5. You need to use an object:
scale: {x: 0.5, y: 0.5}
Or you can use this:
scaleX: 0.5,
scaleY: 0.5
Note: I am going to add a warning for invalid values for scale and some other properties in Konva v3.
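For reference, a minimal sketch of the corrected config from the question (same 850x600 stage, scaled down to half size):
var stage = new Konva.Stage({
  container: 'container',
  width: 850,
  height: 600,
  scale: { x: 0.5, y: 0.5 } // equivalently: scaleX: 0.5, scaleY: 0.5
});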

Two identical CGSizes and one appears smaller

I have an image view and a custom view that I want to appear at the same size on screen.
public override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    progress.frame.size = circle.frame.size
    print("Progress frame size w: \(progress.frame.size.width) h: \(progress.frame.size.height)")
    print("Circle frame size w: \(circle.frame.size.width) h: \(circle.frame.size.height)")
    // Progress frame size w: 300 h: 300
    // Circle frame size w: 300 h: 300
}
and yet they appear at different sizes in the Simulator.
I tried setting the bounds instead of frame. I also tried calculating the difference from the view to the circle and setting the progress frame according to that:
let widthDifference = UIScreen.main.bounds.width - circle.frame.size.width
let progressWidth = UIScreen.main.bounds.width - widthDifference
...
but no success.
I don't know what's happening, I am really new to iOS development and many things are weird to me.
The size of the progress bar was fine when the scene was wrapped in a NavigationController. When I removed the NavigationController the dimensions were totally screwed.
FYI: Circle is an image view. Progress is a third-party circular progress bar
Thank you for your time!
Your progress bar is indeed the same size as your circle, but that doesn't mean the bar will be drawn with the full 300 px diameter.
If you are curious, take a look at the lines in the library's source where the radius is created.
You can also try printing out the value of arcRadius; it should be smaller than 150 px (half of the 300 px frame).

Proper way to transform an image in the Google Slides API with dynamic height

I'm trying to translate an image of dynamic height and fixed width to an absolute position on a Google slide, maintaining a 1:1 aspect ratio.
For example, my original images could be (in pixels):
200x150
or
200x220
I want to move them so they are both offset by x: 100, y: 200, keep the original aspect ratio, and make sure the width is always fixed to the same size in px while the height is variable.
I am struggling to calculate the correct transform for this translation.
{
  'createImage': {
    objectId,
    url,
    elementProperties: {
      pageObjectId,
      size: {
        width: {
          magnitude: (7/9)*originalWidthPx, // pixel-to-PT conversion
          unit: 'PT',
        },
        height: {
          magnitude: (7/9)*originalHeightPx, // pixel-to-PT conversion
          unit: 'PT',
        },
      },
      transform: {
        scaleX: 1, // I believe I'll need to scale the image based on height, keeping the original aspect ratio
        scaleY: 1,
        translateX: 100, // I don't believe this needs to be dynamic based on w/h
        translateY: 200,
        unit: 'PT',
      },
    },
  },
}
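One way to reason about this (a sketch only, not from the original thread) is to keep scaleX/scaleY at 1 and derive the height in elementProperties.size from the fixed width and the original aspect ratio, so the transform only has to translate. The names fixedWidthPx, originalWidthPx and originalHeightPx below are placeholders, and the 7/9 factor is the question's own px-to-PT conversion:
// Sketch only: objectId, url and pageObjectId are defined as in the question,
// and fixedWidthPx / originalWidthPx / originalHeightPx are placeholders.
const PX_TO_PT = 7 / 9; // the question's pixel-to-PT conversion factor
const widthPt = fixedWidthPx * PX_TO_PT;
const heightPt = widthPt * (originalHeightPx / originalWidthPx); // keep the original aspect ratio

const request = {
  createImage: {
    objectId,
    url,
    elementProperties: {
      pageObjectId,
      size: {
        width: { magnitude: widthPt, unit: 'PT' },
        height: { magnitude: heightPt, unit: 'PT' },
      },
      transform: {
        scaleX: 1, // no extra scaling needed; the size above already matches the target
        scaleY: 1,
        translateX: 100, // fixed offset, independent of the image's dimensions
        translateY: 200,
        unit: 'PT',
      },
    },
  },
};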

Set stretching parameters for images programmatically in Swift for iOS

So if we want to stretch only parts of an image, be it a regular image or a background image, we use the following settings in the layout editor:
How do you set those programmatically?
I'm using Xcode 7.2.1
Specifying the cap insets of your image
You can set the stretch specifics by making use of the UIImage method resizableImageWithCapInsets(_:resizingMode:).
Declaration
func resizableImageWithCapInsets(capInsets: UIEdgeInsets, resizingMode: UIImageResizingMode) -> UIImage
Description
Creates and returns a new image object with the specified cap insets and options.
Return value
A new image object with the specified cap insets and resizing mode.
Parameters
capInsets: The values to use for the cap insets.
resizingMode: The mode with which the interior of the image is resized.
Example: custom stretching using the specified cap insets
As an example, let's try to programmatically stretch my (current) profile picture along its width, precisely at my right leg (the left side from the viewer's point of view), and leave the rest of the image with its original proportions. This could be comparable to stretching the width of a button texture to the size of its content.
First, let's load our original image foo.png as a UIImage object:
let foo = UIImage(named: "foo.png") // 328 x 328
Now, using resizableImageWithCapInsets(_:resizingMode:), we'll define another UIImage instance with the specified cap insets (to the middle of my right leg), and set the resizing mode to .Stretch:
/* middle of right leg at ~ |-> 0.48: LEG :0.52 <-| along
image width (for width normalized to 1.0) */
let fooWidth = foo?.size.width ?? 0
let leftCapInset = 0.48*fooWidth
let rightCapInset = fooWidth-leftCapInset // = 0.52*fooWidth
let bar = UIEdgeInsets(top: 0, left: leftCapInset, bottom: 0, right: rightCapInset)
let fooWithInsets = foo?.resizableImageWithCapInsets(bar, resizingMode: .Stretch) ?? UIImage()
Note that the 0.48 literal above corresponds to the value you enter for X in Interface Builder, as shown in the image in your question above (or as described in detail in the link provided by matt).
Moving on, we finally place the image with cap insets in a UIImageView, and let the width of this image view be larger than the width of the image:
/* put 'fooWithInsets' in an image view;
   by default, the frame will match the size of 'foo.png' */
let imageView = UIImageView(image: fooWithInsets)
/* expand frame width, 328 -> 600 */
imageView.frame = CGRect(x: 0, y: 0, width: 600, height: 328)
The resulting view stretches the original image as specified, yielding a disproportionately long leg.
Now, as long as the frame of the image view has 1:1 width:height proportions (328:328), stretching will be uniform, as if we were simply fitting the image to a smaller or larger frame. For any frame whose width is larger than its height (a:1 ratio, a > 1), the leg will begin to stretch disproportionately.
Extension to match the X, width, Y and height stretching properties in the Interface Builder
Finally, to actually answer your question (which we've only done implicitly above), we can make use of the detailed explanation of the X, width, Y and height Interface Builder stretching properties in the link provided by matt to construct our own UIImage extension using (apparently) the same properties, translated into cap insets in the extension:
extension UIImage {
    func resizableImageWithStretchingProperties(
        X X: CGFloat, width widthProportion: CGFloat,
        Y: CGFloat, height heightProportion: CGFloat) -> UIImage {

        let selfWidth = self.size.width
        let selfHeight = self.size.height

        // insets along width
        let leftCapInset = X*selfWidth*(1-widthProportion)
        let rightCapInset = (1-X)*selfWidth*(1-widthProportion)

        // insets along height
        let topCapInset = Y*selfHeight*(1-heightProportion)
        let bottomCapInset = (1-Y)*selfHeight*(1-heightProportion)

        return self.resizableImageWithCapInsets(
            UIEdgeInsets(top: topCapInset, left: leftCapInset,
                         bottom: bottomCapInset, right: rightCapInset),
            resizingMode: .Stretch)
    }
}
Using this extension, we can achieve the same horizontal stretching of foo.png as above, as follows:
let foo = UIImage(named: "foo.png") // 328 x 328
let fooWithInsets = foo?.resizableImageWithStretchingProperties(
X: 0.48, width: 0, Y: 0, height: 0) ?? UIImage()
let imageView = UIImageView(image: fooWithInsets)
imageView.frame = CGRect(x: 0, y: 0, width: 600, height: 328)
Extending our example: stretching width as well as height
Now, say we want to stretch my right leg as above (along width), but in addition also my hands and left leg along the height of the image. We control this by using the Y property in the extension above:
let foo = UIImage(named: "foo.png") // 328 x 328
let fooWithInsets = foo?.resizableImageWithStretchingProperties(
X: 0.48, width: 0, Y: 0.45, height: 0) ?? UIImage()
let imageView = UIImageView(image: fooWithInsets)
imageView.frame = CGRect(x: 0, y: 0, width: 500, height: 500)
Yielding the following stretched image:
The extension obviously allows for more versatile use of cap-inset stretching (versatility comparable to using Interface Builder), but note that the extension, in its current form, does not include any input validation, so it's up to the caller to use arguments in the correct ranges.
Finally, a relevant note for any operations involving images and their coordinates:
Note: Image coordinate axes x (width) and y (height) run as
x (width): left to right (as expected)
y (height): top to bottom (don't miss this!)
