Stretch an object - Lua

I have an image of a string of size 12×30. I want to create an animation that gives the feel of a string stretching. I did it by scaling the image, but the problem I am facing is that collision does not happen with the scaled image. It occurs only in the 12×30 region, which is the size of the original image. I want the collision to happen throughout the length of the string. Is there a better way than scaling to do this? Thanks.
-- Create the string image and position it relative to the frog sprite
image_rect = display.newImage("string.png")
image_rect.x = frog_jump_SheetSet.x + 10
image_rect.y = frog_jump_SheetSet.y + 10
physics.addBody(image_rect)
image_rect.yScale = 0.1
localGroup:insert(image_rect)
-- Listen for collisions on the string
image_rect.collision = onStretch
image_rect:addEventListener("collision", image_rect)
-- Stretch the string out, then snap it back
tr1 = tnt:newTransition(image_rect, {time = 50, yScale = string_length})
tr2 = tnt:newTransition(image_rect, {delay = 100, time = 50, yScale = 0.1})

The Corona physics engine does not support scaling bodies directly; the only thing you can do is add rectangles to the body, or remove them, as needed to fit the new shape.
In general, you should avoid scaling or rotating the image itself when using physics. Rotate through the physics API instead (e.g. by applying torque); for scaling there is no physics equivalent, so you have to rebuild the body at the new size.
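To make the collision area follow the stretch, the collision shape has to be recomputed from the scaled dimensions rather than from the original 12×30 image. Here is a minimal, engine-independent sketch of that idea in Python (the helper names are hypothetical, not Corona APIs):

def collision_rect(x, y, width, height, y_scale):
    """Axis-aligned bounds of a sprite after vertical scaling about its center."""
    scaled_h = height * y_scale
    return (x - width / 2, y - scaled_h / 2, x + width / 2, y + scaled_h / 2)

def hits(rect, px, py):
    left, top, right, bottom = rect
    return left <= px <= right and top <= py <= bottom

# A 12x30 string sprite at (100, 100), stretched to 5x its height:
stretched = collision_rect(100, 100, 12, 30, 5.0)
print(hits(stretched, 100, 170))                              # True: inside the stretched bounds
print(hits(collision_rect(100, 100, 12, 30, 1.0), 100, 170))  # False: outside the original body

In Corona terms that means removing the old body (physics.removeBody) and calling physics.addBody again with a shape matching the current yScale whenever the stretch changes significantly.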

Related

How to rotate a non-squared image in frequency domain

I want to rotate an image in the frequency domain. Inspired by the answers to Image rotation and scaling in the frequency domain?, I managed to rotate square images. (See the following Python script using OpenCV.)
import cv2
import numpy as np
from numpy.fft import fftshift, ifftshift

angle = 30.0  # rotation angle in degrees (not defined in the original snippet)

# Read as grayscale: cv2.dft expects a single-channel (or two-channel) input
M = cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE)
M = np.float32(M)
# Hanning window to reduce spectral leakage at the borders
hanning = cv2.createHanningWindow((M.shape[1], M.shape[0]), cv2.CV_32F)
M = hanning * M
sM = fftshift(M)
rotation_center = (M.shape[1] / 2, M.shape[0] / 2)
rot_matrix = cv2.getRotationMatrix2D(rotation_center, angle, 1.0)
FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, rot_matrix, (FsM.shape[1], FsM.shape[0]),
                      flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))
This works fine with square images. (Better results could be achieved by padding the image.)
However, when using only a non-square portion of the image, the rotation in the frequency domain shows a kind of shearing effect.
Any idea on how to achieve this? Obviously I could pad the image to make it square, but the final purpose of all this is to rotate FFTs as fast as possible for an iterative image registration algorithm, and padding would slightly slow down the algorithm.
Following the suggestion of @CrisLuengo, I found the affine transform needed to avoid padding the image. It will depend on the image size and the application, but in my case avoiding the padding is worthwhile.
The modified script now looks like this:
# rot_matrix = cv2.getRotationMatrix2D(rotation_center, angle, 1.0)
# Aspect-ratio factors for the non-square image
kx = 1.0
ky = 1.0
if M.shape[0] > M.shape[1]:
    kx = float(M.shape[0]) / M.shape[1]
else:
    ky = float(M.shape[1]) / M.shape[0]
# Rotation corrected for the aspect ratio.
# Note: np.cos/np.sin expect radians, unlike cv2.getRotationMatrix2D, which takes degrees.
affine_transform = np.zeros([2, 3])
affine_transform[0, 0] = np.cos(angle)
affine_transform[0, 1] = np.sin(angle) * ky / kx
affine_transform[0, 2] = (1 - np.cos(angle)) * rotation_center[0] - ky / kx * np.sin(angle) * rotation_center[1]
affine_transform[1, 0] = -np.sin(angle) * kx / ky
affine_transform[1, 1] = np.cos(angle)
affine_transform[1, 2] = kx / ky * np.sin(angle) * rotation_center[0] + (1 - np.cos(angle)) * rotation_center[1]
FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, affine_transform, (FsM.shape[1], FsM.shape[0]),
                      flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))

Box2D: How to use b2ChainShape for a tile based map with squares

I'm fighting with the so-called ghost collisions on a simple tile-based map with a circle as the player character.
When I apply an impulse to the circle, it bounces correctly at first, but sooner or later it bounces at a wrong angle.
Reading up on the internet, I learned this is a known issue in Box2D (I use iOS Swift with the Box2D port for Swift).
Using b2ChainShape alone does not help, but it looks like I misunderstood it: I also need to use the prevVertex and nextVertex properties to set up the ghost vertices.
But I'm confused. I have a simple map made up of boxes (simple squares), all placed next to each other to form a closed room, with my circle inside; applying an impulse to it exposes the issue.
Now, WHERE do I place those ghost vertices for each square/box in order to solve this? Do I place a vertex anywhere near the first and last vertices of the chain shape, or does it need to be one of the vertices of the box adjacent to the current one? I don't understand, and Box2D's manual does not explain where these ghost-vertex coordinates come from.
Below you can see an image describing the problem.
Some code showing the physics parts for the walls and the circle:
First the wall part:
let bodyDef = b2BodyDef()
bodyDef.position = self.ptm_vec(node.position+self.offset)
let w = self.ptm(Constants.Config.wallsize)
let square = b2ChainShape()
var chains = [b2Vec2]()
chains.append(b2Vec2(-w/2,-w/2))
chains.append(b2Vec2(-w/2,w/2))
chains.append(b2Vec2(w/2,w/2))
chains.append(b2Vec2(w/2,-w/2))
square.createLoop(vertices: chains)
let fixtureDef = b2FixtureDef()
fixtureDef.shape = square
fixtureDef.filter.categoryBits = Constants.Config.PhysicsCategory.Wall
fixtureDef.filter.maskBits = Constants.Config.PhysicsCategory.Player
let wallBody = self.world.createBody(bodyDef)
wallBody.createFixture(fixtureDef)
The circle part:
let bodyDef = b2BodyDef()
bodyDef.type = b2BodyType.dynamicBody
bodyDef.position = self.ptm_vec(node.position+self.offset)
let circle = b2CircleShape()
circle.radius = self.ptm(Constants.Config.playersize)
let fixtureDef = b2FixtureDef()
fixtureDef.shape = circle
fixtureDef.density = 0.3
fixtureDef.friction = 0
fixtureDef.restitution = 1.0
fixtureDef.filter.categoryBits = Constants.Config.PhysicsCategory.Player
fixtureDef.filter.maskBits = Constants.Config.PhysicsCategory.Wall
let ballBody = self.world.createBody(bodyDef)
ballBody.linearDamping = 0
ballBody.angularDamping = 0
ballBody.createFixture(fixtureDef)
I'm not sure there's a simple solution in the case where each tile can have different physics.
If your walls are all horizontal and/or vertical, you could write a class that takes a row of boxes and creates a single edge or rectangle body. On collision, calculate which box should interact with the colliding object (a simple a < x < b test) and apply the physics appropriately, manually calling the OnCollision method that you would otherwise specify as the callback for each individual box; see the sketch after this answer.
Alternatively, to avoid the trouble of manually testing intersection with different boxes, you could still merge all common straight edge boxes into one edge body for accurate reflections. However, you would still retain the bodies for the individual boxes. Extend the boxes so that they overlap the edge.
Now here's the trick: all box collision handlers return false, but they toggle flags on the colliding object (turning flags on OnCollision, and off OnSeparation). The OnCollision method for the edge body then processes the collision based on which flags are set.
Just make sure that the colliding body always passes through a box before it can touch an edge. That should be straightforward.
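As a rough illustration of the a < x < b dispatch described above, here is a small Python sketch (the names and the merged-edge layout are hypothetical): given the contact point's x coordinate on a merged horizontal wall, recover the index of the tile whose handler should run.

def tile_index_for_contact(contact_x, start_x, tile_w, num_tiles):
    """Return the index of the tile containing contact_x, or None if outside the wall."""
    if contact_x < start_x or contact_x > start_x + num_tiles * tile_w:
        return None
    # Clamp so a contact exactly on the far end maps to the last tile
    return min(int((contact_x - start_x) // tile_w), num_tiles - 1)

# Example: a wall merged from 10 one-meter tiles starting at x = 3.0;
# a contact at x = 7.25 belongs to tile 4 (spanning [7.0, 8.0))
print(tile_index_for_contact(7.25, 3.0, 1.0, 10))  # -> 4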

SKEffectNode - CIFilter Blur Size Limit - Big Black Box

I am trying to blur multiple SKNode objects. I do this by having a parent SKEffectNode with a CIFilter set to @"CIGaussianBlur". Like so:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    blurNode.shouldRasterize = YES;
    [blurNode setShouldEnableEffects:NO];
    [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
                                   keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
This works fine for a bunch of nodes currently onscreen. But when I space these nodes far away from each other (about 3000 pixels), the blurring no longer happens and I get a big black box. This happens regardless of whether the SKNodes I'm blurring are SKShapeNodes or SKSpriteNodes. Here's a sample project with this issue: Sample Project. (By the way, thanks to BobMoff for the initial version found here.)
Here's happy blur (when nodes are less than 3000 pixels away from each other):
Sad blur (when nodes are more than 3000 pixels away from each other):
UPDATE
This behavior occurs whenever an SKEffectNode is the parent. It doesn't matter whether it's enabling effects, blurring, etc. If the parent node is a plain SKNode, it's fine. That is, even if the parent blur node is created as below, you still get the blackness:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    // blurNode.shouldRasterize = YES;
    // [blurNode setShouldEnableEffects:NO];
    // [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
    //                                keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
I had a similar problem, with a very wide, panning scene that I wanted to blur.
To get the blur effect to work, I removed any nodes that were sticking out too far past the edges of the scene:
// Property declarations, elsewhere in the class:
var blurNode: SKEffectNode
var mainScene: SKScene
var exParents: [SKNode : SKNode] = [:]

/**
 * Remove outlying nodes from the scene and activate the SKEffectNode
 */
func blurScene() {
    let FILTER_MARGIN: CGFloat = 100
    let widthMax: CGFloat = mainScene.size.width + FILTER_MARGIN
    let heightMax: CGFloat = mainScene.size.height + FILTER_MARGIN

    // Recursively iterate through all blurNode's children
    blurNode.enumerateChildNodesWithName(".//*", usingBlock: {
        [unowned self]
        node, stop in

        if node.parent != nil && node.scene != nil { // Ignore nodes we already removed
            if let sprite = node as? SKSpriteNode {
                // Calculate sprite node position in scene coordinates
                let sceneOrig = sprite.scene!.convertPoint(sprite.position, fromNode: sprite.parent!)
                // Find left, right, bottom and top edges of sprite
                let l = sceneOrig.x - sprite.size.width * sprite.anchorPoint.x
                let r = l + sprite.size.width
                let b = sceneOrig.y - sprite.size.height * sprite.anchorPoint.y
                let t = b + sprite.size.height

                if l < -FILTER_MARGIN || r > widthMax || b < -FILTER_MARGIN || t > heightMax {
                    self.exParents[sprite] = sprite.parent!
                    sprite.removeFromParent()
                }
            }
        }
    })
    blurNode.shouldEnableEffects = true
}

/**
 * Disable blur and reparent nodes we removed earlier
 */
func removeBlur() {
    self.blurNode.shouldEnableEffects = false
    for (kid, parent) in exParents {
        parent.addChild(kid)
    }
    exParents = [:]
}
NOTES:
This does remove content from your effect node, so extremely wide nodes won't show up in the final result:
You can see the mountain highlighted in red stuck out too far and was removed from the resulting blur.
This code only considers SKSpriteNodes. Empty SKNodes don't seem to break the effect node, but if you're using other visible nodes like SKShapeNodes or SKLabelNodes, you'll have to modify this code to include them.
If you have ignoreSiblingOrder = false, this code might mess up your z-ordering since you can't guarantee what order the nodes are added back to the scene.
Stuff I tried that didn't work
Simply saying node.hidden = true instead of using removeFromParent() doesn't work. That would be WAY too easy ;)
Using an SKCropNode to crop out outlying content didn't work for me. I tried having the SKEffectNode parent the SKCropNode and the other way around, but the black square appeared no matter how small I made the cropped area. This might still be worth looking into if you're desperate for a cleaner solution.
As noted here, SKScenes are secretly SKEffectNodes and you can set their filter just like our blurNode above. SKScenes don't show a black screen when their content is too big. Unfortunately, they seem to just silently disable the filter instead. Again, I might have missed something, so you could explore this option further if you're trying to apply an effect across the entire scene.
Alternate Solutions
You can capture an image of the whole screen and apply a filter to that, as suggested here. I ended up going with an even simpler solution; I took a generic screenshot of the stuff I wanted to blur, then applied a very heavy blur so you can't see the precise details. I used that as the blurred background and you can hardly tell it's not the real thing ;) This also saves a healthy chunk of memory and avoids a small UI hiccup.
Musings
This is a pretty nasty bug, and I hope Apple comes up with a solution soon. You can click this cute picture of a camera to get a GPU trace and some insight on what's happening:
The device seems to be discarding the framebuffer for the effect node because it takes up too much memory. This is affirmed by the fact that when there's more memory pressure on the device, it's easier to get the 'black square' on smaller content in the SKEffectNode.
I used a method that worked for my game, but it requires the blurred area to be static, with no movement.
On iOS 10, using Swift 3, I used SKSpriteNode, SKView, SKEffectNode, and CIFilter. I created a sprite from a texture returned by the SKView method texture(from:), passing the current scene as the parameter (since it inherits from SKNode). Essentially I took a "screenshot" of the scene and created a sprite from it. I then put that sprite in an SKEffectNode with a blur filter (with shouldRasterize set to true for better performance, since I only needed to blur once). Finally I added the new sprite to the scene; from there you can add more sprites and place them above the blurred node.
let blurFilter = CIFilter(name: "CIGaussianBlur")!
let blurAmount = 15.0
blurFilter.setValue(blurAmount, forKey: kCIInputRadiusKey)
let blurEffect = SKEffectNode()
blurEffect.shouldRasterize = true
let screenshotNode = SKSpriteNode(texture: gameScene.view!.texture(from: gameScene))
blurEffect.addChild(screenshotNode)
blurEffect.filter = blurFilter
gameScene.addChild(blurEffect)
Possible workaround for the bug:
Use a camera and zoom WAY out, so you can see most of your background, and take a screenshot-style rendering of that view. Crop it to your needs, blur it, and rasterise the result.
Then scale this image back up, slice it up if need be, and place it accordingly; a rough sketch of this follows.
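For the offline part of this (blurring a captured image rather than live nodes), here is a sketch with Pillow, assuming a hypothetical capture saved as scene.png: blur at a reduced size, then scale back up, which keeps the blur cheap and the upscale hides the lost detail.

from PIL import Image, ImageFilter

shot = Image.open("scene.png")
# Blur a quarter-size copy: much cheaper, and the detail loss is invisible after blurring
small = shot.resize((shot.width // 4, shot.height // 4))
blurred = small.filter(ImageFilter.GaussianBlur(radius=6))
# Scale back to the original size for use as the background
background = blurred.resize(shot.size)
background.save("scene_blurred.png")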
SKEffectNode renders into a texture. On most iOS systems the maximum size for a texture is 2048x2048. If an SKEffectNode tries to render content larger than that, it will still use a 2048x2048 texture, and anything outside it simply will not appear. It won't give you any error or warning; it just does this silently.
And no, there is no way to tell SKEffectNode to use a texture of a specific size and pan and clamp the content into it. It always uses a texture that covers all of its child nodes, and if that texture would be too large, it silently falls back to 2048x2048.
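You can check for this condition yourself before enabling effects: union the frames of the effect node's children and compare against 2048. A small sketch of that bookkeeping in plain Python (the frame tuples here are made-up placeholders):

MAX_TEXTURE = 2048

def union_bounds(frames):
    """frames: list of (min_x, min_y, max_x, max_y) tuples in scene coordinates."""
    xs0, ys0, xs1, ys1 = zip(*frames)
    return min(xs0), min(ys0), max(xs1), max(ys1)

frames = [(0, 0, 500, 500), (3000, 0, 3200, 200)]  # nodes ~3000 px apart
x0, y0, x1, y1 = union_bounds(frames)
if x1 - x0 > MAX_TEXTURE or y1 - y0 > MAX_TEXTURE:
    print("content exceeds the effect node's texture limit; expect clipping")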

multiple bone rotation mystery

I have been working with quaternions and the XNA skinned model example for weeks now. I receive two sets of quaternions from some open-source sensor boards that you can buy on the net these days, and I have written code so that I can rotate limbs with them. Now my problem is the following. I am using the upper right arm and lower right arm in my example, and I can rotate them separately. My initial position is the one depicted below, which is perfect.
http://i.imgur.com/c7qei.png "initial position"
Now when I want to rotate my right arm forward, I should end up in the final position shown below on the right in the figure. But somehow the result is the position on the left, while my real "physical" arm points forward.
http://i.imgur.com/tXCp6.png "ideal final position(right), real wrong position(left)"
Somehow the lower arm does not compensate for the rotation of the upper arm. I am sure I am missing one small step. Below is the crucial part of the code I am using:
protected override void Update(GameTime gameTime)
{
    HandleInput();
    UpdateCamera(gameTime);

    // Read gamepad inputs.
    float initposition = currentGamePadState.ThumbSticks.Right.X;
    float armRotation = Math.Max(currentGamePadState.ThumbSticks.Right.Y, 0);

    // These quaternions are received over Bluetooth.
    Upper.Z = Fq1;
    Upper.Y = -Fq2;
    Upper.X = -Fq3; // set 1 quaternions
    Upper.W = Fq4;
    //***************************
    forearm.Z = Uq1;
    forearm.Y = -Uq2;
    forearm.X = -Uq3;
    forearm.W = Uq4; // set 2 quaternions

    // Set initial position.
    if (initialpos == true)
    {
        initposition = 0.9f;
        R_forTransform = Matrix.CreateRotationY(initposition);
        R_forarminderinit = skinningData.BoneIndices["R_UpperArm"];
        L_forTransform = Matrix.CreateRotationY(-initposition);
        L_forTransform = Matrix.CreateRotationX(-initposition); // note: overwrites the previous assignment
        L_forTransform = Matrix.CreateRotationZ(-initposition); // note: overwrites again; only the Z rotation survives
        L_forarminderinit = skinningData.BoneIndices["L_UpperArm"];
    }

    // Create rotation matrices for the upper and lower arm bones.
    Matrix upperarmTransform = Matrix.CreateFromQuaternion(Upper);
    Matrix forearmTransform = Matrix.CreateFromQuaternion(forearm);

    animationPlayer.GetBoneTransforms().CopyTo(boneTransforms, 0);
    if (initialpos == true)
    {
        boneTransforms[R_forarminderinit] = R_forTransform * boneTransforms[R_forarminderinit];
        boneTransforms[L_forarminderinit] = L_forTransform * boneTransforms[L_forarminderinit];
    }
    int forearmindex = skinningData.BoneIndices["R_Forearm"];
    int upperarmindex = skinningData.BoneIndices["R_UpperArm"];
    boneTransforms[upperarmindex] = upperarmTransform * boneTransforms[upperarmindex];
    boneTransforms[forearmindex] = (forearmTransform) * boneTransforms[forearmindex];

    animationPlayer.UpdateWorldTransforms(Matrix.Identity, boneTransforms);
    animationPlayer.UpdateSkinTransforms();
    UpdateBoundingSpheres();
    base.Update(gameTime);
}
I would like to ask you to help me solve this mystery. I hope I have been as clear as possible in describing my problem, and I would like to thank you in advance for your effort.
Yours,
Dave
It looks to me like you have some mixed-up reference frames. Here's what I think I'm seeing:
Your external sensors report their orientation relative to the world. Your rendering code, on the other hand, deals with the lower arm in the upper arm's reference frame.
If we assume the initial orientations are q_u = [0,0,0,1] and q_l = [0,0,0,1], then when you rotate your arm to point forward, the new orientations are both [0, 0.707, 0, 0.707], or something like that, because both arm segments have experienced a rotation of π/2 relative to the world.
When you render the arm, you rotate the entire arm (not just the upper arm) by q_u. This makes sense, since you want the elbow to stay connected. But then you rotate the lower arm by q_l, and it rotates twice as far as it should, because q_l also contains the shoulder's rotation. If you were to hold your arm straight but turn your body around, you would see the same thing happen: the upper arm would rotate by the amount of body rotation, and the lower arm by that much again.
Perhaps the easiest way to deal with this is to remove q_u from q_l. If q_k is the rotation of the lower arm relative to the upper arm, then q_k = q_u' * q_l, where q_u' is the inverse quaternion (for a unit quaternion, the conjugate: negate the x, y, and z components; negating w alone also works, since q and -q represent the same rotation). A small sketch of this follows.
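Here is a minimal NumPy sketch of that correction, assuming [x, y, z, w] component order as in the question (the sensor values below are made up):

import numpy as np

def q_mul(a, b):
    # Hamilton product of quaternions stored as [x, y, z, w]
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return np.array([
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
        aw * bw - ax * bx - ay * by - az * bz,
    ])

def q_conj(q):
    # Inverse of a unit quaternion
    x, y, z, w = q
    return np.array([-x, -y, -z, w])

# World-frame orientations from the sensors (hypothetical values):
q_u = np.array([0.0, 0.7071, 0.0, 0.7071])  # upper arm
q_l = np.array([0.0, 0.7071, 0.0, 0.7071])  # lower arm

# Lower arm relative to the upper arm: q_k = q_u' * q_l
q_k = q_mul(q_conj(q_u), q_l)
print(q_k)  # ~[0, 0, 0, 1]: no rotation relative to the upper arm, as expected

Feeding q_k (instead of the raw q_l) into the forearm's bone transform keeps the elbow from double-counting the shoulder's rotation.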

How to embed a watermark in an image using edges in MATLAB?

For a school project I would like to do the following steps to obtain a watermarked image in MATLAB:
extract the edges from an image
insert a mark on these edges
reconstruct the image
extract the mark
Could someone give me a link to get a good idea of how to do this, or help me do it?
Thank you in advance.
You want to add a watermark to an image? Why not just overlay the whole thing.
if you have an image
img = imread('myimage.jpg')
wm = imread('watermark.jpg')
You can just resize the watermark to the size of the image
wm_rs = imresize(wm, [size(img,1) size(img,2)], 'lanczos2');
img_wm = img;
img_wm(wm_rs ~= 0) = wm_rs(wm_rs ~= 0); % set non-black pixels to the watermark (modify this slightly for color images)
If you want to put it on the edges of the image, you can extract them like this
edges = edge(rgb2gray(img),'canny')
Then you can set the pixels where the edges exist to be watermark pixels
img_wm = img;
img_wm(edges ~= 0) = wm_rs(edges~=0);
Instead of direct assignment, you can play around with mixing the img and wm_rs pixel values if you want transparency; see the sketch below.
You'll probably have to adjust some of this for color images, but most of it should be the same.
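For the transparency idea just mentioned, here is a small NumPy sketch of the blend (the MATLAB version is the same arithmetic; the array names follow the answer, and the data is a random placeholder):

import numpy as np

# Placeholder grayscale data standing in for img, wm_rs and the Canny edge mask
img = np.random.randint(0, 256, (256, 256)).astype(np.float64)
wm_rs = np.random.randint(0, 256, (256, 256)).astype(np.float64)
edges = np.zeros((256, 256), dtype=bool)
edges[100:150, 100:150] = True

alpha = 0.5  # 0 = keep the image, 1 = pure watermark on the edge pixels
img_wm = img.copy()
img_wm[edges] = (1 - alpha) * img[edges] + alpha * wm_rs[edges]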
Here is a nice and simple example of how you can embed watermarks using MATLAB (in the spatial domain): http://imageprocessingblog.com/digital-watermarking/
See the example below (R2017b or later release):
% your params
img = imread('printedtext.png');
Transparency = 0.6;
fontColor = [1,1,1]; % RGB,range [0,1]
position = [700,200];
%% add watermark
mask = zeros(size(img),'like',img);
outimg = insertText(mask,position,'china', ...
'BoxOpacity',0,...
'FontSize',200,...
'TextColor', 'white');
bwMask = imbinarize(rgb2gray(outimg));
finalImg = labeloverlay(img,bwMask,...
'Transparency',Transparency,...
'Colormap',fontColor);
imshow(finalImg)
