I was curious to see the vertex positions of an MDLMesh created with boxWithExtent, and I have noticed some strange behaviour. When I print out the positions and normals for the first 3 vertices I get the values below. What I find weird is that for the 1st vertex we get two 0.0s at the end to fill the stride for simd_float4, but for every subsequent vertex this becomes 1.0 0.0 or 0.0 1.0. Does anyone have any idea why Metal does this instead of just filling the last two positions with 0.0s as for the first vertex? Thank you.

0.5  0.5  0.5  -1.0 -0.0 -0.0  0.0 0.0
0.5  0.5 -0.5  -1.0 -0.0 -0.0  1.0 0.0
0.5 -0.5  0.5  -1.0 -0.0 -0.0  0.0 1.0

This behaviour goes away if I use a custom vertexDescriptor, so I'm confused as to what's going on here.
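For what it's worth, the trailing pair of floats varies per vertex exactly the way a texture coordinate would (face corners mapping to (0,0), (1,0), (0,1)), so one possible reading, and it is only an assumption here, is that the default layout interleaves a 2-float texture coordinate after position and normal rather than padding out a simd_float4. A minimal sketch of that reading:

```python
# Sketch: one possible interpretation of the printed vertex data, ASSUMING
# the default MDLMesh layout interleaves position (3 floats), normal
# (3 floats) and a 2-float texture coordinate per vertex.
raw = [
    0.5,  0.5,  0.5, -1.0, -0.0, -0.0, 0.0, 0.0,
    0.5,  0.5, -0.5, -1.0, -0.0, -0.0, 1.0, 0.0,
    0.5, -0.5,  0.5, -1.0, -0.0, -0.0, 0.0, 1.0,
]
STRIDE = 8  # floats per vertex under this assumed layout

def split_vertex(floats, index, stride=STRIDE):
    """Slice one vertex out of an interleaved float buffer."""
    v = floats[index * stride:(index + 1) * stride]
    return {"position": v[0:3], "normal": v[3:6], "texcoord": v[6:8]}

for i in range(3):
    print(split_vertex(raw, i))
```

Under this reading the "weird" 1.0s are simply UVs, not stride padding, which would also explain why a custom vertexDescriptor (with different attributes) changes the output.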
Question
I'm working on porting from OpenGL (OGL) to MetalKit (MTK) on iOS. I'm failing to get identical display in the MetalKit version of the app. I modified the projection matrix to account for differences in Normalized Device Coordinates between the two frameworks, but don't know what else to change to get identical display. Any ideas what else needs to be changed to port from OpenGL to MetalKit?
Projection Matrix Changes so far...
I understand that the Normalized Device Coordinates (NDC) are different in OGL vs MTK:
OGL NDC: -1 < z < 1
MTK NDC: 0 < z < 1
I modified the projection matrix to address the NDC difference, as indicated here. Unfortunately, this modification to the projection matrix doesn't result in identical display to the old OGL code.
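To make the NDC remap concrete: a matrix that rescales clip-space z from [-1, 1] to [0, 1] can be left-multiplied onto a standard OpenGL frustum matrix. This is only a sketch verifying that remap, with a textbook GL frustum, not the app's actual code:

```python
# Sketch: verifying the OpenGL -> Metal NDC remap on a projection matrix.
# The frustum and helper names here are illustrative, not the app's code.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def frustum_gl(l, r, b, t, n, f):
    """Classic OpenGL frustum: eye-space z in [-n, -f] -> NDC z in [-1, 1]."""
    return [
        [2*n/(r-l), 0,         (r+l)/(r-l),   0],
        [0,         2*n/(t-b), (t+b)/(t-b),   0],
        [0,         0,         -(f+n)/(f-n),  -2*f*n/(f-n)],
        [0,         0,         -1,            0],
    ]

# Remap matrix: z' = 0.5*z + 0.5*w, so NDC z in [-1, 1] becomes [0, 1].
REMAP = [
    [1, 0, 0,   0],
    [0, 1, 0,   0],
    [0, 0, 0.5, 0.5],
    [0, 0, 0,   1],
]

def project(m, p):
    x = [sum(m[i][k] * p[k] for k in range(4)) for i in range(4)]
    return [c / x[3] for c in x[:3]]  # perspective divide

metal_proj = matmul(REMAP, frustum_gl(-1.777, 1.777, -1, 1, 1, 1000))
near_z = project(metal_proj, [0, 0, -1, 1])[2]     # point on the near plane
far_z = project(metal_proj, [0, 0, -1000, 1])[2]   # point on the far plane
print(near_z, far_z)  # expect approximately 0.0 and 1.0
```

The remapped matrix sends the near plane to depth 0 and the far plane to depth 1, which matches Metal's NDC convention.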
I'm struggling to even know what else to try.
Background
For reference, here's some misc background information:
The view matrix is very simple (identity matrix); i.e. camera is at (0, 0, 0) and looking toward (0, 0, -1)
In the legacy OpenGL code, I used GLKMatrix4MakeFrustum to produce the projection matrix, using the screen bounds for left, right, top, bottom, and near=1, far=1000
I stripped the scene down to bare bones while debugging and below are 2 images, the first from legacy OGL code and the second from MTK, both just showing the "ground" plane with a debug texture and a black background.
Any ideas about what else might need to change to get to identical display in MetalKit would be greatly appreciated.
Screenshots
OpenGL (legacy)
MetalKit
Edit 1
I tried to extract code relevant to calculation and use of the projection matrix:
float aspectRatio = 1.777; // iPhone 8 device
float top = 1;
float bottom = -1;
float left = -aspectRatio;
float right = aspectRatio;
float RmL = right - left;
float TmB = top - bottom;
float nearZ = 1;
float farZ = 1000;
GLKMatrix4 projMatrix = { 2 * nearZ / RmL, 0, 0, 0,
                          0, 2 * nearZ / TmB, 0, 0,
                          0, 0, -farZ / (farZ - nearZ), -1,
                          0, 0, -farZ * nearZ / (farZ - nearZ), 0 };
GLKMatrix4 viewMatrix = ...; // Identity matrix: camera at origin, looking at (0, 0, -1), yUp=(0, 1, 0);
GLKMatrix4 modelMatrix = ...; // Different for various models, but even when this is the identity matrix in old/new code the visual output is different
GLKMatrix4 mvpMatrix = GLKMatrix4Multiply(projMatrix, GLKMatrix4Multiply(viewMatrix, modelMatrix));
...
GLKMatrix4 x = mvpMatrix; // rename for brevity below
float mvpMatrixArray[16] = {x.m00, x.m01, x.m02, x.m03, x.m10, x.m11, x.m12, x.m13, x.m20, x.m21, x.m22, x.m23, x.m30, x.m31, x.m32, x.m33};
// making the MVP matrix available to the vertex shader
[renderCommandEncoder setVertexBytes:&mvpMatrixArray
                              length:16 * sizeof(float)
                             atIndex:1]; // vertex data is at index 0
[renderCommandEncoder setVertexBuffer:vertexBuffer
                               offset:0
                             atIndex:0];
...
[renderCommandEncoder drawPrimitives:MTLPrimitiveTypeTriangleStrip
                         vertexStart:0
                         vertexCount:4];
Sadly this issue ended up being due to a bug in the vertex shader that was pushing all geometry +1 on the Z axis, leading to the visual differences.
For any future OpenGL-to-Metal porters: the projection matrix changes above, accounting for the differences in normalized device coordinates, are enough.
Without seeing the code it's hard to say what the problem is, but one of the most common issues is a wrongly configured viewport:
// Set the region of the drawable to draw into.
[renderEncoder setViewport:(MTLViewport){0.0, 0.0, _viewportSize.x, _viewportSize.y, 0.0, 1.0 }];
The default values for the viewport are:
originX = 0.0
originY = 0.0
width = w
height = h
znear = 0.0
zfar = 1.0
*In Metal terms: znear = MinZ and zfar = MaxZ in the quote below.
MinZ and MaxZ indicate the depth-ranges into which the scene will be
rendered and are not used for clipping. Most applications will set
these members to 0.0 and 1.0 to enable the system to render to the
entire range of depth values in the depth buffer. In some cases, you
can achieve special effects by using other depth ranges. For instance,
to render a heads-up display in a game, you can set both values to 0.0
to force the system to render objects in a scene in the foreground, or
you might set them both to 1.0 to render an object that should always
be in the background.
Assuming the canvas size is (wx, wy), the exact NDC coordinates of the center of the lower-left pixel are (-1 + 1/wx, -1 + 1/wy).
But when the point size is bigger than 1, I haven't managed to find a formula.
In this fiddle, https://jsfiddle.net/3u26rpf0/14/, I draw some pixels of size=1 with the following formula for gl_Position:
float p1 = -1.0 + (2.0 * a_position.x + 1.0) / wx ;
float p2 = -1.0 + (2.0 * a_position.y + 1.0) / wy ;
gl_Position=vec4(p1,p2,0.0,1.0);
a_position.x goes from 0 to wx-1 .
a_position.y goes from 0 to wy-1 .
But if you change the value of size in the vertex shader (see the fiddle link), my formula doesn't work; there is some offset to apply.
From the OpenGL ES 2.0 spec section 3.3
Point rasterization produces a fragment for each framebuffer pixel whose center
lies inside a square centered at the point’s (xw, yw), with side length equal to
the point size
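Following the spec's rule, the square of side `size` is centered at the point's window position, so to cover exactly the pixels x .. x+size-1 the center must land at x + size/2, giving the generalized formula -1 + (2x + size)/w. This is a derivation from the quoted rule, not code tested against the fiddle:

```python
# Sketch: generalizing the size=1 formula using the spec's rule that a
# point covers the pixels whose centers fall inside a `size`-wide square
# centered at the point's window position. (Derivation, illustrative names.)

def point_ndc(x, w, size):
    """NDC x for a point meant to cover pixels x .. x + size - 1 (width w)."""
    # Window-space center of the covered pixel run is x + size/2;
    # converting window coordinate c to NDC is ndc = -1 + 2*c/w.
    return -1.0 + (2.0 * x + size) / w

def covered_pixels(ndc, w, size):
    """Which pixel centers fall inside the square (spec section 3.3)."""
    center = (ndc + 1.0) * w / 2.0  # back to window coordinates
    return [px for px in range(w)
            if center - size / 2.0 < px + 0.5 < center + size / 2.0]

# size=1 reduces to the original formula: -1 + (2x + 1)/w
print(covered_pixels(point_ndc(3, 16, 1), 16, 1))  # -> [3]
print(covered_pixels(point_ndc(3, 16, 4), 16, 4))  # -> [3, 4, 5, 6]
```

For size=1 this collapses back to the formula in the question, which suggests the "offset" for larger sizes is just size/2 instead of 1/2.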
I have a UISlider which goes from 0 to 275. I want to use the slider to scale a UIImageView.
When my slider value is 0 my UIImageView should have the original size (scaleX: 1, scaleY: 1).
When my slider value is 275 my UIImageView should scale to 0.85.
Can someone suggest a good formula to calculate the scale value in relationship with slider value?
Something like this:
let scale = slider.value >= 275 ? 0.85 : 1
imageView.transform = CGAffineTransform(scaleX: scale, scaleY: scale)
But I have some trouble making the scale dynamic based on slider value.
You could use a linear scale such as:
let scale = CGFloat(1.0 - slider.value / 275.0 * 0.15)
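The same linear map can be written generally; a sketch with illustrative names (not UIKit API), mapping slider value 0 to scale 1.0 and the maximum value to 0.85:

```python
# Sketch: linear interpolation from slider range [0, max_value] to scale
# range [1.0, min_scale]; names are illustrative, not UIKit API.

def scale_for_slider(value, max_value=275.0, min_scale=0.85):
    """Map 0 -> 1.0 and max_value -> min_scale, linearly in between."""
    t = value / max_value               # 0.0 ... 1.0 along the slider
    return 1.0 - t * (1.0 - min_scale)  # shrink by up to 15%

print(scale_for_slider(0))      # -> 1.0
print(scale_for_slider(275))    # -> 0.85
print(scale_for_slider(137.5))  # midpoint, approximately 0.925
```

Writing it this way makes the two endpoints explicit, so changing the slider range or the minimum scale only touches the parameters.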
I am working on a few experiments to learn gestures and animations in iOS. Creating a Tinder-like interface is one of them. I am following this guide: http://guti.in/articles/creating-tinder-like-animations/
I understand the changing of the position of the image, but don't understand the rotation. I think I've pinpointed my problem to not understanding CGAffineTransform. Particularly, the following code:
CGFloat rotationStrength = MIN(xDistance / 320, 1);
CGFloat rotationAngle = (CGFloat) (2 * M_PI * rotationStrength / 16);
CGFloat scaleStrength = 1 - fabsf(rotationStrength) / 4;
CGFloat scale = MAX(scaleStrength, 0.93);
CGAffineTransform transform = CGAffineTransformMakeRotation(rotationAngle);
CGAffineTransform scaleTransform = CGAffineTransformScale(transform, scale, scale);
self.draggableView.transform = scaleTransform;
Where are these values and calculations, such as 320, 1 - fabs(strength) / 4, 0.93, etc., coming from? How do they contribute to the eventual rotation?
On another note, Tinder seems to use a combination of swiping and panning. Do they add a swipe gesture to the image, or do they just take into account the velocity of the pan?
That code has a lot of magic constants, most of which are likely chosen because they resulted in something that "looked good". This can make it hard to follow. It's not so much about the actual transforms, but about the values used to create them.
Let's break it down, line by line, and see if that makes it clearer.
CGFloat rotationStrength = MIN(xDistance / 320, 1);
The value 320 is likely assumed to be the width of the device (it was the portrait width of all iPhones until the 6 and 6+ came out).
This means that xDistance / 320 is a factor of how far along the x axis (based on the name xDistance) the user has dragged. This will be 0.0 when the user hasn't dragged any distance and 1.0 when the user has dragged 320 points.
MIN(xDistance / 320, 1) takes the smaller of the dragged-distance factor and 1. This means that if the user drags further than 320 points (so that the distance factor would be larger than 1), the rotation strength never exceeds 1.0. It doesn't protect against negative values (if the user dragged to the left, xDistance would be negative, which is always smaller than 1). However, I'm not sure if the guide accounted for that (since 320 is the full width, not the half width).
So, the first line is a factor between 0 and 1 (assuming no negative values) of how much rotation should be applied.
CGFloat rotationAngle = (CGFloat) (2 * M_PI * rotationStrength / 16);
The next line calculates the actual angle of rotation. The angle is specified in radians. Since 2π is a full circle (360°), the rotation angle ranges from 0 to 1/16 of a full circle (22.5°). The value 1/16 is likely chosen because it "looked good".
The two lines together means that as the user drags further, the view rotates more.
CGFloat scaleStrength = 1 - fabsf(rotationStrength) / 4;
From the variable name, it looks like it calculates how much the view should scale, but it's actually calculating the scale factor the view should have. A scale of 1 means the "normal", unscaled size. When the rotation strength is 0 (when xDistance is 0), the scale strength will be 1 (unscaled). As the rotation strength increases, approaching 1, the scale strength approaches 0.75 (since that's 1 - 1/4).
fabsf is simply the floating point absolute value (fabsf(-0.3) is equal to 0.3)
CGFloat scale = MAX(scaleStrength, 0.93);
On the next line, the actual scale factor is calculated. It's simply the largest value of the scaleStrength and 0.93 (scaled down to 93%). The value 0.93 is completely arbitrary and is likely just what the author found appealing.
Since the scale strength ranges from 1 down to 0.75 and the scale factor is never smaller than 0.93, the scale factor only changes over roughly the first third of the xDistance (the scale strength drops below 0.93 once the rotation strength exceeds 0.28). All later scale-strength values are smaller than 0.93 and thus won't change the scale factor.
With the scaleFactor and rotationAngle calculated as above, the view is first rotated (by that angle) and then scaled down (by that scale factor).
Summary
So, in short: as the view is dragged to the right (as xDistance approaches 320 points), the view linearly rotates from 0° to 22.5° over the full drag, and scales from 100% down to 93% over roughly the first third of the drag (then stays at 93% for the remainder of the gesture).
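The breakdown above can be sketched in plain Python (constants copied from the question's Objective-C; the function name is illustrative):

```python
import math

# Sketch of the answer's breakdown: rotation and scale as functions of
# horizontal drag distance. Constants come from the question's code.

SCREEN_WIDTH = 320.0      # assumed portrait width of the device
MAX_ANGLE_FRACTION = 16.0 # max rotation is 1/16 of a full circle
MIN_SCALE = 0.93          # never scale below 93%

def card_transform(x_distance):
    rotation_strength = min(x_distance / SCREEN_WIDTH, 1.0)
    rotation_angle = 2.0 * math.pi * rotation_strength / MAX_ANGLE_FRACTION
    scale_strength = 1.0 - abs(rotation_strength) / 4.0
    scale = max(scale_strength, MIN_SCALE)
    return rotation_angle, scale

print(card_transform(0))    # -> (0.0, 1.0): no drag, no rotation, full size
print(card_transform(320))  # full drag: pi/8 radians (22.5 deg), scale clamped to 0.93
```

Plotting or tabulating this function makes the clamping visible: the angle grows over the whole drag while the scale bottoms out early.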
I have created a custom item similarity that simulates content-based similarity based on a product taxonomy. I have a user who likes only two items:
UserId ItemId Preference
7656361 1449133 1.00
7656361 18886199 8.00
My custom ItemSimilarity returns values in [-1, 1], where 1 should mean strong similarity and -1 strong dissimilarity. The two items the user liked do not have a lowest common ancestor in the taxonomy tree, so they don't have a value of 1 with anything; but they do have similarities of 0, 0.20, and 0.25 with some other items.
I produce recommendation in the following way:
ItemSimilarity similarity = new CustomItemSimilarity(...);
Recommender recommender = new GenericItemBasedRecommender(model, similarity);
List<RecommendedItem> recommendations = recommender.recommend(7656361, 10);
for (RecommendedItem recommendation : recommendations) {
    System.out.println(recommendation);
}
I am getting the following result:
RecommendedItem[item:899604, value:4.5]
RecommendedItem[item:1449081, value:4.5]
RecommendedItem[item:1449274, value:4.5]
RecommendedItem[item:1449259, value:4.5]
RecommendedItem[item:715796, value:4.5]
RecommendedItem[item:3255539, value:4.5]
RecommendedItem[item:333440, value:4.5]
RecommendedItem[item:1450204, value:4.5]
RecommendedItem[item:1209464, value:4.5]
RecommendedItem[item:1448829, value:4.5]
At first glance this looks fine: it produces recommendations. But I tried to print the values returned by the ItemSimilarity as it compares items pairwise, and I got this surprising result:
ItemID1 ItemID2 Similarity
899604 1449133 -1.0
899604 18886199 -1.0
1449081 1449133 -1.0
1449081 18886199 -1.0
1449274 1449133 -1.0
1449274 18886199 -1.0
1449259 1449133 -1.0
1449259 18886199 -1.0
715796 1449133 -1.0
715796 18886199 -1.0
3255539 1449133 -1.0
3255539 18886199 -1.0
333440 1449133 -1.0
333440 18886199 -1.0
1450204 1449133 -1.0
1450204 18886199 -1.0
1209464 1449133 -1.0
1209464 18886199 -1.0
1448829 1449133 -1.0
1448829 18886199 -1.0
228964 1449133 -1.0
228964 18886199 0.25
57648 1449133 -1.0
57648 18886199 0.0
899573 1449133 -1.0
899573 18886199 0.2
950062 1449133 -1.0
950062 18886199 0.25
5554642 1449133 -1.0
5554642 18886199 0.0
...
and there are a few more. They are not in the order produced; I just wanted to make a point. All the items that have a very strong dissimilarity of -1 are recommended, and those that have some similarity of 0.0, 0.2, or 0.25 are not recommended at all. How is this possible?
The itemSimilarity method of the ItemSimilarity interface has the following explanation:
Implementations of this interface define a notion of similarity
between two items. Implementations should return values in the range
-1.0 to 1.0, with 1.0 representing perfect similarity.
If I use similarity between [0,1] I get the following recommendations:
RecommendedItem[item:228964, value:8.0]
RecommendedItem[item:899573, value:8.0]
RecommendedItem[item:950062, value:8.0]
And the pairwise similarity is as follows (only for those three; for the others it is 0):
228964 1449133 0.0
228964 18886199 0.25
950062 1449133 0.0
950062 18886199 0.25
228964 1449133 0.0
228964 18886199 0.25
EDIT: I also printed out the most similar items to 1449133 and 18886199 with ((GenericItemBasedRecommender) delegate).mostSimilarItems(new long[]{1449133, 18886199}, 10)
and I got: [RecommendedItem[item:228964, value:0.125], RecommendedItem[item:950062, value:0.125], RecommendedItem[item:899573, value:0.1]]
For item 18886199 only, ((GenericItemBasedRecommender) delegate).mostSimilarItems(new long[]{18886199}, 10) gave [RecommendedItem[item:228964, value:0.25]]. For 1449133 alone there are no similar items.
I don't understand why it does not work with strong dissimilarity.
Another question is why all the predicted preference values are 8.0 or 4.5. I can see that only item 18886199 is similar to the recommended items, but is there a way to multiply the value of 8.0 by the similarity (0.25 in this case) and get a value of 2.0 instead of 8.0? I can't do this while computing the similarity because I don't know the user at that point, so I think it should be done during the recommendation phase. Isn't this how the recommender should work, or should I create a custom recommender and do the job in a custom way?
I would really appreciate if someone from the Mahout community can give me directions.
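For what it's worth, the observed values are consistent with a similarity-weighted average estimator, est = Σ(sim · pref) / Σ(sim). This is an assumption about GenericItemBasedRecommender's internals sketched here, not a quote of its source:

```python
# Sketch: a similarity-weighted average, which reproduces the observed
# estimates. This is an ASSUMPTION about how the recommender combines
# preferences, not Mahout's actual code.

def estimate(sim_pref_pairs):
    """sim_pref_pairs: (similarity to candidate item, user's preference)."""
    num = sum(sim * pref for sim, pref in sim_pref_pairs)
    den = sum(sim for sim, _ in sim_pref_pairs)
    return num / den if den != 0 else float("nan")

# Candidate with similarity -1 to both liked items (prefs 1.0 and 8.0):
print(estimate([(-1.0, 1.0), (-1.0, 8.0)]))  # (-1 - 8) / (-2) = 4.5
# Candidate similar only to 18886199 (sim 0.25, pref 8.0):
print(estimate([(0.0, 1.0), (0.25, 8.0)]))   # 2.0 / 0.25 = 8.0
```

Note how dividing by the signed sum of similarities would make two -1 similarities cancel into a positive 4.5, and would make a single 0.25 similarity drop out entirely (giving 8.0), which matches both observations; only the actual Mahout source can confirm whether this is what happens.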