My graphics are such that, for my view content to align with them properly, the view needs to rotate in depth, i.e. about the z-axis: maybe 5-10 degrees about z, and negative 5-10 degrees about y.
Currently I use this (argView is my image view):
[argView.layer setTransform:CATransform3DMakeRotation(0.5, 1, -1, 0)];
However, this gives me a very skewed image; definitely not what I want, but I don't know what values I should be supplying.
I began by reading the Apple docs, but there are so many functions that I am getting confused.
If someone has a good guide that makes this easy, great.
If someone can point to where this exact thing has been done, even better.
Related
I need to be able to interact with a representation of a cylinder that has many different parts in it. When the user taps on one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I need to solve the problem; the 3D is NOT required (it would be really cool, though). A representation that meets the functional needs will suffice.
The info about the parts to make the drawing comes from an API (size, position, etc)
I don't need it to be realistic, really. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made out of interactable small rectangles.
So, as I mentioned, I think there are (as I see it) two opposite approaches: Realistic or Simplified
Is there a way to achieve a nice solution in the middle? What libraries, components, frameworks that I should look into?
My research has led me to SceneKit, but I still don't know if I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. This effect can be achieved with standard UIKit and UIView and a little trigonometry. You can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula, but involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection meaning the cylinder doesn't appear smaller as the depth extends into the background (but you could adapt to perspective if needed). You could draw this with CoreGraphics in a UIView drawRect.
A square slice represents an angle piece of the circle, offset by an amount smaller than the length of the cylinder, but in the same direction, as in the following diagram (sorry for imprecise drawing).
This square slice you are interested in is the area outlined in solid red, outside the radius of the first circle, and inside the radius of the imaginary second circle (which is just offset from the first circle by whatever length you want the slice).
To draw this area you simply need to draw a path of the outline of each arc and connect the endpoints.
To check if a touch is inside one of these square slices:
Check if the touch point's angle from the origin lies between the slice's two edge angles.
Check if the touch point is outside the radius of the inside circle.
Check if the touch point is inside the radius of the outside circle. (Note what this means if the circles are more than a radius apart.)
To find a point to display the popover you could average the end points on the slice or find the middle angle between the two edges and offset by half the distance.
Theoretically, doing this in Scene Kit with either SpriteKit or UIKit Popovers is ideal.
However, Scene Kit (and Sprite Kit) seem to be in a state of flux wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Going from a relatively stable and performant Sprite Kit in iOS 8.4 to a lot of lost performance in iOS 9 seems common. Scene Kit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. Somehow these Material IDs are called "elements" in SceneKit. I haven't been able to find much more about this.
It should be possible to detect the "element" that's underneath a touch on an object, and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate that it's the currently selected one.
If you want a smooth, well-rounded cylinder as per your example, start with a cylinder made of only enough segments to describe/define the material IDs you need for your "rectangular" sections to be touched.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry in each quadrant of unique material ID should be responsive, regardless of how you add this extra detail to smooth the presentation of the cylinder.
Idea for the "Simplified" version:
If this representation is okay, you can use a UICollectionView.
Each cell can have a defined size thanks to
collectionView:layout:sizeForItemAtIndexPath:
Then each cell of the collection could be a small rectangle representing a touchable part of the cylinder.
And you can use
collectionView:didSelectItemAtIndexPath:
to get the touch.
This will help you to display the popover at the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
Yes, SceneKit.
When the user performs a touch event, you already know the 2D coordinate on screen, so your only decision is whether or not to show a popover; that is true even before any 3D model exists.
First, we can logically split the requirement into two pieces: determining which segment was touched, and showing the right "color" on each segment.
If I understand you correctly, the 3D model's role is to determine which piece of data to show. In that case, SCNView's hit-test method will do most of the work for you. Perform a hit test, take the hit node and the hit's local 3D coordinate on that node, and you can then calculate which segment this touch hit and make your decision.
Now, how to draw the surface of the cylinder would be the only question left, right? There are various ways to do this: for example, paint each image you need programmatically and attach it to the cylinder's material, or keep your image files on disk and use them as the cylinder's material ...
I think the problem would be basically solved.
I am working on a writing application. My writing is working fine, but what I want to implement is variable stroke width, so that the writing is very realistic and intuitive, as done by the "BAMBOO" and "PENULTIMATE" applications.
Firstly I want to tell you what I have tried.
1) In iOS, there is no pressure detection or velocity detection, according to my research. For velocity detection, I would have to use OpenGL.
2) Due to these limitations, I tried using the method given in this tutorial, which is pretty straightforward. Here is the link: http://mobile.tutsplus.com/tutorials/iphone/ios-sdk-advanced-freehand-drawing-techniques/
3) This works fine, but the width increases as I move faster and decreases as I move slower. What I want is the opposite effect: the width should increase as I move slower, and when I move fast the thickness should appear only at the edges of the line.
Here are the screenshot of the BAMBOO app and my app.
1)BAMBOO app
In the above image, the line is drawn quickly, and you can see that the thickness appears only at the edges.
2) MY APP
Here you can see that the line is thinner at the edges and thick everywhere else.
So, here are my doubts:
1) Is there any better approach to fulfil my requirement than what I have tried?
2) If what I have tried is the correct approach to the problem, what changes do I need to make to achieve the desired effect?
Regards
Ranjit
The answer to how to reverse the width behaviour (and even the same question as yours) is right there in the link that you posted. All I did was search for the word "width".
The question (my highlighting is not part of the quote):
The final version of this seems to work opposite of the first version. I would like to have the line thicker as the user moves slower and not thinner. Where do I change the code to inverse the final varying thickness to perform or like a pen? Meaning the slower the user moves the thicker or more ink the pen puts down... Thanks! Great tutorials, btw...
And the answer:
thanks for the great tutorial!
with these changes I got the opposite line width change effect:
#define CAPACITY 100
#define FF 20.0
#define LOWER 0.01
#define UPPER 1.5
float frac1 = clamp(FF/len_sq(pointsBuffer[i], pointsBuffer[i+1]), LOWER, UPPER); // ................. (4)
float frac2 = clamp(FF/len_sq(pointsBuffer[i+1], pointsBuffer[i+2]), LOWER, UPPER);
float frac3 = clamp(FF/len_sq(pointsBuffer[i+2], pointsBuffer[i+3]), LOWER, UPPER);
Another search in the same link for the text "float frac1 =" shows that this change should be applied to lines 76-78 of the article's code (somewhere inside touchesMoved:withEvent: in the code from the article).
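To see why this inverts the effect, here is the same relationship as a small self-contained C sketch (clamp and len_sq are written out here, since the article defines them elsewhere; the assumption is that the resulting fraction feeds the stroke width). Dividing FF by the squared segment length means long (fast) segments clamp toward LOWER and short (slow) segments toward UPPER, so slow strokes come out thicker.

```c
/* Constants mirror the snippet above. */
#define FF    20.0f
#define LOWER 0.01f
#define UPPER 1.5f

static float clampf(float v, float lo, float hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Squared distance between two sampled touch points. */
static float lenSq(float x1, float y1, float x2, float y2) {
    float dx = x2 - x1, dy = y2 - y1;
    return dx * dx + dy * dy;
}

/* Width fraction for the segment between two touch samples:
   inversely proportional to squared length, clamped to [LOWER, UPPER]. */
static float widthFraction(float x1, float y1, float x2, float y2) {
    return clampf(FF / lenSq(x1, y1, x2, y2), LOWER, UPPER);
}
```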
In your touchesBegan: method, a UITouch is supplied.
UITouch has the following instance methods:
– locationInView:
– previousLocationInView:
And the following property:
@property(nonatomic, readonly) NSTimeInterval timestamp
From the above, I think you can easily calculate velocity. I didn't go through any of the mentioned links; I just want to give you an idea of how to calculate velocity based on the touch object.
I know this is a well documented problem but I didn't manage to find a satisfactory solution online. Here goes.
I am using cvCalcOpticalFlowPyrLK to track the motion of feature points. I find the feature points with cvGoodFeaturesToTrack and refine them with cvFindCornerSubPix. I find the feature points in my first frame (reference frame) and use LK to track the movement of these points with respect to the reference frame. I update the points with the current frame's feature point coordinates when they are found. Here's what I observed:
1) The number of good feature points found by cvGoodFeaturesToTrack is very small. I ask for 100 points but always get fewer than 10.
2) The number of feature points drops by 50 percent after 5-6 frames, and by another 50 percent about 5 frames later, and this is when the subject is not in motion. The tracking is patchy in the sense that some of the points are tracked correctly but some are way off.
I have seen demo applications on YouTube and in iPhone apps, and the frame-to-frame drop-off in the number of feature points there is nothing like what I see in my application. So I suspect the parameters I set might be wrong.
This is how I call the functions:
cvGoodFeaturesToTrack(
    image,
    eigen_image,
    temp_image,
    corners_point,
    &corner_count,
    0.01,           // quality level
    3,              // min distance
    0,              // mask
    10,             // block size
    0,              // use Harris
    0.04);          // k
cvFindCornerSubPix(
    image,
    cornersPoint,
    corner_count,
    cvSize(WINDOW_SIZE, WINDOW_SIZE),
    cvSize(-1, -1),
    cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3));
cvCalcOpticalFlowPyrLK(
    image,
    currentFrame,
    rpV->pyramid_images0,
    rpV->pyramid_images1,
    cornersPoint,
    cornersCurrent,
    corner_count,
    cvSize(WINDOW_SIZE, WINDOW_SIZE),
    10,             // pyramid levels
    features_found,
    feature_errors,
    cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3),
    0);
Another thing is that I am using a greyscale (infrared) camera. I don't think it matters much, though. I am wondering if I am missing anything important here.
Any form of help is much appreciated.
Thanks,
Kelvin
There are a few issues:
Calling cvFindCornerSubPix does not help if the features you are tracking don't look like corners of a checkerboard.
Use of a pyramid is appropriate only if the apparent motion is larger than the window size from frame to frame, for a reasonable window size.
Hard to tell why you are not getting enough good features to track without seeing your imagery. Perhaps it's rather blurry?
So I worked on an iPhone game several years ago that uses CGPointMake calls hundreds of times for some OpenGL stuff. That was back when 480 was the only width for landscape phones, but now I'd like to support the 568-point display. The problem is that things aren't centered now, and there's a big empty black vertical bar on the right side of the screen.
I was wondering what the best way to fill the screen again would be without rewriting these hundreds of CGPoint calls. I was thinking if I could overload CGPointMake somehow to offset each one by (44,0) it would help to center things. Or maybe there's a way with OpenGL to shift everything in one direction? I'm not overly familiar with OpenGL so I'm not sure where the best place to start would be - any help is greatly appreciated!
Overloading CGPointMake sounds like too much magic to me. Are all these occurrences really just centering the point? In that case you could write the screen-agnostic version:
// Or something similar given your UI orientation and transformations
CGPointMake(CGRectGetMidY([[UIScreen mainScreen] bounds]), someY);
And since that’s quite a mouthful, you could introduce a macro or a function:
CGPoint CGPointMakeHorizontalCenter(CGFloat y) { … }
Then bite the bullet, write a nice regular expression and replace all centering CGPointMake references with calls to this CGPointMakeHorizontalCenter.
(This all assumes you just need to center things better. In reality, maybe you also have to change some assets to better fill the screen? I think you could just stretch your whole older rendering code to fill the screen, but that would look ugly.)
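If a fixed shift is all that's needed, the wrapper could be as simple as the sketch below. The struct is a stand-in for CoreGraphics' CGPoint, and the function name and LEGACY_WIDTH constant are my assumptions: on a 568-point-wide display, the old 480-point scene sits (568 - 480) / 2 = 44 points off center.

```c
typedef struct { float x, y; } CGPointStub;  /* stand-in for CGPoint */

#define LEGACY_WIDTH 480.0f  /* width the original point values assumed */

/* Make a point shifted right by half the extra screen width, so the
   old 480-point layout is centered on a wider display. */
static CGPointStub centeredPointMake(float screenWidth, float x, float y) {
    float offset = (screenWidth - LEGACY_WIDTH) / 2.0f;  /* 44 for 568 */
    CGPointStub p = { x + offset, y };
    return p;
}
```

A regex replace of `CGPointMake(` with a call like this (passing the runtime screen width) would then shift every point at once, which is the "bite the bullet" step above.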
I feel stupid asking this question, but I can not find a clear answer anywhere (or much of an answer at all) so I feel I must ask. Is there anyone out there who can explain clearly how the parallaxRatio of CCParallaxNode works?
I have checked the source of CCParallaxNode and it does not explain it at all. I have searched the internet and Stack Overflow extensively. I have tried good old trial and error. I'm still confused.
[parallaxLayer addChild:backgroundEffect_subtleRed z:100 parallaxRatio:ccp(0.5, 0.5) positionOffset:backgroundEffect_subtleRed.position];
In this piece of code I am trying to add a particle emitter to a parallaxLayer and have it move somewhat like you would expect an object on a parallax layer to move. Unfortunately I do not see the particles at all. I have had this problem anytime I try to add anything to a parallaxNode when I want it to move. I have been using CCParallaxNode to create static UI layers, but have not been able to use them for what they were built to do.
In summary:
parallaxRatio takes a CGPoint. What do the floats in the CGPoint apply to? Are they ratios of x and y in relation to the window? Are they (parallaxLayerMovementInRelationTo, parentNode)? A working piece of example code would be very helpful.
Thank you.
To quote from a cocos2d book I own:
[paraNode addChild:para1 z:1 parallaxRatio:CGPointMake(0.5f, 0) positionOffset:topOffset];
[paraNode addChild:para2 z:2 parallaxRatio:CGPointMake(1, 0) positionOffset:topOffset];
[paraNode addChild:para3 z:4 parallaxRatio:CGPointMake(2, 0) positionOffset:midOffset];
[paraNode addChild:para4 z:3 parallaxRatio:CGPointMake(3, 0) positionOffset:downOffset];
"The CCParallaxNode is created like any other node, but its children are added using a special initializer. With it you specify the parallax ratio, which is a CGPoint used as a multiplier for any movement of the CCParallaxNode In this case, para1 would move at half the speed, para2 at the normal speed, para3 at double the speed of the CCParallaxNode, and so on"
So basically, it's the ratio by which the individual layers move relative to the movement of the whole CCParallaxNode.
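In other words, each child's on-screen position is the parent node's movement multiplied per axis by its ratio, plus the child's offset. A small C sketch of that relationship (the function and struct names are mine, not cocos2d's):

```c
typedef struct { float x, y; } Vec2;

/* Child position under a parallax node: the parent's movement is
   scaled per axis by the ratio, then the position offset is added.
   Ratio 0.5 -> half speed, 1 -> normal, 2 -> double, as in the quote. */
static Vec2 parallaxChildPosition(Vec2 parentMovement, Vec2 ratio, Vec2 offset) {
    Vec2 p = { parentMovement.x * ratio.x + offset.x,
               parentMovement.y * ratio.y + offset.y };
    return p;
}
```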