I am studying this airfoil and I want to refine the mesh on the forepart. The element quality of my mesh is actually good, but I want to get rid of the red (poor-quality) areas on the airfoil. I would also like to hear all of your ideas.
As you are providing very little information on what you are trying to achieve (is this a structural, fluid, or thermal problem?), I can only guess:
Increasing the element count on the edge could produce less skewed elements.
In reality the edge is probably not sharp but has some kind of chamfer. Try to model it accordingly.
In ARKit for iOS, if you display a virtual item, it always appears in front of any real item. This means that if I stand in front of the virtual item, I still see the virtual item. How can I fix this scenario?
The bottle should be visible, but it is cut off.
You cannot achieve this with ARKit alone. It offers no off-the-shelf solution for occlusion, which is a hard problem.
Ideally you'd know the depth of each pixel projected onto the camera, and would use that to determine which pixels are in front and which are behind. I would not try anything with the feature points ARKit exposes, since 1) their positions are inaccurate, and 2) there's no way to know which feature point in frame A corresponds to which feature point in frame B. The data is far too noisy to do anything good with.
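For illustration, here is a minimal NumPy sketch of that per-pixel depth test, assuming you somehow had a depth map of the real scene and the virtual object's rendered colour and depth buffers (all array names are hypothetical):

```python
import numpy as np

def composite_with_occlusion(camera_rgb, real_depth, virtual_rgb, virtual_depth):
    """Show the virtual object only where it is closer to the camera
    than the real scene (per-pixel depth test)."""
    in_front = virtual_depth < real_depth  # boolean mask per pixel
    out = camera_rgb.copy()
    out[in_front] = virtual_rgb[in_front]
    return out
```

The hard part, as noted above, is obtaining `real_depth` in the first place.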
You might be able to achieve something with third-party options that process the captured image and estimate depth, or distinct depth levels, in the scene, but I don't know of any good solution. There are SLAM techniques that yield dense depth maps, like DTAM (https://www.kudan.eu/kudan-news/different-types-visual-slam-systems/), but that would mean redoing most of what ARKit does. There might be other approaches that I'm not aware of. Apps like Snapchat do this in their own way, so it is possible!
So basically your question is about mapping the virtual item's coordinates onto the real-world coordinate system; in short, you want the virtual item to be blocked by the real item, so that you only see the virtual item once you pass the real item.
If so, you need to know the physical position of every object in the environment, and then you need to know exactly where you are in order to decide whether the virtual item is blocked.
It's not an intuitive way to fix this; however, it's the only way I can think of.
Cheers.
What you are trying to achieve is not easy.
You need to detect the parts of the real world that "should be visible" using some kind of image processing, or perhaps the ARKit feature points, which carry depth information. Based on this, you then add an "invisible virtual object" that cuts off the drawing of anything behind it. This object represents your "real object" inside the "virtual world", so that the background (camera feed) remains visible wherever this invisible virtual object is present.
I am working on a project in C#/Emgu CV, but an answer in any language with OpenCV should be fine.
I have the following image: http://i42.tinypic.com/2z89h5g.jpg
Or it might look like this: http://i43.tinypic.com/122iwsk.jpg
I am trying to do automatic calibration, and I would like to know how to find the corners of the field. They are marked by LEDs, but I would prefer to find them by the colour tags. If needed, I can replace all the tags with tags of the same colour. (Note that the lighting in the room changes, so the colours might be a bit different next time.)
Edge detection might be OK too, but I am afraid I would not find the corners correctly.
Please help.
Thank you.
Edit:
Thanks aardvarkk for the advice, but I think I need to give you a bit more info.
I am already able to detect and identify the robots in the field and get their position and rotation. But for that I have to set the corners of the field manually first. So I was looking for an automatic way, but I was worried I would not be able to distinguish the colour tags from the background, because the lighting in the room changes quite often.
And as for the camera angle: the point is that the camera can be at a different (reasonable) angle each time.
I would start by searching for the colours. The LEDs won't be much help to you, as they're not much brighter than anything else in the scene. I would look for the rectangular pieces of coloured tape. Try segmenting the image based on colour; that may allow you to retrieve the corner tape pieces directly without needing to know their exact colour in advance. After that, you can look for pairs of same-coloured blobs that are close to each other to define the corners where the pieces of tape match. Knowing what kinds of camera angles you will have to handle is also very important: if you need this to work when viewing from the side, this approach certainly won't work; if the view is almost top-down, it would probably be a good start. Nobody will be able to provide you with a start-to-finish solution, but this might be a good base to begin with.
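As a rough sketch of that colour-segmentation step in Python/OpenCV (the question says any OpenCV language is fine), here is one way to pull out the tape blobs; the HSV range and minimum blob area are placeholder values you would tune for your tape colours and lighting:

```python
import cv2
import numpy as np

img = cv2.imread("field.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder HSV range for one tape colour; tune per colour and lighting.
mask = cv2.inRange(hsv, np.array([100, 80, 80]), np.array([130, 255, 255]))

# Remove small noise, then find the remaining blobs.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

candidates = []
for c in contours:
    if cv2.contourArea(c) > 50:  # ignore tiny specks
        x, y, w, h = cv2.boundingRect(c)
        candidates.append((x + w // 2, y + h // 2))  # blob centre

print(candidates)  # candidate corner-tag positions
```

Pairing nearby centres of the same colour then gives you the corner candidates.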
I am creating an app at the moment which takes a picture of a person's face, and I want to change the colour of their skin (just for fun!).
I have a piece of code that runs through the image pixel by pixel, finds the skin colour, and changes it to a new colour. This kind of works, but even though I am allowing for differences in tone and adjusting the new colour accordingly, it is still very hit and miss.
Can anyone point me in the right direction? Is it even possible? I don't really want to use a filter, as I don't think it would give the right effect.
Thanks
You should look at some computer vision techniques such as segmentation and feature extraction.
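As a minimal sketch of the segmentation idea in Python/OpenCV: segment likely skin pixels in YCrCb space, then shift the hue only inside that mask so shading is preserved. The Cr/Cb thresholds below are common rule-of-thumb values, not something specific to your images:

```python
import cv2
import numpy as np

img = cv2.imread("face.jpg")

# Rule-of-thumb skin range in YCrCb; adjust for your photos and lighting.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
skin_mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))

# Shift the hue of skin pixels only; saturation and value keep the shading.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
new_hue = (hsv[..., 0].astype(int) + 60) % 180  # +60 is an arbitrary shift
hsv[..., 0] = np.where(skin_mask > 0, new_hue, hsv[..., 0]).astype(np.uint8)

cv2.imwrite("face_recoloured.jpg", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
```

Because the mask is a per-pixel decision rather than a single exact colour match, it copes better with tone variation than pixel-by-pixel colour replacement.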
I want to be able to recognize what page of a text-only (no images) book I'm on. What is the best approach?
I was initially thinking of some sort of image matching, but the pages of an all-text book look so similar that I'm not sure how well this would work.
My second thought was to use OCR?
Any ideas or suggestions... thanks!
I think image matching is really useless in your case...
If you want to detect which page you are on, and the book has numbered pages, you can use an OCR engine like Tesseract:
1) Locate the page number (top-left corner, right, bottom, ...)
2) Extract it (crop that region of the image to decode)
(2bis) Preprocess the cropped image...)
3) Decode it (use Tesseract or another OCR), as in the sketch below
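A minimal Python sketch of those steps using OpenCV and pytesseract; the corner crop is a placeholder for wherever the page number sits in your book's layout:

```python
import cv2
import pytesseract

page = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE)

# 1) + 2) Locate and extract: assume the number is bottom-right (placeholder crop).
h, w = page.shape
roi = page[int(0.92 * h):, int(0.80 * w):]

# 2bis) Preprocess: binarize to help the OCR.
_, roi = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3) Decode: treat the crop as a single text line containing only digits.
config = "--psm 7 -c tessedit_char_whitelist=0123456789"
print("page number:", pytesseract.image_to_string(roi, config=config).strip())
```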
If you don't want to use an OCR engine, you can look at Hu moments; if the numbers are standard printed digits, they can be quite good for recognising them.
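For the Hu-moment route, OpenCV's cv2.matchShapes compares two contours via their Hu moments. A rough sketch, assuming you have a cropped candidate digit and one template image per digit (the file names are hypothetical):

```python
import cv2

def largest_contour(path):
    """Binarize an image and return its largest contour."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

candidate = largest_contour("candidate_digit.png")

# Lower matchShapes score means more similar (Hu-moment based comparison).
scores = {d: cv2.matchShapes(candidate, largest_contour(f"template_{d}.png"),
                             cv2.CONTOURS_MATCH_I1, 0.0)
          for d in "0123456789"}
print("best guess:", min(scores, key=scores.get))
```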
I want to achieve something that looks like the wizard's ability in the game Trine.
I want to create a game where the player uses the mouse to draw certain objects, so I will need to compare the shape the player drew to a predefined shape of my own and check whether it's close.
I have no idea how to achieve this or where to look. I assume it has something to do with shape recognition, as in image processing and computer vision, but it should be much simpler and work in real time.
Does anyone have a clue how this can be done, or where I can look for something like that?
Is this what you're going for? http://www.youtube.com/watch?v=7Zh79q_xvZw
I would start by researching gesture recognition; I think that's the search phrase you need to find good info. http://en.wikipedia.org/wiki/Gesture_recognition
Also, sketch recognition: http://en.wikipedia.org/wiki/Sketch_recognition
Have a look at this question. What you are looking for in particular is on-line handwriting recognition, meaning that you follow every move of the user from beginning to end.
Now, you might want to simplify it a whole lot, so one way is to define 9 areas, like a 3x3 grid. Then convert the user's movement into a list of how the user moved through these cells (use thresholds to make sure the cursor stayed in an area for a while). You will then have an array like this: 1-1, 1-2, 2-2, 2-3 (meaning the user went from the upper-left corner to the upper-middle, etc.).
This information is now fairly easy to match against a set of gestures, as sketched below. If it performs poorly, you can either go further and introduce a Hidden Markov Model, which will tolerate some mistakes in the gesture (while still matching the most likely one in your gesture set), or you could simply display the grid to the user, so that the user learns the gestures like number codes.
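A minimal Python sketch of both steps: encoding the mouse trail into grid cells, then picking the most similar gesture with a simple sequence similarity in place of an HMM. The gesture names and cell sequences are made-up examples:

```python
from difflib import SequenceMatcher

GRID = 3  # 3x3 grid over the drawing area

def encode(points, width, height):
    """Convert a mouse trail [(x, y), ...] into a sequence of grid cells,
    dropping consecutive duplicates."""
    cells = []
    for x, y in points:
        cell = (min(int(GRID * y / height), GRID - 1),
                min(int(GRID * x / width), GRID - 1))
        if not cells or cells[-1] != cell:
            cells.append(cell)
    return cells

# Hypothetical gesture set: name -> expected cell sequence (row, column).
GESTURES = {
    "L-shape": [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)],
    "slash":   [(0, 2), (1, 1), (2, 0)],
}

def best_match(points, width, height):
    """Return the name of the gesture most similar to the drawn trail."""
    drawn = encode(points, width, height)
    return max(GESTURES,
               key=lambda name: SequenceMatcher(None, drawn, GESTURES[name]).ratio())
```

A threshold on the winning similarity ratio (say, requiring at least 0.7) lets you reject trails that don't resemble any gesture.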