With the intercept being 2, I want to guess the value of the hyperplane in the image at the link below.
https://ibb.co/vB6NszN
I didn't quite get the whole concept of it and I don't know how to guess a value for the hyperplane. Any help is appreciated.
Note: I'm sorry, I'm not able to share images in my question yet.
Helloooo back here with another question as I'm making another calculator for my Role Play Group.
I'm wondering how I can make a calculator that can calculate the Experience Points required! Unfortunately, 86-100 on the scale is completely different, as 100 is the max level achievable. If it's too difficult to account for the last levels, I don't mind making it 86-106.
I really don't know anything about math and formulas, so any and all help would be super appreciated!
Try an INDEX()/MATCH() combination like this:
=INDEX(C4:C9,MATCH(F4,INDEX(SPLIT(B4:B9,"-"),,1),1))
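The formula splits the level ranges in B4:B9 on the hyphen, keeps the lower bound of each bracket, and uses an approximate MATCH to find which bracket the level in F4 falls into, then returns the matching XP value from C4:C9. If it helps to see the same bracket-lookup idea outside of Sheets, here is a minimal Python sketch; the bracket bounds and XP numbers below are made up, so substitute your own table.

import bisect

# Hypothetical XP table: (lowest level of the bracket, XP required).
# The real bounds and XP values would come from columns B and C of the sheet.
xp_table = [(1, 100), (6, 250), (11, 500), (16, 1000), (21, 2000), (86, 5000)]

def xp_required(level):
    # Find the last bracket whose lower bound is <= level, which is the same
    # idea as MATCH's approximate (sorted) mode in the formula above.
    lower_bounds = [low for low, _ in xp_table]
    idx = bisect.bisect_right(lower_bounds, level) - 1
    return xp_table[idx][1]

print(xp_required(13))  # falls in the 11-15 bracket of this made-up table -> 500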
I want to create some objects (boxes, cylinders, pyramids, it doesn't really matter) which display text / a number on one side or on all of their sides. Short of making individual materials with the numbers displayed on them by hand, is there a simple way to achieve this?
I am using Swift 4 in Xcode.
First thing, please try not to be discouraged. Thank you for reaching out to the ARKit community on stack :-)
We are here to help each other.
(I do feel your pain, which is why I am trying to help.)
Here is an interesting Stack Overflow page that has helped me with placing items on the sides of objects (like boxes, cylinders, and pyramids).
I hope it can help you or others.
SCNBox different colour or texture on each face
Rickster pointed out some other possibilities.
We all learn by sharing what we know.
Smartdog
Depends on what you mean by "by hand". If you want the text displayed on the surface of the geometry, like a texture map, then texture-mapping it is the way to go. If you draw your text into a UIImage, you can set that as the material contents, which is a bit more dynamic than, say, creating a bunch of PNGs that each have a different number on them. Just make sure to choose an image size/resolution that looks good at the size at which your objects are displayed.
For anyone lost on the internet trying to find an answer to this, it's stupidly simple: use SCNText and attach it to a node. I just wasted 7 hours of my life trying to make number .dae models position themselves next to each other because there is no mention of this feature anywhere.
I hope I saved you as much pain as I just endured discovering this.
I am working on a project in C#/Emgu CV, but an answer in any language with OpenCV should be OK.
I have the following image: http://i42.tinypic.com/2z89h5g.jpg
Or it might look like this: http://i43.tinypic.com/122iwsk.jpg
I am trying to do automatic calibration and I would like to know how to find the corners of the field. They are marked by LEDs, but I would prefer to find them by the color tags. If needed, I can replace all tags with tags of the same color. (Note that the light in the room changes, so the colors might be a bit different next time.)
Edge detection might be OK too, but I am afraid that I would not find the corners correctly.
Please help.
Thank you.
Edit:
Thanks aardvarkk for the advice, but I think I need to give you a little bit more info.
I am already able to detect and identify robots in the field and get their position and rotation. But for that I have to set the corners of the field manually first. So I was looking for an automatic way, but I was worried I would not be able to distinguish the color tags from the background because the light in the room changes quite often.
And as for the camera angle: the point is that the camera can be at a different (reasonable) angle each time.
I would start by searching for the colours. The LEDs won't be much help to you, as they're not much brighter than anything else in the scene. I would look for the rectangular pieces of coloured tape. Try segmenting the image based on colour; that may allow you to retrieve the corner tape pieces directly without needing to know their exact colour in advance. After that, you could look for pairs of same-coloured blobs that are close to each other to define the corners where the tape pieces match. Knowing what kinds of camera angles you are going to have to handle is also very important: if you need this to work when viewing from the side, then this approach certainly won't work, but if it's almost top down, it would probably be a good start. Nobody will be able to provide you with a start-to-finish solution, but this might be a good base to begin with.
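To make the segmentation idea concrete, here is a minimal Python/OpenCV sketch (assuming OpenCV 4); the file name, the HSV bounds, and the minimum blob area are placeholders that you would tune for your tape and lighting.

import cv2
import numpy as np

img = cv2.imread("field.jpg")                     # placeholder path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Example HSV range for one tape colour; the real bounds depend on the tape
# and the lighting, so expect to tune (or re-estimate) them.
lower = np.array([40, 80, 80])
upper = np.array([80, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Clean up the mask, then pull out the connected blobs.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep the centres of blobs large enough to be a piece of tape.
centres = []
for c in contours:
    if cv2.contourArea(c) > 50:                   # assumed minimum area
        m = cv2.moments(c)
        centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

print(centres)  # candidate tape locations; pair nearby centres to get corners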
I have a few images. Some of the images contain text and a few others don't contain text at all. I want a robust algorithm which can conclude whether an image contains text or not.
Even probabilistic algorithms are fine.
Can anyone suggest such an algorithm?
Thanks
There are some specifics that you'll want to pin down:
Will there be much text in the image? Or just a character or two?
Will the text be oriented properly? Or does rotation also need to be performed?
How big will you expect the text to be?
How similar to the text will the background be?
Since images can vary significantly, you want to define the problem and find as many constraints as you can to make the problem as simple as possible. It's a difficult problem.
For such an algorithm you'll want to focus on what makes text distinct from the background (consistent spacing between characters and lines, consistent height, consistent baseline, etc.). There's an area of research called "text detection" that you'll want to investigate, and you'll find a number of algorithms there. Two surveys of some of these methods can be found here and here.
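As a rough starting point, here is a hedged Python/OpenCV sketch that uses MSER (a common building block in text-detection pipelines) to count character-like regions and make a crude text/no-text guess; the file name, the size limits, the aspect-ratio range, and the final threshold are all assumptions you would tune for your images.

import cv2

img = cv2.imread("sample.png")                    # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# MSER finds stable regions that often correspond to characters.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

# Count regions whose size and aspect ratio look roughly character-like.
char_like = 0
for (x, y, w, h) in bboxes:
    aspect = w / float(h)
    if 5 < h < img.shape[0] // 2 and 0.1 < aspect < 2.0:
        char_like += 1

# Very crude decision rule: several character-like regions suggest text.
print("probably contains text" if char_like > 10 else "probably no text")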
I am looking for a library that would help scrape the information from the image below.
I need the current value, so it would have to recognise the values on the left and then estimate the value of the bottom line.
Any ideas if there is a library out there that could do something like this? Language isn't really important but I guess Python would be preferable.
Thanks
I don't know of any "out of the box" solution for this and I doubt one exists. If all you have is the image, then you'll need to do some image processing. A simple binarization method (like Otsu binarization) would make it easier to process:
The binarization makes it easier because now the pixels are either "on" or "off."
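A minimal Python/OpenCV sketch of that binarization step, assuming the chart is saved as a local file (the path is a placeholder):

import cv2

# Otsu picks a global threshold automatically, giving a clean black-and-white image.
img = cv2.imread("chart.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("chart_binary.png", binary)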
The locations for the lines can be found by searching for some number of pixels that are all on horizontally (5 on in a row while iterating on the x axis?).
Then a possible solution would be to pass the image to an OCR engine to get the numbers (Tesseract is an open-source C++ OCR engine hosted at Google: tesseractOCR). You'd still have to find out where the numbers are in the image by iterating through it.
Then you'd have to find where the lines are relative to the keys on the left, do a little math, and you can get your answer.
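Here is a hedged Python sketch of those last two steps, continuing from the binarized image above; the run-length threshold, the assumption that the lines are the dark pixels, and the width of the label strip are all guesses to tune, and pytesseract is a Python wrapper around the Tesseract engine mentioned earlier.

import cv2
import numpy as np
import pytesseract

binary = cv2.imread("chart_binary.png", cv2.IMREAD_GRAYSCALE)

# Candidate lines: rows where more than half of the pixels are "on" (dark here).
on = binary < 128
line_rows = np.where(on.sum(axis=1) > binary.shape[1] * 0.5)[0]
print("candidate line rows:", line_rows)

# Read the axis labels on the left with OCR; once you know the pixel rows of two
# labelled values, the value at the bottom line follows from a linear mapping
# between pixel row and value.
left_strip = binary[:, :80]                       # assumed width of the label column
labels = pytesseract.image_to_string(left_strip, config="--psm 6")
print(labels)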
OpenCV is a beefy computer vision library that has things like the binarization built in. It is a C++ library, but it also has Python bindings.
Hope that helps.