How to position an object in 3D space using cameras [closed] - opencv

Is it possible to use a couple of webcams (or any cameras, for that matter) to get the x, y and z coordinates of an object and then track it, perhaps using OpenCV, as it moves around a room?
I'm thinking of it in relation to localising and then controlling an RC helicopter.

Yes. You need to detect points in both images simultaneously and then match the pairs that correspond to the same point in the scene. This way you will have the same point represented in two different coordinate spaces (camera 1 and camera 2), and with calibrated cameras you can triangulate its 3D position.
You can start here.
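
As a rough illustration of that pipeline (not from the original answer; the camera matrices and pixel coordinates below are placeholder values you would normally get from calibration, e.g. cv2.stereoCalibrate), matched points from the two views can be triangulated with OpenCV:

    import cv2
    import numpy as np

    # 3x4 projection matrices for the two cameras (intrinsics * [R | t]).
    # Placeholder values; in practice they come from calibration.
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # camera 2, 10 cm to the right

    # Pixel coordinates of the same scene point seen by each camera (2xN arrays).
    pts1 = np.array([[310.0], [242.0]])
    pts2 = np.array([[289.0], [242.0]])

    # Triangulate to homogeneous coordinates and dehomogenise to get x, y, z.
    points_4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
    points_3d = (points_4d[:3] / points_4d[3]).T
    print(points_3d)   # [[x, y, z]] in the same units as the baseline

Tracking then amounts to repeating the detection, matching and triangulation for each frame.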

If using a depth sensor is acceptable, you can take a look at how ReconstructMe does it. Otherwise, take a look at this Google search.

Related

Mapping YOLO results onto 2D plan [closed]

I'm using YOLO to detect people in a video stream from a camera and would like to "map" the detected bounding boxes onto a 2D plan of the room.
Could you please give me a hint as to which algorithms might be used for this?
The idea is shown in the picture from the GitHub repository below, but I don't need to measure distance; I need to "project" an object's position onto a 2D map of the room.
https://github.com/sassoftware/iot-tracking-social-distancing-computer-vision
Using 3D cameras, or just two regular ones, might help a lot as well.
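
One common approach (not taken from this thread) is a ground-plane homography: pick a few reference points that are visible both in the camera image and on the floor plan, estimate the homography, then map the bottom-centre of each bounding box through it. A minimal sketch, assuming the four point correspondences below are measured by hand:

    import cv2
    import numpy as np

    # Four points on the floor in image pixels and the same points in
    # floor-plan coordinates (e.g. centimetres); values are hypothetical.
    image_pts = np.float32([[320, 710], [1580, 695], [1220, 420], [600, 430]])
    plan_pts = np.float32([[0, 0], [500, 0], [500, 400], [0, 400]])

    # Homography mapping the ground plane in the image onto the 2D plan.
    H, _ = cv2.findHomography(image_pts, plan_pts)

    def box_to_plan(box):
        """Map a YOLO box (x1, y1, x2, y2) to plan coordinates, using the
        bottom-centre of the box as the point where the person stands."""
        x1, y1, x2, y2 = box
        foot = np.float32([[[(x1 + x2) / 2.0, y2]]])     # shape (1, 1, 2)
        return cv2.perspectiveTransform(foot, H)[0, 0]   # (x, y) on the plan

    print(box_to_plan((900, 300, 1000, 700)))

With two calibrated cameras you could instead triangulate, as in the stereo answer above.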

Is it possible to gather the image matrix of a pygame gameboard without rendering the image to the screen? [closed]

I wrote a 2D simulation (very similar to the Atari / OpenAI Gym games) in pygame, which I need for a reinforcement learning project. I'd like to train a neural network using mainly image data, i.e. screenshots of the pygame gameboard.
I am able to make those screenshots, but:
- Is it possible to gather this image data - or, more precisely, the corresponding RGB image matrix - without rendering the whole playing ground to the screen?
As far as I understand, this is possible in pyglet, but I would like to avoid rewriting the whole simulation.
Basically, yes. You don't have to actually draw anything to the screen; you can draw your game onto an off-screen Surface instead.
Once you have such a Surface, you can use get_at, the PixelArray class, or the surfarray module to access the RGB(A) values of each pixel.
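
For example, a minimal sketch (the 84x84 size and the drawing calls are just placeholders for your own game code) that renders to an off-screen Surface and reads the pixels back through surfarray:

    import numpy as np
    import pygame

    pygame.init()

    # Off-screen Surface: nothing drawn here ever has to reach the display.
    board = pygame.Surface((84, 84))

    # Draw the game state onto it exactly as you would onto the screen surface.
    board.fill((0, 0, 0))
    pygame.draw.circle(board, (255, 0, 0), (42, 42), 10)

    # surfarray returns a (width, height, 3) uint8 array; transpose it if
    # your network expects (height, width, channels).
    frame = pygame.surfarray.array3d(board)
    observation = np.transpose(frame, (1, 0, 2))

    print(observation.shape)   # (84, 84, 3)
    pygame.quit()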

Is it possible to create a complex platformer game with SpriteKit? [closed]

Can we achieve this with SKPhysicsBody? I have already done some research and learned that SpriteKit physics bodies can handle irregular shapes like curves. But there's no mention that an SKNode's rotation angle will change dynamically, as in the example game (made with Unity) shown in the image. It would be a really bad experience for players if the main element of the game did not follow the curve with its rotation.
Yes, yes you can. And yes, yes it will (rotate with contact). All possible, all permitted, and all exactly how the physics bodies interact. This is simply a matter of setting the friction properties between the surface (ground) and your hero's ski, in conjunction with gravity, so that it produces your idealised experience.

Delphi : How to plot a simple 2d graph? [closed]

I'm using Delphi for my computer science coursework and need my program to plot a simple graph of projectile motion. I've struggled to find a way to implement this and was wondering if anyone has experience drawing graphs or could point me in the right direction.
The main idea I have tried is plotting all the x values and drawing a line to the corresponding y value at each given time, but it always comes out really weird and doesn't work.
Steps for a simple graph (see the sketch below):
1. Provide the data values in an array/list
2. Find the minimal and maximal values of the X and Y components
3. Calculate linear formulas for mapping data values to screen coordinates (min X maps to the left edge of the drawing rectangle, and so on)
4. Draw line segments, applying the formulas to get the coordinates
5. If needed, draw the axes
P.S. Is using TChart prohibited?
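
Not Delphi, but a short Python sketch of the mapping in steps 2-4 (the drawing rectangle and the projectile data are made-up values); in Delphi the final loop would become Canvas.MoveTo / Canvas.LineTo calls:

    def make_mapping(xs, ys, rect):
        # Return a function mapping data (x, y) into the pixel rectangle
        # rect = (left, top, width, height); note that screen Y grows downwards.
        left, top, width, height = rect
        min_x, max_x = min(xs), max(xs)
        min_y, max_y = min(ys), max(ys)

        def to_screen(x, y):
            sx = left + (x - min_x) / (max_x - min_x) * width
            sy = top + height - (y - min_y) / (max_y - min_y) * height
            return sx, sy

        return to_screen

    # Sample projectile data: x = v*t, y = v*t - 0.5*g*t^2.
    ts = [i / 10.0 for i in range(21)]
    xs = [20.0 * t for t in ts]
    ys = [20.0 * t - 0.5 * 9.81 * t * t for t in ts]

    to_screen = make_mapping(xs, ys, (10, 10, 300, 200))
    points = [to_screen(x, y) for x, y in zip(xs, ys)]
    # Draw a line segment between each pair of consecutive points.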
I agree that the TChart that ships with Delphi is enough for that task. Anyway, you can also check this page.

Swift: How can I get the height of an object with the camera? [closed]

Hi, I would like to measure an object in cm (or a similar unit) from an image obtained with the camera.
Any idea?
Thanks!!
You can't get a precise measurement.
You would need to input roughly how far away the object is from the camera.
You also need to measure how many pixels tall the item you want to measure is in the image.
From the measured pixel height, combined with the camera's resolution and field of view and the distance from the camera to the object, you can work out an approximate height of the object using trigonometry.
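
As a rough illustration of that trigonometry (all numbers below are hypothetical, and this is Python rather than Swift since only the arithmetic matters), the pinhole model gives height ≈ distance × pixel height / focal length in pixels:

    import math

    image_height_px = 4032      # vertical resolution of the photo
    vertical_fov_deg = 60.0     # vertical field of view of the camera
    distance_m = 2.0            # estimated distance to the object
    object_height_px = 900      # measured pixel height of the object

    # Focal length expressed in pixels, derived from the vertical field of view.
    focal_px = (image_height_px / 2.0) / math.tan(math.radians(vertical_fov_deg / 2.0))

    # Similar triangles: real height / distance = pixel height / focal length.
    object_height_m = distance_m * object_height_px / focal_px
    print(f"approximate height: {object_height_m * 100:.1f} cm")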
