How to do a point light on Windows Phone? - xna

All of the stuff that I can find about point lights with XNA uses custom shaders, which aren't available on Windows Phone. Can one of the 5 built-in effects be used to simulate a point light? How do you do a point light on Windows Phone?

BasicEffect lets you specify directional lights, so could you use one of those? Just pick a position for the light in world space and, for each model you render, point the light's direction from that position towards the model.
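The trick is just vector math: aim the directional light from the light's world position towards each model before drawing it. A minimal sketch of that math (written in Swift's simd types here, since this page mixes platforms; in XNA you would assign the resulting vector to BasicEffect.DirectionalLight0.Direction and could scale DiffuseColor for a rough distance falloff):

```swift
import simd

// Illustration only: approximate a point light with a per-model directional light.
// lightPosition is where the "point light" lives in world space.
func directionalLightDirection(lightPosition: SIMD3<Float>,
                               modelCenter: SIMD3<Float>) -> SIMD3<Float> {
    // A directional light has no position, so aim it from the light towards each model.
    simd_normalize(modelCenter - lightPosition)
}

// Optional: fake distance falloff by scaling the light colour
// (in XNA, scale DirectionalLight0.DiffuseColor by this factor).
func attenuatedIntensity(lightPosition: SIMD3<Float>,
                         modelCenter: SIMD3<Float>,
                         range: Float) -> Float {
    let distance = simd_length(modelCenter - lightPosition)
    return max(0, 1 - distance / range)   // linear falloff, clamped to [0, 1]
}
```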

Related

Augmented Reality – Lighting Real-World objects with Virtual light

Is it possible to import a virtual lamp object into the AR scene that projects a light cone which illuminates the surrounding space in the room and the real objects in it, e.g. a table, the floor, the walls?
For ARKit, I found this SO post.
For ARCore, there is an example of a relighting technique, and this source code.
It has also been suggested to me that post-processing can be used to brighten the whole scene.
However, these examples are from a while ago, and perhaps there is a newer or more straightforward solution to this problem?
At the low level, RealityKit is only responsible for rendering virtual objects and overlaying them on top of the camera frame.
If you want to illuminate the real scene, you need to post-process the camera frame.
Here are some tutorials on how to do post-processing:
Tutorial 1
Tutorial 2
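For example, on iOS 15+ you can hook RealityKit 2's render callbacks to post-process the composited frame; the sketch below just runs a Metal Performance Shaders Gaussian blur as a stand-in for a real lighting effect (the function name and parameters around it are my own assumptions):

```swift
import RealityKit
import MetalPerformanceShaders

// Minimal sketch: post-process RealityKit's composited output (camera frame + virtual objects).
// Assumes an existing ARView and iOS 15+ for ARView.renderCallbacks.
func installPostProcess(on arView: ARView) {
    arView.renderCallbacks.postProcess = { context in
        // Replace the Gaussian blur with your own bloom/lighting kernel.
        let blur = MPSImageGaussianBlur(device: context.sourceColorTexture.device, sigma: 6)
        blur.encode(commandBuffer: context.commandBuffer,
                    sourceTexture: context.sourceColorTexture,
                    destinationTexture: context.targetColorTexture)
    }
}
```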
If all you need is an effect like this, then all you need to do is add a CGImage-based post-processing effect for the virtual objects (the lights).
More specifically, add a bloom filter to the rendered image (you can also approximate a bloom filter with a Gaussian blur).
That way the code is all built around UIImage and CGImage, so it's pretty simple 😎
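A minimal Core Image sketch of that bloom step (the function and the intensity/radius defaults are my own choices, not from the answer):

```swift
import CoreImage
import UIKit

// Minimal sketch: apply a bloom to an already-rendered image of the virtual lights.
func bloomed(_ image: UIImage, intensity: Float = 1.0, radius: Float = 10) -> UIImage? {
    guard let cgImage = image.cgImage,
          let filter = CIFilter(name: "CIBloom") else { return nil }

    filter.setValue(CIImage(cgImage: cgImage), forKey: kCIInputImageKey)
    filter.setValue(intensity, forKey: kCIInputIntensityKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)

    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let result = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: result)
}
```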
If you want to be more realistic, consider using the depth map provided by LiDAR to calculate which areas can be illuminated, for more accurate brightness.
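For reference, reading that LiDAR depth map per frame looks roughly like the sketch below; the plain distance threshold is a stand-in, and a proper version would unproject each pixel with the camera intrinsics and test it against the light's position and radius:

```swift
import ARKit

// Minimal sketch: read LiDAR depth (metres) for the current frame and mark "near" pixels.
// Requires a LiDAR device and a configuration with .sceneDepth enabled.
func nearMask(for frame: ARFrame, maxDistance: Float = 1.5) -> [Bool]? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }   // Float32 CVPixelBuffer

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

    var mask = [Bool](repeating: false, count: width * height)
    for y in 0..<height {
        // Each row holds `width` Float32 depth samples.
        let row = (base + y * bytesPerRow).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            mask[y * width + x] = row[x] < maxDistance   // crude stand-in for "can be lit"
        }
    }
    return mask
}
```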
Or, if you're a true explorer, you can use Metal to build a real-time point-cloud digital twin of the real world and simulate occlusion of the light.
There's nothing new in relighting techniques based on 3D compositing principles in 2021. At the moment, when you're working with RealityKit or SceneKit, you have to implement the relighting functionality yourself with the help of two additional render passes (an RGB pass is always needed): a Normals pass and a PointPosition pass. Both AOVs must be 32-bit.
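To make the idea concrete, here is the per-pixel math those two AOVs enable, written as a plain Swift sketch of a simple point-light model (names and the linear falloff are my assumptions; a real implementation would evaluate this in a Metal shader over the full-resolution passes):

```swift
import simd

// Minimal sketch: relight one pixel from a Normals pass and a PointPosition pass.
// rgb:      the original beauty/RGB pass value for this pixel
// normal:   the 32-bit world-space normal from the Normals AOV
// position: the 32-bit world-space position from the PointPosition AOV
func relitPixel(rgb: SIMD3<Float>,
                normal: SIMD3<Float>,
                position: SIMD3<Float>,
                lightPosition: SIMD3<Float>,
                lightColor: SIMD3<Float>,
                lightRadius: Float) -> SIMD3<Float> {
    let toLight = lightPosition - position
    let distance = simd_length(toLight)
    guard distance < lightRadius, distance > 0 else { return rgb }

    let lambert = max(0, simd_dot(simd_normalize(normal), toLight / distance))
    let falloff = 1 - distance / lightRadius          // simple linear falloff
    return rgb + lightColor * lambert * falloff       // additive relight on top of the RGB pass
}
```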
However, in the near future, when Apple engineers finally implement texture capturing in Scene Reconstruction, any inexperienced AR developer will be able to apply a relighting procedure.
Watch this Vimeo Video to find out how relighting can be achieved in The Foundry NUKE.
A crucial point here, when implementing the Relighting effect, is the presence of a LiDAR scanner (or iToF sensor if you're using ARCore). In other words, today's relighting solution for iOS is Metal + RealityKit.

ARKit detect house exterior planes

I know that ARKit is able to detect and classify planes on A12+ processors. It does the job reasonably well inside the house, but what about outside? Is it able to detect windows and doors if I move around a house a little? I tried it myself and the result did not satisfy me: I moved around the building a lot and ARKit still did not distinguish a wall from a window.
I used the app from here for my tests: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_planes
Am I doing everything correctly? Maybe there is some third-party library that detects house parts better?
Thanks in advance!
When you test the sample app outside and try to use ARKit to detect the surfaces on the exterior of a house, it will not work well. ARKit is built to map flat surfaces and their orientations (horizontal/vertical); it can understand that a surface is flat and whether it is a wall or a floor. When you attempt to "map" the exterior of a house, ARKit will only detect the vertical surfaces as generic walls; it cannot distinguish between walls and windows.
You will need to develop or source an ML model and run it against the camera data using CoreML to enable your app to distinguish between windows and walls on the exterior of a house (a rough sketch follows the links below).
ARKit Plane tracking documentation for reference: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_planes
A couple of articles about ARKit with CoreML:
https://www.rightpoint.com/rplabs/dev/arkit-and-coreml
https://medium.com/s23nyc-tech/using-machine-learning-and-coreml-to-control-arkit-24241c894e3b
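A rough sketch of that CoreML + Vision route (the WindowWallClassifier model class and its labels are hypothetical placeholders for a model you would train or source yourself):

```swift
import ARKit
import Vision
import CoreML

// Minimal sketch: classify patches of the camera feed as "window" vs "wall".
// WindowWallClassifier is a hypothetical .mlmodel; Xcode generates its Swift class.
final class FacadeClassifier {
    private let request: VNCoreMLRequest

    init() throws {
        let model = try VNCoreMLModel(for: WindowWallClassifier(configuration: MLModelConfiguration()).model)
        request = VNCoreMLRequest(model: model)
        request.imageCropAndScaleOption = .centerCrop
    }

    /// Runs the classifier on the current ARKit camera frame and returns the top label.
    func classify(frame: ARFrame) throws -> String? {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .right)
        try handler.perform([request])
        let best = (request.results as? [VNClassificationObservation])?.first
        return best.map { "\($0.identifier) (\(Int($0.confidence * 100))%)" }
    }
}
```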
[Update]
Yes, you are correct: for A12+ devices Apple does allow plane classification. I would assume the issue with exterior windows vs. interior ones is either the distance to the window (too far for the computer vision to classify properly) or that Apple has tuned it more for interior windows than exterior ones. The difference may seem trivial, but to a CV algorithm it's quite different.
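For completeness, reading ARKit's built-in plane classification on an A12+ device looks roughly like this (function names are mine; the ARSessionDelegate wiring is omitted):

```swift
import ARKit

// Minimal sketch: enable plane detection and read ARKit's plane classification (A12+ only).
func startPlaneDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    session.run(configuration)
}

// Call this from session(_:didAdd:) / session(_:didUpdate:) in your ARSessionDelegate.
func logClassification(of anchors: [ARAnchor]) {
    guard ARPlaneAnchor.isClassificationSupported else { return }   // false on pre-A12 devices
    for case let plane as ARPlaneAnchor in anchors {
        switch plane.classification {
        case .wall:   print("wall")
        case .window: print("window")
        case .door:   print("door")
        default:      print("other / not yet classified")
        }
    }
}
```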

ARKit and Unity - How can I detect the act of hitting the AR object by a real world object from the camera?

Say someone in real life waved their hand and hit the 3D object in AR. How would I detect that? I basically want to know when something crosses over the AR object so I can know that something "hit" it and react.
Another example would be to place a virtual bottle on the table and then wave your hand in the air where the bottle is and then it gets knocked over.
Can this be done? If so, how? I would prefer a Unity solution, but if this can only be done via Xcode and ARKit natively, I would be open to that as well.
ARKit does solve a ton of issues with AR and makes them a breeze to work with. Your issue just isn't one of them.
As @Draco18s notes (and emphasizes well with the xkcd link 👍), you've perhaps unwittingly stepped into the domain of hairy computer vision problems. You do have some building blocks to work with, though: ARKit provides pixel buffers for each video frame, and the projection matrix you need to work out what portion of the 2D image is overlaid by your virtual water bottle.
Deciding when to knock over the water bottle is then a problem of analyzing frame-to-frame differences over time in that region of the image (and tracking that region's movement relative to the whole camera image, since the user probably isn't holding the device perfectly still). The amount of analysis required varies with the sophistication of the effect you want: a simple pixel diff might work (for some value of "work"), or there might be existing machine learning models that you could put together with Vision and Core ML...
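If you drop down to native ARKit, a crude version of that pixel-diff idea could look like the sketch below. The region is assumed to already be expressed in the captured image's pixel coordinates (e.g. by projecting the bottle's bounds with ARCamera.projectPoint), and the threshold is just a starting point to tune:

```swift
import ARKit

// Crude sketch: flag motion in the part of the camera image covered by the virtual object.
final class RegionMotionDetector {
    private var previousMean: Float?

    /// `region` is in the captured image's pixel coordinates (luma plane).
    func didDetectMotion(in frame: ARFrame, region: CGRect, threshold: Float = 12) -> Bool {
        let pixelBuffer = frame.capturedImage              // biplanar YCbCr buffer from ARKit
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return false }
        let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)

        // Clamp the region of interest to the luma (Y) plane.
        let x0 = max(0, Int(region.minX)), x1 = min(width, Int(region.maxX))
        let y0 = max(0, Int(region.minY)), y1 = min(height, Int(region.maxY))
        guard x1 > x0, y1 > y0 else { return false }

        // Average the luma values inside the region.
        let luma = base.assumingMemoryBound(to: UInt8.self)
        var sum = 0
        for y in y0..<y1 {
            for x in x0..<x1 {
                sum += Int(luma[y * bytesPerRow + x])
            }
        }
        let mean = Float(sum) / Float((x1 - x0) * (y1 - y0))
        defer { previousMean = mean }

        // "Something moved in front of the bottle" == the region's brightness jumped.
        guard let previous = previousMean else { return false }
        return abs(mean - previous) > threshold
    }
}
```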
You should take a look at ManoMotion: https://www.manomotion.com/
They're working on this issue and are supposed to release a solution in the form of a library soon.

Creating a 3d coordinate system plus 3d vectors in ActionScript?

I am just starting to dive into the world of 3D objects and perspectives, esp. in Flash.
My goal is to have a 3d coordinate system with grids plus the option to define x, y, z values of a vector to be plotted.
Simple example:
It would also be great to use the mouse to rotate the coordinate system, e.g. like in this video or here.
Does anybody know if there is a tool or library that provides such a 3D coordinate system? I would prefer Flash AS2/AS3, but this is not a must. The only requirement: the tool must run within the browser, with no use of software such as Blender or SketchUp.
Maybe somebody has already written a program like that?
Thank you.
PS: I know that there are web services like Wolfram Alpha that can plot in 3D, but I need an interactive tool.
Papervision3D is an open-source ActionScript 3 library that can handle all sorts of 3D rendering: http://code.google.com/p/papervision3d/ . I've used it for programs that give you a full 3D environment with the typical 6 degrees of freedom.

How to use the VGA camera as an optical sensor?

I am designing an information kiosk which incorporates a mobile phone hidden inside the kiosk.
I wonder whether it would be possible to use the VGA camera of the phone as a sensor to detect when somebody is standing in front of the kiosk.
Which software components (e.g. Java, APIs, Bluetooth stack, etc.) would be required for code to use the VGA camera for movement detection?
The obvious choice is to use face detection, but you would have to calibrate this to ensure that the detected face is close enough to the kiosk, maybe using the relative size of the face in the picture. This could be done with the OpenCV library, which is widely used. But since this kiosk would be deployed in places where you have little control over the lighting, there's a good chance of false positives and negatives. Maybe you also want to consider a proximity sensor in combination with face detection.
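To illustrate the relative-size calibration (using Apple's Vision framework here instead of OpenCV, purely as a sketch; the 0.2 area threshold is a guess you would tune on site):

```swift
import Vision
import CoreGraphics

// Sketch: treat a face as "standing at the kiosk" when it fills enough of the frame.
// Vision's bounding boxes are normalised (0...1), so the area is a fraction of the image.
func isSomeoneAtKiosk(in image: CGImage, minFaceArea: CGFloat = 0.2) throws -> Bool {
    let request = VNDetectFaceRectanglesRequest()
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])

    guard let faces = request.results as? [VNFaceObservation] else { return false }
    return faces.contains { face in
        let box = face.boundingBox                      // normalised rect
        return box.width * box.height >= minFaceArea    // close enough == large enough
    }
}
```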
Depending on what platform the information kiosk is using, the options would vary... But assuming there is Linux somewhere underneath, you should take a look at the OpenCV library. And in case it is of any use, here's a link to my fun little experiment to get a "nod-controlled interface" for reading long web pages.
And speaking of false positives, or even worse, false negatives: in the case of bad lighting or an unusual angle the chances are pretty high. So you'd need to complement that with some fallback mechanism, like an on-screen "press here to start" button shown by default, and then use an inactivity timeout alongside the face detection so you don't rely on a single input vector.
Another idea (depending on the light conditions) might be to measure the overall amount of light in the picture: natural light should produce only slow changes, while a person walking up to the kiosk would cause a rapid lighting change.
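A small sketch of that brightness heuristic, using Core Image's CIAreaAverage to reduce each frame to one averaged pixel (the jump threshold is an assumption to tune; you would feed it a CIImage built from each camera frame):

```swift
import CoreImage

// Sketch: average the whole frame down to one pixel and watch for sudden brightness jumps.
final class BrightnessChangeDetector {
    private let context = CIContext()
    private var previousLuma: Float?

    func didLightingChangeSuddenly(in image: CIImage, jumpThreshold: Float = 0.15) -> Bool {
        // CIAreaAverage reduces the image to a single averaged pixel.
        guard let filter = CIFilter(name: "CIAreaAverage",
                                    parameters: [kCIInputImageKey: image,
                                                 kCIInputExtentKey: CIVector(cgRect: image.extent)]),
              let output = filter.outputImage else { return false }

        var pixel = [UInt8](repeating: 0, count: 4)
        context.render(output,
                       toBitmap: &pixel,
                       rowBytes: 4,
                       bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                       format: .RGBA8,
                       colorSpace: nil)

        // Rough luma from the averaged RGB values.
        let luma = (0.299 * Float(pixel[0]) + 0.587 * Float(pixel[1]) + 0.114 * Float(pixel[2])) / 255
        defer { previousLuma = luma }

        guard let previous = previousLuma else { return false }
        return abs(luma - previous) > jumpThreshold
    }
}
```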
In J2ME (Java for mobile phones), you can use the MMAPI (Mobile Media API) to capture the camera picture.
Most phones support this.
@Andrew's suggestion of OpenCV is good; there are a lot of motion-detection projects. But I would suggest adding a cheap CMOS camera rather than using the mobile phone's camera.
