AR Player (360 Videos) - augmented-reality

I want to experiment with 360° videos and I have a little idea, but I don't know how I can solve this. I want to embed a video of a street, display the street names, and draw colored lines on the roads. When I move the view of the video (turning 45°, for example), I want the lines and street names to move with the video and always stay in their places.
Do you have a solution approach for me?

I understand that you want a 360° video with street names? It will be easiest to just render everything on top of that video. You will need to render the AR layer as an equirectangular 360x180 video in software like Blender and then merge it with your existing video in something like Adobe After Effects.
An alternative would be to do the AR layer in a 3D engine like Unity and put the video in the background.
It's hard to suggest anything more specific, as what you're asking is very broad.
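If you go the 3D-engine route on iOS, the same idea also works with SceneKit instead of Unity: map the equirectangular video onto the inside of a sphere and place the street-name labels as nodes in the scene, so they keep their world position when the camera turns. Here is a minimal Swift sketch of that idea; the file name "street.mp4", the label text, and the node positions are placeholders, not anything from your project:

```swift
import SceneKit
import SpriteKit
import AVFoundation

// Sketch: play an equirectangular 360° video on the inside of a sphere and pin a
// label node at a fixed world direction so it stays "attached" to the street as
// the camera rotates. File name, label text, and positions are placeholders.
func make360Scene() -> SCNScene {
    let scene = SCNScene()

    // The viewer sits at the center of the sphere.
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    scene.rootNode.addChildNode(cameraNode)

    // Route the video through a SpriteKit scene so it can be used as a material.
    let player = AVPlayer(url: Bundle.main.url(forResource: "street", withExtension: "mp4")!)
    let videoNode = SKVideoNode(avPlayer: player)
    let skScene = SKScene(size: CGSize(width: 4096, height: 2048))
    videoNode.position = CGPoint(x: skScene.size.width / 2, y: skScene.size.height / 2)
    videoNode.size = skScene.size
    skScene.addChild(videoNode)

    // Sphere with the video visible on its inside surface.
    let sphere = SCNSphere(radius: 50)
    sphere.firstMaterial?.diffuse.contents = skScene
    sphere.firstMaterial?.isDoubleSided = true
    // Mirror the texture horizontally so it reads correctly from inside the sphere.
    sphere.firstMaterial?.diffuse.contentsTransform = SCNMatrix4MakeScale(-1, 1, 1)
    sphere.firstMaterial?.diffuse.wrapS = .repeat
    scene.rootNode.addChildNode(SCNNode(geometry: sphere))

    // A street-name label anchored at a fixed direction; because it lives in the
    // scene (not on the screen), it keeps its place when the view rotates.
    let text = SCNText(string: "Main Street", extrusionDepth: 0.5)
    let textNode = SCNNode(geometry: text)
    textNode.position = SCNVector3(0, -2, -20)
    textNode.scale = SCNVector3(0.2, 0.2, 0.2)
    scene.rootNode.addChildNode(textNode)

    player.play()
    return scene
}
```

The colored road lines could be added the same way, as flat geometry nodes placed at the directions where the roads appear in the footage.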

Related

How to play video over SKScene

Basically I have an SKScene, and I want to play a video over the scene. The video is confetti falling with an alpha background. It will play when the player gets a high score. I am using an SKScene with shapes and images drawn with shape nodes and image nodes. I just was wondering if anyone could please tell me how to play the video over the screen and still see the game in the back, and be able to touch the buttons through the video. It is supposed to look like an animation playing.
I am using a video because I was just thinking that playing a video would be more processor efficient than having the game generate particles.
There is no built-in iOS solution. You can play 24BPP (fully opaque) movies under iOS, but the only built-in way to display alpha-channel video would be to load a series of PNG images with alpha. The downside is that this takes up a huge amount of memory and bloats the app download. If you want to have a look at some working examples of this kind of functionality in a third-party app, see Alpha Channel Examples. You might also be interested in this blog post, which shows example code for implementing alpha-channel textures in OpenGL that could be layered on top of SpriteKit too. The cube example shows rendering an alpha-channel movie onto a cube; it was adapted from a Ray Wenderlich tutorial.
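For illustration, here is a minimal Swift sketch of the PNG-sequence approach on top of SpriteKit; the frame names and frame count are placeholders, and the memory cost mentioned above grows quickly with frame count and size:

```swift
import SpriteKit

// Sketch of the "series of PNGs with alpha" approach using SpriteKit's built-in
// frame animation. Frame names ("confetti_000" ...) are placeholders.
func playConfettiOverlay(in scene: SKScene) {
    let frames: [SKTexture] = (0..<60).map { SKTexture(imageNamed: String(format: "confetti_%03d", $0)) }
    let overlay = SKSpriteNode(texture: frames.first)
    overlay.size = scene.size
    overlay.position = CGPoint(x: scene.size.width / 2, y: scene.size.height / 2)
    overlay.zPosition = 1000                  // draw above the game content
    overlay.isUserInteractionEnabled = false  // touches pass through to the game
    scene.addChild(overlay)

    // Step through the frames, then remove the overlay when the animation ends.
    let animate = SKAction.animate(with: frames, timePerFrame: 1.0 / 30.0)
    overlay.run(.sequence([animate, .removeFromParent()]))
}
```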
Here is an answer on how to do that with GPUImageView. There is also a project on GitHub here and a similar question on Stack Overflow.
The video stack doesn't yet support formats with alpha. For confetti, you should use SKEmitterNode. Size it to the area you envisioned for your video, and see Creating Particle Effects, in particular its link to "Add a particle emitter to your project", and try out the "Snow" effect. It looks more like confetti when you give it a color other than white. Click the dot under "Color Ramp" to set the color.
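A minimal Swift sketch of that emitter approach; the "Confetti.sks" file name is a placeholder for the particle file you would create from the "Snow" template, and the color and timing are just examples:

```swift
import SpriteKit

// Sketch: load a particle file created from Xcode's "Snow" template, recolor it,
// and spread it across the top of the scene so it falls like confetti.
func showConfetti(in scene: SKScene) {
    guard let emitter = SKEmitterNode(fileNamed: "Confetti.sks") else { return }
    emitter.position = CGPoint(x: scene.size.width / 2, y: scene.size.height)
    emitter.particlePositionRange = CGVector(dx: scene.size.width, dy: 0) // emit across the top edge
    emitter.particleColor = .red              // anything other than white reads as confetti
    emitter.particleColorBlendFactor = 1.0
    emitter.zPosition = 1000                  // draw above the game content
    scene.addChild(emitter)

    // Remove the emitter after the celebration.
    emitter.run(.sequence([.wait(forDuration: 4.0), .removeFromParent()]))
}
```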

iOS: Draw on top of AV video, then save the drawing in the video file

I'm working on an iPad app that records and plays videos using AVFoundation classes. I have all of the code for basic record/playback in place, and now I would like to add a feature that allows the user to draw and make annotations on the video, something I believe will not be too difficult. The harder part, and something that I have not been able to find any examples of, will be combining the drawing and annotations into the video file itself. I suspect this part is accomplished with AVComposition, but I have no idea exactly how. Your help would be greatly appreciated.
Mark
I do not think that you can actually save a drawing into a video file in iOS. You could, however, save the drawing separately and synchronize it with the video using a transparent overlay view. In other words, say the user circled something at 3 minutes 42 seconds in the video; when the video is played back, you overlay the saved drawing onto the video at the 3:42 mark. It's not what you want, but I think it is as close as you can get right now.
EDIT: Actually there might be a way after all. Take a look at this tutorial. I have not read the whole thing but it seems to incorporate the overlay function you need.
http://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
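For reference, here is a minimal Swift sketch of that tutorial's general approach, using AVVideoCompositionCoreAnimationTool to composite a drawing layer over the video track during export. The drawing layer, URLs, frame rate, and error handling are assumptions for illustration, not a complete implementation:

```swift
import AVFoundation
import UIKit

// Sketch: burn a CALayer (e.g., the user's drawing) into a new movie file.
func export(videoURL: URL, drawingLayer: CALayer, to outputURL: URL,
            completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: videoURL)
    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return completion(false) }

    // Copy the source video track into a mutable composition.
    let composition = AVMutableComposition()
    guard let compTrack = composition.addMutableTrack(withMediaType: .video,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid)
    else { return completion(false) }
    try? compTrack.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                   of: videoTrack, at: .zero)

    // Layer tree: the video is rendered into videoLayer, with the drawing on top.
    let size = videoTrack.naturalSize
    let videoLayer = CALayer()
    let parentLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: size)
    parentLayer.frame = videoLayer.frame
    drawingLayer.frame = videoLayer.frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(drawingLayer)

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = size
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    instruction.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: compTrack)]
    videoComposition.instructions = [instruction]

    // Export the composition with the overlay baked in.
    guard let exporter = AVAssetExportSession(asset: composition,
                                              presetName: AVAssetExportPresetHighestQuality)
    else { return completion(false) }
    exporter.outputURL = outputURL
    exporter.outputFileType = .mov
    exporter.videoComposition = videoComposition
    exporter.exportAsynchronously { completion(exporter.status == .completed) }
}
```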

Canvas and Object for Corona

In Corona, I am trying to write a jigsaw-puzzle-like game, but the pieces are part of a video. Is there a way to run a video sequence and represent clipped segments of that video in display objects?
Any samples would be great....
Regards
If by clipped you mean creating a display object that plays a snippet of the video rather than the whole thing, then yes; if you mean cutting out a shape from the video in the size/shape of a jigsaw puzzle piece, then no.
If it's the former, take a look at the native.newVideo() API.

Augmented Reality Gaming

I want to develop an augmented reality game. The player will stand in a room and some cameras will take video of them. The idea is to add a monster to that video, which the player will see through glasses or directly on an LCD. Basically this can be done with some image processing concepts: adding colored parts or some markers where the monster will be, plus some hard work, would do that.
But my question is how to make this monster move, so that the result is a video in which the monster looks like it is attacking the player. The actual game starts after that, but I will go step by step. The first step is to have that video with the attacking monster.
I'm completely new to this; I have only used OpenCV. So I will need some tools to achieve my goal. Where would you suggest I start? I prefer C++, but suggestions for any language with a suitable API are also accepted. I'm also open to theoretical and conceptual suggestions. Thank you for reading my question.
Note: This idea came to my mind after watching the anime Sword Art Online. If you like watching anime and virtual reality stuff, I suggest you watch it. It is a good one.
If you want the monster to move as if it is attacking the player, you will need to know the 3D coordinates of the player, or at least of some parts of the player. This can be done by having the player wear recognizable markers that can be detected, so that a homography can be extracted to get the 3D position.
You can start by reading this post on the topic; it is about augmented reality in C++ with OpenCV.

Create a Video Overlaying another Video

I am trying to make a nice pretty video.
I have an AVI video from a GoPro camera, and I have some info I want to overlay on top of the video, like time, GPS, speed, G-force, etc.
I have my raw data and coded it up in ActionScript into a Flash movie, but then worked out that I have two issues.
Flash export to AVI is pretty crap and basically does a screen capture.
The export to AVI can't be transparent or anything but a square/rectangle.
So, can anyone suggest a better way? Should I use something other than Flash to create my speedometer, something that is more friendly for overlaying on an AVI?
This is the sort of thing I am trying to create.
youtube.com/watch?v=tT-vDtQyCbo
I have a CSV of all my raw data, and I am trying to find a way to overlay it so it looks as professional as the link above. I can make the dials in ActionScript, but exporting to AVI with a 'screen capture' type program, they look pretty crap. On the other hand, importing my HD video into Flash makes it pretty poor quality, and I still have the export issue at the end.
I'm not 100% clear on what you're trying to do.
If you mean that you want to put some info over a video using Flash, all you need to do is import your video onto the timeline on one layer and then place your information on a higher layer. If you want your video to play as a different shape, then you can simply apply a mask to the layer your video is sitting on.
If you add more detail then I'll improve this answer for you :)
