I'm trying to use the Perspective Virtual Camera library to create a game where the player can move. The library seems to be pretty well known, as I see many people referring to it, but I did not find any tutorials on how to use it. The "Camera" is working and the main player is being followed around, but that's it.
Basically, I want the background sky to always stay on the screen, and the mountains and trees to move. How could I do that? Is this the right tool for the job?
This is the code that adds them to the camera:
camera:add(background, 8, false) -- SKY
camera:add(montanhas, 7, false) -- Mountains
camera:add(arvores, 6, false) -- Trees
camera:add(floor, 5, false) -- Floor
camera:add(hero, 1, true) -- Hero
The grey circles at the bottom of the image are my HUD. I'm not adding them to the camera (just to the scene), so they stay in the correct position.
Thanks guys!
I don't know how to use that plugin, but it might be easier not to use it. Just create different groups with the display.newGroup() API. You would basically have one group for the foreground and one group for the background.
You might want to try this: instead of inserting the background into the groups that the library sets up, use the object:toBack() API.
I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I already found ARReferenceImage, and that basically works for a very, very simple prototype, but it seems the image needs to be quite complex? Xcode always complains if I try to use something a lot simpler (like a QR-code-like image). With that marker I would know the position of an edge, and then I'd know the physical size of my shelf and how to place stuff into it. So that would be OK, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect e.g. planes, but I want to detect the shelf itself. But as my shelf is open, it's not really a plane. Are there other possibilities to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction. Or tell me if that's even possible with ARKit or if I need other tools? Like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
the image needs to be quite complex it seems? Xcode always complains if I try to use something a lot simpler (like a QR-Code like image)
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black and white checkerboard pattern doesn't work very well. A complex image does.
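If you go the image-anchor route, detection itself is only a few lines: load the reference images into a world-tracking configuration and wait for an ARImageAnchor to appear. A minimal sketch (the resource group name "ShelfMarkers" and the class name are assumptions):

import ARKit

class MarkerDetectionController: NSObject, ARSessionDelegate {
    func startImageDetection(on sceneView: ARSCNView) {
        let configuration = ARWorldTrackingConfiguration()
        // "ShelfMarkers" is a hypothetical AR Resource Group in the asset catalog
        if let markers = ARReferenceImage.referenceImages(inGroupNamed: "ShelfMarkers",
                                                          bundle: nil) {
            configuration.detectionImages = markers
        }
        sceneView.session.delegate = self
        sceneView.session.run(configuration)
    }

    // Called once ARKit has found one of the reference images
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            // imageAnchor.transform holds the marker's position and orientation;
            // position your shelf content relative to it from here.
            print("Detected marker:", imageAnchor.referenceImage.name ?? "unnamed")
        }
    }
}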
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle this data file with your app. When a user uses the app, ARKit will try to recognise this object and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
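A rough sketch of what using a scanned reference object might look like (the resource group name "ScannedShelf" is an assumption, and the sphere is just placeholder content):

import ARKit

class ObjectDetectionController: NSObject, ARSCNViewDelegate {
    func startObjectDetection(on sceneView: ARSCNView) {
        let configuration = ARWorldTrackingConfiguration()
        // "ScannedShelf" is a hypothetical AR Resource Group containing .arobject scans
        if let objects = ARReferenceObject.referenceObjects(inGroupNamed: "ScannedShelf",
                                                            bundle: nil) {
            configuration.detectionObjects = objects
        }
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // Called when ARKit recognises the scanned object
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARObjectAnchor else { return }
        let marker = SCNNode(geometry: SCNSphere(radius: 0.02))
        node.addChildNode(marker) // content positioned relative to the detected object
    }
}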
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
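A minimal sketch of that tap-to-place flow, assuming an ARSCNView and horizontal plane detection (the anchor name "shelfPoint" is a placeholder):

import ARKit

class PlacementViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)

        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Raycast from the tapped screen point onto detected horizontal planes
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .horizontal),
              let result = sceneView.session.raycast(query).first else { return }

        // Create an anchor at the hit location and add it to the session;
        // content can then be positioned relative to this anchor.
        let shelfAnchor = ARAnchor(name: "shelfPoint", transform: result.worldTransform)
        sceneView.session.add(anchor: shelfAnchor)
    }
}

Your ARSCNViewDelegate's renderer(_:didAdd:for:) will then be called for shelfAnchor, and that is where you attach the 3D content.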
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls these object anchors (ARObjectAnchor): https://developer.apple.com/documentation/arkit/arobjectanchor. In other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize since their content often changes.
I'm trying to develop a platform game similar to Geometry Dash, but I'm facing a lot of problems while working out the algorithm.
I barely know how to proceed. Are levels structured as one long image (the ground) with obstacles added to it, or are obstacles generated progressively during the game?
I'd like to know where to start, what to draw and how to place it in my game, and how to build the collision detection.
The game will be an auto-scrolling platformer, so will the character's sprite move right, or will everything in the level except the character move left?
I'm a beginner, so I would like answers that are detailed and not too difficult to understand. Thank you.
If you have any advice, I would gladly listen to it.
I've done all the Corona tutorials, but they don't explain how to make a platformer. - Luca Pasini
It looks like you don't yet have a feel for how a game works from the inside. Tutorials probably won't help you much. I think you need to start with something very simple on your own, not from tutorials.
For example:
Show a red rectangle
Show a blue rectangle
On a tap on the screen, the red rectangle must change its position (not with transitions, just by directly changing its x and y)
If they collide, show the text "You win". Do the collision check with raw calculations.
Then keep making updates that make it look more like a game.
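For what it's worth, a minimal sketch of that exercise is below. It is written as a SpriteKit scene in Swift rather than in Corona/Lua, but the steps map one-to-one: two rectangles, move the red one on tap by setting its position directly, and check for collision with a raw rectangle test.

import SpriteKit

class ExerciseScene: SKScene {
    let red = SKSpriteNode(color: .red, size: CGSize(width: 60, height: 60))
    let blue = SKSpriteNode(color: .blue, size: CGSize(width: 60, height: 60))
    let winLabel = SKLabelNode(text: "You win")

    override func didMove(to view: SKView) {
        red.position = CGPoint(x: 100, y: 200)
        blue.position = CGPoint(x: 400, y: 200)
        winLabel.position = CGPoint(x: size.width / 2, y: size.height - 80)
        winLabel.isHidden = true
        addChild(red)
        addChild(blue)
        addChild(winLabel)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let location = touches.first?.location(in: self) else { return }
        // Move the red rectangle by setting its position directly (no transitions/actions)
        red.position = location

        // Raw collision check: compare the two bounding rectangles
        if red.frame.intersects(blue.frame) {
            winLabel.isHidden = false
        }
    }
}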
I want to make a 2-player, split-screen mode like Tiny Wings HD did, where each side of an iPad gets a flipped-orientation view of the current level.
I wanted to also implement it on tvOS (without the flipped orientation) as I feel TV begs for this sort of gameplay as it's pretty classic to have this style of gameplay on TV (e.g. Mario Kart 64 or Goldeneye).
Over on the Apple Developer forum, someone suggested that it could be done as follows, but there were no other responses.
"You can have two views attached to the main window (add a subview in your viewcontroller). To both views you can present a copy of the scene. Then you can exchange game data between scenes via singletons."
I was looking for a more in-depth explanation as I don't exactly understand what the answer is saying.
I'd just like to be able to have two cameras both rendering the same scene but one focusing on player 1 and the other player 2.
Obviously this isn't a simple answer, so I don't expect a full in-depth tutorial.
Unfortunately I could find no info on this.
Has anyone tried this?
A sample project would be ideal or some documentation/links that might help.
I'm sure a demonstration of this would be valuable to quite a lot of people.
No Cameras involved or necessary
The players just look like they're moving along the x axis because the backgrounds are scrolling by. You can allow the players to move up and down on the y axis when jumping, ducking, rolling or following a path like in Tiny Wings, but the player never leaves their x position. You can even have the background on each half of the screen scroll at a different speed to show that one player is moving faster than the other.
In your scene file's update method you can scroll your backgrounds, and in your touches methods you can make the players jump, duck, etc.
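A minimal SpriteKit sketch of that idea (node names, image names and speeds here are all made up): the players keep a fixed x, and only the background strips move in update(_:).

import SpriteKit

class ScrollingScene: SKScene {
    // One background strip per half of the screen; names and speeds are made up
    let topBackground = SKSpriteNode(imageNamed: "hills")
    let bottomBackground = SKSpriteNode(imageNamed: "hills")
    let topScrollSpeed: CGFloat = 4      // player 1 appears to move faster
    let bottomScrollSpeed: CGFloat = 2   // player 2 appears to move slower

    override func didMove(to view: SKView) {
        topBackground.position = CGPoint(x: size.width / 2, y: size.height * 0.75)
        bottomBackground.position = CGPoint(x: size.width / 2, y: size.height * 0.25)
        addChild(topBackground)
        addChild(bottomBackground)
    }

    override func update(_ currentTime: TimeInterval) {
        // The players never change x; scrolling the backgrounds creates the motion
        topBackground.position.x -= topScrollSpeed
        bottomBackground.position.x -= bottomScrollSpeed

        for background in [topBackground, bottomBackground]
            where background.position.x <= -background.size.width / 2 {
            // Simplified wrap; a real game would tile two copies of each strip
            background.position.x = size.width + background.size.width / 2
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Decide which half was touched and make that player jump or duck here,
        // changing only the player's y position, never its x.
    }
}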
Instead of using an SKView to present an SKScene, you can use SKRenderer and MTKView. SKRenderer renders a scene into a Metal pipeline, which in turn can be presented by an MTKView.
Crucially, you can decide if SKRenderer updates the scene, allowing you to render the same scene frame multiple times (possibly using different cameras).
So a pipeline might look like this:
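(A minimal Swift sketch, assuming an MTKView driven by a delegate and one SKCameraNode per player; names like leftCamera and rightCamera are placeholders.)

import MetalKit
import SpriteKit
import QuartzCore

class SplitScreenRenderer: NSObject, MTKViewDelegate {
    let commandQueue: MTLCommandQueue
    let renderer: SKRenderer
    let scene: SKScene
    let leftCamera = SKCameraNode()   // follows player 1
    let rightCamera = SKCameraNode()  // follows player 2

    init(scene: SKScene, device: MTLDevice) {
        self.commandQueue = device.makeCommandQueue()!
        self.renderer = SKRenderer(device: device)
        self.scene = scene
        super.init()
        scene.addChild(leftCamera)
        scene.addChild(rightCamera)
        renderer.scene = scene
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let drawable = view.currentDrawable,
              let passDescriptor = view.currentRenderPassDescriptor,
              let commandBuffer = commandQueue.makeCommandBuffer() else { return }

        // Advance the simulation once per frame
        renderer.update(atTime: CACurrentMediaTime())

        let size = view.drawableSize
        let leftViewport = CGRect(x: 0, y: 0, width: size.width / 2, height: size.height)
        let rightViewport = CGRect(x: size.width / 2, y: 0,
                                   width: size.width / 2, height: size.height)

        // Render the same frame twice, once per camera/viewport
        scene.camera = leftCamera
        renderer.render(withViewport: leftViewport,
                        commandBuffer: commandBuffer,
                        renderPassDescriptor: passDescriptor)

        // Keep the first half: load instead of clear on the second pass
        passDescriptor.colorAttachments[0].loadAction = .load
        scene.camera = rightCamera
        renderer.render(withViewport: rightViewport,
                        commandBuffer: commandBuffer,
                        renderPassDescriptor: passDescriptor)

        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}

You would set this object as the MTKView's delegate and give the view the same MTLDevice; each frame the scene is then updated once and rendered twice, once per half of the screen.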
Apple actually talk about this option in Choosing a SpriteKit Scene Renderer. There's also a section about using SKRenderer in Going Beyond 2D with SpriteKit from WWDC17 which is quite helpful. This answer also shows how to use SKRenderer (albeit in Objective-C).
I want to develop an augmented reality game. The player will stand in a room and some cameras will take video of him. The idea is to add a monster to that video, which the player will see through glasses or directly on an LCD. Basically this can be done with some image processing concepts: adding colored parts or markers where the monster will be, plus some hard work, would do it.
But my question is how to make this monster move so that the resulting video looks like the monster is attacking the player. The actual game starts after that, but I will go step by step. The first step is to have that video with the attacking monster.
I'm completely new to this; I have only used OpenCV. So I will need some tools to achieve my goal. Where would you suggest I start? I prefer C++, but suggestions for any language with a suitable API are also welcome. I'm also open to theoretical and conceptual suggestions. Thank you for reading my question.
Note: this idea came to my mind after watching the anime Sword Art Online. If you like anime and virtual reality stuff, I suggest you watch it. It is a good one.
If you want the monster to move as if it is attacking the player, you will need to know the 3D coordinates of the player, or of some parts of the player. This can be done by having the player wear recognizable markers that can be detected, so that a homography can be extracted to get the 3D position.
You can start by reading this post on the topic; it is about C++ augmented reality with OpenCV.
New to XNA. I would love to hear your input on how to set up my classes for my domino game. So far, I have a "BonesSprite" class which has fields like first value, second value, orientation, position, etc. I have code in the LoadContent method which adds an entry to a List for each bone, as shown in the code below.
Background = Game.Content.Load<Texture2D>(@"Images\Wood");
// Load several different automated sprites into the list
fichasList.Add(new Ficha(Game.Content.Load<Texture2D>(@"Images/46"),
    10, Vector2.Zero, new Vector2(150, 150), 0, 0, true, true));
This is what I have so far: http://i129.photobucket.com/albums/p239/itsshortforleo/Untitled-1copy.jpg
I still can't come up with:
How to deal 7 bones to each player (I have an empty Player class that I don't know how to fill yet)
How to place the 7 bone sprites on the board so that only player 1 can see his bones and not the other players'
How to click on a bone to play it on the board at the exact position right next to another bone and in the correct orientation
How to highlight a bone when the mouse is over it
The game seemed so simple to me until I started designing the classes. Appreciate your help.
Just a few ideas for your consideration:
You can deal with (1) and (2) simply. Make a Player and a Bone class, and add an "owner" field to Bone so that you can assign a Player to it. You did not write whether it is going to be a turn-based "hot seat" game or a network game; either way, you'll get the correct bones to display just by checking their corresponding "owners" in a loop.
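For illustration only, here is roughly what that relationship could look like, sketched in Swift for brevity; the same shape carries straight over to C# classes in XNA.

// Minimal sketch of the Player/Bone ("ficha") relationship described above.
class Player {
    let name: String
    init(name: String) { self.name = name }
}

class Bone {
    let firstValue: Int
    let secondValue: Int
    var owner: Player?   // who holds this bone, nil while still in the boneyard

    init(firstValue: Int, secondValue: Int) {
        self.firstValue = firstValue
        self.secondValue = secondValue
    }
}

// Dealing 7 bones to each player is just assigning owners,
// and drawing a player's hand is a filtered loop over all bones.
func deal(_ bones: [Bone], to players: [Player], handSize: Int = 7) {
    var remaining = bones.shuffled()
    for player in players {
        for _ in 0..<handSize where !remaining.isEmpty {
            remaining.removeLast().owner = player
        }
    }
}

func hand(of player: Player, in bones: [Bone]) -> [Bone] {
    return bones.filter { $0.owner === player }
}

Whether the game is hot-seat or networked, drawing player 1's hand is then just hand(of: player1, in: allBones).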
These are basics of object-oriented programming; I suggest you read more about these concepts before starting a game. It won't take much time, but it will make your life easier.
(4) First think about how to recognize which bone was clicked.
As others suggested, you should also split your questions; (1) and (2) can go together, the others cannot.