Truck trailer angling while driving - XNA

I am creating a new game with trucks (2D, top-down view).
I have the truck moving via the keyboard and a stiff trailer attached to it. My goal is a loose trailer, like on real trucks with detachable trailers. I tried to find something on Google (even formulas) but found nothing. Any ideas what angles and delays I should use to change the trailer's angle? Or am I thinking about this the wrong way?
P.S. I am using XNA.

I figured it out myself. I compute the difference between the truck's angle and the trailer's angle, apply some additions and subtractions each frame, and it behaves naturally.
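The approach described above (comparing the truck's angle with the trailer's and nudging the trailer each frame) can be sketched with a standard kinematic trailer model. This is an illustration in Python rather than the asker's XNA/C# code, and `trailer_length` (the hitch-to-axle distance) is an assumed parameter:

```python
import math

def update_trailer_angle(truck_angle, trailer_angle, speed, trailer_length, dt):
    """Rotate the trailer toward the truck's heading.

    The hitch drags the trailer, so the trailer's angle converges on the
    truck's angle at a rate proportional to speed and to the angle
    difference (a common kinematic trailer model, not the OP's exact code).
    """
    diff = truck_angle - trailer_angle
    # Wrap the difference into [-pi, pi] so the trailer turns the short way.
    diff = math.atan2(math.sin(diff), math.cos(diff))
    return trailer_angle + (speed / trailer_length) * math.sin(diff) * dt
```

When the truck drives straight, the trailer angle decays toward the truck angle; when the truck turns, the trailer lags behind, which is exactly the "loose" feel the question asks for.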

Related

Optimized RPG inventory parsing using OpenCV

I'm trying to develop an OpenCV-based Path of Exile inventory parser. The inventory looks like this, with items left and right. The round things on items are called "sockets"; they are randomized, but they can be hidden.
There are two options for this:
When you hover over an item in game and press CTRL-C, a description of the item is copied to your clipboard. One solution would be to do this on every single inventory cell, re-creating the whole inventory bit by bit. There is an issue with this, however: the "item copy" action is probably logged somewhere, and 12 * 5 = 60 such actions in under 2 seconds would definitely look fishy on GGG's (the devs') end.
Using image-recognition software to decompose the inventory like a human being would. There are several approaches to this, and I'm struggling to find the most effective.
Method 1: Sprite detection
This is the "obvious" method. Store the sprite of every single item in the game (I think there must be around 900-ish sprites for all the bases and unique items, so probably around 250 sprites if we exclude the unique items), and perform sprite detection for each of them, on the whole inventory. This is without a doubt extremely overkill. And it requires tons of storage. Discarded.
Method 2: Reverse sprite detection
For every single sprite in the game, calculate an associated md5 and store it in a file. Using OpenCV, cut out the inventory's items one by one, calculate their md5, and match against the file to detect which item each one is. It's probably faster this way, but it still needs a ton of processing power.
Method 3: Same as #2, but smarter
Use OpenCV to cut out the items one by one and, based on their size, narrow the search (a 2x2 item is either a helmet, boots, gloves, or a shield; a 2x1 item is always a belt; a 1x1 is always a ring/amulet; and so on).
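Methods 2 and 3 both rely on the same first step: cutting the inventory grid into cells and hashing them. A minimal sketch of that step in Python, using plain NumPy slicing (OpenCV images are NumPy arrays); `CELL` is an assumed cell size that you would measure from your client resolution. Note that an exact md5 only matches if the randomized sockets render identically, so a perceptual hash may be needed in practice:

```python
import hashlib

import numpy as np

CELL = 47  # assumed cell size in pixels; measure this from your resolution

def crop_cell(inventory_img, col, row, w=1, h=1):
    """Cut a w x h item out of the inventory screenshot (a NumPy array)."""
    return inventory_img[row * CELL:(row + h) * CELL,
                         col * CELL:(col + w) * CELL]

def item_hash(cell_img):
    """Hash the raw pixels; look this up in a precomputed sprite->md5 table."""
    return hashlib.md5(cell_img.tobytes()).hexdigest()
```

The size-based narrowing in method 3 then just means keeping separate hash tables per item footprint (2x2, 2x1, 1x1, ...) and only searching the table that matches the cropped region's dimensions.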
Is there another method that I'm forgetting? All of these look like they need heavy processing, a lot of code, and up-front work from me.
Thanks in advance.

Why can't virtual reality sets (HTC Vive/Oculus) play standard games

Visually speaking, the "displayed image" (in the steam/vive window) looks very similar to any other game being rendered on the desktop. Eg: Counterstrike, WoW, etc.
Question: Why is it then these games don't "feel" like being in a VR environment?
Also, programmatically speaking (image rendering, camera angles, depth of field, etc.):
Question: Can a non-VR game work with the VR sets as long as you configure the controls to the headset and wands? Eg: Headset = joystick; wand buttons = menu etc.
Thank you.
Edit: Please let me know if you have any reading recommendations on this subject.
The non-VR games simply weren't made for VR.
That said, there are hacks that make non-VR games semi-work in VR. You can check out Vorpx for Oculus, but I don't know of anything for Vive. There will be very big issues and headaches, though.
A lot of things will look bad, like missing graphics, because almost all games take shortcuts so they don't render what you are not supposed to see. For example, there is no sky in RTS games, and the map ends just past the edge of the scrollable space. Or when you're driving a car in a racing game, there probably isn't more to the car than the dashboard (no seats, no back of the car, etc.). No one should see them, so no one made them.
It's even worse with the user interface of these games: no one had depth in mind when designing it, so you'll get an ammo counter that makes your eyes cross, and so on.
I could go on and on with the issues, as this is just the tip of the iceberg.

Domino 2D board game in XNA

New to XNA. I would love to hear your input on how to set up my classes for my Domino game. So far, I have a "BonesSprite" class with fields like first value, second value, orientation, position, etc. In the LoadContent method I create a List entry for each bone, as shown in the code below.
Background = Game.Content.Load<Texture2D>(@"Images\Wood");
// Load several different automated sprites into the list
fichasList.Add(new Ficha(Game.Content.Load<Texture2D>(@"Images\46"),
    10, Vector2.Zero, new Vector2(150, 150), 0, 0, true, true));
This is what i have so far: http://i129.photobucket.com/albums/p239/itsshortforleo/Untitled-1copy.jpg
I still can't figure out:
1. How to deal 7 bones to each player (I have an empty Player class that I don't know how to fill yet)
2. How to place the 7 bone sprites on the board so that only player 1 can see his bones and not the other players'
3. How to click on a bone to play it on the board, in the exact position next to the other bone and in the correct orientation
4. How to highlight a bone when the mouse is over it
The game seemed so simple to me until I started designing the classes. Appreciate your help.
Just a few ideas for your consideration:
You can deal with (1) and (2) simply. Make a Player and a Bone class. Add an "owner" field to Bone so that you can assign a Player to it. You did not write whether it is going to be a turn-based "hot seat" or a network game; either way, you'll display the correct bones just by checking their corresponding "owners" in a loop.
These are basics of object-oriented programming; I suggest you read more about these concepts before starting a game. It won't take much time, and it will make your life easier.
For (4), first think about how to recognize which bone was clicked.
As others suggested, you should also split your questions: (1) and (2) can go together, the others cannot.
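The Player/Bone "owner" suggestion for (1) and (2) can be sketched like this (Python for brevity; the class and function names mirror the answer's wording, not the asker's actual XNA code):

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Player:
    name: str

@dataclass
class Bone:
    first: int
    second: int
    owner: Optional[Player] = None  # None while the bone is still in the boneyard

def deal(bones, players, per_player=7):
    """Shuffle the set and give each player a consecutive slice of 7 bones."""
    random.shuffle(bones)
    for i, player in enumerate(players):
        for bone in bones[i * per_player:(i + 1) * per_player]:
            bone.owner = player

def visible_bones(bones, current_player):
    """Draw face-up only the bones owned by the player at this screen."""
    return [b for b in bones if b.owner is current_player]
```

Rendering then just loops over `visible_bones(...)` for the current player and draws everyone else's bones face-down, which covers the "only player 1 sees his bones" requirement in a hot-seat game.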

iPad Car Race Game

I will be developing a car racing application; you see the car from the top, as well as the track (road), on the screen.
Tilting the iPad (using the accelerometer) will make the car move in the direction of the tilt, but I want to restrict the car's movement to the road only.
In ActionScript 3 we have hit tests between objects, and I could just use a hit test between the car and the road to keep the car on the track.
How can I do the same on the iPad? Will I be working with low-level, coordinate-based logic, or is there something easier to keep the car from going off the road?
Do you suggest I look into cocos2d?
It's a very broad question; kindly answer whatever you can instead of down-voting (as has happened before), and just point out a direction rather than working out an exact/specific answer. Thanks.
Short answer: you will have to calculate this yourself. Pretty much the only function the iOS SDK provides that sort of helps with collision detection is CGRectIntersectsRect().
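For reference, the axis-aligned overlap test that CGRectIntersectsRect() performs is easy to write yourself, and modeling the road as a list of rectangles reduces "stay on track" to a handful of such tests. A sketch (in Python for illustration; the `(x, y, width, height)` tuple layout is an assumption):

```python
def rects_intersect(a, b):
    """Axis-aligned rectangle overlap test, the same check
    CGRectIntersectsRect performs. Each rect is (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def car_on_track(car_rect, road_segments):
    """Treat the road as a list of rectangles; the car is 'on track'
    while it overlaps at least one segment."""
    return any(rects_intersect(car_rect, seg) for seg in road_segments)
```

Each frame, apply the accelerometer movement tentatively and only commit it if `car_on_track` still returns True; for curved roads you would need more segments or a pixel/mask test, which is where an engine like cocos2d starts to pay off.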

simple shape recognition

I want to achieve something like the wizard's ability in the game Trine.
I want to create a game where the player uses the mouse to draw certain objects, so I will need to compare the shape the player drew to a predefined shape of my own and check whether it's close.
I have no idea how to achieve this or where to look. I assume it has something to do with shape recognition, as in image processing and computer vision, but it should be much simpler and work in real time.
Does anyone have a clue how this can be done, or where I can look for something like this?
Is this what you're going for? http://www.youtube.com/watch?v=7Zh79q_xvZw
I would start by researching gesture recognition. I think that's the phrase you need to get good info. http://en.wikipedia.org/wiki/Gesture_recognition
Also, sketch recognition: http://en.wikipedia.org/wiki/Sketch_recognition
Have a look at this question. What you are looking for in particular is on-line handwriting recognition, meaning that you follow every move of the user from beginning to end.
Now, you might want to simplify it a whole lot, so one way is defining 9 areas, like a 3x3 grid. Then convert the user's movement into a list of the grid cells the user moved through (use thresholds to make sure the cursor stayed in an area for a while). You will end up with an array like this: 1-1, 1-2, 2-2, 2-3 (meaning the user went from the upper-left corner to the upper-middle, and so on).
This information is now fairly easy to match against a set of gestures. If it performs poorly, you can either make it more sophisticated and introduce a Hidden Markov Model, which allows some mistakes in the gesture (while still matching the most likely one in your gesture set), or you could simply display the grid to the user, so that the user learns the gestures like number codes.
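The 3x3-grid idea above can be sketched in a few lines (Python for illustration; the dwell-time thresholding the answer mentions is omitted here, so every sampled mouse point is simply mapped to its cell):

```python
def to_cell_sequence(points, width, height):
    """Map a mouse trail to the 1-based (row, col) cells of a 3x3 grid
    it passes through, collapsing consecutive duplicates."""
    cells = []
    for x, y in points:
        cell = (min(int(3 * y / height), 2) + 1,   # row, clamped to the grid
                min(int(3 * x / width), 2) + 1)    # col, clamped to the grid
        if not cells or cells[-1] != cell:
            cells.append(cell)
    return cells
```

Matching is then a comparison of this sequence against your predefined gesture sequences, e.g. with exact equality or an edit distance if you want to tolerate a wrong cell or two.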
