I'm using the Gamera library for my game's camera, and to be compatible with multiple screen resolutions on mobile I'm using Gamera's setScale function to adjust the size accordingly. Gamera uses love.graphics.scale to scale the graphics, and unfortunately this produces a sub-pixel figure for the graphic size, since the scale ratio is a decimal number. The visible screen area is resized to exact integers, however, so there is no problem there.
This sub-pixel graphic size causes pixel bleed when drawing tiles with sprite batches. I've tried "nearest neighbour" image filtering on the problem images, which helped as a workaround but didn't fix the pixel bleed on the player's y axis. It seems to me there must be a better way, since all that really needs to be done is to math.floor the scale. The 2048x2048 level is resized to something like 1742.8234; if it were simply 1742 there would be no problem.
This is the relevant draw function in Gamera:
function gamera:draw(f)
  love.graphics.setScissor(self:getWindow())
  love.graphics.push()
  local scale = self.scale
  love.graphics.scale(scale)
  love.graphics.translate((self.w2 + self.l) / scale, (self.h2 + self.t) / scale)
  love.graphics.rotate(-self.angle)
  love.graphics.translate(-self.x, -self.y)
  f(self:getVisible())
  love.graphics.pop()
  love.graphics.setScissor()
end
If you want to use math.floor on your scale, all you have to do is replace every occurrence of scale where you want that applied with math.floor(scale).
function gamera:draw(f)
  love.graphics.setScissor(self:getWindow())
  love.graphics.push()
  local scale = self.scale
  love.graphics.scale(math.floor(scale))
  love.graphics.translate((self.w2 + self.l) / math.floor(scale), (self.h2 + self.t) / math.floor(scale))
  love.graphics.rotate(-self.angle)
  love.graphics.translate(-self.x, -self.y)
  f(self:getVisible())
  love.graphics.pop()
  love.graphics.setScissor()
end
Since programmers are lazy and we do not want to calculate things more often than necessary, this solution is not very pretty. So we forget what we did above and simply replace
local scale = self.scale
with
local scale = math.floor(self.scale)
Now, within that function, scale is used as intended.
If we want to keep that change beyond that function's scope, we could just as well do
self.scale = math.floor(self.scale)
local scale = self.scale
Edit due to comment
If you want to ensure that love.graphics.scale() results in integer dimensions, you have to use an appropriate scale.
Sticking to your example:
2048 shall be shrunk to 1742:
love.graphics.scale(1742/2048)
If you, let's say, want to shrink it to 30%:
love.graphics.scale(math.floor(0.3*2048)/2048)
would result in 614 instead of 614.4
Simple math.
Related
I want to detect pixel-perfect collisions between 2 sprites.
I use the following function, which I found online, but it makes total sense to me.
static bool PerPixelCollision(Sprite a, Sprite b)
{
    // Get the color data of each texture
    Color[] bitsA = new Color[a.Width * a.Height];
    a.Texture.GetData(0, a.CurrentFrameRectangle, bitsA, 0, a.Width * a.Height);
    Color[] bitsB = new Color[b.Width * b.Height];
    b.Texture.GetData(0, b.CurrentFrameRectangle, bitsB, 0, b.Width * b.Height);

    // Calculate the intersecting rectangle
    int x1 = (int)Math.Floor(Math.Max(a.Bounds.X, b.Bounds.X));
    int x2 = (int)Math.Floor(Math.Min(a.Bounds.X + a.Bounds.Width, b.Bounds.X + b.Bounds.Width));
    int y1 = (int)Math.Floor(Math.Max(a.Bounds.Y, b.Bounds.Y));
    int y2 = (int)Math.Floor(Math.Min(a.Bounds.Y + a.Bounds.Height, b.Bounds.Y + b.Bounds.Height));

    // For each single pixel in the intersecting rectangle
    for (int y = y1; y < y2; ++y)
    {
        for (int x = x1; x < x2; ++x)
        {
            // Get the color from each texture
            Color colorA = bitsA[(x - (int)Math.Floor(a.Bounds.X)) + (y - (int)Math.Floor(a.Bounds.Y)) * a.Texture.Width];
            Color colorB = bitsB[(x - (int)Math.Floor(b.Bounds.X)) + (y - (int)Math.Floor(b.Bounds.Y)) * b.Texture.Width];

            // If both colors are not transparent (the alpha channel is not 0), then there is a collision
            if (colorA.A != 0 && colorB.A != 0)
            {
                return true;
            }
        }
    }

    // If no collision occurred by now, we're clear.
    return false;
}
(All the Math.Floor calls are useless; I copied this function from my current code, where I'm trying to make it work with floats.)
It reads the color of the sprites in the rectangle portion that is common to both sprites.
This actually works fine, when I display the sprites at x/y coordinates where x and y are int's (.Bounds.X and .Bounds.Y):
View an example
The problem with displaying sprites at int coordinates is that it results in very jagged movement on diagonals:
View an example
So ultimately I would like to not cast the sprite position to int's when drawing them, which results in a smooth(er) movement:
View an example
The issue is that the PerPixelCollision works with ints, not floats, so that's why I added all those Math.Floor calls. As is, it works in most cases, but it's missing one column and one row of checking on the bottom and right (I think) of the common Rectangle, because of the rounding induced by Math.Floor:
View an example
When I think about it, I think it makes sense. If x1 is 80 and x2 would actually be 81.5 but is 81 because of the cast, then the loop will only work for x = 80, and therefore miss the last column (in the example gif, the fixed sprite has a transparent column on the left of the visible pixels).
The issue is that no matter how hard I think about this, or no matter what I try (I have tried a lot of things), I cannot make this work properly. I am almost convinced that x2 and y2 should use Math.Ceiling instead of Math.Floor, so as to "include" the last pixel that is otherwise left out, but then I always get an index out of range on the bitsA or bitsB arrays.
Would anyone be able to adjust this function so that it works when Bounds.X and Bounds.Y are floats?
PS - could the issue possibly come from BoxingViewportAdapter? I am using this (from MonoExtended) to "upscale" my game which is actually 144p.
Remember, there is no such thing as a fractional pixel. For movement purposes, it completely makes sense to use floats for the values and cast them to integer pixels when drawn. The problem is not in the fractional values, but in the way that they are drawn.
The main reason the collisions do not appear to work correctly is the scaling. The new pixels in between the diagonals get their colors by averaging* the surrounding pixels. The effect makes the image appear larger than the original, especially on the diagonals.
*There are several methods that may be used for the scaling; bi-cubic and linear are the most common.
The only direct (pixel-perfect) solution is to compare the actual output after scaling. This requires rendering the entire screen twice and roughly scale-factor times more computation (not recommended).
Since you are comparing the non-scaled images, your collisions appear to be off.
The other issue is movement speed. If you are moving faster than one pixel per Update(), detecting per-pixel collisions is not enough if the movement is to be restricted by the obstacle. You must resolve the collision.
For enemies or environmental hazards your original code is sufficient and collision resolution is not required. It will give the player a minor advantage.
A simple resolution algorithm (see below for a mathematical solution) is to unwind the movement by half and check for collision. If it is still colliding, unwind the movement by a quarter; otherwise advance it by a quarter, and check for collision again. Repeat until the adjustment is less than 1 pixel. This runs log(speed) times.
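A rough sketch of that loop, assuming an XNA-style Vector2 and a collides(...) test of your own (ResolveByBisection and all names here are placeholders, not code from the question):

using System;
using Microsoft.Xna.Framework;

// 'position' is where the sprite ended up after applying 'velocity' and detecting
// a collision; 'collides' tests a candidate position with your own per-pixel check.
static Vector2 ResolveByBisection(Vector2 position, Vector2 velocity,
                                  Func<Vector2, bool> collides)
{
    Vector2 step = velocity * 0.5f;
    position -= step;                          // unwind by half the movement
    while (step.Length() > 1f)                 // stop once the adjustment is < 1 pixel
    {
        step *= 0.5f;
        // still colliding: unwind further; otherwise advance again
        position += collides(position) ? -step : step;
    }
    return position;
}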
As for the top wall not colliding perfectly: if the starting Y value is not a multiple of the vertical movement speed, you will not land perfectly on zero. I prefer to resolve this by setting Y = 0 when Y is negative. The same goes for X, and for X and Y > screen bounds - origin, on the right and bottom of the screen.
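For example, a one-line clamp per axis after movement (screenW, screenH and the sprite size are assumed values, not anything from the question):

// Keep the sprite inside the screen; MathHelper.Clamp is XNA's float clamp.
position.X = MathHelper.Clamp(position.X, 0f, screenW - spriteWidth);
position.Y = MathHelper.Clamp(position.Y, 0f, screenH - spriteHeight);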
I prefer to use mathematical solutions for collision resolution. In your example images, you show a box colliding with a diamond; the diamond shape is described mathematically by the Manhattan distance (Math.Abs(x1-x2) + Math.Abs(y1-y2)). From this fact, it is easy to directly calculate the resolution to the collision.
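As a hedged illustration of that direct calculation for a single point against a diamond of Manhattan "radius" r (ResolveDiamond and its parameters are made-up names, not part of the question's code):

using System;
using Microsoft.Xna.Framework;

// Pushes a point out of a diamond centered at 'center' with Manhattan radius 'radius'.
static Vector2 ResolveDiamond(Vector2 p, Vector2 center, float radius)
{
    float dx = p.X - center.X;
    float dy = p.Y - center.Y;
    float depth = radius - (Math.Abs(dx) + Math.Abs(dy));
    if (depth <= 0f)
        return p;                                // no collision

    // The nearest diamond face has (unnormalised) normal (sign(dx), sign(dy)).
    float sx = Math.Sign(dx);
    float sy = Math.Sign(dy);
    if (sx == 0f && sy == 0f) sy = 1f;           // point exactly at the center

    // Moving s along that direction grows the Manhattan distance by 2*s
    // (by s when the point lies exactly on an axis).
    float step = (sx != 0f && sy != 0f) ? depth / 2f : depth;
    return new Vector2(p.X + sx * step, p.Y + sy * step);
}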
On optimizations:
Be sure to check that the bounding Rectangles are overlapping before calling this method.
As you have stated, remove all the Math.Floor calls, since the cast is sufficient. Move any calculation inside the loops that does not depend on the loop variable outside of the loop.
The (int)a.Bounds.Y * a.Texture.Width and (int)b.Bounds.Y * b.Texture.Width terms do not depend on the x or y loop variables and should be calculated and stored before the loops. The subtractions y - [above value] should be stored in the "y" loop, as sketched below.
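A sketch of what those loops could look like with the invariant parts hoisted out (variable names follow the question's function; this is illustrative, not a drop-in replacement):

int ax = (int)a.Bounds.X, ay = (int)a.Bounds.Y;
int bx = (int)b.Bounds.X, by = (int)b.Bounds.Y;

for (int y = y1; y < y2; ++y)
{
    // per-row offsets, computed once per row instead of once per pixel
    int rowA = (y - ay) * a.Texture.Width - ax;
    int rowB = (y - by) * b.Texture.Width - bx;
    for (int x = x1; x < x2; ++x)
    {
        if (bitsA[x + rowA].A != 0 && bitsB[x + rowB].A != 0)
            return true;
    }
}
return false;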
I would recommend using a bitboard (1 bit per 8 by 8 square) for collisions. It reduces the broad (8x8) collision checks to O(1). For a resolution of 144x144, the entire search space becomes 18x18.
You can wrap your sprite in a rectangle and use its function called Intersect, which detects collisions.
Intersect - XNA
1. Introduction:
So I want to develop a special filter method for UIImages. My idea is to change all the colors in a picture to black, except for a certain color, which should keep its appearance.
Images are always nice, so look at this image to get what I'd like to achieve:
2. Explanation:
I'd like to apply a filter (algorithm) that is able to find specific colors in an image. The algorithm must be able to replace all colors that do not match the reference colors with e.g. black.
I've developed a simple code that is able to replace specific colors (color ranges with threshold) in any image.
But to be honest, this solution doesn't seem to be a fast and efficient way at all!
func colorFilter(image: UIImage, findcolor: String, threshold: Int) -> UIImage {
    let img: CGImage = image.cgImage!
    let context = CGContext(data: nil, width: img.width, height: img.height, bitsPerComponent: 8, bytesPerRow: 4 * img.width, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let binaryData = context.data!.assumingMemoryBound(to: UInt8.self),
        referenceColor = HEXtoHSL(findcolor) // [h, s, l] integer array
    for i in 0..<img.height {
        for j in 0..<img.width {
            let pixel = 4 * (i * img.width + j)
            let pixelColor = RGBtoHSL([Int(binaryData[pixel]), Int(binaryData[pixel+1]), Int(binaryData[pixel+2])]) // [h, s, l] integer array
            let distance = calculateHSLDistance(pixelColor, referenceColor) // value between 0 and 100
            if (distance > threshold) {
                let setValue: UInt8 = 255
                binaryData[pixel] = setValue; binaryData[pixel+1] = setValue; binaryData[pixel+2] = setValue; binaryData[pixel+3] = 255
            }
        }
    }
    let outputImg = context.makeImage()!
    return UIImage(cgImage: outputImg, scale: image.scale, orientation: image.imageOrientation)
}
3. Code Information: The code above works quite fine but is absolutely inefficient. Because of all the calculation (especially the color conversions, etc.) this code takes a LONG (too long) time, so have a look at this screenshot:
My question: I'm pretty sure there is a WAY simpler solution for filtering a specific color (with a given threshold; #c6456f is similar to #C6476f, ...) than looping through EVERY single pixel to compare its color.
So what I was thinking about was something like a filter (a CIFilter-based method) as an alternative to the code above.
Some Notes
Please do not post any replies that suggest using the OpenCV library. I would like to develop this "algorithm" exclusively in Swift.
The image from which the timing screenshot was taken has a resolution of 500 * 800 px.
That's all.
Did you really read this far? Congratulations! Any help on how to speed up my code would be very much appreciated. (Maybe there's a better way to get the pixel colors instead of looping through every pixel.) Thanks a million in advance :)
First thing to do: profile (measure the time consumption of the different parts of your function). It often shows that time is spent in some unexpected place, and it always suggests where to direct your optimization effort. That doesn't mean you have to focus on the most time-consuming part, but it will show you where the time is spent. Unfortunately I'm not familiar with Swift, so I cannot recommend any specific tool.
Regarding iterating through all pixels: it depends on the image structure and your assumptions about the input data. I see two cases in which you can avoid this:
When there is some optimized data structure built over your image (e.g. some statistics about its areas). That usually makes sense when you process the same image with the same (or a similar) algorithm using different parameters. If you process every image only once, it likely will not help you.
When you know that the green pixels always exist in a group, so there cannot be an isolated single pixel. In that case you can skip one or more pixels, and when you find a green pixel, analyze its neighbourhood.
I do not code on your platform but...
Well, I assume your masked areas (with the specific color) are continuous and large enough ... that means you have groups of pixels covering big enough areas (not just stuff a few pixels thick). With this assumption you can create a density map for your color. What I mean is: if the minimum detail size of your specific-color features is 10 pixels, then you can inspect every 8th pixel on each axis, speeding up the initial scan ~64 times. Then use the full scan only for the regions containing your color. Here is what you have to do:
1. determine properties
You need to set the step for each axis (how many pixels you can skip without missing your colored zone). Let's call these dx,dy.
2. create density map
Simply create a 2D array that holds whether the center pixel of each region is set to your specific color. So if your image has resolution xs,ys then your map will be:
int mx=xs/dx;
int my=ys/dy;
int map[mx][my],x,y,xx,yy;
for (yy=0,y=dy>>1;y<ys;y+=dy,yy++)
 for (xx=0,x=dx>>1;x<xs;x+=dx,xx++)
  map[xx][yy]=compare(pixel(x,y) , specific_color)<threshold;
3. enlarge map set areas
Now you should enlarge the set areas in map[][] to the neighboring cells, because step #2 could miss the edges of your color region.
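A minimal sketch of that enlargement step (written in C#; the pseudocode above is C-like, so it should map over directly; it assumes map holds 0/1 values):

static void EnlargeSetAreas(int[,] map, int mx, int my)
{
    int[,] src = (int[,])map.Clone();              // read from a copy, write into map
    for (int yy = 0; yy < my; yy++)
        for (int xx = 0; xx < mx; xx++)
        {
            if (src[xx, yy] == 0) continue;
            for (int oy = -1; oy <= 1; oy++)       // spread the set flag to all 8 neighbours
                for (int ox = -1; ox <= 1; ox++)
                {
                    int nx = xx + ox, ny = yy + oy;
                    if (nx >= 0 && nx < mx && ny >= 0 && ny < my)
                        map[nx, ny] = 1;
                }
        }
}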
4. process all set regions
for (yy=0;yy<my;yy++)
 for (xx=0;xx<mx;xx++)
  if (map[xx][yy])
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     if (compare(pixel(x,y) , specific_color)>=threshold) pixel(x,y)=0x00000000;
If you want to speed this up even more, then you need to detect which set map[][] cells are on an edge (have at least one zero neighbor). You can distinguish the cells like this:
0 - no specific color is present
1 - inside of color area
2 - edge of color area
That can be done simply in O(mx*my). After that you need to check the color only in the edge regions, so:
for (yy=0;yy<my;yy++)
 for (xx=0;xx<mx;xx++)
  if (map[xx][yy]==2)
   {
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     if (compare(pixel(x,y) , specific_color)>=threshold) pixel(x,y)=0x00000000;
   }
  else if (map[xx][yy]==0)
   {
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     pixel(x,y)=0x00000000;
   }
This should be even faster. In case your image resolution xs,ys is not a multiple of the region size dx,dy, you should handle the outer edge of the image either by zero padding or by special loops for that missing part of the image...
By the way, how long does it take to read and set your whole image?
for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
  pixel(x,y)=pixel(x,y)^0x00FFFFFF;
If this alone is slow, then it means your pixel access is too slow and you should use a different API for it. That is a very common mistake on the Windows GDI platform, as people usually use Pixels[][], which is slower than a crawling snail. There are other ways, like bit locking/blitting, ScanLine, etc., so in such a case you need to look for something fast on your platform. If you are not able to speed up even this, then you cannot do much else ... By the way, what hardware does this run on?
The problem is simple: I want to move (and later, be able to rotate) an image. For example, every time I press the right arrow on my keyboard, I want the image to move 0.12 pixels to the right, and every time I press the left arrow key, I want the image to move 0.12 pixels to the left.
Now, I have multiple solutions for this:
1) simply add the incremental value, i.e.:
image.x += 0.12;
this is of course assuming that we're going to the right.
2) I multiply the value of a single increment by the number of times I have already moved in this particular direction + 1, like this:
var result:Number = 0.12 * (numberOfTimesWentRight+1);
image.x = result;
Both of these approaches work but produce similar, yet subtly different, results. If we add some kind of button component that simply resets the x and y coordinates of the image, you will see that with the first approach the numbers don't add up correctly.
It goes .12, .24, .359999, .475, etc.
But with the second approach it works well. (It's pretty obvious why, though: it seems += operations on Numbers are not really precise.)
Why not use the second approach then? Well, I want to rotate the image as well. This will work for the first attempt, but after that the image will jump around. Why? In the second approach we never took the original position of the image into account. So if the origin point shifts a bit down or up because you rotated your image, and THEN you try to move the image again, it will move to the same position as if you hadn't rotated before.
Alright, to make this short:
How can I reliably move, scale and rotate images by 1/10 of a pixel?
Short answer: I don't know! You're fighting with floating point math!
Luckily, I have a workaround, if you don't mind.
You store the location (x and y) of the image in a separate variable... at a larger scale. Such as 100x. So 123.45 becomes 12345, and you then divide by 100 to set the attribute that flash uses to display.
Yes, there are limits to number sizes too, but if you're willing to accept some error rate, and the fact that you'll be limited to, I dunno, a million pixels in each direction, you can fit it in a regular int. The only rounding error you will encounter will be a single rounding error when you divide by 100 (or the factor you used). So instead of the compound rounding error which you described (0.12 * 4 = 0.475), you should see things like 0.47999999. Which doesn't matter because it's, well, so small.
To expand on @Pimgd's answer a bit, you're probably hitting a floating-point error (multiple += operations will exaggerate the error more than one *= does) - Numbers in Flash have 53-bit precision.
There's also another thing to keep in mind, which is probably playing a bigger role with such small movement values: Flash positions all objects using twips, which are about 1/20th of a pixel, or 0.05, so all values are rounded to this. When you say image.x += 0.12, it's actually the equivalent of image.x += 0.10, hence the difference becomes apparent; you're losing 0.02 of a pixel with every move.
You should be able to get around it by moving to another scale, as @Pimgd says, or by just storing your position separately - i.e. work from a property _x rather than image.x so you're not losing that precision every time:
this._x += 0.12;
image.x = this._x;
I want to add force to the grenade according to the touch positions of the user.
--this is the code
physics.addBody(grenade1,"dynamic",{density=1,friction=.9,bounce=0})
grenade1:applyForce(event.x,event.y,grenade1.x,grenade1.y)
Here, the larger the x and y positions are, the lower the force should be. But as it is, the force is so high that the grenade flies up into the sky.
You must calibrate the force applied. Remember that F = ma, so if x = 250 then F = 250; if the mass of the display object (set when the body was added, based on material density * object area) is 1, then the acceleration a = 250, which is very large. So try:
local coef = 0.001
grenade1:applyForce(coef*event.x, coef*event.y, grenade1.x, grenade1.y)
and see what you get. If too small, increase coef until the response is what you are looking for. You may find that linear (i.e., constant coef) doesn't give you the effect you want, for example coef=0.01 might be fine for small y but for large y you might find that coef=0.001 works better. In this case you would have to make coef a function of event.y, for example
local coef = event.y < 100 and 0.001 or 0.01
You could also increase the mass of the display object instead of using coef.
Recall also that the top-left corner is 0,0: a force at the top-left corner will be 0,0. So if you want the force to increase as you go up the screen, you need to use display.contentHeight - event.y.
How do you make a 2D world with a fixed size which repeats itself when you reach any side of the map?
When you reach a side of the map you see the opposite side of the map merged together with this one. The idea is that if you didn't have a minimap you would not even notice the transition as the map repeats itself.
I have a few ideas how to make it:
1) Keep a total of a 3x3 grid of worlds like this at all times, which are exactly the same and updated the same way; the player just exists in only one of them.
2) Another way would be to separate the map into smaller pieces and add them to the required place when needed.
Either way it can be complicated to implement. I remember that more than 10 years ago I played a game like that, with soldiers following each other in a repeating world, shooting other AI soldiers.
Mostly I wanted to hear your thoughts about the idea and how it could be achieved. I'm coding in XNA (C#).
Another alternative is to generate noise using the libnoise libraries. The beauty of this is that you can generate noise over a theoretically infinite amount of space.
Take a look at the following:
http://libnoise.sourceforge.net/tutorials/tutorial3.html#tile
There is also an XNA port of the above at: http://bigblackblock.com/tools/libnoisexna
If you end up using the XNA port, you can do something like this:
Perlin perlin = new Perlin();
perlin.Frequency = 0.5f; //base frequency of the first octave
perlin.Lacunarity = 2f; //frequency increase between octaves
perlin.OctaveCount = 5; //Number of passes
perlin.Persistence = 0.45f; //amplitude decrease between octaves
perlin.Quality = QualityMode.High;
perlin.Seed = 8;
//Create our 2d map
Noise2D _map = new Noise2D(CHUNKSIZE_WIDTH, CHUNKSIZE_HEIGHT, perlin);
//Get a section
_map.GeneratePlanar(left, right, top, down);
GeneratePlanar is the function to call to get the sections in each direction that will connect seamlessly with the rest of your world.
If the game is tile-based, I think what you should do is:
Keep only one array for the game area.
Determine the visible area using modular arithmetic over the size of the game area (mod w and mod h, where these are the width and height of the table).
E.g. if the table is 80x100 (top-left coordinates (0,0), width 80, height 100) and the viewport rect is at (70,90) with a width of 40 and a height of 20, you index with [70-79] and [0-29] for the x coordinate and [90-99] and [0-9] for the y. This can be achieved by calculating the index with the following formula:
idx = (n+i) % 80 (or % 100), where n is the top coordinate (x or y) of the rect and i is in the range of the width/height of the viewport (see the sketch below).
This assumes that one step of movement moves the camera by non-fractional coordinates.
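A short sketch of that lookup in code (the map size, viewport values and the tiles array are just the example numbers from above, not anything from your project):

int mapW = 80, mapH = 100;
int[,] tiles = new int[mapW, mapH];

int viewX = 70, viewY = 90, viewW = 40, viewH = 20;
for (int j = 0; j < viewH; j++)
{
    for (int i = 0; i < viewW; i++)
    {
        int tx = (viewX + i) % mapW;   // 70..79, then wraps around to 0..29
        int ty = (viewY + j) % mapH;   // 90..99, then wraps around to 0..9
        int tile = tiles[tx, ty];
        // draw 'tile' at viewport cell (i, j) ...
    }
}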
So this is your second alternative in a little more detail. If you only want to repeat the terrain, you should keep the tile contents separate. In this case the contents will most likely be generated on the fly, since you don't store them.
Hope this helped.