In XNA, what's the difference between the rectangle width and the texture's width in a sprite?

Let's say you did this: spriteBatch.Draw(myTexture, myRectangle, Color.White);
And you have this:
myTexture = Content.Load<Texture2D>("myCharacterTransparent");
myRectangle = new Rectangle(10, 100, 30, 50);
OK, so now we have a rectangle width of 30. Let's say myTexture's width is 100.
So with the first line, does the sprite's width become 30 because that's the width you set on the rectangle, while myTexture's width stays 100? Or does the sprite's width become 100 because that's the width of the texture?

The Rectangles used by the Draw method define which part of the Texture2D should be drawn, and in which part of the render target (usually the screen).
This is how we use tilesets, for instance:
class Tile
{
int Index;
Vector2 Position;
}
Texture2D tileset = Content.Load<Texture2D>("sometiles"); //128x128 of 32x32-sized tiles
Rectangle source = new Rectangle(0,0,32,32); //We set the dimensions here.
Rectangle destination = new Rectangle(0,0,32,32); //We set the dimensions here.
List<Tile> myLevel = LoadLevel("level1");
//the tileset is 4x4 tiles
in Draw:
spriteBatch.Begin();
foreach (var tile in myLevel)
{
source.Y = (tile.Index / 4) * 32; // row in the 4x4 tileset (integer division)
source.X = (tile.Index % 4) * 32; // column in the 4x4 tileset
destination.X = (int)tile.Position.X;
destination.Y = (int)tile.Position.Y;
spriteBatch.Draw(tileset, destination, source, Color.White);
}
spriteBatch.End();
Note that in this overload the destination rectangle comes first and the source rectangle second: Draw(texture, destinationRectangle, sourceRectangle, color).
Edit: Using only a source rectangle lets you draw just a piece of the texture at a position on the screen, while using only a destination rectangle lets you scale the whole texture to fit wherever you want it.
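To make that concrete, here is a short sketch using the question's names (these are the standard SpriteBatch.Draw overloads; the rectangle values are just examples):
// Destination only: the whole 100-px-wide texture is squeezed into a 30x50 rectangle.
spriteBatch.Draw(myTexture, new Rectangle(10, 100, 30, 50), Color.White);
// Source only (with a position): a 30x50 piece of the texture is drawn at (10, 100), unscaled.
spriteBatch.Draw(myTexture, new Vector2(10, 100), new Rectangle(0, 0, 30, 50), Color.White);
// Both: the 30x50 source region is stretched to fill a 60x100 destination rectangle.
spriteBatch.Draw(myTexture, new Rectangle(10, 100, 60, 100), new Rectangle(0, 0, 30, 50), Color.White);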

Related

Image auto-cropping when rotating in OpenCV.js

I'm using OpenCV.js to rotate an image to the left and right, but it gets cropped when I rotate it.
This is my code:
let src = cv.imread('img');
let dst = new cv.Mat();
let dsize = new cv.Size(src.rows, src.cols);
let center = new cv.Point(src.cols/2, src.rows/2);
let M = cv.getRotationMatrix2D(center, 90, 1);
cv.warpAffine(src, dst, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());
cv.imshow('canvasOutput', dst);
src.delete(); dst.delete(); M.delete();
Here is an example:
This is my source image:
This is what I want:
But it returned like this:
What should I do to fix this problem?
P.S.: I don't know how to use any language other than JavaScript.
A bit late, but given the scarcity of OpenCV.js material I'll post the answer:
The function cv.warpAffine crops the image because it only applies the mathematical transformation, as documented by OpenCV and other sources. If you wish to rotate to any angle, you'll need to calculate the padding needed to compensate for that.
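For arbitrary angles, here is a sketch of that padding approach (untested; it reuses the question's 'img' and 'canvasOutput' elements and reads the 2x3 rotation matrix through M.data64F, since getRotationMatrix2D returns a CV_64F matrix):
let src = cv.imread('img');
let dst = new cv.Mat();
let angle = 45; // degrees, counter-clockwise
let center = new cv.Point(src.cols / 2, src.rows / 2);
let M = cv.getRotationMatrix2D(center, angle, 1);
// size of the bounding box that fits the rotated image
let rad = angle * Math.PI / 180;
let cos = Math.abs(Math.cos(rad)), sin = Math.abs(Math.sin(rad));
let newWidth = Math.round(src.cols * cos + src.rows * sin);
let newHeight = Math.round(src.cols * sin + src.rows * cos);
// shift the transform so the rotated image is centred in the larger canvas
// (entries [2] and [5] of the 2x3 matrix are the translation terms)
M.data64F[2] += newWidth / 2 - center.x;
M.data64F[5] += newHeight / 2 - center.y;
let dsize = new cv.Size(newWidth, newHeight);
cv.warpAffine(src, dst, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());
cv.imshow('canvasOutput', dst);
src.delete(); dst.delete(); M.delete();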
If you only wish to rotate in multiples of 90 degrees, you could use cv.rotate as follows:
cv.rotate(src, dst, cv.ROTATE_90_CLOCKWISE);
Here src is the matrix with your source image, dst is the destination matrix (which can be created empty with let dst = new cv.Mat();), and cv.ROTATE_90_CLOCKWISE is the rotate flag indicating the angle of rotation. There are three options:
cv.ROTATE_90_CLOCKWISE
cv.ROTATE_180
cv.ROTATE_90_COUNTERCLOCKWISE
You can find which OpenCV functions are implemented in OpenCV.js in the repository's opencv_js.config.py file; if a function is whitelisted there, it works in opencv.js even if it is not covered in the OpenCV.js tutorial.
Information on how to use each function can be found in the general OpenCV documentation. The order of the parameters is generally the one given for C++ (don't be distracted by the obscure C++ vector-type syntax), and the names of the flags (like the rotate flag) are usually given in the Python sections.
I was also experiencing this issue, so I had a look at Fernando Garcia's answer. However, I couldn't see that rotate had been implemented in opencv.js, so it seems the fix in the post Dan Mašek links to is the best solution here, although the functions required are slightly different.
This is the solution I came up with (note: I haven't tested this exact code and there is probably a more elegant/efficient way of writing it, but it gives the general idea; also, it will only work for images rotated by multiples of 90°):
const canvas = document.getElementById('canvas');
const image = cv.imread(canvas);
let output = new cv.Mat();
const size = new cv.Size();
size.width = image.cols;
size.height = image.rows;
// To add transparent borders
const scalar = new cv.Scalar(0, 0, 0, 0);
let center;
let padding;
let height = size.height;
let width = size.width;
if (height > width) {
center = new cv.Point(height / 2, height / 2);
padding = (height - width) / 2;
// Pad out the left and right before rotating to make the width the same as the height
cv.copyMakeBorder(image, output, 0, 0, padding, padding, cv.BORDER_CONSTANT, scalar);
size.width = height;
} else {
center = new cv.Point(width / 2, width / 2);
padding = (width - height) / 2;
// Pad out the top and bottom before rotating to make the height the same as the width
cv.copyMakeBorder(image, output, padding, padding, 0, 0, cv.BORDER_CONSTANT, scalar);
size.height = width;
}
// Do the rotation
const rotationMatrix = cv.getRotationMatrix2D(center, 90, 1);
cv.warpAffine(
output,
output,
rotationMatrix,
size,
cv.INTER_LINEAR,
cv.BORDER_CONSTANT,
new cv.Scalar()
);
let rectangle;
if (height > width) {
rectangle = new cv.Rect(0, padding, height, width);
} else {
/* These arguments might not be in the right order as my solution only needed height
* > width so I've just assumed this is the order they'll need to be for width >=
* height.
*/
rectangle = new cv.Rect(padding, 0, height, width);
}
// Crop the image back to its original dimensions
output = output.roi(rectangle);
cv.imshow(canvas, output);

Flutter - How to rotate an image around the center with canvas

I am trying to implement a custom painter that can draw a (scaled-down) image on the canvas, where the drawn image can be rotated and scaled.
I have learned that to scale the image I have to scale the canvas using the scale method.
Now the question is how to rotate the scaled image around its center (or any other point). The rotate method of Canvas only rotates around the top-left corner.
Here is my implementation that can be extended
Had the same problem. The solution was simply to make your own rotation method in three lines:
void rotate(Canvas canvas, double cx, double cy, double angle) {
canvas.translate(cx, cy);
canvas.rotate(angle);
canvas.translate(-cx, -cy);
}
We thus first translate the canvas to the point you want to pivot around, then rotate around the top-left corner (the default for Flutter), which in the shifted coordinate space is exactly the pivot you want, and finally translate the canvas back to its original position with the rotation applied. The method is very efficient, requiring only four additions for the translations, and the rotation cost is identical to the original one.
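For example, the helper can be used from a CustomPainter like this (a sketch only; RotatedImagePainter is just an illustrative name, and it assumes the rotate method above is in scope):
import 'dart:ui' as ui;
import 'package:flutter/material.dart';

class RotatedImagePainter extends CustomPainter {
  RotatedImagePainter(this.image, this.angle);
  final ui.Image image;
  final double angle; // in radians

  @override
  void paint(Canvas canvas, Size size) {
    canvas.save();
    // pivot around the centre of the image instead of the top-left corner
    rotate(canvas, image.width / 2, image.height / 2, angle);
    canvas.drawImage(image, Offset.zero, Paint());
    canvas.restore();
  }

  @override
  bool shouldRepaint(covariant RotatedImagePainter oldDelegate) =>
      oldDelegate.image != image || oldDelegate.angle != angle;
}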
This can be achieved by shifting the coordinate space, as illustrated in figure 1.
The translation is the difference in coordinates between C1 and C2, which is exactly the same as between A and B in figure 2.
With some geometry formulas, we can calculate the desired translation and produce the rotated image as in the method below:
ui.Image rotatedImage({ui.Image image, double angle}) {
var pictureRecorder = ui.PictureRecorder();
Canvas canvas = Canvas(pictureRecorder);
final double r = sqrt(image.width * image.width + image.height * image.height) / 2;
final alpha = atan(image.height / image.width);
final beta = alpha + angle;
final shiftY = r * sin(beta);
final shiftX = r * cos(beta);
final translateX = image.width / 2 - shiftX;
final translateY = image.height / 2 - shiftY;
canvas.translate(translateX, translateY);
canvas.rotate(angle);
canvas.drawImage(image, Offset.zero, Paint());
return pictureRecorder.endRecording().toImage(image.width, image.height);
}
alpha, beta, and angle are all in radians.
Here is the repo of the demo app
If you don't want to rotate the image around its own center, you can use this approach. You don't have to care about what the offset of the canvas should be in relation to the image rotation, because the canvas is moved back to its original position after the image is drawn.
void rotate(Canvas c, Image image, Offset focalPoint, Size screenSize, double angle) {
c.save();
c.translate(screenSize.width/2, screenSize.height/2);
c.rotate(angle);
// To rotate around the center of the image, focal point is the
// image width and height divided by 2
c.drawImage(image, focalPoint*-1, Paint());
c.translate(-screenSize.width/2, -screenSize.height/2);
c.restore();
}

Rotating a square TBitmap on its center

I am trying to find the simplest way to rotate and display a TBitmap around its center by any given angle. The TBitmap is square, and any clipping that might occur is not important as long as the rotated bitmap's center point remains constant. The image is very small, only around 50 x 50 pixels, so speed isn't an issue. Here is the code I have so far, which rotates a TBitmap by 90 degrees; that part is simple, the any-angle part less so.
std::auto_ptr<Graphics::TBitmap> bitmap1(new Graphics::TBitmap);
std::auto_ptr<Graphics::TBitmap> bitmap2(new Graphics::TBitmap);
bitmap1->LoadFromFile("c:/myimage.bmp");
bitmap1->Transparent = true;
bitmap1->TransparentColor = bitmap1->Canvas->Pixels[50][50];
bitmap2->Width=bitmap1->Height;
bitmap2->Height=bitmap1->Width;
double x1 = 0.0;
double y1 = 0.0;
for (int x = 0;x < bitmap1->Width; x++)
{
for(int y = 0;y < bitmap1->Height;y++)
{
x1 = std::cos(45.0) * x - std::sin(45.0) * y;
y1 = sin(45.0) * x + cos(45.0) * y;
bitmap2->Canvas->Pixels[x1][y1] =
bitmap1->Canvas->Pixels[x][y];
}
}
Form1->Canvas->Draw( 500, 200, bitmap2.get());
See revised code... This allows for rotation but the copy creates a hazy image and the rotation point is at the top left.
You are doing this the other way around, so there may be holes in the resulting image, because you are looping over the source pixels with a 1-pixel step. To remedy this, loop over the target pixels instead:
loop through bitmap2 pixels (x2,y2)
for each, compute the rotated-back (x1,y1) position in bitmap1
copy the pixel value
if (x1,y1) is outside bitmap1, use a background color like clBlack instead
To improve speed, use the TBitmap->ScanLine[y] property, which will improve speed at least 1000x if used right; see:
Display an array of color in C
After I put all this together I got this:
#include <math.h> // just for cos,sin
// rotate src around x0,y0 [pixels] by angle [rad] and store result in dst
void rotate(Graphics::TBitmap *dst,Graphics::TBitmap *src,double x0,double y0,double angle)
{
int x,y,xx,yy,xs,ys;
double s,c,fx,fy;
// resize dst to the same size as src
xs=src->Width;
ys=src->Height;
dst->SetSize(xs,ys);
// allow direct pixel access for src
src->HandleType=bmDIB;
src->PixelFormat=pf32bit;
DWORD **psrc=new DWORD*[ys];
for (y=0;y<ys;y++) psrc[y]=(DWORD*)src->ScanLine[y];
// allow direct pixel access for dst
dst->HandleType=bmDIB;
dst->PixelFormat=pf32bit;
DWORD **pdst=new DWORD*[ys];
for (y=0;y<ys;y++) pdst[y]=(DWORD*)dst->ScanLine[y];
// precompute variables
c=cos(angle);
s=sin(angle);
// loop all dst pixels
for (y=0;y<ys;y++)
for (x=0;x<xs;x++)
{
// compute position in src
fx=x; // convert to double
fy=y;
fx-=x0; // translate to center of rotation
fy-=y0;
xx=double(+(fx*c)+(fy*s)+x0); // rotate and translate back
yy=double(-(fx*s)+(fy*c)+y0);
// copy pixels
if ((xx>=0)&&(xx<xs)&&(yy>=0)&&(yy<ys)) pdst[y][x]=psrc[yy][xx];
else pdst[y][x]=0; // black
}
// free memory
delete[] psrc;
delete[] pdst;
}
usage:
// init
Graphics::TBitmap *bmp1,*bmp2;
bmp1=new Graphics::TBitmap;
bmp1->LoadFromFile("image.bmp");
bmp1->HandleType=bmDIB;
bmp1->PixelFormat=pf32bit;
bmp2=new Graphics::TBitmap;
bmp2->HandleType=bmDIB;
bmp2->PixelFormat=pf32bit;
// rotate
rotate(bmp2,bmp1,bmp1->Width/2,bmp1->Height/2,25.0*M_PI/180.0);
// here render bmp2 or whatever
// exit
delete bmp1;
delete bmp2;
Here is an example of the output:
On the left is bmp1 and on the right the rotated bmp2.

Convert an image to a SceneKit Node

I have a bit-map image:
(However, this should work with any arbitrary image.)
I want to take my image and make it a 3D SCNNode. I've accomplished that much with the code below, which takes each pixel in the image and creates an SCNNode with an SCNBox geometry.
static inline SCNNode* NodeFromSprite(const UIImage* image) {
SCNNode *node = [SCNNode node];
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8* data = CFDataGetBytePtr(pixelData);
for (int x = 0; x < image.size.width; x++)
{
for (int y = 0; y < image.size.height; y++)
{
int pixelInfo = ((image.size.width * y) + x) * 4;
UInt8 alpha = data[pixelInfo + 3];
if (alpha > 3)
{
UInt8 red = data[pixelInfo];
UInt8 green = data[pixelInfo + 1];
UInt8 blue = data[pixelInfo + 2];
UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
SCNNode *pixel = [SCNNode node];
pixel.geometry = [SCNBox boxWithWidth:1.001 height:1.001 length:1.001 chamferRadius:0];
pixel.geometry.firstMaterial.diffuse.contents = color;
pixel.position = SCNVector3Make(x - image.size.width / 2.0,
y - image.size.height / 2.0,
0);
[node addChildNode:pixel];
}
}
}
CFRelease(pixelData);
node = [node flattenedClone];
//The image is upside down and I have no idea why.
node.rotation = SCNVector4Make(1, 0, 0, M_PI);
return node;
}
But the problem is that this approach takes up way too much memory!
I'm trying to find a way to do this with less memory.
All Code and resources can be found at:
https://github.com/KonradWright/KNodeFromSprite
Right now you are drawing each pixel as an SCNBox of a certain color, which means:
one GL draw call per box
drawing two unnecessary, invisible faces between adjacent boxes
drawing N identical 1x1x1 boxes in a row when one 1x1xN box could be drawn
This looks like the common Minecraft-style optimization problem:
Treat your image as a 3-dimensional array (where depth is the wanted extrusion depth), with each element representing a cube voxel of a certain color.
Use a greedy meshing algorithm (demo) and a custom SCNGeometry to create the mesh for the SceneKit node.
Pseudo-code for a meshing algorithm that skips the faces of adjacent cubes (simpler, but less effective than greedy meshing):
#define SIZE_X 16 // image width
#define SIZE_Y 16 // image height
// pixel data, 0 = transparent pixel
int data[SIZE_X][SIZE_Y];
// check if there is non-transparent neighbour at x, y
BOOL has_neighbour(x, y) {
if (x < 0 || x >= SIZE_X || y < 0 || y >= SIZE_Y || data[x][y] == 0)
return NO; // out of dimensions or transparent
else
return YES;
}
void add_face(x, y, orientation, color) {
// add face at (x, y) with specified color and orientation = TOP, BOTTOM, LEFT, RIGHT, FRONT, BACK
// can be (easier and slower) implemented with SCNPlane's: https://developer.apple.com/library/mac/documentation/SceneKit/Reference/SCNPlane_Class/index.html#//apple_ref/doc/uid/TP40012010-CLSCHSCNPlane-SW8
// or (harder and faster) using Custom Geometry: https://github.com/d-ronnqvist/blogpost-codesample-CustomGeometry/blob/master/CustomGeometry/CustomGeometryView.m#L84
}
for (x = 0; x < SIZE_X; x++) {
for (y = 0; y < SIZE_Y; y++) {
int color = data[x][y];
// skip if the current pixel is transparent
if (color == 0)
continue;
// check neighbour at top
if (! has_neighbour(x, y + 1))
add_face(x, y, TOP, color);
// check neighbour at bottom
if (! has_neighbour(x, y - 1))
add_face(x, y, BOTTOM, color);
// check neighbour at left
if (! has_neighbour(x - 1, y))
add_face(x, y, LEFT, color);
// check neighbour at right
if (! has_neighbour(x + 1, y))
add_face(x, y, RIGHT, color);
// since the array is 2D, the front and back faces are always visible for non-transparent pixels
add_face(x, y, FRONT, color);
add_face(x, y, BACK, color);
}
}
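One way add_face could be fleshed out with SCNPlane's (a sketch only; it assumes a parent SCNNode named node, unit voxels centred on integer coordinates, and shows just two of the six orientations):
SCNPlane *plane = [SCNPlane planeWithWidth:1.0 height:1.0];
plane.firstMaterial.diffuse.contents = color; // a UIColor built from the pixel data
SCNNode *face = [SCNNode nodeWithGeometry:plane];
switch (orientation) {
    case FRONT: // an SCNPlane already faces +Z
        face.position = SCNVector3Make(x, y, 0.5);
        break;
    case TOP:   // tilt the plane so it faces +Y
        face.position = SCNVector3Make(x, y + 0.5, 0);
        face.eulerAngles = SCNVector3Make(-M_PI_2, 0, 0);
        break;
    // ... BOTTOM, LEFT, RIGHT and BACK are analogous
}
[node addChildNode:face];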
A lot depends on the input image. If it is not big and does not use a wide variety of colors, I would go with an SCNNode, adding SCNPlanes for the visible faces and then flattenedClone()ing the result.
An approach similar to the one proposed by Ef Dot:
To keep the number of draw calls as small as possible you want to keep the number of materials as small as possible. Here you will want one SCNMaterial per color.
To keep the number of draw calls as small as possible make sure that no two geometry elements (SCNGeometryElement) use the same material. In other words, use one geometry element per material (color).
So you will have to build a SCNGeometry that has N geometry elements and N materials where N is the number of distinct colors in your image.
For each color in your image, build a polygon (or group of disjoint polygons) from all the pixels of that color.
Triangulate each polygon (or group of polygons) and build a geometry element with that triangulation.
Build the geometry from the geometry elements.
If you don't feel comfortable triangulating the polygons yourself, you can leverage SCNShape.
For each polygon (or group of polygons), create a single UIBezierPath and build an SCNShape with that.
Merge all the geometry sources of your shapes into a single source, and reuse the geometry elements to create a custom SCNGeometry.
Note that some vertices will be duplicated if you use a collection of SCNShapes to build the geometry. With little effort you can make sure that no two vertices in your final source have the same position. Update the indexes in the geometry elements accordingly.
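For example, the per-color SCNShape step might look like this (a sketch; pathForColor: is a hypothetical helper that returns a UIBezierPath covering all pixels of the given color, parentNode is the node being assembled, and merging the sources into one custom SCNGeometry, as described above, is left out):
UIBezierPath *path = [self pathForColor:color]; // hypothetical helper
SCNShape *shape = [SCNShape shapeWithPath:path extrusionDepth:1.0];
SCNMaterial *material = [SCNMaterial material];
material.diffuse.contents = color;
shape.materials = @[material];
SCNNode *colorNode = [SCNNode nodeWithGeometry:shape];
[parentNode addChildNode:colorNode];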
I can also direct you to this excellent GitHub repo by Nick Lockwood:
https://github.com/nicklockwood/FPSControls
It will show you how to generate the meshes as planes (instead of cubes) which is a fast way to achieve what you need for simple scenes using a "neighboring" check.
If you need large complex scenes, then I suggest you go for the solution proposed by Ef Dot using a greedy meshing algorithm.

iOS: transforming a view into cylindrical shape

With Quartz 2D we can transform our views on the x, y, and z axes.
In some cases we can even make them look 3D by changing the values of the matrices.
I was wondering if it could be possible to transform a view into a cylinder shape like in the following picture?
Please ignore the top part of the cylinder. I am more curious to know whether it would be possible to warp a UIView around like the side of the cylinder in the image.
Is that possible using only Quartz 2D, layers, and transformations (not OpenGL)? If not, is it at least possible to draw it in a CGContext to make a view appear like that?
You definitely can't do this with a transform. What you could do is create your UIView off-screen, get the context for the view, get an image from that, and then map the image to a new image, using a non-linear mapping.
So:
Create an image context with UIGraphicsBeginImageContext()
Render the view there, with view.layer.renderInContext()
Get an image of the result with CGBitmapContextCreateImage()
Write a mapping function that takes the x/y screen coordinates and maps them to coordinates on the cylinder.
Create a new image the size of the screen view, and call the mapping function to copy pixels from the source to the destination.
Draw the destination bitmap to the screen.
None of these steps is particularly difficult, and you might come up with various ways to simplify. For example, if you are okay with not doing perspective transformation, you can just render strips of the original view, offsetting the Y coordinate based on the coordinates of a circle, as sketched below.
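A rough sketch of that strips shortcut (untested; viewImage is assumed to be the snapshot captured in step 3, and the 0.3 curvature factor is arbitrary):
CGFloat w = viewImage.size.width, h = viewImage.size.height;
CGFloat radius = w / 2.0;   // treat the view's width as the cylinder's diameter
CGFloat sag = radius * 0.3; // how far the edges drop relative to the middle
UIGraphicsBeginImageContextWithOptions(CGSizeMake(w, h + sag), NO, 0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
for (CGFloat x = 0; x < w; x += 1) {
    CGFloat t = (x - radius) / radius;             // -1 .. 1 across the width
    CGFloat dy = (1.0 - sqrt(1.0 - t * t)) * sag;  // circular vertical offset
    CGContextSaveGState(ctx);
    CGContextClipToRect(ctx, CGRectMake(x, 0, 1, h + sag)); // isolate a 1-pt column
    [viewImage drawAtPoint:CGPointMake(0, dy)];             // draw the view shifted down
    CGContextRestoreGState(ctx);
}
UIImage *bentImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();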
If you want the view to actually be interactive, then you'd need to do the transform in the opposite direction when handling touch events.
No you can't bend a view using a transform.
The transform can only manipulate the four corners of the view so no matter what you do it will still be a plane.
I realize this goes beyond Quartz2D... You could try adding SceneKit.
Obtain the view's image via UIGraphicsBeginImageContext(), view.layer.renderInContext(), and CGBitmapContextCreateImage().
Create an SCNMaterial with its diffuse property set to the image of your view.
Create an SCNCylinder and apply the material to it.
Add the cylinder to an SCNScene.
Create an SCNView and set its scene.
Add the SCNView to your view hierarchy.
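Putting those steps together, roughly (a sketch; viewImage is assumed to be the snapshot of the view, and the radius and height values are arbitrary):
SCNCylinder *cylinder = [SCNCylinder cylinderWithRadius:1.0 height:2.0];
SCNMaterial *material = [SCNMaterial material];
material.diffuse.contents = viewImage; // the view snapshot wraps around the cylinder
cylinder.materials = @[material];      // a single material also covers the caps
SCNNode *cylinderNode = [SCNNode nodeWithGeometry:cylinder];

SCNScene *scene = [SCNScene scene];
[scene.rootNode addChildNode:cylinderNode];

SCNView *scnView = [[SCNView alloc] initWithFrame:self.view.bounds];
scnView.scene = scene;
scnView.autoenablesDefaultLighting = YES;
scnView.allowsCameraControl = YES;
[self.view addSubview:scnView];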
Reference: Using OpenGL ES 2.0 with iOS, how do I draw a cylinder between two points?
I have also used the same code for one of my projects:
Check that question, where drawing a cone shape is covered; it's dated, but after adapting the algorithm it works.
See the code below for the solution. self represents the mesh and contains the vertices, indices, and such.
- (instancetype)initWithOriginRadius:(CGFloat)originRadius
atOriginPoint:(GLKVector3)originPoint
andEndRadius:(CGFloat)endRadius
atEndPoint:(GLKVector3)endPoint
withPrecision:(NSInteger)precision
andColor:(GLKVector4)color
{
self = [super init];
if (self) {
// normal pointing from origin point to end point
GLKVector3 normal = GLKVector3Make(originPoint.x - endPoint.x,
originPoint.y - endPoint.y,
originPoint.z - endPoint.z);
// create two perpendicular vectors - perp and q
GLKVector3 perp = normal;
if (normal.x == 0 && normal.z == 0) {
perp.x += 1;
} else {
perp.y += 1;
}
// cross product
GLKVector3 q = GLKVector3CrossProduct(perp, normal);
perp = GLKVector3CrossProduct(normal, q);
// normalize vectors
perp = GLKVector3Normalize(perp);
q = GLKVector3Normalize(q);
// calculate vertices
CGFloat twoPi = 2 * PI;
NSInteger index = 0;
for (NSInteger i = 0; i < precision + 1; i++) {
CGFloat theta = ((CGFloat) i) / precision * twoPi; // go around circle and get points
// normals
normal.x = cosf(theta) * perp.x + sinf(theta) * q.x;
normal.y = cosf(theta) * perp.y + sinf(theta) * q.y;
normal.z = cosf(theta) * perp.z + sinf(theta) * q.z;
AGLKMeshVertex meshVertex;
AGLKMeshVertexDynamic colorVertex;
// top vertex
meshVertex.position.x = endPoint.x + endRadius * normal.x;
meshVertex.position.y = endPoint.y + endRadius * normal.y;
meshVertex.position.z = endPoint.z + endRadius * normal.z;
meshVertex.normal = normal;
meshVertex.originalColor = color;
// append vertex
[self appendVertex:meshVertex];
// append color vertex
colorVertex.colors = color;
[self appendColorVertex:colorVertex];
// append index
[self appendIndex:index++];
// bottom vertex
meshVertex.position.x = originPoint.x + originRadius * normal.x;
meshVertex.position.y = originPoint.y + originRadius * normal.y;
meshVertex.position.z = originPoint.z + originRadius * normal.z;
meshVertex.normal = normal;
meshVertex.originalColor = color;
// append vertex
[self appendVertex:meshVertex];
// append color vertex
[self appendColorVertex:colorVertex];
// append index
[self appendIndex:index++];
}
// draw command
[self appendCommand:GL_TRIANGLE_STRIP firstIndex:0 numberOfIndices:self.numberOfIndices materialName:@""];
}
return self;
}
