Can anyone explain how the coordinate points work in LinearGradient?
For example, I have my code like this:
var gradient = new LinearGradient(0, 0, 500, 500, colors, null, Shader.TileMode.Clamp);
paint.SetShader(gradient);
paint.Dither = true;
How is the gradient displayed when the paint is applied to a rectangle?
In Android, the coordinate system always works as shown in the picture above:
1) (0, 0) is the top-left corner.
2) (maxX, 0) is the top-right corner.
3) (0, maxY) is the bottom-left corner.
4) (maxX, maxY) is the bottom-right corner.
Here maxX and maxY are the screen's (or view's) maximum width and height.
The new LinearGradient(0, 0, 500, 500, colors, null, Shader.TileMode.Clamp) call defines a gradient line from (0, 0) to (500, 500), which you can see in the picture above. When you use a Canvas to draw the rectangle with this paint, the colors are rendered along that line.
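To make the interpolation concrete, here is a small sketch in Python (not Android code) of how a two-color gradient with TileMode.Clamp could assign a color to each pixel: the pixel is projected onto the gradient line, and the two colors are mixed by the projected fraction.

```python
# Sketch: how a two-color linear gradient with TileMode.Clamp picks a
# pixel's color. The gradient line runs from (x0, y0) to (x1, y1);
# each pixel is projected onto that line and the colors are mixed by
# the projected fraction t.

def gradient_color(px, py, x0, y0, x1, y1, c0, c1):
    dx, dy = x1 - x0, y1 - y0
    # fraction of the pixel's projection along the gradient line
    t = ((px - x0) * dx + (py - y0) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # Clamp: edge colors extend past the ends
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

black, white = (0, 0, 0), (255, 255, 255)
# gradient from (0, 0) to (500, 500), black to white
print(gradient_color(0, 0, 0, 0, 500, 500, black, white))      # (0, 0, 0)
print(gradient_color(250, 250, 0, 0, 500, 500, black, white))  # (128, 128, 128)
print(gradient_color(600, 600, 0, 0, 500, 500, black, white))  # clamped: (255, 255, 255)
```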
I'm using OpenCV, and I have the left, right, top, and bottom coordinates of a rectangle.
I want to measure the width and height of the rectangle in centimeters. How do I do that?
I tried to find the Euclidean distance like this:
D = dist.euclidean((top, left), (top, right)), but I'm not sure whether it is right, because I want the height and width in centimeters.
This is the code segment I use to determine the coordinates:
(xmin, ymin, xmax, ymax) = (box[num][1], box[num][0], box[num][3], box[num][2])
(left, right, top, bottom) = (xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height)
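One point worth noting: pixel distances alone cannot give centimeters; you need a calibration factor, typically from a reference object of known real-world size visible in the same image. Also, dist.euclidean expects (x, y) points, so the width is the distance between (left, top) and (right, top). A minimal Python sketch of the idea, with hypothetical pixel values and a hypothetical reference object:

```python
# Sketch: converting pixel measurements to centimeters requires a
# pixels-per-centimeter ratio, obtained here from a hypothetical
# reference object of known size. Points are (x, y) pairs.
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# hypothetical box coordinates in pixels
left, right, top, bottom = 100.0, 400.0, 50.0, 250.0
width_px = euclidean((left, top), (right, top))     # 300.0 px
height_px = euclidean((left, top), (left, bottom))  # 200.0 px

# hypothetical calibration: a reference object known to be 5 cm wide
# measures 150 px in the same image
pixels_per_cm = 150.0 / 5.0

width_cm = width_px / pixels_per_cm   # 10.0 cm
height_cm = height_px / pixels_per_cm
print(width_cm, height_cm)
```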
I'm looking at some code and I don't understand this line. Can someone explain what it is doing?
smallImg = image( Rect(0, Slice_row, image.cols, 6) );
smallImg is a sub-image/portion of the larger image.
This smallImg is formed using a cv::Rect, which is an object that describes a rectangular region in the image. This rectangular region is defined by the top-left coordinates, the width, and height of the rectangle. So here, (0, Slice_row) is the top left corner of the rectangle, image.cols is the width and 6 is the height.
So smallImg is a portion of image with the same width as the original image but only having 6 rows of pixels starting from Slice_row.
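The same idea can be sketched in Python/NumPy rather than the original C++ (the array sizes below are hypothetical):

```python
# Sketch: cv::Rect(x, y, width, height) selects columns x..x+width and
# rows y..y+height, so Rect(0, Slice_row, image.cols, 6) is the same as
# taking 6 full-width rows starting at Slice_row.
import numpy as np

image = np.arange(20 * 10).reshape(20, 10)  # a fake 20-row, 10-column image
Slice_row = 4

# rows Slice_row .. Slice_row + 6, all columns
smallImg = image[Slice_row:Slice_row + 6, 0:image.shape[1]]
print(smallImg.shape)  # (6, 10)
```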
Hope this helps
What orthographicProjection does one have to use to be able to make a 2D application in SceneKit with 1:1 SceneKit points to screen points/pixels ratio?
Example:
I want to position something at (200, 200) on the screen and I want to use a SCNVector with (200, 200, 0) for it. What orthographicProjection do I need for this?
If you want an orthographic projection where a unit of scene space corresponds to a point of screen space, you need a projection where the left clipping plane is at zero and the right clipping plane is at whatever the screen's width in points is. (Ditto for top/bottom, and near/far doesn't matter so long as you keep objects within whatever near/far you set up.)
For this it's probably easiest to set up your own projection matrix, rather than working out what orthographicScale and camera position correspond to the dimensions you need:
GLKMatrix4 mat = GLKMatrix4MakeOrtho(0, self.view.bounds.size.width,
0, self.view.bounds.size.height,
1, 100); // z range arbitrary
cameraNode.camera.projectionTransform = SCNMatrix4FromGLKMatrix4(mat);
// still need to position the camera for its direction & z range
cameraNode.position = SCNVector3Make(0, 0, 50);
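As a sanity check of the math (a Python sketch, not SceneKit code): the standard orthographic projection with left = 0, right = W, bottom = 0, top = H maps x in [0, W] to NDC [-1, 1], and the viewport transform maps that back to [0, W], which is exactly the 1:1 correspondence. W and H below are hypothetical view dimensions in points:

```python
# Sketch: the x/y part of an orthographic projection followed by the
# viewport transform, to verify the 1:1 scene-to-screen mapping.

def ortho_ndc(x, y, left, right, bottom, top):
    # standard orthographic projection of x and y into [-1, 1]
    ndc_x = 2.0 * (x - left) / (right - left) - 1.0
    ndc_y = 2.0 * (y - bottom) / (top - bottom) - 1.0
    return ndc_x, ndc_y

def ndc_to_screen(ndc_x, ndc_y, width, height):
    # viewport transform back to screen points
    return (ndc_x + 1.0) / 2.0 * width, (ndc_y + 1.0) / 2.0 * height

W, H = 375.0, 667.0  # hypothetical view size in points
ndc_x, ndc_y = ortho_ndc(200.0, 200.0, 0.0, W, 0.0, H)
sx, sy = ndc_to_screen(ndc_x, ndc_y, W, H)
print(sx, sy)  # maps back to (200, 200): a 1:1 mapping
```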
I'm trying to build a game in XNA. I have a circle that I want the player to move around, as you can see in the following picture. It's working great, except for the drawing part, which I'm not pleased with.
Here's a link to an image: http://s12.postimage.org/poiip0gtp/circle.png
I want to center the player object on the edge of the circle so it won't look like the player is standing on air.
This is how I calculate the position of the player:
rad = (degree * Math.PI / 180);
rotationDegree = (float)((Math.PI * degree) / 180);
currentPosition.X = (float)(Math.Cos(rad) * Earth.radius + (GraphicsDevice.Viewport.Width / 2));
currentPosition.Y = (float)(Math.Sin(rad) * Earth.radius + (GraphicsDevice.Viewport.Height / 2));
And this is how I draw the player:
spriteBatch.Draw(texture,currentPosition, null, Color.White,rotationDegree, Vector2.Zero,1f,SpriteEffects.None, 1f);
Thank you.
Use the origin overload of SpriteBatch.Draw. The origin is the point on the sprite that is drawn at the given position:
spriteBatch.Draw(texture, Position, null, Color.White, 0f, new Vector2(texture.Width / 2, texture.Height / 2), 1f, SpriteEffects.None, 0);
Using texture.Width / 2, texture.Height / 2 for the origin will center the sprite on the position.
It looks like what you want to do here is adjust the sprite's origin, which is the vector that you're passing into SpriteBatch.Draw(). This is used to determine the "center point" of your sprite; {0, 0} represents the sprite's upper-left corner, while {spriteWidth, spriteHeight} represents the bottom-right corner. Your sprite will be positioned and rotated relative to this origin.
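For reference, the placement math itself can be sketched in Python (the viewport size and radius below are hypothetical), along with what the origin changes:

```python
# Sketch: polar-to-cartesian placement of the player on the circle's
# edge, matching the cos/sin code in the question.
import math

def position_on_circle(degree, radius, center_x, center_y):
    rad = math.radians(degree)
    return (math.cos(rad) * radius + center_x,
            math.sin(rad) * radius + center_y)

# hypothetical values: 800x480 viewport, circle radius 100
cx, cy = 800 / 2, 480 / 2
x, y = position_on_circle(0, 100, cx, cy)
print(x, y)  # (500.0, 240.0): directly right of the center

# With origin = Vector2.Zero, the sprite's top-left corner lands at
# (x, y); with origin = (width/2, height/2), the sprite's center lands
# there, so it sits on the circle's edge instead of floating beside it.
```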
I used CGContextStrokePath to draw a straight line on a picture with a white background; the stroke color is red and the alpha is 1.0.
After drawing the line, why are the line's pixels not (255, 0, 0) but (255, 96, 96)?
Why not pure red?
Quartz (the iOS drawing layer) uses antialiasing to make things look smooth. That's why you're seeing non-pure-red pixels.
If you stroke a line of width 1.0 and you want only pure red pixels, the line needs to be horizontal or vertical and it needs to run along the center of the pixels, like this:
CGContextMoveToPoint(gc, 0, 10.5);
CGContextAddLineToPoint(gc, 50, 10.5);
CGContextStrokePath(gc);
The .5 in the y coordinates puts the line along the centers of the pixels.
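The blending arithmetic can be sketched in Python (the coverage values are illustrative): each pixel gets the stroke color mixed with the background in proportion to how much of the pixel the line covers.

```python
# Sketch: antialiasing blends the stroke color over the background by
# the fraction of the pixel the line covers (its "coverage").

def blend(stroke, background, coverage):
    return tuple(round(s * coverage + b * (1.0 - coverage))
                 for s, b in zip(stroke, background))

red, white = (255, 0, 0), (255, 255, 255)
print(blend(red, white, 1.0))  # fully covered pixel: pure (255, 0, 0)
print(blend(red, white, 0.5))  # half-covered pixel: (255, 128, 128)
# the question's (255, 96, 96) corresponds to coverage of about 0.62
print(blend(red, white, 1 - 96 / 255))  # (255, 96, 96)
```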