Gap between two drawings in Jetpack Compose Canvas

I am trying to draw a rectangle with an arc attached to the rectangle's bottom. I used the size provided by the DrawScope to lay out the drawings on screen, but I cannot work out why there is an unwanted gap between the two drawings, even though the arc's topLeft y-coordinate equals the height of the rectangle.
Canvas(
    modifier = Modifier
        .fillMaxWidth()
        .height(200.dp)
) {
    drawRect(
        color = Color(0xFFEF3125),
        topLeft = Offset(0f, 0f),
        size = Size(this.size.width, this.size.height.times(0.75f))
    )
    drawArc(
        color = Color(0xFFEF3125),
        startAngle = 0f,
        sweepAngle = 180f,
        useCenter = false,
        topLeft = Offset(
            0f, this.size.height.times(0.75f)
        )
    )
}

There is a gap because an arc is drawn inside a bounding rectangle, and when drawArc is given no size, that rectangle stretches from topLeft down to the bottom-right corner of the canvas. A 0°-to-180° sweep draws only the bottom half of the oval inscribed in that rectangle, so the visible arc starts halfway down the bounds. You therefore need to offset the arc up by half the height of the rectangle you draw the arc into.
@Composable
private fun ArcSample() {
    Canvas(
        modifier = Modifier
            .fillMaxWidth()
            .height(200.dp)
    ) {
        drawRect(
            color = Color(0xFFEF3125),
            size = Size(this.size.width, this.size.height.times(0.75f)),
            style = Stroke(4.dp.toPx())
        )
        drawArc(
            color = Color(0xFFEF3125),
            startAngle = 0f,
            sweepAngle = 180f,
            useCenter = false,
            // Give the arc an explicit bounding rectangle a quarter of the
            // canvas high, raised by half of that height (0.75 - 0.125 = 0.625)
            // so the drawn half-oval starts exactly at the rectangle's bottom.
            size = Size(size.width, size.height.times(.25f)),
            topLeft = Offset(
                0f, this.size.height.times(0.625f)
            ),
            style = Stroke(4.dp.toPx())
        )
    }
}
I used a Stroke style to make the bounding rectangles visible; it's only for demonstration.
@Composable
private fun ArcSample2() {
    Canvas(
        modifier = Modifier
            .fillMaxWidth()
            .height(200.dp)
    ) {
        drawRect(
            color = Color(0xFFEF3125),
            size = Size(this.size.width, this.size.height.times(0.75f))
        )
        val sizeCoefficient = 0.25f
        drawArc(
            color = Color(0xFFEF3125),
            startAngle = 0f,
            sweepAngle = 360f,
            useCenter = false,
            size = Size(size.width, size.height.times(sizeCoefficient)),
            // Center the oval vertically on the rectangle's bottom edge:
            // its top sits half the oval's height above y = 0.75 * height.
            topLeft = Offset(
                0f, this.size.height.times(0.75f - sizeCoefficient / 2f)
            )
        )
    }
}

Related

Compose: draw auto-resizable text that always fits inside a defined rectangle using Canvas

As shown in the image, the x-axis texts are running outside the blue rectangle below the x-axis.
Is it possible to make the text auto-resize so that it always fits inside the defined rectangle, irrespective of the defined textSize?
val xPoints: List<String> = listOf(
    "Sunday",
    "Monday",
    "Tuesday",
    "Wednesday",
    "Thursday",
    "Friday",
    "Saturday"
)
Canvas(
    modifier = modifier
        .fillMaxWidth()
) {
    val rectWidth = (size.width - leftPadding - rightPadding) / xPoints.size
    // Draw x-axis text
    xPoints.forEachIndexed { index, text ->
        val offset = Offset(
            leftPadding + (index * rectWidth) + (rectWidth / 2),
            graphHeight - (bottomPadding / 2)
        )
        val rect = Rect(offset, rectWidth / 2)
        drawRect(
            color = Color.Red,
            topLeft = Offset(leftPadding + (index * rectWidth), graphHeight - bottomPadding),
            size = Size(rectWidth, bottomPadding)
        )
        rotate(degrees = -45f, rect.center) {
            drawIntoCanvas {
                it.nativeCanvas.drawText(
                    text,
                    rect.center.x,
                    rect.center.y,
                    xAxisTextPaint
                )
            }
        }
    }
}
One possibility would be to use SubcomposeLayout: first measure the composable in one slot, then calculate the desired scale factor and use it to wrap the composable in another composable with a scale modifier, and finally lay out the scaled composable in another layout slot. A sketch of this idea follows.
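A minimal sketch of that approach, assuming a hypothetical ScaleDownToWidth wrapper (all names and parameters here are illustrative, not a tested drop-in):
import androidx.compose.foundation.layout.Box
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.scale
import androidx.compose.ui.layout.SubcomposeLayout
import androidx.compose.ui.unit.Constraints

// Shrinks content uniformly so its natural width never exceeds maxWidthPx.
@Composable
fun ScaleDownToWidth(maxWidthPx: Int, content: @Composable () -> Unit) {
    SubcomposeLayout { constraints ->
        // Slot 1: measure the content unconstrained to get its natural width.
        val natural = subcompose("measure", content).first().measure(Constraints())
        val scale = (maxWidthPx.toFloat() / natural.width).coerceAtMost(1f)
        // Slot 2: subcompose again, this time wrapped in a scale modifier.
        // Note: Modifier.scale only scales the rendering, not the measured size.
        val placeable = subcompose("scaled") {
            Box(Modifier.scale(scale)) { content() }
        }.first().measure(constraints)
        layout(placeable.width, placeable.height) { placeable.place(0, 0) }
    }
}
For the chart above, each rotated label would be wrapped in such a slot instead of being drawn directly with nativeCanvas.drawText.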

How to draw ticket shape in Jetpack Compose

I would like to draw the ticket shape in the picture using a Path in Jetpack Compose:
Path().apply
Help is appreciated.
class TicketShape(private val cornerRadius: Float) : Shape {
    override fun createOutline(
        size: Size,
        layoutDirection: LayoutDirection,
        density: Density
    ): Outline {
        return Outline.Generic(
            // Draw your custom path here
            path = drawTicketPath(size = size, cornerRadius = cornerRadius)
        )
    }
}
fun drawTicketPath(size: Size, cornerRadius: Float): Path {
    return Path().apply {
        reset()
        // Top-left arc
        arcTo(
            rect = Rect(
                left = -cornerRadius,
                top = -cornerRadius,
                right = cornerRadius,
                bottom = cornerRadius
            ),
            startAngleDegrees = 90.0f,
            sweepAngleDegrees = -90.0f,
            forceMoveTo = false
        )
        lineTo(x = size.width - cornerRadius, y = 0f)
        // Top-right arc
        arcTo(
            rect = Rect(
                left = size.width - cornerRadius,
                top = -cornerRadius,
                right = size.width + cornerRadius,
                bottom = cornerRadius
            ),
            startAngleDegrees = 180.0f,
            sweepAngleDegrees = -90.0f,
            forceMoveTo = false
        )
        lineTo(x = size.width, y = size.height - cornerRadius)
        // Bottom-right arc
        arcTo(
            rect = Rect(
                left = size.width - cornerRadius,
                top = size.height - cornerRadius,
                right = size.width + cornerRadius,
                bottom = size.height + cornerRadius
            ),
            startAngleDegrees = 270.0f,
            sweepAngleDegrees = -90.0f,
            forceMoveTo = false
        )
        lineTo(x = cornerRadius, y = size.height)
        // Bottom-left arc
        arcTo(
            rect = Rect(
                left = -cornerRadius,
                top = size.height - cornerRadius,
                right = cornerRadius,
                bottom = size.height + cornerRadius
            ),
            startAngleDegrees = 0.0f,
            sweepAngleDegrees = -90.0f,
            forceMoveTo = false
        )
        lineTo(x = 0f, y = cornerRadius)
        close()
    }
}
Now use it with any composable. Note that dp.toPx() needs a Density receiver, so resolve the radius via LocalDensity first:
val cornerRadius = with(LocalDensity.current) { 24.dp.toPx() }
MyComp(
    Modifier.clip(TicketShape(cornerRadius))
)
Source: https://juliensalvi.medium.com/custom-shape-with-jetpack-compose-1cb48a991d42
We're basically inheriting from Shape, which is what the clip modifier and other shape parameters accept in order to render the desired shape. Other examples are RectangleShape, CircleShape, RoundedCornerShape, etc., which are pre-built for Compose. You need a distinct shape, so you have to create your own Shape implementation. You don't strictly need a separate class if you won't reuse it; an anonymous object at the place of usage would suffice. But since this appears to be more than a one-off decoration, creating the class saves you from constructing the same object in multiple places.
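For illustration, the anonymous-object variant mentioned above would look something like this (reusing drawTicketPath from earlier; the radius is an arbitrary pixel value):
MyComp(
    Modifier.clip(object : Shape {
        override fun createOutline(
            size: Size,
            layoutDirection: LayoutDirection,
            density: Density
        ): Outline = Outline.Generic(drawTicketPath(size = size, cornerRadius = 60f))
    })
)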

Extract Color from image using OpenCV

Predefined: my A4 sheet will always be white.
I need to detect the A4 sheet in an image. I am able to detect rectangles; the problem is that I get multiple rectangles from my image, so I extract the sub-images bounded by the detected rectangle points.
Now I want to match the extracted image's color against white.
I use the method below to extract the image from the detected contours:
- (cv::Mat)getPaperAreaFromImage:(std::vector<cv::Point>)square image:(cv::Mat)image
{
    // declare used vars
    int paperWidth = 210;  // in mm, because scale factor is taken into account
    int paperHeight = 297; // in mm, because scale factor is taken into account
    cv::Point2f imageVertices[4];
    float distanceP1P2;
    float distanceP1P3;
    BOOL isLandscape = true;
    int scaleFactor;
    cv::Mat paperImage;
    cv::Mat paperImageCorrected;
    cv::Point2f paperVertices[4];

    // sort square corners for further operations
    square = sortSquarePointsClockwise( square );

    // rearrange to get proper order for getPerspectiveTransform()
    imageVertices[0] = square[0];
    imageVertices[1] = square[1];
    imageVertices[2] = square[3];
    imageVertices[3] = square[2];

    // get distance between corner points for further operations
    distanceP1P2 = distanceBetweenPoints( imageVertices[0], imageVertices[1] );
    distanceP1P3 = distanceBetweenPoints( imageVertices[0], imageVertices[2] );

    // calc paper, paperVertices; take orientation into account
    if ( distanceP1P2 > distanceP1P3 ) {
        scaleFactor = ceil( lroundf(distanceP1P2/paperHeight) ); // we always want to scale the image down to maintain the best quality possible
        paperImage = cv::Mat( paperWidth*scaleFactor, paperHeight*scaleFactor, CV_8UC3 );
        paperVertices[0] = cv::Point( 0, 0 );
        paperVertices[1] = cv::Point( paperHeight*scaleFactor, 0 );
        paperVertices[2] = cv::Point( 0, paperWidth*scaleFactor );
        paperVertices[3] = cv::Point( paperHeight*scaleFactor, paperWidth*scaleFactor );
    }
    else {
        isLandscape = false;
        scaleFactor = ceil( lroundf(distanceP1P3/paperHeight) ); // we always want to scale the image down to maintain the best quality possible
        paperImage = cv::Mat( paperHeight*scaleFactor, paperWidth*scaleFactor, CV_8UC3 );
        paperVertices[0] = cv::Point( 0, 0 );
        paperVertices[1] = cv::Point( paperWidth*scaleFactor, 0 );
        paperVertices[2] = cv::Point( 0, paperHeight*scaleFactor );
        paperVertices[3] = cv::Point( paperWidth*scaleFactor, paperHeight*scaleFactor );
    }

    cv::Mat warpMatrix = getPerspectiveTransform( imageVertices, paperVertices );
    cv::warpPerspective( image, paperImage, warpMatrix, paperImage.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT );

    if (true) {
        cv::Rect rect = boundingRect(cv::Mat(square));
        cv::rectangle(image, rect.tl(), rect.br(), cv::Scalar(0,255,0), 5, 8, 0);
        UIImage *object = [self UIImageFromCVMat:paperImage];
    }

    // we want portrait output
    if ( isLandscape ) {
        cv::transpose(paperImage, paperImageCorrected);
        cv::flip(paperImageCorrected, paperImageCorrected, 1);
        return paperImageCorrected;
    }
    return paperImage;
}
EDITED: I used the method below to get the color from the image. But my problem now is that after converting my original image to cv::Mat, the crop already has a translucent grey cast over it, so I always get the same color.
Is there any direct way to get the original color from a cv::Mat image?
- (UIColor *)averageColor:(UIImage *)image {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char rgba[4];
    CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), image.CGImage);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);

    if (rgba[3] > 0) {
        CGFloat alpha = ((CGFloat)rgba[3]) / 255.0;
        CGFloat multiplier = alpha / 255.0;
        return [UIColor colorWithRed:((CGFloat)rgba[0]) * multiplier
                               green:((CGFloat)rgba[1]) * multiplier
                                blue:((CGFloat)rgba[2]) * multiplier
                               alpha:alpha];
    }
    else {
        return [UIColor colorWithRed:((CGFloat)rgba[0]) / 255.0
                               green:((CGFloat)rgba[1]) / 255.0
                                blue:((CGFloat)rgba[2]) / 255.0
                               alpha:((CGFloat)rgba[3]) / 255.0];
    }
}
EDIT 2:
Input image:
Output I am getting:
I need to detect only the white A4 sheet.
I just resolved it using the Google Vision API.
My objective was to measure cracks (for building inspection) from an image. The user places an A4 sheet as a reference next to the crack; I capture the A4 sheet and calculate the real-world size covered by each pixel. The user then taps two points on the crack, and I calculate the distance between them.
In Google Vision I used the document text detection API: I printed my app's name on the A4 sheet so it covers the sheet vertically or horizontally, and the API detects that text and gives me its coordinates.
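The scale step described above boils down to simple arithmetic. A hypothetical Kotlin helper (the names are mine; a real A4 sheet is 210 mm wide):
import android.graphics.PointF
import kotlin.math.hypot

// If the detected A4 sheet spans sheetWidthPx pixels, each pixel
// covers 210 / sheetWidthPx millimetres of the real world.
fun crackLengthMm(tap1: PointF, tap2: PointF, sheetWidthPx: Float): Float {
    val mmPerPixel = 210f / sheetWidthPx
    return hypot(tap2.x - tap1.x, tap2.y - tap1.y) * mmPerPixel
}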

Improving the grey scale conversion result

Here is the colour menu:
Here is the same menu with some of the menu items disabled, and the bitmaps set as greyscale:
The code that converts to grey scale:
auto col = GetRValue(pixel) * 0.299 +
GetGValue(pixel) * 0.587 +
GetBValue(pixel) * 0.114;
pixel = RGB(col, col, col);
I am colourblind, but it seems that some of them don't look much different. I assume that relates to the original colours in the first place?
It would be nice if it were more obvious that they are disabled, the way it is very clear with the text.
Can we do that?
For people who are not colour blind it's pretty obvious.
Just apply the same intensity reduction to the images that you apply to the text.
I did not check your values, so let's assume the text is white (100% intensity) and the grayed-out text is 50% intensity. Then the maximum intensity of the bitmap should be 50% as well:
for each gray pixel:
    pixel_value = pixel_value / max_pixel_value * gray_text_value
This way you further decrease the contrast of each bitmap and avoid having any pixel brighter than the text.
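As a concrete sketch of that pseudocode (Kotlin for brevity; the 128 default assumes the disabled text sits at 50% of an 8-bit scale):
// Rescale grey pixel values so the brightest one matches the text intensity.
fun dimToTextIntensity(grey: IntArray, greyTextValue: Int = 128) {
    val maxValue = (grey.maxOrNull() ?: 0).coerceAtLeast(1) // avoid division by zero
    for (i in grey.indices) {
        grey[i] = grey[i] * greyTextValue / maxValue
    }
}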
This is not directly related to your question, but since you are changing colors you can also fix the corner pixels that stand out (by corner pixels I don't mean pixels at the corners of the bitmap rectangle, I mean the corners of the human-recognizable image).
For example, in the image below there is a red pixel at the corner of the page. We want to find that red pixel and blend it with the background color so that it doesn't stand out.
To find the corner pixels, check the pixels to the left and top: if both the left and top neighbors are the background color, then you have a corner pixel. Repeat the same for top-right, bottom-left, and bottom-right, then blend the corner pixels with the background.
Instead of changing to grayscale you can change the alpha transparency, as suggested by zett42.
void change(HBITMAP hbmp, bool enabled)
{
    if (!hbmp)
        return;
    HDC memdc = CreateCompatibleDC(nullptr);
    BITMAP bm;
    GetObject(hbmp, sizeof(bm), &bm);
    int w = bm.bmWidth;
    int h = bm.bmHeight;
    BITMAPINFO bi = { sizeof(BITMAPINFOHEADER), w, h, 1, 32, BI_RGB };
    std::vector<uint32_t> pixels(w * h);
    GetDIBits(memdc, hbmp, 0, h, &pixels[0], &bi, DIB_RGB_COLORS);

    // assume that the color at (0,0) is the background color
    uint32_t old_color = pixels[0];

    // this is the new background color
    uint32_t bk = GetSysColor(COLOR_MENU);
    // swap RGB with BGR
    uint32_t new_color = RGB(GetBValue(bk), GetGValue(bk), GetRValue(bk));

    // define lambda functions to read channels from BGR values
    auto bgr_r = [](uint32_t color) { return GetBValue(color); };
    auto bgr_g = [](uint32_t color) { return GetGValue(color); };
    auto bgr_b = [](uint32_t color) { return GetRValue(color); };

    BYTE new_red = bgr_r(new_color);
    BYTE new_grn = bgr_g(new_color);
    BYTE new_blu = bgr_b(new_color);

    // change background and modify disabled bitmap
    for (auto &p : pixels)
    {
        if (p == old_color)
        {
            p = new_color;
        }
        else if (!enabled)
        {
            // blend color with background, similar to 50% alpha
            BYTE red = (bgr_r(p) + new_red) / 2;
            BYTE grn = (bgr_g(p) + new_grn) / 2;
            BYTE blu = (bgr_b(p) + new_blu) / 2;
            p = RGB(blu, grn, red); // <= BGR/RGB swap
        }
    }

    // fix corner edges
    for (int row = h - 2; row >= 1; row--)
    {
        for (int col = 1; col < w - 1; col++)
        {
            int i = row * w + col;
            if (pixels[i] != new_color)
            {
                // check the color of neighboring pixels:
                // if a neighbor has the background color,
                // then that neighbor is background
                bool l = pixels[i - 1] == new_color; // left pixel is background
                bool r = pixels[i + 1] == new_color; // right ...
                bool t = pixels[i - w] == new_color; // top ...
                bool b = pixels[i + w] == new_color; // bottom ...
                // we are on a corner pixel if:
                // both left-pixel and top-pixel are background, or
                // both left-pixel and bottom-pixel are background, or
                // both right-pixel and top-pixel are background, or
                // both right-pixel and bottom-pixel are background
                if (l && t || l && b || r && t || r && b)
                {
                    // blend corner pixel with background
                    BYTE red = (bgr_r(pixels[i]) + new_red) / 2;
                    BYTE grn = (bgr_g(pixels[i]) + new_grn) / 2;
                    BYTE blu = (bgr_b(pixels[i]) + new_blu) / 2;
                    pixels[i] = RGB(blu, grn, red); // <= BGR/RGB swap
                }
            }
        }
    }

    SetDIBits(memdc, hbmp, 0, h, &pixels[0], &bi, DIB_RGB_COLORS);
    DeleteDC(memdc);
}
Usage:
CBitmap bmp1, bmp2;
bmp1.LoadBitmap(IDB_BITMAP1);
bmp2.LoadBitmap(IDB_BITMAP2);
change(bmp1, enabled);
change(bmp2, disabled);

Core Text calculate letter frame in iOS

I need to calculate the exact bounding box for each character (glyph) in an NSAttributedString (Core Text).
After putting together some code used to solve similar problems (Core Text selection, etc.), the result is quite good, but only a few frames (red) are calculated properly:
Most of the frames are misplaced, either horizontally or vertically (by a tiny bit). What is the cause of that? How can I perfect this code?
- (void)recalculate {
    // get characters from NSString
    NSUInteger len = [_attributedString.string length];
    UniChar *characters = (UniChar *)malloc(sizeof(UniChar) * len);
    CFStringGetCharacters((__bridge CFStringRef)_attributedString.string, CFRangeMake(0, [_attributedString.string length]), characters);

    // allocate glyphs and bounding box arrays for holding the result
    // assuming that each character is only one glyph, which is wrong
    CGGlyph *glyphs = (CGGlyph *)malloc(sizeof(CGGlyph) * len);
    CTFontGetGlyphsForCharacters(_font, characters, glyphs, len);

    // get bounding boxes for glyphs
    CTFontGetBoundingRectsForGlyphs(_font, kCTFontDefaultOrientation, glyphs, _characterFrames, len);
    free(characters); free(glyphs);

    // Measure how much space will be needed for this attributed string
    // so we can find the minimum frame needed
    CFRange fitRange;
    CGSize s = CTFramesetterSuggestFrameSizeWithConstraints(_framesetter, rangeAll, NULL, CGSizeMake(W, MAXFLOAT), &fitRange);
    _frameRect = CGRectMake(0, 0, s.width, s.height);
    CGPathRef framePath = CGPathCreateWithRect(_frameRect, NULL);
    _ctFrame = CTFramesetterCreateFrame(_framesetter, rangeAll, framePath, NULL);
    CGPathRelease(framePath);

    // Get the lines in our frame
    NSArray *lines = (__bridge NSArray *)CTFrameGetLines(_ctFrame);
    _lineCount = [lines count];

    // Allocate memory to hold line frame information:
    if (_lineOrigins != NULL) free(_lineOrigins);
    _lineOrigins = malloc(sizeof(CGPoint) * _lineCount);
    if (_lineFrames != NULL) free(_lineFrames);
    _lineFrames = malloc(sizeof(CGRect) * _lineCount);

    // Get the origin point of each of the lines
    CTFrameGetLineOrigins(_ctFrame, CFRangeMake(0, 0), _lineOrigins);

    // Solution borrowed from (but simplified):
    // https://github.com/twitter/twui/blob/master/lib/Support/CoreText%2BAdditions.m
    // Loop through the lines
    for (CFIndex i = 0; i < _lineCount; ++i) {
        CTLineRef line = (__bridge CTLineRef)[lines objectAtIndex:i];
        CFRange lineRange = CTLineGetStringRange(line);
        CFIndex lineStartIndex = lineRange.location;
        CFIndex lineEndIndex = lineStartIndex + lineRange.length;
        CGPoint lineOrigin = _lineOrigins[i];
        CGFloat ascent, descent, leading;
        CGFloat lineWidth = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);

        // If we have more than 1 line, we want to find the real height of the line
        // by measuring the distance between the current line and the previous line.
        // If it's only 1 line, then we'll guess the line's height.
        BOOL useRealHeight = i < _lineCount - 1;
        CGFloat neighborLineY = i > 0 ? _lineOrigins[i - 1].y : (_lineCount - 1 > i ? _lineOrigins[i + 1].y : 0.0f);
        CGFloat lineHeight = ceil(useRealHeight ? fabs(neighborLineY - lineOrigin.y) : ascent + descent + leading); // fabs, not abs: these are CGFloats
        _lineFrames[i].origin = lineOrigin;
        _lineFrames[i].size = CGSizeMake(lineWidth, lineHeight);

        for (CFIndex ic = lineStartIndex; ic < lineEndIndex; ic++) {
            CGFloat startOffset = CTLineGetOffsetForStringIndex(line, ic, NULL);
            _characterFrames[ic].origin = CGPointMake(startOffset, lineOrigin.y);
        }
    }
}
#pragma mark - Rendering Text:

- (void)renderInContext:(CGContextRef)context contextSize:(CGSize)size {
    CGContextSaveGState(context);

    // Draw Core Text attributed string:
    CGContextSetTextMatrix(context, CGAffineTransformIdentity);
    CGContextTranslateCTM(context, 0, CGRectGetHeight(_frameRect));
    CGContextScaleCTM(context, 1.0, -1.0);
    CTFrameDraw(_ctFrame, context);

    // Draw line and letter frames:
    CGContextSetStrokeColorWithColor(context, [UIColor colorWithRed:0.0 green:0.0 blue:1.0 alpha:0.5].CGColor);
    CGContextSetLineWidth(context, 1.0);
    CGContextBeginPath(context);
    CGContextAddRects(context, _lineFrames, _lineCount);
    CGContextClosePath(context);
    CGContextStrokePath(context);

    CGContextSetStrokeColorWithColor(context, [UIColor colorWithRed:1.0 green:0.0 blue:0.0 alpha:0.5].CGColor);
    CGContextBeginPath(context);
    CGContextAddRects(context, _characterFrames, _attributedString.string.length);
    CGContextClosePath(context);
    CGContextStrokePath(context);

    CGContextRestoreGState(context);
}
You did an impressive amount of work in your question and were so close on your own. The problem comes from this line of code, where you position the bounding boxes for each frame:
_characterFrames[ic].origin = CGPointMake(startOffset, lineOrigin.y);
The problem with it is that you are overwriting whatever offset the frame already had.
If you comment out that line, you will see that all the frames are positioned more or less in the same place, but not at exactly the same place: some sit further to the left or right, and some further up or down. This means the glyphs' bounding boxes carry positions of their own.
The solution to your problem is to take the current position of the frames into account when you move them into their correct place on the lines. You can either do it by adding to x and y separately:
_characterFrames[ic].origin.x += startOffset;
_characterFrames[ic].origin.y += lineOrigin.y;
or by offsetting the rectangle:
_characterFrames[ic] = CGRectOffset(_characterFrames[ic],
startOffset, lineOrigin.y);
Now the bounding boxes will have their correct positions:
and you should see that it works even for some of the more extreme fonts out there.
Swift 5, Xcode 11:
override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.textMatrix = .identity
    context.translateBy(x: 0, y: self.bounds.size.height)
    context.scaleBy(x: 1.0, y: -1.0)

    let string = "|優勝《ゆうしょう》の|懸《か》かった|試合《しあい》。|Test《テスト》.\nThe quick brown fox jumps over the lazy dog. 12354567890 ##-+"
    let attributedString = Utility.sharedInstance.furigana(String: string)
    let range = attributedString.mutableString.range(of: attributedString.string)
    attributedString.addAttribute(.font, value: font, range: range)

    // framesetter(), createFrame(), lineOrigins(), lines(), glyphRuns(), etc.
    // come from the Core Text Swift wrapper linked below.
    let framesetter = attributedString.framesetter()
    let textBounds = self.bounds.insetBy(dx: 20, dy: 20)
    let frame = framesetter.createFrame(textBounds)

    // Draw the frame text:
    frame.draw(in: context)

    let origins = frame.lineOrigins()
    let lines = frame.lines()
    context.setStrokeColor(UIColor.red.cgColor)
    context.setLineWidth(0.7)

    for i in 0 ..< origins.count {
        let line = lines[i]
        for run in line.glyphRuns() {
            let font = run.font
            let glyphPositions = run.glyphPositions()
            let glyphs = run.glyphs()
            let glyphsBoundingRects = font.boundingRects(of: glyphs)
            // Draw the bounding box for each glyph:
            for k in 0 ..< glyphPositions.count {
                let point = glyphPositions[k]
                let gRect = glyphsBoundingRects[k]
                var box = gRect
                box.origin += point + origins[i] + textBounds.origin
                context.stroke(box)
            }
        }
    }
}
Made with a CoreText Swift Wrapper.
Full Source: https://github.com/huse360/LetterFrame
