iOS Xamarin - Realtime plot draw lines

I'm working on a cross-platform test project that displays an ECG graph in realtime. It works fine on Windows and Android using Xamarin, BUT on iOS the performance is very slow.
I think the problem is due to my lack of expertise in iOS.
I've done two tests and both failed. Does anyone have a solution to speed this up?
TEST A
I call Plot and then assign the UIImage wBitmap as the MyView background every < 10 ms:
public void Plot(PointF pPrec, PointF pNext)
{
    SizeF bitmapSize = new SizeF(wBitmap.Size);
    using (CGBitmapContext context2 = new CGBitmapContext(IntPtr.Zero, (int)bitmapSize.Width, (int)bitmapSize.Height, 8, (int)(4 * bitmapSize.Width), CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
    {
        context2.DrawImage(new RectangleF(0, 0, wBitmap.Size.Width, wBitmap.Size.Height), wBitmap.CGImage);
        context2.SetLineWidth(1);
        context2.MoveTo(pPrec.X, pPrec.Y); // start the segment at the previous point
        context2.AddLineToPoint(pNext.X, pNext.Y);
        context2.DrawPath(CGPathDrawingMode.Stroke);
        // output the drawing to the view
        wBitmap = UIImage.FromImage(context2.ToImage());
    }
}
TEST B
I call Plot2 and then assign the UIImage wBitmap as the MyView background every < 10 ms:
public void Plot2(PointF pPrec, PointF pNext)
{
    UIGraphics.BeginImageContext(wBitmap.Size);
    context = UIGraphics.GetCurrentContext();
    using (context)
    {
        if (wBitmap != null)
        {
            context.TranslateCTM(0f, wBitmap.Size.Height);
            context.ScaleCTM(1.0f, -1.0f);
            context.DrawImage(new RectangleF(0f, 0f, wBitmap.Size.Width, wBitmap.Size.Height), wBitmap.CGImage);
            context.ScaleCTM(1.0f, -1.0f);
            context.TranslateCTM(0f, -wBitmap.Size.Height);
        }
        context.SetLineWidth(1);
        context.MoveTo(pPrec.X, pPrec.Y);
        context.AddLineToPoint(pNext.X, pNext.Y);
        context.DrawPath(CGPathDrawingMode.Stroke);
        wBitmap = UIGraphics.GetImageFromCurrentImageContext();
    } // end using context
    UIGraphics.EndImageContext();
}

You don't need to draw into a bitmap first and then use that bitmap in a view. The correct approach to this is to subclass UIView and then override the Draw() method and paint your stuff in there.
You can then call SetNeedsDisplay() on the view to schedule a redraw.
The example below draws an arrow-shaped background:
class YourView : UIView
{
    public override void Draw(RectangleF rect)
    {
        const float inset = 15f;
        UIColor.Blue.SetFill();
        var path = new UIBezierPath();
        path.MoveTo(new PointF(0, 0));
        path.AddLineTo(new PointF(rect.Width - inset, 0));
        path.AddLineTo(new PointF(rect.Width, rect.Height * 0.5f));
        path.AddLineTo(new PointF(rect.Width - inset, rect.Bottom));
        path.AddLineTo(new PointF(0, rect.Bottom));
        path.AddLineTo(new PointF(0, 0));
        path.Fill();
    }
}
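For the realtime ECG case, the idea is the same: keep the incoming samples in the view, append new points as they arrive, and call SetNeedsDisplay() so the next screen refresh repaints them. A minimal sketch under that approach (EcgView, ecgPoints and AddPoint are names I made up, not from the question; it assumes the classic PointF/RectangleF types used above and using System.Collections.Generic):
class EcgView : UIView
{
    // Points accumulated so far; assumed to be filled from your acquisition code.
    readonly List<PointF> ecgPoints = new List<PointF>();

    public void AddPoint(PointF p)
    {
        ecgPoints.Add(p);
        // Schedule a redraw; call this on the main/UI thread.
        SetNeedsDisplay();
    }

    public override void Draw(RectangleF rect)
    {
        if (ecgPoints.Count < 2)
            return;

        using (var g = UIGraphics.GetCurrentContext())
        {
            g.SetLineWidth(1);
            UIColor.Green.SetStroke();
            g.MoveTo(ecgPoints[0].X, ecgPoints[0].Y);
            for (int i = 1; i < ecgPoints.Count; i++)
                g.AddLineToPoint(ecgPoints[i].X, ecgPoints[i].Y);
            g.StrokePath();
        }
    }
}
AddPoint must be called on the UI thread (or via InvokeOnMainThread). Batching a few samples per SetNeedsDisplay() call keeps the redraw rate tied to the screen refresh rather than the 10 ms data rate, which is usually where the perceived speed-up comes from.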
Alternatively, you might want to look at existing components like CorePlot or TeeChart, available in the Xamarin Component Store. TeeChart is even cross-platform, so you wouldn't have to worry about Android vs. iOS.

Related

How to Flip FaceOSC in Processing3.2.1

I am new to Processing and am now trying to use FaceOSC. Everything is done already, but it is hard to play the game I made when the view is not mirrored. So I want to flip the data that FaceOSC sends to Processing to create the video.
I'm not sure whether FaceOSC sends the video, because I've tried flipping it like a video and it doesn't work. I also tried flipping it like an image, and flipping the canvas, but that still doesn't work. Or maybe I did it wrong. Please help!
//XXXXXXX// This is some of my code.
import oscP5.*;
import codeanticode.syphon.*;

OscP5 oscP5;
SyphonClient client;
PGraphics canvas;
boolean found;
PVector[] meshPoints;

void setup() {
  size(640, 480, P3D);
  frameRate(30);
  initMesh();
  oscP5 = new OscP5(this, 8338);
  // USE THESE 2 EVENTS TO DRAW THE
  // FULL FACE MESH:
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "loadMesh", "/raw");
  // plugin for mouth
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  // initialize the syphon client with the name of the server
  client = new SyphonClient(this, "FaceOSC");
  // prep the PGraphics object to receive the camera image
  canvas = createGraphics(640, 480, P3D);
}

void draw() {
  background(0);
  stroke(255);
  // flip like a video here, does not work
  /* pushMatrix();
  translate(canvas.width, 0);
  scale(-1, 1);
  image(canvas, -canvas.width, 0, width, height);
  popMatrix(); */
  image(canvas, 0, 0, width, height);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
}

//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
void drawFeature(int[] featurePointList) {
  for (int i = 0; i < featurePointList.length; i++) {
    PVector meshVertex = meshPoints[featurePointList[i]];
    if (i > 0) {
      PVector prevMeshVertex = meshPoints[featurePointList[i-1]];
      line(meshVertex.x, meshVertex.y, prevMeshVertex.x, prevMeshVertex.y);
    }
    ellipse(meshVertex.x, meshVertex.y, 3, 3);
  }
}

//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
public void found(int i) {
  // println("found: " + i); // 1 == found, 0 == not found
  found = i == 1;
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The scale() and translate() snippet you're trying to use makes sense, but it looks like you're using it in the wrong place. I'm not sure what canvas is supposed to show, but I'm guessing the face features drawn by the drawFeature() calls are what you want to mirror. If so, you should place those calls between pushMatrix() and popMatrix(), right after the scale().
I would try something like this in draw():
void draw() {
  background(0);
  stroke(255);
  // flip horizontal
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
  popMatrix();
}
The push/pop matrix calls isolate the coordinate space.
The coordinate system origin (0,0) is the top-left corner: this is why everything is translated by the width before scaling x by -1. Because the pivot is not at the centre, simply mirroring won't leave the content in the same place.
For more details, check out the Processing Transform2D tutorial.
Here's a basic example:
boolean mirror;

void setup(){
  size(640,480);
}

void draw(){
  if(mirror){
    pushMatrix();
    //translate, otherwise mirrored content will be off screen (pivot is at top left corner, not centre)
    translate(width,0);
    //scale x by -1 to mirror
    scale(-1,1);
    //draw mirrored content
    drawStuff();
    popMatrix();
  }else{
    drawStuff();
  }
}

//this could be the face preview
void drawStuff(){
  background(0);
  triangle(0,0,width,0,0,height);
  text("press m to toggle mirroring",450,470);
}

void keyPressed(){
  if(key == 'm') mirror = !mirror;
}
Another option is to mirror each coordinate, but in your case it would be a lot of effort when scale(-1,1) will do the trick. For reference, to mirror the coordinate, you simply need to subtract the current value from the largest value:
void setup(){
  size(640,480);
  background(255);
}

void draw(){
  ellipse(mouseX,mouseY,30,30);
  //subtract current value (mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width-mouseX,mouseY,30,30);
}
You can run these examples right here:
var mirror;

function setup(){
  createCanvas(640,225);
  fill(255);
}

function draw(){
  if(mirror){
    push();
    //translate, otherwise mirrored content will be off screen (pivot is at top left corner, not centre)
    translate(width,0);
    //scale x by -1 to mirror
    scale(-1,1);
    //draw mirrored content
    drawStuff();
    pop();
  }else{
    drawStuff();
  }
}

//this could be the face preview
function drawStuff(){
  background(0);
  triangle(0,0,width,0,0,height);
  text("press m to toggle mirroring",450,470);
}

function keyPressed(){
  if(key == 'M') mirror = !mirror;
}

<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>
function setup(){
  createCanvas(640,225);
  background(0);
  fill(0);
  stroke(255);
}

function draw(){
  ellipse(mouseX,mouseY,30,30);
  //subtract current value (mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width-mouseX,mouseY,30,30);
}

<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>

Graphics drawing code generates blank images, only on iOS

My app displays some images that I created using Image.createImage(). In some cases, the images are completely blank, but only on iOS. The images work fine on Android. Also, I create several images using Image.createImage() and most of them work fine. I don't see any difference between those and these.
To reproduce, run the enclosed app on both Android and iOS. The app shows two images. The second one is taken from the bottom half of the first one. On Android, the images show up fine. On iOS, the images show up for a few seconds, then vanish. It turns out that they only show up while iOS is displaying the startup screen. Once it switches to the actual app, the images are blank, although they take up the same space. Further tests reveal that the images are the correct size but are filled with transparent pixels.
I should say that, in my actual application, the images scale with the size of the screen, and are colored according to a user preference, so I can't just load them from a resource.
(BTW Notice the change I made to the stop method. This is unrelated but worth mentioning.)
Here's the test case:
import com.codename1.ui.Component;
import com.codename1.ui.Container;
import com.codename1.ui.Display;
import com.codename1.ui.Form;
import com.codename1.ui.Dialog;
import com.codename1.ui.Graphics;
import com.codename1.ui.Image;
import com.codename1.ui.Label;
import com.codename1.ui.layouts.BorderLayout;
import com.codename1.ui.layouts.BoxLayout;
import com.codename1.ui.plaf.UIManager;
import com.codename1.ui.util.Resources;
import com.codename1.io.Log;
import com.codename1.ui.Toolbar;
import java.util.Arrays;

/**
 * This file was generated by Codename One for the purpose
 * of building native mobile applications using Java.
 */
@SuppressWarnings("unused")
public class HalfImageBug {
    private Form current;
    private Resources theme;

    public void init(Object context) {
        theme = UIManager.initFirstTheme("/theme");
        // Enable Toolbar on all Forms by default
        Toolbar.setGlobalToolbar(true);
    }

    public void start() {
        if (current != null) {
            current.show();
            return;
        }
        Form hi = new Form("Hi World", new BorderLayout());
        hi.addComponent(BorderLayout.CENTER, makeComponent());
        hi.show();
    }

    public void stop() {
        current = Display.getInstance().getCurrent();
        // This was originally if, but it should be while, in case there are multiple layers of dialogs.
        while (current instanceof Dialog) {
            ((Dialog) current).dispose();
            current = Display.getInstance().getCurrent();
        }
    }

    public void destroy() {
    }

    private Component makeComponent() {
        final Container container = new Container(new BoxLayout(BoxLayout.Y_AXIS));
        container.setScrollableY(true);
        container.add(new Label("Full Image:"));
        Image fullIcon = createFullImage(0x44ff00, 40, 30);
        Label fullImage = new Label(fullIcon);
        container.add(fullImage);
        container.add(new Label("---"));
        container.add(new Label("Half Image:"));
        Image halfIcon = createHalfSizeImage(fullIcon);
        Label halfImage = new Label(halfIcon);
        container.add(halfImage);
        return container;
    }

    private Image createFullImage(int color, int verticalDiameter, int horizontalRadius) {
        // Make sure it's an even number. Otherwise the half image will have its right and left halves reversed!
        int diameter = (verticalDiameter / 2) * 2;
        final int iconWidth = 2 * horizontalRadius;
        int imageWidth = iconWidth + 2;
        int imageHt = diameter + 2;
        Image fullImage = Image.createImage(imageWidth, imageHt);
        Graphics g = fullImage.getGraphics();
        g.setAntiAliased(true);
        g.setColor(color);
        g.fillRect(0, 0, imageWidth, imageHt);
        g.setColor(darken(color, 25));
        g.fillArc(1, 1, iconWidth, diameter, 180, 360);
        g.setColor(0xbfbfbf);
        final int smallerHt = (9 * diameter) / 10;
        g.fillArc(0, 0, iconWidth, smallerHt, 180, 360);
        Image maskImage = Image.createImage(imageWidth, imageHt);
        g = maskImage.getGraphics();
        g.setAntiAliased(true);
        g.setColor(0);
        g.fillRect(0, 0, imageWidth, imageHt);
        g.setColor(0xFF);
        g.fillArc(1, 1, iconWidth, diameter, 180, 360);
        fullImage = fullImage.applyMask(maskImage.createMask());
        return fullImage;
    }

    private Image createHalfSizeImage(Image fullImage) {
        int imageWidth = fullImage.getWidth();
        int imageHt = fullImage.getHeight();
        int[] rgbValues = fullImage.getRGB();
        // yeah, I've since discovered a much more sensible way to do this, but it doesn't fix the bug.
        int[] bottomHalf = Arrays.copyOfRange(rgbValues, rgbValues.length / 2, rgbValues.length);
        //noinspection StringConcatenation
        Log.p("Cutting side image from " + imageWidth + " x " + imageHt + " to " + imageWidth + " x " + (imageHt / 2));
        return Image.createImage(bottomHalf, imageWidth, imageHt / 2);
    }

    private static int darken(int color, int percent) {
        if ((percent > 100) || (percent < 0)) {
            throw new IllegalArgumentException("Percent out of range: " + percent);
        }
        int percentRemaining = 100 - percent;
        return (darkenPrimary((color & 0xFF0000) >> 16, percentRemaining) << 16)
                | (darkenPrimary((color & 0xFF00) >> 8, percentRemaining) << 8)
                | (darkenPrimary(color & 0xFF, percentRemaining));
    }

    private static int darkenPrimary(int primaryValue, int percentRemaining) {
        if ((primaryValue < 0) || (primaryValue > 255)) {
            throw new IllegalArgumentException("Primary value out of range (0-255): " + primaryValue);
        }
        return (primaryValue * percentRemaining) / 100;
    }
}
This is discussed in this issue.
Generally, the images initially appear because of the screenshot process that shows them during startup, so they never really render natively on iOS.
A common cause for these issues is creating images off the EDT (Event Dispatch Thread), which doesn't seem to be the problem in this specific code.
It's hard to see what is going on, so I guess we'll need to evaluate the issue.
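For reference, keeping image creation on the EDT generally means routing the work through callSerially() whenever the call might come from a background thread. A small illustrative sketch (the helper below is made up, not from the original discussion):
// Hypothetical helper: build the icon on the EDT and install it on a Label.
private void setIconOnEdt(Label label) {
    Runnable build = () -> {
        Image icon = createFullImage(0x44ff00, 40, 30);
        label.setIcon(icon);
        label.repaint();
    };
    if (Display.getInstance().isEdt()) {
        build.run();
    } else {
        Display.getInstance().callSerially(build);
    }
}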
Here's a workaround. This works, but doesn't anti-alias very well. It will do until the iOS code gets fixed.
The problem is as described elsewhere. The Graphics.fillArc() and drawArc() methods work fine on Android, but often fail on iOS. Here's the behavior:
if width == height, they correctly draw a circle.
if width < height, they should draw an ellipse, but they draw a circle, centered over the intended ellipse, with a diameter equal to width.
if width > height, they draw nothing.
The workaround draws a circle against a transparent background, then draws that circle, squeezed in one direction to an ellipse, into the proper place. It doesn't do a very good job of anti-aliasing, so this is not a good substitute for working code, but it will do until the bug gets fixed. (This workaround handles fillArc(), but it shouldn't be hard to modify it for drawArc().)
/**
 * Workaround for fillArc bug. Graphics.fillArc() works fine on Android, but usually fails on iOS. There are three
 * cases for its behavior.
 *
 * If width < height, it draws a circle with a diameter equal to width, and concentric with the intended ellipse.<br>
 * If width > height, it draws nothing.<br>
 * If width == height, it works correctly.
 *
 * To work around this we create a separate image, draw a circle, re-proportion it to the proper ellipse, then draw
 * it to the Graphics object. It doesn't anti-alias very well.
 */
public static void fillArcWorkaround(Graphics masterG, int x, int y, int width, int height, int startAngle, int arcAngle) {
    if (width == height) {
        masterG.fillArc(x, y, width, height, startAngle, arcAngle);
    } else {
        int max = Math.max(width, height);
        Image tempCircle = Image.createImage(max, max);
        Graphics tempG = tempCircle.getGraphics();
        tempG.setColor(masterG.getColor());
        tempG.fillRect(0, 0, max, max);
        // At this point tempCircle is just a colored rectangle. It becomes a circle when we apply the circle mask.
        // The region outside the circle becomes transparent that way.
        Image mask = Image.createImage(max, max);
        tempG = mask.getGraphics();
        tempG.setAntiAliased(masterG.isAntiAliased());
        tempG.setColor(0);
        tempG.fillRect(0, 0, max, max);
        tempG.setColor(0xFF); // blue
        tempG.fillArc(0, 0, max, max, startAngle, arcAngle);
        tempCircle = tempCircle.applyMask(mask.createMask());
        // Now tempCircle is a filled circle of the correct color. We now draw it in its intended proportions.
        masterG.setAntiAliased(true);
        masterG.drawImage(tempCircle, x, y, width, height);
    }
}
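Call sites then just swap g.fillArc(...) for the helper. For example, the ellipse in createFullImage() above would become (assuming the workaround lives in a utility class, here called DrawingUtil purely for illustration):
// was: g.fillArc(1, 1, iconWidth, diameter, 180, 360);
DrawingUtil.fillArcWorkaround(g, 1, 1, iconWidth, diameter, 180, 360);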

Outline color of the Frame is not displayed in Xamarin Forms Android Project using MvvmCross

Currently I am working on a Xamarin Forms Android project using MvvmCross. I have a strange problem regarding the Frame: whenever I set the OutlineColor, it is displayed only on iOS and not on Android. I've tried with a different Xamarin Forms project and there it is displayed on both platforms without any problems. I have no indication of why this is happening. Could MvvmCross somehow be related to the issue?
Here is a sample:
<core:BasePage
    xmlns="http://xamarin.com/schemas/2014/forms"
    xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
    xmlns:core="clr-namespace:Core.Base.Views;assembly=Core"
    x:Class="Views.TestPage"
    BackgroundImage="background_secret.png"
    Title="Test">
    <ContentPage.Content>
        <Grid
            HorizontalOptions="FillAndExpand"
            Padding="12,20,12,20"
            VerticalOptions="FillAndExpand">
            <Frame
                HasShadow="false"
                VerticalOptions="Fill"
                BackgroundColor="White"
                OutlineColor="#1961ac">
                <StackLayout>
                    <Frame
                        VerticalOptions="Start"
                        Padding="8,4,8,4"
                        HasShadow="false"
                        OutlineColor="#9DB0BB">
                        <Label Text="Test"></Label>
                    </Frame>
                </StackLayout>
            </Frame>
        </Grid>
    </ContentPage.Content>
</core:BasePage>
Xamarin Forms Version 2.1
MvvmCross Version 4.1
I got the same issue. To solve it, I added a custom renderer for the Frame control.
In the FrameRenderer you need to override the Draw method and add a private DrawOutline method, as follows:
public override void Draw(ACanvas canvas)
{
    base.Draw(canvas);
    DrawOutline(canvas, canvas.Width, canvas.Height, 4f); // set corner radius
}

void DrawOutline(ACanvas canvas, int width, int height, float cornerRadius)
{
    using (var paint = new Paint { AntiAlias = true })
    using (var path = new Path())
    using (Path.Direction direction = Path.Direction.Cw)
    using (Paint.Style style = Paint.Style.Stroke)
    using (var rect = new RectF(0, 0, width, height))
    {
        float rx = Forms.Context.ToPixels(cornerRadius);
        float ry = Forms.Context.ToPixels(cornerRadius);
        path.AddRoundRect(rect, rx, ry, direction);
        paint.StrokeWidth = 2f; // set outline stroke width
        paint.SetStyle(style);
        paint.Color = Color.ParseColor("#A7AE22"); // set outline color //_frame.OutlineColor.ToAndroid();
        canvas.DrawPath(path, paint);
    }
}
And in another approach you can also consider using the android selector xml of rounded corner as a background resource.
For more detail on this check my blog post: http://www.appliedcodelog.com/2016/11/xamarin-form-frame-outline-color_21.html
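To sketch that resource-based idea (the file name, colors and radius below are placeholders, and the linked post is the authoritative version), a rounded-corner shape drawable can provide the outline:
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical Resources/drawable/frame_border.xml: white fill with a colored rounded outline -->
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">
    <solid android:color="#FFFFFF" />
    <stroke android:width="2dp" android:color="#1961ac" />
    <corners android:radius="4dp" />
</shape>
The renderer (or a style) would then set this drawable as the native view's background instead of stroking the outline by hand.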
Suchith's answer is correct and solved my problem, but Xamarin.Forms.Forms.Context has been obsolete since Xamarin.Forms version 2.5.
Now the better approach is to use Android.App.Application.Context, so this is what the code should look like now:
public override void Draw(Canvas canvas)
{
    base.Draw(canvas);
    DrawOutline(canvas, canvas.Width, canvas.Height, 4f); // set corner radius
}

void DrawOutline(Canvas canvas, int width, int height, float cornerRadius)
{
    using (var paint = new Paint { AntiAlias = true })
    using (var path = new Path())
    using (Path.Direction direction = Path.Direction.Cw)
    using (Paint.Style style = Paint.Style.Stroke)
    using (var rect = new RectF(0, 0, width, height))
    {
        float rx = Android.App.Application.Context.ToPixels(cornerRadius);
        float ry = Android.App.Application.Context.ToPixels(cornerRadius);
        path.AddRoundRect(rect, rx, ry, direction);
        paint.StrokeWidth = 2f; // set outline stroke width
        paint.SetStyle(style);
        paint.Color = Android.Graphics.Color.ParseColor("#FFFFFF"); // set outline color //_frame.OutlineColor.ToAndroid();
        canvas.DrawPath(path, paint);
    }
}
In this link we have a great explanation for why we should use this new approach and why the Xamarin.Forms.Forms.Context is obsolete.
Here is my custom renderer; see you ;)
using Android.Content;
using Android.Graphics;
using Android.Graphics.Drawables;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;

[assembly: ExportRenderer(typeof(Frame), typeof(YourProject.Droid.Renderers.BorderFrameRenderer))]
namespace YourProject.Droid.Renderers
{
    public class BorderFrameRenderer : FrameRenderer
    {
        public override void Draw(Canvas canvas)
        {
            base.Draw(canvas);
            using (var strokePaint = new Paint())
            using (var rect = new RectF(0, 0, canvas.Width, canvas.Height))
            {
                // stroke
                strokePaint.SetStyle(Paint.Style.Stroke);
                strokePaint.Color = Element.OutlineColor.ToAndroid();
                strokePaint.StrokeWidth = 5;
                canvas.DrawRoundRect(rect, Element.CornerRadius * 2, Element.CornerRadius * 2, strokePaint); // stroke
            }
        }

        public BorderFrameRenderer(Context context) : base(context)
        {
        }

        protected override void OnElementChanged(ElementChangedEventArgs<Frame> e)
        {
            base.OnElementChanged(e);
        }
    }
}

Animate multiple shapes in UIView

I have a custom class that inherits from UIView. In the Draw method I draw several shapes, including some circles. I want to animate the color (currently the stroke color) of the circles independently of each other, e.g. I would like the color of one or more of the circles to "pulse" or flash (using ease-in/ease-out and not linearly).
What would be the best way to achieve this?
It would be great to be able to use the built-in animation code (CABasicAnimation and the like) but I'm not sure how?
EDIT: Here's the code involved. (I am using Xamarin.iOS but my question is not specific to this).
CGColor[] circleColors;

public override void Draw (RectangleF rect)
{
    base.Draw (rect);
    using (CGContext g = UIGraphics.GetCurrentContext ()) {
        g.SetLineWidth(4);
        float size = rect.Width > rect.Height ? rect.Height : rect.Width;
        float xCenter = ((rect.Width - size) / 2) + (size/2);
        float yCenter = ((rect.Height - size) / 2) + (size/2);
        float d = size / (rws.NumCircles*2+2);
        var circleRect = new RectangleF (xCenter, yCenter, 0, 0);
        for (int i = 0; i < rws.NumCircles; i++) {
            circleRect.X -= d;
            circleRect.Y -= d;
            circleRect.Width += d*2;
            circleRect.Height += d*2;
            CGPath path = new CGPath ();
            path.AddEllipseInRect (circleRect);
            g.SetStrokeColor (circleColors [i]);
            g.AddPath (path);
            g.StrokePath ();
        }
    }
}
You need to move all your drawing code to a subclass of CALayer and decide on the parameters which, when varied, produce the desired animations. Turn these parameters into the layer's properties, and you can then animate them with CABasicAnimation (or even [UIView animateXXX]).
See this SO question for more information.
Make sure that you set the layer's rasterizationScale to [UIScreen mainScreen].scale to avoid blurs on Retina.
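If writing a fully custom animatable CALayer is too much, a lighter variant of the same idea is to give each circle its own CAShapeLayer and animate its strokeColor. A rough Xamarin.iOS sketch (circleRect and the colors are placeholders, and boxing the CGColor for the animation's To value is an assumption worth verifying):
// One CAShapeLayer per circle, added once (e.g. in the view's constructor).
var circle = new CAShapeLayer {
    Path = CGPath.EllipseFromRect(circleRect),  // circleRect: this circle's bounding box
    FillColor = UIColor.Clear.CGColor,
    StrokeColor = UIColor.Red.CGColor,
    LineWidth = 4
};
Layer.AddSublayer(circle);

// Pulse the stroke color with ease-in/ease-out, repeating forever.
var pulse = CABasicAnimation.FromKeyPath("strokeColor");
pulse.To = NSObject.FromObject(UIColor.Yellow.CGColor);  // assumption: boxing the CGColor this way
pulse.Duration = 0.6;
pulse.AutoReverses = true;
pulse.RepeatCount = float.PositiveInfinity;
pulse.TimingFunction = CAMediaTimingFunction.FromName(CAMediaTimingFunction.EaseInEaseOut);
circle.AddAnimation(pulse, "pulseStroke");
Because each circle is its own layer, each one can run its own animation independently, which is hard to do when everything is repainted in a single Draw override.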

is there a way to draw on the screen using finger with Qml

Is there a way to draw on the screen using my finger?
I'm creating a BlackBerry 10 application, like Draw Something, as a school assignment.
If you have Qt Quick 2.0 available, you can make use of the Canvas element in QML. The following sample shows how to draw a red line whenever there is an onPressed / onPositionChanged event:
import QtQuick 2.0

Rectangle {
    property int startX;
    property int startY;
    property int finishX;
    property int finishY;

    Canvas {
        anchors.fill: parent
        onPaint: {
            var ctx = getContext("2d");
            ctx.fillStyle = "black";
            ctx.fillRect(0, 0, width, height);
            ctx.strokeStyle = "red";
            ctx.lineWidth = 1;
            ctx.beginPath();
            ctx.moveTo(startX, startY);
            ctx.lineTo(finishX, finishY);
            ctx.stroke();
            ctx.closePath();
        }

        MouseArea {
            anchors.fill: parent
            onPressed: {
                startX = mouseX;
                startY = mouseY;
            }
            onPositionChanged: {
                finishX = mouseX;
                finishY = mouseY;
                parent.requestPaint();
            }
        }
    }
}
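Note that this sample clears the canvas and redraws only the latest press-to-current-position segment, so it behaves like a rubber-band line rather than a freehand trail. One way to keep the whole stroke, sketched under the assumption that Qt Quick 2.0's property var is available (paintCanvas and points are made-up names), is to accumulate the points and redraw the polyline each time:
// Sketch: keep the whole stroke and repaint it, instead of a single segment.
Canvas {
    id: paintCanvas
    anchors.fill: parent
    property var points: []    // accumulated finger positions

    onPaint: {
        var ctx = getContext("2d");
        ctx.fillStyle = "black";
        ctx.fillRect(0, 0, width, height);
        ctx.strokeStyle = "red";
        ctx.lineWidth = 1;
        ctx.beginPath();
        for (var i = 0; i < points.length; i++) {
            var p = points[i];
            if (i === 0)
                ctx.moveTo(p.x, p.y);
            else
                ctx.lineTo(p.x, p.y);
        }
        ctx.stroke();
    }

    MouseArea {
        anchors.fill: parent
        onPressed: { paintCanvas.points.push({x: mouseX, y: mouseY}); }
        onPositionChanged: {
            paintCanvas.points.push({x: mouseX, y: mouseY});
            paintCanvas.requestPaint();
        }
    }
}
This keeps a single continuous polyline; separating strokes when the finger lifts would need a list of strokes and an onReleased handler.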
Sorry, but what are you planning to use this on, a BB10 device? I didn't fully understand the question.
Try this solution from the SDK documentation.
