I am coding a little project where I need a line from a given object to my mouse. I got things working and came up with this quick and dirty code:
addListener(new ClickListener() {
Image lineImage;
Pixmap pixmap;
@Override
public void touchDragged(InputEvent event, float x, float y, int pointer) {
// Get Actor Origin
// Get local Origin
int x2 = (int) event.getListenerActor().getX(Align.center);
int y2 = (int) event.getListenerActor().getY(Align.center);
// Make it global
x2 = (int) event.getListenerActor().getParent().getX() + x2;
y2 = (int) event.getListenerActor().getParent().getY() + y2;
// Get Stage Coordinates
Vector2 v = localToStageCoordinates(new Vector2(x, y));
Vector2 v2 = new Vector2(x2, y2);
Stage stage = event.getStage();
int width = (int) stage.getWidth();
int height = (int) stage.getHeight();
if (pixmap == null) {
pixmap = new Pixmap(width, height, Pixmap.Format.RGBA8888);
} else {
pixmap.setColor(1, 1, 1, 0);
pixmap.fill();
}
pixmap.setColor(Color.BLUE);
// line
for (int m = -2; m <= 2; m++) {// x
for (int n = -2; n <= 2; n++) {// y
pixmap.drawLine((int) (v2.x+m), (int) (height-v2.y+n) , (int) (v.x+m), (int) (height-v.y+n));
}
}
if (lineImage != null) {
/*lineImage.clear();
lineImage.remove();
*/
lineImage.setDrawable(new SpriteDrawable(new Sprite(new Texture(pixmap))));
} else {
lineImage = new Image(new Texture(pixmap));
}
lineImage.setPosition(0,0);
stage.addActor(lineImage);
// super.touchDragged(event, x, y, pointer);
}
@Override
public void touchUp(InputEvent event, float x, float y, int pointer, int button) {
if (lineImage != null) {
lineImage.clear();
lineImage.remove();
}
lineImage = null;
super.touchUp(event, x, y, pointer, button);
}
});
The problem with this is that when I use this listener on an Image and keep touchDragged active for about 20 seconds, there is a memory leak.
I have no idea why this happens; I have tried a lot of things, but nothing seems to fix it. Do you have any ideas?
@noone is right. Add the line at the commented spot below to dispose your Pixmap after you have assigned the drawable to the lineImage.
if (lineImage != null) {
/*lineImage.clear();
lineImage.remove();
*/
lineImage.setDrawable(new SpriteDrawable(new Sprite(new Texture(pixmap))));
} else {
lineImage = new Image(new Texture(pixmap));
}
pixmap.dispose(); // <-----------Add this line here!!!
lineImage.setPosition(0,0);
stage.addActor(lineImage);
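The same reasoning applies to the Texture created on every drag: setDrawable does not free the previous one, and libGDX only releases a texture when dispose() is called on it explicitly. A minimal sketch of keeping a reference so the old texture can be disposed before it is replaced (the lastTexture field is illustrative, not part of the original code):
Texture lastTexture; // hypothetical field, next to lineImage and pixmap

// inside touchDragged, once the pixmap has been drawn:
Texture newTexture = new Texture(pixmap);
if (lastTexture != null) {
    lastTexture.dispose(); // free the GL texture from the previous drag event
}
lastTexture = newTexture;
if (lineImage != null) {
    lineImage.setDrawable(new SpriteDrawable(new Sprite(newTexture)));
} else {
    lineImage = new Image(newTexture);
}
In touchUp, the same lastTexture reference can be disposed and set to null alongside lineImage.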
I have implemented a quicksort algorithm in Dart which sorts a list of random integers in the range from 0 to 100. Most of the time it throws a 'not in inclusive range' exception; other times it works. I don't understand what is wrong with my code or logic here.
import 'dart:math';
List<int> list = [];
void main() {
randomize();
quickSort(0, list.length - 1);
print(list);
}
void randomize() {
for (int i = 0; i < 100; ++i) {
list.add(Random().nextInt(100));
}
}
void quickSort(int l, int h) {
if (l < h) {
int mid = partition(l, h);
quickSort(l, mid - 1);
quickSort(mid + 1, h);
}
}
int partition(int l, int h) {
int pivot = l;
int i = l;
int j = h;
while (i < j) {
while (list[i] <= list[pivot]) {
++i;
}
while (list[j] > list[pivot]) {
--j;
}
if (i < j) {
int temp = list[i];
list[i] = list[j];
list[j] = temp;
}
}
int temp = list[pivot];
list[pivot] = list[j];
list[j] = temp;
return j;
}
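A hedged note on where the range error can come from: since list[pivot] <= list[pivot], the first inner scan always advances i, and when list[pivot] is the largest value in list[l..h] nothing inside the sub-array stops it, so i runs past h and eventually past the end of the list. A sketch of the same partition with bounded inner scans, written in Java to match the other examples on this page (translating it back to Dart is mechanical):
// Sketch only: pivot-at-l partition with guards on the inner scans.
static int partition(int[] list, int l, int h) {
    int pivot = l, i = l, j = h;
    while (i < j) {
        // The i < h guard keeps i inside the sub-array even when list[pivot]
        // is the largest value in list[l..h].
        while (i < h && list[i] <= list[pivot]) {
            ++i;
        }
        // j stops at l anyway because list[l] == list[pivot], but the guard
        // makes the symmetry explicit.
        while (j > l && list[j] > list[pivot]) {
            --j;
        }
        if (i < j) {
            int temp = list[i];
            list[i] = list[j];
            list[j] = temp;
        }
    }
    int temp = list[pivot];
    list[pivot] = list[j];
    list[j] = temp;
    return j;
}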
As an exercise I am trying to calculate a recursive EMA with a burn period in Esper, EPL. It has moderately complex startup logic, and I thought this would be a good test for evaluating the sorts of things Esper could achieve.
Assuming a stream of values x1, x2, x3 at regular intervals, we want to calculate:
let p = 0.1
a = average(x1, x2, x3, x4, x5) // Assume 5, in reality use a parameter
y1 = p * x1 + (1 - p) * a // Recursive calculation initialized with look-ahead average
y2 = p * x2 + (1 - p) * y1
y3 = p * x3 + (1 - p) * y2
....
The final stream should only publish y5, y6, y7, ...
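For reference, a plain-Java sketch of the calculation described above, independent of Esper (the method name and the assumption that x holds at least burn samples are mine, not part of the question):
// Seed the recursion with the average of the first `burn` samples, then apply
// y = p * x + (1 - p) * y to every sample; only y[burn - 1] onwards is published.
static double[] emaWithBurn(double[] x, double p, int burn) {
    double a = 0;
    for (int i = 0; i < burn; i++) {
        a += x[i];
    }
    a /= burn; // look-ahead average over the burn period
    double y = a;
    double[] out = new double[x.length];
    for (int i = 0; i < x.length; i++) {
        y = p * x[i] + (1 - p) * y; // recursive update
        out[i] = y;                 // only meaningful for i >= burn - 1
    }
    return out;
}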
I was toying with a context that produces an event containing the average a, and that event triggers a second context that begins the recursive calculations. But by the time I get the first context to trigger once and only once, and the second context to handle the initial case using a and subsequent events recursively, I end up with a messy tangle of logic.
Is there a straightforward way to approach this problem?
(I'm ignoring using a custom aggregator, since this is a learning exercise)
This doesn't answer the question directly, but it might be useful: an implementation as a custom aggregation function, tested with Esper 7.1.0.
public class EmaFactory implements AggregationFunctionFactory {
int burn = 0;
@Override
public void setFunctionName(String s) {
// Don't know why/when this is called
}
@Override
public void validate(AggregationValidationContext ctx) {
@SuppressWarnings("rawtypes")
Class[] p = ctx.getParameterTypes();
if ((p.length != 3)) {
throw new IllegalArgumentException(String.format(
"Ema aggregation required three parameters, received %d",
p.length));
}
if (
!(
(p[0] == Double.class || p[0] == double.class) &&
(p[1] == Double.class || p[1] == double.class) &&
(p[2] == Integer.class || p[2] == int.class))) {
throw new IllegalArgumentException(
String.format(
"Arguments to Ema aggregation must of types (Double, Double, Integer), got (%s, %s, %s)\n",
p[0].getName(), p[1].getName(), p[2].getName()) +
"This should be made nicer, see AggregationMethodFactorySum.java in the Esper source code for " +
"examples of correctly dealing with multiple types"
);
}
if (!ctx.getIsConstantValue()[2]) {
throw new IllegalArgumentException(
"Third argument 'burn' to Ema aggregation must be constant"
);
}
;
burn = (int) ctx.getConstantValues()[2];
}
@Override
public AggregationMethod newAggregator() {
return new EmaAggregationFunction(burn);
}
@SuppressWarnings("rawtypes")
@Override
public Class getValueType() {
return Double.class;
}
}
public class EmaAggregationFunction implements AggregationMethod {
final private int burnLength;
private double[] burnValues;
private int count = 0;
private double value = 0.;
EmaAggregationFunction(int burn) {
this.burnLength = burn;
this.burnValues = new double[burn];
}
private void update(double x, double alpha) {
if (count < burnLength) {
value += x;
burnValues[count++] = x;
if (count == burnLength) {
value /= count;
for (double v : burnValues) {
value = alpha * v + (1 - alpha) * value;
}
// in case burn is long, free memory
burnValues = null;
}
} else {
value = alpha * x + (1 - alpha) * value;
}
}
@Override
public void enter(Object tmp) {
Object[] o = (Object[]) tmp;
assert o[0] != null;
assert o[1] != null;
assert o[2] != null;
assert (int) o[2] == burnLength;
update((double) o[0], (double) o[1]);
}
@Override
public void leave(Object o) {
}
@Override
public Object getValue() {
if (count < burnLength) {
return null;
} else {
return value;
}
}
@Override
public void clear() {
// I don't know when / why this is called - this part untested
count = 0;
value = 0.;
burnValues = new double[burnLength];
}
}
public class TestEmaAggregation {
private EPRuntime epRuntime;
private SupportUpdateListener listener = new SupportUpdateListener();
void send(int id, double value) {
epRuntime.sendEvent(
new HashMap<String, Object>() {{
put("id", id);
put("value", value);
}},
"CalculationEvent");
}
@BeforeEach
public void beforeEach() {
EPServiceProvider provider = EPServiceProviderManager.getDefaultProvider();
EPAdministrator epAdministrator = provider.getEPAdministrator();
epRuntime = provider.getEPRuntime();
ConfigurationOperations config = epAdministrator.getConfiguration();
config.addPlugInAggregationFunctionFactory("ema", EmaFactory.class.getName());
config.addEventType(
"CalculationEvent",
new HashMap<String, Object>() {{ put("id", Integer.class); put("value", Double.class); }}
);
EPStatement stmt = epAdministrator.createEPL("select ema(value, 0.1, 5) as ema from CalculationEvent where value is not null");
stmt.addListener(listener);
}
Double getEma() {
return (Double)listener.assertOneGetNewAndReset().get("ema");
}
@Test
public void someTest() {
send(1, 1);
assertEquals(null, getEma());
send(1, 2);
assertEquals(null, getEma());
send(1, 3);
assertEquals(null, getEma());
send(1, 4);
assertEquals(null, getEma());
// Last of the burn period
// We expect:
// a = (1+2+3+4+5) / 5 = 3
// y1 = 0.1 * 1 + 0.9 * 3 = 2.8
// y2 = 0.1 * 2 + 0.9 * 2.8
// ... leading to
// y5 = 3.08588
send(1, 5);
assertEquals(3.08588, getEma(), 1e-10);
// Outside burn period
send(1, 6);
assertEquals(3.377292, getEma(), 1e-10);
send(1, 7);
assertEquals(3.7395628, getEma(), 1e-10);
send(1, 8);
assertEquals(4.16560652, getEma(), 1e-10);
}
}
I think the following code isn't giving the correct result.
What's wrong with the following code?
public class ImagePadder
{
public static Bitmap Pad(Bitmap image, int newWidth, int newHeight)
{
int width = image.Width;
int height = image.Height;
if (width >= newWidth) throw new Exception("New width must be larger than the old width");
if (height >= newHeight) throw new Exception("New height must be larger than the old height");
Bitmap paddedImage = Grayscale.CreateGrayscaleImage(newWidth, newHeight);
BitmapLocker inputImageLocker = new BitmapLocker(image);
BitmapLocker paddedImageLocker = new BitmapLocker(paddedImage);
inputImageLocker.Lock();
paddedImageLocker.Lock();
//Reading row by row
for (int y = 0; y < image.Height; y++)
{
for (int x = 0; x < image.Width; x++)
{
Color col = inputImageLocker.GetPixel(x, y);
paddedImageLocker.SetPixel(x, y, col);
}
}
string str = string.Empty;
paddedImageLocker.Unlock();
inputImageLocker.Unlock();
return paddedImage;
}
}
Relevant Source Code:
public class BitmapLocker : IDisposable
{
//private properties
Bitmap _bitmap = null;
BitmapData _bitmapData = null;
private byte[] _imageData = null;
//public properties
public bool IsLocked { get; set; }
public IntPtr IntegerPointer { get; private set; }
public int Width { get { return _bitmap.Width; } }
public int Height { get { return _bitmap.Height; } }
public int Stride { get { return _bitmapData.Stride; } }
public int ColorDepth { get { return Bitmap.GetPixelFormatSize(_bitmap.PixelFormat); } }
public int Channels { get { return ColorDepth / 8; } }
public int PaddingOffset { get { return _bitmapData.Stride - (_bitmap.Width * Channels); } }
public PixelFormat ImagePixelFormat { get { return _bitmap.PixelFormat; } }
public bool IsGrayscale { get { return Grayscale.IsGrayscale(_bitmap); } }
//Constructor
public BitmapLocker(Bitmap source)
{
IsLocked = false;
IntegerPointer = IntPtr.Zero;
this._bitmap = source;
}
/// Lock bitmap
public void Lock()
{
if (IsLocked == false)
{
try
{
// Lock bitmap (so that no movement of data by .NET framework) and return bitmap data
_bitmapData = _bitmap.LockBits(
new Rectangle(0, 0, _bitmap.Width, _bitmap.Height),
ImageLockMode.ReadWrite,
_bitmap.PixelFormat);
// Create byte array to copy pixel values
int noOfBitsNeededForStorage = _bitmapData.Stride * _bitmapData.Height;
int noOfBytesNeededForStorage = noOfBitsNeededForStorage / 8;
_imageData = new byte[noOfBytesNeededForStorage * ColorDepth];//# of bytes needed for storage
IntegerPointer = _bitmapData.Scan0;
// Copy data from IntegerPointer to _imageData
Marshal.Copy(IntegerPointer, _imageData, 0, _imageData.Length);
IsLocked = true;
}
catch (Exception)
{
throw;
}
}
else
{
throw new Exception("Bitmap is already locked.");
}
}
/// Unlock bitmap
public void Unlock()
{
if (IsLocked == true)
{
try
{
// Copy data from _imageData to IntegerPointer
Marshal.Copy(_imageData, 0, IntegerPointer, _imageData.Length);
// Unlock bitmap data
_bitmap.UnlockBits(_bitmapData);
IsLocked = false;
}
catch (Exception)
{
throw;
}
}
else
{
throw new Exception("Bitmap is not locked.");
}
}
public Color GetPixel(int x, int y)
{
Color clr = Color.Empty;
// Get color components count
int cCount = ColorDepth / 8;
// Get start index of the specified pixel
int i = (Height - y - 1) * Stride + x * cCount;
int dataLength = _imageData.Length - cCount;
if (i > dataLength)
{
throw new IndexOutOfRangeException();
}
if (ColorDepth == 32) // For 32 bpp get Red, Green, Blue and Alpha
{
byte b = _imageData[i];
byte g = _imageData[i + 1];
byte r = _imageData[i + 2];
byte a = _imageData[i + 3]; // a
clr = Color.FromArgb(a, r, g, b);
}
if (ColorDepth == 24) // For 24 bpp get Red, Green and Blue
{
byte b = _imageData[i];
byte g = _imageData[i + 1];
byte r = _imageData[i + 2];
clr = Color.FromArgb(r, g, b);
}
if (ColorDepth == 8)
// For 8 bpp get color value (Red, Green and Blue values are the same)
{
byte c = _imageData[i];
clr = Color.FromArgb(c, c, c);
}
return clr;
}
public void SetPixel(int x, int y, Color color)
{
// Get color components count
int cCount = ColorDepth / 8;
// Get start index of the specified pixel
int i = (Height - y - 1) * Stride + x * cCount;
try
{
if (ColorDepth == 32) // For 32 bpp set Red, Green, Blue and Alpha
{
_imageData[i] = color.B;
_imageData[i + 1] = color.G;
_imageData[i + 2] = color.R;
_imageData[i + 3] = color.A;
}
if (ColorDepth == 24) // For 24 bpp set Red, Green and Blue
{
_imageData[i] = color.B;
_imageData[i + 1] = color.G;
_imageData[i + 2] = color.R;
}
if (ColorDepth == 8)
// For 8 bpp set color value (Red, Green and Blue values are the same)
{
_imageData[i] = color.B;
}
}
catch (Exception ex)
{
throw new Exception("(" + x + ", " + y + "), " + _imageData.Length + ", " + ex.Message + ", i=" + i);
}
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
// free managed resources
_bitmap = null;
_bitmapData = null;
_imageData = null;
IntegerPointer = IntPtr.Zero;
}
}
}
The layout of a Windows bitmap is different from what you might expect. The bottom line of the image is the first line in memory, and the rows continue backwards from there. It can also be laid out the other way when the height is negative, but that isn't often encountered.
Your calculation of an offset into the bitmap appears to take that into account, so your problem must be more subtle.
int i = (Height - y - 1) * Stride + x * cCount;
The problem is that the BitmapData class already takes this into account and tries to fix it for you. The bitmap I described above is a bottom-up bitmap. From the documentation for BitmapData.Stride:
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary. If the stride is positive, the bitmap is top-down. If the stride is negative, the bitmap is bottom-up.
It is intended to be used with the Scan0 property to access the bitmap in a consistent fashion whether it's top-down or bottom-up.
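So once the pixel data has been copied starting at Scan0 with the stride that LockBits reports, the offset of a pixel follows directly from y and no manual flip is needed. A minimal sketch of that index calculation (written in Java to match the other examples on this page; the parameter names are illustrative):
// Offset of pixel (x, y) in a buffer copied from Scan0, using the stride and
// colour depth reported by LockBits. Note: no (Height - y - 1) flip.
static int pixelOffset(int x, int y, int stride, int colorDepthBits) {
    int bytesPerPixel = colorDepthBits / 8;
    return y * stride + x * bytesPerPixel;
}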
I have some code to draw a line between two points on an image, selected by mouse, and then to display a histogram.
However, when I press q as required by the code, I get an error saying "R6010 - abort() has been called" along with a VC++ runtime error.
Please advise me on how I can find this error.
#include <vector>
#include "opencv2/highgui/highgui.hpp"
#include <opencv\cv.h>
#include <iostream>
#include<conio.h>
using namespace cv;
using namespace std;
struct Data_point
{
int x;
unsigned short int y;
};
int PlotMeNow(unsigned short int *values, unsigned int nSamples)
{
std::vector<Data_point> graph(nSamples);
for (unsigned int i = 0; i < nSamples; i++)
{
graph[i].x = i;
graph[i].y = values[i];
}
cv::Size imageSize(5000, 500); // your window size
cv::Mat image(imageSize, CV_8UC1);
if (image.empty()) //check whether the image is valid or not
{
std::cout << "Error : Image cannot be created..!!" << std::endl;
system("pause"); //wait for a key press
return 0;
}
else
{
std::cout << "Good job : Image created successfully..!!" << std::endl;
}
// try to do some offsetting so the graph does not hide on the x or y axis
Data_point dataOffset;
dataOffset.x = 20;
// we have to mirror the y axis!
dataOffset.y = 5000;
for (unsigned int i = 0; i<nSamples; ++i)
{
graph[i].x = (graph[i].x + dataOffset.x) * 3;
graph[i].y = (graph[i].y + dataOffset.y) / 200;
}
// draw the samples
for (unsigned int i = 0; i<nSamples - 1; ++i)
{
cv::Point2f p1;
p1.x = graph[i].x;
p1.y = graph[i].y;
cv::Point2f p2;
p2.x = graph[i + 1].x;
p2.y = graph[i + 1].y;
cv::line(image, p1, p2, 'r', 1, 4, 0);
}
cv::namedWindow("MyWindow1", CV_WINDOW_AUTOSIZE); //create a window with the name "MyWindow"
cv::imshow("MyWindow1", image); //display the image which is stored in the 'img' in the "MyWindow" window
while (true)
{
char c = cv::waitKey(10);
if (c == 'q')
break;
}
destroyWindow("MyWindow1");
destroyWindow("MyWindow"); //destroy the window with the name, "MyWindow"
return 0;
}
void IterateLine(const Mat& image, vector<ushort>& linePixels, Point p2, Point p1, int* count1)
{
LineIterator it(image, p2, p1, 8);
for (int i = 0; i < it.count; i++, it++)
{
linePixels.push_back(image.at<ushort>(it.pos())); //doubt
}
*count1 = it.count;
}
//working line with mouse
void onMouse(int evt, int x, int y, int flags, void* param)
{
if (evt == CV_EVENT_LBUTTONDOWN)
{
std::vector<cv::Point>* ptPtr = (std::vector<cv::Point>*)param;
ptPtr->push_back(cv::Point(x, y));
}
}
void drawline(Mat image, std::vector<Point>& points)
{
cv::namedWindow("Output Window");
cv::setMouseCallback("Output Window", onMouse, (void*)&points);
int X1 = 0, Y1 = 0, X2 = 0, Y2 = 0;
while (1)
{
cv::imshow("Output Window", image);
if (points.size() > 1) //we have 2 points
{
for (auto it = points.begin(); it != points.end(); ++it)
{
}
break;
}
waitKey(10);
}
//just for testing that we are getting pixel values
X1 = points[0].x;
X2 = points[1].x;
Y1 = points[0].y;
Y2 = points[1].y;
// Draw a line
line(image, Point(X1, Y1), Point(X2, Y2), 'r', 2, 8);
cv::imshow("Output Window", image);
//exit image window
while (true)
{
char c = cv::waitKey(10);
if (c == 'q')
break;
}
destroyWindow("Output Window");
}
void show_histogram_image(Mat img1)
{
int sbins = 65536;
int histSize[] = { sbins };
float sranges[] = { 0, 65536 };
const float* ranges[] = { sranges };
cv::MatND hist;
int channels[] = { 0 };
cv::calcHist(&img1, 1, channels, cv::Mat(), // do not use mask
hist, 1, histSize, ranges,
true, // the histogram is uniform
false);
double maxVal = 0;
minMaxLoc(hist, 0, &maxVal, 0, 0);
int xscale = 10;
int yscale = 10;
cv::Mat hist_image;
hist_image = cv::Mat::zeros(65536, sbins*xscale, CV_16UC1);
for (int s = 0; s < sbins; s++)
{
float binVal = hist.at<float>(s, 0);
int intensity = cvRound(binVal * 65535 / maxVal);
rectangle(hist_image, cv::Point(s*xscale, hist_image.rows),
cv::Point((s + 1)*xscale - 1, hist_image.rows - intensity),
cv::Scalar::all(65535), 1);
}
imshow("Histogram", hist_image);
waitKey(0);
}
int main()
{
vector<Point> points1;
vector<ushort>linePixels;
Mat img = cvLoadImage("desert.jpg");
if (img.empty()) //check whether the image is valid or not
{
cout << "Error : Image cannot be read..!!" << endl;
system("pause"); //wait for a key press
return -1;
}
//Draw the line
drawline(img, points1);
//now check the collected points
Mat img1 = cvLoadImage("desert.jpg");
if (img1.empty()) //check whether the image is valid or not
{
cout << "Error : Image cannot be read..!!" << endl;
system("pause"); //wait for a key press
return -1;
}
int *t = new int;
IterateLine( img1, linePixels, points1[1], points1[0], t );
PlotMeNow(&linePixels[0], t[0]);
show_histogram_image(img);
delete t;
_getch();
return 0;
}
This is one of the bad smells in your code:
void IterateLine(const Mat& image, vector<ushort>& linePixels, Point p2, Point p1, int* count1)
{
...
linePixels.push_back(image.at<ushort>(it.pos())); //doubt
Now image is a CV_8UC3 image (from Mat img1 = cvLoadImage("desert.jpg");), but you are accessing it here as if it were CV_16UC1, so what gets put into linePixels is garbage. This will almost certainly cause PlotMeNow() to draw outside its image and corrupt something, which is probably why your code is crashing.
Since it is very unclear what your code is trying to do, I can't suggest what you should have here instead.
I have just managed to do this; you only have to put "-1" in your loop limit:
for (unsigned int i = 0; i < nSamples-1; i++)
{
graph[i].x = i;
graph[i].y = values[i];
}
Could someone help me figure out how to draw a route on a RichMapField?
I am able to draw on a MapField.
I want to use RichMapField because I can use the MapDataModel to add more than one marker dynamically.
Updated code:
This is my attempt at writing code to display a route from A to B on a RichMapField; all I am getting is a dot on the map. Could someone please help me with this:
class MapPathScreen extends MainScreen {
MapControl map;
Road mRoad = new Road();
RichMapField mapField = MapFactory.getInstance().generateRichMapField();
public MapPathScreen() {
double fromLat = 47.67, fromLon = 9.38, toLat =47.12, toLon = 9.47;
/* double fromLat = 49.85, fromLon = 24.016667;
double toLat = 50.45, toLon = 30.523333;
*/
String url = RoadProvider.getUrl(fromLat, fromLon, toLat, toLon);
InputStream is = getConnection(url);
mRoad = RoadProvider.getRoute(is);
map = new MapControl(mapField);
add(new LabelField(mRoad.mName));
add(new LabelField(mRoad.mDescription));
add(map);
}
protected void onUiEngineAttached(boolean attached) {
super.onUiEngineAttached(attached);
if (attached) {
map.drawPath(mRoad);
}
}
private InputStream getConnection(String url) {
HttpConnection urlConnection = null;
InputStream is = null;
try {
urlConnection = (HttpConnection) Connector.open(url);
urlConnection.setRequestMethod("GET");
is = urlConnection.openInputStream();
} catch (IOException e) {
e.printStackTrace();
}
return is;
}
protected boolean keyDown(int keycode, int time)
{
MapAction action=mapField.getAction();
StringBuffer sb = new StringBuffer();
// Retrieve the characters mapped to the keycode for the current keyboard layout
Keypad.getKeyChars(keycode, sb);
// Zoom in
if(sb.toString().indexOf('i') != -1)
{
action.zoomIn();
return true;
}
// Zoom out
else if(sb.toString().indexOf('o') != -1)
{
action.zoomOut();
return true;
}
return super.keyDown(keycode, time);
}
}
class MapControl extends net.rim.device.api.lbs.maps.ui.MapField {
Bitmap bmp = null;
MapAction action;
MapField map = new MapField();
RichMapField mapRich;
Road road;
public MapControl(RichMapField mapRich)
{
this.mapRich = mapRich;
}
public void drawPath(Road road) {
if (road.mRoute.length > 0) {
Coordinates[] mPoints = new Coordinates[] {};
for (int i = 0; i < road.mRoute.length; i++) {
Arrays.add(mPoints, new Coordinates(road.mRoute[i][1],
road.mRoute[i][0], 0));
}
double moveToLat = mPoints[0].getLatitude()
+ (mPoints[mPoints.length - 1].getLatitude() - mPoints[0].getLatitude()) / 2;
double moveToLong = mPoints[0].getLongitude()
+ (mPoints[mPoints.length - 1].getLongitude() - mPoints[0].getLongitude()) / 2;
Coordinates moveTo = new Coordinates(moveToLat, moveToLong, 0);
action = this.getAction();
action.setZoom(15);
action.setCentreAndZoom(new MapPoint(moveToLat,moveToLong), 15);
bmp = new Bitmap(500, 500);
bmp.createAlpha(Bitmap.ALPHA_BITDEPTH_8BPP);
Graphics g = Graphics.create(bmp);
int x1 = -1, y1 = -1, x2 = -1, y2 = -1;
XYPoint point = new XYPoint();
Coordinates c = new Coordinates(mPoints[0].getLatitude(),mPoints[0].getLongitude(),0);
map.convertWorldToField(c, point);
x1=point.x;
y1 = point.y;
g.fillEllipse(x1, y1, x1, y1 + 1, x1 + 1, y1, 0, 360);
for (int i = 1; i < mPoints.length; i++) {
XYPoint point1 = new XYPoint();
Coordinates c1 = new Coordinates(mPoints[i].getLatitude(),mPoints[i].getLongitude(),0);
map.convertWorldToField(c1, point1);
x2 = point1.x;
y2 = point1.y;
g.setColor(Color.GREEN);
//g.fillEllipse(x1, y1, x1, y1 + 1, x1 + 1, y1, 0, 360);
g.drawLine(x1, y1, x2, y2);
x1 = x2;
y1 = y2;
}
}
}
protected void paint(Graphics g) {
super.paint(g);
if (bmp != null) {
g.setGlobalAlpha(100);
g.drawBitmap(0, 0, bmp.getWidth(), bmp.getHeight(), bmp, 0, 0);
}
}