Slow rendering with big data in TeeChart Pro VCL - Delphi

I use TeeChart Pro VCL to plot charts from input data.
I read data from the COM port and add points to a TFastLineSeries with this code:
var
  a: Integer;
  b: Double;
...
begin
  // a and b are read from the COM port
  with DBChart1.Series[0] do
    AddXY(a, b, '', clTeeColor);
end;
I have very simple 2D (or sometimes 3D, colorful) graphs with more than 100,000 points, but after about 20,000 points the rendering gets very slow and in some places it stops altogether.
What can I do? Is there an algorithm to improve this situation?

Turn off drawing all the points.
Series0.DrawAllPoints := false;
From Real-time charting in TeeChart VCL:
TFastLineSeries introduces several properties for fast drawing
The DrawAllPoints boolean property, default value True. Normally
chart size is limited to a fixed number of screen pixels. This means
that if, for example, you have 1.000.000 points, they will inevitably
"share" the same screen pixel coordinate (in horizontal, vertical or
both directions). The drawing algorithm will then plot multiple points
with different real x,y coordinates at the same screen coordinate.
After multiple calls to the drawing algorithm and wasted CPU time
you'll end up with a single painted screen pixel. In this case a
reasonable thing to do is group the points with the same x screen
pixel coordinate and replace them with two points (group minimum and
maximum values). The end result will visually be the same as drawing
all the points in the group. But it will be a lot faster, especially
if there are lots of points per group. Setting DrawAllPoints to False
does precisely that: the internal algorithm processes data and draws
only non-repeated (group) points. Using this trick you can plot
millions of points in realtime with little fuss.
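As a rough sketch of wiring that up, assuming the first series on the chart is the TFastLineSeries (unit names vary between Delphi/TeeChart versions):

uses
  VCLTee.Chart, VCLTee.Series; // older TeeChart versions: Chart, Series

procedure ConfigureFastLine(Chart: TChart);
var
  Fast: TFastLineSeries;
begin
  Fast := Chart.Series[0] as TFastLineSeries;
  // Draw one grouped min/max pair per shared x screen pixel
  // instead of every incoming point.
  Fast.DrawAllPoints := False;
end;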
The PDF also mentions how to delete from a series in real time.
Series Delete method. The Delete method now includes a second
parameter which controls how many points will be deleted from a
series. This allows fast delete of multiple points in a single call,
which is much faster than deleting multiple points using a loop.
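For example, a rolling window that caps the series at a fixed number of points might look like this (MaxPoints is just a name for this sketch):

const
  MaxPoints = 20000; // arbitrary cap for the sketch
begin
  Series0.AddXY(a, b, '', clTeeColor);
  // Remove the oldest surplus points in a single call rather than in a loop.
  if Series0.Count > MaxPoints then
    Series0.Delete(0, Series0.Count - MaxPoints);
end;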

Related

FEM Integrating Close to Integration Points

I am working on a program that can essentially determine the electrostatic field of some arbitrarily shaped mesh with some surface charge. To test my program I make use of a cube whose left and right faces are oppositely charged.
I use a finite element method (FEM) that discretizes the object's surface into triangles and assigns 3 integration points to each triangle (see the figure below, bottom-left and bottom-right). To obtain the field I then simply sum over all these points, taking into account a weight factor (because not all triangles have the same size).
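In symbols, that sum treats every integration point as a small point charge (notation is mine: $\mathbf{r}_i$, $w_i$ and $\sigma_i$ are the position, quadrature weight and surface charge density at integration point $i$):

$$\mathbf{E}(\mathbf{r}) \approx \frac{1}{4\pi\varepsilon_0}\sum_i \frac{w_i\,\sigma_i\,(\mathbf{r}-\mathbf{r}_i)}{\lVert \mathbf{r}-\mathbf{r}_i\rVert^{3}}$$

Each term blows up as $\mathbf{r}$ approaches $\mathbf{r}_i$, and the terms of two neighbouring points cancel midway between them, which is what shows up as the artefacts described below.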
In principle this all works fine, until I get too close to a triangle. Since three individual points are not the same as a triangular surface, the program breaks down and gives these weird dots (black spots precisely between two integration points).
Below you see a figure showing the simulation of the field (top left), the discretized surface mesh (bottom left). The picture in the middle depicts what you see when you zoom in on the surface of the cube. The right-most picture shows qualitatively how the integration points are distributed on a triangle.
Because the electric field of one integration point always points away from that point, two neighbouring points will cancel each other out, since their vectors aim in exactly opposite directions. Of course, what I need is for both vectors to point away from the surface instead.
I have tried many solutions, mostly around the following points:
Patching the regions near an integration point with a theoretically correct uniform field pointing away from the surface.
Reorienting the vectors only nearby the integration point to manually put them in the right direction.
Applying a sigmoid or another decay function to make the above look smoother.
However, none of the methods above allows me to properly connect the nearby and faraway regions.
I guess what might work is some method to extrapolate the correct value from the surroundings. However, because of the large number of computations, I moved the simulation to my GPU, which means that I have to be careful about allowing two pixels to write to each other.
Either way, my question here is as follows:
What would be a good way to smooth out my results? That is, I need a more accurate description of my model when I get closer to a triangle.
As a final note I want to add that it is not my goal to simply obtain a smooth image. Later in the program I need this data to determine the response of a conducting material, which is where these black dots internally become a real pain...
Thank you for your help !!!

Multiple Plotspaces or shifted axes/plots

I'm looking to incorporate 4 real-time scatter plots into a graph, and it has been requested that they be separated (at least in pairs) to make it easier to pick out signals. Would it be less resource-intensive to have multiple plot spaces on my graph, or to shift a new set of axes and plots on the same plot space? Is this still the case if I add 2-4 more scatter plots (for 6-8 total)?
FYI, I'm currently using CorePlot 1.6 (haven't had time to make the jump to 2.0).
If all of the plots are in the same graph, use multiple plot spaces. A plot space just defines a coordinate mapping between the data and the screen, so it doesn't use any video memory or other system resources (just a small amount of memory for the plot space object itself). Each plot and axis is a CALayer object, so those will be the primary drivers of resource usage.

Removing sun light reflection from images of IR camera in realtime OpenCV application

I am developing a speed estimation and vehicle counting application with OpenCV, and I use an IR camera.
I am facing a problem with sunlight reflections, which cause vertical white regions or lines in the images and have a bad effect on my vehicle detection.
I need a very fast approach, because this is a real-time application.
The vertical streak defect in those images is called "blooming"; it happens when one or a few wells in a CCD saturate to the point that they spill charge over neighboring wells in the same column. In addition, you have "regular" saturation with no blooming around the area of the reflection.
If you can, the best solution is to control the exposure (faster shutter time, or close lens iris if you have one). This will reduce but not eliminate blooming occurrence.
Blooming will always occur in a constant direction (vertical or horizontal, depending on your image orientation), and will normally fill one or a few contiguous columns entirely. So you can cheaply detect it by heavily subsampling in the opposite dimension and looking for maxima that repeat in the same column. E.g., in your images, you could look for saturated maxima in the same column over 10 rows or so spread over the image height.
Once you detect the blooming columns, you can follow them, in a small band around them, to try to locate the saturated area. Note that saturation does not necessarily imply values at the end of the dynamic range (e.g. 255 for an 8-bit image). Your sensor could be completely saturated at values that the A/D conversion assigns to, say, 252. Saturation simply means that the image response becomes constant with respect to the input luminance.
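A rough sketch of that detection step (plain Delphi over a grayscale byte matrix rather than an OpenCV Mat; SampleRows and the thresholds are assumptions to tune):

uses
  System.SysUtils, System.Generics.Collections;

const
  SampleRows = 10;       // rows spread over the image height
  SaturationLevel = 250; // "saturated" is not necessarily 255

// Img[y][x] is an 8-bit grayscale pixel; returns candidate blooming columns.
function FindBloomingColumns(const Img: TArray<TBytes>;
  Width, Height: Integer): TList<Integer>;
var
  x, s, y, Hits: Integer;
begin
  Result := TList<Integer>.Create;
  for x := 0 to Width - 1 do
  begin
    Hits := 0;
    for s := 0 to SampleRows - 1 do
    begin
      y := (s * (Height - 1)) div (SampleRows - 1); // subsampled row
      if Img[y][x] >= SaturationLevel then
        Inc(Hits);
    end;
    // A column saturated in nearly all sampled rows is a blooming candidate.
    if Hits >= SampleRows - 1 then
      Result.Add(x);
  end;
end;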
The easiest solution (to me) is a hardware one. If you can modify the physical camera setup, add a polarizing filter to the lens of the camera. You don't even need an (expensive) camera-specific filter; a simple sheet of polarizing film is good enough (search for "polarizing film"). You will have to play with the orientation, but for a mounted camera most surfaces are at the same angle and glare will be polarized near horizontal, so you should be able to find a position that works well in most situations.
I've used this method before, and the best part is that it adds no extra algorithmic complexity or lag, especially for mounted cameras where all surfaces are at nearly the same angle. It won't help you process the images you currently have, but it will help in acquiring and processing future images.

Best way to draw on iOS

I am making a math drawing app on iPad. Users can manipulate objects on the screen, like quadratic or sine curves. To update all objects on the screen I need to redraw the whole screen at 60 fps, which costs a lot of time. I currently implement the drawing with Quartz 2D, but the performance is bad when there are many objects on the screen. I have heard that using OpenGL ES directly is better, because it uses the GPU to draw. But I am wondering how to draw cubic or quadratic curves with OpenGL ES. Or is there another, better choice for improving the drawing?
OpenGL should be quite good at this, but curves do not come out of the box. Assuming you find a sample, or build one from scratch, to the point where you can draw some shapes, there are quite a few procedures for accomplishing this.
The most direct one would be to create lines from points at a specific resolution. That means, for a function f(x), simply iterating x at that resolution. Say you are viewing the curve on the interval [a, b] and want a resolution of R intervals; that would produce the loop for(float x=a; x<=b; x+=(b-a)/R). Now just draw this as a line strip. In most cases the resolution can be quite high and you will get a nice result, but there are cases where this will not work: steep or rapidly alternating functions will be missing some points. A problematic kind is, for instance, sin(x*some_large_factor).
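A sketch of just the vertex generation for that line strip (the GL submission itself is omitted; TCurveFunc is a name made up here):

uses
  System.Types; // TPointF, PointF

type
  TCurveFunc = function(x: Double): Double;

// Sample f on [a, b] at R intervals; the result is drawn as a line strip.
function TessellateCurve(f: TCurveFunc; a, b: Double; R: Integer): TArray<TPointF>;
var
  i: Integer;
  x: Double;
begin
  SetLength(Result, R + 1);
  for i := 0 to R do
  begin
    x := a + (b - a) * i / R;  // index-based stepping avoids accumulated float error
    Result[i] := PointF(x, f(x));
  end;
end;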
Using the same procedure, it might work better if you could parametrize the function by curve length instead of x. That approach would also be able to draw curves such as circles.
Another way is injecting the function into the shader and checking whether each point is near enough to the curve. You receive the x and y positions, which you can insert into your function to check the distance, along the lines of if(abs(y-f(x)) < lineWidth) // do draw. This procedure is completely accurate, but the problem is that the line width is now measured along y, which makes the steep parts of the curve appear thicker. If you can find the true distance to the curve (the nearest point on the curve), this would work perfectly...
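The same test written as a plain per-pixel loop, only to show the idea (in the question's setting this check would sit in a fragment shader; TCurveFunc is the type from the previous sketch and LineWidth is measured along y):

uses
  Vcl.Graphics;

// Plot every pixel whose vertical distance to the curve y = f(x) is below LineWidth.
procedure DrawImplicitCurve(Canvas: TCanvas; f: TCurveFunc;
  Width, Height: Integer; LineWidth: Double);
var
  px, py: Integer;
begin
  for py := 0 to Height - 1 do
    for px := 0 to Width - 1 do
      if Abs(py - f(px)) < LineWidth then
        Canvas.Pixels[px, py] := clBlack; // steep regions come out thicker, as noted above
end;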
Have you tried looking for a library or some open-source code that draws curves in OpenGL, though?

Simple Delphi 3d functions

Could anyone help me with examples of some bare-bones, old-school 3D methods in Delphi? Not using OpenGL or FireMonkey or any external library (vanilla canvas coding). What I want to do is to be able to rotate X number of points around a common origin. From what I remember from the old days, you subtract the left/top offset from the 3D points so that the origin is always (0,0), then perform the calculations, and finally add the left/top pixel offset back to get the actual screen positions.
What I'm looking for is a set of small, ad-hoc routines, à la:
RotateX(aValue: T3dPoint; Degr: Double): T3dPoint;
RotateY(--/--)
RotateZ(--/--)
Using these functions it should be fairly easy to create the old "rotating 3d cube" (8 points).
Also, are there functions for figuring out the visible "faces"? If I want a filled vector cube, then I guess I need to extract the visible regions (based on distance/overlapping?), which in turn are drawn as X number of filled polygons? And these must no doubt be sorted by depth so they don't come out a mess.
for instance:
PointsToFaces(const a3dObject:T3dPointArray):TPolyFaceArray;
SortFaces(Const aFaces:TPolyFaceArray):TPolyFaceArray;
Any help is welcome!
Here are some nice, good old resources for Delphi math from efg's Reference.
There you can find a list of graphics projects.
2D/3D Lab Vector graphics: translation, rotation, scaling, view transform, homogeneous coordinates, clipping, projections, vectors, matrices etc...
I wrote a simple 3D rendering 'engine' a few years ago, using only naïve linear algebra. It might not be the most efficient one, though. A few thousand points is the limit if you want to be able to move reasonably smoothly. Sample EXE. You can get the code if you like, but it might not be that pretty.
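Not the code referred to above, just a minimal sketch of the kind of routine the question asks for (T3dPoint is assumed to be a plain record and the angle is in degrees; RotateX and RotateZ follow the same pattern with the axes swapped):

uses
  System.Math; // DegToRad, SinCos

type
  T3dPoint = record
    X, Y, Z: Double;
  end;

// Rotate a point around the Y axis through the origin by Degr degrees.
function RotateY(const aValue: T3dPoint; Degr: Double): T3dPoint;
var
  S, C: Double;
begin
  SinCos(DegToRad(Degr), S, C);
  Result.X :=  aValue.X * C + aValue.Z * S;
  Result.Y :=  aValue.Y;
  Result.Z := -aValue.X * S + aValue.Z * C;
end;

Translate the points so the common origin is at (0,0,0) before rotating, and add the left/top screen offset back afterwards, exactly as described in the question.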
