Storing a graph with Core Plot without displaying it - core-plot

Is there a way to make Core Plot render a graph even if it is not displayed? I need to generate some graphs to share as images, but I can't ask the user to display every plot first just so I can save images of them once they are drawn to the screen.
If there is no such possibility: do you know of any other solutions that support simple XY charts with lines and dots?

Core Plot graphs are Core Animation layers. Set the frame to the desired size when you create the graph, set it up normally, and call -imageOfLayer to render it to an NSImage on the Mac or a UIImage on iOS. You can also call -dataForPDFRepresentationOfLayer to render it to PDF.
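A minimal sketch of that flow on iOS (Core Plot class names as in the 1.x API; the frame size, plot setup, and `path` are placeholders):

```objc
// Build the graph entirely off-screen; it is never added to a hosting view.
CPTXYGraph *graph = [[CPTXYGraph alloc] initWithFrame:CGRectMake(0.0, 0.0, 600.0, 400.0)];

// ...configure the plot space, axes, and add your CPTScatterPlot(s) here...

// Render without ever displaying the graph on screen.
UIImage *image = [graph imageOfLayer];                    // bitmap (UIImage)
NSData  *pdf   = [graph dataForPDFRepresentationOfLayer]; // vector PDF

// Save the bitmap, e.g. as a PNG (path is a placeholder):
[UIImagePNGRepresentation(image) writeToFile:path atomically:YES];
```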

Related

Core Plot Graph Capture as an Image

Is there a way to capture the Core Plot line graph as an image as a whole? Currently I have scrolling enabled in the horizontal direction, and when I use the imageOfLayer method provided, it gives only the on-screen content. I want to capture the whole graph, including the offscreen content as well.
No, what you see is what you get. However, you don't have to display the graph on screen to draw it. You can make a graph offscreen, size it however you want, and render it to an image without ever displaying it on screen.

What is the best way to use layers and partial rendering in iOS for speed

I'm working on a graphing application which I wrote using Core Graphics. I have a buffer which accumulates data, and I render it on the screen. It's super slow, and I want to avoid going to OpenGL if possible. According to the profiler, drawing my graph data is what's killing me (it consists of a number of points which are converted to a path, followed by the calls AddPath and DrawPath).
This is what I want to do; my question is how best to implement it using layers/views/etc.:
I have a grid and some text. I want this to be rendered in a CALayer (or some other layer/view?) and only update when required (the graph is rescaled).
Only a portion of the data needs to be refreshed. I want to take the previous screen buffer, erase a rectangle's worth of data (or cover it with a white box) and then draw only the portion of the graphs that have changed.
I then want to merge the background layer with the foreground graphs to generate the composite image. This requires the graph layer to have a transparent background so as not to obscure the grid.
I've looked at using CALayer as a sublayer, but it doesn't seem to provide a simple way to draw a line. CAShapeLayer seems a bit better, but it looks like it can only draw a single line, and I want the grid to be composed of multiple lines.
What's the best approach and combination of objects to allow me to do this?
Thanks,
Reza
I'd have a CGLayerRef that I'd use to draw the path into. For each new point I'd draw just the new segment. When the graph got to full width, I'd create a new CGLayerRef and start drawing the new line segments into that.
What happens to the previous layer as it's drawn over by the new layer depends on how your graph is displayed, but you could clear the section which is now underneath the new layer (using CGContextSetBlendMode(context, kCGBlendModeClear);) or you could choose to blend them together in some other way.
Drawing the layers each time you make a change to the lines they contain is relatively cheap compared to drawing all of the line segments themselves.
Technically, there would also be CALayers used to manage the drawing of the CGLayerRefs to the screen (via the delegate method drawLayer:inContext:), but all of the line drawing is done in the CGLayerRef's context, and then the CGLayerRef is drawn as a whole into the CALayer's context (CGContextDrawLayerInRect(context, frame, backingCGLayer);).
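A sketch of that arrangement in the CALayer delegate callback (property names like backingCGLayer, lastPoint, and newestPoint are illustrative):

```objc
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
    if (self.backingCGLayer == NULL) {
        // Create the backing CGLayer once, matched to the destination context.
        self.backingCGLayer = CGLayerCreateWithContext(context, layer.bounds.size, NULL);
    }

    // Draw only the newest segment into the CGLayer's own context;
    // everything drawn earlier is already retained inside the layer.
    CGContextRef layerContext = CGLayerGetContext(self.backingCGLayer);
    CGContextMoveToPoint(layerContext, self.lastPoint.x, self.lastPoint.y);
    CGContextAddLineToPoint(layerContext, self.newestPoint.x, self.newestPoint.y);
    CGContextStrokePath(layerContext);

    // Blit the whole accumulated CGLayer into the CALayer in one call.
    CGContextDrawLayerInRect(context, layer.bounds, self.backingCGLayer);
}
```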

iOS FloodFill : UIImage vs Core Graphics

I'm considering building an app that would make heavy use of a flood-fill / paint-bucket feature. The images I'd be coloring are like coloring-book pages: white background, black borders. I'm debating which is better: manipulating the pixel data of a UIImage, or drawing the images with Core Graphics and changing the fill color on touch.
With UIImage, I'm unable to account for retina images properly; it destroys the image when I write the context into a new UIImage, but I can probably figure that out. I'm open to tips, though...
With Core Graphics, I have no idea how to determine which shape to fill when a user touches an area, or how to actually fill that area. I've searched, but haven't turned up anything useful.
Overall, I believe the optimal solution is using CoreGraphics, since it'll be lighter overall and I won't have to keep several copies of the same image for different sizes.
Thoughts? Go easy on me! It's my first app and first SO question ;)
I'd suggest using Core Graphics.
Instead of images, define the shapes using CGPath or NSBezierPath, and use Core Graphics to stroke and/or fill the shapes. Filling shapes is then as easy as switching drawing mode from just stroking to stroking and filling.
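A sketch of that stroke-vs-fill switch inside a view's drawRect: on iOS (shapePath, fillColor, and shapeIsFilled are assumed to be state you maintain):

```objc
CGContextRef context = UIGraphicsGetCurrentContext();

CGContextAddPath(context, shapePath);  // shapePath: a CGPathRef you built earlier
CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
CGContextSetFillColorWithColor(context, fillColor.CGColor);

if (shapeIsFilled) {
    CGContextDrawPath(context, kCGPathFillStroke);  // fill and stroke
} else {
    CGContextDrawPath(context, kCGPathStroke);      // outline only
}
```

Toggling shapeIsFilled when a touch lands inside the path (e.g. via CGPathContainsPoint) and redrawing is then the entire "paint bucket".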
Creating even more complex shapes is made much easier with the PaintCode app (which lets you draw shapes and generates the path code for you).
As your first app, I would suggest something with a little less custom graphics fiddling, though.

Best way to plot array as image with ROI selection and scale

I have a 2D NumPy array that I need to plot as an image with a certain scale. Within that image I need to be able to select an ROI, or at least display the mouse coordinates (of a specific target contained in the image). I tried using pyqtgraph, but I can't seem to plot an image as a data source rather than just an image (i.e., I can't seem to set axes, etc.)... What would be the best way to do this? The image browser is compiled as a widget with a slider that scrolls through frames of the file; this widget is then embedded in a main window with a few table widgets.
I think imshow in matplotlib might work for you. It makes it easy to zoom, pan, and scale, and it works well with NumPy.
(If this answer doesn't work for you, could you please refine your question? I'm unsure whether you're looking for any tool that will do the job, or something that works within the context of a GUI you've already implemented. If the latter, I think you'll probably need to implement the ROI yourself by, say, selecting areas of the NumPy array to plot, e.g. a[xmin:xmax, ymin:ymax].)
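A minimal sketch of that suggestion (the array shape, scale values, and axis labels are placeholders; the Agg backend keeps it headless): `extent` maps the array onto physical axes, and a `motion_notify_event` handler reports data coordinates under the mouse.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

data = np.random.rand(100, 200)  # placeholder 2D array (rows, cols)
x_min, x_max, y_min, y_max = 0.0, 10.0, 0.0, 5.0  # physical scale (placeholder)

fig, ax = plt.subplots()
im = ax.imshow(data, extent=(x_min, x_max, y_min, y_max),
               origin="lower", aspect="auto")
ax.set_xlabel("x")  # label with your real units
ax.set_ylabel("y")

def on_move(event):
    # event.xdata / event.ydata are already in data (scaled) coordinates.
    if event.inaxes is ax:
        print("x=%.2f, y=%.2f" % (event.xdata, event.ydata))

fig.canvas.mpl_connect("motion_notify_event", on_move)
```

For interactive ROI selection, matplotlib.widgets.RectangleSelector can be attached to the same axes, and the figure can be embedded in a Qt main window via the Qt backend's canvas class.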

How to create a graph in a simple way in an iPad app with some parameters

I want to create a graph in my iPad app that takes 3 to 4 inputs and then creates a graph. I have found a solution using a pie chart, but I want a simple graph with x-axis and y-axis values.
If you want a complex framework with lots of features, check out Core Plot.
In case you want a simple graph-paper layout:
Subclass UIView, override drawRect:, and draw the dotted grid lines in loops, depending on how many vertical and horizontal lines you can accommodate.
For drawing the graph itself, pass the GraphView an NSArray of CGPoints wrapped in NSValue.
Loop through the points and convert them to the view's coordinate system. For this you will need to keep track of how many screen pixels each x and y unit represents; a little math then gives you the screen coordinates within the GraphView.
Join the points with straight lines using the Core Graphics functions CGContextMoveToPoint and CGContextAddLineToPoint.
You will need to use different CGPaths to draw the different components of the view: the grid lines, the axis lines, and the graph itself.
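The conversion loop described above can be sketched like this (property names such as dataPoints, xRange, and yRange are illustrative):

```objc
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Screen pixels per data unit along each axis.
    CGFloat xScale = self.bounds.size.width  / self.xRange;
    CGFloat yScale = self.bounds.size.height / self.yRange;

    BOOL firstPoint = YES;
    for (NSValue *value in self.dataPoints) {  // NSArray of boxed CGPoints
        CGPoint p = [value CGPointValue];
        // Convert data coordinates to view coordinates (y axis flipped,
        // since UIKit's origin is at the top-left).
        CGPoint v = CGPointMake(p.x * xScale,
                                self.bounds.size.height - p.y * yScale);
        if (firstPoint) {
            CGContextMoveToPoint(context, v.x, v.y);
            firstPoint = NO;
        } else {
            CGContextAddLineToPoint(context, v.x, v.y);
        }
    }
    CGContextStrokePath(context);
}
```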
But if you really need more complex output, go for Core Plot. Check out a Core Plot example at:
http://www.switchonthecode.com/tutorials/using-core-plot-in-an-iphone-application
