I am using the Highcharts plugin to draw charts on a website. I need a dark black shadow around the dial. I am attaching an image of what I have done and what I need to achieve: the left image is the actual graph and the right one is the target. I need to make them identical.
I have just started exploring the computer-vision field and I'm trying to create something like this (the image shows what I'm trying to achieve, not what I've already achieved).
My approach (just a logical plan, I haven't tried it yet; a rough sketch follows the list):
Color detection.
First, get the pixel positions of the lines with red and green color, then add those values to arrayRed and arrayGreen.
Segmentation.
Get the base image from the cache, then take every pixel whose value is close to arrayRed and label it as background. Do the same for arrayGreen.
Convert the color space to RGBA and set the alpha of the background-labeled pixels to 0.
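A minimal OpenCV sketch of that plan, assuming the line colors can be captured as HSV ranges (the file names and the HSV bounds below are illustrative guesses standing in for arrayRed/arrayGreen, not measured values):

```python
import cv2
import numpy as np

img = cv2.imread("base.png")                       # base image from cache (BGR)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Illustrative HSV ranges standing in for arrayRed / arrayGreen.
# Note: red hue wraps around 180 in OpenCV, so a second red range may be needed.
red = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
green = cv2.inRange(hsv, np.array([40, 70, 70]), np.array([80, 255, 255]))

# Pixels close to either line color are labeled "background", as in step 2
background = cv2.bitwise_or(red, green)

# Convert to BGRA and set the alpha of the labeled pixels to 0, as in step 3
bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
bgra[background > 0, 3] = 0
cv2.imwrite("result.png", bgra)
```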
My questions:
Am I on the right path?
Is this possible to achieve with the OpenCV library?
If my approach is wrong, what's an efficient and correct approach (in pseudo-code or Python) to achieve the goal?
I have an image which is like a circular progress bar. I need to show only part of the progress bar based on the progress percentage. What I am trying to do is put the image inside a UIImageView and show only part of the image (an arc section of the full circle) based on the progress.
I have tried using UIBezierPath to create an arc-shaped view and setting my progress bar image as its background image, but the image gets stretched to fit inside the arc-shaped view and looks bad.
Is there a better way to achieve the same?
Edit: Here is a sample image of what I am trying to achieve
But instead of the blue background, I need an image. So, only part of the image will be visible depending on the progress. (When it's 100%, the entire image is visible.)
I have spent almost half a day on this one. Any help would be appreciated.
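Not iOS code, but to make the intent concrete, here is the arc-reveal idea sketched with Python's Pillow library: an alpha mask shaped like a pie slice whose sweep angle tracks the progress (the file name and the 12 o'clock start angle are assumptions):

```python
from PIL import Image, ImageDraw

def arc_reveal(path, progress):
    """Return the image with only `progress` (0.0-1.0) of the circle visible."""
    img = Image.open(path).convert("RGBA")
    mask = Image.new("L", img.size, 0)              # start fully hidden
    draw = ImageDraw.Draw(mask)
    # Pillow measures angles from 3 o'clock; -90 starts the sweep at 12 o'clock
    draw.pieslice([0, 0, img.width, img.height], -90, -90 + 360 * progress, fill=255)
    img.putalpha(mask)  # note: replaces any alpha the source image already had
    return img

arc_reveal("progress_ring.png", 0.65).save("partial.png")
```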
At present I'm generating a chart based on a bunch of user-selected options, and this is rendered on the server to generate a PNG output file. The generated PNG chart is then displayed on the user's system, over an underlying system background.
Where the plotBackgroundColor of the chart has some opacity, the user's underlying system background will of course show through, and will influence how the chart appears.
That's all fine, because the user has complete control over both the highcharts plotBackgroundColor and the system background colour.
But now I want to generate a chart that is "free standing", with a solid background colour (no opacity) which represents exactly how the chart appears when over the system background. That way, I can display that chart on any system, to give a true picture of what the user is seeing, regardless of the system background colour in the target device.
I do have access to the user's system background colour (it's either a bitmap, or I can just extract the "dominant" colour somehow and use that as a solid colour instead if it's easier).
So using the concept of layers, this would be like merging the highcharts plotBackgroundColor with a solid colour that represents the system background colour, and using that as plotBackgroundColor instead.
Or maybe there's a way to change an underlying background "browser" colour that is used in the highcharts renderer, independent of plotBackgroundColor?
I'm sure this must be possible somehow?
One way of doing this, and it's what I'm doing until someone posts a better answer, is just to manually combine each backgroundColor value with the underlying system canvas colour, using the method described at https://stackoverflow.com/a/10782314/4070848. This basically lays one colour over the other, using an algorithm to determine the combined colour from the respective opacities and RGB values.
I check whether the backgroundColor is a plain colour or a linear/radial gradient, and if the latter then I combine each of the stop colours separately with the underlying colour and reconstruct the gradient based on the merged stop colours.
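For reference, the per-colour blend is just standard source-over compositing. A minimal sketch, assuming an opaque system background given as an RGB tuple and a foreground given as RGBA with alpha in 0-1 (the function and parameter names are mine, not a Highcharts API):

```python
def blend_over(fg_rgba, bg_rgb):
    """Composite a possibly-transparent foreground colour over an opaque background."""
    r, g, b, a = fg_rgba
    return tuple(round(c * a + bc * (1 - a)) for c, bc in zip((r, g, b), bg_rgb))

# e.g. plotBackgroundColor rgba(255, 255, 255, 0.4) over a dark grey canvas
print(blend_over((255, 255, 255, 0.4), (40, 40, 40)))   # -> (126, 126, 126)
```

For a gradient, apply the same formula to each stop colour and rebuild the gradient from the blended stops.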
Seems to work OK, but maybe there's an off-the-shelf method, or maybe someone can tell me how to do it better...!
I want to identify squares/rectangles inside my UIImageView (or UIImage).
I looked at "Very simple image recognition on iOS", but that's not quite what I'm looking for.
At the moment I have a UIImageView which is given a UIImage from time to time.
Most of the UIImages have black squares/rectangles like this:
But the corners may (or may not) have rounded edges.
How can I identify the first black square/rectangle's size?
The end result would be to resize my UIImageView to make the first black square in the UIImage fill the screen. Like so:
If your images will always be sharp black squares in a horizontal row, you could use corner detection to identify the rectangles, then pick out the four leftmost corners. I have three variants of corner detectors in my open source GPUImage framework based on the Harris, Noble, and Shi-Tomasi corner detection methods.
Running a GPUImageHarrisCornerDetectionFilter against your boxes with a threshold of 0.4 and sensitivity of 4.0 yields the following result:
They're a little hard to see, but red crosshairs mark where the detector found the corners of your boxes. Again, you just need to take the leftmost four points to find your target rectangle, and then simply scale your image or view so that this rectangle now fills your view.
An example of how to run such feature detection can be found in either the FilterShowcase or FeatureExtractionTest example within my framework. I describe the process by which I do this in this answer over at Signal Processing.
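GPUImage is iOS-only, but purely as an aside, the same corner-detection idea is available in OpenCV if you want to prototype the leftmost-four-corners logic elsewhere; a Shi-Tomasi sketch with illustrative parameter values:

```python
import cv2

gray = cv2.imread("boxes.png", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi corners; maxCorners / qualityLevel / minDistance are guesses to tune
corners = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.1, minDistance=10)
corners = corners.reshape(-1, 2)

# The four leftmost corners bound the first rectangle
leftmost = sorted(corners, key=lambda p: p[0])[:4]
print(leftmost)
```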
It seems the easiest solution would be (see the sketch after this list):
sum the pixels of each column vertically into the top-most row (like summing columns in an Excel table)
the columns with the smallest/biggest sums are your "gap" regions
the width can be derived from step 2.
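A minimal numpy sketch of that projection, assuming black boxes on a light background (the file name and the darkness threshold are assumptions):

```python
import cv2
import numpy as np

gray = cv2.imread("boxes.png", cv2.IMREAD_GRAYSCALE)
dark = (gray < 128).astype(np.int32)        # 1 where the pixel counts as "black"

profile = dark.sum(axis=0)                  # step 1: collapse each column into one value
is_box = profile > 0                        # step 2: zero-sum columns are the gaps

# step 3: width of the first box = length of the first run of box columns
padded = np.concatenate(([0], is_box.astype(np.int8), [0]))
edges = np.flatnonzero(np.diff(padded))     # +1 at run starts, -1 just past run ends
left, right = edges[0], edges[1]
print("first box: columns", left, "to", right - 1, "width", right - left)
```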
From what I understood of your question, you need to implement the Canny edge detection algorithm to detect the edges of the black borders in your image.
For this you should use the image-processing framework available at the following links:
Google
GitHub
Use the ImageWrapper *Image::cannyEdgeExtract(float tlow, float thigh) function from the Image.m file.
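If it is easier to prototype outside that framework first, OpenCV exposes the same algorithm with the same two hysteresis thresholds; a minimal sketch with illustrative values:

```python
import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
# threshold1/threshold2 play the role of tlow/thigh in cannyEdgeExtract
edges = cv2.Canny(gray, threshold1=50, threshold2=150)
cv2.imwrite("edges.png", edges)
```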
On iOS, I'm using Core Plot to draw a line graph. I want the line to be bordered top and bottom with a solid color and a different color in the middle (a striping effect). This cannot be done using CPTLineStyle, so I created a custom line style that uses CGContextSetStrokePattern to draw the line.
I thought I could achieve the desired effect by creating a striped image and using it as the stroke pattern. This works, but the pattern orientation does not follow the direction of the path: the stripes are always horizontal even when the path runs at 45 degrees.
How can I tell Quartz to auto-rotate the pattern fill so that it follows the vector direction of each graph segment? Or alternatively, how can I get Core Plot to do this for me?
We recently added the lineGradient property to CPTLineStyle that gives you a very flexible way to do this. See the "Gradient Scatter Plot" demo in the Plot Gallery example app.
Note that this change was added after the 1.3 release and is not part of a downloadable release yet. You will need to pull the latest code with Mercurial to get the change or wait for the next release.
The best solution I've found is to use two plots: the first with the wider line style, the second with the narrower one. That achieves the desired effect, as the sketch below illustrates.
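Not Core Plot code, but the layering trick is easy to see in any plotting library; a matplotlib sketch of the same two-plot idea:

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 200)
y = np.sin(x)

# The wider line underneath supplies the border colour...
plt.plot(x, y, color="navy", linewidth=6)
# ...and the narrower line on top supplies the middle stripe.
plt.plot(x, y, color="gold", linewidth=2)
plt.savefig("striped_line.png")
```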