I'm working on an ARKit/SceneKit Ruler app and I'm trying to render the ruler's tick marks and numbers as a texture. The ruler is of variable length and can change on the fly.
What's a good way to render the numbers and tick marks? Right now we're using UIGraphicsBeginImageContextWithOptions and doing something like
ctx.fill(CGRect.init(x: 0, y: 0, width: 64, height: 8))
but that doesn't seem like a great solution. The tick marks could probably be handled with an image texture, but what about the numbers?
For the numbers you could use SCNText. If you set extrusionDepth to zero, you get flat text. For the tick marks you could use SCNPlane. Both have one-sided materials by default, which means the back side will be invisible.
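A minimal sketch of that setup (the font, sizes, colors, and scale here are placeholders, not values tied to your ruler):

import SceneKit
import UIKit

// Flat number label: SCNText with zero extrusion depth.
func makeNumberNode(_ string: String) -> SCNNode {
    let text = SCNText(string: string, extrusionDepth: 0)
    text.font = UIFont.systemFont(ofSize: 1)
    text.flatness = 0.1
    text.firstMaterial?.diffuse.contents = UIColor.black
    let node = SCNNode(geometry: text)
    node.scale = SCNVector3(0.01, 0.01, 0.01) // scale font units down to scene units
    return node
}

// Tick mark: a thin one-sided plane.
func makeTickNode() -> SCNNode {
    let plane = SCNPlane(width: 0.002, height: 0.01)
    plane.firstMaterial?.diffuse.contents = UIColor.black
    return SCNNode(geometry: plane)
}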
I have a boxplot and a scatterplot within a VictoryChart and want them shifted a number of pixels in the non-domain axis.
My current code and resulting chart are here: https://codesandbox.io/s/beautiful-violet-opclp?file=/index.js
Now I want the red triangle to be placed a bit higher, about 15 pixels upwards in the y-direction. I tried padding, domainPadding, dx, and dy, but nothing really works.
Any hints are appreciated!
By creating a custom point, the actual position can be set. See https://codesandbox.io/s/focused-lederberg-i8ddx?file=/index.js
I am using the iOS Charts library, and in my application there can be lots of x-axis values for my bar chart view, but I only need to show a few of them. I tried barChartView.setVisibleXRangeMaximum(12), but it causes some weird issues (like the bars filling the whole x-axis). Instead of that function I am trying to use the zoom property, barChartView.zoom(scaleX: 4, scaleY: 0, x: 0, y: 0). However, I don't know in advance how many x-axis values there will be when I draw the chart; in some cases it's 60, 84, or 800. How can I calculate the right zoom ratio so that 10-12 x-axis values are shown (any number in that range is enough for me)?
You might want to use:
barChartView.setVisibleXRange(minXRange: 10.0, maxXRange: 12.0)
instead of:
barChartView.setVisibleXRangeMaximum(12.0)
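For example, a minimal sketch (the data setup here is assumed, it is not from your code):

// Build the chart data as usual, then constrain the visible range so that
// roughly 10-12 bars are shown no matter how many entries there are.
let dataSet = BarChartDataSet(entries: entries, label: "Values")
barChartView.data = BarChartData(dataSet: dataSet)

barChartView.setVisibleXRange(minXRange: 10.0, maxXRange: 12.0)
barChartView.moveViewToX(0) // start scrolled to the first bar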
I don't know which numbers do what in the coordinates example here. I imagine they mean things like "place the top left corner at this position and the bottom right corner at this position", but I don't know which number corresponds to which position.
I've been trying to fool around with the numbers to get a small green rectangle, but I keep getting weird results like the following, and I don't know which numbers need to be what in order to make the rectangle symmetrical and at the bottom.
This is what the rectangle should look like
The height of the rectangle is 50, the height of the screen is 1000, and the width of the screen is 1700.
Here's my draw function
function love.draw()
    love.graphics.setColor(0.28, 0.63, 0.05) -- set the drawing color to green for the ground
    love.graphics.polygon("fill", objects.ground.body:getWorldPoints(objects.ground.shape:getPoints())) -- draw a "filled in" polygon using the ground's coordinates
    -- These are the ground's coordinates: -11650 950 13350 950 13350 1000 -11650 1000

    love.graphics.setColor(0.76, 0.18, 0.05) -- set the drawing color to red for the ball
    love.graphics.circle("fill", objects.ball.body:getX(), objects.ball.body:getY(), objects.ball.shape:getRadius())

    love.graphics.setColor(0.20, 0.20, 0.20) -- set the drawing color to grey for the blocks
    love.graphics.polygon("fill", objects.block1.body:getWorldPoints(objects.block1.shape:getPoints()))
    love.graphics.polygon("fill", objects.block2.body:getWorldPoints(objects.block2.shape:getPoints()))

    print(objects.block1.body:getWorldPoints(objects.block1.shape:getPoints()))
end
As described at https://love2d.org/wiki/love.graphics, Löve's coordinate system has (0, 0) at the upper left corner of the screen. X values increase to the right, Y values increase down.
The polygon function expects the drawing mode as its first parameter, and the remaining (variable) parameters are the coordinates of the vertices of the polygon you wish to draw. Since you want to draw a rectangle, you need four vertices/eight numbers. You do not have to list the upper left corner of the rectangle first, but that's probably the easiest thing to do.
So in your case, you want something like:
love.graphics.polygon('fill', 0, 950, 0, 1000, 1700, 1000, 1700, 950)
I've not worked with the physics system, so I'm not quite sure how its coordinate system relates to "screen" coordinates. The values you show in the comment in your code listing seem like they should give a rectangle (although x = -11650 wouldn't be on screen). You might try experimenting without the physics system first.
Also, since the physics system in Löve is just a binding to Box2D, you might want to read its documentation (http://box2d.org/about/). Not really sure what you're trying to do with feeding shape:getPoints into body:getWorldPoints.
1. Introduction:
So I want to develop a special filter method for UIImages. My idea is to change all the colors in a picture to black, except for a certain color, which should keep its appearance.
Images are always nice, so look at this image to get an idea of what I'd like to achieve:
2. Explanation:
I'd like to apply a filter (algorithm) that is able to find specific colors in an image. The algorithm must be able to replace all colors that do not match the reference color with, for example, black.
I've developed a simple piece of code that is able to replace specific colors (color ranges with a threshold) in any image.
But to be honest, this solution doesn't seem fast or efficient at all!
func colorFilter(image: UIImage, findcolor: String, threshold: Int) -> UIImage {
    let img: CGImage = image.cgImage!
    let context = CGContext(data: nil, width: img.width, height: img.height,
                            bitsPerComponent: 8, bytesPerRow: 4 * img.width,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let binaryData = context.data!.assumingMemoryBound(to: UInt8.self)
    let referenceColor = HEXtoHSL(findcolor) // [h, s, l] integer array
    for i in 0..<img.height {
        for j in 0..<img.width {
            let pixel = 4 * (i * img.width + j)
            let pixelColor = RGBtoHSL([Int(binaryData[pixel]), Int(binaryData[pixel + 1]), Int(binaryData[pixel + 2])]) // [h, s, l] integer array
            let distance = calculateHSLDistance(pixelColor, referenceColor) // value between 0 and 100
            if distance > threshold {
                let setValue: UInt8 = 255
                binaryData[pixel] = setValue; binaryData[pixel + 1] = setValue; binaryData[pixel + 2] = setValue; binaryData[pixel + 3] = 255
            }
        }
    }
    let outputImg = context.makeImage()!
    return UIImage(cgImage: outputImg, scale: image.scale, orientation: image.imageOrientation)
}
3. Code Information: The code above works fine, but it is absolutely inefficient. Because of all the calculation (especially the color conversions), this code takes a LONG (too long) time, so have a look at this screenshot:
My question: I'm pretty sure there is a WAY simpler solution for filtering a specific color (with a given threshold, e.g. #c6456f is similar to #C6476f, ...) than looping through EVERY single pixel to compare its color.
So what I was thinking about was something like a filter (a CIFilter-based method) as an alternative to the code above.
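For illustration, a minimal sketch of what such a CIFilter-based approach could look like, using CIColorCube to map colors outside a hue range to black (the cube size, the simple hue comparison, and the function name are assumptions, not something taken from the code above):

import CoreImage
import UIKit

// Build a 64x64x64 lookup cube that keeps colors whose hue is close to
// targetHue and maps everything else to black, then apply it with CIColorCube.
func hueKeepFilter(image: UIImage, targetHue: CGFloat, tolerance: CGFloat) -> UIImage? {
    let size = 64
    var cubeRGB = [Float]()
    cubeRGB.reserveCapacity(size * size * size * 4)

    for b in 0..<size {
        for g in 0..<size {
            for r in 0..<size {
                let red   = CGFloat(r) / CGFloat(size - 1)
                let green = CGFloat(g) / CGFloat(size - 1)
                let blue  = CGFloat(b) / CGFloat(size - 1)

                var hue: CGFloat = 0, sat: CGFloat = 0, bri: CGFloat = 0, a: CGFloat = 0
                UIColor(red: red, green: green, blue: blue, alpha: 1)
                    .getHue(&hue, saturation: &sat, brightness: &bri, alpha: &a)

                // Note: this simple comparison ignores hue wrap-around at 0/1.
                let keep = abs(hue - targetHue) < tolerance
                cubeRGB.append(keep ? Float(red)   : 0) // non-matching colors -> black
                cubeRGB.append(keep ? Float(green) : 0)
                cubeRGB.append(keep ? Float(blue)  : 0)
                cubeRGB.append(1)                       // keep the image opaque
            }
        }
    }

    let cubeData = Data(bytes: cubeRGB, count: cubeRGB.count * MemoryLayout<Float>.size)
    guard
        let ciImage = CIImage(image: image),
        let filter = CIFilter(name: "CIColorCube", parameters: [
            "inputCubeDimension": size,
            "inputCubeData": cubeData,
            kCIInputImageKey: ciImage
        ]),
        let output = filter.outputImage,
        let cgImage = CIContext().createCGImage(output, from: output.extent)
    else { return nil }

    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}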
Some Notes
Please do not post any replies suggesting the OpenCV library. I would like to develop this "algorithm" exclusively in Swift.
The image used for the timing in the screenshot has a resolution of 500 * 800 px.
That's all.
Did you really read this far? Congratulations! Any help on how to speed up my code would be very much appreciated (maybe there's a better way to get the pixel colors than looping through every pixel). Thanks a million in advance :)
The first thing to do is profile (measure the time consumption of the different parts of your function). Profiling often shows that time is spent in some unexpected place, and it always suggests where to direct your optimization effort. That doesn't mean you have to focus on the most time-consuming part, but it will show you where the time is spent. Unfortunately I'm not familiar with Swift, so I cannot recommend any specific tool.
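In Swift, one simple way to time individual sections is to bracket them with CFAbsoluteTimeGetCurrent(); a minimal sketch (the label and the wrapped call are just examples):

import CoreFoundation

// Rough timing helper: run a block and print how long it took.
func measure<T>(_ label: String, _ block: () -> T) -> T {
    let start = CFAbsoluteTimeGetCurrent()
    let result = block()
    print("\(label): \(CFAbsoluteTimeGetCurrent() - start) s")
    return result
}

// Hypothetical usage inside the loop from the question:
// let pixelColor = measure("RGBtoHSL") { RGBtoHSL([r, g, b]) }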
Regarding iterating through all pixels: it depends on the image structure and your assumptions about the input data. I see two cases in which you can avoid it:
When there is some optimized data structure built over your image (e.g. some statistics about its areas). That usually makes sense when you process the same image with the same (or a similar) algorithm with different parameters. If you process every image only once, it will likely not help you.
When you know that the green pixels always exist in groups, so there cannot be an isolated single pixel. In that case you can skip one or more pixels, and when you find a green pixel, analyze its neighbourhood.
I do not code on your platform but...
Well, I assume your masked areas (with the specific color) are contiguous and large enough, meaning you have groups of pixels forming big enough areas (not just a few pixels thick). With this assumption you can create a density map for your color. What I mean is: if the minimum detail size of your specific-color regions is 10 pixels, you can inspect every 8th pixel on each axis, speeding up the initial scan ~64 times, and then use the full scan only for regions containing your color. Here is what you have to do:
determine properties
You need to set the step for each axis (how many pixels you can skip without missing your colored zone). Let's call these dx, dy.
create density map
Simply create a 2D array that holds whether the center pixel of each region matches your specific color. So if your image has resolution xs, ys, then your map will be:
int mx=xs/dx;
int my=ys/dy;
int map[mx][my],x,y,xx,yy;
for (yy=0,y=dy>>1;y<ys;y+=dy,yy++)
 for (xx=0,x=dx>>1;x<xs;x+=dx,xx++)
  map[xx][yy]=compare(pixel(x,y),specific_color)<threshold;
enlarge map set areas
Now you should enlarge the set areas in map[][] into the neighboring cells, because step #2 could miss the edges of your color regions.
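A minimal sketch of that enlargement (dilation) step, written in Swift to match the question's language; the [[Int]] layout of the map is an assumption:

// Grow every set cell of the coarse map into its 8 neighbours so that
// regions whose edge fell between sampled pixels are not missed.
func dilate(_ map: [[Int]]) -> [[Int]] {
    let mx = map.count
    let my = map.first?.count ?? 0
    var out = map
    for x in 0..<mx {
        for y in 0..<my where map[x][y] != 0 {
            for nx in max(0, x - 1)...min(mx - 1, x + 1) {
                for ny in max(0, y - 1)...min(my - 1, y + 1) where out[nx][ny] == 0 {
                    out[nx][ny] = 1
                }
            }
        }
    }
    return out
}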
process all set regions
for (yy=0;yy<my;yy++)
 for (xx=0;xx<mx;xx++)
  if (map[xx][yy])
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     if (compare(pixel(x,y),specific_color)>=threshold) pixel(x,y)=0x00000000;
If you want to speed this up even more, you need to detect the set map[][] cells that are on an edge (have at least one zero neighbor). You can distinguish the cells like this:
0 - no specific color is present
1 - inside of color area
2 - edge of color area
That can be done with a simple pass in O(mx*my). After that you only need to check the colors in the edge regions, so:
for (yy=0;yy<my;yy++)
 for (xx=0;xx<mx;xx++)
  if (map[xx][yy]==2)
   {
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     if (compare(pixel(x,y),specific_color)>=threshold) pixel(x,y)=0x00000000;
   }
  else if (map[xx][yy]==0)
   {
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     pixel(x,y)=0x00000000;
   }
This should be even faster. In case your image resolution xs,ys is not a multiple of the region size dx,dy, you should handle the outer edge of the image either by zero padding or by special loops for that missing part of the image...
By the way, how long does it take to read and set your whole image?
for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
  pixel(x,y)=pixel(x,y)^0x00FFFFFF;
If this alone is slow, then your pixel access is too slow and you should use a different API for it. That is a very common mistake on the Windows GDI platform, where people usually use Pixels[][], which is slower than a crawling snail. There are other ways, like bit-locking/blitting, ScanLine, etc., so in such a case you need to look for something fast on your platform. If you cannot speed up even this, then you cannot do anything else... By the way, what hardware does this run on?
I referred to the following article. What I actually need to draw is concentric circles with an effect as shown in the image below.
I am finding it difficult to a) draw the white streaks radially, and b) find key terms to search for related articles so I can proceed further on this.
Any hint or link to read about this will be of great help.
Try these
Metallic Knob
Metallic Knob 2
http://maniacdev.com/2012/06/ios-source-code-example-making-reflective-metallic-buttons-like-the-music-app
This is a tutorial on making reflective metal buttons. You can apply the techniques from the source code to whatever object you're trying to make. The source code is found here on github. I just googled "ios objective c metal effect" because that's what you're trying to do, right? The metal effect appears in concentric circles and changes as you tilt your phone, just as the iOS6 music slider does.
I don't have any code for you, but the idea is actually quite simple. You draw a number of lines radiating from a single, central point (say 50,50) to four different sets of points. The first set is x = 0 to 100 with y = 0; the second set is y = 0 to 100 with x = 0; the third set is x = 0 to 100 with y = 100; the fourth set is y = 0 to 100 with x = 100. For each step you either change the colour from white to black (or white to grey) in increments, or use a lookup table with your colour values in it.
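A rough sketch of that idea in Swift/Core Graphics (the 100x100 size, the 2-pixel spacing, and the grey ramp are placeholders):

import UIKit

// Draw thin lines from the centre to points along all four edges of a square,
// varying the grey level per line to get radial "streaks".
func streakImage(size: CGFloat = 100) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: size, height: size))
    return renderer.image { ctx in
        let cg = ctx.cgContext
        let center = CGPoint(x: size / 2, y: size / 2)

        // Points along the top, bottom, left and right edges.
        var edgePoints: [CGPoint] = []
        for i in stride(from: CGFloat(0), through: size, by: 2) {
            edgePoints.append(CGPoint(x: i, y: 0))
            edgePoints.append(CGPoint(x: i, y: size))
            edgePoints.append(CGPoint(x: 0, y: i))
            edgePoints.append(CGPoint(x: size, y: i))
        }

        // One line per edge point, stepping the grey level instead of using a lookup table.
        for (index, point) in edgePoints.enumerated() {
            let grey = 0.6 + 0.4 * abs(sin(Double(index)))
            cg.setStrokeColor(UIColor(white: CGFloat(grey), alpha: 1).cgColor)
            cg.setLineWidth(1)
            cg.move(to: center)
            cg.addLine(to: point)
            cg.strokePath()
        }
    }
}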