iOS Charts: Bar chart spacing and position - ios

Objective
I'm trying to display data from two sets (ACTIVE and REST) in a bar chart.
The sets alternate (one ACTIVE interval, followed by a REST interval, and so on); that is, there is always one REST set between every two ACTIVE sets.
The code I've used is below, for reference.
I am, however, running into problems with the bar positions and spacing.
Issue: centering bar at x-value
First issue: the bar is not centered at its corresponding x-value. In the example below, the first orange bar has x = 1.
It clearly starts at value 1 on the x-axis (its leftmost side sits at about 1), but I want it centered around the value 1 on the x-axis.
Issue: bar spacing
For some reason, some bars intersect and the spacing is not equal (see screenshot).
Note the white space between bars 2 and 3, and the absence of space between bars 1 and 2.
The relevant code here is:
let barWidth = 1.0
let barSpace = 0.10
let groupSpace = 0.1
chartData.groupBars(fromX: 1, groupSpace: groupSpace, barSpace: barSpace)
chartData.barWidth = barWidth
Indexing bars and BarChartDataEntry
Each set (REST and ACTIVE) comes from a different array of values.
If I press on the 1st REST bar, I would like to obtain the corresponding value 0; on the 1st ACTIVE bar, the value 0 as well; on the 2nd ACTIVE bar, the value 1; and so on.
However, in chartValueSelected, using the entry.x value gives the bar's x-position on the axis (NOT the x value I've set in the code), which is causing errors.
How can I get the selected bar's index within the data set it belongs to?
Code
//
// MARK: Chart Setup
//
func setupBarChart() {
    let intervals = self.session!.getIntervals()
    // Set chart delegate
    intervalBarChartView.delegate = self
    //
    // Create pairs (x, y) of values
    //
    var values_ACTIVE: [BarChartDataEntry] = []
    var values_REST: [BarChartDataEntry] = []
    for i in 0...(intervals.count - 1) {
        let newValue_ACTIVE = intervals[i].duration!.doubleValue
        // let newIndex_ACTIVE = Double(2*i+1)
        let newIndex_ACTIVE = Double(i)
        values_ACTIVE += [BarChartDataEntry(x: newIndex_ACTIVE, y: newValue_ACTIVE)]
        if i < (intervals.count - 1) {
            let newValue_REST = (intervals[i+1].startTime!.doubleValue) - intervals[i].getEndTime()!
            // let newIndex_REST = Double(2*i+2)
            let newIndex_REST = Double(i)
            values_REST += [BarChartDataEntry(x: newIndex_REST, y: newValue_REST)]
        }
    }
    // Create data sets
    let dataSet_ACTIVE = BarChartDataSet(values: values_ACTIVE, label: "ACTIVE")
    let dataSet_REST = BarChartDataSet(values: values_REST, label: "REST")
    // Set chart data
    let chartData = BarChartData()
    chartData.addDataSet(dataSet_REST)
    chartData.addDataSet(dataSet_ACTIVE)
    self.intervalBarChartView.data = chartData
    // Bar sizes
    let barWidth = 1.0
    let barSpace = 0.10
    let groupSpace = 0.1
    chartData.groupBars(fromX: 0, groupSpace: groupSpace, barSpace: barSpace)
    chartData.barWidth = barWidth
    // Bar colors
    dataSet_ACTIVE.colors = [runOrange]
    dataSet_REST.colors = [RunGreen]
    self.intervalBarChartView.gridBackgroundColor = NSUIColor.white
    // Enable/disable showing values and their position
    chartData.setDrawValues(false)
    intervalBarChartView.drawValueAboveBarEnabled = false
    // Bar axes
    intervalBarChartView.xAxis.axisMinimum = 0.0
    intervalBarChartView.xAxis.axisMaximum = 10
    intervalBarChartView.leftAxis.axisMinimum = 0.0
    intervalBarChartView.rightAxis.enabled = false
    // Bar axes: grid lines
    self.intervalBarChartView.xAxis.drawGridLinesEnabled = false
    self.intervalBarChartView.xAxis.drawAxisLineEnabled = false
    self.intervalBarChartView.rightAxis.drawAxisLineEnabled = true
    self.intervalBarChartView.rightAxis.drawGridLinesEnabled = false
    self.intervalBarChartView.leftAxis.drawGridLinesEnabled = false
    self.intervalBarChartView.leftAxis.drawAxisLineEnabled = false
    // Chart text
    self.intervalBarChartView.chartDescription?.text = "Barchart Demo"
    // Control interaction
    self.intervalBarChartView.doubleTapToZoomEnabled = false
}
EDIT
Here is what I get with two different combinations of settings:
With these settings:
// Bar sizes
let barWidth = 0.3
let barSpace = 0.10
let groupSpace = 0.1
The gap between the bars is (more or less) equal:
However, with these settings:
let groupSpace = 0.3
let barSpace = 0.05
let barWidth = 0.3
I get a wider gap between bars 2 and 3 (compared to bars 1 and 2).

You need to change your barWidth calculation to achieve the output you expect.
The logic of the grouped bar chart's bar and spacing calculation is:
(barSpace + barWidth) * dataSetCount + groupSpace = 1.0
So in your case you need to update the values like this:
let groupSpace = 0.3
let barSpace = 0.05
let barWidth = 0.3
// (0.3 + 0.05) * 2 + 0.3 = 1.00 -> interval per "group"
let startVal = 0
chartData.barWidth = barWidth
chartData.groupBars(fromX: Double(startVal), groupSpace: groupSpace, barSpace: barSpace)
By doing this you will get properly centered bar positions in your bar chart.
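If you also want the axis labels centered under each group, a sketch along these lines may help (assuming the danielgindi/Charts API, reusing the names from the setup above; groupWidth(groupSpace:barSpace:) and centerAxisLabelsEnabled are part of that API):
// Sketch: size the x-axis to the groups so each group's label
// is drawn at the center of its interval
let groupCount = Double(dataSet_ACTIVE.entryCount)
let groupWidth = chartData.groupWidth(groupSpace: groupSpace, barSpace: barSpace) // 1.0 with the values above
intervalBarChartView.xAxis.axisMinimum = Double(startVal)
intervalBarChartView.xAxis.axisMaximum = Double(startVal) + groupWidth * groupCount
intervalBarChartView.xAxis.centerAxisLabelsEnabled = true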
One more option is to set only the barWidth for your chart, like below:
let barChartData = BarChartData(dataSet: barChartDataSet)
barChartData.barWidth = 0.4
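As for the indexing question: a minimal sketch of the delegate callback, assuming the danielgindi/Charts API, where the Highlight identifies the data set and entryIndex(entry:) looks the tapped entry up within it:
func chartValueSelected(_ chartView: ChartViewBase, entry: ChartDataEntry, highlight: Highlight) {
    guard let data = chartView.data else { return }
    let dataSet = data.dataSets[highlight.dataSetIndex]
    // Index of the tapped bar within its own set (0 for the 1st REST bar, and so on)
    let indexInSet = dataSet.entryIndex(entry: entry)
    print("Selected entry \(indexInSet) in set \(dataSet.label ?? "?")")
}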
Hope this helps!

Related

SKTileDefinition Normal Texture Not Rendering Properly Outside of Simulator

I'm currently making a game with SpriteKit and struggling to figure out why my normal textures are not being applied properly when rendering on a device, while they seem fine in the simulator.
Here is the code that adds the normal textures to the tile definitions, among other things:
self.wallTileMap = self.scene?.childNode(withName: "Walls") as? SKTileMapNode
let textureAtlas = SKTextureAtlas(named: "Wall Normal Maps")
if let tileMap = self.wallTileMap {
    let startingLocation: CGPoint = tileMap.position
    let tileSize = tileMap.tileSize
    let halfWidth = CGFloat(tileMap.numberOfColumns) / 2.0 * tileSize.width
    let halfHeight = CGFloat(tileMap.numberOfRows) / 2.0 * tileSize.height
    let rows = tileMap.numberOfRows
    let columns = tileMap.numberOfColumns
    for column in 0..<columns {
        for row in 0..<rows {
            let x = CGFloat(column) * tileSize.width - halfWidth + (tileSize.width / 2)
            let y = CGFloat(row) * tileSize.height - halfHeight + (tileSize.height / 2)
            if let tileDefinition = tileMap.tileDefinition(atColumn: column, row: row) {
                if let name = tileDefinition.name {
                    let normalTexture = textureAtlas.textureNamed("\(name)_n")
                    tileDefinition.normalTextures = [normalTexture]
                }
                if (tileDefinition.userData?["shouldKill"] as? Bool ?? false) {
                    let newNode = SKShapeNode(rectOf: tileDefinition.size)
                    newNode.position = CGPoint(x: x, y: y)
                    newNode.isHidden = true
                    newNode.physicsBody = SKPhysicsBody(texture: tileDefinition.textures[0], size: tileDefinition.size)
                    newNode.physicsBody?.isDynamic = false
                    newNode.physicsBody?.affectedByGravity = false
                    newNode.physicsBody?.categoryBitMask = CollisionTypes.wall.rawValue
                    newNode.physicsBody?.collisionBitMask = CollisionTypes.dynamicComponents.rawValue
                    newNode.physicsBody?.contactTestBitMask = CollisionTypes.dynamicComponents.rawValue
                    self.addChild(newNode)
                    newNode.position = CGPoint(x: newNode.position.x + startingLocation.x, y: newNode.position.y + startingLocation.y)
                }
            }
        }
    }
}
The result in the simulator, which is expected:
The result on a device, which is incorrect:
I tried multiple simulators, and it worked on all of them. I've also tried multiple physical devices, and it was broken on all of them.
The only thing I could find while debugging is that the normal images on the device occasionally seemed to be off by one pixel in size. So the normal size is 128 x 128, and occasionally the size on the device would be 128 x 127 or 127 x 127. I have no clue what could cause this, nor whether that is the actual problem.
Does anyone have any ideas as to why the normal maps would be rendered properly in the simulator, but not on device?

How to flip the orientation of a beamsplitter in the pst-optexp package in LaTeX

I was trying to draw an interferometer in LaTeX using the pst-optexp package. Everything works fine, except the beamsplitter is in the wrong orientation and needs to be flipped 90 degrees. However, the command \beamsplitter[bsstyle=plate, compname=BS, label=0.8 -45](A)(BS)(PD){BS} does not work. Does anyone know how to rotate the beamsplitter in that package?
The code I used is
\documentclass[margin=36pt]{standalone}
\usepackage{pst-optexp}
\usepackage{amsmath}
\begin{document}
\begin{pspicture}[showgrid](6,6)
\pnodes(0,3){S}(1,3){LS}(2,3){A}(3,3){BS}(5,3){M1}(3,5){M2}(3,1){PD}
\optbox[abspos = 0.5, optboxsize=1 0.5, labeloffset = 0](S)(LS){Laser}
\lens[compname=L0, lensradius=0.5 0.5, lensheight = 0.5, abspos = 0.2, n=2,thicklens = false](LS)(A){obj}
\pinhole[compname = PH, labeloffset = -0.7](LS)(A){Pinhole}
\lens[compname=L1, abspos = 1, n=2, thicklens = false](LS)(A){Lens1}
\psset{mirrortype=extended, mirrordepth=0.2}
\beamsplitter[bsstyle=plate, compname=BS,label = 0.8 -45](A)(BS)(PD){BS}
\optbox[abspos = 1.75, optboxsize=0.5 0.5, label = 0.7 180](BS)(M1){Sample}
\mirror[compname=M1,labeloffset = 1](BS)(M1)(BS){Test Arm}
\mirror[compname=M2, mirrortype=piezo](BS)(M2)(BS){Ref Arm}
\lens[compname=L2, label = 1 180](BS)(PD){Lens2}
\optdetector[compname=Det, dettype=round, label = 0.5 90,abspos = 2.5](BS)(PD){Camera}
\addtopsstyle{Beam}{beamwidth=0.2, fillstyle=solid, linecolor = red, fillcolor=red, opacity = 0.2}
\drawwidebeam(LS){L0}{L1}{BS}{M1}{BS}{M2}{BS}{L2}{Det}
\end{pspicture}
\end{document}
In the resulting picture, the orientation of the beamsplitter is wrong. The example given in the pst-optexp manual is wrong in the same way.

CAEmitterCell emits in two opposite directions (one is wrong)

I want a CAEmitterCell to emit at a specific angle. I use emitterCell.emissionLongitude = -CGFloat.pi to specify the emission angle of -pi (which is left), but it emits to the right as well.
func setUpEmitterCell() {
    emitterCell.contents = UIImage(named: "spark_small")?.cgImage
    emitterCell.velocity = 50.0
    emitterCell.velocityRange = 500.0
    let zeroDegreesInRadians = degreesToRadians(0.0)
    emitterCell.spin = degreesToRadians(130.0)
    emitterCell.spinRange = zeroDegreesInRadians
    emitterCell.emissionRange = degreesToRadians(10.0)
    emitterCell.emissionLongitude = -CGFloat.pi
    emitterCell.lifetime = 1.0
    emitterCell.birthRate = 1000.0
    emitterCell.xAcceleration = 0.0
    emitterCell.yAcceleration = 0.0
}
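(degreesToRadians isn't shown in the question; presumably it is a small helper along these lines:)
// Assumed helper, not part of the original question
func degreesToRadians(_ degrees: CGFloat) -> CGFloat {
    return degrees * .pi / 180.0
}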
This is the problem:
emitterCell.velocity = 50.0
// but then:
emitterCell.velocityRange = 500.0
You are allowing the velocity to range so widely from its original value of 50 that it can be negative. When it is, the cells are emitted in the opposite direction (that is what negative velocity means).
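A minimal sketch of a fix: keep velocityRange no larger than velocity, so the effective speed stays non-negative and every cell travels along emissionLongitude:
// Speed now varies in 300 ± 250, i.e. 50...550, never negative,
// so all cells are emitted to the left as intended
emitterCell.velocity = 300.0
emitterCell.velocityRange = 250.0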

How to normalize disparity data in iOS?

In the WWDC session "Image Editing with Depth" they mentioned normalizedDisparity and normalizedDisparityImage a few times:
"The basic idea is that we're going to map our normalized disparity
values into values between 0 and 1"
"So once you know the min and max you can normalize the depth or disparity between 0 and 1."
I tried to first get the disparity image like this:
let disparityImage = depthImage.applyingFilter(
    "CIDepthToDisparity", withInputParameters: nil)
Then I tried to get the depthDataMap and do the normalization, but it didn't work. Am I on the right track? I would appreciate some hints on what to do.
Edit:
This is my test code, sorry for the quality. I get the min and max, then I try to loop over the data to normalize it (let normalizedPoint = (point - min) / (max - min)).
let depthDataMap = depthData!.depthDataMap
let width = CVPixelBufferGetWidth(depthDataMap) // 768 on an iPhone 7+
let height = CVPixelBufferGetHeight(depthDataMap) // 576 on an iPhone 7+
CVPixelBufferLockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
// Convert the base address to a safe pointer of the appropriate type
let floatBuffer = unsafeBitCast(CVPixelBufferGetBaseAddress(depthDataMap),
                                to: UnsafeMutablePointer<Float32>.self)
var min = floatBuffer[0]
var max = floatBuffer[0]
for x in 0..<width {
    for y in 0..<height {
        // The buffer is row-major, so the index must be y * width + x;
        // the original x * y skipped most pixels and revisited others
        let distanceAtXYPoint = floatBuffer[y * width + x]
        if distanceAtXYPoint < min {
            min = distanceAtXYPoint
        }
        if distanceAtXYPoint > max {
            max = distanceAtXYPoint
        }
    }
}
What I expected is that the data would reflect the disparity where the user tapped on the image, but it didn't match. The code to find the disparity where the user tapped is here:
// Apply the filter with the sampleRect from the user’s tap. Don’t forget to clamp!
let minMaxImage = normalized?.clampingToExtent().applyingFilter(
    "CIAreaMinMaxRed", withInputParameters:
    [kCIInputExtentKey: CIVector(cgRect: rect2)])
// A four-byte buffer to store a single pixel value
var pixel = [UInt8](repeating: 0, count: 4)
// Render the image to a 1x1 rect. Be sure to use a nil color space.
context.render(minMaxImage!, toBitmap: &pixel, rowBytes: 4,
               bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
               format: kCIFormatRGBA8, colorSpace: nil)
// The max is stored in the green channel. Min is in the red.
let disparity = Float(pixel[1]) / 255.0
There's a new blog post on raywenderlich.com called "Image Depth Maps Tutorial for iOS" that contains a sample app and details related to working with depth. The sample code shows how to normalize the depth data using a CVPixelBuffer extension:
extension CVPixelBuffer {
    func normalize() {
        let width = CVPixelBufferGetWidth(self)
        let height = CVPixelBufferGetHeight(self)
        CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
        let floatBuffer = unsafeBitCast(CVPixelBufferGetBaseAddress(self), to: UnsafeMutablePointer<Float>.self)
        var minPixel: Float = 1.0
        var maxPixel: Float = 0.0
        for y in 0 ..< height {
            for x in 0 ..< width {
                let pixel = floatBuffer[y * width + x]
                minPixel = min(pixel, minPixel)
                maxPixel = max(pixel, maxPixel)
            }
        }
        let range = maxPixel - minPixel
        for y in 0 ..< height {
            for x in 0 ..< width {
                let pixel = floatBuffer[y * width + x]
                floatBuffer[y * width + x] = (pixel - minPixel) / range
            }
        }
        CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
    }
}
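A usage sketch, assuming the depthDataMap pixel buffer from the question holds 32-bit floats:
// Normalize in place, then wrap the buffer for display or further filtering
depthDataMap.normalize()
let normalizedImage = CIImage(cvPixelBuffer: depthDataMap)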
Something to keep in mind when working with depth data is that it is lower resolution than the actual image, so you need to scale it up (more info in the blog and in the WWDC video).
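For example, a sketch of that upscaling step (mainImage and depthImage are placeholder names):
// Scale the lower-resolution depth/disparity map up to the main image's size
let scaleX = mainImage.extent.width / depthImage.extent.width
let scaleY = mainImage.extent.height / depthImage.extent.height
let scaledDepthImage = depthImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))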
Will's answer above is very good, but it can be improved as follows. I'm using it with depth data from a photo; it's possible that it won't work if the depth data isn't 16-bit, as mentioned above, but I haven't found such a photo yet. I'm surprised there isn't a filter to handle this in Core Image.
extension CVPixelBuffer {
    func normalize() {
        CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
        let width = CVPixelBufferGetWidthOfPlane(self, 0)
        let height = CVPixelBufferGetHeightOfPlane(self, 0)
        let count = width * height
        let pixelBufferBase = unsafeBitCast(CVPixelBufferGetBaseAddressOfPlane(self, 0), to: UnsafeMutablePointer<Float>.self)
        let depthCopyBuffer = UnsafeMutableBufferPointer<Float>(start: pixelBufferBase, count: count)
        let maxValue = vDSP.maximum(depthCopyBuffer)
        let minValue = vDSP.minimum(depthCopyBuffer)
        let range = maxValue - minValue
        let negMinValue = -minValue
        let subtractVector = vDSP.add(negMinValue, depthCopyBuffer)
        let normalizedDisparity = vDSP.divide(subtractVector, range)
        pixelBufferBase.initialize(from: normalizedDisparity, count: count)
        CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
    }
}
Try using the Accelerate framework's vDSP vector functions; here is a normalize in two functions.
To change the CVPixelBuffer to a 0...1 normalized range:
myCVPixelBuffer.setUpNormalize()
import Accelerate

extension CVPixelBuffer {
    func vectorNormalize(targetVector: UnsafeMutableBufferPointer<Float>) -> [Float] {
        // range = max - min
        // normalized to 0..1 is (pixel - minPixel) / range
        // see "Using vDSP for Vector-based Arithmetic" under the Accelerate documentation
        // see also the Accelerate documentation section "Vector extrema calculation":
        //   static func maximum<U>(U) -> Float
        //     Returns the maximum element of a single-precision vector.
        //   static func minimum<U>(U) -> Float
        //     Returns the minimum element of a single-precision vector.
        let maxValue = vDSP.maximum(targetVector)
        let minValue = vDSP.minimum(targetVector)
        let range = maxValue - minValue
        let negMinValue = -minValue
        // adding the negated minimum is subtracting it
        let subtractVector = vDSP.add(negMinValue, targetVector)
        let result = vDSP.divide(subtractVector, range)
        return result
    }

    func setUpNormalize() -> CVPixelBuffer {
        // grayscale buffer of float32 (Float)
        // returns the normalized CVPixelBuffer
        CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
        let width = CVPixelBufferGetWidthOfPlane(self, 0)
        let height = CVPixelBufferGetHeightOfPlane(self, 0)
        let count = width * height
        let bufferBaseAddress = CVPixelBufferGetBaseAddressOfPlane(self, 0) // UnsafeMutableRawPointer
        let pixelBufferBase = unsafeBitCast(bufferBaseAddress, to: UnsafeMutablePointer<Float>.self)
        let depthCopy = UnsafeMutablePointer<Float>.allocate(capacity: count)
        depthCopy.initialize(from: pixelBufferBase, count: count)
        let depthCopyBuffer = UnsafeMutableBufferPointer<Float>(start: depthCopy, count: count)
        let normalizedDisparity = vectorNormalize(targetVector: depthCopyBuffer)
        // copy the normalized map back into the CVPixelBuffer
        pixelBufferBase.initialize(from: normalizedDisparity, count: count)
        depthCopy.deallocate()
        CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
        return self
    }
}
You can see it in action in a modified version of the Apple sample 'PhotoBrowse' app at
https://github.com/racewalkWill/PhotoBrowseModified

Swift 3 - Slider's pointer width

I want to make a label for a slider that shows the position of the pointer like this:
This is my code that's moving this label when the slider's pointer moves:
var sliderPointWidth: CGFloat = 32.0

@IBAction func sliderValueChanged(_ slider: UISlider) {
    sliderLabel.text = "\(Int(roundf(slider.value)))"
    let leftMove = slider.frame.minX
    let allRange = (slider.frame.width - sliderPointWidth) * CGFloat(slider.value / slider.maximumValue)
    let middleOfSliderLabel = sliderLabel.frame.width / 2
    let x = leftMove + sliderPointWidth / 2 + allRange - middleOfSliderLabel
    sliderLabel.frame.origin = CGPoint(x: CGFloat(x), y: sliderLabel.frame.minY)
}
But to make the label's middle x and the pointer's middle x the same, I need to know the width of this pointer. From looking into this, I understood that it's about 32 points, but I don't know this value for other screens. Are there any methods in UISlider that can give this value?
You should retrieve the size of the thumb with thumbRect:
let thumbSize = slider.thumbRect(forBounds: slider.bounds, trackRect: slider.trackRect(forBounds: slider.bounds), value: 0).size
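Building on that, a sketch of centering the label over the thumb using the thumb's actual rect (sliderLabel as in the question):
// Center the label on the thumb, converting the thumb's x to the superview's coordinates
let trackRect = slider.trackRect(forBounds: slider.bounds)
let thumbRect = slider.thumbRect(forBounds: slider.bounds, trackRect: trackRect, value: slider.value)
sliderLabel.center = CGPoint(x: slider.frame.minX + thumbRect.midX,
                             y: sliderLabel.center.y)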
