I am using this example to create a pie chart on iOS:
https://www.highcharts.com/ios/demo/pie-gradient
The pie chart renders fine, but the gradient fill comes out as solid black. I converted the code in the example to Swift like this:
let colors = [
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#7cb5ec"], [1, "rgb(48,105,160)"]]),
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#434348"], [1, "rgb(0,0,0)"]])!,
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#90ed7d"], [1, "rgb(68,161,49)"]]),
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#f7a35c"], [1, "rgb(171,87,16)"]]),
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#8085e9"], [1, "rgb(52,57,157)"]]),
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#f15c80"], [1, "rgb(165,16,52)"]]),
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#e4d354"], [1, "rgb(152,135,8)"]]),
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#2b908f"], [1, "rgb(0,68,67)"]]),
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#f45b5b"], [1, "rgb(168,15,15)"]]),
    HIColor(radialGradient: ["cx": 0.5, "cy": 0.3, "r": 0.7],
            stops: [[0, "#91e8e1"], [1, "rgb(69,156,149)"]])
]
In the given example, they assign the colors array to options.colors, but that property only accepts a [String] array, not [HIColor], so I get a compiler error (screenshot: error.png).
In order to fix the error, here is the modification I tried, which gives an all-black pie:
let colors_str = colors.map { (color: HIColor!) -> String in
    let c = color.getData().debugDescription
        .replacingOccurrences(of: "Optional(", with: "")
        .replacingOccurrences(of: "\n", with: "")
        .replacingOccurrences(of: "\"", with: "")
        .dropLast()
    let value = String(c)
    return value
}
options.colors = colors_str
[screenshot: the pie chart rendered entirely black]
Any help will be highly appreciated.
I found the solution to this problem. I wanted to post it since I spent a day on it, in case someone else faces the same issue. The iOS code in the Highcharts samples is incorrect. I referred to the JavaScript code of the same pie demo and found that the colors array is assigned to plotOptions.pie.colors. Here is the sample code in Swift:
let plot_options = HIPlotOptions()
plot_options.pie = HIPie()
plot_options.pie.colors = colors as? [HIColor]
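To actually use it, the plot options object still has to end up on the chart's options. A minimal sketch, assuming the usual setup from the Highcharts iOS examples (the names options and chartView are assumptions, not from the code above):
let options = HIOptions()
// ... chart, title, series, etc. as in the demo ...
options.plotOptions = plot_options   // the gradient HIColors live in plot_options.pie.colors
chartView.options = options          // chartView: HIChartView, assumed from the standard setup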
I prepared a toy experiment to project a point defined in the world frame onto the image plane. I am then trying to recover the 3D point (inverse projection) from the calculated pixel coordinates. I used the same coordinate frames as in this figure: https://www.researchgate.net/figure/World-and-camera-frame-definition_fig1_224318711 (world x, y, z -> camera z, -x, -y). Later I will project the same point into different image frames. The problem is that the 3D point defined at (8, -2, 1) is recovered at (6.29, -1.60, 0.7). Since there is no loss of information, I think I should get the same point back. I believe there is a problem with the depth, but I can't find what I'm missing.
import numpy as np

# 3D point at (8, -2, 1) w.r.t. the world frame
P_world = np.array([8, -2, 1, 1]).reshape((4, 1))

# world x, y, z -> camera z, -x, -y
T_wc = np.array([
    [0, -1,  0, 0],
    [0,  0, -1, 0],
    [1,  0,  0, 0],
    [0,  0,  0, 1]])

pose0 = np.eye(4)
pose0[:3, -1] = [1, 0, .6]
pose0 = np.matmul(T_wc, pose0)

pose1 = np.eye(4)
pose1[:3, -1] = [3, 0, .6]
pose1 = np.matmul(T_wc, pose1)

# depth taken as the Euclidean distance from the camera position to the point
depth0 = np.linalg.norm(P_world[:3].flatten() - np.array([1, 0, .6]).flatten())
depth1 = np.linalg.norm(P_world[:3].flatten() - np.array([3, 0, .6]).flatten())

K = np.array([
    [173.0,   0.0, 173.0],
    [  0.0, 173.0, 130.0],
    [  0.0,   0.0,   1.0]])

# project the world point into both cameras
uv1 = np.matmul(np.matmul(K, pose0[:3]), P_world)
uv1 = (uv1 / uv1[-1])[:2]
uv2 = np.matmul(np.matmul(K, pose1[:3]), P_world)
uv2 = (uv2 / uv2[-1])[:2]

img0 = np.zeros((260, 346))
img0[int(uv1[0]), int(uv1[1])] = 1
img1 = np.zeros((260, 346))
img1[int(uv2[0]), int(uv2[1])] = 1

# %% inverse projection
pix_coord = np.array([int(uv1[0]), int(uv1[1]), 1])
pt_infilm = np.matmul(np.linalg.inv(K), pix_coord.reshape(3, 1))
pt_incam = depth0 * pt_infilm
pt_incam_hom = np.append(pt_incam, 1)
pt_inworld = np.matmul(np.linalg.inv(pose0), pt_incam_hom)
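For reference, here is a minimal sketch (reusing the variables above) of the inverse projection with the depth taken as the camera-frame z-coordinate of the point, i.e. the value uv1 was divided by during projection, rather than the Euclidean camera-to-point distance. The int() rounding of the pixel also adds a small error, so the unrounded uv1 is used here to show the exact round trip:
# Sketch: use the camera-frame z as depth instead of the Euclidean distance.
P_cam0 = np.matmul(pose0, P_world)      # point expressed in camera-0 coordinates
depth0_z = P_cam0[2, 0]                 # depth along the optical axis

pix = np.array([uv1[0, 0], uv1[1, 0], 1.0]).reshape(3, 1)   # unrounded pixel
pt_infilm = np.matmul(np.linalg.inv(K), pix)
pt_incam_hom = np.append(depth0_z * pt_infilm, 1)
pt_inworld = np.matmul(np.linalg.inv(pose0), pt_incam_hom)  # -> (8, -2, 1, 1)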
I am new to image processing. I am trying out a few experiments.
1. Binarized my image with Otsu's threshold
2. Found connected pixels with skimage
from PIL import Image
import numpy as np
from skimage import filters, morphology

# load the image and convert it to grayscale
im = Image.open("DMSO_Resized.png")
imgr = im.convert("L")
im2arr = np.array(imgr)
arr2im = Image.fromarray(im2arr)

# binarize with Otsu's threshold and label connected components
thresh = filters.threshold_otsu(im2arr)
binary = im2arr > thresh
connected = morphology.label(binary)
I'd now like to count the number of background pixels that are either "completely" covered by other background pixels or only "partially" covered. For example, pixel[1][1] here is partially covered:
1 0 2
0 0 0
3 0 8
and here pixel[1][1] is completely covered:
0 0 0
0 0 0
0 0 0
Is there a skimage (or other package) method to do this, or would I have to implement it as an array-processing loop?
import numpy as np
from skimage import morphology

bad_connection = np.array([[1, 0, 0, 0, 1],
                           [1, 0, 0, 0, 1],
                           [1, 0, 0, 0, 1],
                           [1, 0, 1, 0, 1],
                           [1, 0, 0, 0, 1]], dtype=np.uint8)

expected_good = np.array([[0, 0, 1, 0, 0],
                          [0, 0, 1, 0, 0],
                          [0, 0, 0, 0, 0],
                          [0, 0, 0, 0, 0],
                          [0, 0, 0, 0, 0]], dtype=np.uint8)

another_bad = np.array([[1, 0, 0, 0, 1],
                        [1, 1, 0, 1, 1],
                        [1, 1, 1, 1, 1],
                        [1, 1, 0, 1, 1],
                        [1, 0, 0, 0, 1]], dtype=np.uint8)

another_good = np.array([[0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0]], dtype=np.uint8)

footprint = np.array([[1, 0, 0, 0, 1],
                      [1, 0, 0, 0, 1],
                      [1, 0, 0, 0, 1]], dtype=np.uint8)
[Outputs omitted: incorrect or not as expected.]
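Setting the footprint attempt above aside, here is a minimal sketch of one way to do the counting itself with scipy.ndimage, assuming 8-connectivity, that "background" means label 0 in connected, and that pixels outside the image border count as background:
# Sketch: count background neighbours of every background pixel (assumptions:
# background == 0, 8-connectivity, outside the border counts as background).
from scipy import ndimage

background = (connected == 0)

kernel = np.array([[1, 1, 1],
                   [1, 0, 1],      # centre excluded: count neighbours only
                   [1, 1, 1]])
bg_neighbours = ndimage.convolve(background.astype(int), kernel,
                                 mode='constant', cval=1)

completely_covered = background & (bg_neighbours == 8)
partially_covered = background & (bg_neighbours > 0) & (bg_neighbours < 8)
print(completely_covered.sum(), partially_covered.sum())
The "completely covered" mask is essentially the erosion of the background mask with a full 3x3 footprint (e.g. skimage.morphology.binary_erosion), so that would be an alternative if an extra SciPy dependency is unwanted.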
Initially, I tried implementing the OpenCV Basic Drawing example in Rust using the Rust OpenCV bindings (crate opencv 0.48.0). However, I got stuck.
I want to draw a closed polygon with opencv::imgproc::polylines.
The vertices of the polygon are given by an array of two-dimensional Cartesian coordinates.
I need to pass these points to the 2nd argument of the function which is of type &dyn opencv::core::ToInputArray.
This is where I struggle. How do I convert the array of vertices to an argument of type opencv::core::ToInputArray?
let pts = [[100, 50], [50, 150], [150, 150]];
imgproc::polylines(
    &mut image,
    ???, // <-- "pts" has to go here
    true,
    core::Scalar::from([0.0, 0.0, 255.0, 255.0]),
    1, 8, 0).unwrap();
Minimal example
use opencv::{core, imgproc, highgui};

fn main() {
    let mut image: core::Mat = core::Mat::new_rows_cols_with_default(
        200, 200, core::CV_8UC4, core::Scalar::from([0.0, 0.0, 0.0, 0.0])).unwrap();

    // draw yellow quad
    imgproc::rectangle(
        &mut image, core::Rect { x: 50, y: 50, width: 100, height: 100 },
        core::Scalar::from([0.0, 255.0, 255.0, 255.0]), -1, 8, 0).unwrap();

    // should draw red triangle -> causes error (of course)
    /*
    let pts = [[100, 50], [50, 150], [150, 150]];
    imgproc::polylines(
        &mut image,
        &pts,
        true,
        core::Scalar::from([0.0, 0.0, 255.0, 255.0]),
        1, 8, 0).unwrap();
    */

    highgui::imshow("", &image).unwrap();
    highgui::wait_key(0).unwrap();
}
[dependencies]
opencv = {version = "0.48.0", features = ["buildtime-bindgen"]}
I found the solution with the help of the comment from @kmdreko.
The vertices can be defined with an opencv::types::VectorOfPoint, which implements the opencv::core::ToInputArray trait:
let pts = types::VectorOfPoint::from(vec![
    core::Point { x: 100, y: 50 },
    core::Point { x: 50, y: 150 },
    core::Point { x: 150, y: 150 },
]);
Complete example:
use opencv::{core, types, imgproc, highgui};

fn main() {
    let mut image: core::Mat = core::Mat::new_rows_cols_with_default(
        200, 200, core::CV_8UC4, core::Scalar::from([0.0, 0.0, 0.0, 0.0])).unwrap();

    let pts = types::VectorOfPoint::from(vec![
        core::Point { x: 100, y: 50 },
        core::Point { x: 50, y: 150 },
        core::Point { x: 150, y: 150 },
    ]);

    imgproc::polylines(
        &mut image,
        &pts,
        true,
        core::Scalar::from([0.0, 0.0, 255.0, 255.0]),
        1, 8, 0).unwrap();

    highgui::imshow("", &image).unwrap();
    highgui::wait_key(0).unwrap();
}
I need help creating a function to convert three angles (yaw, pitch and roll, in degrees) to six float variables.
How would I go about making a function that outputs these floats?
{0, 0, 0} = {1, 0, 0, -0, -0, 1}
{45, 0, 0} = {0.70710676908493, 0.70710676908493, 0, -0, -0, 1}
{0, 90, 0} = {-4.3711388286738e-08, 0, 1, -1, 0, -4.3711388286738e-08}
{0, 0, 135} = {1, -0, 0, -0, -0.70710676908493, -0.70710676908493}
{180, 180, 0} = {1, -8.7422776573476e-08, 8.7422776573476e-08, 8.7422776573476e-08, 0, -1}
{225, 0, 225} = {-0.70710682868958, 0.5, 0.5, -0, 0.70710670948029, -0.70710682868958}
{270, 270, 270} = {1.4220277639103e-16, -2.3849761277006e-08, 1, 1, 1.1924880638503e-08, 1.42202776319103e-16}
{315, 315, 315} = {0.5, -0.85355341434479, 0.14644680917263, 0.70710688829422, 0.5, 0.5}
More examples, requested by Egor Skriptunoff:
{10, 20, 30} = {0.92541658878326, -0.018028322607279, 0.37852230668068, -0.34202012419701, -0.46984630823135, 0.81379765272141}
{10, 30, 20} = {0.85286849737167, -0.0052361427806318, 0.52209949493408, -0.5, -0.29619812965393, 0.81379765272141}
{20, 10, 30} = {0.92541658878326, 0.21461015939713, 0.3123245537281, -0.17364817857742, -0.49240386486053, 0.85286849737167}
{20, 30, 10} = {0.81379765272141, 0.25523611903191, 0.52209949493408, -0.5, -0.15038372576237, 0.85286849737167}
{30, 10, 20} = {0.85286849737167, 0.41841205954552, 0.3123245537281, -0.17364817857742, -0.33682405948639, 0.92541658878326}
{30, 20, 10} = {0.81379765272141, 0.4409696161747, 0.37852230668068, -0.34202012419701, -0.16317591071129, 0.92541658878326}
The code I currently have can calculate all of the floats except the 2nd and 3rd.
function convert_rotations(Yaw, Pitch, Roll)
    return {
        math.cos(math.rad(Yaw))*math.cos(math.rad(Pitch)),
        0,
        0,
        math.sin(math.rad(Pitch))*-1,
        math.sin(math.rad(Roll))*math.cos(math.rad(Pitch))*-1,
        math.cos(math.rad(Roll))*math.cos(math.rad(Pitch))
    }
end
I cannot seem to find the correct math for the 2nd and 3rd floats when all angles are nonzero, but I did come up with this:
-- The second float when the Yaw is 0 degrees
math.sin(math.rad(Pitch))*math.sin(math.rad(Roll))*-1
-- The second float when the Pitch is 0 degrees
math.sin(math.rad(Yaw))*math.cos(math.rad(Roll))
-- The second float when the Roll is 0 degrees
math.sin(math.rad(Yaw))*math.sin(math.rad(Pitch))
And for the 3rd float I came up with this:
-- The third float when Yaw is 0 degrees
math.sin(math.rad(Pitch))*math.cos(math.rad(Roll))
-- The third float when Pitch is 0 degrees
math.sin(math.rad(Yaw))*math.sin(math.rad(Roll))
-- The third float when Roll is 0 degrees
math.cos(math.rad(Yaw))*math.sin(math.rad(Pitch))
local function Rotate(X, Y, alpha)
    local c, s = math.cos(math.rad(alpha)), math.sin(math.rad(alpha))
    local t1, t2, t3 = X[1]*s, X[2]*s, X[3]*s
    X[1], X[2], X[3] = X[1]*c+Y[1]*s, X[2]*c+Y[2]*s, X[3]*c+Y[3]*s
    Y[1], Y[2], Y[3] = Y[1]*c-t1, Y[2]*c-t2, Y[3]*c-t3
end

local function convert_rotations(Yaw, Pitch, Roll)
    local F, L, T = {1,0,0}, {0,1,0}, {0,0,1}
    Rotate(F, L, Yaw)
    Rotate(F, T, Pitch)
    Rotate(T, L, Roll)
    return {F[1], -L[1], -T[1], -F[3], L[3], T[3]}
end
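A quick way to sanity-check this against the sample values from the question (assuming convert_rotations above is in scope):
-- e.g. {45, 0, 0} should print roughly 0.7071, 0.7071, 0, 0, 0, 1
-- (signs of zero may differ from the question's output)
local r = convert_rotations(45, 0, 0)
print(table.concat(r, ", "))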
Can someone please help me with the following code:
gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
mat4.ortho(pMatrix, 0, gl.viewportWidth, 0, gl.viewportHeight, 0.1, 100);
mat4.identity(mvMatrix);
mat4.lookAt(mvMatrix, [0, 0, -40], [0, 0, 0], [0, 1, 0]);
Full source http://jsfiddle.net/bepa/2QXkp/
I'm trying to display a cube with an orthographic camera, but all I see is black. The cube should be at (0, 0, 0), the camera at (0, 0, -40), looking at (0, 0, 0).
For all matrix transformations I use gl-matrix 2.2.0.
EDIT:
This works fine:
mat4.perspective(pMatrix, 45, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0);
mat4.identity(mvMatrix);
mat4.lookAt(mvMatrix, [0, 40, -40], [0, 0, 0], [0, 1, 0]);
mat4.rotate(mvMatrix, mvMatrix, degToRad(45), [0, 1, 0]);
This doesn't work:
mat4.ortho(pMatrix, 0, gl.viewportWidth, 0, gl.viewportHeight, 0.1, 100);
mat4.identity(mvMatrix);
mat4.lookAt(mvMatrix, [0, 40, -40], [0, 0, 0], [0, 1, 0]);
mat4.rotate(mvMatrix, mvMatrix, degToRad(45), [0, 1, 0]);
mat4.ortho(pMatrix, -1.0, 1.0, -1.0, 1.0, 0.1, 100);
Gives a result that is not black ;)
The documentation of mat4.ortho():
/**
 * Generates a orthogonal projection matrix with the given bounds
 *
 * @param {mat4} out mat4 frustum matrix will be written into
 * @param {number} left Left bound of the frustum
 * @param {number} right Right bound of the frustum
 * @param {number} bottom Bottom bound of the frustum
 * @param {number} top Top bound of the frustum
 * @param {number} near Near bound of the frustum
 * @param {number} far Far bound of the frustum
 * @returns {mat4} out
 */
mat4.ortho = function (out, left, right, bottom, top, near, far) {
The width and height of the canvas are not needed for an ortho projection; the bounds are given in world-space units, not pixels. But I'm not familiar enough with projection matrices to give you an in-depth explanation why.
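My understanding (take this as a sketch, not a definitive fix) is that with ortho(0, width, 0, height, ...) the visible box is hundreds of units wide and starts at x = 0, y = 0, so the 2-unit cube at the origin ends up as a tiny sliver at the edge of the viewport. Symmetric bounds a few units wide around the origin, scaled by the canvas aspect ratio, should keep it visible:
// Sketch only: ortho bounds in world units, symmetric around the origin and
// corrected for the canvas aspect ratio, instead of 0..width / 0..height pixels.
var halfHeight = 2.0;  // half the visible height in world units (assumed value)
var aspect = gl.viewportWidth / gl.viewportHeight;
mat4.ortho(pMatrix, -halfHeight * aspect, halfHeight * aspect,
           -halfHeight, halfHeight, 0.1, 100);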