Can Forio Contour be used to plot points on a sphere so that the sphere can be rotated and zoomed? Or do I need to do this in d3.js, or possibly some Julia package? I would like to integrate this into a Forio Epicenter project and also make it interactive with the underlying data.
I'm not exactly sure what you mean by 'plot points on a sphere'; in any case, a 'sphere chart' is not part of the base functionality of Contour, but you can write an extension that does what you want. One important point is that Contour (and d3 in general) has native support for 2D shapes but not 3D shapes, so you'd have to implement projecting the sphere into 2D screen space yourself.
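To give you an idea of what that projection involves, here's the bare math (a minimal sketch, not tied to Contour): a point at longitude λ and latitude φ on a sphere of radius ρ is

$$p = (\rho\cos\varphi\cos\lambda,\; \rho\cos\varphi\sin\lambda,\; \rho\sin\varphi)$$

Rotating the sphere means applying a rotation matrix R to p (driven by your drag interaction); an orthographic projection then just drops the rotated z component, i.e. screen = (c_x + x', c_y - y'), and you draw only the points with z' >= 0 so the back hemisphere stays hidden. Zooming is simply a scale on ρ.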
If you can tell me a bit more about what you're trying to do, maybe I can be of more help. In the meantime, here's a simple example of an extension that plots points on a 2D circle (the data is a list of angles in this case):
http://jsfiddle.net/tmzsudv5/
Contour.export('round', function (data, layer, options) {
    var r = 100;
    var theta = Math.PI / 180; // degrees-to-radians factor
    var centerX = options.chart.width / 2;
    var centerY = options.chart.height / 2;
    layer.selectAll('circle').data(data[0].data).enter()
        .append('circle')
        .attr('class', 'dot')
        .attr('r', 1)
        .attr('cx', function (d) { return r * Math.cos(d.y * theta) + centerX; })
        .attr('cy', function (d) { return r * Math.sin(d.y * theta) + centerY; });
});
var data = _.range(100).map(function (n) { return Math.floor(Math.random() * 360); });

new Contour({ el: '.chart' })
    .round(data)
    .render();
Related
I would like to do something similar to a vignette, but instead of darkening the edges, I would like to fade the edges out with a transparent gradient. I am not looking for a solution using widgets.
Any idea how I should modify the code below? Or is it even the same thing at all? I am sorry, but I really badly need this.
Thanks a lot.
Note: The code below is from the Image library
https://github.com/brendan-duncan/image
Image vignette(Image src, {num start = 0.3, num end = 0.75, num amount = 0.8}) {
  final h = src.height - 1;
  final w = src.width - 1;
  final num invAmt = 1.0 - amount;
  final p = src.getBytes();
  for (var y = 0, i = 0; y <= h; ++y) {
    final num dy = 0.5 - (y / h);
    for (var x = 0; x <= w; ++x, i += 4) {
      final num dx = 0.5 - (x / w);
      num d = sqrt(dx * dx + dy * dy);
      d = _smoothStep(end, start, d);
      p[i] = clamp255((amount * p[i] * d + invAmt * p[i]).toInt());
      p[i + 1] = clamp255((amount * p[i + 1] * d + invAmt * p[i + 1]).toInt());
      p[i + 2] = clamp255((amount * p[i + 2] * d + invAmt * p[i + 2]).toInt());
    }
  }
  return src;
}
num _smoothStep(num edge0, num edge1, num x) {
  x = ((x - edge0) / (edge1 - edge0));
  if (x < 0.0) {
    x = 0.0;
  }
  if (x > 1.0) {
    x = 1.0;
  }
  return x * x * (3.0 - 2.0 * x);
}
Solution
This code works without any widgets. In fact, it doesn't use any of the Flutter libraries at all; it is based solely on Dart and the image package you introduced in your question.
The code contains comments which may not make a lot of sense until you read the explanation further down. The code is as follows:
vignette.dart
import 'dart:isolate';
import 'dart:math' as math;
import 'dart:typed_data';

import 'package:image/image.dart';

class VignetteParam {
  final Uint8List file;
  final SendPort sendPort;
  final double fraction;

  VignetteParam(this.file, this.sendPort, this.fraction);
}

void decodeIsolate(VignetteParam param) {
  Image image = decodeImage(param.file.buffer.asUint8List())!;
  Image crop = copyCropCircle(image);    // crop the image with the builtin function
  int r = crop.height ~/ 2;              // radius is half the height
  int rs = r * r;                        // radius squared
  int tr = (param.fraction * r).floor(); // apply the fraction to the radius to get tr
  int ors = (r - tr) * (r - tr);         // ors from the diagram
  x:
  for (int x = -r; x <= r; x++) {        // iterate across all columns of pixels after shifting x by -r
    for (int y = -r; y <= r; y++) {      // iterate across all rows of pixels after shifting y by -r
      int pos = x * x + y * y;           // which is (r')² (see diagram)
      if (pos <= rs) {                   // if we are inside the outer circle, then ...
        if (pos > ors) {                 // if we are outside the inner circle, then ...
          double f = (rs - pos) / (rs - ors); // calculate the fraction of the alpha value
          int c = setAlpha(
            crop.getPixelSafe(x + r, y + r),
            (0xFF * f).floor(),
          );                             // calculate the new color by changing the alpha
          crop.setPixelSafe(x + r, y + r, c); // set the new color
        } else {                         // if we reach the inner circle from the top, jump down
          y = y * -1;
        }
      } else {                           // if we are outside the outer circle and ...
        if (y < 0) {                     // ... above it, jump down to the outer circle
          y = -(math.sin(math.acos(x / r)) * r).floor() - 1;
        } else {
          continue x;                    // ... or if beneath it, jump to the next column
        }
      }
    }
  }
  param.sendPort.send(crop);
}
Future<Uint8List> getImage(Uint8List bytes, double radiusFraction) async {
  var receivePort = ReceivePort();
  await Isolate.spawn(
    decodeIsolate,
    VignetteParam(bytes, receivePort.sendPort, radiusFraction),
  );
  Image image = await receivePort.first;
  return encodePng(image) as Uint8List;
}
main.dart (example app)
import 'dart:typed_data';

import 'package:flutter/material.dart';
import 'package:flutter/services.dart' show ByteData, rootBundle;
import 'package:flutter_playground/vignette.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Material App',
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Material App Bar'),
        ),
        body: Center(
          child: FutureBuilder<Uint8List>(
            future: imageFuture(),
            builder: (context, snapshot) {
              switch (snapshot.connectionState) {
                case ConnectionState.done:
                  return Image.memory(snapshot.data!);
                default:
                  return CircularProgressIndicator();
              }
            },
          ),
        ),
      ),
    );
  }

  Future<Uint8List> imageFuture() async {
    // Load your file here and call getImage
    ByteData byteData = await rootBundle.load("assets/image.jpeg");
    return getImage(byteData.buffer.asUint8List(), 0.3);
  }
}
Explanation
The math behind this algorithm is very simple. It's only based on the equation of a circle. But first of all, have a look at this diagram:
[Diagram: the image square with two concentric circles - an outer circle of radius r and an inner circle of radius r-tr; the ring of width tr between them is the transparent area. rs = r² and ors = (r-tr)² are the squared radii.]
The diagram shows the square which is our image. Inside the square we can see the circle which is the visible part of the image. The opaque area is fully opaque, while the transparent area gets more transparent (= less alpha) the closer we get to the outer circle. r (radius) is the radius of the outer circle, and tr (transparent radius) is the part of the radius which lies in the transparent area. That's why r-tr is the radius of the inner circle.
In order to apply this diagram to our needs, we have to shift our x-axis and y-axis. The image package has a grid which starts with (0,0) at the top left corner, but we need (0,0) at the center of the image, hence we shift it by the radius r. If you look closely you may notice that our diagram doesn't look as usual: the y-axis usually points up, but in our case it really doesn't matter, and it makes things easier.
Calculation of the position
We need to iterate across all pixels inside the transparent area and change their alpha value. The equation of a circle, x'² + y'² = r'², helps us figure out whether a pixel is in the transparent area. (x', y') is the coordinate of the pixel and r' is the distance between the pixel and the origin. If that distance lies beyond the inner circle but within the outer circle, then r' <= r and r' > r-tr must hold. In order to avoid calculating square roots, we compare the squared values instead, so pos <= rs and pos > ors must hold. (Look at the diagram to see what pos, rs, and ors are.)
Changing the alpha value
The pixel is inside the transparent area. How do we change the alpha value? We need a linear transparency gradient, so we calculate the distance between the actual position pos and the outer circle, which is rs-pos. The longer the distance, the more alpha we need for this pixel. The total distance is rs-ors, so we calculate (rs-pos) / (rs-ors) to get the fraction f of our alpha value. Finally we scale the alpha value of this pixel by f.
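Written as a formula, the new alpha for a pixel in the transparent ring is

$$\alpha_{\text{new}} = 255 \cdot f = 255 \cdot \frac{rs - pos}{rs - ors}, \qquad ors < pos \le rs,$$

which falls linearly (in the squared distance pos) from 255 at the inner circle to 0 at the outer circle.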
Optimization
This algorithm actually does the whole job, but we can optimize it. I wrote that we have to iterate across all pixels inside the transparent area. So we don't need the pixels outside of the outer circle or inside the inner circle, as their alpha values don't change. But we have two for loops iterating through all pixels from left to right and from top to bottom. Well, when we reach the inner circle from outside, we can jump down by negating the y-position. Hence, we set y = y*-1; as soon as we are inside the outer circle (pos <= rs) but no longer outside the inner circle (pos <= ors).
What if we are above the outer circle (pos > rs)? Then we can jump down to the outer circle by calculating the y-position with sine and arccosine. I won't go into much detail here, but if you want further explanation, let me know by commenting below. The if (y < 0) just determines whether we are above the outer circle (y is negative) or beneath it. If above, we jump down; if beneath, we jump to the next column of pixels, hence we 'continue' the x for loop.
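For those who do want the detail: the jump target is just the circle equation solved for y. On the outer circle,

$$x^2 + y^2 = r^2 \;\Rightarrow\; y = \pm\sqrt{r^2 - x^2} = \pm\, r\sin(\arccos(x/r)),$$

and coming from above means taking the negative root; the extra -1 in the code steps one pixel above the circle so that the loop's ++y lands back on the boundary.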
Here you go - the widgetless approach based on my previous answer:
Canvas(PictureRecorder()).drawImage(
    your_image,  // the image you want to change
    Offset.zero, // the offset from the corner of the canvas
    Paint()
      ..shader = RadialGradient(
        radius: needed_radius, // the radius of the resulting gradient - it should depend on the image dimensions
        colors: const [Colors.black, Colors.transparent],
      ).createShader(
        Rect.fromLTRB(0, 0, your_image_width, your_image_height), // the portion of the image the shader covers - in this case, the whole image
      )
      ..blendMode = BlendMode.dstIn); // dstIn makes the black of the gradient act as an alpha mask
I will add it to the previous answer as well.
Given that you want transparency, you need the alpha channel. The vignette code you posted steps 4 bytes per pixel (i += 4) but only ever modifies the first three (RGB); the alpha channel is left untouched.
A solution may be:
Make sure the image actually has an alpha channel, i.e. 4 bytes per pixel (RGBA).
Instead of modifying RGB to make the pixel darker (p[i] = ..., p[i+1] = ..., p[i+2] = ...), leave RGB unchanged and make the alpha value smaller instead, e.g. p[i+3] = ... (assuming RGBA layout rather than ARGB).
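To make that concrete, here is a minimal sketch of the modified loop written as plain C++ over an interleaved RGBA byte buffer; the layout, parameter defaults, and smoothstep falloff mirror the Dart code above, and the function names are mine:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Same smootherstep falloff as _smoothStep in the Dart code above.
static double smoothStep(double edge0, double edge1, double x) {
    x = std::clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0);
    return x * x * (3.0 - 2.0 * x);
}

// Fades the alpha channel toward the edges; RGB is left unchanged.
// pixels: interleaved RGBA, 4 bytes per pixel, row-major.
void vignetteAlpha(std::uint8_t* pixels, int width, int height,
                   double start = 0.3, double end = 0.75) {
    const int w = width - 1, h = height - 1;
    for (int y = 0, i = 0; y <= h; ++y) {
        const double dy = 0.5 - double(y) / h;
        for (int x = 0; x <= w; ++x, i += 4) {
            const double dx = 0.5 - double(x) / w;
            const double d = std::sqrt(dx * dx + dy * dy);
            const double f = smoothStep(end, start, d); // 1 at the center, 0 at the edge
            pixels[i + 3] = static_cast<std::uint8_t>(pixels[i + 3] * f); // touch only alpha
        }
    }
}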
I am trying to project a given 3D point onto the image plane. I have posted many questions about this and many people have helped me, and I have read many related links, but the projection still doesn't work correctly for me.
I have a 3D point (-455,-150,0), where x is the depth axis, z is the upwards axis, and y is the horizontal one. I have roll (rotation around the front-to-back axis, x), pitch (rotation around the side-to-side axis, y), and yaw (rotation around the vertical axis, z). I also have the position of the camera: (x,y,z) = (-50,0,100). So I am doing the following.
First I go from world coordinates to camera coordinates using the extrinsic parameters:
double pi = 3.14159265358979323846;
double yp = 0.033716827630996704* pi / 180; //roll
double thet = 67.362312316894531* pi / 180; //pitch
double k = 89.7135009765625* pi / 180; //yaw
double rotxm[9] = { 1,0,0,0,cos(yp),-sin(yp),0,sin(yp),cos(yp) };
double rotym[9] = { cos(thet),0,sin(thet),0,1,0,-sin(thet),0,cos(thet) };
double rotzm[9] = { cos(k),-sin(k),0,sin(k),cos(k),0,0,0,1};
cv::Mat rotx = Mat{ 3,3,CV_64F,rotxm };
cv::Mat roty = Mat{ 3,3,CV_64F,rotym };
cv::Mat rotz = Mat{ 3,3,CV_64F,rotzm };
cv::Mat rotationm = rotz * roty * rotx; //rotation matrix
cv::Mat mpoint3(1, 3, CV_64F, { -455,-150,0 }); //the 3D point location
mpoint3 = mpoint3 * rotationm; //rotation
cv::Mat position(1, 3, CV_64F, {-50,0,100}); //the camera position
mpoint3=mpoint3 - position; //translation
Now I want to move from camera coordinates to image coordinates.
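For reference, the standard pinhole model I am trying to follow is, as I understand it,

$$\mathbf{p}_c = R\,(\mathbf{p}_w - \mathbf{C}), \qquad u = f_x\,\frac{x_c}{z_c} + c_x, \qquad v = f_y\,\frac{y_c}{z_c} + c_y,$$

where p_w is the 3D world point, C is the camera position, and f_x, f_y, c_x, c_y come from the camera matrix.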
The first solution, which I read in some sources, was:
Mat myimagepoint3 = mpoint3 * mycameraMatrix;
this didn't work
the second solution was:
double fx = cameraMatrix.at<double>(0, 0);
double fy = cameraMatrix.at<double>(1, 1);
double cx1 = cameraMatrix.at<double>(0, 2);
double cy1 = cameraMatrix.at<double>(1, 2);
double xt = mpoint3.at<double>(0) / mpoint3.at<double>(2);
double yt = mpoint3.at<double>(1) / mpoint3.at<double>(2);
double u = xt * fx + cx1;
double v = yt * fy + cy1;
but also didn't work
I also tried to use the OpenCV method fisheye::projectPoints (from world to image coordinates):
Mat recv2;
cv::Rodrigues(rotationm, recv2);
//inputpoints a vector contains one point which is the 3d world coordinate of the point
//outputpoints a vector to store the output point
cv::fisheye::projectPoints(inputpoints,outputpoints,recv2,position,mycameraMatrix,mydiscoff );
but this also didn't work.
By "didn't work" I mean: I know where the point should appear in the image, but when I draw it, it always ends up somewhere else (not even close); sometimes I even get negative values.
Note: there are no syntax errors or exceptions, but I may have made typos while writing the code here.
So can anyone suggest what I am doing wrong?
I just started learning Metal and can best show you my frustration with the following series of screenshots. From top to bottom we have:
(1) My model where the model matrix is the identity matrix
(2) My model rotated 60 deg about the x axis with orthogonal projection
(3) My model rotated 60 deg about the y axis with orthogonal projection
(4) My model rotated 60 deg about the z axis
So I use the following function for conversion into normalized device coordinates:
- (CGPoint)normalizedDevicePointForViewPoint:(CGPoint)point
{
    CGPoint p = [self convertPoint:point toCoordinateSpace:self.window.screen.fixedCoordinateSpace];
    CGFloat halfWidth = CGRectGetMidX(self.window.screen.bounds);
    CGFloat halfHeight = CGRectGetMidY(self.window.screen.bounds);
    CGFloat px = ( p.x - halfWidth ) / halfWidth;
    CGFloat py = ( p.y - halfHeight ) / halfHeight;
    return CGPointMake(px, -py);
}
The following rotates and orthogonally projects the model:
- (matrix_float4x4)zRotation
{
    self.rotationZ = M_PI / 3;
    const vector_float3 zAxis = { 0, 0, 1 };
    const matrix_float4x4 zRot = matrix_float4x4_rotation(zAxis, self.rotationZ);
    const matrix_float4x4 modelMatrix = zRot;
    return matrix_multiply( matrix_float4x4_orthogonal_projection_on_z_plane(), modelMatrix );
}
As you can see, when I use the exact same method for rotating about the other two axes, it looks fine - not distorted. What am I doing wrong? Is there some sort of scaling/aspect-ratio thing I should be setting somewhere? What could it be? I've been staring at this for an embarrassingly long time, so any help or ideas that can lead me in the right direction are much appreciated. Thank you in advance.
There's nothing wrong with your rotation or projection matrices. The visual oddity arises from the fact that you move your vertices into NDC space prior to rotation. A rectangle doesn't preserve its aspect ratio when rotating in NDC space, because the mapping from NDC back to screen coordinates is not 1:1.
I would recommend not working in NDC until the very end of the vertex pipeline (i.e., pass vertices into your vertex function in "world" space, and out to the rasterizer as NDC). You can do this with a classic construction of the orthographic projection matrix that scales and biases the vertices, correctly accounting for the non-square aspect ratio of window coordinates.
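For illustration, here is a sketch of such a matrix using Apple's simd types (Metal clips z to [0, 1]; the helper name and its parameters are mine, not from your code):

#include <simd/simd.h>

// Off-center orthographic projection for Metal (NDC z in [0, 1]).
// Pass the view's real dimensions (e.g. left = -w/2, right = w/2,
// bottom = -h/2, top = h/2) so the aspect ratio is preserved.
static matrix_float4x4 orthographic(float left, float right,
                                    float bottom, float top,
                                    float nearZ, float farZ)
{
    matrix_float4x4 m = matrix_identity_float4x4;
    m.columns[0] = simd_make_float4(2.0f / (right - left), 0, 0, 0);
    m.columns[1] = simd_make_float4(0, 2.0f / (top - bottom), 0, 0);
    m.columns[2] = simd_make_float4(0, 0, 1.0f / (farZ - nearZ), 0);
    m.columns[3] = simd_make_float4(-(right + left) / (right - left),
                                    -(top + bottom) / (top - bottom),
                                    -nearZ / (farZ - nearZ),
                                    1.0f);
    return m;
}

With vertices kept in world space until this final multiply, a rotation about any axis keeps its proportions on screen.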
I need the angular velocity expressed as a quaternion for updating the quaternion every frame with the following expression in OpenCV:
q(k)=q(k-1)*qwt;
My angular velocity is
Mat w; //1x3
I would like to obtain a quaternion form of the angles
Mat qwt; //1x4
I couldn't find information about this, any ideas?
If I understand properly, you want to pass from this axis-angle form to a quaternion.
As shown in the link, first you need to calculate the modulus of the angular velocity (multiplied by the delta t between frames), and then apply the formulas.
A sample function for this would be
// w is equal to angular_velocity * time_between_frames
void quatFromAngularVelocity(Mat& qwt, const Mat& w)
{
    const float x = w.at<float>(0);
    const float y = w.at<float>(1);
    const float z = w.at<float>(2);
    const float angle = sqrt(x*x + y*y + z*z); // modulus of the angular velocity
    if (angle > 0.0) // the formulas from the link
    {
        qwt.at<float>(0) = x * sin(angle/2.0f) / angle;
        qwt.at<float>(1) = y * sin(angle/2.0f) / angle;
        qwt.at<float>(2) = z * sin(angle/2.0f) / angle;
        qwt.at<float>(3) = cos(angle/2.0f);
    }
    else // avoid division by zero: return the identity quaternion
    {
        qwt.at<float>(0) = qwt.at<float>(1) = qwt.at<float>(2) = 0.0f;
        qwt.at<float>(3) = 1.0f;
    }
}
Almost every transformation regarding quaternions, 3D space, etc. is gathered at this website.
You will also find time derivatives for quaternions there.
I find the explanation of the physical meaning of a quaternion useful: it can be seen as an axis-angle form where
a = angle of rotation
x, y, z = axis of rotation.
Then the conversion uses:
q = cos(a/2) + i ( x * sin(a/2)) + j (y * sin(a/2)) + k ( z * sin(a/2))
It is explained thoroughly here.
Hope this helped to make it clearer.
One little trick to go with this and get rid of those cos and sin functions: the time derivative of a quaternion q(t) is
dq(t)/dt = 0.5 * x(t) * q(t)
where, if the angular velocity is {w0, w1, w2}, then x(t) is the pure quaternion {0, w0, w1, w2}. See section 10.5 of David H. Eberly's book for a proof.
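As an illustration, one first-order integration step under that identity might look like this in plain C++ (the struct and names are mine; the quaternion is stored as (x, y, z, w) with w the scalar part):

#include <cmath>

struct Quat { float x, y, z, w; }; // w is the scalar part

// Hamilton product a * b.
static Quat mul(const Quat& a, const Quat& b)
{
    return { a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y + a.y*b.w + a.z*b.x - a.x*b.z,
             a.w*b.z + a.z*b.w + a.x*b.y - a.y*b.x,
             a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
}

// One Euler step of dq/dt = 0.5 * x(t) * q(t), followed by renormalization.
static Quat integrate(Quat q, float w0, float w1, float w2, float dt)
{
    const Quat x = { w0, w1, w2, 0.0f }; // the pure quaternion {0, w0, w1, w2}
    const Quat dq = mul(x, q);           // x(t) * q(t)
    q.x += 0.5f * dt * dq.x;
    q.y += 0.5f * dt * dq.y;
    q.z += 0.5f * dt * dq.z;
    q.w += 0.5f * dt * dq.w;
    const float n = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    return { q.x / n, q.y / n, q.z / n, q.w / n }; // stay on the unit sphere
}

Renormalizing each step keeps the first-order error from drifting the quaternion off the unit sphere.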
I have the last two CGPoints from an array which contains the points of a line drawn by the user. I need to extend the line by a fixed distance at the same angle, so I first calculate the angle between the last two points with the help of the following code:
-(CGFloat)angleBetweenFirstPoint:(CGPoint)firstPoint ToSecondPoint:(CGPoint)secondPoint
{
    CGPoint diff = ccpSub(secondPoint, firstPoint);
    NSLog(@"difference point %f , %f", diff.x, diff.y);
    CGFloat res = atan2(diff.y, diff.x);
    /*if ( res < 0 )
    {
        res = (0.5 * M_PI) + res;
    }
    if ( dx<0 && dy>0 ) { // 2nd quadrant
        res += 0.5 * M_PI;
    } else if ( dx<0 && dy<0 ) { // 3rd quadrant
        res += M_PI;
    } else if ( dx>0 && dy<0 ) { // 4th quadrant
        res += M_PI + (0.5 * M_PI);
    }*/
    //res = res*180/M_PI;
    res = CC_RADIANS_TO_DEGREES(res);
    return res;
}
After calculating the angle, I find the extended point with the following math:
-(void)extendLine
{
    lineAngle = [self angleBetweenFirstPoint:pointD ToSecondPoint:endPt];
    extendEndPt.x = endPt.x - cos(lineAngle) * 200;
    extendEndPt.y = endPt.y - sin(lineAngle) * 200;
    // draw line up to the extended point
}
But the point I am getting is not right for drawing the extended line at the same angle as the original line.
I think it is because I am not getting the right angle between those last points. What am I possibly doing wrong? Do I need to consider the whole quadrant system when computing the angle, and how? Also, I am working in landscape mode; does that make any difference?
Ye gods, you are doing this in a way that is WILDLY INCREDIBLY over-complicated.
Skip all of the crapola with angles. You don't need it. Period. Do it all with vectors, and very simple ones. First of all, I'll assume that you are given two points, P1 and P2. You wish to find a new point P3, that is a known distance (d) from P2, along the line that connects the two points.
All you need do is first, compute a vector that points along the line in question.
V = P2 - P1;
I've written it as if I am writing in MATLAB, but all this means is to subtract the x and y coordinates of the two points.
Next, scale the vector V to have unit length.
V = V/sqrt(V(1)^2 + V(2)^2);
Dividing the components of the vector V by the length (or 2-norm if you prefer) of that vector creates a vector with unit norm. That norm is just the square root of the sum of squares of the elements of V, so it is clearly the length of the vector.
Now it is simple to compute P3.
P3 = P2 + d*V;
P3 will lie at a distance of d units from P2, in the direction of the line away from point P1. Nothing sophisticated required. No angles computed. No worry about quadrants.
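If it helps, here is the same three-line recipe as a minimal C++ sketch (the names are mine; it assumes P1 and P2 are distinct so the length is nonzero):

#include <cmath>

struct Point { double x, y; };

// Returns the point at distance d beyond p2, along the direction from p1 to p2.
Point extendLine(Point p1, Point p2, double d)
{
    double vx = p2.x - p1.x;                 // V = P2 - P1
    double vy = p2.y - p1.y;
    const double len = std::sqrt(vx * vx + vy * vy);
    vx /= len;                               // scale V to unit length
    vy /= len;
    return { p2.x + d * vx, p2.y + d * vy }; // P3 = P2 + d*V
}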
Learn to use vectors. They are your friends, or at the least, they can be if you let them.