Plotly.NET chart not found - F#

My code is below. I am running this on a Chromebook (Debian) in VS Code.
The url the browser tries to load is file:///tmp/631a67c5-f214-4538-8f46-e74ea90ecb98.html
The browser does have access to the tmp directory, and I can load the list of files at file:///tmp/. It appears Plotly.NET doesn't put the file in the tmp directory.
What am I doing wrong?
#r "nuget: Plotly.NET, 2.0.0-preview.6"
open Plotly.NET
let x = [1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; ]
let y = [2.; 1.5; 5.; 1.5; 3.; 2.5; 2.5; 1.5; 3.5; 1.]
let line1 =
Chart.Line(
x,y,
Name="line",
ShowMarkers=true,
MarkerSymbol=StyleParam.Symbol.Square)
|> Chart.withLineStyle(Width=2.,Dash=StyleParam.DrawingStyle.Dot)
line1
|> Chart.Show;;


Set some of the Excel Chart Series values to #N/A using Excel-DNA or Excel.Interop

I am using ExcelDNA and Microsoft.Office.Interop.Excel to set some of the y values of a given Excel chart series to #N/A. (*)
This is what I am trying to achieve in VBA. This works as expected:
Sub test()
    Dim xdata As Variant, ydata As Variant
    Dim chrt As Chart
    Set chrt = ActiveWorkbook.Worksheets("Sheet1").ChartObjects("Chart 1").Chart
    With chrt
        xdata = Array(0, 1, 2, 3, 4, 5)
        ydata = Array(0, 10, 20, CVErr(xlErrNA), 40, 50) 'Works as expected
        .SeriesCollection(1).XValues = xdata
        .SeriesCollection(1).Values = ydata
    End With
End Sub
In F# I have tried the two following approaches, and neither worked:
module TEST =

    open Microsoft.Office.Interop.Excel
    open ExcelDna.Integration

    let series : Series =
        let app = ExcelDnaUtil.Application :?> Application
        let wks = (app.Sheets.Item "Sheet1") :?> Worksheet
        let cho = wks.ChartObjects("Chart 1") :?> ChartObject
        let ch = cho.Chart
        let s = ch.SeriesCollection(1) :?> Series
        s

    [<ExcelFunction(Category="Chart", Description="")>]
    let setYSeries1() : obj =
        let s = series
        s.XValues <- [| 0.0; 1.0; 2.0; 3.0; 4.0; 5.0 |]
        s.Values <- [| 0.0; 10.0; 20.0; 30.0; 40.0; 50.0 |] |> Array.map box // Works as expected.
        box "Done."

    [<ExcelFunction(Category="Chart", Description="")>]
    let setYSeries2() : obj =
        let s = series
        s.XValues <- [| 0.0; 1.0; 2.0; 3.0; 4.0; 5.0 |]
        s.Values <- [| box 0.0; box 10.0; box 20.0; box ExcelError.ExcelErrorNA; box 40.0; box 50.0 |] // y = 42 for x = 3, instead of y = #N/A
        box "Done."

    [<ExcelFunction(Category="Chart", Description="")>]
    let setYSeries3() : obj =
        let s = series
        s.XValues <- [| 0.0; 1.0; 2.0; 3.0; 4.0; 5.0 |]
        s.Values <- [| box 0.0; box 10.0; box 20.0; box (-2146826246); box 40.0; box 50.0 |] // y = -2146826246 for x = 3, instead of y = #N/A
        box "Done."
setYSeries1 is the base case without #N/A values. It works fine.
setYSeries2 was the natural way, using ExcelDNA's ExcelError.ExcelErrorNA enum, but #N/A is replaced by its enum value in the chart (y = 42).
I tried setYSeries3 after reading in this article that, internally, Excel uses integers to represent errors like #N/A (while it uses doubles to represent numbers), so I substituted the integer -2146826246 for the #N/A value. No luck either.
My question: what should I do to pass #N/A values to the series' .Values array?
(*) I need to set the series' .Values property via an array, rather than via a sheet range.
You need to convert the error value to a type that .NET will understand as a COM error type (like CVErr in VBA).
There is an Excel-DNA helper that maps the C API error enums to a COM error:
ExcelDna.Integration.ExcelErrorUtil.ToComError(ExcelError.ExcelErrorNA)
Internally this will do
new System.Runtime.InteropServices.ErrorWrapper(-2146826246)
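For example, setYSeries2 from the question could then be written as follows (a minimal sketch; setYSeries4 is just an illustrative name, and the rest mirrors the question's code):

[<ExcelFunction(Category="Chart", Description="")>]
let setYSeries4() : obj =
    let s = series
    s.XValues <- [| 0.0; 1.0; 2.0; 3.0; 4.0; 5.0 |]
    // Wrap the C API error enum as a COM error value (the VBA CVErr equivalent),
    // boxed so it fits in the obj array alongside the boxed numbers
    let na = ExcelErrorUtil.ToComError(ExcelError.ExcelErrorNA) |> box
    s.Values <- [| box 0.0; box 10.0; box 20.0; na; box 40.0; box 50.0 |]
    box "Done."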

Unhandled Exception: System.ArgumentOutOfRangeException: Schema mismatch for feature column 'Features': expected Vector<R4>, got Vector<R8>

I am trying to write a basic 'hello world' type program to predict the values of the XOR function. This is the error message I am getting:
Unhandled Exception: System.ArgumentOutOfRangeException: Schema mismatch for feature column 'Features': expected Vector<R4>, got Vector<R8>
Parameter name: inputSchema
This is my code:
open Microsoft.ML

type Sample = {
    X: float
    Y: float
    Result: float
}

let createSample x y result = { X = x; Y = y; Result = result }

let solveXOR() =
    let problem =
        [
            createSample 0.0 0.0 0.0
            createSample 1.0 0.0 1.0
            createSample 0.0 1.0 1.0
            createSample 1.0 1.0 0.0
        ]
    let context = MLContext()
    let data = context.Data.ReadFromEnumerable(problem)
    let pipeline =
        context.Transforms
            .Concatenate("Features", "X", "Y")
            .Append(context.Transforms.CopyColumns(inputColumnName = "Result", outputColumnName = "Label"))
            //.Append(context.Transforms.Conversion.MapKeyToVector("X"))
            //.Append(context.Transforms.Conversion.MapKeyToVector("Y"))
            .AppendCacheCheckpoint(context)
            .Append(context.Regression.Trainers.FastTree())
    let model = pipeline.Fit(data)
    let predictions = model.Transform(data)
    let metrics = context.BinaryClassification.Evaluate(predictions)
    printfn "Accuracy %f" metrics.Accuracy
Any pointers as to what I am doing wrong would be greatly appreciated.
It is complaining about the size of the floating-point values: Vector<R4> is a vector of 4-byte (32-bit) singles, while Vector<R8> is a vector of 8-byte doubles, and ML.NET expects the Features column to hold 32-bit values. A C# float is equivalent to an F# float32 and a C# double is equivalent to an F# float. So try replacing your float with float32 or single, and 0.0 with 0.0f.
A float32 is also called a single in F#:
C# float is equivalent to F# single or float32
C# double is equivalent to F# float or double
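Applied to the question's code, the record and sample data would look like this (a minimal sketch of just the changed parts):

type Sample = {
    X: float32
    Y: float32
    Result: float32
}

let createSample x y result = { X = x; Y = y; Result = result }

// The literals need the f suffix so they are float32 (single), not float (double)
let problem =
    [
        createSample 0.0f 0.0f 0.0f
        createSample 1.0f 0.0f 1.0f
        createSample 0.0f 1.0f 1.0f
        createSample 1.0f 1.0f 0.0f
    ]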

cropping an image with respect to coordinates selected

I have an input image like this:
Cropping at the red points is easy since they form a rectangle. How can I crop if red points 2, 3, 6 and 7 are moved to the green points dynamically? These points may change; how can I crop dynamically in the program?
The result may look like this:
I tried warpPerspective but I was unable to get the expected result.
The program was like this:
import matplotlib.pyplot as plt
import numpy as np
import cv2

img = cv2.imread('sudoku_result.png')
pts1 = np.float32([[100,60],[260,60],[100,180],[260,180],[100,300],[260,300]])
pts2 = np.float32([[20,60],[340,60],[60,180],[300,180],[100,300],[260,300]])
M = cv2.getPerspectiveTransform(pts1, pts2)
dst = cv2.warpPerspective(img, M, (360,360))

plt.subplot(121), plt.imshow(img), plt.title('Input')
plt.subplot(122), plt.imshow(dst), plt.title('Output')
plt.show()
I am new to image processing and would like to know which is the best method.
Crop the enclosing rectangle, the one created by (minX, minY, maxX, maxY). Then, for each pixel in the cropped image, check whether the point is inside the polygon created by the original points; for points outside the original shape, put zero.
The code:
import cv2
import numpy as np

# Read an image
I = cv2.imread('i.png')

# Define the polygon coordinates to use for the crop
polygon = [[[20,110],[450,108],[340,420],[125,420]]]

# First find the minX, minY, maxX and maxY of the polygon
minX = I.shape[1]
maxX = -1
minY = I.shape[0]
maxY = -1
for point in polygon[0]:
    x = point[0]
    y = point[1]
    if x < minX:
        minX = x
    if x > maxX:
        maxX = x
    if y < minY:
        minY = y
    if y > maxY:
        maxY = y

# Go over the points in the image; if they are outside of the enclosing
# rectangle put zero, if not check if they are inside the polygon or not
cropedImage = np.zeros_like(I)
for y in range(0, I.shape[0]):
    for x in range(0, I.shape[1]):
        if x < minX or x > maxX or y < minY or y > maxY:
            continue
        if cv2.pointPolygonTest(np.asarray(polygon), (x, y), False) >= 0:
            cropedImage[y, x, 0] = I[y, x, 0]
            cropedImage[y, x, 1] = I[y, x, 1]
            cropedImage[y, x, 2] = I[y, x, 2]

# Now we can crop again just the enclosing rectangle
finalImage = cropedImage[minY:maxY, minX:maxX]
cv2.imwrite('finalImage.png', finalImage)
The final image:
If you want to stretch the cropped image:
# Now stretch the polygon to a rectangle: the target points are the
# corners of the final (cropped) image
polygonStrecth = np.float32([[0,0],[finalImage.shape[1],0],[finalImage.shape[1],finalImage.shape[0]],[0,finalImage.shape[0]]])

# Shift the polygon coordinates into the cropped image's frame
polygonForTransform = np.zeros_like(polygonStrecth)
i = 0
for point in polygon[0]:
    x = point[0]
    y = point[1]
    newX = x - minX
    newY = y - minY
    polygonForTransform[i] = [newX, newY]
    i += 1

# Find the perspective transform
M = cv2.getPerspectiveTransform(np.asarray(polygonForTransform).astype(np.float32), np.asarray(polygonStrecth).astype(np.float32))

# Warp one image to the other
warpedImage = cv2.warpPerspective(finalImage, M, (finalImage.shape[1], finalImage.shape[0]))
cv2.imshow('a', warpedImage)
Looks like the coordinates you mentioned aren't accurate. So, tweaking the coordinates to match the shape and using the Cloudinary distort function complemented by custom shapes cropping, here's the result:
http://res.cloudinary.com/demo/image/fetch/e_distort:20:60:450:60:340:410:140:410,l_sample,fl_cutter,g_north_west/e_trim/http://i.stack.imgur.com/oGSKW.png
If you'd like play around with these Cloudinary functions, here are some samples:
http://cloudinary.com/blog/how_to_dynamically_distort_images_to_fit_your_graphic_design
http://cloudinary.com/cookbook/custom_shapes_cropping

publishing location using tf broadcast

I am having a little problem using tf::TransformListener with the following method call:
listener.lookupTransform("/base_footprint", "/odom", ros::Time(0), transform);
I get this error:
[ERROR] [1430761593.614566598, 10.000000000]: "base_footprint" passed to lookupTransform argument target_frame does not exist.
I thought it was because I had not used a tf broadcaster, but even with one the problem remains. What am I doing wrong?
The code for the listener:
tf::TransformListener listener;
ros::Rate rate(1.0);
listener.waitForTransform("/base_footprint", "/odom", ros::Time(0), ros::Duration(10.0));

tf::StampedTransform transform;
try
{
    listener.lookupTransform("/base_footprint", "/odom", ros::Time(0), transform);
    double x = transform.getOrigin().x();
    double y = transform.getOrigin().y();
    ROS_INFO("Current position: ( %f , %f)\n", x, y);
}
catch (tf::TransformException &ex)
{
    ROS_ERROR("%s", ex.what());
}
The code for the broadcaster:
ros::Time current_time, last_time;
tf::TransformBroadcaster odom_broadcaster;
double x = 0.0;
double y = 0.0;
double th = 0.0;
double vx = 0.1;
double vy = -0.1;
double vth = 0.1;
current_time = ros::Time::now();
double dt = (current_time - last_time).toSec();
double delta_x = (vx * cos(th) - vy * sin(th)) * dt;
double delta_y = (vx * sin(th) + vy * cos(th)) * dt;
double delta_th = vth * dt;
x += delta_x;
y += delta_y;
th += delta_th;
geometry_msgs::Quaternion odom_quat = tf::createQuaternionMsgFromYaw(th);
geometry_msgs::TransformStamped odom_trans;
odom_trans.header.stamp = current_time;
odom_trans.header.frame_id = "odom";
odom_trans.child_frame_id = "base_link";
odom_trans.transform.translation.x = x;
odom_trans.transform.translation.y = y;
odom_trans.transform.translation.z = 0.0;
odom_trans.transform.rotation = odom_quat;
//send the transform
odom_broadcaster.sendTransform(odom_trans);
last_time = current_time;
If this is the only tf publisher you are using (e.g. no joint_state_publisher or other publishers), I suggest you have a look at the tf tutorials, especially this one: robot setup.
As you can find here, lookupTransform(std::string &W, std::string &A, ros::Time &time, StampedTransform &transform) stores in transform the transformation which leads you from frame A to frame W.
In your example you are trying to get the transform from "/odom" to "/base_footprint", while the publisher is broadcasting the transform between "/base_link" and "/odom" (i.e. "/base_footprint" is never specified). It should be fine to use the same name in both places (e.g. both "/base_link" or both "/base_footprint", if they represent the same frame as I have supposed).
Also, be aware that in your publisher you are broadcasting the transformation from "/base_link" to "/odom" and not the opposite (as you might want).
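In other words, the listener has to ask for frames that are actually being broadcast. A minimal sketch, assuming the broadcaster above stays as it is (publishing "base_link" as the child frame):

// Look up the transform between the two frames that actually exist in the tree;
// alternatively, change the broadcaster's child_frame_id to "base_footprint"
// and keep the listener's original frame names.
listener.waitForTransform("/base_link", "/odom", ros::Time(0), ros::Duration(10.0));
listener.lookupTransform("/base_link", "/odom", ros::Time(0), transform);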
EDIT: if you are using a URDF model for your robot, please add it to your question or post the tf tree.
I suggest running the following command in your terminal to make sure your frames are correct and that they are being broadcast. When you run your listener you should be able to graphically see your /base_footprint transforming into /odom:
rosrun rqt_tf_tree rqt_tf_tree

Shader has strange input values

My current issue is the following: I am trying to create a SpriteBatch with integrated primitive rendering (SpriteBatch and PrimitiveBatch in one).
Currently, Visual Studio 11's graphics debugger shows that a line is rendered: the IA stage is correct, and so is the vertex shader stage. But I don't get any output on screen.
I stepped through it with the debugger and found that the shader receives abnormally huge values.
I'm passing two vertices, positioned at (20, 20) and (80, 80), both colored black.
The ConstantBuffer has three matrices: World (for manual object transformation), View and Projection.
In code, they're all correct:
M11 = 0.00215749722, and so on.
In the shader, it looks like this:
Position: x = 200000.000000000, y = 200000.000000000, z = 0.000000000, w = 0.000000000
(#1 vertex's position)
Projection[0]: x = 22.000000000, y = 0.000000000, z = 0.000000000, w = 0.000000000
Projection[1]: x = 0.000000000, y = -45.000000000, z = 0.000000000, w = 0.000000000
Projection[2]: x = 0.000000000, y = 0.000000000, z = 10000.000000000, w = 0.000000000
Projection[3]: x = 0.000000000, y = 0.000000000, z = 0.000000000, w = 10000.000000000
And finally the output result (position) is 0 for XYZW.
Before, it was 413, -918 or something like that. I don't know why, but now the result is always 0...
This is very strange, and I'm totally stuck :-/
Thanks
R
