How to assign a specific size to the marker in leaflet R

I am trying to plot spatial data using the leaflet package in R, and I want to change the location marker's default size. How can I assign a particular size to the marker in the R leaflet package when I am using the addAwesomeMarkers option?
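As far as I know, addAwesomeMarkers() does not expose a width/height argument (awesomeIcons() only controls the glyph, colour, and marker shape), so a common workaround is to switch to addMarkers() with a custom icon sized through makeIcon(). A minimal sketch; the icon URL, coordinates, and pixel sizes below are placeholder assumptions:

library(leaflet)

# makeIcon() lets you set the rendered marker size in pixels
sizedIcon <- makeIcon(
  iconUrl = "https://leafletjs.com/examples/custom-icons/leaf-green.png",  # placeholder image
  iconWidth = 25, iconHeight = 60,    # desired marker size in pixels
  iconAnchorX = 12, iconAnchorY = 60  # place the icon's tip on the point
)

leaflet() %>%
  addTiles() %>%
  addMarkers(lng = -0.12, lat = 51.5, icon = sizedIcon)  # placeholder location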

Related

Map data points onto an image in google sheets?

I'd like to be able to plot points onto this image automatically based on a data set in google sheets. Is this possible? If so, how do you do it?
Edit: For example, if I have the data
#1 - 7,8
#2 - 8,7
I'd like to plot those on the map like so:
First, I think I could have a stored table of the center pixel coordinates of each hex, then VLOOKUP the coordinates (e.g. 8,7) in that table to pull the pixel coordinates. Then, once I have pixel coordinates to plot on the image, I am just unsure of how to plot them.
I ended up making a scatterplot on top of it using vlookup to convert the hex coordinates above to x and y pixel coordinates on the image.
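For reference, the same lookup-then-overplot idea is easy to sketch outside of Sheets; here it is in base R, where merge() plays the role of VLOOKUP (the file name, hex coordinates, and pixel positions are made-up placeholders):

library(png)  # for readPNG(); install.packages("png") if needed

# Stored table: hex coordinate -> pixel coordinate of that hex's centre
hex_centres <- data.frame(
  hex_x = c(7, 8), hex_y = c(8, 7),
  px = c(310, 355), py = c(220, 190)
)

# Data points given in hex coordinates, joined to their pixel positions
pts <- merge(data.frame(hex_x = c(7, 8), hex_y = c(8, 7)), hex_centres)

# Draw the map image, then scatter the points on top of it
img <- readPNG("hex_map.png")
plot(NULL, xlim = c(0, ncol(img)), ylim = c(0, nrow(img)),
     asp = 1, axes = FALSE, xlab = "", ylab = "")
rasterImage(img, 0, 0, ncol(img), nrow(img))
points(pts$px, nrow(img) - pts$py, pch = 19, col = "red")  # flip y: pixel rows count downward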

Highlight points that are not placed in a plane

Is it possible to highlight all tracked points that are not placed on planes using Sceneform, just as the hello_ar sample does?

Package to identify dimensions of non-standard shape (image processing)

I'm trying to calculate the motion of dust particles, and I can't seem to find a package that can identify the dimensions of a non-standard shape (i.e. approximately the radius, if we assume the dust particle to be circular).
Say I had the following image and fed the package the location of the 'centre' of one of these particles; it would return some dimensions (shape, major-axis radius, etc.).
![dust](https://scontent-lht6-1.xx.fbcdn.net/v/t1.15752-9/47071656_339293956870649_3030152264015675392_n.png?_nc_cat=106&_nc_ht=scontent-lht6-1.xx&oh=b72ca130a01659d870f085ae8a5a0e87&oe=5CB1C1FF)
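The question does not name a language; as one concrete option, the Bioconductor package EBImage can threshold the image, label each particle, and report shape and moment features (mean/max radius, major-axis length, eccentricity). A minimal sketch, assuming a greyscale file dust.png with bright particles on a dark background (all names and numbers are placeholders):

# EBImage is installed with BiocManager::install("EBImage")
library(EBImage)

img  <- channel(readImage("dust.png"), "gray")  # placeholder file name
mask <- img > otsu(img)  # Otsu threshold; use "<" if particles are darker than the background
lab  <- bwlabel(mask)    # label connected components, one id per particle

shape  <- computeFeatures.shape(lab)   # s.area, s.radius.mean, s.radius.max, ...
moment <- computeFeatures.moment(lab)  # m.cx, m.cy, m.majoraxis, m.eccentricity, ...

# Given an approximate particle 'centre' (cx, cy), pick the nearest centroid
cx <- 120; cy <- 80
i  <- which.min((moment[, "m.cx"] - cx)^2 + (moment[, "m.cy"] - cy)^2)
shape[i, ]   # dimensions of that particle
moment[i, ]  # its orientation and axis lengths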

Mapping infrared images to color images in the RealSense Library

I currently use an Intel D435 camera.
I want to align the left-infrared camera and the color camera.
The align function provided by the RealSense library can only align depth and color.
I have heard that, on RealSense cameras, the left-infrared camera and the depth camera are already aligned.
However, I cannot map the infrared image to the color image with this information alone. The depth image can be mapped to the color image through the align function, so I wonder how I can fit the color image to the left-infrared image, which is aligned to the original depth stream by design.
----------------------------------------
[Realsense Customer Engineering Team Comment]
#Panepo
The align class used in the librealsense demos maps between depth and some other stream, and vice versa. We do not offer other forms of stream alignment.
But here is one suggestion for you to try. The mapping is basically a triangulation technique where we trace a pixel's intersection point in 3D space to find its origin in another frame; this method works properly when the source data is depth (Z16 format). One possible way to map between two non-depth streams is to play three streams (Depth + IR + RGB), calculate the UV map from depth to color, and then use this UV map to remap the IR frame (remember that depth and left IR are aligned by design).
Hope this suggestion gives you some ideas.
----------------------------------------
This is the method suggested by Intel Corporation.
Can you explain what it means to be able to solve the problem by creating a UV map from the depth and color images? And does the RealSense2 library have a UV map function?
I would appreciate your answer.
Yes, the Intel RealSense SDK 2.0 provides the PointCloud class.
So, you:
- configure the sensors
- start streaming
- obtain color and depth frames
- get the UV map as follows (C#):
var pointCloud = new PointCloud();
pointCloud.MapTexture(colorFrame);              // texture-map the cloud to the color stream
var points = pointCloud.Calculate(depthFrame);  // project every depth pixel into 3D

// one vertex and one texture coordinate per depth pixel
var vertices = new Points.Vertex[depthFrame.Height * depthFrame.Width];
var uvMap = new Points.TextureCoordinate[depthFrame.Height * depthFrame.Width];
points.CopyTo(vertices);
points.CopyTo(uvMap);
The uvMap you get is a normalized depth-to-color mapping.
NOTE: if depth is aligned to color, the sizes of the vertices and uvMap arrays are calculated from the color frame's width and height instead.

Google photo sphere

Does anyone have information on how to map the texture image of a Google Photo Sphere onto a sphere yourself (not using the Google API)? My goal is to do it myself in MATLAB, but I was unable to find any information about the mapping coordinates.
Thanks in advance,
Thomas
You can find details about the metadata of a Photo Sphere here:
https://developers.google.com/photo-sphere/metadata/
Essentially, the image uses an equirectangular projection [1], which you only need to map onto the inside of a sphere, with the camera placed at the center. If the Photo Sphere is a full 360/180-degree panorama, you can map the whole inside of the sphere. If it is only a partial panorama, you can use the metadata inside the photo to determine the exact position where the texture needs to be placed.
[1] https://en.wikipedia.org/wiki/Equirectangular_projection
Did you try using MATLAB's warp function?
a = imread('PANO.jpg');   % load the equirectangular panorama
[x,y,z] = sphere(200);    % coordinates of a unit sphere with 200x200 faces
warp(x,y,z,a);            % texture-map the panorama onto the sphere
Using the camera tools, you can set the camera position inside the sphere, but I think an outside view can be enough with an adequate level of zoom.
