How would I replicate the intro of the linked video in Manim?

https://www.youtube.com/watch?v=aVwxzDHniEw
I want to replicate the intro (Timestamp: 0:00 to 1:05) of the linked video in Manim. It doesn’t need to be exact; I just want the general flow of a curve gliding across the screen. How would I achieve this?

Here, I've left comments on what I'm doing.
from manim import *

class MovingScene(MovingCameraScene):
    def construct(self):
        # create a value tracker to move a dot at the end of the graph
        k = ValueTracker(0)
        # make a graph
        axes = Axes(x_range=[0, 10, 1])
        # plot a sine wave on the graph
        graph = axes.plot(lambda x: np.sin(x))
        # make a dot that will move along the graph
        dot = always_redraw(
            lambda: Dot().move_to(
                axes.c2p(k.get_value(), np.sin(k.get_value()))
            )
        )
        # add said dot to the scene, but not the graph, so we don't see the axes
        self.add(dot)
        # play everything at once so that all of the actions happen at the same time
        self.play(
            MoveAlongPath(self.camera.frame, graph),
            Create(graph),
            k.animate.set_value(10),
            run_time=10,
        )
        self.wait()
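For what it's worth, here is a minimal sketch of one possible variation (assuming the Manim Community edition): instead of MoveAlongPath on the camera frame, attach an updater so the frame stays glued to the moving dot while the curve is drawn:

from manim import *

class FollowingDot(MovingCameraScene):
    def construct(self):
        k = ValueTracker(0)
        axes = Axes(x_range=[0, 10, 1])
        graph = axes.plot(lambda x: np.sin(x))
        dot = always_redraw(
            lambda: Dot().move_to(
                axes.c2p(k.get_value(), np.sin(k.get_value()))
            )
        )
        # keep the camera frame centered on the dot for the whole animation
        self.camera.frame.add_updater(lambda frame: frame.move_to(dot.get_center()))
        self.add(dot)
        self.play(Create(graph), k.animate.set_value(10), run_time=10)
        self.camera.frame.clear_updaters()
        self.wait()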

Related

How to do InverseDynamics with a floating base robot?

I tried CalcInverseDynamics, but the returned tau is an 18-dimensional vector, 6 (floating base) + 12 (actuators), whereas it is supposed to be 12 (equal to the number of actuators). Is there an example of doing InverseDynamics with a floating-base robot using known_vdot and contact force trajectories?
I tried with the LittleDog.urdf model. My code is:
def DoID():
    legs = [plant.GetBodyByName("front_left_lower_leg"),
            plant.GetBodyByName("front_right_lower_leg"),
            plant.GetBodyByName("back_left_lower_leg"),
            plant.GetBodyByName("back_right_lower_leg")]
    contacts = [foot_frame[i].CalcPoseInBodyFrame(plant_context).translation()
                for i in range(4)]
    F_expected = np.array([0., 0., 0., 0., 0., 0.])
    forces = MultibodyForces(plant)
    # add the SpatialForce applied to each leg into MultibodyForces
    for i in range(4):
        legs[i].AddInForce(
            plant_context, p_BP_E=contacts[i],
            F_Bp_E=SpatialForce(tau=F_expected[:3], f=F_expected[3:]),
            frame_E=plant.world_frame(), forces=forces)
    nv = plant.num_velocities()
    vd_d = np.zeros(nv)
    tau = plant.CalcInverseDynamics(plant_context, vd_d, forces)
    return tau
Update: the CalcInverseDynamics API documentation writes:
tau = M(q)v̇ + C(q, v)v - tau_app - ∑ J_WBᵀ(q) Fapp_Bo_W
This should also work for the floating-base robot, with the same form as the equation from here (different notation, but the same equation). I hope that when the contact forces and the known_vdot (or qddot) are 'reasonable', the floating-base components of tau will become zero and the remaining components become the joint torque commands. I will use APIs like CalcMassMatrix, CalcBiasTerm and CalcGravityGeneralizedForces to get the individual terms.
After getting the joint commands, I will use a PD controller or some other controller to apply them to the robot. A complete solution to 'control a desired acceleration' may still need to formulate a QP like http://groups.csail.mit.edu/robotics-center/public_papers/Kuindersma13.pdf, but I will try the simpler way first.
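As a minimal sketch of assembling those terms by hand (reusing plant and plant_context from the code above, and leaving out the contact-Jacobian term, which depends on the contact trajectories):

M = plant.CalcMassMatrix(plant_context)                   # M(q)
Cv = plant.CalcBiasTerm(plant_context)                    # C(q, v) v
tau_g = plant.CalcGravityGeneralizedForces(plant_context) # tau_g(q)
vd_d = np.zeros(plant.num_velocities())                   # known_vdot

# tau = M(q) vdot + C(q, v) v - tau_g - sum_i J_WBi^T(q) F_i
tau_full = M @ vd_d + Cv - tau_g  # still missing the J^T F contact terms
# With consistent contact forces, tau_full[:6] (floating base) should be
# ~0 and tau_full[6:] would be the joint torque commands.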
My guess is that you are trying to find a controller that will (approximately) follow a desired acceleration of the entire state vector using only the actuators (for littledog, you have 12 actuators, but 19 positions / 18 velocities)?
In addition, with a legged robot like littledog, you have to think about the contact forces (and their friction cones).
The most common generalization of the inverse dynamics control for situations like this involves solving a quadratic program (using a linearization of the friction cone constraints). See for instance http://groups.csail.mit.edu/robotics-center/public_papers/Kuindersma13.pdf

Osmnx cannot find path between nodes in composed graph?

I am trying to use osmnx to find distances between an origin point (lat/lon) and the nearest infrastructure, such as railways, water or parks.
1) I get the entire graph from an area with network_type='walk'.
2) Get the needed infrastructure, e.g. railways, for that same area.
3) Compose the two graphs into one.
4) Find the nearest node to the origin point in the original graph.
5) Find the nearest node to the origin point in the infrastructure graph.
6) Find the shortest route length between the two nodes.
If you run the example below, you will see that it cannot find a route between the nodes for about 20% of the points. For infrastructure='way["leisure"~"park"]' or infrastructure='way["natural"~"wood"]' this is even worse, with 80-90% of the points not being connected.
Minimal reproducible example:
import osmnx as ox
import networkx as nx

bbox = [55.5267243, 55.8467243, 12.4100724, 12.7300724]
g = ox.graph_from_bbox(bbox[0], bbox[1], bbox[2], bbox[3],
                       retain_all=True,
                       truncate_by_edge=True,
                       simplify=False,
                       network_type='walk')
points = [(55.6790884456018, 12.568493971506154),
          (55.6790884456018, 12.568493971506154),
          (55.6867418740291, 12.58232314016353),
          (55.6867418740291, 12.58232314016353),
          (55.6867418740291, 12.58232314016353),
          (55.67119624894504, 12.587201455313153),
          (55.677406927839506, 12.57651997656002),
          (55.6856574907879, 12.590500429002823),
          (55.6856574907879, 12.590500429002823),
          (55.68465359365924, 12.585474365063224),
          (55.68153666806675, 12.582594757267945),
          (55.67796979175, 12.583111746311117),
          (55.68767346629932, 12.610040871066179),
          (55.6830855237578, 12.575431380892427),
          (55.68746749645466, 12.589488615911913),
          (55.67514254640597, 12.574308210656602),
          (55.67812748568291, 12.568454119053886),
          (55.67812748568291, 12.568454119053886),
          (55.6701733527419, 12.58989203029166),
          (55.677700136266616, 12.582800629527789)]
railway = ox.graph_from_bbox(bbox[0], bbox[1], bbox[2], bbox[3],
                             retain_all=True,
                             truncate_by_edge=True,
                             simplify=False,
                             network_type='walk',
                             infrastructure='way["railway"]')
g_rail = nx.compose(g, railway)
l_rail = []
for point in points:
    nearest_node = ox.get_nearest_node(g, point)
    rail_nn = ox.get_nearest_node(railway, point)
    if nx.has_path(g_rail, nearest_node, rail_nn):
        l_rail.append(nx.shortest_path_length(g_rail, nearest_node, rail_nn,
                                              weight='length'))
    else:
        l_rail.append(-1)
There are 2 things that caught my attention.
The OSMnx documentation specifies that the ox.graph_from_bbox parameters be given in the order north, south, east, west (https://osmnx.readthedocs.io/en/stable/osmnx.html). I mention this because when I tried to run your code, I was getting empty graphs.
The parameter retain_all=True is the key, as you may already know. When set to True, it retains all nodes in the graph, even if they are not connected to any of the other nodes. This happens primarily due to the incompleteness of OpenStreetMap, which contains voluntarily contributed geographic information. I suggest you set retain_all=False, meaning your graph will then contain only the connected nodes. In this way, you get a complete list without any -1.
I hope this helps.
g = ox.graph_from_bbox(bbox[1], bbox[0], bbox[3], bbox[2],
                       retain_all=False,
                       truncate_by_edge=True,
                       simplify=False,
                       network_type='walk')
railway = ox.graph_from_bbox(bbox[1], bbox[0], bbox[3], bbox[2],
                             retain_all=False,
                             truncate_by_edge=True,
                             simplify=False,
                             network_type='walk',
                             infrastructure='way["railway"]')
g_rail = nx.compose(g, railway)
l_rail = []
for point in points:
    nearest_node = ox.get_nearest_node(g, point)
    rail_nn = ox.get_nearest_node(railway, point)
    if nx.has_path(g_rail, nearest_node, rail_nn):
        l_rail.append(nx.shortest_path_length(g_rail, nearest_node, rail_nn,
                                              weight='length'))
    else:
        l_rail.append(-1)
print(l_rail)
Out[60]:
[7182.002999999995,
7182.002999999995,
5060.562000000002,
5060.562000000002,
5060.562000000002,
6380.099999999999,
7127.429999999996,
4707.014000000001,
4707.014000000001,
5324.400000000003,
6153.250000000002,
6821.213000000002,
8336.863999999998,
6471.305,
4509.258000000001,
5673.294999999996,
6964.213999999994,
6964.213999999994,
6213.673,
6860.350000000001]

Healpy plotting: How do I make a figure with subplots using the healpy.mollview projection?

I've just recently started trying to use healpy and I can't figure out how to make subplots to contain my maps. I have a thermal emission map of a planet as a function of time and I need to look at it at several moments in time (let's say 9 different times) and superimpose some coordinates, to check that my planet is rotating the right way.
So far, I can do two things:
1) Make 9 different figures with the superimposed coordinates.
2) Make a figure with 9 subplots containing 9 different maps, but which superimposes all of my coordinates on all of the subplots instead of just the time-appropriate ones.
I'm not sure if this is a very simple problem, but it's been driving me crazy and I can't find anything that works.
I'll show you what I mean:
OPTION 1:
import healpy as hp
import matplotlib.pyplot as plt

MAX = 10**23
MIN = 10**10
for i in range(9):
    t = 4000 + 10*i
    hp.visufunc.mollview(Fmap_wvpix[t, :],
                         title="Map at t="+str(t), min=MIN, max=MAX)
    hp.visufunc.projplot(d[t, np.where(np.abs(d[t, :, 2]-SSP[t]) < 0.5), 1],
                         d[t, np.where(np.abs(d[t, :, 2]-SSP[t]) < 0.5), 2],
                         'k*', markersize=6)
    hp.visufunc.projplot(d[t, np.where(np.abs(d[t, :, 2]-SOP[t]) < 0.2), 1],
                         d[t, np.where(np.abs(d[t, :, 2]-SOP[t]) < 0.2), 2],
                         'r*', markersize=6)
This makes 9 figures that look pretty much like this:
Flux map superimposed with some stars at time = t
But I need a lot of them, so I want to make an image that contains 9 subplots that look like the image.
OPTION 2:
fig = plt.figure(figsize=(10, 8))
for i in range(9):
    t = 4000 + 10*i
    hp.visufunc.mollview(Fmap_wvpix[t, :],
                         title="Map at t="+str(t), min=MIN, max=MAX,
                         sub=int('33'+str(i+1)))
    hp.visufunc.projplot(d[t, np.where(np.abs(d[t, :, 2]-SSP[t]) < 0.5), 1],
                         d[t, np.where(np.abs(d[t, :, 2]-SSP[t]) < 0.5), 2],
                         'k*', markersize=6)
    hp.visufunc.projplot(d[t, np.where(np.abs(d[t, :, 2]-SOP[t]) < 0.2), 1],
                         d[t, np.where(np.abs(d[t, :, 2]-SOP[t]) < 0.2), 2],
                         'r*', markersize=6)
This gives me subplots but it draws all the projplot stars on all of my subplots! (see following image)
Subplots with too many stars
I know that I need a way to call the axes that has the time = t map and draw the stars for time = t on the appropriate map, but everything I've tried so far has failed. I've mostly tried to use projaxes, thinking I could define a matplotlib axes and draw the stars on it, but it doesn't work. Any advice?
Also, I would like to draw some lines on my map as well, but I can't figure out how to do that either. The documentation says projplot, but it won't draw anything if I don't tell it I want a marker.
PS: This code is probably useless to you, as it won't work if you don't have my arrays. Here's a simpler version that should run:
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt

NSIDE = 8
m = np.arange(hp.nside2npix(NSIDE))*1
MAX = 900
MIN = 0
fig = plt.figure(figsize=(10, 8))
for i in range(9):
    t = 4000 + 10*i
    hp.visufunc.mollview(m + 100*i, title="Map at t="+str(t), min=MIN, max=MAX,
                         sub=int('33'+str(i+1)))
    hp.visufunc.projplot(1.5, 0 + 30*i, 'k*', markersize=16)
So this is supposed to give me one star for each frame, and the star is supposed to be moving. But instead it's drawing all the stars on all the frames.
What can I do? I don't understand the documentation.
If you want to have healpy plots in matplotlib subplots, the following would be the way to go. The key is to use plt.axes() to select the active subplot and to use the hold=True keyword in the healpy functions.
import healpy as hp
import numpy as np
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(ncols=2)
plt.axes(ax1)
hp.mollview(np.random.random(hp.nside2npix(32)), hold=True)
plt.axes(ax2)
hp.mollview(np.arange(hp.nside2npix(32)), hold=True)
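Applied to the simpler 9-frame example from the question, this approach might look like the following sketch (untested, using the same dummy map):

import numpy as np
import healpy as hp
import matplotlib.pyplot as plt

NSIDE = 8
m = np.arange(hp.nside2npix(NSIDE))

fig, axs = plt.subplots(nrows=3, ncols=3, figsize=(10, 8))
for i, ax in enumerate(axs.flat):
    t = 4000 + 10*i
    plt.axes(ax)  # make this subplot the active axes
    hp.mollview(m + 100*i, title="Map at t="+str(t), min=0, max=900, hold=True)
    # projplot now draws on the axes mollview just created,
    # so each frame gets only its own star
    hp.projplot(1.5, 0 + 30*i, 'k*', markersize=16)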
I have just encountered this question looking for a solution to the same problem, but managed to find it in the documentation of mollview (here).
As you can see there, 'sub' receives the same syntax as matplotlib's subplot function, namely:
(# of rows, # of columns, # of current subplot)
E.g. to make your plot, the value sub wants to receive in each iteration is
sub=(3, 3, i)
where i runs from 1 to 9 (3*3).
This worked for me. I haven't tried it with your code, but it should work.
Hope this helps!

Simple registration algorithm for small sets of 2D points

I am trying to find a simple algorithm to find the correspondence between two sets of 2D points (registration). One set contains the template of an object I'd like to find and the second set mostly contains points that belong to the object of interest, but it can be noisy (missing points as well as additional points that do not belong to the object). Both sets contain roughly 40 points in 2D. The second set is a homography of the first set (translation, rotation and perspective transform).
I am interested in finding an algorithm for registration in order to get the point-correspondence. I will be using this information to find the transform between the two sets (all of this in OpenCV).
Can anyone suggest an algorithm, library or small bit of code that could do the job? As I'm dealing with small sets, it does not have to be super optimized. Currently, my approach is a RANSAC-like algorithm:
1) Choose 4 random points from set 1 and 4 from set 2.
2) Compute the transform matrix H (using OpenCV's getPerspectiveTransform()).
3) Warp the 1st set of points using H and test how well they align with the 2nd set of points.
4) Repeat 1-3 N times and choose the best transform according to some metric (e.g. sum of squares); a rough sketch of this loop is given below.
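For reference, a rough Python/OpenCV sketch of that loop (pts1 and pts2 are hypothetical (N, 2) arrays holding the two point sets, and the inlier count stands in for the metric):

import numpy as np
import cv2

def ransac_homography(pts1, pts2, iters=1000, thresh=3.0):
    best_H, best_score = None, -1
    for _ in range(iters):
        # 1) choose 4 random points from each set
        s1 = pts1[np.random.choice(len(pts1), 4, replace=False)].astype(np.float32)
        s2 = pts2[np.random.choice(len(pts2), 4, replace=False)].astype(np.float32)
        # 2) compute the transform matrix H
        H = cv2.getPerspectiveTransform(s1, s2)
        # 3) warp set 1 and score how well it aligns with set 2
        warped = cv2.perspectiveTransform(
            pts1.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
        dists = np.linalg.norm(warped[:, None, :] - pts2[None, :, :], axis=2)
        score = (dists.min(axis=1) < thresh).sum()  # inlier count
        if score > best_score:
            best_H, best_score = H, score
    return best_H, best_score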
Any ideas? Thanks for your input.
With Python you can use the Open3D library, which is very easy to install in Anaconda. For your purpose ICP should work fine, so we'll use classical ICP, which minimizes point-to-point distances between closest points in every iteration. Here is the code to register 2 clouds:
import numpy as np
import open3d as o3d

# Parameters:
initial_T = np.identity(4)  # initial transformation for ICP
distance = 0.1  # threshold distance used for searching correspondences
                # (closest points between clouds); I'm setting it to 10 cm
# Read your point clouds:
source = o3d.io.read_point_cloud("point_cloud_1.xyz")
target = o3d.io.read_point_cloud("point_cloud_0.xyz")

# Define the type of registration:
estimation = o3d.pipelines.registration.TransformationEstimationPointToPoint(False)
# "False" means rigid transformation, scale = 1

# Define the number of iterations (I'll use 100):
criteria = o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=100)

# Do the registration:
result = o3d.pipelines.registration.registration_icp(
    source, target, distance, initial_T, estimation, criteria)
result is a class with 4 things: the transformation T (4x4), 2 metrics (rmse and fitness) and the set of correspondences.
To access the transformation, use result.transformation.
I used it a lot with 3D clouds obtained from Terrestrial Laser Scanners (TLS) and from robots (Velodyne LIDAR).
With MATLAB:
We'll use point-to-point ICP again, because your data is 2D. Here is a minimal example with two point clouds randomly generated inside a triangle shape:
% Triangle vertices:
V1 = [-20, 0; -10, 10; 0, 0];
V2 = [-10, 0; 0, 10; 10, 0];

% Create clouds and show pair:
points = 5000;
N1 = criar_nuvem_triangulo(V1, points);
N2 = criar_nuvem_triangulo(V2, points);
pcshowpair(N1, N2)

% Register pair N1->N2 and show:
[T, N1_transformed, RMSE] = pcregistericp(N1, N2, 'Metric', 'pointToPoint', 'MaxIterations', 100);
pcshowpair(N1_transformed, N2)
"criar_nuvem_triangulo" is a function to generate random point clouds inside a triangle:
function [cloud] = criar_nuvem_triangulo(V,N)
% Function wich creates 2D point clouds in triangle format using random
% points
% Parameters: V = Triangle vertices (3x2 Matrix)| N = Number of points
t = sqrt(rand(N, 1));
s = rand(N, 1);
P = (1 - t) * V(1, :) + bsxfun(#times, ((1 - s) * V(2, :) + s * V(3, :)), t);
points = [P,zeros(N,1)];
cloud = pointCloud(points)
end
You may just use cv::findHomography. It is a RANSAC-based approach built around cv::getPerspectiveTransform.
auto H = cv::findHomography(srcPoints, dstPoints, CV_RANSAC, 3);
where 3 is the reprojection threshold.
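For completeness, the same call in Python (a sketch with hypothetical matched correspondences, since findHomography expects srcPoints[i] to pair with dstPoints[i]):

import numpy as np
import cv2

# hypothetical matched correspondences: srcPoints[i] pairs with dstPoints[i]
srcPoints = (np.random.rand(40, 2) * 100).astype(np.float32)
dstPoints = srcPoints + np.float32([5.0, -3.0])  # e.g. a pure translation

H, mask = cv2.findHomography(srcPoints, dstPoints, cv2.RANSAC, 3.0)
# mask flags which correspondences were kept as RANSAC inliers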
One traditional approach to solving your problem is to use a point-set registration method, which applies when you don't have matching pair information. Point-set registration is similar to the method you are talking about. You can find a MATLAB implementation here.
Thanks

relationship between density of edges to the number of vertices in graph

I want to understand how to compute big-O for a dense versus a sparse graph.
"Algorithms in a Nutshell" says that for a sparse graph, O(E) is O(V), and for a dense graph O(E) is closer to O(V^2). Does anyone know how that is derived?
Assuming the graph is simple, in the worst case every node can be connected to all |V|-1 other nodes, resulting in (for an undirected graph) |E| = (|V|-1) + (|V|-2) + ... + 1 = |V| * (|V|-1) / 2 = O(|V|^2). In a directed graph: |E| = |V| * (|V|-1) = O(|V|^2).
A good example of a dense graph is a clique, which has all possible edges.
For a sparse graph, we assume the number of edges connected to each vertex is bounded by a constant. Let this constant be k. Thus |E| <= k * |V|, and we get |E| = O(|V|).
A good example of a sparse graph is the internet, where every URL is a node and every link is an edge.
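As a quick numeric illustration of both cases (a sketch assuming the networkx package):

import networkx as nx

n = 1000
dense = nx.complete_graph(n)       # clique: every pair of nodes connected
sparse = nx.grid_2d_graph(50, 20)  # 1000 nodes, each with at most 4 neighbours

print(dense.number_of_edges())     # n*(n-1)/2 = 499500, i.e. Theta(|V|^2)
print(sparse.number_of_edges())    # 1930 <= 4*1000, i.e. O(|V|)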
Note that if the graph is not simple, you cannot bound |E| with any function of |V|.
It's not derived, it's a definition. In a fully connected (directed) graph with self-loops, the number of edges |E| = |V|² so the definition of a dense graph is reasonable. The definition of a sparse graph is one where O(|E|) = O(|V|), so there's a constant maximum number of edges per vertex.
Note that if the number of edges is much smaller, e.g. O(lg |V|), then it's still O(|V|) as well. One could imagine a "semi-sparse" class of graphs with |E| = O(|V| lg |V|) or something like that, but I personally have never encountered such a class in practice.
