I am trying to get x, y coordinates from rviz and subscribe to them on a topic. When I use Publish Point and click on any free space in rviz, its x, y, z coordinates are published on the topic /clicked_point. I want to subscribe to these points and print them in the terminal. The code I have is a simple subscriber to the topic /clicked_point.
But when I run the code I get x and y as 0.0, not the coordinates of the point I click.
Can someone please help me look at my code? I suspect it has something to do with the PointStamped attributes.
Full code:
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PointStamped
import tf
import random
import numpy as np

global x, y, z
x = 0.0
y = 0.0
z = 0.0

def callback(msg):
    point = PointStamped()
    point.header.stamp = rospy.Time.now()
    point.header.frame_id = "/map"
    point.point.x = x
    point.point.y = y
    point.point.z = z
    rospy.loginfo("coordinates:x=%f y=%f" % (x, y))

def listener():
    rospy.init_node('goal_publisher', anonymous=True)
    rospy.point_pub = rospy.Subscriber('/clicked_point', PointStamped, callback)
    rospy.spin()

if __name__ == '__main__':
    listener()
In the terminal:
varun-flox@varunflox:~/Desktop$ ./ros_localisation_subscriber.py
[INFO] [1607606216.839750]: coordinates:x=0.000000 y=0.000000
[INFO] [1607606221.911950]: coordinates:x=0.000000 y=0.000000
In another terminal, with rostopic echo /clicked_point:
varun-flox@varunflox:~$ rostopic echo /clicked_point
header:
  seq: 31
  stamp:
    secs: 1607606021
    nsecs: 774185596
  frame_id: "map"
point:
  x: -4.99073648453
  y: -4.93864154816
  z: -0.00288152694702
---
header:
  seq: 32
  stamp:
    secs: 1607606216
    nsecs: 838675541
  frame_id: "map"
point:
  x: -4.98981332779
  y: -4.96337604523
  z: -0.0113334655762
---
Thanks in advance
You have to change the assignment of the variables in the callback function. See:
def callback(msg):
    x = msg.point.x
    y = msg.point.y
    z = msg.point.z
    rospy.loginfo("coordinates:x=%f y=%f" % (x, y))
For explanation: the msg variable handed over as a parameter to the callback function wraps the PointStamped msg type and can be unpacked as seen above.
Here is your code, modified and tested. Thank me later.
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PointStamped

global x, y, z

def callback(msg):
    rospy.loginfo("coordinates:x=%f y=%f" % (msg.point.x, msg.point.y))

def listener():
    rospy.init_node('goal_publisher', anonymous=True)
    rospy.point_pub = rospy.Subscriber('/clicked_point', PointStamped, callback)
    rospy.spin()

if __name__ == '__main__':
    listener()
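If you also need the clicked coordinates outside the callback (which is what the module-level x, y, z in the original script seemed to aim for), a minimal sketch is to declare them global inside the callback before assigning; without the global statement the assignments only create local variables and the module-level values stay at 0.0:

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PointStamped

# Module-level storage for the last clicked point (names assumed for illustration).
x = y = z = 0.0

def callback(msg):
    global x, y, z  # without this, the assignments below would be local to the callback
    x = msg.point.x
    y = msg.point.y
    z = msg.point.z
    rospy.loginfo("coordinates:x=%f y=%f" % (x, y))

def listener():
    rospy.init_node('goal_publisher', anonymous=True)
    rospy.Subscriber('/clicked_point', PointStamped, callback)
    rospy.spin()

if __name__ == '__main__':
    listener()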
I'm working on a simulation of a Soft Robot using the Piecewise Constant Curvature (PCC) assumption and representing each PCC segment with an Augmented Rigid Body Model (ARBM). For this I first would like to implement a manual slider for curvature control, i.e., a slider where I can input the two curvature parameters (Theta and Phi) and after mapping it to the ARBM via some known map m(Theta, Phi) showcase the robot in meshcat.
I'm already displaying the ARBM, but I am struggling to get the sliders to run. I'd like to end up with something similar to the "Let's get you a Robot" notebook from the Manipulation course: ideally a kinematic simulation in which I can set the curvatures to show the resulting ARBM configuration.
As of right now, my approach is the following:
Create a custom slider class, based on LeafSystem, that creates two sliders
Create a Transformation System based on a VectorSystem that applies the map m(Theta, Phi) to the output of the sliders yielding the ARBM joint states on its output
Set the Robot Joints to this State (This is where I'm struggling)
The problem seems to be that I cannot connect the desired output directly to the Joint positions in the same way that, e.g., the Joint sliders are. Is there a way to do this or should I follow a different approach?
See below for the code of the individual instances:
CurvatureSlider.py:
from dataclasses import dataclass
import numpy as np
from pydrake.systems.framework import LeafSystem


class CurvatureSliders(LeafSystem):
    @dataclass
    class SliderDefault:
        """Default values for the meshcat sliders."""
        name: str
        """The name that is used to add / query values from."""
        default: float
        """The initial value of the slider."""

    _THETA = SliderDefault("Theta", 0.0)
    _PHI = SliderDefault("Phi", 0.0)

    def __init__(self, meshcat):
        """
        @param meshcat The already created pydrake.geometry.Meshcat instance.
        """
        LeafSystem.__init__(self)
        self.DeclareVectorOutputPort("theta_phi", 2, self.DoCalcOutput)
        self.meshcat = meshcat

        # Curvature Control Sliders
        self.meshcat.AddSlider(
            name=self._THETA.name, min=-2.0 * np.pi,
            max=2.0 * np.pi, step=0.01,
            value=self._THETA.default)
        self.meshcat.AddSlider(
            name=self._PHI.name, min=-2.0 * np.pi,
            max=2.0 * np.pi, step=0.01,
            value=self._PHI.default)

    def SetConfiguration(self, config: tuple):
        """
        @param config Tuple of floats that is ordered (Theta, Phi)
        """
        assert len(config) == 2
        self.meshcat.SetSliderValue(self._THETA.name, config[0])
        self.meshcat.SetSliderValue(self._PHI.name, config[1])

    def DoCalcOutput(self, context, output):
        theta = self.meshcat.GetSliderValue(self._THETA.name)
        phi = self.meshcat.GetSliderValue(self._PHI.name)
        output.SetAtIndex(0, theta)
        output.SetAtIndex(1, phi)
CC2ARBM.py:
import numpy as np
from pydrake.systems.framework import VectorSystem


class CC2ARBM(VectorSystem):

    def __init__(self, L: float):
        """
        @param L The length of the segment.
        """
        VectorSystem.__init__(self, 2, 10)
        # Define length of the segment
        self._L = L

    def DoCalcVectorOutput(self, context, u, x, y):
        # Extract Input
        theta = u[0]
        phi = u[1]

        # Compute ARBM equivalent configuration
        b = 0.5 * self._L
        eta = 0
        if not theta == 0:
            b = self._L / theta * np.sqrt(1
                + 4 * np.sin(0.5 * theta) / theta
                * (np.sin(0.5 * theta) / theta)
                - np.cos(0.5 * theta))
            eta = np.arccos(1 / b * self._L / theta * np.sin(0.5 * theta))

        print("Computed ARBM Joint Position")

        # Aggregate Output
        y.SetAtIndex(0, phi)
        y.SetAtIndex(1, 0.5 * theta - eta)
        y.SetAtIndex(2, b)
        y.SetAtIndex(3, eta)
        y.SetAtIndex(4, -phi)
        y.SetAtIndex(5, phi)
        y.SetAtIndex(6, eta)
        y.SetAtIndex(7, b)
        y.SetAtIndex(8, 0.5 * theta - eta)
        y.SetAtIndex(9, -phi)
Main.py:
import sys
import time
import matplotlib.pyplot as plt
import numpy as np

from CurvatureSliders import CurvatureSliders
from CC2ARBM import CC2ARBM

from pydrake.geometry import (
    MeshcatVisualizerCpp,
    MeshcatVisualizerParams,
    Role,
    StartMeshcat
)
from pydrake.math import (
    RigidTransform,
    RotationMatrix
)
from pydrake.multibody.meshcat import JointSliders
from pydrake.multibody.tree import FixedOffsetFrame
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import LogVectorOutput


def do_main(target_realtime_rate: float,
            simulation_time: float,
            max_time_step: float,
            description_path: str,
            teleop: bool = False) -> None:  # teleop is unused below; a default keeps the calls at the bottom working

    # Start the visualizer.
    # The cell will output an HTTP link after the execution.
    # Click the link and a MeshCat tab should appear in your browser.
    meshcat = StartMeshcat()

    # Reset Meshcat Simulator
    meshcat.Delete()
    meshcat.DeleteAddedControls()

    # Init the Diagram Builder
    builder = DiagramBuilder()

    # Note: the time_step here is chosen arbitrarily.
    plant, scene_graph = AddMultibodyPlantSceneGraph(
        builder, time_step=max_time_step)

    # Load the files into the plant/scene_graph.
    parser = Parser(plant)
    # L-Mount
    mount = parser.AddModelFromFile("../mount.sdf")
    # Robot
    model = parser.AddModelFromFile(description_path)

    # Create an offset frame located half the link height above the origin
    base_frame = plant.GetFrameByName("mount_base")
    offset_frame = plant.AddFrame(
        frame=FixedOffsetFrame(
            name="offset_frame",
            P=plant.world_frame(),
            X_PF=RigidTransform(
                R=RotationMatrix.Identity(),
                p=np.array([0, 0, 0.5])
            ),
            model_instance=None)
    )
    # Weld the base link to the offset frame
    plant.WeldFrames(offset_frame, base_frame)

    # Weld the robot base to the L-Mount
    robot_base_frame = plant.GetFrameByName("robot_base")
    robot_mounting_frame = plant.GetFrameByName("robot_location")
    plant.WeldFrames(robot_mounting_frame, robot_base_frame)

    # Finalize Plant
    plant.Finalize()

    #############################################################
    #               Post Plant Finalization Code                #
    #############################################################

    # Set the default states of the joints
    for i in plant.GetJointIndices(model):
        j = plant.get_joint(i)
        if j.num_positions() == 1:
            j.set_default_positions([-0.2])

    # Make the control inputs of the model an input to the diagram.
    builder.ExportInput(plant.get_actuation_input_port())

    # Add two visualizers, one to publish the "visual" geometry, and one to
    # publish the "collision" geometry.
    visual = MeshcatVisualizerCpp.AddToBuilder(
        builder, scene_graph, meshcat,
        MeshcatVisualizerParams(
            role=Role.kPerception, prefix="visual")
    )
    collision = MeshcatVisualizerCpp.AddToBuilder(
        builder, scene_graph, meshcat,
        MeshcatVisualizerParams(role=Role.kProximity, prefix="collision"))
    # Disable the collision geometry at the start; it can be enabled by the
    # checkbox in the meshcat controls.
    meshcat.SetProperty("collision", "visible", False)

    # Setup Loggers
    plant_logger = LogVectorOutput(plant.get_state_output_port(), builder)

    # Joint sliders work like this:
    # sliders = builder.AddSystem(JointSliders(meshcat, plant))

    # Add Curvature Sliders
    curv_sliders = builder.AddSystem(CurvatureSliders(meshcat))
    cc2arbm = builder.AddSystem(CC2ARBM(0.2))

    # Connect the Sliders to the transformation block
    builder.Connect(curv_sliders.get_output_port(0),
                    cc2arbm.get_input_port(0))

    # Build the diagram
    diagram = builder.Build()

    # Start running the teleoperation:
    # run the slider similar to the joint sliders,
    # e.g. sliders.Run(diagram)


if __name__ == '__main__':
    if len(sys.argv) <= 1:
        do_main(target_realtime_rate=1,
                simulation_time=10.0,
                max_time_step=1.0e-3,
                description_path="single_cc_segment.sdf")
    else:
        do_main(target_realtime_rate=float(sys.argv[1]),
                simulation_time=float(sys.argv[2]),
                max_time_step=float(sys.argv[3]),
                description_path=sys.argv[4])
There are a few moving parts here. First, you say "kinematic simulation", but the demonstration in "Let's get you a robot" does not simulate (physics); it only visualizes the kinematics as set by the sliders. Assuming that is sufficient for your goal, you could pass a callback into your sliders.Run method (as I do in the notebook corresponding to this chapter), and I believe that if you call plant.SetPositions in that callback, it should work.
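A minimal sketch of that idea (a manual update loop in the spirit of JointSliders.Run; the helper names run_with_callback and cc2arbm_map are illustrative, and the exact callback signature in the Manipulation notebooks may differ):

import time

def run_with_callback(diagram, plant, sliders, cc2arbm_map, timeout=1e5):
    # Hypothetical helper: read the curvature sliders, map them through m(Theta, Phi),
    # and write the resulting joint positions into the plant context before publishing.
    root_context = diagram.CreateDefaultContext()
    plant_context = plant.GetMyMutableContextFromRoot(root_context)
    sliders_context = sliders.GetMyContextFromRoot(root_context)

    t0 = time.time()
    while time.time() - t0 < timeout:
        theta_phi = sliders.get_output_port().Eval(sliders_context)
        q = cc2arbm_map(theta_phi[0], theta_phi[1])  # m(Theta, Phi) -> ARBM joint positions
        plant.SetPositions(plant_context, q)         # kinematics only, no physics
        diagram.Publish(root_context)                # refresh the MeshCat visualization
        time.sleep(0.05)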
I eventually got the custom sliders running using a reimplementation of JointSliders.Run(...). For me, the following was sufficient (two planar constant-curvature segments, represented by 4 rigid joints each):
from dataclasses import dataclass
import numpy as np
import functools
import operator
import logging
import time
from typing import List, Tuple
from pydrake.systems.framework import LeafSystem


class CurvatureSliders(LeafSystem):
    @dataclass
    class SliderDefault:
        """Default values for the meshcat sliders."""
        name: str
        """The name that is used to add / query values from."""
        default: float
        """The initial value of the slider."""

    _Q1 = SliderDefault("q1", 0.0)
    _Q2 = SliderDefault("q2", 0.0)

    def __init__(self, meshcat, plant, L=0.2):
        """
        @param meshcat The already created pydrake.geometry.Meshcat instance.
        @param plant   The plant the sliders are connected to.
        @param L       The rest length of the segment.
        """
        # Call Super Constructor
        LeafSystem.__init__(self)

        # Declare System Output
        output = self.DeclareVectorOutputPort(
            "q1_q2", 2, self.DoCalcOutput)
        output.disable_caching_by_default()

        # Init Class Variables
        self._meshcat = meshcat
        self._plant = plant
        self._L = L

        # Curvature Control Sliders
        self._meshcat.AddSlider(
            name=self._Q1.name, min=-2.0 * np.pi,
            max=2.0 * np.pi, step=0.1,
            value=self._Q1.default)
        self._meshcat.AddSlider(
            name=self._Q2.name, min=-2.0 * np.pi,
            max=2.0 * np.pi, step=0.1,
            value=self._Q2.default)

    def SetConfiguration(self, q: Tuple):
        """
        @param q Configuration of each CC segment, described by (q1, q2).
        """
        self._meshcat.SetSliderValue(self._Q1.name, q[0])
        self._meshcat.SetSliderValue(self._Q2.name, q[1])

    def CC2AGRB(self, q: Tuple) -> List[float]:
        # Extract input
        q1 = q[0]
        q2 = q[1]
        # Compute ARBM equivalent configuration
        config1 = [0, 0.5 * self._L, 0.5 * self._L, 0]
        config2 = [0, 0.5 * self._L, 0.5 * self._L, 0]
        if q1 != 0:
            config1 = [
                0.5 * q1,
                self._L * np.sin(0.5 * q1) / q1,
                self._L * np.sin(0.5 * q1) / q1,
                0.5 * q1
            ]
        if q2 != 0:
            config2 = [
                0.5 * q2,
                self._L * np.sin(0.5 * q2) / q2,
                self._L * np.sin(0.5 * q2) / q2,
                0.5 * q2
            ]
        return functools.reduce(operator.iconcat, [config1, config2], [])

    def DoCalcOutput(self, context, output):
        q1 = self._meshcat.GetSliderValue(self._Q1.name)
        q2 = self._meshcat.GetSliderValue(self._Q2.name)
        output.SetAtIndex(0, q1)
        output.SetAtIndex(1, q2)

    def Run(self, diagram, timeout=1e5):
        # Get all the contexts
        root_context = diagram.CreateDefaultContext()
        sliders_context = self.GetMyContextFromRoot(root_context)
        plant_context = self._plant.GetMyMutableContextFromRoot(root_context)

        # Add Stop Button
        kButtonName = "Stop Curvature Sliders"
        logging.info("Press the '%s' button in Meshcat to continue.",
                     kButtonName)
        self._meshcat.AddButton(kButtonName)

        # Grab the current time to implement the timeout
        t0 = time.time()

        # Loop until the button is clicked, or
        # the timeout (when given) is reached.
        diagram.Publish(root_context)
        while self._meshcat.GetButtonClicks(kButtonName) < 1:
            # Break out of the loop if the timeout elapsed
            elapsed_t = time.time() - t0
            if elapsed_t >= timeout:
                break

            # If the sliders have not changed, avoid invalidating the context.
            old_positions = self._plant.GetPositions(plant_context)
            new_positions = self.CC2AGRB(
                self.get_output_port().Eval(sliders_context))
            if (np.abs(new_positions - old_positions) < 0.001).all():
                time.sleep(0.01)
                continue

            # Publish the new positions.
            self._plant.SetPositions(plant_context, new_positions)
            diagram.Publish(root_context)
And then just add the sliders using the builder and call the Run() function.
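For example (a sketch based on the Main.py structure above; the variable names builder, meshcat and plant are assumed from that script):

# Sketch of wiring the custom sliders in, following the Main.py structure above.
curv_sliders = builder.AddSystem(CurvatureSliders(meshcat, plant, L=0.2))
diagram = builder.Build()

# Blocks until the "Stop Curvature Sliders" button is pressed (or the timeout elapses).
curv_sliders.Run(diagram)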
I would like to publish velocities for my ARDrone on the /cmd_vel topic using the Python script below, but it does nothing; it does not publish the required information.
What is wrong with the code below?
#!/usr/bin/env python3
import numpy as np
import rospy
from geometry_msgs.msg import Twist


class KeyboardControl:
    def __init__(self):
        rospy.init_node('Script_controlling_ARDrone', anonymous=False)
        publisher = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        twist = Twist()
        twist.linear.x = 0
        twist.linear.y = 0
        twist.linear.z = -1
        twist.angular.x = 0
        twist.angular.y = 0
        twist.angular.z = 0.5
        publisher.publish(twist)


def main():
    try:
        kc = KeyboardControl()
        rospy.spin()
    except KeyboardInterrupt:
        rospy.loginfo("Shutting down")


if __name__ == '__main__':
    main()
This does actually do something, but it's probably not noticeable, because it only publishes once, when the object is created. If you would like to publish periodically you should use a main run loop like so:
import numpy as np
import rospy
from geometry_msgs.msg import Twist


class KeyboardControl:
    def __init__(self):
        rospy.init_node('Script_controlling_ARDrone', anonymous=False)
        self.pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        self.rate = rospy.Rate(10)  # 10 Hz


def main():
    try:
        kc = KeyboardControl()
        twist = Twist()
        twist.linear.x = 0
        twist.linear.y = 0
        twist.linear.z = -1
        twist.angular.x = 0
        twist.angular.y = 0
        twist.angular.z = 0.5
        while not rospy.is_shutdown():
            # Do any needed edits to the twist message here
            kc.pub.publish(twist)
            kc.rate.sleep()
    except KeyboardInterrupt:
        rospy.loginfo("Shutting down")


if __name__ == '__main__':
    main()
I have a problem with getting option parameters using the library ib_insync.
import time
from ib_insync import *
import pandas as pd
from configparser import ConfigParser
from ibapi.common import TickerId, SetOfFloat, SetOfString, MarketDataTypeEnum

config = ConfigParser()
# TWS 7497, IBGW 4001


def get_chain(ib, ticker, exp_list):
    exps = {}
    df = pd.DataFrame(columns=['strike', 'kind', 'close', 'last'])
    for i in exp_list:
        ib.sleep()
        cds = ib.reqContractDetails(Option(ticker, i, exchange='SMART'))
        options = [cd.contract for cd in cds]
        tickers = [t for i in range(0, len(options), 100)
                   for t in ib.reqTickers(*options[i:i + 100])]
        for x in tickers:
            df = df.append(
                {'strike': x.contract.strike, 'kind': x.contract.right, 'close': x.close, 'last': x.last,
                 'bid': x.bid, 'ask': x.ask, 'mid': (x.bid + x.ask) / 2, 'volume': x.volume},
                ignore_index=True)
        exps[i] = df
    return exps


def get_individual(ib, ticker, exp, strike, kind):
    cds = ib.reqContractDetails(Option(ticker, exp, strike, kind, exchange='SMART'))
    options = [cd.contract for cd in cds]
    tickers = [t for i in range(0, len(options), 100) for t in ib.reqTickers(*options[i:i + 100])]
    con = {'strike': tickers[0].contract.strike, 'kind': tickers[0].contract.right, 'close': tickers[0].close,
           'last': tickers[0].last, 'bid': tickers[0].bid, 'ask': tickers[0].ask, 'volume': tickers[0].volume}
    return con


def main():
    with IB().connect('127.0.0.1', 7497) as ib:
        ib.reqMarketDataType(3)
        time.sleep(1)
        print(get_chain(ib, "AAPL", ["20220211"]))


if __name__ == '__main__':
    main()
Output: "Error 10090, reqId 4: Part of requested market data is not subscribed. Subscription-independent ticks are still active.Delayed market data is available.AAPL NASDAQ.NMS/TOP/ALL"
My solution was to add ib.sleep because I thought maybe the calls were overlapping. As you can see, the MarketDataType is set to 3 (delayed data).
In the end I get a table with strikes, but all the other parameters are NaN. Sometimes the first row gets some values, which is why I think it is some kind of overlapping problem.
Sorry if my question is too long.
I'm following this paper:
http://robotics.stanford.edu/users/shoham/www%20papers/Empirical%20Hardness.pdf
and I am trying to predict the running time of a black-box solver on the Traveling Salesman Problem.
I get some weird results during regression that I'd love to consult about:
I find it hard to believe that in XGBoost, or in any regressor, the number of cities is irrelevant as a feature, yet that is what the XGBoost feature importance image shows.
In the Ridge and linear regression result graphs you can see that for some problem instances the predicted value is negative (even though we are predicting a run time). I saw in another question here that this is because "linear regression does not respect the bounds of 0" and that I should put a natural log on it, but I don't know exactly where, so I'd love help with that as well (see the sketch just after this list).
I'd also love recommendations for other regression models that may fit my problem.
Thanks a lot!
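For reference, a minimal sketch of where the log is commonly applied: to the target (the run time), not to the features. scikit-learn's TransformedTargetRegressor handles the inverse transform automatically; X_train/y_train here are assumed to be the train/test split defined in the code further down:

import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import Ridge

# Fit on log(run time) and predict back on the original scale;
# since the inverse transform is exp(...), the predictions are always positive.
log_ridge = TransformedTargetRegressor(
    regressor=Ridge(),
    func=np.log,        # run times are strictly positive, so log is well defined
    inverse_func=np.exp
)
log_ridge.fit(X_train, y_train)
y_pred = log_ridge.predict(X_test)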
Here are my code pieces (Google Colab), followed by the results I got:
1
# Import the standard libraries of pandas.
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
sns.set_style('whitegrid')
from google.colab import files
2
# Install the solver and import its libraries, in addition import all the
# libraries with which we will prepare the features.
!pip3 install ortools
!pip install python-igraph
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp
import numpy as np
import time
import random
from random import randrange
from scipy import stats
from scipy.stats import skew
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.sparse.csgraph import depth_first_tree
from igraph import Graph, mean
import igraph
import itertools
import math
3
# Simple travelling salesman problem between cities - solver OR-Tools by Google.

def create_data_model():
    # Stores the data for the problem.
    data = {}
    # dim will be the number of vertices/cities in the Traveling Salesman Problem.
    # Randomly select the matrix dimension from a uniform distribution.
    dim = np.random.randint(10, 350)
    # Generate a square symmetric matrix. It will be the distance matrix that the solver will solve.
    square_matrice = [[0 for row in range(dim)] for col in range(dim)]
    for i in range(dim):
        for j in range(dim):
            if i == j:
                square_matrice[i][j] = 0
            else:
                # Randomly fill the matrix from a uniform distribution.
                square_matrice[i][j] = square_matrice[j][i] = np.random.randint(1, 1000)
    data['distance_matrix'] = square_matrice  # yapf: disable
    data['num_vehicles'] = 1
    data['depot'] = 0
    return data


def main():
    # Start measuring solution time.
    start_time = time.time()

    # Instantiate the data problem.
    data = create_data_model()

    # Create the routing index manager.
    manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']),
                                           data['num_vehicles'], data['depot'])

    # Create Routing Model.
    routing = pywrapcp.RoutingModel(manager)

    def distance_callback(from_index, to_index):
        # Returns the distance between the two nodes.
        # Convert from routing variable Index to distance matrix NodeIndex.
        from_node = manager.IndexToNode(from_index)
        to_node = manager.IndexToNode(to_index)
        return data['distance_matrix'][from_node][to_node]

    transit_callback_index = routing.RegisterTransitCallback(distance_callback)

    # Define cost of each arc.
    routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)

    # Setting first solution heuristic.
    search_parameters = pywrapcp.DefaultRoutingSearchParameters()
    search_parameters.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)

    # Solve the problem.
    solution = routing.SolveWithParameters(search_parameters)
    solution_time = time.time() - start_time

    '''In this part of the code we create the following features on the distance matrix of the problem.
    * Mean - Average weight of the distance matrix.
    * Std - Standard deviation of the distance matrix.
    * Skewness - What is the tendency of the weights in the distance matrix.
    * Noc - Number of cities we have in the distance matrix [matrix dimension].
    * Td - The total distance of the solution route.
    * Dmft - Distance matrix features time, i.e. how long it took us to calculate all these features.
    '''
    dmt_start_time = time.time()
    mat = np.array(data['distance_matrix'])
    mean = mat.mean()
    std = mat.std()
    merged = list(itertools.chain(*mat))
    skewness = skew(merged)
    noc = len(data['distance_matrix'])
    td = solution.ObjectiveValue() if solution else -1
    dmft = time.time() - dmt_start_time

    '''In this part of the code we create an MST from the distance matrix of the problem and then
    take the following features of the MST.
    * MST_Mean - Average weight of the MST.
    * MST_Std - Standard deviation of the MST.
    * MST_Skewness - What is the tendency of the weights in the MST.
    * MST_ft - MST features time, i.e. how long it took us to calculate the MST & all these features.
    '''
    spt_start_time = time.time()
    X = csr_matrix(mat)
    Tcsr = minimum_spanning_tree(X)
    mat_st = np.array(Tcsr.toarray().astype(int))
    mst_mean = mat_st.mean()
    mst_std = mat_st.std()
    merged_st = list(itertools.chain(*mat_st))
    mst_skewness = skew(merged_st)
    mst_ft = time.time() - spt_start_time

    '''In this part of the code we calculate features of the MST that are related to the degrees
    and the depth of the tracks in it.
    * D_Mean - Average degree of the MST.
    * D_Std - Standard deviation of the MST degrees.
    * D_Skewness - What is the tendency of the degrees in the MST.
    * DFT_Mean - The average weight of the deepest track in the MST.
    * DFT_Std - Standard deviation of the deepest track in the MST.
    * DFT_Max - The heaviest arc on the longest route in the MST.
    * DDFT_ft - Degree & DFT features time, i.e. how long it took us to calculate all these features.
    '''
    dstt_start_time = time.time()
    g = Graph.Weighted_Adjacency(mat_st.tolist())
    d_mean = igraph.statistics.mean(g.degree())
    d_std = igraph.statistics.sd(g.degree())
    d_skewness = skew(g.degree())
    d_t = depth_first_tree(X, 0, directed=False)
    mat_dt = np.array(d_t.toarray().astype(int))
    dft_mean = mat_dt.mean()
    dft_std = mat_dt.std()
    dft_max = np.amax(mat_dt)
    ddft_ft = time.time() - dstt_start_time

    # In this map we hold all the features and their results.
    features_map = {'Mean': mean, 'Std': std, 'Skewness': skewness, 'Noc': noc, 'Td': td, 'Dmft': dmft,
                    'MST_Mean': mst_mean, 'MST_Std': mst_std, 'MST_Skewness': mst_skewness, 'MST_ft': mst_ft,
                    'D_Mean': d_mean, 'D_Std': d_std, 'D_Skewness': d_skewness, 'DFT_Mean': dft_mean,
                    'DFT_Std': dft_std, 'DFT_Max': dft_max, 'DDFT_ft': ddft_ft, 'Solution_time': solution_time}
    return features_map


# Main
# Create dataFrame.
data_TSP = pd.DataFrame()

# Fill the dataFrame.
for i in range(10000):
    # print(i)
    features_map = main()
    data_TSP = data_TSP.append(features_map, ignore_index=True)

# Show data frame.
data_TSP.head()

data_TSP.to_csv('data_10000.csv')
files.download('data_10000.csv')
Regression models:
# Import the standard libraries of pandas.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
sns.set_style('whitegrid')
2
# Neaded for opening data file in drive.
from google.colab import files
uploaded = files.upload()
import io
df = pd.read_csv(io.BytesIO(uploaded['data_10000_clean.csv']))
try:
    df.drop(['Unnamed: 0'], axis=1, inplace=True)
except:
    pass
df.head()
from sklearn.model_selection import train_test_split
# Split the data to training set and test set (70%, 30%)
features = list(df.drop('Solution_time', axis = 1, inplace = False))
y = df['Solution_time']
X = df[features]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
Import the models with which we predicted the solution time, and the scoring methods to evaluate these models.
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import GridSearchCV
from sklearn import linear_model
!pip3 install xgboost
from xgboost import XGBRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score
!pip install scikit-plot
import scikitplot as skplt
import matplotlib as mpl
########################################## Several functions for different regression models. ##########################################

class Score:
    r2 = 0.0               # How close the predictions really are to the real data.
    cross_vali_score = 0.0  # How well the algorithm generalizes, i.e. how well it will predict new data.


class Regressor:
    def __init__(self, name):
        self.name = name
        self.score = Score()
        self.y_pred = None
        self.reg = None


# Map between the name of a model and the model itself.
models_map = {'Random Forest': RandomForestRegressor(), 'Xgboost': XGBRegressor(), 'Ridge': Ridge(),
              'Kneighbors': KNeighborsRegressor(), 'Linear Regressor': linear_model.LinearRegression()}


# This function returns a map between each model name and its Regressor instance.
def get_models():
    result_map = {}
    for key, val in models_map.items():
        result = Regressor(key)
        reg = val
        result.score.cross_vali_score = np.mean(cross_val_score(reg, X_train, y_train, cv=5))
        result.reg = reg.fit(X_train, y_train)
        result.y_pred = reg.predict(X_test)
        result.score.r2 = r2_score(y_test, result.y_pred)
        result_map[key] = result
    return result_map


# This function prints, for the tree-based models, a graph of the features that influenced their decision making.
def print_influence_graph(map):
    for key, val in map.items():
        if key == 'Random Forest' or key == 'Xgboost':
            # The parameters that most influenced the decision.
            feature_imp = pd.Series(val.reg.feature_importances_, index=features).sort_values(ascending=False)
            sns.barplot(x=feature_imp, y=feature_imp.index)
            # Add labels to your graph
            plt.xlabel('Feature Importance Score')
            plt.ylabel('Features')
            plt.title(val.name.upper() + " - Visualizing Important Features")
            plt.show()


# This function prints, for each model, a graph that shows the real results against the model predictions.
def show_predicted_vs_actual(map):
    for key, val in map.items():
        fig, ax = plt.subplots()
        ax.scatter(y_test, val.y_pred, edgecolors=(0, 0, 1))
        ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', lw=3)
        ax.set_xlabel('Predicted')
        ax.set_ylabel('Actual')
        ax.title.set_text(val.name.upper() + " - Predicted time vs actual time")
        plt.show()


# This function prints numerical scores for the models.
def print_scores(map):
    for key, val in map.items():
        print(val.name.upper() + ' SCORE: ')
        print('R2score' + ' = ', val.score.r2)
        print('Cross_val_score' + ' = ', val.score.cross_vali_score)
        print('------------------------------------------\n')


# This function prints a graph showing the differences between the scores of the models.
def show_models_differences_graph(map):
    comp_df = pd.DataFrame(columns=('Method', 'R2 Score', 'Cross val score'))
    for i in map:
        row = {'Method': i, 'R2 Score': map[i].score.r2, 'Cross val score': map[i].score.cross_vali_score}
        comp_df = comp_df.append(row, ignore_index=True)
    ax = comp_df.plot.bar(x='Method', rot=30, figsize=(12, 6))
    ax.set_title('Comparison graph')

#########################################################################################################################################
models = get_models()
print_influence_graph(models)
show_predicted_vs_actual(models)
print_scores(models)
show_models_differences_graph(models)
And here are the results:
I am using the MD49 Motor Drive with its motors:
https://www.robot-electronics.co.uk/htm/md49tech.htm
http://wiki.ros.org/md49_base_controller
How do I subscribe to (encoder_l and encoder_r) from the md49_base_controller package and publish (vx, vy, and vth) as odom (nav_msgs/Odometry)?
There are two problems:
1- The first is that the encoder outputs are not correct; the package needs to be modified.
2- The second is that I want to create a package that subscribes to the right and left wheel encoder counts (encoder_l and encoder_r) and publishes (vx, vy, and vth) as odom (nav_msgs/Odometry), so that it is compatible with the MPU9250 IMU and
http://wiki.ros.org/robot_pose_ekf
The proposed package is:
1- We have to convert (encoder_l and encoder_r) to (RPM_l and RPM_r) as follows:
if (speed_l>128){newposition1 = encoder_l;}
else if (speed_l<128){ newposition1 = 0xFFFFFFFF-encoder_l;}
else if (speed_l==128) {newposition1=0;}
newtime1 = millis();
RPM_l = ((newposition1-oldposition1)*1000*60)/((newtime1-oldtime1)*980);
oldposition1 = newposition1;
oldtime1 = newtime1;
delay(250);
if (speed_r>128){ newposition2 = encoder_r;}
else if (speed_r<128){ newposition2 = 0xFFFFFFFF-encoder_r;}
else if (speed_r==128) { newposition2=0;}
newtime2 = millis();
RPM_r = ((newposition2-oldposition2)*1000*60)/((newtime2-oldtime2)*980);
oldposition2 = newposition2;
oldtime2= newtime2;
delay(250);
2- We have to convert (RPM_l and RPM_r) to (vx, vy, and vth) as follows:
vx = (r/2)*RPM_l*math.cos(th) + (r/2)*RPM_r*math.cos(th);
vy = (r/2)*RPM_l*math.sin(th) + (r/2)*RPM_r*math.sin(th);
vth = (r/B)*omega_l - (r/B)*omega_r;
Hint: r and B are wheel radius and vehicle width respectively.
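As a sanity check, a minimal Python sketch of that conversion (assuming RPM is first converted to wheel angular velocity in rad/s; the function name and the sign convention chosen for vth are illustrative, not taken from the package):

import math

def wheel_speeds_to_twist(RPM_l, RPM_r, th, r, B):
    # Convert RPM to wheel angular velocities in rad/s.
    omega_l = RPM_l * 2.0 * math.pi / 60.0
    omega_r = RPM_r * 2.0 * math.pi / 60.0
    # Forward speed and yaw rate of a differential-drive base.
    v = (r / 2.0) * (omega_l + omega_r)
    vth = (r / B) * (omega_r - omega_l)
    # Project the forward speed into the odom frame using the current heading th.
    vx = v * math.cos(th)
    vy = v * math.sin(th)
    return vx, vy, vth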
3- The odom (nav_msgs/Odometry) package is:
#!/usr/bin/env python
import math
from math import sin, cos, pi

import rospy
import tf
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Point, Pose, Quaternion, Twist, Vector3
from md49_messages.msg import md49_encoders

rospy.init_node('odometry_publisher')

odom_pub = rospy.Publisher("odom", Odometry, queue_size=50)
odom_broadcaster = tf.TransformBroadcaster()

x = 0.0
y = 0.0
th = 0.0

vx = 0.1
vy = -0.1
vth = 0.1

current_time = rospy.Time.now()
last_time = rospy.Time.now()

r = rospy.Rate(1.0)
while not rospy.is_shutdown():
    current_time = rospy.Time.now()

    # compute odometry in a typical way given the velocities of the robot
    dt = (current_time - last_time).to_sec()
    delta_x = (vx * cos(th) - vy * sin(th)) * dt
    delta_y = (vx * sin(th) + vy * cos(th)) * dt
    delta_th = vth * dt

    x += delta_x
    y += delta_y
    th += delta_th

    # since all odometry is 6DOF we'll need a quaternion created from yaw
    odom_quat = tf.transformations.quaternion_from_euler(0, 0, th)

    # first, we'll publish the transform over tf
    odom_broadcaster.sendTransform(
        (x, y, 0.),
        odom_quat,
        current_time,
        "base_link",
        "odom"
    )

    # next, we'll publish the odometry message over ROS
    odom = Odometry()
    odom.header.stamp = current_time
    odom.header.frame_id = "odom"

    # set the position
    odom.pose.pose = Pose(Point(x, y, 0.), Quaternion(*odom_quat))

    # set the velocity
    odom.child_frame_id = "base_link"
    odom.twist.twist = Twist(Vector3(vx, vy, 0), Vector3(0, 0, vth))

    # publish the message
    odom_pub.publish(odom)

    last_time = current_time
    r.sleep()
The problem was in the serial communication setup, not in the code.
Setting up the UART on the Raspberry Pi 3 GPIO:
For some strange reason the default for the Pi 3 using the latest 4.4.9 kernel is to DISABLE the UART. To enable it you need to set enable_uart=1 in /boot/config.txt. (It is no longer necessary to add core_freq=250 to fix the core frequency to get a stable baud rate.)
If you don't use Bluetooth (or have undemanding uses) it is possible to swap the ports back in the Device Tree. There are pi3-miniuart-bt and pi3-disable-bt overlays which are described in /boot/overlays/README.
As mentioned, by default the new GPIO UART is not enabled so the first thing to do is to edit the config.txt file to enable the serial UART:
sudo nano /boot/config.txt
and add the line (at the bottom):
enable_uart=1
You have to reboot for the changes to take effect.
To check where your serial ports are pointing you can use the list command with the long “-l” listing format:
ls -l /dev
In the long listing you will find:
serial0 -> ttyS0
serial1 -> ttyAMA0
Thus on a Raspberry Pi 3 and Raspberry Pi Zero W, serial0 will point to GPIO J8 pins 8 and 10 and use the /dev/ttyS0. On other Raspberry Pi’s it will point to /dev/ttyAMA0.
So where possible refer to the serial port via its alias of "serial0" and your code should work on both the Raspberry Pi 3 and other Raspberry Pis.
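For example, a minimal sketch of opening the port through that alias with pyserial (the 38400 baud rate and the command bytes are assumptions here; verify them against the MD49 documentation linked above):

import serial

# Open the UART via the "serial0" alias so the same code works on a Raspberry Pi 3
# and on earlier models.
port = serial.Serial("/dev/serial0", baudrate=38400, timeout=0.1)
port.write(bytes([0x00, 0x25]))  # assumed example: sync byte followed by a "read encoders" command
reply = port.read(8)             # the MD49 replies with the encoder counts as raw bytes
port.close()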
You also need to remove the console from the cmdline.txt. If you edit this with:
sudo nano /boot/cmdline.txt
you will see something like this:
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait
remove the part console=serial0,115200, then save and reboot for the changes to take effect:
reboot
First off, you need to import nav_msgs/Odometry as follows:
from nav_msgs.msg import Odometry
You must have a function that performs those conversions, and then register it as the callback in rospy.Subscriber so those variables get filled in, like this:
def example(data):
    data.vx = <conversion>
    data.vth = <conversion>

def listener():
    rospy.Subscriber('*topic*', Odometry, example)
    rospy.spin()

if __name__ == '__main__':
    listener()
I think this would work.