I'm trying to visualize collisions and other events, and am searching for the best way to update color or other visual-element properties after registration with RegisterVisualGeometry.
I've found the GeometryInstance class, which seems like a promising entry point for changing mutable illustration properties, but I have yet to find an example where an instance is retrieved from the plant (via a GeometryId from something like GetVisualGeometriesForBody?) and its properties are changed.
As a basic example, I want to change the color of a box's visual geometry when two seconds have passed. I register the geometry pre-finalize with
// box : Body added to plant
// X_WA : Identity transform
// FLAGS_box_l : box side length
geometry::GeometryId box_visual_id = plant.RegisterVisualGeometry(
box, X_WA,
geometry::Box(FLAGS_box_l, FLAGS_box_l, FLAGS_box_l),
"BoxVisualGeometry",
Eigen::Vector4d(0.7, 0.5, 0, 1));
Then, I have a while loop that creates a timed event at two seconds, where I would like the box to change its color.
double current_time = 0.0;
const double time_delta = 0.008;
bool changed(false);
while (current_time < FLAGS_duration) {
  if (current_time > 2.0 && !changed) {
    std::cout << "Change color for id " << box_visual_id.get_value() << "\n";
    // Change color of box using its GeometryId
    changed = true;
  }
  simulator.StepTo(current_time + time_delta);
  current_time = simulator_context.get_time();
}
Eventually I'd like to drive something like this from a more specific trigger, like proximity to another object or a velocity condition, but for now I'm not sure how I would register even a simple visual geometry change.
Thanks for the details. This is sufficient for me to provide a meaningful answer about the current state of affairs as well as the future (both near- and far-term plans).
Taking your question as a representative example, changing a visual geometry's color can mean one of two things:
1. The color of the object changes in an "attached" visualizer (drake_visualizer being the prime example).
2. The color of the object changes in a simulated RGB camera (what is currently dev::RgbdCamera, but imminently RgbdSensor).
Depending on what other properties you might want to change mid-simulation, there might be additional subtleties/nuances. But using the springboard above, here are the details:
A. Up until recently (drake PR 11796), changing properties after registration wasn't possible at all.
B. PR 11796 was the first step in enabling that. However, it only enables changing ProximityProperties. (ProximityProperties are associated with the role geometry plays in proximity queries -- contact, signed distance, etc. A sketch of what that workflow looks like follows this list.)
C. Changing PerceptionProperties is a TODO in that PR and will follow in the next few months (a single-digit number of months, unless a more pressing need arises to bump it up in priority). (PerceptionProperties are associated with the properties geometry has in simulated sensors -- how they appear, etc.)
D. Changing IllustrationProperties is not supported and it is not clear what the best/right way to do so may be. (IllustrationProperties are what get fed to an external visualizer like drake_visualizer.) This is the trickiest, due to the way the LCM communication is currently articulated.
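For completeness, here is a rough sketch of what the post-PR-11796 workflow looks like for ProximityProperties (not your illustration case). Treat it as illustrative only: the scene_graph, source_id, and geometry_id names are assumed to come from your existing setup.

#include "drake/geometry/geometry_roles.h"
#include "drake/geometry/scene_graph.h"
#include "drake/multibody/plant/coulomb_friction.h"

// Build a replacement set of proximity properties...
drake::geometry::ProximityProperties new_props;
new_props.AddProperty(
    "material", "coulomb_friction",
    drake::multibody::CoulombFriction<double>(0.5, 0.3));

// ...and swap it in for the geometry's existing properties.
scene_graph.AssignRole(source_id, geometry_id, new_props,
                       drake::geometry::RoleAssign::kReplace);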
So, when we compare the possible meanings of changing an object's color (1 or 2, above) with the current and near-term state of the art (C & D, above), we draw the following conclusions:
In the near future, you should be able to change it in a synthesized RGB image (case 2).
There is no real plan yet for changing it in an external visualizer (case 1).
(Sorry, it seems the answer is more along the lines of "oops...you can't do that".)
I am trying to use Drake to control a KUKA iiwa robot, using the ManipulationStationHardwareInterface for LCM communication. Before deploying on the real robot, I use mock_station_simulation to test. One thing I find is that after initializing the simulator (which I think should trigger the initialization event?), evaluating the HardwareInterface's output ports gives the default values instead of the values from the current LCM messages. For example,
drake::systems::DiagramBuilder<double> builder;
auto *interface = builder.AddSystem<ManipulationStationHardwareInterface>();
interface->Connect();
auto diagram = builder.Build();
drake::systems::Simulator<double> simulator(*diagram);
auto &simulator_context = simulator.get_mutable_context();
auto &interface_context = interface->GetMyMutableContextFromRoot(&simulator_context);
interface->GetInputPort("iiwa_position").FixValue(&interface_context, current_position);
simulator.set_publish_every_time_step(false);
simulator.set_target_realtime_rate(1.0);
simulator.Initialize();
auto q = interface->GetOutputPort("iiwa_position_measured").Eval(interface_context);
std::cout << "after initialization, interface think that the position of robot is" << q << std::endl;
q will be a zero vector.
This behavior bothers me when I try to use the robot_state input port of DifferentialInverseKinematicsIntegrator: the integrator uses this q to initialize its internal state rather than the robot's real position, and the robot moves violently. As a workaround, I need to read the robot's position first, use the SetPositions method of DifferentialInverseKinematicsIntegrator, and leave the robot_state input port unconnected. Another issue is that LogVectorOutput will always have the default value as its first entry, which is of little use.
I think this problem is related to the LcmSubscriberSystem. My question is: is it possible to use the LCM message to initialize the system, rather than the default value?
Thank you.
This is an interesting (and very reasonable) question. I could imagine having an initialization event for the LcmSubscriberSystem that blocks until the first message arrives. But currently I don't believe we guarantee the order of the initialization events in a Diagram (likely the order is determined by something like the order in which the systems were added to the Diagram, and we don't have a nice mechanism for setting it). It's possible that the diff IK block could initialize before the LcmSubscriberSystem.
In this case, I think it might be better to capture the first LcmSubscriber message yourself outside the simulation loop, and manually set the diff IK integrator initial state. Then start the simulation.
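A minimal sketch of that workaround, assuming the standard iiwa status channel and message type (the diff_ik and simulator_context names refer to your setup):

#include <Eigen/Dense>
#include "drake/lcm/drake_lcm.h"
#include "drake/lcmt_iiwa_status.hpp"

drake::lcm::DrakeLcm lcm;
drake::lcm::Subscriber<drake::lcmt_iiwa_status> status_sub(&lcm, "IIWA_STATUS");
// Block until the first status message arrives.
while (status_sub.count() == 0) {
  lcm.HandleSubscriptions(10 /* timeout_millis */);
}
const drake::lcmt_iiwa_status& status = status_sub.message();
const Eigen::VectorXd q0 = Eigen::Map<const Eigen::VectorXd>(
    status.joint_position_measured.data(), status.num_joints);

// Seed the integrator with the measured position instead of the default,
// then start the simulation.
auto& diff_ik_context = diff_ik->GetMyMutableContextFromRoot(&simulator_context);
diff_ik->SetPositions(&diff_ik_context, q0);
simulator.Initialize();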
I'll see if I can get some of the other Drake developers to weigh in.
I want to offer the user the choice of imperial or metric weight measurements in my app, to suit a wider audience. I have designed the function below to determine which setting the user wishes to use.
However, I'm unsure how I would go about applying the selection to the rest of the app. Would it be a case of having the app reach into each object the user has created in Core Data, and every text label relating to a weight measurement, and altering their weight properties by multiplication or division each time the user changes weight system?
I'd appreciate any insight into how I might achieve this, as I didn't want to go too far in the wrong direction!
func convertAppMetric() {
    if self.userSelectedWeightSystem == "Metric" {
        print("THE USER SET THE APP TO METRIC, CONVERTING FIGURES...")
        // some code
    } else if self.userSelectedWeightSystem == "Imperial" {
        print("THE USER SET THE APP TO IMPERIAL, CONVERTING FIGURES...")
        // some other code
    }
}
This is going to be one of those answers that SO hates, but you want to go read up on NSMeasurement.
NSMeasurement holds both a value and a Unit, the latter of which is the original measurement type. You store all your data in the unit that was originally provided: if the user enters pounds, store an NSMeasurement of 182 pounds; if they enter kilograms, make one with 90 kg. You can even define your own Units, like stone.
From then on, always present the data using an NSMeasurementFormatter. You can pass in the output unit, which in your case is the global setting you mentioned in your question. This means that no matter what unit the user provided, the value always comes out properly converted to the one you want, and changing the setting instantly changes it everywhere.
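In Swift these classes bridge to Measurement and MeasurementFormatter. A minimal sketch (the userSelectedWeightSystem string is the setting from your question):

import Foundation

// The app-wide setting from the question ("Metric" or "Imperial").
let userSelectedWeightSystem = "Metric"

// Store the value in whatever unit the user originally entered.
let entered = Measurement(value: 182, unit: UnitMass.pounds)

// Format for display in the user's chosen system.
let formatter = MeasurementFormatter()
formatter.unitOptions = .providedUnit  // use exactly the unit we converted to
let displayUnit: UnitMass = userSelectedWeightSystem == "Metric" ? .kilograms : .pounds
print(formatter.string(from: entered.converted(to: displayUnit)))
// -> something like "82.554 kg"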
It's easy to make your own converters for weird units. I made one for decimal inches and feet/inches, so 13.5 inches turns into 1' 1.5".
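For example, a custom stone unit might look like this sketch (the coefficient maps stone to UnitMass's base unit, kilograms):

import Foundation

// 1 stone = 6.35029318 kg.
extension UnitMass {
    static let stone = UnitMass(symbol: "st",
                                converter: UnitConverterLinear(coefficient: 6.35029318))
}

let weight = Measurement(value: 14, unit: UnitMass.stone)
print(weight.converted(to: .kilograms))  // ~88.9 kg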
I need to find the number of times the accelerometer value stream attains a maximum. I made a plot of the accelerometer values obtained from an iPhone against time, using Core Motion's device motion updates. While the data was being recorded, I shook the phone 9 times (so each extremity of a shake was one of the highest points of acceleration).
I have marked the 18 (i.e. 9*2) times the acceleration attained a maximum with red boxes on the plot.
But, as you see, there are some local maxima that I do not want to consider. Can someone direct me towards an idea that will help me achieve detecting only the maxima of importance to me?
Edit: I think I have to use a low-pass filter. But how do I implement this in Swift? How do I choose the cut-off frequency?
Edit 2:
I implemented a low-pass filter and passed the raw motion data through it, obtaining the graph shown below. This is a lot better. I still need a way to avoid the insignificant maxima that can be observed. I'll work in depth with the filter and probably fix it.
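For reference, a first-order (exponential) low-pass filter in Swift looks roughly like this sketch (assumptions: 60 Hz device-motion updates and a 3 Hz cut-off as a starting point; both need tuning):

import CoreMotion

// A first-order (exponential) low-pass filter. alpha is derived from the
// desired cut-off frequency f_c and the sample interval dt:
//   alpha = dt / (dt + 1 / (2 * pi * f_c))
final class LowPassFilter {
    private var filtered = 0.0
    private let alpha: Double

    init(cutoffHz: Double, sampleInterval dt: Double) {
        let rc = 1.0 / (2.0 * Double.pi * cutoffHz)
        alpha = dt / (dt + rc)
    }

    func update(_ raw: Double) -> Double {
        filtered += alpha * (raw - filtered)
        return filtered
    }
}

let motion = CMMotionManager()
let filter = LowPassFilter(cutoffHz: 3.0, sampleInterval: 1.0 / 60.0)
motion.deviceMotionUpdateInterval = 1.0 / 60.0
motion.startDeviceMotionUpdates(to: .main) { data, _ in
    guard let a = data?.userAcceleration else { return }
    let magnitude = (a.x * a.x + a.y * a.y + a.z * a.z).squareRoot()
    let smoothed = filter.update(magnitude)
    print(smoothed)  // feed this into the peak/cycle detection instead
}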
Instead of trying to find the maxima, I would look for cycles. In particular, we note that the (main) minima seem to be a lot more consistent than the maxima.
I am not familiar with Swift, so I'll lay out my idea in pseudo code. Suppose we have our values in v[i] and the derivative in dv[i] = v[i] - v[i - 1]. You can use any other differentiation scheme if you get a better result.
I would try something like
cycles = []   // list of (start, end) pairs
cstart = -1
cend = -1
v_threshold = 1.8    // completely guessing these figures looking at the plot
dv_threshold = 0.01

for i in indices of v:
    if cstart < 0 and
       v[i] > v_threshold and
       dv[i] < dv_threshold then:
        // cycle is starting here
        cstart = i
    else if cstart >= 0 and
            v[i] < v_threshold and
            dv[i] < dv_threshold then:
        // cycle ended
        cend = i
        cycles.add(pair(cstart, cend))
        cstart = -1
        cend = -1
    end if
end for
Now you note in the comments that the user should be able to shake with different force, and you should still be able to recognise the motion. I would start with a simple 'hard-coded' case like the one above and see if you can get it to work sufficiently well. There are a lot of things you could try to get a variable threshold, but you will nevertheless always need one. However, from the data you show, I strongly suggest at least limiting yourself to looking at the minima and not the maxima.
Also: the code I suggested is written assuming you have the full data set, whereas you will want to run this in real time. This is no problem, and the algorithm will still work (that is, the idea will still work, but you'll have to code it somewhat differently); see the sketch below.
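For instance, an online variant only needs to keep the previous sample around (same pseudo code style; on_sample is a hypothetical callback invoked once per measurement):

v_prev = undefined
cstart = -1

on_sample(v_now):
    if v_prev is defined then:
        dv = v_now - v_prev
        // apply the same cycle start/end tests as in the loop above,
        // keeping cstart/cend across calls
    end if
    v_prev = v_now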
I am looking for a method that redraws all the features stored in a layer (equivalent to the OL2 method "redraw").
The "changed" method of the ol.layer.Vector class "refreshes" only the features visible on the map (for instance, in the zoomed part) and thus doesn't affect the features outside it.
The treatment applied to those data is to periodically delete old features.
How can I achieve this?
Another question: how can I be notified of the end of this specific deletion?
Thanks in advance,
Jean-Marie
First, thanks for your answers. My question indeed requires more information:
The browser client receives points through a real-time WebSocket connection. Every second, an array of new features collected from those points is added to the vector layer in this way:
vectorLayer.getSource().addFeatures(features);
The duration of the source buffer is, for instance, one hour. To manage a temporal sliding window of one hour, old features are removed every minute:
map.once('postrender', removeOldFeatures);
vectorLayer.changed(); // or map.renderSync();
This removal is only correctly done for visible features. As soon as some features are not visible, for instance due to a zoom on a portion of the map where those features are not displayed, the removal treatment (removeOldFeatures) is not executed for those features, whatever method is used (vectorLayer.changed() or map.render()). As a consequence, the number of features never stops increasing...
Jean-Marie
I had the same problem with a TileVector source and the GeoJSON format. In the end I used the provided tileUrlFunction, and to redraw the layer I just set the source again with the layer.setSource(yourdefinedSource) method. Dube is right: most of the time (if the source is updated often) it is useful to send a unique param (like a Unix timestamp) as a cache buster.
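A rough sketch of that approach (the URL pattern and layer variable are placeholders; depending on your OL3 version the source class is ol.source.TileVector or ol.source.VectorTile):

function makeSource() {
  return new ol.source.VectorTile({
    format: new ol.format.GeoJSON(),
    tileUrlFunction: function (tileCoord) {
      // tileCoord is [z, x, y]; Date.now() acts as a cache buster.
      return 'https://example.com/tiles/' + tileCoord[0] + '/' +
          tileCoord[1] + '/' + tileCoord[2] + '.geojson?_=' + Date.now();
    }
  });
}

// Force a full redraw by replacing the layer's source.
vectorTileLayer.setSource(makeSource());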
Here's the situation:
I have a predetermined GPS route that the user will run. The route has some checkpoints, and the user should pass near all of them (think of them as racing-game checkpoints that prevent the user from taking shortcuts). I need to ensure that the user passes through all the checkpoints.
I want to determine an area that will be considered inside a checkpoint's radius, but I don't want it to be just a radial area; it should take into consideration the form of the path.
Didn't understand it? Neither did I. Look at this poorly drawn image to understand it better:
The black lines represent the predetermined path, the blue ball is the checkpoint, and the blue polygon is the wanted area. The green line is a more precise user, and the red line is a less accurate user (a drunk guy driving, maybe? lol). Both lines should be inside the polygon, but a user who skips the route entirely shouldn't be.
I already saw somewhere here a function to check if the user is inside a polygon like this, but I need to know how to calculate the polygon.
Any suggestions?
EDIT:
I'm considering using the simple distanceTo() function to just draw an imaginary circle and check if the user is there. That's good because it is much simpler to implement and understand, and bad because, to make sure the most erratic user passes within the checkpoint, I would need a big radius, making the precise user enter the checkpoint area sooner than expected.
And just so you guys understand the situation better: this is for an app that is supposed to be used in traffic (car or bus), and the checkpoints should be landmarks or spots that divide your route, for example where a traffic jam starts or stops.
You could just check the distance between the two, assuming you know the GeoLocation of the checkpoint.
Use the distanceTo function and set up a threshold of however many meters the user needs to be from the checkpoint to continue on.
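A minimal sketch with the Android Location API (the radius constant is an assumption you would tune):

import android.location.Location;

public class CheckpointChecker {
    // Assumed tuning constant: how close (in meters) counts as "at" a checkpoint.
    private static final float CHECKPOINT_RADIUS_METERS = 50f;

    public static boolean isAtCheckpoint(Location user, Location checkpoint) {
        // distanceTo() returns the approximate distance in meters.
        return user.distanceTo(checkpoint) <= CHECKPOINT_RADIUS_METERS;
    }
}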
Edit
Since you want to avoid distanceTo, here is a small function I wrote a while back to check if a point is in a polygon:
public boolean PIP(Point point, List<Point> polygon) {
    boolean nodePolarity = false;
    int sides = polygon.size();
    int j = sides - 1;
    for (int i = 0; i < sides; i++) {
        if ((polygon.get(i).y < point.y && polygon.get(j).y >= point.y)
                || (polygon.get(j).y < point.y && polygon.get(i).y >= point.y)) {
            // Cast to double so the division isn't truncated when Point
            // stores integer coordinates.
            if (polygon.get(i).x + (double) (point.y - polygon.get(i).y)
                    / (polygon.get(j).y - polygon.get(i).y)
                    * (polygon.get(j).x - polygon.get(i).x) < point.x) {
                nodePolarity = !nodePolarity;
            }
        }
        j = i;
    }
    return nodePolarity; // false = outside, true = inside
}
List<Point> polygon is a list of the points that make up a polygon.
This uses the Ray casting algorithm to determine how many intersections a ray makes through the polygon.
All you would need to do is create the 'boundary' around the area you need with GeoPoints being translated into pixels using the toPixels method.
Store those points into a List<> of points, and you should be all set.
Check a few algorithms for doing this at the link below:
http://geospatialpython.com/2011/01/point-in-polygon.html
I know this is an old question, but maybe it would be useful for someone.
This is a simpler method that needs much less computation. It does not trigger the first time the user comes inside the threshold area; instead, it finds the closest point of the pass, once the user has passed near the checkpoint AND has come close enough.
The idea is to maintain a 3 item list of distances for every checkpoint, with the last three distances in it (so it would be [d(t), d(t-1), d(t-2)]). This list should be rotated on every distance calculation.
If on any distance calculation the previous distance d(t-1) is smaller than both the current one d(t) and the preceding one d(t-2), then the moving point passed closest to the checkpoint at t-1 (a local minimum of the distance). Whether this was a real passing, or only a glitch, can be decided by checking the actual distance d(t-1).
import java.util.Collections;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

private long DISTANCE_THRESHOLD = 2000;

private Checkpoint calculateCheckpoint(Map<Checkpoint, List<Double>> checkpointDistances)
{
    Map<Checkpoint, Double> candidates = new LinkedHashMap<Checkpoint, Double>();
    for (Checkpoint checkpoint : checkpointDistances.keySet())
    {
        List<Double> distances = checkpointDistances.get(checkpoint);
        if (distances == null || distances.size() < 3)
            continue;
        // d(t-1) is a local minimum of the distance and is close enough.
        if (distances.get(0) > distances.get(1)
                && distances.get(1) < distances.get(2)
                && distances.get(1) < DISTANCE_THRESHOLD) // TODO: make this depend on current speed
            candidates.put(checkpoint, distances.get(1));
    }
    List<Entry<Checkpoint, Double>> list = new LinkedList<Entry<Checkpoint, Double>>(candidates.entrySet());
    Collections.sort(list, comp);
    if (list.size() > 0)
        return list.get(0).getKey();
    else
        return null;
}

Comparator<Entry<Checkpoint, Double>> comp = new Comparator<Entry<Checkpoint, Double>>()
{
    @Override
    public int compare(Entry<Checkpoint, Double> o1, Entry<Checkpoint, Double> o2)
    {
        return o1.getValue().compareTo(o2.getValue());
    }
};
The function gets one parameter - a Map<Checkpoint, List<Double>> with the checkpoints and the list of the last three distances. It outputs the closest Checkpoint passed or null (if there were none).
The DISTANCE_THRESHOLD should be chosen wisely.
The Comparator is just to be able to sort the Checkpoints based on their distance to the user to get the closest one.
Naturally this has some minor flaws; e.g. if the moving point moves criss-cross, or the error from GPS precision is comparable to the actual speed of the user, then this would give multiple pass marks. But that would affect almost any algorithm.