
Where is MobyLCPSolver?
ImportError: cannot import name 'MobyLCPSolver' from 'pydrake.all' (/home/docker/drake/drake-build/install/lib/python3.8/site-packages/pydrake/all.py)
I have the latest Drake and cannot import it.
Can anyone help?

As of pydrake v1.12.0, the MobyLcp C++ API is not bound in Python.
However, if you feed an LCP into Solve() then Drake can choose Moby to solve it. You can take advantage of this to create an instance of MobyLCP:
import numpy as np
from pydrake.all import (
    ChooseBestSolver,
    MakeSolver,
    MathematicalProgram,
)

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2)
prog.AddLinearComplementarityConstraint(np.eye(2), np.array([1, 2]), x)
moby_id = ChooseBestSolver(prog)
moby = MakeSolver(moby_id)
print(moby.SolverName())
# The output is: "Moby LCP".
# The C++ type of the `moby` object is drake::solvers::MobyLCPSolver.
That only allows calling Moby through the MathematicalProgram interface, however. MobyLCP-specific C++ functions such as SolveLcpFastRegularized would need to be added to the bindings code before they could be used from Python.
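For context, the linear complementarity problem that Moby solves in the snippet above asks for a vector z >= 0 such that w = Mz + q >= 0 and w.z = 0. With M = I and q = [1, 2], as passed to AddLinearComplementarityConstraint, z = 0 is a solution; a few lines of plain Python (no pydrake needed) can verify this:

```python
# Check an LCP solution by hand: find z >= 0 with w = M z + q >= 0
# and w . z = 0. Here M = I (2x2) and q = [1, 2], matching the
# AddLinearComplementarityConstraint call above.
M = [[1.0, 0.0], [0.0, 1.0]]
q = [1.0, 2.0]
z = [0.0, 0.0]  # candidate solution

# w = M z + q
w = [sum(M[i][j] * z[j] for j in range(2)) + q[i] for i in range(2)]

feasible = all(zi >= 0 for zi in z) and all(wi >= 0 for wi in w)
complementary = sum(wi * zi for wi, zi in zip(w, z)) == 0
print(feasible and complementary)  # prints: True
```

Since q is already nonnegative, z = 0 satisfies all three conditions, which is why this tiny program is a well-posed LCP for the solver.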
You can file a feature request on the Drake GitHub page when you need access to C++ classes or functions that aren't bound into Python yet, or, even better, open a pull request with the bindings you need.

Related

How to monitor a Python application (without Django or another framework) in Instana

I have a simple Python application without any framework or API calls.
How will I monitor this Python application with Instana on Kubernetes?
I want a code snippet to add to the Python application that traces it and displays the traces in Instana.
How will I monitor this Python application with Instana on Kubernetes?
There is a publicly available guide that should help you set up the Kubernetes agent.
I have a simple Python application without any framework or API calls.
Well, Instana is for distributed tracing, meaning distributed components calling each other, predominantly through each other's APIs, using known frameworks (with registered spans).
Nevertheless, you could make use of an SDKSpan; here is a super simple example:
import os
os.environ["INSTANA_TEST"] = "true"

import instana
import opentracing.ext.tags as ext
from instana.singletons import get_tracer
from instana.util.traceutils import get_active_tracer


def foo():
    tracer = get_active_tracer()
    with tracer.start_active_span(
        operation_name="foo_op",
        child_of=tracer.active_span
    ) as foo_scope:
        foo_scope.span.set_tag(ext.SPAN_KIND, "exit")
        result = 20 + 1
        foo_scope.span.set_tag("result", result)
        return result


def main():
    tracer = get_tracer()
    with tracer.start_active_span(operation_name="main_op") as main_scope:
        main_scope.span.set_tag(ext.SPAN_KIND, "entry")
        answer = foo() + 21
        main_scope.span.set_tag("answer", answer)


if __name__ == '__main__':
    main()
    spans = get_tracer().recorder.queued_spans()
    print('\nRecorded Spans and their IDs:',
          *[(index,
             span.s,
             span.data['sdk']['name'],
             dict(span.data['sdk']['custom']['tags']),
             ) for index, span in enumerate(spans)],
          sep='\n')
This should work in any environment, even without an agent, and should give you output like this:
Recorded Spans and their IDs:
(0, 'ab3af60079f3ca57', 'foo_op', {'span.kind': 'exit', 'result': 21})
(1, '53b67f7298684cb7', 'main_op', {'span.kind': 'entry', 'answer': 42})
Of course, in production you wouldn't want to print the recorded spans, but rather send them to a properly configured agent, so you should remove the INSTANA_TEST setting.

Using Mosek with Drake on Deepnote

ValueError: "MosekSolver cannot Solve because MosekSolver::available() is false, i.e., MosekSolver has not been compiled as part of this binary. Refer to the MosekSolver class overview documentation for how to compile it."
Hi, I got the above error when trying to use the Mosek solver in Drake. It is not clear to me how to enable Mosek in Deepnote with Drake. Do I need to include something in the Dockerfile or the init file? Any tips would be appreciated.
Links I looked at:
https://drake.mit.edu/pydrake/pydrake.solvers.mosek.html
https://drake.mit.edu/bazel.html#mosek
Mosek+Drake does work on Deepnote. The workflow is like this:
Obtain a Mosek license file (from the Mosek website), and upload it to Deepnote.
Set an environment variable to tell Drake where to find the license file. For instance, you can add the following at the top of your notebook:
import os
os.environ["MOSEKLM_LICENSE_FILE"] = "mosek.lic"
Now MosekSolver.available() should be True, and Mosek will even be chosen as the preferred solver when you simply call Solve(prog).
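A common failure mode is that the path in the variable doesn't match where the license file was actually uploaded. A small stdlib-only sanity check (the "mosek.lic" name is this example's assumption; adjust it to your upload location):

```python
import os

# Point Mosek's license manager at the uploaded file; a relative path
# is resolved against the notebook's working directory.
os.environ["MOSEKLM_LICENSE_FILE"] = "mosek.lic"

# Warn early if the file isn't where the variable says it is.
lic = os.environ["MOSEKLM_LICENSE_FILE"]
if not os.path.isabs(lic) and not os.path.isfile(lic):
    print(f"warning: {lic!r} not found under {os.getcwd()}")
```

If the warning fires, move the license file or change the variable before constructing MosekSolver.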
Note: Please be very careful not to share the Deepnote notebook with your mosek.lic uploaded.

Pointcloud Visualization in Drake Visualizer in Python

I would like to visualize a point cloud in drake-visualizer using the Python bindings.
I imitated how images are published through LCM from here, and checked out these two issues (14985, 14991). The snippet is as follows:
point_cloud_to_lcm_point_cloud = builder.AddSystem(PointCloudToLcm())
point_cloud_to_lcm_point_cloud.set_name('pointcloud_converter')
builder.Connect(
    station.GetOutputPort('camera0_point_cloud'),
    point_cloud_to_lcm_point_cloud.get_input_port()
)

point_cloud_lcm_publisher = builder.AddSystem(
    LcmPublisherSystem.Make(
        channel="DRAKE_POINT_CLOUD_camera0",
        lcm_type=lcmt_point_cloud,
        lcm=None,
        publish_period=0.2,
        # use_cpp_serializer=True
    )
)
point_cloud_lcm_publisher.set_name('point_cloud_publisher')
builder.Connect(
    point_cloud_to_lcm_point_cloud.get_output_port(),
    point_cloud_lcm_publisher.get_input_port()
)
However, I got the following runtime error:
RuntimeError: DiagramBuilder::Connect: Mismatched value types while connecting output port lcmt_point_cloud of System pointcloud_converter (type drake::lcmt_point_cloud) to input port lcm_message of System point_cloud_publisher (type drake::pydrake::Object)
When I set use_cpp_serializer=True, the error becomes:
LcmPublisherSystem.Make(
  File "/opt/drake/lib/python3.8/site-packages/pydrake/systems/_lcm_extra.py", line 71, in _make_lcm_publisher
    serializer = _Serializer_[lcm_type]()
  File "/opt/drake/lib/python3.8/site-packages/pydrake/common/cpp_template.py", line 90, in __getitem__
    return self.get_instantiation(param)[0]
  File "/opt/drake/lib/python3.8/site-packages/pydrake/common/cpp_template.py", line 159, in get_instantiation
    raise RuntimeError("Invalid instantiation: {}".format(
RuntimeError: Invalid instantiation: _Serializer_[lcmt_point_cloud]
I saw the cpp example here, so maybe this issue is specific to python binding.
I also saw this python example, but thought using 'PointCloudToLcm' might be more convenient.
P.S.
I am aware of the development in recent commits on MeshcatVisualizerCpp and MeshcatPointCloudVisualizerCpp, but I am still on the drake-dev stable build 0.35.0-1 and want to stay on drake-visualizer until the Meshcat C++ support is more mature.
The old pydrake.systems.meshcat_visualizer.MeshcatVisualizer is a bit too slow for my current use case (multiple objects dropping). I can visualize the point cloud with this visualization setting, but it takes too many machine resources.
Only the message types that are specifically bound in lcm_py_bind_cpp_serializers.cc can be used on an LCM message input/output port connection between C++ and Python. For all other LCM message types, the input/output port connection must be from a Python system to a Python system or a C++ System to a C++ System.
The lcmt_image_array is listed there, but not the lcmt_point_cloud.
If you're stuck using Drake's v0.35.0 capabilities, then I don't see any great solutions. Some options:
(1) Write your own PointCloudToLcm system in Python (by re-working the C++ code into Python, possibly with a narrower set of supported features / channels for simplicity).
(2) Write your own small C++ helper function MakePointCloudPublisherSystem(...) that calls LcmPublisherSystem::Make<lcmt_point_cloud> in C++, and bind it into Python. Then your Python code can call MakePointCloudPublisherSystem() and successfully connect it to the existing C++ PointCloudToLcm.

FastAI Question on data loading using TextList

My end goal is to implement ULMFiT using fastai to predict disaster tweets (as part of this Kaggle competition). What I'm trying to do is read the tweets from a DataFrame, but for reasons unknown to me I'm stuck at the data-loading stage. I'm simply unable to do so using the method below:
from fastai.text.all import *

train = pd.read_csv('../input/nlp-getting-started/train.csv')
dls_lm = (TextList.from_df(path, train, cols='text', is_lm=True)
          .split_by_rand_pct(0.1)
          #.label_for_lm()
          .databunch(bs=64))
This line throws: NameError: name 'TextList' is not defined.
I'm able to work around this problem with the code below:
dls_lm = DataBlock(
    blocks=TextBlock.from_df('text', is_lm=True),
    get_x=ColReader('text'),
    splitter=RandomSplitter(0.1)
    # using only 10% of the data for validation in order to learn more
)
dls_lm = dls_lm.dataloaders(train, bs=64, seq_len=72)
Why does this work and not the previous method?
Notebook Link for reference.
Which version of fastai are you running?
import fastai
print(fastai.__version__)
The TextList class is from fastai v1, but your import path is for fastai v2. In v2, TextList was replaced by https://docs.fast.ai/text.data.html#TextBlock (that's why it works with the DataBlock part, which is the right way to handle this).
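If you want your notebook to fail fast on the wrong API, you can branch on the dotted version string. A small sketch (shown on literal strings here; in practice you would pass fastai.__version__):

```python
def textlist_hint(version):
    """Return which text-data API the given fastai major version uses."""
    major = int(version.split(".")[0])
    if major >= 2:
        return "use DataBlock + TextBlock (TextList was removed in v2)"
    return "TextList is available (v1 data_block API)"

# e.g. textlist_hint(fastai.__version__)
print(textlist_hint("2.7.12"))
print(textlist_hint("1.0.61"))
```

This keeps the v1-vs-v2 check explicit instead of discovering it via a NameError at data-loading time.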

Have problems building neo4j GetAll server extension

I have been trying to build the example GetAll neo4j server extension, but unfortunately I cannot make it work. I installed the Windows version of neo4j and ran it as a server. I also installed the Python neo4jrestclient package, and I am accessing neo4j through Python scripts. The following works fine:
from neo4jrestclient.client import GraphDatabase
gdb = GraphDatabase("http://localhost:7474/db/data/")
print gdb.extensions
It gives me "CypherPlugin" and "GremlinPlugin". I want to build the example GetAll server extension, which is written in Java. I am using Eclipse. I am able to create the jar file at "c:\neo4j_installation_root\neo4j-community-1.7\plugins\GetAll.jar", but when I restart the neo4j server and run neo4jrestclient, it does not show the GetAll server extension. I have searched a lot, but in vain. I have lots of experience with C++ and Python but am new to Java. I would really appreciate some help building neo4j server extensions; it is critically important for my evaluation of neo4j.
Are you sure there is a META-INF/services file listing the plugin class, and that the jar file is created with intermediate directories (which is not the default in Eclipse's export settings), so the dirs are seen by the classloader?
Check out the tips at http://docs.neo4j.org/chunked/snapshot/server-plugins.html
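For reference, this is the shape of the registration the classloader looks for: a text file inside the jar at META-INF/services/org.neo4j.server.plugins.ServerPlugin whose content is the fully qualified plugin class name. The package below is the one used by the GetAll example in the Neo4j manual; adjust it to your own class:

```
org.neo4j.examples.server.plugins.GetAll
```

If this file is missing, or the jar lacks the intermediate META-INF/services directories, the server will silently ignore the plugin.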
You can do get-all with Bulbs (http://bulbflow.com) without building an extension:
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
>>> g.vertices.get_all()
>>> g.edges.get_all()
Custom models work the same way:
# people.py
from bulbs.model import Node, Relationship
from bulbs.property import String, Integer, DateTime
from bulbs.utils import current_datetime

class Person(Node):
    element_type = "person"

    name = String(nullable=False)
    age = Integer()

class Knows(Relationship):
    label = "knows"

    created = DateTime(default=current_datetime, nullable=False)
And then call get_all on the model proxies:
>>> from people import Person, Knows
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
>>> g.add_proxy("people", Person)
>>> g.add_proxy("knows", Knows)
>>> james = g.people.create(name="James")
>>> julie = g.people.create(name="Julie")
>>> g.knows.create(james, julie)
>>> g.people.get_all()
>>> g.knows.get_all()
