I am quite new to C++ socket programming. Since I am on an FRC team, I need to communicate between my application and the Compact RIO via an interface known as "Network Tables". I need to communicate from my C++ vision application to our robot code in Java. How do I implement NetworkTables in regular C++?
Here is what I did in Python, but the concept is the same. The goal is to move motors based on values (sensor data) you receive from your driver station, and the data transfers are done through NetworkTables. So, how do you accomplish this?
First, initialize:
from networktables import NetworkTables
# As a client to connect to a robot
NetworkTables.initialize(server='roborio-XXX-frc.local')
After creating the instance you will be able to access NetworkTables connections, configure settings and listeners, and create table objects, which are what is actually used to send data.
Next, get a table and read/write values:
sd = NetworkTables.getTable('SmartDashboard')
sd.putNumber('someNumber', 1234)
otherNumber = sd.getNumber('otherNumber')
Here, we're interacting with the SmartDashboard table and calling two methods to send and receive values.
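Since NetworkTables connects in the background, a minimal sketch of a connection listener can help make sure you only read and write after the connection is actually up. This follows the pynetworktables client example; the threading.Condition is just one way to block until connected, not something the API requires:

import threading
from networktables import NetworkTables

cond = threading.Condition()
notified = [False]

def connection_listener(connected, info):
    # Called by NetworkTables whenever the connection state changes.
    print(info, '; Connected=%s' % connected)
    with cond:
        notified[0] = True
        cond.notify()

NetworkTables.initialize(server='roborio-XXX-frc.local')
NetworkTables.addConnectionListener(connection_listener, immediateNotify=True)

# Block until we are actually connected before reading/writing values.
with cond:
    if not notified[0]:
        cond.wait()

sd = NetworkTables.getTable('SmartDashboard')
sd.putNumber('someNumber', 1234)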
Another example, from the API docs:
#!/usr/bin/env python3
#
# This is a NetworkTables server (eg, the robot or simulator side).
#
# On a real robot, you probably would create an instance of the
# wpilib.SmartDashboard object and use that instead -- but it's really
# just a passthru to the underlying NetworkTable object.
#
# When running, this will continue incrementing the value 'robotTime',
# and the value should be visible to networktables clients such as
# SmartDashboard. To view using the SmartDashboard, you can launch it
# like so:
#
# SmartDashboard.jar ip 127.0.0.1
#
import time
from networktables import NetworkTables
# To see messages from networktables, you must set up logging
import logging
logging.basicConfig(level=logging.DEBUG)
NetworkTables.initialize()
sd = NetworkTables.getTable("SmartDashboard")
i = 0
while True:
    print("dsTime:", sd.getNumber("dsTime", -1))
    sd.putNumber("robotTime", i)
    time.sleep(1)
    i += 1
I have a simple Python application without any framework or API calls.
How will I monitor the Python application on Instana on Kubernetes?
I want a code snippet to add to the Python application which traces the application
and displays it on Instana.
How will I monitor the Python application on Instana on Kubernetes?
There is a publicly available guide that should help you set up the Kubernetes agent.
I have a simple Python application without any framework or API calls.
Well, Instana is for distributed tracing, meaning distributed components calling each other's APIs, predominantly by using known frameworks (with registered spans).
Nevertheless, you could make use of an SDKSpan; here is a super simple example:
import os

os.environ["INSTANA_TEST"] = "true"

import instana
import opentracing.ext.tags as ext
from instana.singletons import get_tracer
from instana.util.traceutils import get_active_tracer


def foo():
    tracer = get_active_tracer()
    with tracer.start_active_span(
            operation_name="foo_op",
            child_of=tracer.active_span) as foo_scope:
        foo_scope.span.set_tag(ext.SPAN_KIND, "exit")
        result = 20 + 1
        foo_scope.span.set_tag("result", result)
        return result


def main():
    tracer = get_tracer()
    with tracer.start_active_span(operation_name="main_op") as main_scope:
        main_scope.span.set_tag(ext.SPAN_KIND, "entry")
        answer = foo() + 21
        main_scope.span.set_tag("answer", answer)


if __name__ == '__main__':
    main()
    spans = get_tracer().recorder.queued_spans()
    print('\nRecorded Spans and their IDs:',
          *[(index,
             span.s,
             span.data['sdk']['name'],
             dict(span.data['sdk']['custom']['tags']),
             ) for index, span in enumerate(spans)],
          sep='\n')
This should work in any environment, even without an agent, and should give you output like this:
Recorded Spans and their IDs:
(0, 'ab3af60079f3ca57', 'foo_op', {'span.kind': 'exit', 'result': 21})
(1, '53b67f7298684cb7', 'main_op', {'span.kind': 'entry', 'answer': 42})
Of course, in production you wouldn't want to print the recorded spans but send them to a properly configured agent, so you should remove the INSTANA_TEST setting.
Our project has a few gRPC servers defined as go_binary targets. We develop client SDKs for Java and Python applications and we would like to use java_test and py_test. Is there any way to start a specific go_binary target before java_test or py_test runs?
You can create a test harness that starts the gRPC server before running the tests. For example, you could add the binary to the data attribute of the test and then start it from within the test:
go_binary(
name = "my_grpc_server",
[...]
)
py_test(
name = "my_test",
[...]
data = [":my_grpc_server"],
)
and then inside the test file:
import subprocess
import unittest

from bazel_tools.tools.python.runfiles import runfiles


class ClientTestCase(unittest.TestCase):
    def setUp(self):
        # Locate and start the server binary from the test's runfiles.
        r = runfiles.Create()
        self.server = subprocess.Popen([r.Rlocation("path/to/my_grpc_server")])

    def tearDown(self):
        self.server.terminate()
        self.server.wait()
This example is very simple, you'll probably run into issues regarding the availability of the port the server listens on, or waiting for the server to start up. You could add flags to your gRPC server to allow communication over a domain socket, or make it listen on an unused port and have the test parse the port number from the server's log output.
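For the start-up race specifically, one low-tech option is to poll the port from setUp before running any test logic. Here is a minimal sketch, assuming the server listens on a known TCP port; the wait_for_port helper and the port number are hypothetical, not part of the example above:

import socket
import time


def wait_for_port(port, host="localhost", timeout=30.0):
    # Poll until the server accepts TCP connections, or give up after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return
        except OSError:
            time.sleep(0.1)
    raise TimeoutError("server did not start listening on port %d" % port)

In setUp you would start the Popen and then call wait_for_port with whatever port you told the server to listen on.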
For details on finding the server with runfiles: https://github.com/bazelbuild/bazel/blob/a7a0d48fbeb059ee60e77580e5d05baeefdd5699/tools/python/runfiles/runfiles.py#L16-L58
If you find yourself copy-pasting this pattern a lot, or having to implement it in multiple languages, you could try using an sh_test() rule to wrap the underlying py_test or java_test, have it start the server, then start the test with an environment variable telling it how to reach the server (e.g. MY_GRPC_SERVER_ADDRESS=localhost:${test_port}).
We are setting up a federated scenario with Server and Client on different physical machines.
On the server, we have used the Docker container to kickstart:
The above has been borrowed from the Kubernetes tutorial. We believe this creates a 'local executor' [Ref 1] which helps create a gRPC server [Ref 2].
Ref 1:
Ref 2:
Next, on client 1, we are calling tff.framework.RemoteExecutor, which connects to the gRPC server.
Our understanding based on the above is that the RemoteExecutor runs on the client and connects to the gRPC server.
Assuming the above is correct, how can we send a tff.tf_computation from the server to the client and print the output on the client side to ensure the whole setup works well?
Your understanding is definitely correct.
If you construct an ExecutorFactory directly, as seems to be the case in the code above, passing it to tff.framework.set_default_context will install your remote stack as the default mechanism for executing computations in the TFF runtime. You should additionally be able to pass the appropriate channels to tff.backends.native.set_remote_execution_context to handle the remote executor construction and context installation if desired, but the way you are doing it certainly works, and allows for greater customization.
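For illustration, here is a minimal sketch of that second option, i.e. handing gRPC channels to tff.backends.native.set_remote_execution_context. The worker address is hypothetical, and the exact signature may vary between TFF versions, so treat this as an assumption to check against your installed release:

import grpc
import tensorflow_federated as tff

# Hypothetical address of the remote executor service exposed by the server container.
channels = [grpc.insecure_channel('worker.example.com:8000')]

# Builds the remote execution stack and installs it as the default TFF context,
# so subsequent computations are dispatched to the remote worker(s).
tff.backends.native.set_remote_execution_context(channels)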
Once you have set this up, running an example end-to-end should be fairly simple. We will set up a computation which takes a set of federated integers, prints on the clients, and sums the integers up. Let:
import tensorflow as tf
import tensorflow_federated as tff


@tff.tf_computation(tf.int32)
def print_and_return(x):
    # We must use tf.print here, as this logic will be
    # serialized and run on the clients as TensorFlow.
    tf.print('hello world')
    return x


@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def print_and_sum(federated_arg):
    same_ints = tff.federated_map(print_and_return, federated_arg)
    return tff.federated_sum(same_ints)
Suppose we have N clients; we simply instantiate the set of federated integers, and invoke our computation.
federated_ints = [1] * N
total = print_and_sum(federated_ints)
assert total == N
This should cause the tf.prints defined above to run on the remote machine; as long as tf.print is directed to an output stream which you can monitor, you should be able to see it.
PS: you may note that the federated sum above is unnecessary; it certainly is. The same effect can be had by simply mapping the identity function with the serialized print.
I have a ROS node that allows you to "publish" a data structure to it, to which it responds by publishing an output. The timestamps of what I published and what it publishes are matched.
Is there a mechanism for a blocking function where I send/publish my data and it waits until I receive the output?
I think you need the ROS services (client/server) pattern instead of publisher/subscriber.
Here is a simple example to do that in Python:
Client code snippet:
import rospy
from test_service.srv import MySrvFile

rospy.wait_for_service('a_topic')
try:
    send_hi = rospy.ServiceProxy('a_topic', MySrvFile)
    print('Client: Hi, do you hear me?')
    resp = send_hi('Hi, do you hear me?')
    print("Server: {}".format(resp.response))
except rospy.ServiceException as e:
    print("Service call failed: %s" % e)
Server code snippet:
import rospy
from test_service.srv import MySrvFile, MySrvFileResponse


def callback_function(req):
    print(req)
    return MySrvFileResponse('Hello client, your message received.')


rospy.init_node('server')
rospy.Service('a_topic', MySrvFile, callback_function)
rospy.spin()
MySrvFile.srv
string request
---
string response
Server out:
request: "Hi, do you hear me?"
Client out:
Client: Hi, do you hear me?
Server: Hello client, your message received.
Learn more in the ROS wiki.
Project repo on GitHub.
[UPDATE]
If you are looking for fast communication, TCPROS is not what you want, because it is slower than a broker-less communicator like ZeroMQ (which has low latency and high throughput):
The ROS service pattern's equivalent in ZeroMQ is REQ/REP (client/server); see the sketch after this list.
The ROS publisher/subscriber pattern's equivalent in ZeroMQ is PUB/SUB.
The ROS publisher/subscriber with wait_for_message is equivalent to PUSH/PULL in ZeroMQ.
ZeroMQ is available in both Python and C++.
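To make the REQ/REP analogy concrete, here is a minimal pyzmq sketch of the same "Hi, do you hear me?" exchange. The port number is arbitrary, and the two halves would normally live in separate processes; a thread is used here only to keep the example self-contained:

import threading
import zmq


def server():
    # REP socket: the ZeroMQ counterpart of the ROS service server.
    rep = zmq.Context.instance().socket(zmq.REP)
    rep.bind("tcp://*:5555")
    request = rep.recv_string()                 # blocks until a request arrives
    print("Server received:", request)
    rep.send_string("Hello client, your message received.")


def client():
    # REQ socket: the ZeroMQ counterpart of the ROS service client.
    req = zmq.Context.instance().socket(zmq.REQ)
    req.connect("tcp://localhost:5555")
    req.send_string("Hi, do you hear me?")
    print("Client received:", req.recv_string())  # blocks until the reply arrives


t = threading.Thread(target=server)
t.start()
client()
t.join()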
Also, to transfer huge amounts of data (e.g. a point cloud), there is a mechanism in ROS called nodelet, which is supported only in C++. This communication is based on shared memory on a single machine instead of a TCPROS socket.
What exactly is a nodelet?
Since you want to stick with publish/subscribe, assuming from your comment that services are too slow, I would have a look at waitForMessage (Documentation).
For an example of how to use it, you can have a look at this ROS Answers question.
All you need to do is publish your data, immediately call waitForMessage on the output topic, and manually pass the received message to your "callback".
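For illustration, here is a minimal rospy sketch of that pattern; the topic names, message type, and timeout are hypothetical (in C++ you would use ros::topic::waitForMessage instead):

import rospy
from std_msgs.msg import Float64  # hypothetical message type

rospy.init_node('blocking_caller')
pub = rospy.Publisher('/input_topic', Float64, queue_size=1)
rospy.sleep(0.5)  # crude: give the publisher time to connect to subscribers

# Publish the request, then block until the node publishes its result.
pub.publish(Float64(3.14))
result = rospy.wait_for_message('/output_topic', Float64, timeout=5.0)
rospy.loginfo('Got result: %s', result.data)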
I hope this is what you were looking for.
To get this request/reply behaviour, ROS has a mechanism called ROS services.
You can specify the input and output of your service in a service file, similar to a ROS message definition. You can then call a node's service with your input, and the call will receive an output when the service is finished.
Here is a tutorial on how to use this mechanism in Python. If you prefer C++, there is also one; you should find it easily.
I'm a newbie with Thingsquare.
I'm trying to create an IP64 gateway with the MB851 board (STM32W108CC microcontroller).
I can send and receive data (IPv6 radio) using two MB851 boards with the Thingsquare udp-multicast example.
I modified the mist-mb851 platform to include an enc28j60-arch.c file that implements the platform-specific SPI functions called by the enc28j60 driver.
I modified ip64-conf.h to include the enc28j60 driver and the fallback interface:
include "ip64-eth-interface.h"
include "enc28j60-ip64-driver.h"
define IP64_CONF_UIP_FALLBACK_INTERFACE ip64_eth_interface
define IP64_CONF_INPUT ip64_eth_interface_input
define IP64_CONF_DHCP 1
define IP64_CONF_ETH_DRIVER enc28j60_ip64_driver
I modified Contiki/platform/mb851 to include the STM32 peripheral libraries to create the SPI driver.
The enc28j60 driver is tested.
I compile the Thingsquare router-node example, but when the DHCP process is initialized:
ip64_init(); ---> ip64_ipv4_dhcp_init(); ---> ip64_dhcpc_request(); ---> handle_dhcp(); ---> send_discover();
nothing happens.
Debugging the code, when the tcpip_ipv6_output() function is called to send the packet, the function ip64_6to4(...) fails and I don't know why.
Best Regards