Accessing and monitoring log file in POX controller - logfile

I want to do some analysis on the log file of the POX controller, and I have to do it online. For that, I need the controller's log file as it accumulates at runtime (for example, recording information while h1 pings h2).
Can anybody help me find the log file in POX with the in-network information? Thanks in advance.

You can add listeners for the stats coming from the switches. Add them like so:
core.openflow.addListenerByName("FlowStatsReceived", self._handle_flowstats_received)
core.openflow.addListenerByName("PortStatsReceived", self._handle_portstats_received)
core.openflow.addListenerByName("QueueStatsReceived", self._handle_qeuestats_received)
And then in some class methods:
def _handle_queuestats_received(self, event):
    """
    Handler to manage queue statistics received.
    Args:
        event: QueueStatsReceived event from openflow
    """
    stats = flow_stats_to_list(event.stats)
    # log.info("QueueStatsReceived from %s: %s", dpidToStr(event.connection.dpid), stats)
and
def _handle_portstats_received(self, event):
    """
    Handler to manage port statistics received.
    Args:
        event: PortStatsReceived event from openflow
    """
    print event.stats
and a method for flow stats. You will get the point. For a full example check https://github.com/tsartsaris/pythess-SDN/blob/master/pythess.py
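If you also want this information accumulating in a file on disk (which is what the question asks for), you can combine those listeners with POX's log component and a periodic stats request. Below is a minimal standalone sketch along those lines; the module name my_stats_logger and the 5-second interval are just illustrative, and you would launch it with something like ./pox.py log --file=pox.log forwarding.l2_learning my_stats_logger (check the log component's options for the exact file syntax):
# my_stats_logger.py (illustrative name) - periodically asks every connected
# switch for flow and port stats and writes the replies through POX's logger,
# which the "log" component can redirect to a file.
from pox.core import core
from pox.lib.util import dpidToStr
from pox.lib.recoco import Timer
from pox.openflow.of_json import flow_stats_to_list
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _request_stats():
    # send a flow-stats and a port-stats request to every connected switch
    for connection in core.openflow._connections.values():
        connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))
        connection.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_flowstats_received(event):
    stats = flow_stats_to_list(event.stats)
    log.info("FlowStatsReceived from %s: %s", dpidToStr(event.connection.dpid), stats)

def _handle_portstats_received(event):
    log.info("PortStatsReceived from %s: %s", dpidToStr(event.connection.dpid), event.stats)

def launch():
    core.openflow.addListenerByName("FlowStatsReceived", _handle_flowstats_received)
    core.openflow.addListenerByName("PortStatsReceived", _handle_portstats_received)
    Timer(5, _request_stats, recurring=True)  # poll every 5 seconds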

Related

UMQTT.simple function to check client.ping() callback

I am trying to get my head around UMQTT.simple. I am looking to handle instances in which my server might disconnect for a reboot. I want to check whether the client is connected, and if not, wait some period and try to reconnect. The guidance seems to be to use client.ping() for this (How to check Micropython umqtt client is connected?).
For the paho MQTT client I see there is a way to access ping responses in the logs function (see here: http://www.steves-internet-guide.com/mqtt-keep-alive-by-example/). For UMQTT the docs indicate that the ping response is handled automatically by wait_msg(): "Ping server (response is automatically handled by wait_msg())" (https://mpython.readthedocs.io/en/master/library/mPython/umqtt.simple.html). There does not appear to be any analogous logs function mentioned in the UMQTT.simple docs.
This is confounding for a couple of reasons:
If I use client.wait_msg(), how do I call client.ping()? client.wait_msg() is a blocking function, so I can't make the ping. The system just disconnects when the keepalive time is reached.
If I call client.check_msg() and client.ping() intermittently, I can't access the callback. My callback function doesn't have parameters to access the ping response (the params are f(topic, msg) in the docs).
The way I am solving this for now is to set a bunch of try-except calls on my client.connect and then connect-and-subscribe functions, but it's quite verbose. Is this the way to handle it, or can I take advantage of the ping response in UMQTT.simple?
Below is a sample of the code I am running:
from umqtt.simple import MQTTClient #imports added for completeness
import time

#Set broker variables and login credentials
#Connect to the network

#write the subscribe callback
def sub_cb(topic, msg):
    print((topic, msg))

#write a function that handles connecting and subscribing
def connect_and_subscribe():
    global CLIENT_NAME, BROKER_IP, USER, PASSWORD, TOPIC
    client = MQTTClient(client_id=CLIENT_NAME,
                        server=BROKER_IP,
                        user=USER,
                        password=PASSWORD,
                        keepalive=60)
    client.set_callback(sub_cb)
    client.connect()
    client.subscribe(TOPIC)
    print('Connected to MQTT broker at: %s, subscribed to %s topic' % (BROKER_IP, TOPIC))
    return client #return the client so that I can do stuff with it

client = connect_and_subscribe()

#Check messages
now = time.time()
while True:
    try:
        client.check_msg()
    except OSError as message_error: #except if disconnected and check_msg() fails
        if message_error == -1:
            time.sleep(30) #wait for reboot
            try:
                client = connect_and_subscribe() #Try connecting to the server again
            except OSError as connect_error: #If the server is still down
                time.sleep(30) #wait and try again
                try:
                    client = connect_and_subscribe()
                except:
                    quit() #Quit so that I don't get stuck in a loop
    time.sleep(0.1)
    if time.time() - now > 80: #ping to keep the connection alive (60 * 1.5)
        client.ping()
        now = time.time() #reset the timer
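As a side note on the verbosity: the nested try-excepts can be collapsed into a small bounded-retry helper. This is only a sketch of the same idea (the helper name, retry count and delay are illustrative assumptions), reusing connect_and_subscribe() and time from the code above:
#sketch of a bounded-retry reconnect helper (illustrative only)
def reconnect_with_retries(retries=3, delay=30):
    for attempt in range(retries):
        time.sleep(delay) #wait for the broker to come back
        try:
            return connect_and_subscribe()
        except OSError: #broker still down, try again
            pass
    raise RuntimeError('broker unreachable after %d attempts' % retries)

#the except branch in the main loop then becomes:
#    client = reconnect_with_retries()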

ThingsBoard IoT Gateway doesn't update MQTT values

I am trying to receive simple text values from external MQTT broker topics with the IoT Gateway.
For this purpose I simplified the existing script (extensions/mqtt/custom_mqtt_uplink_converter.py):
from thingsboard_gateway.connectors.mqtt.mqtt_uplink_converter import MqttUplinkConverter, log

class CustomMqttUplinkConverter(MqttUplinkConverter):
    def __init__(self, config):
        self.__config = config.get('converter')
        self.dict_result = {}

    def convert(self, topic, body):
        try:
            log.debug("New data received: %s: %s" % (topic, body))
            # if topic = '/devices/buzzer/controls/volume', the device name will be 'buzzer'
            self.dict_result["deviceName"] = topic.split("/")[2]
            # just hardcode this
            self.dict_result["deviceType"] = "buzzer"
            self.dict_result["telemetry"] = {"data": body}
            log.debug("Result: %s" % (self.dict_result))
            return self.dict_result
        except Exception as e:
            log.exception(e)
When I start the gateway I can see in its log that it successfully connects and reads the values:
INFO ... MQTT Broker Connector connected to 10.1.1.2:1883 - successfully.'
DEBUG ... Client <paho.mqtt.client.Client object at 0x7fb42d19dd68>, userdata None, flags {'session present': 0}, extra_params ()'
DEBUG ... <module 'CustomMqttUplinkConverter' from '/var/lib/thingsboard_gateway/extensions/mqtt/custom_mqtt_uplink_converter.py'>'
DEBUG ... Import CustomMqttUplinkConverter from /var/lib/thingsboard_gateway/extensions/mqtt.'
DEBUG ... Converter CustomMqttUplinkConverter for topic /devices/buzzer/controls/volume - found!'
INFO ... Connector "MQTT Broker Connector" subscribe to /devices/buzzer/controls/volume'
DEBUG ... Received data: {}'
DEBUG ... (None,)'
INFO ... "MQTT Broker Connector" subscription success to topic /devices/buzzer/controls/volume, subscription message id = 1'
DEBUG ... New data received: /devices/buzzer/controls/volume: 66'
DEBUG ... Result: {'deviceName': 'buzzer', 'deviceType': 'buzzer', 'telemetry': {'data': 66}}'
But these values are the last values it reads. If I change the volume on the broker, new values do not appear in the log or in the TB UI. (I monitor the updates with mosquitto_sub.)
It seems this converter is never called again until the gateway is restarted. Is that the correct behaviour?
How can I make sure that my code is correct if I don't see the result?
Hi, I have tried your version of the custom converter. It didn't work, but when I changed
self.dict_result["telemetry"] = {"data": body}
to
self.dict_result["telemetry"] = [{"data": body}]
it sent the data correctly.
The gateway requires an array of telemetry or attributes from the converter.
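In other words, the dictionary returned by convert() should carry a list for the telemetry (and likewise attributes) key. Here is a quick standalone sketch of the expected shape; the build_result helper is purely illustrative, reusing the topic and value from the question:
#illustrative helper (not part of the gateway API) showing the shape an
#uplink converter should return: telemetry as a list of dicts
def build_result(topic, body):
    return {
        "deviceName": topic.split("/")[2], #'buzzer' from /devices/buzzer/controls/volume
        "deviceType": "buzzer",
        "telemetry": [{"data": body}], #a list, not a bare dict
    }

print(build_result("/devices/buzzer/controls/volume", 66))
#{'deviceName': 'buzzer', 'deviceType': 'buzzer', 'telemetry': [{'data': 66}]}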

ESP8266, NodeMCU, soft AP - UDP server-like soft AP, independent access point

I am using a NodeMCU (with ESP8266-E) with upgraded firmware. All basic commands work perfectly, but there is one problem.
I wanted to create an independent access point which behaves like a UDP server. That means no direct connection to any other access point - a simple UDP-server-like soft AP.
I followed these steps:
I have uploaded a new firmware to NodeMCU.
I have downloaded ESPlorer for better work with NodeMCU.
I have uploaded the source code below.
I have connected to the NodeMCU access point on my desktop.
I have sent some strings to the NodeMCU using a Java UDP client program.
I have looked at the messages on ESPlorer.
NodeMCU has not received any such strings.
--
print("ESP8266 Server")
wifi.setmode(wifi.STATIONAP);
wifi.ap.config({ssid="test",pwd="12345678"});
print("Server IP Address:",wifi.ap.getip())
-- 30s timeout for an inactive client
srv = net.createServer(net.UDP, 30)
-- server listens on 5000, if data received, print data to console
srv:listen(5000, function(sk)
    sk:on("receive", function(sck, data)
        print("received: " .. data)
    end)
    sk:on("connection", function(s)
        print("connection established")
    end)
end)
When I tried to send a message using a Java application, there was no change in ESPlorer. Not even when I tried to send a message using the Hercules program (a great program for TCP/UDP communication).
I guess that maybe it is the wrong IP address. I am using the IP address of the AP and not the IP address of the station.
In other words, I am using this address, wifi.ap.getip(), and not this address, wifi.sta.getip(), for connections to the UDP server. But wifi.sta.getip() returns nil. Really, I don't know.
I will be glad for any advice.
Thank you very much.
Ok, let's restart this since you updated the question. I should have switched on my brain before I gave you the first hints, sorry about this.
UDP is connectionless and, therefore, there's of course no s:on("connection"). As a consequence you can't register your callbacks on a socket but on the server itself. It is in the documentation but it's easy to miss.
This should get you going:
wifi.setmode(wifi.STATIONAP)
wifi.ap.config({ ssid = "test", pwd = "12345678" })
print("Server IP Address:", wifi.ap.getip())
srv = net.createServer(net.UDP)
srv:listen(5000)
srv:on("receive", function(s, data)
print("received: " .. data)
s:send("echo: " .. data)
end)
I ran this against a firmware from the dev branch and tested from the command line like so
$ echo "foo" | nc -w1 -u 192.168.4.1 5000
echo: foo
ESPlorer then also correctly printed "received: foo".
This line is invalid Lua code. connected is in the wrong place here; you can't just put a single word after a function call:
print(wifi.ap.getip()) connected
I guess you intended to do something like
print(wifi.ap.getip() .. " connected")
Although I think you should add some error handling here in case wifi.ap.getip() does not return an IP.
Here you do not finish the function definition, nor do you complete the srv:on call:
srv:on("receive", function(srv, pl)
print("Strings received")
srv:listen(port)
I assume you just did not copy/paste the complete code.

Grails controller service - src/groovy poll controller for property value

I'm basically trying to display the percentage of a task that has been completed to the user on the screen, in an overlay template.
I have a service that is calculating the process percentage:
def progressCalculation(requestsToSend, requestsSent, requestsFailed, progressPercentage) {
    progressPercentage = 100 / requestsToSend * (requestsSent + requestsFailed)
    progressPercentage = Math.round(progressPercentage * 1) / 1
    MyController upCont = new MyController()
    upCont.progress(progressReport.progressPercentage)
}
This continually sends progressReport.progressPercentage to the controller:
def progress(progressData) {
    int statusToView = progressData
    if (statusToView % 5 == 0) {
        [statusToView: statusToView]
    }
}
I have created a src/groovy file that is using websockets from here: https://github.com/vahidhedayati/grails-websocket-example/blob/master/README.md
My connection is working, but I need to show the percentage on the view using the websocket.
@OnMessage
public String handleMessage(String message) {
    message = MyController.progressPercentage
    String replyMessage = "echo " + message
    return replyMessage
}
Now what I'm trying to do here is return the progressPercentage value from the controller to the src/groovy file, so that my view can be continually updated with the latest property value whilst the task is completing.
MyController upCont = new MyController() seriously?
It is a good idea to move the code that hosts and modifies the progressPercentage variable to the service layer and access it through a service rather than a controller:
myService.progressPercentage rather than MyController.progressPercentage
Also, you must inject myService, not instantiate it as myService = new MyService(); services are singletons, you cannot instantiate them like this. They are managed by the Spring container.
Actually if you do MyController upCont = new MyController()
and you try to access a property of upCont you will get this beautiful error message:
java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet: In this case, use RequestContextListener or RequestContextFilter to expose the current request.
I put those instructions together so if I can help you in any way do let me know.
Websockets require as much frontend work as backend work. So to get the data back via websockets, you need to expand the JavaScript as well as the backend websocket code that sends that information to the JavaScript.
So if you had a button on the frontend gsp that rather than was a typical
You can take a look at some of my plugins that already do this. There is a ping/pong that happens discreetly in https://github.com/vahidhedayati/jssh: if the user defines the websocket connection within the taglib, it triggers a pong that the frontend JavaScript receives and answers with a ping - and they continue doing this.
Here is another example which is what you probably need to use:
This is the result coming back from the websocket:
https://github.com/vahidhedayati/grails-jenkins-plugin/blob/master/grails-app/views/jen/_process.gsp#L411
which, when received, updates this span or div id:
https://github.com/vahidhedayati/grails-jenkins-plugin/blob/master/grails-app/views/jen/_process.gsp#L213
So you need to get your websocket to send the value back in some JSON format; your frontend JavaScript then picks up the JSON and, if it follows a certain convention, looks for the value and updates a div on the frontend.
There is a good video I have made on wschat which shows updating the frontend using a websocket client/server; it may help you understand this better:
https://www.youtube.com/watch?v=xagMYM9n3l0 or https://www.youtube.com/watch?v=zAySkzNid3E
(unsure which one it was in)
E2A: it will need to be a service:
https://github.com/vahidhedayati/grails-wschat-plugin/blob/master/src/groovy/grails/plugin/wschat/WsChatEndpoint.groovy#L63 - the few lines after that register those services in the websocket endpoint. Now, going back through the history of the code, or if you follow onMessage to verifyAction, you will need to send something from the frontend - or, when a connection is made, send a message to the frontend: https://github.com/vahidhedayati/grails-wschat-plugin/blob/75590bf10ea040c18548377dedc716fdab2aa820/src/groovy/grails/plugin/wschat/WsChatEndpoint.groovy#L148. You can use userSession to directly message the person making the socket connection. On the web page, use JavaScript to parse the JSON and update the div as mentioned above.

Any idea why requests to vertx embedded in grails are synchronously queued up

Environment: Mac osx lion
Grails version: 2.1.0
Java: 1.7.0_08-ea
If I start up vertx in embedded mode within Bootstrap.groovy and try to hit the same websocket endpoint through multiple browsers, the requests get queued up.
So depending on the timing of the requests, after one request is done with its execution the next request gets into the handler.
I've tried this with both websocket and SockJs and noticed the same behavior on both.
BootStrap.groovy (SockJs):
def vertx = Vertx.newVertx()
def server = vertx.createHttpServer()
def sockJSServer = vertx.createSockJSServer(server)
def config = ["prefix": "/eventbus"]
sockJSServer.installApp(config) { sock ->
    sleep(10000)
}
server.listen(8088)
javascript:
<script>
function initializeSocket(message) {
    console.log('initializing web socket');
    var socket = new SockJS("http://localhost:8088/eventbus");
    socket.onmessage = function(event) {
        console.log("received message");
    }
    socket.onopen = function() {
        console.log("start socket");
        socket.send(message);
    }
    socket.onclose = function() {
        console.log("closing socket");
    }
}
OR
BootStrap.groovy (Websockets):
def vertx = Vertx.newVertx()
def server = vertx.createHttpServer()
server.setAcceptBacklog(10000);
server.websocketHandler { ws ->
    println('**received websocket request')
    sleep(10000)
}.listen(8088)
javascript
socket = new WebSocket("ws://localhost:8088/ffff");
socket.onmessage = function(event) {
    console.log("message received");
}
socket.onopen = function() {
    console.log("socket opened")
    socket.send(message);
}
socket.onclose = function() {
    console.log("closing socket")
}
From the helpful folks at Vert.x:
def server = vertx.createHttpServer() is actually a verticle, and a verticle is a single-threaded process.
As bluesman says, each verticle goes in its own thread. You can spread your verticles across the cores in your hardware, even clustering them with more machines, but this only adds capacity to accept simultaneous requests.
When programming realtime apps, we should try to build the response as soon as possible to avoid blocking. If you think your operation can be time intensive, consider this model:
Make a request.
Pass the task to a worker verticle and assign the task a UUID (for example), and put it into the response. The caller now knows that the work is in progress and receives the response quickly.
When the worker finishes the task, put a notification on the event bus using the assigned UUID.
The caller checks the event bus for the task result.
This is typically done in a web application via websockets, SockJS, etc.
This way you can accept thousands of requests without blocking, and clients will receive the result without blocking the UI.
Vert.x uses the JVM to create a so-called "multi-reactor pattern", that is, a reactor pattern modified to perform better.
As far as I understood, it is not true that each verticle has its own thread: the fact is that each verticle is always served by the same event loop, but multiple verticles can be bound to the same event loop, and there can be multiple event loops. An event loop is basically a thread, so a few threads can serve many verticles.
I didn't use Vert.x in embedded mode (and I don't know if the main concept changes), but you should perform much better by instantiating many verticles for the job.
Regards,
Carlo
As mentioned before, the Vert.x concept is based on the reactor pattern, which means a single instance has at least one single-threaded event loop and processes events sequentially. Request processing may consist of several events; the point is to serve the request and each event with non-blocking routines.
E.g. when you wait for a WebSocket message, the request should be suspended and woken back up when the message arrives. Whatever you do with the message should also be non-blocking and thus asynchronous, like any file IO, network IO, or DB access. Vert.x provides the basic elements you should use to build such an async flow: Buffers, Pumps, Timers, the EventBus.
To wrap it up - just never block. The use of sleep(10000) kills the concept. If you really need to delay execution, use Vert.x's Timers instead.

Resources