Why are some entire subtopics absent from my Telegraf MQTT data? - mqtt
I am running InfluxDB and Telegraf, ingesting MQTT data into the database. I have an odd behavior wherein I am missing some data that I should be ingesting with Telegraf. I'm seeing loads of MQTT data from topics and subtopics that are siblings of the missing data, yet some data is still missing. I'm also not seeing anything indicating an error in the logs. This is my telegraf.conf:
[[outputs.influxdb_v2]]
urls = ["http://influxdb:8086"]
token = "$KTS_TELEMETRY_INFLUXDB_TOKEN"
organization = "$KTS_TELEMETRY_INFLUXDB_ORG"
bucket = "$KTS_TELEMETRY_INFLUXDB_BUCKET"
[[inputs.mqtt_consumer]]
data_format = "json"
servers = ["tcp://10.0.200.10:1883"]
topics = [
"/sample/#"
]
And I am getting data on topics like /sample/runout/data and other subtopics, but I'm not getting anything on one of my topics, /sample/runout/status. I can see that there definitely is data coming across the wire on that topic, however: running this command constantly gives me plenty of data.
mosquitto_sub -h 10.0.200.10 -t /sample/runout/status
Running the same command with /sample/runout/data also shows lots of messages just the same. Oddly /sample/runout/data is showing up in the database but /sample/runout/status is not.
This is one example of a message published to /sample/runout/status
{"RollingReady":false,"TwinSAFE_enable":false,"ReceivedDataValid":false,"FramesAligned":false,"FramesCoupled":false,"inner_frame":{"virtualAxisJogEnabled":true,"virtualAxisJogReady":false,"motionReady":false,"motionAccepted":false},"outer_frame":{"virtualAxisJogEnabled":true,"virtualAxisJogReady":false,"motionReady":false,"motionAccepted":false},"inner_big_vee":{"axesInPosition":true,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"outer_big_vee":{"axesInPosition":true,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_i1":{"axesInPosition":true,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_a1":{"axesInPosition":true,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_a2":{"axesInPosition":true,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_a3":{"axesInPosition":true,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_b1":{"axesInPosition":false,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_b2":{"axesInPosition":false,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_b3":{"axesInPosition":false,"enable
d":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_b4":{"axesInPosition":false,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_c1":{"axesInPosition":false,"enabled":false,"error":false,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_c2":{"axesInPosition":false,"enabled":false,"error":true,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_c3":{"axesInPosition":false,"enabled":false,"error":true,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_c4":{"axesInPosition":false,"enabled":false,"error":true,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_d1":{"axesInPosition":false,"enabled":false,"error":true,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false},"support_d2":{"axesInPosition":false,"enabled":false,"error":true,"initialized":false,"operational":false,"payoutMode":false,"payoutReady":false,"ready":false,"rollingMode":true,"rollingReady":false}}
The only thing of significance I can think of here is that all of the values in this subtopic are booleans, but InfluxDB supports booleans just fine. What am I missing? I haven't been able to find any helpful ways to debug this.
The answer here turned out to be that the input data_format = "json" does not support booleans. I needed to upgrade to json_v2. It was not exactly clear how to get the same behavior, because the documentation wasn't great, but I ended up with the following and it works just fine.
[[outputs.influxdb_v2]]
urls = ["http://influxdb:8086"]
token = "$KTS_TELEMETRY_INFLUXDB_TOKEN"
organization = "$KTS_TELEMETRY_INFLUXDB_ORG"
bucket = "$KTS_TELEMETRY_INFLUXDB_BUCKET"
[[inputs.mqtt_consumer]]
data_format = "json_v2"
servers = ["tcp://10.0.200.10:1883"]
topics = [
"/sample/#"
]
[[inputs.mqtt_consumer.json_v2]]
[[inputs.mqtt_consumer.json_v2.object]]
path = "#this"
excluded_keys = []
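For context on why the all-boolean payload vanished without an error: Telegraf's original json data format only turns numeric JSON values into fields (strings need to be listed explicitly, and booleans are simply dropped), so a message containing nothing but booleans produces a metric with zero fields, which is discarded silently. A rough, purely illustrative Python sketch of that flattening behavior (not Telegraf's actual code):

```python
# Illustrative sketch of the v1 "json" parser's behavior: only numeric
# values become fields, so an all-boolean payload yields zero fields
# and the metric is silently dropped.
def flatten_numeric(obj, prefix=""):
    fields = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            fields.update(flatten_numeric(value, name + "_"))
        elif isinstance(value, bool):
            continue  # booleans are ignored (checked before int, since bool is an int subclass)
        elif isinstance(value, (int, float)):
            fields[name] = float(value)
    return fields

status = {"RollingReady": False, "inner_frame": {"motionReady": False}}
data = {"temperature": 21.5, "inner_frame": {"speed": 3}}

print(flatten_numeric(status))  # {} -> no fields, nothing written to InfluxDB
print(flatten_numeric(data))    # {'temperature': 21.5, 'inner_frame_speed': 3.0}
```

With json_v2 and an object block as above, booleans are parsed as proper boolean fields, which is why the status topic finally shows up in the database.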
Related
Instagram Graph API Recent Search result returns blank data
Thank you for reviewing my question. I've been using the Instagram Graph API to do some hashtag recent searches.
# Retrieve keys
key = json.loads(keycontent)
HASHTAG_ID = key['HASHTAG_ID']
USER_ID = key['USER_ID']
ACCESS_TOKEN = key['ACCESS_TOKEN']
CURSOR = key['CURSOR']
topic = 'HASHTAG_ID'
# Job
# Get request URL
url = f"https://graph.facebook.com/{HASHTAG_ID}/recent_media?fields=id,permalink,caption,media_url&limit=50&user_id={USER_ID}&access_token={ACCESS_TOKEN}"
if CURSOR != "":
    url = url + '&after=' + CURSOR
res = requests.get(url)
print(res.json()['data'])
It works quite successfully, but the problem turns out to be that it starts returning blank data after calling the function several times. The data I am receiving at the moment is equal to one of the following:
{"data": []}
{ "data": [ ], "paging": { "cursors": { "after": "NEXT_CURSOR" }, "next": "LINK_WITH_NEXT_CURSOR" } }
I've checked several known issues; the list of what I've checked is stated below. It is not a permission issue: I've checked all the related permissions, and it is confirmed that the app has every permission it needs. The app is certainly below the execution limit; actually, it is mostly even below half of it. The app is also below the number of hashtags I can search: I've queried significantly fewer than 30 hashtags within 7 days. So, I'd like to know if there are potential reasons that I am getting blank data from the Instagram Graph API call. Thank you in advance.
New stream from a streaming topic (Json format) without any data
I have a streaming topic in JSON with 50 fields. I tried to create another stream with 1 field from the topic using KSQL, as below:
create stream data (timeGMT string) with (kafka_topic='json_data', value_format='json');
The stream was created successfully, however no data is returned from the KSQL query below:
select * from data;
This is running on KSQL 5.0.0
There are a few things to check, including:
Is there any data in the topic?
PRINT 'json_data' FROM BEGINNING;
Have you told KSQL to read from the beginning of the topic?
SET 'auto.offset.reset' = 'earliest';
Are there messages in your topic that aren't JSON or can't be parsed? Check the KSQL Server log for errors.
You can see more information on these, and other troubleshooting tips, in this blog.
Octokit GitHub API
I would like to get the number of pull requests and issues for a particular GitHub repo. At the moment the method I'm using is really clumsy. Using the octokit gem and the following code:
# Builds data that is sent to the API
def request_params
  data = { }
  # labels example: "bug,invalid,question"
  data["labels"] = labels.present? ? labels : ""
  # filter example: "assigned" "created" "mentioned" "subscribed" "all"
  data["filter"] = filter
  # state example: "open" "closed" "all"
  data["state"] = state
  return data
end
Octokit.auto_paginate = true
github = Octokit::Client.new(access_token: oauth_token)
github.list_issues("#{user}/#{repository}", request_params).count
The data received is extremely big, so it's very inefficient in terms of memory. I don't need the data about the issues, only how many there are: X issues (based on the filters / state / labels). I thought of a solution but was not able to implement it. Basically: do 1 request to get the header; in the header there should be a link to the last page. Then make 1 more request to the last page and check how many issues are on it. Then we can calculate:
count = ( (number of pages - 1) * issues-per-page ) + issues-on-last-page
But I could not find out how to get response header information from the octokit authenticated client. If there is a simple way of doing it without octokit, I will happily use it. Note: I want to fix this because the number of pull requests is quite high, and the code above generates R14 errors on Heroku. Thank you!
I feel an easy way is to use the GitHub Search API and restrict the number of PRs displayed per page using the per_page parameter. For example, to find all the PRs of the repo OneGet/oneget you can use:
https://api.github.com/search/issues?q=repo:OneGet/oneget+type:pr&per_page=1
The JSON response has the field "total_count", which gives the count of the total number of PRs. And the response will be relatively light, since it will have only one issue listed. Ref: Search Issues
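If it helps, here is a small Python sketch of that approach; the build_count_url helper is illustrative, not part of Octokit or the GitHub API:

```python
# Sketch: count PRs or issues via the Search API's total_count field
# instead of paginating every item. per_page=1 keeps the response body
# tiny; build_count_url is a hypothetical helper for this example.
def build_count_url(owner, repo, item_type="pr", per_page=1):
    # item_type is "pr" or "issue" per the Search API's type: qualifier
    return (f"https://api.github.com/search/issues"
            f"?q=repo:{owner}/{repo}+type:{item_type}&per_page={per_page}")

url = build_count_url("OneGet", "oneget")
print(url)
# Fetch this URL with any HTTP client and read json()["total_count"];
# sending an Authorization header raises the search rate limit.
```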
Publish a message from Vernemq plugin
I want to convert an HTTP request to an MQTT request. I receive the HTTP request, which contains information like the topic and the message to publish, and I have to publish the provided message to the provided topic. I am able to publish the message, but the problem is that I can only provide the topic and payload to the function I used. Here is the piece of code I wrote to publish:
Data = mochiweb_request:parse_post(Req),
{RegisterFun, PublishFun, SubscribeFun} = vmq_reg:direct_plugin_exports(http_to_mqtt),
Topic = get_value("topic", Data),
List_of_topics = string:tokens(Topic, "/"),
Lot = lists:map(fun(X) -> list_to_binary(X) end, List_of_topics),
Payload = list_to_binary(get_value("message", Data)),
error_logger:info_msg("Topics: ~p~nPayload: ~p", [Lot, Payload]),
PublishFun(Lot, Payload),
Req:ok({"text/html", [], "<p>Thank you. <p>"})
Here the PublishFun I get from vmq_reg only allows giving a topic and message. Is there any other way I can publish a message, also giving values for QoS, Retain, and Dup? I am creating a server using mochiweb and using it as a plugin in VerneMQ.
It is now possible in the new release of VerneMQ, as stated by Andre. Here is how it works:
Data = mochiweb_request:parse_post(Req),
{RegisterFun, PublishFun, SubscribeFun} = vmq_reg:direct_plugin_exports(http_to_mqtt),
Topic = get_value("topic", Data),
List_of_topics = string:tokens(Topic, "/"),
Lot = lists:map(fun(X) -> list_to_binary(X) end, List_of_topics),
Payload = list_to_binary(get_value("message", Data)),
Qos = erlang:list_to_integer(get_value("qos", Data)),
Retain = erlang:list_to_integer(get_value("retain", Data)),
error_logger:info_msg("Topics: ~p~nPayload: ~p~nQOS: ~p~nRetain: ~p", [Lot, Payload, Qos, Retain]),
PublishFun(Lot, Payload, #{qos => Qos, retain => Retain}),
Req:ok({"text/html", [], "<p>Thank you. <p>"})
It is not possible in the current version, but a planned feature for the future.
Fusion Tables API v2 still giving "Response size is larger than 10 MB. Please use media download." Error
The text here: https://developers.google.com/fusiontables/docs/v2/migration_guide implies that the 10MB limit is not in effect for API v2, or that an alternative service, "media download", can be used for large responses. The API reference here: https://developers.google.com/fusiontables/docs/v2/reference/ does not have any information regarding the 10MB limit, or how to use "media download" to receive your request. How do I work around the 10MB limit for Fusion Tables API v2? I can't seem to find documentation that explains it.
To use media download, simply add the parameter alt=media to the URL.
For those who use Google's API Client Libraries, 'media download' is specified by using a specific method. For the Python library, there are two versions of the SQL query methods: sql*() and sql*_media() (and this is very likely true for the other client libraries as well). Example usage:
# Build the googleapiclient service
FusionTables = build('fusiontables', 'v2', credentials=credentials)
query = 'select * from <table id>'
# "standard" query, returning fusiontables#sqlresponse JSON:
jsonRequest = FusionTables.query().sqlGet(sql=query)
jsonResponse = jsonRequest.execute()
# alt=media query, returning a CSV-formatted bytestring (in Python, at least):
bytestrRequest = FusionTables.query().sqlGet_media(sql=query)
byteResponse = bytestrRequest.execute()
As Kerry mentions here, media-format queries that are too large to be sent as a GET request will fail (while regular-format queries of the same length succeed, provided the query result is less than 10 MB). In the Python client, this failure appears as an HTTP 502: Bad Gateway error. Also note that ROWIDs are currently not returned in the media response format.