Lua resty_dogstatsd only publishes first instance

I have a metric client that looks something like:
package.path = package.path .. ';../?.lua'

local metrics = {}
local resty_dogstatsd = require('resty_dogstatsd')
local logger = require('module.utils.logger')
-- 'config' is assumed to be in scope here (e.g. required from a config module)
local _statsd = resty_dogstatsd.new({
    statsd = {
        host = config.dataDog.host,
        port = config.dataDog.port,
        namespace = config.dataDog.namespace
    },
    tags = config.dataDog.tags
})

function metrics.incMetric1()
    logger.debug('Updating metric metric1')
    _statsd:increment('metric1', 1)
end

return metrics
Then in my application I require it and use it:
local metrics = require('module.metrics')
-- somewhere, some condition
metrics.incMetric1()
This logs Updating metric metric1 and I can see the metric in Datadog. But it only publishes the first instance: until I restart nginx (service nginx restart), I will not get any more increments.
Update
So I have a start.lua in /etc/nginx/conf.d/start.lua that is:
package.path = package.path .. ';/path/to/my/app/?.lua'
local app = require('app.init');
app.start()
And the nginx config is
rewrite_by_lua_file /etc/nginx/conf.d/start.lua;
If I were to copy/paste the metric code into start.lua, then the metric is updated every time. Why is this?!
Update 2
I noticed this in the error logs:
2018/05/23 10:02:07 [error] 24483#0: *6 attempt to send data on a closed socket: u:some-hex, c:some-hex, client: 123.12.0.123, server: *.my-url.com, request: "GET / HTTP/1.1", host: "my.my-url.com"
This happens on the 2nd request to nginx; the first request after a restart is fine ...
Update 3
This happens only if I have a separate metrics file and require it from my other file. If I instantiate the resty_dogstatsd client inside the main Lua file, then everything is fine ...
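A likely explanation: with rewrite_by_lua_file, required modules are cached per worker after the first request, so a client (and its underlying cosocket) created at module load time belongs to the first request; later requests then hit the closed socket. A minimal sketch of a per-request client, assuming the same config handling as above:

local resty_dogstatsd = require('resty_dogstatsd')
local logger = require('module.utils.logger')

local metrics = {}

local function get_statsd()
    -- Cosockets cannot outlive the request that created them, so build
    -- the client per request and cache it in ngx.ctx (request-local).
    if not ngx.ctx.statsd then
        ngx.ctx.statsd = resty_dogstatsd.new({
            statsd = {
                host = config.dataDog.host,
                port = config.dataDog.port,
                namespace = config.dataDog.namespace
            },
            tags = config.dataDog.tags
        })
    end
    return ngx.ctx.statsd
end

function metrics.incMetric1()
    logger.debug('Updating metric metric1')
    get_statsd():increment('metric1', 1)
end

return metrics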

Related

pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null

I am trying to run Hadoop using the Docker setup provided here:
https://github.com/big-data-europe/docker-hadoop
I use the following command:
docker-compose up -d
to bring up the service, and I am able to access it and browse the file system at localhost:9870. The problem arises whenever I try to use pyhdfs to put a file on HDFS. Here is my sample code:
from pyhdfs import HdfsClient

hdfs_client = HdfsClient(hosts='localhost:9870')

# Determine the output_hdfs_path
output_hdfs_path = 'path/to/test/dir'

# Does the output path exist? If not then create it
if not hdfs_client.exists(output_hdfs_path):
    hdfs_client.mkdirs(output_hdfs_path)

hdfs_client.create(output_hdfs_path + 'data.json', data='This is test.', overwrite=True)
If the test directory does not exist on HDFS, the code successfully creates it, but when it gets to the .create part it throws the following exception:
pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
What surprises me is that my code is able to create the empty directory but fails to put the file on HDFS. My docker-compose.yml file is exactly the same as the one provided in the GitHub repo. The only change I've made is in the hadoop.env file, where I changed:
CORE_CONF_fs_defaultFS=hdfs://namenode:9000
to
CORE_CONF_fs_defaultFS=hdfs://localhost:9000
I have seen another post on Stack Overflow and tried the following command:
hdfs dfs -mkdir hdfs:///demofolder
which works fine in my case. Any help is much appreciated.
I would keep the default CORE_CONF_fs_defaultFS=hdfs://namenode:9000 setting; inside the compose network the cluster addresses the namenode by that hostname, not by localhost.
It works fine for me after adding a forward slash to the paths:
import pyhdfs

fs = pyhdfs.HdfsClient(hosts="namenode")
output_hdfs_path = '/path/to/test/dir'

if not fs.exists(output_hdfs_path):
    fs.mkdirs(output_hdfs_path)

fs.create(output_hdfs_path + '/data.json', data='This is test.')

# check that it's present
list(fs.walk(output_hdfs_path))
[('/path/to/test/dir', [], ['data.json'])]
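One caveat: when creating a file, the namenode redirects the client to a datanode by its internal hostname, so that hostname must resolve from wherever the script runs. If it does not, a possible workaround (assuming the service names namenode and datanode from the docker-hadoop compose file, and that the relevant ports are published) is to map them in the hosts file:

# /etc/hosts on the machine running pyhdfs
127.0.0.1   namenode
127.0.0.1   datanode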

Unexpected HTTP Request: POST /mqtt/auth

I am new to emqtt. I am trying to use emq_auth_http, but it is not working.
I have these 3 endpoints that log some data and send it back with status 200.
app.post('/mqtt/auth', function(req, res) {
    console.log('This is body ', req.body);
    res.status(200).send(req.body);
});

app.post('/mqtt/superuser', function(req, res) {
    console.log('This is body in superuser ', req.body);
    res.status(200).send(req.body);
});

app.get('/mqtt/acl', function(req, res) {
    console.log('This is params in acl ', req.params);
    res.status(200).send(req.body);
});
The requests work fine in Postman.
I have configured emqtt on Windows with Docker, and placed my config file at /etc/plugins/emq_auth_http.conf.
This is my config file
## Variables: %u = username, %c = clientid, %a = ipaddress, %P = password, %t = topic
auth.http.auth_req = http://127.0.0.1:3000/mqtt/auth
auth.http.auth_req.method = post
auth.http.auth_req.params = clientid=%c,username=%u,password=%P
auth.http.super_req = http://127.0.0.1:3000/mqtt/superuser
auth.http.super_req.method = post
auth.http.super_req.params = clientid=%c,username=%u
## 'access' parameter: sub = 1, pub = 2
auth.http.acl_req = http://127.0.0.1:3000/mqtt/acl
auth.http.acl_req.method = get
auth.http.acl_req.params = access=%A,username=%u,clientid=%c,ipaddr=%a,topic=%t
Then I enabled emq_auth_http from the dashboard.
Now when I try to connect my MQTT client to the server, it does not call the API. It logs:
09:28:29.642 [error] Unexpected HTTP Request: POST /mqtt/auth
09:28:29.644 [error] Client(19645050-9d1b-4c50-acf9-c1fe7e69eea8#172.17.0.1:60968): Username 'username' login failed for 404
Is there anything I missed? Why is it not working?
Thanks
127.0.0.1 in a container refers to the container itself, not the host machine. You should use the host machine's IP instead; you can obtain the default gateway IP from inside a container with /sbin/ip route | awk '/default/ { print $3 }'.
PS: that gives you the IP of the Docker machine, not the host. If your service is served by Windows, you can reach the host machine from the container at 10.0.75.1; see
How to connect to docker host from container on Windows 10 (Docker for Windows)
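On recent Docker for Windows, the special name host.docker.internal also resolves to the host from inside a container, so a sketch of the fixed plugin config (port 3000 taken from the question) would be:

## Point the auth endpoints at the host machine instead of the container itself
auth.http.auth_req = http://host.docker.internal:3000/mqtt/auth
auth.http.super_req = http://host.docker.internal:3000/mqtt/superuser
auth.http.acl_req = http://host.docker.internal:3000/mqtt/acl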

Yaws code inside <erl></erl> not running

I am trying out Yaws; however, I have run into a bump. The code inside my .yaws file is not being executed when I go to the path; instead, it is printed in the browser window. Here is my code and configuration:
<erl>
method(Arg) ->
    Rec = Arg#arg.req,
    Rec#http_request.method.

out(Arg) ->
    {ehtml, f("Method: ~s", [method(Arg)])}.
</erl>
Server configuration:
<server localhost>
    port = 8000
    listen = 127.0.0.1
    docroot = /home/something/
    dir_listings = true
    dav = true
    auth_log = true
    statistics = true
</server>
Any info would really be appreciated, thank you.
The problem is that you have dav = true in your server configuration, which turns on WebDAV, a protocol for content management. Under this configuration, a .yaws file is treated as just a regular file, not as one that requires special Yaws processing, which is why you see the verbatim contents of the file when you access it via your browser.
Removing dav = true from your configuration and then restarting Yaws will make it process your example.yaws file as you expect.
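That is, the same server block with the dav line dropped:

<server localhost>
    port = 8000
    listen = 127.0.0.1
    docroot = /home/something/
    dir_listings = true
    auth_log = true
    statistics = true
</server>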

source data from syslog into flume

I tried to set up a Flume agent to source data from a syslog server.
Basically, I have set up a syslog server on a server (server1) to receive syslog events; it then forwards all messages to a different server (server2) where the Flume agent is installed, and finally all data is sunk to a Kafka cluster.
The Flume configuration is below.
# For each one of the sources, the type is defined
agent.sources.syslogSrc.type = syslogudp
agent.sources.syslogSrc.port = 9090
agent.sources.syslogSrc.host = server2
# The channel can be defined as follows.
agent.sources.syslogSrc.channels = memoryChannel
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 100
# config for kafka sink
agent.sinks.kafkaSink.channel = memoryChannel
agent.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.kafkaSink.kafka.topic = flume
agent.sinks.kafkaSink.kafka.bootstrap.servers = <kafka.broker.list>:9092
agent.sinks.kafkaSink.kafka.flumeBatchSize = 20
agent.sinks.kafkaSink.kafka.producer.acks = 1
agent.sinks.kafkaSink.kafka.producer.linger.ms = 1
agent.sinks.kafkaSink.kafka.producer.compression.type = snappy
But, somehow, the syslog data is not getting ingested into the Flume agent.
I'd appreciate your advice.
I have set up a syslog server on a server (server1)
The syslogudp Source must bind to server1 host
agent.sources.syslogSrc.host = server1
it then forwards all messages to a different server (server2)
The "different server" refers to the Sink:
agent.sinks.kafkaSink.kafka.bootstrap.servers = server2:9092
The Flume agent is only a process that hosts these components (Source, Channel, Sink) to facilitate the flow of events.
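Note also that a Flume agent needs its named components declared up front before the per-component settings take effect; if your file omits these lines (they are not shown in the question), adding them would look like:

# Declare the components for the agent named 'agent'
agent.sources = syslogSrc
agent.channels = memoryChannel
agent.sinks = kafkaSink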

uWSGI as a standalone http server with lua

I'm trying to set up uWSGI to run as a standalone server running a simple Lua script (right now, as a POC, using the hello world from http://uwsgi-docs.readthedocs.org/en/latest/Lua.html).
Here's my uwsgi.ini file:
[uwsgi]
master = true
workers = 1
threads = 8
listen = 4096
max-request = 512
pidfile = /uwsgi/logs/uwsgi.pid
procname-master = uWSGI master
auto-procname = true
lua = /uwsgi/hello.lua
socket-timeout = 30
socket = /uwsgi/uwsgi_1.sock
http = 127.0.0.1:80
http-to = /uwsgi/uwsgi_1.sock
When sending a web request, an empty response is received, and the uWSGI process outputs:
-- unavailable modifier requested: 0 --
I've read that this usually means a plugin is missing; however, the Lua plugin is installed, and when doing the same through NGINX everything works fine, which means there's no problem loading Lua.
Any help please?
Thanks.
Somebody told me I had to add http-modifier1 = 6 and now it works.
I still don't understand what '6' means, but whatever.
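For context: uWSGI dispatches each request to a plugin based on its modifier1 value; 0 is the default Python/WSGI plugin, and the Lua plugin registers modifier 6, so the built-in http router has to tag requests with it. The relevant lines would be:

; route requests from the http router to the Lua plugin (modifier1 = 6)
http = 127.0.0.1:80
http-to = /uwsgi/uwsgi_1.sock
http-modifier1 = 6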
