I want to run a Python script every 60 seconds and send its output to InfluxDB. The Python script is embedded in and called from a Windows batch file.
While the batch file and Python script run fine on their own, I am unable to run them through Telegraf.
Here are the input and output sections from my Telegraf config file:
# Output
# Metrics
[[outputs.influxdb]]
urls = ["http://localhost:8086"] # required
database = "test_db" # required
# Input
# Metrics
[[inputs.exec]]
# Shell/commands array
commands = ["C:\\Users\\P\\Desktop\\metrics.cmd"]
data_format = "influx"
interval = "60s"
I have stock InfluxDB and Telegraf installations; I did not install any plugins.
Am I missing something?
Refer to https://groups.google.com/forum/#!topic/influxdb/3MsoI59wsw0
The path to the .bat file should be absolute, not relative; the failure in that thread came from a missing leading slash.
So I have MariaDB in a container in /home/admin/containers/mariadb.
This directory also contains an .env file with MARIADB_ROOT_PASSWORD specified.
I want to backup the database using the following command:
* * * * * root docker exec --env-file /home/admin/containers/mariadb/.env mariadb sh -c 'exec mysqldump --databases dbname -uroot -p"$MARIADB_ROOT_PASSWORD"' > /home/admin/containers/mariadb/backups/dbname.sql
The command works when run from the terminal, but crontab only creates an empty .sql file.
I assume there is some issue with cron reading the .env file.
Bash command line is nice. Cron is "different". Let me count the ways. Here are things to pay attention to.
To simplify the description, let's assume you put the above instructions into backup.sh, so the crontab line is simply:
* * * * * root sh backup.sh
Cron is running under UID zero here. Test interactively with: $ sudo sh backup.sh
Cron uses a restricted $PATH. Test with: $ env PATH=/usr/bin:/bin sh backup.sh
Cron's $CWD won't be your home directory.
More generally, env will report different results than it does interactively, because your login dot files have not all been sourced.
Cron doesn't necessarily set umask to 0022. Likely not an issue here.
Output of ulimit -a might differ from what you see interactively.
Cron does not provide a pty, which can affect e.g. password prompts. Likely not an issue here.
Likely there are other details that differ.
If you find that some aspect of the environment is crucial to a successful run, then arrange for that near the top of backup.sh. You might want to adjust PATH, source a file, or cd somewhere, as in the sketch below.
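A minimal sketch of such a preamble, assuming the PATH entries, working directory, and sourced file shown here (adjust them to whatever your run actually needs):
#!/bin/sh
# hypothetical preamble for backup.sh; the exact values below are assumptions
PATH=/usr/local/bin:/usr/bin:/bin             # pin a known-good PATH
cd /home/admin/containers/mariadb || exit 1   # run from a known directory
# . /home/admin/containers/mariadb/.env       # source a file here if something in the script needs it
# ... the docker exec / mysqldump command from above goes here ...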
Now let's examine what diagnostic clues you're gathering from each cron run. The most important detail is that while you're logging stdout, you are regrettably discarding messages sent to FD 2, stderr.
You can accomplish your logging on the crontab command line, or within the backup.sh script. Use 2>&1 to merge stderr with stdout.
Or capture each stream separately:
docker ... 2> errors.txt > dbname.sql
With no errors, you will see a zero-byte text file.
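Applied to your original crontab entry, the separate-capture variant might look like this (dbname.err is a placeholder name):
* * * * * root docker exec --env-file /home/admin/containers/mariadb/.env mariadb sh -c 'exec mysqldump --databases dbname -uroot -p"$MARIADB_ROOT_PASSWORD"' > /home/admin/containers/mariadb/backups/dbname.sql 2> /home/admin/containers/mariadb/backups/dbname.err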
Also, remember the default behavior of crond. If you just run a command, with no redirect, cron assumes it should complete silently with zero exit status, as /usr/bin/true does. If there's a non-zero status, cron will report the error. If there's any stdout text, such as /usr/bin/date produces, cron wants to email you that text. If there's any stderr text, again it should be emailed to you.
Test your email setup. Set the cron MAILTO=me@some.where variable if the default of root isn't suitable. Interactively verify that email sending on that server actually works. Repair your setup for postfix or whatever MTA you use if you find that emails are not being reliably delivered.
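For example, the relevant crontab lines could look like this (the address is a placeholder):
MAILTO=admin@example.com
* * * * * root sh backup.sh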
When executing the JMeter .jmx file using JMeter non-GUI mode, it works fine and I am able to get the .jtl file, but when I try to trigger the build using Jenkins it fails with "Error: Unable to access jarfile ApacheJMeter.jar", errorlevel=1, in the console output.
Please note that I have added the Performance Plugin in Jenkins and enabled jmeter.save.saveservice.output_format=xml in the JMeter user properties. Please help if I am missing anything in the configuration setup.
Build cmd command:
C:\jmeter\bin\jmeter -J jmeter.save.saveservice.output_format=xml -n -t C:\jmeter\bin\edueka.jmx -l C:\jmeter\bin\report3.jtl
There is nothing wrong with your command; it should work normally.
Double-check that the main JMeter executable .jar exists and that the user which runs the Jenkins process has the correct permissions to access this file:
c:\jmeter\bin\ApacheJMeter.jar
The most common mistake is downloading the source bundle of JMeter instead of the binary one.
Also, the correct syntax for passing JMeter properties via the -J command-line argument is: the J needs to be capital, and there must be no space between -J and the property name:
C:\jmeter\bin\jmeter -Jjmeter.save.saveservice.output_format=xml -n -t C:\jmeter\bin\edueka.jmx -l C:\jmeter\bin\report3.jtl
Also, there is currently no need to switch the JMeter results format to XML; the Jenkins Performance Plugin is capable of processing JMeter results files in CSV format as well.
I am new to Docker, InfluxDB, Grafana, etc. I got Grafana and InfluxDB running, but I seem to be unable to connect Telegraf to InfluxDB. I followed many guides, but I am missing something.
I created a Telegraf conf file in E:\docker\containers\telegraf and try to use it with:
docker run -v e:/docker/containers/telegraf/:/etc/telegraf/telegraf:ro telegraf
But I keep getting the following error:
2017/05/13 20:32:39 I! Using config file: /etc/telegraf/telegraf.conf
2017-05-13T20:32:39Z E! Database creation failed: Post http://localhost:8086/query?db=&q=CREATE+DATABASE+%22telegraf%22: dial tcp [::1]:8086: getsockopt: connection refused
I have this in the influxdb output part of the conf file:
[[outputs.influxdb]]
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://10.0.75.1:8086"] # required
database = "telegraf" # required
retention_policy = ""
write_consistency = "any"
timeout = "5s"
#username = "telegraf"
#password = "telegraf"
If you look at the urls, it does not seem to read the conf file; it just keeps trying to connect to localhost. (localhost:8083 and 10.0.75.1:8083 both open the InfluxDB web page.)
This sounds like a problem with the mapping and/or the E drive not being allowed to be mapped in Docker for Windows.
First, your mapping doesn't appear correct. If you have a telegraf.conf file at e:/docker/containers/telegraf/, then your current mapping will end up with the file at /etc/telegraf/telegraf/telegraf.conf, which is one extra telegraf folder deep. The error states it is looking for /etc/telegraf/telegraf.conf, so in this case it is likely using a default telegraf.conf.
Next, I believe Docker on Windows doesn't allow mapping of drives other than C by default. Check the shared drive settings to make sure that E is allowed to be mapped (an article that shows this is at https://rominirani.com/docker-on-windows-mounting-host-directories-d96f3f056a2c).
After fixing both of these issues, if the problem still persists, I would get into the container with docker exec and confirm that /etc/telegraf/telegraf.conf actually has the contents it should.
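As a rough sketch (assuming telegraf.conf sits directly in e:/docker/containers/telegraf/ and that E is shared with Docker), the corrected mount and a quick in-container check could look like:
docker run -v e:/docker/containers/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro telegraf
docker exec -it <container-id> cat /etc/telegraf/telegraf.conf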
Everyone, I need some help, please.
I am using Telegraf as a log feeder for my InfluxDB database; the concept is that Telegraf will read a log and then send the result to InfluxDB.
[[inputs.logparser]]
files = ["/here/is/the/directory/logname.log"]
from_beginning = false
It works as expected when the log file name is logname.log. But today I need to change the log naming scheme to logname.20170320.log, where 20170320 is the date of the log. What is the right configuration for:
files = ["/here/is/the/directory/logname.log"]
so it can read the daily log whose name changes dynamically every day, like:
files = ["/here/is/the/directory/logname.20170320.log"]
files = ["/here/is/the/directory/logname.20170321.log"]
Thanks for your help.
Based on @Luv33preet's comment here, I made a script that uses sed to change the Telegraf configuration daily:
/bin/sed -i "s/`date +'%Y%m%d' -d '1 day ago'`/`date +'%Y%m%d'`/" /etc/telegraf/conf.d/my-config.conf
Why don't you just set a wildcard for your log file?
[[inputs.logparser]]
## find all .log files with a parent dir in /var/log
files = ["/var/log/*/*.log"]
from_beginning = false
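For your naming scheme, a hedged variant (assuming the date always sits between "logname." and ".log" in that directory) would be:
[[inputs.logparser]]
## matches logname.20170320.log, logname.20170321.log, and so on
files = ["/here/is/the/directory/logname.*.log"]
from_beginning = false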
I have a little script to read my PATH and store it in a file, which I would like to schedule to run daily.
path = os.getenv("PATH")
file_name = "C:\\temp.txt"
file = io.open(file_name, "w")
file:write(path)
file:close()
If I run it from the command line it works, but when I create a batch file (I work on Windows XP) and double-click it, os.getenv("PATH") returns false. The batch file:
"C:\Program Files\Lua\5.1\lua" store_path.lua
I read in comments to this question that it "is not a process environment variable, it's provided by the shell, so it won't work". And indeed, some other env variables (like username) work fine.
The two questions I have are:
Why does the shell not have access to PATH? I thought it would make a copy of the environment (so only setting an env variable would be a problem).
What would be the best way to read PATH so that I can run the script from a scheduler?
Have the batch file run it from a shell so that you get shell variables:
cmd /c C:\path\to\lua myfile.lua
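A minimal sketch of such a batch file, reusing the Lua path from your original one (the doubled outer quotes keep cmd /c from mangling the quoted path; the file name store_path.bat is an assumption):
@echo off
rem run the Lua script through cmd so shell-provided variables such as PATH are available
cmd /c ""C:\Program Files\Lua\5.1\lua" store_path.lua"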