Python script has a problem with int() conversion with crontab - memory

I'm checking the memory of my Raspberry Pi. It works fine.
But when I want to run it every minute, crontab says there is an error converting a string to int:
ValueError: invalid literal for int() with base 10: ''
My script.py:
from subprocess import Popen, PIPE

intmemused = 0
cmd = "top -n1 | grep 'Mem :' | awk '{print $6;}'"
output = Popen(cmd, shell=True, stdout=PIPE)
memused = output.communicate()[0].strip()
memused = str(memused.decode("utf-8"))
print(memused)  # prints 589020
intmemused = int(memused)  # error when crontab executes my script
mem = intmemused * 100
mem = float(mem) / float(memtot)  # memtot is set earlier in the script (not shown)
mem = 100 - float(mem)
mem = round(mem, 2)
My crontab:
*/1 * * * * /home/dietpi/info.sh 2>/home/dietpi/marseille.log
My info.sh:
#!/bin/bash
/usr/bin/python3 /home/dietpi/script.py
marseille.log is created to log errors when the script is executed by crontab, and it contains:
TERM environment variable not set.
Traceback (most recent call last):
File "/home/dietpi/config", line 56, in <module>
intmemused = int(memused)
ValueError: invalid literal for int() with base 10: ''
When I saw this error I believed memused was empty, but it isn't: print gives 589020.
I believed it was a blank character, but I already use .strip().
I believed "TERM environment variable not set." was the problem, but the command set | grep TERM gives a good answer: TERM=xterm.
I don't understand why it works with python3 but not with crontab.
Can you help me?
Thanks a lot!
MaxKweeger
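Editor's sketch (not part of the original question): under cron, `top` fails because TERM is unset, so the pipeline prints nothing and `int('')` raises. Reading /proc/meminfo directly avoids `top` entirely; this assumes a Linux kernel recent enough to report MemAvailable.

```python
# Sketch: compute memory use from /proc/meminfo instead of `top`.
# Unlike `top`, this needs no TERM variable, so it also works under cron.

def meminfo():
    """Parse /proc/meminfo into a dict of integer kB values."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.split()[0])
    return info

m = meminfo()
used_percent = round(100.0 * (m["MemTotal"] - m["MemAvailable"]) / m["MemTotal"], 2)
print(used_percent)
```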

Related

Cron job to keep rails server running

I'm trying to add the following line to the crontab for a rails app:
* * * * * lsof -i tcp | grep -v grep | grep -q puma || /home/me/app_name/bin/rails s
... in short, I have an error that occasionally takes down my rails server, and I would like to run this command every minute to make sure the server remains as available as possible. (The actual error is a question for another day.)
Initially, I got an error:
shared_helpers.rb:34:in `default_gemfile': could not locate Gemfile (Bundler::GemfileNotFound)
... which traced back to line 4 in bin/rails:
load File.expand_path('../spring', __FILE__)
I changed the ('../spring') to ('./spring'), because the spring script was located in the same directory as the rails script, not in the parent. (Oddly, running rails s from the command line works whether I use '..' or '.' in the path.)
This, at least got rid of the error above, but now I have the following:
/bin/sh: 1: lsof: not found
/bin/sh: 1: grep: not found
/bin/sh: 1: grep: not found
/usr/bin/env: 'ruby_executable_hooks': No such file or directory
I get these errors once per minute, so at least I know the time format of my entry is correct.
The relevant part of my crontab looks like this:
PATH=$PATH:/usr/share/rvm/rubies/ruby-2.4.1/bin/
GEM_PATH=$GEM_PATH:/usr/share/rvm/gems/ruby-2.4.1:/usr/share/rvm/gems/ruby-2.4.1#global
* * * * * lsof -i tcp | grep -v grep | grep -q puma || /home/me/app/bin/rails s
I also tried:
* * * * * lsof -i tcp | grep -v grep | grep -q puma || /usr/share/rvm/gems/ruby-2.4.1/bin/rails s
... but get the same error.
I can run rails s from the command line, as I mentioned. I know running the command from a logged-in shell has the benefit of login scripts being read and environment variables being set, but I don't know what I need to do to get the environment in this crontab to co-operate.
NOTE: I'm not using the whenever gem for this, because I'm using external commands like grep and lsof; if there is an equivalent rails-based solution (to test if the server is running, and start it if it isn't), I could do that instead of using the crontab. (But I don't know what that would be.)
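A likely culprit (my assumption, not confirmed in the post): cron does not expand variable references such as `$PATH` in crontab assignments, so `PATH=$PATH:...` leaves out the standard system directories, which would explain `lsof: not found` and `grep: not found`. Spelling the PATH out in full avoids this:

```
# crontab sketch: variable references are NOT expanded in assignments,
# so list every directory explicitly (paths below are illustrative)
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/share/rvm/rubies/ruby-2.4.1/bin
* * * * * lsof -i tcp | grep -v grep | grep -q puma || /home/me/app/bin/rails s
```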

Run command on GPS fix

I have GPSD running on a Linux system (specifically SkyTraq Venus 6 on a Raspberry Pi 3, but that shouldn't matter). Is there a way to trigger a command when the GPS first acquires or loses the 3D fix, almost like the scripts in /etc/network/if-up.d and /etc/network/if-down.d?
I found a solution:
Step 1: With GPSD running, gpspipe -w outputs JSON data, documented here. The TPV class has a mode value, which can take one of these values:
0=unknown mode
1=no fix
2=2D fix
3=3D fix
Step 2: Write a little program called gpsfix.py:
#!/usr/bin/env python
import sys
import errno
import json

modes = {
    0: 'unknown',
    1: 'nofix',
    2: '2D',
    3: '3D',
}

try:
    while True:
        line = sys.stdin.readline()
        if not line: break  # EOF
        sentence = json.loads(line)
        if sentence['class'] == 'TPV':
            sys.stdout.write(modes[sentence['mode']] + '\n')
            sys.stdout.flush()
except IOError as e:
    if e.errno == errno.EPIPE:
        pass
    else:
        raise e
For every TPV sentence, gpspipe -w | ./gpsfix.py will print the mode.
Step 3: Use grep 3D -m 1 to wait for the first fix, and then quit (which sends SIGPIPE to all other processes in the pipe).
gpspipe -w | ./gpsfix.py | grep 3D -m 1 will print 3D on the first fix.
Step 4: Put it in a bash script:
#!/usr/bin/env bash
# Wait for first 3D fix
gpspipe -w | ./gpsfix.py | grep 3D -m 1
# Do something nice
cowsay "TARGET LOCATED"
And run it:
$ ./act_on_gps_fix.sh
3D
 ________________
< TARGET LOCATED >
 ----------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
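The same JSON parsing can be extended to react to both gaining and losing the fix, closer to the if-up.d/if-down.d analogy the question mentions. A sketch (function and event names are mine, not from the answer):

```python
import json

MODES = {0: 'unknown', 1: 'nofix', 2: '2D', 3: '3D'}

def fix_event(line, last_mode):
    """Process one line of `gpspipe -w` output.

    Returns (event, new_mode), where event is 'fix-up' when the mode
    changes to 3D, 'fix-down' when it changes away from 3D, and None
    when nothing relevant happened.
    """
    try:
        sentence = json.loads(line)
    except ValueError:
        return None, last_mode          # not JSON, ignore
    if sentence.get('class') != 'TPV':
        return None, last_mode          # only TPV sentences carry the mode
    mode = MODES.get(sentence.get('mode', 0), 'unknown')
    if mode == last_mode:
        return None, mode               # no transition
    return ('fix-up' if mode == '3D' else 'fix-down'), mode
```

Piping `gpspipe -w` into a loop that calls fix_event on each line and runs a hook script per event would mirror the if-up.d/if-down.d behaviour.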

Ruby string single quotes causing trouble

I am trying to create a cronjob using whenever gem.
every 1.day, :at => "12:00pm" do
grep_part_of_command = '"#timestamp":"'+Date.today.to_s
command "cat logstash_development.log | grep '#{grep_part_of_command}' > todays_logstash_development.log"
end
What I want to achieve:
* * * * * /bin/bash -l -c 'cat logstash_development.log | grep '"#timestamp":"2016-04-20' > todays_logstash_development.log'
But when I open my crontab, what I get is :
* * * * * /bin/bash -l -c 'cat logstash_development.log | grep '\''"#timestamp":"2016-04-20'\'' > todays_logstash_development.log'
Note the extra '\' around the grep matcher string.
Can anyone help me find my mistake?
That seems correct! Whenever uses single quotes everywhere so that special symbols like ! are not interpreted by the shell. '\'' is the way to print a single quote inside a single-quoted string. Try the following:
echo 'grep '\''"#timestamp":"2016-04-20'\'' > '
It will output:
grep '"#timestamp":"2016-04-20' >
So, don't worry! The output text is correct.
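A minimal sketch of the '\'' idiom the answer describes: each occurrence closes the single-quoted string, emits an escaped quote, and reopens the string.

```shell
# Each '\'' is: close quote, escaped literal quote, reopen quote.
pattern='grep '\''"#timestamp":"2016-04-20'\'' > '
echo "$pattern"
# prints: grep '"#timestamp":"2016-04-20' >
```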

Pass value grep command in python

I am obtaining CPU and RAM statistics for the openvpn process by running the following command in a Python script on a Linux Debian 7 box.
>ps aux | grep openvpn
The output is parsed and sent to a zabbix monitoring server.
I currently use the following Python script called psperf.py.
If I want CPU% stats I run: psperf 2
#!/usr/bin/env python

import subprocess, sys, shlex

psval = sys.argv[1]  # ps aux column to extract: 2 = %CPU, 3 = %MEM, 4 = VSZ, 5 = RSS

# https://stackoverflow.com/questions/6780035/python-how-to-run-ps-cax-grep-something-in-python
proc1 = subprocess.Popen(shlex.split('ps aux'), stdout=subprocess.PIPE)
proc2 = subprocess.Popen(shlex.split('grep openvpn'), stdin=proc1.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

proc1.stdout.close()  # Allow proc1 to receive a SIGPIPE if proc2 exits.
out, err = proc2.communicate()

# string stdout
output = format(out)

# create output list
output = output.split()

# make ps val an integer to enable list indexing
psval = int(psval)

# extract value to send to zabbix from output list
val = output[psval]

# OUTPUT
print val
This script works fine for obtaining the data in relation to openvpn. However I now want to reuse the script by passing process details from which to extract data without having to have a script for each individual process. For example I might want CPU and RAM statistics for the zabbix process.
I have tried various solutions including the following but get an index out of range.
For example I run: psperf 2 apache
#!/usr/bin/env python

import subprocess, sys, shlex

psval = sys.argv[1]  # ps aux column to extract: 2 = %CPU, 3 = %MEM, 4 = VSZ, 5 = RSS
psname = sys.argv[2]  # process details/name

# https://stackoverflow.com/questions/6780035/python-how-to-run-ps-cax-grep-something-in-python
proc1 = subprocess.Popen(shlex.split('ps aux'), stdout=subprocess.PIPE)
proc2 = subprocess.Popen(shlex.split('grep', psname), stdin=proc1.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

proc1.stdout.close()  # Allow proc1 to receive a SIGPIPE if proc2 exits.
out, err = proc2.communicate()

# string stdout
output = format(out)

# create output list
output = output.split()

# make ps val an integer to enable list indexing
psval = int(psval)

# extract value to send to zabbix from output list
val = output[psval]

# OUTPUT
print val
Error:
root@Deb764opVPN:~# python /usr/share/zabbix/externalscripts/psperf.py 4 openvpn
Traceback (most recent call last):
  File "/usr/share/zabbix/externalscripts/psperf.py", line 25, in <module>
    val = output[psval]
IndexError: list index out of range
In the past I hadn't used the shlex module, which is new to me. It was necessary to pipe the ps aux command to grep securely, avoiding shell=True, which is a security hazard (http://docs.python.org/2/library/subprocess.html).
I adapted the script from: How to run "ps cax | grep something" in Python?
I believe it's to do with how shlex handles my request, but I'm not sure how to go forward.
Can you help? As in, how can I successfully pass a value to the grep command?
I can see this being beneficial to many others who pipe commands.
Regards
Aidan
I carried on researching and solved using the following:
#!/usr/bin/env python
import subprocess, sys # , shlex
psval=sys.argv[1] #ps aux val to extract such as CPU etc #2 = %CPU, 3 = %MEM, 4 = VSZ, 5 = RSS
psname=sys.argv[2] #process details/name
#http://www.cyberciti.biz/tips/grepping-ps-output-without-getting-grep.html
proc1 = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE)
proc2 = subprocess.Popen(['grep', psname], stdin=proc1.stdout,stdout=subprocess.PIPE)
proc1.stdout.close() # Allow proc1 to receive a SIGPIPE if proc2 exits.
stripres = proc2.stdout.read()
#TEST RESULT
print stripres
#create output list
output = stripres.split()
#make ps val an integer to enable list location
psval = int(psval)
#extract value to send to zabbix from output list
val = output[psval]
#OUTPUT
print val
Regards
Aidan
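For what it's worth, a Python 3 sketch of the same pipeline (the function name is illustrative). The key point is that the argument-list form ['grep', psname] passes the process name safely as a single argv element; shlex.split() is only needed when you start from a single command string.

```python
import subprocess

def ps_field(process_name, field_index):
    """Return one whitespace-separated field from `ps aux | grep <name>`."""
    ps = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE)
    # The list form passes process_name as one argument; no shell, no shlex.
    grep = subprocess.Popen(['grep', process_name],
                            stdin=ps.stdout, stdout=subprocess.PIPE)
    ps.stdout.close()   # let ps receive SIGPIPE if grep exits first
    out, _ = grep.communicate()
    fields = out.decode().split()
    return fields[field_index]
```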

Bad resource requirement syntax Error

I am trying to use the memory resource allocation command available in LSF.
The normal format of the command is:
bsub -R "rusage [mem=1000]" sleep 100s
When I launch this command directly from the terminal, it works.
When I launch this command from a script, it fails.
Here is my script:
#! /bin/csh -f
set cktsim_memory = $1
set tmp = "|rusage [mem = $cktsim_memory]|" #method2
set tmp = `echo $tmp | sed 's/ =/=/g'` #method2
set tmp = `echo $tmp | sed 's/|/"/g' `
set bsub_option = ""
set bsub_option = ( "$bsub_option" "-R" "$tmp") #method2
set cmd = "bsub $bsub_option sleep 100s"
echo $cmd
$cmd
Its run output is:
>./cktsim_memory_test 100
bsub -R "rusage [mem= 100]" sleep 100s
Bad resource requirement syntax. Job not submitted.
>bsub -R "rusage [mem= 100]" sleep 100s
Job <99775> is submitted to default queue <medium>.
As you can see in the terminal output above, when the bsub command is launched from the script it fails, while the same command run from the terminal works.
Please help me debug the issue.
set cktsim_memory = $1
set temp = "'rusage [mem = $cktsim_memory]'" #method2
set temp = `echo $temp | sed 's/ =/=/g'` #method2
set bsub_option = ( "$bsub_option" "-R" "$temp")
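The underlying issue, as I read it (demonstrated here in plain sh rather than csh): quote characters stored in a variable are not re-parsed when the variable is expanded, so bsub receives the literal " characters and the rusage string split into separate words. A minimal sketch:

```shell
# Quotes inside a variable's value are data, not syntax:
cmd='echo "one two"'
$cmd            # word-splits only: prints "one two" with literal quotes
eval "$cmd"     # re-parses the quotes: prints one two
```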
