Run Linux commands in a Lua program - lua

I want to use Lua to reset one of OpenWrt's Linux services. When I use the command below directly in Linux, it works:
$ service
It lists the available services. When I type the command below, it shows me more options:
$ service led
Finally, when I type the command below, it restarts the service:
$ service led restart
But with the Lua program below I get an error:
>os.execute("service led restart")
sh: service: not found
Is there any other library or command I can use to access the services?

command -V service says:
service is a function
To be able to invoke it in the subshell created by os.execute, you would have to source the script that defines the function (I don't know where this function is defined).
The easier way is to invoke the specific service's init script directly:
os.execute"/etc/init.d/led restart"
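If you want to double-check this on the router before touching the Lua code, a quick sanity check from an interactive shell (the exact listing will depend on your OpenWrt version):
command -V service        # reports "service is a function", so the plain sh spawned by os.execute cannot see it
ls /etc/init.d/           # the init scripts themselves are ordinary executables
/etc/init.d/led restart   # the same action the os.execute call above performs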

Related

Start Logstash from a Rails application

Question: How can I start Logstash from a script in my Rails application?
Background: I have Logstash and Elasticsearch running on a server. I have a Rails application which uploads a CSV to the server, which Logstash then processes. It works if I manually execute the generated script. If I try to have Rails run it as a system command, I get an error.
Manually calling the script from the server (WORKS):
logstash_folder/execute_random.sh
Rails app system command (ERROR):
`logstash_folder/execute_random.sh`
Error: WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
execute_random.sh (script being called)
#!/bin/bash
sudo systemctl stop logstash
sleep 1
sudo /usr/share/logstash/bin/logstash -f logstash_folder/conf_folder/logstash.conf
Looking over this discussion:
https://discuss.elastic.co/t/warning-could-not-find-logstash-yml-which-is-typically-located-in-ls-home-config-or-etc-logstash/131022/15
I changed the script to include --path.settings:
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ --path.data -f logstash_folder/#{self.logstash_index}/#{self.logstash_index}.conf"
and receive this error:
ERROR: Unknown command 'logstash_folder/conf_folder/logstash.conf'
OR
[INFO ][logstash.config.source.local.configpathloader] No config files found in
path {:path=>"/logstash_folder/conf_folder/logstash.conf"}
[ERROR][logstash.config.sourceloader]
No configuration found in the configured sources.
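One likely cause of both errors above is that --path.data was passed without a value, so it consumed -f and the config path was then treated as a stray positional argument. A sketch of the invocation with the flags properly paired, assuming the same config path as the original script (add --path.data back with an explicit directory if you need it):
sudo /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash/ \
  -f logstash_folder/conf_folder/logstash.conf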
On the Rails side, I was creating the command as a variable:
example = "execute_#{index}"
and then running that variable inside backticks:
execute_example = `#{example}`
which didn't work. I ended up using system("#{example}"), and that worked...
Three days of stress.

Docker RUN instruction to source bash profile

I run some installation scripts via Docker; they change ~/.bashrc, but then I need to source it to use the installed commands in the RUN instructions that follow.
I tried the obvious RUN . ~/.bashrc and got the error /bin/sh: 13: /root/.bashrc: shopt: not found.
I tried RUN . ~/.profile and got mesg: ttyname failed: Inappropriate ioctl for device.
I do not want to use ENV instructions. The point of having external installation scripts is to use them in non-Docker environments, for example when running unit tests locally. ENV instructions would duplicate environment setup that is already done in the installation scripts.
You should not try to set up shell dotfiles in Docker. Many typical paths do not run them at all; for example:
# In a Dockerfile
CMD ["some", "command", "here"]
# From the command line
docker run myimage some command here
The Docker environment is, fundamentally, different from a standalone Linux system. Beyond shell dotfiles, a "home directory" isn't really a Docker concept, and if you have a multi-part process, on Docker it's standard to run each part in a separate container, whereas on standalone Linux you could use the init system to keep all of the parts running together. If you're expecting things to work exactly the same with exactly the same installation scripts, a virtual machine would be a better technological match for what you're attempting.
("Inappropriate ioctl for device" also suggests that there are things in the dotfiles that strongly expect to be run from an actual terminal, which you don't necessarily have at docker build time.)
My generic advice here is:
If possible, install things in the "system" directories within the image and avoid needing custom environment variable settings. (Don't use a version manager like nvm or rvm; don't use a Python virtual environment.)
If you do have to set environment variables, ENV is the way to do it.
If you really can't do either of the above, you can set environment variables in an ENTRYPOINT script before launching the main process; but if it's important to you that variables show up in docker inspect or docker exec shells, they won't be set there.
(Also remember that each RUN command launches a new container with a totally new shell environment. You can RUN . .profile; foo, but the environment variable settings won't carry through to the next RUN line.)
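To illustrate the ENTRYPOINT route from the third point above, here is a minimal sketch of a wrapper script; /opt/app/env.sh is a hypothetical stand-in for whatever file your installation scripts actually produce:
#!/bin/sh
# docker-entrypoint.sh -- wired into the image with something like:
#   COPY docker-entrypoint.sh /usr/local/bin/   (and made executable)
#   ENTRYPOINT ["docker-entrypoint.sh"]
#   CMD ["some", "command", "here"]

# Load the environment written by the installation scripts
# (hypothetical path -- point this at whatever they really create).
. /opt/app/env.sh

# Replace this shell with the container's CMD so signals reach it directly.
exec "$@"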

"docker-compose up" fails with error

I want to work on a project that needs Docker to run the app, but the docker-compose up command fails with this error:
System error: exec: "./wait_to_start": stat ./wait_to_start:
no such file or directory
The wait_to_start command is an executable python script in the subfolder backend/.
I need to determine why it cannot be executed. Either it is being looked for in the wrong path, there is a permissions problem, or maybe the wrong Python version is being used.
Can I debug this in more detail, or log in with SSH and check the files on the virtual machine? I'm too inexperienced with Docker...
You can either set the "workdir" metadata (working_dir in docker-compose.yml, WORKDIR in a Dockerfile) to make sure you are in the right place when the container starts, or simply call /backend/wait_to_start instead of ./wait_to_start so that there is no need to be in a particular directory.
To debug with docker-compose I would do this:
docker-compose run --entrypoint bash <servicename>
That should give you a prompt and let you inspect the file and the working directory, to see what's wrong.
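From that prompt, a few quick checks cover the failure modes listed in the question (wrong path, permissions, wrong interpreter); the /backend path is taken from the suggestion above, so adjust it if the script lives elsewhere:
pwd                              # is the working directory what the Compose file expects?
ls -l /backend/wait_to_start     # does the file exist, and is its execute bit set?
head -1 /backend/wait_to_start   # does the shebang point at a Python that exists in the image?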

fpm is not recognised when executing a script with Jenkins and SSH

I am trying to execute a script over an SSH connection with Jenkins. I am using the SSH plugin and it is properly configured. I manage to execute the first part of the script, but when I try to execute an fpm command it says:
fpm: command not found
If I connect to the instance and run the same script that I call via Jenkins, it runs with no error (fpm is installed).
So I created a test script, test.sh:
#!/bin/bash -x
fpm
But with Jenkins I get the same fpm: command not found error, whereas if I execute it myself I just get the normal "missing parameters" warnings:
Missing required -s flag. What package source did you want? {:level=>:warn}
Missing required -t flag. What package output did you want? {:level=>:warn}
No parameters given. You need to pass additional command arguments so that I know what you want to build packages from. For example, for '-s dir' you would pass a list of files and directories. For '-s gem' you would pass a one or more gems to package from. As a full example, this will make an rpm of the 'json' rubygem: `fpm -s gem -t rpm json` {:level=>:warn}
Fix the above problems, and you'll be rolling packages in no time! {:level=>:fatal}
What am I missing? Why can't it find fpm if it is installed?
Make sure fpm is in /usr/bin.
It seems the problem was that fpm was installed in /home/user2connect/bin/, so the command was not recognised. To fix this I had to call it with the full path:
/home/user2connect/bin/fpm ...
I chose to reinstall fpm using sudo, so now it works.
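The underlying issue is that the non-interactive shell Jenkins opens over SSH does not pick up per-user PATH additions from your login shell. Instead of reinstalling, another option is to extend PATH at the top of the script; a sketch using the install location mentioned above:
#!/bin/bash -x
# Make the per-user bin directory visible to the non-interactive Jenkins shell.
export PATH="$PATH:/home/user2connect/bin"
fpm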

Nagios returns status UNKNOWN

I installed Nagios on CentOS to monitor some servers, one of which is a TSM server.
I downloaded a plugin written in bash; when I execute it on the command line, it works:
/usr/lib64/nagios/plugins/check_tsm db -v6
db - database utilization 42%, OK
and the return code of the bash script is 0 (from the command echo $?).
So the script works fine and returns 0, which should mean an OK status in Nagios, but the status is still UNKNOWN, and I really don't know why.
I checked the Nagios logs, etc. It's not a problem with the command definition in commands.cfg or the service declaration, because I copied the command that Nagios sends automatically every 5 minutes and it works fine on the command line, but the status is still UNKNOWN.
Definition of command:
define command{
    command_name    check_tsm_v6
    command_line    /usr/lib64/nagios/plugins/check_tsm $ARG1$ -v6 $ARG2$ $ARG3$
}
Service declaration:
define service{
    use                    generic-service
    host_name              tsm-test
    service_description    database utilization
    check_command          check_tsm_v6!db!85!90
}
And here's the bash script.
One thing that's caught me out in the past with Nagios scripts is user rights. When testing your script directly on the command line, be sure to precede it with:
sudo -u nagios
So yours would be:
sudo -u nagios /usr/lib64/nagios/plugins/check_tsm db -v6
This assumes that your nagios instance is being run by the nagios user, which is a fairly safe bet.
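A quick way to compare the two situations is to run the plugin as the nagios user and print its exit code; Nagios maps 0 to OK and 3 to UNKNOWN, so anything other than 0 here would explain the status you are seeing:
sudo -u nagios /usr/lib64/nagios/plugins/check_tsm db -v6
echo $?    # 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN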
Good luck
Brad
Try using the yum install sysstat -y command to install the package.
If it works, great. If you are still facing the same problem, please post the complete error that is shown in the browser.
