I'm using a DAG that dynamically creates tasks: every time I create a new task in the DAG, I'd like to clear another task in the same DAG. Is that possible?
We haven't found any API that supports clearing a task; are we missing something?
We've tried using the Airflow CLI, but it seems impossible to clear a task identified by a specific "Execution Date". Is there another option?
Thanks in advance
You can use the airflow clear CLI command to achieve that.
airflow clear [-h] [-t TASK_REGEX] [-s START_DATE] [-e END_DATE] [-sd SUBDIR]
[-u] [-d] [-c] [-f] [-r] [-x] [-dx]
dag_id
Documentation: https://airflow.apache.org/cli.html#clear
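For example, to clear a single task for one specific execution date without a confirmation prompt (the DAG ID, task ID, and date below are just placeholders), something like this should work:
# Clear only "my_task" in DAG "my_dag" for a single execution date; -c skips the confirmation prompt
airflow clear my_dag -t my_task -s 2019-01-01 -e 2019-01-01 -c
If you need to trigger this from inside the DAG itself, one option could be to wrap a command like this in a BashOperator.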
You can clear task states with this command in Cloud Composer:
gcloud composer environments run <environment> --location=asia-northeast1 clear -- <DAG_ID> -c -s <dag run start date> -e <dag run end date> --upstream --downstream
I'm running an Azuracast docker instance on Linode and want to find a way to automate my updates. Right now my routine is: when I notice in the Azuracast web panel that there are updates, I run timeshift to create a backup using the following command
timeshift --create --comment "azuracast update"
And then I use the following to update azuracast
cd /var/azuracast/
./docker.sh update-self
./docker.sh update
Then it asks me to ensure the Azuracast installation is backed up before updating, to which I usually just press Enter.
After that is completed, it asks me if I want to clean up all stopped docker containers and images to save space, which I usually say no to.
What I’m wondering is if there is a way to create a bash script, or python or something to automate all of this, and then have it run on a schedule?
Sure, you can write a shell script to execute these commands and then run it on a schedule using crontab(5).
For example your script might look like:
#! /bin/sh
# Backup azuracast and restart docker container
timeshift --create --comment "azuracast update" && \
cd /var/azuracast/ && \
./docker.sh update-self && \
(yes | ./docker.sh update)
It sounds like this docker.sh program takes some user input. See if there are options you can pass to it that allow you to run it non-interactively. (It seems there aren't; see the edit below.)
To set up your cron job, you can put the script in /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, or /etc/cron.monthly. Or, if you need more control, you can get started configuring a cron job with crontab -e, as in the example below.
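For example, if you save the script as /usr/local/bin/update-azuracast.sh (the path and schedule here are just examples), a crontab entry like this would run it every Sunday at 03:00 and log the output:
# m h dom mon dow  command
0 3 * * 0  /usr/local/bin/update-azuracast.sh >> /var/log/azuracast-update.log 2>&1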
EDIT: Assuming this is the script you're using, it doesn't seem to have a way to run update non-interactively. Fear not though, there's a program for this: yes(1). It will answer yes to both of the questions, but honestly, running docker system prune -f is probably a good idea anyway. If you really want to answer no to that, you could replace yes with a printf that answers yes to the first question and no to the second; see the sketch below.
Also note that there's at least one other y/n question it could ask you, which you probably want to answer yes to.
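The relevant line of the script would then become something like this (untested; the trailing newline just makes sure the second answer is delivered cleanly):
# Answer yes to the backup prompt and no to the cleanup prompt
printf 'y\nn\n' | ./docker.sh update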
I have set up a Geodocker-accumulo-geomesa configuration by cloning https://github.com/geodocker/geodocker-accumulo-geomesa .
To add some sample data, many websites suggest adding GDELT data, as this does not require specific converters.
I use the following commands.
docker cp C:\path\20170712.export.CSV geodockeraccumulogeomesa_accumulo-master_1:/tmp/20170712.export.CSV
docker exec geodockeraccumulogeomesa_accumulo-master_1 geomesa ingest -c geomesa.gdelt -C gdelt -f gdelt -s gdelt -u root -p GisPwd /tmp/20170712.export.CSV
I get the following response: Using GEOMESA_HOME = /opt/geomesa/tools
Nothing happens after this. What does this mean, and what is the correct next step?
That repo is not maintained any more. Try using this one, and following the README: https://github.com/geodocker/geodocker-geomesa
I have a codestains.conf file in the ~/.init folder:
description "Codestains"
author "Varun Mundra"
start on virtual-filesystems
stop on runlevel [06]
env PATH=/opt/www/codestains.com/current/bin:/usr/local/rbenv/shims:/usr/local/rbenv/bin:/usr/local/bin:/us$
env RAILS_ENV=production
env RACK_ENV=production
setuid ubuntu
setgid sudo
chdir /opt/www/codestains.com
pre-start script
exec >/home/ubuntu/codestains.log 2>&1
exec /opt/www/codestains.com/current/bin/unicorn -D -c /opt/www/codestains.com/current/config/unicorn.rb $
end script
post-stop script
exec kill 'cat /tmp/unicorn.codestains.pid'
end script
I have added https://gist.github.com/bradleyayers/1660182 in /etc/dbus-1/system.d/Upstart.conf to enable Upstart user jobs.
But every time I run
start codestains
sudo start codestains
I get "start: Unknown job: codestains".
I have tried a lot of things available online. Nothing seems to help.
Also,
init-checkconf codestains.conf
gives "File codestains.conf: syntax ok"
I spot one error that is certainly a problem; I do not know if it is the only problem. I haven't made any attempt to test it. However, this bit:
exec kill 'cat /tmp/unicorn.codestains.pid'
is definitely wrong: it passes the literal string cat /tmp/unicorn.codestains.pid to the kill command, which will not do what you want.
You may have seen an example and missed that those are backtick characters, which cause the shell to execute cat /tmp/unicorn.codestains.pid, capture its STDOUT, and then interpolate the result where you put the backticks; in other words, it passes the contents of that pid file to the kill command.
Like this:
exec kill `cat /tmp/unicorn.codestains.pid`
Note the subtly different backtick character, which shells (bash, at least) treat specially as described here: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html
(see the section on "Command substitution")
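Incidentally, the $(...) form of command substitution does the same thing and is easier to read:
exec kill $(cat /tmp/unicorn.codestains.pid)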
HTH
I installed Nagios on CentOS to monitor some servers, one of which is a TSM server.
I downloaded a plugin written in bash; when I execute it on the command line it works:
/usr/lib64/nagios/plugins/check_tsm db -v6
db - database utilization 42%, OK
and the return code of the bash script is 0 (from the command echo $?).
So the script works fine and returns 0, which should mean an OK status in Nagios, but the status is still UNKNOWN, and I really don't know why.
I checked the logs in Nagios, etc. It's not a problem with the command definition in commands.cfg or the service declaration, because I copied the command that Nagios sends automatically every 5 minutes and it works fine on the command line, but the status is still UNKNOWN.
Definition of command:
define command{
    command_name    check_tsm_v6
    command_line    /usr/lib64/nagios/plugins/check_tsm $ARG1$ -v6 $ARG2$ $ARG3$
}
Service declaration :
define service{
    use                     generic-service
    host_name               tsm-test
    service_description     database utilization
    check_command           check_tsm_v6!db!85!90
}
And here's the bash script.
One thing that's caught me out in the past with Nagios scripts is user rights. When testing your script directly on the command line be sure to precede it with:
sudo -u nagios
So yours would be:
sudo -u nagios /usr/lib64/nagios/plugins/check_tsm db -v6
This assumes that your nagios instance is being run by the nagios user, which is a fairly safe bet.
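To also confirm the exit code the plugin returns when run as that user, you can append echo $?:
sudo -u nagios /usr/lib64/nagios/plugins/check_tsm db -v6; echo $?
# Nagios maps exit codes as: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 (or anything else) = UNKNOWN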
Good luck
Brad
Try running yum install sysstat -y to install the package.
If that works, great. If you are still facing the same problem, please post the complete error that is shown in the browser.
I'm working on creating a single command that will run multiple things on the command line of another machine. Here is what I'm looking to do:
Use psexec to access remote machine
travel to proper directory and file
execute ant task
exit cmd
run together in one line
I can run the command below from the Run dialog to accomplish what I need, but I can't seem to get the format right for psexec to understand it.
cmd /K cd /d D:\directory & ant & exit
I've tried applying this to the psexec example below:
psexec \\machine cmd /K cd /d D:\directory & ant & exit
When executing this, it opens the command prompt and changes to D:\directory, but it won't execute the remaining commands. Adding quotes ("") just creates more issues.
Can anyone guide me to the correct format? Or something other than psexec I can use to complete this (free options only)?
Figured it out finally after some more internet searching and trial and error. You need cmd /c rather than /K to run multiple commands and exit, but that syntax doesn't work with the setup I wrote above. I've gotten the command below to run what I need.
psexec \\machine cmd /c (^d:^ ^& cd directory^ ^& ant^)
I don't need exit because psexec exits itself upon completion. You can also use && to require success before continuing to the next command; see the sketch after the links below. I found this forum helpful:
http://forum.sysinternals.com/psexec_topic318.html
And this for running psexec commands
http://ss64.com/nt/psexec.html
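For reference, the same approach written with && and the quoting style shown in the other answers (an untested sketch based on the directory from the question) would look like:
psexec \\machine cmd /c "cd /d D:\directory && ant"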
This works:
psexec \\ComputerName cmd /c "echo hey1 & echo hey2"
For simple cases I use:
PsExec \\machine <options> CMD /C "command_1 & command_2 & ... & command_N"
For more complex cases, using a batch file with PsExec's -c switch may be more suitable:
The -c switch directs PsExec to copy the specified executable to the remote system for execution and delete the executable from the remote system when the program has finished running.
PsExec \\machine <options> -c PSEXEC_COMMANDS.cmd <arguments>
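For instance, PSEXEC_COMMANDS.cmd (the name and the contents here are just placeholders for your own steps) might contain:
@echo off
REM Change to the build directory and run the ant task
cd /d D:\directory
ant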
Since you asked about other options and this has the configuration-management tag, I guess you may be interested in Jenkins (or Hudson). It provides a very good way of creating a master-slave mechanism, which may help simplify the build process.
I always use it this way :) and it works properly:
psexec \\COMPUTER -e cmd /c (COMMAND1 ^& COMMAND2 ^& COMMAND3)