Tweet from Growl/Notification Center/AppleScript?

I'm looking to automatically tweet Notification Center messages from my Mac. Is there a way to use Growl or AppleScript to achieve this? Thanks!

I've been able to use the following method to script tweets.
This AppleScript code composes the tweet manually with an enter-text dialog and runs the shell script, which checks and sends the tweet. It does not use text from Growl or Notification Center.
AppleScript code:
set q to display dialog "Tweet:" default answer ""
set myTweet to text returned of q
if (length of myTweet) > 140 then
display dialog "I'm pretty sure that tweet is too long (" & (length of myTweet as text) & " chars)" buttons {"oh"} default button 1
else
set e to do shell script "cd /pathTo/folderContaining/;./tweet.sh <yourtwitterhandle> <yourtwitterpassword> \"" & myTweet & "\""
if e ≠ "" then display dialog e buttons {"OK"} default button 1
end if
I've had some problems with the character limit, so I usually keep my tweets extra short. One likely cause: the shell script counts bytes with wc -c, which includes the trailing newline and counts multibyte characters byte by byte, while AppleScript's 'length of' counts characters, so a tweet that passes the AppleScript check can still fail in the shell. Maybe you can find a 'length of' limit shorter than 140 (see the AS code: if (length of myTweet) > 140 then) that will work consistently; a bash-side sketch follows.
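A hedged sketch, not from the original script: replace the wc -c test with bash's ${#...} length, which counts characters and has no trailing newline to worry about:
# count characters instead of bytes; ${#tweet} matches AppleScript's
# "length of" more closely than echo | wc -c does
if [ ${#tweet} -gt 140 ]; then
echo "[FAIL] Tweet must not be longer than 140 chars!" && exit 1
fi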
Here is the shell script, modified from http://360percents.com/posts/command-line-twitter-status-update-for-linux-and-mac/
Make sure you name it 'tweet.sh' and make it executable, e.g. with chmod +x (notice I've commented out all the reporting so it runs completely quietly):
#!/bin/bash
#REQUIRED PARAMS (crg - now passed through command line)
username=$1
password=$2
tweet=$3 #must be less than 140 chars
#EXTRA OPTIONS
uagent="Mozilla/5.0" #user agent (fake a browser)
sleeptime=0 #add pause between requests
if [ $(echo "$tweet" | wc -c) -gt 140 ]; then
echo "[FAIL] Tweet must not be longer than 140 chars!" && exit 1
elif [ "$tweet" == "" ]; then
echo "[FAIL] Nothing to tweet. Enter your text as argument." && exit 1
fi
touch "cookie.txt" #create a temp. cookie file
#crg - commented out all 'success' echos ... will only return string if error
#GRAB LOGIN TOKENS
#echo "[+] Fetching twitter.com..." && sleep $sleeptime
initpage=$(curl -s -b "cookie.txt" -c "cookie.txt" -L --sslv3 -A "$uagent" "https://mobile.twitter.com/session/new")
token=$(echo "$initpage" | grep "authenticity_token" | sed -e 's/.*value="//' | sed -e 's/" \/>.*//')
#LOGIN
#echo "[+] Submitting the login form..." && sleep $sleeptime
loginpage=$(curl -s -b "cookie.txt" -c "cookie.txt" -L --sslv3 -A "$uagent" -d "authenticity_token=$token&username=$username&password=$password" "https://mobile.twitter.com/session")
#GRAB COMPOSE TWEET TOKENS
#echo "[+] Getting compose tweet page..." && sleep $sleeptime
composepage=$(curl -s -b "cookie.txt" -c "cookie.txt" -L -A "$uagent" "https://mobile.twitter.com/compose/tweet")
#TWEET
#echo "[+] Posting a new tweet: $tweet..." && sleep $sleeptime
tweettoken=$(echo "$composepage" | grep "authenticity_token" | sed -e 's/.*value="//' | sed -e 's/" \/>.*//' | tail -n 1)
update=$(curl -s -b "cookie.txt" -c "cookie.txt" -L --sslv3 -A "$uagent" -d "authenticity_token=$tweettoken&tweet[text]=$tweet&tweet[display_coordinates]=false" "https://mobile.twitter.com/")
#GRAB LOGOUT TOKENS
logoutpage=$(curl -s -b "cookie.txt" -c "cookie.txt" -L --sslv3 -A "$uagent" "https://mobile.twitter.com/account")
#LOGOUT
#echo "[+] Logging out..." && sleep $sleeptime
logouttoken=$(echo "$logoutpage" | grep "authenticity_token" | sed -e 's/.*value="//' | sed -e 's/" \/>.*//' | tail -n 1)
logout=$(curl -s -b "cookie.txt" -c "cookie.txt" -L --sslv3 -A "$uagent" -d "authenticity_token=$logouttoken" "https://mobile.twitter.com/session/destroy")
rm "cookie.txt"
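For completeness, a hypothetical invocation (handle, password, and message are placeholders):
chmod +x tweet.sh
./tweet.sh yourhandle yourpassword "short test tweet"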

Related

cron not running in alpine docker

I have created and added the entry below in my entry-point.sh for my Dockerfile.
# start cron
/usr/sbin/crond &
exec "${DIST}/bin/ss" "$@"
My crontab.txt looks like this:
bash-4.4$ crontab -l
*/5 * * * * /cleanDisk.sh >> /apps/log/cleanDisk.log
So when I run the Docker container, I don't see any file called cleanDisk.log being created.
I have set up all permissions, and crond is running as a process in my container; see below.
bash-4.4$ ps -ef | grep cron
12 sdc 0:00 /usr/sbin/crond
208 sdc 0:00 grep cron
So, can anyone guide me on why the log file is not getting created?
My cleanDisk.sh looks like below. Since it runs for the very first time and doesn't match all the criteria, I would expect it at least to print "No Error file found on Host $(hostname)" in cleanDisk.log.
#!/bin/bash
THRESHOLD_LIMIT=20
RETENTION_DAY=3
df -Ph /apps/ | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output
do
#echo $output
used=$(echo $output | awk '{print $1}' | sed s/%//g)
partition=$(echo $output | awk '{print $2}')
if [ $used -ge ${THRESHOLD_LIMIT} ]; then
echo "The partition \"$partition\" on $(hostname) has used $used% at $(date)"
FILE_COUNT=$(find ${SDC_LOG} -maxdepth 1 -mtime +${RETENTION_DAY} -type f -name "sdc-*.sdc" -print | wc -l)
if [ ${FILE_COUNT} -gt 0 ]; then
echo "There are ${FILE_COUNT} files older than ${RETENTION_DAY} days on Host $(hostname)."
for FILENAME in $(find ${SDC_LOG} -maxdepth 1 -mtime +${RETENTION_DAY} -type f -name "sdc-*.sdc" -print);
do
ERROR_FILE_SIZE=$(stat -c%s ${FILENAME} | awk '{ split( "B KB MB GB TB PB" , v ); s=1; while( $1>1024 ){ $1/=1024; s++ } printf "%.2f %s\n", $1, v[s] }')
echo "Before Deleting Error file ${FILENAME}, the size was ${ERROR_FILE_SIZE}."
rm -rf ${FILENAME}
rc=$?
if [[ $rc -eq 0 ]];
then
echo "Error log file ${FILENAME} with size ${ERROR_FILE_SIZE} is deleted on Host $(hostname)."
fi
done
fi
if [ ${FILE_COUNT} -eq 0 ]; then
echo "No Error file found on Host $(hostname)."
fi
fi
done
Edit:
My Dockerfile looks like this:
FROM adoptopenjdk/openjdk8:jdk8u192-b12-alpine
ARG SDC_UID=20159
ARG SDC_GID=20159
ARG SDC_USER=sdc
RUN apk add --update --no-cache bash \
busybox-suid \
sudo && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf
RUN addgroup --system ${SDC_USER} && \
adduser --system --disabled-password -u ${SDC_UID} -G ${SDC_USER} ${SDC_USER}
ADD --chown=sdc:sdc crontab.txt /etc/crontabs/sdc/
RUN chgrp sdc /etc/cron.d /etc/crontabs /usr/bin/crontab
# Also tried to run like this but not working
# RUN /usr/bin/crontab -u sdc /etc/crontabs/sdc/crontab.txt
USER ${SDC_USER}
EXPOSE 18631
RUN /usr/bin/crontab /etc/crontabs/sdc/crontab.txt
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["dc", "-exec"]

RedHat Memory Used High

Looking for some help, if you will...
I have a virtual machine on RedHat 6.5 with 32 GB memory.
free shows 24.6 GB used and 8.2 GB free. Only 418 MB is cached, with 1.8 GB in buffers.
I ran top sorted by virtual memory used, and I can only account for about 6 GB of that 24.6 GB used.
A "ps aux" doesn't show any processes that could be taking the memory.
I am flummoxed and looking for advice on where I can look to see what's taking the memory.
Any help would be appreciated.
The Bash script below will help you figure out how much memory each application is consuming.
#!/bin/bash
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
### Functions
#This function counts memory statistics for the passed PID
get_process_mem ()
{
PID=$1
#we need to check if 2 files exist
if [ -f /proc/$PID/status ];
then
if [ -f /proc/$PID/smaps ];
then
#here we count memory usage: Pss, Private, and Shared = Pss - Private
Pss=`cat /proc/$PID/smaps | grep -e "^Pss:" | awk '{print $2}'| paste -sd+ | bc `
Private=`cat /proc/$PID/smaps | grep -e "^Private" | awk '{print $2}'| paste -sd+ | bc `
#make sure both Pss and Private were actually read, to avoid arithmetic errors
if [ x"$Pss" != "x" -a x"$Private" != "x" ];
then
let Shared=${Pss}-${Private}
Name=`cat /proc/$PID/status | grep -e "^Name:" |cut -d':' -f2`
#we keep all results in bytes
let Shared=${Shared}*1024
let Private=${Private}*1024
let Sum=${Shared}+${Private}
echo -e "$Private + $Shared = $Sum \t $Name"
fi
fi
fi
}
#this function converts bytes to kb, mb, or gb
convert()
{
value=$1
power=0
#if the value is 0, we format it as 0.00
if [ "$value" = "0" ];
then
value="0.00"
fi
#we keep dividing by 1024 while the value is bigger than 1024
while [ $(echo "${value} > 1024"|bc) -eq 1 ]
do
value=$(echo "scale=2;${value}/1024" |bc)
let power=$power+1
done
#this part picks b, kb, mb, or gb according to the number of divisions
case $power in
0) reg=b;;
1) reg=kb;;
2) reg=mb;;
3) reg=gb;;
esac
echo -n "${value} ${reg} "
}
#ensure the temp files do not already exist
[[ -f /tmp/res ]] && rm -f /tmp/res
[[ -f /tmp/res2 ]] && rm -f /tmp/res2
[[ -f /tmp/res3 ]] && rm -f /tmp/res3
#if an argument is passed, the script will show statistics only for that pid; if not, we list all processes
#in /proc/ and get statistics for all of them; all results are stored in the file /tmp/res
if [ $# -eq 0 ]
then
pids=`ls /proc | grep -e [0-9] | grep -v [A-Za-z] `
for i in $pids
do
get_process_mem $i >> /tmp/res
done
else
get_process_mem $1>> /tmp/res
fi
#This will sort results by memory usage
cat /tmp/res | sort -gr -k 5 > /tmp/res2
#this part gets unique names from the process list, and we add up all lines with the same name
#we count the number of processes with the same name, so with more than 1 process the output
#shows process(2)
for Name in `cat /tmp/res2 | awk '{print $6}' | sort | uniq`
do
count=`cat /tmp/res2 | awk -v src=$Name '{if ($6==src) {print $6}}'|wc -l| awk '{print $1}'`
if [ $count = "1" ];
then
count=""
else
count="(${count})"
fi
VmSizeKB=`cat /tmp/res2 | awk -v src=$Name '{if ($6==src) {print $1}}' | paste -sd+ | bc`
VmRssKB=`cat /tmp/res2 | awk -v src=$Name '{if ($6==src) {print $3}}' | paste -sd+ | bc`
total=`cat /tmp/res2 | awk '{print $5}' | paste -sd+ | bc`
Sum=`echo "${VmRssKB}+${VmSizeKB}"|bc`
#all result stored in /tmp/res3 file
echo -e "$VmSizeKB + $VmRssKB = $Sum \t ${Name}${count}" >>/tmp/res3
done
#this sorts once more
cat /tmp/res3 | sort -gr -k 5 | uniq > /tmp/res
#now we print the result, header first
echo -e "Private \t + \t Shared \t = \t RAM used \t Program"
#then we read the temp file line by line
while read line
do
echo $line | while read a b c d e f
do
#we print a process only if its RAM used is not 0
if [ $e != "0" ]; then
#here we use the conversion function
echo -en "`convert $a` \t $b \t `convert $c` \t $d \t `convert $e` \t $f"
echo ""
fi
done
done < /tmp/res
#this part prints the footer, with the counted RAM usage
echo "--------------------------------------------------------"
echo -e "\t\t\t\t\t\t `convert $total`"
echo "========================================================"
#we clean the temporary files
[[ -f /tmp/res ]] && rm -f /tmp/res
[[ -f /tmp/res2 ]] && rm -f /tmp/res2
[[ -f /tmp/res3 ]] && rm -f /tmp/res3
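For reference, a hypothetical invocation (the script name here is made up; it must run as root):
sudo ./mem_per_app.sh # statistics for all processes
sudo ./mem_per_app.sh 1234 # statistics for PID 1234 only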
I am going to take a wild stab at this. Without access to the machine or additional information, troubleshooting this will be difficult.
The /tmp file system is special in that it can exist entirely in memory (when mounted as tmpfs). There are a couple of others like this, but /tmp is a special flower. Check the disk usage on this directory and you may see where your memory is getting consumed (du -sh /tmp).
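If /tmp isn't the culprit, a few read-only commands may help narrow it down, since kernel slab memory and tmpfs/shared memory never show up against any single process in top:
cat /proc/meminfo # the kernel's own breakdown of where memory went
slabtop -o | head -20 # kernel object (slab) usage
df -h -t tmpfs # RAM-backed tmpfs mounts and how full they are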

Is the CSV download link API stable (or official)?

Is this a part of a stable api? Can I use it in an app to consume the data?
https://docs.google.com/spreadsheets/d/<KEY>/export?gid=<WORKBOOK_ID>&format=csv
AFAIK https://docs.google.com/spreadsheets/d/<KEY>/export?gid=<WORKBOOK_ID>&format=csv is not part of a current API.
The Google Sheets API documentation link is
https://developers.google.com/google-apps/spreadsheets/
I think the then-working solutions no longer work reliably; sometimes they do, sometimes they don't. Here is a quick test script:
#!/bin/bash
u='1VaW8fRFJeJ5Wg6-prF8l3xCwbjkx-q_Mqyt8XfjIn6Q'
echo "pubhtml"
curl -s "https://docs.google.com/spreadsheets/d/$u/pubhtml" | grep -c 'exists to test'
echo "export1"
curl -s "https://docs.google.com/spreadsheets/d/$u/export?format=csv&id=$u" | grep -c 'exists to test'
echo "export2"
curl -s "https://docs.google.com/spreadsheets/d/$u/export?format=csv&gid=$u" | grep -c 'exists to test'
echo "export3"
curl -s "https://docs.google.com/spreadsheets/d/$u/export?format=csv&gid=0" | grep -c 'exists to test'
echo "export4"
curl -s "https://docs.google.com/spreadsheet/tq?key=$u&gid=1&tq=select%20A,%20B&tqx=reqId:1;out:html;%20responseHandler:webQuery" | grep -c "exists to test"
echo "export5"
curl -s "https://docs.google.com/spreadsheet/tq?key=$u&id=1&tq=select%20A,%20B&tqx=reqId:1;out:html;%20responseHandler:webQuery" | grep -c "exists to test"
exit 0
(My test spreadsheet contains the string.) Only the first request (pubhtml) still works; that is, my command-line output is 1 0 0 0 0 0. So, short of using .NET or Java, the current working choice seems to be pubhtml and reparsing its HTML?
A shame; it was such a useful, easy way to pull down data.
/iaw
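If you go the reparsing route, here is a rough sketch using the same test key; the grep/sed extraction is my own assumption and will break on merged cells or rich formatting:
#!/bin/bash
# fetch the published HTML and strip it down to bare cell text, one cell per line
u='1VaW8fRFJeJ5Wg6-prF8l3xCwbjkx-q_Mqyt8XfjIn6Q'
curl -s "https://docs.google.com/spreadsheets/d/$u/pubhtml" |
grep -o '<td[^>]*>[^<]*</td>' |
sed -e 's/<[^>]*>//g'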

JMXterm command works manually from terminal but not from script, what could be the reason?

I'm trying to automate a JMX command using a tool called JMXterm.
The script looks like so:
#!/bin/bash
beanstemp="/tmp/jmx_beans"
read -r -p "Enter server name in FQDN " scrapername
bean1="BrokerName=localhost,Connection=Scraper_$scrapername,ConnectorName=openwire,Type=Connection"
echo beans -d $domain | $cmd > $beanstemp
idnum=$(grep "ID_$scrapername" $beanstemp | grep openwire | awk -F- '{print $2"-"$3}')
idname="$scrapername-$idnum"
bean2="BrokerName=localhost,Connection=ID_"$idname"2_0,ConnectorName=openwire,Type=Connection"
domain="org.apache.activemq"
server="scrapermq.sj.peer39.com:1099"
cmd="java -jar Downloads/jmxterm-1.0-alpha-4-uber.jar -l $server"
echo run stop -b $bean1 -d $domain | $cmd -l $server
echo run stop -b $bean2 -d $domain | $cmd -l $server
When I run the script I get the proper response for the first command but the second one fails.
Here's the output of sh -x:
itaig@iganot-lt:~$ sh -x jmx.sh
+ beanstemp=/tmp/jmx_beans
+ read -r -p Enter server name in FQDN scrapername
Enter server name in FQDN selvmnj27.domain.company.com
+ bean1=BrokerName=localhost,Connection=Scraper_selvmnj27.domain.company.com,ConnectorName=openwire,Type=Connection
+ echo beans -d
+
+ + grep openwire
grep ID_selvmnj27.domain.company.com+ awk -F- {print $2"-"$3}
/tmp/jmx_beans
+ idnum=
+ idname=selvmnj27.domain.company.com-
+ bean2=BrokerName=localhost,Connection=ID_selvmnj27.domain.company.com-2_0,ConnectorName=openwire,Type=Connection
+ domain=org.apache.activemq
+ server=scrapermq.sj.peer39.com:1099
+ cmd=java -jar Downloads/jmxterm-1.0-alpha-4-uber.jar -l scrapermq.sj.peer39.com:1099
+ echo+ run stop -b BrokerName=localhost,Connection=Scraper_selvmnj27.domain.company.com,ConnectorName=openwire,Type=Connectionjava -jar -d Downloads/jmxterm-1.0-alpha-4-uber.jar -l org.apache.activemq scrapermq.sj.peer39.com:1099 -l
scrapermq.sj.peer39.com:1099
Welcome to JMX terminal. Type "help" for available commands.
$>run stop -b BrokerName=localhost,Connection=Scraper_selvmnj27.domain.company.com,ConnectorName=openwire,Type=Connection -d org.apache.activemq
#calling operation stop of mbean org.apache.activemq:BrokerName=localhost,Connection=Scraper_selvmnj27.domain.company.com,ConnectorName=openwire,Type=Connection
#operation returns:
null
$>+ + echo runjava stop -b -jar Downloads/jmxterm-1.0-alpha-4-uber.jar BrokerName=localhost,Connection=ID_selvmnj27.domain.company.com-2_0,ConnectorName=openwire,Type=Connection -l scrapermq.sj.peer39.com:1099 -l -d scrapermq.sj.peer39.com:1099
org.apache.activemq
Welcome to JMX terminal. Type "help" for available commands.
$>run stop -b BrokerName=localhost,Connection=ID_selvmnj27.domain.company.com-2_0,ConnectorName=openwire,Type=Connection -d org.apache.activemq
#InstanceNotFoundException: org.apache.activemq:BrokerName=localhost,Connection=ID_selvmnj27.domain.company.com-2_0,ConnectorName=openwire,Type=Connection
$>itaig@iganot-lt:~$
"Operation returns: null" is the response I'm looking for.
The problem begins with this line:
echo beans -d $domain | $cmd > $beanstemp
When it runs, the output of the command should be written to the $beanstemp file, but the file stays empty.
When I run the command manually from the terminal, it works:
itaig@iganot-lt:~$ echo beans -d org.apache.activemq | java -jar Downloads/jmxterm-1.0-alpha-4-uber.jar -l scrapermq.domain.company.com:1099 > /tmp/jmx_beans
Welcome to JMX terminal. Type "help" for available commands.
$>beans -d org.apache.activemq
#domain = org.apache.activemq:
$>itaig@iganot-lt:~$ tail -5 /tmp/jmx_beans
org.apache.activemq:BrokerName=localhost,Type=Subscription,clientId=Scraper_selvmnj28.domain.company.com,consumerId=ID_selvmnj28.domain.company.com-45117-1431866382308-0_416_1_1,destinationName=SCRAPER_VIDEO_INPUT,destinationType=Queue,persistentMode=Non-Durable
org.apache.activemq:BrokerName=localhost,Type=Subscription,clientId=Scraper_selvmnj29.domain.company.com,consumerId=ID_selvmnj29.domain.company.com-52961-1431866382264-0_416_-1_1,destinationName=topic_//ActiveMQ.Advisory.TempQueue_topic_//ActiveMQ.Advisory.TempTopic,destinationType=Topic,persistentMode=Non-Durable
org.apache.activemq:BrokerName=localhost,Type=Subscription,clientId=Scraper_selvmnj29.domain.company.com,consumerId=ID_selvmnj29.domain.company.com-52961-1431866382264-0_416_1_1,destinationName=SCRAPER_VIDEO_INPUT,destinationType=Queue,persistentMode=Non-Durable
org.apache.activemq:BrokerName=localhost,Type=Subscription,clientId=Scraper_selvmnj30.domain.company.com,consumerId=ID_selvmnj30.domain.company.com-33036-1431866396456-0_416_-1_1,destinationName=topic_//ActiveMQ.Advisory.TempQueue_topic_//ActiveMQ.Advisory.TempTopic,destinationType=Topic,persistentMode=Non-Durable
org.apache.activemq:BrokerName=localhost,Type=Subscription,clientId=Scraper_selvmnj30.domain.company.com,consumerId=ID_selvmnj30.domain.company.com-33036-1431866396456-0_416_1_1,destinationName=SCRAPER_VIDEO_INPUT,destinationType=Queue,persistentMode=Non-Durable
itaig@iganot-lt:~$
So my question is: what am I doing wrong, and why does the command work when run manually from the terminal but not from the script?
Thanks in advance.
OK, I found the problem.
Some lines used variables that had not yet been declared at that point (for example, $domain and $cmd are used on the "echo beans" line but only defined afterwards).
After reordering the variable declarations, the problem was solved.
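For illustration, a minimal sketch of the reordered script (same values as the question; note that $cmd already contains -l $server, so it is not repeated on the run stop lines):
#!/bin/bash
beanstemp="/tmp/jmx_beans"
# declare everything before first use
domain="org.apache.activemq"
server="scrapermq.sj.peer39.com:1099"
cmd="java -jar Downloads/jmxterm-1.0-alpha-4-uber.jar -l $server"
read -r -p "Enter server name in FQDN " scrapername
bean1="BrokerName=localhost,Connection=Scraper_$scrapername,ConnectorName=openwire,Type=Connection"
# $domain and $cmd are now set when the beans list is fetched
echo beans -d $domain | $cmd > $beanstemp
idnum=$(grep "ID_$scrapername" $beanstemp | grep openwire | awk -F- '{print $2"-"$3}')
idname="$scrapername-$idnum"
bean2="BrokerName=localhost,Connection=ID_${idname}2_0,ConnectorName=openwire,Type=Connection"
echo run stop -b $bean1 -d $domain | $cmd
echo run stop -b $bean2 -d $domain | $cmd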

Move script if upload is completed

I have a script which has to move uploaded files from the first directory to the second directory.
The problem is that the script already moves the files during upload.
Can anyone help?
#!/bin/sh
lockfile=/home/mediaze111/cronjobs/zenon_move.lock
if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
if [ "$(ls -A /home/mediaze111/domains/optimaal.fm/ZenonImport/Import1/)" ]; then
ps faux | grep -E 'UPLOAD' | grep -v 'grep' > /dev/null || mv -f /home/mediaze111/domains/optimaal.fm/ZenonImport/Import1/*.* /home/mediaze111/domains/optimaal.fm/ZenonImport/Import2/
fi
rm -f "$lockfile"
trap - INT TERM EXIT
fi
You can record the file's size, wait a moment, and check the size again: if it hasn't changed, the upload has finished and the file can be moved. Put the whole check in a loop and break out to move the files once the sizes match.
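A rough sketch of that idea, assuming GNU stat and the directories from the question; the 30-second settle window is a guess to tune to your upload speeds:
#!/bin/sh
src=/home/mediaze111/domains/optimaal.fm/ZenonImport/Import1
dst=/home/mediaze111/domains/optimaal.fm/ZenonImport/Import2
for f in "$src"/*.*; do
[ -f "$f" ] || continue
size1=$(stat -c%s "$f")
sleep 30
size2=$(stat -c%s "$f")
# an unchanged size suggests the upload has finished
if [ "$size1" -eq "$size2" ]; then
mv -f "$f" "$dst"/
fi
done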
