Time bind parameter command error *influxql.StringLiteral are not compatible - influxdb

Can someone please point out what I am doing wrong in the command below, which produces the error shown after it?
Command:
eCollection=( $(cut -d ',' -f2 new.txt ) )
start= date --utc +%FT%T.%2NZ
sleep 10
end= date --utc +%FT%T.%2NZ
for i in "${eCollection[#]}"
do
var=$((var+1))
if [[ $var -gt 1 ]] ; then
curl -G 'http://localhost:8086/query?db=telegraf' --data-urlencode \
'q=SELECT * FROM '$i' WHERE "time" >= $timebegin AND "time" \
<= $timeend' --data-urlencode \
'params {"timebegin":"${start}","timeend":"${end}"}'
fi
done
Error:
{"results":[{"statement_id":0,"error":"invalid operation: time and *influxql.StringLiteral
are not compatible"}]}

Here is an updated attempt that wraps the timestamps in quotes so they are passed as strings:
start=$(date --utc +"%FT%T.%2NZ")
sleep 100
end=$(date --utc +"%FT%T.%2NZ")
startCall='"'$start'"'
endCall='"'$end'"'
echo "$startCall"
echo "$endCall"
for i in "${eCollection[#]}"
do
var=$((var+1))
if [[ $var -gt 1 ]] ; then
echo ${i}
curl -G 'http://localhost:8086/query?db=telegraf' --data-urlencode 'q=SELECT * FROM '$i' WHERE "time" >= $timebegin AND "time" <= $timeend' \
--data-urlencode 'params={"timebegin":'$startCall', "timeend": '$endCall'}'
fi
done
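For reference, here is a hedged sketch of how this call is usually assembled, assuming the InfluxDB 1.x /query endpoint and the $start, $end, and $i variables defined above. Note that in the first attempt both the query and the params JSON sit inside single quotes (and params is missing its =), so the shell never expands ${start} and ${end}; double quotes avoid that, while an escaped \$ keeps the InfluxQL placeholders literal for the server.
# Sketch, not a verified fix: the bind values travel in a separate params= JSON object.
curl -G 'http://localhost:8086/query?db=telegraf' \
  --data-urlencode "q=SELECT * FROM \"$i\" WHERE time >= \$timebegin AND time <= \$timeend" \
  --data-urlencode "params={\"timebegin\": \"$start\", \"timeend\": \"$end\"}"
# If the server still rejects bound strings in the time comparison (the error above),
# a pragmatic fallback is to inline the quoted timestamps instead of binding them:
#   --data-urlencode "q=SELECT * FROM \"$i\" WHERE time >= '$start' AND time <= '$end'"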

Related

Jenkins pass a shell script variable to a downstream job

I'm new to Jenkins.
I have a job with an "Execute shell" build step, and in that shell script I initialize some variables with values I take from some source.
I need to pass these values to a downstream job.
The values I want to pass are $IMG_NAME and $IMG_PATH from this shell script:
#!/bin/bash -x
whoami
echo "BASE_PATH: $BASE_PATH"
declare -A BRANCHES
for i in $(echo $BRANCHES_LIST | tr ',' '\n'); do BRANCHES[$i]= ; done
echo 'user' | sudo -S umount -rf /mnt/tlv-s-comp-prod/drops/
echo 'user' | sudo -S mount.nfs -o nolock,nfsvers=3 tlv-s-comp-prod:/export/drops /mnt/tlv-s-comp-prod/drops
ls /mnt/tlv-s-comp-prod/drops/
echo "cleanup workspace"
rm ${WORKSPACE}/*.txt &> /dev/null
i="0"
while [ $i -lt 6 ]
do
if [[ ${BASE_PATH} == *"Baseline"* ]]; then
unset BRANCHES[@]
declare -A BRANCHES
BRANCHES[Baseline]=
fi
for BRANCH in "${!BRANCHES[#]}"; do
echo "BRANCH: $BRANCH"
if [ $BRANCH == "Baseline" ]; then BRANCH=; fi
img_dir=$(ls -td -- ${BASE_PATH}/${BRANCH}/*/ | head -n 1)
echo "img_dir: $img_dir"
IMG_PATH=$(ls $img_dir*.rpm)
echo "IMG_PATH: $IMG_PATH"
cd $img_dir
IMG_NAME=$(ls *.rpm) > env.properties
if [ ! -z "$IMG_NAME" ]; then
if [ $(( $(date +%s) - $(stat -c %Z $IMG_PATH) )) -lt 10000800 ]; then
echo "IMG_NAME: ${IMG_NAME}"
#BRANCHES[$BRANCH]=$IMG_PATH
#echo "REG_OSA_SOFTSYNC_BUILD_IMG_FULL_PATH=${BRANCHES[$BRANCH]}" >> ${WORKSPACE}/$BRANCH.txt
echo "BRANCH_NAME=$BRANCH" >> ${WORKSPACE}/${BRANCH}_branch.txt
echo "REG_OSA_SOFTSYNC_BUILD_NAME=$BRANCH-$IMG_NAME" >> ${WORKSPACE}/${BRANCH}_branch.txt
else
echo "$IMG_NAME is out dated"
fi
else
echo "IMG_NAME is empty"
fi
BRANCH_NAME=""
done
TEMP=$BRANCH_NAME
echo "TEMP: $TEMP"
if [ $(ls ${WORKSPACE}/*_branch.txt | wc -l) == $(echo ${#BRANCHES[@]}) ]; then break; fi
#for i in $(ls *_branch.txt); do i=$(echo $i | awk -F '_branch.txt' '{print $1}'); if [ $(echo ${!BRANCHES[@]} | grep $i | wc -l) == 0 ]; then state=1 break; fi done
i=$[$i+1]
sleep 1800
done
This is the "Trigger parameterized build on other projects" configuration:

Drop DynamoDB if it exists

I am trying to set up DynamoDB locally using Docker. I wish to control its initialization with a Makefile. Here is the Makefile I am using:
TABLE_NAME="users"
create_db:
@aws dynamodb --endpoint-url http://localhost:8042 create-table \
--table-name $(TABLE_NAME) \
--attribute-definitions \
AttributeName=userID,AttributeType=N \
--key-schema \
AttributeName=userID,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 >> /dev/null;
drop_db: check_db
check_db; if [test $$? -eq 1] then \
@aws dynamodb --endpoint-url http://localhost:8042 delete-table --table-name $(TABLE_NAME); \
fi
check_db:
-@aws dynamodb --endpoint-url http://localhost:8042 describe-table --table-name $(TABLE_NAME);
AWS does not provide DROP IF EXISTS functionality like MySQL does, so I am trying to use the output of describe-table to check for the presence of the table. However, I am getting the following error:
check_db; if [test $? -eq 1] then \
@aws dynamodb --endpoint-url http://localhost:8042 delete-table --table-name "requests"; \
fi
/bin/sh: -c: line 0: syntax error near unexpected token `fi'
/bin/sh: -c: line 0: `check_db; if [test $? -eq 1] then @aws dynamodb --endpoint-url http://localhost:8042 delete-table --table-name "requests"; fi'
make: *** [drop_db] Error 2
I am new to Makefiles and cannot figure out how to solve the error. What is wrong in the above Makefile? And is there a better way to check for the presence of a DynamoDB table?
It's not a Makefile issue, it's a syntax error in your shell script. Basically you need a semicolon before then.
$ false; if [test $? -eq 1] then echo foo; fi
bash: syntax error near unexpected token `fi'
You also need to decide whether to use [ or test; the current syntax, which runs them together as [test, is also incorrect.
$ false; if [test $? -eq 1]; then echo foo; fi
Command '[test' not found, did you mean:
...
Working version:
$ false; if [ $? -eq 1 ]; then echo foo; fi
foo
I just made this workaround to mimic the DROP IF EXISTS functionality with DynamoDB:
drop_db: check_db
@if grep -q -i "active" a.out ; then \
aws dynamodb --endpoint-url http://localhost:8042 delete-table --table-name $(TABLE_NAME) >> /dev/null; \
rm a.out; \
fi
check_db:
@aws dynamodb --endpoint-url http://localhost:8042 describe-table --table-name $(TABLE_NAME) --output text &> a.out;
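An alternative sketch (same local endpoint and $(TABLE_NAME) as above, recipe lines tab-indented as usual): describe-table exits non-zero when the table does not exist, so its exit status can drive the delete directly, without the a.out temp file.
drop_db:
	@if aws dynamodb describe-table --table-name $(TABLE_NAME) \
	      --endpoint-url http://localhost:8042 > /dev/null 2>&1; then \
	    aws dynamodb delete-table --table-name $(TABLE_NAME) \
	      --endpoint-url http://localhost:8042 > /dev/null; \
	fi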

cron not running in alpine docker

I have created and added the entry below in my entry-point.sh for my Dockerfile.
# start cron
/usr/sbin/crond &
exec "${DIST}/bin/ss" "$#"
My crontab.txt looks like this:
bash-4.4$ crontab -l
*/5 * * * * /cleanDisk.sh >> /apps/log/cleanDisk.log
So when I run the Docker container, I don't see any file called cleanDisk.log being created.
I have set up all the permissions, and crond is running as a process in my container; see below:
bash-4.4$ ps -ef | grep cron
12 sdc 0:00 /usr/sbin/crond
208 sdc 0:00 grep cron
So, can anyone guide me on why the log file is not getting created?
My cleanDisk.sh looks like below. Since it runs for the very first time and doesn't match all the criteria, I would expect it at least to print "No Error file found on Host $(hostname)" in cleanDisk.log.
#!/bin/bash
THRESHOLD_LIMIT=20
RETENTION_DAY=3
df -Ph /apps/ | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output
do
#echo $output
used=$(echo $output | awk '{print $1}' | sed s/%//g)
partition=$(echo $output | awk '{print $2}')
if [ $used -ge ${THRESHOLD_LIMIT} ]; then
echo "The partition \"$partition\" on $(hostname) has used $used% at $(date)"
FILE_COUNT=$(find ${SDC_LOG} -maxdepth 1 -mtime +${RETENTION_DAY} -type f -name "sdc-*.sdc" -print | wc -l)
if [ ${FILE_COUNT} -gt 0 ]; then
echo "There are ${FILE_COUNT} files older than ${RETENTION_DAY} days on Host $(hostname)."
for FILENAME in $(find ${SDC_LOG} -maxdepth 1 -mtime +${RETENTION_DAY} -type f -name "sdc-*.sdc" -print);
do
ERROR_FILE_SIZE=$(stat -c%s ${FILENAME} | awk '{ split( "B KB MB GB TB PB" , v ); s=1; while( $1>1024 ){ $1/=1024; s++ } printf "%.2f %s\n", $1, v[s] }')
echo "Before Deleting Error file ${FILENAME}, the size was ${ERROR_FILE_SIZE}."
rm -rf ${FILENAME}
rc=$?
if [[ $rc -eq 0 ]];
then
echo "Error log file ${FILENAME} with size ${ERROR_FILE_SIZE} is deleted on Host $(hostname)."
fi
done
fi
if [ ${FILE_COUNT} -eq 0 ]; then
echo "No Error file found on Host $(hostname)."
fi
fi
done
Edit:
My Dockerfile looks like this:
FROM adoptopenjdk/openjdk8:jdk8u192-b12-alpine
ARG SDC_UID=20159
ARG SDC_GID=20159
ARG SDC_USER=sdc
RUN apk add --update --no-cache bash \
busybox-suid \
sudo && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf
RUN addgroup --system ${SDC_USER} && \
adduser --system --disabled-password -u ${SDC_UID} -G ${SDC_USER} ${SDC_USER}
ADD --chown=sdc:sdc crontab.txt /etc/crontabs/sdc/
RUN chgrp sdc /etc/cron.d /etc/crontabs /usr/bin/crontab
# Also tried to run like this but not working
# RUN /usr/bin/crontab -u sdc /etc/crontabs/sdc/crontab.txt
USER ${SDC_USER}
EXPOSE 18631
RUN /usr/bin/crontab /etc/crontabs/sdc/crontab.txt
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["dc", "-exec"]

Makefile foreach inside if

I'm trying to run a foreach function that should only run when a condition is fulfilled.
The foreach function works fine when I delete the surrounding if. When I write my makefile like the following, the foreach loop stops working before the first command is executed and the build process never ends (I don't think I built an infinite loop, because none of the commands inside is executed).
if [ $(BUILD_SPEC) = mySpec ]; \
then ( \
if [ ! -d $(PRJ_ROOT_DIR)/TARGET ]; then mkdir $(PRJ_ROOT_DIR)/TARGET; fi; \
$(foreach target,$(basename $(PROJECT_TARGETS)), \
if [ -e $(PRJ_ROOT_DIR)/$(BUILD_SPEC)/$(target).crc ]; \
then ( \
echo Deleting $(PRJ_ROOT_DIR)/$(BUILD_SPEC)/$(target).crc; \
rm -f $(PRJ_ROOT_DIR)/$(BUILD_SPEC)/$(target).crc; \
) \
fi; \
) \
) \
fi;
I solved this on my own:
For some reason the command "if [ $(BUILD_SPEC) = mySpec ];" does not work. I replaced it with "ifeq ($(BUILD_SPEC),mySpec)". Now the script works the way I wanted it to.
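A minimal sketch of that shape (the target name clean_crc is made up here): the condition moves to a make-level ifeq directive, which is evaluated when the makefile is parsed, so only the foreach-generated shell commands remain in the recipe.
clean_crc:
ifeq ($(BUILD_SPEC),mySpec)
	if [ ! -d $(PRJ_ROOT_DIR)/TARGET ]; then mkdir $(PRJ_ROOT_DIR)/TARGET; fi
	$(foreach target,$(basename $(PROJECT_TARGETS)), \
		rm -f $(PRJ_ROOT_DIR)/$(BUILD_SPEC)/$(target).crc;)
endif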

RedHat Memory Used High

Looking for some help if you will...
I have a virtual machine on RedHat 6.5 with 32 GB of memory.
free shows 24.6 GB used and 8.2 GB free. Only 418 MB is cached and 1.8 GB is in buffers.
I ran top and sorted by virtual memory used, and I can only account for about 6 GB of that 24.6 GB used.
A "ps aux" doesn't show any processes that could be taking the memory.
I am flummoxed and looking for some advice on where I can look to see what's taking the memory?
Any help would be appreciated.
The Bash script below will help you figure out how much memory each application is consuming.
#!/bin/bash
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
### Functions
#This function will count memory statistic for passed PID
get_process_mem ()
{
PID=$1
#we need to check if 2 files exist
if [ -f /proc/$PID/status ];
then
if [ -f /proc/$PID/smaps ];
then
#here we count memory usage, Pss, Private and Shared = Pss-Private
Pss=`cat /proc/$PID/smaps | grep -e "^Pss:" | awk '{print $2}'| paste -sd+ | bc `
Private=`cat /proc/$PID/smaps | grep -e "^Private" | awk '{print $2}'| paste -sd+ | bc `
#we need to be sure that we count Pss and Private memory, to avoid errors
if [ x"$Rss" != "x" -o x"$Private" != "x" ];
then
let Shared=${Pss}-${Private}
Name=`cat /proc/$PID/status | grep -e "^Name:" |cut -d':' -f2`
#we keep all results in bytes
let Shared=${Shared}*1024
let Private=${Private}*1024
let Sum=${Shared}+${Private}
echo -e "$Private + $Shared = $Sum \t $Name"
fi
fi
fi
}
#this function make conversion from bytes to Kb or Mb or Gb
convert()
{
value=$1
power=0
#if value 0, we make it like 0.00
if [ "$value" = "0" ];
then
value="0.00"
fi
#We make conversion till value bigger than 1024, and if yes we divide by 1024
while [ $(echo "${value} > 1024"|bc) -eq 1 ]
do
value=$(echo "scale=2;${value}/1024" |bc)
let power=$power+1
done
#this part get b,kb,mb or gb according to number of divisions
case $power in
0) reg=b;;
1) reg=kb;;
2) reg=mb;;
3) reg=gb;;
esac
echo -n "${value} ${reg} "
}
#to ensure that temp files not exist
[[ -f /tmp/res ]] && rm -f /tmp/res
[[ -f /tmp/res2 ]] && rm -f /tmp/res2
[[ -f /tmp/res3 ]] && rm -f /tmp/res3
#if an argument is passed, the script will show statistics only for that pid; if not, we list all processes in /proc/
#and get statistics for all of them; all results are stored in the file /tmp/res
if [ $# -eq 0 ]
then
pids=`ls /proc | grep -e [0-9] | grep -v [A-Za-z] `
for i in $pids
do
get_process_mem $i >> /tmp/res
done
else
get_process_mem $1>> /tmp/res
fi
#This will sort result by memory usage
cat /tmp/res | sort -gr -k 5 > /tmp/res2
#this part will get unique names from the process list, and we will add up all lines with the same process name
#we will count the number of processes with the same name, so if there is more than 1 process it will show as
# process(2) in the output
for Name in `cat /tmp/res2 | awk '{print $6}' | sort | uniq`
do
count=`cat /tmp/res2 | awk -v src=$Name '{if ($6==src) {print $6}}'|wc -l| awk '{print $1}'`
if [ $count = "1" ];
then
count=""
else
count="(${count})"
fi
VmSizeKB=`cat /tmp/res2 | awk -v src=$Name '{if ($6==src) {print $1}}' | paste -sd+ | bc`
VmRssKB=`cat /tmp/res2 | awk -v src=$Name '{if ($6==src) {print $3}}' | paste -sd+ | bc`
total=`cat /tmp/res2 | awk '{print $5}' | paste -sd+ | bc`
Sum=`echo "${VmRssKB}+${VmSizeKB}"|bc`
#all result stored in /tmp/res3 file
echo -e "$VmSizeKB + $VmRssKB = $Sum \t ${Name}${count}" >>/tmp/res3
done
#this make sort once more.
cat /tmp/res3 | sort -gr -k 5 | uniq > /tmp/res
#now we print result , first header
echo -e "Private \t + \t Shared \t = \t RAM used \t Program"
#after we read line by line of temp file
while read line
do
echo $line | while read a b c d e f
do
#we print all processes if Ram used if not 0
if [ $e != "0" ]; then
#here we use function that make conversion
echo -en "`convert $a` \t $b \t `convert $c` \t $d \t `convert $e` \t $f"
echo ""
fi
done
done < /tmp/res
#this part prints the footer, with the counted RAM usage
echo "--------------------------------------------------------"
echo -e "\t\t\t\t\t\t `convert $total`"
echo "========================================================"
# we clean up the temporary files
[[ -f /tmp/res ]] && rm -f /tmp/res
[[ -f /tmp/res2 ]] && rm -f /tmp/res2
[[ -f /tmp/res3 ]] && rm -f /tmp/res3
I am going to take a wild stab at this. Without access to the machine or additional information, troubleshooting this will be difficult.
The /tmp file system can be special: when it is mounted as tmpfs it lives entirely in memory. There are a couple of others like this, but /tmp is a special flower. Check the disk usage on this directory and you may see where your memory is getting consumed (du -sh /tmp).
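A few quick checks along those lines, using only standard tools (nothing specific to this box is assumed): whether /tmp is actually tmpfs, how much the tmpfs mounts hold, and whether kernel slab or shared memory accounts for the rest.
df -hT /tmp /dev/shm                              # filesystem type: tmpfs means RAM-backed
du -sh /tmp /dev/shm 2>/dev/null                  # space (memory, if tmpfs) used under each
grep -E 'Slab|SReclaimable|Shmem' /proc/meminfo   # kernel slab caches and shared memory
slabtop -o | head -20                             # largest slab consumers (needs root)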
