I want to place the year on each picture with ImageMagick. I have about 4000 pictures, and I am trying to do it with the -compose parameter.
The logo is 200x67 px.
But the pictures are not all the same size. How can I add the year at a proportional size to each image?
Example image
I have not tried ImageMagick yet, but I placed the logo on two different pictures with Photoshop.
Can ImageMagick handle this? Or can I put text on each image with a defined font size? Or is it better to convert all images to one size? If so, can ImageMagick tell me which picture is the smallest?
I suggest you do the following:
Create the logo in a bigger size, so you can scale it down later
Then loop through all images:
Get the image size:
$size_array = getimagesize ( $image_src );
$width = $size_array[0];
$height = $size_array[1];
Depending on the image size, scale a copy of the logo down
Compose the logo over the image
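The same loop can be done entirely in ImageMagick, without PHP. A minimal sketch, assuming a `photos/` directory, a `logo.png`, and a `watermarked/` output directory (all three names are placeholders):

```shell
#!/bin/bash
# Scale the watermark to ~15% of each photo's width before composing.
shopt -s nullglob
mkdir -p watermarked

scaled_width() {   # ~15% of the photo width, rounded to the nearest pixel
  echo $(( ($1 * 15 + 50) / 100 ))
}

for img in photos/*.jpg; do
  w=$(identify -format "%w" "$img")              # photo width in px
  composite -gravity southeast \
    \( logo.png -resize "$(scaled_width "$w")" \) \
    "$img" "watermarked/$(basename "$img")"
done
```

Giving `-resize` a width alone keeps the logo's aspect ratio, so one larger master logo covers every photo size.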
I made a script: http://pastebin.com/HdBMx2Zm It looks good on my XP machine (ACDSee) and also on Windows 7 (the built-in image viewer). In some pictures the year is a little bolder, but that is acceptable.
#!/bin/bash
#
#
#
# find /media/sf_test/meklee/ -type f -iname "*.jpg" -exec /root/imagick_script.sh "{}" \;
#
# depends on jhead and imagemagick
# if find is called from another file, it is possible to count all the pictures, store the count in a file,
# and decrease that amount by 1 in imagick_script.sh.
#
# some directory names in the script are in Latvian :)
#
backgroundimage=$1
bgp=/media/sf_test/
if [ -f "${bgp}stop" ]
then
echo -ne "*"
exit 0
fi
if [ ! -d "${bgp}2019" ]
then
mkdir -p "${bgp}2019"
fi
# "%[fx:w] %[fx:h] %[exif:DateTime]" (use this if images have no EXIF data)
#dim=`identify -format "%[fx:.15*w] %[fx:.15*h] %[exif:orientation] %[exif:DateTime]" "$backgroundimage"`
# be careful with auto-orient
# see this: http://www.imagemagick.org/script/command-line-options.php?#auto-orient
#orient=`echo $dim | awk '{print $3}'`
#if [ "$orient" != "1" ]
#then
#orient image (rewrite original)
# convert -auto-orient "$1" "$1"
#re-read image data
# dim=`identify -format "%[fx:.15*w] %[fx:.15*h] %[exif:orientation] %[exif:DateTime]" "$backgroundimage"`
#fi
# jhead is much faster...
#ww=`echo $dim | awk '{print $1}'`
#hh=`echo $dim | awk '{print $2}'`
#ww=`printf "%.0f\n" "${ww}"`
#hh=`printf "%.0f\n" "${hh}"`
ww=`jhead "$1" | grep 'Resolution' | awk '{print $3}'`
hh=`jhead "$1" | grep 'Resolution' | awk '{print $5}'`
ww=`echo "$ww * .15" | bc -l | xargs printf "%1.0f"`
hh=`echo "$hh * .15" | bc -l | xargs printf "%1.0f"`
if [ "$hh" -gt "$ww" ]
then
let ww=$ww*2
fi
#year=`echo $dim | awk '{print substr($4,1,4)}'`
# works only if EXIF data is available..
year=`jhead "$1" | grep 'File date' | awk '{print substr($4,1,4)}'`
# I have images taken between 2004 and 2012, so if the EXIF data has been removed, use year 2019..
case "$year" in
'2004'|'2005'|'2006'|'2007'|'2008'|'2009'|'2010'|'2011'|'2012')
# year recognised, keep it
;;
*)
year=2019
mv "$1" "${bgp}2019"
echo -ne "!"
exit 0
;;
esac
if [ ! -f ${bgp}${year}.png ];
then
convert -gravity southeast -size 300x130 xc:transparent -font Courier-bold -pointsize 125 -fill red -draw "text 0,0 '${year}'" ${bgp}output.png
composite ${bgp}output.png ${bgp}fons.png ${bgp}${year}.png
#echo "${year}.png not found, create one ..";
fi
Watermark=${bgp}${year}.png
Fname="${backgroundimage##*/}"
Fdir="${backgroundimage:0:${#backgroundimage} - ${#Fname}}"
#echo "${Fdir}new_$Fname"
#echo "${ww}x$hh $1"
if [ ! -d "/media/sf_test/resize/$year/" ]
then
mkdir "/media/sf_test/resize/$year/"
fi
if [ ! -d "/media/sf_test/apstradatie/$year/" ]
then
mkdir "/media/sf_test/apstradatie/$year/"
fi
if [ ! -f "/media/sf_test/resize/$year/$Fname" ]
then
composite -gravity southeast \( $Watermark -resize ${ww}x${hh} \) "$backgroundimage" "/media/sf_test/resize/$year/$Fname"
fi
mv "$1" "/media/sf_test/apstradatie/$year"
#"${Fdir}neew_$Fname"
echo -ne "."
How to monitor Docker containers' resource usage with a shell script?
I was wondering whether the docker stats command can be used to get metrics for monitoring Docker container resource usage.
I have written a small shell script that helps filter out the Docker containers that are using the most system resources. (I guess it will work for a one-node docker-swarm cluster.)
#!/bin/bash
#This script is used to complete the output of the docker stats command.
#The docker stats command does not compute the total amount of resources (RAM or CPU)
#Get the total amount of RAM, assumes there are at least 1024*1024 KiB, therefore > 1 GiB
docker stats | while read line
do
HOST_MEM_TOTAL=$(grep MemTotal /proc/meminfo | awk '{print $2/1024/1024}')
#echo "HOST TOTAL Memory: $HOST_MEM_TOTAL"
oldifs=$IFS
IFS=
dStats=$(docker stats --no-stream --format "table {{.MemPerc}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.Name}}\t{{.ID}}" | sed -n '1!p')
#dStats=$( docker stats --no-stream --format "table {{.MemPerc}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.Name}}\t{{.ID}}")
SUM_RAM=`echo $dStats | tail -n +2 | sed "s/%//g" | awk '{s+=$1} END {print s}'`
SUM_CPU=`echo $dStats | tail -n +2 | sed "s/%//g" | awk '{s+=$2} END {print s}'`
SUM_RAM_QUANTITY=`LC_NUMERIC=C printf %.2f $(echo "$SUM_RAM*$HOST_MEM_TOTAL*0.01" | bc)`
# Output the result
echo "########################################### Start of Resources Output ##############################################" >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
echo " " >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
dat=$(date)
echo "Present date & Time is: $dat" >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
#IFS=$oldifs
#echo "MEM % CPU % MEM USAGE / LIMIT NAME CONTAINER ID" >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
echo "MEM % CPU % MEM USAGE / LIMIT NAME CONTAINER ID" >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
IFS=$'\r\n' GLOBIGNORE='*'
for i in $dStats
do
cpuPerc=$(echo $i | awk '{print $2}')
memPerc=$(echo $i | awk '{print $1}')
cpuPerc=${cpuPerc%"%"}
cpuPerc=${cpuPerc/.*}
memPerc=${memPerc%"%"}
memPerc=${memPerc/.*}
#if [ $cpuPerc -ge 100 ] && [ $memPerc -ge 35 ]
if [ $cpuPerc -ge 100 ] || [ $memPerc -ge 50 ]
then
#IFS=$oldifs
echo $i >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
fi
done
#IFS=$oldifs
SUM_RAM=${SUM_RAM/.*}
SUM_CPU=${SUM_CPU/.*}
if [ $SUM_RAM -ge 70 ] && [ $SUM_CPU -ge 100 ]
#if [ $SUM_RAM -ge 70 ] || [ $SUM_CPU -ge 100 ]
then
echo " " >>/tmp/emailFiles/Docker-Resources-Usage-Stats.txt
echo "Total-MEMORY-Usage Total-CPU-Usage Used-MEM / Total-MEM" >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
#echo -e "${SUM_RAM}%\t\t\t${SUM_CPU}%\t\t${SUM_RAM_QUANTITY}GiB / ${HOST_MEM_TOTAL}GiB\tTOTAL" >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
echo -e "${SUM_RAM}%\t\t\t${SUM_CPU}%\t\t${SUM_RAM_QUANTITY}GiB / ${HOST_MEM_TOTAL}GiB" >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
echo " ">>/tmp/emailFiles/Docker-Resources-Usage-Stats.txt
fi
disk_usage=$(df -hT | grep ext4 | awk '{print $6}')
#disk_usage=$(df -kv| grep sda1 | awk '{print $5}')
disk_usage=${disk_usage%"%"}
#disk_usage=${disk_usage/.*}
if [ $disk_usage -ge 90 ]
then
#echo "Filesystem Size Used Avail Use% Mounted on" >>/tmp/emailFiles/Docker-Resources-Usage-Stats.txt
echo "Filesystem Type Size Used Avail Use% Mounted on" >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
#df -kh | grep sda1 >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
df -hT | grep ext4 >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
echo " "
#cat /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
fi
echo "########################################### End of Resources Output ################################################" >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
echo " " >> /tmp/emailFiles/Docker-Resources-Usage-Stats.txt
done
Please modify it according to your requirements if you find it useful.
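The column-summing part of the script above (SUM_RAM / SUM_CPU) can be sketched in isolation. The sample lines below are made up and stand in for real `docker stats --no-stream --format "table {{.MemPerc}}\t{{.CPUPerc}}\t{{.Name}}"` output, so the sketch runs without Docker installed:

```shell
#!/bin/bash
# Two fake container rows under a header, tab-separated like docker stats output.
sample=$'MEM %\tCPU %\tNAME\n12.5%\t80.0%\tweb\n40.0%\t30.0%\tdb'

# drop the header, strip the % signs, and sum each column
sum_mem=$(echo "$sample" | tail -n +2 | tr -d '%' | awk '{s+=$1} END {print s}')
sum_cpu=$(echo "$sample" | tail -n +2 | tr -d '%' | awk '{s+=$2} END {print s}')
echo "total MEM%: $sum_mem  total CPU%: $sum_cpu"
```

With real data you would pipe `docker stats --no-stream --format …` into the same `tail | tr | awk` chain.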
I have an sh_test invoking docker run my_image, where my_image is produced by a container_bundle rule. I need the container_bundle rule to be run as a dependency of the sh_test. How can I achieve that? Adding container_bundle to the sh_test's data only invokes the container_bundle build, but I need run, which pushes an image to a Docker registry.
What we do is pass the rootpath of the container image rule to the script (as $IMAGE_LOADER) and do:
$IMAGE_LOADER --norun | tee image-loader.log
IMAGE_ID=$(cat ./image-loader.log | grep "Loaded image ID" | cut -d":" -f2-)
The easiest way I know is wrapping a docker_push rule around your bundle. Then your test rule can run the docker_push's output file, which is a binary that will do the docker load. Use runfiles.bash to get its full path.
Something like this:
# --- begin runfiles.bash initialization v2 ---
# Copy-pasted from the Bazel Bash runfiles library v2.
set -uo pipefail; f=bazel_tools/tools/bash/runfiles/runfiles.bash
source "${RUNFILES_DIR:-/dev/null}/$f" 2>/dev/null || \
source "$(grep -sm1 "^$f " "${RUNFILES_MANIFEST_FILE:-/dev/null}" | cut -f2- -d' ')" 2>/dev/null || \
source "$0.runfiles/$f" 2>/dev/null || \
source "$(grep -sm1 "^$f " "$0.runfiles_manifest" | cut -f2- -d' ')" 2>/dev/null || \
source "$(grep -sm1 "^$f " "$0.exe.runfiles_manifest" | cut -f2- -d' ')" 2>/dev/null || \
{ echo>&2 "ERROR: cannot find $f"; exit 1; }; f=; set -e
# --- end runfiles.bash initialization v2 ---
$(rlocation "my_workspace/some/package/my_container_push")
With some/package/BUILD having this:
load("@io_bazel_rules_docker//contrib:push-all.bzl", "docker_push")
load("@io_bazel_rules_docker//container:container.bzl", "container_bundle")
container_bundle(
name = "my_container_bundle",
# All your existing attrs here, etc etc.
)
docker_push(
name = "my_container_push",
bundle = ":my_container_bundle",
)
sh_test(
name = "my_test",
data = [
":my_container_push",
],
deps = [
"@bazel_tools//tools/bash/runfiles",
],
)
I have a file source-push.sh that prints the list of files I want to exclude from the results of a find command.
It looks like this:
#!/usr/bin/env bash
find . -not \( -path './node_modules' -prune \) -name '*.js' | grep -vE $(echo $(./source-push.sh | xargs -I{} echo -n "{}|") | rev | cut -b2- | rev) | xargs -L1 standard --fix
find . -not \( -path './node_modules' -prune \) -name '*.css' | grep -vE $(echo $(./source-push.sh | xargs -I{} echo -n "{}|") | rev | cut -b2- | rev) | xargs -L1 stylelint --config stylelint.json
There should be a way to do this job better than that. Any suggestions?
Instead of:
... | grep -vE $(echo $(./source-push.sh | xargs -I{} echo -n "{}|") | rev | cut -b2- | rev ) | ...
you can use the POSIX options -F and -f:
... | grep -v -F -f <( ./source-push.sh ) | ...
-F tells grep that the patterns are fixed strings
(avoiding the problem that your original code would break if the patterns contain characters that are special to grep -E)
-f file tells grep to read the list of patterns from file
<( ... ) is a bash construct that presents the output of a program as a file (a named pipe)
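A tiny self-contained demo of the filter (the file names are invented; the exclusion list plays the role of ./source-push.sh's output):

```shell
#!/bin/bash
workdir=$(mktemp -d)
printf '%s\n' ./a.js ./b.js ./skip.js > "$workdir/all"
printf '%s\n' ./skip.js               > "$workdir/exclude"

# grep -v -F -f: drop every line that matches a fixed string from the list
kept=$(grep -v -F -f "$workdir/exclude" "$workdir/all")
echo "$kept"

rm -rf "$workdir"
```

Adding -x on top of this makes each pattern match a whole line only, which avoids accidental substring hits (e.g. an exclude entry `a.js` also knocking out `extra.js`).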
I am trying to grep or find for 2 specific words in each file in a directory. Only if more than one file is found with that combination should the file names be printed to a CSV file.
Here is what I tried so far:
find /dir/test -type f -printf "%f\n" | xargs grep -r -l -e 'ABCD1' -e 'ABCD2' > log1.csv
But this will print every file name that has "ABCD1" and "ABCD2". In other words, the command prints the filename even if only one file has this combination.
I need to grep the entire directory for those 2 words, and both words MUST appear in more than one file before the filenames are written to the CSV. I should also be able to include subdirectories.
Any help would be great!
Thanks
find + GNU grep solution:
find . -type f -exec grep -qPz 'ABCD1[\s\S]*ABCD2|ABCD2[\s\S]*ABCD1' {} \; -printf "%f\n" \
| tee /tmp/flist | [[ $(wc -l) -gt 1 ]] && cat /tmp/flist > log1.csv
Alternative way:
grep -lr 'ABCD2' /dir/test/* | xargs grep -l 'ABCD1' | tee /tmp/flist \
| [[ $(wc -l) -gt 1 ]] && sed 's/.*\/\([^\/]*\)$/\1/' /tmp/flist > log1.csv
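The same requirement can also be written without the /tmp/flist plumbing, collecting matches in a bash array first. A sketch on throwaway files so it is runnable as-is (f1 and f2 contain both words, f3 only one):

```shell
#!/bin/bash
tmp=$(mktemp -d)
printf 'ABCD1 and ABCD2\n' > "$tmp/f1.txt"
printf 'ABCD2 then ABCD1\n' > "$tmp/f2.txt"
printf 'only ABCD1\n'       > "$tmp/f3.txt"

matches=()
while IFS= read -r f; do
  # keep a file only if it contains BOTH words
  grep -q 'ABCD1' "$f" && grep -q 'ABCD2' "$f" && matches+=("$f")
done < <(find "$tmp" -type f -name '*.txt')

# write the basenames to the CSV only when more than one file matched
if [ "${#matches[@]}" -gt 1 ]; then
  printf '%s\n' "${matches[@]##*/}" > "$tmp/log1.csv"
fi
```

Two separate `grep -q` calls per file sidestep the ordering problem that the `ABCD1[\s\S]*ABCD2|ABCD2[\s\S]*ABCD1` pattern handles in one regex.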
Looking for some help if you will...
I have a virtual machine on Red Hat 6.5 with 32 GB of memory.
free shows 24.6 GB used and 8.2 GB free. Only 418 MB is cached, and 1.8 GB is in buffers.
I ran top sorted by virtual memory used, and I can only account for about 6 GB of that 24.6 GB used.
ps aux doesn't show any processes that could be taking the memory.
I am flummoxed and looking for advice on where to look to see what's taking the memory.
Any help would be appreciated.
The Bash script below will help you figure out how much memory each application is consuming.
#!/bin/bash
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
### Functions
#This function counts memory statistics for the passed PID
get_process_mem ()
{
PID=$1
#we need to check if 2 files exist
if [ -f /proc/$PID/status ];
then
if [ -f /proc/$PID/smaps ];
then
#here we count memory usage: Pss, Private, and Shared = Pss - Private
Pss=`cat /proc/$PID/smaps | grep -e "^Pss:" | awk '{print $2}'| paste -sd+ | bc `
Private=`cat /proc/$PID/smaps | grep -e "^Private" | awk '{print $2}'| paste -sd+ | bc `
#we need to be sure that we count Pss and Private memory, to avoid errors
if [ x"$Pss" != "x" -o x"$Private" != "x" ];
then
let Shared=${Pss}-${Private}
Name=`cat /proc/$PID/status | grep -e "^Name:" |cut -d':' -f2`
#we keep all results in bytes
let Shared=${Shared}*1024
let Private=${Private}*1024
let Sum=${Shared}+${Private}
echo -e "$Private + $Shared = $Sum \t $Name"
fi
fi
fi
}
#this function converts bytes to Kb, Mb, or Gb
convert()
{
value=$1
power=0
#if the value is 0, format it as 0.00
if [ "$value" = "0" ];
then
value="0.00"
fi
#keep dividing by 1024 while the value is bigger than 1024
while [ $(echo "${value} > 1024"|bc) -eq 1 ]
do
value=$(echo "scale=2;${value}/1024" |bc)
let power=$power+1
done
#this part picks b, kb, mb, or gb according to the number of divisions
case $power in
0) reg=b;;
1) reg=kb;;
2) reg=mb;;
3) reg=gb;;
esac
echo -n "${value} ${reg} "
}
#make sure the temp files do not exist
[[ -f /tmp/res ]] && rm -f /tmp/res
[[ -f /tmp/res2 ]] && rm -f /tmp/res2
[[ -f /tmp/res3 ]] && rm -f /tmp/res3
#if an argument is passed, the script shows statistics only for that pid; if not, we list all processes in /proc/
#and get statistics for all of them; all results are stored in the file /tmp/res
if [ $# -eq 0 ]
then
pids=`ls /proc | grep -e [0-9] | grep -v [A-Za-z] `
for i in $pids
do
get_process_mem $i >> /tmp/res
done
else
get_process_mem $1>> /tmp/res
fi
#This sorts the results by memory usage
cat /tmp/res | sort -gr -k 5 > /tmp/res2
#this part gets unique names from the process list and adds up all lines with the same name
#we count the number of processes sharing a name, so if there is more than 1,
#the output will show process(2)
for Name in `cat /tmp/res2 | awk '{print $6}' | sort | uniq`
do
count=`cat /tmp/res2 | awk -v src=$Name '{if ($6==src) {print $6}}'|wc -l| awk '{print $1}'`
if [ $count = "1" ];
then
count=""
else
count="(${count})"
fi
VmSizeKB=`cat /tmp/res2 | awk -v src=$Name '{if ($6==src) {print $1}}' | paste -sd+ | bc`
VmRssKB=`cat /tmp/res2 | awk -v src=$Name '{if ($6==src) {print $3}}' | paste -sd+ | bc`
total=`cat /tmp/res2 | awk '{print $5}' | paste -sd+ | bc`
Sum=`echo "${VmRssKB}+${VmSizeKB}"|bc`
#all result stored in /tmp/res3 file
echo -e "$VmSizeKB + $VmRssKB = $Sum \t ${Name}${count}" >>/tmp/res3
done
#sort once more.
cat /tmp/res3 | sort -gr -k 5 | uniq > /tmp/res
#now we print the result, header first
echo -e "Private \t + \t Shared \t = \t RAM used \t Program"
#then we read the temp file line by line
while read line
do
echo $line | while read a b c d e f
do
#we print a process only if its RAM used is not 0
if [ $e != "0" ]; then
#here we use the conversion function
echo -en "`convert $a` \t $b \t `convert $c` \t $d \t `convert $e` \t $f"
echo ""
fi
done
done < /tmp/res
#this part prints the footer with the total counted RAM usage
echo "--------------------------------------------------------"
echo -e "\t\t\t\t\t\t `convert $total`"
echo "========================================================"
# clean up the temporary files
[[ -f /tmp/res ]] && rm -f /tmp/res
[[ -f /tmp/res2 ]] && rm -f /tmp/res2
[[ -f /tmp/res3 ]] && rm -f /tmp/res3
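The core of the accounting above is just summing the Pss and Private lines of /proc/&lt;pid&gt;/smaps (values are in kB). A minimal standalone version against the current shell's own process (Linux-only; it falls back to zeros if smaps is unreadable):

```shell
#!/bin/bash
pid=$$
if [ -r "/proc/$pid/smaps" ]; then
  # Pss lines give proportional memory; Private_* lines give unshared memory
  pss=$(awk '/^Pss:/ {s+=$2} END {print s+0}' "/proc/$pid/smaps")
  priv=$(awk '/^Private/ {s+=$2} END {print s+0}' "/proc/$pid/smaps")
else
  pss=0; priv=0   # non-Linux or restricted /proc
fi
echo "Pss=${pss} kB  Private=${priv} kB  Shared=$((pss - priv)) kB"
```

Reading smaps once with awk, as here, is considerably cheaper than the cat | grep | awk | paste | bc chains in the full script.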
I am going to take a wild stab at this. Without access to the machine or additional information, troubleshooting will be difficult.
The /tmp file system is special in that it is often mounted as tmpfs, which lives entirely in memory. There are a couple of others like this, but /tmp is a special flower. Check the disk usage of this directory and you may see where your memory is being consumed (du -sh /tmp).
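A couple of read-only commands to check this theory, safe to run on a live box:

```shell
#!/bin/bash
# How much space (and hence RAM, if /tmp is tmpfs) sits under /tmp,
# and which mounts are memory-backed at all.
tmp_usage=$(du -sh /tmp 2>/dev/null | awk '{print $1}')
echo "space under /tmp: ${tmp_usage:-unknown}"
mount 2>/dev/null | grep -w tmpfs || echo "no tmpfs mounts listed"
```

If /tmp turns out to be on disk, `df -h -t tmpfs` still shows any other memory-backed mounts (such as /dev/shm) worth checking.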