Add low-frequency amplitude modulation to noise - ChucK

I am trying to reproduce, in ChucK, a SoX script that creates brown noise with a slight oscillation (tremolo).
The SoX script is:
set -u
set -e
minutes=${1:-'59'}
repeats=$(( minutes - 1 ))
center=${2:-'1786'}
wave=${3:-'0.0333333'}
noise='brown'
len='01:00'
if [ $minutes -eq 1 ] ; then
    progress='--show-progress'
else
    progress='--no-show-progress'
fi
echo " :: Please stand-by... sox will 'play' $noise noise for $minutes minute(s)."
play $progress -c 2 --null -t alsa synth $len ${noise}noise \
band -n $center 499 \
tremolo $wave 43 reverb 19 \
bass -11 treble -1 \
vol 14dB \
repeat $repeats
exit 0
The following ChucK script, modified from the wind2.ck example, creates brown noise of the desired frequency:
Noise n => BiQuad f => dac;
0.99 => f.prad;
0.0333333 => f.gain;
1 => f.eqzs;
0.00 => float t;
while(true)
{
    5::ms => now;
}
I am unable to reproduce the effect that the SoX tremolo option creates.
It seems like I should be able to add a sine wave to the main chain and then oscillate that parameter. I have been trying variations on the FM.ck frequency-modulation example without success:
SinOsc m => Noise n => BiQuad f => dac;
20 => m.freq;
200 => m.gain;
0.99 => f.prad;
0.0333333 => f.gain;
1 => f.eqzs;
0.00 => float t;
while(true)
{
    30 + ( Math.sin(t) + 1.0 ) * 10000.0 => m.sfreq;
    t + .004 => t;
    5::ms => now;
}
I expect to hear some fluctuation in the tone, but instead, no sound appears to be produced.
How can I add a low frequency amplitude modulation to the brown noise I've generated?
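For reference, here is a minimal ChucK sketch (not from the original thread) of one way to get the effect. Noise doesn't use its audio input, so chaining SinOsc => Noise has no effect; instead, run the SinOsc as a low-frequency oscillator into blackhole and use it to drive the gain of a Gain stage. The speed and depth below roughly mirror the SoX script's tremolo 0.0333333 43 (speed in Hz, depth in percent); all values are assumptions to tune by ear.
Noise n => BiQuad f => Gain g => dac;
SinOsc lfo => blackhole;        // LFO is computed but not heard directly
0.99 => f.prad;
1 => f.eqzs;
0.0333333 => lfo.freq;          // one cycle every ~30 s, like the SoX script
while( true )
{
    // map the LFO's [-1,1] output to a gain wobble of ~43% depth
    0.785 + 0.215 * lfo.last() => g.gain;
    5::ms => now;
}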

Related

Weka RF doesn't give any confusion matrix or expected results

I am using WEKA to classify a small dataset, with only 27 instances, into two classes. I have tried with bigger datasets and Weka shows the confusion matrix and the other metrics, but with my main, small 27-instance dataset it only shows this:
Scheme: weka.classifiers.trees.RandomForest -P 100 -I 100 -num-slots 1 -K 0 -M 1.0 -V 0.001 -S 1
Relation: t_PROMIS_mtbi-weka.filters.unsupervised.attribute.Remove-R1
Instances: 27
Attributes: 7
Var2
Var3
Var4
Var5
Var6
Var7
ERS
Test mode: 10-fold cross-validation
=== Classifier model (full training set) ===
RandomForest
Bagging with 100 iterations and base learner
weka.classifiers.trees.RandomTree -K 0 -M 1.0 -V 0.001 -S 1 -do-not-check-capabilities
Time taken to build model: 0.01 seconds
=== Cross-validation ===
=== Summary ===
Correlation coefficient 0.0348
Mean absolute error 0.4544
Root mean squared error 0.529
Relative absolute error 91.7269 %
Root relative squared error 102.952 %
Total Number of Instances 27
I don't understand why this is happening. Is it a size thing?
I have already solved it. The problem was that I was using the numbers 1/0 for my class variable; I changed it to a "Yes"/"No" variable and it works. (With a numeric class, Weka treats the task as regression, which is why the output above shows a correlation coefficient and error measures instead of a confusion matrix.)
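For illustration (not from the thread), the change amounts to declaring the class attribute in the ARFF header as nominal instead of numeric, using the attribute name from the output above:
% before: a numeric class makes Weka treat the task as regression
@attribute ERS numeric
% after: a nominal class makes it a classification task
@attribute ERS {Yes,No}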

What modifications can I make in svmtrain of LIBSVM to improve the accuracy of my spam classifier?

I am using Octave version 5.2.0 and LIBSVM 3.24 to build a spam classifier.
Without using LIBSVM I got an accuracy of >99% on both test and train data.
But when using LIBSVM, I only get an accuracy of 68-69%. What modifications should I make to my LIBSVM options?
This is the code I used
model = svmtrain(X, y,'-c 0.1 -t 2 -s 0 -g 1000');
p = svmpredict(y,X,model);
Are you aware of the settings of LibSVM?
% libSVM options:
% -s svm_type: set type of SVM (default 0)
% 0 -- C-SVC
% 1 -- nu-SVC
% 2 -- one-class SVM
% 3 -- epsilon-SVR
% 4 -- nu-SVR
% -t kernel_type: set type of kernel function (default 2)
% 0 -- linear: u'*v
% 1 -- polynomial: (gamma*u'*v + coef0)^degree
% 2 -- radial basis function: exp(-gamma*|u-v|^2)
% 3 -- sigmoid: tanh(gamma*u'*v + coef0)
% -d degree: set degree in kernel function (default 3)
% -g gamma: set gamma in kernel function (default 1/num_features)
% -r coef0: set coef0 in kernel function (default 0)
% -c cost: set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
% -n nu: set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
% -p epsilon: set the epsilon in loss function of epsilon-SVR (default 0.1)
% -m cachesize: set cache memory size in MB (default 100)
% -e epsilon: set tolerance of termination criterion (default 0.001)
% -h shrinking: whether to use the shrinking heuristics, 0 or 1 (default 1)
% -b probability_estimates: whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)
% -wi weight: set the parameter C of class i to weight*C, for C-SVC (default 1)
So your -s 0 -t 2 -g 1000 -c 0.1 settings translate to a C-SVM (-s 0) with a Gaussian kernel (-t 2) with a large scaling (-g 1000) and a smaller than default cost for violations (-c 0.1).
I suggest trying it first with the default values (-s 0 -t 2) and then increasing the cost -c. Your gamma looks ridiculously large, but without knowing your data no one can judge that. Have a look at hyperparameter optimization, which sets exactly those values. There is plenty of work on this, though I am only familiar with it from regression analysis. If in doubt, do a global optimization over those parameters via grid search or a genetic algorithm, as in the sketch below.
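A minimal grid-search sketch in Octave, using LIBSVM's built-in cross-validation (-v); the log2 ranges are the commonly used starting grid, not tuned values, and note that LIBSVM's svmtrain expects the labels first:
% grid-search C and gamma by 5-fold cross-validation accuracy
best_acc = 0; best_c = 1; best_g = 1 / size(X, 2);
for log2c = -5:2:15
  for log2g = -15:2:3
    opts = sprintf('-s 0 -t 2 -c %g -g %g -v 5 -q', 2^log2c, 2^log2g);
    acc = svmtrain(y, X, opts);  % with -v, svmtrain returns the CV accuracy
    if acc > best_acc
      best_acc = acc; best_c = 2^log2c; best_g = 2^log2g;
    end
  end
end
% retrain on the full training set with the best parameters
model = svmtrain(y, X, sprintf('-s 0 -t 2 -c %g -g %g', best_c, best_g));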

FDDB evaluation code

I'm studying OpenCV and dlib to build a face detector for a university project, and I'm really new to this whole world of machine learning and computer vision. How can I use the evaluation code from FDDB to evaluate my face-detection code? I'm using dlib's CNN method for detecting faces in images.
import cv2
import dlib
image = cv2.imread('..\\pessoas\\beatles.jpg')
detector = dlib.cnn_face_detection_model_v1("..\\mmods\\mmod_human_face_detector.dat")
detectedFaces = detector(image)
for face in detectedFaces:
    l, t, r, b, c = (int(face.rect.left()), int(face.rect.top()), int(face.rect.right()),
                     int(face.rect.bottom()), face.confidence)
    cv2.rectangle(image, (l, t), (r, b), (255, 0, 0), 2)
cv2.imshow("CNN Detector", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
As you can see the code is pretty simple, but I have to calculate precision, recall, and F1-score to plot the ROC curves, and I don't know how to do that yet; the README on the project's GitHub doesn't help.
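(As an aside, not from the thread: once detections have been matched against the ground-truth annotations, the metrics themselves are one-liners. A Python sketch with hypothetical counts:)
# tp = matched detections, fp = unmatched detections,
# fn = ground-truth faces with no matching detection
tp, fp, fn = 42, 7, 5
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)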
For me, on Ubuntu 16, it took the following steps:
1. Download the original FDDB images dataset, on which you run your detector to get detection results. You can download it here. Here is my directory:
2. Join all the image file paths into one txt file, and join all the FDDB annotations into another txt file.
You can download all the files here.
In my case I moved all the FDDB-FOLD-%d.txt files to the directory all_file_path, and then joined them into one file with cat * > filePath.txt
Joined all the FDDB-fold-%d-ellipseList.txt into one txt with cat *ellipse*.txt > annotFile.txt
Note: you may not need to create these yourself, because runEvaluate.pl does it for you during the run.
3. Build the FDDB evaluation executable; download the source code here.
Then compile it. You may need to change the makefile (see the reason here) by adding
INCS = -I/usr/local/include/opencv
LIBS = -L/usr/local/lib -lopencv_core -lopencv_imgproc -lopencv_highgui \
       -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d \
       -lopencv_objdetect -lopencv_contrib -lopencv_legacy
to the makefile.
4. Evaluate. You can use runEvaluate.pl to do it, but for me (Ubuntu 16) it would not run directly.
4.1 Change the GNUPLOT path (you should install gnuplot first; it is used to create the ROC image).
4.2 I use a rectangle detection model, so I changed $detFormat to 0.
my $detFormat = 0; # 0: rectangle, 1: ellipse 2: pixels
4.3 All the images' relative paths:
my $listFile ="/home/xy/face_sample/evaluation/compareROC/FDDB-folds/filePath.txt";
4.4 All the images' annotations:
my $annotFile = "/home/xy/face_sample/evaluation/compareROC/FDDB-folds/annotFile.txt";
4.5 The ROC file you want to generate (created by the evaluate executable):
my $gpFile ="/home/xy/face_sample/evaluation/compareROC/createROC.p";
4.6 Your detection file (I will show how to create it later):
my $detFile ="/home/xy/face_sample/evaluation/compareROC/detDir/fddb_rect_ret1.txt";
Its content looks like this:
runEvaluate.pl has an error; change the evaluation invocation to the following:
system($evaluateBin, "-a", $annotFile, "-d", $detFile, "-f", $detFormat, "-i", $imDir, "-l", $listFile, "-r", $detDir, "-z", ".jpg");
You can also run it from the command line to check:
xy#xy:~/face_sample/evaluation/compareROC$ ./evaluate \
> -a /home/xy/face_sample/evaluation/compareROC/FDDB-folds/annotFile.txt \
> -d /home/xy/face_sample/evaluation/compareROC/detDir/fddb_rect_ret1.txt \
> -f 0 \
> -i /home/xy/face_sample/evaluation/compareROC/originalPics/ \
> -l /home/xy/face_sample/evaluation/compareROC/FDDB-folds/filePath.txt \
> -r /home/xy/face_sample/evaluation/compareROC/detDir/ \
> -z .jpg
Use Python to create the FDDB evaluation txt file:
import cv2


def get_img_relative_path():
    """
    :return: ['2002/08/11/big/img_344', '2002/08/02/big/img_473', ......]
    """
    f_name = 'E:/face_rec/face__det_rec_code/face_det/FDDB-folds/all_img_files.txt'
    lst_name = open(f_name).read().split('\n')
    return lst_name


def write_lines_to_txt(lst):
    # lst = ['line1', 'line2', 'line3']
    f_path = 'fddb_rect_ret.txt'
    with open(f_path, 'w') as fp:
        for line in lst:
            fp.write("%s\n" % line)


# For example, use OpenCV for face detection
def detect_face_lst(img):
    """
    :param img: opencv image
    :return: face rectangles [[x, y, w, h], ..........]
    """
    m_path = 'D:/opencv/sources/data/haarcascades/haarcascade_frontalface_default.xml'
    face_cascade = cv2.CascadeClassifier(m_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    return faces


def generate_fddb_ret():
    # The directory from which we get the FDDB test images
    img_base_dir = 'E:/face_rec/face__det_rec_code/face_det/originalPics/'
    # All the images' relative paths, like ['2002/08/11/big/img_344', '2002/08/02/big/img_473', ......]
    lst_img_name = get_img_relative_path()
    # Store detection results, like:
    # ['2002/08/11/big/img_344', '1', '10 10 50 50 1', .............]
    lst_write2_fddb_ret = []
    try:
        for img_name in lst_img_name:
            img_full_name = img_base_dir + img_name + '.jpg'
            img = cv2.imread(img_full_name)
            if img is None:
                print('error: %s does not exist, cannot generate complete fddb evaluation file' % img_full_name)
                return -1
            lst_face_rect = detect_face_lst(img)
            # append the image name, like '2002/08/11/big/img_344'
            lst_write2_fddb_ret.append(img_name)
            face_num = len(lst_face_rect)
            # append the face count; note that 0 must be appended if there is no face
            lst_write2_fddb_ret.append(str(face_num))
            if face_num > 0:
                # append each face rectangle: x y w h score
                for face_rect in lst_face_rect:
                    # note: this OpenCV detector has no confidence, so use 1 here
                    s_rect = " ".join(str(item) for item in face_rect) + " 1"
                    lst_write2_fddb_ret.append(s_rect)
    except Exception as e:
        print('error: %s, cannot generate complete fddb evaluation file' % e)
        return -1
    # Write all the results to txt for FDDB evaluation
    write_lines_to_txt(lst_write2_fddb_ret)
After running the above code you can create the FDDB result file:
Note: if you create the above txt on Windows and then test it on Ubuntu, you may get the following error: Incompatible annotation and detection files. See output specifications.
Just copy the content into a new txt file (created on Ubuntu) and that solves it.
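(This is likely a Windows/Unix line-ending mismatch; assuming dos2unix is installed, converting the file in place should have the same effect as re-copying it by hand:)
dos2unix fddb_rect_ret1.txt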
Here is the result:
Some tips:
If you look at runEvaluate.pl it's not hard, and the changes above may not be needed. You can also change some variables in runEvaluate.pl, such as $GNUPLOT, $imDir, and so on.
Add "-z", ".jpg" to
system($evaluateBin, "-a", $annotFile, "-d", $detFile, "-f", $detFormat, "-i", $imDir, "-l", $listFile, "-r", $detDir);
so it becomes:
system($evaluateBin, "-a", $annotFile, "-d", $detFile, "-f", $detFormat, "-i", $imDir, "-l", $listFile, "-r", $detDir, "-z", ".jpg");
You can also read the evaluation code (mainly evaluate.cpp, which is easy to understand) to get a deeper understanding of how the evaluation works.
Can you explain which step you are on?
You need to download the labelled data from:
http://vis-www.cs.umass.edu/fddb/ where it says: Download the database
After that you need to download the result source code:
http://vis-www.cs.umass.edu/fddb/results.html
Then you need to modify your program so that the output looks like this:
2002/08/11/big/img_591
1
191 88 164 163 0
2002/08/26/big/img_265
3
52 39 95 95 0
282 59 114 114 0
where the first line is the name of the image,
then the number of faces in that image,
then the coordinates for each face, and repeat...
I advise you to build the evaluation on Linux, since it's a lot easier (at least it was for me).
Hope it helps.

how to use fred's imagemagick textcleaner script in opencv c++/opencv java?

I'm trying to develop an app that can read text from an image. I have to clean the image background first. I heard that Fred's ImageMagick textcleaner script can be used, but I don't know how. Does anyone have any idea about it?
Input image:
I had a try at this and while the news is not good, it's still an answer, even if negative. Maybe someone else wants to take my efforts further, or maybe you feel my efforts confirm that textcleaner is not the way to go. Anyway, I took your image and wrote a script to vary the most promising parameters of Fred Weinhaus's textcleaner. I feel that the ones that may help are -f, -o and -t, and I varied these through their likely ranges like this:
#!/bin/bash
for f in 1 5 10 15 20 25; do
    for o in 1 3 6 9 12; do
        for t in 1 25 50 75 100; do
            ./textcleaner -f $f -o $o -t $t cc.jpg z_${f}_${o}_${t}.png
            convert -label "f=$f, o=$o, t=$t" z_${f}_${o}_${t}.png miff:-
        done
    done
done | montage - -frame 5 -tile 6x montage.png
That gives me this montage of all the results
To my eye, the most promising was maybe f=10, o=1, t=1
I then thought "why bother seeing what I like, let's see what Tesseract likes?". So I changed the script to this so that Tesseract got to look at all the permutations:
#!/bin/bash
for f in 1 5 10 15 20 25; do
    for o in 1 3 6 9 12; do
        for t in 1 25 50 75 100; do
            ./textcleaner -f $f -o $o -t $t cc.jpg z_${f}_${o}_${t}.png
            tesseract z_${f}_${o}_${t}.png res > /dev/null 2>&1
            if grep "[0-9]" res* ; then echo z_${f}_${o}_${t}.png ; fi
        done
    done
done
And the results were abysmal... here is the output
um 0-" V _
L"“1}- H
z_5_3_50.png
:1:J£‘u “
z_15_3_75.png
”':{E]!) /3: '55‘
z_15_6_75.png
E2?
z_15_9_1.png
:1:
z_15_12_100.png
I -.352}: "H ,1 5
z_20_12_25.png
1/
, ,5». 3».
z_25_6_75.png
3
z_25_9_25.png
- ::'§—:am I-:L’5‘:*‘f§~f.’i'7""“-‘-"I 5="
z_25_12_1.png
7 3:2‘
z_25_12_75.png
Nothing even remotely useful. Maybe someone else has a better idea about how to tune it and which parameters to tweak, but I suspect that textcleaner may be the wrong approach here.
Without seeing your data first it's hard to guess. If you have a fairly uniform background you can use adaptive thresholding to remove it, as sketched below.
Here is some theoretical information on how to use adaptive thresholding. This algorithm is implemented in OpenCV.
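A minimal sketch of that idea (shown in Python for brevity; the C++ and Java OpenCV APIs are analogous, and the file names and blockSize/offset values are placeholders to tune):
import cv2

# load as grayscale; adaptiveThreshold expects a single-channel image
img = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)
# threshold each pixel against a Gaussian-weighted local mean;
# blockSize (odd) and the constant offset control how aggressive the cleanup is
clean = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 31, 15)
cv2.imwrite('clean.png', clean)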

ANSI Color Specific RGB Sequence Bash

I know that in bash terminals a reliable way to change color is using ANSI escape sequences. For example:
echo -e "\033[0;31mbrown text\033[0;00m"
should output
brown text (in brown)
Is there a way to output color using a specific RGB set with ANSI? Say I want bright red:
echo -e "**\033[255:0:0m**red text\033[0;00m"
Does this sort of thing exist?
I just want to use standard bash.
Both answers here fail to mention the Truecolor ANSI support for 8bpc color. This will get the RGB color the OP originally asked for.
Instead of ;5, use ;2, and specify the R, G, and B values (0-255) in the following three control segments.
\x1b[38;2;40;177;249m
To test if your terminal supports Truecolor:
printf "\x1b[38;2;40;177;249mTRUECOLOR\x1b[0m\n"
On my machine, XTerm happily output the correct color, although terminals modeled after hardware that predates modern RGB color generally will not support truecolor; make sure you know your target before using this particular variant of the escape code.
I'd also like to point out the 38 and the ;5/;2 - Blue Ice mentioned that 38 routes and then 5 changes the color. That is slightly incorrect.
38 is the xterm-256 extended foreground color code; 30-37 are simply 16-color foreground codes (with a brightness controlled by escape code 1 on some systems and the arguably-supported 90-97 non-standard 'bright' codes) that are supported by all vt100/xterm-compliant colored terminals.
The ;2 and ;5 indicate the format of the color, ultimately telling the terminal how many more sequences to pull: ;5 specifying an 8-bit format (as Blue Ice mentioned) requiring only 1 more control segment, and ;2 specifying a full 24-bit RGB format requiring 3 control segments.
These extended modes are technically "undocumented" and are completely implementation defined. As far as I know and can research, they are not governed by the ANSI committee.
For the so inclined, the 5; (256 color) format starts with the 16 original colors (both dark/light, so 30-37 and 90-97) as colors 0-15.
The next 216 colors (16-231) are formed by a 3bpc RGB value, offset by 16, packed into a single value.
The final 24 colors (232-255) are greyscale, starting from a shade slightly lighter than black and ranging up to a shade slightly darker than white. Some emulators interpret these steps as linear increments of (256 / 24) on all three channels, though I've come across some emulators that seem to explicitly define these values.
Here is a JavaScript function that performs such a conversion, taking into account all of the greys.
function rgbToAnsi256(r, g, b) {
    // we use the extended greyscale palette here, with the exception of
    // black and white. normal palette only has 4 greyscale shades.
    if (r === g && g === b) {
        if (r < 8) {
            return 16;
        }
        if (r > 248) {
            return 231;
        }
        return Math.round(((r - 8) / 247) * 24) + 232;
    }
    var ansi = 16
        + (36 * Math.round(r / 255 * 5))
        + (6 * Math.round(g / 255 * 5))
        + Math.round(b / 255 * 5);
    return ansi;
}
So, in a way, you can calculate a 256-color ANSI code from initial RGB values by reducing each channel from 8 bits to roughly 3 bits and packing the result into a single encoded value, in the event you want to do this programmatically on terminals that do not support Truecolor.
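A bash version of the same reduction, as a sketch (integer division truncates where the JavaScript above rounds, and the greyscale special case is omitted):
# approximate an RGB triple with a 256-color cube index (16-231)
rgb_to_ansi256() {
    local r=$1 g=$2 b=$3
    echo $(( 16 + 36 * (r * 5 / 255) + 6 * (g * 5 / 255) + b * 5 / 255 ))
}
printf '\e[38;5;%sm%s\e[0m\n' "$(rgb_to_ansi256 255 135 0)" "orange-ish text"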
This does exist, but instead of the 16777216 (256^3) colors that the OP was looking for, there are 216 (6^3) equally distributed colors, in a larger set of 256 colors. Example:
echo -e "\033[38;5;208mpeach\033[0;00m"
This will output a pleasing sort of peach colored text.
Taking apart this command: \033[38;5;208m
The \033 is the escape code. The [38; directs the command to the foreground. If you want to change the background color instead, use [48;. The 5; is the piece of the sequence that selects the 256-color mode. And the most important part, 208m, selects the actual color.
There are 3 sets of colors that can be found in the 256 color sequence for this escape. The first set is the basic "candy" color set, or values 0-15. Then there is a cube of distributed colors, from 16-231. Lastly there is a detailed grayscale set from 232-255.
You can find a table with all of these values here: http://bitmote.com/index.php?post/2012/11/19/Using-ANSI-Color-Codes-to-Colorize-Your-Bash-Prompt-on-Linux#256%20(8-bit)%20Colors
This will work
echo -e "**\033[38;2;255;0;0m**red text\033[0;00m"
format: "\033[38;2;R;G;Bm"
R is your RED component of your RGB
G is your GREEN component of your RGB
B is your BLUE component of your RGB
Playing with RGB (and HSV) in bash
ANSI sequences in the terminal.
There are two ways of printing colors in bash.
After playing with the nice tools found in xterm's source tree, here is how vttests/256colors2.pl shows on my gnome-terminal:
It shows 256 colors: 16 terminal colors + 6 * 6 * 6 RGB levels + 24 grayscales.
This uses the ANSI syntax \e[48;5;COLORm:
printf '\e[48;5;%sm' $color;
instead of \e[48;2;RED;GREEN;BLUEm:
printf '\e[48;2;%s;%s;%sm' $red $green $blue;
I've written some bash functions to play with RGB and HSV:
RGB to HSV
hsv() {
    local -n _result=$4
    local -i _hsv_min _hsv_t
    local _hsv_s
    local -i _hsv_max=" $1 > $2 ?
        (_hsv_min=($2 > $3 ? $3:$2 ), ( $1 > $3 ? $1 : $3 )) :
        (_hsv_min=($1 > $3 ? $3:$1 ), $2) > $3 ? $2 : $3 "
    case $_hsv_max in
        $_hsv_min) _hsv_t=0 ;;
        $1) _hsv_t=" ( 60 * ( $2 - $3 ) / ( _hsv_max-_hsv_min )+ 360 )%360";;
        $2) _hsv_t=" 60 * ( $3 - $1 ) / ( _hsv_max-_hsv_min )+ 120 " ;;
        $3) _hsv_t=" 60 * ( $1 - $2 ) / ( _hsv_max-_hsv_min )+ 240 " ;;
    esac
    _hsv_s=0000000$(( _hsv_max==0?0 : 100000000-100000000*_hsv_min / _hsv_max ))
    printf -v _hsv_s %.7f ${_hsv_s::-8}.${_hsv_s: -8}
    _result=($_hsv_t $_hsv_s $_hsv_max)
}
Then
RED=255 GREEN=240 BLUE=128
hsv $RED $GREEN $BLUE hsvAr
echo ${hsvAr[@]}
52 0.4980392 255
printf 'Hue: %d, Saturation: %f, Value: %d\n' "${hsvAr[@]}"
Hue: 52, Saturation: 0.498039, Value: 255
HSV to RGB
rgb() {
    local -n _result=$4
    local -i _rgb_i=" (($1%360)/60)%6 "
    local -i _rgb_f=" 100000000*($1%360)/60-_rgb_i*100000000 "
    local _rgb_s
    printf -v _rgb_s %.8f "$2"
    _rgb_s=$((10#${_rgb_s/.}))
    local -i _rgb_l=" $3*(100000000-_rgb_s)/100000000 "
    case $_rgb_i in
        0 ) local -i _rgb_n=" $3*(100000000-(100000000-_rgb_f)*_rgb_s/100000000)/100000000 "
            _result=("$3" "$_rgb_n" "$_rgb_l") ;;
        1 ) local -i _rgb_m=" $3*(100000000-_rgb_f*_rgb_s/100000000)/100000000 "
            _result=("$_rgb_m" "$3" "$_rgb_l") ;;
        2 ) local -i _rgb_n=" $3*(100000000-(100000000-_rgb_f)*_rgb_s/100000000)/100000000 "
            _result=("$_rgb_l" "$3" "$_rgb_n") ;;
        3 ) local -i _rgb_m=" $3*(100000000-_rgb_f*_rgb_s/100000000)/100000000 "
            _result=("$_rgb_l" "$_rgb_m" "$3") ;;
        4 ) local -i _rgb_n=" $3*(100000000-(100000000-_rgb_f)*_rgb_s/100000000)/100000000 "
            _result=("$_rgb_n" "$_rgb_l" "$3") ;;
        * ) local -i _rgb_m=" $3*(100000000-_rgb_f*_rgb_s/100000000)/100000000 "
            _result=("$3" "$_rgb_l" "$_rgb_m") ;;
    esac
}
Then
rgb 160 .6 240 out
echo ${out[@]}
96 240 192
printf '\e[48;2;%d;%d;%dm \e[0m\n' "${out[@]}"
This will print a colored space.
Further: hsvrgb-browser.sh
Preamble: store the previous two functions in a file called hsvrgb.sh, in the same directory as the downloaded hsvrgb-browser.sh.
HSV-RGB Color browser - Usage:
[RrGgBbVb] Increase/decrease value by step ('1'), from 0 to 255.
[HhTt] Increase/decrease hue (tint), looping over 0 - 359.
[Ss] Increase/decrease saturation by .006 x step (1).
[Cc] Toggle color bar rendering (uppercase C fixes HSV)
[+-] Increase/decrease step.
[u] Show this help.
[q] Quit.
Note: regarding mmeisner's comment, if you encounter issues with this script, try running it with:
LC_ALL=C.UTF8 ./hsvrgb-browser.sh
Currently true color escape sequences (\e[38;2;R;G;Bm) are supported by certain terminal emulators including gnome-terminal (with vte >= 0.36), konsole, and st [suckless].
The feature is not supported by certain others, e.g. pterm [putty], terminology [enlightenment], urxvt.
xterm is halfway in between: it recognizes the escape sequences, but rounds every color to the nearest one in the 256-color palette.
No, there's not.
And to nitpick, those are technically not "ANSI escape sequences" but VT100 control codes (which were defined long before there were graphical terminals and terms like "RGB").
