I'm studying OpenCV and dlib to build a face detector for a university project, and I'm very new to machine learning and computer vision. How can I use the evaluation code from FDDB to evaluate my face detection code? I'm using dlib's CNN method for detecting faces in images.
import cv2
import dlib

image = cv2.imread('..\\pessoas\\beatles.jpg')
detector = dlib.cnn_face_detection_model_v1("..\\mmods\\mmod_human_face_detector.dat")
detectedFaces = detector(image)

for face in detectedFaces:
    l, t, r, b, c = (int(face.rect.left()), int(face.rect.top()), int(face.rect.right()),
                     int(face.rect.bottom()), face.confidence)
    cv2.rectangle(image, (l, t), (r, b), (255, 0, 0), 2)

cv2.imshow("CNN Detector", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
As you can see, the code is pretty simple, but I have to calculate precision, recall, and F1-score and plot the ROC curves, and I don't know how to do that yet; the readme on the project's GitHub doesn't help.
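For the metrics themselves: once each detection has been matched to a ground-truth face (the matching by overlap is what the FDDB evaluation code does for you), precision, recall, and F1 are just ratios of true positives, false positives, and false negatives. A minimal sketch, assuming those counts are already available (the example numbers are made up for illustration):

def precision_recall_f1(tp, fp, fn):
    # tp: detections matched to a ground-truth face
    # fp: detections with no matching ground-truth face
    # fn: ground-truth faces with no matching detection
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# hypothetical counts, for illustration only
print(precision_recall_f1(tp=420, fp=35, fn=80))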
On Ubuntu 16.04, I did it with the following steps:
Download the original FDDB images dataset, the one on which you run your face detector to get detection results. You can download it here. Here is my directory:
Join all the image file paths into one txt file, and join all the FDDB annotations into another txt file.
You can download all the files here
In my case I moved all the FDDB-FOLD-%d.txt files to the directory all_file_path, and then joined them into one file with cat * > filePath.txt
Join all the FDDB-fold-%d-ellipseList.txt files into one txt with cat *ellipse*.txt > annotFile.txt
Note that you may not need to create these yourself, because runEvaluate.pl does it for you during the run.
3. Build the FDDB evaluate executable; download the source code here.
Then compile it. You may have to change the makefile (see the reason here) by adding
INCS = -I/usr/local/include/opencv
LIBS = -L/usr/local/lib -lopencv_core -lopencv_imgproc -lopencv_highgui
-lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d
-lopencv_objdetect -lopencv_contrib -lopencv_legacy
to the makefile.
4. Evaluate. You can use runEvaluate.pl to do the evaluation, but in my case (Ubuntu 16.04) I couldn't run it directly and had to make a few changes:
4.1 Change the GNUPLOT path (you should install gnuplot first; it is used to create the ROC image).
4.2 I use a rectangle detection model, so I change $detFormat to 0.
my $detFormat = 0; # 0: rectangle, 1: ellipse 2: pixels
4.3 All the images' relative paths:
my $listFile ="/home/xy/face_sample/evaluation/compareROC/FDDB-folds/filePath.txt";
4.4 All the images' annotations:
my $annotFile = "/home/xy/face_sample/evaluation/compareROC/FDDB-folds/annotFile.txt";
4.5 The ROC file you want to generate (created by the evaluate executable):
my $gpFile ="/home/xy/face_sample/evaluation/compareROC/createROC.p";
4.6 Your detection file (I'll show how to create it later):
my $detFile ="/home/xy/face_sample/evaluation/compareROC/detDir/fddb_rect_ret1.txt";
Its content looks like this:
The runEvaluate.pl script has an error; change the line that executes the evaluation to the following:
system($evaluateBin, "-a", $annotFile, "-d", $detFile, "-f", $detFormat, "-i", $imDir, "-l", $listFile, "-r", $detDir, "-z", ".jpg");
You can also run the command directly to check it:
xy@xy:~/face_sample/evaluation/compareROC$ ./evaluate \
> -a /home/xy/face_sample/evaluation/compareROC/FDDB-folds/annotFile.txt \
> -d /home/xy/face_sample/evaluation/compareROC/detDir/fddb_rect_ret1.txt \
> -f 0 \
> -i /home/xy/face_sample/evaluation/compareROC/originalPics/ \
> -l /home/xy/face_sample/evaluation/compareROC/FDDB-folds/filePath.txt \
> -r /home/xy/face_sample/evaluation/compareROC/detDir/ \
> -z .jpg
Use Python to create the FDDB evaluation txt file:
import cv2


def get_img_relative_path():
    """
    :return: ['2002/08/11/big/img_344', '2002/08/02/big/img_473', ......]
    """
    f_name = 'E:/face_rec/face__det_rec_code/face_det/FDDB-folds/all_img_files.txt'
    lst_name = open(f_name).read().split('\n')
    return lst_name


def write_lines_to_txt(lst):
    # lst = ['line1', 'line2', 'line3']
    f_path = 'fddb_rect_ret.txt'
    with open(f_path, 'w') as fp:
        for line in lst:
            fp.write("%s\n" % line)


# For example, use an OpenCV cascade for the face detection
def detect_face_lst(img):
    """
    :param img: opencv image
    :return: face rectangles [[x, y, w, h], ..........]
    """
    m_path = 'D:/opencv/sources/data/haarcascades/haarcascade_frontalface_default.xml'
    face_cascade = cv2.CascadeClassifier(m_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    return faces


def generate_fddb_ret():
    # The directory from which we get the FDDB test images
    img_base_dir = 'E:/face_rec/face__det_rec_code/face_det/originalPics/'

    # All the images' relative paths, like ['2002/08/11/big/img_344', '2002/08/02/big/img_473', ......]
    lst_img_name = get_img_relative_path()

    # Store the detection result, like:
    # ['2002/08/11/big/img_344', '1', '10 10 50 50 1', .............]
    lst_write2_fddb_ret = []

    try:
        for img_name in lst_img_name:
            img_full_name = img_base_dir + img_name + '.jpg'
            img = cv2.imread(img_full_name)
            if img is None:
                print('error: %s does not exist, cannot generate complete fddb evaluate file' % img_full_name)
                return -1

            lst_face_rect = detect_face_lst(img)

            # append the image name, like '2002/08/11/big/img_344'
            lst_write2_fddb_ret.append(img_name)

            face_num = len(lst_face_rect)
            # append the face count; note that 0 must be appended if there is no face
            lst_write2_fddb_ret.append(str(face_num))

            if face_num > 0:
                # append each face rectangle as "x y w h score"
                for face_rect in lst_face_rect:
                    # note: the OpenCV cascade gives no confidence, so use 1 here
                    s_rect = " ".join(str(item) for item in face_rect) + " 1"
                    lst_write2_fddb_ret.append(s_rect)
    except Exception as e:
        print('error %s, cannot generate complete fddb evaluate file' % e)
        return -1

    # Write all the results to a txt file for FDDB evaluation
    write_lines_to_txt(lst_write2_fddb_ret)
After running the above code you will have the FDDB result file:
Note: if you create the above txt on Windows and then test it on Ubuntu, you may get the following error: Incompatible annotation and detection files. See output specifications.
Just copy the content into a new txt file created on Ubuntu and the error goes away (the Windows file most likely has CRLF line endings).
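Equivalently, you can strip the Windows line endings programmatically; a minimal sketch (the file name is the one produced above, the output name is my own choice):

with open('fddb_rect_ret.txt', 'rb') as fp:
    data = fp.read()

# replace CRLF (Windows) line endings with LF (Unix) ones
with open('fddb_rect_ret_unix.txt', 'wb') as fp:
    fp.write(data.replace(b'\r\n', b'\n'))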
Here is the result:
Some tips:
If you read runEvaluate.pl you will see it's not hard; the changes above may not even be needed. You can also change some variables in runEvaluate.pl, such as $GNUPLOT, $imDir and so on.
Add "-z", ".jpg" to
system($evaluateBin, "-a", $annotFile, "-d", $detFile, "-f", $detFormat, "-i", $imDir, "-l", $listFile, "-r", $detDir);
so it becomes
system($evaluateBin, "-a", $annotFile, "-d", $detFile, "-f", $detFormat, "-i", $imDir, "-l", $listFile, "-r", $detDir, "-z", ".jpg");
You can also read the evaluation code (mainly evaluate.cpp, which is easy to follow) to get a deeper understanding of how the evaluation works.
Can you explain which step you are stuck on?
You need to download the labelled data from:
http://vis-www.cs.umass.edu/fddb/ where it says: Download the database
After that you need to download the result source code:
http://vis-www.cs.umass.edu/fddb/results.html
Then you need to modify your program so that the output looks like this:
2002/08/11/big/img_591
1
191 88 164 163 0
2002/08/26/big/img_265
3
52 39 95 95 0
282 59 114 114 0
where the first line is the name of the image,
then the number of faces in that image,
then the coordinates for each face, and so on for each image...
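Since the question uses dlib's CNN detector, here is a rough sketch of producing that output format with it (the file paths, the image-list file, and the choice of face.confidence as the score are assumptions on my part, not part of the FDDB code):

import cv2
import dlib

detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")  # hypothetical path
img_base_dir = "originalPics/"                   # FDDB originalPics directory (assumed)
img_names = open("filePath.txt").read().split()  # relative paths, e.g. 2002/08/11/big/img_591

lines = []
for name in img_names:
    img = cv2.imread(img_base_dir + name + ".jpg")
    rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # dlib expects RGB ordering
    dets = detector(rgb)
    lines.append(name)
    lines.append(str(len(dets)))
    for d in dets:
        r = d.rect
        # FDDB rectangle format: left top width height score
        lines.append("%d %d %d %d %f" % (r.left(), r.top(), r.width(), r.height(), d.confidence))

with open("fddb_dlib_ret.txt", "w") as fp:
    fp.write("\n".join(lines) + "\n")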
I advise you to build the evaluation on Linux since it's a lot easier (at least it was for me).
Hope it helps.
Related
Good afternoon! I have a question about AttentionOCR model inference using OpenVINO.
There is an AttentionOCR model that takes a tensor of size (1,1,32,214) as input. I convert it to OpenVINO IR using the following command:
mo \
--input_model=model/path/frozen_graph.pb \
--input="map/TensorArrayStack/TensorArrayGatherV3:0[1 32 214 1]" \
--output "transpose_1,transpose_2" \
--output_dir path/to/ir/
I feed in a picture and the model returns the transpose_1 and transpose_2 arrays, which, as I understand it, should contain the predicted symbols and their probabilities, but the output is not what I expected. It's also unclear why lists of size 8 are returned.
Before feeding the image to the model, I converted it to grayscale and then tried different preprocessing commands like the ones below, but I get the same result:
blob = cv2.dnn.blobFromImage(image, 1.0, (32, 214))
or
image = cv2.resize(image, (32,214))
image = image[None, None,:,:]
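One thing worth double-checking (an observation on my part, not from the model docs): cv2.resize and cv2.dnn.blobFromImage take their size argument as (width, height), so for an input tensor shaped (1, 1, 32, 214), i.e. height 32 and width 214, the resize target should probably be (214, 32). A minimal preprocessing sketch under that assumption (the input file name is hypothetical):

import cv2
import numpy as np

image = cv2.imread("plate.png")                  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
resized = cv2.resize(gray, (214, 32))            # (width, height) -> 32x214 array
blob = resized[np.newaxis, np.newaxis, :, :].astype(np.float32)  # shape (1, 1, 32, 214)
print(blob.shape)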
I'm getting images from an IP camera that have a strong fish-eye effect. I found that in Gimp I can get lines mostly straight by applying the Lens Distortion filter with a "main" value of -30 (all other parameters remain zero).
Now I need to do this ad hoc using OpenCV. I gathered that the undistort function in imgproc would be the right thing to call. But how do I generate the correct camera matrix and distortion coefficients? I see there is a calibrateCamera function, but it seems you need a PhD in computer vision to use it. I have no clue. Since I know the one parameter, there must be a simple way to translate it into the matrices expected by undistort?
Note: I only need the radial distortion coefficients, I'm not interested in the tangential distortion.
There is a calibration sample provided by OpenCV. All you need for it is a list of checkerboard images (around 20 should be good) taken by your camera. It will give you all the required parameters (distortion coefficients, intrinsic parameters, etc.). Then you can use OpenCV's undistort function to correct your image.
In default.xml (or your own .xml) you need to set the name of the xml file that lists the paths of your images, the count of inner squares, and their real-world dimension.
Ta-da, you have your required parameters :-)
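For the curious, the core of what that sample does can be sketched in a few lines of Python (the board size, square size, and image folder are placeholders; the real sample adds a lot of robustness on top of this):

import glob
import cv2
import numpy as np

pattern = (8, 6)   # inner corners of the checkerboard (assumed)
square = 0.025     # square size in metres (assumed)

# 3D coordinates of the corners in the board's own coordinate system
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):   # hypothetical folder of checkerboard shots
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

img = cv2.imread("frame.jpg")           # a distorted frame from the camera (assumed)
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)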
For those who wonder where that calibration tool comes from. Seems one has to build it from source. This is what I did on Linux:
git clone https://github.com/opencv/opencv.git
cd opencv
git checkout -b 3.1.0 3.1.0 # make sure we build that version
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=Release -D BUILD_EXAMPLES=ON ..
make -j4
Then to calibrate:
./bin/cpp-example-calibration -w=8 -h=6 -o=camera.yml -op -oe -su image_list.xml
The -su lets you verify how the images look after un-distortion. The -w and -h parameters take "inner corners" which is not the number of squares in the chess pattern, but rather (num-black-squares - 1) * 2.
Here's how the undistortion is applied in the end, using Scala and JavaCV:
import org.bytedeco.javacpp.indexer.FloatRawIndexer
import org.bytedeco.javacpp.opencv_core.Mat
import org.bytedeco.javacpp.{opencv_core, opencv_imgcodecs, opencv_imgproc}
import java.io.File

// from the camera_matrix > data part of the yml:
val cameraFocal = 1.4656877976320607e+03
val cameraCX    = 1920.0/2
val cameraCY    = 1080.0/2

val cameraMatrixData = Array[Double](
  cameraFocal, 0.0        , cameraCX,
  0.0        , cameraFocal, cameraCY,
  0.0        , 0.0        , 1.0
)

// from the distortion_coefficients of the yml:
val distMatrixData = Array[Double](
  -4.016824381742e-01, 4.368842493074e-02, 0.0, 0.0, 1.096412142704e-01
)

def run(in: File, out: File): Unit = {
  val matOut = new Mat
  val camMat = new Mat(3, 3, opencv_core.CV_32FC1)
  val camIdx = camMat.createIndexer[FloatRawIndexer]
  for (row <- 0 until 3) {
    for (col <- 0 until 3) {
      camIdx.put(row, col, cameraMatrixData(row * 3 + col).toFloat)
    }
  }
  val distVec = new Mat(1, 5, opencv_core.CV_32FC1)
  val distIdx = distVec.createIndexer[FloatRawIndexer]
  for (col <- 0 until 5) {
    distIdx.put(0, col, distMatrixData(col).toFloat)
  }
  val matIn = opencv_imgcodecs.imread(in.getPath)
  opencv_imgproc.undistort(matIn, matOut, camMat, distVec)
  opencv_imgcodecs.imwrite(out.getPath, matOut)
}
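For reference, the same thing in Python/OpenCV is only a few lines, using the same yml values as above (the file names are placeholders):

import cv2
import numpy as np

camera_matrix = np.array([
    [1.4656877976320607e+03, 0.0,                    1920.0 / 2],
    [0.0,                    1.4656877976320607e+03, 1080.0 / 2],
    [0.0,                    0.0,                    1.0       ]])
dist_coeffs = np.array([-4.016824381742e-01, 4.368842493074e-02, 0.0, 0.0, 1.096412142704e-01])

img = cv2.imread("in.jpg")     # hypothetical distorted input
out = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("out.jpg", out)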
I have a large collection of card images, and one photo of a particular card. What tools can I use to find which image in the collection is most similar to mine?
Here's collection sample:
Abundance
Aggressive Urge
Demystify
Here's what I'm trying to find:
Card Photo
New method!
It seems that the following ImageMagick command, or maybe a variation of it (depending on a greater selection of your images), will extract the wording at the top of your cards:
convert aggressiveurge.jpg -crop 80%x10%+10%+10% crop.png
which takes the top 10% of your image and 80% of the width (starting 10% in from the top-left corner) and stores it in crop.png as follows:
And if you run that through Tesseract OCR as follows:
tesseract crop.png agg
you get a file called agg.txt containing:
E‘ Aggressive Urge \L® E
which you can run through grep to clean up, looking only for upper and lower case letters adjacent to each other:
grep -Eo "\<[A-Za-z]+\>" agg.txt
to get
Aggressive Urge
:-)
Thank you for posting some photos.
I have coded up an algorithm called Perceptual Hashing, which I found described by Dr Neal Krawetz. Comparing your images with the card, I get the following percentage measures of similarity:
Card vs. Abundance 79%
Card vs. Aggressive 83%
Card vs. Demystify 85%
So it is not an ideal discriminator for your image type, but it works somewhat. You may wish to play around with it to tailor it for your use case.
I would calculate a hash for each of the images in your collection, one at a time and store the hash for each image just once. Then, when you get a new card, calculate its hash and compare it to the stored ones.
#!/bin/bash
################################################################################
# Similarity
# Mark Setchell
#
# Calculate percentage similarity of two images using Perceptual Hashing
# See article by Dr Neal Krawetz entitled "Looks Like It" - www.hackerfactor.com
#
# Method:
# 1) Resize image to black and white 8x8 pixel square regardless
# 2) Calculate mean brightness of those 64 pixels
# 3) For each pixel, store "1" if pixel>mean else store "0" if less than mean
# 4) Convert the resulting 64-bit string of 1's and 0's into a 16 hex digit "Perceptual Hash"
#
# If finding difference between Perceptual Hashes, simply total up number of bits
# that differ between the two strings - this is the Hamming distance.
#
# Requires ImageMagick - www.imagemagick.org
#
# Usage:
#
# Similarity image|imageHash [image|imageHash]
# If you pass one image filename, it will tell you the Perceptual hash as a 16
# character hex string that you may want to store in an alternate stream or as
# an attribute or tag in filesystems that support such things. Do this in order
# to just calculate the hash once for each image.
#
# If you pass in two images, or two hashes, or an image and a hash, it will try
# to compare them and give a percentage similarity between them.
################################################################################
function PerceptualHash(){
   TEMP="tmp$$.png"

   # Force image to 8x8 pixels and greyscale
   convert "$1" -colorspace gray -quality 80 -resize 8x8! PNG8:"$TEMP"

   # Calculate mean brightness and correct to range 0..255
   MEAN=$(convert "$TEMP" -format "%[fx:int(mean*255)]" info:)

   # Now extract all 64 pixels and build string containing "1" where pixel > mean else "0"
   hash=""
   for i in {0..7}; do
      for j in {0..7}; do
         pixel=$(convert "${TEMP}"[1x1+${i}+${j}] -colorspace gray text: | grep -Eo "\(\d+," | tr -d '(,' )
         bit="0"
         [ $pixel -gt $MEAN ] && bit="1"
         hash="$hash$bit"
      done
   done
   hex=$(echo "obase=16;ibase=2;$hash" | bc)
   printf "%016s\n" $hex
   #rm "$TEMP" > /dev/null 2>&1
}
function HammingDistance(){
   # Convert input hex strings to upper case like bc requires
   STR1=$(tr '[a-z]' '[A-Z]' <<< $1)
   STR2=$(tr '[a-z]' '[A-Z]' <<< $2)

   # Convert hex to binary and zero left pad to 64 binary digits
   STR1=$(printf "%064s" $(echo "obase=2;ibase=16;$STR1" | bc))
   STR2=$(printf "%064s" $(echo "obase=2;ibase=16;$STR2" | bc))

   # Calculate Hamming distance between two strings, each differing bit adds 1
   hamming=0
   for i in {0..63};do
      a=${STR1:i:1}
      b=${STR2:i:1}
      [ $a != $b ] && ((hamming++))
   done

   # Hamming distance is in range 0..64 and small means more similar
   # We want percentage similarity, so we do a little maths
   similarity=$((100-(hamming*100/64)))
   echo $similarity
}

function Usage(){
   echo "Usage: Similarity image|imageHash [image|imageHash]" >&2
   exit 1
}
################################################################################
# Main
################################################################################
if [ $# -eq 1 ]; then
   # Expecting a single image file for which to generate hash
   if [ ! -f "$1" ]; then
      echo "ERROR: File $1 does not exist" >&2
      exit 1
   fi
   PerceptualHash "$1"
   exit 0
fi

if [ $# -eq 2 ]; then
   # Expecting 2 things, i.e. 2 image files, 2 hashes or one of each
   if [ -f "$1" ]; then
      hash1=$(PerceptualHash "$1")
   else
      hash1=$1
   fi
   if [ -f "$2" ]; then
      hash2=$(PerceptualHash "$2")
   else
      hash2=$2
   fi
   HammingDistance $hash1 $hash2
   exit 0
fi

Usage
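If you'd rather do the hashing in Python, the same idea is only a few lines with OpenCV. This is a rough sketch of the method described above, not a drop-in replacement for the script (the file names are the example cards):

import cv2

def perceptual_hash(path):
    # 1) resize to an 8x8 greyscale square
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (8, 8))
    # 2) and 3) threshold each pixel against the mean brightness
    bits = (small > small.mean()).flatten()
    # 4) pack the 64 bits into one integer "hash"
    return sum(int(b) << i for i, b in enumerate(bits))

def similarity(hash1, hash2):
    # Hamming distance between the two 64-bit hashes, expressed as a percentage
    hamming = bin(hash1 ^ hash2).count("1")
    return 100 - hamming * 100 // 64

print(similarity(perceptual_hash("card.png"), perceptual_hash("abundance.jpg")))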
I also tried a normalised cross-correlation of each of your images with the card, like this:
#!/bin/bash
size="300x400!"
convert card.png -colorspace RGB -normalize -resize $size card.jpg
for i in *.jpg
do
   cc=$(convert $i -colorspace RGB -normalize -resize $size JPG:- | \
        compare - card.jpg -metric NCC null: 2>&1)
   echo "$cc:$i"
done | sort -n
and I got this output (sorted by match quality):
0.453999:abundance.jpg
0.550696:aggressive.jpg
0.629794:demystify.jpg
which shows that the card correlates best with demystify.jpg.
Note that I resized all images to the same size and normalized their contrast so that they could be readily compared and effects resulting from differences in contrast are minimised. Making them smaller also reduces the time needed for the correlation.
I tried this by arranging the image data as a vector and taking the inner-product between the collection image vectors and the searched image vector. The vectors that are most similar will give the highest inner-product. I resize all the images to the same size to get equal length vectors so I can take inner-product. This resizing will additionally reduce inner-product computational cost and give a coarse approximation of the actual image.
You can quickly check this with Matlab or Octave. Below is the Matlab/Octave script. I've added comments there. I tried varying the variable mult from 1 to 8 (you can try any integer value), and for all those cases, image Demystify gave the highest inner product with the card image. For mult = 8, I get the following ip vector in Matlab:
ip =
683007892
558305537
604013365
As you can see, it gives the highest inner-product of 683007892 for image Demystify.
% load images
imCardPhoto = imread('0.png');
imDemystify = imread('1.jpg');
imAggressiveUrge = imread('2.jpg');
imAbundance = imread('3.jpg');
% you can experiment with the size by varying mult
mult = 8;
size = [17 12]*mult;
% resize with nearest neighbor interpolation
smallCardPhoto = imresize(imCardPhoto, size);
smallDemystify = imresize(imDemystify, size);
smallAggressiveUrge = imresize(imAggressiveUrge, size);
smallAbundance = imresize(imAbundance, size);
% image collection: each image is vectorized. if we have n images, this
% will be a (size_rows*size_columns*channels) x n matrix
collection = [double(smallDemystify(:)) ...
double(smallAggressiveUrge(:)) ...
double(smallAbundance(:))];
% vectorize searched image. this will be a (size_rows*size_columns*channels) x 1
% vector
x = double(smallCardPhoto(:));
% take the inner product of x and each image vector in collection. this
% will result in a n x 1 vector. the higher the inner product is, more similar the
% image and searched image(that is x)
ip = collection' * x;
EDIT
I tried another approach, basically taking the Euclidean distance (L2 norm) between the reference images and the card image, and it gave me very good results for your test card image with a large collection of reference images (383 images) I found at this link.
Here instead of taking the whole image, I extracted the upper part that contains the image and used it for comparison.
In the following steps, all training images and the test image are resized to a predefined size before doing any processing.
extract the image regions from training images
perform morphological closing on these images to get a coarse approximation (this step may not be necessary)
vectorize these images and store in a training set (I call it training set even though there's no training in this approach)
load the test card image, extract the image region of interest (ROI), apply closing, then vectorize
calculate the euclidean distance between each reference image vector and the test image vector
choose the minimum distance item (or the first k items)
I did this in C++ using OpenCV. I'm also including some test results using different scales.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <algorithm>
#include <string.h>
#include <windows.h>

using namespace cv;
using namespace std;

#define INPUT_FOLDER_PATH       string("Your test image folder path")
#define TRAIN_IMG_FOLDER_PATH   string("Your training image folder path")

void search()
{
    WIN32_FIND_DATA ffd;
    HANDLE hFind = INVALID_HANDLE_VALUE;
    vector<Mat> images;
    vector<string> labelNames;
    int label = 0;
    double scale = .2;  // you can experiment with scale
    Size imgSize(200*scale, 285*scale); // training sample images are all 200 x 285 (width x height)
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));

    // get all training samples in the directory
    hFind = FindFirstFile((TRAIN_IMG_FOLDER_PATH + string("*")).c_str(), &ffd);
    if (INVALID_HANDLE_VALUE == hFind)
    {
        cout << "INVALID_HANDLE_VALUE: " << GetLastError() << endl;
        return;
    }
    do
    {
        if (!(ffd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY))
        {
            Mat im = imread(TRAIN_IMG_FOLDER_PATH+string(ffd.cFileName));
            Mat re;
            resize(im, re, imgSize, 0, 0);  // resize the image
            // extract only the upper part that contains the image
            Mat roi = re(Rect(re.cols*.1, re.rows*35/285.0, re.cols*.8, re.rows*125/285.0));
            // get a coarse approximation
            morphologyEx(roi, roi, MORPH_CLOSE, kernel);
            images.push_back(roi.reshape(1));   // vectorize the roi
            labelNames.push_back(string(ffd.cFileName));
        }
    }
    while (FindNextFile(hFind, &ffd) != 0);

    // load the test image, apply the same preprocessing done for training images
    Mat test = imread(INPUT_FOLDER_PATH+string("0.png"));
    Mat re;
    resize(test, re, imgSize, 0, 0);
    Mat roi = re(Rect(re.cols*.1, re.rows*35/285.0, re.cols*.8, re.rows*125/285.0));
    morphologyEx(roi, roi, MORPH_CLOSE, kernel);
    Mat testre = roi.reshape(1);

    struct imgnorm2_t
    {
        string name;
        double norm2;
    };
    vector<imgnorm2_t> imgnorm;
    for (size_t i = 0; i < images.size(); i++)
    {
        imgnorm2_t data = {labelNames[i],
            norm(images[i], testre) /* take the l2-norm (euclidean distance) */};
        imgnorm.push_back(data); // store data
    }

    // sort stored data based on euclidean-distance in the ascending order
    sort(imgnorm.begin(), imgnorm.end(),
        [] (imgnorm2_t& first, imgnorm2_t& second) { return (first.norm2 < second.norm2); });

    for (size_t i = 0; i < imgnorm.size(); i++)
    {
        cout << imgnorm[i].name << " : " << imgnorm[i].norm2 << endl;
    }
}
Results:
scale = 1.0;
demystify.jpg : 10989.6, sylvan_basilisk.jpg : 11990.7, scathe_zombies.jpg : 12307.6
scale = .8;
demystify.jpg : 8572.84, sylvan_basilisk.jpg : 9440.18, steel_golem.jpg : 9445.36
scale = .6;
demystify.jpg : 6226.6, steel_golem.jpg : 6887.96, sylvan_basilisk.jpg : 7013.05
scale = .4;
demystify.jpg : 4185.68, steel_golem.jpg : 4544.64, sylvan_basilisk.jpg : 4699.67
scale = .2;
demystify.jpg : 1903.05, steel_golem.jpg : 2154.64, sylvan_basilisk.jpg : 2277.42
If I understand you correctly, you need to compare them as pictures. There is one very simple but effective solution here - it's called Sikuli.
What tools can I use to find which image of collection is most similar to mine?
This tool works very well with image processing and is not only capable of finding whether your card (image) is similar to what you have already defined as a pattern, but can also search partial image content (so-called rectangles).
By default you can extend its functionality via Python. Any ImageObject can be set to accept a similarity pattern in percentages, and by doing so you'll be able to find precisely what you are looking for.
Another big advantage of this tool is that you can learn the basics in one day.
Hope this helps.
I'm experimenting with convolving an image with a user-supplied mask, in this case
u = array([[-2,-2,-2],[-2,25,-2],[-2,-2,-2]])/9
using the commands
In[1]: import scipy.ndimage as ndi
In[2]: import skimage.io as io
In[3]: c = io.imread('cameraman.png')
In[4]: cu = ndi.convolve(c,u)
In[5]: io.imshow(cu)
I'm checking this against commands in GNU Octave:
Octave-3.8: 1> c = imread('cameraman.png');
Octave-3.8: 2> u = [-2 -2 -2;-2 25 -2;-2 -2 -2]/9
Octave-3.8: 3> cu = imfilter(c,u)
Octave-3.8: 4> imshow(cu)
But here's the thing: Octave seems to give the correct result, but Python doesn't, even though the commands convolve and imfilter are supposed to be implementing the same algorithm. (Well in fact imfilter performs a correlation, which in this case is the same as a convolution.)
The Octave output is: [screenshot omitted]
and the Python output is: [screenshot omitted]
which as you can see is very different to the Octave result. Does anybody know what's going on here? Or is there a better way of convolving with a user-supplied linear filter than using convolve?
The problem may be the result of your convolution taking your image luminance values out of bounds. I ran the example below in Matlab (~= Octave): for an image that initially has grey values 0-255, i.e. [0, 0.99] in the normalised range, the result ends up with pixels in the range [-0.88, 2.03].
>> img=double(imread('cameraman.tif'))./255;
>> K=[-2 -2 -2 ; -2 25 -2; -2 -2 -2]/9;
>> out=conv2(img,K,'same');
>> max(max(out))
ans =
2.0288
>> min(min(out))
ans =
-0.8776
It could be that Python has a problem visualising images with out-of-range grey values (<0 or >255) and this is causing clamping of values, resulting in black/white halos in those areas. Perhaps Octave normalises the image prior to displaying it, resulting in fewer artifacts. If you normalise your image in Python prior to displaying it, do you still have this problem?
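A quick way to test this suggestion in Python is to do the convolution in floating point and clip before displaying. A sketch of the idea, assuming the same cameraman.png (note also that with an integer input scipy.ndimage.convolve keeps the input dtype, so uint8 results can wrap or clip in ways imfilter does not):

import numpy as np
import scipy.ndimage as ndi
import skimage.io as io

u = np.array([[-2, -2, -2], [-2, 25, -2], [-2, -2, -2]]) / 9.0
c = io.imread('cameraman.png').astype(float) / 255.0  # work in floats, range [0, 1]
cu = ndi.convolve(c, u)
io.imshow(np.clip(cu, 0.0, 1.0))                      # clamp to the displayable range
io.show()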
I have a large dataset of x,y coordinates in "NAD 1983 StatePlane Michigan South FIPS 2113 Feet" (aka ESRI 102690). I'd like to convert them to lat-lng points.
In theory, this is something proj is built to handle, but the documentation hasn't given me a clue -- it seems to describe much more complicated cases.
I've tried using a python interface, like so:
from pyproj import Proj
p = Proj(init='esri:102690')
sx = 13304147.06410000000 #sample points
sy = 288651.94040000000
x2, y2 = p(sx, sy, inverse=True)
But that gives wildly incorrect output.
There's a Javascript library, but I have ~50,000 points to handle, so that doesn't seem appropriate.
What worked for me:
I created a file called ptest with each pair on its own line, x and y coordinates separated by a space, like so:
13304147.06410000000 288651.94040000000
...
Then I fed that file into the command and piped the results to an output file:
$>cs2cs -f %.16f +proj=lcc +lat_1=42.1 +lat_2=43.66666666666666
+lat_0=41.5 +lon_0=-84.36666666666666 +x_0=4000000 +y_0=0 +ellps=GRS80
+datum=NAD83 +to_meter=0.3048006096012192 +no_defs +zone=20N +to
+proj=latlon ptest > out.txt
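If you'd rather stay in Python, the same projection definition should also work with pyproj instead of the esri:102690 init string (a sketch, not tested against your data; the stray +zone parameter from the command above isn't meaningful for lcc, so it's omitted here):

from pyproj import Proj

# same definition as the cs2cs command above
p = Proj("+proj=lcc +lat_1=42.1 +lat_2=43.66666666666666 +lat_0=41.5 "
         "+lon_0=-84.36666666666666 +x_0=4000000 +y_0=0 +ellps=GRS80 "
         "+datum=NAD83 +to_meter=0.3048006096012192 +no_defs")

sx = 13304147.06410000000
sy = 288651.94040000000
lon, lat = p(sx, sy, inverse=True)
print(lon, lat)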
If you only need to reproject, and can do some data-mining on your text files, use whatever you like and use http://spatialreference.org/ref/esri/102690/ as a reference.
For example, take the Proj4 definition, store it in a shell/cmd file, and run it over your input file with proj4 (Linux/Windows versions available); the size of your dataset is no problem.
cs2cs +proj=latlong +datum=NAD83 +to +proj=utm +zone=10 +datum=NAD27 -r <<EOF
cs2cs -f %.16f +proj=utm +zone=20N +to +proj=latlon - | awk '{print $1 " " $2}'
so in your case something like this:
cs2cs -f %.16f +proj=lcc +lat_1=42.1 +lat_2=43.66666666666666 +lat_0=41.5 +lon_0=-84.36666666666666 +x_0=4000000 +y_0=0 +ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs +zone=20N +to +proj=latlon
http://trac.osgeo.org/proj/wiki/man_cs2cs
http://trac.osgeo.org/proj/
If you have coordinates in a TXT, CSV or XLS file, you can copy and paste them into http://cs2cs.mygeodata.eu, where you can set the appropriate input and the desired output coordinate system. It is possible to insert thousands of coordinates in various formats...