Convert ESRI projection coordinates to lat-lng - mapping

I have a large dataset of x,y coordinates in "NAD 1983 StatePlane Michigan South FIPS 2113 Feet" (aka ESRI 102690). I'd like to convert them to lat-lng points.
In theory, this is something proj is built to handle, but the documentation hasn't given me a clue -- it seems to describe much more complicated cases.
I've tried using the Python interface, pyproj, like so:
from pyproj import Proj
p = Proj(init='esri:102690')
sx = 13304147.06410000000 #sample points
sy = 288651.94040000000
x2, y2 = p(sx, sy, inverse=True)
But that gives wildly incorrect output.
There's a Javascript library, but I have ~50,000 points to handle, so that doesn't seem appropriate.
What worked for me:
I created a file called ptest with each pair on its own line, x and y coordinates separated by a space, like so:
13304147.06410000000 288651.94040000000
...
Then I fed that file into the command and piped the results to an output file:
$> cs2cs -f %.16f +proj=lcc +lat_1=42.1 +lat_2=43.66666666666666 \
       +lat_0=41.5 +lon_0=-84.36666666666666 +x_0=4000000 +y_0=0 +ellps=GRS80 \
       +datum=NAD83 +to_meter=0.3048006096012192 +no_defs +zone=20N +to \
       +proj=latlon ptest > out.txt
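If you would rather stay in Python, here is a minimal sketch using the pyproj 2+ Transformer API rather than the older Proj(init=...) call from the question; it assumes your PROJ installation knows the ESRI:102690 definition (EPSG:2253 is commonly cited as the equivalent EPSG code, so verify against your data source):

from pyproj import Transformer

# NAD 1983 StatePlane Michigan South FIPS 2113 (US feet) -> WGS84 lat/lng.
# always_xy=True keeps the argument/return order as (x, y), i.e. (easting, northing) in and (lon, lat) out.
transformer = Transformer.from_crs("ESRI:102690", "EPSG:4326", always_xy=True)

sx = 13304147.0641  # sample point from the question
sy = 288651.9404
lon, lat = transformer.transform(sx, sy)
print(lat, lon)

For ~50,000 points you can pass whole lists or arrays of x and y values to transformer.transform in a single call.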

If you only need to reproject and can do a bit of data munging on your text files, use whatever tool you like and use http://spatialreference.org/ref/esri/102690/ as a reference for the projection definition.
For example, take the Proj4 string from that page, store the cs2cs call in a shell/cmd file, and run it over your input file (proj is available for Linux and Windows); the size of your dataset is no problem.
cs2cs +proj=latlong +datum=NAD83 +to +proj=utm +zone=10 +datum=NAD27 -r <<EOF
cs2cs -f %.16f +proj=utm +zone=20N +to +proj=latlon - | awk '{print $1 " " $2}'
so in your case something like this:
cs2cs -f %.16f +proj=lcc +lat_1=42.1 +lat_2=43.66666666666666 +lat_0=41.5 +lon_0=-84.36666666666666 +x_0=4000000 +y_0=0 +ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs +zone=20N +to +proj=latlon
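If you prefer to drive that same command from Python (for example to keep everything in one script for the ~50,000 points), here is a sketch using subprocess with the ptest/out.txt file names from above; the projection string is taken verbatim from the command above:

import subprocess

proj_string = (
    "+proj=lcc +lat_1=42.1 +lat_2=43.66666666666666 +lat_0=41.5 "
    "+lon_0=-84.36666666666666 +x_0=4000000 +y_0=0 +ellps=GRS80 "
    "+datum=NAD83 +to_meter=0.3048006096012192 +no_defs +zone=20N"
)

# cs2cs reads "x y" pairs from stdin and writes converted pairs to stdout.
with open('ptest') as src, open('out.txt', 'w') as dst:
    subprocess.run(
        ['cs2cs', '-f', '%.16f', *proj_string.split(), '+to', '+proj=latlon'],
        stdin=src, stdout=dst, check=True,
    )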
http://trac.osgeo.org/proj/wiki/man_cs2cs
http://trac.osgeo.org/proj/

If you have coordinates in a TXT, CSV or XLS file, you can copy and paste them into http://cs2cs.mygeodata.eu, where you can set the appropriate input and desired output coordinate systems. It is possible to insert thousands of coordinates in various formats...

Python parallelization for code to combine multiple images

I am new to Python and am trying to parallelize a program that I somehow pieced together from the internet. The program reads all image files (usually multiple series of images such as abc001,abc002...abc015 and xyz001,xyz002....xyz015) in a specific folder and then combines images in a specified range. Most times, the number of files exceeds 10000, and my latest case requires me to combine 24000 images. Could someone help me with:
Taking 2 sets of images from different directories. Currently I have to move these images into 1 directory and then work in said directory.
Reading only specified files. Currently my program reads all files, saves the names in an array (I think it's an array; it could be a dictionary) and then uses only the images required for combining. If I specify a range of files, it still checks against all files in the directory and takes a lot of time.
Parallel processing - I usually work with 10k files, sometimes more. These are images saved from the fluid simulations that I run at specific times. Currently, I save about 2k files at a time in separate folders and run the program to combine those 2000 files in one go. Then I copy all the output files to a separate folder to keep them together. It would be great if I could use all 16 cores on the processor to combine all the files in one go.
Image series 1 is like so.
Consider it to be a series of photos of a cat walking towards the camera. Each frame is suffixed with 001,002,...,n.
Image series 2 is like so.
Consider it to be a series of photos of the cat's expression changing with each frame. Each frame is suffixed with 001,002,...,n.
The code currently combines each frame from set1 and set2 to provide output.png as shown in the link here.
import sys
import os
from PIL import Image

keywords = input('Enter initial characters of image series 1 [Ex:Scalar_ , VoF_Scene_]:\n')
keywords2 = input('Enter initial characters of image series 2 [Ex:Scalar_ , VoF_Scene_]:\n')
directory = input('Enter correct folder name where images are present :\n')  # FOLDER WHERE IMAGES ARE LOCATED

result1 = {}
result2 = {}
name_count1 = 0
name_count2 = 0

for filename in os.listdir(directory):
    if keywords in filename:
        name_count1 += 1
        result1[name_count1] = os.path.join(directory, filename)
    if keywords2 in filename:
        name_count2 += 1
        result2[name_count2] = os.path.join(directory, filename)

num1 = input('Enter initial number of series:\n')
num2 = input('Enter final number of series:\n')
num1 = int(num1)
num2 = int(num2)

if name_count1 == (num2 - num1 + 1):
    a1 = 1
    a2 = name_count1
elif name_count2 == (num2 - num1 + 1):
    a1 = 1
    a2 = name_count2
else:
    a1 = num1
    a2 = num2 + 1

for x in range(a1, a2):
    # '05' signifies the number of digits in the file-name series,
    # e.g. Scalar_scene_1_00345.png -> 5 digits, Temperature_section_2_951.jpg -> 3 digits. Change accordingly.
    y = format(x, '05')
    y = str(y)
    for comparison_name1 in result1:
        for comparison_name2 in result2:
            test1 = result1[comparison_name1]
            test2 = result2[comparison_name2]
            if y in test1 and y in test2:
                a = test1
                b = test2
                test = [a, b]
                images = [Image.open(x) for x in test]
                widths, heights = zip(*(i.size for i in images))
                total_width = sum(widths)
                max_height = max(heights)
                new_im = Image.new('RGB', (total_width, max_height))
                x_offset = 0
                for im in images:
                    new_im.paste(im, (x_offset, 0))
                    x_offset += im.size[0]
                output_name = 'output' + y + '.png'
                new_im.save(os.path.join(directory, output_name))
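For the first two points in the question (taking the two series from different directories, and only touching files in the requested range), here is a minimal sketch using glob instead of scanning the whole directory; the directory names, prefixes and zero-padding below are hypothetical, so adjust them to your naming scheme:

import glob
import os

dir1 = 'scalar_images'      # hypothetical folder for image series 1
dir2 = 'vof_images'         # hypothetical folder for image series 2
prefix1, prefix2 = 'Scalar_', 'VoF_Scene_'
num1, num2 = 1, 2000        # range of frame numbers to combine

pairs = []
for frame in range(num1, num2 + 1):
    tag = format(frame, '05')  # same 5-digit suffix convention as above
    left = glob.glob(os.path.join(dir1, f'{prefix1}*{tag}.png'))
    right = glob.glob(os.path.join(dir2, f'{prefix2}*{tag}.png'))
    if left and right:
        pairs.append((left[0], right[0]))

# 'pairs' now holds (series-1 file, series-2 file) tuples for exactly the requested frames,
# ready to hand to the pasting loop above or to a multiprocessing pool.
print(len(pairs), 'frame pairs found')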
I did a Python version as well, it's not quite as fast but it is maybe closer to your heart :-)
#!/usr/bin/env python3

import cv2
import numpy as np
from multiprocessing import Pool

def doOne(params):
    """Append the two input images side-by-side to output the third."""
    imA = cv2.imread(params[0], cv2.IMREAD_UNCHANGED)
    imB = cv2.imread(params[1], cv2.IMREAD_UNCHANGED)
    res = np.hstack((imA, imB))
    cv2.imwrite(params[2], res)

if __name__ == '__main__':
    # Build the list of jobs - each entry is a tuple with 2 input filenames and an output filename
    jobList = []
    for i in range(1000):
        # Horizontally append a-XXXXX.png to b-XXXXX.png to make c-XXXXX.png
        jobList.append((f'a-{i:05d}.png', f'b-{i:05d}.png', f'c-{i:05d}.png'))

    # Make a pool of processes - 1 per CPU core
    with Pool() as pool:
        # Map the list of jobs to the pool of processes
        pool.map(doOne, jobList)
You can do this a little quicker with libvips. To join two images left-right, enter:
vips join left.png right.png result.png horizontal
To test, I made 200 pairs of 1200x800 PNGs like this:
for i in {1..200}; do cp x.png left$i.png; cp x.png right$i.png; done
Then tried a benchmark:
time parallel vips join left{}.png right{}.png result{}.png horizontal ::: {1..200}
real 0m42.662s
user 2m35.983s
sys 0m6.446s
With imagemagick on the same laptop I see:
time parallel convert left{}.png right{}.png +append result{}.png ::: {1..200}
real 0m55.088s
user 3m24.556s
sys 0m6.400s
You can do that much faster without Python, and using multi-processing with ImageMagick or libvips.
The first part is all setup:
Make 20 images, called a-000.png ... a-019.png that go from red to blue:
convert -size 64x64 xc:red xc:blue -morph 18 a-%03d.png
Make 20 images, called b-000.png ... b-019.png that go from yellow to magenta:
convert -size 64x64 xc:yellow xc:magenta -morph 18 b-%03d.png
Now append them side-by-side into c-000.png ... c-019.png
for ((f=0;f<20;f++))
do
z=$(printf "%03d" $f)
convert a-${z}.png b-${z}.png +append c-${z}.png
done
Those images look like this:
If that looks good, you can do them all in parallel with GNU Parallel:
parallel convert a-{}.png b-{}.png +append c-{}.png ::: $(seq -f "%03g" 0 19)
Benchmark
I did a quick benchmark and made 20,000 images a-00000.png...a-19999.png and another 20,000 images b-00000.png...b-19999.png with each image 1200x800 pixels. Then I ran the following command to append each pair horizontally and write 20,000 output images c-00000.png...c-19999.png:
seq -f "%05g" 0 19999 | parallel --eta convert a-{}.png b-{}.png +append c-{}.png
and that takes 16 minutes on my MacBook Pro with all 12 CPU cores pegged at 100% throughout. Note that you can:
add spacers between the images,
write annotation onto the images,
add borders,
resize
if you wish and do lots of other processing - this is just a simple example.
Note also that you can get even quicker times - in the region of 10-12 minutes if you accept JPEG instead of PNG as the output format.

OpenCV : how to provide matrix for 'undistort' if I know lens correction factor in Gimp?

I'm getting images from an IP camera that have a strong fish-eye effect. I found that in Gimp I can get lines mostly straight by applying the Lens Distortion filter with a "main" value of -30 (all other parameters remain zero).
Now I need to do this ad hoc using OpenCV. I gathered that the undistort function in imgproc would be the right thing to call. But how do I generate the correct camera and distortion matrices? I see there is a calibrateCamera function, but it seems you need a PhD in computer vision or so to use it. I have no clue. Since I know the one parameter, there must be a simple way to translate it into the matrices expected by 'undistort'?
Note: I only need the radial distortion coefficients, I'm not interested in the tangential distortion.
There is a sample provided by OpenCV for calibration. All you need is a set of images of a checkerboard (around 20 should be good) taken by your desired camera. It will give you all the required parameters (distortion coefficients, intrinsic parameters, etc.). Then you can use the 'undistort' function of OpenCV to correct your image.
You need to change in default.xml (or you can create your own .xml) the name of the XML file containing the list of your images, the count of inner squares and their dimension in the real world.
Tada, you have your required parameters :-)
For those who wonder where that calibration tool comes from: it seems one has to build it from source. This is what I did on Linux:
git clone https://github.com/opencv/opencv.git
cd opencv
git checkout -b 3.1.0 3.1.0 # make sure we build that version
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=Release -D BUILD_EXAMPLES=ON ..
make -j4
Then to calibrate:
./bin/cpp-example-calibration -w=8 -h=6 -o=camera.yml -op -oe -su image_list.xml
The -su lets you verify how the images look after un-distortion. The -w and -h parameters take "inner corners" which is not the number of squares in the chess pattern, but rather (num-black-squares - 1) * 2.
Here's how the undistortion is applied in the end, using Scala and JavaCV:
import org.bytedeco.javacpp.indexer.FloatRawIndexer
import org.bytedeco.javacpp.opencv_core.Mat
import org.bytedeco.javacpp.{opencv_core, opencv_imgcodecs, opencv_imgproc}
import java.io.File
// from the camera_matrix > data part of the yml:
val cameraFocal = 1.4656877976320607e+03
val cameraCX = 1920.0/2
val cameraCY = 1080.0/2
val cameraMatrixData = Array[Double](
cameraFocal, 0.0 , cameraCX,
0.0 , cameraFocal, cameraCY,
0.0 , 0.0 , 1.0
)
// from the distortion_coefficients of the yml:
val distMatrixData = Array[Double](
-4.016824381742e-01, 4.368842493074e-02, 0.0, 0.0, 1.096412142704e-01
)
def run(in: File, out: File): Unit = {
  val matOut = new Mat
  val camMat = new Mat(3, 3, opencv_core.CV_32FC1)
  val camIdx = camMat.createIndexer[FloatRawIndexer]
  for (row <- 0 until 3) {
    for (col <- 0 until 3) {
      camIdx.put(row, col, cameraMatrixData(row * 3 + col).toFloat)
    }
  }
  val distVec = new Mat(1, 5, opencv_core.CV_32FC1)
  val distIdx = distVec.createIndexer[FloatRawIndexer]
  for (col <- 0 until 5) {
    distIdx.put(0, col, distMatrixData(col).toFloat)
  }
  val matIn = opencv_imgcodecs.imread(in.getPath)
  opencv_imgproc.undistort(matIn, matOut, camMat, distVec)
  opencv_imgcodecs.imwrite(out.getPath, matOut)
}
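For reference, the same undistortion can be sketched in Python with cv2; the focal length, principal point and distortion coefficients below simply mirror the yml excerpts used in the Scala code, and the file names are hypothetical:

import cv2
import numpy as np

# Camera matrix [[f, 0, cx], [0, f, cy], [0, 0, 1]], values from the calibration yml above.
camera_matrix = np.array([
    [1.4656877976320607e+03, 0.0,                    1920.0 / 2],
    [0.0,                    1.4656877976320607e+03, 1080.0 / 2],
    [0.0,                    0.0,                    1.0],
])

# Distortion coefficients (k1, k2, p1, p2, k3) from the calibration yml above.
dist_coeffs = np.array([-4.016824381742e-01, 4.368842493074e-02, 0.0, 0.0, 1.096412142704e-01])

img = cv2.imread('frame.png')                      # hypothetical input file
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite('frame_undistorted.png', undistorted)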

Logistic mapping in gnuplot

I have quite a big problem when it comes to plotting data.
First, I've obtained file data.dat from my c++ program, which implements the logistic map.
data.dat looks as follows: the first column is the number k, which should go along the bottom of the plot. When k is in the range [2,3) everything is fine: there is only one attractor (a single value for each k, always in the range (0,1)), but when k is in [3,4) things get complicated.
For each k there are anywhere from 2 up to 100 corresponding points.
Each of these points is in a separate column, and I have no idea how to connect them to their k.
Here is a sample of my data for k = 2.5, 3, 3.2, 3.5, 3.8 and 3.99999, divided by newlines for clarity (it's not divided by newlines in my original data file):
http://pastebin.com/2AcAjXzk
Thanks for any help, cheers.
Gnuplot cannot handle such a data format properly. Either modify your program so that it prints, on each line, the k followed by a single value, or process your data file with a short awk script at plot time:
plot '< awk ''{ for(i = 2; i <= NF; i++) print $1, $i }'' file.txt' using 1:2 with dots notitle
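If you would rather reshape the file once instead of awk-ing it at plot time, here is a small Python sketch that assumes the same layout (k in the first column, a variable number of attractor values after it); data.dat and data_long.dat are just the names used here:

# Turn "k v1 v2 ... vN" lines into one "k v" pair per line for gnuplot's `using 1:2`.
with open('data.dat') as src, open('data_long.dat', 'w') as dst:
    for line in src:
        fields = line.split()
        if len(fields) < 2:
            continue
        k = fields[0]
        for value in fields[1:]:
            dst.write(f'{k} {value}\n')

Then plot 'data_long.dat' using 1:2 with dots notitle.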

OpenCV read image from csv file

I have an image in a CSV file and I want to load it in my program. I found that I can load an image from CSV like this:
CvMLData mlData;
mlData.read_csv(argv[1]);
const CvMat* tmp = mlData.get_values();
cv::Mat img(tmp, true),img1;
img.convertTo(img, CV_8UC3);
cv::namedWindow("img");
cv::imshow("img", img);
I have an RGB picture in that file but I got a grey picture... Can somebody explain how to load a color image, or how I can modify this code to get a color image?
Thanks!
Updated
Ok, I don't know how to read your file into OpenCV for the moment, but I can offer you a work-around to get you started. The following will create a header for a PNM format file to match your CSV file and then append your data onto the end and you should end up with a file that you can load.
printf "P3\n284 177\n255\n" > a.pnm # Create PNM header
tr -d ',][' < izlaz.csv >> a.pnm # Append CSV data, after removing commas and []
If I do the above, I can see your bench, tree and river.
If you cannot read that PNM file directly into OpenCV, you can make it into a JPEG with ImageMagick like this:
convert a.pnm a.jpg
I also had a look at the University of Wisconsin ML data archive, which is read with those OpenCV functions you are using, and the format of their data is different from yours... theirs is like this:
1000025,5,1,1,1,2,1,3,1,1,2
1002945,5,4,4,5,7,10,3,2,1,2
1015425,3,1,1,1,2,2,3,1,1,2
1016277,6,8,8,1,3,4,3,7,1,2
yours looks like this:
[201, 191, 157, 201 ... ]
So maybe this tr command is enough to convert your data:
tr -d '][' < izlaz.csv > TryMe.csv
Original Answer
If you run the following on your CSV file, it translates commas into newlines and then counts the lines:
tr "," "\n" < izlaz.csv | wc -l
And that gives 150,804 lines, which means 150,804 commas in your file and therefore 150,804 integers in your file (+/- 1 or 2). If your greyscale image is 177 rows by 852 columns, you are going to need 150,804 RGB triplets (i.e. 450,000 +/- integers) to represent a colour image; as it is, you only have a single greyscale value for each pixel.
The fault is in the way you write the file, not the way you read it.
To see the color image I had to set the number of channels. So this code works for me:
CvMLData mlData;
mlData.read_csv(argv[1]);
const CvMat* tmp = mlData.get_values();
cv::Mat img(tmp, true),img1;
img.convertTo(img, CV_8UC3);
img= img.reshape(3); //set number of channels
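The same idea can be sketched in Python, in case it helps anyone arriving here with a newer OpenCV that no longer ships CvMLData; it assumes the brackets have already been stripped (for example with the tr command above) and that the file really is 177 rows of 852 comma-separated values, i.e. 284 RGB pixels per row:

import numpy as np
import cv2

flat = np.genfromtxt('izlaz_clean.csv', delimiter=',')   # hypothetical cleaned file, one image row per line
img = flat.reshape(177, 284, 3).astype(np.uint8)          # rows, cols, channels

# If the values are stored as R,G,B, convert to OpenCV's BGR order before writing.
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
cv2.imwrite('restored.png', img)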

Image filtering - wrong results?

I'm experimenting with convolving an image with a user-supplied mask, in this case
u = array([[-2,-2,-2],[-2,25,-2],[-2,-2,-2]])/9
using the commands
In[1]: import scipy.ndimage as ndi
In[2]: import skimage.io as io
In[3]: c = io.imread('cameraman.png')
In[4]: cu = ndi.convolve(c,u)
In[5]: io.imshow(cu)
I'm checking this against commands in GNU Octave:
Octave-3.8: 1> c = imread('cameraman.png');
Octave-3.8: 2> u = [-2 -2 -2;-2 25 -2;-2 -2 -2]/9
Octave-3.8: 3> cu = imfilter(c,u)
Octave-3.8: 4> imshow(cu)
But here's the thing: Octave seems to give the correct result, but Python doesn't, even though the commands convolve and imfilter are supposed to be implementing the same algorithm. (Well in fact imfilter performs a correlation, which in this case is the same as a convolution.)
The Octave output is:
[Octave output image]
and the Python output is:
[Python output image]
which as you can see is very different to the Octave result. Does anybody know what's going on here? Or is there a better way of convolving with a user-supplied linear filter than using convolve?
The problem may be the result of your convolution taking your image luminance values out of bounds. I ran the example below in Matlab (~= Octave); for an image that initially has grey values 0-255, so in the normalised range [0, 0.99], the result ends up with pixels in the range [-0.88, 2.03].
>> img=double(imread('cameraman.tif'))./255;
>> K=[-2 -2 -2 ; -2 25 -2; -2 -2 -2]/9;
>> out=conv2(img,K,'same');
>> max(max(out))
ans =
2.0288
>> min(min(out))
ans =
-0.8776
It could be that Python has a problem visualising images with out-of-range grey values (<0 or >255), and this is causing a clamping of values, resulting in black/white halos in those areas. Perhaps Octave normalises the image prior to displaying it, resulting in fewer artifacts. If you normalise your image in Python prior to displaying it, do you still have this problem?
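A minimal sketch of that check in Python, assuming a single-channel cameraman image like the one above: convert to float before convolving (so nothing overflows or wraps in uint8), inspect the range, and clip back to [0, 1] before display:

import numpy as np
import scipy.ndimage as ndi
import skimage.io as io

u = np.array([[-2, -2, -2], [-2, 25, -2], [-2, -2, -2]]) / 9

c = io.imread('cameraman.png').astype(float) / 255.0  # work in float, not uint8
cu = ndi.convolve(c, u)
print(cu.min(), cu.max())              # this kernel pushes values outside [0, 1]

cu_clipped = np.clip(cu, 0.0, 1.0)     # clamp, mimicking imfilter's saturation
io.imshow(cu_clipped)
io.show()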