I am trying to get Exif data (camera make, ISO speed, etc.) from a file upload. I can get some tags (see below) but need some guidance on extracting items from the Exif directories. Any suggestions, please?
IEnumerable<MetadataExtractor.Directory> directories = ImageMetadataReader.ReadMetadata(strFileName);
foreach (var directory in directories)
foreach (var tag in directory.Tags)
System.Diagnostics.Debug.WriteLine($"Directory {directory.Name} - {tag.Name} = {tag.Description}");
var subIfdDirectory = directories.OfType<ExifSubIfdDirectory>().FirstOrDefault();
var dateTime = subIfdDirectory?.GetDescription(ExifDirectoryBase.TagDateTime);
System.Diagnostics.Debug.WriteLine("dateTime " + dateTime);
//
Image img = Image.FromFile(strFileName);
ImageFormat format = img.RawFormat;
System.Diagnostics.Debug.WriteLine("Image Type : " + format.ToString());
System.Diagnostics.Debug.WriteLine("Image width : " + img.Width);
System.Diagnostics.Debug.WriteLine("Image height : " + img.Height);
System.Diagnostics.Debug.WriteLine("Image resolution : " + (img.VerticalResolution * img.HorizontalResolution));
System.Diagnostics.Debug.WriteLine("Image Pixel depth : " + Image.GetPixelFormatSize(img.PixelFormat));
PropertyItem[] propItems = img.PropertyItems;
int count = 0;
ArrayList arrayList = new ArrayList();
foreach (PropertyItem item in propItems)
{
arrayList.Add("Property Item " + count.ToString());
arrayList.Add("iD: 0x" + item.Id.ToString("x"));
System.Diagnostics.Debug.WriteLine("PropertyItem item in propItems: 0x" + item.Id.ToString("x"));
count++;
}
ASCIIEncoding encodings = new ASCIIEncoding();
try
{
string make = encodings.GetString(propItems[1].Value);
arrayList.Add("The equipment make is " + make + ".");
}
catch
{
arrayList.Add("no Meta Data Found");
}
ViewBag.listFromArray = arrayList;
return View(await db.ReadExifs.ToListAsync());
}
Two loops, I know; messy, but it gives some output:
Directory JPEG - Compression Type = Baseline
Directory JPEG - Data Precision = 8 bits
Directory JPEG - Image Height = 376 pixels
Directory JPEG - Image Width = 596 pixels
Directory JPEG - Number of Components = 3
Directory JPEG - Component 1 = Y component: Quantization table 0, Sampling factors 2 horiz/2 vert
Directory JPEG - Component 2 = Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert
Directory JPEG - Component 3 = Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert
Directory JFIF - Version = 1.1
Directory JFIF - Resolution Units = inch
Directory JFIF - X Resolution = 120 dots
Directory JFIF - Y Resolution = 120 dots
Directory JFIF - Thumbnail Width Pixels = 0
Directory JFIF - Thumbnail Height Pixels = 0
Directory File - File Name = FakeFoto03_large.Jpg
Directory File - File Size = 66574 bytes
Directory File - File Modified Date = Tue Jan 03 00:02:00 +00:00 2017
Image Type : [ImageFormat: b96b3cae-0728-11d3-9d7b-0000f81ef32e]
Image width : 596
Image height : 376
Image resolution : 14400
Image Pixel depth : 24
Thanks. Y.
If the image you're processing has camera make, ISO and so forth, metadata-extractor will print them out. The image you're providing must not have those details.
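To expand on that: once the file does contain Exif data, the typed directory lookups make targeted extraction straightforward. A minimal sketch, assuming the MetadataExtractor NuGet package and the `strFileName` variable from the question (which tags are present depends entirely on the photo):

```csharp
using System.Linq;
using MetadataExtractor;
using MetadataExtractor.Formats.Exif;

var directories = ImageMetadataReader.ReadMetadata(strFileName);

// Camera make/model live in IFD0; exposure settings such as ISO live in the Sub-IFD.
var ifd0 = directories.OfType<ExifIfd0Directory>().FirstOrDefault();
var subIfd = directories.OfType<ExifSubIfdDirectory>().FirstOrDefault();

string make = ifd0?.GetDescription(ExifDirectoryBase.TagMake);
string model = ifd0?.GetDescription(ExifDirectoryBase.TagModel);
string iso = subIfd?.GetDescription(ExifDirectoryBase.TagIsoEquivalent);

System.Diagnostics.Debug.WriteLine($"Make: {make}, Model: {model}, ISO: {iso}");
```

Each lookup returns null when the tag is absent, so missing metadata degrades gracefully instead of throwing.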
Solved. This block:
ArrayList arrayList = new ArrayList();
IEnumerable<MetadataExtractor.Directory> directories = ImageMetadataReader.ReadMetadata(strFileName);
foreach (var directory in directories)
foreach (var tag in directory.Tags)
// System.Diagnostics.Debug.WriteLine(string.Format("Directory " + $"{directory.Name} - {tag.Name} = {tag.Description}"));
arrayList.Add($"{tag.Name} = {tag.Description}");
ViewBag.listFromArray = arrayList;
return View(await db.ReadExifs.ToListAsync());
This produces (in the case of the Photo used as source) 120 exif tags. Sample:
White Balance Mode = Auto white balance
Digital Zoom Ratio = 1
Focal Length 35 = 28 mm
Scene Capture Type = Standard
Gain Control = Low gain up
Contrast = None
Thanks to Drew for the reply; it works fine now, up to a point. While the snippet prints to screen fine (160 items), I cannot assign each item's description to a variable or array. Here is the code:
// start exif ###############################
var strFileName = Server.MapPath("~/uploads/" + fname + "_large" + extension);
System.Diagnostics.Debug.WriteLine(">>> ReadExifsController, fname: " + fname);
if (System.IO.File.Exists(strFileName))
{
System.Diagnostics.Debug.WriteLine(">>> ReadExifsController File exists.");
}
ArrayList arrayList = new ArrayList();
arrayList.Add("ArrayList start");
IEnumerable<MetadataExtractor.Directory> directories = ImageMetadataReader.ReadMetadata(strFileName);
foreach (var directory in directories)
foreach (var tag in directory.Tags)
System.Diagnostics.Debug.WriteLine(string.Format("Directory " + $"{directory.Name} - {tag.Name} = {tag.Description}"));
count++;
ViewBag.listFromArray = arrayList;
return View(await db.ReadExifs.ToListAsync());
}
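For the record, the reason nothing lands in `arrayList` in the snippet above is that the nested `foreach` has no braces, so only the `WriteLine` is the loop body; the `count++` runs once, after both loops finish. A sketch of the fix, using the same names as the snippet (braces make the loop body explicit):

```csharp
foreach (var directory in directories)
{
    foreach (var tag in directory.Tags)
    {
        System.Diagnostics.Debug.WriteLine($"Directory {directory.Name} - {tag.Name} = {tag.Description}");
        arrayList.Add($"{tag.Name} = {tag.Description}");  // now runs once per tag
        count++;
    }
}
```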
Related
I'm trying to fit a line using a quadratic polynomial, but because the fit results in continuous values, the integer conversion (for CartesianIndex) rounds them off, and I lose data at that pixel.
I tried the method here, so I get the new y values as:
using Images, Polynomials, Plots,ImageView
img = load("jTjYb.png")
img = Gray.(img)
img = img[end:-1:1, :]
nodes = findall(img.>0)
xdata = map(p->p[2], nodes)
ydata = map(p->p[1], nodes)
f = fit(xdata, ydata, 2)
ydata_new = round.(Int, f.(xdata))
new_line_fitted_img=zeros(size(img))
new_line_fitted_img[xdata,ydata_new].=1
imshow(new_line_fitted_img)
which results in chopped line as below
whereas I was expecting it to be continuous line as it was in pre-processing
Do you expect the following?
Raw Image
Fitted Polynomial
Superposition
Code:
using Images, Polynomials
img = load("img.png");
img = Gray.(img)
fx(data, dCoef, cCoef, bCoef, aCoef) = @. data^3 * aCoef + data^2 * bCoef + data * cCoef + dCoef;
function fit_poly(img::Array{<:Gray, 2})
img = img[end:-1:1, :]
nodes = findall(img.>0)
xdata = map(p->p[2], nodes)
ydata = map(p->p[1], nodes)
f = fit(xdata, ydata, 3)
xdt = unique(xdata)
xdt, fx(xdt, f.coeffs...)
end;
function draw_poly!(X, y)
the_min = minimum(y)
if the_min<0
y .-= the_min - 1
end
initialized_img = Gray.(zeros(maximum(X), maximum(y)))
initialized_img[CartesianIndex.(X, y)] .= 1
dif = diff(y)
for i in eachindex(dif)
the_dif = dif[i]
if abs(the_dif) >= 2
segment = the_dif ÷ 2
initialized_img[i, y[i]:y[i]+segment] .= 1
initialized_img[i+1, y[i]+segment+1:y[i+1]-1] .= 1
end
end
rotl90(initialized_img)
end;
X, y = fit_poly(img);
y = convert(Vector{Int64}, round.(y));
draw_poly!(X, y)
I'm reading the source code of OpenCV: cvProjectPoints2 and cvRodrigues2.
In cvProjectPoints2, the Jacobian matrix is first obtained using cvRodrigues2( &_r, &matR, &_dRdr ); and then used to calculate the partial derivatives of the pixels w.r.t. the rvec (axis-angle representation).
if( dpdr_p )
{
double dx0dr[] =
{
X*dRdr[0] + Y*dRdr[1] + Z*dRdr[2],
X*dRdr[9] + Y*dRdr[10] + Z*dRdr[11],
X*dRdr[18] + Y*dRdr[19] + Z*dRdr[20]
};
double dy0dr[] =
{
X*dRdr[3] + Y*dRdr[4] + Z*dRdr[5],
X*dRdr[12] + Y*dRdr[13] + Z*dRdr[14],
X*dRdr[21] + Y*dRdr[22] + Z*dRdr[23]
};
double dz0dr[] =
{
X*dRdr[6] + Y*dRdr[7] + Z*dRdr[8],
X*dRdr[15] + Y*dRdr[16] + Z*dRdr[17],
X*dRdr[24] + Y*dRdr[25] + Z*dRdr[26]
};
for( j = 0; j < 3; j++ )
{
double dxdr = z*(dx0dr[j] - x*dz0dr[j]);
double dydr = z*(dy0dr[j] - y*dz0dr[j]);
double dr2dr = 2*x*dxdr + 2*y*dydr;
double dcdist_dr = k[0]*dr2dr + 2*k[1]*r2*dr2dr + 3*k[4]*r4*dr2dr;
double dicdist2_dr = -icdist2*icdist2*(k[5]*dr2dr + 2*k[6]*r2*dr2dr + 3*k[7]*r4*dr2dr);
double da1dr = 2*(x*dydr + y*dxdr);
double dmxdr = fx*(dxdr*cdist*icdist2 + x*dcdist_dr*icdist2 + x*cdist*dicdist2_dr +
k[2]*da1dr + k[3]*(dr2dr + 2*x*dxdr));
double dmydr = fy*(dydr*cdist*icdist2 + y*dcdist_dr*icdist2 + y*cdist*dicdist2_dr +
k[2]*(dr2dr + 2*y*dydr) + k[3]*da1dr);
dpdr_p[j] = dmxdr;
dpdr_p[dpdr_step+j] = dmydr;
}
dpdr_p += dpdr_step*2;
}
The shape of dRdr is 3*9, and from how the indices of dRdr are used:
X*dRdr[0] + Y*dRdr[1] + Z*dRdr[2], //-> dx0dr1
X*dRdr[9] + Y*dRdr[10] + Z*dRdr[11], //-> dx0dr2
X*dRdr[18] + Y*dRdr[19] + Z*dRdr[20] //-> dx0dr3
the Jacobian matrix seems to be:
dR1/dr1, dR2/dr1, ..., dR9/dr1,
dR1/dr2, dR2/dr2, ..., dR9/dr2,
dR1/dr3, dR2/dr3, ..., dR9/dr3,
But to my knowledge the Jacobian matrix should be of shape 9*3, since it holds the derivatives of R(1~9) w.r.t. r(1~3):
dR1/dr1, dR1/dr2, dR1/dr3,
dR2/dr1, dR2/dr2, dR2/dr3,
...
...
dR9/dr1, dR9/dr2, dR9/dr3,
As the docs of cvRodrigues2 say:
jacobian – Optional output Jacobian matrix, 3x9 or 9x3, which is a
matrix of partial derivatives of the output array components with
respect to the input array components.
So am I misunderstanding the code & docs? Or is the code using other convention? Or is it a bug (not likely...)?
If you look up the docs:
src – Input rotation vector (3x1 or 1x3) or rotation matrix (3x3).
dst – Output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively.
jacobian – Optional output Jacobian matrix, 3x9 or 9x3, which is a matrix of partial derivatives of the output array components with respect to the input array components.
As you can see, you can swap the source and destination (mathematically this amounts to exactly a transposition), but the code does not account for it.
Therefore you have indeed got a transposed Jacobian, because you swapped the roles of the first arguments (relative to the default for their types). Swap them back, and you'll get the normal Jacobian!
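If you want to see the "mathematical" 9x3 layout without worrying about OpenCV's in-memory order, a finite-difference check against Rodrigues' formula is enough. A self-contained Python sketch (not OpenCV code; the function names are made up for illustration):

```python
import math

def rodrigues(r):
    """Axis-angle vector (length 3) -> 3x3 rotation matrix via Rodrigues' formula."""
    theta = math.sqrt(sum(c * c for c in r))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / theta for c in r)
    c, s = math.cos(theta), math.sin(theta)
    v = 1.0 - c
    return [
        [c + kx * kx * v,      kx * ky * v - kz * s,  kx * kz * v + ky * s],
        [ky * kx * v + kz * s, c + ky * ky * v,       ky * kz * v - kx * s],
        [kz * kx * v - ky * s, kz * ky * v + kx * s,  c + kz * kz * v],
    ]

def jacobian_dR_dr(r, eps=1e-6):
    """Finite-difference Jacobian: 9 rows (entries of R, row-major) by 3 columns (r1..r3)."""
    base = [e for row in rodrigues(r) for e in row]
    J = [[0.0] * 3 for _ in range(9)]
    for j in range(3):
        rp = list(r)
        rp[j] += eps
        pert = [e for row in rodrigues(rp) for e in row]
        for i in range(9):
            J[i][j] = (pert[i] - base[i]) / eps
    return J

J = jacobian_dR_dr([0.1, 0.2, 0.3])
print(len(J), len(J[0]))  # 9 rows, 3 columns: dRi/drj
```

Comparing this 9x3 matrix entry-by-entry against the dRdr buffer in cvProjectPoints2 makes the transposition in the C code visible directly.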
I am getting an assertion error when using the Core.inRange function (actually, any Core function). I have followed all the solutions in the answers to similar questions: checking the number of channels, checking whether the image is empty, and verifying the installation. I am using Android Studio 2.2 on Mac. The phones tested were a ZTE Speed (KitKat) and a Moto G3 (Marshmallow).
My goal is to extract the red and blue from an image to determine whether a red light or a blue one is on. The code gets the image from a Vuforia frame, converts it to a bitmap, and then tries to use OpenCV to manipulate the image. This was working in previous code, before we had to implement Vuforia as part of the core.
This is the main section of the code. The Imgproc.cvtColor function works fine; it's the very last call, Core.inRange, that fails:
Mat mat1 = new Mat(640,480, CvType.CV_8UC4);
Mat mat2 = new Mat(640,480, CvType.CV_8UC4);
Mat mat3 = new Mat(640,480, CvType.CV_8UC4);
.......
Log.d("OPENCV","Height " + rgb.getHeight() + " Width " + rgb.getWidth());
Bitmap bm = Bitmap.createBitmap(rgb.getWidth(), rgb.getHeight(), Bitmap.Config.RGB_565);
bm.copyPixelsFromBuffer(rgb.getPixels());
//Mat tmp = OCVUtils.bitmapToMat(bm, CvType.CV_8UC4);
Mat tmp = new Mat(rgb.getWidth(), rgb.getHeight(), CvType.CV_8UC4);
Utils.bitmapToMat(bm, tmp);
SaveImage(tmp, "-raw");
fileLogger.writeEvent("process()","Saved original file ");
Log.d("OPENCV","CV_8UC4 Height " + tmp.height() + " Width " + tmp.width());
Log.d("OPENCV","Channels " + tmp.channels());
tmp.convertTo(mat1, CvType.CV_8UC4);
Size size = new Size(640,480);//the dst image size,e.g.100x100
resize(mat1,mat1,size);//resize image
SaveImage(mat1, "-convertcv_8uc4");
Log.d("OPENCV","CV_8UC4 Height " + mat1.height() + " Width " + mat1.width());
fileLogger.writeEvent("process()","converted to cv_8uc4");
Log.d("OPENCV","Channels " + mat1.channels());
Imgproc.cvtColor(mat1, mat2, Imgproc.COLOR_RGB2HSV_FULL);
SaveImage(mat2, "-COLOR_RGB2HSV_FULL");
Log.d("OPENCV","COLOR_RGB2HSV Height " + mat2.height() + " Width " + mat2.width());
Log.d("OPENCV","Channels " + mat2.channels());
//Core.inRange(mat2, RED_LOWER_BOUNDS_HSV, RED_UPPER_BOUNDS_HSV, mat3);
Log.d("OPENCV","mat2 Channels " + mat2.channels() + " empty " + mat2.empty());
Log.d("OPENCV","mat3 Channels " + mat3.channels() + " empty " + mat3.empty());
Core.inRange(mat2, new Scalar(0,100,150), new Scalar(22,255,255), mat3);
fileLogger.writeEvent("process()","Set Red window Limits: ");
SaveImage(mat3, "-red limits");
These are the two errors I get when the command runs:
E/cv::error(): OpenCV Error: Assertion failed (scn == 3 || scn == 4) in void cv::cvtColor(cv::InputArray, cv::OutputArray, int, int), file /home/maksim/workspace/android-pack/opencv/modules/imgproc/src/color.cpp, line 7349
E/org.opencv.imgproc: imgproc::cvtColor_10() caught cv::Exception: /home/maksim/workspace/android-pack/opencv/modules/imgproc/src/color.cpp:7349: error: (-215) scn == 3 || scn == 4 in function void cv::cvtColor(cv::InputArray, cv::OutputArray, int, int)
3 images are saved in the pictures directory as expected.
My logging produces the following
D/OPENCV: mat2 Channels 3 empty false
D/OPENCV: mat3 Channels 4 empty false
I have tried two different phones, tried adjusting the resolution down. I have reinstalled the OpenCV module in case it was not installed correctly. I have made the images all 3 channels, all 4 channels.
So after a week of debugging, it turned out to be the most trivial of mistakes!
Within the SaveImage function
Imgproc.cvtColor(mat, mIntermediateMat, Imgproc.COLOR_RGBA2BGR, 3);
This was what was causing the issue.
After Core.inRange came the SaveImage call. Core.inRange dropped the channels to 1, and the fileLogger did not flush the last log. If I had used Log instead, I probably would have spotted it sooner.
public void SaveImage (Mat mat, String info) {
Mat mIntermediateMat = new Mat();
Imgproc.cvtColor(mat, mIntermediateMat, Imgproc.COLOR_RGBA2BGR, 3); // <-- this line caused the failure
File path = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES);
String filename = "ian" + info + ".png";
File file = new File(path, filename);
filename = file.toString();
boolean ok = Imgcodecs.imwrite(filename, mIntermediateMat);
if (ok)
Log.d("filesave", "SUCCESS writing image to external storage");
else
Log.d("filesave", "Fail writing image to external storage");
}
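A defensive variant of SaveImage would have surfaced this immediately: choose the color conversion by channel count instead of assuming RGBA input. This is a sketch assuming the same OpenCV Android bindings as the question; the COLOR_GRAY2BGR branch for the 1-channel mask case is my addition:

```java
public void SaveImage(Mat mat, String info) {
    Mat out = new Mat();
    // Core.inRange produces a single-channel mask, so pick the
    // conversion based on what the Mat actually contains.
    if (mat.channels() == 4) {
        Imgproc.cvtColor(mat, out, Imgproc.COLOR_RGBA2BGR);
    } else if (mat.channels() == 3) {
        Imgproc.cvtColor(mat, out, Imgproc.COLOR_RGB2BGR);
    } else {
        Imgproc.cvtColor(mat, out, Imgproc.COLOR_GRAY2BGR);
    }
    File path = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES);
    File file = new File(path, "ian" + info + ".png");
    if (Imgcodecs.imwrite(file.toString(), out))
        Log.d("filesave", "SUCCESS writing image to external storage");
    else
        Log.d("filesave", "Fail writing image to external storage");
}
```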
I wrote code that gets a line projection (intensity profile) of an image, and I would like to export this profile to an Excel table and then sort all the Y coordinates. For example, besides the maximum and minimum of all the Y coordinates, I would like to know the five largest and the smallest coordinate values.
Is there any code that can achieve this? Thanks,
image line_projection
Realimage imgexmp
imgexmp := GetFrontImage()
number samples = 256, xscale, yscale, xsize, ysize
GetSize( imgexmp, xsize, ysize )
line_projection := CreateFloatImage( "line projection", Xsize, 1 )
line_projection = 0
line_projection[icol,0] += imgexmp
line_projection /= samples
ShowImage( line_projection )
Finding a 'sorted' list of values
If you need to sort through large lists of values (i.e. large images), the following might not be very efficient. However, if your aim is to get the "x highest" values for a relatively small x, then the following code is just fine:
number nFind = 10
image test := GetFrontImage().ImageClone()
Result( "\n\n" + nFind + " highest values:\n" )
number x,y,v
For( number i=0; i<nFind; i++ )
{
v = max(test,x,y)
Result( "\t" + v + " at " + x + "\n" )
test[x,y] = - Infinity()
}
This works on a copy and subsequently "removes" each maximum by overwriting that pixel value. The max command is fast, even for large images, but the for-loop iteration and setting of individual pixels is slow. Hence this script is too slow for a complete sorting of large data, but it can quickly get you the n highest values.
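Outside DM script, the same "n highest values with their positions" idea is a one-liner with a heap. A generic Python sketch (the sample values are made up for illustration):

```python
import heapq

values = [3.2, 9.1, 4.7, 9.0, 1.5, 8.8]  # stand-in for the profile's pixel values
n = 3

# nlargest avoids a full sort: O(len(values) * log n) instead of O(len * log len).
top = heapq.nlargest(n, enumerate(values), key=lambda iv: iv[1])
for i, v in top:
    print(f"{v} at index {i}")
```

`heapq.nsmallest` gives the mirror-image "n lowest" list the same way, which covers the "five largest and the smallest" request in one pass each.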
This is a non-coding answer:
If you have a LinePlot display in DigitalMicrograph, you can simply copy-paste it into Excel to get the numbers.
I.e. with the LinePlot image frontmost, press CTRL + C to copy
(make sure there are no ROIs on it).
Switch to Excel and press CTRL + V. Done.
I'm displaying movie titles as letter images, e.g. a separate image for each letter. Each letter can then be dragged into a space/container. This is my code for displaying the containers:
posX = {}
posY = 124
px = 10
containers = {}
for i = 1, #letters do
if(letters[i]==" ") then
px = px + 10
-- print(posX[i])
-- table.remove(posX, posX[i])
else
posX[i] = px
containers[i] = display.newImage( "Round1_blue_tileEnlarged 40x40.png", posX[i],posY )
px = px + 40
end
end
As you can see, I am checking for a space, e.g. if "batman begins" were the title. I have no problem if the title is a single word, but the space adds another element to my array, which causes an error when I place an object in my containers. You can see in the 'if' that I just widen the gap for a space, but I don't want the space to become an element of my table posX.
I am not sure I understand your question well, but if I do, here is your problem: you are using i as the index into posX, but i is incremented by the for loop even for spaces. That results in holes in the posX and containers tables.
You can fix that in several ways, here is a trivial one:
posX = {}
posY = 124
px = 10
containers = {}
local j = 1
for i = 1,#letters do
if(letters[i]==" ") then
px = px + 10
else
posX[j] = px
containers[j] = display.newImage( "Round1_blue_tileEnlarged 40x40.png", posX[j],posY )
px = px + 40
j = j + 1
end
end
You could also index with #posX + 1 instead of maintaining j.