Building an Active Appearance Model - opencv

I'm trying to build an Active Appearance Model like in this guide.
But some of the comments sound abstract and incomprehensible to me. Could you upload the entire set of files needed to create a model, or post a link to them? Thanks, and sorry for my English!

When you download the aam project from Tim Cootes' website, it should have everything you need to get started.
http://www.isbe.man.ac.uk/~bim/software/am_tools_doc/
I recommend running the Windows binaries with "wine" on Linux; the Linux binaries depend on some outdated libraries that take some effort to install properly.
The project has a few key folders: images, points, models, win_bin. "images" holds the image files, and their corresponding .pts files go in the "points" folder.
The "models" folder contains the configuration .smd files you will use to build your model. An example .smd file looks as follows:
~/Documents/am_tools/models$ cat tim_face.smd
// List of images + parameters required to define type of model
// Note: List excludes images in the separate test set (tim_face_test.smd)
model_name: tim_face
model_dir: ./
parts_file: face
image_dir: ../images/
points_dir: ../points/
shape_aligner: align_similar_2d
shape_modes: { min: 0 max: 30 prop: 0.98 }
tex_modes: { min: 0 max: 40 prop: 0.99 }
combined_modes: { min: 0 max: 30 prop: 0.99 }
params_limiter: mdpm_box_limits
{
sd_limits: 3
}
n_pixels: 10000
colour: Grey // Alternatives: Grey,RGB,...
// Texture Sampler can be tri_raw, tri_edge...
tex_sampler: vapm_triangle_sampler<vxl_byte>
tex_aligner: align_linear_1d
// shape_wts define how to compute relative scaling of shape & tex.
// shape_wts can be `EqualVar', `EqualEffect',...
shape_wts: EqualVar
// tex_model defines type of model to represent texture statistics, eg: pca, pca+haar1d
tex_model: pca
// Image Pyramid Builder can be gauss_byte, gauss_float, grad_float ...
pyr_builder: gauss_byte
points_pyr_builder: Same
max_im_pyr_levels: 5
// Levels of multi-res model to build :
min_level: 0
max_level: 4
// Details of points : images
training_set:
{
107_0764.pts : 107_0764.jpg
107_0766.pts : 107_0766.jpg
107_0779.pts : 107_0779.jpg
107_0780.pts : 107_0780.jpg
107_0781.pts : 107_0781.jpg
107_0782.pts : 107_0782.jpg
107_0784.pts : 107_0784.jpg
107_0785.pts : 107_0785.jpg
107_0786.pts : 107_0786.jpg
107_0787.pts : 107_0787.jpg
107_0788.pts : 107_0788.jpg
107_0789.pts : 107_0789.jpg
107_0790.pts : 107_0790.jpg
107_0791.pts : 107_0791.jpg
107_0792.pts : 107_0792.jpg
}
Just make sure that the listed .pts and .jpg files are present in your points and images folders. Next, run the am_build_apm.exe/am_build_aam.exe commands:
~/Documents/am_tools/win_bin$ wine am_build_apm ../models/tim_face.smd
This will create a tim_face.apm file in the win_bin folder. The created model can now be viewed by running:
~/Documents/am_tools/win_bin$ wine am_view_apm.exe
and opening the ~/Documents/am_tools/models/tim_face.smd file from the application menu.
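If you also want the combined active appearance model, am_build_aam should work the same way (a sketch, assuming it takes the same .smd argument as am_build_apm):
~/Documents/am_tools/win_bin$ wine am_build_aam ../models/tim_face.smd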
Now play with different parameters in the .smd file and try adding your own images with .pts files; I think the BioID database has a few hundred images annotated with .pts files.
http://www.bioid.com/index.php?q=downloads/software/bioid-face-database.html

Printing an image to a dye based application

I am learning about fluid dynamics (and Haxe) and have come across this awesome project, and I thought I would try to extend it to help me learn. A demo of the original project in action can be seen here.
So far, I have created a side menu of items containing different shapes. When the user clicks on one of the shapes and then clicks onto the canvas, the selected image should be imprinted onto the dye. The user will then move the mouse and explore the art, etc.
To try and achieve this I did the following:
import js.html.webgl.RenderingContext;

function imageSelection(): Void {
    document.querySelector('.myscrollbar1').addEventListener('click', function() {
        // twilight image clicked
        closeNav();
        reset();
        var image:js.html.ImageElement = cast document.querySelector('img[src="images/twilight.jpg"]');
        gl.current_context.texSubImage2D(cast fluid.dyeRenderTarget.writeToTexture, 0, Math.round(mouse.x), Math.round(mouse.y), RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);
        TWILIGHT = true;
    });
}
After this call, inside the update function, I have the following:
override function update( dt:Float ){
    time = haxe.Timer.stamp() - initTime;
    performanceMonitor.recordFrameTime(dt);
    // Smaller number creates a bigger ripple, was 0.016
    dt = 0.090; //#!
    // Physics: interaction
    updateDyeShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    mouseForceShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    // Step physics
    fluid.step(dt);
    particles.flowVelocityField = fluid.velocityRenderTarget.readFromTexture;
    if(renderParticlesEnabled){
        particles.step(dt);
    }
    // Below handles the cycling of colours once the mouse is moved;
    // the image should then be disrupted into the set dye colours.
}
However, although the project builds, I can't seem to get the image imprinted onto the canvas. I have checked the console log and I can see the following error:
WebGL: INVALID_ENUM: texSubImage2D: invalid texture target
Is it safe to assume that my cast for the first param is not allowed?
I have read that the texture target is the first parameter and INVALID_ENUM in particular means that one of the gl.XXX parameters are just flat out wrong for that particular function.
Looking through the file, writeToTexture is declared like so: public var writeToTexture (default, null):GLTexture;. writeToTexture is a wrapper around a regular WebGL handle.
I am using Haxe version 3.2.1 and Snow to build the project. writeToTexture is defined inside HaxeToolkit\haxe\lib\gltoolbox\git\gltoolbox\render.
writeToTexture in gltoolbox is a GLTexture. With snow and snow_web, this is defined in snow.modules.opengl.GL as:
typedef GLTexture = js.html.webgl.Texture;
So we're simply dealing with a js.html.webgl.Texture here, or WebGLTexture in native JS.
Which means that yes, this is definitely not a valid value for texSubImage2D()'s target, which is specified to take one of the gl.TEXTURE_* constants.
A GLenum specifying the binding point (target) of the active texture.
From this description it's obvious that the parameter isn't actually for the texture itself - it merely gives some info on how the active texture should be used.
The question then becomes how the "active" texture can be set. bindTexture() can be used for this.
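In this case that means binding the wrapped texture first and then passing a gl.TEXTURE_* constant as the target. A minimal sketch, reusing gl.current_context, fluid.dyeRenderTarget.writeToTexture, mouse and image from the question (TEXTURE_2D is an assumption about how the render target's texture was created):

var texture:js.html.webgl.Texture = cast fluid.dyeRenderTarget.writeToTexture;
// make the target texture the "active" one
gl.current_context.bindTexture(RenderingContext.TEXTURE_2D, texture);
// the first parameter is now a binding point, not the texture itself
gl.current_context.texSubImage2D(RenderingContext.TEXTURE_2D, 0,
    Math.round(mouse.x), Math.round(mouse.y),
    RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);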

Require json file dynamically in react-native (from thousands of files)

I have googled around and tried to find a solution, but haven't found one yet.
I know require() works only with static paths, so I'm looking for alternative ways to solve my problem. I found this answer, but it doesn't make sense for thousands of resources.
Please advise me on the best approach to handle such a case.
Background
I have thousands of json files containing app data, and I declared all the file paths like below:
export var SRC_PATH = {
    bible_version_inv: {
        "kjv-ot": "data/bibles/Bible_KJV_OT_%s.txt",
        "kjv-nt": "data/bibles/Bible_KJV_NT_%s.txt",
        "lct-ot": "data/bibles/Bible_LCT_OT_%s.txt",
        "lct-nt": "data/bibles/Bible_LCT_NT_%s.txt",
        "leb": "data/bibles/leb_%s.txt",
        "net": "data/bibles/net_%s.txt",
        "bhs": "data/bibles/bhs_%s.txt",
        "n1904": "data/bibles/na_%s.txt",
        .....
        "esv": "data/bibles/esv_%s.txt",
        .....
    },
    ....
As you can see, each file path contains '%s', which should be replaced with the right string depending on what the user selected (the sketch below illustrates this substitution).
For example, if the user selects the bible (abbreviation: "kjv-ot") and chapter 1, then the file named "data/bibles/Bible_KJV_OT_01.txt" should be imported.
I'm not very experienced with react-native, and I'm just wondering if there is another way to handle those thousands of resource files and require only one at a time, dynamically following the user's selection.
Any suggestions please.
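For illustration, the substitution described above is just a string replace on the configured template; a minimal sketch (the module name ./srcPath is hypothetical):

// srcPath.js (hypothetical module) exports the SRC_PATH object shown above
import { SRC_PATH } from './srcPath';

const template = SRC_PATH.bible_version_inv['kjv-ot']; // "data/bibles/Bible_KJV_OT_%s.txt"
const chapter = '01';                                  // zero-padded chapter chosen by the user
const path = template.replace('%s', chapter);          // "data/bibles/Bible_KJV_OT_01.txt"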
Instead of exporting a flat object, you could export a function that takes a parameter to help build out the paths, like this:
// fileInclude.js
export const generateSourcePath = (sub) => {
    return {
        bible_version_inv: {
            "kjv-ot": `data/bibles/Bible_KJV_OT_${sub}.txt`
        }
    };
};

// usingFile.js
const generation = require('./fileInclude.js');
const myFile = generation.generateSourcePath('mySub').bible_version_inv['kjv-ot'];
const requiredFile = require(myFile);
Then you would import (or require) this item into your project and execute generateSourcePath('mySub') to get the path you need.

class attribute is not nominal! Weka

I have downloaded a dataset from the UCI archive called mammographic mass data. I opened the file in Excel and then saved it as a .csv file. The attribute information for the data set is:
Attribute Information:
BI-RADS assessment: 1 to 5 (ordinal)
Age: patient's age in years (integer)
Shape: mass shape: round=1 oval=2 lobular=3 irregular=4 (nominal)
Margin: mass margin: circumscribed=1 microlobulated=2 obscured=3 ill-defined=4 spiculated=5 (nominal)
Density: mass density: high=1 iso=2 low=3 fat-containing=4 (ordinal)
Severity: benign=0 or malignant=1 (binomial)
I open the file in the Experiment Environment and try to run it; however, I get the following error message:
13:01:56: Started
13:01:56: Class attribute is not nominal!
13:01:56: Interrupted
13:01:56: There was 1 error
I have tried changing the attribute to class in the Explorer, but that has not worked. Any suggestions would be great :)
What you need is a filter, more specifically a Discretize filter, to preprocess your data.
For example, assuming ins is the Instances object where your data set is stored, the following code shows how to use the filter.
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Discretize;

Discretize filter = new Discretize();
filter.setOptions(...); // set options
filter.setInputFormat(ins);
ins = Filter.useFilter(ins, filter);
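For context, here is a minimal end-to-end sketch. The file name mammographic_masses.csv and the assumption that Severity is the last column are mine, not from the question; the class index is set only after filtering so that Discretize is free to process that column too:

import java.io.File;
import weka.core.Instances;
import weka.core.converters.CSVLoader;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Discretize;

public class DiscretizeExample {
    public static void main(String[] args) throws Exception {
        // Load the CSV export of the data set (file name is hypothetical)
        CSVLoader loader = new CSVLoader();
        loader.setSource(new File("mammographic_masses.csv"));
        Instances ins = loader.getDataSet();

        // Discretize all numeric attributes (default options: 10 equal-width bins)
        Discretize filter = new Discretize();
        filter.setInputFormat(ins);
        ins = Filter.useFilter(ins, filter);

        // Now mark the (nominal) Severity column as the class attribute
        ins.setClassIndex(ins.numAttributes() - 1);
        System.out.println(ins.classAttribute()); // should print a nominal attribute
    }
}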

Adding camera profile correction to dng_validate.exe [Adobe DNG SDK]

Using Lightroom I know how to apply a camera profile (*.dcp file) to my *.DNG image.
I would like to do the same in an application I'm writing, so I guess a good starting point would be to add this functionality to the dng_validate.exe application.
So I started by adding:
#include "dng_camera_profile.h"
Then added:
static dng_string gDumpDCP;
And added the following to the error print:
"-dcp <file> Load camera profile from <file>.dcp\"\n"
Then I added the function to read the dcp path from the command line:
else if (option.Matches("dcp", true))
{
    gDumpDCP.Clear();
    if (index + 1 < argc)
    {
        gDumpDCP.Set(argv[++index]);
    }
    if (gDumpDCP.IsEmpty() || gDumpDCP.StartsWith("-"))
    {
        fprintf(stderr, "*** Missing file name after -dcp\n");
        return 1;
    }
    if (!gDumpDCP.EndsWith(".dcp"))
    {
        gDumpDCP.Append(".dcp");
    }
}
Then I load the profile from disk [line 421]:
if (gDumpTIF.NotEmpty ())
{
    dng_camera_profile profile;
    if (gDumpDCP.NotEmpty())
    {
        dng_file_stream inStream(gDumpDCP.Get());
        profile.ParseExtended(inStream);
    }
    // Render final image.
    .... rest of code as it was
So how do I now use the profile data to correct the render and write the corrected image?
You need to add the profile to your negative with negative->AddProfile(profile);.
My project raw2dng does this (and more) and is available in source if you want to see an example. The profile is added here.
So after playing around for a couple of days, I have now found the solution. The negative can actually hold multiple camera profiles; negative->AddProfile(profile) just adds one more, and it won't be used if it's not the first profile! So we first need to clear the profiles and then add ours.
AutoPtr<dng_camera_profile> profile(new dng_camera_profile);
if (gDumpDCP.NotEmpty())
{
    negative->ClearProfiles();
    dng_file_stream inStream(gDumpDCP.Get());
    profile->ParseExtended(inStream);
    profile->SetWasReadFromDNG();
    negative->AddProfile(profile);
    printf("Profile count: \"%d\"\n", negative->ProfileCount()); // will be 1 now!
}
The next thing needed to get the image right is the correct white balance. This can be done in camera or afterwards. For my application with 4 different cameras, the results were best when applying the white balance correction afterwards, so I found 4 (Temperature, Tint) pairs using Lightroom.
The question was how to add these values in the dng_validate.exe program. I did it like this:
#include "dng_temperature.h"
if (gTemp != NULL && gTint != NULL)
{
dng_temperature temperature(gTemp, gTint);
render.SetWhiteXY(temperature.Get_xy_coord());
}
The resulting images are slightly different from the Lightroom result, but close enough. Also, the camera-to-camera differences are gone now! :)

Grails 2.3.1 and burning image service plugin

I have installed and correctly configured burning-image:0.5.1 on my Grails application.
I can upload images and retrieve them.
Now I want to retrieve a saved image and manipulate it.
The goal is to give the registered user a link where he can crop his original avatar whenever he wants.
Here is my configuration in Config.groovy for my User instance:
import pl.burningice.plugins.image.engines.scale.ScaleType

bi.User = [
    outputDir: 'upload/avatar',
    images: ['normal': [scale: [width: 800, height: 600, type: ScaleType.APPROXIMATE]],
             'small':  [scale: [width: 100, height: 100, type: ScaleType.ACCURATE]]]
]
Here is the field container in my User class:
import pl.burningice.plugins.image.ast.FileImageContainer

@FileImageContainer(field = 'avatar')
class User {
    /* ... */
}
I'm trying to retrieve and manipulate the 'normal' image in order to modify the 'small' one, allowing my user to crop his original image:
imageUploadService.save(user, {avatar, name ->
})
But I get the following error: Uploaded image is null.
I read the documentation and found a lot of advice on manipulating images when they are uploaded, before saving, but nothing about manipulating an existing image.
I'm definitely missing something but can't figure it out; any help is welcome.
