The script for this page is shown below. The server's Imagick module info reports:
Imagick classes: Imagick, ImagickDraw, ImagickPixel, ImagickPixelIterator
ImageMagick version: ImageMagick 6.5.4-7 2014-02-10 Q16
error_reporting(E_ALL);
ini_set("display_errors", 1);

// Load the source image and scale it to a 200px-wide thumbnail with a white border.
$im = new Imagick("/home/palirsin/public_html/imagick/ie9.png");
$im->thumbnailImage(200, null);
$im->borderImage(new ImagickPixel("white"), 5, 5);

// Build the reflection: a flipped copy faded out with a transparent-to-black gradient.
$reflection = $im->clone();
$reflection->flipImage();
$gradient = new Imagick();
$gradient->newPseudoImage($reflection->getImageWidth() + 10, $reflection->getImageHeight() + 10, "gradient:transparent-black");
$reflection->compositeImage($gradient, Imagick::COMPOSITE_OVER, 0, 0);
$reflection->setImageOpacity(0.3);

// Compose the original and its reflection onto a black canvas.
$canvas = new Imagick();
$width = $im->getImageWidth() + 40;
$height = ($im->getImageHeight() * 2) + 30;
$canvas->newImage($width, $height, new ImagickPixel("black"));
$canvas->setImageFormat("png");
$canvas->compositeImage($im, Imagick::COMPOSITE_OVER, 20, 10);
$canvas->compositeImage($reflection, Imagick::COMPOSITE_OVER, 20, $im->getImageHeight() + 10);

header("Content-Type: image/png");
echo $canvas;
The script produces this picture:
https://www.alirsin.com/imagick/info.php
But the correct picture should look like the one below:
http://www.irfansahin.com/imagick/info.php
What is wrong with the script?
Your code looks fine and gives a result similar to what you are expecting on my machine, which means your problem is almost certainly caused by a buggy version of ImageMagick.
ImageMagick 6.5.4-7 was released on 2009-07-30, which is rather ancient in terms of bug fixes. I'd strongly recommend upgrading to the latest version of ImageMagick that you can, both to make your code work and to address several security issues.
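If you want to double-check which ImageMagick library the Imagick extension on your server is actually linked against, a minimal sketch using Imagick::getVersion() would be:

// Print the ImageMagick library version the Imagick extension was built against,
// plus the version of the Imagick extension itself.
$version = Imagick::getVersion();
echo $version['versionString'], "\n"; // e.g. "ImageMagick 6.5.4-7 ... Q16"
echo 'Imagick extension: ', phpversion('imagick'), "\n";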
I recently upgraded from OpenCV 3.4.5 to OpenCV 4.2.0.
Before, I followed this stitching example: https://github.com/opencv/opencv/blob/5131619a1a4d1d3a860b5da431742cc6be945332/samples/cpp/stitching_detailed.cpp (particularly line 480). After upgrading, I altered the code to align more with this newer example: https://github.com/opencv/opencv/blob/master/samples/cpp/stitching_detailed.cpp (note line 481).
The problem is that with this new computeImageFeatures function I am getting fewer detected features. The older code with the same images gave me 1400+ features, but computeImageFeatures gives me exactly 500 features per image. Any ideas how to "fix" this? I believe it also causes the "Bundle Adjuster" to fail later.
According to the documentation of cv::ORB::create, the default value of the nfeatures argument is 500.
The first argument is nfeatures; you can set it to a greater number, such as 2000.
Here are the constructor arguments:
static Ptr<ORB> cv::ORB::create (int nfeatures = 500,
float scaleFactor = 1.2f,
int nlevels = 8,
int edgeThreshold = 31,
int firstLevel = 0,
int WTA_K = 2,
int scoreType = ORB::HARRIS_SCORE,
int patchSize = 31,
int fastThreshold = 20
)
Try modifying:
if (features_type == "orb")
{
finder = ORB::create();
}
to
if (features_type == "orb")
{
finder = ORB::create(2000);
}
In case you are not using ORB but another type of features, read the documentation of its constructor.
I assume all feature types have a similar limiting argument.
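For example, a minimal sketch (assuming OpenCV 4.x and hypothetical image file names) that raises the cap and prints how many keypoints are actually detected per image:

#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/stitching/detail/matchers.hpp>

int main()
{
    // Hypothetical input files; replace with your own images.
    std::vector<cv::Mat> images = { cv::imread("img1.jpg"), cv::imread("img2.jpg") };

    // ORB::create() defaults to nfeatures = 500; ask for up to 2000 keypoints instead.
    cv::Ptr<cv::Feature2D> finder = cv::ORB::create(2000);

    std::vector<cv::detail::ImageFeatures> features(images.size());
    cv::detail::computeImageFeatures(finder, images, features);

    for (size_t i = 0; i < features.size(); ++i)
        std::cout << "image " << i << ": " << features[i].keypoints.size() << " keypoints\n";
    return 0;
}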
I have an add-on that works correctly only up to Firefox 47. I know that with e10s in Firefox 48 it has some compatibility problems. Here is a brief list of the lines of code I think are affected by the browser's new multiprocess model:
1. let { Cc, Ci } = require('chrome');
2. const { Cu } = require("chrome");
3. require("sdk/tabs").on("ready", logURL);
4. Cu.import("resource://gre/modules/FileUtils.jsm");
5. const {TextDecoder, TextEncoder, OS} = Cu.import("resource://gre/modules/osfile.jsm", {});
6. file = FileUtils.getFile("Home", [".cp.txt"]); // reopen the file just saved
7. var txt = "";
var fstream = Cc["@mozilla.org/network/file-input-stream;1"].createInstance(Ci.nsIFileInputStream);
var cstream = Cc["@mozilla.org/intl/converter-input-stream;1"].createInstance(Ci.nsIConverterInputStream);
fstream.init(file, -1, 0, 0);
cstream.init(fstream, "UTF-8", 0, 0);
let str = {};
let read = 0;
do {
read = cstream.readString(0xffffffff, str); // read as much as we can and put it in str.value
txt += str.value;
} while (read != 0);
cstream.close(); // this closes fstream
// use 0x02 | 0x10 to open the file for appending
// save the domain option in the file
foStream.init(file, 0x02 | 0x08 | 0x20, 0666, 0);
converter.init(foStream, "UTF-8", 0, 0);
var sEP = txt + '\n' + 'h' + '\n'; // encrypt new path
converter.writeString(sEP);
converter.close(); // this closes foStream
console.log('saved h');
}
I need to know, first of all, whether all of these elements are really problematic with the new Firefox (I am fairly sure 6 and 7 are not compatible, since XUL and XPCOM are obsolete and run on the same thread, but I am less sure about the other lines), and, finally, whether there are replacement constructs in version 48 that solve the same problems (input/output and so on). In particular, the tabs mechanism (reading the URL of a tab) is essential to the add-on. Thanks for the help.
None of these issues is e10s-related; they are all ES6 and XPCOM questions.
If you use this in a frame script, then it is an e10s question. However, I avoid using XPCOM in frame scripts; try to use messaging from the frame script to the bootstrap script, such as in this example: https://github.com/Noitidart/CommPlayground
Re 1 and 2: if you are not using Cu and Ci in require'd scripts, let and const are fine.
Re 7: Don't do it this way; it will be deprecated soon, and it is not main-UI friendly (it can lock the UI up). Use OS.File.
Re 4 and 6: I recommend doing Services.dirsvc.get('Home', Ci.nsIFile).path;. This is XPCOM, but it uses the common Services.jsm module, which is less likely to be deprecated. Also, Services.dirsvc.get caches the files, so it's much faster than FileUtils. However, ideally you should use OS.Path.join and OS.Constants.Path, which come when you import osfile.jsm. So you would do OS.Path.join(OS.Constants.Path.homeDir, '.cp.txt') - try to avoid as much XPCOM as possible (see the sketch after this list).
Re 3: This is fine.
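For points 7 and 4/6, a minimal sketch of the read/append cycle rewritten with OS.File and OS.Path (assuming the same require("chrome") access as in your list; the .cp.txt name is taken from the question):

const { Cu } = require("chrome");
const { OS } = Cu.import("resource://gre/modules/osfile.jsm", {});

let path = OS.Path.join(OS.Constants.Path.homeDir, ".cp.txt");

// Read the whole file as UTF-8 text; OS.File is asynchronous, so the main UI is not blocked.
OS.File.read(path, { encoding: "utf-8" }).then(function (txt) {
  // Append the new line and write the file back atomically.
  let out = txt + "\n" + "h" + "\n";
  return OS.File.writeAtomic(path, out, { encoding: "utf-8", tmpPath: path + ".tmp" });
}).then(function () {
  console.log("saved h");
}, function (err) {
  console.error(err);
});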
Situation: I am trying to get a point cloud with pcl::AdaptiveCostSOStereoMatching, which uses two rectified images (the pictures are fine).
I used these tutorials to learn how to do this:
First tutorial
Second tutorial
Error: the program crashes at runtime when calling the "compute" method of AdaptiveCostSOStereoMatching.
Question: how do I correctly pass images to the "compute" method?
I tried:
1) Images converted by png2pcd
(command line: "png2pcd.exe in.png out.pcd")
2) Images converted from cv::Mat with the function below
But no luck.
Function which converts cv::Mat to pcl::PointCloud
void MatToPointCloud(Mat& mat, pcl::PointCloud<RGB>::Ptr cloud)
{
int width = mat.cols;
int height = mat.rows;
pcl::RGB val;
val.r = 0; val.g = 0; val.b = 0;
for (int i = 0; i < mat.rows; i++)
for (int j = 0; j < mat.cols; j++)
{
auto point = mat.at<Vec3b>(i, j);
//std::cout << j << " " << i << "\n";
val.b = point[0];
val.g = point[1];
val.r = point[2];
cloud->at(j, i) = val;
}
}
pcl::AdaptiveCostSOStereoMatching (compute)
// Input
Mat leftMat, rightMat;
leftMat = imread("left.png");
rightMat = imread("right.png");
int width = leftMat.cols;
int height = rightMat.rows;
pcl::RGB val;
val.r = 0; val.g = 0; val.b = 0;
pcl::PointCloud<pcl::RGB>::Ptr left_cloud(new pcl::PointCloud<pcl::RGB>(width, height, val));
pcl::PointCloud<pcl::RGB>::Ptr right_cloud(new pcl::PointCloud<pcl::RGB>(width, height, val));
MatToPointCloud(leftMat, left_cloud);
MatToPointCloud(rightMat, right_cloud);
// Calculation
pcl::AdaptiveCostSOStereoMatching stereo;
stereo.setMaxDisparity(60);
//stereo.setXOffest(0); // for some reason this method is not recognized
stereo.setRadius(5);
stereo.setSmoothWeak(20);
stereo.setSmoothStrong(100);
stereo.setGammaC(25);
stereo.setGammaS(10);
stereo.setRatioFilter(20);
stereo.setPeakFilter(0);
stereo.setLeftRightCheck(true);
stereo.setLeftRightCheckThreshold(1);
stereo.setPreProcessing(true);
stereo.compute(*left_cloud, *right_cloud); // <-- CRASHING THERE
stereo.medianFilter(4);
pcl::PointCloud<pcl::PointXYZRGB>::Ptr out_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
stereo.getPointCloud(318.11220, 224.334900, 368.534700, 0.8387445, out_cloud, left_cloud);
Error information:
Output log: HEAP[App.exe]:
Heap block at 0000006B0F828460 modified at 0000006B0F8284A8 past requested size of 38
App.exe has triggered a breakpoint.
[Image: left_cloud (the right cloud looks like left_cloud)]
Mini question: if AdaptiveCostSOStereoMatching really allows building a point cloud from two images, how does ACSSM do this without intrinsic and extrinsic parameters?
Problem: I had downloaded and installed an old version of PCL that did not include the stereo module.
After that, I downloaded the stereo module from another PCL package and added that library to my PCL installation, and it worked incorrectly.
Solution: I compiled PCL 1.8 myself and my program is OK now.
OS: Windows
IDE: MSVS 12 2013 x64
If you try to compile PCL yourself, these links can help you:
Official-tutorial-1
Official-tutorial-2
Good help with FLANN and VTK
Example to verify installation
I have already been looking for hours, but I can't find the problem.
I get the following error when I want to stitch two images together:
OpenCV Error: assertion failed (y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0])) in unknown function...
This is the code (pano.jpg was already stitched together in a previous run of the algorithm, where the same algorithm did work...):
cv::Mat img1 = imread("input2.jpg");
cv::Mat img2 = imread("pano.jpg");
std::vector<cv::Mat> vectest;
vectest.push_back(img2);
vectest.push_back(img1);
cv::Mat result;
cv::Stitcher stitcher = cv::Stitcher::createDefault( false );
stitcher.setPanoConfidenceThresh(0.01);
detail::BestOf2NearestMatcher *matcher = new detail::BestOf2NearestMatcher(false, 0.001/*=match_conf*/);
detail::SurfFeaturesFinder *featureFinder = new detail::SurfFeaturesFinder(100);
stitcher.setFeaturesMatcher(matcher);
stitcher.setFeaturesFinder(featureFinder);
cv::Stitcher::Status status = stitcher.stitch( vectest, result );
You can find the images here:
pano.jpg: https://dl.dropbox.com/u/5276376/pano.jpg
input2.jpg: https://dl.dropbox.com/u/5276376/input2.jpg
Edit:
I compiled OpenCV 2.4.2 myself, but I still have the same problem...
The system crashes in the stitcher.cpp file on the following line:
blender_->feed(img_warped_s, mask_warped, corners[img_idx]);
In this feed function it crashes at these lines:
int y_ = y - y_tl;
const Point3_<short>* src_row = src_pyr_laplace[i].ptr<Point3_<short> >(y_);
Point3_<short>* dst_row = dst_pyr_laplace_[i].ptr<Point3_<short> >(y);
And finally this assertion in mat.hpp:
template<typename _Tp> inline _Tp* Mat::ptr(int y)
{
CV_DbgAssert( y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0]) );
return (_Tp*)(data + step.p[0]*y);
}
Strange that everything works fine for some people here...
I am stitching images now too, but I am not using the high-level Stitcher functionality; instead I code every step myself with OpenCV 2.4.2. As far as I know, you could try running SurfFeaturesFinder first and BestOf2NearestMatcher second yourself. Just a try, good luck!
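A rough sketch of those two steps in isolation (assuming OpenCV 2.4.x built with the nonfree module, since SurfFeaturesFinder needs SURF), which at least shows how many keypoints and how much pairwise confidence you get before the blending stage:

#include <iostream>
#include <vector>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/stitching/detail/matchers.hpp>

int main()
{
    std::vector<cv::Mat> images;
    images.push_back(cv::imread("pano.jpg"));
    images.push_back(cv::imread("input2.jpg"));

    // Step 1: find features (hessian threshold 100, as in the question).
    cv::detail::SurfFeaturesFinder finder(100);
    std::vector<cv::detail::ImageFeatures> features(images.size());
    for (size_t i = 0; i < images.size(); ++i)
    {
        finder(images[i], features[i]);
        features[i].img_idx = static_cast<int>(i);
        std::cout << "image " << i << ": " << features[i].keypoints.size() << " keypoints\n";
    }
    finder.collectGarbage();

    // Step 2: match image pairs (match_conf 0.001, as in the question).
    cv::detail::BestOf2NearestMatcher matcher(false, 0.001f);
    std::vector<cv::detail::MatchesInfo> pairwise_matches;
    matcher(features, pairwise_matches);
    matcher.collectGarbage();

    for (size_t i = 0; i < pairwise_matches.size(); ++i)
        std::cout << "pair " << i << ": confidence " << pairwise_matches[i].confidence << "\n";
    return 0;
}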
I have a number of images that are stored as blob data in my database.
I am aware this isn't a good idea, but it's what I'm using.
I have the following code in my Peer class:
public function getImagesPath()
{
$file_srcs = false;
$fp = $this->getPhoto()->getBlobData();
if ($fp !== null)
{
$file = stream_get_contents($fp);
$file_srcs = '/uploads/gallery/'.$this->getId().'.jpg';
}
return $file_srcs;
}
I then call this in my template, like so:
$path = $item->getImagesPath();
if ($path)
{
echo '<img src="'.$path.'" alt="Thumbnail for '.$photo->getName().'" width="153" height="153" />';
}
Now this works well, but I have some images that are square and others that are rectangular.
Giving them a fixed width/height in the img tag distorts some of them.
Is there any way in which I could resize/crop the images before they are displayed?
Thanks
sfThumbnailPlugin is what I've used on a number of projects and it is pretty awesome. There is an older version for Symfony 1.0 if that's what you're using. By default it uses GD, but you can have it use ImageMagick and do some pretty cool things with it.
You can probably use imagecreatefromstring and imagecopyresampled. This is code that I use, which I've changed to work with your blob. It also adds a white background if the original width/height ratio doesn't match the destination image size.
static function CreateThumbnailFromBlob($blobData, $dstWidth = 100.0, $dstHeight = 100.0){
    $oldImg = @imagecreatefromstring($blobData);
    if($oldImg){
        $realOldW = imagesx($oldImg);
        $realOldH = imagesy($oldImg);
        $destX = 0;
        $destY = 0;
        // Fit the source into the destination box while keeping its aspect ratio,
        // centring it on the unused axis.
        if($realOldH >= $realOldW && $realOldH > 0){
            $realY = $dstHeight;
            $realX = round($realY*$realOldW/$realOldH);
            $destX = round($dstWidth/2 - $realX/2);
        }else{
            $realX = $dstWidth;
            if($realOldW > 0)
                $realY = round($realX*$realOldH/$realOldW);
            else
                $realY = $dstHeight;
            $destY = round($dstHeight/2 - $realY/2);
        }
        $newImg = @imagecreatetruecolor($dstWidth, $dstHeight);
        $white = imagecolorallocate($newImg, 255, 255, 255);
        imagefill($newImg, 1, 1, $white);
        imagecopyresampled($newImg, $oldImg, $destX, $destY,
            0, 0, $realX, $realY, $realOldW, $realOldH);
        imagedestroy($oldImg);
        return $newImg;
    }
    return false; // the blob could not be decoded as an image
}
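A hypothetical usage sketch, assuming CreateThumbnailFromBlob sits in the same model class as getImagesPath(), that the uploads/gallery directory from your template is writable, and Symfony 1.x's sf_web_dir setting:

// Build a 153x153 thumbnail from the blob and save it under the path
// that getImagesPath() returns.
$fp = $this->getPhoto()->getBlobData();
if ($fp !== null) {
    $blobData = stream_get_contents($fp);
    $thumb = self::CreateThumbnailFromBlob($blobData, 153, 153);
    if ($thumb) {
        imagejpeg($thumb, sfConfig::get('sf_web_dir').'/uploads/gallery/'.$this->getId().'.jpg', 90);
        imagedestroy($thumb);
    }
}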
How are you adding images to the database?
If it is via an upload form, the best method would be to create a thumbnail of the appropriate size/dimensions using GD or another library and store it in a second blob column.
Otherwise, you can specify a single dimension (only width or only height) in the HTML and the picture will keep its proportions.
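For the upload-form option, a minimal GD sketch (hypothetical variable names; aspect-ratio handling is omitted here, see CreateThumbnailFromBlob above) for producing the thumbnail bytes that would go into the second blob column:

// Decode the uploaded image, scale it to 153x153, and capture the JPEG bytes
// so they can be stored in a second blob column.
$src = imagecreatefromstring($uploadedBytes);   // $uploadedBytes: raw image data from the form
$thumb = imagecreatetruecolor(153, 153);
imagecopyresampled($thumb, $src, 0, 0, 0, 0, 153, 153, imagesx($src), imagesy($src));
ob_start();
imagejpeg($thumb, null, 90);
$thumbnailBlob = ob_get_clean();                // store this string in the thumbnail blob column
imagedestroy($src);
imagedestroy($thumb);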