-liquid-rescale in GraphicsMagick, or other seam-carving command-line tools

I'm using GraphicsMagick (and have several bindings that require GM). Now I have to rescale images with the seam carving algorithm, which is available in ImageMagick via the -liquid-rescale option but is missing in GM (isn't it?). Is there any way to install both GM and IM without conflicts (on Ubuntu 12.04), or are there any other command-line tools that can perform seam carving / liquid rescaling?

You can build a very simple script with Python and the scikit-image implementation.
Additionally, many tools like this are available on GitHub.
Just as an example:
from skimage import data, transform, util
from skimage import filters, color
from matplotlib import pyplot as plt

# Load a sample image and convert it to floats in [0, 1]
img = data.rocket()
img = util.img_as_float(img)

# Energy map: Sobel gradient magnitude of the grayscale image
eimg = filters.sobel(color.rgb2gray(img))

# Remove 200 vertical seams
out = transform.seam_carve(img, eimg, 'vertical', 200)

plt.title('Resized using Seam Carving')
plt.imshow(out)
plt.show()
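If a command-line tool is what you need, the snippet above can be wrapped in a small script. A hedged sketch follows; the script name and arguments are made up, and it assumes an older scikit-image release that still ships transform.seam_carve:

# seam_carve_cli.py -- hypothetical command-line wrapper around the example above
import argparse

from skimage import color, filters, io, transform, util

def main():
    parser = argparse.ArgumentParser(description="Seam-carve an image.")
    parser.add_argument("input")
    parser.add_argument("output")
    parser.add_argument("--mode", choices=["horizontal", "vertical"], default="vertical")
    parser.add_argument("--seams", type=int, default=100, help="number of seams to remove")
    args = parser.parse_args()

    # Same pipeline as the example: float image, Sobel energy map, seam carving
    img = util.img_as_float(io.imread(args.input))
    energy = filters.sobel(color.rgb2gray(img))
    out = transform.seam_carve(img, energy, args.mode, args.seams)
    io.imsave(args.output, util.img_as_ubyte(out))

if __name__ == "__main__":
    main()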

Related

Big issue reading a large 16bit grayscale PNG using Python

I have a big issue trying to convert a large (7 MB) 16-bit scientific PNG image to JPG in Python, in order to compress it and check for any compression artifacts.
The original image can be found at:
https://postimg.cc/p5PQG8ry
Reading other answers here, I have tried Pillow and OpenCV without any success; the only thing I obtain is a white sheet. What am I doing wrong?
The commented line was an attempt based on "Read 16-bit PNG image file using Python", but it does not seem to work for me and generates a data type error.
import numpy as np
from PIL import Image
import cv2
image = cv2.imread('terzafoto.png', -cv2.IMREAD_ANYDEPTH)
cv2.imwrite('terza.jpg', image)
im = Image.open('terzafoto.png').convert('RGB')
im.save('terzafoto.jpg', format='JPEG', quality=100)
#im = Image.fromarray(np.array(Image.open('terzafoto.jpg')).astype("uint16")).convert('RGB')
Thanks to Dan Masek I was able to find the error in my code: I was not correctly converting the data from 16 bit to 8 bit.
Here is the updated code with the solution for both OpenCV and Pillow.
import numpy as np
from PIL import Image
import cv2

# Pillow: scale the 16-bit values down to 8 bits (integer division by 256), then save as JPEG
im = Image.fromarray((np.array(Image.open('terzafoto.png')) // 256).astype("uint8")).convert('RGB')
im.save('PIL100.jpg', format='JPEG', quality=100)

# OpenCV: read the PNG at its native 16-bit depth, convert to 8 bits, save as JPEG
img = cv2.imread('terzafoto.png', cv2.IMREAD_ANYDEPTH)
cv2.imwrite('100.jpeg', np.uint8(img // 256), [int(cv2.IMWRITE_JPEG_QUALITY), 100])
The JPEG quality factor can be set according to your needs; 100 gives the highest quality (least compression), though JPEG remains a lossy format.
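As a possible variant (not from the original answer): if the 16-bit data does not span the full 0-65535 range, stretching to the actual min/max before converting to 8 bits can keep more visible detail than a plain division by 256. A hedged sketch with OpenCV:

import numpy as np
import cv2

# Read at native 16-bit depth, stretch the actual value range to 0..255, then save
img16 = cv2.imread('terzafoto.png', cv2.IMREAD_ANYDEPTH)
img8 = cv2.normalize(img16, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite('stretched.jpg', img8, [int(cv2.IMWRITE_JPEG_QUALITY), 95])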

Find coordinates of high frequency centers after plotting fftshifted image with Python

I am trying to write code to detect Moiré patterns in images. I am quite new to image processing in Python, so please excuse me if the solution is trivial.
My approach is to use the scipy fftshift function to distinguish between Moiré and non-Moiré images (see below). Moiré images have several high-frequency centers, whereas normal images only have one center.
I would like to get the coordinates of these "centers", but I don't know how to do it exactly.
I am happy about any suggestion!
Code:
import numpy as np
from scipy.fftpack import fft2, fftshift
import imageio
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

# Load the two test images (paths are placeholders)
moire = imageio.imread(imgpath1)
nonMoire = imageio.imread(imgpath2)

# Convert to grayscale using the standard luminance weights
moire = np.dot(moire[..., :3], [0.2989, 0.5870, 0.1140])
nonMoire = np.dot(nonMoire[..., :3], [0.2989, 0.5870, 0.1140])

# 2D FFT, then shift the zero-frequency component to the centre
noisy_gray_fft = fft2(moire)
orig_gray_fft = fft2(nonMoire)
orig_gray_fft_shift = fftshift(orig_gray_fft)
noisy_gray_fft_shift = fftshift(noisy_gray_fft)

# Show the log-scaled magnitude spectra side by side
plt.subplot(121)
plt.imshow(np.abs(noisy_gray_fft_shift), cmap='gray', norm=LogNorm(vmin=5))
plt.subplot(122)
plt.imshow(np.abs(orig_gray_fft_shift), cmap='gray', norm=LogNorm(vmin=5))
plt.show()
(Images attached to the original post: the non-Moiré original, the Moiré original, and the magnitude spectra after FFT.)
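One possible way to get the coordinates of those bright "centers" (a hedged sketch, not from the original thread) is to threshold the log-magnitude spectrum and pick local maxima with skimage.feature.peak_local_max; the min_distance and threshold_rel values are illustrative and would need tuning per image:

import numpy as np
from skimage.feature import peak_local_max

# Log-magnitude of the shifted spectrum (same array as plotted above)
mag = np.log1p(np.abs(noisy_gray_fft_shift))

# (row, col) coordinates of local maxima that are bright and well separated
centers = peak_local_max(mag, min_distance=20, threshold_rel=0.6)
print(centers)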

scikit-learn.impute isn't being imported from Imputer via Spyder using the code from Machine Learning A-Z tutorial

The code that I copied word for word from the Machine Learning A-Z™: Hands-On Python & R In Data Science tutorial course isn't working. I am using Python 3.7 and have installed the scikit-learn package in my environment. I have tried looking for a package that provides sklearn, but I can't find anything. It is giving me this error.
I am running my environment through Anaconda.
ImportError: cannot import name 'Imputer' from 'sklearn.preprocessing' (C:\Users\vygan\.conda\envs\env\lib\site-packages\sklearn\preprocessing\__init__.py)
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Data.csv')
X = pd.DataFrame(dataset.iloc[:, :-1].values)
y = pd.DataFrame(dataset.iloc[:, 3].values)
# Taking care of missing data
from sklearn.preprocessing import Imputer
imputer = Imputer(missing_values = 'NaN', strategy = 'mean', axis = 0)
imputer = imputer.fit(X[:, 1:3])
X[:, 1:3] = imputer.transform(X[:, 1:3])
It moved permanently from the preprocessing module to the impute module; you can import it like this:
from sklearn.impute import SimpleImputer
It works in much the same way.
If it doesn't work, you should uninstall scikit-learn with pip and then install it again;
it may not have installed properly the first time.
It doesn't have the axis parameter anymore, but you can easily handle that by selecting the pandas DataFrame column by header, like this:
si = SimpleImputer()
si.fit(dataset[["headername"]])
There is a strategy parameter that lets you choose between "mean", "most_frequent", "median" and "constant".
But there is another imputer that I like more:
from sklearn.impute import KNNImputer
which will impute missing values with an average of the k nearest neighbors.
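A minimal sketch of KNNImputer usage (the toy array and n_neighbors value are just illustrative):

import numpy as np
from sklearn.impute import KNNImputer

X = [[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0], [4.0, 5.0]]
imputer = KNNImputer(n_neighbors=2)
# The NaN is replaced by the mean of the corresponding feature in its 2 nearest rows
print(imputer.fit_transform(X))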
A more complete answer:
Imputer (https://sklearn.org/modules/generated/sklearn.preprocessing.Imputer.html)
can be found only in versions 0.19.1 and below.
SimpleImputer appeared in later versions, and this is what you need.
Try to install the latest version:
pip install -U scikit-learn # or using conda
And then use:
from sklearn.impute import SimpleImputer
Source: https://github.com/mindsdb/lightwood/issues/75
Your code works fine for me. Which sklearn version do you have?
import sklearn
sklearn.__version__
'0.21.3'
You can upgrade packages with conda in the following way:
How to upgrade scikit-learn package in anaconda
I had faced the same problem: the module was changed from preprocessing to impute, and the class was changed from Imputer to SimpleImputer.
I changed my code as follows:
from sklearn.impute import SimpleImputer
simp = SimpleImputer(missing_values=np.nan, strategy='mean')
simp = simp.fit(X.iloc[:, 1:3])
X.iloc[:, 1:3] = simp.transform(X.iloc[:, 1:3])

Does sklearn clustering output differs due to machine?

I am using the sklearn AffinityPropagation clustering algorithm. The output of the clustering algorithm on my 4-core machine is different from what is generated on a typical server machine. Can someone suggest a method so that I can get the same output on both systems?
I am using the same feature vectors on both machines.
Output on my machine is cluster0: [1,2,3], cluster1: [4,5,6], but on the server it is cluster0: [1,2], cluster1: [3,4], cluster2: [5].
from keras.applications.xception import Xception
from keras.applications.xception import preprocess_input
from keras.models import Model
from sklearn.cluster import AffinityPropagation
import numpy as np
import cv2
import glob

# Xception feature extractor: use the output of the global average pooling layer
base_model = Xception(weights=model_path)
base_model = Model(inputs=base_model.input, outputs=base_model.get_layer('avg_pool').output)

files = glob.glob("*.jpg")
image_vector = []
for f in files:
    img = cv2.imread(f)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # OpenCV loads BGR; Xception expects RGB
    img = cv2.resize(img, (299, 299))            # Xception's expected input size
    img = preprocess_input(np.expand_dims(img.astype('float32'), axis=0))
    temp_vector = base_model.predict(img)[0]     # one feature vector per image
    image_vector.append(temp_vector)

image_vector = np.asarray(image_vector)
clustering = AffinityPropagation()
clustering.fit(image_vector)
Packages:
scikit-learn 0.20.3
sklearn 0.0
tensorflow 1.12.0
keras 2.2.4
opencv-python
Machine 1: 4 cores, 8 GB RAM
Machine 2: 7 cores, 16 GB RAM
Results on different machines can be different when running algorithms that are not deterministic.
I suggest that you fix the random seed of NumPy and the random seed of Python if you want to be able to reproduce results across machines for such algorithms.
The Python random seed can be fixed with random.seed(42) (or any other integer).
The NumPy random seed can be fixed with np.random.seed(12345) (or any other integer).
scikit-learn and Keras use NumPy's random number generator, so the second option by itself could solve your issue.
This answer assumes that all library versions are the same on both systems.
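A minimal sketch of what that could look like, assuming image_vector is built as in the question (the seed values are arbitrary):

import random
import numpy as np
from sklearn.cluster import AffinityPropagation

# Fix both random number generators before any model code runs
random.seed(42)        # Python's built-in RNG
np.random.seed(12345)  # NumPy's RNG, used by scikit-learn and Keras

clustering = AffinityPropagation()
clustering.fit(image_vector)  # image_vector built as in the question
print(clustering.labels_)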

How to run PCA with dask_ml. I am getting an error, "This function (tsqr) supports QR decomposition in the case of tall-and-skinny matrices"?

I want to perform dimensionality reduction on data with around 3000 rows and 6000 columns. Here the number of observations (n_samples) is smaller than the number of features (n_columns). I am not able to achieve the result using dask-ml, whereas the same is possible with scikit-learn. What modifications do I need to make to my existing code?
#### dask_ml
from dask_ml.decomposition import PCA
from dask_ml import preprocessing
import dask.array as da
import numpy as np
train = np.random.rand(3000,6000)
train = da.from_array(train,chunks=(100,100))
complete_pca = PCA().fit(train)
#### scikit learn
from sklearn.decomposition import PCA
from sklearn import preprocessing
import numpy as np
train = np.random.rand(3000,6000)
complete_pca = PCA().fit(train)
The PCA algorithm in Dask-ML is only designed for tall-and-skinny matrices. You could try using the raw SVD algorithms in dask.array (a sketch is below). Also, with a 3000x6000 matrix you can probably just use a single machine.
Adding something like Dask-ML for a problem of this size might be more complexity than you need. If scikit-learn works for you, then I would stick with that.
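Here is a hedged sketch (not from the original answer) of the dask.array route using da.linalg.svd_compressed, which does not require tall-and-skinny chunks; the number of components k=10 is just illustrative:

import dask.array as da
import numpy as np

train = da.from_array(np.random.rand(3000, 6000), chunks=(1000, 1000))

# PCA is an SVD of the mean-centered data
train_centered = train - train.mean(axis=0)

# Randomized / compressed SVD keeps only the top k singular triplets
u, s, vT = da.linalg.svd_compressed(train_centered, k=10)

components = vT.compute()   # principal axes, shape (10, 6000)
scores = (u * s).compute()  # data projected onto the components, shape (3000, 10)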
