How do I create a dataset with multiple images in the same format as CIFAR-10? - machine-learning

I have 1750*1750 images that I would like to label and put into a file in the same format as CIFAR-10. I have seen a similar question before whose answer gave the following:
from PIL import Image
import numpy as np

label = [3]
im = Image.open(img)
im = np.array(im)
print(im)
r = im[:,:,0].flatten()
g = im[:,:,1].flatten()
b = im[:,:,2].flatten()
array = np.array(list(label) + list(r) + list(g) + list(b), np.uint8)
array.tofile("info.bin")
but it doesn't explain how to put multiple images into a single file. I have looked at CIFAR-10 and tried to append the arrays in the same way, but all I got was the following error:
E tensorflow/core/client/tensor_c_api.cc:485] Read less bytes than requested
Note that I am using Tensorflow to do my computations, and I have been able to isolate the problem from the data.

The CIFAR-10 binary format represents each example as a fixed-length record with the following format:
1-byte label.
1 byte per pixel for the red channel of the image.
1 byte per pixel for the green channel of the image.
1 byte per pixel for the blue channel of the image.
Assuming you have a list of image filenames called images, and a list of integers (less than 256) called labels corresponding to their labels, the following code would write a single file containing these images in CIFAR-10 format:
import numpy as np
from PIL import Image

with open(output_filename, "wb") as f:
    for label, img in zip(labels, images):
        label = np.array(label, dtype=np.uint8)
        f.write(label.tobytes())            # Write the 1-byte label.
        im = np.array(Image.open(img), dtype=np.uint8)
        f.write(im[:, :, 0].tobytes())      # Write red channel.
        f.write(im[:, :, 1].tobytes())      # Write green channel.
        f.write(im[:, :, 2].tobytes())      # Write blue channel.
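A quick way to sanity-check the resulting file (a "Read less bytes than requested" error typically means the file size is not a whole multiple of the record length the reader expects) is to read the records back with NumPy. This is just a sketch, assuming all images share one size; the 1750x1750 dimensions are taken from the question and should be adjusted to your data:
import numpy as np

height, width, channels = 1750, 1750, 3          # hypothetical, match your images
record_bytes = 1 + height * width * channels     # 1 label byte + one byte per pixel per channel

raw = np.fromfile(output_filename, dtype=np.uint8)
assert raw.size % record_bytes == 0, "file size is not a whole number of records"

records = raw.reshape(-1, record_bytes)
labels = records[:, 0]
# Each record stores the full red plane, then the green plane, then the blue plane.
images = records[:, 1:].reshape(-1, channels, height, width).transpose(0, 2, 3, 1)
print(labels.shape, images.shape)                # (N,) and (N, height, width, 3)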

Related

How to perform image augmentation for a sequence of images representing a sample

I want to know how to perform image augmentation for sequence image data.
The shape of my input to the model looks as below:
(None, 30, 112, 112, 3)
where 30 is the number of images present in one sample, 112*112 are the height and width, and 3 is the number of channels.
Currently I have 17 samples (17, 30, 112, 112, 3), which are not enough, so I want to do some sequence image augmentation so that I will have at least 50 samples as (50, 30, 112, 112, 3).
(Note: my dataset is not video; rather, it is sequences of images captured every 3 seconds. So we can say it is in the form of already extracted frames.)
The 17 samples, each having 30 sequence images, are stored in separate folders in a directory:
folder_1
folder_2
.
.
.
folder_17
Can you please let me know how to perform this data augmentation?
Here is an illustration of using the imgaug library for a single image:
# Reading an image using OpenCV
import cv2
import numpy as np

img = cv2.imread('flower.jpg')

# Appending the image 5 times to a list and converting to an array
images_list = []
for i in range(0, 5):
    images_list.append(img)
images_array = np.array(images_list)
The array images_array has shape (5, 133, 200, 3) => (number of images, height, width, number of channels)
Now our input is set. Let's do some augmentation:
# Import the 'imgaug' library
import imgaug as ia
import imgaug.augmenters as iaa

# Preparing a sequence of functions for augmentation
seq = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Crop(percent=(0, 0.1)),
    iaa.LinearContrast((0.75, 1.5)),
    iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.05*255), per_channel=0.5),
    iaa.Multiply((0.8, 1.2), per_channel=0.2)
], random_order=True)
Refer to the imgaug documentation for more augmentation functions
# passing the input to the Sequential function
images_aug = seq(images=images_array)
images_aug is an array that contains the augmented images
# Display all the augmented images
for img in images_aug:
    cv2.imshow('Augmented Image', img)
    cv2.waitKey()
Some augmented results:
You can extend the above for your own problem.
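For the sequence data in the question, one way to extend this (a sketch, not tested against your exact setup) is imgaug's to_deterministic(), which freezes the sampled parameters so that every frame of a sample receives the identical transformation; samples below is a hypothetical (17, 30, 112, 112, 3) uint8 array, and you would repeat the loop until you reach the 50 samples you need:
import numpy as np

augmented_samples = []
for sample in samples:                            # one sequence of 30 frames
    seq_det = seq.to_deterministic()              # freeze the random parameters for this sample
    # Augmenting frame by frame with the frozen augmenter applies the exact
    # same transformation to every frame of the sequence.
    aug_frames = np.stack([seq_det(image=frame) for frame in sample])
    augmented_samples.append(aug_frames)
augmented_samples = np.stack(augmented_samples)   # shape (17, 30, 112, 112, 3)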

Create jpg/png from encrypted image

I'd like to be able to convert/display an AES-256 encrypted image even if it appears to be garbage. I've read a number of things on SO that suggest removing the headers before encryption and then reattaching them afterward, so that even if the image looks nutty it still displays.
The point of this is that I want to see if it's possible to perform image classification on an image dataset encrypted with a known key. If I have a picture of a cat and I encrypt it with exactly the same key, then the result will generally be reproducible and result in an image that in some way equates to the original.
Excuse the lack of code; I didn't want to pollute the discussion with ideas I was considering, in order to get a proper critique from you lovely people. I would say I'm not an encryption expert, hence my asking for advice here.
There are many options, but I suggest following these guidelines:
Encrypt the image data, and not the image file.
In case the image is 100x100x3 bytes, encrypt the 30000 bytes (not the img.jpg file for example).
(The downside is that metadata is not saved as part of the encrypted image.)
Use lossless image file format to store the encrypted image (PNG file format for example, and not JPEG format).
A lossy format like JPEG is going to be irreversible.
Set the resolution of the encrypted image to the same resolution as the input image.
That way you don't need to store the image headers - the resolution is saved.
You may need to add padding, so that the size in bytes is a multiple of 16 (the AES block size).
I hope you know Python...
Here is a Python code sample that demonstrates the encoding and decoding procedures:
import cv2
import numpy as np
from Crypto.Cipher import AES
# https://stackoverflow.com/questions/61240967/image-encryption-using-aes-in-python
key = b'Sixteen byte key'
iv = b'0000000000000000'
# Read image to NumPy array - array shape is (300, 451, 3)
img = cv2.imread('chelsea.png')
# Pad zero rows in case number of bytes is not a multiple of 16 (just an example - there are many options for padding)
if img.size % 16 > 0:
    row = img.shape[0]
    pad = 16 - (row % 16)  # Number of rows to pad (4 rows)
    img = np.pad(img, ((0, pad), (0, 0), (0, 0)))  # Pad rows at the bottom - new shape is (304, 451, 3) - 411312 bytes.
    img[-1, -1, 0] = pad  # Store the pad value in the last element
img_bytes = img.tobytes() # Convert NumPy array to sequence of bytes (411312 bytes)
enc_img_bytes = AES.new(key, AES.MODE_CBC, iv).encrypt(img_bytes) # Encrypt the array of bytes.
# Convert the encrypted buffer to NumPy array and reshape to the shape of the padded image (304, 451, 3)
enc_img = np.frombuffer(enc_img_bytes, np.uint8).reshape(img.shape)
# Save the image - Save in PNG format because PNG is lossless (JPEG format is not going to work).
cv2.imwrite('encrypted_chelsea.png', enc_img)
# Decrypt:
################################################################################
key = b'Sixteen byte key'
iv = b'0000000000000000'
enc_img = cv2.imread('encrypted_chelsea.png')
dec_img_bytes = AES.new(key, AES.MODE_CBC, iv).decrypt(enc_img.tobytes())
dec_img = np.frombuffer(dec_img_bytes, np.uint8).reshape(enc_img.shape) # The shape of the encrypted and decrypted image is the same (304, 451, 3)
pad = int(dec_img[-1, -1, 0]) # Get the stored padding value
dec_img = dec_img[0:-pad, :, :].copy() # Remove the padding rows, new shape is (300, 451, 3)
# Show the decoded image
cv2.imshow('dec_img', dec_img)
cv2.waitKey()
cv2.destroyAllWindows()
Encrypted image:
Decrypted image:
Idea for identifying the encrypted image:
Compute a hash of the encrypted image, and store it in your database alongside the original image, the key and the iv.
When you have the encrypted image, compute the hash, and search for it in your database.
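A minimal sketch of that hash idea, reusing enc_img from the code above (SHA-256 is an arbitrary but reasonable choice here):
import hashlib

# Hash the encrypted pixel bytes rather than the PNG file, so container-level
# differences (metadata, compression settings) don't change the lookup key.
enc_hash = hashlib.sha256(enc_img.tobytes()).hexdigest()
print(enc_hash)  # store next to the original image, the key and the iv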
I'm writing this as an answer, although it isn't really one, because I'd like to show two pictures to demonstrate.
Both pictures were taken from my blog entry http://javacrypto.bplaced.net/g01-ecb-pinguin/ (German language).
The first picture shows the Tux penguin after encryption with AES in ECB mode:
The form still persists, and you can "imagine" what animal is shown.
The second picture was encrypted with AES in CBC mode and the output is looking like garbage:
The conclusion: if the picture was encrypted with a mode like CBC, CTR or GCM you will always get something like the second picture, even if you know the mode, key and initialization vector that was in use.
A visual comparison will not work, sorry.
To answer your question in the comment, "how would you display encrypted images in their encrypted form": you can't show them directly, because usually a picture has a header that gets encrypted as well, so this information is lost. The two "encrypted" pictures above were created by stripping off the header, encrypting only the picture data, and then prepending the header again.
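For completeness, here is a rough sketch of that header trick on an uncompressed BMP with PyCryptodome; the file names are hypothetical, the 16-byte key is reused from the earlier sample, and only whole 16-byte blocks are encrypted so the file keeps its original length and stays displayable:
from Crypto.Cipher import AES

key = b'Sixteen byte key'

with open('input.bmp', 'rb') as f:               # hypothetical uncompressed BMP
    data = f.read()

# The BMP header stores the pixel-data offset in bytes 10..13 (little-endian).
offset = int.from_bytes(data[10:14], 'little')
header, body = data[:offset], data[offset:]

# ECB has no IV; encrypt only a whole number of 16-byte blocks and keep the tail as-is.
n = (len(body) // 16) * 16
enc_body = AES.new(key, AES.MODE_ECB).encrypt(body[:n]) + body[n:]

with open('encrypted_view.bmp', 'wb') as f:      # header prepended again, image still opens
    f.write(header + enc_body)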

Extract text information from PDF files with different layouts - machine learning

I need assistance with an ML project I am currently trying to create.
I receive a lot of invoices from a lot of different suppliers - all in their own unique layout. I need to extract 3 key elements from the invoices. These 3 elements are all located in a table/line items for all the invoices.
The 3 elements are:
1: Tariff number (digit)
2: Quantity (always a digit)
3: Total line amount (monetary value)
Please refer to the screenshot below, where I have marked these fields on a sample invoice.
I started this project with a template approach based on regular expressions. This, however, was not scalable at all, and I ended up with tons of different rules.
I am hoping that machine learning can help me here - or maybe a hybrid solution?
The common denominator
In all of my invoices, despite the different layouts, each line item will always contain one tariff number. This tariff number is always 8 digits and is always formatted in one of the ways below:
xxxxxxxx
xxxx.xxxx
xx.xx.xx.xx
(Where "x" is a digit from 0 - 9).
Further, as you can see on the invoice, there is both a Unit Price and a Total Amount per line. The amount I need is always the higher of the two for each line.
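For illustration, a single regular expression can already cover those three tariff layouts; a small sketch (the invoice line below is made up):
import re

# 8 digits, 4+4 digits with one dot, or 2+2+2+2 digits with three dots.
TARIFF_RE = re.compile(r'\b(?:\d{8}|\d{4}\.\d{4}|\d{2}\.\d{2}\.\d{2}\.\d{2})\b')

line = "1  Cable assembly  85444290  3 pcs  86.31  258.93"   # hypothetical line
match = TARIFF_RE.search(line)
if match:
    print(match.group().replace('.', ''))                    # normalised to 8 digits: 85444290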
The output
For each invoice like the one above, I need the output for each line. This could for example be something like this:
{
    "line": "0",
    "tariff": "85444290",
    "quantity": "3",
    "amount": "258.93"
},
{
    "line": "1",
    "tariff": "85444290",
    "quantity": "4",
    "amount": "548.32"
},
{
    "line": "2",
    "tariff": "76109090",
    "quantity": "5",
    "amount": "412.30"
}
Where to go from here?
I am not sure whether what I am looking to do falls under machine learning and, if so, under which category. Is it computer vision? NLP? Named entity recognition?
My initial thought was to:
Convert the invoice to text. (The invoices are all text-based PDFs, so I can use something like pdftotext to get the exact textual values.)
Create custom named entities for quantity, tariff and amount
Export the found entities.
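For illustration only, the "custom named entities" step could be prototyped with spaCy's rule-based EntityRuler; the patterns below are hypothetical and certainly incomplete:
import spacy

nlp = spacy.blank("en")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([
    # Tariff: 8 digits, optionally dotted as 4.4 or 2.2.2.2
    {"label": "TARIFF", "pattern": [{"TEXT": {"REGEX": r"^(\d{8}|\d{4}\.\d{4}|\d{2}(\.\d{2}){3})$"}}]},
    # Monetary amount with two decimals
    {"label": "AMOUNT", "pattern": [{"TEXT": {"REGEX": r"^\d+\.\d{2}$"}}]},
    # Small integer quantity
    {"label": "QUANTITY", "pattern": [{"TEXT": {"REGEX": r"^\d{1,4}$"}}]},
])

doc = nlp("85444290 3 pcs 86.31 258.93")   # hypothetical line from pdftotext
print([(ent.text, ent.label_) for ent in doc.ents])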
However, I feel like I might be missing something.
Can anyone assist me in the right direction?
Edit:
Please see below for a few more examples of how an invoice table section can look:
Sample invoice #2
Sample invoice #3
Edit 2:
Please see below for the three sample images, without the borders/bounding boxes:
Image 1:
Image 2:
Image 3:
Here's an attempt using OpenCV; the idea is:
Obtain a binary image. We load the image, enlarge it using imutils.resize to help obtain better OCR results (see Tesseract improve quality), convert to grayscale, then apply Otsu's threshold to obtain a binary (1-channel) image.
Remove table grid lines and merge text. We create horizontal and vertical kernels and perform morphological operations to remove the grid lines, then morph close to combine adjacent text contours into a single contour. The idea is to extract each row as one ROI to OCR.
Extract row ROIs. We find contours and sort them top-to-bottom using imutils.contours.sort_contours, which ensures we iterate through the rows in the correct order. From there we iterate through the contours, extract each row ROI using NumPy slicing, OCR it with Pytesseract, and parse the data.
Here's the visualization of each step:
Input image
Binary image
Morph close
Visualization of iterating through each row
Extracted row ROIs
Output invoice data result:
{'line': '0', 'tariff': '85444290', 'quantity': '3', 'amount': '258.93'}
{'line': '1', 'tariff': '85444290', 'quantity': '4', 'amount': '548.32'}
{'line': '2', 'tariff': '76109090', 'quantity': '5', 'amount': '412.30'}
Unfortunately, I get mixed results when trying it on the 2nd and 3rd images. This method does not produce great results on the other images since the layouts of the invoices are all different. However, this approach shows that it's possible to use traditional image processing techniques to extract invoice information, under the assumption that you have a fixed invoice layout.
Code
import cv2
import numpy as np
import pytesseract
from imutils import contours
import imutils
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Load image, enlarge, convert to grayscale, Otsu's threshold
image = cv2.imread('1.png')
image = imutils.resize(image, width=1000)
height, width = image.shape[:2]
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Remove horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (50,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, 0, -1)
# Remove vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,50))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, 0, -1)
# Morph close to combine adjacent contours into a single contour
invoice_data = []
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (85,5))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=3)
# Find contours, sort from top-to-bottom
# Iterate through contours, extract row ROI, OCR, and parse data
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
(cnts, _) = contours.sort_contours(cnts, method="top-to-bottom")
row = 0
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    ROI = image[y:y+h, 0:width]
    ROI = cv2.GaussianBlur(ROI, (3,3), 0)
    data = pytesseract.image_to_string(ROI, lang='eng', config='--psm 6')
    parsed = [word.lower() for word in data.split()]
    if 'tariff' in parsed or 'number' in parsed:
        row_data = {}
        row_data['line'] = str(row)
        row_data['tariff'] = parsed[-1]
        row_data['quantity'] = parsed[2]
        # Compare the two amounts numerically (plain string comparison would be lexicographic)
        row_data['amount'] = str(max(parsed[10], parsed[11], key=float))
        row += 1
        print(row_data)
        invoice_data.append(row_data)

    # Visualize row extraction
    '''
    mask = np.zeros(image.shape, dtype=np.uint8)
    cv2.rectangle(mask, (0, y), (width, y + h), (255,255,255), -1)
    display_row = cv2.bitwise_and(image, mask)

    cv2.imshow('ROI', ROI)
    cv2.imshow('display_row', display_row)
    cv2.waitKey(1000)
    '''
print(invoice_data)
cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.waitKey()
I'm working on a similar problem in the logistics industry, and trust me when I say these document tables come in myriad layouts. Numerous companies that have partly solved this problem, and are still improving on it, are listed below:
Leaders: ABBYY, AntWorks, Kofax, and WorkFusion
Major Contenders: Automation Anywhere, Celaton, Datamatics, EdgeVerve, Extract Systems, Hyland, Hyperscience, Infrrd, and Parascript
Aspirants: Ikarus, Rossum, Shipmnts(Alex), Amazon(Textract), Docsumo, Docparser, Aidock
The category I would put this problem under is multi-modal learning, because both the textual and image modalities contribute a good deal to it. Though OCR tokens play a vital role in attribute-value classification, their position on the page, spacing and inter-character distances are very important features in detecting table, row and column boundaries. The problem gets all the more interesting when rows break across pages, or when some columns carry empty values.
The academic world and conferences use the term Intelligent Document Processing for extracting both singular fields and tabular data. In the research literature, the former is better known as attribute-value classification and the latter as table extraction or repeated-structure extraction.
In our foray into processing these semi-structured documents over the past 3 years, I feel that achieving both accuracy and scalability is a long and arduous journey. The solutions that offer scalability / a 'template-free' approach rely on annotated corpora of semi-structured business documents in the order of tens of thousands, if not millions. Though this approach is scalable, it is only as good as the documents it has been trained on. If your documents hail from the logistics or insurance sector, which are known for their complex layouts and need to be super-accurate owing to compliance procedures, a 'template-based' solution would be the panacea to your ills; it is guaranteed to give more accuracy.
If you need links to existing research, do mention it in the comments below and I'd be happy to share them.
Also, I would recommend using pdfparser over pdf2text or pdfminer, because the former gives character-level information in digital files at significantly better performance.
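For a quick feel of what positional extraction looks like, here is a small sketch with pdfplumber (a different library from the pdfparser recommended above; invoice.pdf is a placeholder), which returns each word together with its coordinates, the kind of layout information that table extraction relies on:
import pdfplumber

with pdfplumber.open("invoice.pdf") as pdf:      # hypothetical file name
    page = pdf.pages[0]
    # extract_words() returns dicts with the text and its bounding box (x0, x1, top, bottom).
    for word in page.extract_words():
        print(word["text"], round(word["x0"], 1), round(word["top"], 1))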
Would be happy to incorporate any feedback, as this is my first answer here.

How is Spark reading my image using the image format?

It might be a silly question, but I can't figure out how Spark reads my image using the spark.read.format("image").load(....) call.
After importing my image, which gives me the following:
>>> image_df.select("image.height","image.width","image.nChannels", "image.mode", "image.data").show()
+------+-----+---------+----+--------------------+
|height|width|nChannels|mode| data|
+------+-----+---------+----+--------------------+
| 430| 470| 3| 16|[4D 55 4E 4C 54 4...|
+------+-----+---------+----+--------------------+
I arrive at the conclusion that:
my image is 430x470 pixels,
my image is colored (RGB, since nChannels = 3), which is an OpenCV-compatible type,
my image mode is 16, which corresponds to a particular OpenCV byte order.
Does anyone know which website/documentation I could browse to learn more about this?
The data in the data column is of type binary, but:
when I run image_df.select("image.data").take(1) I get an output which seems to be only one array (see below).
>>> image_df.select("image.data").take(1)
# **1/** Here are the last elements of the result
....<<One Eternity Later>>....x92\x89\x8a\x8d\x84\x86\x89\x80\x84\x87~'))]
# 2/ I got also several part of the result which looks like:
.....\x89\x80\x80\x83z|\x7fvz}tpsjqtkrulsvmsvmsvmrulrulrulqtkpsjnqhnqhmpgmpgmpgnqhnqhn
qhnqhnqhnqhnqhnqhmpgmpgmpgmpgmpgmpgmpgmpgnqhnqhnqhnqhnqhnqhnqhnqhknejmdilcilchkbh
kbilcilckneloflofmpgnqhorioripsjsvmsvmtwnvypx{ry|sz}t{~ux{ry|sy|sy|sy|sz}tz}tz}tz}
ty|sy|sy|sy|sz}t{~u|\x7fv|\x7fv}.....
What comes next is linked to the results displayed above, and might be due to my lack of knowledge concerning OpenCV (or something else). Nonetheless:
1/ I don't understand why, if I have an RGB image, I should have 3 matrices, but the output finishes with .......\x84\x87~'))]. I was expecting to obtain something like [(...),(...),(...\x87~')].
2/ Does this part have a special meaning, like a separator between each matrix or something?
To be more clear about what I'm trying to achieve, I want to process images to do pixel comparison between each images. Therefore, I want to know the pixel values for a given position in my image (I assume that if I have an RGB image, I shall have 3 pixel values for a given position).
Example: let's say that I have a webcam pointing to the sky only during the day and I want to know the values of a pixel at a position corresponding to the top left sky part, I found out that the concatenation of those values gives the colour Light Blue which says that the photo was taken on a sunny day. Let's say that the only possibility is that a sunny day takes the colour Light Blue.
Next I want to compare the previous concatenation with another concat of pixel values at the exact same position but from a picture taken the next day. If I found out that they are not equal then I conclude that the given picture was taken on a cloudy/rainy day. If equal then sunny day.
Any help on that would be highly appreciated. I have simplified my example for better understanding, but my goal is pretty much the same. I know that ML models exist to achieve this kind of thing, but I would be happy to try this first. My first goal is to split this column into 3 columns corresponding to each color code: a red matrix, a green matrix and a blue matrix.
I think I have the logic. I used the keras.preprocessing.image.img_to_array() function to understand how the values are organized (since I have an RGB image, I must have 3 matrices: one for each color R G B). Posting this in case someone wonders how it works; I might be wrong, but I think I have something:
from keras.preprocessing import image
import numpy as np
from PIL import Image
# Using spark built-in data source
first_img = spark.read.format("image").schema(imageSchema).load(".....")
raw = first_img.select("image.data").take(1)[0][0]
np.shape(raw)
(606300,) # which is 470*430*3
# Using keras function
img = image.load_img(".../path/to/img")
yy = image.img_to_array(img)
>>> np.shape(yy)
(430, 470, 3) # the form is good but I have a problem of order since:
>>> raw[0], raw[1], raw[2]
(77, 85, 78)
>>> yy[0][0]
array([78., 85., 77.], dtype=float32)
# Therefore I used the numpy reshape function directly on raw
# to get 430 matrices of 470 rows and 3 columns:
array = np.reshape(raw, (430,470,3))
xx = image.img_to_array(array) # OPTIONAL and not used here
>>> array[0][0] == (raw[0],raw[1],raw[2])
array([ True, True, True])
>>> array[0][1] == (raw[3],raw[4],raw[5])
array([ True, True, True])
>>> array[0][2] == (raw[6],raw[7],raw[8])
array([ True, True, True])
>>> array[0][3] == (raw[9],raw[10],raw[11])
array([ True, True, True])
So if I understood well, Spark reads the image as one big array - (606300,) here - where the elements are in fact ordered, each corresponding to its respective color shade (in OpenCV's B G R order, which explains the reversed values above).
After doing my little transformation, I obtain 430 matrices of 470 rows x 3 columns. Since my image is 470x430 (width x height), each matrix corresponds to a pixel height position, and inside each one there are 470 rows for the width positions and 3 columns for the colors.
Hope that helps someone :)!
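Building on this, here is a hedged sketch of the stated first goal (splitting the data column into red, green and blue columns) with a plain Python UDF; the column names mirror the ones above, channels_df and channel_udf are hypothetical names, and the 0/1/2 channel indices follow the OpenCV BGR convention noted earlier:
import numpy as np
from pyspark.sql.functions import col, udf
from pyspark.sql.types import ArrayType, IntegerType

def channel_udf(index):
    # index 0 = blue, 1 = green, 2 = red (OpenCV stores pixels in BGR order)
    def extract(data, height, width, n_channels):
        arr = np.frombuffer(bytes(data), np.uint8).reshape(height, width, n_channels)
        return [int(v) for v in arr[:, :, index].flatten()]
    return udf(extract, ArrayType(IntegerType()))

args = [col("image.data"), col("image.height"), col("image.width"), col("image.nChannels")]
channels_df = image_df.select(
    channel_udf(2)(*args).alias("red"),
    channel_udf(1)(*args).alias("green"),
    channel_udf(0)(*args).alias("blue"),
)
channels_df.show(1)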

Converting images from cvimg to tensor properly

Is there a proper way to convert a cvimg to a tensor without causing any color distortion? I have done a comparison by storing 2 images: one was decoded using TensorFlow and the other using OpenCV.
Image generated using the TensorFlow image decoder:
file_reader = tf.read_file(file_name, input_name)
if file_name.endswith(".png"):
    image_reader = tf.image.decode_png(
        file_reader, channels=3, name="png_reader")
elif file_name.endswith(".gif"):
    image_reader = tf.squeeze(
        tf.image.decode_gif(file_reader, name="gif_reader"))
elif file_name.endswith(".bmp"):
    image_reader = tf.image.decode_bmp(file_reader, name="bmp_reader")
else:
    image_reader = tf.image.decode_jpeg(
        file_reader, channels=3, name="jpeg_reader")
Image generated using the cv-to-tensor conversion:
image_reader = tf.convert_to_tensor(cvimg)
Am I missing some steps here during the cv conversion? Thanks!
OpenCV loads images in BGR format while TensorFlow uses RGB format (as you can see, the blue and red channels of your image are swapped).
Thus, if you want to read an image loaded using OpenCV (I suppose that is cvimg), you just have to swap the color channels from BGR to RGB:
image_reader = tf.convert_to_tensor(cvimg)
b, g, r = tf.unstack(image_reader, axis=-1)
image_reader = tf.stack([r, g, b], axis=-1)
Another way is:
image_reader = tf.reverse(cvimg, axis=[-1])
tf.reverse reverses the order of the dimension you specify. Because cvimg (an OpenCV image, [height, width, channel]) is in BGR format, tf.reverse(cvimg, axis=[-1]) reverses BGR (the last dimension) to RGB.
You don't have to use tf.convert_to_tensor(cvimg), because TensorFlow automatically converts a NumPy array to a tensor.
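Another option, if you prefer to fix the channel order before entering TensorFlow at all, is OpenCV's own color conversion; a minimal sketch using the cvimg from the question:
import cv2
import tensorflow as tf

# Swap BGR to RGB on the NumPy side, then hand the array to TensorFlow.
rgb = cv2.cvtColor(cvimg, cv2.COLOR_BGR2RGB)
image_reader = tf.convert_to_tensor(rgb)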

Resources