Can I use a .json dataset with YOLO?

I'm currently using a .txt dataset with YOLO, and I want to use a .json (COCO-style) dataset with YOLOv4. Can I do that?
Example of the .txt format:
0 0.542720 0.415254 0.409610 0.355932
Example of the .json format:
{
  "info": {"description": "my-project-name"},
  "images": [
    {"id": 1, "width": 1280, "height": 854, "file_name": "a.jpg"}
  ],
  "annotations": [
    {
      "id": 0,
      "iscrowd": 0,
      "image_id": 1,
      "category_id": 1,
      "segmentation": [[572.45197740113, 270.1920903954802, 628.7419962335217, 244.45951035781542, 738.105461393597, 286.2749529190207, 776.7043314500942, 369.90583804143125, 775.0960451977401, 453.53672316384177, 545.1111111111111, 511.4350282485875, 506.512241054614, 482.48587570621464, 487.21280602636534, 357.03954802259886, 514.5536723163842, 299.14124293785306, 562.8022598870057, 283.0583804143126]],
      "bbox": [487.21280602636534, 244.45951035781542, 289.49152542372883, 266.9755178907721],
      "area": 57054.884640074306
    }
  ],
  "categories": [{"id": 1, "name": "a"}]
}

Related

How can I save and reuse one-hot encoding in Keras?

I'm working on an NLP project. I use one-hot encoding for text representation in Google Colab and then fit an LSTM on it.
This is my code:
from tensorflow.keras.preprocessing.text import one_hot
voc_size = 13000
onehot_repr = [one_hot(words, voc_size) for words in X1]
The model seems fine, but when I want to save it to make predictions on new text, I save the encoder using pickle:
import pickle
with open("one_hot", "wb") as f:
    pickle.dump(one_hot, f)
But when I restart Colab and load the saved one_hot again, the numbers that represent the words are different.
Is there any way to save the one-hot encoding and get the same result in Colab?
Because I cannot save the one-hot encoder for later use, I instead save the one-hot representation as a list and access it by index later:
## load saved model
from tensorflow.keras.models import load_model
my_model = load_model("model9419.h5")
## load one-hot representation
import json
with open('/content/drive/MyDrive/last_model/on_hot.json', 'rb') as f:
    oneHot = json.load(f)
To predict a word, I use simple array indexing to find the one-hot representation of that word.
Is this a correct way to make a prediction? Is there a better way?
And if I can save the one-hot function, how can I use it in a Flask server?
Also, can anyone recommend a word representation that is simple, easy to save for use in Flask, and works better?
First, create a one-hot dict, then convert it to a pandas DataFrame and save that DataFrame as a .csv, e.g.:
import pandas as pd
from tensorflow.keras.preprocessing.text import one_hot

onehot_dict = {}
voc_size = 3
for words in ['this', 'that', 'then']:
    onehot_dict[words] = one_hot(words, voc_size)

onehot_df = pd.DataFrame(onehot_dict)
onehot_df.to_csv('./onehot.csv', index=False)
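The indices change between Colab sessions because one_hot hashes words with Python's built-in hash, which is salted per process, so persisting the word-to-index mapping as above is the practical fix. To reuse the saved mapping later (for example in a Flask app), one way is to read the CSV back and look tokens up; a minimal sketch, assuming the onehot.csv produced above (the fallback value for unseen words is an arbitrary choice):
# Minimal sketch: reload the saved mapping and reuse it for new text.
# Assumes the onehot.csv produced above; unseen words need their own handling.
import pandas as pd

onehot_df = pd.read_csv('./onehot.csv')
onehot_dict = {word: int(onehot_df[word][0]) for word in onehot_df.columns}

def encode(sentence):
    # Fall back to 0 for words not seen at training time (an arbitrary choice).
    return [onehot_dict.get(w, 0) for w in sentence.lower().split()]

print(encode('this then that'))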

EEG data preprocessing with MNE-Python

I have the physiological EEG emotion dataset named "DEAP". I want to analyze and visualize the data with MNE, but the dataset comes in its own format.
How can I load my own data (in .dat format) for preprocessing?
import pickle
with open('s01.dat', 'rb') as f:
    y = pickle.load(f, encoding='latin1')
This one works for me.
Of course, the ".dat" file is in the same directory as this code.
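To get the loaded arrays into MNE for preprocessing and visualization, one option is to wrap a trial in an mne.io.RawArray. The sketch below assumes the preprocessed DEAP layout (y['data'] with shape (40 trials, 40 channels, 8064 samples) at 128 Hz, with the first 32 channels being EEG); check your file before relying on those numbers, and note that the channel names are placeholders.
# Minimal sketch: wrap one DEAP trial in an MNE RawArray.
# Assumes the preprocessed DEAP layout described above; values are used as-is
# (rescale if your downstream steps expect volts).
import pickle
import mne

with open('s01.dat', 'rb') as f:
    y = pickle.load(f, encoding='latin1')

trial = y['data'][0][:32]  # first trial, EEG channels only
info = mne.create_info(
    ch_names=[f'EEG{i}' for i in range(32)],  # placeholder channel names
    sfreq=128.0,
    ch_types='eeg',
)
raw = mne.io.RawArray(trial, info)
raw.plot(duration=5.0, n_channels=10)  # quick visual check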

How can I load and deploy a pre-trained AWS SageMaker XGBoost model on a local machine?

I've trained a SageMaker XGBoost model and downloaded the model.tar.gz file from S3 onto my local machine. How can I load this model and deploy it using Flask?
I've tried using pickle to load the unzipped model file, but it doesn't seem to work.
import sagemaker
import boto3
import os
import pickle

with open('xgboost-model', 'r') as inp:
    cls.model = pickle.load(inp)
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "C:\Anaconda3\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 969: character maps to <undefined>
Figured it out! The downloaded pre-trained SageMaker model can be extracted from its tar.gz archive on the local machine. Once extracted, open the model file in Python in binary mode and load it with pickle:
import pickle

with open(model_path, 'rb') as f:
    xgb_model = pickle.loads(f.read())
Then read in the input data and convert it to XGBoost DMatrix format, dropping the label column (the first column) and headers, to make predictions:
import xgboost as xgb

data_input = xgb.DMatrix(data.iloc[:, 1:].values)
predictions = xgb_model.predict(data_input)
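Since the end goal is serving with Flask, a minimal sketch of wrapping the unpickled booster in an endpoint could look like the following; the /predict route and the {"features": [[...], ...]} payload shape are assumptions, not part of the original answer.
# Minimal Flask sketch around the unpickled booster.
# The route name and the JSON payload shape are assumptions.
import pickle

import numpy as np
import xgboost as xgb
from flask import Flask, jsonify, request

app = Flask(__name__)

with open('xgboost-model', 'rb') as f:
    model = pickle.loads(f.read())

@app.route('/predict', methods=['POST'])
def predict():
    features = np.array(request.get_json()['features'], dtype=float)
    preds = model.predict(xgb.DMatrix(features))
    return jsonify(predictions=preds.tolist())

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)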

Use a mean file stored in HDF5 in Caffe

I'm preparing to train in Caffe using data in an HDF5 file, which also contains the per-pixel mean image of the training set. In 'train_val.prototxt', the 'transform_param' section of the input data layer can use a mean_file (usually in binaryproto format) to normalize the data, as in the ImageNet Caffe tutorial example:
transform_param {
  mirror: true
  crop_size: 227
  mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
}
For per-channel normalization one can use mean_value instead of mean_file.
But is there any way to use the mean image data directly from my database (here an HDF5 file)?
I have extracted the mean from the HDF5 file into a numpy file, but I'm not sure whether that can be used in the prototxt or converted to something that can. I can't find information about this in the Caffe documentation.
AFAIK, "HDF5Data" layer does not support transformations. You should subtract the mean values yourself when you store the data to HDF5 files.
If you want to save a numpy array in a binaryproto format, you can see this answer for more details.
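Since the subtraction has to happen before the data reaches the "HDF5Data" layer, writing mean-subtracted data with h5py could look like the sketch below; the file names, dataset keys, and the (N, C, H, W) layout are illustrative assumptions.
# Minimal sketch: subtract the per-pixel mean before writing the HDF5 file.
# File names, dataset keys, and the (N, C, H, W) layout are assumptions.
import h5py
import numpy as np

images = np.load('train_images.npy').astype(np.float32)  # (N, C, H, W)
labels = np.load('train_labels.npy').astype(np.float32)
mean_image = images.mean(axis=0)  # per-pixel mean, shape (C, H, W)

with h5py.File('train.h5', 'w') as f:
    f.create_dataset('data', data=images - mean_image)
    f.create_dataset('label', data=labels)

np.save('mean_image.npy', mean_image)  # keep it for test-time subtraction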

Convert image data to grayscale from npy file in pylearn2

I'm training a simple convolutional neural network using pylearn2. I have my RGB image data stored in a .npy file. Is there any way to convert that data to grayscale directly from the .npy file?
If this is a standalone file, load it using numpy.load, then convert the content using something like this:
import numpy as np

def rgb2gray(rgb):
    # Standard luminance weights for the R, G, B channels.
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
If the file is part of a pylearn2 dataset (produced with use_design_loc()), then load the dataset:
from pylearn2.utils import serial
dataset = serial.load("file.pkl")
and apply the rgb2gray() function to its X member (I assume a DenseDesignMatrix).
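For the standalone-file case, putting the pieces together might look like the sketch below; the file names and the (N, H, W, 3) layout of the array are assumptions.
# Minimal sketch for the standalone .npy case.
# Assumes the array is laid out as (N, H, W, 3); adjust if your layout differs.
import numpy as np

def rgb2gray(rgb):
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])

data = np.load('images.npy')  # placeholder file name
gray = rgb2gray(data)  # shape (N, H, W)
np.save('images_gray.npy', gray)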
