First, sorry for my bad English.
Short version: Can anyone tell me how to use only the first n layers of XLNet for classification?
Long version:
I have a dataset composed of texts and their summaries. The goal is to detect whether a summary was generated by a bot.
So I thought of using BERT, giving it "[CLS] " + text + " [SEP]" + summary as input, then taking the representation of the "[CLS]" token and using a classifier to decide whether the summary was written by a bot.
The problem is that BERT takes no more than 512 tokens as input.
So I thought of using XLNet. But here another problem appeared: my GPU (RTX 2060) can't even handle a batch of size 1.
So I thought of using only, say, the first 4 layers of XLNet, but the problem is I don't know how to do it.
My code to load the model is model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2).
Can anyone tell me what to add to use only part of the network, please?
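One possible approach (a minimal sketch, assuming the Hugging Face transformers API, where the encoder blocks of XLNetForSequenceClassification live in model.transformer.layer) is to slice off the blocks you don't want before fine-tuning:

import torch.nn as nn
from transformers import XLNetForSequenceClassification

model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)

# Keep only the first 4 transformer blocks (assumption: the later blocks
# can simply be dropped and the classification head fine-tuned on top).
model.transformer.layer = nn.ModuleList(list(model.transformer.layer)[:4])
model.config.n_layer = 4  # keep the config consistent with the truncated model

Forward passes then only run through the remaining 4 blocks, which should cut activation memory roughly proportionally; gradient checkpointing or a shorter maximum sequence length are other options worth trying.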
I want to detect digits on a display. For that I am using a custom 19-class dataset. The chosen model is YOLOv5-X and the resolution is 640x640. Some of the objects are:
0-9 digits
Some text as objects
Total --> 17 classes
I am having problems detecting all the digits when I want to detect 23, 28, 22, for example. If the digits are very close to each other, the model struggles.
I am using Roboflow to create different folders in which I apply some preprocessing, so that I have full control over what I am feeding into the model. Everything is checked and placed in a new folder called TRAIN_BASE. In total I have 3500 images with digits, and most of the variance is in hue and brightness.
Any advice on making the model able to catch all the digits even when they are very close to each other?
Here are the steps I followed:
First of all, using mosaic augmentation was not a good choice for the purpose of detecting digits on a display, because in a real scenario I would never encounter pieces of digits. That made the model fail to recognize some digits when it was not sure.
(image: example of the digits problem concept)
Another big improvement was to change the anchor boxes of the YOLO model to adapt them to small objects, once I knew which anchor boxes I needed. Just adding this argument to train.py in the script provided by Ultralytics is enough to print custom anchors, which you can then add to your custom architecture.
To check which augmentations help and which do not, the next article explains it quite visually.
P.S.: Thanks for the fast response and the help the community gave me.
I am working on a project to detect FOD (Foreign Object Debris) found on runways. FOD includes anything like nuts, bolts, screws, locking wire, plastic debris, stones, etc. that has the potential to damage an aircraft. I have searched the Internet for an image dataset, but none related to FOD is available. My question is: how can I make my own image dataset that can then be used for training?
Kindly guide me in making an image dataset for both classification and detection purposes, and also in the data preprocessing that will be required. Thanks, and I'm waiting for your reply!
Although the question is a bit vague regarding your requirements and the specs of your machine, I'll try to answer it. You'll need object detection for your task. There are many models available that you can use, such as YOLO, SSD, etc.
To create your own dataset, you can follow these steps:
Take lots of images of your objects of interest in various conditions, viewpoints and backgrounds. (Around 2000 per class should be good enough).
Now annotate (or mark) where your object is in each image. If you're using YOLO, make use of Yolo-mark for annotating. There should be similar tools for SSD and other models.
Now you can begin training.
These steps should get you started or at least point you in the right direction.
You can build your own dataset with this code. I wrote it, and it works correctly.
The imports and configuration are included below; you just need to set DATADIR (and CATEGORIES and IMG_SIZE) for your own data.
import os
import pickle
import cv2

DATADIR = "path/to/your/data"        # placeholder: one sub-folder per class
CATEGORIES = ["class_a", "class_b"]  # placeholder: sub-folder names = class labels
IMG_SIZE = 64                        # images are resized to IMG_SIZE x IMG_SIZE

training_data = []
x_train = []
y_train = []

if __name__ == "__main__":
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        class_num = CATEGORIES.index(category)
        for img in os.listdir(path):
            try:
                img_array = cv2.imread(os.path.join(path, img))
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
                training_data.append([new_array, class_num])
            except Exception:
                pass  # skip files that cannot be read as images
    for features, label in training_data:
        x_train.append(features)
        y_train.append(label)
    # create pickle files
    pickle_out = open("x_train.pickle", "wb")
    pickle.dump(x_train, pickle_out)
    pickle_out.close()
    pickle_out = open("y_train.pickle", "wb")
    pickle.dump(y_train, pickle_out)
    pickle_out.close()
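To use the result later, the pickles can be loaded back like this (a short sketch, assuming the file names above):

import pickle

with open("x_train.pickle", "rb") as f:
    x_train = pickle.load(f)
with open("y_train.pickle", "rb") as f:
    y_train = pickle.load(f)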
In case you're starting completely from scratch, you can use "Dataset Directory", available on the Play Store. The app helps you create custom datasets using your mobile phone. You'll have to sign in to your Google Drive so that your dataset is stored in Drive rather than on your phone. Additionally, it also supports labelling the entities for classification and regression predictive models.
Currently, the app supports binary image classification and image regression.
Hope this helped!
Download link:
https://play.google.com/store/apps/details?id=com.applaud.datasetdirectory
I want to use Mallet as part of an expert-finding project. I'm almost new to Mallet, but I know that it trains topics from a set of documents. Let's say that I have 50 topics trained by Mallet. I want to calculate this probability: p(topic|q) or p(q|topic).
q is the query. It's a word (such as algorithm, android, etc.) for which I want to find the experts in the specified area.
As I read in this post: how to get word-topic probability using mallet, one of the users said we can calculate the probability using the --word-topic-counts-file option. Let's say that I have generated this file with Mallet. It has the following structure:
0 android 2:21
1 is 3:3
...
I know the semantics of this structure, but I don't know how to calculate the probability of a topic given the query (i.e. p(topic|q) or p(q|topic)).
P.S.: I use the word "either" because I'm not sure which of them Mallet calculates.
Any help would be appreciated.
Take this example line from GlieBrt's answer to the linked question:
1 needham 19:2 17:1
Here p(topic|q) can be calculated as
p(19|needham) = 2/3 = 0.67
and
p(17|needham) = 1/3 = 0.33
With your own example, it is even simpler:
0 android 2:21
p(2|android) = 1.0
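To make this concrete, here is a minimal sketch (my own illustration, not part of Mallet) that parses a word-topic-counts file and computes p(topic|q) by normalizing the per-topic counts for the query word:

def topic_given_word(counts_path, query):
    # Each line has the form: <index> <word> <topic:count> <topic:count> ...
    with open(counts_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3 or parts[1] != query:
                continue
            counts = {}
            for pair in parts[2:]:
                topic, count = pair.split(":")
                counts[int(topic)] = int(count)
            total = sum(counts.values())
            return {topic: c / total for topic, c in counts.items()}
    return {}

# Hypothetical usage, with the file name assumed:
# topic_given_word("word_topic_counts.txt", "needham") -> {19: 0.67, 17: 0.33}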
I'm studying SVM and implemented this code. It's basic, primitive, and takes too much time, but I just wanted to see how it actually works. Unfortunately, it is giving me bad results. What did I miss? Some coding error or mathematical mistake? If you want to look at the dataset, its link is here. I took it from the UCI Machine Learning Repository. Thanks for your help.
import numpy as np

def hypo(x, q):
    return 1 / (1 + np.exp(-x.dot(q)))

data = np.loadtxt('LSVTVoice', delimiter='\t')
x = np.ones(data.shape)
x[:, 1:] = data[:, 0:data.shape[1] - 1]
y = data[:, data.shape[1] - 1]
q = np.zeros(data.shape[1])
C = 0.002

# mean normalization
for i in range(q.size - 1):
    x[:, i + 1] = (x[:, i + 1] - x[:, i + 1].mean()) / (x[:, i + 1].max() - x[:, i + 1].min())

for i in range(2000):
    h = x.dot(q)
    for j in range(q.size):
        q[j] = q[j] - (C * np.sum(-y * np.log(hypo(x, q)) - (1 - y) * np.log(1 - hypo(x, q)))) + (0.5 * np.sum(q ** 2))

for i in range(y.size):
    if h[i] >= 0:
        print(y[i], '1')
    else:
        print(y[i], '0')
Depending on your data, it is quite common for a simple from-scratch implementation of SVM to give bad results. You should try a mature implementation (e.g. scikit-learn's SVM); you can also check this: https://github.com/scikit-learn/scikit-learn/tree/master/sklearn/svm
SVM has several kinds of implementations and parameters, such as different kernels (e.g. rbf). You should try them with different parameters (depending on your data) and compare the results to each other.
You can use the grid search approach for the comparison (check this: http://scikit-learn.org/stable/modules/grid_search.html), as sketched below.
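For example, a minimal sketch using scikit-learn (assuming x and y are the normalized feature matrix and labels from the question):

from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Hold out part of the data so the comparison is done on unseen samples.
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.3, random_state=0)

# Try a few kernels and regularization strengths via cross-validated grid search.
params = {'kernel': ['linear', 'rbf'], 'C': [0.1, 1, 10], 'gamma': ['scale', 'auto']}
grid = GridSearchCV(SVC(), params, cv=5)
grid.fit(x_tr, y_tr)

print(grid.best_params_)
print(grid.score(x_te, y_te))  # accuracy on the held-out set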
I'd like to write a spam filter program with SVM, and I chose libsvm as the tool.
I got 1000 good mails and 1000 spam mails, then I split them into:
700 good_train mails 700 spam_train mails
300 good_test mails 300 spam_test mails
Then I wrote a program to count the number of times each word occurs in each file, and got results like:
good_train_1.txt:
today 3
hello 7
help 5
...
I learned that libsvm needs a format like:
1 1:3 2:1 3:0
2 1:3 2:3 3:1
1 1:7 3:9
as its input. I know that the leading 1, 2, 1 are the labels, but what does 1:3 mean?
How could I convert what I've got into this format?
Most likely, the format is
classLabel attribute1:count1 ... attributeN:countN
where N is the total number of distinct words in your text corpus. You will have to check the documentation for the tool you are using (or its sources) to see whether you can use a sparser format by not including the attributes whose count is 0.
How could I convert what I've got into this format?
Here's how I would do it. I would use the script you've got to compute the word counts for each mail in the training set. Then I would use another script to transfer that data into the LIBSVM format shown earlier. (This can be done in a variety of ways; it should be reasonable to write with an easy input/output language like Python; see the sketch below.) I would batch all "good-mail" data into one file and label that class "1". Then I would do the same with the "spam-mail" data and label that class "-1". As nologin said, LIBSVM requires the class label to precede the features, but the feature indices can be any numbers as long as they are in ascending order, e.g. 2:5 3:6 5:9 is allowed, but not 3:23 1:3 7:343.
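A minimal sketch of such a conversion script (my own illustration; the vocabulary mapping and file names are assumptions, not part of LIBSVM):

# Assumption: each mail has been reduced to a text file of "word count" lines,
# as in the good_train_1.txt example from the question.
def to_libsvm_line(count_path, vocab, label):
    counts = {}
    with open(count_path) as f:
        for line in f:
            word, count = line.split()
            if word in vocab:  # vocab maps word -> attribute index (1-based)
                counts[vocab[word]] = int(count)
    feats = " ".join(f"{i}:{c}" for i, c in sorted(counts.items()))
    return f"{label} {feats}"

# Hypothetical usage with a tiny fixed vocabulary:
vocab = {"today": 1, "hello": 2, "help": 3}
print(to_libsvm_line("good_train_1.txt", vocab, 1))  # -> "1 1:3 2:7 3:5"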
If you're concerned that your data is not in the correct format, use their script
checkdata.py
before training and it should report any possible errors.
Once you have two separate files with data in the correct format, you can call
cat file_good file_spam > file_training
and generate a training file that contains data on both good and spam mail. Then do the same with the testing set. One psychological advantage of forming the data this way is that you know the top 700 (or 300) mails in the training (or testing) set are good mail, and the remaining ones are spam. This makes it easier to create other scripts you may want to run on the data, such as precision/recall code.
If you have other questions, the FAQ at http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html should be able to answer a few, as well as the various README files that come with installation. (I personally found the READMEs in the "Tools" and "Python" directories to be a great boon.) Sadly, the FAQ does not touch much on what nologin said, about data being in a sparse format.
On a final note, I doubt that you need to keep counts of every possible word that could appear in mail. I would recommend counting only the most common words you would suspect to appear in spam. Other potential features include total word count, average word length, average sentence length, and any other data that you feel may be helpful; a small sketch of these follows.
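As an illustration of those extra features, here is a hedged sketch (the splitting rules are rough heuristics of my own, not anything prescribed by LIBSVM):

import re

def extra_features(text):
    # Word count, average word length, and average sentence length (in words).
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / n_words if n_words else 0.0
    avg_sent_len = n_words / len(sentences) if sentences else 0.0
    return [n_words, avg_word_len, avg_sent_len]

print(extra_features("Buy now! Limited offer. Reply today."))  # -> [6, ~4.67, 2.0]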