yolov4 input and output tensor format

We need to implement a YOLOv4 CNN application on an FPGA. We would like to know the format of the input and output tensors of YOLOv4. We tried searching on the internet but could not find much information.
Can someone let me know the tensor formats for the input and output? It would also be great if someone could give a link to this information, along with the architecture of YOLOv4 explained in detail.
regards,
-sunil
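
As a reference point only, these are the shapes used by the standard Darknet YOLOv4 configuration with a 416x416 input and the 80 COCO classes; other input resolutions (512, 608) scale the grids accordingly, and the exact layout depends on the framework the model was exported to:

# Input: a batch of RGB images with pixel values normalized to [0, 1].
# Darknet stores images channel-first (NCHW); many exported models use NHWC instead.
input_shape = (1, 416, 416, 3)          # batch, height, width, channels (NHWC shown)

# Output: three detection heads at strides 8, 16 and 32. Each grid cell predicts
# 3 anchor boxes x (4 box coordinates + 1 objectness score + 80 class scores) = 255 values.
out_stride8  = (1, 52, 52, 255)
out_stride16 = (1, 26, 26, 255)
out_stride32 = (1, 13, 13, 255)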

Related

Custom DataSet for Bart Text Summarisation

I am using a pre-trained BART and then fine-tuning it on my own dataset. I am now training the model after having tokenised my dataset.
I keep getting the same error.
After researching a lot, I feel there is something wrong with the way I have preprocessed the data.
I am adding the Google Colab link, if someone could help me out:
https://colab.research.google.com/drive/11q5lb3pOgv7axZbszAYHYVLVwO8Eoz_s?usp=sharing
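
Since the error itself is not shown here, the following is only the usual preprocessing pattern for BART summarisation with a recent version of the Hugging Face tokenizer; the checkpoint name and the "document"/"summary" column names are assumptions about the notebook:

from transformers import BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")  # assumed checkpoint

def preprocess(batch):
    # Tokenize the source articles; BART accepts at most 1024 tokens.
    model_inputs = tokenizer(batch["document"], max_length=1024, truncation=True)
    # Tokenize the target summaries as labels (text_target handles the decoder side).
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)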

LSTM for vector to character sequence translation

I'm looking to build a sequence-to-sequence model that takes a 2048-long vector of 1s and 0s (e.g. [1,0,1,0,0,1,0,0,0,1,...,1]) as input and translates it to my known output, a variable-length sequence of 1-20 characters (e.g. GBNMIRN, ILCEQZG, or FPSRABBRF).
My goal is to create a model that can take in a new 2048-long vector of 1s and 0s and predict what the output sequence will look like.
I've looked at some GitHub repositories like this and this,
but I'm not sure how to implement it for my problem. Are there any projects that have done something similar, or how could I implement this with the seq2seq models or LSTMs currently out there (Python implementations)?
I am using the Keras library in Python.
Your input is unusual in that it is a binary code, so I don't know whether the model will work well.
First of all, you need to add start and end marks to your input and output, which indicate the boundaries. Then design the recurrent module used at each time step, including how the hidden state is used. You could try a simple GRU/LSTM network arranged as an encoder and a decoder, as in the sketch below.
In addition, you could take a look at the attention mechanism from the paper "Neural Machine Translation by Jointly Learning to Align and Translate", where the decoder attends over the encoder states at every step.
Though you are using Keras, I think it will also be helpful to read PyTorch code, as it is straightforward and easy to understand; the seq2seq tutorial in the official PyTorch documentation is a good reference.
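A minimal Keras sketch of that encoder-decoder idea (the vocabulary size, maximum length, and latent size are assumptions; attention is left out for brevity):

from tensorflow import keras
from tensorflow.keras import layers

VOCAB = 28      # assumption: up to 26 letters plus start and end marks
MAX_LEN = 22    # assumption: 20 output characters plus the two marks
LATENT = 256

# Encoder: the fixed-length 2048-bit vector is not a sequence, so a Dense
# layer is enough to compress it into the decoder's initial state.
bits_in = keras.Input(shape=(2048,), name="bits")
state = layers.Dense(LATENT, activation="tanh")(bits_in)

# Decoder: an LSTM fed the previous character (one-hot), primed with the
# encoder state; trained with teacher forcing.
chars_in = keras.Input(shape=(MAX_LEN, VOCAB), name="chars_shifted")
dec_seq = layers.LSTM(LATENT, return_sequences=True)(chars_in, initial_state=[state, state])
probs = layers.TimeDistributed(layers.Dense(VOCAB, activation="softmax"))(dec_seq)

model = keras.Model([bits_in, chars_in], probs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()

At inference time you decode step by step: feed the start mark, sample a character, feed it back in, and stop at the end mark or after 20 characters. Because the 2048-bit input has a fixed length rather than being a sequence, a Dense encoder is sufficient; an RNN encoder is not required.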

Maxima: Linear fit on data

I am new to Maxima. I have a set of data points (x, y, error) and I want to fit a straight line to them. I found some examples in the Maxima examples page "Chapter 5: 2D Plots and Graphics using qdraw", but honestly I don't know how to download and use the "qdraw" package.
Can anyone help?
I see that qdraw.mac is linked from the page you mentioned. Maybe you can search for qdraw.mac on that page.
Maxima has some capability to work with linear regression models, but other packages which are specifically devoted to statistics might be more suitable. Have you tried R? (http://www.r-project.org)
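If an ordinary least-squares fit is enough (note that the per-point errors are ignored here), a minimal sketch with Maxima's lsquares package, using made-up data points, would be:

load(lsquares)$
/* one row per data point: columns are x and y (made-up values) */
M : matrix([1, 2.1], [2, 3.9], [3, 6.2], [4, 7.8])$
/* fit y = a*x + b by least squares and return estimates for a and b */
lsquares_estimates(M, [x, y], y = a*x + b, [a, b]);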

How to train HOG and use my HOGDescriptor?

I want to train on my own data and use the HOG algorithm to detect pedestrians.
Right now I can use defaultHog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector()); in OpenCV for detection, but the results on my test video are not very good, so I want to train on my own database.
I have prepared 1000+ positive samples and 1000+ negative samples. They are cropped to size 50 * 100, and I have made the list files.
I have read some tutorials on the internet, but they are all complex, sometimes abstruse. Most of them analyse the source code and the HOG algorithm itself, with only a few examples and little practical explanation.
Some instructions show that libsvm\windows\svm-train.exe can be used for training. Can anyone give an example based on 1000+ 50*100 positive samples?
For example, with Haar training we can run haartraining.exe -a -b from OpenCV with some parameters and get an *.xml file as a result, which is then used for people detection. Is there something similar for HOG?
Or is there any other method for training and detection?
I would prefer to know how to use it and the detailed procedure; the details of the algorithm are not important to me. I just want to implement it.
If anyone knows about it, please give me some tips.
I provided some sample code and instructions to start training your own HOG descriptor using openCV:
See https://github.com/DaHoC/trainHOG/wiki/trainHOG-Tutorial.
The algorithm is indeed too complex to describe briefly; the basic idea, however, is to:
Extract HOG features from negative and positive sample images of identical size and type.
Use the extracted feature vectors along with their respective classes to train an SVM classifier. In this step you can use svm-train.exe with a generated file of the correct format containing the feature vectors and their classes (or directly include and call the libsvm library from your sources).
The resulting SVM model and support vectors are then combined into a single descriptor vector that can be used with the OpenCV detector (see the rough sketch below).
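A rough sketch of those three steps with OpenCV's Python bindings (the folder names, image format, and HOG block/cell sizes are assumptions; the C++ API is analogous):

import glob
import cv2
import numpy as np

# HOG window matched to the 50x100 crops; block/cell sizes chosen so they
# divide the window evenly (these particular values are just one valid choice).
hog = cv2.HOGDescriptor((50, 100), (10, 10), (5, 5), (5, 5), 9)

features, labels = [], []
for label, pattern in [(1, "pos/*.png"), (0, "neg/*.png")]:   # hypothetical folders
    for path in glob.glob(pattern):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        features.append(hog.compute(img).flatten())
        labels.append(label)

# Linear SVM, so the model can later be folded into one detector vector.
svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(np.array(features, dtype=np.float32), cv2.ml.ROW_SAMPLE,
          np.array(labels, dtype=np.int32))

# Combine the support vectors and the decision function's rho into the single
# vector expected by setSVMDetector (sign convention as in OpenCV's train_HOG sample;
# flip it if detections come out inverted).
sv = svm.getSupportVectors()
rho, _, _ = svm.getDecisionFunction(0)
hog.setSVMDetector(np.append(sv.ravel(), -rho).astype(np.float32))

# boxes, weights = hog.detectMultiScale(test_frame)   # test_frame: a frame from your video

A linear kernel is what makes the last step possible, since only then can the whole model be collapsed into the single weight vector that setSVMDetector expects.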
Best regards

mahout classification text input vectorization

I am trying to build a classifier with Mahout. After the model is built,
I have to "feed" the target documents to the model and get the classification result.
I checked the test cases in the Mahout source code; they use DenseVector, which has a fixed number of fields.
However, I am using Mahout to classify text documents, so the input is a string (or an array of strings). How do I convert it to a valid "Vector" instance?
I tried the StaticWordEncoder and RandomAccessSparseVector, but the result is not correct, and I cannot figure out why. A little bit desperate.
You have to parse the document into words and populate the vector from those.
I would recommend reading something like Mahout In Action to get more background before attempting this.
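A rough sketch of that step using Mahout's hashed feature encoders (the vector dimension and field name here are arbitrary choices for illustration, and the tokenisation must match whatever was used when the model was trained):

import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.vectorizer.encoders.StaticWordValueEncoder;

public class DocEncoder {
    private static final int CARDINALITY = 10000;  // arbitrary vector dimension

    public static Vector encode(String document) {
        StaticWordValueEncoder encoder = new StaticWordValueEncoder("body");
        Vector vector = new RandomAccessSparseVector(CARDINALITY);
        // Split the document into words and hash each word into the sparse vector.
        for (String word : document.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) {
                encoder.addToVector(word, vector);
            }
        }
        return vector;
    }
}

A common cause of wrong results is building the classification-time vectors with a different encoder, dimension, or tokenisation than the training-time vectors, so keep those identical.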
