Segmentation fault while training an ML model using YOLOv5 - machine-learning

I'm working on an object detection project using YOLOv5. When I run the train.py file, training stops with a segmentation fault. How do I solve it?
zsh: segmentation fault python3 train.py --img 640 --cfg yolov5m.yaml --hyp --batch 5 --epochs 1
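One thing worth checking (an assumption based on the command above, not a confirmed diagnosis): --hyp in yolov5's train.py expects the path to a hyperparameter YAML, so passing the flag without a value is probably unintended, and dataloader multiprocessing is a frequent source of segfaults on macOS. A hedged variant to try (the default hyp file path varies by yolov5 version):
python3 train.py --img 640 --cfg yolov5m.yaml --hyp data/hyps/hyp.scratch-low.yaml --batch 5 --epochs 1 --workers 0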

Related

Darknet.exe Prints Out CUDA & OpenCV Version When Run

I have been trying to run inference with the YOLOv4 Darknet model I trained, and whenever I run the command in PowerShell, all it does is print out the CUDA and OpenCV versions. If anybody has experienced this or knows a solution, that would be amazing.
You have to specify the input file/video as the last command-line argument to perform inference on that particular file/video.
For example:
Input image: darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output dog.jpg
Video: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output test.mp4
Webcam 0: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -c 0

Elephas tutorial error - ValueError: Could not interpret optimizer identifier

I'm trying to run this elephas tutorial on Colab.
I prepared the environment with:
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://downloads.apache.org/spark/spark-2.4.6/spark-2.4.6-bin-hadoop2.7.tgz
!tar xf spark-2.4.6-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.6-bin-hadoop2.7"
import findspark
findspark.init("spark-2.4.6-bin-hadoop2.7")
!pip install elephas
When I fit the model:
pipeline = Pipeline(stages=[estimator])
fitted_pipeline = pipeline.fit(df)
I get the following error message:
>>> Fit model
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-6d2ae7604dd2> in <module>()
1 # Fitting a model returns a Transformer
2 pipeline = Pipeline(stages=[estimator])
----> 3 fitted_pipeline = pipeline.fit(df)
... 11 frames elided ...
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizers.py in get(identifier)
901 else:
902 raise ValueError(
--> 903 'Could not interpret optimizer identifier: {}'.format(identifier))
ValueError: Could not interpret optimizer identifier: 1e-06
As you can see, the error relates to the decay parameter (decay=1e-6), but I still get the same error even when I change this value. The optimizer is defined as:
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
sgd_conf = optimizers.serialize(sgd)
Any ideas?
This may have to do with an incompatibility with the TensorFlow 2.0 API. I would recommend retrying with the latest release, https://github.com/danielenricocahall/elephas/releases/tag/1.0.0, which now contains support for TensorFlow 2.1.x and 2.3.x.
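A minimal way to try that in the same Colab setup (a sketch, assuming the 1.0.0 release is published to PyPI; if it isn't, the second command installs straight from the GitHub tag):
!pip install -q elephas==1.0.0
# alternative, installing directly from the release tag:
!pip install -q git+https://github.com/danielenricocahall/elephas.git@1.0.0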

Error while downloading pre-trained GAN model in PyTorch: 'memory' file not found

I was following the steps given in https://modelzoo.co/model/pytorch-cyclegan-and-pix2pix to download a pre-trained model.
These were the first three commands given there:
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
However, when I ran the third command, I got an error:
fatal error: 'memory' file not found
#include <memory>
error: command 'gcc' failed with exit status 1
If anyone has an idea of how to overcome this error, it would be really helpful.
See this post. Try the line
apt-get install build-essential
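A slightly fuller sketch of that fix, assuming a Debian/Ubuntu system (the missing <memory> header usually means the C++ standard library headers aren't installed):
sudo apt-get update
sudo apt-get install build-essential
# then retry the failing step
cd vision
python setup.py install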

Exception in thread "main" java.lang.IllegalArgumentException: Training dataset /tmp/iris.csv cannot be found, in Mahout

When I try to run the MLP trainer in Mahout:
/home/batu/Documents/mahout/trunk/bin/mahout org.apache.mahout.classifier.mlp.TrainMultilayerPerceptron -i /tmp/iris.csv -labels Iris-setosa Iris-versicolor Iris-virginica -mo /tmp/mlp.model -ls 4 8 3 -l 0.2 -m 0.35 -r 0.0001
this exception occurs:
Exception in thread "main" java.lang.IllegalArgumentException: Training dataset /tmp/iris2.csv cannot be found!
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
at org.apache.mahout.classifier.mlp.TrainMultiLayerPerceptron.main(TrainMultiLayerPerceptron.java:124)
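Note that the command passes -i /tmp/iris.csv while the error names /tmp/iris2.csv, so the posted command and the actual run may differ. Beyond that, one common cause (an assumption, not confirmed by the question) is that when Mahout runs against a Hadoop cluster, the -i path must exist in HDFS rather than on the local filesystem. A quick check:
hadoop fs -ls /tmp/iris.csv
# if it's missing, copy it in from the local filesystem
hadoop fs -put iris.csv /tmp/iris.csv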

No input clusters found in /user/mahout/cluster/part-randomSeed. Check your -c argument

My test.csv file:
==================
1,54,1341775056478
2,1568,1341775056478
1,1622,1341775056498
2,3136,1341775056498
1,3190,1341775056671
2,4704,1341775056671
1,4758,1341775056693
2,6272,1341775056693
1,6326,1341775056714
2,7840,1341775056714
1,7894,1341775056735
2,9408,1341775056735
1,9462,1341775056951
2,10976,1341775056951
1,11030,1341775056972
2,12544,1341775056972
1,12598,1341775056994
2,14112,1341775056994
1,14166,1341775057014
2,15680,1341775057014
1,15734,1341775057065
2,17248,1341775057065
1,17302,1341775057087
2,18816,1341775057087
1,18870,1341775057119
2,20384,1341775057119
....
....
I am trying to cluster this data using Mahout's k-means algorithm.
I followed these steps:
1) Create a sequence file from the test.csv file:
mahout seqdirectory -c UTF-8 -i /user/mahout/input/test.csv -o /user/sample/out_seq -chunk 64
2) Create a sparse vector from the sequence file:
mahout seq2sparse -i /user/mahout/out_seq/ -o /user/mahout/sparse_dir --maxDFPercent 85 --namedVector
3) Perform k-means clustering:
mahout kmeans -i /user/mahout/sparse_dir/tfidf-vectors/ -c /user/mahout/cluster -o /user/mahout/kmeans_out -dm org.apache.mahout.common.distance.CosineDistanceMeasure --maxIter 10 --numClusters 20 --ow --clustering
At step 3, I'm facing this error:
Exception in thread "main" java.lang.IllegalStateException: No input clusters found in /user/mahout/text/cluster/part-randomSeed. Check your -c argument.
at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClusters(KMeansDriver.java:213)
at org.apache.mahout.clustering.kmeans.KMeansDriver.run(KMeansDriver.java:147)
....
....
How do I overcome this error? I actually ran the clustering example successfully using the Reuters dataset, but with my own dataset it shows this issue. Is there a problem with the dataset, or is some other issue causing this error?
Can anyone suggest something regarding this issue?
Thanks in advance
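A first diagnostic worth running (a sketch, assuming the default output layout from the steps above): dump the generated vectors to confirm seq2sparse actually produced input for k-means. seqdirectory and seq2sparse are designed for directories of text documents, so a single numeric CSV may well vectorize to little or nothing, which would leave the random-seed step with no input clusters to write.
hadoop fs -ls /user/mahout/sparse_dir/tfidf-vectors/
# the part file name below is illustrative; use whatever the listing shows
mahout seqdumper -i /user/mahout/sparse_dir/tfidf-vectors/part-r-00000 | head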
