Will installing distributed XGBoost handle a huge dataset? - machine-learning

I have a dataset of about 40 GB and my Mac only has 16 GB of memory. I want to run XGBoost on this dataset.
It seems that the Mac cannot load it directly into memory.
Will using distributed XGBoost let me handle the 40 GB dataset on my Mac? Any advice on how to handle the dataset? Thanks.
(I have already transformed the original CSV file to LibSVM format, but the size is still over 40 GB.)
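If you do go the distributed route, XGBoost's Dask interface reads the data in partitions instead of loading everything at once; whether that actually fits in 16 GB then depends on partition sizes and disk spilling. The sketch below is a rough illustration only, assuming you start from the original CSV (with a hypothetical "label" column) rather than the LibSVM file, on a local Dask cluster.
from dask.distributed import Client, LocalCluster
import dask.dataframe as dd
import xgboost as xgb

# Local cluster; keep per-worker memory well below the 16 GB total.
cluster = LocalCluster(n_workers=4, memory_limit="3GB")
client = Client(cluster)

# Read the large CSV lazily, in partitions ("label" is a placeholder column name).
df = dd.read_csv("train.csv")
y = df["label"]
X = df.drop(columns=["label"])

dtrain = xgb.dask.DaskDMatrix(client, X, y)
result = xgb.dask.train(
    client,
    {"objective": "binary:logistic", "tree_method": "hist"},
    dtrain,
    num_boost_round=100,
)
booster = result["booster"]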

Related

Is there a way to use distributed training with Dask using my GPU?

As of now, the LightGBM model supports GPU training and distributed training (using Dask).
If it is possible, how can I use distributed training with Dask on my GPU, or is there any other way to do so?
My task is to use the power of both the GPU and distributed training with a LightGBM model.
It may be that I am missing a concept, because I'm a beginner.
I'm not a LightGBM expert, so it might be better to wait for someone to chime in. But from what I've been able to find, LightGBM does not currently support using Dask and GPU training together.
See https://github.com/microsoft/LightGBM/issues/4761#issuecomment-956358341:
Right now the dask interface doesn't directly support distributed training using GPU, you can subscribe to #3776 if you're interested in that. Are you getting any warnings about this? I think it probably isn't using the GPU at all.
Furthermore, if your data fits in a single machine then it's probably best not using distributed training at all. The dask interface is there to help you train a model on data that doesn't fit on a single machine by having partitions of the data on different machines which communicate with each other, which adds some overhead compared to single-node training.
And https://github.com/microsoft/LightGBM/issues/3776:
The Dask interface in https://github.com/microsoft/LightGBM/blob/706f2af7badc26f6ec68729469ec6ec79a66d802/python-package/lightgbm/dask.py currently only supports CPU-based training.
Anyway, if you have only one GPU, Dask shouldn't be of much help.
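To make the distinction concrete, here is a rough sketch of the two setups that do work today: single-node GPU training via the device parameter (which needs a GPU-enabled LightGBM build), and CPU-only distributed training through the Dask estimators. The data and parameter values below are placeholders.
import numpy as np
import dask.array as da
from dask.distributed import Client
import lightgbm as lgb

# Option 1: single-node GPU training (no Dask); requires LightGBM compiled with GPU support.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)
gpu_model = lgb.LGBMClassifier(device="gpu", n_estimators=100)
gpu_model.fit(X, y)

# Option 2: distributed CPU training with the Dask interface.
client = Client()  # local cluster, just for illustration
dX = da.random.random((100_000, 20), chunks=(10_000, 20))
dy = da.random.randint(0, 2, size=100_000, chunks=10_000)
dask_model = lgb.DaskLGBMClassifier(n_estimators=100)
dask_model.fit(dX, dy)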

ML.NET Image Classification Training Freezing

I'm using the latest version of ML.NET image classification in Visual Studio 2019 on a Windows 10 PC to detect inappropriate images. I was using a dataset of 3000 SFW and 3000 NSFW images to train it, but it got stuck while training. There are no errors outputted, it just stops using the CPU and stops outputting to the console.
It has often stopped randomly after a line such as:
[Source=ImageClassificationTrainer; ImageClassificationTrainer, Kind=Trace] Phase: Bottleneck Computation, Dataset used: Train, Image Index: 1109
or
[Source=ImageClassificationTrainer; MultiClassClassifierScore; Cursor, Kind=Trace] Channel disposed
After it stops using the CPU, the training page in the Machine Learning Model Builder remains unchanged (the progress screenshot is not shown here).
I have also tried this with a smaller dataset of 700 images for each type but ended up with similar results. What's causing this?
This may be related to the chosen training environment: most likely you selected GPU training, but it is not supported on your machine. Choose CPU instead.

Can we use a CPU instead of a GPU to train a custom YOLO model for object detection?

I want to train a YOLO model on my custom object dataset. I have read about this on various sites, and everyone says a GPU should be used to train and run a custom YOLO model.
But since I don't have a GPU, I am confused about what to do; I cannot buy one for this. I also read about Google Colab, but I can't use it, because I want to run my model on an offline system.
I got worried after seeing the system utilization of the YOLO program from GitHub:
https://github.com/AhmadYahya97/Fully-Automated-red-light-Violation-Detection.git.
I was running this on my laptop with the following configuration:
RAM: 4 GB
Processor: Intel i3, 2.40 GHz
OS: Ubuntu 18.04 LTS
Although it is going to be a lot slower, yes, you can use only the CPU for both training and prediction. If you are using the original Darknet framework, set the GPU flag in the Makefile to GPU=0 when building Darknet.
How to install darknet : https://pjreddie.com/darknet/install/
Then you can start training or predicting by following this guide: https://pjreddie.com/darknet/yolo/
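If you only need CPU prediction with an already trained model, another route (separate from the Darknet binary, and covering inference only, not training) is OpenCV's DNN module, which can load Darknet cfg/weights files and run them on the CPU. A minimal sketch, where yolov3.cfg, yolov3.weights, and image.jpg are placeholder file names:
import cv2

# Load the Darknet model (placeholder file names).
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)  # plain CPU backend
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

# Preprocess one image and run a forward pass on the CPU.
img = cv2.imread("image.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
print([o.shape for o in outputs])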

Using Python with ML

I don't have GPU support, so my model often takes hours to train. Can I train my model in stages? For example, if I want 100 epochs but a power cut stops training at the 50th epoch, I want retraining to continue from where it left off (the 50th epoch) rather than start over.
It would be much appreciated if anyone could explain this with an example.
The weights are already updated, so retraining the model with the updated weights, without reinitializing them, will carry on from where it left off.
You can also work with online notebooks like Google's Colab or Microsoft's Azure Notebooks if you have resource problems. They offer a good working environment; Colab, for example, has GPU and TPU support and a 16 GB RAM limit.
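If you are using Keras, for example, the usual pattern is to save a checkpoint every epoch and pass initial_epoch when you resume. A minimal sketch follows; the model, data, and checkpoint.keras file name are placeholders, and the exact checkpoint format depends on your TensorFlow version.
import tensorflow as tf

# Placeholder model and data, just to make the example runnable.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
x = tf.random.normal((1000, 20))
y = tf.cast(tf.random.uniform((1000, 1)) > 0.5, tf.float32)

# Save the whole model (weights + optimizer state) after every epoch.
ckpt = tf.keras.callbacks.ModelCheckpoint("checkpoint.keras", save_freq="epoch")
model.fit(x, y, epochs=100, callbacks=[ckpt])

# After a power cut at, say, epoch 50: reload and continue to epoch 100.
model = tf.keras.models.load_model("checkpoint.keras")
model.fit(x, y, initial_epoch=50, epochs=100, callbacks=[ckpt])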

Using Torch trained VGG face

Is there any way I can pass existing images on my system through a trained VGG with Torch? I am using Ubuntu 14.04 and unfortunately do not have a GPU. I have searched quite extensively, but all the examples I have found require a GPU. Are there other ways to use VGG without Torch? I'm open to suggestions, but the method should not require a GPU.
While running the network using a GPU will make things a lot faster, you can run the network in CPU mode only.
Once you load the model (pretrained on a GPU), you can simply convert it to CPU as follows:
model = model:float() -- convert the parameters to CPU float tensors
You can easily load an image from your computer with the help of the image library and then do a forward pass:
local img = image.load(imagefile, 3, 'float') -- 3-channel float tensor, matching model:float()
img = image.scale(img, 224, 224)              -- resize to the VGG input size; apply any model-specific preprocessing as well
local output = model:forward(img)
