Project Using Machine Learning for Optimization [closed]

I want to build a website project that uses machine learning to optimize car throughput in a city. It would show a cartoonish grid of dots navigating a grid of streets, with a stoplight at each intersection. However, I have not been able to find the right resources for learning about this type of ML optimization.
The starting idea is that the grid of stoplights is given the same set of cars each epoch, and each stoplight adjusts its own green/red timing to maximize traffic flow. The metric the model would learn against is the number of cars that get through each light (or the time for all cars to clear the city; I'm not sure yet).
I have done the Google ML Crash Course and read A Programmer's Guide to Artificial Intelligence, but I have yet to find the type of ML I am looking for: a way to train a model with no labeled data, only a metric to optimize.

Reinforcement learning was what I was looking for. I'm now reading the TensorFlow documentation on how a virtual light signal can take actions and receive rewards from an environment.
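As a concrete starting point, here is a minimal tabular Q-learning sketch for a single intersection. It does not use TensorFlow, and the toy environment (the queue dynamics, the reward as negative total queue length) is an illustrative assumption, not a real traffic simulator:

```python
import random
from collections import defaultdict

# Toy environment (an assumption for illustration): one intersection with
# north-south and east-west queues. Action 0 = NS green, action 1 = EW green.
class ToyIntersection:
    def __init__(self):
        self.queues = [0, 0]  # [NS, EW] cars waiting

    def reset(self):
        self.queues = [random.randint(0, 5), random.randint(0, 5)]
        return tuple(self.queues)

    def step(self, action):
        self.queues[action] = max(0, self.queues[action] - 3)  # green lets 3 cars through
        self.queues[1 - action] += random.randint(0, 2)        # red-side queue grows
        reward = -sum(self.queues)                             # fewer waiting cars = better
        return tuple(self.queues), reward

env = ToyIntersection()
q_table = defaultdict(lambda: [0.0, 0.0])  # state -> estimated value of each action
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    state = env.reset()
    for t in range(50):
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: q_table[state][a])
        next_state, reward = env.step(action)
        # Standard Q-learning update toward reward plus discounted future value.
        best_next = max(q_table[next_state])
        q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])
        state = next_state
```

Scaling this up to a grid of lights usually means either one agent per intersection or replacing the table with a function approximator such as a neural network, which is where TensorFlow comes in.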

Related

ML models can only be used for making future predictions? [closed]

I'm aware that ML models are used to make predictions about the future, but can they also be used to make predictions about the past?
I have a model that predicts accident-prone zones for a given location, date, and time. The model was developed from the previous two years of data (2020 and 2021). I have a few datasets from 2019 that I am required to predict on, in order to verify whether the predictions actually tally.
Would it be feasible to use this ML model to test on the 2019 dataset?
I'm using scikit-learn, and the model is a random forest.
Theoretically it is possible; to the model, the direction in time doesn't matter. For example, if a trend is increasing toward the future, it was probably lower in the past, so the model will simply predict a decrease in that direction. However, how relevant the prediction is remains a separate question: the model only knows the patterns present in the 2020-2021 training data, so anything that changed between 2019 and that period will not be captured.
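Mechanically, scoring past data is no different from scoring future data. A minimal sketch with stand-in data (the feature layout and array names are assumptions for illustration; in practice these would be your actual 2020-2021 and 2019 rows):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Stand-in data for illustration; replace with your real
# 2020-2021 training rows and the 2019 rows to verify against.
X_train = rng.normal(size=(1000, 4))        # e.g., lat, lon, hour, day-of-week
y_train = (X_train[:, 2] > 0.5).astype(int)
X_2019 = rng.normal(size=(200, 4))
y_2019 = (X_2019[:, 2] > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Scoring past data is mechanically identical to scoring future data:
# the model sees only feature values, not which direction in time they came from.
pred_2019 = model.predict(X_2019)
print(classification_report(y_2019, pred_2019))
```

Whether the resulting scores are trustworthy is the non-mechanical part: if the underlying relationships shifted between 2019 and 2020, poor 2019 scores may reflect that shift rather than a bad model.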

Random Forest Calibration [closed]

I am experimenting with random forests for a classification problem and am wondering about calibration for individual features. For example, consider a two-level factor such as home-field advantage in a sporting event, which we can be certain has an average effect of around +5% on win rate and whose effect is not captured by any other feature in the data.
It appears that the nature of random forests (selecting N random features to consider at each split) will not allow the model to fully capture the effect of any one particular feature like this. Setting the max_features parameter to None, so that all features are considered at every split, appears to solve the problem, but then the forest loses the benefit of diversity between trees.
Are there any good methods for dealing with this type of feature, which we want fully captured based on domain knowledge we have about the problem? The kind of comparison I mean is sketched below.
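To make the comparison concrete, here is roughly the experiment in question. The synthetic data and the way the +5% effect is encoded are illustrative assumptions, not the real sports dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: 20 noisy features plus one binary "home field" flag
# that shifts win probability by roughly +5 percentage points.
X_noise = rng.normal(size=(n, 20))
home = rng.integers(0, 2, size=n)
p_win = 0.50 + 0.05 * home + 0.02 * X_noise[:, 0]
y = rng.random(n) < p_win
X = np.column_stack([X_noise, home])

# Compare restricted feature sampling vs. considering all features per split.
for max_features in ["sqrt", None]:
    clf = RandomForestClassifier(n_estimators=300, max_features=max_features, random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"max_features={max_features}: accuracy={score:.3f}")
```

With max_features="sqrt", any given split only sees the home-field flag about 1 in 5 times, which is the dilution effect described above.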

How to choose which model to fit to data? [closed]

Given a particular dataset and a binary classification task, is there a way to choose the type of model that is likely to work best? For example, consider the Titanic dataset on Kaggle: https://www.kaggle.com/c/titanic. Just by analyzing graphs and plots, are there any general rules of thumb for picking random forests vs. KNNs vs. neural nets, or do I just need to test them all and pick the best-performing one?
Note: I'm not talking about image data, since CNNs are obviously the best choice there.
No, you need to test different models to see how they perform.
Based on papers and Kaggle results, the top algorithms for tabular data tend to be boosting methods (XGBoost, LightGBM, AdaBoost), stacks of those combined, or plain random forests. But there are instances where logistic regression can outperform them all.
So just try them all. Unless the dataset is very large (say, well over 100k rows), you're not going to lose much time, and you might learn something valuable about your data.
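A minimal way to "try them all" is a cross-validated loop over candidate models. This sketch uses generated stand-in data; swap in your own preprocessed features and labels (scaling matters especially for KNN, hence the pipelines):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data; replace with your own X, y.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15)),
}

# Same cross-validation splits and metric for every candidate,
# so the comparison is apples to apples.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Whichever model wins here is also the one worth spending hyperparameter-tuning time on.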

How can we do theft detection in shopping malls by cctv? [closed]

I have a problem related to computer vision and machine learning. We are working on a video surveillance system that will be trained to detect suspicious activity such as theft or shoplifting in stores. We are not sure whether this is feasible or whether such a system can actually solve the problem, so any suggestions would be appreciated.
While I understand that OpenCV is great for face detection and usable for face recognition, can it be used for analyzing "actions", such as the act of sitting or the act of lifting an object off a shelf? If so, which algorithms should I dig deeper into?
Are there other libraries (besides OpenCV) that are needed for such tasks? Are there open-source libraries for this?
What you are trying to achieve is currently a very active area of computer vision and machine learning research, called behaviour analysis or activity detection. State-of-the-art approaches can be found in journals like PAMI or conferences like CVPR and NIPS. As of today, it is nowhere near the performance you would need to build an automatic theft-detection system for the general case (i.e., any surveillance camera looking at any scene in any orientation). Behaviour analysis rests on many underlying techniques, such as estimating the pose of people in images; current research is still working out how to reliably detect whether there is a person in a picture, and where their limbs are, in the general case.
Here is what might be feasible with the current state of research: a system that helps an operator focus on potential threats when cameras have a clear, unobstructed view of a mostly static environment (e.g., glass displays). An operator could then monitor many more cameras than before, because the system would automatically hide the feeds that clearly contain no suspicious activity or movement.
To learn more about current possibilities, I recommend checking the literature (like this example), decomposing the problem into subparts, and leveraging your priors (your a priori knowledge of the scene and the people you're looking at) as much as possible.
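The "hide feeds with no movement" part, at least, is well within reach of classic computer vision. A minimal sketch using OpenCV background subtraction (the video path and the motion threshold are illustrative assumptions to be tuned per camera):

```python
import cv2

# MOG2 background subtraction: foreground pixels indicate movement.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical feed

MOTION_AREA_THRESHOLD = 1500  # illustrative value; tune per camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Count foreground pixels; surface the feed only when enough of the frame moves.
    moving_pixels = cv2.countNonZero(mask)
    if moving_pixels > MOTION_AREA_THRESHOLD:
        print("movement detected -- surface this camera to the operator")

cap.release()
```

Deciding whether the detected movement is actually *suspicious* is the hard, unsolved research part described above.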
Using object recognition (with the help of deep learning), you can detect objects, and with a dataset of the items recorded in the shop you can look up each object's details, such as its price. Based on the number of objects and the information about each one, you could then flag issues such as theft at the counter.

difference between data mining and machine learning [closed]

I am new to this area. As I understand it:
Data mining means retrieving useful information from data with respect to a data model.
Machine learning seeks to identify behavior patterns in data, and then build various models based on the observed patterns.
Is this understanding correct?
Also, Data Mining is often considered a sub-field of Machine Learning.
Data Mining usually goes only as far as interpreting the data (e.g. categorizing newspaper articles based on their theme, or books according to the suitable age of readers). It is a part of Machine Learning that is given raw data, and then, using Machine Learning methods, extracts some meaningful information about it.
Machine Learning in general can involve more steps than just interpreting the data. Programs developed with Machine Learning techniques can also act upon the knowledge "learned" from the data. For example, a program that is given a bunch of example Checkers games and, based on those, is able to play the game (well) has "learned" from the examples -- the data -- and can now interpret new (similar) data and act upon it.
The terms are not overly strict in definition, but basically I think what you're saying is correct.
Machine learning involves algorithm identification and finessing, whereas data mining implies a more static algorithm that is applied to fixed data. The output of machine learning is information of course, but also new algorithms identified through the process. Data mining seeks to apply a pre-existing algorithm over data.
