I am a computer science student and I have to choose the topic of my future research work. I really want to solve scientific problems in chemistry (or maybe biology) using computers. I also have a strong interest in machine learning.
I have been searching the internet for a while and have found some references on this kind of problem, but unfortunately they are not enough for me.
So, I am interested in the community's recommendations of particular resources that present the application of an ML technique to solve a problem in chemistry--e.g., a journal article or a good book describing typical (or new) problems in chemistry being solved "in silico".
I should think that chemistry, as much as any domain, has a rich supply of problems particularly suited to ML. The class of problems I have in mind is QSAR (quantitative structure-activity relationships), both for naturally occurring compounds and prospectively, e.g., in drug design.
Perhaps have a look at AZOrange--an entire ML library built for the sole purpose of solving chemistry problems using ML techniques. In particular, AZOrange is a re-implementation of the highly-regarded GUI-driven ML Library, Orange, specifically for the solution of QSAR problems.
In addition, here are two particularly good ones--both published within the last year and in both, ML is at the heart (the link is to the article's page on the Journal of Chemoinformatics Site and includes the full text of each article):
AZOrange-High performance open source machine learning for QSAR modeling in a graphical programming environment.
2D-Qsar for 450 types of amino acid induction peptides with a novel substructure pair descriptor having wider scope
It seems to me that the general nature of QSAR problems is ideal for study by ML:
- a highly non-linear relationship between the explanatory variables (i.e., "features") and the response variable (i.e., "class labels" or "regression estimates");
- at least for the larger molecules, the structure-activity relationships are sufficiently complex that they are at least several generations from solution by analytical means, so any hope of accurately predicting these relationships rests on empirical techniques;
- oceans of training data pairing some form of instrument-produced data (e.g., protein structure determined by X-ray crystallography) with laboratory data recording the chemical behavior of that protein (e.g., reaction kinetics).
So here are a couple of suggestions for interesting and current areas of research at the ML-chemistry interface:
QSAR prediction applying current "best practices"; for instance, the technique that won the Netflix Prize (awarded September 2009) was not based on a state-of-the-art ML algorithm; instead it used kNN. The interesting aspects of the winning technique are:
- the data imputation technique, i.e., the technique for reconstructing data rows that have one or more features missing; the particular technique for solving this sparsity problem is usually referred to as Positive Maximum Margin Matrix Factorization (or Non-Negative Maximum Margin Matrix Factorization). Perhaps there are interesting QSAR problems which were deemed insoluble by ML techniques because of poor data quality, in particular sparsity. Armed with PMMMF, these might be good problems to revisit.
- algorithm combination: the rubric of post-processing techniques that involve combining the results of two or more classifiers was generally known to ML practitioners prior to the Netflix Prize, but in fact these techniques were rarely used. The most widely used of these techniques are AdaBoost, gradient boosting, and bagging (bootstrap aggregation). I wonder if there are some QSAR problems for which the state-of-the-art ML techniques have not quite provided the resolution or prediction accuracy required by the problem context; if so, it would certainly be interesting to know whether those results could be improved by combining classifiers. Aside from their often dramatic improvement in prediction accuracy, an additional advantage of these techniques is that many of them are very simple to implement. For instance, boosting works roughly like this: train your classifier for some number of epochs and look at the results; identify the data points in your training data that your classifier handled worst, i.e., the ones it consistently predicted incorrectly over many epochs; apply a higher weight to those training instances (i.e., penalize your classifier more heavily for an incorrect prediction on them) and re-train your classifier with this "new" data set. (Bagging, by contrast, trains each classifier on a different bootstrap sample of the training data and combines their predictions.) A minimal sketch of both appears below.
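Here is a minimal sketch of trying ensembles, assuming scikit-learn is available; the synthetic dataset and the parameter choices are illustrative stand-ins, not drawn from any QSAR study:

```python
# Minimal sketch: comparing a single tree with bagging and boosting ensembles.
# Assumes scikit-learn; the synthetic data stands in for QSAR descriptors.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic "descriptor" matrix: 500 compounds x 30 features, binary activity label.
X, y = make_classification(n_samples=500, n_features=30, n_informative=10, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),    # bootstrap-aggregated trees
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),  # re-weights hard examples
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

Whether the ensembles actually beat the single model depends entirely on the data; the point is only how little code the comparison takes.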
I am looking for research papers or books that have a good, basic definition of what supervised and unsupervised learning are, so that I can quote these definitions in my project.
Thank you so much.
I would reference the following book: Artificial Intelligence: A Modern Approach (3rd Edition) by Stuart Russell and Peter Norvig. Chapter 18 (page 693 and onward) gives an analysis of supervised and unsupervised learning. About unsupervised learning:
In unsupervised learning, the agent learns patterns in the input even though no explicit feedback is supplied. The most common unsupervised learning task is clustering: detecting potentially useful clusters of input examples. For example, a taxi agent might gradually develop a concept of "good traffic days" and "bad traffic days" without ever being given labeled examples of each by a teacher.
While for supervised:
In supervised learning, the agent observes some example input–output pairs and learns a function that maps from input to output. In component 1 above, the inputs are percepts and the output is provided by a teacher who says "Brake!" or "Turn left." In component 2, the inputs are camera images and the outputs again come from a teacher who says "that's a bus." In 3, the theory of braking is a function from states and braking actions to stopping distance in feet. In this case the output value is available directly from the agent's percepts (after the fact); the environment is the teacher.
The examples are mentioned in the text above.
Christopher M. Bishop, "Pattern Recognition and Machine Learning", p.3 (emphasis mine)
Applications in which the training data comprises examples of the input vectors along with their corresponding target vectors are known as supervised learning problems...
In other pattern recognition problems, the training data consists of a set of input vectors x without any corresponding target values. The goal in such unsupervised learning problems may be to discover groups of similar examples within the data, where it is called clustering, or to determine the distribution of data within the input space, known as density estimation, or to project the data from a high-dimensional space down to two or three dimensions for the purpose of visualization.
Which is as good a definition as you can get. Basically, the most noticeable difference is whether we have labels with respect to which we want the learning model to optimize. If only some of the labels are missing, it can still be described as weakly-supervised learning. If no labels are available, the only thing left is to find some structure in the data.
Thanks @Pavel Tyshevskyi for the answer. Your answer is perfect, but it seems a little bit hard to understand for beginners like me.
And after an hour of searching, I found my own version of the answer in the book "Machine Learning For Dummies, IBM Limited Edition", in the section "Approaches to Machine Learning" of chapter 1, "Understanding Machine Learning". It has a simpler definition and examples that help me understand a bit better. Link to the book: Machine Learning For Dummies, IBM Limited Edition
Supervised learning
Supervised learning typically begins with an established set of data and a certain understanding of how that data is classified. Supervised learning is intended to find patterns in data that can be applied to an analytics process. This data has labeled features that define the meaning of the data. For example, there could be millions of images of animals, including an explanation of what each animal is, and then you can create a machine learning application that distinguishes one animal from another. By labeling this data about types of animals, you may have hundreds of categories of different species. Because the attributes and the meaning of the data have been identified, it is well understood by the users that are training the modeled data so that it fits the details of the labels. When the label is continuous, it is a regression; when the data comes from a finite set of values, it is known as classification. In essence, regression used for supervised learning helps you understand the correlation between variables. An example of supervised learning is weather forecasting. By using regression analysis, weather forecasting takes into account known historical weather patterns and the current conditions to provide a prediction on the weather.
The algorithms are trained using preprocessed examples, and at this point, the performance of the algorithms is evaluated with test data. Occasionally, patterns that are identified in a subset of the data can't be detected in the larger population of data. If the model is fit to only represent the patterns that exist in the training subset, you create a problem called overfitting. Overfitting means that your model is precisely tuned for your training data but may not be applicable for large sets of unknown data. To protect against overfitting, testing needs to be done against unforeseen or unknown labeled data. Using unforeseen data for the test set can help you evaluate the accuracy of the model in predicting outcomes and results. Supervised training models have broad applicability to a variety of business problems, including fraud detection, recommendation solutions, speech recognition, or risk analysis.
Unsupervised learning
Unsupervised learning is best suited when the problem requires a massive amount of data that is unlabeled. For example, social media applications, such as Twitter, Instagram, Snapchat, and.....
There are several components and techniques used in learning programs. Machine learning components include ANNs, Bayesian networks, SVMs, PCA and other probability-based methods. What role do Bayesian-network-based techniques play in machine learning?
It would also be helpful to know how integrating one or more of these components into applications leads to real solutions, and how software deals with limited knowledge and still produces sufficiently reliable results.
Probability and Learning
Probability plays a role in all learning. If we apply Shannon's information theory, the movement of probability toward one of the extremes 0.0 or 1.0 is information. Shannon defined a bit in terms of the log base 2 of the ratio between the before and after probabilities of a hypothesis. Given the probability of the hypothesis and of its logical inversion, if the probability does not increase for either, no bits of information have been learned.
Bayesian Approaches
Bayesian networks are directed graphs that represent causality hypotheses. They are generally drawn as nodes with conditions, connected by arrows that represent the hypothetical causes and corresponding effects. Algorithms have been developed based on Bayes' Theorem that attempt to statistically analyze causality from data that has been or is being collected.
MINOR SIDE NOTE: There are often usage constraints for the analytic tools. Most Bayesian algorithms require that the directed graph be acyclic, meaning that no sequence of arrows anywhere in the graph forms a closed directed loop between two or more nodes. This is to avoid endless loops; however, there may be, now or in the future, algorithms that work with cycles and handle them seamlessly from both mathematical-theory and software-usability perspectives.
Application to Learning
The application to learning is that the calculated probabilities can be used to predict potential control mechanisms. The litmus test for learning is the ability to reliably alter the future through controls. An important application is sorting mail by reading its handwriting. Both neural nets and naive Bayesian classifiers can be useful in general pattern recognition integrated into routing or manipulation robotics.
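As a minimal sketch of the naive Bayes side of that idea, assuming scikit-learn and its bundled handwritten-digits dataset (illustrative only, not a production mail sorter):

```python
# Minimal sketch: a naive Bayes classifier on handwritten digits (scikit-learn assumed).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)        # 8x8 pixel images flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB().fit(X_train, y_train)   # each pixel treated as an independent Gaussian
print("test accuracy:", clf.score(X_test, y_test))
```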
Keep in mind here that the term network has a very wide meaning. Neural Nets are not at all the same approach as Bayesian Networks, although they may be applied to similar problem-solution topologies.
Relation to Other Approaches and Mechanisms
How a system designer uses support vector machines, principal component analysis, neural nets, and Bayesian networks in multivariate time series analysis (MTSA) varies from author to author. How they tie together also depends on the problem domain and the statistical qualities of the data set, including size, skew, sparseness, and the number of dimensions.
The list given includes only four of a much larger set of machine learning tools. For instance, fuzzy logic combines weights with production-system (rule-based) approaches.
The year is also a factor. An answer given now might be stale next year. If I were to write software given the same predictive or control goals as I was given ten years ago, I might combine various techniques entirely differently. I would certainly have a plethora of additional libraries and comparative studies to read and analyse before drawing my system topology.
The field is quite active.
I am learning linear algebra (started recently) and was curious to know its applications in machine learning. Where can I read about this?
Thank you
Linear Algebra provides the computational engine for the majority of Machine Learning algorithms.
For instance, probably the most conspicuous and most frequent application of ML is the recommendation engine. Aside from data retrieval, the real crux of these algorithms is often "reconstruction" of the ridiculously sparse data used as input for these engines. The raw data supplied to Amazon.com's user-based recommendation engine is (probably) a massive data matrix in which the users are the rows and the products are the columns. To organically populate this matrix, every customer would have to purchase every product Amazon.com sells, so the matrix is overwhelmingly sparse. Linear algebra-based techniques are used here: all of the techniques in current use involve some type of matrix decomposition, a fundamental class of linear algebra techniques (e.g., non-negative matrix approximation and positive-maximum-margin-matrix approximation are perhaps the two most common).
Second, many if not most ML techniques rely on a numerical optimization technique. E.g., most supervised ML algorithms involve creation of a trained classifier/regressor by minimizing the delta between the value calculated by the nascent classifier and the actual value from the training data. This can be done either iteratively or using linear algebra techniques; if the latter, the technique is usually SVD or some variant.
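As an illustration of the linear-algebra route, here is a minimal sketch, assuming NumPy, that fits a least-squares linear regressor in closed form; NumPy's lstsq solves the problem with an SVD-based LAPACK routine, and the data is synthetic:

```python
# Minimal sketch: closed-form least-squares fit via an SVD-based solver (NumPy assumed).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # 200 samples, 5 features
true_w = np.array([1.5, -2.0, 0.0, 3.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)     # noisy linear response

# np.linalg.lstsq minimizes ||X w - y||_2 using an SVD-based driver.
w_hat, residuals, rank, svals = np.linalg.lstsq(X, y, rcond=None)
print("estimated weights:", np.round(w_hat, 2))
```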
Third, the spectral decompositions--PCA (principal component analysis) and kernel PCA--are perhaps the most commonly used dimension-reduction techniques, often applied in a pre-processing step just ahead of the ML algorithm in the data flow; for instance, PCA is often used in a Kohonen map to initialize the lattice. The principal insight underneath these techniques is that the eigenvectors of the covariance matrix (a square, symmetric matrix prepared from the mean-centered original data matrix) are unit length and orthogonal to each other.
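A minimal sketch of that insight, assuming NumPy: mean-center the data, compute the covariance matrix, eigendecompose it, and project onto the top components (the data is synthetic and illustrative only):

```python
# Minimal sketch: PCA via eigendecomposition of the covariance matrix (NumPy assumed).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))             # 300 samples, 10 features

Xc = X - X.mean(axis=0)                    # mean-center each feature
cov = np.cov(Xc, rowvar=False)             # 10x10 symmetric covariance matrix

eigvals, eigvecs = np.linalg.eigh(cov)     # eigh: symmetric matrices, ascending eigenvalues
order = np.argsort(eigvals)[::-1]          # sort components by explained variance
components = eigvecs[:, order[:2]]         # keep the top 2 principal directions

X_reduced = Xc @ components                # project data down to 2 dimensions
print(X_reduced.shape)                     # (300, 2)
```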
In machine learning, we generally deal with data in the form of vectors and matrices, and any statistical method used involves linear algebra as an integral part. It is also useful in data mining.
SVD and PCA are well-known dimensionality reduction techniques involving linear algebra.
Bayesian decision theory also involves a significant amount of linear algebra; you can look into it as well.
Singular value decomposition (SVD) is a classic method widely used in machine learning.
I find this article fairly easy to follow; it explains an SVD-based recommendation system: http://www.igvita.com/2007/01/15/svd-recommendation-system-in-ruby/ .
And Strang's linear algebra book contains a section on the application of SVD to ranking web pages (the HITS algorithm); see Google Books.
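A minimal sketch of the idea behind an SVD-based recommender, assuming NumPy (this is not the Ruby code from the linked article, and the tiny ratings matrix is made up for illustration):

```python
# Minimal sketch: low-rank reconstruction of a ratings matrix via truncated SVD (NumPy assumed).
import numpy as np

# Rows = users, columns = items; 0 marks a missing rating (toy data).
R = np.array([[5, 4, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                           # keep the 2 strongest latent factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k approximation of R

print(np.round(R_hat, 1))                       # predicted scores, including the 0 (missing) cells
```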
What is machine learning?
What does machine learning code do?
When we say that a machine learns, does it modify its own code, or does it modify a history (database) which contains the experience of the code for a given set of inputs?
What is machine learning?
Essentially, it is a method of teaching computers to make and improve predictions or behaviors based on some data. What is this "data"? Well, that depends entirely on the problem. It could be readings from a robot's sensors as it learns to walk, or the correct output of a program for certain input.
Another way to think about machine learning is that it is "pattern recognition" - the act of teaching a program to react to or recognize patterns.
What does machine learning code do?
Depends on the type of machine learning you're talking about. Machine learning is a huge field, with hundreds of different algorithms for solving myriad different problems - see Wikipedia for more information; specifically, look under Algorithm Types.
When we say a machine learns, does it modify its own code, or does it modify a history (database) which contains the experience of the code for a given set of inputs?
Once again, it depends.
One example of code actually being modified is Genetic Programming, where you essentially evolve a program to complete a task (of course, the program doesn't modify itself - but it does modify another computer program).
Neural networks, on the other hand, modify their parameters automatically in response to prepared stimuli and expected response. This allows them to produce many behaviors (theoretically, they can produce any behavior because they can approximate any function to an arbitrary precision, given enough time).
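As a hedged illustration of "parameters adjusted in response to stimuli and expected response", here is a minimal perceptron-style update loop in plain Python with NumPy; the toy task and learning rate are invented for illustration:

```python
# Minimal sketch: a perceptron adjusting its weights from (input, expected output) pairs.
import numpy as np

# Toy task: learn the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)      # weights start at zero; learning changes these numbers, not the code
b = 0.0
lr = 0.1             # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        error = target - pred
        w += lr * error * xi          # nudge weights toward the expected response
        b += lr * error

print("learned weights:", w, "bias:", b)
```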
I should note that your use of the term "database" implies that machine learning algorithms work by "remembering" information, events, or experiences. This is not necessarily (or even often!) the case.
Neural networks, which I already mentioned, only keep the current "state" of the approximation, which is updated as learning occurs. Rather than remembering what happened and how to react to it, neural networks build a sort of "model" of their "world." The model tells them how to react to certain inputs, even if the inputs are something that it has never seen before.
This last ability - the ability to react to inputs that have never been seen before - is one of the core tenets of many machine learning algorithms. Imagine trying to teach a computer driver to navigate highways in traffic. Using your "database" metaphor, you would have to teach the computer exactly what to do in millions of possible situations. An effective machine learning algorithm would (hopefully!) be able to learn similarities between different states and react to them similarly.
The similarities between states can be anything - even things we might think of as "mundane" can really trip up a computer! For example, let's say that the computer driver learned that when a car in front of it slowed down, it had to slow down too. For a human, replacing the car with a motorcycle doesn't change anything - we recognize that the motorcycle is also a vehicle. For a machine learning algorithm, this can actually be surprisingly difficult! A database would have to store information separately about the case where a car is in front and where a motorcycle is in front. A machine learning algorithm, on the other hand, would "learn" from the car example and be able to generalize to the motorcycle example automatically.
Machine learning is a field of computer science, probability theory, and optimization theory which allows complex tasks to be solved for which a logical/procedural approach would not be possible or feasible.
There are several different categories of machine learning, including (but not limited to):
Supervised learning
Reinforcement learning
Supervised Learning
In supervised learning, you have some really complex function (mapping) from inputs to outputs, you have lots of examples of input/output pairs, but you don't know what that complicated function is. A supervised learning algorithm makes it possible, given a large data set of input/output pairs, to predict the output value for some new input value that you may not have seen before. The basic method is that you break the data set down into a training set and a test set. You have some model with an associated error function which you try to minimize over the training set, and then you make sure that your solution works on the test set. Once you have repeated this with different machine learning algorithms and/or parameters until the model performs reasonably well on the test set, then you can attempt to use the result on new inputs. Note that in this case, the program does not change, only the model (data) is changed. One could, theoretically, output a different program, but that is not done in practice, as far as I am aware. An example of supervised learning would be the digit recognition system used by the post office, where it maps the pixels to labels in the set 0...9, using a large set of pictures of digits that were labeled by hand as being in 0...9.
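A minimal sketch of that train/test workflow, assuming scikit-learn; the model and synthetic dataset are arbitrary stand-ins:

```python
# Minimal sketch: supervised learning as "fit on a training set, check on a held-out test set".
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                      # minimizes a loss over the training set
print("held-out accuracy:", model.score(X_test, y_test))
```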
Reinforcement Learning
In reinforcement learning, the program is responsible for making decisions, and it periodically receives some sort of reward/utility for its actions. However, unlike in the supervised learning case, the results are not immediate; the algorithm could prescribe a large sequence of actions and only receive feedback at the very end. In reinforcement learning, the goal is to build up a good model such that the algorithm will generate the sequence of decisions that lead to the highest long-term utility/reward. A good example of reinforcement learning is teaching a robot how to navigate by giving a negative penalty whenever its bump sensor detects that it has bumped into an object. If coded correctly, it is possible for the robot to eventually correlate its range finder sensor data with its bumper sensor data and the directions it sends to the wheels, and ultimately choose a form of navigation that results in it not bumping into objects.
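A minimal sketch of the reward-driven idea, using only the Python standard library: a tabular Q-learning update on a toy one-dimensional corridor where bumping into a wall is penalized (all names and numbers are illustrative, not a robot controller):

```python
# Minimal sketch: tabular Q-learning on a 1-D corridor; bumping a wall is penalized,
# reaching the right end is rewarded.
import random

n_states, actions = 5, [-1, +1]          # 5 cells, move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != n_states - 1:             # episode ends at the rightmost cell
        a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda x: Q[(s, x)])
        s_next = min(max(s + a, 0), n_states - 1)
        reward = -1.0 if s_next == s else (1.0 if s_next == n_states - 1 else 0.0)
        # Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s_next, x)] for x in actions) - Q[(s, a)])
        s = s_next

best = {s: max(actions, key=lambda x: Q[(s, x)]) for s in range(n_states)}
print("learned action per state:", best)   # should prefer moving right
```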
More Info
If you are interested in learning more, I strongly recommend that you read Pattern Recognition and Machine Learning by Christopher M. Bishop or take a machine learning course. You may also be interested in reading, for free, the lecture notes from CIS 520: Machine Learning at Penn.
Machine learning is a scientific discipline that is concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data, such as from sensor data or databases. Read more on Wikipedia
Machine learning code records "facts" or approximations in some sort of storage, and uses its algorithms to calculate different probabilities.
The code itself will not be modified when a machine learns, only the database of what "it knows".
Machine learning is a methodology to create a model based on sample data and use the model to make a prediction or strategy. It belongs to artificial intelligence.
Machine learning is simply a generic term for a variety of learning algorithms that learn from examples (unlabeled or labeled). The actual accuracy/error is largely determined by the quality of the training/test data you provide to your learning algorithm, and can be measured using a convergence rate. The reason you provide examples is that you want the learning algorithm of your choice to be able to make generalizations, guided by that information. The algorithms can be classed into two main areas: supervised learning (classification) and unsupervised learning (clustering) techniques. It is extremely important that you make an informed decision on how you plan to separate your training and test data sets, as well as on the quality of the data you provide to your learning algorithm. When you provide data sets, you also want to be aware of things like overfitting and maintaining a healthy bias in your examples. The algorithm then essentially learns on the basis of the generalization it achieves from the data you provided for training and then for testing; in the process you try to get your learning algorithm to handle new examples on the basis of your targeted training. In clustering there is very little informative guidance; the algorithm basically tries, by measuring patterns between data points, to build related sets of clusters, e.g., k-means or k-nearest neighbor.
some good books:
Introduction to ML (Nilsson/Stanford),
Gaussian Process for ML,
Introduction to ML (Alpaydin),
Information Theory Inference and Learning Algorithms (very useful book),
Machine Learning (Mitchell),
Pattern Recognition and Machine Learning (standard ML course book at Edinburgh and various Unis but relatively a heavy reading with math),
Data Mining and Practical Machine Learning with Weka (work through the theory using weka and practice in Java)
Reinforcement Learning there is a free book online you can read:
http://www.cs.ualberta.ca/~sutton/book/ebook/the-book.html
IR, IE, recommenders, and text/data/web mining in general use a lot of machine learning principles. You can even apply metaheuristic/global optimization techniques here to further automate your learning processes, e.g., apply an evolutionary technique like a GA (genetic algorithm) to optimize your neural-network-based approach (which may itself use some learning algorithm). You can also approach it in the form of a purely probabilistic machine learning approach, for example Bayesian learning. Most of these algorithms make very heavy use of statistics. Concepts of convergence and generalization are important to many of these learning algorithms.
Machine learning is the study in computing science of making algorithms that are able to classify information they haven't seen before, by learning patterns from training on similar information. There are all sorts of "learners" in this sense: neural networks, Bayesian networks, decision trees, k-means clustering, hidden Markov models and support vector machines are examples.
Based on the learner, they each learn in different ways. Some learners produce human-understandable frameworks (e.g. decision trees), and some are generally inscrutable (e.g. neural networks).
Learners are all essentially data-driven, meaning they save their state as data to be reused later. They aren't self-modifying as such, at least in general.
I think one of the coolest definitions of machine learning that I've read is from this book by Tom Mitchell. Easy to remember and intuitive.
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E
Shamelessly ripped from Wikipedia: Machine learning is a scientific discipline that is concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data, such as from sensor data or databases.
Quite simply, machine learning code accomplishes a machine learning task. That can be a number of things from interpreting sensor data to a genetic algorithm.
I would say it depends. No, modifying code is not normal, but it is not outside the realm of possibility. I would also not say that machine learning always modifies a history. Sometimes we have no history to build off of. Sometimes we simply want to react to the environment, but not actually learn from our past experiences.
Basically, machine learning is a very wide-open discipline that contains many methods and algorithms that make it impossible for there to be 1 answer to your 3rd question.
Machine learning is a term taken from the real world of people and applied to something that can't actually learn by itself - a machine.
To add to the other answers - machine learning will not (usually) change the code, but it might change its execution path and decisions based on previously or newly gathered data, hence the "learning" effect.
There are many ways to "teach" a machine - you give weights to many parameters of an algorithm, and then have the machine solve many cases; each time you give it feedback about the answer, and the machine adjusts the weights according to how close its answer was to yours, or according to the score you gave its answer, or according to some test of the results.
This is one way of learning and there are many more...
I want some expert guidance here on the best approach for solving a problem. I have investigated some machine learning, neural networks, and things like that. I've investigated Weka, some sort of Bayesian solution, R, several different things. I'm not sure how to proceed, though. Here's my problem.
I have, or will have, a large collection of events.. eventually around 100,000 or so. Each event consists of several (30-50) independent variables, and 1 dependent variable that I care about. Some independent variables are more important than others in determining the dependent variable's value. And, these events are time relevant. Things that occur today are more important than events that occurred 10 years ago.
I'd like to be able to feed some sort of learning engine an event, and have it predict the dependent variable. Then, knowing the real answer for the dependent variable for this event (and all the events that have come along before), I'd like for that to train subsequent guesses.
Once I have an idea of what programming direction to go, I can do the research and figure out how to turn my idea into code. But my background is in parallel programming and not stuff like this, so I'd love to have some suggestions and guidance on this.
Thanks!
Edit: Here's a bit more detail about the problem that I'm trying to solve: It's a pricing problem. Let's say that I'm wanting to predict prices for a random comic book. Price is the only thing I care about. But there are lots of independent variables one could come up with. Is it a Superman comic, or a Hello Kitty comic. How old is it? What's the condition? etc etc. After training for a while, I want to be able to give it information about a comic book I might be considering, and have it give me a reasonable expected value for the comic book. OK. So comic books might be a bogus example. But you get the general idea. So far, from the answers, I'm doing some research on Support vector machines and Naive Bayes. Thanks for all of your help so far.
Sounds like you're a candidate for Support Vector Machines.
Go get libsvm. Read "A practical guide to SVM classification", which they distribute, and is short.
Basically, you're going to take your events, and format them like:
dv1 1:iv1_1 2:iv1_2 3:iv1_3 4:iv1_4 ...
dv2 1:iv2_1 2:iv2_2 3:iv2_3 4:iv2_4 ...
run it through their svm-scale utility, and then use their grid.py script to search for appropriate kernel parameters. The learning algorithm should be able to figure out differing importance of variables, though you might be able to weight things as well. If you think time will be useful, just add time as another independent variable (feature) for the training algorithm to use.
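A minimal sketch of producing that file format from in-memory events, using plain Python; the events list and its field layout are hypothetical, only there to show the formatting step before svm-scale and grid.py are run:

```python
# Minimal sketch: writing events to libsvm's sparse "label index:value" format (plain Python).
# The events list and its field layout are hypothetical.
events = [
    (12.5, [0.3, 1.7, 0.0, 2.2]),   # (dependent variable, independent variables)
    (9.0,  [0.1, 0.0, 4.5, 1.1]),
]

with open("train.svm", "w") as f:
    for dv, ivs in events:
        # libsvm indices are 1-based; zero-valued features can be omitted (sparse format).
        fields = [f"{i + 1}:{v}" for i, v in enumerate(ivs) if v != 0.0]
        f.write(f"{dv} " + " ".join(fields) + "\n")
```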
If libsvm can't quite get the accuracy you'd like, consider stepping up to SVMlight. Only ever so slightly harder to deal with, and a lot more options.
Bishop's Pattern Recognition and Machine Learning is probably the first textbook to look to for details on what libsvm and SVMlight are actually doing with your data.
If you have some classified data - a bunch of sample problems paired with their correct answers - start by training some simple algorithms like k-nearest-neighbor and perceptron and seeing if anything meaningful comes out of it. Don't bother trying to solve it optimally until you know whether you can solve it simply or at all.
If you don't have any classified data, or not very much of it, start researching unsupervised learning algorithms.
It sounds like any kind of classifier should work for this problem: find the best class (your dependent variable) for an instance (your events). A simple starting point might be Naive Bayes classification.
This is definitely a machine learning problem. Weka is an excellent choice if you know Java and want a nice GPL lib where all you have to do is select the classifier and write some glue. R is probably not going to cut it for that many instances (events, as you termed it) because it's pretty slow. Furthermore, in R you still need to find or write machine learning libs, though this should be easy given that it's a statistical language.
If you believe that your features (independent variables) are conditionally independent (meaning, independent given the dependent variable), naive Bayes is the perfect classifier, as it is fast, interpretable, accurate and easy to implement. However, with 100,000 instances and only 30-50 features you can likely implement a fairly complex classification scheme that captures a lot of the dependency structure in your data. Your best bet would probably be a support vector machine (SMO in Weka) or a random forest (Yes, it's a silly name, but it helped random forest catch on.) If you want the advantage of easy interpretability of your classifier even at the expense of some accuracy, maybe a straight up J48 decision tree would work. I'd recommend against neural nets, as they're really slow and don't usually work any better in practice than SVMs and random forest.
The book Programming Collective Intelligence has a worked example with source code of a price predictor for laptops which would probably be a good starting point for you.
SVM's are often the best classifier available. It all depends on your problem and your data. For some problems other machine learning algorithms might be better. I have seen problems that neural networks (specifically recurrent neural networks) were better at solving. There is no right answer to this question since it is highly situationally dependent but I agree with dsimcha and Jay that SVM's are the right place to start.
I believe your problem is a regression problem, not a classification problem. The main difference: In classification we are trying to learn the value of a discrete variable, while in regression we are trying to learn the value of a continuous one. The techniques involved may be similar, but the details are different. Linear Regression is what most people try first. There are lots of other regression techniques, if linear regression doesn't do the trick.
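A minimal sketch of trying linear regression first, assuming scikit-learn; the features stand in for comic-book attributes and the data is invented for illustration:

```python
# Minimal sketch: linear regression as a first attempt at price prediction (scikit-learn assumed).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                     # e.g., age, condition, rarity, popularity (invented)
price = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", round(reg.score(X_test, y_test), 3))
```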
You mentioned that you have 30-50 independent variables, and that some are more important than the rest. So, assuming that you have historical data (or what we call a training set), you can use PCA (Principal Component Analysis) or other dimensionality reduction methods to reduce the number of independent variables. This step is of course optional. Depending on the situation, you may get better results by keeping every variable but adding a weight to each of them based on how relevant they are. Here, PCA can help you compute how "relevant" each variable is.
You also mentioned that events that occurred more recently should be more important. If that's the case, you can weight recent events higher and older events lower. Note that the importance of an event doesn't have to decay linearly with time; it may make more sense for it to decay exponentially, so you can play with the numbers here (see the sketch below). Or, if you are not short of training data, perhaps you can consider dropping data that is too old.
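A minimal sketch of exponential time-decay weights, assuming NumPy; the half-life value is arbitrary and would be tuned for the actual data. Many scikit-learn estimators can consume such weights through the sample_weight argument of fit, where supported.

```python
# Minimal sketch: exponentially decaying sample weights based on event age (NumPy assumed).
import numpy as np

age_in_days = np.array([0, 30, 365, 3650])   # how long ago each event occurred
half_life = 365.0                            # weight halves every year (arbitrary choice)

weights = 0.5 ** (age_in_days / half_life)
print(np.round(weights, 3))                  # e.g. [1.0, 0.945, 0.5, 0.001]
```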
Like Yuval F said, this does look more like a regression problem than a classification problem. Therefore, you can try SVR (Support Vector Regression), which is the regression version of SVM (Support Vector Machine).
Some other things you can try:
Play around with how you scale the value range of your independent variables - usually [-1...1] or [0...1]. But you can try other ranges to see if they help. Sometimes they do. Most of the time they don't. (A minimal scaling sketch follows this list.)
If you suspect that there is a "hidden" feature vector with a lower dimension, say N << 30, and it is non-linear in nature, you will need non-linear dimensionality reduction. You can read up on kernel PCA or, more recently, manifold sculpting.
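A minimal scaling sketch, assuming scikit-learn; the ranges shown are the common [0, 1] and [-1, 1] choices mentioned above, and the matrix is made up:

```python
# Minimal sketch: rescaling independent variables to [0, 1] or [-1, 1] (scikit-learn assumed).
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[10.0, 200.0],
              [20.0, 100.0],
              [15.0, 400.0]])

X_01 = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)    # each column mapped to [0, 1]
X_11 = MinMaxScaler(feature_range=(-1, 1)).fit_transform(X)   # each column mapped to [-1, 1]
print(np.round(X_01, 2))
print(np.round(X_11, 2))
```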
What you described is a classic classification problem. And in my opinion, why code fresh algorithms at all when you have a tool like Weka around? If I were you, I would run through a list of supervised learning algorithms (I don't completely understand why people are suggesting unsupervised learning first when this is so clearly a classification problem) using 10-fold (or k-fold) cross validation, which is the default in Weka if I remember correctly, and see what results you get! I would try:
-Neural Nets
-SVMs
-Decision Trees (this one worked really well for me when I was doing a similar problem)
-Boosting with Decision trees/stumps
-Anything else!
Weka makes things so easy and you really can get some useful information. I just took a machine learning class and I did exactly what you're trying to do with the algorithms above, so I know where you're at. For me the boosting with decision stumps worked amazingly well. (BTW, boosting is actually a meta-algorithm and can be applied to most supervised learning algs to usually enhance their results.)
A nice thing about using decision trees (if you use ID3 or a similar variety) is that they choose the attributes to split on in order of how well they differentiate the data - in other words, which attributes determine the classification most quickly. So you can check out the tree after running the algorithm and see what attribute of a comic book most strongly determines the price - it should be the root of the tree.
Edit: I think Yuval is right, I wasn't paying attention to the problem of discretizing your price value for the classification. However, I don't know if regression is available in Weka, and you can still pretty easily apply classification techniques to this problem. You need to make classes of price values, as in, a number of price ranges for the comics, so that you have a discrete number (like 1 through 10) that represents the price of the comic (see the sketch below). Then you can easily run classification on it.
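A minimal sketch of discretizing prices into class labels, assuming NumPy; the bin edges are arbitrary examples, not recommended price brackets:

```python
# Minimal sketch: turning continuous prices into a small set of class labels (NumPy assumed).
import numpy as np

prices = np.array([2.5, 7.0, 12.0, 45.0, 150.0, 999.0])
bins = [5, 10, 50, 200]                  # arbitrary price-range boundaries

labels = np.digitize(prices, bins)       # 0..4: which price bracket each comic falls into
print(labels)                            # [0 1 2 2 3 4]
```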