- Publisher: Hanser Fachbuch (January 1, 2001)
- Language: German
- ISBN-10: 3446215336
- ISBN-13: 978-3446215337
- Product Dimensions: 6.6 x 0.9 x 9.5 inches
- Shipping Weight: 1.7 pounds
- Average Customer Review: 38 customer reviews
- Amazon Best Sellers Rank: #18,244,458 in Books
Data Mining. (German) Paperback – January 1, 2001
“This book presents this new discipline in a very accessible form: both as a text to train the next generation of practitioners and researchers, and to inform lifelong learners like myself. Witten and Frank have a passion for simple and elegant solutions. They approach each topic with this mindset, grounding all concepts in concrete examples, and urging the reader to consider the simple techniques first, and then progress to the more sophisticated ones if the simple ones prove inadequate. If you have data that you want to analyze and understand, this book and the associated Weka toolkit are an excellent way to start.”
― From the foreword by Jim Gray, Microsoft Research
“It covers cutting-edge data mining technology that forward-looking organizations use to successfully tackle problems that are complex, highly dimensional, chaotic, non-stationary (changing over time), or plagued by […]. The writing style is well-rounded and engaging without subjectivity, hyperbole, or ambiguity. I consider this book a classic already!”
― Dr. Tilmann Bruckhaus, StickyMinds.com --This text refers to an out of print or unavailable edition of this title.
Highly anticipated second edition of the highly acclaimed reference on data mining and machine learning.
Top customer reviews
Along with its practical emphasis, the book includes discussions of some very interesting developments that are not usually included in books or monographs on data mining. One of these concerns the current research in `programming by demonstration.' This research is targeted at the "ordinary" computer user who does not possess any programming knowledge but wants to automate predictable tasks. The only thing required from the user is knowledge of how to do the task in the usual way. As an example, the authors briefly discuss the `Familiar' system, which extracts information from user applications to make predictions and then generates explanations for the user about its predictions. Even more interesting is that it learns tasks specialized for each individual user: it learns from each user's unique style and interaction history. One of the most interesting and powerful claims of programming by demonstration is that it is domain-independent, which matters given the current intense interest in reasoning patterns or algorithms that can process information arising from multiple domains. In this regard, a successful system would be able to learn how to play chess from a user as well as, perhaps, how to compose music. Again, the ability of a machine to reason in many domains is a step towards what many in the artificial intelligence community have called a `universal' learning machine. But the authors do not hold to this view, and in fact they open the discussion in the chapter on the Weka workbench with a statement to the effect that there is no single learning algorithm that will work for all data mining problems. The "universal learner," they say, is an "idealistic fantasy."
Another interesting discussion included in the book is that of `co-training,' a methodology that arises in the context of `semi-supervised learning.' In this learning scheme the input contains both unlabeled and labeled data. Co-training relies on the classification task being describable from two different and independent perspectives (views). Assuming there are a few labeled examples, a separate model is learned for each perspective, and the models are then used independently to label the unlabeled examples. Each model contributes both positive and negative examples to the pool of labeled examples, both models are retrained on the enlarged pool, and the procedure is repeated until the unlabeled pool is empty. The authors point out some evidence indicating that if a (naive) Bayesian learner is used throughout this procedure, it outperforms a learner that develops a single model from the labeled data. The intuition is that the independence of the two perspectives reduces the likelihood of an incorrect labeling. References are given for readers who want to investigate this approach in more detail, along with brief discussions of its generalizations, such as co-EM, which involves probabilistic labeling of unlabeled data in one perspective, and of how to use support vector machines in place of the naive Bayesian learner.
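The co-training loop described above can be sketched in a few lines of Python. This is a toy illustration, not the book's or Weka's implementation: the two "views" are invented single features, and the per-view learners are simple midpoint-threshold classifiers standing in for naive Bayes.

```python
import random

def train_threshold(examples, view):
    """Learn a midpoint threshold separating the classes on one view."""
    pos = [x[view] for x, y in examples if y == 1]
    neg = [x[view] for x, y in examples if y == 0]
    return (min(pos) + max(neg)) / 2

def predict(threshold, value):
    """Return a label and a confidence (distance from the threshold)."""
    label = 1 if value > threshold else 0
    return label, abs(value - threshold)

random.seed(0)
# A couple of labeled seeds (true class is 1 when a view exceeds 0.5)
labeled = [({"a": 0.9, "b": 0.8}, 1), ({"a": 0.1, "b": 0.2}, 0)]
unlabeled = [{"a": random.random(), "b": random.random()} for _ in range(20)]

while unlabeled:
    for view in ("a", "b"):          # the two views take turns
        if not unlabeled:
            break
        t = train_threshold(labeled, view)
        # This view labels the unlabeled example it is most confident about...
        best = max(unlabeled, key=lambda x: predict(t, x[view])[1])
        label, _ = predict(t, best[view])
        # ...and moves it into the pool that both views retrain on.
        unlabeled.remove(best)
        labeled.append((best, label))

print(len(labeled))  # every example ends up in the labeled pool
```

The key design point is that each view's classifier is trained and applied independently, so a mistake that looks plausible in one view is unlikely to be reinforced by the other.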
For the practitioner, the most useful discussion in the book concerns the evaluation of the different methods for data mining. What makes one approach to data mining better than another, and is there a ranking of the different approaches? Can one in fact make judgments on the reliability or performance of data mining algorithms using solely the training or test data? A general methodology for ranking data mining algorithms according to their performance would be a major advance, since it would allow a classification scheme for machine learning in which one could speak of one machine being `more intelligent' than another. Unfortunately, this is difficult, and some researchers even hold it to be impossible. There are results in the research literature, going by the name of `no free lunch' theorems, which seem to indicate that one cannot distinguish machine learning algorithms based solely on the way they deal with training or test data. The authors do not discuss these results in this book, but it is certainly apparent that they are aware of the difficult issues involved in predicting the performance of data mining algorithms.
Obviously, this book is a perfect companion to the Weka machine learning toolbox, which is quickly becoming a standard, invaluable research toolbox for many.
It should be pointed out that about 10% of the text of this book serves simply as a user manual for an open-source machine learning package called Weka. When I first realized this I almost flipped; I really didn't want a book devoted to gaining a surface understanding of a particular implementation of a set of algorithms. After reading through, I can tell you it is not. All the algorithms are explained well enough that you could implement them and work out simple examples on paper.
I should note also that Weka, as well as many of the algorithms in this book, don't parallelize well (or obviously). This is an excellent place to get your feet wet and do some exploratory analysis, but if you're past that point and want to learn about crunching big (TB+) data you should look elsewhere.
One area that the text does not cover (and, for many software engineers, this is not a fault) is some of the mathematics behind some of the algorithms the author presents. For instance, in the description of linear regression the author glosses over the math of actually computing the solution by saying "there's a matrix inversion involved and it's available in prepackaged software." I'm not saying this is bad, because if you're a good software engineer the first thing you'll do is look for an existing implementation that you can alter to fit your needs, so he's right. It just may not be what mathematicians or more theory-oriented computer scientists expect.
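For readers who do want the glossed-over math: the closed-form solution the quote alludes to solves the normal equations (XᵀX)w = Xᵀy with a matrix inversion, while the SGD route needs only the per-example gradient of the squared error. Below is a minimal pure-Python sketch; the data, learning rate, and epoch count are invented for illustration.

```python
def sgd_linear_regression(xs, ys, lr=0.05, epochs=200):
    """Fit y = w*x + b by stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            # Gradient of the per-example loss 0.5 * err**2:
            grad_w = err * x     # d/dw
            grad_b = err         # d/db
            w -= lr * grad_w     # step against the gradient
            b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly y = 2x + 1
w, b = sgd_linear_regression(xs, ys)
print(w, b)                      # should approach w ≈ 2, b ≈ 1
```

No matrix inversion appears here at all, which is precisely why SGD is the workhorse when the design matrix is too large to factor.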