- Hardcover: 1024 pages
- Publisher: Prentice Hall; 2nd edition (May 16, 2008)
- Language: English
- ISBN-10: 0131873210
- ISBN-13: 978-0131873216
- Product Dimensions: 7.1 x 1.5 x 9.4 inches
- Shipping Weight: 3.4 pounds
- Average Customer Review: 25 customer reviews
Amazon Best Sellers Rank:
- #278,539 in Books
- #13 in Books > Computers & Technology > Software > Voice Recognition
- #39 in Books > Computers & Technology > Computer Science > AI & Machine Learning > Natural Language Processing
- #46 in Books > Computers & Technology > Computer Science > AI & Machine Learning > Computer Vision & Pattern Recognition
Speech and Language Processing, 2nd Edition
About the Author
Dan Jurafsky is an associate professor in the Department of Linguistics and, by courtesy, in the Department of Computer Science at Stanford University. Previously, he was on the faculty of the University of Colorado, Boulder, in the Linguistics and Computer Science departments and the Institute of Cognitive Science. He was born in Yonkers, New York, and received a B.A. in Linguistics in 1983 and a Ph.D. in Computer Science in 1992, both from the University of California at Berkeley. He received the National Science Foundation CAREER award in 1998 and the MacArthur Fellowship in 2002. He has published over 90 papers on a wide range of topics in speech and language processing.
James H. Martin is a professor in the Department of Computer Science and in the Department of Linguistics, and a fellow in the Institute of Cognitive Science at the University of Colorado at Boulder. He was born in New York City and received a B.S. in Computer Science from Columbia University in 1981 and a Ph.D. in Computer Science from the University of California at Berkeley in 1988. He has authored over 70 publications in computer science, including the book A Computational Model of Metaphor Interpretation.
Top customer reviews
So on to picking nits... which is way more fun. What I really wanted was to read this book and then be able to sit down and write my own Python implementation of the forward/backward algorithm to train an HMM. I bobbed along through the book, perhaps experiencing a little bit of fuzziness around those probabilities, and came to a full stop at ‘not quite ksi’ right smack in the middle of my HMM forward/backward section. I’d done a practice run by training a neural net in Andrew Ng’s machine learning course on Coursera. But I stared pretty hard for 3-4 hours at pages 189 and 190. And I mean I get it basically… Alpha and beta represent the accumulated wisdom coming from the front and from the back… And then you take a kind of average to go from not quite ksi to ksi. But there are too many assumptions hidden in P(X,Y|Z)/P(Y|Z). And this is an iterative algorithm, so how do you seed the counts? And I’m very annoyed by the phrase ‘note the different conditioning of O’. Okay, I can see the O is on the wrong side of the line. What does that mean? When I came to the next impasse, I didn’t try as hard. It’s already clear I’ll have to go elsewhere for the silver bullet. (The next impasse, btw, was the cepstrum – what do you mean you leave the graph the same and just replace the x-axis with something totally unrelated? I’m no Stanford professor, but what kind of math is that? I’m sure it means something to somebody, but not to me.)
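For anyone who hits the same wall with the cepstrum: the standard construction is the inverse DFT of the log magnitude spectrum, so the "relabeled" x-axis is quefrency, which has units of time again. A minimal NumPy sketch, with a made-up toy signal and sample rate (the function name real_cepstrum is our own, not from the book):

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse DFT of the log magnitude spectrum.

    The output axis is 'quefrency' (units of time), which is why the
    plot looks like a spectrum whose x-axis has been relabeled.
    """
    spectrum = np.fft.fft(signal)
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)  # guard against log(0)
    return np.fft.ifft(log_magnitude).real

# Toy example: a 100 Hz tone sampled at 8 kHz for 50 ms.
t = np.arange(0, 0.05, 1.0 / 8000)
c = real_cepstrum(np.sin(2 * np.pi * 100 * t))
```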
And drop the pseudo-code. If you’re deadly serious about teaching me the HMM, then write out a working implementation in full in a real language like C or Python with the variables all initialized so I can copy and paste the code into my debugger and watch what happens to the numbers as I step through. I suspect J&M of compromising the pedagogical value of the book by deliberately withholding information from those brilliant Stanford students of theirs so they have something to quiz them on at the end of the chapter. But this is a mistake. Give us the answers. Give us all the answers. Give us the actual code for the HMM and then explain it. I will read the explanation. I’ll have to read the explanation, because my neck is on the line if my code blows up. There will still be plenty of questions left over for those students.
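In that spirit, here is a minimal sketch of the kind of implementation the review asks for: Baum-Welch (forward-backward) training for a discrete-observation HMM in NumPy. It follows the standard textbook recursions rather than J&M's pseudocode line for line; all names, the random seeding of the counts, and the toy dimensions in the usage example are our own choices, and a serious version would work in log space or with per-step scaling to avoid underflow on long sequences:

```python
import numpy as np

def forward(A, B, pi, obs):
    # alpha[t, i] = P(o_1..o_t, q_t = i): the "wisdom from the front"
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    # beta[t, i] = P(o_{t+1}..o_T | q_t = i): the "wisdom from the back"
    T, N = len(obs), A.shape[0]
    beta = np.zeros((T, N))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch(obs, N, M, n_iter=50, seed=0):
    """Train an N-state HMM over M discrete symbols on one sequence."""
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    # "Seeding the counts": start from random row-stochastic guesses.
    A = rng.random((N, N)); A /= A.sum(axis=1, keepdims=True)
    B = rng.random((N, M)); B /= B.sum(axis=1, keepdims=True)
    pi = np.full(N, 1.0 / N)
    for _ in range(n_iter):
        alpha = forward(A, B, pi, obs)
        beta = backward(A, B, obs)
        likelihood = alpha[-1].sum()  # P(O | current model)
        # gamma[t, i] = P(q_t = i | O)
        gamma = alpha * beta / likelihood
        # "Not quite ksi" is the joint alpha * a * b * beta; dividing by
        # P(O) is exactly what turns it into ksi:
        # xi[t, i, j] = P(q_t = i, q_{t+1} = j | O)
        xi = (alpha[:-1, :, None] * A[None, :, :]
              * B[:, obs[1:]].T[:, None, :] * beta[1:, None, :]) / likelihood
        # M-step: re-estimate parameters from expected counts.
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        B = np.vstack([gamma[obs == k].sum(axis=0) for k in range(M)]).T
        B /= gamma.sum(axis=0)[:, None]
    return A, B, pi

# Toy usage: 2 hidden states, 3 observation symbols.
obs = [0, 1, 0, 2, 1, 0, 0, 2]
A, B, pi = baum_welch(obs, N=2, M=3)
```

Stepping through the xi line in a debugger makes the ‘different conditioning of O’ concrete: the product of alpha, the transition, the emission, and beta is a joint probability with O, and the single division by the sequence likelihood is the averaging step that conditions on O.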
In twenty-five chapters, the book covers the breadth of computational linguistics with an overall logical organization. Five chapter groupings organize material on Words, Speech, Syntax, Semantics and Pragmatics, and Applications. The four Applications chapters address Information Extraction, Question Answering and Summarization, Dialogue and Conversational Agents, and Machine Translation. The book covers a lot of ground, and a fifty-page bibliography directs readers to vast expanses beyond the book's horizon. The aging content problem present in all such books is addressed through the book's web site and numerous links to other sites, tools, and demonstrations. There is a lot of stuff.
While it is an achievement to assemble such a collection of relevant information, the book could be more useful than it is. An experienced editor could rearrange content into a more readable flow of information and increase the clarity of some of the authors' examples and explanations. As is, the book is a useful reference for researchers and practitioners already working in the field. A clearer presentation would lower the experience requirement and make its store of information available to students and non-specialists as well.
Readers looking for an introduction to natural language processing might find Manning and Schütze's Foundations of Statistical Natural Language Processing easier to understand. It is over ten years old, but worth reading for an understanding of basic concepts that are still relevant in the field.
Most recent customer reviews
It was easy to follow and a great read.