Highly original...will make you "think out of the box".
on July 4, 2001
In this book the author attempts to explain the workings of the human mind as a collection of a large number of autonomous, mindless, connected agents. The approach is metaphorical/philosophical, and no empirical evidence is given for the ideas expounded. The "society of mind", composed as it is of a collection of simple objects, is purely reductionist in its strategy and philosophy. It is, though, a highly original and thought-provoking introduction to the major questions involving mental states, concept formation in the brain, learning theory, and artificial intelligence. The author gives many interesting examples that entice the reader to "think out of the box".
The book itself is written as though each chapter were itself one of these agents. Typically a chapter poses a question or a particular phenomenon, and the author then addresses how the mind would implement or resolve this question or deal with this phenomenon. Some interesting chapters in the book include:
1. Self-Knowledge is Dangerous: The author argues that mental constraints are needed to prevent the individual from artificially creating emotional states that would block deliberate action. An intelligent machine will likewise need such constraints to prevent it from endlessly repeating the same activity.
2. Learning from Failure: Minsky argues that learning confined to positive experiences will not be as robust or effective as learning that involves some kind of discomfort or pain. Such discomfort enables more radical changes in conceptual structure.
3. Power of Negative Thinking: The author argues that an optimistic problem-solving strategy is contingent on the ability to recognize several paths to the solution, the best of which is then selected. When such knowledge is not available, a "pessimistic" strategy is more effective: the path chosen is the one that at first glance seems the worst possible avenue of approach.
4. Emotion: The question is posed as to whether machines can be intelligent without any emotions. The author seems to be arguing, and plausibly I think, that emotions serve as a defense against competing interests when a goal is set. Emotional responses occur when the most important goal(s) are disrupted by other influences. Intelligent machines will then need many such complex checks and balances.
5. Must Machines be Logical: It is argued, correctly, that intelligent machines must employ reasoning tools other than ones that are strictly logical. Logic is strictly a side constraint, a test that prevents invalid conclusions; it cannot by itself lead to genuine knowledge.
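This point can be illustrated with a minimal sketch (my own illustration, not an example from the book, and all the fact names are invented): candidate conclusions are proposed by some non-logical guessing process, and logic's only job is to veto those that do not follow from the facts.

```python
# A sketch of logic as a filter rather than a generator of knowledge.
# The facts, rules, and candidates below are purely illustrative.
facts = {"socrates_is_human"}
rules = [("socrates_is_human", "socrates_is_mortal")]  # (A, B) means "if A then B"

def follows(conclusion, facts, rules):
    """Forward-chain with modus ponens; return True if conclusion is derivable."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            if a in closure and b not in closure:
                closure.add(b)
                changed = True
    return conclusion in closure

# Candidates come from a guessing heuristic, not from logic itself;
# logic merely rejects the invalid ones.
candidates = ["socrates_is_mortal", "socrates_is_a_god"]
valid = [c for c in candidates if follows(c, facts, rules)]
print(valid)  # ['socrates_is_mortal']
```

Note that nothing in `follows` ever proposes a new hypothesis; without the external `candidates` list, the filter has nothing to pass or reject, which is the reviewer's point.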
6. Mathematics Made Hard: Minsky argues that the strategy behind the construction of mathematical systems, via strict definitions and categorization, results in systems that have very small "meaning" content. More robust systems must be developed and integrated into the educational process and into any design for intelligent machines.
7. Weighing Evidence: There is an interesting example involving four index cards, two of which bear connected line patterns and two of which bear disconnected line patterns. When the cards are cut into many pieces and sorted into separate piles, a machine with only a feature-weighing capability would be unable to distinguish between the piles.
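The idea can be sketched in a few lines (a toy illustration of the principle, not Minsky's actual figures): once the patterns are reduced to piles of pieces, the piles contain exactly the same pieces, so no assignment of feature weights can separate them.

```python
# A minimal sketch of why piece-by-piece feature weighing cannot detect
# connectedness.  Two toy patterns: one with a single connected run of
# marks, one with the same marks separated.
connected = "XX.."      # one connected block of two marks
disconnected = "X..X"   # two isolated marks

# "Cutting the cards into pieces" = reducing each pattern to a pile of
# single cells, which destroys all relational (global) structure.
pile_a = sorted(connected)
pile_b = sorted(disconnected)
print(pile_a == pile_b)  # True: the two piles are indistinguishable

def score(pieces, weights):
    """Weigh each piece independently and sum, as a feature-weigher would."""
    return sum(weights[p] for p in pieces)

# Since the piles are identical as multisets, every choice of weights
# yields identical scores; connectedness is invisible to this machine.
for weights in ({"X": 1, ".": 0}, {"X": 2.5, ".": -1.0}):
    assert score(pile_a, weights) == score(pile_b, weights)
```

The design point is that connectedness is a relation *between* pieces, so any machine that only sums weights over individual pieces discards exactly the information needed to detect it.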
8. The Mind and the World: The author's position on the mind-body problem is a very sensible one, namely that "minds are simply what brains do". It matters not, according to the author, what the substance of the mind (brain) is, only what it (the agents) does.
One omission in the book concerns the discussion of intelligence itself: the author never really gives his own outlook or "definition" of intelligence, but merely comments on a few other opinions on the concept. If one is to make "intelligent" machines, it is important that intelligence be characterized explicitly, so that one will know when and if the goal of artificial intelligence has been reached. The author correctly argues, however, that expert systems can be and have been successfully constructed, and that the most formidable obstacle to constructing an "intelligent" machine is implementing the human ability to exercise "common sense".