Reviews Written by Dr. Lee D. Carlson (Baltimore, Maryland USA)

Very helpful and comprehensive, May 25, 2015
The biggest challenge in writing any book on electronic warfare (EW) is to give an in-depth review of the important concepts and developments without divulging classified information. This book, sizable as it is, gives the reader who is working or intends to work in the area of electronic warfare relevant information while still remaining unclassified. All of the topics discussed can be found in the open literature, but the author has saved readers a lot of browsing and search time by including the most important ones. Readers requiring more specialized or in-depth discussion may find that this type of information is not publicly available. Due to its size, there is a lot in this book to absorb, and no doubt readers who decide to commit to its study will not read it in its entirety but will instead focus on topics of interest to them.
Radio receivers of course are noisy entities, and the different noise contributions in receiver electronics are treated quantitatively in this book, using primarily elementary mathematical tools instead of the full theory of stochastic processes. That the level of mathematical detail is kept elementary will help readers who are interested primarily in the practical implementation of radio receivers in an EW environment. Readers who want a more sophisticated mathematical/theoretical treatment will have to consult another monograph or the research literature (which is relatively sparse because of security constraints).
The first chapter is more of a bread-and-butter topic and definition list that covers the important metrics and performance parameters of EW receivers. EW network designers and EW network performance engineers frequently use these metrics, especially those who must configure military tactical networks so that they adhere to performance requirements in an EW environment. The challenge of course in making a network function in such an environment is being able to distinguish friendly from unfriendly jamming/interference. In this regard, another very helpful feature of this book is that the author devotes considerable discussion to the difficulties in measuring the important quantities of interest, one example being the bandwidth. The author uses the signal-to-noise ratio (SNR) instead of the SINR (signal-to-interference-plus-noise ratio) to frame the concept of receiver sensitivity. The SINR model of interference has been gaining in popularity in recent years, due mainly to its connection with network performance optimization and network capacity, both of which are very important considerations for networks in EW environments.
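As a minimal illustration of the SNR/SINR distinction (not from the book; the power values below are made up), interference simply adds to the noise floor in the denominator:

```python
import math

def snr_db(signal_w, noise_w):
    """Signal-to-noise ratio in dB."""
    return 10 * math.log10(signal_w / noise_w)

def sinr_db(signal_w, noise_w, interference_w):
    """Signal-to-interference-plus-noise ratio in dB:
    interference power is added to the noise floor."""
    return 10 * math.log10(signal_w / (noise_w + interference_w))

# Hypothetical powers in watts; with zero interference, SINR reduces to SNR.
s, n, i = 1e-9, 1e-12, 9e-12
print(round(snr_db(s, n), 2))      # 30.0
print(round(sinr_db(s, n, i), 2))  # 20.0
```

Note how even interference an order of magnitude above the noise floor collapses a 30 dB link margin to 20 dB, which is why SINR matters for capacity planning in contested environments.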
The author also includes a discussion of random modulation in the book, which is somewhat atypical and helpful to readers who are interested in how randomness can be put to productive use in radio communications. Contrary to what one's intuition might indicate, the deliberate incorporation of randomness can greatly assist in optimizing radio communication performance. The discussion of random modulation could be viewed as the most complex in the book from a mathematical standpoint.
Cellular technology, although still not widely used in military tactical networks, is definitely coming in the future. The issue of how to place base stations is the main inhibitor to implementing this kind of technology in military tactical networks, but whatever decisions are eventually made will no doubt have to respect some of the considerations that the author includes in this book, particularly in his discussion of CDMA. When studying this part of the book, it is interesting to learn, for example, that uplinks in cellular networks require power control accurate to within 1 dB and about 1 kbps of control data. Measurement error may prohibit such accuracy in real networks, but even if this is dealt with, it goes without saying that power control in tactical networks is very important from the standpoint of EW network coordination and scheduling.
At least from the standpoint of the reviewer, who needed to learn these topics, the most important parts of the book dealt with the different stages of radio receiver electronics, and how each can essentially act as a noise source and the degree to which RF amplifiers determine the receiver sensitivity. Network designers, engineers, and analysts need to have an appreciation of the different factors contributing to noise in EW receivers, and this book will definitely assist them in gaining the necessary insight.
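The way cascaded receiver stages each contribute noise, with the front-end amplifier dominating sensitivity, can be sketched with the standard Friis noise-figure formula (a textbook formula, not code from the book; the stage gains and noise figures below are hypothetical):

```python
import math

def cascade_noise_figure_db(stages):
    """Friis formula for the total noise figure of cascaded stages.
    stages = [(gain_db, noise_figure_db), ...] in signal-chain order."""
    f_total = 0.0
    g_product = 1.0  # cumulative linear gain of all preceding stages
    for idx, (gain_db, nf_db) in enumerate(stages):
        f = 10 ** (nf_db / 10)  # noise factor (linear)
        if idx == 0:
            f_total = f
        else:
            f_total += (f - 1) / g_product
        g_product *= 10 ** (gain_db / 10)
    return 10 * math.log10(f_total)

# Hypothetical chain: LNA (20 dB gain, 1 dB NF) followed by a lossy
# mixer (-6 dB gain, 8 dB NF). The LNA dominates: total NF stays near 1 dB.
nf = cascade_noise_figure_db([(20.0, 1.0), (-6.0, 8.0)])
```

The usage example illustrates the point made above: the RF amplifier at the front of the chain largely determines receiver sensitivity, because later stages' noise contributions are divided by the gain preceding them.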

Both explains and expounds, April 10, 2015
Survival and Event History Analysis
Survival analysis and the theory of competing risks have found extensive application in the financial and medical fields, and the literature on these applications is vast. For analysts who want to apply these techniques to these fields, broaden their application to others, or who need a rigorous understanding of them, assimilating this literature can be an arduous task. Many books that are very helpful in acquiring the needed understanding have appeared in the last two decades, but this one is unusual in that it addresses both the theoretical and the applied, and does so in a way that does not trivialize the subject. Readers will find many examples, drawn mostly from the authors' geographic location, and also discussions of the mathematical formalism that make it intuitively clear why some of the formalisms are deployed.
For example, of utmost importance in survival analysis is the notion of censored data, and it takes the authors only two pages to begin discussing how to handle this kind of data, wherein they motivate the difference between survival analysis and ordinary statistical analysis when it comes to censoring. They then move right into the definition of the hazard rate, which is rather straightforward, but newcomers to the field may confuse it with an ordinary probability; it is not one, since it can essentially be any nonnegative function even though it is defined as a conditional probability. Some texts have referred to it as a "probability rate". The authors give many examples that illustrate the many complexities that the hazard rate can exhibit, and of crucial importance in some of these examples is the actual shape of the hazard rate. The reviewer can attest to a few applications (such as in data networks) where it is also of interest to compare the shape or slope of the hazard rate as the observation time increases.
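As a minimal sketch (not from the book), the discrete empirical hazard at time t is the fraction of subjects still at risk just before t that experience the event at t, which makes clear why it behaves like a rate rather than an ordinary probability distribution:

```python
def discrete_hazard(event_times, t):
    """Empirical discrete hazard at time t: the fraction of subjects
    still at risk just before t that experience the event at t."""
    at_risk = sum(1 for x in event_times if x >= t)
    events = sum(1 for x in event_times if x == t)
    return events / at_risk if at_risk else 0.0

# Hypothetical uncensored event times.
times = [1, 2, 2, 3, 5]
# At t=2: 4 subjects at risk (times >= 2), 2 events -> hazard 0.5.
```

Note that these per-time hazards need not sum to one over t, which is precisely the confusion with ordinary probabilities that the authors warn about.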
Some of the more widely used survival models are discussed, such as the Cox proportional hazards model and additive regression models. The reason for the popularity of these models lies in the use of observable variables, or covariates, to model differences between individuals. The authors show how to extend these models to take into account unobservable heterogeneity between individuals by using frailty models, wherein the hazard rate of an individual is changed by simply multiplying by a frailty variable. The topic of informative censoring is not discussed in the book, but readers who are at the level of sophistication to appreciate this topic can find ample discussion of it in the literature. Multi-state models can be used for the case where individuals can experience more than one type of event, better known as 'competing risks'. The authors show how to modify the hazard rate to take competing risks into account, and caution the reader to remember the difference between the cumulative incidence function and the cumulative cause-specific hazard.
This book sets itself apart from others on the subject in its coverage of stochastic processes and their connection with event history analysis. An excellent motivation for martingales is given that makes their understanding readily apparent to readers who may have only encountered formal definitions in their prior exposure to the research literature. This is readily apparent in the authors' discussion of sigma algebras of events and the notion of adaptation to the time evolution of families of these sigma algebras. The authors' discussion is very lucid compared to what a reader might find by perusing other literature on this topic, in particular in the area of financial modeling. Along these same lines, the martingale property is shown to be resilient to some transformations that act on processes with this property. The authors discuss the concept of an 'optional stopping time' as an illustration of this, in that the martingale property is left intact under optional stopping. Most importantly, they connect optional stopping with censoring, and set the analyst's mind at ease by showing that the martingale property remains intact and therefore unbiased estimates can be obtained.
Another stumbling block for those learning the subject for the first time, due in part to the formal nature of most treatments of it in the research literature, is the 'Doob decomposition'. The authors explain this as essentially a decomposition of an arbitrary stochastic process into a part that is dependent on the past and a part that reflects what is novel or unanticipated compared to past experience. To experts in probability theory and the theory of stochastic processes such a description may seem trivial or imprecise, but for those who really want to understand the subject, and do so outside the constraints of formal reasoning, the authors' "intuitive discussion" is very helpful and considerably shortens the time needed to learn the important ideas.
Of fundamental importance in applying survival analysis are the nonparametric Nelson-Aalen estimator of the cumulative hazard rate and the Kaplan-Meier estimator of the survival function. These two techniques are widely used, sometimes in contexts where they should not be, but they do give "back-of-the-envelope" estimates that can serve as a guide in lieu of more refined approaches. The authors, interestingly, view the Nelson-Aalen estimator as the more fundamental of the two, and so begin with it. They also show that the Nelson-Aalen estimator is approximately normally distributed and give an estimator for its variance, so one can speak meaningfully about confidence intervals and percentiles. The Kaplan-Meier estimator, the more familiar of the two to analysts, is also shown to be approximately normally distributed for large samples, and is interestingly shown in one of the exercises to be equal to 1 − (empirical cumulative distribution function) when there is no censoring. The reviewer has successfully applied both of these techniques to packet drops in data networks, an area that has not yet seen significant application of survival analysis.
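Both estimators can be sketched in a few lines (a textbook-form illustration, not code from the book), including a check of the no-censoring identity mentioned above:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.
    times: observation times; events: 1 = event observed, 0 = censored."""
    s = 1.0
    out = {}
    for t in sorted(set(t for t, e in zip(times, events) if e)):
        n = sum(1 for x in times if x >= t)                    # at risk at t
        d = sum(1 for x, e in zip(times, events) if x == t and e)  # events at t
        s *= 1 - d / n
        out[t] = s
    return out

def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard estimate at each distinct event time."""
    h = 0.0
    out = {}
    for t in sorted(set(t for t, e in zip(times, events) if e)):
        n = sum(1 for x in times if x >= t)
        d = sum(1 for x, e in zip(times, events) if x == t and e)
        h += d / n
        out[t] = h
    return out

# With no censoring, S(t) equals 1 - ECDF(t) at each event time.
t = [1, 2, 3, 4]
km = kaplan_meier(t, [1, 1, 1, 1])
# km[2] -> 0.5, matching 1 - ECDF(2) = 1 - 2/4
```

With censored observations in the data, the two estimators diverge from the naive ECDF, which is exactly the point of the authors' early emphasis on censoring.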

A history of twisted thought and the Coppola Four-Star Clowns, February 28, 2015
For those who were of age during the Vietnam war, there is no doubt that objectivity is difficult regarding why America got involved and eventually pulled out. The view of those who fought the war is usually quite different from that of those who instigated it and were responsible for its disastrous outcome. It takes courage to go into battle and fight for a cause one is forced to fight for through the detestable bureaucratic legislation called the draft. It takes just as much courage to voluntarily fight in a war that has been marketed as necessary, unavoidable, and winnable. This book gives further evidence that the disaster of the Vietnam war was not the fault of those who fought it, but rather of the DC clowns who feigned competence in military matters and those who remained silent or acquiesced in the horrible circus of political maneuvering.
There are some who may hold to the premise that Lyndon Johnson and his closest advisors showed real guts in attempting to fight against the Vietnamese Communist threat and to “save American face”. But it does not take any intestinal fortitude or keen intellect to indulge in the deceit and verbal machinations that are delineated in meticulous detail in this book. For those readers who want the raw, naked truth about Vietnam, this book is highly recommended, and its study will reveal that the author has definitely done his homework.
Having its origin in the National Security Act of 1947, the Joint Chiefs of Staff (JCS) during the Vietnam war is portrayed in this book as more a collection of "technicians for planners" than a body of individuals who carefully thought out strategies and tactics. Some readers may be shocked at how little influence the JCS had on actual policy decisions during the buildup of the war and its execution in the years that followed. One can only wonder whether this was the result of tacit agreement with those policies or of an excess of veneration for the President and his cabinet officers. The author seems to argue for a superposition of both, and the JCS is frequently accused of placating the president.
Robert McNamara is rightfully portrayed as an evil demon in this book, a government bureaucrat who could not engage in self-criticism and was smug in the certainty of his analysis and assessments of progress in the war. McNamara's dwelling at the time was definitely a cesspool of apodictic certainty, as is well brought out in this book, especially in the manner in which he interacted with the president and the JCS.
Johnson failed along with his vision of the Great Society. The JCS failed. Robert McNamara and Cyrus Vance failed. The only success of that time was the drive to end the debacle of the Vietnam war. This book is a microscopic view of these failures, and the biggest lesson to take away from the study of this book is an appreciation of just how removed from reality a government bureaucracy can be, and how uncritical adulation for a president or an idea can result in horrible destruction and heartache.

A good overview, January 10, 2015
In a 1978 article on quantum chromodynamics (QCD), writing in particular on the Schwinger-Dyson equations for QCD, the physicists W. Marciano and H. Pagels stated that "ultimately we will have to face up to solving these equations or some equivalent problem." These authors were of course cognizant of the fact that understanding QCD will take approaches very different from those used in other quantum field theories, such as quantum electrodynamics, which can be tackled successfully using perturbation theory. Of course, the property of asymptotic freedom in QCD allows one to do perturbative calculations at high enough energy, and useful insights may be obtained from these, but if one is to understand nonperturbative phenomena, such as bound states, then one has to make use of techniques outside the context of perturbation theory.
As the content of this book illustrates with great clarity, much has happened in the field since 1978, not only in the discovery of nonperturbative techniques such as lattice gauge theory and the AdS/CFT correspondence, but also in the experimental techniques such as heavy ion collisions. Indeed all these developments have been interesting and no doubt will continue to give surprising results in the years ahead.
For those interested in nonperturbative quantum field theory, either as a profession or from the standpoint of a spectator, there is much to be gained from studying this book. The authors keep the physics to the forefront, and despite the fact that they frequently have to refer the reader to the research literature for details on calculations, this book should not be viewed as a literature survey. And even though the authors clearly want to advertise the virtues of using the AdS/CFT correspondence to understand QCD, they do not hesitate to point out the problems in using this correspondence. They also discuss in great detail some of the gaps in understanding in the experiments dealing with heavy ion collisions (and do a thorough job of motivating the experimental situation in the first two chapters of the book).
Some of the discussions/results that the reader may find interesting or surprising include:
1. The breakdown of the eikonal formalism in describing parton energy loss in heavy ion collisions (due to the partons themselves being created in the collisions and suffering energy loss in the medium created). The resulting 'jet quenching' of partons as they move through dense matter is a challenge to theorists and is to be contrasted with the perturbation theory calculation that can be done for parton showers in a vacuum. The authors describe a simple jet quenching model in Chapter 2, and refer the reader to the literature for estimates based on Monte Carlo simulations.
2. The phenomenon (à la the Matsui/Satz model) of 'color screening' in preventing meson production in the hot quark-gluon plasma, and approaches to estimating the screening length for the quark-antiquark force.
3. Although their vacuum solutions are very different, QCD and N = 4 Super Yang-Mills theory have similar properties above the critical temperature Tc, which is defined to be the crossover temperature from a hadron gas to a quark-gluon plasma. QCD is no longer confining above Tc. The authors give a comprehensive list of the two theories' similarities and differences at nonzero temperatures. Most interesting is that the authors show that the ratio between the shear viscosity and the entropy density does not depend on the number of degrees of freedom, which is not the same for the two theories. And even though SUSY is not present in QCD, at finite temperature the difference between bosons and fermions can be ignored. However, the interplay between the number of flavors and the number of colors remains important. Readers hungry for a research problem using gauge/string duality can try extending the methods in this book to the case where the number of colors is comparable to the number of flavors.
4. The absence of quasiparticles in the strongly coupled N = 4 SYM plasma. The quasiparticle picture has of course dominated applications of quantum field theory to condensed matter and many-body systems. This paradigm finally goes away when there is strong coupling between the constituents of the system.
5. The authors show that thermodynamic quantities do not vary much between weakly and strongly coupled non-Abelian gauge theory plasmas.
6. The extensive discussion of the physics of holographic mesons (quarkonium mesons), which gives much-needed understanding of the properties of (strongly coupled) hot QCD.
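The universal viscosity-to-entropy ratio mentioned in item 3 is presumably the well-known gauge/string-duality (Kovtun-Son-Starinets) value, quoted here for reference; the book's precise statement may differ:

```latex
\frac{\eta}{s} = \frac{\hbar}{4\pi k_B}
\qquad \text{i.e. } \eta/s = \frac{1}{4\pi} \text{ in natural units,}
```

independent of the number of degrees of freedom of the particular strongly coupled theory.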
The book therefore gives a good update on what techniques are available for studying nonperturbative QCD. With further work and possibly even more exotic mathematical techniques, researchers may be closing in on the major unsolved problem of quantum field theory. In the same article in 1978, Marciano and Pagels remark that “no one has ever proven the existence of a single bound state let alone the confinement property in any relativistic, 3 + 1 dimensional quantum field theory”.

Finely tuned to physics, November 16, 2014
The conducting and reporting of scientific research requires a degree of intellectual honesty, both personal and public, that is usually not required in religion and politics. Typically, the goal of the latter two is control, both personal and public, and any counterexamples found to their dogmas or beliefs are usually dealt with by force, censorship, or marketing hype. False professionalism, namely the feigning of intellectual competence, is characteristic of many who inhabit these areas, and evidence or supporting data is usually thought of as a necessary evil instead of a guide for decisions or revisions of thought. There are only a few examples that the reviewer is aware of where sound, scientific, constructive inquiry takes place in the fields of religion and politics.
This book, with its dramatic and beautifully designed cover, is of course not free of marketing hype, but within its pages one will find a highly interesting and informative account of the physics and astronomy behind some of the new conceptions that are beginning to be hotly debated among physicists and astronomers. It also serves as a counterweight to assertions made by religious apologists who want to use astrophysical research to support their beliefs as to the divine origin of the things that be. Both the author and the religionists he quotes are biased, but the author is aware of his biases and freely admits them, knowing full well that a privileged apodictic point of view is not possible in science (or even desired).
Along these lines, the author describes himself as an instrumentalist, and promotes instrumentalism as the view that the models built by scientists don't correspond exactly to reality. His opinion on the reality of quantum fields in the book is a clear example of his stance on the "ontological status" of scientific models. He is careful, though, to distance himself from some religious apologists who want to label him and others as subscribing to what is called "ontological pluralism", which as the name implies asserts that there are many independent "valid" realities. Some readers, even of a purely scientific persuasion, may object to the instrumentalist "worldview", and might be tempted, because of its emphasis on empirical results, to classify it as yet another manifestation of positivism, which has become almost a dirty word in some professional circles in the philosophy of science.
One should not view the contents of this book as promoting an instrumentalist worldview, however, and there are many surprising scientific facts to be encountered between its covers, even for readers with a solid background in physics or astronomy. One example of this is the discussion of the entropy at the Planck time, and another is the discussion (albeit brief) of the ΛCDM model. And as is always the case in rational discussion of physical models, charts and data abound. For readers who are pressed for time and are not able to consult the original literature, these are welcome additions.
One could argue perhaps that the author has wasted page space in attempting to refute or even address the arguments of religionists such as William Lane Craig and Robin Collins, whom the author feels he has to deal with in this book, even if they appear to be "physics savvy" as he describes them. These individuals, as well as physicists who are interested in this problem, need to show that life, even if based strictly on carbon chemistry, would be impossible without the "fine-tuning" hypothesis. To show this would require a solution of the bound state problem in quantum field theory, which to this date is the major unsolved problem in quantum field theory. But the reviewer has not found any written record that fine-tuning religious apologists are interested in solving physics problems of this kind, difficult as they are and requiring massive commitments of time and resources. As it stands, and the author gives several examples showing the weaknesses of their assertions, colloquially speaking their arguments are to be viewed as a slice of bread lying in a bowl of milk: when picked up for examination, it falls to pieces.

A clash of two evil empires, November 15, 2014
In the chapter of this book entitled 'The Elimination of the Incas', the belief of the Spanish administrator Francisco de Toledo that any remnants of the Inca empire must be eliminated is based on his view that the Incas' right to rule Peru was no more justified than that of the Spaniards. And as the author describes in vivid detail, Toledo goes on to finish off any leftover Inca "enclaves" with great zeal and efficiency. Toledo proved himself quite adept at instigating mass murder, or what is now called genocide, as the study of this chapter readily reveals.
Toledo was of course correct in believing in the equivalence between Spaniards and Incas with respect to their status as rightful rulers of Peru. Neither had such a right, and both groups engaged in behavior towards the native populations of Peru in a manner that makes it appear they were competing for the title of most evil. Both Incas and Spaniards had an official religion that they represented, with that of the Incas being tied more to natural objects such as the sun, while that of the Spaniards was tied to an institution that had shown itself capable of sustained brutality throughout its history.
One noted difference between the Spaniards and the Incas is the keeping of written records, and the history delineated in this book could not have been accomplished if the Spaniards had not done this in fairly meticulous detail. The book is long but highly interesting, and even more so for readers, such as the reviewer, who have visited Peru and are curious about its history, with details that cannot be obtained from tour guides. In that regard, such readers may find that the historical picture given by such guides is sometimes at odds with what is reported in this book.
The author makes a conscious effort to refute the notion that the Incas did not resist the Spanish conquest, and also addresses the "legend of Spanish atrocities", as he puts it. The book sometimes reads like a story rather than a history, but this does not detract from the richness of information on each page and the overall quality of presentation. The participants in the conquest, both Inca and Spanish, are however sometimes described as having intentions and emotions that would be impossible to verify. It is difficult for historians in general to refrain from imputing their own attitudes, or those of their culture, to others, and this author is no different.
From a study of the book it is fair to say that gold and religion were the driving forces behind the conquest. It seems that greed and the lust for evangelizing use similar strategies, with moral judgments and empathy suspended during their execution. The author brings out several cases, however, where conscience apparently gnawed at some Spaniards of clerical persuasion, both in Peru and back in Spain, and there were many attempts to arrest the efforts to enslave native populations and exact unreasonable tribute. None of these pangs of conscience, however, was of a degree that would move any official, whether religious or governmental, to advocate complete withdrawal from Peru.
Readers interested in the military tactics and strategies used by the conquistadors will find ample food for thought in this book. From studying these, it is apparent that the conquest was not a cakewalk, even though the Spaniards' use of horses and superior weaponry might make it appear that the fighting was definitely one-sided. It is also interesting to learn, though not surprising, that some of the Indian populations allied themselves with the Spaniards to get rid of the Incas. At the time the Spaniards entered Inca territory, civil strife was tearing at the Inca empire, and the Spaniards took full advantage of the resultant disorganization and decimation. This, together with the willingness of the Indian populations to fight against the Incas, sealed the fate of the empire, in only about a decade.

A fine overview with helpful, pictorial examples, September 20, 2014
The Kontsevich combinatorial formula for stable algebraic curves can be loosely described as a generalization of what is done for Grassmann varieties in the context of vector bundles. A Grassmann variety Gr(k, n) is the collection of k-dimensional linear subspaces L of a complex n-dimensional vector space. The geometry of Gr(k, n) can be viewed as a kind of measure of how complicated things can get if L is permitted to vary in families. A family can be viewed as a collection of linear spaces parametrized by points of a base space B, and this leads naturally to the concept of a locally trivial vector bundle over B. One can then obtain a 'tautological' vector bundle Ltaut over Gr(k, n) consisting of pairs (L, v) where v is an element of L. Forgetting v gives a map from Ltaut to Gr(k, n) with fiber L. Given a map from B into Gr(k, n), there is a 'pullback' of Ltaut, which happens to be a vector bundle over B of rank k. It turns out that this procedure, for n arbitrarily large and B compact, gives a lot of information and is "universal" in the sense that there is a bijection between homotopy classes of maps from B to Gr(k, n) and the set of isomorphism classes of rank k vector bundles on B.
In the context of algebraic geometry a natural question to ask is whether this “universality” can be repeated when the families of linear spaces are replaced by families of curves of genus g. In other words, given a family F of smooth algebraic curves of genus g parametrized by some base space B, does there exist a natural map from B to a “moduli space” of curves that gives the essential information about F?
As is known, and as brought out in this book, the answer to this question is in general no. If Mg is defined to be the moduli space of smooth curves of genus g, then a family of curves with base B is a morphism from F to B of algebraic varieties whose fibers are smooth complete curves of genus g. Any map phi from B to Mg needs to be algebraic, and in general F will not be the pullback of any universal family over Mg. For g = 0, every smooth curve is isomorphic to one-dimensional projective space P(1), so the map to the moduli space is a trivial map to a point; but there exist complicated families with fibers isomorphic to P(1), because of a "large" automorphism group which can enable the construction of complex objects from simple ones. It is the presence of this automorphism group that makes it difficult to find a universal family of curves over Mg.
However, if the automorphism group is finite, then this can be dealt with by putting “marked” points on the curves. The number of marked points must be greater than or equal to 3 for the case of genus 0 and greater than or equal to 1 for the case of genus 1 curves. There are some straightforward examples of marking in the book, and the authors show just how one needs to change the moduli space Mg to M(g, n), where n is the number of marked points, in order to eventually lead to a theory where one can discuss intersections of curves and a formula for computing the number of points of intersection.
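The numerical conditions on marked points quoted above are instances of the standard stability inequality, which also determines the dimension of the moduli space (standard facts, stated here in conventional notation rather than necessarily the book's):

```latex
2g - 2 + n > 0
\;\;\Longrightarrow\;\;
n \geq 3 \text{ for } g = 0, \qquad
n \geq 1 \text{ for } g = 1,
\qquad\text{and}\qquad
\dim \mathcal{M}_{g,n} = 3g - 3 + n .
```

The inequality is exactly the condition that a smooth genus-g curve with n marked points has only finitely many automorphisms.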
The first issue that must be dealt with is that families of curves over a base space B typically have singular fibers, and these fibers give valuable information about the geometry of the family. How are these singular fibers to be dealt with? The answer involves worrying only about the so-called 'stable' curves of genus g with n marked points. One thus obtains a 'compactification' of M(g, n) which consists of stable curves, i.e. only those curves that are complete and connected, have only nodal singularities, and have only finitely many automorphisms. This procedure allows more control over the fibers over B.
Through helpful diagrams the authors show how to deal with the phenomenon where marked points can approach each other. In more advanced treatments of this subject, this procedure is called 'normalization' of the curve. In particular, when the base B is one-dimensional, a family of curves over B is a map from the total space F to B, whose fibers are the curves of the family. Marking n points on the curves gives essentially n sections of the map, i.e. n maps from the base to F. There may be a point in the base where these sections (i.e. the marked points) coincide, and this will result in a curve that is not stable. The procedure is to "blow up" this "bad" point on F, giving a new family of curves over B; at the bad point there is an additional component called the 'exceptional divisor', and the resulting combination will be a stable curve with two marked points which is essentially the "stable" limit of the old curves as points in the base B approach the bad point.
In general then, if a marked point on a curve C approaches another, then C will “bubble” off a P(1) carrying these two points. For a family of curves with a smooth one-dimensional base B that is stable except over a bad point in the base, one can apply a sequence of blow-ups and blow-downs to obtain a new family which has stable fibers and whose fiber over the bad point is determined uniquely. This is called ‘stable reduction’.
The real goal behind all this marking and consequent stable reduction is to use the compactified moduli space to do intersection theory and arrive at a general formula for the number of points of intersection. This is done by looking at the line bundle on a (stable) curve C determined by the n marked points and integrating the first Chern class of this line bundle. If pi: F -> B is a family of stable pointed curves and phi: B -> compactification(M(g, n)) is the induced map, then sections of the pullback phi* of the tangent space at the marked points are vector fields along the sections s of pi that are tangent to the fibers of pi. This procedure gives a section of the normal bundle to the section s in the total space F, and the degree of this normal bundle is the self-intersection of s in F, which is equal to the integral over B of the first Chern class of phi* of the tangent space at the marked points.
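In the notation usually seen in the literature (again, not necessarily the book's), the classes built from these cotangent lines and their intersection numbers take the following standard form:

```latex
% The cotangent line L_i at the i-th marked point defines the psi class
\psi_i = c_1(\mathbb{L}_i) \in H^2\bigl(\overline{\mathcal{M}}_{g,n}\bigr),
% and the intersection numbers of these classes are
\langle \tau_{d_1} \cdots \tau_{d_n} \rangle_g
  = \int_{\overline{\mathcal{M}}_{g,n}} \psi_1^{d_1} \cdots \psi_n^{d_n},
% which vanish unless d_1 + \cdots + d_n = \dim \overline{\mathcal{M}}_{g,n}
%                                        = 3g - 3 + n .
```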
The first Chern classes are the “psi’s” that one sees in the vast literature on quantum cohomology and its connection with intersection theory. The computation of the intersections of the psi’s on compactification(M(g, n)) is the subject of Gromov-Witten theory, and the authors show how this is connected with quantum cohomology and enumerative combinatorics. Readers with a physics background will find that the designation of this cohomology as being “quantum” is only because of the historical origins of the subject in the area of quantum gravity. One should not in that regard view quantum cohomology as being a “quantization” of some underlying cohomology theory. It should rather be viewed as a deformation of the ordinary cup-product multiplication that is found in discussions of Chern classes of Grassmann varieties in algebraic geometry or in the Chow ring of P(r). The use of “generating functions” is also reminiscent of what is done in quantum field theory and quantum statistical mechanics, but since the resulting “quantum” product, which amazingly produces the right enumerative information, is commutative, the analogy to quantization is rather loose, given that quantization typically results in operations that are noncommutative.
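The deformation point of view can be made concrete in the simplest case, the projective space P(r); the following presentation is standard in the literature (h denotes the hyperplane class and q a formal deformation parameter):

```latex
% Classical cohomology (equivalently, the Chow ring) of P^r:
H^*(\mathbb{P}^r) \;\cong\; \mathbb{C}[h]/(h^{r+1}),
% and its small quantum deformation:
QH^*(\mathbb{P}^r) \;\cong\; \mathbb{C}[h, q]/(h^{r+1} - q).
% Setting q = 0 recovers the ordinary cup product: the quantum product
% is a commutative deformation, not a "quantization".
```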









3 of 3 people found the following review helpful
One of the spurs broke after just a few weeks, August 14, 2014
Whatever the reason for purchasing this pair of boots, whether for yourself or as a gift to someone close to you, you should know that the spurs are NOT real metal and one of them broke in half not too long after being worn.
The spurs are apparently made of a cheap composite and break easily. If you check online, many owners of these boots have complained about the spurs breaking.
Try another style because for the price the spurs should be REAL metal.
Update (Sept 10, 2014): I contacted Frye, and at first they agreed to replace the spur free of charge, and even to pay for postage to send the boot to them. A few days later they reneged on this promise, and agreed only to send me replacement spurs (which they did), with instructions to the effect that if I could not find a local cobbler, I could send the boot to them for repair. However, I was to pay for the postage (both ways) and the cost of insurance.
The replacement spurs are the same material: a cheap composite that will easily break. One would expect more from Frye, a company that is supposed to stand for quality.









2 of 3 people found the following review helpful
Confessions of a neo-Pythagorean, July 26, 2014
The subject matter of this book is so interesting and important that some readers may wonder why the author decided to devote so many of its pages to personal anecdotes and sidebar storytelling. But the pages that are devoted to physics and mathematics are well worth studying, even for readers who are not experts in these subjects but want to gain more insight into the controversy behind the topic of the multiverse, and into why some physicists are embracing this very radical worldview despite the absence of any direct observational support. Neo-Pythagoreans (such as the reviewer) will be enraptured by the author's proposal, and will find him to be one of their conceptual kindred.
The author is definitely in the academic mindset, which comes out especially when he talks about the inability of the (now famous) Hugh Everett, the originator of the many-worlds interpretation of quantum physics, to find employment as a physicist. But there is life outside of the academy, and there is more to physics and mathematics as a profession than merely publishing papers and attending meetings and conferences. And one might argue after reading this book that its proposals are best pursued by individuals who are not associated with academia, so as to avoid the conflicts and admonitions from colleagues that the author evidently faced, as he frequently alludes to in this book.
Those readers intimately familiar with the philosophy of mathematics may find the author's view of mathematical truth somewhat restricted: although the author asserts that the ultimate nature of reality is mathematical, his conception of the existence of mathematical structures is more in line with the Platonist or formalist schools of mathematical truth. In his (interesting) discussion of the measurement problem as one of the "crises" in modern physics he does show familiarity with the finitist philosophy of mathematics, but a more in-depth justification of his thesis would require that he come to grips with the intuitionist school of mathematics, which has very demanding requirements for what it means for a mathematical structure to exist.
Indeed, in intuitionism, proving that a mathematical structure exists must be done constructively: it is not enough to show that denying the structure results in a contradiction; one must actually give an example of such a structure or a procedure for its construction. Intuitionists frown upon, for example, the multiplicity of "existence proofs" in which not even one example is given of the object that is claimed to exist. One particular example of this is in the field of functional analysis, where myriads of theorems assert the existence of fixed points under certain transformations, but no explicit example of a fixed point is given.
Intuitionism does have ramifications for the author's thesis, for asserting that all mathematical structures can be found in the plethora of universes that make up the multiverse would entail that these structures are not just "predicted" by certain theorems but can also be explicitly constructed. Requiring this would prune the number of universes that actually inhabit the multiverse.
In a neo-Pythagorean conception of reality, which this book clearly represents, everything is mathematical. It remains to be seen whether those with practical mindsets, such as experimental physicists, engineers, and technicians, who are used to working with measurement errors and statistical uncertainties, will find any common ground with the author's thesis. For them there is no risk in believing his thesis to be false. The practical realities of doing science will still be with them, whatever the nature of reality.









18 of 29 people found the following review helpful
Won't do, June 8, 2014
Anyone who has indoor plants has no doubt run into the problem of proper lighting, with the need sometimes to use artificial lighting. There are several ways in which this might be done, depending on the imagination of the plant lover:
1. One method is to put the plant under a lamp, which is then turned on and off by the plant lover.
2. For those who do not want to remember to turn the lamp on and off, there are devices on the market whose timing can be set by the user to turn a lamp on and off. The actual time that the lamp is on can be set in these devices, according to recommendations given by a plant expert or botanist.
3. Suppose now that this device was modified so as to contain information about the lighting needs of the plant, an Aphelandra squarrosa for example, and that the device was able to turn on and off and vary its lighting intensity based on the judgements of a plant expert. Suppose also that the device is able to compare the efficacy of its "light curve" on the health of the Aphelandra with others grown under light controlled by a device of the same kind. The actual comparison is done under the instigation of the plant lover, and the device can then change its light curve based on the results of the comparison.
4. Suppose that the device is further modified so that it can make the comparison itself, namely it judges whether the difference in the effects of the light curves on the health of the plants is significant and then alters its own light curve appropriately. Its judgments are made independently of the plant lover or plant expert, and are based on historical or experimental data it has access to.
5. As a further modification to the device, suppose it can now formulate a set of hypotheses that explain the effects of this type of artificial light generation on Aphelandra squarrosa. The device generates these hypotheses and formulates theories based on the instigation of the plant lover. For example, the plant lover may want to know how the health of the Aphelandra would be affected by changing the lighting conditions, without having to do the testing herself. The device can also formulate light requirements for plants other than Aphelandra squarrosa.
6. Suppose a further modification gives a device that can use the information on light curves of plants to understand the effects of light on other physical entities. The device can find common elements of behavior in the response of plants to light and the response of these other entities to light, and formulate a set of hypotheses based on these elements. The device attempts to formulate these hypotheses at the instigation of an interested human party. A typical plant lover would probably not want this kind of information, but a scientist or botanist might. The device would probably be too impractical for a typical plant lover, and its additional ability therefore useless for general home use.
7. The device is further modified so that it is curious about the effects of light on entities, whether these entities are plants or something else. It tries to formulate theories on its own, independent of any external interested party. Such a device might be able to formulate procedures, based on genetic engineering, for altering the biochemistry of Aphelandra squarrosa, so as to make it more resilient as a houseplant, possibly needing less light or a radically different light curve.
8. The device is modified so as to be able to self-manage, for example its power requirements. In addition, it can send a set of instructions to a manufacturing facility that will manufacture copies of itself, or it might recommend that its own design be altered and then manufactured, with the recommendations being based on designs it has generated.
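The comparison-and-adjustment behavior of the fourth device above can be sketched in a few lines of code. This is only a hypothetical illustration: the class, method names, health scores, and thresholds below are all invented for the sake of the example, with a simple Welch t statistic standing in for the device's "significance" judgment.

```python
# Hypothetical sketch of the "type 4" device: it compares the health of
# its own plant with that of a peer device's plant, judges whether the
# difference is significant, and adjusts its own light curve accordingly.
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

class LightDevice:
    def __init__(self, intensity):
        self.intensity = intensity  # current light-curve peak intensity

    def compare_and_adjust(self, own_health, peer_health, peer_intensity,
                           threshold=2.0):
        """If the peer's plant is doing significantly better, move our
        light curve halfway toward the peer's setting."""
        t = welch_t(peer_health, own_health)
        if t > threshold:  # peer's plant significantly healthier
            self.intensity += 0.5 * (peer_intensity - self.intensity)
        return self.intensity

dev = LightDevice(intensity=400.0)
# Daily health scores (hypothetical units) for our plant and the peer's:
ours = [6.1, 5.9, 6.0, 5.8, 6.2]
peer = [7.5, 7.8, 7.4, 7.9, 7.6]
dev.compare_and_adjust(ours, peer, peer_intensity=600.0)
```

The later device types differ only in who triggers this logic and how far its hypotheses reach, which is exactly the qualitative ladder the paragraphs below describe.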
It might be fair to say that these eight types of devices are very different, qualitatively speaking. The first type of device is incapable of solving problems and is more of a simple switch. The second type of device represents a machine that can find answers to domain-specific problems but does not compare these answers to any standards. Machines of this type do not attempt to check their answers or correct them. The third device represents machines that find answers to domain-specific problems and check their answers to these problems according to standards given to the machine from an external source. The fourth device represents a machine that is able to check its answers to domain-specific problems and make judgments as to the quality of these answers, and do so independently of any external standards.
The fifth type of device represents machines that are able to judge the quality of their answers to domain-specific problems and then propose theories or explanations that subsume these problems, whereas the sixth type of device is able to solve problems having their origin in more than one domain, but its attempt takes place only at the instigation of an external inquirer. The seventh type of device expresses curiosity and creativity, can solve problems independently without any external instigation, and can develop theories or explanations around these problems. Finally, the eighth type of device represents machines that can self-manage and self-replicate, and have all the abilities of machines of the seventh type.
In analogy with human reasoning, one might argue that as one goes from the first type to the last the intelligence increases. But if one insisted upon a quantitative measure of just how much "smarter" the last type of device is than the first, this would be difficult, since no such measure has yet been devised in the field of artificial/machine intelligence.
And the lack of such a measure is the predominant reason why the thesis of this book is problematic and needs to be rejected. There are many places in the book where the author speaks of "super intelligent" machines as being a thousand or a trillion times more intelligent than humans, but nowhere in the book is there any discussion of how this is to be determined. The author does refer to machines taking IQ tests, and the reader is evidently supposed to surmise that it is the use of these tests that will enable one to determine the time when a machine "could match and then surpass human intelligence." Nowhere in the book, though, is an example given of a machine, either existing or projected into the future, that has taken one or more IQ tests and thereby been shown to be "intelligent" to the degree to which these types of tests measure intelligence (if indeed they do). This is also an indication of the great need in the field of artificial intelligence for a rigorous "theory of intelligence" that would allow researchers and engineers to assess more quantitatively the difference between what is called AGI (artificial general intelligence) and domain-specific intelligence.
Again, qualitatively speaking, one could argue that there are many machines today that exhibit domain-specific intelligence, such as those able to play chess and backgammon, perform financial analysis and trading, regulate and troubleshoot communication networks, and find interesting patterns in genome data. These are just a few examples, and apparently the author wants to base his case for what he believes will be "super intelligent" machines on the proliferation of these types of machines in everyday life, which is indeed occurring. It is true that our lives are dependent on the output of these machines, such as credit scores, financial trading, medical diagnostics, etc. It is quite a stretch though to argue that this massive proliferation of domain-specific reasoning machines will result in machines that can reason over many domains (AGI) without substantial rewriting of their "brains". The author is clearly fearful that this will occur, but he has given absolutely no hint of how it is to be done.
Instead, the author relies on the opinions of experts who work in the field of artificial intelligence, and also gives figures on the funding levels of research in AGI. If one checks the reality of this funding, there are certain instances where one can verify the figures, but to say, as the author does, that "billions" are being spent on bringing about human-level intelligence in machines is difficult to verify. In addition, opinions of experts are valuable in assessing their comfort level with advances in artificial intelligence, but if one is to build a sound case for the "intelligence explosion" that the author claims will happen, one will definitely need to offer a more quantitative case. The Vinge/Kurzweil conception of the "law of accelerating returns" and the associated concept of a technological "singularity" is with each passing year looking more like a sophisticated marketing campaign than sound science, and reliance on these conceptions is not bringing about a theory of machine intelligence that is practical and sound.
There are also a few other difficulties in the claim that superintelligent machines are destined to be our "final invention", mostly coming from basic physics and the manner in which scientific research and results are obtained. There are thermodynamic considerations and energy requirements that need to be addressed if such machines are to operate creatively in bringing about new scientific knowledge and practical products. A "superintelligent" machine engaged in scientific research will need to conduct actual experiments, this being essential to science rather than just thoughtful musings, and this will require space, instrumentation, and a substantial amount of energy. These kinds of machines will also be subject to the ordinary laws of thermodynamics, and will have to deal with the heat they generate when such an "intelligence explosion" occurs.
One might ignore all of these considerations and take the author's case as more of a warning, like those some scientists sounded during the development of nuclear weapons. But to argue that superintelligent machines are the biggest threat to our existence is to ignore the fact that it is the dumbest entity in the world today that holds that privilege, namely the ordinary biological virus.


