Life at the Center of the Energy Crisis: A Technologist’s Search for a Black Swan by George H. Miley


Published by World Scientific, Life at the Center of the Energy Crisis: A Technologist’s Search for a Black Swan by Dr. George H. Miley describes “the story of the author’s work and struggles in the field of energy research.”

Dr. Miley is Professor Emeritus at the University of Illinois at Urbana-Champaign (UIUC) in the Department of Nuclear, Plasma, and Radiological Engineering, and he was one of the few plasma scientists who seriously considered the possibility of nuclear reactions taking place in a solid metal at low temperatures after the announcement by Drs. Martin Fleischmann and Stanley Pons.

As Director of the Fusion Studies Lab at UIUC, he has engaged students (though he will say his students engaged him!) in cold fusion research, also called low-energy nuclear reactions (LENR), since 1989.

When virtually all other mainstream scientific journals refused to accept them, he, as a founding editor of the American Nuclear Society’s (ANS) Fusion Science and Technology, was singular in allowing publication of blacklisted cold fusion research papers.

From the World Scientific site:

The author’s experience in the field spans from work with Admiral Rickover and the Nuclear Navy to research with NASA designing propulsion for spacecraft to travel to Mars. The book provides insights into the differences between nuclear research done during the Cold War by the two superpowers, and offers a commentary on the flaws in each system with hope for change in the future. The book also provides a look into the development of the nuclear engineering program at the University of Illinois from the author’s years as a professor and an administrator.

The recipient of numerous awards, Dr. Miley has been a Guggenheim Fellow and held a senior NATO fellowship. He holds six patents and has most recently been involved in a new-energy start-up, LENUCO, a company formed to develop a heat source for energy production based on metal-hydride reactions.

Earlier this year, LENUCO lost a crowd-voting contest held by Future Energy in a bid for a presentation spot before venture capitalists. The LEN-GEN Module followed the design Miley presented to rocket scientists at last year’s NETS conference. He continues to develop plasma solutions for multiple applications, including space drives, with Nuclear Plasma Laboratories Associates, Inc., another company based in Illinois.

The book’s Foreword is by Dr. Heinz Hora, Professor Emeritus of the University of New South Wales and a long-time collaborator with Dr. Miley on theoretical models of LENR. In my interview with Dr. Miley during the 2012 NETS conference, I profiled some of their work.

Table of Contents

Foreword [.pdf]
Why a “Black Swan”?
Living at the Center of the Energy Crisis
Timeline and Apology
Early Days and Searching for a Starting Path [.pdf]
Burnable Poison Control for Nuclear Submarine Reactors
Nuclear Pulse Propagation and Fission Reactor Kinetics
Nuclear Pumped Laser (NPL) Research
Direct Electron Beam Pumped Laser
Advanced Lasers
Alpha Particle Effects in Thermonuclear Fusion Devices
Alternate Fusion Concepts
Advanced Fuel Fusion and Direct Energy Conversion
Inertial Confinement Fusion (ICF)
Inertial Electrostatic Confinement (IEC) Fusion
Low Energy Nuclear Reactions (LENR)
Hydrogen Economy and Fuel Cells
Fusion Propulsion and Space Colonization
Nuclear Batteries
Computation and Theory
Nuclear Power Plant Safety and the Illinois Low-Level Waste Site
Teaching, Education, and University Administration
Creation of a Small Company, NPL Associates, Inc.
Where Am I in the Search? What Have I Found?
Concluding Comments
Timeline of Events

The contents are suitable for undergraduates and PhDs alike, as well as any member of the public interested in the history of the field.

Buy George H. Miley’s book from World Scientific or from Amazon

Related

New Book Tells of Miley’s Career, Energy Crisis

Sergio Focardi in Remembrance

Video by Ecat.com Sergio Focardi: This is an energy revolution

Sergio Focardi from Ecat.com’s video This is an energy revolution
Physicist Sergio Focardi of University of Bologna has crossed over.

He was part of an early group, including Francesco Piantelli and Roberto Habel, that pioneered the generation of excess heat from Fleischmann-Pons cells using light water and the metal nickel.

He inspired and worked closely with Andrea Rossi on the design of the Energy Catalyzer, or E-Cat, a thermal generator that operates from nickel powders and light-hydrogen gas.

Today on the Journal of Nuclear Physics website, Rossi wrote of Focardi’s passing:

Andrea Rossi
June 22nd, 2013 at 2:46 AM

SERGIO FOCARDI, PROF. EMERITUS OF THE UNIVERSITY OF BOLOGNA, IS DEAD. I RECEIVED THE NEWS FROM ITALY TODAY AT 3 A.M., USA EASTERN TIME, A FEW MINUTES AGO.

We all have lost one of the greatest scientists in the field of the LENR.

For me he has been a tremendous ally, he helped our work enormously and the safety certifications that we are obtaining are the fruit of his consulting during the last 7 years. For me he has been also a teacher for Physics and Mathematics, anytime I needed his help in these matters to better understand the theory behind the effect of the E-Cat.

He has always worked with us with a total, absolute and disinterested attitude, thinking only of the interest of the Science behind the LENR.

All the newspapers of the scientific world will say what he has been in the Scientific and University world and his enormous legacy: he has been Professor of Physics and Mathematics, he has been the Dean of the Scientific Faculties of the Alma Mater University of Bologna and the founder of the Cesena branch of the University of Bologna. His publications in the fields of Mathematics and Physics are monumental.

Now, after a long period of illness, which obviously all his friends have kept secret to respect his privacy, he ceased to suffer and starts a new duty for God under another form of life. I am sure he will continue to look after my work from where he is now.

See you soon, my great Friend and Master Sergio! I will never forget our work together and that day in the Brasimone Nuclear facility.

Yours Andrea Rossi

Sergio Focardi at TEDxBologna: E-cat e la fusione nucleare fredda con il Nichel e l’Idrogeno (E-Cat and cold nuclear fusion with nickel and hydrogen)

Related

Anomalous Heat Production in Ni-H Systems 1994 [.pdf]
Sergio Focardi, Francesco Piantelli, and Roberto Habel

Overview of Ni-H systems: old experiments and new set-up [.pdf] by E. Campari, S. Focardi, V. Gabbani, V. Montalbano, F. Piantelli, and S. Veronesi

Cold Fusion: The History of Research in Italy 2009 [.pdf]

Aether, the Theory of Relativity, and LENR Energy

“We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether.” – Albert Einstein

Way Back

In 1989, the popular yet controversial cold fusion ‘Fleischmann and Pons Effect’ challenged the notions of theoretical physicists of the time. Newly established arts today, like cold fusion/LENR (low energy nuclear reaction) science, continue to do so.

Science progresses by challenging established notions that cannot properly hold observed phenomena within a theoretical framework. Through this process of researching the unknown, new scientific arts become established. Then theoretical physicists have a whole new playground in which to make predictions, as well as an arena in which to create new physical theories and grandiose mathematical models of physics, like Einstein’s.

Many modern arts of science weren’t firmly established when early cold fusion researchers started college. A few of these arts are notable in the LENR energy arena today. Nanoengineering and science, with the likes of carbon nanotubes, allow for new methods of constructing the required fractal geometries within the low energy nuclear reactive lattice. Quantum physics and engineering also play an important role, bringing a deeper understanding of the atom. This ever-growing field, understanding the actions in the subatomic realm, provides new glimpses into the inner workings of the low energy nuclear reactive environment. In this dynamic multidisciplinary field, the LENR sciences, both theory and engineering, are improving as we progress in the art.

During the early ’80s, one would venture to say, there were three or four dozen subatomic particles that we knew of; in Einstein’s time, perhaps even fewer. Now we are looking at well over a hundred and fifty of them. The list is mind-boggling to conceptualize, observe, and then finally comprehend. (That’s what we have open-minded experimental and theoretical scientists for.) The article “Not so Elementary, My Dear Electron” is an example. It takes us far from the “Standard Model” of my youth. The once “elementary” electron has been ‘split’ into three… a holon, a spinon, and an orbiton.

After reading that article, my preconceived grip on reality became unhinged. That night I had a dream, finding myself shrunk down, traveling the empty space within the low energy nuclear reactive environment. There, right before my eyes, an electron split into its three elements -WOW- One Went Flying OFF Into a Far Distancing Dimension

Then it Went Super Nova!!!               Lesson Learned

Watch what you read before nodding off into

The Aether of the Dreamland

My Heart Hopes That

We can ALL

Enjoy

Aether Science

Another art pertaining to the low energy nuclear environment is Aether Science – the science of the vacuum. The Aether, or ether, is that which fills “empty space”. “Space” is found in the outer reaches between planets and between stars, and “space” is found between atoms. There is more space than matter in the universe: more space between the atoms in molecules, and more space between the subatomic particles of the atom, than there is matter… yet space is not, in reality, truly empty. Read NASA’s “Dark Energy, Dark Matter”.

Quantum science pushes the envelope, inviting us to explore the physical realities within the Aether. Research these sciences at the U.S. Department of Energy – Office of Science website (links go to the U.S. DoE search engine): Dark Energy (46 papers, year 2013), Zero Point Energy (13 papers, year 2013), Vacuum Field (43 papers, under ‘Energy’), Gravity (103 papers, year 2013), and LENR (38 papers, under ‘Low Energy Nuclear Reaction’).

In an address delivered on May 5th, 1920, at the University of Leyden, a theoretical physicist once said,

“As to the part which the new ether is to play in the physics of the future we are not yet clear. We know that it determines the metrical relations in the space-time continuum, e.g. the configurative possibilities of solid bodies as well as the gravitational fields; but we do not know whether it has an essential share in the structure of the electrical elementary particles constituting matter. Nor do we know whether it is only in the proximity of ponderable masses that its structure differs essentially from that of the Lorentzian ether; whether the geometry of spaces of cosmic extent is approximately Euclidean. But we can assert by reason of the relativistic equations of gravitation that there must be a departure from Euclidean relations, with spaces of cosmic order of magnitude, if there exists a positive mean density, no matter how small, of the matter in the universe. In this case the universe must of necessity be spatially unbounded and of finite magnitude, its magnitude being determined by the value of that mean density.

If we consider the gravitational field and the electromagnetic field from the standpoint of the ether hypothesis, we find a remarkable difference between the two. There can be no space nor any part of space without gravitational potentials; for these confer upon space its metrical qualities, without which it cannot be imagined at all. The existence of the gravitational field is inseparably bound up with the existence of space. On the other hand a part of space may very well be imagined without an electromagnetic field; thus in contrast with the gravitational field, the electromagnetic field seems to be only secondarily linked to the ether, the formal nature of the electromagnetic field being as yet in no way determined by that of gravitational ether. From the present state of theory it looks as if the electromagnetic field, as opposed to the gravitational field, rests upon an entirely new formal motif, as though nature might just as well have endowed the gravitational ether with fields of quite another type, for example, with fields of a scalar potential, instead of fields of the electromagnetic type.

Since according to our present conceptions the elementary particles of matter are also, in their essence, nothing else than condensations of the electromagnetic field, our present view of the universe presents two realities which are completely separated from each other conceptually, although connected causally, namely, gravitational ether and electromagnetic field, or — as they might also be called — space and matter.

Of course it would be a great advance if we could succeed in comprehending the gravitational field and the electromagnetic field together as one unified conformation. Then for the first time the epoch of theoretical physics founded by Faraday and Maxwell would reach a satisfactory conclusion. The contrast between ether and matter would fade away, and, through the general theory of relativity, the whole of physics would become a complete system of thought, like geometry, kinematics, and the theory of gravitation.”

Albert Einstein

What is Aether?

Robert B. Laughlin Nobel Laureate in Physics-Stanford University-The Ether

In contemporary theoretical physics: “It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed. The word ‘ether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. Relativity actually says nothing about the existence or nonexistence of matter pervading the universe, only that any such matter must have relativistic symmetry. It turns out that such matter exists. About the time relativity was becoming accepted, studies of radioactivity began showing that the empty vacuum of space had spectroscopic structure similar to that of ordinary quantum solids and fluids. Subsequent studies with large particle accelerators have now led us to understand that space is more like a piece of window glass than ideal Newtonian emptiness. It is filled with ‘stuff’ that is normally transparent but can be made visible by hitting it sufficiently hard to knock out a part. The modern concept of the vacuum of space, confirmed every day by experiment, is a relativistic ether. But we do not call it this because it is taboo.” Laughlin, Robert B. (2005). “A Different Universe: Reinventing Physics from the Bottom Down”  pp. 120–121.

“A Different Universe: Reinventing Physics from the Bottom Down” – a review by Jeremy Chunn:

“Tired of the predictable ‘clockwork’ nature of the physical world as defined by Newtonian laws? Then you’ll find a friend in Robert B. Laughlin. He suspects the fact that Newtonian laws break down at quantum levels and fail to predict all phases between states is evidence the physical world is still highly mysterious.”

Paul Dirac wrote in 1951

“Physical knowledge has advanced much since 1905, notably by the arrival of quantum mechanics, and the situation [about the scientific plausibility of Aether] has again changed. If one examines the question in the light of present-day knowledge, one finds that the Aether is no longer ruled out by relativity, and good reasons can now be advanced for postulating an Aether. We have now the velocity at all points of space-time, playing a fundamental part in electrodynamics. It is natural to regard it as the velocity of some real physical thing. Thus with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an Aether.” – “Is there an Aether?”, Nature 168 (1951), p. 906.

“Is there an Aether?” abstract by Dirac, St. John’s College, Cambridge, Oct. 9, 1951:

In the last century, the idea of a universal and all-pervading æther was popular as a foundation on which to build the theory of electromagnetic phenomena. The situation was profoundly influenced in 1905 by Einstein’s discovery of the principle of relativity, leading to the requirement of a four-dimensional formulation of all natural laws. It was soon found that the existence of an æther could not be fitted in with relativity, and since relativity was well established, the æther was abandoned.

John Bell, interviewed by Paul Davies in “The Ghost in the Atom” (1986), suggested that an aether theory might help resolve the EPR paradox by allowing a reference frame in which signals go faster than light. He suggested that Lorentz contraction is perfectly coherent, not inconsistent with relativity, and could produce an aether theory perfectly consistent with the Michelson-Morley experiment.

Bell suggests the aether was wrongly rejected on purely philosophical grounds:

“What is unobservable does not exist”

Besides the arguments based on his interpretation of quantum mechanics, Bell also suggested resurrecting the aether because it is a useful pedagogical device; that is, many problems are solved more easily by imagining the existence of an aether. (The Ghost in the Atom: A Discussion of the Mysteries of Quantum Physics)

As noted by Alexander Markovich Polyakov in 1987

“Elementary particles existing in nature resemble very much excitations of some complicated medium (Aether). We do not know the detailed structure of the Aether but we have learned a lot about effective Lagrangians for its low energy excitations. It is as if we knew nothing about the molecular structure of some liquid but did know the Navier-Stokes equation and could thus predict many exciting things. Clearly, there are lots of different possibilities at the molecular level leading to the same low energy picture.”

From A. M. Polyakov, “Gauge Fields and Strings,” Harwood Academic Publishers (1987), sec. 12

LENR and the Aether – Harold Aspden

‘Heavy Electron’ (‘Mu-meson’) Vacuum Field – Electron-Proton ‘Creation’

Dr. Harold Aspden is of particular interest. A brilliant man, he successfully predicted the mass of the proton and was a pioneer of efficient thermoelectric conversion devices. He was the first to be issued a U.S. patent with ‘cold fusion’ contained in the text of the application. A further example of his brilliance is his theoretical papers on Aether Science. This list is of ten Harold Aspden patents granted, applied for, or cited that concern “Cold Fusion,” LENR, and the Aether (ZPE). Here is an excellent biography of the honorable Dr. Harold Aspden, including all his theories, published works, and documented efforts in the Aether and LENR sciences.

GIVE THANKS and Support to Pure Energy Systems News for compiling the best in LENR history and news.

Ten of the 119 patents found at Cold Fusion Now’s “Harold Aspden Patent Tribute – Honoring Dr. Aspden”

For links to these patents, open the “Harold Aspden Patent Tribute.”

  1. Cold Nuclear Fusion Method and Apparatus – App., filed Apr 20, 1990, published Nov 1, 1990 – Richard Geoffrey Belton, The Broken Hill Proprietary Company Limited (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)
  2. Hydrogen Activated Heat Generation Apparatus – App., filed May 23, 1994, published Dec 8, 1994 – Inventor: Harold Aspden; Applicants: Aspden, Harold; Eneco, Inc.
  3. Cold Nuclear Fusion Method and Apparatus – App., filed Apr 20, 1990, published Nov 1, 1990 – Richard Geoffrey Belton, The Broken Hill Proprietary Company Limited (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)
  4. Methods and Systems for Generating High Energy Photons or Quantum… – Grant, filed Nov 21, 2001, issued Aug 30, 2005 – Kiril B. Chukanov, Chukanov Quantum Energy, L.L.C. (other publications: Aspden, Harold, “Aether Science Papers: Part I: The Creative Vacuum,” Aether Science Papers (1996), pp. 26–32)
  5. Device to Move an Object Back and Forth – Grant, filed Jan 22, 2008, issued Mar 8, 2011 – Harvey Emanuel Fiala (“Harold E. Puthoff and Harold Aspden are recent exponents of ZPE…. Aspden, Harold: Power from Space: Inertia and Gravitation, Energy Science Report No. …”)
  6. Inertial Propulsion Device to Move an Object Up and Down – Grant, filed Feb 11, 2011, issued Nov 29, 2011 – Harvey E. Fiala (“…energy (ZPE) or space energy at every point in space, possibly even of the order of magnitude of nuclear energy”)
  7. Hydrogen Activated Heat Generation Apparatus – App., filed May 23, 1994, published Feb 9, 1995 – Inventor: Harold Aspden; Applicant: Aspden, Harold
  8. Production of Thermal Energy – App., filed Jun 4, 1990, published Dec 13, 1990 – Cyril Barrie Edwards (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)
  9. Method for Producing Plasma Nuclear Fusion – App., filed Apr 9, 1990, published Oct 24, 1990 – Shunpei Yamazaki, Semiconductor Energy Laboratory Co., Ltd. (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)
  10. Solid State Surface Micro-Plasma Fusion Device – App., filed May 28, 1992, published Dec 23, 1992 – Ell Yeong Kim, Purdue Research Foundation (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)

Told by Dr. Aspden

His story of a ‘Cold Fusion’ institutional firewall at the U.S. Patent Office:

The tactics I adopted in my efforts to secure a granted patent involved filing a U.S. continuation-in-part application based on the pending cold fusion application that had survived the PCT stage, but before it came under the executioner’s axe wielded by Harvey Behrend. My plan was to emphasize the thermoelectric aspects of the invention, but discuss their relevance to ‘cold fusion’ and incorporate a very substantial Appendix on that subject. I wrote the specification discussing the merits of ‘cold fusion’ and offered as an invention a special form of apparatus which I regarded as useful for testing the cold fusion process.

There was a 50:50 chance that the new application would be assigned to Harvey Behrend’s examining group, but the abstract stressed thermoelectric energy conversion and not cold fusion, so I had my fingers crossed in hoping that Art group 1102 and not Harvey Behrend’s Art group 2204 would be put in charge of the case in the U.S. Patent and Trademark Office.

So that you, the reader, may understand what this is all about, and particularly so that my colleagues in the patent profession in Europe who may come to hear about this may understand as well, I feel it appropriate to quote a few words from an article which appeared in the July-November double issue of ‘Infinite Energy’, Nos. 15 and 16, at page 86.

I refer to Dr. Hal Fox’s article ‘New Energy Sources for the Near Future: An Open Letter to Decision Makers’. Hal Fox is Editor of the Journal of New Energy. He is located in Utah, where the saga of cold fusion was born, and he has followed the cold fusion theme as closely as anyone over the years dating from March 1989, when that hope and prospect for a new energy technology was first announced.

Hal Fox, Quote…

“A university professor who has been supported by a multi-million dollar hot fusion contract and who becomes an advisor to the Department of Energy is unlikely to advise the government to fund a competitive low-energy technology. There would be very strong university pressure to continue in the development of hot fusion! This combination of federal funds, appointments to advisory groups, and the pressures for institutional funds on the advisers, has resulted in scientists becoming lobbyists with the following results:

  • The Office of Patents and Trademarks has been advised not to allow patents on competitive technology to hot fusion.

  • Leaders of some professional societies (such as the American Physical Society) have lobbied to prevent major peer-reviewed journals from publishing articles about competing technologies.”

Aether: How it relates to cold fusion (link)

A BREAKTHROUGH: U.S. PATENT NO. 5,734,122

Cold Fusion Appears in a U.S. Patent!

Copyright © 1998 Harold Aspden

The Fusion Criteria

In a very hot proton gas protons can combine to create heavier atomic nuclei. This is facilitated if there is something effectively neutralizing the charge repulsion between the protons. A proton or anti-proton charge can become neutral if a beta particle of opposite polarity combines with it in some way to be seen as a neutron. Alternatively it is conceivable that in the very energetic field conditions that one can foresee, particularly in the presence of strong gravity fields, the field medium itself can be such as to overcome the mutual repulsion or the medium itself may become electrically polarized to provide a background that can serve as the neutralizing influence. In any event, the high energy physics of the scenario by which protons synthesize heavier forms of matter has to explain why hot fusion occurs and the picture just presented has to be very close to what has just been outlined.

Now, there is one important aspect here that tends to be overlooked. How do those protons get created in the first place? The scientific challenge here is not concerned with fusion but rather initial creation and the answer lies in finding the true explanation for what governs the mass of the proton. This is a theoretical exercise in which this Applicant has played an important and recognized part, because, although the world has not rushed into accepting the Applicant’s explanation, it is a fact that the precise value of the proton-electron mass ratio of 1836.152 was deduced in terms of the mu-meson field. This derivation involved collaboration with Dr. D. M. Eagles of the then National Standards Laboratory in Australia. It was reported in the U.S.A. Institute of Physics journal Physics Today in 1984 (November issue, p. 15) and was mentioned in their 1985 update by the leading U.S. researchers who measure this quantity. See R.S. Van Dyck et al: International Journal of Mass Spectroscopy and Ion Processes, 66, (1985) pp. 327-337. They noted how remarkably close the theoretical value was to the one they measured and added ‘This is even more curious when one notes that they [meaning this Applicant and Dr. Eagles] published this result several years before direct precision measurements of this ratio had begun.’

Given that the Applicant knows how protons are created from a mu-meson field and taking into account that physicists familiar with quantum electrodynamics know that the vacuum field is the seat of activity of electron and positron creation and that mu-mesons are otherwise known as ‘heavy electrons’, it needs little imagination then to suspect that Nature is trying to create protons continuously everywhere in space. Since we do not see such protons materializing before our eyes we must infer that they exist only very transiently after creation unless the field medium has surplus energy to be shed over and above its local equilibrium requirements.

The Applicant’s Electrodynamic Research

There are long-accepted but unresolved anomalies concerning the anomalously very high forces exerted on heavy ions in a cold cathode discharge. In researching this subject the Applicant has established that the forces exerted on a heavy ion owing to its electrodynamic interaction with an electron are, in theory, enhanced by a factor equal to the ion-electron mass ratio.

This theory leads to a breach of the law that specifies balance of action and reaction, which means that energy is being exchanged with the field medium in which the electromagnetic reference frame is seated. The effective electromagnetic reference frame has a structure, as if it is formed by a fluid crystal lattice which, on a local scale, can adapt or maybe govern the shell structure of an atomic nucleus. Thus, normally, the motion of atoms and even ions in a gas or a solution will not evidence the anomalous electrodynamic effects, simply because they do not move relative to the local electromagnetic reference frame, meaning that, as far as concerns translational motion, the electrons present are the only active participant electrodynamically.

It is, however, quite a different situation when we consider a proton or a deuteron as a free ion inside the crystal host lattice of a metallic form, because there can only be one electromagnetic reference frame effective at any location in that metal. Therefore, a proton that is within a host crystal, and is free to move through it, will be seen as moving relative to the electromagnetic reference frame and then it can contribute to anomalous electrodynamic effects.

These conditions were the subject of the Applicant’s research as a Visiting Senior Research Fellow at the University of Southampton in England 1983 onwards. The Applicant had written on the subject of the proton, the deuteron and the neutron, pursuing the theme that no neutrons exist inside the deuteron and stressing that atomic nuclei are composites of beta particles and protons or antiprotons. This work was all published before 1989.

The anomalous electrodynamic forces that exist in the heavy ion/electron interaction imply a hidden source of energy and so of heat but the Applicant’s research was aimed essentially at proving the modified law of electrodynamics dictated by that research. Certainly, whilst the ability to accelerate heavy ions by drawing on a hidden source of field energy was one of the Applicant’s pursuits, at no time had the Applicant contemplated the prospect of a fusion reaction of the kind implied by Fleischmann and Pons.

Nevertheless, as soon as that latter work was reported, the research knowledge arising from the author’s investigations was seen as relevant in the onward exploration of the excess heat phenomenon.

The Applicant was not only interested because of the excess energy aspect. There was the no-neutron feature and the fact that the process involved ion migration through water. There was the fact that the deuteron was the primary agent and this Applicant had shown, from the theory of the deuteron mass and its magnetic moment, that deuterons undergo cyclic changes of state and the state which prevails for one seventh of the time, the deuteron has a neutral core, having transiently shed a beta particle. More than this, however, the author had become involved at the time with two inventions, one of which later became the subject of a U.S. Patent (Serial No. 5,065,085) and these involved anomalous energy activity in a thermoelectric context which bears upon the cold fusion issue.

The other, less important, of these inventions was concerned with ‘warm’ superconductivity. The Applicant’s research had suggested that substances having certain molecular mass forms are adapted to absorb impact by conduction electrons in such a way that the change of inductive energy accompanying the collision is conserved until the resulting EMF changes can impart the energy to another electron. This meant that the thermal energy of a heavy ion in the substance could be reduced to feed the normal resistance loss associated with the current. This was, therefore, a process by which anomalous heat energy activity was involved in electrodynamic interactions between heavy ions and electrons.

The more important invention of the two just mentioned was concerned with the anomalous behaviour of a thermoelectric interface between two metals when subjected to a strong magnetic field in a rather special conductor configuration. The Nernst Effect operates to cause heat carried by electrons in a metal to be converted into an electric potential energy by the ordering action of a transversely directed magnetic field.

The essential requirement for the action of the Nernst Effect is that there is a temperature gradient in the metal and, given such a temperature gradient, and the magnetic field, there will then be an electric potential gradient set up within the metal. Now, a potential gradient inside a metal conductor implies that there is inside the body of the metal a distribution of electric charge not neutralized by normal metallic conduction. The polarity of that charge is determined by the direction of the thermal gradient and the orientation of the magnetic field. It can be negative or positive by choice in the design of the apparatus used.
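The transverse field described here follows the standard Nernst relation, E_y = N · B_z · (dT/dx), where N is the Nernst coefficient of the metal. A minimal numerical sketch; the coefficient, field, and gradient values below are illustrative only and are not taken from the text:

```python
# Transverse Nernst field E_y produced by a temperature gradient dT/dx
# in a transverse magnetic field B_z; N is the Nernst coefficient.
def nernst_field(N, B_z, dT_dx):
    """Return the transverse electric field E_y in V/m."""
    return N * B_z * dT_dx

# Hypothetical values: N = 1e-6 V/(K*T), B_z = 1 T, gradient 1000 K/m,
# giving a field on the order of 1e-3 V/m inside the metal.
E_y = nernst_field(N=1e-6, B_z=1.0, dT_dx=1.0e3)
```

The sign of E_y, and hence the polarity of the internal charge the text describes, flips with either the gradient direction or the field orientation, which is the design freedom the passage refers to.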

Besides this, the Applicant knew that the flow of a strong current through a metal conductor will promote what is known as the pinch effect in which electrodynamic forces act on the negative electron charge carriers to pinch them inwards and so set up an excess negative charge distribution inside the metal conductor.

This, plus the additional feature that a strong current flow through a metal conductor that is populated by free deuterons will promote a migration of deuterons that will bring them more frequently into near collision, all militated in favour of an invention proposing the provision of a supplementary high current closed circuit through the cathode of a cold fusion cell. That, indeed, became the subject of the patent application which the Applicant filed in U.K. on April 15, 1989, this being the priority application relied upon in the U.S. Patent Application under petition.

The Applicant, therefore, had reason to believe that the work on cold fusion would progress if the auxiliary current activation circuit were to be used.

However, in the event, the pioneer work of Fleischmann and Pons became the subject of such criticism that there was no prospect of getting R & D funding to take the subject invention forward. One is confronted with a chicken-and-egg scenario in which disbelief in cold fusion as a scientific possibility stands in the way of securing a patent grant, while doubts about securing a patent stand in the way of finding sponsorship for the development.

The Fusion Criteria Reexamined: There are three criteria that need to be satisfied simultaneously to promote and enhance the cold fusion reaction of deuterons. 

  • Firstly, there is the background incidence of the virtual mu-meson field which is trying everywhere to create protons. This is a natural activity that cannot be controlled. It is a statistical effect, but one can calculate the probability governing proton creation fluctuations in a given volume of cathode material. See comments below. 

  • Secondly, there is the need to bring the deuteron partner in the fusion process into close proximity with the target deuteron. In hot fusion reactions this is achieved by the motion associated with thermal activity. In cold fusion it is achieved by absorbing deuterons into a host metal, in which they become separated from their satellite electrons, and by concentrating the loading of the deuteron population. 

  • Thirdly, as with the creation of stars by hydrogen fusion, there is the need to provide the field which pulls the deuterons together in spite of their mutual repulsion. In cold fusion this means the provision of a neutralizing negative charge distribution within the body of the host metal. This requires strong electron current surges resulting in heat concentrations which set up temperature gradients in company with transverse magnetic fields. However, the structural form of the host metal in relation to the current channel, the magnetic field effect and the heat conduction path require a mutually orthogonal geometry to provide an optimum action. 

Note that the surplus negative charge may result in a charge density that is quite small in relation to the positive charge of the deuteron population, but every unit of charge is seated in a discrete electron, and a single electron which upsets the normal charge balance of deuterons and free conduction electrons can nucleate a pair of deuterons.

Then, the creation of a proton in one deuteron accompanied by the demise of a proton in the other will convert the two deuterons into a tritium nucleus and free a proton with a beta particle transferring between the two. Alternatively one deuteron will convert into helium 3 and the proton released will be in company with a beta minus particle.

The onward reactions involving neutrons that are observed with hot fusion processes need not occur if the events involved are triggered naturally by the mu-meson activity in trying to create protons rather than by neutron bombardment.
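The “statistical effect” named in the first criterion, fluctuations in the number of proton-creation events in a given volume, can be modelled under the usual assumption of independent events as a Poisson process. A minimal sketch; the expected count used below is hypothetical, not a value from the text:

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k proton-creation events when the expected
    number of events in the sampled cathode volume is lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hypothetical expected count: lam = 2 events in the sampled volume.
lam = 2.0
p_none = poisson_pmf(0, lam)                                # no event at all
p_surge = 1.0 - sum(poisson_pmf(k, lam) for k in range(5))  # fluctuation of 5 or more
```

The point of such a calculation is only that, even when the mean rate is fixed by nature, the probability of a local surge in a chosen volume can be estimated.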

 

Excellent Perspective From Relativity Past

 

“Ether and the Theory of Relativity” By Albert Einstein

An Address delivered on May 5th, 1920, 
in the University of Leyden

Translated by George Barker Jeffery and Wilfrid Perrett

From: Sidelights on Relativity (1922), pp.3-24, London: Methuen

German original: Äther und Relativitätstheorie (1920), Berlin: Springer

How does it come about that alongside of the idea of ponderable matter, which is derived by abstraction from everyday life, the physicists set the idea of the existence of another kind of matter, the ether? The explanation is probably to be sought in those phenomena which have given rise to the theory of action at a distance, and in the properties of light which have led to the undulatory theory. Let us devote a little while to the consideration of these two subjects.

Outside of physics we know nothing of action at a distance. When we try to connect cause and effect in the experiences which natural objects afford us, it seems at first as if there were no other mutual actions than those of immediate contact, e.g. the communication of motion by impact, push and pull, heating or inducing combustion by means of a flame, etc. It is true that even in everyday experience weight, which is in a sense action at a distance, plays a very important part. But since in daily experience the weight of bodies meets us as something constant, something not linked to any cause which is variable in time or place, we do not in everyday life speculate as to the cause of gravity, and therefore do not become conscious of its character as action at a distance. It was Newton’s theory of gravitation that first assigned a cause for gravity by interpreting it as action at a distance, proceeding from masses. Newton’s theory is probably the greatest stride ever made in the effort towards the causal nexus of natural phenomena. And yet this theory evoked a lively sense of discomfort among Newton’s contemporaries, because it seemed to be in conflict with the principle springing from the rest of experience, that there can be reciprocal action only through contact, and not through immediate action at a distance.

It is only with reluctance that man’s desire for knowledge endures a dualism of this kind. How was unity to be preserved in his comprehension of the forces of nature? Either by trying to look upon contact forces as being themselves distant forces which admittedly are observable only at a very small distance and this was the road which Newton’s followers, who were entirely under the spell of his doctrine, mostly preferred to take; or by assuming that the Newtonian action at a distance is only apparently immediate action at a distance, but in truth is conveyed by a medium permeating space, whether by movements or by elastic deformation of this medium. Thus the endeavour toward a unified view of the nature of forces leads to the hypothesis of an ether. This hypothesis, to be sure, did not at first bring with it any advance in the theory of gravitation or in physics generally, so that it became customary to treat Newton’s law of force as an axiom not further reducible. But the ether hypothesis was bound always to play some part in physical science, even if at first only a latent part.

When in the first half of the nineteenth century the far-reaching similarity was revealed which subsists between the properties of light and those of elastic waves in ponderable bodies, the ether hypothesis found fresh support. It appeared beyond question that light must be interpreted as a vibratory process in an elastic, inert medium filling up universal space. It also seemed to be a necessary consequence of the fact that light is capable of polarisation that this medium, the ether, must be of the nature of a solid body, because transverse waves are not possible in a fluid, but only in a solid. Thus the physicists were bound to arrive at the theory of the “quasi-rigid” luminiferous ether, the parts of which can carry out no movements relatively to one another except the small movements of deformation which correspond to light-waves.

This theory — also called the theory of the stationary luminiferous ether — moreover found a strong support in an experiment which is also of fundamental importance in the special theory of relativity, the experiment of Fizeau, from which one was obliged to infer that the luminiferous ether does not take part in the movements of bodies. The phenomenon of aberration also favoured the theory of the quasi-rigid ether.

The development of the theory of electricity along the path opened up by Maxwell and Lorentz gave the development of our ideas concerning the ether quite a peculiar and unexpected turn. For Maxwell himself the ether indeed still had properties which were purely mechanical, although of a much more complicated kind than the mechanical properties of tangible solid bodies. But neither Maxwell nor his followers succeeded in elaborating a mechanical model for the ether which might furnish a satisfactory mechanical interpretation of Maxwell’s laws of the electro-magnetic field. The laws were clear and simple, the mechanical interpretations clumsy and contradictory. Almost imperceptibly the theoretical physicists adapted themselves to a situation which, from the standpoint of their mechanical programme, was very depressing. They were particularly influenced by the electro-dynamical investigations of Heinrich Hertz. For whereas they previously had required of a conclusive theory that it should content itself with the fundamental concepts which belong exclusively to mechanics (e.g. densities, velocities, deformations, stresses) they gradually accustomed themselves to admitting electric and magnetic force as fundamental concepts side by side with those of mechanics, without requiring a mechanical interpretation for them. Thus the purely mechanical view of nature was gradually abandoned. But this change led to a fundamental dualism which in the long-run was insupportable. A way of escape was now sought in the reverse direction, by reducing the principles of mechanics to those of electricity, and this especially as confidence in the strict validity of the equations of Newton’s mechanics was shaken by the experiments with β-rays and rapid kathode rays.

This dualism still confronts us in unextenuated form in the theory of Hertz, where matter appears not only as the bearer of velocities, kinetic energy, and mechanical pressures, but also as the bearer of electromagnetic fields. Since such fields also occur in vacuo — i.e. in free ether — the ether also appears as bearer of electromagnetic fields. The ether appears indistinguishable in its functions from ordinary matter. Within matter it takes part in the motion of matter and in empty space it has everywhere a velocity; so that the ether has a definitely assigned velocity throughout the whole of space. There is no fundamental difference between Hertz’s ether and ponderable matter (which in part subsists in the ether).

The Hertz theory suffered not only from the defect of ascribing to matter and ether, on the one hand mechanical states, and on the other hand electrical states, which do not stand in any conceivable relation to each other; it was also at variance with the result of Fizeau’s important experiment on the velocity of the propagation of light in moving fluids, and with other established experimental results.

Such was the state of things when H. A. Lorentz entered upon the scene. He brought theory into harmony with experience by means of a wonderful simplification of theoretical principles. He achieved this, the most important advance in the theory of electricity since Maxwell, by taking from ether its mechanical, and from matter its electromagnetic qualities. As in empty space, so too in the interior of material bodies, the ether, and not matter viewed atomistically, was exclusively the seat of electromagnetic fields. According to Lorentz the elementary particles of matter alone are capable of carrying out movements; their electromagnetic activity is entirely confined to the carrying of electric charges. Thus Lorentz succeeded in reducing all electromagnetic happenings to Maxwell’s equations for free space.

As to the mechanical nature of the Lorentzian ether, it may be said of it, in a somewhat playful spirit, that immobility is the only mechanical property of which it has not been deprived by H. A. Lorentz. It may be added that the whole change in the conception of the ether which the special theory of relativity brought about, consisted in taking away from the ether its last mechanical quality, namely, its immobility. How this is to be understood will forthwith be expounded.

The space-time theory and the kinematics of the special theory of relativity were modelled on the Maxwell-Lorentz theory of the electromagnetic field. This theory therefore satisfies the conditions of the special theory of relativity, but when viewed from the latter it acquires a novel aspect. For if K be a system of co-ordinates relatively to which the Lorentzian ether is at rest, the Maxwell-Lorentz equations are valid primarily with reference to K. But by the special theory of relativity the same equations without any change of meaning also hold in relation to any new system of co-ordinates K’ which is moving in uniform translation relatively to K. Now comes the anxious question: — Why must I in the theory distinguish the K system above all K’ systems, which are physically equivalent to it in all respects, by assuming that the ether is at rest relatively to the K system? For the theoretician such an asymmetry in the theoretical structure, with no corresponding asymmetry in the system of experience, is intolerable. If we assume the ether to be at rest relatively to K, but in motion relatively to K’, the physical equivalence of K and K’ seems to me from the logical standpoint, not indeed downright incorrect, but nevertheless unacceptable.

The next position which it was possible to take up in face of this state of things appeared to be the following. The ether does not exist at all. The electromagnetic fields are not states of a medium, and are not bound down to any bearer, but they are independent realities which are not reducible to anything else, exactly like the atoms of ponderable matter. This conception suggests itself the more readily as, according to Lorentz’s theory, electromagnetic radiation, like ponderable matter, brings impulse and energy with it, and as, according to the special theory of relativity, both matter and radiation are but special forms of distributed energy, ponderable mass losing its isolation and appearing as a special form of energy.

More careful reflection teaches us, however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. We shall see later that this point of view, the conceivability of which I shall at once endeavour to make more intelligible by a somewhat halting comparison, is justified by the results of the general theory of relativity.

Think of waves on the surface of water. Here we can describe two entirely different things. Either we may observe how the undulatory surface forming the boundary between water and air alters in the course of time; or else — with the help of small floats, for instance — we can observe how the position of the separate particles of water alters in the course of time. If the existence of such floats for tracking the motion of the particles of a fluid were a fundamental impossibility in physics — if, in fact, nothing else whatever were observable than the shape of the space occupied by the water as it varies in time, we should have no ground for the assumption that water consists of movable particles. But all the same we could characterize it as a medium.

We have something like this in the electromagnetic field. For we may picture the field to ourselves as consisting of lines of force. If we wish to interpret these lines of force to ourselves as something material in the ordinary sense, we are tempted to interpret the dynamic processes as motions of these lines of force, such that each separate line of force is tracked through the course of time. It is well known, however, that this way of regarding the electromagnetic field leads to contradictions.

Generalizing we must say this: — There may be supposed to be extended physical objects to which the idea of motion cannot be applied. They may not be thought of as consisting of particles which allow themselves to be separately tracked through time. In Minkowski’s idiom this is expressed as follows: — Not every extended conformation in the four-dimensional world can be regarded as composed of world-threads. The special theory of relativity forbids us to assume the ether to consist of particles observable through time, but the hypothesis of ether in itself is not in conflict with the special theory of relativity. Only we must be on our guard against ascribing a state of motion to the ether.

Certainly, from the standpoint of the special theory of relativity, the ether hypothesis appears at first to be an empty hypothesis. In the equations of the electromagnetic field there occur, in addition to the densities of the electric charge, only the intensities of the field. The career of electromagnetic processes in vacua appears to be completely determined by these equations, uninfluenced by other physical quantities. The electromagnetic fields appear as ultimate, irreducible realities, and at first it seems superfluous to postulate a homogeneous, isotropic ether-medium, and to envisage electromagnetic fields as states of this medium.

But on the other hand there is a weighty argument to be adduced in favour of the ether hypothesis. To deny the ether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view. For the mechanical behaviour of a corporeal system hovering freely in empty space depends not only on relative positions (distances) and relative velocities, but also on its state of rotation, which physically may be taken as a characteristic not appertaining to the system in itself. In order to be able to look upon the rotation of the system, at least formally, as something real, Newton objectivises space. Since he classes his absolute space together with real things, for him rotation relative to an absolute space is also something real. Newton might no less well have called his absolute space “Ether”; what is essential is merely that besides observable objects, another thing, which is not perceptible, must be looked upon as real, to enable acceleration or rotation to be looked upon as something real.

It is true that Mach tried to avoid having to accept as real something which is not observable by endeavouring to substitute in mechanics a mean acceleration with reference to the totality of the masses in the universe in place of an acceleration with reference to absolute space. But inertial resistance opposed to relative acceleration of distant masses presupposes action at a distance; and as the modern physicist does not believe that he may accept this action at a distance, he comes back once more, if he follows Mach, to the ether, which has to serve as medium for the effects of inertia. But this conception of the ether to which we are led by Mach’s way of thinking differs essentially from the ether as conceived by Newton, by Fresnel, and by Lorentz. Mach’s ether not only conditions the behaviour of inert masses, but is also conditioned in its state by them.

Mach’s idea finds its full development in the ether of the general theory of relativity. According to this theory the metrical qualities of the continuum of space-time differ in the environment of different points of space-time, and are partly conditioned by the matter existing outside of the territory under consideration. This space-time variability of the reciprocal relations of the standards of space and time, or, perhaps, the recognition of the fact that “empty space” in its physical relation is neither homogeneous nor isotropic, compelling us to describe its state by ten functions (the gravitation potentials gμν), has, I think, finally disposed of the view that space is physically empty. But therewith the conception of the ether has again acquired an intelligible content, although this content differs widely from that of the ether of the mechanical undulatory theory of light. The ether of the general theory of relativity is a medium which is itself devoid of all mechanical and kinematical qualities, but helps to determine mechanical (and electromagnetic) events.

What is fundamentally new in the ether of the general theory of relativity as opposed to the ether of Lorentz consists in this, that the state of the former is at every place determined by connections with the matter and the state of the ether in neighbouring places, which are amenable to law in the form of differential equations; whereas the state of the Lorentzian ether in the absence of electromagnetic fields is conditioned by nothing outside itself, and is everywhere the same. The ether of the general theory of relativity is transmuted conceptually into the ether of Lorentz if we substitute constants for the functions of space which describe the former, disregarding the causes which condition its state. Thus we may also say, I think, that the ether of the general theory of relativity is the outcome of the Lorentzian ether, through relativation.

As to the part which the new ether is to play in the physics of the future we are not yet clear. We know that it determines the metrical relations in the space-time continuum, e.g. the configurative possibilities of solid bodies as well as the gravitational fields; but we do not know whether it has an essential share in the structure of the electrical elementary particles constituting matter. Nor do we know whether it is only in the proximity of ponderable masses that its structure differs essentially from that of the Lorentzian ether; whether the geometry of spaces of cosmic extent is approximately Euclidean. But we can assert by reason of the relativistic equations of gravitation that there must be a departure from Euclidean relations, with spaces of cosmic order of magnitude, if there exists a positive mean density, no matter how small, of the matter in the universe. In this case the universe must of necessity be spatially unbounded and of finite magnitude, its magnitude being determined by the value of that mean density.

If we consider the gravitational field and the electromagnetic field from the standpoint of the ether hypothesis, we find a remarkable difference between the two. There can be no space nor any part of space without gravitational potentials; for these confer upon space its metrical qualities, without which it cannot be imagined at all. The existence of the gravitational field is inseparably bound up with the existence of space. On the other hand a part of space may very well be imagined without an electromagnetic field; thus in contrast with the gravitational field, the electromagnetic field seems to be only secondarily linked to the ether, the formal nature of the electromagnetic field being as yet in no way determined by that of gravitational ether. From the present state of theory it looks as if the electromagnetic field, as opposed to the gravitational field, rests upon an entirely new formal motif, as though nature might just as well have endowed the gravitational ether with fields of quite another type, for example, with fields of a scalar potential, instead of fields of the electromagnetic type.

Since according to our present conceptions the elementary particles of matter are also, in their essence, nothing else than condensations of the electromagnetic field, our present view of the universe presents two realities which are completely separated from each other conceptually, although connected causally, namely, gravitational ether and electromagnetic field, or — as they might also be called — space and matter.

Of course it would be a great advance if we could succeed in comprehending the gravitational field and the electromagnetic field together as one unified conformation. Then for the first time the epoch of theoretical physics founded by Faraday and Maxwell would reach a satisfactory conclusion. The contrast between ether and matter would fade away, and, through the general theory of relativity, the whole of physics would become a complete system of thought, like geometry, kinematics, and the theory of gravitation. An exceedingly ingenious attempt in this direction has been made by the mathematician H. Weyl; but I do not believe that his theory will hold its ground in relation to reality. Further, in contemplating the immediate future of theoretical physics we ought not unconditionally to reject the possibility that the facts comprised in the quantum theory may set bounds to the field theory beyond which it cannot pass.

Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.”
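Einstein’s earlier claim in the address, that a positive mean density of matter, no matter how small, fixes a finite magnitude for the universe, can be made concrete with his 1917 static model. The formulas below are the standard ones and are not part of the address itself:

```latex
% Einstein's static, spatially closed universe: the radius R and
% total volume V are fixed by the mean matter density \rho.
R = \frac{c}{\sqrt{4\pi G \rho}}\,, \qquad V = 2\pi^{2} R^{3}
```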

Scientific Theoretical Physicists

Physics author A. Zee is a Permanent Member of the Institute for Theoretical Physics and Professor of Theoretical Physics at the University of California, Santa Barbara. He was invited to write the introduction to the new edition of Feynman’s classic book on quantum electrodynamics, QED: The Strange Theory of Light and Matter.

A quote from the introduction:

“Theoretical physicists are a notoriously pragmatic lot. They will use whichever method is the easiest. There is none of the mathematicians’ petulant insistence on rigor and proof. Whatever works, man!

Given this attitude, you may ask, which of the three formalisms, Schrödinger, Heisenberg, and Dirac-Feynman, is the easiest? The answer depends on the problem. In treating atoms for example, as the master himself admits on page 100, the Feynman diagrams “for these atoms would involve so many straight and wiggly lines and they’d be a complete mess!”

The Schrödinger formalism is much easier by a long shot and that is what physicists use. In fact, for most “practical” problems the path integral formalism is almost hopelessly involved, and in some cases downright impossible. I once even asked Feynman about one of these apparently impossible cases and he had no answer. Yet beginning students using the Schrödinger formalism easily solve these apparently impossible cases!

Thus, which formalism is best really depends on the physics problem, so that theoretical physicists in one field, atomic physics for example, might favor one formalism, while those in another, high energy physics for example, might prefer another formalism.

Logically then, it may even happen that, as a given field evolves and develops, one formalism may emerge as more convenient than another.” – end quote

 

The Responsibly Imaginable

To Possibly Be

Imaginable    So Responsibly

Creatively See

Observational    The Reality

Hope of Theory

Occupational   A Visionary

Energize Plea

Survival         With Planetary

Space Faring

Quantum    LENR

Energy

 

gbgoble2013

FARING : intransitive verb

1. To get along

2. To go or happen

3. To travel; go.

4. To dine; eat.

Middle English faren, from Old English faran; akin to Old High German faran to go, Latin portare to carry, Greek peran to pass through, poros passage, journey.

First Known Use: before 12th century

 

Letter to Nature on Martin Fleischmann released

On August 3, 2012 Dr. Martin Fleischmann, co-discoverer of cold fusion, passed away in his home after a long illness.

Obituaries produced by mainstream news outlets were gross distortions of a career that exemplified intellectual honesty and integrity. The science journal Nature was but one publication that mischaracterized Fleischmann’s work: author Philip Ball wrote of cold fusion as a “pathological science” and of the “blot” it left on Fleischmann’s career.

Fortunately, Dr. Brian Josephson, a Cambridge University professor and Nobel laureate, responded to Nature’s portrayal with a letter published in Nature Correspondence. Because of licensing arrangements, the text has only recently become available to non-subscribers, and is reproduced here.

Here is Brian Josephson’s letter to Nature magazine:

Cold fusion: Fleischmann denied due credit
Brian D. Josephson

From Nature 490, 37 (04 October 2012)
doi:10.1038/490037c
Original online publication at nature.com, 03 October 2012
Philip Ball’s obituary of Martin Fleischmann (Nature 489, 34; 2012), like many others, ignores the experimental evidence contradicting the view that cold fusion is ‘pathological science’ (see www.lenr.org). I gave an alternative perspective in my obituary of Fleischmann in The Guardian (see go.nature.com/rzukfz), describing what I believe to be the true nature of what Ball calls a “Shakespearean tragedy”.

The situation at the time of the announcement of cold fusion was confused because of errors in the nuclear measurements (neither Fleischmann nor his co-worker Stanley Pons had expertise in this area) and because of the difficulty researchers had with replication. Such problems are not unusual in materials science. Some were able, I contend, to get the experiment to work (for example, M. C. H. McKubre et al. J. Electroanal. Chem. 368, 55–56; 1994; E. Storms and C. L. Talcott Fusion Technol. 17, 680; 1990) and, in my view, to confirm both excess heat and nuclear products.

Skepticism also arose because the amount of nuclear radiation observed was very low compared with that expected from the claimed levels of excess heat. But it could be argued that the experiments never excluded the possibility that the liberated energy might be taken up directly by the metal lattice within which the hydrogen molecules were absorbed.

In my opinion, none of this would have mattered had journal editors not responded to this skepticism, or to emotive condemnation of the experimenters, by setting an unusually high bar for publication of papers on cold fusion. This meant that most scientists were denied a view of the accumulating positive evidence.

The result? Fleischmann was effectively denied the credit due to him, and doomed to become the tragic figure in Ball’s account.

For more, see Brian Josephson’s Link of the Day archive.

Related Links

New energy solution from Nobel laureate ignored at NY Times April 7, 2013

Brian Josephson safeguards historic contribution of Martin Fleischmann October 6, 2012

Martin Fleischmann leaves brilliant legacy of courage in pursuit of truth August 4, 2012

‘Pathological Science’ is not Scientific Misconduct (nor is it pathological) by Henry H. Bauer

The term pathological science was first coined by chemist and Nobel laureate Irving Langmuir during a speech in 1953 when he described scientific investigations done on the premise of “wishful thinking”.

From a transcript of Langmuir’s almost-lost talk:

These are cases where there is no dishonesty involved but where people are tricked into false results by a lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking or threshold interactions.

Langmuir’s criteria, or ‘symptoms’, for pathological science are:

1. The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
2. The effect is of a magnitude that remains close to the limit of detectability; or, many measurements are necessary because of the very low statistical significance of the results.
3. Claims of great accuracy.
4. Fantastic theories contrary to experience.
5. Criticisms are met by ad hoc excuses thought up on the spur of the moment.
6. Ratio of supporters to critics rises up to somewhere near 50% and then falls gradually to oblivion.

This is important to the field of condensed matter nuclear science (CMNS) as cold fusion has been labeled “pathological science” and pushed outside of mainstream research. The result has been a lack of funding, the absence of a coordinated research plan, a decreased intellectual pool working to solve the problem, and the continued use of hydrocarbons and dirty nuclear power with its associated horrors.

In 2006, author Jed Rothwell wrote a Wikipedia article rebutting each of Langmuir’s criteria, case by case, as applied to cold fusion science. Wikipedia has campaigned hard to remove positive information on the topic, and that article is now deleted, but you can read it here, archived on Ludwik Kowalski‘s page.

Virginia Tech Professor of Chemistry Henry H. Bauer looked at the criteria themselves.

In his essay ‘Pathological Science’ is not Scientific Misconduct (nor is it pathological) published in HYLE–International Journal for Philosophy of Chemistry, he writes that “they do not provide useful criteria for distinguishing bad science from good science (Bauer 1984, pp. 145-46; Physics Today, 1990 a,b)” and that they “are no more valid than the many other suggestions as to how to distinguish good science from pseudo-science (Bauer 1984, chapter 8; Laudan 1983)” noting that “many praised pieces of research satisfy one or more of Langmuir’s criteria for pathology.”

For the case of cold fusion, Bauer writes:

The most recent major outcry over ‘pathological science’ was occasioned by ‘cold fusion’. A number of books about this episode have appeared, all of them quite strongly pro- or con-. This author, who himself worked in electrochemistry from the early 1950s to the late 1970s, has discussed the merits and defects of these books in several reviews (Bauer 1991; 1992b, c; 1995).

In 1989, Martin Fleischmann and Stanley Pons announced at a press conference at the University of Utah that they had brought about nuclear fusion at room temperature in an electrochemical cell: they had measured heat production too great to explain by other than nuclear processes.

Many physicists dismissed the claims as impossible from the outset, yet confirmations were being announced from all over the world. Within months, however, many of these were withdrawn; other laboratories reported failures to replicate the effect; and a committee empaneled by the US Department of Energy concluded that there was nothing worth pursuing in these claims. Within a year or two, those working on cold fusion had become separated from mainstream scientific communities, holding separate conferences and often publishing in other than mainstream publications. However, at the present time, a dozen years after the initial announcement, a considerable number of properly qualified people continue to believe the chief claim, that nuclear reactions can be achieved at ambient temperatures under electrochemical conditions (Beaudette 2000).

What have Fleischmann and Pons been accused of that was ‘pathological’?

They had announced their discovery at a news conference and not in a peer-reviewed publication. They had failed to reveal all details of their procedures. The heat effect remained elusive: no one could set up the experiment and guarantee that excess heat would be observed; sometimes it was and sometimes not. They had performed incompetent measurements of nuclear products and then fudged the results. They had failed to understand that nuclear reactions would inevitably release radiation, and that the level of radiation corresponding to the heat claimed to have been generated would have been lethal. Nuclear theory in any case showed that fusion could not occur under such mild conditions; it required temperatures and pressures higher by many orders of magnitude, as in the interior of stars.

But of all those criticisms, only the one about fudging nuclear measurements can be sustained, and that does not bear on the issue of whether or not cold fusion is a real phenomenon.

Announcing results first at news conferences has become standard practice in hot fields, for example molecular biology and genetic engineering. It was routine during the initial years of excitement about high-temperature superconductors. Also in that field, some workers quite deliberately put misleading information into their publications, correcting them at the last moment only, in order to preserve secrecy (Felt & Nowotny 1992; Roy 1989).

Lack of replicability does not mean that a phenomenon is necessarily spurious. Semiconductors did not become transistors and microchips in the 1930s because the presence of then-unsuspected, then-undetectably-small amounts of impurities made the phenomena irreproducible, elusive. Certain effects of electromagnetic fields on living systems remained difficult to reproduce for a century or more (Bauer 2001a, pp. 125, 132-33). Perhaps only electrochemists would recognize how vast is the number of experimental variables that might affect reproducibility in cold-fusion systems: almost innumerable variations in the physical characteristics of the electrodes and in the electrical regimen as well as all sorts of possible contaminants, conceivably active at levels that might be virtually impossible to detect by other means than their interference with the looked-for effect.

As to theoretical possibility, “Although cold fusion was, in terms of ‘ordinary’ physics, absurd, it was not obviously so; it contravened no fundamental laws of nature” (Lindley 1990, p. 376). Physics Nobelist Julian Schwinger was among those who proposed explanations for how cold fusion might occur. It may be well to recall in this connection that lasers and masers were also regarded as impossible before their discovery, and indeed by some eminent people even after they had been demonstrated (Townes 1999).

Once again, as with N-rays and polywater, it turns out that nothing occurred that could rightly be called pathological. The leading cold-fusion researchers went at their work just as they had at the other research that had established their good reputation, in Fleischmann’s case sufficiently distinguished as to warrant a Fellowship of the Royal Society. Fleischmann had always been known as an adventurous thinker, the sort of person – like the astrophysicist Thomas Gold (1999) – whose suggestions are always worth attending to even when they do not work out. His competence was beyond question, and it was not at all uncharacteristic for him to follow apparently far-out hunches. Sometimes they had paid off for him. Moreover, he had ample grounds from earlier work to look for unusual phenomena when electrolyzing heavy water at palladium electrodes, and he had quite rational grounds for speculating that nuclear reactions might proceed in the solid state under quite different conditions than in plasmas (Beaudette 2000, chap. 3).

The single criticism that is not to be gainsaid concerns how Fleischmann and Pons altered the reported results from initial attempts to measure radiation from their cells. But there is more to be noted here about such apparent instances of scientific misconduct. Fleischmann and Pons were tempted into these actions because they had tried to make measurements without properly learning all the ins and outs of the technique: they thought they could measure radiation by just taking a radiation meter and placing it near their cell. In point of fact, a great deal needs to be known about circumstances that can affect the functioning of such instruments (temperature, for example) and about how to eliminate background signals, as well as about how to interpret the measurements. In this, Fleischmann and Pons were falling into the same trap as many of their critics who, without experience of electrochemistry, thought they could connect together some cells and batteries and palladium electrodes and test within days or weeks what the experienced electrochemists had struggled for several years to bring about.

The transfer of expertise across disciplinary boundaries affords great challenges, and this instance illustrates that a superficial view might label as misconduct what is basically a natural result of failing to recognize how intricately specialized are the approaches of every sort of research. Much of the fuss about cold fusion is understandable as an argument between electrochemists and physicists as to whether empirical data from electrochemical experiments is to be more believed or less believed than apparently opposing nuclear theory (Beaudette 2000). To electrochemists it may seem perverse, possibly even scientific misconduct, to rule out of the realm of possibility competently obtained results because some theory in physics pronounces them impossible. To nuclear physicists, it may seem incompetence verging on scientific misconduct for electrochemists to invoke nuclear explanations just because they cannot understand where the heat in their experiments comes from.

As in the case of N-rays, one can plausibly level charges of scientific misconduct against those who denounced the cold-fusion studies. A journalist baselessly charged a graduate student with falsifying evidence of the production of tritium, and this charge was published in Nature. The legitimacy of work by a distinguished Professor at Texas A & M University was questioned in two separate, long-drawn-out investigations that ultimately found him innocent of any wrongdoing. One participant in the cold-fusion controversy suggested that critics were guilty of “pathological skepticism” (Accountability in Research 2000). —Henry H. Bauer, ‘Pathological Science’ is not Scientific Misconduct (nor is it pathological)

Read The Pseudoscientists of the APS by Eugene Mallove and Jed Rothwell and decide for yourself if the shoe fits the “pathological skeptics”.

Conventionally-minded men routinely used the term “pathological science” to describe this very real and active research: John R. Huizenga, co-chair of the Energy Research Advisory Board panel that “investigated” the phenomenon in 1989 but left the positive results obtained by Navy scientists out of its final report, and American Physical Society spokesman Robert Park, who lobbied vociferously against any mention of the research in mainstream scientific and political circles. Applying the term to cold fusion was an early effort to discredit experimental data that could not be explained within the prevailing theoretical models.

“But the mainstream is always antagonistic to highly novel discoveries or suggestions, even when they become acceptable later: any suggestion that paradigms need to be changed is routinely resisted (Barber 1961), sometimes by effectively ignoring the claims (Stent 1972)”, writes Bauer.

From Henry H. Bauer Ethics in Science
“The most striking potential discoveries bring about revolutionary paradigm shifts.”

“The accepted rules and procedures for doing normal science are not adequate to bring about potentially revolutionary science: as is well known, hard cases make bad laws.”

Bauer does not endorse the work of Drs. Martin Fleischmann and Stanley Pons. He does make the case that cold fusion does not deserve the moniker “pathological science” and that Langmuir’s criteria are themselves suspect. Based on the historical record of discovery, he reminds us that it is normal for conventional minds to resist change.

The only difference now is that we are at the end of a several-thousand-year expansion: over 7 billion people cover the planet, all needing water, food, and energy. Ecological systems are strained, economic relationships are broken, and energy resources are becoming difficult to access, expensive to process, and dangerous to use. We have a solution for a clean-energy world waiting to be developed; to wait for history to swing around is no longer an option.

“Pathological science” is a distraction we can tolerate no more.

Cold Fusion Now!

Related Links

Ethics in Science by Professor Henry H. Bauer

The Believers – the Movie

A local association devoted to Product Management, which is considering showing the above film at its final monthly meeting in June, asked me if it would be appropriate material within its theme of supporting the cause of Product Management. Along with providing a private showing of the film to the Board of Directors, I also prepared the following synopsis of the movie for possible distribution to members.

__________________________________________________________

Synopsis

Released in October 2012, The Believers is a documentary that tracks the dark side of the March 1989 announcement at the University of Utah that two respected chemists had solved the world’s energy problems. They had discovered “Cold Fusion”. Within days Martin Fleischmann and Stanley Pons were on the covers of numerous magazines worldwide. But three short months later their science had been discredited and their reputations ruined. The established community of physicists refused to accept the alleged experimental results. Retreating to France, the two pioneers pursued their research for another five years before retiring into twilight. Meanwhile, “Cold Fusion” has become synonymous with “pathological science” within the general scientific community.

There could not be a better modern example of a combination of hubris and bad public relations for a product launch. Understanding what went wrong makes this story worthy to be the central focus of courses in marketing, or more precisely bad marketing, for years to come.

Meanwhile, twenty-three years later, laboring under the disdain of their peers, a small group of faithful scientists still persists in trying to resolve the unanswered mystery of where the incontrovertible, unexplained heat of the Fleischmann & Pons Effect comes from. Once solved, this scientific breakthrough may yet become the salvation of civilization, providing an unending supply of low-cost energy.

The movie is not, however, so much about the science as it is about the tragedy of the personal lives of the two original discoverers. It is also about the tragedy of the rejection by established institutions of the opportunity to pursue a discovery of unparalleled importance. This assumes that its riddles can be solved and the science applied to produce its potentially vast technological rewards. But the Believers themselves are not organized. Without the presence of overseeing management and meaningful financial resources, they are all struggling with their individual theories and personal myopic experiments leaving little hope that a breakthrough is imminent.

No one can leave the theater without asking themselves: “How could this have happened?”
__________________________________________________

The movie, a documentary, is definitely focused on the tragic impact of the unfolding scenario on the lives of the two original scientists. In fact, the movie is rather dark. Martin Fleischmann is shown in his declining years suffering from Parkinson’s disease. He is also caught in moments of reflection that are quite poignant. This is particularly true when he ponders how to answer the question: “What happened between you and Stanley Pons?” He never does answer that question properly, but the look in his eyes as he stares off into the distance searching back in his memory is telling. We see the panorama of opportunities lost and dreams dashed. In Martin’s stare we can imagine the closeness that must once have existed between these two collaborators, and the gulf that now separates them.

Fleischmann, until his death in August 2012, was living in England, and Pons, as of 2013, is living in the South of France. In one short scene, while being interviewed with his wife, Pons seriously contemplates giving up his American citizenship.

But the movie is not just about these two individuals. It is also about the tragedy of the lost opportunity to resolve the challenge of understanding the “Fleischmann & Pons effect”. And it is indeed remarkable that in our present times, with all the tools and acquired knowledge available, those still researching in the Cold Fusion field have not determined how to reliably produce heat at a level that will boil a cup of coffee.

No doubt the scientists assembling for the next annual world conference of the Cold Fusion community, ICCF-18, to be held in July 2013 on the campus of the University of Missouri in Columbia, Missouri, will wince at the description that they are all “struggling with their individual theories and personal myopic experiments”. Unfortunately, I cannot help but lean towards that colorful image in my personal struggle to answer the question: “How could this have happened?” Why has there been no breakthrough even after 24 years?

It may well be that the puzzle is indeed complex. But there is a vast amount of data available for those who wish to accept a challenge. Reported experiments in the field showing the generation of unexplained excess heat must by now number in the thousands. Half a dozen potentially practical theories already exist, and a dozen more less-credible concepts regularly float up into the air. But surely there is enough information assembled now for some genius to hit upon the solution. What is the source of the excess energy – heat – that flows from Cold Fusion experiments?

There is an analogy in history. In 1898 Pierre and Marie Curie isolated for the first time a quantity of radium. To their great surprise, 1 gram of radium produced heat, apparently endlessly! (Ra-226 half-life: 1601 years; roughly a megajoule of heat per gram per year from its own alpha decay when pure.) It was not until 1938–39 that Lise Meitner, an Austrian Jew exiled to Sweden by Nazi politics, solved the source of the heat, based on experimental results for uranium reported to her by her former associate, Otto Hahn. She identified the source of this heat as nuclear fission and the mass difference of the nuclei. The elapsed time was 40 years. But that was in the first half of the 20th century. Surely, in the decades spanning the 20th and 21st centuries, a similar unexplained source of energy, albeit intermittent and unreliable, should have been explained by now.
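The decay-heat arithmetic behind the Curies’ observation can be sketched directly from the half-life. Below is a minimal estimate in Python, assuming standard textbook values for Ra-226 (atomic mass 226, roughly 4.87 MeV released per alpha decay) and counting only radium’s own decay, not that of its daughter products:

```python
import math

AVOGADRO = 6.022e23        # atoms per mole
MEV_TO_J = 1.602e-13       # joules per MeV
SECONDS_PER_YEAR = 3.156e7

def decay_heat_watts_per_gram(half_life_years, atomic_mass, mev_per_decay):
    """Steady heat output (watts per gram) of a pure radioactive isotope."""
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)  # per second
    atoms_per_gram = AVOGADRO / atomic_mass
    activity = decay_constant * atoms_per_gram       # decays per second per gram
    return activity * mev_per_decay * MEV_TO_J

# Ra-226: half-life 1601 years, ~4.87 MeV released per alpha decay
watts = decay_heat_watts_per_gram(1601, 226, 4.87)
joules_per_year = watts * SECONDS_PER_YEAR
print(f"{watts:.3f} W/g, {joules_per_year:.2e} J/g/year")
```

This gives on the order of 0.03 watts per gram, or close to a megajoule per gram per year; including the short-lived daughter products in equilibrium raises the figure several-fold. Either way, the energy vastly exceeds anything chemistry could supply over 1601 years, which is why radium’s heat demanded a new explanation.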

The movie The Believers is not just about Martin Fleischmann and Stanley Pons. It is a story about the failure of the scientific community to resolve the puzzle that these two electrochemists presented to the world. The movie depicts the toxic condemnations that descended on the heads of these two gentlemen, primarily because many laboratories could not duplicate their results, and additionally because the claim that nuclear fusion was the source of the energy was incompatible with fusion as understood by the physicists. The physicists believed that if fusion were occurring, then there had to be an associated emission of high-energy particles. In the case of Cold Fusion, there was no substantial demonstration of energetic particle emissions associated with the process. The fact that there was an unexplained supply of heat, a phenomenon that would otherwise violate one of the most fundamental laws of thermodynamics, the conservation of energy, was simply ignored. This is a travesty of the highest order.

The movie ends with a tone of despair expressed by one of the eminent theoreticians laboring in the field: Dr. Peter Hagelstein of the Massachusetts Institute of Technology. Peter, with the demeanor of a defeated man but probably intending ironic humor, speculates that when the present generation of Believers has passed on (most of them are elderly, having adopted this as their career back in 1989–1990), the field will fall into neglect, only to be rediscovered at some future date by scientific archaeologists. These are the words of a man holding tenure at MIT who nevertheless has no grants to spend and no students to do experiments. That is the burden that these Believers labor under.

While these observations are in keeping with the tone of the movie, which explores a theme that is dark and depressing, there is hope. So many favorable experimental results have now accumulated that a breakthrough in theoretical understanding of this field must occur in the near future. At least that will be the hope of all those attending ICCF-18 this coming July. In expressing such expectations and hopes myself, I have disclosed my own membership in a group that I am sure will one day be honored for its loyalty to the cause: The Believers.