Interview with Yuri Bazhutov by Peter Gluck

This is a re-post of an article written by Dr. Peter Gluck of Ego Out in Cluj, Romania.

The original article can be found here.

SHORT INTERVIEW WITH YU. N. BAZHUTOV by Peter Gluck

I had the privilege of asking a few preliminary questions of the leader of the Russian LENR researchers, Yuri Nikolaevich Bazhutov. They call the field Cold Nuclear Transmutation, and I think this name is more realistic than Cold Fusion.

Yuri Bazhutov is an ’89-er cold fusionist (excuse me), a well-known member of our community, a reputed author with 15 papers (1982–2014) in the LENR-CANR Library, an organizer of and participant at our meetings, a CNT strategist, a personality.

Q
It is encouraging to see, and easy to observe, how closely and seriously the developments in CNT/LENR are followed, discussed, and theorized in Russia. What is the strategic thinking behind this, and what are the main targets?

A
After more than 25 years of theoretical and experimental pilot studies in Cold Nuclear Transmutation in Russia, we have arrived at a stage where we are thinking about patents, demonstration devices, and the search for investors to realize industrial devices. We are at a different, higher level now.

Q
Your very personal opinion: how do you see the scientific aspects? Can these new developments be explained theoretically, and what do you and your collaborators intend to do on the experimental side?

In essence, is it new science, or new application(s) of already known science?

A
As a co-author of the Model of the Erzion Catalysis (MEC), I believe that it explains the nature of CNT. All the experiments I have made over 25 years confirm this model.

MEC is built on orthodox representations of the physics of elementary particles, including, as its main part, Quantum Chromodynamics (QCD), and therefore it is also a new section of Nuclear Physics.

Q
The Lugano experiment, despite its over-complicated thermometric calorimetry, is a harbinger of a really wonderful, powerful energy source: megawatt-hours from grams. Unfortunately, the testers were shocked by the analytical results.
What do you think about those unexpected isotopic shifts and the dynamic processes that make them possible?

A
Starting with the first experiments made by Rossi and Focardi, up to the very Hot Cat tested in Lugano, MEC generally gives fine explanations, and I have published about this in the RCCNT&BL Proceedings, in Russian inventing magazines (No. 1, 2012), and in the ISCMNS Journal (No. 13, 2014). However, I believe that our version of a Russian E-Cat, based on Plasma Electrolysis, offers a much better perspective: a heat generator close to realization while still having a very high specific output power (megawatt-hours from grams of common water).

Q
On December 25, 2014, at a CNT seminar, Alexander Parkhomov and you presented an experiment confirming the Lugano experiment, using a realistic, Gordian-knot-cutting simple calorimetry inspired by your experience. A very positive event.

However, after more than 50 years in and around research, I have learned the cruel "1 = 0" rule: a single experiment cannot generate absolute certainty. Neither Lugano nor Parkhomov; so I ask: was the experiment repeated in-house, and when will the new report be published?

A
Parkhomov is now working on lengthening the time of continuous operation of a cell, and then on performing atomic-spectroscopic and mass-spectroscopic analyses of the changes in the chemical structure and the isotopic composition of the fuel.

Peter Gluck – This was just a first discussion; I hope to continue. Bazhutov added: see and read more at the link below, and I have translated the paper.

http://vpk.name/forum/s188.html
The revolution in energetics has been accomplished! The place of organic fuels is being taken by Cold Nuclear Transmutation.
By A.A. Rukhadze, Yu.N. Bazhutov, A.B. Karabut, V.G. Koltashov

The era of oil burning has come to its end. The revolution in CNT (Cold Nuclear Transmutation) opens the way toward a new economic transformation, to the triumph of robotics, to cheaper production, and to a transformation of the world economy in which Russia should not be left at a disadvantage.

On October 8, 2014, the report of an independent group regarding the testing of the Hot Cat heat generator created by Andrea Rossi was published on the prestigious Los Alamos electronic archive arXiv.org. Six well-known scientists from Italy and Sweden tested for 32 days the functioning of the generator, which allows cheap energy to be obtained on the basis of a new scientific principle.

In the absence of the author of the invention (A. Rossi), all possible parameters of the "energetic cat" were measured. After that, for half a year, the scientists processed the results in order to understand them. Their verdict was unequivocal: the Rossi generator works and produces an incredible amount of energy; the energy density is millions of times greater than from burning the same quantity of any kind of organic fuel, and the output is 3.7 times greater than the input electric energy. At the same time, the isotopic composition of the fuel materials changed.
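To make the arithmetic behind these claims concrete, here is a minimal sketch of the coefficient-of-performance (COP) calculation implied by the 3.7 figure. The input energy value below is an assumption chosen purely for illustration; it is not a number from the Lugano report.

```python
# Illustrative COP arithmetic for the reported "3.7 times the input
# electric energy" claim. The input figure is assumed, not measured.

def cop(output_energy_kwh: float, input_energy_kwh: float) -> float:
    """Coefficient of performance: thermal output / electric input."""
    return output_energy_kwh / input_energy_kwh

input_kwh = 1000.0            # assumed electric input over the test run
output_kwh = 3.7 * input_kwh  # output implied by the reported COP of 3.7
excess_kwh = output_kwh - input_kwh

print(cop(output_kwh, input_kwh))  # 3.7
print(excess_kwh)                  # 2700.0 kWh of excess heat
```

Whatever the absolute input was, the report's claim is that the excess heat is 2.7 times the electric energy supplied.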

No nuclear radiation from the reactor could be observed during the test.
The first demonstration of a working E-Cat prototype was performed as early as January 14, 2011, in Bologna, at the Physics Department of the University. During this demo, scientists and journalists saw a functioning reactor with an output power of 12.5 kW. It works on the principle of cold nuclear transmutation, according to the authors, Andrea Rossi and Sergio Focardi.

Sergio Focardi, professor at the University of Bologna, had investigated the mechanism of hydrogen-nickel interaction even 20 years earlier, in cooperation with Francesco Piantelli, professor at the University of Siena. These studies were done in the framework of a new physical phenomenon, cold fusion, discovered by Martin Fleischmann and Stanley Pons in 1989.

On October 28, 2011, Andrea Rossi showed his first 1-megawatt reactor, sold to his first customer. Engineers and scientists were present, verifying how it works. Due to some imperfections, the reactor produced 470 kW while working for 5.5 hours in self-sustaining mode. 100 reactor modules were used, each with 3 branches: a whole complex of 300 reaction chambers.
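As a quick sanity check on the figures in this paragraph (all taken from the text above), the chamber count and the total heat delivered can be worked out directly:

```python
# Sanity-check the numbers reported for the October 28, 2011 demo:
# 100 modules x 3 branches, 470 kW sustained for 5.5 hours.

modules = 100
branches_per_module = 3
chambers = modules * branches_per_module   # total reaction chambers

power_kw = 470.0    # reported output power in self-sustaining mode
hours = 5.5         # reported run time
energy_kwh = power_kw * hours              # total heat delivered
per_chamber_kw = power_kw / chambers       # average load per chamber

print(chambers)                  # 300
print(energy_kwh)                # 2585.0 (kWh)
print(round(per_chamber_kw, 2))  # 1.57 (kW per chamber)
```

So the demo, as reported, delivered about 2.6 MWh of heat, with each chamber averaging roughly 1.6 kW.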

Orthodox physicists have, by and large, again ignored Rossi. According to all the canons of physics, something like this, a nuclear boiler on the table, cannot exist! An energy amplification of almost 10 times is pure nonsense! Only a few "heretics" of science, working on cold fusion (CF), have supported him.

Rossi's behavior was unpredictable, but not so much that he could be called a rogue and a charlatan, as the orthodox have accused him of being. He has not asked anybody for money; on the contrary, he sold his house to be able to start this research. He has not chased popularity in the press; he refused interviews and worked more with businessmen than with journalists.

Rossi also has not tried to open a dialogue with the scientists, the luminaries of nuclear physics: "The best proof of my truth will be the commercial device on the market," he says.

The attitude toward this inventor gradually changed when, after a dozen conferences, nobody could show that he cheats by secretly feeding electricity into the device.

After that, NASA took Rossi under its protection. Rossi could not refuse. It is clear he is safer in the US than in Italy. But NASA is only the visible part of the wall built by the USA around Rossi and his invention.

It can be asserted that the US is trying to obtain complete control of the new energy sources; whoever owns them will be the clear leader in technology and will rid itself of oil-and-gas dependence. (See "Signals at the APEC Summit Show Big Changes Ahead":
http://ireport.cnn.com/docs/DOC-1187686?ref=feeds%2Flatest)

The US hopes not only to manage the flow of finance but also, on the basis of the new technologies, having almost free, clean, limitless energy, to carry out an export-oriented industrialization.

Other countries will fall behind if they do not also try to change. For this reason, in India, after the APEC summit where this issue was discussed (see the CNN link), governmental actions were initiated to finance the development of new energy; see: http://www.e-catworld.com/2014/11/17/indian-government-urged-to-revive-cold-fusion-research-program/

It is safe to say that Rossi's invention cannot be kept under lock and key for long. In dozens of laboratories worldwide, scientists are trying to guess the secret of the "silent Italian," to find his catalyst, and to develop a theory of the process. Meanwhile, preparations are being made to bring the generators to market. If the transition in industry, trade, and transport that raises humankind to a new level of automation requires hundreds of thousands of "Cold Cats" (actually they are warm or hot, N.T.), the launch of these new industries will drive the oil industry into the abyss in a thousand ways, very bad for the economies that depend on hydrocarbons. The futility of investing in oil and of purchasing it long-term will become obvious.

In the near future we can expect a rapid development of Cold Nuclear Transmutation (a new and more correct name than Cold Nuclear Fusion) in both theory and experiment; great investments will lead to breakthroughs in the related fields of science and technology. The U.S. already counts on the revolution in the energy sector and may soon collect its winnings. Civilization is near a new era, and we know in advance that it will be grandiose.

Russia is still among the leaders in Cold Nuclear Transmutation research, even in the absence of targeted funding, thanks to the still-strong post-Soviet educational, theoretical, and experimental research base of its enthusiasts. The country has a Coordinating Council on the issue of Cold Nuclear Transmutation, and it holds annual conferences and monthly seminars, in spite of the strong resistance of orthodox-minded opponents. By the 25th anniversary of the discovery of CNF by Fleischmann and Pons, the Russian researchers in Cold Nuclear Transmutation had presented original theoretical models for CNT and more than 500 publications. Based on the principles of CNT, dozens of patents for new energy sources have been created. Some of the researchers have been able to obtain small funding; others, unfortunately, were forced to work abroad.

The "war of sanctions" of 2014 has shown that the US sees Russia as a threat to its dominance in Europe and to its world hegemony. Rossi's success gives the US a chance to retain the role of the global financial and industrial center, undermining the positions of the other strong players. But the long-term decline of prices in the oil market will not necessarily mean a catastrophe for the Russian economy. With a favorable attitude of the state toward science, we will be able to recover the leading position we held in the 1950s and 1960s. We will be able to participate in the new industrial revolution and move forward to end our humiliating position on the raw-materials periphery of the world.

A.A. Rukhadze
Chairman of the Coordination Council of the SFA on the problem of Cold Nuclear Transmutation,
Academy of Natural Sciences and the National Academy of Sciences of the Republic of Georgia, Honored Scientist of Russia, Doctor of Science, prof., Institute of General Physics “AM Prokhorov”

Yu. N. Bazhutov, member of the International Executive Committee on the issue of Cold Nuclear Transmutation, organizer of the 1st–21st Russian Conferences on Cold Nuclear Transmutation and of the 13th International Conference on Cold Nuclear Transmutation (Dagomys, 2007), Deputy President of the Cold Nuclear Transmutation Committee (RFO), PhD, MN, IZMIRAN

A. B. Karabut, winner of the International "Giuliano Preparata" Award in Cold Nuclear Transmutation for 2007,
Laureate of the State Prize of the USSR for 1982. Member of the Coordination Council on Cold Nuclear Transmutation (RFO), PhD, MN, SNA "Luch"

V. G. Koltashov, head of the Center for Economic Research Institute of Globalization and Social Movements, Ph.D.

Translated by Peter Gluck, Jan 13, 2015

END RE-POST

Related Links


Russian scientist replicates Hot Cat test: “produces more energy than it consumes”

Aether the Theory of Relativity and LENR Energy

“We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether.” – Albert Einstein

 

 

Way Back

In 1989, the popular yet controversial Cold Fusion "Fleischmann and Pons Effect" challenged the notions of the theoretical physicists of the time. Newly established arts today, like cold fusion/LENR (low energy nuclear reaction) science, continue to do so.

Science progresses by challenging established notions that cannot properly hold observed phenomena within a theoretical framework. Through this process of researching the unknown, new scientific arts become established. Then theoretical physicists have a whole new playground in which to make predictions, as well as an arena in which to create new physical theories and grandiose mathematical models of physics, such as the likes of Einstein's.

Many modern arts of science weren’t firmly established when early cold fusion researchers started college. A few of these arts are notable in the LENR energy arena today. Nano Engineering and Science, with the likes of carbon nanotubes, allows for new methods of constructing the required fractal geometries within the low energy nuclear reactive lattice. Quantum Physics and Engineering also play an important role with a deeper understanding of the atom. This ever-growing field, understanding the actions in the subatomic realm, provides new glimpses into the inner workings of the low energy nuclear reactive environment. In this dynamic multidisciplinary field, LENR Sciences, both theory and engineering, are improving as we progress in the art.

During the early ’80s, one would venture to say, there were three or four dozen subatomic particles that we knew of; during Einstein’s time, perhaps even fewer. Now we are looking at well over a hundred and fifty of them. The list is mind-boggling to conceptualize, observe, and then finally comprehend. (That’s what we have open-minded experimental and theoretical scientists for.) The article “Not so Elementary, My Dear Electron” is an example. It takes us far from the “Standard Model” of my youth. The once “elementary” electron has been ‘split’ into three: a holon, a spinon, and an orbiton.

After reading that article, my preconceived grip on reality became quite unhinged. That night I had a dream, finding myself shrunk down, traveling the empty space within the low energy nuclear reactive environment. There, right before my eyes, an electron split into its three elements -WOW- One Went Flying OFF Into a Far Distancing Dimension

Then it Went Super Nova!!!               Lesson Learned

Watch what you read before nodding off into

The Aether of the Dreamland

My Heart Hopes That

We can ALL

Enjoy

 

Aether Science

Another art pertaining to the low energy nuclear environment is Aether Science – the science of the vacuum. The Aether, or ether, is that which fills “empty space”. “Space” is found in the outer reaches between planets and between stars and “Space” is found between atoms. There is more space than matter in the universe. More space between the atoms in molecules and more space between the subatomic particles of the atom than there is matter… yet space is not, in reality, truly empty. Read “Dark Energy Dark Matter” NASA

Quantum Science: Pushing the envelope and inviting us to explore the physical realities within the Aether. (links go to the U.S. DoE search engine) Research these sciences at the U.S Department of Energy – Office of Science website links: into Dark Energy (see 46 papers – year 2013), into Zero Point Energy (see 13 papers – year 2013) , into Vacuum Field (see 43 papers –  under ‘Energy’), into Gravity (see 103 papers – year 2013), into LENR (see 38 papers – under ‘Low Energy Nuclear Reaction’)

During an Address delivered on May 5th, 1920, at the University of Leyden

A theoretical physicist once said,

“As to the part which the new ether is to play in the physics of the future we are not yet clear. We know that it determines the metrical relations in the space-time continuum, e.g. the configurative possibilities of solid bodies as well as the gravitational fields; but we do not know whether it has an essential share in the structure of the electrical elementary particles constituting matter. Nor do we know whether it is only in the proximity of ponderable masses that its structure differs essentially from that of the Lorentzian ether; whether the geometry of spaces of cosmic extent is approximately Euclidean. But we can assert by reason of the relativistic equations of gravitation that there must be a departure from Euclidean relations, with spaces of cosmic order of magnitude, if there exists a positive mean density, no matter how small, of the matter in the universe. In this case the universe must of necessity be spatially unbounded and of finite magnitude, its magnitude being determined by the value of that mean density.

If we consider the gravitational field and the electromagnetic field from the standpoint of the ether hypothesis, we find a remarkable difference between the two. There can be no space nor any part of space without gravitational potentials; for these confer upon space its metrical qualities, without which it cannot be imagined at all. The existence of the gravitational field is inseparably bound up with the existence of space. On the other hand a part of space may very well be imagined without an electromagnetic field; thus in contrast with the gravitational field, the electromagnetic field seems to be only secondarily linked to the ether, the formal nature of the electromagnetic field being as yet in no way determined by that of gravitational ether. From the present state of theory it looks as if the electromagnetic field, as opposed to the gravitational field, rests upon an entirely new formal motif, as though nature might just as well have endowed the gravitational ether with fields of quite another type, for example, with fields of a scalar potential, instead of fields of the electromagnetic type.

Since according to our present conceptions the elementary particles of matter are also, in their essence, nothing else than condensations of the electromagnetic field, our present view of the universe presents two realities which are completely separated from each other conceptually, although connected causally, namely, gravitational ether and electromagnetic field, or — as they might also be called — space and matter.

Of course it would be a great advance if we could succeed in comprehending the gravitational field and the electromagnetic field together as one unified conformation. Then for the first time the epoch of theoretical physics founded by Faraday and Maxwell would reach a satisfactory conclusion. The contrast between ether and matter would fade away, and, through the general theory of relativity, the whole of physics would become a complete system of thought, like geometry, kinematics, and the theory of gravitation.”

Albert Einstein

What is Aether?

Robert B. Laughlin Nobel Laureate in Physics-Stanford University-The Ether

In contemporary theoretical physics: “It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed. The word ‘ether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. Relativity actually says nothing about the existence or nonexistence of matter pervading the universe, only that any such matter must have relativistic symmetry. It turns out that such matter exists. About the time relativity was becoming accepted, studies of radioactivity began showing that the empty vacuum of space had spectroscopic structure similar to that of ordinary quantum solids and fluids. Subsequent studies with large particle accelerators have now led us to understand that space is more like a piece of window glass than ideal Newtonian emptiness. It is filled with ‘stuff’ that is normally transparent but can be made visible by hitting it sufficiently hard to knock out a part. The modern concept of the vacuum of space, confirmed every day by experiment, is a relativistic ether. But we do not call it this because it is taboo.” Laughlin, Robert B. (2005). “A Different Universe: Reinventing Physics from the Bottom Down”  pp. 120–121.

“A Different Universe: Reinventing Physics from the Bottom Down”: A REVIEW by Jeremy Chunn

“Tired of the predictable ‘clockwork’ nature of the physical world as defined by Newtonian laws? Then you’ll find a friend in Robert B. Laughlin. He suspects the fact that Newtonian laws break down at quantum levels and fail to predict all phases between states is evidence the physical world is still highly mysterious.”

Paul Dirac wrote in 1951

“Physical knowledge has advanced much since 1905, notably by the arrival of quantum mechanics, and the situation [about the scientific plausibility of Aether] has again changed. If one examines the question in the light of present-day knowledge, one finds that the Aether is no longer ruled out by relativity, and good reasons can now be advanced for postulating an Aether. We have now the velocity at all points of space-time, playing a fundamental part in electrodynamics. It is natural to regard it as the velocity of some real physical thing. Thus with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an Aether”. “Is there an Aether?”, Nature 168 (1951), p. 906.

“Is there an Aether?”: abstract by Dirac, St. John’s College, Cambridge, Oct. 9, 1951

IN the last century, the idea of a universal and all-pervading æther was popular as a foundation on which to build the theory of electromagnetic phenomena. The situation was profoundly influenced in 1905 by Einstein’s discovery of the principle of relativity, leading to the requirement of a four-dimensional formulation of all natural laws. It was soon found that the existence of an æther could not be fitted in with relativity, and since relativity was well established, the æther was abandoned.

John Bell, interviewed by Paul Davies in “The Ghost in the Atom” 1986

Bell suggested that an Aether theory might help resolve the EPR paradox by allowing a reference frame in which signals go faster than light. He suggested that Lorentz contraction is perfectly coherent, not inconsistent with relativity, and could produce an aether theory perfectly consistent with the Michelson-Morley experiment.

Bell suggests the aether was wrongly rejected on purely philosophical grounds:

“What is unobservable does not exist”

Besides the arguments based on his interpretation of quantum mechanics, Bell also suggested resurrecting the aether because it is a useful pedagogical device; that is, many problems are solved more easily by imagining the existence of an aether. (The Ghost in the Atom: A Discussion of the Mysteries of Quantum Physics)

As noted by Alexander Markovich Polyakov in 1987

“Elementary particles existing in nature resemble very much excitations of some complicated medium (Aether). We do not know the detailed structure of the Aether but we have learned a lot about effective Lagrangians for its low energy excitations. It is as if we knew nothing about the molecular structure of some liquid but did know the Navier-Stokes equation and could thus predict many exciting things. Clearly, there are lots of different possibilities at the molecular level leading to the same low energy picture.”

From Harwood Academic Publishers (1987), A. M. Polyakov, “Gauge Fields and Strings”, sec. 12

LENR and the Aether – Harold Aspden

‘Heavy Electron’ -‘Mu-meson’ Vacuum Field – Electron Proton ‘Creation’

Dr. Harold Aspden is of particular interest. A brilliant man, he successfully predicted the mass of the proton and was a pioneer of efficient thermoelectric conversion devices. He was the first to be issued a U.S. patent with ‘cold fusion’ contained in the text of the application. A further example of his brilliance is his theoretical papers on Aether Science. The list below is of ten Harold Aspden patents granted, applied for, or cited that concern “Cold Fusion”, LENR, and the Aether (ZPE). Here is an excellent biography of the honorable Dr. Harold Aspden, including all theories, works published, and documented efforts in the Aether and LENR sciences.

GIVE THANKS and Support to Pure Energy Systems News for compiling the best in LENR history and news.

Ten of the 119 patents found at Cold Fusion Now’s “Harold Aspden Patent Tribute – Honoring Dr. Aspden”

For links to these patents open the “Harold Aspden Patent Tribute” 

  1. Cold Nuclear Fusion Method and Apparatus – App. filed Apr 20, 1990, published Nov 1, 1990 – Richard Geoffrey Belton, The Broken Hill Proprietary Company Limited (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)
  2. Hydrogen Activated Heat Generation Apparatus – App. filed May 23, 1994, published Dec 8, 1994 – inventor and applicant: Harold Aspden; Eneco, Inc.
  3. Cold Nuclear Fusion Method and Apparatus – App. filed Apr 20, 1990, published Nov 1, 1990 – Richard Geoffrey Belton, The Broken Hill Proprietary Company Limited (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)
  4. Methods and Systems for Generating High Energy Photons or Quantum… – Grant filed Nov 21, 2001, issued Aug 30, 2005 – Kiril B. Chukanov, Chukanov Quantum Energy, L.L.C. (other publications cited: Aspden, Harold, “Aether Science Papers: Part I: The Creative Vacuum,” Aether Science Papers (1996), pp. 26-32; Chukanov, KM)
  5. Device to Move an Object Back and Forth – Grant filed Jan 22, 2008, issued Mar 8, 2011 – Harvey Emanuel Fiala (notes that Harold E. Puthoff and Harold Aspden are recent exponents of ZPE; cites: Aspden, Harold, Power from Space: Inertia and Gravitation, Energy Science Report No.)
  6. Inertial Propulsion Device to Move an Object Up and Down – Grant filed Feb 11, 2011, issued Nov 29, 2011 – Harvey E. Fiala (refers to zero-point energy (ZPE), or space energy, at every point in space, possibly even of the order of magnitude of nuclear energy)
  7. Hydrogen Activated Heat Generation Apparatus – App. filed May 23, 1994, published Feb 9, 1995 – inventor and applicant: Harold Aspden
  8. Production of Thermal Energy – App. filed Jun 4, 1990, published Dec 13, 1990 – Cyril Barrie Edwards (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)
  9. Method for Producing Plasma Nuclear Fusion – App. filed Apr 9, 1990, published Oct 24, 1990 – Shunpei Yamazaki, Semiconductor Energy Laboratory Co., Ltd. (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)
  10. Solid State Surface Micro-Plasma Fusion Device – App. filed May 28, 1992, published Dec 23, 1992 – Ell Yeong Kim, Purdue Research Foundation (cites: Aspden, Harold, Hydrogen Activated Heat Generation Apparatus, May 23, 1994 / Dec 8, 1994)

Told by Dr. Aspden

His story of a ‘Cold Fusion’ institutional firewall at the U.S. patent office

The tactics I adopted in my efforts to secure a granted patent involved filing a U.S. continuation-in-part application based on the pending cold fusion application that had survived the PCT stage, but before it came under the executioner’s axe wielded by Harvey Behrend. My plan was to emphasize the thermoelectric aspects of the invention, but discuss their relevance to ‘cold fusion’ and incorporate a very substantial Appendix on that subject. I wrote the specification discussing the merits of ‘cold fusion’ and offered as an invention a special form of apparatus which I regarded as useful for testing the cold fusion process.

There was a 50:50 chance that the new application would be assigned to Harvey Behrend’s examining group, but the abstract stressed thermoelectric energy conversion and not cold fusion, so I had my fingers crossed in hoping that Art group 1102 and not Harvey Behrend’s Art group 2204 would be put in charge of the case in the U.S. Patent and Trademark Office.

So that you, the reader, may understand what this is all about, and particularly so that my colleagues in the patent profession in Europe who may come to hear about this may also understand, I feel it appropriate to quote a few words from an article which appeared in the July–November double issue of ‘Infinite Energy’, Nos. 15 and 16, at page 86.

I refer to Dr. Hal Fox’s article ‘New Energy Sources for the Near Future: An Open Letter to Decision Makers’. Hal Fox is Editor of the Journal of New Energy. He is located in Utah, where the saga of cold fusion was born, and he has followed the cold fusion theme as closely as anyone over the years dating from March 1989, when that hope and prospect for a new energy technology was first announced.

Hal Fox, Quote…

“A university professor who has been supported by a multi-million dollar hot fusion contract and who becomes an advisor to the Department of Energy is unlikely to advise the government to fund a competitive low-energy technology. There would be very strong university pressure to continue in the development of hot fusion! This combination of federal funds, appointments to advisory groups, and the pressures for institutional funds on the advisers, has resulted in scientists becoming lobbyists with the following results:

  • The Office of Patents and Trademarks has been advised not to allow patents on competitive technology to hot fusion.

  • Leaders of some professional societies (such as the American Physical Society) have lobbied to prevent major peer-reviewed journals from publishing articles about competing technologies.”

Aether: How it relates to cold fusion (link)

A BREAKTHROUGH: U.S. PATENT NO. 5,734,122

Cold Fusion Appears in a U.S. Patent!

Copyright © 1998 Harold Aspden

The Fusion Criteria

In a very hot proton gas, protons can combine to create heavier atomic nuclei. This is facilitated if there is something effectively neutralizing the charge repulsion between the protons. A proton or anti-proton charge can become neutral if a beta particle of opposite polarity combines with it in some way to be seen as a neutron. Alternatively, it is conceivable that in the very energetic field conditions that one can foresee, particularly in the presence of strong gravity fields, the field medium itself can be such as to overcome the mutual repulsion, or the medium itself may become electrically polarized to provide a background that can serve as the neutralizing influence. In any event, the high-energy physics of the scenario by which protons synthesize heavier forms of matter has to explain why hot fusion occurs, and the picture has to be very close to what has just been outlined.

Now, there is one important aspect here that tends to be overlooked. How do those protons get created in the first place? The scientific challenge here is not concerned with fusion but rather with initial creation, and the answer lies in finding the true explanation for what governs the mass of the proton. This is a theoretical exercise in which this Applicant has played an important and recognized part because, although the world has not rushed into accepting the Applicant’s explanation, it is a fact that the precise value of the proton-electron mass ratio of 1836.152 was deduced in terms of the mu-meson field. This derivation involved collaboration with Dr. D. M. Eagles of the then National Standards Laboratory in Australia. It was reported in the U.S.A. Institute of Physics journal Physics Today in 1984 (November issue, p. 15) and was mentioned in their 1985 update by the leading U.S. researchers who measure this quantity. See R. S. Van Dyck et al.: International Journal of Mass Spectroscopy and Ion Processes, 66 (1985), pp. 327-337. They noted how remarkably close the theoretical value was to the one they measured and added, ‘This is even more curious when one notes that they [meaning this Applicant and Dr. Eagles] published this result several years before direct precision measurements of this ratio had begun.’

Given that the Applicant knows how protons are created from a mu-meson field, and taking into account that physicists familiar with quantum electrodynamics know that the vacuum field is the seat of activity of electron and positron creation and that mu-mesons are otherwise known as ‘heavy electrons’, it needs little imagination to suspect that Nature is trying to create protons continuously everywhere in space. Since we do not see such protons materializing before our eyes, we must infer that they exist only very transiently after creation unless the field medium has surplus energy to shed over and above its local equilibrium requirements.

The Applicant’s Electrodynamic Research

There are long-standing but unresolved anomalies concerning the very high forces exerted on heavy ions in a cold cathode discharge. In researching this subject the Applicant has established that the forces exerted on a heavy ion owing to its electrodynamic interaction with an electron are, in theory, enhanced by a factor equal to the ion-electron mass ratio.

This theory leads to a breach of the law specifying the balance of action and reaction, which means that energy is being exchanged with the field medium in which the electromagnetic reference frame is seated. The effective electromagnetic reference frame has a structure, as if formed by a fluid crystal lattice which, on a local scale, can adapt to, or perhaps govern, the shell structure of an atomic nucleus. Thus, normally, the motion of atoms and even ions in a gas or a solution will not evidence the anomalous electrodynamic effects, simply because they do not move relative to the local electromagnetic reference frame; as far as translational motion is concerned, the electrons present are the only active participants electrodynamically.

It is, however, quite a different situation when we consider a proton or a deuteron as a free ion inside the crystal host lattice of a metallic form, because there can only be one electromagnetic reference frame effective at any location in that metal. Therefore, a proton that is within a host crystal, and is free to move through it, will be seen as moving relative to the electromagnetic reference frame and then it can contribute to anomalous electrodynamic effects.

These conditions were the subject of the Applicant’s research as a Visiting Senior Research Fellow at the University of Southampton in England from 1983 onwards. The Applicant had written on the subject of the proton, the deuteron and the neutron, pursuing the theme that no neutrons exist inside the deuteron and stressing that atomic nuclei are composites of beta particles and protons or antiprotons. This work was all published before 1989.

The anomalous electrodynamic forces that exist in the heavy ion/electron interaction imply a hidden source of energy and so of heat but the Applicant’s research was aimed essentially at proving the modified law of electrodynamics dictated by that research. Certainly, whilst the ability to accelerate heavy ions by drawing on a hidden source of field energy was one of the Applicant’s pursuits, at no time had the Applicant contemplated the prospect of a fusion reaction of the kind implied by Fleischmann and Pons.

Nevertheless, as soon as that latter work was reported, the research knowledge arising from the author’s investigations was seen as relevant in the onward exploration of the excess heat phenomenon.

The Applicant was not only interested because of the excess energy aspect. There was the no-neutron feature and the fact that the process involved ion migration through water. There was the fact that the deuteron was the primary agent, and this Applicant had shown, from the theory of the deuteron mass and its magnetic moment, that deuterons undergo cyclic changes of state and that, in the state which prevails for one seventh of the time, the deuteron has a neutral core, having transiently shed a beta particle. More than this, however, the author had become involved at the time with two inventions, one of which later became the subject of a U.S. Patent (Serial No. 5,065,085), and these involved anomalous energy activity in a thermoelectric context which bears upon the cold fusion issue.

The less important of these two inventions was concerned with ‘warm’ superconductivity. The Applicant’s research had suggested that substances having certain molecular mass forms are adapted to absorb impact by conduction electrons in such a way that the change of inductive energy accompanying the collision is conserved until the resulting EMF changes can impart the energy to another electron. This meant that the thermal energy of a heavy ion in the substance could be reduced to feed the normal resistance loss associated with the current. This was, therefore, a process by which anomalous heat energy activity was involved in electrodynamic interactions between heavy ions and electrons.

The more important invention of the two just mentioned was concerned with the anomalous behaviour of a thermoelectric interface between two metals when subjected to a strong magnetic field in a rather special conductor configuration. The Nernst Effect operates to cause heat carried by electrons in a metal to be converted into an electric potential energy by the ordering action of a transversely directed magnetic field.

The essential requirement for the action of the Nernst Effect is that there is a temperature gradient in the metal and, given such a temperature gradient, and the magnetic field, there will then be an electric potential gradient set up within the metal. Now, a potential gradient inside a metal conductor implies that there is inside the body of the metal a distribution of electric charge not neutralized by normal metallic conduction. The polarity of that charge is determined by the direction of the thermal gradient and the orientation of the magnetic field. It can be negative or positive by choice in the design of the apparatus used.
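The magnitude of the potential gradient the Nernst Effect sets up follows the standard relation E_y = N · B · dT/dx, where N is the material-dependent Nernst coefficient. As a rough numerical sketch (the coefficient value below is an assumed order-of-magnitude figure for a metal, not a measured datum from this document):

```python
# Transverse electric field from the Nernst effect: E_y = N * B * dT/dx.
# N is the Nernst coefficient; the example value is an assumed
# order-of-magnitude figure, not a measured one.

def nernst_field(N, B, dT_dx):
    """Return the transverse electric field in V/m.

    N     : Nernst coefficient in V / (K * T)
    B     : magnetic flux density in tesla
    dT_dx : temperature gradient in K/m
    """
    return N * B * dT_dx

# Example: N ~ 1e-6 V/(K*T), B = 1 T, thermal gradient of 1000 K/m
E_y = nernst_field(1e-6, 1.0, 1000.0)
print(E_y)  # about 1e-3 V/m
```

The relation makes plain the point of the paragraph above: both a temperature gradient and a transverse magnetic field must be present, since E_y vanishes if either factor is zero.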

Besides this, the Applicant knew that the flow of a strong current through a metal conductor will promote what is known as the pinch effect in which electrodynamic forces act on the negative electron charge carriers to pinch them inwards and so set up an excess negative charge distribution inside the metal conductor.

This, plus the additional feature that a strong current flow through a metal conductor that is populated by free deuterons will promote a migration of deuterons that will bring them more frequently into near collision, all militated in favour of an invention proposing the provision of a supplementary high current closed circuit through the cathode of a cold fusion cell. That, indeed, became the subject of the patent application which the Applicant filed in U.K. on April 15, 1989, this being the priority application relied upon in the U.S. Patent Application under petition.

The Applicant, therefore, had reason to believe that the work on cold fusion would progress if the auxiliary current activation circuit were to be used.

However, in the event, the pioneer work of Fleischmann and Pons became the subject of such criticism that there was no prospect of getting R & D funding to take the subject invention forward. One is confronted with a chicken-and-egg scenario: disbelief in cold fusion as a scientific possibility stands in the way of securing a patent grant, and doubt about securing a patent stands in the way of finding sponsorship for the development.

The Fusion Criteria Reexamined: There are three criteria that need to be satisfied simultaneously to promote and enhance the cold fusion reaction of deuterons. 

  • Firstly, there is the background incidence of the virtual mu-meson field which is trying everywhere to create protons. This is a natural activity that cannot be controlled. It is a statistical effect, but one can calculate the probability governing proton creation fluctuations in a given volume of cathode material. See comments below. 

  • Secondly, there is the need to bring the deuteron partner in the fusion process into close proximity with the target deuteron. In hot fusion reactions this is achieved by the motion associated with thermal activity. In cold fusion it is achieved by adsorbing deuterons into a host metal in which they become separate from their satellite electrons and by concentrating the loading by the deuteron population. 

  • Thirdly, as with the creation of stars and by hydrogen fusion, there is the need to provide the field which pulls the deuterons together in spite of their mutual repulsion. In cold fusion this means the provision of a neutralizing negative charge distribution within the metal body of host metal. This requires strong electron current surges resulting in heat concentrations which set up temperature gradients in company with transverse magnetic fields. However, the structural form of the host metal in relation to the current channel, the magnetic field effect and the heat conduction path require a mutually orthogonal geometry to provide an optimum action. 

Note that the surplus negative charge may result in a charge density that is quite small in relation to the positive charge of the deuteron population, but every unit of charge is seated in a discrete electron, and a single electron that upsets the normal charge balance of deuterons and free conduction electrons can nucleate a pair of deuterons.

Then, the creation of a proton in one deuteron, accompanied by the demise of a proton in the other, will convert the two deuterons into a tritium nucleus and free a proton, with a beta particle transferring between the two. Alternatively, one deuteron will convert into helium-3, and the proton released will be in company with a beta-minus particle.
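Whatever the triggering mechanism, the energy available in forming tritium or helium-3 from two deuterons is fixed by the mass balance. As a point of comparison only, a minimal check of the conventional D + D branch energetics (using standard tabulated atomic masses, which are not taken from this document) recovers the familiar figures of roughly 4 MeV and 3.3 MeV per reaction:

```python
# Q-values of the two conventional D + D fusion branches, computed from
# standard tabulated atomic masses (in unified atomic mass units, u).
M_D   = 2.014101778   # deuterium
M_T   = 3.016049281   # tritium
M_H   = 1.007825032   # hydrogen-1
M_HE3 = 3.016029322   # helium-3
M_N   = 1.008664916   # neutron

U_TO_MEV = 931.494    # energy equivalent of 1 u

q_tritium = (2 * M_D - M_T - M_H) * U_TO_MEV    # D + D -> T + p
q_helium3 = (2 * M_D - M_HE3 - M_N) * U_TO_MEV  # D + D -> He-3 + n

print(round(q_tritium, 2), round(q_helium3, 2))  # 4.03 3.27
```

These are the conventional branch products; the mechanism described in the text differs in detail (a transferring beta particle rather than a free neutron), but the tritium and helium-3 end nuclei, and hence the energy scale per reaction, are the same.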

The onward reactions involving neutrons that are observed with hot fusion processes need not occur if the events involved are triggered naturally by the mu-meson activity in trying to create protons rather than by neutron bombardment.

 

Excellent Perspective From Relativity Past

 

“Ether and the Theory of Relativity” By Albert Einstein

An Address delivered on May 5th, 1920, 
in the University of Leyden

Translated by George Barker Jeffery and Wilfrid Perrett

From: Sidelights on Relativity (1922), pp.3-24, London: Methuen

German original: Äther und Relativitätstheorie (1920), Berlin: Springer

How does it come about that alongside of the idea of ponderable matter, which is derived by abstraction from everyday life, the physicists set the idea of the existence of another kind of matter, the ether? The explanation is probably to be sought in those phenomena which have given rise to the theory of action at a distance, and in the properties of light which have led to the undulatory theory. Let us devote a little while to the consideration of these two subjects.

Outside of physics we know nothing of action at a distance. When we try to connect cause and effect in the experiences which natural objects afford us, it seems at first as if there were no other mutual actions than those of immediate contact, e.g. the communication of motion by impact, push and pull, heating or inducing combustion by means of a flame, etc. It is true that even in everyday experience weight, which is in a sense action at a distance, plays a very important part. But since in daily experience the weight of bodies meets us as something constant, something not linked to any cause which is variable in time or place, we do not in everyday life speculate as to the cause of gravity, and therefore do not become conscious of its character as action at a distance. It was Newton’s theory of gravitation that first assigned a cause for gravity by interpreting it as action at a distance, proceeding from masses. Newton’s theory is probably the greatest stride ever made in the effort towards the causal nexus of natural phenomena. And yet this theory evoked a lively sense of discomfort among Newton’s contemporaries, because it seemed to be in conflict with the principle springing from the rest of experience, that there can be reciprocal action only through contact, and not through immediate action at a distance.

It is only with reluctance that man’s desire for knowledge endures a dualism of this kind. How was unity to be preserved in his comprehension of the forces of nature? Either by trying to look upon contact forces as being themselves distant forces which admittedly are observable only at a very small distance and this was the road which Newton’s followers, who were entirely under the spell of his doctrine, mostly preferred to take; or by assuming that the Newtonian action at a distance is only apparently immediate action at a distance, but in truth is conveyed by a medium permeating space, whether by movements or by elastic deformation of this medium. Thus the endeavour toward a unified view of the nature of forces leads to the hypothesis of an ether. This hypothesis, to be sure, did not at first bring with it any advance in the theory of gravitation or in physics generally, so that it became customary to treat Newton’s law of force as an axiom not further reducible. But the ether hypothesis was bound always to play some part in physical science, even if at first only a latent part.

When in the first half of the nineteenth century the far-reaching similarity was revealed which subsists between the properties of light and those of elastic waves in ponderable bodies, the ether hypothesis found fresh support. It appeared beyond question that light must be interpreted as a vibratory process in an elastic, inert medium filling up universal space. It also seemed to be a necessary consequence of the fact that light is capable of polarisation that this medium, the ether, must be of the nature of a solid body, because transverse waves are not possible in a fluid, but only in a solid. Thus the physicists were bound to arrive at the theory of the “quasi-rigid” luminiferous ether, the parts of which can carry out no movements relatively to one another except the small movements of deformation which correspond to light-waves.

This theory — also called the theory of the stationary luminiferous ether — moreover found a strong support in an experiment which is also of fundamental importance in the special theory of relativity, the experiment of Fizeau, from which one was obliged to infer that the luminiferous ether does not take part in the movements of bodies. The phenomenon of aberration also favoured the theory of the quasi-rigid ether.

The development of the theory of electricity along the path opened up by Maxwell and Lorentz gave the development of our ideas concerning the ether quite a peculiar and unexpected turn. For Maxwell himself the ether indeed still had properties which were purely mechanical, although of a much more complicated kind than the mechanical properties of tangible solid bodies. But neither Maxwell nor his followers succeeded in elaborating a mechanical model for the ether which might furnish a satisfactory mechanical interpretation of Maxwell’s laws of the electro-magnetic field. The laws were clear and simple, the mechanical interpretations clumsy and contradictory. Almost imperceptibly the theoretical physicists adapted themselves to a situation which, from the standpoint of their mechanical programme, was very depressing. They were particularly influenced by the electro-dynamical investigations of Heinrich Hertz. For whereas they previously had required of a conclusive theory that it should content itself with the fundamental concepts which belong exclusively to mechanics (e.g. densities, velocities, deformations, stresses) they gradually accustomed themselves to admitting electric and magnetic force as fundamental concepts side by side with those of mechanics, without requiring a mechanical interpretation for them. Thus the purely mechanical view of nature was gradually abandoned. But this change led to a fundamental dualism which in the long-run was insupportable. A way of escape was now sought in the reverse direction, by reducing the principles of mechanics to those of electricity, and this especially as confidence in the strict validity of the equations of Newton’s mechanics was shaken by the experiments with β-rays and rapid kathode rays.

This dualism still confronts us in unextenuated form in the theory of Hertz, where matter appears not only as the bearer of velocities, kinetic energy, and mechanical pressures, but also as the bearer of electromagnetic fields. Since such fields also occur in vacuo — i.e. in free ether — the ether also appears as bearer of electromagnetic fields. The ether appears indistinguishable in its functions from ordinary matter. Within matter it takes part in the motion of matter and in empty space it has everywhere a velocity; so that the ether has a definitely assigned velocity throughout the whole of space. There is no fundamental difference between Hertz’s ether and ponderable matter (which in part subsists in the ether).

The Hertz theory suffered not only from the defect of ascribing to matter and ether, on the one hand mechanical states, and on the other hand electrical states, which do not stand in any conceivable relation to each other; it was also at variance with the result of Fizeau’s important experiment on the velocity of the propagation of light in moving fluids, and with other established experimental results.

Such was the state of things when H. A. Lorentz entered upon the scene. He brought theory into harmony with experience by means of a wonderful simplification of theoretical principles. He achieved this, the most important advance in the theory of electricity since Maxwell, by taking from ether its mechanical, and from matter its electromagnetic qualities. As in empty space, so too in the interior of material bodies, the ether, and not matter viewed atomistically, was exclusively the seat of electromagnetic fields. According to Lorentz the elementary particles of matter alone are capable of carrying out movements; their electromagnetic activity is entirely confined to the carrying of electric charges. Thus Lorentz succeeded in reducing all electromagnetic happenings to Maxwell’s equations for free space.

As to the mechanical nature of the Lorentzian ether, it may be said of it, in a somewhat playful spirit, that immobility is the only mechanical property of which it has not been deprived by H. A. Lorentz. It may be added that the whole change in the conception of the ether which the special theory of relativity brought about, consisted in taking away from the ether its last mechanical quality, namely, its immobility. How this is to be understood will forthwith be expounded.

The space-time theory and the kinematics of the special theory of relativity were modelled on the Maxwell-Lorentz theory of the electromagnetic field. This theory therefore satisfies the conditions of the special theory of relativity, but when viewed from the latter it acquires a novel aspect. For if K be a system of co-ordinates relatively to which the Lorentzian ether is at rest, the Maxwell-Lorentz equations are valid primarily with reference to K. But by the special theory of relativity the same equations without any change of meaning also hold in relation to any new system of co-ordinates K’ which is moving in uniform translation relatively to K. Now comes the anxious question: — Why must I in the theory distinguish the K system above all K’ systems, which are physically equivalent to it in all respects, by assuming that the ether is at rest relatively to the K system? For the theoretician such an asymmetry in the theoretical structure, with no corresponding asymmetry in the system of experience, is intolerable. If we assume the ether to be at rest relatively to K, but in motion relatively to K’, the physical equivalence of K and K’ seems to me from the logical standpoint, not indeed downright incorrect, but nevertheless unacceptable.

The next position which it was possible to take up in face of this state of things appeared to be the following. The ether does not exist at all. The electromagnetic fields are not states of a medium, and are not bound down to any bearer, but they are independent realities which are not reducible to anything else, exactly like the atoms of ponderable matter. This conception suggests itself the more readily as, according to Lorentz’s theory, electromagnetic radiation, like ponderable matter, brings impulse and energy with it, and as, according to the special theory of relativity, both matter and radiation are but special forms of distributed energy, ponderable mass losing its isolation and appearing as a special form of energy.

More careful reflection teaches us, however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. We shall see later that this point of view, the conceivability of which I shall at once endeavour to make more intelligible by a somewhat halting comparison, is justified by the results of the general theory of relativity.

Think of waves on the surface of water. Here we can describe two entirely different things. Either we may observe how the undulatory surface forming the boundary between water and air alters in the course of time; or else — with the help of small floats, for instance — we can observe how the position of the separate particles of water alters in the course of time. If the existence of such floats for tracking the motion of the particles of a fluid were a fundamental impossibility in physics — if, in fact, nothing else whatever were observable than the shape of the space occupied by the water as it varies in time, we should have no ground for the assumption that water consists of movable particles. But all the same we could characterize it as a medium.

We have something like this in the electromagnetic field. For we may picture the field to ourselves as consisting of lines of force. If we wish to interpret these lines of force to ourselves as something material in the ordinary sense, we are tempted to interpret the dynamic processes as motions of these lines of force, such that each separate line of force is tracked through the course of time. It is well known, however, that this way of regarding the electromagnetic field leads to contradictions.

Generalizing we must say this: — There may be supposed to be extended physical objects to which the idea of motion cannot be applied. They may not be thought of as consisting of particles which allow themselves to be separately tracked through time. In Minkowski’s idiom this is expressed as follows: — Not every extended conformation in the four-dimensional world can be regarded as composed of world-threads. The special theory of relativity forbids us to assume the ether to consist of particles observable through time, but the hypothesis of ether in itself is not in conflict with the special theory of relativity. Only we must be on our guard against ascribing a state of motion to the ether.

Certainly, from the standpoint of the special theory of relativity, the ether hypothesis appears at first to be an empty hypothesis. In the equations of the electromagnetic field there occur, in addition to the densities of the electric charge, only the intensities of the field. The career of electromagnetic processes in vacua appears to be completely determined by these equations, uninfluenced by other physical quantities. The electromagnetic fields appear as ultimate, irreducible realities, and at first it seems superfluous to postulate a homogeneous, isotropic ether-medium, and to envisage electromagnetic fields as states of this medium.

But on the other hand there is a weighty argument to be adduced in favour of the ether hypothesis. To deny the ether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view. For the mechanical behaviour of a corporeal system hovering freely in empty space depends not only on relative positions (distances) and relative velocities, but also on its state of rotation, which physically may be taken as a characteristic not appertaining to the system in itself. In order to be able to look upon the rotation of the system, at least formally, as something real, Newton objectivises space. Since he classes his absolute space together with real things, for him rotation relative to an absolute space is also something real. Newton might no less well have called his absolute space “Ether”; what is essential is merely that besides observable objects, another thing, which is not perceptible, must be looked upon as real, to enable acceleration or rotation to be looked upon as something real.

It is true that Mach tried to avoid having to accept as real something which is not observable by endeavouring to substitute in mechanics a mean acceleration with reference to the totality of the masses in the universe in place of an acceleration with reference to absolute space. But inertial resistance opposed to relative acceleration of distant masses presupposes action at a distance; and as the modern physicist does not believe that he may accept this action at a distance, he comes back once more, if he follows Mach, to the ether, which has to serve as medium for the effects of inertia. But this conception of the ether to which we are led by Mach’s way of thinking differs essentially from the ether as conceived by Newton, by Fresnel, and by Lorentz. Mach’s ether not only conditions the behaviour of inert masses, but is also conditioned in its state by them.

Mach’s idea finds its full development in the ether of the general theory of relativity. According to this theory the metrical qualities of the continuum of space-time differ in the environment of different points of space-time, and are partly conditioned by the matter existing outside of the territory under consideration. This space-time variability of the reciprocal relations of the standards of space and time, or, perhaps, the recognition of the fact that “empty space” in its physical relation is neither homogeneous nor isotropic, compelling us to describe its state by ten functions (the gravitation potentials gμν), has, I think, finally disposed of the view that space is physically empty. But therewith the conception of the ether has again acquired an intelligible content, although this content differs widely from that of the ether of the mechanical undulatory theory of light. The ether of the general theory of relativity is a medium which is itself devoid of all mechanical and kinematical qualities, but helps to determine mechanical (and electromagnetic) events.

What is fundamentally new in the ether of the general theory of relativity as opposed to the ether of Lorentz consists in this, that the state of the former is at every place determined by connections with the matter and the state of the ether in neighbouring places, which are amenable to law in the form of differential equations; whereas the state of the Lorentzian ether in the absence of electromagnetic fields is conditioned by nothing outside itself, and is everywhere the same. The ether of the general theory of relativity is transmuted conceptually into the ether of Lorentz if we substitute constants for the functions of space which describe the former, disregarding the causes which condition its state. Thus we may also say, I think, that the ether of the general theory of relativity is the outcome of the Lorentzian ether, through relativation.

As to the part which the new ether is to play in the physics of the future we are not yet clear. We know that it determines the metrical relations in the space-time continuum, e.g. the configurative possibilities of solid bodies as well as the gravitational fields; but we do not know whether it has an essential share in the structure of the electrical elementary particles constituting matter. Nor do we know whether it is only in the proximity of ponderable masses that its structure differs essentially from that of the Lorentzian ether; whether the geometry of spaces of cosmic extent is approximately Euclidean. But we can assert by reason of the relativistic equations of gravitation that there must be a departure from Euclidean relations, with spaces of cosmic order of magnitude, if there exists a positive mean density, no matter how small, of the matter in the universe. In this case the universe must of necessity be spatially unbounded and of finite magnitude, its magnitude being determined by the value of that mean density.

If we consider the gravitational field and the electromagnetic field from the standpoint of the ether hypothesis, we find a remarkable difference between the two. There can be no space nor any part of space without gravitational potentials; for these confer upon space its metrical qualities, without which it cannot be imagined at all. The existence of the gravitational field is inseparably bound up with the existence of space. On the other hand a part of space may very well be imagined without an electromagnetic field; thus in contrast with the gravitational field, the electromagnetic field seems to be only secondarily linked to the ether, the formal nature of the electromagnetic field being as yet in no way determined by that of gravitational ether. From the present state of theory it looks as if the electromagnetic field, as opposed to the gravitational field, rests upon an entirely new formal motif, as though nature might just as well have endowed the gravitational ether with fields of quite another type, for example, with fields of a scalar potential, instead of fields of the electromagnetic type.

Since according to our present conceptions the elementary particles of matter are also, in their essence, nothing else than condensations of the electromagnetic field, our present view of the universe presents two realities which are completely separated from each other conceptually, although connected causally, namely, gravitational ether and electromagnetic field, or — as they might also be called — space and matter.

Of course it would be a great advance if we could succeed in comprehending the gravitational field and the electromagnetic field together as one unified conformation. Then for the first time the epoch of theoretical physics founded by Faraday and Maxwell would reach a satisfactory conclusion. The contrast between ether and matter would fade away, and, through the general theory of relativity, the whole of physics would become a complete system of thought, like geometry, kinematics, and the theory of gravitation. An exceedingly ingenious attempt in this direction has been made by the mathematician H. Weyl; but I do not believe that his theory will hold its ground in relation to reality. Further, in contemplating the immediate future of theoretical physics we ought not unconditionally to reject the possibility that the facts comprised in the quantum theory may set bounds to the field theory beyond which it cannot pass.

Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.”

Scientific Theoretical Physicists

Physics author A. Zee is a Permanent Member of the Institute for Theoretical Physics and Professor of Theoretical Physics at the University of California, Santa Barbara. Professor Zee was invited to write the introduction to the new edition of Feynman’s classic book on quantum electrodynamics, QED: The Strange Theory of Light and Matter.

A quote from the introduction:

“Theoretical physicists are a notoriously pragmatic lot. They will use whichever method is the easiest. There is none of the mathematicians’ petulant insistence on rigor and proof. Whatever works, man!

Given this attitude, you may ask, which of the three formalisms, Schrödinger, Heisenberg, and Dirac-Feynman, is the easiest? The answer depends on the problem. In treating atoms for example, as the master himself admits on page 100, the Feynman diagrams “for these atoms would involve so many straight and wiggly lines and they’d be a complete mess!”

The Schrödinger formalism is much easier by a long shot and that is what physicists use. In fact, for most “practical” problems the path integral formalism is almost hopelessly involved, and in some cases downright impossible. I once even asked Feynman about one of these apparently impossible cases and he had no answer. Yet beginning students using the Schrödinger formalism easily solve these apparently impossible cases!

Thus, which formalism is best really depends on the physics problem, so that theoretical physicists in one field, atomic physics for example, might favor one formalism, while those in another, high energy physics for example, might prefer another formalism.

Logically then, it may even happen that, as a given field evolves and develops, one formalism may emerge as more convenient than another.” – end quote

 

The Responsibly Imaginable

To Possibly Be

Imaginable    So Responsibly

Creatively See

Observational    The Reality

Hope of Theory

Occupational   A Visionary

Energize Plea

Survival         With Planetary

Space Faring

Quantum    LENR

Energy

 

gbgoble2013

FARING : intransitive verb

1. To get along

2. To go or happen

3. To travel; go.

4. To dine; eat.

Middle English faren, from Old English faran; akin to Old High German faran to go, Latin portare to carry, Greek peran to pass through, poros passage, journey.

First Known Use: before 12th century

 

Peter Gluck and Yeong E. Kim on LENR research


Dr. Yeong E. Kim is a professor at Purdue University specializing in theoretical physics who has speculated that Bose-Einstein Condensates (BEC) are involved in low-energy nuclear reactions (LENR). Cold Fusion Now attended the 2012 NETS conference where he presented his theory.

Chemist and new energy writer Peter Gluck has published a new interview with Dr. Kim discussing the changes in the field over the past two-and-a-half decades and his viewing of Defkalion Green Technology’s Hyperion reactor in Vancouver, Canada.

Original article here.

Gluck Q1 Dear Yeong, can you please tell us about your moments of awakening, illumination, and scientific revelation regarding the truth of cold fusion?

Kim As you know, John Huizenga dismissed the Fleischmann-Pons Effect (FPE) as the scientific fiasco of the century [John R. Huizenga, Cold Fusion: the Scientific Fiasco of the Century, U. Rochester Press (1992)]. He claimed that three miracles were needed to explain the FPE:

(1) suppression of the D-D Coulomb repulsion (Gamow factor) (Miracle #1),
(2) no production of nuclear products (D+D → n+3He, etc.) (Miracle #2), and
(3) violation of momentum conservation in free space (Miracle #3).

These three are known as the “three miracles of cold fusion”.

My first moment of awakening happened when Fleischmann and Pons announced their experimental results in the news media. Initially, as a practicing theoretical nuclear physicist, my dominant feeling about this discovery was disbelief, as it was for most of my professional colleagues. As I searched for a possible theoretical explanation for the claimed discovery, I realized that conventional nuclear theory could not be applied to deuteron fusion in metal.

However, at the same time, I did not know how to formulate a theory for deuteron fusion in metal, even though I clearly recognized that the conventional nuclear scattering theory at positive energies cannot directly be applied to nuclear reactions involving deuterons bound in a metal, which is a negative-energy bound-state problem. Quantum scattering theory describing the Coulomb barrier problem is applicable to scattering experiments with nuclear beams.

When they were being criticized at the APS meeting, I was frustrated that I could not rebut the public criticisms by my nuclear theory colleagues, since I did not have an appropriate alternative theory, even though I realized that their theoretical arguments were premature. Furthermore, I did not have the slightest idea how to explain miracles #2 and #3. However, my theoretical curiosity about miracle #1 kept my intellectual interest in the subject alive.

My second awakening happened in 1996-1997 when our theory group at Purdue developed the optical theorem approach for low-energy nuclear reactions. The Purdue nuclear theory group at that time consisted of four members (Y. E. Kim, group leader; A. L. Zubarev, senior scientist; Y. J. Kim, visiting professor; and J.-H. Yoon, graduate student). Our results were published in a paper entitled “Optical Theorem Formulation of Low Energy Nuclear Reactions” (OTF-LENRs) [Physical Review C 55, 801 (1997)]. http://www.physics.purdue.edu/people/faculty/yekim.shtml

Our optical theorem formulation is rigorous. My second awakening came in 1997 with the realization that our theoretical result for the OTF-LENRs could be used to develop a generalized theory appropriate for describing deuteron fusion in a metal.

My third awakening and illumination happened when Zubarev and I developed a theory of Bose-Einstein condensation nuclear fusion (BECNF) for deuteron fusion in a metal. The results were published in 2000 [“Nuclear fusion for Bose nuclei confined in ion traps,” Fusion Technology 37, 151 (2000); “Ultra low-energy nuclear fusion of Bose nuclei in nano-scale ion traps,” Italian Physical Society Conference Proceedings 70, 375 (2000)]. http://www.physics.purdue.edu/people/faculty/yekim.shtml My third awakening came in 2000 with the realization that the BECNF theory is capable of explaining the F-P effect and all of Huizenga’s three miracles.

My fourth awakening has been evolving ever since I met John Hadjichristos of Defkalion at NI Week in August 2012. I was very pleasantly surprised when he told me there that he had quoted our OTF-LENRs paper in his paper submitted to ICCF-17. This was the first time someone in the LENR community had quoted this paper! My second surprise was to hear from him about the even-isotope effect which he observed in his experiments and reported in his ICCF-17 paper. The observed even-isotope effect is consistent with the theory of BECNF! More detailed theoretical analysis of the reaction mechanisms behind his experimental results is currently in progress.

Gluck Q2 Twenty-four years have elapsed; hundreds of successful experiments have provided proofs of the reality of the phenomena. Unfortunately, the experiments were not successful enough to provide the necessary understanding of what happens, or of the conditions needed to enhance the heat release to useful levels. What are your thoughts on the evolution of the experimental situation in the field?

Kim Experiments with electrolysis and gas loading involve very complex measurements with many parameters. Unfortunately, even when useful positive results were observed, it has been very difficult to reproduce them. The absence of reproducibility of positive experimental results has been a major roadblock in the field.
We desperately needed a breakthrough in experimental procedures and techniques to achieve reproducibility. Unfortunately, lack of research funding has prevented intense and concentrated experimental work based on fresh new ideas, especially from the younger generation.

Gluck Q3 You have published over 200 papers on physics http://www.physics.purdue.edu/people/faculty/yekim.shtml and over 50 regarding LENR. (An opportunity to thank you for the many fine papers you have sent me, by classic mail and later by electronic mail.) As a theorist, it is said you do not belong to any school; you “are” a school. I could understand this for LENR from your very first note on Cluster Fusion in 1989 to the Kim paper I know at http://www.physics.purdue.edu/people/faculty/yekim/PhysRevC.55.801.pdf; may I ask how your theoretical ideas have evolved?

Kim Peter, I actually answered this in my reply to your first question.
I will ask the kind readers to study two relevant documents: “Critical Review of Theoretical Models (1994)” and “Message to the Colleagues (2012)” at http://www.physics.purdue.edu/people/faculty/yekim.shtml Many of my papers are also posted on that web site. I hope to publish very important new ones soon.

Gluck Q4 Why is the way to truth and to value so long; why does LENR still have so many problems? On a scale of 1 to 10, what is your degree of discontent with the global situation of LENR?

Kim This is more a philosophical question and I am a physicist. Perhaps CF was not discovered in the best place; perhaps it was historical bad luck that two electrochemist geniuses discovered it. And surely I am highly discontented with the experimental situation – weak signals and poor reproducibility, if and when results come, and a lack of conceptual unity and vision. The theory part was not much better; however, I am happy that it now becomes obvious that our theory is part of a greater vision, and a critical part.

Gluck Q5 Recently some non-conformist newcomers, such as Defkalion Green Technologies Global (DGTG), have come with the idea that what we call LENR is actually something much more complex than we have thought, and that the solution is to radically re-design the components – hydrogen, metal, reaction vessel and environment – to make it productive and controllable. What do you think about this New Wave idea? A new paradigm?

Kim Recently, I had an opportunity to observe experimental runs of DGTG’s R-5 reactor carried out by their group of scientists in Vancouver. The results were positive. More importantly, the results are reproducible, since there have been many positive runs with other observers in addition to my own observation.

This is very significant historically, since for the first time we have a device which yields reproducible results. It is the breakthrough we have been waiting for.

The breakthrough was accomplished by newcomers, a new breed of scientists and engineers led by a mathematician who became an excellent scientist. This is a new wave and a new paradigm.

Gluck Q6 Prediction is an intellectual activity superior even to wisdom. Please tell my readers your predictions for the future of the field! If you look only at the present, chances are you are a pessimist – or do you have a vision of a bright future?

Kim Recently I became very optimistic. At Vancouver I witnessed a successful test, run under protocol, with results leaving no doubt about plenty of excess heat and good control of the device. I am an optimist regarding the principles, but also regarding discovering and/or creating the details, which I plan to work on very hard in collaboration with my DGTG friends.

Peter Gluck‘s original interview with Yeong E. Kim is at Ego Out.

Related

Dispatch from Daejeon: Updates from ICCF-17

Trends Journal taps new energy again

Gerald Celente has once again brought new energy to the attention of investors and policy-makers by including a story on Defkalion Green Technologies in his Trends Journal, according to a post by Peter Gluck.

The publication is from the Trends Research Institute.

It has been quite a while since Celente has spoken publicly on cold fusion and other new energy technologies. Sterling Allan‘s Pure Energy Systems has a Top Five Exotic Free Energy Technologies listing of breakthrough technologies that got the attention of Celente and made it into the Trends Journal in 2011.

That year, Celente named new energy a top trend for 2011 and called cold fusion the greatest investment opportunity of the 21st century. While the earlier inclusions focused on Andrea Rossi‘s Ecat technology, this piece focuses on the differences between Rossi’s and Defkalion’s prototypes.

This excerpt is from Gluck’s posting New Energy trends paper about Defkalion Green Energies:

Alex Xanthoulis, Defkalion’s CEO, is quick to emphasize that the company’s products differ sharply from Rossi’s. An unnamed “major US organization,” he says, has compared Rossi’s and Defkalion’s devices on 14 points. “It found only two the same – the use of hydrogen and the use of nickel,” he says. “Otherwise, the two are completely different.”

There are other points of departure.

Rossi’s early devices, like the inventor himself, also were quirky. The temperatures they would reach weren’t predictable; they produced only a few watts of excess energy; and, when shut off, took varying lengths of time to stop producing heat.

In contrast, Defkalion’s machines reportedly produce heat at precise temperatures that customers require and can be shut off within a few seconds. The devices also produce energy up to 10 kilowatt-hours, not single watts as others have. The nickel-hydrogen fuel modules can easily be pulled out and replaced when depleted, a task that should need to happen only every few months.

Defkalion’s first product is called “Hyperion” and will enter the market early next year. A cube about 20 inches on a side, it will be marketed as a heater or boiler for homes and light industry needing up to five megawatts of power.

The second product is a larger-scale reactor that can be used to drive turbines or even cars, trains, ships, space satellites, and planes. Defkalion reports fielding inquiries from hundreds of companies around the world and has chosen to partner with at least 10 large ones – including three vehicle manufacturers, a utility company, telecommunications firms, and a maker of aircraft – to continue research and development. Some of the companies already are testing commercial devices using the reactor as a power source.
–THE SEARCH FOR AN OIL-FREE FUTURE by Bennett Daviss The Trends Journal Spring 2013, pp 30-34

The mention of proposed Hyperion generators contradicts earlier statements implying Defkalion would not focus on consumer products per se, but would license their technology for others to manufacture. However, as technological developments rush forward, adapting to new information is required, and plans and strategies change at light-speed. All pathways to a marketable generator must remain open.

Rossi’s heat output seems somewhat low according to reports by European scientists who have witnessed public demonstrations of the Ecat. However, it does appear that many spectators to this drama confuse the two rival technologies, and distinctions must be made.

Hopefully, Defkalion’s first public demonstration planned this August at National Instruments NIWeek 2013 will make those distinctions clear.

New energy trend is strong

It has been three years this month since Cold Fusion Now was activated to promote clean energy from the hydrogen in water. Turning the science into a commercially-scaled technology still eludes new energy researchers, yet wider awareness and support have been generated, and more private investments made. The Trends Research Institute has been one of the few forecasting agencies that recognize the importance of this technology.

It is unknown how many new labs are opening up. I know of at least two fresh and fairly well-funded ventures in the U.S. which prefer to stay under the radar without publicity – and probably more than that, with at least as many in multiple countries around the world. The number of people working on solving the problem of cold fusion, also called low-energy nuclear reactions (LENR), lattice-assisted nuclear reactions (LANR), quantum fusion and the anomalous heat effect (AHE), has increased dramatically over the last three years; most of them dream of a marketable device, despite the lack of a definitive theory describing the reaction.

While a large-scale coordinated research strategy would most likely accelerate the engineering of a usable technology, the absence of funding and patent protection forces hopes on the efforts of small independent labs, most of whom are working in isolation from each other.

Google’s search trend results reveal the general interest over the last 12 months. The most popular search term remains “cold fusion”, the name we chose to describe all the terms listed above. The highest peaks occurred during July and August, when the International Conference on Cold Fusion ICCF-17 met in South Korea and National Instruments celebrated LENR at NIWeek 2012. A third big bump occurred during the MIT IAP Cold Fusion 101 course in late January.

Ecat, the proprietary name of Rossi’s steam generator, is the second most popular term, a word which many people mistake for the entire field of research. Ecat search interest peaked last fall, around the time of the Ecat Conference in Zurich, Switzerland.

LENR, LANR, quantum fusion, and AHE all have devotees, but LENR is the term most used by institutions that eschew the stigma of cold fusion.

Google searches have trended flat or slightly downward since the big conferences last year. But the enthusiasm is global, beyond borders and regional dialects. We expect the search engines to be maxing out on cold fusion for this year’s conferences.

Gerald Celente and The Trends Research Institute are bold enough to call it as they see it, without concern for appearances. While giants sleep, the underlings continue to build the intellectual infrastructure of a new energy tomorrow. Usually, being on the leading edge is lonely and frustrating. But if trends continue, this edge will be the springboard for many in the field to not only pay their rent, but provide a clean source of energy in a people-powered world.

Cold Fusion Now!

Peter Gluck: INSPIRED BY DEFKALION: THE TORTUOUS WAY TOWARD LENR STANDARDS

Longtime LENR researcher, now LENR+ advocate (read further to know the difference) operating from Cluj, Romania, Peter Gluck has published an interesting sequence of observations and suggestions on the process of bringing a newly discovered phenomenon into a usable technology for the betterment of humankind, and how those involved in that process may operate to the best, and worst, possible outcomes.

While we don’t always agree, the problems associated with bringing a revolutionary new technology into human civilization are outlined here, in part, with sensitivity and a pointed direction toward solutions.

The full essay is here.

A lengthy excerpt follows:

Researcher and Author Peter Gluck
SECOND PART POSITIVE (about LENR!)

Problem solving is a great art and, modesty apart, I am a guru of it (see my 20 Rules of Problem Solving on this blog). A practical principle is that a very good solution is applicable to a surprisingly broad range of things.

Take into consideration that “Good and Evil are Siamese twins”; ergo the Manual, in an adapted form, can be used for other problems too, even if these are essentially good, positive, constructive.

In my former writing it was stated that LENR needs a Definition.

At the panel on LENR theory at ICCF-17 there was a minimum of overlap or concordance between the basic ideas of the five brightest theorists; their assumptions are different and their logic and mathematics do not intersect. I conclude that, prior to a definition, LENR will need a lot more mutual understanding, less Babelization, a bit of conceptual harmony, a good inner taxonomy, and even standards. The standards have to be accessible and acceptable for the LENR people, but also for those outside the field who make the decisions.

The Modern Mass Manipulation Manual (MMMM) can be used to help to make these tasks more clear…

DIVIDE and Solve
Divide, like the other two basic MMMM verbs/actions, is polysemantic here.

Even huge works, plans and tasks (says the science of project management, and common sense too) can be fulfilled if they are divided into small sub-tasks. Sometimes this is the only way to make a mega-task manageable. Other times it is a trick to gain time until the task becomes obsolete or ceases to be interesting or important.

Wicked problems must be split into small steps, actions and tasks. Tasks have to be smartly segmented. Logical schemes with alternative ways have to be created; careful, good planning is essential.

We have to divide the great field of LENR into sub-domains, select those with the best scientific and technological (if!) perspectives, and focus on those with the highest expectations. I am using a rather simple approach; when I speak about dividing, the “knife” I am using is technology. I repeat: the main criterion for dividing is to separate what is good for technology and potentially commercially valuable from what cannot lead to a technology and is merely a lab curiosity.

But this thinking too must avoid logical rigidity. Divide ideas into those that are possible, those that can be made possible and those that are not possible. These three categories overlap dynamically and will remain in evolution.

If, in LENR, you try to divide things into simple and complicated, then in practice you will soon find that actually they are very complex and amazingly complex.

Everything I know in LENR I have learned from my colleagues via their writings and messages; however, today there is a sharp division between my opinion and the LENR opinions of other people. This is due to my focusing on technology and on the future – perhaps most people will consider that I am anticipating some facts. Only the future can show if I am too optimistic regarding the present situation of the field. It will also show if I am too pessimistic regarding the technological future (lack of it, actually) of the classic, pre-revolutionary part of LENR.

Yes, when it comes to dividing the entire LENR field, it seems I am the only one speaking about LENR and LENR+, following a technical and scientific revelation caused by the Defkalion process. The division is described as:

LENR – unexpected, not controllable nuclear reactions in a hydrogen isotope plus transition metal system.

LENR+ – designed and controllable nuclear reactions in a hydrogen plus transition metal system enhanced by making hydrogen reactive and the metal more receptive.

Whether this division/classification will ever be accepted depends on the industrial success of the Hyperions and of the E-Cats – on whether they become heaters as usual and are sold like other, older energy generators.

It is a great pity, and a delay, that this simple and powerful principle – “make hydrogen reactive and nickel more receptive” – is not recognized; it is in a way present in Piantelli’s process (hydrogen negative ion and nano-clusters of nickel). I bet that Rossi’s E-Cats – he has had several variants of them, thin and fat – are all also based on this generic “best practice” or “core understanding”. Rossi speaks so often about the necessity of using atomic hydrogen and of applying special proprietary treatments to the micrometric Ni powder that this is a case where we can believe him.

It is also regrettable that, at the present time, the LENR+ deniers and LENR+ ignoranti are so predominant that the qualitative difference between the classical LENR systems and the fledgling LENR+ technologies remains largely unknown.

By the way, my Open Letter [read] was accepted very politely by the organizers of ICCF-17, but as far as I could tell its impact on the participants was zero or less. It could have been worse.

However, despite the fact that I have predicted the technological failure of palladium and deuterium in wet systems, these continue to be popular, and the pre-formed nanometric complex powders participate in many scientific orgies with both deuterium and hydrogen.

So there is a sharp division between my opinion and the opinions of my colleagues and friends; however, one of the first slogans I created says: “Differences in opinion attract smart people and repel only those who are not so.” Sometimes this is almost true; in any case, it sounds fine, doesn’t it?

I now have the typical behavior of a theorist, speaking my own language, with my own concepts, very personal presumptions and a specific approach; and I am not the fan of anybody, not even of myself, because I do not claim inerrancy.

For a standard, we have to analyze the similarities and the differences between LENR and LENR+. In practice, only LENR+ – as Defkalion says – needs a standard and a definition, and only LENR+ has to be, and can be, standardized.

CONNECT and Understand
First of all, LENR has to be connected to reality; this is usually a painful and shocking experience, and it is especially difficult when it is about the technological reality. Defkalion’s experience (see their paper at ICCF-17) shows that even if LENR is the cause of many natural phenomena, hydrogen and nickel in their “almost” natural technological state are very far from being ready to initiate or enter a low energy nuclear reaction – they have no real chance, and their physico-chemical status has to be radically and deeply transformed.

This explains the extreme difficulty of experimental LENR and shows the way to LENR+ as an energy source.

Connect with pragmatism and disconnect from theoretical idealism; re-connect theory and experiment. Connect, as strongly as possible, LENR and the Scientific Method (the principles of Galileo).

Without this connection, any progress in understanding and in making LENR useful is illusory.

Few people will agree with me that the time is ripe for LENR’s weaning from palladium, therefore I will not say that perhaps it is time to invest less (money, time, hopes) in wet systems (yes, I know about Brillouin!), electrolysis and deuterium. The great LENR community is free and democratic; everybody does something resulting from what he or she has already done, likes to do and can afford to do. Global LENR management and strategy are nonexistent, or just happen. Perhaps a better connection between the teams could help.

Connect in stronger and more powerful networks of researchers, and include good engineers and materials scientists. Connect to the essence and disconnect from the fuzzy halo of ideas around it. This is a problem of subjectivity and of definitions.

Connect with ever more remote scientific and technical ideas, trying to create creative bisociations.

Connect better – smarter, more diplomatically – with the press; develop empathy for their points of view and try to convince the journalists that a working LENR is a better source of interesting, sellable information than a failed LENR.

A special mention: National Instruments has connected with the best LENR warriors and has established a very valuable model.

In the near future, the place where the LENR+ truth can be revealed is Defkalion’s on-line, real-time mass spectrometry plus the other systematic instrumental analyses – a unique window of opportunity. I hope DGTG will connect with an excellent strategic partner, as good in instrumental analysis as National Instruments is in instrumental measurements, for this task of paramount importance. Goddess Athena, help us!

May I add some highly desirable but rather idealistic connections?

Develop more connected professional networks, include engineers, managers, specialists in materials science, IT people, clever businessmen, nice politicians…

Make friends, make demos, make positive noise. Develop empathy for skeptics. Stop infighting. Develop strong solidarity in the LENR community and do not let it degrade to liquidarity or even gasidarity.
Disconnect from the cursed word “anomalous”; use ‘unexpected’ or ‘surprising’ instead.

LENR is at least as natural as combustion, nuclear fission or hot fusion, but much more environmentally friendly than these.

LENR is more concentrated, practical and reliable than solar or wind energy. It has to be developed in harmony with the other forms of energy, connected with them by complementarity, not by senseless rivalry.

COMBINE and Enhance
First of all, combine the best modes of thinking for understanding the generative essence and spirit of LENR+.

Combine and unite the good things and eliminate the bad things.

Combine the best theories – more precisely, the fragments of truth from them – to develop real understanding and ideas for the technological development of this science. Accept that theories, if not combined and confronted with solid experimental data, are uselessly incomplete.

Accept that only combinations of theories are able to explain LENR, which is itself a very puzzling combination of a broad range of branches of physics and chemistry.

LENR+ uses secret combinations of chemicals that enhance some of the critical (bottleneck) stages of the set of reactions that lead to LENR+. Rossi, in populist fashion, calls these combinations the “Catalyst” and calls his generators E-Cats – a misnomer profiting from the general sympathy toward household felines.

The combination of words “cold fusion” is another misnomer (for ‘fusion’, ask Steve Krivit), and ‘cold’ is also not true/real. LENR+ is not as hot as hot fusion (at millions of degrees), but it is very hot by earthly standards. It seems, as Defkalion says, that LENR+ takes place only in the dynamic vacancies – miniature workshops of Hephaistos with Hell-like conditions.

Conclusion: for the success of LENR, we have to divide it into LENR+, which is both interesting and valuable, and classic LENR, which is only interesting; we have to connect it to good things and disconnect it from harmful things; and, eventually, we have to combine what we already know with what we still have to discover – all this in order to create the best energy source of the near future. –Peter Gluck

“Explaining LENR”

A new idea of what creates the cold fusion reaction has been articulated by Edmund Storms of Kiva Labs. Storms describes his hypothesis with the simplest terms in the updated Student’s Guide to Cold Fusion [.pdf] and in a recent paper submitted to the Journal of Condensed Matter Nuclear Science called Explaining LENR. [.pdf]

There are three distinct parts to his model.

1. The Nuclear Active Environment (NAE), a crack or hollow, is formed.
2. Hydrogen enters the NAE.
3. Applied power at the resonant frequency of the NAE/hydrogen combination turns mass into energy.

Storms does not say what nuclear mechanism is at work, only that it is instigated by resonance.

Peter Gluck, one of the earliest scientists to look into cold fusion/LANR/LENR, and who coined the term LENR+ for the new commercial products now being engineered, asked how this proposal answers seven crucial questions, and Storms answered. Their exchange is re-published here from his blog Ego Out.

Question #1: What are the consequences if the New Theory is successful?

Storms: The consequences of my theory being correct are twofold. First, the ability to replicate LENR at robust levels will improve. Once the required cracks can be manufactured on demand, the energy could be made on any scale, from that required to power a computer to a space craft.

Second, the phenomenon can be applied to solving the solar neutrino deficit. This will lead to a new understanding of the Standard Model. But right now, we can only hope.

Question #2: What about the completeness of the New Theory? Is it a “transtheory”?

Storms: The model will be a “trans-theory” only to the extent that it is acknowledged as plausible and worth exploring. This acceptance is not assured at this time. Whether one or many theories are required depends on how many ways Nature has to cause LENR. I assume only one basic method is possible. Therefore, only one theory is needed, i.e. the correct one. We will have to wait until the proper tests are made to determine which theory is correct. My model shows exactly which tests need to be done.

Question #3: Is the theory valid for all the existing LENR systems?

Storms: I base my model on hundreds of observations that show several very robust patterns of behavior. These behaviors include both the presence and absence of expected behavior. I rely on using a large number of combinations of behaviors, all of which are consistent with the logic of the model.

In addition, the model can be applied to both deuterium and hydrogen systems using any method for causing LENR. Of course, less support for the idea exists in the hydrogen system, which makes it the ideal system to use as a test of the predictions.

Question #4: Does the New Theory explain the serious problems of control, characteristic to all the LENR systems?

Storms: Control is a problem that the model addresses. I assume the rules controlling chemical behavior apply to the process that precedes the nuclear reaction, regardless of how the nuclear reaction operates. Once the preconditions are understood, the controlling variables can be identified and used in the same manner they would be used to control a chemical process. In other words, chemistry determines the rate of the nuclear reaction.

Once the required conditions are formed, the nuclear process occurs very rapidly and without any additional effort. This is similar to how energy is made in a gas furnace. The rate of energy production is determined by how fast the fuel is applied, in this case D+, and the subsequent flame does its thing without any additional effort or control.

Question #5: Does the New Theory explain the huge enhancement of energy achieved in the LENR+ systems of Rossi and Defkalion?

Storms: Rossi has succeeded in increasing energy production by finding a way to create many active cracks in the fine nickel powder. Presumably the powder has just the right size to support exactly the correct size crack. As a result, the concentration of NAE is higher than Piantelli was able to achieve in solid nickel. The secret of the process involves the method and/or the material that needs to be added to Ni to cause the cracks to form.

Question #6: Piantelli had a self-sustaining cell working for some 4 months, and Rossi speaks about an active lifetime of the material of 6 months. It seems Ni is not destroyed but transmuted. My guess from the very start (1993 paper) was that the active sites are formed in some way by “surface dynamics”: the movements of the atoms at the very surface of the metal, which have many degrees of freedom.

If the NAE are active cracks in the metal, and many/more active cracks mean more energy, then isn’t LENR an inherently destructive process? Is there a concurrent process by which the structure of the metal is rebuilt and the “wounds” are healed, or is the metal, in a certain sense, ‘sacrificed’, structurally speaking?

Storms: I propose that a limited and relatively constant number of active cracks can form because these result from stress relief. Once all the stress is relieved, no more cracks can form. Of course, most of the cracks made this way will be too large to be active, so that only a small number of NAE sites are making the detected energy.

The lifetime will be determined by variables independent of the number of active sites. For example, as deuterium accumulates in the E-cat, the reaction rate will drop because the less active tritium formation reaction will start. When deuterium is used to make helium, the helium will accumulate and block access to the active sites for the deuterium.

I do not believe that any significant transmutation takes place. All measurements of this process show that this reaction is rare, except for the claim by Rossi.

Question #7: Based on the New Theory, what would you recommend as a strategy for the LENR field? On what should research and development focus as much as they can; palladium-deuterium Pd-D systems or nickel-hydrogen Ni-H systems?

Storms: This question involves politics, which makes it difficult to answer. On the one hand, the Pd system has a great deal of experimental support, while the Ni system can apparently produce significant power, but based on very little understanding of the process.

If the crack model is correct, the metal is not important except that it be able to form active cracks and dissolve D or H as the required reactants. In fact, Ni might be a better host for the D reaction than Pd because it is cheaper and the D is more active than H because each D makes more energy than each H.

So, my advice is not to focus on the metal but on understanding the process. Once the process is mastered, the claims will be accepted regardless of the metal used. In fact, I think neither Ni nor Pd is the best host for the reaction.

Related Links

A Crack in the Code by Ruby Carat May 24, 2012