Hagelstein and Tanzella’s Vibrating Copper Experiment

Read original article by Marianne Macy on Infinite-Energy.com.


MIT’s Prof. Peter Hagelstein, longtime contributor of cold fusion experimental and theoretical work, knows a thing or two about X-rays. In the 1980s he was a 24-year-old prodigy working for hydrogen bomb creator Edward Teller at Lawrence Livermore Laboratory in what became known as the Strategic Defense Initiative, or “Star Wars.” Hagelstein had discovered a way to make a nuclear X-ray laser that would become the basis for the program, calculating that the electrons of a metallic atom, pumped repeatedly by an exploding bomb, could produce scores of X-ray photons. His work postulated that metals with a higher atomic number on the periodic table, such as gold, mercury, platinum and bismuth, would have shorter wavelengths and make for a more energetic laser. After successful early tests, Hagelstein became one of the chief scientists of a program essentially based on his idea. He received the Department of Energy’s E.O. Lawrence Award for National Defense in 1984, and at the time was the youngest recipient of that honor.

Dr. Alexander Karabut, who passed away on March 15 and whose background is detailed in a memorial obituary, spent years studying and working on X-ray effects. David Nagel, a physicist and former Naval Research Laboratory (NRL) Division head who himself has a patent on a system for studying the effects of soft X-rays for lithography, considered the work by Karabut and his colleagues at LUCH to be very important. “The center of gravity of Karabut’s work is transmutation and radiation measurements. Karabut’s X-ray measurements got attention in the U.S. because of the interest of people like myself and Peter Hagelstein, who have a background of experience in X-rays.”

Nagel credits Karabut with making a tremendous contribution to this area of research. “I found 20 papers on ISCMNS, and on LENR-CANR.org [search Karabut] there are 33 papers by him covering this area. He produced a large body of information.”

Alexander Karabut’s glow discharge experiments are considered some of the most significant in the field. In 2007 he was awarded the Preparata Medal for this work. One of his longtime LUCH colleagues, Irina Savvatimova, said at his memorial that she and Karabut had published their first paper on cold fusion shortly after Fleischmann and Pons (F&P) had. She said they had observed the effect of excess heat long before F&P but had not paid attention as they’d been more focused on transmutation.

Karabut’s work in X-ray effects is significant on many fronts, including the “fastest recorded evidence from LENR experiments of any kind,” as David Nagel put it. Recent work confirms that Karabut did indeed produce soft X-rays, which is a very big deal. It’s important in terms of understanding nuclear mechanisms and making related technology work. It’s a great scientific breakthrough with significant potential for industrialization.

RESEARCH TO UNDERSTAND KARABUT’S EXPERIMENT
An update on the results of a collaborative research effort between MIT’s Prof. Peter Hagelstein and SRI International’s Dr. Fran Tanzella will be presented at ICCF19 in Padua, Italy. The experiment studies the possible up-conversion of vibrational energy in order to understand Karabut’s X-ray effects, not with glow discharge but with a vibrating copper foil. One of the most striking things about the results is that the very different thinking, backgrounds and disciplines of the participants—an unusual amalgam of disciplines—when put together have resulted in a new kind of experiment.

Tanzella explains that for starters, he and Hagelstein were looking at the problem through different lenses. “Physics and chemistry have a difference of nomenclature,” Tanzella says. “Physicists think of all low energy radiation as X-rays regardless of its source. To a chemist, a photon ejected from an atom with low energy is an ‘electronic X-ray,’ while a low energy particle ejected from the nucleus is a ‘nuclear X-ray,’ and they are considered different phenomena. Classically, excess angular momentum from a nuclear reaction expresses itself as a photon (i.e., a gamma ray). Peter’s hypothesis, the last step of which is present in some LENR theories, is that when a nuclear reaction occurs inside a lattice the excess angular momentum interacts with that lattice’s vibrations. Therefore instead of yielding photons (gammas) it leads to a vibrating lattice, which thermalizes, resulting in heat with no ionizing radiation. So Peter thought of vibrations exciting nuclei to get low energy gammas, and calling them X-rays. (We don’t argue over the different nomenclature anymore.) Peter’s lossy spin-boson model deals with massive up-conversion and down-conversion. In high temperature fusion, deuterons normally fuse to make n+3He and p+t, but with low probability can make 4He plus a gamma. So in Peter’s theory, for LENR to occur that nuclear energy needs to be down-converted to phonons. If you vibrate a lattice you get heat but not ionizing radiation. The way I view our experiment is that we are looking at the final step in the LENR process backwards: we’re exciting phonons mechanically, and they interact with nuclei to give off low energy gammas as X-rays. In Peter’s model the energy goes from the nuclei to the vibrations for excess heat production, whereas here the idea is to go the other way and start with the vibrations to produce nuclear excitation. In the models the two processes are just two sides of the same coin.”

Despite, or perhaps because of, their different perspectives, they came up with an experiment both were happy with, after some rounds of refinement.

THE KARABUT INSPIRATION
The person whose work they were building on, Alexander Karabut, came from yet another world entirely. Hagelstein, who had traveled to Russia in the 1990s to see Karabut’s work in its early stages, explains that “Karabut is an experimentalist, not a theorist or someone involved in quantum mechanics. He lived in a last century world. His world is one of power supplies, discharges, working with others on hardware to do some diagnostics on it, and generating lots of data that didn’t make any sense but that he tried to understand.”

Hagelstein mused that he tried repeatedly to tell Alexander Karabut how influential the Russian’s work had been on his thinking and on the very direction of Hagelstein’s work. Karabut had asked Hagelstein to collaborate with him on a book, one that Hagelstein would still like to complete if Karabut had made enough progress to leave a manuscript. Hagelstein hopes his appreciation of Karabut came through to him. This was not helped by their method of communication: neither spoke the other’s language, so they were using Google Translate on emails, with linguistic idiosyncrasies undoubtedly causing major pieces of communication to fall between the cracks. Both men were very busy; Hagelstein reported that his last term’s work at MIT was “the worst I’ve had in twenty years,” and Karabut was working in a new space in Moscow he had put together. Hagelstein also attributed any glitches to their very different life views.

“I think the ideas I’m pursuing are not the most obvious ideas. To think what I am suggesting is plausible requires suspension of disbelief, or someone understanding how coherent processes in quantum mechanics work,” Hagelstein says. “I am going to imagine from his point of view that he would think I’ve lost my mind—which would be a natural reaction of an experimentalist interacting with a theorist like me!” Hagelstein laughs. “He wouldn’t appreciate the amount of ongoing effort to untangle what he did. But Karabut’s work has provided the foundation of pretty much most of the major issues I’ve been working on since 2011. I’ve come to view his experiment as seminal. If you say the Fleischmann-Pons experiment is Number 1 in all this business, I’m of the opinion that his collimated X-rays, if not Number 2, are in the top five.”

Hagelstein and Tanzella set out to reproduce the Karabut effect, not with a glow discharge, as Karabut did, but with vibrating foils and resonators. Would it be possible to produce soft X-rays that were collimated? “Soft” here refers to a region of the electromagnetic spectrum: X-rays range from hard and energetic, which penetrate material (as in an image of your broken arm), down to soft, which will not penetrate much material at all. The difference is one of wavelength. The collimated part makes the emission more like a laser than a light bulb: where an electric bulb scatters light in all directions, a collimated beam is narrow and directional, like a laser. Ordinary X-rays are usually born going in all directions, but Karabut found the X-rays from his source were directional, more like a laser.
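The hard/soft distinction can be made concrete with the standard photon energy-to-wavelength relation. The sketch below is illustrative only; the constant is standard physics, and the 1565 eV value is the 201Hg transition discussed later in the article:

```python
# Photon energy-to-wavelength conversion, illustrating why ~1.5 keV
# X-rays count as "soft". A minimal sketch, not from the article itself.
HC_EV_NM = 1239.84  # h*c in eV*nm (Planck constant times speed of light)

def wavelength_nm(energy_ev: float) -> float:
    """Wavelength in nanometers for a photon of the given energy in eV."""
    return HC_EV_NM / energy_ev

# The 1565 eV transition in 201Hg discussed later in the article:
soft = wavelength_nm(1565)     # ~0.79 nm: soft X-ray, stopped by thin material
# A diagnostic-radiography photon, for comparison (illustrative value):
hard = wavelength_nm(60_000)   # ~0.02 nm: hard X-ray, penetrates tissue
```

Longer wavelength (lower energy) means stronger absorption in matter, which is why soft X-ray experiments need windowless or thin-window detectors.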

Hagelstein takes this as extremely significant. If the X-rays are directional, then there has to be a pretty fundamental reason for it. Phase coherence among the emitters could result in collimation, but then how could this phase coherence come about? Hagelstein’s conclusion was that the most likely way it could happen would be through up-conversion of vibrational energy to produce phase coherent nuclear excitation. If so, this would bring Karabut’s experiment into alignment with mechanisms Hagelstein thinks are involved in producing excess heat in the Fleischmann-Pons experiment.

So Hagelstein and Tanzella set out to reproduce the collimated X-ray effect that Alexander Karabut first saw back in 2002. Hagelstein says, “Even before 2002 there were precursors to the effect. Karabut saw X-ray beamlets at higher energy. Karabut was convinced he had made an X-ray laser back in those days.”

HISTORY: BACK TO THE USSR
Peter Hagelstein visited Russia’s LUCH Institute in 1995. In the late 1980s to early 1990s, physicist Yan Kucherov was the head of a group at LUCH that included Alexander Karabut and Irina Savvatimova. Kucherov had already emigrated to the United States but stayed in touch with his colleagues. David Nagel, then at NRL, said he wished for a more comprehensive understanding of what LUCH was like. He believed the institute functioned like the United States’ Lawrence Livermore, Los Alamos and Sandia National Laboratories. “You can see they did lab work on materials and systems that have to do with nuclear power and propulsion,” Nagel says.

Hagelstein relates that at MIT they tried to replicate the experiments of Karabut, Kucherov and their colleagues. A version of the experiment was constructed and shipped to MIT, where Lou Smullin and Peter Hagelstein worked on it for four years altogether. During this effort, travel was arranged for Hagelstein to go to Moscow to visit the LUCH Institute. “I got to see Karabut there. I witnessed the discharge,” he says. “I asked him a lot of questions. We worked to understand the large voltage spikes in their system better, and for me to get better acquainted with the experiment.”

Hagelstein notes, “In those days we focused on the claim of gamma emission. Kucherov and colleagues had claimed to see gamma emission around 129 keV. The goal of the experiment was to set things out, put a gamma detector on it and see if we could see the same thing. After a very long time and a huge amount of work we saw exactly what they saw. The headache was that the gammas at 129 keV were statistical noise.”
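A rough illustration of how a gamma “line” can amount to statistical noise: counts in a detector bin fluctuate by roughly the square root of the count (Poisson statistics), so a modest excess over background is not by itself evidence, especially when many bins are scanned. The numbers below are invented for illustration, not the MIT data:

```python
# Significance of a counting excess over background, in standard deviations.
# A minimal sketch with illustrative numbers, not the MIT measurements.
import math

def significance_sigma(peak_counts: int, background_counts: int) -> float:
    """Excess over background, in units of the background's Poisson sigma."""
    return (peak_counts - background_counts) / math.sqrt(background_counts)

# 10,200 counts in a candidate 129 keV bin over an expected 10,000-count
# background is only a 2-sigma fluctuation; across many spectral bins,
# excesses of this size occur routinely by chance.
print(significance_sigma(10_200, 10_000))  # 2.0
```

This is why long runs and redundant detection methods, of the kind Karabut used, matter so much in this field.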

One researcher Hagelstein knew had experimented with glow discharges and tried to do an experiment related to Karabut’s collimated X-rays. “That researcher, someone with an awful lot of experience with glow discharge experiments, failed,” says Hagelstein. “I scratched my head thinking, why? And then I thought, well, obviously Karabut had these sharp voltage spikes, 50 kV or higher, in sub-nanosecond times. When I say 50 kV or higher, he was claiming up to a megavolt. That was one of the reasons why I went to Russia, to see these voltage spikes with my own eyes.”

In Moscow, Hagelstein found the ingenious nuts and bolts experimenter at work. “Karabut set up this insane resistance ladder voltage divider. He had like 100 resistors stacked up! So he was able to get a sufficiently low voltage across one of the resistors that he could measure it without frying the electronics. He claimed his measurements were consistent with getting well over 100 kV out of his voltage spikes. They were shorter than he was able to measure with the scope. He was of the opinion they were sub-nanosecond. We looked for them in our system, which was supposed to be a copy of his. The discharge hardware was an exact copy of Karabut’s system.”
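The arithmetic behind a resistance-ladder divider is simple: equal resistors in series split the voltage equally, so tapping one of N resistors sees 1/N of the input. The component count and spike amplitude below follow the description above; they are illustrative, not a reconstruction of Karabut’s actual circuit:

```python
# Resistance-ladder voltage divider, as a minimal sketch of the scheme
# described above (equal resistor values are an assumption).
def divider_tap_voltage(v_in: float, n_resistors: int, n_tapped: int = 1) -> float:
    """Voltage across n_tapped of n_resistors equal resistors in series."""
    return v_in * n_tapped / n_resistors

# A 100 kV spike across 100 equal series resistors leaves only ~1 kV
# across a single resistor, a level the measurement electronics can
# survive (with further attenuation at the scope input as needed).
print(divider_tap_voltage(100_000, 100))  # 1000.0
```

Note this says nothing about bandwidth: faithfully passing sub-nanosecond spikes also requires the ladder’s stray capacitance and inductance to be small, which is part of what made the measurement hard.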

“Although,” Hagelstein continues, “what we had at MIT was a twin to their system. . .except for the electronics. We built our own electronics, different from Karabut’s. We saw voltage spikes, but at 10, 15, 20 kV; also shorter than we could measure, under a nanosecond, but not of the amplitude the Russians were getting. I am of the opinion that these voltage spikes are connected with the collimated X-ray emission and electron emission effects. The voltage spikes would only be present if you do something interesting in your drive electronics; for example, he had inductors. When I went to Moscow he said at the time he was using an inductive ballast. . .but I think to understand his experiments you have to understand the electronics, and that is going to play a key role in the effort of sorting out what it is he did.”

So it was that Peter Hagelstein, upon learning of Karabut’s death, sent word to his Russian colleagues that it could be important for Karabut’s electronics to be preserved. “I am of the opinion that the key to Karabut’s glow discharge experiments was in his electronics. He had a report where he documented some aspects of his electronics for a system similar to his glow discharge, which had inductors on the other side of power transistors, an unusual thing to do. His glow discharge showed very short, high amplitude voltage spikes, which is very unusual for glow discharge. In my view this would be connected to his electronics. If his electronics or notes exist, it would be a tremendous loss for them to be discarded before we figure out his electronics.”

Hagelstein notes, “One thing I had hoped to do in connection with writing the book, was that I was going to twist his arm to write out a circuit describing his driving electronics so it would be there in black and white for the world. I think someone technical who knows about circuits should make an effort to look through his notes and electronics to make a diagram of his driving circuit. If that is done then it would be possible to pursue his life research. If it gets lost then no one will ever be able to go back to what he was doing.”

Hagelstein offered that he would host the experiment at MIT in the future when he could raise the resources and manpower to do it. “It would be nice if his experiment were preserved because it’s such an important and fundamental experiment, but what’s important would be if someone could recover the circuit diagram in as much detail as physically possible; that’s what would make a giant difference to me. There are two separate issues. One is the circuit diagram, the other is the preservation of the experiment. That should be talked about, to make a home for it, possibly in this country—at the University of Missouri Kimmel Institute or LENR research director Rob Duncan’s center at Texas Tech. Our place at MIT is a possibility. In Russia, one question is if Roussetski and company could take it over. In France, researcher Jean Paul Biberian might be a candidate.”

TRYING TO UNDERSTAND KARABUT: THE SRI/MIT EXPERIMENT BY HAGELSTEIN AND TANZELLA (THE “HELLISH BEAST”)
Tanzella and Hagelstein agreed on the importance of Karabut’s X-ray effects and their great scientific and practical industrial potential. “We all agreed it was not an X-ray laser,” Tanzella states. “An X-ray laser would need a population inversion, which was thought impossible under the conditions of the experiment.”

At SRI, Hagelstein and Tanzella were faced with the need to make an experiment that was inspired by Karabut’s experiment but would be executed in a completely different way. They recognized that Karabut’s glow discharge was sufficiently complex that it was unlikely they would be able to build something to replicate what he had done, because they would need his circuits. “In my view, his glow discharge is a hellish beast,” Hagelstein says. “Karabut and the LUCH Institute had a lifetime of experience with glow discharges before he built and worked on it. There was no way I wanted to get into a program where we’d have to basically become experts like Karabut. The idea was that if Karabut’s ideas worked, they would work in a certain way. I have models, and the models say that the only way Karabut’s experiment would really work would be if one of these voltage spikes on the cathode produced vibrations. And it would only work if there was mercury on the surface; only then would we get the X-rays.” Hagelstein had noted earlier that 201Hg is special among nuclei because it has the lowest energy transition (at 1565 eV) from the ground state among the stable isotopes.

Hagelstein suggested that instead of building Karabut’s glow discharge system, which looked like a real beast of a problem, they should attack the interpretation and build something simpler that would just vibrate some cathodes. It would be easier to explain to colleagues later on.

Tanzella suggested making them out of copper because mercury sticks to copper very well. Hagelstein explains, “If we got it to work we could put mercury on the surface and just watch for X-rays. That’s what we did. We got charge emission signals. We also got X-ray signals, which we initially thought were Karabut’s X-rays. When we went back to try to understand the data, it was clear. . .We had been fooled. Karabut didn’t get fooled because his diagnostics were very good and redundant, and he had taken the time to study the effect for many years. He had four different ways to test for his X-rays. But we were only using one X-ray detector. I am of the opinion our X-ray detector got fooled because of the large amount of noise present in the system. If real X-rays had been there we couldn’t tell the difference between it and the noise. We would like to follow up and try again either at SRI or MIT. At MIT we haven’t gotten that far yet but we are definitely interested in the X-rays.”

Because Hagelstein has been following the Karabut effect since the 1990s, his appreciation for SRI and Fran Tanzella is great. “Let me honor my friend Fran just a bit here,” he says. “When I approached Fran and SRI and said I’d like to set up a controlled Karabut experiment it was such a contrast to what would happen if I’d tried to do it here at MIT, where if I said, ‘I want to vibrate copper and see if X-rays come out,’ the door would immediately slam shut! But at SRI they said, ‘let’s just go do it and set it up!’”

“We talked about what we needed to do,” Tanzella says. “We need to excite a thin piece of metal. I got copper foils and cleaned them up. We made a simple apparatus. You can find details of this in our recent paper with figures and pictures. We put things together with steel washers. We spent months trying to make it work. The project proceeded in three phases. Peter wanted to find resonance by performing AC impedance experiments. We started that path but found that the noise was large and the signals too small to see in the presence of so much noise.”

Tanzella explains, “We then decided to make a solid cell that would hold the foil tightly, and resonate with the foil. We did that and got a large driver, which was a copper block, large so the acoustic energy wouldn’t go there. We brought in a collector plate in the back side of the resonator foil. You have a driver close to the foil so that it can drive it, and waves from the foil couple to the resonator. You have a collector plate to be able to measure electrons or any current. We were hoping they were electrons. The signals corresponded to negative charges, so we assumed they were electrons or negatively charged air molecules. We had an oscillator and linear amplifier so I could drive oscillations with high voltage and MHz frequencies. There were resonances in the signals. Peter thinks that the X-ray emission in the Karabut experiment is due to the 1565 eV transition in a mercury isotope, 201Hg. He recalled during his visit that they had at one time been using an old mercury-based diffusion pump. The amount of mercury needed on the surface to produce the emission was very small, and probably consistent with normal levels of ubiquitous mercury contamination. So we wanted to get copper vibrating so it could excite the mercury on the surface. Copper amalgamates with mercury so my colleague, Jianer Bao, deposited a thin layer of mercury on our copper foil. And we looked to see X-rays when we excited this coated foil. We saw charge emission signals that seemed to be correlated with the vibrational resonances. (If we let the foil sit for some time the mercury diffuses into the copper—it amalgamates—and the signals on the X-ray detector diminish, which we had attributed to the mercury atoms no longer being on the surface.)”

Tanzella notes, “Peter pulled out his credit card and bought a $7500 X-ray spectrometer. It fit in our resonator. We performed the excitation experiments with and without mercury. We saw something (a stronger signal in the X-ray detector) with mercury present. Because these results were potentially so important, whether the signals were real became a critical question. Peter decided to go through every scrap of data that had been taken, and we had to re-run all of the X-ray calibrations since there seemed to be some uncertainty in the calibration that had been used. Peter ended up convinced that the signals on the X-ray detector were not real, because they didn’t seem to be absorbed by the Be window at the front of the detector. The X-ray detector was responding to something, but not to X-rays.”

Tanzella continues, “We needed to make a decision about presenting the charge emission results at ICCF19, since a charge emission effect correlated with acoustic vibrations would be big news and important to the community. At MIT some experiments had been started, and large amounts of RF noise were found in all of the detectors. So Peter wanted to see the charge emission experiment pass a ‘gold standard’ test to be sure that the charge was real, and not electrical noise. The idea was that RF noise might confuse some electronics, but Peter felt that a simple capacitor couldn’t be fooled. If the current was real, then it would charge a capacitor, and we would have much more confidence in the current measurements.”

So, Tanzella set up the “gold standard” capacitor measurement and took data. He found that the capacitor charged up when the driver was on, at a rate consistent with the earlier measurements. Also, the rate of charging was low off of resonance, and high on resonance, backing up the earlier electrometer measurements. With a successful “gold standard” test in hand, the abstract was e-mailed off.

Continued discussions about the severe noise problems in the experiments at MIT prompted Tanzella to repeat the “gold standard” capacitor test. This time, there would be no real-time monitoring of the capacitor. It would remain unconnected from the rest of the world (other than the collector and ground), and sampled only when the big high frequency and high voltage drive was off. This time no voltage could be seen on the big microfarad capacitor. The measurement was repeated with a small picofarad capacitor, and a signal could be seen. This signal was seen to grow roughly linearly with more running and subsequent interruption type measurements.
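The arithmetic behind the capacitor choice helps explain why the two follow-up tests could differ: the same collected charge Q produces a voltage V = Q/C, so a picofarad-scale capacitor shows a voltage a million times larger than a microfarad one. The charge value below is an invented illustration, not a number from the experiment:

```python
# Why capacitor size matters in the "gold standard" charge test:
# V = Q/C, so small capacitance converts tiny charges into measurable
# voltages. Values here are illustrative assumptions only.
def capacitor_voltage(charge_coulombs: float, capacitance_farads: float) -> float:
    """Voltage developed by a given charge on an ideal capacitor."""
    return charge_coulombs / capacitance_farads

q = 1e-12  # suppose ~1 pC of accumulated charge (hypothetical)
v_big   = capacitor_voltage(q, 1e-6)     # 1 microvolt on a 1 uF capacitor: invisible
v_small = capacitor_voltage(q, 100e-12)  # 10 mV on 100 pF: readily measurable
```

A picofarad capacitor is also far more sensitive to stray pickup and leakage, which is consistent with the suspicion that the small observed signal was an artifact rather than a steady emission current.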

Hagelstein notes, “A conclusion from this test is that all of the earlier charge emission measurements were called into question as most likely being due to noise. Critics of the field have speculated that all positive measurements of excess heat and other anomalies are nothing but artifacts, so doing more tests to be sure of a result is always important.” Hagelstein has observed that if the charge in this new test were real, it would be very important. He says, “Unfortunately, we don’t know very much about this new version of the experiment, whether the result is an artifact or not, or whether the charge has anything to do with the vibrations.”

So, after going through all of this, how do the results connect with Karabut’s experiment, given all that has been learned?

Tanzella says, “The importance here, as I view it, is that you can excite phonons and show nuclear excitation, as a way to prove that LENR nuclear excitation relates to phonons to get heat without gammas.” Tanzella said that if successful, this research “could validate the concept that you can have nuclear reactions without ionizing radiation.”

Hagelstein has observed philosophically that knowing what doesn’t work is important, because it allows you to focus on things that have a better chance of working. However, he says that the results so far have been extremely valuable to him in his interpretation of the Karabut experiment and of the models he has been working on. He explains that one of the big headaches on the theory end has been to find a regime in the models that might allow for an X-ray emission effect involving a small sample. For years the numbers just wouldn’t work, even after repeated tries. Last year he found an obscure regime of the model where it was possible to have the numbers work, but this corresponded to a very strong coupling regime only available if coupling to transitions with negative energy states of the nucleus were responsible for the fractionation. It was not a regime he was happy with, and not one that would go over well with colleagues; but it was the regime the model would be forced into if one concluded that a small foil had the power to up-convert lots of small quanta to make 1.5 keV X-rays.

According to Hagelstein, they “did drive small samples pretty hard, and when driven hard they didn’t seem to do very much (although with so much noise present it has been hard to be sure).” Tentatively the conclusion he is coming to is that a revision in his interpretation of the Karabut experiment is needed. In experiments by Kornilova and Vysotskii and coworkers, a 3 mm thick steel plate near a high pressure water jet has been seen to produce X-ray signals on film under conditions where the X-rays are collimated. Peter thinks that this effect is closely related to Karabut’s collimated X-rays. He says, “Steel is interesting in that it contains 57Fe, and there is a nuclear transition at 14.4 keV in 57Fe that is like the 1565 eV transition in 201Hg. The cathode holder in the version of the glow discharge experiment that we worked with at MIT had a very heavy steel holder that could be interpreted as an acoustic resonator. Strong acoustic excitation of this resonator, resulting from the very short and very high voltage spikes that occur in Karabut’s discharge, might be responsible for the up-conversion of the vibrational energy. If so, the model would probably be much happier with it in the normal regime of the model. And if so, we could test it, by working with a big steel resonator instead of a copper resonator.”

So, a work in progress. Hagelstein and Tanzella are advancing their ideas about Karabut’s collimated X-rays by investigating a physics experiment which they think is closely related.

WHAT THE FUTURE HOLDS
Peter Hagelstein and his collaborator Irfan Chaudhary produced a paper last year that focused on generic issues of the Karabut experiment and Hagelstein’s model. This paper discusses the model in the different regimes, trying heroically to connect the model to experiment under the assumption that the small cathode is up-converting the vibrational quanta. Hagelstein notes, “The ultimate conclusion is that a connection is made only if the system operates in an anomalous regime, which is interesting but not appealing. These days I am moving to a different interpretation that says the large steel cathode holder plays a major role. The thought is that the model will be much happier connecting with experiment in the normal regime. This will make life much simpler, as the normal regime is much better understood, much easier to analyze, and behaves qualitatively much more like the experiments. One possibility is that the Fe-57 transition and the few other long-lived low energy nuclear transitions might be important for up-conversion in the eV-keV range, while more common long-lived transitions at higher energy are important for the down-conversion in the MeV regime.” In all of this the Karabut experiments, Hagelstein claims, “Have been key in my thinking and that of some of my associates as well.”

What is the potential of a working technology coming out of the Karabut-inspired experiments Hagelstein and Tanzella are doing?

“Let me back up a bit,” Hagelstein responds. “Some years ago, when Karabut first found this he wondered how efficient it could be. So he tinkered with it, trying to make it as efficient as possible. He reported a conversion efficiency of 20% from input electrical energy to output collimated X-rays. That is wild. It is amazing. Some of my colleagues have explained to me that this would be a candidate for commercialization. I don’t think you’d like to do it with glow discharge. Nothing wrong with it; if you debug it, that would be useful. But I was thinking, if we could get surfaces to vibrate and give out collimated X-rays, and if this happened efficiently, that would be a ridiculously useful technology. One of my friends who is involved with X-ray lithography said that would be the cat’s meow for a source for lithography for the semiconductor industry. Whether or not it turns out to be true, it conveys how important X-ray sources are in this day and age.”

—Marianne Macy and Infinite Energy will continue with this reporting, with interviews from Alexander Karabut’s colleagues from LUCH detailing the history and future of related work there.


A Russian Experiment: High Temperature, Nickel, Natural Hydrogen by Michael C.H. McKubre

This is a re-post of an article written by Michael C.H. McKubre and published in Infinite Energy Magazine issue #119.



[Editor’s Note: Alexander Parkhomov’s E-Cat experiment report was issued on December 25, 2014. We have uploaded the original Russian report by Alexander Parkhomov and his English translation.]

The first thing to record is that the document under consideration is an informal, preliminary research note available to me only in English translation of the Russian original. Despite that, it reads well. Alexander Parkhomov is a “known” scientist from a highly reputable institution, Lomonosov Moscow State University, which I have visited on several occasions. He has published work with friends of mine including Yuri Bazhutov (Chairman of ICCF13 and member of the IAC) and Peter Sturrock (Stanford University). These are both very capable senior scientists, so when this research is prepared for formal publication I am sure we can anticipate a complete and solid report.

In the meantime I will comment briefly on what is presented. Because of the community interest in the topic and the apparently clear and elegant nature of the experiment, Parkhomov’s preliminary report has already received an astonishing amount of discussion on the CMNS news group. What is stated in this preliminary report is encouraging, potentially even interesting, but one is struck by material information that is not made available in it. Much, most or all of this added detail apparently is available to the author, so one must await further elucidation from Parkhomov, or a serious engineering effort at replication, before final conclusions can be reached.

Although clearly motivated by the Rossi “Lugano” experiment, it is not correct to call either a replication of the other, or of any experiment before. These are new experiments, with new characteristics and some common features. As shown below, the reactor’s active core consists of nickel powder intermixed with a hydrogen (lithium and aluminum) source, LiAlH4, enclosed in an alumina tube and confined with bonded ceramic plugs. This core is surrounded by a helically wound, coaxial electrical heater, extended in length to provide closely uniform heating. The whole is potted in ceramic cement, which incorporates a single sense thermocouple.

Fig. 1 Design of the reactor.

To this extent this configuration mirrors the Rossi reactor recently reported from Lugano, although we do not know the similarities or differences between the Ni samples used in each.[1] Since LiAlH4 decomposes to liquid and H2 gas at the temperature of operation, its source and nature are presumed not to make much difference, although the (unstated) impurity content may. Also different is the nature of the electrical input used for heating. For Parkhomov this is unspecified. The Rossi effort at Lugano employed three-phase (50 Hz) power for the calorimetric input and thermal stimulus, but also included an unknown amount of power, in unstated form, as a trigger. No such trigger apparently was used by Parkhomov.

The two experiments diverge radically in their chosen means of calorimetry. Parkhomov states that the “Rossi reactor technique based on thermovision camera observation is too complex,” with which I tend to agree. The chosen means of calorimetry in the new report is to employ the latent heat of vaporization of water — the well-known amount of heat required to boil water to steam, in this case at ambient pressure. The heater/reactor combination shown above was enclosed, with partial insulation, inside a rectangular metal box that was contacted on five of six surfaces by water.

There are some second order effects that might pertain to this boiling water calorimetry, but the method is “tried and true.” It has been employed accurately for well over 100 years, and in a slightly different form (boiling liquid nitrogen) was the method selected in recent SRI calorimetry.[2] With simple precautions such a calorimeter should be accurate within a few percent over a wide range of powers and reactor temperatures. One must take care to account for heat that leaves the calorimeter by means other than steam escaping at ambient pressure, to ensure that water does not leave the vessel in the liquid phase as splattered droplets or mist (fog), and to accurately measure the water mass loss (or its rate, to determine output power). Obviously one also needs to accurately and completely measure the electrical input power.

Although this last issue has been recently (and anciently) raised it is very rarely a problem. Measurement of current, voltage and time (power and energy) are some of the measurements most easily and commonly made. Parkhomov does not supply details of the electrical power or its measurement and he is very much encouraged to do this in his formal reporting. I have no reason, however, to doubt the input power statements. Splatter and mist are issues of observation and calibration and heat leaks are a matter of calibration. Much detail is missing here. Full information about the calibration(s) must be provided in any formal report and full resolution of the question “what do the data tell us?” awaits this detail.




In the meantime, what can we learn? Parkhomov states, without showing the data, that “The power supplied to the heater stepwise varied from 25 to 500 watts.” The thermocouple in the reactor reached 1000°C approximately 5 hours after initial heating. It would be very nice to have these early-time data, together with the calibration data with which to compare; the greatest weakness of this report is the paucity of data. We are forced basically to rely on three data pairs that I have re-tabulated below from the Parkhomov report, with some calculated numbers. Three time intervals of varying duration are reported (Row 2), in which the cell registered an average temperature resulting from the stated average electrical input power, and accumulated the stated Energy In. Parkhomov states from his calibration (not shown) that the heat leak from the system to the ambient is 155 W with the boiler at 100°C. From this heat leak rate we can calculate the energy that leaves in each interval through the insulation, and from the mass of water lost we can calculate the heat that leaves as steam, using the known latent heat of vaporization of water (40.657 kJ/mol, or 2258.7 kJ/kg of H2O). The sum of these is the Total Energy Output, the second half of our three data pairs.
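The bookkeeping just described is simple enough to sketch in a few lines. Only the 155 W heat leak and the latent heat figure come from the report; the interval length and water mass below are hypothetical placeholders standing in for the tabulated values.

```python
# Energy accounting for one interval of the boiling-water calorimeter:
# Total Energy Output = conductive heat leak + heat carried off as steam.

L_VAP_KJ_PER_KG = 2258.7   # latent heat of vaporization of water, as quoted above
HEAT_LEAK_W = 155.0        # Parkhomov's calibrated leak with the boiler at 100 C

def total_energy_out_kj(interval_min, water_lost_kg, leak_w=HEAT_LEAK_W):
    """Total Energy Output (kJ) for one interval."""
    leak_kj = leak_w * interval_min * 60.0 / 1000.0   # leak through the insulation
    steam_kj = water_lost_kg * L_VAP_KJ_PER_KG        # heat leaving as steam
    return leak_kj + steam_kj

# Hypothetical 40-minute interval in which 1.2 kg of water boiled away:
print(total_energy_out_kj(40.0, 1.2))   # ~3082 kJ (372 kJ leak + ~2710 kJ steam)
```

Excess energy for the interval is then this output minus the measured electrical Energy In.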

Table: Data re-tabulated from the Parkhomov report, with calculated Total Energy Output and excess energy for the three time intervals.

These tabulated data (although few) exhibit an impressive set of characteristics:

  • Excess energies of ~120 to ~1900 kJ in 40-50 minutes.
  • Energy output greater than heat leak rate for the two higher input powers so that even if this loss approaches zero there is still calculated excess energy.
  • Percentage excess energies (and therefore average power) of ~20-160% with increasing input power and temperature.
  • Average excess powers of ~50 to nearly 800 W with a very small “fuel” load (0.9g of Ni).
  • Excess power densities of ~60 to nearly 900 W/g of Ni, well within “useful” regimes and consistent with previous CMNS results.
  • Excess power densities for the small reaction volume (~1 cm3) of ~50 to nearly 800 W/cm3.
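As a sanity check on the last three bullets, the quoted densities follow directly from the stated 0.9 g nickel load and ~1 cm3 core volume. The 500 W in / 1300 W out pair below is hypothetical, chosen only to land near the top of the quoted ranges.

```python
# Excess power and power-density figures from one (input power, output power) pair.

FUEL_MASS_G = 0.9       # nickel load stated in the report
CORE_VOLUME_CM3 = 1.0   # approximate reaction volume

def excess_power_metrics(p_in_w, p_out_w):
    p_xs = p_out_w - p_in_w
    return {
        "excess_w": p_xs,                      # average excess power
        "excess_pct": 100.0 * p_xs / p_in_w,   # percentage excess
        "w_per_g": p_xs / FUEL_MASS_G,         # specific power density
        "w_per_cm3": p_xs / CORE_VOLUME_CM3,   # volumetric power density
    }

m = excess_power_metrics(500.0, 1300.0)
# -> 800 W excess, 160% of input, ~889 W/g of Ni, 800 W/cm3
```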

All of these characteristics are exceptionally favorable. In the “plus column” we can also add that the experiment should be very easy to reproduce, and we will hopefully soon have well-engineered replication attempts and conceivably confirmations. The experiment also does not appear to need stimulation[3] beyond heat, hydrogen and possibly lithium, or any solid-nickel/molten-metal interaction. So what are the worries? A very large amount has been said about this experiment, in part because of the spectacular character of the tabulated data. Over and above the obvious need for calibration data and complete run-time data (ideally in the form of numbers, not just plots), not everybody is happy. Why not?

Although others may have further points to add I would summarize three major concerns expressed[4] with the material that has been presented (rather than what was not):

The unexpected behavior of the temperature at high power. When excess power (of apparently considerable power density) is being created, one would expect the temperature of the source to be increasingly elevated. The observed trend is not in the “right” direction.

A plot of the data tabulated by Parkhomov for Reactor Temperature vs. Input Power is a stunningly good fit to a parabola. Because of limits of accuracy and precision experimentalists normally expect such close fits to be the result of calculation, not measurement. The goodness of fit may be explicable by the author or just be a fascinating coincidence.

A temperature arrest of approximately 8 minutes occurred at the end of the experiment after the rapid power and temperature drop following heater failure. This “Heat after Death” episode was preceded by a similar period of apparent temperature fluctuation. Either episode or both might be important signals of the underlying heat generation process or may signal sensor failure. It is difficult to resolve this ambiguity without redundant temperature measurement.

In the absence of relevant calibration data at least, and (better) a finite element model of the complex heat flow from the system as well, one can use only experience and intuition to predict what the reactor thermocouple sensor should register as a consequence of changing input power. The input power to the helical heater has a known (distributed) location. The excess power, however, while (presumably) volumetrically constrained, has no defined or necessarily stationary position within the fuel volume. Even the first step of heat flow is therefore complex, but an argument has been made qualitatively that, all else being equal, if you add a heat source the temperature should go up. Does it?

Let’s look first at a plot of percent excess power (left vertical axis) and temperature (right vertical axis, °C) as a function of input power (W). Three different colored curves are plotted for three different postulated values of the conductive heat leak from the calorimeter: red (155 W) the heat leak power calibrated by Parkhomov and assumed to be constant throughout the active run; blue (102 W) the value that makes the excess power for the first data point zero, as a conservative internal calibration; green (0 W) no heat leak, the most conservative estimate possible for this term. There is nothing at all surprising about this set of curves, and something quite encouraging. The observed excess power cannot be explained by an error in the conductive heat leak or any changing value of that parameter. The temperature of the reactor rises monotonically and smoothly with increasing excess and total power.
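The three-curve comparison can be reproduced with a small sensitivity sweep over the assumed heat leak. The input energy and water loss below are hypothetical placeholders; the point is only that the sign of the computed excess survives even the zero-leak assumption.

```python
# Sensitivity of computed excess power to the assumed conductive heat leak.

L_VAP_KJ_PER_KG = 2258.7  # latent heat of vaporization of water

def excess_power_w(e_in_kj, water_lost_kg, interval_s, leak_w):
    """(leak energy + steam energy - electrical input energy) / interval, in watts."""
    e_out_kj = leak_w * interval_s / 1000.0 + water_lost_kg * L_VAP_KJ_PER_KG
    return (e_out_kj - e_in_kj) * 1000.0 / interval_s

# Hypothetical interval: 1200 kJ of electrical input, 0.8 kg boiled off in 40 min.
for leak_w in (155.0, 102.0, 0.0):   # Parkhomov's calibration, internal zero, none
    print(f"leak {leak_w:5.1f} W -> excess {excess_power_w(1200.0, 0.8, 2400.0, leak_w):5.1f} W")
```

Even with the leak set to zero, the computed excess stays positive, which is the encouraging feature noted above.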

Now let’s look at the same data plotted against the measured reactor temperature below. Here we see some indication of the first concern enumerated above. Although slight, the curvature of this family of curves is upward, suggesting that as the excess (and total) power measured calorimetrically by the released steam increases, so also does the rate of heat (or temperature) loss from the thermocouple sensor. Although this might indicate a measurement problem (unknowable without calibration data), note that the deviation caused by this curvature is well within the variation bounded by the assumed heat leak to the ambient, and might easily be caused by a relatively small change in this calibrated “constant.”

At least two unincluded heat loss terms are known that must cause the heat leak constant to change in the direction that causes upward curvature: radiant heat loss from the reactor to the enclosing metal box at higher temperature, and increased convective transport from the enclosing metal box to the inner wall of the “steamer” at higher rates of steam bubble evolution. I do not know whether the shape of the curve is a problem or not. The point that I would like to reinforce is that we can only answer such questions definitively, and thus gain confidence in the data and therefore knowledge, if we have direct access to calibration data in the relevant temperature regime. I would also like to see a good thermal model, as the reactor/calorimeter system is nowhere near as simple as it seems, having several parallel and series heat transport paths. I realize that such a model would be labor intensive and/or expensive to develop, so let’s start with the calibration. How does the system behave with no possibility of excess power?

As a concluding comment, there are gaps and unexplained effects in the data set, notably the missing calibration data, and the foreground data record is slight. Nevertheless the experiment is clearly specified, easily performed, elegant and sufficiently accurate (with relevant calibration). I would recommend that the experiment be attempted, exactly as described, by anyone curious and with the facilities to do so safely. Anything else or more runs the risk of teaching us nothing. I await further word from Parkhomov and reports from further replication teams.

Footnotes:
[1] Parkhomov has stated that the Ni used to charge his reactor had an initial grain size of ~10 µm and specific area of ~1000 cm2/g.
[2] SRI DTRA report and ICCF17 proceedings.
[3] Note that the lack of need for stimulation is very good for demonstration but undesirable for control and thus technology.
[4] The first two points were elaborated initially by Ed Storms, who may make them more strongly than I do here.

About the Author: Dr. Michael McKubre is Director of the Energy Research Center of the Materials Research Laboratory at SRI International. He received B.Sc., M.Sc. and Ph.D. in chemistry and physics at Victoria University (Wellington, New Zealand). He was a Postdoctoral Research Fellow at Southampton University, England. Dr. McKubre joined SRI as an electrochemist in 1978. He is an internationally recognized expert in the study of electrochemical kinetics and was one of the original pioneers in the use of ac impedance methods for the evaluation of electrode kinetic processes. Dr. McKubre has been studying various aspects of hydrogen and deuterium in metals since he joined SRI in 1978, the last 25 years with a close focus on heat measurements. He was recognized by Wired magazine as one of the 25 most innovative people in the world. Dr. McKubre has conducted research in CMNS since 1989.

***********************************END RE-POST

Related Links

Russian scientist replicates Hot Cat test: “produces more energy than it consumes”

Interview with Yuri Bazhutov by Peter Gluck

Infinite Energy Magazine

Stanley Pons’ Preface from J.P. Biberian’s La Fusion dans Tous ses États translated

Stanley Pons, co-discoverer of cold fusion, left the United States in 1991 amidst an unprecedented assault. Physicists wedded to the 100-year-old standard model of nuclear theory, and whose funding would be jeopardized by this seemingly simpler approach to energy production, ‘threw tantrums’ and attacked with vehemence.

Steven E. Koonin, who left Caltech to work for BP and later served as U.S. Department of Energy Under Secretary for Science (2009-2011); Robert Park, then Director of Public Information for the American Physical Society and author of Voodoo Science; and John Huizenga, co-chair of the Department of Energy panel charged with evaluating the scientific claims and author of Cold Fusion: The Scientific Fiasco of the Century, were just a few of the men who used their authority to create a myth that ultimately denied funding to anyone interested in researching the Fleischmann-Pons Effect (FPE) of excess heat, and blacklisted all scientific papers on the topic from mainstream publication.

Sheila Pons documented the absurd melee in her editorial “‘Fusion frenzy’ stymies research,” published in the Deseret News on March 28, 1990. For the Pons family, as well as the Fleischmanns, the emotional cost was great.

A new laboratory in the south of France, funded by Minoru Toyoda of Toyota Corporation fame, was set up to continue the research. The Institute of Minoru Research Advancement (IMRA) provided a peaceful, supportive setting for the embattled scientists to work.

Dr. Pons describes his early experience in France in the Preface to La Fusion dans Tous ses États: Fusion Froide, ITER, Alchimie, Transmutations Biologiques (Fusion in All Its Forms: Cold Fusion, ITER, Alchemy, Biological Transmutations) by Dr. Jean-Paul Biberian. Published in French in December 2012, a new English version is expected later this year.

Dr. Biberian has worked on cold fusion cells for the past two decades at the University of Marseille Luminy, where he was a physics professor until his retirement last summer. He is also the Editor-in-Chief of the Journal of Condensed Matter Nuclear Science, published by the International Society for Condensed Matter Nuclear Science (ISCMNS).

From the French version, he wrote:
À l’annonce de la découverte de la fusion froide, en 1989, l’ensemble du monde scientifique entre en ébullition. Il serait donc possible de produire de l’énergie illimitée à moindres frais ? Dans de nombreux laboratoires, connus ou inconnus, réputés ou non, chacun tente de reproduire l’expérience dont tout le monde parle. J’ai fait partie de ces pionniers, de cette aventure prometteuse extraordinaire. Mais la fusion froide ne s’est pas faite en un jour.

Laissez-moi vous raconter la petite et la grande histoire, humaine et scientifique, alchimique et biologique, de la fusion froide. Une histoire qui me passionne et qui se poursuit aujourd’hui…

with an English translation:
At the announcement of the discovery of cold fusion in 1989, the entire scientific world came to a boil. Could it really be possible to produce unlimited energy at little cost? In many laboratories, known or unknown, reputable or not, everyone tried to reproduce the experiment the whole world was talking about. I was one of those pioneers, part of that extraordinary, promising adventure. But cold fusion was not built in a day.

Let me tell you the small and the great history, human and scientific, alchemical and biological, of cold fusion. A story that fascinates me and that continues today… –Jean-Paul Biberian, La Fusion dans Tous ses États: Fusion Froide, ITER, Alchimie, Transmutations Biologiques (Fusion in All Its Forms: Cold Fusion, ITER, Alchemy, Biological Transmutations)

Dr. Biberian has been a colleague and friend to Stanley Pons since they first met in 1993 at the IMRA lab.

Infinite Energy Magazine has obtained special rights to publish the English translation of Stanley Pons’ Preface and has made it freely available to the public. [download .pdf]

You can support Infinite Energy Magazine with your subscription.
Your subscription helps to continue the legacy of
Eugene Mallove and the New Energy Foundation.

Related

Edmund Storms at NPA-19: What is cold fusion and why should you care? video August 7, 2012

Too Close to the Sun: 1994 BBC documentary profiles early history of ‘cold fusion underground’ June 7, 2012

World Wide Lab September 18, 2011

Cold Fusion, Derided in U.S., Is Hot In Japan by Andrew J. Pollack NYTimes November 17, 1992

Video: 1989 Steven E. Koonin “we are suffering the incompetence and perhaps delusion of …. New Energy Times

Hot, clean water from cold fusion means worldwide health revolution

In Potential Advantages and Impacts of LENR Generators of Thermal and Electrical Power and Energy published in May/June 2012 Infinite Energy #103 [.pdf version], Professor David J. Nagel describes the impact that clean drinking water produced by cold fusion, also called low-energy nuclear reactions (LENR) would have on human health:

Production of Clean Water
“Humans need water on a frequent basis to sustain life. Roughly one billion people on earth do not have good drinking water now. The possibility of being able to produce drinkable water from dirty rivers and the seas by using the heat from LENR would be momentous.” –David J. Nagel

Cleaning dirty water and de-salinization of ocean water on small and large scales both become possible with cold fusion technology, and hot, clean water produced from small, portable generators could affect the health of a billion people world-wide.

Nagel is a Professor at George Washington University in Washington, D.C. and a founder of NuCat, a company that holds workshops and seminars on cold fusion for scientists, researchers, and potential investors. [visit] Making the case to businesses that they can profit with affordable LENR-based hot-water boilers, he goes on to say:

Favorable pricing of LENR generators for such countries could conceivably contribute significantly to world peace. The situation might be similar to the current sales of medicines for AIDS to poor countries at reduced prices. Rich countries will not soon give poor countries a large fraction of their wealth. However, they could provide some of the energy needed for development and local wealth production at discounted prices, while still making money from manufacturing LENR energy generators. This is an historic opportunity. –David J. Nagel

But the real winners are those suffering with conditions caused by dirty water:

Global Medical Impacts
“The availability of water free of pathogens and parasites to a very large number of people should lead to dramatic reductions of the incidence of many diseases. The savings of lives, human suffering and costs of medical assistance, where it is available, might greatly outweigh the costs of buying and using LENR generators. The better availability of electricity would improve both the diagnostic and therapeutic sides of clinical medicine.” –David J. Nagel

Coal-mining company Massey Energy leaves behind dirty legacy for people and wildlife in the U.S.
That may be a policy of enlightened self-interest on the part of “rich countries”, but just who needs clean water? Just about everybody.

In the U.S., there are people whose water is combustible because of pollutants from nearby hydraulic fracturing, or fracking, for gas. Suzy Williams wrote a song about it in response to Gasland, which documents this atrocity.

But what kind of difference could clean water make in the lives of poor people around the world? The hardship that lack of access to clean water brings to one in seven people around the globe squanders tremendous human capital. According to Water.org [visit],

Women around the world spend 200 million hours every day collecting water, and every 20 seconds a child dies from a water-borne pathogen.

Cold fusion commercial products for domestic use, now in the research and development phase, are small and portable. A 10 kilowatt steam-heat generator has a core the size of a tin of mints, requiring only a few grams of nickel powder and picograms of hydrogen gas to operate. These relatively simple devices could be made affordably for communities in need.

The benefits of clean water from cold fusion were highlighted in another article, published in the December 1996/January 1997 Infinite Energy magazine issue #11 [visit], this one written by researcher and author Jed Rothwell. In it, he commented on “Everyday Killers,” a series of articles in the New York Times about the myriad problems created by lack of access to clean water and mosquito nets. [download .pdf]

Here are some excerpts from that article showing cold fusion researchers have been thinking about the revolutionary benefits of this newly emerging technology for a long time:

It is good to be reminded why cold fusion is so important. The New York Times recently published a two-part series on third world health problems titled “Everyday Killers,” by Nicholas D. Kristof:

Malaria Makes a Comeback. And is More Deadly Than Ever, January 8, 1997
For Third World, Water Is Still Deadly Drink, January 9, 1997

… Almost all water-borne diseases could be eliminated by boiling the water used for cooking and drinking and by cooking foods more thoroughly. Better hygiene would also eliminate them, but boiling will work. Unfortunately, for a family of four in India, the kerosene required to boil the water costs about $4 per month. Many poor families earn less than $20 per month, so this is much more than they can afford.

The waters of the Niger River Delta are used for defecating, bathing, fishing and garbage. Oil companies have removed more than $400 billion of wealth out of the wetland, but local residents have little to show for it.
Cold fusion might ameliorate this problem by giving people cheap energy to boil drinking water and cook food. If a high-temperature cold fusion device could be made as cheaply as a kerosene burner or electric stove, it could save millions of lives every year. Boiling water is a workaround. It is not as effective as proper sanitation. As the article explains, “billions of people in the third world don’t have access even to a decent pit latrine.” In other words, in many parts of the world shovels would do more good than either kerosene or cold fusion. Latrines or septic systems would be a great benefit on land with good drainage and percolation. Concrete lined cesspools can be effective. The next step — to water pipes, sewers, and waste treatment plants — costs far more than poor communities can afford.

The Times listed some statistics for the most common water borne diseases in the 1997 article:

Deaths per Year
Diarrhea: 3,100,000
Schistosomiasis: 200,000
Trypanosomiasis: 130,000
Intestinal Helminth Infection: 100,000
TOTAL: 3,530,000

Sources: World Health Organization, American Medical Association, and the Encyclopedia of Medicine.

Whether you use kerosene or cold fusion, boiling drinking water is a stopgap solution to the problem. It depends on the initiative of individuals. A mother might conscientiously boil drinking water, but when she is not around the children may not bother. It is far better and more efficient to secure a source of pure water for the whole neighborhood or village, and to drain off sewage.

On the other hand, the ad hoc, one-at-a-time method of boiling water is good because it allows individuals to solve the problem on their own, immediately, without depending on community action. It fits in well with the “micro-loan” model of third world assistance programs, which were pioneered by organizations like Oxfam.

Ignorance Is Often the Real Problem
Ignorance causes much of the suffering. Children have no idea that filth causes disease. The Times article opens with a scene familiar to anyone who has traveled in the third world, although it is unthinkable to Americans and Europeans:

Children like the Bhagwani boys scamper about barefoot on the narrow muddy paths that wind through the labyrinth of a slum here, squatting and relieving themselves as the need arises, as casual about the filth as the bedraggled rats that nose about in the raw sewage trickling beside the paths.

Adults realize that this causes disease, but they are not convinced of the fact enough to discipline their children, or to dig proper latrines. In some urban slums there is not enough room, but that is not a problem in rural villages, yet in many of them water-borne diseases are endemic. Many crowded Japanese towns and villages today have no running water or sewer systems. (At least, they still do not in rural Yamaguchi, where I often spend my summer vacation.) Houses are equipped with concrete cesspools only, which were emptied by hand until the 1950s. Yet there has been no water-borne disease in these villages in modern times.

Cold Fusion No Panacea, but Better than Alternatives
…Technology does not help people automatically, just by existing.

…The biggest advantage would be that individual people will decide for themselves to buy the reactor. People will not have to wait for corrupt governments or power companies to serve their needs. They will be able to solve their own problems, just as they do today with micro-loans. –Jed Rothwell, excerpts from “Everyday Killers”

Recently, I met with veteran cold fusion researcher Dr. Melvin Miles [visit] and his colleague Dr. Iraj Parchamazad, Chairman of the Chemistry Department at University of LaVerne in LaVerne, California [visit].

An electrochemist who worked for the Navy, as well as a professor of chemistry at University of LaVerne, the now “retired” Dr. Miles continues to work on palladium-deuterium (Pd-D) electrolytic cells as he has for twenty-three years. He was the first to correlate excess heat with the production of helium, confirming the nuclear origin of the reaction. He is an expert in measuring heat, called calorimetry, as well as measuring the tiny amounts of helium produced by these cells.

I wanted to ask Dr. Miles what he has learned about calorimetry over the past two decades, and I was lucky enough to interview Dr. Parchamazad about his latest work using palladium nanoparticles baked into zeolites and exposed to heavy water (D2O) vapor, with which he has had a 10 out of 10 success rate in generating excess heat.

And a slide from Miles’ presentation at the American Chemical Society meeting in 2007 shows a calculation: if we took all the deuterium atoms in the ocean and fused them into helium, creating energy according to Albert Einstein’s E = mc2, the fuel would burn for 13 billion years:

Slide from Miles' presentation at National Meeting American Chemical Society 2010
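The slide presumably carries its own inputs; a rough version of the same estimate, using textbook values (none taken from the slide) for ocean mass, deuterium abundance, the 2 D → He-4 mass deficit, and present world power consumption, lands in the tens of billions of years, the same order of magnitude as the 13-billion-year figure:

```python
# Order-of-magnitude estimate: how long could ocean deuterium power the world?
# All constants are textbook values; none are taken from the slide itself.

OCEAN_MASS_KG = 1.4e21      # approximate mass of the oceans
D_PER_H = 1.56e-4           # natural deuterium abundance (D/H ratio)
AVOGADRO = 6.022e23
MEV_PER_2D_TO_HE4 = 23.85   # energy released by the 2 D -> He-4 mass deficit
J_PER_MEV = 1.602e-13
WORLD_POWER_W = 18e12       # roughly 18 TW of global primary power
SECONDS_PER_YEAR = 3.156e7

h_atoms = 2 * (OCEAN_MASS_KG / 0.018) * AVOGADRO   # two H atoms per water molecule
d_atoms = D_PER_H * h_atoms
energy_j = (d_atoms / 2) * MEV_PER_2D_TO_HE4 * J_PER_MEV
years = energy_j / (WORLD_POWER_W * SECONDS_PER_YEAR)
print(f"{years:.1e} years")   # tens of billions of years
```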

Remove institutional blocks at MIT and CalTech; fund cold fusion programs now

First published by Infinite Energy (issue #24) in 1999, the MIT and Cold Fusion Special Report [.pdf] by Eugene Mallove featured a detailed history of the Massachusetts Institute of Technology’s (MIT) investigation into the claims made for cold fusion technology. The brief episode of research was undertaken by the MIT Plasma Fusion Center (PFC) in 1989, while Mallove was the school’s News Office Chief Science Writer. Mallove’s report on the hot-fusion scientists’ findings is fully documented, with an analysis that shows a discrepancy between the original lab data and the data published in their final evaluation.

Drs. Pons and Fleischmann with cold fusion energy cells in 1989.
In 1989, two scientists, Drs. Fleischmann and Pons, working out of the University of Utah chemistry department in Salt Lake City, announced the discovery of what was called cold fusion, a clean and powerful form of energy generated in a small test tube of heavy water. The cell made excess heat, which means more heat came out of the cell than went in. And it was a lot of heat, the kind that could be developed into an energy-dense technology to provide clean, abundant power for the entire world. It was an astounding declaration.

Upon learning of this breakthrough discovery, scientists around the world dropped what they were doing and attempted to reproduce the Fleischmann-Pons Effect (FPE). Brilliant individuals and talented researchers from a variety of disciplines, including hot fusion and plasma scientists, threw electro-chemical cells together using materials on hand, and attached a battery.

Unfortunately, for all the groups that attempted the experiment, there was only about a 15% success rate.

Most of the attempts to reproduce the effect failed, and many of the researchers saw nothing out of the ordinary happen.

Within months after the announcement, two of the top science institutes in the United States, with the power to shape policy at the highest levels, had declared cold fusion a ridiculous hoax.

More than any other factor, it was the negative reports by MIT on the east coast, and CalTech on the west, that influenced the U.S. federal policy of excluding cold fusion from the energy portfolio.

Federal agencies cited the recommendations from MIT and CalTech as a basis for their policy.

PFC Director Ronald Parker and professor Dr. Richard Petrasso wrote the MIT final report, making the claim that the Utah scientists had “misinterpreted” their results.

Quoting Mallove’s account, scientists at MIT claimed that “tritium detection in cold fusion experiments at Los Alamos National Laboratory should be ignored because it had been done by ‘third-rate scientists.’” They were of course talking about Dr. Edmund Storms and Dr. Carol Talcott, specialists in tritium and metal-hydrides who were measuring “significant amounts of tritium,” along with other teams at the national lab.

MIT and CalTech expert opinions were broadcast at the peak of TV/satellite media power, just as the Internet was first emerging in the civilian sphere. The message was total. In a story to the press, Parker characterized the work of Fleischmann and Pons as “scientific schlock” and “possible fraud.”

Though he first denied saying anything of the kind, an audio tape made by the reporter confirmed his particular language. The same vocabulary was unleashed on May 1, 1989 at the Baltimore meeting of the American Physical Society with an emotional vehemence uncharacteristic of scientific objectivity.

While Director Parker was meeting with Boston Herald reporter Nick Tate, he took a phone call from NBC-TV news science reporter Robert Bazell during the interview. The press eventually ran the message that cold fusion was a big mistake. Since then, virtually no coverage of cold fusion breakthroughs has been broadcast, with the exception of the 2009 CBS 60 Minutes report Cold Fusion More Than Junk Science.

During the Herald interview, Parker also took a phone call from Richard Garwin, Chief Science Researcher at IBM Corporation and a member of the Energy Research Advisory Board tasked by then-Secretary of Energy James Watkins with determining the federal response to cold fusion. The ERAB ultimately decided there was no need to investigate the phenomenon further.

In the years that followed, then-President of MIT Charles M. Vest also sat on a federal panel that advised President Bill Clinton’s administration to increase funding for hot fusion. The U.S. Department of Energy (DoE) has refused to even acknowledge the existence of cold fusion, resulting in no research funding for over twenty years, including none from their $29 billion 2012 budget.

These reports were cited by the U.S. Patent and Trademark Office (USPTO) to justify diverting cold fusion patents out of the normal processing stream. Mallove stated that the MIT report effectively “killed the Pons and Fleischmann patent, which happened in the Fall of 1997”.

The meme created by MIT and CalTech in 1989 remains in scientific and political circles to this day: that cold fusion is a phenomenon imagined in the minds of lesser scientists.

Dr. Vesco Noninski was the first to question the MIT cold fusion experimental data. A subsequent analysis performed by MIT alumnus Dr. Mitchell Swartz, now of JET Energy, confirmed discrepancies between the original lab data and the reported data. The reported MIT data appear to be shifted downward, indicating that excess heat may in fact have been measured, as represented by the higher-temperature lab data.

Swartz detailed his findings in three papers which can be found in the Proceedings of ICCF-4 prepared by the Electric Power Research Institute in 1993: “Re-Examination of a Key Cold Fusion Experiment: ‘Phase-II’ Calorimetry by the MIT Plasma Fusion Center”, “A Method to Improve Algorithms Used to Detect Steady State Excess Enthalpy” and “Some Lessons from Optical Examination of the PFC Phase-II Calorimetric Curves”. [download .pdf]

But the damage had been done. Administrators were not interested in re-visiting an already dismissed claim.

If it were not for that lucky 15%, we would not have known anything different, and prospects for a clean energy future would indeed be gloomy.

It is now known that for the types of palladium-deuterium electrolytic cells that they were experimenting with, significantly long times are needed to “load” the deuterium into the palladium. Weeks, or even months, could go by before excess heat would be produced. Turning on the cell in the morning, and expecting the effect to occur by dinner, was unreasonable.

In addition, scientists who were experts in their own fields were not necessarily skilled in the complex art of electro-chemical cells. Measuring heat, a science in itself called calorimetry, is difficult even for an experienced electrochemist, let alone a novice. Experiments at both MIT and CalTech were plagued by poor calorimetry.
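To see why calorimetry is so unforgiving, consider a simplified sketch of how excess power is inferred in an isoperibolic electrolytic cell. This is an illustration only, with hypothetical numbers and a generic method, not the PFC’s actual procedure: heat output is estimated from the cell-bath temperature difference via a calibration constant, the electrical input is corrected for the energy carried away by electrolysis gases, and the “excess” is the small difference between two much larger quantities.

```python
# Illustrative isoperibolic calorimetry sketch (hypothetical values,
# not data from any actual experiment).

def excess_power(t_cell_c, t_bath_c, v_cell, i_cell, k_w_per_k,
                 e_thermoneutral=1.54):
    """Estimate excess power (watts) for an open electrolytic cell.

    k_w_per_k       -- calibration constant (W per kelvin), determined
                       from separate calibration runs with a known heater
    e_thermoneutral -- thermoneutral potential for heavy-water
                       electrolysis (~1.54 V); the open cell loses
                       E_tn * I to gas evolution rather than heat
    """
    p_out = k_w_per_k * (t_cell_c - t_bath_c)   # heat leaving the cell
    p_in = (v_cell - e_thermoneutral) * i_cell  # input power that ends up as heat
    return p_out - p_in

# Hypothetical readings: a ~70 mW apparent excess riding on watts of
# input -- a few percent error in k_w_per_k swamps the signal entirely.
print(round(excess_power(32.0, 25.0, 4.2, 0.5, 0.20), 3))
```

The point of the sketch is the last line: the claimed effects sit at the tens-of-milliwatts level, so an uncalibrated drift or a small error in the calibration constant can manufacture or erase the entire result.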

Swartz’ examinations of MIT data twenty years ago were recently extended when Melvin Miles and Peter Hagelstein revisited the PFC’s calorimetric procedures. Miles and Hagelstein published their analysis in the Journal of Condensed Matter Nuclear Science, Volume 8 (2012), pages 132-138. [download .pdf]

Miles is a retired professor and Navy researcher who is an expert in measuring heat. Hagelstein is an MIT Professor of Electrical Engineering who has theorized on the nature of the cold fusion reaction. Hagelstein has collaborated with Mitchell Swartz over the years on several IAP short courses and public demonstrations of active cells on the MIT campus, without the official support of MIT. The most recent cold fusion cell has continued to produce excess heat for six months now.

The summary of the Miles and Hagelstein calorimetry analysis is reproduced here:

[Summary figure from Miles and Hagelstein, Journal of Condensed Matter Nuclear Science, Volume 8 (2012)]
The 1989 report from MIT remains flawed with unjustified shifts of temperature plots and poor calorimetry procedures. Yet this report, along with the CalTech conclusions, established the baseline for all academic and federal policy over two decades.

Twenty years ago, Dr. Charles McCutchen of the National Institutes of Health (NIH), responding to Eugene Mallove’s request to examine the MIT PFC data, asked MIT President Vest:

“For its own good, and to restore some civility to a contentious field, MIT should look into (1) how its scientists came to perform and publish such a poor experiment, (2) why they either misdescribed their results, making them seem more meaningful than they were, or used a subtle correcting procedure without describing exactly what it was, (3) how it came about that data from calorimeters with a claimed sensitivity of 40 mW converged, between drafts, after completion of the experiments, to within perhaps 5 mW of the result that hot fusion people would prefer to see. It might have been chance, but it might not.” –Charles McCutchen, NIH, 1992

In light of the problems that characterized the Plasma Fusion Center’s experiments over those few months in 1989, and in light of the twenty-three years of research confirming without a doubt the existence of a form of energy that is dense, safe and ultra-clean, both MIT and CalTech have two choices: implement Dr. McCutchen’s recommendations, or remove any long-standing institutional blocks that have kept research on cold fusion out of the most prestigious science schools in the U.S. and begin again by instituting a serious program to understand and develop what is now called condensed matter nuclear science (CMNS).

Both MIT and CalTech have refused donor money for cold fusion research. Most recently, an “MIT physicist” blocked a group’s attempt to fund Hagelstein’s research by returning the donated dollars. Meanwhile, the University of Missouri is increasing its support for new-energy company Energetics Technologies with private donations of over $5 million. For elite science schools like MIT and CalTech to ignore the reality of cold fusion is not only a threat to the integrity of our institutions of science, but a threat to our planet.

There is a lot of catching up to do in order to develop the myriad technologies that will allow humankind a second chance at living a technological future, in peace, on a green planet Earth, and we need our most talented and creative minds to do it.


Related Links

How Nature refused to re-examine the 1989 CalTech experiment by Jed Rothwell [.pdf]

JET Energy NANOR device at MIT continues to operate months later by Ruby Carat May 22, 2012

1994 BBC doc profiles early history of cold fusion underground by Ruby Carat June 7, 2012

International Society of Condensed Matter Nuclear Science Publications
