Wikipedia Beyond Cold Fusion: A Journey Into the Depths of Wiki Science

Many have noted that the Wikipedia Cold Fusion article is not a good source for those seeking information on this science. The Wiki article quibbles over whether cold fusion research is actually science. It also does not recognize the peer-review process of LENR-CANR.org or other cold fusion science journals, treating them as publications by a group of self-promoting crackpot scientists deluding us and each other with dreams of infinite energy akin to perpetual motion, i.e. pseudoscience. This limits valid source material, turning Wiki Cold Fusion into a battleground and a poor encyclopedic science article with a very low Wiki rating.

To get to the heart of this matter, we will go beyond the surface of the field of battle at the Wiki cold fusion article and find, there in the depths of Wikipedia, the workings of the science behind the clean low energy nuclear reaction environment, now emerging into the marketplace as popular ‘cold fusion’ LENR energy.

It is heartening to find, in Wikipedia, science that challenges known theory and that confirms the science and the physics surrounding the low energy nuclear reaction. Here we have proof that the coverage of cutting-edge cold fusion research has been sorely mistreated by the senior Wiki editors who ride that post. Explore the depths of Wiki science and find that nowhere else is cutting-edge research that challenges known theory thrown into such a battleground of contention as at the Wikipedia article about Cold Fusion… Now, given recent developments, why is this so?

 

Explore Key Words at Wiki From This Cold Fusion (LENR) Patent

“Method for Producing Heavy Electrons”, NASA LENR Patent (USPTO link)

Surface plasmons (SPs), Surface plasmon polaritons (SPPs), Resonant frequency, Heavy electrons, Metal hydride, Fractal geometry, Energy, Unconventional superconductivity, Weak antiferromagnetism, Pseudo metamagnetism, Hydrogenated/deuterated molecular structures such as graphane and its nanotube variants, Quasi-crystalline arrays, Metamaterials, Dusty plasmas

Surface plasmons (SPs) are coherent electron oscillations that exist at the interface between any two materials where the real part of the dielectric function changes sign across the interface (e.g. a metal-dielectric interface, such as a metal sheet in air). SPs have lower energy than bulk (or volume) plasmons which quantise the longitudinal electron oscillations about positive ion cores within the bulk of an electron gas (or plasma). The existence of surface plasmons was first predicted in 1957 by Rufus Ritchie. In the following two decades, surface plasmons were extensively studied by many scientists, the foremost of whom were T. Turbadar in the 1950s and 1960s, and Heinz Raether, E. Kretschmann, and A. Otto in the 1960s and 1970s. Information transfer in nanoscale structures, similar to photonics, by means of surface plasmons, is referred to as plasmonics. Surface plasmons can be excited by both electrons and photons. (Wiki)

Surface plasmon polaritons (SPPs) are infrared or visible frequency electromagnetic waves trapped at or guided along metal-dielectric interfaces. These are shorter in wavelength than the incident light (photons). Hence, SPPs can provide a significant reduction in effective wavelength and a corresponding significant increase in spatial confinement and local field intensity. Collective charge oscillations at the boundary between an insulating dielectric medium (such as air or glass) and a metal (such as gold, silver or copper) are able to sustain the propagation of infrared or visible frequency electromagnetic waves known as surface plasmon polaritons (SPPs). SPPs are guided along metal-dielectric interfaces much in the same way light can be guided by an optical fiber, with the unique characteristic of subwavelength-scale confinement perpendicular to the interface. Surface plasmons (not SPPs) occur as light-induced packets of electrical charges collectively oscillate at the surfaces of metals at optical frequencies.

Under specific conditions, the light that radiates the object (incident light) couples with the surface plasmons to create self-sustaining, propagating electromagnetic waves known as surface plasmon polaritons (SPPs). Once launched, the SPPs ripple along the metal-dielectric interface and do not stray from this narrow path. Compared with the incident light that triggered the transformation, the SPPs can be much shorter in wavelength. In other words, when SPs couple with a photon, the resulting hybridised excitation is called a surface plasmon polariton (SPP). This SPP can propagate along the surface of a metal until energy is lost either via absorption in the metal or radiation into free-space. (Wiki)
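
As a rough illustration of why SPPs can be shorter in wavelength than the light that launches them, the sketch below applies the standard flat-interface dispersion relation. This is illustrative only, not drawn from the patent or the Wikipedia excerpts; the permittivity of gold at 633 nm is an assumed, typical literature value.

```python
# Illustrative sketch (not from the patent): estimate how much shorter an SPP
# wavelength is than the free-space light that excites it, using the standard
# flat-interface dispersion relation k_spp = k0 * sqrt(eps_m*eps_d/(eps_m+eps_d)).
# The gold permittivity at 633 nm is an assumed, typical literature value.
import cmath

wavelength_0 = 633e-9        # free-space wavelength of the incident light (m)
eps_metal = -11.6 + 1.2j     # assumed relative permittivity of gold at 633 nm
eps_dielectric = 1.0         # air

k0 = 2 * cmath.pi / wavelength_0
k_spp = k0 * cmath.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))
wavelength_spp = 2 * cmath.pi / k_spp.real

print(f"SPP wavelength ~ {wavelength_spp * 1e9:.0f} nm vs {wavelength_0 * 1e9:.0f} nm incident")
# -> roughly 600 nm here: modestly shorter, and far shorter near the plasmon resonance.
```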

Resonant frequencies In physics, resonance is the tendency of a system to oscillate with greater amplitude at some frequencies than at others. Frequencies at which the response amplitude is a relative maximum are known as the system’s resonant frequencies, or resonance frequencies. At these frequencies, even small periodic driving forces can produce large amplitude oscillations, because the system stores vibrational energy. Resonance occurs when a system is able to store and easily transfer energy between two or more different storage modes (such as kinetic energy and potential energy in the case of a pendulum). Resonance phenomena occur with all types of vibrations or waves: there is mechanical resonance, acoustic resonance, electromagnetic resonance, nuclear magnetic resonance (NMR), electron spin resonance (ESR), and resonance of quantum wave functions. (Wiki)
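
A minimal numerical sketch, using assumed values for the natural frequency, damping and drive (nothing taken from the article), shows how the steady-state amplitude of a driven, damped oscillator peaks sharply near its resonant frequency:

```python
# Minimal sketch (illustrative values only): steady-state amplitude of a driven,
# damped harmonic oscillator, A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (g*w)^2).
# The response peaks near the natural frequency w0 even for a weak drive.
import math

w0 = 10.0          # natural (resonant) angular frequency, rad/s (assumed)
g = 0.5            # damping coefficient, 1/s (assumed)
F0_over_m = 1.0    # driving force per unit mass (assumed "small periodic driving force")

def amplitude(w):
    return F0_over_m / math.sqrt((w0**2 - w**2)**2 + (g * w)**2)

for w in (2.0, 5.0, 9.0, 10.0, 11.0, 20.0):
    print(f"drive at {w:5.1f} rad/s -> amplitude {amplitude(w):.4f}")
# The amplitude at w = 10 rad/s is far larger than well off resonance.
```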

Muons (mu mesons aka heavy electrons) Muons are denoted by μ− and antimuons by μ+. Muons were previously called mu mesons, but are not classified as mesons by modern particle physicists. Muons have a mass of 105.7 MeV/c², which is about 200 times the mass of an electron. Since the muon’s interactions are very similar to those of the electron, a muon can be thought of as a much heavier version of the electron. The eventual recognition of the “mu meson” muon as a simple “heavy electron” with no role at all in the nuclear interaction seemed so incongruous and surprising at the time that Nobel laureate I. I. Rabi famously quipped, “Who ordered that?” Muonic helium is created by substituting a muon for one of the electrons in helium-4. The muon orbits much closer to the nucleus, so muonic helium can therefore be regarded as an isotope of hydrogen whose nucleus consists of two neutrons, two protons and a muon, with a single electron outside. Colloquially, it could be called “hydrogen 4.1”, since the mass of the muon is roughly 0.1 u. Chemically, muonic helium, possessing an unpaired valence electron, can bond with other atoms, and behaves more like a hydrogen atom than an inert helium atom. A positive muon, when stopped in ordinary matter, can also bind an electron and form an exotic atom known as muonium (Mu), in which the muon acts as the nucleus. The positive muon, in this context, can be considered a pseudo-isotope of hydrogen with one ninth of the mass of the proton. Because the reduced mass of muonium, and hence its Bohr radius, is very close to that of hydrogen, this short-lived “atom” behaves chemically — to a first approximation — like hydrogen, deuterium and tritium. Since the production of muons requires an available center-of-momentum frame energy of 105.7 MeV, neither ordinary radioactive decay events nor nuclear fission and fusion events (such as those occurring in nuclear reactors and nuclear weapons) are energetic enough to produce muons. Only nuclear fission produces single-nuclear-event energies in this range, but does not produce muons, as the production of a single muon is possible only through the weak interaction, which does not take part in nuclear fission. (Wiki)
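
The statement that muonium’s reduced mass, and hence its Bohr radius, is very close to hydrogen’s is easy to verify with a few lines of arithmetic using standard particle masses (nothing specific to LENR is assumed here):

```python
# Quick arithmetic check of the excerpt's claim that muonium's reduced mass (and so
# its Bohr radius) is very close to hydrogen's. The Bohr radius scales as 1/(reduced mass).
m_e = 0.511      # electron mass, MeV/c^2
m_mu = 105.7     # muon mass, MeV/c^2
m_p = 938.3      # proton mass, MeV/c^2

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

mu_H = reduced_mass(m_e, m_p)    # electron bound to a proton (hydrogen)
mu_Mu = reduced_mass(m_e, m_mu)  # electron bound to a positive muon (muonium)

print(f"reduced mass: hydrogen {mu_H:.4f}, muonium {mu_Mu:.4f} MeV/c^2")
print(f"muonium Bohr radius is ~{mu_H / mu_Mu:.3f}x that of hydrogen")
print(f"muon/proton mass ratio ~ 1/{m_p / m_mu:.1f}")   # the 'one ninth' figure
```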

Metal hydrides Complex metal hydrides are salts wherein the anions contain hydrides. In the older chemical literature as well as contemporary materials science textbooks, a “metal hydride” is assumed to be nonmolecular, i.e. three-dimensional lattices of atomic ions. In such systems, hydrides are often interstitial and nonstoichiometric, and the bonding between the metal and hydrogen atoms is significantly ionic. In contrast, complex metal hydrides typically contain more than one type of metal or metalloid and may be soluble but invariably react with water. (Wiki)

Fractal Geometry One often cited description that Mandelbrot published to describe geometric fractals is “a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole”; this is generally helpful but limited. Authorities disagree on the exact definition of fractal, but most usually elaborate on the basic ideas of self-similarity and an unusual relationship with the space a fractal is embedded in. One point agreed on is that fractal patterns are characterized by fractal dimensions, but whereas these numbers quantify complexity (i.e., changing detail with changing scale), they neither uniquely describe nor specify details of how to construct particular fractal patterns. Other commonly cited features are multifractal scaling (characterization by more than one fractal dimension or scaling rule) and fine or detailed structure at arbitrarily small scales. A consequence of this structure is that fractals may have emergent properties, and an irregularity, both locally and globally, that is not easily described in traditional Euclidean geometric language. (Wiki)
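
For readers who want a concrete handle on “fractal dimension,” here is a small sketch of the standard similarity-dimension formula applied to textbook examples; these examples are illustrative and are not drawn from the patent or the article:

```python
# Illustrative sketch: the similarity dimension D = log(N) / log(1/s) for a
# self-similar fractal made of N copies, each scaled by the factor s. The examples
# are standard textbook fractals, not anything specific to the patent or the article.
import math

def similarity_dimension(n_copies, scale):
    return math.log(n_copies) / math.log(1.0 / scale)

examples = {
    "line segment (2 copies at 1/2)": (2, 1/2),   # D = 1, ordinary geometry
    "Koch curve (4 copies at 1/3)":   (4, 1/3),   # D ~ 1.26, between a line and a plane
    "Sierpinski triangle (3 at 1/2)": (3, 1/2),   # D ~ 1.58
}
for name, (n, s) in examples.items():
    print(f"{name}: D = {similarity_dimension(n, s):.3f}")
```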

Energy In physics, energy is an indirectly observed quantity which comes in many forms, such as kinetic energy, potential energy, radiant energy, and many others; which are listed in this summary article. This is a major topic in science and technology and this article gives an overview of its major aspects, and provides links to the many specific articles about energy in its different forms and contexts. The question “what is energy?” is difficult to answer in a simple, intuitive way, although energy can be rigorously defined in theoretical physics. In the words of Richard Feynman, “It is important to realize that in physics today, we have no knowledge what energy is. We do not have a picture that energy comes in little blobs of a definite amount.”   Whenever physical scientists discover that a certain phenomenon appears to violate the law of energy conservation, new forms may be added, as is the case with dark energy, a hypothetical form of energy that permeates all of space and tends to increase the rate of expansion of the universe. (Wiki)

Unconventional Superconductors are materials that display superconductivity which does not conform to conventional BCS theory or Nikolay Bogolyubov’s theory or its extensions. After more than twenty years of intensive research the origin of high-temperature superconductivity is still not clear, but it seems that instead of electron-phonon attraction mechanisms, as in conventional superconductivity, one is dealing with genuine electronic mechanisms (e.g. by antiferromagnetic correlations), and instead of s-wave pairing, d-waves are substantial. One goal of all this research is room-temperature superconductivity. A room-temperature superconductor is a hypothetical material which would be capable of exhibiting superconductivity at operating temperatures above 0 °C (273.15 K). While this is not strictly “room temperature” (which would be approx. 20–25 °C), it is the temperature at which ice forms and can be reached and maintained easily in an everyday environment. At present, the highest temperature superconducting materials are the cuprates, which have demonstrated superconductivity at atmospheric pressure at temperatures as high as -135 °C (138 K). It is unknown whether any material exhibiting room-temperature superconductivity exists. The interest in its discovery arises from the repeated discovery of superconductivity at temperatures previously unexpected or held to be impossible. The potential benefits for society and science if such a material did exist are profound. (Wiki)

Weak antiferromagnetism One of the fundamental properties of an electron (besides that it carries charge) is that it has a dipole moment, i.e., it behaves as a tiny magnet. This dipole moment comes from the more fundamental property of the electron that it has quantum mechanical spin. The quantum mechanical nature of this spin causes the electron to only be able to be in two states, with the magnetic field either pointing “up” or “down” (for any choice of up and down). The spin of the electrons in atoms is the main source of ferromagnetism, although there is also a contribution from the orbital angular momentum of the electron about the nucleus. When these tiny magnetic dipoles are aligned in the same direction, their individual magnetic fields add together to create a measurable macroscopic field. However, in materials with a filled electron shell, the total dipole moment of the electrons is zero because the spins are in up/down pairs. Only atoms with partially filled shells (i.e., unpaired spins) can have a net magnetic moment, so ferromagnetism only occurs in materials with partially filled shells. Because of Hund’s rules, the first few electrons in a shell tend to have the same spin, thereby increasing the total dipole moment. These unpaired dipoles (often called simply “spins” even though they also generally include angular momentum) tend to align in parallel to an external magnetic field, an effect called paramagnetism. Ferromagnetism involves an additional phenomenon, however: the dipoles tend to align spontaneously, giving rise to a spontaneous magnetization, even when there is no applied field.

Diamagnetism Diamagnetism is a magnetic response shared by all substances. In response to an applied magnetic field, electrons precess (see Larmor precession), and by Lenz’s law they act to shield the interior of a body from the magnetic field. Thus, the moment produced is in the opposite direction to the field and the susceptibility is negative. This effect is weak but independent of temperature. A substance whose only magnetic response is diamagnetism is called a diamagnet.

Paramagnetism Paramagnetism is a weak positive response to a magnetic field due to rotation of electron spins. Paramagnetism occurs in certain kinds of iron-bearing minerals because the iron contains an unpaired electron in one of their shells (see Hund’s rules). Some are paramagnetic down to absolute zero and their susceptibility is inversely proportional to the temperature (see Curie’s law); others are magnetically ordered below a critical temperature and the susceptibility increases as it approaches that temperature (see Curie-Weiss law).

Ferromagnetism Collectively, strongly magnetic materials are often referred to as ferromagnets. However, this magnetism can arise as the result of more than one kind of magnetic order. In the strict sense, ferromagnetism refers to magnetic ordering where neighboring electron spins are aligned by the exchange interaction. Below a critical temperature called the Curie temperature, ferromagnets have a spontaneous magnetization and there is hysteresis in their response to a changing magnetic field. Most importantly for rock magnetism, they have remanence, so they can record the Earth’s field. Iron does not occur widely in its pure form. It is usually incorporated into iron oxides, oxyhydroxides and sulfides. In these compounds, the iron atoms are not close enough for direct exchange, so they are coupled by indirect exchange or superexchange. The result is that the crystal lattice is divided into two or more sublattices with different moments.

Ferrimagnetism Ferrimagnets have two sublattices with opposing moments. One sublattice has a larger moment, so there is a net unbalance. Ferrimagnets often behave like ferromagnets, but the temperature dependence of their spontaneous magnetization can be quite different. Louis Néel identified four types of temperature dependence, one of which involves a reversal of the magnetization. This phenomenon played a role in controversies over marine magnetic anomalies.

Antiferromagnetism Antiferromagnets, like ferrimagnets, have two sublattices with opposing moments, but now the moments are equal in magnitude. If the moments are exactly opposed, the magnet has no remanence. However, the moments can be tilted (spin canting), resulting in a moment nearly at right angles to the moments of the sublattices. (Wiki)
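
Since the excerpt leans on Curie’s law and the Curie-Weiss law, a tiny sketch with arbitrary illustrative constants (not measured values) shows the temperature dependence each law describes:

```python
# Minimal sketch of the Curie law (chi = C/T) and Curie-Weiss law (chi = C/(T - Tc))
# mentioned in the excerpt. The Curie constant and Tc are arbitrary illustrative
# values, chosen only to show the qualitative temperature dependence.
C = 1.0        # Curie constant (arbitrary units, assumed)
T_c = 300.0    # ordering (Curie) temperature in kelvin (assumed)

def curie(T):
    return C / T                 # simple paramagnet

def curie_weiss(T):
    return C / (T - T_c)         # grows rapidly as T approaches Tc from above

for T in (1000.0, 600.0, 400.0, 320.0, 305.0):
    print(f"T = {T:6.1f} K   Curie chi = {curie(T):.5f}   Curie-Weiss chi = {curie_weiss(T):.5f}")
# The Curie-Weiss susceptibility climbs steeply as the material nears its ordering temperature.
```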

Metamagnetism is a blanket term used loosely in physics to describe a sudden (often dramatic) increase in the magnetization of a material with a small change in an externally applied magnetic field. The metamagnetic behavior may have quite different physical causes for different types of metamagnets. Some examples of physical mechanisms leading to metamagnetic behavior are: Itinerant Metamagnetism – Exchange splitting of the Fermi surface in a paramagnetic system of itinerant electrons causes an energetically favorable transition to bulk magnetization near the transition to a ferromagnet or other magnetically ordered state. Antiferromagnetic Transition – Field-induced spin flips in antiferromagnets cascade at a critical energy determined by the applied magnetic field. Depending on the material and experimental conditions, metamagnetism may be associated with a first-order phase transition, a continuous phase transition at a critical point (classical or quantum), or crossovers beyond a critical point that do not involve a phase transition at all. These wildly different physical explanations sometimes lead to confusion as to what the term “metamagnetic” is referring to in specific cases. (Wiki)

Graphane is a two-dimensional polymer of carbon and hydrogen with the formula unit (CH)n where n is large. Graphane should not be confused with graphene, a two-dimensional form of carbon alone. Graphane is a form of hydrogenated graphene. Graphane’s carbon bonds are in sp3 configuration, as opposed to graphene’s sp2 bond configuration, thus graphane is a two-dimensional analog of cubic diamond. The first theoretical description of graphane was reported in 2003 and its preparation was reported in 2009. Full hydrogenation from both sides of a graphene sheet results in graphane, but partial hydrogenation leads to hydrogenated graphene. If graphene rests on a silica surface, hydrogenation on only one side of graphene preserves the hexagonal symmetry in graphane. One-sided hydrogenation of graphene becomes possible due to the existence of ripplings. Because the latter are distributed randomly, the graphane obtained is expected to be a disordered material, in contrast to two-sided graphane. Annealing allows the hydrogen to disperse, reverting the material to graphene. Note: p-doped graphane is postulated to be a high-temperature BCS theory superconductor with a Tc above 90 K. (Wiki)

Surface Layering (quasi-crystalline arrays) Surface layering is a quasi-crystalline structure at the surfaces of otherwise disordered liquids, where atoms or molecules of even the simplest liquid are stratified into well-defined layers parallel to the surface. While in crystalline solids such atomic layers can extend periodically throughout the entire dimension of a crystal, surface layering decays rapidly away from the surface and is limited to just a few layers in the near-surface region. Another difference between surface layering and crystalline structure is that atoms or molecules of surface-layered liquids are not ordered in-plane, while in crystalline solids they are. Surface layering was predicted theoretically by Stuart Rice at the University of Chicago in 1983 and was experimentally discovered in 1995 by Peter Pershan (Harvard) and his group, working in collaboration with Ben Ocko (Brookhaven) and Moshe Deutsch (Bar-Ilan), in elemental liquid mercury and liquid gallium using x-ray reflectivity techniques. More recently, layering has been shown to arise from electronic properties of metallic liquids, rather than thermodynamic variables such as surface tension, since surfaces of low-surface-tension metallic liquids such as liquid potassium are layered, while those of dielectric liquids, such as water, are not. (Wiki)

Metamaterials are artificial materials engineered to have properties that may not be found in nature. They are assemblies of multiple individual elements fashioned from conventional microscopic materials such as metals or plastics, but the materials are usually arranged in periodic patterns. Metamaterials gain their properties not from their composition, but from their exactingly-designed structures. Their precise shape, geometry, size, orientation and arrangement can affect the waves of light or sound in an unconventional manner, creating material properties which are unachievable with conventional materials.  These metamaterials achieve desired effects by incorporating structural elements of sub-wavelength sizes, i.e. features that are actually smaller than the wavelength of the waves they affect. (Wiki)

Plasmonic metamaterials are metamaterials that exploit surface plasmons, which are produced from the interaction of light with metal-dielectric materials. Under specific conditions, the incident light couples with the surface plasmons to create self-sustaining, propagating electromagnetic waves known as surface plasmon polaritons (SPPs). Once launched, the SPPs ripple along the metal-dielectric interface and do not stray from this narrow path. Compared with the incident light that triggered the transformation, the SPPs can be much shorter in wavelength. By fabricating such metamaterials, fundamental limits tied to the wavelength of light can be overcome. Light hitting a metamaterial is transformed into electromagnetic waves of a different variety—surface plasmon polaritons, which are shorter in wavelength than the incident light. This transformation leads to unusual and counterintuitive properties that might be harnessed for practical use. Moreover, new approaches that simplify the fabrication process of metamaterials are under development. This work also includes making new structures specifically designed to enable measurements of the materials’ novel properties. Furthermore, nanotechnology applications of these nanostructures are currently being researched, including microscopy beyond the diffraction limit. (Wiki)

Dusty Plasmas A dusty plasma is a plasma containing millimeter (10⁻³ m) to nanometer (10⁻⁹ m) sized particles suspended in it. Dust particles are charged and the plasma and particles behave as a plasma. Dust particles may form larger particles resulting in “grain plasmas”. Due to the additional complexity of studying plasmas with charged dust particles, dusty plasmas are also known as Complex Plasmas. Dusty plasmas are interesting because the presence of particles significantly alters the charged particle equilibrium leading to different phenomena. It is a field of current research. Electrostatic coupling between the grains can vary over a wide range so that the states of the dusty plasma can change from weakly coupled (gaseous) to crystalline. Such plasmas are of interest as a non-Hamiltonian system of interacting particles and as a means to study generic fundamental physics of self-organization, pattern formation, phase transitions, and scaling. (Wiki)
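
To put a rough number on the “weakly coupled to crystalline” range, the sketch below estimates the Coulomb coupling parameter for dust grains using assumed, lab-typical values; the grain charge, spacing and temperature are illustrative, not measured figures, and Debye screening is ignored.

```python
# Rough estimate (illustrative, assumed lab-typical values) of the Coulomb coupling
# parameter Gamma = Q^2 / (4*pi*eps0 * a * kB*T) for charged dust grains.
# Gamma << 1 is gas-like (weakly coupled); very large Gamma (order 10^2 and up) is
# the regime associated with plasma crystals. Screening, ignored here, lowers Gamma.
import math

e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
kB = 1.381e-23       # Boltzmann constant, J/K

Z = 1.0e4            # assumed elementary charges per dust grain
a = 1.0e-4           # assumed inter-grain spacing, 100 micrometres
T = 300.0            # assumed dust kinetic temperature, K

Q = Z * e
gamma = Q**2 / (4 * math.pi * eps0 * a * kB * T)
print(f"coupling parameter Gamma ~ {gamma:.2e}")
# -> order 1e4-1e5 for these assumed values: strongly coupled, consistent with the
#    'crystalline' dusty-plasma states mentioned above.
```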

This brings us to the end of our exploration for now. (There will be more of this Wiki series in the near future.)

I hope you have enjoyed the trip. When reading the NASA LENR – Cold Fusion Patent after completing this journey,  you may be surprised at the depth of your new insight. Also, please remember, when informing others about LENR – Cold Fusion Energy be sure to tell them… “Explore beyond the surface of ‘Wikipedia Cold Fusion’ and take a journey into the depths of Wiki science.”

Thanks,

Greg Goble

 

HEY! Visit http://lenr-canr.org/ This site features a library of papers on LENR, Low Energy Nuclear Reactions, also known as Cold Fusion. (CANR, Chemically Assisted Nuclear Reactions, is another term for this phenomenon.) The library includes more than 1,000 original scientific papers reprinted with permission from the authors and publishers. The papers are linked to a bibliography of over 3,500 journal papers, news articles, and books (they even have a few quality encyclopedia articles) about LENR Science and Engineering… popularly known as ‘cold fusion’ now, and forever, historically speaking.

 

NEXT

LENR Sister Field – Thermoelectric Energy

Soon I will take us back in time to the field of invention of Harold Aspden, the father of efficient thermoelectric energy devices, the likes of which power NASA deep space probes. Key words in his breakthrough patent are worth noting, as they are common to the science and the environment of the cold fusion phenomenon. Harold Aspden was fascinated by cold fusion as well as the biological transmutation of elements, seeing them as relevant to his field… the science of thermoelectric energy conversion.

 

“Responsibly imaginable” LENR solutions from NASA

Dr. Dennis Bushnell, Chief Scientist at NASA Langley Research Center, has authored a new report, “Advanced-to-Revolutionary Space Technology Options – The Responsibly Imaginable,” which includes low-energy nuclear reactions (LENR).

This particular report focuses on how humans might visit Mars. “In general revolutionary goals [such as Mars-Humans] require revolutionary technology,” writes Bushnell, and LENR is part of NASA Langley’s portfolio of solutions.

In this case, “responsibly imaginable” means that “Nominal and usual enabling timescales for such technologies are the order of 12-to-15 years for research and triage, and another 12-15 years for development.”

For humans to visit Mars, a large infrastructure must be launched from Earth orbit. This “up-mass” consists of all the elements to keep humans alive and safe, what NASA describes as “pink and warm”. Long-term space exploration has so far been powered by radioisotope thermoelectric generators (RTGs), which use radioactive plutonium to make heat that is turned into electricity using thermoelectric converters. Plutonium, aside from being dangerous and dirty, is in short supply these days, and recent Mars robotic rover missions have utilized most of the last store of material.

One thing about LENR, it can make heat. Dr. George Miley, Professor Emeritus at the University of Illinois Urbana-Champaign and head of start-up LENUCO, proposed replacing RTGs with LENR devices at the NETS conference a year ago, and has since speculated how these devices could power domestic needs as well.

Bushnell’s report also seeks to refute a statement made at a recent USAF workshop that claimed, according to Bushnell, “Space is a Mature and Declining field of Endeavour in the U.S.”

To do so, he first lists the experimental and commercial efforts with which NASA has been engaged over the decades to develop new useful technologies but that have not succeeded. Then he explains why, writing “Essentially, Space Commercialization going forward is hostage to far lower space access/launch costs.”

Rudolf Nebel, left, is pictured with German rocketeers including (second from right) Wernher von Braun, 19 at the time, and Professor Hermann Oberth (to the right of the rocket). – Image credit: NMMSH Archives
Tracing current space technology from World War II-era ballistics, Bushnell says that efforts to improve the current chemical fuel technology have been unprofitable within business models because “the current cost of access to space is in the range of thousands-of dollars per pound-of-payload.”

“We have, throughout the “Space Age” starting in the 1950’s, been in thrall to Chemical Propulsion. Breakthroughs in energetics and propulsion beyond Chemical appear to be required going forward.”

Fortunately, Bushnell believes that “The relatively straight forward applications of the ongoing civilian and military worldwide technology investments will greatly reduce mission cost and enhance safety going forward.”

Advances in materials science, information- and nano-technology have reduced the mass of spacecraft and thus the cost, but as Bushnell notes, “humans are not shrinking”, thus lower-cost lift, propulsion and power systems are required. That’s where LENR comes in, for high-thrust in-space propulsion.

“The LENR situation is in a major state of flux with recent apparently successful theoretical efforts and indications of much higher yields,” writes Bushnell, adding “if proven technologically viable, (LENR) would also be candidates for in space and on planet power.”

While the report inventories an array of possible technologies, here’s the portion devoted to LENR:

Low Energy Nuclear Reactions, the Realism and the Outlook
“LENR could, by itself, COMPLETELY Revolutionize Space access and utilization. Although there is a quite long history of “anomalous” observations including transmutations the “recent” consideration of Low Energy Nuclear Reactions begins with the Pons/ Fleishman late 80’s observations and assertions regarding what they termed “Cold Fusion”. Subsequent difficulties with experimental replication and an utter lack of convincing theoretical explication forced research in this arena “underground” with minimal financial support.”

“The current situation is that we now have over two decades of hundreds of experiments indicating heat and transmutations with minimal radiation and low energy input. By any rational measure this evidence indicates something real is occurring.”

“So, is LENR “Real”? Evidently, from the now long standing and diverse experimental evidence – yes – With effects occurring using diverse materials, methods of energy addition etc. This is FAR from a “Narrow Band”/episodic set of physical phenomena.”

“The next consideration is “WHAT IS REAL? WHAT IS Happening? For NASA Langley the epiphany moment on LENR was the publication of the Widom-Larsen weak interaction LENR Theory. This theory is currently under study and experimental verification [or not] at NASA LaRC. The theory appears to explain nearly all the various and often variegated experimental observations and shifted the LENR Theoretical focus from some way of “fooling” Particle Nuclear Physics/The Strong Force to Condensed Matter Nuclear Physics, Collective Effects, The Weak Force and “Heavy Electrons”. The Strong Force Particle Physicists have of course evidently been correct all along, “Cold Fusion” is not possible. HOWEVER, via collective effects/condensed matter quantum nuclear physics LENR is allowable without any “Miracles”. The theory states that once load surfaces with hydrogen/protons and add some energy IF the surface morphology enables high localized voltage gradients then “heavy electrons leading to ultra low energy neutrons will form, neutrons that never leave the surface. The neutrons set up isotope cascades that results in beta decay and heat and transmutations with the heavy electrons converting the gamma from the beta decay into heat.”

“The theory indicates several key issues/circumstances are required to enable-to-optimize
LENR and explains the various experimental observations, including the often long initiation times required in some experiments. If the theory is experimentally validated in detail it provides the understanding to shift LENR research from discovery into engineering development. The theory indicates energy densities some million times chemical, the current experiments are in the 10’s to hundreds range. However, several labs have blown up studying LENR and windows have melted, indicating when the conditions are “right” prodigious amounts of energy can be released/produced. There are some 6 or so groups claiming device outputs in the 100 watt range and 3 others claiming kilowatts. Efforts are ongoing within NASA and other organizations to validate, or not, these claims. It should be noted that these devices are essentially “Edisonian,” the result of attempts at experimental “discovery” vice ab-initio design from the weak interaction theories per se.”

“The LENR situation and outlook is therefore the following: Something real is happening,
the weak interaction theories suggest what the physics might be, there are efforts ongoing to explore the validity of the theories, there are continuing Edisonian efforts to produce “devices,” mainly for heat or in some cases Transmutations. There are efforts to “certify” such devices. We are still FAR from the theoretical limits of the weak interaction physics and are in fact inventing in real time the requisite Engineering, along with verifying the physics. When we concentrated upon Nuclear Engineering beginning in the 1940’s we went, “jumped” to the strong force/particle physics and leapt over the weak force, condensed matter nuclear physics. We are going “back” now to study and hopefully develop this arena.”

“The “Precautionary Principle” demands that we core down and determine realism for this arena, given the truly massive-to-mind boggling benefits – Solutions to climate, energy and the limitations that restrict the NASA Mission areas, all of them. The KEY to Space Exploration is Energetics. Some examples of what LENR might/ could enable in a resultant “Energy Rich” Exploration context include:

– Refrigeration for Zero Boiloff cryo storage
– Active Radiation protection
– High Thrust Vasimir/MHD Propulsion
– Energy Beaming
– Separation of propulsive mass and energy/ energetics to establish the requisite
conductivity for most “harvested” propulsive mass including regolith
– Planetary Terraforming
– Ubiquitous in space and on-planet sensors and robotics
– LEO propulsion
– On planet power and energy
– EDL retro via heating of ingested mass”

“Also, The Key to SST’s and neighbor-friendly personal fly/drive air vehicles is Energetics, as simplex examples of the potential implications of this area of research. There are estimates using just the performance of some of the devices under study that one percent of the nickel mined on the planet each year could produce the world’s energy requirements at the order of 25 percent the cost of coal. No promises but something[s] seriously “strange” are going on, which we may be closer to understanding and if we can optimize/engineer such the world changes. Worldwide, worth far more resources than are currently being devoted to this research arena. Need to core down and determine “Truth”. If useful, need to engineer/apply.” —Dr. Dennis Bushnell Advanced-to-Revolutionary Space Technology Options The Responsibly Imaginable

They won’t call it cold fusion, but NASA is keeping an open mind to new energy technology through experimentation and testing of a LENR theory. Bushnell knows the importance of “not picking the winner” too early on. It isn’t easy, even for a national Chief Scientist.

In the report, Bushnell quotes Ivan Bekey from Chapter 9, “Long Term Outlook for Commercial Space,” in the book “Toward a Theory of Space Power”:

“The Nation’s space programs are in a horrible mess and appear to be locked in a
downward spiral. Almost all defense and civil government space programs suffer from
similar symptoms:

– no toleration of or planning for failures
– avoidance of risk
– lack of funding for new technologies
– inability of industry to afford research or to develop technologies alone
– suppression of disruptive technologies
– disappearance of the concept of experimental systems in space.

As a result of these symptoms, the following conditions are now the norm:

– absence of innovation
– long timelines for even modestly new developments
– billion-dollar price tags for major systems
– major overruns and schedule slips
– need for long on-orbit life to amortize the investment
– obsolescence of systems upon launch or soon thereafter.”

Bushnell sums it up with:

Overall – As mentioned in the Bekey comments cited herein, the pursuit of Revolutionary Space Technologies has over the years been akin to a battle, with the forces of conservatism/evolution consistently winning over those advocating risky/huge payoff REAL “Game-Changing” approaches. The Space Community has simply been unwilling to make the investment of time and treasure to ideate and triage/develop Revolutionary Technologies, resulting in Space being largely and still a high capital investment evolutionary at best Industrial Age Endeavour in the IT age heading rapidly to the Virtual Age.

and ends with this quote of Ivan Bekey:

The battle is within. It is a cultural one: between glorifying the past or
marching toward the future, between protecting successes or cannibalizing them,
between averting risk or embracing it. The battle is for the soul of the Industry [and the Future of Humankind in Space].

… and we’ll add ‘on Earth too’!

Cold Fusion Now!

Related Links

Review of NASA/Zawodny US Patent application David J. French

Aneutronic Fusion Spacecraft Architecture – LENR Cold Fusion Gregory Goble

NASA LENR series Gregory Goble

‘Pathological Science’ is not Scientific Misconduct (nor is it pathological) by Henry H. Bauer

The term pathological science was first coined by chemist and Nobel laureate Irving Langmuir during a speech in 1953 when he described scientific investigations done on the premise of “wishful thinking”.

From a transcript of Langmuir’s almost-lost talk:

These are cases where there is no dishonesty involved but where people are tricked into false results by a lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking or threshold interactions.

Langmuir’s criteria, or ‘symptoms’, for pathological science are:

1. The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
2. The effect is of a magnitude that remains close to the limit of detectability; or, many measurements are necessary because of the very low statistical significance of the results.
3. Claims of great accuracy.
4. Fantastic theories contrary to experience.
5. Criticisms are met by ad hoc excuses thought up on the spur of the moment.
6. Ratio of supporters to critics rises up to somewhere near 50% and then falls gradually to oblivion.

This is important to the field of condensed matter nuclear science (CMNS) as cold fusion has been labeled “pathological science” and pushed outside of mainstream research. The result has been a lack of funding, the absence of a coordinated research plan, a decreased intellectual pool working to solve the problem, and the continued use of hydrocarbons and dirty nuclear power with its associated horrors.

In 2006, author Jed Rothwell wrote a piece on Wikipedia addressing each of Langmuir’s criteria as they relate to cold fusion science, providing a case-by-case rebuttal. Wikipedia has campaigned hard to remove positive information on the topic, and that article is now deleted, but you can read it here, archived on Ludwik Kowalski’s page.

Virginia Tech Professor of Chemistry Henry H. Bauer looked at the criteria themselves.

In his essay ‘Pathological Science’ is not Scientific Misconduct (nor is it pathological) published in HYLE–International Journal for Philosophy of Chemistry, he writes that “they do not provide useful criteria for distinguishing bad science from good science (Bauer 1984, pp. 145-46; Physics Today, 1990 a,b)” and that they “are no more valid than the many other suggestions as to how to distinguish good science from pseudo-science (Bauer 1984, chapter 8; Laudan 1983)” noting that “many praised pieces of research satisfy one or more of Langmuir’s criteria for pathology.”

For the case of cold fusion, Bauer writes:

The most recent major outcry over ‘pathological science’ was occasioned by ‘cold fusion’. A number of books about this episode have appeared, all of them quite strongly pro- or con-. This author, who himself worked in electrochemistry from the early 1950s to the late 1970s, has discussed the merits and defects of these books in several reviews (Bauer 1991; 1992b, c; 1995).

In 1989, Martin Fleischmann and Stanley Pons announced at a press conference at the University of Utah that they had brought about nuclear fusion at room temperature in an electrochemical cell: they had measured heat production too great to explain by other than nuclear processes.

Many physicists dismissed the claims as impossible from the outset, yet confirmations were being announced from all over the world. Within months, however, many of these were withdrawn; other laboratories reported failures to replicate the effect; and a committee empaneled by the US Department of Energy concluded that there was nothing worth pursuing in these claims. Within a year or two, those working on cold fusion had become separated from mainstream scientific communities, holding separate conferences and often publishing in other than mainstream publications. However, at the present time, a dozen years after the initial announcement, a considerable number of properly qualified people continue to believe the chief claim, that nuclear reactions can be achieved at ambient temperatures under electrochemical conditions (Beaudette 2000).

What have Fleischmann and Pons been accused of that was ‘pathological’?

They had announced their discovery at a news conference and not in peer-reviewed publication. They had failed to reveal all details of their procedures. The heat effect remained elusive: no one could set up the experiment and guarantee that excess heat would be observed, sometimes it was and sometimes not. They had performed incompetent measurements of nuclear products and then fudged the results. They had failed to understand that nuclear reactions would inevitably release radiation, and that the level of radiation corresponding to the heat claimed to have been generated would have been lethal. Nuclear theory in any case showed that fusion could not occur under such mild conditions, it required higher temperatures and pressures to many orders of magnitudes, as in the interior of stars.

But of all those criticisms, only the one about fudging nuclear measurements can be sustained, and that does not bear on the issue of whether or not cold fusion is a real phenomenon.

Announcing results first at news conferences has become standard practice in hot fields, for example molecular biology and genetic engineering. It was routine during the initial years of excitement about high-temperature superconductors. Also in that field, some workers quite deliberately put misleading information into their publications, correcting them at the last moment only, in order to preserve secrecy (Felt & Nowotny 1992; Roy 1989).

Lack of replicability does not mean that a phenomenon is necessarily spurious. Semiconductors did not become transistors and microchips in the 1930s because the presence of then-unsuspected, then-undetectably-small amounts of impurities made the phenomena irreproducible, elusive. Certain effects of electromagnetic fields on living systems remained difficult to reproduce for a century or more (Bauer 2001a, pp. 125, 132-33). Perhaps only electrochemists would recognize how vast is the number of experimental variables that might affect reproducibility in cold-fusion systems: almost innumerable variations in the physical characteristics of the electrodes and in the electrical regimen as well as all sorts of possible contaminants, conceivably active at levels that might be virtually impossible to detect by other means than their interference with the looked-for effect.

As to theoretical possibility, “Although cold fusion was, in terms of ‘ordinary’ physics, absurd, it was not obviously so; it contravened no fundamental laws of nature” (Lindley 1990, p. 376). Physics Nobelist Julian Schwinger was among those who proposed explanations for how cold fusion might occur. It may be well to recall in this connection that lasers and masers were also regarded as impossible before their discovery, and indeed by some eminent people even after they had been demonstrated (Townes 1999).

Once again, as with N-rays and polywater, it turns out that nothing occurred that could rightly be called pathological. The leading cold-fusion researchers went at their work just as they had at the other research that had established their good reputation, in Fleischmann’s case sufficiently distinguished as to warrant a Fellowship of the Royal Society. Fleischmann had always been known as an adventurous thinker, the sort of person – like the astrophysicist Thomas Gold (1999) – whose suggestions are always worth attending to even when they do not work out. His competence was beyond question, and it was not at all uncharacteristic for him to follow apparently far-out hunches. Sometimes they had paid off for him. Moreover, he had ample grounds from earlier work to look for unusual phenomena when electrolyzing heavy water at palladium electrodes, and he had quite rational grounds for speculating that nuclear reactions might proceed in the solid state under quite different conditions than in plasmas (Beaudette 2000, chap. 3).

The single criticism that is not to be gainsaid concerns how Fleischmann and Pons altered the reported results from initial attempts to measure radiation from their cells. But there is more to be noted here about such apparent instances of scientific misconduct. Fleischmann and Pons were tempted into these actions because they had tried to make measurements without properly learning all the ins and outs of the technique: they thought they could measure radiation by just taking a radiation meter and placing it near their cell. In point of fact, a great deal needs to be known about circumstances that can affect the functioning of such instruments (temperature, for example) and about how to eliminate background signals, as well as about how to interpret the measurements. In this, Fleischmann and Pons were falling into the same trap as many of their critics who, without experience of electrochemistry, thought they could connect together some cells and batteries and palladium electrodes and test within days or weeks what the experienced electrochemists had struggled for several years to bring about.

The transfer of expertise across disciplinary boundaries affords great challenges, and this instance illustrates that a superficial view might label as misconduct what is basically a natural result of failing to recognize how intricately specialized are the approaches of every sort of research. Much of the fuss about cold fusion is understandable as an argument between electrochemists and physicists as to whether empirical data from electrochemical experiments is to be more believed or less believed than apparently opposing nuclear theory (Beaudette 2000). To electrochemists it may seem perverse, possibly even scientific misconduct, to rule out of the realm of possibility competently obtained results because some theory in physics pronounces them impossible. To nuclear physicists, it may seem incompetence verging on scientific misconduct for electrochemists to invoke nuclear explanations just because they cannot understand where the heat in their experiments comes from.

As in the case of N-rays, one can plausibly level charges of scientific misconduct against those who denounced the cold-fusion studies. A journalist baselessly charged a graduate student with falsifying evidence of the production of tritium and this charge was published in Nature. The legitimacy of work by a distinguished Professor at Texas A & M University was questioned in two separate, long-drawn-out investigations that ultimately found him innocent of any wrongdoing. One participant in the cold-fusion controversy suggested that critics were guilty of “pathological skepticism” (Accountability in Research 2000).” —Henry H. Bauer ‘Pathological Science’ is not Scientific Misconduct (nor is it pathological)

Read The Pseudoscientists of the APS by Eugene Mallove and Jed Rothwell and decide for yourself if the shoe fits the “pathological skeptics”.

Conventionally-minded men like John R. Huizenga, the co-chair of the Energy Research Advisory Board who “investigated” the phenomenon in 1989 but failed to include the positive results made by Navy scientists in the final report, and American Physical Society spokesman Robert Park who lobbied vociferously against any mention of the research in mainstream scientific and political circles, both routinely used the vocabulary “pathological science” among other words, to describe the very real and active research. Applying the term to cold fusion was an early effort to discredit experimental data that could not be explained within the prevalent theoretical models.

“But the mainstream is always antagonistic to highly novel discoveries or suggestions, even when they become acceptable later: any suggestion that paradigms need to be changed is routinely resisted (Barber 1961), sometimes by effectively ignoring the claims (Stent 1972)”, writes Bauer.

From Henry H. Bauer Ethics in Science
“The most striking potential discoveries bring about revolutionary paradigm shifts.”

“The accepted rules and procedures for doing normal science are not adequate to bring about potentially revolutionary science: as is well known, hard cases make bad laws.”

Bauer does not endorse the work of Drs. Martin Fleischmann and Stanley Pons. He does make the case that cold fusion is not deserving of the moniker “pathological science” and that Langmuir’s criteria are themselves suspect. Based on the historical record of discovery, he reminds us that it is normal for conventional minds to resist change.

The only difference now is that we are at the end-run of a several-thousand-year-old expansion; over 7 billion people cover the planet, all needing water, food, and energy. Ecological systems are strained, economic relationships are broken, and energy resources are becoming difficult to access, expensive to process, and dangerous to use. We have a solution for a clean energy world waiting to be developed; to wait for history to swing around is no longer an option.

“Pathological science” is a distraction we can tolerate no more.

Cold Fusion Now!

Related Links

Ethics in Science by Professor Henry H. Bauer

Plasma engine reproduced; now optimizing for efficiency

“This is a power source that mankind needs to have. We need to make a push and get this energy out in the hands of people.” – Dr. Michael McKubre, TeslaTech 2012

A revolutionary new engine powered by the pulsing of plasma has been successfully reproduced from an earlier lost technology and now needs an increase in efficiency to make a usable generator, say engineers from RG Energy, developers of the device.

They’ve launched a fundraising campaign at Indiegogo to accomplish this.

Cross-section from Papp patent; watch historical video at pappengine.com
Cross-section from 1984 Papp patent; watch historical video at RGEnergy.com

The design is derived from Josef Papp‘s noble gas engine, its secret inert gas mixture lost when he died. Granted a patent in 1984 for his invention, Papp was not explicit about the precise mix required.

Bob Rohner of RG Energy, along with his brother Tom, built those engines for Mr. Papp and worked closely with him over several years beginning in 1979. Owning successful businesses in Iowa, they had experience with large machinery and controls. “I pursued it because it was such a remarkable invention.”

“We’re trying to clean up the environment, we’re trying to get away from carbon-based fuels to clean up our atmosphere. We need to maintain the energy so we can have our quality of life,” Rohner said in a recent interview. [.mp3]

“These engines are capable of operating for long periods of time on charges of inert gases. They’re totally sealed, easy to build, economical to operate, and they’re totally clean,” says Bob, the mechanical engineer of the team.

“They have no emissions whatsoever to the environment.”

“We have proven to ourselves, and to many savvy observers, that we are capable of firing the single-cylinder, the two-cylinder, and a rotary fixture with inert gas pulses for hours on end, lifting large amounts of weight with no heat build-up.”

An astonishing claim, as inert gases do not readily react with other elements, and though a plasma performs work in pushing a piston, there is no exhaust to vent, and no heat generated. The engine appears to defy the laws of thermodynamics and what is known about heat cycles.

“One of my physicists, Frank Andres, believes there is a kind of re-absorption going on. Myself, I don’t believe heat is ever created to begin with,” attests Bob.

“To drive that piston up against a 300-lb load like I’m doing would take something like 1200-1400 degrees flash heat, and the Teflon insulators have a highest temperature rating of maybe 600 degrees, so it doesn’t appear as if the high-temperature is ever there to begin with. It amazes me that universities aren’t jumping all over this.”

Energy-dense plasma power

Witnesses to demonstrations of Papp’s engines, including aerospace executives, engineers and Navy scientists, have publicly testified to their capability of producing great amounts of power, without knowing exactly how they work.

“When I first heard of the Papp Engine, I was intrigued because it’s clearly impossible,” said Dr. Michael McKubre, a senior scientist at the forefront of research in new energy from metal-hydrides who spoke at TeslaTech 2012 in support of the project.

But after himself interviewing witnesses from Papp’s engine demonstrations, who reported “observations of tests running hours or days in a closed room, no exhaust, with negligible temperature rise,” McKubre got interested.

Bob watched Papp’s engine produce 5-10 times over-unity on a few ccs of inert gases and an electrical input. He and Tom finally developed the Twin Cylinder Dual Piston Engine as an experimental set-up that has, they believe, reproduced Papp’s process using a mere 200 ccs of a gas mixture.

The power generated by the inert gas plasma is convertible, meaning power can be extracted as electricity or mechanical force, depending on how the inputs are adjusted, but now they must increase the efficiency of the engine.

“We’re quite confident we can complete the recovery of this technology,” says Rohner, “our progress is steady and much has been achieved, but we have essentially three major items to complete yet.”

1 Build Papp’s gas mixer

One time Bob started the motor and got a huge amount of power; the next time, not nearly as much. It’s been hypothesized that impurities in the gas mixture prevent a full reaction.

“I didn’t believe it at first, but small amounts of contamination will have a stark effect on power output,” says Rohner.

Pre-mixed gas canisters are accurate to about 5%

“It turns out that 1/10,000 contamination has a major effect on plasma formation. Now it’s time to make a fuel mixture more accurate and clean.”

Papp’s last patent was for an apparatus that apparently filtered the gases as well as made sure the mixture was exactly correct. When the Rohners first went to Florida, they saw Papp’s glass-tube mixer “all busted-up on the floor”, damaged from shipping. For years afterwards, Bob didn’t think much of it.

“About six months ago, I found a bunch of these old pictures and started going through them, and I realized Papp had not only replaced that mixer once, he actually built two new ones. Nobody has adequately researched or rebuilt this fuel mixing apparatus yet, and it’s looking like a worthwhile thing, probably the most important element to get completed as soon as possible.”

2 Big step with the crossover

“It was described in Papp’s patent that the engine could run as one-cylinder, but only in an inefficient manner, and the evidence is proving that statement correct. To run efficiently you always have to be in pairs,” says Bob. “You can’t be on a teeter-totter with one person, you got to have both.”

The Twin Cylinder will have crossover

With electrical crossover, one cylinder powers the other, and vice-versa.

“As the plasma collapses in one cylinder, it generates a large amount of electricity, and as the piston travels through the coils, it creates a back EMF. The back EMF and this generated electricity needs to be transferred at the proper time to the opposite cylinder, to one, excite the gases, and two, trigger the plasma pulse in the opposite cylinder.”

3 Re-assemble the original Papp engine

Videos of Papp’s original engine running on the dynos are posted online, but the machine that Bob and Tom built years ago now sits in their workshop awaiting resurrection. Results from the electrical crossover work on the Twin Cylinder will be applied to Papp’s original to complete a serial engine.

“It’s perfectly capable of handling several hundred HP, which is what we need to do. The structure is good, it needs the electrics re-built and some of the stainless sleeves re-honed. Everything on the engine is custom, so it’s very expensive to do.”

Help wanted

Tom Rohner and early plasma engine.

Bob’s brother Tom Rohner had a career as a Chief Software Architect at both Compaq and HP Computers in addition to working on plasma engines. A year and a half ago, Tom succumbed to pancreatic cancer. Bob lost his best friend as well as Tom’s extensive skills as a computer and electrical engineer.

RG Energy is now looking for a talented full-time Electrical Engineer to complete the sophisticated Twin Cylinder crossover.

“Buckets have to switch from being both anodes to being an anode-cathode, at certain times you’re handling 1000 Volts and at other times you’re handling 40 Volts at high currents, and so this stuff is precise. We’re looking for somebody to fill Tom’s shoes.”

Experienced electrical engineer needed for crossover

With a good electrical engineer, it should take 8-12 months to get the gas mixer, the crossover and the build of the serial model completed. These engines could then enter the market quickly, complementing cold fusion technology.

Bob adds, “This is a compact energy source that can be mobile, it can run in outer space or the bottom of the ocean. The difference between this engine and cold fusion is that cold fusion is heat-driven. To generate electricity you instantly drop to a 30-40% loss in production. Then you have to take that electricity to some storage source, to batteries, and yet, here’s a generator that you can run for thousands of hours on one charge. The flexibility of this power source is more important than anything.”

“I hope cold fusion is developed, but to base our future on one solution isn’t that smart.”

“We have odd electricity and odd energy, both real and I believe, coupled,” says Michael McKubre. “To really do the job well, we need a well-equipped lab and able experts willing to suspend disbelief. I’ve been following Bob around ever since, waiting for him to produce his machine, and he’s getting close.”

Learn more about this project at the Gyrokinetic Plasma Engine on IndieGoGo


Related

http://www.RGEnergy.com/

Bob Rohner, special guest on Wide Awake News, Monday May 6, Hour 1 [download .mp3] on the Rense Network.

The Mystery and Legacy of Joseph Papp’s Noble Gas Engine by Eugene Mallove, Infinite Energy Magazine

 

Gerald Celente on “the biggest element that could change the world in a positive way”

Gerald Celente is singular in his field.

Founder of the Trends Institute and its signature publication, the Trends Journal, he features new energy in his forecasting work. An early subscriber to Infinite Energy magazine and friend to Eugene Mallove, Celente has been following and writing about cold fusion since 1989.

Why is he so alone among forecasters? Gerald Celente answers in this CFN exclusive.

“Whether it’s infinite energy, or the economy or geopolitics, people have belief systems, and it’s very hard to break out of them, so they have a very narrow vision of the future,” says Celente.

“For example, in the political realm people will say things like I’m a Republican or a Democrat, I’m a liberal, I’m a conservative, I’m left-wing or right-wing. We don’t care who you are or what you believe in. We just look at the facts. It’s what is.”

“And to think that here we are in the 21st century and fossil fuels are our main source of energy, it’s a fossil idea.”

For the rationale, Celente loosely quotes from his nationally bestselling book Trends 2000: How to Prepare for and Profit from the Changes of the 21st Century, published January 1, 1997, and translated into eight different languages:

“Looking back from Galileo to the Wright brothers to Wegener and his theory of continental drift, all through the history of science, there’s a sequence of breakthrough discoveries that are automatically dismissed and derided by the establishment of the day.”

“Newspapers ignored the Wright brothers’ flight in 1903. Scientific American claimed it was a hoax. ‘For the next five years – with the Wright Brothers routinely flying test-flights around the field at Kitty Hawk – experts continued to argue that heavier-than-air machines were scientifically impossible.'”

“And the special interests are big roadblocks. Do you think the coal or oil or gas industry wants to see a competitor that could put them out of business?” he asks. “Can you imagine Exxon Mobile going out of business? Think of what refrigeration did to the ice business. There’s no more icemen around!”

“There are special interests that have their ideas of what the future should look like and whether it’s going in that direction or not. They are blinded by their desk-vision, and I have no interest in special interests.”

“In the end, people want results”

“I go back to those days when Infinite Energy magazine was born. I remember what happened with Fleischmann and Pons, and their inability to replicate the experiments,” Celente recalls.

Magazine covers on May 8, 1989

“I don’t know if it was a matter of money or a matter of science, I don’t judge those things. But going back to those days, I can tell you the high-expectations for an alternative energy were actually much higher than they are today. For someone who has been writing about it all those years, I remember the excitement and the massive coverage it was getting.”

And he doesn’t mean the conventional alternatives of wind, solar, geothermal or bio-fuel.

“Whether it’s cold fusion or hydrino power, whatever it might be, when the results are there, they’ll forget who said what, and who was right or wrong. But now, I can’t believe that we’re so locked into the 19th and 20th century ideas of energy and its potential.”

“If we could build an atom bomb to fight a war with a Manhattan Project that wasn’t compromised by special interests – we could have a Manhattan Project of alternative energy, if the will was there, and the money was there, and if the special interests weren’t there.”

“Necessity is the mother of invention, and I think out of necessity, we’re going to see that new invention, if it doesn’t come first out of the human spirit. But again, there’s so much holding it back.”

“The biggest factor, of course, is money”

Regarding the problems that independent labs have protecting their intellectual work, Celente says, “We’ve written about the strangle-hold the patent office has on new patents, too. Without the achievable goal and the money, it’s all small ventures, and people are trying to protect their rights as well as trying to develop the system, so it’s very difficult without the money.”

He wonders when the billionaires are going to “do something really big with their money. They could make this thing happen in no time.”

According to Celente, an angel isn’t the answer, just “an enlightened human being.”

“When I wrote my book Trend Tracking back in 1988, the government had spent $13 trillion fighting the Cold War. That number’s probably quadrupled since then.”

“Look at the moronic things they’re doing every day: fighting wars, losing old ones, starting new ones. Look at the money they wasted on this F35; it’s going to cost a trillion dollars. That could have brought us an alternative energy.”

He continued, “As of March 31, the United States Congress appropriated more than $54 billion to support the ANSF Afghan National Security Forces – just this year. This is according to military.com. And as of March 31, the United States has spent $10.4 billion on equipment and vehicles, meaning the American tax-payer paid for that.”

“Neither the will or the money is there,” he declares.

“Alternative energy could change the entire economic paradigm. It is the biggest, biggest element I see that could change the world in a positive way.”

“Imagine being off the grid and getting all the energy you could need. Imagine how our foreign policy, and that of the rest of the world, would change. There would be no Iraq war, there would be no Libyan war. It was about oil, it certainly wasn’t about weapons of mass destruction and ties to Al Qaeda, they didn’t have any. But Libya has some of the finest sweet crude oil on the planet. The whole middle east situation changes.”

“To me, at this time, alternative energy is the holy grail of civilization”

“It could be much bigger, for example, than the dot.com boom, and the whole Internet revolution. You saw what that did. This could trump that by thousands of times. It’s one of our most important discoveries waiting to happen that could really be a civilization-changer.”

“But as my father would say, may his soul rest in peace, ‘don’t get upset, we’re dealing with people who have little minds.’ So that’s it, we’re dealing with people with little minds, and they just can’t see the big picture.”

“Scientific and industrial opposition may slow the energy revolution”, writes Celente, but it will not stop it, and we can look forward to “a revolution on the order of the discovery of fire”, making it possible for people to live the alternative lifestyles they believe in.

From Trends 2000:
The energy revolution will be the single-biggest investment opportunity of the twenty-first century. Its ramifications will extend to practically every aspect of human and planetary life. To profit from the trend, potential investors should start familiarizing themselves with the field thoroughly and immediately, and keep abreast of developments before they become official. Recommended reading: Infinite Energy magazine, Post Office Box 2816, Concord, New Hampshire.
Gerald Celente, Trends 2000

Freed from reliance upon established institutions and infrastructures, people with access to free energy can “make a life of voluntary simplicity, self-sufficiency and techno-tribalism” based in a “psychology of inner freedom”.

Cold Fusion Now!

Related Links

Trends Journal taps new energy again Ruby Carat

The Wright Aeroplane and its Fabled Performance from Scientific American Vol. XCIV No. 2 January 13, 1905 page 40 courtesy Mississippi State University

The Wright Brothers and Cold Fusion Jed Rothwell

 

Field work on cesium decontamination by nano silver

On December 10, 2011, Dr. Norio Abe carried out a radioactive cesium decontamination experiment using nano silver at a kindergarten in Koriyama city, Fukushima prefecture, Japan. Dr. Abe sent me the resulting data, and here I summarize the experiment.

1. Outline of the experiment

On December 10, 2011, at a kindergarten in Koriyama city, Dr. Abe performed the experiment at the same time as the roof was being washed with water for decontamination. He used a filtration device called “Lucy”, which has four stages of filtration, including nano silver supported on bone charcoal and white granite.

He filtered the cesium-contaminated water three times through Lucy. After each pass, the radioactivity of the filtered water was measured by a professional company using a germanium detector.

As a result, the radioactivity of the water was reduced to about 1/10 of the initial value. Furthermore, he measured the radioactivity of the used filter over 8 days at the Itabashi Firefly Ecosystem Center, and it too fell to less than 50% of its starting value.

2. Video (in Japanese)


[The experiment at the kindergarten]


[The radiation measurement at the Itabashi Firefly Ecosystem Center]


3. Experimental results

The following values indicate the contamination level of the water (Bq/L) after each of the three filtration passes.

(Bq/L)    Initial   1st pass   2nd pass   3rd pass
Cs-134      13300       1840       1350       1340
Cs-137      18800       2490       1910       1830
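
As a quick arithmetic check, the short sketch below (Python) computes what fraction of the initial radioactivity remains in the water after each filtration pass, using the values copied from the table above; it simply reproduces the “reduced to about 1/10” summary.

```python
# Quick arithmetic check of the table above: fraction of the initial
# radioactivity remaining in the water after each filtration pass
# (Bq/L values copied from the table).
readings = {
    "Cs-134": [13300, 1840, 1350, 1340],
    "Cs-137": [18800, 2490, 1910, 1830],
}

for isotope, values in readings.items():
    initial = values[0]
    fractions = ", ".join(f"{v / initial:.1%}" for v in values[1:])
    print(f"{isotope} remaining after passes 1-3: {fractions}")

# Cs-134: 13.8%, 10.2%, 10.1%  /  Cs-137: 13.2%, 10.2%, 9.7%
# Consistent with the "reduced to about 1/10" summary; most of the removal
# happens on the first pass.
```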

 



After the experiment, he carried Lucy to the Itabashi Firefly Ecosystem Center and measured the radioactivity of the filter.  The following values indicate the radioactivity of the bone charcoal and white granite containing the nano silver.

 

[Data table: radioactivity of the used filter measured over 8 days]


The measurement started on December 11, while the experiment was done on December 10, so the initial radioactivity might have been higher than 0.56 μSv/h. Further, since the background radioactivity was not measured, no background value was subtracted.
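
For context, the following sketch uses the standard, well-known half-lives of Cs-134 (about 2.06 years) and Cs-137 (about 30.2 years), which are not part of the experiment’s data, to estimate how much of the filter’s activity ordinary radioactive decay alone would remove in 8 days. Decay accounts for well under 1%, so if the reported measurements are accurate, the drop to below 50% is not explained by decay.

```python
# Rough context (standard, well-known half-lives; not data from this experiment):
# how much filter activity natural decay alone would remove in 8 days.
half_lives_days = {"Cs-134": 2.065 * 365.25, "Cs-137": 30.17 * 365.25}
elapsed_days = 8

for isotope, t_half in half_lives_days.items():
    remaining = 0.5 ** (elapsed_days / t_half)
    print(f"{isotope}: {remaining:.2%} of the activity would remain after {elapsed_days} days")

# Both values are above 99%, so if the reported measurements are accurate,
# a drop to below 50% in 8 days cannot be explained by ordinary decay.
```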

The result is very impressive because it seems to show that nano silver can reduce cesium radioactivity in a real-world situation.  I hope other scientists will pay attention to the experiment and try to reproduce these results.

Cold Fusion Now!

 


Related

Nanoscale Ag may decrease radiation of Cesium 134 and 137 through LENR transmutation? Toshiro Sengaku
