Quantum gravity physics based on facts, giving checkable predictions

Monday, March 06, 2006

Edwin Budding, a Research Fellow of the Carter National Observatory in New Zealand, and currently at Canakkale Onsekiz Mart University in Turkey, has sent me his 16-page paper on inertia and the mechanism of gravitation. The basic physical ideas for gravity and inertia are somewhat similar to http://feynman137.tripod.com/, although the mathematical treatment is interestingly different. It does not make the numerical connections and predictions, but it does formulate the mechanism of inertia and of the FitzGerald-Lorentz contraction physically, within the classical model of LeSage.

Budding notes that Maxwell, in his article 'Atom' in the 9th edition of the Encyclopedia Britannica (p. 38), dismisses LeSage's gravity on the basis that the energy density required in the vacuum to cause gravity would somehow cause the Earth to vaporise, a dismissal repeated by Poincare in his 1906 communication to the Bull. Astronomique, 17, p. 121. In fact, there is no mechanism for the energy of the gravity-causing radiation to be converted into heat, and quantum field theory - the best-tested physical theory in human history - is based on gauge boson radiation causing forces! So somewhere between the 1875-1906 rebuttals of Maxwell and Poincare and the modern formulation of quantum field theory as Feynman diagrams with radiation mediating all known forces, the LeSage mechanism was forgotten. If we accept that some kind of quantum gravity is implied by force unification, then by Einstein's equivalence principle of inertial and gravitational mass, the gauge boson radiation that mediates gravity also causes inertial effects. Rueda and Haisch have demonstrated how to derive laws like F = ma from quantum field theory, although their calculations did not give testable numerical predictions for other phenomena.

Feynman discussed LeSage's gravity mechanism in detail in his BBC2 TV lectures, recorded in America in 1964 and transmitted in 1965. He said it does give the inverse square law, but objected that in its form at that time it did not make useful predictions, in particular in the context of drag. Budding answers Feynman's problem on pages 15-16, where he shows that the dynamics of radiation pressure cause the Lorentz-FitzGerald contraction of moving objects in the direction of motion, together with time dilation. (Lorentz derived the mass increase and the time dilation before Einstein; time dilation was also dealt with by Larmor and others around 1901.) The contraction is caused by the kinetic energy added to the body to overcome inertia (i.e., to accelerate it). So the major 'problem' Feynman had with LeSage is really the secret of the mechanism behind the equations of 'special relativity'!

(To be fair, Feynman did finish his discussion by leaving the question wide open to further study, and later in the same series of lectures he commented: "It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.")

In the treatment at http://feynman137.tripod.com/ the overlap problem is ignored on the basis that the source of gravity for most of the problems of interest is the Earth, which doesn't contain enough fundamental particles for a significant chance of overlap. Suppose you stand on a 1 cubic metre block of matter, or alternatively, suppose you hold it above your head. How much shielding does it afford against gravity-causing radiation? The cross-sectional shield area for each fundamental particle is Pi multiplied by the square of the black hole radius for its mass (R = 2GM/c^2). This area is so small that even if your 1 cubic metre shield were lead, the probability of two fundamental particles overlapping (one lying directly behind the other, as exposed to graviton radiation) is infinitesimal. But if you packed in so much matter that the total shield cross-sectional area in your cubic metre was 1 square metre, you would have 100% shielding if you ignore overlap. Taking account of overlap (statistically, for many randomly distributed fundamental particles), the true shielding in that case would stop not 100% but only 100(1 - 1/e) = 63.2% of the gravitons.
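A minimal numerical sketch of this overlap correction, assuming (as the statistical argument does) that the fundamental particles are randomly placed across the shield face, so the stopped fraction is 1 - e^(-n) for a summed cross-section of n per unit face area:

```python
import math

def shielding_fraction(total_cross_section_m2, shield_face_m2):
    """Fraction of gravitons stopped by randomly placed particles.

    With particle positions random (Poisson statistics), overlap makes
    the stopped fraction 1 - exp(-n), where n is the summed particle
    cross-section per unit area of the shield face.
    """
    n = total_cross_section_m2 / shield_face_m2
    return 1.0 - math.exp(-n)

# The case in the text: summed particle cross-sections equal to the
# 1 m^2 shield face (n = 1) stop 63.2% of gravitons, not 100%.
print(f"{100 * shielding_fraction(1.0, 1.0):.1f}%")  # 63.2%
```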

Now calculate the shield area for the Earth. The mass of the Earth is 5.97 x 10^24 kg, whereas the mass of a nucleon (neutron or proton) is around 1.67 x 10^-27 kg. Hence the Earth contains about 3.6 x 10^51 nucleons, each with a cross-sectional shielding area of just 1.9 x 10^-107 square metres. Hence the total shield area of the Earth is 6.9 x 10^-56 square metres.
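A quick check of that arithmetic (only standard values of G, c and the nucleon mass are assumed):

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8             # speed of light, m/s
m_nucleon = 1.67e-27  # nucleon mass, kg
m_earth = 5.97e24     # Earth mass, kg

nucleons = m_earth / m_nucleon     # ~3.6e51 nucleons
r_bh = 2 * G * m_nucleon / c**2    # black hole radius of a nucleon, ~2.5e-54 m
sigma = math.pi * r_bh**2          # shielding cross-section, ~1.9e-107 m^2
total_area = nucleons * sigma      # Earth's total shield area, ~6.9e-56 m^2

print(f"nucleons in Earth:         {nucleons:.1e}")
print(f"cross-section per nucleon: {sigma:.1e} m^2")
print(f"Earth's total shield area: {total_area:.1e} m^2")
```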

This is so small that the Earth is almost completely transparent and there is no real chance of overlap by any two particles, so you can treat gravity as a sum of contributions from individual particles, as Newton did; but obviously when you start considering galaxies or clusters of galaxies, the situation changes quantitatively. Another thing to remember is that the chance of overlap increases with density. Suppose you have an army being machine-gunned. The probability that a single bullet will hit two men will be very small if the men are dispersed in the open, but will be great if they are all gathered together. My approach to overlap is to introduce a shielding correction of the general sort used in radiation shielding.

To explain overlap briefly, consider by analogy the gamma ray shielding afforded by matter (electrons do most of the shielding, via the Compton effect, for typical 1 MeV gamma rays; nuclear shielding only becomes important - via 'pair production' - for higher-energy gamma rays).

To calculate the Compton effect shielding of 1 MeV gamma rays, you use the Klein-Nishina treatment of quantum mechanics to give the cross-sectional area for scattering by an electron. Since you know how many electrons an atom has, and how many atoms there are in a unit volume, you can calculate the mean free path of a gamma ray: the average distance it travels in the material before being scattered by an electron. The attenuation of the incident gamma radiation is then given by exp(-x/y), where x is the thickness of the material and y is the mean free path for the radiation. The reason why this is an exponential law is the compound probability of overlap. If there were no overlap, then a definite thickness of material would scatter 100% of the radiation: the thickness for which the sum of the microscopic cross-sectional areas equals the actual macroscopic shield area.
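A short sketch of the exponential attenuation law; the mean free path of about 1.2 cm for 1 MeV gamma rays in lead is an approximate textbook figure, assumed here purely for illustration:

```python
import math

def transmitted_fraction(thickness_cm, mean_free_path_cm):
    """Fraction of an incident gamma beam penetrating unscattered."""
    return math.exp(-thickness_cm / mean_free_path_cm)

# ~1.2 cm: approximate mean free path for 1 MeV gammas in lead
# (an assumed illustrative value, not a precise datum).
mfp = 1.2
for x_cm in (1.2, 5.0, 10.0):
    f = transmitted_fraction(x_cm, mfp)
    print(f"{x_cm:5.1f} cm of lead: {100 * f:.3f}% transmitted")
# The transmitted fraction never reaches exactly zero, however thick
# the shield: attenuation is exponential, not a hard cutoff.
```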

However, there is no such thickness for gamma radiation. The probability of unscattered gamma rays penetrating a material never falls to zero, no matter how thick the material is. All you can do is calculate the attenuation factor for a given thickness of material. (This is only the case for gamma rays and other uncharged radiation like neutrinos and neutrons; for alpha and beta particles there is a more definite 'range', because they cause ionisation continuously as they travel through material, and after a certain distance all their original kinetic energy is lost.)

Ivor Catt, who built the model of 'crosstalk' used in computer chip design, points out that when you charge up an object, electric energy enters it at light speed, and similarly exits at light speed when it is discharged. Catt argues that since there is no mechanism for electric energy to slow down, charge is electromagnetic energy. In a 'static' charged object, as much energy flows in any given direction at any given time as flows in the opposite direction, so the magnetic field curl vectors cancel out while the electric field vectors add up. This is confirmed both by Catt's experiments with capacitors and by his calculations treating capacitors as transmission lines. Because fundamental charges are gravitationally trapped energy, their shielding cross-section is that of a black hole of the same mass.

What about the spin, magnetic dipole, spherically symmetric electric field, and mass of an electron? We know that charge is not the same as mass: the effective electric charge rises by only 7% when you collide electrons at 90 GeV (Levine et al., Physical Review Letters, v. 78, 1997, no. 3, p. 424), and the Standard Model quantum field theory suggests that the reason is that higher-energy impacts break through the polarised shield of the vacuum, exposing more of the higher core charge: 'All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.'

On the other hand, the mass of the electron increases indefinitely with velocity. The answer to this disagreement is the physical mechanism responsible for mass: the Higgs field, electromagnetic energy associated with vacuum particles which gives rise to mass through a physical mechanism. Consider the deflection of starlight: the star images gravitationally deflected during the eclipses which confirmed general relativity were sharp, not diffuse. This shows that the light is not being hit directly by light-speed graviton particles; if it were, such particle-particle interactions would be expected to produce diffusely scattered light. Instead, the sharp deflection indicates that whatever causes gravity acts indirectly on the light. The light wave is an electromagnetic disturbance propagated by the spacetime fabric, and it is subject to gravity because the energy in the light wave couples, in passing, to the spacetime fabric including the Higgs field. Gravity distorts the spacetime fabric, creating the curved geodesics described by the mathematics of general relativity. The spacetime fabric is perhaps best viewed as a spin foam network, behaving as a perfect fluid (creating only inertia, not continuous drag).

So an electron is a gravitationally trapped loop of Heaviside-Poynting energy current. The radial electric field from each part of the loop forms a spherically symmetric electric field as seen at a large distance, while the magnetic field forms a toroidal shape close up, which becomes a magnetic dipole as seen from a large distance, in agreement with the known properties of the electron. These facts are entirely non-speculative: http://feynman137.tripod.com/. Nobody can claim an electron is not a charge with a magnetic dipole and spin. These are hard facts. If you think an electron is something else, like a 10-dimensional string, you need to prove that you can get the observed properties of the electron from that description, instead of being dismissive of other people. (I try not to mention Lubos Motl and Jacques Distler at this point. Whoops!)

Anyway, Edwin Budding has done the detailed calculations for a distributed array of shielding particles using the LeSage gravity idea, plus the Hubble expansion as the mechanism. His comparison between theory and observation concerns galaxies.

I like the introductory paragraph of Budding's paper, which seems to be a good answer to the propaganda of 'string theorists' who claim to have predicted gravity by speculating about spin-2 gravitons without any checkable tests or predictions:

'Modern science must surely receive much of its interest and support from the practical value of its applications. In turn, this puts weight behind an 'operational' or empirical stance to physics. This places practical measurement to the fore when discussing physical concepts: if we cannot provide a feasible operation to measure a physical entity, such an entity can be dismissed as practically irrelevant, and dropped from the vocabulary of useful physics.'


Dear Edwin,

Thank you very much for your PDF paper on Aristotelian inertia and gravitation. Obviously, it is very exciting and valuable to see your treatment of the problem! From my own experience of endless reformulations and improvements, the mechanism of gravity is one of those problems which looks simple and easy, but that appearance is deceptive when you first try to tackle it! I'm very impressed that you treat the absorption through a sphere of uniform density.

My approach is to treat all the mass as being located at the centre of the object, using Newton's geometrical demonstration that this is equivalent. Because of shielding overlap by particles, this is not strictly true: the higher the density, the greater the chance of overlap. If you think of bullets being fired inward on a group of soldiers, the closer together they stand, the greater the chance that one bullet will hit two people. However, I had quite a lot to do with the mechanism in the context of nuclear and electromagnetic forces, and left this issue aside.

You may well be right about the galactic problems; I haven't checked through your calculations, but I notice the exponential absorption term in equations 14 and 15, namely 1 - e^(-nsRx). This is mathematically analogous to the shielding of gamma radiation by matter, where the fraction of the beam transmitted is simply e^(-ax) and the fraction absorbed is 1 - e^(-ax), where x is the thickness of the material and a = 1/(mean free path) is a function of cross-sectional shielding area, density, etc.

From my approach, the gravity radiation shielding cross-sectional area of a hydrogen atom is Pi multiplied by the square of the radius of a black hole of the same mass (R = 2GM/c^2). This means that the fraction of radiation absorbed in producing gravitational force is very small, even for the Earth's mass. The reason gravity results from such a small amount of shielding is that the outward force of the big bang is so large, 7 x 10^43 Newtons, and there is an equal inward "implosion" force upon every fundamental particle, except where it is geometrically shielded by other particles.

The physical picture is crudely like a nuclear implosion weapon, where a plutonium core is compressed by the explosion of TNT surrounding it. By the 3rd law of motion, as much force goes inward as outward in any explosion, so you get core compression.

One thing I think should be done is to equate the contraction term of general relativity to the Lorentz-FitzGerald contraction, with some kind of treatment of the energy needed to do the compression. The atoms are compressed against the Pauli exclusion principle, i.e., against the energy of alignment of pairs of electrons with opposite spin (and magnetic field) relative to one another. I've derived the general relativity contraction result from the Lorentz-FitzGerald contraction by employing a simple argument from Einstein's principle of the equivalence of inertial and gravitational mass. My paper is at http://nigelcook0.tripod.com/; notice that I give two derivations of gravity, which give the same result from different assumptions.

General relativity says the contraction is (1/3)MG/c^2 = 1.5 mm for the Earth. This is the amount by which radiation pressure "squeezes" the Earth's radius, and it is the gravitational equivalent of the Lorentz-FitzGerald contraction of materials in their direction of motion. The initial force needed to start a body moving causes the contraction, which is sustained while the surrounding spacetime fabric (virtual particles, radiation, etc.) flows around it. The energy needed to contract a body must come from the energy needed to accelerate it to a given velocity. Hence the "potential" energy gained by a falling apple appears physically as the energy causing its compression due to its motion. This way of dealing with energy gets rid of "potential" energy and replaces it with the physics of what is occurring to the matter.
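A numerical check of the quoted 1.5 mm figure, using standard values of G, c and the Earth's mass:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8          # speed of light, m/s
m_earth = 5.97e24  # Earth mass, kg

# General relativity's radial contraction term, (1/3)MG/c^2:
contraction = G * m_earth / (3 * c**2)
print(f"Earth's radial contraction: {1000 * contraction:.2f} mm")  # ~1.5 mm
```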

The Standard Model of particle physics is well confirmed experimentally for nuclear particles at energies up to 200 GeV and for electrons at 80 GeV (although it is incomplete, omitting proper unification and gravity for instance), and the quantum field theory of the Standard Model suggests that fundamental particles are just energy, with mass having a physical mechanism (the Higgs field, the exact dynamics of which are still speculative because the Higgs particle has yet to be identified). I've made some progress in this area recently, which I will mention later.

I have read Aristotle's book "Physics" (350 BC). It is clear now that the spacetime fabric of general relativity and quantum field theory (the Dirac ether sea) implies that FitzGerald was right, and the contraction is a physical effect, so relativity is due to a physical mechanism. However, LeSage's model is over-simplistic: compare the caloric theory of heat, which Prevost in 1792 replaced with two distinct mechanisms, kinetic theory and radiation.

The LeSage mechanism is like the attraction of a rubber sucker to a surface by the surrounding air pressure. This is a short-range force: to get mutual shielding that way, the objects must be closer together than a few times the mean free path of the fluid molecules. In the spacetime fabric, the Heisenberg uncertainty principle becomes a simple scattering relationship dealing with the scattering of vacuum (aether) particles and their energy-versus-time relationship (see http://nigelcook0.tripod.com/). So the LeSage mechanism describes the short-range weak and strong nuclear force mechanisms.

Gravity and electromagnetism are due to light-speed radiation exchange. There is some work on radiation as a mechanism for inertia and gravity (via general relativity) due to Professors Rueda and Haisch, which may be useful to you: they derive F = ma from the radiation resistance of the vacuum using a quantum mechanical approach to the "zero point field" (though Nobel laureate Glashow, as quoted in New Scientist last year, dismissed the argument as "not even wrong" speculation).

Matt Edwards seems to think science is about personal belief systems, not established facts (he rejects the big bang out of hand, without giving any better model for the Hubble law than expansion): http://www.astro.ucla.edu/~wright/tiredlit.htm. I've found that 100% of the people who are open to a mechanism for gravity are anti-big bang. They also seem to think that if I use established facts (like the Hubble expansion), then I'm 100% in agreement with the existing way of force-fitting general relativity to the big bang, which is not true. We live in an age of incredibly unfounded, bigoted arrogance, but that is nothing new, I suppose. This is why I've met stubborn ignorance (and often abuse from people who should know better) for a decade. The first publication my paper received was a mention in the letters pages of the October 1996 issue of Electronics World, ten years ago.

The big bang contains the mechanism for gravity: it's a real explosion, and Newton's 3rd empirical law shows you get an inward force as well as an outward force. At its simplest, in any explosion you have an outward force. The outward pressure, which knocks down trees, people, and houses, is that force divided by the spherical area, so the pressure falls with distance even if the force is a fixed constant (explosions in air are far more complicated than space-based explosions - see my page http://nigelcook0.tripod.com/ for details of the hydrodynamics of an explosion in air). The outward force of the big bang is the mass of the universe multiplied by its acceleration in spacetime. Since the velocity varies linearly from 0 to c over times past of 0 to 15 Gyears, the acceleration is dv/dt = c/(age of universe) = 6 x 10^-10 ms^-2. The outward force is then F = ma = 7 x 10^43 Newtons. My basic argument (with a numerical sketch after the list below) is:

1. Feynman shows that forces arise from the exchange of gauge bosons (arriving from great distances at light speed, hence coming from times in the past).
2. The big bang mass has a speed, in the spacetime which we see, rising from 0 towards the speed of light c over times past of 0 towards 15 billion years (or distances of 0 to 15 billion light-years), giving an outward force by Newton's 2nd empirical law: F = ma = m.dv/dt = mc/(age of universe).
3. Newton's 3rd law gives an equal inward force, carried by gauge bosons, which, when shielded by mass, yields gravity and electromagnetism correctly.
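A numerical sketch of step 2; the mass of the universe is not stated above, so the round figure of about 1.1 x 10^53 kg implied by the quoted 7 x 10^43 N force is assumed here:

```python
year = 3.156e7   # seconds in a year
t = 15e9 * year  # age of universe used above, ~4.7e17 s
c = 3.0e8        # speed of light, m/s
m = 1.1e53       # kg; mass of the universe implied by the quoted
                 # outward force (an assumed round figure, not measured)

a = c / t        # ~6e-10 m/s^2, the quoted acceleration
F = m * a        # ~7e43 N, the quoted outward force
print(f"a = {a:.1e} m/s^2")
print(f"F = {F:.1e} N")
```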

On my page, I've gone into heuristic quantum field theory in just enough semi-empirical depth to tackle some big problems, like the quantized masses of the different fundamental particles, mechanistically. All mass has a single fundamental building block, due to some kind of Pauli-pairing of a real particle with a single vacuum particle which provides mass, and there is good empirical evidence for this from the results of many different abstract (but empirically confirmed) quantum field theory calculations, and from the semi-empirical relationships between the measured masses of many different nuclear particles, leptons, and hadrons.

I've only glanced briefly through your paper so far; I've not worked through the equations and arguments in detail, but it is very interesting and deals with aspects of the problem which I have neglected. I've no idea whether you are happy working by yourself on this, or whether you would be interested in carefully incorporating ideas from my efforts, or even co-authoring some paper or book at some stage. I will read your paper carefully.

What you need to think about is the string theory lobby, led by Edward Witten, who in April 1996 claimed in Physics Today: "String theory has the remarkable property of predicting gravity." This is a major reason why I've been censored, and why the physics community is so bigoted today, not wanting "alternatives". However, in 2004, Penrose exposed Witten as a bit of a charlatan for this claim: see p. 896 of Penrose's "Road to Reality".

I know from my experience that it is a mistake to ignore the string theory lobby, and Peter Woit's experiences with their paranoid bigotry show how much control they have over the means of communication in science. I think you should look to Darwin's and Newton's books for ideas on how to overcome bigotry and mainstream arrogance when introducing a radical new idea. A book on the subject would of course be hard to get published, but I recommend that you consider aligning yourself with Peter Woit's camp, since the major barrier to progress is the arrogance of string theorists.

Best wishes,
Nigel Cook


Update: The Lorentz contraction is a compression in the direction of motion: the kinetic energy gained by matter as it is accelerated is physically stored as that compression (if we accept that mass is not a form of energy, but is merely associated with it by the Higgs field mechanism). The Standard Model interpretation of experimental data suggests the strong force falls from about 137 times Coulomb's force at low energy to about 16 times Coulomb's force at 90 GeV, while Levine's 1997 data show that Coulomb's law strengthens by 7% as collision energy rises to 90 GeV (PRL, v. 78, 1997, no. 3, p. 424). These are empirical facts: the electromagnetic force rises with interaction energy, while the strong force decreases. Since interatomic forces are electromagnetic, this variation of electromagnetic force with energy would be expected to affect the physics of the Lorentz contraction and mass increase.

It is generally accepted that the electromagnetic force rises because particles colliding at higher energy penetrate more of the polarised vacuum shield around the particle core, thus experiencing a stronger core charge. The Standard Model also appears to say something about mass increase, since all mass is normally ascribed to the Higgs field of the vacuum (the Higgs field as the cause of mass also breaks the fundamental meaning of E = mc^2). A fundamental particle gets squeezed in its direction of motion by the surrounding Dirac sea of charges in the vacuum. Those vacuum charges must flow from the bow to the stern of the moving particle, but they can't move out of the way faster than light, so as the particle approaches light velocity, the retarding pressure from ahead increases, increasing its mass. (This hints at an underlying energy conservation, with a fixed total amount of gauge boson energy being distributed between the different fundamental forces, so that a fall in one force with increasing collision energy is accompanied by a rise in the strength of others. This is an alternative which could disprove SUSY, since energy conservation of the total force field would mean perfect unification where particles break completely through the polarised vacuum shields.)
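For reference, the standard kinematics being reinterpreted here; a sketch of the contraction factor and of the kinetic energy that, on this view, is stored as compression:

```python
import math

def gamma_factor(v, c=3.0e8):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c)**2)

c = 3.0e8      # speed of light, m/s
m = 9.109e-31  # electron mass, kg
for v in (0.1 * c, 0.9 * c, 0.99 * c):
    g = gamma_factor(v)
    # On the interpretation above, this kinetic energy is what is
    # physically stored as the compression of the moving particle.
    ke = (g - 1.0) * m * c**2
    print(f"v = {v/c:.2f}c: length factor {1/g:.3f}, KE = {ke:.2e} J")
```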
