Quantum gravity physics based on facts, giving checkable predictions: December 2006

Thursday, December 07, 2006

There are two types of loops behind all physics phenomena: creation-annihilation loops, which occur within a very short range of charges (about 1 fm for a unit charge, corresponding to a field strength of about 10^20 volts per metre, which is sufficient for pair production, making the creation-annihilation loops) and are associated with short range atomic chaos and nuclear phenomena, and the long-range loops of Yang-Mills exchange radiation in electromagnetism and gravity.

The latter loops are not creation/annihilation loops, but are purely the exchange of gauge boson radiation between charges. This looping of radiation is such that all charges in the universe are exchanging this long range radiation, the result being electromagnetism and gravity. The force-causing radiation delivers momentum, which drives the forces. It causes cosmological expansion because the exchange of radiation pushes charges apart; the incoming force-causing exchange radiation from more distant matter is increasingly redshifted with increasing distance of that matter, and so imparts somewhat less force. This is the primary effect of exchange radiation; gravity is a secondary effect occurring as a result of cosmological expansion. (Obviously, at early stages in the big bang, when a lot of matter was colliding as a hot ionized gas, there was also expansive pressure from that, as in a nuclear explosion fireball in space. The exact causality and interdependence of cosmological expansion and the force-generating mechanisms at the earliest times is still under investigation, but it is already known that the short range force mechanisms are related to the gravity and electromagnetism forces.)

The recession of galaxies gives an effective outward force of receding matter around us due to Newton's second law, which, by Newton's 3rd law and the possibilities known in the Standard Model of particle physics, results in a net inward force of gauge boson radiation. The net result of these forces can be repulsive or attractive with equal magnitude for unit charges in electromagnetism, but there is also a weak residue summation, automatically allowed for in the electromagnetism force model (N^{1/2} times weaker than electromagnetism, where N is the number of fermions in the universe), that is always attractive: gravity. (The mechanism for this is described in detail below in bold italics, in conjunction with Lunsford's unification of gravitation and electromagnetism.) This is a very checkable theory: it is founded on well established facts and makes numerous non-ad hoc predictions, quantitatively checkable to greater accuracy as cosmological observation evidence improves!

Using these confirmed facts it was predicted and published in 1996 (way ahead of Perlmutter's observation) that the universe isn't decelerating (without inventing dark energy to offset gravity; gravity simply doesn't operate between receding masses where the exchange radiation is too redshifted for it to work; in addition it relies on pushing effects which are unable - in our frame of observational reference - to operate on the most distant matter). The simplest way to predict gravity, electromagnetism and nuclear forces is to follow renormalized QFT facts. There are no creation-annihilation loops below the IR cutoff, ie beyond 1 fm from a fundamental particle where the field strength is below 10^18 v/m, so those creation-annihilation loops don’t justify ‘cosmological dark energy’. Long range forces, gravity and electromagnetism, are due (beyond 1 fm) entirely to loop quantum gravity type Yang-Mills exchange radiation (loops that are exchanges of radiation between masses, with nothing to do with charge creation-annihilation loops in the Dirac sea). Short range nuclear forces are due to the pressure of charges in the loops within a 1 fm range.

For the history of LeSage see the survey and comments here. First, it turns out LeSage plagiarized the idea from Newton's friend Fatio.

Numerous objections have been raised against it, all superficially very impressive but scientifically totally void, like the objections raised by Ptolemy against the earth rotating ('if the earth rotated, we'd be thrown off, and the clouds would always fly from east to west at 1000 miles/hour', etc.):

(1) Maxwell dismissed LeSage because he thought atoms were "solid", so gravity would depend on the surface area of a planet, not on the mass within it. LeSage however had used the fact that gravity depends on mass to argue that the atom is mostly void, so that the gravity-causing radiation can penetrate the earth and act only on the massive nuclei, etc. He was vindicated by the discovery of X-rays and penetrating radioactive particles.

(2) The tactic against LeSage then changed to the claim that if gravity were due to exchange radiation, the power delivered would be enough to melt atoms.


The flaw here is obvious: the Yang-Mills exchange radiation needed to produce all forces is large. All Standard Model forces are larger in strength than gravity! But they all rely on radiation exchange, without melting anything.

The Standard Model is the best tested physical theory: forces result from radiation exchange in spacetime. Mass recedes at speeds from 0 to c over 0 to ~15 billion years of spacetime, so the outward force is F = m.dv/dt ~ m(Hr)/t = mHv = mH(Hr) ~ 10^43 N, where H is the Hubble parameter. Newton's 3rd law implies an equal inward force, and this force is carried by Yang-Mills exchange radiation.
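As a rough numerical sanity check of that ~10^43 N figure, here is a minimal Python sketch; the mass of the observable universe (~3 x 10^52 kg) and H ~ 70 km/s/Mpc are assumed round values, not numbers taken from the text above.

```python
# Order-of-magnitude check of the outward force F ~ mH(Hr) ~ mHc quoted above.
H = 70e3 / 3.086e22   # assumed Hubble parameter, 70 km/s/Mpc converted to 1/s
c = 3.0e8             # speed of light, m/s
M = 3e52              # assumed effective mass of the observable universe, kg

F = M * H * c         # outward force, F = ma with a ~ Hc
print(f"Outward force ~ {F:.1e} N")   # roughly 2e43 N, i.e. of order 10^43 N
```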

The radiation is received by mass almost equally from all directions, coming from the other masses in the universe; the radiation is in effect reflected back the way it came if there is symmetry that prevents the mass from being moved. The result is then a mere compression of the mass by the amount mathematically predicted by general relativity, i.e., a radial contraction of the spacetime fabric by the small distance MG/(3c²), which is about 1.5 mm for the mass of the Earth.
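A quick check of that 1.5 mm figure, using standard values for G, the Earth's mass and c:

```python
# Radial contraction of the Earth, MG/(3c^2), as quoted above.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
c = 2.998e8          # speed of light, m/s

contraction = G * M_earth / (3 * c**2)
print(f"Radial contraction ~ {contraction * 1e3:.2f} mm")   # about 1.5 mm
```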

A local mass shields the force-carrying radiation exchange, because the distant masses in the universe have high recession speeds, but a nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = m.dv/dt = mv/(x/c) = mcv/x = 0, since v ≈ 0. Hence, by Newton’s 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, so you get pushed towards it. This is why apples fall.

Geometrically, the unshielded gravity force is equal to the total inward force, F = ma ~ mcH, multiplied by the proportion of the shielded area of a spherical surface around the observer (illustration here). The surface area of the sphere with radius R (the average distance of the receding matter that is contributing to the inward gauge boson force) is 4*Pi*R². The shielding area of a local mass is projected on to this area: the local mass of, say, the planet Earth, the centre of which is distance r from you, casts a shadow (on the distant surface 4*Pi*R²) equal to its shielding area multiplied by the simple ratio (R/r)². This ratio is very big. Because R is a fixed distance as far as we are concerned here, the most significant variable is the 1/r² factor, which we all know is the Newtonian inverse square law of gravity.

More here (that page is under revision, with corrections and new information here, and a revised version as a properly edited paper will go here). There is strong evidence from electromagnetic theory that every fundamental particle has a black-hole cross-sectional shield area in the fluid analogy of general relativity. (Discussed here.)

The effective shielding radius of a black hole of mass M is equal to 2GM/c². A shield, like the planet earth, is composed of very small, sub-atomic particles. The very small shielding area per particle means that there will be an insignificant chance of the fundamental particles within the earth ‘overlapping’ one another by being directly behind each other.

The total shield area is therefore directly proportional to the total mass: the total shield area is equal to the shielding area of one fundamental particle, multiplied by the total number of particles. (Newton showed that, by the inverse-square gravity law, a spherically symmetrical arrangement of masses, say in the earth, gravitates as if the same mass were located at the centre, because the mass within a shell depends on its area and hence on the square of its radius.) The earth’s mass in the standard model is due to particles associated with up and down quarks: the Higgs field.

From the illustration here, the total outward force of the big bang,
(total outward force) = ma = (mass of universe).(Hubble acceleration, a = Hc, see detailed discussion and proof here),

while the gravity force is the shielded inward reaction (by Newton’s 3rd law the outward force has an equal and opposite reaction):

F = (total outward force).(cross-sectional area of shield projected to radius R) / (total spherical area with radius R).

The cross-sectional area of shield projected to radius R is equal to the area of the fundamental particle (Pi multiplied by the square of the radius of the black hole of similar mass), multiplied by the factor (R/r)², which is the inverse-square law for the geometry of the implosion. The total spherical area with radius R is simply 4*Pi*R². Inserting the simple Hubble law results c = RH and R/c = 1/H gives F = (4/3)Pi*Rho*G²M²/(Hr)². (Notice the presence of M² in this equation, instead of mM or similar; this is evidence that gravitational mass is ultimately quantized into discrete units of M - see this post for evidence, ie this schematic idealized diagram of polarization (in reality there are not two simple shells of charges, but that is similar to the net effect for the purpose required for coupling of masses) and this diagram showing numerically how observable particle masses can be built up on the polarization shielding basis. More information in posts and comments on this blog.) We then set this equal to F = Ma (with a = MG/r²) and solve, getting G = (3/4)H²/(Pi*Rho). When the effect of the higher density in the universe at the great distance R is included, this becomes

G = (3/4)H²/(Pi*Rho_local*e³), where Rho_local is the local density and e is the base of natural logarithms. This appears to be very accurate.
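The algebra leading from the shadowing geometry to G = (3/4)H²/(Pi*Rho) can be checked symbolically. The sketch below is a check of the steps above, not an independent derivation; it assumes R = c/H, a shield radius of 2GM/c² for the local mass M, and an outward force of (4/3)Pi*R³*Rho multiplied by the acceleration Hc.

```python
# Symbolic check of the shadowing algebra above, using sympy.
import sympy as sp

G, M, c, H, r, rho = sp.symbols('G M c H r rho', positive=True)
R = c / H                                        # effective radius of the universe

outward_force = sp.Rational(4, 3) * sp.pi * R**3 * rho * H * c   # (mass of universe) * Hc
shield_area   = sp.pi * (2 * G * M / c**2)**2 * (R / r)**2       # black-hole shield, projected
sphere_area   = 4 * sp.pi * R**2

F = sp.simplify(outward_force * shield_area / sphere_area)
print(F)                                         # (4/3)*pi*rho*G**2*M**2/(H**2*r**2)

# Set this equal to the Newtonian force M*a = M*(MG/r^2) and solve for G:
solutions = sp.solve(sp.Eq(F, M**2 * G / r**2), G)
print(solutions)                                 # the non-trivial root is 3*H**2/(4*pi*rho)
```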

Newton's gravitation says the acceleration field is a = MG/r².

Let's go right through the derivation of the Einstein-Hilbert field equation in a non-obfuscating way.

To start with, the classical analogue of general relativity's field equation is Poisson's equation

div.²E = 4*Pi*Rho*G

Here div.²E denotes the Laplacian operator (well known in heat diffusion; strictly, the divergence of the gradient) acting on E, which for radial symmetry (r = x = y = z) of the field gives:

div.²E
= d²E/dx² + d²E/dy² + d²E/dz²
= 3*d²E/dr²


To derive Poisson's equation in a simple way (not mathematically rigorous), observe that for non-relativistic situations the energy per unit mass of a falling test particle is

E = (1/2)v² = MG/r

(The kinetic energy per unit mass gained by a test particle falling to distance r from mass M is simply the gravitational potential energy per unit mass gained at that distance by the fall! Simple!)

Now, observe that for spherical geometry and uniform density (where density Rho = M/[(4/3)*Pi*r³]),

4*Pi*Rho*G = 3MG/r³ = 3[MG/r]/r²

So, since E = (1/2)v² = MG/r,

4*Pi*Rho*G = 3[(1/2)v²]/r² = (3/2)(v/r)²

Here, the ratio v/r becomes dv/dr when translating to a differential equation, and as already shown div.²E = 3*d²E/dr² for radial symmetry, so

4*Pi*Rho*G = (3/2)(dv/dr)² = div.²E

Hence proof of Poisson's gravity field equation:

div.²E = 4*Pi*Rho*G.

To get this expressed as tensors you begin with a Ricci tensor Ruv for curvature (this is a shortened Riemann tensor).

Ruv = 8*Pi*G*Tuv,

where Tuv is the energy-momentum tensor which includes potential energy contributions due to pressures, but is analogous to the density term Rho in Poisson's equation. (The density of mass can be converted into energy density simply by using E = mc².)

However, this equation Ruv = 8*Pi*G*Tuv was found by Einstein to be a failure because the divergence of Tuv should be zero if energy is conserved. (A uniform energy density will have zero divergence, and Tuv is of course a density-type parameter. The energy potential of a gravitational field doesn't have zero divergence, because it diverges - falls off - with distance, but uniform density has zero divergence simply because it doesn't fall with distance!)

The only way Einstein could correct the equation (so that it is consistent with the zero divergence of Tuv) was by replacing Tuv with Tuv - (1/2)(guv)T, where T is the trace of the energy-momentum tensor (and R below is the trace of the Ricci tensor):

Ruv = 8*Pi*G*[Tuv - (1/2)(guv)T]

which is equivalent to

Ruv - (1/2)Rguv = 8*Pi*G*Tuv

This is the full general relativity field equation (ignoring the cosmological constant and dark energy, which are incompatible with any Yang-Mills quantum gravity because, to use an over-simplified argument, the redshift of gravity-causing exchange radiation between receding masses over long ranges cuts off gravity, negating the need for dark energy to explain observations). In the Newtonian limit the trace term halves the effective source density (T00 - (1/2)T is roughly (1/2)Rho for static matter), which is how the 8*Pi*G here corresponds to the 4*Pi*G in Poisson's equation.

There are no errors as such in the above but there are errors in the way the metric is handled and in the ignorance of quantum gravity effects on the gravity strength constant G.

Because Einstein doesn't mathematically distinguish between the volume (and its dimensions) of spacetime and the gravity causing volume (and its dimensions) of the spacetime fabric (the Dirac sea or the Yang-Mills exchange radiation which produces forces and curvature), there have always been disputes regarding the correct metric.

Kaluza around 1919 suggested to Einstein that the metric should be 5 dimensional, with an extra distance dimension. Klein suggested the latter is rolled up in a particle. Kaluza's argument was that by changing the metric there is hope of unifying gravitation with Maxwell's photon. However, the ad hoc approach of changing the number of dimensions doesn't predict anything checkable, normally. (Mainstream string theory, M-theory, is an example.)

The exception is Lunsford's 6-dimensional unification (using 3 distance-like dimensions and 3 time-like dimensions), which correctly (see below for how Yang-Mills gauge boson redshift alters general relativity without needing a cosmological constant) predicts the correct electrodynamics and gravitational unification and predicts the absence of a cosmological constant.

The physical basis for Lunsford's solution of the problem is clear: one set of 3 dimensions is describing the contractable dimensions of matter like the Michelson-Morley instrument, while the other set of 3 dimensions is describing the 3 dimensions in the volume of space. This gets away from using the same dimensions to describe the ever expanding universe as are used to describe the contraction of a moving particle in the direction of its motion.

The spacetime fabric is not the same thing as geometric volume: geometric volume is ever expanding due to the big bang, whereas the spacetime fabric has force causing "curvature" due to the presence of masses locally (or pressure, to use a 3 dimensional analogy to the 2-dimensional rubber sheet physics used traditionally to "explain" curvature):

(1). geometric volume is ever expanding due to the big bang (3-dimensions)
(2). spacetime fabric has force causing 'curvature' which is associated with contractions (not uniform expansion!) due to the presence of masses locally

The dimensions describing (1) don't describe (2). Each needs at least 3 dimensions, hence there are 6 dimensions present for a mathematically correct description of the whole problem. My approach in a now-obsolete CERN paper is to tackle this dynamical problem: as the volume of the universe increases, the amount of Dirac sea per unit volume increases, because the Dirac sea surrounds matter. Empty a bottle of water, and the amount of air in it increases. A full dynamical treatment of this predicts gravity. (This approach is a dual to another calculation which is based on Yang-Mills exchange radiation pressure, but is entirely consistent with it. Radiation and Dirac sea charges are both intimately related by pair production/annihilation 'loops', so strictly speaking the radiation calculation of gravity is valid below the IR cutoff of QFT, whereas the Dirac sea hydrodynamic effect becomes most important above the IR cutoff. In any case, the total gravity force as the sum of both radiation and charge pressures will always be accurate, because when loops transform a fraction f of the radiation into charged loops with material-like pressure, the radiation contributes 1-f and the charged loops contribute f, so the total gravity is f + (1-f) = 1; ie, it is numerically the same as assuming either that 100% is due to radiation or that 100% is due to Dirac sea charges.)

In particular, matter contraction is a case where distance can be defined by the matter (ie, how many atoms there are along the length of the material), while the expanding universe cannot be dealt with this way because of the issue that time delays become vitally important (two widely separated objects always see one another as being in the past, due to the immense travel time of the light, and since effects like gravity go at light speed, what matters physically is time delay). Hence, it is rational to describe the contraction of matter due to its motion and gravitational properties with a set of distance dimensions, but to use a set of time dimensions to describe the ever expanding cosmology.

Danny Ross Lunsford’s major paper, published in Int. J. Theor. Phys., v 43 (2004), No. 1, pp.161-177, was submitted to arXiv.org but was removed from arXiv.org by censorship apparently since it investigated a 6-dimensional spacetime which again is not exactly worshipping Witten’s 10/11 dimensional M-theory. It is however on the CERN document server at http://doc.cern.ch//archive/electronic/other/ext/ext-2003-090.pdf, and it shows the errors in the historical attempts by Kaluza, Pauli, Klein, Einstein, Mayer, Eddington and Weyl. It proceeds to the correct unification of general relativity and Maxwell’s equations, finding 4-d spacetime inadequate:

‘... We see now that we are in trouble in 4-d. The first three [dimensions] will lead to 4th order differential equations in the metric. Even if these may be differentially reduced to match up with gravitation as we know it, we cannot be satisfied with such a process, and in all likelihood there is a large excess of unphysical solutions at hand. ... Only first in six dimensions can we form simple rational invariants that lead to a sensible variational principle. The volume factor now has weight 3, so the possible scalars are weight -3, and we have the possibilities [equations]. In contrast to the situation in 4-d, all of these will lead to second order equations for the g, and all are irreducible - no arbitrary factors will appear in the variation principle. We pick the first one. The others are unsuitable ... It is remarkable that without ever introducing electrons, we have recovered the essential elements of electrodynamics, justifying Einstein’s famous statement ...’

D.R. Lunsford shows that 6 dimensions in SO(3,3) should replace the Kaluza-Klein 5-dimensional spacetime, unifying GR and electromagnetism: ‘One striking feature of these equations ... is the absent gravitational constant - in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behavior. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant - so this theory explains why ordinary general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularized.’

In a comment on Woit's blog, Lunsford writes: ‘... I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint). ... GR is undeniably correct on some level - not only does it make accurate predictions, it is also very tight math. There are certain steps in the evolution of science that are not optional - you can’t make a gravity theory that doesn’t in some sense incorporate GR at this point, any more than you can make one that ignores Newton on that level. Anyone who claims that Einstein’s analysis is all wrong is probably really a crackpot. ... my work has three time dimensions ... This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM.’

In another comment on Woit's blog, Lunsford writes: ‘... the idea was really Riemann’s, Clifford’s, Mach’s, Einstein’s and Weyl’s. The odd thing was, Weyl came so close to getting it right, then, being a mathematician, insisted that his theory explain why spacetime is 4D, which was not part of the original program. Of course if you want to derive matter from the manifold, it can’t be 4D. This is so simple that it’s easy to overlook.

‘I always found the interest in KK theory curiously misplaced, since that theory actually succeeds in its original form, but the success is hollow because the unification is non-dynamical.’

Regarding Lunsford’s statement that ‘since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv’, the way that both gravity and electromagnetism derive from the same source is as follows:

A capacitor QFT model in detail. The gauge bosons must travel between all charges, they cannot tell that an atom is "neutral" as a whole, instead they just travel between the charges. Therefore even though the electric dipole created by the separation of the electron from the proton in a hydrogen atom at any instant is randomly orientated, the gauge bosons can also be considered to be doing a random walk between all the charges in the universe.

The random-walk vector sum for the charges of all the hydrogen atoms is the voltage for a single hydrogen atom (the real charged mass of the universe is something like 90% hydrogen), multiplied by the square root of the number of atoms in the universe.

This allows for the angles of each atom being random. If you have a large row of charged capacitors randomly aligned in a series circuit, the average voltage resulting is obviously zero, because you have the same number of positive terminals facing one way as the other.

So there is a lot of inefficiency, but in a two or three dimensional set up, a drunk taking an equal number of steps in each direction does make progress. Taking 1 step per second, he goes an average net distance from the starting point of t^0.5 steps after t seconds.

For air molecules, the same occurs so instead of staying in the same average position after a lot of impacts, they do diffuse gradually away from their starting points.

Anyway, for the electric charges comprising the hydrogen and other atoms of the universe, each atom is a randomly aligned charged capacitor at any instant of time.

This means that the gauge boson radiation being exchanged between charges to give electromagnetic forces in Yang-Mills theory will have the drunkard’s walk effect, and you get a net electromagnetic field of the charge of a single atom multiplied by the square root of the total number in the universe.
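A tiny Monte Carlo run illustrates the square-root scaling of that vector sum; it only shows the statistics of adding randomly aligned unit vectors, nothing specific to atoms or gauge bosons:

```python
# RMS resultant of N randomly oriented unit vectors grows as sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
N = 10_000          # number of randomly aligned "charged capacitors"
trials = 500

angles = rng.uniform(0.0, 2.0 * np.pi, size=(trials, N))
resultants = np.hypot(np.cos(angles).sum(axis=1), np.sin(angles).sum(axis=1))

print(f"RMS resultant = {np.sqrt((resultants**2).mean()):.1f}")
print(f"sqrt(N)       = {np.sqrt(N):.1f}")   # the two agree to within a few percent
```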

Now, if gravity is to be unified with electromagnetism (also basically a long range, inverse square law force, unlike the short ranged nuclear forces), and if gravity is due to a geometric shadowing effect (see my home page for the Yang-Mills LeSage quantum gravity mechanism with predictions), it will depend on only a straight line charge summation.

In an imaginary straight line across the universe (forget about gravity curving geodesics, since I’m talking about a non-physical line for the purpose of working out gravity mechanism, not a result from gravity), there will be on average almost as many capacitors (hydrogen atoms) with the electron-proton dipole facing one way as the other, but not quite the same numbers.


You find that statistically, a straight line across the universe is 50% likely to have an odd number of atoms falling along it, and 50% likely to have an even number of atoms falling along it. Clearly, if the number is even, then on average there is zero net voltage. But in all the 50% of cases where there is an odd number of atoms falling along the line, you do have a net voltage. The situation in this case is that the average net voltage is 0.5 times the net voltage of a single atom. This causes gravity. The exact weakness of gravity as compared to electromagnetism is now predicted. Gravity is due to 0.5 x the voltage of 1 hydrogen atom (a "charged capacitor").

Electromagnetism is due to the random walk vector sum between all charges in the universe, which comes to the voltage of 1 hydrogen atom (a "charged capacitor"), multiplied by the square root of the number of atoms in the universe. Thus, the ratio of gravity strength to electromagnetism strength between an electron and a proton is equal to 0.5V/(V.N^0.5) = 0.5/N^0.5, where V is the voltage of a hydrogen atom (in effect a charged capacitor) and N is the number of atoms in the universe. This ratio is equal to 10^-40 or so, which is the correct figure within the experimental errors involved.
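Plugging in a commonly assumed round figure of N ~ 10^80 hydrogen atoms for the observable universe (an assumption, not a number derived above) gives the order of magnitude claimed:

```python
# Gravity/electromagnetism strength ratio 0.5/sqrt(N) for N ~ 1e80 atoms.
N = 1e80
print(f"ratio ~ {0.5 / N**0.5:.0e}")   # ~5e-41, i.e. of order 10^-40
```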


I'm quite serious: as I said to Prof. Distler, tensors have their place, but they aren't the way forward with gravity, because people have been getting tangled up with them since 1915 and now have a landscape of cosmologies fiddled with cosmological constants (dark energy) to fit observation, and that isn't physics.

For practical purposes, we can use Poisson's equation as a good approximation and modify it to behave like the Einstein field equation. Example: div.²E = 4*Pi*Rho*G becomes div.²E = 8*Pi*Rho*G when dealing with light that is transversely crossing gravitational field lines (hence light falls towards the sun twice as much as Newton's law predicts).

When gravity deflects an object with rest mass that is moving perpendicularly to the gravitational field lines, it speeds up the object as well as deflecting its direction. But because light is already travelling at its maximum speed (light speed), it simply cannot be speeded up at all by falling. Therefore, that half of the gravitational potential energy that normally goes into speeding up an object with rest mass cannot do so in the case of light, and must go instead into causing additional directional change (downward acceleration). This is the mathematical physics reasoning for why light is deflected by precisely twice the amount suggested by Newton’s law.
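For a concrete number, the classic test case is starlight grazing the Sun: the full (general relativity) deflection 4GM/(bc²) is about 1.75 arcseconds, exactly twice the Newtonian-style value. A minimal check:

```python
# Deflection of light grazing the Sun: GR value 4GM/(bc^2) versus half of that.
GM_sun = 1.327e20       # G * M_sun, m^3/s^2
b = 6.96e8              # impact parameter = solar radius, m
c = 2.998e8             # m/s
rad_to_arcsec = 206265.0

full = 4 * GM_sun / (b * c**2) * rad_to_arcsec
print(f"Full (GR) deflection:   {full:.2f} arcsec")      # ~1.75 arcsec
print(f"Half (Newton-type):     {full / 2:.2f} arcsec")  # ~0.88 arcsec
```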

That's the physics behind the maths of general relativity! It's merely very simple energy conservation; the rest is just mathematical machinery! Why can't Prof. Distler grasp this stuff? I'm going to try to rewrite my material to make it crystal clear even to stringers who don't like to understand the physical dynamics!

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Regarding the physics of the metric, guv: in 1949 some kind of crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4:
‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 - v²/c²)^(1/2), where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = Eo/(1 - v²/c²)^(1/2), where Eo is the potential energy of the dislocation at rest.’


Specifying that the distance/time ratio = c (constant velocity of light), then tells you that the time dilation factor is identical to the distance contraction factor.

Feynman explained that, in general relativity, the contraction around a static mass M is simply a reduction in radius by (1/3)MG/c², or 1.5 mm for the Earth. You don’t need the tensor machinery of general relativity to get such simple results. You can do it just using the equivalence principle of general relativity plus some physical insight:

The velocity needed to escape from the gravitational field of a mass M (ignoring atmospheric drag), beginning at distance x from the centre of mass M, by Newton’s law will be v = (2GM/x)^(1/2), so v² = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v.

By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of the escape velocity (v² = 2GM/x) into the Fitzgerald-Lorentz contraction, giving gamma = (1 – v²/c²)^(1/2) = [1 – 2GM/(xc²)]^(1/2).

However, there is an important difference between this gravitational transformation and the usual Fitzgerald-Lorentz transformation, since length is only contracted in one dimension with velocity, whereas length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!), with spherically symmetric gravity. Using the binomial expansion to the first two terms of each:

Fitzgerald-Lorentz contraction effect: gamma = x/xo = t/to = mo/m = (1 – v²/c²)^(1/2) = 1 – ½v²/c² + ...

Gravitational contraction effect: gamma = x/xo = t/to = mo/m = [1 – 2GM/(xc²)]^(1/2) = 1 – GM/(xc²) + ...,

where for spherical symmetry (x = y = z = r), we have the contraction spread over three perpendicular dimensions, not just one as is the case for the FitzGerald-Lorentz contraction: x/xo + y/yo + z/zo = 3r/ro. Hence the radial contraction of space around a mass is r/ro = 1 – GM/(3rc²).

Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c².
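As a small numerical illustration of the gravitational gamma factor derived above, here is its value at the Earth's surface; the fractional clock slowing is roughly GM/(xc²), about 7 parts in 10^10:

```python
# Gravitational gamma = [1 - 2GM/(xc^2)]^(1/2) evaluated at the Earth's surface.
import math

G = 6.674e-11
M_earth = 5.972e24
R_earth = 6.371e6    # m
c = 2.998e8

gamma = math.sqrt(1 - 2 * G * M_earth / (R_earth * c**2))
print(f"gamma at Earth's surface = {gamma:.12f}")
print(f"fractional clock slowing ~ {1 - gamma:.1e}")   # ~7e-10
```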

There is more than one way to do most things in physics. Just because someone cleverer than me can use tensor analysis to do the above doesn’t itself discredit intuitive physics. General relativity is not a religion unless you make it one by insisting on a particular approach. The stuff above is not “pre-GR” because Newton didn’t do it. It’s still GR alright. You can have different roads to the same thing even in GR. Baez and Bunn have a derivation of Newton’s law from GR which doesn’t use tensor analysis: see http://math.ucr.edu/home/baez/einstein/node6a.html

Friedmann’s solution to general relativity says the effective dimension of the universe R should increase (for critical density) as t^(2/3). The two-thirds power of time is due to the effect of gravity.

Now, via the October 1996 issue of Electronics World, I had an 8-page paper analysing this, predicting the results Perlmutter found two years later.

General relativity is widely accepted to not be the final theory of quantum gravity, and the discrepancy is in that a Standard Model type (ie Yang-Mills) quantum field theory necessitates gauge boson radiation (”gravitons”) being exchanged between the gravitational “charges” (ie masses). In the expanding universe, over vast distances the exchange radiation will suffer energy loss like redshift when being exchanged between receding masses.

This predicts that gravity falls off over large (cosmological sized) distances. As a result, Friedmann’s solution is false. The universe isn’t slowing down! Instead of R ~ t^(2/3), the corrected theoretical prediction turns out to be R ~ t, which is confirmed by Perlmutter’s data from two years after the 1996 prediction was published. Hence there is no need for dark energy; instead, there’s simply no gravity to pull back very distant objects. Nobel Laureate Phil Anderson grasps this epicycle/phlogiston type problem:

‘the flat universe is just not decelerating, it isn’t really accelerating’ - Phil Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

To prove Louise’s MG = tc³ (for a particular set of assumptions which avoid a dimensionless multiplication factor of e³ which could be included according to my detailed calculations from a gravity mechanism):

(1) Einstein’s equivalence principle of general relativity:

gravitational mass = inertial mass.

(2) Einstein’s inertial mass is equivalent to inertial mass potential energy:

E = mc²

This equivalent energy is “potential energy” in that it can be released when you annihilate the mass using anti-matter.

(3) Gravitational mass has a potential energy which could be released if somehow the universe could collapse (implode):

Gravitational potential energy of mass m, in the universe (the universe consists of mass M at an effective average radius of R):

E = mMG/R

(4) We now use principle (1) above to set equations in arguments (2) and (3) above, equal:

E = mc2 = mMG/R

(5) We use R = ct on this:

c³ = MG/t

or

MG = tc³

Which is Louise’s equation. QED.
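A purely illustrative numeric check of MG = tc³, using assumed round values for the mass (~3 x 10^52 kg) and age (~14 billion years) of the universe; with these inputs the two sides agree to within an order of magnitude, which is all that can be expected given the uncertainty in M:

```python
# Illustrative check of MG = tc^3 with assumed round values for M and t.
G = 6.674e-11
c = 2.998e8
M = 3e52               # assumed mass of the universe, kg
t = 14e9 * 3.156e7     # assumed age of the universe, seconds

print(f"MG   = {M * G:.1e}")      # ~2e42 m^3/s^2
print(f"tc^3 = {t * c**3:.1e}")   # ~1e43 m^3/s^2, same order of magnitude
```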

I’ve done and published (where I can) work on the detailed dynamics for such an energy-based mechanism behind gravity. The actual dynamics are slightly more subtle than above because of the definition of mass of universe and its effective radius, which are oversimplified above. In reality, the further you see, the earlier the phase of the universe you are looking at (and receiving gravity effects from), so the effective density of the universe is greater. There is however, a limiting factor which eventually offsets this effect when considering great distances, because big masses in the very dense early-time universe, receding from us rapidly, would exchange very severely redshifted 'gravitons' (or whatever the gauge bosons of gravity are).

LQG is a modelling process, not a speculation. Smolin et al. show that a path integral is a summing over the full set of interaction graphs in a Penrose spin network. The result gives general relativity without a metric (ie, background independent). Next, you simply have to make gravity consistent completely with standard model-type Yang-Mills QFT dynamics to get predictions.

LQG is explained in Woit's book Not Even Wrong where he points out that loops are a perfectly logical and self-consistent duality of the curvature of spacetime: ‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space.’

‘Gravitation and Electrodynamics over SO(3,3)’ on the CERN document server, EXT-2003-090 by D. R. Lunsford, was peer-reviewed and published but was deleted from arXiv. It makes predictions, eliminating the dark energy/cosmological constant. Its 3 orthogonal time dimensions and its elimination of the CC mean it fits a gravity mechanism which is also predictive...

LQG has the benefits of unifying Standard Model (Yang-Mills) quantum field theory with the verified non-landscape end of general relativity (curvature) without making a host of uncheckable extradimensional speculations. It is more economical with hype than string theory because the physical basis may be found in the Yang-Mills picture of exchange radiation. Fermions (non-integer spin particles) in the standard model don’t have intrinsic masses (masses vary with velocity for example), but their masses are due to their association with massive bosons having integer spin. Exchange of gauge boson radiations between these massive bosons gives the loops of LQG. If string theorists had any rationality they would take such facts as at least a serious alternative to string!

It is interesting that the editorial on p5 of the 9 Dec 06 issue of New Scientist states: “[Unorthodox approaches] now seem the antithesis of modern science, with consensus and peer review at its very heart. ... The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting.... The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost.”

Page 46 of the same issue has an article by Professor Harry Collins of Cardiff University, which notes that the US military spends around $1 million per year on crackpot anti-gravity research: 'This is best understood by analogy with the philosopher Blaise Pascal's famous wager. Everyone, he argued, should believe in God because the cost of believing was small while the cost of not believing could be an eternity in hell. For the military, the cost of missing a technological opportunity, particularly if your enemy finds it first, is a trip to the hell of defeat. Thus goes the logic of the military investigating anti-gravity.'

Yang-Mills exchange radiation causes gravity (predicting its strength and other general relativity effects accurately) and, being a push effect, gets rid of anti-gravity; this fact would save U.S. taxpayers $1 million per year by getting rid of unwanted research. First useful spin-off! You cannot shield a push gravity, because it doesn't depend on the mediation of gravity between two masses, but on the shielding of masses from Yang-Mills exchange radiation. Any attempt to 'shield' gravity would hence make the effect of gravity stronger, not weaker, by adding to the existing shielding.

Professor Collins' major point, however, is that fear prevents ideas being dismissed and ignored. If you want to deter people from being dismissive and abusive, therefore, you should emphasise the penalty (what they have to lose) from ignoring and dismissing the facts. The best way to do that is to expose the damage to physics that some of the existing egotists are doing, so that others will be deterred. I've tried waiting for a decade (first publication via October 1996 Electronics World, an 8-page paper made available via the letters pages). Being polite, nice or modest to the leading physicists is like giving an inch to a dictator who then takes a mile. They can afford false modesty; others can't. Any modesty is automatically interpreted as proof of incompetence or stupidity. They aren't interested in physics. The only way to get interest is by pointing out facts they don't want to hear. This brings a hostile reaction, but the choice is not between a hostile reaction and a nice reaction! It is a choice between a hostile reaction and an abusive/dismissive reaction. Seen properly, the steps being taken are the best possible choice from the alternatives available.

On the topic of gravity-decelerated expansion in the false (conventional) treatment of cosmology, see Brian Greene's The Elegant Universe,
UK paperback edition, page 355: '... The root of the inflation problem is that in order to get two [now] widely separated regions of the universe close together [when looking backwards in time towards the big bang time origin] ... there is not enough time for any physical influence to have travelled from one region to the other. The difficulty, therefore, is that [looking backwards in time] ... the universe does not shrink at a fast enough rate.

'... The horizon problem stems from the fact that like a ball tossed upward, the dragging pull of gravity causes the expansion rate of the universe to slow down. ... Guth's resolution of the horizon problem is ... the early universe undergoes a brief period of enormously fast expansion...'

The problem is real but the solution isn't! What's happening is that the cosmological gravitational retardation idea from general relativity is an error because it ignores redshift of gauge boson radiation between receding masses (gravitational charges). This empirically based gravity mechanism also solves the detailed problem of the small-sized ripples in the cosmic background radiation.



Update: extract from an email to Mario Rabinowitz, 30 December 2006.

There are issues with the force calculation when you want great accuracy. For my purpose I wanted to calculate an inward reaction force, to predict gravity by the LeSage gravity mechanism which Feynman describes (with a picture) in his book "Character of Physical Law" (1965).

The first problem is that at greater distances, you are looking back in time, so the density is higher. The density varies as the inverse cube of time after the big bang.

Obviously this would give you an infinite force from the greatest distances, approaching zero time (infinite density).

But the redshift of gravity-causing gauge boson radiation emitted from such great distances would weaken their contribution. The further the mass is, the greater the redshift of any gravity-causing gauge boson radiation which is coming towards us from that mass. So this effect puts a limit on the effect on gravity from the otherwise infinitely increasing effective outward force due to density rising at early times after the big bang. A simple way to deal with this is to treat redshift as a stretching of the radiation, and the effects of density can be treated with the mass-continuity equation by supplying the Hubble law and the spacetime effect. The calculation suggests that the overall effect is that the density rise (as limited by the increase in redshift of gauge bosons carrying the inward reaction force) is a factor of e^3, where e is the base of natural logarithms. This is a factor of about 20, not infinity. It allows me to predict the strength of gravity correctly.
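The 'factor of about 20' is just e cubed:

```python
import math
print(math.e ** 3)   # 20.09, the e^3 density/redshift factor mentioned above
```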

The failure of anybody (except for me) to correctly predict the lack of gravitational slowing of the universe is also quite serious.

Either it's due to a failure of gravity to work on masses receding from one another at great speed (due to redshift of force carrying gauge bosons in quantum gravity?) or it's due to some repulsive force offsetting gravity.

The mainstream prefers the latter, but the former predicted the Perlmutter results via the October 1996 Electronics World. There is extreme censorship against predictions which are correctly confirmed afterwards, and which quantitatively correlate to observations. But there is bias in favour of ad hoc invention of new forces which simply aren't needed and don't predict anything checkable. It's just like Ptolemy adding a new epicycle every time he found a discrepancy, and then claiming to have "discovered" a new epicycle of nature.

The claimed accelerated expansion of the universe is exactly (to within experimental error bars) what I predicted two years before it was observed, using the assumption that there is no gravitational retardation (instead of a gravitational deceleration offset by an acceleration just sufficient to cancel it). The "cosmological constant" the mainstream is using is variable, to fit the data! You can't exactly offset gravity by simply adding a cosmological constant, see:

http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

See the diagram there! The mainstream best fit using a cosmological constant is well outside many of the error bars. This is intuitively obvious from my perspective. What is occurring is that there is simply no gravitational slowing. But the mainstream is assuming that there is gravitational slowing, and also dark energy causing acceleration which offsets the gravitational slowing. But that doesn't work: the cosmological constant cannot do it. If it is perfectly matched to experimental data at short distances, it over compensates at extreme distances because it makes gravity repulsive. So it overestimates at extreme distances.

All you need to get the correct expansion curve is to delete gravitational retardation altogether. You don't need general relativity to examine the physics.

Ten years ago (well before Perlmutter’s discovery and dark energy), the argument arose that if gravity is caused by a Yang-Mills exchange radiation quantum force field, where gravitons were exchanged between masses, then cosmological expansion would degenerate the energy of the gravitons over vast distances.

It is easy to calculate: whenever light is seriously redshifted, gravity effects over the same distance will be seriously reduced.

At that time, 1996, I was furthering my education with some Open University courses and as part of the cosmology course made some predictions from this quantum gravity concept.

The first prediction is that Friedmann’s solutions to GR are wrong, because they assume falsely that gravity doesn’t weaken over distances where redshifts are severe.

Whereas the Hubble law of recession is empirically V = Hr, Friedmann’s solutions to general relativity predict that V will not obey this law at very great distances. Friedmann/GR assume that there will be a modification due to gravity retarding the recession velocities V, due effectively to the gravitational attraction of the receding galaxy towards the mass of the universe contained within the radius r.

Hence, the recession velocity predicted by Friedmann’s solution for a critical density universe (which continues to expand at an ever diminishing rate, instead of either coasting at a constant rate - which Friedmann shows GR predicts for low density - or collapsing, which would be the case for higher than critical density) can be stated in classical terms to make it clearer than using GR.

Recession velocity including gravity

V = (Hr) - (gt)

where g = MG/(r^2) and t = r/c, so:

V = (Hr) - [MGr/(cr^2)]

= (Hr) - [MG/(cr)]

M = mass of universe which is producing the gravitational retardation of the galaxies and supernovae, ie, the mass located within radius r (by Newton’s theorem, the gravity due to mass within a spherically symmetric volume can be treated as if it all resides at the centre of that volume):

M = Rho.(4/3)Pi.r^3

Assuming (as was the case in 1996 models) that Rho = the Friedmann critical density = 3(H^2)/(8.Pi.G), we get:

M = Rho.(4/3)Pi.r^3

= [3(H^2)/(8.Pi.G)].(4/3)Pi.r^3

= (H^2)(r^3)/(2G)

So, the Friedmann recession velocity corrected for gravitational retardation,

V = (Hr) - [MG/(cr)]

= (Hr) - [(H^2)(r^3)G/(2Gcr)]

= (Hr) - [0.5(Hr)^2]/c.

Now, my point is this. The term [0.5(Hr)^2]/c in this equation is the amount of gravitational deceleration of the recession velocity.
From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of the exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the effect of the term [0.5(Hr)^2]/c.

Hence, we predict that the Hubble law will be the correct formula.
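To see how big the dropped retardation term is, the sketch below compares the plain Hubble law V = Hr with the gravitationally retarded form V = Hr - 0.5(Hr)^2/c at a few fractions of the Hubble radius c/H (H ~ 70 km/s/Mpc is an assumed value):

```python
# Plain Hubble law versus the gravitationally retarded Friedmann form above.
c = 2.998e8
H = 70e3 / 3.086e22            # assumed Hubble parameter, in 1/s

for fraction in (0.1, 0.5, 1.0):
    r = fraction * c / H       # distance as a fraction of the Hubble radius c/H
    v_plain = H * r
    v_retarded = H * r - 0.5 * (H * r)**2 / c
    print(f"r = {fraction:.1f} c/H:  Hr = {v_plain/c:.2f} c,  retarded = {v_retarded/c:.3f} c")

# At r = c/H the retardation term removes half the recession velocity; dropping
# it (as argued above) is what restores the plain Hubble law at all distances.
```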

Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.

Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.

I can’t understand why something which to me is perfectly sensible and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.

People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmologically sized distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.

*****

For an example of the tactics of groupthink, see Professor Sean Carroll's recent response to me: 'More along the same lines will be deleted — we’re insufferably mainstream around these parts.'

He is a relatively good guy, who stated:

‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here

The good guys fairly politely ignore the facts; the bad guys try to get you sacked, or call you names. There is a big difference. But in the end, the mainstream rubbish continues for ages either way:

Human nature means that instead of using scientific objectivity, any ideas outside the current paradigm should be attacked either indirectly (by ad hominem attacks on the messenger), or by religious-type (unsubstantiated) bigotry, irrelevant and condescending patronising abuse, and sheer self-delusion:

‘(1). The idea is nonsense.

‘(2). Somebody thought of it before you did.

‘(3). We believed it all the time.’

- Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle, Home is Where the Wind Blows Oxford University Press, 1997, p154).

‘If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner - even though he sat at the feet of Faraday... beetles could do that... he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaking manner, not unmixed with bluster, and make believe he knew all about it when he was a little boy!’

- Oliver Heaviside, Electromagnetic Theory Vol. 1, p337, 1893.

UPDATE:

Copy of a comment:


http://kea-monad.blogspot.com/2007/02/luscious-langlands-ii.html


Most of the maths of physics consists of applications of equations of motion which ultimately go back to empirical observations formulated into laws by Newton, supplemented by Maxwell, Fitzgerald-Lorentz, et al.


The mathematical model follows experience. It is only speculative in that it makes predictions as well as summarizing empirical observations. Where the predictions fall well outside the sphere of validity of the empirical observations which suggested the law or equation, then you have a prediction which is worth testing. (However, it may not be falsifiable even then, the error may be due to some missing factor or mechanism in the theory, not to the theory being totally wrong.)


Regarding supersymmetry, which is the example of a theory which makes no contact with the real world, Professor Jacques Distler gives an example of the problem in his review of Dine’s book Supersymmetry and String Theory: Beyond the Standard Model:


http://golem.ph.utexas.edu/~distler/blog/


“Another more minor example is his discussion of Grand Unification. He correctly notes that unification works better with supersymmetry than without it. To drive home the point, he presents non-supersymmetric Grand Unification in the maximally unflattering light (run α1, α2 up to the point where they unify, then run α3 down to the Z mass, where it is 7 orders of magnitude off). The naïve reader might be forgiven for wondering why anyone ever thought of non-supersymmetric Grand Unification in the first place.”


The idea of supersymmetry addresses the issue of getting the electromagnetic, weak, and strong forces to unify at 10^16 GeV or whatever, near the Planck scale. Dine assumes that unification is a fact (it isn’t) and then shows that in the absence of supersymmetry, unification is incompatible with the Standard Model.


The problem is that the physical mechanism behind unification is closely related to the vacuum polarization phenomena which shield charges.


Polarization of pairs of virtual charges around a real charge partly shields the real charge, because the radial electric field of the polarized pairs points the opposite way. (I.e., the electric field lines point inwards towards an electron. The electric field of the virtual electron-positron pairs, which are polarized with virtual positrons closer to the real electron core than virtual electrons, produces an outward radial electric field which cancels out part of the real electron’s field.)


So the variation in coupling constant (effective charge) for electric forces is due to this polarization phenomenon.


Now, what is happening to the energy of the field when it is shielded like this by polarization?
Energy is conserved! Why is the bare core charge of an electron or quark higher than the shielded value seen outside the polarized region (i.e., beyond 1 fm, the range corresponding to the IR cutoff energy)?


Clearly, the polarized vacuum shielding of the electric field is removing energy from the charge's field.


That energy is being used to make the loops of virtual particles, some of which are responsible for other forces like the weak force.


This provides a physical mechanism for unification which deviates from the Standard Model (which does not include energy sharing between the different fields), but which does not require supersymmetry.


Unification appears to occur because, as you go to higher energy (distances nearer a particle), the electromagnetic force increases in strength (because there is less polarized vacuum intervening in the smaller distance to the particle core).


This increase in strength, in turn, means that there is less energy in the smaller distance of vacuum which has been absorbed from the electromagnetic field to produce loops.


As a result, there are fewer pions in the vacuum, and the strong force coupling constant/charge (at extremely high energies) starts to fall. When the fall in charge with decreasing distance is balanced by the increase in force due to the geometric inverse square law, you have asymptotic freedom effects (obviously this involves gluon and other particles and is complex) for quarks.
Just to summarise: the electromagnetic energy absorbed by the polarized vacuum at short distances around a charge (out to IR cutoff at about 1 fm distance) is used to form virtual particle loops.


These short ranged loops consist of many different types of particles and produce strong and weak nuclear forces.


As you get close to the bare core charge, there is less polarized vacuum intervening between it and your approaching particle, so the electric charge increases. For example, the observable electric charge of an electron is 7% higher at 90 GeV as found experimentally.


The reduction in shielding means that less energy is being absorbed by the vacuum loops. Therefore, the strength of the nuclear forces starts to decline. At extremely high energy, there is - as in Wilson’s argument - no room physically for any loops (there are no loops beyond the upper energy cutoff, i.e. UV cutoff!), so there is no nuclear force beyond the UV cutoff.


What is missing from the Standard Model is therefore an energy accountancy for the shielded charge of the electron.


It is easy to calculate this: the electromagnetic field energy being used in creating loops up to the 90 GeV scale, for example, is 7% of the energy of the electric field of an electron (because 7% of the electron’s charge is lost to vacuum loop creation and polarization below 90 GeV, as observed experimentally; I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424).
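As a rough check of that 7% figure: 1/alpha runs from about 137.04 at low energy to roughly 128.5 at collider energies (the latter is used here as a representative value, not a number taken from the cited paper):

```python
# Rise in the effective electromagnetic coupling between low energy and ~90 GeV.
increase = 137.036 / 128.5 - 1
print(f"coupling increase ~ {100 * increase:.1f}%")   # ~6.6%, i.e. roughly 7%
```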


So this physical understanding should be investigated. Instead, the mainstream censors physics out and concentrates on a mathematical (non-mechanism) idea, supersymmetry.
Supersymmetry shows how all forces would have the same strength at 10^16 GeV.


This can’t be tested, but maybe it can be disproved theoretically as follows.


The energy of the loops of particles which are causing nuclear forces comes from the energy absorbed by the vacuum polarization phenomenon.


As you get to higher energies, you get to smaller distances. Hence you end up at some UV cutoff, where there are no vacuum loops. Within this range, there is no attenuation of the electromagnetic field by vacuum loop polarization. Hence within the UV cutoff range, there is no vacuum energy available to create short ranged particle loops which mediate nuclear forces.
Thus, energy conservation predicts a lack of nuclear forces at what is traditionally considered to be “unification” energy.


So this would seem to discredit supersymmetry, whereby at “unification” energy you get all forces having the same strength. The problem is that the mechanism-based physics is ignored in favour of massive quantities of speculation about supersymmetry to “explain” a unification which is not observed.


***************************


Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:


‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].


‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.


‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.


‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].


‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ - m and e’ - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.


‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001 …’


Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
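The one-loop number itself is easy to reproduce:

```python
# Schwinger's first-order correction: 1 + alpha/(2*pi) Bohr magnetons.
import math
alpha = 1 / 137.036
print(f"{1 + alpha / (2 * math.pi):.5f}")   # 1.00116
```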
Most of the charge is screened out by polarised charges in the vacuum around the electron core:


‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ - arxiv hep-th/0510040, p 71.


‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.


Comment by nc — February 23, 2007 @ 11:19 am