The final theory (more on heuristic understanding)

The answer to the previous post is that the arXiv paper on quantum field theory contained an error of a factor of 100 or so, due to ignoring a whole range of particle loops which appear in the energy band of 0.511 MeV to 92 GeV. The authors of the paper will produce an improved discussion, according to Lubos Motl.

My point is this: the final theory sets out to unify the forces and the particle types. I'm dealing with a static electron, and I'm asking how much of the charge of the electron is observable as a function of distance from the middle of it.

Why should this vacuum shielding involve particle loops beyond 0.511 MeV? If you smash electrons together hard enough to make them approach closely, then the collision energy will produce high-energy particles. But I'm talking about the vacuum polarization of a static electron, which shields the high core charge down to the value observed in Coulomb's law at large distances.

Let's consider where the loops arise from. A vacuum loop for electron-positron creation and annihilation is something like:

gamma ray -> e^- + e^+ -> gamma ray -> e^- + e^+ -> ...

This loop of matter creation and annihilation is occurring in the vacuum, and is described by the creation-annihilation operators in quantum field theory. The basic physics can be grasped with Heisenberg's uncertainty equation. Physically, particles are hitting one another in the vacuum. Just as in a particle accelerator, such collisions produce short-lived bursts of particles which have observable consequences, such as increasing the magnetic moment of the electron by 0.116% and causing the Lamb shift in the hydrogen atom's spectrum.

The average time that such particles persist depends on how soon another collision in the vacuum annihilates them. So Heisenberg's uncertainty relation for energy and time predicts the lifetime of any particular energy fluctuation. [There is no need to get into science fiction or metaphysics about Hawking's 'interpretations' of the uncertainty principle (parallel universes, baby universes, etc.), because that simply isn't physics, which is about checkable calculations.]

Particles with the lowest energy exist for the longest period: a simple inverse-law relationship between energy and time. Therefore, electron-positron pairs will dominate the vacuum, because they have the least rest mass-energy of all charged particles and exist for the longest time.

The next charged particle beyond the electron, the muon, has the same electric charge as the electron *but is over two hundred times as massive,* and will therefore exist on average for under 1/200th, or under 0.5%, of the time that electron-positron charges exist in the vacuum.
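Since the Heisenberg lifetime goes as t ~ h/E, the ratio of pair lifetimes is just the inverse ratio of rest mass-energies. A quick numeric sketch (the mass values are standard; the h/E relation is the one used throughout this post):

```python
# Heisenberg lifetime t ~ h/E implies t_muon / t_electron = m_e / m_mu.
m_e = 0.511    # electron rest mass-energy, MeV
m_mu = 105.7   # muon rest mass-energy, MeV

lifetime_ratio = m_e / m_mu
print(lifetime_ratio)  # ~0.0048, i.e. under 0.5% of the electron-pair lifetime
```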

An electron-positron pair is created by 1.022 MeV = 1.634 x 10^-13 J of energy, and has a lifetime according to the Heisenberg uncertainty relationship of

*t ~ h/E* ~ (6.626 x 10^-34 m^2 kg/s) / (1.634 x 10^-13 J) ~ 4 x 10^-21 seconds,

during which the maximum possible distance moved by either particle is

*x = ct =* (3 x 10^8 m/s)(4 x 10^-21 s) = 1.2 x 10^-12 metres.
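The arithmetic above can be checked directly; a minimal sketch using standard constants:

```python
# Heisenberg lifetime and maximum range for a virtual e-/e+ pair.
h = 6.626e-34            # Planck's constant, J.s
c = 3.0e8                # speed of light, m/s
E = 1.022e6 * 1.602e-19  # 1.022 MeV pair-creation energy in joules (~1.634e-13 J)

t = h / E  # lifetime, ~4e-21 s
x = c * t  # maximum distance moved at light speed, ~1.2e-12 m
print(t, x)
```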

This is why the vacuum isn't observably radioactive! The virtual particles have just enough range (10^-12 metres) to continuously upset the electron orbits in atoms, creating the Schroedinger statistical wave distribution instead of classical orbits, but they don't have enough range to break up molecules. The radioactivity of the vacuum is short-ranged enough that it can only upset the orbits of electrons and nuclear particles. This is rather like a gas producing Brownian motion in very small (micron-sized) dust particles while being unable to induce chaotic motion in large objects, because statistically the impacts balance out on large scales (creating pressure).

Pairs of heavier charged particles could exist closer to the core of an electron, if the electric field of the electron is responsible for creating the particles. In the Yang-Mills theory, this is the same as saying that force-causing gauge photons - which constitute the electric field in the U(1) quantum field theory - behave like very high energy gamma rays near a heavy nucleus (where pair production occurs: gamma ray -> electron + positron). Obviously, this route suggests that gauge photons interact with one another to create vacuum charges of higher mass when closer to the electron's middle. If I shield cobalt-60 with lead, some of the gamma rays (with a mean energy of 1.25 MeV) will be converted into electron-positron pairs by interacting with the strong electric field of the lead nucleus.

I cannot, however, see why the polarization of heavier particles would produce a greater attenuation of the electric charge: after all, they have the same amount of charge as electrons and more inertia, so they are less mobile and harder to polarize. Physically, they should produce *less* shielding, so Lubos Motl's off-the-cuff response to a question (see the comments section of the last post on this blog) is unclear.

I've demonstrated before on this blog that you can calculate the Yang-Mills electromagnetic force using Heisenberg's law in a reasonably straightforward (experimentally validated) way. You get about 137 times the Coulomb law force for two electrons. The explanation seems straightforward: the vacuum polarization around the electron core causes an electric field attenuation factor of alpha, 1/137.
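The 1/137 factor here is the fine structure constant; as a sanity check, it can be computed from standard (CODATA) constants:

```python
import math

# Fine structure constant: alpha = e^2 / (4*pi*eps0*hbar*c)
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J.s
c = 2.99792458e8         # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # ~137.036
```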

Hence, I'd expect the observable charge of an electron to naturally vary with distance *x* from the middle by some law due to polarization that looks like:

e(x)/e(x = infinity) = 1 + [(alpha)^-1 - 1]e^-ax = 1 + 136e^-ax

where a is the reciprocal of the distance over which the shieldable portion of the electric charge falls by a factor of e (where e = 2.71828...).
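A minimal numeric sketch of this proposed law. The decay constant a is a free parameter of the model; the value used below is purely illustrative:

```python
import math

def charge_ratio(x, a):
    """Proposed law: e(x)/e(infinity) = 1 + 136*exp(-a*x)."""
    return 1 + 136 * math.exp(-a * x)

a = 1e12  # assumed decay constant, 1/m (illustrative choice only)
print(charge_ratio(0, a))      # 137.0 at the core: fully unshielded charge ratio
print(charge_ratio(1e-10, a))  # ~1.0 at atomic distances: the shielded Coulomb charge
```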

There are two issues with this model. First, it doesn't in this form take account of unification properly. My argument is that at very close ranges, you get strong and weak forces occurring due to the effect of the electromagnetic force field strength on the vacuum. These stronger forces - due to creation of charges in the vacuum near the middle of the electron - sap energy from the electromagnetic field. For example, if hypothetically on average half of the gauge boson energy of the photons causing electromagnetism is used to produce heavy charges at a certain distance close to the middle of the electron, then the electromagnetic field energy at that distance as carried by gauge bosons (photons for electromagnetism) will be reduced by 50%.

So you can't have your cake and eat it; if one observational charge gets weaker, that implies that energy is being used by some other process. The strong force for quarks within say nucleons is optimised at low energies, and gets weaker at high energy collisions or when the quarks are brought closer together or close to other quarks. The reason may well be simply that the whole strong force phenomenology is driven by the variation in energy of the electroweak force field in the polarized vacuum. As the electromagnetic force gets stronger very close to the middle of a particle, less energy is therefore available for strong force coupling.

The equation above may need modification in other ways, too: it might not be a simple exponential law; for example, x might need to be replaced by x^n in the equation, where the value of n might not be 1 but could be almost anything. A value of n bigger than 1 would quench the charge of the electron more sharply than a normal exponential law, while a value of n smaller than 1 would do the opposite and smear out the charge variation more.
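The effect of the exponent n can be seen numerically. A sketch, with the assumption that the exponent is written in the dimensionless form (a*x)^n so that n has a clean effect, and with purely illustrative values of a and x (at distances beyond 1/a, larger n quenches the charge more sharply):

```python
import math

def charge_ratio(x, a, n):
    # Generalised law: e(x)/e(infinity) = 1 + 136*exp(-(a*x)**n)
    # (a*x)**n keeps the exponent dimensionless -- an assumed convention.
    return 1 + 136 * math.exp(-(a * x) ** n)

a = 1e12   # assumed decay constant, 1/m (illustrative only)
x = 2e-12  # a distance beyond 1/a, so a*x = 2
for n in (0.5, 1, 2):
    print(n, charge_ratio(x, a, n))  # ratio falls faster as n grows
```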

This suggestion is in sharp contrast to the official renormalization approach to quantum field theory. The existing system uses a logarithmic correction for observable charge variation with energy (which is, in general, inversely proportional to distance from the middle of the particle):

e(x)/e(x = infinity) ~ 1 + [0.005 ln(A/E)]

This equals 1.07 for A = 92,000 MeV and E = 0.511 MeV, as experimentally validated (Levine et al., PRL, 1997); hence the relative shielding factor of the vacuum falls from 137 at 0.511 MeV collisions to about 128 at 92 GeV. What is artificial about this equation is the lower limit (cutoff) of 0.511 MeV. From quantum mechanics you would expect a smooth curve for the shielding from charge polarization, not an abrupt lower limit. The electron charge is exactly e until you get within a certain distance of it (corresponding to a collision energy of 0.511 MeV), at which point it abruptly starts to increase.
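Plugging the quoted numbers into the logarithmic formula (the 0.005 coefficient and both energy limits are the ones given in the text) gives roughly the stated rise:

```python
import math

A = 92000.0  # upper limit (Z boson mass scale), MeV
E = 0.511    # lower cutoff (electron rest mass-energy), MeV

ratio = 1 + 0.005 * math.log(A / E)
print(ratio)        # ~1.06, in line with the quoted ~7% rise in observable charge
print(137 / ratio)  # shielding factor falls from 137 to roughly 129 (~128 quoted)
```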

*There is no way this could happen. Why should vacuum polarization stop at some arbitrary distance?!*

The whole of quantum field theory is mathematically a fiddle, and this proves it. You could have a lower cutoff if there were some mechanism for it. I think confusion is introduced into QFT by allowing charge variation **as a function of collision energy**, *instead of trying to find a model for the variation of observable charge* **as a function of distance from the middle of the electron**, in the simplest case of a static electron. Some of the energy in a collision will produce extra pairs of charges, which means there is no straightforward correspondence between these two situations.

The shielding factor is falling because you are getting closer to the core of the electron, seeing less of the polarized shield and more of the unshielded charge. Similarly, if it is a cloudy day and you get in an aircraft and climb much higher in the atmosphere, you will see more sunlight and less attenuation. If you could get above the cloud cover, you would see completely unshielded sunlight. Likewise, if you hit particles together so hard that the clouds of virtual charges no longer intervene in the small distance between the particle cores when they collide, the electric charge involved is much stronger because it is less shielded.

The final theory involves getting to grips with charge and mass, which first means getting a working model for what these things are in quantum field theories. Charge is the priority, because it is less variable than mass (mass varies in relativity, charge doesn't).

Fundamental particles have charges in the ratio 1 : -2 : 3 (for example, the d quark is -1/3, the u quark is +2/3, and the electron is -1). The fractional charges of quarks in the final theory must arise from polarization, either by itself or in combination with some other effect or principle.

When you get two or three quarks close together, they share the same cloud of polarized charge, but the cloud is two or three times stronger, so from a great distance the charge of each individual quark seems to be only a fraction of what you get at the same distance from an electron.
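Taken at face value, the sharing argument says that N core charges in one cloud each show 1/N of the charge an isolated electron would show at long range. A toy sketch of that arithmetic (the 1/N rule is this post's conjecture, not established QFT):

```python
# Toy model: N unit core charges sharing one polarization cloud N times as strong,
# so each core's apparent long-range charge is 1/N of the lone-electron value.
def apparent_fraction(n_cores):
    return 1.0 / n_cores

print(apparent_fraction(1))  # 1.0: lone electron, full observed unit charge
print(apparent_fraction(3))  # ~0.333: three quarks sharing a cloud -> 1/3-unit charges
```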

Obviously this is very sketchy indeed, and it raises a lot of further questions and possible objections. But I think something of this kind is more likely to lead somewhere than a load of extra-dimensional speculation which is not even wrong. Peter Woit finishes his new book Not Even Wrong by saying that some kind of new symmetry principle - which is not the approach of string theory - is probably needed. He writes that little is known of the representation theory of general covariance in general relativity, and that quantum field theory may be unified with gravity by this sort of investigation, rather than by stringy speculations about extra dimensions.

He also gives a very nice, clear discussion of loop quantum gravity on page 189. The loop quantum gravity theory looks considerably more straightforward to me now. With all this stuff, you get experts producing a vast amount of technical mathematics which ends up with no hard numbers. This is as true for string theory as for loop quantum gravity. But loop quantum gravity is describing real physics, with observational support from quantum field theory experiments. Loops exist. There is plenty of experimental evidence for them (this situation is the opposite of that for string theory).

Loop quantum gravity gets from quantum field theory to general relativity without a metric, as Lee Smolin described in recent Perimeter Institute lectures. The loops are described by a Penrose spin network, and graphs of the interactions are summed to produce the path integral which gives a background independent (metric-less) version of general relativity. I find it easy to reconcile this completely with what I've suggested is the gravity mechanism and the mechanism for the metric.
