Quantum gravity physics based on facts, giving checkable predictions

Sunday, March 19, 2006

Copies of my comments to Dr Dantas's blog and to Cosmic Variance

http://christinedantas.blogspot.com/2006/03/wmap-3-year-data-release.html
1

On the subject of drl versus the cosmological constant: Dr Lunsford outlines the problems in the 5-d Kaluza-Klein abstract (mathematical) unification of Maxwell's equations and GR, and his own unification was published in Int. J. Theor. Phys., v 43 (2004), No. 1, pp. 161-177. This peer-reviewed paper was submitted to arXiv.org but was removed from arXiv.org, apparently as censorship, since the 6-dimensional spacetime it investigates is not consistent with Witten’s speculative 10/11 dimensional M-theory. It is however on the CERN document server at http://doc.cern.ch//archive/electronic/other/ext/ext-2003-090.pdf, and it shows the errors in the historical attempts by Kaluza, Pauli, Klein, Einstein, Mayer, Eddington and Weyl. It proceeds to the correct unification of general relativity and Maxwell’s equations, finding 4-d spacetime inadequate:

‘Gravitation and Electrodynamics over SO(3,3)’ on CERN document server, EXT-2003-090: ‘an approach to field theory is developed in which matter appears by interpreting source-free (homogeneous) fields over a 6-dimensional space of signature (3,3), as interacting (inhomogeneous) fields in spacetime. The extra dimensions are given a physical meaning as ‘coordinatized matter’. The inhomogeneous energy-momentum relations for the interacting fields in spacetime are automatically generated by the simple homogeneous relations in 6-D. We then develop a Weyl geometry over SO(3,3) as base, under which gravity and electromagnetism are essentially unified via an irreducible 6-calibration invariant Lagrange density and corresponding variation principle. The Einstein-Maxwell equations are shown to represent a low-order approximation, and the cosmological constant must vanish in order that this limit exist.’

Lunsford begins with an enlightening overview of attempts to unify electromagnetism and gravitation:

‘The old goal of understanding the long-range forces on a common basis remains a compelling one. The classical attacks on this problem fell into four classes:

‘1. Projective theories (Kaluza, Pauli, Klein)
‘2. Theories with asymmetric metric (Einstein-Mayer)
‘3. Theories with asymmetric connection (Eddington)
‘4. Alternative geometries (Weyl)

‘All these attempts failed. In one way or another, each is reducible and thus any unification achieved is purely formal. The Kaluza theory requires an ad hoc hypothesis about the metric in 5-D, and the unification is non-dynamical. As Pauli showed, any generally covariant theory may be cast in Kaluza’s form. The Einstein-Mayer theory is based on an asymmetric metric, and as with the theories based on asymmetric connection, is essentially algebraically reducible without additional, purely formal hypotheses.

‘Weyl’s theory, however, is based upon the simplest generalization of Riemannian geometry, in which both length and direction are non-transferable. It fails in its original form due to the non-existence of a simple, irreducible calibration invariant Lagrange density in 4-D. One might say that the theory is dynamically reducible. Moreover, the possible scalar densities lead to 4th order equations for the metric, which, even supposing physical solutions could be found, would be differentially reducible. Nevertheless the basic geometric conception is sound, and given a suitable Lagrangian and variational principle, leads almost uniquely to an essential unification of gravitation and electrodynamics with the required source fields and conservation laws.’ Again, the general concepts involved are very interesting: ‘from the current perspective, the Einstein-Maxwell equations are to be regarded as a first-order approximation to the full calibration-invariant system.

‘One striking feature of these equations that distinguishes them from Einstein’s equations is the absent gravitational constant – in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behaviour. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant – so the theory explains why general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularised.’

A causal model for GR must separate out the description of matter from the expanding spacetime universe. Hence you have three expanding spacetime dimensions, but matter itself is not expanding, and is in fact contracted by the gravitational field, the source for which is vector boson radiation in QFT.

The CC is used to cancel out gravitational retardation of supernovae at long distances. You can get rid of the CC by taking the Hubble expansion as primitive, and gravity as a consequence of expansion in spacetime. Outward force F = ma = mc/(age of universe) => equal inward reaction force (Newton's 3rd law). According to the Standard Model possibilities of QFT, that inward force must be carried by vector boson radiation. So causal shielding (LeSage) gravity is a result of the expansion. Thus, quantum gravity and the CC problem are dumped in one go.
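As a rough numerical illustration of that outward-force estimate (a minimal sketch; the mass and age of the universe used below are assumed round figures, not values taken from this post):

```python
# Rough order-of-magnitude check of F = ma = mc/t for the expanding universe.
# The mass and age below are assumed illustrative values, not figures from the post.
c = 3.0e8      # speed of light, m/s
t = 4.35e17    # ~13.8 billion years in seconds (assumed)
m = 3.0e52     # rough mass of the observable universe, kg (assumed)

a = c / t      # effective outward acceleration, ~7e-10 m/s^2
F = m * a      # outward force; Newton's 3rd law implies an equal inward force
print(f"a = {a:.1e} m/s^2, F = {F:.1e} N")
```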

I personally don't like this result; it would be more pleasing not to have to do battle with the mainstream over the CC. But frankly I don't see why an ad hoc model composed of 96% dark matter and dark energy should be defended to the point of absurdity by suppressing workable alternatives which are more realistic.

The same has happened in QFT due to strings. When I was last at university, I sent Stanley Brown, editor of Physical Review Letters, my gravity idea, a really short and concise paper, and he rejected it for being an "alternative" to string theory! I don't believe he even bothered to check it. I'd probably have done the same thing if I were flooded by nonsense ideas from outsiders, but it is a sad excuse.

2

Lee Smolin, in starting with the known facts of QFT and building GR from them, is an empiricist, in contrast to the complete speculation of string theorists.

We know some form of LQG spin foam vacuum is right, because vector bosons (1) convey force, and (2) have spin.

For comparison, nobody has evidence for superpartners, extra dimensions, or any given stringy theory.

Danny Lunsford unites Maxwell's equations and GR using a plausible treatment of spacetime in which there are exactly twice as many dimensions as observed, the extra dimensions describing non-expanding matter while the normal spacetime dimensions describe the expanding spacetime. Because the expanding BB spacetime is symmetrical around us, those three dimensions can be lumped together.

The problem is that the work by Smolin and Lunsford is difficult for the media to report, and is not encouraged by string theorists, who have too much power.

Re inflation: the observed CBR smoothness "problem" at 300,000 years (the very tiny size scale of the ripples across the sky) is only a problem for seeding galaxy formation in the mainstream paradigm for GR.

The mainstream approach is to take GR as a model for the universe, which assumes gravity is not a QFT radiation pressure force.

But if you take the observed expansion as primitive, then you get a mechanism for local GR as the consequence, without the anomalies of the mainstream model which require CC and inflation.

Outward expansion in spacetime by Newton's 3rd law results in inward gauge boson pressure, which causes the contraction term in GR as well as gravity itself.

GR is best viewed simply as Penrose describes it:

(1) the tensor field formulation of Newton's law, R_uv = 4Pi(G/c^2)T_uv, and

(2) the contraction term which leads to all departures from Newton's law (apart from CC).

Putting the contraction term into the Newtonian R_uv = 4Pi(G/c^2)T_uv gives the Einstein field equation without the CC:

R_uv - ½Rg_uv = 8Pi(G/c^2)T_uv

Feynman explains very clearly that the contraction term can be considered physical, e.g., the Earth's radius is contracted by the amount ~(1/3)MG/c^2 = 1.5 mm.

This is like radiation pressure squeezing the earth at the subatomic level (not just the macroscopic surface of the planet), and this contraction in space also causes a related gravitational reduction in time, or gravitational time dilation.
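A quick numeric check of the 1.5 mm figure quoted above (a minimal sketch using standard constants; the 1/3 factor is the one used in the text):

```python
# Check of the Earth-radius contraction ~(1/3)GM/c^2 quoted from Feynman.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of the Earth, kg

contraction = (1.0 / 3.0) * G * M_earth / c**2
print(f"(1/3)GM/c^2 = {contraction * 1000:.2f} mm")  # ~1.5 mm
```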

The mechanism behind the deflection of light by the sun is that everything, including light, gains gravitational potential energy as it approaches a mass like the sun.

Because the light passes perpendicularly to the gravity field vector at closest approach (the average deflection position), the increased gravitational energy of a slow-moving body would be used equally in two ways: 50% of the energy would go into increasing the speed, and 50% into changing the direction (bending it towards the sun).

Light cannot increase in speed, so 100% of the gained energy must go into changing the direction. This is why the deflection of light by the sun is exactly twice that predicted for slow-moving particles by Newton's law. All GR is doing is accounting for energy.
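As a numeric check on that factor of two for light grazing the sun (a minimal sketch; the formulae 2GM/(c^2 b) for the Newtonian deflection and twice that for light are the standard textbook expressions, used here only to show the doubling described above):

```python
import math

# Deflection of light grazing the sun: Newtonian value vs the doubled value for light.
G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
M_sun = 1.989e30 # kg
b = 6.96e8       # impact parameter ~ solar radius, m

newtonian = 2 * G * M_sun / (c**2 * b)  # slow-particle (Newtonian) deflection, radians
light = 2 * newtonian                   # light: all gained energy goes into deflection
to_arcsec = 180 / math.pi * 3600
print(f"Newtonian: {newtonian * to_arcsec:.2f} arcsec, light: {light * to_arcsec:.2f} arcsec")
```

This reproduces the familiar ~0.87 arcsecond Newtonian figure and the ~1.75 arcseconds found for starlight grazing the sun.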

This empiricist model accurately predicts the value of G using cosmological data (Hubble constant and density of universe), eliminating most dark matter in the process. It gets rid of the need for inflation since the effective strength of gravity at 300,000 years was very small, so the ripples were small.

=> No inflation needed. All forces (nuclear, EM, gravity) are in constant ratio because all have inter-related QFT energy exchange mechanisms. Therefore the fine structure parameter 137 (ratio of strong force to EM) remains constant, and the ratio of gravity to EM remains constant.

The sun's radiating power, and the nuclear reactions in the first three minutes of the BB, are not affected at all by variations in the absolute strengths of all the fundamental forces, since the forces remain in the same ratio.

Thus, if gravity is doubled and the nuclear and EM force strengths are also doubled, the sun will not shine any differently than it does now. The extra compression due to an increase in gravity would be expected to increase the fusion rate, but the extra Coulomb repulsion between approaching protons (due to the rise in EM force) cancels out the gravitational compression.

So the ramshackle-looking empiricist model does not conflict at all with the nucleosynthesis of the BB, or with stellar evolution. It does conflict with the CC and inflation, but those are just epicycles in the mainstream model, not objective facts.

http://cosmicvariance.com/2006/03/16/wmap-results-cosmology-makes-sense/

This is hyped up to get media attention: the CBR from 300,000 years after the BB says nothing about the first few seconds, unless you believe their vague claims that the polarisation tells us something about the way the early inflation occurred. That might be true, but it is very indirect.

I do agree with Sean on CV that n = 0.95 may be an important result from this analysis. I’d say it’s the only useful result. But the interpretation of the universe as 4% baryons, 22% dark matter and 74% dark energy is a nice fit to the existing LambdaCDM epicycle theory from 1998. The new results on this are not too different from previous empirical data, but this ‘nice consistency’ is a euphemism for ‘useless’.

WMAP has produced more accurate spectral data of the fluctuations, but that doesn’t prove the ad hoc cosmological interpretation which was force-fitted to the data in 1998. Of course the new data fits the same ad hoc model. Unless there was a significant error in the earlier data, it would do. Ptolemy’s universe, once fiddled, continued to model things, with only occasional ‘tweaks’, for centuries. This doesn’t mean you should rejoice.

Dark matter, dark energy, and the tiny cosmological constant describing the dark energy, remain massive epicycles in current cosmology. The Standard Model has not been extended to include dark matter and energy. It is not hard science; it’s a very indirect interpretation of the data. I’ve got a correct prediction, made without a cosmological constant, published in ‘96, years before the ad hoc Lambda CDM model. Lunsford’s unification of EM and GR also dismisses the CC.

http://cosmicvariance.com/2006/03/18/theres-gold-in-the-landscape/

They were going to name the flower “Desert Iron Pyrites”, but then they decided “Desert Gold” is more romantic ;-)

As requested, Dr Peter Woit has kindly removed the following comments, which I made on the subject of physicist John Barrow's $1.4 million prize for religion (see the second comment below):

http://www.math.columbia.edu/~woit/wordpress/?p=364

anon Says: March 18th, 2006 at 3:47 am

Secret milkshake, I agree! The problem religion posed in the past to science was insistence on the authority of scripture and accepted belief systems over experimental data. If religion comes around to looking at experimental data and trying to go from there, then it becomes more scientific than certain areas of theoretical physics. Does anyone know what Barrow has to say about string theory?

I learnt a lot of out-of-the-way ‘trivia’ from ‘The Anthropic Cosmological Principle’, particularly the end notes, e.g.:

‘… should one ascribe significance to empirical relations like m(electron)/m(muon) ~ 2{alpha}/3, m(electron)/m(pion) ~ {alpha}/2 … m(eta) - 2m(charged pion) = 2m(neutral pion), or the suggestion that perhaps elementary particle masses are related to the zeros of appropriate special functions?’
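Those relations are easy to check numerically (a quick sketch using standard particle masses in MeV):

```python
# Numeric check of the empirical relations quoted from Barrow's end notes.
alpha = 1 / 137.036
m_e, m_mu = 0.511, 105.66                         # electron, muon masses (MeV)
m_pi_ch, m_pi_0, m_eta = 139.57, 134.98, 547.86   # charged pion, neutral pion, eta (MeV)

print(f"m_e/m_mu = {m_e / m_mu:.5f} vs 2*alpha/3 = {2 * alpha / 3:.5f}")
print(f"m_e/m_pi = {m_e / m_pi_ch:.5f} vs alpha/2 = {alpha / 2:.5f}")
print(f"m_eta - 2*m_pi(+) = {m_eta - 2 * m_pi_ch:.1f} MeV vs 2*m_pi(0) = {2 * m_pi_0:.1f} MeV")
```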

By looking religiously at numerical data, you can eventually spot more ‘coincidences’ that enable empirical laws to be formulated. If alpha is the core charge shielding factor due to the polarised vacuum of QFT, then it is possible to justify particle mass relationships; all observable particles apart from the electron have masses quantized as M = [electron mass].n(N+1)/(2.alpha) ~ 35n(N+1) MeV, where n is 1 for leptons, 2 for mesons and naturally 3 for baryons. N is also an integer, and takes the values of the ‘magic numbers’ of nuclear physics for relatively stable particles: for the muon (the most stable particle after the neutron), N = 2; for nucleons, N = 8; for the tauon, N = 50.
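Running that formula for the three cases mentioned (a minimal sketch; the n and N assignments are the ones given above, and the measured masses are standard values quoted only for comparison):

```python
# Check of M = m_e * n*(N+1)/(2*alpha) for the cases listed in the text.
m_e = 0.511          # electron mass, MeV
alpha = 1 / 137.036  # fine structure constant

cases = {            # name: (n, N, measured mass in MeV)
    "muon":    (1, 2,  105.7),
    "nucleon": (3, 8,  938.3),
    "tauon":   (1, 50, 1776.9),
}
for name, (n, N, measured) in cases.items():
    predicted = m_e * n * (N + 1) / (2 * alpha)
    print(f"{name}: predicted {predicted:.0f} MeV, measured {measured:.0f} MeV")
```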

Hence, there’s a selection principle allowing the masses of relatively stable particles to be deduced. Since the Higgs boson causes mass and may have a mass like that of the Z boson, it’s interesting that [Z-boson mass]/(3/2 x 2.Pi x 137 x 137) = 0.51 MeV (electron mass), and [Z-boson mass]/(2.Pi x 137) ~ 105.7 MeV (muon mass). In the electron, the core must be quite distant from the particle giving the mass, so there are two separate vacuum polarisations between them, weakening the coupling to just alpha squared (and a geometrical factor). In the muon and all particles other than the electron, there is extra binding energy and so the core is closer to the mass-giving particle, hence only one vacuum polarisation separates them, so the coupling is alpha.

Remember that Schwinger’s coupling correction in QED increases Dirac’s magnetic moment of the electron to about 1 + alpha/(2.Pi). When you think outside the box, sometimes coincidences have a reason.
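Both the Z-boson ratios above and the Schwinger correction are easy to verify numerically (a minimal sketch using the standard Z mass; the 137 in the expressions is taken literally, as in the text):

```python
import math

# Check of the Z-boson mass ratios and Schwinger's first-order correction.
m_Z = 91187.6        # Z boson mass, MeV (standard value)
alpha = 1 / 137.036

electron_like = m_Z / (1.5 * 2 * math.pi * 137 ** 2)  # ~0.51 MeV (electron mass)
muon_like = m_Z / (2 * math.pi * 137)                 # ~105.9 MeV (cf. muon, 105.7 MeV)
schwinger = 1 + alpha / (2 * math.pi)                 # ~1.00116, vs Dirac's 1

print(f"{electron_like:.3f} MeV, {muon_like:.1f} MeV, 1 + alpha/(2 pi) = {schwinger:.6f}")
```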

anon Says: March 18th, 2006 at 6:04 am

Sorry Peter, you may delete the above comment. I’ve just read your links. Barrow is today well into using religion to justify stringy land and other unobservables like dark energy/CC. The 1986 book presented facts and left the reader to decide, which is different from what he’s now doing, preaching a belief system.

The CC is used to cancel out gravitational retardation of supernovae at long distances. You can get rid of the CC by taking the Hubble expansion as primitive, and gravity as a consequence of expansion in spacetime. Outward force F = ma = mc/(age of universe) => equal inward reaction force (Newton's 3rd law). So causal shielding (LeSage) gravity is a result of the expansion. Thus, quantum gravity and the CC problem are dumped in one go. Better delete this quickly too, before it annoys Barrow.

Another comment to Cosmic Variance:

http://cosmicvariance.com/2006/01/25/general-relativity-as-a-tool/#comment-15326

The best way to understand that the basic field equation of GR is empirical fact is to extend Penrose’s arguments:

(1) Represent Newton’s empirical gravity potential in the tensor calculus of Gregorio Ricci-Curbastro and Tullio Levi-Civita: R_uv = 4.Pi(G/c^2)T_uv, which applies to low speeds/weak fields.

(2) Consider objects moving past the sun, gaining gravitational potential energy, and being deflected by gravity. The mean angle of the object to the radial line of the gravity force from the sun is 90 degrees, so for slow-moving objects, 50% of the energy is used in increasing the speed of the object, and 50% in deflecting the path. But because light cannot speed up, 100% of the gravitational potential energy gained by light on its approach to the sun goes into deflection, and this is the mechanism whereby light suffers twice the deflection suggested by Newton’s law. Hence for light deflection: R_uv = 8.Pi(G/c^2)T_uv.

(3) To unify the different equations in (1) and (2) above, you have to modify (2) as follows: R_uv - 0.5Rg_uv = 8.Pi(G/c^2)T_uv, where g_uv is the metric. This is the Einstein-Hilbert field equation.

At low speeds and in weak gravity fields, R_uv = -0.5Rg_uv, so the left-hand side becomes R_uv - 0.5Rg_uv = 2R_uv, and the field equation reduces to the Newtonian approximation R_uv = 4.Pi(G/c^2)T_uv.

GR is based entirely on empirical facts. Speculation only comes into it after 1915, via the “cosmological constant” and other “fixes”. Think about the mechanism for the gravitation and the contraction which constitute pure GR: it is quantum field theory, radiation exchange.

Fundamental particles have spin which in an abstract way is related to vortices. Maxwell in fact argued that magnetism is due to the spin alignment of tiny vacuum field particles.

The problem is that the electron is nowadays supposed to be in an almost metaphysical superposition of spin states until measured, which indirectly (via the EPR-Bell-Aspect work) leads to the entanglement concept you mention. But Dr Thomas Love of California State University last week sent me a preprint, “Towards an Einsteinian Quantum Theory”, where he shows that the superposition principle is a fallacy, due to the use of two versions of the Schroedinger equation: a system described by the time-dependent Schroedinger equation isn’t in an eigenstate between interactions.

“The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”
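A minimal numerical illustration of the distinction being drawn (an assumed generic two-level system, not anything taken from Dr Love's preprint): a state evolved under the time-dependent Schroedinger equation between interactions remains a superposition of energy eigenstates rather than a single eigenstate.

```python
import numpy as np

# Generic two-level system (assumed example): evolve a state with the
# time-dependent Schroedinger equation and show it stays a superposition
# of energy eigenstates, i.e. it is not in an eigenstate between interactions.
hbar = 1.0
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])                  # assumed Hamiltonian (arbitrary units)
E, V = np.linalg.eigh(H)                    # energy eigenvalues and eigenvectors

psi0 = np.array([1.0, 0.0], dtype=complex)  # a basis state = superposition of eigenstates
for t in (0.0, 1.0, 2.0):
    U = V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T  # exact propagator
    psi = U @ psi0
    weights = np.abs(V.conj().T @ psi) ** 2  # eigenstate weights: both remain nonzero
    print(f"t={t}: P(basis state 0) = {abs(psi[0])**2:.3f}, eigenstate weights = {weights.round(3)}")
```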

1 Comments:

At 1:07 PM, Blogger nige said...

copy of comment to

http://eskesthai.blogspot.com/2004/11/entanglement-and-new-physics.html

The electron is nowadays supposed to be in an almost metaphysical superposition of spin states until measured, which indirectly (via the EPR-Bell-Aspect work) leads to the entanglement concept you mention. But Dr Thomas Love of California State University last week sent me a preprint, “Towards an Einsteinian Quantum Theory”, where he shows that the superposition principle is a fallacy, due to two versions of the Schroedinger equation: a system described by the time-dependent Schroedinger equation isn’t in an eigenstate between interactions.

“The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”

In the same report, page 133, Dr Love explains that von Neumann's proof of the impossibility of hidden variables is false because it relies on the validity of the principle of superposition.

So we can, and if we are objective we must, move on to make calculations with a causal model of reality. Feynman knew this 42 years ago, as shown in his Nov 1964 Cornell lectures filmed for BBC TV, "The Character of Physical Law" (broadcast on BBC2 in 1965) and published in his book The Character of Physical Law, pp. 171-3:

"The inexperienced [theorists who have never made contact with reality by getting a theory which predicts something measurable], and crackpots, and people like that, make guesses that are simple, but [with extensive knowledge of the actual facts rather than speculative theories of physics] you can immediately see that they are wrong, so that does not count. ... There will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving i on a territory."

On page 38 of this book, Feynman has a diagram which looks basically like this: >E S<, where E is earth and S is sun. The arrows show the push that causes gravity. This is the LeSage gravity scheme, which I now find Feynman also discusses (without the diagram) in his full Lectures on Physics. He concludes that the mechanism in its form as of 1964 contradicted the no-ether relativity model and could not make any valid predictions, but finishes off by saying (p. 39):

"'Well,' you say, 'it was a good one, and I got rid of the mathematics for a while. Maybe I could invent a better one.' Maybe you can, because nobody knows the ultimate. But up to today [1964], from the time of Newton, no one has invented another theoretical description of the mathematical machinery behind this law which does not either say the same thing over again, or make the mathematics harder, or predict some wrong phenomena. So there is no model of the theory of gravitation today, other the mathematical form."

Does this mean Feynman is after a physical mechanism, or is happy with the mathematical model? The answer is there on pages 57-8:

"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities."

 
