Quantum gravity physics based on facts, giving checkable predictions: February 2006

Monday, February 27, 2006

Recent email to Dr Chris Oakley, with an update added in red. The string theory approach to QFT (quantum gravity, superforce unification, SUSY) is extremely unclear and disconnected from reality.

I've quoted a section from an old (1961) book on 'Relativistic Electron Theory' at http://electrogravity.blogspot.com/2006/02/standard-model-says-mass-higgs-field.html:

''The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy ... It will be apparent that a hole in the negative energy states is equivalent to a particle with the same mass as the electron ... The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

'Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled 'negative energy sea' the complete theory (hole theory) can no longer be a single-particle theory.

'The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

'In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

'For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the 'crowded' vacuum is to change these to new constants e' and m', which must be identified with the observed charge and mass. ... If these contributions were cut off in any reasonable manner, m' - m and e' - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

'All this means that the present theory of electrons and fields is not complete. ... The particles ... are treated as 'bare' particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example ... the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger's coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001...'

This kind of clear-cut physics is more appealing to me than string theory about extra dimensions and such like. There is some evidence that masses for the known particles can be described by a two-step mechanism. First, virtual particles in the vacuum (most likely trapped neutral Z particles, 91 GeV mass) interact with one another by radiation to give rise to mass (a kind of Higgs field). Secondly, real charges can associate with a trapped Z particle either inside or outside the polarised veil of virtual charges around the real charge core: http://electrogravity.blogspot.com/2006/02/standard-model-says-mass-higgs-field.html

The polarised charge around a trapped Z particle (it is neutral overall, but so is the photon, and the photon's EM cycle is half positive electric field and half negative in Maxwell's model of light, so a neutral particle still has electric fields in the close-in picture) gives a shielding factor of 137, with an additional factor of twice Pi for some geometric reason, possibly connected to spin/magnetic polarisation. If you spin a loop as seen edge-on, the exposure it receives per unit area falls by a factor of Pi compared to a non-spinning cylinder, and we are dealing with exchange of gauge bosons, like radiation, to create forces between spinning particles. The electron loop has spin 1/2, so it rotates through 720 degrees to cover a complete revolution, like a Mobius strip loop. Thus it has a reduction factor of twice Pi as seen edge-on, and the magnetic alignment which increases the magnetic moment of the electron means that the core electron and the virtual charge in the vacuum are aligned side-on.

Z-boson mass: 91 GeV

Muon mass (electron with a Higgs boson/trapped Z-boson inside its veil): 91 / (2.Pi.137) = 105.7 MeV.

Electron mass (electron with a Higgs boson/trapped Z-boson outside its veil): 91 / [(1.5).(137).(2.Pi.137)] = 0.51 MeV.

Most hadron masses are describable by (0.511 MeV).(137/2).n(N + 1) = 35n(N + 1) MeV, where n and N are integers, with a similar sort of heuristic explanation (as yet incomplete in details): http://feynman137.tripod.com/
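Here is a quick numeric check of this arithmetic (a sketch in Python; the 137, 1.5 and 2.Pi factors are the heuristic's inputs, not derived here):

```python
# Check of the heuristic lepton/hadron mass arithmetic quoted above.
# The 137 (~1/alpha), 1.5 and 2*pi factors are assumptions of the heuristic.
import math

m_Z = 91000.0  # Z-boson mass in MeV

muon = m_Z / (2 * math.pi * 137)      # trapped Z inside the polarised veil
electron = muon / (1.5 * 137)         # trapped Z outside the polarised veil
print(f"muon:     {muon:.1f} MeV (measured: 105.7 MeV)")
print(f"electron: {electron:.3f} MeV (measured: 0.511 MeV)")

# Hadron mass rule: (0.511 MeV).(137/2).n(N + 1) = 35n(N + 1) MeV
def hadron_mass(n, N):
    return 0.511 * (137 / 2) * n * (N + 1)

print(f"n=1, N=1: {hadron_mass(1, 1):.0f} MeV")  # 70 MeV from the rule
```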

Supersymmetry can be completely replaced by physical mechanism and energy conservation of the field bosons:

Supersymmetry is not needed at all because the physical mechanism by which nuclear and electroweak forces unify at high energy automatically leads to perfect unification, due to conservation of energy: as you smash particles together harder, they break through the polarised veil around the cores, exposing a higher core charge, so the electromagnetic force increases. My calculation at http://electrogravity.blogspot.com/2006/02/heisenbergs-uncertainty-sayspd-h2.html suggests that the core charge is 137 times the observed (long range) charge of the electron. However, simple conservation of potential energy for the continuously-exchanged field of gauge bosons shows that this increase in electromagnetic field energy must be compensated for by a reduction in other fields as collision energy increases. This will reduce the core charge (and associated strong nuclear force) from 137 times the low-energy electric charge, compensating for the rising amount of energy carried by the electromagnetic field of the charge at long distances.

Hence, in sufficiently high energy collisions, the unified force will be some way intermediate in strength between the low-energy electromagnetic force and the low-energy strong nuclear force. The unified force will be attained where the energy is sufficient to completely break through the polarised shield around the charge cores, possibly at around 10^16 GeV as commonly suggested. A proper model of the physical mechanism would get rid of the Standard Model problems of unification (due to incomplete approximations used to extrapolate to extremely high energy): http://electrogravity.blogspot.com/2006/02/heuristic-explanation-of-short-ranged_27.html

So I don't think there is any scientific problem with sorting out force unification without SUSY in the Standard Model, or of including gravity (http://feynman137.tripod.com/). The problem lies entirely with the mainstream preoccupation with string theory. Once the mainstream realises it was wrong, instead of admitting it was wrong, it will just use its preoccupation with string theory as the excuse for having censored alternative ideas.

The problem is whether Dr Peter Woit can define crackpottery to both include mainstream string theory and exclude some alternatives which look far-fetched or crazy but have a more realistic chance of being tied to facts and making predictions which can be tested. With string theory, Dr Woit finds scientific problems. I think the same should be true of alternatives, which should be judged on scientific criteria. The problem is that the mainstream stringers don't use scientific grounds to judge either their own work or alternatives. They say they are right because they are a majority, and alternatives are wrong because they are in a minority.

Heuristic explanation of short-ranged nuclear forces and unification

Conservation of energy for all the force field mediators would imply that the fall in the strength of the strong force would be accompanied by the rise in the strength of the electroweak force (which increases as the bare charge is exposed when the polarised vacuum shield breaks down in high energy collisions), which implies that forces unify exactly without needing supersymmetry (SUSY).

Above: what the fuss is all about, the unification of strong, electromagnetic, and weak forces at high energy. The Standard Model is well tested, but doesn't predict a neat unification into a single 'superforce' at high energy unless there is a 1:1 boson:fermion supersymmetry. Tony Smith has classified some of the stringy theories which attempt to deal with this issue. However, why force nature to become a superforce at 10^15 GeV, when you don't know that such a superforce exists, you have no way to test it because it is so many orders of magnitude higher than even an earth-sized particle accelerator could attain, and you don't see any superpartners anyway? Notice that the mechanism proved on this blog and linked site is ignored, and suppressed from arXiv by mainstream string theorists. What I'm doing is building on known facts. There is good evidence from Koltick's experiments that the electroweak forces increase with collision energy, as I've discussed in previous posts, and that the strong nuclear force decreases with increasing collision energy. At low energies, the experimentally determined strong nuclear force strength is alpha = 1 (which is about 137 times the Coulomb law), but it falls to alpha = 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.
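For comparison, the quoted fall of the strong coupling is roughly what the standard one-loop running formula gives. Here is a sketch; my choices of Lambda = 0.2 GeV and the flavour counts are illustrative assumptions, not fitted values:

```python
# One-loop QCD running: alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2/Lambda^2)).
# Lambda = 0.2 GeV and the flavour counts n_f are illustrative assumptions.
import math

def alpha_s(Q_GeV, n_f, Lambda=0.2):
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q_GeV**2 / Lambda**2))

for Q, n_f, quoted in [(2, 4, 0.35), (7, 4, 0.2), (200, 5, 0.1)]:
    print(f"Q = {Q:>3} GeV: alpha_s = {alpha_s(Q, n_f):.2f} (quoted: {quoted})")
```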

Since the force field energy is independent of the kinetic energy (charges don't vary like masses do), conservation of energy for all the force field mediators would imply that the fall in the strength of the strong force would be accompanied by the rise in the strength of the electroweak force (which increases as the bare charge is exposed when the polarised vacuum shield breaks down in high energy collisions), which implies that forces unify exactly without needing supersymmetry (SUSY). What should be happening in physics is a modelling of the facts in a simple way: not inventing new models to predict unobservables and unification at unobservably high energy, but modelling what we can really measure, the loads of data on particle masses, cosmology, forces at low energy, etc. Once that is done, the resulting theory will clarify the mechanism for what happens at higher energy, i.e., whether a superforce exists.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.

I've just updated a previous post here with some comments on the distinction between the two aspects of the strong nuclear force, that between quarks (where the physics is very subtle, with interactions between the virtual quark field and the gluon field around quarks leading to a modification of the strong nuclear force and the effect of asymptotic freedom of quarks within hadrons), and that between one nucleon and another.

Nucleons are neutrons and protons, each containing three quarks, and the strong nuclear force between nucleons behaves as if neutrons are practically identical to protons (the electric charge is an electromagnetic force effect). Between individual quarks, the strong force is mediated by gluons and is more complex due to screening effects of the colour charges of the quarks, but between nucleons it is mediated by pions, and is very simple, as my previous post shows.

Consider why the nuclear forces are short-ranged, unlike gravity and electromagnetism. The Heisenberg uncertainty principle in its time-energy form sets a limit to the amount of time a certain amount of energy (that of the force-mediating particles) can exist. Because of the finite speed of light, this time limit is equivalent to a distance limit or range. This is why nuclear forces are short-ranged. Physically, the long-range forces (gravity and electromagnetism) are radiation exchange effects which aren't individually attenuated with distance, but just undergo geometric spreading over wider areas due to divergence (giving rise to the inverse-square law).

But the short-ranged nuclear forces are physically equivalent to a gas-type pressure of the vacuum. The 14.7 pounds/square inch air pressure doesn't push you against the walls, because air exists between you and the walls, and disperses kinetic energy as pressure isotropically (equally in all directions) due to the random scattering of air molecules. The range over which you are attracted to the wall due to the air pressure is around the average distance air molecules go between random scattering impacts, which is the mean free path of an air molecule, about 0.1 micron (micron = micrometre).

This is why to get 'attracted' to a wall using air pressure, you need a very smooth wall and a clean rubber suction cup: it is a short-ranged effect. The nuclear forces are similar to this in their basic mechanism, with a short range because of the collisions and interactions of the force-mediating particles, which are more like gas molecules than the radiations which give rise to gravity and electromagnetism. We know this for the electroweak theory, where at low energies the W and Z force mediators are screened by the foam vacuum of space, while the photon isn't.

Deceptions used to attack predictive, testable physical understanding of quantum mechanics:

(1) Metaphysically-vague entanglement of the wavefunctions of photons in Alain Aspect's ESP/EPR experiment, which merely demonstrates a correlation in the polarisation of photons emitted from the same source in opposite directions and then measured. This correlation is expected if Heisenberg's uncertainty principle does NOT apply to photon measurement. We know Heisenberg's uncertainty principle DOES apply to measuring electrons and other non-light-speed particles, which have time to respond to the measurement by being deflected or changing state. Photons, however, must be absorbed and then re-emitted to change state or direction. Therefore, correlation of identical photon measurements is expected based on the failure of the uncertainty principle to apply to the measurement process of photons. It is hence entirely fraudulent to claim that the correlation is due to metaphysically-vague entanglement of the wavefunctions of photons metres apart travelling in opposite directions.

(2) Young's double-slit experiment: Young falsely claimed that light somehow cancels out at the dark fringes on the screen. But we know energy is conserved. Light simply doesn't arrive at the dark fringes (if it does, what happens to it, especially when you fire one photon at a time?). What really happens with light is interference near the double slits, not at the screen, which is not the case for water-wave-type interference (water waves are longitudinal so interfere at the screen; light waves have a transverse feature which allows interference to occur even when a single photon passes through one of two slits, if the second slit is nearby, i.e., within a wavelength or so).

(3) Restricted ('Special') Relativity:

"General relativity as a generalization of special relativity

"Some people are extremely confused about the nature of special relativity and they will tell you that the discovery of general relativity has revoked the constraints imposed by special relativity. But that's another extremely deep misunderstanding of physics. General relativity is called general relativity because it generalizes special relativity; it does not kill it. One of the fundamental pillars of general relativity is the equivalence principle that states that in locally inertial frames, the laws of special relativity must be satisfied by all local phenomena."

I just don't believe you [Lubos Motl] don't understand that general covariance in GR is the important principle, that accelerations are not relative, and that all motions at least begin and end with acceleration/deceleration.

The radiation (gauge bosons) and virtual particles in the vacuum exert pressure on moving objects, compressing them in the direction of motion. As FitzGerald deduced in 1889, it is not a mathematical effect, but a physical one. Mass increase occurs because of the snowplow effect of the Higgs bosons (mass) ahead of you when you move quickly: the Higgs bosons you are moving into can't instantly flow out of your path, so there is mass increase. If you were to approach c, the particles in the vacuum ahead of you would be unable to get out of your way, you'd be going so fast, so your mass would tend towards infinity. This is simply a physical effect, not a mathematical mystery. Time dilation occurs because time is measured by motion, and if, as the Standard Model suggests, fundamental spinning particles are just trapped energy (mass being due to the external Higgs field), that energy is going at speed c, perhaps as a spinning loop or vibrating string. When you move that at near speed c, the internal vibration and/or spin speed will slow down, because c would be violated otherwise. Since electromagnetic radiation is a transverse wave, the internal motion at speed x is orthogonal to the direction of propagation at speed v, so x^2 + v^2 = c^2 by Pythagoras. Hence the dynamic measure of time (vibration or spin speed) for the particle is x/c = (1 - v^2/c^2)^1/2, which is the time-dilation formula.
As Eddington said, light speed is absolute but undetectable in the Michelson-Morley experiment owing to the fact the instrument contracts in the direction of motion, allowing the slower light beam to cross a smaller distance and thus catch up.
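A one-line check that the x^2 + v^2 = c^2 picture reproduces the standard time-dilation factor (a sketch assuming nothing beyond the Pythagoras relation in the text):

```python
# Internal motion x and translation v share the light speed: x^2 + v^2 = c^2,
# so the internal 'clock rate' x/c is sqrt(1 - v^2/c^2), the standard factor.
import math

def clock_rate(v_over_c):
    return math.sqrt(1 - v_over_c**2)

for v in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {v}c: clock runs at {clock_rate(v):.3f} of its rest rate")
```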

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

Einstein said the same:

‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities... According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920. (Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15-23.)

Maxwell failed to grasp that radiation (gauge bosons) was the mechanism for electric force fields, but he did usefully suggest that:

‘The ... action of magnetism on polarised light [discovered by Faraday not Maxwell] leads ... to the conclusion that in a medium ... is something belonging to the mathematical class as an angular velocity ... This ... cannot be that of any portion of the medium of sensible dimensions rotating as a whole. We must therefore conceive the rotation to be that of very small portions of the medium, each rotating on its own axis [spin] ... The displacements of the medium, during the propagation of light, will produce a disturbance of the vortices ... We shall therefore assume that the variation of vortices caused by the displacement of the medium is subject to the same conditions which Helmholtz, in his great memoir on Vortex-motion, has shewn to regulate the variation of the vortices [spin] of a perfect fluid.’ - Maxwell’s 1873 Treatise on Electricity and Magnetism, Articles 822-3

Compare this to the spin foam vacuum, and the fluid GR model:

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.

Einstein admitted SR was limited:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

Sunday, February 26, 2006

Standard Model says mass = Higgs field, but could that be the Z boson field?

Conventionally, the Higgs boson gives the Z boson of the electroweak theory its mass, which in turn is responsible for its short range, compared to the infinite range of the photon. The Higgs field of the vacuum couples with the Z boson to give it mass at low energies ('spontaneous symmetry breaking'), but at very high energies the Z boson behaves like the photon, and isn't limited in range ('symmetry' with the photon). But the dynamics for this theory are speculative and sketchy (no prediction of particle masses), and experimental confirmation of the correct details of the Higgs boson is awaited. The Z boson was discovered at CERN in 1983 and has a mass of 91 GeV.

It is quite likely that the mechanism for inertial and gravitational mass is radiation-equilibrium based, as shown by Drs Rueda and Haisch: see http://arxiv.org/abs/physics/9802031, http://arxiv.org/abs/gr-qc/0209016, http://www.calphysics.org/articles/newscientist.html and http://www.eurekalert.org/pub_releases/2005-08/ns-ijv081005.php.

Let's integrate the radiation picture with a dynamics that predicts particle masses, like (0.511 MeV).(137/2).n(N + 1) = 35n(N + 1) MeV. Could the gravitationally-trapped (static) Z boson be the Higgs boson? This turns the existing picture around: normally part of the reason for the Higgs field is to give the Z boson its mass! Could the conventional picture be back to front? The radius of a black hole for its mass is only 2GM/c^2.
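To put a number on that last remark (a rough SI sketch, assuming the 91 GeV Z-boson mass):

```python
# Black hole radius 2GM/c^2 for a 91 GeV mass (rough scale estimate only).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
kg_per_GeV = 1.783e-27   # 1 GeV/c^2 in kg

M = 91 * kg_per_GeV
print(f"2GM/c^2 for 91 GeV: {2 * G * M / c**2:.1e} m")  # ~2.4e-52 m
```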

The Z boson is the 91 GeV neutral, 'massive photon' involved in the electroweak theory. Hans de Vries and Alejandro, in their paper http://arxiv.org/abs/hep-ph/0503104, Evidence for radiative generation of lepton masses, show that the Z boson mass is about twice Pi times 137.0 (the 1/alpha factor) times the muon mass: (2.Pi).(137).(105.7 MeV) = 91 GeV.

1. Mass of Z boson when moving at light speed (not trapped) = 91 GeV because it is going too fast for the vacuum charges around it to polarise into a veil which shields the Z boson's electric fields (like the photon, the Z-boson has positive and negative electric fields in equal amounts).

2. When trapped in a black hole by its own mass, the vacuum polarises around each side of it (positive and negative electric fields), attenuating the fields by the 137 factor, and a geometric spin factor of twice Pi. This reduces its mass to 91/(2.Pi.137) = 105.7 MeV, muon mass.

3. You still have to obtain the net charge of the muon (which is also the electron charge): as shown on my home page, the electron itself has a polarised vacuum charge veil around it. If the Higgs boson (trapped Z-boson) creating the mass is outside the vacuum veil, there is a 1.5 x 137 additional shielding factor (giving the electron mass), but if the Higgs boson (trapped Z-boson) is inside the polarised veil, this additional shielding factor does not apply. Hence:

Z-boson mass: 91 GeV

Muon mass (electron with a Higgs boson/trapped Z-boson inside its veil): 91/(2.Pi.137) = 105.7 MeV.

Electron mass (electron with a Higgs boson/trapped Z-boson outside its veil): 91/[(1.5).(137).(2.Pi.137)] = 0.51 MeV.

Could a static, gravitationally-trapped version of the Z boson be the mass-causing Higgs field boson? The Z particle is like a massive photon, and if it is trapped gravitationally into a small loop, the radial electric field lines around it will be positive on one side and negative on the other. Despite the particle being neutral overall, the electric fields inside any trapped photon behave as charge. Therefore the virtual charges of the vacuum are polarised like a veil around it, shielding it by the 137 factor. If the mass-causing vacuum particles or Higgs field is outside the polarised veil, then their coupling (and the effective mass) will be reduced by a factor of 137, and spin geometry may introduce a further reduction of twice Pi.

(Taking a semi-classical model of pair production, a 1.022 MeV gamma ray is electromagnetic radiation, which is often modelled by Maxwell's wave concept, in which the transverse electric field is negative for half a cycle of a sine wave and positive for the other half, with an orthogonal magnetic field. If you could break up such a gamma ray, and each half-cycle were trapped by gravity into a loop, you would have a positron and an electron, each having a spherically symmetric electric field and a dipole magnetic field.)

Here is a nice essay dealing with the Dirac theory and perturbative QFT in physical terms:

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

'The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy ... It will be apparent that a hole in the negative energy states is equivalent to a particle with the same mass as the electron ... The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

'Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled 'negative energy sea' the complete theory (hole theory) can no longer be a single-particle theory.

'The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

'In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

'For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the 'crowded' vacuum is to change these to new constants e' and m', which must be identified with the observed charge and mass. ... If these contributions were cut off in any reasonable manner, m' - m and e' - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

'All this means that the present theory of electrons and fields is not complete. ... The particles ... are treated as 'bare' particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example ... the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger's coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001...'

Thursday, February 23, 2006

Above: 'A Hubble Diagram is presented based on 172 distance measures involving 52 Gamma-Ray Bursts out to redshifts of 6.3. The observed shape of the Hubble Diagram is a measure of the expansion history of the Universe, which will include the effects of the 'Dark Energy' that dominates over matter. Gamma-Ray Bursts can be seen to high redshift, and this uniquely allows for tests as to whether the Dark Energy changes with time. If Einstein's Cosmological Constant is a good representation of cosmology, then the equation of state of the Dark Energy won't change in time over the age of the Universe. ... The 12 bursts with the highest redshift are all below the 'constant' case, and this is the indication that the Dark Energy changes with time.'

'I wrote a paper showing that CC was zero, and why. No one is interested in results.' - D. R. Lunsford, on Peter Woit's blog, Not Even Wrong.

I've mentioned before that Lunsford was censored off arXiv.org after getting his peer-reviewed paper published in a mainstream journal, Int. J. Theor. Phys., vol. 43 (2004), no. 1, pp. 161-177.

You can download a draft version free from the CERN Document Server. If you download the PDF version, you may need to print it out, because it doesn't seem to use a font which scales properly on screen, at least on my computer. But it prints out fine on paper! I have sympathy with Lunsford's paper for several reasons. It is more objective (less reducible) than the Kaluza-Klein 'unification' of general relativity and the Maxwell equations, and Lunsford's 6-d finding is more symmetrical and therefore more convincing (doubling the 3 obvious dimensions to get unification can be explained more simply than adding one extra dimension and claiming it is curled up unobservably small, as Klein did).

Lunsford's major result is to dismiss the cosmological constant. I'm entirely in agreement. Lunsford finds an orthogonal symmetry grouping of the 6 dimensions with a base of SO(3,3), which is elegant; and whatever 'beauty' is, it is simpler, and in that sense more beautiful, than the mess which results from Kaluza-Klein (string theory nonsense).

To be honest, I've really become sick of science suppression by mainstream bigotry. It is not just the ignorant officials at the top who are responsible; it echoes right the way down. It is a complete corruption of society, not just science. There is more decency and logic in the administration of the most corrupt political system than there is in science, which is run by masters of double-talk and infidelity.

Monday, February 20, 2006

The minimal SUSY Standard Model shows electromagnetic force coupling increasing from alpha of 1/137 to alpha of 1/25 at 10^16 GeV, and the strong force falling from 1 to 1/25 at the same energy, hence unification.

The reason why the unified superforce strength is not 137 times electromagnetism, but only 137/25, or about 5.5 times electromagnetism, is heuristically explicable in terms of the potential energy of the various force gauge bosons.

If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you may learn that SUSY just isn't needed or is plain wrong, or else you will get a better grip on what is real and make some testable predictions as a result.

I frankly think there is something wrong with the depiction of the variation of weak force strength with energy shown in Figure 66 of Lisa Randall's "Warped Passages". The weak strength is extremely low (alpha of 10^-10) normally, say for beta decay of a neutron into a proton plus electron and antineutrino. This force coupling factor is given by Pi^2.h.M^4/(T.c^2.m^5), where h is Planck's constant from Planck's energy equation E = hf, M is the mass of the proton, T is the effective energy release 'life' of the radioactive decay (i.e. the familiar half-life multiplied by 1/ln 2 = 1.44), c is the velocity of light, and m is the mass of an electron.

The diagram seems to indicate that at low energy the weak force is stronger than electromagnetism, which seems to be in error. The conventional QFT treatments show that electroweak forces increase as a weak logarithmic function of energy. See arXiv hep-th/0510040, p. 70.
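Plugging numbers into that formula as reconstructed (a sketch only: the original expression was garbled, so where exactly the Pi^2 sits is my reading, and the result should be trusted to an order of magnitude):

```python
# Order-of-magnitude check of the beta-decay coupling Pi^2.h.M^4/(T.c^2.m^5).
# The combination h.M^4/(T.c^2.m^5) is dimensionless; the pi^2 placement is
# a reading of the garbled original, so trust only the order of magnitude.
import math

h = 6.626e-34      # Planck's constant, J s
c = 2.998e8        # speed of light, m/s
M = 1.6726e-27     # proton mass, kg
m = 9.109e-31      # electron mass, kg
T = 611 * 1.44     # neutron half-life (s) times 1/ln 2, per the text

coupling = math.pi**2 * h * M**4 / (T * c**2 * m**5)
print(f"weak coupling ~ {coupling:.1e}")  # ~1e-9 with pi^2, ~1e-10 without
```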

'STRING THEORY' FINALLY FINDS A USE: FOR EXCUSING INFIDELITY.

Geometry of magnetic moment correction for electron: reason for number 2

Magnetic moment of electron = Dirac factor + 1st virtual particle coupling correction term = 1 + 1/(2.Pi.137.0...) = 1.00116 Bohr magnetons to 6 significant figures (more coupling terms are needed for greater accuracy). The 137.0... number is usually signified by 1/alpha, but it is clearer to use the number than to write 1 + alpha/(2.Pi).
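The arithmetic, for the record:

```python
# First two terms of the electron's magnetic moment, in Bohr magnetons.
import math

mu = 1 + 1 / (2 * math.pi * 137.036)
print(f"1 + 1/(2.Pi.137.036) = {mu:.6f}")  # 1.001161; measured ~1.0011597
```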

Anyway, the 1 is the magnetic contribution from the core of the electron. The second term, alpha/(2.Pi) or 1/(2.Pi.137), is the contribution from a virtual electron which is associated with the real electron core via shielded electric force. The charge of the core is 137e, the shielding due to the veil of polarised vacuum virtual charges around the core is 1/137, so the observed charge outside the veil is just e.

The core magnetism of 1 Bohr magneton predicted by Dirac's equation is too low. The true factor is nearer 1.00116, and the additional 0.116% is due to the vacuum virtual particles.

In other words, the vacuum reduces the electron's electric field, but increases its magnetic field! The reason for the increase in the magnetic field by the addition of alpha/(2.Pi) = 1/(2.Pi.137.0...) is simply that a virtual particle in the vacuum pairs up with the real particle via the electric field. The contribution of the second particle is smaller than 1 Bohr magneton by three factors, 2, Pi, and 137.0... Why? Well, heuristic reasoning suggests that the second particle is outside the polarised shield, and is thus subject to a shielding of 1/137.

The magnetic field from the real electron core which is transverse to the radial direction (i.e., think about the magnetic field lines over earth's equator, which run at 90 degrees to the radial direction) will be shielded, by the 137 factor. But the magnetic field that is parallel to the radial direction (i.e., the magnetic field lines emerging from earth's poles) are completely unshielded.

Whereas an electric field gets shielded where it is parallel to another electric field (the polarised vacuum field arrow points outward because virtual positrons are closer to the negative core than virtual electrons, so this outward arrow opposes the inward arrow of electric field towards the real electron core, causing attenuation), steady state magnetic fields only interact with steady state electric fields where specified by Ampere's law, which is half of one of Maxwell's four equations.

Ampere's law states that a curling magnetic field causes an electric current, just like an electric field does. Normally, to get an electric current you need an electric potential difference between the two ends of a conductor, which causes electrons to drift. But a curling magnetic field around the conductor does exactly the same job. Therefore, a curling magnetic field around a conductor is quite indistinguishable from an electric field which varies along a conductor. You might say no, because the two are different, but you'd be wrong. If you have an electric field variation, then the current will (by conventional theory) cause a curling magnetic field around the conductor.

At the end of the day, the two situations are identical. Moreover, conventional electric theory has some serious issues, since Maxwell's equations assume instantaneous action at a distance (such as a whole capacitor plate being charged up simultaneously), an assumption which has been experimentally and theoretically disproved, despite the suppression of this fact as 'heresy'.

Maxwell's equations have other issues as well. For example Coulomb's law, which is expressed in Maxwell's equations as the electric field from a charge (Gauss's law), is known to be wrong at high energies. Quantum field theory, and experiments confirming it published by Koltick in PRL in 1997, show that electric forces are 7% higher at 80 GeV than at low energies. This is because the polarised vacuum is like a sponge foam covering on an iron cannon ball. If you knock such sponge foam covered balls together very gently, you don't get a metallic clang or anything impressive. But if you fire them together very hard, the sponge foam covering is breached by the force of the impact, and you experience the effects of the strong cores to a greater degree!

The polarised vacuum veil around the real electron core behaves a bit like the shield of foam rubber around a steel ball, protecting it from strong interactions if the impacts are low energy, but breaking down in very high-energy impacts.

Anyway, the Schwinger correction term, 1/(2.Pi.137) contains 137 because of the shielding by the polarised vacuum veil.

The coupling is physically interpreted as a Pauli-exclusion principle type magnetic pairing of the real electron core with one virtual positron just outside the polarised veil. Because the spins are aligned to some extent in this process, the magnetic field which is of importance between the real electron core and the virtual electron is the transverse magnetic field, which is (unlike the polar magnetic field) shielded by the 137 factor like the electric field.

So that explains why the magnetic contribution from the virtual electron is 137 times weaker than that from the real electron core: the transverse magnetic field from the real electron core is reduced 137 times, and that is what causes the Pauli exclusion principle spin alignment.

The two other reduction factors are 2 and Pi. These arise because each of the two particles is a spinning loop with its equator on the same plane as the other's. The amount of field each particle sees of the other is 1/Pi of the total, because a loop has a circumference of Pi times the diameter, and only the diameter is seen edge-on, which means that only 1/Pi of the total is seen edge-on. Because the same occurs for each of the two particles (the one real particle and the virtual particle), the correct reduction factor is twice this.

Obviously, this is heuristic, and by itself doesn't prove anything. It is only when you add this explanation to the prediction of meson and baryon masses by the same mechanism of 137, and the force strengths derivation, that it starts to become more convincing. Obviously, it needs further work to see how much it says about further coupling corrections, but its advantage is that it is a discrete picture, so you don't have to artificially and arbitrarily impose cutoffs to get rid of infinities, like those of existing (continuous integral, not discrete) QFT renormalisation.

One thing more I want to say, after the post (a few back actually) on deriving the strong nuclear force as 137 times Coulomb's law at low energies. The Standard Model does not indicate perfect force unification at high energy unless there is supersymmetry (SUSY), which requires superpartners which have never been observed, and whose energy is not predictable.

The minimal theory of supersymmetry predicts that the strong, weak and electromagnetic forces unify at 10^16 GeV. I've mentioned already that Koltick's experiments in 1997 were at 80 GeV, and that was pushing it. There is no way you can ever test a SUSY unification theory by firing particles together on this planet, since the planet isn't big enough to house or power such a massive accelerator. So you might as well be talking about UFOs as SUSY, because neither is observable scientifically in any conceivable future scenario of real science.

So let's forget SUSY and just think about the Standard Model as it stands. This shows that the strong, weak, and electromagnetic forces become almost (but not quite) unified at around 10^14 GeV, with an interaction strength around alpha of 0.02, but that electromagnetism continues to rise at higher energy, becoming 0.033 at 10^20 GeV, for example. Basically, the Standard Model without SUSY predicts that electromagnetism continues to rise as a weak (logarithmic type) function of energy, while the strong nuclear force falls. Potential energy conservation could well explain why the strong nuclear force must fall when the electromagnetic force rises. The fundamental force is not the same thing as the particle kinetic energy, remember. Normally you would expect the fundamental force to be completely distinct from the particle energy, but there are changes because the polarised vacuum veil around the core is progressively breached in higher energy impacts.

Saturday, February 18, 2006

Sir Roger Penrose on events before the big bang:

http://news.bbc.co.uk/1/hi/programmes/hardtalk/4631138.stm

Stephen Sackur talks to Sir Roger Penrose about his latest theory on what may have existed before the Big Bang

Penrose says that because thermodynamics says the universe becomes more disorganised with time, as you look backwards in time the amount of order increases towards infinity. The BBC news reporter suggests God is the reason for the initial high degree of order of the big bang, but Penrose defends science.

Penrose's new idea is very simple, one of the 10 components of the spacetime metric is special, and when the universe has eventually decayed entirely into radiation (assuming that even protons decay, albeit with an undetectably long radioactive half-life, as per Standard Model predictions), it loses any meaningful measure of 'time' (I've pointed out in Electronics World and on my site for years that time is entirely linked to organised motion, and when you lose any organised motion, time is meaningless). Therefore, because of the link between space and time (spacetime), space disappears and the state is effectively a singularity.

Hence at the very moment that the last proton or muon decays, the whole universe is then a fresh singularity, and the big bang occurs all over again. (This is entirely different from the old 'oscillating' universe prediction from general relativity.) I'd like to see a complete proof please! Personally, I think Penrose is right, as the idea is factually defendable and yet radically crazy enough to explain everything, and he says it makes testable predictions which astronomy can check.

Full heuristic interpretation of quantum field theory.

The 2.Pi factor in the Schwinger 1st coupling correction of the magnetic moment of the electron, in 1 + 1/(2.Pi.137) Bohr magnetons is almost certainly due to the spin effect shielding.

Physically, the core of the first electron has a magnetic moment of 1 Bohr magneton because the polarised vacuum around the electron core only reduces the radial electric field and transverse magnetic field, not the polar magnetic field vector which is of course parallel to the radial electric field at the poles.

The electric field of the core is reduced by a factor of 137 by the polarised virtual charge surrounding it in the vacuum. The real core couples up with a particle (virtual positron?) in the vacuum, which adds to the magnetic moment by aligning with the magnetic axis of the electron core. This is the reason for the 137 factor in the Schwinger correction for the first coupling effect, the 1/(2.Pi.137) = 0.00116 term added to Dirac's 1 Bohr magneton.

The 2.Pi is an additional shielding factor, and is due to geometry. The 2.Pi factor is heuristically explainable in terms of the geometry which stems from the aligned real electron core and the virtual particle which is aligned with it to add to its magnetic moment. The vacuum is full of virtual particles, but because they are normally orientated randomly, their magnetic fields cancel each other out as seen on a macroscopic scale.

Now, the virtual electron which is outside the polarised shield surrounding the real electron core, and which adds 1/(2.Pi.137) Bohr magnetons to the magnetic moment of the latter, itself has the same effect on another vacuum particle! So there is another correction, which is even smaller, by another 137 factor, and another geometric factor... and so on.

This is how you heuristically explain the extra couplings required for more decimals than 1.00116 Bohr magnetons. Also, you need to take account of different vacuum particles, as occurs with the magnetic moment of the muon, which is slightly different from that of the electron.
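Numerically, if each extra coupling is naively taken as another factor of 1/(2.Pi.137) down (my naive reading of the picture above, not the real QED series, whose second-order coefficient is in fact negative):

```python
# Naive 'coupling upon coupling' series: each term another 1/(2*pi*137) down.
# This is the heuristic's own arithmetic, not the true QED expansion.
import math

k = 1 / (2 * math.pi * 137.036)
terms = [k**n for n in range(3)]   # core, 1st coupling, 2nd coupling
print("terms:", [f"{t:.3e}" for t in terms])
print(f"sum: {sum(terms):.7f}")    # ~1.0011628 vs measured ~1.0011597
```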

Friday, February 17, 2006

Heisenberg's uncertainty says

pd = h/(2.Pi)

where p is uncertainty in momentum, d is uncertainty in distance.
This comes from his imaginary gamma ray microscope, and is usually written as a minimum (instead of with "=" as above), since there will be other sources of uncertainty in the measurement process.

For light wave momentum p = mc,
pd = (mc)(ct) = Et, where E is uncertainty in energy (E = mc^2), and t is uncertainty in time.

Hence, Et = h/(2.Pi)

t = h/(2.Pi.E)

d/c = h/(2.Pi.E)

d = hc/(2.Pi.E)

This result is used to show that an 80 GeV energy W or Z gauge boson will have a range of about 10^-17 m. So it's OK.
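Checking that number with the same d = hc/(2.Pi.E) formula, via the handy constant hbar.c = 197.33 MeV.fm:

```python
# Range d = h*c/(2*pi*E) for an 80 GeV gauge boson, via hbar*c = 197.33 MeV fm.
hbar_c = 197.33   # MeV fm
E = 80000.0       # MeV
d = hbar_c / E    # in fm
print(f"range ~ {d:.1e} fm = {d * 1e-15:.1e} m")  # ~2.5e-18 m, order 10^-17 m
```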

Now, E = Fd implies

d = hc/(2.Pi.E) = hc/(2.Pi.Fd)

Hence

F = hc/(2.Pi.d^2)

This force is 137.036 times higher than Coulomb's law for unit fundamental charges.
Notice that in the last sentence I've suddenly gone from thinking of d as an uncertainty in distance, to thinking of it as actual distance between two charges; but the gauge boson has to go that distance to cause the force anyway.
Clearly what's physically happening is that the true force is 137.036 times Coulomb's law, so the real (core) charge is 137.036 times the observed charge. This is reduced by the correction factor 1/137.036 because most of the charge is screened out by polarised charges in the vacuum around the electron core:
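The 137.036 ratio is easy to verify: F = hc/(2.Pi.d^2) divided by the Coulomb force between unit charges is exactly 1/alpha, with the d^2 cancelling:

```python
# Ratio of F = h*c/(2*pi*d^2) to Coulomb's law e^2/(4*pi*eps0*d^2); d cancels.
import math

h = 6.62607e-34     # J s
c = 2.99792e8       # m/s
e = 1.60218e-19     # C
eps0 = 8.85419e-12  # F/m

ratio = (h * c / (2 * math.pi)) / (e**2 / (4 * math.pi * eps0))
print(f"F / F_Coulomb = {ratio:.3f}")  # 137.036 = 1/alpha
```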

"... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies)." - arxiv hep-th/0510040, p 71.

The unified Standard Model force is F = hc/(2.Pi.d^2)

That's the superforce at very high energies, in nuclear physics. At lower energies it is shielded by the factor 137.036 for photon gauge bosons in electromagnetism, or by exp(-d/x) for vacuum attenuation by short-ranged nuclear particles, where x = hc/(2.Pi.E)

This is dealt with at http://einstein157.tripod.com/ and the other sites. All the detailed calculations of the Standard Model are really modelling the vacuum processes for different types of virtual particles and gauge bosons. The whole mainstream way of thinking about the Standard Model is related to energy. What is really happening is that at higher energies you knock particles together harder, so their protective shield of polarised vacuum particles gets partially breached, and you can experience a stronger force mediated by different particles!

UPDATE as of 25 Feb 06:

http://cosmicvariance.com/2006/02/22/on-the-plus-side/

Quarks have asymptotic freedom: the strong force and electromagnetic force cancel where the strong force is weak, at around the distance of separation of quarks in hadrons. That's because of interactions with the virtual particles (fermions, quarks) and the field of gluons around quarks. If the strong nuclear force fell by the inverse square law and by an exponential quenching, then hadrons would have no volume, because the quarks would sit on top of one another (the attractive nuclear force is much greater than the electromagnetic force).

It is well known you can’t isolate a quark from a hadron because the energy needed is more than that which would produce a new pair of quarks. So as you pull a pair of quarks apart, the force needed increases because the energy you are using is going into creating more matter.
This is why the quark-quark force doesn’t obey the inverse square law. There is a pictorial discussion of this in a few books (I believe it is in “The Left Hand of Creation”, which says the heuristic explanation of why the strong nuclear force gets weaker when quark-quark distance decreases is to do with the interference between the cloud of virtual quarks and gluons surrounding each quark).
Between nucleons, neutrons and protons, the strong force is mediated by pions and simply decreases with increasing distance by the inverse-square law and an exponential term something like exp(-x/d) where x is distance and d = hc/(2.Pi.E) from the uncertainty principle.
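For the pion-mediated force, the same formula gives the familiar nuclear scale (a sketch using the charged pion rest energy, about 139.6 MeV):

```python
# Range of the pion-mediated nucleon force: d = h*c/(2*pi*E), E ~ pion mass.
hbar_c = 197.33  # MeV fm
m_pi = 139.6     # charged pion rest energy, MeV
print(f"pion force range ~ {hbar_c / m_pi:.2f} fm")  # ~1.41 fm, nuclear scale
```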

Saturday, February 11, 2006

Dr Woit has retained a comment of mine dealing with the gravity mechanism! See http://www.math.columbia.edu/~woit/wordpress/?p=273#comment-5322

I'll have to re-do my internet pages.

Update: the basic ideas were published in Electronics World from October '96 to March '05. However, the recent expansion of my internet pages has made them too long. I'll have to print everything out and write a brief summary. I just can't edit half a megabyte of technical text on a screen without going insane. The number of new results and proofs should be emphasised, and the fact that arXiv.org and the editor of PRL ignorantly blacklisted me in 2002 should not!