Quantum gravity physics based on facts, giving checkable predictions: July 2006

Monday, July 31, 2006

The observable radial expansion around us involves matter having observable outward speeds ranging from zero toward light velocity c, with times after the big bang ranging from 15 billion years down toward zero (i.e., times past ranging from zero toward 15 billion years). This gives an outward force in observable spacetime of

F = ma = m·dv/dt = m(c − 0)/(age of universe, t) = mc/t ~ mcH = 7 × 10^43 Newtons.

Newton’s 3rd law tells us there is an equal inward force, which according to the possibilities implied by the Standard Model is carried by exchange radiation (gauge bosons). This predicts the gravity constant G to within 1.65%: F = ¾mMH^2/(πρr^2e^3) ≈ 6.7 × 10^-11 mM/r^2 Newtons, correct within 2% for consistent interdependent values of the Hubble constant and density. A fundamental particle of mass M has a cross-sectional space pressure shielding area of π(2GM/c^2)^2. For two separate, rigorous and fully accurate treatments see Proof.
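A numerical sketch of the two headline figures above (the mass and density inputs below are illustrative assumptions of mine, chosen to be of the order the formulas imply; they are not figures fixed by the text):

```python
import math

# Sketch of the outward force F = mc/t ~ mcH and the predicted G.
# ASSUMED illustrative inputs (not from the post): m and rho below.
c   = 3.0e8                # speed of light, m/s
t   = 15e9 * 3.156e7       # age of universe, ~15 Gyr in seconds
H   = 1.0 / t              # Hubble parameter taken as 1/t, s^-1
m   = 1.1e53               # ASSUMED mass of the universe, kg
rho = 7.9e-28              # ASSUMED density of the universe, kg/m^3

# Outward force of the recession
F_out = m * c / t
print(f"F = mc/t ~ {F_out:.1e} N")   # ~7e43 N, matching the text

# Equating F = (3/4)mMH^2/(pi rho r^2 e^3) with GmM/r^2 gives
# G = 3H^2/(4 pi rho e^3).
G_pred = 3 * H**2 / (4 * math.pi * rho * math.e**3)
print(f"predicted G ~ {G_pred:.2e} m^3 kg^-1 s^-2")   # ~6.7e-11
```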

A simplified version of the gravity treatment so that everyone can understand some of the elementary physics points:

  • The radiation is received by mass almost equally from all directions, coming from other masses in the universe. Because the gauge bosons are exchange radiation, the radiation is in effect reflected back the way it came if there is symmetry that prevents the mass from being moved. The result is then a mere compression of the mass by the amount mathematically predicted by general relativity, i.e., the radial contraction is by the small distance GM/(3c^2) ≈ 1.5 mm for the contraction of the spacetime fabric by the mass of the Earth.

  • If you are near a mass, it creates an asymmetry in the radiation exchange, because the radiation normally received from the distant masses in the universe is red-shifted by high speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = mv/t = mv/(x/c) = mcv/x = 0. Hence by Newton’s 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, creating an asymmetry. So you get pushed towards the shield. This is why apples fall.

  • The universe empirically looks similar in all directions around us: hence the net unshielded gravity force is equal to the total inward force, F = ma ~ mcH, multiplied by the proportion of the shielded area of a spherical surface around the observer (see diagram). The surface area of the sphere with radius R (the average distance of the receding matter that is contributing to the inward gauge boson force) is 4πR^2. The ‘clever’ mathematical bit is that the shielding area of a local mass is projected onto this area by very simple geometry: the local mass of, say, the planet Earth, the centre of which is distance r from you, casts a ‘shadow’ (on the distant surface 4πR^2) equal to its shielding area multiplied by the simple ratio (R/r)^2. This ratio is very big. Because R is a fixed distance, as far as we are concerned for calculating the fall of an apple or the ‘attraction’ of a man to the Earth, the most significant variable is the 1/r^2 factor, which we all know is the Newtonian inverse square law of gravity. For two separate, rigorous and fully accurate treatments see Proof.
  • Gravity is not due to a surface compression but instead is mediated through the void between fundamental particles in atoms by exchange radiation which does not recognise macroscopic surfaces, but only interacts with the subnuclear particles associated with the elementary units of mass. The radial contraction of the earth's radius by gravity, as predicted by general relativity, is 1.5 mm. [This contraction of distance hasn't been measured directly, but the corresponding contraction or rather 'dilation' of time has been accurately measured by atomic clocks which have been carried to various altitudes (where gravity is weaker) in aircraft. Spacetime tells us that where distance is contracted, so is time.]

    This contraction is not caused by a material pressure carried through the atoms of the earth, but is instead due to the gravity-causing Yang-Mills exchange radiation which travels through the void (nearly 100% of atomic volume is void). Hence the contraction is independent of the chemical nature of the earth. (Similarly, the contraction of moving bodies is caused by the same exchange radiation effect, and so is independent of the material's composition.) A numerical check of the 1.5 mm figure is sketched below.
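As a quick check of the 1.5 mm contraction quoted above (a sketch using standard values for G, the Earth's mass, and c; nothing here is specific to the mechanism):

```python
# Radial contraction GM/(3c^2) for the Earth, as quoted in the text.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of the Earth, kg
c = 3.0e8        # speed of light, m/s

contraction = G * M / (3 * c**2)
print(f"radial contraction ~ {contraction * 1000:.2f} mm")   # ~1.48 mm, i.e. ~1.5 mm
```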

    Proof (by two different calculations compared side by side). I’ve received nothing but ignorant personal abuse from string theorists. An early version on CERN can’t be updated via arXiv, apparently because arXiv is controlled in the relevant section by mainstream string theorists.

    Newton never expressed a gravity formula with the constant G because he didn't know what the constant was (that was measured by Cavendish much later). Newton did have empirical evidence, however, for the inverse square law. He knew the earth has a radius of 4,000 miles and the moon is a quarter of a million miles away; hence by the inverse-square law, gravity should be (250,000/4,000)^2 ≈ 3,900 times weaker at the moon than the 32 ft/s/s at earth's surface. Hence the gravity acceleration due to the earth's mass at the moon is 32/3,900 ≈ 0.008 ft/s/s.

    Newton's formula for the centripetal acceleration of the moon is a = v^2/(distance to moon), where v is the moon's orbital velocity, v = 2π × [250,000 miles]/[27 days] ≈ 0.67 mile/second; hence a = 0.0096 ft/s/s.

    So Newton had evidence that the gravity acceleration from the earth at the moon's radius (0.008 ft/s/s) is approximately the same as the moon's centripetal acceleration (0.0096 ft/s/s); see the sketch below. The gravity law we have proved from experimental facts is a complete mechanism that predicts gravity and the contraction of general relativity.
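Newton's check is easy to reproduce; a short sketch in the same units as above:

```python
import math

# Inverse-square check for the Moon, in the units used in the text.
g_surface   = 32.0           # surface gravity, ft/s^2
r_earth_mi  = 4000.0         # Earth's radius, miles
r_moon_mi   = 250000.0       # Earth-Moon distance, miles
period_s    = 27 * 86400.0   # lunar orbital period, ~27 days in seconds
ft_per_mile = 5280.0

# Gravity weakened by (250,000/4,000)^2 ~ 3,900
weakening = (r_moon_mi / r_earth_mi) ** 2
print(f"{weakening:.0f}x weaker -> {g_surface / weakening:.4f} ft/s^2")  # ~0.008

# Centripetal acceleration a = v^2/r from the Moon's orbital speed
v = 2 * math.pi * r_moon_mi / period_s            # ~0.67 mile/s
a = (v ** 2 / r_moon_mi) * ft_per_mile            # convert mi/s^2 to ft/s^2
print(f"v ~ {v:.2f} mi/s, a ~ {a:.4f} ft/s^2")    # ~0.0096
```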
    Illustration above: exchange force (gauge boson) radiation force cancels out (although there is compression, equal to the contraction predicted by general relativity) in symmetrical situations outside the cone area, since the force sideways is the same in each direction and so cancels out unless there is a shielding mass intervening, like the Earth below you. Shielding is caused simply by the fact that nearby matter is not significantly receding, whereas distant matter is significantly receding (and hence it fires a recoil towards you in a net gauge boson exchange force, unlike a nearby mass). Gravity is the net force introduced where a mass shadows you, namely in the double-cone areas shown above. In all other directions the symmetry cancels out and produces no net force. Hence gravity can be quantitatively predicted using only well established facts of quantum field theory, recession, etc. The prediction compares well with reality but is banned by string theorists like Lubos Motl, who say that everyone who points out errors in speculative string theory, and everybody who proves the correct explanation, should be hated.

    REDSHIFT DUE TO RECESSION OF DISTANT STARS

    From: "Nigel Cook" <nigelbryancook@hotmail.com>
    To: "David Tombe" <sirius184@hotmail.com>; <epola@tiscali.co.uk>; <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>
    Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>
    Sent: Wednesday, August 02, 2006 2:29 PM
    Subject: Re: Bosons -The Defining Question

    Because time past that light was emitted = distance of source of light / c.

    ----- Original Message -----

    From: "David Tombe" <sirius184@hotmail.com>

    To: <nigelbryancook@hotmail.com>; <epola@tiscali.co.uk>; <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>

    Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>

    Sent: Tuesday, August 01, 2006 4:09 PM
    Subject: Re: Bosons -The Defining Question

    > Dear Nigel,

    > I copied this line from your explanation below.

    > "F = ma = mv/t = mv/(x/c) = mcv/x = 0"

    > How did you manage to slip the speed of light into Newton's second law of motion?

    > Yours sincerely

    > David Tombe

    From: "David Tombe" <sirius184@hotmail.com>
    To: <jvospost2@yahoo.com>; <nigelbryancook@hotmail.com>; <epola@tiscali.co.uk>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>
    Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>
    Sent: Tuesday, August 01, 2006 7:51 PM
    Subject: RE: Simple Survey Yields Cosmic Conundrum

    Dear Jonathan,

    I've always been extremely cautious about believing modern theories in cosmology. There are so many unknown variables and factors out there in that great expanse that it is far too presumptuous to say anything with any degree of certainty.

    Hence I would never base a physics theory on cosmological evidence. I work from the ground up.

    Yours sincerely

    David Tombe

    Comments by Nigel:

    The gauge boson radiation giving rise to smooth deflection of light by gravity (rather than the diffuse images of light deflected by gravity which would be the case in the event of particle-particle scattering, such as photon-graviton scattering) indicates that the ultimate force field mechanism is a continuum, not particles; but this continuum can be composed of particles, somewhat as water waves are composed of particulate water molecules. There is no contradiction, since large numbers of particles can cooperate to allow waves to propagate (although the aether mechanism Maxwell proposed for light is wrong in important details). The recession of matter in the universe around us - which is established fact, because there is no experimentally known mechanism for anything other than recession to give the uniformly redshifted spectra, and the recession redshift spectra are fact - implies a variation in velocity in proportion to time past (which is equivalent to distance in spacetime), i.e., an acceleration. This acceleration away from us tells us that the outward force of the big bang is F = ma ~ 10^43 Newtons. By Newton's third law there is an equal inward force, 10^43 Newtons pressing inward. The actual inward pressure (force/area) is immense and varies as the inverse square law, because the area of a sphere is 4 Pi times the square of the radius. This inward pressure gives gravity because mass shields you. Each electron or quark has a shielding area of A = Pi(2GM/c^2)^2 square metres, where M is the mass of the particle and the part inside brackets is the event horizon radius of a black hole.
    This is (1) validated by Catt's experimental finding that trapped light speed energy current = static electric charge, and (2) it is proved by a separate calculation which does not require shielding area. The second calculation (shown side-by-side with the first on my home page) is purely hydrodynamic, i.e., another approach treating the exchange radiation as a perfect fluid and yielding the same physical calculation for this model:
    Walk down a corridor at velocity v; your volume is x litres, where x = (your mass, kg)/(your density, ~1,000 kg/m^3) ~ 70 litres. When you get to the far end of the corridor, you notice that a volume of 70 litres of air has been moving at velocity -v (i.e., in the opposite direction to your walking), filling in the volume you have been displacing. You don't find that air pressure has increased where you are; you find the air has been flowing around you and filling in the "void" you would otherwise create in your wake.

    The same effect can be used to predict gravity. The outward spherically symmetric, radial recession of the mass of the universe, with speeds increasing with observable distance/time past, creates an inward "in-fill". Because of the acceleration in spacetime of a = c/t = Hc, where H is the Hubble parameter in reciprocal seconds, the net inward push is not like the air flowing back while walking at constant speed, but an acceleration. If you run down the corridor with acceleration a, the air will tend to accelerate at -a (the other way). The spacetime fabric known in general relativity is a perfect, frictionless fluid, so this works: http://feynman137.tripod.com/ (introduction to physical theory near top of page). Mathematical proof: http://feynman137.tripod.com/#h

    Where partially shielded by mass, the inward pressure causes gravity. Apples are pushed downwards towards the earth, a shield: '... the source of the gravitational field [gauge boson radiation] can be taken to be a perfect fluid... A fluid is a continuum that "flows"... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.' - Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.
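A sketch of the per-particle shielding area A = Pi(2GM/c^2)^2 quoted two paragraphs up, evaluated for the electron (my choice of particle for illustration; the text does not evaluate it numerically):

```python
import math

# Shielding cross-section A = pi (2GM/c^2)^2 for a fundamental particle,
# evaluated for the electron as an illustration.
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c   = 3.0e8       # speed of light, m/s
m_e = 9.109e-31   # electron mass, kg

r_event = 2 * G * m_e / c**2          # black-hole event horizon radius
A = math.pi * r_event**2
print(f"r = {r_event:.2e} m, A = {A:.2e} m^2")   # ~1.35e-57 m, ~5.7e-114 m^2
```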

    From: "Nigel Cook" <nigelbryancook@hotmail.com>
    To: "Guy Grantham" <epola@tiscali.co.uk>; "David Tombe" <sirius184@hotmail.com>; <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>
    Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>
    Sent: Tuesday, August 01, 2006 9:54 AM
    Subject: Re: Bosons -The Defining Question

    Dear Guy,

    The universe's expansion is powered by the momentum being delivered to masses by gauge boson exchange radiation. Think of a normal balloon inflated with air and some fine dust suspended in the air under pressure. Now imagine you remove the constraining balloon skin. The dust expands because air molecules hit the dust particles and the recoil momentum causes expansion. In the case of the universe, the role of the air molecules is played by gauge boson radiation, which is similar to air molecules only insofar as providing momentum.

    The anisotropy in the CBR is dealt with on my home page. There are several causes. The earth's motion causes the largest anisotropy, +/-0.13% in different areas of the sky (in the direction of our absolute motion the 2.726 K temperature is enhanced by 0.003 K; the other way it is reduced by 0.003 K; at angles in between it varies as the cosine of the angle).
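A minimal sketch of that cosine dependence, using the figures just quoted (2.726 K mean, 0.003 K dipole amplitude):

```python
import math

# Dipole anisotropy: T(theta) = T0 + dT*cos(theta), theta measured from
# our direction of absolute motion.
T0, dT = 2.726, 0.003   # mean CBR temperature and dipole amplitude, K

for deg in (0, 60, 90, 120, 180):
    T = T0 + dT * math.cos(math.radians(deg))
    print(f"theta = {deg:3d} deg: T = {T:.4f} K")
# 0 deg -> 2.7290 K (enhanced); 90 deg -> 2.7260 K; 180 deg -> 2.7230 K (reduced)
```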

    The much smaller ripples in the CBR, supposed to indicate the seeding existing at 300,000 years after the BB for the growth of the earliest galaxies (quasars etc.), are too small to seed galaxies according to simple BB ideas. The gravity mechanism explains why the ripples were so weak at 300,000 years while remaining consistent with galaxy formation; see the list at http://feynman137.tripod.com/#d : the gravity constant G (and the electromagnetism force strength, etc.) increases with time after the BB. (Fusion rates in the BB and in star evolution remain the same, because they depend on strong nuclear force induced capture of nucleons, which occurs when gravitational compression exceeds Coulomb electromagnetic repulsion between charges; increase or decrease BOTH gravity and electromagnetism by the same factor, and the fusion rate is UNALTERED, because the effects of the two inverse square law forces on the fusion rate offset one another.) Because G increases linearly with time, G at the CBR emission time (300,000 years) was 50,000 times weaker than it is now. This explains why the ripples were correspondingly smaller than expected from constant-G galaxy formation models when data came in from the COBE satellite in 1992. As G increased after 300,000 years, galaxy formation accelerated correspondingly. At 1,000,000,000 years, when powerful early galaxies like quasars were being formed, G was over 3,000 times stronger than at the time of CBR emission, so the discrepancy disappears. All discrepancies in the mainstream BB model disappear under the gravity mechanism.
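A trivial sketch reproducing the two ratios quoted above, assuming the strictly linear law G(t) = G_now × t/t_now claimed in that paragraph:

```python
# G rising linearly with time after the BB, per the claim above.
t_now    = 15e9      # present age of universe, years (the post's figure)
t_cbr    = 3e5       # CBR emission time, 300,000 years
t_quasar = 1e9       # early quasar era, years

print(f"G now / G at CBR era  = {t_now / t_cbr:,.0f}")     # 50,000
print(f"G at 1 Gyr / G at CBR = {t_quasar / t_cbr:,.0f}")  # ~3,333, 'over 3,000'
```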

    There is no [recession caused] redshift of the sun. We are bound to it by gravity. It is 8 light minutes away from us. There is no significant net redshift even for stars 100 light years away! Andromeda, our nearest large galaxy, is BLUESHIFTED, not redshifted. (In fact, since it is bigger than the Milky Way, this is mainly because we are going towards Andromeda, not Andromeda coming towards us; notice how special relativity breaks down when you have gravity as the major intermediating force in a very old universe, because gravity allows you to determine which object is moving most - namely the less massive object.) You have to look to CLUSTERS OF GALAXIES before you see the Hubble law appear. Nearby objects are not receding significantly, or there is a lot of "noise" in the data due to local gravity effects.

    Kind regards,

    nigel

    ----- Original Message -----

    From: "Guy Grantham" <epola@tiscali.co.uk>

    To: "Nigel Cook" <nigelbryancook@hotmail.com>; "David Tombe" <sirius184@hotmail.com>; <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>

    Sent: Monday, July 31, 2006 11:30 PM

    Subject: Re: Bosons -The Defining Question

    Dear Nigel

    What comment do you have on the redshift anisotropy and general inaccuracy of Hubble redshift? What comment, and why that result, on the measure of redshift from here to the Sun?

    Best regards,

    Guy.

    ----- Original Message -----

    > From: "Nigel Cook" <nigelbryancook@hotmail.com

    To: "David Tombe" <sirius184@hotmail.com>; <epola@tiscali.co.uk>; > <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; > <Monitek@aol.com>> Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; > <graham@megaquebec.net>; <andrewpost@gmail.com>; > <george.hockney@jpl.nasa.gov>; <tom@tomspace.com

    Sent: Monday, July 31, 2006 6:34 PM

    Subject: Re: Bosons -The Defining Question

    > Dear David,

    > "There could be any number of explanations to explain red shift." - David

    ... PROOF: http://www.astro.ucla.edu/~wright/tiredlit.htm. There are NO other explanations of redshift that work. They are all ... LIES. The facts are that redshift is proved to occur to light due to motion of the source away from you (recession), and all alternatives are UNPROVED speculation.

    Court case:

    Barrister David: "There could be any number of explanations why the dagger was found in the defendant's hand immediately after the murder. He could have just noticed the murder and picked up the dagger to see how well it would fit in his hand. Or a wicked fairy may have handed the knife to the defendant to ensnare him. There is no evidence that the defendant is guilty just because he was caught red-handed."

    Kind regards,

    nigel

    From: "Nigel Cook" <nigelbryancook@hotmail.com>
    To: "Guy Grantham" <epola@tiscali.co.uk>; "David Tombe" <sirius184@hotmail.com>; <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>
    Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>
    Sent: Wednesday, August 02, 2006 2:40 PM
    Subject: Re: Bosons -The Defining Question

    Guy, gravitational redshift can be calculated and does not work [to explain Hubble law]. Red filtering or scattering doesn't work because different frequencies of light would be redshifted by different amounts. Redshift as observed precisely involves the spectral lines all being shifted by the same factor, regardless of the frequency of the spectral line.

    Put another way, the best black body Planck radiation spectrum ever observed is not a laboratory measurement but the cosmic background spectrum, first accurately plotted by the COBE satellite in 1992. That spectrum has an effective temperature of 2.726 K, and was emitted when the universe was 300,000 years old and at a temperature of 3,500 K. The redshift has uniformly shifted all frequencies by a factor of over 1,000, which is the most severe redshift we can ever hope to observe. If redshift were due to any cause other than recession (which is proved to cause redshift correctly as observed: you can detect the redshift of light emitted from a moving object), then you would get scattering effects where different parts of the spectrum are redshifted by different amounts - see the Rayleigh scattering formula, etc. See also http://www.astro.ucla.edu/~wright/tiredlit.htm for a disproof of more complex speculations.
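As a check of the "factor of over 1,000" figure, using the two temperatures just quoted:

```python
# Uniform recession redshift: all frequencies (and the effective blackbody
# temperature) shift by the same factor.
T_emitted = 3500.0   # K, at ~300,000 years after the BB
T_now     = 2.726    # K, COBE measurement

print(f"uniform shift factor ~ {T_emitted / T_now:.0f}")   # ~1284, 'over 1,000'
```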

    nigel

    ----- Original Message -----

    From: "Guy Grantham" <epola@tiscali.co.uk

    To: "Nigel Cook" <nigelbryancook@hotmail.com>; "David Tombe" <sirius184@hotmail.com>; <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>Sent: Tuesday, August 01, 2006 1:01 PMSubject: Re: Bosons -The Defining Question

    Dear Nigel

    That's OK then. I was rising to your (adamant) comment "There are NO other explanations of redshift that work." You have put the Gravitational/Einstein redshifts (eg due to the Sun) into another box, and acknowledge that there are net blueshifts on the smaller scale, whilst qualifying Hubble redshift only to the astronomical very-grand scale. Do you acknowledge also the non-linear absorptions in space of energy from em radiation causing redshifts in freq.? Best regards, Guy

    From: "Nigel Cook" <nigelbryancook@hotmail.com>
    To: "jonathan post" <jvospost2@yahoo.com>; "David Tombe" <sirius184@hotmail.com>; <epola@tiscali.co.uk>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>
    Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>
    Sent: Wednesday, August 02, 2006 2:46 PM
    Subject: Re: "Cosmologists work from the ground up" and "It's like this, you see..."

    Seeing that the most accurate black body radiation ever observed in history is the cosmic background radiation, measured by the COBE satellite in 1992, David should beware that if he doesn't take the most accurate knowledge we have as his foundation, he has problems:

    Ancestor of David in 1500: "Copernicus [builds] his theory on astronomy which is suspect. I build from the ground up."

    Ancestor of David in 1687: "Newton [builds] his theory on astronomy which is suspect. I build from the ground up."

    I don't think it is helpful to throw out the whole of the data from modern physics or from astronomy just because the current theories being used by the mainstream (Lambda-CDM and string theory) look as if they are dark energy, dark matter metaphysics which have been tied together with very short bits of string.

    Cut the strings out, forget the speculative ad hoc dark fixes, and work on the empirical FACTS, and you find they can make useful predictions.

    Kind regards,

    nigel

    ----- Original Message -----

    From: "jonathan post" <jvospost2@yahoo.com

    To: "David Tombe" <sirius184@hotmail.com>; <nigelbryancook@hotmail.com>; <epola@tiscali.co.uk>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>

    Sent: Tuesday, August 01, 2006 9:45 PM

    Subject: "Cosmologists work from the ground up" and "It's like this, you see..."

    Dear David,

    "I work from the ground up" sounds like the punchline of a joke about Cosmologists. Maybe of cosmologists who drive cars with bumper stickers reading "astronomers do it in the dark."

    =================================

    It's like this, you see

    The ability to think metaphorically isn't reserved for poets. Scientists do it, too, using everyday analogies to expand their understanding of the physical world and share their knowledge with peers

    Jul. 30, 2006. 01:00 AM

    SIOBHAN ROBERTS SPECIAL TO THE [TORONTO] STAR:

    http://www.thestar.com/NASApp/cs/ContentServer?pagename=thestar/Layout/Article_Type1&c=Article&cid=1154082909559&call_pageid=1105528093962&col=1105528093790

    The poet Jan Zwicky once wrote, "Those who think metaphorically are enabled to think truly because the shape of their thinking echoes the shape of the world." Zwicky, whose day job includes teaching philosophy at the University of Victoria in British Columbia and authoring books of lyric philosophy such as Metaphor & Wisdom, from which the above quotation was taken, has lately directed considerable attention to contemplating the intersection of "Mathematical Analogy and Metaphorical Insight," giving numerous talks on the subject, including one scheduled at the European Graduate School in Switzerland next week.

    Casual inquiry reveals that metaphor, and its more common cousin analogy, are tools that are just as important to scientists investigating truths of the physical world as they are to poets explaining existential conundrums through verse. A scientist, one might liken, is an empirical poet; and reciprocally, a poet is a scientist of more imaginative and creative hypotheses. Both are seeking "the truth of the matter," says Zwicky. "As a species we are attempting to articulate how our lives go and what our environment is like, and mathematics is one part of that and poetry is another."

    Analogies, whether in science or poetry, she says, are not arbitrary and meaningless, not merely "airy nothings, loose types of things, fond and idle names." To bolster her thesis, Zwicky cites Austrian ethologist and evolutionary epistemologist Konrad Lorenz: "(Lorenz) has argued that, ok, yeah, we are subject to evolutionary pressure, selection of the fittest, but that means what we perceive about the truth of the world has to be pretty damn close to what the truth of the world actually is, or the world would have eliminated us. There are selection pressures on our epistemological choices." Analogy appearing in scientific methodology, then, is no accident. It is fundamental to the way scientists think and the way they whittle their thinking down to truth.

    Zwicky, not being a mathematician (though she teaches elementary mathematical proofs in her philosophy courses), relies on historical testimony from mathematicians such as Henri Poincaré and Johannes Kepler. "I love analogies most of all, my most reliable masters who know in particular all secrets of nature," Kepler wrote in 1604. "We have to look at them especially in geometry, when, though by means of very absurd designations, they unify infinitely many cases in the middle between two extremes, and place the total essence of a thing splendidly before the eyes." ... According to Kochen, the modern mathematical method is that of axiomatics - rooted abstraction and analogy. Indeed, mathematics has been called "the science of analogy."

    "Mathematics is often called abstract," Kochen says. "People usually mean that it's not concrete, it's about abstract objects. But it is abstract in another related way. The whole mathematical method is to abstract from particular situations that might be analogous or similar (to another situation). That is the method of analog." This method originated with the Greeks, with the axiomatic method applied in geometry. It entailed abstracting from situations in the real world, such as> farming, and deriving mathematical principles that were put to use elsewhere. Eratosthenes used geometry to measure the circumference of the Earth in 276 BC, and with impressive accuracy. In the lexicon of cognitive science, this process of transferring knowledge from a known to unknown is called "mapping" from the "source" to the "target." Keith Holyoak, a professor of cognitive psychology at UCLA, has dedicated much of his work to parsing this process. He discussed it in a recent essay, "Analogy," published last year in The Cambridge Handbook of Thinking and Reasoning. "The source," Holyoak says, providing a synopsis, "is what you know already - familiar and well understood. The target is the new thing, the problem you're working on or the new theory you are trying to develop. But the first big step in analogy is actually finding a source that is worth using at all. A lot of our research showed that that is the hard step. The big creative insight is figuring out what is it that's analogous to this problem. Which of course depends on the person actually knowing such a thing, but also> being able to find it in memory when it may not be that obviously related with any kind of superficial features." In an earlier book, Mental Leaps: Analogy in Creative Thought, Holyoak and co-author Paul Thagard, a professor of philosophy and director of the Cognitive Science Program at the University of Waterloo, argued that the cognitive mechanics underlying analogy and abstraction is what sets humans apart from all the other species, even the great apes. They touch upon the use of analogy in politics and law but focus a chapter on the "analogical scientist" and present a list of "greatest hits" science analogies. The ancient Greeks used water waves to suggest the nature of the modern wave theory of sound. A millennia and a half later, the same analogical abstraction yielded the wave theory of light. Charles Darwin formed his evolutionary theory of natural selection by drawing a parallel to the> artificial selection performed by breeders, an analogy he cited in his 1859 classic The Origin of Species. Velcro, invented in 1948 by Georges de Mestral, is an example of technological design based on visual analogy - Mestral recalled how the tiny hooks of burrs stuck to his dog's fur. Velcro later became a "source" for further analogical designs with "targets" in medicine, biology, and chemistry. According to Mental Leaps, these new domains for analogical transfer include abdominal closure in surgery, epidermal structure, molecular bonding, antigen recognition, and hydrogen bonding. Physicists currently find themselves toying with analogies in trying to unravel the puzzle of string theory, which holds promise as a grand unified theory of everything in the universe. Here the tool of> analogy is useful in various contexts - not only in the discovery, development, and evaluation of an idea, but also in the exposition of esoteric hypotheses, in communicating them both among physicists and to the layperson. 
    Brian Greene, a Columbia University professor cum pop-culture physicist, has successfully translated the foreign realm of string theory for the general public with his best-selling book The Elegant Universe (1999) and an accompanying NOVA documentary, both replete with analogies to garden hoses, string symphonies, and sliced loaves of bread. As one profile of Greene observed, "analogies roll off his tongue with the effortless precision of a Michael Jordan lay-up."

    Yet at a public lecture at the Strings05 conference in Toronto, an audience member politely berated physicists for their bewildering smorgasbord of analogies, asking why the scientists couldn't reach consensus on a few key analogies so as to convey a more coherent and unified message to the public. The answer came as a disappointment. Robbert Dijkgraaf, a mathematical physicist at the University of Amsterdam, bluntly stated that the plethora of analogies is an indication that string theorists themselves are grappling with the mysteries of their work; they are groping in the dark and thus need every glimmering of analogical input they can get.

    "What makes our field work, particularly in the present climate of not having very much in the way of newer experimental information, is the diversity of analogy, the diversity of thinking," says Leonard Susskind, the Felix Bloch professor of theoretical physics at Stanford, and the discoverer of string theory. "Every really good physicist I know has their own absolutely unique way of thinking," says Susskind. "No two of them think alike. And I would say it's that diversity that makes the whole subject progress. I have a very idiosyncratic way of thinking. My friend Ed Witten (at Princeton's Institute for Advanced Study) has a very idiosyncratic way of thinking. We think so differently, it's amazing that we can ever interact with each other. We learn how. And one of the ways we learn how is by using analogy."

    Susskind considers analogy particularly important in the current era because physics is almost going beyond the ken of human intelligence. "Physicists have gone through many generations of rewiring themselves, to learn how to think about things in a way which initially was very counterintuitive and very far beyond what nature wired us for," he says. Physicists compensate for their evolutionary shortcomings, he says, either by learning how to use abstract mathematics or by building analogies. Susskind, for his own part, deploys more of the latter. Analogy is one of his most reliable tools (visual thinking is the other). And Susskind has a few favourites that he always returns to, especially when he is stuck or confused. He thinks of black holes as an infinite lake with boats swirling toward a drain at the bottom, and he envisions the expanding universe as an inflating balloon. However, the real art of analogy, he says, "is not just making them up and using them, but knowing when they're defective, knowing their limitations. All analogies are defective at some level." A balloon eventually pops, for example, whereas a universe does not. At least not yet.

    --------------------------------------------------------------------------------

    Siobhan Roberts is a Toronto freelance writer and author of "King of Infinite Space: Donald Coxeter, The Man Who Saved Geometry" (Anansi), to be published in October. ======================================

    > --- David Tombe <sirius184@hotmail.com> wrote: Dear Jonathan, I've always been extremely cautious about believing modern theories in cosmology. There are so many unknown variables and factors out there in that great expanse that it is far too presumptuous to say anything with any degree of certainty. Hence I would never base a physics theory on cosmological evidence. I work from the ground up. Yours sincerely, David Tombe

    From: "Nigel Cook" <nigelbryancook@hotmail.com>
    To: "Guy Grantham" <epola@tiscali.co.uk>; "David Tombe" <sirius184@hotmail.com>; <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>
    Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>
    Sent: Wednesday, August 02, 2006 7:53 PM
    Subject: Re: Bosons -The Defining Question

    Dear Guy,

    No! Distant light coming towards us from a vast distance approaches the centre of mass in our frame of reference (we are at the centre of mass in our frame of reference because all the mass of the universe appears as if it is uniformly distributed around us in all directions), and hence would GAIN gravitational potential energy and be BLUESHIFTED. A blueshift corresponds to gaining gravitational potential energy; gravitational redshift would occur to light we send out towards distant masses. In an extremely sensitive experiment, Pound (Woit says somewhere that Pound was his PhD adviser or postdoc adviser or similar) measured the gravitational redshift of gamma rays by sending them upwards in a tower at his physics department. The result was redshift. If the gamma rays had been sent downwards, they would have been blueshifted.
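For scale, a sketch of the fractional frequency shift in such a tower experiment, using the standard weak-field formula df/f = gh/c^2. The ~22.5 m height is my assumption (the Harvard tower used in the Pound-Rebka experiment); the email gives no height:

```python
# Fractional frequency shift for gamma rays climbing a tower.
g = 9.81      # surface gravity, m/s^2
h = 22.5      # ASSUMED tower height, m (Pound-Rebka's Harvard tower)
c = 3.0e8     # speed of light, m/s

shift = g * h / c**2
print(f"df/f ~ {shift:.2e}")   # ~2.5e-15: redshift going up, blueshift coming down
```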

    Hence you are totally wrong. Have you been dowsing again? ;-)

    Kind regards,

    nigel

    ----- Original Message -----

    From: "Guy Grantham" <epola@tiscali.co.uk>

    To: "Nigel Cook" <nigelbryancook@hotmail.com>; "David Tombe" <sirius184@hotmail.com>; <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>

    Sent: Wednesday, August 02, 2006 6:54 PM
    Subject: Re: Bosons -The Defining Question

    Dear Nigel

    I take it you mean that the gravitational redshift and non-linear absorptions do not work to explain the cosmological redshift relating to age of universe, for they surely cause redshifts?

    Best regards, Guy

    From: "Nigel Cook" <nigelbryancook@hotmail.com>
    To: "Guy Grantham" <epola@tiscali.co.uk>; "David Tombe" <sirius184@hotmail.com>; <jvospost2@yahoo.com>; <imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>
    Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>; <graham@megaquebec.net>; <andrewpost@gmail.com>; <george.hockney@jpl.nasa.gov>; <tom@tomspace.com>
    Sent: Thursday, August 03, 2006 9:54 AM
    Subject: Re: Bosons -The Defining Question

    Dear Guy,

    I said gravitational redshift has been measured on Earth using gamma rays by Professor Pound, and I said gravitational redshift does not work to explain the recession of stars.

    Because light is falling in towards us, you'd expect if anything that distant starlight would be blueshifted due to gaining gravitational potential energy, rather than redshifted (which occurs when light moves away from a centre of mass, not towards it). However, at immense distances the time after the BB at which the light was emitted was small, so the density of the universe was much higher. Newton's proof for a large radially symmetric spherical system (the observed universe being spherically symmetric, as seen by us along any radial line) is that the net gravity effect is due to the mass enclosed by the sphere of radius equal to your distance from the centre of mass. For example, if you are at half the earth's radius, you can calculate gravity by working out the amount of mass which exists in the earth out to half the radius of the earth, and then assuming it is all located at the centre of the earth (contributions from the shells around you all cancel out exactly).

    So a ray of light travelling from a very distant supernova is continuously subject to a falling [diminishing] gravitational field as it approaches us, for two reasons: (1) the density of the universe is falling as time passes and the universe expands, and (2) the light as it approaches is subject to a net effect equal to the gravity due to the mass enclosed by a sphere around us with radius equal to the distance the light ray is from us. Effect (1), the falling density of the universe, is severe because the volume of the universe increases as the cube of time (since there is no observational gravitational retardation), and density = mass/volume where volume = (4/3)Pi(ct)^3, where ct is the horizon radius, t being time after the BB (ignoring Guth's inflation, which I've already said is superfluous under the gravity mechanism, since the CBR is smoothed correctly by a totally different mechanism than the ad hoc inflation speculation which Guth gives). Effect (2) is also severe. At immense distances, gravitational blueshift may therefore be at a maximum, but will still be small because of the inverse square law of distance, which makes gravity extremely weak, never mind the issue that gravity in the gravity mechanism falls to zero at the horizon because it is an effect of surrounding expansion. So [after detailed calculations have been done to determine it quantitatively, gravitational] blueshift is trivial.
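Effect (1) is easy to quantify under the assumptions stated above (fixed mass inside a horizon of radius ct, so density falls as the cube of time):

```python
# Density dilution between CBR emission and now, with volume ~ (ct)^3.
t_now, t_cbr = 15e9, 3e5   # years
ratio = (t_now / t_cbr) ** 3
print(f"density at CBR era / density now ~ {ratio:.2e}")   # ~1.25e14
```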

    The amount of gravitational redshift is extremely difficult to measure from the sun with the blackbody light the sun gives off, because it is so small. What you are doing in measuring redshift in a nearby star's light or sunlight spectrum is ascertaining the frequencies of the line spectra, then comparing the Balmer etc. series of line spectra to the line spectra measured in the laboratory here on Earth, where there is no net effect. (Light going sideways, not vertically, for instance, would eliminate gravity red/blue shift, but it is trivial anyhow, which is why gamma rays were used in the experiment: it is easier to determine their exact frequency, as they have more energy to start with than light photons.)

    Because of the high temperature and intense magnetic field in the sun, the line spectra are only approximately those determined in the laboratory on Earth. The Balmer equation indicates the relative frequencies of lines for a particular element, but in reality the presence of high temperatures and magnetic fields causes "noise" effects which outweigh the gravitational redshift from the sun, which is very small. Admittedly the Earth's mass is trivial compared to the Sun's, but with gamma rays going up a tower you have complete control over the temperature and magnetic field those rays are subjected to from the time of emission to the time of being measured, and since we are far closer to the Earth's centre of mass (4,000 miles) than to the Sun's (92,000,000 miles), the inverse square law means that the gravity acceleration at the Earth's surface is relatively high despite the Earth having trivial mass compared to the Sun. This is why the redshift can be measured on Earth more easily than from the Sun, although obviously light leaving the Sun's surface is in a stronger gravitational field.

    Finally you say: "If an em photon from the source were to be (totally) absorbed by a filtering atom or ion it would simply diminish amplitude of the incoming beam and not shift its freq, I am not referring to filtering or scattering." - Guy

    There is no evidence that this can occur: no evidence that there are atoms or ions in space which will totally absorb and then presumably re-emit radiation at a reduced frequency. This is speculation, like UFOs, the Loch Ness Monster, religious stuff, string theory, 10 dimensions, 11 dimensions, parallel universes, dowsing, etc. The evidence is that the recession redshift model is experimentally shown to work, fits other evidence of the BB such as the abundances of the different light elements in the universe, and makes predictions, like gravity (although I can't publish that in Classical and Quantum Gravity, despite the kindly editor, because the peer-reviewers are all behaving like arrogant thugs obsessed with string theory or their own pretentious pet theories which predict nothing).

    Kind regards,

    nigel

    ----- Original Message -----From: "Guy Grantham" <epola@tiscali.co.uk>To: "Nigel Cook" <nigelbryancook@hotmail.com>; "David Tombe"<sirius184@hotmail.com>; <jvospost2@yahoo.com>;<imontgomery@atlasmeasurement.com.au>; <Monitek@aol.com>Cc: <marinsek@aon.at>; <pwhan@atlasmeasurement.com.au>;<graham@megaquebec.net>; <andrewpost@gmail.com>;<george.hockney@jpl.nasa.gov>; <tom@tomspace.com>Sent: Wednesday, August 02, 2006 10:23 PMSubject: Re: Bosons -The Defining Question> Dear Nigel>> Neither dowsing nor drowsing!>> Yes! I am correct then that you hold they "do not work to explain the> cosmological redshift ">> I referred to the redshift of light leaving the Sun towards us - whichyou> seemed to deny to occur!> ">>>You have put the Gravitational /Einstein redshifts (eg due to the Sun)> [gg]>> Guy, gravitational redshift can be calculated and does notwork.[nbc]> "> I can imagine that intermediate stellar objects would equally blueshiftthen> redshift light passing by no change of original freq.. ( I had read ofthe> experiment to measure redshift of gamma rays sent upwards a few metres -> which proved that gravitational redshift does occur).> Does your measure of perimeter of universe include a correction for the> gravitational blueshift of incoming radiation and would it mean that the> radius of vision is limited to an 'event horizon' of a bigger universe> rather than an expanding-still-visible universe?>> My other point has not been resolved for me. Non-linear absorption of em> radiation over long distance causes a redshift of frequency by partial> energy loss. This effect would be distance related as it is in practicefor> radio waves on Earth. Over cosmological distances all freqs (spectral> lines) would exhibit the same shift by the same probability logic as your> averaging of push/pull virtual bosons.> If an em photon from the source were to be (totally) absorbed by afiltering> atom or ion it would simply diminish amplitude of the incoming beam andnot> shift its freq, I am not referring to filtering or scattering.>> Best regards, Guy

    http://motls.blogspot.com/2006/07/from-strings-to-lhc.html

    Dear Lumos,

    Suppose string theory is correct, and gravity is mediated by spin-2 gauge bosons (exchange radiation between masses, since masses are the units of gravitational charge).

    Assuming this, would the exchange radiation (gravitons, gravity gauge bosons, vector bosons, whatever you call it) be redshifted by cosmological expansion over large distances?

    If so this would weaken gravity over immense distances, explaining why gravity doesn't slow the recession of distant supernovae. Hence the need to invent a small positive cosmological constant (and a massive amount of "dark energy" to explain that cosmological constant) disappears.

    Also, the disagreement between this "dark energy observation" and the vacuum energy, as calculated by supersymmetry predictions, will disappear.

    Suppose you were confronted by the argument above, would you be able to put a note on arxiv.org about it?

    Kind regards,
    anon
    anon Homepage 07.29.06 - 12:07 pm #


    Dear anon,

    I don't want to discourage you much but what you wrote here is far from an acceptable paper on arxiv.org. Moreover I don't quite understand how could I publish YOUR ideas.

    Indeed, the rough idea that something is drastically changed at cosmological distances and forces become weaker has been formulated by many people.

    A more important point at this moment is that according to the standard theories we love and trust today, the very long distance behavior of the forces IS given by the well-known power laws and there are no easy ways to make these power laws break down.

    Physical gravitons and photons get red shifted by expansion, but the force they cause right now is always given by the same laws, regardless of the cosmological history.

    However, I sympathize that the 1/r^2 becomes much weaker at very long distances, which could eventually even explain the cosmological data - and Hubble = 1.0/AgeOfTheUniverse - without dark energy. But you need to create a theory that is also consistent with other things we know which is not trivial.

    Best
    Lubos
    Luboš Motl Homepage 07.30.06 - 4:39 am #

    Dear Lumos,

    Thanks for your remarks, but can you tell me if anyone has formulated this gauge boson redshift mechanism before? I'm not interested in ad hoc speculative MOND stuff that anyone can sketch on the back of an envelope (pet theories); this is a mechanism which is true. Redshift is confirmed [http://www.astro.ucla.edu/~wright/tiredlit.htm. There are NO other explanations of redshift that work other than recession and the big bang. Light isn't being coloured red or scattered; it is being shifted by the same amount at all frequencies, which cannot result from scattering or filtering, which are frequency dependent effects. Notice that the most redshifted big bang radiation is the cosmic background radiation, emitted when ions and electrons combined to form hydrogen atoms at 300,000 years after the BB, and this radiation is the most PERFECT BLACK BODY SPECTRUM ever observed in physics! There is no evidence that redshift is due to red-colouring or scattering or any other speculative, false drivel from people like Catt and Arp. The facts are that redshift is proved to occur to light due to motion of the source away from you (recession), and all alternatives are UNPROVED speculation], and all the evidence points to forces being mediated by exchange radiation.

    The earth and moon are not receding from one another significantly. Hence redshift is not a contributor to the inverse square law in most cases. Newton showed that gravity between earth and moon is given by the inverse square law, see http://www.math.columbia.edu/~woit/wordpress/?p=432#comment-13697, where redshift of light and gauge bosons between the two masses is NOT involved!

    Redshift of the gravitons is significant on cosmological distance scales. It is easy to calculate the precise effect due to redshift: the proportionate loss of energy of photons and gravitons at any distance would reduce gravity constant G by the same factor.
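A minimal sketch of that claim, reading "reduce G by the same factor" as an effective coupling falling as 1/(1 + z); the linear form is my interpretation, not a formula given in the comment:

```python
# Effective gravitational coupling for a source at redshift z, if graviton
# energy (and hence the coupling) falls by the redshift factor (1+z).
def G_effective(z, G0=6.674e-11):
    """ASSUMED form: G_eff = G0/(1+z)."""
    return G0 / (1.0 + z)

for z in (0.0, 0.5, 1.0, 7.0):
    print(f"z = {z}: G_eff ~ {G_effective(z):.2e}")   # falls toward zero at high z
```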

    I plot the predicted recession curve allowing for the reduced G involved in gravity retardation effects on receding galaxies at any given distance from us. If you want, I'll draft a paper with the equations and some graphs, comparing the redshift mechanism to the ad hoc Lambda-CDM speculation and to the actual data: http://www.phys.lsu.edu/GRBHD/ . I can provide a reference to a publication of this dated Oct96, preceding the current Lambda-CDM theory and also preceding Perlmutter's discovery that there is no observable gravitational retardation (the so-called evidence for a small positive CC and dark energy, which is actually confirmation of the redshift mechanism).

    This would wipe out all the paradoxes of a small positive CC, leaving a slate clean for new ideas.

    Best wishes,
    anon

    anon Homepage 07.30.06 - 11:46 am #

    Isn't it the case that what is actually observed is the acceleration of cosmological expansion? In which case, even if gravity were redshifted to zero at cosmological distance scales, one would still need to introduce a small but non-zero vacuum energy to explain the acceleration, right?

    Chris 07.30.06 - 7:30 pm #

    Dear anon,

    Your line of thinking vaguely reminds me of a gravitational model presented by Gia Dvali at the Space Telescope Science Institute (12/4). He essentially proposes the idea that dark energy is actually a modification of gravity at long distances. This model embraces the notion that gravity has an attractive side as well as a repulsive side. {a lot more speculative, non factual, non predictive, non-scientific ad hoc MOND drivel deleted} ... Dvali has had lunar thoughts ...

    Best,
    Cynthia 07.31.06 - 10:04 am #

    "Isn't it the case that what is actually observed is the acceleration of cosmological expansion. In which case, even if gravity [graviton radiation] were redshifted to zero at cosmological distance scales, one would still need to introduce a small but non-zero vacuum energy to explain the acceleration, right?" - Chris

    Chris, what acceleration??? The acceleration I'm referring to is being postulated to counteract gravity, which slows expansion if you ignore redshift of gravitons.

    See Nobel Laureate Phil Anderson's comment http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901 on the discussion http://cosmicvariance.com/2006/01/03/danger-phil-anderson:

    "the flat universe is just not decelerating, it isn’t really accelerating ... there’s a bit of the 'phlogiston fallacy' here, one thinks if one can name Dark Energy or the Inflaton one knows something about it. And yes, inflation predicts flatness, and I even conditionally accept inflation, but how does the crucial piece Dark Energy follow from inflation?–don’t kid me, you have no idea."

    In a sense of course the universe is accelerating in a completely different context (the regular Hubble effect).

    If you assume graviton exchange works simply by imparting momentum, then this would be a dark energy driving the Hubble expansion, but that is entirely different to the pseudo-acceleration in the Lambda-CDM model of mainstream cosmology (which is horses***).

    The REGULAR Hubble recession formula is: recession velocity = Hubble constant x distance.

    But this ignores Minkowski's spacetime, where distance = c × time.

    Hence recession velocity rises linearly with time of observed stars in the past, not merely distance of the stars.

    Any variation of velocity with respect to time is acceleration: a = v/t ~ c/(1/H) ~ cH ~ 10^-10 ms^-2, so the mass of the universe m is subject to an outward force of F = ma ~ 10^43 Newtons. Since Yang-Mills exchange radiation drives gravity, it probably does this by simply delivering momentum [and causing the general relativity contraction the same way, i.e., the earth's radius is contracted by GM/(3c^2) = 1.5 millimetres according to the contraction in general relativity, which resembles an ideal frictionless fluid compression by gravitons, and which distinguishes general relativity from Newtonian dynamics]. There is no evidence that the dynamics are far more complicated than radiation exchange.

    Because the expansion of the universe is then like a balloon inflating under molecule collisions, the vacuum energy is then easy to calculate: it is equivalent to the total kinetic energy of all the receding matter in the universe.
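A rough sketch of that equivalence, under simplifying assumptions of my own (uniform density, recession speed v = Hr out to a horizon R = c/H, and an illustrative total mass of ~1.1 × 10^53 kg, none of which the comment fixes): integrating (1/2)v^2 dm over the sphere gives a total kinetic energy of (3/10)Mc^2.

```python
# Total kinetic energy of receding matter: KE = integral of (1/2)(Hr)^2 dm
# over a uniform sphere of radius R = c/H, which evaluates to (3/10) M c^2.
c = 3.0e8     # speed of light, m/s
M = 1.1e53    # ASSUMED mass of the universe, kg (illustrative)

KE = 0.3 * M * c**2
print(f"KE ~ {KE:.1e} J")   # ~3e69 J, vastly above the Lambda-CDM 'dark energy'
```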

    This is IMMENSELY larger than the energy implied by the small positive CC Lambda-CDM acceleration "dark energy". It is FAR closer to supersymmetry predictions than the Lambda-CDM s***.

    Best wishes,
    anon
    anon Homepage 07.31.06 - 3:45 pm #

    Tony Smith [Frank D. (Tony) Smith, Jr.] is suppressed by other ignorant Theorists

    On the Amazon.com discussion board for Woit's book Not Even Wrong, Tony Smith has placed some intriguing comments relating to Dr Woit's Not Even Wrong and also to Dr Smolin's forthcoming The Trouble with Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next, which I can't respond to there because I'm an Amazon.co.uk customer and you have to have bought a book from Amazon.com (not Amazon.co.uk) in order to participate.

    Dr Smith comments:

    "The Standard Model, which is at present completely consistent with experimental results, but which by itself is not unified with Gravity, is well described in Woit's book. ...

    "The Rise of String Theory as an attempt over the past 20 years or so to unify the Standard Model with Gravity is described by both Smolin and Woit. The Fall of a Science, due to the failure of superstring theory to deliver on its early promise of unification of the Standard Model and Gravity, is also described by both Smolin and Woit. Smolin's Book Description says that superstring theory: '... has been a colossal failure. And because it has soaked up the lion's share of funding, attracted some of the best minds, and penalized young physicists for pursuing other avenues, it is dragging the rest of physics down with it. ...'.

    "As Feynman said in the book Superstrings, by Davies and Brown (Cambridge 1988, pp. 194-195): '... I do feel strongly that this is nonsense! ... I think all this superstring stuff is crazy and is in the wrong direction. ... I don't like it that they're not calculating anything. ... why are the masses of the various particles such as quarks what they are? All these numbers ... have no explanations in these string theories - absolutely none! ... '

    "What Comes Next is not dealt with by Woit (which is why I gave his book only 4 and not 5 stars), but Smolin advocates his personal approach to physics, known as Loop Quantum Gravity, which is an alternative to superstring theory that has some establishment support, but not nearly as much as superstring theory (maybe Loop Quantum Gravity gets 10% of the high energy theoretical physics funding and jobs, with superstring theory getting 90%, and other approaches getting relatively insignificant amounts. Smolin's failure to advocate exploration of such 'other approaches', while trumpeting his own Loop Quantum Gravity as his anointed successor to superstring theory, is why I gave his book only 4 stars. For the context of a concrete example of Smolin's indifference to 'other approaches', consider a June 2006 talk by John Baez at the University of Western Ontario (nearby Smolin's Perimeter Institute in Waterloo, Ontario). Baez's talk was entitled Fundamental Physics: Where We Stand Today, and discussed Six Mysteries whose solution would advance Fundamental Physics. Baez's Six Mysteries are:

    '... Mystery 1. What is making the expansion of the universe accelerate? Does the vacuum have negative pressure? ...
    Mystery 2. Does the Higgs really exist? What is the origin of mass?
    Mystery 3. Why do these 18 numbers [that] ... describe the strengths of all these interactions ... [of] ... the Standard Model ... have the values they do? Does this question even have an answer? ...
    Mystery 4. Do neutrino oscillations fit into a slightly modified Standard Model - now requiring 25 dimensionless numbers - or must the theory be changed more drastically? ...
    Mystery 5. What happens to things when they fall into a black hole? ...
    Mystery 6. What is cold dark matter - or what else explains what this hypothesis tries to explain ... [such as] ... that the energy of our universe ... is made of: 4% normal matter, 23% cold dark matter, 73% vacuum energy ...'.

    "My concrete example is that, shortly after Baez described his Six Mysteries in Ontario, I sent an e-mail message to Smolin saying:

    '... I would like to present, at Perimeter, answers to those questions, as follows: Mysteries 2 and 3: The Higgs probably does exist, and is related to a Tquark-Tantiquark condensate, and mass comes from the Standard Model Higgs mechanism, producing force strengths and particle masses consistent with experiment, as described in http://www.valdostamuseum.org/hamsmith/YamawakiNJL.pdf and http://www.valdostamuseum.org/hamsmith/TQ3mHFII1vNFadd97.pdf

    'Mystery 4: Neutrino masses and mixing angles consistent with experiment are described in the first part of this pdf file http://www.valdostamuseum.org/hamsmith/NeutrinosEVOs.pdf Mystery 5: A partial answer: If quarks are regarded as Kerr-Newman black holes, merger of a quark-antiquark pair to form a charged pion produces a toroidal event horizon carrying sine-Gordon structure, so that, given up and down quark constituent masses of about 312 MeV, the charged pion gets a mass of about 139 MeV, as described in http://www.valdostamuseum.org/hamsmith/sGmTqqbarPion.pdf Mysteries 6 and 1: The Dark Energy : Dark Matter : Ordinary Matter ratio of about 73 : 23 : 4 is described in http://www.valdostamuseum.org/hamsmith/WMAPpaper.pdf

    'Please let me know if you would be interested in me making such a presentation at Perimeter.
    Tony Smith
    http://www.valdostamuseum.org/hamsmith/

    'PS - I expect that I could pay my own expenses, including travel and lodging, and that I would not require any funding by Perimeter. ...'.

    "I have yet to receive even the courtesy of a reply from Smolin, much less any interest in a presentation of my approach, which is different from both superstring theory and Smolin's Loop Quantum Gravity. Maybe my approach has some flaws of which I am unaware (one reason to make such a presentation is to find out if others can spot such flaws, so that they can either be corrected or the flawed stuff discarded) or maybe my approach is substantially sound and would be a significant contribution to What Comes Next, but if physicists in leadership positions, such as Smolin, are unwilling to consider such possible alternatives to their own pet theories around which their jobs/funding empires are built, the Future of Theoretical High Energy Physics is indeed Dark (at least in North America).

    "Frank D. (Tony) Smith, Jr. http://www.valdostamuseum.org/hamsmith/"

    I think Tony Smith is a free-thinking yet orthodoxy-inclined mathematical physicist. What is weird is that he is a string theorist, and yet has been banned from arXiv.org just as I was, for no good scientific reason whatsoever (i.e., because Tony Smith sticks to 26-dimensional string theory instead of the celebrated 10/11 dimensional M-theory version of Edward Witten; Tony Smith's version is the only one to offer any sort of predictions and thus to meet the scientific criterion of falsifiability), and provably for reasons of prejudice unrelated to content:

    The people who run arXiv.org deleted Tony Smith's seminal 340 page book after just 4 minutes and 7 seconds, and since nobody has claimed to be able to read 340 pages of mathematical physics in 4 minutes 7 seconds, they are not honest if they claim to have any science behind suppressing it. They are all a load of ------- and -------. I've commented on this blog before that Danny Ross Lunsford was censored off arXiv.org.

    As for Dr Lee Smolin and Dr Peter Woit, they are doing their best, although both are clearly inexperienced in formulating physical theories which can be easily checked and tested (they are, however, more competent than the likes of Hawking). Although arXiv.org's Dr Jacques Distler has banned arXiv.org trackbacks to Woit's blog Not Even Wrong, it is clear from the stance of string theorists that they will complain about THEIR freedom being threatened by the insistence that some money goes to alternatives, while ganging together to PREVENT the freedom of other people to publish alternatives.

    Notice above, in quoting Tony Smith, I put one section in red. Tony Smith's talk about explaining dark energy is wrong (see HERE for details, which is in the fast comment section of Dr Motl's blog), but his description of the quark as a spinning black hole is a vital insight.

    What always happens in the world is a seesaw from one extreme mainstream viewpoint to another extreme mainstream viewpoint. There can be no wide diversity of officially published and acknowledged ideas waiting to be checked in a professional science culture. Such a wide diversity would make the subject look too 'amateurish' (why sneer at amateurs? What is wrong with amateurs? Why do people sneer at amateur science when they would never do that to amateur sportsmen or amateurs in other subjects? Society's attitude to science is totally screwed up by the ignorance due to Hollywood), destroying the pseudo-credibility which is currently upheld by falsehood and propaganda in favour of officialdom's speculations.

    It is not down to Smolin or Woit, or Baez or even President Bush to decide what facts get money for publicity, publication and further investigation. It is a free market of ideas, and we have to get dirty and improve and clarify and popularise the facts we have found out without waiting for the helpful intervention of any particular big name. Those guys and girls will only jump on a successful bandwagon; they aren't - in this day and age - interested in non-mainstream ideas which are obscure and quietly defended; at the same time they are even less sympathetic to loudly presented ideas which are inadequately checked and presented. So the ideas need not only marketing but, more important, scientific development as far as possible.

    Should I submit a paper to Dr Anthony Garrett, who is an expert editor offering presentation-editing help for reasonable remuneration? I'm already the author of a number of Electronics World articles and letters, but I'm not arrogant or proud enough to believe that I'm the best writer; my writing is less inhibited than my speech, since I don't value writing so much. I certainly always find plenty of inadvertent errors when I read my own writings at a later date. So I may try that. The effort I will have to make to draft a reasonable manuscript for him to consider would probably be immensely valuable, as much progress in minutiae has been made this year which has not been published.

    *****

    Update:

    One very interesting paper by Frank D. (Tony) Smith, Jr., that I recommend (compare it to my Electronics World articles of August 2002 and April 2003), is "Sine-Gordon Quarks and Pion", PDF here. See http://www.valdostamuseum.org/hamsmith/Jun2006Update.html (extracted text below excludes illustrations):

    Sine-Gordon Quarks and Pion
    Frank D. (Tony) Smith, Jr.
    http://www.valdostamuseum.org/hamsmith/sGmTqqbarPion.pdf
    June 2006


    Abstract: The charged pion, made up of Up plus antiDown or Down plus antiUp quarks, is described in terms of the sine-Gordon and massive Thirring models. When the quark and antiquark are seen as Kerr-Newman black holes each having constituent mass about 312 MeV, the pion is seen as resulting from their merger, which produces a black hole with toroidal event horizon representing a sine-Gordon meson whose mass can be calculated. The charged pion mass calculation gives a charged pion mass of about 139 MeV, which is substantially consistent with experimental results.

    The quark content of a charged pion is a quark - antiquark pair: either Up plus antiDown or Down plus antiUp. Experimentally, its mass is about 139.57 MeV.

    The quark is a Naked Singularity Kerr-Newman Black Hole, with electromagnetic charge e and spin angular momentum J and constituent mass M of about 312 MeV, such that e^2 + a^2 is greater than M^2 (where a = J / M).

    The antiquark is also a Naked Singularity Kerr-Newman Black Hole, with electromagnetic charge e and spin angular momentum J and constituent mass M of about 312 MeV, such that e^2 + a^2 is greater than M^2 (where a = J / M).

    According to General Relativity, by Robert M. Wald (Chicago 1984) page 338 [Problems] ... 4. ...:
    "... Suppose two widely separated Kerr black holes with parameters ( M1 , J1 ) and ( M2 , J2 ) initially are at rest in an axisymmetric configuration, i.e., their rotation axes are aligned along the direction of their separation.

    Assume that these black holes fall together and coalesce into a single black hole.
    Since angular momentum cannot be radiated away in an axisymmetric spacetime, the final black hole will have angular momentum J = J1 + J2. ...".

    The neutral pion produced by the quark - antiquark pair would have zero angular momentum, thus reducing the value of e^2 + a^2 to e^2 .

    For fermion electrons with spin 1/2, 1 / 2 = e / M (see for example Misner, Thorne, and Wheeler, Gravitation (Freeman 1972), page 883) so that M^2 = 4 e^2 is greater than e^2 for the electron. In other words, the angular momentum term a^2 is necessary to make e^2 + a^2 greater than M^2 so that the electron can be seen as a Kerr-Newman naked singularity.
    Since the magnitude of the electromagnetic charge of each quark or antiquark is less than that of an electron, and since the mass of each quark or antiquark (as well as the pion mass) is greater than that of an electron, and since the quark - antiquark pair (as well as the pion) has angular momentum zero, the quark - antiquark pion has M^2 greater than e^2 + a^2 = e^2.

    ( Note that color charge, which is nonzero for the quark and the antiquark and is involved in the relation M^2 less than the sum of spin-squared and charges-squared by which quarks and antiquarks can be seen as Kerr-Newman naked singularities, is not relevant for the color-neutral pion. )

    Therefore, the pion itself is a normal Kerr-Newman Black Hole with Outer Event Horizon = Ergosphere at r = 2M ( the Inner Event Horizon is only the origin at r = 0 ) as shown in this image http://www.valdostamuseum.org/hamsmith/sGmTqqbarPion.pdf
    from Black Holes - A Traveller's Guide, by Clifford Pickover (Wiley 1996) in which the Ergosphere is white, the Outer Event Horizon is red, the Inner Event Horizon is green, and the Ring Singularity is purple. In the case of the pion, the white and red surfaces coincide, and the green surface is only a point at the origin.
    According to section 3.6 of Jeffrey Winicour's 2001 Living Review of the Development of Numerical Evolution Codes for General Relativity (see also a 2005 update):
    "... The black hole event horizon associated with ... slightly broken ... degeneracy [ of the axisymmetric configuration ]... reveals new features not seen in the degenerate case of the head-on collision ... If the degeneracy is slightly broken, the individual black holes form with spherical topology but as they approach, tidal distortion produces two sharp pincers on each black hole just prior to merger. ... Tidal distortion of approaching black holes ...
    ... Formation of sharp pincers just prior to merger ...
    ... toroidal stage just after merger ...
    At merger, the two pincers join to form a single ... toroidal black hole.
    The inner hole of the torus subsequently [ begins to] close... up (superluminally) ... [ If the closing proceeds to completion, it ]... produce[s] first a peanut shaped black hole and finally a spherical black hole. ...".
    In the physical case of quark and antiquark forming a pion, the toroidal black hole remains a torus. The torus is an event horizon and therefore is not a 2-spacelike dimensional torus, but is a (1+1)-dimensional torus with a timelike dimension.
    The effect is described in detail in Robert Wald's book General Relativity (Chicago 1984). It can be said to be due to extreme frame dragging, or to timelike translations becoming spacelike as though they had been Wick rotated in Complex SpaceTime.
    As Hawking and Ellis say in The Large Scale Structure of Space-Time (Cambridge 1973):
    "... The surface r = r+ is ... the event horizon ... and is a null surface ...
    ... On the surface r = r+ .... the wavefront corresponding to a point on this surface lies entirely within the surface. ...".
    A (1+1)-dimensional torus with a timelike dimension can carry a Sine-Gordon Breather, and the soliton and antisoliton of a Sine-Gordon Breather correspond to the quark and antiquark that make up the pion.
    Sine-Gordon Breathers are described by Sidney Coleman in his Erica lecture paper Classical Lumps and their Quantum Descendants (1975), reprinted in his book Aspects of Symmetry (Cambridge 1985), where Coleman writes the Lagrangian for the Sine-Gordon equation as ( Coleman's eq. 4.3 ):
    L = (1 / B^2 ) ( (1/2) (df)^2 + A ( cos( f ) - 1 ) )
    and Coleman says:
    "... We see that, in classical physics, B is an irrelevant parameter: if we can solve the sine-Gordon equation for any non-zero B, we can solve it for any other B. The only effect of changing B is the trivial one of changing the energy and momentum assigned to a given soluition of the equation. This is not true in quantum physics, becasue the relevant object for quantum physics is not L but [ eq. 4.4 ]
    L / hbar = (1 / ( B^2 hbar ) ) ( (1/2) (df)^2 + A ( cos( f ) - 1 ) )
    Another way of saying the same thing is to say that in quantum physics we have one more dimensional constant of nature, Planck's constant, than in classical physics. ... the classical limit, vanishing hbar, is exactly the same as the small-coupling limit, vanishing B ... from now on I will ... set hbar equal to one. ...
    ... the sine-Gordon equation ...[ has ]... an exact periodic solution ...[ eq. 4.59 ]...
    f( x, t ) = ( 4 / B ) arctan( n sin( w t ) / cosh( n w x ) )
    where [ eq. 4.60 ] n = sqrt( A - w^2 ) / w and w ranges from 0 to A. This solution has a simple physical interpretation ... a soliton far to the left ...[ and ]... an antisoliton far to the right. As sin( w t ) increases, the soliton and antisoliton move farther apart from each other. When sin( w t ) passes through one, they turn around and begin to approach one another. As sin( w t ) comes down to zero ... the soliton and antisoliton are on top of each other ... when sin( w t ) becomes negative ... the soliton and antisoliton have passed each other. ...[
    This stereo image of a Sine-Gordon Breather was generated by the program 3D-Filmstrip for Macintosh by Richard Palais. You can see the stereo with red-green or red-cyan 3D glasses. The program is on the WWW at http://rsp.math.brandeis.edu/3D-Filmstrip. The
    Sine-Gordon Breather is confined in space (y-axis) but periodic in time (x-axis), and therefore naturally lives on the (1+1)-dimensional torus with a timelike dimension of the Event Horizon of the pion. ...]
    ... Thus, Eq. (4.59) can be thought of as a soliton and an antisoliton oscillation about their common center-of-mass. For this reason, it is called 'the doublet [ or Breather ] solution'. ... the energy of the doublet ...[ eq. 4.64 ]
    E = 2 M sqrt( 1 - ( w^2 / A ) )
    where [ eq. 4.65 ] M = 8 sqrt( A ) / B^2 is the soliton mass. Note that the mass of the doublet is always less than twice the soliton mass, as we would expect from a soliton-antisoliton pair. ... Dashen, Hasslacher, and Neveu ... Phys. Rev. D10, 4114; 4130; 4138 (1974). A pedagogical review of these methods has been written by R. Rajaraman ( Phys. Reports 21, 227 (1975 ... Phys. Rev. D11, 3424 (1975) ...[ Dashen, Hasslacher, and Neveu found that ]... there is only a single series of bound states, labeled by the integer N ... The energies ... are ... [ eq. 4.82 ]
    E_N = 2 M sin( B'^2 N / 16 )
    where N = 0, 1, 2, ... < 8 pi / B'^2, the renormalised coupling being B'^2 = B^2 / ( 1 - ( B^2 / 8 pi ) ); B^2 is related to the massive Thirring model coupling g by Coleman's eq. ( 5.14 ), B^2 / ( 4 pi ) = 1 / ( 1 + g / pi ). ...
    ...[eq. 2.13b ] E = 8 sqrt(A) / B^2 ...[ is the ]... energy of the lump ... of sine-Gordon theory ... frequently called 'soliton...' in the literature ... [ Zeroth-order is the classical case, or classical limit. ] ...
    ... Coherent-state variation always gives the same result as the ... Zeroth-order weak coupling expansion ... .
    The ... First-order weak-coupling expansion ... explicit formula ... is ( 8 / B^2 ) - ( 1 / pi ). ...".
    Note that, using the VoDou Physics constituent mass of the Up and Down quarks and antiquarks, about 312.75 MeV, as the soliton and antisoliton masses, and setting B^2 = pi and using the DHN formula, the mass of the charged pion is calculated to be ( 312.75 / 2.25 ) MeV = 139 MeV, which is in pretty good agreement with the experimental value of about 139.57 MeV. Why is the value B^2 = pi ( or, using Coleman's eq. ( 5.14 ), the Thirring coupling constant g = 3 pi ) the special value that gives the pion mass?
    Because B^2 = pi is where the First-order weak coupling expansion substantially coincides with the ( probably exact ) DHN formula. In other words,
    The physical quark - antiquark pion lives where the first-order weak coupling expansion is exact.
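
    [As a numerical check - my addition, not part of Smith's text - the DHN formula can be evaluated directly. A minimal Python sketch, assuming only the values quoted above (M = 312.75 MeV and B^2 = pi):

import math

M = 312.75                 # constituent quark (soliton) mass in MeV, as quoted above
B2 = math.pi               # the special coupling value B^2 = pi

# Coleman's renormalised coupling: B'^2 = B^2 / (1 - B^2/(8 pi)) = 8 pi / 7
B2r = B2 / (1.0 - B2 / (8.0 * math.pi))

# DHN bound-state formula, lowest level N = 1: E_N = 2 M sin(B'^2 N / 16)
E1 = 2.0 * M * math.sin(B2r / 16.0)
print("DHN N=1 mass: %.1f MeV" % E1)          # ~139.2 MeV (experiment: 139.57 MeV)

# First-order weak-coupling factor (8 / B^2) - (1 / pi) = 7/pi ~ 2.23
factor = 8.0 / B2 - 1.0 / math.pi
print("M / factor: %.1f MeV" % (M / factor))  # ~140.4 MeV; the rounded 2.25 above gives 139 MeV

    Both routes land within about 1 MeV of the experimental charged pion mass, which is the coincidence Smith is pointing at.]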
    Near the end of his article, Coleman expressed "Some opinions":
    "... This has been a long series of physics lectures with no reference whatsoever to experiment.
    This is embarrassing.
    ... Is there any chance that the lump will be more than a theoretical toy in our field? I can think of two possibilities.
    One is that there will appear a theory of strong-interaction dynamics in which hadrons are thought of as lumps, or, ... as systems of quarks bound into lumps. ... I am pessimistic about the success of such a theory. ... However, I stand ready to be converted in a moment by a convincing computation.
    The other possibility is that a lump will appear in a realistic theory ... of weak and electromagnetic interactions ... the theory would have to imbed the U(1)xSU(2) group ... in a larger group without U(1) factors ... it would be a magnetic monopole. ...".
    This description of the hadronic pion as a quark - antiquark system governed by the sine-Gordon - massive Thirring model should dispel Coleman's pessimism about his first stated possibility and relieve his embarrassment about lack of contact with experiment.
    As to his second stated possibility, very massive monopoles related to SU(5) GUT are still within the realm of possible future experimental discoveries.
    Further material about the sine-Gordon doublet Breather and the massive Thirring equation can be found in the book Solitons and Instantons (North-Holland 1982,1987) by R. Rajaraman, who writes:
    "... the doublet or breather solutions ... can be used as input into the WKB method. ... the system is ... equivalent to the massive Thirring model, with the SG soliton state identifiable as a fermion. ... Mass of the quantum soliton ... will consist of a classical term followed by quantum corrections. The energy of the classical soliton ... is ... [ eq. 7.3 ]
    E_cl[f_sol] = 8 m^3 / L The quantum corrections ... to the 'soliton mass' ... is finite as the momentum cut-off goes to infinity and equals ( - m / pi ). Hence the quantum soliton's mass is [ eq. 7.10 ]
    M_sol =( 8 m^3 / L ) - ( m / pi ) +O(L). The mass of the quantum antisoliton will be, by ... symmetry, the same as M_sol. ... The doublet solutions ... may be quantised by the WKB method. ... we see that the coupling
    constant ( L / m^2 ) has been replaced by a 'renormalised' coupling constant G ... [ eq. 7.24 ] G = ( L / m^2 ) / ( 1 - ( L / 8 pi m^2 )) ... as a result of quantum corrections. ... the same thing had happened to the soliton mass in eq.
    ( 7.10 ). To leading order, we can write [ eq. 7.25 ] M_sol = ( 8 m^3 / L ) - ( m / pi ) = 8 m / G ... The doublet masses ... bound-state energy levels ... E = M_N, where ... [ eq. 7.28 ] M_N = ( 16 m / G ) sin( N G / 16 ) ; N = 1, 2, ... <> 8 pi / G . ... The
    classical solutions ... bear the same relation to the bound-state wavefunctionals ... that Bohr orbits bear to hydrogen atom wavefunctions. ... Coleman ... show[ed] explicitly ... the SG theory equivalent to the charge-zero sector of the MT
    model, provided ... L / 4 pi m^2 = 1 / ( 1 + g / pi )...[ where in Coleman's work set out above such as his eq. ( 5.14 ) , B^2 = L / m^2 ]...Coleman ... resurrected Skyrme's conjecture that the quantum soliton of the SG model may be
    identified with the fermion of the MT model. ... ".
    WHAT ABOUT THE NEUTRAL PION?
    The quark content of the charged pion is u_d or d_u , both of which are consistent with the sine-Gordon picture.
    Experimentally, its mass is 139.57 MeV.
    The neutral pion has quark content (u_u + d_d)/sqrt(2) with two components, somewhat different from the sine-Gordon picture, and a mass of 134.96 MeV.
    The effective constituent mass of a down valence quark increases (by swapping places with a strange sea quark) by about
    DcMdquark = (Ms - Md) (Md/Ms)^2 a_w V_12 = 312 x 0.25 x 0.253 x 0.22 MeV = 4.3 MeV.
    Similarly, the up quark color force mass increase is about
    DcMuquark = (Mc - Mu) (Mu/Mc)^2 a_w V_12 = 1777 x 0.022 x 0.253 x 0.22 MeV = 2.2 MeV.
    The color force increase for the charged pion is DcMpion± = 6.5 MeV.
    Since the mass Mpion± = 139.57 MeV is calculated from a color force sine-Gordon soliton state, the mass 139.57 MeV already takes DcMpion± into account.
    For pion0 = (u_u + d_d)/sqrt(2), the d and _d of the d_d pair do not swap places with strange sea quarks very often, because it is energetically preferential for them both to become a u_u pair. Therefore, from the point of view of calculating DcMpion0, the pion0 should be considered to be only u_u, and DcMpion0 = 2.2 + 2.2 = 4.4 MeV.
    If, as in the nucleon, DeM(pion0-pion±) = -1 MeV, the theoretical estimate is
    DM(pion0-pion±) = DcM(pion0-pion±) + DeM(pion0-pion±) = 4.4 - 6.5 - 1 = -3.1 MeV, roughly consistent with the experimental value of -4.6 MeV.
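
    [Again my addition, not Smith's: the arithmetic above can be checked in a few lines of Python, the input masses and the a_w and V_12 factors being simply the numbers quoted in the text:

# Down valence quark colour-force mass shift: (Ms - Md)(Md/Ms)^2 a_w V_12
d_shift = 312 * 0.25 * 0.253 * 0.22      # ~4.3 MeV
# Up valence quark colour-force mass shift: (Mc - Mu)(Mu/Mc)^2 a_w V_12
u_shift = 1777 * 0.022 * 0.253 * 0.22    # ~2.2 MeV

charged = d_shift + u_shift              # ~6.5 MeV for the charged pion
neutral = 2 * u_shift                    # pion0 treated as u_u only: ~4.4 MeV

# Mass-difference estimate, including the assumed -1 MeV electromagnetic term:
dM = neutral - charged - 1.0
print(round(d_shift, 1), round(u_shift, 1), round(dM, 1))   # 4.3 2.2 -3.2

    The unrounded figures give -3.2 MeV rather than the -3.1 MeV obtained from the rounded values; either way it is roughly consistent with the experimental -4.6 MeV.]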

    Friday, July 21, 2006

    Quantum field theory

    'Quantum Yang-Mills theory is now the foundation of most of elementary particle theory, and its predictions have been tested at many experimental laboratories, but its mathematical foundation is still unclear. The successful use of Yang-Mills theory to describe the strong interactions of elementary particles depends on a subtle quantum mechanical property called the "mass gap:" the quantum particles have positive masses, even though the classical waves travel at the speed of light.' - http://www.claymath.org/millennium/Yang-Mills_Theory/

    Is there charge in empty space? QFT is not very lucid on this. Whereas Dirac said the vacuum is full of virtual charge, and used this to predict antimatter, which was then detected by Anderson in 1932 (resulting in a Nobel Prize for Dirac), there is the issue that a vacuum completely full of charge would polarize around real charge, cancelling it completely.

    Feynman, Schwinger, and Tomonaga showed that the vacuum is not full of free charge or it would cancel out real charges by becoming totally polarized. Instead, there is a limit or cutoff to the amount of polarization, at a collision energy which corresponds to a very small distance from the middle of the electron. This comes from the need to renormalize the charge of the electron, in order to force the QFT to predict the electron's magnetic moment and Lamb shift accurately.

    So free (polarizable) charge pairs can only exist close to the middle of the electron, where the electromagnetic field is strong enough, presumably, to break the bonds of the vacuum sea of charges, and free the charges so that they can then become polarized and cause the right amount of shielding for QFT to work. So from causality + QFT, the vacuum doesn't contain any free charge but must be full of bound charge which is broken up into free charge in the strong fields near a fundamental particle (real charge core).

    There are virtual particles polarized in the strong field near real charge cores, and these must come from some mechanism. The strong field breaks up the orderly vacuum structure for a small distance, allowing it to become polarized which shields some of the real particle charge.

    The classical 'solar system' model falls because it is fake. The planets don't go around the sun in true ellipses, because they perturb each other. Because all the planets are of small mass compared to the sun, the solar system looks approximately deterministic, but it isn't. You can't solve it analytically (this is the '3 body problem'), so you have to use approximations which get less and less accurate. If you want to know where the earth will be in its orbit in 100 million years, the inaccuracies are so great that all you can do is to specify the probability of finding the earth at any given distance from the sun. It is not deterministic, it is statistical on long time scales. This 3+ body problem is called Poincare chaos and has been known for over 100 years. 'Classical theory', not quantum theory, is horse*hit.

    Newton's is just an approximate theory. It can't calculate the real world deterministically to absolute accuracy, even given absolutely known initial conditions. The electron is a far worse case because (1) the electron has a charge equal to that of the proton it is orbiting, not a charge far smaller (mass represents gravitational 'charge' in any quantum gravity), and (2) the size scale of the range of virtual particles in the vacuum is big enough to knock and deflect the electron randomly as the electron moves in the vacuum (Heisenberg's law says the product of the energy of virtual particles and the time they exist is h over 2 Pi or whatever; hence by rearranging and putting in the energy for an electron-positron pair, you get the time they exist, ~10^-21 second; assuming as an exaggeration that they go at speed c, this gives you a maximum range of virtual particles in the vacuum of ct ~ 3 x 10^-13 metre).
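
    To make those numbers explicit, here is a minimal Python sketch of the estimate (standard constants; the factors of 2.Pi only matter at the order-of-magnitude level):

hbar = 1.055e-34      # J s
c = 3.0e8             # m/s
MeV = 1.602e-13       # J

E_pair = 2 * 0.511 * MeV            # energy needed for an electron-positron pair
t = hbar / E_pair                   # lifetime allowed by the uncertainty principle
print("lifetime: %.1e s" % t)       # ~6e-22 s, i.e. of order 10^-21 second
print("range ct: %.1e m" % (c * t)) # ~2e-13 m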

    In addition, you have the 3+ body problem of the multiple charges. Even in the simplest case, a hydrogen atom, you have a proton, an electron, and a third particle which you use as a probe to find out where the electron appears to be. This gives you 3 bodies, hence Poincare chaos. In practical situations, you have many more electrons. I give sources and references for this stuff on my site at locations like http://feynman137.tripod.com/#b.

    An electron has some chance of being found at a great range of locations, and the classical picture at best just indicates where it is most probable to be found. 'Most probable' is misleading if a large number of locations have equal or nearly equal probability, producing the electron 'cloud'. Quantum mechanics doesn't even estimate the statistical probability of finding an electron at a given location directly when you have anything more complex than a hydrogen atom in an empty universe. At least in that case, QM allows you to predict probabilities of finding the electron at different locations. All multi-electron atoms are beyond quantum mechanics and can only be solved by introducing approximations and simplifying assumptions which make the situation into a 'corrected' version of the hydrogen atom. You can't trust any claims made by quantum mechanics gurus who wave their arms around; Feynman was right to denounce it. Feynman says, in his book Character of Physical Law, pages 57-8 (1964 Cornell lectures, broadcast on BBC2 in 1965):

    'It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.'

    Feynman said, in Character of Physical Law, pp. 171-3:

    'The inexperienced, and crackpots, and people like that, make guesses that are simple, but [with extensive knowledge of the actual facts rather than speculative theories of physics] you can immediately see that they are wrong, so that does not count. ... There will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a territory.'

    It isn't electrostatic forces which control chemistry, it is the 'Pauli exclusion principle'. The key assertion of the exclusion principle for physics is that in any adjacent pair of electrons, the spins are opposite. Since spins are linked to magnetic moments in QED, this suggests that the periodic table and all bonding arrangements are based on magnetic repulsion/attraction in addition to the electric Coulomb forces. The 'Pauli exclusion principle' just says that the four quantum numbers of any electron (describing shell, its shape, etc., as well as spin) are a unique set for each electron in an atom. So far as shell number goes, this is just stating that the electron is not in two shells at once, which is just a statement that the electron obeys classical law, not indeterminacy horse*hit (I'm not in two rooms at once). The Pauli exclusion principle is insightful in saying that adjacent electrons always have opposite spins: this is why most materials are non-magnetic seen from a large distance, the magnetic moments of the electrons cancelling one another out. But at short range, there are magnetic forces which are important for determining atomic structure and chemical bonding. These are always referred to as the 'Pauli exclusion principle' instead of the more candid term 'magnetic force due to spin'.

    See: http://hyperphysics.phy-astr.gsu.edu/hbase/pauli.html#c2 (which is a horse*hit description), and http://hyperphysics.phy-astr.gsu.edu/hbase/molecule/nacl.html (which at least has a graph).

    Consider H-H bonding. Why should two hydrogen atoms bond together to form H_2? The H_2 molecule is in a lower energy state than 2H. The two shared electrons pair with opposite spins (one up, one down). If you drop magnets in a box and shake it up, they end up side by side, north pole of one beside south pole of the other, and vice versa. This is the lowest energy state, which occurs naturally. So all of this quantum stuff is found to have not a classical, but a mechanistic basis.

    The accepted facts of the Standard Model are based on Yang Mills (gauge boson exchange).

    Forces are caused because the radiation has momentum, like all bosons such as photons of light.

    It allows quantitative predictions to be made, such as the strengths of the gravity and electromagnetism forces, and particle masses.

    If it does raise more questions, that is science. After all, any step forward does that. The only way to stop more questions is to accept rubbish like extra dimensional string theory or Ptolemy's epicycles, or the lambda-CDM model of cosmology. Real science does raise questions, one of which is whether it is right. Message to the stringers: 'If you can predict the quantitative strength of gravity or electromagnetism better, go ahead, but until then, stop making background noise to suppress the proved verifiable facts.'

    Positive and negative charges may emit gauge bosons which spin in opposite directions relative to the direction of propagation. The gauge bosons will have a regular, normal (integer) spin (unlike the half-integer spin of electrons, which is more complex, like a Mobius strip being spun), and maybe all that distinguishes those of different charges is the direction of spin relative to the direction they go, i.e., whether it is clockwise or anticlockwise. If you picture the gauge boson as a corkscrew, two similar corkscrews going in opposite directions (from similar charges) will oppose each other's spin and interfere. But two corkscrews with opposite spin going in opposite directions will add up without interference.

    Move a fixed-area shield twice as far from you, and the fraction of the total spherical area around you which it shields falls by the inverse square of the distance! The absolute force is equal to the total inward force (which is ~10^43 N for gravity) multiplied by the fraction of the total spherical area around you which is shielded. Hence, for an inward force of 10^43 N, if 1% of the spherical area around you is shielded by a charge, the net force is simply 10^41 N.
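
    A minimal numerical sketch of this shielding geometry (the 10^43 N figure is simply the inward force used throughout this page):

import math

F_inward = 1e43             # total inward gauge boson force in newtons
shielded_fraction = 0.01    # suppose 1% of the spherical area around you is shielded
print("%.0e N" % (F_inward * shielded_fraction))   # 1e+41 N, as stated

# The shielded fraction itself falls off as the inverse square of distance r:
def fraction(shield_area, r):
    return shield_area / (4 * math.pi * r ** 2)

print(fraction(1.0, 2.0) / fraction(1.0, 1.0))     # 0.25: doubling r quarters the fraction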

    If you want to know where these gauge bosons come from, they are emitted by accelerating charge (spinning charges have centripetal acceleration and so radiate energy; they don't lose net energy by doing this, because all charges do it, so there is equilibrium, the universe being old).

    It is simply an experimental fact that radiation is emitted due to acceleration. Centripetal acceleration due to the spin of a charge loop emits this radiation. My page http://feynman137.tripod.com/ and my blog http://electrogravity.blogspot.com/ present the evidence. For example, the gauge bosons travel in closed loops between masses, and the gauge boson energy is conserved, because this is a characteristic of the loop quantum gravity derivation of general relativity (without a metric) from quantum field theory. The contraction caused by such gauge boson radiation pressure is like running into a storm of bullets. They hit you harder (with greater momentum) if you run into them head on, since the momentum you receive depends on your motion. This causes the contraction, just like the force from the air pressure on the front of a car moving at high speed. The main difference between air and the gauge bosons is that the gauge bosons penetrate to fundamental masses, and are not stopped by the outer layer of atoms, and they don't carry away energy by being speeded up as such, or by transferring energy into heat, because there is no mechanism by which this could occur.

    If you see my old (superseded) paper on CERN Doc Server, I started off with the hydrodynamic analogy to calculate gravity first: http://cdsweb.cern.ch/search.py?recid=706468&ln=en (this paper is now obsolete and cannot be updated as CERN has closed all updates except via carbon copies from arXiv.org, which censored me in 2002). The gauge boson mechanism is mathematically equivalent, as proved by the side by side calculations on my home page at the section: http://feynman137.tripod.com/#h

    Two similar charges are exchanging radiation between one another, causing the recoil apart! The exchange along the line between them is only prevented for neutral masses (gravity) and dissimilar charges (electric attraction), where the exchange instead continues with the surrounding universe, which causes the dissimilar charges or masses to be pushed together from outside the line of shielding between them.

    Two masses or two opposite charges are shielding one another along the lines joining them (the lines are actually double cones, because each charge has a fixed size).

    Two similar masses are exchanging radiation between one another, causing predominant repulsion.

    Redshift occurs in the expansion of the universe, and the mechanism by which it occurs for gauge bosons need not be considered different from that by which it occurs for visible light. See http://www.astro.ucla.edu/~wright/tiredlit.htm for a disproof of all "alternatives" to cosmological redshift (redshift is easily demonstrated to be a factual mechanism, by contrast with pseudo-scientific belief-system speculation on "tired light", which has no evidence behind it whatsoever).

    I've said that redshift may be just a slowing down of light speed, in which case the amount of redshift is determined by the difference in speed between us and the object emitting the light. All redshift means is that we see fewer cycles per second than were emitted. The fact that light is transverse rather than longitudinal like sound does not alter the fact that if it does slow down, we will see fewer cycles per second (i.e., we will see it redshifted).

    If the explanation for Michelson-Morley is as Sir Eddington says (he stated that the contraction of the instrument exactly offsets the variation in light speed, by giving the light a shorter path to travel where it is slowed down), then the CBR emitted at 300,000 years after the BB comes to us at 6 km/s. Light/radiation emitted at time zero will come to us at a speed of 0 km/s. Light emitted at half the age of the universe will come to us at 50% of c. The percentage is not a fixed fraction, but is proportional to the age of the material. All we're considering is the possibility that redshift is caused by light coming to us more slowly. Hence light from the sun comes to us at c, light from galaxies at 1,500,000,000 light years comes to us at 90% of c (because they are receding from us at 10% of c), etc.
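
    To make the speed figures explicit, a minimal sketch of the proportionality being considered (nothing more than arithmetic):

c = 3.0e5                 # km/s
age_universe = 15e9       # years, the figure used on this page

def received_speed(source_age_at_emission_years):
    # received light speed taken as proportional to the age of the emitting material
    return c * source_age_at_emission_years / age_universe

print(received_speed(3.0e5))    # CBR emitted 300,000 years after BB: 6 km/s
print(received_speed(7.5e9))    # emitted at half the age of the universe: c/2
print(received_speed(13.5e9))   # source receding at 10% of c: 90% of c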

    This argument is irrelevant for the gravity-electromagnetism mechanism. The figure of 10^80 hydrogen atoms in the universe is obtained by multiplying the average material (directly observed) density of the universe by its spherical volume out to the distance light travels in the age of the universe (time since BB).
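
    A rough numerical sketch of that estimate, taking for illustration the critical density of a 15,000 million year old universe as the input (an assumption; the directly observed density enters the calculation in exactly the same way):

import math

G = 6.67e-11                    # m^3 / (kg s^2)
c = 3.0e8                       # m/s
T = 15e9 * 3.156e7              # age of universe, seconds
H = 1.0 / T                     # Hubble parameter in SI units, 1/s

rho = 3 * H ** 2 / (8 * math.pi * G)   # critical density, ~8e-27 kg/m^3
R = c * T                              # distance light travels in the age of the universe
V = (4.0 / 3.0) * math.pi * R ** 3     # spherical volume out to that distance
m_H = 1.67e-27                         # mass of a hydrogen atom, kg

print("%.0e hydrogen atoms" % (rho * V / m_H))   # ~6e+79, i.e. of order 10^80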

    This figure has always been the same. Back in the 1930s it was calculated by Eddington who used Hubble's grossly exaggerated value of the Hubble constant (Hubble thought it was 540 km/s/megaparsec - which is 6-8 times too high - because he under-estimated the distances to stars by confusing two populations of Cepheid variables, which he used as measuring sticks for relative distances in multiplying up absolute distances derived from accurate parallax measurements locally).

    Eddington got the right answer because the two massive errors in his calculation largely cancelled each other out. He underestimated the size of the universe, because the excessive Hubble constant implied too low an age for the universe (if the Hubble constant H is expressed in SI units it has units of 1/seconds, and 1/H is the age of the universe ignoring gravitational deceleration, whereas 2/(3H) is the age of the universe assuming a critical density between collapse and infinite expansion, on the false assumption that gravity is independent of the BB rather than the result of a mechanism based on the BB), but he overestimated the density of the universe for the same reason. Hence the mass he calculated by multiplying the two numbers (one a gross overestimate, and one a gross underestimate) happened to turn out fairly accurate.

    Because the false (high) figure of the Hubble constant used in the 1930s implies an age of the universe 6-8 times less than today's figure (2,000 million years in the 1930s, compared to a modern figure around 15,000 million years), the apparent measured density of the universe was over-estimated by a massive factor in the 1930s.

    Because the masses of galaxies were not known accurately then, the density estimates were known to have large error margins, but the over-estimate made the apparent density of the universe agree with the critical density of general relativity. Later data takes away the exaggerated (high) density value, and so there is a disagreement, which is filled by the ad hoc dark matter hypothesis.

    The gravity mechanism dispenses with this by showing that the true density, when general relativity is made a quantum theory of gravity, is not the critical density but is smaller by a factor of (e^3)/2, which is a factor of just over 10. This brings the observed density of the universe today into alignment with theory. It also gets rid of dark energy, because the gravity mechanism doesn't cause gravitational retardation of the expansion. The postulate of dark energy comes from a small positive cosmological constant added to a general relativity cosmology with critical density (i.e., the Lambda-CDM model) to cancel out gravitational retardation, by causing an acceleration which cancels out the postulated long range gravitational deceleration which is not observed in supernovae redshifts. The gravity mechanism gets rid of gravitational retardation at long ranges by physical mechanism (there are several equivalent ways to formulate this argument, the most brief and least rigorous being the simple statement that gauge bosons are redshifted like light over vast distances, so gravity doesn't cause distant supernovae to slow down). Hence it predicted the correct supernovae recession rates via the Oct 96 issue of Electronics World, two whole years before Perlmutter's experimental results confirmed it. There is no ad hoc dark energy, because that isn't needed to counteract gravitational deceleration over vast distances, the latter being a falsehood due to ignoring the details of the quantum gravity mechanism in general relativity.
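
    For reference, the (e^3)/2 factor is easy to evaluate:

import math
print(math.exp(3) / 2)   # ~10.04: the true density comes out about a tenth of the critical density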

    SCIENTIFIC METHOD:

    OBSERVATION -> DATA -> GRAPH, TABLE, MATRIX OF RESULTS -> EMPIRICAL EQUATIONS -> UNIFYING LAW -> MECHANISM.

    In long discussions with Catt, he insisted that he did not comprehend the above sequence, despite my explanations of chemistry and thermodynamics as examples of the sequence. Quantum field theory and general relativity are at the stage of unifying laws, not mechanisms. Catt claims that Dirac's equation is like phlogiston or caloric, which I explained is a falsehood because phlogiston and caloric were false attempts to jump straight to a mechanism without a mathematical model which was successful. Catt tries to claim Ockham's Razor gets rid of all empirical equations or data, which is false, because those equations are simply awaiting unification and finally mechanism to explain them and at the same time predict other stuff.

    The spectacular test was when Nature published Perlmutter's paper "A Supernova at Half the Age of the Universe" (or whatever it was called) in 1998. It fell on the curve predicted by my paper. I asked Brian Josephson to co-author the 2003 Electronics World paper to get people to listen. The mainstream always ignores and suppresses, or obfuscates to reverse, such findings. The 1971 experiment with atomic clocks flown around the world in airliners proved absolute motion, although see below how it was cleverly obfuscated to imply the exact opposite, like the M-M experiment and Aspect's entanglement experiment (the EPR paradox in the Copenhagen Interpretation was simply ignored as a disproof of the latter, and taken to be proof of many universes within Bohr's mindset...):

    From my page on the 1971 experiment http://feynman137.tripod.com/:

    Professor Paul Davies very indirectly and obscurely (accidentally?) defends Einstein's 1920 'ether and relativity' lecture. In 1995, physicist Professor Paul Davies - who won the Templeton Prize for religion (I think it was $1,000,000) - wrote on pp. 54-57 of his book About Time: 'Whenever I read dissenting views of time, I cannot help thinking of Herbert Dingle... who wrote ... Relativity for All, published in 1922. He became Professor ... at University College London... In his later years, Dingle began seriously to doubt Einstein's concept ... Dingle ... wrote papers for journals pointing out Einstein's errors and had them rejected ... In October 1971, J.C. Hafele [used atomic clocks to defend Einstein] ... You can't get much closer to Dingle's 'everyday' language than that.'

    Now, let's check out J. C. Hafele. Hafele is against crackpot science: he writes in Science vol. 177 (1972), pp. 166-8, that he uses 'G. Builder (1958)' for his analysis of the atomic clocks. G. Builder (1958) is an article called 'ETHER AND RELATIVITY' in the Australian Journal of Physics, v11, 1958, p. 279, which states: '... we conclude that the relative retardation of clocks... does indeed compel us to recognise the CAUSAL SIGNIFICANCE OF ABSOLUTE velocities.'

    Einstein himself slipped up in one paper when he wrote that a clock at the earth's equator, because of the earth's spin, runs more slowly than one at the pole. One argument (see http://www.physicstoday.org/vol-58/iss-9/p12.html) is that the reason why special relativity fails here is that the gravitational 'blueshift' given by general relativity cancels out the time dilation: 'The gravitational blueshift of a clock on the equator precisely cancels the time dilation associated with its motion.' It is true that general relativity is involved here; see the proof below of the general relativity gravity effect, obtained from the Lorentz transformation using Einstein's equivalence principle. The problem is that there are absolute velocities, and special relativity by itself gives the wrong answers! You need general relativity, which introduces absolute motion, because it deals with accelerations such as rotation, and observers can detect rotation as a net force, even in a sealed box that is rotating. Rotation is not subject to the principle of relativity, which does not apply to accelerations. The origins of other 'Einstein innovations' are also confused:

    http://www.guardian.co.uk/print/0,3858,3928978-103681,00.html, http://www.italian-american.com/depretreview.htm, http://home.comcast.net/~xtxinc/prioritymyth.htm.

    'Einstein simply postulates what we have deduced. I have not availed myself of his substitutions, only because the formulae are rather complicated and look somewhat artificial.' - Hendrik A. Lorentz (discoverer of time-dilation in 1893, and re-discoverer of George FitzGerald's 1889 formula for contraction in the direction of motion due to aether).

    As Eddington said, light speed is absolute but undetectable in the Michelson-Morley experiment owing to the fact the instrument contracts in the direction of motion, allowing the slower light beam to cross a smaller distance and thus catch up:

    'The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for - the delay of one of the light waves - is exactly compensated by an automatic contraction of the matter forming the apparatus... The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.'

    - Professor A.S. Eddington (who confirmed Einstein's general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

    Experimental confirmations which prove the mechanism: In 1996 I proved - contrary to the mainstream general relativity model of cosmology - that by the mechanism of gravity the universe is not decelerating. The mainstream model has the universe expanding ever more slowly because of the effect of gravity. This was confirmed experimentally in 1998 by Perlmutter, published in Nature (without mentioning my work, which Nature's editor, Philip Campbell, had said in an autographed letter to me on 25/26 Nov 96 that he was "not able" to publish; a day earlier/later his physical sciences editor said the same thing about a review paper proposal I sent on a related mechanism topic). In addition to this, the mechanism predicts the strength of gravity and electromagnetism, the strong nuclear force, and particle masses correctly, and is further checkable as the astronomical data used (like the density of the universe and the Hubble parameter) become more accurately known.

    However, two mainstream theories (string theory for forces, and the Lambda-CDM cosmology), which are entirely ad hoc and non-checkable, are used to suppress my work. Verification just brings you a stream of abuse from the mainstream; nobody sits up and listens. Nature wouldn't publish my prediction in 1996, nor would they publish a paper showing how the experimental results confirmed the prediction in 1998. If you look at what "string theorists" and mainstream Lambda-CDM cosmologists are saying, they are all bitter people. Ask them about physics, and they reply by saying how impossible it is for anyone to understand them if they aren't a genius.

    Professor Hawking writes in an essay that two crackpots sent him mutually incompatible ideas, proving that they are both totally wrong. You can immediately see that just because two theories are incompatible does not prove that both are wrong. One can be right and the other wrong, or both can be partly right and partly wrong, so that they are incompatible. Hawking is then a charlatan, because he and others say general relativity is incompatible with quantum mechanics, but don't claim that this incompatibility proves both are 100% wrong. The submessage of the mainstream is clear: 'Everyone with alternatives is wrong, and I'm so clever I know everyone else is wrong without having to check or verify it.' This is not really a scientific attitude, unless science is now purely a matter of snobbery. For those of us who entered science to escape snobbery and that kind of content-less political time-wasting, that is kind of ironic.

    It reminds you of the story of Faraday being forced to act as servant to Davy's wife during their European scientific tour. There was nothing scientific about that. At the end of the day it is fairly obvious that scientists and editors are unprofessional if they are being bribed to hype extra dimensional string theory, whether the bribes are paid by free market book publishers, mad eccentrics, or by tax payers via the government quangos.

    On relativity and absolute motion: the cosmic background radiation is incredibly uniform in all directions except for effects due to the earth's absolute motion. This "new aether drift" was published in Scientific American in 1977 or thereabouts. There are also semi-mainstream objections to special relativity like http://arxiv.org/abs/gr-qc/0406023 but I think two things are needed to get rid of special relativity: (1) a full causal mechanism for how contraction, time-dilation etc occur due to effects of the spacetime fabric on matter and (2) a final theory which predicts everything. You aren't going to get rid of "Einstein" (although he was anti-special relativity after 1915, and gave the pro-aether speech in 1920 at Leyden) until you have a final theory which answers all physics questions.

    The main issue is getting a causal electroweak symmetry breaking mechanism, but I don't think that is going to be too difficult. The really fun part will be the run-in with orthodoxy. By including general relativity and the Standard Model as limiting approximations of the final theory, plus including a summary of all the maths of quantum field theory and tensor analysis, the mainstream orthodoxy will find it ever harder to make meaningful sneers. They have a limited list of tactics, and if they call you a crackpot you can point out that the only facts you are using are mainstream ones; doing that lucidly (i.e., briefly), in a single short sentence, really is a proper defence.

    Responding to an email from R. P. Feynman's erstwhile co-author, Prof. Jonathan Post (I might as well reproduce it here):

    From: Nigel Cook
    To: jonathan post ; David Tombe ; epola@tiscali.co.uk ; imontgomery@atlasmeasurement.com.au ; Monitek@aol.com
    Cc: marinsek@aon.at ; pwhan@atlasmeasurement.com.au ; graham@megaquebec.net ; andrewpost@gmail.com ; george.hockney@jpl.nasa.gov ; tom@tomspace.com
    Sent: Sunday, July 23, 2006 8:53 PM
    Subject: Re: Heat wave, wonderings, Re: Sodium Chloride and Magnetic Spin Moment

    Dear Jonathan,

    Vacuum polarization is the mechanism behind renormalization, which is vital in calculating the correct magnetic moment of the electron. The magnetic moment predicted by the Dirac equation is exactly 1 Bohr magneton; by 1947 it was known that this value is too low by around 0.12%.

    QED by Schwinger in 1948 increased it to 1 + (alpha)/(twice Pi) = 1.00116 Bohr magnetons, the added (alpha)/(twice Pi) factor being the first additional Feynman coupling diagram, in which the photon (or whatever the magnetic mediator is) from the electron interacts with the virtual particles in between, while mediating magnetism from the electron core to the observer's instrument. You have to remember that the polarized vacuum of virtual charges affects the electron's (Dirac theory) characteristics. This comes from the shell game: the virtual particles in successive shells add very slightly to the magnetic field from the real electron core.
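
    Schwinger's first-order correction is a one-line check (alpha = 1/137.036 is the usual fine structure constant):

import math

alpha = 1 / 137.036
mu = 1 + alpha / (2 * math.pi)    # electron magnetic moment in Bohr magnetons, to first order
print("%.5f" % mu)                # 1.00116, versus the Dirac value of exactly 1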

    Renormalization comes about physically in this calculation because alpha is the dimensionless 1/137... factor. Vacuum polarization around the real electron core means that virtual positrons are closer to the core than virtual electrons, so there is a net electric field arrow from the polarized charge which points the opposite way to that from the core, cancelling most of the electron's charge.

    The electron charge we observe from large distances is -e. But when we get closer to the electron core, by hitting electrons together at 92 GeV so that they approach closely, the charge rises by 7%. I've got a calculation which shows that if you could eliminate the polarized shielding shell around the core, you would see the charge rise by a factor of 1/alpha, or a factor of 137...
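
    The 7% figure can be read off the measured running of the coupling; a sketch (the value alpha ~ 1/128 near the Z mass is the commonly quoted measurement, taken here as an assumed input):

alpha_low = 1 / 137.036    # coupling measured at low energy / large distances
alpha_92 = 1 / 128.0       # commonly quoted measured value near 92 GeV (assumed)
print("%.0f%%" % (100 * (alpha_92 / alpha_low - 1)))   # ~7% rise in effective charge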

    Heisenberg's uncertainty principle says [we are measuring the uncertainty in distance in one direction only, radial distance from a centre; for two directions, like up or down a line, the uncertainty is only half this, i.e., it equals h/(4.Pi) instead of h/(2.Pi)]:
    pd = h/(2.Pi)
    where p is uncertainty in momentum, d is uncertainty in distance. This comes from his imaginary gamma ray microscope, and is usually written as a minimum (instead of with "=" as above), since there will be other sources of uncertainty in the measurement process.
    For light wave momentum p = mc, so pd = (mc)(ct) = Et, where E is uncertainty in energy (E = mc^2), and t is uncertainty in time.

    Hence, Et = h/(2.Pi)
    t = h/(2.Pi.E)
    d/c = h/(2.Pi.E)
    d = hc/(2.Pi.E)
    This result is used to show that an 80 GeV W or Z gauge boson will have a range of the order of 10^-18 to 10^-17 m. So it's OK.
    Now, E = Fd implies
    d = hc/(2.Pi.E) = hc/(2.Pi.Fd)
    Hence
    F = hc/(2.Pi.d^2)

    This force between electrons is 1/alpha, or 137.036, times higher than Coulomb's law for unit fundamental charges. Notice that in the last sentence I've suddenly gone from thinking of d as an uncertainty in distance to thinking of it as the actual distance between two charges; but the gauge boson has to go that distance to cause the force anyway. Clearly what's physically happening is that the true force is 137.036 times Coulomb's law, so the real (bare) charge is 137.036 times the observed charge. This is reduced by the correction factor 1/137.036, because most of the charge is screened out by polarised charges in the vacuum around the electron core:
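
    Both numerical claims in this derivation - the 80 GeV range and the factor of 137.036 relative to Coulomb's law - can be checked with standard constants; a minimal sketch:

import math

h = 6.626e-34         # J s
hbar = h / (2 * math.pi)
c = 3.0e8             # m/s
e = 1.602e-19         # C
eps0 = 8.854e-12      # F/m
GeV = 1.602e-10       # J

# Range of an 80 GeV gauge boson, d = hc/(2.Pi.E):
d = hbar * c / (80 * GeV)
print("range: %.1e m" % d)    # ~2.5e-18 m

# Ratio of F = hc/(2.Pi.d^2) to Coulomb's law e^2/(4.Pi.eps0.d^2); the d^2 cancels:
ratio = (hbar * c) / (e ** 2 / (4 * math.pi * eps0))
print("ratio: %.1f" % ratio)  # ~137, i.e. 1/alpha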

    "... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies)." - arxiv hep-th/0510040, p 71.

    Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

    'The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy ... The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

    'Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled 'negative energy sea' the complete theory (hole theory) can no longer be a single-particle theory.

    'The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

    'In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

    'For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the 'crowded' vacuum is to change these to new constants e' and m', which must be identified with the observed charge and mass. ... If these contributions were cut off in any reasonable manner, m' - m and e' - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

    'All this means that the present theory of electrons and fields is not complete. ... The particles ... are treated as 'bare' particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example ... the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger's coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001...'

    Is there charge in empty space? QFT is not very lucid on this. Dirac said the vacuum is full of virtual charge, and used this to predict antimatter, which was then detected by Anderson in 1932 (resulting in a Nobel Prize for Dirac); but there is the issue that a vacuum completely full of polarizable charge would polarize around any real charge, cancelling it completely.

    Feynman, Schwinger, and Tomonaga showed that the vacuum is not full of free charge or it would cancel out real charges by becoming totally polarized. Instead, there is a limit or cutoff to the amount of polarization, at a collision energy which corresponds to a very small distance from the middle of the electron. This comes from the need to renormalize the charge of the electron, in order to force the QFT to predict the electron's magnetic moment and Lamb shift accurately.

    So free (polarizable) charge pairs can only exist close to the middle of the electron, where the electromagnetic field is strong enough, presumably, to break the bonds of the vacuum sea of charges, and free the charges so that they can then become polarized and cause the right amount of shielding for QFT to work. So from causality + QFT, the vacuum doesn't contain any free charge but must be full of bound charge which is broken up into free charge in the strong fields near a fundamental particle (real charge core).

    There are virtual particles polarized in the strong field near real charge cores, and these must come from some mechanism. The strong field breaks up the orderly vacuum structure for a small distance, allowing it to become polarized which shields some of the real particle charge.

    Quantum field theory implies the core of each real long-lived charge is surrounded by two concentric shells of charged vacuum particles: an inner shell with charge opposite to the core, and an outer shell with charge similar to the core. The electric field vector between the two shells points the opposite way to that of the bare core charge, so the latter is shielded. The shielding factor calculated in a previous post on this blog is approximately 137, i.e., 1/alpha.

    The physical reason why quarks have fractional charge can be explained very simply indeed. Electric charges are shielded by the polarized vacuum field they create at short distances. If you hypothetically put three electron charges close together so that they all share the same vacuum polarization cloud, the polarization in that cloud will be three times stronger. Hence, the shielding factor for electric charge will be three times greater. So the electric charge you would theoretically expect to get from each of the three electron-sized charges confined in close proximity is equal to: -1/3e. This is the electric charge of the downquark.
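
    As a trivial arithmetic sketch of that argument (the function is mine and purely illustrative):

        # Heuristic from the text: n like charges sharing one polarization
        # cloud are shielded n times more strongly, so each shows 1/n of
        # the usual observed unit charge.
        def observed_charge_per_core(n_cores, unit_charge=-1.0):
            return unit_charge / n_cores

        print(observed_charge_per_core(3))  # -1/3 e, the downquark charge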

    Renormalization is the failure to give a rigorous reason for the asymptotic limits which the mathematical solution is empirically required to have. The abstract QFT (including not just electron-positron polarization, but the loops of all other charges up to 92 GeV) suggests the electron charge is approximately e[1 + 0.005 ln(A/E)], where A and E are respectively the upper and lower cutoffs, which for a 92 GeV electron-electron collision are A = 92,000 MeV and E = 0.511 MeV.
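
    Plugging in those cutoffs shows the size of the running (a minimal Python sketch of the formula just quoted). The ~6% rise is consistent with the effective coupling growing from 1/137 at low energy to roughly 1/129 at Z-collision energies:

        import math

        A = 92000.0  # upper cutoff, MeV (92 GeV collision energy)
        E = 0.511    # lower cutoff, MeV (electron rest mass-energy)
        factor = 1 + 0.005 * math.log(A / E)
        print(factor)  # ~1.06: the effective charge is ~6% higher at 92 GeV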

    There are some severe problems in quantum field theory pertaining to the discontinuity introduced by the cutoff, which is needed to prevent the electric charge of a single electron from setting off an infinitely extensive polarization of the vacuum of the universe that would entirely cancel the electron's electric charge. This renormalization problem is mentioned in the Dirac and Feynman quotes on http://www.cgoakley.demon.co.uk/qft/. Ultimately these problems stem from the use of statistical equations (the wavefunction in the Schroedinger and Dirac equations doesn't tell you detailed facts, just averaged statistics) to obtain detailed facts, which they are incapable of providing. Quantum mechanics, for example, is compatible with the exponential decay law of radioactivity; in reality, radioactivity decays in steps due to individual decay events, and it is only the averaged overall decay rate which approximates the exponential decay curve.

    Considering the log(E/A) type term: on the one hand, it may be right after all, if some mechanism can be found for the lower cutoff discontinuity. This low energy cutoff implies - if correct - that vacuum polarization only begins at a certain distance from the middle of the electron, where the electromagnetic field has just enough energy to create electron-positron pairs in the vacuum, which get polarized by the field and shield the electron charge. If this is the case, then the vacuum presumably does not contain any free electron-positron pairs beyond that distance from an electron. The electric field of the electron (as mediated by gauge bosons obeying the inverse-square law) is intense enough very near the electron core both to create free pairs from the vacuum and to polarize them.

    If the issue were merely polarizing already-free virtual charges of the vacuum, no real (long lived) electrons would have any measurable electric charge at all, because the vacuum polarization would extend for infinity and would cancel out precisely 100% of the electric field from the electron, instead of the 100(1 - alpha) = 99.27%. There is a hell of a lot of difference between 100% and 99.27% shielding; the first would prevent any long range electric field at all, while the second is the observed shielding which allows a small fraction of the electron's charge to go uncancelled. The task of explaining renormalization is that of explaining why this difference exists.
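
    The two percentages in question are a one-line check:

        alpha = 1 / 137.036
        print(100 * (1 - alpha))  # ~99.27% of the bare charge is shielded
        print(100 * alpha)        # ~0.73% survives as the long-range charge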

    However, the log(E/A) type term is definitely wrong for another reason: the upper energy limit E allows the term to increase toward infinity as distance from the middle of the electron decreases towards zero. Because the vacuum polarization over a finite distance cannot have infinite shielding, this is clearly false. The bare core charge of an electron is not infinite, but ~137e- as demonstrated above.

    If we take the standard QFT electron charge formula to be e(x)/e(x = infinity) ~ 1 + [0.005 ln(A/E)], with E = 0.511 MeV and A the upper cutoff energy which is roughly inversely proportional to distance, we can roughly approximate the way the electron charge is supposed to vary as a function of distance from the middle of the electron.

    For a 0.511 MeV electron-electron head on collision, the distance of closest approach is given by equating the entire kinetic energy of the moving electron to the potential energy of the electric field, (e-)^2 /(4.Pi.permittivity.distance of closest approach).
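
    A minimal Python sketch of that closest-approach estimate (standard constants; the calculation is just T = e^2/(4.Pi.eps0.r) solved for r):

        import math

        e = 1.602176634e-19      # C
        eps0 = 8.8541878128e-12  # F/m
        T = 0.511e6 * e          # 0.511 MeV of kinetic energy, in joules
        r = e**2 / (4 * math.pi * eps0 * T)
        print(r)  # ~2.8e-15 m, the classical electron radius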

    Obviously as energy increases beyond 0.511 MeV, the effective value of the electric charge (e-) increases, because there's less polarized vacuum causing electric field shielding between the two charges; and of course there can be energy losses due to scattering effects (such as gamma ray emission). However, I'll do some calculations. In the meanwhile, just take the standard formula of the type e(x)/e(x = infinity) ~ 1 + [0.005 ln(A/E)]. If as a rough approximation you put 1/(scaled distance) in place of A, you can get a feel for how the charge is supposed to vary with distance. It is very unnatural: normally you get a natural logarithm by rearranging an exponential equation (finding the inverse function), so you would not mathematically expect an exponential equation to usefully approximate a logarithmic one; yet physically you would expect vacuum polarization shielding to have some type of exponential term. It is possible that the detailed dynamics of scattering, in determining the relationship between collision energy and distance from the middle of the particle, make the correct relationship substantially different from the simplistic low energy result (that energy is proportional to 1/distance of closest approach).
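
    To get that 'feel' numerically, here is a rough Python sketch of the suggestion above, putting A proportional to 1/(scaled distance); the scaling choice is mine and purely illustrative:

        import math

        def charge_factor(r_scaled):
            # e(r)/e(infinity) ~ 1 + 0.005 ln(1/r), with r in units of the
            # (hypothetical) radius at which vacuum polarization begins.
            return 1 + 0.005 * math.log(1 / r_scaled)

        for r in (1.0, 1e-2, 1e-4, 1e-6):
            print(r, round(charge_factor(r), 3))
        # Even at a millionth of the starting radius the charge has only
        # risen ~7%: a logarithmic creep, nothing like an exponential law.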

    One last piece of evidence: increasing energy means getting closer to a particle core (harder collisions, closer approaches). Under supersymmetry (SUSY) ideas, unification of the strong and electroweak forces is supposed to occur at the extremely high energy of 10^16 GeV. If this happens, it means that all forces are equal, being different aspects of the same thing at a very close distance. Physically, that distance implies being so close to the particle core that there is NO intervening vacuum charge polarization (shielding) at all. Therefore, an electron at very close distances would show nuclear force effects. Hence, if supersymmetric unification ideas have any validity at all, they suggest that the role of the vacuum that I've been describing for electromagnetism is also the mechanism for the difference between electrons and quarks.

    Now, remember, the range of the virtual charges in the vacuum is big enough to knock and deflect the electron randomly as the electron moves in the vacuum: Heisenberg's law says the product of the energy of virtual particles and the time they exist is of order h/(2.Pi); hence by rearranging and putting in the energy for an electron-positron pair, you get the time they exist, ~10^-21 second. Assuming as an exaggeration that they travel at speed c, this gives a maximum range of virtual particles in the vacuum of ct ~ 10^-12 metre. I don't know that electron-positron pairs predominate in the vacuum, because that depends on the energy density of the vacuum, which is not clearly known (John Baez shows that equally defensible estimates ranging from zero to[ward] infinity exist).
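
    Those two numbers follow from a couple of lines (a sketch; the speed-c assumption is the exaggeration noted above):

        hbar = 1.0545718e-34                    # J.s
        c = 2.99792458e8                        # m/s
        E_pair = 2 * 0.511e6 * 1.602176634e-19  # e-/e+ pair energy, joules
        t = hbar / E_pair                       # ~6e-22 s, order 10^-21 s
        print(t, c * t)                         # range ~2e-13 m, order 10^-12 m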

    The higher the energy density of the vacuum, the heavier the virtual particles. However, from the unification arguments above, electrons and quarks are related: quarks are pairs or triads of electrons bound and trapped by the short range vacuum attraction effect, and because quarks are close enough to share the same polarization shells, the latter are 2 or 3 times stronger in pairs or triads of quarks, creating apparent fractional electric charges (stronger polarization type shielding causes weaker observed charge at a long distance).

    The increase in the magnetic moment which results for leptons is reduced by the 1/alpha, or 137, factor due to shielding from the virtual positron's own polarization zone, and is also reduced by a factor of 2Pi because the two particles are aligned with opposite spins: the force gauge bosons being exchanged between them hit the spinning particles edge-on, and the side-on length is 2Pi times smaller than the full circumference of the particle. To give a real world example, it is well known that merely spinning a missile about its axis reduces the exposure of the missile's skin to weapons by a factor of Pi. This is because the exposure is measured in energy deposit per unit area, and the exposed area is obviously decreased by a factor of Pi if the missile is spinning quickly. For an electron, the spin is half integer, so, like a Mobius strip (a paper loop with half a turn), it has to rotate 720 degrees (not 360) to complete a 'rotation' back to the starting point. Therefore the effective exposure reduction for a spinning electron is 2Pi, rather than Pi.

    Hence by combining the polarization shielding factor with the spin coupling factor, we can account for the fact that the lepton magnetic moment increase due to this effect is approximately 1/(2.Pi x 137) = alpha/(2.Pi), added on to the 1 Bohr magneton of the bare real electron. This gives the 1.00116 Bohr magnetons result for leptons.
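
    Numerically, as a one-line check of that claim:

        import math

        alpha = 1 / 137.036
        print(1 + alpha / (2 * math.pi))  # ~1.00116 Bohr magnetons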

    We can make additional predictions using the Z boson of electroweak theory, which is unique because it has rest mass despite being an uncharged fundamental particle! You can easily see how charged particles acquire mass (by attracting a cloud of vacuum charges, which mire them, creating inertia and a response to the spacetime fabric background field which is gravity). But how does a non-composite neutral particle, the Z, acquire mass? This is essentially the question of electroweak symmetry breaking at low energy. The Z is related to the photon, but is different in that it has rest mass and therefore has a limited range, at least below electroweak symmetry breaking energy.

    Z mass model: a vacuum particle with the mass of the electroweak neutral gauge boson (Z) semi-empirically predicts all the masses in the Standard Model. You use data for a few particles to formulate the model, but then it predicts everything else, including making many extra checkable predictions! Here's how. If the particle is at position A in the model illustration, it is inside the polarization range of the electron, but there is still its own polarization shell separating it from the real electron core. Because of the shielding by its own shell of vacuum polarization and by the spin of the electron core, the mass it gives the core is equal to

    M(z)/(2.Pi x 137) = M(z)alpha/(2.Pi) ~ 105.7 MeV. Hence the muon!

    Next, consider the lower energy state where the mass is at position B in the diagram above. In that case, the coupling between the central core charge and the mass at B is reduced by the additional distance (which empirically is a factor of ~1.5 reduction) and also the 137 or 1/alpha polarization attenuation factor. Hence

    M(z).(alpha)^2 /(1.5 x 2.Pi) ~ 0.51 MeV. Hence the electron!
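
    Both mass formulas are easy to check numerically; a minimal Python sketch (the M_Z value is standard data):

        import math

        M_Z = 91187.0  # Z boson mass, MeV
        alpha = 1 / 137.036
        muon = M_Z * alpha / (2 * math.pi)               # position A state
        electron = M_Z * alpha**2 / (1.5 * 2 * math.pi)  # position B state
        print(round(muon, 1))      # ~105.9 MeV (measured muon: 105.66 MeV)
        print(round(electron, 3))  # ~0.515 MeV (measured electron: 0.511 MeV)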

    Generalizing, for n real charge cores (such as a bare lepton, or 2-3 bare quarks) and N vacuum particles of Z boson mass at position A (a high energy, high mass state), the formula for predicting the observable mass of the leptons or hadrons is:

    M(e).n(N+1)/(2.alpha) = M(z).n(N+1)(alpha)/(6.Pi).
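
    As a sketch of how the generalization works (the function is a direct transcription of the formula above; the quantum-number assignment n = 1, N = 2 for the muon is my illustrative reading, not stated here):

        alpha = 1 / 137.036
        m_e = 0.511  # electron mass, MeV

        def predicted_mass(n, N):
            # M = M(e) * n * (N + 1) / (2 * alpha), from the formula above.
            return m_e * n * (N + 1) / (2 * alpha)

        print(round(predicted_mass(1, 2), 1))  # ~105.0 MeV: roughly the muon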

    This does make predictions! It is based on known facts of polarization in the vacuum, the details of which have evidence from many experiments. It has more physics and checkable tests going for it than the periodic table of chemistry had when first proposed from sheer empirical association by Newlands and Mendeleev, which today would doubtless be sneeringly suppressed as 'numerology' (the explanation of the periodic table had to await the discovery of quantum mechanics). It produces a kind of periodic table of elementary particle masses which is directly comparable to measured data. This fact-guided methodology of doing physics is in stark contrast to stringy, abject, useless extra-dimensional speculation.

    The stringy model doesn't predict the force mechanisms, strengths, particle masses, or cosmological results, yet it has suppressed with censorship the approach which does.

    But for a comparison of the above heuristic ideas with the renormalized Tomonaga-Feynman-Schwinger quantum field theory calculation of the magnetic moment increase for an electron due to the vacuum (a calculation which Dirac and Feynman, as well as others like Dr Chris Oakley, raised objections to on mathematical grounds, as being incomplete/fiddled), see:

    Julian Schwinger, On Gauge Invariance and Vacuum Polarization, Phys. Rev., vol. 82 (1951), p. 664:

    'This paper is based on the elementary remark that the extraction of gauge invariant results from a formally gauge invariant theory is ensured if one employs methods of solution that involve only gauge covariant quantities. We illustrate this statement in connection with the problem of vacuum polarization by a prescribed electromagnetic field. The vacuum current of a charged Dirac field, which can be expressed in terms of the Green's function of that field, implies an addition to the action integral of the electromagnetic field. Now these quantities can be related to the dynamical properties of a "particle" with space-time coordinates that depend upon a proper-time parameter. The proper-time equations of motion involve only electromagnetic field strengths, and provide a suitable gauge invariant basis for treating problems. Rigorous solutions of the equations of motion can be obtained for a constant field, and for a plane wave field. A renormalization of field strength and charge, applied to the modified lagrange function for constant fields, yields a finite, gauge invariant result which implies nonlinear properties for the electromagnetic field in the vacuum. The contribution of a zero spin charged field is also stated. After the same field strength renormalization, the modified physical quantities describing a plane wave in the vacuum reduce to just those of the maxwell field; there are no nonlinear phenomena for a single plane wave, of arbitrary strength and spectral composition. The results obtained for constant (that is, slowly varying fields), are then applied to treat the two-photon disintegration of a spin zero neutral meson arising from the polarization of the proton vacuum. We obtain approximate, gauge invariant expressions for the effective interaction between the meson and the electromagnetic field, in which the nuclear coupling may be scalar, pseudoscalar, or pseudovector in nature. The direct verification of equivalence between the pseudoscalar and pseudovector interactions only requires a proper statement of the limiting processes involved. For arbitrarily varying fields, perturbation methods can be applied to the equations of motion, as discussed in Appendix A, or one can employ an expansion in powers of the potential vector. The latter automatically yields gauge invariant results, provided only that the proper-time integration is reserved to the last. This indicates that the significant aspect of the proper-time method is its isolation of divergences in integrals with respect to the proper-time parameter, which is independent of the coordinate system and of the gauge. The connection between the proper-time method and the technique of "invariant regularization" is discussed. Incidentally, the probability of actual pair creation is obtained from the imaginary part of the electromagnetic field action integral. Finally, as an application of the Green's function for a constant field, we construct the mass operator of an electron in a weak, homogeneous external field, and derive the additional spin magnetic moment of α/2π magnetons by means of a perturbation calculation in which proper-mass plays the customary role of energy.'

    More information on QFT: see Prof. Mark Srednicki's textbook at http://gabriel.physics.ucsb.edu/~mark/MS-QFT-11Feb06.pdf and the corrected Prof. Alvarez-Gaume introduction at http://arxiv.org/PS_cache/hep-th/pdf/0510/0510040.pdf (It turns out that the latter (1st ed.) was wrong, as it ignored all the particle creation-annihilation loops which can be created at energies between 0.511 MeV and 92 GeV. Motl emailed the Professor and then replied to me: "Prof. Alvarez-Gaume has written me that it was a pedagogical simplification, which I fully understand and endorse, and in a future version of the text, they will have not only the right numbers but even the threshold corrections!" The second edition corrects the error, with a footnote thanking Motl, whom I had informed of the error, since I don't get replies from most professors when using hotmail.)

    Kind regards,
    nigel

    Copy of a comment submitted to the Cosmic Variance blog, in case it is deleted:

    http://cosmicvariance.com/2006/07/23/n-bodies/
    nigel cook on Jul 24th, 2006 at 4:07 am

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Tim Poston and Ian Stewart, Analog, November 1981.

    It isn’t quantum physics that is the oddity, but actually classical physics! The normal teaching of Newtonian physics (at least at low levels) falsely claims that it allows the positions of the planets to be exactly calculated (determinism), when it does not once you have 3 or more bodies, which you do. Richard P. Feynman conceded this in his book QED:

    ‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’

    The interference is due to many vacuum virtual charges:

    ‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

    The duration and maximum range of these charges is easily estimated: take the energy-time form of Heisenberg’s uncertainty principle and put in the energy of an electron-positron pair and you find it can exist for ~10^-21 second; the maximum possible range is therefore this time multiplied by c, or 10^-12 metre. This is far enough to deflect electrons but not enough to be observed as vacuum radioactivity. Like Brownian motion, it introduces chaos on small scales, not large ones:

    ‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

    The Schroedinger wave equation arises naturally from a sea of particles because we know that you get waves in particle-based fluids: http://feynman137.tripod.com/#b