Quantum gravity physics based on facts, giving checkable predictions: August 2006

Saturday, August 05, 2006

Peter Woit's Not Even Wrong has been published in America.

Fame, physics, and unification of fundamental forces

Smolin's Why no new Einstein? essay in the June 2005 issue of Physics Today is online as a PDF file here, which kinda contrasts with Edward Witten's essay in the April 1996 issue of the same journal, which claimed that his (Witten's) stringy 10 and 11 dimensional M-theory 'has the remarkable property of predicting gravity'.

Seeing that for all his brilliance Einstein just set the stage for string theory to take over physics as a mechanism-less speculation which does not interact properly with the real world, Witten well deserves to be called a new Einstein, along with Nobel laureate 't Hooft.

't Hooft falsely claims that Maxwell's continuous differential equation of displacement current for a charging capacitor is not in conflict with the fact that the capacitor's charge increases in discrete units (electrons), not as a continuous variable.

Because Maxwell's displacement current equation is one half of Maxwell's electromagnetic radiation theory (working in conjunction with Faraday's law of induction, another Maxwell equation first written in vector calculus by Oliver Heaviside, who doesn't get credit), we can see why classical electromagnetism (Maxwell's equations) fails to work in situations where individual charges are important, such as small capacitors. For example, atoms are capacitors (they have positive and negative charges separated by vacuum), and analysing the problem this way reveals a lot about why Maxwell's displacement current formula breaks down for quantum theory. This is crucial, because Schroedinger's equation and Dirac's equation, describing energy transfer as wavefunctions of a field change, are quantized versions of the displacement current law, as shown in previous posts and in a section on the page http://feynman137.tripod.com/ (more about this later in this post).

All charges are exchanging continuous (TEM energy slab type) energy, just as two batteries in a parallel circuit do, as explained previously. In a steady state, there is an equilibrium of exchange, so each electron receives the same amount of energy as it emits. This explains how the space around and between charges is "charged" with an electric field but no magnetic field (the magnetic field curls cancel out when there is equilibrium of exchange, because the curls cancel one another from exchange radiation components travelling in each direction).

The atom is a capacitor which violates Maxwell's equation for displacement current. The photon gets emitted because of this failure in Maxwell's displacement current equation, which is only valid statistically for large numbers of charges, and breaks down for individual electrons. The photon is emitted as compensation for the discrete change in energy level of the electron. When the electron is at a high energy state, it has potential energy tied up in the electromagnetic field between it and the nucleus. The electron cannot radiate all of that energy as the photon, and this fact creates the Planck law of quantized energy levels. When the electron falls back to the ground state, its centripetal outward acceleration increases according to a = (v^2)/r where r is radius. When r gets smaller (electron falling towards ground state), the centripetal acceleration gets bigger, and this counteracts the bigger Coulomb (inverse square law) force which exists nearer the nucleus.
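To check that force balance numerically, here is a minimal Python sketch using the textbook Bohr-model values for the hydrogen ground state (taking v = alpha*c and r = Bohr radius is my illustrative choice, not something derived in this post):

# Rough check of the force balance described above, using textbook
# Bohr-model values for the hydrogen ground state (illustrative only).
import math

e    = 1.602176634e-19     # electron charge, C
m_e  = 9.1093837015e-31    # electron mass, kg
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J s
c    = 2.99792458e8        # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)        # fine structure constant ~1/137
a0    = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)   # Bohr radius ~5.3e-11 m

v = alpha * c                                         # orbital speed in the ground state
f_coulomb     = e**2 / (4 * math.pi * eps0 * a0**2)   # inverse-square attraction
f_centripetal = m_e * v**2 / a0                       # m v^2 / r requirement

print(f"Coulomb force     = {f_coulomb:.3e} N")
print(f"Centripetal force = {f_centripetal:.3e} N")
# Both come out ~8.2e-8 N, so the two scale together as r shrinks.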

The electron does obey Maxwell's theory that radiation occurs due to acceleration (such as centripetal acceleration), but this was dismissed by Bohr and Rutherford on the false belief that if a charge continuously radiated, it would lose energy. The poverty of intuition here is obvious. By analogy, Bohr and Rutherford might as well claim to disprove Prevost's 1792 thermodynamics by claiming that if every body above absolute zero is always radiating heat, then everything would long since have frozen. This is false, because it ignores the fact that equilibrium can occur. What is happening in the case of the electron is that it is emitting continuous Heaviside slab type electromagnetic radiation (no oscillation, zero frequency), just as it emits heat photons (oscillating radiation) when above absolute zero. We detect the oscillating radiation because it causes other charges to resonate. We detect the non-oscillating radiation (Yang-Mills exchange radiation) because it causes forces and steady state force fields.

Anyhow, Smolin's new book The Trouble with Physics, has been reviewed on Bee's Backreaction Blog.

Bee is an unusually intelligent German female physicist. I read some of Smolin's writings - including Life of the Cosmos, which delivers the useful message that the universe was not designed as such but is still evolving, like a city that evolved gradually over centuries - and listened to some of his lectures online. I didn't have time to attend Smolin's lecture at the London School of Economics last month because I'm not a professional physicist and hate London, and since Smolin doesn't reply to my emails I don't have any interest in asking him questions at a public lecture, which would possibly verge on harassment and wouldn't help anything. [Sometimes when girls don't respond to emails I keep on regularly following up, because I don't know whether they have received the email or not (if you are on hotmail, messages are deleted automatically every now and then if they are not accessed regularly, regardless of whether they have actually been read or not). If girls are so rude that they can't send a 'no thanks' or 'not interested' email when asked for a date by email, then they should not try to claim that some repeat emails are annoying. If someone says 'not interested' then I'm gone. However, I'm not telepathic so don't have any means to know unless told.]

Anyhow, the world doesn't need more of Einstein's type/hype, as there are now about a million clones of Einstein's style of mathematical physics with PhD's in the world, and they all think pretty much alike which is the reason for the tragic group think farce of string "theory" (I know what theories are, and ad hoc abject stringy speculation that can't do anything for physics, except bog it down in arrogant sh*t, is no theory). Lee Smolin states:

"The problem with following trends is that lots of people try to do that and the result is that they all reduce their chances. There are fewer places for people who ignore fashion and follow their own ideas, but there are also fewer people who have the courage to follow their own ideas and the imagination to have good ideas. Life is not fair, but very roughly these things seem to even out, so that the best advice career wise seems to be to figure out what you love doing and what you are good at (which is often the same thing) and just do that. Whatever you do the invariants are that you have to work hard to get good results and you have to work also at communicating your results."

Gerard 't Hooft and Danny Ross Lunsford are curious characters, as I've commented before. Hooft won the 1999 Nobel Prize for showing that Yang-Mills (gauge boson exchange radiation) quantum field theory is renormalizable. This is just mathematics. Hooft has a photo of a statue of himself on his own page. Need I say more about the man's ego? Yes. Last December, about the time that John 'Jupiter Effect' Gribbin emailed me to say that people can be sued for sending emails containing scientific work (Gribbin refuses to admit that democracy and freedom were won in World War II), and I replied saying with a few brief (four letter) words that fascism is awful, both Hooft and Smolin emailed me saying simply that they don't want to receive any more emails of that type. Neither of them replied to the science.

I repeat the science again. Under 'research interests' Hooft lists only the following on his page (all of which are 100% what my research is concerned with, which he ignores and doesn't want to hear about):

"Gauge theories in elementary particle physics. This was the topic of the 1999 Nobel Prize. An idea was proposed by C.N. Yang and Robert Mills in 1954: they suggested that particles in the sub-atomic world might interact via fields that are similar to, but more general than electricity and magnetism. But, even though the interactions that had been registered in experiments showed some vague resemblance to the Yang-Mills equations, the details seemed to be all wrong. Attempts to perform accurate calculations were frustrated by infinite - hence meaningless - results. Together with my advisor then, and my co-Nobel-laureate now, M. Veltman, we found in 1970 how to renormalize the theory, and, more importantly, we identified the theories for which this works, and what conditions they must fulfil. One must, for instance, have a so-called Higgs-particle. It was subsequently discovered that, actually, the details of the observed forces now exactly fall in place. First it was found that the so-called weak force, in combination with the more familiar electro-magnetic one, is exactly described by a Yang-Mills theory. In 1973 it was concluded that also the strong force is a Yang-Mills theory. I was among the small number of people who were already convinced of this from early 1971. During the later 1970s, all pieces fell into place. Of all simple models describing the fundamental particles, one was standing out, the so-called `Standard Model'. Gauge theories are the backbone of this Standard Model. But now it also became clear that this is much more than just a model: it is the Standard Theory. Great precision can be reached, though the practical difficulties in some sectors are still substantial, and it would be great if one could devise more powerful calculation techniques. Also, in spite of all its successes, the Standard Model, as it is formulated at present, shows deficiencies. It cannot be exactly right. Significant refinements are expected when the results of new experiments become known, hopefully during 2007 and subsequent years, when the European particle accelerator LHC becomes fully operational.

"Quantum gravity and black holes . The predominant force controlling large scale events in the Universe is the gravitational one. The physical and the mathematical nature of this force were put in an entirely new perspective by Albert Einstein. He noted that gravitation is rooted in geometric properties of space and time themselves. The equations he wrote down for this force show a remarkable resemblance with the gauge forces that control the sub-nuclear world as described in the previous paragraph, but there is one essential difference: if we investigate how individual sub-atomic particles would affect one another gravitationally, we find that the infinities are much worse, and renormalization fails here. Under normal circumstances, the gravitational force between sub-atomic particles is so weak that these difficulties are insignificant, but at extremely tiny distance scales, of the order of 10^-33 cm, this force will become strong. We are tempted to believe that, at these tiny distance scales, the fabric of space and time is affected by quantum mechanical phenomena, but exactly how this happens is still very mysterious. One approach to this problem is to ask: under which circumstance is the gravitational force as strong as it ever can be? The answer to this is clear: at the horizon of a black hole. If we could understand the peculiar physical phenomena that one expects at the horizon of a black hole, and if we could find a meaningful description of its quantum mechanical laws, then perhaps this would open up new perspectives.

"Fundamental aspects of quantum physics. I have deviating views on the physical interpretation of quantum theory, and its implications for Big Bang theories of the Universe. This topic has been expanded upon in my publication, entitled: Quantum Gravity as a Dissipative Deterministic System (see my publication list)."

My new "anon" comment to Plato's/Hooft's blog (must be anon or Hooft would simple be able to think it was written by an "egotist" and not bother even reading it at all):

Plato {Hooft},

Renormalization is adjustment of charge and mass for the contributions of the vacuum to them.
Around an electron, the virtual charges in the vacuum become polarized, and this reduces the electric field at a large distance because the polarization opposes the electric field from the electron core.

Renormalization deals with the problem that the amount of polarization is not infinite. If the whole vacuum was free to be polarized around an electron, the vacuum polarization would increase until it cancelled exactly 100% of the electron's radial electric field.

Hence the electron would have no electric charge at all observable from a distance.

The physical explanation is that the polarization of the vacuum charges does not stretch out to infinity, it is limited in range to the space close to the electron core, and beyond a certain limit (corresponding to the lower limit energy cutoff in the renormalization math) no polarization can occur.

Clearly, this physics shows that the vacuum charges which are polarized are actually CREATED in the strong electric field close to the electron core, where the electromagnetic energy density (which falls as inverse square of distance) is high enough to create free pairs of charges out of the vacuum.

Quantum field theory includes annihilation and creation operators, and these must have a physical correspondence in reality.

When you get well away from mass, there is no free virtual charge in the vacuum which can be polarized. If there were, all electric fields would be 100% cancelled, so there would be no real charges.

Why can't people understand this? It is so simple. Why always call it a personal pet theory of the person saying it?

There is no [free] zero-point energy, or renormalization would not work, because the lower limit cutoff would be zero and the polarization would extend infinitely and there would be no electric charges, no atoms, no people, nothing.

[This argument pertains to free, polarizable charge in the vacuum distant from matter; obviously if there was non-free charge present that could not be polarized to cancel out free charges, then you could still have some kind of structured or chaotic Dirac Sea. This would have to either be so chaotic that polarization could not be set up because the particles' motions are too energetic and random to be ordered at all by polarization at long distances from a charge (this is the most likely explanation as it has a full mechanism and makes predictions which are checkable and so far as tested so far, are in excellent agreement with experimental reality), or so structured such as by a lattice arrangement that weak electric fields are unable to polarize the charge because the charge remains stuck in the lattice unless a minimum work function energy is supplied to free the charges so that they can then be polarized (this is less likely because any lattice structure to the vacuum would seem to break down the isotropic freedom of radiation to go at the same velocity in all directions, if radiation is mediated by the virtual or 'displacement charge' currents in the vacuum as classical electromagnetism suggests).]

Cheers, anon.

anon Homepage 08.05.06 - 6:24 am #

Now for Lunsford, who has an abstract unification of electrodynamics and general relativity using six dimensions, with three presumably representing the contractable spacetime dimensions of matter, and three representing continually expanding spacetime; so the six dimensions may not be truly supplemental but overlap in a simple sense and merely represent mathematical rulers: a contractable ruler for measuring matter, which can be contracted locally by acceleration fields such as gravity and motion, and an expanding ruler for measuring the expanding background spacetime of the cosmos on a large, universe-sized, non-local scale.

But the exciting thing about Lunsford's approach is that it makes the definite prediction that there is no cosmological constant and therefore no dark energy. Hence Lunsford appears to get precisely the same result as the Yang-Mills gravity mechanism (which also shows lambda = zero, dark energy = 0, see also this previous post for a visual explanation; there are various different ways of showing the same thing), but by using the abstract mathematical approach instead of the causal mechanism approach. I sent him an email about the atom as capacitor. Since the atom contains separated positive and negative charge with an intervening vacuum, it is a charged vacuum dielectric capacitor for purposes of dealing with the way it gains and loses energy. The electron resists getting an increase in charge, and jumps in kinetic energy and distance from the nucleus instead. If Yang-Mills is right for electromagnetism (there is plenty of evidence from electroweak theory experiments that it is correct so far as it goes), then electromagnetic forces are due to continual exchanges of radiation between charges. The radiation is TEM wave radiation, transverse electromagnetic. It doesn't have to wave when it is being exchanged in an equilibrium situation, because that is like a direct current (Morse tap or logic step) Heaviside signal being propagated with currents in opposite directions down each of a pair of parallel conductors (a "transmission line" in electronics jargon). With Yang-Mills exchange radiation, the equal energy exchange in opposite directions cancels out the magnetic field curls from each component, so vacuum magnetic self inductance effects do not stop propagation by being infinite: they are zero. Hence you don't need to have the gauge bosons oscillate to be exchanged between charges. Oscillation (where the electric field varies from a peak of +v volts through zero to -v volts in a cycle determined by the frequency, and is accompanied by magnetic fields curling around the direction of energy flow, first in a clockwise sense, then stopping, then curling in an anticlockwise sense) is only required where energy goes from A to B without energy simultaneously going from B to A. If energy is in exchange equilibrium, going both from A to B and from B to A simultaneously, then there is no need for the radiation to be oscillating in order to propagate. Catt had claimed that all capacitors charge in steps. Therefore, it seemed reasonable that the reason for quantum jumps in putting energy into an atomic electron is that the Maxwellian exponential charging curve for a capacitor - which is based on Maxwell's displacement current equation - is the sh*t in classical electromagnetism that needs to be corrected to convert it into quantum reality. Lunsford sent a reply that it appeared interesting, then did not indulge in any further discussion. Previously, he had sent an email saying don't send emails.
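To illustrate the point about the exponential charging curve being only a statistical average over discrete electrons, here is a minimal sketch; the capacitance, resistance and voltage are arbitrary values chosen purely to make the granularity obvious, not a model of the atom:

# Sketch: classical exponential charging of a capacitor versus the
# discrete electron count. For a tiny capacitance the "smooth" Maxwell
# curve is obviously only a statistical average over whole electrons.
import math

e = 1.602176634e-19        # electron charge, C
C = 1e-18                  # capacitance, farads (arbitrary, tiny, for illustration)
R = 1e6                    # series resistance, ohms (arbitrary)
V = 1.0                    # supply voltage, volts (arbitrary)
tau = R * C                # RC time constant

for k in range(0, 11):
    t = k * tau / 2                                    # sample a few times during charging
    q_classical = C * V * (1 - math.exp(-t / tau))     # Maxwell/classical charge
    n_electrons = int(q_classical // e)                # whole electrons actually stored
    print(f"t = {t:.2e} s   classical q = {q_classical:.3e} C   electrons = {n_electrons}")
# With C*V ~ 1e-18 C the plate never holds more than about 6 electrons,
# so the continuous curve is a poor description of the actual steps.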

On the topic of Smolin's Loop Quantum Gravity, Smolin and others showed that Feynman's path integrals quantum field theory can be used to describe metric-less general relativity, i.e., we have quantum gravity minus the dynamics of contraction. Moving a little beyond the mathematics of path integrals, the underlying physics for the mathematical model - Yang-Mills theory, in which forces including inertial forces are due to normal energy exchange processes between charges (such as electric charges, color charges, etc., with gravitational charges being mass) - is capable of supplying not just the contraction dynamics of general relativity, but the dynamics for the physical underlying causes of forces themselves. This is shown in previous posts on this blog (just make sure you are using Microsoft Internet Explorer and not the Mozilla Firefox internet browser, or else Greek symbols on this blog will appear to you as letters such as p for Pi, r for Rho, and such letters are already used by me, p = momentum, r = radial distance, etc., so you'll get yourself all confused, which would be a terrible shame, wouldn't it?).

At http://math.ucr.edu/home/baez/planck/planck.html John Baez has a page about spin networks and spin foams. I stopped reading it on the page http://math.ucr.edu/home/baez/planck/node2.html where the fool dimensionally obtains the Planck length, which is an ARBITRARY (SPECULATIVE) and massive size, on the order of 10^-35 m, and thus far, far BIGGER than the black hole electron radius, which is just 2GM/c^2. What a crackpot. It is a pity that he can't remove rubbish from his webpages, and just leave the good stuff like http://math.ucr.edu/home/baez/gr/gr.html
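Anyone can check the size comparison with a few lines of Python using standard constants; the figures below are just the textbook formulae evaluated numerically:

# Compare the Planck length with the black hole event horizon radius
# 2GM/c^2 evaluated for the electron mass.
import math

G    = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34     # reduced Planck constant, J s
m_e  = 9.1093837015e-31    # electron mass, kg

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
r_black_hole  = 2 * G * m_e / c**2           # ~1.35e-57 m

print(f"Planck length        = {planck_length:.2e} m")
print(f"2GM/c^2 for electron = {r_black_hole:.2e} m")
print(f"ratio                = {planck_length / r_black_hole:.1e}")
# The Planck length comes out ~10^22 times larger than 2GM/c^2 for an electron.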

While I'm on a roll of exposing folly, Peter Woit is silly for taking the Physics World review of his book (Not Even Wrong) by Gordon Fraser, entitled String theory gets knotted, as a genuine review. First, the review has no content, just non-scientific trivia such as sneers about the city in which Rutherford claimed to discover the nucleus (in fact, Japanese physicists had already come up with the nuclear atom, and Rutherford never did the experiments, which were done by Geiger and Marsden in Manchester). Second, the review is a sneering attack on Woit's brilliant discussion of the Standard Model. The online review concludes: "Gordon Fraser edited The New Physics for the 21st Century, recently published by Cambridge University Press, e-mail gordon.fraser@wanadoo.fr. He is currently writing a biography of Abdus Salam."

Gordon Fraser should know that Abdus Salam was the sh*t who refused to discuss the Catt Anomaly with Catt in Wireless World in 1981. Salam wrote Catt a letter which Catt had published on the letters page of Wireless World. Salam's letter stated that he refused to try to resolve such problems in physics. So that is the end of Salam's reputation so far as I'm concerned. These people who write books about crackpots who win Nobel Prizes are sh*ts.

Just to elaborate, there is a whole section about Alfred Nobel blowing up family members and workers, warmongering including supplying both sides of the Crimean war of the 1850s, and so on, in my unpublished 1990 book. If you accept a Nobel Peace Prize, or physics prize, or whatever, just remember it was paid for by massacres. When I tried promoting the gravity mechanism on Physics Forums, one of the obscene email messages of abuse I received was pretty revealing: 'If you are right, then why haven't I heard you win the Nobel Prize?'

This is the kind of sneering abuse that shows such prizes are not merely commemorating warmongering, worker exploiting, driven capitalist sh*ts, but the prizes are far worse: they can be used to attack genuine research work which is being suppressed by previous award winners.

I replied in no uncertain terms what I thought of prizes in general and Nobel ones in particular! If you get say an email saying you have won a fantastic prize, you immediately ask 'Who sent it? Who is offering it and why? Am I going to have to pay - or overpay - for this?'

Of the various competitions, the fairest are those in which competitors choose to enter in advance of the awards being announced. If someone enters a sports competition, fair enough. What is less fair is the concept that anyone trying to understand the universe can be smeared with the false allegation that they are merely out to win a warmongering, blood-covered prize set up posthumously by the will of a selfish, Hitler-like thug obsessed with exploiting and destroying others. Not so!

For information on loop quantum gravity dynamics see this post. The biggest insight provided by Woit's Not Even Wrong for me was the simplification of loop quantum gravity:

'In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.' - P. Woit, Not Even Wrong, Cape, London, 2006, p189.

This, the Yang-Mills gauge boson exchange process, is physically in perfect correspondence not to the 'loop' of the creation-annihilation-creation of matter (as illustrated at the top of this linked post), but rather to the 'loop' of the gauge boson radiation starting from one gravitational charge (mass), going to another, and returning to the first mass again. Because the energy carried by the gauge boson to cause force is conserved, we can extract physical predictions and dynamics from this model. For example, if you move masses apart, the gauge boson radiation energy at any instant is spread over larger distances and the force strength therefore falls as it spends more of its time in transit, and if the masses are receding from one another, the exchange radiation is redshifted so gravity falls off over cosmological distances. In particular, there is a correspondence between this physical picture of what happens in Yang-Mills quantum field theory and the way you get fundamental forces and the contraction effect in relativity, as explained in previous posts on this blog.
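To make the holonomy idea in the quotation concrete, here is a small numerical sketch of my own (an illustration, not anything taken from Woit's book): parallel-transport a tangent vector around a constant-latitude loop on a sphere, and the net rotation comes out equal to the solid angle enclosed by the loop:

# Numerical illustration of holonomy: parallel-transport a tangent vector
# around a circle of constant latitude on a unit sphere, and compare the
# net rotation with the analytic result 2*pi*(1 - cos(theta)), the solid
# angle enclosed by the loop.
import numpy as np

theta = np.radians(30.0)          # colatitude of the loop, measured from the pole
steps = 20000                     # number of small transport steps around the loop

def point(phi):
    """Point on the unit sphere at colatitude theta, longitude phi."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

phis = np.linspace(0.0, 2.0 * np.pi, steps + 1)
v = np.array([0.0, 1.0, 0.0])                      # initial tangent vector at phi = 0
v -= np.dot(v, point(0.0)) * point(0.0)            # make sure it is tangent
v /= np.linalg.norm(v)
v0 = v.copy()

for phi in phis[1:]:
    n = point(phi)                                 # unit surface normal at the new point
    v = v - np.dot(v, n) * n                       # project into the new tangent plane
    v = v / np.linalg.norm(v)                      # keep unit length

angle_numeric  = np.arccos(np.clip(np.dot(v0, v), -1.0, 1.0))
angle_analytic = 2.0 * np.pi * (1.0 - np.cos(theta))
print(f"numeric holonomy angle = {angle_numeric:.4f} rad")
print(f"analytic 2*pi*(1-cos)  = {angle_analytic:.4f} rad")
# The two agree closely (~0.842 rad for a 30 degree colatitude loop).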

Energy is continuously radiated by continuously spinning charge, owing to the centripetal acceleration of the spin. Since all charges behave alike in this regard, in some situations (generally known as "statics" in mechanics, and "electrostatics" in electricity) an equilibrium can occur, where the radiant power (energy transmitted per second) of gauge boson radiation energy being radiated and received by any charge is EXACTLY equal. This occurs because any slightly larger charge radiates more strongly, and since the radiating power of every charge is really immense, all charges thereby equalise to identical charge almost instantly, as Ivor Catt brilliantly explained in a simple manner in his book Electromagnetism 1, http://www.ivorcatt.com/2_2.htm:

'Keeping within the wave theoretical system, it is possible to explain why so-called 'particles' should appear to have equal size ... One method would be to discuss ... the resulting energy/matter exchange. There are three possibilities. Either the larger steals from the smaller, or there is no transfer, or the smaller steals from the larger. The fact that there is more than one 'particle' in today's galaxy indicates that if a galaxy is very old, the first possibility must be wrong. The second possibility is unlikely [nc note: this is because of the Yang-Mills exchange-radiation quantum field theory being such a success experimentally; although Catt wants no part in any aspect of quantum field theory, he was at least sympathetic with Feynman's path integrals when I explained the mechanism of it to him some years ago, but unfortunately Catt doesn't take an interest in developing or even tolerating development of this kind of physics from his own semi-correct intuitive and experimentally based ideas, which is my interest; he is essentially only interested in electronics and censorship problems]. The third would fully explain the gradual equalizing out of 'particles' in a galaxy over time. (This approach ... needs extension to explain the existence of more than one type of particle.)'.

I've developed this to predict all observable masses of elementary particles on my home page: see http://electrogravity.blogspot.com/ (including http://electrogravity.blogspot.com/2006/07/quantum-field-theory-quantum-yang.html and http://electrogravity.blogspot.com/2006/07/important-note-to-users-of-web-browers.html also http://electrogravity.blogspot.com/2006/06/relationship-between-charge-of-quarks.html and http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html for diagram). Analogy: temperature is a radiation equilibrium in an undisturbed room because hotter things radiate faster than cooler things, so everything soon approaches the same temperature AUTOMATICALLY. Prevost had a hard time introducing this in 1792.
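As a toy numerical version of that analogy (all numbers arbitrary, and nothing in it specific to gauge bosons), a few bodies that radiate faster when hotter and absorb what the others emit converge to a common temperature automatically:

# Toy illustration of the Prevost-style radiative equilibrium described
# above: bodies that radiate more strongly when hotter, and absorb what
# the others emit, approach a common temperature. Arbitrary illustrative
# numbers, equal heat capacities, total energy conserved at every step.
N_STEPS = 3000
k = 1e-10                               # emission constant per step (arbitrary)
temps = [350.0, 300.0, 250.0, 200.0]    # starting temperatures, K (arbitrary)
n = len(temps)

for step in range(N_STEPS + 1):
    if step % 500 == 0:
        print(step, ["%.1f" % T for T in temps])
    emitted = [k * T**4 for T in temps]            # hotter bodies emit faster (~T^4)
    total = sum(emitted)
    # each body absorbs an equal share of what all the *other* bodies emit
    temps = [T - e + (total - e) / (n - 1) for T, e in zip(temps, emitted)]
# Output: the four temperatures converge towards the mean (275 K); nothing
# "freezes out" just because everything is always radiating.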

All charges are continuously exchanging energy at an enormous rate (the Yang-Mills quantum field theory force-causing radiation exchange mechanism). We don't see it except as forces, unless there is acceleration of a local charge, because there is equal energy going along all paths all the time, which maintains equilibrium (equilibrium is naturally set up, because any particle radiating more rapidly soon loses net energy and its radiation emission rate therefore falls off until it equals the rate of radiation energy input it is receiving). A photon as we know it is an energy release due to a disturbance in the normal exchange process. If a charge accelerates, it radiates a "photon" which is a net increase in the natural continuous radiation exchange process. Photons carry momentum because they cause forces, since the exchange radiation itself is the mechanism of forces. The key evidence for this is that spinning charge should radiate according to Maxwell's equations. The spin of an electron means centripetal acceleration a = (v^2)/r where v is spin speed and r is radius. Acceleration of charge implies radiation of energy which can be calculated easily. It is an enormous non-oscillatory radiation power, sustained because there are equal amounts going both ways at once between any given two charges, so the TEM waves will propagate without requiring the oscillation which would be needed if there was a one-way only transfer, or a net one-way transfer; this explains why photons are oscillating waves, unlike gauge boson exchange radiation. I have much more about this on my home page and this blog.
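For a concrete number, the standard Larmor formula P = q^2 a^2 / (6 pi eps0 c^3) with a = v^2/r can be evaluated directly; below it is applied, purely as a familiar check case, to the textbook Bohr-orbit speed and radius (any other spin speed and radius can be plugged into the same function):

# Standard Larmor formula for the power radiated by an accelerating charge,
# with a = v^2 / r. Evaluated at the textbook Bohr-orbit values only as a
# familiar check case; other (v, r) values can be substituted.
import math

e    = 1.602176634e-19     # electron charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
c    = 2.99792458e8        # speed of light, m/s

def larmor_power(v, r):
    a = v**2 / r                              # centripetal acceleration
    return e**2 * a**2 / (6 * math.pi * eps0 * c**3)

v_bohr = c / 137.036                          # orbital speed, hydrogen ground state
r_bohr = 5.29177e-11                          # Bohr radius, m
print(f"Larmor power at Bohr orbit ~ {larmor_power(v_bohr, r_bohr):.2e} W")
# ~4.7e-8 W: the continuous classical radiation rate that led Bohr to abandon
# the picture, and which the equilibrium argument above re-interprets.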

The success of renormalization in quantum field theory, where the limit of charge polarization is not infinite but just a small distance, implies that the vacuum can only be polarised in the strong electric field near an electron, and so there is a cutoff at long ranges where the field is too weak to free and polarize vacuum charge, which prevents the entire vacuum being polarised and cancelling normal electric charges completely.
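To put a rough number on how close to the electron core that polarization region is, here is a sketch which treats the standard Schwinger critical field for pair creation (about 1.3 x 10^18 V/m) as the threshold; using that particular threshold as the cutoff is my illustrative assumption, not something derived here:

# Rough estimate of the distance from an electron at which its Coulomb
# field falls to the Schwinger critical field (the standard threshold for
# spontaneous pair creation). Treating that as the vacuum polarization
# cutoff distance is an illustrative assumption.
import math

e    = 1.602176634e-19     # electron charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e  = 9.1093837015e-31    # electron mass, kg
c    = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34     # reduced Planck constant, J s

E_crit = m_e**2 * c**3 / (e * hbar)                      # Schwinger field, ~1.3e18 V/m
r_cut  = math.sqrt(e / (4 * math.pi * eps0 * E_crit))    # where e/(4 pi eps0 r^2) = E_crit

print(f"Schwinger critical field = {E_crit:.2e} V/m")
print(f"cutoff radius            = {r_cut:.2e} m")
# ~3e-14 m, roughly ten times the classical electron radius and far smaller
# than the atom: polarization is confined to the region near the core.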

There is a degeneration which sets into physics after each new idea. If the new idea is accepted too easily by leaders in a field, it is never presented with the care and force of evidence which is required to make it widely accessible. Quantum mechanics suffered this - working at the abstract mathematical level but then never being developed into a causal mechanism, with uncertainty being due to the effect of the aether on a small scale, like the Brownian motion of pollen grains in air. This is tragic. So it is probably best in the long run to try to develop it as far as possible in every way, through responding to criticisms and making improvements in clarity and simplicity, before hoping to gain widespread interest or mainstream publication. When I started a decade ago, I just wanted to publish the simple idea with the simple maths and leave it to the professional physicists in professorships to develop it. Now I'm certain that had it been published in say Classical and Quantum Gravity, it would not have led to anything but a lot of bigotry and argument which ignores the facts it predicted, like the lack of long range gravitational retardation of supernovae. It is not a pet theory, it is just a jigsaw puzzle with the pieces made entirely by others (from spacetime to Yang-Mills theory, etc.) which I've pieced together so that you see a picture which makes some predictions. However, the more work I do in trying to convince others, the more I get involved with it. At the moment there is nobody else to defend it, certainly not close one-time friends like Catt, who completely hate it for various non-scientific reasons (prejudice).

In science you can't ignore the facts available on your personal whim; you have to address the facts and, instead of ignoring them, build a better model - in detail - if you think they are being inadequately treated. There is no proved mechanism whatsoever (you can't count the untested ad hoc speculative stuff as scientific) for redshifts to be due to anything but recession, and the beauty of this is that it forces you to conclude that gravity is caused by an inward reaction. What you have to do really is to put aside prejudice for a few seconds and check the simplicity of the mechanism and proofs, and the accuracy of the predictions; see the list at http://feynman137.tripod.com/#d which is about six months out of date and should be supplemented by more recent additional results on this blog.

From: "Hooft 't G." G.tHooft@phys.uu.nl
To: "'Nigel Cook '"
Sent: Tuesday, December 27, 2005 12:52 PM
Subject: RE: Editor Jeremy Webb discredits integrity of the New Scientist

Please remove me from this list. I don't want my in-box to be polluted by all this nonsense about Maxwell's equations. The Maxwell equations correctly describe the propagation of signals as well as the conservation of charge in capacitors, period. Keep me out of any further discussions. G. 't Hooft.

Quantum field theorist Dr Chris Oakley says of 't Hooft and others: 'Unfortunately for me, though, most practitioners in the field appear not to be bothered about the inconsistencies in quantum field theory, and regard my solitary campaign against infinite subtractions at best as a humdrum tidying-up exercise and at worst a direct and personal threat to their livelihood. I admit to being taken aback by some of the reactions I have had. In the vast majority of cases, the issue is not even up for discussion.

'The explanation for this opposition is perhaps to be found on the physics Nobel prize web site. The five prizes awarded for quantum field theory are all for work that is heavily dependent on renormalization. These are ... 1999. 't Hooft and Veltman, "for elucidating the quantum structure of electroweak interactions in physics." Presumably the prize is for their proof of the renormalizability of gauge theories. Quite simply, this is not good enough. The result of subtracting infinity from infinity is indeterminate (an elementary mathematical fact). One may fit finite "effective" theories within this indeterminacy, borrowing some of the clothes of quantum field theory, but one should not pretend that such theories are derived from quantum field theory. They are not: they are merely inspired by it.'

Proof that Hooft is not telling the truth when claiming that Maxwell's equations are valid for capacitors: http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html. That post proves the detailed dynamics of the Heaviside-step mechanism behind electric transmission and 'displacement current', but to briefly summarise some of the most basic aspects of the proof for scientifically illiterate and ignorant readers: Maxwell's displacement current equation says displacement current i = dD/dt, where D is electric displacement (electric field strength E multiplied by the electric permittivity of the medium), which is a continuous differential equation and not a stepwise formula (Catt has a stepwise claim which is false, being based on the false Heaviside assumption that the front of a logic pulse is discontinuous, so don't confuse the stepwise charging based on discrete unit charges entering the capacitor plate with Catt's false treatment of steps arising from reflections at each end of the capacitor plate). We know that when a capacitor charges, the amount of charge is quantized into units of electron size, so it doesn't charge up the way Maxwell's formula says, except in an approximate statistical sense when the amount of stored charge is large in comparison to the charge of an electron.
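To put numbers on that 'approximate statistical sense', compare the electron count in an ordinary capacitor (arbitrary everyday values) with a one-electron atom treated as a capacitor:

# How many electron charges are involved: a macroscopic capacitor versus
# a hydrogen atom treated as a one-electron "capacitor". The component
# values are arbitrary everyday choices, purely for illustration.
e = 1.602176634e-19        # electron charge, C

C_macro = 1e-6             # 1 microfarad (arbitrary)
V_macro = 5.0              # 5 volts (arbitrary)
q_macro = C_macro * V_macro
print(f"macroscopic capacitor: {q_macro:.1e} C = {q_macro / e:.1e} electrons")

print("hydrogen atom as capacitor: 1 electron")
# ~3e13 electrons versus exactly 1: the continuous equation i = dD/dt is a
# fine statistical average in the first case and meaningless in the second.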

We also know that the atom is a capacitor, and this is an example of the small capacitor where Maxwell's formula is totally wrong. We can therefore see in the failure of the displacement current equation of Maxwell the entire conflict between classical and quantum electrodynamics; in particular, the photon predicted by Maxwell's equations is dependent upon Maxwell's false displacement current equation. Maxwell in free space has only two equations (two - namely the divergence equations for magnetic and electric fields - disappear when real charge can be ignored in the vacuum): the curl of the magnetic field and the curl of the electric field. The curl of the magnetic field in a vacuum is in classical physics determined by Maxwell's false equation for displacement current, while the curl of an electric field is given by Faraday's law of electromagnetic induction. The entire breakdown of classical physics in the quantum limit (small scale) is due to Maxwell's displacement current formula. Light is quantized because it is emitted from atoms which behave like quantum charging capacitors, which are entirely different to the false classical picture. It is sickening and shocking that Hooft is so ignorant when valuable information is given to him: to precisely point out where the error lies in the conflict between classical and quantum electrodynamics is a very valuable piece of knowledge. Everyone except Niels Bohr's mindset crowd of 'complementarity principle' thugs realises that progress relies on isolating the source of the problem and correcting it.

As I've shown, the true mechanism behind what Maxwell called 'displacement current' involves force-causing radiation, which establishes the causal mechanism of Yang-Mills quantum field theory as being due to real exchanges of momentum-carrying radiation emitted and received continuously by spinning charges. All accelerating charges (spin constitutes centripetal acceleration) radiate. This mechanism was falsely dismissed by moron Niels Bohr, who claimed charges would lose energy if they all radiated. This is analogous to denying blackbody radiation by claiming that if all bodies radiated heat, they would get cold until they reached absolute zero, which is not observed. The error here is simple: bodies which radiate do not lose all their energy, because they exchange energy and maintain equilibrium. So the walls of your room radiate energy at the table, which radiates energy at the walls, and things are stable. (Don't get confused: forces result from gauge boson radiation, which is continuous exchange radiation and is not heat; when gauge boson radiation is received, things get compressed slightly in the direction of motion - the FitzGerald contraction - as they accelerate continuously, not in quantum leaps. Gauge boson radiation determines charge and motion, whereas thermal radiation determines temperature. Obviously matter accelerated by gravity can get hot when a collision occurs - such as when a comet collides with a planet - but in mechanism the causes of the effects of gauge-boson force-mediating radiation and of thermal radiation are entirely different.)

Definition of fascism relevant to the abuse from Jeremy Webb and Hooft, et al.:

‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media ... the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. ... But I do not believe the innate decency of the British people has gone. Asleep, sedated, conned, duped, gulled, deceived, but not abandoned.’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

'If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner - even though he sat at the feet of Faraday ... beetles could do that ... he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaking manner, not unmixed with bluster, and make believe he knew all about it when he was a little boy!'

- self-taught mathematician Oliver Heaviside (left school at 13), Electromagnetic Theory Vol. 1, p337, 1893.

RECAP: All particles are trapped "TEM waves"; they are light speed Heaviside energy currents trapped in small loops by the gravitational field due to the energy of the electromagnetic field, which is strong on a tiny size scale of radius 2GM/c^2 (black hole radius); see my proof in Electronics World, Aug 2002, parts on my site.

Here are three things that I can send from point A to point B:

(1) Heaviside continuous TEM "wave" energy (with nothing actually waving, so we should really call it a "Heaviside-Poynting slab" of non-oscillating electromagnetic energy). This can only go from point A to point B uniformly, without oscillation, if there is also a similar thing simultaneously going the OPPOSITE way, from B to A, thereby adding to the electric field but cancelling the magnetic field vectors (which in wires creates the infinite self-inductance problem that prevents you from sending electric energy at light speed in a uniform pulse down a single wire, so that you must use at least a two wire transmission line to send Morse code/logic). See the numerical sketch after this list.

(2) Small gravitationally-trapped Heaviside TEM energy, where the Heaviside-Poynting energy spins around with its direction of propagation forming a small closed loop of black hole radius = 2GM/c^2. This results in a spherically symmetrical electric field and a magnetic dipole. Correcting for shielding by vacuum polarization gives the right electric charge and magnetic dipole moment; the spin is like a Mobius strip for an electron, etc. Send one of these and you are sending an electron, which is like a particle because it is a small loop black hole.

(3) Oscillating Heaviside TEM wave (real oscillating waves): these can travel from place A to place B without requiring an equal simultaneous return from B to A to be superimposed on them to allow propagation. This is because the oscillatory nature of the wave permits it to go. Think about a sound wave as a crude analogy. If you just breathe air out, no propagating wave occurs. You have to have a push followed by a pull, an outward pressure force followed by an inward suction force, to get sound to work. Since area times pressure is outward force, Newton's 3rd law of motion explains sound waves. See my illustrated explanation here: http://glasstone.blogspot.com/2006/03/outward-pressure-times-area-is-outward.html
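Here is a minimal numerical check of the cancellation claim in item (1): superpose two equal, oppositely-travelling plane TEM pulses with the same electric field direction (arbitrary amplitude), and over the region where they overlap the electric fields add while the magnetic fields cancel:

# Superposing two equal-amplitude TEM pulses travelling in opposite
# directions (item (1) above): electric fields add, magnetic fields cancel
# in the overlap region. Ordinary plane-wave relations; arbitrary amplitude.
import numpy as np

c = 2.99792458e8
mu0 = 4e-7 * np.pi
E0 = 1.0                                   # field amplitude, V/m (arbitrary)

# pulse travelling in +x: E along +y, B along +z with magnitude E/c
E_fwd = np.array([0.0, E0, 0.0])
B_fwd = np.array([0.0, 0.0, E0 / c])
# pulse travelling in -x with the SAME electric field direction:
# the Poynting vector E x B must point along -x, so B flips sign
E_bwd = np.array([0.0, E0, 0.0])
B_bwd = np.array([0.0, 0.0, -E0 / c])

print("E total:", E_fwd + E_bwd)                        # doubled electric field
print("B total:", B_fwd + B_bwd)                        # zero magnetic field
print("Poynting fwd:", np.cross(E_fwd, B_fwd) / mu0)    # energy flow along +x
print("Poynting bwd:", np.cross(E_bwd, B_bwd) / mu0)    # energy flow along -x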

To elaborate more on the exchange radiation mechanism, see the Woit quotation on loop holonomy and the gauge boson 'loop' correspondence given earlier in this post.

I can predict the maximum separation between electrons and positrons. The duration and maximum range of these charges are easily estimated: take the energy-time form of Heisenberg's uncertainty principle, put in the energy of an electron-positron pair, and you find it can exist for ~10^-21 second; the maximum possible range is therefore this time multiplied by c, or ~10^-12 metre. This is far enough to deflect electrons but not enough to be observed as vacuum radioactivity. Like Brownian motion, it introduces chaos on small scales, not large ones:

'... the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [eg scatter between virtual charges in the vacuum and real charges], as I proposed [in the 1934 book 'The Logic of Scientific Discovery']... There is, therefore, no reason whatever to accept either Heisenberg's or Bohr's subjectivist interpretation.' - Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

On average, statistically the separation may be only about half the maximum. So I'd predict that the vacuum contains on the order of (10^-12 m)^-3 = 10^36 electrons and positrons per cubic metre, which allows the computation of the energy of the ground state of the vacuum and gives a large vacuum energy density (far larger than the dark energy epicycle claimed falsely by the mainstream from the force-fitting of the lambda-CDM cosmological model to the supernovae redshift data, which are better explained by the gravity mechanism).
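The arithmetic behind that estimate, using h (as in the text) rather than h-bar, is a few lines:

# The estimate above in numbers: lifetime of a virtual electron-positron
# pair from the energy-time uncertainty relation (using h, as in the text),
# the corresponding maximum range c*t, and the resulting rough number
# density (range)^-3. Order-of-magnitude arithmetic only.
h = 6.62607015e-34         # Planck constant, J s
c = 2.99792458e8           # speed of light, m/s
E = 2 * 8.187e-14          # rest energy of an e- e+ pair, J (2 x 0.511 MeV)

t_max = h / E              # ~4e-21 s
r_max = c * t_max          # ~1e-12 m
density = r_max**-3        # of the order 10^36 per cubic metre (using the maximum range)

print(f"pair lifetime  ~ {t_max:.1e} s")
print(f"maximum range  ~ {r_max:.1e} m")
print(f"number density ~ {density:.1e} pairs per cubic metre")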

Woit's book "Not Even Wrong (N.E.W.)" gives indirect tests such as getting the vacuum energy in supersymmetry (unification energy) to agree with empirical observation. String theory is apparently way out by an astronomical factor of 10^113 (N.E.W. page 179). This is because of the small value of the vacuum energy given by the false Lambda-CDM cosmological model.

Argument against a structured vacuum due to the success of renormalization: see the renormalization and vacuum polarization argument set out in my 'anon' comment quoted earlier in this post (polarizable virtual charge only exists close to a particle core, where the field is strong enough to create it; if the whole vacuum were freely polarizable, all real charges would be completely cancelled, so there would be no atoms, no people, nothing).

So the evidence points to a chaotic vacuum which has so much energy it can't be ordered (polarized) to any significant extent except by very intense electromagnetic fields close to the core of a real (long-lived) electron or other long-lived charge.

Disorder is the lowest state of any system, and higher energy states lead to more order. You have to expend energy to create order from chaos.

Gravitational potential energy is used up, for example, in working against entropy to create order. At 300,000 years after the BB, everything was at the same temperature to within a few parts in 100,000. Today, the interior of the sun is at 15,000,000 K, compared to the spaces between galaxies which are at just 2.7 K. Hence the ordering which has been done by gravity to create dissimilarities in temperatures in the universe (when there were none of significance at the beginning) has used up a vast amount of gravitational potential energy. From the gravity mechanism, the source of the gravitational potential energy is ultimately in the expansion of the universe itself; expansion causes gravity as a mass-shielded inward reaction force or radiation flowing in to fill the volume being vacated by outward moving matter. Both of these last statements are in exact mathematical equivalence, as proved at http://feynman137.tripod.com/#h and with illustrations at http://electrogravity.blogspot.com/2006/07/important-note-to-users-of-web-browers.html

For the vacuum to be ordered at the ground state would reverse the idea that you supply energy to a disordered state to create order. The idea of a stable structured vacuum like a lattice aether is incompatible with renormalization because for vacuum polarization to occur in strong fields near charges, you really are going to get in a mess from a structured orderly ground state. (How can there be any stable vacuum electron-positron lattice if there is freedom? What stops it from becoming chaotic? When an electron and a positron come slightly closer together, why don't they accelerate closer until they annihilate into gamma rays? In fact, this is what happens. It is chaotic. Since electrons and positrons are of equal mass, it is not possible to make a stable lattice. Heat and radiation would perturb the lattice, and the slightest effect of this sort would set off a massive collapse. The vacuum would explode. The vacuum is chaotic with an equilibrium of annihilation and creation going on all the time.)

Polarizing a gas of charges in a ground state which is chaos is straightforward. If you want to account for the way vacuum polarization works with an ordered, structured vacuum, you have to specify an absolute energy at which the ordered structure breaks down to release polarizable free charges. This is incompatible with the nature of renormalization, which is scalable in some sense, since you can take different cutoff energies depending on the situation. The key problem to be solved is to plot fundamental force strengths as a function of distance from the core of various particles (electron, quark), instead of plotting strength versus collision energy. See illustrations of forces as a function of energy at http://electrogravity.blogspot.com/2006/02/heuristic-explanation-of-short-ranged_27.html.

The key problem is to produce a detailed vacuum polarization model which explains this, and to do so by plotting vacuum polarization (shielding) as a function of distance from a particle core. Then we need to work out how much energy is being tied up in causing the vacuum polarization, and use this quantitative calculation to work out how other forces vary. A loss of electromagnetic force strength at long distances, for example, will be made up for by the strong nuclear force at short distances. Conservation of energy will therefore allow a mechanistic, quantitative understanding of how force-causing energy is being used at all distances. You have to understand that Gauss's and Green's theorems are important here: long-range redshifts aside, the total amount of exchange radiation energy which causes force that crosses the spherical area at any given radius around a particle, in a given interval of time, is the same regardless of the distance from the particle core. The amount of exchange radiation going towards a particle core is equal to that coming from the particle core where there is no acceleration of the particle.
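The Gauss/Green point is easy to verify numerically: for an inverse-square intensity, the product of intensity and spherical area is the same at every radius (the source power below is an arbitrary illustrative figure):

# Check of the Gauss/Green point above: for an inverse-square intensity,
# the total power crossing a sphere is the same at every radius. The
# source power figure is arbitrary, purely for illustration.
import math

P = 1.0                                     # total emitted power, W (arbitrary)

for r in [1e-15, 1e-10, 1.0, 1e10]:        # radii in metres
    intensity = P / (4 * math.pi * r**2)   # W per square metre at radius r
    flux = intensity * 4 * math.pi * r**2  # intensity times spherical area
    print(f"r = {r:.0e} m   total flux through sphere = {flux:.6f} W")
# The product is exactly P at every radius (redshift and absorption aside),
# which is the conservation statement the argument above relies on.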

Where there is an acceleration of the particle, the kinetic energy is supplied from the fact that during acceleration the rate at which the particle receives incoming exchange radiation will exceed the rate at which it radiates it. This will be tied up with other processes, such as the FitzGerald contraction of the particle due to its motion, the appearance of a magnetic field due to the motion of charge (electrons have a magnetic dipole moment anyway from spin; I'm not referring to this), and time-dilation. I'm confident I have a good model in outline for what happens, but want to make a full mathematical model to account for gauge boson energy usage with differential equations. By understanding the vacuum phenomena quantitatively, force unification should be possible. All the evidence I've compiled indicates that the vacuum is totally chaotic in the ground state. See http://electrogravity.blogspot.com/2006/07/quantum-field-theory-quantum-yang.html for prediction of particle masses.

Regarding the continuous nature of gauge bosons (exchange radiation) that cause force, think about a (very long) slab of ongoing energy moving through space. It reaches a distant charge (particle) and reflects back towards you, cancelling out the magnetic field of further outward-going radiation that has yet to be reflected. You can understand it by sketching it; the exchange radiation is continuous like the Heaviside slab of energy: it flows from one particle to another without stopping, and when it reflects, there is still energy going in, like a capacitor plate charging with Heaviside energy current: http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html

Prevost in 1792 didn't have to worry about how the equilibrium gets established at the beginning of time for temperature, which was just as well, because that sort of question is ultimately going to take you back to initial conditions in the BB or whatever you call it 15,000,000,000 years ago. Things were more chaotic then than they are now; gravity tends to counter the second law of thermodynamics (the increase in entropy). Similarly, I don't need to worry too much at present about the chaotic way that gauge boson radiation exchange equilibrium was set up in the first place, causing gravity and making all electrons extremely similar in charge, etc. All I need to do is to make some connections to get the ball rolling, and produce some quantitative predictions. It will be extremely sad if I have to work out an enormous amount of this stuff by myself, with everyone else finding some metaphysical or prejudicial reason to shy away. Certainly the setting up of gauge boson exchange radiation would occur prior to the fusion of light elements in the BB, so we are talking of events within a matter of seconds after the universe began, maybe small fractions of a second. I would deal with that by computer simulations, varying the initial conditions until the output matched empirical data.

Someone claims that LeSage's gravity mechanism was disproved on thermodynamics. This alleged disproof I believe was due to Maxwell (I went into this in depth on my blog some time ago but can't remember the details; if you are interested, see the blog post and comments for yourself: http://electrogravity.blogspot.com/2006/03/george-louis-lesage-newtonian_26.html): "Today the model of gravity proposed by Fatio is known almost exclusively as "Lesage's theory", with Fatio relegated to a footnote, so Fatio was ultimately denied even his rightful recognition as the originator of this idea, which in any case has long since been discredited on thermodynamic grounds."

This is a false claim. Gauge boson exchange radiation does not get converted into heat as the critics of LeSage (Maxwell, etc.) claimed it would. Gauge bosons causing gravitation interact with the Higgs field particles: these are massive particles which smooth out the radiation exchange and prevent heat being generated. Yang-Mills exchange radiation for electroweak theory, which includes electromagnetism, is well established by the discovery of the W+, W- and Zo gauge bosons at CERN in 1983, and forces caused by electromagnetic exchange radiation don't make charged bodies hot, because it is not oscillating heat-type radiation that is being exchanged.

Ivor Catt falsely claims electricity has nothing to do with wires (which merely "somehow guide" the TEM wave), and is all occurring in the space around the wires. You have got to remember that Catt, and probably Heaviside, draw the Heaviside-Poynting vector as three orthogonal arrows (each at 90 degrees to the other two): one representing light-speed propagation of the energy, one representing the electric field, and one representing the magnetic field.

In fact, the correct way to draw it is different. You draw the propagation direction arrow, and radiating out from that in all directions (not merely one direction) you draw a bunch of electric field arrows. Then you draw circular loops around the direction of propagation to represent the magnetic field vector.

Now because Catt can't even draw the vector properly, he can't grasp that his way of drawing it is meaningless or misleading. What you have is energy going along the electric field lines, being exchanged to cause forces. You need this because Yang-Mills U(1) electromagnetism works and tells you that electric charge is mediated by some kind of light speed radiation.

Yang-Mills theory doesn't directly tell you the frequency of the radiation. (Some people say the exchange radiation is like Casimir force radiation, so there is always one wavelength between two charges. Hence you can calculate from that a frequency for the radiation. Thus, if two charges are x metres apart, then the gauge boson radiation on this hypothesis has a wavelength of x metres and a frequency given by the wave axiom f = c/x, where c is the wave speed. However the assumption that the radiation wavelength equals the separation distance is a TOTAL LIE, because there is no physical basis for it. It is like the "Planck length", which has no physical basis at all and is massive compared to the black hole radius of an electron, which does have a physical basis, as argued in my Aug 2002 Electronics World article etc.)
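To make the arithmetic of that hypothesis explicit (this is just a sketch of the claim being criticised, not an endorsement of it), here is f = c/x for a few separations; note that metre-scale separations would imply ordinary, easily detectable radio frequencies, which is the basis of the test described below:

```python
# Sketch of the "wavelength equals separation" hypothesis: f = c/x.
c = 3.0e8  # speed of light, m/s

for x in (1.0, 0.1, 1e-10, 1e-15):   # charge separations in metres
    f = c / x                         # frequency the hypothesis would imply
    print(f"separation x = {x:.0e} m  ->  f = c/x = {f:.1e} Hz")

# x = 1 m gives 300 MHz and x = 0.1 m gives 3 GHz, i.e. ordinary radio/microwave
# frequencies that are trivial to detect if they were really there.
```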

The frequency is apparently zero. The evidence is that it doesn't oscillate because (1) it is in exchange equilibrium, so it doesn't need oscillation to make it propagate in free space (magnetic field inductance effects cancel because equal energy is coming in the opposite direction and cancelling out the magnetic field inductance), and (2) there is no mechanism for it to oscillate periodically, because it is being emitted by a spinning loop which is not a periodic phenomenon; although the spin involves centripetal acceleration a = (c^2)/r, which makes the charge radiate energy, the radiation is continuous because the charge's spin is continuous and not oscillating.

But this concept is not definite. You could alternatively argue that the exchange radiation is actually a very hard, high energy form of gamma radiation which can't be detected other than as fundamental forces, because it is so penetrating that it doesn't interact like normal radiation. So gauge boson radiation is either too low frequency to be detected as oscillating electromagnetic radiation (i.e. frequency f = zero) or too high frequency to be detected as the highest frequency electromagnetic radiation (gamma rays). (I'm certain the frequency of gauge bosons is not equal to f = c/x where x is distance between two charges, because I've put strong charges x metres apart and not been able to measure any radio effects of frequency f = c/x between them, even when inducing temporary interruptions using sheets of metal. It is very easy to detect 1 metre or 10 cm radio waves.) However this is too problematic, because such ultra high energy, ultra high frequency, highly penetrating gamma radiation would not interact enough to cause gravity, and you would end up with a horsesh*t theory (like the many ideas on the internet that neutrinos cause gravity in a pushing context) with arbitrary "magical" characteristics chosen as fiddles to make the theory work, not the use of natural facts which are self-consistent and follow from empirical evidence. Neutrinos can't cause gravity because the abundance needed to do so would make them detectable in far, far, far, far higher numbers by nuclear reactions than the accurate measurements reveal to be the case. So the claim that neutrinos cause gravity is a lie.

Normally you would think that extremely low frequency waves are easily stopped and higher frequencies are more penetrating, so you might imagine crudely that gauge boson radiation of frequency f = zero (no oscillation) would have no penetrating power at all and would be unable to penetrate through the earth to cause gravitation in electro-gravity unification. However, low frequencies are more penetrating than high frequencies which is why submerged nuclear submarines in the conductive salt water ocean use very long aerials and ELF (extremely low frequencies) to communicate: the earth's skin depth for transmission of radio waves is very large for very low frequencies. Penetration for frequencies below about 1 kHz is proportional to something like 1/(square root of frequency), so as frequency falls toward zero, electromagnetic radiation becomes ever more penetrating (obviously it can't penetrate the actual physical cross-sectional areas of mass such as quarks and electrons; we're talking about attenuation due to the absorption of energy by means of the radio wave inducing oscillations of electrons, not barrier shielding in which the radiation is simply reflected by masses).
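A minimal sketch of the skin-depth point just made, assuming the standard conductor skin-depth formula and a typical seawater conductivity of about 4 S/m (these figures are mine for illustration, not from the post):

```python
# Skin depth in a conductor: delta = sqrt(2 / (omega * mu * sigma)), so the
# penetration depth grows as 1/sqrt(frequency).
import math

mu0   = 4.0e-7 * math.pi   # permeability of free space, H/m
sigma = 4.0                # approximate conductivity of seawater, S/m

def skin_depth(f_hz):
    """Depth at which the field amplitude falls to 1/e of its surface value."""
    omega = 2.0 * math.pi * f_hz
    return math.sqrt(2.0 / (omega * mu0 * sigma))

for f in (76.0, 1.0e3, 1.0e4, 1.0e6):   # 76 Hz is a typical ELF submarine frequency
    print(f"f = {f:9.0f} Hz   skin depth ~ {skin_depth(f):7.2f} m")

# The depth scales as 1/sqrt(f): the lower the frequency, the deeper the radio wave
# reaches, which is why ELF is used to contact submerged submarines.
```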

So I'm 100% certain from all evidence that gauge boson radiation is continuous exchange radiation with no oscillation. It represents both electric and gravitational fields, the difference between these being due to whether the addition of electric field is a straight line summation across the universe (gravity, statistical mean strength = g) or a random/drunkards walk between all similar charges due to the random spatial distribution of two types of electric charge (electromagnetism, statistical mean strength = g.[root N], where N is number of similar charges in universe).
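For readers who want to see the root-N statistics at work, here is a toy Monte Carlo sketch of the straight-line sum versus drunkard's-walk addition described above (the numbers are purely illustrative, not the number of charges in the universe):

```python
# Coherent addition of N like contributions grows as N; random-sign addition of
# N mixed +/- contributions grows only as ~sqrt(N).
import math
import random

N      = 10_000    # number of unit contributions (toy value)
trials = 1_000

walks = []
for _ in range(trials):
    net = sum(random.choice((+1, -1)) for _ in range(N))  # random signs, as for mixed charges
    walks.append(abs(net))

print("coherent (all same sign) sum :", N)
print("random-walk mean |sum|       :", sum(walks) / trials)  # ~0.8*sqrt(N), i.e. of order sqrt(N)
print("sqrt(N) for comparison       :", math.sqrt(N))
```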

If muons are 205 times the mass of electrons (or whatever), then they are only 1/205, or about 0.5%, as abundant as electrons and positrons in the vacuum, assuming that they are equally likely to be formed. The shorter lifetime does allow us to make estimates. I calculated as below that for electron-positron pairs the mean life is ~10^-21 second; the maximum possible range is therefore this time multiplied by c, or 10^-12 metre. For muons, this time and range would be reduced by a factor of 205 or whatever. For quarks, the reduction factor is still greater.

So for practical purposes, electron-positron pairs dominate as sources of transitory electric charge in the vacuum, and heavier particles make a negligible contribution. However, heavier particles like muons and quarks are more important in polarization very near a real quark, where the electromagnetic field of the real quark is strong. Is it the case that the local field energy close to a real quark creates heavy particles around it by causing the polarized vacuum electron-positron pairs to undergo more violent collisions, or is it the case that there is a mix-up between theory and experiment? The only place that quantum field theory could be wrong is if the interpretation of the experiments and the interpretation of the corresponding theory are both wrong in the same way. Certainly it is an error to talk of fundamental force strengths in terms of collision energy (which is the mainstream error), because you are mixing two effects up: (1) the effect of getting closer to particles when they are smashed together harder, and (2) the effect of actually making fresh vacuum particles in the collision process itself.

Effect (2) has the danger that what you measure in a particle accelerator is not pure and unadulterated effect (1). There is pollution from effect (2), so you cannot rationally claim that higher collision energies are merely showing you what the fundamental forces are like at shorter distances. There is also "noise" in the data from nuclear shrapnel, in the sense of particles created by the collisions themselves, which interfere in a way you would not see if you could gently probe what the forces are like at very short distances. The analysis which corrects for this muddle is what I'm working on when time permits. In a similar sort of way, we can't see vast distances in space without simultaneously looking back in time, so we are not seeing merely how the universe looks at vast distances; we must always remember we are seeing it at earlier times. It is tragic that Hubble never even bothered to see what his result would look like as a variation in velocity with time, i.e. an acceleration of a sort. If he had, the gravity mechanism may well have been discovered soon after 1929. Instead nonsense has hardened into a bigoted orthodoxy.

Because the rest mass energy of muons etc are so much higher than electrons, they exist for a correspondingly smaller period of time and dominate the vacuum to a lesser extent than electrons, according to the uncertainty principle (which as Popper says is just a statistical scatter formula, the average lifespan is just the average time between collisions causing annihilation; no strange metaphysics is required by the known facts, contrary to Bohr/Bore).
At the close distances where polarization of the vacuum shields electromagnetism, the energy used in this way (absorbed/attenuated/shielded) becomes the strong nuclear force and the weak nuclear force: thus the physical basis for the unification of forces is the conservation of gauge boson energy, as explained previously on this blog.

Woit on his blog has said that the number one problem facing theoretical physics is understanding the mechanism of electroweak symmetry breaking, i.e. why the weak force mediating gauge bosons (but not the gauge boson of electromagnetism, referred to vaguely and possibly confusingly/misleadingly as the photon) have mass and short range.

I've said that the key is the Zo gauge boson which is the massive weak force counterpart of the electromagnetic photon. The Zo has something like 91 GeV rest mass (OK David, you might not like E=mc2 being used here, but I can't help it that the units for mass used in nuclear physics are energy units).

I've shown at http://electrogravity.blogspot.com/2006/07/quantum-field-theory-quantum-yang.html how to justify thinking of alpha, the fine structure constant of approximately 1/137, as the reduction factor in the electromagnetic force which is caused by the polarization of charge around the electron core that makes renormalization necessary in quantum field theory. I can calculate the electromagnetic force from Heisenberg and other laws and it is 137, or 1/alpha, times the known Coulomb law for electrons. Hence the long range force we observe (Coulomb) is attenuated by ~137 due to vacuum charge polarization.

Now the clever bit. The vacuum "Higgs field" is supposed to include a massive yet chargeless "Higgs boson" which causes all mass in the universe, the mass of all leptons, quarks, etc. Let's assume that the particle has already in a sense been discovered and is either the Zo itself or is closely associated to or paired with the Zo. This may in part or in whole reverse the mainstream picture, which claims that the mass of the Zo (and all other particles) is given by external Higgs bosons.

We can make additional predictions using the Z boson of electroweak theory, which is unique because it has rest mass despite being an uncharged fundamental particle! You can easily see how charged particles acquire mass (by attracting a cloud of vacuum charges, which mire them, creating inertia and a response to the spacetime fabric background field which is gravity). But how does a non-composite neutral particle, the Z, acquire mass? This is essentially the question of electroweak symmetry breaking at low energy. The Z is related to the photon, but is different in that it has rest mass and therefore has a limited range, at least below electroweak symmetry breaking energy.

Z mass model: a vacuum particle with the mass of the electroweak neutral gauge boson (Z) semi-empirically predicts all the masses in the Standard Model. You use data for a few particles to formulate the model, but then it predicts everything else, including making many extra checkable predictions! Here's how. If the particle is inside the polarization range of the electron, there is still its own polarization shell separating it from the real electron core. Because of the shielding of its own shell of vacuum polarization and from the spin of the electron core, the mass it gives the core is equal to M(z)/(2.Pi x 137) = M(z).alpha/(2.Pi) ~ 105.7 MeV. Hence the muon!

Next, consider the lower energy state where the mass is outside the polarization zone (at a large distance). In that case, the coupling between the central core charge and the mass at that position is reduced by the additional distance (which empirically is a factor of ~1.5 reduction) and also by the 137, or 1/alpha, polarization attenuation factor. Hence M(z).(alpha)^2 / (1.5 x 2.Pi) ~ 0.51 MeV. Hence the electron!

Generalizing, for n real charge cores (such as a bare lepton or 2-3 bare quarks), and N masses of Z boson mass at a position within the polarization zone, nearby to the core (a high energy, high mass state), the formula for predicting the observable mass of the elementary particle is: M(e).n(N+1)/(2.alpha) = M(z).n(N+1).alpha/(6.Pi).
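As a numerical check of the three formulas above, here is a short sketch using the standard Z rest mass-energy of about 91.19 GeV and alpha = 1/137.036 (the n and N values in the final example are chosen to illustrate the muon case):

```python
# Numerical check of the mass formulas quoted above.
import math

M_Z   = 91187.0          # Z boson rest mass-energy, MeV
alpha = 1.0 / 137.036    # fine structure constant

muon_est     = M_Z * alpha / (2.0 * math.pi)            # ~105.9 MeV (muon is 105.7 MeV)
electron_est = M_Z * alpha**2 / (1.5 * 2.0 * math.pi)   # ~0.51 MeV (electron is 0.511 MeV)
print(f"muon estimate     : {muon_est:.1f} MeV")
print(f"electron estimate : {electron_est:.3f} MeV")

def mass_general(n, N):
    """Generalized formula from the text: M = M_Z * n * (N + 1) * alpha / (6 * pi)."""
    return M_Z * n * (N + 1) * alpha / (6.0 * math.pi)

# One lepton core (n = 1) with N = 2 nearby Z-mass vacuum particles reproduces the
# muon-scale mass from the generalized formula.
print(f"general formula, n = 1, N = 2 : {mass_general(1, 2):.1f} MeV")
```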

The higher the energy density of the vacuum, the heavier the virtual particles. However, from the unification arguments at http://electrogravity.blogspot.com/2006/07/quantum-field-theory-quantum-yang.html , electrons and quarks are related: quarks are pairs or triads of electrons bound and trapped by the short range vacuum attraction effect, and because quarks are close enough to share the same polarization shells, the latter are 2 or 3 times stronger in pairs or triads of quarks, creating apparent fractional electric charges (stronger polarization type shielding causes weaker observed charge at a long distance).

The increase in the magnetic moment which results for leptons is reduced by the 1/alpha, or 137, factor due to shielding from the virtual positron's own polarization zone, and is also reduced by a factor of 2Pi because the two particles are aligned with opposite spins: the force gauge bosons being exchanged between them hit the spinning particles on the edges, which have a side-on length which is 2Pi times smaller than the full circumference of the particle. To give a real world example, it is well known that by merely spinning a missile about its axis you reduce the exposure of the skin of the missile to weapons by a factor of Pi. This is because the exposure is measured in energy deposit per unit area, and this exposed area is obviously decreased by a factor of Pi if the missile is spinning quickly. For an electron, the spin is half integer, so like a Mobius strip (a paper loop with half a turn), you have to rotate 720 degrees (not 360) to complete a 'rotation' back to the starting point. Therefore the effective exposure reduction for a spinning electron is 2Pi, rather than Pi. Hence by combining the polarization shielding factor with the spin coupling factor, we can account for the fact that the lepton magnetic moment increase due to this effect is approximately 1/(2.Pi x 137) = alpha/(2.Pi) added on to the 1 Bohr magneton of the bare real electron. This gives the 1.00116 Bohr magnetons result for leptons.
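A one-line numerical check of the figure just quoted, using alpha = 1/137.036 (the same alpha/(2.Pi) term appears as Schwinger's first-order result in the quotation below):

```python
# 1 Bohr magneton for the bare core, plus the alpha/(2*pi) correction.
import math

alpha = 1.0 / 137.036
print(f"1 + alpha/(2*pi) = {1.0 + alpha / (2.0 * math.pi):.5f} Bohr magnetons")   # ~1.00116
```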
Compare this to the predictive but obscurely abstract mathematical approach of Nobel Laureates Feynman, Tomonaga, and Schwinger:

Julian Schwinger, On Gauge Invariance and Vacuum Polarization, Phys. Rev. vo. 82 (1951), p. 664:'This paper is based on the elementary remark that the extraction of gauge invariant results from a formally gauge invariant theory is ensured if one employs methods of solution that involve only gauge covariant quantities. We illustrate this statement in connection with the problem of vacuum polarization by a prescribed electromagnetic field. The vacuum current of a charged Dirac field, which can be expressed in terms of the Green's function of that field, implies an addition to the action integral of the electromagnetic field. Now these quantities can be related to the dynamical properties of a "particle" with space-time coordinates that depend upon a proper-time parameter. The proper-time equations of motion involve only electromagnetic field strengths, and provide a suitable gauge invariant basis for treating problems. Rigorous solutions of the equations of motion can be obtained for a constant field, and for a plane wave field. A renormalization of field strength and charge, applied to the modified lagrange function for constant fields, yields a finite, gauge invariant result which implies nonlinear properties for the electromagnetic field in the vacuum. The contribution of a zero spin charged field is also stated. After the same field strength renormalization, the modified physical quantities describing a plane wave in the vacuum reduce to just those of the maxwell field; there are no nonlinear phenomena for a single plane wave, of arbitrary strength and spectral composition. The results obtained for constant (that is, slowly varying fields), are then applied to treat the two-photon disintegration of a spin zero neutral meson arising from the polarization of the proton vacuum. We obtain approximate, gauge invariant expressions for the effective interaction between the meson and the electromagnetic field, in which the nuclear coupling may be scalar, pseudoscalar, or pseudovector in nature. The direct verification of equivalence between the pseudoscalar and pseudovector interactions only requires a proper statement of the limiting processes involved. For arbitrarily varying fields, perturbation methods can be applied to the equations of motion, as discussed in Appendix A, or one can employ an expansion in powers of the potential vector. The latter automatically yields gauge invariant results, provided only that the proper-time integration is reserved to the last. This indicates that the significant aspect of the proper-time method is its isolation of divergences in integrals with respect to the proper-time parameter, which is independent of the coordinate system and of the gauge. The connection between the proper-time method and the technique of "invariant regularization" is discussed. Incidentally, the probability of actual pair creation is obtained from the imaginary part of the electromagnetic field action integral. Finally, as an application of the Green's function for a constant field, we construct the mass operator of an electron in a weak, homogeneous external field, and derive the additional spin magnetic moment of α/2π magnetons by means of a perturbation calculation in which proper-mass plays the customary role of energy.'

More information on QFT: see Prof. Mark Srednicki's textbook at http://gabriel.physics.ucsb.edu/~mark/MS-QFT-11Feb06.pdf and the corrected Prof. Alvarez-Gaume introduction at http://arxiv.org/PS_cache/hep-th/pdf/0510/0510040.pdf (It turns out that that textbook (1st ed) was wrong, as it ignored all the particle creation-annihilation loops which can be created at energies between 0.511 MeV and 92 GeV. Motl emailed the Professor and then replied to me: "Prof. Alvarez-Gaume has written me that it was a pedagogical simplification, which I fully understand and endorse, and in a future version of the text, they will have not only the right numbers but even the threshold corrections!" The second edition corrects the error, with a footnote thanking Motl, whom I had informed of the error, since I don't get replies from most professors when using hotmail.)

Try thinking about a sealed large tube of water with a brick at one end. Turn the tube upside down and the brick moves to the other end. At the same time, an equal amount of water is flowing around the brick to prevent a void ("cavitation" in nautical terms) being created. If a similar volume of water is prevented from flowing around the moving brick at the same speed but in the opposite direction to the motion of the brick (for example by making the tube the same cross-sectional size and shape as the brick and water tight by rubber seals at the edge), the brick doesn't move very easily! This is because it would have to compress the water before it and to cavitate (create a vacuum in) the water behind it, in order to move. The same sort of thing must occur whenever a real particle moves in the vacuum medium: a fundamental particle of the vacuum and a real particle can't exist at exactly the same place and time, so one must push the other out of the way. This gives the gravity mechanism as you get a mechanism for Newton's 3rd law in a fluid aether: http://feynman137.tripod.com/#h

Now electroweak stuff: the thing is, the "electroweak symmetry is broken" at low [collision] energies, whereby "low energies" means at large distances from the core of fundamental particles, because the higher the energy of collision, the closer particles approach before bouncing off.

The "broken symmetry" is the fact that the gauge boson photon of electromagnetism has no [rest] mass and infinite range, whereas the weak force gauge bosons such as Zo, W+ and W- all have rest mass and short range. [Symmetry (unbroken) = electromagnetic and weak bosons all having zero mass, infinite range.] This is one reason why the "Higgs field", which is commonly thought of as an aether in modern physics behind the scenes, is probably similar to a sea of weak force gauge bosons.

The standard picture is as follows: the Higgs field exists everywhere as the spacetime fabric or ether. It stops the weak gauge bosons within a short distance, and provides the masses of quarks and leptons by the mechanism of their bouncing off the virtual charges of the Higgs field. Because the weak force has a maximum range of 10^-18 metre, the energy-time version of the Heisenberg uncertainty principle (assuming light speed force mediation, so that time multiplied by speed of light equals this maximum distance) tells us the energy of the weak force mediator – it is 250 GeV, equivalent to 10^-24 kilogram.
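A rough numerical sketch of that uncertainty-principle estimate, dividing hbar*c by the quoted 10^-18 metre range (the prefactor conventions vary, e.g. h versus hbar and where the factors of 2 go, so only the order of magnitude should be read off):

```python
# Energy of a force mediator confined to a range r, estimated as E ~ hbar*c / r.
hbar = 1.055e-34   # J s
c    = 3.0e8       # m/s
eV   = 1.602e-19   # J

r = 1.0e-18                      # assumed maximum weak-force range in metres (from the text)
E_joule = hbar * c / r
E_GeV   = E_joule / eV / 1e9     # convert joules to GeV
m_kg    = E_joule / c**2         # equivalent mass via E = mc^2

print(f"E ~ {E_GeV:.0f} GeV, equivalent mass ~ {m_kg:.1e} kg")
# Prints a couple of hundred GeV and a few times 1e-25 kg, i.e. the same order of
# magnitude as the 250 GeV / ~1e-24 kg figures quoted above.
```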

This energy, 250 GeV, is of the same order of magnitude as the observed mass-energies of the weak gauge bosons (W-, W+, and Z, at roughly 80-91 GeV). It is also the energy of the Higgs bosons. The weak force symmetry (which exists for energies above 250 GeV) breaks spontaneously at 250 GeV, because of the Higgs mechanism. Below 250 GeV, there is no weak force symmetry because particles have masses, since they are mired in the Higgs field which causes inertia. Above 250 GeV, particles become effectively massless, simply because they then have more energy than the Higgs bosons. (By analogy, it is possible to move through syrup if you have enough energy to overcome the sticky binding forces holding together the syrup molecules, but you get stuck after a short distance if you don’t have enough energy!)

Electroweak theory was developed by Sheldon Glashow, Steven Weinberg and Abdus Salam. They showed that early in the big bang, there were three weak gauge bosons and a neutral boson, and that the photon which now exists is a combination of two of the original gauge bosons purely because this avoids being stopped by the weak charge of the vacuum; other combinations are stopped so the photon exists uniquely by the filtering out of other weak gauge bosons. Because the photon does not interact with the weak charge of the vacuum, it only interacts with electric charges. The vacuum is composed of weak charge, but not electric charge, so the photon can penetrate any distance of vacuum without attenuation. This is why electric forces are only subject to geometrical dispersion (inverse-square law).

These developments in the 1960s led to the Standard Model of fundamental particles. In this model, the strong nuclear, weak nuclear and electromagnetic forces all become similar at around 10^14 GeV, but beyond that they differ again, with the electromagnetic force becoming stronger than the strong and weak forces. In 1974, Howard Georgi and Sheldon Glashow suggested a way to unify all three forces into a single superforce at an energy of 10^16 GeV. This ‘grand unified theory’ of all forces apart from gravity has the three forces unified above 10^16 GeV but separated into three separate forces at lower energies. Later ‘supersymmetric’ versions of grand unification doubled the particles of the Standard Model, so that each fundamental particle has a supersymmetric partner. The energy of 10^16 GeV is beyond testing on this planet and in this galaxy, so the only useful prediction that could be made was that the proton should decay, with a half-life that has since been ruled out by experiment.

Edward Witten developed the current mainstream superstring model, which has 10/11 dimensions with 6/7 rolled up. The history of string theory begins in the 1920s with the Kaluza-Klein theory, as I’ve already explained above. Kaluza showed that adding a fifth dimension to general relativity unites the gravity and electromagnetism tensors, while Klein showed that the fifth dimension could remain invisible to us by being rolled up very small. In the late 1960s, it was shown that strings could vibrate and represent fundamental particle energies. In 1985, Philip Candelas, Gary Horowitz, Andy Strominger and Edward Witten suggested that 10-D string theory with the 6 extra dimensions curled up into a Calabi-Yau manifold would reproduce the Standard Model, preserving supersymmetry and yet giving rise to an observable 4-D spacetime in which there is the right amount of difference between left and right handed interactions to account for the parity-violating weak force.

This ‘breakthrough’ speculative invention was called ‘superstrings’ and led to the enormous increase in research in string theory. Finally, in March 1995, Edward Witten proved that 10-D strongly coupled superstring theory is equivalent to 11-D weakly coupled supergravity. Apparently because it was presented in March, Witten named this new 10/11-D mathematics ‘M-theory’.

Witten then made the misleading claim that ‘string theory predicts gravity’:

‘String theory has the remarkable property of predicting gravity’: false claim by Edward Witten in the April 1996 issue of Physics Today, repudiated by Roger Penrose on page 896 of his book Road to Reality, 2004: ‘in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory’. String theory does not predict anything scientific whatsoever for the strength constant of gravity, G!

This means that my work on gravitational force mechanism which does predict gravity correctly is suppressed: http://members.lycos.co.uk/nigelbryancook
All the evidence for dark matter and dark energy is from cosmology models, so why can't the basis of cosmology be re-examined? It is entirely possible to incorporate into general relativity a model for gravity which eliminates the dark matter problem (which is due to the false assumption that gravity has no cause within the universe, correcting that just leaves invisible dust contributions), and the dark energy problem results again from ad hoc force-fitting the theory to the observations. This is not the way to do physics. The theory should not keep getting fiddles to make it fit the facts, that is what went wrong with epicycles in ancient cosmology.

3, 4, 5, 10, 11 dimensions: if you look up at the stars tonight, remember the nearest is over 4 years away (unless you count the sun as a star, in which case 8.3 minutes away). If you quote a distance you are fooling yourself, because everything you see is in the past. Something you see 1 metre away is 3.3 nanoseconds in the past. Spacetime is real because not only visible light itself, but also the physical forces such as electromagnetism and gravity, travel at the speed of light. So when the 'recession speeds of distant stars increase with distance', you know Hubble is not thinking about the fourth dimension, time. The increase in speeds is not just with apparent distance, but with time past. Any variation of speed with time is an acceleration, which implies an effective outward force. Anyway, general relativity treats time (multiplied by the speed of light to get distance units) as a dimension, albeit as a Pythagorean spacetime resultant so that its square has an opposite sign to the squares of the three other dimensions. It works mathematically like a fourth dimension.

The fifth dimension is the Kaluza-Klein dimension, which reconciles Maxwell's basic electromagnetism (the light wave, based on the two curl equations) with general relativity, and says the extra dimension is curled up to give a string particle. The first four dimensions can be viewed as a brane on the five dimensional universe, or as a hologram of the five dimensional universe. It seems that these two differing perspectives are mathematically equivalent and equally valid physically.

The issue then arises that there is a completely different formulation of string theory which is necessary, and involves 10 or 11 dimensional spacetime. This is due to the need to explain supersymmetry in the Standard Model of fundamental particle interactions. Supersymmetry doubles the number of fundamental particles by introducing a 'super partner' for each one. The original particle and its super partner are related by a supersymmetry transformation. This transformation turns a fermion into its partnered super boson, changing the spin from a half integer to an integer value (in terms of h over twice pi, obviously). Because of supersymmetry, even the virtual particles which produce the 'Higgs field' (or spacetime fabric) have supersymmetric partners. Unless supersymmetry is broken, the virtual particles have a mass which is the exact opposite of their super partners, so the Higgs field itself has no mass. However there is a slight break of supersymmetry, and this gives rise to mass. The superpartners have very large masses and existing particle accelerators do not have sufficient energy to detect them. Supersymmetry makes the forces described by the Standard Model - the electro-weak and the strong nuclear force - unify into a superforce at an energy of 10^16 GeV. However it is a lie, as I've proved.

General relativity (which is empirical Newtonian gravity with empirically based corrections for mass energy and field energy contributions to gravity and also the vital contraction which is a correction for conservation of gravitational potential energy, plus the incorporation of the fact - ignored by Newton - that there is no instantaneous action at a distance, hence gravity fields take time to travel and variations propagate at light speed when a mass moves) tells us that there is no friction/drag (resistance to velocity), only resistance to acceleration from the bulk displacement effect I described:

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.

NOTE THE "PERFECT FLUID" DEFINITION. Now see the PERFECT FLUID in Maxwell's treatise:

‘The ... action of magnetism on polarised light [discovered by Faraday not Maxwell] leads ... to the conclusion that in a medium ... is something belonging to the mathematical class as an angular velocity ... This ... cannot be that of any portion of the medium of sensible dimensions rotating as a whole. We must therefore conceive the rotation to be that of very small portions of the medium, each rotating on its own axis [spin] ... The displacements of the medium, during the propagation of light, will produce a disturbance of the vortices ... We shall therefore assume that the variation of vortices caused by the displacement of the medium is subject to the same conditions which Helmholtz, in his great memoir on Vortex-motion, has shewn to regulate the variation of the vortices [spin] of a perfect fluid.’ - Maxwell’s 1873 Treatise on Electricity and Magnetism, Articles 822-3

'When one looks back over the development of physics, one sees that it can be pictured as a rather steady development with many small steps and superimposed on that a number of big jumps.... These big jumps usually consist in overcoming a prejudice.'- P. A. M. Dirac, 'Development of the Physicist's Conception of Nature', in J. Mehra (ed.), The Physicist's Conception of Nature, D. Reidel Pub. Co., 1973.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.

Supersymmetry as currently postulated, with string theory, supergravity and supersymmetric (SUSY) partners, is horse-manure. But the idea that forces unify at high energy (close to the particle core) is not so stupid. Evidence at low energies shows that electromagnetic and weak force strengths increase with collision energy (e.g. electromagnetism is 7% stronger at 91 GeV collisions of electrons, because the polarised shielding of the vacuum is breached slightly, so there is less shielding by the polarized vacuum, and the electric charge seen is higher), and that the strong nuclear force falls with increasing collision energy. Therefore, at some high energy (higher than achievable in accelerators) they will converge. Hence unification quantitatively. Qualitatively, the question is what happens to the gauge bosons when unification of strengths occurs? In other words, are the gauge bosons seen at low energy (where symmetries are broken) different aspects of the unified gauge boson, or is it the other way around, so that at unification there is a soup of many gauge bosons (all those we know)? The second alternative is ugly and I'd like to think that the answer is more subtle, with a few sophisticated gauge bosons whose properties depend on the energy of the vacuum and the electromagnetic fields nearby. We know the photon contains electric field energy which is 50% positive and 50% negative (oscillating).

Viewed like this, electroweak symmetry breaking becomes clearer. The W+ and W- gauge bosons are possibly vacuum charges having mass. The Zo is possibly a combination of half a W+ type and half a W- type field boson. The photon has no mass since it is made of real electromagnetic energy, not virtual electromagnetic energy like the Zo.

The difference between a vacuum "virtual" electron and a real electron is that the real electron has a real energy level. It is not simply living on borrowed time like the "virtual" vacuum electron, which is annihilated after ~ 10^-21 second.

I think if we reduce confusion by discriminating between all vacuum charges as "virtual" and all observable long-lived matter as "real" charge, we will get the answer very simply:

(1) Bosons with mass and/or charge (in other words the weak force Z and W bosons) are "virtual" vacuum (Higgs field/aether) particles; bosons without these are what we call real photons. This is because the real photons are moving through the vacuum like waves in water. The water has mass, but the wave disturbance itself is just transferring energy at the wave speed and it doesn't have real rest mass.

(2) At high energy, you get the equivalent of a shock wave which creates more of a symmetry. Because there is no drag in the "perfect fluid" spacetime fabric, the shock wave analogy leads to a light-wave type effect whereby the weak force Z and W bosons can behave like photons without (a) attenuation (miring in the virtual charge sea), because the virtual charge sea moves too slowly to interfere with them at high energy when they are going near light speed, and without (b) mass (mass is caused by miring, which is prevented at extremely high energies).

You could say the second effect is evidence of epola: the lattice causes electroweak symmetry breaking. At low energy, W and Z particles are mired by the lattice like molasses or honey. But at higher energy (above electroweak unification energy), the W and Z particles are so energetic that the lattice breaks down and simply offers no resistance.

The perfect fluid analogy is not "lucky" or ad hoc speculation; it is empirically justified by the non-ad hoc results of general relativity (not ad hoc cosmological nonsense like the static universe of 1917, and everything else that general relativity has been abused with); it is a result of general relativity which is justified by general relativity's correct local predictions. As I quoted, it is ALSO a conclusion for electromagnetism from Maxwell's treatise.

Take the energy-time form of Heisenberg's uncertainty principle and put in the energy of an electron-positron pair and you find it can exist for ~10^-21 second; the maximum possible range is therefore this time multiplied by c, or 10^-12 metre.
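An order-of-magnitude sketch of that calculation (whether one divides h or hbar by the pair energy, and whether one uses 0.511 or 1.022 MeV, shifts the result by a factor of a few, so only the order of magnitude is meaningful):

```python
# Lifetime t ~ (Planck constant)/(pair energy), maximum range ~ c*t.
h    = 6.626e-34   # J s
hbar = 1.055e-34   # J s
c    = 3.0e8       # m/s
eV   = 1.602e-19   # J

E_pair = 1.022e6 * eV          # rest-mass energy of an electron-positron pair, joules

for name, const in (("h", h), ("hbar", hbar)):
    t = const / E_pair          # lifetime estimate, seconds
    print(f"using {name:4s}: t ~ {t:.1e} s, range c*t ~ {c * t:.1e} m")

# Either choice lands in the 1e-22 to 1e-21 second and 1e-13 to 1e-12 metre range,
# consistent with the rounded figures used in the text.
```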

The key thing to do would be to calculate the transmission of gamma rays in the vacuum. Since the maximum separation of charges is 10^-12 m, the vacuum contains at least 10^36 charges per cubic metre. If I can calculate that the range of gamma radiation in such a dense medium is 10^-12 metre, I'll have substantiated the mainstream picture. The shielding of gamma radiation in ordinary matter depends on the gamma ray energy. [Above 1 MeV gamma ray energy in atomic matter, you get pair-production near nuclei influencing the shielding, which is not of interest for the vacuum, where we are mainly dealing with electrons (they have a much longer life in the vacuum than any heavier particles, according to Heisenberg's energy-time uncertainty formula).] The actual energy of the gamma rays we are dealing with in the vacuum is, I expect, either 0.511 MeV or 1.022 MeV, i.e. exactly once or twice the rest-mass energy of an electron (because the gamma rays come from electron-positron annihilation). Normally you get two gamma rays when an electron and positron annihilate (the gamma rays go off in opposite directions), so the energy is probably 0.511 MeV.

This is actually close to the average gamma ray energy of fission-product fallout a few days after fission, so the shielding is perfectly understood. The penetration of such radiation depends almost entirely upon the Compton effect, the absorption cross-section for which is calculated by the Klein-Nishina formula of quantum mechanics.

All I have to do is to find the mean free path for 0.511 MeV gamma rays (Compton effect scattering of gamma rays by electrons or positrons) in a medium of 10^36 electrons/m^3. I could do this by looking up the distance for the mean free path of such gamma rays in water (as given in books on water shielding for nuclear waste) and then scale the distance according to the density of water in electrons/m^3 as compared to the vacuum, which has 10^36 electrons/m^3. I'll do this when I have a chance. It is possible I'll be way off and that the energy of the gamma rays is much higher (it can't be lower than 0.511 MeV). The point is I think you have only half the picture, i.e. that dealing with vacuum effects that emerge after charges are created but before they annihilate back into gamma rays.
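Here is a sketch of that scaling calculation. The ~10 cm mean free path for ~0.5 MeV gammas in water is an assumed round shielding figure (not a number from the post), and the 10^36 electrons per cubic metre is the vacuum estimate quoted above, so the output is only as good as those inputs:

```python
# Compton scattering mean free path scales inversely with electron density, so
# scale a water figure to the assumed vacuum electron density.
N_A = 6.022e23   # Avogadro's number

n_water  = (1000.0 / 0.018) * N_A * 10.0   # electrons per m^3 in water (10 e- per H2O molecule)
n_vacuum = 1.0e36                          # electrons per m^3 assumed for the vacuum (from the text)

mfp_water  = 0.10                          # assumed mean free path of ~0.5 MeV gammas in water, metres
mfp_vacuum = mfp_water * (n_water / n_vacuum)

print(f"electron density of water          : {n_water:.2e} per m^3")
print(f"scaled mean free path in the vacuum: {mfp_vacuum:.1e} m")
# Whether the scaled figure matches the 1e-12 m pair-separation scale is exactly
# the test proposed in the paragraph above.
```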

The speed of light is involved because trapped energy current is going round in a closed loop (ie spinning) at light speed, like all TEM/Heaviside energy. You get the radius 2GM/c^2 from general relativity or simply from the orbital speed required to trap light:

1. acceleration due to gravity, a = MG/r^2

2. centripetal acceleration of energy with speed c and mass equivalent M (by E=Mc^2) going in a loop of radius r is: a = (c^2)/r

3. Setting 1 and 2 equal: a = MG/r^2 = c^2/r. Multiply this out by r and we get: MG/r = c^2, so r = GM/c^2.

4. Because classical physics is out by a factor of 2 due to kinetic energy at low velocity being only E = (1/2)Mv^2 compared to Einstein's E = Mc^2, the value of the mass we are using in the classical derivation of r = GM/c^2 is too low by a factor of 2, so we need to double the right hand side to make it relativistic: r = 2GM/c^2.
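Plugging the electron's rest mass into the r = 2GM/c^2 result derived in steps 1-4 (standard constants; this is just the arithmetic, with no new physics assumed):

```python
# Black-hole (event horizon) radius for an electron-mass object.
G   = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c   = 3.0e8         # speed of light, m/s
m_e = 9.109e-31     # electron rest mass, kg

r = 2.0 * G * m_e / c**2
print(f"2GM/c^2 for an electron ~ {r:.2e} m")   # ~1.4e-57 m, far below the ~1.6e-35 m Planck length
```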

Gravitation is trapping the energy into the small loop: an inward force from the radiation which is hitting it and bouncing back like a recoil. This gives rise to force. The inward force, in reaction to the outward recession of matter by Newton's 3rd law, is F = ma, where m is the mass of receding matter (the mass of the universe, with a correction for the higher density of the universe at immense distances where the recession speeds are greatest, but before the energy loss/redshift due to recession of the source of the radiation becomes so great that it starts to cut off the inward reaction), and a is the acceleration implied by the variation in speeds of recession (0 to c is linear with times past of 0 to 15,000,000,000 years), hence a = Hc where H is the Hubble constant in reciprocal seconds (the Hubble constant is velocity/distance = 1/time units), giving a ~ 10^-10 ms^-2. The outward force is thus on the order of 10^43 Newtons. It is massive! The inward force is the same, a massive force! It keeps electrons trapped as black holes. For more on the experimental evidence for black hole, i.e. gravitationally trapped, energy currents such as ELECTRONS etc. (which have an event horizon radius 2GM/c^2 ~ 10^-57 m, which is much smaller than the Planck size, refuting string theory speculation), see my Aug 2002 Electronics World article and my home page on the internet http://feynman137.tripod.com/#a, and also see the internet page http://en.wikipedia.org/wiki/Black_hole_electron. I don't like Maxwell's theory because it leads to the Rayleigh-Jeans law for the spectrum of light, which is false and was replaced by Planck's quantum theory. I think the displacement current formula Maxwell uses is simplistic, and that he has not got to the underlying mechanism here: http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html
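An order-of-magnitude sketch of the inward/outward force estimate above. The 15,000,000,000 year age is the figure used in the text; the ~3 x 10^52 kg mass of the receding matter is an assumed round value (not taken from the post), so only the power of ten in the answer is meaningful:

```python
# Effective cosmological acceleration a = Hc, and force F = Ma on the receding matter.
year = 3.156e7                     # seconds in a year
c    = 3.0e8                       # m/s

H = 1.0 / (15.0e9 * year)          # Hubble constant expressed as 1/time, s^-1
a = H * c                          # effective outward acceleration, ~6e-10 m/s^2

M = 3.0e52                         # assumed mass of receding matter, kg (illustrative)
F = M * a                          # Newton's second law
print(f"a = Hc ~ {a:.1e} m/s^2,   F = Ma ~ {F:.1e} N")   # of order 1e43 newtons
```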

Since the atom contains separated positive and negative charge with an intervening vacuum, it is a charged vacuum dielectric capacitor for purposes of dealing with the way it gains and loses energy. The electron resists getting an increase in charge, and jumps in kinetic energy and distance from the nucleus instead. If Yang-Mills is right for electromagnetism (there is plenty of evidence from electroweak theory experiments that it is correct so far as it goes), then electromagnetic forces are due to continual exchanges of radiation between charges. The radiation is TEM wave radiation, transverse electromagnetic. It doesn't have to wave when it is being exchanged in an equilibrium situation, because that is like a direct current (Morse tap or logic step) Heaviside signal being propagated with currents in opposite directions down each of a pair of parallel conductors (a "transmission line" in electronics jargon). With Yang-Mills exchange radiation, the equal energy exchange in opposite directions cancels out the magnetic field curls from each component, so vacuum magnetic self-inductance effects do not stop propagation by being infinite: they are zero. Hence you don't need to have the gauge bosons oscillate to be exchanged between charges. Oscillation (where the electric field varies from a peak of +v volts through zero to -v volts in a cycle which is determined by the frequency, and is accompanied by magnetic fields curling around the direction of energy flow, say clockwise, then stopping, and then curling in an anticlockwise direction) is only required where energy goes from A to B without energy simultaneously going from B to A. If energy is in exchange equilibrium, going both from A to B and from B to A simultaneously, then there is no need for the radiation to be oscillating to propagate. Catt had claimed that all capacitors charge in steps. Therefore, it seemed reasonable that the reason for quantum jumps in putting energy into an atomic electron is that the Maxwellian exponential charging curve for a capacitor - which is based on Maxwell's displacement current equation - is the approximation in classical electromagnetism that needs to be corrected to convert it into quantum reality.

http://www.math.columbia.edu/~woit/wordpress/?p=444#comment-14521:

nigel cook Says: August 12th, 2006 at 4:23 pm “The Copenhagen interpretation was not simply thrown together from mathematical equations. It too was only reached after decades of experimental data that supported it. And finally the standard model was only proposed and accepted after countless meticulous and detailed experiments gathered vast amounts of cold hard data.” - Obsessive Maths Freak. But the Copenhagen interpretation is an ad hoc philosophy, not a mathematical prediction technique, and it doesn’t make unique predictions that have been tested, so you can’t lump it with the Standard Model, which does make predictions and has been tested. There is an industry within physics run by full time science fiction writers who do mathematical philosophy of physics part time, and after about 1916 that was what Bohr did. OK, he did some useful applied nuclear physics theory, such as determining that U235 is the fissioning nuclide in natural uranium, but he just spouted content-less, ad hoc, abjectly speculative philosophy when writing about the nature of reality and the future of theoretical physics. He claimed the Copenhagen Interpretation in 1927 solved everything completely for all time, by separating and so outlawing any progress in understanding how classical and quantum electrodynamics can be reconciled:

‘... the view of the status of quantum mechanics which Bohr and Heisenberg defended - was, quite simply, that quantum mechanics was the last, the final, the never-to-be-surpassed revolution in physics ... physics has reached the end of the road.’ – Sir Karl Popper, Quantum Theory and the Schism in Physics, Rowman and Littlefield, NJ, 1982, p6.

‘... the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book ‘The Logic of Scientific Discovery’]. ... There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation ...’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

Update on gravity mechanism: the shielding area of each electron and quark for gravity is Pi(2GM/c^2)^2 square metres, where M is the mass of the electron or quark. A reduction in gravity occurs because in an immense mass you will get some electrons or quarks behind one another, so they can't both shield gauge boson radiation from you; but because the sizes are so small, black hole size (far, far smaller than even the Planck length), the probability of two masses in any body being directly behind one another is virtually zero. So the force of gravity is directly proportional to the number of particles. The true formula, allowing for particles being behind one another, shows that gravity is proportional to 1 - exp(-xn), where n is the number of particles and x depends on the geometry; but because x is so small, this formula approximates to 1 - exp(-xn) = xn, an approximation which is completely justified in the limit where the product xn is negligibly small compared to 1.
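A short check of that overlap approximation, with purely illustrative values of x and n (expm1 is used so the exact expression keeps its precision for tiny arguments):

```python
# For tiny x*n, the exact 1 - exp(-x*n) and the linear approximation x*n agree;
# they only part company once x*n stops being small compared with 1.
import math

x = 1.0e-40        # illustrative per-particle shielding factor (tiny)
for n in (1.0, 1.0e20, 5.0e39):
    xn = x * n
    exact  = -math.expm1(-xn)     # accurate 1 - exp(-xn) even for very small xn
    approx = xn
    print(f"x*n = {xn:.1e}:  exact = {exact:.6e},  approx = {approx:.6e}")
```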

Further update (29 Aug 06) as a result of recent emailed discussions with Ian Montgomery <imontgomery@atlasmeasurement.com.au>; Guy Grantham <epola@tiscali.co.uk>; David Tombe <sirius184@hotmail.com>:

Gauge bosons are being exchanged between each charge and all other charges all the time. This produces fundamental forces. Accelerate one charge, and the effect is a disturbance in the equilibrium. The effect of this disturbance goes off at the normal speed of the gauge bosons, c. What is happening in the case of the electron is that it is emitting continuous Heaviside slab type electromagnetic radiation (no oscillation, zero frequency), just as it emits heat photons (oscillating radiation) when above absolute zero. We detect the oscillating radiation because it causes other charges to resonate. We detect the non-oscillating radiation (Yang-Mills exchange radiation) because it causes forces and steady state force fields.

Waves flowing in a particulate medium by virtue of the gross disturbance of a large number of particles, such as sound waves in air or water molecules in water waves, dissipate energy because the particles strike one another at random, transferring momentum. This is why radio waves (which lose energy as they spread out) may, in Maxwell's (flawed) world view, be carried by the 'ether', or to be precise the quantum foam vacuum, but photons aren't carried by particles, or they'd lose energy as they propagate. Photons are carried by the normal continuous radiation of gauge bosons (Yang-Mills exchange radiation) which causes gravity, electromagnetic force fields, etc.

Photons emitted by atomic electrons don't spread like radio waves; the energy of a gamma ray doesn't fall off as the inverse of distance like a radio wave. A radio wave is different from a photon in that the energy of the wave changes with distance. Guy has claimed in response to me that this is not a contradiction, and that a radio wave is simply a composite of a vast number of photons, all in phase like coherent light from a laser. This is a possibility: for instance Huygens showed how a large number of fronts of small wavelets, when superimposed, can produce one large wave front. However, if so, then this is progress beyond the Maxwellian visual concept (depicted vaguely as two sine waves orthogonal to one another in his Treatise) whereby a radio wave has a really massive transverse spatial extent, on the order of the wavelength or half the wavelength.

Maxwell missed the Heaviside-Poynting vector and Heaviside's proof that a slab of energy can travel without Maxwell's mechanism if there is equal energy flowing in opposite directions at the same time. This is why, if I connect a 10 km long unterminated pair of wires to a 377 volt DC power supply, I get something on the order of 1 amp flowing in the wires, with the front going at the speed of light for the insulator between and around the wires, for the time of x/c = 33 microseconds that it takes the energy to reach the open circuit at the far end.
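A sketch of the numbers in that example, assuming (as the example implies) that the open pair of wires happens to have a characteristic impedance of about 377 ohms, so the applied voltage and the line impedance coincide numerically:

```python
# Current drawn by a voltage step while the front is in flight, and the
# one-way transit time down the line.
c  = 3.0e8      # signal speed, taken as light speed in the surrounding insulator, m/s
V  = 377.0      # applied DC step, volts
Z0 = 377.0      # assumed characteristic impedance of the wire pair, ohms
L  = 10.0e3     # line length, metres

I = V / Z0                  # current while the step front is travelling
t = L / c                   # one-way transit time to the open far end
print(f"I = {I:.2f} A,  one-way transit time = {t * 1e6:.1f} microseconds")   # 1 A, ~33 us
```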

(1) Photons emitted by single charges

Type of physics: Planck's quantum theory
Electromagnetic E and B fields in wave: constant regardless of distance travelled
Mechanism: Energy transfer is facilitated by a simple disturbance in the normal equilibrium exchange of energy flows between charges (in opposite directions, ie, through one another, continuously)

(2) Radio waves emitted by group behaviour of many charges

Type of physics: Maxwell's classical theory (radio waves)
Electromagnetic E and B fields in wave: fall off inversely with propagation distance and hence time (energy density falls off with inverse square of distance and energy density is proportional to square of field amplitude)
Mechanism: EITHER: (a) the radio wave is the superimposed composite of many photon-type waves emitted by individual electrons in the transmitter aerial, and the fall in the peak electric field strength of the wave with increasing distance is due statistically to the geometric divergence of these photons as they spread out (each individual photon remaining the same); OR (b) some kind of Maxwellian wave whereby displacement current of virtual charges in the vacuum causes a curling magnetic field, which causes induction of vacuum virtual charge displacement current, which again causes a magnetic field, and so on in a rolling wave. The equations are Faraday's induction law (Maxwell's equation for the curl of an electric field being proportional to the rate of change of the magnetic field) and Ampere's law with the current given by Maxwell's displacement current law (so that the curl of a magnetic field is proportional to displacement current, which in turn is proportional to the rate of change of electric field strength). The waves dissipate because particles in the vacuum carrying the wave collide or scatter, which dissipates momentum and energy as the wave propagates. (For waves which are not carried by particles in the vacuum, for example photons which rely on Yang-Mills exchange radiation, i.e. the Heaviside non-oscillating contrapuntal flow of (1) above, there is no mechanism for dissipation because they are not carried by vacuum particles; since there is no mechanism for photons to dissipate, they can't dissipate.)
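A small check of the field-amplitude scaling quoted under (2) above: if the energy density falls as the inverse square of distance and the energy density is proportional to the square of the field amplitude, then the amplitude itself falls off inversely with distance (the reference values here are arbitrary):

```python
# Inverse-square energy density implies 1/r field amplitude.
E1 = 1.0    # field amplitude at r = 1 (arbitrary units)

for r in (1.0, 2.0, 10.0, 100.0):
    energy_density = (E1**2) / r**2       # inverse-square dilution of energy density
    amplitude = energy_density**0.5       # amplitude = sqrt(energy density), up to a constant
    print(f"r = {r:5.0f}:  energy density ~ {energy_density:.4f},  amplitude ~ {amplitude:.4f}")
```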

The material above brought a request from Ian Montgomery for a definition of a gauge boson, how a photon resulting from a disturbance in gauge boson exchange radiation can have any given frequency, what evidence there is for each of the two mechanisms listed above for electromagnetic radiation, whether there is an overlap at any frequencies between the two mechanisms (whereby a wave can be caused by either mechanism; obviously there are as many variations as there are forms of radiation, for instance different types of gauge boson and neutrinos, etc., so there are variants in nature), and what the vacuum contains. Response:

Definition of gauge boson: 'Gauge boson: Any of the particles that carry the four fundamental forces of nature (see forces, fundamental). Gauge bosons are elementary particles that cannot be subdivided, and include the photon, the graviton, the gluons, and the W+, W-, and Z particles.'

The key thing is that in any propagating wave (whether longitudinal like sound or sonar, or transverse such as a water surface wave) you need equal and opposite flows. In a longitudinal wave such as sound, the two pressures act in opposite directions along the line of propagation of the wave, whereas in a transverse wave, the pressures act in opposite directions along a line which is perpendicular to the direction of propagation of the wave. This is due to Newton's 3rd law, since pressure times area is force, and forces are balanced.

In a sound wave you have an outward force associated with the overpressure phase and an inward force associated with the below-ambient pressure phase.

If you just release some air at pressure, no sound wave is generated unless somehow an oscillation is produced (such as by a whistle or similar mechanism). See illustrations at: http://glasstone.blogspot.com/2006/03/outward-pressure-times-area-is-outward.html

THE ELECTROMAGNETIC GAUGE BOSON CAN PROPAGATE WITHOUT OSCILLATION

As you charge up a pair of separated metal spheres with the same type of charge (say electrons), the region between them becomes a "negative electric field". What is actually occurring is that the flow of gauge boson energy intensifies in each direction. It is impossible to have a non-oscillating electromagnetic gauge boson energy flow in one direction only, because such a flow would have an infinite self-inductance problem due to the effect of its uncancelled magnetic field on the virtual charges in the vacuum!

However, if you have two flows of gauge boson energy, one passing through the other (in opposite directions), as actually occurs continuously in Yang-Mills exchange, the magnetic fields of the two Heaviside-Poynting energy currents exactly cancel, so there is no magnetic problem and propagation becomes possible without any oscillation. All you are left with is the electric field in space, which is exactly what is observed. That field lacks the energy density required to polarize virtual charges in space: we know from the success of renormalized charge in QED that the vacuum is not polarized by weak electric fields (there is a definite cutoff - electric fields can only polarize the vacuum charges if their energy density is intense enough to create fresh electron-positron pairs in the vacuum).
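A minimal numerical sketch of that cancellation (my own illustrative construction, not a calculation from this post): for a TEM energy current the magnetic field magnitude is B = E/c, and its sign relative to E reverses when the propagation direction reverses, so two equal counter-propagating flows leave the electric fields added and the magnetic fields cancelled.

    c = 2.998e8                              # speed of light, m/s

    def tem_flow(e_field, direction):
        # Return (E, B) for a TEM energy current; B flips sign with direction (+1 or -1).
        return e_field, direction * e_field / c

    E1, B1 = tem_flow(100.0, +1)             # 100 V/m flow one way (illustrative value)
    E2, B2 = tem_flow(100.0, -1)             # equal flow in the opposite direction

    print("net E =", E1 + E2, "V/m")         # 200.0 V/m: electric fields add
    print("net B =", B1 + B2, "T")           # 0.0 T: magnetic fields cancel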

"If the photon mechanism below is gauge boson ballistic disturbance, how can it have a frequency?" - Ian.

The "frequency" of a disturbance is the number one divided by the duration of the disturbance in seconds.

I've already explained why a propagating disturbance has momentum, giving rise to forces which are equal and opposite (along the direction of propagation in longitudinal waves; perpendicular to the direction of propagation in the case of transverse waves) due to Newton's 3rd law. Further, the pulse shape does not necessarily have to conform to any particular prejudice (sine waves).

Nobody has ever measured the electric field waveform for light, since the oscillation is far too fast to resolve experimentally, even where the intensity is high, for example in a coherent laser burst. For gamma rays, the click on the Geiger counter or scintillation counter is not even directly caused by the gamma ray, but is an indirect result of ionization of gas or of the disturbance to the sodium iodide crystal structure in the scintillation counter. There is no way anyone can measure any waveform, and there is no meaning to the word "waveform" in this context. Hence the association of the word "frequency" with electromagnetic radiation is entirely mathematical above S-band radio (microwaves); you can't see any direct evidence of wave structure.

The wave mechanisms are different for classical and quantum radiation. However, most of Maxwell's theory is gibberish and he doesn't have a clear illustration of what is going on. In the capacitor, where Maxwell gets his key equation for light - the 'displacement current' equation - what is normally attributed to displacement current is actually set up in the first place by the flow of radiation; see my piece about this at http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html and the links on that page. So the dynamics of Maxwell's theory are wrong. The 'displacement current' of virtual charge is set up as a result of the real charge being induced to accumulate in the capacitor plates by electromagnetic radiation exchange.

Maxwell claims that charge accumulates in the plates of the charging capacitor via displacement current in the vacuum due to charge polarization. But this is false since if the charge in the vacuum could be polarized by weak electric fields, it would oppose and cancel out all real charges (quantum field theory uses renormalization which shows that there is a minimum energy density of the electric field which produces free charge that can be polarized). What really occurs in place of Maxwell's 'displacement current' is explained here (and on its links): http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html

Electromagnetic radiation induces real charge to accumulate in each plate, which in turn might create the displacement of some virtual charges in the vacuum between the plates.

There is factual evidence that the capacitor charges up due to radiation, not due to virtual displacement current flowing between the capacitor plates. Maxwell's mechanism is false because it neglects the dynamics. When you set down the facts, you can see that the energy flowing in one plate transmits electromagnetic radiation to the other plate via charge acceleration in the plates, and the other plate sends out an exactly inverted waveform. Both signals exactly cancel as seen from large distances, so exactly zero energy is radiated away from the capacitor, but 100% is used to induce motion of charge in the other plate. This is the mechanism for the Heaviside signal in a capacitor and transmission line.
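Here is a minimal numerical sketch of that charging-by-reflected-energy-current picture, treating the capacitor as a lossless open-ended transmission line fed through a source resistance (the 1.5 V, 1000 ohm and 377 ohm values are my own illustrative assumptions, not figures from this post). The far-end voltage rises in discrete steps, one per round trip of the energy current, and the staircase approximates the textbook exponential charging curve.

    V_source = 1.5       # source voltage, volts (illustrative)
    R_source = 1000.0    # source resistance, ohms (illustrative)
    Z0 = 377.0           # characteristic impedance of the line/'capacitor', ohms (illustrative)

    rho_source = (R_source - Z0) / (R_source + Z0)   # reflection coefficient at the source end
    step = V_source * Z0 / (R_source + Z0)           # first TEM step launched into the line
    v_far = 0.0                                      # voltage at the far (open) end

    for trip in range(1, 9):          # successive round trips of the energy current
        v_far += 2 * step             # incident + reflected wave add at the open end
        print("round trip %d: far-end voltage = %.4f V" % (trip, v_far))
        step *= rho_source            # re-reflection at the source launches the next step

The staircase converges on the full source voltage, exactly the end state a lumped capacitor charging through a resistor would reach.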

Maxwell's theory is like the legendary frauds of science which come from getting cause and effect mixed up. For example, in the book 'How to Lie with Statistics' the example is given that in Holland (I think) researchers found that the houses with the most children had the most storks' nests on the roofs. It is very easy to confuse cause and effect when you don't have a firm grasp of the full dynamics. In that case, the reason for the number of storks' nests was that, on average, people with more children had bigger, older houses to accommodate them, and storks' nests are more likely to be found on the roofs of bigger, older houses. There is no need for more fanciful theories.

Maxwell's idea that there are virtual charges in the vacuum becoming polarized (which is his final mechanism for displacement current) is wrong, because the success of renormalization in QED, which predicts the Lamb shift and the magnetic moments of the leptons (electron, muon, etc.), proves that the vacuum can't be polarized by weak electric fields - only by electric fields strong enough to create pairs of free charges which can then become polarized.

So it is likely that Guy is right and mechanism (2) above (Maxwellian waves) is a complete dud. In that case, all radiation including radio waves is due to mechanism (1), with radio waves diverging and losing energy because they are superpositions of many small quantum waves, just like the continuous wavefronts you can produce in a physics lab using a large number of oscillating sources. The wavefronts combine and add by Huygens' construction.

No, there is no evidence for two separate mechanisms once you dismiss Maxwell, and this adds conviction to the idea that all radiation is due to mechanism (1) above. However, David claims to have a version of Maxwell's theory which works. The key question is whether David's version of the Maxwell theory gets around the polarization problem. QED shows that if the virtual charges in the vacuum were polarizable by any electric field, however weak, every charge would cause enough vacuum polarization around it to cancel out its electric field completely!

Suppose you have an electron. QED suggests that without the vacuum its core would have a charge of about 137e, where e is the electric charge observed at distances of 10^-10 metre or more. Renormalization in QED reduces the electron charge to e because the virtual charge is polarized between the electron core and some distance out (10^-10 m, or probably much less; I am keen to make detailed calculations and actually plot the electric charge and nuclear charge as a function of distance, rather than as a function of collision energy, which is the usual plot made by physicists). So the real electron core has a charge of ~137e, around it there is attracted a shell of virtual positrons of opposite charge, and around that positron shell there is a virtual electron shell. (By 'shell' I mean the mean distance of the virtual positrons freed in the vacuum by the intense close-in electric field, and the somewhat greater mean distance of the virtual electrons; obviously this is statistical, and you do not have every virtual electron at one radius and every virtual positron at another, smaller radius from the real electron core.)
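The '137e' figure is just the observed charge divided by the fine structure constant; a quick sanity check using standard constants (nothing specific to this model):

    import math

    e = 1.602e-19        # observed electron charge, C
    eps0 = 8.854e-12     # permittivity of free space, F/m
    hbar = 1.055e-34     # reduced Planck constant, J.s
    c = 2.998e8          # speed of light, m/s

    alpha = e**2 / (4 * math.pi * eps0 * hbar * c)    # fine structure constant, ~1/137
    print("1/alpha = %.1f" % (1 / alpha))
    print("bare core charge ~ e/alpha = %.3e C (i.e. roughly 137e)" % (e / alpha))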

The exact shielding is dependent on the amount of polarization, ie unpolarized (randomly distributed) virtual charge around the electron core won't do any shielding, but polarization does some since it creates an electric field vector which points away from the electron, cancelling the electron's inward pointing electric field line (electric field lines are by convention labelled with an arrow pointing from positive toward negative charge).

Because there is no polarization in the vacuum due to weak electric fields (because if there was, it would totally cancel out all real charges!), I don't think radio waves can be due to Maxwell's displacement current mechanism. However, it may be possible to have a radio wave in which the mechanism is the same as the Heaviside slab of energy, with the role of Maxwell's displacement current being replaced by that of radiation: http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html

‘… the view of the status of quantum mechanics which Bohr and Heisenberg defended - was, quite simply, that quantum mechanics was the last, the final, the never-to-be-surpassed revolution in physics … physics has reached the end of the road.’ – Sir Karl Popper, Quantum Theory and the Schism in Physics, Rowman and Littlefield, NJ, 1982, p6.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

Therefore I want to preserve the useful Popper mechanism if I can do so, whereas most critics (Catt etc) want to chuck out everything in modern physics just because it is often badly presented (with hype or obfuscation) in its present abstract mathematics form.

The vacuum must contain virtual charges in order that quantum theory work, but there is a constraint that the vacuum is not polarizable unless the energy of the electric field from a charge is strong (ie a charge can only polarize the vacuum very close to it, not at great distances). If a real charge could polarize the vacuum charges without constraint, there would be no observable real electric charges, because the shells of positive and negative vacuum polarization around each real charge would simply expand until they cancelled each real charge out completely.

So the vacuum cannot contain free, polarizable charge. The charges in the vacuum can only be polarized where the electric field is strong enough to first break some kind of constraining force and thereby free up some charge, which is then polarized. So the physics behind renormalization of charge is a two-stage process.

First, the bound vacuum charges have to be freed, which can only occur in strong electric fields quite close to real charges, and then, second, the freed charges are polarized by the electric field which opposes the field and causes the shielding of the bare charge strength (which renormalization requires in quantum field theory).

So some kind of bound electron-positron structure in the vacuum (as contrasted with the mainstream idea of a foam of freely annihilating electrons and positrons) may explain renormalization in quantum field theory.

There is no particle with rest mass between the electron and the muon. The muon is about 205 times the electron mass. Hence, in a randomly colliding sea of electron-positron pairs, using Popper's causal (scattering-type) interpretation of the energy-time Heisenberg uncertainty principle, there will be one muon-antimuon pair for every 205 electron-positron pairs. Hence the vacuum will be about 0.5% heavy particles (muon-antimuon pairs) and 99.5% electron-positron pairs. Far beyond the muon you get many other particles, which become rarer still: pions, quarks, etc.
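A trivial check of that bookkeeping, assuming (as argued above) that the abundance of a virtual pair scales inversely with its mass:

    muon_to_electron_mass = 205.0       # ratio used above (the measured value is ~206.8)

    muon_pairs = 1.0 / muon_to_electron_mass    # muon-antimuon pairs per electron-positron pair
    total = 1.0 + muon_pairs
    print("muon-antimuon pairs:     %.2f%%" % (100 * muon_pairs / total))   # ~0.5%
    print("electron-positron pairs: %.2f%%" % (100 * 1.0 / total))          # ~99.5%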

The Higgs boson has problems: in the simplest model, with just one Higgs boson, the mass of the Higgs boson should be infinite. More sophisticated Higgs theories are built on SUSY, the 10-dimensional supersymmetric unification theory which Witten showed to be a duality of 11-dimensional supergravity. In other words this is string theory: abject speculation with no tests.

My argument is that the origin of all mass is due to gravitationally trapped Z bosons of electroweak theory, which take the place of Higgs bosons. I used this to predict all the masses of all the known particles, and the formula predicts exactly where you can search for certain other short-lived as yet unobserved particles to check it further. See http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html for details.

Briefly, to show a mainstream alternative, here is a link to the latest New Scientist article concerning an aetherial replacement to dark matter as far as galactic evidence goes:

http://www.blogger.com/

It cites the following paper which has been submitted to Physical Review Letters:

http://www.blogger.com/

Astrophysics, abstract astro-ph/0607411
From: T. G. Zlosnik. Date: Tue, 18 Jul 2006 12:43:44 GMT

'Modifying gravity with the Aether: an alternative to Dark Matter', authors: T. G. Zlosnik, P. G. Ferreira, G. D. Starkman

I think their use of "Aether" will guarantee that PRL will reject it. They should specify something like "Higgs field" instead. Stanley Brown, editor PRL, still rejected my paper, even after I deleted the word ether! For his email see http://www.blogger.com/

They have a rule: "Extraordinary claims require extraordinary evidence". That rule would have prevented Einstein getting relativity published, because the most impressive tests came after publication! If you get your evidence before submitting it, they accuse you of having an ad hoc theory! (If relativity was just an ad hoc explanation, it would have been rejected as speculative.)

I have not studied recent evidence for dark matter in galaxies, only in cosmology. I recall something about the evidence having to do with the rotation speed of galaxies being proportional to the square root of the distance from the middle (or do I mis-remember? I cannot find the lecture notes). There is possibly an analogy here with hurricane rotation dynamics and the Coriolis force: http://www.blogger.com/ and http://www.haloscan.com/comments/lumidek/115637347840039579/#587544

Christine Dantas has an interesting discussion here:
http://christinedantas.blogspot.com/2006/08/dark-matter-lay-bare.html

UPDATES: Clarification of the physical basis of the gravity mechanism model: Is the quantum mechanical (virtual particle) vacuum expanding like the matter in the universe, or not? My initial answer from the gravity mechanism ten years ago was no, but I didn't understand the dynamics of the model as a whole at that time, because I had to add a separate model two years ago to allow for the higher density of the early, distant universe in spacetime.

Because the gauge boson (Yang-Mills exchange) radiation that causes gravity and electromagnetism is weakened by the recession of distant masses (just as light from those masses is red-shifted), that exchange radiation is being stretched out as the universe expands. This prevents the effective strength of gravity going towards infinity due to the increasing density of the universe at immense distances. The effective density of the universe would increase towards infinity as the distance we are looking to increases towards 15,000 million light years (time zero). We don't receive an infinite amount of gauge boson radiation from the infinitely high density at such distances because the big bang expansion (by analogy to visible red-shift) in effect stretches (red-shifts) the gauge bosons to an infinite extent as the density goes towards infinity, and these two effects offset one another.

If the virtual-particle-filled vacuum is expanding, then the vacuum energy density would diminish with time, affecting the masses and forces which depend on the 'Higgs field' and Yang-Mills radiation in the vacuum. If the entire source of the virtual charges in the spacetime fabric is Yang-Mills exchange radiation (giving rise to gravity and electromagnetism) which spends part of its time as virtual charge pairs in the vacuum, then these arise from the presence of matter and don't extend for an infinite distance. At present, this possibility seems the most convincing and least speculative extension of existing empirically defensible knowledge. The first priority is to assemble a working set of empirically defensible ideas. The dynamics need to come first; the mathematics of the model must come later (or else the mathematical model will be speculative with too many loose ends, and of no use).

The light photon is a discontinuity/disturbance in the pre-existing equilibrium Yang-Mills continuous (non-oscillating) energy exchange between all charges (producing gravity and other forces). So the photon can only take paths which already exist between charges in the universe. The photon can’t travel along any other paths. This is the deep physical reality of Yang-Mills exchange radiation QFT, corresponding physically to Feynman’s path integrals.
The virtual charge pairs in the vacuum are formed from the energy of the gauge boson exchange radiation.

As for the error of Maxwell, consider the continuous energy transfer in a logic step. That is not an oscillating Maxwellian wave; all it requires is two energy flows (equal and in opposite directions, such as the opposite energy currents guided by each of the two conductors in a transmission line). You can't make that work with just a single wire, which is why, if you connect one terminal of a battery to earth ground, it can't drain the battery: a current of I = V/Z = V/(377 ohms impedance of free space, with a geometrical correction factor for magnetic inductance) flows only until the earth is charged up.
On large scales, radio waves behave as transverse Maxwellian waves: they disperse and lose peak electric and magnetic field strength as they propagate outwards, unlike the photons emitted by single individual charges. In this picture, Maxwellian radio waves propagate dispersively by the carrier motion of virtual charges (displacement current) in the vacuum.

The virtual charges allowing this are created by the force-carrying gauge bosons, which spend part of their time dissociated into matter-antimatter pairs in the vacuum.

The force-carrying “gauge bosons” themselves do NOT require charges in the vacuum in order to propagate, because they are continuous (non-oscillating, non-Maxwellian) energy exchange radiation between charges, like the continuous, non-oscillating exchange of energy which occurs when you place two batteries of similar voltage in a parallel circuit. As soon as you connect such batteries in a parallel circuit, each sends a continuous (endlessly long) logic pulse into the other, which has no mechanism to stop. Neither discharges as a result, because each supplies the other with the same energy as it loses.

This is entirely clear from what is known about logic signals, although the fraudulent steady-state electrical trash such as Ohm’s and Kirchhoff’s laws are still taught, which only deal with COMPLETE circuits, and are then taught with the lying claim that electricity only travels in complete circuits. That is contrary to the fact that information does not go instantly around the circuit: electricity always sets off at light speed, with a current determined by the “vacuum displacement current” between the conductors (which has a resistance called the characteristic impedance, some multiple of the impedance of free space, 377 ohms, the product of the magnetic permeability of free space and the velocity of light), and it doesn’t “know” or care whether the line ahead of it is an open circuit or a closed circuit. So the normal teaching of physics here is a complete fraud and a lie that must be corrected, as it has massive implications for physics.

Response to a dismissive email from Guy Grantham, 25 August 2006 22:10:

1: The prediction of gravity strength G and of electromagnetism strength which I give, using a method which is only compatible with Yang-Mills, is accurate. Nobody else can put forward any calculations which are accurate and based on a mechanistic theory built on observations without any speculation.

2: "Radio waves are generated by the movement and reversal in direction of single individual charges too. We cause electrons to travel up to the end of an antenna and bounce back again, radio waves are emitted." Nobody has ever demonstrated whether the acceleration of a single electron produces constant-energy photons, or instead radio waves which individually spread out, losing peak electric field strength as the inverse of distance. It is possible for a radio wave to decay in peak electric field strength without changing frequency, because the frequency is just the number of cycles per second, i.e. the number of crests (peaks) passing you per second. The frequency has nothing to do with the transverse length of the radio wave. I can generate any frequency I like using any length of aerial I like. Obviously it is hard to get a high efficiency of transmission for a long wavelength using an aerial much shorter than the 'wavelength', but it is quite possible to get transmission using a loading coil at the base of the aerial which changes the resonant frequency, just as you can change the frequency with which a tuning fork vibrates by damping its oscillations. Radio waves are emitted sideways by aerials, i.e. they are emitted in a direction perpendicular to the direction of the oscillating energy flow up and down the aerial.

The peak electric field strength in general falls off inversely with increasing distance for radio waves, because of transverse spreading. This VIOLATES Planck's law whereby energy E = hf = hc/[wavelength]. Radio waves do lose energy WITHOUT varying in frequency. They are divorced from photons. If you claim a radio wave is a composite of a large number of individual photons, you then have the problem of admitting that the real wavelength is not the transverse wavelength, because an electron drifting at 1 mm/second under a 1 amp current in a high-power commercial transmitter aerial will only move a tiny fraction of a millimetre during a full cycle of VHF radio (or of any frequency above ELF, i.e. any radio output you can actually listen to for practical purposes); see the quick numerical check below. Hence the transverse transmission distance is less than 1 millimetre for all wavelengths. For 1 metre waves, the effective aerial length for transmission by a single electron is still under 1 millimetre. This completely undermines the transverse concept of Maxwell's waves, where the wavelength of the wave was supposed to correspond to its transverse dimensions. If so, then the Maxwellian concept of wavelength is a complete fraud for radio waves, since it is not physically correct, does not correspond to the real transverse dimensions of individual radio photons, and is merely a mathematical invention based on the misapplication of the 'wave axiom' of physics, frequency f = c/[wavelength], to radio waves. You can see where Maxwell went wrong: as I said before, he doesn't say how far the oscillation extends transversely in a radio wave.
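The quick numerical check referred to above (the 1 mm/s drift speed is the figure used in the argument; the frequencies are illustrative examples of my own):

    drift_speed = 1.0e-3                      # electron drift speed, m/s (1 mm/s, as above)
    c = 2.998e8                               # speed of light, m/s

    for freq_hz in (1.0e5, 1.0e8, 1.0e9):     # LF, VHF and UHF examples (illustrative)
        cycle_s = 1.0 / freq_hz               # duration of one cycle
        drift_per_cycle = drift_speed * cycle_s
        wavelength = c / freq_hz              # nominal 'wavelength' from the wave axiom
        print("%.0e Hz: drift per cycle = %.1e m, nominal wavelength = %.1e m"
              % (freq_hz, drift_per_cycle, wavelength))

At 100 MHz the drift per cycle comes out around 10^-11 metre, compared with a nominal 3 metre wavelength, which is the mismatch the argument above relies on.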
With a transverse (surface) water wave, the transverse oscillation is measured in similar units to the wavelength, and as the height of water waves increases, the wavelength (perpendicular to the height) also generally increases, so the two are in a roughly fixed ratio. This is not true for radio waves. Maxwell obfuscates by plotting electric field strength (V/m) versus wavelength (m) in his treatise, when this is one-dimensional (a line, i.e. like a longitudinal wave such as sound), not a transverse wave, because it shows only variations along the propagation axis, not variations perpendicular to it. Drawing the field strengths perpendicular to the propagation dimension does not make the wave transverse: I can plot air pressure variation and temperature variation in a sound wave as functions of propagation distance, both drawn perpendicular to the direction of propagation, and the graph will look just like Maxwell's light wave picture, but it will still be a longitudinal sound wave, not a transverse one. To get a genuinely two-dimensional diagram, Maxwell would need to plot E and B varying as functions of dimensions OTHER than the propagation direction of the wave!

3: You haven't grasped the battery. Connect a long pair of open-circuit wires to the terminals of a 1.5 V battery. That instant, a logic pulse of roughly [1.5 volts] / [377 ohms for the space between the wires + the cable resistance for a length of cable equal to ct metres, where t is the time after connecting the wires to the battery terminals] ~ 0.004 amps sets off down the wires (the actual figure will differ, since the impedance won't be exactly 377 ohms unless the wires approximate flat parallel plates separated by a distance equal to their width). The pulse does not know whether it is heading towards another battery in a parallel circuit, or a load such as a light bulb at the far end, or an open circuit. Energy pours into the cable with no knowledge of - and no concern for - where it is going, whether the circuit is complete or open, or what the resistance of the circuit as a whole is.

If you simultaneously connect at the far end another battery with the terminals the same way round (so the two batteries are in parallel), they both fire energy at light speed towards each other. After the logic pulses meet in the middle of the length of cable, they begin to overlap. In the region where they overlap, electrons are not forced to have a net drift in any direction, so there is no electric current in the overlap region. When the logic pulses reach the battery at the other end, they recharge it. Each battery is endlessly pouring out energy (not current: once the equilibrium is set up there is no current, as explained, because current only occurs where electrons are forced to have a net drift, and there is no net drift), and each is receiving energy at the same rate. There is no mechanism by which, at some stage, either battery can decide to stop pouring out energy to the other. Neither does. [To insist otherwise is like the folly of denying that something at constant temperature is emitting heat on the grounds that, if it were, it would cool. Since Prevost in 1792, we have known that apparently static phenomena in nature are usually dynamic equilibria, with the lack of apparent change being due to an equality in the rates of emission and reception.]

Take the simple case of connecting an open-circuit cable to a battery of 1.5 volts.
The ends of the open-circuit cable become charged to 1.5 volts when the logic pulse from the battery arrives there and reflects back from the bound electrons at the end of the wires. Energy is endlessly being sent out by the battery, and is endlessly reflecting off the far end of the wire and returning to the battery. Once this equilibrium is established, the voltage is constant all the way along the cable, so the electric field gradient is E = dV/dx = 0/dx = 0. Because the electric field gradient is zero, there is no electromotive force to accelerate electrons, hence NO ELECTRON DRIFT CURRENT, just electromagnetic (gauge boson) energy flow in both directions at light speed. Without electric current there is no heating in the cable and no energy loss as heat; the only energy taken from the battery is the electromagnetic energy stored in the cable, like the energy in a charged capacitor.
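A minimal sketch of the initial logic-step current for the 1.5 V example above, assuming purely for illustration that the line impedance equals the 377 ohm impedance of free space and that the cable resistance is negligible:

    import math

    mu0 = 4 * math.pi * 1.0e-7      # magnetic permeability of free space, H/m
    c = 2.998e8                     # speed of light, m/s

    Z_free_space = mu0 * c          # ~376.7 ohms (product of permeability and light speed)
    V_battery = 1.5                 # battery voltage, volts

    I_step = V_battery / Z_free_space
    print("impedance of free space ~ %.1f ohms" % Z_free_space)
    print("initial logic-step current ~ %.4f A" % I_step)    # ~0.004 A, as stated above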

FURTHER UPDATE:

Maxwell's models of vacuum polarization in the gap between two capacitor plates generally said that space contains electric dipoles (like polar molecules, positive on one end and negative on the other, due to localization of electrons). These were supposed to become aligned in the vacuum, forming lines of electric force. However, quantum field theory has a vacuum populated by electric monopoles (virtual electrons, virtual positrons, virtual quarks, etc.), with no electric dipoles at all. Maxwell's dipole-orientation picture may at best have some relevance to magnetic field lines (although there are serious problems with that, too). Maxwell's theory of electric field lines and 'displacement current' in the vacuum is flawed. Maxwell uses a continuous differential equation for vacuum currents caused by polarization of vacuum charge. Quantum field theory shows that Maxwell's equation must be wrong in mechanism, and I've explained the corrections needed and the full replacement theory earlier on this blog; quantum field theory shows that Maxwell's conception of polarization is false: in the real world there is a low-energy ('infrared') cutoff on the amount of vacuum polarization:

CALCULATION OF THE POLARIZATION CUTOFF RANGE:

Dyson’s paper http://arxiv.org/abs/quant-ph/0608140 is very straightforward and connects deeply with the sort of physics I understand (I’m not a pure mathematician turned physicist). Dyson writes on page 70:

‘Because of the possibility of exciting the vacuum by creating a positron-electron pair, the vacuum behaves like a dielectric, just as a solid has dielectric properties in virtue of the possibility of its atoms being excited to excited states by Maxwell radiation. This effect does not depend on the quantizing of the Maxwell field, so we calculate it using classical fields.

‘Like a real solid dielectric, the vacuum is both non-linear and dispersive, i.e. the dielectric constant depends on the field intensity and on the frequency. And for sufficiently high frequencies and field intensities it has a complex dielectric constant, meaning it can absorb energy from the Maxwell field by real creation of pairs.’

Pairs are created by the high intensity field near the bare core of the electron, and the pairs become polarised, shielding part of the bare charge. The lower limit cutoff in the renormalized charge formula is therefore due to the fact that polarization is only possible where the field is intense enough to create virtual charges.

The threshold field strength for this effect is 6.9 x 10^20 volts/metre. This is the electric field strength given by Gauss' law at a distance of 1.4 x 10^-15 metre from an electron, which is therefore the maximum range of QED vacuum polarization. This distance comes from the ~1 MeV collision energy used as the lower cutoff in the renormalized charge formula, because in a direct (head-on) collision all of this energy is converted into electrostatic potential energy by the Coulomb repulsion at that distance: to find it, just set 1 MeV equal to the potential energy (electron charge)^2 / (4.Pi.Permittivity.Distance).
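A quick numerical check of those two figures, using standard constants only:

    import math

    e = 1.602e-19                     # electron charge, C
    eps0 = 8.854e-12                  # permittivity of free space, F/m
    k = 1 / (4 * math.pi * eps0)      # Coulomb constant, N.m^2/C^2

    cutoff_J = 1.0e6 * e              # the ~1 MeV lower cutoff expressed in joules

    r = k * e**2 / cutoff_J           # distance at which Coulomb potential energy = 1 MeV
    field = k * e / r**2              # electric field strength at that distance (Gauss' law)

    print("cutoff radius r ~ %.1e m" % r)           # ~1.4e-15 m
    print("field at r     ~ %.1e V/m" % field)      # ~6.9e20 V/m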

Can someone explain to me why there are no books or articles with plots of observable (renormalized) electric charge versus distance from a quark or lepton, let alone plots of weak and nuclear force as a function of distance? Everyone plots forces as a function of collision energy only, which is obfuscating. What you need to know is how the various types of charge vary as a function of distance. Higher energy only means smaller distance. It is pretty clear that when you plot charge as a function of distance, you start thinking about how energy is being shielded by the polarized vacuum, and electroweak symmetry breaking becomes clearer. The electroweak symmetry exists close to the bare charge but breaks at great distances due to some kind of vacuum polarization/shielding effect. Weak gauge bosons are completely attenuated at great distances, but electromagnetism is only partly shielded.
To convert energy into distance from particle core, all you have to do is to set the kinetic energy equal to the potential energy, (electron charge)^2 / (4Pi.Permittivity.Distance). However, you have to remember to use the observable charge for the electron charge in this formula to get correct results (hence at 92 GeV, the observable electric charge of the electron to use is 1.07 times the textbook low-energy electronic charge).
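The same conversion at the electroweak scale, using the 1.07e observable-charge figure quoted above for 92 GeV collisions (the resulting closest-approach distance is my own calculated illustration, not a figure given in this post):

    import math

    e = 1.602e-19                   # low-energy electron charge, C
    eps0 = 8.854e-12                # permittivity of free space, F/m
    k = 1 / (4 * math.pi * eps0)    # Coulomb constant

    collision_J = 92.0e9 * e        # 92 GeV collision energy in joules
    q = 1.07 * e                    # observable electron charge at this energy, as quoted above

    r = k * q**2 / collision_J      # set kinetic energy equal to Coulomb potential energy
    print("92 GeV corresponds to a distance of ~ %.1e m from the particle core" % r)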

More material: http://nige.wordpress.com/