Quantum gravity physics based on facts, giving checkable predictions

Thursday, December 07, 2006

There are two types of loops behind all physics phenomena: creation-annihilation loops, which occur only within a very short range of charges (about 1 fm for a unit charge, corresponding to a field strength of about 10^20 volts per metre, which is sufficient for pair production, making the creation-annihilation loops) and are associated with short range atomic chaos and nuclear phenomena; and the long-range loops of Yang-Mills exchange radiation in electromagnetism and gravity.

The latter loops are not creation-annihilation loops, but are purely the exchange of gauge boson radiation between charges. This looping of radiation is such that all charges in the universe are exchanging this long range radiation, the result being electromagnetism and gravity. Force-causing radiation delivers momentum, which drives the forces. It causes cosmological expansion because the exchange of radiation pushes charges apart; the incoming force-causing exchange radiation from more distant matter is increasingly redshifted with increasing distance of that matter, and so imparts somewhat less force. This is the primary effect of exchange radiation; gravity is a secondary effect occurring as a result of cosmological expansion. (Obviously, at early stages in the big bang when a lot of matter was colliding as a hot ionized gas, there was also expansive pressure from that, as in a nuclear explosion fireball in space. The exact causality and interdependence of cosmological expansion and force generating mechanisms at the earliest times is still under investigation, but it is already known that short range force mechanisms are related to the gravity and electromagnetism forces.)

The recession of galaxies gives an effective outward force of receding matter around us by Newton's second law, which, by Newton's 3rd law and the possibilities known in the Standard Model of particle physics, results in a net inward force of gauge boson radiation. The net result of these forces can be repulsive or attractive with equal magnitude for unit charges in electromagnetism, but there is also a weak residue summation automatically allowed for in the electromagnetism force model (N^{1/2} times weaker than electromagnetism, where N is the number of fermions in the universe) that is always attractive: gravity. (The mechanism for this is described in detail below in bold italics, in conjunction with Lunsford's unification of gravitation and electromagnetism.) This is a very checkable theory: it is founded on well established facts and makes numerous non-ad hoc predictions, quantitatively checkable to greater accuracy as cosmological observational evidence improves!

Using these confirmed facts it was predicted and published in 1996 (way ahead of Perlmutter's observation) that the universe isn't decelerating (without inventing dark energy to offset gravity; gravity simply doesn't operate between receding masses where the exchange radiation is too redshifted for it to work; in addition it relies on pushing effects which are unable - in our frame of observational reference - to operate on the most distant matter). The simplest way to predict gravity, electromagnetism and nuclear forces is to follow renormalized QFT facts. There are no creation-annihilation loops below the IR cutoff, ie beyond 1 fm from a fundamental particle, where the field strength is below 10^18 v/m, so those creation-annihilation loops don’t justify ‘cosmological dark energy’. Long range forces, gravity and electromagnetism, are due (beyond 1 fm) entirely to loop quantum gravity type Yang-Mills exchange radiation (loops that are exchanges of radiation between masses, with nothing to do with charge creation-annihilation loops in the Dirac sea). Short range nuclear forces are due to the pressure of charges in the loops within a 1 fm range.

For the history of LeSage see the survey and comments here. First, it turns out LeSage plagiarized the idea from Newton's friend Fatio.

Numerous objections have been raised against it, all superficially very impressive but scientifically void, like the objections Ptolemy raised against the rotation of the earth ('if the earth rotated, we'd be thrown off; the clouds would always fly from east to west at 1,000 miles per hour', etc.):

(1) Maxwell dismissed LeSage because he thought atoms were "solid" so gravity would depend on the surface area of a planet, not on the mass within a planet. LeSage however had used the fact that gravity depends on mass to argue that the atom is mostly void so that the gravity causing radiation can penetrate the earth and act only on massive nuclei, etc. He was confirmed by discovery of x-rays and penetrating radioactive particles.

(2) The tactic against LeSage then changed to the claim that if gravity was due to exchange radiation, the power would be hot enough to melt atoms.


The flaw here is obvious: the power of the Yang-Mills exchange radiation needed to produce all forces is large. All Standard Model forces are stronger than gravity, yet they all rely on radiation exchange.

The Standard Model is the best tested physical theory: forces result from radiation exchange in spacetime. Mass recedes from us at speeds of 0 to c over spacetime of 0 to 15 billion years, so the outward force is F = m.dv/dt ~ m(Hr)/t = mHv = mH(Hr) ~ 10^43 N, where H is the Hubble parameter. Newton's 3rd law implies an equal inward force, and this force is carried by Yang-Mills exchange radiation.
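As a rough numerical check of the order of magnitude claimed, the outward-force estimate F = ma with a = Hc can be evaluated directly. The mass of the observable universe and the Hubble parameter used below are assumed round-number values for illustration, not results from this post:

```python
# Rough numerical check of the outward-force estimate F = ma with a = Hc.
# The mass of the universe and the Hubble parameter are assumed round
# numbers, not derived quantities.
H = 2.3e-18   # assumed Hubble parameter, s^-1 (~70 km/s/Mpc)
c = 3.0e8     # speed of light, m/s
m = 3.0e52    # assumed mass of the observable universe, kg

a = H * c     # effective outward acceleration, m/s^2
F = m * a     # outward force, newtons
print(f"a = {a:.1e} m/s^2, F = {F:.1e} N")  # F is of order 10^43 N
```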

The radiation is received by mass almost equally from all directions, coming from other masses in the universe; the radiation is in effect reflected back the way it came if there is symmetry that prevents the mass from being moved. The result is then a mere compression of the mass by the amount mathematically predicted by general relativity, i.e., the radial contraction is by the small distance MG/(3c²) = 1.5 mm for the contraction of the spacetime fabric by the mass in the Earth.

A local mass shields the force-carrying radiation exchange, because the distant masses in the universe have high recession speeds, whereas a nearby mass is not receding significantly. By Newton’s 2nd law, the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = m.dv/dt = mv/(x/c) = mcv/x = 0, since v = 0. Hence, by Newton’s 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, so you get pushed towards it. This is why apples fall.

Geometrically, the unshielded gravity force is equal to the total inward force, F = ma ~ mcH, multiplied by the proportion of the shielded area of a spherical surface around the observer (illustration here). The surface area of the sphere with radius R (the average distance of the receding matter that is contributing to the inward gauge boson force) is 4*Pi*R². The shielding area of a local mass is projected on to this area: the local mass of say the planet Earth, the centre of which is distance r from you, casts a shadow (on the distant surface 4*Pi*R²) equal to its shielding area multiplied by the simple ratio (R/r)². This ratio is very big. Because R is a fixed distance as far as we are concerned here, the most significant variable is the 1/r² factor, which we all know is the Newtonian inverse square law of gravity.
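The shadow geometry can be sketched numerically: a shield of cross-section sigma at distance r casts a shadow of area sigma*(R/r)² on the distant sphere, so the shielded fraction of that sphere falls as 1/r². The numbers below are arbitrary illustration values:

```python
import math

def shielded_fraction(sigma, r, R):
    """Fraction of the sphere of radius R shadowed by a shield of
    cross-section sigma at distance r from the observer."""
    shadow_area = sigma * (R / r) ** 2
    sphere_area = 4.0 * math.pi * R ** 2
    return shadow_area / sphere_area  # = sigma/(4*pi*r^2): R cancels out

f1 = shielded_fraction(sigma=1.0, r=10.0, R=1e6)
f2 = shielded_fraction(sigma=1.0, r=20.0, R=1e6)
print(f1 / f2)  # doubling r quarters the shielded fraction: prints 4.0
```

Note that R cancels, which is why the fixed distance R drops out of the Newtonian 1/r² dependence.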

More here (that page is under revision with corrections and new information here and a revised version as a properly edited paper will go here). There is strong evidence from electromagnetic theory that every fundamental particle has black-hole cross-sectional shield area for the fluid analogy of general relativity. (Discussed here.)

The effective shielding radius of a black hole of mass M is equal to 2GM/c². A shield, like the planet earth, is composed of very small, sub-atomic particles. The very small shielding area per particle means that there will be an insignificant chance of the fundamental particles within the earth ‘overlapping’ one another by being directly behind each other.

The total shield area is therefore directly proportional to the total mass: the total shield area is equal to the area of shielding by 1 fundamental particle, multiplied by the total number of particles. (Newton showed that, by the inverse-square law, a spherically symmetric arrangement of masses, say in the earth, gravitates as if the whole mass were concentrated at the centre, because the mass within a shell depends on its area and the square of its radius.) The earth’s mass in the standard model is due to particles associated with up and down quarks: the Higgs field.
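The claim that overlap is negligible can be checked with order-of-magnitude figures. All values below are assumptions for illustration (nucleon mass standing in for the fundamental particles, black-hole cross-section 2Gm/c² as stated above):

```python
# Rough check that particles in the Earth almost never "overlap" along a
# line of sight, if each shields with the cross-section of a black hole of
# the same mass. All figures are order-of-magnitude assumptions.
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
c = 3.0e8             # m/s
m_nucleon = 1.67e-27  # kg (assumed per-particle mass)
M_earth = 5.97e24     # kg
R_earth = 6.37e6      # m

r_bh = 2 * G * m_nucleon / c**2       # black-hole radius of one nucleon
sigma = math.pi * r_bh**2             # shielding cross-section per particle
N = M_earth / m_nucleon               # number of such particles in the Earth
total_shield = N * sigma              # summed shield area
earth_cross_section = math.pi * R_earth**2
print(total_shield / earth_cross_section)  # a fantastically small fraction
```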

From the illustration here, the total outward force of the big bang,
(total outward force) = ma = (mass of universe).(Hubble acceleration, a = Hc, see detailed discussion and proof here),

while the gravity force is the shielded inward reaction (by Newton’s 3rd law the outward force has an equal and opposite reaction):

F = (total outward force).(cross-sectional area of shield projected to radius R) / (total spherical area with radius R).

The cross-sectional area of the shield projected to radius R is equal to the area of the fundamental particle (Pi multiplied by the square of the radius of the black hole of similar mass), multiplied by (R/r)², which is the inverse-square law for the geometry of the implosion. The total spherical area with radius R is simply 4*Pi*R². Inserting the simple Hubble law results c = RH and R/c = 1/H gives us F = (4/3)*Pi*Rho*G²M²/(Hr)². (Notice the presence of M² in this equation, instead of mM or similar; this is evidence that gravitational mass is ultimately quantized into discrete units of M, see this post for evidence, ie this schematic idealized diagram of polarization - in reality there are not two simple shells of charges, but that is similar to the net effect for the purpose required for coupling of masses - and this diagram showing numerically how observable particle masses can be built up on the polarization shielding basis. More information in posts and comments on this blog.) We then set this equal to F = Ma and solve, getting G = (3/4)H²/(Pi*Rho). When the effect of the higher density of the universe at the great distance R is included, this becomes

G = (3/4)H²/(Pi*Rho_local*e³). This appears to be very accurate.
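The stated formula G = (3/4)H²/(Pi*Rho_local*e³) can be turned around numerically: given the measured G and an assumed Hubble parameter, it implies a particular local density. Both input values below are round-number assumptions for illustration only, not an endorsement of the formula's accuracy:

```python
# Numerical sketch of G = (3/4)H^2/(pi*rho_local*e^3), inverted to show the
# local density the formula would require. H is an assumed round number.
import math

H = 2.3e-18    # assumed Hubble parameter, s^-1
G = 6.674e-11  # measured gravitational constant, m^3 kg^-1 s^-2

rho_local = 0.75 * H**2 / (math.pi * G * math.e**3)
print(f"implied local density: {rho_local:.2e} kg/m^3")
```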

Newton's gravitation says the acceleration field a = MG/r².

Let's go right through the derivation of the Einstein-Hilbert field equation in a non-obfuscating way.

To start with, the classical analogue of general relativity's field equation is Poisson's equation

∇²E = 4*Pi*Rho*G

Here ∇² is the Laplacian operator (well known in heat diffusion) acting on E, which for radial symmetry (x = y = z = r) of a field implies:

∇²E
= d²E/dx² + d²E/dy² + d²E/dz²
= 3*d²E/dr²


To derive Poisson's equation in a simple way (not mathematically rigorous), observe that for non-relativistic situations

E = (1/2)v² = MG/r

(Kinetic energy per unit mass gained by a test particle falling to distance r from mass M is simply the gravitational potential energy per unit mass released by the fall! Simple!)

Now, observe for spherical geometry and uniform density (where density Rho = M/[(4/3)*Pi*r³]),

4*Pi*Rho*G = 3MG/r³ = 3[MG/r]/r²

So, since E = (1/2)v² = MG/r,

4*Pi*Rho*G = 3[(1/2)v²]/r² = (3/2)(v/r)²

Here, the ratio v/r = dv/dr when translating to a differential equation, and as already shown ∇²E = 3*d²E/dr² for radial symmetry, so

4*Pi*Rho*G = (3/2)(dv/dr)² = ∇²E

Hence proof of Poisson's gravity field equation:

∇²E = 4*Pi*Rho*G.
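Independently of the heuristic route above, the target equation itself can be verified numerically: inside a uniform-density sphere, the Newtonian potential phi satisfies Laplacian(phi) = 4*Pi*G*Rho. The Earth-like figures below are illustration values:

```python
# Finite-difference check that the interior potential of a uniform-density
# sphere satisfies Poisson's equation, Laplacian(phi) = 4*pi*G*rho.
import math

G, M, R = 6.674e-11, 5.97e24, 6.37e6   # Earth-like numbers, for illustration
rho = M / ((4.0 / 3.0) * math.pi * R**3)

def phi(r):
    # Newtonian potential inside a uniform-density sphere of mass M, radius R
    return -G * M * (3 * R**2 - r**2) / (2 * R**3)

r, h = 0.5 * R, 1.0e3   # evaluate at half the radius, 1 km step
d2phi = (phi(r + h) - 2 * phi(r) + phi(r - h)) / h**2   # second derivative
dphi = (phi(r + h) - phi(r - h)) / (2 * h)              # first derivative
laplacian = d2phi + 2 * dphi / r   # radial Laplacian for spherical symmetry

print(laplacian / (4 * math.pi * G * rho))  # ratio ~ 1
```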

To get this expressed as tensors you begin with a Ricci tensor Ruv for curvature (a contraction of the Riemann tensor).

Ruv = 4*Pi*G*Tuv,

where Tuv is the energy-momentum tensor which includes potential energy contributions due to pressures, but is analogous to the density term Rho in Poisson's equation. (The density of mass can be converted into energy density simply by using E = mc².)

However, this equation Ruv = 4*Pi*G*Tuv was found by Einstein to be a failure because the divergence of Tuv should be zero if energy is conserved. (A uniform energy density will have zero divergence, and Tuv is of course a density-type parameter. The energy potential of a gravitational field doesn't have zero divergence, because it diverges - falls off - with distance, but uniform density has zero divergence simply because it doesn't fall with distance!)

The only way Einstein could correct the equation (so that the divergence of the source term is zero) was by replacing Tuv with Tuv - (1/2)(guv)T, and doubling the coupling constant to preserve the Newtonian limit, where R (below) is the trace of the Ricci tensor and T is the trace of the energy-momentum tensor:

Ruv = 8*Pi*G*[Tuv - (1/2)(guv)T]

which is equivalent to

Ruv - (1/2)Rguv = 8*Pi*G*Tuv

This is the full general relativity field equation (ignoring the cosmological constant and dark energy, which are incompatible with any Yang-Mills quantum gravity because, to use an over-simplified argument, the redshift of gravity-causing exchange radiation between receding masses over long ranges cuts off gravity, negating the need for dark energy to explain observations).

There are no errors as such in the above, but there are errors in the way the metric is handled and in ignoring quantum gravity effects on the gravitational coupling G.

Because Einstein doesn't mathematically distinguish between the volume (and its dimensions) of spacetime and the gravity causing volume (and its dimensions) of the spacetime fabric (the Dirac sea or the Yang-Mills exchange radiation which produces forces and curvature), there have always been disputes regarding the correct metric.

Kaluza around 1919 suggested to Einstein that the metric should be 5 dimensional, with an extra distance dimension. Klein suggested the latter is rolled up in a particle. Kaluza's argument was that by changing the metric there is hope of unifying gravitation with Maxwell's photon. However, the ad hoc approach of changing the number of dimensions doesn't predict anything checkable, normally. (Mainstream string theory, M-theory, is an example.)

The exception is Lunsford's 6-dimensional unification (using 3 distance-like dimensions and 3 time-like dimensions), which correctly (see below for how Yang-Mills gauge boson redshift alters general relativity without needing a cosmological constant) predicts the correct electrodynamics and gravitational unification and predicts the absence of a cosmological constant.

The physical basis for Lunsford's solution of the problem is clear: one set of 3 dimensions is describing the contractable dimensions of matter like the Michelson-Morley instrument, while the other set of 3 dimensions is describing the 3 dimensions in the volume of space. This gets away from using the same dimensions to describe the ever expanding universe as are used to describe the contraction of a moving particle in the direction of its motion.

The spacetime fabric is not the same thing as geometric volume: geometric volume is ever expanding due to the big bang, whereas the spacetime fabric has force causing "curvature" due to the presence of masses locally (or pressure, to use a 3 dimensional analogy to the 2-dimensional rubber sheet physics used traditionally to "explain" curvature):

(1). geometric volume is ever expanding due to the big bang (3-dimensions)
(2). spacetime fabric has force causing 'curvature' which is associated with contractions (not uniform expansion!) due to the presence of masses locally

The dimensions describing (1) don't describe (2). Each needs at least 3 dimensions, hence there are 6 dimensions present for a mathematically correct description of the whole problem. My approach in a now-obsolete CERN paper is to tackle this dynamical problem: as the volume of the universe increases, the amount of Dirac sea per unit volume increases because the Dirac sea surrounds matter. Empty a bottle of water, and the amount of air in it increases. A full dynamical treatment of this predicts gravity. (This approach is a dual to another calculation which is based on Yang-Mills exchange radiation pressure, but is entirely consistent. Radiation and Dirac sea charges are both intimately related by pair production/annihilation 'loops', so strictly speaking the radiation calculation of gravity is valid below the IR cutoff of QFT, whereas the Dirac sea hydrodynamic effect becomes most important above the IR cutoff. In any case, the total gravity force as the sum of both radiation and charge pressures will always be accurate, because when loops transform a fraction f of the radiation into charged loops with material-like pressure, the fractional contribution of the latter is 1-f, so the total gravity is f+1-f = 1, ie, it is numerically the same as assuming either that 100% is due to radiation or 100% is due to Dirac sea charges.)

In particular, matter contraction is a case where distance can be defined by the matter (ie, how many atoms there are along the length of the material), while the expanding universe cannot be dealt with this way because of the issue that time delays become vitally important (two widely separated objects always see one another as being in the past, due to the immense travel time of the light, and since effects like gravity go at light speed, what matters physically is time delay). Hence, it is rational to describe the contraction of matter due to its motion and gravitational properties with a set of distance dimensions, but to use a set of time dimensions to describe the ever expanding cosmology.

Danny Ross Lunsford’s major paper, published in Int. J. Theor. Phys., v 43 (2004), No. 1, pp.161-177, was submitted to arXiv.org but was removed from arXiv.org by censorship apparently since it investigated a 6-dimensional spacetime which again is not exactly worshipping Witten’s 10/11 dimensional M-theory. It is however on the CERN document server at http://doc.cern.ch//archive/electronic/other/ext/ext-2003-090.pdf, and it shows the errors in the historical attempts by Kaluza, Pauli, Klein, Einstein, Mayer, Eddington and Weyl. It proceeds to the correct unification of general relativity and Maxwell’s equations, finding 4-d spacetime inadequate:

‘... We see now that we are in trouble in 4-d. The first three [dimensions] will lead to 4th order differential equations in the metric. Even if these may be differentially reduced to match up with gravitation as we know it, we cannot be satisfied with such a process, and in all likelihood there is a large excess of unphysical solutions at hand. ... Only first in six dimensions can we form simple rational invariants that lead to a sensible variational principle. The volume factor now has weight 3, so the possible scalars are weight -3, and we have the possibilities [equations]. In contrast to the situation in 4-d, all of these will lead to second order equations for the g, and all are irreducible - no arbitrary factors will appear in the variation principle. We pick the first one. The others are unsuitable ... It is remarkable that without ever introducing electrons, we have recovered the essential elements of electrodynamics, justifying Einstein’s famous statement ...’

D.R. Lunsford shows that 6 dimensions in SO(3,3) should replace the Kaluza-Klein 5-dimensional spacetime, unifying GR and electromagnetism: ‘One striking feature of these equations ... is the absent gravitational constant - in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behavior. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant - so this theory explains why ordinary general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularized.’

In a comment on Woit's blog, Lunsford writes: ‘... I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint). ... GR is undeniably correct on some level - not only does it make accurate predictions, it is also very tight math. There are certain steps in the evolution of science that are not optional - you can’t make a gravity theory that doesn’t in some sense incorporate GR at this point, any more than you can make one that ignores Newton on that level. Anyone who claims that Einstein’s analysis is all wrong is probably really a crackpot. ... my work has three time dimensions ... This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM.’

In another comment on Woit's blog, Lunsford writes: ‘... the idea was really Riemann’s, Clifford’s, Mach’s, Einstein’s and Weyl’s. The odd thing was, Weyl came so close to getting it right, then, being a mathematician, insisted that his theory explain why spacetime is 4D, which was not part of the original program. Of course if you want to derive matter from the manifold, it can’t be 4D. This is so simple that it’s easy to overlook.

‘I always found the interest in KK theory curiously misplaced, since that theory actually succeeds in its original form, but the success is hollow because the unification is non-dynamical.’

Regarding Lunsford’s statement that ‘since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv’, the way that both gravity and electromagnetism derive from the same source is as follows:

A capacitor QFT model in detail. The gauge bosons must travel between all charges, they cannot tell that an atom is "neutral" as a whole, instead they just travel between the charges. Therefore even though the electric dipole created by the separation of the electron from the proton in a hydrogen atom at any instant is randomly orientated, the gauge bosons can also be considered to be doing a random walk between all the charges in the universe.

The random-walk vector sum for the charges of all the hydrogen atoms is the voltage for a single hydrogen atom (by mass, the universe is something like 90% hydrogen), multiplied by the square root of the number of atoms in the universe.

This allows for the angles of each atom being random. If you have a large row of charged capacitors randomly aligned in a series circuit, the average voltage resulting is obviously zero, because you have the same number of positive terminals facing one way as the other.

So there is a lot of inefficiency, but in a two or three dimensional set up, a drunk taking an equal number of steps in each direction does make progress. Taking 1 step per second, he goes an average net distance of about t^0.5 steps from the starting point after t seconds.

For air molecules, the same occurs so instead of staying in the same average position after a lot of impacts, they do diffuse gradually away from their starting points.
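The drunkard's-walk scaling is easy to confirm by Monte Carlo simulation; the step counts and trial count below are arbitrary illustration values:

```python
# Monte Carlo sketch of the drunkard's walk: after t unit steps in random
# directions, the mean net displacement grows like sqrt(t).
import math
import random

random.seed(1)

def mean_displacement(t, trials=2000):
    """Mean net distance from the start after t unit steps, over many walks."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(t):
            theta = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += math.hypot(x, y)
    return total / trials

d100, d400 = mean_displacement(100), mean_displacement(400)
print(d400 / d100)  # quadrupling the steps roughly doubles the displacement
```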

Anyway, for the electric charges comprising the hydrogen and other atoms of the universe, each atom is a randomly aligned charged capacitor at any instant of time.

This means that the gauge boson radiation being exchanged between charges to give electromagnetic forces in Yang-Mills theory will have the drunkard’s walk effect, and you get a net electromagnetic field of the charge of a single atom multiplied by the square root of the total number in the universe.

Now, if gravity is to be unified with electromagnetism (also basically a long range, inverse square law force, unlike the short ranged nuclear forces), and if gravity is due to a geometric shadowing effect (see my home page for the Yang-Mills LeSage quantum gravity mechanism with predictions), it will depend on only a straight-line charge summation.

In an imaginary straight line across the universe (forget about gravity curving geodesics, since I’m talking about a non-physical line for the purpose of working out gravity mechanism, not a result from gravity), there will be on average almost as many capacitors (hydrogen atoms) with the electron-proton dipole facing one way as the other, but not quite the same numbers.


You find that statistically, a straight line across the universe is 50% likely to have an odd number of atoms falling along it, and 50% likely to have an even number of atoms falling along it. Clearly, if the number is even, then on average there is zero net voltage. But in all the 50% of cases where there is an odd number of atoms falling along the line, you do have a net voltage. The situation in this case is that the average net voltage is 0.5 times the net voltage of a single atom. This causes gravity. The exact weakness of gravity as compared to electromagnetism is now predicted. Gravity is due to 0.5 x the voltage of 1 hydrogen atom (a "charged capacitor").

Electromagnetism is due to the random walk vector sum between all charges in the universe, which comes to the voltage of 1 hydrogen atom (a "charged capacitor"), multiplied by the square root of the number of atoms in the universe. Thus, ratio of gravity strength to electromagnetism strength between an electron and a proton is equal to: 0.5V/(V.N^0.5) = 0.5/N^0.5. V is the voltage of a hydrogen atom (charged capacitor in effect) and N is the number of atoms in the universe. This ratio is equal to 10^-40 or so, which is the correct figure within the experimental errors involved.
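The claimed ratio 0.5/N^0.5 is a one-line calculation; the value of N below is an assumed round number (~10^80 charges in the observable universe), so the result is order-of-magnitude only:

```python
# Numerical sketch of the claimed gravity/electromagnetism strength ratio
# 0.5/sqrt(N). N is an assumed round-number count of charges.
import math

N = 1e80                       # assumed number of charges in the universe
ratio = 0.5 / math.sqrt(N)
print(f"predicted gravity/EM strength ratio: {ratio:.1e}")  # ~5e-41
```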


I'm quite serious, as I said to Prof. Distler: tensors have their place, but they aren't the way forward with gravity, because people have been getting tangled up with them since 1915, and we now have a landscape of cosmologies with cosmological constants (dark energy) fiddled to fit observation; that isn't physics.

For practical purposes, we can use Poisson's equation as a good approximation and modify it to behave like the Einstein field equation. Example: ∇²E = 4*Pi*Rho*G becomes ∇²E = 8*Pi*Rho*G when dealing with light that is transversely crossing gravitational field lines (hence light falls towards the sun by twice the amount Newton's law predicts).

When gravity deflects an object with rest mass that is moving perpendicularly to the gravitational field lines, it speeds up the object as well as deflecting its direction. But because light is already travelling at its maximum speed (light speed), it simply cannot be speeded up at all by falling. Therefore, that half of the gravitational potential energy that normally goes into speeding up an object with rest mass cannot do so in the case of light, and must go instead into causing additional directional change (downward acceleration). This is the mathematical physics reasoning for why light is deflected by precisely twice the amount suggested by Newton’s law.
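The factor of two can be checked against the standard grazing-ray deflection formulae for the Sun, 2GM/(bc²) (Newtonian) versus 4GM/(bc²) (general relativity); the solar figures below are standard textbook values:

```python
# Deflection of a light ray grazing the Sun: the GR angle 4GM/(b*c^2) is
# exactly twice the Newtonian 2GM/(b*c^2), matching the factor-of-two
# argument above.
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
M_sun = 1.989e30  # kg
b = 6.96e8        # solar radius as impact parameter, m

newtonian = 2 * G * M_sun / (b * c**2)   # radians
einstein = 4 * G * M_sun / (b * c**2)    # radians
arcsec = math.degrees(einstein) * 3600
print(f"GR deflection: {arcsec:.2f} arcsec (twice Newtonian)")
```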

That's the physics behind the maths of general relativity! It's merely very simple energy conservation; the rest is just mathematical machinery! Why can't Prof. Distler grasp this stuff? I'm going to try to rewrite my material to make it crystal clear even to stringers who don't like to understand the physical dynamics!

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Regarding the physics of the metric, guv: in 1949 a crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation; see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4:
‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 - v²/c²)^(1/2), where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = Eo/(1 - v²/c²)^(1/2), where Eo is the potential energy of the dislocation at rest.’


Specifying that the distance/time ratio = c (constant velocity of light), then tells you that the time dilation factor is identical to the distance contraction factor.

Feynman explained that, in general relativity, the contraction around a static mass M is simply a reduction in radius by (1/3)MG/c², or 1.5 mm for the Earth. You don’t need the tensor machinery of general relativity to get such simple results. You can do it just using the equivalence principle of general relativity plus some physical insight:

The velocity needed to escape from the gravitational field of a mass M (ignoring atmospheric drag), beginning at distance x from the centre of mass M, by Newton’s law will be v = (2GM/x)^(1/2), so v² = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v.

By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v² = 2GM/x) into the Fitzgerald-Lorentz contraction, giving gamma = (1 - v²/c²)^(1/2) = [1 - 2GM/(xc²)]^(1/2).
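Earth-surface numbers make the size of this effect concrete: the escape velocity is tiny compared with c, so the gravitational factor is extremely close to 1. The Earth figures below are standard values used for illustration:

```python
# Earth-surface numbers for the escape-velocity argument: v_esc = sqrt(2GM/x)
# and the gravitational factor [1 - 2GM/(x*c^2)]^(1/2).
import math

G = 6.674e-11  # m^3 kg^-1 s^-2
c = 2.998e8    # m/s
M = 5.97e24    # Earth mass, kg
x = 6.37e6     # Earth radius, m

v_esc = math.sqrt(2 * G * M / x)                # escape velocity, m/s
gamma = math.sqrt(1 - 2 * G * M / (x * c**2))   # gravitational factor
print(f"v_esc = {v_esc / 1000:.1f} km/s, gamma = {gamma:.12f}")
```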

However, there is an important difference between this gravitational transformation and the usual Fitzgerald-Lorentz transformation, since length is only contracted in one dimension with velocity, whereas length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!), with spherically symmetric gravity. Using the binomial expansion to the first two terms of each:

Fitzgerald-Lorentz contraction effect: gamma = x/xo = t/to = mo/m = (1 - v²/c²)^(1/2) = 1 - (1/2)v²/c² + ...

Gravitational contraction effect: gamma = x/xo = t/to = mo/m = [1 - 2GM/(xc²)]^(1/2) = 1 - GM/(xc²) + ...,

where for spherical symmetry (x = y = z = r), we have the contraction spread over three perpendicular dimensions, not just one as in the case of the FitzGerald-Lorentz contraction: x/xo + y/yo + z/zo = 3r/ro. Hence the radial contraction of space around a mass is r/ro = 1 - GM/(3rc²).

Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c².
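The quoted 1.5 mm contraction for the Earth follows directly from (1/3)GM/c² with standard Earth values:

```python
# The claimed Earth contraction (1/3)GM/c^2 evaluated numerically.
G = 6.674e-11  # m^3 kg^-1 s^-2
c = 2.998e8    # m/s
M = 5.97e24    # Earth mass, kg

contraction = G * M / (3 * c**2)  # metres
print(f"radial contraction: {contraction * 1000:.2f} mm")  # ~1.5 mm
```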

There is more than one way to do most things in physics. Just because someone cleverer than me can use tensor analysis to do the above doesn’t itself discredit intuitive physics. General relativity is not a religion unless you make it one by insisting on a particular approach. The stuff above is not “pre-GR” because Newton didn’t do it. It’s still GR alright. You can have different roads to the same thing even in GR. Baez and Bunn have a derivation of Newton’s law from GR which doesn’t use tensor analysis: see http://math.ucr.edu/home/baez/einstein/node6a.html

Friedmann’s solution to general relativity says the effective dimension of the universe R should increase (for critical density) as t^(2/3). The two-thirds power of time is due to the effect of gravity.

Now, via the October 1996 issue of Electronics World, I had an 8-page paper analysing this, predicting the results Perlmutter found two years later.

General relativity is widely accepted not to be the final theory of quantum gravity, and the discrepancy is that a Standard Model type (ie Yang-Mills) quantum field theory necessitates gauge boson radiation (“gravitons”) being exchanged between the gravitational “charges” (ie masses). In the expanding universe, over vast distances the exchange radiation will suffer energy loss like redshift when being exchanged between receding masses.

This predicts that gravity falls off over large (cosmological sized) distances. As a result, Friedmann’s solution is false. The universe isn’t slowing down! Instead of R ~ t^(2/3), the corrected theoretical prediction turns out to be R ~ t, which is confirmed by Perlmutter’s data from two years after the 1996 prediction was published. Hence there is no need for dark energy; instead there is simply no gravity to pull back very distant objects. Nobel Laureate Phil Anderson grasps this epicycle/phlogiston type problem:

‘the flat universe is just not decelerating, it isn’t really accelerating’ - Phil Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901
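The difference between Friedmann’s decelerating R ~ t^(2/3) expansion and the coasting R ~ t expansion can be summarised by the deceleration parameter q = -R''R/(R')^2. A minimal sketch, assuming only the power-law forms stated above:

```python
# Sketch: deceleration parameter for a power-law expansion R = t^n.
# R' = n t^(n-1), R'' = n(n-1) t^(n-2), so q = -R''R/(R')^2 = (1 - n)/n,
# independent of t.
def deceleration_parameter(n):
    return (1 - n) / n

print(deceleration_parameter(2/3))  # 0.5: Friedmann critical density, decelerating
print(deceleration_parameter(1))    # 0.0: R ~ t, no deceleration ("just not decelerating")
```

A q of exactly zero for R ~ t matches Anderson’s remark: the flat universe is neither decelerating nor really accelerating.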

To prove Louise’s MG = tc^3 (for a particular set of assumptions which avoid a dimensionless multiplication factor of e^3, which could be included according to my detailed calculations from a gravity mechanism):

(1) Einstein’s equivalence principle of general relativity:

gravitational mass = inertial mass.

(2) Einstein’s mass-energy equivalence: inertial mass has an equivalent energy:

E = mc^2

This equivalent energy is “potential energy” in that it can be released when you annihilate the mass using anti-matter.

(3) Gravitational mass has a potential energy which could be released if somehow the universe could collapse (implode):

Gravitational potential energy of mass m, in the universe (the universe consists of mass M at an effective average radius of R):

E = mMG/R

(4) We now use principle (1) above to set equations in arguments (2) and (3) above, equal:

E = mc2 = mMG/R

(5) We use R = ct on this:

c^3 = MG/t

or

MG = tc^3

Which is Louise’s equation. QED.
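As a rough numerical check, MG = tc^3 implies a sensible order of magnitude for the mass of the universe. A minimal sketch, where the age of the universe used below is an illustrative assumption rather than a value from the derivation:

```python
# Sketch: mass of the universe implied by rearranging MG = t c^3 to
# M = t c^3 / G, taking an assumed age t ~ 13.7 Gyr for illustration.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8             # speed of light, m/s
t = 13.7e9 * 3.156e7    # assumed age of universe in seconds (~4.3e17 s)

M = t * c**3 / G
print(M)                # ~1.7e53 kg, a plausible order of magnitude
```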

I’ve done and published (where I can) work on the detailed dynamics of such an energy-based mechanism behind gravity. The actual dynamics are slightly more subtle than above, because the definition of the mass of the universe and its effective radius are oversimplified above. In reality, the further you see, the earlier the phase of the universe you are looking at (and receiving gravity effects from), so the effective density of the universe is greater. There is, however, a limiting factor which eventually offsets this effect when considering great distances, because big masses in the very dense early-time universe, receding from us rapidly, would exchange very severely redshifted 'gravitons' (or whatever the gauge bosons of gravity are).

LQG is a modelling process, not a speculation. Smolin et al. show that a path integral is a summing over the full set of interaction graphs in a Penrose spin network. The result gives general relativity without a metric (ie, background independent). Next, you simply have to make gravity consistent completely with standard model-type Yang-Mills QFT dynamics to get predictions.

LQG is explained in Woit's book Not Even Wrong where he points out that loops are a perfectly logical and self-consistent duality of the curvature of spacetime: ‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space.’

‘Gravitation and Electrodynamics over SO(3,3)’ on the CERN document server, EXT-2003-090, by D. R. Lunsford, was peer-reviewed and published but was deleted from arXiv. It makes predictions, eliminating the dark energy/cosmological constant. Its 3 orthogonal time dimensions and its elimination of the CC mean it fits a gravity mechanism which is also predictive...

LQG has the benefits of unifying Standard Model (Yang-Mills) quantum field theory with the verified non-landscape end of general relativity (curvature) without making a host of uncheckable extradimensional speculations. It is more economical with hype than string theory because the physical basis may be found in the Yang-Mills picture of exchange radiation. Fermions (non-integer spin particles) in the standard model don’t have intrinsic masses (masses vary with velocity for example), but their masses are due to their association with massive bosons having integer spin. Exchange of gauge boson radiations between these massive bosons gives the loops of LQG. If string theorists had any rationality they would take such facts as at least a serious alternative to string!

It is interesting that the editorial on p5 of the 9 Dec 06 issue of New Scientist states: “[Unorthodox approaches] now seem the antithesis of modern science, with consensus and peer review at its very heart. ... The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting.... The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost.”

Page 46 of the same issue has an article by Professor Harry Collins of Cardiff University which notes that crackpot anti-gravity research is still funded: the 'US military spends around $1 million per year on anti-gravity research. This is best understood by analogy with the philosopher Blaise Pascal's famous wager. Everyone, he argued, should believe in God because the cost of believing was small while the cost of not believing could be an eternity in hell. For the military, the cost of missing a technological opportunity, particularly if your enemy finds it first, is a trip to the hell of defeat. Thus goes the logic of the military investigating anti-gravity.'

Yang-Mills exchange radiation causes gravity (predicting its strength and other general relativity effects accurately), and the push effect gets rid of anti-gravity; this fact would save U.S. taxpayers $1 million per year by getting rid of unwanted research. First useful spin-off! You cannot shield a push gravity, because it doesn't depend on mediation of gravity between two masses, but on the shielding of masses from Yang-Mills exchange radiation. Any attempt to 'shield' gravity would hence make the effect of gravity stronger, not weaker, by adding to the existing shielding.

Professor Collins' major point, however, is that fear prevents ideas being dismissed and ignored. If you want to deter people from being dismissive and abusive, therefore, you should emphasise the penalty (what they have to lose) due to ignoring and dismissing the facts. The best way to do that is to expose the damage to physics that some of the existing egotists are doing, so that others will be deterred. I've tried waiting for a decade (first publication via October 1996 Electronics World, an 8-page paper made available via the letters pages). Being polite, nice or modest to the leading physicists is like giving an inch to a dictator who then takes a mile. They can afford false modesty; others can't. Any modesty is automatically interpreted as proof of incompetence or stupidity. They aren't interested in physics. The only way to get interest is by pointing out facts they don't want to hear. This brings a hostile reaction, but the choice is not between a hostile reaction and a nice reaction! It is a choice between a hostile reaction and an abusive/dismissive reaction. Seen properly, the steps being taken are the best possible choice from the alternatives available.

On the topic of gravity-decelerated expansion in the false (conventional) treatment of cosmology, see Brian Greene's The Elegant Universe, UK paperback edition, page 355: 'Inflation: The root of the inflation problem is that in order to get two [now] widely separated regions of the universe close together [when looking backwards in time towards the big bang time origin] ... there is not enough time for any physical influence to have travelled from one region to the other. The difficulty, therefore, is that [looking backwards in time] ... the universe does not shrink at a fast enough rate.

'... The horizon problem stems from the fact that like a ball tossed upward, the dragging pull of gravity causes the expansion rate of the universe to slow down. ... Guth's resolution of the horizon problem is ... the early universe undergoes a brief period of enormously fast expansion...'

The problem is real but the solution isn't! What's happening is that the cosmological gravitational retardation idea from general relativity is an error because it ignores redshift of gauge boson radiation between receding masses (gravitational charges). This empirically based gravity mechanism also solves the detailed problem of the small-sized ripples in the cosmic background radiation.



Update: extract from an email to Mario Rabinowitz, 30 December 2006.

There are issues with the force calculation when you want great accuracy. For my purpose I wanted to calculate an inward reaction force, to predict gravity by the LeSage gravity mechanism which Feynman describes (with a picture) in his book "Character of Physical Law" (1965).

The first problem is that at greater distances, you are looking back in time, so the density is higher. The density varies as the inverse cube of time after the big bang.

Obviously this would give you an infinite force from the greatest distances, approaching zero time (infinite density).

But the redshift of gravity-causing gauge boson radiation emitted from such great distances would weaken its contribution. The further the mass is, the greater the redshift of any gravity-causing gauge boson radiation coming towards us from that mass. So this effect puts a limit on the contribution to gravity from the otherwise infinitely increasing effective outward force due to density rising at early times after the big bang. A simple way to deal with this is to treat redshift as a stretching of the radiation, while the effects of density can be treated with the mass-continuity equation by supplying the Hubble law and the spacetime effect. The calculation suggests that the overall effect of the density rise (as limited by the increase in redshift of gauge bosons carrying the inward reaction force) is a factor of e^3, where e is the base of natural logarithms. This is a factor of about 20, not infinity. It allows me to predict the strength of gravity correctly.
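A trivial numerical check of the factor just mentioned (this only evaluates the stated e^3 result; it does not reproduce the underlying mass-continuity calculation):

```python
# Sketch: the net density-enhancement factor claimed in the text is e^3,
# i.e. about 20 rather than infinity.
import math

factor = math.e ** 3
print(factor)   # ~20.09
```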

The failure of anybody (except for me) to correctly predict the lack of gravitational slowing of the universe is also quite serious.

Either it's due to a failure of gravity to work on masses receding from one another at great speed (due to redshift of force carrying gauge bosons in quantum gravity?) or it's due to some repulsive force offsetting gravity.

The mainstream prefers the latter, but the former predicted the Perlmutter results via the October 1996 Electronics World. There is extreme censorship against predictions which are correctly confirmed afterwards, and which quantitatively correlate to observations. But there is bias in favour of ad hoc invention of new forces which simply aren't needed and don't predict anything checkable. It's just like Ptolemy adding a new epicycle every time he found a discrepancy, and then claiming to have "discovered" a new epicycle of nature.

The claimed accelerated expansion of the universe is exactly (to within experimental error bars) what I predicted two years before it was observed, using the assumption that there is no gravitational retardation (instead of an accelerated expansion just sufficient to cancel the retardation). The "cosmological constant" the mainstream is using is variable, to fit the data! You can't exactly offset gravity by simply adding a cosmological constant, see:

http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

See the diagram there! The mainstream best fit using a cosmological constant is well outside many of the error bars. This is intuitively obvious from my perspective. What is occurring is that there is simply no gravitational slowing. But the mainstream is assuming that there is gravitational slowing, and also dark energy causing acceleration which offsets the gravitational slowing. But that doesn't work: the cosmological constant cannot do it. If it is perfectly matched to experimental data at short distances, it overcompensates at extreme distances, because it makes gravity repulsive. So it overestimates at extreme distances.

All you need to get the correct expansion curve is to delete gravitational retardation altogether. You don't need general relativity to examine the physics.

Ten years ago (well before Perlmutter’s discovery and dark energy), the argument arose that if gravity is caused by a Yang-Mills exchange radiation quantum force field, where gravitons were exchanged between masses, then cosmological expansion would degenerate the energy of the gravitons over vast distances.

It is easy to calculate: whenever light is seriously redshifted, gravity effects over the same distance will be seriously reduced.

At that time, 1996, I was furthering my education with some Open University courses and as part of the cosmology course made some predictions from this quantum gravity concept.

The first prediction is that Friedmann’s solutions to GR are wrong, because they assume falsely that gravity doesn’t weaken over distances where redshifts are severe.

Whereas the Hubble law of recession is empirically V = Hr, Friedmann’s solutions to general relativity predict that V will not obey this law at very great distances. Friedmann/GR assume that there will be a modification due to gravity retarding the recession velocities V, due effectively to the gravitational attraction of the receding galaxy to the mass of the universe contained within the radius r.

Hence, the recession velocity predicted by Friedmann’s solution for a critical density universe (which continues to expand at an ever diminishing rate, instead of either coasting at constant velocity, which Friedmann shows GR predicts for low density, or collapsing, which would be the case for higher than critical density) can be stated in classical terms to make it clearer than using GR.

Recession velocity including gravity

V = (Hr) - (gt)

where g = MG/(r^2) and t = r/c, so:

V = (Hr) - [MGr/(cr^2)]

= (Hr) - [MG/(cr)]

M = mass of universe which is producing the gravitational retardation of the galaxies and supernovae, ie, the mass located within radius r (by Newton’s theorem, the gravity due to mass within a spherically symmetric volume can be treated as if it all resides at the centre of that volume):

M = Rho.(4/3)Pi.r^3

Assuming (as was the case in 1996 models) the Friedmann critical density, Rho = 3(H^2)/(8.Pi.G), we get:

M = Rho.(4/3)Pi.r^3

= [3(H^2)/(8.Pi.G)].(4/3)Pi.r^3

= (H^2)(r^3)/(2G)

So, the Friedmann recession velocity corrected for gravitational retardation,

V = (Hr) - [MG/(cr)]

= (Hr) - [(H^2)(r^3)G/(2Gcr)]

= (Hr) - [0.5(Hr)^2]/c.

Now, my point is this. The term [0.5(Hr)^2]/c in this equation is the amount of gravitational deceleration to the recession velocity.
From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the term [0.5(Hr)^2]/c.

Hence, we predict that the Hubble law will be the correct formula.
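The size of the retardation term dropped above can be evaluated numerically. A minimal sketch, where the values of H and r below are illustrative assumptions (H ~ 70 km/s/Mpc, r ~ 1 Gpc):

```python
# Sketch: recession velocity with and without the classical gravitational
# retardation term derived in the text for a critical-density universe.
c = 2.998e8     # speed of light, m/s
H = 2.3e-18     # assumed Hubble parameter, s^-1 (~70 km/s/Mpc)

def v_hubble(r):
    # Plain Hubble law, V = Hr (the prediction kept in the text).
    return H * r

def v_friedmann(r):
    # Friedmann-style velocity including retardation: V = Hr - 0.5(Hr)^2/c.
    return H * r - 0.5 * (H * r)**2 / c

r = 3e25        # assumed distance, ~1 Gpc in metres
print(v_hubble(r))     # ~6.9e7 m/s with no retardation
print(v_friedmann(r))  # noticeably smaller: retardation grows as (Hr)^2
```

The gap between the two curves grows quadratically with distance, which is why the distinction only shows up clearly in very distant supernova data.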

Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.

Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.

I can’t understand why something which to me is perfectly sensible and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.

People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmological sized distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.

*****

For an example of the tactics of groupthink, see Professor Sean Carroll's recent response to me: 'More along the same lines will be deleted — we’re insufferably mainstream around these parts.'

He is a relatively good guy, who stated:

‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here

The good guys fairly politely ignore the facts; the bad guys try to get you sacked, or call you names. There is a big difference. But at the end, the mainstream rubbish continues for ages either way:

Human nature means that instead of using scientific objectivity, any ideas outside the current paradigm should be attacked either indirectly (by ad hominem attacks on the messenger), or by religious-type (unsubstantiated) bigotry, irrelevant and condescending patronising abuse, and sheer self-delusion:

‘(1). The idea is nonsense.

‘(2). Somebody thought of it before you did.

‘(3). We believed it all the time.’

- Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle, Home is Where the Wind Blows Oxford University Press, 1997, p154).

‘If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner - even though he sat at the feet of Faraday... beetles could do that... he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaking manner, not unmixed with bluster, and make believe he knew all about it when he was a little boy!’

- Oliver Heaviside, Electromagnetic Theory Vol. 1, p337, 1893.

UPDATE:

Copy of a comment:


http://kea-monad.blogspot.com/2007/02/luscious-langlands-ii.html


Most of the maths of physics consists of applications of equations of motion which ultimately go back to empirical observations formulated into laws by Newton, supplemented by Maxwell, Fitzgerald-Lorentz, et al.


The mathematical model follows experience. It is only speculative in that it makes predictions as well as summarizing empirical observations. Where the predictions fall well outside the sphere of validity of the empirical observations which suggested the law or equation, then you have a prediction which is worth testing. (However, it may not be falsifiable even then, the error may be due to some missing factor or mechanism in the theory, not to the theory being totally wrong.)


Regarding supersymmetry, which is the example of a theory which makes no contact with the real world, Professor Jacques Distler gives an example of the problem in his review of Dine’s book Supersymmetry and String Theory: Beyond the Standard Model:


http://golem.ph.utexas.edu/~distler/blog/


“Another more minor example is his discussion of Grand Unification. He correctly notes that unification works better with supersymmetry than without it. To drive home the point, he presents non-supersymmetric Grand Unification in the maximally unflattering light (run α1, α2 up to the point where they unify, then run α3 down to the Z mass, where it is 7 orders of magnitude off). The naïve reader might be forgiven for wondering why anyone ever thought of non-supersymmetric Grand Unification in the first place.”


The idea of supersymmetry addresses the issue of getting electromagnetic, weak, and strong forces to unify at 10^16 GeV or whatever, near the Planck scale. Dine assumes that unification is a fact (it isn’t) and then shows that in the absence of supersymmetry, unification is incompatible with the Standard Model.


The problem is that the physical mechanism behind unification is closely related to the vacuum polarization phenomena which shield charges.


Polarization of pairs of virtual charges around a real charge partly shields the real charge, because the radial electric field of the polarized pair points the opposite way. (I.e., the electric field lines point inwards towards an electron. The electric field lines between virtual electron-positron pairs, which are polarized with virtual positrons closer to the real electron core than virtual electrons, produce an outwards radial electric field which cancels out part of the real electron’s field.)


So the variation in coupling constant (effective charge) for electric forces is due to this polarization phenomenon.


Now, what is happening to the energy of the field when it is shielded like this by polarization?
Energy is conserved! Why is the bare core charge of an electron or quark higher than the shielded value seen outside the polarized region (i.e., beyond 1 fm, the range corresponding to the IR cutoff energy)?


Clearly, the polarized vacuum shielding of the electric field is removing energy from the charge’s field.


That energy is being used to make the loops of virtual particles, some of which are responsible for other forces like the weak force.


This provides a physical mechanism for unification which deviates from the Standard Model (which does not include energy sharing between the different fields), but which does not require supersymmetry.


Unification appears to occur because, as you go to higher energy (distances nearer a particle), the electromagnetic force increases in strength (because there is less polarized vacuum intervening in the smaller distance to the particle core).


This increase in strength, in turn, means that less energy has been absorbed from the electromagnetic field by the vacuum within that smaller distance to produce loops.


As a result, there are fewer pions in the vacuum, and the strong force coupling constant/charge (at extremely high energies) starts to fall. When the fall in charge with decreasing distance is balanced by the increase in force due to the geometric inverse square law, you have asymptotic freedom effects (obviously this involves gluon and other particles and is complex) for quarks.
Just to summarise: the electromagnetic energy absorbed by the polarized vacuum at short distances around a charge (out to IR cutoff at about 1 fm distance) is used to form virtual particle loops.


These short ranged loops consist of many different types of particles and produce strong and weak nuclear forces.


As you get close to the bare core charge, there is less polarized vacuum intervening between it and your approaching particle, so the electric charge increases. For example, the observable electric charge of an electron is 7% higher at 90 GeV as found experimentally.


The reduction in shielding means that less energy is being absorbed by the vacuum loops. Therefore, the strength of the nuclear forces starts to decline. At extremely high energy, there is - as in Wilson’s argument - no room physically for any loops (there are no loops beyond the upper energy cutoff, i.e. UV cutoff!), so there is no nuclear force beyond the UV cutoff.


What is missing from the Standard Model is therefore an energy accountancy for the shielded charge of the electron.


It is easy to calculate this: the electromagnetic field energy being used in creating loops up to the 90 GeV scale, for example, is the energy of a charge which is 7% of the energy of the electric field of an electron (because 7% of the electron’s charge is lost to vacuum loop creation and polarization below 90 GeV, as observed experimentally; I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424).


So this physical understanding should be investigated. Instead, the mainstream censors physics out and concentrates on a mathematical (non-mechanism) idea, supersymmetry.
Supersymmetry shows how all forces would have the same strength at 10^16 GeV.


This can’t be tested, but maybe it can be disproved theoretically as follows.


The energy of the loops of particles which are causing nuclear forces comes from the energy absorbed by the vacuum polarization phenomenon.


As you get to higher energies, you get to smaller distances. Hence you end up at some UV cutoff, where there are no vacuum loops. Within this range, there is no attenuation of the electromagnetic field by vacuum loop polarization. Hence within the UV cutoff range, there is no vacuum energy available to create short ranged particle loops which mediate nuclear forces.
Thus, energy conservation predicts a lack of nuclear forces at what is traditionally considered to be “unification” energy.


So this would seem to discredit supersymmetry, whereby at “unification” energy you get all forces having the same strength. The problem is that the mechanism-based physics is ignored in favour of massive quantities of speculation about supersymmetry to “explain” a unification which is not observed.


***************************


Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:


‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].


‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.


‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.


‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].


‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ - m and e’ - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.


‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001 …’


Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
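The Schwinger correction quoted above is simple to evaluate numerically; a minimal sketch using the standard low-energy value of alpha:

```python
# Sketch: first-order QED (Schwinger) correction to the electron magnetic
# moment, mu/mu_0 = 1 + alpha/(2*pi), compared with the ~1.00116 quoted.
import math

alpha = 1 / 137.036
moment_ratio = 1 + alpha / (2 * math.pi)
print(moment_ratio)   # ~1.00116
```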
Most of the charge is screened out by polarised charges in the vacuum around the electron core:


‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ - arxiv hep-th/0510040, p 71.


‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.


Comment by nc — February 23, 2007 @ 11:19 am

62 Comments:

At 7:47 AM, Blogger nige said...

Copy of comment to http://riofriospacetime.blogspot.com/2006/12/thinking-of-prince.html

Basically the string theorists are just trying to obfuscate. You can do some things far more simply using simple maths. It's partly their self-imposed constraints to use only the most generalized mathematical tools which have created such an insoluble stringy mess in the mainstream.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

 
At 2:21 AM, Anonymous Anonymous said...

http://www.math.columbia.edu/~woit/wordpress/?p=497#comment-19726

a.n. onymous Says: Your comment is awaiting moderation.

December 8th, 2006 at 5:20 am
Thomas,

Newton’s mechanics are just incomplete because they ignore the presence of more than 2 bodies. The solar system is only deterministic in the Newtonian framework because the sun contains nearly all the mass and the planets don’t interact strongly. If everything had the same gravitational charge (equal mass), by analogy to an atom, you’d naturally be describing the chaos of the solar system by a fuzzy formula giving the probability of finding a given planet at a given location.

But Newton’s mechanics are not necessarily wrong for requiring absolute time, e.g. today is 1.5 x 10^10 years since BB.

The dynamics of relativity can’t be found in special relativity, because the first principle (constancy of the velocity of light) fails: light bends around the sun due to gravity. Hence the formulation of general relativity on general covariance. General relativity deals with accelerations, hence is an absolute reference frame theory: it merely says you can’t distinguish acceleration from gravity. I.e., a person in a windowless room in a rocket accelerating at 9.8 m/s^2 in outer space will think he is on earth.

Acceleration is absolute, and because spacetime is curved by the presence of mass, all motion involves some acceleration. Uniform motion is a myth. You accelerate to start it. You get affected by gravity fields/curvature during it. You decelerate at the end of the journey. The paradigm of SR is artificial, it’s only useful for getting the Lorentz transformation etc.

In 1949 a crystal-like Dirac sea [this frozen sea will boil above the IR cutoff, causing the loops of QFT] was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4:

‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 - v^2/c^2)^{1/2}, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E_o/(1 - v^2/c^2)^{1/2}, where E_o is the potential energy of the dislocation at rest.’

Specifying that the distance/time ratio = c (constant velocity of light), then tells you that the time dilation factor is identical to the distance contraction factor.
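As a quick numerical sketch, the contraction and energy factors in Frank's formulas are easy to evaluate (the value v = 0.8c below is just an illustrative choice, not from the paper):

```python
import math

def contraction_factor(v, c=1.0):
    """Frank's longitudinal contraction factor (1 - v^2/c^2)^(1/2)."""
    return math.sqrt(1.0 - (v / c) ** 2)

v = 0.8                    # illustrative velocity, as a fraction of c
f = contraction_factor(v)  # the moving dislocation contracts by this factor
E = 1.0 / f                # its energy rises by the reciprocal factor
# If (contracted distance)/(dilated time) is to remain equal to c, the
# time dilation factor must equal the distance contraction factor f.
print(f, E)  # 0.6 and 1.666...
```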

 
At 8:24 AM, Blogger nige said...

Copy of a comment

http://cosmicvariance.com/2006/12/07/guest-blogger-joe-polchinski-on-the-string-debates/#comment-148359

nc on Dec 8th, 2006 at 11:22 am
WeemaWhopper,

By ‘positive dark energy’ Prof Polchinski presumably means the secure experimental evidence from supernovae redshifts, which shows no slowing down in expansion. But there are two possible explanations: (1) the gravitational retardation of distant galaxies etc. is being offset by acceleration due to dark energy, and (2) there is simply no gravitational slowing-down mechanism.

Explanation (1) is mainstream (the lambda-CDM general relativity cosmology), but explanation (2) is championed by Nobel Laureate Philip Anderson, who wrote: ‘the flat universe is just not decelerating, it isn’t really accelerating’ - Philip Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

Explanation (2) suggests Standard Model type (Yang-Mills) quantum field theory is the theory of gravity, because you’d expect a weakening in gravitational attraction in any situation where the gravity charges (masses) are rapidly receding from one another, due to the “graviton” redshift. I.e., where the visible light from a galaxy is seriously redshifted by recession of the galaxy, the gravitons being exchanged with it will also be severely redshifted (weakening the gravity coupling between the two charges), a mechanism totally omitted from general relativity. This was predicted ahead of Perlmutter’s observations, unlike explanation (1), which relies on the ad hoc invention of dark energy.

 
At 3:26 AM, Blogger nige said...

http://cosmicvariance.com/2006/12/07/guest-blogger-joe-polchinski-on-the-string-debates/#comment-148749

nc on Dec 9th, 2006 at 6:20 am


"Even if you don’t believe in the supernova data at all (and there’s no reason not to), the CMB plus some very weak constraint on the Hubble constant implies acceleration ... It makes all the sense in the world to keep an open mind about any particular explanation for the acceleration, but the universe is definitely accelerating." - Sean

"...the flat universe is just not decelerating, it isn’t really accelerating..." - Philip Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

The fact that it is Philip Anderson in the last quote should not matter. Just concentrate on the science:

GR is not the final theory of gravity, which will have to take account of quantum effects. GR predicts that in an expanding universe there is a departure from Hubble's law at extreme redshift, due to deceleration caused by gravity.

The supposed "acceleration of the universe" observation is that there is no gravitational retardation in evidence, not that there is acceleration.

You can explain this either by dismissing long-range gravity or you say there is an acceleration due to dark energy which cancels out the effect of long range gravity.

Any Yang-Mills quantum gravity, however, predicts the lack of gravitational deceleration precisely, so there is simply no room for any significant amount of ad hoc dark energy in explaining the result: the quantum gravity coupling (effective charge) falls off due to the redshift of the gauge bosons being exchanged when the masses are receding at relativistic velocities. There are different analyses of this problem, but all lead to the same conclusion! The prediction was published ahead of the observations.
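The weakening claimed here is just the quantum-energy redshift law; a minimal sketch (the z values below are illustrative, not taken from the comment):

```python
def redshifted_energy(e_emitted, z):
    """Received quantum energy after cosmological redshift z: E/(1+z),
    since E = hf and the frequency f is reduced by the factor (1+z)."""
    return e_emitted / (1.0 + z)

# On this argument, a gauge boson exchanged with a galaxy at z = 1 arrives
# with half its emitted energy, so the effective coupling to that galaxy
# would be correspondingly weakened.
print(redshifted_energy(1.0, 1.0))  # 0.5
```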

 
At 11:46 AM, Blogger nige said...

Copy of comment made (in moderation at present) to:

http://rightwingnation.com/index.php/2006/12/04/2544/


Very nice post on groupthink. Just a few comments about:

"As I understand it — and Mahndisa, please jump in and correct me if I err — there are two major problems or issues that the Standard Model is constructed to address. The Theory of General Relativity cannot explain the expansion of the universe, and neither it nor Newtonian physics cannot explain the fact that stars at the rims of galaxies move at nearly the same speed as those toward the center.

"The presence of more matter would explain both these phenomena. Thus, we have the Standard Model, where most of the universe is composed first of dark matter, and then when that wasn't enough, dark energy.

"The problem here is that neither dark matter nor dark energy can be tested or falsified. Indeed, Einstein proposed the equivalent of dark matter to explain the expansion of the universe, but he did so very unhappily, because his hypothesis was ad hoc, unmotivated by anything but the expansion of the universe."


The Standard Model of particle physics only has one outstanding speculation in it, which is the "Higgs" mass mechanism, which is also supposed to predict the electroweak symmetry breaking phenomenon. It doesn't include dark matter, dark energy, or gravity. It deals with electromagnetism and the nuclear forces very well.

Presumably you're referring to the "Standard Model" of cosmology, the general relativity lambda-CDM model developed by Weinberg and others, which does contain ad hoc phlogiston and caloric (whoops; I should just have written dark energy and dark matter).

Einstein's field equation of general relativity is factual up to 1916, when it was founded on empirical facts: electrodynamics, the Newtonian limit of gravity, and the conservation of energy, which forces the Ricci curvature tensor not to equal four Pi times the mass-energy tensor (as in Newton's formula written in tensor calculus) but to be modified. Because of an inconsistency of the tensor form of Newton's law with the principle of conservation of mass-energy, Einstein had to change the equation to make it consistent, and this change introduces the metric and therefore the contraction of spacetime. The radius of the earth is shortened by 1.5 mm due to "curvature", which is just obfuscation for saying that the abstract mathematical model is consistent with the spacetime fabric squeezing matter slightly. This predicts things in a definite fashion.

General relativity doesn't work like that for cosmology because it can "predict" a whole landscape of universe types. Einstein in 1917 put in a cosmological constant with vastly more dark energy than the current lambda-CDM version. He wanted to keep the universe static (preventing gravity from collapsing it): his modification was to have dark energy cancel out gravity at a distance equal to the mean distance between galaxies. At bigger distances than that, dark energy predominates and the force is repulsive, while at smaller distances the gravity effect predominates, so masses attract.

The big bang evidence made him drop that model, reverting to a Friedmann solution which makes another error. Basically, general relativity is wrong because it ignores quantum gravity effects.

Most people claim these effects are only significant at small scales, but that's wrong. The Standard Model of particle physics - which is very accurate with thousands of checks for nuclear reactions and suchlike - is a Yang-Mills quantum field theory. Forces are caused by energy ("gauge bosons") being exchanged between charges.

Apply that to gravity, where the charges are masses, and you see that in an expanding universe the recession of masses upsets and weakens the gravitational force coupling constant. Galaxies with huge redshift send lower frequency (and lower energy) photons of light to us due to redshift, but the same will occur for gauge bosons.

Hence, the lack of slowing down of distant galaxies discovered by Dr Saul Perlmutter in 1998 is not a case of gravity being offset by a small positive cosmological constant due to dark energy. Nope.

Instead, the lack of slowing down is just a lack of gravity at long ranges due to the redshift of gravity causing "gauge bosons".

This gets rid of some of the dark matter (some remains in halos around galaxies, and is neutrinos etc.) and it gets rid of all the alleged dark energy, because general relativity includes both of these to model the expansion of the universe on the assumption that the gravitation of general relativity and the acceleration due to dark energy control the universe's expansion.

The exchanged gauge boson radiation between masses which are receding at relativistic velocities will be severely redshifted, nullifying gravitation over cosmic distance scales.

 
At 2:27 PM, Blogger nige said...

Copy of a comment:

http://motls.blogspot.com/2006/12/lorentz-violation-and-deformed-special.html

http://www.haloscan.com/comments/lumidek/116569162383742296/?a=37136#674650

"They imagine that general relativity has rejected special relativity. Quite on the contrary. General relativity is an extension of the principles of special relativity in which all coordinate systems, not just inertial reference frames, are equally good for our formulation of the physical laws."

Yes, you are right in the post: you formulated the light speed, not the velocity, to be constant. That is correct, but it isn't 1905 special relativity. Einstein thought the vector c was constant, but as general relativity shows it isn't, because the direction and hence the vector value of c varies; only the light speed (not velocity) c is constant.

In 1949 a crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4:

‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 - v^2 / c^2)^{1/2}, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E_o/(1 - v^2 / c^2)^{1/2}, where E_o is the potential energy of the dislocation at rest.’

Specifying that the distance/time ratio = c (constant velocity of light), then tells you that the time dilation factor is identical to the distance contraction factor.

If that is the physical mechanism behind the mathematics of special relativity, then this crystalline structure in free space (below IR cutoff, more than 1 fm from a unit electric charge) will alter in intense fields. Heavy loops could easily modify the Lorentzian law by changing the mechanism!

Below the IR cutoff, the Dirac sea may be frozen in a crystal. Above the IR cutoff, pair production begins because the E field exceeds 10^20 volts/metre, which breaks up the crystalline structure. The resulting pairs polarize, opposing the core electric charge and partly cancelling it.

Above the IR cutoff the fluid Dirac sea produced by the effect of the strong electric field will behave differently, in a possibly non-Lorentzian way.
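The "10^20 volts/metre" figure quoted above can be compared against the standard QED pair-production (Schwinger) critical field E_c = m_e^2 c^3 / (e ħ); the constants below are standard CODATA values, not taken from the comment:

```python
# Sketch check of the pair-production field-strength threshold.
m_e = 9.109e-31     # electron mass, kg
c = 2.998e8         # speed of light, m/s
e = 1.602e-19       # elementary charge, C
hbar = 1.055e-34    # reduced Planck constant, J s

# Schwinger critical field, above which the vacuum breaks down into pairs:
E_c = m_e**2 * c**3 / (e * hbar)
print(f"{E_c:.2e} V/m")  # ~1.3e18 V/m, the low end of the 10^18-10^20 range
```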

The Lorentz transformation is obviously correct at low energy: it is needed to explain the Michelson-Morley experiment (contraction of length in the direction of motion), and the time-dilation effect on the radioactive decay rate of muons etc. is confirmed easily, as is the mass increase with velocity when particles are accelerated. E=mc^2 was first validated accurately in 1932 with Anderson's matter-antimatter annihilation studies, but Rutherford and others had some evidence even before 1905 of massive energy locked in atoms, from the radioactivity-produced heat emission of radium (which has a 1600-year half-life). Adding up the radiant power emission of a gram of radium over the average life (44% more than the half-life) gives an immense amount of energy per gram (but not the full E=mc^2 amount, because only a fraction of the mass of radium is converted into energy by radioactivity).
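The radium arithmetic is easy to sketch. The 1600-year half-life is from the text; the decay energy (~4.87 MeV per Ra-226 alpha decay) and the other constants are standard figures assumed here:

```python
import math

half_life_yr = 1600.0
mean_life_yr = half_life_yr / math.log(2)   # mean life = half-life / ln 2
excess = mean_life_yr / half_life_yr - 1.0  # ~0.44, i.e. 44% longer

# Total energy released per gram of Ra-226 (alpha decays only, assumed figure):
N_A = 6.022e23
atoms_per_gram = N_A / 226.0
E_per_decay_J = 4.87e6 * 1.602e-19          # 4.87 MeV in joules
E_total_J = atoms_per_gram * E_per_decay_J  # ~2e9 J per gram: immense

E_mc2_J = 1e-3 * (2.998e8) ** 2             # E = mc^2 for one gram, ~9e13 J
fraction = E_total_J / E_mc2_J              # only ~2e-5 of the mc^2 total
print(excess, E_total_J, fraction)
```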

It's not what's in special relativity which is objectionable, it's what it misses out (mechanism) that will need to be filled in sooner or later. It's Machian philosophy which is behind SR at present. Mach was a fool who denied atoms, electrons, light quanta, etc., as well as spacetime fabric. He was just a philosopher.

Physics doesn't progress forever by being stripped of physical assets.

nc | Homepage | 12.09.06 - 5:22 pm | #

 
At 3:05 AM, Blogger nige said...

Prof. Michio Kaku's book Einstein's Cosmos explains M-theory (the 10 dimensional superstring and 11 dimensional supergravity unification) without maths (or physics):

"... M-theory gives us a possible solution ... by assuming that our universe itself is a membrane floating in an infinite eleven-dimensional hyperspace. Thus, subatomic particles and atoms would be confined to our membrane (our universe), but gravity, being a distortion of hyperspace, can flow freely between universes."

;-)

 
At 2:22 AM, Blogger nige said...

Copy of a comment of mine concerning "Plato"'s misrepresentation of Feynman's work on renormalization as proof of the "dark energy" hoax, on the discussion thread http://cosmicvariance.com/2006/12/07/guest-blogger-joe-polchinski-on-the-string-debates/#comment-149865



nc on Dec 11th, 2006 at 5:18 am

"Richard Feynman and others who developed the quantum theory of matter realized that empty space is filled with “virtual” particles continually forming and destroying themselves. These particles create a negative pressure that pulls space outward. No one, however, could predict this energy’s magnitude." - Plato

No, pair production only occurs above the IR cutoff (collision energies of 0.511 MeV/particle). Space is thus only filled with particle creation-annihilation loops at distances closer than 1 fm to a unit charge, where the electric field strength exceeds 10^20 V/m. This is the basis of the renormalization of electric charge, which has empirical evidence. http://arxiv.org/abs/hep-th/0510040 is a recent analysis of quantum field theory progress that contains useful information on pair production and polarization, around pages 70-80 if I recall correctly. For an earlier review of the subject, see http://arxiv.org/abs/quant-ph/0608140.

 
At 3:05 PM, Blogger nige said...

Copy of relevant email (see also http://nige.wordpress.com for most of the hyperlinks below):

From: Nigel Cook
To: Guy Grantham
Sent: Tuesday, December 12, 2006 10:58 PM
Subject: Re: Displacement Current

Dear Guy,

First, he gets the dates wrong, it isn't 1893 and 1895 for FitzGerald and Lorentz, but 1889 and 1893 respectively.

Reading the rest of it leaves me feeling that Dr Simhony is a philosopher with no connection to experimental fact.

The big problem is that the "epola" can't explain everything. There's another field also in the vacuum for mass. I don't think it's the Higgs field as normally postulated. I think the Z_o field is there in the vacuum.

I also don't think electrons and positrons are fundamental. The evidence is http://photos1.blogger.com/blogger/1931/1487/1600/PARTICLE.4.gif and http://thumbsnap.com/vf/FBeqR0gc.gif on http://nige.wordpress.com/

Vacuum polarization picture: zone A is the UV cutoff, while zone B is the IR cutoff around the particle core in http://thumbsnap.com/vf/FBeqR0gc.gif. See also http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html and http://nige.wordpress.com/2006/10/09/16/ for more information. To find out how to calculate the 137 polarization shielding factor (1/alpha), scroll down top post and see the section at http://nige.wordpress.com/ headed ‘Mechanism for the strong nuclear force.’
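The "137 polarization shielding factor" mentioned here is just 1/alpha. As a check, alpha can be computed from standard constants (CODATA values, not figures from the email):

```python
import math

e = 1.602177e-19       # elementary charge, C
eps0 = 8.854188e-12    # vacuum permittivity, F/m
hbar = 1.054572e-34    # reduced Planck constant, J s
c = 2.997925e8         # speed of light, m/s

# Fine-structure constant alpha = e^2 / (4 pi eps0 hbar c):
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # ~137.036, the shielding factor quoted in the email
```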

My evidence from getting the masses of particles out of a vacuum polarization scheme shows that the simplest model is that the electron involves two polarizations and therefore two separated particles - one freed electron and one similarly freed vacuum Z_o massive boson. Each is surrounded by a vacuum polarization shield, giving electric field shielding factors of alpha, i.e. 1/137.036.

Put in more energy, and you get the muon, which is the same except that the Z_o is then closer to the electron, so there is just one vacuum polarization factor. Allowing for both the changed distance geometry and the reduction from two polarization shielding factors (ie alpha squared) to one vacuum polarization factor (alpha) gives us the muon mass accurately from the electron mass.
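The email gives the mechanism in words but not the formula. One often-quoted form of this alpha-based numerology (an assumption supplied here for illustration, not stated in the text) multiplies the electron mass by a geometry factor of 3/2 and removes one shielding factor of alpha:

```python
# Hypothetical reconstruction of the muon-mass numerology; the 3/2 factor
# and the single 1/alpha factor are assumptions, not quoted from the email.
alpha = 1 / 137.036
m_e_MeV = 0.511

m_mu_estimate = m_e_MeV * 1.5 / alpha
print(m_mu_estimate)  # ~105.0 MeV, vs the measured muon mass of 105.66 MeV
```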

Next, we get all other particle masses by following the same route as shown in http://thumbsnap.com/vf/FBeqR0gc.gif

This is very exciting. Notice that the Z_o boson is like a photon which has mass, ie, it is like a photon in being an electric dipole, so it is polarizable (by rotation on its axis in an electric field, not by charge separation as occurs with freed e + p pairs).

Feynman in “QED” (Penguin, London, 1990) writes on p142:

“It’s very clear that the photon and three W’s [Feynman says that a neutral W is the Z_o] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly - you can still see the “seams” in the theories; they have not yet been smoothed out so that the connection becomes more beautiful and, therefore, probably more correct.”

From renormalization in QFT, it is well known that charge loops appear and cause effects (like charge renormalization due to polarization shielding bare charge) only above an IR cutoff.

This cutoff is the collision energy corresponding to the rest mass energy of the pair of charges. I.e., for positron-electron loops (total rest mass energy 1.022 MeV) the cutoff energy per particle is 0.511 MeV if both particles are moving, or 1.022 MeV if only one is.
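The 0.511 MeV and 1.022 MeV figures follow directly from m_e c^2; a minimal check using standard constants (CODATA values, not from the email):

```python
m_e = 9.10938e-31   # electron mass, kg
c = 2.99792e8       # speed of light, m/s
MeV = 1.60218e-13   # joules per MeV

E_rest = m_e * c**2 / MeV   # ~0.511 MeV per particle (the IR cutoff figure)
E_pair = 2 * E_rest         # ~1.022 MeV for an e+ e- pair
print(E_rest, E_pair)
```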

For the neutral W (i.e. the Z_o), the rest energy is 91 GeV, so it is produced in the vacuum only above a similar-energy cutoff (although the charged W’s in the loops have lower energy).

Surely the electroweak symmetry breaking could be due to the abundance of massive Z_o gauge bosons above 91 GeV? The specific “Higgs expectation value” of 246 GeV, see http://en.wikipedia.org/wiki/Vacuum_expectation_value , is 2.7 or e times the 91 GeV Z_o gauge boson mass!

In radiation attenuation, after one mean free path you get attenuation by a factor of e. So why can’t the Z_o be the mass-causing and electroweak-symmetry-breaking “Higgs boson”?
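The numerical coincidence claimed here is easy to check (the 246 GeV and 91 GeV figures are the ones quoted above):

```python
import math

vev = 246.0   # GeV, quoted Higgs vacuum expectation value
m_Z = 91.0    # GeV, quoted Z_0 mass

ratio = vev / m_Z
print(ratio, math.e)  # 2.703... vs e = 2.718..., agreement to about 0.6%
```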

My suggestion is that the "epola" is not like an electron and positron lattice.

First, the problem is that what we see as electrons are freed electrons from the "epola" and are surrounded by shielding "epola" pairs (virtual charges).

Hence, the real "epola" charges don't have the electron charge, but have a charge 137.036 times stronger. You can't expect that charges in the "epola" are shielded by their own vacuum. That would be like saying that fleas have fleas on them, etc. The vacuum charges are distinguished from real charges for various reasons. One of these is that real charges get virtual charges (between the IR and UV cutoff energies) polarizing around them and shielding them.

Second, the Z_o massive bosons are freed from the "epola" at very high energy, 91 GeV or whatever. They are produced by Popper's scattering mechanism. At an energy e times greater than 91 GeV, ie at 246 GeV, the abundance of massive Z_o bosons is substantial, and cause electroweak unification. Below 246 GeV, there is no electroweak unification because the abundance of Z_o bosons is too small to allow efficient electroweak reactions to be mediated.

Hence, I'd reverse the usual question a bit. The vacuum breaks W+ and W- gauge boson electroweak symmetries by attenuating those massive charged bosons efficiently, so there are various processes going on, I think.

Apart from these problems - my suggestions above are semi-empirical and somewhat tentative - there are massive questions concerning the QCD colour force mediating "gluons". Properly understanding the full Standard Model requires a lot more than just the "epola".

I'd be interested for any comments you have.



Best wishes,

Nigel







----- Original Message -----

From: Guy Grantham
To: Nigel Cook
Sent: Tuesday, December 12, 2006 9:22 PM
Subject: Re: Displacement Current


Dear Nigel

The vacuum does mimic crystal behaviour. That screwlike dislocation moving at velocity v will suffer longitudinal contraction (apparently) because it will suffer a frequency change corresponding exactly to the different SR point of view that the Lorentz-Einstein equation uses to account for the hypotenuse distance travelled per cycle.


Time dilation and length contraction (Fitzgerald) are contrivances to make SR fit and avoid facing up to an absolute independent medium in which light propagates. Time dilation does not seem to occur if one acknowledges that a greater distance is travelled by light through its medium of propagation at subluminal velocities. 'Clocks' made of reflecting light beams are a contrivance to explain the claimed effect.
Length contraction does not occur as a mathematical subterfuge if one does not need to contrive to explain that the ruler shrank. Atomic orbitals will alter shape at high velocity, molecules compress slightly and eventually valence electrons fail to orbit (using simplistic terminology). Atoms will ionise at far less than 'c'. At or near 'c' only separate subatomic particles can travel independently.

See the attached pages from one of Simhony's (largely uncirculated) publications from 1995 (he didn't get the index finished before he wrote another).

Best regards, Guy
----- Original Message -----
From: Nigel Cook
To: Guy Grantham
Sent: Tuesday, December 12, 2006 8:32 PM
Subject: Re: Displacement Current


Dear Guy,

What do you make of ideas like

"... in 1949 some kind of crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4:

‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 - v^2/c^2)^{1/2}, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E_o/(1 - v^2/c^2)^{1/2}, where E_o is the potential energy of the dislocation at rest.’


"Specifying that the distance/time ratio = c (constant velocity of light), then tells you that the time dilation factor is identical to the distance contraction factor." - http://electrogravity.blogspot.com/

Best wishes,
Nigel

----- Original Message -----
From: Guy Grantham
To: Nigel Cook ; Monitek@aol.com ; sirius184@hotmail.com ; ivorcatt@hotmail.com ; forrestb@ix.netcom.com ; ivorcatt@electromagnetism.demon.co.uk ; jvospost2@yahoo.com ; chalmers_alan@hotmail.com ; bdj10@cam.ac.uk ; imontgomery@atlasmeasurement.com.au
Cc: pwhan@atlasmeasurement.com.au ; jackw97224@yahoo.com ; geoffrey.landis@sff.net ; andrewpost@gmail.com ; ernest@cooleys.net ; george.hockney@jpl.nasa.gov ; tom@tomspace.com
Sent: Tuesday, December 12, 2006 10:23 AM
Subject: Re: Displacement Current


This dilemma is completely resolved by regarding the 'annihilated' electrons and positrons to be bound into a solid lattice as does Simhony with his epola model. This concept explains why e-po's can be generated (freed) only as pairs from one point in the lattice but absorbed back into the lattice as individual particles separated in space. The phenomena associated with polarisable vacuum are well known for the ionic crystal lattice state. In the bound state the component electron and positron ions effectively exist as a 'negative energy' sink or ground state for the vacuum prepared to accept energy to free an epo pair. The velocity of light corresponds directly to the bulk deformation wave velocity of a crystal lattice. Waves can arise because the particles of a lattice are bound to each other. Neutrinos correspond to excitons of the lattice.

Guy
Nigel wrote:



No, the electric force becomes stronger as charges approach due to Coulomb's law. You seem to be confusing potential energy of the field with kinetic energy of the charges. When charges are well separated the potential energy is at a maximum, just as the higher an apple is on the tree, the more gravitational potential energy it has to gain from a fall. However, gravity is stronger when the masses are closer.

To summarise, you're wrong because the binding force increases as particles come together, despite the fact that the potential energy falls as particles come together.

Nigel
----- Original Message -----
From: Monitek@aol.com
To: nigelbryancook@hotmail.com ; sirius184@hotmail.com ; ivorcatt@hotmail.com ; forrestb@ix.netcom.com ; ivorcatt@electromagnetism.demon.co.uk ; jvospost2@yahoo.com ; chalmers_alan@hotmail.com ; epola@tiscali.co.uk ; bdj10@cam.ac.uk ; imontgomery@atlasmeasurement.com.au
Cc: pwhan@atlasmeasurement.com.au ; jackw97224@yahoo.com ; geoffrey.landis@sff.net ; andrewpost@gmail.com ; ernest@cooleys.net ; george.hockney@jpl.nasa.gov ; tom@tomspace.com
Sent: Tuesday, December 12, 2006 1:08 AM
Subject: Re: Displacement Current


In a message dated 11/12/2006 13:03:48 GMT Standard Time, nigelbryancook@hotmail.com writes:
Pair production is due to an electric field strong enough (over about 10^18 to 10^20 volts/metre depending on the calculation used) to free pairs of charges from the bound state of the vacuum.

You have neglected the situation which occurs as the e-p's approach, and that is charge cancellation: the binding force reduces as the particles cancel charge. Thus the closer they are, the looser the bond becomes.

Regards,
arden

 
At 2:30 PM, Blogger nige said...

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/

238 - nc Dec 14th, 2006 at 2:15 pm

‘... you’ve written an entire book for the public arguing that an entire research program is wrong, a sham, and akin to a religion or a cult, etc, etc, your presentation of scientific arguements and general approach to avoiding substantial issues is certainly something to hold up to scrutiny. How is anyone to give weight to anything you say (which such certainty) on any issue to do with this research program if they cannot tell when you are an expert on a paper or result you are discussing and when you are not? ... .’ - Clifford

Hi Clifford, the ‘entire’ book he wrote is not a polemic; it’s mainly (more than half) a discussion of the development of the Standard Model and the experimental facts known in physics, which came without the help of superstrings. Maybe Woit can write a ‘revisionist’ version which will explain how superstring theory led to the Standard Model and gravity, and is the basis of the universe? Then he might be taken seriously?

You did comment recently on another post that string theory will be a failure only when it is developed so far that it makes a falsifiable prediction which is tested and found to fail. Ptolemy could have defended the earth-centred universe endlessly with that claim, adding epicycles all the time to make it agree better with reality. It could be that string theory won’t do that, and will be discarded only when an alternative comes along. Then you have a basis for launching conspiracies to stamp out alternatives...

 
At 2:13 AM, Blogger nige said...

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/

268 - nc Dec 16th, 2006 at 2:08 am

“... It is the ideas that should lead the way.

“Present good, promising and successful ideas, and people will work on them, strings or not. We wait for Smolin and Woit to do so, rather than writing books attacking what we are doing.” [- Clifford V. Johnson]

This argument is irrefutable, very good. .... Unless you, Smolin and Woit disagree about what is a "good, promising and successful" alternative, of course.

A little from the editorial on p5 of the 9 Dec 06 issue of New Scientist: "[Unorthodox approaches] now seem the antithesis of modern science, with consensus and peer review at its very heart. ... The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting.... The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost."

Far easier to say anything else is crackpot. String isn't, because it's mainstream, has more people working on it, and has a large number of interconnected ideas. No "lone genius" can ever come up with anything more mathematically complex and technically impressive than string theory's ideas, which are the result of decades of research by hundreds of people. So it is unlikely that elite geniuses like Smolin and Woit will come up with ideas impressive enough to make media headlines. Stringers are safe.

 
At 4:47 AM, Blogger nige said...

Copy of a comment to the Asymptotia thread by Dr Dantas:

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/#comment-18410

269 - Christine Dantas Dec 16th, 2006 at 4:05 am

Clifford wrote:
I have not read their books, but it is well known that those views are also represented in their books. (...) I am arguing against the larger arc within which the books fit.

Hi Clifford,

This is an interesting detail, thanks for making this clear, specially the last sentence that I quote above from you.

I did not read Woit’s book, but read Smolin’s and reviewed it positively. My review focused on the topics which appeared to me to be quite general, namely: his view on how science works. I believe that he could have written a completely independent book on this matter without mentioning string theory whatsoever, but I could be mistaken.

So you see, in my interpretation, he also offers some other kind of “larger arc” concerning science that it is interesting by itself. However, as I previously wrote, it is quite understandable that the book would find a strong opposition due to the polemics it carries. At the end of the day, it is a pity that this implies a missing opportunity for some to learn about interesting and important issues that deserve attention, thought, action.

Best regards,
Christine


 
At 5:02 AM, Blogger nige said...

Assistant Professor Lubos Motl made the crackpot claim at his blog http://motls.blogspot.com/2006/12/patent-search-nine-patents-depend-on.html :

Patent search: nine patents depend on string theory

Google Patent Search Beta
containing the full text of the patents. This is a great opportunity to see the patents that depend on string theory. ;-) Among 7+ million patents, you will find nine patents involving string theory. Needless to say, there are no patents based on loop quantum gravity, doubly special relativity, or similar constructs. ;-)

String theory is relevant for

gravitational measurements of gas non-uniformities
avoiding aging process
space vehicle propelled by inflation
method for learning data classification
database searching based on vector triplets
gravity wave generator and energy storage device
cyclone separator
pattern recognition based on the action used in string theory
dictionaries (this is actually just string-period-theory)
The most realistic patents are the electromagnetic anti-aging cure and the space vehicle by Boris Volfson. Do you think that all the entries above were filed by crackpots? Think twice. The triplets actually come from IBM, and there are other companies in the list.

Figure 1: The inflation-driven space vehicle. U.S. patent 6960975


I commented politely on problems with Lubos' stringy hype:

http://motls.blogspot.com/2006/12/patent-search-nine-patents-depend-on.html

Fig 2A and Fig 2B in the patent you link to for "Space vehicle propelled by the pressure of inflationary vacuum state" http://www.google.com/patents?vid=USPAT6960975&id=-FUVAAAAEBAJ&printsec=abstract#PRA1-PA4,M1 look to me like the patented idea of Professor Eric Laithwaite, shown in the BBC "Heretic" programme available online here: http://www.gyroscopes.org/heretic.asp

If it is a patent infringement, the patent is worthless, because any attempt to enforce it will be thrown out. You can't defend a patent which is a deliberate or even inadvertent plagiarism of someone else's expired patent.

The fact that these guys are patenting before experimentally demonstrating that the principles are valid indicates quackery, not aptitude.

His idea there was to use gyroscopic effects to violate Newton's 3rd law and obtain action without reaction. He explains it with a non-working dummy model, where gyroscopes are moved around (apparently just as in the patent for the string theory space vehicle you show) to get thrust.

In general relativity there is a spacetime fabric supposedly related to a stringy graviton/Higgs field, which by Einstein's equivalence principle gives both inertial and gravitational mass.

If you can distort the spacetime fabric to provide the reaction, then you're not violating anything. A helicopter is able to fly upwards by pushing air downwards. Similarly, if you can push the spacetime fabric downwards very slightly (it would require a great deal of energy to provide even a slight deflection of spacetime fabric), the reaction force will propel you upward.

For more about Laithwaite see http://www.gyroscopes.org/1974lecture.asp and http://en.wikipedia.org/wiki/Eric_Laithwaite

Personally, seeing the sorry state of theories about gravitons and Higgs fields, I don't think theory is the way forward. As the videos of experiments show, experiment isn't either, because experimental things oscillate and you can't measure the true mass easily. Laithwaite was deceived into thinking some crackpot device lost weight, when really the scales couldn't measure the weight properly because of the vibrations from the gyroscopic systems. There was no weight loss when carefully measured at great expense. The energy needed to warp space enough to get propulsion is likely to be far in excess of anything useful: the rest mass-energy of the earth warps space enough to reduce the radius by 1.5 millimetres. Any successful system for violating Newtonian physics using general relativity effects is likely to require really vast mass-energy and even then will produce very small results.
nc | Homepage | 12.15.06 - 1:33 pm | #
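The 1.5 millimetre figure in the comment above is the general relativity "excess radius" for the Earth, GM/(3c²); a quick numerical check with standard constants (just a sketch verifying the quoted number):

```python
# Check of the "1.5 mm" figure quoted above: general relativity's
# excess radius for a sphere of mass M is delta_r = G*M / (3*c^2).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of the Earth, kg

delta_r = G * M_earth / (3 * c**2)
print(f"Excess radius of the Earth: {delta_r * 1e3:.2f} mm")  # ~1.5 mm
```

The tiny size of this deflection for a planet-sized mass is the point of the argument: any propulsion scheme relying on warping spacetime would need enormous mass-energy for a minuscule effect.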


Instead of admitting that his "evidence" for stringy rubbish in the patents applications was empty, Assistant Professor Motl responded with personal abuse and claimed his hype was just a joke:

... You're not welcome here, obnoxious crackpots and Quantokens and Nigel Cooks and others, OK? If you're not able to recognize that my text about an inflation-driven spaceship is not an endorsement of this "discovery" but a joke, then your brain is just irreversibly damaged and your presence will be causing permanent problems. I can't help you. I don't want to help you. It is not possible to help you. I want you to be gone.
Lubos Motl | Homepage | 12.15.06 - 10:34 pm | #


Is this what will happen when 10/11 dimensional M-theory collapses? The perpetrators will simply say it was a joke, and those exposing the fraud lack a sense of humour?

Very funny! Like Stalin saying that his sending of critics to Siberian salt mines was just a joke! Also notice the lack of festive spirit in Lubos' response. If he wants to cultivate humour, he is not succeeding any more with that than string "theory" is succeeding with connecting with physical reality (worthless epicycle type ad hoc "fits" aside).

I should, I suppose, be very grateful that Lubos didn't actually lump me in with the obnoxious crackpots, but made me a separate category: "... obnoxious crackpots and Quantokens and Nigel Cooks and others...". However, the underlying tone was slightly mean, and the argument he gave lacked scientific substance. It is December, not April 1st!

 
At 8:03 AM, Blogger nige said...

Copy of a comment:
http://cosmicvariance.com/2006/12/14/catching-up-lisa-randall-parents-toronto-and-new-york/#comment-154363

nc on Dec 16th, 2006 at 10:58 am

Warped Passages is the best string theory book I've read so far, from my point of view. Dr Randall is searching for interesting things: the connections between the theory and the reasons for real-world phenomena, like why gravity is especially weak. That kind of problem is more interesting to me than coming up with reasons for non-observed particles or non-observed Planck scale unification. Maybe it is because such researchers are searching for explanations of known facts that they have very high citations. If the physics community is really most influenced by people trying to connect theory to experimental facts (i.e., not spin-2 gravitons, not supersymmetry, not 10^500 parallel universes), then that is very encouraging. The probability that any of the stringy ideas are correct is less than 1, seeing that there is no solid (falsifiable) prediction yet, but it is encouraging that researchers who are trying to make connections to real observations are generating the most interest and the most respect in the physics community.

 
At 8:31 AM, Blogger nige said...

http://cosmicvariance.com/2006/12/14/catching-up-lisa-randall-parents-toronto-and-new-york/#comment-154374

nc on Dec 16th, 2006 at 11:19 am

RE: the trackback in comment 1, which links to a claim that cosmology is not science. I’ve seen similar disputes, eg, a Daily Telegraph article by Roger Highfield reports:

Prof Heinz Wolff complained that cosmology is “religion, not science.” Jeremy Webb of New Scientist responded that it is not religion but magic. … “If I want to sell more copies of New Scientist, I put cosmology on the cover,” said Jeremy.

- http://www.science-writer.co.uk/news_and_pr/announcements/2005a_announcements.html

I disagree with the implied attack on the big bang, which does make checkable predictions which were observationally confirmed in detail, such as abundance of light elements and microwave background radiation (for which a Nobel Prize was rightly awarded).

The big bang is definitely a science so long as you don't get too far into claiming that ad hoc modifications to general relativity (a cosmological constant powered by "dark energy") were predicted by Einstein's static cosmology of 1917. If you religiously worship it, then you're not doing science. The key thing is being critical about the more speculative ideas and "fixes" involved, without throwing the baby out with the bath water by rejecting well verified facts: http://www.astro.ucla.edu/~wright/tiredlit.htm is a very important page about the big bang evidence.

http://cosmicvariance.com/2006/12/14/catching-up-lisa-randall-parents-toronto-and-new-york/#comment-154380

nc on Dec 16th, 2006 at 11:28 am

(The cosmological constant powered by dark energy is superfluous if you allow for quantum gravity: force causing “exchange radiation” between distant receding masses is redshifted by recession, weakening gravity. That maps the supernova recession data on to general relativity without requiring any cosmological constant. There is simply a weakening of long range gravity from this mechanism. Hence dark energy isn’t required to negate long range gravitational attraction.)
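The redshift-weakening claim in the bracketed comment above is this blog's own model, not established physics; as a minimal numerical sketch, suppose the momentum delivered by exchange radiation from a source at redshift z is cut by the factor 1/(1 + z):

```python
# Sketch of the mechanism claimed above (this blog's model, not
# standard cosmology): exchange radiation from a mass receding at
# redshift z is assumed to deliver momentum reduced by 1/(1 + z),
# weakening the effective gravitational coupling to that mass.
def coupling_fraction(z):
    """Fraction of un-redshifted momentum delivered from redshift z."""
    return 1.0 / (1.0 + z)

for z in (0.0, 0.5, 1.0, 3.0):
    print(f"z = {z}: effective coupling fraction = {coupling_fraction(z):.3f}")
```

On this picture the gravitational pull of very distant receding matter fades away smoothly, which is what the comment offers in place of a cosmological constant.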

 
At 1:13 PM, Anonymous Anonymous said...

http://www.math.columbia.edu/~woit/wordpress/?p=500

loop Says: Your comment is awaiting moderation.

December 16th, 2006 at 4:11 pm

Weinberg is actually further out than Susskind over the anthropic explanation for the small positive CC. See this article by Susskind: http://www.newscientist.com/channel/fundamentals/mg18825305.800.html

"I remember when Steven Weinberg first suggested that the cosmological constant might be anthropically determined - that it has to be this way otherwise we would not be here to observe it. I was very impressed with the argument, but troubled by it. Like everybody else, I thought the cosmological constant was probably zero - meaning that all the quantum fluctuations that make up the vacuum energy cancel out, and gravity alone affects the expansion of the universe. It would be much easier to explain if they cancelled out to zero, rather than to nearly zero. The discovery that there is a non-zero cosmological constant changed everything."

(BTW, what about cosmic expansion resulting in the redshifted gauge boson exchange of quantum gravity over cosmic distances between receding masses? Is that a viable alternative explanation of the lack of long range gravitational retardation which is currently being attributed to a CC due to dark energy? Isn't it completely obvious that exchange radiation between receding masses may be redshifted, weakening the gravity coupling constant this way?)

 
At 2:59 AM, Blogger nige said...

Copy of a comment:

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/#comment-20074

289 - nc Dec 17th, 2006 at 2:50 am

“... I’ll repeat that Smolin and Woit are not claiming that AMS’s proof is insufficiently rigourous, or that it has unfilled gaps in it. They’re claiming that it’s not a proof at all.” - Professor Jacques Distler

“Proof. n. 1. The evidence or argument that compels the mind to accept an assertion as true.” - http://www.answers.com/topic/proof

That’s the primary definition. But it also says:

“2.a. The validation of a proposition by application of specified rules, as of induction or deduction, to assumptions, axioms, and sequentially derived conclusions.”

Definition 2.a (logical rigour) seems more stringent than definition 1 (brainwashing or consensus). If it isn't rigorous, why should anyone take it as proof? Lots of nice looking non-rigorous "proofs" collapse when an attempt is made to make them rigorous. Stanley Brown, editor of PRL, and his associate editor used this against me. I claimed simply that you can (given Minkowski's spacetime) interpret recession of stars as a variation not only of velocity with distance, but also of velocity with time past as seen from our frame of reference. This gives the stars outward acceleration, which gives them outward force, which by Newton's 3rd law gives equal inward force, which by the Fatio-LeSage mechanism (applied to gravity-causing exchange radiation, not to material rays) for the first time in history predicts the strength of gravity (you just have to allow for the redshift of the force-causing exchange radiation from receding stars weakening gravity, and for the increased density of the earlier - distant - universe, which tends to increase the outward force and inward reaction force as seen from our frame of reference). The PRL guys denied it was a proof. However, they never said what they were disputing.

This is an analogy to the position taken by Woit and Smolin over AMS's proof. If you don't believe it, you don't need to say what you think is wrong. That's professionalism.
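The first step of the argument in the comment above (the blog's own interpretation of recession, not mainstream cosmology) is simple arithmetic: if v = HR, and the "time past" at which we see distance R is t = R/c, then v = Hct, so receding matter has an apparent outward acceleration a = dv/dt = Hc:

```python
# Apparent outward acceleration of receding matter in the argument
# above (the blog's interpretation, not mainstream cosmology):
# v = H*R with time past t = R/c gives v = H*c*t, so a = dv/dt = H*c.
c = 2.998e8        # speed of light, m/s
Mpc = 3.086e22     # one megaparsec, m
H = 70e3 / Mpc     # Hubble parameter, ~70 km/s/Mpc, in 1/s

a = H * c          # apparent outward acceleration, m/s^2
print(f"a = H*c = {a:.2e} m/s^2")  # roughly 7e-10 m/s^2
```

Multiplying this acceleration by the mass of the receding universe is then supposed to give the outward force, with an equal inward reaction force by Newton's 3rd law; those later steps are the contested part of the claim.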

 
At 2:12 PM, Blogger nige said...

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi

301 - nc Dec 17th, 2006 at 2:07 pm

“... his argument asserting not only that string theory definitely failed, but even that it caused the failure of physics and science as a whole." - Gina

Neither Smolin nor Woit says that; Woit merely suggests that the landscape due to the 6-d Calabi-Yau manifold needed to compactify the extra 6 dimensions of superstring theory makes the future dismal. With a vast number of solutions possible it just isn't falsifiable physics, and there is no evidence that string theory is really getting closer to being falsifiable. Woit doesn't say this is a complete failure of all conceivable types of stringy theory. He focusses on the mainstream idea of 10-d superstrings with boson-fermion supersymmetry. Tony Smith is a regular commentator on Woit's blog who claims to have a way of getting 26-d bosonic string theory to do useful things. Danny Ross Lunsford has a 6-d unification which I find more interesting. Woit has written that if there is any way of getting string theory to work, he's interested in that.

If you look at the problem from the bottom up, and ask what a photon is or what a spinning quark core is (ignoring the major modifications due to the vacuum loops around the core), a vibrating string is the most simple explanation. By 'electric charge' people only mean 'electromagnetic field', and so all you have for an electron is what you observe, and the mass isn't a direct property but is provided by the vacuum. So the electron is just an electric monopole and magnetic dipole. You get that from a trapped electromagnetic energy current, like a photon trapped by gravity (which deflects photons); see Electronics World, Apr 03. A photon lacks thickness; it only has a transverse extent due to its oscillations. So it's just like an oscillating zero-width open string.

 
At 4:34 AM, Blogger nige said...

A simple string would be a photon, an oscillatory electromagnetic field. A photon could be defined as a discontinuity in exchange radiation, the latter being normally undetectable due to equilibrium: the electromagnetic field is mediated by exchange radiation, so the photon should be composed of exchange radiation.

The Yang-Mills exchange radiation theory suggests a physical model in terms of what is going on with the Poynting vector of energy flow, in electromagnetic fields. I'm going to develop this further. In the meantime:


http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/

303 - nc Dec 18th, 2006 at 4:30 am

The problem that people off the mainstream stringy M-theory bandwagon don’t have an audience for their music, or that they are using the wrong (non-string) instruments, is not restricted to physics.

One good analogy is that mainstream medicine is sometimes off course. The resolution of such crises doesn't usually come about through criticism of the mainstream, but through new ideas. The question is, how much obstruction to new ideas is taken as mere defensiveness against crackpottery? It's very easy to ignore new ideas. The Nobel laureate Barry Marshall, who discovered Helicobacter pylori in all duodenal ulcers (contrary to mainstream ideas about stress causing ulcers), swallowed it himself to demonstrate that it can cause ulcers. Still he was generally ignored from 1984-97.

Contrary to the popularist description of science as entirely rational and self-doubting, it’s extremely obvious that people’s status matters more than facts or evidence. Peer review decides what publications you get, whether you’re allowed to take a research degree in a given area, and those peers will only understand you if you’re working from the existing paradigms; otherwise, sorry, they are too busy for ‘pet ideas.’

 
At 5:23 AM, Blogger nige said...

http://asymptotia.com/2006/12/17/odd-one-out/#comment-21188

6 - nc
Dec 18th, 2006 at 5:21 am

Hi Clifford, falsifiable Biblical prophecy was allegedly confirmed by events. The main problem with Moses’ theory of science (M-theory for short) in the Bible is the lack of beautiful, rigorously understood equations and the lack of repeatable experimental confirmation. Luckily, people don’t make those mistakes today in science (allegedly).

 
At 10:53 AM, Blogger nige said...

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/

306 - nc Dec 18th, 2006 at 10:47 am

“... It is like claiming, for example, that my new “alternative Mars rocket made out from wood” would be better than a Saturn V, and going on at length about how the “establishment” would “oppress” my great idea. This appears as obvious nonsense to almost everybody, except perhaps to small kids and some desert tribes.” - Moveon

Strawman argument about “obvious nonsense”. Try choosing something that is censored as “obvious nonsense” without anyone even having read it or said what is wrong with it.

By the way, as a kid I used to launch wood and cardboard model rockets and they went higher than metal ones. Wood's a good material. Provide some calculation to prove it's definitely better to use metal for rockets! Wooden rockets were used for a long time before metal ones. The latest technology in engineering and maths is not the best just because it's the newest. That's just as much a logical fallacy as ad hominem arguments.

 
At 3:16 AM, Blogger nige said...

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/#comment-21904


307 - Clifford Dec 18th, 2006 at 10:51 am

The wooden rocket comment of Moveon, and nc’s unexpected* response to it, will keep me laughing all day!

-cvj

(*on second thought.... I should have seen it coming... )

308 - M Dec 19th, 2006 at 12:36 am

are we sure that approximating real rockets with toy wooden rockets is more funny than approximating the real quark-gluon plasma with a toy AdS/QCD?

309 - nc Dec 19th, 2006 at 3:10 am

“Toy rocket”? Sigh... M, metal has a massive expansion coefficient and also massive conductivity compared to wood; this ruptures seals and joints when it gets too hot, and metal alloys also lose strength with temperature. Wood is 25 times weaker than steel at low temperature, but is stronger and safer at high temperatures; it just ablates slightly. It wouldn’t burn in space. It wouldn’t even burn while travelling at supersonic speed upwards through the atmosphere. It takes a long time to heat up, unlike metal. When exposed to flash heat, a thin layer of the surface chars, and the carbonated surface protects the underlying wood, like a “smart” material. (It takes a lot of time and oxygen to burn it. No need for expensive and defective tiles which fall off the shuttle, etc.)



**********************


Wood doesn't actually catch fire easily, unlike paper, because it acts as a smart material with surface ablation under rapid heating, which causes a thin surface layer to carbonise and protect the underlying wood: 'Walking across burning coals is no big deal ... just a matter of freshman physics: the coal simply does not have good enough heat conduction properties to transfer enough energy to your foot to burn it as you walk across. ... When you are next baking some potatoes or a pie, open the oven. Everything in there is at the same really high temperature. You can touch (for a short time) the potatoes and the pie quite comfortably. But you would never touch the metal parts of the oven, right?' - Clifford, http://cosmicvariance.com/2006/04/28/gene-firewalker On a mission to Mars, wood wouldn't burn easily in space anyway, because of the lack of air.
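Clifford's oven example above comes down to thermal effusivity, e = sqrt(k·ρ·c_p), which governs how quickly a hot surface transfers heat to whatever touches it; a rough comparison (the material property values here are ballpark handbook assumptions, not measurements):

```python
import math

# Thermal effusivity e = sqrt(k * rho * cp) sets how fast a surface
# exchanges heat on contact (why hot oven metal burns but the pie
# doesn't). Property values below are rough ballpark assumptions.
def effusivity(k, rho, cp):
    """k: W/(m K), rho: kg/m^3, cp: J/(kg K)."""
    return math.sqrt(k * rho * cp)

e_steel = effusivity(50.0, 7850.0, 490.0)  # mild steel (assumed values)
e_wood = effusivity(0.15, 600.0, 1700.0)   # dry softwood (assumed values)

print(f"steel: {e_steel:.0f}, wood: {e_wood:.0f} (SI units)")
print(f"ratio: steel dumps contact heat ~{e_steel / e_wood:.0f}x faster")
```

The same low effusivity is why a charring wood surface heats its own interior so slowly, which is the "smart material" behaviour described above.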

 
At 3:30 AM, Blogger nige said...

310 - nc Dec 19th, 2006 at 3:24 am

I meant “carbonized”, not carbonated ;-)

 
At 3:33 AM, Blogger nige said...

More: http://glasstone.blogspot.com/2006/04/ignition-of-fires-by-thermal-radiation.html

 
At 10:35 AM, Blogger nige said...

http://discovermagazine.typepad.com/horganism/2006/12/eleven_worst_gr.html#comment-26758622

Andrei,

Both Kuhn and Popper deserve their place in the list, according to Dr Imre Lakatos:

‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. ... History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. ... What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. ... Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’

– Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

I'll give credit to Popper for one thing in "The Logic of Scientific Discovery":

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation ...’

– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

The chaotic motions of electrons and light on small scales are due to interference, from Dirac sea (path integral) type scatter:

‘Light ... “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

- Feynman, QED, Penguin, 1990, page 54.

Feynman also explains:

‘... when the space through which a photon moves becomes too small (such as the tiny holes in the screen) ... we discover that ... there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that ... interference becomes very important.’

‘... the ‘inexorable laws of physics’ ... were never really there ... Newton could not predict the behaviour of three balls ... In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981. No hocus pocus!

Posted by: nc | December 19, 2006 at 01:32 PM
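Feynman's point in the passages quoted above, that light "smells" only a small core of neighboring paths, is the stationary-phase behaviour of the path-integral phasor sum; a toy sketch with an arbitrary quadratic phase (illustrative only, not a real propagator calculation):

```python
import cmath

# Toy arrow-summing in Feynman's QED sense: each path contributes a
# unit phasor exp(i * phase). Near the classical path the phase is
# stationary, so phasors align and add; far from it the phase spins
# rapidly between neighboring paths, so contributions cancel.
def path_sum(offsets, scale):
    """Sum unit phasors with phase = scale * offset**2 (toy action)."""
    return sum(cmath.exp(1j * scale * x * x) for x in offsets)

near = [i * 0.01 for i in range(-50, 51)]       # paths near the classical one
far = [3.0 + i * 0.01 for i in range(-50, 51)]  # an equal bundle far away

print(abs(path_sum(near, 20.0)))  # large: coherent addition
print(abs(path_sum(far, 20.0)))   # small: rapid cancellation
```

The "too small mirror" and "tiny holes" effects Feynman describes are what happens when the geometry cuts into the coherent near-path bundle.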

 
At 1:36 PM, Blogger nige said...

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/

314 - nc Dec 19th, 2006 at 11:12 am

Hi Clifford, it’s not the lack of championing that makes my blood pressure go off the scale, it’s active censorship that’s the problem! As a specific example, would you have the nerve to ask Jacques if he agreed with the deletion of Lunsford’s published paper from arXiv in 2004? It was published in Int. J. Theor. Phys. 43 (2004), no. 1, pp. 161-177, and a non-updatable copy is at http://cdsweb.cern.ch/record/688763 (I know it’s non-updatable because I’ve got a paper in the CERN-EXT series which also can’t be updated, except via carbon-copy by updating a paper on arXiv, which bans non-mainstream things). Lunsford states:

‘I certainly know from experience that … point about the behavior of the gatekeepers is true - I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint).’

http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

315 - Clifford Dec 19th, 2006 at 11:24 am

nc:- There’s censorship, and there is filtering to keep signal-to-noise at a manageable level. You are entitled to your opinion about where to draw the line. I do not know which is which here and will certainly not get into it. (Btw, I ask and tell Jacques whatever I please, and he does so to me. I don’t understand what “nerve” has to do with anything when discussing science with a sensible colleague.) I’m not going to start discussing individual cases here. This is not intended to be a forum for random grievances about one’s pet theories, be they brilliant revolutions in the making from visionaries or total nonsense from well-meaning nutcases.

Or, come to think of it, be they brilliant revolutions in the making from well-meaning nutcases, or total nonsense from visionaries.
Either way, this is not the place for it.

-cvj

318 - nc Dec 19th, 2006 at 1:31 pm

Hi Clifford, thank you for acknowledging that there is censorship due to mainstream ideas which lack evidence, and for acknowledging that where the line should be drawn isn’t defined by scientific criteria, but is just a matter of personal opinions. That makes it all fine. : (

 
At 2:47 PM, Blogger nige said...

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/

Hi Clifford, thank you for acknowledging that there is censorship due to mainstream ideas which lack evidence, and for acknowledging that where the line should be drawn isn’t defined by scientific criteria, but is just a matter of personal opinions. That makes it all fine. : (

319 - Clifford Dec 19th, 2006 at 1:39 pm

You’re quite welcome. Even though I did not say those things.
-cvj

320 - nc Dec 19th, 2006 at 2:42 pm

Hi Clifford, well I saw what I took to be an acknowledgement from you that censorship of non-mainstream ideas occurs, and you stated that it is a matter for my opinion whether that is reasonable or not. Such a point of view is vague on what is right and what is wrong. If I completely misunderstand you, it’s not due to any problem in your lucidity; instead it’s my stupidity, lack of appreciation for string theory, etc. Similarly, if something gets deleted without even being read (within a few seconds) by the mainstream, that’s good noise-reduction policy. If their idea is any good, it will be taken seriously by someone who will be in a position to defend it. Excellent.

 
At 2:42 AM, Blogger nige said...

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi


321 - gina Dec 19th, 2006 at 10:03 pm

NC,

I do not think that you really refer to censorship. The rejection of Woit’s book from CUP was not an act of censorship but entirely reasonable (see comment (132)) and the book appeared elsewhere where it was appropriate. I suspect that the rejection of your ideas from PRL was very reasonable and I hope somebody, beyond the line of duty, will take the effort to look at them and tell you (better, on the record in a blog, maybe here) why they can’t work. But your ideas are presented on your homepage and weblog so everybody can read them.

322 - nc Dec 20th, 2006 at 2:33 am

Dear gina,

“I hope somebody, beyond the line of duty, will take the effort to look at them and tell you (better, on the record in a blog, maybe here) why they can’t work.”

Sigh. Love the scientific objectivity and lack of prejudice about whether ‘my’ ideas will be found useless! Very unbiased. The model is deliberately built like a jigsaw from pieces of empirically defensible fact, due to other people. I didn’t discover spacetime, Newton’s 3rd law, the big bang, etc. Put a jigsaw together from facts nobody disputes, and apparently the sum of those facts is absurd, even though it correctly predicts gravity and cosmology (no gravitational retardation of distant supernovae, predicted ahead of observation and published in 1996); so the ideas do work.

The investigation of ‘pet theories’ other than those of the mainstream awaits the fall of the mainstream theory. Naturally that can’t fall, because it isn’t falsifiable. It’s not a blog you need in order to compete with arXiv’s hyping of string theory, it’s a vast number of cited publications:

‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. …’ [the full passage is given in the comment quoted earlier above]

– Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

 
At 3:58 AM, Blogger nige said...

The American Amazon.com site has sent out a fascinating email (I am a customer of Amazon.co.uk, not of Amazon.com) about a new general relativity book aimed at mathematicians, which is cheap enough and topical enough (it includes some attempt at quantization of gravity) to be relevant, even if the maths is too abstruse for easy digestion:

From: Amazon.com
To: nigelbryancook@hotmail.com
Sent: Tuesday, December 19, 2006 8:16 PM
Subject: Save 32% at Amazon.com on "General Relativity for Mathematicians" by R. K. Sachs





Dear Amazon.com Customer,

We've noticed that customers who have expressed interest in Not Even Wrong: The Failure of String Theory And the Search for Unity in Physical Law by Peter Woit have also ordered General Relativity for Mathematicians by R. K. Sachs. For this reason, you might like to know that R. K. Sachs's General Relativity for Mathematicians will be released on January 2, 2007. You can pre-order your copy at a savings of $5.74 by following the link below.

General Relativity for Mathematicians
R. K. Sachs
List Price: $17.95
Price: $12.21
You Save: $5.74 (32%)

Release Date: January 2, 2007





Book Description


Geared toward mathematically sophisticated readers with a solid background in differential geometry, this text was written by two noted teachers at the University of California, Berkeley. It offers a firm foundation in the principles of general relativity, particularly in terms of singularity theorems and the quantization of gravity. 1977 edition.




More to Explore





Lie Groups, Lie Algebras, and Some of Their Applications
Robert Gilmore
Differential Forms (Dover Books on Mathematics)
Henri Cartan
The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next
Lee Smolin






Sincerely,

Amazon.com
http://amazon.com



 
At 4:03 AM, Anonymous Anonymous said...

Actually it's a 1977 book, unlikely to be any help.

 
At 10:09 AM, Blogger nige said...

http://eskesthai.blogspot.com/2006/12/cosmic-ray-spallation.html

Of course my mind is thinking about the events in the cosmos. So I am wondering what is causing the "negative pressure" as "dark energy," and why this has caused the universe to speed up. - Plato

Dear Plato,

The vacuum loops of charges only occur in very intense fields, near matter. If they occurred everywhere in the vacuum, the observed charge of particles would be zero, because vacuum polarization would continue to be caused until there was no uncancelled charge left.

Instead, the loops only occur out to a maximum range where the field strength is about 10^20 volts/metre. This corresponds to a distance of about 1 fm from a unit charge (by Gauss's law). The collision energy needed for electrons to approach within this distance against Coulomb repulsion is 1.022 MeV per collision, or 0.511 MeV per particle. This is the "infrared cutoff" energy.
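As a rough numerical sanity check on the figures in this paragraph (a sketch using standard CODATA constants; the ~1 fm range and the cutoff field strength are the figures quoted above, not derived here):

```python
# Back-of-envelope check of the field strength and closest-approach
# figures quoted above. Constants are standard CODATA values.
e = 1.602176634e-19        # electron charge, C
k = 8.9875517923e9         # Coulomb constant 1/(4*pi*eps0), N m^2/C^2
r = 1e-15                  # 1 fm in metres

# Electric field of a unit charge at 1 fm, by Coulomb's/Gauss's law:
E_field = k * e / r**2
print(f"E at 1 fm: {E_field:.2e} V/m")

# Distance of closest approach for two electrons with 1.022 MeV total
# collision energy, equating kinetic energy to Coulomb potential energy:
KE_joules = 1.022e6 * e
r_closest = k * e**2 / KE_joules
print(f"closest approach: {r_closest:.2e} m")
```

The field at 1 fm comes out at roughly 1.4 x 10^21 V/m and the closest-approach distance at roughly 1.4 fm, so both figures quoted above are in the right ballpark to within an order of magnitude.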

Beyond a radius of 1 fm from a quark or lepton, there are no annihilation-creation loops in the vacuum! I think this fact from quantum field theory is vital and is suppressed, censored out. Yet it is key to correct renormalization, and is experimentally validated by reality.

Beyond 1 fm from a charge, you only have bosonic radiation. If such radiation doesn't oscillate charges, you can't detect it except by the forces it produces, so that is gauge boson radiation.

The loops of loop quantum gravity consist of exchange radiation being transferred between masses, delivering gravitational force.

If the masses are receding - as in the case of long-range gravity in cosmology - the redshift effect on the gauge bosons for gravity would weaken gravity.

Hence, there is no long-range gravitational retardation of the expansion of the universe, contrary to Friedmann's solution to general relativity.

There is no dark energy. The effect (a lack of gravity slowing expansion over large distances) supposed to be evidence for dark energy is actually evidence which confirms quantum gravity.

In the expanding universe, over vast distances the exchange radiation will suffer energy loss like redshift when being exchanged between receding masses.

This predicts that gravity falls off over large (cosmological-sized) distances. As a result, Friedmann’s solution is false. The universe isn’t slowing down! Instead of R ~ t^{2/3}, the corrected theoretical prediction turns out to be R ~ t, which was confirmed by Perlmutter’s data two years after the 1996 prediction was published. Hence there is no need for dark energy; instead there is simply no gravity pulling back very distant objects. Nobel Laureate Phil Anderson grasps this epicycle/phlogiston-type problem:

‘the flat universe is just not decelerating, it isn’t really accelerating’ - Phil Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

None of this is speculative. Gravity must be caused this way, because there's nothing else available to do it. The Dirac sea of creation-annihilation loops only occurs very close to charges. Elsewhere, everything is done purely by bosonic radiation effects.

 
At 3:37 AM, Anonymous Anonymous said...

The electrons have real positions, and the indeterminacy principle is explained by ignorance of their positions, which are always real but often unknown - instead of by metaphysics of the type Bohr and Heisenberg worshipped.

 
At 7:46 AM, Blogger nige said...

http://www.math.columbia.edu/~woit/wordpress/?p=500#comment-20040

nc Says:

December 21st, 2006 at 10:42 am

“... they are claiming that string theory remains the most promising approach to a unified theory...”

What are they comparing with what? Where is the scientific comparison?

They're waving their arms a bit and saying they're right **because** everyone else is wrong, without having any evidence that they've checked all the alternatives. That's narcissism.

“well, it took more than 2000 years to get predictions out of the theory that there are atoms”.

As Brian Greene says in The Elegant Universe, atom means unsplittable, so the atomic hypothesis was, strictly speaking, falsified in 1939.

 
At 10:47 AM, Blogger nige said...

http://eskesthai.blogspot.com/2006/12/cosmic-ray-spallation.html

Dear Plato,

The mainstream claims that the lack of slowing down of receding supernovae is evidence that gravity (which would slow them down as they recede) is being offset by dark energy.

This interpretation is wrong. The IR cutoff in renormalized QFT shows that there are no creation-annihilation loops beyond 1 fm from matter. If these loops did exist, there would be a mechanism for dark energy to operate. But they don't. We know they don't, because quantum effects like pair production are entirely restricted to regions near matter where the electric field is stronger than 10^20 volts/metre. Also, polarization of the vacuum, which shields all charges, would be completely effective if the IR cutoff didn't exist, so there would be no real charges left.

Instead of this dark energy creation-annihilation loop filled vacuum, the true interpretation of Perlmutter's supernovae recession observations was predicted via Oct 1996 Electronics World, letters pages.

What happens is that quantum gravity mechanism effects prevent the receding galaxies from being slowed down.

Normally, we only see gravity between masses which are not receding relativistically. E.g., an apple and the Earth are not receding from one another appreciably, and neither are the Earth and the Sun. In these cases, the exchange of Yang-Mills gravity-causing radiation is not redshifted, so only the inverse-square law and curvature/contraction effects occur.

But when the stars are receding with large redshift, the gravity causing exchange radiation is affected and the gravity strength G is reduced as a result. Hence, the correct interpretation of the "dark energy" evidence is not dark energy, but Yang-Mills quantum gravity (not necessarily with a spin-2 graviton, however).

Happy Christmas!
nc | Homepage | 12.21.06 - 10:48 am | #

 
At 2:09 AM, Blogger nige said...

http://www.math.columbia.edu/~woit/wordpress/?p=501#comment-20074

n Says: Your comment is awaiting moderation.

December 22nd, 2006 at 5:06 am
No, the periodic table isn’t messy, it’s simple quantum mechanics.

The main structure of the periodic table comes from the set of four quantum numbers and the Pauli exclusion principle. Orbit number n = 1, 2, 3, ...; elliptical-shape orbit number l, which can take values n - 1, n - 2, n - 3, ..., down to 0; orbital direction magnetism, which gives a quantum number m with possible values l, l - 1, l - 2, ..., 0, ..., -(l - 2), -(l - 1), -l; and the spin direction effect, s, which can only take values of +1/2 and -1/2.

To get the periodic table we simply work out a table of consistent unique sets of quantum numbers. The first shell then has n, l, m, and s values of 1, 0, 0, +1/2 and 1, 0, 0, -1/2. The fact that each electron has a different set of quantum numbers is called the ‘Pauli exclusion principle’ as it prevents electrons duplicating one another.

For the second shell, we find it can take 8 electrons, with l = 0 for the first two (an elliptical subshell, if we ignore the chaotic effect of wave interactions between multiple electrons), and l = 1 for the other 6. Continuing in this simple way gives most of the structure of the periodic table from quantum mechanics. You do need more physics to allow for 'screening' of the nuclear charge by the electrons intervening between an outer orbit and the nucleus, and this type of problem makes the periodic table far more complex for heavier elements, but the number of underlying principles needed to explain everything is still tiny.
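The counting argument above can be sketched in a few lines of code (an illustration of the quantum-number bookkeeping only, not a full atomic model: screening and electron-electron chaos are ignored):

```python
# Enumerate the allowed (n, l, m, s) quantum-number sets for each shell,
# per the rules stated in the text. The Pauli exclusion principle means
# no two electrons share all four numbers, so each set holds one electron.
def shell_states(n):
    states = []
    for l in range(n):                  # l = 0, 1, ..., n - 1
        for m in range(-l, l + 1):      # m = -l, ..., 0, ..., +l
            for s in (+0.5, -0.5):      # spin up / spin down
                states.append((n, l, m, s))
    return states

for n in range(1, 5):
    print(n, len(shell_states(n)))      # capacities 2, 8, 18, 32 (i.e. 2n^2)
```

The first shell holds 2 electrons and the second holds 8 (2 with l = 0, 6 with l = 1), exactly as described above.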

 
At 2:39 AM, Blogger nige said...

http://cosmicvariance.com/2006/12/21/one-sentence-challenge/#comment-160013

nc on Dec 21st, 2006 at 3:17 pm

The creation-annihilation loops in the vacuum are limited to the range of the IR cutoff (~1 fm), so they don’t justify “dark energy”, which is a hoax due to a faulty use of general relativity, uncorrected for the redshift of the quantum-gravity-mediating gauge bosons. These get redshifted when exchanged between receding masses (gravitational charges), weakening gravity and preventing the big bang expansion from being slowed down by gravity as Friedmann’s [N*O*T E*V*E*N W*R*O*N*G solution to general relativity] suggested.



http://eskesthai.blogspot.com/2006/12/cosmic-ray-spallation.html

Dear Plato,

for your secondary question to me "what would you call the 70% [dark energy] then", I'd have a wide choice of analogies to call that fudge factor after: epicycles, phlogiston, caloric, elastic solid aether, ........ take your pick!

Ockham's Razor: entia non sunt multiplicanda praeter necessitatem;

entities should not be multiplied beyond necessity.

Merry Christmas!

 
At 3:13 AM, Blogger nige said...

http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/#comment-22238

325 325 - nc Dec 22nd, 2006 at 3:08 am

‘The dean followed the rule of not allowing and not even considering teaching initiative without the appropriate standard approval procedure. The dean’s rationale for this rule was the goal to prevent educational chaos. This indeed appears to be a rational point of view.’ - Gina, comment 323

Similarly, Galileo would have caused chaos if the more rational professors had allowed consideration of his radical, disruptive new approach to astronomy (using a telescope):

‘Here at Padua is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa labouring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky.’ - Letter of Galileo to Kepler, 1610, http://www.catholiceducation.org/articles/science/sc0043.html

 
At 8:17 AM, Blogger nige said...

(The hyperlinks will be missing from words in the message copy below: for the message with hyperlinks see the version of it on http://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/ )


http://physicsmathforums.com/showthread.php?p=6242#post6242

Path integrals are due to Yang-Mills exchange radiation

--------------------------------------------------------------------------------

The Aspect/Bell result has the most crackpot analysis you can imagine (Bell's inequality is anything but a model of Ockham's simplicity), with two contradictory 'interpretations', so it doesn't prove anything at all. Interpretation 1: dump light speed as the limit on action at a distance and accept instantaneous entanglement. OR interpretation 2: accept that the correlation of spins dumps the metaphysical 'wavefunction collapses upon measurement, not before' interpretation of the uncertainty principle. Choosing between these by fiat is just empty-minded arguing or bigotry, like the string theory assertions to 'predict gravity' made by Witten in Physics Today, 1996.

This will be my last comment here, as you are clearly a religious believer who likes the interpretation which has no evidence. There is no evidence for indeterminacy, parallel worlds, or anything of the sort. The dice land one way up on the floor under the table; they are not indeterminate until you find them!

‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here

Dr Thomas Love sent me a paper proving that there is a mathematical inconsistency between the time-dependent and time-independent forms of Schroedinger's wave equation when you switch over at the instant a measurement is taken. The usual claim about wavefunction collapse from the mathematical model is due to mathematical inconsistency, not nature.

Bohr simply wasn’t aware that Poincaré chaos arises even in classical systems of three or more bodies, so he foolishly sought to invent metaphysical thought structures (the complementarity and correspondence principles) to isolate classical from quantum physics. Chaotic motion on atomic scales can result from electrons influencing one another, and from the randomly produced pairs of charges in the loops within ~10^{-15} m of an electron (where the electric field is over about 10^20 v/m) causing deflections. The failure of determinism (i.e., closed orbits, etc.) is already present in classical, Newtonian physics. It can’t even deal with a collision of 3 billiard balls:

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
*****************

The physics of the photon requires, first and foremost, an understanding of Maxwell's special term. Faraday's law of induction allows a time-varying magnetic field to cause a curling electric field. To get the photon to propagate, you need to generate a magnetic field from the electric field somehow. Maxwell's final theory of the light wave works by saying that the vacuum contains charges which get polarized by an electric field, thereby moving and creating the magnetic field which is needed to allow wave propagation.

But QFT shows there's a cutoff on the electric polarizability of the vacuum, so there's no polarization beyond 1 fm from an electron, etc. Goodbye Maxwell's displacement current.

Instead of there being displacement current in a capacitor or transmission line as it 'charges up', there is radiation of energy through the vacuum.

It is vital to compare fermion and boson. The fermion has rest mass, half integer spin, and can remain at 'rest'. The boson has no rest mass (although it has gravitational mass in motion according to general relativity, because the effect of gravity is due to both mass and field energy), integer spin and can't remain at 'rest'.

Fermions (electrons in this case) have to flow in opposite directions in two parallel conductors to allow light-velocity energy (TEM wave, or transverse electromagnetic wave) propagation to work. The reason is that a single wire would result in infinite self-inductance, i.e., the magnetic field created would be uncancelled.

So you need two parallel conductors, each carrying a similar electron current in an opposite direction, to allow a light velocity logic pulse to propagate using electrons. This was discovered by Heaviside when he was sending Morse Code messages to his brother via the Newcastle to Denmark telegraph line.

Heaviside is wrong to impose a discontinuity (step) rise on the electric pulse. If the electric field at the front rose as a discontinuity, the rate of change of the current there would be infinite, and the resulting charge acceleration would result in an infinite amount of radiation with infinite frequency, which doesn't occur. All observed logic steps have a non-instantaneous rise.

During this non-infinite electric field strength rise in time and distance at the front of the logic step, charges do accelerate and do radiate (just like radio transmission antennas/aerials). Because the direction of charge acceleration in each conductor is the opposite of the other, the radiation from each is an inversion of the signal from the other. The two conductors swap radiation energy, which permits the pulse to propagate, and this effect takes the place of Maxwell's vacuum charge 'displacement current'. Behind the rise of the step, the magnetic field from the current in each wire partly cancels the magnetic field from the other wire, which prevents infinite self inductance.
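The claim that a truly instantaneous step would radiate infinitely can be illustrated with the standard Larmor formula for the power radiated by an accelerated charge (this is textbook classical electrodynamics, not a result special to this argument; the acceleration values below are purely illustrative):

```python
import math

# Larmor formula: P = q^2 a^2 / (6 * pi * eps0 * c^3)
# (standard classical result for a non-relativistic accelerated charge)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 2.99792458e8          # speed of light, m/s
q = 1.602176634e-19       # electron charge, C

def larmor_power(a):
    """Radiated power (watts) for acceleration a (m/s^2)."""
    return q**2 * a**2 / (6 * math.pi * eps0 * c**3)

# P grows as a^2, so a discontinuous step (a -> infinity) would mean
# infinite radiated power; a finite rise time keeps the power finite.
for a in (1e18, 1e20, 1e22):   # illustrative accelerations
    print(f"a = {a:.0e} m/s^2 -> P = {larmor_power(a):.2e} W")
```

Since the radiated power scales as the square of the acceleration, the infinite-rate-of-change front Heaviside assumed really would imply infinite radiation, which is why all observed logic steps rise non-instantaneously.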

In a photon of light, the normal background gauge boson radiation in spacetime provides the effective 'displacement current' permitting propagation. Since the Maxwell wave has the 'displacement current' take place in the direction of the electric field vector, i.e. perpendicular to the direction of propagation of the whole photon, the gauge boson radiation flow we need in the vacuum is that transverse to the direction of the light. This is why there is a transverse spatial extent to a photon: when it comes very close to electrons or matter (e.g., a screen with two small slits nearby), the gauge boson field strength increases dramatically:

‘Light … "smells" the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

- Feynman, QED, Penguin, 1990, page 54.

‘… when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’

- R. P. Feynman. (Some of this may be from his book Character of Physical Law, not QED, but it's all 100% Feynman.)

'I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths... then drill more slits and so more paths... then just drill everything away... leaving only the slits... no mask. Great way of arriving at the path integral of QFT.' - Prof. Clifford V. Johnson's comment, here

--------------------------------------------------------------------------------
Last edited by NB Cook : Today at 03:02 PM.

 
At 9:35 AM, Blogger nige said...

http://discovermagazine.typepad.com/horganism/2006/12/celebrating_win.html#comments

Posted by: Jennifer Loewenherz | December 22, 2006 at 10:53 AM

'... we will continue to find answers to so many more things previously unthinkable to be explained, but don’t think for a minute that we will answer them all.'

Jennifer, you are showing a bias in favour of magic over rational explanation.

Everything that in the past seemed a mystery or magic has - where pursued far enough - turned out to have a causal mechanism behind it. Quantum gravity will replace today's approximations to cosmology (general relativity solutions which ignore Yang-Mills quantum field theory dynamics, and effects on gauge bosons of relativistically receding gravitational charges, masses) and may solve the mysteries of the big bang.

If you are prejudiced in favour of there being some impossible-to-solve mystery behind something, then you are really against progress.

You then risk falsely attributing to magic some phenomena which really have a causal mechanism, and hence risk shutting off the scientific pursuit of a topic prematurely.

Posted by: nc | December 22, 2006 at 12:31 PM

 
At 4:57 AM, Blogger nige said...

From my last comment at:

http://physicsmathforums.com/showthread.php?p=6286#post6286

... One more thing, about the photon. The key problem is how you get the electric field to result in radiation which closes the cycle involving Faraday's induction law, without requiring Maxwell's vacuum charge displacement current (which can't flow in weak fields below the IR cutoff according to QFT).

My argument is that the electron needs to be seen for the collection of phenomena it is associated with, including electric field. When the electric field of the electron is accelerated, it radiates energy. (This radiation does the stuff which is normally attributed to Maxwell's vacuum charge displacement current.)

The photon likewise has an electric field. In a reference frame from which the electric field of the photon is seen to be varying, it constitutes the source of radiation emission - just like the radiation emission from an accelerating electron. So you've got to accept that it is possible to explain the photon with a correct theory. There is progress to be made here. Problem is, nobody wants to do it because it is not kosher [religiously correct rubbish] physics. I don't really care much about it myself, beyond ruling out inconsistencies and getting together a framework of simple, empirically defensible facts. ...


(28 December 2006)

 
At 12:33 PM, Blogger nige said...

Update: extract from an email to Mario Rabinowitz, 30 December 2006.

There are issues with the force calculation when you want great accuracy. For my purpose, I wanted to calculate an inward reaction force in order to predict gravity by the LeSage gravity mechanism which Feynman describes (with a picture) in his book "Character of Physical Law" (1965).

The first problem is that at greater distances, you are looking back in time, so the density is higher. The density varies as the inverse cube of time after the big bang.

Obviously this would give you an infinite force from the greatest distances, approaching zero time (infinite density).

But the redshift of the gravity-causing gauge boson radiation emitted from such great distances weakens its contribution. The further away the mass is, the greater the redshift of any gravity-causing gauge boson radiation coming towards us from it. So this effect puts a limit on the otherwise infinitely increasing effective outward force due to the density rising at early times after the big bang. A simple way to deal with this is to treat redshift as a stretching of the radiation, and the effects of density can be treated with the mass-continuity equation by supplying the Hubble law and the spacetime effect. The calculation suggests that the overall effect of the density rise (as limited by the increase in redshift of the gauge bosons carrying the inward reaction force) is a factor of e^3, where e is the base of natural logarithms. This is a factor of about 20, not infinity. It allows me to predict the strength of gravity correctly.
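A one-line check of the e^3 factor quoted above:

```python
import math

# The density-rise-limited-by-redshift argument above is summarized
# by a single multiplying factor of e^3 (e = base of natural logarithms):
factor = math.e ** 3
print(f"e^3 = {factor:.4f}")   # about 20.09, i.e. "about 20, not infinity"
```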

The failure of anybody (except for me) to correctly predict the lack of gravitational slowing of the universe is also quite serious.

Either it's due to a failure of gravity to work on masses receding from one another at great speed (due to redshift of force carrying gauge bosons in quantum gravity?) or it's due to some repulsive force offsetting gravity.

The mainstream prefers the latter, but the former predicted the Perlmutter results via October 1996 Electronics World. There is extreme censorship against predictions which are correctly confirmed afterwards, and which quantitatively correlate to observations. But there is bias in favour of ad hoc invention of new forces which simply aren't needed and don't predict anything checkable. It's just like Ptolemy adding a new epicycle every time he found a discrepancy, and then claiming to have "discovered" a new epicycle of nature.

The claimed accelerated expansion of the universe is exactly (to within experimental error bars) what I predicted two years before it was observed, using the assumption that there is no gravitational retardation (rather than an accelerated expansion just sufficient to cancel the gravitational retardation). The "cosmological constant" the mainstream is using is variable, to fit the data! You can't exactly offset gravity by simply adding a cosmological constant; see:

http://cosmicvariance.com/2006/01/11/evolving-dark-energy/


See the diagram there! The mainstream best fit using a cosmological constant is well outside many of the error bars. This is intuitively obvious from my perspective. What is occurring is that there is simply no gravitational slowing. But the mainstream is assuming that there is gravitational slowing, plus a dark-energy-driven acceleration which offsets that slowing. That doesn't work: the cosmological constant cannot do it. If it is perfectly matched to experimental data at short distances, it overcompensates at extreme distances, where it makes gravity repulsive and so overestimates the effect.

All you need to get the correct expansion curve is to delete gravitational retardation altogether. You don't need general relativity to examine the physics.

Ten years ago (well before Perlmutter’s discovery and dark energy), the argument arose that if gravity is caused by a Yang-Mills exchange-radiation quantum force field, where gravitons are exchanged between masses, then cosmological expansion would degrade the energy of the gravitons over vast distances.

It is easy to calculate: whenever light is seriously redshifted, gravity effects over the same distance will be seriously reduced.
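A minimal sketch of that statement, using the standard photon redshift relation E_received = E_emitted/(1 + z). Applying this relation to gravity-mediating gauge bosons is of course the hypothesis being argued here, and the z values are illustrative:

```python
# Standard redshift energy relation for radiation from a receding source.
# The hypothesis in the text is that gravity-mediating exchange radiation
# is degraded by the same factor as light.
def energy_fraction(z):
    """Fraction of emitted energy received, for redshift z."""
    return 1.0 / (1.0 + z)

for z in (0.1, 1.0, 7.0):
    print(f"z = {z}: radiation arrives with {energy_fraction(z):.0%} of emitted energy")
```

So for masses receding with z of order 1 or more, the exchanged radiation (and hence, on this hypothesis, the effective gravitational coupling) is reduced to half or less.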

At that time, 1996, I was furthering my education with some Open University courses and as part of the cosmology course made some predictions from this quantum gravity concept.

The first prediction is that Friedmann’s solutions to GR are wrong, because they assume falsely that gravity doesn’t weaken over distances where redshifts are severe.

Whereas the Hubble law of recession is empirically V = Hr, Friedmann’s solutions to general relativity predict that V will not obey this law at very great distances. Friedmann/GR assume that there will be a modification due to gravity retarding the recession velocities V, due effectively to the gravitational attraction of the receding galaxy towards the mass of the universe contained within the radius r.

Hence, the recession velocity predicted by Friedmann’s solution for a critical-density universe (which continues to expand at an ever-diminishing rate, instead of either coasting at constant velocity, which Friedmann shows GR predicts for low density, or collapsing, which would be the case for higher than critical density) can be stated in classical terms to make it clearer than using GR.

Recession velocity including gravity

V = (Hr) - (gt)

where g = MG/(r^2) and t = r/c, so:

V = (Hr) - [MGr/(cr^2)]

= (Hr) - [MG/(cr)]

M = mass of universe which is producing the gravitational retardation of the galaxies and supernovae, ie, the mass located within radius r (by Newton’s theorem, the gravity due to mass within a spherically symmetric volume can be treated as to all reside in the centre of that volume):

M = Rho.(4/3)Pi.r^3

Assuming (as was the case in 1996 models) the Friedmann critical density, Rho = 3(H^2)/(8.Pi.G), we get:

M = Rho.(4/3)Pi.r^3

= [3(H^2)/(8.Pi.G)].(4/3)Pi.r^3

= (H^2)(r^3)/(2G)

So, the Friedmann recession velocity corrected for gravitational retardation,

V = (Hr) - [MG/(cr)]

= (Hr) - [(H^2)(r^3)G/(2Gcr)]

= (Hr) - [0.5(Hr)^2]/c.

Now, my point is this. The term [0.5(Hr)^2]/c in this equation is the amount of gravitational deceleration of the recession velocity.
From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of the exchanged gravitons, the redshift of the gravitons must stop gravitational retardation from being effective. So we must drop the term [0.5(Hr)^2]/c.

Hence, we predict that the Hubble law will be the correct formula.
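A quick numerical check of the algebra above (the parameter values are arbitrary round numbers, chosen only to exercise the identity MG/(cr) = 0.5(Hr)^2/c when M = (H^2)(r^3)/(2G)):

```python
# Verify that substituting the critical-density mass M into the
# retardation term MG/(cr) reproduces the quoted 0.5*(H*r)**2/c.
H = 2.3e-18      # s^-1, roughly the Hubble parameter
c = 3.0e8        # m/s
G = 6.674e-11    # m^3 kg^-1 s^-2
r = 1.0e26       # m, a cosmological distance

M = (H**2) * (r**3) / (2 * G)          # mass within radius r at critical density
retardation = M * G / (c * r)          # gravitational deceleration term
shorthand = 0.5 * (H * r)**2 / c       # the form used in the text
assert abs(retardation - shorthand) < 1e-9 * abs(shorthand)

V_friedmann = H * r - retardation      # Friedmann-retarded recession velocity
V_hubble = H * r                       # prediction with the term dropped
print(V_friedmann, V_hubble)
```

With the term dropped, the recession velocity is just the empirical Hubble law V = Hr, which is the prediction being made.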

Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.

Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.

I can’t understand why something which to me is perfectly sensible and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.

People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmologically sized distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.

*****

For an example of the tactics of groupthink, see Professor Sean Carroll's recent response to me: 'More along the same lines will be deleted — we’re insufferably mainstream around these parts.' - http://cosmicvariance.com/2006/12/19/what-we-know-and-dont-and-why/#comment-160956


He is a relatively good guy, who stated:

‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here

The good guys fairly politely ignore the facts; the bad guys try to get you sacked, or call you names. There is a big difference. But in the end, the mainstream rubbish continues for ages either way:

Human nature means that instead of exercising scientific objectivity, people attack any idea outside the current paradigm either indirectly (by ad hominem attacks on the messenger) or by religious-type (unsubstantiated) bigotry, irrelevant and condescending patronising abuse, and sheer self-delusion:

‘(1). The idea is nonsense.

‘(2). Somebody thought of it before you did.

‘(3). We believed it all the time.’

- Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle, Home is Where the Wind Blows, Oxford University Press, 1997, p. 154).

‘If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner - even though he sat at the feet of Faraday... beetles could do that... he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaking manner, not unmixed with bluster, and make believe he knew all about it when he was a little boy!’

- Oliver Heaviside, Electromagnetic Theory Vol. 1, p337, 1893. - http://physicsmathforums.com/showthread.php?t=2123

 
At 1:14 PM, Blogger nige said...

More for the record:

Hubble in 1929 noticed that the recession velocities v of galaxies increase linearly with apparent distance x, so that v/x = constant = H (Hubble constant with units of 1/time). The problem is that because of spacetime, you are looking backwards in time as you look to greater distances. Hence, using the spacetime principle, Hubble could have written his law as v/t = constant where t is the travel time of the light from distance x to us.

x = ct, so the ratio v/t = v/(x/c) = vc/x. This constant has units of acceleration, and indeed is the outward acceleration equivalent to the normal Hubble recession whereby the recession velocity increases with spacetime (greater distances or earlier times after big bang).

The acceleration value here is on the order of 10^{-10} ms^-2. Newton's 2nd law F=ma gives the ~10^43 Newtons outward force from this, just by multiplying the acceleration by the mass of the universe (mass being the density multiplied by the volume of a sphere of radius cT where T is age of universe).
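The arithmetic in the two paragraphs above can be sketched numerically. This is only an illustrative check, using round assumed values (H of about 70 km/s/Mpc and the critical density for the mass of the universe) which are my inputs, not figures from the text:

```python
import math

# Assumed inputs (not from the original text):
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                # speed of light, m/s
H = 70e3 / 3.086e22        # Hubble constant ~70 km/s/Mpc, in s^-1

# Outward acceleration equivalent to the Hubble recession: a = vc/x = cH
a = c * H                  # ~6.8e-10 m/s^2

# Mass of universe: critical density times volume of sphere of radius cT = c/H
rho = 3 * H**2 / (8 * math.pi * G)   # critical density, ~9e-27 kg/m^3
R = c / H                            # Hubble radius, ~1.3e26 m
M = rho * (4/3) * math.pi * R**3     # ~9e52 kg

# Newton's 2nd law: outward force
F = M * a                  # ~6e43 N
print(f"a = {a:.2e} m/s^2, M = {M:.2e} kg, F = {F:.2e} N")
```

With these assumptions the force comes out at roughly 6 x 10^43 N, consistent with the order-of-magnitude 10^43 N quoted above.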

The idea is that the outward force here is compensated by an inward reaction force (Newton's 3rd law) which must be carried by gauge boson radiation since there is nothing else that can do so in the vacuum over large distances (the Dirac sea charge loops only exist above the IR cutoff, which means at electric fields above 10^20 v/m, or within about 1 fm from a particle). The gauge boson radiation causes gravity and predicts the strength of gravity correctly: http://feynman137.tripod.com/

Of course, this acceleration I'm pointing out isn't to be confused with the "acceleration of the universe" (allegedly due to dark energy). It is virtually impossible to get any discussion of this with anyone. The mainstream is confused and even people like Professor Sean Carroll ignore it. First of all, they would claim I'm confused about the acceleration of the universe, and then they would refuse to listen to any explanation. The whole subject does require a careful review or book to explain both the standard ideas and the necessary corrections.

 
At 3:15 AM, Blogger nige said...

More:

From: Nigel Cook
To: Mario Rabinowitz
Sent: Friday, December 29, 2006 3:31 PM
Subject: Re: I enjoyed your Web Site http://www.quantumfieldtheory.org. Your graphics are terrific.

Dear Mario,

Many thanks for this email, particularly on Feynman. I was visiting relatives over Christmas and did not have internet access.

Regarding general relativity and cosmology: the source of the expansion is an initial impulse in the Friedmann solution (no cosmological constant), with gravitation slowing down the expansion. In 1998 Saul Perlmutter discovered observationally, from extremely distant supernovae detected automatically by computer software from CCD telescope data, that the expansion doesn't obey Friedmann's solution without a cosmological constant. Hence a cosmological constant (lambda) was then added to the cold dark matter (CDM) general relativity model, giving the current standard cosmological model, the "lambda-CDM" model.

The problems with this are very serious. The whole thing is ad hoc. The cosmological constant put in has no physical justification whatsoever and can't be predicted.

The role of the cosmological constant is to offset gravity at extreme distances. Einstein in 1917 had a much greater cosmological constant, which would cancel gravity at the distance of the average gap between galaxies. Beyond that distance, the overall force would be repulsive (not attractive like gravity) because the cosmological constant would predominate. The failure of the steady state cosmology makes the application of general relativity to cosmology extremely suspect.

Another problem is that all known quantum field theories are Yang-Mills theories, whereby forces result from the exchange of gauge bosons. In the case of gravity, this would presumably result in redshift of gauge bosons between receding masses (which is not a problem in the Standard Model because the electromagnetic and nuclear forces as observed are relatively short ranged). This effect (as I predicted in 1996 via the October 1996 Electronics World magazine) means that the gravitational retardation on the expansion which Friedmann predicts is not correct.

This was confirmed by Perlmutter in 1998. However, the mainstream has chosen to avoid taking the data as implying that things are more simple than Friedmann's general relativity. General relativity is just a mathematical model of gravitation in terms of curvature with a clever contraction for conservation of mass-energy. It doesn't include quantum gravity effects like redshift of gauge bosons exchanged between receding masses: http://electrogravity.blogspot.com/2006/12/there-are-two-types-of-loops-behind.html

Best wishes,
Nigel
----- Original Message -----
From: Mario Rabinowitz
To: Nigel Cook
Sent: Saturday, December 23, 2006 4:06 AM
Subject: I enjoyed your Web Site http://www.quantumfieldtheory.org. Your graphics are terrific.


Dear Nigel,

I enjoyed your Web Site http://www.quantumfieldtheory.org. Your graphics are terrific. Since you mentioned the Feynman path integral, I thought I should mention that I think it should be called "The Feynman-Dirac path integral." I believe it was first suggested by Dirac in his book on QM. Feynman carried the concept to greater fruition.

By the way, Richard Feynman came to my office at SLAC because he wanted me to tell him about my physics research – a subject in which he was interested. He listened attentively for about an hour and asked many questions. When I finished, as he was leaving he said, "Mario, that is really good engineering."

For over a decade I thought it was a dig, until a friend of his told me that unless it was "elementary particle physics" he called it "engineering." He also told me that Feynman often tore people's ideas apart, if he disagreed. Feynman treated me with kid gloves, and I didn't appreciate it at the time.

I was motivated to look up some of your other work which is indeed original and interesting. Although I only have small expertise in the subject, please allow me to be so bold as to point out the alternative "orthodox" view.

As I understand the orthodox view of EGR, it is not as if the Big Bang concept represents an explosion in a pre-existent space. Rather it is that the Universe grew (perhaps exponentially) from a tiny size because it is creating more space for itself. (Now that's really lifting itself up by its bootstraps.) I think EGR argues that the distant stars are moving away from us (and each other) because new space-time is being created between us and them, not because of an initial impulse or force.

I think there is an ongoing force associated with this space expansion and that it is not sufficient to overcome solid state forces to expand planets and stars. I believe this differs from Pasqual Jordan's view (more than 40 years ago) that this expansion was an important factor in the expansion of the earth. I interpret your work as preferring the concept of an initial impulse or force. I either don't know enough to conclude one view is better than the other, or the knowledge of science on this subject is not sufficient to make a decision. The decision is obvious for the orthodox, but that doesn't make it right and they have often been wrong.

My best wishes to you for a JOYOUS HOLIDAY SEASON and a HAPPY NEW YEAR.

Best regards,
Mario

 
At 3:23 AM, Blogger nige said...

http://cosmicvariance.com/2007/01/01/happy-2007/#comment-167307

nc on Jan 1st, 2007 at 6:19 am

Do you think that the results of the LHC experiments will definitely be interpreted and presented correctly? There is an eternal tradition in physics that someone makes a mainstream prediction, and the results are then interpreted in terms of the prediction being successful, even where the prediction is a failure!

Example: Maxwell’s electromagnetic theory ‘predicted’ an aether, and Maxwell wrote an article in Encyclopedia Britannica suggesting it could be checked by experimental observation of interference in light beamed in two directions and recombined. Michelson and Morley did the experiment, and Maxwell’s theory failed.

But FitzGerald and Lorentz tried to save the aether by adding the epicycle that objects (such as the Michelson Morley instrument) contract in the direction of motion, thus allowing light to travel that way faster in the instrument, and preventing interference while preserving aether.

Einstein then reinterpreted this in 1905, preserving the contraction and the absence of experimental detection of uniform motion while dismissing the aether. The Maxwell theory then became a mathematical theory of light, lacking the physical displacement of vacuum charges which Maxwell had used to close the electromagnetic cycle in a light wave: the time-varying magnetic field, by Faraday’s induction law, creates a curling electric field, which displaces other vacuum charges, causing a new magnetic field, which repeats the cycle.

Experimental results lead first to mainstream theories being fiddled to ‘incorporate’ the new result. In light of historical facts, why should anyone have any confidence that the phenomena to be observed at LHC will be capable of forcing physicists to abandon old failed ideas? They’ll just add ‘corrections’ to old ideas until they match the experimental results ... string theory isn’t falsifiable so how on earth can anything be learned from experiments which has any bearing whatsoever on the mainstream theory? (Sorry if I am misunderstanding something here.)

 
At 3:56 AM, Blogger nige said...

http://riofriospacetime.blogspot.com/2006/12/holiday-gifts.html

nige said...
Rae Ann,

1. The role of "dark energy" (repulsion force between masses at extreme distances) is purely an epicycle to CANCEL OUT GRAVITY ATTRACTION AT THOSE DISTANCES, thus making general relativity consistent with 1999-current observations of recession of supernovas away from us at extreme distances.

2. YOU REPLACE THIS "DARK ENERGY" WITH REDUCED GRAVITY AT LONG RANGES.

YOU GET THE REDUCED GRAVITY AT LONG RANGES THEORETICALLY PREDICTED BY YANG-MILLS QUANTUM FIELD THEORY.

All understood forces (Standard Model) result from exchanges of gauge boson radiation between the charges involved in the forces.

Gravity has no reason to be any different. The exchanged radiation between gravitational 'charges' (ie masses) will be severely redshifted and gravity weakened where the masses are receding at relativistic velocities.

This effect REPLACES DARK ENERGY.

There is NO DARK ENERGY.

The entire effect attributed to dark energy is due to redshift of the gravity-causing gauge boson radiation in the expanding universe.

This was PREDICTED QUANTITATIVELY IN DETAIL TWO YEARS AHEAD OF THE SUPERNOVAE OBSERVATIONS AND WAS PUBLISHED, back in Oct. 1996.

I can't believe how hard it is to get people to see this.

This prediction actually fits the observations better than a cosmological constant/dark energy does. Adding a cosmological constant/dark energy FAILS to adequately model the situation because, instead of merely cancelling out gravity at long ranges, it under-cancels up to a certain distance and over-compensates at greater distances.

The "cosmological constant" the mainstream is using is variable, to fit the data! You can't exactly offset gravity by simply adding a cosmological constant, see:

http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

See the diagram there! The mainstream best fit using a cosmological constant is well outside many of the error bars. This is intuitively obvious from my perspective. What is occurring is that there is simply no gravitational slowing. But the mainstream is assuming that there is gravitational slowing, plus dark energy causing an acceleration which offsets that slowing. That doesn't work: the cosmological constant cannot do it. If it is perfectly matched to experimental data at short distances, it over-compensates at extreme distances, because there it makes gravity repulsive. So it overestimates the recession at extreme distances.

All you need to get the correct expansion curve is to delete gravitational retardation altogether. You don't need general relativity to examine the physics.

Ten years ago (well before Perlmutter’s discovery and dark energy), the argument arose that if gravity is caused by a Yang-Mills exchange-radiation quantum force field, with gravitons exchanged between masses, then cosmological expansion would degrade the energy of the gravitons over vast distances. It is easy to calculate: wherever light is seriously redshifted, gravity effects over the same distance will be seriously reduced.
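A minimal sketch of that claim, under the assumption that exchanged gravitons redshift exactly like light, so the received graviton energy falls by the same 1/(1+z) factor. The linear Hubble-law redshift z = Hr/c and the H value used here are my illustrative assumptions, not figures from the text:

```python
# Assumed inputs (illustration only):
c = 2.998e8        # speed of light, m/s
H = 2.27e-18       # Hubble constant in s^-1 (~70 km/s/Mpc)

def redshift(r):
    """Approximate linear-Hubble-law redshift z = v/c = Hr/c (assumption)."""
    return H * r / c

def graviton_energy_fraction(r):
    """Fraction of emitted graviton energy received from distance r,
    assuming gravitons redshift like light: E'/E = 1/(1+z)."""
    return 1.0 / (1.0 + redshift(r))

# Received energy fraction at 10%, 50% and 90% of the Hubble radius c/H:
for f in (0.1, 0.5, 0.9):
    r = f * c / H
    print(f, round(graviton_energy_fraction(r), 3))
```

On this sketch, exchange radiation from masses at half the Hubble radius arrives with only about two-thirds of its emitted energy, illustrating how severe redshift would weaken gravity over cosmological distances.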

At that time, 1996, I was furthering my education with some Open University courses and as part of the cosmology course made some predictions from this quantum gravity concept.

The first prediction is that Friedmann’s solutions to GR are wrong, because they assume falsely that gravity doesn’t weaken over distances where redshifts are severe.

Whereas the Hubble law of recession is empirically V = Hr, Friedmann’s solutions to general relativity predict that V will not obey this law at very great distances. Friedmann/GR assume that there will be a modification due to gravity retarding the recession velocities V, due effectively to the gravitational attraction between the receding galaxy and the mass of the universe contained within the radius r.

Hence, the recession velocity predicted by Friedmann’s solution for a critical density universe (one which continues to expand at an ever diminishing rate, instead of either coasting at constant velocity - which Friedmann shows GR predicts for low density - or collapsing, which would be the case for higher than critical density) can be stated in classical terms to make it clearer than using GR.

Recession velocity including gravity

V = (Hr) - (gt)

where g = MG/(r^2) and t = r/c, so:

V = (Hr) - [MGr/(cr^2)]

= (Hr) - [MG/(cr)]

M = mass of universe which is producing the gravitational retardation of the galaxies and supernovae, i.e., the mass located within radius r (by Newton’s theorem, the gravity due to the mass within a spherically symmetric volume can be treated as if it all resides at the centre of that volume):

M = Rho.(4/3)Pi.r^3

Assuming (as was the case in 1996 models) the Friedmann critical density, Rho = 3(H^2)/(8.Pi.G), we get:

M = Rho.(4/3)Pi.r^3

= [3(H^2)/(8.Pi.G)].(4/3)Pi.r^3

= (H^2)(r^3)/(2G)

So, the Friedmann recession velocity corrected for gravitational retardation,

V = (Hr) - [MG/(cr)]

= (Hr) - [(H^2)(r^3)G/(2Gcr)]

= (Hr) - [0.5(Hr)^2]/c.
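The two formulas just derived can be compared numerically. This is only an illustrative sketch; the Hubble constant value is my assumed round number, not from the derivation:

```python
# Assumed inputs (illustration only):
c = 2.998e8        # speed of light, m/s
H = 2.27e-18       # Hubble constant in s^-1 (~70 km/s/Mpc)

def v_hubble(r):
    """Plain Hubble law: V = Hr."""
    return H * r

def v_retarded(r):
    """Friedmann-style gravitationally retarded velocity:
    V = Hr - 0.5(Hr)^2/c, from the derivation above."""
    return H * r - 0.5 * (H * r) ** 2 / c

# At half the Hubble radius, Hr = 0.5c, so the retardation term is
# 0.5(0.5c)^2/c = 0.125c: the prediction drops from 0.5c to 0.375c.
r = 0.5 * c / H
print(v_hubble(r) / c, v_retarded(r) / c)
```

So at Hr = 0.5c the Friedmann retardation is already a 25% correction to the recession velocity, which is the size of deviation the supernova data should have shown if the gravitational slowing were real.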

Now, my point is this. The term [0.5(Hr)^2]/c in this equation is the amount of gravitational deceleration of the recession velocity.
From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of the exchanged gravitons, the redshift of those gravitons must stop gravitational retardation from being effective. So we must drop the term [0.5(Hr)^2]/c.

Hence, we predict that the Hubble law will be the correct formula.

Perlmutter’s supernova redshift results, obtained in about 1998 by software-automated searches of CCD telescope data, fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.

Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.

I can’t understand why something which seems to me perfectly sensible, and which is a prediction that was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.

People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmological distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.

 
At 9:54 PM, Blogger nige said...

ELECTROWEAK SYMMETRY BREAKING MECHANISM. The following is a comment I was about to submit to http://cosmicvariance.com/2007/01/01/happy-2007/ where I've already made a couple of longish comments. I decided not to submit it there for the moment, as I have made similar (although briefer) comments on this topic elsewhere, e.g. http://asymptotia.com/2006/10/29/a-promising-sign/, and such comments are either ignored or cause annoyance and get deleted due to their length. Trying to put new ideas into brief blog comments to get them read by the top dogs in physics doesn't work. What is needed are proper papers, which of course are censored if too innovative and successful compared to the useless, non-falsifiable mainstream stringy M-theory etc.:

Regarding electroweak symmetry breaking and the LHC: what signature, if any, can be expected if the mass-giving 'Higgs boson' is the already known massive neutral Z_o weak gauge boson?

Loops containing Z_o bosons will clearly have a threshold energy corresponding to their rest mass of 91 GeV. Their abundance will become great enough to cause electroweak symmetry above the 'Higgs expectation energy' of 246 GeV, as quoted at http://en.wikipedia.org/wiki/Higgs_boson

246 GeV is approximately e times 91 GeV, and in radiation scattering you get serious secondary radiation after one mean free path, which causes attenuation by a factor of e and thus an increase by that same factor of e in the energy carried by elastically-scattered radiation. This is in accordance with Karl Popper's claim that the Heisenberg energy-time relationship determining the appearance of vacuum loops is simply due to high-energy scatter of particles in the Dirac sea:

‘... the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. ... There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation ...’

– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

The threshold energy for any loop species is thus a crude analogue of the temperature threshold for boiling. The 100 C temperature corresponds to the kinetic energy needed by surface water molecules to break free of the fluid bonding forces. By analogy, the IR cutoff of 0.511 MeV/particle corresponds to an electric field strength of around 10^20 v/m, occurring at around 1 fm from a particle of unit charge. This strong electric field would break down the structure of the Dirac sea. At greater distances the vacuum is a perfect continuum (no annihilation/creation loops at all), but at shorter distances the loop charges get polarized and cause modifications to all types of interactions.
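The factor-of-e coincidence cited above between the Z_o rest mass and the Higgs vacuum expectation value can be checked directly. This is only a numerical check of the stated coincidence, using the standard published values:

```python
import math

z_mass = 91.19   # GeV: Z_o rest mass
vev = 246.0      # GeV: electroweak vacuum expectation value ('Higgs expectation energy')

ratio = vev / z_mass
# ratio ~2.698 versus e ~2.718: agreement to within about 1%
print(round(ratio, 3), round(math.e, 3))
```

So 246/91.19 is about 2.70, against e = 2.718, i.e. the claimed coincidence holds to roughly one percent.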

The fact that the equivalent (postulated) 'Higgs boson field' has spin-0 and not spin-1 (like the Z_o) can be explained by extending Pauli's empirical exclusion principle from spin-1/2 fermions to all massive particles, regardless of their spin. Empirically we know that spin-1 photons don't obey the Pauli exclusion principle, but that can be explained by their lack of rest mass, not by their spin.

Hence, the massive spin-1 Z_o field at the 'Higgs expectation energy' of 246 GeV would then, on account of the particles having rest mass (unlike photons), obey the extended Pauli exclusion principle and pair up with opposite spins. The spins of Z_o bosons in the field would thus cancel, giving an average of spin-0.

It is incredibly sad that simplicity is discounted in favour of mathematical obfuscation. Anyone with successful simple ideas must nowadays cloak them in arcane or otherwise stringy mathematics to make them look psychologically satisfying to the reader, who is now likely to be set against Feynman's approach of searching for simplicity:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ - R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

 
At 7:41 AM, Blogger nige said...

Exchange of messages with Mario Rabinowitz

I’ve recently exchanged about a dozen emails with Dr Rabinowitz, who has some interesting papers on arXiv.org (although in small ways I take issue with a few details). Since the purpose of my blog posts is to keep a record of developments, and the discussion with Dr Rabinowitz has been interesting and helpful, some extracts are presented below. Any reader should be aware that these discussions are not intended for publication and may contain errors; they are long yet quickly written emails, so may contain slips.

From: Mario Rabinowitz
To: Nigel Cook
Sent: Friday, December 29, 2006 10:26 PM
Subject: What struck me about your simple calculation is that you find the force to be ~ 10^43 N

Dear Nigel,

It is good to hear from you. I assumed that you were preoccupied.

Your view
>However, the mainstream has chosen to avoid taking the data as implying that things are more simple than Friedmann's general relativity.
is of interest to me as I too have dabbled in heresy.
In the 1990's, I developed a model of Black Hole Radiation that differs from Hawking Radiation. I call it Gravitational Tunneling Radiation and it is similar to Electric Field Emission. Rather than being isotropic with an extremely rapid evaporation of a Little Black Hole (LBH), it is beamed between the LBH and another body – the combination of which thins and lowers the gravitational potential energy barrier. This permits tunneling out of the LBH.

The consequence of my model is that all the LBH have not evaporated away as the Hawking model would predict. The dark matter may be accounted for in the remaining LBH from the Big Bang. Parsimoniously, the beamed Gravitational Tunneling Radiation from these LBH may help account for both the expansion and accelerated expansion of the Universe.

What struck me about your simple (simple is usually clearer and better than complex) calculation is that you find the force to be ~ 10^43 N. It is an interesting coincidence that the largest repulsive force (due to the beamed Gravitational Tunneling Radiation) in my model is ~10^43 N.

Another interesting coincidence is that I find that a universal largest attractive force between any size black holes is c^4/(16 G) ~10^43 N . I was tempted (and didn't resist) to conjecture that this may be the largest possible force that can be created in our universe. Two of my later papers dealing with this are in the ArXiv:

Progress in Dark Matter Research Editor J. Blain; NovaScience Publishers, Inc. N.Y., (2005), pp. 1 - 66. physics/0503079. Eqs. (11.5) and (11.6)
"Little Black Holes as Dark Matter Candidates with Feasible Cosmic and Terrestrial Interactions"

Hadronic J. Supplement 16, 125 arXiv.org/abs/astro-ph/0104055
" Macroscopic Hadronic Little Black Hole Interactions." Eqs. (3.1) and (3.2).

Hawking LBH which may result in the early universe can only be short-lived and don't play the game very long. My LBH are a different kind of player in the early and later universe in terms of beaming and longevity.

So much for heresy and conjecture. My next realization is mainstream and rigorous. I found that the quite complicated black hole blackbody expression for Hawking radiation power can be exactly reduced to

P = G(rho)h-bar/90

where rho is the density of the black hole. The 90 seems out of place for a fundamental equation. However the 90 goes away for Gravitational Tunneling Radiation where the radiated power is

P= (2/3)G(rho)h-bar x (transmission probability).

This is in the Arxiv. Eqs. (3) and (4).
"Black Hole Radiation and Volume Statistical Entropy." International Journal of Theoretical
Physics 45, 851-858 (2006). arXiv.org/abs/physics/0506029

Just a very small thing wrt your Web Site, http://www.quantumfieldtheory.org. My minor point is that the text in yellow is a bit hard to discern on my screen.

Best regards,
Mario


From: Nigel Cook
To: Mario Rabinowitz
Sent: Saturday, December 30, 2006 10:30 AM
Subject: Re: What struck me about your simple calculation is that you find the force to be ~ 10^43 N

Dear Mario,

Thank you very much for your comments and the interesting analogy of forces in your black hole Hawking radiation calculations. Hubble in 1929 noticed that the recession velocities v of galaxies increase linearly with apparent distance x, so that v/x = constant = H (Hubble constant with units of 1/time). The problem is that because of spacetime, you are looking backwards in time as you look to greater distances. Hence, using the spacetime principle, Hubble could have written his law as v/t = constant where t is the travel time of the light from distance x to us.

x = ct, so the ratio v/t = v/(x/c) = vc/x. This constant has units of acceleration, and indeed is the outward acceleration equivalent to the normal Hubble recession whereby the recession velocity increases with spacetime (greater distances or earlier times after big bang).

The acceleration value here is on the order of 10^{-10} ms^-2. Newton's 2nd law F=ma gives the ~10^43 Newtons outward force from this, just by multiplying the acceleration by the mass of the universe (mass being the density multiplied by the volume of a sphere of radius cT where T is age of universe).

The idea is that the outward force here is compensated by an inward reaction force (Newton's 3rd law) which must be carried by gauge boson radiation since there is nothing else that can do so in the vacuum over large distances (the Dirac sea charge loops only exist above the IR cutoff, which means at electric fields above 10^20 v/m, or within about 1 fm from a particle). The gauge boson radiation causes gravity and predicts the strength of gravity correctly: http://feynman137.tripod.com/

Of course, this acceleration I'm pointing out isn't to be confused with the "acceleration of the universe" (allegedly due to dark energy). It is virtually impossible to get any discussion of this with anyone. The mainstream is confused and even people like Professor Sean Carroll ignore it. First of all, they would claim I'm confused about the acceleration of the universe, and then they would refuse to listen to any explanation. The whole subject does require a careful review or book to explain both the standard ideas and the necessary corrections.

Best wishes,
Nigel

From: Mario Rabinowitz
To: Nigel Cook
Sent: Saturday, December 30, 2006 4:12 PM
Subject: Thank you for your clarity

Dear Nigel,

Thank you for your clarity. I did an independent confirmation of your numbers using a universe mass ~ 10^53 kg, and got F ~ 6 x 10^43 N. As with all simple calculations this neglects relativistic effects, but it is good enough to make your point, and get an interesting result. I also calculated your acceleration of 6 x 10^{-10} ms^-2. This is minuscule, and just goes to show that an extremely small acceleration acting for an extremely long time can have a huge effect. When the S. Perlmutter et al. (Nature 391, 51, 1998) and A. G. Riess et al. (Astronomical Journal 116, 1009, 1998) papers first came out, I scoured them to see what their claim is for the accelerated expansion of the universe. It was nowhere to be found. I guess this is because in EGR it is model dependent. Yet I would have expected them to give some numbers as related to different models.

Do you know what the claimed accelerated expansion of the universe is?

In my LBH model, the dark energy of repulsion is just beamed Gravitational Tunneling Radiation.

Best regards,
Mario

From: Nigel Cook
To: Mario Rabinowitz
Sent: Saturday, December 30, 2006 7:26 PM
Subject: Re: Thank you for your clarity

Dear Mario,
There are issues with the force calculation when you want great accuracy. For my purpose I wanted to calculate an inward reaction force to get predict gravity by the LeSage gravity mechanism which Feynman describes (with a picture) in his book "Character of Physical Law" (1965).

The first problem is that at greater distances, you are looking back in time, so the density is higher. The density varies as the inverse cube of time after the big bang.

Obviously this would give you an infinite force from the greatest distances, approaching zero time (infinite density).

But the redshift of the gravity-causing gauge boson radiation emitted from such great distances would weaken its contribution. The further away the mass is, the greater the redshift of any gravity-causing gauge boson radiation coming towards us from that mass. So this effect puts a limit on the contribution to gravity from the otherwise infinitely increasing effective outward force due to density rising at early times after the big bang. A simple way to deal with this is to treat redshift as a stretching of the radiation, and to treat the effects of density with the mass-continuity equation, supplying the Hubble law and the spacetime effect. The calculation suggests that the overall effect of the density rise (as limited by the increase in redshift of the gauge bosons carrying the inward reaction force) is a factor of e^3, where e is the base of natural logarithms. This is a factor of about 20, not infinity. It allows me to predict the strength of gravity correctly.
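The e^3 density-correction factor mentioned in the email above is simple arithmetic and can be checked in one line; this sketch only verifies the number, not the underlying physical argument:

```python
import math

# Claimed net density-correction factor from the email above: e^3,
# i.e. a factor of about 20 rather than infinity.
factor = math.e ** 3
print(round(factor, 2))  # e^3 = 20.09 to two decimal places
```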

The failure of anybody (except for me) to correctly predict the lack of gravitational slowing of the universe is also quite serious.

Either it's due to a failure of gravity to work on masses receding from one another at great speed (due to redshift of force carrying gauge bosons in quantum gravity?) or it's due to some repulsive force offsetting gravity.

The mainstream prefers the latter, but the former predicted the Perlmutter results via the October 1996 Electronics World. There is extreme censorship against predictions which are correctly confirmed afterwards, and which quantitatively correlate to observations. But there is bias in favour of ad hoc invention of new forces which simply aren't needed and don't predict anything checkable. It's just like Ptolemy adding a new epicycle every time he found a discrepancy, and then claiming to have "discovered" a new epicycle of nature.

The claimed accelerated expansion of the universe is exactly (to within experimental error bars) what I predicted two years before it was observed, using the assumption that there is no gravitational retardation (instead of an accelerated expansion just sufficient to cancel the gravitational retardation). The "cosmological constant" the mainstream is using is variable, to fit the data! You can't exactly offset gravity by simply adding a cosmological constant, see:

http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

See the diagram there! The mainstream best fit using a cosmological constant is well outside many of the error bars. This is intuitively obvious from my perspective. What is occurring is that there is simply no gravitational slowing. But the mainstream is assuming that there is gravitational slowing, plus dark energy causing an acceleration which offsets the gravitational slowing. That doesn't work: the cosmological constant cannot do it. If it is perfectly matched to experimental data at short distances, it overcompensates at extreme distances, because it makes gravity repulsive there.

All you need to get the correct expansion curve is to delete gravitational retardation altogether. You don't need general relativity to examine the physics.

Ten years ago (well before Perlmutter’s discovery and dark energy), the argument arose that if gravity is caused by a Yang-Mills exchange radiation quantum force field, in which gravitons are exchanged between masses, then cosmological expansion would degrade the energy of the gravitons over vast distances.

It is easy to calculate: whenever light is seriously redshifted, gravity effects over the same distance will be seriously reduced.

At that time, 1996, I was furthering my education with some Open University courses and as part of the cosmology course made some predictions from this quantum gravity concept.

The first prediction is that Friedmann’s solutions to GR are wrong, because they assume falsely that gravity doesn’t weaken over distances where redshifts are severe.

Whereas the Hubble law of recession is empirically V = Hr, Friedmann’s solutions to general relativity predict that V will not obey this law at very great distances. Friedmann’s solutions assume that there will be a modification due to gravity retarding the recession velocities V, due effectively to the gravitational attraction of the receding galaxy to the mass of the universe contained within the radius r.

Hence, the recession velocity predicted by Friedmann’s solution for a critical density universe (one which continues to expand at an ever diminishing rate, instead of either coasting at constant velocity - which Friedmann shows GR predicts for low density - or collapsing, which would be the case for higher than critical density) can be stated in classical terms to make it clearer than using GR.

Recession velocity including gravity

V = (Hr) - (gt)

where g = MG/(r^2) and t = r/c, so:

V = (Hr) - [MGr/(cr^2)]

= (Hr) - [MG/(cr)]

M = mass of universe which is producing the gravitational retardation of the galaxies and supernovae, ie, the mass located within radius r (by Newton’s theorem, the gravity due to mass within a spherically symmetric volume can be treated as to all reside in the centre of that volume):

M = Rho.(4/3)Pi.r^3

Assuming (as was the case in 1996 models) the Friedmann critical density, Rho = 3(H^2)/(8.Pi.G), we get:

M = Rho.(4/3)Pi.r^3

= [3(H^2)/(8.Pi.G)].(4/3)Pi.r^3

= (H^2)(r^3)/(2G)

So, the Friedmann recession velocity corrected for gravitational retardation,

V = (Hr) - [MG/(cr)]

= (Hr) - [(H^2)(r^3)G/(2Gcr)]

= (Hr) - [0.5(Hr)^2]/c.

Now, my point is this. The term [0.5(Hr)^2]/c in this equation is the amount of gravitational deceleration of the recession velocity.
From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the term [0.5(Hr)^2]/c.

Hence, we predict that the Hubble law will be the correct formula.
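As a numerical cross-check of the algebra above, the retardation term MG/(cr) can be computed directly from the critical density and compared with the closed form [0.5(Hr)^2]/c (a sketch only; the particular H and r values are illustrative assumptions, not data):

```python
import math

# Check: with M = rho*(4/3)*pi*r^3 and critical density rho = 3H^2/(8*pi*G),
# the retardation term MG/(cr) reduces to 0.5*(H*r)**2/c, as derived above.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
H = 2.3e-18     # Hubble parameter in SI units (~70 km/s/Mpc), assumed
r = 1.0e25      # an arbitrary large distance in metres, assumed

rho = 3 * H**2 / (8 * math.pi * G)        # critical density
M = rho * (4/3) * math.pi * r**3          # mass within radius r
term_direct = M * G / (c * r)             # MG/(cr)
term_closed = 0.5 * (H * r)**2 / c        # 0.5(Hr)^2/c

assert math.isclose(term_direct, term_closed, rel_tol=1e-12)
```

The two expressions agree to floating-point precision, confirming the substitution steps in the derivation.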

Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.

Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.

I can’t understand why something which to me is perfectly sensible and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.

People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmological distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.

Best wishes,
Nigel

From: Mario Rabinowitz
To: Nigel Cook
Sent: Sunday, December 31, 2006 4:37 PM
Subject: Thank you for your interest in my work. I am genuinely interested in yours.

Dear Nigel, 12-31-06
I have two papers that deal with the problems of quantum gravity, so I'm not sure which one you have been taking another look at.

In my paper "A Theory of Quantum Gravity may not be possible because Quantum Mechanics violates the Equivalence Principle" physics/0601218, I show that there is a violation of the Weak Equivalence Principle and by implication the Strong Equivalence Principle.

In my paper, "Deterrents to a Theory of Quantum Gravity" physics/0608193 I show direct violation of the Strong Equivalence Principle and other short-comings of traditional quantum mechanics with respect to combining of EGR and QM.

I'm not sure whether both QM and EGR need to be fixed, or just one of them. I would guess that the 2nd paper may turn out to be more important, as some "dyed in the wool" QMers just shrug their shoulders and say that the SEP does not imply the WEP in QM as it does classically.

In this paper, I also show that there appears to be experimental evidence that the equivalence principle is violated experimentally in neutron interference.

Best regards,
Mario

From: Nigel Cook
To: Mario Rabinowitz
Sent: Monday, January 01, 2007 1:27 PM
Subject: Re: Thank you for your interest in my work. I am genuinely interested in yours.

Dear Mario,

Some time ago I read your paper "A Theory of Quantum Gravity may not be possible because Quantum Mechanics violates the Equivalence Principle" physics/0601218, and had several thoughts about it.

The idea you have of putting gravity in place of Coulomb's law in Schroedinger's equation to study the possibility of quantum gravity is very interesting.

One issue: in quantum field theory there is no implicit gravitational charge (mass) for any particle, and all masses (gravitational and inertial) are supplied by some kind of "Higgs field" mechanism (resistance to the acceleration of charges by some aspect of the spacetime fabric).

This external coupling of gravity to observed particles, via "gravitons" interacting with the Higgs field to create space time "curvature" (or loop quantum gravity exchange effects which are equivalent mathematically to curvature) could preserve the equivalence principle.
Another question is whether it makes a difference whether the quantum gravity force mediating "graviton" has spin 1 (like electromagnetism) or spin 2 (supposed to be required for an all attractive force). Being a simple person who likes physical models rather than purely mathematical abstraction, I think the important thing is that radiation is being exchanged between charges to create forces.

The fact that starlight is smoothly deflected by the sun's gravity in photographs taken during solar eclipses is direct evidence that the gravity mediation on light is a continuum effect, not particle-particle scattering. If gravitons were causing gravity by hitting photons (or rather the spacetime fabric which caused the deflection via either "curvature" or loop quantum gravity), then you would expect the deflected light to be diffused due to scatter effects.

Obviously the effectiveness of this argument depends on how much you value - or worry about - known physical phenomena in particle-particle reactions, over mathematical abstraction.

But in general, the evidence does suggest that the nature of gravity in the vacuum is more like the effects of a smooth continuum than an erratic quantum foam. According to Ockham's Razor and Mach's principle of science being based on evidence, the best resolution I can suggest is that all forces existing beyond the IR cutoff of quantum field theory (ie, beyond 1 fm from an electron or proton, where the electric field strength is below about 10^20 V/m) don't involve any random particles. The creation-annihilation "loops" in the vacuum only produce pair production and vacuum polarization phenomena within this range, ie, above this threshold electric field strength.

(A possibly crude analogy is that water is a relatively stable continuum composed of bonded molecules between its freezing and boiling points, but becomes a gas at high temperature. Popper showed in "The Logic of Scientific Discovery", 1935, that the Heisenberg uncertainty principle can be interpreted as a statistical scatter law, which would explain the quantum chaos that exists on small scales: the creation-annihilation "loops" are simply due to random collisions creating energetic vacuum particles in strong electric or other fields, which soon collide with other particles and are annihilated.)

Best wishes,
Nigel

From: Mario Rabinowitz
To: Nigel Cook
Sent: Monday, January 01, 2007 3:43 AM
Subject: Much to your credit, you write and calculate with great clarity

Dear Nigel,

I have now had a chance to carefully read your letter of Dec. 30 and check your equations. Your calculations look fine within the realm of approximation that you explicitly stated. Much to your credit, you write and calculate with great clarity. Too many purposely obfuscate to make it hard to tell if they are wrong or right.

I haven't thought about the gravitational red shift of photons and gravitons in a long time. I do seem to recall that at the very large distances with which you are concerned, the problem becomes complex and model dependent in terms of EGR with "z" factors to take this into consideration. However, off the top of my head I'm not sure how it would affect your conclusion.

The LeSage gravity model is intriguing. It seems to give both how and why answers, and may fit into a potential theory of quantum gravity. How successful are calculations of the universal gravitational constant "G" from such a model? I wonder how it fits into Dirac's conjecture that G is changing with time?

It is a shame that deviations from the mainstream are generally rejected by the orthodox. I can empathize with the disappointment you must have felt when your 8-page paper was rejected by the establishment. At least you persisted until it was finally published in Electronics World.

Best regards,

Mario

From: Nigel Cook
To: Mario Rabinowitz
Sent: Monday, January 01, 2007 2:25 PM
Subject: Re: Much to your credit, you write and calculate with great clarity

Dear Mario,

The redshift z factor is often obfuscated. The severest redshift observed is that of the cosmic background radiation, from about 300,000-400,000 years after the big bang. It is the thermal radiation which existed at 3,000-4,000 K when hydrogen atoms formed, making the universe transparent for the first time.

Because it was then transparent, the radiation was free to propagate without absorption, and has been doing so ever since, although we see it as microwave radiation due to redshift from the expanding universe (ie, due to the relativistic recession of the hydrogen ions which last scattered the radiation).
The redshift reduces its blackbody radiation temperature from ~3,500 K to 2.73 K, a reduction factor of ~1,300.

z = [ {wavelength observed} - {wavelength emitted} ] / {wavelength emitted}
= [ {frequency emitted} - {frequency observed} ] / {frequency observed}
= [ {temperature at emission} - {temperature at reception} ] / {temperature at reception}
~ 1,300 for the cosmic background radiation (biggest redshift ever observed).
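The last line follows from the temperature form of the definition, using the figures quoted above (a sketch of the arithmetic only; ~3,500 K at emission is the value used in this letter):

```python
# z from the blackbody temperature ratio, per the definition above:
# z = (T_emit - T_obs) / T_obs, using the temperatures quoted in the letter.
T_emit = 3500.0   # K, at last scattering (value assumed above)
T_obs = 2.73      # K, cosmic microwave background today
z = (T_emit - T_obs) / T_obs
print(round(z))   # ~1281, i.e. roughly 1,300 as stated
```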

In general relativity applied to cosmology, at such big redshifts z is approximately the ratio {temperature at emission} / {temperature at reception} and this is equal to the ratio of the radius (or scale factor) of the universe now, to that at the time in the past when the radiation was emitted. Hence the universe is now 1,300 times bigger than it was at ~350,000 years after the big bang (if this line of reasoning is correct).
The relativistic Doppler effect relates z to recession velocity v:
z = [ (1 + v/c) / (1 - v/c) ]^{1/2} - 1
which for very big redshift z and big recession velocities (v approaching c), reduces to approximately
z ~ [2c / (c - v)]^{1/2}

This equation comes from special relativity which has some issues (lacking dynamics) but it might be correct. So the recession velocity of the universe when the cosmic background radiation was emitted can be found using z = 1,300 which gives v = 0.9999988c.
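The velocity quoted above can be recovered by inverting the exact relativistic Doppler relation, 1 + z = [(1 + v/c)/(1 - v/c)]^{1/2} (a sketch of the arithmetic only, not a statement about the underlying dynamics):

```python
# Invert 1 + z = sqrt((1 + beta)/(1 - beta)) for beta = v/c:
# (1 + z)^2 = (1 + beta)/(1 - beta), hence beta = (s - 1)/(s + 1) with s = (1+z)^2.
z = 1300.0
s = (1 + z)**2
beta = (s - 1) / (s + 1)    # v/c
print(f"{beta:.7f}")        # ~0.9999988, as stated above
```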

On the topic of Dirac and changing G: first, Teller in 1948 claimed G can't change, or the sea would have boiled 300 million years ago, which contradicts evidence from the fossil record. However, I found a serious error in Teller's logic. He assumes that G changes without the electromagnetic force (Coulomb's law) also changing.

If both G and the electromagnetic force constant in Coulomb's law vary with time in the same way, then stronger forces will not vary the fusion rate in the sun or in the first few minutes of the big bang! This is because fusion depends on positively charged nuclear particles being able to approach each other (due to gravitational compression) closely enough, against Coulomb's law (which is repulsive), for the strong nuclear force to act and fuse them.

If gravity and electromagnetism forces are coupled, the increase in gravity doesn't increase fusion rates, because the accompanying change in Coulomb repulsion offsets that effect! Despite this, people still claim using Teller's false logic that G can't change or the big bang production of light elements would be affected, which is bogus.

Dirac guessed that gravity was stronger in the past, falling as G ~ 1/{time}.
In fact, the gravity mechanism I'm working with proves that gravity is increasing linearly with time, G ~ t.

This is because there is nothing to slow down the expansion of the universe, and the observed outward force is F = ma where m is mass (approximately constant, ignoring the mass-energy transformations in stars etc.), and a = c / {time past, ie time looking out from the present time of about 15,000,000,000 years towards smaller times}.

Hence, in the past gravity was weaker because, by scaling time, we get:
G_now ~ 1/ {time past corresponding to size of universe}
So scaling gives:
{G_now} / {G_past} = {t_now} / {t_past}
Hence G (now) ~ t (now).

{NOTE ADDED IN RE-READING THIS; I DON’T LIKE THE DISCUSSION ABOVE, see http://feynman137.tripod.com/#d for a better demonstration}

Hence G is actually directly proportional to the present time. There is a different treatment I have on the internet.

Scroll down also to the "Summary of results" list at http://feynman137.tripod.com/#d (you only have to scroll down a little way from that bookmark link).

Also:
On the topic of gravity decelerated expansion in the false (conventional) treatment of cosmology, see Brian Greene's The Elegant Universe, UK paperback edition, page 355: 'Inflation ... The root of the inflation problem is that in order to get two [now] widely separated regions of the universe close together [when looking backwards in time towards the big bang time origin] ... there is not enough time for any physical influence to have travelled from one region to the other. The difficulty, therefore, is that [looking backwards in time] ... the universe does not shrink at a fast enough rate.

'... The horizon problem stems from the fact that like a ball tossed upward, the dragging pull of gravity causes the expansion rate of the universe to slow down. ... Guth's resolution of the horizon problem is ... the early universe undergoes a brief period of enormously fast expansion...'

The problem is real but the solution isn't! What's happening is that the cosmological gravitational retardation idea from general relativity is an error because it ignores redshift of gauge boson radiation between receding masses (gravitational charges). This empirically based gravity mechanism also solves the detailed problem of the small-sized ripples in the cosmic background radiation. - http://electrogravity.blogspot.com/

Best wishes,
Nigel

From: Mario Rabinowitz
To: Nigel Cook
Sent: Monday, January 01, 2007 10:27 PM
Subject: I would like to respond to some of your points.

Dear Nigel,

Thanks for your detailed letters. I would like to respond to some of your points.
> On the topic of Dirac and changing G: first, Teller in 1948 claimed G can't change, or the sea would have boiled 300 million years ago, which contradicts evidence from the fossil record. However, I found a serious error in Teller's logic. He assumes that G changes without the electromagnetic force (Coulomb's law) also changing.
I didn't know that Edward had commented on Dirac's conjecture about G, or I would have asked him about it. I knew him for over 30 years, was often invited to his home, and although I didn't always accept his gracious offers to dine with him, I did have many enjoyable meals with him. Was Teller's logic that the bigger G would have caused the sun to be hotter and that the greatly increased solar radiation would have evaporated the ocean?
Wouldn't a change in the Coulomb force change alpha (the fine structure constant)? I'm not aware of experimental evidence that shows much if any change in alpha.

> Dirac guessed that gravity was stronger in the past, falling as G ~ 1/{time}. In fact, the gravity mechanism I'm working with proves that gravity is increasing linearly with time, G ~ t.
It's an interesting coincidence that you got G ∝ t. In my paper "Weighing the Universe and Its Smallest Constituents," IEEE Power Engineering Review 10, No. 11, 8-13 (1990), I also found G ∝ t. In 1937 Dirac proposed that G ∝ 1/t, based upon an observation of a numerical nature that dimensional ratios of some physical quantities yield numbers ~ 10^40. This seems inconsistent with the exception he took both to Milne's Dimensional Hypothesis that constants with dimensions should not appear in cosmology, and to Milne's G ∝ t. [My argument differs from Milne's.]
In 1938 Dirac concluded, "We are thus left with the case of zero curvature, or flat t-space, as the only one consistent with our fundamental principle and with conservation of mass." He maintained his position beyond 1973, despite his acknowledgment, "Now Einstein's theory of gravitation is irreconcilable with the Large Numbers hypothesis." In addition, time variation of G leads to nonconservation of energy. Today experiments on the moments of the early universe fluctuations seem to point to a flat universe – at least the part we can observe.

>The problem is real but the solution isn't! What's happening is that the cosmological gravitational retardation idea from general relativity is an error because it ignores redshift of gauge boson radiation between receding masses (gravitational charges).
As far as I am knowledgeable, your graviton red-shift concept is original, and may even be right. It certainly differs from the mainstream approach, and I haven't seen arguments as to why the gravitational attraction doesn't weaken between rapidly receding bodies. I presume energy is not conserved. If so, how do you deal with the non-conservation of energy? Have you thought of a non-graviton way that would lead to gravity reduction from a space-time curvature framework?

>One issue: in quantum field theory there is no implicit gravitational charge (mass) for any particle, and all masses gravitational and inertial) are supplied by some kind of "Higgs field" mechanism (resistance to the acceleration of charges by some aspect of the spacetime fabric). This external coupling of gravity to observed particles, via "gravitons" interacting with the Higgs field to create space time "curvature" (or loop quantum gravity exchange effects which are equivalent mathematically to curvature) could preserve the equivalence principle.
IMHO, I don't think that the Higgs field, nor gravitons interacting with the Higgs field to create space-time curvature reconciles the violation of the equivalence principle by QM. My paper "Deterrents to a Theory of Quantum Gravity" physics/0608193 may make this clearer than my earlier paper that QM violates the Strong Equivalence Principle (SEP). I show by a simple transformation of the Schroedinger equation to an accelerated frame, that this cannot introduce a gravitational potential energy into the Hamiltonian without fudging. Only in hindsight by a kind of phenomenological curve fitting with the phase factor, does QM pull a rabbit out of its hat. Even if this is allowed, then there is a basic inconsistency with respect to the Weak Equivalence Principle (WEP). The same can be shown for relativistic QM. In all cases, the equations have to reduce to the Schroedinger equation, so it may be sufficient just to deal with the Schroedinger equation.

>Another question is whether it makes a difference whether the quantum gravity force mediating "graviton" has spin 1 (like electromagnetism) or spin 2 (supposed to be required for an all attractive force). Being a simple person who likes physical models rather than purely mathematical abstraction, I think the important thing is that radiation is being exchanged between charges to create forces.
The prevailing view is that gravity can only be a quadrupole interaction, and hence has to be spin 2. I think traditional QM would violate the EP regardless of whether gravitons have spin 1 or 2. I totally agree with those who think we need a theory of Quantum Gravity. It is fundamental to the possibility of gravitational atoms, and many macroscopic things like black holes and black hole radiation, etc. If done properly it may totally revolutionize our concept of time. Yet, to me there appears to be a fundamental inconsistency that we need to come to grips with.
Speaking of time: We do not seem to occupy a preferred place in the universe, but it is nevertheless a very special place in being conducive not only to the existence of life but the preservation of life and discourse. On a broader scale, it appears that we live in a very special time in the history of the universe. Do you agree? Do you think this is just a coincidence, or do you think there is a deeper reason for this?

>The fact that star light is smoothly deflected by the sun's gravity in photographs taken during solar eclipses, is direct evidence that the gravity mediation on light is a continuum effect, not particle-particle scattering. If gravitons were causing gravity by hitting photons (or rather the spacetime fabric which caused the deflection via either "curvature" or loop quantum gravity), then you would expect the deflected light to be diffused due to scatter effects.
I'm not sure delicate enough experiments have been set up yet to discern any graininess, if it is there. On a large enough scale, the graininess may accumulate and be detectable. It's interesting that one needs to look at a small enough scale to find the graininess of liquids. Yet our limitations keep us from going to a small enough scale to see if space-time is grainy – so we have to go to a large enough scale. Just goes to show that one needs to have an open mind – which is not the same as an empty head.
Best regards,
Mario

From: Nigel Cook
To: Mario Rabinowitz
Sent: Tuesday, January 02, 2007 11:29 AM
Subject: Re: I would like to respond to some of your points.

Dear Mario,

Many thanks for your response. Did you discuss the electromagnetic pulse or nuclear weapons effects with Edward Teller at all? He seemed to be interested in discrediting some of the political-type exaggeration of certain supposed nuclear effects (climatic disaster predictions) in the 1980s.

E. Teller, http://prola.aps.org/abstract/PR/v73/i7/p801_1 "On the Change of Physical Constants", Phys. Rev. 73, 801 - 802 (1948) [Issue 7 – April 1948].

He did claim G was not varying. The fine structure constant or 137 number is the polarization shielding factor of electric charge, see http://nige.wordpress.com/

The ~10^40 ratio of electromagnetic force to gravity is not varying, it is due to the fact you can treat the charges in the universe in two different ways to add up the net charge - see http://electrogravity.blogspot.com/

" As far as I am knowledgeable, your graviton red-shift concept is original, and may even be right. It certainly differs from the mainstream approach, and I haven't seen arguments as to why the gravitational attraction doesn't weaken between rapidly receding bodies. I presume energy is not conserved. If so, how do you deal with the non-conservation of energy? "

Sorry, but visible light is redshifted, losing energy by the time it reaches us. This doesn't violate conservation of energy. For example, the visible light photons from any rapidly receding object lose energy as seen from our frame of reference. So for your objection to hold water, you would also be disproving all redshift data! Why claim gauge boson energy loss due to redshift violates conservation of energy, when it doesn't for observable light photons?

You are simply ignoring reference frames. The energy you get out of a moving object depends on the relative speed of the two objects.

If you fire a bullet at someone who is receding from you in an aircraft at nearly the speed of the bullet, the bullet won't have any significant energy relative to the target when it strikes. This doesn't violate conservation of energy. The energy depends on the reference frame.
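The bullet analogy above can be put in numbers (a non-relativistic sketch; the mass and speeds are made-up illustrative values):

```python
# Kinetic energy is frame-dependent: the same bullet carries very
# different energy in the ground frame and in a receding target's frame.
m = 0.01            # bullet mass in kg (illustrative assumption)
v_bullet = 900.0    # bullet speed, m/s, ground frame (illustrative)
v_target = 890.0    # target receding at nearly the bullet's speed, m/s

ke_ground = 0.5 * m * v_bullet**2                 # energy in ground frame
ke_target = 0.5 * m * (v_bullet - v_target)**2    # energy relative to target
print(ke_ground, ke_target)   # ~4050 J vs ~0.5 J: no conservation violated
```

The huge disparity is entirely a matter of reference frame, which is the point being made about redshifted gauge boson energy.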

I'll read "Deterrents to a Theory of Quantum Gravity" physics/0608193 as soon as possible. (Are you certain that the error is that the equivalence principle (inertial mass = gravitational mass) is definitely incompatible with quantum gravity, rather than a problem with the inconsistent, non-relativistic Hamiltonian that is used in quantum mechanics?)

In an intuitive way, I disagree with the idea that the Schroedinger equation should automatically apply to quantum gravity. It's clear that the cause of the Schroedinger equation is chaos introduced by interference with electron motion on small distance scales. This is partly due to the randomly appearing pair production/annihilation loops of virtual charges which are created in the strong electric field near a charge, causing random deflections of the motion of the orbital electron (so its direction keeps changing chaotically), and it is partly due to the fact that every observable atom suffers from the 3+ body Poincare chaos effect: http://nige.wordpress.com/

Because these effects don't occur to a significant extent on large scales (too far from charges for loops, etc.), quantum gravity will take a different form in general than that of Schroedinger's equation.
What's important is the mechanism, as far as I'm concerned. If two ideas are incompatible, say the equivalence principle in a quantum gravity and quantum mechanics, you really can't be certain where the error is. It might be best to investigate what the actual mechanism for quantum gravity is, before dismissing the possibility.

"I think traditional QM would violate the EP regardless of whether gravitons have spin 1 or 2. I totally agree with those who think we need a theory of Quantum Gravity. It is fundamental to the possibility of gravitational atoms, and many macroscopic things like black holes and black hole radiation, etc. If done properly it may totally revolutionize our concept of time."

I don't know what you mean by "gravitational atoms". Do you think there's some kind of mass which has no electric charge? Or do you consider the solar system to be a massive kind of "gravitational atom" (albeit not one obeying Schroedinger's model...). I don't agree with you that we need to speculate about these things, because there is loads of real data to be explained (masses of particles, etc.). Why build a theory to explain speculations which have no foundation in physical reality? That's the mess string theory creates.

"On a broader scale, it appears that we live in a very special time in the history of the universe. Do you agree? Do you think this is just a coincidence, or do you think there is a deeper reason for this?"
The anthropic principle isn't the kind of physics I'm interested in. Yes, you can "explain" everything by saying that everything is this way because if it were slightly different, we wouldn't be observing the world the way it is. That's just waffle. Hoyle used this "anthropic principle" to work out that there is a high capture cross-section for three alpha particles to fuse into carbon-12, because there was no other way to account for the observed abundance of carbon-12 in stars, planets, etc. But that's "fiddling" so to speak, like introducing renormalization to make quantum field theory work. What I want is a theory of physical phenomena that gives a causal explanation of the detailed dynamical processes occurring. So the way you should predict that alpha particles can fuse into carbon is by understanding the nuclear physics details involved, not by detective work based on coincidences and Sherlock Holmes' type logical elimination of alternatives.

"If gravitons were causing gravity by hitting photons (or rather the spacetime fabric which caused the deflection via either "curvature" or loop quantum gravity), then you would expect the deflected light to be diffused due to scatter effects.

I'm not sure delicate enough experiments have been set up yet to discern any graininess, if it is there. On a large enough scale, the graininess may accumulate and be detectable"

This depends on electromagnetic theory of what the light wave is waving in. See my comment http://cosmicvariance.com/2007/01/01/happy-2007/#comment-167307

"Maxwell’s electromagnetic theory ‘predicted’ an aether, and Maxwell wrote an article in Encyclopedia Britannica suggesting it could be checked by experimental observations of interference on light beamed in two directions and recombined. Michelson and Morley did the experiment and Maxwell’s theory failed.

"But FitzGerald and Lorentz tried to save the aether by adding the epicycle that objects (such as the Michelson Morley instrument) contract in the direction of motion, thus allowing light to travel that way faster in the instrument, and preventing interference while preserving aether.

"Einstein then reinterpreted this in 1905, preserving the contraction and the absence of experimental detection of uniform motion while dismissing the aether. The Maxwell theory then became a mathematical theory of light, lacking the physical displacement of vacuum charges which Maxwell had used to close the electromagnetic cycle in a light wave by producing a source for the time-varying magnetic field which by Faraday’s induction law creates a curling electric field, which displaces other vacuum charges causing a new magnetic field, which repeats the cycle.

"Experimental results lead first to mainstream theories being fiddled to ‘incorporate’ the new result. In light of historical facts, why should anyone have any confidence that the phenomena to be observed at LHC will be capable of forcing physicists to abandon old failed ideas? They’ll just add ‘corrections’ to old ideas until they match the experimental results … string theory isn’t falsifiable so how on earth can anything be learned from experiments which has any bearing whatsoever on the mainstream theory? (Sorry if I am misunderstanding something here.) "

Best wishes,
Nigel


From: Mario Rabinowitz
To: Nigel Cook
Sent: Tuesday, January 02, 2007 3:14 PM
Subject: Thank you for your interest and patience

Dear Nigel,

Thank you for your interest and patience.

>Did you discuss the electromagnetic pulse or nuclear weapons effects with Edward Teller at all? He seemed to be interested in discrediting some of the political-type exaggeration of certain supposed nuclear effects (climatic disaster predictions) in the 1980s.
No we didn't. I did not have a security clearance when I first wrote my report and papers. Edward wanted me to leave my job at EPRI, and go to work at Lawrence Livermore National Lab. I didn't want to.
Next there was a lot of pressure for me to get a very high level Q clearance as an employee of EPRI. I didn't want that either, but it was forced on me. So after I wrote all my papers, I had classified meetings at Sandia, and LLNL.
Long after Sagan et al proposed the Nuclear Winter scenario, Edward was concerned that in a nuclear war, part of the earth's atmosphere could be blasted into the solar system. The consequences of this make "nuclear winter" seem mild in comparison. However he was confident that because we were a democratic country that had both hydrogen bombs and the Polaris submarine, we could deter a nuclear war.

>Sorry, but visible light is redshifted, losing energy by the time it reaches us. This doesn't violate conservation of energy. For example the visible light photons from any rapidly receding object lose energy as seen from our frame of reference. So for your objection to hold water, you would also be disproving all redshift data! Why claim gauge boson energy loss due to redshift violates conservation of energy, when it doesn't for observable light photons? You are simply ignoring reference frames.
I agree that, "The energy you get out of a moving object depends on the relative speed of the two objects." Within the context of EGR, the problem is more subtle than this. I'll hold to my statement that a time varying G violates conservation of energy and is antithetical to EGR. The non-conservation of energy with a decreasing G is troublesome to me. Although energy is conserved globally, energy is not conserved locally in Einstein's general relativity because gravitational energy cannot be expressed in tensor form. What do you think?
It is not just a question of reference frame if the universe is treated as a closed system. I think there is a long-standing problem of radiation energy loss associated with the expansion of the Universe. I'm not sure if treating the cosmological redshift as a Doppler effect would make the energy violation disappear.
I think there are 2 questions, one related to gravitational energy and one related to electromagnetic energy. Even though the gravitational field is similar to the electromagnetic field with many analogs including magnetism, there are also some basic differences. Do you agree?
For a moving source, the photon recoil changes the velocity and kinetic energy of the source, taking up the energy difference given by the Doppler redshift, conserving energy. However, the cosmological expansion appears not to conserve either gravitational or radiant energy. I don't think it is as simple as a reference frame consideration because photons and gravitons are received with a lower energy than when emitted and because the source appears to be moving away from all the distant absorbers. I don't think there is a mechanism by which energy can be conserved – all photons and gravitons lose energy. To balance the energy loss we would have to postulate local variations in expansion based on photon and graviton density leading to intricate space-time topologies, and these would probably not satisfy energy conservation requirements.
Let us consider that you are indeed correct that there is no energy conservation problem. There is still another problem, aside from the question of energy conservation, if G were to vary. I think no one has yet solved the 2-body problem in EGR, even with the biggest and fastest computers. If the question must be asked in terms of a stationary massive body (the inner part of the universe) which has set up a gravitational field from which a test body (the outer part of the universe) is receding, then I would guess that there is no reduction in the gravitational force between them. The gravitational field of the stationary source body, in which the test body finds itself, has already been set up. What do you think? Since the expansion of the universe is not a one-body problem with test bodies, is your approach a good approximation?

>I'll read "Deterrents to a Theory of Quantum Gravity" physics/0608193 as soon as possible. (Are you certain that the error is definitely that the equivalence principle (inertial mass = gravitational mass) is definitely incompatible with quantum gravity, rather than a problem with the inconsistent, non-relativistic Hamiltonian that is used in quantum mechanics?)
Thanks, I'd appreciate your comments and criticisms. Inertial mass = gravitational mass (WEP) was the incentive for Einstein to develop EGR, but not the basis for it. The WEP led him to the SEP that an accelerated frame is equivalent to a gravitational field (and vice versa) is the cornerstone of EGR. So the important point is that QM violates the SEP in deciding the compatibility of QM and EGR. The SEP implies the WEP. Leonard Schiff tried to show that the WEP implies the SEP, but did not succeed, so this is known as Schiff's conjecture.

>In an intuitive way, I disagree with the idea that the Schroedinger equation should automatically apply to quantum gravity
The relativistic Dirac and Klein-Gordon equations don't apply either. It is legitimate to look at the Schroedinger equation because the Dirac and Klein-Gordon equations reduce to it at low velocity.

>I don't know what you mean by "gravitational atoms".
There may be highly compact neutral bodies that could form gravitational atoms with binding as strong or even stronger than electrostatic atoms. With enough mass the 10^40 factor can easily be exceeded.

I've rushed to respond as soon as I received your e-letter this am as there's lots to do today. Hope I come through coherently. Have to put out some fires now.
Best regards,
Mario

From: Nigel Cook
To: Mario Rabinowitz
Sent: Tuesday, January 02, 2007 8:27 PM
Subject: Re: Thank you for your interest and patience

Dear Mario,

"Within the context of EGR, the problem is more subtle than this. I'll hold to my statement that a time varying G violates conservation of energy and is antithetical to EGR. The non-conservation of energy with a decreasing G is troublesome to me. Although energy is conserved globally, energy is not conserved locally in Einstein's general relativity because gravitational energy cannot be expressed in tensor form. What do you think?"

Einstein's general relativity is discussed by me at http://electrogravity.blogspot.com/ in detail.
G is not decreasing, it's increasing with time. I've studied the gravitational energy conservation issue in detail and it agrees with this, energy IS conserved. From http://electrogravity.blogspot.com/ :

To prove Louise’s MG = tc^3 (for a particular set of assumptions which avoid a dimensionless multiplication factor of e^3 which could be included according to my detailed calculations from a gravity mechanism):

(1) Einstein’s equivalence principle of general relativity:

gravitational mass = inertial mass.

(2) Einstein’s inertial mass is equivalent to an energy:

E = mc^2

This equivalent energy is “potential energy” in that it can be released when you annihilate the mass using anti-matter.

(3) Gravitational mass has a potential energy which could be released if somehow the universe could collapse (implode):

Gravitational potential energy of mass m, in the universe (the universe consists of mass M at an effective average radius of R):

E = mMG/R

(4) We now use principle (1) above to set equations in arguments (2) and (3) above, equal:

E = mc^2 = mMG/R

(5) We use R = ct on this:

c^3 = MG/t

or

MG = tc^3

Hence G increases with t.
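A quick numerical sketch of the result (the figures for t and G below are rough assumed values of mine, inserted only to illustrate the arithmetic, not part of the derivation):

```python
# Illustrative check of MG = t*c^3, with rough assumed inputs.
c = 3.0e8    # speed of light, m/s
t = 4.3e17   # assumed age of universe (~13.7 Gyr), s
G = 6.7e-11  # measured gravitational constant, m^3 kg^-1 s^-2

# MG = t*c^3 implies a mass M = t*c^3/G:
M = t * c**3 / G
print(f"M = {M:.2e} kg")  # ~1.7e53 kg

# Holding M fixed, G = t*c^3/M grows linearly with t:
for frac in (0.5, 1.0, 2.0):
    print(f"t = {frac} t0 -> G = {frac * t * c**3 / M:.2e}")
```

With these inputs M comes out near 1.7 x 10^53 kg, the order of magnitude usually quoted for the mass of the observable universe, and the loop just makes explicit that a fixed M forces G to grow in proportion to t.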
Best wishes,
Nigel

From: Mario Rabinowitz
To: Nigel Cook
Sent: Wednesday, January 03, 2007 4:18 AM
Subject: interesting coincidence that by different routes we got MG = tc^3

Dear Nigel,

The work is piling on with time deadlines, so I haven't had a chance to look at the Web Sites you suggested. I did go over the calculations in your letter of 1-2-07. It is a very interesting coincidence that by different routes we got the same relation between M, G, c, and t; namely

MG = tc^3.

My calculation finds that the mass of the universe is

M = c^2R/G.

"Weighing the Universe and Its Smallest Constituents," IEEE Power Engineering Review 10, No. 11, 8-13 (1990).

This simply derived result differs little from Einstein's result

M= (pi/2)c^2R/G,

as derived from general relativity.

Substituting R = ct in my equation gives the same result that you got,

MG = tc^3.

There are 2 interpretations. If M is constant then I also found G ∝ t.

However the Friedmann universe solution of Einstein's general relativity has M ∝ t. Similarly, inflationary models of the universe have the entire universe evolving from a quantum fluctuation in empty space with a very small initial mass ~ 10 kg.

When I said a decreasing G is troublesome to me, I was referring to your idea that the gravitons from distant stars are redshifting to yield a smaller gravitational attraction i.e. a smaller effective G if I understood you in your letter of 12-30-6 when you said:

"From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the effect of the term
[ 0.5(Hr)^2]/c."

Best regards,
Mario

From: Nigel Cook
To: Mario Rabinowitz
Sent: Wednesday, January 03, 2007 6:00 AM
Subject: Re: interesting coincidence that by different routes we got MG = tc^3

Dear Mario,

"From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the effect of the term
[ 0.5(Hr)^2]/c."

This is not troublesome, it was published October 1996, and confirmed by observations two years later by Perlmutter. The trouble is not the above prediction, which is entirely successful. The trouble is that Perlmutter's result was compensated by the false invention of dark energy to cancel out long range gravity, instead of redshift. …

"However the Friedmann universe solution of Einstein's general relativity Has M ∝ t."

I don't see any evidence for that, and did a cosmology course when working on my approach a decade ago. M was constant with respect to t at that time. Maybe we're talking at cross purposes and there is a new person also called Friedmann who has recently come up with such a varying mass theory? However, maybe I'm wrong here and old Friedmann's landscape of endless (all false) solutions to GR does claim that mass increases with time. Could you give me a reference for this so I can learn more? Which of Friedmann's solutions do you claim has M increase with t? All of them?

The result MG = tc^3, which I derived from factual evidence about the equivalence principle, the energy locked in inertial mass, and the energy locked in "gravitational mass" (gravitational potential energy), is wrong.

The problem is that the derivation falsely assumes that the entire mass of the universe is located at the horizon radius. Actually the mass is distributed over the whole of space and time, so the average radius of the mass of the surrounding universe is somewhat less than the distance ct, where t is the age of the universe. The correction factor depends on the frame of reference, because the correct weighting for the density distribution is acutely dependent on what you are interested in.

If you assume the universe has average density, then the effective radius that the universe's mass is located at is going to be the cube-root of 0.5, multiplied by ct, ie 0.794ct. However, this is naive.

The correct way to look at the problem is in spacetime, where the density varies with distance because you're looking backwards in time with increasing density, and because gravity effects go at light speed (not instantly) you must allow for this variation in density in the physical model (because gravitational potential energy is physically an effect of gauge boson radiation being exchanged at speed of light, not instantly!). The effective spacetime density is e^3 times the density we observe (calculated on my home page) and affects the result.
MG = tc^3 is missing a dimensionless correction for the spacetime distribution of matter in the universe. However, it is accurate dimensionally. Louise Riofrio, at some Californian university near you, came up with this equation dimensionally and suggested c is varying with time, while M and G are constant. So that's a third solution to add to my statement that G is increasing (based on entirely independent evidence from a gravitational mechanism which, using well checked empirical facts, correctly predicts the value of G, and hence is a scientific theory), and your interesting suggestion that Friedmann was some kind of "varying mass" theorist.
Actually there is no evidence that M or c are varying simply with time, but there is plenty of evidence as I've listed, and linked to, that G is increasing. The small ripples in the cosmic background at 300,000 years were that small because gravity was ten thousand times weaker, etc.

" When I said a decreasing G is troublesome to me, I was referring to your idea that the gravitons from distant stars are redshifting to yield a smaller gravitational attraction i.e. a smaller effective G if I understood you in your letter of 12-30-6 . .."

No I said G is increasing. Dirac thought G was decreasing, not me. I've worked out all the implications of the prediction in detail. They totally concur with experimental and observational evidence, currently interpreted falsely with ad hoc ideas like inflation, etc. Any specific troubles you have with this need to be stated clearly so I can actually respond to them.
I'll get back to you probably at the weekend when I will have a chance to read your other relevant arxiv paper on gravity.

Many thanks,
Nigel


From: Mario Rabinowitz
To: Nigel Cook
Sent: Thursday, January 04, 2007 9:21 PM
Subject: Please let me know if and where you disagree with this derivation

Dear Nigel,

>The result MG = tc^3 I derived from factual evidence about the equivalence principle, the energy locked in inertial mass and the energy locked in "gravitational mass" (gravitational potential energy), is wrong.
>The problem is that the derivation falsely assumes that the entire mass of the universe is located at the horizon radius.

One of the objectives in my 1990 papers was to show how simply and closely one could obtain a result like that gotten from EGR. It took very few steps and very few assumptions. I did not assume anything about the distribution of mass in the universe. Since you think this simple result is at odds with your more careful calculation, here is the essence of my derivation.

GmM/R^2 = [mM/(m + M)]v^2/R ≈ mv^2/R for M >> m.

For m ~ 0 such as a neutrino or photon, v ≈ c. So we get

M = c^2R/G which compares well with Einstein's result
M= (pi/2)c^2R/G derived from general relativity. With R = ct, I get

MG = tc^3.

The only assumption about the mass M of the universe is that it is located at ≤ R.
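Mario's two steps can be run as plain arithmetic; a sketch (the value of R is an assumed rough Hubble radius, my figure, not his):

```python
# Mario's derivation: gravitational force = centripetal force for a
# light-speed test particle at radius R, giving M = c^2 R / G.
c = 3.0e8    # m/s
G = 6.7e-11  # m^3 kg^-1 s^-2
R = 1.3e26   # assumed Hubble-scale radius, m

M = c**2 * R / G
print(f"M = {M:.2e} kg")

# With R = c*t, this is the same MG = t*c^3 relation:
t = R / c
assert abs(M * G - t * c**3) / (t * c**3) < 1e-12
```

Einstein's quoted M = (pi/2)c^2R/G differs from this only by the dimensionless factor pi/2, which is the point of the comparison in the email.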

Please let me know if and where you disagree with this derivation, aside from the fact that it is non-relativistic.

I did this work about two decades ago, so I don't offhand recall the specific reference to Friedmann. I will see if I can find it. It will be somewhat like an "Archeological Dig" as I've collected all too many papers over the years.

Many thanks to you,

Mario

From: Nigel Cook
To: Mario Rabinowitz
Sent: Saturday, January 06, 2007 1:29 PM
Subject: Re: Please let me know if and where you disagree with this derivation

Dear Mario,

Thank you very much for supplying these details; it is a very novel and interesting approach, setting the gravitational force equal to the centripetal force for orbital motion. Obviously this raises the issue of whether the analogy to orbital dynamics is relevant to the universe. I think you are right up to a point, because Einstein's principle of equivalence implies that any inertial mass is equivalent to a gravitational mass. Hence an outward force in the universe due to accelerating recession of mass, F = ma, can be treated your way by using the equivalence principle.

But the precise mass of the universe you quote from Einstein's general relativity, M = (Pi/2)(c^2)R/G, is wrong at three levels. Superficially, you are including a Pi which shouldn't be there: the normal derivation is based on the assumption of critical density, Rho = (3/8)(H^2)/(Pi*G).

This critical density comes from Friedmann's solutions to general relativity, but can also be obtained by Newtonian physics.

Hubble: v = HR

Kinetic energy of receding masses: E = (1/2)mv^2 = (1/2)m(HR)^2

Gravitational potential energy of expanding universe: E = GmM/R

For critical density, the gravitational potential energy (attraction of mass) is enough to eventually (after infinite time) just stop expansion due to outward kinetic energy, so we have a mechanism to set the kinetic energy of recession equal to gravitational potential energy of attraction:

E = (1/2)m(HR)^2 = GmM/R

where m is the mass of a small (test) object and M is mass of universe (we are at the centre of that mass for the purpose of calculation, because the surrounding universe is always observed from our frame of reference).

Mass is then M = (4/3)Pi*(R^3)Rho

Hence

E = (1/2)m(HR)^2 = Gm(4/3)Pi*(R^3)Rho/R

Cancelling and simplifying:

Rho = (3/8)(H^2)/(Pi*G)

which is the critical density given by Friedmann's solution to general relativity. Hence:

M = (4/3)Pi*(R^3)Rho

= (1/2)(R^3)(H^2)/G

= (1/2)(c^2)R/G if we are using R = c/H. (Notice that there is no Pi in the result. There is no physical basis for including Pi in the mass of the universe. Pi occurs in the numerator of the expression that mass is volume times Rho, and it also appears in the denominator of the expression for Rho. Hence, Pi cancels out. M = (1/2)(c^2)R/G in Einstein's general relativity under your assumptions.) At a deeper level, it is wrong because the recession is such that Friedmann's solution for critical density (without a cosmological constant) doesn't permit R = c/H.
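The claimed cancellation of Pi is easy to verify mechanically; a sketch with an assumed H (my round value, roughly 70 km/s/Mpc):

```python
import math

# Newtonian critical-density derivation: Rho = (3/8)H^2/(Pi*G),
# M = (4/3)Pi R^3 Rho, R = c/H  =>  M = (1/2)c^2 R / G (no Pi).
G = 6.7e-11   # m^3 kg^-1 s^-2
H = 2.3e-18   # assumed Hubble constant, 1/s
c = 3.0e8     # m/s

rho = 3 * H**2 / (8 * math.pi * G)
R = c / H
M = (4.0 / 3.0) * math.pi * R**3 * rho

# Pi cancels exactly between the volume factor and the density:
assert abs(M - 0.5 * c**2 * R / G) / M < 1e-12
print(f"rho = {rho:.2e} kg/m^3, M = {M:.2e} kg")
```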

Because Einstein/Friedmann supposed that critical density was slowing down the expansion, general relativity (no cosmological constant) predicted age of universe t = (2/3)/H. The identity R = c/H is based on t = 1/H, not t = (2/3)/H. If we use t = (2/3)/H, then R = ct = (3/2)c/H, the factor of 3/2 being the factor by which the expansion is slowed down by gravity from what it would be without gravity. Putting R = (3/2)c/H into

M = (4/3)Pi*(R^3)Rho

= (1/2)(R^3)(H^2)/G

= (9/8)(c^2)R/G

(This is directly analogous to the fact that in a free very high pressure supersonic shock wave in a uniform fluid, the deceleration due to the shock wave continuously hitting, engulfing more air and heating it etc, ensures that the velocity U = (2/5)R/t where R is shock radius and t is time, see my proof at http://glasstone.blogspot.com/2006/03/analytical-mathematics-for-physical.html , so for a shock wave being decelerated by hitting and engulfing air, R = (5/2)Ut, which is a direct analogy to Einstein's or rather Friedmann's R = (3/2)c/H where the multiplier is caused by deceleration due to gravity, rather than hitting, engulfing, and compressing fluid. The multiplier is bigger than 1 because the earlier expansion was faster, because deceleration causes a progressive slow down.)

However, as observations by Perlmutter in 1998 showed, the actual expansion shows no such deceleration: R = c/H (Hubble's law) is correct for some reason. The issue is that Einstein's/Friedmann's "explanation" of Hubble's law changes it from R = v/H where v << c (small distances), to R = (3/2)v/H at immense (relativistic) distances where v ~ c. Perlmutter was trying in 1998 to confirm this prediction when he disproved it. There is only one explanation which accurately predicts why Friedmann's solution is wrong: quantum gravity. The redshift of the gravity-causing gauge bosons (together with other gravity mechanism phenomena, such as gravity being due to the surrounding recession, not independent of it) prevents deceleration of extremely rapidly receding galaxies and supernovae at extreme redshifts.

The mainstream explanation is a complete lie. Adding dark energy is not only an ad hoc epicycle added to the theory which wasn't predicted (unlike the gravity mechanism prediction that there is no retardation due to gravity at extreme distances, because gravity as seen from our frame of reference is not independent of the recession of surrounding mass but is caused by that recession of mass), it also cannot even do what it says on the tin:

http://www.google.co.uk/search?hl=en&q=evolving+dark+energy&meta=

See for example the "evolving dark energy" essay with diagram by Prof. Sean Carroll: "What's more, the best-fit behavior of the dark energy density seems to have it increasing with time, as in phantom energy.... That's quite bizarre and unexpected." –
http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

Jigsaw pieces of facts on gravity mechanism:

1. Minkowski's spacetime 1907 says: time = distance/c

2. Hubble 1929 says recession velocity of mass is v = H.distance where H is constant

3. Putting Minkowski's spacetime into Hubble's equation gives v = Hct, hence observable v (in the spacetime we can observe, where the light was emitted in the past) increases with the time past that we observe. Linear acceleration a = dv/dt = Hc ~ 6*10^{-10} ms^-2

4. Newton 1687 empirically suggests that mass accelerating implies outward force: F = ma. Putting in the Hubble acceleration from (3) a = dv/dt = Hc ~ 6*10^{-10} ms^-2, and the mass of surrounding (receding) universe m gives F ~ 10^{43} Newtons.

5. Newton 1687 empirically suggests that each action has an equal reaction force: F_action = -F_reaction. This predicts a reaction force of similar magnitude but opposite direction to the force in step (4) above.

6. The Standard Model physics empirically shows that the only stuff known below the IR cutoff (ie, which exists over vast distances) and which is also forceful enough to carry such a big inward force is Yang-Mills gauge boson exchange radiation.

7. The inward force predicts a universal attractive force of 6.7 x 10^{-11} mM/r^2 Newtons, which is correct to two significant figures. http://quantumfieldtheory.org/Proof.htm I claim that nobody else can predict gravity on the basis of empirical facts; I've searched for a decade and nobody else can do this. There is no paper anywhere on the internet or in any journal predicting gravity. (Everyone else uses measured G instead of calculating it from other data and a causal mechanism for gravity based entirely on observed hard facts. Some people weirdly think Newton had a theory of gravity which predicted G, or that because Witten claimed in Physics Today magazine in 1996 that his stringy M-theory has the remarkable property of "predicting gravity", he can do it. The editor of Physical Review Letters seemed to suggest this to me when claiming falsely that the facts above, leading to a prediction of gravity etc, are an "alternative to currently accepted theories". Where is the theory in string? Where is the theory in M-"theory" which predicts G? It only predicts a spin-2 graviton mode for gravity, and the spin-2 graviton has never been observed. So I disagree with Dr Brown. This isn't an alternative to a currently accepted theory. It's tested and validated science, contrasted with currently accepted religious non-theory explaining an unobserved particle by using unobserved extra dimensional guesswork. I'm not saying string should be banned, but I don't agree that science should be so focussed on stringy guesswork that the hard facts are censored out in consequence!)
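Steps 3 and 4 above can be put into numbers; in this sketch H and the effective receding mass m are assumed round figures of mine, for scale only:

```python
# Order-of-magnitude check of the outward force of receding matter.
c = 3.0e8    # m/s
H = 2.3e-18  # assumed Hubble constant, 1/s

a = H * c    # step 3: a = dv/dt = Hc
print(f"a = Hc ~ {a:.1e} m/s^2")  # ~7e-10 m/s^2, near the quoted 6e-10

m = 3e52     # assumed effective mass of receding universe, kg
F = m * a    # step 4: outward force F = ma
print(f"F = ma ~ {F:.1e} N")      # ~2e43 N, the quoted 10^43 N scale
```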

My physics jigsaw shows that the density of the universe, calculated from the gravitational mechanism whereby you have an outward force due to recession of mass from you in spacetime being balanced by an implosion force of gauge boson radiation pushing inwards and giving a predictable Lesage/Fatio gravity, is NOT the critical density (3/8)(H^2)/(Pi*G) (which is many times too high, hence the excessive "dark matter" epicycles) but is actually

(3/4)(H^2)/(Pi*G*e^3)

which is about 10 times lower. This brings cosmology into direct contact with empirical earth-bound observations of gravity by Cavendish et al.

Now for why the "critical density" theory is wrong. It is too high by a factor of (1/2)e^3 because it ignores the dynamics of quantum gravity.
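The factor of (1/2)e^3 between the two density formulae works out to about ten, matching the "about 10 times lower" figure above:

```python
import math

# Ratio of critical density (3/8)H^2/(Pi*G) to the claimed density
# (3/4)H^2/(Pi*G*e^3); the common H^2/(Pi*G) parts cancel.
ratio = (3.0 / 8.0) / ((3.0 / 4.0) / math.e**3)
print(f"ratio = {ratio:.2f}")  # (1/2)e^3 = 10.04...
assert abs(ratio - 0.5 * math.e**3) < 1e-12
```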

I do wish that people such as yourself, mainstream enough to be able to submit papers to arXiv.org and elsewhere and to be able to put updates on them, could try to review the available facts objectively from time to time. I've got one paper on the CERN document server from when I was at Gloucestershire university, but can't update it at all, because in 2004 it stopped allowing any external paper updates except by automatic linking from arXiv, which deleted the copy of the paper I submitted.

I can't understand why arXiv deleted my brief 4-page paper. It is probably down to Professor Jacques Distler at the University of Texas. He is a moderator on the board of arXiv, and is rude to me on his blog, without listening to my science at all. But when you check out these people, you find that other people have similarly poor experiences:

The University of Texas student's guide http://utphysguide.livejournal.com/3047.html states of Professor Jacques Distler:

"Reportedly the best bet for string theory ... His responses to class questions tend to be subtly hostile and do not provide much illumination. ...

"... Jacques Distler: (String Theory I) ... Jacques Distler is quite possibly the worst physics professor I have ever had. He has the uncanny ability to make even the simplest concepts utterly incomprehensible. He is a true intellectual snob, and he treats most questions with open hostility. Unless you have a PhD in math and already know string theory, you will not learn anything from Distler. String theory is hard, but not as hard as Distler wants it to be."

...

Best wishes,
Nigel

 
At 4:16 AM, Blogger nige said...

http://motls.blogspot.com/2007/01/after-391-years-galileo-is-dangerous.html

http://www.haloscan.com/comments/lumidek/2381647157688067208/?a=16239#696132

Dr Galileo also scored high on Dr Baez's crackpot index:

‘Here at Padua is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa labouring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky [like M-theory trying to charm superpartners and gravitons out of the vacuum].’

- Letter of Galileo to Kepler, 1610, http://www.catholiceducation.org...nce/ sc0043.html

Now, because nobody in a position of authority would look through Galileo's telescope, that proves he must have been a crackpot. He also gets a high score for making fun of mainstream theory (let's call it M-theory) professors.

In addition, his associate Kepler scores well on Baez's index:

‘There will certainly be no lack of human pioneers when we have mastered the art of flight. Who would have thought that navigation across the vast ocean is less dangerous and quieter than in the narrow, threatening gulfs of the Adriatic , or the Baltic, or the British straits? Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes. In the meantime, we shall prepare, for the brave sky travelers, maps of the celestial bodies - I shall do it for the moon, you, Galileo, for Jupiter.’

- Letter from Johannes Kepler to Galileo Galilei, April 1610, http://www.physics.emich.edu/aoa...kes/ letter.html

The truth is really:

‘... a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’

- Max Planck.
nc | Homepage | 01.07.07 - 7:14 am | #


A new comment on: http://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/

More on the structure of matter. The quark is related to the lepton by vacuum dynamics; ie, if you could confine 3 positrons closely, the total electric charge would be +3, but because vacuum polarization is induced above the IR cutoff according to the strength of the electric field, the vacuum polarization would be three times stronger and would therefore shield the observed long range electric charge of the triad of positrons by a factor of 3, giving +1. Hence the relationship between positron and proton depends on the dynamics of the shielding by the polarized vacuum loops which are being created and annihilated spontaneously in the space above the IR cutoff (ie, within 1 fm range of the particle core).

This model ignores the Pauli exclusion principle and other factors which make the dynamics more complicated. This is why so little progress has been made by the mainstream: the few clues are submerged by a lot of complexity due to cloaking by combinations of principles. You have to look very hard for simplicity, using established hard facts such as loops, their polarization in electric fields according to the electric field strength, and the resulting shielding of the primary electric field due to the opposing polarization field which it induces in the loop charges. The model as described in the last paragraph predicts the upquark charge as +1/3, and explains that it is a positron with extra shielding by the polarized vacuum. Hence, as the top quotation on this post claims, the entire difference between leptons and quarks is down to polarized vacuum shielding and related vacuum loop effects; when you have a triad of confined ‘leptons’ their properties are cloaked and modified by the vacuum polarization and related strong force effects to produce what are normally perceived to be entirely different particles, ‘quarks’.
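The shielding arithmetic being claimed here is simple to state explicitly; this is a sketch of the comment's own argument (not of standard QCD, which assigns the up quark charge +2/3):

```python
# Claimed vacuum-shielding arithmetic from the comment above:
# three confined positron cores have bare charge +3, and threefold
# stronger vacuum polarization shields the long-range charge by 3.
bare_triad = 3 * (+1)       # three positron cores
shielding = 3               # polarization assumed 3x stronger
observed = bare_triad / shielding
print(observed)             # 1.0 (proton-like +1)

# Per constituent, the same shielding gives the claimed +1/3:
print((+1) / shielding)
```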

In the bare core of any particle, there is gravitationally trapped, light speed spinning electromagnetic energy if you accept E=mc^2. The spin of the particle is the circular motion of the small loop of Poynting electromagnetic energy current, the radius of the loop being 2GM/c^2.
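As a rough check of the radius quoted above, here is a minimal sketch using CODATA values for G, c and the electron mass; the interpretation of 2GM/c^2 as a "spin loop" radius is the comment's own conjecture, not established physics:

```python
# Sketch: evaluate the claimed loop radius r = 2GM/c^2 (the
# Schwarzschild radius) for an electron-mass core. The constants
# are standard CODATA values; the "spin loop" reading is the
# comment author's conjecture.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
m_e = 9.109e-31     # electron mass, kg

r = 2 * G * m_e / c**2
print(f"r = {r:.2e} m")   # ~1.35e-57 m, far below the Planck length
```

The result, ~10^-57 m, shows how extreme the proposed loop scale is compared with the ~10^-15 m (1 fm) IR cutoff range discussed above.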

This loop is a simple circular flow of energy which exists above the UV cutoff and is entirely different from the loops in spacetime of particle creation/annihilation in the vacuum between the IR and UV cutoffs, and is also of course entirely different from long ranged loops of Yang-Mills exchange radiation being passed between charges at ranges beyond 1 fm, below the IR cutoff, in loop quantum gravity. All of this is based on extensive empirical evidence: http://quantumfieldtheory.org/

Comment by nc | January 7, 2007

 
At 4:45 AM, Blogger nige said...

Correction for some typos in an email above:

From: Nigel Cook
To: Mario Rabinowitz
Sent: Sunday, January 07, 2007 12:43 PM
Subject: Re: interesting coincidence that by different routes we got MG = tc^3


Dear Mario,

Correction to previous email:

According to general relativity with critical density but no cosmological constant, the age of the universe is t = (2/3)/H, where H is Hubble constant.

The 2/3 ratio is for gravitational deceleration without a cosmological constant to offset it.

Because Einstein/Friedmann supposed that critical density was slowing down the expansion, general relativity (no cosmological constant) predicted age of universe t = (2/3)/H. The identity R = c/H is based on t = 1/H, not t = (2/3)/H. If we use t = (2/3)/H, then R = ct = (2/3)c/H, the factor of (2/3) being the factor by which the expansion is slowed down by gravity from what it would be without gravity. Putting R = (2/3)c/H into

M = (4/3)Pi*(R^3)Rho

= (1/2)(R^3)(H^2)/G

= (4/18)(c^2)R/G

= (2/9)(c^2)R/G
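The substitution above can be checked numerically. A minimal sketch, assuming H = 70 km/s/Mpc purely for illustration (the derivation itself does not fix H):

```python
# Sketch of the numbers above: with critical density and the
# decelerated radius R = (2/3)c/H, the enclosed mass is
# M = (2/9)c^2 R / G. H = 70 km/s/Mpc is an assumed value.
G = 6.674e-11                    # m^3 kg^-1 s^-2
c = 2.998e8                      # m/s
Mpc = 3.086e22                   # metres per megaparsec
H = 70e3 / Mpc                   # Hubble constant in s^-1

R = (2.0 / 3.0) * c / H          # decelerated radius, metres
M = (2.0 / 9.0) * c**2 * R / G   # mass, kg
print(f"R = {R:.2e} m, M = {M:.2e} kg")
```

This gives M of order 10^52 kg, the usual order of magnitude for the mass of the observable universe under a critical-density assumption.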


(This is directly analogous to the fact that in a free very high pressure supersonic shock wave in a uniform fluid, the deceleration due to the shock wave continuously hitting, engulfing and heating more air ensures that the velocity U = (2/5)R/t, where R is shock radius and t is time; see my proof at http://glasstone.blogspot.com/2006/03/analytical-mathematics-for-physical.html . So for a shock wave being decelerated by hitting and engulfing air, R = (5/2)Ut, a direct analogy to Einstein's, or rather Friedmann's, R = (2/3)c/H, where the multiplier is caused by deceleration due to gravity rather than by hitting, engulfing, and compressing fluid. The multiplier is bigger than 1 because the earlier expansion was faster: deceleration causes a progressive slow-down.)
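The shock-wave relation invoked in the analogy is easy to verify numerically: if R grows as A*t^(2/5) (the Taylor-Sedov form behind U = (2/5)R/t), then dR/dt equals (2/5)R/t at any time. A sketch with arbitrary illustration values for A and t:

```python
# Numeric check: for R = A * t**(2/5), the growth speed dR/dt
# equals (2/5) * R / t. A and t are arbitrary illustration values.
A, t = 1.0, 3.0
h = 1e-6                                        # step for central difference
R = A * t**0.4
U_numeric = (A * (t + h)**0.4 - A * (t - h)**0.4) / (2 * h)
U_formula = 0.4 * R / t
print(U_numeric, U_formula)   # the two agree closely
```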

However, as observations by Perlmutter in 1998 showed, the actual expansion shows no such deceleration: R = c/H (Hubble's law) is correct for some reason. The issue is that Einstein's/Friedmann's "explanation" of Hubble's law changes it from R = v/H where v is much smaller than c (small distances), to R = (2/3)v/H at immense (relativistic) distances where v ~ c. Perlmutter was trying in 1998 to confirm this prediction when he disproved it. There is only one explanation which accurately predicts why Friedmann's solution is wrong: quantum gravity. The redshift of the gauge bosons which cause gravity (together with other gravity mechanism phenomena, such as gravity being due to the surrounding recession, not independent of it) prevents deceleration of extremely rapidly receding galaxies and supernovae at extreme redshifts.


Best wishes,
Nigel

 
At 6:42 AM, Blogger nige said...

http://cosmicvariance.com/2007/01/06/string-wars-hit-the-msm/#comment-171119

nc on Jan 7th, 2007 at 8:44 am

It’s about time someone stood up for string theory and its wealth of endless predictions over the years.

1. The first string theory predicted in the 60s that if the strong force was due to bits of string holding protons together (against Coulomb repulsion) in the nucleus, the strings would be like elastic with a tension of 10 tons. It wasn’t a falsifiable theory and was replaced by QCD with gluon Yang-Mills exchange radiation in the 70s.

2. Scherk in 1974 predicted that if strings had a pull of not just 10 tons but 10^39 tons, then they would predict gravity.

3. By 1985, string theory was predicting 10 or 26 dimensions. Supersymmetry was developed, predicting that unification of the three Standard Model forces conveniently occurs at the uncheckably high energy of the Planck scale, and requiring merely extra dimensions and one unobserved bosonic superpartner for each observable fermion in the universe.

4. In 1995 M-theory was developed, leading to the dumping of 26-dimensional bosonic string theory and to the new prediction that the 10 dimensional superstring universe is a brane on 11 dimensional supergravity, like a 2-dimensional bubble surface is a brane on 3-dimensional bubble.

5. Now the beautiful fact of the string theory landscape has led to the anthropic prediction that the Standard Model can in principle be reproduced by string. The complexity of the parameters of the 6-dimensional Calabi-Yau manifold which rolls up 6 dimensions is such that there are 10^500 or more sets of solutions, ie 10^500 Standard Models. This landscape of solutions beautifully fits in with the many universes (or multiverse) interpretation. One of these solutions must be our universe, because we exist. (Unless string theory is wrong, of course, but don’t worry because that will always be impossible to prove from the model itself because 10^500 solutions can’t ever be rigorously checked in the time scales available to us. Even at the rate of checking 10^100 solutions per second - which is far beyond anything that can be done - it would take 10^383 times the age of the universe to check each solution rigorously, the universe being merely on the order of 10^17 seconds old.)
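For what it's worth, the exponent arithmetic in point 5 checks out; a two-line sketch using the order-of-magnitude figures quoted in the text:

```python
# Check of the exponent arithmetic in point 5, with all figures
# taken from the text (the checking rate is hypothetical).
solutions_exp = 500    # 10^500 candidate landscape solutions
rate_exp = 100         # 10^100 checks per second (hypothetical)
age_exp = 17           # universe age ~10^17 seconds

time_needed_exp = solutions_exp - rate_exp   # seconds, as a power of 10
ratio_exp = time_needed_exp - age_exp        # in multiples of the universe's age
print(ratio_exp)   # 383, matching the figure in the text
```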

 
At 6:48 AM, Blogger nige said...

http://www.math.columbia.edu/~woit/wordpress/?p=505#comment-20698

Factual evidence Says:

January 9th, 2007 at 9:30 am

Look, the bending of the path of light to twice the amount Newton’s law predicts (treating light rays like bullets) occurs because light gains gravitational potential energy as it approaches the sun. Whereas a bullet is both speeded up and deflected somewhat towards the sun, a photon can’t be speeded up. It turns out that as a result the photon is deflected twice as much by gravity as a slow-moving object. General relativity is mathematically right because it contains conservation of energy and the light speed limit, but you need to dig deeper for physical explanation. It’s not going to be complete until quantum gravity comes along. If gravity is due to exchange radiation, then is that radiation redshifted and weakened by recession of masses far separated in the universe? So the effects of quantum gravity are not merely a change to general relativity on small scales, but also likely to modify general relativity on very large distance scales.

 
At 2:52 AM, Anonymous Mohammad Mansouryar said...

Hi nige, (sorry for not being related to this post)

Probably my released paper about the schematic design of a practical spacewarp can be considered as a subject of a report by you! It is available at:

http://arxiv.org/abs/gr-qc/0511086

For more information see below links:

http://www.centauri-dreams.org/?p=561

http://www.americanantigravity.com/articles/455/1/Iranian-Einstein%3F

http://extremetechnology.blogspot.com/2006/03/macroscopic-tranversable-spacewarp.html

http://www.stardrivedevice.com/links.html

http://www.starstreamresearch.com/mansouryar.htm

I'm waiting for your reply.

Best Regards
M. Mansouryar

 
At 5:06 AM, Blogger nige said...

Dear Dr Mohammad Mansouryar,

I've read your arXiv paper,

http://arxiv.org/abs/gr-qc/0511086

General Relativity and Quantum Cosmology, abstract
gr-qc/0511086
From: Mohammad Mansouryar [view email]
Date (v1): Wed, 16 Nov 2005 13:31:52 GMT (873kb)
Date (revised v2): Mon, 2 Jan 2006 16:50:09 GMT (2740kb)

On a macroscopic traversable spacewarp in practice
Authors: Mohammad Mansouryar
Comments: 44 pages, 8 Boxes of Figs, typos and one box corrected, one formula and some links along with some new notes added, some references changed

A design of a configuration for violation of the averaged null energy condition (ANEC) and consequently other classic energy conditions (CECs), is presented. The methods of producing effective exotic matter (EM) for a traversable wormhole (TW) are discussed. Also, the approaches of less necessity of TWs to EM are considered. The result is, TW and similar structures; i.e., warp drive (WD) and Krasnikov tube are not just theoretical subjects for teaching general relativity (GR) or objects only an advanced civilization would be able to manufacture anymore, but a quite reachable challenge for current technology. Besides, a new compound metric is introduced as a choice for testing in the lab.
Full-text: PDF only


The exotic matter and wormhole speculations don't concern physics. Doubtless in Ptolemy's time people were proposing how to use the epicycles of the Earth-centred universe to engineer spaceships.

What is interesting is that Professor Jacques Distler and other string theorists, who are abusive to me (and to everyone else who puts science above a speculative groupthink religion with no evidence whatsoever) when I make comments on his blog, are arXiv thugs who delete and blacklist people.

That's interesting. He lets science fiction go on arXiv, but censors out science.

This perverted, bitter anti-scientific behaviour is sick.

Thanks for your comment!

Best wishes,
Nigel Cook.

 
At 2:52 PM, Blogger nige said...

Copy of a comment:

http://kea-monad.blogspot.com/2007/02/m-theory-revision.html


Hi Kea,

Obviously a gravitational field, where the "charges" are masses, is non-renormalizable because you can't polarize a field of mass.

In an electric field above 10^18 v/m or so, electric charges in the Dirac sea are polarizable; these charges appear spontaneously as part of photon -> electron + positron creation-annihilation "loops" in spacetime.

This is because virtual positrons are attracted to the real electron core while virtual electrons are repelled from it, so there is a slight vacuum displacement, resulting in a cancellation of part of the core charge of the electron.

This explains how electric charge is a renormalizable quantity. Problem is, this heuristic picture doesn't explain why mass is renormalized. For consistency, mass as well as electric charge should be renormalizable to get a working quantum gravity. However, Lunsford's unification - which works - of gravity and electromagnetism shows that both fields are different aspects of the same thing.

Clearly the charge for quantum gravity is some kind of vacuum particle, like a Higgs boson, which via electric field phenomena can be associated with electric charges, giving them mass.

Hence for electric fields, the electron couples directly with the electric field.

For gravitational fields and inertia (ie, spacetime "curvature" in general) the interaction is indirect: the electron couples to a vacuum particle such as one or more Higgs bosons, which in turn couple with the background field (gravity causing Yang-Mills exchange radiation).

In this way, renormalization of gravity is identical to renormalization of electric field, because both gravity and electromagnetism depend on the renormalizable electric charge (directly in the case of electromagnetic fields, but indirectly in the case of spacetime curvature).

The renormalization of electric charge and mass for an electron is discussed vividly by Rose in an early introduction to electrodynamics (books written at the time the theory was being grounded are more likely to be helpful for physical intuition than the modern expositions, which try to dispense with physics and present only abstract maths):

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy ... The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. ... If these contributions were cut off in any reasonable manner, m’ - m and e’ - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. ... The particles ... are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example ... the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001 ...’

Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
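The quoted figure is easy to reproduce; a minimal sketch using the low-energy fine-structure constant:

```python
import math

# The Schwinger one-loop correction quoted above: the electron's
# magnetic moment in Bohr magnetons is 1 + alpha/(2*pi).
alpha = 1 / 137.036          # fine-structure constant (low-energy value)
mu = 1 + alpha / (2 * math.pi)
print(f"{mu:.5f}")           # 1.00116, as stated in the text
```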

Most of the charge is screened out by polarised charges in the vacuum around the electron core:

‘... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ - arxiv hep-th/0510040, p 71.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

The way that both electromagnetism and gravity can arise from a single mechanism is quite simple when you try to calculate the ways electric charge can be summed in the universe, assuming say 10^80 positive charges and a similar number of negative charges randomly distributed.

If you assume that Yang-Mills exchange radiation is permitted to take any path possible between all of the charges, you end up with two solutions. Think of it as a lot of charged capacitors with vacuum or air dielectric between the charged plates, all arranged at random orientations throughout the volume of a large room. The drunkard's walk of gauge boson radiation between similar charges results in a strong electromagnetic force which can be either positive or negative and can result in either attraction or repulsion. The net force strength turns out to be proportional to the square root of the number of charges, because the inverse square law due to geometric divergence is totally cancelled out due to the fact that the divergence of gauge radiation going away from one particular charge is cancelled out by the convergence of gauge boson radiation going towards that charge. Hence, the only influence on the resulting net force strength is the number of charges. The force strength turns out to be the average contribution per charge multiplied by the square root of the total number of charges.
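The square-root-of-N scaling claimed above is the standard drunkard's-walk result, and can be illustrated with a Monte Carlo sketch (N and the trial count are arbitrary illustration values, far smaller than the 10^80 charges in the argument):

```python
import math, random

# Monte Carlo sketch: the sum of N randomly signed unit
# contributions has a typical (root-mean-square) magnitude of
# sqrt(N), the drunkard's-walk scaling invoked in the text.
random.seed(1)
N, trials = 10_000, 500

sums = [sum(random.choice((-1, 1)) for _ in range(N)) for _ in range(trials)]
rms = math.sqrt(sum(s * s for s in sums) / trials)
print(rms, math.sqrt(N))   # the RMS sum comes out close to sqrt(N) = 100
```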

However, the alternative solution is to ignore a random walk between similar charges (this zig-zag is required to avoid near cancellation by equal numbers of positive and negative charges in the universe) and consider a radial line addition across the universe.

The radial line addition is obviously much weaker, because if you draw a long line through the universe, you expect to find that 50% of the charges it passes through are positive, and 50% are negative.

However, there is also the vitally important saving grace that such a line is 50% likely to have an even number of charges, and 50% likely to have an odd number of charges.

The situation we are interested in is the case of an odd number of charges, because then there will always be a net charge present (for an even number, there will on average be no net charge). Hence, the relative force strength for this radial line summation (which is obviously the LeSage "shadowing" effect of gravity) is one unit (from the one odd charge). It turns out that this is an attractive force: gravity.

By comparison to the electromagnetic force mechanism, gravity is smaller in strength than electromagnetism by the square root of the number of charges in the universe.
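As a sanity check of the claimed ratio: sqrt(N) with N ~ 10^80 gives ~10^40, which can be compared with the measured Coulomb-to-gravity force ratio for two electrons, computed here from standard constants (this comparison is mine, added for orientation, not the comment author's):

```python
# The measured ratio of electrostatic to gravitational force
# between two electrons, k*e^2 / (G*m_e^2), for comparison with
# the sqrt(N) ~ 10^40 figure claimed in the text.
k = 8.988e9          # Coulomb constant, N m^2 C^-2
e = 1.602e-19        # elementary charge, C
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_e = 9.109e-31      # electron mass, kg

ratio = k * e**2 / (G * m_e**2)
print(f"{ratio:.1e}")   # ~4e42, within a few orders of magnitude of 10^40
```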

Since it is possible from the mechanism based on Lunsford's unification (which has three orthogonal time dimensions, SO(3,3)) of electromagnetism and gravity to predict the strength constant of gravity, it is thus possible to predict the strength of electromagnetism by multiplying the gravity coupling constant by the square root of the number of charges.

Lunsford, Int. J. Theoretical Physics, v 43 (2004), No. 1, pp.161-177:

http://cdsweb.cern.ch/record/688763

http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932



***********************************


From: "Nigel Cook"
To: "David Tombe" sirius184@hotmail.com; epola@tiscali.co.uk; imontgomery@atlasmeasurement.com.au
Cc: Monitek@aol.com; pwhan@atlasmeasurement.com.au
Sent: Sunday, February 11, 2007 10:04 PM
Subject: Re: Paper Number 5 (hydrodynamics)

Copy of an email:

"But there is not a snowball's chance in hell of getting the
establishment or the Wikipedia Gestapo to even contemplate the idea of any link between magnetism and the Coriolis force." - David.

Dear David,

Look at it this way: why bother trying to help people who don't want to know? That's Ivor Catt's error, don't follow him down that road to paranoia. Wrestle with pigs, and some of the mud gets on you.

I think there may be a link between magnetism and the Coriolis force (although I don't have proof) because magnetism is a deflection effect, the magnetic field lines loop (curl) around a wire carrying a steady current.

If you have a long wire lying along the ground, and you walk along it at the speed of average electron drift, then the magnetic field disappears.

Hence relativity of your motion to the electron motion is essential for you to see magnetism. (This relativity is true, and inspired some of Einstein's ideas in special relativity which led to the twins paradox etc. Einstein tried to extend the principle too far in SR - although he went a long way to sorting out the mess in GR - instead of investigating the mechanism, which is the deep physical issue Einstein didn't have the skill, or perhaps the guts, to get involved with; I've often quoted his admission of this from his 1938 book with Infeld, "Evolution of Physics".)

Just take things simply. Take the situation of the magnetic field lines coming from a simple permanent magnet, and come up with some mechanism. Or take the curling magnetic field lines that loop around the wire carrying a current, but which only "exist" for if the observer is not moving along the wire at the same speed (and the same direction!) as the drifting electrons.

The electric force is quite different because it is usually argued that all fundamental electric charges with rest mass are monopoles (fermions like electrons), while magnets are dipoles. However, neutral bosons like a light photon are electric dipoles; the electric "charge" or rather field at each end is the opposite of that at the other end, so the net charge is zero.

The most interesting example for polarization effects is the W_o boson because, as Feynman writes in "QED", it is a massive (91 GeV) kind of photon, but with rest mass. Because it has rest mass, it goes slower than c, and therefore has time to respond to fields by polarizing by rotation.

... The main difference between the W_o and the Higgs boson is supposed to be that the Higgs is a scalar, with zero spin (the W_o has spin 1, just like a photon).

The vacuum could consist of an array of trapped W_o or perhaps "Higgs" bosons (or both??), which become rotationally polarized, creating magnetic fields?? [Or it could be a spin effect of exchange radiation.]

... Pair production, I've explained, doesn't involve electrons or positrons existing beforehand in the vacuum. There is only one type of massive particle (having one fixed mass) and one type of charged particle in the universe.

All the leptons and hadrons observed are combinations of these two types of particle, with vacuum effects contributing different observable charges and forces.

For evidence see http://quantumfieldtheory.org/ where I've added hyperlinked extracts on vacuum polarization from QFT papers, and
http://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/ for a full discussion, particularly see http://thumbsnap.com/vf/FBeqR0gc.gif for how vacuum polarization allows a single mass-giving particle to give rise to all known particle masses (to within a couple of percent), see also table of comparisons on http://quantumfieldtheory.org/Proof.htm (this page is now under revision urgently).

Best wishes,
Nigel

 
At 6:01 AM, Blogger nige said...

Copy of a comment:

http://kea-monad.blogspot.com/2007/02/luscious-langlands-ii.html

Most of the maths of physics consists of applications of equations of motion which ultimately go back to empirical observations formulated into laws by Newton, supplemented by Maxwell, Fitzgerald-Lorentz, et al.

The mathematical model follows experience. It is only speculative in that it makes predictions as well as summarizing empirical observations. Where the predictions fall well outside the sphere of validity of the empirical observations which suggested the law or equation, then you have a prediction which is worth testing. (However, it may not be falsifiable even then, the error may be due to some missing factor or mechanism in the theory, not to the theory being totally wrong.)

Regarding supersymmetry, which is the example of a theory which makes no contact with the real world, Professor Jacques Distler gives an example of the problem in his review of Dine’s book Supersymmetry and String Theory: Beyond the Standard Model:

http://golem.ph.utexas.edu/~distler/blog/

“Another more minor example is his discussion of Grand Unification. He correctly notes that unification works better with supersymmetry than without it. To drive home the point, he presents non-supersymmetric Grand Unification in the maximally unflattering light (run α1, α2 up to the point where they unify, then run α3 down to the Z mass, where it is 7 orders of magnitude off). The naïve reader might be forgiven for wondering why anyone ever thought of non-supersymmetric Grand Unification in the first place.”

The idea of supersymmetry is the issue of getting electromagnetic, weak, and strong forces to unify at 10^16 GeV or whatever, near the Planck scale. Dine assumes that unification is a fact (it isn’t) and then shows that in the absence of supersymmetry, unification is incompatible with the Standard Model.

The problem is that the physical mechanism behind unification is closely related to the vacuum polarization phenomena which shield charges.

Polarization of pairs of virtual charges around a real charge partly shields the real charge, because the radial electric field of the polarized pair is pointed the opposite way. (I.e., the electric field lines point inwards towards an electron. The electric field lines between virtual electron-positron pairs, which are polarized with virtual positrons closer to the real electron core than virtual electrons, produce an outwards radial electric field which cancels out part of the real electron’s field.)

So the variation in coupling constant (effective charge) for electric forces is due to this polarization phenomenon.

Now, what is happening to the energy of the field when it is shielded like this by polarization?

Energy is conserved! Why is the bare core charge of an electron or quark higher than the shielded value seen outside the polarized region (i.e., beyond 1 fm, the range corresponding to the IR cutoff energy)?

Clearly, the polarized vacuum shielding of the electric field is removing energy from the charge's field.

That energy is being used to make the loops of virtual particles, some of which are responsible for other forces like the weak force.

This provides a physical mechanism for unification which deviates from the Standard Model (which does not include energy sharing between the different fields), but which does not require supersymmetry.

Unification appears to occur because, as you go to higher energy (distances nearer a particle), the electromagnetic force increases in strength (because there is less polarized vacuum intervening in the smaller distance to the particle core).

This increase in strength, in turn, means that there is less energy in the smaller distance of vacuum which has been absorbed from the electromagnetic field to produce loops.

As a result, there are fewer pions in the vacuum, and the strong force coupling constant/charge (at extremely high energies) starts to fall. When the fall in charge with decreasing distance is balanced by the increase in force due to the geometric inverse square law, you get asymptotic freedom effects for quarks (obviously this involves gluons and other particles and is complex).

Just to summarise: the electromagnetic energy absorbed by the polarized vacuum at short distances around a charge (out to IR cutoff at about 1 fm distance) is used to form virtual particle loops.

These short ranged loops consist of many different types of particles and produce strong and weak nuclear forces.

As you get close to the bare core charge, there is less polarized vacuum intervening between it and your approaching particle, so the electric charge increases. For example, the observable electric charge of an electron is 7% higher at 90 GeV as found experimentally.
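A sketch of where the 7% figure comes from, assuming the commonly quoted running of the effective fine-structure coupling from ~1/137 at low energy to ~1/128 near 90 GeV (the 1/128 value is an assumption here, consistent with the Levine/Koltick measurement cited in this comment):

```python
# The effective coupling alpha (charge squared) runs from ~1/137
# at low energy to ~1/128 near the Z mass (~90 GeV), an increase
# of 137.036/128 ~ 1.07, i.e. about 7%. The 1/128 figure is the
# commonly quoted measured value, assumed for illustration.
alpha_low = 1 / 137.036
alpha_91GeV = 1 / 128.0

increase = alpha_91GeV / alpha_low - 1
print(f"{increase:.1%}")   # about 7%
```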

The reduction in shielding means that less energy is being absorbed by the vacuum loops. Therefore, the strength of the nuclear forces starts to decline. At extremely high energy, there is - as in Wilson’s argument - no room physically for any loops (there are no loops beyond the upper energy cutoff, i.e. UV cutoff!), so there is no nuclear force beyond the UV cutoff.

What is missing from the Standard Model is therefore an energy accountancy for the shielded charge of the electron.

It is easy to calculate this: the electromagnetic field energy being used in creating loops up to the 90 GeV scale, for example, is 7% of the energy of the electric field of an electron (because 7% of the electron’s charge is lost to vacuum loop creation and polarization below 90 GeV, as observed experimentally; I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424).

So this physical understanding should be investigated. Instead, the mainstream censors physics out and concentrates on a mathematical (non-mechanism) idea, supersymmetry.

Supersymmetry shows how all forces would have the same strength at 10^16 GeV.

This can’t be tested, but maybe it can be disproved theoretically as follows.

The energy of the loops of particles which are causing nuclear forces comes from the energy absorbed by the vacuum polarization phenomena.

As you get to higher energies, you get to smaller distances. Hence you end up at some UV cutoff, where there are no vacuum loops. Within this range, there is no attenuation of the electromagnetic field by vacuum loop polarization. Hence within the UV cutoff range, there is no vacuum energy available to create short ranged particle loops which mediate nuclear forces.

Thus, energy conservation predicts a lack of nuclear forces at what is traditionally considered to be “unification” energy.

So energy conservation would seem to discredit supersymmetry, whereby at “unification” energy all forces have the same strength. The problem is that the mechanism-based physics is ignored in favour of massive quantities of speculation about supersymmetry to “explain” a unification which is not observed.

***************************

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ - m and e’ - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001 …’

Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
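As a quick numerical check, the first vacuum loop correction quoted above can be evaluated directly (a minimal sketch using the standard value of the fine-structure constant):

```python
# Numerical check of the first-order QED correction quoted above:
# mu/mu_0 = 1 + alpha/(2*pi), which should come out near 1.00116.
import math

alpha = 1 / 137.035999  # fine-structure constant (standard value)
moment = 1 + alpha / (2 * math.pi)
print(f"mu/mu_0 = {moment:.5f}")  # ~1.00116 Bohr magnetons
```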

Most of the charge is screened out by polarised charges in the vacuum around the electron core:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ - arxiv hep-th/0510040, p 71.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Comment by nc — February 23, 2007 @ 11:19 am

 
At 12:22 PM, Blogger nige said...

Copy of a comment, 26 Feb 2007:

http://riofriospacetime.blogspot.com/2007/02/photons-behind-bars-breaking-loose.html

Hi Louise,

For decades Niels Bohr's Complementarity Principle was thought to prevent the wave and particle qualities of light from being measured simultaneously. Recently physicist Shahriar Afshar proved this wrong with a very simple experiment. As a reward, the physics community attacked everything from Afshar's religion to his ethnicity. Prevented from publishing a paper, even on arxiv, he "went public" to NEW SCIENTIST magazine.

Bohr's Complementarity and Correspondence Principles are just religion; they're not based on evidence.

The experimental evidence is that Maxwell's empirical equations are valid apart from vacuum effects which appear close to electrons, where the electric field is above the pair-production threshold of about 10^18 V/m.

This is clear even in Dyson's Advanced Quantum Mechanics. There is a physical mechanism - pair production - which causes chaotic phenomena above the IR cutoff, that is within a radius of approx. 10^{-15} m from a unit electric charge like an electron.

Beyond that range, the field is far simpler (better described by classical physics), because the field doesn't have enough energy to create loops of particles from the Dirac sea.
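The ~10^18 V/m pair-production threshold mentioned above can be sanity-checked against the standard Schwinger critical field formula, E_c = (m_e^2 c^3)/(e*hbar); this is an illustrative check, not part of the original argument:

```python
# Check of the pair-production (Schwinger) threshold field strength
# E_c = (m_e^2 * c^3) / (e * hbar), using SI constants.
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s
e = 1.602e-19        # elementary charge, C
hbar = 1.055e-34     # reduced Planck constant, J s

E_c = (m_e**2 * c**3) / (e * hbar)
print(f"E_c ~ {E_c:.2e} V/m")  # ~1.3e18 V/m, consistent with the text
```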

What Bohr tries to do is to freeze the understanding of quantum theory at the 1927 Solvay Congress level, which is unethical.

Bohr went wrong with his classical theory of the atom in 1917 or so.

Rutherford wrote to Bohr asking a question like "how do the electrons know when to stop when they reach the ground state (i.e., why don't they carry on spiralling into the nucleus, radiating more and more energy as Maxwell's light model suggests)?"

Bohr should have had the sense to investigate whether radiation continues. We know from Yang-Mills theory and the Feynman diagrams that electric force results from virtual (gauge boson) photon exchanges between charges!!!!

What is occurring is that Bohr ignored the multibody effects of radiation whereby every atom and spinning charge is radiating! All charges are radiating, or else they wouldn't have electric charge! (Yang-Mills theory.)

Let the normal rate of exchange of energy (emission and reception per electron) be X watts. When an electron in an excited state radiates a real photon, it is radiating at a rate exceeding X.

As it radiates, it loses energy and falls to the ground state where it reaches equilibrium, with emission and reception of gauge boson radiant power equal to X.

I did a rough calculation of the transition time at http://cosmicvariance.com/2006/11/01/after-reading-a-childs-guide-to-modern-physics/#comment-131020

Once you know that the Yang-Mills theory suggests electric and other forces are due to exchange of radiation, you know why there is a ground state (i.e., why the electron doesn’t go converting its kinetic energy into radiation, and spiral into the hydrogen nucleus).

The ground state energy level in the Yang-Mills picture corresponds to the equilibrium at which the power the electron radiates balances the Yang-Mills radiation power it receives.

The way Bohr should have analysed this was to first calculate the radiative power of an electron in the ground state using its acceleration, which is a = (v^2)/x. Here x = 5.29*10^{-11} m (see http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydr.html ) and the value of v is only c.alpha = c/137.

Thus the appropriate (non-relativistic) radiation formula to use is: power P = (e^2)(a^2)/(6*Pi*Permittivity*c^3), where e is electron charge. The ground state hydrogen electron has an astronomical centripetal acceleration of a = 9.06*10^{22} m/s^2 and a radiative power of P = 4.68*10^{-8} Watts.
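The two numbers quoted above follow directly from the stated formulas; here is a minimal Python check using standard SI constants:

```python
# Reproducing the ground-state numbers above: centripetal acceleration
# a = v^2/x and radiative power P = e^2 * a^2 / (6*pi*eps0*c^3).
import math

c = 2.998e8          # speed of light, m/s
alpha = 1 / 137.036  # fine-structure constant
e = 1.602e-19        # electron charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
x = 5.29e-11         # Bohr radius, m

v = c * alpha                 # orbital speed, ~c/137
a = v**2 / x                  # ~9.06e22 m/s^2
P = e**2 * a**2 / (6 * math.pi * eps0 * c**3)  # ~4.68e-8 W
print(f"a = {a:.2e} m/s^2, P = {P:.2e} W")
```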

That is the precise amount of background Yang-Mills power being received by electrons in order for the ground state of hydrogen to exist. The historic analogy for this concept is Prevost’s 1792 idea that constant temperature doesn’t correspond to no radiation of heat, but instead corresponds to a steady equilibrium (as much power radiated per second as received per second). This replaced the old Bohr-like Phlogiston and Caloric philosophies with two separate real, physical mechanisms for heat: radiation exchange and kinetic theory. (Of course, the Yang-Mills radiation determines charge and force-fields, not temperature, and the exchange bosons are not to be confused with photons of thermal radiation.)

Although P = 4.68*10^{-8} Watts sounds small, remember that it is the power of just a single electron in orbit in the ground state, and when the electron undergoes a transition, the photon carries very little energy, so the equilibrium quickly establishes itself: the real photon of heat or light (a discontinuity or oscillation in the normally uniform Yang-Mills exchange process) is emitted in a very small time!

Take a photon of red light, which has a frequency of 4.5*10^{14} Hz. By Planck’s law, E = hf = 3.0*10^{-19} Joules. Hence the time taken for an electron with a ground state power of P = 4.68*10^{-8} Watts to emit a photon of red light in falling back to the ground state from a suitably excited state will be only on the order of E/P = (3.0*10^{-19})/(4.68*10^{-8}) = 6.4*10^{-12} second.
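Re-running this arithmetic (taking the ground-state power P = 4.68*10^{-8} W as given above):

```python
# Transition time estimate from the text: E = h*f, then t ~ E/P.
h = 6.626e-34        # Planck constant, J s
f = 4.5e14           # red light frequency, Hz
P = 4.68e-8          # ground-state radiative power from above, W

E = h * f            # ~3.0e-19 J
t = E / P            # ~6.4e-12 s
print(f"E = {E:.2e} J, t = {t:.2e} s")
```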

This agrees with the known facts. So the quantum theory of light is compatible with classical Maxwellian theory!

Now we come to the nature of a light photon and the effects of spatial transverse extent: path integrals.

‘Light ... "smells" the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

- Feynman, QED, Penguin, 1990, page 54.

That's the double-slit experiment, etc. The explanation behind it is a flaw in Maxwell's electromagnetic wave illustration:

http://www.edumedia-sciences.com/a185_l2-transverse-electromagnetic-wave.html

The problem with the illustration is that the photon goes forward with the electric (E) and magnetic (B) fields orthogonal to both the direction of propagation and to each other, but with the two phases of electric field (positive and negative) behind one another.

That way can't work, because the magnetic field curls don't cancel one another's infinite self inductance.

First of all, the illustration is a plot of E, B versus propagation dimension, say the X dimension. So it is one dimensional (E and B depict field strengths, not distances in Z and Y dimensions!).

The problem is that for the photon to propagate, the two different curls of the magnetic field (one way in the positive electric field half cycle, the other way in the negative electric field half cycle) must partly cancel one another to prevent the photon having infinite self inductance: this is similar to the problem of sending a propagating pulse of electric energy into a single wire.

It doesn't work: the wire radiates, the pulse dies out quickly. (This is only useful for antennas.)

To send a propagating pulse of energy, a logic step, in an electrical system, you need two conductors forming a transmission line. In a Maxwellian photon, there can be no cancellation of infinite inductance from each opposing magnetic curl, because each is behind or in front of the other. Because fields only travel at the speed of light, and the whole photon is moving ahead with that speed, there can be no influence of each half cycle of a light photon upon the other half cycle.

I've illustrated this here:

photon

If you look at Maxwell's equations, they describe how a cyclically varying electric field induces a "displacement current" in the vacuum which in turn creates a magnetic field curling around the current, and so on. They don't explain the dynamics of the photon or light wave in detail.

One thing that's interesting about it is this: electromagnetic fields are composed of exchange radiation according to Yang-Mills quantum field theory.

The photon is composed of electromagnetic fields according to Maxwell's theory.

Hence, the photon is composed of electromagnetic fields which in turn are composed of gauge bosons exchange radiation. The photon is a disturbance in the existing field of exchange radiation between the charges in the universe. The apparently cancelled electromagnetic field you get when you pass two logic steps with opposite potentials through each other in a transmission line, is not true cancellation since although you get zero electric field (and zero electrical resistance, as Dr David S. Walton showed!) while those pulses overlap, their individual electric fields re-emerge when they have passed through one another.

So if you are some distance from an atom, the "neutral" electric field is not the absence of any field, but the superposition of two fields. (The background "cancelled" electromagnetic field is probably the cause of gravitation, as Lunsford's unification suggests; I've done a calculation of this here (top post).)

Aspect's "entanglement" seems to be due to a wavefunction collapse error in quantum mechanics, as Dr Thomas S. Love has shown: when you take a measurement on a steady state system like an atom, you need to switch the mathematical model you are using from the time-independent to the time-dependent Schroedinger equation, because your measurement causes a perturbation to the state of the system (e.g., your probing electron upsets the state of the atom, causing a time-dependent effect). This switch-over in equations causes "wavefunction collapse"; it is not a real physical phenomenon travelling instantly! This is the perfect example of confusing a mathematical model with reality.

Aspect's experimental results show that the polarizations of the same-source photons do correlate. All this means is that the measurement paradox doesn't apply to photons. A photon is moving at light speed, so it doesn't have any internal time whatsoever (unlike electrons!). Something which is frozen in time like a photon can't change state. To change the nature of the photon it has to be absorbed and re-emitted, as in the case of Compton scattering.

Electrons can have their state changed by being measured, since they aren't going at light speed. Time only halts for things going at speed c.

So Heisenberg's uncertainty principle should strictly apply to measuring electron spins as Einstein, Podolsky, and Rosen suggested in the Physical Review in 1935, but it shouldn't apply to photons. It's the lack of physical dynamics in modern physics which creates such confusion. The mathematician who lacks physical mechanisms is in fairyland, and drags down too many experimental physicists and others who believe the metaphysical (non-mechanistic) interpretations of the theory. That's why string theory and other unconnected-to-any-experimental-fact drivel flourishes.

 
At 12:36 PM, Blogger nige said...

The link to the "photon" illustration above should be http://thumbsnap.com/v/CW93pyt3.jpg

26 Feb. 2007.

 
At 12:41 AM, Blogger nige said...

[Although you might naively expect the classical Maxwellian radiation emission rate to be greatest in the ground state, you need also take account of the effect of electron spin changes on the radiation emission rate in the full analysis; see 'An Electronic Universe, Part 2', Electronics World, April 2003. I will try to put a detailed paper about this effect on the internet soon.]

27 Feb. 2007

 
At 3:37 AM, Blogger nige said...

Copy of a comment to Louise Riofrio's blog:

http://riofriospacetime.blogspot.com/2007/02/dark-thoughts.html

Hi Louise,

I agree there is evidence for dark (unidentified) matter, but the claimed precise estimates for the quantity are all highly speculative. Regards galactic rotation curves, Cooperstock and Tieu have explained galactic rotation ‘evidence’ for dark matter as not being due to dark matter, but a GR effect which was not taken into account by the people who originally applied Newtonian dynamics to analyse galactic rotation:

‘One might be inclined to question how this large departure from the Newtonian picture regarding galactic rotation curves could have arisen since the planetary motion problem is also a gravitationally bound system and the deviations there using general relativity are so small. The reason is that the two problems are very different: in the planetary problem, the source of gravity is the sun and the planets are treated as test particles in this field (apart from contributing minor perturbations when necessary). They respond to the field of the sun but they do not contribute to the field. By contrast, in the galaxy problem, the source of the field is the combined rotating mass of all of the freely-gravitating elements themselves that compose the galaxy.’

- http://arxiv.org/abs/astro-ph/0507619, pp. 17-18.

If that is true, and I'm aware of another analysis of the galactic rotation curves which similarly explains them as a calculational issue without large quantities of dark matter, then that's the major source of quantitative observational data on dark matter gone.

Another quantitative argument is the one you have, where you calculate the critical density of the universe using the Friedmann-Walker-Robertson solutions to GR by fitting a solution to cosmology evidence like the Hubble constant and alleged CC, and then compare that critical density to the observed density of visible masses in the universe.

The problem with that is the assumption that Einstein's field equation with fixed constants is a complete description of the effect of gravitation on the big bang.

I've evidence that it isn't a complete description. It is not compatible with all the other better understood forces of the universe, because if gravity can be unified with the other Yang-Mills quantum field theories, the exchange radiation should suffer redshift (energy loss) due to the relativistic recession of masses in the expanding universe.

In addition, it's clear that the only way to make an empirical prediction of the strength of gravity, for instance the gravity constant G, is for gravity to be interdependent (i.e., partly a result of) the big bang.

Yang-Mills exchange radiation will travel between all masses in the universe.

If a mass is receding from you in spacetime and obeying the Hubble recession law v = Hr, then in your frame of reference the mass is accelerating into the past (further from you).

If you could define a universal time by assuming you could see everything in the universe without delays due to the travel time of light, then this might be wrong.

However, in the spacetime which we observe whereby a greater distance means an earlier time, there is an apparent acceleration.

Suppose you see a galaxy cluster of mass M is receding at velocity v at an apparent distance r away from you.

After the small time increment T seconds have passed, the distance will have increased to:

R = r + vT

= r + (Hr)T

= r(1 + HT).

If the Hubble law is to hold, the apparent velocity at the new distance will be:

V = HR = Hr(1 + HT).

Hence the small increment

dv ~ V - v = {Hr(1 + HT)} - {Hr}

= (H^2)rT.

The travel time of the light from the galaxy cluster to your eye will also increase from t = r/c to:

(t + T) = R/c

= {r(1 + HT)}/c

= (r/c) + (rHT/c).

Hence the small increment

dt ~ T.

Now the observable (spacetime) acceleration of the receding galaxy cluster, will be:

a = dv/dt

= {(H^2)rT}/T

= (H^2)r.
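The algebra above can be checked numerically. With illustrative values for H and r (both chosen only for the check, not taken from the text), the finite increments give dv/dt = (H^2)r:

```python
# Numerical check of the derivation above: for recession obeying
# v = H*r, the increments dv = H^2*r*T and dt = T give dv/dt = H^2*r.
H = 2.3e-18          # Hubble parameter, 1/s (~71 km/s/Mpc, illustrative)
r = 1e25             # example distance, m (illustrative)
T = 1e10             # small time increment, s

dv = H * r * (1 + H * T) - H * r   # V - v = H^2 * r * T
dt = T
a = dv / dt
print(f"a = {a:.3e} m/s^2, H^2*r = {H**2 * r:.3e} m/s^2")
```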

This result is the outward acceleration of the universe responsible for the Hubble expansion at any distance r (it is not the alleged acceleration which is claimed to be required to explain the lack of gravitational slowing down of matter receding at extreme redshifts).

Calculating the total outward force, F = ma, where a is acceleration outward and m is matter receding outward, for the normal big bang is then fairly easy. Two problems are encountered but easily solved.

First, the density of the universe is bigger in the earlier spacetimes we see at the greatest distances. This would cause a problem because material density for constant mass should fall by the inverse cube of time as the universe expands. Hence, seeing ever earlier times means that density should rise toward infinity at the greatest distances.

But this problem is solved by the solution to the second problem, which is the problem that an outward force will, by Newton's 3rd empirically confirmed law, be accompanied by an inward reaction force.

The only thing we know of which can be causing an inward force is the gravitational field, specifically the gravity causing exchange radiation. This solves the entire problem!

By Newton's 3rd law, any mass which is accelerating away from you in spacetime will send gravity causing exchange radiation towards you, giving a net force on you unless this is spherically symmetric.

However, if the receding mass is receding too fast (relativistically), then the gauge boson radiation sent towards you is redshifted to a large degree, which means that matter receding at near the velocity of light doesn't exert much force on you: this is another way of saying that the Hubble acceleration effect breaks down when the recession velocity v approaches c, because once something is observably receding from you at near a constant velocity (c) it is no longer accelerating much!

Hence, even if the density of the universe approaches infinity at the earliest times, this doesn't make the effective outward force infinite, because the acceleration term in F = ma is cut. The first problem was that the masses, m, at extreme distances (early times) become large, making F go towards infinity. The solution to the second problem shows that although m tends to become large, the effective value of a falls at the greatest distances because the spacetime recession speed effectively becomes a constant c, so a = dc/dt = 0. Hence the product in F = ma can't become infinite for great distances in spacetime.

There is a straightforward mathematical way to calculate the overall net effect of these phenomena, by offsetting the density increase with redshift from the stretching of the universe.

Now we have the outward force of the big bang recession and the inward reaction force calculated, we can then see how exchange radiation works to cause gravity.

The exact nature of the gauge boson exchange radiation processes is supposed to involve gravitons interacting with a mass-giving field of Higgs bosons, and there are physical constraints on what is possible. If you can assume each mass to be like a mirror and the gauge bosons to be like light, a pressure is exerted each time the gauge bosons are reflected between masses (exchanged). (A light photon has a momentum of p = E/c if it is absorbed, or p = 2E/c if it is reflected.)

Because the universe is spherically symmetric around us, the overall inward pressure from each direction cancels, merely producing spacetime curvature (the gravitational contraction of spacetime radially by the amount (1/3)GM/c^2 = 1.5 mm for planet earth), a squashing effect on radial but not transverse directions (this property of general relativity is completely consistent with a gravity causing Yang-Mills exchange radiation).

What is interesting next is to consider the case of a nearby mass, like the planet earth.

Because all masses in a Yang-Mills quantum gravity will be exchanging gravity causing radiation, you will be exchanging such radiation with planet earth.

However, as already explained, for there to be a net force towards you due to exchange radiation from a particular mass, that mass must be accelerating away from you (the net force of radiation towards you is due to Newton's 3rd law, the rocket effect of action and reaction).

So because the earth isn't significantly accelerating away from you, the net force from the gauge boson radiation you exchange with the masses in the earth (which have small cross-sectional areas) is zero.

So the fundamental particles in the earth shield you, over their small cross-sectional areas, from gauge boson radiation that you would otherwise be exchanging with distant stars (the LeSage shadowing effect).

Gravity results because the tiny amount of shielding due to fundamental particles in the earth causes an asymmetry in the gravity-causing gauge boson radiation hitting you, and this asymmetry is gravity.

Besides correctly predicting the mechanism for the curvature of spacetime due to local gravitational fields in general relativity (the radial contraction can be calculated), this also predicts the correct form of gravity for low velocities and weak fields (Newton's law), which produces a relationship between the density we observe for the universe and the parameters G and H that is different from the Friedmann-Walker-Robertson metric.

The dynamics of gravity differ from the Friedmann-Walker-Robertson solution to GR due to physical dynamics ignored by GR, namely gravity being (1) a result of the recession (or rather, interdependent on the recession, since the exchange of force causing gauge boson radiation between all masses sheds light on the mechanism for the Hubble law continuing after the real radiation pressure in the universe became trivial), and (2) due to exchange radiation which gets severely redshifted to lower energies in cases where the masses which are exchanging the radiation are receding at relativistic speeds.

You can completely correct GR by setting lambda = 0 and using a calculated value for G which is based on the mechanism. Hence, Einstein's GR is fine as long as you make the gravitational parameter G a vector which depends on various physical dynamics as described in outline above. The detailed maths for the above is at http://quantumfieldtheory.org/Proof.htm.

There is some dark matter (nowhere near as much as the lambda-CDM model suggests) but no cosmological constant or dark energy. The result I get suggests that the Friedmann critical density is higher than the correct formula for the density (from the dynamics above) by the factor (e^3)/2 ~ 10, where e = base of natural logs. This comes from a calculation, obviously, at http://quantumfieldtheory.org/Proof.htm. (When I discussed this result about a year ago on Motl's blog, I think Rivero suggested that it was just numerology. This is the problem when you have a detailed mathematical proof: when you give it, nobody reads it or will publish it; when you give only the results from it, people just assume there is no proof behind them. Whatever you do, there is no interest, because the whole approach is too different from orthodoxy, and orthodoxy is respected to the exclusion of science.)

On my old blog, I have an abstract of the theory very briefly at the top:

"The Standard Model is the best-tested physical theory. Forces result from radiation exchange in spacetime. Mass recedes at 0-c in spacetime of 0-15 billion years, so outward force F = m.dv/dt ~ m(c - 0)/(age of universe, t) ~ mcH ~ 10^43 N (H is Hubble parameter). Newton's 3rd law implies equal inward force, carried by exchange radiation, predicting cosmology, accurate general relativity, SM forces and particle masses."

I think the message isn't getting home because people are unwilling to think about velocity of recession being a function of time rather than space! Hence, ever since Hubble discovered it, the recession has been mathematically represented the wrong way (as a recession velocity increasing with distance, instead of as an acceleration). This contravenes spacetime. A nice description of the lack of this in popular culture is given by Professor Carlo Rovelli’s "Quantum Gravity" book, http://www.cpt.univ-mrs.fr/~rovelli/book.pdf :

‘The success of special relativity was rapid, and the theory is today widely empirically supported and universally accepted. Still, I do not think that special relativity has really been fully absorbed yet: the large majority of the cultivated people, as well as a surprising high number of theoretical physicists still believe, deep in the heart, that there is something happening “right now” on Andromeda; that there is a single universal time ticking away the life of the Universe.’ (P. 7 of draft.)

Best wishes,
Nigel

 
At 5:54 AM, Blogger nige said...

Another comment to Louise's blog:

I've now rewritten my brief abstract at the top of my old blog:

The Standard Model is the most tested theory: forces result from radiation exchanges. Masses recede at Hubble speed v = Hr = Hct in spacetime, so there's outward force F = m.dv/dt ~ 10^43 N. Newton's 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don't cause a reaction force, so they cause asymmetry => gravity.

1 Mar. '07.

 
At 8:24 AM, Blogger nige said...

copy of a comment:

"http://kea-monad.blogspot.com/2007/04/gravity-probe-b.html"

Thanks for these links! This nice experimental testing will probably go wrong because the error bars will be too big to rule anything out, or whatever. If you get crazy looking experimental results, people will dismiss them instead of throwing general relativity away in any case; as a last resort an epicycle will be added to make general relativity agree (just as the CC was modified to make the mainstream general relativity framework fit the facts in 1998).

This post reminds me of a clip on U-tube showing Feynman in November 1964 giving his Character of Physical Law lectures at Cornell (these lectures were filmed for the BBC which broadcast them in BBC2 TV in 1965):

"In general we look for a new law by the following process. First we guess it. Don't laugh... Then we compute the consequences of the guess to see what it would imply. Then we compare the computation result to nature: compare it directly to experiment to see if it works. If it disagrees with experiment: it's wrong. In that simple statement is the key to science. It doesn't any difference how beautiful your guess is..."

- http://www.youtube.com/watch?v=ozF5Cwbt6RY

I haven't seen the full lectures. Someone should put those lecture films on the internet in their entirety. They have been published in book form, but the actual film looks far more fun, particularly as they catch the audience's reactions. Feynman has a nice discussion of the LeSage problem in those lectures, and it would be nice to get a clip of him discussing that!

General relativity is right at a deep level and doesn't in general even need testing for all predictions, simply because it's just a mathematical description of accelerations in terms of spacetime curvature, with a correction for conservation of mass-energy. You don't keep on testing E=mc^2 for different values of m, so why keep testing general relativity? Far better to work on trying to understand the quantum gravity behind general relativity, or even to do more research into known anomalies such as the Pioneer anomaly.

General relativity may need corrections for quantum effects, just as it needed a major correction for the conservation of mass-energy in November 1915 before the field equation was satisfactory.

The major advance in general relativity (beyond the use of the tensor framework, which dates back to 1901, when developed by Ricci and Tullio Levi-Civita) is a correction for energy conservation.

Einstein started by saying that curvature, described by the Ricci tensor R_ab, should be proportional to the stress-energy tensor T_ab which generates the field.

This failed, because T_ab doesn't have zero divergence where zero divergence is needed "in order to satisfy local conservation of mass-energy".

The zero divergence criterion just specifies that you need as many field lines going inward from the source as going outward from the source. You can't violate the conservation of mass-energy, so the total divergence is zero.

Similarly, the total divergence of magnetic field from a magnet is always zero, because you have as many field lines going outward from one pole as going inward toward the other pole, hence div.B = 0.

The components of T_ab (energy density, energy flux, pressure, momentum density, and momentum flux) don't obey mass-energy conservation because of the gamma factor's role in contracting the volume.

For simplicity if we just take the energy density component, T_00, and neglect the other 15 components of T_ab, we have

T_00 = Rho*(u_0)*(u_0)

= energy density (J/m^3) * gamma^2

where gamma = [1 - (v^2)/(c^2)]^(-1/2)

Hence, T_00 will increase towards infinity as v tends toward c. This violates the conservation of mass-energy if R_ab ~ T_ab, because radiation going at light velocity would experience infinite curvature effects!

This means that the energy density you observe depends on your velocity, because the faster you travel the more contraction you get and the higher the apparent energy density. Obviously this is a contradiction, so Einstein and Hilbert were forced to modify the simple idea that (by analogy to Poisson's classical field equation) R_ab ~ T_ab, in order to make the divergence of the source of curvature always equal to zero.
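
To see how quickly this blows up, here is a short Python sketch (my own illustration, not from the text) of the gamma-squared factor multiplying the rest-frame energy density:

```python
import math

def gamma(v_over_c):
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

# T_00 scales as gamma^2 times the rest-frame energy density (taken as
# 1 J/m^3 here), so it diverges as v approaches c:
for beta in (0.0, 0.5, 0.9, 0.99, 0.999):
    print(beta, gamma(beta)**2)
```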

This was done by subtracting (1/2)*(g_ab)*T from T_ab, because T_ab - (1/2)*(g_ab)*T always has zero divergence.

T is the trace of T_ab, i.e., just the sum of the scalars: the energy density T_00 plus the pressure terms T_11, T_22 and T_33 ("these four components making up T are just the diagonal - scalar - terms in the matrix for T_ab").

The reason for this choice is stated to be that T_ab - (1/2)*(g_ab)*T gives zero divergence "due to Bianchi's identity", which is a bit mathematically abstract, but obviously what you are doing physically by subtracting (1/2)*(g_ab)*T is just removing from T_ab the part that gives it a finite divergence.

Hence the corrected R_ab ~ T_ab - (1/2)*(g_ab)*T ["which is equivalent to the usual convenient way the field equation is written, R_ab - (1/2)*(g_ab)*R = T_ab"].

Notice that since T_00 is equal to its own trace T, you see that

T_00 - (1/2)(g_ab)T

= T - (1/2)(g_ab)T

= T(1 - 0.5g_ab)

Hence, the massive modification introduced to complete general relativity in November 1915 by Einstein and Hilbert amounts to just subtracting a fraction of the stress-energy tensor.

The tensor g_ab [which equals (ds^2)/{(dx^a)*(dx^b)}] depends on gamma, so it simply falls from 1 to 0 as the velocity increases from v = 0 to v = c, hence:

T_00 - (1/2)(g_ab)T = T(1 - 0.5g_ab) = T where g_ab = 0 (velocity of v = c) and

T_00 - (1/2)(g_ab)T = T(1 - 0.5g_ab) = (1/2)T where g_ab = 1 (velocity v = 0)
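
Treating g_ab as a single number running from 1 (at rest) down to 0 (at light speed), as in the simplified argument above, the factor of two falls straight out of a toy calculation (my own illustration):

```python
def curvature_source(T, g):
    """The simplified scalar form used above: T - (1/2)*g*T = T*(1 - 0.5*g)."""
    return T * (1.0 - 0.5 * g)

print(curvature_source(1.0, 1.0))  # 0.5: half strength at rest (g_ab = 1)
print(curvature_source(1.0, 0.0))  # 1.0: full strength at light speed (g_ab = 0)
```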

Hence for a simple gravity source T_00, you get curvature R_ab ~ (1/2)T in the case of low velocities (v ~ 0), but for a light wave you get R_ab ~ T, i.e., there is exactly twice as much gravitational acceleration acting at light speed as there is at low speed. This is clearly why light gets deflected in general relativity by twice the amount predicted by Newtonian gravitational deflection (a = MG/r^2, where M is the sun's mass).
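
As a numerical check on this factor of two, here is a rough Python sketch using standard values for the sun's GM and radius (values assumed by me, not taken from the text):

```python
import math

# Standard values (my assumptions):
GM_SUN = 1.327e20   # sun's gravitational parameter GM, m^3/s^2
R_SUN = 6.957e8     # solar radius, m (light ray grazing the limb)
C = 2.998e8         # speed of light, m/s

newtonian = 2.0 * GM_SUN / (C**2 * R_SUN)  # Newtonian deflection, radians
einstein = 2.0 * newtonian                 # general relativity: twice as much

def to_arcsec(rad):
    return math.degrees(rad) * 3600.0

print(to_arcsec(newtonian))  # ~0.88 arcseconds
print(to_arcsec(einstein))   # ~1.75 arcseconds
```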

I think it is really sad that no great effort is made to explain general relativity simply in a mathematical way (if you take away the maths, you really do lose the physics).

Feynman had a nice explanation of curvature in his 1963 Lectures on Physics: gravitation contracts (shrinks) the earth's radius by (1/3)GM/c^2 = 1.5 mm, but this contraction doesn't affect transverse lines running perpendicular to the radial gravitational field lines, so the circumference of the earth isn't contracted at all! Hence Pi would increase slightly if there are only 3 dimensions: circumference/diameter of the earth (assumed spherical) = [1 + 2.3*10^{-10}]*Pi.
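
A rough numerical check of Feynman's figures (using standard values for G and the earth's mass and radius, which are my assumptions, not given in the text):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # earth's mass, kg
R_EARTH = 6.371e6    # earth's mean radius, m
C = 2.998e8          # speed of light, m/s

# Feynman's radial contraction, (1/3)GM/c^2:
contraction = G * M_EARTH / (3.0 * C**2)
print(contraction * 1000.0)  # ~1.48 mm

# The circumference is untouched while the diameter shrinks, so the measured
# circumference/diameter ratio exceeds Pi by about this fraction:
print(contraction / R_EARTH)  # ~2.3e-10
```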

This distortion to geometry - presumably just a simple physical effect of exchange radiation compressing masses in the radial direction only (in some final theory that includes quantum gravity properly) - explains why there is spacetime curvature. It's a shame that general relativity has become controversial just because it's been badly explained using false arguments (like balls rolling together on a rubber water bed, which is a false two-dimensional analogy - and if you correct it by making it three-dimensional, with a surrounding fluid pushing objects together where they shield one another, you get censored out, because most people don't want accurate analogies, just myths).

(Sorry for the length of this comment by the way and feel free to delete it. I was trying to clarify why general relativity doesn't need testing.)

 
At 3:36 AM, Blogger nige said...

copy of a follow up comment:

http://kea-monad.blogspot.com/2007/04/gravity-probe-b.html

Matti, thank you very much for your response. On the issue of tests for science, if a formula is purely based on facts, it's not speculative and my argument is that it doesn't need testing in that case. There are two ways to do science:

* Newton's approach: "Hypotheses non fingo" [I frame no hypotheses].

* Feynman's dictum: guess and test.

The key ideas in the framework of general relativity are solid empirical science: gravitation, the equivalence principle of inertial and gravitational acceleration (which seems pretty solid to me, although Dr Mario Rabinowitz writes somewhere about some small discrepancies, there's no statistically significant experimental refutation of the equivalence principle, and it's got a lot of evidence behind it), spacetime (which has evidence from electromagnetism), the conservation of mass-energy, etc.

All these are solid. So the field equation of general relativity, which is the key to making the well tested, unambiguous predictions (unlike the anthropic selection from the landscape of solutions it gives for cosmology, which is a selection made to fit observations, depending on how much "dark energy" you assume is powering the cosmological constant and how much dark matter is around that can't be detected in a lab for some mysterious reason), is really based on solid experimental facts.

It's as pointless to keep testing - within the range of the solid assumptions on which it is based - a formula founded on solid facts as it is to keep testing, say, Pythagoras' theorem for different sizes of triangle. It's never going to fail (in Euclidean geometry, i.e. flat space), because the inputs to the derivation of the equation are all solid facts.

Einstein and Hilbert in 1915 were using Newton's no-hypotheses (no speculations) approach, so the basic field equation is based on solid fact. You can't disprove it, because the maths has physical correspondence to things already known. The fact it predicts other things like the deflection of starlight by gravity when passing the sun as twice the amount predicted by Newton's law, is a bonus, and produces popular media circus attention if hyped up.

The basic field equation of general relativity isn't being tested because it might be wrong. It's only being tested for psychological reasons and publicity, and because of Popper's false idea that theories must forever remain falsifiable (i.e., uncertain, speculative, or guesswork).

The failure of Popper's scheme is that it makes no allowance for proofs of laws which are based on solid experimental facts.

First, consider Archimedes' proof of the law of buoyancy in On Floating Bodies. The water is X metres deep, and the pressure in the water under a floating body is the same as that at the same height above the seabed whether or not a boat is above it. Hence, the weight of water displaced by the boat must be exactly equal to the weight of the boat, so that the pressure is unaffected whether or not a boat is floating above a fixed submerged point.

This law is not a falsifiable law. Nor are other empirically based laws. The whole idea of Popper's that you can falsify a solidly empirically based scientific theory is just wrong. The failures of epicycles, phlogiston, caloric, vortex atoms, and aether are due to the fact that those "theories" were not based on solid facts, but upon guesses. String theory is also a guess, but it is not a Feynman-type guess (string theory is really just postmodern ***t in the sense that it can't be tested, so it's not even a Popper-type ever-falsifiable speculative theory; it's far worse than that: it's "not even wrong" to begin with).

Similarly, Einstein's original failure with the cosmological constant was a guess. He guessed that the universe is static and infinite without a shred of evidence (based on popular opinion and the "so many people can't all be wrong" fallacy). Actually, from Olbers' paradox, Einstein should have realised that the big bang is the correct theory.

The big bang idea goes right back to Erasmus Darwin in 1791 and Edgar Allan Poe in 1848, basically as a fix to Olbers' paradox: if the universe is infinite, static and not expanding, the light from the infinite number of stars in all directions would make the entire sky as bright as the sun. The fact that the sun is close to us and gives a higher inverse-square-law intensity than a distant star is balanced by the fact that at greater distances there are more stars (in proportion to the square of the distance) covering any given solid angle of the sky. The correct resolution to Olbers' paradox is not - contrary to popular accounts - the limited size of the universe in the big bang scenario, but the redshift of distant stars in the big bang: after all, we're looking back in time with increasing distance, and in the absence of redshift we'd see extremely intense radiation from the high-density early universe at great distances.
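
The shell-by-shell brightness argument can be checked in a few lines of Python (arbitrary units; my own illustration):

```python
import math

# In a static, infinite, uniform universe, a shell of thickness dr at radius r
# holds ~ n * 4*pi*r^2 * dr stars, each delivering flux L / (4*pi*r^2).
# The r^2 factors cancel, so every shell contributes the same flux n*L*dr,
# and summing over infinitely many shells gives an infinitely bright sky.
n, L, dr = 1.0, 1.0, 1.0  # star density, luminosity, shell width (arbitrary units)

def flux_from_shell(r):
    stars_in_shell = n * 4.0 * math.pi * r**2 * dr
    flux_per_star = L / (4.0 * math.pi * r**2)
    return stars_in_shell * flux_per_star

print([flux_from_shell(r) for r in (1.0, 10.0, 100.0, 1000.0)])  # all equal
```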

Erasmus Darwin wrote in his 1791 book ‘The Botanic Garden’:

‘It may be objected that if the stars had been projected from a Chaos by explosions, they must have returned again into it from the known laws of gravitation; this however would not happen, if the whole Chaos, like grains of gunpowder, was exploded at the same time, and dispersed through infinite space at once, or in quick succession, in every possible direction.’

So there was no excuse for Einstein in 1916 to go with popular prejudice and ignore Olbers' paradox, ignore Darwin, and ignore Poe. What was Einstein thinking? Perhaps he assumed the infinite eternal universe because he wanted to discredit 'fiat lux' and thought he was safe from experimental refutation in such an assumption.

So Einstein in 1916 introduced a cosmological constant that produces an antigravity force which increases with distance. At small distances, say within a galaxy, the cosmological constant is completely trivial because its effects are so small. But at the average distance of separation between galaxies, Einstein made the cosmological constant take just the right value so that its repulsion would exactly cancel out the gravitational attraction of galaxies.

He thought this would keep the infinite universe stable, without continued aggregation of galaxies over time. As is now known, he was experimentally refuted over the cosmological constant by Hubble's observations of redshift increasing with distance - a redshift of the entire spectrum of light, uniform across frequencies and caused by recession, not the result of scattering of light by dust (which would be a frequency-dependent redshift) or "tired light" nonsense.

However, the Hubble disproof is not the substantive point for me. Einstein was wrong because he built the cosmological constant extension on prejudice, not facts; he ignored the evidence from Olbers' paradox; and, in particular, his model of the universe is unstable. Obviously his cosmological constant fix suffered from the drawback that galaxies are not all spaced at the same distance apart, and his idea for producing stability in an infinite, eternal universe was a physical failure because it was not a stable solution. Once one galaxy is slightly closer to another than the average distance, the cosmological constant can't hold them apart, so they'll eventually combine, and that will set off more aggregation.

The modern application of the cosmological constant (to prevent the predicted long-range gravitational deceleration of the universe, since no deceleration is present in the data on redshifts of distant supernovae, etc.) is now suspect experimentally, because the "dark energy" appears to be "evolving" with spacetime. But it's not this experimental (or rather observational) failure of the mainstream Lambda-Cold Dark Matter model of cosmology which makes it pseudoscience. The problem is that the model is not based on science in the first place. There's no reason to assume that gravity should slow the galaxies at great distances. Instead,

"... the flat universe is just not decelerating, it isn’t really accelerating..."

The reason it isn't decelerating is that gravity, contraction, and inertia are ultimately down to some type of gauge boson exchange radiation causing forces, and when this exchange radiation passes between receding masses over vast distances, it gets redshifted, so its energy drops by Planck's law E = hf. That's one simple reason why general relativity - which doesn't include quantum gravity with this effect of redshift of gauge bosons - falsely predicts a gravitational deceleration which wasn't seen.
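
A trivial sketch of that Planck-law energy loss (my own illustration; the 1/(1 + z) frequency factor is just the standard cosmological redshift relation):

```python
H_PLANCK = 6.626e-34  # Planck's constant, J*s

def received_energy(f_emitted, z):
    """Planck's E = h*f for radiation whose frequency has been cut by the
    standard cosmological redshift factor 1 + z."""
    return H_PLANCK * f_emitted / (1.0 + z)

e_local = received_energy(1.0e15, 0.0)    # no redshift
e_distant = received_energy(1.0e15, 1.0)  # redshift z = 1
print(e_distant / e_local)  # 0.5: the exchanged energy is halved at z = 1
```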

The mainstream response to the anomaly - adding an epicycle (dark energy, a small positive CC) - is just what you'd expect from mathematicians, who want to make the theory endlessly adjustable and non-falsifiable (like Ptolemy adding more epicycles to overcome errors).

Many thanks for the discussion you gave of the issues with the equivalence principle. I can't see what the problem is with inertial and gravitational masses being equal, to within experimental error, to many decimal places. To me it's a good solid fact. There are a lot of issues with Lorentz invariance anyway, so its general status as a universal assumption is in doubt, although it certainly holds on large scales. For example, any explanation of fine-graining in the vacuum to explain the UV cutoff physically is going to get rid of Lorentz invariance at the scale of the grain size, because that will be an absolute size. At least this is the argument Smolin and others make for "doubly special relativity", whereby Lorentz invariance only emerges on large scales. Also, from the classical electromagnetism perspective of Lorentz's original theory, Lorentz invariance can arise physically due to contraction of a body in the direction of motion in a physically real field of force-causing radiation, or whatever is the causative agent in quantum gravity.

Many thanks again for the interesting argument. Best wishes, Nige

 
