Quantum gravity physics based on facts, giving checkable predictions

Sunday, March 18, 2007


Galaxy recession velocity: v = dR/dt = HR. Hence acceleration (treating H as constant): a = dv/dt = d[HR]/dt = H*dR/dt = Hv = H*HR = RH^2. Now remember another observation-based law, Newton's 2nd: F = ma. So the outward acceleration of receding mass in the universe is equivalent to an outward force from our standpoint. Next, another observation-based law, Newton's 3rd, predicts an inward directed reaction force (delivered by graviton impulses) equal to the outward force of the universe around us. Relatively nearby, non-receding masses don't contribute this reaction force, so they introduce an asymmetry, predicting gravity and particle physics. In 1996 this mechanism also predicted the lack of gravitational deceleration at large redshifts, which was confirmed by Perlmutter's observations of distant supernovae redshifts in 1998.
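As a rough numeric check, here is a minimal sketch: the Hubble parameter of ~70 km/s/Mpc and the ~3 x 10^52 kg mass of the observable universe are standard ballpark assumptions of mine, not figures derived in this post.

```python
# Sketch: outward acceleration a = R*H^2 and the corresponding outward
# force F = m*a for receding matter at the Hubble radius R = c/H.
# H and m below are assumed standard ballpark values.
c = 3.0e8                    # speed of light, m/s
Mpc = 3.086e22               # metres per megaparsec
H = 70e3 / Mpc               # Hubble parameter, ~2.3e-18 1/s
m = 3.0e52                   # rough mass of observable universe, kg

R = c / H                    # Hubble radius, m
a = R * H**2                 # = c*H at R = c/H, outward acceleration
F = m * a                    # Newton's 2nd law

print(f"a = {a:.1e} m/s^2")  # ~7e-10 m/s^2
print(f"F = {F:.1e} N")      # ~2e43 N, the ~1e43 N figure quoted later on
```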

The diagram above shows how the flux of Yang-Mills gravitational exchange radiation (gravitons) being exchanged between all the masses in the universe physically creates an observable gravitational acceleration field directed towards a cosmologically nearby or non-receding mass, labelled 'shield'. (The Hubble expansion rate and the distribution of masses around us are virtually isotropic, i.e., radially symmetric.) The mass labelled 'shield' creates an asymmetry for the observer in the middle of the sphere: since it is not receding relative to the observer, it has no outward force and thus, by Newton's 3rd law (action and reaction, an empirical fact, not a speculative assumption), it produces no forceful graviton flux in the direction of the observer; it simply shields the graviton flux from beyond it.

Hence, any mass that is not at a vast cosmological distance (with significant redshift) physically acts as a shield for gravitons, and you get pressed towards that shield by the unshielded flux of gravitons on the other side. Gravitons act by pushing; they have spin-1. In the diagram, r is the distance to the mass that is shielding the graviton flux from receding masses located at the far greater distance R. As the simple but subtle geometry shows, the effective area of sky which causes gravity, due to the asymmetry of the mass at radius r, is the cross-sectional area of that mass for quantum gravity interactions (detailed calculations, included elsewhere, show that this cross-section turns out to be the area of the event horizon of a black hole with the mass of the fundamental particle acting as the shield), multiplied by the factor (R/r)^2. This is how the inverse square law, i.e., the 1/r^2 dependence of gravitational force, arises.
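A minimal sketch of that geometry (sigma and F_outward below are arbitrary placeholders, not values from this post): the shield's effective sky area sigma*(R/r)^2, divided by the whole sky area 4*Pi*R^2 at radius R, makes R cancel and leaves a 1/r^2 law.

```python
import math

def net_inward_force(F_outward, sigma, r, R):
    """Asymmetry force from a shield of cross-section sigma at distance r,
    blocking graviton flux from receding masses at the far distance R."""
    effective_sky_area = sigma * (R / r) ** 2       # area shadowed at radius R
    return F_outward * effective_sky_area / (4 * math.pi * R ** 2)

# Doubling r quarters the force; the choice of R does not matter (it cancels).
for r in (1.0, 2.0, 4.0):
    print(r, net_inward_force(F_outward=1.0, sigma=1e-50, r=r, R=1e26))
```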

Because this mechanism is built on solid facts (expansion inferred from redshift data, which can't be explained any other way than recession) and on experiment- and observation-based laws of nature such as Newton's, it is not just a geometric explanation of gravity: it uniquely makes detailed predictions, including the strength of gravity, i.e., the value of G, and the cosmological expansion rate. It is a simple theory, using spin-1 gravitons which exert impulses that add up to an effective pressure or force when exchanged between masses. It is quite a different theory from the mainstream model, which ignores graviton interactions with the other masses in the surrounding universe.

The mainstream model in fact can’t predict anything at all. It begins by ignoring all the masses in the universe except for two masses, such as two particles. It then represents gravity interactions between those two masses by a Lagrangian field equation which it evaluates by a Feynman path integral. It finds that if you ignore all the other masses in the universe, and just consider two masses, then spin-1 gauge boson exchange will cause repulsion, not attraction as we know occurs for gravity. It then ‘corrects’ the Lagrangian by changing the spin of the gauge boson to spin-2, which has 5 polarizations. This new ‘corrected’ Lagrangian with 5 tensor terms for the 5 polarizations of the spin-2 graviton being assumed, gives an always-attractive force between two masses when put into the path integral and evaluated. However, it doesn’t say how strong gravity is, or make any predictions that can be checked. Thus, the mainstream first makes one error (ignoring all the graviton interactions between masses all over the universe) whose fatally flawed prediction (repulsion instead of attraction between two masses) it ‘corrects’ using another error, a spin-2 graviton.

So one reason why the actual spin-1 gravitons don't cause masses to repel is that the path integral isn't just a sum of interactions between two gravitational charges (composed of mass-energy) when dealing with gravity; it's a sum of interactions between all the mass-energy in the universe. The reason why mainstream people don't comprehend this is that the mathematics used in the Lagrangian and path integral is already fairly complex, and they can't readily include the true dynamics, so they ignore them and believe in a fiction instead. (There is a good analogy with the false mathematical epicycles of the Earth-centred universe. Whenever the theory was in difficulty, they simply added another epicycle to make the theory more complex, 'correcting' the error. Errors were actually celebrated and simply re-labelled as 'discoveries' that nature must contain more epicycles.)

A Feynman lecture about LeSage's theory of pushing gravity by material particles (not exchange radiation), given at Cornell in November 1964 and filmed for BBC2 TV, was briefly available on Google Video in 2007 but was then deleted.



The answer to Feynman's problem is that the gauge boson radiation does cause effects on moving particles (predicting the strength of gravity and many other things) and the 'resistance' to the gravitational field of gauge boson radiation is exhibited as the phenomena of Lorentz contraction and inertia (Newton's 1st law of motion).






By Einstein's equivalence principle of general relativity, inertial mass equals gravitational mass. The graviton interactions that cause gravity therefore also cause inertial effects. The gravitational contraction of Earth's radius, which Feynman himself calculated (in his main 1963 Lectures on Physics) to be (1/3)MG/c^2 = 1.5 millimetres, is equivalent to the Lorentz contraction of inertial mass as proved here.
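A quick numeric check of that (1/3)MG/c^2 figure, as a sketch using standard textbook values for the Earth (the constants are not taken from this post):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24      # mass of the Earth, kg
c = 2.998e8       # speed of light, m/s

contraction = G * M / (3 * c**2)       # Feynman's (1/3)MG/c^2
print(f"{contraction * 1000:.2f} mm")  # ~1.48 mm, i.e. the ~1.5 mm quoted
```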




SU(3)xSU(2)xSU(2) or SU(3)xSU(2), instead of the standard model SU(3)xSU(2)xU(1), describes electromagnetism, gravity, and the strong and weak forces

(Several old links to this blog referring to 'top post' mean the previous one, below this post.)

Mainstream quantum gravity is based on the idea of the exchange of gravitons. The problem is that such gravitons would carry energy (the energy a falling object picks up from the gravitational field, for example) and, in the non-ad hoc sector of general relativity (regardless of arguments about the cosmological constant), gravity affects anything with energy (such as deflecting starlight, as observed in confirmations of general relativity), not just mass. Therefore gravitons - because they carry energy - should interact with each other. This is why the problem of quantum gravity is traditionally different from that of electromagnetism, where photons (containing as much positive as negative electric field energy, and as much magnetic curl in one direction as in the other) are neutral as a whole, and so do not interact with each other, at least by electromagnetic interactions.

Hence, in quantum gravity, there is a self-interaction in the graviton problem which simply does not occur in mainstream quantum electrodynamics (quantum electromagnetism). Professor Lee Smolin, on page 85 of his recent book The Trouble with Physics (U.S. edition), argues that the failure here is 'a consequence of not taking Einstein's principle of background independence seriously. Once the gravitational waves [gravitons] interact with one another, they can no longer be seen as moving on a fixed background. They change the background as they travel.' He explains how background independent loop quantum gravity solves this problem. However, while his argument here is correct so far as it goes (assuming that gravitational effects are due to gravitons), there is a difficulty in the Maxwellian electrodynamics which is included in quantum electrodynamics: see this earlier post, which explains that in quantum field theory you can't get any pair production or vacuum polarization below the IR cutoff energy, which corresponds to an electric field strength of about 10^18 volts/metre. This proves that 'Maxwell's radio waves' which we all use can't possibly use vacuum displacement current to close the wave cycle of time-varying electric fields -> time-varying displacement current -> time-varying magnetic fields -> time-varying electric fields, et seq. Instead of there being displacement currents in electromagnetic waves with electric fields below the IR cutoff equivalent of 10^18 v/m, there are electromagnetic radiation effects which cause the effects normally attributed to displacement currents, as that post proves. Now is the time to review the whole problem.

Chargeless photons are assumed to be the exchange radiation in electromagnetism, maybe because it is considered that there would be observable interactions if the gauge bosons were charged. In fact, the exchange of charged bosons in two directions at once is possible, since the magnetic field from each component (moving in opposite directions) will cancel that from the other component, leaving only the electric field of the radiation uncancelled. You can't transmit charged electromagnetic radiation normally, because it has infinite - uncancelled - magnetic self-inductance. The exception is the contrapuntal situation, where you fire as much in one direction as in the opposite direction, so cancelling out the infinite magnetic self-inductance. This is well known from transmission line theory. You can send energy carrying an electric field of one sign only along a transmission line where one conductor contains electrons moving in one direction and the other conductor contains electrons moving in the opposite direction, so cancelling out the net magnetic self-inductance of each conductor. A further effect is that if you have two such pulses passing through one another in opposite directions, all magnetic fields appear to cancel out completely, creating the illusion of a steady charged situation.
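A minimal sketch of that cancellation (the amplitude below is an arbitrary illustrative value, not a figure from this post): for a TEM pulse B = E/c, with the sign of B fixed by the propagation direction, so two equal pulses travelling in opposite directions superpose to zero net magnetic field while their electric fields add.

```python
E = 1.0                 # arbitrary pulse electric field amplitude, V/m
c = 3.0e8               # speed of light, m/s

B_forward = +E / c      # magnetic field of the pulse moving in +x
B_backward = -E / c     # magnetic field of the identical pulse moving in -x

print("net E field:", E + E)                     # 2.0: electric fields add
print("net B field:", B_forward + B_backward)    # 0.0: magnetic curls cancel
```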

SU(3)xSU(2)xU(1) is the standard model, which is wrong in the U(1) sector: U(1), supposedly modelling electromagnetism with a single gauge boson (the massless photon), is actually an oversimplification (see illustration below) of the physical dynamics necessary. It specifically ignores the issue of how neutral photons can mediate and thereby constitute a field which has a positive or negative charge in the vacuum. The necessary expansion of U(1) to include the full dynamics for electromagnetic forces leads to a symmetry similar to the weak force SU(2), apart from the lack of mass and the breaking of parity. (Weak force chiral symmetry, the handedness of particles and the fact that the weak force operates only on left-handed particles, sets apart the SU(2) weak force from the SU(2) electromagnetic force in this scheme.)

The SU(2) weak force has 3 massive gauge bosons (one neutral, one positively charged, one negatively charged). The SU(2) electrogravity force similarly has 3 gauge bosons, but all are massless: the neutral photon mediates gravitation (being uncharged, it cannot add up by the random walk between similar charges throughout the universe which boosts the net effect of the charged bosons, so it produces the far weaker force; see the previous post for the simple mechanism); the massless positively charged gauge boson mediates positive electric fields; and the massless negatively charged gauge boson mediates negative electric fields. That's the corrected version of the standard model. The replacement of the electroweak groups SU(2)xU(1) with SU(2)xSU(2) makes no difference to electroweak theory beyond enabling the inclusion of gravity plus a causal mechanism for electromagnetic and gravitational forces.

The complex polarization of the gauge boson of electromagnetism in the conventional theory is due to this. Electric forces work because electric charges only emit electric field gauge bosons of a similar electric charge to themselves, but absorb all charged gauge bosons. The way that all long-range forces derive from this SU(2) electromagnetic unification is illustrated below. Remember that radiation is exchanged in all directions, but only that along the connecting line between two particles is illustrated, for simplicity. Also see the previous post for details of this physical theory, which predicts how electromagnetism is 10^40 times stronger than gravity. Note that incoming exchange radiation from the distant universe is redshifted, unlike radiation exchanged between two local masses which are not receding from one another (redshifted exchange radiation has less energy E, because by Planck's law E = hf, where f is the frequency, which falls in redshifted radiation; the momentum imparted by radiation is p = E/c if the radiation is absorbed, or p = 2E/c if it is reflected). See also here, including this post, for other developments.
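A minimal sketch of that energy and momentum degradation (the emitted frequency and redshift below are illustrative values of mine, not figures from this post):

```python
h = 6.626e-34              # Planck's constant, J*s
c = 3.0e8                  # speed of light, m/s

f_emitted = 1.0e15         # illustrative emitted frequency, Hz
z = 2.0                    # illustrative cosmological redshift

f_received = f_emitted / (1 + z)    # redshift lowers the frequency
E = h * f_received                  # Planck: E = hf
print("p if absorbed :", E / c)     # momentum p = E/c
print("p if reflected:", 2 * E / c) # p = 2E/c on reflection
# Radiation exchanged between non-receding local masses suffers no such loss.
```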








In the illustration above, electric charges receive gauge bosons of either positive or negative electric field, but only reflect back charged gauge boson radiation of their own charge. Hence, when two charges are nearby, the effect is an asymmetry with a net gain that either pushes them together (dissimilar charges), like two people standing back-to-back and firing machine guns outward (so the recoil causes attraction), or causes them to repel (similar charges), like two people firing machine guns at each other. The analysis above shows that the magnitudes of the attraction and repulsion forces are similar. (The apparent lack of an electric field near matter which contains equal numbers of positive and negative charges is due to the superimposed field being neutral, not to an absence of gauge bosons of electromagnetism. Magnetic fields have been discussed previously: there is no such thing as an electric field without a perpendicular magnetic field, and where that is apparently the case, what is happening is that the curls of the magnetic field from the exchange radiation or from a 'Heaviside energy current' are cancelling one another's effects out, with the energy still present but unobserved until the observer moves in the electric field and hence experiences uncancelled magnetic field effects. The magnetic field B = E/c, but it is not normally seen, due to cancellation effects. For more on radiation effects replacing the mechanism of Maxwell's classical 'displacement current' equation in electromagnetism, see this post, and see this post for a basic discussion of exchange radiation in quantum field theory and problems with the photon in Maxwell's theory.)




The gravity mechanism in the diagram above does not explain much: see instead the illustrations at http://quantumfieldtheory.org/Proof.htm for the full geometric proof of gravity. Masses recede at Hubble speed v = Hr = Hct in spacetime, so there's an outward force F = m.dv/dt ~ 10^43 N. Newton's 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don't cause a reaction force, so they cause an asymmetry => gravity. Non-receding masses are nearby masses which aren't moving away from one another at relativistic velocities in spacetime (hence not accelerating away and not exerting the outward force F = ma which, by Newton's 3rd law, has a reaction force, which by the Standard Model possibilities must be carried by exchange radiation). Because non-receding masses don't fire exchange radiation at one another as a reaction force (like rocket exhaust) from accelerating apart, there is no exchange of photon energy by exchange radiation between local, non-receding masses, but there is with the distant receding matter in the rest of the universe. Hence, local, non-receding masses act as a shield, so that a type of LeSage gravity mechanism works. That's the mechanism for gravity: it's a Rube Goldberg machine. Each stage is simple, but each stage is fact-based, and put together they fully explain the empirically defensible parts of physics (it isn't all defensible; see posts here, here and here).




Update, 18 March 2007: obviously, the electro-weak-gravity group SU(2)xSU(2) described here may be just SU(2) if there are dynamics which allow some of the electro-gravity SU(2) group to transform into weak SU(2) forces. For such dynamics see http://nige.wordpress.com/2007/03/17/the-correct-unification-scheme/. If so, the vacuum pair production dynamics will give rise to the massive weak force gauge bosons at the vacuum energy which allows them, so replacing the Higgs mechanism. Then the corrected standard model becomes simply SU(3)xSU(2), i.e., we merely drop the U(1) from the existing standard model. The mass mechanism needed is:








Update, 23 March '07: I've included some relevant comments near the end of a post here: "The electromagnetic theory, in order to causally explain the mechanisms for repulsion and attraction between similar and dissimilar charges, as well as gravity with the correct strength, from the diffusion of gauge bosons between similar charges throughout the universe (a drunkard's walk with a vector sum of strength equal to the square root of the number of charges in the universe, multiplied by the gravity force, which is mediated by photons), ends up with 3 gauge bosons, like the weak SU(2) force. So this looks as if it can incorporate gravity into the standard model of particle physics.





"The conventional treatment of how photons can cause attractive and repulsive forces just specifies the right number of polarizations and the right spin. If you want a purely attractive gauge boson, you would have a spin-2 ‘graviton’. But this comes from abstract symmetry principles, it isn’t dynamical physics. For example, you can get all sorts of different spins and polarizations when radiation is exchanged depending on how you define what is going on. If, for example, two transverse electromagnetic (TEM) waves pass through one another with the same amplitude while travelling in opposite directions, the curls of their respective magnetic fields will cancel out during the duration of overlap. So the polarization number will be changed! As a result, the exchange of radiation in two directions is easier than a one-way transfer of radiation. Normally you need two parallel conductors to propagate an electromagnetic wave by a cable, or you need an oscillating wave (with as much negative electric field as positive electric field in it) for energy to propagate. The reason for this is that a wave of purely one type of electric field (positive only or negative only) will have an uncancelled infinite self-inductance due to the magnetic field it creates. You have to ensure that the net magnetic field is zero, or the wave won’t propagate (whether guided by a wire, or launched into free space). The only way normally of getting rid of this infinite self-inductance is to fire off two electric field waves, one positive and one negative, so that the magnetic fields from each have opposite curls, and the long range magnetic field is thus zero (perfect cancellation).





"This explains why you normally need two wires to send logic signals. The old explanation for two wires is false: you don’t need a complete circuit. In fact, because electricity can never go instantly around a circuit when you press the on switch, it is impossible for the electricity to ‘know’ whether the circuit it is entering is open or is terminated by a load (or short-circuit), until the light speed electromagnetic energy completes the circuit.





"Whenever energy first enters a circuit, it does so the same way regardless of whether the circuit is open or is closed, because goes at light speed for the surrounding insulator, and can’t (and doesn’t in experiments) tell what the resistance of the whole circuit will turn out to be. The effective resistance, until the energy completes the circuit, is equal to the resistance of the conductors up to the position of the front of the energy current current (which is going at light speed for the insulator), plus the characteristic impedance of the geometry of the pair of wires, which is the 377 ohm impedance of the vacuum from Maxwell’s theory, multiplied by a dimensionless correction factor for the geometry. The 377 ohm impedance here is due to the fact that Maxwell’s so-called ‘displacement current’, which is (for physics at energies below the IR cutoff of QFT) radiation rather than virtual electron and virtual positron motion.





"The point is that the photon’s nature is determined by what is required to get propagation to work through the vacuum. Some configurations are ruled out physically, because the self-inductance of uncancelled magnetic fields is infinite, so such proto-photons literally get nowhere (they can’t even set out from a charge). It’s really like evolution: anything can try to work, but those things that don’t succeed get screened out.





"The photon, therefore, is not the only possibility. You can make exchange radiation work without photons if where each oppositely-directed component of the exchange radiation has a magnetic field curl that cancels the magnetic field of the other component. This means that two other types of electromagnetic gauge boson are possible beyond what is normally considered to be the photon: negatively charged electromagnetic radiation will propagate providing that it is propagating in opposite directions simultaneously (exchange radiation!) so that the magnetic fields are cancelled in this way, preventing infinite self-inductance. Similarly for positive electromagnetic gauge bosons. See this post.





"For those who are easily confused, I’ll recap. The usual photon has an equal amount of positive and negative electric field energy, spatially separated as implied by the size or wavelength of the photon (it’s a transverse wave, so it has a transverse wavelength). Each of these propagating positive and negative electric fields has a magnetic field, but because the magnetic field curls in the opposite direction from a moving electric field as from a moving magnetic field, the two curls cancel out when the photon is seen from a distance large compared to the wavelength of the photon. Hence, near a photon there are electric fields and magnetic fields, but at a distance large compared to the wavelength of the photon, these fields are both cancelled out. This is the reason why a photon is said to be uncharged. If the photon’s fields did not cancel, it would have charge. Now, in the weak force theory there are three gauge bosons which have some connection to the photon: two charged W bosons and a neutral Z boson. This suggests a workable, predictive revision to electromagnetic theory."



Copy of a comment:

http://kea-monad.blogspot.com/2007/04/whats-new.html

"But note that White seems to think that DE has solid foundations." - Kea

Even Dr Woit might agree with White, because anything based on observation seems more scientific than totally abject speculation.

If you assume the Einstein field equation to be a good description of cosmology and to not contain any errors or omissions of physics, then you are indeed forced by the observations that distant supernovae aren't slowing, to accept a small positive cosmological constant and corresponding 'dark energy' to power that long range repulsion just enough to stop the gravitational retardation of distant supernovae.

Quantum gravity is supposed - by the mainstream - to only affect general relativity on extremely small distance scales, ie extremely strong gravitational fields.

According to the uncertainty principle, for virtual particles acting as gauge bosons in a quantum field theory, the energy of each is related to its duration of existence according to: (energy)*(time) ~ h-bar.

Since time = distance/c,

(energy)*(distance) ~ c*h-bar.

Hence,

(distance) ~ c*h-bar/(energy)
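As a quick numeric illustration of this relation (a sketch with textbook constants; the Planck-length example is an assumption of mine, not taken from the comment):

```python
hbar = 1.055e-34     # reduced Planck constant, J*s
c = 3.0e8            # speed of light, m/s
eV = 1.602e-19       # joules per electron-volt

d = 1.6e-35          # Planck length, m (illustrative choice)
E = c * hbar / d     # energy from (distance) ~ c*h-bar/(energy)
print(f"E ~ {E / eV / 1e9:.1e} GeV")  # ~1e19 GeV: huge energy at tiny distance
```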

Very small distances therefore correspond to very big energies. Since gravitons capable of graviton-graviton interactions (photons don't interact with one another, for comparison) are assumed to mediate quantum gravity, the quantum gravity theory in its simplest form is non-renormalizable, because at small distances the gravitons would have very great energies and be strongly interacting with one another, unlike the photon force mediators in QED, where renormalization works. So the whole problem for quantum gravity has been renormalization, assuming that gravitons do indeed cause gravity (they're unobserved). This is where string theory goes wrong: in solving a 'problem' which might not even be real, it comes up with a renormalizable quantum gravity based on gravitons, which it then hypes as being the 'prediction of gravity'.

The correct thing to do is to first ask how renormalization works in gravity. In the standard model, renormalization works because there are different charges for each force, so that the virtual charges will become polarized in a field around a real charge, affecting the latter and thus causing renormalization, ie, the modification of the observable charge as seen from great distances (low energy interactions) from that existing near the bare core of the charge at very short distances, well within the pair production range (high energy interactions).

The problem is that gravity has only one type of 'charge', mass. There's no anti-mass, so in a gravitational field everything falls one way only, even antimatter. So you can't get polarization of virtual charges by a gravitational field, even in principle. This is why renormalization doesn't make sense for quantum gravity: you can't have a different bare core (high energy) gravitational mass from the long range observable gravitational mass at low energy, because there's no way that the vacuum can be polarized by the gravitational field to shield the core.

This is the essential difference between QED, which is capable of vacuum polarization and charge renormalization at high energy, and gravitation which isn't.

However, in QED there is renormalization of both electric charge and the electron's inertial mass. Since by the equivalence principle, inertial mass = gravitational mass, it seems that there really is evidence that mass is renormalizable, and the effective bare core mass is higher than that observed at low energy (great distances) by the same ratio that the bare core electric charge is higher than the screened electronic charge as measured at low energy.

This implies (because gravity can't be renormalized by the effects of polarization of charges in a gravitational field) that the source of the renormalization of electric charge and of the electron's inertial mass in QED is that the mass of an electron is external to the electron core, and is associated with the electron core by the electric field of the core. This is why the shielding which reduces the effective electric charge as seen at large distances also reduces the observable mass by the same factor. In other words, if there were no polarized vacuum of virtual particles shielding the electron core, the stronger electric field would give it a similarly larger inertial and gravitational mass.

Penrose claims in his book 'The Road to Reality' that the bare core charge of the electron is 'probably' (137.036^0.5)*e = 11.7e.

In getting this he uses Sommerfeld's fine structure parameter,

alpha = (e^2)/(4*Pi*permittivity of free space*c*h-bar) = 1/137.036...

Hence, e^2 is proportional to alpha, so you'd expect from dimensional analysis that electric charge shielding should be proportional to (alpha)^0.5.

However, this is wrong physically.

From the uncertainty principle, the range r of a gauge boson is related to its energy E by:

E = hc/(2*Pi*r).

Since the force exerted is F = E/r (from: work energy = force times distance moved in direction of the applied force), we get

F = E/r = [hc/(2*Pi*r)]/r

= hc/(2*Pi*r^2)

= (1/alpha)*(Coulomb's law for electrons)

Hence, the electron's bare core charge really has the value e/alpha, not e/(alpha^0.5) as Penrose guessed from dimensional analysis. This "leads to predictions of masses."
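A numeric check of the step above (a sketch with textbook constants): the gauge boson force F = hc/(2*Pi*r^2) exceeds Coulomb's law for two electrons by the factor 1/alpha, independently of r.

```python
import math

h = 6.626e-34      # Planck's constant, J*s
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # electron charge, C
eps0 = 8.854e-12   # permittivity of free space, F/m

r = 1e-15          # arbitrary separation, m (the ratio is r-independent)
F_gauge = h * c / (2 * math.pi * r**2)          # force from E = hc/(2*Pi*r)
F_coulomb = e**2 / (4 * math.pi * eps0 * r**2)  # Coulomb force, two electrons

print(f"ratio = {F_gauge / F_coulomb:.1f}")     # ~137 = 1/alpha
```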

It's really weird that this simple approach to calculating the total amount of vacuum shielding for the electron core is so ignorantly censored out. It's published in an Apr. 2003 Electronics World paper, and I haven't found it elsewhere. It's a very simple calculation, so it's easy to check both the calculation and its assumptions, and it leads to predictions.

I won't repeat the argument that dark energy is a false theory here at length. Just let's say that over cosmological distances, all radiation, including gauge bosons, will be stretched and degraded in frequency and hence in energy. Thus, the exchange radiation which causes gravity will be weakened by redshift due to expansion over large distances, and when you include this effect on the gravitational coupling parameter G in general relativity, general relativity then predicts the supernovae redshifts correctly. Instead of inventing an additional unobservable ('dark energy') to offset a long-range gravitational retardation which is not actually observed, you simply have no long-range gravitational deceleration: no outward acceleration is needed to offset inward gravity at long distances. The universe is simply flat on large scales because gravity is weakened by the redshift of gauge bosons exchanged over great distances in an expanding universe where gravitational charges (masses) are receding from one another. Simple.

Another problem with general relativity as currently used is the T_{ab} tensor, which is usually given a smooth source for the gravitational field, such as a continuum of uniform density.

In reality, the whole idea of density is a statistical approximation, because matter consists of particles of very high density, distributed in the vacuum. So the idea that general relativity shows that spacetime is flat on small distance scales is just bunk; it's based on the false statistical approximation (which holds on large scales, not on small scales) that you can represent the source of gravity (ie, quantized particles) by a continuum.

So the maths used to make T_{ab} generate solvable differential equations is an approximation which is correct at large scales (after you make allowances for the mechanism of gravity, including redshift of gauge bosons exchanged over large distances), but is inaccurate in general on small scales.

General relativity doesn't prove a continuum exists; it requires a continuum, because it's based on continuously variable differential tensor equations, which don't easily model the discontinuities in the vacuum (ie, real quantized matter). So the nature of general relativity forces you to use a continuum as an approximation.

Sorry for the length of comment, feel free to delete.

24 Comments:

At 2:51 PM, Blogger nige said...

Obviously this post supersedes part of my Electronics World article published in April 2003, which was far more sketchy regarding the gauge bosons.

(1) This proof does show how both electromagnetic attraction and repulsion arise from exchange of gauge bosons due to simple momentum transfer.

(2) It shows that the attraction force for unit charges is equal in magnitude but obviously opposite in sign (direction) to the repulsion force.

(3) It unifies electricity and gravity because photon exchange causes gravity (an always attractive force). See previous post for how to predict the 10^40 times greater force of electromagnetism than gravity between unit charges.

 
At 10:37 AM, Blogger nige said...

Just a note about what fundamental particles are: they probably are "strings" in a sense, but different particles aren't caused by oscillation energies or vibrations of the strings: you have to build the model on the basis of experimental evidence.

There's an illustration of the basic argument and result for an electron in the 6-page April 2003 Electronics World magazine article I mention above.

Different leptons are distinguished essentially by their masses; there are only small differences between the magnetic moments of the muon and the electron.

The electron can be modelled by a looped Heaviside energy current in which electric field E, magnetic field B and propagation at velocity c are all mutually perpendicular vectors. The propagation at velocity c is in the small loop which forms the "string" electron. There's no evidence of any need for it to have vibration modes or extra dimensions rolled up into a Calabi-Yau manifold. Supergravity and supersymmetry and therefore their synthesis string theory (M-theory) are not even wrong and wrong in different details. So forget vibrating strings and extra dimensions.

Lunsford's idea that there is a time dimension for every spatial dimension (3 dimensions of each, 6 dimensions in total) correctly unifies electrodynamics and gravitation, dispensing with the cosmological constant. For the falsity and replacement of the cosmological constant, see: http://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity

This idea of the "strings" which may comprise electrons (it's foolish to insist on even this empirically based ad hoc theory of particles being strings as being a fact until it has been checked more carefully) is based on

(1) The spin of particles. The spin is the loop rotation due to the light speed motion of energy current. The internal spin speed c is not valid if the particle is moving with respect to the observer, due to obvious relativistic problems.

The April 2003 Electronics World article argues that motion occurs with respect to the observer along the axis of spin, and so the spin speed (speed of the energy current going round in the loop) x is related to the propagation velocity v of the entire electron (along the axis of the spin) and light speed c by Pythagoras' theorem:

v^2 + x^2 = c^2

This rearranges to give the relative spin speed as a function of particle velocity,

x_v = x_0 *(1 - v^2 /c^2)^(1/2)

which is of course the Lorentz factor.

Internal spin speed is the only measure of "time" for an electron. So the relative fall in spin speed with velocity is proportional to the dilation of time, and this explains time-dilation for fundamental particles. (A numeric sketch of this Lorentz factor follows after the next item.)

(2) Spherically symmetrical E-field and dipole B-field around the electron: as shown graphically in the April 2003 Electronics World article, the nature of a Heaviside transverse electromagnetic energy current trapped in a loop produces radial, spherically symmetric E-field lines, which explains the electric charge (electric monopole) of the electron, while close in the B-field lines look like a Penrose twistor diagram, a torus around the loop, with magnetic poles coinciding with the spin axis.

Hence, we get spin, magnetic field, and electric field for the lepton correctly predicted here.
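A minimal sketch of the Pythagoras argument from item (1) above (working in units where c = 1 is a choice of mine for clarity): if v^2 + x^2 = c^2 always holds, the spin speed ratio x_v/x_0 reproduces the Lorentz factor exactly.

```python
import math

c = 1.0                                   # work in units where c = 1
for v in (0.0, 0.6, 0.8, 0.99):
    x = math.sqrt(c**2 - v**2)            # spin speed from v^2 + x^2 = c^2
    lorentz = math.sqrt(1 - v**2 / c**2)  # (1 - v^2/c^2)^(1/2)
    print(f"v = {v:.2f}: x_v/x_0 = {x:.4f}, Lorentz factor = {lorentz:.4f}")
```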

The usual quantum entanglement arguments that spin is in superposition until the act of measurement collapses the wavefunction from its indeterminate state is disproved by Dr Thomas Love's analysis, quoted earlier on this blog. The wavefunction isn't in an indeterminate state, just the mathematical equation that people use to model the wavefunction! At the moment of taking a measurement, a time-independent system which is described by the time-independent Schroedinger equation becomes a time-dependent system, described by the obviously very different time-dependent Schroedinger equation. In switching over the mathematical model, you normally fabricate the "evidence" that the wave function collapses. Sure, the act of measurement does affect and disturb the system (the Heisenberg effect), but that is true of anything. When you measure anything, you affect the thing slightly. A battery meter samples some electricity, a tyre pressure gauge lets out some air, a light intensity meter absorbs some light, etc. But that doesn't mean that the battery condition, tyre pressure, light intensity, etc., were all indeterminate until measured. If you toss a coin it is not really (physically) in an indeterminate state between heads and tails until you see how it landed. Mathematically it is indeterminate because it is 0.5 heads, 0.5 tails. But you know the coin physically lands one way up, and you know that the mathematical model of probability is a statement about your ignorance of the condition of the coin, which is entirely different from being a statement mathematically about the factual condition of the coin.

One very sick thing about the state of physics is that I have to write the paragraph above, which should be everyone's default position. Extraordinary claims about the universe being 11 dimensional and there being dark energy and things like quantum entanglement require extraordinary evidence, when the facts are to the contrary.

Alain Aspect's experiment shows that photon spins correlate, proving that although the uncertainty principle (the act of measurement introducing uncertainty) applies to electrons, as shown by other experiments (not by Aspect), the uncertainty principle does not apply to measuring the spins of photons. You wouldn't expect it to. An electron can change spin when measured because it is physically going slower than c, so it experiences time.

A photon can't change spin state, it is frozen in time. Hence the Heisenberg uncertainty principle doesn't apply to measuring the spins of photons or other light speed radiations. It only applies to particles which aren't frozen in time by c speed, so it only applies to particles which have time in which to change spin state.

Similarly, consider the Compton effect. A photon "scattered" by an electron doesn't change state or energy; rather, it gets absorbed, and a new photon is created which moves off at an angle to the original one, with lower energy.

 
At 9:38 AM, Blogger nige said...

Copy of a comment:

http://dorigo.wordpress.com/2007/03/18/a-fair-account-of-the-matter-for-once/

6. nc - March 20, 2007

Can I just clarify what the Higgs is (ignoring the complexity of needing 5 or more Higgs bosons if supersymmetry of some type is correct)? I know it is supposed to be a spin 0 boson that gives mass to everything. But Einstein’s equivalence principle between inertial and gravitational mass therefore implies that the Higgs boson interacts with whatever exchange radiation there is that causes gravity (spin-2 gravitons?).

If that’s the case, then the simple physical picture is that you have Higgs bosons there in vacuum, exchanging gravitons with other Higgs bosons. Because, by pair-production, photons can be converted into massive fermions, there must be a Higgs field (like a Dirac sea) everywhere in space which can allow such particles to pop into existence when fermions with mass are created from photons.

However, the Dirac sea doesn’t say the vacuum is full of pair production everywhere causing virtual particles to pop into existence. These loops of creation and annihilation can only occur in the intense fields near a charge. Pair production from gamma rays occurs where the gamma rays enter a strong field in a high Z-number nucleus.

The IR cutoff used in renormalization indicates that no virtual particle loops are created below a collision energy of 0.511 MeV/particle, i.e., a distance of about 1 fm from Coulomb-scattering electrons, corresponding to an electric field strength of about 10^18 v/m or so. If virtual particles were everywhere, then no real charges would exist, because there would be no limit to the polarization of the vacuum (which would polarize just enough to entirely cancel out all real charges).
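The ~10^18 v/m threshold quoted here is close to the Schwinger critical field for pair production, E_c = m^2*c^3/(e*h-bar); identifying the two is my gloss, not something stated in the comment. A sketch with textbook constants:

```python
m = 9.109e-31      # electron mass, kg
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # electron charge, C
hbar = 1.055e-34   # reduced Planck constant, J*s

E_c = m**2 * c**3 / (e * hbar)   # Schwinger critical field
print(f"E_c = {E_c:.2e} V/m")    # ~1.3e18 V/m
```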

Does this apply to the Higgs field? If the Higgs mass is, say, 150 GeV or whatever, then obviously Higgs bosons are not being created when an electron+positron pair is created from a gamma ray. It takes only about 1 MeV to do that, not the 300 GeV or whatever that would be required to form a pair of Higgs bosons.

Or is it really the case that Higgs bosons are virtual particles created from the vacuum at collision energies corresponding to their mass? Assuming a Higgs mass of 150 GeV, and using Coulomb scattering to relate the distance of closest approach to collision energy, Higgs bosons would be spontaneously created at a distance on the order of 10^{-21} m from the core of an electron.

If this is the case, then the vacuum isn’t full of interacting Higgs bosons like the usual picture of an “aether” miring particles and giving them mass. Instead, the actual mechanism is that Higgs particles appear in some kind of annihilation-creation loops at a distance of 10^{-21} m and closer to a fermion, and the Higgs bosons themselves are “mired” by the exchange of radiation (gravitons) with the Higgs bosons at similar distances from other particles.

This is clearly the correct picture of what is going on, if the equivalence principle is correct and if mass (provided by the Higgs boson) is the charge in quantum gravity.

Professor Alfonso Rueda of California State University and Bernard Haisch have been arguing that radiation pressure from the quantum vacuum produces inertial mass (Physical Review A, v49, p678) and gravitational mass (because the presence of a mass warps the spacetime around it, encountering more photons on the side away from another nearby mass than on the side facing it, so the masses get pushed together - they have a paper proving this effect in principle in Annalen der Physik, v14, p479).

http://www.calphysics.org/articles/newscientist.html

http://www.eurekalert.org/pub_releases/2005-08/ns-ijv081005.php

If Rueda and Haisch are correct in their “vacuum zero-point photon radiation pressure causes inertia and gravity” argument, then the problem is that they are using photon exchange for both electromagnetism and gravity, which is a real muddle because those forces are different in strength by a factor of 10^40 or so for unit charges. So either they’re totally wrong, or oversimplifying.

Suppose they’re not wrong and are oversimplyfying by using rejecting the Higgs boson while using the same gauge boson for electromagnetism and gravity.

Suppose there is still a Higgs boson in their theory, and there are different kinds of gauge bosons for gravity and electromagnetism.

Then, the gravity-causing exchange radiation is mediated between the Higgs bosons in the strong vacuum field near particles. The electromagnetism causing exchange radiation is mediated between the fermions etc. in the cores of the particles.

My next question is: how do the Higgs bosons explain electroweak symmetry breaking? My understanding is that above the electroweak expectation energy there is an electroweak symmetry, with SU(2)xU(1) producing four zero-mass gauge bosons: W+, W-, Z, photon. Below that energy, the W+, W- and Z acquire great mass from the Higgs field.

Because they acquire such mass at low energies (unlike the photon), they have a short range, which makes the weak force short ranged unlike electromagnetism, breaking the electroweak symmetry.

The conventional idea is that very high energy W and Z bosons aren't coupled to the Higgs field? The Higgs field still exists at extremely high energy, but W and Z bosons are unable to couple effectively to it?

Normally in a “miring” medium, drag effects and miring increase with your kinetic energy, since the resistance force is proportional to the square of velocity, and drag effects become small at low speeds.

So the Higgs miring effect is the opposite of a fluid like the air or water? It retards particles of low energy and low speed, but doesn’t mire particles of high energy and high speed?

I’m wondering what the real mechanism is for why Z and W have mass at low speeds but not high speeds? It is also the opposite of special relativity, where mass increases with velocity.

Have I got this all wrong? Maybe the Higgs field disappears above the electroweak symmetry breaking energy, and that explains why the masses of Z and W bosons disappear?

Comment by nc — March 20, 2007 @ 2:44 pm

*******************

If nobody at the blog where I left the comment above confirms it, I will study the Higgs mechanism ideas a bit more and clarify it myself.

The key question is whether the massless Z boson at high energy is just a photon or not.

Perhaps the weak gauge bosons couple the mass causing Higgs bosons to particles like fermions.

Normally exchanges of Z bosons between particles are referred to as "neutral currents" because they carry no net electrical charge (well, if they are related to photons they probably transfer electric field energy, and all we mean by "electric field energy" is all we normally mean by "electric charge", so a trapped electric field would be the same thing as an electric charge, so a photon or presumably a Z boson does contain electromagnetic fields but has no electric charge simply because it contains as much negative electric field energy as positive electric field energy, so the two balance one another - which is different from complete cancellation).

Neutral currents between fermions and Higgs bosons could well be the way in which Higgs bosons are associated with fermions, giving the fermions their masses.

 
At 4:41 AM, Blogger nige said...

Copy of a comment:

http://kea-monad.blogspot.com/2007/03/censorship.html

Louise has a new post showing that the person behind this latest attack is at Cornell, and he seems to be upset that Louise tried posting papers on arXiv. The previous time (last June) the critic was Dr Motl, and Dr Woit really fell out with him over his sexism, rudeness, etc.

I wrote a post showing that Louise's GM = tc^3, far from being completely unphysical, is really obtained simply:

Simply equate the rest mass energy of m with its gravitational potential energy mMG/R with respect to the large mass M of the universe, located at an average distance R = ct from m.

Hence E = mc^2 = mMG/(ct)

Cancelling and collecting terms,

GM = tc^3

So Louise’s formula is derivable.

The rationale for equating rest mass energy to gravitational potential energy in the derivation is Einstein's principle of equivalence between inertial and gravitational mass in general relativity (GR), combined with the special relativity (SR) equivalence of mass and energy!

(1) GR equivalence principle: inertial mass = gravitational mass.

(2) SR equivalence principle: mass has an energy equivalent.

(3) Combining (1) and (2):

inertial mass-energy = gravitational mass-energy

(4) The inertial mass-energy is E=mc^2 which is the energy you get from complete annihilation of matter into energy.

The gravitational mass-energy is the gravitational potential energy a body has within the universe. Hence the gravitational mass-energy is the gravitational potential energy which would be released if the universe were to collapse. This is E = mMG/R with respect to the large mass M of the universe, located at an average distance R = ct from m.
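As a numeric sketch, inverting GM = tc^3 for the present age of the universe (t ~ 13.8 billion years is an assumed standard figure of mine, not one given in the comment) yields a mass of the order commonly estimated for the observable universe:

```python
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                # speed of light, m/s
t = 13.8e9 * 3.156e7       # assumed age of the universe, s

M = t * c**3 / G           # mass implied by GM = tc^3
print(f"M = {M:.2e} kg")   # ~1.8e53 kg
```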

I wrote several follow up posts about this because as a result of my post, Dr Thomas S. Love of the Departments of Mathematics and Physics, California State University, emailed me a derivation of Kepler's law based on the similar reasoning of equating the relevant energy equations! See for example

http://nige.wordpress.com/2006/09/30/keplers-law-from-kinetic-energy/

When I explained all the above in a blog discussion on alternatives to string theory, I think it was one of the discussions at Asymptotia run by Professor Clifford V. Johnson (who is the most friendly and reasonable of the string theorists, I think), Professor Jacques Distler of the University of Texas ridiculed it because it didn't use tensor calculus or something irrelevant.

I'm sympathetic with people who want alternatives to do an enormous amount and prove G = 8*Pi*T and SU(3)xSU(2)xU(1), but what these censors are doing is driving the early development of alternative ideas underground, minimising the number of people working on them, and generating a lot of needless heat and hostility. I've rarely seen a critic make a genuine scientific point, but whenever they do, these points are taken seriously and addressed carefully.

It's all in Machiavelli's description of human politics:

"... the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly ..."

That's the mechanism by which new ideas traditionally have to struggle against those who are happy with the hype of string theory and the lambda-CDM ad hoc cosmology.

Some of the comments Clifford made recently on Asymptotia in trying to defend string theory by saying it is incomplete made me very upset. There is serious hypocrisy, and string theorists themselves just can't see it:

...

"... this is an ongoing research program on a still rather underdeveloped theory. ... how come you are willing to pre-suppose the outcome and discard all the work that is going on to understand and better develop string theory in so many areas? This is especially puzzling since you mention that lack of understanding.
How can you on the one hand claim that a theory is poorly understood, and then in the same breath condemn it as unworkable before it is understood?"
- Clifford

They just can't understand that mainstream string theory ideas that fail to make really checkable predictions can't be defended like this, because mainstream pro-string censors (not Clifford, admittedly) go out of their way to attack alternatives like LQG for being incomplete, not to mention Louise's theory.

(BTW, sorry this is a such a long comment, I'll copy it to my blog so you are free to delete, if it's clutter.)

 
At 4:44 AM, Blogger nige said...

Links regarding controversy mentioned above:

Kea's post

Louise's post

 
At 8:25 AM, Blogger nige said...

Two ways to get GM = tc^3:

(1)

Consider why the big bang was able to happen, instead of the mass being locked by gravity into a black hole singularity and unable to expand!

This question is traditionally answered (Prof. Susskind used this in an interview about his book) by the fact that the universe simply had enough outward explosive or expansive force to counter the gravitational pull which would otherwise have produced a black hole.

In order to make this explanation work, the outward acting explosive energy of the big bang, E = Mc^2, had to either be equal to, or exceed, the energy of the inward acting gravitational force which was resisting expansion.

This energy is the gravitational potential energy E = MMG/R = (M^2)G/(ct).

Hence the explosive energy of the big bang's nuclear reactions, fusion, etc., E = Mc^2, had to be equal to or greater than E = (M^2)G/(ct):

Mc^2 ~ (M^2)G/(ct)

Hence

MG ~ tc^3.

That's the first way, and perhaps the easiest to understand.


(2)

Simply equate the rest mass energy of m with its gravitational potential energy mMG/R with respect to the large mass M of the universe, located at an average distance R = ct from m.

Hence E = mc^2 = mMG/(ct)

Cancelling and collecting terms,

GM = tc^3

So Louise’s formula is derivable.

The rationale for equating rest mass energy to gravitational potential energy in the derivation is Einstein's principle of equivalence between inertial and gravitational mass in general relativity (GR), combined with the special relativity (SR) equivalence of mass and energy!

(1) GR equivalence principle: inertial mass = gravitational mass.

(2) SR equivalence principle: mass has an energy equivalent.

(3) Combining (1) and (2):

inertial mass-energy = gravitational mass-energy

(4) The inertial mass-energy is E=mc^2 which is the energy you get from complete annihilation of matter into energy.

The gravitational mass-energy is the gravitational potential energy a body has within the universe. Hence the gravitational mass-energy is the gravitational potential energy which would be released if the universe were to collapse. This is E = mMG/R with respect to the large mass M of the universe, located at an average distance R = ct from m.

 
At 4:56 PM, Blogger nige said...

copy of a comment to

http://riofriospacetime.blogspot.com/2007/03/t-minus-3-days-conference-opening.html

I just looked up the blog post mentioned in the previous comment and the comments are shut down, preventing any response.

Tony Smith kindly quoted a little bit I wrote saying something like "the total gravitational potential energy of the universe is on the order E = MMG/R = MMG/(ct), which when equated to E=Mc^2 gives Louise's equation"

However, this was then dismissed by another comment from somebody else, who did not go back and check my comment on the other blog. The point is, I also have a lot more justification, such as the reason why you need to equate the gravitational energy with the rest mass energy.

Consider a star. If you had a star of uniform density and radius R, and it collapsed, the energy release from gravitational potential energy being turned into explosive (kinetic and radiation) energy is E = (3/5)(M^2)G/R. The 3/5 factor from the integration which produces this result is not applicable to the universe, where the density rises with apparent distance because of spacetime (you are looking to earlier, more compressed and dense epochs of the big bang when you look to larger distances). It's more sensible to just remember that the gravitational potential energy of mass m located at distance R from mass M is simply E = mMG/R, so the gravitational potential energy of the universe is similar, if R is defined as the effective distance the majority of the mass would move through if the universe collapsed.

This idea of gravitational potential energy shouldn't be controversial: in supernovae explosions, much energy comes from such an implosion, which turns gravitational potential energy into explosive energy!

Generally, to overcome gravitational collapse, you need to have an explosive outward force.

The universe was only able to expand in the first place because the explosive outward force, provided by kinetic and radiation energy, counteracted the gravitational force.

Initially, the entire energy of the universe was present as various forms of radiation. Hence, to prevent the early universe from being contracted into a singularity by gravity, we have the condition E = Mc^2 = (M^2)G/R = (M^2)G/(ct), which gives GM = tc^3.

****************

My earlier comment:

Two ways to get GM = tc^3:

(1)

Consider why the big bang was able to happen, instead of the mass being locked by gravity into a black hole singularity and unable to expand!

This question is traditionally answered (Prof. Susskind used this in an interview about his book) by the fact the universe simply had enough outward explosive or expansive force to counter the gravitational pull which would otherwise produce a black hole.

In order to make this explanation work, the outward acting explosive energy of the big bang, E = Mc^2, had to either be equal to, or exceed, the energy of the inward acting gravitational force which was resisting expansion.

This energy is the gravitational potential energy E = MMG/R = (M^2)G/(ct).

Hence the explosive energy of the big bang's nuclear reactions, fusion, etc., E = Mc^2 had to be equal or greater than E = (M^2)G/(ct):

Mc^2 ~ (M^2)G/(ct)

Hence

MG ~ tc^3.

That's the first way, and perhaps the easiest to understand.


(2)

Simply equate the rest mass energy of m with its gravitational potential energy mMG/R with respect to large mass of universe M located at an average distance of R = ct from m.

Hence E = mc^2 = mMG/(ct)

Cancelling and collecting terms,

GM = tc^3

So Louise’s formula is derivable.

The rationale for equating rest mass energy to gravitational potential energy in the derivation is Einstein's principle of equivalence between inertial and gravitational mass in general relativity (GR), when combined with special relativity (SR)equivalence of mass and energy!

(1) GR equivalence principle: inertial mass = gravitational mass.

(2) SR equivalence principle: mass has an energy equivalent.

(3) Combining (1) and (2):

inertial mass-energy = gravitational mass-energy

(4) The inertial mass-energy is E=mc^2 which is the energy you get from complete annihilation of matter into energy.

The gravitational mass-energy is the gravitational potential energy a body has within the universe. Hence the gravitational mass-energy is the gravitational potential energy which would be released if the universe were to collapse. This is E = mMG/R with respect to the mass of the universe, M, located at an effective average distance R = ct from m.

****************

What's interesting is that the mainstream doesn't want to discuss science when it comes to alternatives, as Tony Smith makes clear in his discussion of censorship.

They use ad hominem attacks, which is a lazy approach whereby no careful science or disciplined checks are involved. The mainstream, however, objects if ad hominem attacks are used against its leaders. For example, Dr Ed Witten - M-theory creator - was misleading when he claimed:

‘String theory has the remarkable property of predicting gravity.’ - Dr Edward Witten, M-theory originator, Physics Today, April 1996.

Dr Peter Woit remarks that the prediction is just a prediction of an unobservable spin-2 graviton and not a prediction of anything to do with gravity that is either already experimentally verified or checkable in the future:

‘There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.’ – Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135

If you call mainstream M-theory hypers ‘liars’, ‘charlatans’, ‘crackpots’, etc., you find that you are then accused of being a ‘science hater’. So they don't like criticism.

To give credit where due, Dr Ed Witten published a letter in Nature (Vol. 444, 16 November 2006) stating:

‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. ... They bring into the public arena technical claims that few can properly evaluate. ... Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’

So Dr Ed Witten at least doesn't encourage attacks on critics; he just prefers to ignore them. Maybe this is worse for critics with alternative ideas, however, since their choice is then between controversy and being ignored altogether.

But the mainstream as a whole does go far out of its way to use ad hominem attacks on alternatives, hence Lubos Motl's attacks, and many others.

 
At 5:06 PM, Blogger nige said...

One new idea which occurs to me: the two types of derivation in the above comment could be combined to prove one or the other. If you can take the first type of derivation as experimentally sound, for example, then that would allow you to theoretically derive Einstein's equivalence principle between inertial and gravitational mass.

 
At 3:44 AM, Blogger nige said...

copy of a comment:

http://dorigo.wordpress.com/2007/03/29/confused/#comment-31696

3. nc - March 29, 2007

Regarding Louise Riofrio's GM = tc^3:


Analysis of what GM = tc^3 implies:

If you look at GM = tc^3, you see 'problems' right away. The inclusion of time on the right hand side implies that something else there (G, M, or c) must vary with time.

Louise has investigated the assumption that c is varying while GM remains constant. This tells her that c would need to fall with the inverse cube-root of the age of the universe. She has made studies on this possibility, and has detailed arguments.

I should mention that I've investigated the situation where c doesn't vary but G increases in direct proportion to t. This increase of G is the opposite of Dirac's assumption (he thought G might decrease with time, having been initially higher; a claim refuted by Teller, who pointed out that the fusion rate's sensitive dependence on G would have made the sun's power boil the oceans during the Cambrian era, which clearly didn't occur). G variation actually doesn't affect fusion in stars or in the big bang, because electromagnetism would vary in a similar way. Fusion depends on protons approaching close enough, due to gravity-caused compression, to overcome the Coulomb repulsion, so that the strong force can bind them together. If you vary both gravity and electromagnetism in the same way (in a theory unifying gravity with the standard model), you end up with no effect on the fusion rate: the increased gravity from a bigger G doesn't increase fusion, because Coulomb repulsion is also increased! Hence, variation in G doesn't affect fusion in stars or in the big bang.

Smaller G in the past therefore doesn't upset the basic big bang model. What it does do is explain why the ripples in the cosmic background radiation are so small: they are small because G was small, not because of inflation.

So this is another aspect of Louise's equation GM = tc^3. It could turn out that something else, like G, is varying, not c. One more thing about this: some theoretical calculations I did suggest that there is a dimensionless constant equal to e^3 (the cube of the base of natural logarithms), due to the quantum gravity effects of the exchange radiation causing gravitation. Basically, the exchange radiation travels at light velocity in spacetime (it doesn't travel instantly), so the more distant universe is of higher density (being seen further in the past, earlier in time after the big bang, and hence more compressed). Gravity is therefore affected by this apparently increasing density at great spacetime distances. Another factor stops this effect from going toward infinity at the greatest distances: redshift. Gauge bosons should get stretched out (redshifted in frequency) by expansion, so the energy they carry, E = hf, decreases. The great redshift offsets the increasing strength of the gravitational exchange radiation due to the density going towards infinity as you look to great distances.

This effect is easily calculated, and the result is G = (3/4)(H^2)/(Pi * Rho * e^3), which is a factor of (e^3)/2, or approximately 10 times, smaller than the value implied by the critical density in pre-1998 cosmology (no cosmological constant), where you can rearrange the critical density formula to give G = 3(H^2)/(8 * Pi * Rho).
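A sketch of the factor comparison (just arithmetic; no physics assumptions beyond the two formulae quoted above):

```python
# Ratio of the critical-density value G = 3(H^2)/(8*Pi*Rho) to the
# mechanism's G = (3/4)(H^2)/(Pi*Rho*e^3); H and Rho cancel out.
import math

ratio = (3 / (8 * math.pi)) / ((3 / 4) / (math.pi * math.e**3))
print(f"ratio = {ratio:.2f}")           # ~10.04
print(f"e^3/2 = {math.e**3 / 2:.2f}")   # the same number, as stated above
```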

This means that Louise's equation becomes:

GMe^3 = tc^3.

The dynamics resolve the dark matter problem. I'm writing a paper on this. Previously I've had 10 pages published on it in the August 2002 and April 2003 issues of Electronics World, because the mechanism for the gravity exchange radiation is linked to that for electromagnetism, but I'd like to try again to get a paper into Classical and Quantum Gravity. The editor of Classical and Quantum Gravity had my last submission refereed by a string theorist, who ignored the science and just said it didn't fit into the mainstream speculation. (The editor forwarded me the referee's report without giving the name of the referee; it was evident from the report that the referee would have been happier if the paper had been within string theory's framework, which is why I suspect he/she is a string theorist.)

 
At 8:33 AM, Blogger nige said...

Copy of a comment to

http://kea-monad.blogspot.com/2007/03/time.html

"This does not sound like Einstein at all to me. It sounds like the common misconception of the Big Bang cosmology, according to which the Big Bang was an explosion which took place in a preexisting space, which was empty except for a concentration of mass around some particular point. But that is not what the standard cosmology says!"

Science is about finding and using facts, not just about "standard cosmology". That is education, which deals in consensus, regardless of whether there are facts behind it or not.

When you teach something, you may have to conform to consensus so that your students know the same as other students taught by other people, and will therefore be able to take the same exams and pass.

Science is different to consensus, and this has created problems.

Until quantum gravity and general relativity are combined, you have to stick to facts, not speculations such as the standard application of general relativity with cosmological constant (lambda-CDM) to the data. That's an ad hoc theory which can't be explained by quantum field theory.

For one thing, your definition of "space" is probably at fault. The big bang could in theory be simulated by a 10^55 megaton matter-antimatter explosion.

This would not be an explosion in some "pre-existing space".

What exactly do you mean by pre-existing space?

It's a meaningless concept! The standard model of cosmology has a singularity at time zero, which creates problems (infinite energy density), and will require quantum gravity of some sort to resolve.

Until then, nobody can say what happened or existed (if anything) "before" the big bang.

It's simply incorrect for you to claim that standard science has resolved the singularity problem and has proved that the big bang definitely created the Dirac sea and the spacetime fabric.

General relativity can give no answer. Today's consensus on a topic dependent on quantum gravity, which is controversial, is meaningless.

As regards the problems with c change, I agree. Take E = Mc^2. If c is varying (Louise has analysed the case where c falls as the inverse cube-root of time), then is M also varying to keep E constant, or does "E" vary? It's not too nice, because if energy isn't constant or conserved, where does it go? To explain this, you could maybe argue that cosmic expansion and the associated redshift cause a loss of energy to radiation. As frequencies are redshifted to lower values, the energy of the quanta decreases because of Planck's law, E = hf. I'm not, however, too concerned with the idea that GM = tc^3 implies that tc^2 is a constant, and have been investigating the situation where M and c are constant, so G is proportional to t.

 
At 8:34 AM, Blogger nige said...

Correction. The last sentence in the previous comment should read:

"I not however too concerned with the idea that GM= tc^3 implies that tc^3 is a constant, and have been investigating the situation where M and c are constant, so G is proportional to t."

 
At 11:18 AM, Blogger nige said...

copy of a comment:

http://scienceblogs.com/gnxp/2007/04/string_theory_theology.php

“Leprechauns were not invented by applying the concepts of quantum mechanics to the motion of wiggly objects defined to obey special relativity. Both quantum mechanics and special relativity are bodies of knowledge in which we have extremely high levels of confidence, assuming we discuss them within their range of applicability.” - Blake Stacey.

That’s just it: special relativity won’t hold at the Planck scale. Firstly, gravity becomes strong at the Planck scale, and strong gravity invalidates the constant velocity of light (gravity makes light bend).

It’s significant that special relativity assumes that the velocity of light is constant, i.e., it assumes that light cannot curve (a change in direction involves a change of velocity, a vector; it’s not the same as changing speed).

General relativity is entirely different to special relativity. Special relativity is just that, a special case which actually exists nowhere in the real universe. It’s just an approximation because light is never really moving with constant velocity where there are masses around.

‘The special theory of relativity ... does not extend to non-uniform motion ... The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). ...’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

String theorists are confused over background independence. Really, general relativity is background independent: the metric is always the solution to the field equation, and can vary in form depending on the assumptions used, because the shape of spacetime (the type and amount of curvature) depends on the mass distribution, the cc value, etc. Weak field solutions like the Schwarzschild metric have a simple relationship to the FitzGerald-Lorentz transformation. Just change v^2 to 2GM/r and you get the Schwarzschild metric from the FitzGerald-Lorentz transformation, on the basis of the energy equivalence of kinetic and gravitational potential energy:

E = (1/2)mv^2 = GMm/r, hence v^2 = 2GM/r.

Hence the contraction factor (1 - v^2/c^2)^{1/2} becomes (1 - 2GM/(rc^2))^{1/2}, which is the contraction and time-dilation form of the Schwarzschild metric.
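To put a number on that substitution (a minimal sketch; the solar surface values are assumed for illustration only):

```python
# Contraction factor (1 - 2GM/(r*c^2))^(1/2) evaluated at the Sun's surface.
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M = 1.989e30        # solar mass, kg
r = 6.957e8         # solar radius, m

factor = (1 - 2 * G * M / (r * c**2)) ** 0.5
print(f"contraction factor at the solar surface: {factor:.9f}")  # ~0.999997877
```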

Einstein's equivalence principle between inertial and gravitational mass in general relativity, combined with his equivalence between mass and energy in special relativity, implies that the kinetic energy of a mass (E = (1/2)mv^2) is equivalent to the gravitational potential energy of that mass with respect to the surrounding universe (i.e., the amount of energy released per mass m if the universe collapsed, E = GMm/r, where r is the effective size scale of the collapse). So there are reasons why the nature of the universe is probably simpler than the mainstream suspects:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

In addition to the gravitational field problem for string scales, special relativity doesn’t apply at the Planck scale because that’s supposed to be a grain size in the vacuum irrespective of Lorentz contraction.

The Planck scale doesn't get smaller when there is motion relative to the observer. People including Smolin have introduced "doubly special relativity" to deal with this: a breakdown of special relativity at the vacuum grain size, such as the Planck scale (which is the size scale assumed for strings!).

The fact that string theory is built on the assumption that special relativity applies to all scales is typical of the speculative, non-fact based nature of string theory.

All of the other non-empirical, uncheckable assumptions put into string theory follow suit (7 extra dimensions to explain unobserved gravitons, 6 extra dimensions for an unobserved, speculative supersymmetric unification of forces at the Planck scale, branes to explain that the 10-dimensional superstring is a membrane surface effect on an 11-dimensional bulk, etc., etc.).

‘... I do feel strongly that this is nonsense! ... I think all this superstring stuff is crazy and is in the wrong direction. ... I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation - a fix-up to say “Well, it still might be true”. For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s possible mathematically, but why not seven? ... In other words, there’s no reason whatsoever in superstring theory that it isn’t eight of the ten dimensions that get wrapped up ... So the fact that it might disagree with experiment is very tenuous, it doesn’t produce anything; it has to be excused most of the time. ... All these numbers ... have no explanations in these string theories - absolutely none! ...’ - Richard P. Feynman, in Davies & Brown, Superstrings, 1988, pages 194-195.

Posted by: nc | April 3, 2007 10:12 AM

 
At 8:16 AM, Blogger nige said...

Copy of a comment

http://riofriospacetime.blogspot.com/2007/04/brief-history-of-c-change.html


This is extremely interesting, and well worth investigating. In order to cover all possibilities, I wonder whether, if you write a longer paper, you might include some discussion of the possibility of alternative variables in GM = tc^3?

c = (GM/t)^{1/3} is a major solution, and you have investigated it and found that it explains interesting features in the experimental data, but I'm aiming to write a detailed paper analysing and comparing the possibility that c varies with the possibility that something else varies.

The idea - which may be wrong - is that as you look to larger distances, you're looking back in time. This means that the normal Hubble law v = Hr (there's no observable gravitational retardation of the expansion: either it doesn't exist at all at great distances, or long-range gravity is being cancelled out by outward acceleration due to "dark energy", which seems too much of an ad hoc, convenient, epicycle-type invention) can be written:

v = Hr = Hct

where t is the time past you are looking back to, and H is Hubble's parameter. The key thing here is that from our frame of reference, in which we always see things further back in time at greater distances (due to the travel time of light and of long-range fields), there is a variation of velocity with time as seen in our frame of reference. This is equivalent to an acceleration.

v = dr/dt hence dt = dr/v

hence

a = dv/dt = dv/(dr/v) = v*dv/dr

now substituting v = Hr

a = v*dv/dr = Hr*d(Hr)/dr

= Hr*H = (H^2)r.

So the mass of the universe M around us, at an effective average radial distance r from us, has outward force

F = Ma = M(H^2)r.
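A rough numerical sketch of this outward force (the values of H and especially M below are illustrative assumptions, not derived quantities):

```python
# Outward force F = M*(H^2)*r with rough assumed values:
# H ~ 70 km/s/Mpc, r ~ c/H, M ~ 1e53 kg (an order-of-magnitude guess
# for the mass of the universe).
H = 70e3 / 3.086e22    # Hubble parameter in s^-1, ~2.3e-18
c = 2.998e8            # m/s
M = 1e53               # kg, assumed
r = c / H              # effective radius, ~1.3e26 m

a = H**2 * r           # outward acceleration; equals H*c for r = c/H
F = M * a
print(f"a = {a:.1e} m/s^2, F = {F:.1e} N")  # a ~7e-10 m/s^2, F ~7e43 N
```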

By Newton's 3rd law of motion, there should be an inward reaction force. From a look at what could be delivering that force, the gauge boson radiation which causes curvature and gravity looks the most likely candidate.

Conjecture: curvature is due to an inward force of F = Ma = M(H^2)r in our spacetime due to the outward motion of matter around us.

But notice that if this is correct, gravity is caused by an inward force which is proportional to some scale of the universe, r; so the gravitational coupling constant G will increase in proportion to r, which in turn is proportional to the age of the universe, t.

The result from a full theory is G = (3/4)(H^2)/(Pi*Rho*e^3), which is your equation with a factor of e^3 included theoretically from other effects (the redshift of exchange radiation and the increasing density of the universe with greater distance/time past).

Since H^2 = 1/t^2 and Rho is proportional to r^{-3}, i.e., to t^{-3}, G in this equation is proportional to (1/t^2)/(t^{-3}) = t, agreeing with the simplified argument above that G is directly proportional to the age of the universe, t.
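The proportionality can be checked symbolically (a sketch using sympy; H = 1/t and Rho proportional to t^-3 are taken from the argument above):

```python
# Symbolic check that G = (3/4)(H^2)/(Pi*Rho*e^3) grows linearly with t
# when H = 1/t and Rho = rho0/t^3.
import sympy as sp

t, rho0 = sp.symbols('t rho0', positive=True)
H = 1 / t
rho = rho0 / t**3
G = sp.Rational(3, 4) * H**2 / (sp.pi * rho * sp.exp(3))

print(sp.simplify(G))  # 3*t*exp(-3)/(4*pi*rho0), i.e. proportional to t
```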

Dirac investigated a different idea, whereby G is inversely proportional to t.

Dirac's idea was derided by Teller in 1948, who claimed it would affect fusion rates in the sun, making the sea boil in the Cambrian era when life was evolving! It's true that fusion rates in stars, and indeed in the first few minutes of the big bang itself, depend in an extremely sensitive way on G (actually on G raised to quite a large power), because gravitational compression is the basis for fusion; but if electromagnetism is unified with gravity and thus varies with time in the same way (Coulomb's law is also a long-range inverse-square law, like gravity), the variation in the repulsion between protons will offset the variation in the gravitational attraction. In addition, the G-proportional-to-t idea already has a few good experimental agreements, like your theory. For instance, the weaker G at the time of emission of the CBR means that the smaller-than-expected ripples in the CBR spectrum from galaxy seeding are due to weaker gravity.

This does seem to offer an alternative variation possibility in case c change is not the right solution. I hope to fully investigate both models.



6 April 2007

 
At 8:23 AM, Blogger nige said...

copy of a comment:

"http://kea-monad.blogspot.com/2007/04/gravity-probe-b.html"

Thanks for these links! This nice experimental testing will probably go wrong because the error bars will be too big to rule anything out, or whatever. If you get crazy looking experimental results, people will dismiss them instead of throwing general relativity away in any case; as a last resort an epicycle will be added to make general relativity agree (just as the CC was modified to make the mainstream general relativity framework fit the facts in 1998).

This post reminds me of a clip on YouTube showing Feynman in November 1964 giving his Character of Physical Law lectures at Cornell (these lectures were filmed for the BBC, which broadcast them on BBC2 TV in 1965):

"In general we look for a new law by the following process. First we guess it. Don't laugh... Then we compute the consequences of the guess to see what it would imply. Then we compare the computation result to nature: compare it directly to experiment to see if it works. If it disagrees with experiment: it's wrong. In that simple statement is the key to science. It doesn't any difference how beautiful your guess is..."

- http://www.youtube.com/watch?v=ozF5Cwbt6RY

I haven't seen the full lectures. Someone should put those lecture films on the internet in their entirety. They have been published in book form, but the actual film looks far more fun, particularly as it catches the audience's reactions. Feynman has a nice discussion of the LeSage problem in those lectures, and it would be nice to get a clip of him discussing that!

General relativity is right at a deep level and doesn't in general even need testing for all predictions, simply because it's just a mathematical description of accelerations in terms of spacetime curvature, with a correction for conservation of mass-energy. You don't keep on testing E=mc^2 for different values of m, so why keep testing general relativity? Far better to work on trying to understand the quantum gravity behind general relativity, or even to do more research into known anomalies such as the Pioneer anomaly.

General relativity may need corrections for quantum effects, just as it needed a major correction for the conservation of mass-energy in November 1915 before the field equation was satisfactory.

The major advance in general relativity (beyond the use of the tensor framework, which dates back to 1901, when developed by Ricci and Tullio Levi-Civita) is a correction for energy conservation.

Einstein started by saying that curvature, described by the Ricci tensor R_ab, should be proportional to the stress-energy tensor T_ab which generates the field.

This failed, because T_ab doesn't have zero divergence where zero divergence is needed "in order to satisfy local conservation of mass-energy".

The zero divergence criterion just specifies that you need as many field lines going inward from the source as going outward from the source. You can't violate the conservation of mass-energy, so the total divergence is zero.

Similarly, the total divergence of magnetic field from a magnet is always zero, because you have as many field lines going outward from one pole as going inward toward the other pole, hence div.B = 0.

The components of T_ab (energy density, energy flux, pressure, momentum density, and momentum flux) don't obey mass-energy conservation because of the gamma factor's role in contracting the volume.

For simplicity if we just take the energy density component, T_00, and neglect the other 15 components of T_ab, we have

T_00 = Rho*(u_0)*(u_0)

= energy density (J/m^3) * gamma^2

where gamma = [1 - (v^2)/(c^2)]^(-1/2)

Hence, T_00 will increase towards infinity as v tends toward c. This violates the conservation of mass-energy if R_ab ~ T_ab, because radiation going at light velocity would experience infinite curvature effects!

This means that the energy density you observe depends on your velocity, because the faster you travel the more contraction you get and the higher the apparent energy density. Obviously this is a contradiction, so Einstein and Hilbert were forced to modify the simple idea that (by analogy to Poisson's classical field equation) R_ab ~ T_ab, in order to make the divergence of the source of curvature always equal to zero.

This was done by subtracting (1/2)*(g_ab)*T from T_ab, because T_ab - (1/2)*(g_ab)*T always has zero divergence.

T is the trace of T_ab, i.e., just the sum of the scalars: the energy density T_00 plus the pressure terms T_11, T_22 and T_33 (these four components making up T are just the diagonal - scalar - terms in the matrix for T_ab).

The stated reason for this choice is that T_ab - (1/2)*(g_ab)*T gives zero divergence "due to Bianchi's identity", which is a bit mathematically abstract, but obviously what you are doing physically by subtracting (1/2)*(g_ab)*T is just removing from T_ab whatever is giving it a finite divergence.

Hence the corrected R_ab ~ T_ab - (1/2)*(g_ab)*T ["which is equivalent to the usual convenient way the field equation is written, R_ab - (1/2)*(g_ab)*R = T_ab"].

Notice that since T_00 is equal to its own trace T, you see that

T_00 - (1/2)(g_ab)T

= T - (1/2)(g_ab)T

= T(1 - 0.5g_ab)

Hence, the massive modification introduced to complete general relativity in November 1915 by Einstein and Hilbert amounts to just subtracting a fraction of the stress-energy tensor.

The tensor g_ab [which equals (ds^2)/{(dx^a)*(dx^b)}] depends on gamma, so it simply falls from 1 to 0 as the velocity increases from v = 0 to v = c, hence:

T_00 - (1/2)(g_ab)T = T(1 - 0.5g_ab) = T where g_ab = 0 (velocity of v = c) and

T_00 - (1/2)(g_ab)T = T(1 - 0.5g_ab) = (1/2)T where g_ab = 1 (velocity v = 0)

Hence for a simple gravity source T_00, you get curvature R_ab ~ (1/2)T in the case of low velocities (v ~ 0), but for a light wave you get R_ab ~ T, i.e., there is exactly twice as much gravitational acceleration acting at light speed as at low speed. This is clearly why light gets deflected in general relativity by twice the amount predicted by Newtonian gravitational deflection (a = MG/r^2, where M is the sun's mass).
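A sketch of that velocity dependence, following the simplified treatment above (the interpolation in g_ab is this argument's own approximation, not a standard textbook derivation):

```python
# Effective curvature source T*(1 - 0.5*g_ab), which runs from T/2 at
# v = 0 (g_ab = 1) to T at v = c (g_ab = 0).
def source_term(g_ab, T=1.0):
    """Return the effective source T*(1 - 0.5*g_ab) for unit T."""
    return T * (1 - 0.5 * g_ab)

low_speed = source_term(g_ab=1.0)   # v ~ 0
light     = source_term(g_ab=0.0)   # v = c
print(light / low_speed)            # 2.0, the doubled deflection of light
```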

I think it is really sad that no great effort is made to explain general relativity simply in a mathematical way (if you take away the maths, you really do lose the physics).

Feynman had a nice explanation of curvature in his 1963 Lectures on Physics: gravitation contracts (shrinks) the Earth's radius by (1/3)GM/c^2 = 1.5 mm, but this contraction doesn't affect transverse lines running perpendicular to the radial gravitational field lines, so the circumference of the Earth isn't contracted at all! Hence Pi would increase slightly if there are only 3 dimensions: circumference/diameter of the Earth (assumed spherical) = [1 + 2.3*10^{-10}]*Pi.
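Feynman's numbers are easy to reproduce (a minimal sketch; standard Earth values assumed):

```python
# Radial contraction (1/3)GM/c^2 for the Earth, and the fractional change
# in the circumference/diameter ratio that follows from it.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M = 5.972e24         # Earth mass, kg
R = 6.371e6          # Earth radius, m

dR = G * M / (3 * c**2)
print(f"radial contraction: {dR * 1000:.2f} mm")    # ~1.5 mm
print(f"fractional change in Pi: {dR / R:.1e}")     # ~2.3e-10
```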

This distortion of geometry - presumably just a simple physical effect of exchange radiation compressing masses in the radial direction only (in some final theory that includes quantum gravity properly) - explains why there is spacetime curvature. It's a shame that general relativity has become controversial just because it's been badly explained using false arguments (like balls rolling together on a rubber water bed, which is a false two-dimensional analogy - and if you correct it by making it three-dimensional, with a surrounding fluid pushing objects together where they shield one another, you get censored out, because most people don't want accurate analogies, just myths).

(Sorry for the length of this comment by the way and feel free to delete it. I was trying to clarify why general relativity doesn't need testing.)

 
At 3:34 AM, Blogger nige said...

copy of a follow up comment:

http://kea-monad.blogspot.com/2007/04/gravity-probe-b.html

Matti, thank you very much for your response. On the issue of tests for science, if a formula is purely based on facts, it's not speculative and my argument is that it doesn't need testing in that case. There are two ways to do science:

* Newton's approach: "Hypotheses non fingo" [I frame no hypotheses].

* Feynman's dictum: guess and test.

The key ideas in the framework of general relativity are solid empirical science: gravitation, the equivalence principle of inertial and gravitational acceleration (which seems pretty solid to me; although Dr Mario Rabinowitz writes somewhere about some small discrepancies, there's no statistically significant experimental refutation of the equivalence principle, and it has a lot of evidence behind it), spacetime (which has evidence from electromagnetism), the conservation of mass-energy, etc.

All these are solid. So the field equation of general relativity, which is the key to making the well-tested, unequivocal and unambiguous predictions (unlike the anthropic selection from the landscape of solutions it gives for cosmology, which is a selection to fit observations of how much "dark energy" you assume is powering the cosmological constant, and how much dark matter is around that can't be detected in a lab for some mysterious reason), is really based on solid experimental facts.

It's as pointless to keep testing - within the range of the solid assumptions on which it is based - a formula built on solid facts as it is to keep testing, say, Pythagoras' theorem for different sizes of triangle. It's never going to fail (in Euclidean geometry, i.e., flat space), because the inputs to the derivation of the equation are all solid facts.

Einstein and Hilbert in 1915 were using Newton's no-hypotheses (no speculations) approach, so the basic field equation is based on solid fact. You can't disprove it, because the maths has physical correspondence to things already known. The fact it predicts other things like the deflection of starlight by gravity when passing the sun as twice the amount predicted by Newton's law, is a bonus, and produces popular media circus attention if hyped up.

The basic field equation of general relativity isn't being tested because it might be wrong. It's only being tested for psychological reasons and publicity, and because of Popper's false idea that theories must forever remain falsifiable (i.e., uncertain, speculative, or guesswork).

The failure of Popper is that he doesn't include proofs of laws which are based on solid experimental facts.

First, consider Archimedes' proof of the law of buoyancy in On Floating Bodies. The water is X metres deep, and the pressure in the water under a floating body is the same as that at the same height above the seabed regardless of whether a boat is above it or not. Hence, the weight of water displaced by the boat must be exactly equal to the weight of the boat, so that the pressure is unaffected by whether or not a boat is floating above a fixed submerged point.

This law is not a falsifiable law. Nor are other empirically-based laws. The whole idea of Popper that you can falsify a solidly empirically based scientific theory is just wrong. The failures of epicycles, phlogiston, caloric, vortex atoms, and aether are due to the fact that those "theories" were not based on solid facts, but upon guesses. String theory is also a guess, but not a Feynman-type guess (string theory is really just postmodern ***t in the sense that it can't be tested, so it's not even a Popper-type, ever-falsifiable speculative theory; it's far worse than that: it's "not even wrong" to begin with).

Similarly, Einstein's original failure with the cosmological constant was a guess. He guessed that the universe is static and infinite without a shred of evidence (based on popular opinion and the "so many people can't all be wrong" fallacy). Actually, from Olbers' paradox, Einstein should have realised that the big bang is the correct theory.

The big bang idea goes right back to Erasmus Darwin in 1791 and Edgar Allan Poe in 1848, and was basically a fix to Olbers' paradox (the problem that if the universe is infinite, static and not expanding, the light from the infinite number of stars in all directions would make the entire sky as bright as the sun: the fact that the sun is close to us and gives a higher inverse-square-law intensity than a distant star is balanced by the fact that at greater distances there are more stars covering any given solid angle of the sky; the correct resolution of Olbers' paradox is not - contrary to popular accounts - the limited size of the universe in the big bang scenario, but the redshifts of distant stars in the big bang, because after all we're looking back in time with increasing distance, and in the absence of redshift we'd see extremely intense radiation from the high-density early universe at great distances).

Erasmus Darwin wrote in his 1790 book ‘The Botanic Garden’:

‘It may be objected that if the stars had been projected from a Chaos by explosions, they must have returned again into it from the known laws of gravitation; this however would not happen, if the whole Chaos, like grains of gunpowder, was exploded at the same time, and dispersed through infinite space at once, or in quick succession, in every possible direction.’

So there was no excuse for Einstein in 1916 to go along with popular prejudice and ignore Olbers' paradox, ignore Darwin, and ignore Poe. What was Einstein thinking? Perhaps he assumed an infinite, eternal universe because he wanted to discredit 'fiat lux', and thought he was safe from experimental refutation in such an assumption.

So Einstein in 1916 introduced a cosmological constant that produces an antigravity force which increases with distance. At small distances, say within a galaxy, the cosmological constant is completely trivial because its effects are so small. But at the average distance of separation between galaxies, Einstein made the cosmological constant take the right value so that its repulsion would exactly cancel out the gravitational attraction between galaxies.

He thought this would keep the infinite universe stable, without continued aggregation of galaxies over time. As is now known, he was experimentally refuted over the cosmological constant by Hubble's observations of redshift increasing with distance: a redshift of the entire spectrum of light uniformly, caused by recession, and not the result of scattering of light by dust (which would be a frequency-dependent redshift) or "tired light" nonsense.

However, the Hubble disproof is not the substantive one to me. Einstein was wrong because he built the cosmological constant extension on prejudice, not facts; he ignored the evidence of Olbers' paradox; and, in particular, his model of the universe is unstable. Obviously his cosmological constant fix suffered from the drawback that galaxies are not all spaced at the same distance apart, and his idea to produce stability in an infinite, eternal universe failed physically because it was not a stable solution. Once one galaxy is slightly closer to another than the average distance, the cosmological constant can't hold them apart, so they will eventually combine, and that will set off more aggregation.

The modern application of the cosmological constant (to prevent the long-range gravitational deceleration of the universe from occurring, since no deceleration is present in the data on redshifts of distant supernovae, etc.) is now suspect experimentally, because the "dark energy" appears to be "evolving" with spacetime. But it's not this experimental (or rather observational) failure of the mainstream Lambda-Cold Dark Matter model of cosmology which makes it pseudoscience. The problem is that the model is not based on science in the first place. There's no reason to assume that gravity should slow the galaxies at great distances. Instead,

"... the flat universe is just not decelerating, it isn’t really accelerating..."

The reason it isn't decelerating is that gravity, contraction, and inertia are ultimately down to some type of gauge boson exchange radiation causing forces, and when this exchange radiation passes between receding masses over vast distances, it gets redshifted, so its energy drops by Planck's law, E = hf. That's one simple reason why general relativity - which doesn't include quantum gravity with this effect of redshift of gauge bosons - falsely predicts a gravitational deceleration which wasn't seen.

The mainstream response to the anomaly, adding an epicycle (dark energy, a small positive CC), is just what you'd expect from mathematicians, who want to make the theory endlessly adjustable and non-falsifiable (like Ptolemy's system of adding more epicycles to overcome errors).

Many thanks for the discussion you gave of the issues with the equivalence principle. I can't grasp what the problem is with inertial and gravitational masses being equal, to within experimental error, to many decimal places. To me it's a good solid fact. There are a lot of issues with Lorentz invariance anyway, so its general status as a universal assumption is in doubt, although it certainly holds on large scales. For example, any explanation of fine-graining in the vacuum to explain the UV cutoff physically is going to get rid of Lorentz invariance at the scale of the grain size, because that will be an absolute size. At least this is the argument Smolin and others make for "doubly special relativity", whereby Lorentz invariance only emerges on large scales. Also, from the classical electromagnetism perspective of Lorentz's original theory, Lorentz invariance can arise physically due to contraction of a body in the direction of motion in a physically real field of force-causing radiation, or whatever is the causative agent in quantum gravity.

Many thanks again for the interesting argument. Best wishes, Nige

 
At 10:16 AM, Blogger nige said...

copy of a comment:

http://kea-monad.blogspot.com/2007/04/sparring-sparling.html

Thank you very much indeed for this news. On 3 space plus 3 time like dimensions, I'd like to mention D. R. Lunsford's unification of electrodynamics and gravitation, “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1 / January, 2004, Pages 161-177, as summarized here.

Lunsford discusses why, despite being peer reviewed and published, arXiv blacklisted it, in his comment here. Lunsford's full paper is available for download, however, here.

Lunsford succeeds in getting a unification which actually makes checkable predictions, unlike the Kaluza-Klein unification and other stuff: for instance it predicts that the cosmological constant is zero, just as observed!

The idea is to have three orthogonal time dimensions as well as the three usual spatial dimensions. This gets around difficulties in other unification schemes, and although the result is fairly mathematically abstract, it does dispense with the cosmological constant. This is helpful if you (1) require three orthogonal time dimensions as well as three orthogonal spatial dimensions (treating the dimensions of the expanding universe as time dimensions rather than space dimensions makes the Hubble parameter v/t instead of v/x, and thus it becomes an acceleration, which allows you to predict the strength of gravity from a simple mechanism: the outward force of the big bang is simply F = ma, where m is the mass of the universe, and Newton's 3rd law then tells you that there is an equal inward reaction force which - from the possibilities known - must be gravity-causing gauge boson radiation of some sort, so you can numerically predict gravity's strength as well as the radial gravitational contraction mechanism of general relativity), and (2) require no cosmological constant:

(1) The universe is expanding and time can be related to that global (universal) expansion, which is entirely different from local contractions in spacetime caused by motion and gravitation (mass-energy etc.). Hence it is reasonable, if trying to rebuild the foundations, to have two distinct but related sets of three dimensions; three expanding dimensions to describe the cosmos, and three contractable dimensions to describe matter and fields locally.

(2) All known real quantum field theories are Yang-Mills exchange radiation theories (i.e., QED, the weak theory, and QCD). It is expected that quantum gravity will similarly be an exchange radiation theory. Because distant galaxies, which are supposed to be slowing down due to gravity (according to Friedmann-Robertson-Walker solutions to GR), are very redshifted, you would expect that any exchange radiation will similarly be "redshifted". The GR solutions in which this slowing should occur are the "evidence" for a small positive constant and hence dark energy (which provides the outward acceleration to offset the presumed inward-directed gravitational acceleration).

Professor Philip Anderson argues against Professor Sean Carroll here that: “the flat universe is just not decelerating, it isn’t really accelerating ... there’s a bit of the “phlogiston fallacy” here, one thinks if one can name Dark Energy or the Inflaton one knows something about it. And yes, inflation predicts flatness, and I even conditionally accept inflation, but how does the crucial piece Dark Energy follow from inflation?–don’t kid me, you have no idea.”

My arguments in favour of lambda = 0 and 6 dimensions (3 time-like dimensions describing the global expansion, and 3 contractable local spacetime dimensions describing the coordinates of matter) are at places like this and other sites.

 
At 2:18 AM, Blogger nige said...

copy of a follow up comment:

http://kea-monad.blogspot.com/2007/04/sparring-sparling.html

"I checked Lunsford's article but he said nothing about the severe problems raised by the new kinematics in particle physics unless the new time dimensions are compactified to small enough radius." - Matti Pitkanen

Thanks for responding, Matti. But the time dimensions aren't extra spatial dimensions, and they don't require compactification. Lunsford does make it clear, at least in the comment on Woit's blog, that he mixes up time and space.

The time dimensions describe the expanding vacuum (big bang, Hubble recession of matter), while the 3 spatial dimensions describe contractable matter.

There's no overlap possible because the spatial dimensions of matter are contracted due to gravity, while the vacuum time dimensions expand.

It's a physical effect. Particles are bound against expansion by nuclear, electromagnetic and (for big masses like planets, stars, and galaxies) gravitational forces.

Such matter doesn't expand, and so it needs a coordinate system different from that of the vacuum in the expanding universe in between lumps of bound matter (galaxies, stars, etc.).

Gravitation in general relativity causes a contraction of spatial distance, the amount of radial contraction of mass M being approximately (1/3)MG/c^2. This is 1.5 mm for earth's radius.

The problem is that this local spatial contraction of matter is quite distinct from the global expansion of the universe as a whole. Attractive forces over short ranges, such as gravity, prevent matter from expanding and indeed cause contraction of spatial dimensions.

So you need one coordinate system to describe matter's coordinates. You need a separate coordinate system to describe the non-contractable vacuum which is expanding.

The best way to do this is to treat the distant universe which is receding as receding in time. Distance is an illusion for receding objects, because by the time the light gets to you, the object is further away. This is the effect of spacetime.

At one level, you can say that a receding star which appears to you to be at distance R and receding at velocity v, will be at distance = R + vt = R + v(R/c) = R(1 + v/c) by the time the light gets to you.

However, you then have an ambiguity in measuring the spatial distance to the star. You can say that it appears to be at distance R in spacetime where you are at your time of t = 10^17 seconds after the big bang (or whatever the age of the universe is) and the star is measured at a time of t - (R/c) seconds after the big bang (because you are looking back in time with increasing distance).

The problem here is that the distance you are giving relates to different values of time after the big bang: you are observing at time t after the big bang, while the thing you are observing at apparent distance R is actually at time t - (R/c) after the big bang.

Alternatively, you get a problem if you specify the distance of a receding star as being R(1 + v/c), which allows for the continued recession of the star or galaxy while its light is in transit to us. The problem here is that we can't directly observe how the value of v varies over the time interval during which the light is travelling to us. We only know, observationally, the value of the recession velocity v for the star at a time in the past. There is no guarantee that it has continued receding at the same speed while the light has been in transit to us.

So all possible attempts to describe the recession of matter in the big bang as a function of distance are subjective. This shows that to achieve an unequivocal, unambiguous statement about what the recession means quantitatively, we must always use time dimensions, not distance dimensions, to describe the observed recession phenomena. Hubble should have realized this and written his empirical recession law not as v/R = constant = H (units of reciprocal seconds), but as a recession velocity increasing in direct proportion to time past: v/T = v/(R/c) = vc/R = (RH)c/R = Hc.

This has units of acceleration, which leads directly to a prediction of gravitation, because that outward acceleration of receding matter means there's an outward force F = m.dv/dt ~ 10^43 N. Newton's 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, and cosmology. Non-receding masses obviously don't cause a reaction force, so they cause an asymmetry => gravity. This "shadowing" is totally different from LeSage's mechanism of gravity, which predicts nothing and involves all sorts of crackpot speculations. LeSage had the false idea that a gas pressure causes gravity; it's really exchange radiation in QFT. LeSage thought that there is shielding which stops the pressure. Actually, what really happens is that you get a reaction force from receding masses by known, empirically verified laws (Newton's 2nd and 3rd), but no inward reaction force from a non-receding mass like the planet Earth below you (it's not receding from you because you're gravitationally bound to it). Therefore, because local, non-receding masses don't send a gauge boson force your way, they act as a shield, for a simple physical reason based entirely on facts, such as the laws of motion, which are not speculation but are based on observations.

The 1.5 mm contraction of the Earth's radius according to general relativity causes the problem that Pi would change, because the circumference (perpendicular to the radial field lines) isn't contracted. Hence the usual explanation of curved spacetime invokes an extra dimension, with the 3 known spatial dimensions forming a curved brane on 4-dimensional spacetime. However, that's too simplistic, as explained, because there are 6 dimensions, with a 3:3 correspondence between the expanding time dimensions and the non-expanding, contractable dimensions describing matter. The entire curvature basis of general relativity corresponds to the mathematics for a physical contraction of spacetime!

The contraction is a physical effect. In 1949 some kind of crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4: ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 - v^2 /c^2)^1/2, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E(o)/(1 - v^2 / c^2)^1/2, where E(o) is the potential energy of the dislocation at rest.’

Because constant c = distance/time, a contraction of distance implies a time dilation. (This is the kind of simple argument FitzGerald-Lorentz used to get time dilation from length contraction due to motion in the spacetime fabric vacuum. However, the physical basis of the contraction is due to motion with respect to the exchange radiation in the vacuum which constitutes the gravitational field, so it is a radiation pressure effect, instead of being caused directly by the Dirac sea.)

You get the general relativity contraction because the velocity v in the expression (1 - v^2/c^2)^{1/2} is equivalent to the velocity gravity gives to a mass when it falls from an infinite distance to distance R from mass M: v = (2GM/R)^{1/2}. This is just the escape velocity formula. By energy conservation, there is a symmetry: the velocity a body gains from falling from an infinite distance to radius R from mass M is identical to the velocity needed to escape from mass M starting at radius R.
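A quick numerical sketch of the escape-velocity formula quoted above (Earth surface values assumed for illustration):

```python
# Escape velocity v = (2GM/R)^(1/2) at the Earth's surface.
G = 6.674e-11    # m^3 kg^-1 s^-2
M = 5.972e24     # Earth mass, kg
R = 6.371e6      # Earth radius, m

v = (2 * G * M / R) ** 0.5
print(f"escape velocity: {v / 1000:.1f} km/s")  # ~11.2 km/s
```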

Physically, every body which has gained gravitational potential energy has undergone contraction and time dilation, just as an accelerating body does. This is the equivalence principle of general relativity. SR doesn't specify how the time dilation varies as a function of acceleration; it merely gives the time flow rate once a given steady velocity v has been attained. Still, the result is useful.

The fact that quantum field theory can be used to solve problems in condensed matter physics shows that the vacuum structure has some analogies to matter. At very low temperatures, you get atoms characterized by outer electrons (fermions) pairing up to form integer-spin (boson-like) molecules, which obey Bose-Einstein statistics instead of Fermi-Dirac statistics. As temperatures rise, the increasing random thermal motion of atoms breaks this symmetry down, so there is a phase transition and weird effects like superconductivity disappear.

At higher temperatures, further phase transitions will occur, with pair production occurring in the vacuum at the IR cutoff energy, whereby the collision energy is equal to the rest mass energy of the vacuum particle pairs. Below that threshold, there's no pair production, because there can be no vacuum polarization in arbitrarily weak electric fields or else renormalization wouldn't work (the long range shielded charge and mass of any fermion would be zero, instead of the finite values observed).

The spacetime of general relativity is approximately classical because all tested predictions of general relativity relate to energies below the IR cutoff of QFT, where the vacuum doesn't have any pair production.

So the physical substance of the general relativity "spacetime fabric" isn't a chaotic fermion gas or "Dirac sea" of pair production. On the contrary, because there is no pair production in space where the steady electric field strength is below ~10^18 V/m, general relativity successfully describes a spacetime fabric or vacuum in which there are no annihilation-creation loops; there is merely exchange radiation, which doesn't undergo pair production.

This is why field theories are classical for most purposes at low energy. It's only at high energy, when you get within a femtometre of a fermion, that QFT loop effects like pair production begin to affect the field, due to vacuum polarization of the virtual fermions and chaotic effects.

 
At 2:49 AM, Blogger nige said...

Copy of another comment on the subject to a different post of Kea’s:

http://kea-monad.blogspot.com/2007/04/sparring-sparling-ii.html

I've read Sparling's paper http://www.arxiv.org/abs/gr-qc/0610068 and it misses the point; it ignores Lunsford's paper and is stringy in its references (probably the reason why arXiv hasn't censored it, unlike Lunsford's). However, it's a step in the right direction that at least some out-of-the-box ideas can get on to arXiv, at least if they pay homage to the mainstream stringers.

The funny thing will be that the mainstream will eventually rediscover the facts others have already published, and the mainstream will presumably try to claim that their ideas are new when they hype them up to get money for grants, books etc.

It is sad that arXiv and the orthodoxy in science generally censor radical new ideas, and delay or prohibit progress, for fear of the mainstream being offended at being not even wrong. At least string theorists are well qualified to get jobs as religious ministers (or perhaps even jobs as propaganda ministers in dictatorial banana republics) once they accept they are failures as physicists because they don't want any progress to new ideas. ;-)

 
At 6:39 PM, Blogger nige said...

copy of a comment:

http://kea-monad.blogspot.com/2007/04/sparring-sparling-iii.html

Kea, thanks for this, which is encouraging.

In thinking about general relativity in a simple way, a photon can orbit a black hole, but at what radius, and by what mechanism?

The simplest way is to say 3-d space is curved, and the photon is following a curved geodesic because of the curvature of spacetime.

The 3-d space is curved because it is a manifold or brane on higher-dimensional spacetime, where the time dimension(s) create the curvature.

Consider a globe of the earth as used in geography classes: if you try to draw Euclidean triangles on the surface of that globe, you get problems with angles being bigger than on a flat surface, because although the surface is two dimensional in the sense of being an area, it is curved by the third dimension.

You can't get any curvature in general relativity due to the 3 contractable spatial dimensions: hence the curvature is due to the extra dimension(s) of time.

This implies that the time dimension(s) are the source of the gravitational field, because the time dimension(s) produce all of the curvature of spacetime. Without those extra dimension(s) of time, space is flat, with no curved geodesics, and no gravity.

This should tell people that the mechanism for gravity is to be found in the role of the time dimension(s). With the cosmic expansion represented by recession of mass radially outward in three time dimensions t = r/c, you have a simple mechanism for gravity since you have outward velocity varying specifically with time not distance, which implies outward acceleration of all the mass in the universe, using Hubble's empirical law, dr/dt = v = Hr:

a = dv/dt
= dv/(dr/v)
= v*dv/dr
= v*d(Hr)/dr
= vH.

Thus the outward force of the universe is F = Ma = MvH. Newton's 3rd law tells you there's an equal inward force. That inward force predicts gravity, because it (being force-mediating gauge boson exchange radiation, i.e., the gravitational field) exerts pressure against masses from all directions except where shielded by local, non-receding masses. The shielding is simply caused by the fact that non-receding (local) masses don't cause a reaction force, so they cause an asymmetry: gravity. There are two different working sets of calculations for this mechanism which predict the same formula for G (which is accurate well within observational errors on the parameters) using different approaches (I'm improving the clarity of those calculations in a big rewrite).
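For scale (a minimal sketch; H = 70 km/s/Mpc is an assumed round value, and a somewhat smaller H gives the 6*10^{-10} ms^{-2} figure quoted in the 2008 banner later in these comments), a = vH runs from zero for nearby masses up to about cH for the most distant receding matter:

# Outward acceleration a = vH, bounded above by a_max = cH.
H = 70e3 / 3.086e22   # Hubble parameter, 70 km/s/Mpc in s^-1 (assumed value)
c = 2.998e8           # speed of light, m/s

a_max = c * H
print(H, a_max)       # about 2.3e-18 s^-1 and 6.8e-10 m/s^2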

Back to the light ray orbiting the black hole due to the curvature of spacetime: Kepler's law for planetary orbits is equivalent to saying the radius of orbit, r is equal to 2MG/v^2, where M is the mass of the central body and v is the velocity of the orbiting body.

This comes from: E = (1/2)mv^2 = mMG/r, as Dr Thomas R. Love has explained.

Light, however, due to its velocity v = c, is deflected by gravity twice as much as slow-moving objects (v << c).

Instead of (1/2)mv^2 = mMG/r giving an orbital radius of 2MG/v^2, for light you have

mc^2 = 2mMG/r

The factor of two on the right hand side comes from the fact that light (moving perpendicular to gravitational field lines) is deflected twice as much by gravity as predicted by Newton's law. The lack of the factor of (1/2) on the left hand side is due to the fact that the mass-energy equivalence for velocity c is E = mc^2, not the low-velocity kinetic energy E = (1/2)mv^2.
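This doubled deflection is what the classic eclipse measurements test. A minimal numeric sketch for light grazing the sun (standard solar values; the small-angle result 4GM/(b c^2), with impact parameter b equal to the solar radius, is the standard general relativity figure):

# Deflection of starlight grazing the sun: Newtonian 2GM/(b c^2) versus
# the doubled value 4GM/(b c^2) discussed above.
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg
R_sun = 6.963e8     # m, used as the impact parameter b

newton = 2 * G * M_sun / (R_sun * c**2)   # about 0.87 arcsec
einstein = 2 * newton                     # the factor of two
print(math.degrees(einstein) * 3600)      # about 1.75 arcseconds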

However,

E = (1/2)mv^2 = mMG/r

and

E = mc^2 = 2mMG/r

both lead to the same formula for the radius of orbit:

(1/2)v^2 = MG/r implies r = 2MG/v^2

mc^2 = 2mMG/r implies r = 2MG/c^2

So there are two relativistic factors involved, and each exactly offsets the other. This is clearly the reason why, paradoxically, Newtonian gravity gives the same formula for the event horizon radius of a black hole that general relativity does, and is not wrong by a factor of 2.
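A minimal sketch of the r = 2GM/c^2 result for a solar mass (standard constants):

# Event horizon radius r = 2GM/c^2: the same formula from the Newtonian
# escape-velocity argument (v = c) and from general relativity.
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg

r_s = 2 * G * M_sun / c**2
print(r_s)          # about 2.95e3 m: roughly 3 km per solar mass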

In general relativity, the metric tensor (for the Schwarzschild metric) is

g_{00} = [(1 - GM/(2rc^2))/(1 + GM/(2rc^2))]^2

g_{11} = g_{22} = g_{33} = -[1 + GM/(2rc^2)]^4

When each of these is expanded with Maclaurin's series to the first two terms,

g_{00} = 1 - [2GM/(rc^2)]

g_{11} = g_{22} = g_{33} = -1 - [2GM/(rc^2)]

In each case, the major effect of the gravitational field is that each metric tensor component is reduced by an amount equal to 2GM/(rc^2), which is the ratio of the event horizon radius of a black hole to the distance r.

Put another way,

g_{00} = 1 - (1/n)

g_{11} = g_{22} = g_{33} = -1 - (1/n)

where n is simply distance as measured in units of black hole event horizon radii (similar to the way that the Earth's Van Allen belts are plotted in units of earth radii).

For the case where n = 1, i.e., one event horizon radius, you get g_{00} = g_{11} = g_{22} = g_{33} = 0.

That's obviously wrong, because there is severe curvature there. The problem is that in using Maclaurin's series to the first two terms only, the result only applies to small curvatures, and the curvature at the event horizon radius of a black hole is strong.

So it's vital at black hole scales not to approximate the basic equations with a truncated Maclaurin's series, but to keep them intact:

g_{00} = [(1 - GM/(2rc^2))/(1 + GM/(2rc^2))]^2

g_{11} = g_{22} = g_{33} = -[1 + GM/(2rc^2)]^4

where GM/(2rc^2) = (2GM/c^2)/(4r) = 1/(4n), where as before n is the distance in units of event horizon radii. (Every mass constitutes a black hole at a small enough radius, so this is universally valid.) Hence:

g_{00} = [(4 - 1/n)/(4 + 1/n)]^2

g_{11} = g_{22} = g_{33} = -(1/256)*[4 + 1/n]^4.

So for one event horizon radius (n = 1),

g_{00} = (3/5)^2 = 9/25

g_{11} = g_{22} = g_{33} = -(1/256)*5^4 = -625/256.

The gravitational time dilation factor of 9/25 at the black hole event horizon radius is equivalent to a velocity of about 0.933c, since setting (1 - v^2/c^2)^{1/2} = 9/25 gives v = 0.933c.
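A minimal sketch comparing the exact isotropic g_{00} with its two-term Maclaurin truncation, as a function of n (distance in event horizon radii), shows numerically where the truncation fails:

# Exact g_00 = [(4 - 1/n)/(4 + 1/n)]^2 versus the two-term approximation
# g_00 ~ 1 - 1/n, with n = distance in units of 2GM/c^2.
for n in (1, 2, 10, 100):
    exact = ((4 - 1/n) / (4 + 1/n))**2
    approx = 1 - 1/n
    print(n, round(exact, 4), round(approx, 4))
# n = 1:   0.36 versus 0.0      (the truncation fails at the horizon)
# n = 100: both about 0.990     (close agreement in weak fields)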

It's pretty easy to derive the Schwarzschild metric for weak gravitational fields just by taking the Lorentz-FitzGerald contraction gamma factor and inserting v^2 = 2GM/r, on physical arguments, but then we have the problem that the Schwarzschild metric derived this way only applies to weak gravity fields, because it uses only the first two terms in the Maclaurin series for the time and space components of the metric tensor. It's an interesting problem to try to get a completely defensible, simple physical model for the maths of general relativity. Of course, there is no real physical need to work beyond the Schwarzschild metric since ... all the tests of general relativity apply to relatively weak gravitational fields within the domain of validity of the Schwarzschild metric. There's not much physics in worrying about things that can't be checked or tested.

 
At 4:46 AM, Blogger nige said...

copy of a fast comment

http://motls.blogspot.com/2007/04/resolving-big-bang.html

"Dear active moron, let me inform you that there is no "middle" of the Universe - exactly because of the cosmological principle. People have known this since the time of Copernicus and every sentence about something being "closer to the middle" is a sign of reduced intelligence." - Lubos Motl

No, the "cosmological principle" has no evidence:

1. first it was definitely the earth-centred universe,

2. then it was changed to the idea that we're definitely not in a special place, because earth orbits the sun,

3. finally, the false assumption that spacetime is curved over large distances was used to claim there's no boundary to spacetime because it curves back on itself.

This last idea was disproved in 1998 when the supernova data showed that there's no gravitational retardation, hence no curvature, on large scales. (The mainstream idea of accounting for this by adding dark energy as a repulsive force to try to cancel out gravity over cosmological distances doesn't really affect it - spacetime is still flat on the biggest distances, according to empirical evidence.)

The simplest thing is to accept general relativity works on small scales up to clusters of galaxies, but breaks down over cosmological distances. It's just an approximation. The stress-energy tensor has to falsely assume that matter and energy are continua by using an average density, ignoring discontinuities such as the quantization of mass-energy. That's why you get smoothly curved spacetime coming out of the Ricci tensor. You put in a false smooth source for the curvature, and naturally you get out a false smooth curved spacetime as a result. The lack of curvature on large scales means that spacetime globally doesn't cause looped geodesics, which only occur locally around planets, stars, galaxies. The universe does have a "middle":

The fact is, the universe is a quantized mass and energy expanding in 3 spatial dimensions with curvature of geodesics due to time. It's a fireball expanding in 3 spatial dimensions, so it has a "middle". The role of time doesn't make spacetime boundless, because there's no curvature on cosmological scales. That's because gravity-causing gauge bosons exchanged between relativistically receding masses at great distances in an expanding universe will lose energy; their redshift of frequency causes an energy loss by Planck's law E = hf. Hence, gravity is terminated by redshift of the force-causing gauge boson exchange radiation over such massive distances, and there's no global curvature. On cosmological distance scales, geodesics can't be looped because of this lack of curvature. Therefore, the role of time in causing curvature on cosmological scales is trivial, spatial dimensions don't suffer curvature, and the universe isn't boundless. There is a middle to the big bang, just as there's a middle to any expanding fireball. Whether spatial dimensions are created in the big bang or not is irrelevant to this.
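A minimal sketch of the redshift energy-loss point (E = hf, with f cut by the factor 1/(1 + z); the emission frequency here is purely illustrative, not from any source):

# Energy of a received quantum falls as 1/(1 + z), by Planck's law E = hf.
h = 6.626e-34        # Planck constant, J s
f_emitted = 1e15     # illustrative emission frequency, Hz (hypothetical)

for z in (0.1, 1.0, 5.0):
    f_received = f_emitted / (1 + z)
    print(z, (h * f_received) / (h * f_emitted))   # fraction of energy arriving
# at z = 5, only 1/6 of the emitted quantum energy arrives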

 
At 3:40 AM, Blogger nige said...

copy of a comment:

http://kea-monad.blogspot.com/2007/05/wittens-news.html

Mach's relationism isn't proved. He claimed that the earth's rotation is just relative to the stars. However, the Foucault pendulum proves that absolute rotation can be determined without reference to the stars. Mach could only counter that objection by his faith (without any evidence at all) that if the earth was not rotating, but the stars were instead rotating around it, then the Coriolis force would still exist and deflect a Foucault pendulum.
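For reference, the Foucault pendulum's precession rate is the Earth's rotation rate times the sine of the latitude, and it is measurable locally without ever looking at the sky. A minimal sketch (standard sidereal day; the latitudes are arbitrary examples):

# Foucault pendulum: precession rate = omega * sin(latitude), an
# absolute-rotation measurement made without reference to the stars.
import math

omega = 2 * math.pi / 86164          # Earth's sidereal rotation rate, rad/s
for lat_deg in (90, 49, 30):
    rate = omega * math.sin(math.radians(lat_deg))
    period_hours = 2 * math.pi / rate / 3600
    print(lat_deg, period_hours)      # 23.9 h at the pole, about 31.7 h at 49 deg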

To check Mach's principle, instead of the complex situation of the earth and the stars (where the dynamical relationship is by consensus unknown until there is a quantum gravity to explain inertia by the equivalence principle), consider the better understood situation of a proton with an electron orbiting it.

From classical mechanics, neglecting therefore the normal force-causing exchange (equilibrium) radiation which constitutes fields, there should be a net (non-equilibrium) emission of radiation by an accelerating charge.

If you consider a system with an electron and a proton nearby but not in orbit, they will be attracted by Coulomb's law. (The result of the normal force-causing exchange radiation.)

Now, consider the electron in orbit. Because it is accelerating with acceleration a = (v^2)/r, it is continuously emitting radiation in a direction perpendicular to its orbit; i.e., generally the direction of the radiation is the radial line connecting the electron with the proton.

Because the electron describes a curved orbit, the angular distribution of its radiation is asymmetric with more being emitted generally towards the nucleus than in the opposite direction.

The recoil of the electron from firing off radiation is therefore in a direction opposite to the centripetal Coulomb attraction force. This is how the "centrifugal" effect works.

What is so neat is that no net loss of kinetic energy occurs to the electron. T.H. Boyer in 1975 (Physical Review D, v11, p790) suggested that the ground state orbit is a balance between radiation emitted due to acceleration and radiation absorbed from the vacuum's zero-point radiation field, caused by all the other accelerating charges which are also radiating in the universe surrounding any particular atom.

H.E. Puthoff in 1987 (Physical Review D v35, p3266, "Ground state of hydrogen as a zero-point-fluctuation-determined state") assumed that the zero-point electromagnetic radiation which causes the Casimir force had the energy spectrum

Rho(Omega)d{Omega} = {h bar}[{Omega}^3]/[2(Pi^2)*(c^3)] d{Omega}

which causes an electron in a circular orbit to absorb radiation from the zero-point field with the power

P = (e^2)*{h bar}{Omega^3}/(6*Pi*Epsilon*mc^3)

Where e is charge, Omega is angular frequency, and Epsilon is permittivity. Since the power radiated by an electron with acceleration a = r*{Omega^2} is:

P = (e^2)*(a^2)/(6*Pi*Epsilon*c^3),

equating the power the electron receives from the zero-point field to the power it radiates due to its orbit gives

m*{Omega}*(r^2) = h bar,

which is the ground state of hydrogen. Puthoff writes:

"... the ground state of the hydrogen atom can be precisely defined as resulting from a dynamic equilibrium between radiation emitted due to acceleration of the electron in its ground-state orbit and radiation absorbed from zero-point fluctuations of the background vacuum electromagnetic field, thereby resolving the issue of radiative collapse of the Bohr atom."

This model dispenses with Mach's principle. An electron orbiting a proton is not equivalent to the proton rotating while the electron remains stationary; one case results in acceleration of the electron and radiation emission, while the other doesn't.

The same arguments will apply to the case of the earth rotating, or the stars orbiting a stationary earth, although some kind of quantum gravity/inertia theory is there required for the details.

One thing I disagree with Puthoff over is the nature of the zero-point field. Nobody seems to be aware that the IR cutoff, and the Schwinger requirement for a minimum electric field strength of 10^18 V/m, prevent the entire vacuum from being subject to creation/annihilation loop operators. Quantum field theory only applies to the fields above the IR cutoff, or closer than 10^{-15} metre to a charge.

Beyond that distance, there's no pair production in the vacuum whatsoever, so all you have is radiation. In general, the "zero-point field" is the gauge boson exchange radiation field which causes forces. The Casimir force works because long wavelengths of the zero-point field radiation are excluded from the space between two metal plates, which therefore shield one another and get pushed together like two suction cups being pushed together by air pressure when normal air pressure is reduced in the small gap between them.

Puthoff has an interesting paper, "Source of the vacuum electromagnetic zero-point energy" in Physical Review D, v40, p4857 (1989) [note that an error in this paper is corrected by Puthoff in an update published in Physical Review D v44, p3385 (1991)]:

"... the zero-poing energy spectrum (field distribution) drives particle motion ... the particle motion in turn generates the zero-point energy spectrum, in the form of a self-regenerating cosmological feedback cycle. The result is the appropriate frequency-cubed spectral distribution of the correct order of magnitude, this indicating a dynamic-generation process for the zero-poing energy fields."

What interests me is that Puthoff's calculations in that paper tackle the same problems which I had to deal with over the last decade in regard to a gravity mechanism. Puthoff writes that since the radiation intensity from any charge will fall as 1/r^2, and since charges in shells of thickness dr will have an area of 4*Pi*r^2, the increasing number of charges at bigger distances offsets the inverse square law of radiation, so you get a version of Olbers' paradox appearing.

In addition, Puthoff notes that:

"... in an expanding universe radiation arriving from a particular shell located now at a distance was emitted at an earlier time, from a more compacted shell."

This effect tends to make Olbers' paradox even more severe, because the earlier universe we see at great distances should be more and more compressed with distance, and ever brighter.

Redshift of radiation emitted from receding matter at such great distances solves these problems.
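A minimal toy sketch of that resolution (the redshift law z(r) = r/(R_H - r) used here is a purely illustrative form that diverges at the Hubble radius, not a claimed cosmological relation):

# Without redshift, every shell contributes the same flux (charge numbers
# ~ r^2 offset the 1/r^2 intensity), so the total grows without limit as
# shells are added: the Olbers-type divergence. Weighting each shell by
# 1/(1 + z) keeps the total finite.
R_H = 1.0                 # Hubble radius, arbitrary units
n = 100000                # number of thin shells out to R_H
dr = R_H / n

plain = 0.0
redshifted = 0.0
for i in range(1, n):     # stop just short of the horizon
    r = i * dr
    plain += dr                               # equal flux per shell
    redshifted += dr / (1 + r / (R_H - r))    # redshift-weighted flux
print(plain, redshifted)   # about 1.0 versus 0.5: the weighted sum stays bounded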

However, Puthoff assumes that some already known metric of general relativity is correct, which is clearly false, because the redshift of gauge bosons in an expanding universe will weaken the gravitational coupling between receding (distant) masses, a fact that all widely accepted general relativity metrics totally ignore!

Sorry for the length of this comment, and feel free to delete this comment (I'll put a copy on my blog).

 
At 6:37 AM, Blogger nige said...

Just an update about the Feynman lecture Google Videos player insert at the top of this page.

It seems that it has been removed from Google Videos.

The seven Feynman lectures on The Character of Physical Law were given as part of the Messenger Lecture Series at Cornell University in November 1964, and were recorded on 16 mm film by the BBC for transmission on BBC2 TV in 1965.

The link I had was to Lecture 2 - The Relation of Mathematics and Physics, http://video.google.ca/url?docid=-7720569585055724185&esrc=sr7&ev=v&q=messenger%2Blecture&srcurl=http%3A%2F%2Fvideo.google.ca%2Fvideoplay%3Fdocid%3D-7720569585055724185&vidurl=%2Fvideoplay%3Fdocid%3D-7720569585055724185%26q%3Dmessenger%2Blecture%26total%3D50%26start%3D0%26num%3D10%26so%3D0%26type%3Dsearch%26plindex%3D6&usg=AL29H20PDgeJo_xlX-Bm3Ur4-N8DlbYo0Q

Apparently, the videos were removed by request of the Feynman family because they are allegedly for sale (however, they are not actually for sale anyplace known in this actual universe):

"Just a note: I was contacted by the attorney representing the heirs to the estate of Dr. Feynman and asked to remove the videos and destroy my copies under threat of legal action. I just want to reiterate here that I complied a long time ago, which is why the videos went inactive shortly after being uploaded.

"Apparently, however, they are for sale. I encourage all of you to go out and purchase the lectures. I do not, at this time, know *where* they are for sale."

...

"I have absolutely no knowledge of anywhere you can purchase videos though. The attorney did make it clear that the videos were indeed for sale retail, he just failed to mention where."

- http://www.physicsforums.com/archive/index.php/t-160463.html "Feynman's Messenger Lectures, online at last."

I failed to download the lectures while they were available (you need special software to do that), so I don't have any copies. However, you can obtain the basic gist of it from the published version of the lectures (which are severely edited, and do NOT follow Feynman's oratory in the film version): "The Character of Physical Law". See http://www.cmu.edu/mcs/about-mcs/news/040215-feynman.html

It is classic that google video now contains several 1980s documentaries about Feynman but not the key lectures, all because of a false claim by a lying lawyer that the lectures are on public sale! It just proves how useless the internet is in reality: one step forward, two steps back.

 
At 2:58 AM, Blogger nige said...

I've just found that I did quote the vital two sections from the Feynman lecture FILM (not the book version!) on my new blog:

http://nige.wordpress.com/2007/07/04/metrics-and-gravitation/

... Feynman discusses the Newton proof in his November 1964 Cornell lecture on ‘The Law of Gravitation, an Example of Physical Law’, which was filmed for a BBC2 transmission in 1965 and can be viewed on google video here (55 minutes). Feynman in his second filmed November 1964 lecture, ‘The Relation of Mathematics to Physics’, also on google video (55 minutes), stated:

‘People are often unsatisfied without a mechanism, and I would like to describe one theory which has been invented of the type you might want, that this is a result of large numbers, and that’s why it’s mathematical. Suppose in the world everywhere, there are flying through us at very high speed a lot of particles … we and the sun are practically transparent to them, but not quite transparent, so some hit. … the number coming [from the sun’s direction] towards the earth is less than the number coming from the other sides, because they meet an obstacle, the sun. It is easy to see, after some mental effort, that the farther the sun is away, the less in proportion of the particles are being taken out of the possible directions in which particles can come. So there is therefore an impulse towards the sun on the earth that is inversely as square of the distance, and is the result of large numbers of very simple operations, just hits one after the other. And therefore, the strangeness of the mathematical operation will be very much reduced the fundamental operation is very much simpler; this machine does the calculation, the particles bounce. The only problem is, it doesn’t work. …. If the earth is moving it is running into the particles …. so there is a sideways force on the sun would slow the earth up in the orbit and it would not have lasted for the four billions of years it has been going around the sun. So that’s the end of that theory. …

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

The error Feynman makes here is that quantum field theory tells us that there are particles of exchange radiation mediating forces normally, without slowing down the planets: this exchange radiation causes the FitzGerald-Lorentz contraction and inertial resistance to accelerations (gravity has the same mechanism as inertial resistance, by Einstein’s equivalence principle in general relativity). So the particles do have an effect, but only as a once-off resistance due to the compressive length change, not continuous drag. Continuous drag requires a net power drain of energy to the surrounding medium, which can’t occur with gauge boson exchange radiation unless acceleration is involved; i.e., uniform motion doesn’t involve acceleration of charges in such a way that there is a continuous loss of energy, so uniform motion doesn’t involve continuous drag in the sea of gauge boson exchange radiation which mediates forces! The net energy loss or gain during acceleration occurs due to the acceleration of charges, and in the case of masses (gravitational charges), this effect is experienced by us all the time as inertia and momentum; the resistance to acceleration and to deceleration. The physical manifestation of these energy changes occurs in the FitzGerald-Lorentz transformation: contractions of the matter in the length parallel to the direction of motion, accompanied by related relativistic effects on local time measurements and upon the momentum and thus inertial mass of the matter in motion. This effect is due to the contraction of the earth in the direction of its motion. Feynman misses this entirely.

The contraction of the earth’s radius by this mechanism of exchange radiation (gravitons) bouncing off the particles gives rise to the empirically confirmed general relativity law due to conservation of mass-energy for a contracted volume of spacetime, as proved in an earlier post. So it is two for the price of one: the mechanism predicts gravity but also forces you to accept that the Earth’s radius shrinks, which forces you to accept general relativity, as well.

Additionally, it predicts a lot of empirically confirmed facts about particle masses and cosmology, which are being better confirmed by experiments and observations as more experiments and observations are done.

As pointed out in a previous post giving solid checkable predictions for the strength of quantum gravity and observable cosmological quantities, etc., due to the equivalence of space and time, there are 6 effective dimensions: three expanding time-like dimensions and three contractable material dimensions. Whereas the universe as a whole is continuously expanding in size and age, gravitation contracts matter by a small amount locally, for example the Earth’s radius is contracted by the amount 1.5 mm as Feynman emphasized in his famous Lectures on Physics. This physical contraction, due to exchange radiation pressure in the vacuum, is not only a contraction of matter as an effect due to gravity (gravitational mass), but it is also a contraction of moving matter (i.e., inertial mass) in the direction of motion (the Lorentz-FitzGerald contraction).
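Taking the contraction as GM/(3c^2), the form which reproduces the 1.5 mm figure Feynman quotes for the Earth, the arithmetic is one line (standard constants):

# Gravitational contraction of Earth's radius, taken as GM/(3c^2).
G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
M = 5.972e24     # Earth's mass, kg

print(G * M / (3 * c**2))   # about 1.5e-3 m, i.e. the 1.5 mm quoted above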

This contraction necessitates the correction which Einstein and Hilbert discovered in November 1915 to be required for the conservation of mass-energy in the tensor form of the field equation. Hence, the contraction of matter from the physical mechanism of gravity automatically forces the incorporation of the vital correction of subtracting half the product of the metric and the trace of the Ricci tensor from the Ricci tensor of curvature. This correction factor is the difference between Newton’s law of gravity merely expressed mathematically as 4-dimensional spacetime curvature with tensors and the full Einstein-Hilbert field equation; as explained on an earlier post, Newton’s law of gravitation when merely expressed in terms of 4-dimensional spacetime curvature gives the wrong deflection of starlight and so on. It is absolutely essential to general relativity to have the correction factor for conservation of mass-energy which Newton’s law (however expressed in mathematics) ignores. This correction factor doubles the amount of gravitational field curvature experienced by a particle going at light velocity, as compared to the amount of curvature that a low-velocity particle experiences.

The amazing thing about the gravitational mechanism is that it yields the full, complete form of general relativity in addition to making checkable predictions about quantum gravity effects and the strength of gravity (the effective gravitational coupling constant, G). It has made falsifiable predictions about cosmology which have been spectacularly confirmed since first published in October 1996. The first major confirmation came in 1998, and this was the lack of long-range gravitational deceleration in the universe. It also resolves the flatness and horizon problems, and predicts observable particle masses and other force strengths, plus unifies gravity with the Standard Model.

But perhaps the most amazing thing concerns our understanding of spacetime: the 3 dimensions describing contractable matter are often asymmetric, but the 3 dimensions describing the expanding spacetime universe around us look very symmetrical, i.e. isotropic. This is why the age of the universe as indicated by the Hubble parameter looks the same in all directions: if the expansion rate were different in different directions (i.e., if the expansion of the universe was not isotropic) then the age of the universe would appear different in different directions. This is not so. The expansion does appear isotropic, because those time-like dimensions are all expanding at a similar rate, regardless of the direction in which we look. So the effective number of dimensions is 4, not 6. The three extra time-like dimensions are observed to be identical (the Hubble constant is isotropic), so they can all be most conveniently represented by one ‘effective’ time dimension.

Only one example of a very minor asymmetry in the graviton pressure from different directions, resulting from tiny asymmetries in the expansion rate and/or effective density of the universe in different directions, has been discovered and is called the Pioneer Anomaly, an otherwise unaccounted-for tiny acceleration in the general direction toward the sun (although the exact direction of the force cannot be precisely determined from the data) of (8.74 ± 1.33) × 10^{-10} m/s^2 for long-range space probes, Pioneer-10 and Pioneer-11. However, these accelerations are very small, and to a very good approximation, the three time-like dimensions - corresponding to the age of the universe calculated from the Hubble expansion rates in three orthogonal spatial dimensions - are very similar. ...


continued at http://nige.wordpress.com/2007/07/04/metrics-and-gravitation/

 
At 2:43 AM, Blogger nige said...

As of 10 February 2008, I've changed the banner on this blog to read:

U(1) x SU(2) x SU(3) particle physics based on facts, giving quantum gravity predictions

Galaxy recession velocity: v = dR/dt = HR => Acceleration: a = dv/dt = d[HR]/dt = H*dR/dt = Hv = H*HR = RH^2. 0 < a < 6*10^{-10} ms^{-2}. Outward force: F = ma. Newton's 3rd law predicts equal inward reaction force (via gravitons), but since non-receding nearby masses don't cause reaction, there's an asymmetry, predicting gravity and particle physics. In 1996 it predicted the lack of deceleration at large redshifts.


The reason for this change is explained in post http://nige.wordpress.com/2008/01/30/book/

As of 10 February 2008, I’ve changed the banner of this blog from SU(2) x SU(3) to “U(1) x SU(2) x SU(3) quantum field theory: Is electromagnetism mediated by charged, massless SU(2) gauge bosons? Is the weak hypercharge interaction mediated by the neutral massless SU(2) weak gauge boson? Is gravity mediated by the spin-1 gauge boson of U(1)? This blog provides the evidence and predictions for this introduction of gravity into the Standard Model of particle physics.” This is driven by the fact, explained in the comments to this post, that:

... SU(2) x SU(3), ... [it] seems too difficult to make SU(2) account for weak hypercharge, weak isospin charge, electric charge and gravity.

I thought it would work out by changing the Higgs field so that some massless versions of the 3 weak gauge bosons exist at low energy and cause electromagnetism, weak hypercharge and gravity.

However, since the physical model I’m working on uses the two electrically charged but massless SU(2) gauge bosons for electromagnetism, that leaves only the electrically neutral massless SU(2) gauge boson to perform both the role of weak hypercharge and gravity. That doesn’t work out, because the gravitational charges (masses) are evidently going to be different to the weak hypercharge which is only a factor of two different between an electron and a neutrino. Clearly, an electron is immensely more massive than a neutrino. So the SU(2) x SU(3) model must be wrong.

The only possibility left seems to be similar to the Standard Model U(1) x SU(2) x SU(3), but with differences from the Standard Model. U(1) would model gravitational charge (mass) and spin-1 (push) gravitons. The massless neutral SU(2) gauge boson in the model I’m working on would then mediate weak hypercharge only, instead of mediating gravitation as well.


The whole point about my approach is that I’m working from fact-based predictive mechanisms for fundamental forces, and in this world view the symmetry group is just a mathematical model which is found to describe the symmetries suggested by the mechanisms. ... Ryder’s book Quantum Field Theory (2nd ed., 1996), chapters 1-3 and 8-9, contains the best (physically understandable) introduction to the basic mathematics including Lagrangians, path integrals, Yang-Mills theory and the Standard Model. From my perspective, the symmetry groups are the end product of the physics; they summarise the symmetries of the interactions. The end product can change when the understanding of the details producing it changes. I’ve revised the latest draft book manuscript PDF file accordingly.

http://nige.wordpress.com/2008/01/30/book/

 
