Quantum gravity physics based on facts, giving checkable predictions

Tuesday, November 08, 2005



Revised http://nigelcook0.tripod.com/ and http://members.lycos.co.uk/nigelbryancook/


Trying to ruthlessly exterminate the proved facts…

On the above pages, if you scroll down, you find a list of more than 10 major predictions from the factual proof of gravity and other forces. The physical mechanism does give rise to a lot of mathematics, but not the same type of useless mathematics that ‘string theory’ generates. Because ‘string theory’ is falsely worshipped as a religion, the productive facts are naturally ridiculed. The accurate predictions include the strengths of gravity, electroweak and strong nuclear forces, as well as solutions to the problems of cosmology and the correct ratios of some fundamental particles.

Re-reading Feynman’s old three-volume series of undergraduate lectures, I was motivated today to make some more predictions from the mechanism, and to try to test them (I have not added these to my home page yet). Feynman correctly calculates the huge ratio of the gravitational attraction to the electromagnetic repulsion between two electrons as 1/(4.17 x 10^42). He then says:

‘It is very difficult to find an equation for which such a fantastic number is a natural root. Other possibilities have been thought of; one is to relate it to the age of the universe.’
He then says that the ratio of the time taken by light to cross the universe to the time taken by light to cross a proton is about the same huge factor. After this, he chucks out the idea because gravity would vary with time, and the sun’s radiating power varies as the sixth power of the gravity constant G.
The error here is that there is no mechanism for Feynman’s idea about the times for light to cross things. Where you get a mechanism is for the statistical addition of electric charge (virtual photons cause electric force) exchanged between similar charges distributed around the universe. This summation does not work in straight lines, as equal numbers of positive and negative charges will be found along any straight line. So only a mathematical drunkard’s walk, where the net result is the charge of one particle times the square root of the number of particles in the universe, is applicable.

This means that the electric force is equal to gravity times the square root of the number of particles. Since the number of particles is effectively constant, the electric force varies with the gravity force!
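
As a quick check, here is a minimal Python sketch (my own, using standard approximate values for the constants, which are not given in the text) that reproduces Feynman's ratio and then shows the particle count implied by the drunkard's-walk argument above:

import math

# Standard constants (SI units); values are approximate reference figures.
e = 1.602e-19        # electron charge, C
m_e = 9.109e-31      # electron mass, kg
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
eps0 = 8.854e-12     # vacuum permittivity, F/m
k = 1.0 / (4.0 * math.pi * eps0)   # Coulomb constant

# Ratio of Coulomb repulsion to gravitational attraction for two electrons.
# The separation distance cancels, since both forces fall off as 1/r^2.
ratio = (k * e**2) / (G * m_e**2)
print(f"electric/gravity ratio = {ratio:.3g}")   # ~4.17e42, Feynman's figure

# Under the drunkard's-walk summation claimed above, electric force =
# gravity x sqrt(N), so the implied number of charges in the universe is:
N = ratio**2
print(f"implied particle count N = {N:.3g}")     # ~1.7e85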

This disproves Feynman: suppose you double the gravity constant. The sun is then more compressed, but does this mean it releases 2^6 = 64 times more power? No! It releases the same. What happens is that the electric force between protons – which is called the Coulomb barrier – increases in the same way as the gravity compression. So the rise in the force of attraction (gravity) is offset by the rise in the Coulomb repulsion (electric force), keeping the proton fusion rate stable!

However, Feynman also points out another effect, that the variation in gravity will also alter the size of the Earth’s orbit around the sun, so the Earth will get a bit hotter due to the distance effect if G rises, although he admits: ‘such arguments as the one we have just given are not very convincing, and the subject is not completely closed.’

Now the smoothness of the cosmic background radiation is explained by the lower value of G in the past (see my home page). Gravity constant G is directly proportional to the age of the universe, t.

Let’s see how far we get playing this game (I’m not really interested in it, but it may help to test the theory even more rigorously).

The gravity force constant G and thus t are proportional to the electric force, so that if charges are constant, the electric permittivity varies as ‘1/t’, while the magnetic permeability varies directly with t.

By Weber and Maxwell, the speed of light is c = 1/(square root of the product of the permittivity and the permeability). Hence, c is proportional to 1/[square root of {(1/t).(t)}] = constant. Thus, the speed of light does NOT vary in any way with the age of the universe.
The strong nuclear force strength, basically F = hc/(2.Pi.d^2) at short distances, varies like the gravity and electroweak forces, which implies that h is proportional to G and thus also to t.
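
To make the scaling explicit, here is a toy Python sketch (my own illustration; the present-day values are only reference points, and the linear scalings are the assumptions stated above) confirming that c stays fixed while G and h track t:

import math

def constants_at(t, t0=1.0, eps0=8.854e-12, mu0=4e-7 * math.pi,
                 G0=6.674e-11, h0=6.626e-34):
    """Toy scaling: permittivity ~ 1/t, permeability ~ t, G ~ t, h ~ t.
    Present-day values are used purely as reference points."""
    eps = eps0 * (t0 / t)
    mu = mu0 * (t / t0)
    c = 1.0 / math.sqrt(eps * mu)   # should not depend on t
    G = G0 * (t / t0)
    h = h0 * (t / t0)
    return c, G, h

for t in (0.5, 1.0, 2.0):           # t in units of the present age
    c, G, h = constants_at(t)
    print(f"t={t}: c={c:.4g} m/s, G={G:.3g}, h={h:.3g}")
# c prints ~2.998e8 m/s for every t, while G and h scale linearly with t.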

Many ‘tests’ for variations in G assume that h is a constant. Since this is not correct, and G is proportional to h, the interpretations of such ‘tests’ are total nonsense, much as the Michelson-Morley experiment does not disprove the existence of the sea of gauge bosons that cause fundamental forces!

At some stage this model will need to be applied rigorously to very short times after the big bang by computer modelling. For such times, the force ratios vary not merely because the particles of matter have sufficient energy to smash through the shielding veils of polarised virtual particles which surround the cores of particles, but also because the number of fundamental particles was increasing significantly at early times! Thus, soon after the big bang, the gravity and electromagnetic forces would have been similar. The strong nuclear force, because it is identical in strength to the unshielded electroweak force, would also have been the same strength because the energy of the particles would break right through the polarised shields. Hence, this is a unified force theory that really works! Nature is beautifully simple after all.

6 Comments:

At 9:43 AM, Blogger nige said...

Copy of a post to Dr Motl's blog:
http://motls.blogspot.com/2005/11/modern-science-haters.html

Nigel said...
Dear Lumos,

Your attack on Dr Lawrence Krauss is inexcusable. Just because he is popular and a kind of new Einstein figure in the media eye, he does not need this sort of attack.

He is the one defending science. At best you are defending arcane untested pure mathematics, unlike real tested maths of my sort!

String theory is not applied maths. Until there is some evidence for it, it remains pure maths. Pure maths gives us results like "i to the i equals e to the minus half of Pi".

This is really clever for people who don't live in the real world, but is regarded as technical trivia by physicists like me.
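
(For what it's worth, the identity quoted above is easy to check numerically; this snippet is just my own verification, nothing more.)

import math

# i^i = exp(i * ln i) = exp(i * i*pi/2) = exp(-pi/2), a real number
print((1j) ** 1j)               # (0.20787957635076193+0j)
print(math.exp(-math.pi / 2))   # 0.20787957635076193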

In electronics, I use imaginary/complex numbers in circuit analysis; they were also needed to pass exams in quantum mechanics, finding solutions to the Schroedinger equation.

However, in the real world these procedures are tricks. Electronics engineers don't go around saying that because i is useful in solving some phase equation, it proves God is a pure mathematician. The same thing arises for infinity in renormalised QED: it has to be put in to make the solution work. Feynman said it was a "dippy process".

The reality is that the electron core is going to couple with an infinite number of virtual charges in the surrounding universe, unless you limit it artificially. The problem probably arises from the wave equation nature of the Dirac formulation of QED. A wave equation is a continuous model for a lot of particulate interactions. There is probably another mathematical way of doing this which avoids renormalisation. Thus, to calculate the increase in the magnetic moment of the electron due to the first Feynman coupling diagram, you probably have to consider the electron core pairing up (by the Pauli exclusion principle) with a single virtual particle (a virtual electron, which is outside the polarised, inner shell of virtual positrons) in the surrounding vacuum. Because the core charge is shielded by a factor of 137 due to the polarisation of the vacuum, the virtual electron that the real electron core pairs up with will be affected by an electric force 137 times weaker than the real core. So the magnetic moment increases by the factor 1 + 1/(2xPix137), giving the 1.00116 factor (the 2xPi arises from some kind of orbit of the paired virtual spinning charge).
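
The numerical factor quoted at the end can be checked in a couple of lines; note that 1/(2 x Pi x 137) is numerically the same thing as Schwinger's first-order QED correction alpha/(2 x Pi) with alpha roughly 1/137 (the snippet is my own check, not part of the original comment):

import math

alpha = 1.0 / 137.0                      # the 137 shielding factor quoted above
factor = 1.0 + alpha / (2.0 * math.pi)   # i.e. 1 + 1/(2*pi*137)
print(f"1 + 1/(2*pi*137) = {factor:.6f}")   # 1.001162
# The measured electron magnetic moment (g/2) is about 1.0011597, so the
# quoted 1.00116 figure agrees to the precision stated in the comment.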

All I'm pointing out is that there may be other, less obscure, mathematical calculations for some of these things. I admit that the existing abstruse methods work better at present, and are vital for giving results which act as clues for heuristic explanations in terms of virtual particles.

But remember that Feynman diagrams themselves were a step away from purely mathematical procedures, towards a more pictorial understanding.

You don't have to worship abstruse maths that work, to defend them. Archimedes in "The Method" presented some kind of calculus to help him calculate things, but then used the results to build a more classical geometrical proof.

In applied maths, unlike pure maths, there is always more than one way to prove something or derive something. It is just crazy to try to shut down science after one guy finds one way of getting the right result. It is not an insult to that person for others to carry on trying to get the same right answers by alternative, more "straightforward" maths.

Otherwise, you may shut down the only real way forward. As for 't Hooft, you seem happy to praise his infinities in his renormalisation, but you won't say the same about his idea that the fifth dimension is the spacetime fabric!

Best wishes,
Nigel Cook

 
At 10:22 AM, Blogger nige said...

Lumos deleted the above comment from his blog within 2 minutes! My response is:


Nigel said...
Dear Lumos,

I apologise if the last comment was so upsetting it had to be deleted.

You will appreciate that I still respect your attempts to improve string theory from its current mess, despite the fact string theory is in my view unlikely to make the transition from pure to applied mathematics anytime soon.

Best wishes,
Nigel


 
At 8:52 AM, Blogger nige said...

Copy of a comment to another post of Dr Motl:

http://motls.blogspot.com/2005/11/higgs-at-105-gev.html


Nigel said...
Lumos,

The Higgs boson is going to be crucial for determining where physics goes. As Quantoken pointed out on this blog a while back, Einstein's equivalence principle between inertial and gravitational forces suggests that the Higgs mechanism controls both inertial mass and gravitational mass.

If so, the Higgs field provides a mechanism for not only inertial mass, but gravity too.

I've calculated that if you treat the big bang as an explosion, the outward force F=ma = 7x10^43 Newtons. This calculation uses the mass of the universe for m and the Hubble parameter to determine acceleration a, which is (recession speed variation)/(time past variation with distance) = (c-0)/(t-0) = c/t = 6 x 10^-10 m/s^2.
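
A back-of-envelope version of that arithmetic, with the age of the universe and the implied mass treated as my own illustrative assumptions rather than figures from the comment:

c = 3.0e8                  # speed of light, m/s
t = 4.7e17                 # assumed age of the universe, ~15 billion years in seconds
a = c / t                  # the outward acceleration used above
print(f"a = c/t = {a:.2g} m/s^2")            # ~6e-10 m/s^2, as quoted

F_stated = 7e43            # outward force claimed above, in newtons
m_implied = F_stated / a   # mass that would give that force via F = m*a
print(f"implied mass = {m_implied:.2g} kg")  # ~1e53 kg
# (A later comment notes the 7e43 N figure includes an e^3 density
# correction, which is not modelled in this rough check.)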

This implies an equal inward pressure from the Higgs field, which flood-fills the 3-d volume. The volume left void behind quarks in stars moving away from us is filled in by the "perfect fluid" spacetime fabric (Higgs field/gauge bosons) coming inward.

Another way to calculate this, which gives the same result, is Newton's 3rd law: outward forces have an inward reaction force.

In ordinary chemical explosions there is such an inward force, which is used in "implosion" type nuclear weapons to compress plutonium in the core to supercriticality (reducing the ratio of surface area to mass and thus increasing the fission probability by reducing neutron escape, by bringing nuclei closer together and of course by reducing the distances and thus neutron travel times which speeds up the chain reaction).

In the big bang, the inward reaction is the spacetime fabric pressure which causes gravity.

I don't think this mechanism will be taken seriously until the Higgs boson is detected. People are too prejudiced against any spacetime fabric being real at present. (I blame the string community mostly for this, as they suppressed my papers, so I'm glad you are taking the Higgs field seriously!)

Notice that since the outward force of the big bang is 7x10^43 N (this includes the e^3 correction for higher densities with increasing distances which I proved), the inward Higgs field pressure (force/area) is massive at close ranges. I'd imagine it is the huge pressure of the Higgs field pushing in on any place in space that gives rise to the validity of the time-energy version of the uncertainty principle, as particles forever randomly collide at high energy, and the products recombine to form energy. The vacuum of space must be a seething sea of virtual particles, and "foam" is a good term for it.

I hope the Higgs boson is detected soon, and its properties are experimentally determined. The whole idea of spin-2 gravitons to explain the universal attractive force seems to be crazy. If they are really there, then gravitons will be (by the equivalence principle) involved in inertial accelerations as well as gravity. Surely the Higgs field is all we need for inertia and gravity!

Best wishes,
Nigel

 
At 10:51 AM, Blogger nige said...

Catt runs away!

Catt has removed my Jan 03 Electronics World article "Air Traffic Control: How Many More Air Disasters?" from his site www.ivorcatt.com, leaving an empty space.

What happened was this. Catt did engineering (B.Eng and MA) at Trinity College Cambridge, and has no interest in the big bang or gravity, or even mathematical physics.

He however re-discovered the Heaviside TEM wave when working out empirical cross-talk estimation methods in the 1960s at Motorola. He wrote a great paper at an early age, IEEE Trans. on Electronic Computers, EC-16, December 1967: "Cross-talk (Noise) in Digital Systems".

Because of some political arguments over the paper, his boss got fired and Catt eventually returned to England, teaching remedial English and writing popular articles for New Scientist and books on politics for commercial publishers, like "The Catt Concept" (egotism beginning???).

Eventually he linked up with Walton and Davidson and made rapid progress, working out a TEM wave mechanism for the charging of the capacitor. For some reason, Catt disliked electrons and then tried to (crazily) use his results plus Occam's razor to get rid of electrons. Walton, a physics professor, let Catt have his own way.

What is actually true is that the electron is a standing TEM wave, so the TEM wave is the primitive from which charge arises. But Catt made a mess of it, and then stuck by his guns for decades (he still does so now, despite sometimes nodding to the concept of electrons composed of gravitationally trapped TEM wave energy in a loop).

"When in a hole, keep digging." This is sadly what happened. Then Catt pointed out suppression. There is no doubt that suppression can be a problem. Catt had previously been suppressed for more mainstream work until managing to push things into print, but he was up against a bigger fish than usual when trying to obliterate electrons and Maxwell's displacement current law.

Obviously string theory from 1985 was a major barrier to Catt, as it remains the major barrier to others today.

At http://www.ivorcatt.org/ you can see Catt's materials. A lot of the valid physics and maths was done by Walton and Davidson, with Catt providing the politics.

I fear that Catt's control is so bad he will censor anything leading towards a unified field theory so far as he is able, which is not far. He helped me a bit to formulate the non-mathematical bits of my earlier papers, but his contributions all turned out wrong (continuous spacetime fabric is Catt's ether, whereas the fact is that it is particulate in the sense of the Dirac sea of virtual particles, and the Higgs field).

He has no interest in science as I understand it, just in political type attacks on modern physics using Occam's razor to get rid of the whole thing, which is wrong.

I'm sad that Catt is locked into political arguments and can't have any interest in science. Personally, I view "science" statements of what people "believe in" as drivel. What you have to talk about is what you "have evidence for" or "can prove" mathematically and test experimentally! Not what you think is ugly or beautiful. This is why Catt is really in bed with the string theorists politically, although he has no time for maths.

In a way, I suppose, I should be pleased that I'm not associated with Catt anymore on www.ivorcatt.org.

At least Kevin won't be able to hold Catt against me! However, it is sad because that article is on one thing Catt really DOES KNOW SOMETHING ABOUT: computer chip design. Here's the full text of the article:

AIR TRAFFIC CONTROL: How many more air disasters?

Nigel Cook, Electronics World, January 2003, pp12-15 (cover story)


In July last year, problems with the existing system were highlighted by the tragic death of 71 people, including 50 school children, due to the confusion when Swiss air traffic control noticed too late that a Russian passenger jet and a Boeing 757 were on a collision path. The processing of extensive radar and other aircraft input information for European air space is a very big challenge, requiring a reliable system to warn air traffic controllers of impending disaster. So why has Ivor Catt's computer solution for Air Traffic Control been ignored by the authorities for 13 years? Nigel Cook reports.


In Electronics World, March 1989, a contributor explained the longterm future of digital electronics. This is a system in which computers are networked adjacently, like places in the real world, but unlike the internet. An adjacent processor network is the ingenious solution proposed for the problem of Air Traffic Control: a grid network of computer processors, each automatically backed-up, and each only responsible for the air space of a fixed area. Figure 1 shows the new processing system, the Kernel computer, as proposed for safe, automated air traffic control.


Figure 1: How an adjacent network of processors would manage the ATC of European air space

This system is capable of reliably tracking a vast air space and could automatically alert human operators whenever the slant distance between any two adjacent aircraft decreased past the safety factor. Alternatively, if the air traffic controllers were busy or asleep, it could also send an automatic warning message directly to the pilot of the aircraft that needs to change course.

The existing suggestions are currently based on software solutions, which are unsatisfactory. For such a life-and-death application, there is a need for reliability through redundancy, and a single processor system does not fit the bill. System freezes must be eliminated in principle. Tracking aircraft individually by reliably using radar and other inputs requires massive processing, and a safe international system must withstand the rigours of continuous use for long periods, without any software crashes or system overheat failure.

The only practicable way to do this is through using Ivor Catt's adjacent processor network.

Originally suggested for a range of problems, including accurate prediction of global warming and long-range weather, the scheme proposed by Ivor was patented as the Kernel Machine, an array of 1,000 x 1,000 = 1,000,000 processors, each with its own memory and program, made using wafer-scale integration with 1000 silicon wafers in a 32 by 32 wafer array. The data transfer rate between adjacent processors is 100 Mb/s.
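
A quick arithmetic sketch of the array geometry just described (the per-wafer breakdown is my own illustration, since the article only quotes the totals):

processors = 1000 * 1000          # 1,000,000 processors in the full array
wafers = 32 * 32                  # 32 x 32 wafer array (~1000 wafers, as quoted)
per_wafer = processors / wafers   # processors carried by each wafer
link_rate = 100e6                 # 100 Mb/s between adjacent processors

print(f"{processors:,} processors over {wafers} wafers "
      f"= roughly {per_wafer:.0f} processors per wafer")
print(f"time to move 1 Mbit across one link: {1e6 / link_rate:.2f} s")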

Ivor Catt's original computer development is the Catt Spiral (Wireless World, July 1981), in which Sir Clive Sinclair's offshoot computer company, Anamartic, invested £16 million. Although revolutionary, it came to market and was highly praised by electronics journals. The technology is proven by the successful introduction in 1989 of a solid-state memory called the Wafer Stack, based on a Catt patent. This received the `Product of the Year Award' from the U.S. journal Electronic Products, in January 1990.

It is a wafer scale integration technology, which self-creates a workable computer from a single silicon wafer by automatically testing each chip on the wafer, and linking up a spiral of working chips while by-passing defective ones. This system is as big an advance as the leap from transistor to compact IC (which was invented in 1959), because the whole wafer is used without having to be divided up into individual chips for separate testing and packaging.

By having the whole thing on a single silicon wafer, the time and energy in separating, testing, and packaging the chips was saved, as well as the need to mount them separately on circuit boards. By the time Catt had completed his invention for wafer scale integration, he was already working on the more advanced project, the Kernel Machine.

In the Sunday Times (12 March 1989, p. D14), journalist Jane Bird interviewed Ivor Catt and described the exciting possibilities: "in air traffic control, each processor in the array could correspond to a square mile of airspace... weather forecasters could see at the press of a button whether rain from the west would hit Lord's before the end of cricket play."

The Kernel machine versus P.C. thinking

The primary problem facing the Kernel Machine is the predominance of single-processor computer solutions and the natural inclination of programmers to force software fixes on to inappropriate hardware.

Ivor Catt has no sympathy with ideas to use his Kernel Machine for chemistry or biology research. However, this sort of technology is vital for simulation of all real-life systems, since they are all distributed in space and time. Chemical molecule simulation for medical research would become a practical alternative to brewing up compounds in the lab, if such computers became available. It would help to find better treatments for cancer.

Modern research on the brain shows that the neurons are interconnected locally. Quite often the false notion is spread that the neocortex of the brain is a type of `internet'. In reality, the billions of neurons are each only connected to about 11,000 others, locally. The network does not connect each cell to every other cell. This allows it to represent the real world by a digital analogue of reality, permitting interpretation of visual and other sensory information. Each processor of the Kernel Machine is responsible for digitally representing or simulating the events in a designated area of real space. Certainly, the Kernel machine would be ideally suited to properly interpret streamed video from a camera, permitting computers to `see' properly. This would have obvious benefits for security cameras, satellite spy and weather video, etc.

Catt filed patents for the Kernel Machine in Europe (0 366 702 B 1, granted 19 Jan 1994) and the U.S. (5 055 774, granted 8 Oct 1991), a total patenting cost around the world of about £10,000. His earlier invention, the Catt Spiral, was patented in 1972 but only came to market 17 years later after £16 million of investment by Anamartic Plc.

Patented design for the new kernel computer.

Figure 2 shows how the Kernel patent differs from the Spiral in two important ways. The Spiral design as utilised in the Anamartic memory wafer, once it has been manufactured like an ordinary silicon wafer, is set up as a whole wafer computer by sending test data into a chip on the edge of the wafer.

If that chip works, it sends test data into another adjacent chip, which in turn repeats the process: sidestepping faulty chips and automatically linking up the good chips into a series network. Each chip that works is therefore incorporated into a `Spiral' of working chips, while each defective chip is bypassed. The result saves the labour of dividing up the wafer, packaging the individual chips separately, and soldering them separately on to circuit boards. It saves space, time, and money.

The problem with the Catt Spiral is that by creating a spiral or series connected memory, it causes time delays in sending and receiving data from chips near the end of the spiral. Data can also be bottlenecked in the spiral. The invention was innovative, and won awards; yet by the time Sir Clive Sinclair was ready to begin production for a massive wafer scale plug-in memory for computers, Ivor Catt was already arguing that it was superseded by his later invention, the Kernel machine. Born in 1935, Cambridge educated Catt is extremely progressive. His immediate replacement of earlier patents of his own when new developments arrive seems logical to him, although it can disturb those who invested in the previous design which has yet to make a profit.

The Kernel Machine links its chips adjacently into a two-dimensional array and is so named from the `kernels' in the corners of each chip, which allow networking through the chip even if it has errors and is not used itself. Kernel computers are designed to have enough networking to avoid all of the problems of the Spiral wafer. Kernel's built-in `self repair' works by ignoring individual chips when they burn out, the concept of reliability through redundancy. There are sufficient spare chips available on each wafer to take over from failures.

Catt's intended scientific and commercial computing company calls for a three-stage investment of £0.5m, £8m, and £12m, respectively. The project outline states: "The scientific market and the commercial market need to be aware that there are two fundamentally different methods of computing: large, single processing engines performing single tasks one at a time, and parallel systems where there is an array of smaller engines that perform a series of tasks independently of each other until they are brought together by some management mechanism to give a result. The scientific market's major application areas are: scientific and engineering computing; signal and image processing and artificial intelligence.

" In the commercial world there are a number of application areas where the application of very fast numerical processing is extremely useful. As the limits of physical performance are now in sight for semiconductors, the next level of performance will be achieved by applying an array of processors to a particular task. To achieve even better price/performance ratios than is presently available, the architecture needs to be flexible enough to use any one of a number of computer processor types.

"Having proven the technology and its ability to be applied to specific operational areas, the company will set to licence the technology within these application areas. The company will also develop intermediate and peripheral products on its route to the major goal; that of a parallel processing super-computer using patented technology.

"In common with all companies first entering a high technology market, this company will make a loss during the initial stages. The various stages of product development will be interposed with the marketing of that development. It is anticipated that this will reduce the negative cash flow impact inherent in an R&D environment. Industry norms have been applied to the cost of sales, marketing and administration expenditures, and to the capital costs."

In order to develop the software for the Kernel Computer, current computer technology will be used, networked in the Kernel adjacent processor array. Software, for all of the challenges facing the Kernel Computer, can be tested and debugged on this inexpensive mockup. The next phase will be the production of the first large scale super-computers using the Kernel system of wafer-scale integration.

Catt comments: "The first Wafer Scale Integration (WSI) product, a solid state disc called Wafer Stack, came to market in 1989, based on `Catt Spiral'. We can now advance to a WSI array processor, the Kernel machine, with one million processors giving one million million operations per second. The Kernel machine, when built from an array of 100 wafers, will retail for £500,000. The external control system maps out the good and bad chips, and devises a strategy for building a volatile, perfect square two-dimensional array of 1,000,000 processing elements (PE's) out of a larger, imperfect array. Reliability is achieved through redundancy; having spare PEs available.

"The project costs £20 million spread over four years. A proper figure for profit in this market would be 20% of retail price. The $0.2Bn turnover needed to justify the Kernel project is dwarfed by the $50Bn world computer market." The Kernel array computer is the machine of the future, replacing the single processor von Neumann machine of the present day.

published in ELECTRONICS WORLD January 2003 pp12, 13, 14

 
At 4:04 AM, Blogger nige said...

Kevin,

The Kernel Machine is designed to deal with a particular class of problems, where the 2-d array of processors is a physical representation of an area of ground.

Say you have an array of 1000 by 1000 processors, a total of 1 million; each processor on the wafer can then be used to deal with 1 square kilometre (say) of an area of 1,000,000 square km.

For weather simulation or ATC, the program you build into each processor is identical.

Each processor is dealing with its immediate neighbours! This is the physical reality. You aren't dealing with planes jumping around without passing intermediate points. Similarly, for weather prediction, you're dealing with differential equations for air pressure, density, temperature etc., where the conditions at any one location depend on the immediate surroundings only. This is the way the maths works.

In these cases, where the array of processors on the wafer is physically simulating reality, you don't get the bottleneck issues.

OK, you need to put in initial data and then new data, but then the Kernel Machine is not sending out or taking in information while it is simulating! Each processor has enough memory built in to store the information for its own 1 sq km of territory, which is limited.

It is entirely different to the single processor system! The programming is simple, since the same program is run by each of the 1,000,000 "chips" on the wafer.

You just write one basic program, and give each processor its coordinates on the array. It is simple from start to finish, just a different concept from von Neumann single processor computers.
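
To illustrate the programming model being described, one identical program per processor, each reading only its immediate neighbours, here is a minimal single-machine Python mock-up; the grid size and the local averaging rule are my own choices, made only to show the communication pattern:

# Toy mock-up of the adjacent-processor model: every cell runs the same
# update rule and reads only its four nearest neighbours, just as each
# Kernel processor would handle one square of airspace or ground.
N = 8                                        # illustrative 8 x 8 grid
grid = [[0.0] * N for _ in range(N)]
grid[N // 2][N // 2] = 100.0                 # one "hot" cell as the initial data

def step(g):
    new = [row[:] for row in g]
    for i in range(N):
        for j in range(N):
            neighbours = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    neighbours.append(g[ni][nj])
            # identical local rule in every cell: relax towards the neighbour average
            new[i][j] = 0.5 * g[i][j] + 0.5 * sum(neighbours) / len(neighbours)
    return new

for _ in range(10):
    grid = step(grid)
print(f"centre value after 10 steps: {grid[N // 2][N // 2]:.2f}")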

If ATC had been better controlled by computer, 9/11 problems would have been spotted sooner and actions could have been taken. The better the technology, the less chance of confusion and loss of life. (It is sad Ivor Catt has removed that article from his site, I hope his paranoia does not get in the way of trying to make the world a better place.)

What you are doing is thinking about the vast information flows in and out of a number-crunching single processor and saying that will cause problems for the array of the Kernel Machine. This is true if you want to use a few processors in the middle of the array to run all the accounts of Barclays Bank in real time. But it is not a problem for weather prediction or ATC, where the amount of input and out to any given processor is not excessive.

If needed, the I/O for each processor in the array could be done sequentially in time, just as a CRT TV's beam scans every pixel in turn 25 times a second. You're not forced to use it like a single processor computer.

A bigger problem than those you suggest is the heat produced from the wafer, which I don't mention in the EW article. It would have to be submersed in a non-conductive cooling fluid, but that technology is actually well established.

Nigel

 
At 5:03 AM, Blogger nige said...

Kevin,

You say: "A little bit of arithmetic will show that at 100Mbps it will take 10 seconds to load all 1000 1M bit memories in a row of the KM."

This is nonsense. Why would you need 1 Mb to describe what is going on in 1 sq km?

1 Mb may be the total memory assigned to each chip on the wafer, but you would be dealing with a lot less.

In any case, this 10-second start-up time applies to switching the Kernel Machine on, which is equivalent to a very fast boot indeed!
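
To lay the disputed arithmetic out explicitly (the per-processor data sizes below are my own illustrative assumptions, not figures from the article or from Kevin):

link_rate = 100e6                 # bits per second on one 100 Mb/s link
row_length = 1000                 # processors loaded through one edge link

def load_time(bits_per_processor):
    """Time to stream data for a whole row through a single edge link."""
    return row_length * bits_per_processor / link_rate

print(f"1 Mbit each : {load_time(1e6):.1f} s")    # Kevin's 10 s figure
print(f"10 kbit each: {load_time(1e4):.2f} s")    # a far smaller per-square payload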

Now you are not dealing with changing all the information in every memory every second.

You just make up nonsense. If you worked on the Superconducting Supercollider, I can well understand why it was cancelled.

You then give some gibberish about the programs not being virtually identical for each processor in the array. You just don't have any grasp of the simplicity involved!

Nigel

 
