A Short History of c

In the bustling markets of Egypt 3,000 years ago, when the earliest pyramids were well into their second millennium, eggs were being sold pretty much as they are today, i.e. in dozens and half-dozens. The reason for this, then as now, is that fresh market produce can generally only be sold in units, of which eggs are a prime example, or in quantities once the units become too small to be sold separately. Grains of salt, for example, but also sprouts. Exactly where the dividing line falls may vary within each market, and across the ages, but we can all agree that the market for half a raw egg will always be small, as it is for half an apple, a third of a lemon, and so on.

The reason for this is that the Earth’s atmosphere, fond though we are, is highly corrosive. It can destroy and corrupt anything from bridges to blueberries, once they are exposed to it. As a result, everything is contained and protected within a skin or shell of some sort, bringing the concept of peiron to the market stall. One early and natural effect was a special interest in numbers that could most easily and effectively be divided by integers. One of these was the number 6, which is both a perfect number, i.e. one which is the sum of its proper divisors (1, 2, 3), and a superior highly composite number, a category of which the first six are 2, 6, 12, 60, 120 and 360. The divisors of 60 are 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, and 30, while those of 6 times 60 (360) are 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 18, 20, 24, 30, 36, 40, 45, 60, 72, 90, 120 and 180.

The Egyptians adopted all this from the Babylonians, who can only be described as six-mad. They used a sexagesimal (base-60) number system that they had adapted from the Sumerians and Akkadians before them. This system must have had significant advantages, presumably derived from the wide range of divisors available, because we know they chose it deliberately. However, it’s incredibly complicated, and seems to have defeated most Babylonians; they needed a huge number of look-up tables just to perform relatively simple calculations. We know it was a deliberate choice, though, because, like the Romans, the Babylonians counted in tens, but unlike the Romans, they used a positional notation recognisably like ours, albeit only up to fifty-nine within each place, with units up to nine in the right-hand column and tens in the column to the left. They also left a space to denote zero. The Babylonians arranged their columns from right to left as well, except increasing by a factor of 60 each time. For us, however, the lasting legacy of the Babylonians, apart from the 360° circle, is the 60-minute hour and the 60-second minute.

The hours themselves, on the other hand, have enjoyed a varied existence, taking minutes and seconds with them. In our Egyptian marketplace, for instance, for at least a millennium before and up to almost the present day, day and night were divided, naturally enough, into twelve hours each, regardless of how long they actually were. You can see these ‘unequal’ hours marked out on the face of the 15th-century Orloj in the Old Town Square in Prague, proving that they had the technology at that point, but truthfully, nobody cared. The clock was just a showpiece to impress other countries.

For most people, their interest in the time of day was limited to “Can I see well enough to get started?” through to “Will I be able to see well enough to get finished?” It didn’t bother anybody that the hours were different lengths. In fact, in northern Europe it was regarded as a sign of the Almighty’s beneficence that he had arranged for the days to be longer when most of the work (ploughing, sowing, reaping) had to be done, and shorter when all you had to do was repair and maintenance.

These twin ideas of “How long have I got?” and “How long does it take?” dominated all timekeeping for millennia. Clepsydrae, the water clocks that Galileo was still using in the 17th century, would have been perfectly familiar to our Egyptian barrow boys. The Roman army used them to time their watches, law courts used them to keep testimony to the point, students were examined according to them. Sometimes the water would flow in, sometimes out, or there might be a flame on a notched candle. In the East, incense sticks were used, which were more consistent, and safer, being flameless. Sometimes the incense would change at fixed intervals, so one would be aware of time’s passing, but not distracted by it.

None of these was measured against any standard unit of time. Even the first mechanical clocks simply chimed. They had no faces or hands. They were used just to call the monks to prayer during the night, starting with Vespers in the evening, which marked the end of the monks’ working day, followed by Compline at bedtime, Midnight prayers at midnight, as you might have supposed, and then Matins to take you through to dawn and the start of another day.

There’s an old proverb: “A man with one clock knows what time it is; a man with two clocks is never sure.” These early mechanical clocks, whether weight-driven or spring-driven, were notoriously inaccurate, losing up to fifteen minutes a day, but for most people they were the only clock, so no one could tell. Besides, they could always be reset at noon, and none the wiser. However, astronomers and other, generally military, professionals went on developing more and more accurate timekeeping mechanisms.

Outside a very few professions, the task/process dichotomy that has so dominated and transformed modern organisations and society itself simply did not exist. Each individual’s place within the hierarchy of the community was largely determined at birth, and his or her role in any work to be done was defined by custom and apprenticeship. The timing of any specific step in any sequence of steps depended solely on completion of the preceding step. Even the military, who could be expected to make battlefield decisions on the hoof, so to speak, required only a limited number of calls and signals to initiate and coordinate major troop movements without the need for pre-planning. In short, everybody in all fields knew what to do, and when it needed to be done, without reference to any independent timing mechanism. For those rare instances when an external reference was needed, there was always the Sun itself, or a sundial if greater precision were necessary.

Astronomers had a different motivation, not least John Flamsteed, the first Astronomer Royal. He was a young man, only 29, when he was appointed in 1675, and the Royal Greenwich Observatory had not even been begun. It was just half a century since Kepler had modelled the solar system as we know it, and Newton had not yet confirmed Kepler’s calculations. He was already a boy of nine when Christiaan Huygens, also an astronomer, invented and built the first accurate pendulum clock. Both Huygens and Flamsteed were interested in what is called the ‘equation of time’, the question of whether or not sundials and mechanical clocks would keep the same time. Flamsteed was convinced that the rotation of the Earth on its axis was constant, and therefore that the passage of the Sun across the sky must be regular, thus the shadow cast upon a sundial would keep correct time. Even in 1677, he was still confident enough to write:

“… our clocks kept so good a correspondence with the Heavens that I doubt it not but they would prove the revolutions of the Earth to be isochronical… “

Sadly, he was wrong. The Sun does not go smoothly around the Earth. At the beginning of the year it appears to go slower, so sundials lag behind clocks, but they catch up, and indeed overtake them towards the end of the year. It’s an effect of two things: the fact that the Earth’s axis is tilted slightly as it orbits the Sun, and that its orbit is very slightly eccentric. Even without clocks, the Babylonians were aware of this, and Ptolemy also devoted much of Book III of the Almagest to it. He needed to correct for it when getting a reading for noon. The apparent time of the Sun is called, conveniently, the Apparent Solar Time, and this has to be converted to the Mean Solar Time, i.e. solar time averaged over the whole year. Four times a year, in April, June, September and December, ‘apparent’ and ‘mean’ solar time coincide, but they can be out by as much as a quarter of an hour at other times.
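
If you want to play with this yourself, there is a standard back-of-the-envelope approximation that folds both effects into a single formula. The sketch below is my own addition, not anything from the original sources; the coefficients are just a curve fit, good to a minute or so:

```python
from math import sin, cos, pi

def equation_of_time_minutes(day_of_year: int) -> float:
    """Approximate 'apparent minus mean' solar time, in minutes.

    The 2B term comes from the axial tilt, the B terms from the
    orbital eccentricity. Positive means the sundial runs ahead
    of the clock.
    """
    b = 2 * pi * (day_of_year - 81) / 364
    return 9.87 * sin(2 * b) - 7.53 * cos(b) - 1.5 * sin(b)

# Mid-February: the sundial lags the clock by about 14 minutes.
print(round(equation_of_time_minutes(45), 1))   # ~ -14.6
# Early November: it runs about 16 minutes ahead.
print(round(equation_of_time_minutes(307), 1))  # ~ 16.4
```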

This wasn’t the only problem. Not only did you have to adjust for the Sun being in the wrong place, you also had to worry about where the sundial was, and take that into account. The Sun goes around the Earth at a ground speed of a little over a thousand miles, i.e. one time zone, per hour at the equator, which equates to around six hundred or so miles per hour in the UK, lying, as it does, between 50° and 55° N. At its widest point, the UK is about three hundred miles across, so getting on for half an hour in terms of possible noon readings. Noon in Bristol, for example, at 2.5° west of London, is ten minutes behind Greenwich as the Sun flies.
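
The arithmetic, for the record, is nothing more than this (the function is mine, purely for illustration):

```python
# The Earth turns 360 degrees in 24 hours, so each degree of
# longitude is worth 4 minutes of local solar time, at any latitude.
MINUTES_PER_DEGREE = 24 * 60 / 360  # = 4.0

def noon_offset_minutes(degrees_west_of_greenwich: float) -> float:
    """Minutes by which local solar noon trails Greenwich noon."""
    return degrees_west_of_greenwich * MINUTES_PER_DEGREE

print(noon_offset_minutes(2.5))  # Bristol: 10.0 minutes behind
```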

This is where the “man with two clocks” problem raises its ugly, or beautiful, head, depending on what you are trying to do. If you want to know where Bristol is in relation to London, or more practically, if you want to know where the ship that recently left Bristol is in relation to London, then the discrepancy between their respective noon readings can tell you. As I’m sure you know, the Yorkshireman John Harrison based his ultimately successful attempt at winning the Longitude Prize on just this principle. However, the solution to the longitude problem didn’t just depend on having an accurate clock on the ship; there had to be an agreed time and an accurate clock in an agreed place on the shore for use as a reference. This took longer than you might think to organise. The Longitude Act was passed in 1714. It was nearly sixty years later, in 1772, that Harrison finally produced H5, his last marine chronometer, making it possible to take an accurate copy of the time on shore with you on the boat. Now for the clock on shore.

«§»

Henry VIII always had a soft spot for Greenwich. He was born there, for a start, and it was where he used to go for a little rest and recreation, keeping a park well-stocked with deer, and his mistresses in the old Castle. It was handy for the Palace. He founded the Navy Royal there, and moved the Royal Dockyards from Portsmouth to Deptford, next door to Greenwich, and Woolwich further towards the mouth of the Thames. Sir Christopher Wren demolished the old Castle to build the first Royal Observatory, known for a long time as Flamsteed House, after John. Sir Jonas Moore, the driving force behind the establishment of an observatory, originally wanted it to be built on the site of Chelsea College, but Wren had other plans for that. The King, Charles II, commissioned it specifically “to find out the so much desired longitude of places for the perfecting of the art of navigation”, which at the time was thought most likely to be achieved by an astronomer, not a carpenter from Yorkshire.

In 1675, the year that Flamsteed laid the foundation stone for the observatory, Christiaan Huygens produced the first watch regulated by a spiral balance spring, the breakthrough that made accurate portable timekeepers practical. These two approaches – astronomical, eventually focusing on a method involving lunar distances invented and developed by Tobias Mayer of Göttingen; and clockwork, culminating in John Harrison’s marine chronometer – were both used into the second half of the 19th century to determine the time in Greenwich. Accuracy was crucial. A clock being ten minutes fast or slow was of little significance in the life of a landlubber, but once at sea, each minute in error was equivalent to ten miles on the map, and the slightest inaccuracy could put you on the rocks in seconds. With lives at stake, it’s no wonder that chronometry should have advanced during this period. Fortunately, the need arose in England, at that time the world’s leading horologist nation, which had recently welcomed its largest ever flood of immigrants, Huguenots fleeing persecution in France, predominantly skilled craftsmen in many trades, among them watchmaking.

The original observatory was equipped with two extraordinary clocks by Thomas Tompion, installed in the Octagon Room, again a gift of Sir Jonas Moore. They needed the room’s 20’ high ceilings, as Tompion had adopted Robert Hooke’s idea of a very long pendulum with a very small arc, thus minimising Galileo’s error. Each pendulum was over 13’ long, and mounted above the clock face. They were accurate to within seven seconds per day, remarkable for the time, but nothing compared to Harrison’s H5 just a hundred years later, tested by King George III himself to be within one-third of a second per day. Since then, driven, and largely supplied, by advances in science, the accuracy of clocks has continued to improve, until now they are controlled according to the vibration of caesium atoms. However, it is only recently that such precision has come within reach of the ordinary man or woman. In Harrison’s time, one of his marine chronometers could account for over a quarter of the cost of a ship. A far cheaper, although less precise, option was to use Professor Mayer’s lunar distance method.

Unfortunately, the calculations involved were highly complex and time-consuming. Fortunately, however, Nevil Maskelyne, the fifth Astronomer Royal, had the brilliant idea to do them all ahead of time and publish them annually as tables, along with other useful information, in The Nautical Almanac and Astronomical Ephemeris, starting in 1766 with the tables for 1767. This had two, possibly unintended, consequences. One was that the time in Greenwich suddenly became more important than the time anywhere else. The Almanac tables calculated Apparent Solar Time at Greenwich, on the grounds that the sailors would be doing the same at sea, but the clocks themselves showed Mean Solar Time, or Greenwich Mean Time, for short. Now, however useful it may be to have two clocks for determining longitude, that’s no way to run a railroad. Not knowing precisely when the train was due was one thing; not knowing precisely when the express was coming at you down the same track was quite another. In 1840, the Great Western Railway company was the first to standardise on GMT for its timetables, followed in 1847 by the Railway Clearing House, the coordinating organisation for the industry in Great Britain, making a National Railway Timetable possible. Clocks at the stations would show both what was known as Railway Time and the local time. Given that missing your train because you had Railway Time wrong could ruin your whole day, while being ten local minutes late wouldn’t even affect lunch, people who had clocks or watches started to set them to GMT, especially in business. In 1880, it became the official time across the country, and eventually the international reference time.

The other consequence of Maskelyne’s idea was even greater. Now, not only was the chronometer method of determining longitude more precise, it could determine the longitude from anywhere. Until the 18th century, sailors just wanted to know how far they were from home, so their own port or capital city could be zero on their map. The French were particularly fond of the Paris Meridian, for instance, but there were others, usually at some westernmost point of something, so that the cartographers could get complete countries on the page. But Maskelyne’s tables only worked from Greenwich. On the other hand, they were cheap, unlike chronometers. Pretty soon the world was full of sailors and traders who knew where places were in absolute terms, in degrees of longitude from Greenwich. By the 1880s, 72% of global shipping was using Greenwich as the meridian. In 1851, Sir George Airy established the Airy Transit Circle in Greenwich through which passed his Prime Meridian. In 1884, at the request of US President Chester A. Arthur, the aptly yclept International Meridian Conference was assembled in Washington D.C. to agree on an International Prime Meridian. Forty-one delegates from twenty-five countries met there, and various options were put to them. In the end, they settled on Greenwich. The French abstained. For almost thirty years. They finally got on board in 1911.

While we’re on the subject of the Paris Meridian, although going back a bit, it was also involved in another international standard.

«§»

In 1789, with the stirring example of the American colonists firmly before them, the French embarked on a revolution of their own. They, too, were imbued with the same Enlightenment ideals as Jefferson, Adams, Paine, Franklin, et al., one major difference being that, while George III was an ocean away in England, the French monarch was in Versailles, just outside Paris where the mob lived. The War of Independence had been a relatively civilised affair. The Americans were revolting on principle, and when John Adams was sent to England as the American Minister to London a mere two years after the war ended, King George was able to tell him sincerely that, although he was the last to consent to the loss of America, once done, he had always meant to be “the first to meet the friendship of the United States as an independent power.”[i] The French crown would be given no such opportunity.

Everything about the ancien régime had to go. Not just the monarchy, but the entire apparatus of government. The Church could stay, but not the saints. The untidy Gregorian calendar was replaced by twelve months of thirty days apiece, the days named not after saints, but the fruits, vegetables, tools and animals of the farm. The months themselves were named after the weather, Brumaire, Frimaire (foggy, frosty), or the farming year, Germinal, Floréal, Messidor (sowing, flowering, harvesting). As to years, the birth of Jesus was replaced by the birth of the Republic, with 1792 becoming the year 1.

Early on, they decided to be rid of the wildly varying system of weights and measures that was the French marketplace, and the Académie des sciences appointed a commission to do it. They could, of course, have simply decided to standardise the existing system, but that was based on royal body parts – the forearm, the foot, the thumb – and clearly would have to go. Still, given their propensity for the bucolic, you would expect them to retain some version of the Babylonian market arrangement, and indeed, Pierre-Simon Laplace, who was on the commission, did suggest adopting a duodecimal scheme, but it was rejected in favour of decimals. This was, after all, the age of reason, and the decimal system did have one extraordinary advantage: for so long as people have been counting, they’ve been counting in tens. All you have to do is add a decimal point, and you can go on for as long as you want in either direction.

It was Simon Stevin again who, in 1585, taking some time off from one-upping Galileo, had first proposed extending decimals to include fractions. In 1614, John Napier used decimal fractions in his logarithmic tables, and that was that. Perhaps more to the point, the commission was chaired by Jean-Charles, chevalier de Borda, author of Tables of Logarithms of sines, secants, and tangents, co-secants, co-sines, and co-tangents for the Quarter of the Circle divided into 100 degrees, the degree into 100 minutes, and the minute into 100 seconds. I leave you to guess where he stood on decimalisation.

The commissioners themselves were in no doubt as to what they were setting out to achieve. In the words of the Marquis de Condorcet, it was to be “for all people for all time”[ii], and ultimately, with the Système international d’unités (SI), that is what it has become. At the beginning, though, they were given just five measures to define:

  • The mètre for length
  • The are (100 m²) for area [of land]
  • The stère (1 m³) for dry volumes (stacked firewood, in their case)
  • The litre (1 dm³) for liquid volumes
  • The gramme for mass.

Of these, only the first presented any problem. The others could easily be defined and measured without leaving Paris, or indeed, the office. The metre, however, was to be defined in relation to the planet itself. According to John Tabak (2004), Stevin thought that coinage, weights and measures would all eventually be decimalised. However, the first actual proposal came the best part of a century later from John Wilkins in An Essay towards a Real Character, and a Philosophical Language, in which he laid out a system of weights and measures that closely resembles the metric system, while a couple of years after him, in 1670, Abbot Gabriel Mouton proposed a measure of length he called the Virga, which would be equal to a thousandth of the distance along the Earth’s meridian equal to one minute of angle. The Virga would thus be around six feet long, and more or less equivalent to a Toise, which itself was about a fathom. In 1673, Leibniz came up with a similar plan, all three of them – Wilkins, Mouton and Leibniz – suggesting the actual length be based on a seconds pendulum in some form.

A century and a quarter later, de Borda rejected the whole seconds pendulum approach for the very sensible reason that the second was one of the very measures they were supposed to be redefining. He – and the commission agreed with him – thought the length should be based on the planet itself, and fixed it at one ten millionth of a quarter arc of the Paris Meridian. It was not just that he was the poster boy for decimals; he was looking for a measurement that would be handier and more practical than the Toise, something more along the lines of the yard. The yard, based on a single pace, was a useful standard in everything from weaving to carpentry. Ever since Columbus’s lucky escape in 1492, everybody in Europe had been acutely aware of the circumference of the Earth – a little less than 25,000 miles – and the meridian would be approximately the same. 25,000 times 1,760 equals 44,000,000 yards, making a quarter arc 11,000,000, so his desired measure would be about 10 per cent over a yard. This suited him well. The only problem was that they would have to leave Paris to measure it.
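
De Borda’s sizing exercise is easily checked (a sketch of the paragraph above, using the same round figures):

```python
miles_around = 25_000                 # circumference, in round figures
yards_per_mile = 1_760
total_yards = miles_around * yards_per_mile      # 44,000,000
quarter_arc_yards = total_yards / 4              # 11,000,000
metre_in_yards = quarter_arc_yards / 10_000_000  # the proposed metre
print(metre_in_yards)                            # 1.1, i.e. ~10% over a yard
```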

In any other year but 1792, this would not have been an issue, but the project was unfortunately timed to coincide with the outbreak of the French Revolutionary Wars. It outlasted the entire war against the first coalition of European states, France against the combined might of Austria, Prussia, Spain, Great Britain and the Dutch Republic, together with Portugal and much of Italy. France won, although Great Britain did not accept this and went on fighting. Throw in the Reign of Terror, which only lasted 11 months, from September 1793 to July 1794, but accounted for over 40,000 executions throughout France, and the whole idea of doing the survey in France because it would be safer starts to look a bit weak; however, there was nowhere else in Europe they could go, so the Paris Meridian it was.

The hapless crew sent out to do the work was to have been led by Pierre Méchain, an astronomer and member of the Académie, as well as a Fellow of the Royal Society, and Jean-Dominique, Comte de Cassini. However, Cassini was a royalist, and refused to work for the National Assembly once they arrested the king in August of that year. He was replaced by Jean-Baptiste Delambre, a newly, but unanimously, elected member of the Académie des sciences, who took charge of the northern section to be surveyed, while Méchain took the southern part. The whole route, from Dunkirk to Barcelona, was 242 leagues; the northern part, from Dunkirk to Rodez in the south of France, was 167 leagues, while the shorter part, from Rodez to Barcelona, was only 75 leagues. However, that part contained the Pyrenees, while the north was largely flat, so it seemed a fair division. As a baseline, Delambre used a six mile stretch of straight road, between Melun and Lieusaint, as it happens, which he measured using platinum rods, each two toises in length. Méchain found a similar stretch between Vernet and Salces, and did the same.

At this point you may justifiably be wondering precisely what a toise might be, and if so, how long is a league? Good point. As if to prove that the whole exercise was justified, in 1792 Paris alone had four, count ‘em four, definitions of a league (lieue in French). The oldest was the one used by the Public Works department, originally called just the lieue de Paris, but from 1737 on known as the lieue de ponts et chaussées (bridges and roads). It was 2,000 toises long. There was also one for the Post Office at 2,200 toises, and another for calculating tariffs at 2,400 toises. The fourth, and the one I am using here, was defined by Jean Picard in 1669. It was to be one twenty-fifth of a degree of the polar circumference of the Earth, and called simply the Twenty-five to a degree league. Picard calculated it to be exactly 2,282 toises.

“So what about the toise?” I hear you cry. Well, the toise changed very little, except in name, between Picard’s definition and the 1792 survey. In Picard’s time it was called the Toise du Châtelet, and the one that replaced it, known as the Toise du Pérou, was almost identical. So far, so good. However, the year before Picard did his measuring, somebody decided to check on the original reference toise which went back to Charlemagne’s day, and found it had shrunk by nearly half an inch, or about five lignes. Désastre! But no; just time for a swift, collective Gallic shrug, and they go with the new short version, so that’s what Picard used.

I want you to be impressed by Picard. If you take his 2,282 toises, multiply them by 25 to give you a degree, then by 360 for the polar circumference, you get 20,538,000 toises. Divide that by 40,000,000, as the metre was intended to be, and that gives you 0.51345 of a toise. There are 864 lignes to a toise, so a ‘real’ metre, one that corresponds to the original specs, is 443.6208 lignes according to Picard, and according to World Geodetic System 84 it should be 443.38308 lignes (or at least it would be if anyone had known what a ligne was). Anyway, the difference between Picard and the WGS is less than a quarter of a ligne, otherwise known as 20 thousandths of an inch, or half a millimetre in metric. Respect.
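
Here is Picard’s arithmetic again, step by step, if you want to check that my respect is warranted:

```python
toises_per_league = 2_282      # Picard's 'twenty-five to a degree' league
leagues_per_degree = 25
toises_around = toises_per_league * leagues_per_degree * 360
print(toises_around)           # 20,538,000 toises round the poles

lignes_per_toise = 864
picard_metre = toises_around / 40_000_000 * lignes_per_toise
print(picard_metre)            # ~443.6208 lignes

wgs84_metre = 443.38308        # lignes, per WGS 84 as quoted above
print(picard_metre - wgs84_metre)  # ~0.2377: under a quarter of a ligne
```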

Méchain and Delambre did better of course, but even they weren’t perfect. It may not have been their fault. Cassini’s father had already done a survey in 1744, and Delambre reused a lot of his triangulation points, for instance, while Méchain had the opposite problem; large parts of Northern Spain had never been surveyed at all. On top of all this, they were constantly being arrested and slung in gaol as spies for one side or the other in either the war or the revolution. Méchain ultimately gave his life for the survey, dying in 1804 of the yellow fever he contracted in Spain while trying to improve on his work there.

Meanwhile, back at the ranch, de Borda and the commission had jumped the gun somewhat, and calculated a provisional value for the metre based on earlier surveys. This was put into law in 1795 as 443.44 lignes, so within a couse of Picard’s value. Based on this, they had a bunch of platinum bars made up so that when Méchain and Delambre got back they could just pick the nearest one to their measurement, and get on with things. The survey duly came back with a length of 443.296, the appropriate bar was chosen and went on record in 1799 as the mètre des Archives. Unfortunately, 443.296 fell short of meeting the 10 million to the arc requirement, a fact which quickly became apparent. Désastre! But no; another swift shrug, and life carried on.

The mètre des Archives may have fixed the length of the metre itself, but work still went on into improving the reference bar. In the 1870s, the International Metre Commission, comprising some thirty countries, met to discuss, and eventually, in 1875, sign, the Metre Convention which set up the Bureau international des poids et mesures in Sèvres, just outside Paris, although not actually in France. As a reference, they used the distance between two marks on a longer bar, which reduced the wear and tear problems associated with earlier metre-long “end standards”, what Wikipedia wittily calls their “shortcomings”. The new international prototype metre was made of an alloy of 90% platinum and 10% iridium, and copies in the same alloy were distributed to all the signatories, along with precisely calibrated notes as to each bar’s variation from the prototype. However, this only pointed up the difficulties inherent in having a physical artefact as a reference, and in the early 1890s, Michelson (of Michelson and Morley), together with one Jean-René Benoît, using interferometry, managed to measure the prototype to within a tenth of the wavelength of the red line of cadmium.

Nonetheless, it was not until 1960 that the 11e Conférence générale des poids et mesures would agree a wavelength-based standard for the metre:

“The metre is the length equal to 1,650,763.73 wavelengths in vacuum of the radiation corresponding to the transition between the levels 2p₁₀ and 5d₅ of the krypton-86 atom.”

At that same conference, which, incidentally, formally established Le Système international d’unités, the definition of the second was officially ratified as:

“1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time”

It had been redefined four years earlier because the daily rotation of the Earth was not uniform enough to measure accurately, but the point about this definition is that it was not measured at all; it was calculated based on Newcomb’s Tables of the Sun, and Brown’s Tables of the Moon. However, even as this definition was being ratified, Louis Essen at the National Physical Laboratory in England and William Markowitz at the US Naval Observatory were collaborating on a new definition of the ephemeris second in terms of the “hyperfine transition frequency of the caesium atom”, which turned out to be

“the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom”

as ratified in 1967 at the 13e Conférence générale des poids et mesures. The Earth herself may wobble and slow, as indeed she does, but the second would henceforth be divorced from all that. When this new definition was compared to the observed ephemeris second, it agreed to within a tenth of a nanosecond, the time it takes light in vacuum to go three centimetres, or a little over an inch.
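
Both definitions are easy to sanity-check with nothing more than arithmetic (my own quick checks, nothing official):

```python
# The ephemeris second: 31,556,925.9747 of them per tropical year
# should give the familiar 365-and-a-bit days.
print(31_556_925.9747 / 86_400)  # 365.2421988... days

# And the 1967 comparison above: in a tenth of a nanosecond,
# light in vacuum covers about three centimetres.
c = 299_792_458                  # m/s
print(c * 1e-10)                 # 0.0299792458 m, i.e. ~3 cm
```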

Talking of light, in 1975 the 15e Conférence générale des poids et mesures recommended that c, the speed of light in vacuum, should be set at 299,792,458 metres per second, based on a number of experiments and using the above definition of the second; the 17e Conférence générale des poids et mesures agreed in 1983. This allowed the metre, too, to let slip the bonds of Earth, and take its value from c, which thereby became the universe’s first quasi-dimensionless physical constant. I say ‘quasi-’ because any speed is described by two factors, one of which is invariably a unity – mph, fps, etc. This inversion of the usual definition – in which any uncertainty is seen in the value of a factor – leaves the actual metre with the task of reconciling the definition with reality, and absorbing any disparity, should there be one (Famously, the speed of light used to vary somewhat, which was embarrassing. Now the metre varies somewhat but, with the speed of light fixed, we have no way of measuring it. Sneaky, no?).
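
Put as code, the inversion looks like this: length measurement becomes time measurement (a sketch, with the function name my own):

```python
c = 299_792_458  # m/s, exact by definition since 1983

def length_from_flight_time(seconds: float) -> float:
    """Length in metres from a light flight time in vacuum."""
    return c * seconds

# The metre is now whatever distance light covers in 1/299,792,458 s.
print(length_from_flight_time(1 / 299_792_458))  # 1.0, to float rounding
```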

And they’re going to do it again. This November, 2018, in Versailles, France, representatives from 57 countries plan to revise the SI, finishing the job of creating a complete system that does not depend on physical objects. Instead, everything will be based entirely on the speed of light and other “constants” of physical science, resulting in a measurement system that might truly and finally be “for all people for all time”, as the Marquis de Condorcet had hoped.

When implemented on May 20th, 2019, the kilogram will also cease to depend on a material artefact. Instead, it will be based on Planck’s constant, now fixed at a value of exactly 6.626 070 15×10⁻³⁴ kg·m²/s, in combination with the existing definitions of the metre and the second – a physical constant universal in its scope and application. The work begun by the Babylonians 4,000 years ago, and carried on by the French revolutionaries, will culminate in a system that will work “for all beings, human and alien, for all time and throughout the universe”, based entirely and uniquely on this planet.
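
One way to see how fixing Planck’s constant pins down the kilogram – my illustration, not the official realisation procedure – is to note that h carries units of kg·m²/s, so with the metre and second already defined, its fixed numerical value leaves the kilogram nowhere to hide. Combining E=mc² with E=hν even gives you the frequency ‘equivalent’ to one kilogram:

```python
h = 6.626_070_15e-34   # kg*m^2/s, exact from May 2019
c = 299_792_458        # m/s, exact since 1983

# Frequency whose quantum of energy corresponds to 1 kg of mass:
nu = 1 * c**2 / h
print(f"{nu:.4e} Hz")  # about 1.3564e+50 Hz
```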

Not so insignificant now, eh, Carl?

Welcome to the Clapham version of life!

The man on the Clapham omnibus is a hypothetical ordinary and reasonable person, used as a standard by the courts in English law. He is fairly well educated and intelligent, but nondescript. If you are Australian, you may know him as the man on the Bondi tram. In other words, your average person.

Also, this is a “join-the-dots” blog. In the book equivalent, a number of dots are arranged apparently haphazardly on a page, but if you join them in a particular sequence, they reveal a picture. In these articles, all the pictures are mine, but the dots, the scientific and historical bits of knowledge, are supplied by other people.

It almost goes without saying, but I trust the dots. They are for the most part the product of a lifetime’s study, occasionally with a dash of genius thrown in. If you doubt a dot, you must take it up with its creator, all of whom are listed, where known. However, if your issue is with the picture, the implications I have imputed to them, then blame me alone.

E=mc²: So, just a coincidence then?

In 1946, Albert Einstein was in sombre mood. Gone was the euphoria of his annus mirabilis forty years earlier, when his paper on the equivalence of mass and energy was first published. Now, with Hiroshima and Nagasaki fresh in his memory, Einstein, too, found himself “the destroyer of worlds”, as Oppenheimer put it. That is why, when he was approached by the publishers of Science Illustrated in the US to write an article on “what takes place in the operation of his law”, he entitled it:

E = mc² – the most urgent problem of our time.

Like his publishers, Einstein wanted to give the general public some basic understanding of the science involved in nuclear power and its terrible potential. In the process, from our modern point of view, he manages to clarify a few basic issues with the formula itself. Oddly, considering it is probably the most famous equation in the world, there is still considerable dispute as to what it actually means. What, for example, is equivalence? Does it mean that mass and energy are the same thing, or two equivalent but different things? Is one just convertible into the other? What is the role of c² in the equation? And so on.

The Stanford definition is quite specific:

According to Einstein’s famous equation E = mc², the energy E of a physical system is numerically equal (my emphasis) to the product of its mass m and the speed of light c squared.

In the article, being forced to write for a readership of non-mathematicians, Einstein’s use of plain language makes his own understanding of the concepts behind the symbols quite clear:

“E is the energy that is contained in a stationary body; m is its mass. The energy that belongs to the mass m is equal to this mass, multiplied by the square of the enormous speed of light – which is to say, a vast amount of energy for every unit of mass.”

The first major clue is his use of the word “enormous”. The speed of light is constant; it does not change. However, if it is to “multiply” mass, the speed of light must be a number, and quite a high one, too. In the article, Einstein uses 186,000. That is the speed of light in miles per second, which would be familiar to his US audience, but he is just using it as a number that gives him a square of 34 billion, which is large enough to account for Hiroshima.

The important point is that, for bombs – or, more usefully, nuclear power stations – to work, some actual number is required. With bombs, you might well wonder what the real mass was, given that it is now spread across several square miles of former city, but nuclear power stations offer a much more controlled environment, where E = mc² yields fixed values that are a reliable part of the fuel purchasing calculation.

But how can this be? The speed of light can be virtually any number, from one (light years per year) to a billion (feet per second), anything in between, or even far greater (millimetres per hour, anyone?). The answer lies in the fact, as is clear from the article, that Einstein views the equivalence of energy and mass as a fixed ratio. How could it be otherwise? It does, of course, vary from measurement system to measurement system, from SI Units to Imperial Units to US Customary Units, and there are elaborate conversion rules and ratios to convert from one to the other, but it has to be borne in mind that these are competing systems yielding different results for the same formula. However, it is a logical impossibility for the actual ratio to vary, or the bombs wouldn’t work, or at least would have wildly varying impact.

For instance, he suggests an experiment to verify the relationship:

“I can easily supply energy to the mass – for instance, if I heat it by 10 degrees. So why not measure the mass increase, or weight increase, connected with this change? The trouble here is that in the mass increase, the enormous factor c² occurs in the denominator of the fraction. In such a case the increase is too small to be measured directly; even with the most sensitive balance.”

It is obvious from this that, to his mind, the energy implicit in mass and the energy released in, say, nuclear fission – the ‘bound’ and the ‘unbound’ energies, to coin a phrase – are in a true ratio, i.e. between two states of the same property of nature. If he is right – and who am I to quibble? – then the factor c², being a true ratio, is dimensionless, and the choice of the correct “speed of light” is central to the validity of the formula.

SI (the metric system, as it is commonly known) has a value for c² of approximately 90 quadrillion, while the other two have values of 34 billion in miles per second, or almost a quintillion in feet per second. No kind of conversion ratio can make those compatible; they are either right or wrong. Einstein knew that. He just didn’t know which was which.
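
The three candidate numbers are easy to generate (the round 186,000 is the figure Einstein himself used; the foot conversion is the exact 0.3048 m):

```python
c_metres = 299_792_458            # SI, exact by definition
c_miles = 186_000                 # Einstein's round figure, miles/s
c_feet = c_metres / 0.3048        # ~983,571,056 feet/s

print(c_metres ** 2)  # 89,875,517,873,681,764  (~90 quadrillion)
print(c_miles ** 2)   # 34,596,000,000          (~34 billion)
print(c_feet ** 2)    # ~9.67e17                (almost a quintillion)
```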

Could Einstein’s formula be tested experimentally? To be valid, we would need values for mass and energy that can be established through direct measurement in physical experiments. The ratios between these values should be judged either directly if the units are compatible, or, as a last resort, through established convention.

In the World Year of Physics 2005, as a centennial celebration of Albert Einstein’s achievements in 1905, just such a test was performed, essentially the one that Einstein proposed for himself, except that we do now have a sufficiently “sensitive balance”: “the loss of mass to an atom and a neutron, as a result of the capture of the neutron and the production of a gamma ray, has been used to test mass–energy equivalence to high precision, as the energy of the gamma ray may be compared with the mass defect after capture. In 2005, these were found to agree to 0.00004%, the most precise test of the equivalence of mass and energy to date.” That’s four myriadths of a per cent – four parts per ten million – which is amazingly precise, as well as being a tribute to the sophistication of the measurement technology to which it bears witness. We are the first generation(s) with the power to know this.

So, which was it to be? SI, Imperial, US? Given the almost unlimited number of possible results, the most likely outcome of the Direct Test was none of the official systems. There was no design constraint that would force a choice between the three main measurement systems. And yet, what was this precise value? 89,875,517,873,681,764, otherwise known as the SI value for c².

This was an astonishing result, not just for its precision, but because of where we got the number to which we’re comparing it (See A Short History of c). Almost any other number from the huge range of possible speeds of light was infinitely more likely. Yet we have this one, and with it a strong suspicion that it is dimensionless, which reinforces the probability that energy and mass really are two states of the same property of nature.

Not that it should come as a surprise. We have been running nuclear power stations for decades, and this has been the value for the mass/energy ratio for all that time. Nonetheless, it has never been academically verified until now. You may, of course, still be entertaining doubts about the directness of the test, but you don’t have to take it from me; feel free to ponder the legitimacy or otherwise of this experiment. However, remember that this test convinced Stephen Hawking, and all other physicists of our time, of the validity of Einstein’s formula, so answers on a postcard please, to . . .

Again, what does this all mean?

Well, for a start, it means that, thanks to all the people involved, we now have an extremely precise numerical value for c, the speed of light. Why is this important? Because c is a universal physical constant, and it shows up in calculations for almost all the other physical constants. Take the natural units, for example:

[Table of natural units omitted. Source: “Physical constant.” Wikipedia, The Free Encyclopedia. Retrieved 30 Jul. 2017]

The point is, we humans wrote all those, and all the formulae that use them. Bear in mind that the value we have given c² is around ninety quadrillion, so its position above or below the line makes an enormous difference, as Einstein pointed out in his article. We use these constants all the time, and we’ve measured the results. They are correct. They work.

So, Rainville et al have established that the measured ratio between the binding energy in a unit of mass and that energy once released is 89,875,517,873,681,764, which just happens to be the very same number that the French and the Babylonians conspired to set for c². So, is this the coincidence of the title?

Yes. On the one hand we now have scientific proof that Einstein’s theory and its mathematical predictions were correct, including the value for c², and on the other, the historical fact that it was the Babylonians who, 4,000 years ago, chose to divvy up into units of time the rotation of what we are assured – by Carl Sagan and Stephen Hawking among other luminaries – is merely “an insignificant planet of a humdrum star lost in a galaxy tucked away in some forgotten corner of a universe” – which, in combination with the arbitrary and inaccurate measurement of the surface of that same planet by the French, gave us the 299,792,458.00 value for c in the first place. I cannot begin to imagine what possible causal connection there could be between their activities and the empirically confirmed fundamental constants of the universe. On the other hand, you’ve got to admit, as coincidences go, it’s a Duesy, because, thanks to Rainville et al, we now know it to be so.

If it’s not a coincidence, and there is a causal connection, that would mean that we on this planet, because we’re on this planet and no other, have been able to define celeritas or constant, or whatever you think the c stands for, for the entire universe, and thus all formulae with c in them. Apart from the Planck units, that includes most of the universal constants, and therefore almost all of modern physics. This is completely artificial, in the original sense of the word. There is no equivalent in nature to one 86,400th of the averaged and corrected time it takes our “insignificant planet” to rotate on its own axis, even if you take a year’s worth, just as there is nothing natural, despite the best efforts of Méchain and Delambre, about an incorrect measurement of a quarter arc of that same planet. But it is very human. All the faffing about, rounding off and shrugging, not to mention the warfare and terror. Imagine that the English and French had got along, and collaborated through Greenwich instead. Or worse, that the English had gone it alone with yards per second; c would be 327,853,032.0688, and c² would be 107,487,610,636,706,000, or nearly 20% greater than the number we have now; no four myriadths of a per cent there. I’m not even going to mention miles per second.
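
Taking the article’s own yards-per-second figure at face value, the ‘nearly 20%’ claim checks out:

```python
c_si = 299_792_458          # metres per second
c_yards = 327_853_032.0688  # the yards-per-second figure above

print(f"{c_yards**2:.6e}")       # ~1.074876e+17
print(c_yards**2 / c_si**2 - 1)  # ~0.196, i.e. nearly 20% greater
```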

So it has to be just the most amazing coincidence.

Unless, of course, it isn’t, but be careful with that. That way madness lies.

 

[i] Adams, C.F. (ed.) (1850–56). The Works of John Adams, Second President of the United States, vol. VIII, pp. 255–257; quoted in Ayling, p. 323, and Hibbert, p. 165. Retrieved from Wikipedia, “George III”, 9/9/14.

[ii] Alder, Ken (2002). The Measure of All Things: The Seven-Year Odyssey that Transformed the World. London: Abacus. ISBN 0-349-11507-9.

Understanding Quantum Mechanics

“If you think you understand quantum mechanics, then you don’t understand quantum mechanics”
Richard Feynman (allegedly)

And on that encouraging note . . .

The Clapham Interpretation

Not that quantum mechanics doesn’t deserve its fearsome reputation. From the beginning, Schrödinger was in two minds about it, and Heisenberg equally uncertain. And they understood it (allegedly). Nonetheless, there is a way of thinking about quantum mechanics that works for me, and might perhaps for you, too.

But first, a little background . . .

At the turn of the last century, Max Planck, a physicist friend of Einstein’s, and the unenthusiastic father of quantum mechanics, was thinking about warm, black bodies. A “black body” is anything that absorbs all frequencies of radiation, hence the “black”, yet radiates heat, whence the “warm”. By the time we join him, he is contemplating the sun, the ultimate “black” body, and wondering why it doesn’t go out. His problem was that light waves carry away energy in the form of heat, and theoretically (blame Maxwell) there was no limit to the total number of possible light waves; limitless radiation = infinite energy transfer = Phuttt! No more heat. However, the sun was still shining, so there had to be something wrong with the theory. Eventually he decided that, depending on the frequency, there must be some limit, that light waves could only carry so much energy, and the equation he came up with was E=hν, where E is the energy, ν (the Greek letter nu) the frequency, and h a constant now known, rather unimaginatively, as Planck’s Constant.
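
To put a number on it (my example, not Planck’s): take green light at around 550 nm and ask how much energy one quantum of it carries.

```python
h = 6.626_070_15e-34   # J*s, Planck's constant
c = 299_792_458        # m/s

wavelength = 550e-9    # metres: green light
nu = c / wavelength    # frequency, ~5.45e14 Hz
E = h * nu             # energy of ONE quantum
print(f"{E:.3e} J")    # ~3.612e-19 J, about 2.3 electron-volts
```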

Of course nobody believed him, until Einstein took a little time off from completely re-writing the rules of the universe to point out that the photoelectric effect clearly showed his ‘energy quanta’ knocking electrons off the surface of metal precisely following Planck’s formula. Suddenly, he was right, but then it all started to go pear-shaped. To begin with, what was a ‘quantum’? What do we call this thing that floats like a wave and stings like a particle? How do we classify it? The key to this conundrum turned out to be the ‘double slit’ experiment. This had actually been performed a hundred years earlier by Thomas Young to “prove” that light came in waves. He set up a card with two parallel slits in it, shone a light through them, and, lo and behold, the resulting image was an interference pattern. Everybody knew about peaks and troughs from messing about in boats, so this was proof positive that light comes in waves, and thus a great place to start again now that it wasn’t. Sure enough, when in 1909 one G. I. Taylor tried it with a very low energy light source, he got dots (particles) in an interference (wave) pattern, and that’s what everybody has been getting ever since, i.e. wave/particle duality.

This is the central mystery in quantum mechanics, and demystifying it is the justification for this article.

Time now for:

The Clapham Interpretation

“What now appear as the paradoxes of quantum theory will seem as just common sense to our children’s children”
Stephen Hawking, The Second Millennium Evening at The White House, March 6, 1998

 

Introduction (and this is where the story really starts)

Like peanut butter, light comes in two varieties: chunky and smooth, the big difference between light and peanut butter being that light seems to be able to be both chunky and smooth at the same time. Obviously, that’s not possible, but to understand why people think it is, you need to know something about time.

Picture the scene: you are on your way to or from Clapham by public transport. You turn to one of your fellow travellers and ask which, in their opinion, presents the greater opportunities: the past or the future. They will probably suggest the future, partly on the grounds that it contains a wider range of options, the past being over and done with, and partly just because it is unknown and largely unknowable. Soon you should find you are both able to agree that the future consists entirely of events that have yet to occur, if ever, while all those that have already happened form the past. That leaves us only the present to consider.

The present is more interesting, and somewhat less intuitive. Most people, for instance, would consider the present, otherwise known as “Now”, as lasting a reasonable amount of time, enough at least to get stuff done. However, in reality, it can’t possibly be long enough for things to happen during it, because then the past would start at that point, by definition. It has to be shorter than that.

In fact, and this is the key, no change at all can take place in the present if it’s to avoid instantly becoming the past. At the same time, it can’t simply be zero, as that would mean it doesn’t exist, which it patently does. It has to be greater than zero, just enough to form a buffer between future and past, but in the course of which nothing can happen.

In short, the present is binary.

So Time is short. But how short is it? Thanks again to Max Planck who reluctantly defined it for us, we know how short it is, and it’s very, very short: 5.39116(13)×10⁻⁴⁴ seconds. He called it Planck Time, and defined it as “the time required for light to travel, in a vacuum, a distance of 1 Planck Length”, 1.616229(38)×10⁻³⁵ metres, or the length of what we now call a photon, i.e. a quantum of light. This is not a coincidence. This is why the speed of light is the speed it is. It can only go in tiny steps, albeit at a rapid rate.
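
You can check the claim directly: one Planck length per Planck time is exactly the speed of light.

```python
planck_time = 5.39116e-44     # seconds
planck_length = 1.616229e-35  # metres

print(planck_length / planck_time)  # 2.9979...e8 m/s, i.e. c
```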

Now, I know what you’re thinking: If the present’s that short, and nothing can happen during it, then how could anything ever happen? Good point. Think of each present as being like a single video frame, and each quantum a single pixel; nothing changes within a video frame, but it is not exactly the same as the ones before and after it, which gives the sensation of movement and change. That’s why, for example, quanta/pixels appear to leap into being; the transition takes no time at all. More to the point, the fact that they do leap is a comforting proof of concept, Niels Bohr’s distaste for it notwithstanding. Furthermore, each element (quantum) has its own timeline (unless they are ‘entangled’, in which case they share). For each quantum state in any given timeline there are an infinite number of possible subsequent states of varying probability, from “no change” to “different in some way”.

The rule is: “If nothing happens, nothing happens”. So long as an element’s possible future states remain unchanged – in what is called a ‘wave’ state because that is how it behaves – then successive presents will remain in that state until something happens, i.e. some event causes an instantaneous change to a ‘past’, or what is called ‘particle’, state which then simply becomes part of the fabric of the universe in that form (See definition of ‘past’ above). That surely is precisely what we should expect to see – and actually do see – when we look at the sub-molecular quantum world: two possible states, one probabilistic while the probabilities still exist, and the other fixed once an event has occurred. It should be borne in mind, however, that any event instantly generates its own probable subsequent states which are superposed on the past ‘particle’ state.
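
As a toy illustration of that rule – entirely my sketch of the interpretation, not physics from any textbook – picture a single quantum as a set of weighted possibilities that simply persists, tick after tick, until an event collapses it into a fixed ‘past’ state:

```python
import random

# 'If nothing happens, nothing happens': a superposed 'wave' state
# persists unchanged through successive presents, until an event
# collapses it into a fixed 'particle' past.
state = {"spin up": 0.5, "spin down": 0.5}  # weighted possibilities
past = []                                   # the accumulating fabric

for tick in range(20):
    if random.random() < 0.2:               # an event occurs
        outcome = random.choices(list(state), weights=state.values())[0]
        past.append((tick, outcome))        # fixed, part of the universe
        state = {outcome: 1.0}              # new possibilities build on it
    # otherwise: the present repeats and the superposition persists

print(past)
```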

I call this the ‘sequential present’.

As we will see, the Clapham Interpretation resolves a number of apparent paradoxes in quantum mechanics, among them the Observer Effect, Schrödinger’s Cat, the EPR Paradox, the Multiverse, the Double-Slit Experiment, the Block Universe and the Arrow of Time, to name but a few. It also reconciles Classical Physics with the Theory of Relativity and provides a basis for a quantum theory of gravity, with implications for String Theory, Brane Theory and the Holographic Universe.

It has a nice GUT feel to it.

«§»

You should know that Einstein and I part ways at this point. He is famously quoted as saying that physicists believe the separation between past, present, and future is only an illusion, albeit a convincing one. However, he wrote that to the recently bereaved widow of his dear friend, Michele Besso, and it should not be taken out of context. I myself have said at funerals, “I’m sure he is looking down at us now”, but this is not my professional opinion, and as far as I know, neither was it Einstein’s. He also said that Besso had “left this strange world a little ahead of me,” and that it meant nothing, which would imply a belief in the afterlife that, again to my knowledge, he did not have. Nonetheless, he does seem to have believed in some form of “block” universe in which the time dimension of spacetime stretches back to the Big Bang and forward to whatever crunch-like end awaits us, and there, obviously, we are at odds.

A good part of the reason for this belief is a basic problem with classical physics: i.e. that the mathematics doesn’t show a direction for time. With the exception of the Second Law of Thermodynamics, there is nothing in the maths to show why it shouldn’t work in either time direction, and therefore, according to mathematicians, it should work as well backwards as forwards, the one tiny drawback being that it actually doesn’t. Broken crockery never spontaneously reassembles itself, for example, so the second law holds.

Mathematics is a language, and as such is perfectly capable of describing reality in detail. However, like any other language, it is also perfectly capable of writing fiction. Like any good fiction, it only has to fit within our perception of reality to be believable. Remember what Einstein said: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”

On the other hand, in a Clapham universe, physics only ever has to work in the present, because there is literally nowhen else. So that’s a relief. Also, classical – macro – physics is concerned largely with the study of matter in its past, ‘collapsed’, form at speeds that demand negligible quantum or relativistic adjustment.

One thing about the sequential present would have pleased Einstein, though. Since quantisation is a property of the sequencing, it must affect the three physical dimensions equally, so the sequencing rate not only constrains the “speed” of light, but also means it cannot vary in any direction. We know this to be so because we have measured it, and the speed of light in vacuum is the same in all directions, regardless of how, where and at what speed you yourself are travelling. As you know, this is impossible under the laws of the three-dimensional universe that Newton described, but entirely consistent if its laws have an independent source. This is another comforting proof of concept, uniting, as it does, the quantum universe with relativity as descriptions of the same reality.

«§»

If this is all true, what would that imply?

It may not be immediately obvious, but this interpretation implies that the ‘time’ component of spacetime is a universal (i.e. non-local) quantum ‘protostrate’, or ‘layer’ if you prefer, depending on how you want to visualise it. That in turn means that, along with the block universe, we have to bid a fond farewell to the Many Worlds of Hugh Everett in favour of what we might call the “Sliver” universe, which is all a shame from the point of view of science fiction, but we get a lot in exchange. For a start, we get back the irreversible arrow of time, in fact Time itself, together with all of classical physics and, for all I know, entropy; anyway, the old familiar universe the way we are used to seeing it, where Tempus definitely fugit.

To start at the very beginning, this would mean that at the point of the Big Bang the entire universe briefly consisted of a single photon24, followed immediately (5.39116 × 10⁻⁴⁴ seconds later, and every 5.39116 × 10⁻⁴⁴ seconds after that until the present day)25 by exponentially expanding “presents”26. It is probable that the initial phase of expansion was ‘dimensionless’, not only in the sense that the three physical dimensions were not yet formed, but that the ‘metric’27 itself was also coming into being. Others have calculated that: “During inflation, the metric changed exponentially, causing any volume of space that was smaller than an atom to grow to around 100 million light years across in a time scale similar to the time when inflation occurred (10⁻³² seconds)”. At the moment, everyone more or less agrees that this first level of expansion is over by 10⁻³² seconds into existence. That, however, was based on a single Big Bang, although expansion continues to be the default state of the universe. Of course, as my daughter’s Satnav would say, this will all need “recalculating”.
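
For a sense of scale, here is a back-of-the-envelope count of those “presents” in Python. The Planck time is the figure from the text; the age of the universe is my assumption (the standard 13.8 billion years):

```python
# Back-of-the-envelope: how many Planck-time "presents" have elapsed?
# Planck time per the text; 13.8-billion-year age is an assumed standard figure.
PLANCK_TIME = 5.39116e-44              # seconds per "refresh"
AGE_OF_UNIVERSE = 13.8e9 * 3.1557e7    # years -> seconds (Julian year)

ticks = AGE_OF_UNIVERSE / PLANCK_TIME
print(f"Sequential presents since the Big Bang: {ticks:.2e}")
# -> roughly 8e60 of them
```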

We are in the habit of calling this component Time, largely because one of its side effects is to provide us, as Einstein so neatly put it, with “the order of events by which we measure it”, i.e. the sequential present or, as Toynbee is supposed to have said about history: “One damned thing after another”28. It is this sequencing that gives us the irreversible Arrow of Time, the sensation that Time passes in only one direction. However, the Māori have it right: it flows from the {superposed} future into the {collapsed} past, not the other way about; future first, past last.

The next big question is: “Why does light have a speed at all?”

“Well, it has to get around,” I hear you say, and that’s a good point, but not strictly true. First of all, just as in the Bible’s “Fiat lux”, we are using the word ‘light’ synecdochically29. We call it the Speed of Light, and that is certainly one of the things it’s the speed of, along with all electromagnetic radiation, but in reality it is simply the rate at which quantum fields can refresh and propagate themselves. It could be anything. I tend myself to think of it as the Speed of Time, but in fact it is just the Speed of Existence. Also, unlike the luminiferous æther, the sequential present is simultaneously medium and waveform. That is why it doesn’t accelerate. In our three-dimensional universe, it is always a uniform speed.

However, that speed does vary. c, the universal constant, is handy for nipping about in the vacuum of space, but ‘light’ can go at practically any speed, and even stop. It all depends on its surroundings. The slightest barrier to its progress will cause it to slow. The cosmic microwave background (CMB) was almost perfectly uniform, varying by only one part in a hundred thousand, yet that was enough. Even the slightest slowing causes the physical realm to contract/coalesce correspondingly, slightly increasing the density of the three-dimensional universe at that point, which in turn slows and contracts the next ‘refresh’, and so on. The values of the “strings” are a function of the “speed”. At its simplest, I could imagine it looking something like this: c·(x, y, z), where all the physical dimensions, in fact our whole universe, lie within the parentheses. Any reduction in c would therefore entail a reduction in the dimensional values of the universe at that point.
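
To make the picture concrete, here is a toy sketch of that c·(x, y, z) idea. The function and the numbers are mine, purely illustrative, and not a claim about real physics:

```python
# Toy illustration of the c*(x, y, z) picture: a local reduction in c
# scales the local dimensional values down by the same factor.
# Entirely schematic; the numbers are made up.
C_VACUUM = 299_792_458.0  # m/s

def contracted(point, local_c):
    """Scale an (x, y, z) point by the ratio of the local c to its vacuum value."""
    factor = local_c / C_VACUUM
    return tuple(factor * coord for coord in point)

# A region where 'light' is slowed by one part in a hundred thousand,
# echoing the CMB's variations:
print(contracted((1.0, 1.0, 1.0), C_VACUUM * (1 - 1e-5)))
# -> each dimension shrinks by the same one part in 1e5
```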

We know all this is actually happening partly because Einstein predicted it (which certainly made us go out and look for it), partly because, if we didn’t allow for it, my daughter’s Satnav would take me to the wrong house, but mostly because, when time slows, the clocks do, too. Now, clocks are strictly from the material world30; you can adjust them to tell whatever time you want, but for Time itself to adjust them shows the trans-dimensional mechanism exists. This is not like length contraction31, where it is just a perception; this is the ‘Time’ component of spacetime actually altering the ‘Space’ part. It goes without saying that some such process that increases the density of regions of space is an essential prerequisite for the formation of stars and other matter; our word for that process is Gravity, so ‘Gravity’ and ‘Time’ are inextricable aspects of the flow of existence.
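
The Satnav point can actually be checked on the back of an envelope. A sketch using the standard textbook figures for GPS satellite clocks (the figures are my assumptions, not from this post):

```python
# Why ignoring time dilation sends the Satnav to the wrong house.
# Standard textbook figures: special relativity slows GPS satellite clocks
# by ~7 microseconds/day; general relativity speeds them up by ~45.
C = 299_792_458.0  # m/s

net_drift_s = 45e-6 - 7e-6          # ~38 microseconds of net drift per day
range_error_m = net_drift_s * C     # timing error becomes ranging error
print(f"Uncorrected drift: {net_drift_s * 1e6:.0f} us/day "
      f"~ {range_error_m / 1000:.0f} km/day of ranging error")
# -> roughly 11 km per day: more than enough to miss the house
```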

Now the big question is: If ‘light’, what about billiard balls?

For this we need look no further than the double slit experiment which, in the Clapham Interpretation, is no longer Feynman’s “mystery”, but its experimental verification. If you recall, we left it back in 1909 with G.I. Taylor watching an interference pattern form from individual “corpuscles32” of light, and that is how it has been ever since, although the technology required to achieve it has become much more sophisticated, what with lasers, half-silvered mirrors and all. In the Clapham Interpretation, the interference pattern results from the photons each randomly ending up on one among many33 possible future34 paths. Over the years, people have tried various techniques to see which path the photon actually took, but the use of any kind of detector has always resulted in the photon finding itself going through one slit or the other, never both, as a wave would have done, so no pattern is formed.
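
For the curious, a minimal simulation of that build-up, one detection at a time. The far-field two-slit formula is textbook; the parameters are my own illustrative choices:

```python
# Minimal two-slit sketch: individual detections drawn from the
# |amplitude|^2 distribution still build up an interference pattern.
import numpy as np

wavelength = 500e-9                 # m
slit_separation = 50e-6             # m
screen_distance = 1.0               # m
x = np.linspace(-0.02, 0.02, 1000)  # positions on the screen

# Path difference between the slits sets the phase difference at x.
phase = 2 * np.pi * slit_separation * x / (wavelength * screen_distance)
intensity = np.cos(phase / 2) ** 2  # normalised two-slit pattern

# Each "corpuscle" lands randomly, weighted by the pattern:
p = intensity / intensity.sum()
hits = np.random.choice(x, size=10_000, p=p)
# A histogram of `hits` reproduces the fringes, one photon at a time.
```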

If you have followed this so far, of course, you will remember the “If nothing happens, nothing happens” rule, and you will realise that detecting a particle is an event that inevitably takes place in the present, thus collapsing any future possibilities into a ‘past particle’. So far, so good. However, that immediately raises the question: if it’s a past particle, how does it continue on its way in the present? This has long been one of the great mysteries of physics (until now, of course), and it’s called “inertia”.

“As […] it is now generally accepted that there may not be one which we can know, the term “inertia” has come to mean simply the phenomenon itself, rather than any inherent mechanism.”35

Clapham, however, does have an inherent mechanism – the expansion of the universe – or light would have no speed. A photon has no rest mass, and thus no inertia; it never accelerates, so there is no relativistic mass either. In short, it has nothing but unattributable speed. Unless, of course, we can attribute it.

As Richard Feynman was fond of pointing out, you can’t use analogies to describe the quantum protostrate because there is nothing in our ordinary experience that is remotely similar, but I can give you an idea of how difficult it is to describe. When you are doing this – trying to visualise the protostrate, I mean – you have to remember that, although its quantumness suffuses the entire universe, and its effects can be seen at every level, from the microscopic to the galactic, it does not form part of our physical, three-dimensional universe. It is on another plane of Being entirely. If you were to make a two-dimensional drawing of our universe, the protostrate would be the paper. There is no part of the drawing that does not depend on the paper for its existence, and I am including those parts of the drawing where there is no image, but it and the paper belong to different actualities.

Just to be clear on this point, you may in your travels have seen something like this:

This represents a solid three-dimensional sphere passing through a two-dimensional plane, and makes the point that any such event, viewed from the perspective of an inhabitant of that plane, would be perceived as a circle that grows and then shrinks over time.

This is not at all what we are talking about here. It is an illustration of an extra dimension in the same order of dimensions, just as we can make three-dimensional models of four-dimensional shapes, and even draw them in two dimensions, as here:

This is possible only because the dimensions involved are all of the same order. No such images are possible of the quantum realm.

Nonetheless, inadequate though our imaginations may be, we know a lot about how the protostrate works. We know, for instance, that left to its own devices (in a vacuum) it will ‘push’36, causing our universe to expand, but that if it encounters any resistance it ‘pulls’37, causing our universe to contract/coalesce38. It does this at every scale: consolidating our ‘local group’ of galaxies, ultimately driving the Milky Way to collide with Andromeda39; simultaneously expanding the voids in the universe and forcing distant galaxies away, never to be seen again; and, at the other extreme, forming and differentiating all the elements that go to make up the stars and planets that populate that universe. (We happen to be in a period of overall expansion at the moment, but because these effects are opposed, the net result in the past may well have been that they cancelled each other out, or even gave rise to some level of contraction.)

Because the effects are translated into the physical realm, we experience the expansion of the universe and the forces of gravity as being in opposition, but in the quantum protostrate they are just a matter of degree. We see planets, stars and nebulae in a vacuum, but they are merely different expressions of a single entity. We call it Dark Energy and Dark40 Matter to distinguish its effects, but everything is just the result of a continuum of values in the protostrate. We know this is happening because the Earth is not flat. It is not only the speed of light/time/existence that is the same in all directions: gravity is, too, so the abovementioned “ball of strings” applies here as well, with the result that everything turns out to be balls, one of them being this planet.

While we’re on the serious subject of gravity, what’s wrong with this picture?

It’s very popular. It shows gravity as an effect of the curvature of spacetime, and it makes perfect sense to us. We’ve all seen motorcyclists on the Wall of Death, or cyclists on the velodrome; or if not, how about stuff going down a plughole? Anyway, it looks familiar, helped by the fact that the force of gravity obviously points down, as it should.

But what if I turn it upside down? If it’s an accurate depiction of spacetime, it should work regardless of orientation. Does it still work?

I didn’t think so. Now we still see gravity as pointing down, but the Earth looks decidedly unsafe.

The problem is all those straight lines. Every physics student is taught that light travels in straight lines, except, of course, when it doesn’t. To get around that, light has to nip diagonally across the fabric of spacetime to line up with us on the far side, to explain the illusion that the star we are looking at is far to the right of its real position.

In a quantum world, however, there is no distinction to be made between how light behaves and the structure of spacetime. They are the same. Light simply follows the structure, as here,

which, given symmetry, demonstrates that spacetime is more concentrated in the presence of density, just as you would expect, and that the process which forms matter and the one that creates gravity are inseparable and the same (see inertia above).

At this point you could be forgiven for wondering if there might be something in the way of proof of any of this, and there is, although, unfortunately, it is purely mathematical. Stick with me, though. You will recall my mentioning Feynman’s “sum over histories” approach. This is now known as the Path Integral Formulation, and began in the late ‘20s with an idea of Paul Dirac’s that electrons had a ‘magnetic moment’ that should have a strength of exactly 1. Skipping ahead to the late ‘40s, this led to Feynman, Julian Schwinger and Sin-Itiro Tomonaga developing a quantum theory of electricity and magnetism that could actually be calculated. In his 1927 paper on “The quantum theory of the emission and absorption of radiation”, Dirac came up with the name Quantum Electro-Dynamics, or QED, and that is what it is called to this day.

The calculations turned out to be astonishingly accurate, making it perhaps the most accurate theory in physics in terms of matching theory with experiment. In Feynman’s time, the theoretical value of Dirac’s number was 1.00115965246, while the experimental result was 1.00115965221 (±4 in the last decimal place), the equivalent of measuring the distance from New York to LA to within the width of a human hair. Now that precision has been extended to the distance from the Earth to the Moon, again to within the width of a human hair41.
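
A quick sanity check of the hair analogy, using round figures of my own (the hair width and both distances are assumptions):

```python
# Sanity-checking the hair analogy with round (assumed) numbers:
# NY-LA ~ 3,944 km; Earth-Moon ~ 384,400 km; a hair ~ 0.1 mm across.

# Feynman-era agreement: +/-4 in the 11th decimal place.
print(4e-11 * 3.944e6)    # ~1.6e-4 m: about a hair's width over NY-LA

# Modern electron-moment measurements reach a few parts in 10^13.
print(2.6e-13 * 3.844e8)  # ~1e-4 m: about a hair over the Earth-Moon distance
```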

“Wait a minute!” I hear you cry. “What has an electron’s magnetic moment to do with anything when it’s at home? What’s that proof of?” Nothing, really, but it has what Feynman called a “Probability Amplitude” (rhymes with “Likelihood”, but only if you’re Scots)42, so it can be predicted and then checked to see what actually happened. The important point is that, even retrospectively, probabilities are the exclusive province of the future. That in turn means that the structure of the Clapham Interpretation, i.e. future full of possibilities (wave structure) – quantum present – past consisting of (collapsed) particle, is what Feynman and his descendants have been measuring, and that is proof of the concept.

QED.

 

Miscellanea

The recognition of the quantum realm was a defining moment in science. From this point on, scientists abandoned their traditional role of devising formulae that accurately reflect reality, choosing instead to devise realities that accurately reflect the formulae. This brought us problems right from the start: Heisenberg’s uncertainty principle43, for example, and the EPR paradox; Schrödinger’s cat and the superposition of states, through ultimately to the multiverse. Niels Bohr didn’t even like quantum leaps44, as I’ve said; but Feynman was right: they are all implicit in wave/particle duality.

Schrödinger’s Cat

Because physics can only work in the present, by definition, but we know it to have always worked in the past, and we can calculate how it will work in the future, we have always assumed its laws to be permanent. Heisenberg made the understandable error of assuming that the quantum state formed part of those laws, and would apply at all times, although specifically in the present. However, all that remains of the past is baryonic matter – matter in its collapsed state – and the future, again by definition, does not exist except in potential, whence the quantum realm derives its properties and laws.

So, in the end all we can say about Schrödinger’s cat is that it was alive when it went into the box, and no one knew at what point it would die in the future. Pretty much the way I feel when I go to bed. Until it happens, it’s in the future, which is uncertain; when it happens, it goes into the past. The present is just where that transaction takes place. No observer is required.

Heisenberg’s Uncertainty Principle

Obviously this is largely covered above, but there is one important and general point to make. Scientists – Popper scientists, that is – should only be interested in two things: facts and knowledge. That is why it is called Science, from the Latin Scientia, the noun form of Scire, ‘to know’. No Popper scientist can be interested in Truth. If they were, it would be called Verity, or possibly Aletheia, which would be sweet, but wrong.

How do we know Science today has nothing to do with Truth? Because we have constantly heard everybody, from Cardinal Bellarmine to Carl Sagan, assert: “Extraordinary claims require extraordinary evidence.” Now, obviously, there is no rational connection between the objective truth of something and any evidence for it. As Feynman himself was fond of pointing out, there is far more in the universe that is true than can be proven scientifically. The only path evidence provides is to knowledge.

More to the point, the amazing fruitfulness of Popper science depends entirely on every scientist’s willingness to abandon knowledge as soon as it fails to fit the facts, something it would be philosophically and psychologically impossible to do if it concerned Truth. As a result, the ancient Greek ideal of Knowledge as the agreed overlap between Truth and Belief, i.e. Justified True Belief, is irrelevant to Popper scientists. To them, knowledge is simply Justified (Justifiable?) Belief, and the moment it can no longer be justified, it can be abandoned and replaced.

Nonetheless, the hankering for Eternal Truths – and their author’s place in the Pantheon of Science – lingers on. Heisenberg, and through him all the rest of us, fell victim to it. That is why I call it the Great Scientific Fallacy (GSF).

The EPR Paradox

Imagine that somewhere in the plotline of Godfather II, Don Vito Corleone is betrayed by twin brothers. He determines that they must die, and he alone is to kill them. They head for the (widely separated) hills, where they live in fear (and anonymity). Eventually, Don Vito dies, and simultaneously their curse is lifted. No information travels anywhere, there is no “spooky action at a distance”. Nonetheless, the probability of either of them dying at the hands of Don Vito drops to nil.

Entanglement at the quantum level is an aspect of the universal protostrate. Entangled quanta thus share a timeline. If nothing happens, nothing happens; the future probabilities of the two possicles remain unchanged, and therefore identical. Should something happen, e.g. one of them is measured in the dimensional universe, they both simultaneously “collapse” into past particles, regardless of wherever in that universe they may be.
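
If it helps, here is a toy model of that shared timeline in Python. The class and names are my own invention, purely to illustrate the bookkeeping:

```python
# Toy of the Clapham picture of entanglement: two 'possicles' share one
# timeline, so collapsing one collapses both, with no signal sent anywhere.
# Purely illustrative; nothing here is standard physics machinery.
import random

class SharedTimeline:
    def __init__(self):
        self.outcome = None          # superposed until something happens

    def collapse(self):
        if self.outcome is None:     # "if nothing happens, nothing happens"
            self.outcome = random.choice(["up", "down"])
        return self.outcome

timeline = SharedTimeline()
alice, bob = timeline, timeline      # entangled: one timeline, two handles

print(alice.collapse())  # measuring Alice's possicle collapses the timeline
print(bob.collapse())    # Bob's is already fixed; the results always agree
# (A real singlet pair would anti-correlate, but the bookkeeping is the same.)
```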

That’s about it, really. Same goes for Quantum Teleportation.

The Multiverse

This is often referred to as Hugh Everett’s Many-Worlds Interpretation, although he never called it that. The story goes that he was pondering the double slit experiment (see above) when it occurred to him that another way of viewing the photon’s appearing to pass through both slits at the same time would be if the universe itself split into two identical universes, with the photon passing through the left slit in one, and through the right slit in the other.

This has now gone from unthinkable nonsense to accepted mainstream thinking, championed by most, if not all, leading quantum theoreticians. Sadly, had he given it just a moment’s more thought, he would have realised that if the universe were to split into two different realities, we would not see the interference pattern; we would only see one inheritor reality or the other. For the interference pattern to be visible requires that both realities be present, and thus the universe cannot have split.

Shame about that.

Quantum Computing

The universe is binary, with a helluva flop rate. Furthermore, as I understand it, any well-constructed formula only has one possible future state, so the probabilities should work. The drawback is what is called the Holevo Bound. To save you looking it up:

“In essence, the Holevo bound proves that given n qubits, although they can “carry” a larger amount of (classical) information (thanks to quantum superposition), the amount of classical information that can be retrieved, i.e. accessed, can be only up to n classical (non-quantum encoded) bits. This is surprising, for two reasons: (1) quantum computing is so often more powerful than classical computing, that results that show it to be only as good or inferior to conventional techniques are unusual, and (2) because it takes 2ⁿ − 1 complex numbers to encode the qubits that represent a mere n bits.” Wikipedia

In short, precisely what you would expect from qubit pasticles.
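
To see the scale of the mismatch the quote describes, a minimal sketch, with the figures taken straight from the quote:

```python
# The mismatch the Holevo bound describes: n qubits take ~2^n - 1 complex
# numbers to encode, yet yield at most n classical bits when read out.
for n in (2, 10, 50, 300):
    amplitudes = 2 ** n - 1          # free complex parameters, per the quote
    print(f"{n:>3} qubits: ~{amplitudes:.3e} amplitudes, "
          f"but only {n} retrievable bits")
```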

Hawking Chronology Protection Hypothesis

This is it.

Synchronicity

The sequential present provides Relativity with Einstein’s array of synchronised clocks. Nonetheless, so long as E=mc² remains just a coincidence (see elsewhere on this blog), the story is not over.

Source

The source of the protostrate is the same as that of the Big Bang, wherever you imagined that to be, just in instalments. Those who, like Elon Musk, believe we are living in a computer simulation will be pleased to learn that, like a computer, this universe needs to remain plugged in.

The c Paradox

In order for c to be the speed of light, light must travel at that speed; however, at that speed, light does not travel; it is already there, where it has always been45. Even as a schoolboy I knew that, if nothing can travel at the speed of light, light must either not travel, or be nothing. It simply didn’t occur to me that both would be true. Nonetheless, a combination of time dilation, which they were so worried about when defining the second46,47, and length contraction, which is to speed as perspective is to distance48, if taken to its logical conclusion, means that, at the speed of light, all meaningful concepts of measurable existence cease to be. At that speed – and only electromagnetic radiation can actually “travel” at that speed – the entire universe is a dimensionless, timeless point in a non-existent void; essentially the initial conditions of the Big Bang, which is where this story started.
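
The runaway behind the paradox is easy to exhibit. A quick sketch of the Lorentz factor (the formula is standard; the speeds are my choice):

```python
# The Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) behind the paradox:
# as v approaches c, time dilation and length contraction both run away.
import math

C = 299_792_458.0  # m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for fraction in (0.5, 0.9, 0.99, 0.999999):
    print(f"v = {fraction:>8} c: gamma = {gamma(fraction * C):,.1f}")
# At v = c exactly, gamma is undefined (division by zero): elapsed time and
# travelled length both vanish, just as the paragraph above describes.
```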

You can see why I say that the Clapham Interpretation has a nice GUT feel to it.

 

Conclusion

As I started with a quote from Feynman, I thought it might be as well to end on one:

“The chance is high that the truth lies in the fashionable direction. But, on the off-chance that it is in another direction — a direction obvious from an unfashionable view of field theory — who will find it? Only someone who has sacrificed himself by teaching himself quantum electrodynamics from a peculiar and unfashionable point of view; one that he may have to invent for himself.”

Richard Feynman,
“The Development of the Space-Time View of Quantum Electrodynamics”, Nobel Lecture (11 December 1965)