# Newton’s First Law is not a special case of his Second Law

When teaching introductory mechanics, it is common to present Newton’s first law of motion (N1) as a special case of the second (N2). In casual classroom lore, N1 addresses the branch of mechanics known as statics (zero acceleration) while N2 addresses dynamics (nonzero acceleration). However, without getting deep into concepts associated with Special and General Relativity, I claim this is not the most natural or effective interpretation of Newton’s first two laws.

N1 is the law of inertia. Historically, it was asserted as a formal launching point for Newton’s other arguments, clarifying misconceptions left over from the time of Aristotle. N1 is a pithy restatement of the principles established by Galileo, principles Newton was keenly aware of. Newton’s original language from the Latin can be translated roughly as “Law I: Every body persists in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by force impressed.” This is attempting to address the question of what “a natural state” of motion is. According to N1, a natural state for an object is not merely being at rest, as Aristotle would have us believe, but rather uniform motion (of which “at rest” is a special case). N1 claims that an object changes its natural state when acted upon by external forces.

N2 then goes on to clarify this point. N2, in Newton’s language as translated from the Latin, was stated as “Law II: The alteration of motion is ever proportional to the motive force impress’d; and is made in the direction of the right line in which that force is impress’d.” In modern language, we would say that the net force acting on an object is equal to its mass times its acceleration, or

$\vec{F}_{\rm net}=m\vec{a}$

In the typical introductory classroom, problems involving N2 would be considered dynamics problems (forgetting about torques for a moment). A net force generates accelerations.

To recover statics, where systems are in equilibrium (again, modulo torques), students and professors of physics frequently back-substitute from here and say something like: in the case where $\vec{a}=0$, clearly we recover N1, which can now be stated something like:

$\vec{a}=0$
if and only if
$\vec{F}_{\rm net}=0$

This latter assertion certainly looks like the mathematical formulation of Newton’s phrasing of N1. Moreover, it seems to follow from the logic of N2; ergo, “N1 is a special case of N2.”

But this is all a bit too ham-fisted for my tastes. Never mind the nonsensical logic of someone as brilliant as Newton starting his three laws of motion with a special case of the second; that alone should give one pause. Moreover, Newton’s original language for the laws of motion is antiquated and doesn’t illuminate the important modern understanding very well. Although he was brilliant, we know more physics now than Newton did and understand his own laws at a deeper level than he did. For example, we have an appreciation for Electricity and Magnetism, Special Relativity, and General Relativity, all of which force one to clearly articulate Newton’s Laws at every turn, sometimes overthrowing them outright. This has forced physicists over the past 150 years to be very careful about how the laws are framed and interpreted in modern terms.

So why isn’t N1 really a special case of N2?

I first gained an appreciation for why N1 is not best thought of as a special case of N2 when viewing the famous educational film Frames of Reference by Patterson Hume and Donald Ivey (below), which I use in my Modern Physics classes when setting up relative motion and frames of reference. Then it really hit home later while teaching a course specifically about Special Relativity from the book of the same name by T.M. Helliwell.

A key modern function of N1 is that it defines inertial frames. Although Newton himself never really addressed inertial frames in his work, this modern interpretation is of central importance in modern physics. Without this way of interpreting it, N1 does functionally become a special case of N2 if you treat pseudoforces as actual forces, that is, if “ma” and the frame kinematics are considered forces. In such a world, N1 is redundant and there really are only two laws of motion (N2 and the third law, N3, which we aren’t discussing here). So why don’t we frame Newton’s laws this way? Why have N1 at all? One might be able to get away with this kind of thinking in a civil engineering class, but forces are very specific things in physics and “ma” is not amongst them.

So why is “ma” not a force and why do we care about defining inertial frames?

Basically, an inertial frame is any frame in which the first law is obeyed. This might sound circular, but it isn’t. I’ve heard people use the word “just” in that first point: “an inertial frame is just any frame where the first law is obeyed.” What’s the big deal? To appreciate the nuance a bit, the modern logic of N1 goes something like this:

if
$\vec{F}_{\rm net}=0$
and
$\vec{a}=0$
then you are in an inertial frame.

Note, this is NOT the same as the special case of N2 stated above with the “if and only if” phrasing:

$\vec{a}=0$
if and only if
$\vec{F}_{\rm net}=0$

That is, N1 is a one-way if-statement that provides a clear test for determining if your frame is inertial. The way you do this is you systematically control all the forces acting on an object and balance them, ensuring that the net force is zero. A very important aspect of this is that the catalog of what constitutes a force must be well defined. Anything called a “force” must be linked back to the four fundamental forces of nature and constitute a direct push or a pull by one of those forces. Once you have actively balanced all the forces, getting a net force of zero, you then experimentally determine if the acceleration is zero. If so, you are in an inertial frame. Note, as I’ve stated before, this does not include any fancier extensions of inertial frames having to do with the Principle of Equivalence. For now, just consider the simpler version of N1.
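The one-way test just described can be condensed into a small sketch. This is purely illustrative (the function names, tolerances, and sample vectors are my own, not part of the argument above):

```python
# A toy sketch of the one-way N1 test: actively balance all the real
# forces on an object, then experimentally check its acceleration.

def net_force(forces):
    """Sum a list of 3-vectors component-wise."""
    return [sum(f[i] for f in forces) for i in range(3)]

def is_zero(vec, tol=1e-9):
    return all(abs(c) < tol for c in vec)

def frame_passes_n1_test(forces, measured_accel):
    """One-way test: balanced real forces AND zero measured acceleration
    are consistent with an inertial frame. The converse is not assumed,
    which is the whole point of the argument above."""
    return is_zero(net_force(forces)) and is_zero(measured_accel)

# Gravity balanced by a normal force, no measured acceleration:
print(frame_passes_n1_test([[0, 0, -9.8], [0, 0, 9.8]], [0, 0, 0]))   # True
# Same balanced forces but a nonzero measured acceleration:
# the frame fails the test and is non-inertial.
print(frame_passes_n1_test([[0, 0, -9.8], [0, 0, 9.8]], [0.5, 0, 0])) # False
```

Note that only forces traceable to the fundamental interactions are allowed into the `forces` list; that catalog discipline is what keeps the test meaningful.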

With this modern logic, you can also take the contrapositive and assert that if your frame is non-inertial, then you can have either i) accelerations in the presence of apparently balanced forces or ii) apparently uniform motion in the presence of unbalanced forces.

The reason for determining if your frame is inertial or not is that N2, the law that determines the dynamics and statics for new systems you care about, is only valid in inertial frames. The catch is that one must use the same criteria for what constitutes a “force” that was used to test N1. That is, all forces must be linked back to the four fundamental forces of nature and constitute a direct push or a pull by one of those forces.

Let’s say you have determined you are in an inertial frame within the tolerances of your experiments. You can then go on to apply N2 to a variety of problems and assert the full, powerful “if and only if” logic between forces and accelerations in the presence of any new forces and accelerations. This now allows you to solve both statics (no acceleration) and dynamics (nonzero acceleration) problems in a responsible and systematic way. I assert that both statics and dynamics are special cases of N2.

If you give up on N1, treat it merely as a special case of N2, and further insist that statics is all N1, this worldview can be accommodated at a price: statics and dynamics can no longer be clearly distinguished, because you haven’t used any metric to determine whether your frame is inertial. If you are in a non-inertial frame but insist on using N2, you will be forced to introduce pseudoforces. These are “forces” that cannot be linked back to pushes and pulls associated with the four fundamental forces of nature. Although it can occasionally be useful to use pseudoforces as if they were real forces, they are physically pathological. For example, every inertial frame will agree on all the forces acting on an object, able to link them back to the same fundamental forces, and thus agree on its state of motion. In contrast, every non-inertial frame will generally require a new set of mysterious and often arbitrary pseudoforces to rationalize the motion. Different non-inertial frames won’t agree on the state of motion and won’t generally agree on whether one is doing statics or dynamics!
As mentioned, pseudoforces can be used in calculation, but it is most useful to do so when you actually know a priori that you are in a known non-inertial frame but wish to pretend it is inertial for practical reasons (for example, the rotating earth creates small pseudoforces such as the Coriolis force, the centrifugal force, and the transverse force, all byproducts of pretending the rotating earth is inertial when it really isn’t).
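Those rotating-frame pseudoforces have standard textbook forms, $\vec{F}_{\rm cen}=-m\,\vec{\omega}\times(\vec{\omega}\times\vec{r})$ and $\vec{F}_{\rm cor}=-2m\,\vec{\omega}\times\vec{v}$, which a short sketch can make concrete (the numerical values below are my own rough illustrative choices):

```python
# Pseudoforces that appear when one pretends a rotating frame,
# such as the rotating earth, is inertial.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def centrifugal(m, omega, r):
    # F = -m * omega x (omega x r)
    return [-m * c for c in cross(omega, cross(omega, r))]

def coriolis(m, omega, v):
    # F = -2m * omega x v
    return [-2.0 * m * c for c in cross(omega, v)]

omega = [0.0, 0.0, 7.292e-5]   # earth's rotation rate about z (rad/s)
r = [6.371e6, 0.0, 0.0]        # 1 kg mass at the equator: distance from axis (m)
v = [0.0, 100.0, 0.0]          # moving 100 m/s along the surface

print(centrifugal(1.0, omega, r))  # points outward (+x), about 0.034 N
print(coriolis(1.0, omega, v))     # also +x here, about 0.0146 N
```

Neither output can be traced to a push or pull from a fundamental interaction; both are pure bookkeeping for the frame’s rotation.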

Here’s a simple example that illustrates why it is important not to treat N1 as a special case of N2. Say Alice places a box on the ground and it doesn’t accelerate; she analyzes the forces in the frame of the box. The long-range gravitational force of the earth on the box pulls down and the normal (contact) force of the surface of the ground on the box pushes up. The normal force and the gravitational force must balance since the box is sitting on the ground not accelerating. OR SO SHE THINKS. The setup said “the ground,” not “the earth.” “The ground” is a locally flat surface upon which Alice stands and places objects like boxes. “The earth” is a planet and is a source of the long-range gravitational field. You cannot be sure that the force you are attributing to gravity really is from a planet pulling you down (indeed, the Principle of Equivalence asserts that one cannot tell, but this is not the key to this puzzle).

Alice has not established that N1 is true in her frame and that she is in an inertial frame. This could cause headaches for her later when she tries to launch spacecraft into orbit. Yes, she thinks she knows all the forces at work on the box, but she hasn’t tested her frame. She really just applied backwards logic on N1 as a special case of N2 and assumed she was in an inertial frame because she observed the acceleration to be zero. This may seem like a “difference without a distinction,” as one of my colleagues put it. Yes, Alice can still do calculations as if the box were in static equilibrium and the acceleration was zero — at least in this particular instance at this moment. However, there is a difference that can indeed come back and bite her if she isn’t more careful.

How? Imagine that Alice was, unbeknownst to her or her ilk, on a large rotating (very light) ringworld (assuming ringworlds were stable and have very little gravity of their own). The inhabitants of the ringworld are unaware they are rotating and believe the rest of the universe is rotating around them (for some reason, they can’t see the other side of the ring). This ringworld frame is non-inertial but, as long as Alice sticks to the surface, it feels just like walking around on a planet. For Bob, an inertial observer outside the ringworld (who has tested N1 directly first), there is only one force on the box: the normal force of the ground that pushes the box towards the center of rotation and keeps the box in circular motion. All other inertial observers will agree with this analysis. This is very clearly a case of applying N2 with accelerations for the inertial observer. The box on the ground is a dynamics problem, not a statics problem. For Alice, who believes she is in an inertial frame by taking N1 to be a special case of N2 (having not tested N1!), there appear to be two forces keeping the box in static equilibrium — it looks like a statics problem. Is this just a harmless attribution error? If it gives the same essential results, what is the harm? Again, in an engineering class, for this one particular box under these conditions, perhaps this is good enough to move on. However, from a physics point of view, it introduces potentially very large problems down the road, both practical and philosophical. The philosophical problem is that Alice has attributed a long-range force where none existed, turning “ma” into a force of nature, which it isn’t. That is, the gravity experienced by the ringworld observer is “artificial”: no physical long-range force is pulling the box “down.” Indeed, “down,” as observed by all inertial observers, is actually “out,” away from the ring. Gravity is a pseudoforce in this context.
There has been a violation of what constitutes a “force” for physical systems and an unphysical, ad hoc, “force” had to be introduced to rationalize the observation of what appears to be zero local acceleration. Again, let us forgo any discussions of the Equivalence Principle here where gravity and accelerations can be entwined in funny ways.

This still might seem harmless at first. But imagine that Alice and her team on the ring fire a rocket upwards, normal to the ground, trying to exit or orbit their “planet” under the assumption that it is a gravitational body that pulls things down. They would find a curious thing: rockets cannot leave their “planet” by being fired straight up, no matter how fast. The rockets always fall back and hit the ground; despite being launched straight up with what seems to be only “gravity” acting on them, the trajectories always bend systematically in one direction and hit the “planet” again. Insisting the box test was a statics problem with N1 as a special case of N2, they have no explanation for the rocket’s behavior except to invent a new, weird horizontal force that only acts on the rocket once launched and depends in weird ways on the rocket’s velocity. There does not seem to be any physical agent of this force and it cannot be attributed to the previously known fundamental forces of nature. There are no obvious sources of this force; it is simply present on an empirical level. In this case, it happens to be a Coriolis force. This, again, might seem an innocent attribution error. Who’s to say their mysterious horizontal forces aren’t “fundamental” for them? But it also implies that every non-inertial frame, every type of ringworld or other non-inertial system, would have a different set of “fundamental forces” and that they are all valid in their own way. This concept is anathema to what physics is about: trying to unify forces rather than catalog many special cases.

In contrast, you and all other inertial observers recognize the situation instantly: once the rocket leaves the surface and loses contact with the ringworld floor, no forces act on it anymore, so it moves in a straight line, hitting the far side of the ring. The ring has rotated some amount in the meantime. The “dynamics” the ring observers see during the rocket launch is actually a statics (acceleration equals zero) problem! So Alice and her crew have it all backwards. Their statics problem of the box on the ground is really a dynamics problem, and their dynamics problem of launching a rocket off their world is really a statics problem! Since they didn’t bother to systematically test N1 and determine whether they were in an inertial frame, the very notions of “statics” and “dynamics” are all turned around.
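This inertial-versus-rotating bookkeeping can be made concrete with a toy calculation (angular speed and trajectory values are assumed purely for illustration): a free “rocket” moving in a straight line in Bob’s inertial frame, re-expressed in Alice’s rotating coordinates.

```python
import math

def to_rotating(x, y, t, omega):
    """Rotate inertial-frame coordinates by -omega*t to get the
    rotating frame's view of the same point."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    return (c * x + s * y, -s * x + c * y)

omega = 0.1                    # ring's angular speed (rad/s), assumed
times = [0.0, 1.0, 2.0, 3.0]

# Force-free, straight-line motion in the inertial frame: y stays 0.
inertial_path = [(1.0 + 0.5 * t, 0.0) for t in times]

# The identical motion seen from the rotating ringworld frame:
rotating_path = [to_rotating(x, y, t, omega)
                 for (x, y), t in zip(inertial_path, times)]

# The rotating-frame path curls away from a straight line, which is
# what tempts the ring observers to invent horizontal pseudoforces.
print(rotating_path)
```

In the inertial frame the path is trivially straight; the curvature seen in `rotating_path` is entirely an artifact of the frame’s rotation, not of any force.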

So, in short, a modern interpretation of Newton’s Laws of motion asserts that N1 is not a special case of N2. First establishing that N1 holds and that your frame is inertial is critical to how one interprets the physics of a problem.

# Coldest cubic meter in the universe

My collaborators in CUORE, the Cryogenic Underground Observatory for Rare Events, at the underground Gran Sasso National Laboratory in Assergi, Italy, have recently created (literally) the coldest cubic meter in the universe. For 15 days in September 2014, cryogenic experts in the collaboration were able to hold roughly one contiguous cubic meter of material at about 6 mK (that is, 0.006 degrees above absolute zero, the coldest possible temperature).

At first, a claim like “this is the coldest cubic meter in the [insert spatial scale like city/state/country/world/universe]” may sound like an exaggeration or a headline-grabbing ruse. What about deep space? What about ice planets? What about nebulae? What about superconductors? Or cold atom traps? However, the claim is absolutely true in the sense that there are no known natural processes that can reliably create temperatures anywhere near 6 mK over a contiguous cubic meter anywhere in the known universe. Cold atom traps, laser cooling, and other remarkable ultracold technologies are able to get systems of atoms down to the bitter pK scale (a trillionth of a degree above absolute zero). However, the key term here is “systems of atoms.” These supercooled systems are indeed tiny collections of atoms in very small spaces, nowhere near a cubic meter. Large, macroscopic superconductors can operate at liquid nitrogen or liquid helium temperatures, but those are very warm compared to what we are talking about here. Even deep space is sitting at a balmy 2.7 K thanks to the cosmic microwave background radiation (CMBR). Some specialized thermodynamic conditions, such as those found in the Boomerang Nebula, may bring things down to a chilly 300-1000 mK because of the extended expansion of gases in a cloud over long times. The CMB cold spot is only 70 microkelvin below the CMBR.

However, the only process capable of reliably bringing a cubic-meter vessel down to 6 mK is sentient creatures actively trying to do so. While nature could do it on its own in principle, via some exotic process or ultra-rare thermal fluctuation, the easiest natural path to such cold swaths of space, statistically sampled over a short 13.8 billion years, is to first evolve life, then evolve sentient creatures who then actively perform the project. So the only other likely way for there to be another competing cubic meter sitting at this temperature somewhere in the universe is for there to be sentient aliens who also made it happen. The idea behind the news angle “the coldest cubic meter” was the brainchild of my collaborator Jon Ouelett, a graduate student in physics at UC Berkeley and member of the CUORE working group responsible for achieving the cooldown. His take on this is written up nicely in his piece on the arXiv entitled The Coldest Cubic Meter in the Known Universe.

I’ve been a member of the CUORE and Cuoricino collaborations since 2004, when I was a postdoc at Lawrence Berkeley Laboratory. I’m now a physics professor at California Polytechnic State University in San Luis Obispo and send undergraduate students to Gran Sasso to help with shifts and other R&D activities during the summers through NSF support. Indeed, my students were at Gran Sasso when the cooldown occurred in September, but were working on another part of the project doing experimental shifts for CUORE-0. CUORE-0 is a precursor to CUORE and is currently running at Gran Sasso. It is cooled down to about 10 mK and is perhaps a top-10 contender for the coldest contiguous 1/20th of a cubic meter in the known universe.

I will write more about CUORE and its true purpose in coming posts.

On a speculative note, one must naturally wonder if this kind of technology could be utilized in large-scale quantum computing or other tests of macroscopic quantum phenomena. While there are many phonon quanta associated with so many crystals at these temperatures (so the system is pretty far from the quantum ground state and has likely decohered on any time scale we could measure), it is still intriguing to ask if some carefully selected macroscopic quantum states of such a large system could be manipulated systematically. Large-mass gravitational wave antennae, or Weber bars, have been cooled to a point where the system can be considered in a coherent quantum state from the right point of view. Such measurements usually take place with sensitive SQUID detectors looking for ultra-small strains in the material. Perhaps this new CUORE technology, involving macroscopic mK-cooled crystal arrays, can be utilized in a similar fashion for a variety of purposes.

# Boltzmann Brains by Agapanthus on Ultima Thule

One of my ambient music pieces, Boltzmann Brains (inspired by the weird physics idea of Boltzmann Brains), was just featured on the Australian radio show and podcast Ultima Thule. It was fun to see the tagline “The suspendedly animated sounds of Peter Challoner, Thomas D Gutierrez and Brian Eno.” (Oh, yeah, and that other guy we almost forgot “Brian Eno”, whoever he is). The Ultima Thule podcast is one of my favorite podcasts and long ago replaced Hearts of Space as my go-to ambient and atmospheric fix, so having my own music appear on the show was a real treat.

# RHIC/AGS User’s Meeting Talk and Yeti Pancakes

I was recently invited to give a talk on neutrinoless double beta decay at the RHIC/AGS User’s Meeting at Brookhaven National Laboratory. The talk was entitled “Neutrinoless Double Beta Decay: Tales from the Underground” and was a basic overview (for other physicists, targeted primarily at graduate students) of neutrino physics and the state of neutrinoless double beta decay. The talk was only 20+5 minutes, so there wasn’t time to get into a lot of detail.

It was great to be back at BNL and see some of my old friends and colleagues. It was particularly nice to see my mentor and friend Professor Dan Cebra again and meet his recent crew of graduate students.

Being asked to give a neutrinoless double beta decay talk at a meeting entirely focused on the details of heavy ion physics is a little like a yeti and a pancake: they are terms not usually used in the same sentence, but somehow it works. The organizers’ motivation was noble. At these meetings, they typically pick a couple of topics in nuclear physics that are outside their usual routine and have someone give them a briefing. This was exactly in that spirit.

To download the Standard Model Lagrangian I used in the talk, visit my old UC Davis site, where you can find pdf and tex versions of it for your own use. If you are interested in investigating the hadron spectra I show in the talk, you can download my demonstration, available in CDF format, from the Wolfram Demonstrations Project. The Feynman diagram for neutrinoless double beta decay was taken from Wikipedia. Most of the other figures are standard figures used in neutrinoless double beta decay talks. As a member of the CUORE collaboration, I used vetted information regarding our data and experiment.

Enjoy!

# Cal Poly Open House, All That Glitters Green and Gold 2014, Faculty Address

I was asked to give the 2014 Cal Poly Open House, All That Glitters Green and Gold, three-minute faculty address to 600+ prospective students and parents for the College of Agriculture, Food, and Environmental Sciences, the College of Architecture and Environmental Design, and the Orfalea College of Business. I somehow managed to get “quantum”, “atheist”, “delocalize”, and “live long and prosper” in there. In hindsight, asking someone from physics to do this for these Colleges is a bit like asking Snape to give the opening address to Hufflepuff. Still, it was great fun and a true honor. Here is a transcript of the speech.

Thank you President Armstrong. Welcome and good morning! I’m Tom Gutierrez, a professor in the Physics Department here at Cal Poly. I’m also currently the advisor for the Society of Physics Students, Sigma Pi Sigma (the physics honor society), and student club AHA (the Alliance of Happy Atheists).

How many of you watch or have seen the TV show The Big Bang Theory? Sadly, in my department it’s basically considered a documentary. I don’t watch it regularly, but to appreciate where I’m coming from: understand that when I first saw it, I mistook it for a NOVA special on how physicists can actually improve their social skills. With that awkward introduction…

Why am I, a physics professor, speaking to you today? I’m here to give you a brief faculty perspective of Cal Poly. Cal Poly is a comprehensive polytechnic university that embraces a Learn-By-Doing philosophy. And physics, the most fundamental of all sciences, is at the very core of this mission. For a comprehensive polytechnic university in the 21st century, physics is the technical analog to the “liberal arts.” All technical majors across all Colleges at the University must take physics and almost all majors allow physics as an elective or as a general education course. This frequently puts my department at the nexus of the University and gives me the pleasure of interacting with a large cross section of our students on a regular basis.

I teach a wide spectrum of courses in the physics department. While it’s true most of my students are from engineering and the College of Science and Math, some of the most hard working and thoughtful students I’ve had have come from the Colleges represented in the session this morning, which include business, animal science, architecture, and forestry majors to name a few. To facilitate the Learn-By-Doing philosophy in practical terms, Cal Poly fosters amongst faculty what is known as the Teacher-Scholar Model. Faculty across all Colleges are carefully selected 1) for their passion for teaching and working with students and 2) for being engaged with active work in their fields. In my own experience, most educational institutions choose one or the other focus for faculty: a professor is either a teacher or a scholar. While there are many fine examples of each amongst today’s universities, one vocation typically suffers at the expense of the other. However, Cal Poly celebrates both forms of professional expression for individual faculty — and this generates a powerful and singular learning environment for the students who come here. Faculty engaged in their fields can bring real-world knowledge and research into the classroom. Conversely, teachers can bring their students and pedagogical wisdom into the real world.

My own work in particle physics, sponsored by the National Science Foundation, has allowed me to bring students to work at an underground lab in Italy and experience the joys of doing cutting-edge science. Students then bring this experience to their jobs and graduate programs. The message I’m getting from my colleagues at other institutions and in industry? “Send us more Cal Poly students!” Faculty at Cal Poly are allied with the student. We want you to graduate as lifelong learners who find a productive career and make a difference in the world. At Cal Poly, we want you to grow as a person and to challenge your pre-existing assumptions about how the world works. We want you to discover your Personal Project; think big, make collaborations, and not just dream, but discover how to translate those dreams into actions.

Anyway, enjoy the rest of your stay and come visit the Physics Department and CoSaM open house in the Baker Science building if you get a chance. May your quantum wave function always remain delocalized. Live long and prosper. Thank you!

# Audiobooks and “reading the book”

I just listened to David Brin’s excellent Uplift War on audiobook, and unapologetically declare to the world that I “read the book.” For full disclosure, I’ve also read Martian Chronicles by Ray Bradbury, all three Hunger Games novels by Suzanne Collins, Sara Gruen’s Water for Elephants, Neverwhere by Neil Gaiman, Packing for Mars by Mary Roach, and Letters to a Young Contrarian by Christopher Hitchens, amongst others. All on audiobook. From a social point of view, if you and I got together to discuss these works, my experience of them would be such that you would not be able to determine by our conversation if I listened to the book or physically read the words on a page. In this sense, I can responsibly claim to have “read the book” even if my eyes never looked at the words. That is, using Brin’s work as an example, unless you asked me to spell the names “Uthacalthing”, “Tymbrimi”, or “Athaclena” (which I did not know how to spell until I looked them up just now) — but then I’d ask you to pronounce them and we’d be even.

Do all books lend themselves to this audio mode of reading? No. Obviously not. Exceptions include works that rely directly on the shapes of words or encoding extra information in the precise layout of the text, font, or presentation. If the work involves lots of pictures, illustrations, data, or equations, audiobooks are not going to work very well. But the bulk of modern fiction lends itself wonderfully to audiobooks as does much non-fiction. Like so many other things in life, one needs to account for individual cases. Also, this equivocation is not appropriate for people (e.g. children) learning to read symbols on the page. An audio experience is not an adequate substitute for that kind of information processing during those fragile formative years. This argument is directed at people who have mastered both reading and listening and are educated adults.

A couple of tangential examples inform the discussion. A formally trained and competent musician can look at a piece of written music and, for all practical purposes, “listen” to it by reading it with their eyes. The audio performance itself, of course, also has aesthetic value for that musician. But it would probably be appropriate for someone in that position to say, in either context, that they “listened” or “heard” the piece even if it merely involved reading the sheet music. Indeed, musicians who can read written music like that do refer to reading sheet music as having “heard” or having “listened” to the piece.

In contrast, many bands we worship refer to “writing” music for their albums. However, rarely are any notes or music written down in any formal sense. Many rock/pop bands “write” music by playing it and piecing together sections into things that sound nice after editing (if they are lucky). Later, some music grad student, desperate to eat and pay rent, will be hired by a company to transcribe the sounds on the album into written notes, so other people without ear training can also play the songs; but that isn’t the way the band itself usually “writes” music — unless you are Yes or Dream Theater. If the Rolling Stones speak of “writing” music for a new album, they almost certainly mean a wanton, drug-infused geriatric orgy in the Caribbean that might have involved Keith Richards bringing his guitar. But the term “writing music” is still used.

We can also reverse the situation and look at words on a page that were meant to be spoken out loud, such as plays. Take Shakespeare. Certainly the stage play is considered a respectable form of literary art and Shakespeare is arguably the greatest writer of the English language. But the plays he wrote were meant, designed, crafted to be read aloud and listened to. Yet we read them.
Can you still read Shakespeare and claim to have experienced the work in an intellectually satisfying way and be conversational about it? Obviously. Does the stage work bring the work to life in a different way? Clearly.

Also, reading words on a page is not itself a magic recipe for intellectual absorption. Reading text can be pathologically passive if one is not actively engaged, and does not imply extra profound and deep understanding. Let me give an example from my own experience in the classroom. I tell students to “read chapter 10” from the text. And, indeed, some do look at it with their eyes and the words are streamed through their thinking in some fashion. But in many cases no cognitive engagement has occurred. By speaking to them, I can tell that they did not, in fact, “read” the text as I meant the term “read.” In this context, “read” did not necessarily literally mean merely looking at the words, although it might conveniently involve that biomechanical process. I really just wanted them to come to class having processed and understood the material provided in the book by whatever means necessary. If that involves listening to the audiobook, it just doesn’t matter to me (although, good luck learning quantum mechanics from an audiobook).

Does watching a movie adaptation of a book count as “reading the book”? Not in my opinion, and putting audiobooks in the same category as movie adaptations of books misses the point. I claim the unabridged audiobook is not, fundamentally, a different medium than the original work — no different than the braille modes of reading that are considered “legitimate” reading. When we read books using the written word we are, in fact, “speaking” the words to ourselves in our heads anyway, exactly as the book is read aloud in an audiobook. A movie, even one adapted to be nearly identical to the book, is usually abridged and has been altered from the original work in fundamentally different ways. Moreover, one is not required to visualize the plot and characters in the same way as one does when reading text or listening to a reading of text.

I am not judging all these different modes or ranking them. They each serve their purpose and can give pleasure and intellectual stimulation in their own way. But I argue that, under many common situations, listening to audiobooks accomplishes the same social and intellectual function as reading text and can thus be responsibly declared a form of “reading the book.”

# Farewell Stuart

I am very saddened today to hear of the sudden passing of my colleague Stuart Freedman. He was a great scientist and a great mentor. I will miss his dry wit and his gift for seeing right to the heart of an issue. As part of his Ph.D. work circa 1972, he and Clauser performed the first experiment to show a violation of Bell’s inequality, demonstrating that quantum mechanics was not only complete but non-local in character. This was at a time when “dabbling” in the foundations of quantum mechanics was not particularly fashionable. However, his ambitious result paved the way for the later celebrated work of Aspect et al. and is sadly often forgotten in such discussions. The breadth of his contributions to science is uncanny, spanning many fields and specialties as he moved from Berkeley to Princeton, Stanford, the University of Chicago, and back to Berkeley. He was a Fellow of the American Physical Society and a Member of the National Academy of Sciences. At Berkeley, he held the prestigious Luis W. Alvarez Chair in Experimental Physics. I knew him best in his recent role as the US spokesman for the CUORE collaboration, having met him in 2005 while I was still a postdoc at Berkeley Lab. His voice of scientific leadership in our work will be greatly missed. It was a privilege to have worked and collaborated with him, and to name him amongst my mentors. Farewell, Stuart. You will be missed.

# A Guided Tour of Your Recently Acquired Vacuum State

P.S. In the talk, I don’t give a photo credit for my ’50s flying car (representing the “guided tour”), which I got from a vintage ad I believe to be in the public domain (e.g. you can get it here, although this isn’t where I downloaded it from). I also did not give credit for the two Feynman diagrams (1 and 2) for the “Golden Channels,” which I got from Wikipedia. The photo of John Ellis is by Josh Thompson and was obtained from Flickr.

# The Universe: A Computer Simulation?

An unpublished paper on the arXiv claims to have formulated a suite of experiments, informed by a particular kind of computer approximation (called “lattice QCD,” or L-QCD), to determine whether the universe we perceive is really just an elaborate computer simulation. It is creating a buzz (e.g. it has been covered by the Skeptics Guide to the Universe, Technology Review, io9, and probably elsewhere).

I have some problems with the paper’s line of argument. But let me make it clear that I have no fundamental problem with the speculation itself. I think it is fun and interesting to ponder the possibility of living in a simulation and to try to formulate experiments to demonstrate it. It is certainly an amusing intellectual exercise and, at least in my own experience, was an occasional topic of my undergraduate years. More recently, Oxford philosopher Nick Bostrom put forth his famous argument in more quasiformal terms, but the idea had been hovering there (probably with a Pink Floyd soundtrack) for a long time.

The paper is not “crackpot,” but it is highly speculative. It uses a legitimate argumentation technique (legitimate if used properly, and the authors basically use it properly) called reductio ad absurdum: reduction to the absurd. Their argument goes like this:

1. Computer simulations of spacetime dynamics, as known to humans, always involve space and time lattices as a stage to perform dynamical approximations (e.g. finite difference methods etc.);
2. Lattice QCD (L-QCD) is a profound example of how (mere) humans have successfully simulated, on a lattice, arguably the most complex and pure sector of the Standard Model: SU(3) color, a.k.a. quantum chromodynamics, the gauge theory that governs the strong nuclear force as experienced by quarks and gluons;
3. L-QCD is not perfect, and is still quite crude in its absolute modern capabilities (I think most people reading these articles, given the hype imparted to L-QCD, would be shocked at how underwhelming L-QCD output actually is, given the extreme amount of computing effort and physics that goes into it). But it is, under the hood, the most physically complete of all computer simulations and should be taken as a proof-of-principle for the hypothetical possibility of bigger and better simulations — if we can do it, even at our humble scale, certainly an übersimulation should be possible with sufficient computing resources;
4. Extrapolating (this is the reductio ad absurdum part), L-QCD for us today implies L-Reality for some other beyond-our-imagination hypercreatures: for we are not to be taken as a special case for what is possible and we got quite a late start into the game as far as this sentience thing goes.
5. Nevertheless, nuanced flaws in the simulation that arise because of the intrinsic latticeworks required by the approximations might be experimentally detectable.

Cute.
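To make point 1 above concrete, here is a toy illustration of my own (nothing from the paper, and vastly simpler than L-QCD): a 1D wave equation evolved with central finite differences on a spatial lattice. The lattice spacing `dx` and time step `dt` are exactly the kind of artificial scaffolding the argument imagines a simulated observer might detect.

```python
import math

def evolve_wave(u_prev, u, c=1.0, dx=0.1, dt=0.05):
    """One leapfrog time step of the 1D wave equation u_tt = c^2 u_xx,
    approximated on a spatial lattice with central finite differences."""
    r2 = (c * dt / dx) ** 2  # (Courant number)^2; need r2 <= 1 for stability
    n = len(u)
    u_next = [0.0] * n       # endpoints held fixed at zero (rigid boundaries)
    for i in range(1, n - 1):
        u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    return u_next

# A Gaussian pulse on a 101-point lattice splits into two traveling pulses.
x = [i * 0.1 for i in range(101)]
u = [math.exp(-((xi - 5.0) ** 2) / 0.5) for xi in x]
u_prev = list(u)  # copying u gives (approximately) zero initial velocity
for _ in range(40):
    u_prev, u = u, evolve_wave(u_prev, u)
```

On such a lattice, waves propagate slightly dispersively compared to the continuum — short wavelengths near the lattice spacing travel at the wrong speed — which is the flavor of “approximation artifact” the paper proposes hunting for in nature.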

Firstly, there is an amusing recursive metacognitive aspect to this discussion that has its own strangeness; it essentially causes the discussion to implode. It is a goddamn hall of mirrors from a hypothesis-testing point of view. This was, I believe, the point Steve Novella was getting at in the SGU discussion. So, let’s set aside the question of whether a simulation could

1. accurately reconstruct a simulation of itself and then
2. proceed to simulate and predict its own real errors and then
3. simulate the actual detection and accurate measurement of the unsimulated real errors.

Follow that? For the byproduct of a simulation to detect that it is part of an ongoing simulation via the artifacts of the main simulation, I think you have to have something like that. I’m not saying it’s not possible, but it is pretty unintuitive and recursive.

My main problem with the argument is this: a discrete or lattice-like character to spacetime, with all of its strange implications, is neither a necessary nor a sufficient condition to conclude we live in a simulation. What it would tell us, if it were identified experimentally, is just that: spacetime has a discrete or lattice-like character. Given the remarkably creative and far-seeing imaginative spirit of the project, it seems strangely naive to use such an immature, vague “simulation = discrete” connection to form a serious hypothesis. There very well may be some way to demonstrate that we live in a simulation (or, phrased more responsibly, to falsify the hypothesis that we don’t live in a simulation), but identifying a lattice-like spacetime structure is not the way. What would be the difference between a simulation and the “real” thing? Basically, a simulation would make errors or have inexplicable quirks that “reality” would not contain. The “lattice approximation errors” approach is pressing along these lines, but it is disappointingly shallow.

The evidence for living in a simulation would have to be much more profound and unsubtle than mere latticeworks to be convincing. Something like, somewhat in a tongue-in-cheek tone:

1. Identifying the equivalent of commented-out lines of code or documentation. This might be a steganographic exercise where one looks for messages buried in the noise floor of fundamental constants, or perhaps in the laws of physics themselves. For example, finding patterns in π sounds like a good lead, a la Contact, but literally everything is in π an infinite number of times, so one needs another strategy — like, perhaps, π lacking certain statistical patterns. If the string 1111 didn’t appear in π at any point we could calculate, this would be stranger than finding “to be or not to be” from Hamlet in ASCII binary;
2. Finding software bugs (not just approximation errors); this might appear as inconsistencies in the laws of physics at different periods of time;
3. Finding dead pixels or places where the hardware just stopped working locally; this might look like a place where the laws of physics spontaneously changed or failed (e.g. not a black hole where there is a known mechanism for the breakdown, but something like “psychics are real”, “prayer works as advertised”, etc.);
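The π item above invites a concrete toy. This is my own illustrative sketch, not anything from the paper: generate digits of π with exact integer arithmetic (a well-known unbounded spigot algorithm) and search them for a chosen string. The point is that any finite string is expected to show up eventually, so a string conspicuously missing at every depth we could compute would be the anomaly.

```python
def pi_digits(n):
    """Return the first n decimal digits of pi as a string ("3141592653..."),
    using an unbounded spigot algorithm (exact integer arithmetic throughout)."""
    digits = []
    q, r, t, j = 1, 180, 60, 2
    while len(digits) < n:
        u, y = 3 * (3 * j + 1) * (3 * j + 2), (q * (27 * j - 12) + 5 * r) // (5 * t)
        digits.append(y)
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u, j + 1)
    return "".join(map(str, digits))

s = pi_digits(1000)
print(s.find("999999"))  # the "Feynman point": six consecutive 9s at decimal position 762
print(s.find("1111"))    # -1 would mean "not found in the first 1000 digits"
```

Of course, 1000 digits proves nothing; the hypothetical anomaly would have to persist out to every computable depth.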

I’m just making stuff up, and I don’t really believe these efforts would bear fruit, but those kinds of things, if demonstrated in a convincing way, would be an indication to me that something just wasn’t right. That said, the laws of physics are remarkably robust: there are no known violations of them (or at least nothing that hasn’t been incorporated into them) despite vigorous testing and active efforts to find flaws.

I would also like to set a concept straight that I heard come up in the SGU discussion: the quantum theoretical notion of the Planck length does not imply any intrinsic clumpiness or discreteness to spacetime, although it is sometimes framed this way in casual physics discussions. The Planck length is the spatial scale where quantum mechanics encounters general relativity in an unavoidable way. In some sense, current formulations of quantum theory and general relativity “predict” the breakdown of spacetime itself at this scale. But, in the usual interpretation, this is just telling us that both theories as they are currently formulated cannot be correct at that scale, which we already hypothesized decades ago — indeed this is the point of the entire project of M-theory/Loop quantum gravity and its derivatives.
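For concreteness, the Planck length is the unique length built from the constants where quantum mechanics (ħ), gravity (G), and relativity (c) meet: $\ell_P=\sqrt{\hbar G/c^3}$. A quick numerical check, using standard CODATA values:

```python
import math

hbar = 1.054571817e-34  # J*s, reduced Planck constant
G = 6.67430e-11         # m^3 kg^-1 s^-2, Newton's gravitational constant
c = 2.99792458e8        # m/s, speed of light

l_planck = math.sqrt(hbar * G / c**3)
print(f"{l_planck:.3e} m")  # ~1.616e-35 m
```

Note that nothing in this formula is a lattice spacing; it is a scale at which the two theories' domains collide, which is the point made below.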

Moreover, even working within known quantum theory and general relativity, to consider the Planck length a “clump” or “smallest unit” of spacetime is not the correct visualization. The Planck length sets a scale of uncertainty. The word “scale” in physics does not imply a hard, discrete boundary, but rather a very, very soft one. It is the opposite of a clump of spacetime. The Planck length is then interpreted as the geometric scale at which spacetime is infinitely fuzzy and statistically uncertain. It does not imply a hard little impenetrable region embedded in some abstract spacetime latticeworks. This breakdown of spacetime occurs at each continuous point in space. That is, one could zoom into any arbitrarily chosen point and observe the uncertainty emerge at the same scale. Again, no latticeworks or lumpiness is implied.