Gray Hair Issue in A Rose For Emily

A Rose for Emily is a classic short story by William Faulkner.  There are spoilers here, so if you haven’t read it, I suggest doing so before proceeding.  It is a fun, quick read.  If you want, you can read the plot summary on the Wikipedia page.  I will identify the plot points I think are important for my analysis, but will assume the reader is familiar with the story.

SPOILER ALERT

Some technical observations

The story has many layers to it: technical, literary, and symbolic.  For example, on a technical level, Faulkner mostly uses an interesting first-person plural point of view.  That is, the story is narrated abstractly by “the town,” which refers to itself as “we,” yet with the tone of an individual: “we” thinks of itself as a single person.  Perhaps this is meant to imply that a single person from the town is telling the story as an old yarn for a passerby on behalf of the rest?  But we are never told who this narrator is or what their actual role is in the story.  They seem to be in on every detail of the plot in an omniscient way that no single person could realistically know.  In any case, this point of view does add a layer of abstraction (for me, anyway).

Another technical twist is how Faulkner really gets us turned around with the timeline.  This type of non-linear plot seems natural in the telling (as if it were told from the collective memory of the entire town).  In fact, the timeline has even been analyzed by computer algorithms to find inconsistencies.

Summary and question

The story is about a woman who killed her lover years ago and has been sleeping with his dead body.  Early in the story it is obvious she killed someone.  Eventually the reader can figure out it is Homer Barron, her lover.  The climax is the realization she has been sleeping with the body.

My question is: how recently had she slept with the body?  My assumption, since I first read the story as a youngster, was that she had been sleeping in that bed with him right up until her death.  But that isn’t consistent with the information in the story.  My conclusion: although she died when she was seventy-four, she must have stopped sleeping with the body when she was in her mid-thirties.  What is my reasoning?

A little preamble

Most of the time in the story, Faulkner is just playing with us.  He wants the reader to believe the townsfolk are just daft and couldn’t figure out there was a body in the house and that she killed someone (or was about to, depending on where we are on the timeline).  Later, when it is mentioned that Homer Barron vanished, we as readers think we have it all figured out.  You could see that coming from a mile away!  How very clever we are!  In fact, you start to question the competence of Faulkner because it looks like he’s going to end with a softball murder mystery.  Sure, the writing is pretty like poetry, but couldn’t he have had a better, less clichéd, plot?


The clues and my case

The cracks in my established assumptions start in Section V, after she dies:

“Already we knew that there was one room in that region above stairs which no one had seen in forty years, and which would have to be forced”

The key terms are “no one had seen in forty years” and “had to be forced.”  Taken literally, “no one” includes her.  That the door had to be forced emphasizes that it wasn’t just locked, but stuck from neglect.  Also, there is no mention of a key.  If Faulkner wanted to emphasize that she could have, in principle, been in the room over the intervening forty years, he only needed to add the adjective “locked” to “door.”  But he didn’t.  Then they bust it down.  Since she died at seventy-four, going forty years back, she had to have been about thirty-four when she was last in that room.

When they bust into the room they find the body of Homer Barron on a decrepit bed.  The piece finishes with the famous climax:

“Then we noticed that in the second pillow was the indentation of a head. One of us lifted something from it, and leaning forward, that faint and invisible dust dry and acrid in the nostrils, we saw a long strand of iron-gray hair.”

Yikes!  It isn’t a murder mystery at all.  We realize that we were supposed to figure out early in the story that she murdered him.  It was a ploy to lull us into a false sense of security.  No, the revelation isn’t just that she murdered him, but that she had been sleeping with him, perhaps even engaging in necrophilia.  Ew!

Right before the climax, we get a description of the pillow:

 “and upon him and upon the pillow beside him lay that even coating of the patient and biding dust.”

Notice that the second pillow, with the iron-gray hair, was as dusty as the rest of the room. I assert it also hadn’t been used for forty years.  Indeed, these are the exact words one would use to describe a pillow that hadn’t been used in decades.

All this implies not only that she had to be about thirty-four when she was last in the room, but also that this was the last time she slept with the body.

Timeline of gray hair development?

Earlier in Section III, he states

“‘I want some poison,’ she said to the druggist. She was over thirty…”

So she must have killed Homer when she was older than thirty.

In Section IV Faulkner describes the evolution of her gray hair and the passage of time:

“When we next saw Miss Emily, she had grown fat and her hair was turning gray. During the next few years it grew grayer and grayer until it attained an even pepper-and-salt iron-gray, when it ceased turning. Up to the day of her death at seventy-four it was still that vigorous iron-gray…”

The prior paragraph describes the period right after her lover, Homer Barron, disappears.  Then “some time” passes.  Then they “next saw Miss Emily,” and her hair is graying; it turns grayer and grayer over the next “few years,” seeming to saturate to iron gray at that point.  Then she does the china-painting when she is about forty, presumably when her hair is already saturated gray.  Then there is an extended period when they don’t see her.  When she dies at age seventy-four, she still has the iron-gray hair.

 It makes you wonder if her early graying had something to do with the stresses of engaging in necrophilia.

So, the timeline of the gray hair on the pillow (as I now interpret it) goes something like this:

  1. “Over thirty:” kills Homer with arsenic, hides the body in the house (smell had to start around here, right?)
  2. Early thirties: the townsfolk next see her again, hair turning gray
  3. Mid-thirties: “the next few years” hair turns grayer and grayer, saturating in an iron gray color
  4. “About forty:” Starts china-painting, hair already iron gray
  5. Mid-sixties: they try to collect taxes; she “vanquished them, horse and foot, just as she had vanquished their fathers thirty years before about the smell”
  6. Forties through seventies: seen occasionally in the window
  7. Seventy-four: she dies; the room is busted open after being closed for forty years; the iron-gray hair is found on the pillow, bringing us back to somewhere around (3), when she last left the iron-gray hair.
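
For the arithmetic-minded, the timeline above boils down to a few simple numbers.  Here is a minimal sketch; the ages are my estimates pulled from the text, not hard facts stated by Faulkner:

```python
# Ages taken or estimated from the text of "A Rose for Emily".
age_at_death = 74        # "her death at seventy-four"
years_room_sealed = 40   # "no one had seen in forty years"
age_bought_poison = 30   # "She was over thirty" at the druggist
age_china_painting = 40  # china-painting lessons at "about forty"

# If no one (including Emily) entered the room for forty years,
# the last time she could have slept with the body is:
age_last_in_room = age_at_death - years_room_sealed
print(age_last_in_room)  # 34

# Consistency check: that age falls after the murder (over thirty)
# and before the china-painting period, matching steps 1-4 above.
assert age_bought_poison < age_last_in_room < age_china_painting
```

So the arithmetic itself is trivial; the argument rests entirely on taking “no one had seen in forty years” literally.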

Anyway, this is very different from my image of her sleeping with the body up to the age of seventy-four.  The story implies that she last slept with the body around age thirty-four, leaving the iron-gray strand on the pillow.  After that, she sealed the room for forty years, until her secret was discovered by the townspeople after she died.


Wrap up

Perhaps all this theory is well known amongst Faulkner scholars and high school English teachers, but I had fun teasing out these clues.

I think I have made a pretty good case, based on the text itself, that she hadn’t slept with the body for about forty years before her death. I’m not sure “if” or “how” this changes any of the story’s message.  Perhaps it implies she herself stopped clinging to the past long ago, but was still willing to let it fester in the sad recesses of her mind.

If you assume she had been sleeping with the body until her death, you have to add extra information not provided: perhaps there was a key, perhaps the townsfolk kicked up dust that landed on the pillow, perhaps she lay softly enough on the pillow not to disturb the dust, perhaps by “no one” Faulkner means “no one but her.”

Faulkner was more about the symbolism of the Old South than murder mysteries.  My observations may highlight details that aren’t important for the basic message.  Still, if my hypothesis holds together, I have another question: why did she stop sleeping with the body when she was thirty-four?

Observations of Cal Poly SLO Veritas Forum 2016, “Can Science Explain Everything”

The Veritas Forum presented a discussion entitled “Can Science Explain Everything” held at Cal Poly, San Luis Obispo (California) on January 27, 2016.  Although it is an independent entity, it appears to be closely related to Cru Central Coast.  Cru is a national Christian organization that typically operates in and around college campuses; it was formerly known as Campus Crusade for Christ.  I’m not sure if there is a formal relationship between Cru and Veritas, but they appear closely connected in our area.  Because the two seemed so entangled, I will probably not be consistent with my language in identifying which components of the event are associated with each organization.

The following commentary is primarily about this specific event, with some references to recent ones at Cal Poly, and not The Veritas Forum nor Cru in general.

In case you don’t want to read the whole thing, here is the TL;DR of my personal position on the event:

Summary: my observations of the event

The meeting was enjoyable, but bigger than I expected.  Both sides were provocative, entertaining, and articulate.  However, the question was loaded and had many unstated major premises.  There was asymmetry between the profile of their speaker and ours.  The speakers talked past each other because most of the terms being used were not well defined.  The question itself should have been “Can Subjective Experiences Be Described Objectively?”  But this is an entirely different talk, one that could be completely secular in nature, diving deep into formal philosophy.

Summary: my answer to the question “Can Science Explain Everything?”

Science can, in principle, provisionally explain things that can be explained.  It cannot explain things that are not explainable.  However, there is (at least) one twist: you don’t know in advance what can and can’t be explained, nor what fraction of explainable things are known.  The best we can do is, as observations arise, assume something is explainable and move forward with tests and more observations.  If something appears unexplainable, then we should still try to figure it out, living with any mystery or uncertainty.  Never declare anything unexplainable; this is a privileged assertion that is unavailable by definition.  My advice?  Avoid filling mysteries with oddly specific answers.  Learn to live with mystery and assume everything can be figured out in principle.  If you find an explanation, treat it as a provisional placeholder, one that can be ejected when better evidence and information come along.


Full(er) report

The Veritas Forum is a yearly event on the Cal Poly campus that aims to facilitate dialogue between the Christian worldview and other views, typically those traditionally in tension with Christianity, like atheism or secularism.  The Veritas Forum is a regular event across Cru-active campuses around the United States, Canada, and some European countries.

The speaker for Cru was Ian Hutchinson, an engineering professor at MIT who specializes in plasma physics.  He is an outspoken advocate of the Christian religion and is keenly interested in the interplay between religion and science.  He focuses primarily on deflating scientism.

Representing the atheist view was Paul Rinzler, a Cal Poly professor in the music department.  He is on the board of directors of Atheists United San Luis Obispo and is also the co-advisor of the Cal Poly student club AHA (Alliance for Happy Atheists).  I am the primary faculty advisor for AHA and have been for about five years.  I have been contacted in the past about being the atheist representative for Veritas and have politely declined.  However, in the past I have recommended Pete Schwartz from the Cal Poly physics department as well as Ken Brown from Cal Poly’s philosophy department; Pete and Ken participated in 2014 and 2015, respectively.  This year, I was invited to participate in a faculty Q&A held on the Thursday afterward (I was not able to attend because of other time commitments).  I should note that AHA is a very small student club with perhaps 15-20 members.


I had the pleasure of having dinner the evening of the event with Paul and Ian, along with student representatives from AHA, Veritas, and Cru (both the local reps and some regional reps).  Even one of my colleagues from the physics department, who is involved in Veritas (and/or Cru?), was at dinner.  I didn’t expect to see her, but it was fun.

First, before getting into my concerns and observations about the event, I would like to make it clear that I found the individuals in Cru (and/or Veritas?), as well as Ian, to be very pleasant and friendly.  We had much in common and had some very nice discussions about side topics peripheral to the main religious theme of the Forum: music, physics, culture, work, “small world” social connections, and so on.  My critique and observations are not judgements of the individuals.  They are a passionate, hardworking group that profoundly believes in what they are doing.  When I was an undergraduate, circa 1989, I regularly participated (as an atheist!) in a Christian Youth Group.  I made some great friends there whom I keep in touch with to this day.  There was real camaraderie, honest discussion, and genuine respect amongst us all.  It was primarily a social group.  The emphasis was on love in the form of philia, brotherly love.  They were very accepting, and I really have fond memories of that period of my life.  However, part of the group’s activities focused on what is referred to as agape, a kind of spiritual love between an individual and god (in this case the Christian god).  I couldn’t easily relate to this.  Naturally, for a bunch of high schoolers and undergraduates, there was plenty of eros to go around as well.

Anyway, Cru and the associated student participants and attendees (going beyond the Cru leadership) reminded me very much of this Youth Group experience.  In fact, their attitudes and personalities felt very natural and comfortable to me for this very reason.  I found their friendliness contagious, and I actually wanted to spend time with them as individuals.  Again, this is largely due to my very positive experiences in the Youth Group, where I never felt judged as an atheist, but rather accepted as a person.

That said, I have concerns about the event and its content that I feel compelled to discuss here.

Despite appearing very open and being named Veritas (“truth”), there is a fundamental dishonesty to the entire event.  This dishonesty is not necessarily a conscious one on the part of the organizers, although there is certainly a marketing angle that must drive it at higher corporate levels.  The Veritas Forum aims to host an honest intellectual discussion between opposing views, and the organizers seem to genuinely want a serious conversation about the topics they propose.

Why do I think it may be dishonest?

Scale

First, some context.  Cru is not a small organization, and Cru and The Veritas Forum bring considerable resources to the event.  They are very professional; it is a well-oiled machine.  This is no small-time operation.  They aim to present themselves as a TED-style or Intelligence Squared experience.  They effortlessly filled the 1200-seat Performing Arts Center (PAC) at Cal Poly, offering free attendance.  They had a full complement of ushers provided by Cal Poly and had access to all the resources available to the venue.  This use of the PAC is very, very expensive.  And this was not the first show on the speaker’s agenda: he had just come from two other Veritas events in other states in the past two days.  Frankly, I had no idea what I was getting into.

Now, this would not be a bad thing by itself.  In fact, it could easily be viewed as a good thing. They bring fairly high profile speakers from their camp.   They are the Lawrence Krausses, the Sam Harrises, and the Michael Shermers of their world: medium level celebrities who have books published and who do many, many speaking engagements on the topics being discussed.  In other words, they are refined professionals with considerable experience in public discussions on the topics of interest.  They have their talking points and messages keenly refined.  Moreover, they have “heard it all.”  They are performers who know how to work their audience.  They know exactly what to expect.

Again, why is this bad?

However, at least for our events, they do not have high-profile atheist speakers.  They select, in coordination with AHA, someone local or on campus to speak for the atheists, usually seeking a local scientist.  This might seem very fair, even generous.  In fact, a certain part of me does think it is cool.  Perhaps it is: it gives local personalities, largely unknown, a chance to shine a bit and gives some public exposure to AHA.  But, as I mentioned above, AHA is a ragtag student club on campus with perhaps 10-20 undergraduate members.  That said, we were billed second amongst the event’s sponsors, after Veritas itself but before ASI (Associated Students Incorporated, the main corporate representation of students on campus, independent of, but strongly tied to, the university).  ASI is basically in charge of managing the event via the student clubs and facilities.  They are the formal interface between external entertainment and the university, which generally disassociates itself from specific events.

But, upon reflection, there is something very odd about these practices.  The local intellectuals are not usually plugged into the main issues being discussed and are not accustomed to speaking about them publicly.  Unless you are trained in the style of the arguments being made, it doesn’t matter whether you are a scientist, even for scientific topics.  Basically, the whole affair is, intentionally or not, biased strongly toward Veritas while superficially seeming fair.  Also, while it may seem generous to put AHA on the same footing as ASI and Veritas, we essentially did nothing.  We were made to feel welcome, allowed to set up a booth, wined and dined, allowed to select a representative, but we never had any input into the logistics of the event or how it would be run.  In short, we were way out of our league.  One is left with the vague sense that the purpose of AHA is really to lend credibility to the event.  By placing a campus atheist club on the same footing as Veritas itself, it gives the perception that the discussion is totally symmetric.  Unless you are familiar with both Veritas and AHA, the asymmetry would not be apparent at all.  Little would you know that AHA, with an operating budget of about $300, has a hard time filling a small classroom once a quarter for a group meeting.  Depending on the club leadership year-to-year, we may not be organized enough to make T-shirts, never mind organize any events with international forum sponsors.

I vacillate between my thoughts on this.  On one hand it seems very warm, open, and generous to allow the local “opposition,” no matter how modest, to participate and be billed as equals.  But another part of me feels uncomfortable with the idea.  However, if I’m honest, I’d be upset if they didn’t coordinate with us, given the topics being discussed.  I guess I can’t really have it both ways, which is why I’m admitting that I’m not entirely sure how I feel about it.

It is this discomfort that has demotivated me from participating in the past.  The irony is that historically I have been viewing these events backwards.  I was seeing Veritas as a fringe group whose views I didn’t want to dignify, on a footing with the rare-earthers, the flat-earthers, or other minority extremists.  I imagined that Veritas was a local operation, a student club.  I didn’t want to lend my “gravitas” as a physics professor to such an event and lend credibility to their arguments.  However, this is, in some sense, backwards.  Their view is the status quo.  They are huge.  It is they who lend gravitas to us.  WE are the small players here.  If anything, we are the ones who should be advertising our association with THEM.  But, in their world, our names lend some marketing credibility.  They can say “atheist organization AHA was involved” or “we had distinguished Dr. YYZ, professor of science XYZ, discuss ABC with our Dr. ZZY.”

Event Title

The title of this forum was quite curious.  “Can Science Explain Everything?” seems an interesting and promising line of discussion.  It certainly got my attention, getting me thinking about it right away, which seems like a good thing, right?  But it is like a leading poll question that hasn’t been vetted properly.  It (unconsciously) primes an answer, and it implies that it is even a good question to ask.

It isn’t a neutral title, and it places science in a defensive position.  The title has more than a few major unstated premises.  For example, has science actually ever claimed to be able to explain everything?  Indeed, can you really refer to science as an entity?  Science isn’t itself a worldview but rather a procedure (however, see my equivocation discussion below).  Why not frame it as “Can Religion Explain Everything?” or “Can Christianity Explain Everything?”  The very asking of a question in this context implies it is a good question to ask: “How Many Radians Can Actually Dance on the Head of a Pin?”, “Is There A Blue Gnome Eating a Yeti in Oregon?”  Such a question implies the existence of blue gnomes, the existence of yetis, that said gnomes could eat said yetis, and that both have a chance of being in Oregon.  Not one of those major premises has been established, but the question itself implies that they have been.

Equivocation of Vocabulary

I think equivocation was a big problem at this event (and this isn’t by any means unique to Veritas; it is common practice in all events of this forum-y kind).  Terms were used inconsistently during the discussion, sometimes sentence-by-sentence by a single speaker: “science,” “explain,” “everything,” “god,” “religion,” “faith,” “Christianity,” “belief,” “know” (e.g. epistemology), “morality,” “meaning,” “love,” “genius,” and so on.  This made things very, very confusing.  I had to constantly fill in my own definition of what those terms meant, as did every other listener.  Yes, I understand you can’t define every word every time you use it; that clearly would not work.  But it seems like you should define some core ones central to the discussion.  By allowing everyone to fill in the blank, one couldn’t help but be biased and hear what one wanted to hear.  Paul at least attempted to make this point: we need to define terms so we know what we are both talking about.  His point was that if you are just having a subjective experience in your head and sharing it as such, it is fine to leave things fuzzy.  But if two people are trying to have an objective discussion, how can that happen if they aren’t using terms the same way?  It is the old chess vs. checkers problem: you are about to place your opponent in a knight fork when suddenly they start jumping your pieces and say “king me.”  If you aren’t playing the same game, how can you possibly begin?  This line of critique was often dismissed, usually respectfully, as attempting to quantify the unquantifiable or to deconstruct the undeconstructable.  But most of these terms were not used in an arbitrary way; rather, they were used in a very specific way that alluded to a specific definition.

I think one of the most important problems was a tendency to conflate “science” with “scientism,” as if they were the same thing.  Most scientists don’t equate scientism with science.  Most don’t know what scientism is, though I’m guessing many would lean in its direction.  Scientism is basically the tendency to put “faith in science,” to hold that it can, in essence, explain everything.  It is science transformed into a blind belief system.  You see some of this creeping into popular culture.  The meme-machine I Fucking Love Science (IFLS) is basically in this category.  Yes, many of the things IFLS promotes are very neat, inspiring, and sometimes mind-blowing.  But it is a scientism honeypot; True Believers flock to it en masse.  It goes beyond just popularizing science, which is basically a good thing (e.g. Carl Sagan or Neil deGrasse Tyson).  IFLS takes an attitude that is just a little immature while being unapologetically zealous.  Nevertheless, scientism is a cultural entity, an opinion, while science itself is a process or method.  If you know the difference, it is very confusing when the terms are used interchangeably.  If you don’t know the difference, it can really distort the discussion.

What it was really about

Let me conclude with an opinion about the content of the discussion.  The most frustrating part of the event, which I can’t really blame Veritas for, is that the discussion danced around its core question.  Although they framed the question in a provocative way, the discussion really had nothing to do with religion or science.  The real question was: “Can you objectively describe a subjective experience?”  Although I’m no expert, this is a well-known problem that comes up when discussing the philosophical nature of consciousness (and artificial intelligence), even in a purely secular context.

For example, the term “qualia” is used to describe the internal subjective sensation of being self-aware.  Through the integrated experience of your brain, your senses, and other internal mental processes, you feel a “real me,” independent of the body, actively engaged in the world.  It is the Cartesian theater and homunculus: a typical modern person might describe being self-aware as a vague sensation of a “little me” watching your experience on a big movie screen in your head, directly behind your eyes.  Of course, this isn’t really the way it works.

When you subscribe to the secular worldview (which I do), there is the trope: “the mind is what the brain does.”  You will usually concede that it is not clear how to objectively describe qualia, an apparently purely subjective experience.  You can measure brain function, make neural maps, measure neurotransmitter levels, and so on (make all the objective measurements you want), but the subjective experience of being aware seems always to be behind a veil.  If you were to make a machine that had all of the objective elements of a conscious being, you would still not be able to establish that it had qualia, even if it described the sensation directly to you.  Indeed, this happens every day: you assume other people have qualia.  People certainly act like they have qualia similar to yours, and there are perhaps good, intuitive reasons to believe it, but it isn’t something that can be quantified with our current abilities and imaginations.

So, the question remains: can we know something exists even if it cannot be described objectively?  Ian’s answer is “yes,” and it applies to, amongst many things, “love” and “faith in the existence of god.”  He would call this “a different kind of knowledge.”  For a secular person, the equivalent would be that we “know” from personal experience that qualia exist (“I think therefore I am” sorts of lines).  Nevertheless, such a subjective experience seems to elude objective description.  So, in some sense, topics like this drive home the point that we secular atheists have to be careful.  We can’t, on one hand, say that qualia are a subjective experience that we know exists (i.e. a form of knowledge) while dismissing other subjective experiences as mere products of the brain.  Perhaps there is a “different kind of knowledge” beyond objective knowledge.

Well, not so fast.

Can science explain everything?  We need to define some terms.

Science: it’s a tool that maps out the consistent structure and patterns of our reality through systematic hypothesis testing and strict evidence-based refinement of those hypotheses.  Remove the culture- and opinion-driven scientism from the argument.  In science as I have defined it, knowledge is always provisional: it is subject to change if new tests or new evidence develop for new ideas.  This is in contrast to faith-based knowledge, which is the acceptance of an idea without systematically testable evidence.  It is a Belief.

Explainable: a claim is explainable when it is systematically testable enough that a consistent model can be developed through one or more lines of evidence.  This model should be able to predict new testable things and fit well into established knowledge of formerly explained things.  Under these conditions, the claim can be provisionally “explained,” subject to ongoing testing and evidence.  When multiple lines of evidence support a claim under some well-defined set of physical conditions, we might call it a Law or a Theory.

Everything: well, it’s everything.  Vast swaths of everything include many strange things beyond our ability to process.  In our context, “everything” can be bundled into “explained,” “unexplained,” “unexplainable,” and a host of categories we don’t even know exist.  In each of these cases, we can break explainability into “known” and “unknown.”  This is going to get very Rumsfeld-ian, so forgive me.  Basically, there are

  1. explainable things we are working on, but don’t know if they are explainable
  2. explainable things we are working on, but suspect are explainable
  3. various categories of explainable things we aren’t working on
  4. unexplainable things we are working on, but don’t know they are unexplainable
  5. explainable things we don’t know exist
  6. unexplainable things we don’t know exist
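
To make the bookkeeping above concrete, here is a small illustrative sketch.  The axis names and labels are my own, chosen purely for illustration:

```python
from itertools import product

# Three hypothetical axes for classifying "everything": whether a thing
# is explainable, whether we know it exists, and whether anyone is
# actively working on it.
def category(explainable: bool, known_to_exist: bool, being_worked_on: bool) -> str:
    return "/".join([
        "explainable" if explainable else "unexplainable",
        "known" if known_to_exist else "unknown",
        "active" if being_worked_on else "idle",
    ])

# Enumerate every combination of the three binary axes.
all_categories = [category(*flags) for flags in product([True, False], repeat=3)]
print(len(all_categories))  # 8 combinations

# The twist from the text: the "explainable" flag is never directly
# observable.  A thing that has resisted explanation so far looks the
# same whether it is genuinely unexplainable or just not yet explained.
```

The point of the sketch is that the full matrix has more cells than the list above can name, and we can never read the first axis directly.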

In addition, we can’t know a priori if something is explainable or not.  To twist the dagger even further, we never know if something is unexplainable.  Basically, we observe things, try to test them with science, and build lines of evidence and models of consistency.  If an observation turns out to be in the unexplainable category, we will never know it.  We can give up trying to explain it (but that might mean it is explainable and we gave up too early), or it can mean that it is genuinely unexplainable.  Both results look the same.  If it can be explained, it will be with science (albeit with provisional knowledge at each phase).

Summary: my observations of the event

The meeting was enjoyable, but bigger than I expected.  Both sides were provocative, entertaining, and articulate. However, the question was loaded and had many unstated major premises.  There was asymmetry between the profile of their speaker compared to ours.  The speakers talked past each other because most terms being used were not well defined.

Summary: my answer to the question “Can Science Explain Everything?”

Science can, in principle and provisionally, explain things that can be explained. It cannot explain things that are not explainable. However, there is (at least) one twist: you don't know in advance what can and can't be explained, nor what fraction of explainable things are known. The best we can do is, as observations arise, assume something is explainable and move forward with tests and more observations. If something appears unexplainable, we should still try to figure it out, living with any mystery or uncertainty. Never declare anything unexplainable; that is a privileged assertion unavailable by definition. My advice? Avoid filling mysteries with oddly specific answers. Learn to live with mystery and assume everything can be figured out in principle. If you find an explanation, treat it as a provisional placeholder, one that can be ejected when better evidence and information come along.

The Best Nest

The Best Nest by P.D. Eastman

The classic children's book The Best Nest by P.D. Eastman, published in 1968, is one of the books that really sticks with me from my childhood.

I recall my mom reading it to me when I was about four or five. I've read it to my kids for years and my four-year-olds particularly adore it. It is the simple story of how a mama and papa bird go through a series of misadventures in an effort to find a new home, only to discover that their original home was really the best one after all. We find out at the end that the mama bird was ready to lay an egg and the whole effort was driven by her motherly instinct to find a safe space for her baby. It is sappy, and reinforces certain gender stereotypes, but is ultimately good-natured. While simple, it does follow the classic hero's journey: after hardship and adventure, you find your way back to where you started as a changed person (or bird, in this case), now wiser to the ways of the world (like not to nest in bell towers). When we got it for my kids years ago, I had instant flashbacks to the artwork, recalling fixations I'd had with certain details that, as an adult, I would never have noticed: the way the straw stuck out in their mouths, the particular hat the mama bird wore, the particular angle and character of the rain that came down on the papa bird at the end. All of it jumped out again.

 


The church in the town featured in The Best Nest

One of the big turning points in the story is when the birds find this wonderful space for their nest. It is huge. It has all sorts of great views of the area. The mother bird thinks it is the best place. However, we, the readers, know that something will go terribly wrong: the space is really a bell tower for a church. The papa bird goes out to find new materials for their nest while the mama sets up shop. Well, sure enough, a funky beatnik proto-hippie guy named Mr. Parker comes to the church and rings the hell out of that bell like he has no other outlet for his life's frustrations. The guy clearly loves his job. The papa bird comes back to find the place littered with bird feathers and no mama bird. He fears the worst and goes on a quest to find her.

Oberlander, R.D. #1, Waldoboro, Ma…

Before they find the bell tower, they look in other places for a new nest. One of the potential nests is a mailbox. Now, as I mentioned, as a kid I had particular fixations on details I would never have seen as an adult; conversely, in reading it to my children, I also found details I would never have found as a kid. For example, one of the reasons they decided not to pick the mailbox is that, while they were checking it out, a mailman comes by and puts some mail into the mailbox. Definitely not an ideal space for a pair of birds.

However, the piece of mail has an address on it (upside down in the text of the book):

…Oberlander
R.D. #1
Waldoboro, Ma…
Circa 2016, there is indeed an [Old] Road 1 in Waldoboro, Maine. There is also an Oberlander family name that appears in that town's older records. That's sort of neat. Naturally, using Google Street View, I wandered around to see if I could find the church with the bell tower. While not definitive, I have two candidates. Sure, these churches are pretty generic shapes for the area. Nevertheless, with a specific town to focus on, you can be pretty sure it must be one of two churches, or a composite, that P.D. used as a template. He could have also just made something up from memory or imagination.

The first one, Broad Bay Congregational Church, has the correct weathervane, the correct three-window structure, a circular region in the middle, and an obvious bell tower. It also has a front that is roughly consistent with the drawing, although obviously updated (e.g. it has two windows on each side of the door).

Waldoboro Broad Bay Congregational Church 941 Main St, Waldoboro, Maine

The second one, Waldoboro United Methodist Church, also has the three-window configuration on the side, has slats near the bell tower similar to those in the drawing in the story (the slats were one of the weirdly specific things I fixated on as a child), and a pointy tower that resembles the one in the drawing. But it does not have the right window configuration, the weathervane, or the circular slats.

Waldoboro United Methodist Church (side view) 85 Friendship Street (Route 220), Waldoboro, Maine


Waldoboro United Methodist Church (front view), 85 Friendship Street (Route 220), Waldoboro, Maine.

My hunch is that the first one, Broad Bay Congregational Church, is the one in the story. I suspect that in the time since P.D. Eastman wrote the story (circa 1968), it has had a few upgrades.

But, as I said earlier, these are very common, generic "Protestant-style" East Coast churches. The story might have nothing to do with these specific churches.

Anyway, I had fun with this little distraction. If anyone knows more about this Easter egg planted by P.D. Eastman, about any connection he may have had to the Waldoboro region, or the reason he might have picked "Oberlander" for the recipient of the letter on R.D. #1, I'd love to hear about it.

Knight in November video

I’ve recently released a new video for the song Knight in November off of the 2012 Agapanthus album Smug. I wrote the lyrics and music in the early 90s and recorded an ambient version of it circa 2002. This version was titled Night in November (not to be confused with the 1994 play by the same name) and also released on Smug. The original lyrics were about one particularly moody journey to Mount Hamilton’s Lick Observatory with a good friend. Driving up to Mount Hamilton was a frequent midnight pilgrimage in my youth while growing up in San Jose.

I recorded a heavier variant of the original song, now called Knight in November, in the summer of 2012. The lyrical verses sound like they just repeat the same phrase five times, but they actually form a set of (mostly) nonsensical homophones. Check out the video below to appreciate the effect. The tune can be downloaded for free from SoundCloud (note that the version on the album and SoundCloud is a slightly different mix than the one in the video).

Also below is the audio for the ambient piece, Night in November, from SoundCloud. Hope you enjoy.

Knight In November from Thomas D. Gutierrez on Vimeo.


Thus spake Rankine: “U” for potential energy

Why is the symbol U often used to represent potential energy? This question came up recently in a faculty discussion.

Before you get too excited, this post won't resolve the issue. However, the earliest use of the letter "U" for potential energy appears in an 1853 paper by William John Macquorn Rankine: "On the general law of the transformation of energy," Proceedings of the Philosophical Society of Glasgow, vol. 3, no. 5, pages 276-280; reprinted in Philosophical Magazine, series 4, vol. 5, no. 30, pages 106-117 (February 1853). The Wikipedia article on potential energy indicates that his article is the first reference to the modern sense of potential energy. Below is the original text (yellow highlighting mine). I think we will have to ask Bill Rankine why he chose the symbol "U":

“Let U denote this potential energy.”
Thus spake Rankine.


The field near a conductor

This post is directed primarily at physics students and instructors and stems from discussions with my colleague Prof. Matt Moelter at Cal Poly, SLO. In introductory electrostatics there is a standard result involving the electric field near conducting and non-conducting surfaces that confuses many students.

Near a non-conducting sheet of charge with charge density \sigma, a straightforward application of Gauss's law gives the result

\vec{E}=\frac{\sigma}{2\epsilon_0}\hat{n}\ \ \ \ (1)

Meanwhile, near the surface of a conductor with charge density \sigma, an application of Gauss's law gives the result

\vec{E}=\frac{\sigma}{\epsilon_0}\hat{n}\ \ \ \ (2)

The latter result comes about because the electric field inside a conductor in electrostatic equilibrium is zero, killing off the flux contribution from the face of the Gaussian pillbox inside the conductor. In the case of the sheet of charge, that same face of the pillbox distinctly contributed to the flux. Both methods are applied locally to small patches of their respective systems.
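The pillbox bookkeeping behind the two results can be made concrete in a few lines. Here is a minimal numeric sketch of my own (the symbol names and sample values are arbitrary illustrative choices, not from any textbook): solve the flux equation \Phi = q_{enc}/\epsilon_0 for E in each case.

```python
# Gauss's-law pillbox bookkeeping (my own sketch; symbols and sample values
# are arbitrary illustrative choices). Solve flux = q_enclosed / eps0 for E.
EPS0 = 8.854e-12  # vacuum permittivity, F/m
sigma = 1.0e-6    # surface charge density, C/m^2
dA = 1.0e-4       # pillbox face area, m^2

q_enclosed = sigma * dA

# Non-conducting sheet: the field pierces BOTH pillbox faces, so flux = 2*E*dA.
E_sheet = q_enclosed / (2 * dA * EPS0)      # reproduces sigma / (2 eps0)

# Conductor: E = 0 inside kills one face's flux, so flux = E*dA only.
E_conductor = q_enclosed / (dA * EPS0)      # reproduces sigma / eps0

assert abs(E_sheet - sigma / (2 * EPS0)) < 1e-9
assert abs(E_conductor - sigma / EPS0) < 1e-9
```

The entire factor of two traces back to how many faces of the pillbox carry flux.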

Although the two equations are derived by the same method, they mean different things, and their superficial resemblance (up to a factor of two) can cause conceptual problems.

In Equation (1), the relationship between \sigma and \vec{E} is causal. That is, the electric field is due directly to the source charge density in question. It does not represent the field due to all sources in the problem, only the lone contribution from that local \sigma.

In Equation (2), the relationship between \sigma and \vec{E} is not a simple causal one; rather, it expresses self-consistency, discussed more below. Here the electric field represents the net field outside the conductor near the charge density in question. In other words, it automatically includes both the contribution from the local patch itself and the contributions from all other sources. It has already added up all the contributions from all the sources in the space around it (this could, in some cases, include sources you weren't aware of!).

How did this happen? First, in contrast to the sheet of charge, where the charges are fixed in space, the charges in a conductor are mobile. They aren't allowed to move while doing the "statics" part of electrostatics, but they are allowed to move in some transient sense to quickly facilitate a steady state. In steady state, the charges have all moved to the surfaces and we can speak of an electrostatic surface distribution on the conductor. This charge mobility always arranges the surface distributions to ensure \vec{E}=0 inside the conductor in electrostatic equilibrium. This is easy enough to implement mathematically, but it gives rise to the subtle state of affairs encountered above. The \sigma on the conductor is responding to the electric fields generated by the presence of other charges in the system, but those other charges are, in turn, responding to the local \sigma in question. Equation (2) then represents a statement of self-consistency, and it breaks the cycle using the power of Gauss's law. As a side note, the electric displacement vector, \vec{D}, plays a similar role in breaking the endless self-consistency cycle of polarization and electric fields in symmetric dielectric systems.

Let’s look at some examples.

Example 1:
Consider a large conducting plate versus large non-conducting sheet of charge. Each surface is of area A. The conductor has total charge Q, as does the non-conducting sheet. Find the electric field of each system. The result will be that the fields are the same for the conductor and non-conductor, but how can this be reconciled with Equation (1) and (2) which, at a glance, seem to give very different answers? See the figure below:

Conductor_1

For the non-conducting sheet, as shown in Figure (B) above, the electric field due to the source charge is given by Equation (1)

\vec{E}_{nc}=\frac{\sigma_{nc}}{2\epsilon_0}\hat{n}

where

\sigma_{nc}\equiv\sigma=Q/A

(“nc” for non-conducting) and \hat{n}=+\hat{z} above the positive surface and \hat{n}=-\hat{z} below it.

Now, in the case of the conductor, shown in Figure (A), Equation (2) tells us the net value of the field outside the conductor. This net value is expressed, remarkably, only in terms of the local charge density; but remember, for a conductor, the local charge density contains information about the entire set of sources in the space. At a glance, it seems the electric field might be twice the value of the non-conducting sheet. But no! This is because the charge density will be different from the non-conducting case. For the conductor, the charge responds to the presence of the other charges and spreads out uniformly over both the top and bottom surfaces; this ensures \vec{E}=0 inside the conductor. In this context, it is worth pointing out that there are no infinitely thin conductors. Infinitely thin sheets of charge are fine, but not conductors. There are always two faces to a thin conducting surface, and the surface charge density must be (at least tacitly) specified on each. Even if a problem uses language that implies the conducting surface is infinitely thin, it can't be.

For example, the following figure, which shows an "infinitely thin conducting surface with charge density \sigma" and then applies Equation (2) to determine the field, makes no sense:

nonsenseconductor copy

This application of Equation (2) cannot be reconciled with Equation (1). We can’t have it both ways. An “infinitely thin conductor” isn’t a conductor at all and should reduce to Equation (1). To be a conductor, even a thin one, there needs to be (at least implicitly) two surfaces and a material medium we call “the conductor” that is independent of the charge.

Back to the example.

Conductor_1

If the charge Q is spread out uniformly over both sides of the conductor in Figure (A), the charge density for the conductor is then

\sigma_c=Q/2A=\sigma_{nc}/2=\sigma/2

(“c” for conducting). The factor of 2 comes in because each face has area A and the charge spreads evenly across both. Equation (2) now tells us what the field outside the conductor is. This isn’t just for the one face, but includes the net contributions from all sources

\vec{E}_{c}=\frac{\sigma_c}{\epsilon_0}\hat{n}=\frac{\sigma_{nc}}{2\epsilon_0}\hat{n}=\vec{E}_{nc}.

That is, the net field is the same for each case,

\vec{E}_{c}=\vec{E}_{nc}.

Even though Equations (1) and (2) might seem superficially inconsistent with each other for this situation, they give the same answer, although for different reasons. Equation (1) gives the electric field that results directly from \sigma alone. Equation (2) gives a self-consistent net field outside the conductor, which uses information contained in the local charge density. The key is understanding that the surface charge densities used for the sheet of charge and the conductor are different. In the case of a charged sheet, we have the freedom to declare a surface with a fixed, unchanging charge density. With a conductor, we have less, if any, control over what the charges do once we place them on the surfaces.
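As a sanity check of Example 1, here is a short numeric sketch of my own (the helper name sheet_field and the sample values are arbitrary choices): superimpose the Equation (1) fields of the conductor's two faces and confirm they reproduce both the Equation (2) result outside and \vec{E}=0 inside.

```python
# A sanity check for Example 1 (my own sketch; helper name and sample values
# are arbitrary illustrative choices). Superimpose the Eq. (1) fields of the
# conductor's two faces and compare with Eq. (2) and the non-conducting sheet.
EPS0 = 8.854e-12    # vacuum permittivity, F/m
Q, A = 1.0e-6, 1.0  # total charge (C) and face area (m^2)

def sheet_field(sigma):
    """Field magnitude of one infinite sheet of charge, Equation (1)."""
    return sigma / (2 * EPS0)

# Non-conducting sheet: all of Q sits on a single surface.
E_nc = sheet_field(Q / A)

# Conductor: Q spreads over both faces, so sigma_c = Q / (2A). Outside, the
# two face contributions add; inside the conductor, they cancel.
sigma_c = Q / (2 * A)
E_outside = sheet_field(sigma_c) + sheet_field(sigma_c)
E_inside = sheet_field(sigma_c) - sheet_field(sigma_c)

assert E_inside == 0.0                          # E = 0 inside the conductor
assert abs(E_outside - E_nc) < 1e-9             # same net field as the sheet
assert abs(E_outside - sigma_c / EPS0) < 1e-9   # agrees with Equation (2)
```

The superposition of the two faces lands exactly on the Equation (2) value, which is the self-consistency the text describes.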

It is worth noting that each individual surface of charge on the conductor has a causal contribution to the field still given by Equation (1), but only once the surface densities have been determined — with one important footnote. The net field in each region can be determined by adding up all the (shown) individual contributions in superposition only if the charges shown are the only charges in the problem and were allowed to relax into this equilibrium state due to the charges explicitly shown. This last point will be illustrated in an example at the end of this post. It turns out that you can’t just declare arbitrary charge distributions on conductors and expect those same charges you placed to be solely responsible for it. There may be “hidden sources” if you insist on keeping your favorite arbitrary distribution on a conductor. If you do, you must also account for those contributions if you want to determine the net field by superposition. However, all is not lost: amazingly, Equation (2) still accounts for those hidden sources for the net field! With Equation (2) you don’t need to know the individual fields from all sources in order to determine the net field. The local charge density on the conductor already includes this information!

Example 2:
Compare the field between a parallel plate capacitor with thin conducting sheets, each having charge \pm Q and area A, with the field between two non-conducting sheets of charge with charge \pm Q and area A. This situation is a standard textbook problem and forms the template for virtually all introductory capacitor systems. The result is that the field between the conducting plates is the same as the field between the non-conducting charge sheets, as shown in the figure below. But how can this be reconciled with Equations (1) and (2)? We use a treatment similar to that in Example 1.

Plates

Between the two non-conducting sheets, as shown in Figure (D), the top positive sheet has a field given by Equation (1), pointing down (call this the -\hat{z} direction). The bottom negative sheet also has a field given by Equation (1), and it also points down. The charge density on the positive surface is given by \sigma=Q/A. We superimpose the two fields to get the net result

\vec{E}=\vec{E}_{1}+\vec{E}_2=\frac{+\sigma}{2\epsilon_0}(-\hat{z})+\frac{-\sigma}{2\epsilon_0}(+\hat{z})=-\frac{\sigma}{\epsilon_0}\hat{z}.

Above the top positive non-conducting sheet, the field points up due to the top sheet and down due to the bottom negative sheet. By Equation (1) they have equal magnitude, so the fields cancel in this region after superposition. The fields cancel in a similar fashion below the bottom non-conducting sheet.

Unfortunately, the setup for the conductor, shown in Figure (C), is framed in an apparently ambiguous way. However, this kind of language is typical in textbooks. Where is this charge residing, exactly? If this is not interpreted carefully, it can lead to inconsistencies like those of the "infinitely thin conductor" above. The first thing to appreciate is that, unlike the nailed-down charge on the non-conducting sheets, the charge densities on the parallel conducting plates are necessarily the result of responding to each other. The term "capacitor" also implies that we start with neutral conductors and do work moving charge from one plate to the other, leaving the plates with equal and opposite charges. Next, we recognize that even thin conducting sheets have two sides. That is, the top sheet has a top and bottom, and the bottom conducting sheet also has a top and bottom. Since the conducting plates have equal and opposite charges, and those charges are responding to each other, they will be attracted to each other and thus reside on the faces that point at each other. The outer faces will contain no charge at all. That is, the \sigma=Q/A from the top plate is on that plate's bottom surface, with none on the top surface. Notice that, unlike Example 1, the conductor has the same charge density as its non-conducting counterpart. Similarly for the bottom plate, but with the signs reversed. A quick application of Gauss's law can also demonstrate the same conclusion.

With this in mind, we are left with a little puzzle. Since we know the charge densities, do we jump right to the answer using Equation (2)? Or do we now worry about the individual contributions of each plate using Equation (1) and superimpose them to get the net field? The choice is yours. The easiest path is to just use Equation (2) and write down the results in each region. Above and below all the plates, \sigma=0 so \vec{E}=0; again, Equation (2) has already done the superposition of the individual plates for us. In the middle, we can use either plate (but not both added together…remember, this isn't superposition!). If we used the top plate, we would get

\vec{E}=\frac{\sigma}{\epsilon_0}(-\hat{z})=-\frac{\sigma}{\epsilon_0}\hat{z}

and if we used the bottom plate alone, we would get

\vec{E}=\frac{-\sigma}{\epsilon_0}\hat{z}=-\frac{\sigma}{\epsilon_0}\hat{z}.

They both give the same individual result, which is the same as the result in the non-conducting sheet case above, where we added individual contributions.
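The same bookkeeping can be checked numerically. Here is a sketch of my own (the plate positions, helper names, and sample \sigma are arbitrary choices) that superimposes the signed Equation (1) fields of the two charged faces and recovers zero field outside the plates and -\sigma/\epsilon_0 between them.

```python
# Superposition check for the capacitor of Example 2 (my own sketch; plate
# positions, helper names, and the sample sigma are arbitrary choices).
EPS0 = 8.854e-12  # vacuum permittivity, F/m
sigma = 1.0e-6    # charge density on the inner face of each plate, C/m^2
z_top, z_bot = 1.0, 0.0  # heights of the +sigma (top) and -sigma (bottom) faces, m

def sheet_Ez(sigma_sheet, z, z_sheet):
    """Signed z-component of an infinite sheet's field at height z, Eq. (1)."""
    sign = 1.0 if z > z_sheet else -1.0  # field points away from positive charge
    return sign * sigma_sheet / (2 * EPS0)

def net_Ez(z):
    """Net field: superpose the two charged faces."""
    return sheet_Ez(+sigma, z, z_top) + sheet_Ez(-sigma, z, z_bot)

assert net_Ez(2.0) == 0.0                       # above both plates
assert net_Ez(-1.0) == 0.0                      # below both plates
assert abs(net_Ez(0.5) + sigma / EPS0) < 1e-6   # -sigma/eps0 between the plates
```

The superposition route agrees region by region with the direct Equation (2) reading above.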

If we were asked "what is the force of the top plate on the bottom plate?" we actually do need to know the field due to the charge on the single top plate alone and apply it to the charge on the second plate. In this case, we are not interested in the total field due to all charges in the space as given by Equation (2). Instead, the field due to the single top plate would indeed be given by Equation (1), as would the field due to the single bottom plate. We could then go on to superimpose those fields in each region to obtain the same result. That is, once the charge distributions are established, we can substitute sheets of non-conducting charge in place of the conducting plates and use those field configurations in future calculations of energy, force, etc.

However, not all charge distributions for the conductor are the same. A strange consequence of all this is that, despite the fact that Example 1 gave us a conductor configuration equivalent to a single non-conducting sheet, this same conductor can't just be transported in and made into a capacitor, as shown in the next figure:

Conductor_3

On a conductor, we simply don't have the freedom to invent a charge distribution, declare "this is a parallel plate capacitor," and then assume the charges are consistent with that assertion. A charge configuration like Figure (E) isn't a parallel plate capacitor in the usual parlance, although the capacitance of such a system could certainly be calculated. If we were to apply Equation (1) to each surface and superimpose the results in each region, we might conclude that it had the same field as a parallel plate capacitor and that Figure (E) was incorrect, particularly in the regions above and below the plates. However, Equation (2) tells us that the field in the regions above and below the plates cannot be zero, despite what a quick application of Equation (1) might make us believe. What this tells us is that there must be unseen sources in the space, off stage, that are facilitating the ongoing maintenance of this configuration. In other words, charges on conducting plates would not configure themselves in such a way unless there were influences other than the charges shown. If we just invent a charge distribution and impose it onto a conductor, we must be prepared to justify it via other sources, applied potentials, external fields, and so on.

So, even though plate (5) in Figure (E) was shown to be the same as a single non-conducting plate, we can’t just make substitutions like those in this figure. We can do this with sheets of charge, but not with other conductors. Yes, the configuration in Figure (E) is physically possible, it just isn’t the same as a parallel plate capacitor, even though each element analyzed in isolation makes it seem like it would be the same.

In short, Equations (1) and (2) are very different kinds of expressions. Equation (1) is a causal one that can be used in conjunction with the superposition principle: one is calculating a single electric field due to some source charge density. Equation (2) is more subtle and is a statement of self-consistency under the assumptions of a conductor in equilibrium. An application of Equation (2) for a conductor gives the net field due to all sources, not just the field due to the conducting patch with charge density \sigma: it comes "pre-superimposed" for you.

Quick 4-dimensional visualization

How can you visualize a 4th spatial dimension? There has been much written and discussed on this topic; I won’t pretend that this post will compete with the vast resources available online. However, I do feel that I can contribute one small visualization trick for hypercubes that, for some reason, has not been emphasized very much elsewhere (although it is out there), which helped me get a foothold into the situation.

My first exposure as a kid to the topic of visualizing higher dimensions was given by Carl Sagan on the original Cosmos. In it, he introduces a hypercube called a tesseract:

While Cosmos is an inspirational introduction, it isn't very complete. Still, there are many great resources on the web to help appreciate and understand the tesseract on many levels, from rotations to inversions and beyond. Tesseracts are part of a larger class of very cool objects known as polytopes. You are one Google search away from vast resources on this topic. I won't even bother compiling links.

What I hope to accomplish is to give you an intellectual foothold into the visualization, which will help considerably as you delve further into the topic.

Below is an image of a tesseract taken from the Wikipedia page on tesseracts
Schlegel_wireframe_8-cell

In what sense is this object a hypercube? Well, strictly speaking, this object is not a tesseract or hypercube. Technically, it is a two-dimensional projection (i.e. it is on this web page) of a three-dimensional shadow (the wireframe object if it were in 3D) of a 4D hypercube.

But how exactly can this object help us see into a 4th spatial dimension? Here is a visualization trick I’ve found most helpful for me:

Let’s start with something familiar. I can draw two parallelograms, one larger than the other, then connect the corresponding vertices. One’s mind will quickly interpret this as a cube as viewed from some angle, although it is just a two dimensional thing on a page. Your mind naturally views the (slightly) smaller parallelogram as being the same actual size as the larger one. It just looks smaller because we interpret it as farther away, thanks to perspective. Furthermore, all the angles, although drawn otherwise, are interpreted as right angles. The description makes it sound more complex than it is; it is just the representation of an ordinary cube viewed from some angle outside the page:

Cube1

In the drawing, the parallelograms are almost the same size, so it is easy to flip back and forth between which one is the “front” face and which is the “back” face, generating weird distortions if it is viewed “incorrectly.”

Now, I rotate the cube so we are looking directly down one face. Think of this drawing as looking down a crude wireframe corridor:
Cube2

However, on the page it is really just two nested squares with connected vertices. Still, one’s brain fills in the three dimensional details pretty naturally. Viewed this way, the smaller square is just further away and the angles are all right angles. If the smaller square were made smaller, even going to a point, you could imagine that the end of the corridor was just very far away.

The tesseract projection is not really much different:

Schlegel_wireframe_8-cell

The visualization tool to remember is that the smaller cube only looks smaller because of perspective: the two cubes are actually the same size but the smaller cube only looks smaller because it is farther away. Further away in what direction? Into a 4th spatial dimension! When looking at the tesseract projection, think of it as looking “down” a kind of wireframe corridor directed such that the farthest point is actually at the mutual center of the cubes. This is the same sense that an ordinary long corridor drawn in two dimensions would have the far point (at infinity) located at the center of the squares. This mutual center is then interpreted as pointing in a direction not in ordinary three dimensional space; indeed, all six faces of the larger cube look “down” this corridor toward the other end. If you had such a hypercube in your living room, each of the six faces would act as a separate corridor directed towards the far point in a fourth spatial dimension. If your friend walked into the cube and continued down the corridor, they would not exit on the other side of the cube in your living room but rather would get smaller and smaller walking toward the center of the cube.
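For the computationally inclined, this "perspective" reading can be made literal. Here is a small sketch of my own (the eye distance eye_w and the unit coordinates are arbitrary choices): each 4D vertex is scaled by its distance from a virtual eye along the w axis, so the far cube (w = -1) shrinks toward the common center exactly as described above.

```python
# Perspective projection of a tesseract (my own sketch; the eye distance
# eye_w and the unit coordinates are arbitrary choices).
from itertools import product

def project_4d_to_3d(vertex, eye_w=3.0):
    """Perspective-divide a 4D point (x, y, z, w) into our 3D space."""
    x, y, z, w = vertex
    scale = eye_w / (eye_w - w)  # points farther away in w get a smaller scale
    return (x * scale, y * scale, z * scale)

# The 16 vertices of a tesseract centered at the origin: all sign choices of
# (+/-1, +/-1, +/-1, +/-1). Edges join vertices differing in one coordinate.
vertices = list(product([-1.0, 1.0], repeat=4))

near_corner = project_4d_to_3d((1.0, 1.0, 1.0, 1.0))   # near cube, w = +1
far_corner = project_4d_to_3d((1.0, 1.0, 1.0, -1.0))   # far cube, w = -1

# The far cube projects strictly inside the near cube: the nested-cube picture.
assert all(abs(f) < abs(n) for f, n in zip(far_corner, near_corner))
```

Connecting the projected vertices that differ in exactly one coordinate reproduces the nested-cube wireframe in the figure above.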

If you were the one doing the walking, it would be just like walking down a corridor into another room, albeit one that was entirely embedded — from all directions — within another one in three dimensions.

This is basically the idea behind Doctor Who's TARDIS, as explained by the Doctor himself (although in his usual curt and opaque way):

You could think of the outer cube as a crude 2 x 2 x 2 meter exterior of a TARDIS. The inner cube might be a 2 x 2 x 2 meter room inside the TARDIS (the same shape as the outside), 2 meters away into the 4th dimension. However, the TARDIS isn't a mere hypercube. It has rooms inside that are bigger than the outside of the TARDIS. But to get them to fit inside the outer cube, you just put them farther away into the extra dimension. That is, you can visualize the inner cube as being a 100 x 100 x 100 meter room inside a 2 x 2 x 2 meter exterior box: just imagine you are 1000 meters away looking "down" the corridor of the 4th dimension, so the giant room looks small and thus fits fine into the exterior. This is exactly the point the Doctor is trying to make in the clip.

This idea was also a part of the plot of Stranger in a Strange Land by Heinlein. Valentine Michael Smith can make things vanish into a fourth dimension. The effect, as viewed by all observers in our own three dimensions, is to see the object get smaller and smaller from all angles until it vanishes. This is akin to walking down the corridor of the tesseract towards the center. The object appears to get smaller only because it is further away in this other direction outside of our usual three.

In my opinion, visualizing the tesseract as looking down a corridor into another spatial dimension with added perspective is the best first step in appreciating higher dimensional thinking. Here is a neat looking game that emphasizes the perspective approach and gives some practical practice with these ideas.

Update: Sean Carroll also just posted something on tesseracts on his blog Nov. 7.

Sexual harassment in NYC measuring mental illness?

An upsetting video (SFW) by Rob Bliss shows a woman being repeatedly verbally harassed as she walks the streets of New York City. The video is an edited sample of a 10 hour experiment. The actress, Shoshana B. Roberts, and Bliss were working on a project for Hollaback, an advocacy group trying to end street harassment. According to Bliss, who used a hidden camera and discreetly walked several paces in front of Roberts, the actress was harassed about 100 times during her 10 hour walk around the City (not all are shown in the video). In the video, one can clearly see Roberts simply walking and looking forward, minding her own business, not engaging or inviting conversation or interaction. Yet various men constantly vie for her attention, sometimes very aggressively, using a spectrum of nearly universally inappropriate strategies. This included many expressions like unsolicited neutral comments, catcalls, inappropriate remarks (usually about her looks), aggressive talking, shouting, following, and so on. The Washington Post has a good article summarizing the project and players. Here is the original video

A similar project was done on The Daily Show by comedian and correspondent Jessica Williams.

I personally found the videos very disturbing and significant on many levels. They have helped me appreciate the issues women face while just walking from point A to B. Yes, as a man I have to navigate the occasional nuisance while walking along the street, but nothing like those shown in the videos. If these projects represent typical experiences for women, this represents a serious social problem. Even if it is atypical, a notion these videos do not support (the women in the videos seem “typical” — for example, no one is a recognizable popular celebrity whose presence might be especially socially disruptive), it is still upsetting. No one should need to experience interactions like that just walking around (including celebrities).

While emotionally impactful, it is important to realize the videos in no way represent a scientific experiment. There is no baseline measurement or control group. However, the video below might be a pretty decent effort as a control experiment:

In all seriousness, despite a lack of scientific rigor, I am willing to accept that the videos are broadly representative of the experiences many women have walking around. They demonstrate to me that the harassment is real, unsolicited, annoying, and occasionally terrifying. No one should have to put up with behavior like that and it is a terrible thing to be subjected to. We, as a society, need to figure out how to understand and manage this.

Other than the fact that all the harassers were men, one rather conspicuous thing jumped out at me while watching these videos: the men in the video seemed to be mentally and/or emotionally ill individuals. This in no way justifies their behavior and the harassment is clearly real. But seriously, what kind of person just starts randomly talking to another person about ANYTHING as they walk down the street, with no other context, demanding all of their attention? Someone who is mentally ill, practically by definition. Sure, talking to someone randomly on the street is occasionally appropriate. The annoying sales person can be given a legitimate excuse, even if frustrating. A panhandler is perhaps also in a special category (panhandling is not necessarily acceptable, but it is understood to a degree). Yes, the occasional “hello” or “have a good day” to a stranger might work when it is natural, which it usually isn’t while just walking down the street minding your own business. That the harassers were mostly non-white men in Bliss’s video is likely a selection bias on the part of the editor. That they were all men, though, suggests the problem is specific to males.

In the videos, the perpetrators seem to be men who lack self control, who genuinely can’t manage their own impulses, physical and verbal, who don’t understand social conventions and basic etiquette. Self-evidently, they are men who lack empathy for or understanding of another person’s physical and emotional space. It is as if they have some kind of aggressive nervous tic they can’t control. The adult human mind is full of noise; there are impulses coming from many sectors of the psyche. However, most people, emotionally and mentally healthy adults, men and women alike of all walks of life, learn how to manage those internal impulses. Adults who can’t do that usually have some kind of brain damage, perhaps to the frontal lobe where impulse control is seated, or are not emotionally or mentally healthy in some other way.

A back-of-the-envelope calculation is worth doing. How many people does one expect to be in “interaction range” during a 10 hour excursion in New York City, and what percentage of that number is the observed 100 harassments? This will help set the scale for the fraction of individuals harassing these women.

1) The population density of New York City: 26403 people per square mile ~ 0.01 people per square meter ~ 1 person per 100 square meters

2) 100 square meters might be regarded as a sensible “interaction zone” around a typical person walking around: +/- 5 meters in each direction

3) The typical walking speed of a person is around: 1.5 m/s

4) Imagine breaking New York City into a grid of 10 x 10 meter squares

5) The time to traverse 10 meters and move to one unique 10 m x 10 m cell: about 6.67 seconds

6) There are 3600 seconds in 1 hour, so a 10 hour walk (36000 seconds) covers about 5400 unique cells, sampling about 5400 unique people on average in New York

7) If there were 100 harassment events/5400 persons during the walk in the video, this is about a 1.8% or 2% effect

That is, about 2% of the people Roberts interacted with during her excursion with Bliss harassed her to various degrees, violating her personal mental and emotional state. Again, this obviously isn’t scientific, but rather just a back-of-the-envelope. If I had to guess, I would say I underestimated the number of unique people per square meter one encounters on the street during the day in NYC. In other words, 2% is probably high.
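For those who want to check the arithmetic, the seven steps above can be strung together in a few lines of Python. This is just a sketch of the same back-of-the-envelope numbers; the only input not quoted above is the conversion factor of about 2.59 million square meters per square mile.

```python
# Back-of-the-envelope estimate of the fraction of passersby who harassed
# Roberts during the 10 hour walk, using the rough inputs from the text.

SQ_METERS_PER_SQ_MILE = 2.59e6
density = 26403 / SQ_METERS_PER_SQ_MILE       # people per square meter (~0.01)

cell_side = 10.0                              # 10 m x 10 m grid cell
cell_area = cell_side ** 2                    # 100 m^2 "interaction zone"
people_per_cell = density * cell_area         # ~1 person per cell

walking_speed = 1.5                           # m/s
time_per_cell = cell_side / walking_speed     # ~6.67 s to cross one cell

walk_seconds = 10 * 3600                      # a 10 hour walk
cells_visited = walk_seconds / time_per_cell  # ~5400 unique cells
people_sampled = cells_visited * people_per_cell

harassments = 100
rate = harassments / people_sampled

print(f"people sampled: {people_sampled:.0f}")
print(f"harassment rate: {rate:.1%}")
```

Running this reproduces the roughly 2% figure quoted above.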

If you asked me in advance “what fraction of people in New York City have mental problems involving a pathological lack of self control?” I would likely have guessed something like 10%. So, I could easily believe that the 2% number is looking at a subset of that group, representing adult men whose mental illness, emotional illness, and excessive lack of self control is particularly aggressive and directed towards women. Since roughly half the population is male, this 2% of everyone represents about 4% of the male population. This, I believe, is what these videos are measuring: a mental health problem specific to some men. It also explains the relative uniformity of the distribution across New York City, a point emphasized in Williams’s video.

The good news is, if it is a specific kind of mental health problem intrinsic to some population of men, and not some completely ill-defined problem, then perhaps this points to a strategy to help organizations like Hollaback end the awful street harassment many women experience.

Let me clarify that:
I’m in no way claiming that all harassment directed toward women across all social and cultural modes is due to mental illness alone; the causes of harassment are surely complex, perhaps involving dysfunctional socialized behaviors trained from early childhood, personality disorders, and other extensions of “healthy” mental states that are not a form of mental illness per se. I also hope I have not given the impression that I am rationalizing away the effect or removing the element of personal responsibility from perpetrators. I’m merely proposing that one contribution to the problem, particularly in the context of aggressive street harassment of the sort shown in the video, may be a particular form of mental illness. I’m suggesting that scientifically exploring this contribution, by trained professionals, may be worthwhile.

Newton’s First Law is not a special case of his Second Law

When teaching introductory mechanics in physics, it is common to teach Newton’s first law of motion (N1) as a special case of the second (N2). In casual classroom lore, N1 addresses the branch of mechanics known as statics (zero acceleration) while N2 addresses dynamics (nonzero acceleration). However, without getting deep into concepts associated with Special and General Relativity, I claim this is not the most natural or effective interpretation of Newton’s first two laws.

N1 is the law of inertia. Historically, it was asserted as a formal launching point for Newton’s other arguments, clarifying misconceptions left over from the time of Aristotle. N1 is a pithy restatement of the principles established by Galileo, principles Newton was keenly aware of. Newton’s original language from the Latin can be translated roughly as “Law I: Every body persists in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by force impressed.” This is attempting to address the question of what “a natural state” of motion is. According to N1, a natural state for an object is not merely being at rest, as Aristotle would have us believe, but rather uniform motion (of which “at rest” is a special case). N1 claims that an object changes its natural state when acted upon by external forces.

N2 then goes on to clarify this point. N2, in Newton’s language as translated from the Latin, was stated as “Law II: The alteration of motion is ever proportional to the motive force impress’d; and is made in the direction of the right line in which that force is impress’d.” In modern language, we would say that the net force acting on an object is equal to its mass times its acceleration, or

\vec{F}_{\rm net}=m\vec{a}

In the typical introductory classroom, problems involving N2 would be considered dynamics problems (forgetting about torques for a moment). A net force generates accelerations.

To recover statics, where systems are in equilibrium (again, modulo torques), students and professors of physics frequently then back-substitute from here and say something like: in the case where \vec{a}=0 clearly we recover N1, which can now be stated something like:

\vec{a}=0
if and only if
\vec{F}_{\rm net}=0

This latter assertion certainly looks like the mathematical formulation of Newton’s phrasing of N1. Moreover, it seemed to follow from the logic of N2 so, ergo, “N1 is a special case of N2.”

But this is all a bit too ham-fisted for my tastes. Never mind the nonsensical logic of why someone as brilliant as Newton would start his three laws of motion with a special case of the second. That alone should give one pause. Moreover, Newton’s original language of the laws of motion is antiquated and doesn’t illuminate the important modern understanding very well. Although he was brilliant, we definitely know more physics now than Newton and understand his own laws at a deeper level than he did. For example, we have an appreciation for Electricity and Magnetism, Special Relativity, and General Relativity, all of which force one to clearly articulate Newton’s Laws at every turn, sometimes overthrowing them outright. This has forced physicists over the past 150 years to be very careful about how the laws are framed and interpreted in modern terms.

So why isn’t N1 really a special case of N2?

I first gained an appreciation for why N1 is not best thought of as a special case of N2 when viewing the famous educational film Frames of Reference by Patterson Hume and Donald Ivey (below), which I use in my Modern Physics classes when setting up relative motion and frames of reference. Then it really hit home later while teaching a course specifically about Special Relativity from the book of the same name by T. M. Helliwell.

A key modern function of N1 is that it defines inertial frames. Although Newton himself never really addresses inertial frames in his work, this modern interpretation is of central importance in modern physics. Without this way of interpreting it, N1 does functionally become a special case of N2 if you treat pseudoforces as actual forces. That is, if “ma” and the frame kinematics are considered forces. In such a world, N1 is redundant and there really are only two laws of motion (N2 and the third law, N3, which we aren’t discussing here). So why don’t we frame Newton’s laws this way? Why have N1 at all? One might be able to get away with this kind of thinking in a civil engineering class, but forces are very specific things in physics and “ma” is not amongst them.

So why is “ma” not a force and why do we care about defining inertial frames?

Basically an inertial frame is any frame where the first law is obeyed. This might sound circular, but it isn’t. I’ve heard people use the word “just” in that first point: “an inertial frame is just any frame where the first law is obeyed.” What’s the big deal? To appreciate the nuance a bit, the modern logic of N1 goes something like this:

if
\vec{F}_{\rm net}=0
and
\vec{a}=0
then you are in an inertial frame.

Note, this is NOT the same as a special case of N2 as stated above in the “if and only if” phrasing

\vec{a}=0
if and only if
\vec{F}_{\rm net}=0

That is, N1 is a one-way if-statement that provides a clear test for determining if your frame is inertial. The way you do this is you systematically control all the forces acting on an object and balance them, ensuring that the net force is zero. A very important aspect of this is that the catalog of what constitutes a force must be well defined. Anything called a “force” must be linked back to the four fundamental forces of nature and constitute a direct push or a pull by one of those forces. Once you have actively balanced all the forces, getting a net force of zero, you then experimentally determine if the acceleration is zero. If so, you are in an inertial frame. Note, as I’ve stated before, this does not include any fancier extensions of inertial frames having to do with the Principle of Equivalence. For now, just consider the simpler version of N1.
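To make the one-way logic concrete, here is a toy sketch in Python. The function name and tolerance are mine, purely illustrative; the point is that the test certifies an inertial frame only when the catalogued net force and the measured acceleration both vanish.

```python
def passes_n1_test(net_force, acceleration, tol=1e-9):
    """One-way N1 test: a frame is certified inertial only when the
    catalogued (fundamental) net force AND the measured acceleration
    are both zero, within tolerance."""
    return abs(net_force) < tol and abs(acceleration) < tol

# Balanced fundamental forces and zero acceleration: the frame passes.
print(passes_n1_test(net_force=0.0, acceleration=0.0))  # True

# Zero acceleration alone proves nothing: if the catalogued fundamental
# forces do NOT balance (say a lone 9.8 N contact force) while the
# measured acceleration appears to be zero, the frame fails the test
# and must be non-inertial, whatever pseudoforce seems to "explain" it.
print(passes_n1_test(net_force=9.8, acceleration=0.0))  # False
```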

With this modern logic, you can also take the contrapositive and assert that if your frame is non-inertial, then you can have either i) accelerations in the presence of apparently balanced forces or ii) apparently uniform motion in the presence of unbalanced forces.

The reason for determining if your frame is inertial or not is that N2, the law that determines the dynamics and statics for new systems you care about, is only valid in inertial frames. The catch is that one must use the same criteria for what constitutes a “force” that was used to test N1. That is, all forces must be linked back to the four fundamental forces of nature and constitute a direct push or a pull by one of those forces.

Let’s say you have determined you are in an inertial frame within the tolerances of your experiments. You can then go on to apply N2 to a variety of problems and assert the full, powerful “if and only if” logic between forces and accelerations in the presence of any new forces and accelerations. This now allows you to solve both statics (no acceleration) and dynamics (nonzero acceleration) problems in a responsible and systematic way. I assert that both statics and dynamics are special cases of N2.

If you give up on N1, treating it merely as a special case of N2, and further insist that statics is all N1, this worldview can be accommodated at a price: statics and dynamics can no longer be clearly distinguished, because you haven’t used any metric to determine whether your frame is inertial. If you are in a non-inertial frame but insist on using N2, you will be forced to introduce pseudoforces. These are “forces” that cannot be linked back to pushes and pulls associated with the four fundamental forces of nature. Although it can occasionally be useful to treat pseudoforces as if they were real forces, they are physically pathological. Every inertial frame will agree on all the forces acting on an object, able to link them back to the same fundamental forces, and thus agree on its state of motion. In contrast, every non-inertial frame will generally require a new set of mysterious and often arbitrary pseudoforces to rationalize the motion. Different non-inertial frames won’t agree on the state of motion and won’t generally agree on whether one is doing statics or dynamics!
As mentioned, pseudoforces can be used in calculation, but it is most useful to do so when you actually know a priori that you are in a known non-inertial frame but wish to pretend it is inertial for practical reasons (for example, the rotating earth creates small pseudoforces such as the Coriolis force, the centrifugal force, and the transverse force, all byproducts of pretending the rotating earth is inertial when it really isn’t).
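For a sense of how small these earth-bound pseudoforces are, a quick estimate of the maximum Coriolis acceleration, 2Ωv, is easy to do (the 100 m/s projectile speed below is my own illustrative choice, not a number from any particular experiment):

```python
import math

# Rough size of the Coriolis pseudoforce that appears when one pretends
# the rotating earth is an inertial frame.
omega_earth = 2 * math.pi / 86164.0   # earth's sidereal rotation rate, rad/s
v = 100.0                             # speed of a fast projectile, m/s
a_coriolis = 2 * omega_earth * v      # maximum Coriolis acceleration, m/s^2

print(f"{a_coriolis:.4f} m/s^2")      # about 0.0146 m/s^2, vs g = 9.8 m/s^2
```

A part-per-thousand correction relative to gravity, which is why pretending the earth is inertial usually works so well.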

Here’s a simple example that illustrates why it is important not to treat N1 as a special case of N2. Say Alice places a box on the ground and it doesn’t accelerate; she analyzes the forces in the frame of the box. The long range gravitational force of the earth on the box pulls down and the normal (contact) force of the surface of the ground on the box pushes up. The normal force and the gravitational force must balance since the box is sitting on the ground not accelerating. OR SO SHE THINKS. The setup said “the ground,” not “the earth.” “The ground” is a locally flat surface upon which Alice stands and places objects like boxes. “The earth” is a planet and is a source of the long range gravitational field. You cannot be sure that the force you are attributing to gravity really comes from a planet pulling the box down (indeed, the Principle of Equivalence asserts that one cannot tell, but this is not the key to this puzzle).

Alice has not established that N1 is true in her frame and that she is in an inertial frame. This could cause headaches for her later when she tries to launch spacecraft into orbit. Yes, she thinks she knows all the forces at work on the box, but she hasn’t tested her frame. She really just applied backwards logic, N1 as a special case of N2, and assumed she was in an inertial frame because she observed the acceleration to be zero. This may seem like a “distinction without a difference,” as one of my colleagues put it. Yes, Alice can still do calculations as if the box were in static equilibrium and the acceleration were zero, at least in this particular instance at this moment. However, there is a difference that can indeed come back and bite her if she isn’t more careful.

How? Imagine that Alice was, unbeknownst to her or her ilk, on a large rotating (very light) ringworld (assuming ringworlds are stable and have very little gravity of their own). The inhabitants of the ringworld are unaware they are rotating and believe the rest of the universe is rotating around them (for some reason, they can’t see the other side of the ring). This ringworld frame is non-inertial but, as long as Alice sticks to the surface, it feels just like walking around on a planet. For Bob, an inertial observer outside the ringworld (who has tested N1 directly first), there is only one force on the box: the normal force of the ground, which pushes the box towards the center of rotation and keeps the box in circular motion. All other inertial observers will agree with this analysis. This is very clearly a case of applying N2 with accelerations for the inertial observer. The box on the ground is a dynamics problem, not a statics problem. For Alice, who believes she is in an inertial frame by taking N1 to be a special case of N2 (having not tested N1!), there appear to be two forces keeping the box in static equilibrium, so it looks like a statics problem. Is this just a harmless attribution error? If it gives the same essential results, what is the harm? Again, in an engineering class, for this one particular box under these conditions, perhaps this is good enough to move on. However, from a physics point of view, it introduces potentially very large problems down the road, both practical and philosophical. The philosophical problem is that Alice has attributed a long range force where none existed, turning “ma” into a force of nature, which it isn’t. That is, the gravity experienced by the ringworld observer is “artificial”: no physical long range force is pulling the box “down.” Indeed “down,” as observed by all inertial observers, is actually “out,” away from the ring. Gravity is a pseudoforce in this context.
There has been a violation of what constitutes a “force” for physical systems and an unphysical, ad hoc, “force” had to be introduced to rationalize the observation of what appears to be zero local acceleration. Again, let us forgo any discussions of the Equivalence Principle here where gravity and accelerations can be entwined in funny ways.

This still might seem harmless at first. But imagine that Alice and her team on the ring fire a rocket upwards, normal to the ground, trying to exit or orbit their “planet” under the assumption that it is a gravitational body that pulls things down. They would find a curious thing. Rockets cannot leave their “planet” by being fired straight up, no matter how fast. The rockets always fall back and hit the ground: despite being launched straight up with what seems to be only “gravity” acting on them, their trajectories always bend systematically in one direction and hit the “planet” again. Insisting the box test was a statics problem with N1 as a special case of N2, they have no explanation for the rocket’s behavior except to invent a new weird horizontal force that only acts on the rocket once launched and depends in weird ways on the rocket’s velocity. There does not seem to be any physical agent of this force and it cannot be attributed to the previously known fundamental forces of nature. There are no obvious sources of this force; it is simply present on an empirical level. In this case, it happens to be a Coriolis force. This, again, might seem an innocent attribution error. Who’s to say their mysterious horizontal forces aren’t “fundamental” for them? But it also implies that every non-inertial frame, every type of ringworld or other non-inertial system, would have a different set of “fundamental forces,” all valid in their own way. This concept is anathema to what physics is about: trying to unify forces rather than catalog many special cases.

In contrast, you and all other inertial observers recognize the situation instantly: once the rocket leaves the surface and loses contact with the ringworld floor, no forces act on it anymore, so it moves in a straight line, hitting the far side of the ring. The ring has rotated some amount in the meantime. The “dynamics” the ring observers see during the rocket launch is actually a statics (acceleration equals zero) problem! So Alice and her crew have it all backwards. Their statics problem of the box on the ground is really a dynamics problem and their dynamics problem of launching a rocket off their world is really a statics problem! Since they didn’t bother to systematically test N1 and determine if they were in an inertial frame, the very notions of “statics” and “dynamics” are all turned around.
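The ringworld rocket is simple enough to work out in closed form from the inertial frame, where it is pure force-free, straight-line motion. The sketch below uses my own toy numbers, a 1 km ring spun so the rim feels 1 g; launched “straight up” in the ring frame, the rocket always re-intersects the ring, and the landing point is displaced along the ring, exactly the deflection the ring dwellers would blame on a mysterious velocity-dependent horizontal force:

```python
import math

R = 1000.0                      # ring radius (m), toy value
g = 9.8                         # desired apparent gravity at the rim
omega = math.sqrt(g / R)        # spin rate giving centripetal accel g at the rim
v_up = 50.0                     # launch speed "straight up" in the ring frame (m/s)

# Inertial frame: the rocket starts at (R, 0) with velocity equal to
# (radially inward v_up) plus (tangential rim speed omega*R). With no
# forces after launch it travels in a straight line:
#   x(t) = R - v_up * t,   y(t) = omega * R * t
# Setting x^2 + y^2 = R^2 gives the (nonzero) time it re-hits the ring:
t_land = 2 * R * v_up / (v_up**2 + (omega * R)**2)

x = R - v_up * t_land
y = omega * R * t_land
theta_land = math.atan2(y, x)   # where it hits the ring (inertial frame)
theta_ring = omega * t_land     # how far the launch point has rotated

# Displacement along the ring between launch point and landing point,
# as measured by the ring dwellers: the apparent "Coriolis" deflection.
deflection = R * (theta_land - theta_ring)

print(f"flight time: {t_land:.1f} s")
print(f"lands on the ring? {math.isclose(math.hypot(x, y), R)}")
print(f"deflection along the ring: {deflection:.0f} m")
```

The re-intersection time is finite for any launch speed, which is the algebraic version of “rockets cannot leave by being fired straight up, no matter how fast.”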

So, in short, a modern interpretation of Newton’s Laws of motion asserts that N1 is not a special case of N2. First establishing that N1 is true and that your frame is inertial is critical to how one interprets the physics of a problem.

Coldest cubic meter in the universe

My collaborators in CUORE, the Cryogenic Underground Observatory for Rare Events, at the underground Gran Sasso National Laboratory in Assergi, Italy, have recently created (literally) the coldest cubic meter in the universe. For 15 days in September 2014, cryogenic experts in the collaboration were able to hold roughly one contiguous cubic meter of material at about 6 mK (that is, 0.006 degrees above absolute zero, the coldest possible temperature).

At first, a claim like “this is the coldest cubic meter in the [insert spatial scale like city/state/country/world/universe]” may sound like an exaggeration or a headline grabbing ruse. What about deep space? What about ice planets? What about nebulae? What about superconductors? Or cold atom traps? However, the claim is absolutely true in the sense that there are no known natural processes that can reliably create temperatures anywhere near 6 mK over a contiguous cubic meter anywhere in the known universe. Cold atom traps, laser cooling, and other remarkable ultracold technologies are able to get systems of atoms down to the bitter nK scale (a billionth of a degree above absolute zero) and below. However, the key term here is “systems of atoms.” These supercooled systems are indeed tiny collections of atoms in very small spaces, nowhere near a cubic meter. Large, macroscopic superconductors can operate at liquid nitrogen or liquid helium temperatures, but those are very warm compared to what we are talking about here. Even deep space is sitting at a balmy 2.7 K thanks to the cosmic microwave background radiation (CMBR). Some specialized thermodynamic conditions, such as those found in the Boomerang Nebula, may bring things down to a chilly 300-1000 mK because of the extended expansion of gases in a cloud over long times. The CMB cold spot is only 70 micro-kelvin below the CMBR.
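For a sense of scale, the temperatures mentioned in this post can be put on a single ladder. The liquid nitrogen and liquid helium boiling points (77 K and 4.2 K) are standard values I have added; the remaining numbers are the ones quoted above:

```python
# Temperatures discussed in this post, in kelvin, warmest to coldest.
temps_K = {
    "liquid nitrogen (superconductor operation)": 77.0,
    "liquid helium": 4.2,
    "deep space (CMBR)": 2.7,
    "CMB cold spot": 2.7 - 70e-6,            # 70 micro-kelvin below the CMBR
    "Boomerang Nebula": 0.3,                 # coldest of the quoted 300-1000 mK
    "CUORE-0 detector": 0.010,               # 10 mK
    "CUORE cubic meter (Sept 2014)": 0.006,  # 6 mK
}

for name, T in sorted(temps_K.items(), key=lambda kv: -kv[1]):
    print(f"{T:12.6f} K  {name}")
```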

However, the only process capable of reliably bringing a cubic meter vessel down to 6 mK is sentient creatures actively trying to do so. While nature could do it on its own in principle, via some exotic process or ultra-rare thermal fluctuation, the easiest natural path to such cold swaths of space, statistically sampled over a short 13.8 billion years, is to first evolve life, then evolve sentient creatures who then actively perform the project. So the only other likely way for there to be another competing cubic meter sitting at this temperature somewhere in the universe is for there to be sentient aliens who also made it happen. The idea behind the news angle “the coldest cubic meter” was the brainchild of my collaborator Jon Ouelett, a graduate student in physics at UC Berkeley and a member of the CUORE working group responsible for achieving the cooldown. His take on this is written up nicely in his piece on the arXiv entitled The Coldest Cubic Meter in the Known Universe.

I’ve been a member of the CUORE and Cuoricino collaborations since 2004, when I was a postdoc at Lawrence Berkeley Laboratory. I’m now a physics professor at California Polytechnic State University in San Luis Obispo and send undergraduate students to Gran Sasso to help with shifts and other R&D activities during the summers through NSF support. Indeed, my students were at Gran Sasso when the cooldown occurred in September, but were working on another part of the project doing experimental shifts for CUORE-0. CUORE-0 is a precursor to CUORE and is currently running at Gran Sasso. It is cooled down to about 10 mK and is perhaps a top-10 contender for the coldest contiguous 1/20th of a cubic meter in the known universe.

I will write more about CUORE and its true purpose in coming posts.

On a speculative note, one must naturally wonder if this kind of technology could be utilized in large scale quantum computing or other tests of macroscopic quantum phenomena. While there are many phonon quanta associated with so many crystals at these temperatures (so the system is pretty far from the quantum ground state and has likely decohered on any time scale we could measure), it is still intriguing to ask if some carefully selected macroscopic quantum states of such a large system could be manipulated systematically. Large-mass gravitational wave antennae, or Weber bars, have been cooled to a point where the system can be considered in a coherent quantum state from the right point of view. Such measurements usually take place with sensitive SQUID detectors looking for ultra-small strains in the material. Perhaps this new CUORE technology, involving macroscopic mK-cooled crystal arrays, can be utilized in a similar fashion for a variety of purposes.