RHIC/AGS User’s Meeting Talk and Yeti Pancakes

I was recently invited to give a talk on neutrinoless double beta decay at the RHIC/AGS User’s Meeting at Brookhaven National Laboratory. The talk was entitled “Neutrinoless Double Beta Decay: Tales from the Underground” and was a basic overview (for other physicists, targeted primarily at graduate students) of neutrino physics and the state of neutrinoless double beta decay. The slot was only 20+5 (20 minutes plus 5 for questions), so there wasn’t time to get into a lot of detail.

It was great to be back at BNL and see some of my old friends and colleagues. It was particularly nice to see my mentor and friend Professor Dan Cebra again and meet his recent crew of graduate students.

Being asked to give a neutrinoless double beta decay talk at a meeting entirely focused on the details of heavy ion physics is a little like a yeti and a pancake: they are terms not usually used in the same sentence, but somehow it works. Their motivation was noble. At these meetings, the organizers typically pick a couple of topics in nuclear physics that are outside their usual routine and have someone give them a briefing. This was exactly in that spirit.

To download the Standard Model Lagrangian I used in the talk, visit my old UC Davis site where you can find pdf and tex versions of it for your own use. If you are interested in investigating the hadron spectra I show in the talk, you can download my demonstration available in CDF format from the Wolfram Demonstrations Project. The Feynman diagram for neutrinoless double beta decay was taken from Wikipedia. Most of the other figures are standard figures used in neutrinoless double beta decay talks. As a member of the CUORE collaboration, I used vetted information regarding our data and experiment.




Cal Poly Open House, All That Glitters Green and Gold 2014, Faculty Address

I was asked to give the 2014 Cal Poly Open House, All That Glitters Green and Gold three-minute faculty address to more than 600 prospective students and parents for the College of Agriculture, Food, and Environmental Sciences; the College of Architecture and Environmental Design; and the Orfalea College of Business. I somehow managed to get “quantum”, “atheist”, “delocalize”, and “live long and prosper” in there. In hindsight, asking someone from physics to do this for these Colleges is a bit like asking Snape to give the opening address to Hufflepuff. Still, it was great fun and a true honor. Here is a transcript of the speech.

Thank you President Armstrong. Welcome and good morning! I’m Tom Gutierrez, a professor in the Physics Department here at Cal Poly. I’m also currently the advisor for the Society of Physics Students, Sigma Pi Sigma (the physics honor society), and student club AHA (the Alliance of Happy Atheists).

How many of you watch or have seen the TV show The Big Bang Theory? Sadly, in my department it’s basically considered a documentary. I don’t watch it regularly, but to appreciate where I’m coming from, understand that when I first saw it, I mistook it for a NOVA special on how physicists can actually improve their social skills. With that awkward introduction…

Why am I, a physics professor, speaking to you today? I’m here to give you a brief faculty perspective of Cal Poly. Cal Poly is a comprehensive polytechnic university that embraces a Learn-By-Doing philosophy. And physics, the most fundamental of all sciences, is at the very core of this mission. For a comprehensive polytechnic university in the 21st century, physics is the technical analog to the “liberal arts.” All technical majors across all Colleges at the University must take physics and almost all majors allow physics as an elective or as a general education course. This frequently puts my department at the nexus of the University and gives me the pleasure of interacting with a large cross section of our students on a regular basis.

I teach a wide spectrum of courses in the physics department. While it’s true that most of my students are from engineering and the College of Science and Math, some of the hardest-working and most thoughtful students I’ve had have come from the Colleges represented in the session this morning, which include business, animal science, architecture, and forestry majors, to name a few. To facilitate the Learn-By-Doing philosophy in practical terms, Cal Poly fosters amongst faculty what is known as the Teacher-Scholar Model. Faculty across all Colleges are carefully selected 1) for their passion for teaching and working with students and 2) for being engaged with active work in their fields. In my own experience, most educational institutions choose one or the other focus for faculty: a professor is either a teacher or a scholar. While there are many fine examples of each amongst today’s universities, one vocation typically suffers at the expense of the other. However, Cal Poly celebrates both forms of professional expression for individual faculty — and this generates a powerful and singular learning environment for the students who come here. Faculty engaged in their fields can bring real-world knowledge and research into the classroom. Conversely, teachers can bring their students and pedagogical wisdom into the real world.

My own work in particle physics, sponsored by the National Science Foundation, has allowed me to bring students to work at an underground lab in Italy and experience the joys of doing cutting-edge science. Students then bring this experience to their jobs and graduate programs. The message I’m getting from my colleagues at other institutions and in industry? “Send us more Cal Poly students!” Faculty at Cal Poly are allied with the student. We want you to graduate as lifelong learners who find a productive career and make a difference in the world. At Cal Poly, we want you to grow as a person and to challenge your pre-existing assumptions about how the world works. We want you to discover your Personal Project; think big, make collaborations, and not just dream, but discover how to translate those dreams into actions.

Anyway, enjoy the rest of your stay and come visit the Physics Department and CoSaM open house in the Baker Science building if you get a chance. May your quantum wave function always remain delocalized. Live long and prosper. Thank you!

Reading Audiobooks

If you listened to an audiobook, is it responsible to say in conversation that you “read the book” without qualifying that it was an audiobook? Does listening to an audiobook constitute “reading a book”? The answer is “yes,” although this will require some explanation. The question sounds strange because I just said that you listened to it rather than read it, didn’t I? And “listening” isn’t “reading,” so how can listening to an audiobook allow one to claim one has “read the book”? I think some of this discussion is motivated by my own enjoyment of audiobooks in the face of those who can only be called “reading snobs,” who dogmatically believe that books can only be “properly” processed one way: via the written text and with one’s eyes. There may also be a perception that listening to an audiobook is somehow easier or intellectually lighter than reading the text of a book. To this I say: try listening to an audiobook sometime. In my experience, audiobooks can be very intellectually satisfying and may even impose a heavier cognitive load than reading text, because your eyes are free to roam and process independent information. It takes no small measure of mental discipline to remain focused and engaged while still functioning (e.g. while walking or driving).

I just listened to David Brin’s excellent The Uplift War on audiobook, and unapologetically declare to the world that I “read the book.” For full disclosure, I’ve also read The Martian Chronicles by Ray Bradbury, all three Hunger Games novels by Suzanne Collins, Sara Gruen’s Water for Elephants, Neverwhere by Neil Gaiman, Packing for Mars by Mary Roach, and Letters to a Young Contrarian by Christopher Hitchens, amongst others. All on audiobook. From a social point of view, if you and I got together to discuss these works, my experience of them would be such that you would not be able to determine from our conversation whether I listened to them or physically read the words on a page. In this sense, I can responsibly claim to have “read the book” even if my eyes never looked at the words. That is, using Brin’s work as an example, unless you asked me to spell the names “Uthacalthing”, “Tymbrimi”, or “Athaclena” (which I did not know how to spell until I looked them up just now) — but then I’d ask you to pronounce them and we’d be even.

There are two elements to consider: 1) reading as a physical method of information transfer and 2) reading as an intellectual exercise involving mental content absorption. Both senses of the term “read” are used regularly and interchangeably, so we will need to remind ourselves what is really important. Certainly a quick scan of the dictionary (and one’s own experience) demonstrates that the word “read” in the English language is used in many different ways. One of those ways is the specific biomechanical method of scanning physical symbols with one’s eyes. However, that same sense of the term also describes a person using their fingers to process braille symbols. Feeling something is biomechanically and mentally nothing like seeing it, but we are still comfortable using “read” in that context, allowing “read” to span the senses because it accomplishes the same intellectual function as reading symbols with one’s eyes. This is important. Other grammatically correct uses of the word “read” include broader definitions involving generalized mental information processing and experiences: “I read you loud and clear” (e.g. for a radio transmission, when listening), “reading a situation” (assessing the subtleties of a situation, including your own intellect), performing “a cold reading” (another situational assessment tool used by magicians and “psychics” involving both mental acuity and all the senses), “I’m taking a sensor reading” (describing a technology-based data acquisition process), and so on. For the primary definitions of “read,” Merriam-Webster is actually rather non-committal about the method and focuses on generic sensory information processing, emphasizing written text and braille but not insisting upon them, allowing for many other modes.

In this spirit, let’s examine what people mean in conversation about a work of fiction when they say “I read the book” or ask “did you read the book?” Let’s assume we are dealing with educated adults and not people just learning to read text. I assert that what people universally mean by the question “did you read the book?” is “did you intellectually and emotionally absorb and process the content of the work that was created by the author?” If I listened to an unabridged audiobook in an active and engaged way, I think the answer is unambiguously “yes, I read the book.” Sure, from a methodological point of view, I did not literally (literally) read the physical text on the page with my eyes. However, this is not usually the important part of the novel, nor is it generally the important part of “reading novels.” The mechanics of looking at words is not typically the essential experience of reading books. The important part is mentally absorbing the content of the work, which is actually the core definition of the word “read” to begin with. Why would anyone care about your particular mode of information transfer? What they (hopefully!) care about is the experience you had intellectually and emotionally absorbing the content and your ability to discuss that experience in a way that transcends the transfer mode.

Do all books lend themselves to this audio mode of reading? No. Obviously not. Exceptions include works that rely directly on the shapes of words or encoding extra information in the precise layout of the text, font, or presentation. If the work involves lots of pictures, illustrations, data, or equations, audiobooks are not going to work very well. But the bulk of modern fiction lends itself wonderfully to audiobooks as does much non-fiction. Like so many other things in life, one needs to account for individual cases. Also, this equivocation is not appropriate for people (e.g. children) learning to read symbols on the page. An audio experience is not an adequate substitute for that kind of information processing during those fragile formative years. This argument is directed at people who have mastered both reading and listening and are educated adults.

To be clear, I’m not suggesting we follow the reductio ad absurdum path and call all forms of information processing “reading” in every context for all conversations. That is a straw man of my argument. I’m merely suggesting that actively listening to an unabridged audiobook can, for social and intellectual purposes, be considered “reading a book” based on the sense of the word “read” one uses in conversations of that kind. There is nothing more I would gain from a content or entertainment perspective by re-reading the book as physical text in order to “elevate” myself to “having read the [text of the] book.” Nor am I suggesting that we substitute listening to audiobooks for reading text in schools, although I do think both forms could be used in tandem or in parallel. As mentioned above, reading symbols is obviously a critical core skill that must be developed actively and early. But, once mastered, I assert that the two forms of information processing, listening and reading, blur into each other and naturally complement each other. And I’m certainly not dismissing the process of reading physical text as an intellectual and worthy exercise. I still read many books this way. Nor am I claiming that there is absolutely no difference cognitively between how the brain processes words and symbols and how it processes sounds. But I do think that, in the case of a word-for-word reading of an unabridged audiobook, and for the educated person who has mastered both reading and listening, the audio experience and the reading experience merge for all practical purposes into a common intellectual experience, with only minor variations that do not systematically favor one mode over another except by the taste of the user.

A couple of tangential examples inform the discussion. A formally trained and competent musician can look at a piece of written music and, for all practical purposes, “listen” to it by reading it with their eyes. The audio performance itself, of course, also has aesthetic value for that musician. But it would probably be appropriate for someone in that position to say, in either context, that they “listened to” or “heard” the piece even if it merely involved reading the sheet music. Indeed, musicians who can read written music like that do refer to reading sheet music as having “heard” or having “listened to” the piece. In contrast, many bands we worship refer to “writing” music for their albums. However, rarely are any notes or music written down in any formal sense. Many rock/pop bands “write” music by playing it and piecing together sections into things that sound nice after editing (if they are lucky). Later, some music grad student, desperate to eat and pay rent, will be hired by a company to transcribe the sounds on the album into written notes so other people without ear training can also play the songs; but that isn’t the way the band itself usually “writes” music — unless you are Yes or Dream Theater. If the Rolling Stones speak of “writing” music for a new album, they almost certainly mean a wanton, drug-infused geriatric orgy in the Caribbean that might have involved Keith Richards bringing his guitar. But the term “writing music” is still used. We can also reverse the situation and look at words on a page that were meant to be spoken out loud, such as plays. Take Shakespeare. Certainly the stage play is considered a respectable form of literary art, and Shakespeare is arguably the greatest writer of the English language. But the plays he wrote were meant, designed, crafted to be read aloud and listened to. Yet we read them.
Can you still read Shakespeare and claim to have experienced the work in an intellectually satisfying way and be conversational about it? Obviously. Does the stage work bring the work to life in a different way? Clearly.

Also, reading words on a page is not itself a magic recipe for intellectual absorption. Reading text can be pathologically passive if one is not actively engaged, and does not by itself imply profound and deep understanding. Let me give an example from my own experience in the classroom. I tell students to “read chapter 10” from the text. And, indeed, some do look at it with their eyes, and the words are streamed through their thinking in some fashion. But in many cases no cognitive engagement has occurred. By speaking to them, I can tell that they did not, in fact, “read” the text as I meant the term “read.” In this context, “read” did not necessarily literally mean merely looking at the words, although it might conveniently involve that biomechanical process. I really just wanted them to come to class having processed and understood the material provided in the book by whatever means necessary. If that involves listening to the audiobook, it just doesn’t matter to me (although, good luck learning quantum mechanics from an audiobook).

Does watching a movie adaptation of a book count as “reading the book?” Not in my opinion. Putting audiobooks in the same category as movie interpretations of books misses the point. I claim the unabridged audiobook is not, fundamentally, a different medium than the original work — no different than the braille modes of reading that are considered “legitimate” reading. When we read books using the written word, we are, in fact, “speaking” the words to ourselves in our head anyway, exactly in the way the book is being read in an audiobook. A movie, even one adapted to be nearly identical to the book, is usually abridged and has been altered from the original work in fundamentally different ways. Moreover, one is not required to visualize the plot and characters in the same way as one does when reading text or listening to a reading of text.

I am not judging all these different modes or ranking them. They each serve their purpose and can give pleasure and intellectual stimulation in their own way. But I argue that, under many common situations, listening to audiobooks accomplishes the same social and intellectual function as reading text and can thus be responsibly declared a form of “reading the book.”

Farewell Stuart

I am very saddened today to hear of the sudden passing of my colleague Stuart Freedman. He was a great scientist and a great mentor. I will miss his dry wit and his gift for seeing right to the heart of an issue. As part of his Ph.D. work circa 1972 with Clauser, he produced the first experimental result showing a violation of Bell’s inequality, demonstrating that quantum mechanics was not only complete but non-local in character. This was during a time when “dabbling” in the foundations of quantum mechanics was not particularly fashionable. However, his ambitious result paved the way for the later celebrated work of Aspect et al. and is sadly often forgotten in such discussions. The breadth of his contributions to science was remarkable, spanning many fields and specialties as he moved from Berkeley to Princeton, Stanford, the University of Chicago, and back to Berkeley. He was a fellow of the American Physical Society and a member of the National Academy of Sciences. At Berkeley, he held the prestigious Luis W. Alvarez Chair in Experimental Physics. I was most familiar with him in his recent role as the US spokesman for the CUORE collaboration, having met him in 2005 while I was still a postdoc at Berkeley Lab. His voice of scientific leadership in our work will be greatly missed. It was a privilege to have worked and collaborated with him, and to name him amongst my mentors. Farewell, Stuart. You will be missed.

A Guided Tour of Your Recently Acquired Vacuum State

On Thursday (Nov. 8, 2012) I gave a colloquium at Cal Poly, San Luis Obispo, with my wife, Prof. Jennifer Klay, on the status of the Higgs search at the LHC. It was a joint talk meant to summarize the discovery, announced by CMS and ATLAS in July, of a Higgs-like particle with a mass of around 125 GeV/c^2. We each had about 23 minutes: I covered the theory, Dr. Klay the experimental results. The audience was made up primarily of undergraduates with a mix of professors and other attendees; almost none had any particle physics background. While constructing the talk, I had to resist the urge to try to summarize all of the detailed theoretical and mathematical machinery that goes with the Standard Model. This would have been rather ineffective. The point of a colloquium is to communicate ideas, not to plow people down and confuse them. Instead, I tried to remain true to the spirit of the colloquium and targeted the audience I knew would be present: undergraduate physics majors who had taken their first course in modern physics. I felt that this was the simplest I could make the material while still building a case for the Higgs. It allowed me to draw from accessible analogies that were, although imperfect, basically physically responsible. At some point, I will post a more complete narrative of the talk. But, for now, I simply wanted to make the talk available for any interested parties (note: it is about 50MB, so be patient). To download the Standard Model Lagrangian I used in the talk, visit my old UC Davis site where you can find pdf and tex versions of it for your own use. If you are interested in investigating the hadron spectra I show in the talk, you can download my demonstration available in CDF format from the Wolfram Demonstrations Project. Enjoy.

P.S. In the talk, I don’t give a photo credit for my 50s flying car (used to represent the “guided tour”), which I got from a vintage ad I believe to be in the public domain (e.g. you can get it here, although this isn’t where I downloaded it from). In the talk I also did not give credit for the two Feynman diagrams (1 and 2) for the “Golden Channels,” which I got from Wikipedia. The photo of John Ellis is by Josh Thompson and was obtained from Flickr.

The Universe: A Computer Simulation?

An unpublished paper on the arXiv is claiming to have formulated a suite of experiments, as informed by a particular kind of computer approximation (called “lattice QCD” or L-QCD), to determine if the universe we perceive is really just an elaborate computer simulation. It is creating a buzz (e.g. covered by the Skeptics Guide to the Universe, Technology Review, io9, and probably elsewhere).

I have some problems with the paper’s line of argument. But let me make it clear that I have no fundamental problem with the speculation itself. I think it is fun and interesting to ponder the possibility of living in a simulation and to try to formulate experiments to demonstrate it. It is certainly an amusing intellectual exercise and, at least in my own experience, this was an occasional topic of my undergraduate years. More recently than my undergraduate years, Yale philosopher Nick Bostrom put forth his famous argument in more quasiformal terms, but the idea had been hovering there (probably with a Pink Floyd soundtrack) for a long time.

The paper is not “crackpot”, but is highly speculative. It uses a legitimate argumentation technique, if used properly (and the authors basically do), called reductio ad absurdum: reduction to the absurd. Their argument goes like this:

  1. Computer simulations of spacetime dynamics, as known to humans, always involve space and time lattices as a stage to perform dynamical approximations (e.g. finite difference methods etc.);
  2. Lattice QCD (L-QCD) is a profound example of how (mere) humans have successfully simulated, on a lattice, arguably the most complex and pure sector of the Standard Model: SU(3) color, a.k.a. quantum chromodynamics, the gauge theory that governs the strong nuclear force as experienced by quarks and gluons;
  3. L-QCD is not perfect, and is still quite crude in its absolute modern capabilities (I think most people reading these articles, given the hype imparted to L-QCD, would be shocked at how underwhelming L-QCD output actually is, given the extreme amount of computing effort and physics that goes into it). But it is, under the hood, the most physically complete of all computer simulations and should be taken as a proof-of-principle for the hypothetical possibility of bigger and better simulations — if we can do it, even at our humble scale, certainly an übersimulation should be possible with sufficient computing resources;
  4. Extrapolating (this is the reductio ad absurdum part), L-QCD for us today implies L-Reality for some other beyond-our-imagination hypercreatures: for we are not to be taken as a special case for what is possible and we got quite a late start into the game as far as this sentience thing goes.
  5. Nevertheless, nuanced flaws in the simulation that arise because of the intrinsic latticeworks required by the approximations might be experimentally detectable.


Firstly, there is an amusing recursive metacognitive aspect to this discussion that has its own strangeness; it essentially causes the discussion to implode. It is a goddamn hall of mirrors from a hypothesis-testing point of view. This was, I believe, the point Steve Novella was getting at in the SGU discussion. So, let’s set aside the question of whether a simulation could

  1. accurately reconstruct a simulation of itself and then
  2. proceed to simulate and predict its own real errors and then
  3. simulate the actual detection and accurate measurement of the unsimulated real errors.

Follow that? For the byproduct of a simulation to detect that it is part of an ongoing simulation via the artifacts of the main simulation, I think you have to have something like that. I’m not saying it’s not possible, but it is pretty unintuitive and recursive.

My main problem with the argument is this: a discrete or lattice-like character to spacetime, with all of its strange implications, is neither a necessary nor a sufficient condition to conclude we live in a simulation. What it would tell us, if it were to be identified experimentally, is simply that spacetime has a discrete or lattice-like character. Given the remarkably creative and far-seeing imaginative spirit of the project, it seems strangely naive to use such an immature, vague “simulation = discrete” connection to form a serious hypothesis. There very well may be some way to demonstrate we live in a simulation (or, phrased more responsibly, falsify the hypothesis that we don’t live in a simulation), but identifying a lattice-like spacetime structure is not the way. What would be the difference between a simulation and the “real” thing? Basically, a simulation would make errors or have inexplicable quirks that “reality” would not contain. The “lattice approximation errors” approach is pressing along these lines, but is disappointingly shallow.

The evidence for living in a simulation would have to be much more profound and unsubtle than mere latticeworks to be convincing. Somewhat tongue-in-cheek, something like:

  1. Identifying the equivalent of commented-out lines of code or documentation. This might be a steganographic exercise where one looks for messages buried in the noise floor of fundamental constants, or perhaps in the laws of physics itself. For example, finding patterns in π sounds like a good lead, a la Contact, but (assuming π is normal, which is widely believed though unproven) literally every finite string appears in π an infinite number of times, so one needs another strategy, such as finding statistical patterns π should have but lacks. If the string 1111 didn’t appear in π at any point we could calculate, this would be stranger than finding “to be or not to be” from Hamlet in ASCII binary;
  2. Finding software bugs (not just approximation errors); this might appear as inconsistencies in the laws of physics at different periods of time;
  3. Finding dead pixels or places where the hardware just stopped working locally; this might look like a place where the laws of physics spontaneously changed or failed (e.g. not a black hole where there is a known mechanism for the breakdown, but something like “psychics are real”, “prayer works as advertised”, etc.);
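Item 1 above is easy to caricature in code. As a toy sketch (emphatically not a serious proposal), the following Python script streams the decimal digits of π using Gibbons' unbounded spigot algorithm and searches them for a target string; within the first thousand digits it finds the famous "Feynman point," the run of six consecutive 9s beginning at the 762nd decimal place:

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # n is the next confirmed digit
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def find_in_pi(pattern, n_digits=1000):
    """Return the 0-based position of `pattern` within the first n_digits
    decimal places of pi, or -1 if it is absent from that window."""
    # Take n_digits + 1 digits and drop the leading "3"
    decimals = ''.join(str(d) for d in islice(pi_digits(), n_digits + 1))[1:]
    return decimals.find(pattern)

# The "Feynman point": six consecutive 9s starting at the 762nd decimal place
print(find_in_pi("999999"))   # -> 761 (0-based)
```

Of course, finding (or failing to find) a particular string proves nothing by itself; a serious steganographic search would need a statistical model of what π should look like if no message were present.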

I’m just making stuff up, and don’t really believe these efforts would bear fruit, but those kinds of thing, if demonstrated in a convincing way, would be an indication to me that something just wasn’t right. That said, the laws of physics are remarkably robust: there are no known violations of them (or nothing that hasn’t been able to be incorporated into them) despite vigorous testing and active efforts to find flaws.

I would also like to set a concept straight that I heard come up in the SGU discussion: the quantum theoretical notion of the Planck length does not imply any intrinsic clumpiness or discreteness to spacetime, although it is sometimes framed this way in casual physics discussions. The Planck length is the spatial scale where quantum mechanics encounters general relativity in an unavoidable way. In some sense, current formulations of quantum theory and general relativity “predict” the breakdown of spacetime itself at this scale. But, in the usual interpretation, this is just telling us that both theories as they are currently formulated cannot be correct at that scale, which we already hypothesized decades ago — indeed this is the point of the entire project of M-theory/Loop quantum gravity and its derivatives.
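For concreteness, the Planck length is built from the three constants that govern this regime, ℓ_P = √(ħG/c³). A quick numerical check in Python, using standard CODATA values (purely illustrative):

```python
import math

# CODATA values, SI units
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2
c    = 299792458.0       # speed of light in vacuum, m/s (exact by definition)

# Planck length: the scale at which quantum mechanics meets general relativity
l_planck = math.sqrt(hbar * G / c**3)
print(f"{l_planck:.3e} m")   # -> 1.616e-35 m
```

This is some twenty orders of magnitude below the size of a proton, which is part of why no direct probe of this scale exists.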

Moreover, even working within known quantum theory and general relativity, to consider the Planck length a “clump” or “smallest unit” of spacetime is not the correct visualization. The Planck length sets a scale of uncertainty. The word “scale” in physics does not imply a hard, discrete boundary, but rather a very, very soft one. It is the opposite of a clump of spacetime. The Planck length is then interpreted as the geometric scale at which spacetime is infinitely fuzzy and statistically uncertain. It does not imply a hard little impenetrable region embedded in some abstract spacetime latticeworks. This breakdown of spacetime occurs at each continuous point in space. That is, one could zoom into any arbitrarily chosen point and observe the uncertainty emerge at the same scale. Again, no latticeworks or lumpiness is implied.

Transcendental Mathy Music

Ever wonder what π or e or other number sequences sound like when mapped into some musical sound space? I’ve written a little Mathematica Notebook, downloadable here, that lets you tinker with these possibilities with a simple interface. You can then save your work to a MIDI file which can then be loaded into your favorite music software like CuBase, Logic, Pro Tools, or even Garage Band. I would offer the full notebook as a free CDF file, but Wolfram’s current CDF format does not support writing out to files yet. However, below is the basic interface you can tinker with on this web page. You will need the free Mathematica CDF plugin installed (or a copy of Mathematica 8).

On my latest album, Smug, I used this software to create two pieces based on the transcendental numbers e and π. Descriptions are below.

Smuggy E:
The first riff uses the first 15 digits of the transcendental number e = 2.71828182845904 (0=C, 1=C#, 2=D, etc.) in 15/16 time; it then modulates so that 0=F, 1=F#, 2=G, etc.
The interlude riff is the speed of light in vacuum, c = 299792458.0 m/s, with the same mapping (0=C, 1=C#, 2=D, etc.) in 6/4. Yes, I cheated a little by adding the “.0” on the end of c since, in m/s, c is defined as an exact integer.

Smuggy π:
Similar to the above, this uses the first 10 digits of π and the speed of light with the mapping (0=C, 1=C#, 2=D, etc.).
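For anyone who wants to tinker without Mathematica, the digit-to-pitch mapping is easy to sketch. Here is my own minimal Python reconstruction (not the actual notebook): each decimal digit becomes a MIDI note number with 0 = C (middle C = 60), 1 = C#, 2 = D, and so on, and an offset of 5 semitones implements the modulation to 0 = F:

```python
def digits_to_midi(digit_string, base=60, offset=0):
    """Map each decimal digit to a MIDI note number.

    base=60 puts digit 0 at middle C (0=C, 1=C#, 2=D, ...);
    offset=5 shifts the whole mapping up so that 0=F, 1=F#, 2=G, ...
    Non-digit characters (like a decimal point) are skipped.
    """
    return [base + offset + int(ch) for ch in digit_string if ch.isdigit()]

# First 15 digits of e = 2.71828182845904, as in "Smuggy E"
print(digits_to_midi("2.71828182845904"))             # 2 -> D (62), 7 -> G (67), ...

# The same riff after the modulation: 0=F, 1=F#, 2=G, ...
print(digits_to_midi("2.71828182845904", offset=5))
```

The resulting note lists can then be written to a MIDI file with any MIDI library, or entered into a sequencer by hand.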

These are just a few of the literally infinite possibilities one can create using amusing number mappings. Let me know if you create (or have created) any of your own. I’d be interested to hear!

Chick-Fil-A Apparently Backtracks: An Unfortunate Development

I was reasonably optimistic about Chick-Fil-A’s apparent overture to end their practices of donating to anti-LGBT organizations. But apparently their president, Dan Cathy, issued a statement actively contradicting previous reports in the form of a correction. Cathy was quoted as saying:

“There continues to be erroneous implications in the media that Chick-fil-A changed our practices and priorities in order to obtain permission for a new restaurant in Chicago. That is incorrect. Chick-fil-A made no such concessions, and we remain true to who we are and who we have been.”

With nothing more than a vague intuition, my sense is that there may be some confusion or strife within the company itself. Other than simply outright incorrect reporting, what else can explain such polar views coming from the same corporation? However, as a private company, Cathy’s public word is probably a better indicator of the company’s position than an arbitrator in Chicago.

Positive Chick-Fil-A Development

Some positive news from the Chick-Fil-A LGBT civil rights front. In what appears to be a rather enlightened and magnanimous gesture, following a retreat with campus leaders in Atlanta, the Chick-Fil-A organization has issued a statement indicating they will acknowledge the civil rights of all people, including those who associate with the LGBT community. They will also stop their controversial donations to anti-LGBT groups through their non-profit charitable arm, WinShape. According to The Civil Rights Agenda (TCRA), a watchdog organization for LGBT rights that was involved in the discussions, an internal Chick-Fil-A memo to franchisees and stakeholders indicates:

• As a company, they will “treat every person with honor, dignity and respect-regardless of their beliefs, race, creed, sexual orientation and gender…[their]…intent is not to engage in political or social debates.” source

• “The WinShape Foundations is now taking a much closer look at the organizations it considers helping, and in that process will remain true to its stated philosophy of not supporting organizations with political agendas.” In meetings the company executives clarified that they will no longer give to anti-gay organizations, such as Focus on the Family and the National Organization for Marriage. source

This blog entry is not meant to be a full summary of the discussion. More detailed articles on the topic: Boston Spirit Magazine, Huffington Post, and Chicago Phoenix.