Monday, April 13, 2015

My favorite moment from when I was in grad school was when…

No doubt there’s a lot of bitterness out there about graduate school these days. There’s a steady drumbeat of despair about getting jobs, dealing with the frustrations of failed projects, the pain of publishing, all amid the backdrop of decreased funding. Here are a couple of examples I just saw:

[links to two such pieces]

Reading that last one had me nodding in agreement–there are indeed many tough times in grad school. But wait, weren’t there a lot of good ones, too? In fact, looking back at it, grad school was one of the happiest times in my life. And there were many great moments I will never forget. Here are a few of mine:

  • I remember the very first time I saw single molecule RNA FISH spots in the microscope, which came after months of optimization (i.e., messing around). There they were, super bright and unmistakable! I ran and got my advisor Sanjay, who was all smiles.
  • Talking about conspiracies, both scientific and political, with Musa.
  • The first time I saw an endogenous RNA via RNA FISH (instead of the transgenic RNA spots from before). I felt like I was at the beginning of something very cool, like there were endless possibilities ahead of me. Also wish I had figured it out a couple years earlier... :)
  • When I felt like I had finally figured out cloning with a long string of flawless ligations (winning streak since broken, by the way!).
  • Writing a super complicated (for me) simulation directly in C (implicit method with Newton-Raphson for solving a non-linear PDE in 3D, I think) all in one go and having it work perfectly the first time I compiled it. Yes! (See the sketch just after this list.)
  • When I was feeling like nothing was working and my project was hopeless, and I walked into Sanjay’s office and talked to him for half an hour, and came out feeling like a million bucks.
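
For the curious, here is a minimal sketch of the Newton-Raphson idea mentioned in the simulation bullet. To be clear, this is not the original code: the equation below, a backward-Euler (implicit) step for du/dt = -u^3 at a single point, is a made-up stand-in for whatever the real 3D PDE was, and is only meant to show the shape of the method (each implicit time step becomes a root-finding problem that Newton-Raphson solves).

```c
/* Minimal sketch (not the original simulation): Newton-Raphson for the
 * nonlinear equation that one backward-Euler time step reduces to. */
#include <stdio.h>
#include <math.h>

/* Residual of the implicit step u - u_old + dt * u^3 = 0, for du/dt = -u^3 */
static double f(double u, double u_old, double dt)  { return u - u_old + dt * u * u * u; }
static double df(double u, double dt)               { return 1.0 + 3.0 * dt * u * u; }

static double newton_step(double u_old, double dt)
{
    double u = u_old;                       /* initial guess: previous value */
    for (int i = 0; i < 50; i++) {
        double r = f(u, u_old, dt);
        if (fabs(r) < 1e-12) break;         /* converged */
        u -= r / df(u, dt);                 /* Newton-Raphson update */
    }
    return u;
}

int main(void)
{
    double u = 1.0, dt = 0.1;
    for (int n = 0; n < 10; n++) {
        u = newton_step(u, dt);             /* one implicit time step */
        printf("step %d: u = %f\n", n + 1, u);
    }
    return 0;
}
```

The same pattern scales up to the real thing, where u becomes a whole 3D grid and each Newton update becomes a linear solve.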

I bet many of you have a few of these too, so please leave them in the comments. It would be nice, in this day and age, to have a list of reasons reminding us why grad school might not be so bad after all.

Sunday, April 12, 2015

Why is everything broken? Thoughts from the perspective of methods development

I don't know when this "[something you don't like] is broken" thing became a... thing, but it's definitely now a... thing. I have no real idea, but I'm guessing maybe it started with the design police (e.g. this video), then spread to software engineering, and now there are apparently 18 million things you can look at on Google about how academia is broken. Why are so many things seemingly broken? I think the answer in many cases is that this is the natural steady state in the evolution of design.

To begin with, though, it's worth mentioning that some stuff is just broken because somebody did something stupid or careless, like putting the on/off switch somewhere you might hit it by accident. Or putting the "Change objectives" button on a microscope right next to other controls, so that you hit it while fumbling around in the dark (looking at you, Nikon!). Easy fodder for the design police. Fine, let's all have a laugh, then fix it.

I think a more interesting reason why many things are apparently broken is because that's in some ways the equilibrium solution. Let me explain with a couple examples. One of the most (rightly) ridiculed examples of bad design is the current state of the remote control:


Here's a particularly funny example of a smart home remote:
[Image caption: Yes, you can both turn on your fountain and source from FTP with this remote.]

Millions of buttons of unknown function, hard to use, bad design, blah blah. But I view this not as a failure of the remote, but rather as a sign of its enormous success. The remote control was initially a huge design win: it allowed you to control your TV from far away so that you didn't have to run around all the time just to change the channel. And in the beginning, it was basically just channel up/down, volume up/down, and on/off. A pretty simple and incredibly effective design, if you ask me! The problem is that the remote was a victim of its own success: as designers realized the utility of the remote, they began to pile more and more functionality into it, often with less thought, pushing beyond what a remote was inherently designed to do. It was the very success of the remote that made it ripe for so much variation and building-upon. It's precisely when the object itself becomes overburdened that the process stops and we settle into the current situation: a design that is "broken". If everything evolves until the process of improvement stops by virtue of the thing being broken, then practically by definition, almost everything should be broken.

Same in software development. Everyone knows that code should be clean and well engineered, and lots of very smart people work hard to make the smartest decisions they can. Why, then, do things always get refactored? I think it's because any successfully designed object (in this case, say, a software framework) will rapidly get used by a huge number of people, often for things far beyond its original purpose. The point where progress stalls is again precisely when the framework's design is no longer suitable for its purpose. That's the "broken" steady state we will be stuck with, and ironically, the better the original design, the more people will use it and the more broken it will ultimately become. iTunes, the once transformative program for managing music that is now an unholy mess, is a fantastic example of this. Hence the need for continuous creative destruction.

I see this same dynamic in science all the time. Take the development of a new method. Typically, you start with something that works really robustly, then push as far as you can until the whole thing is held together with chewing gum and duct tape, then publish. Not all methods papers are like this, but many are, with a method that is an amazing tour-de-force... and completely useless to almost everyone outside of that one lab. My rule of thumb is that if you say "How did they do that?" when you read the paper, then you're going to say "Hmm, how are we gonna do that?" when you try to implement it in your own lab.

Take CRISPR as another example. What's really revolutionary about it is that it actually works and works (relatively) easily, with labs adopting it quickly around the world. Hence, the pretty much insane pace of development in this field. Already, though, we're getting to the point where there are massively parallel CRISPR screens and so forth, things that I couldn't really imagine doing in my own lab, at least not without a major investment of time and effort. After a while, the state of the art will be methods that are "broken" in the sense that they are too complex to use outside of the confines of the lab that invented them. Perhaps the truest measure of a method is how far it goes before getting to the point of being "broken". From this viewpoint, being "broken" should be in some ways a cause for celebration!

(Incidentally, one could argue that grant and paper review and maybe other parts of academia are broken for some of the same reasons.)

Saturday, April 11, 2015

Gregg Popovich studied astronomical engineering

I was just reading this SI.com piece about Gregg Popovich, legendary NBA coach of the San Antonio Spurs, and found this line to be really interesting:
By his senior year he was team captain and a scholar-athlete, still the wiseass but also a determined cadet who loaded up with tough courses, such as advanced calculus, analytical geometry, and engineering—astronomical, electrical and mechanical. [emphasis mine]

Now, I'm pretty sure they meant aeronautical engineering, but that got me wondering if there is such a thing as astronomical engineering. Well, Wikipedia says there is something called astroengineering, which is about the construction of huge (and purely theoretical) objects in space. I wonder if Pop is thinking about Dyson spheres during timeouts.

Thursday, April 9, 2015

Dr. Padovan-Merhar!

Olivia is now Dr. Padovan-Merhar! Just defended on Tuesday–second student to graduate from the lab. And her paper just came out in Molecular Cell!


Saturday, April 4, 2015

Interpretive drawing of the lab


Why not test your blood every quarter?

Lenny just pointed me to a little internet kerfuffle that emerged after Mark Cuban tweeted that it would be a good thing to run blood tests all the time. Here's what Cuban said:

[embedded tweets from Mark Cuban]

Essentially, his point is that by using the much larger and better-controlled dataset you could get from regular blood testing, you would be able to get a lot more information about your health, and thus perhaps take earlier action on emerging health issues. Sounds pretty reasonable to me. So I was surprised to see such a strong backlash from the medical community. The counter-argument seems to have a couple of main points:
  1. Mark Cuban is a loudmouth who somehow made billions of dollars and now talks about stuff he doesn’t know anything about.
  2. Just as whole body scans can lead to tons of unnecessary interventions for abnormalities that are ultimately benign, regular blood testing would lead to tons of additional tests and treatments that would be injurious to people.
  3. Performing blood tests on everyone is prohibitively expensive, so we’d end up with “elite” patients and non-elite patients.
I have to say that I find these counterarguments to be essentially anti-scientific. On the face of it, of course Cuban is right. I’ve always been struck by how unscientific medical measurements are. If we wanted to measure something in the lab, we would never be as haphazard and uncontrolled as people are in the clinic. There are of course good reasons why it’s more difficult to do in the clinic, but just because something is hard does not mean that it is fundamentally bad or useless.

I think this feeds into the most interesting aspect of the argument, namely whether it would lead to a huge increase in false positives and thus unnecessary treatment. Well, first off, doing a single measurement is hardly a good way to avoid false positives and negatives. Secondly, yes, in our current medical system, you might end up with more unnecessary treatment–with many noting that getting into the system is the surest way to end up less healthy. But that is more of an indictment of the medical system than of Cuban’s suggestion. Sure, it would require a lot more research to fully understand what to do with this data. But without the data, that research cannot happen. And having more information is practically by definition a better basis for decisions than having less, end of story. To argue otherwise sounds a lot like sticking your head in the sand.

I'm also not so sure that doctors wouldn't be able to make wise judgments based on even n=1 data without extensive background work. Take a look at Mike Snyder's Narcissome paper (little Nature feature as well). He was able to see the early onset of Type II diabetes and make changes to stave off its effects. Of course, he had a big team of talented people combing over his data. But with time and computational power, I think everyone would have access to the interpretation. What's sad is for people not to have the data in the first place.
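
To make the false-positive point above a bit more concrete, here is a toy simulation, entirely my own sketch with invented numbers (nothing here comes from Cuban, Snyder, or any medical source). It compares two ways of flagging a blood biomarker in a population of healthy people: testing a single draw against a one-size-fits-all population cutoff, versus testing a new draw against each person's own baseline built from quarterly measurements.

```c
/* Toy simulation (illustrative only; all numbers are made up):
 * false positives from a single population cutoff vs. a personal baseline. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* crude standard-normal sampler via Box-Muller */
static double randn(void)
{
    const double PI = 3.14159265358979323846;
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void)
{
    srand(42);
    const int n_people = 100000;
    const int n_visits = 8;            /* two years of quarterly draws */
    int fp_single = 0, fp_baseline = 0;

    for (int p = 0; p < n_people; p++) {
        /* each healthy person has their own set point; the assay adds noise */
        double set_point = 100.0 + 10.0 * randn();
        double noise = 3.0;

        /* rule 1: one-off draw, flag if above a fixed population cutoff */
        double single = set_point + noise * randn();
        if (single > 115.0)
            fp_single++;

        /* rule 2: average quarterly draws into a personal baseline, then
         * flag a new draw only if it jumps well above that baseline */
        double baseline = 0.0;
        for (int v = 0; v < n_visits; v++)
            baseline += set_point + noise * randn();
        baseline /= n_visits;
        double next = set_point + noise * randn();
        if (next > baseline + 4.0 * noise)
            fp_baseline++;
    }

    printf("false positives, population cutoff: %.2f%%\n",
           100.0 * fp_single / n_people);
    printf("false positives, personal baseline: %.2f%%\n",
           100.0 * fp_baseline / n_people);
    return 0;
}
```

With these made-up parameters the single-cutoff rule flags a few percent of healthy people while the personal-baseline rule flags almost none; the particular rates don't matter, just that repeated measurements let each person serve as their own control, which is exactly the kind of well-controlled dataset a one-off draw can't give you.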

This leads to another interesting point, from the medical research standpoint: if it were really rich people making up the primary dataset, I don’t think that’s a bad thing. Medicine has a pretty long history of doing its testing primarily on non-elite patients, after all.

Friday, April 3, 2015

Theorists give great talks

We just had Rob Phillips come visit Penn and give a talk in the chemistry department. It was great! A few months back, we also had Jane Kondev come give a talk in bioengineering that was similarly a lot of fun. Now, Jane and Rob have a lot in common (both are cool, interesting people), but I think one common thread that links them is that they are both theorists by training. (Both do have strong experimental work happening in their groups now, by the way.) I think theorists (at least in our sort of systems biology) give some of the most engaging talks, and the reasons why are illustrated by the best features of both their presentations.

The first departure from business as usual is in the amount of data presented. In Rob’s case, he presented almost no data from his own lab. Jane’s talk also had a lot of background from other people’s work, and the work he did present always came with heavy references to other literature and findings. This lets them set up the conceptual issues well, and situate their own work in the broader context of the science. I think some people feel that bringing up other work distracts from their own, and that they don’t have time for it because they have so much of their own to present. Rob and Jane’s talks prove these concerns overblown: their talks feel rich with history, and thus with significance. Those are good things.

The other main thing I’ve noticed in talks by theorists is that they emphasize the conceptual. Most talks suffer not from a lack of data but from an overabundance of it. Here’s a simple rule: if you’re not going to explain a piece of data, don’t show it. If it’s impossible for the audience to truly grasp how the data you show proves your point, then you may as well not show it and just tell them that it all works out. Oftentimes in bad talks it’s hard to tell that this is happening, because the speaker hasn’t even set up the question well.

Which brings me to another nice thing about theorists: they aren’t afraid to delve into what might be called philosophy. For some of us, I think there is maybe a fear that people won’t take us seriously if we muse about the big picture in our talks. I think those fears are ill-founded. Overall, I think biomedical science could do with a little more thinking and a little less doing. Another nice thing about this is that for trainees, it can be very inspiring to think about deeper problems. Isn’t that what got us all into this in the first place?

On a related but peripheral note, I was at a conference a couple years ago and was shocked by what I was hearing from the students and postdocs. I asked one student what they thought about a fundamental question in the field, and they responded with a blank stare, as though they had never been asked that question before. Another postdoc I met, when asked about some underpinnings of the field, literally responded with “I just want to get an assay that works and get a bunch of data”. If that’s you, go see a talk by a theorist and get back in touch with your inner scientist!

Wednesday, April 1, 2015

Feeling dated

Conversation in the lab:

Andrew: "Isn't there some song that the CIA plays to prisoners to get them to talk?"

Arjun: "Probably some crazy pop songs. There was this one from grad school that used to play on the radio in the lab that drove me nuts. I can't remember it, though."

Olivia: "What was it? I want to know–I feel like it would date you."

Sara: "Was it by John Lennon?"

Ouch.