Moore’s Law for Space-Based Imaging

Hundreds of eyes in the sky, just like this one. Getting better, cheaper, and more numerous every year.

Planet Labs Dove

Imagine a network of satellites, all taking daily pictures of the Earth beneath them and reporting that data back in real time. Owned by the CIA? DOD? NSA? Nope. Silicon Valley startup Planet Labs, the latest disruptive innovator in space, will have a fleet of small imaging satellites in orbit by the end of this year:

But that is just the start. Last week, Planet Labs announced that it would put about 100 satellites into space from the United States and Russia, bringing the total number of “Doves,” as the company calls them, to 131. That larger network, which Planet Labs hopes to complete within a year, is expected to create a daily photo mosaic of most of Earth.

That mosaic could be valuable to private customers, like agricultural companies monitoring farmlands, or even to governments trying to figure out how to aid natural disaster victims. The company has so far booked contracts worth more than the $65 million in private equity it has raised, according to Will Marshall, the company’s co-founder and chief executive.

Like many disruptive innovations, these satellites (built from mobile phone components!) aren’t as good as full-scale ones costing hundreds of millions of dollars, but they cost a tiny fraction of that. And they’re getting better, and cheaper — really quickly:

By making little machines that are often updated, Mr. Gillmore said, “we’re building satellites with computers that are six months old. Lots of satellites have 10-year-old computers.” Version nine, which is almost complete, cost about 35 percent less than the current version in space, and was made four times faster, he estimated.

And that really is just the start. The company plans to launch new and improved versions of these satellites over time. Think of it as a Moore’s Law for space-based imaging: more coverage area, more images per day, improved resolution, perhaps observations at additional wavelengths beyond visible light. Real-time crop imaging, firefighting, climate monitoring, ecological studies, global security (or insecurity)… What applications might exist that haven’t even been conceived of yet?

Posted in Innovation, Space

More Biotech Hub Rankings

GEN (Genetic Engineering and Biotechnology News) has released its own, more comprehensive biotech hub rankings. As expected, Boston-Cambridge and the Bay Area are at the top, with San Francisco edging out Boston in most criteria. The GEN rankings include not only VC investing, but also data on biotechnology patents since 1976, an estimate of the number of biomedical employees in each area, NIH grant dollars, and regional square footage of lab space. Some of those numbers are pretty eye-popping. According to GEN, the Bay Area alone has nearly 30 million square feet of lab space and has generated over 3,400 biotechnology patents. Boston-Cambridge, meanwhile, clocks in at 2,900 patents and nearly 19 million square feet of space (with another 3 million+ coming online in the near future).

Posted in Innovation, Research

Venture Funding of Biotech is VERY Concentrated… and Very Limited

Fierce Biotech has released their latest analysis of venture capital funding of biotech in the U.S. last year, broken down by metropolitan area:


San Francisco is back out in front, edging out Boston-Cambridge.  After San Diego and Washington, funding tails off rapidly, and by the time you hit #15 Chicago, you’re down to about 1% of the pie.

There are a couple of key takeaways.  First, biotech startup funding is very concentrated, with the big three cities (San Francisco, Boston-Cambridge, and San Diego) totaling more than 60% of all funding ($2.5B).  That number, incidentally, is 25% more than the total of all biotech VC funding in Europe.

Second interesting point: there really isn’t that much VC funding of biotech! The total is only about $4.5B in the U.S., something just under $2B in Europe, and a smattering elsewhere. In other words, annual global VC funding of biotech is smaller than the R&D budget of a single large pharma company. Keep that in mind when you hear multiple pharma companies planning to source a large part of their pipelines from licensing and acquisitions.

Finally, recognize that distribution?  Yup, Pareto.  Apparently biotech hub performance is just like employee performance.

Posted in Innovation, Research

Icicle Forest

Icicle Forest, February 2014

Eastern Massachusetts, February 2014


Employee Performance Does Not Follow a Bell Curve

Here’s a great post written by Josh Bersin that’s gotten a lot of attention in the past few days:  The Myth of the Bell Curve.  As I mentioned several weeks ago, more and more evidence shows that employee performance in the 21st century does not follow a bell curve, and that forced stack ranking is a destructive practice.

As Josh explains, employee performance more typically follows a power law (or Pareto) distribution with a sharp peak of just a few highly performing employees, followed by a long tail of what might be considered below average performers.  But a better way to think of the distribution is of a set of hyper-performers embedded within — and often tremendously helped by — a large pool of normal performers.  Yes, there are occasionally truly poor performers, but it makes no sense to arbitrarily label 5-10% of all employees as such.  In fact, even without forced bottom rankings, there are several ways in which the assumptions behind bell curve-driven evaluation schemes hurt performance and undermine the teamwork and collaboration essential for success in a modern information-based economy (or academic environment).
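As a rough illustration (not Bersin’s data — the distributions and parameters below are invented for the sketch), simulating “output” drawn from a bell curve versus a power law shows how differently the top 10% of performers contribute to the total in each world:

```python
import random

def top_decile_share(draw, n=100_000, seed=42):
    """Fraction of total simulated 'output' produced by the top 10% of n employees."""
    rng = random.Random(seed)
    xs = sorted((draw(rng) for _ in range(n)), reverse=True)
    return sum(xs[: n // 10]) / sum(xs)

# Bell-curve world: output ~ Normal(100, 15), floored at zero
bell = top_decile_share(lambda r: max(r.gauss(100, 15), 0.0))

# Power-law world: output ~ Pareto(alpha=1.5), a heavy right tail
pareto = top_decile_share(lambda r: r.paretovariate(1.5))

print(f"top 10% share, bell curve: {bell:.0%}")
print(f"top 10% share, power law:  {pareto:.0%}")
```

Under the bell curve, the top decile produces only modestly more than its headcount share; under the power law, a handful of hyper-performers account for a large slice of everything produced.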

Posted in Fixing Big Pharma Research, Management

P values. I do not think that value means what you think it means.

Regina Nuzzo has a news feature in Nature about P values that every biomedical scientist should read.  P values — the common measure of statistical significance (i.e. the “believability” of an experiment) — do not mean what most scientists think they mean.

The P-value calculation was originally developed in the 1920s by the statistician Ronald Fisher as a way to judge whether an observed result was worth looking into further.

Researchers would first set up a ‘null hypothesis’ that they wanted to disprove, such as there being no correlation or no difference between two groups. Next, they would play the devil’s advocate and, assuming that this null hypothesis was in fact true, calculate the chances of getting results at least as extreme as what was actually observed. This probability was the P value. The smaller it was, suggested Fisher, the greater the likelihood that the straw-man null hypothesis was false.
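Fisher’s devil’s-advocate calculation can be sketched directly: assume the null hypothesis (no difference between groups) is true, shuffle the group labels many times, and count how often a difference at least as extreme as the observed one turns up. A minimal permutation-test version (the measurements here are hypothetical, just for illustration):

```python
import random

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """P value for the null 'no difference between groups': the fraction of
    random label shuffles whose mean difference is at least as extreme as
    the difference actually observed."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a = pooled[: len(group_a)]
        b = pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical measurements from treated vs. control samples
treated = [5.1, 5.3, 5.2, 5.0, 5.4]
control = [4.1, 4.0, 4.2, 4.3, 3.9]
print(permutation_p_value(treated, control))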

In much of biomedical research, an experiment with a P-value below 0.05 or 0.01 is considered “statistically significant”, and therefore interpreted as a believable result. Many experiments yield calculated P-values of 0.001 or even lower. Attracted by the apparent precision of a calculated P-value and its resemblance to a true probability calculation, working scientists have come to interpret the P-value as the actual probability of their result being correct. But that is not true. The P-value summarizes data in the context of a specific null hypothesis, but it does not take into account the odds that the real effect was there in the first place.

The mathematics are complicated, but by one widely used calculation quoted by Nuzzo, a P-value of 0.01 actually corresponds in the real world to an 11% probability that the experimental result is due to random chance. For P = 0.05, the probability rises to 29%! Even worse, some scientists are guilty of data dredging or “p-hacking”: trying different conditions until you get the P-value you want. As a consequence, the P-value’s assumptions of random sampling go out the window and, if you’ve tortured the data enough, the calculation becomes meaningless. No wonder the overall reproducibility of biomedical research has been called into question.
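One widely used bound that reproduces those numbers is the minimum Bayes factor −e·p·ln p (my assumption about which calculation is meant; the exact answer depends on the prior). Assuming 50-50 prior odds that a real effect exists:

```python
import math

def min_false_alarm_prob(p, prior_true=0.5):
    """Lower bound on the probability that a 'significant' result is a false
    alarm, using the minimum Bayes factor -e * p * ln(p) (valid for p < 1/e)."""
    bf = -math.e * p * math.log(p)               # best-case evidence against the null
    prior_odds_null = (1 - prior_true) / prior_true
    post_odds_null = bf * prior_odds_null
    return post_odds_null / (1 + post_odds_null)

for p in (0.05, 0.01):
    print(f"P = {p}: false-alarm probability is at least {min_false_alarm_prob(p):.0%}")
```

With even odds going in, P = 0.05 leaves at least a 29% chance of a false alarm, and P = 0.01 at least 11%; if the hypothesis was a long shot to begin with, the false-alarm probability is higher still.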

A statistically significant P-value is in fact just an invitation to repeat the experiment.  A practicing scientist needs to realize that, even with a highly “significant” P-value, there is still a relatively high probability that the result will not repeat.  The best advice — something that I learned in the first week of grad school — is that you shouldn’t believe anything until you see n=2.  Better yet, n=3.

Posted in Data Analysis

Product Cycles in the Pharma Industry and How to “Shorten” Them – Part 1

A few weeks ago I commented on what may be the fundamental limit on a stable pharmaceutical industry — products have to be on the market for at least as long as it takes to replace them. Cash flow significant enough to fund serious research only lasts as long as a drug has market exclusivity. Thus, to a first approximation, a firm has to develop a new drug before it runs out of cash from sales of existing products.

Thinking about it a little further, I realized that it already takes an unsustainably long time to develop new products completely from scratch, but that the overall academic/startup/existing firm ecosystem has developed (or in some instances is in the process of further adapting) to enable these long product development times.

First of all, the very basic biomedical research needed to create a key enabling technology (say, monoclonal antibodies), or to discover a promising drug target or pathway, takes a lot of time — so much time that it can’t actually be part of a conventional product development cycle. Fortunately, the scale can be small, and there are a lot of interesting scientific discoveries to be made — all of which is a good fit for academic laboratories. Much of the basic research that later leads to commercial applications is in fact funded through government or foundation-sponsored grant money. Such academic research follows its own funding cycle, and it has its own reward system that focuses on the production of trained scientists and significant research results, manifested as published papers (and patents). Papers have a much shorter “development cycle” than marketed drugs!

Later, more applied research (which requires larger numbers of people and significant funding increases) is often spun off in the form of a start up company, funded through seed money, angel investing, venture capital, etc.  Here the funding cycle and system of reward is based on increasing valuation derived from estimates of future profitability as the firm develops its (as yet unmarketed) products. At the end, the company can go public or be bought by a larger existing firm.  In either case, all that early product development work typically occurs prior to, and almost completely uncoupled from, product sales.  Of course the true overall cycle time wasn’t really shortened, it’s just that parts of development were structured in ways that had their own sustainable cycle of funding and reward.

The net effect is that an existing drug company can either buy a startup company or launch an internal project based on collaborations (or published research) with a significantly shorter effective development time (at least as experienced by the existing firm).  Needless to say, this relationship holds for all sorts of science and technology companies, where government- or foundation-sponsored basic research can take place on a long time scale that is not constrained by the need for short term profits.

Getting back to Pharma, an important implication is that a pharmaceutical firm can benefit if it can apply this sort of decoupling, off-loading development from its own internal R&D portion of the overall product development cycle. I have some thoughts on that, which I will put in another post…

Posted in Fixing Big Pharma Research, Management, Research