Friday, August 26, 2016

Paul's Update Special 8/26

Daniel Sarewitz is a professor of science and society at Arizona State University’s School for the Future of Innovation and Society, and the co-director of the university’s Consortium for Science, Policy, and Outcomes. He is also the co-editor of Issues in Science and Technology and a regular columnist for the journal Nature.



Science isn’t self-correcting, it’s self-destructing. To save the enterprise, scientists must come out of the lab and into the real world.

Science, pride of modernity, our one source of objective knowledge, is in deep trouble. Stoked by fifty years of growing public investments, scientists are more productive than ever, pouring out millions of articles in thousands of journals covering an ever-expanding array of fields and phenomena. But much of this supposed knowledge is turning out to be contestable, unreliable, unusable, or flat-out wrong. Along the way it is also undermining the four-hundred-year-old idea that wise human action can be built on a foundation of independently verifiable truths. Science is trapped in a self-destructive vortex; to escape, it will have to abdicate its protected political status and embrace both its limits and its accountability to the rest of society.

Much of the problem can be traced back to a bald-faced but beautiful lie upon which rests the political and cultural power of science. It goes like this:
Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.

The lie's author was the M.I.T. engineer Vannevar Bush, who ran the government's wartime research effort and was featured on the cover of Time magazine as the "General of Physics." As the war drew to a close, Bush envisioned transitioning American science to a new era of peace, where top academic scientists would continue to receive the robust government funding they had grown accustomed to since Pearl Harbor but would no longer be shackled to the narrow dictates of military need and application, not to mention discipline and secrecy. Instead, as he put it in his July 1945 report Science, The Endless Frontier, by pursuing "research in the purest realms of science" scientists would build the foundation for "new products and new processes" to deliver health, full employment, and military security to the nation.

Government funding for basic research at universities and colleges rose from $82 million to $24 billion, a more than fortyfold increase when adjusted for inflation. By contrast, government spending on more "applied research" at universities was much less generous, rising to just under $10 billion. The power of the lie was palpable: "the free play of free intellects" would provide the knowledge that the nation needed to confront the challenges of the future.
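A back-of-the-envelope check of those figures; the inflation deflator below is an assumed round number for illustration, not a value taken from the source:

    # Rough check of the funding figures quoted above.
    # The deflator is an assumption (roughly the cumulative
    # consumer-price inflation over the period), not a number
    # from the article.
    nominal_start = 82e6   # $82 million
    nominal_end = 24e9     # $24 billion
    deflator = 7.3         # assumed cumulative inflation factor

    nominal_increase = nominal_end / nominal_start   # ~293x in nominal dollars
    real_increase = nominal_increase / deflator      # ~40x in constant dollars

    print(f"nominal: {nominal_increase:.0f}x, real: {real_increase:.0f}x")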

To go along with all that money, the beautiful lie provided a politically brilliant rationale for public spending with little public accountability. Politicians delivered taxpayer funding to scientists, but only scientists could evaluate the research they were doing. Outside efforts to guide the course of science would only interfere with its free and unpredictable advance.

Somehow, it would seem, even as scientific curiosity stokes ever-deepening insight about the fundamental workings of our world, science has managed simultaneously to deliver a cornucopia of miracles on the practical side of the equation, just as Bush predicted: digital computers, jet aircraft, cell phones, the Internet, lasers, satellites, GPS, digital imagery, nuclear and solar power.

So one might be forgiven for believing that this amazing effusion of technological change truly was the product of “the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.” But one would be mostly wrong.

The story of how DOD mobilized science to help create our world exposes the lie for what it is and provides three difficult lessons that have to be learned if science is to evade the calamity it now faces.

First, scientific knowledge advances most rapidly, and is of most value to society, not when its course is determined by the “free play of free intellects” but when it is steered to solve problems — especially those related to technological innovation.

Second, when science is not steered to solve such problems, it tends to go off half-cocked in ways that can be highly detrimental to science itself.

Third — and this is the hardest and scariest lesson — science will be made more reliable and more valuable for society today not by being protected from societal influences but instead by being brought, carefully and appropriately, into a direct, open, and intimate relationship with those influences.

How DOD Gave Science Its Mojo
Almost immediately after World War II, the Department of War — soon renamed the Department of Defense — began to harness together the complete set of players necessary to ensure the United States would have all the technologies needed to win the Cold War. At the same time, protected from both the logic of the marketplace and the capriciousness of politics by the imperative of national defense, DOD was a demanding customer for some of the most advanced technological products that high-tech corporations could produce. In the late 1950s and well into the 1960s, as the role for computers in military affairs was growing but the science wasn’t keeping up, DOD’s Advanced Research Projects Agency essentially created computer science as an academic discipline by funding work at M.I.T., Carnegie Mellon, Stanford, and other institutions.

Another example: The earliest jet engines, back in the 1940s, needed to be overhauled about every hundred hours and were forty-five times less fuel-efficient than piston engines. The military bought them anyway, because their speed advantage mattered more than their flaws, and its purchases paid for the incremental improvements that eventually made jets reliable and efficient. And another: AT&T's Bell Labs, where the transistor effect was discovered, could use the demands (and investments) of the Army Signal Corps for smaller and more reliable battlefield communication technologies to improve scientific understanding of semiconducting materials as well as the reliability and performance of transistors. It was military purchases that kept the new transistor, semiconductor, and integrated-circuit industries afloat in the early and mid-1950s.

Today, DOD continues to push rapid innovation in select areas, including robotics (especially for drone warfare) and human enhancement (for example, to improve the battlefield performance of soldiers). But through a combination of several factors — including excessive bureaucratic growth, interference from Congress, and long-term commitments to hugely expensive and troubled weapons systems with little civilian spillover potential, such as missile defense and the F-35 joint strike fighter — the Pentagon's creativity and productivity as an innovator have significantly dissipated.

War on Cancer
Fran Visco was diagnosed with breast cancer in 1987. A Philadelphia trial lawyer intimidated by no one, she chose to be treated with a less toxic chemotherapy than the one her doctor recommended. Visco was a child of the lie. "All I knew about science was that it was this pure search for truth and knowledge," she says. So, logically enough, she and the other activists at the National Breast Cancer Coalition (NBCC), which she helped found, started out by trying to get more money for breast cancer research at the country's most exalted research organization, the National Institutes of Health's National Cancer Institute. But Visco was also a child of the Sixties with a penchant for questioning authority, and she wanted to play an active role in figuring out how much money was needed for research and how best to spend it.

Through an accident of congressional budgeting, it turned out that the only way to meet NBCC's $300 million goal for new breast cancer research funding was to have most of the money allocated to the Department of Defense. So in November 1992, Congress appropriated $210 million for a peer-reviewed breast cancer research program to be administered by the Army.

When Visco went to DOD, “it was a completely different meeting.” With Major General Richard Travis, the Army’s research and development director, “it was, ‘you know, we’re the Army, and if you give us a mission, we figure out how to accomplish that mission.’” It was, “‘Ladies, I’m going to lead you into battle and we’re going to win the war.’” Although Visco was at first “terrified” to find herself working with the military, she also found it refreshing and empowering — a “fantastic collaboration and partnership.”

During its first round of grantmaking in 1993–94, the program funded research on a new, biologically based targeted breast cancer therapy — a project that had already been turned down multiple times by NIH’s peer-review system because the conventional wisdom was that targeted therapies wouldn’t work. The DOD-funded studies led directly to the development of the drug Herceptin, one of the most important advances in breast cancer treatment in recent decades. There have been few major advances in breast cancer treatment since then.

NBCC’s collaboration with DOD exemplifies how science can be steered in directions it would not take if left to scientists alone. But that turned out not to be enough. Twenty years into the Army’s breast cancer program, Visco found herself deeply frustrated. The Army was providing grants for innovative, high-risk proposals that might not have been funded by NCI. But that’s where the program’s influence ended. What Visco and Gen. Travis had failed to appreciate was that, when it came to breast cancer, the program lacked the key ingredient that made DOD such a successful innovator in other fields: the money and control needed to coordinate all the players in the innovation system and hold them accountable for working toward a common goal.

Ultimately, “all the money that was thrown at breast cancer created more problems than success,” Visco says. What seemed to drive many of the scientists was the desire to “get above the fold on the front page of the New York Times,” not to figure out how to end breast cancer.

The Measure of Progress
For much of human history, technology advanced through craftsmanship and trial-and-error tinkering, with little theoretical understanding. The systematic study of nature — what we today call science — was a distinct domain, making little or no contribution to technological development. Science has been such a wildly successful endeavor over the past two hundred years in large part because technology blazed a path for it to follow. Not only have new technologies created new worlds, new phenomena, and new questions for science to explore, but technological performance has provided a continuous, unambiguous demonstration of the validity of the science being done.

Vannevar Bush’s beautiful lie makes it easy to believe that scientific imagination gives birth to technological progress, when in reality technology sets the agenda for science, guiding it in its most productive directions and providing continual tests of its validity, progress, and value. Absent their real-world validation through technology, scientific truths would be mere abstractions.

Einstein, We Have a Problem
The science world has been buffeted for nearly a decade by growing revelations that major bodies of scientific knowledge, published in peer-reviewed papers, may simply be wrong. What is to be made of this ever-expanding litany of dispiriting revelations and reversals? Well, one could celebrate. “Instances in which scientists detect and address flaws in work constitute evidence of success, not failure,” a group of leaders of the American science establishment — including the past, present, and future presidents of the National Academy of Sciences — wrote in Science in 2015, “because they demonstrate the underlying protective mechanisms of science at work.” But this happy posture ignores the systemic failings at the heart of science’s problems today.

Richard Horton, editor-in-chief of The Lancet, puts it like this:
The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.

Part of the problem surely has to do with the pathologies of the science system itself. Academic science, especially, has become an onanistic enterprise worthy of Swift or Kafka. As a university scientist you are expected to produce a continual stream of startling and newsworthy findings. Here’s how the great biologist E. O. Wilson describes the life of an academic researcher:
You will need forty hours a week to perform teaching and administrative duties, another twenty hours on top of that to conduct respectable research, and still another twenty hours to accomplish really important research.... Make an important discovery, and you are a successful scientist in the true, elitist sense in a profession where elitism is practiced without shame.... Fail to discover, and you are little or nothing.

To bring in research grants, you need to show that your previous grants yielded “transformative” results and that your future work will do the same. To get papers published, you need to cite related publications that provide support for your hypotheses and findings. The scientific publishing industry exists not to disseminate valuable information but to allow the ever-increasing number of researchers to publish more papers — now on the order of a couple million peer-reviewed articles per year — so that they can advance professionally. As of 2010, about 24,000 peer-reviewed scientific journals were being published worldwide to accommodate this demand.

These figures would not have shocked the historian of science and physicist Derek de Solla Price, who more than half a century ago observed that “science is so large that many of us begin to worry about the sheer mass of the monster we have created.” One cumulative result of these converging stresses (a result that Price did not anticipate) is a well-recognized pervasive bias that infects every corner of the basic research enterprise — a bias toward the new result. Yet, to fixate on systemic positive bias in an out-of-control research system is to miss the deeper and much more important point. The reason that bias seems able to infect research so easily today is that so much of science is detached from the goals and agendas of the military-industrial innovation system, which long gave research its focus and discipline. 

Lemmings Studying Mice
A neuroscientist by training, Susan Fitzpatrick worries a lot about science and what Price called the “sheer mass of the monster.” “The scientific enterprise used to be small, and in any particular area of research everyone knew each other; it had this sort of artisanal quality,” she says.

As president of the James S. McDonnell Foundation, which funds research on cognition and the brain, Fitzpatrick is concerned about where research dollars are flowing. Just as Visco observed what she called the “lemming effect” — researchers running from one hot topic to the next — Fitzpatrick also sees science as driven by a circular, internal logic. “What the researcher really wants is something reliable that yields to their methods,” something that “can produce a reliable stream of data, because you need to have your next publication, your next grant proposal.”

More than one hundred different strains of mice have been developed for the purpose of studying Alzheimer’s, and numerous chemical compounds have been shown to slow the course of Alzheimer’s-like symptoms in mice. Yet despite the proliferation of mouse and other animal models, only one out of 244 compounds that made it to the trial stage in the decade between 2002 and 2012 was approved by the FDA as a treatment for humans — a 99.6 percent failure rate, and even the one drug approved for use in humans during that period doesn’t work very well.
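For concreteness, the quoted failure rate is simply the complement of the approval rate; a one-line check:

    # 1 approval out of 244 trial-stage compounds (figures from the text above)
    approved, total = 1, 244
    print(f"failure rate: {1 - approved / total:.1%}")  # -> failure rate: 99.6%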

A search for article titles or abstracts containing the words “brain” and “mouse” (or “mice” or “murine”) in the NIH’s PubMed database yields over 50,000 results for the decade between 2005 and 2015 alone. If you add the word “rat” to the mix, the number climbs to about 80,000. It’s a classic case of looking for your keys under the streetlight because that’s where the light is: the science is done just because it can be. The results get published and they get cited and that creates, Fitzpatrick says, “the sense that we’re gaining knowledge when we’re not gaining knowledge.”
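Counts like these can be reproduced against NCBI's public E-utilities interface. The sketch below is illustrative only: the field tags and date syntax are assumptions about how the search was phrased, and PubMed's index has grown since, so the numbers returned today will not match the article's.

    # Minimal sketch: counting PubMed records via NCBI E-utilities.
    import json
    import urllib.parse
    import urllib.request

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term: str) -> int:
        """Return the number of PubMed records matching a query."""
        params = urllib.parse.urlencode({
            "db": "pubmed",
            "term": term,
            "rettype": "count",
            "retmode": "json",
        })
        with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
            return int(json.load(resp)["esearchresult"]["count"])

    # Title/abstract search for "brain" plus mouse-model terms, 2005-2015.
    rodents = "mouse[Title/Abstract] OR mice[Title/Abstract] OR murine[Title/Abstract]"
    query = f"brain[Title/Abstract] AND ({rodents}) AND 2005:2015[dp]"
    print(pubmed_count(query))
    # Adding "rat" to the mix, as the article describes:
    print(pubmed_count(query.replace("murine[Title/Abstract]",
                                     "murine[Title/Abstract] OR rat[Title/Abstract]")))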

But it’s worse than that. Scientists cite one another’s papers because any given research finding needs to be justified and interpreted in terms of other research being done in related areas — one of those “underlying protective mechanisms of science.” But what if much of the science getting cited is, itself, of poor quality?

A scientific model allows you to study a simplified version, or isolated characteristics, of a complex phenomenon. This simplification is sometimes justified, for instance, if the cause-and-effect relations being studied in the model (say, the response of an airfoil to turbulence in a wind tunnel) operate in the same way in the more complex context (an airplane flying through a storm). Fitzpatrick thinks that such reasoning is not justified when using mouse brains to model human neurodegenerative disease.

But her concerns about this way of approaching brain science have more devastating implications when the models are extended still further to explore the neurological aspects of human behavioral dysfunction. The problem, as Fitzpatrick explains it, is that in this space between the proxy — say, measuring inhibitory control in a mouse, or for that matter a person — and a complex behavior, such as drug addiction, lies a theory about what causes crime and addiction and anti-social behavior. The theory “has ideological underpinnings. It shapes the kind of questions that get asked, the way research gets structured, the findings that get profiled, the person that gets asked to give the big speech.”

Technology keeps science honest. But for subjects that are incredibly complex, such as Alzheimer’s disease and criminal behavior, the connection between scientific knowledge and technology is tenuous and mediated by many assumptions — assumptions about how science works (mouse brains are good models for human brains); about how society works (criminal behavior is caused by brain chemistry); or about how technology works (drugs that modify brain chemistry are a good way to change criminal behavior).

But Is It Science?
Problems of values, assumptions, and ideology are not limited to neuroscience but are pervasive across the scientific enterprise. The physical sciences, by contrast, study objects whose behavior is predictable and whose fundamental attributes do not vary: the electron, the photon, the chemical reaction, the crystalline structure, when confined to the controlled environment of the laboratory or the engineered design of a technology, behaves as it is supposed to behave pretty much all the time. This combination of predictable behavior and invariant fundamental attributes is what makes the physical sciences so valuable in contributing to technological advance.

But many other branches of science study things that cannot be unambiguously characterized and that may not behave predictably even under controlled conditions — things like a cell or a brain, or a particular site in the brain, or a tumor, or a psychological condition. Or a species of bird. Or a toxic waste dump. Or a classroom. Or “the economy.” Or the earth’s climate. Such things may differ from one day to the next, from one place or one person to another. Their behavior cannot be described and predicted by the sorts of general laws that physicists and chemists call upon.

To ensure that science does not become completely infected with bias and personal opinion, the nuclear physicist Alvin Weinberg, who coined the term "trans-science" for questions that can be asked of science but cannot be answered by it, recognized that it would be essential for scientists to "establish what the limits of scientific fact really are, where science ends and trans-science begins." But doing so would require "the kind of selfless honesty which a scientist or engineer with a position or status to maintain finds hard to exercise." Weinberg's pleas for "selfless honesty" in drawing the lines of expertise have gone largely unheeded, as scientists have, over the past forty years, generally sought not to distinguish trans-science from science but to try — through what amounts to a modern sort of alchemy — to transmute trans-science into science.

The profusion of partial truths, each defended by its own set of experts, is what comes to pass when science tries to answer trans-scientific questions like: Are genetically engineered crops necessary for feeding a burgeoning global population? Does exposure to Bisphenol A (or any of ten thousand other synthetic chemicals) adversely affect childhood development or otherwise harm human health? Do open markets benefit all trading partners? What will be the future economic costs of a warming climate to a particular nation or region? Does standardized testing improve educational outcomes? Why is obesity rising in America, and what can be done about it?

If both scientific research and political debates over such questions seem to drag on endlessly, surely one reason is that we have the wrong expectations of science. Our common belief is that scientific truth is a unitary thing — there is one fact of the matter, which is why the light always goes on when I flip the switch. But trans-scientific questions often reveal multiple truths, depending in part on what aspects of an issue scientists decide to do research on and how they go about doing that research. Sometimes the problem is not that it is hard to come up with facts, but that it is all too easy.

There is a very good reason why the problem of poor-quality science is showing up most conspicuously in biomedical research. Even as government funding for biomedical science in the United States equals that of all other fields of research combined, diseases remain uncured, pharmaceutical innovation has slowed to a crawl, and corporate investments are extremely risky because of the staggering failure rates of new drug trials. Biomedical science is failing the truth-test of technology.

Datageddon
These difficulties are about to get much worse. Many fields of science are now staking their futures on what is sometimes called “big data” — the creation of enormous new data sets enabled by new technological tools for collecting, storing, and analyzing virtually infinite amounts of information.

If mouse models are like looking for your keys under the street lamp, big data is like looking all over the world for your keys because you can — even if you don’t know what they look like or where you might have dropped them or whether they actually fit your lock. As Michelle Gittelman, a professor of management at Rutgers University who studies pharmaceutical innovation, puts it in a recent paper:
The biotechnology revolution was bound to fail, given the limits of predictive science to solve problems in complex natural phenomena.... [T]he experience of genetics in medical research has demonstrated that a moving frontier in scientific knowledge does not translate to a corresponding advance in technological innovation.

Science is in a pincer grip, squeezed between revelations that entire areas of scientific inquiry are no good, and the willy-nilly production of unverifiable knowledge relevant to the unanswerable questions of trans-science. Even as the resulting chaos compromises technological progress — aimed at, say, preventing or curing breast cancer — the boundary between objective truth and subjective belief appears, gradually and terrifyingly, to be dissolving.

Managing for Truth
For twenty years, Jeff Marqusee had to come up with practical solutions to environmental problems for the Department of Defense. By the standards of the beautiful lie, his approach was nothing short of heresy: "You want to actually manage research." With a Ph.D. from M.I.T., Marqusee too is a child of the lie. Like Visco and Fitzpatrick, Marqusee thinks that the absence of accountability has led to "a system which produces far too many publications" and has "too many mouths to feed."

When Marqusee talks about the need to “manage research” he doesn’t mean telling scientists how they should do their work, or even what they should work on; he means making sure that the science that’s being done makes sense in terms of the goal to which it is supposed to contribute. Marqusee came to realize that if he funded scientists and left them alone to do their work, he’d end up with a lot of useless knowledge and a lot of unsolved problems. It’s not as though he didn’t fund rigorous, fundamental research: “Sure we wanted to have high-quality publications, we wanted to advance the scientific field, but why? Because we had a problem we wanted to solve.” The beautiful lie insists that scientists ought to be accountable only to themselves. Marqusee’s advice to his staff was precisely the contrary: “Have no constituency in the research community, have it only in the end-user community.”

Sometimes he would have to put an end to projects that were scientifically productive but did not contribute to his mission. But if your constituency, to use Marqusee’s term, is society, not scientists, then the choice of what data and knowledge you need has to be informed by the real-world context of the problem to be solved. The questions you ask are likely to be very different if your end goal is to solve a concrete problem, rather than only to advance understanding. Marqusee quips that the best way to reorient scientists would be to “pay them to care about the problem.”

This is really all that Fran Visco is asking for. Of course ending breast cancer is a vastly more complex scientific and organizational problem than a well-bounded engineering challenge like finding a cheap and fast way to diagnose sickle-cell disease. But that would seem to be all the more reason why, after all the billions spent — and with forty thousand women a year still dying from the disease in the United States alone — someone needed to be accountable for driving the system toward a solution. So Visco and her colleagues decided that NBCC would shoulder that burden and start managing the science itself.

Lacking a big checkbook to fund research directly, NBCC instead began to bring scientists together to compare ideas and results, foster collaborations that weren’t happening but should have been, and accelerate the process of developing and testing a vaccine. It set a deadline — 2020 — for ending breast cancer and staked its credibility on achieving at least major progress by that date. Visco rejects the idea that after decades of research and billions in funding, ending breast cancer can still be a matter of just waiting for someone to make an unexpected discovery.

NBCC has attracted about thirty scientists, many of them from leading cancer research groups, to work on the Artemis vaccine project, now in its sixth year. They have selected the antigens that will be targeted by the vaccine, and are starting to plan for clinical trials. Recently, NBCC also started a second leg of the Artemis Project, this one focused on stopping breast cancer from metastasizing to other parts of the body, a problem that had, like vaccine research, mostly been neglected by the mainstream research community.

The Artemis Project is different from science-as-usual in many ways. It is small, collaborative, and focused not on producing good science for its own sake, nor on making a profit, but on solving a problem.

Returning to Our World
Is science today just the latest candidate for inclusion in the growing list of failing institutions that seems to characterize our society? As with democratic politics, criminal justice, health care, and public education, science’s organization and culture are captured by a daunting, self-interested inertia, and a set of values reflecting a world that no longer exists.

In the future, the most valuable science institutions will be closely linked to the people and places whose urgent problems need to be solved; they will cultivate strong lines of accountability to those for whom solutions are important; they will incentivize scientists to care about the problems more than the production of knowledge. The science they produce will be of higher quality, because it will have to be.

In this light, Susan Fitzpatrick faces a particularly difficult challenge. She wants the philanthropic foundation that she leads to maximize the potential for neuroscience to help reduce human suffering, but she doesn’t think that this field has much to say yet about lessening the terrible burdens of most brain diseases.

Perhaps for now, research to help people with these diseases ought to aim at more practical questions. “I don’t think you can tell people ‘Well, we’ve got another forty years of research that we’re going to have to do’ when we also don’t know if there are better ways of supporting people.” 

Advancing according to its own logic, much of science has lost sight of the better world it is supposed to help create. Shielded from accountability to anything outside of itself, the “free play of free intellects” begins to seem like little more than a cover for indifference and irresponsibility. The tragic irony here is that the stunted imagination of mainstream science is a consequence of the very autonomy that scientists insist is the key to their success. Only through direct engagement with the real world can science free itself to rediscover the path toward truth.

Dr. Christian Jarrett seeks out exciting new research and showcases its relevance for life. 


A backup plan is like an emotional safety net – it’s comforting and helps combat the fear of failure. And yet, ironically, the very act of devising this secondary plan could make it more likely that your primary goal will fail.

To test the theory that backup plans sap motivation, researchers conducted four experiments involving hundreds of people who were asked to decipher scrambled sentences in a given timeframe. The rewards for success varied across the experiments and included a free snack or extra payment.

One clarification – this new research is about backup plans that involve identifying a new goal if your primary goal fails (like applying for a staff position if your book proposal gets rejected). It isn’t about identifying multiple means to achieve the same primary goal – for example, doing research to find as many agents as possible to whom to submit your proposal. Lots of research suggests that finding multiple strategies towards the same goal increases commitment and motivation.

To make a plan B, or not to make a plan B?

The findings suggest that one way to decide is to weigh up whether your bigger concern for a particular goal is excessive anxiety or flagging motivation. If you’re terrified that your client pitch is going to bomb – and not for lack of preparation – then it makes sense to have a backup plan in place (for example, you could make parallel plans to approach different clients). On the other hand, if your problem is one of motivation – you’re struggling to switch off the football game and get to work on your pitch – it might well be better not to give yourself the comfort of a backup plan. In this case, thinking through a backup risks undermining your energy still further, giving you the perfect excuse to keep watching the game.



Bloomberg's 2015 ranking of the world's 50 most innovative countries takes a more prosaic approach to the question, focusing on six tangible activities that contribute to innovation. The point of this list isn't to award bragging rights to one country over another—it's to see whether a broad formula for innovation can in fact be identified, and what companies, and governments, need to do to reproduce it.



Many companies face difficult problems that require focused attention to solve, but many do not know where to begin. Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days, a new book by Jake Knapp with John Zeratsky and Braden Kowitz, partners at Google Ventures, offers a plan for identifying the best solution to a problem, developing a prototype and testing it with customers.

Knowledge@Wharton: How did this process come about?

Jake Knapp: It all dates back 12 years to when my son was born, and I started to freak out about making good use of my work time so that I could spend more time with him and feel good about when I did go into the office.

Knowledge@Wharton: John, when did you hear first about these ideas?

John Zeratsky: When Jake joined our team at Google Ventures, I had heard that there was this thing called “sprint” that was going around. It was taking Google by storm. There were probably a dozen or more teams running sprints, on everything from Chrome to YouTube to Google X. It became clear to me that with this sprint process, we didn’t need to have the answers because we had a process for finding the answers.

Knowledge@Wharton: Jake, companies have these problems day in, day out. What ends up being the biggest problem in trying to fix the problem?

Knapp: One of the biggest problems that we see over and over again … is getting stuck in abstract land. It’s easy to have a lot of arguments when you’re in abstract land and it’s easy to also make a lot of assumptions and spend a lot of time on what turns out to be a hunch. In a sprint, we get concrete really fast. We’ll make a prototype, and by the end of the week, we’re testing it with customers and actually finding out — do those concrete ideas work or not?

Knowledge@Wharton: You literally break this down day by day. Take people through the process of how this all develops. Start with Monday.

Knapp: On Monday you’ve got your team. If you’re doing software, there’s an engineer, there’s a product manager, there’s a marketer, but any kind of project, whatever it is, will have different skill sets. You bring those folks together, including a decision maker, so a decision maker is in the room, and you’ve cleared your calendar for a week. On Monday you’re going to make a map of the problem, share the information that you have.

Then, on Tuesday we come up with solutions. But instead of a group brainstorm where everybody’s shouting out ideas, we do individual sketching — very detailed, quiet work that has a lot of depth to it.

On Wednesday, then, you’ve got competing solutions that are at a high level of detail, and we make decisions. Again, we rely on that decision maker who’s in the room, but we also use some structured processes to make sure that there are no sales pitches and no big, long arguments — we just cut right to the chase.

On Thursday, we build a prototype. This is a realistic prototype. It looks like what the product or the service might look like when it’s all finished.

Then, Friday we’re going to bring in five customers, one at a time, and do one-on-one interviews with those customers. The rest of the team is going to watch over video, take notes. By the end of the day on Friday, you’ve got some clarity about what to do next.

Knowledge@Wharton: John, how many times have you put this plan into place and seen it work at this point?

Zeratsky: We’ve done well over 100 sprints at this point. We find that in most businesses, there is something. There is that new product idea, there’s the way of improving the marketing, there’s something that’s just on everybody’s minds. And so, it’s actually a really helpful … to say, “Let’s pick the biggest problem we can find and let’s commit five days to it in a really focused way.”

Knowledge@Wharton: If you get to Friday and you do your testing and it doesn’t go the way you want, what’s the next step in the process?

Zeratsky: Sometimes we call it “an efficient failure.”… It’s rarely the case that it’s just this total failure where you need to go back to the drawing board. What’s more often the case is that certain parts, certain details of what you thought were going to work did not work when you actually put them in front of customers. We say it’s efficient because then you got the opportunity that very next week to turn around and start trying to fix those problems.