Friday, November 11, 2016

Paul's Update Special 11/11




The concept of jobs to be done (JTBD) provides a lens through which to understand value creation. The framework looks at customer motivations in business settings. The term was popularized by business leader Clayton Christensen in his book The Innovator’s Solution, the follow-up to his landmark work The Innovator’s Dilemma. The principle is straightforward: people “hire” products and services to get a job done. Others have operationalized aspects of the jobs-to-be-done approach. Most notable is Tony Ulwick’s “Outcome-Driven Innovation” technique. Bob Moesta and colleagues at The Rewired Group have also used jobs to be done effectively in their consulting work.

SIX ELEMENTS OF JTBDs
Based on Christensen’s theory and what others have written about JTBD, I’m proposing a simple model that has six elements. These are grouped into two areas, seen along two axes in the diagram below:
[Diagram: the six elements of JTBD, arranged along two axes]

On the one hand, there are three dimensions of a given job (horizontal in the diagram above):

  • Functional job – the actions a person takes, or the task at hand
  • Emotional job – how people feel while completing a job
  • Social job – how a person is perceived by others while completing a job

The context of the job is also part of the model. This includes three elements (vertical in the diagram above):

  • Situation – the circumstances of a job
  • Motivation – the trigger that led to action, often a problem or challenge to overcome
  • Desired outcome – the expected result, by which a person will measure success

The jobs theory sees people as goal-driven actors. The job is really about progress toward a goal.

PUTTING JTBD TO WORK

  1. Understand the market – Structure user research findings around the six elements of the framework. While interviewing and observing people in your target market, capture and organize insights using this approach.
  2. Design for the market – Use JTBD to guide design and development decisions. For instance, in agile software development, Alan Klement recommends writing job stories rather than user stories, in the form “When [situation], I want to [motivation], so I can [desired outcome].” You can then take these job stories into design sprints and brainstorming sessions to use as starting points.
  3. Talk to the market – Think about how JTBD might help you address customers in marketing and advertising. Typically, you'll shift language from features to desired outcomes.
  4. (Re)define markets – JTBD ultimately helps answer the strategic question, “What business are we really in?” The approach expands your strategic field of vision beyond your current capabilities and core competencies.

CONCLUSION
JTBD theory offers a rich framework for understanding customers. But you may be thinking that it sounds familiar, or that you’ve been advocating this for years. Well, you’re right. At its core, JTBD theory overlaps greatly with existing approaches such as user-centered design, design thinking, goal-driven design, and more. Still, at a minimum the JTBD approach offers a fresh perspective on creating value for customers. What’s different, I believe, is the source of the approach: it comes from thought leaders in business. The momentum around JTBD, particularly with Clayton Christensen’s new book "Competing Against Luck", will likely increase over the next several years. My experiences working with JTBD over the past 8+ years have been positive. I encourage you to consider how it might help your situation, and I hope the model proposed here is helpful.



Managers at all levels will have to adapt to the world of smart machines. The fact is, artificial intelligence will soon be able to do the administrative tasks that consume much of managers’ time faster, better, and at a lower cost. How can managers — from the front lines to the C-suite — thrive in the age of AI? To find out, we surveyed 1,770 managers from 14 countries and interviewed 37 executives in charge of digital transformation at their organizations. Using this data, we identified five practices that successful managers will need to master.

Practice 1: Leave Administration to AI
According to the survey, managers across all levels spend more than half of their time on administrative coordination and control tasks. AI will automate many of these tasks. Report writing is a case in point: the Associated Press expanded its quarterly earnings reporting from approximately 300 stories to 4,400 with the help of AI-powered software robots. In doing so, the technology freed up journalists to conduct more investigative and interpretive reporting. Imagine technology like this drafting your next management report. The managers we surveyed see such change in a positive light: 86% said they would like AI support with monitoring and reporting.

Practice 2: Focus on Judgment Work
Many decisions require insight beyond what artificial intelligence can squeeze from data alone. Managers use their knowledge of organizational history and culture, as well as empathy and ethical reflection. This is the essence of human judgment — the application of experience and expertise to critical business decisions and practices. Managers we surveyed have a sense of a shift in this direction and identify the judgment-oriented skills of creative thinking and experimentation, data analysis and interpretation, and strategy development as three of the four top new skills that will be required to succeed in the future.

Practice 3: Treat Intelligent Machines as “Colleagues”
Managers who view AI as a kind of colleague will recognize that there’s no need to “race against a machine.” While human judgment is unlikely to be automated, intelligent machines can add enormously to this type of work, assisting in decision support and data-driven simulations as well as search and discovery activities. In fact, 78% of the surveyed managers believe that they will trust the advice of intelligent systems in making business decisions in the future. Not only will AI augment managers’ work, but it will also enable managers to interact with intelligent machines in collegial ways, through conversation or other intuitive interfaces. AI will be their always-available assistant and adviser.

Practice 4: Work Like a Designer
While managers’ own creative abilities are vital, perhaps even more important is their ability to harness others’ creativity. Manager-designers bring together diverse ideas into integrated, workable, and appealing solutions. They embed design thinking into the practices of their teams and organizations. A third of the managers in our survey identified creative thinking and experimentation as a key skill area they need to learn to stay successful as AI increasingly takes over administrative work.

Practice 5: Develop Social Skills and Networks
The managers we surveyed recognized the value of judgment work. But they undervalued the deep social skills critical to networking, coaching, and collaborating that will help them stand out in a world where AI carries out many of the administrative and analytical tasks they perform today. While they will use digital technologies to tap into the knowledge and judgment of partners, customers, and communities, they must be able to tease out and bring together diverse perspectives, insights, and experiences.

Steps to Success
AI will ultimately prove to be cheaper, more efficient, and potentially more impartial in its actions than human beings. But such a scenario should not be cause for concern for managers. It just means that their jobs will change to focus on things only humans can do. To prepare themselves and their organizations for the kinds of human-led work that will gain prominence as technology takes on more routine tasks, leaders must take the following steps:
  • Explore early. To navigate in an uncertain future, managers must experiment with AI and apply their insights to the next cycle of experiments.
  • Adopt new key performance indicators to drive adoption. AI will bring new criteria for success: collaboration capabilities, information sharing, experimentation, learning and decision-making effectiveness, and the ability to reach beyond the organization for insights.
  • Develop training and recruitment strategies for creativity, collaboration, empathy, and judgment skills. Leaders should develop a diverse workforce and team of managers that balance experience with creative and social intelligence — each side complementing the other to support sound collective judgment.

While oncoming disruptions won’t arrive all at once, the pace of development is faster and the implications more far-reaching than most executives and managers realize. Those managers capable of assessing what the workforce of the future will look like can prepare themselves for the arrival of AI.



We should pay special attention to those whose ideas had impact far beyond their own lifespan.  It is they who were able to see not only the problems of their day but ones that, although they seemed minor or trivial at the time, would become consequential—even determinant—in years to come.  Here are four such men and what we can learn from them.

Vannevar Bush and the Emerging Frontier of Science
By any measure, Vannevar Bush was a man of immense accomplishment.  A professor at MIT who invented one of the first working computers, he also co-founded Raytheon, a $30 billion company that prospers to this day. Yet even these outsized achievements pale in comparison to how Bush fundamentally changed the relationship of science to society at large.  

In the late 1930’s, as the winds of war began to stir in Europe, Bush saw that the coming conflict would not be won by bullets and bombs alone.  Science, he saw, would likely tip the balance between victory and defeat. It was that insight which led to the establishment of the Office of Scientific Research and Development (OSRD).  With Bush at its helm, the agency led the development of the proximity fuze, guided missiles, radar, more advanced battlefield medicine and, not least of all, the Manhattan Project, which produced the atomic bomb. 

As the war came to a close, President Roosevelt asked Bush to write a report on how the success of the OSRD could be replicated in peacetime.  That report, "Science: The Endless Frontier", outlined a new vision of the relationship between public and private investment, with government expanding scientific horizons and industry developing new applications. Bush’s report led to the foundation of the NSF, NIH, DARPA and other agencies, which have funded early research in everything from the Internet and GPS to the Human Genome Project and many of our most important cures.  It has been Bush’s vision, perhaps more than anything else, that has made America an exceptional nation. Oh, and he also wrote an essay in 1945, "As We May Think", that not only laid out what would become the Internet, but influenced many of the key pioneers who designed it.

Marshall McLuhan and the Global Village
Marshall McLuhan was one of the first to see the subtle, but undeniable influence of popular culture. While many at the time thought of mass media as merely the flotsam and jetsam of the modern age, he saw that the study of things like newspapers, radio and TV could yield important insights. Central to his ideas about culture was his concept of media as “extensions of man.”  Following this line of thought, he argued that Gutenberg’s printing press not only played a role in spreading information but also in shaping human thought. Essentially, the medium is the message.

McLuhan argued further that the new age of electronic media would disrupt the private experience and specialization that the dominance of printed media brought about and usher in a new era of collective, transnational experience that he called the global village. Importantly, however, he did not see the global village as a peaceful place.  Rather than promoting widespread harmony and understanding, he predicted that the ability to share experiences across vast chasms of time and space would lead to a new form of tribalism, resulting in a “release of human power and aggressive violence” greater than ever in history.

Richard Feynman Sees “Plenty of Room at the Bottom”
When Richard Feynman stepped up to the podium to address the American Physical Society in 1959, his talk, modestly titled "There’s Plenty of Room at the Bottom", would launch a revolution in physics and engineering that continues to play out to this day.  Starting from a seemingly innocent question about shrinking an encyclopedia down to the size of a postage stamp, he proceeded over the next hour to invent the new field of nanotechnology.

The talk, which is surprisingly easy and fun to read, also gives a fascinating window into how a genius thinks.  After pondering the problem of shrinking things down to the size of molecules, he proposes some solutions, then thinks some more about what issues those ideas would create, proposes some more fixes, and on and on until a full picture emerges. He was also a pioneer in parallel computing and did important work in virology.  All of this in addition to his day job as a physicist, for which he won the Nobel prize in 1965.

Tim Berners-Lee Creates a Web of Data
Tim Berners-Lee is most famous for his creation of the World Wide Web.  In 1989, he proposed and soon built the three foundational technologies—HTTP, URLs, and HTML—that we now know as the “Web” and released his creation to the world, refusing to patent it.  Later, he helped set up the W3C consortium that continues to govern and manage its growth and further development. The truth is, however, that the Web wasn’t a product of any great vision, but rather a solution to a particular problem he encountered at CERN.  Physicists would come there from all over the world, work for a period of time, and then leave.  Unfortunately, they recorded their work in a labyrinth of different platforms and protocols that didn’t work well together.

So Berners-Lee set out to solve that problem by creating a universal medium that could link information together.  He never dreamed it would grow into what it did.  If he had, he would have built it differently.  He wrote at length about these frustrations in his memoir, Weaving The Web.  Chief among them was the fact that while the Web connected people, it did little for data.
So he envisioned a second web, which he called the Semantic Web.  Much like his earlier creation, the idea outstripped even what he imagined for it.

The Best Way to Predict the Future is to Create it
Take a hard look at these four visionaries and some common themes emerge.  First, all except McLuhan took an active role in turning their ideas into reality. Another commonality is that, while their ideas didn’t meet with immediate acceptance, they stuck with them. Third, they set out to uncover fundamental forces.  It was that quest for basic understanding that led them to ask questions and find answers that nobody else could imagine at the time.  They weren’t just looking to solve the problems of their day but sought out problems that transcended time. In effect, they were able to see the future because they cared about it.



Thanks to the likes of Google, Amazon, and Facebook, the terms artificial intelligence (AI) and machine learning have become much more widespread than ever before. They are often used interchangeably and promise all sorts of outcomes from smarter home appliances to robots taking our jobs. But while AI and machine learning are very much related, they are not quite the same thing.

AI is a branch of computer science attempting to build machines capable of intelligent behavior, while Stanford University defines machine learning as “the science of getting computers to act without being explicitly programmed”.

You need AI researchers to build the smart machines, but you need machine learning experts to make them truly intelligent.
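To make the distinction concrete, here is a minimal sketch in plain Python (no ML libraries) of a program that is never told the rule mapping inputs to outputs; it infers the rule from examples instead. The data and the hidden rule (y = 2x + 1) are invented for illustration:

```python
# "Learning" rather than explicit programming: the rule y = 2x + 1
# is never written into the code; it is recovered from examples.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]  # outputs produced by the hidden rule

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares fit of a line to the example pairs
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(slope, intercept)        # recovers 2.0 and 1.0
print(slope * 10 + intercept)  # generalizes: predicts 21.0 for unseen x = 10
```

The point is the workflow, not the math: an explicitly programmed version would hard-code `2 * x + 1`, while the learned version would adapt automatically if the examples changed.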

Big technology players such as Google and Nvidia are currently investing heavily in machine learning, pushing computers to learn the way a human would in order to advance what many are calling the next revolution in technology: machines that ‘think’ like humans.

Suppose you were searching for ‘WIRED’ on Google but accidentally typed ‘Wired’. After the search, you’d probably realize you typed it wrong and you’d go back and search for ‘WIRED’ a couple of seconds later. Google’s algorithm recognizes that you searched for something a couple of seconds after searching something else, and it keeps this in mind for future users who make a similar typing mistake. As a result, Google ‘learns’ to correct it for you.
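The mechanism described above can be sketched in a few lines. This is not Google's actual algorithm, just a toy illustration assuming a hypothetical log of query pairs typed within a few seconds of each other:

```python
from collections import Counter

# Hypothetical log of quick reformulations:
# (what the user typed first, what they searched seconds later)
reformulations = [
    ("Wired", "WIRED"),
    ("Wired", "WIRED"),
    ("Wired", "wired headphones"),
    ("googel", "google"),
]

# Count how often each follow-up query succeeds each original query
corrections = Counter(reformulations)

def suggest(query):
    """Return the most frequent follow-up observed for this query, if any."""
    candidates = [(count, followup)
                  for (orig, followup), count in corrections.items()
                  if orig == query]
    return max(candidates)[1] if candidates else None

print(suggest("Wired"))   # "WIRED" -- the most common reformulation wins
print(suggest("quartz"))  # None -- no signal logged for this query
```

The "learning" here is simply that more logged behavior makes the suggestions better, with no one ever programming the specific correction.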

While this is a very basic example, data scientists, developers, and researchers are using much more complex methods of machine learning to gain insights previously out of reach. Programs that learn from experience are helping them discover how the human genome works, understand consumer behavior to a degree never before possible and build systems for purchase recommendations, image recognition, and fraud prevention, among other uses.

Segregation: A Global History of Divided Cities: challenging, impressive
The University of Chicago Press
Segregation brings to mind apartheid South Africa or the American South in the age of Jim Crow—societies premised on racial separation. But as Carl H. Nightingale shows us in our free e-book for November, segregation is everywhere, deforming cities and societies worldwide. Segregation: A Global History of Divided Cities journeys back to archaeological evidence of segregation’s ancient roots, courses through the era of European colonialism, to the aggressive segregation movements of the twentieth century, and ends in Johannesburg and Chicago. Segregation tours our divided world. Get it free in November.
“The scope of the work is challenging and impressive.… This book deserves to be widely read.”—Times Higher Education
The Great Chicago Book Sale is Back! Get up to 90% off list prices on more than 600 books from Chicago and the fine publishers we distribute! Welcome to our 2016-17 Sales Catalog. Use promo code AD1561 for prices that start as low as $5. Fiction and non-fiction, humanities, social science, and natural science, plus lots of art books and great gifts are all in the sale catalog.
About Chicago's e-books: The University of Chicago Press has more than 4,000 titles in its Chicago Digital Editions e-book program. Some of Chicago's e-books are DRM-free, while others require Adobe Digital Editions software, which is freely downloadable. Chicago Digital Editions are powered by BiblioVault.
This is the November 2016 free e-book notification.
