Friday, August 12, 2016

Paul's Update Special 8/12



Do universities need to rethink what they do and how they do it now that artificial intelligence is beginning to take over graduate-level roles? When university leaders reflect on the future of their institutions and their sector, they often neglect the threats posed by artificial intelligence. When Times Higher Education recently asked sector leaders for their predictions of what universities would look like in 2030, there was scant mention of the impact of technology except insofar as it directly affects pedagogy, via innovations such as massive open online courses.

But the torrent of articles and books published in recent years shows that humans are once again worried that robots are about to displace them in the workplace – and that could have profound effects on universities. It would be a mistake for academics to think that machines will sweep away all the low-skilled, non-graduate jobs before they start nibbling into the more complex professional roles they envisage for their students.

According to Martin Ford, a Silicon Valley entrepreneur whose 2015 book The Rise of the Robots makes grim reading for white-collar workers, it is “becoming clear that smart software automation applications are rapidly climbing the skills ladder and threatening jobs taken by university graduates”. Even whizz-kids with plum jobs in financial services are not safe. In an article titled “The Robots Are Coming for Wall Street”, The New York Times reported in February on software written by a company called Kensho that can automatically predict how markets will move in response to different types of world events. The legal field is also being shaken up, Ford tells THE. The consultancy Deloitte has predicted that 100,000 legal jobs in the UK alone could be automated over the next 20 years.

“Could another person learn to do your job by studying a detailed record of everything you’ve done in the past?” Ford asks in The Rise of the Robots. “If so, then there’s a good chance that an algorithm may someday be able to learn to do much, or all, of your job.” 
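To make Ford’s test concrete, here is a minimal sketch of what “learning from a detailed record” looks like in practice: an off-the-shelf classifier trained on a hypothetical log of a worker’s past decisions. Everything here, from the loan-review scenario to the feature values, is invented for illustration.

```python
# Toy sketch of the automation Ford describes: a model that learns to
# reproduce a worker's past decisions from a log of (situation, decision)
# pairs. The scenario, features and numbers are all invented.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical log: each row is a loan application a human once reviewed
# ([credit score, debt ratio, years employed]); each label is the decision
# they made (1 = approve, 0 = decline).
X = [[620, 0.42, 5], [710, 0.18, 12], [680, 0.35, 2], [590, 0.51, 1],
     [750, 0.10, 20], [640, 0.44, 3], [700, 0.22, 8], [610, 0.47, 4]]
y = [0, 1, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # "study the record of everything you've done"

# The model now predicts what the human would have decided for unseen cases.
print("agreement with the human:", model.score(X_test, y_test))
```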

The good news for universities and academics is that there are still skills that observers believe machines will struggle to master – although they are unsure for how long. The most obvious one is creativity. As Massachusetts Institute of Technology academics Erik Brynjolfsson and Andrew McAfee write in their 2014 book The Second Machine Age: “We’ve never seen a truly creative machine, or an entrepreneurial one, or an innovative one.” Yet Brynjolfsson and McAfee themselves point out that what is defined as “routine” and not particularly creative keeps changing as machines master new tasks.

Academics will be pleased to hear that MIT’s Brynjolfsson thinks that the humble essay is still a good way to train students to come up with these kinds of fresh ideas, since it “forces you to organise your thoughts”. The need to foster creativity also “means doing more unstructured work”, he adds. At MIT, “we do much less rote and repetitive learning. The things people enjoy the most…are the things machines do worst.”

Another trait that experts seem to agree will remain uniquely human for some time is social intelligence. “While algorithms and robots can now reproduce some aspects of human social interaction, the real-time recognition of natural human emotion remains a challenging problem, and the ability to respond intelligently to such inputs is even more difficult,” write University of Oxford academics Carl Frey and Michael Osborne in a widely cited 2013 paper that predicts that nearly half of US jobs will be automatable in the next 20 years. Related to social intelligence is the capacity to think about human values. 

Another human trait that machines are struggling to emulate is less cerebral: dexterity. Prominent examples are the “many skilled trade occupations typically shunned by university graduates: plumbers, electricians and the like”.

Worryingly for universities, it might turn out to be far more cost-effective for businesses to automate desk-bound “knowledge workers” than those who move things around for a living. This is because a single computer can use its vast processing power to do the job of tens of thousands of human brains. But a robot on a production line can, at most, replace a handful of humans. It cannot be in two places at once.

Also ominous is Brynjolfsson’s confession that while The Second Machine Age cited three uniquely human skills, he now believes the list may be down to two. Creativity and “complex communication” with humans remain, but pattern recognition is being mastered by computers with surprising rapidity. To help students cope with this blistering pace of change, everyone seems to agree that universities must, above all, instil in students a hunger to continue their education beyond graduation and keep their skill sets as broad as possible.

There are, of course, numerous sceptics who doubt that increasingly capable machines will leave humans workless. “We’ve always been finding new jobs,” says Stephen Watt, a computer science professor and dean of mathematics at the University of Waterloo in Canada. “My view is that this has been happening for decades, if not centuries, and it will continue for decades and centuries.” Watt is also bullish about the future for universities. He believes that the technological churn will create an unprecedented demand from students to be taught “how to learn”. 

Ford takes a much more pessimistic view. Graduates on both sides of the Atlantic are already overqualified and struggle to find jobs that use their skills, he writes in The Rise of the Robots, and this will only get worse as automation progresses. He thinks that the “conventional wisdom” that more investment in education will allow humans to stay ahead of machines amounts to little more than a doomed attempt to funnel ever more disappointed graduates into a shrinking set of “graduate-level” jobs.

Efficiency drive: could researchers be automated?

A clue to the rapidly advancing impact of intelligent machines on scientific research appeared on the front cover of Nature earlier this year. “Machine-learning algorithm mines unreported ‘dark’ reactions to predict successful syntheses,” the journal reported.

The algorithm in question was the work of chemistry and computer science professors at Haverford College in Pennsylvania, who took unpublished data about failed chemical reactions and used it to create an algorithm that proved to be better at predicting reaction results than the team itself.
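As a rough, hypothetical sketch of that general approach (not the Haverford team’s actual model, features or data), one might train a standard classifier on records of past reactions, failures included, and ask it to predict whether an untried recipe will produce crystals:

```python
# Toy sketch: learn from past reactions, including the failed ones, and
# predict the outcome of a new recipe. All features and values are invented.
from sklearn.svm import SVC

# Each row: [reagent concentration (M), pH, temperature (C), time (h)]
reactions = [[0.5, 3.0, 90, 24], [1.2, 5.5, 110, 48], [0.8, 4.0, 95, 36],
             [2.0, 7.0, 120, 72], [0.3, 2.5, 85, 12], [1.5, 6.0, 115, 60]]
outcomes = [0, 1, 0, 1, 0, 1]  # 1 = crystals formed, 0 = failed reaction

model = SVC(kernel="rbf", gamma="scale")
model.fit(reactions, outcomes)  # failures are as informative as successes

# Predict the outcome of a proposed, untried reaction.
print(model.predict([[1.0, 5.0, 105, 40]]))
```

The point of the original study was precisely that unpublished failures, data that normally never leaves the lab notebook, made the model better than the chemists’ own intuition.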

This is just one of the ways that machines are assisting, if not quite yet replacing, researchers in the lab. One of the most famous tools that scientists now have at their disposal is Eureqa: software produced by a company called Nutonian that takes raw data and works out a model explaining the results. Machines can also help simplify literature reviews. And they already excel at finding patterns in data: this might sound like low-level statistical grunt work, but it means that they are, in principle, capable of making some of the most famous discoveries in science.
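Eureqa’s real engine performs symbolic regression, which is far more sophisticated, but a brute-force toy version conveys the flavour of “taking data and working out a model”: score a handful of candidate formulas against noisy measurements and keep whichever explains them best. The data and candidates below are invented.

```python
# Toy model discovery: fit each candidate formula to noisy "measurements"
# of a falling object and keep the best. Real symbolic regression searches
# a vastly larger space of expressions; this is only an illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.1, 5, 50)
y = 4.9 * t**2 + rng.normal(0, 1.0, t.size)  # distance fallen, with noise

candidates = {
    "a*t":      lambda t: t,
    "a*t**2":   lambda t: t**2,
    "a*exp(t)": lambda t: np.exp(t),
    "a*log(t)": lambda t: np.log(t),
}

best = None
for name, f in candidates.items():
    basis = f(t)
    a = (basis @ y) / (basis @ basis)    # least-squares coefficient
    err = np.mean((y - a * basis) ** 2)  # mean squared error
    if best is None or err < best[2]:
        best = (name, a, err)

print("best model: %s with a=%.2f (MSE %.2f)" % best)
# Recovers roughly y = 4.9*t**2, i.e. d = (1/2)g*t**2
```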

Michael Schmidt, Eureqa’s creator and Nutonian’s founder, admits that the most Eureqa can currently do is iterate variations of existing experiments to get more accurate explanatory models by gathering new data points. His company is working on systems with “computational curiosity” that would be able to come up with wholly new ideas, but that is “probably two to three decades” away.



Visual communication is a must-have skill for all managers, because more and more often, it’s the only way to make sense of the work they do. Data is the primary force behind this shift. Decision making increasingly relies on data, which comes at us with such overwhelming velocity, and in such volume, that we can’t comprehend it without some layer of abstraction, such as a visual one. A typical example: At Boeing the managers of the Osprey program need to improve the efficiency of the aircraft’s takeoffs and landings. But each time the Osprey gets off the ground or touches back down, its sensors create a terabyte of data. Ten takeoffs and landings produce as much data as is held in the Library of Congress. Without visualization, detecting the inefficiencies hidden in the patterns and anomalies of that data would be an impossible slog.
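As a minimal sketch of why that visual layer matters (synthetic numbers, nothing to do with Boeing’s actual telemetry), the plot below makes three anomalies jump out of two thousand readings that no one could usefully scan as raw rows:

```python
# Minimal sketch: plot a simulated sensor stream and flag anomalies that
# would be invisible in a raw table of numbers. Data is synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
readings = rng.normal(100.0, 2.0, 2000)      # simulated sensor readings
readings[[250, 900, 1600]] += [15, -12, 18]  # injected anomalies

mean, std = readings.mean(), readings.std()
outliers = np.abs(readings - mean) > 3 * std  # simple 3-sigma rule

plt.plot(readings, linewidth=0.5, label="sensor reading")
plt.scatter(np.where(outliers)[0], readings[outliers],
            color="red", zorder=3, label="anomaly")
plt.xlabel("sample")
plt.ylabel("value")
plt.legend()
plt.show()
```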

Thanks to the internet and a growing number of affordable tools, translating information into visuals is now easy (and cheap) for everyone, regardless of data skills or design skills. But tools alone don’t make a chart communicate. As the presentation expert Nancy Duarte puts it, “Don’t project the idea that you’re showing a chart. Project the idea that you’re showing a reflection of human activity, of things people did to make a line go up and down. It’s not ‘Here are our Q3 financial results,’ it’s ‘Here’s where we missed our targets.’”

Managers who want to get better at making charts often start by learning rules. To start with chart-making rules is to forgo strategy for execution; it’s to pack for a trip without knowing where you’re going. Your visual communication will prove far more successful if you begin by acknowledging that it is not a lone action but, rather, several activities, each of which requires distinct types of planning, resources, and skills.

The typology described in this article is simple. By answering just two questions, you can set yourself up to succeed. To start thinking visually, consider the nature and purpose of your visualization:

Is the information conceptual or data-driven?

CONCEPTUAL
FOCUS: Ideas
GOALS: Simplify, teach (“Here’s how our organization is structured.”)

DATA-DRIVEN
FOCUS: Statistics
GOALS: Inform, enlighten (“Here are our revenues for the past two years.”)

Am I declaring something or exploring something?

DECLARATIVE
FOCUS: Documenting, designing
GOALS: Affirm (“Here is our budget by department.”)

EXPLORATORY
FOCUS: Prototyping, iterating, interacting, automating
GOALS: Confirm (“Let’s see if marketing investments contributed to rising profits.”) and discover (“What would we see if we visualized customer purchases by gender, location, and purchase amount in real time?”)

The first question is the simpler of the two, and the answer is usually obvious. Either you’re visualizing qualitative information or you’re plotting quantitative information: ideas or statistics. If the first question identifies what you have, the second elicits what you’re doing: either communicating information (declarative) or trying to figure something out (exploratory).

The Four Types
The nature and purpose questions combine in a classic 2×2 to define four types of visual communication: idea illustration, idea generation, visual discovery, and everyday dataviz.

[Figure: the four types of visual communication arranged in a 2×2 matrix, with conceptual vs. data-driven on one axis and declarative vs. exploratory on the other]
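For those who prefer code to quadrant diagrams, the typology reduces to a two-key lookup; this tiny snippet is purely illustrative:

```python
# The article's 2x2 as a lookup: two answers in, one visualization type out.
def visualization_type(nature: str, purpose: str) -> str:
    quadrants = {
        ("conceptual", "declarative"):  "idea illustration",
        ("conceptual", "exploratory"):  "idea generation",
        ("data-driven", "declarative"): "everyday dataviz",
        ("data-driven", "exploratory"): "visual discovery",
    }
    return quadrants[(nature, purpose)]

print(visualization_type("data-driven", "declarative"))  # everyday dataviz
```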
Idea Illustration. We might call this quadrant the “consultants’ corner.” Consultants can’t resist process diagrams, cycle diagrams, and the like. At their best, idea illustrations clarify complex ideas by drawing on our ability to understand metaphors (trees, bridges) and simple design conventions (circles, hierarchies). Org charts and decision trees are classic examples of idea illustration.

Idea Generation. Like idea illustration, idea generation relies on conceptual metaphors, but it takes place in more-informal settings, such as off-sites, strategy sessions, and early-phase innovation projects. It’s used to find new ways of seeing how the business works and to answer complex managerial challenges: restructuring an organization, coming up with a new business process, codifying a system for making decisions.

Visual Discovery. This is the most complicated quadrant, because in truth it holds two categories. Recall that we originally separated exploratory purposes into two kinds: testing a hypothesis and mining for patterns, trends, and anomalies. The former is focused, whereas the latter is more flexible. The bigger and more complex the data, and the less you know going in, the more open-ended the work.

Everyday Dataviz. Whereas data scientists do most of the work on visual exploration, managers do most of the work on everyday visualizations. This quadrant comprises the basic charts and graphs you normally paste from a spreadsheet into a presentation. They are usually simple—line charts, bar charts, pies, and scatter plots.
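A minimal, hypothetical example of the kind of chart this quadrant covers, with invented quarterly figures and a title that follows Duarte’s advice to say what the numbers mean:

```python
# Everyday dataviz: a simple declarative line chart built from
# spreadsheet-style figures. All numbers are invented.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
revenue = [4.1, 4.6, 4.2, 5.3]  # $M, hypothetical
target = [4.0, 4.4, 4.8, 5.2]   # $M, hypothetical

plt.plot(quarters, revenue, marker="o", label="revenue")
plt.plot(quarters, target, linestyle="--", label="target")
plt.ylabel("$M")
plt.title("Here's where we missed our target")  # not "Q3 results"
plt.legend()
plt.show()
```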

Visualization is merely a process. What we actually do when we make a good chart is get at some truth and move people to feel it—to see what couldn’t be seen before. To change minds. To cause action.

Some basic common grammar will improve our ability to communicate visually. But good outcomes require a broader understanding and a strategic approach—which the typology described here is meant to help you develop.



It’s clear the world is continuing to change and, along with it, so are jobs and the criteria that determine the ideal candidate. What will the world look like in the future, and what sort of employee will it demand?

Sense-making: Defined as “the ability to determine the deeper meaning behind something given,” sense-making is a necessary skill if we are to preserve the shades of gray that define us.

Cross-Cultural Competency: It’s already becoming clear that people who speak multiple languages are more likely to occupy important positions. Moreover, many companies are now going global, and their ideal employees need to be able to adapt to different cultures.

Cognitive Load Management: It’s no secret that we live in an era where we are bombarded by information. What sets people apart is how we adapt to that flood, how we filter it, and how we manage to prioritize it correctly. Possess this skill and you may also find it a lot easier to sift through job openings and work out which one suits you best.

Novel and Adaptive Thinking: Employers are looking less for people who wholly abide by conventions and more for people who can come up with innovations and solutions no one else would have thought of.

Social Intelligence: Social intelligence is the capacity to understand that others don’t share our goals and motivations; the key is asking the right questions and finding ways to align their goals with our own.

These will be the keys to being the ideal employee in a few decades. Are you ready for the future?
