Module 5: Artificial intelligence in business and society

Professor Patrick Winston gives a detailed example of another type of AI, discussing some of his recent research on building AI systems that learn in much the same way that humans often do: by understanding stories.

  • Rule-based systems
  • searching many possibilities
  • linear regression, probabilistic reasoning
  • next
    • 5 to 10 years… What’s possible with data and deep nets
    • algorithms are free… data is the asset
    • AI Fourth Wave
      • Massive, free computing
      • Excited people
      • Emerging round table
      • Accumulated progress
      • Better questions 
        • What makes us humans different from other species, past and present?
          • Humans have been around for about 200k years
          • About 70k years ago, we began to advance
          • We can take two disparate concepts, and make a new concept without impacting the original two concepts.
          • The “merge operation” gives us an inner language… only we can tell stories
          • AI
            • Artificial Perception
            • Story understanding
              • Recipe following: 
              • reasoning
              • Strong Story Hypotheses:
                  • self-awareness is a suitcase term
                  • systems will be able to explain what they do and why
                  • machines and programs will tell their own stories
          • smarter applications
          • applications that can explain themselves
          • applications that understand us
          • better understanding of ourselves and each other, and that will take us to a new level.

In Module 4, you learned about robotics, which was defined as the automation of physical tasks. However, some people are also using the term “robot” to refer to systems that automate certain kinds of purely information-processing tasks. For instance, the financial world refers to so-called “robo-advisers” that help clients manage their investments, sometimes with a human adviser in the loop and sometimes not. 

MIT Professor Andrew Lo discusses some of his research on AI and investment management. He uses the example of index funds to illustrate how algorithms can be applied to the financial world. He introduces the idea of “precision indexes,” which (akin to “personalized medicine” that is specific to a given individual) are automated portfolios that take an individual’s personalized criteria into account to make decisions. Professor Lo notes that organizations are currently missing the opportunity to model actual human behavior, rather than modeling assumptions about what human behavior should be in the investment world. He closes by elaborating on the notion of “bounded rationality.”

  • index funds… assets held in proportion to their market capitalization (a minimal weighting sketch follows this list)
    • People tried equal-weighted portfolios, but they were difficult to manage
  • Three new criteria for index funds
    • Transparent
    • Investable
    • Systematic
  • If you look at the spectrum from index funds to hedge funds, what if you could take a full-spectrum approach? “Precision indexes,” e.g., the Shawn Harris 500, based on my particulars… The hardware and software exist today to do this; what we don’t have yet is the algorithms… See the “Personal Indexes” paper.
  • We are missing the ability to model actual behavior, not just assumed behavior… artificial stupidity… artificial humanity… learned common sense…
  • Bounded Rationality
    • We don’t know what the optimal solution is, so we develop rules of thumb that are good enough.
    • 4 Themes:
      • evolutionary models of behavior
      • surveys of investor risk preferences
      • heuristics and algorithms to automate systems
      • learning from big data to capture actual behavior
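
To make the cap-weighting idea concrete, here is a minimal Python sketch of holding assets in proportion to market capitalization (the tickers and market-cap figures are invented, not real data):

```python
# Market-cap index weighting: hold each asset in proportion to its share
# of total market capitalization. Hypothetical tickers and numbers.
market_caps = {"AAA": 2_400e9, "BBB": 1_800e9, "CCC": 450e9}

total_cap = sum(market_caps.values())
weights = {t: cap / total_cap for t, cap in market_caps.items()}

portfolio_value = 100_000  # dollars to invest
allocation = {t: w * portfolio_value for t, w in weights.items()}

for t in market_caps:
    print(f"{t}: weight = {weights[t]:.1%}, allocate = ${allocation[t]:,.0f}")
```

The rule is transparent, investable, and systematic in exactly the sense of the three criteria above; a “precision index” would personalize the weights to an individual’s particulars instead of tying them to market cap alone.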

The Future of Work

AI and robots are set to play a big role in the future workforce as collaboration between people and computers increases. Although the media seems to relish reporting that robots will replace human workers, it is more likely that people’s jobs will change and evolve, so that people work alongside AI and focus their energies on the tasks they do best.

The O-ring principle describes a collection of tasks that must all be completed successfully to accomplish a main task. If some of those tasks can be automated, the economic value of the human input on the remaining tasks that machines can’t do will increase. The O-ring model comes from Harvard economist Michael Kremer: as you improve the reliability of the other components, the remaining components become more important… humans become the O-rings.
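
Kremer’s model treats output as the product of every task’s quality, which is why reliability in one component raises the value of the others. A minimal sketch under that multiplicative assumption (the task counts and quality numbers are invented):

```python
# O-ring intuition: output is the product of all task qualities, so as the
# automated tasks get more reliable, each human task matters more.
def output(task_qualities):
    result = 1.0
    for q in task_qualities:
        result *= q
    return result

human_q = 0.90
for machine_q in (0.70, 0.90, 0.99):
    base     = output([machine_q] * 4 + [human_q])         # 4 machine tasks + 1 human task
    improved = output([machine_q] * 4 + [human_q + 0.05])  # human gets 5 points better
    print(f"machine quality {machine_q}: gain from better human work = {improved - base:.4f}")
```

The printed gain rises with machine quality: the more reliable the automated parts, the more the human part determines the outcome.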

The Never Enough Principle (“insatiability”): as we gain more wealth and as technology expands, we think of more things to do. Invention is the mother of necessity.

The issue will be how the wealth is used. Think Saudi Arabia vs. Norway.

Are we making progress in areas that are not showing significant productivity gains?

“We all know more than we can tell” (Polanyi’s paradox). Machine learning has gotten us past this now… but the degree of uncertainty is truly unknown.

Humans are good with small data, based on our models of the world. We can make inferences from disparate data.

The key challenges for executives will be:

(1) shifting the training of employees from a focus on prediction-related skills to judgment-related ones;

(2) assessing the rate and direction of the adoption of AI technologies in order to properly time the shifting of workforce training (not too early, yet not too late); and

(3) developing management processes that build the most effective teams of judgment-focused humans and prediction-focused AI agents. A minimal sketch of this prediction/judgment split follows.
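
One hedged way to picture that split: the model supplies a probability, and a person supplies the payoffs that encode how costly each outcome is. Everything in this sketch (the loan setting, the numbers, the function name) is invented for illustration:

```python
# "AI predicts, humans judge": the model outputs a probability, the human
# supplies the payoff matrix, and the decision maximizes expected payoff.
def decide(p_default: float, payoffs: dict) -> str:
    """Pick the action with the higher expected payoff."""
    ev_approve = ((1 - p_default) * payoffs["approve_good"]
                  + p_default * payoffs["approve_bad"])
    return "approve" if ev_approve > payoffs["reject"] else "reject"

# Hypothetical loan decision: the model predicts a 12% chance of default;
# the human's judgment lives entirely in these payoff numbers.
print(decide(0.12, {"approve_good": 100, "approve_bad": -900, "reject": 0}))  # -> reject
# Soften the judgment about losses and the same prediction flips the call.
print(decide(0.12, {"approve_good": 100, "approve_bad": -300, "reject": 0}))  # -> approve
```

Training for “judgment-related skills” then means getting better at setting those payoffs, while the model keeps getting better at the probability.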

Professor Daniela Rus discusses the impact that robots will have on the workforce. She uses an example of applying an any-time optimal algorithm to match the taxi supply and demands in New York City, which reduces the number of taxis needed. She examines whether this could result in taxi drivers losing jobs. Professor Rus also explores the potential benefits of autonomous vehicles on mobility and quality of life. She discusses AI’s impact on fields such as healthcare, law, and education before talking about the current limitations of putting AI to work.

  • NYC has ~14,000 taxis; an MIT algorithm shows that 3,000 taxis with a capacity of 4 passengers could satisfy 98% of demand, with a 2.8-minute average wait and a 3.5-minute trip delay (a simplified matching sketch follows this list).
  • Level 4 autonomy is here: autonomy in some environments… Level 5 has a way to go…
  • Think of the walking stick being replaced.
  • Machines are better predictors in medicine, law, and teaching, but we still need human judgment and emotional connection
    • Machine learning has potential applications in so many fields, and medicine is a great example. Machines today can read more radiology scans in one day than a radiologist will see in a lifetime. So, a new AI-based approach was tasked with classifying radiology scans of lymph nodes as cancer or not cancer. The machine had 7.5% error, as compared to the 3.5% error of the human. But working together with a human, the machine and the human together achieved 0.5% error, which is a significant improvement over the state of the art.
  • Gaps/limitations in AI in breadth and depth perception, reasoning, creativity, thinking….
    • no universal tools
    • Crunching data does not translate into knowledge.
    • Complex calculations do not produce autonomy.
    • 99.99% is exponentially harder than 90% correct.
    • Perception and action
    • tasks with physical contact…
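
For intuition about the taxi-matching result above, here is a deliberately naive greedy sketch, not the any-time optimal algorithm Professor Rus describes; the 1-D street, wait radius, and fleet are all invented:

```python
# Greedy ride-matching toy: send each request to the nearest taxi with a
# free seat, within a maximum pickup distance. Illustrative only.
from dataclasses import dataclass, field

CAPACITY = 4  # passengers per taxi, as in the NYC example

@dataclass
class Taxi:
    position: float                          # position on a 1-D street
    riders: list = field(default_factory=list)

def assign(requests, fleet, max_pickup_distance=3.0):
    """Return the riders that could not be served."""
    unserved = []
    for pickup, rider in requests:
        free = [t for t in fleet if len(t.riders) < CAPACITY]
        nearest = min(free, key=lambda t: abs(t.position - pickup), default=None)
        if nearest and abs(nearest.position - pickup) <= max_pickup_distance:
            nearest.riders.append(rider)
            nearest.position = pickup        # taxi moves to the pickup point
        else:
            unserved.append(rider)
    return unserved

fleet = [Taxi(0.0), Taxi(5.0), Taxi(10.0)]
requests = [(1.0, "A"), (4.5, "B"), (9.0, "C"), (2.0, "D")]
print("unserved:", assign(requests, fleet))  # -> unserved: []
```

The quoted result comes from optimizing shared routes city-wide, which is far harder than this greedy pass; the sketch only shows why capacity-4 pooling stretches a small fleet.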

Professor Rus talks about jobs in terms of the tasks they entail. She sees a future partnership between people and machines in which each performs the elements of the job to which they are best suited: machines doing what’s easiest for machines and people focusing on the strategic tasks. She discusses two points of concern: productivity and job quality or wages. Lastly, she emphasizes lifelong learning and reinforces the idea of collaboration between people and computers.

  • There will be a focus on tasks, not jobs.
  • Must be a lifelong learner…

Professor Malone asks Professor Frank Levy about the implications of AI for employment and the future of work.

  • Politics needs to be a part of the story
  • More people will be knocked out of mid-skill jobs
  • Physical work at the bottom of the skill range, non-repetitive work at the top.
  • Writing for MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino describe the jobs that AI will create and divide them into three new categories: trainers, explainers, and sustainers. Humans in these roles will work alongside machines, ensuring that machines are working in an effective and ethical manner.
  • Read about which aspects of various jobs could be automated and which are harder to automate, and gain further insight into the graph that Professor Rus used in Video 2 to illustrate automation across different activities in different sectors.
  • Will a robot take your job? In January 2017, McKinsey Global Institute published a report estimating that by 2055 (give or take 20 years), around 50% of today’s work tasks could be automated. In this interactive graphic, you can input a job title or industry to find its automation potential.
  • Ravin Jesuthasan and John Boudreau, in the Harvard Business Review, provide a four-step approach for thinking about how automation will affect job design. 
  • Read about the productivity benefits of automation along with its impact on various industries and implications for policymakers. (Access the report by clicking on the download link.)
  • Have a look at five management strategies for getting the most from AI.

Over the longer term, a task-based view of work will be needed to make the best use of AI, to understand which tasks can be automated and which ones are better suited for people to do. New jobs will be created that are still unimagined. Institutions and society – the education system in particular – have a role to play to unlock the full potential of both people and machines in the future.

General ethical concerns surrounding AI

MIT Professor Iyad Rahwan highlights general ethical concerns about AI and explores why organizations should care about AI ethics. He describes how to balance the benefits and risks of AI, and he explains how people have been addressing these problems by using “a human in the loop.” He describes the regulation of human behavior and then examines the challenges involved with regulating the behavior of machines. He concludes by discussing preconditions to promote public trust in machines. 

  • Have discussed… technical aspects of AI, business value, strategic value, future of work
  • Ethics: moral principles that govern behavior
  • AI benefits:
    • Better recommendations
    • Safer Cars
    • Better medical diagnosis
    • …and more
  • AI Risk
    • filter bubbles
    • fake news
    • unfair matching
  • Need to put a “human in the loop”
    • AI = prediction, human = judgment? (a minimal human-in-the-loop sketch follows this list)
  • Society in the loop
    • human in the loop, with a social contract.
    • Regulatory forces, more than just the law…
      • Law
      • Norms
      • Market
      • Architecture
  • We have safety standards, liability laws, and consumer expectations
  • Regulating AI is different
    • not passive
    • has autonomy
    • has intentionality
    • can adapt and learn
  • Can’t certify at design time, as systems will adapt and learn as they interact with the real world.
    • will need to enforce the law, as they act
  • Agency vs Experience
    • machines don’t care about our norms
    • How do you assign intentionality?
  • Need to understand emerging norms: what do we expect from AI? We need to support adoption while not over-regulating.
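
As a hedged illustration of the “human in the loop” idea noted above (the threshold, the stub model, and the routing policy are all invented for this example), a system can act only on confident predictions and defer the rest to a person:

```python
# Human-in-the-loop triage: the machine decides high-confidence cases and
# routes everything else to human review. Purely illustrative.
import random

CONFIDENCE_THRESHOLD = 0.95  # hypothetical policy choice

def predict(case):
    """Stand-in for a real model: returns (label, confidence)."""
    confidence = random.uniform(0.5, 1.0)
    return ("flag" if confidence > 0.75 else "clear"), confidence

def triage(cases):
    automated, for_human_review = [], []
    for case in cases:
        label, confidence = predict(case)
        if confidence >= CONFIDENCE_THRESHOLD:
            automated.append((case, label))          # machine decides
        else:
            for_human_review.append((case, label))   # human judges
    return automated, for_human_review

auto, review = triage(range(20))
print(f"{len(auto)} decided by machine, {len(review)} sent to a human")
```

“Society in the loop” then asks who gets to set CONFIDENCE_THRESHOLD and the routing policy: that choice is the social contract, not a modeling detail.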

Professor Rahwan provides a case study of the ethics of autonomous vehicles. He poses a scenario: what if an autonomous vehicle’s brakes become inoperable and the car is heading toward a group of pedestrians who would be killed if the car hit them? The car has a choice to swerve and hit only one pedestrian rather than the group. Should the car swerve? Or what if the car could swerve and avoid the pedestrians, but would thereby harm the occupant(s) in the car? At issue is that a machine would be making a moral decision. Professor Rahwan discusses approaches to this dilemma, including how different countries have started tackling such questions.

  • Accidents by car: 1.2M deaths could fall to roughly 120k in moving from human to autonomous driving, since about 90% of accidents are attributed to human error (1.2M × 0.1 = 120k).
    • What is socially acceptable behavior?
  • social dilemma:
    • I would not want to be sacrificed
    • but everyone else should
    • Our choices have externalities, we can’t be selfish
    • People are less likely to purchase a car that will sacrifice them
  • adaptation and capacity
    • theory of mind, mind perception
    • bars on the front of the car: OK in the US, not in Europe
    • our new issue is that it’s a software decision
  • open issue
    • Germany created an autonomous car ethics commission
      • Legal scholars
      • Ethics experts
      • Engineers
      • Consumer protection groups
      • Religious leaders
    • recommendations
      • avoidance of critical dilemma situations
      • there should not be any discrimination…
      • the total number of casualties may be taken into account, but this is not mandated
      • a person who acts on purpose should not jeopardize the people in the car
    • Check out the Moral Machine
  • AI is a new kind of challenge that we need to take seriously, addressing law, norms, markets, and architecture.

2.3 Crowdsourced workers

Platforms such as Amazon’s Mechanical Turk (MTurk) and CrowdFlower, as well as vendor-managed systems like Clickworker, let companies hire contract workers to complete tasks, down to the level of microtasks that may only take a few seconds to complete. Read about the ethical issues that arise regarding the many crowdworkers whose low-paid, behind-the-scenes labor underlies many AI systems.

2.4 Biased algorithms

AI systems can excel at identifying patterns that let companies target specific customers more precisely. As you’ve seen throughout the program, this ability helps companies serve the unique needs of niche customers. But, sometimes this targeting can go awry. For example, Facebook’s algorithms enabled advertisers to reach self-described racists. Facebook’s COO, Sheryl Sandberg, publicly apologized for this “totally inappropriate” outcome and Facebook pledged to add 3,000 people to its 4,500-member team of employees to review and remove content that violates its community guidelines.

In another example, Microsoft set out to build a chatbot that could tweet like a teenager.  Microsoft announced “Tay” on March 23, 2016, describing it as “an experiment in conversational understanding” and released it on Twitter. The idea was that the more Tay engaged in conversation with people on Twitter, the smarter it would become. Unfortunately, Tay learned all too well. As people sent racist, misogynist, anti-Semitic Tweets its way, Tay started responding in a similar tone, not simply repeating back statements, but creating new ones of its own in the same unfortunate vein. At first, Microsoft deleted the offensive statements, but, within 24 hours, shut down Tay to “make some adjustments.” 
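
To see mechanically why Tay degraded (a toy sketch, not Microsoft’s actual design; the class and messages are invented), consider a chatbot that absorbs user phrases verbatim with no content filter:

```python
# Toy model of unfiltered online learning: the bot's vocabulary is
# whatever users feed it, so hostile users can steer it. Illustrative only.
import random

class NaiveChatbot:
    def __init__(self):
        self.learned_phrases = ["hello!"]      # seed vocabulary

    def learn(self, user_message: str) -> None:
        # The flaw: every message is absorbed with no moderation step.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
for msg in ["nice weather", "<abusive message>", "<abusive message>"]:
    bot.learn(msg)

# Half of what the bot can now say is abusive; at scale, coordinated users
# come to dominate the vocabulary.
print(bot.reply())
```

Any real deployment needs a moderation layer between user input and what the system is allowed to learn.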

  1. Four researchers in the field of AI share their views, concerns, and possible solutions for reducing and avoiding societal risks associated with AI. 
  2. Read about the differing views of Elon Musk and Mark Zuckerberg on the safety of AI.
  3. According to Sandra Wachter, Researcher in Data Ethics at the University of Oxford, although building systems that can detect bias is complicated, it is in principle possible and “is a responsibility that we as society should not shy away from.” A minimal sketch of one simple check follows Figure 1.
  4. In 1942, science fiction writer Isaac Asimov coined his Three Laws of Robotics in his short story, “Runaround.” The three laws are outlined in Figure 1.

Figure 1: Asimov’s Three Laws of Robotics. (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
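
As a hedged illustration of the kind of check Wachter describes (all data here is invented; real bias auditing uses many metrics, not just this one), one simple test compares a model’s positive-prediction rates across groups:

```python
# One simple bias check: compare positive-prediction rates across groups
# and compute a disparate-impact ratio. Data and threshold are invented.
def selection_rates(predictions, groups):
    """Fraction of positive predictions for each group."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return rates

# Hypothetical hiring-model outputs: 1 = recommended, 0 = not recommended.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio = {ratio:.2f}")
# The US EEOC's four-fifths rule of thumb flags ratios below 0.8.
print("flag for review" if ratio < 0.8 else "passes this one check")
```

Passing one metric does not make a system fair; the sketch only shows that such checks are computable, which is Wachter’s point.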


Humans seamlessly integrate perception, cognition and action.

AI raises serious ethical concerns, as smart machines will make decisions that may have life-and-death implications. Many AI systems also rely on low-paid workers who labor behind the scenes. Finally, AI has the potential to exacerbate and amplify the negative qualities of humans. As a result, executives considering adoption of AI systems need to reflect thoughtfully on the ethical aspects of their choices.