
Senior industry leaders need to learn about AI - Reuters


October 22, 2021 - Imagine this. You are President of the United States. It's your dream job, because you have more power than anyone else in the world, and nobody ever criticizes you. It's nothing but four years of Nirvana (the transcendent state, not the Seattle grunge band).

In walks your Secretary of State to tell you about a new policy that the U.S. adopted to impose sanctions on France to address the fact that their wine tastes too good. Apparently, the French didn't take it very well, and are retaliating with their own sanctions. You ask who put the U.S. policy in place, and the Secretary explains that it was Jake, a junior analyst on the France desk. You ask why such an important decision was made by a junior analyst, and the Secretary explains, "He knows French."

Honestly, this happens every day in corporate America, except that instead of U.S./France sanctions, it's the adoption of algorithms that perform important business functions and that, when built incorrectly, can lead to liability. For example, algorithms that select qualified individuals for employment, promotion, or credit, or for the provision of medical care, government services, or even entrance into office buildings, may, not surprisingly, violate the law if they are built in a way that produces an adverse disparate impact on racial and ethnic minorities, women, or other protected groups. Further, quite apart from discriminatory impact, algorithms that simply do not work as intended can cause injury and give rise to actionable claims.

In both events, there is substantial risk of federal or state enforcement action. Consider:

•The U.S. Equal Employment Opportunity Commission and similar agencies have explained that deficient AI can violate employment discrimination laws (e.g., Commissioner Keith Sonderling spoke at a Sept. 1 webinar, sponsored by the EEOC Chicago, Houston and Miami Districts, on "The EEO Implications of Using Artificial Intelligence and Machine Learning in Employment Decisions").

•Defective algorithms can violate federal and state fair credit and consumer protection laws. For example, according to the Federal Trade Commission report "Big Data: A Tool for Inclusion or Exclusion?", "one credit card company settled FTC allegations that it failed to disclose its practice of rating consumers as having a greater credit risk because they used their cards to pay for marriage counseling, therapy, or tire-repair services, based on its experiences with other consumers and their repayment histories."

•Poorly designed and inadequately tested algorithms used by customers can result in class action product liability suits and in litigation instituted by governmental and private attorneys general;

•Algorithms that are not transparent and deceptively mislead consumers in advertising can run afoul of various federal and state unfair trade practice prohibitions;

•Delegation of data preservation and access to deficient AI, or the improper use of private data, can implicate federal, state (e.g., the California Privacy Rights Act), and even international (the General Data Protection Regulation) law and result in ruinous fines;

•Erroneous AI decision-making regarding government claims submission, e.g., with respect to health care reimbursement and government contracts, could result in treble-damage liability under laws like the federal False Claims Act;

•Algorithms that drive medical practice, if inadequately designed and tested, can violate U.S. Food and Drug Administration requirements and lead to claims ranging from unlawful discrimination to medical malpractice.
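
To make the disparate-impact risk described above concrete, consider the traditional first-pass screen in the employment context, the EEOC's "four-fifths" rule of thumb: if a protected group's selection rate is less than 80% of the most-favored group's rate, the outcome warrants scrutiny. Below is a minimal sketch with invented numbers; it is a screening heuristic, not a legal conclusion.

```python
# Hypothetical hiring-algorithm outcomes; all numbers are invented.
outcomes = {
    "group_a": {"applied": 200, "selected": 60},   # selection rate 30%
    "group_b": {"applied": 180, "selected": 27},   # selection rate 15%
}

rates = {g: v["selected"] / v["applied"] for g, v in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest                      # compare to most-favored group
    verdict = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {verdict}")
```

Anyone on the leadership team can follow that arithmetic, which is precisely the point: the legal exposure turns on numbers that are simple to compute once someone thinks to compute them.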

Andrew Smith, Director, FTC Bureau of Consumer Protection, perhaps sums up the government's view of AI in an April 8, 2020, blog post, explaining that "the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability."

And these are just examples of legal liability. I left out obvious business risks, including harm to reputation, such as Microsoft's Tay chatbot, which, shortly after being launched, was coerced by users into engaging in racist rants on Twitter.

But here's the truly disturbing part. When a problem like these arises, it becomes apparent that the algorithm was put in place by Jake in the IT department. Why? Because he knows Python.

Senior industry leaders, including in-house counsel, presumably have the qualifications to make important decisions that involve complex strategy and entail bet-the-farm outcomes. They have years of experience, and that experience typically covers a broad range of scenarios and the wide range of risks that need to be navigated. But today, all too often, that expertise does not include an understanding of AI and the risks associated with it. Senior leaders and counsel shy away because a reasonable understanding requires some measure of math, statistics and computer science. And that is scary stuff to someone who's decades beyond school.

I am 60 years old, and I faced that problem. In my day job, I advise those developing or using AI on legal and regulatory requirements. Unfortunately, though, I didn't understand how these algorithms really worked. To address that deficit, I went back to the University of Michigan for an online Master of Applied Data Science. I had never written a line of code in my life.

That was almost three years ago. I will graduate this December.

Going back to school at an advanced age can be terrifying. At times, I also found it humiliating. I clearly knew less than most of my classmates (who were, by the way, roughly the same age as my children). But I do have something that most millennials don't — bad knees. That meant I could sit in a chair for endless hours and work on homework without the temptation to do something else. And the pandemic helped by keeping me at home.

In order to participate in any discussion, you need to understand the vocabulary. Like any technical subject, data science certainly has its share of esoteric terminology and acronyms. It's important both to know what the words mean and to develop an intuitive understanding, so that you can ask intelligent questions.

Artificial intelligence is presented as being kinda mysterious. Almost magic. But it's not. Far from it. It's really just math and statistics. And it's helpful to understand, at least intuitively, what math is involved, to get a sense of how the inputs to an algorithm can affect its output. You start to learn what is random, and what is not. You start to appreciate that a lot of the exercise in data science is clarifying the signal and getting rid of the noise. And you need to appreciate what goes on between input and output, and what might produce inaccurate results, or results that are prone to change over time as the model is continuously used.
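
To make the "just math" point concrete, here is a minimal sketch, with invented weights and features, of what one of the most common models, logistic regression, actually does with its inputs: multiply, add, and squash the sum into a probability. Nothing mystical happens between input and output.

```python
import numpy as np

# A trained "model" is just stored numbers: one weight per input feature.
# These weights and features are invented for illustration.
weights = np.array([0.8, -1.5, 0.3])   # e.g., income, debt ratio, years employed
bias = -0.2

def predict(features: np.ndarray) -> float:
    """Logistic regression: a weighted sum of inputs, squashed to a 0-1 probability."""
    z = np.dot(weights, features) + bias   # plain multiplication and addition
    return 1.0 / (1.0 + np.exp(-z))        # the sigmoid function

applicant = np.array([1.2, 0.4, 0.9])
print(f"Predicted probability: {predict(applicant):.2f}")

# Change one input and the output moves in a predictable, inspectable way.
higher_debt = np.array([1.2, 2.0, 0.9])
print(f"With higher debt:      {predict(higher_debt):.2f}")
```

Training is simply the process of choosing those weights to fit historical data, which is exactly why biased or noisy historical data gets baked into the output.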

Data scientists often complain that the vast majority of what they do is simply cleaning up the data, and that's probably true. But it's really important to understand all the different ways that unclean data can lead to an erroneous result.
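
As a toy illustration, here is a sketch with invented records showing two common problems, a duplicated row and a sentinel value standing in for missing data, quietly distorting a statistic that a model might later learn from.

```python
import numpy as np
import pandas as pd

# Invented applicant data; -999 is a common sentinel for "missing."
df = pd.DataFrame({
    "applicant_id": [1, 2, 2, 3, 4],                     # note: id 2 is duplicated
    "annual_income": [55000, 72000, 72000, -999, 61000],
})

print("Naive mean income:  ", df["annual_income"].mean())  # dragged down by -999

clean = (
    df.drop_duplicates(subset="applicant_id")              # drop the repeated record
      .replace({"annual_income": {-999: np.nan}})          # treat the sentinel as missing
)
print("Cleaned mean income:", clean["annual_income"].mean())
```

The naive mean is wrong for two independent reasons, and neither would raise an error; the dirty data simply flows through and skews whatever is built on top of it.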

While I chose a master's, that's not the only way to do this. In my Facebook feed I get an endless stream of ads from nearly every major university in the country touting their courses and certifications in AI. The courses are typically called something like "Data Science for Leaders."

These educational programs can give a leader a reasonable understanding in as little as six weeks, though more typically six to nine months. Indeed, there's an almost endless number of ways to obtain an education in AI, including courses on online platforms such as Coursera that offer asynchronous learning you pursue at your own pace. These are particularly good for learning Python as a prelude to data science. An extremely popular course is Python for Everybody, by Chuck Severance at the University of Michigan. He got me through the topic when I started with absolutely no background. His secret is to just talk like a regular guy.

From my standpoint, I suggest that senior leaders stay focused on what they need to know. Senior leaders do not want to become data scientists themselves, so a lot of the content in a typical master's program would be useless to them, such as learning how to present or communicate data science models, how to use SQL databases, and how to write Python code to efficiently manage large data sets.

Instead, I would recommend focusing on the core topics of supervised, unsupervised and reinforcement learning, and the courses that are essential to prepare you for those topics. Typically, that will include classes on math, statistics and manipulating data to prepare it for modeling.
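
For intuition only, here is a minimal sketch of the first of those core topics: in supervised learning, the algorithm is shown labeled historical examples and fits a rule it can apply to new cases. The data, features and library choice (scikit-learn) are mine, purely for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Supervised learning: labeled historical examples (features -> known outcome).
# All numbers are invented.
X = [[35, 1], [52, 0], [29, 1], [61, 0], [45, 1], [38, 0]]  # [age, has_collateral]
y = [0, 1, 0, 1, 1, 0]                                      # 1 = repaid loan late

model = LogisticRegression().fit(X, y)    # "training" = choosing the weights

# The learned rule can now be applied, and questioned, on a new applicant.
print(model.predict([[41, 1]]))           # predicted label
print(model.predict_proba([[41, 1]]))     # the probability behind it
```

Unsupervised learning, by contrast, receives no labels and looks for structure on its own, and reinforcement learning improves a decision policy through trial-and-error feedback.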

It really doesn't matter how, but I would strongly urge senior managers and their counsel to get educated about AI so they can fulfill their responsibility to provide leadership on important topics that have significant consequences for their businesses. In fact, as with cybersecurity, officers and directors could end up personally involved in litigation. It's just not safe to unthinkingly delegate that responsibility to junior people simply because they have a more current education. Not to mention the fact that the board might decide that Jake should be the CEO. After all, he knows Python.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.

Bradley Merrill Thompson is a member of Epstein Becker Green in the Washington, D.C., office and serves as Chairman of the Board and Chief Data Scientist of EBG Advisors, Inc. He leads an initiative to comprehensively serve the legal needs of clients that develop or use AI tools and has been deeply involved in some of the most innovative technologies in this space in the United States and internationally. He regularly advises developers of new "software as a medical device" (or "SaMD") products seeking FDA approval through the de novo process, the 510(k) process, and even in tandem with drug products. He can be reached at bthompson@ebglaw.com. The author thanks partners Jason Christ, Adam Forman, Stuart Gerson and Nathaniel Glasser for their comments on this article.
