Spoiled for Choice: AI Regulation Possibilities

William O’Reilly


I. Introduction

Americans want innovation, and they believe advancing AI benefits everyone.[1] One way to encourage innovation is to roll back regulations.[2] Unfortunately, part and parcel with these innovations are several harms likely to result from the inappropriate use of personal and proprietary data and from AI decision-making.[3] One option is to ignore this potential harm and halt regulation, encouraging the free spread of personal information.[4] That option is not in the country’s best interest: the U.S. is already losing the innovation race in some respects, and innovation can still occur under heavy regulation. Virginia is the latest state to pursue the “no regulation” strategy, and it provides a useful microcosm for highlighting the challenges and advantages of this approach.[5] Virginia’s absence of regulation falls at one end of a spectrum of legislation that demonstrates the options states have to protect both rights and innovation. As this article discusses, curbing AI regulation on companies will not advance innovation enough to justify the civil rights violations perpetuated by current AI use.


Addressing the Vectors for Attack on Artificial Intelligence Systems Used in Clinical Healthcare through a Robust Regulatory Framework: A Survey

By Benjamin Clark

Introduction and Overview

Artificial intelligence has captivated the interest of the general public and academics alike, bringing closer attention to previously unexplored aspects of these algorithms: how they have been implemented in critical infrastructure, how they can be secured through technical defensive measures, and how they can best be regulated to reduce the risk of harm. This paper discusses vulnerabilities common to artificial intelligence systems used in clinical healthcare and how bad actors exploit them, then weighs the merits of the regulatory frameworks currently proposed by the U.S. and other nations according to how well they address the cybersecurity threats facing these systems.

Primarily, artificial intelligence systems used in clinical research and healthcare settings involve either machine learning or deep learning algorithms.[1] Machine learning algorithms automatically learn and improve without needing to be specifically programmed for each intended function.[2] However, these algorithms require that input data be pre-labeled by programmers so that the algorithm learns to associate input features with the best predicted label for the output, which involves some degree of human intervention.[3] This human involvement is referred to as “supervised machine learning” and is most often observed in systems used for diagnostics and medical imaging, in which physicians set markers for specific diagnoses as the labels and the algorithm learns to categorize an image under a diagnosis based on the image’s characteristics.[4] Deep learning, in turn, is a subset of machine learning characterized by its “neural network” structure, in which input data passes through the network’s input, “hidden,” and output layers to identify patterns in the data.[5] Deep learning algorithms differ from other machine learning algorithms in that they require no human intervention after being trained; instead, deep learning algorithms process unlabeled data, determining which input features are most important in order to create their own labels.[6]
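To make this distinction concrete, consider the following minimal Python sketch. It is illustrative only and not drawn from the paper: the feature values, labels, and model choices are hypothetical stand-ins built on the scikit-learn library. It shows how a supervised classifier is trained on physician-labeled features, and how a neural network inserts “hidden” layers between its input and output.

    # Minimal, hypothetical sketch of the two approaches described above.
    # Assumes scikit-learn is installed; the data values are invented
    # stand-ins for image-derived features, not real clinical data.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier

    # Supervised machine learning: programmers pre-label the training
    # data, and the algorithm learns to associate input features with
    # those labels.
    X_train = [[0.91, 0.12], [0.15, 0.88], [0.87, 0.20], [0.10, 0.95]]
    y_train = ["malignant", "benign", "malignant", "benign"]  # physician-set labels

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.predict([[0.85, 0.18]]))  # categorizes a new image by its features

    # Neural-network structure: data flows from an input layer, through
    # "hidden" layers, to an output layer. (Deep learning systems in
    # production go further and process unlabeled data; this supervised
    # toy model only illustrates the layered architecture.)
    mlp = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0)
    mlp.fit(X_train, y_train)
    print(mlp.predict([[0.12, 0.90]]))

The key contrast lies in the data: the first model cannot learn anything without the pre-labeled y_train, whereas a true deep learning system would derive its own labels from unlabeled input.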
