What Kind of AI Should We Be Concerned About?
You have by now read dozens of times about the benefits and risks of using artificial intelligence (AI) to drive innovation. So I will spare you the platitudes and get straight to my main point today: to adequately approach AI risks and responsible innovation in practice, we should view AI for what it is.
To govern AI, regulations conceptualize it through legal definitions. These definitions are largely identical across the influential pieces of legislation and policy, such as the European Union AI Act, the Council of Europe’s Framework Convention on AI, the OECD Recommendation on AI, and the United States NIST AI Risk Management Framework. An AI system is considered to be software that meets all six criteria below:
(i) is embedded in a machine (e.g. computer),
(ii) infers from input (e.g. data, rules), and
(iii) generates output (e.g. predictions, content, recommendations)
(iv) based on a set of explicitly defined (i.e. programmed) or implicit (i.e. inferred from rules) objectives,
while being able
(v) to operate with some degree of independence and without human involvement, and
(vi) to self-learn and change over time.
The EU AI Act further singles out general-purpose AI models, while still treating them as AI. They are defined as statistical and computational models trained “with a large amount of data using self-supervision at scale” that exhibit “significant generality” and are “capable of competently performing a wide range of distinct tasks”. Large language models are a case in point.
The practical meaning of these notions is expected to be clarified in forthcoming guidance, including from the European Commission. However, software applications that incorporate general-purpose AI models arguably qualify as AI systems, given these models’ quintessential capability to operate and learn independently. Hence, my second to sixth points below indirectly apply to such applications as well.
Some scholars criticize the definition of AI systems as too narrow. It does not capture software that fails to exhibit the autonomy or adaptiveness required under (v) and (vi) above. As a result, the AI regulations and policies above would arguably not apply to such software. Some of it, however, automates data analytics and consequential processes, such as profiling, performance scoring or diagnostics. Examples include the so-called “locked” algorithms approved as medical devices and the Horizon software used in the past by the British Post Office.
This brings me to the six facets of my point:
First, various software marketed as AI does not qualify as such under AI regulations despite its risk profile. As a result, such software would not be formally required to meet the attendant technical or other regulatory requirements under these regulations. Unless subject to sector-specific mandatory safety rules, this software may go largely unchecked. If flawed, it may pose systemic risks or cause substantial harm to individuals and social groups.
Second, the narrow definition of AI systems thus subtly preordains AI providers’ and deployers’ approaches to risk management, at least in the European Union. With regard to software that qualifies as an AI system, providers are statutorily required to take various risk management and quality assurance measures depending on the system’s risk profile. Deployers, such as corporate users, can then rely and build upon these measures to address their own concerns. With regard to software that qualifies neither as an AI system nor as any other regulated product (e.g. a medical device), deployers would need to secure risk management and quality assurance through the usual contractual means, such as bespoke obligations on, and representations and warranties by, the provider in IT development or licensing agreements.
Third, the definitions above conceptualize AI as a product. Is it enough, however, to look at AI only as a product? AI is often deployed to optimize outcomes, i.e. to make task execution or decision-making more efficient. How AI is used to pursue these narrow goals and whether this use aligns with the broader underlying context ultimately undergird AI’s ethical, legal and social implications. Think of the British Post Office Horizon example above, or of software used to predict and manage health needs and outcomes. Hence, viewing AI as business and organizational practices of automating decision-making and other tasks in particular ways speaks more truly to AI’s rationale and imprint.
Fourth, viewing AI in this way transcends the narrow scope of AI regulations and crystallizes the application of various other laws to AI. For example, personal data processing that pertains to or constitutes profiling triggers the application of European Union data protection rules (see here and here). Automated filtering of resumes and pre-selection of job applicants may fall within the remit of non-discrimination laws. AI-assisted workplace tracking would need to conform to employment and privacy regulations. Leveraging AI to grow a company’s market position would arguably need to accord with EU, US and UK antitrust rules.
Fifth, socially consequential business and organizational practices should also withstand public scrutiny for their social acceptability. The use of AI is no exception. This means that AI applications must align with the values and civic goals of affected individuals, communities and society at large. To pass the test, AI should be used ethically, i.e. in ways that support individual and collective privacy, autonomy, fairness and flourishing. I discuss what this entails conceptually and practically in my bio- and AI ethics newsletter (link here if you wish to subscribe). In a nutshell, though, ethical and social concerns add to the list of issues that AI providers and deployers should consider and address in order to avoid social mistrust and aversion towards their AI-assisted operations.
Sixth, and equally importantly, viewing AI as business and organizational practices helps overcome a persistent misconception: that if AI (as a product) meets a certain set of technical specifications and standards endorsed by AI regulations, it is automatically legally, ethically and socially compliant. Yet how businesses and other organizations use AI, and thereby impact the environments in which it is applied, ultimately defines or defies the legality, morality and social acceptability of AI as business and organizational practices.
In short, AI is as much a set of products as it is the business and organizational practices of using those products to automate and optimize processes. Taking AI for what it is helps better reconcile and address its implications. This means considering the requirements not only of bespoke AI regulations but also of other applicable laws, as well as the need for additional contractual and internal safeguards that deployers in particular should introduce to secure the sustainable and responsible use of AI in their own operations.
This material is for informational purposes only and does not constitute legal advice. Viewing or using this content does not establish an attorney-client relationship. If you need legal assistance, you should seek advice from a qualified attorney.