The Practical Utility of Viewing AI as Practices
Past issues of this newsletter advocate for approaching artificial intelligence (AI) as business and organizational practices of using software and data to automate task execution and decision-making. This more holistic approach helps overcome a narrow and reductive conception of AI as merely a product. Viewing AI as automation practices more adequately distills AI’s rationale and social imprint, and surfaces its broader legal and ethical implications. This, in turn, supports better risk management, sustainable use and responsible innovation. A practical example in the last issue illustrated how conceptualizing AI as practices helped elicit, assess and manage the implications of AI for copyright.
Yet, beyond the conceptual soundness outlined above, what is the exact practical utility of this approach? It is – in my view – twofold: analytical and procedural. On the one hand, the approach brings analytical clarity to AI governance efforts by providing a structured framework for untangling AI, namely as:
Conduct, i.e. a string of decisions and actions taken at various stages of automation as to how to design, program, deploy and/or use it, and as to what objectives to pursue thereby and how;
In which various professionals or organizational bodies engage, for example computer engineers, data scientists, mid- or senior-level managers, data protection officers, corporate boards or committees;
That is attributable to these professionals and/or bodies, i.e. they influence the actions and/or decisions;
Potentially consequential, i.e. the decisions and actions shape the course, outcomes and effects of automation;
Manageable, i.e. that course and those outcomes and effects can be screened and evaluated, and attendant risks can be preempted or mitigated through further appropriate decisions and/or actions;
That attaches liability, i.e. the professionals, bodies and organizations carry legal and/or financial responsibility for the actions and decisions that they take.
Why and how does this matter? Various technology regulations, especially in the European Union (the EU), require developers and users of AI-assisted solutions to assess and mitigate the associated risks at various stages of the development and use lifecycle. The next issue of the newsletter will break these requirements down further. For the sake of this argument, however, here is a quick rundown. Developers and/or users of AI systems must carry out risk assessments under:
- The EU General Data Protection Regulation (the GDPR), in the form of data protection impact assessments when the data processing involved poses risks to the privacy and other fundamental rights of the individuals whose data is processed, which is arguably the case with AI;
- The EU AI Act, in the form of ongoing iterative risk assessments and mitigation measures by providers of high-risk AI, as well as fundamental rights impact assessments by certain deployers of high-risk AI (e.g. public authorities, private entities tasked with the discharge of public services, financial institutions and life insurers);
- The EU Digital Services Act, in the form of assessments of systemic risks posed by the design and functioning of very large online platforms and very large online search engines, including their AI-powered functionalities;
- The EU Cyber Resilience Act, in the form of assessments of the cybersecurity risks stemming from digital products and systems, including software and products that embed AI;
- The EU Corporate Sustainability Due Diligence Directive, in the form of comprehensive due diligence by larger EU-based companies of their own and their third-party vendors’ activities, including AI-assisted ones, as to their possible adverse effects on human rights and the environment;
- The national legislation implementing the EU NIS 2 Directive, in the form of cybersecurity risk assessments and management measures with regard to digital and other critical, technology-enabled networks and infrastructure, including those integrating AI solutions.
More generally, business and organizational practices that interfere with third parties’ rights and legitimate interests would typically necessitate a sensible evaluation of the potential ramifications of this interference. Examples to that effect are the impact of AI on creators’ copyrights discussed in the last issue, as well as the concerns about potential bias and discrimination in AI-assisted pre-screening and selection of job applicants.
In addition, corporate users of IT (including automation) solutions routinely manage the associated risks through contractual arrangements. Technology transfer, IT outsourcing and in-licensing agreements typically prescribe detailed metrics and protocols for technical due diligence, service levels, data protection and cybersecurity, incident reporting, etc. As part of these arrangements, IT vendors undertake wide-ranging obligations for risk screening, management and mitigation, as well as for incident reporting and remediation. Inherent risks are subject to meticulous representations, and expansive warranties back up the performance and legal conformity of the IT solution. The failure to observe these obligations and warranties can result in – at times substantial – indemnities and compensation.
This brings us to the second benefit of viewing AI as practices: the procedural value thereof. In addition to providing clarity, the analytical framework above also helps chart pathways to addressing the risks. It enables decision-makers, legal and compliance functions, contract negotiators and the other professionals involved to deliver on risk assessment and management mandates by:
Breaking AI down into decisions and actions and identifying those that fundamentally underpin the course, outcomes and various effects of the automation;
Evaluating the impact and consequences of these specific decisions and actions beforehand and then iteratively;
Taking steps to preempt or mitigate these adverse risks and consequences by altering or abandoning the decisions or actions;
Instituting internal procedures, protocols and controls, and/or contractual safeguards in relationships with third-party IT vendors, that ensure adequate execution of the altered decisions/actions;
Screening and revisiting the above as needed.
Let’s take the oversimplified example of a corporate user commissioning a third-party vendor to develop and integrate a computer program for automating text generation. And let’s look at the training of the program as a specific case study to learn from. Viewing AI as practices invites a careful examination, evaluation and re/calibration of a multitude of decisions and actions in close collaboration among the corporate user’s and the vendor’s IT specialists, data scientists, data protection officers and management teams. These decisions and actions concern, for example:
- How the vendor would procure the training data, e.g. by scraping the internet or using publicly available datasets or proprietary data of the corporate user.
- Assuming the parties opt for web scraping, what data to scrape and how.
- If the vendor engages in indiscriminate scraping through text and data mining, some of the harvested data may constitute copyrighted materials. Then, the corporate user and the vendor would need to decide on and follow through with measures that manage the legal aspects and associated risks highlighted in the previous issue of the newsletter (see also here). For example, what kind of websites to mine, what types of web crawlers to use, how to identify opt-outs in order to honor them (see the simplified sketch after this list), and how to go about copyright licensing if needed.
- Bulk mining of the web will almost inevitably also yield some personal data (e.g. originating from personal webpages, social media profiles or posts). Collecting, further curating, labeling, storing, etc., personal data would qualify as processing under most current data privacy laws. Processing the personal data of individuals in the EU triggers the application of the EU GDPR, at least in cases when the vendor is established in the EU. To manage that aspect, the corporate user and the vendor would need to decide on and implement processing protocols that preempt or at least limit personal data collection. Examples of measures to that effect include filtering the scraped data, as sketched in the code example after this list.
- If filtering proves insufficient or ineffective, personal data may still end up in the training corpora. Then, the vendor and the corporate user should consider whether the large-scale, bulk acquisition of data requires a data protection impact assessment (DPIA) as “automated processing”, for example under the EU GDPR. If so, the DPIA itself will involve a set of decisions and interventions regarding eliminating or mitigating the impact of web scraping on data subjects, a topic for another issue of the newsletter.
- Proper execution of each of these decisions and measures would merit corresponding contractual obligations, representations and warranties in the IT outsourcing agreement, such as an obligation for the vendor to conduct a sufficiently detailed and considered DPIA, or to procure copyright licenses for copyrighted data, or warranties that the automation tool infringes no third-party copyrights, etc. The IT outsourcing agreement would typically attach a data protection addendum that prescribes the type of data processing and the measures ensuring its compliance with applicable data privacy laws.
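To make the opt-out and filtering points above more tangible, here is a minimal, purely illustrative Python sketch of two such measures: checking a site’s robots.txt before crawling it, and redacting obvious personal data (e-mail addresses and phone-number-like strings) from scraped text before it enters the training corpus. The crawler name, URL and regular expressions are hypothetical placeholders; a production pipeline would rely on far more robust opt-out signals (such as machine-readable text-and-data-mining reservations) and personal data detection.

```python
# Illustrative only: honoring robots.txt opt-outs and filtering obvious
# personal data from scraped text before adding it to a training corpus.
# The user agent string, URL and regex patterns are hypothetical.

import re
import urllib.robotparser
from urllib.parse import urlparse

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def crawl_allowed(url: str, user_agent: str = "example-training-crawler") -> bool:
    """Check the target site's robots.txt to see whether this URL may be fetched."""
    parts = urlparse(url)
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        robots.read()
    except OSError:
        # If robots.txt cannot be retrieved, err on the side of not crawling.
        return False
    return robots.can_fetch(user_agent, url)


def redact_personal_data(text: str) -> str:
    """Remove e-mail addresses and phone-number-like strings from scraped text."""
    text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    return PHONE_RE.sub("[REDACTED_PHONE]", text)


# Usage: scrape only pages whose operators have not opted out,
# and filter their text before it enters the training corpus.
corpus = []
scraped = [("https://example.com/article", "Contact me at jane@example.com")]
for url, raw_text in scraped:
    if crawl_allowed(url):
        corpus.append(redact_personal_data(raw_text))
```

The point is not the specific code, but that each of these technical choices is a discrete decision that can be documented, evaluated and backed by the internal controls and contractual safeguards discussed above.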
For brevity’s sake, this example ends here. You have probably by now grasped its gist and the analytical and procedural utility of viewing AI as data-driven automation practices, rather than merely as a product. If questions still linger, please feel free to reach out and we can continue the conversation offline.