SC22 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Birds of a Feather Archive

AI Is Not Neutral! Ethical Concerns of Coupling AI with HPC


Authors: Jesmin Jahan Tithi (Intel Labs), Sameh Abdulah (King Abdullah University of Science and Technology (KAUST)), David Keyes (King Abdullah University of Science and Technology (KAUST)), Tony Hey (Rutherford Appleton Laboratory, Science and Technology Facilities Council (STFC))

Abstract: HPC is increasingly employed in AI. Although HPC itself is natively ethically neutral, its use to enable AI applications that can have harmful impacts on humans and society can render HPC collusive and ethically liable. This BoF will consider the ethical implications of coupling AI and HPC and the formation of guidelines for the HPC community to ensure that researchers consider the potentially harmful consequences of their research and adhere to best practices for sustainable and ethical use of HPC resources.

Long Description: The coupling between HPC and AI is intensifying. HPC researchers are improving the performance of AI applications, e.g., GPT-3 and large-scale recommendation models. At the same time, AI is being incorporated into HPC-based simulations used for policy support, whose traditional claims to reliability (involving validation, verification, and uncertainty quantification) and whose traditional standards of reproducibility may not extend readily to AI-assisted simulations. Crossing HPC and AI brings into HPC new questions of ethics and trustworthiness concerning, for example, fundamental human rights, fairness, privacy, safety, and security. The AI community has been grappling intensively with such issues. It is incumbent on the HPC community now to do the same.

While HPC and AI share some ethical risks, such as the dual use of computational pharmacology to save lives or to create bio-weapons, AI brings with it biased data, a lack of explainability, and sometimes insurmountable obstacles to reproducibility. The lack of reproducibility is exacerbated by the massive amounts of energy consumed in AI training, which few can afford, and which, arguably, is an unethical addition to the world's carbon footprint. According to the EU's high-level expert group (https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai), AI applications should respect fundamental human rights and autonomy, prevent harm, be fair, and be explicable in order to be trustworthy. Coupling AI with HPC propagates the need to be mindful of these principles.

Through this BoF, we aim to find out what the HPC community thinks about the ethical aspects of coupling HPC with AI. We list some possible questions to discuss during the BoF.

How should we deal with potentially harmful applications of AI? The creators of AI technologies may not be in full control of them once they are unleashed. For example, AI technologies for drug discovery could be misused for the de novo design of biochemical weapons with very little effort (https://www.nature.com/articles/s42256-022-00465-9). HPC can enable both.

How can we protect privacy and equality of opportunity? AI can exert control over human lives, threatening autonomy, security, and privacy. A keynote on digital twins at ISC22 noted how empowered AI could allow virtual teleportation and prediction of multiple futures, potentially enabling someone to make optimal decisions in the present. Can such “superpowers” be accorded to some people without disadvantaging others?

Is it robust to use AI for safety-critical scientific computing? AI is being adopted in scientific and engineering codes, e.g., to tune model parameters, a task for which mathematical models as close as possible to first principles have been used traditionally. AI has proven to be fast and effective in many such instances. However, AI-based models often lack the explainability of the models they replace. How can AI-based models earn the same certifications as the models they replace?

How sustainable is it to use HPC resources to train very large AI models? The resources and computing power required to train large AI models can be significant. Data centers training AI models produce CO2 emissions comparable to those of large fleets of fossil-fuel cars. Who determines what portion of the CO2 budget can be allocated to AI, even when rendered as efficient as possible using HPC technologies?
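The fleet-of-cars comparison above can be made concrete with a back-of-envelope estimate. The sketch below is purely illustrative: the accelerator count, per-device power draw, training duration, PUE, and grid carbon intensity are assumed round numbers, not measurements of any actual training run; the ~4.6 tonnes CO2 per car-year figure is the commonly cited EPA estimate for a typical passenger vehicle.

```python
# Back-of-envelope CO2 estimate for a hypothetical large training run.
# Every input below is an illustrative assumption, not a measurement.

gpus = 1000                 # assumed number of accelerators
watts_per_gpu = 400         # assumed average draw per accelerator (W)
days = 30                   # assumed training duration
pue = 1.5                   # assumed data-center power usage effectiveness
kg_co2_per_kwh = 0.4        # assumed grid carbon intensity (kg CO2 / kWh)
car_tonnes_per_year = 4.6   # EPA estimate for a typical passenger vehicle

# Total facility energy: IT load scaled by PUE over the training period.
energy_kwh = gpus * watts_per_gpu / 1000 * 24 * days * pue

# Convert energy to emissions, then to equivalent car-years.
co2_tonnes = energy_kwh * kg_co2_per_kwh / 1000
cars_equiv = co2_tonnes / car_tonnes_per_year

print(f"{energy_kwh:,.0f} kWh -> {co2_tonnes:,.1f} t CO2 "
      f"(~{cars_equiv:,.0f} car-years)")
```

Under these assumed inputs, a single month-long run consumes hundreds of megawatt-hours and emits on the order of a few dozen car-years of CO2; scaling to the repeated, larger runs typical of frontier-model development is what yields fleet-scale totals.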

