If you’ve ever pondered the meaning of existence, then you’ve partaken in existentialist philosophy. Searching for the purpose of life can be disheartening.
Two existentialists walk into a bar. The bartender asks what they want. The first says, “I couldn’t care less”. The second says, “I could”.
‘Existential threats’ challenge something’s very existence, and are also disheartening. People have classified many global catastrophic risks as ‘existential’, from asteroid collisions with Earth to pandemics, extraterrestrials, the Sun becoming a red giant, or entropy leading to the end of time.
A global warming cartoon from Private Eye reads, “This AI stuff is terrifying – we’ve created technology that could destroy us”. The same could be said of nuclear warfare, bioweapons, or the ‘grey goo’ of nanotechnology.
One division of existential threats is between those we can do something about and those we can’t. Another is between those we are causing and those that merely happen to us. What galls most people facing the twin existential crises of AI (machine learning) and global warming is that both are self-inflicted. We could stop them cold if we changed our behaviour.
AI and global warming exhibit two interesting interdependencies. First, AI makes global warming worse: online AI searches, including those about how to stop global warming, consume far more energy than traditional information searches or ‘googling’. Second, AI is likely to be an important technology in stopping global warming. In 2016, Google attributed data centre cooling bill reductions of 40% to its use of AI from DeepMind, a firm it had purchased in 2014. AI has long powered sophisticated global warming models.
The 1998 Wingspread Conference on the Precautionary Principle defined the principle as: “When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically”. Both AI and burning fossil fuels raise such threats. And yes, precautionary measures should be taken, but AI and fossil fuel burning are both genies out of the bottle, so the measures largely fall to markets and regulation.
UN COP is the Conference of the Parties to the United Nations Framework Convention on Climate Change (UNFCCC), commonly referred to as the UNFCCC COP. The ultimate objective of the Framework Convention, as stated in 1992, is “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic [i.e., human-caused] interference with the climate system”. Despite efforts since the first conference (COP1), held in Berlin in 1995, annual global greenhouse gas emissions continue to rise and, though they may have stabilised on a per capita basis, show little sign of falling soon.
Yet there are grounds for optimism: credible emissions trading schemes (ETSs) now cover 23% of global emissions. ETSs can and do work around the world, once permit issuance is restricted and the market rules enforced. A major breakthrough was the launch of China’s national ETS in 2021. As most of these schemes cover roughly 50% of their national or regional emissions, extending each to 100% national coverage would bring roughly 46% of global emissions under an ETS. COP could clearly focus on getting every nation to have an ETS. Two of the larger outliers with no ETS are the USA (14% of global emissions) and India (7%).
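To make the arithmetic explicit, here is a minimal sketch in Python using the figures cited above (the 23% current coverage, the roughly 50% typical national coverage, and the USA and India shares); it is an illustration, not a dataset.

```python
# Back-of-the-envelope ETS coverage arithmetic, using the figures in the text.

covered_now = 0.23             # share of global emissions under credible ETSs today
typical_national_share = 0.50  # an ETS typically covers ~50% of its host's emissions

# Share of global emissions produced by nations that already run an ETS:
ets_nations_emissions = covered_now / typical_national_share
print(f"ETS nations emit ~{ets_nations_emissions:.0%} of global emissions")  # ~46%

# Extending every existing scheme to 100% national coverage would therefore
# bring ~46% of global emissions under an ETS.

# Two large outliers with no ETS (shares cited in the text):
usa, india = 0.14, 0.07
print(f"Adding the USA and India would cover up to {usa + india:.0%} more")
```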
Meanwhile, ESG has become, incorrectly, synonymous with preventing global warming. Rating agencies and consultancies are developing ESG scores for clients, covering a wide variety of issues using a wide variety of algorithms. But as the MIT ‘Aggregate Confusion’ project found, a company can sit in the top 5% on one ESG rating algorithm and the bottom 20% on another. Reputable firms like Shell find themselves rated A by one agency and C+ by another. A recent issue of the Financial Times’s responsible investing supplement, FTfm, notes that “a lack of definitions and data is a sizeable obstacle to sustainable (ESG) investing”.
Advisors to financial services firms have been pushing ESG approaches hard, not least because ESG work generates fees. Yet the resulting clamour, confusion, and costs have led firms to leave capital markets for private sources of funding, negating the intended outcome that ESG strictures should raise the cost of capital for non-compliers. Good, legal cashflow is valuable in unlisted markets too.
Taxonomic confusions have led to perverse roadmaps and blockages. The threat of litigation has led some firms, e.g. major insurers and US-listed companies, to drop emissions targets altogether. Social and governance issues are only tangentially related to greenhouse gas emissions. Within ‘E’, there are market failures where ESG monitoring may help, e.g. forestry, water, biodiversity. For greenhouse gases, however, ETSs can and do work. Perhaps the chant should be, “Take the ‘C’ out of ‘ESG’”.
It would be nice to see a COP that sets out a much clearer financial compact, combined with some straight talking: “We, the global financial services industry, can stop greenhouse gas emissions using markets, if governments commit to restricting permit issuance and enforcing the markets”.
Can markets ‘fix’ the AI problem? Chlorofluorocarbon (CFC) damage to the ozone layer was reversed by banning CFCs, but alternatives to CFCs already existed. Banning AI looks unlikely to succeed. SOx and NOx pollution was solved largely by cap-and-trade markets, because some emissions still needed to be permitted. Greenhouse gases, as agreed at COP3 in Kyoto, fall into a category similar to SOx and NOx: markets can fix this problem.
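To see in miniature how such a market fixes the problem, here is a hedged sketch of textbook cap-and-trade clearing; the firms, emissions, and abatement costs are invented for illustration, and no specific ETS’s rules are implied.

```python
# Minimal cap-and-trade sketch: a regulator fixes the cap; firms with cheap
# abatement cut emissions and sell permits to firms with expensive abatement.
# Firms and costs below are invented for illustration.

firms = {  # name: (uncontrolled emissions in tonnes, marginal abatement cost per tonne)
    "Alpha Power":  (100, 10.0),
    "Beta Steel":   (100, 40.0),
    "Gamma Cement": (100, 70.0),
}
cap = 150  # total permits issued, so 150 of the 300 tonnes must be abated

# Abate cheapest-first until the cap is met; the cost of the last (marginal)
# tonne abated sets the permit price every firm faces.
needed = sum(e for e, _ in firms.values()) - cap
price = 0.0
for name, (emissions, cost) in sorted(firms.items(), key=lambda kv: kv[1][1]):
    if needed <= 0:
        break
    abated = min(emissions, needed)
    needed -= abated
    price = cost
    print(f"{name} abates {abated}t at £{cost:.0f}/t")

print(f"Clearing permit price: £{price:.0f}/t")
# Tightening the cap raises the price: the regulator controls the quantity,
# and the market discovers the price.
```

The point is the one made above: restrict permit issuance and enforce the market, and the price does the rest.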
At the moment, legislators are passing laws, most of which have great difficulty defining AI. Those keen on market forces argue that an AI tax might help; others that insurance funds could be built to price risk and make restitutions for errors. The Precautionary Principle urges us to take some action now.
One further market approach is to create a market of certifiers of AI systems and of the people who use them. Similar approaches of accreditation and certification, frequently built on ISO standards, underpin a number of voluntary standards markets: examples include shipping regulation, gas boiler installation, laboratory standards, wine appellations, and automotive safety. Over time, many voluntary standards markets become mandatory, whether through convention (nobody buys from people without a quality mark) or through legislation.
The CISI, acting in conjunction with the City of London Corporation, Worshipful Company of Information Technologists, British Computer Society, United Kingdom Accreditation Service, BSI, London Chamber of Commerce & Industry, ACCA, and Z/Yen Group, has initiated the 695th Lord Mayor’s Ethical AI Initiative. This initiative is bringing out a course in 2024 to educate and certify people who deploy AI systems ethically, using various tools already developed, such as ISO standards and ethical methodologies.
The initiative has been warmly received by policymakers, universities, and firms. The first people to go through the course are in financial services firms, but it is hoped that by early 2025 the open-access course will be adapted for other sectors, such as accountancy, law, media, and pharmacology.
In The End
A voluntary standards market for ethical AI is one attempt to address an existential risk. Alone, it is insufficient, but it is a start. Perhaps a mirror joke goes: two existentialists walk into a bar. The bartender asks what they want. The first says, “I could care less”. The second says, “I couldn’t”.