AI is generating concerns at both a societal and an individual level, and generative AI raises even more issues. Its climate impact is increasingly in the spotlight, prompting talk of responsible AI and a number of measures being put in place.
Artificial Intelligence (AI) and digitalization carry significant social risks, from job losses in incumbent industries to greater security concerns and increasing discrimination. Generative AI (GenAI) could automate a significant portion of a job’s tasks, leading to potential job losses in the occupations most affected. There is also a major risk of misalignment between the short-term financial incentives driving the development and deployment of AI, particularly GenAI, and the interests of humanity. A recent study by the Capgemini Research Institute found that 72% of consumers are worried about the misuse of GenAI technology, and according to the OECD AI Policy Observatory’s AI Incidents Monitor (AIM), reported incidents of these risks have risen sharply since the start of 2023.
There are also negative environmental impacts from GenAI across the value chain that need to be considered. Carbon emissions across the entire value chain are expected to increase. The race to build out data centre infrastructure has also raised questions about whether national energy grids can cope with the expected jump in electricity demand linked to AI, and whether those markets have sufficient renewables generation to power the technology. E-waste, and the rare minerals and metals required for the infrastructure and production of GenAI applications, are further risks to consider.
For investors, there is no single way to address the multi-faceted risks posed by the rapid adoption of AI and GenAI over the past two years or so. Engagement and stewardship tools will be the most effective. Investors need companies (both developers and deployers) to apply Responsible AI practices across the organization to safeguard against the social and environmental risks that AI poses. Organizations must establish clear principles for how they apply GenAI and set up guardrails to ensure its safe implementation, specifically to avoid bias, discrimination, misinformation and breaches of privacy.
While some environmental impacts, such as end-user energy consumption and data centre power efficiency, will improve with the wider decarbonisation of the grid, investors are increasingly concerned about whether the technology sector can deliver on its climate goals. Both developers and deployers of AI will need to invest substantially in additional renewable power.