In this editorial, our CEO Brian Hills discusses the news headlines that claim AI could lead to the extinction of humanity.
Opinions on AI have always been split between those who believe it can improve our lives and those who fear it will eventually take over our jobs and lead to an apocalypse. This dichotomy has persisted throughout AI’s development.
Recently, concerns about the impact of AI on humanity have been fuelled by headlines suggesting that it could lead to the extinction of our species. These headlines were prompted by testimony from Oxford Professor Michael Osborne and PhD student Michael Cohen to the UK Government’s Science and Technology Committee.
As the hype cycle around ChatGPT, DALL-E, and Lensa continued to build, an extreme view emerged regarding the potential risks of AI. This raised an important question about the responsibilities of those who develop and use data and AI technologies.
However, representatives from technology companies offered a less extreme viewpoint during a more recent hearing on the Governance of AI held by the Science and Technology Committee, where representatives from BT, Google, and Microsoft discussed their companies’ use of AI and the responsibilities that come with it.
They emphasised that generative AI, such as that behind ChatGPT, is only one aspect of the technology and doesn’t fully demonstrate its capabilities.
It’s highly unlikely that any of us will be around in 100 years to see whether Cohen’s and Osborne’s predictions about the impact of AI were correct. However, as the awareness shown by the big tech companies demonstrates, we have a responsibility to shape the development of AI in a way that benefits future generations. As regulations and governance around AI continue to evolve, we should focus on the legacy we leave behind. This involves three central areas of activity: innovation, education, and policy.
1) Educating technologists and the users of AI
AI has the potential to make our lives easier and more efficient by automating tedious tasks and augmenting human labour. However, it’s important to remember that, as Technology Ethicist Stephanie Hare notes, “Technology is not neutral. Every new technology is designed to improve the human experience, but there is always a potential for unintended consequences. Therefore, it’s essential to educate technologists and users to maximize benefits, predict use cases, and minimize harms.”
Furthermore, it’s important to consider where AI originates. During the Governance of AI hearing, the tech company representatives discussed how they embed AI within their solutions, but noted that it isn’t always possible to regulate technology created in certain overseas countries. As a result, they called for AI to be regulated based on use cases rather than on where the technology was developed.
As Hugh Milward, General Manager of Corporate, External and Legal Affairs at Microsoft UK, stressed in the hearing: “If AI is developed in a regime we don’t agree with, if we’re regulating its use, then that AI, in its use in the UK, has to abide by a set of principles – we can regulate how it’s used in the UK. It allows us then to worry less about where it’s developed and worry more about how it’s being used, irrespective of where it’s being developed.”
Having this awareness could mean the difference between AI being used for good and it being exploited further – and thus given an unnecessarily bad name. However, we also need to be mindful of how quickly technology, AI included, can evolve.
We have all seen how ChatGPT can be used to automate the creation of content. Its less mature predecessor, GPT-3, could formulate fluent sentences, but its output could be racist, sexist, and in some cases completely inappropriate. Through further development and training (work that was outsourced to teams in Kenya tasked with removing this toxicity), we are now seeing the real potential of chatbots. While there are undoubtedly ethical issues concerning the employment of these teams, teaching the platform to distinguish acceptable output from harmful output shows how taking responsibility for a tool can enhance its reputation, as the sketch below illustrates.
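To make that idea concrete, here is a minimal, purely illustrative sketch of how a chatbot might gate its replies behind a toxicity check. This is not OpenAI’s actual pipeline: `toxicity_score` is a hypothetical stand-in for a moderation classifier trained on human-labelled examples like those produced by the labelling teams described above.

```python
# Illustrative sketch only. `toxicity_score` is a hypothetical stand-in
# for a learned moderation classifier; real systems train one on large
# sets of human-labelled examples rather than using a keyword list.

TOXICITY_THRESHOLD = 0.5  # assumed cut-off; production systems tune this


def toxicity_score(text: str) -> float:
    """Return a crude 0-1 toxicity estimate for the given text."""
    flagged_terms = {"slur", "insult"}  # placeholder vocabulary
    words = text.lower().split()
    return sum(word in flagged_terms for word in words) / max(len(words), 1)


def moderated_reply(draft_reply: str) -> str:
    """Release the model's draft reply only if it passes the check."""
    if toxicity_score(draft_reply) >= TOXICITY_THRESHOLD:
        return "I'm not able to respond to that."  # refuse rather than emit harm
    return draft_reply


print(moderated_reply("Here is a helpful, polite answer."))
```

The design point is the gate itself: whatever the model drafts, a separate check decides whether it ships, which is why the quality of the human-labelled data behind that check matters so much.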
2) Increasing accessibility and public understanding of data and AI
Increasing education on data and AI will help to raise awareness of potential harms and enable us to work towards reducing risks. However, more work is still needed to ensure people are well-informed about AI and its uses.
As a responsible AI community, we must engage in more proactive public dialogue about how these technologies affect our daily lives. It’s essential to ensure that the public is informed about their rights and choices, even if opting out isn’t always feasible – being captured on CCTV, for example.
The Scottish AI Alliance is a great example of an organisation that collaborates with academia, charities, industry, and the public to encourage a better understanding of AI. By involving both adults and children, it aims to make AI trustworthy, ethical, and inclusive.
3) Accelerating policy development
On top of education and awareness, there are many ethical and legal frameworks in place to regulate the development and use of AI, including the EU’s General Data Protection Regulation (GDPR), which governs the use of personal data, and the forthcoming Online Safety Bill, which the UK Government says will “make the UK the safest place in the world to be online.” Because legislation has historically been slow to follow innovation, researchers, engineers, governments, organisations, and individuals must work together – nationally and internationally – to ensure AI is used ethically and its impact on society is positive.
This includes legislation focused on the long-term planning of AI advances and initiatives to actively mitigate risks by promoting responsible innovation, addressing bias (which the hearing this week acknowledged), and preventing job loss due to automation.
Ultimately, as with many innovations that came before AI, we can’t predict the future, but we can create responsible and ethical legal frameworks that protect the technology’s reputation and reduce sensational headlines. This is something the UK Government is conscious of and is seeking to tackle at the source.
As long as we educate and collaborate to create regulation that prevents dangerous types of AI whilst promoting safer designs that create economic value, we can mitigate the risks. If we believe in a future where AI adds value to our economy and society, it is everyone’s responsibility to step up, engage in the debates, debunk the hype, and shape the future.
Interested in learning more about generative AI? You might enjoy our blog post, Marking ChatGPT’s Homework with TDL’s Data Scientists.