KANISA GEORGE
In the 2004 science fiction action film I, Robot, detective Del Spooner, played by Will Smith, battles highly intelligent, sophisticated robots in the year 2035. In its dystopian world, intelligent robots fill public service positions and operate under rules designed to keep humans safe. For some time, movies have featured human existence in concert with robots and, in some cases, robots' dominance over civilisation. But until the late 1950s, artificial intelligence, or AI, was an imaginative concept whose roots trace back to the earliest days of philosophical thinking. In fact, mankind has long entertained the idea of inanimate objects coming to life as intelligent beings, and by the turn of the 21st century AI had gained international attention.
In his paper titled What is Artificial Intelligence?, American computer scientist John McCarthy defines AI as the science and engineering of making intelligent machines, especially intelligent computer programmes. It is a school of thought that focuses on a computer system's ability to perform tasks that typically require human intelligence, such as visual perception and speech recognition. Philosopher and AI critic Hubert Dreyfus criticised the field for its inability to fully capture unconscious skills, while other opponents raise concerns about the absence of laws and regulations. Although research on AI has moved at lightning speed, regulation of AI is still in its infancy, and guidelines and ethical codes aren't unanimously applied.
In this mysterious world of robots and androids, a multitude of legal issues can arise. Data protection and privacy, transparency, surveillance, autonomous vehicles on public roads, and lethal autonomous weapons systems are concerns that directly impact human safety.
In response, many countries have developed or are developing national AI or digital strategies, especially in the areas of data protection and lethal autonomous weapons. A comparative study conducted in 2019 revealed that Canada was the first country to launch a national AI strategy, with several other countries establishing dedicated commissions to examine regulatory issues. Interestingly, no jurisdiction apart from the EU has yet published a specific ethical or legal framework for AI.
In April 2021, the European Commission released its highly anticipated proposal for an Artificial Intelligence (AI) Act. One key element of the proposed regulation is the need for an ecosystem of effective AI assurance, which gives citizens and businesses confidence that the use of AI technologies conforms to a set of agreed standards and is trustworthy in practice. The Act develops a risk framework that categorises AI systems according to the potential dangers they pose, with the aim of protecting data and the people affected. For example, AI that poses limited risk, such as chatbots, will be subject to transparency obligations (for example, technical documentation on function, development, and performance) and may also choose to adhere to voluntary codes of conduct.
Self-driving cars no longer represent the