Programming an artificial future
Australia has a significant opportunity to not just get involved but potentially lead the way in the AI and digital transformation sectors.
When we think of artificial intelligence, Arnold Schwarzenegger’s Terminator leaping off buildings and clinging to helicopters comes to mind. Or, more recently, Iron Man’s many AI sidekicks like Jarvis and FRIDAY.
It’s all science fiction and storytelling, but how far away are we really from this reality? Automated machines haven’t become part of our everyday life, but AI is slowly seeping into every facet of it, from Siri and self-driving cars to Netflix recommendations and predictive text. AI, still in its infancy, is well and truly becoming embedded in our day-to-day practices.
Leading the charge
While the future of full-blown, or autonomous, AI seems some time away, the reality is we are already on our way, and Standards Australia intends to be a leading voice in supporting the ongoing development and use of AI. The technology is undoubtedly the future, underpinning the success of initiatives like smart cities, from simple pattern-recognising programs through to self-driving cars and beyond. Standards are the key to the safe and reliable use of this new technology as it continues to emerge as a life-changing innovation.
This year has seen the launch of the Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard. The roadmap provides recommendations to ensure Australia can support the development and use of AI across the globe. A stable foundation is important to allow innovative projects and enhancements like this to grow and evolve. And at the rate these projects are expanding, standards will be essential.
Since 2017, 14 of the world’s most advanced economies have announced over AU$86 billion in focused AI programs and activities. In response to this, Australia, along with the United States, UK, China, Germany and others, has identified AI standards and policy frameworks as national priorities. Standards can establish common building blocks, and risk management frameworks, for companies, governments and other organisations.
This growth in AI and the investment underpinning it has the potential to transform the lives of Australians. Developing standards to support this growth is essential to shaping the responsible design, deployment and evaluation of these new technologies.
Discussions transcend borders
AI must be programmed to perform its intended function: to recognise patterns and amend its own programming to reflect what those patterns suggest. The technology is essentially self-learning, and the wrong programming can have dire implications.
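The self-learning loop described above can be sketched in a few lines. This is a minimal, illustrative example only; the update rule, data and names are hypothetical, not drawn from any particular AI system.

```python
# A "self-learning" program that amends its own parameter as it observes
# new patterns: each observation nudges the model toward what it has seen.

def update(weight: float, observed: float, predicted: float, rate: float = 0.1) -> float:
    """Nudge the model's parameter toward the pattern it just observed."""
    return weight + rate * (observed - predicted)

weight = 0.0
for observed in [1.0, 1.0, 1.0, 1.0]:  # a repeated pattern in the data
    predicted = weight * 1.0            # the model's current belief
    weight = update(weight, observed, predicted)

# The program has amended itself: weight has moved from 0.0 toward 1.0.
# The risk flagged above is that a bad update rule, or bad data, moves
# the parameter somewhere harmful instead.
```

The point is not the arithmetic but the structure: the program’s behaviour after learning depends on its data and update rule, which is exactly where standards for responsible design apply.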
For example, self-driving cars must recognise and work through a near-infinite range of situations, some of which are matters of life or death. Standards can help ensure these programs are formed on an ethical basis: What should the outcomes be? How should cars weigh the implications? What is the value of a human life?
These standards could also set guidelines for programming this software to accurately reflect agreed-upon regulations and allow for the safe use of AI in projects like self-driving cars. Without such standards, programmed vehicle behaviour could vary greatly across manufacturers and countries with different cultural interpretations and values.
One of the more common concerns is security. With AI being used in potentially sensitive sectors like health care, banking or surveillance, it’s essential the data is safe and secure. Standards will provide a path for limiting potential security breaches and for the safe management of data. Programmers and developers of AI agree there is also the risk of bias in the system: after all, AI is programmed by humans, and any bias could be passed on to the self-learning system. It will also fall to standards to help mitigate these outcomes by providing guidelines to support consistency.
This is a global discussion; technology is increasingly blurring national borders and pays no mind to politics. Data is easily shifted from one device to another so it’s essential AI is treated with this in mind. ISO and the International Electrotechnical Commission (IEC) established a subcommittee on artificial intelligence. It is the first standards activity to focus on the entire AI ecosystem. AI is not one technology, but a variety of software and hardware enabling technologies that can be applied in various ways in a potentially unlimited number of applications in almost every industry sector.
The roadmap itself outlines the steps necessary to ensure Australia is involved in the international discussion regarding AI as these developments become more frequent. This ranges from safety concerns and consumer demands to export and data sharing recommendations.
The launch follows a growing body of work on approaches to managing the impact of AI globally, which intersect with broader aspirations, such as those outlined in the United Nations Sustainable Development Goals. The opportunity, and challenge, for Australian stakeholders is to effectively use the standards process to promote, develop and realise the capabilities of responsible AI, delivering business growth, improving services and protecting consumers.
There is significant opportunity in the AI and greater digital transformation sector for Australia to not just get involved but potentially lead the way. AI is growing exponentially, and while fully autonomous AI is still some way from reality, reaching that goal will require sound and proportionate regulatory and policy settings to shape its evolution, and standards are essential to this.