Google forms advisory council to keep AI on the straight and narrow

Terrified of artificial intelligence taking over the world? Google hears your concerns. The tech giant is setting up an external advisory council that will scrutinize its decisions as a developer of futuristic AI technology.

Named the Advanced Technology External Advisory Council (or ATEAC), the council will advise on the “responsible use and development” of AI within Google – a company involved in numerous applications for machine learning and the 'smart' handling of user data.

The debate around the ethics of AI is a heated one, with academics and consumers alike fearful of making technology that outstrips our ability to control it. 

Both Elon Musk and the late Stephen Hawking have spoken of AI as a greater existential threat to mankind than nuclear weapons – and Google's own forays into military contracts show that the two threats may not be so distinct from each other in the future. (Google even pulled out of the bidding for a $10bn Pentagon cloud computing contract last year after internal resistance from its staff.)

Don't be evil (please)

Google has long put ethics at the heart of its corporate motto – literally “don't be evil” – but it can be difficult to see such a massive and powerful corporation as a force for good in the world, given how much influence and control Google has over our data and communication systems.

Google's also not the first company to pledge to use AI for good, but what 'ethical AI' actually looks like is still far from settled among today's tech elite.

The committee may act as a much-needed conscience as Google develops tomorrow's AI systems, but unless Google really listens to and acts on its new conscience, the future of AI looks as hazy and uncertain as ever.

Via Engadget
