Sculptures by Czech artist David Černý in Prague | © Armin S Kowalski / Flickr

In Prague, Artificial Intelligence is Becoming More Human

Claire Lancaster
Tech & Entrepreneurship Editor
Updated: 27 March 2018
As advanced forms of artificial intelligence move from futuristic vision to present-day reality, questions around the impact of AI have become increasingly urgent.

In a season 4 episode of the HBO show Silicon Valley, a team of developers peg their financial futures on an artificial intelligence (AI) app capable of identifying all types of food through nothing more than a snap from a phone camera.

After months of labour, the team take a test photo of a hotdog.

‘Hotdog’, reads the phone screen – the AI has successfully identified the food! But when they move on to pizza? ‘Not hotdog’. A bowl of chips? ‘Not hotdog’. Using the world’s most cutting-edge technology, they’ve successfully created – the Not Hotdog app.

While the skit is fictional (though after the show aired, developers SeeFood Technology built the app in real life), Not Hotdog is a closer approximation of the real-world capabilities of much of today’s AI than the threat of a Skynet-style Terminator bot army.

That’s because the Not Hotdog app, like the majority of AI today, is capable of only completing tasks it’s been designed to do. That is, if researchers show the AI thousands of images of hotdogs, it will be able to identify whether you’ve snapped a matching image. It can’t even tell you if you’re actually looking at a pizza, unless it’s been taught what the cheesy stuff is.
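
To make that ‘narrow AI’ point concrete, here is a minimal, purely illustrative sketch of how a classifier of this kind is typically built, written in Python with PyTorch and torchvision. The folder layout (data/hotdog, data/not_hotdog) and the training settings are assumptions made for the example, not SeeFood Technology’s actual code; the point is simply that the network only ever learns the two labels it is shown.

```python
# Illustrative "hotdog / not hotdog" classifier sketch.
# Assumes labelled photos in data/hotdog and data/not_hotdog
# (hypothetical paths, not SeeFood Technology's real setup).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for a pretrained ImageNet backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# The model only ever sees two labels: hotdog and not_hotdog.
dataset = datasets.ImageFolder("data", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Reuse a network pretrained on ImageNet and retrain only the final
# layer to answer one question: hotdog or not hotdog.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Whatever photo you show it afterwards, the only answers it can give
# are "hotdog" and "not hotdog" -- it has no concept of pizza.
```

Show such a model a pizza and it can only say ‘not hotdog’; teaching it anything more requires new labelled data and new training, which is exactly the limitation general AI research aims to escape.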

But in Prague, scientists at research and development company GoodAI are working towards creating a ‘more human’ artificial intelligence – known as general AI, or AGI – that’s capable of seeing, feeling, interacting, learning and using the data it gathers to generate behaviour, perform tasks, and respond to motivations given by human mentors.

Rather than being limited to completing a single task, such as driving a car, identifying food or running facial recognition software, GoodAI is working toward creating one ‘brain’ that can do any task put in front of it. The general AI might drive smart cars, or take the form of a personal assistant, colleague, rescue robot, or space explorer.

In their offices overlooking the Vltava river, the 20-person team is ‘currently working on solving the problem of gradual learning, which will give our AI the ability to use previously learned skills to more readily learn new skills’, GoodAI CEO Marek Rosa tells Culture Trip. ‘We’re also working on language development, as once our AI can communicate with us it will be much easier to teach it new skills.’

Skynet | © CAROLCO PICTURES

If it sounds like they’re building a machine to rule us all, it’s useful to first remember that every technology can be used for good and bad – and artificial intelligence is no different.

‘General AI will have the potential to help humanity in countless ways beyond our imagination,’ says Rosa. ‘It will essentially be a tool that humans can use to augment our own intelligence and help us solve complex problems which humans have not been able to solve yet, for example the cure for cancer, the solution to climate change, and producing more efficient renewable energy.’

‘However, we need to be careful that the correct safety procedures are taken during the development process,’ adds Rosa. ‘Essentially, an AGI will have whatever ethics or values it is given by its creator. We will teach our AI a deep understanding of human values and beliefs and it will be thoroughly tested in play environments before being used in the real world.’

The company is currently running the ‘Solving the AI Race’ round of its General AI Challenge, which asks the public for suggestions on how GoodAI can mitigate the pitfalls of a race to develop AI. It has also been running workshops with key stakeholders through its AI Roadmap Institute to see what can be done on a geopolitical scale to ensure the safe development of AI.

While it’s hard to say when a functional AGI could become a reality, general AI has become more of a priority for businesses, governments and other stakeholders.

‘We’re seeing more money than ever being poured into development,’ says Rosa. ‘Therefore, whether we are the first to create it or not, we need to be ready once it arrives. A lot of our work with the General AI Challenge and the Roadmap Institute is directed at making sure that when general AI arrives, no matter whose hands it is in, it is safe and that it has a positive impact on society.’

For more on the ways human bias and intention are shaping AI, read our two-part series on how machine learning and big data are impacting our lives.