
TechTalks with TFN: What our panellists think about the future of AI

Picture credits: Tech Funding News

TFN’s second TechTalk, sponsored by Evolwe AI and hosted by Home Grown, discussed the Future of AI for Good. TFN’s Editor-in-Chief, Akansha Dimri, was joined by Aliya Grig, founder of Evolwe AI; Paul Miller OBE, founder, CEO, and managing partner of BGV; Adam Liska, co-founder of Glyphic AI; and Ariana Alexander-Sefre, founder and co-CEO of SPOKE.

The panel discussed the potential of AI in the coming years and decades, but also some of the challenges it presents and how society will need to address those challenges if it wants to maximise the benefits and minimise the downsides of AI.

AI as a force for change

AI is becoming increasingly prominent in wider discourse, typically because of perceived risks. Although the panel were generally AI optimists, they recognised the need for implementation to be thoughtful.

“AI is an extension to us as humans, and I think it can hugely better how we exist in the world, how we create what we create,” said Ariana Alexander-Sefre. However, she added that society had to consider how it worked alongside AI. “Our system, in general, is deeply, deeply flawed. And until we start looking at the system, and the change that needs to exist within the system, then AI is going to be a catalyst to the things that are wrong, as well as the things that are right.”

The need for human application of AI

Several of the panellists discussed the problem-solving potential of AI, suggesting that AI would work best as a human tool rather than an autonomous entity.

“We’re still very much in the position where humans are in the loop,” said Paul Miller. “I think any regulation will have more to do with keeping humans in the loop.” Part of that, he said, is ensuring that humans choose where AI is directed. “I’m not sure that letting AI decide what problems it solves is a particularly good idea. But lived experience, the problems that people face — if we’re developing applications of AI that solve problems where they are, then I’m very positive about the outlook.”

Bringing AI into the mainstream

A common theme among the panellists was that AI may be new to us, but humanity has always seen progress, and eventually new technologies become everyday tools.

Adam Liska pointed out that how AI was developed was a key part of social acceptance. Speaking about the leadership turmoil at OpenAI (the TechTalk took place during the brief period when Sam Altman had been sacked by OpenAI and appeared set to join Microsoft), he said, “I’m a little afraid about consolidation in this space towards big tech. So, I’m hoping OpenAI will continue to be a strong entity.”

One of the benefits of groups like OpenAI doing foundational research, he continued, was the openness it created. “One thing that I really liked about ChatGPT was the controlled deployment of the model, where it was a safe application that people can start learning about,” he said. “That’s something that I would like to see over the next 10 years as the models become more capable, so people get used to using them.”


The need for AI governance

Finally, the panellists shared a belief that governance around AI is critically important. Aliya Grig perhaps went furthest, suggesting that AIs themselves would need some form of governance.

Like the other panellists, she was clear that it shouldn’t be left to the tech sector to decide the direction for AI. “I believe that it’s our mission as a society to act altogether to provide a constitution for training in AI,” she said. However, she also highlighted the need to ensure that AIs themselves know the rules. She likened it to teaching children. “When you educate your kids you provide different concepts, different visions, different ideas,” she said. “And the same way, we should train AI to show as many concepts as possible to train better, and to show what is good and what is not.”

And, while she does not share the doom-laden science fiction view of AI, she did borrow one idea from the genre when talking about governance. “It’s not for giant companies like Google or Microsoft to do this,” she said, “but I do believe it’s important for us as AI researchers and companies to create a set of rules, a bit like Isaac Asimov’s Three Laws of Robotics!”

This article is part of a partnership with Evolwe – a deep tech company creating empathetic artificial intelligence technology and robots for human-like interactions and personalised experiences. Evolwe created the first empathetic AI companion, SensEI. Further, Evolwe AI has combined theory of mind, meta-cognition, psychometrics, and NLP to create a state-of-the-art AI architecture, aiming to come closest to humans in empathy, reasoning, and cognitive skills. The company wants AI to be empathetic, conscious, and beneficial for society as a whole, and is creating a new generation of AI robots for companies and end users. Evolwe has partnered with NVIDIA, Stanford University, and AWS.

For partnering opportunities, contact [email protected] or [email protected].
