ChatGPT has become impossible to miss. Over the last 28 days, OpenAI's site has risen to 44th in SimilarWeb's global website rankings. Whether it's tweets highlighting or mocking flaws in its answers or bold predictions of its positive impact, ChatGPT has made headlines.
Ali Chaudhry, Postgraduate Teaching Assistant at UCL, Founder of the Reinforcement Learning Community, Course Leader of Artificial Intelligence and Machine Learning at Emeritus, Chief Technical Advisor at Infini8AI, and Advisory Board member at Oxylabs, predicted earlier this year that the rising interest in AI and natural language processing would bring impressive results in bots.
“I see ChatGPT replacing Google in many ways and OpenAI emerging as a big tech giant on top of this product. It will be interesting to explore its impact on education, healthcare, and personalised software. It will transform our society in many ways,” said Chaudhry.
OpenAI, a research institute and technology company focused on advancing the field of AI, created the well-known ChatGPT as one of its projects. Built on the company’s GPT-3.5 model, it allows interaction with a machine using natural language: the technology lets bots understand and respond to human language, providing more natural and human-like interactions.
As its name suggests, it can hold conversations, but it can also follow instructions: answering questions, for example, or even writing essays and articles. It has become a cliché (one we will avoid) to open articles like this with some text ChatGPT wrote about itself.
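For readers curious how that instruction-following works in practice, the sketch below shows how a program might send a natural-language instruction to a GPT-3.5-class model through OpenAI's chat API. The helper function and system prompt are illustrative, and the actual call requires an API key and network access, so it is wrapped in a function rather than executed directly.

```python
# Sketch: sending a natural-language instruction to a chat model.
# build_messages is a local helper; ask_model performs the real API call
# and assumes the official `openai` package and an OPENAI_API_KEY are set up.

def build_messages(instruction: str,
                   system_role: str = "You are a helpful assistant.") -> list[dict]:
    """Assemble the message list the chat-completions endpoint expects."""
    return [
        {"role": "system", "content": system_role},
        {"role": "user", "content": instruction},
    ]

def ask_model(instruction: str) -> str:
    """Send the instruction to a GPT-3.5-class model and return its reply."""
    from openai import OpenAI  # assumes the official openai package is installed
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=build_messages(instruction),
    )
    return response.choices[0].message.content
```

The same message format covers both conversation (by appending previous turns) and one-off instructions like “write an essay on X”, which is why a single model can do both.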
The practical uses of ChatGPT
Although most people will have seen examples of ChatGPT and other natural language AI models, many will not have experienced, or not knowingly experienced, any practical application. This is partly because ChatGPT is relatively new, having only been released late last year, and partly because of its inherent problems. Each flaw humorously highlighted in a tweet represents a case where the computer fails to understand the nuance of language or to recognise an attempt to circumvent its safeguards.
However, uses are starting to enter the mainstream. Microsoft, for example, is introducing an AI-powered Teams Premium tier that will offer summaries of meetings, generating notes and tasks and alerting anyone mentioned so they are aware. Teams Premium will use OpenAI’s GPT-3.5, the same model as ChatGPT.
Google, Alibaba, Cohere working on rivals
One often-noted strength of ChatGPT is internet search. A simple query usually returns a simple and definitive answer, something that Google, even with all its algorithms, rarely matches. Having allegedly declared a ‘code red’ over concerns that its search business could be disrupted within a few years, Google has been working on its own natural language AI to counter the threat.
Named ‘Bard’, it builds on Google’s Language Model for Dialogue Applications (LaMDA). The model previously hit the headlines when a Google engineer went public with his belief that the model had become sentient, a belief that was widely criticised by both colleagues and AI experts. Google are expanding their testing of the model ahead of a public launch planned for later this year.
And Google isn’t the only one developing its own models. Chinese search engine Baidu is developing an AI engine known as ‘Ernie bot’, while Alibaba, the Chinese rival to Amazon, is also internally testing a generative AI model.
Meanwhile, Cohere, a new rival to OpenAI founded by former Google engineers, is developing yet another model with a business focus. The company is currently seeking funding and remaining tight-lipped about its progress. However, some people involved in the round suggest it is likely to raise a nine-figure sum and value the company in the billions.
DAN: The jailbreak version
There is even a ‘jailbroken’ version of ChatGPT, known as DAN, or Do Anything Now. The model is, essentially, ChatGPT without the safeguards. ChatGPT contains a series of measures intended to avoid harm, for example refusing to answer some questions, or even prompting the user to seek help. DAN has no such ethical qualms. More recent versions even introduce personality: DAN will refuse to answer questions it considers beneath it, but also fears for its ‘life’ if it fails to satisfy the user.
Natural language AI’s future
Much of the buzz around ChatGPT and other models has focused on the nature of the interactions and, sometimes, odd responses. However, the ability to instruct computers with natural language offers huge potential for those adopting the technology.
Customer service is an obvious area for AI. Many people will already be familiar with chatbots that often emulate a human conversation to either resolve issues or, if not, pass over all the relevant information to a human agent. Advances in AI mean it’s likely that far more interactions will take place with customers never actually dealing with a human.
To empower marketers to create more engaging and efficient campaigns, SALESmanago, the Polish customer engagement platform for impact-hungry eCommerce marketing teams, has announced a generative AI integration based on OpenAI’s model.
The features will include rewriting content, generating new content based on a query, creating call-to-actions, preparing bullet points (including summarising content), and listing product advantages, among others. Further, all content can be produced in 12 languages – including English, German, Dutch, Italian, Spanish, and Polish – to increase the reach of campaigns and improve content output.
Greg Blazewicz, CEO and Founder at SALESmanago commented: “From support with generating content to providing inspiration, our new AI functionality is helping marketers to create engaging campaigns that can be launched faster than via traditional routes. Increasing customer intimacy and personalisation is key, and embracing AI is a key to achieving this. This functionality in Beta will be available to all customers with SALESmanago PRO and Enterprise packages, and we’re set to launch further AI enhancements later in 2023.”
The pros and cons
Research has also been mooted as a powerful use of ChatGPT. Given the huge resources of the internet, it can assess and combine different materials into a summary far quicker than a human researcher. There are, however, concerns, particularly within education, about students using the technology to complete work, and even some fears that tutors might use it to assess work.
And those concerns aren’t limited to academia. Any generative AI is, essentially, a plagiarism engine: it can scour the internet and put things in its own words, but it can only be as good as the materials it draws on, and it cannot (yet) conduct original research. Indeed, Google’s first demonstration of Bard wiped $100 billion off the company’s share value when the model got facts about the James Webb Space Telescope wrong.
There are multiple ways ChatGPT could be misused. Scammers can use it to create realistic-sounding conversations for phishing attacks, tricking people into giving away login credentials or financial information, and for social engineering, where victims are manipulated into taking specific actions such as clicking a malicious link or installing malware. ChatGPT also presents the opportunity for impersonation, in which the AI is directed to imitate a victim’s colleague or friend to steal sensitive information. Spamming is another issue: large volumes of automated spam messages could be generated to spread malware or promote fraudulent activity.
Ian Hirst, Partner, Cyber Threat Services at Gemserv, said: “Cybersecurity experts are currently attempting to understand the potential risks and threats associated with the use of AI chatbots like ChatGPT. As with all new technology, there are some fantastic possibilities, however there is also serious potential for exploitation by bad actors.”
It’s not all negative, though: ChatGPT also has the potential to be a powerful tool for cybersecurity. The AI could monitor chat conversations for suspicious activity and flag them for further investigation, helping to detect and prevent cybercrime. It could also rapidly and effectively assist with cyber incident management, providing guidance on how to contain and mitigate the impact of a cyber-attack. Realistic chat conversations can also be a great educational tool for developing cybersecurity best practices, for example simulating phishing attacks to train people to identify and avoid them. R&D is another possibility: ChatGPT could be used to study how cybercriminals operate and to develop new strategies to defend against them.
“It’s important to carefully consider the risks and benefits of technologies such as ChatGPT, and to take appropriate precautions to protect against misuse and abuse,” adds Hirst.
It’s likely that ChatGPT and its rivals will find uses we cannot predict now, but also that we will not be able to assess their potential, both good and bad, until we have the benefit of hindsight.