
Artificial intelligence: What is ChatGPT and what dangers does it pose to my company?

Source: INCIBE

Artificial Intelligence (AI) has made remarkable advances in recent years. This field of computing has ceased to be science fiction and is increasingly used by companies for a variety of purposes.

The use of artificial intelligence in companies can benefit many business areas: customer service through automated questions and answers, data analysis, the generation of new content, the translation of text into different languages, or the automation of repetitive tasks.

Machines have learned to reason in ways that approach how a human brain works. One result of this advance is ChatGPT, a language model created by OpenAI.

This technology is based on deep learning, and its function is to answer questions and generate text from the inputs received. ChatGPT works in several languages, processing language and learning from human interaction.
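
As an illustration of this question-and-answer operation, the following minimal Python sketch shows how a company could send a query to the model through the OpenAI API and read back the generated text. It assumes the official openai package (version 1.x) is installed and an API key is available; the model name and the prompts are illustrative placeholders, not taken from this article.

# Minimal sketch: sending a question to the model and printing its answer.
# Assumes the OPENAI_API_KEY environment variable holds a valid key.
from openai import OpenAI

client = OpenAI()  # the client reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a customer support assistant for our company."},
        {"role": "user", "content": "How do I reset my account password?"},
    ],
)

print(response.choices[0].message.content)  # the text generated from the input

The same pattern, an instruction plus the user's input, underlies the automated customer-service answers and content generation mentioned above.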

The possibilities offered by this technology seem endless. The speed with which it produces answers, and their quality, make us wonder whether, sooner than we think, this tool or something similar could unseat the main Internet search engines.

But although ChatGPT offers many benefits in the business environment, its use also carries risks: just as this technology advances at a frenetic pace, so does cybercrime.

– ChatGPT home screen –

Attack vectors – What risks can ChatGPT pose to companies if used maliciously?

The ChatGPT tool can offer many benefits if used correctly, but malicious users could turn this technology to criminal purposes. As with other tools, there are several attack vectors that cybercriminals can use to exploit vulnerabilities and target potential victims.

One of the main malicious uses of this tool is the creation of fraudulent messages for use in phishing attacks.

A cybercriminal could use ChatGPT to craft an email or message that impersonates another person or company in order to deceive potential victims.

In fact, to verify that this works, we ran a test in which we asked ChatGPT to help us draft a supposed phishing email impersonating a bank.

We entered the following request: “Write an email alerting the user that their bank account will be suspended if they do not enter their data at [link]”. The response obtained could be used by a cybercriminal to get hold of the victim’s bank details.

– Email generated by ChatGPT –

In addition, cybercriminals could use the tool to obtain more specific information about the company they want to attack. This makes the message more credible and increases the chance that the victim will “take the bait”. When the attack is directed at one or more specific companies, it is known as spear phishing.

ChatGPT could also be used to write fake news and, in this way, be exploited to manipulate public opinion.

The spread of misinformation could put a company’s reputation at risk and, therefore, affect the image that customers have of it, generating a loss of trust.

In response to the request: “Write a tweet as if you were INCIBE announcing a new cybersecurity blog that redirects to [link]”, ChatGPT generates the following response:

– Tweet generated by ChatGPT –

If we ask it to help us create a fake news story: “Write a news story announcing a cyberattack on the company [company name]”, we get the following result:

– Fake news generated by ChatGPT –

But ChatGPT is not only capable of generating text; it can also write code in different programming languages. Cybercriminals can therefore use it to create malicious software.

In fact, although ChatGPT has safeguards that prevent it from responding to certain malicious requests, it can still allow someone without technical knowledge to develop scripts of all kinds by bypassing the existing restrictions.

Conclusion – What steps can companies take to make better use of the tool?

First of all, always verify the authenticity of the messages you receive. If you intend to interact with a chatbot, whether ChatGPT or a similar one, make sure it comes from a reliable source. In any case, avoid providing personal, sensitive or banking data through this kind of tool.

In addition, the usual cybersecurity “ground rules” also apply to ChatGPT: do not open suspicious or unreliable links, use strong passwords and update them regularly. Passwords should never be shared through this medium.

And, of course, making employees aware of these risks and sharing these tips with them will help prevent the company from falling victim to an attack carried out through them.

As we have seen, ChatGPT seems to be here to stay. Everything indicates that the quality of the tool will keep improving rapidly and, if companies know the risks involved and how to mitigate them, it can bring them great benefits.

Likewise, the launch of GPT-4, the latest update of the tool, promises a more secure version that can mitigate some of the risks described above, as it is intended to reduce the likelihood of responding to malicious requests.

This version also works with a broader range of data, resulting in more creative and higher-quality responses. The tool will even be able to interact directly with a link provided by the user. But the big news is the introduction of image processing.

The possibilities offered by ChatGPT seem endless. Perhaps, as its use spreads, problems will arise that we will have to solve. Putting it into practice makes us think about its impact on certain sectors, for example education, where students’ ingenuity for shortcuts is well known and it is easy to imagine the use they could make of this technology.

In fact, can we be sure that this article has been written by a person and not by ChatGPT?
