‘Artificial intelligence is a golden asset for hackers’

Hackers and cybercriminals are making full use of artificial intelligence (AI) to write phishing messages, among other things. Experts fear that chatbots could soon be used to carry out cyber attacks, detect software vulnerabilities, and develop malware. Regulation, they argue, is not the way to tackle online criminals.
That is what experts told the Algemeen Dagblad.

Opportunities and risks of artificial intelligence

Artificial intelligence offers opportunities to advance the economy and society. Think of automating boring and repetitive work, detecting fraud early, making decision-making more efficient, or easing shortages in the labour market. AI can also make it easier to monitor climate change, accelerate medical research, support human rights and humanitarian efforts, and improve enforcement of ceasefires in conflict situations.

At the same time, AI carries serious dangers. This is partly due to the technology's potential for abuse and its unpredictability, and partly because artificial intelligence is developing at a rapid pace. Chatbots are currently popping up like mushrooms: in recent months, we have been introduced to ChatGPT, Google Bard, Claude 2 and Adobe Firefly, among others. And there are many more examples of generative AI.

Maasland: ‘Technology is developing at an unprecedented pace’

Artificial intelligence is already being used extensively for criminal purposes, for example, to draft phishing messages. According to Dave Maasland, CEO of cybersecurity agency ESET Netherlands, the elderly and inexperienced entrepreneurs are at great risk as a result. “We are in a zeitgeist where technology is evolving at an unprecedented rate, making it difficult for people to keep up. That is where the great danger of this cocktail lies: some target groups already have difficulty recognizing phishing, the government does not have a uniform working method, and criminals have more options than ever,” he told AD.

Training is an important instrument to increase awareness of digital dangers

Advanced language models such as ChatGPT can remove spelling and writing errors from existing texts and produce grammatically flawless, professional-looking texts of their own. This makes it even harder for employees and consumers to recognize phishing messages.

Research by Linden-IT shows that a third of entrepreneurs are concerned about the impact of AI on their industry. Research by I&O Research shows that a quarter of entrepreneurs in the Netherlands do nothing to increase the digital resilience of their company. These companies in particular have reason to worry about their cyber resilience.

Industry organizations such as VNO-NCW and MKB Nederland advise entrepreneurs to train their employees in order to increase their awareness of digital dangers such as phishing.

Maasland: ‘Government must act quickly’

An additional danger is that hackers and cybercriminals are currently training AI tools such as WormGPT and FraudGPT to carry out phishing attacks and to develop malware. Cybersecurity company SlashNext recently discovered that DarkBART, a malicious variant of Google's chatbot Bard, is being sold on the dark web. And the development of these rogue applications is still in its infancy.

Maasland believes that the government should take action against the threats posed by such applications. If that doesn't happen soon, there will be more victims. “This can make people afraid to use technology, and the long-term consequences, if a large part of society cannot participate, are incalculable,” says Maasland.

Chatbots can do much more than write phishing messages

Michiel Steltman, director of the Digital Infrastructure Netherlands foundation, calls artificial intelligence ‘a golden asset’ for hackers and cybercriminals. He believes drafting credible phishing messages is just the tip of the iceberg. Hackers can also use AI to bypass a company's security, detect vulnerabilities in commonly used software, or commit identity fraud.

According to Steltman, however, the solution lies elsewhere than in regulation. “The problem with criminals is that they don't comply with the law, so regulation doesn't help there. What remains is to increase the chance of being caught and to improve resilience.”

In July, Pieter-Jaap Aalbersberg, the National Coordinator for Security and Counterterrorism (NCTV), warned that new technologies such as generative AI are popular with hackers. According to him, these new technologies also bring opportunities: we can use them to detect malware at an early stage or to establish the authenticity of text or images, he wrote in the Cyber Security Assessment Netherlands 2023.
