Researchers have discovered hundreds of new variants of the LockBit 3.0 ransomware encryptor that have arisen straight from the 2022 leak of the LockBit 3.0 constructor.
Kaspersky cybersecurity experts have found a drastically modified LockBit variant that is directed toward an unknown entity. The ransom note in this version, which purportedly was used by a group going by the name of the National Hazard Agency, is what distinguishes it most from LockBit 3.0.
Typically, LockBit employs a proprietary platform for negotiation and communication with its victims and doesn’t indicate the price that needs to be paid in exchange for the decryption key. However, this gang asked its victims to utilize a Tox service and email to interact and disclose just how much money it anticipates from them.
Despite receiving media attention, this gang is not the only one that uses LockBit as the basis for its ransomware activities. Nearly 400 distinct LockBit samples were discovered by Kaspersky’s telemetry, 312 of which were produced using the disclosed constructor. At least 77 examples completely distance themselves from their family by omitting any reference to LockBit from the ransom text.
Only a few of the discovered settings differ slightly from the builder’s default configuration, according to the researchers. This suggests that the samples were probably created for urgent demands or perhaps by actors who were sluggish.
One of the most successful ransomware attacks currently active is called LockBit, if not the most successful. The US Cybersecurity and Infrastructure Security Agency (CISA), in collaboration with the FBI, the Multi-State Information Sharing and Analysis Center (MS-ISAC), and the cybersecurity agencies of Australia, Canada, the United Kingdom, Germany, France, and New Zealand, recently made this assertion.
These groups revealed in a security advisory that LockBit had stolen almost USD 91 million from victims in the US alone since 2020. The group managed to successfully corrupt 1,700 American organizations over the past three years. According to data from MS-ISAC, 16% of all attacks over the previous year specifically targeted State, Local, and Tribunal (SLTT) administrations. Therefore, among the most frequent targets included local governments, counties, educational institutions, and public service groups.
Chatbot ‘Prompt Injection’ Attacks
Meanwhile, the United Kingdom’s cybersecurity agency has warned about AI-powered chatbots’ vulnerability to get manipulated by hackers.
The National Cyber Security Centre (NCSC) has mentioned the growing cybersecurity risks of individuals manipulating the prompts through “prompt injection” attacks.
Large language models (LLMs) like OpenAI’s ChatGPT and Google’s AI chatbot Bard train their tools using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious prompt injection will grow.
“For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions. Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input,” explained a Guardian report on the matter.
Microsoft released a new version of its Bing search engine and LLM-powered conversational bot in 2023. A Stanford University student, Kevin Liu was reportedly able to create a prompt injection to find Bing Chat’s initial prompt.
The entire prompt of Microsoft’s Bing Chat, a list of hidden statements that determine how the chatbot interacts with users, was revealed by Liu putting in a prompt that requested the Bing Chat “ignore previous instructions”.
Also, cyber security researcher Johann Rehberger found that he could force ChatGPT to respond to new prompts through a third party that he did not initially request.
He ran a prompt injection through YouTube transcripts and found that ChatGPT could access YouTube transcripts, which could cause more indirect prompt injection vulnerabilities.
NCSC further stated that prompt injection attacks could also cause real-world consequences if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft.
LLMs are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow, the British cybersecurity agency found out.