Cyber, (Mis)Adventures Issue #2 - MOVEit, Cyberattacks for hire in India, SMS Pumping Fraud Explained, Spotlight on LLM's

June 2023 Edition

Welcome πŸš€

Hello, Cybernauts!

Welcome back to another edition of Cyber, (Mis)Adventures. This is where we dive deeply into cybersecurity, exploring the latest threats, feats, mishaps, and triumphs in this ever-evolving landscape.


️(Mis)Adventures of The Month πŸ•΅οΈβ€β™‚

This section highlights a particularly interesting or significant cyber (mis)adventure: a zero-day vulnerability in MOVEit transfer that was exploited to steal data, putting everyone on edge. SMS traffic Pumping Fraud.

1. Zero-Day Vulnerability in MOVEit Transfer Platform

Zero-Day Vulnerability in MOVEit Transfer Exploited for Data Theft | Mandiant
Analysis of a zero-day vulnerability in MOVEit Transfer, and containment and hardening guidance.

Summary

  • 🚨 Mandiant detected exploitation of a zero-day vulnerability in the MOVEit Transfer secure managed file transfer software, leading to data theft.
  • πŸ“… The vulnerability, CVE-2023-34362, was announced by Progress Software Corporation on May 31, 2023.
  • πŸ’» The earliest evidence of exploitation was found on May 27, 2023, resulting in the deployment of web shells and subsequent data theft.
  • ⏱ In some instances, data theft occurred within minutes of deploying web shells.
  • 🎯 The activity is currently attributed to UNC4857, a newly formed threat cluster with unknown motivations, impacting organizations in Canada, India, and the U.S.
  • 🦠 Following the vulnerability exploitation, threat actors have been deploying a newly discovered LEMURLOOT web shell.
  • πŸ“Š LEMURLOOT is a web shell written in C# that provides functionality tailored for the MOVEit Transfer software, including generating commands to enumerate files and folders, retrieving configuration information, and creating or deleting a user.
  • πŸ•΅οΈβ€β™€οΈ LEMURLOOT is believed to be used to steal data previously uploaded by the users of individual MOVEit Transfer systems.
  • ☁️ LEMURLOOT can also steal Azure Storage Blob information, including credentials, from the MOVEit Transfer application settings.
  • πŸ›‘οΈ Mandiant is aware of multiple cases where large volumes of files have been stolen from vvictims'MOVEit transfer systems and has released a detailed MOVEit Containment and Hardening guide.

Why it matters?

  • 🌐 The breach affects many industries globally, suggesting a significant threat to data security.
  • πŸ—‚ The potential for immediate data theft after deploying web shells underlines the gravity of the security flaw.
  • πŸ” The fact that LEMURLOOT can steal Azure Storage Blob information means potential risks to organizations using Azure for storage.
  • πŸ“ˆ The use of LEMURLOOT for data theft could indicate a new trend in cyber-attack strategies.

Potential Implications

  • πŸ’° Victim organizations could receive ransom emails in the coming days or weeks as the attack is consistent with extortion activities.
  • πŸ“œ The data breach might result in the leakage of sensitive information, damaging the impacted organizations' reputations and possibly leading to financial losses.
  • πŸ’Ό For the cybersecurity industry, this event underscores the importance of continued vigilance, threat detection, and response capabilities.
  • πŸ›  Companies using MOVEit Transfer software must implement containment and hardening measures as Mandiant recommends.

2. A Confession Exposes IIndia'sSecret Hacking Industry

A Confession Exposes India’s Secret Hacking Industry
The country has developed a lucrative specialty: cyberattacks for hire.

Summary

  • πŸ” Geneva-based private investigator Jonas Rey was hired to investigate a possible hacking incident concerning an Iranian-born American entrepreneur, Farhad Azima. Azima believed that his email account was hacked following his involvement in exposing sanctions violations.
  • πŸ’» The investigation pointed towards BellTroX, a New Delhi-based company that runs a hacking-for-hire enterprise. This company had previously been implicated in numerous cyberattacks on various individuals and groups.
  • πŸ•΅οΈ RRey'sinvestigations shed light on a more significant hacking-for-hire industry thriving in India. He secured a confession from a participant in such an operation, confirming that AAzima'semail account had been infiltrated.
  • πŸ“ The confession came from Aditya Jain, a former worker for Indian cybersecurity firm Appin Security, who later worked as a hacker for hire under the name Cyber Defence and Analytics.
  • πŸ‘¨β€βš–οΈ Jain admitted to hacking AAzima'semail account, which became key evidence in AAzima'slegal battle against the Emirate of Ras Al Khaimah, which was also implicated in the hacking.
  • πŸ’Ό Despite initial fear of retaliation, Jain decided to come forward publicly, admitting his involvement in court filings.
  • πŸ“ƒ Stuart Page, a private investigator who initially denied any hacking activity, also admitted that hacking had occurred and apologized for misleading the court.
  • βš–οΈ Azima was granted a retrial in London court, scheduled for spring.
  • πŸ“° Reports from the London Sunday Times and the Bureau of Investigative Journalism suggested that Jain and Rey might have deeper ties to the Indian hacking-for-hire business than previously admitted.

Why it matters

  • 🌐 The uncovering of this large hacking-for-hire industry based in India has significant implications for cybersecurity worldwide. The case exemplifies the globalized nature of cybercrime and its potential impact on personal, corporate, and even state security.
  • πŸ›οΈ The confession and subsequent retrial for Azima represent a landmark event in the pursuit of justice in cybercrime cases.
  • πŸ•΅οΈβ€β™‚οΈ The work of private investigators like Jonas Rey highlights the crucial role of independent investigations in uncovering and exposing these operations.

Potential implications

  • πŸš” This case could increase international scrutiny and pressure India to address its hacking-for-hire industry.
  • ⚠️ It could also lead to an increased focus on cybersecurity measures by individuals, corporations, and governments worldwide to prevent similar incidents.
  • βš–οΈ The legal outcomes of this case may set new precedents for cybercrime prosecution and the pursuit of justice in similar cases.
  • πŸ“ˆ There may be a surge in demand for cybersecurity services and the development of more advanced cybersecurity tools in response to the growing threat of organized cybercrime.

3. SMS Traffic Pumping Fraud

Preventing Fraud
SMS pumping and voice toll fraud attacks cause inflated traffic to your app and higher costs. Learn how fraudsters can take advantage of your application and how to stop them.

Summary

  1. πŸ“± SMS Traffic Pumping Fraud, or Artificially Inflated Traffic, is a fraud exploiting phone number input fields for receiving one-time passcodes (OTPs) or app download links via SMS. This can lead to inflated traffic and exploitation of your app.
  2. 🀝 It occurs in two scenarios: either the Mobile Network Operator (MNO) is involved in the scheme and has a revenue-sharing agreement with the fraudsters, or the MNO is exploited unknowingly by the fraudsters. It is more common with smaller MNOs.
  3. πŸ“ˆ Signs of an SMS Pumping attack include a sudden increase of messages sent to a block of adjacent numbers, usually in remote countries. You might not see a completed verification cycle for OTP use cases for these OTPs.
  4. 🌍 To prevent this type of fraud, disable Geo-Permissions for countries you don't intend to send messages to. This can be managed in your Twilio project in the Console on the Messaging Geographic Permissions page.
  5. ⏲️ Implement rate limits to control the number of messages sent per a specific timeframe to the same mobile number range or prefix. User, IP, or device identifier can achieve rate limiting.
  6. πŸ€– Detect and deter bot traffic with libraries like botd or CAPTCHAs. Make small changes in your user experience to prevent automated scripts and bots, such as confirming email addresses before enrolling in 2FA.
  7. ⏱️ Another prevention method is implementing exponential delays between verification retry requests to the same phone number to avoid rapid sending.
  8. πŸ“ž Use Carrier Lookup to determine the line type of a number and only send SMS to mobile numbers. You can also use this tool to block carriers causing inflated traffic.
  9. πŸ“Š Monitor OTP conversion rates and create alerts. If the verification conversion rate starts to drop, especially in an unexpected country, an alert should be triggered for review.
  10. πŸ›‘οΈ Use Twilio Verify to validate users with SMS, voice, email, push, WhatsApp, and time-based one-time passwords. It can help fight fraud, protect user accounts, and build customer trust. This tool can also help prevent SMS Traffic Pumping Fraud with its Fraud Guard feature.

Why it Matters

  1. Security Risk: SMS Traffic Pumping Fraud represents a significant security risk. Fraudsters exploiting phone number input fields for OTPs or app download links can potentially gain unauthorized access to sensitive information.
  2. Financial Impact: Fraudulent activity can lead to inflated traffic, which increases the organization's costs. The organization could also lose revenue if the Mobile Network Operator (MNO) is involved in the scheme.
  3. Reputation Risk: If this fraud affects customers, it could harm the organization. Customers may lose trust in the organization to protect their information, leading to a loss of business.

Potential Implications

  1. Increased Security Measures: Organizations may need to invest in advanced security measures to detect and prevent SMS Traffic Pumping Fraud. This could include implementing rate limits, using tools like botd or CAPTCHAs to deter bot traffic, and using tools like Twilio Verify and Fraud Guard.
  2. Operational Changes: Organizations may need to change their operations, such as disabling Geo-Permissions for countries they don't intend to send messages to, implementing exponential delays between verification retry requests, and only sending SMS to mobile numbers.
  3. Monitoring and Alerts: Organizations must monitor OTP conversion rates and create alerts for unexpected drops in these rates. This will require resources and may necessitate developing new monitoring systems or modifying existing ones.
  4. Customer Outreach: If customers are affected by the fraud, the organization will need to reach out to these customers, potentially offering compensation or other remedies. This can be significant, particularly for larger organizations with many customers.

LLLM'sdon'tSpotlight πŸ’‘ 🧠

Industry experts answer your cybersecurity questions.

1. OWASP Top 10 List for Large Language Models

OWASP Top 10 for Large Language Model Applications | OWASP Foundation
Aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs)

Summary

  1. πŸ’‰ LLM01:2023 - Prompt Injections: Malicious prompts can bypass filters or manipulate Large Language Models (LLMs), causing them to ignore instructions or perform unintended actions.
  2. πŸ—„οΈ LLM02:2023 - Data Leakage: LLMs may unintentionally reveal sensitive information, proprietary algorithms, or confidential details in their responses.
  3. πŸ“¦ LLM03:2023 - Inadequate Sandboxing: A failure to correctly isolate LLMs, especially when they have access to sensitive systems or external resources, opens up the potential for exploitation and unauthorized access.
  4. πŸ›‘οΈ LLM04:2023 - Code Execution Vulnerabilities: LLMs can be exploited to execute malicious code, commands, or actions on the underlying system through natural language prompts.
  5. πŸ•ΈοΈ LLM05:2023 - SSRF Vulnerabilities: Unintended requests or access to restricted resources such as internal services, APIs, or data stores can be triggered by exploiting LLMs.
  6. πŸ“š LLM06:2023 - Overreliance on LLM-generated Content: Excessive dependence on LLM-generated content without human oversight can lead to harmful consequences.
  7. 🎯 LLM07:2023 - Inadequate AI Alignment: Failing to align the objectives and behaviors with the intended use case can lead to undesired consequences or vulnerabilities.
  8. πŸ”’ LLM08:2023 - Insufficient Access Controls: Inadequate access controls or authentication allows unauthorized users to interact with the LLM, potentially exploiting vulnerabilities.
  9. ❌ LLM09:2023 - Improper Error Handling: Exposing error messages or debugging information can reveal sensitive information, system details, or potential attack vectors.
  10. πŸ’£ LLM10:2023 - Training Data Poisoning: Malicious manipulation of training data or fine-tuning procedures can introduce vulnerabilities or backdoors.

2. The AI Attack Surface Map v1.0

The AI Attack Surface Map v1.0
Introduction Purpose Components Attacks Discussion Summary Introduction This resource is a first thrust at a framework for thinking about how to attack AI syste

Summary

  • πŸ€– This resource provides a framework for conceptualizing attacks on AI systems, especially those using recent technologies like Langchain and ChatGPT.
  • ⏳ It acknowledges that AI system security is still in its early stages, with recently released AInologies such as ChatGPT.
  • πŸ”§ The AI attack surface comprises multiple components, including AI Assistants, Agents, Tools, Models, and Storage, eacAIving unique vulnerabilities.
  • πŸ—£οΈ The primary method of attack for AI systems is through natural language, marking new AIs of vulnerabilities.
  • πŸ•΅οΈβ€β™‚οΈ AI Assistants, who manage individuals digital lives, AI significant privacy and security risks due to the sensitive data they access and manage.
  • πŸ’» Agents, or AI entities with specific purposes, are susceptible to attacks that could make them perform unintended actions.
  • πŸ› οΈ Tools, the capabilities, can be misused through prompt injections to perform unintended tasks.
  • 🎯 Attacking models, or manipulating AI to behave negatively, is a mature practice in the security space.
  • πŸ’½ Storage mechanisms such as VectoAItabases, which cannot fit everything into models, also pose security risks.
  • 🎯 Specific types of attacks include prompt injection, training attacks, altering agent routing, executing arbitrary commands, attacking embedding databases, and others.

Why it matters

  • πŸš€ AI systems are integrating rapidly into society, necessitating understanding and preparedness from a security standpoint.
  • πŸ’‘ It is crucial for CISOs to be aware of the potential attack surfaces within an AI system and ensure they have appropriate defenses in finance.
  • 🌐 AI Assistants who have access to sensitive data have considerable risks if compromised, which could have significant consequences for the privacy and security of an organization.
  • ⚠️ Security risks tied to AI systems, such as Agents and Tools, could impact bus operations and integrity if not appropriately managed.

Potential implications

  • πŸ“ˆ As AI grows in prominence, the security challenges for organizations will likely increase.
  • 🏦 CISOs must allocate resources for understanding, assessing, and defending against potential AI-related security risks.
  • πŸ”’ Organizations could face serious privacy breaches and operational disruptions if AI systems are not secured effectively.
  • 🀝 There may be increased collaboration between academia and organizations to mitigate model-based attacks.
  • πŸ’Ό The rise in AI integration may drive the demand for professionals AI specialized skills in AI security, affecting hiring strategies in organizations

3. Can LLMs B.E. Attack?

Summary

  • πŸ€– The video discusses potential security risks associated with large language models (LLMs) like Google Bard and GPT, hosted on public cloud service providers like AWS.
  • πŸ—£οΈ LLMs are essentially giant databases that interpret and respond to user prompts, making them potential targets for various types of attacks.
  • ⚠️ The first type of attack is by manipulating input prompts, possibly leading to unauthorized behaviors or code execution.
  • πŸ“ˆ The second type of attack is data-based, including data leakage, data poisoning, and trained data leakage, which could expose user information or corrupt the training data.
  • πŸ” The third category is attacks on the LLM application itself, such as vulnerabilities introduced by developers, open-source library vulnerabilities, and identity and access management issues.
  • ☁️ The fourth category is infrastructure attacks on the hosting platform, such as public cloud service providers.

Why it matters?

  • 🌐 Large Language Models are integral to many digital services and systems; thus, their security is crucial to prevent misuse and protect user data.
  • πŸ•΅οΈ The potential for manipulating or exploiting these models could severely affect user privacy and data integrity.
  • πŸ› οΈ Developers and organizations employing LLMs must be aware of these risks to ensure they implement necessary protections.

Potential implications

  • πŸ’» Given the ubiquity of AI systems in various sectors, a successful attack can lead to severe consequences, including exposure of sensitive information, disruption of services, or manipulation of AI responses.
  • πŸ“š If unchecked, these vulnerabilities can erode public trust in AI and machine learning technologies, impeding their action and usefulness.
  • πŸ›‘οΈ Addressing these vulnerabilities will require a multi-faceted approach, combining enhanced security measures, rigorous testing, and continuous monitoring of AI systems.

Closing Notes πŸ’Œ

We hope you found this edition of our newsletter informative and valuable.

If you have any feedback, suggestions, or topics you would like us to cover in our future newsletters, please don't hesitate to contact me now.

Please remember this newsletter with your colleagues, friends, and family members who might benefit from our information.

Thank you for your continued support, and stay safe!


Social Media Links πŸ‘₯

Keep the conversation going. LLet'sconnect!
Twitter: https://twitter.com/IshanGirdhar
LinkedIn: https://www.linkedin.com/in/ishangirdhar/


Stay safe in cyberspace, and see you next month!

Amor Fati
Ishan Girdhar