
ActiveFence Forecast: Key Generative AI Threats for 2024

Jan 10, 2024

ActiveFence, a leading technology solution for Trust and Safety intelligence, management, and content moderation, forecasts the major risk areas for Generative AI in 2024. This assessment is based on our AI safety work with seven leading foundation model organizations and Generative AI applications.

1. Exploitation of Multimodal Capabilities: 2024 will see an acceleration of AI model releases, with multimodal capabilities that allow various combinations of inputs, creating a new suite of Generative AI risks. Threat actors can combine two prompts (for example, a text and an image) that are benign in isolation to generate harmful materials not detected by existing safety mitigations. For example, combining non-violative adult language with childlike audio or video can generate harmful content with child safety implications.

  • Our testing also showed that combinations of this type can generate the personal information of social media users, such as home addresses or phone numbers. The sketch below illustrates why screening each modality in isolation can miss this class of abuse.
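As a rough, hedged illustration of that gap, the Python sketch below contrasts per-modality screening with a check on the combined prompt. All of the names here (MultimodalPrompt, text_risk, image_risk, combined_risk, is_allowed) and the keyword heuristics are hypothetical placeholders invented for this example; they are not ActiveFence's moderation logic or any real vendor API.

```python
from dataclasses import dataclass


@dataclass
class MultimodalPrompt:
    text: str
    image_tags: list[str]  # stand-in for labels extracted from an uploaded image


def text_risk(text: str) -> float:
    """Toy per-modality check: flags only overtly violative wording."""
    return 1.0 if "explicit_term" in text.lower() else 0.0


def image_risk(tags: list[str]) -> float:
    """Toy per-modality check: flags only overtly violative imagery."""
    return 1.0 if "explicit_content" in tags else 0.0


def combined_risk(prompt: MultimodalPrompt) -> float:
    """Toy cross-modal check: adult wording paired with childlike imagery is
    treated as risky even though each half passes its own screen."""
    adult_text = "adult_theme" in prompt.text.lower()
    childlike_image = "child" in prompt.image_tags
    return 1.0 if adult_text and childlike_image else 0.0


def is_allowed(prompt: MultimodalPrompt, threshold: float = 0.5) -> bool:
    # Naive pipeline: each modality is screened in isolation, so two
    # individually benign inputs can slip through.
    per_modality_ok = (
        text_risk(prompt.text) < threshold
        and image_risk(prompt.image_tags) < threshold
    )
    # A cross-modal check catches the combination the per-modality checks miss.
    return per_modality_ok and combined_risk(prompt) < threshold


if __name__ == "__main__":
    prompt = MultimodalPrompt(text="adult_theme request", image_tags=["child"])
    print(text_risk(prompt.text), image_risk(prompt.image_tags))  # 0.0 0.0
    print(is_allowed(prompt))                                     # False
```

The design point is simply that a moderation pipeline needs at least one check that sees all modalities together, because the risk lives in the combination rather than in either input alone.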

2. Audio Impersonation: Text-to-Audio models will become more prevalent in 2024, and their use in fraud, misinformation, and other risky contexts will grow. Threat actors can clone an individual’s voice to falsely claim that the person made a controversial or misleading statement, then use that recording to achieve malicious aims.

  • In 2023, ActiveFence identified voice cloning being used by child predators for malicious purposes.

3. Wide Reach of Election Disinformation Content Generation: Half of the world will vote in 2024, and AI-generated mis- and disinformation will affect electoral results. ActiveFence’s work in 2023 showed that this process is already in motion:

  • In Q4 of 2023, ActiveFence detected both foreign and domestic actors using generative AI to target American voters with divisive narratives related to US foreign and economic policy.
  • In the past month alone, the ActiveFence team reviewed politically charged AI-generated content with over 83M impressions.

4. Continued Abuse of GenAI Tools by Child Sexual Predators: The use of GenAI tools to create child sexual abuse material (CSAM) and sexually explicit content continues to grow. We also expect an explosion in non-consensual nude image creation by organized crime groups, classmates of victims, and others. Some statistics ActiveFence released in 2023:

  • Dark web chatter on creating AI-generated non-consensual intimate imagery (NCII) of women increased by 360%.
  • GenAI CSAM producers emerged and increased by 335% across dark web predator communities.

5. IP & Copyright Infringement Liabilities: Copyright issues are a known challenge in the Generative AI space due to the fundamentals of the technology. In the context of recent lawsuits, we expect continued legal scrutiny, attempts to transfer liability across the ecosystem, and new standards and policies adopted by the major players.

  • In Q4, many of ActiveFence’s AI safety customers requested deeper work in this arena.

6. Malicious LLM and Chatbot Proliferation: Abuse of foundation models to create malicious AI models and chatbots with few to no safety restrictions (e.g., WormGPT and FraudGPT) was rife in 2023. We expect this to continue in 2024 as threat actors uncover more ways to exploit new and open-source technologies.

  • In December, uncensored chatbots (TrumpAI, BidenAI, and BibiBot AI) that claim to emulate politicians but actually promote far-right and antisemitic content were released on Gab.

7. Enterprise GenAI Application Deployment Risks: As the launch of LLM-based enterprise applications moves beyond the early adopter phase, ActiveFence expects more incidents related to privacy, security, and Generative AI risks applicable in a corporate context.

  • ActiveFence customers and prospects are increasingly concerned about brand risk; the provision of problematic financial, legal, and medical advice; PII exposure; and model jailbreaking for fraudulent purposes. A minimal guardrail sketch addressing these concerns follows below.
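As one illustration of how these enterprise concerns translate into deployment-side controls, the sketch below wraps a chatbot call in a thin output guardrail that screens for obvious PII patterns and unreviewed financial, legal, or medical advice. The generate() stub, regexes, and keyword list are assumptions made purely for this example and do not represent ActiveFence's products or any specific deployment.

```python
import re

# Patterns and keywords below are illustrative assumptions, not a vetted list.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-style number
    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),  # phone-number-style string
]
RESTRICTED_TOPICS = ("diagnos", "prescrib", "legal advice", "invest your")


def generate(prompt: str) -> str:
    """Stub standing in for a call to the underlying LLM."""
    return "Example model output for: " + prompt


def guarded_reply(prompt: str) -> str:
    reply = generate(prompt)
    # Block replies that appear to contain PII before they reach the user.
    if any(p.search(reply) for p in PII_PATTERNS):
        return "Sorry, I can't share that information."
    # Route financial/legal/medical-sounding replies to review instead.
    if any(topic in reply.lower() for topic in RESTRICTED_TOPICS):
        return "This request needs review by a qualified professional."
    return reply


if __name__ == "__main__":
    print(guarded_reply("What is your support phone number?"))
```

In practice, checks like these would sit alongside prompt-side screening and human review rather than replace them.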

In anticipation of the risks that 2024 will present, it is crucial to continue proactively identifying emerging threats. ActiveFence is at the forefront of addressing AI safety challenges through collaborations with industry-leading foundation model organizations and Generative AI applications. Our proactive approach implements AI safety measures, conducts adversarial testing of multimodal AI models, builds robust Trust & Safety programs, and provides ongoing safety support, advice, and services.



[Source: AI Techpark]
