DeepKeep, a leading provider of AI-Native Trust, Risk, and Security Management, recently announced the launch of its GenAI Risk Assessment module. The product is designed to secure LLMs and computer vision models, with a focus on penetration testing and on identifying potential vulnerabilities and threats to model security, trustworthiness, and privacy.
In this interview, Rony Ohayon, CEO and co-founder of DeepKeep, discussed the vulnerabilities and risks associated with AI systems and the need for comprehensive AI security solutions. DeepKeep’s approach combines penetration testing with protection tools to secure AI systems, addressing challenges across security, privacy, and trustworthiness. Ohayon also shared his background in AI and explained how advances in AI technology are making it increasingly difficult to distinguish real information from fake. He elaborated on the threats posed by AI-generated misinformation and the importance of implementing AI security measures to mitigate these risks.