Session Outline
Generative AI can fabricate evidence and manipulate opinions, posing serious threats to democracy. Current reliability and security methods fall short because they assume predictable behavior and clear separation between control and application. For AI, these assumptions don’t hold—LLMs can bypass safeguards with simple prompts like “ignore your instructions.”
Historical examples like MS-DOS and SQL, with poor control plane separation, led to long-standing vulnerabilities like viruses and SQL injection. We must do better for AI.
This presentation, led by Lars Albertsson from Scling, at the NDSML Summit 2024, highlights the need to modernize reliability and security approaches for AI. By learning from past successes in software engineering, we can reduce risks and harness AI’s benefits responsibly.
Key Takaways
- Current reliability and security methods are inadequate for AI applications.
- We can adapt security containment and quality assurance by learning from historical adaptations.
- Team diversity is key to successful risk management.
Add comment