As AI assistants like OpenAI's ChatGPT, Auto-GPT, Google's Bard, and Microsoft's Bing and Office 365 Co-Pilot become more integrated into our digital infrastructure, there is a growing concern that they could become a weak link in cybersecurity. Simon Willison, a developer, warns of the danger of prompt injections and suggests several ways to reduce vulnerability. One approach is to make prompts visible so that users can identify potential injection attacks. Another is to require AI assistants to ask for permission before performing certain actions. However, these approaches are not foolproof, and developers must be aware of the risks.