in Large Language Model (LLM)/Artificial Intelligence (AI) Security Basics (e.g., prompt injection and jailbreaks) Good team... in, but is not limited to, the following: * Design and document a repeatable Red Team LLM testing methodology (threat models, attack surfaces...