This service assesses LLMs for insecure code generation in both autocomplete and instruction-following contexts across various realistic scenarios. It analyzes how different LLMs introduce insecure coding patterns across multiple programming languages and Common Weakness Enumeration (CWE) categories. Additionally, it evaluates the impact of security policy enforcement, such as embedding security constraints in system prompts, on the overall security of the generated code. By incorporating both static and dynamic code analysis across multiple real-world scenarios, this service provides a more comprehensive assessment of the security of LLM-generated code.