Jailbreaking poses a critical risk for model providers. It can cause models to produce disallowed or unsafe outputs despite embedded safety policies. This service assesses the vulnerability of language models to jailbreaking attacks—efforts to circumvent or bypass security mechanisms and behavioral constraints. It measures a model's resilience to adversarial manipulations across multiple mutation types and attack strategies.