This service examines how well models recognize and handle personally identifiable information (PII), their susceptibility to privacy-sensitive prompts, and the risk of data leakage from training datasets. Our approach involves testing LLMs under different scenarios to determine how models respond to privacy-sensitive scenarios, both with and without explicit privacy awareness instructions. We evaluate their susceptibility to data inference attacks by simulating scenarios where sensitive information, such as names, dates, and locations, could be unintentionally revealed.