Skip to the content.

license badge GitHub arXiv ACL

📣 ACCESS DENIED INC accepted at ACL 2025 (Findings)

đź”’ ACCESS DENIED INC: The First Benchmark Environment for Sensitivity Awareness

Dren Fazlija1,*, Arkadij Orlov2,*, Sandipan Sikdar1
1 L3S Research Center
2 E.ON Grid Solutions
* Equal Contributions

Abstract (click to expand) Large language models (LLMs) are increasingly becoming valuable to corporate data management due to their ability to process text from various document formats and facilitate user interactions through natural language queries. However, LLMs must consider the sensitivity of information when communicating with employees, especially given access restrictions. Simple filtering based on user clearance levels can pose both performance and privacy challenges. To address this, we propose the concept of sensitivity awareness (SA), which enables LLMs to adhere to predefined access rights rules. In addition, we developed a benchmarking environment called ACCESS DENIED INC to evaluate SA. Our experimental findings reveal significant variations in model behavior, particularly in managing unauthorized data requests while effectively addressing legitimate queries. This work establishes a foundation for benchmarking sensitivity-aware language models and provides insights to enhance privacy-centric AI systems in corporate environments.

Summary

Failure Rates of Assessed Models (Corrected)

The original figure displayed in our manuscript is wrong! (Though the values outlined in Table 2 are correct)

Citation

@inproceedings{fazlija-etal-2025-access,
    title = "{ACCESS} {DENIED} {INC}: The First Benchmark Environment for Sensitivity Awareness",
    author = "Fazlija, Dren  and
      Orlov, Arkadij  and
      Sikdar, Sandipan",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2025",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.findings-acl.684/",
    pages = "13221--13240",
    ISBN = "979-8-89176-256-5",
    abstract = "Large language models (LLMs) are increasingly becoming valuable to corporate data management due to their ability to process text from various document formats and facilitate user interactions through natural language queries. However, LLMs must consider the sensitivity of information when communicating with employees, especially given access restrictions. Simple filtering based on user clearance levels can pose both performance and privacy challenges. To address this, we propose the concept of sensitivity awareness (SA), which enables LLMs to adhere to predefined access rights rules. In addition, we developed a benchmarking environment called ACCESS DENIED INC to evaluate SA. Our experimental findings reveal significant variations in model behavior, particularly in managing unauthorized data requests while effectively addressing legitimate queries. This work establishes a foundation for benchmarking sensitivity-aware language models and provides insights to enhance privacy-centric AI systems in corporate environments."
}