AI Transparency Statement

This page provides details of ASSEA’s implementation of the Policy for the responsible use of AI in government.

In accordance with the Digital Transformation Agency's (DTA) Policy for the responsible use of AI in government, the following information provides the Asbestos and Silica Safety and Eradication Agency's statement on AI transparency.

This statement has been prepared with regard to the Organisation for Economic Co-operation and Development (OECD) definition of AI:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Usage

ASSEA uses Generative AI to enhance workplace productivity and support corporate functions. These applications focus on streamlining internal processes and automating routine tasks. All Agency staff have access to Microsoft 365 Copilot Chat, and a limited number of staff have access to the licensed version. 

The use of AI within the Agency is aligned to ASSEA’s mandated functions as articulated in the Asbestos and Silica Safety and Eradication Act 2013 and ASSEA’s annual operational plan. 

ASSEA employs AI in the policy and legal, scientific, and corporate and enabling domains (as defined by the Digital Transformation Agency), in the following ways:

  • Workplace Productivity: using tools such as automated document summarisation and virtual assistants to streamline workflows and improve efficiency.
  • Analytics for Insights: generating and debugging code used in data analysis and management, creating synthetic data for testing and validation, and producing dashboards and reports.

The Agency is continuing to explore appropriate ways of using AI to optimise resource allocation and improve operational efficiency, while managing the risks it may pose, including through the prioritisation of human rights, privacy and Indigenous data sovereignty.

Public interaction and impact

ASSEA does not propose to use AI where the public may directly interact with, or be significantly impacted by, it. Furthermore, the Agency is committed to ensuring that any external service provider is aware of its responsibilities regarding ethical AI use, including not using customer data to train AI models or software; this requirement will be incorporated into our procurement and request processes.

Monitoring AI effectiveness and negative impacts

Oversight and monitoring 

Governance policies are being developed to assess the unique potential use cases for AI in the duties of the Agency and to identify associated risks of emerging AI technologies in the workplace. 

Responsible AI usage policy

As part of the governance of AI use in the Agency, a responsible AI usage policy is in development to ensure alignment with the resources provided by the Digital Transformation Agency, and our ICT provider, the Department of Employment and Workplace Relations (DEWR). 

This policy will provide further clarification regarding expectations on staff, ensuring usage is safe, ethical, and responsible. In accordance with the DTA’s principles for using public generative AI tools responsibly, staff are to:

  • Protect privacy and safeguard government information
  • Use judgement and critically assess generative AI outputs
  • Be able to explain, justify, and take ownership of their advice and decisions

Training and assistance

Training on the ethical and responsible use of AI is mandatory for all staff who have a Microsoft 365 Copilot licence, and is strongly encouraged for general Copilot Chat use. Staff with licences are required to undergo regular refresher training as this technology and its use within Government change.

Compliance

Use of AI by the Agency is in accordance with all relevant legislation, associated regulations, and Government frameworks. 

Policy for the responsible use of AI in government

Use of AI within the Agency is conducted with respect to all the mandatory requirements of the Policy for the responsible use of AI in government.

Accountable official

The Chief Data Officer was designated as the accountable official on 1 September 2024.

As the accountable official, the Chief Data Officer is responsible for ensuring that the Agency's use of AI complies with internal and external policies and with relevant regulations and legislation. At a minimum, the Agency will review annually the need to audit and/or review how AI has been used, to ensure alignment with intentions and compliance requirements.

Review and Updates

The AI transparency statement was first published on our website on 28 February 2025 and was updated on 26 February 2026, with minor amendments to reflect the current use of AI in the Agency.

This statement is currently under broader internal review in alignment with the Policy for the responsible use of AI in government (v2.0) and will be updated upon completion.

This statement will continue to be reviewed annually and updated if there are changes to the use of AI within the Agency.

AI contact

For questions about this statement, or for further information on the Agency's use of AI, please contact enquiries@asbestossafety.gov.au.