AI Transparency Statement

In accordance with the Digital Transformation Agency’s (DTA) Policy for the responsible use of AI in government, the following information provides the Asbestos and Silica Safety and Eradication Agency's (ASSEA) statement on AI transparency.

This statement has been prepared with regard to the Organisation for Economic Co-operation and Development (OECD) definition of AI:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Usage

The Agency is currently exploring how AI could be used appropriately for the following use cases, and how to manage the risks these uses may pose, including the prioritisation of human rights, privacy and Indigenous data sovereignty.

Primarily, AI usage will align with ASSEA’s mandated functions as articulated in the Asbestos and Silica Safety and Eradication Act 2013 and ASSEA’s annual operational plan.

Analytics for insights

We see potential benefits in using AI to improve the Agency's research and scientific outputs through:

  • generating and debugging code used in data analysis, data management, and the creation of synthetic data for testing and validation
  • dashboard and report generation.

Workplace productivity

We see potential benefits in using AI to improve workplace productivity for the Agency. Those benefits include, but are not limited to:

  • managing and responding to general queries
  • document search and retrieval
  • insight generation
  • idea generation
  • summarisation of emails, documents, and other correspondence
  • task management
  • collation enhancement
  • formatting assistance.

Public interaction and impact

ASSEA does not propose to use AI where the public may directly interact with it or be significantly impacted by it. Furthermore, the Agency is committed to ensuring that any external service provider is aware of its responsibilities regarding ethical AI use, including not using customer data to train AI models or software, and this requirement will be incorporated into our request processes.

Monitoring AI effectiveness and negative impacts

Executive monitoring

The Executive and leadership teams have identified the risks associated with emerging AI technologies and their use in the workplace.

Governance policies are being proactively developed to assess potential use cases for AI in the Agency's duties.

Responsible AI usage policy

As part of the governance of AI use in the Agency, a responsible AI usage policy will be developed to ensure alignment with the resources provided by the Digital Transformation Agency. 

Training and assistance

Training on the ethical and responsible use of AI will be mandatory for all staff before they use any AI tools within the Agency. Staff will be required to undertake refresher training on a regular basis as this technology, and its use within Government, advances.

Compliance

Use of AI by the Agency will be in accordance with all relevant legislation, associated regulations, and Government frameworks. 

Policy for the responsible use of AI in government

Any future use of AI by the Agency will be conducted in accordance with all mandatory requirements of the Policy for the responsible use of AI in government.

Accountable official

The Chief Data Officer was designated as the accountable official on 01 September 2024.

As the accountable official, the Chief Data Officer is responsible for ensuring that AI use within the Agency complies with internal and external policies, and with relevant regulations and legislation. At a minimum, the Agency will annually review the need to audit and/or review how AI has been used, to ensure alignment with its intentions and compliance requirements.

Review and updates

The AI transparency statement was first published to our website on 28 February 2025.

This statement will be reviewed annually and updated if there are changes to the use of AI within the Agency.

AI contact

For questions about this statement or for further information on the Agency's use of AI, please contact enquiries@asbestossafety.gov.au.