
AI Transparency Statement

IMPORTANT NOTICE: This AI Transparency Statement describes how Clu's AI systems work, what data they use, how their outputs should be interpreted, and what governance and oversight mechanisms are in place. It is published in accordance with Section 16 of Clu's Platform Terms of Service, Clu's obligations under the EU AI Act, and the ISO 42001 standard (for which certification is currently being pursued). This Statement does not form part of the Terms of Service and does not create independent contractual obligations unless expressly incorporated into an Order Form.

Last updated: April 2026


1.  What Clu's AI Systems Do


Included AI Limited develops software for analysing and modelling workforce architecture. Clu's AI systems process structural organisational data, including job specifications, reward and banding data, headcount, reporting lines, job titles, and department architecture, and produce analytical outputs describing how work is distributed across a client organisation. These outputs are used by client leadership teams to inform decisions about workforce structuring, operating model design, AI deployment planning, and capability investment.
Clu's AI systems do not make decisions. They do not recommend a course of action. They produce structured analytical findings that require human interpretation, expert review, and independent judgment before any action is taken. This is a design principle, not a disclaimer.


2.  System Architecture


2.1  Non-Generative by Design
Clu Models are non-generative. They do not use large language models, probabilistic text generation, or any technique that produces outputs not directly derivable from the input data. Every output is deterministic: given the same input data, the model will produce the same output. This is a fundamental architectural choice made to ensure outputs are auditable, reproducible, and defensible in regulated and union environments.


2.2  What the Models Do
Clu Models apply a set of proprietary analytical techniques to structural HR data. These techniques include:

  • Role benchmark matching: Each role in the client's organisation is compared against Clu's proprietary benchmark dataset, which is built from structural role data gathered and normalised across Clu's client base. The comparison identifies where a role's skill profile, scope, or banding diverges from benchmark norms for that role type.

  • Structural pattern detection: The model identifies patterns across the organisational structure: role duplication, spans of control that are unusually narrow or broad, reporting line anomalies, and structural drift between formal job descriptions and the work profile implied by the data.

  • Capability gap modelling: The model identifies where the skill requirements of a role are misaligned with its banding, grade, or structural position — indicating either over- or under-utilisation of capability, or roles that have evolved away from their defined scope.

  • AI augmentation scoring: The model assesses which tasks within a role are candidates for AI augmentation, based on the structural and skill characteristics of those tasks measured against Clu's augmentation benchmark. This is a structural assessment, not a recommendation to automate any specific role or individual.

  • Structural scenario modelling: The model allows clients to test proposed structural changes against the baseline organisational model before committing to them, illustrating the structural and cost implications of different configurations.

2.3  Confidence Classification
Every finding produced by Clu Models is classified into one of three confidence levels, determined by automated monitoring and quality assurance logic:

  • Confident: The model has sufficient data quality and benchmark alignment to produce this finding with high reliability. The finding can be included in board-level or regulatory reporting without qualification, subject to human review.

  • Needs Review: The model has produced a finding, but one or more data quality indicators, boundary conditions, or benchmark distance metrics fall outside the confident threshold. The finding should be reviewed by a qualified Clu analyst before it is acted upon or presented externally.

  • Needs Human Correction: The model has produced a finding, but automated checks have identified a material data quality issue, structural anomaly, or benchmark misalignment that requires human correction before the output is valid. These findings are not included in client-facing Diagnostic Outputs without human review and sign-off.

These three classifications are applied automatically through Clu's monitoring system. Findings classified as Needs Human Correction are escalated to Clu's internal QA process before delivery. Clients receive all three classifications in their Diagnostic Outputs, clearly labelled, so they can weight findings appropriately.
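As an illustrative sketch only (the indicator names and thresholds below are hypothetical, not Clu's actual internal values), the three-way classification behaves like a deterministic rule over automated quality metrics:

```python
# Illustrative sketch only: the indicator names and thresholds below are
# hypothetical, not Clu's actual internal values.
from dataclasses import dataclass


@dataclass
class QualityIndicators:
    data_completeness: float   # share of required structural fields present (0.0 to 1.0)
    benchmark_distance: float  # how far the role sits from its benchmark norm
    material_anomaly: bool     # automated check found a material data quality issue


def classify(q: QualityIndicators) -> str:
    """Map automated quality indicators to one of the three confidence levels."""
    if q.material_anomaly or q.data_completeness < 0.5:
        # Escalated to Clu's internal QA process before delivery.
        return "Needs Human Correction"
    if q.benchmark_distance > 2.0 or q.data_completeness < 0.9:
        # Reviewed by a qualified Clu analyst before being acted upon.
        return "Needs Review"
    return "Confident"
```

Because the rule is deterministic, the same inputs always yield the same classification, consistent with the reproducibility principle in Section 2.1.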


2.4  What the Models Do Not Do

  • Clu Models do not process personal data about named individuals in core analytical processing.

  • Clu Models do not make predictions about individual employees' performance, behaviour, or potential.

  • Clu Models do not score individuals against protected characteristics.

  • Clu Models do not generate probabilistic or speculative outputs; every finding is traceable to specific input data and benchmark comparisons.

  • Clu Models do not connect to external data sources or the internet during processing.

 

3.  Data Inputs and Benchmark Data
 

3.1  What Data the Models Require
Clu's core analytical processing requires only structural, non-personally-identifiable organisational data. The required inputs are:

  • Job specifications

  • Reward and banding data (grade or band level, not individual salaries)

  • Headcount by role and function

  • Reporting line structures

  • Job titles

  • Department and team architecture

Clu does not require the names, dates of birth, contact details, or other personal identifiers of individual employees for core platform functionality. Where clients submit data that includes such identifiers, they are not used in the analytical processing and are handled in accordance with Clu's Data Processing Agreement and Privacy Policy.

 

3.2  Benchmark Data
Clu's benchmark dataset is proprietary and built internally from structural role data gathered from Clu's client base over time. All data contributing to the benchmark dataset is anonymised and aggregated before incorporation: no individual client's data is identifiable within the benchmark, and no client's data is used to produce outputs for another client in an identifiable form.
The benchmark dataset covers role families, banding structures, skill profiles, and augmentation characteristics across the sectors in which Clu operates. It is updated as new client data is processed through the continual learning loop described in Section 4. Benchmark data is version-controlled: each Diagnostic Output records the benchmark version against which it was produced, enabling outputs to be audited and reproduced.

 

3.3  Client Data Handling
Client data is processed within the United Kingdom. It is not transferred to third countries for analytical processing. Client data is not shared with other clients and is not used to produce outputs for any party other than the client that submitted it, except in the anonymised and aggregated form described in Section 4.

 

4.  Continual Learning and Model Improvement
 

Clu operates a continual learning process by which the accuracy of Clu Models improves over time. This process works as follows:

  • Step 1: Data anonymisation. Client Data is anonymised and aggregated before it is used in any model refinement. The anonymisation standard applied meets the ICO's threshold for true anonymisation — re-identification of any individual or client is not reasonably practicable. A Legitimate Interests Assessment covering this processing has been conducted and is available on request.

  • Step 2: Signal extraction. Anonymised, aggregated signals are extracted from client data and platform usage — for example, patterns of role classification, benchmark distance distributions, and structural anomaly frequencies. No personally identifiable data enters this step.

  • Step 3: Model refinement. Extracted signals are used to refine Clu Models and update benchmark parameters. Refinements are version-controlled and tested against a validation dataset before deployment. Model changes that materially affect output characteristics are documented in the model changelog.

  • Step 4: Client control. The continual learning feature is enabled by default. Clients may disable it at any time through the Platform's control panel. Clu will confirm disablement within five business days. Disabling the feature prevents future data from contributing to the learning loop; it does not affect model improvements already incorporated, which exist only in anonymised, aggregated form and cannot be extracted or reversed.
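The four-step loop described above can be sketched as a single pipeline. This is an illustration only: the field names, data shapes, and the simple weighted-average refinement are hypothetical, not Clu's actual implementation.

```python
# Illustrative sketch only: field names, data shapes, and the averaging
# "refinement" below are hypothetical, not Clu's actual implementation.
from statistics import mean


def anonymise(records):
    """Step 1: keep structural attributes only; drop any identifying fields."""
    return [{"role_family": r["role_family"], "band": r["band"]} for r in records]


def extract_signals(records):
    """Step 2: aggregate signals, e.g. the mean band per role family."""
    bands = {}
    for r in records:
        bands.setdefault(r["role_family"], []).append(r["band"])
    return {family: mean(values) for family, values in bands.items()}


def refine(benchmark, signals, weight=0.1):
    """Step 3: nudge version-controlled benchmark parameters toward new signals."""
    updated = dict(benchmark)
    for family, value in signals.items():
        old = updated.get(family, value)
        updated[family] = old + weight * (value - old)
    return updated


def learning_cycle(benchmark, records, enabled=True):
    """Step 4: client control; a disabled client contributes nothing new."""
    if not enabled:
        return benchmark
    return refine(benchmark, extract_signals(anonymise(records)))
```

Note that anonymisation happens before any signal leaves the client's records, and that disabling the feature short-circuits the loop entirely, mirroring Steps 1 and 4.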

5.  Explainability


5.1  The Standard We Apply
Every Diagnostic Output delivered by Clu includes explanation documentation that meets the following standard: a competent professional reading the documentation should be able to understand how each finding was reached without needing to speak to anyone at Clu. This is the minimum standard required by UK GDPR transparency obligations and the EU AI Act's requirements for high-risk AI providers.


5.2  What Explanation Documentation Contains
For each Diagnostic Output, Clu provides a structured explanation note that covers:

  • Data ingested: A description of the structural data submitted by the client, including the data types, the period it covers, and any data quality issues identified during ingestion.

  • Benchmark basis: The benchmark version and dataset characteristics against which the client's organisational data was compared, including the sector and role-family coverage of the benchmark.

  • Confidence classification: The confidence classification (Confident / Needs Review / Needs Human Correction) applied to each finding, and the specific metric or data quality indicator that determined that classification.

  • Known limitations: Any limitations of the model's applicability to the specific engagement — for example, where the client's organisational structure is sufficiently unusual that benchmark comparisons have reduced reliability, or where data quality has constrained the model's conclusions.

  • What the findings do not mean: An explicit statement, for each finding category, of what the model cannot determine — for example, the AI augmentation score does not constitute a recommendation to reduce headcount in any role or function.

5.3  Availability of Explanation Documentation
Explanation documentation is provided as standard with every Diagnostic Output. Where a client needs additional explanation — for example to support a board presentation, regulatory submission, or union consultation — Clu will provide a supplementary briefing on request. Clients may also request a walkthrough of the methodology with Clu's analytical team.


6.  Human Oversight


6.1  How Human Oversight is Built In
Human oversight of Clu's AI outputs operates at three levels:

  • Level 1: Clu internal QA. All findings classified as Needs Review or Needs Human Correction are reviewed by a qualified Clu analyst before the Diagnostic Output is delivered to the client. Findings classified as Needs Human Correction are corrected or removed; findings classified as Needs Review are annotated with the analyst's assessment before delivery.

  • Level 2: Client review. Clients receive Diagnostic Outputs with confidence classifications and explanation documentation. Clu's terms require clients to apply appropriate human oversight, expert review, and legal advice before acting on any finding. Clu provides a collaborative sense-checking session with each Diagnostic delivery to support this process.

  • Level 3: Decision-maker accountability. No Clu output constitutes a decision or a recommendation to act. All consequential decisions — including those relating to restructuring, redundancy, operating model change, or AI deployment — must be made by accountable human decision-makers who have applied independent judgement, legal advice, and appropriate procedural rigour. Clu's Terms of Service explicitly prohibit clients from using Diagnostic Outputs as the sole basis for automated or semi-automated decisions affecting individuals.

6.2  Human Oversight and the EU AI Act
Under Article 14 of the EU AI Act, providers of high-risk AI systems must design systems to be effectively overseen by natural persons. Clu's three-level oversight architecture is designed to satisfy this requirement. Specifically:

  • The confidence classification system ensures that findings requiring human correction never reach clients without analyst review.

  • Explanation documentation is designed to give deployers (client organisations) sufficient understanding of the system's outputs to exercise meaningful oversight.

  • Clu's contractual terms require deployers to implement appropriate human oversight before acting on any output, and prohibit automated decision-making without adequate human review.


7.  Bias and Fairness


7.1  How the Models Address Bias Risk
Clu's models operate on structural and role-level data, not on data about individual employees' personal characteristics. The analytical inputs are job specifications, banding data, headcount, and structural relationships — not demographic data, performance ratings, or individual attributes. This architectural choice significantly reduces the surface area for individual-level bias.
However, structural data can carry historical bias — for example, where banding structures or role definitions have historically disadvantaged certain groups. Clu acknowledges this risk and manages it through the measures described below.

 

7.2  Bias Monitoring in Practice
Clu's bias monitoring operates through a combination of automated monitoring and regular human QA review:

  • Automated monitoring: The Platform's confidence classification system (Section 2.3) includes automated flags that trigger where model outputs exhibit patterns consistent with known bias risk indicators — for example, where augmentation scores or capability gap findings cluster systematically around particular role types or banding levels in a way that is not explained by the benchmark data. Flagged outputs are escalated to Clu's internal QA process.

  • Regular human QA review: Clu's analytical team conducts scheduled manual reviews of model outputs across the client base. These reviews assess whether output distributions across role types, banding levels, and function types are consistent with benchmark expectations and do not exhibit systematic patterns that would indicate model-level bias. Review findings are documented and retained.

  • Benchmark calibration review: The benchmark dataset itself is reviewed periodically to assess whether its composition and calibration reflect a sufficiently diverse cross-section of organisational structures. Where the benchmark is identified as potentially skewed, this is documented and the relevant findings in Diagnostic Outputs are annotated with a limitation notice.
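To make the automated flagging concrete, here is a minimal sketch, assuming a simple grouping by banding level and a fixed deviation tolerance (both hypothetical; Clu's actual bias-risk indicators and thresholds are not published in this Statement):

```python
# Illustrative sketch only: the fixed tolerance and grouping by banding level
# are hypothetical; Clu's actual bias-risk indicators are not published here.
from statistics import mean


def flag_clusters(scores_by_band, expected, tolerance=0.15):
    """Flag banding levels whose mean augmentation score deviates from the
    benchmark expectation by more than the tolerance, for QA escalation."""
    flagged = []
    for band, scores in scores_by_band.items():
        if abs(mean(scores) - expected[band]) > tolerance:
            flagged.append(band)  # escalated to Clu's internal QA process
    return sorted(flagged)
```

A band whose scores track the benchmark expectation passes silently; a band that clusters well above or below it is flagged for human review rather than being corrected automatically.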

7.3  Client Notification
Where Clu's bias monitoring identifies a material systemic risk in its models that could affect the accuracy or fairness of Diagnostic Outputs delivered to clients, Clu will notify affected clients without undue delay and in any event within 10 business days of the risk being confirmed. Notification will include:

  • A description of the nature and scope of the risk identified

  • An assessment of which Diagnostic Outputs or findings may be affected

  • Clu's recommended course of action, which may include rerunning affected analyses or applying a qualification to previously delivered findings

  • The steps Clu is taking to remediate the underlying model issue

A material systemic risk is one that, in Clu's reasonable assessment, could materially affect the accuracy or fairness of Diagnostic Outputs delivered to one or more clients. Minor model adjustments that do not affect the substance of delivered outputs do not trigger this notification obligation.


8.  EU AI Act Compliance


8.1  Clu's Position Under the EU AI Act
Clu acknowledges that its platform constitutes a high-risk AI system under Annex III of the EU AI Act (Regulation (EU) 2024/1689), specifically under category 4 (AI systems used for employment, workers management, and access to self-employment), for clients operating in the European Union or European Economic Area.
Clu operates as a provider under the Act. Client organisations that deploy the platform in an EU/EEA context operate as deployers. The obligations applicable to each party are distinct and are addressed as follows:

  • Technical documentation (Article 11): Clu maintains technical documentation describing the system's design, development methodology, training and validation approach, performance characteristics, and known limitations. This documentation is made available to authorised national authorities on request and is provided to clients as part of procurement due diligence.

  • Transparency to deployers (Article 13): This AI Transparency Statement, Clu's explanation documentation accompanying each Diagnostic Output, and Clu's Platform Terms of Service collectively satisfy the provider's obligation to give deployers sufficient information to implement appropriate human oversight and comply with their own obligations under the Act.

  • Human oversight by design (Article 14): Clu's three-level human oversight architecture (Section 6) is designed to meet the Article 14 requirement that high-risk AI systems be designed to allow effective oversight by natural persons.

  • Accuracy and robustness (Article 15): Clu's confidence classification system, version-controlled benchmarks, and QA processes are designed to maintain consistent accuracy and robustness across the operational lifetime of the system.

  • ISO 42001 certification: Clu's AI governance practices are built to the standard of ISO 42001 (AI Management Systems). Clu is currently pursuing formal ISO 42001 certification. Documentation of Clu's AI management system is available on request.

  • Deployer obligations: Client organisations deploying the platform in the EU/EEA are responsible for their own compliance with deployer obligations under the Act, including ensuring appropriate human oversight, maintaining logs of use, and providing transparency to affected workers where required under Article 26. Clu's contractual terms and this Statement are designed to support — but do not substitute for — clients' own compliance obligations.

8.2  UK Position
The EU AI Act does not have direct effect in the United Kingdom. The UK government has published an AI Regulation policy paper and the ICO has issued guidance on AI and data protection. Clu's governance practices are designed to meet the standards anticipated by the UK's developing AI regulatory framework, including the ICO's existing requirements on AI explainability and automated decision-making under UK GDPR. Clu will update this Statement as the UK's AI regulatory framework develops.


9.  Governance and Accountability
 

9.1  Internal Governance
Clu's AI governance framework is built to the standard of ISO 42001 and covers:

  • Defined roles and responsibilities for AI system development, deployment, and monitoring

  • Version control and change management for model updates

  • Documented QA processes for model outputs before client delivery

  • Bias monitoring and escalation procedures

  • Incident response procedures for material model failures or bias findings

  • Regular internal review of the AI governance framework against applicable standards

 

9.2  Documentation Available on Request
The following documents are available to clients and, where applicable, to regulatory authorities on request:

  • Clu's AI system technical documentation (summary version for client due diligence)

  • Clu's Legitimate Interests Assessment for the continual learning loop

  • Clu's bias monitoring methodology and most recent review findings

  • Clu's model version changelog

  • Clu's Compliance and Security Summary

Requests should be directed to: legal@getaclu.io.

 

9.3  Updates to This Statement
Clu will update this AI Transparency Statement when material changes are made to its AI systems, governance practices, or the applicable regulatory framework. The "Last updated" date at the top of this document records when it was last revised. Clients with active engagements will be notified of material updates.


10.  Contact
For questions about this Statement, Clu's AI systems, or governance documentation:

AI governance queries: legal@getaclu.io
Data protection queries: dpo@getaclu.io
General enquiries: hello@getaclu.io
Post: Included AI Limited, Floor 3, Capital Tower Business Centre, Greyfriars Road, Cardiff, CF10 3AG
