By Planners, For Planners

01

Spero-ai: Technical Architecture, Delivery & Assurance

Spero-ai is an AI-native planning platform designed for government and regulated enterprise environments. It supports planning, assessment, and consultation workflows by augmenting existing processes with AI-assisted analysis, drafting, and coordination—while keeping accountability, decision-making, and governance firmly with human reviewers.

This document provides a practical, end-to-end view of how Spero-ai is built, deployed, secured, and operated. It is intended to support technical review, risk assessment, and internal due diligence by IT, security, architecture, and governance teams.

This document is intentionally written for technical and risk reviewers. It prioritises clarity and accuracy over marketing language.

Where trade-offs exist, they are stated directly. Where constraints apply, they are made explicit. AI is treated as a controlled system component, not an autonomous decision-maker.
Spero-ai is designed to integrate with existing systems and governance frameworks.
Adoption does not require replacement of core platforms or delegation of statutory responsibility.

01-1

Platform Overview

Spero-ai is a modular, AI-enabled platform designed to support planning, assessment, and consultation workflows in government and regulated enterprise environments. It augments existing systems and processes rather than replacing them, and is designed to operate within established governance, security, and accountability frameworks.

The platform is built as a set of interoperable components that can be deployed incrementally, integrated selectively, and governed centrally.

🎯 What Spero-ai Does (and Does Not Do)

Spero-ai provides AI-assisted support across planning and development workflows, including document analysis, drafting support, data structuring, task coordination, and decision preparation.

It does not automate statutory decisions, override human judgement, or operate as an autonomous decision-making system. All outputs generated by the platform are treated as draft or advisory in nature and require human review, approval, or rejection before use.

The platform is designed to reduce administrative burden and improve consistency and visibility, while preserving existing approval pathways and legal responsibility.

🎯 Core Platform Capabilities

At a platform level, Spero-ai provides a consistent set of capabilities that are applied across different use cases and modules.

These include:
• Secure ingestion and structuring of planning and consultation data
• AI-assisted analysis and drafting with traceable source references
• Workflow support for task assignment, review, and escalation
• Role-based access controls aligned to organisational responsibilities
• Audit logging to support internal review and external scrutiny

Capabilities are exposed through clearly defined services and interfaces, allowing them to be reused across multiple workflows without duplicating logic or data.

🎯 Modular and Extensible by Design

Spero-ai is structured as a modular platform rather than a single monolithic application. Individual modules can be deployed independently, configured to local requirements, and integrated with existing systems as needed.

This approach supports:
• Incremental adoption without large upfront change
• Separation of risk between functional areas
• Easier updates, validation, and rollback
• Alignment with staged procurement and funding models

The platform architecture is intended to support long-term evolution as policy, regulation, and organisational needs change, without requiring wholesale replacement or reimplementation.

01-2

Design Principles & Trust Model

Spero-ai is designed for environments where decisions must be explainable, defensible, and subject to review. The platform’s design principles prioritise human accountability, controlled use of AI, and alignment with public-sector governance expectations.

These principles guide both the current implementation and the ongoing development of the platform.

NOTE: Spero-ai does not automate statutory decisions or replace accountable human roles. The platform is designed to support human-led processes through structured assistance and validation.

🎯 Human Accountability by Design

Spero-ai is built on the assumption that responsibility for planning and regulatory decisions must remain with appropriately authorised human officers.

The platform supports this by:
• Treating all AI-generated outputs as draft or advisory
• Requiring explicit human review and approval before outputs are used
• Preserving existing approval pathways and delegation structures

AI is used to assist analysis, preparation, and consistency. It does not replace professional judgement or statutory responsibility.

🎯 Explainability, Transparency, and Review

Outputs generated by Spero-ai are designed to be reviewable and understandable by end users.

Where AI is used, the platform aims to provide:
• Clear linkage between outputs and source inputs
• Visibility into assumptions, constraints, and confidence levels
• The ability for users to challenge, modify, or reject outputs

This approach supports internal quality assurance, peer review, and external scrutiny where required.

🎯 Trust, Risk Management, and Public-Sector Alignment

The platform is designed to align with how government and regulated organisations assess and manage risk.

Key considerations include:
• Conservative defaults in the use of AI
• Explicit guardrails around sensitive data and decisions
• Clear separation between system assistance and decision authority

Trust is treated as an operational requirement, not a communications exercise. The design prioritises predictable behaviour, auditability, and control over novelty or automation for its own sake.

02

Architecture & System Design

This section describes how Spero-ai is structured at a system level and how its components interact. It provides a practical view of the platform’s architecture to support technical review, solution design, and risk assessment.

The architecture is intentionally modular. Core services, AI components, and integrations are separated by clear boundaries to support security, scalability, and independent change over time.

Design decisions prioritise clarity, control, and operational predictability. Where architectural trade-offs exist, they are documented and addressed directly in the sections that follow.

This section is intended for architects, senior engineers, and IT reviewers who need to understand how the platform behaves under real-world conditions.
Detailed diagrams and component-level explanations are provided in subsequent sections and appendices where required.

02-1

Core System Architecture

Spero-ai is built as a modular, service-oriented platform with clear separation between user interfaces, core services, AI components, and external integrations. This separation is deliberate and supports security review, independent scaling, and controlled change over time.

The architecture is designed to operate reliably in government and regulated enterprise environments, including where legacy systems and strict governance requirements are present.

🎯 Logical Architecture Overview

At a logical level, the platform is organised into four primary layers:

1. User Interface Layer: Web-based interfaces used by planners, reviewers, and administrators.

2. Application & Workflow Services: Core services responsible for task management, workflow coordination, permissions, and business rules.

3. AI & Processing Services: Isolated services responsible for document analysis, drafting support, classification, and retrieval.

4. Integration & Data Access Layer: Interfaces that connect Spero-ai to external systems, data sources, and repositories.

Each layer communicates through defined interfaces. Direct coupling between layers is avoided to reduce risk and simplify testing, validation, and audit.

🎯 Data and Request Flow

User actions initiate requests through the user interface, which are processed by application services responsible for enforcing workflow rules, permissions, and validation.

Where AI assistance is required, requests are passed to dedicated AI services. These services operate within defined constraints and return structured outputs rather than final decisions.

All data access is mediated through controlled services. External systems are accessed via integration interfaces rather than direct database connections, ensuring consistency, traceability, and security.

This approach allows AI components to be added, modified, or constrained without affecting core workflow behaviour.
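
The "structured outputs rather than final decisions" contract above can be sketched as a small result envelope. The class name, fields, and the `accept` helper below are hypothetical illustrations, not the platform's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical envelope for results returned by an AI service.
# Outputs carry their source references and a review status;
# nothing is treated as final until a human reviewer accepts it.
@dataclass
class AIServiceResult:
    task: str                                        # e.g. "summarise_submission"
    content: str                                     # draft output for human review
    source_refs: list = field(default_factory=list)  # traceable inputs
    status: str = "draft"                            # never "approved" at this layer

def accept(result: AIServiceResult, reviewer: str) -> AIServiceResult:
    """Only an explicit human action can change the status."""
    return AIServiceResult(result.task, result.content,
                           result.source_refs, f"approved_by:{reviewer}")

r = AIServiceResult("summarise_submission", "Draft summary...",
                    source_refs=["doc-001", "doc-002"])
assert r.status == "draft"
assert accept(r, "j.smith").status == "approved_by:j.smith"
```

Because the status transition lives outside the AI layer, adding or removing AI components leaves the approval pathway untouched.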

🎯 Scalability, Resilience, and Isolation

The platform is designed to scale horizontally and to isolate workloads where appropriate.

Key considerations include:
• Independent scaling of user-facing services and AI processing
• Isolation of client environments to prevent cross-tenant access
• Graceful degradation where non-critical services are unavailable

Critical workflows are designed to continue operating even if AI-assisted components are temporarily unavailable. This ensures that system availability does not become dependent on AI inference or external processing capacity.

Architectural boundaries also support staged deployment, controlled upgrades, and targeted rollback where required.

02-2

AI Architecture & Model Strategy

Spero-ai incorporates AI as a set of controlled services within the broader platform architecture. AI components are isolated, constrained, and invoked only where they provide clear operational value.

The platform is designed so that AI assistance can be introduced, limited, or removed without affecting core workflow integrity or system availability.

🎯 Role of AI Within the Platform

AI within Spero-ai is used to support specific, well-defined tasks, including:
• Document analysis and classification
• Drafting and summarisation of content
• Retrieval and comparison of relevant reference material
• Consistency checks across large volumes of information

AI is not used to make determinations, approvals, or statutory decisions. Outputs are treated as advisory and are surfaced to users within existing review and approval workflows.

This scoped use of AI reduces risk while still delivering measurable efficiency and consistency gains.

🎯 Model Strategy and Execution Environment

The platform is model-agnostic by design. AI services interact with models through defined interfaces rather than being tightly coupled to a specific provider or implementation.

Depending on deployment requirements, models may be:
• Hosted within managed cloud environments
• Deployed in private or government cloud infrastructure
• Run on-premise or within restricted environments

This flexibility allows organisations to balance performance, cost, and data sensitivity. Where required, models can be selected or constrained to meet data residency, privacy, or assurance requirements.

Model execution environments are isolated from core application services to reduce blast radius and simplify security review.

🎯 Guardrails, Constraints, and Failure Handling

AI services operate within explicit guardrails that control how and when they are used.

These include:
• Input validation and scope limitation
• Output structuring to prevent uncontrolled free-form responses
• Confidence thresholds and escalation triggers
• Timeouts and fallback behaviour when AI services are unavailable

If AI services fail, are unavailable, or are intentionally disabled, core platform workflows continue to operate. Users retain the ability to complete tasks manually without loss of system functionality.

This ensures that AI enhances the platform without becoming a critical dependency.
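
The timeout-and-fallback guardrail can be sketched as follows; `ai_summarise`, `manual_placeholder`, and the 2-second default are illustrative assumptions, not platform values:

```python
import concurrent.futures

# Stand-in for a call to an AI inference service (hypothetical).
def ai_summarise(text: str) -> str:
    return "AI draft summary of: " + text[:40]

# Fallback shown to the user so the task can be completed manually.
def manual_placeholder(text: str) -> str:
    return "AI unavailable; complete manually."

def with_fallback(fn, text, timeout_s=2.0):
    """Run an AI call under a hard timeout; degrade to manual handling."""
    ex = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return ex.submit(fn, text).result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return manual_placeholder(text)
    finally:
        ex.shutdown(wait=False)

print(with_fallback(ai_summarise, "Lodged application text"))
```

The workflow layer only ever sees a string back; whether it came from the model or the fallback, the user can proceed.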

02-3

Human-in-the-Loop Controls

Spero-ai is designed so that human oversight is not optional or implicit, but explicit and enforceable. Review, approval, and accountability points are built into workflows and cannot be bypassed by automation.

Human-in-the-loop controls are treated as a core system requirement rather than a usage guideline.

🎯 Review, Approval, and Override Points

All AI-assisted outputs are surfaced within defined workflow stages that require human action before they can be used or progressed.

This includes:
• Mandatory review steps before outputs can be accepted
• Clear distinction between AI-generated content and human-authored content
• The ability for users to modify, reject, or discard AI outputs

Approval authority remains aligned to existing organisational roles and delegations. The platform does not introduce new approval pathways or bypass established controls.

🎯 Confidence, Escalation, and Exception Handling

Where AI assistance is used, the platform supports confidence signalling and escalation rather than silent automation.

This includes:
• Indicators where outputs fall below defined confidence thresholds
• Escalation of complex or ambiguous cases for senior review
• Support for manual handling where AI assistance is not appropriate

These mechanisms allow organisations to tune the level of AI involvement based on risk profile, policy requirements, or operational maturity.
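
Confidence-based routing of this kind can be sketched in a few lines. The threshold values and route names below are hypothetical; in practice they would be set by local policy:

```python
# Route an AI output by confidence score. Every route still ends in
# human review; thresholds only change who reviews and how urgently.
def route_output(confidence: float,
                 review_threshold: float = 0.85,
                 escalate_threshold: float = 0.60) -> str:
    if confidence < escalate_threshold:
        return "escalate_to_senior_review"   # complex/ambiguous case
    if confidence < review_threshold:
        return "flag_for_standard_review"    # reviewer sees a low-confidence marker
    return "standard_review"

assert route_output(0.95) == "standard_review"
assert route_output(0.70) == "flag_for_standard_review"
assert route_output(0.40) == "escalate_to_senior_review"
```

Tuning the two thresholds is how an organisation would dial AI involvement up or down against its risk profile.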

🎯 Traceability, Audit, and Defensibility

Actions taken by both users and AI services are recorded to support traceability and audit.

This includes:
• Visibility of when AI assistance was used
• Records of user review, modification, or rejection
• Alignment of outputs to source inputs and workflow context

The intent is to support internal quality assurance, external audit, and defensible decision-making, particularly in environments subject to review, appeal, or public scrutiny.
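
One possible shape for such an audit record is sketched below. The field names are illustrative, not a fixed schema:

```python
import datetime
import json

# Hypothetical audit record linking a user action, AI involvement,
# and the source inputs behind an output.
def audit_record(user: str, action: str, ai_used: bool, source_refs: list) -> dict:
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "action": action,              # e.g. "reviewed", "modified", "rejected"
        "ai_assistance_used": ai_used,
        "source_refs": source_refs,    # ties the output back to its inputs
    }

rec = audit_record("j.smith", "modified", True, ["doc-001"])
print(json.dumps(rec, indent=2))
```

Append-only records of this shape are what make "was AI used here, and what did the human do about it?" answerable after the fact.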

03

Data Protection, Security, and Risk Management

This section describes how Spero-ai manages data, enforces security controls, and aligns with risk management practices common in government and regulated enterprise environments.

The platform is designed on the assumption that sensitive information, planning data, and personal information require conservative handling and clear ownership. Data protection and security controls are applied by default rather than as optional configuration.

Security is treated as a shared responsibility across platform design, deployment configuration, and operational governance. Where risks exist, they are identified and addressed through architectural controls, process design, and operational safeguards.

This section is intended to support review by information security, privacy, risk, and governance teams.
Detailed technical controls and deployment-specific configurations are described in the sections that follow.

03-1

Data Ownership & Sovereignty

All data processed by Spero-ai remains the property of the client organisation at all times. The platform does not assert ownership over client data, reuse it for unrelated purposes, or transfer control outside agreed deployment boundaries.

Data sovereignty considerations are addressed through deployment choices, architectural isolation, and explicit access controls.

🎯 Data Ownership and Control

Spero-ai operates as a data processor acting on behalf of the client organisation.

This means:
• Client organisations retain full ownership of their data
• Data is used only to support agreed workflows and services
• No client data is used to train shared models or for unrelated analysis

Access to data is governed by role-based permissions aligned to organisational structures and delegations. Where operational access by Spero-ai personnel is required, that access is restricted and logged.

🎯 Data Residency and Jurisdiction

The platform supports deployment configurations that align with data residency and jurisdictional requirements.

Depending on organisational needs, data may be:
• Hosted within Australian-based infrastructure
• Deployed within government or sovereign cloud environments
• Retained entirely within client-controlled infrastructure

Data location is not abstracted or obscured. Deployment decisions explicitly determine where data is stored and processed, and these boundaries are maintained through infrastructure and access controls.

🎯 Environment Isolation and Tenant Separation

Client environments are logically and, where required, physically isolated.

This includes:
• Separation of data stores between organisations
• Isolation of processing environments to prevent cross-tenant access
• Independent encryption, access policies, and audit logs

This approach reduces the risk of data leakage, simplifies security review, and supports compliance with organisational and regulatory requirements.

03-2

Data Lifecycle Management

Spero-ai manages data according to a defined lifecycle, from ingestion through to retention and deletion. Data handling is designed to support accuracy, traceability, and compliance with organisational and regulatory requirements.

Lifecycle controls are enforced through system design rather than relying on user discretion.

🎯 Data Ingestion and Validation

Data enters the platform through controlled ingestion points, including user uploads, system integrations, and structured inputs.

At ingestion:
• Data is validated for format, completeness, and access permissions
• Metadata is captured to support traceability and audit
• Sensitive data handling rules are applied based on configuration

This ensures that only authorised and appropriately classified data enters downstream workflows and AI-assisted processing.
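
An ingestion gate of this kind can be sketched as follows; the allowed formats, metadata keys, and permission flag are hypothetical examples:

```python
# Illustrative ingestion gate: format, permission, and metadata checks
# run before anything reaches downstream workflows or AI processing.
ALLOWED_FORMATS = {"pdf", "docx", "csv"}          # assumed, not exhaustive

def validate_ingestion(filename: str, user_can_upload: bool,
                       metadata: dict) -> list:
    """Return a list of validation errors; empty means the item may proceed."""
    errors = []
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in ALLOWED_FORMATS:
        errors.append(f"unsupported format: {ext}")
    if not user_can_upload:
        errors.append("user lacks upload permission")
    for key in ("title", "classification"):        # assumed required metadata
        if not metadata.get(key):
            errors.append(f"missing metadata: {key}")
    return errors

assert validate_ingestion("plan.pdf", True,
                          {"title": "Draft plan", "classification": "official"}) == []
```

Returning all errors at once, rather than failing on the first, gives the uploader a complete picture in a single pass.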

🎯 Storage, Retention, and Archival

Data is stored in accordance with the deployment configuration and the client’s retention requirements.

Key considerations include:
• Separation of active and archived data
• Retention periods aligned to organisational policy and regulatory obligations
• Protection of data at rest through encryption and access controls

Archival processes are designed to preserve evidentiary integrity while reducing exposure of inactive data.

🎯 Deletion, Exit, and Data Recovery

The platform supports controlled deletion and exit processes to meet privacy and operational requirements.

This includes:
• Deletion of data in accordance with retention schedules and legal obligations
• Support for data export during contract exit or system transition
• Recovery mechanisms to address accidental deletion or operational error

Deletion and recovery actions are logged to support audit and verification.

03-3

Security Architecture

Spero-ai is designed with security controls applied across infrastructure, application, and operational layers. The security architecture follows a defence-in-depth approach and aligns with common practices in government and regulated enterprise environments.

Controls are implemented through system design, deployment configuration, and operational processes rather than reliance on policy alone.

🎯 Infrastructure and Platform Security

Security controls begin at the infrastructure level and are inherited and reinforced through platform configuration.

Key elements include:
• Segregated environments for development, testing, and production
• Network isolation between core services, AI components, and integrations
• Encryption of data at rest and in transit
• Hardened runtime environments with minimal service exposure

Where cloud infrastructure is used, the platform leverages native security controls provided by the underlying environment. Where deployed in private or on-premise environments, equivalent controls are applied through infrastructure configuration.

🎯 Identity, Access, and Permissions

Access to Spero-ai is governed through role-based access controls aligned to organisational roles and responsibilities.

This includes:
• Least-privilege access by default
• Separation of administrative, operational, and user roles
• Explicit permissions for sensitive actions and data access
• Support for integration with organisational identity providers

Access controls are enforced consistently across user interfaces, APIs, and administrative functions. Privileged access is restricted and subject to additional logging and review.
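
Least-privilege enforcement of this kind is often implemented as a permission check wrapped around each sensitive operation. The roles and permission map below are hypothetical examples, not the platform's actual role model:

```python
import functools

# Assumed role-to-permission map; real deployments would source this
# from the organisation's identity provider and delegations.
PERMISSIONS = {
    "planner":  {"view_case", "draft_report"},
    "reviewer": {"view_case", "approve_report"},
    "admin":    {"view_case", "manage_users"},
}

def requires(permission: str):
    """Deny by default: the action runs only if the role holds the permission."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(role, *args, **kwargs):
            if permission not in PERMISSIONS.get(role, set()):
                raise PermissionError(f"{role} may not {permission}")
            return fn(role, *args, **kwargs)
        return wrapper
    return decorator

@requires("approve_report")
def approve(role, report_id):
    return f"{report_id} approved"

assert approve("reviewer", "R-42") == "R-42 approved"
```

Because the check sits in one decorator rather than scattered through handlers, the same rule applies identically to UI, API, and administrative entry points.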

🎯 Logging, Monitoring, and Incident Handling

Security-relevant events are logged to support monitoring, investigation, and audit.

This includes:
• Authentication and authorisation events
• Data access and modification activity
• Use of AI-assisted functions within workflows
• Administrative and configuration changes

Logs are designed to support both operational monitoring and post-incident analysis. Incident handling procedures are aligned with organisational requirements and support coordinated response, containment, and review where required.

04

Technology Stack & Execution Model

This section describes the core technologies used to implement Spero-ai and the rationale behind their selection. Technology choices have been guided by maturity, operational predictability, and suitability for government and regulated enterprise environments.

The stack is deliberately conservative. Components are widely adopted, well understood by enterprise IT teams, and capable of supporting secure, scalable, and auditable delivery over time.

This section also explains how AI execution is constrained and controlled within the platform. Rather than relying on open-ended or autonomous behaviour, Spero-ai uses structured, rule-driven execution models that align with governance and risk requirements.
Technology selection and execution patterns are designed to support long-term maintainability and do not depend on a single vendor or model.

04-1

Technology Stack & Rationale

Spero-ai uses a deliberately conservative, enterprise-grade technology stack. Components have been selected based on maturity, operational predictability, and alignment with government security and data-sovereignty expectations rather than novelty.

The technologies used are widely deployed in regulated environments and are intended to be understandable, supportable, and reviewable by internal IT teams.

🎯 Frontend and Core Application Services

Next.js — User Interface Layer: Next.js is used to deliver the web-based user interface.

It was selected because it:
• Supports fast, responsive interfaces across devices and browsers
• Performs well in low-bandwidth or constrained environments through server-side rendering
• Aligns with accessibility requirements, including WCAG standards

The frontend is stateless and does not hold authoritative data or decision logic.

Spring Boot + Netty — Core Application Services: Spring Boot is used for core backend services, with Netty providing high-performance, asynchronous request handling.

This combination was selected because it:
• Is widely used and well understood in government and enterprise systems
• Provides mature security, authentication, and encryption libraries
• Supports clear service boundaries and long-term maintainability

The backend enforces workflow rules, permissions, and validation independently of AI services.

🎯 Data and Event Processing

PostgreSQL + pgvector — Data Storage and Retrieval: PostgreSQL is used as the primary data store.

It was selected because it:
• Provides ACID-compliant transactions suitable for records and audit requirements
• Scales reliably for structured and semi-structured data
• Supports advanced retrieval through pgvector for semantic search and citation linking

Data remains under client ownership and is encrypted at rest and in transit.
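
The retrieval pattern pgvector provides is ranking stored embeddings by similarity to a query embedding. A pure-Python sketch of the idea, using toy 3-dimensional vectors in place of real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Document names and vectors are illustrative only.
docs = {
    "heritage-overlay-guidance": [0.9, 0.1, 0.0],
    "flood-mapping-notes":       [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]
best = max(docs, key=lambda name: cosine(query, docs[name]))
assert best == "heritage-overlay-guidance"
```

In the database itself, the equivalent is (if memory serves) a pgvector distance-ordered query such as `ORDER BY embedding <=> :query LIMIT k`, with the matched rows carrying the citations surfaced to reviewers.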

RabbitMQ — Event and Workflow Coordination: RabbitMQ is used to manage asynchronous processing and event-driven workflows.

It was selected because it:
• Decouples background processing from user-facing services
• Improves resilience by preventing cascading failures
• Supports horizontal scaling without complex orchestration

This ensures that workload spikes do not degrade core system responsiveness.
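
The decoupling pattern a broker like RabbitMQ provides can be illustrated with a standard-library queue and worker thread; this is a conceptual stand-in, not the platform's actual broker integration:

```python
import queue
import threading

# Producers enqueue work and return immediately; a worker drains the
# queue at its own pace, so a spike never blocks the caller.
work = queue.Queue()
results = []

def worker():
    while True:
        job = work.get()
        if job is None:          # sentinel: shut the worker down
            break
        results.append(f"processed:{job}")

t = threading.Thread(target=worker)
t.start()
for job_id in ("A-1", "A-2", "A-3"):
    work.put(job_id)             # non-blocking for the producer
work.put(None)
t.join()
assert results == ["processed:A-1", "processed:A-2", "processed:A-3"]
```

A real broker adds what an in-process queue cannot: persistence across restarts, acknowledgements, and independent scaling of consumers.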

🎯 AI Processing and Sovereign Inference

Python + OCR — Document Understanding: Python-based services are used for document processing, OCR, and AI-assisted analysis.

This approach:
• Enables extraction of structured data from PDFs and scanned documents
• Integrates directly with assessment and compliance workflows
• Reduces manual data entry without bypassing review controls

AI outputs are treated as draft artefacts and remain subject to human review.

Ollama / vLLM / TGI — Controlled AI Inference: AI inference services are deployed using infrastructure that supports on-premise and private execution.

This allows:
• AI models to run entirely within client-controlled environments
• No external data sharing or dependency on public AI services
• Model vetting, versioning, and approval within organisational governance

Inference infrastructure can scale from small pilots to production deployments without changing architectural patterns.

04-2

Controlled AI Execution Model

Spero-ai is designed to use large language models in a constrained and predictable way. Rather than relying on open-ended prompting or autonomous agent behaviour, the platform limits AI degrees of freedom through structure, rules, and step-based execution.

This approach is intended to prevent hallucination, inconsistency, and uncontrolled variation in regulated workflows.

🎯 Risks of Open-Ended AI in Enterprise Processes

Large language models are probabilistic systems. When asked to interpret open-ended instructions or perform multiple tasks simultaneously, they can introduce unintended changes in structure, logic, or compliance behaviour.

In regulated environments such as planning, permitting, and assessment, this behaviour is not acceptable.

Outputs must remain consistent with:
• Defined workflows
• Jurisdictional rules
• Fixed document formats
• Evidentiary and audit requirements

Spero-ai is explicitly designed to prevent AI systems from inventing process, structure, or interpretation.

🎯 Output-First and Rule-Based Control

Before any AI execution occurs, Spero-ai defines the permitted output space.

This includes:
• Fixed output structures (forms, cards, reports)
• Allowed layouts and components
• Permitted response types (e.g. classification, extraction, validation, summarisation)

Once structure is fixed, authoritative rule sets are applied. These include planning rules, jurisdictional constraints, validation logic, and controlled terminology.

AI operates within these boundaries and cannot override or bypass them.
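
Output-first control can be sketched as a checker that rejects anything outside the pre-defined output space. The response types, class list, and checker below are illustrative assumptions:

```python
# The permitted output space is fixed before any model call is made;
# anything outside it is rejected rather than passed downstream.
PERMITTED_TYPES = {"classification", "extraction", "validation", "summarisation"}
ALLOWED_CLASSES = {"residential", "commercial", "industrial"}  # assumed vocabulary

def accept_response(response: dict) -> bool:
    """True only if the response fits the pre-defined output structure."""
    if response.get("type") not in PERMITTED_TYPES:
        return False
    if response["type"] == "classification":
        # Classifications must use the controlled vocabulary.
        return response.get("value") in ALLOWED_CLASSES
    # Other permitted types must at least carry a textual value.
    return isinstance(response.get("value"), str)

assert accept_response({"type": "classification", "value": "residential"})
assert not accept_response({"type": "free_text", "value": "anything"})
```

The model never decides what shapes exist; it can only fill shapes the rule layer has already approved.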

🎯 Step-Based Execution and Validation

AI tasks within Spero-ai are executed as small, sequential steps rather than a single, open-ended prompt.

Each step:
• Has a narrow, defined scope
• Is context-aware of prior steps
• Is validated against rules before proceeding

Discarded approaches are not silently reintroduced, and outputs are checked before advancing. This prevents uncontrolled course correction and supports traceability and audit.

Human involvement is positioned upstream — in defining structure, rules, and approvals — rather than correcting AI output after the fact.
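
The step-based pattern can be sketched as a pipeline where each step's output is validated before the next step runs. Step names, validators, and the placeholder classification are hypothetical:

```python
# Each step has a narrow scope; its output must pass a validator
# before the pipeline advances, so errors cannot silently propagate.
def extract(doc):
    return {"address": doc.get("address", "")}

def validate_extract(out):
    return bool(out["address"])              # must have found an address

def classify(out):
    out["zone"] = "residential"              # placeholder for a constrained model call
    return out

def validate_classify(out):
    return out["zone"] in {"residential", "commercial"}  # controlled vocabulary

PIPELINE = [(extract, validate_extract), (classify, validate_classify)]

def run(doc):
    state = doc
    for step, check in PIPELINE:
        state = step(state)
        if not check(state):
            raise ValueError(f"validation failed after {step.__name__}")
    return state

assert run({"address": "1 High St"})["zone"] == "residential"
```

A failed check halts the pipeline at a named step rather than letting a later step "course correct", which is what keeps the run traceable and auditable.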

05

Deployment, Integration, and Delivery Approach

This section describes how Spero-ai is deployed, integrated, and delivered into existing organisational environments. It focuses on practical implementation considerations rather than theoretical architecture.

The platform is designed to be deployed in a range of operating environments and to integrate with existing systems without requiring wholesale replacement or disruption to established workflows.

Delivery is approached as a staged process. Deployment, integration, and adoption are sequenced to manage risk, support validation, and allow organisations to scale usage over time.

This section is intended for IT operations, delivery teams, and stakeholders responsible for implementation and ongoing support.
Deployment models and integration patterns are described at a conceptual level in this section, with configuration-specific details addressed during implementation.

05-1

Deployment Models

Spero-ai supports multiple deployment models to accommodate differing risk profiles, operational constraints, and governance requirements. Deployment choices determine where data is stored, how AI services are executed, and how the platform is operated.

All deployment models use the same core architecture and control patterns. Differences relate to hosting location, operational responsibility, and integration boundaries rather than functionality.

🎯 Managed Cloud Deployment

In a managed cloud deployment, Spero-ai is operated within a controlled cloud environment managed by Spero-ai or an agreed service provider.

This model is typically used where:
• Data sensitivity allows for managed cloud operation
• Faster deployment and scaling are priorities
• Operational overhead needs to be minimised

Security, monitoring, and platform updates are managed centrally, while client access, data controls, and governance settings remain client-defined.

🎯 Private or Government Cloud Deployment

In a private or government cloud deployment, Spero-ai is deployed into infrastructure controlled by the client organisation or a government-approved environment.

This model is typically used where:
• Data residency and jurisdictional control are mandatory
• Integration with internal systems requires network-level access
• Security and compliance controls must align to internal standards

Operational responsibilities may be shared or retained entirely by the client, depending on the agreed delivery model.

🎯 On-Premise and Restricted Environment Deployment

Spero-ai can be deployed within on-premise or restricted environments, including scenarios where external connectivity is limited or prohibited.

This model is typically used where:
• Data cannot leave the organisation’s controlled environment
• AI processing must occur locally
• Connectivity to external services is constrained

In these deployments, functionality may be scoped to reflect infrastructure capacity and operational constraints. Core workflows remain available even where AI services are limited or disabled.

05-2

Integration Strategy

Spero-ai is designed to integrate with existing organisational systems rather than replace them. Integration is approached as a controlled interface problem, with clear boundaries between Spero-ai and incumbent platforms.

The integration strategy prioritises stability, traceability, and ease of change over tight coupling or deep system dependency.

🎯 Supported Integration Patterns

Spero-ai supports standard integration patterns commonly used in government and enterprise environments.

These include:
• API-based integration for structured data exchange
• Event-driven or webhook-based integration for workflow coordination
• Batch or scheduled data exchange where real-time integration is not required

Integration patterns are selected based on system capability, data sensitivity, and operational risk rather than technical novelty.

🎯 Typical Systems and Use Cases

The platform is designed to integrate with a range of existing systems, including:
• Planning, assessment, and case management systems
• Document and records management systems
• CRM, consultation, and engagement platforms
• GIS and spatial data services

Spero-ai operates alongside these systems, augmenting workflows by analysing, structuring, or drafting content without assuming ownership of system-of-record responsibilities.

🎯 Integration Governance and Change Control

Integrations are governed to support maintainability and audit.

This includes:
• Explicit versioning of integration interfaces
• Validation and testing prior to changes
• Controlled deployment and rollback procedures

Integration logic is isolated from core platform services, allowing changes to external systems without destabilising internal workflows. This reduces long-term operational risk and simplifies ongoing support.

05-3

Delivery Methodology

Spero-ai is delivered using a staged, risk-managed approach designed for government and regulated enterprise environments. Delivery focuses on early validation, controlled change, and alignment with existing operational practices.

The methodology prioritises predictable outcomes over speed for its own sake.

Delivery Methodology
🎯 Discovery and Co-Design

Delivery begins with a structured discovery and co-design phase to establish scope, constraints, and success criteria.

This phase typically includes:
• Confirmation of objectives, risks, and non-negotiables
• Review of existing systems, workflows, and governance requirements
• Identification of integration points and data sensitivities

The intent is to ensure that technical design and delivery sequencing reflect real operational conditions before build activity begins.

🎯 Build, Test, and Validation

Platform configuration and development are delivered incrementally and validated early.

This includes:
• Configuration of modules and workflows aligned to agreed scope
• Integration development and interface testing
• Validation of security, access controls, and data handling
• User review of AI-assisted outputs within controlled scenarios

Testing focuses on correctness, traceability, and operational fit rather than volume-based performance alone.

🎯 Deployment, Training, and Handover

Deployment is sequenced to minimise disruption and support adoption.

This includes:
• Controlled release into target environments
• Role-appropriate training for users and administrators
• Operational handover and documentation

Post-deployment support focuses on stabilisation, feedback, and incremental improvement rather than continuous structural change.

06

Governance, Operations & Assurance

This section describes how Spero-ai is governed, operated, and supported over time. It focuses on the mechanisms used to maintain control, manage risk, and ensure the platform continues to operate as intended after initial deployment.

Governance and operational assurance are treated as ongoing responsibilities rather than implementation milestones. Controls are designed to support oversight, audit, and continuous improvement without introducing unnecessary complexity.

This section is intended for platform owners, IT operations, governance, and assurance stakeholders.
Operational models and assurance activities are adapted to align with organisational policy, risk appetite, and delivery arrangements.

06-1

Governance & Compliance

Spero-ai is designed to operate within established governance and compliance frameworks common to government and regulated enterprise environments. Governance controls focus on accountability, oversight, and alignment with organisational policy rather than bespoke or AI-specific processes.

The platform supports governance by design, rather than relying solely on procedural controls.

Governance & Compliance
🎯 Governance Model and Responsibilities

Governance responsibilities for Spero-ai align with existing organisational structures.

This typically includes:
• Clear ownership of the platform at an executive or senior management level
• Defined operational responsibility for configuration, access, and usage
• Separation between platform administration, operational use, and oversight

The platform does not require the creation of new governance bodies or roles. Instead, it integrates into existing ICT, data, and risk governance arrangements.

🎯 Compliance Alignment

Spero-ai is designed to support compliance with common regulatory and policy obligations, including privacy, records management, security, and information handling requirements.

Compliance is supported through:
• Configurable access controls and permissions
• Audit logging and traceability of system and user actions
• Deployment options aligned to data residency and jurisdictional requirements

The platform does not claim automatic compliance. Final compliance remains the responsibility of the client organisation and is supported through appropriate configuration and governance.

🎯 Audit, Review, and Reporting

The platform provides mechanisms to support internal and external review.

This includes:
• Logs and records suitable for audit and assurance activities
• Visibility into the use of AI-assisted functions within workflows
• Support for reporting to internal governance bodies or regulators

Audit and review processes are designed to be evidence-based and repeatable, rather than reliant on ad-hoc explanation or interpretation.
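As an illustration of evidence-based, repeatable audit records, the sketch below shows a hypothetical audit entry with a deterministic digest to support tamper-evidence checks. The field names, actions, and digest scheme are assumptions for illustration only, not the platform's actual log schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AuditRecord:
    timestamp: str    # ISO 8601, UTC
    actor: str        # user or service identity
    action: str       # e.g. "draft.generated", "output.approved"
    resource: str     # record or document identifier
    ai_assisted: bool # flags AI involvement for review and reporting

def record_digest(record: AuditRecord) -> str:
    """Deterministic digest of a record; identical records always hash
    identically, so later verification is repeatable, not interpretive."""
    canonical = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

entry = AuditRecord("2025-01-01T09:00:00Z", "user:jsmith",
                    "output.approved", "doc-77", True)
```

Flagging AI involvement as an explicit field, rather than inferring it later, is what allows reporting on AI-assisted functions to be mechanical rather than reliant on ad-hoc explanation.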

06-2

Operations & Support

Spero-ai is operated and supported using standard practices appropriate for government and regulated enterprise environments. Operational controls focus on reliability, visibility, and controlled change rather than continuous feature churn.

Support arrangements are designed to align with existing IT service management and operational models.

Operations & Support
🎯 Operational Model and Responsibilities

Operational responsibilities are clearly defined between platform operation, system administration, and end-user activity.

This typically includes:
• Day-to-day platform monitoring and health checks
• Management of user access, roles, and configuration
• Oversight of integrations and scheduled processes

Operational roles are aligned to existing IT and service ownership structures. The platform does not require specialist AI operations capability to perform routine support tasks.

🎯 Monitoring, Reliability, and Service Levels

The platform includes monitoring and alerting to support operational awareness and issue response.

This includes:
• Monitoring of system availability and performance
• Visibility into background processing and AI-assisted services
• Alerting for abnormal behaviour or service degradation

Service levels and support arrangements are defined as part of the delivery and operating model and are aligned to the deployment environment and organisational requirements.
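A threshold-based degradation check of the kind described above might look like the following sketch. The threshold, minimum sample size, and function name are illustrative assumptions, not platform defaults, which would be set per deployment against agreed service levels.

```python
import statistics
from typing import Sequence

def should_alert(latencies_ms: Sequence[float],
                 threshold_ms: float = 500.0,
                 min_samples: int = 5) -> bool:
    """Raise an alert when the median of recent response times exceeds
    the agreed threshold; too few samples means no reliable signal."""
    if len(latencies_ms) < min_samples:
        return False
    return statistics.median(latencies_ms) > threshold_ms

# Healthy window: no alert. Degraded window: alert.
assert not should_alert([110, 95, 130, 120, 105])
assert should_alert([620, 580, 710, 640, 590])
```

Using a median over a minimum window, rather than reacting to single slow requests, keeps alerting aligned to genuine service degradation rather than noise.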

🎯 Support, Maintenance, and Change Management

Support and maintenance activities are structured to minimise operational disruption.

This includes:
• Controlled release and update processes
• Advance communication of changes and maintenance windows
• Support for issue investigation, remediation, and post-incident review

Changes are assessed for operational and risk impact prior to deployment. Emergency fixes and updates follow defined escalation and approval pathways.

06-3

Technical Assurance Summary

This section summarises the technical and operational assurances described throughout this document. It is intended to support internal decision-making, executive briefing, and next-stage due diligence.

Spero-ai is designed to be reviewed, challenged, and governed using established technical and risk management practices.

Technical Assurance Summary
🎯 Risk and Control Summary

Key risks associated with AI-enabled platforms have been addressed through architectural design, workflow controls, and governance alignment.

Key controls include:
• Clear separation between AI assistance and human decision-making
• Explicit data ownership, residency, and lifecycle controls
• Defence-in-depth security architecture and access controls
• Operational resilience independent of AI availability

Residual risks are managed through configuration, oversight, and organisational governance rather than technical abstraction.

🎯 IT Readiness and Operational Fit

The platform is designed to align with existing IT environments and operating models.

This includes:
• Compatibility with standard identity, security, and monitoring practices
• Deployment options to match data sensitivity and infrastructure constraints
• Integration patterns that avoid tight coupling or system-of-record conflicts

Adoption does not require fundamental change to organisational governance structures or delegation of authority.

🎯 Next-Step Technical Due Diligence

Where required, further technical due diligence can be undertaken in a structured manner.

This may include:
• Architecture and security deep dives
• Deployment-specific configuration review
• Integration design workshops
• Operational and support model confirmation

The platform is designed to support this level of review without reliance on undocumented assumptions or informal explanation.

07

Spero-ai Team

An experienced team combining deep planning domain expertise with hands-on delivery capability across AI, data, and software. The Spero-ai team brings a practical, validation-led approach — working closely with planners, respecting professional judgement, and focusing on tools that are usable, secure, and supportable in real planning environments.

Founders and Advisory Board
🎯 Founders

James Mant (Chief Executive Officer) James leads Spero-ai with a clear mission: making planning work better for people, communities, and governments. With over 20 years’ experience across state and local government, urban policy, and design strategy, he brings deep insight into how planning systems operate at scale. At Spero-ai, James combines this experience with AI and co-design to refocus planning on quality outcomes, liveability, and trust.

Jennifer George (Chief Commercialisation Officer) Jennifer leads commercial strategy at Spero-ai, specialising in scaling deep-tech innovation across government and industry. With a background in US institutional investment and more than 15 years commercialising sustainable technologies across 50+ industries, she operates at the intersection of capital, policy, and technology. Jennifer brings strong market, partnership, and governance capability to Spero-ai’s growth.

Peter Kelly (Chief Information Officer) Peter is responsible for Spero-ai’s technology vision and execution, with over 25 years in technology and deep hands-on experience in construction, compliance, and regulated environments. A founder of multiple platforms in planning, compliance, and safety, he specialises in translating complex regulation into scalable, human-centred digital systems. At Spero-ai, Peter focuses on AI-native tools that support — rather than replace — planners and regulators.

🎯 Advisory Board

Dr Jane Homewood Dr Jane brings decades of leadership across planning, housing, and urban strategy, including senior public sector roles shaping statutory planning frameworks and sustainable growth initiatives. Her deep experience in government, design, and policy provides strategic insight into planning reform and implementation.

Andrew Grear Andrew brings deep leadership experience across planning, building, and land development reform at both state and local government levels. A former Executive Director of State Planning Policy in Victoria and Senior Member of Planning Panels Victoria, he has led major statutory and system reforms, including South Australia’s Planning, Development and Infrastructure Act. Andrew provides Spero-ai with rare insight into how planning systems are designed, governed, and successfully reformed in practice.

Carys Evans Carys brings deep expertise in digital transformation, spatial data, and government-scale technology delivery. As former Director of Digital Twin Victoria, she led the program from pilot to sustained operations, delivering over $70M in realised benefits and establishing Victoria’s first digital twin. Her experience bridges strategic planning, advanced data platforms, and practical decision-making across the built and natural environments.

George Havakis George is a highly experienced leader in geospatial systems and spatial analytics, with more than three decades delivering enterprise-grade GIS solutions across government and industry. As Managing Director of GISSA International, he has led the design and deployment of large-scale spatial platforms that underpin planning, infrastructure, and decision-making. George brings deep technical credibility and insight into how spatial data can be operationalised at scale.

Kate Shea Kate brings deep expertise in strategic communications, stakeholder engagement, and investor relations across technology and growth-focused organisations. She supports companies in shaping clear narratives, building credibility with investors and partners, and communicating complex propositions with confidence. At Spero-ai, Kate strengthens market positioning, external engagement, and capital-readiness.

🎯 Our Values

By planners, for planners: Built with deep domain understanding, not generic automation.

Human‑centred AI: AI that supports judgment, transparency, and accountability.

Trust by design: Explainable, referenceable, and defensible outputs.

Long‑term infrastructure mindset: Systems designed to endure policy, regulatory, and market change.

08

Contact Details & Disclosure

This section provides contact details for follow-up discussion, alongside the key disclosures and context needed to support informed review. It clarifies how to engage further, the indicative nature of the material presented, and the assumptions and boundaries under which this response has been prepared.

Contact Details & Disclosure
🎯 Contact Details - James Mant (CEO)

James Mant
Chief Executive Officer
M: 0434 905 671
E: [email protected]

🎯 Contact Details - Peter Kelly (CIO)

Peter Kelly
Chief Information Officer
M: 0400 988 841
E: [email protected]

🎯 Disclosure - Important information & limitations for readers

This document has been prepared to describe the technical architecture, design principles, and delivery approach underlying the Spero-ai platform, based on information available at the time of writing and discussions held to date.

It is intended to support technical review, alignment, and informed discussion. It does not define a final system configuration, implementation scope, delivery timeline, or commercial commitment.

The platform capabilities, deployment models, and controls described reflect the current implementation and the principles that guide ongoing development. Any deployment would proceed through staged delivery, with defined boundaries, governance controls, and mandatory human oversight appropriate to the operating environment.

This document is provided for information purposes only. It does not constitute legal, financial, procurement, or professional advice, and should not be relied upon as such. Final requirements, configurations, and assurance outcomes remain subject to organisation-specific assessment, validation, and approval.