InsightGuard – AI Audit & Governance Console

Category
UX Design
Tools
Figma, ChatGPT
Systems Design
Information Architecture

Overview

What is it

An enterprise AI audit and governance console that centralizes model health monitoring, explainability, bias detection, drift monitoring, and compliance workflows.

Who it Serves

AI auditors, governance teams, compliance officers, ML engineers, and risk managers who need to evaluate and monitor AI model performance and fairness.

Why it Matters

As AI adoption accelerates, organizations face increasing regulatory pressure and reputational risk. This platform makes AI systems transparent, auditable, and trustworthy.

UX Process

Problem

Organizations deploying AI systems face critical challenges in maintaining transparency, accountability, and regulatory compliance.

Lack of transparency

AI systems operate as black boxes, making it difficult to understand how decisions are made or identify potential issues.

Manual documentation

Regulatory compliance requires extensive documentation that's typically created manually, leading to errors and inconsistencies.

Fragmented monitoring

Bias detection, drift monitoring, and explainability tools exist in silos, forcing auditors to switch between multiple systems.

Reactive governance

Teams discover issues after they've impacted users, rather than catching problems proactively through continuous monitoring.

Goal

Create a unified, enterprise-grade audit console using the IBM Carbon Design System

The platform needed to consolidate scattered AI governance tools into a single, cohesive experience that enables teams to:

  • Monitor model health, performance, and fairness in real time
  • Conduct structured audits with clear workflows and documentation
  • Generate compliance reports for GDPR, EU AI Act, and internal policies
  • Investigate incidents and take corrective action quickly

My Role

UX Design
Interaction Design
Information Architecture
Systems Thinking
Rapid Prototyping
User Flows

Research

Conducted interviews with 12 AI auditors, compliance officers, and ML engineers across financial services, healthcare, and technology sectors to understand their workflows and pain points.

Key Pain Points
  • Bias detection: Teams struggle to identify fairness issues across multiple demographic groups simultaneously
  • Explainability: Understanding why a model made a specific prediction requires technical expertise and manual investigation
  • Reporting: Creating audit reports takes days of manual work, pulling data from multiple sources
  • Context switching: Auditors lose productivity switching between 5-7 different tools during a single investigation

Regulatory Constraints
  • GDPR: Right to explanation requires documenting how automated decisions are made
  • EU AI Act: High-risk AI systems require continuous monitoring and extensive record-keeping
  • Internal policies: Organizations need custom audit checklists and approval workflows
  • Audit trails: Every action must be logged with timestamps and user attribution for compliance

Design Decisions

Why IBM Carbon Design System

Carbon was chosen because it's purpose-built for enterprise applications where information density, data visualization, and workflow efficiency are critical. The system provides:

  • Data-dense layouts: Tables, cards, and visualization components designed to display complex information without overwhelming users
  • Accessibility built-in: WCAG AA compliance ensures the platform is usable by auditors with diverse needs
  • Established patterns: Users familiar with IBM products can transfer their knowledge, reducing the learning curve
  • Professional aesthetic: Clean, structured design communicates trust and credibility—essential for a governance tool

Information Architecture Rationale

The three-tier navigation structure (primary → detail → sub-tabs) was designed to balance discoverability with focus:

  • Primary navigation represents user goals (monitor, audit, report) rather than technical concepts
  • Model detail pages consolidate all analysis for a single model, eliminating the need to jump between sections
  • Sub-tabs separate different types of analysis (explainability vs. bias vs. drift) while keeping them contextually grouped
  • Persistent context: The model header remains visible across sub-tabs so users never lose sight of which model they're analyzing
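To make the three-tier structure concrete, here is a minimal sketch of how such a navigation tree might be modeled, with a breadcrumb helper that guarantees the model-detail context (tier 2) is always part of any sub-tab's (tier 3) trail. All route names and labels are illustrative assumptions, not taken from the actual product:

```typescript
// Hypothetical model of the primary → detail → sub-tab hierarchy.
interface NavNode {
  label: string;
  path: string;
  children?: NavNode[];
}

const nav: NavNode = {
  label: "Models",
  path: "/models",
  children: [
    {
      label: "Model detail",
      path: "/models/:modelId",
      children: [
        { label: "Explainability", path: "/models/:modelId/explainability" },
        { label: "Bias", path: "/models/:modelId/bias" },
        { label: "Drift", path: "/models/:modelId/drift" },
      ],
    },
  ],
};

// Walks the tree depth-first; the returned trail always includes every
// ancestor, so a sub-tab can never be rendered without its model context.
function breadcrumb(node: NavNode, path: string, trail: string[] = []): string[] | null {
  const next = [...trail, node.label];
  if (node.path === path) return next;
  for (const child of node.children ?? []) {
    const found = breadcrumb(child, path, next);
    if (found) return found;
  }
  return null;
}
```

Encoding the hierarchy as data rather than hard-coded screens is one way to keep the persistent model header consistent across all sub-tabs.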

Reducing Cognitive Load

Enterprise governance tools risk overwhelming users with information. Several design decisions specifically addressed cognitive load:

  • Progressive disclosure: Dashboard shows summary metrics; clicking through reveals detailed analysis. Users aren't confronted with all data at once.
  • Visual hierarchy: Critical alerts use red, warnings use amber, normal status uses neutral colors—users can scan quickly for issues
  • Consistent patterns: Card layouts, table structures, and action buttons work the same way across all screens
  • Contextual actions: Buttons appear only when relevant (e.g., "Pause Model" only shows for active models)
  • Default views: Each screen loads with sensible defaults (last 30 days, current model version) to minimize decision fatigue
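The status-color convention and default views above could be centralized in a small sketch like the following, so every screen scans the same way. The type names and the exact color tokens are assumptions for illustration:

```typescript
// One shared mapping so "red = critical, amber = warning" holds everywhere.
type ModelStatus = "critical" | "warning" | "healthy";

const statusColor: Record<ModelStatus, string> = {
  critical: "red",   // scan target: needs immediate attention
  warning: "amber",  // degraded, but not yet failing
  healthy: "gray",   // neutral: no action needed
};

// Sensible defaults so each screen opens in a useful state without
// forcing an up-front decision (illustrative values from the case study).
interface ViewDefaults {
  rangeDays: number;
  modelVersion: "current" | string;
}

function defaultView(): ViewDefaults {
  return { rangeDays: 30, modelVersion: "current" };
}
```

Centralizing the mapping is what makes the "consistent patterns" decision enforceable rather than aspirational.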

Risk Reduction Through Design

The platform's primary purpose is risk management, so the interface itself needed to reduce risk of errors:

  • Confirmation dialogs: Critical actions (pause model, reject audit) require users to provide a reason, preventing accidental clicks
  • Audit trails: Every action is logged with user attribution and timestamps, visible in activity feeds
  • Read-only states: Completed audits and archived reports are clearly marked and prevent editing
  • Validation: Forms validate in real time and block submission if required fields are missing
  • Status indicators: Color-coded badges and icons make it immediately obvious when a model is in an unhealthy state
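Two of these decisions, the required-reason confirmation and the attributed audit trail, can be combined in a single rule, sketched below. The action names, field names, and function are hypothetical, not the product's actual API:

```typescript
// Sketch: critical actions must carry a reason, and every action produces
// a timestamped, user-attributed audit log entry.
type Action = "pause_model" | "reject_audit" | "view_report";

const CRITICAL: ReadonlySet<Action> = new Set<Action>(["pause_model", "reject_audit"]);

interface AuditLogEntry {
  action: Action;
  userId: string;
  timestamp: string; // ISO 8601, for compliance-grade trails
  reason?: string;
}

function logAction(action: Action, userId: string, reason?: string): AuditLogEntry {
  // The confirmation dialog enforces this in the UI; re-checking it here
  // means an accidental click can never produce a bare critical entry.
  if (CRITICAL.has(action) && !reason?.trim()) {
    throw new Error(`Action "${action}" requires a stated reason`);
  }
  return { action, userId, timestamp: new Date().toISOString(), reason };
}
```

Enforcing the rule at the data layer, not just in the dialog, is what turns a UI convention into an auditable guarantee.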

Figma Prototype

Key Learnings

What I Learned

Designing InsightGuard deepened my understanding of how complex enterprise systems must balance power with usability. A few key lessons:

  • IA is critical for enterprise tools: With dozens of features and screens, poor information architecture makes even the best UI unusable. Early IA work pays dividends throughout the project.
  • Design systems accelerate velocity: Carbon's pre-built components let me focus on workflows and IA rather than pixel-pushing. The design system also ensured consistency across 20+ screens.
  • Domain expertise matters: Understanding AI governance concepts (bias metrics, explainability, drift) was essential to designing intuitive interfaces. I spent significant time learning the problem space before sketching.
  • Risk must inform every decision: Governance tools carry high stakes, since errors can have legal or regulatory consequences. This influenced choices like confirmation dialogs, audit trails, and read-only states.

What I'd Improve
  • User testing with real auditors: While the design is based on research interviews, observing auditors complete real tasks would reveal friction points I missed.
  • Mobile experience: The current design assumes desktop usage. A companion mobile app for quick model status checks or incident triage could be valuable.
  • Collaborative features: Adding inline commenting, @mentions, and real-time collaboration would improve team workflows during audits.
  • Customization: Organizations have different governance policies. Making audit checklists and report templates fully customizable would increase adoption.