Designing Streamlined and Scalable Data Permissions for AI

A UX approach to secure and explainable AI access across enterprise environments.

Role
Founding UX Designer

Industry
Data Governance

Timeline
9 Months

Team
CEO & Engineering Team (Director, Founding, AI/ML, Front-End)

The $10 Million Question Every CISO Was Asking...

At Codified, I faced a fascinating design challenge: helping CISOs at companies like JP Morgan confidently say “yes” to AI innovation.

"What if our LLM accidentally exposes trade secrets, customer data, or strategic plans?"

Learning by building

The reality of enterprise and early-stage design is that you don't learn everything upfront. At Codified, too, our most important insights emerged only after customers started giving us feedback on what we built.

Through 50+ customer conversations across 9 months, we learned that enterprise AI adoption is a challenge of confidence and trust that can only be solved through iterative learning.

Project timeline

Foundational UX & market gaps

Oversharing & Permission Discipline Issues
Files and folders have been shared too broadly over time, creating security risks when AI systems access this data.

Classification and Categorization Nightmare
Existing tools can't properly categorize and classify sensitive data, making rule-setting impossible.

AI Deployment Blocked by Security Concerns
Companies want to deploy AI tools but security teams are blocking rollouts due to data access concerns.

Manual Permission Management is Unsustainable
Current solutions require manual review and fixing of permissions, which doesn't scale.

Lack of Real-Time Visibility & Auditing
Companies can't see what data their AI systems are accessing in real-time or audit access patterns.

Existing solutions are like smoke detectors

Data security posture management (DSPM) and data loss prevention (DLP) tools only say "there's smoke somewhere" but don't tell you where the fire is, how bad it is, or what to do about it.

These tools are good at pattern matching, scanning documents to find SSNs, credit cards, or other sensitive data. But they stop there, leaving security teams with more questions to answer:

  • No business context: Is this SSN in an HR file where it belongs, or leaked into a marketing folder?
  • No access visibility: Who can actually see this sensitive data? Should a summer intern have access to executive compensation files?
  • No actionable guidance: What should you DO about these findings? Fix permissions? Move files? Create restrictions?

Manual Workarounds Don't Scale. 
Multiple customers mentioned creating Personas and Data Owners (who go and fix permissions) and then stress-testing with adversarial prompts.

Codified's Value Proposition

For: Security-conscious enterprises deploying internal AI applications
Who: Need to reduce the risk of sensitive data exposure while enabling AI adoption across teams
Our product: An explainable access governance platform built for AI workloads
That: Highlights who can access what, why, and how, providing visibility, control, and confidence across millions of documents and complex permission systems.

What we built first with what we knew

Core MVP Features (initial attempt at solving the identified gaps)

  • Data Source Onboarding
    • First critical step. Without connected data sources, there’s no way to evaluate or manage AI data access
  • Automated Data Categorization
    • We thought intelligent classification by sensitivity, domain, and data source would provide the missing context
  • User and Data Details
    • Teams needed clear visibility into who could access what data
  • Rules creation
    • Teams needed a way to define and test AI data access policies upfront.
  • Risk Dashboard
    • Policy violations and risk breakdowns needed intuitive visual representation
  • Access Check Logs
    • We thought basic logging of API permission checks would provide the audit trail teams needed (see the sketch after this list)
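
To make the "access check" concept concrete, here is a minimal TypeScript sketch of what a single permission check and its log entry could look like. It is purely illustrative; names like checkAccess, AccessCheckRequest, and AccessCheckLog are my assumptions, not Codified's actual API.

```typescript
// Hypothetical shape of a single AI-access permission check and its log entry.
// All names and fields are illustrative assumptions, not Codified's actual API.

interface AccessCheckRequest {
  principalId: string;      // the user or service the AI app is acting for
  documentId: string;       // the data object the LLM wants to read
  action: "read" | "embed" | "summarize";
}

interface AccessCheckLog {
  timestamp: string;        // ISO 8601, for audit ordering
  request: AccessCheckRequest;
  decision: "allow" | "deny";
  matchedRuleIds: string[]; // which policies fired (empty in the MVP's basic logs)
}

// A naive in-memory check; a real system would consult the metadata store and rules engine.
function checkAccess(req: AccessCheckRequest, allowList: Set<string>): AccessCheckLog {
  const decision = allowList.has(`${req.principalId}:${req.documentId}`) ? "allow" : "deny";
  return {
    timestamp: new Date().toISOString(),
    request: req,
    decision,
    matchedRuleIds: [],
  };
}
```

Even this basic shape hints at the feedback that followed: the log records an allow or a deny, but nothing about why.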

Our first feedback

"This is helpful, but you're missing something fundamental. Every organization thinks about data differently. Our 'Sensitive' isn't the same as JP Morgan's 'Sensitive.' We need to label data by our business structure, not generic categories."
— CISO, Fortune 500 Financial Services

Customer Feedback Pattern
As customers began trying our categorization and classification system, consistent feedback emerged:

"We need to label data by our business structure, not generic categories"
"Our legal team has different sensitivity requirements than our product team"
"Can we tag things as 'board-level confidential' or 'customer-facing approved'?"

KEY LEARNING

Context matters more than accuracy. Generic categorization and classification, no matter how sophisticated, doesn't match enterprise mental models.

Custom labels in addition to categorization

  • Flexible labeling that supports any organizational taxonomy
  • Label inheritance through data hierarchies (sketched below)
  • Labels integrated into rule logic and risk calculations
  • User-friendly interface for creating and managing custom classifications
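
As a rough illustration of how label inheritance and rule integration could fit together, here is a hedged TypeScript sketch. The DataNode, effectiveLabels, and rule shapes are my own assumptions for this write-up, not the product's actual data model.

```typescript
// Hypothetical model of custom labels inheriting through a data hierarchy.
// Names and fields are illustrative assumptions.

interface DataNode {
  path: string;        // e.g. "/finance/board/q3-plan.docx"
  labels: string[];    // labels applied directly, e.g. ["board-level confidential"]
  parent?: DataNode;   // folder that contains this node
}

// Effective labels = labels on the node plus everything inherited from its ancestors.
function effectiveLabels(node: DataNode): Set<string> {
  const labels = new Set(node.labels);
  for (let parent = node.parent; parent; parent = parent.parent) {
    parent.labels.forEach((label) => labels.add(label));
  }
  return labels;
}

// Rules can then reference the customer's own taxonomy instead of generic categories.
const exampleRule = {
  id: "rule-board-confidential",
  denyWhen: { labelIs: "board-level confidential", principalLacks: "Executive_Read" },
};
```

The key idea is that a file with no tags of its own can still carry "board-level confidential" because a parent folder carries it, and rules speak the organization's language rather than generic categories.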

Every rule needs a story

"We're worried about actually SETTING UP RULES. What if we accidentally block the entire finance team from their own data?We need to know BEFORE we implement what will break.”
— IT Director, Mid-size Healthcare Company

The Fear Pattern
Customer after customer expressed the same anxiety about our rules engine:

"We need to see exactly which files and users will be affected, before we hit apply."
"What if this rule ends up blocking access to the wrong folders?"
"Our data lives in layers. How do we know what part of the hierarchy this rule actually touches?"

KEY LEARNING

Visibility means nothing if users are afraid to act on it.

Knowing what a rule actually touches

  • A data hierarchy explorer to show precisely what's being targeted: from large folders to individual files
  • A way to select and filter by path, metadata, or data type
  • And an impact preview to visualize affected users and data before rules go live (sketched below)
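
Here is a minimal sketch of what an impact preview, i.e. a dry run of a rule before it is applied, might look like. It builds on the hypothetical types above; previewImpact and its parameters are illustrative assumptions, not the shipped implementation.

```typescript
// Hypothetical dry-run of a rule against the data hierarchy before it goes live.
// Names and shapes are assumptions for illustration only.

interface Rule {
  id: string;
  pathPrefix: string;          // which part of the hierarchy the rule touches
  requiredPermission: string;  // permission a principal must hold to keep access
}

interface ImpactPreview {
  affectedDocuments: string[]; // documents the rule would govern
  blockedPrincipals: string[]; // users who would lose access if applied today
}

function previewImpact(
  rule: Rule,
  documents: { path: string; readers: string[] }[],
  permissionsOf: (principal: string) => Set<string>
): ImpactPreview {
  const affected = documents.filter((doc) => doc.path.startsWith(rule.pathPrefix));
  const blocked = new Set<string>();
  for (const doc of affected) {
    for (const reader of doc.readers) {
      if (!permissionsOf(reader).has(rule.requiredPermission)) blocked.add(reader);
    }
  }
  return {
    affectedDocuments: affected.map((doc) => doc.path),
    blockedPrincipals: Array.from(blocked),
  };
}
```

Running something like this before hitting "apply" is what lets a team answer the healthcare IT director's question: will this rule lock the finance team out of its own folders?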

"Access approved” can be a real scare

"I can see someone got access to sensitive data, but I have no idea why. What rule allowed this? Did we mess something up? If this ever leaks, I won’t just need logs.. I’ll need a story I can stand behind."
— CISO, Fortune 100 Technology Company

The Pattern Behind Every Panic Call
When unexpected access approvals occurred, teams needed to:

Understand the reasoning: Why was this access granted?
Verify policy accuracy: Did we configure our rules correctly?
Defend to stakeholders: How do we explain this to execs or auditors?
Take corrective action: Can we fix this before it happens again?

KEY LEARNING

Unexplained approvals create more panic than denials. Teams fear silently overexposing sensitive data without a clear trail of justification.

Visual explainability system

  • Decision graphs: Visual representation of how user data, custom postures, and default policies combined (see the sketch after this list)
  • Plain language summaries: "Access denied because user lacks 'Executive_Read' permission, required by Custom Rule #3"
  • Recommended actions: "Grant permission via IT portal" or "Request exception approval"
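
To ground the explainability idea, here is a hedged sketch of what a single explainable decision record could contain. DecisionExplanation and the example values are assumptions for illustration, echoing the plain-language summary above rather than documenting the real system.

```typescript
// Hypothetical explainable decision record: every allow/deny carries the rule chain,
// a plain-language reason, and a suggested next step. Names are illustrative assumptions.

interface DecisionExplanation {
  decision: "allow" | "deny";
  evaluatedRules: { ruleId: string; outcome: "matched" | "skipped" }[];
  reason: string;             // plain-language summary shown to security teams
  recommendedAction?: string; // e.g. "Grant permission via IT portal"
}

const example: DecisionExplanation = {
  decision: "deny",
  evaluatedRules: [
    { ruleId: "default-allow-hr", outcome: "skipped" },
    { ruleId: "custom-rule-3", outcome: "matched" },
  ],
  reason:
    "Access denied because the user lacks the 'Executive_Read' permission required by Custom Rule #3.",
  recommendedAction: "Grant permission via IT portal or request an exception approval.",
};
```

The point is that every approval or denial carries its own story: which rules were evaluated, why the outcome happened, and what to do next.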

Information Architecture

The Challenge
How do you organize 8 interconnected systems into coherent conceptual models for two very different user types?

Security Teams

Goal
Monitor data access risk, define policies, and simulate impact before enforcing.

AI application builders

Goal
Test and verify access logic during app development.

The Complete User Experience

Design validation & customer feedback

❤️ Customers loved

Classification & Categorization:
"This is the biggest gap in our current tools. The way you handle classification is exactly what we need."

API-First Integration:
"I really like the API route, much better than the manual policy mess we deal with now."

Comprehensive Metadata Store:
"It’s not just permissions, the fact that you capture rich metadata context too makes this way more useful than a traditional access control tool."

✅ Customers acknowledged

100% Problem Confirmation:
Every POC participant validated that permission visibility and explainability were missing in their AI deployments.

Existing Tools Are Inadequate:
"Current DSPMs are useless beyond surface-level scanning — they can’t tell us anything actionable."

Need for the Capability:
Multiple stakeholders expressed intent to build, buy, or partner to acquire these capabilities.

Timing Urgency:
Several teams had active implementation deadlines and requested extended evaluation periods to explore fit.

⚠️ Customers were wary of

Security Integration Concerns:
"There's no way our infosec team will allow unrestricted scanning; this could block adoption."

Scalability Questions:
"We have millions of documents. We’ll need to see real proof that this works at scale."

Architecture Philosophy Divide:
Some stakeholders preferred solving the problem upstream (at the data source) rather than in the application layer.

Tool Fatigue:
Security teams expressed hesitancy about adding “yet another tool” to an already crowded governance stack.

Key Metrics of Success

Over 9 months, we ran 4 enterprise POCs

50% demonstrated clear product fit, with one additional POC showing positive signals

1 of 4 converted into a paid engagement, validating early commercial interest despite being pre-revenue

Enterprise-scale validation tested on environments with 2M+ documents and 180+ employees

Support for 3 key data connectors prioritizing high-value sources based on early customer needs

Learnings

Trust is a design problem
Enterprise AI adoption isn't limited by technology; it's limited by trust. The most sophisticated AI systems fail if decision-makers don't understand or trust how they work.

Validation Comes in Many Forms
Customer enthusiasm, pilot programs, and design pattern adoption across an industry are all forms of validation, even when they don't immediately translate to sustainable business outcomes.