A UX approach to secure and explainable AI access across enterprise environments.
Role
Founding UX Designer
Industry
Data Governance
Timeline
9 Months
Team
CEO & Engineering Team (Director, Founding, AI/ML, Front-End)
At Codified, I faced a fascinating design challenge: helping CISOs at companies like JP Morgan confidently say “yes” to AI innovation.
The reality of enterprise and early-stage design is that you don't learn everything upfront. At Codified, too, our most important insights emerged only after customers started giving us feedback on what we had built.
Through 50+ customer conversations across 9 months, we learned that enterprise AI adoption is a challenge of confidence and trust that can only be solved through iterative learning.
Oversharing & Permission Discipline Issues
Files and folders have been shared too broadly over time, creating security risks when AI systems access this data.
Classification and Categorization Nightmare
Existing tools can't properly categorize and classify sensitive data, making rule-setting impossible.
AI Deployment Blocked by Security Concerns
Companies want to deploy AI tools but security teams are blocking rollouts due to data access concerns.
Manual Permission Management is Unsustainable
Current solutions require manual review and fixing of permissions, which doesn't scale.
Lack of Real-Time Visibility & Auditing
Companies can't see what data their AI systems are accessing in real-time or audit access patterns.
Data security posture management (DSPM) and data loss prevention (DLP) tools only say "there's smoke somewhere" but don't tell you where the fire is, how bad it is, or what to do about it.
These tools are good at pattern matching, scanning documents to find SSNs, credit cards, or other sensitive data. But they stop there, leaving security teams with more questions than answers.
Manual Workarounds Don't Scale.
Multiple customers mentioned creating Personas and Data Owners (people assigned to go fix permissions), then stress-testing the results with adversarial prompts.
Core MVP Features (initial attempt at solving the identified gaps)
"This is helpful, but you're missing something fundamental. Every organization thinks about data differently. Our 'Sensitive' isn't the same as JP Morgan's 'Sensitive.' We need to label data by our business structure, not generic categories."
Customer Feedback Pattern
As customers began trying our categorization and classification system, consistent feedback emerged:
"We need to label data by our business structure, not generic categories"
"Our legal team has different sensitivity requirements than our product team"
"Can we tag things as 'board-level confidential' or 'customer-facing approved'?"
Context matters more than accuracy. Generic categorization and classification, no matter how sophisticated, doesn't match enterprise mental models.
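To make that concrete, here is a minimal sketch of business-structure-aware labeling. The label names, org units, and `Label` shape are hypothetical illustrations of the idea, not our actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    """An org-defined sensitivity label, scoped to a business unit."""
    name: str            # e.g. "board-level confidential"
    org_unit: str        # each unit sets its own bar: Legal != Product
    ai_accessible: bool  # may an AI system read documents with this label?

# Each organization defines its own taxonomy instead of inheriting
# generic categories like Public / Internal / Sensitive.
EXAMPLE_TAXONOMY = [
    Label("board-level confidential", org_unit="Executive", ai_accessible=False),
    Label("customer-facing approved", org_unit="Marketing", ai_accessible=True),
    Label("privileged work product",  org_unit="Legal",     ai_accessible=False),
]
```

The same document can carry different labels at two companies; what a generic tool flags as merely "Sensitive," one org may treat as board-level confidential.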
"We're worried about actually SETTING UP RULES. What if we accidentally block the entire finance team from their own data?We need to know BEFORE we implement what will break.”
The Fear Pattern
Customer after customer expressed the same anxiety about our rules engine:
"We need to see exactly which files and users will be affected, before we hit apply."
"What if this rule ends up blocking access to the wrong folders?"
"Our data lives in layers. How do we know what part of the hierarchy this rule actually touches?"
Visibility means nothing if users are afraid to act on it.
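This anxiety maps to a familiar engineering pattern: a dry run that reports the blast radius before anything is enforced. Below is a rough sketch of the idea; the `Rule`, `File`, and `simulate` names are hypothetical, not our production rules engine:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    path_prefix: str          # the part of the folder hierarchy this rule touches
    blocked_groups: set[str]  # groups that would lose access

@dataclass
class File:
    path: str
    readers: dict[str, str]   # user -> group

def simulate(rule: Rule, files: list[File]) -> dict:
    """Dry run: report exactly what WOULD change, without enforcing anything."""
    affected_files, affected_users = [], set()
    for f in files:
        if not f.path.startswith(rule.path_prefix):
            continue  # the rule never reaches this branch of the hierarchy
        losing = {user for user, group in f.readers.items()
                  if group in rule.blocked_groups}
        if losing:
            affected_files.append(f.path)
            affected_users |= losing
    return {"files": affected_files, "users": sorted(affected_users)}
```

A preview like this answers "what if we accidentally block the entire finance team?" before anyone hits apply.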
"I can see someone got access to sensitive data, but I have no idea why. What rule allowed this? Did we mess something up? If this ever leaks, I won’t just need logs.. I’ll need a story I can stand behind."
The Pattern Behind Every Panic Call
When unexpected access approvals occurred, teams needed to:
Understand the reasoning: Why was this access granted?
Verify policy accuracy: Did we configure our rules correctly?
Defend to stakeholders: How do we explain this to execs or auditors?
Take corrective action: Can we fix this before it happens again?
Unexplained approvals create more panic than denials. Teams fear silently overexposing sensitive data without a clear trail of justification.
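One way to turn an approval into "a story I can stand behind" is to persist a decision record alongside every grant. A sketch of what such a record could hold; the field names and values are illustrative assumptions, not our actual audit format:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AccessDecision:
    """Everything needed to explain, and defend, a single grant."""
    user: str
    resource: str
    decision: str      # "allow" or "deny"
    matched_rule: str  # which rule produced this outcome
    rule_version: str  # answers "did we configure our rules correctly?"
    reasoning: str     # human-readable chain: label -> rule -> outcome
    timestamp: str

decision = AccessDecision(
    user="j.doe",
    resource="/finance/q3-forecast.xlsx",
    decision="allow",
    matched_rule="finance-team-read",
    rule_version="2024-03-12#7",
    reasoning="File labeled 'finance-internal'; rule grants read to group 'finance'.",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(decision), indent=2))  # an auditable story, not just a log line
```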
The Challenge
How do you organize 8 interconnected systems into coherent conceptual models for two very different user types?
Goal
Monitor data access risk, define policies, and simulate impact before enforcing.
Goal
Test and verify access logic during app development.
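For the second persona, "test and verify access logic" implies something a developer can assert against in a test suite. A hypothetical sketch; the endpoint URL, payload shape, and `check_access` helper are assumptions for illustration:

```python
# Hypothetical test a developer might write against an access-check API
# while building an AI feature; endpoint and response shape are assumptions.
import requests

def check_access(user: str, resource: str) -> bool:
    resp = requests.post(
        "https://api.example.com/v1/check-access",
        json={"user": user, "resource": resource},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["decision"] == "allow"

def test_ai_agent_cannot_read_board_docs():
    # verify the access logic before the feature ever ships
    assert check_access("ai-agent", "/exec/board-minutes.pdf") is False
```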
❤️ Customers loved
Classification & Categorization: "This is the biggest gap in our current tools. The way you handle classification is exactly what we need."
API-First Integration: "I really like the API route, much better than the manual policy mess we deal with now."
Comprehensive Metadata Store: "It’s not just permissions; the fact that you capture rich metadata context too makes this way more useful than a traditional access control tool."
✅ Customers acknowledged
100% Problem Confirmation: Every POC participant validated that permission visibility and explainability were missing in their AI deployments.
Existing Tools Are Inadequate: "Current DSPMs are useless beyond surface-level scanning — they can’t tell us anything actionable."
Need for the Capability: Multiple stakeholders expressed intent to build, buy, or partner to acquire these capabilities.
Timing Urgency: Several teams had active implementation deadlines and requested extended evaluation periods to explore fit.
⚠️ Customers were wary of
Security Integration Concerns: "There’s no way our infosec team will allow unrestricted scanning." This alone could block adoption.
Scalability Questions: "We have millions of documents. We’ll need to see real proof that this works at scale."
Architecture Philosophy Divide: Some stakeholders preferred solving the problem upstream (at the data source) rather than in the application layer.
Tool Fatigue: Security teams expressed hesitancy about adding “yet another tool” to an already crowded governance stack.
Over 9 months, we ran 4 enterprise POCs
50% demonstrated clear product fit, with one additional POC showing positive signals
1 of 4 converted into a paid engagement, validating early commercial interest despite being pre-revenue
Enterprise-scale validation tested on environments with 2M+ documents and 180+ employees
Support for 3 key data connectors prioritizing high-value sources based on early customer needs
Trust is a design problem
Enterprise AI adoption isn't limited by technology; it's limited by trust. The most sophisticated AI systems fail if decision-makers don't understand or trust how they work.
Validation Comes in Many Forms
Customer enthusiasm, pilot programs, and design pattern adoption across an industry are all forms of validation, even when they don't immediately translate to sustainable business outcomes.