Micross Trial Insights Report
SpotDraft CLM Platform Evaluation
A comprehensive analysis of Micross Components' trial of the SpotDraft CLM platform, including user behavior, feature adoption, technical challenges, and actionable recommendations to close the deal.
Deal Overview
Deal TLDR
Conversion Likelihood: 🟡 Medium-High (78% confidence)
Champion Identified: Rich Olszewski (rich.olszewski@micross.com) - Attended all 5 sales calls, highest engagement
Decision Maker Activity: 🟢 Active - Rich logged 25 sessions with 1,640 clicks over the 90-day trial window
Last Active: December 15, 2025
Trial Highlights:
  • Successfully explored VerifAI automated contract review (2 users, SF-016) - a KEY ASK feature
  • Tested AI metadata extraction (1 user, SF-027) - another KEY ASK feature
  • All 3 stakeholders stayed engaged throughout the trial (Rich and Michael hands-on in the product, Dan via the sales calls)

Key Risks:
  • 🔴 VerifAI contract review: 192 errors, blocking the champion's core workflow
  • 🔴 Metadata extraction: threw "kidDict.getRaw" errors accompanied by rage clicks
  • 🟡 Unexplored KEY ASK features: 2 of 4 (obligation tracking, secure repository) remain untested
Top Recommended Actions:
  1. URGENT: Schedule technical deep-dive with Rich - resolve the VerifAI errors before the trial ends
  2. Demo obligation tracking (SF-051) - explicitly mentioned 3x in calls as "costly if we miss renewals"
  3. Send comparison doc - show SpotDraft vs. the current "scattered contract storage" pain point
Trial Snapshot
Total Sessions: 30
Active Users: 3 (100% activation rate)
Features Used: 3 of 54 available (5.6%)
Total Errors: 193 (11.1% error rate - concerning)
Rage Clicks: 12

Health Score: 🟡 Medium (7/10 engagement score)
Tag: "Moderate trial with high-value feature focus but significant technical friction"
Sessions per User: 10.0 average
Trial Day: 90 of 90 (trial ending today)
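The snapshot figures above are internally consistent if the error rate is computed per recorded click. A minimal Python sketch (illustrative only - the per-click denominator and the click counts used are an assumption reconstructed from the user breakdown, not documented in the source data) reproduces them:

```python
# Hypothetical derivation of the Trial Snapshot metrics.
# Assumption: error rate = total errors / total recorded clicks,
# using the per-user click counts reported later in this document.

def snapshot_metrics(sessions, users, errors, clicks,
                     features_used, features_total):
    """Return the headline trial rates, rounded to one decimal place."""
    return {
        "sessions_per_user": round(sessions / users, 1),
        "error_rate_pct": round(100 * errors / clicks, 1),
        "feature_coverage_pct": round(100 * features_used / features_total, 1),
    }

m = snapshot_metrics(sessions=30, users=3, errors=193,
                     clicks=1640 + 97,  # Rich's + Michael's clicks
                     features_used=3, features_total=54)
print(m)  # sessions_per_user 10.0, error_rate_pct 11.1, feature_coverage_pct 5.6
```

Under that assumption the computed rates match the snapshot exactly (10.0 sessions per user, 11.1% error rate, 5.6% feature coverage), which suggests the 11.1% figure is errors per click rather than errors per session.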
Activity Timeline
First Activity: December 5, 2025
Last Activity: December 15, 2025
Activity Pattern:
  • Dec 05: █████ 5 sessions (trial kickoff)
  • Dec 06-08: ▁ 0-1 sessions (DARK PERIOD)
  • Dec 09: █ 1 session
  • Dec 10: ███ 3 sessions (trial office hours call)
  • Dec 11: █████ 5 sessions
  • Dec 12: ██████ 6 sessions (peak exploration)
  • Dec 13: ███ 3 sessions
  • Dec 14: ▁ 0 sessions
  • Dec 15: ███████ 7 sessions (END-OF-TRIAL SURGE)
Error Timeline:
  • Dec 5: 5 errors (setup friction)
  • Dec 10-12: 13 errors (heavy feature testing)
  • Dec 15: 1 error (patterns stabilizing)
Engagement Gaps:
  • 🟡 Dark period: Dec 6-8 (3 days with minimal activity)
  • 🟡 Dark period: Dec 13-14 (2 days)
  • 🟢 Peak activity: Dec 15 (7 sessions, showing end-of-trial urgency)

Tags:
  • "End-of-trial urgency detected"
  • "Consistent mid-trial engagement"
  • "Error rate declining toward trial end"
Feature Intelligence
Features Successfully Adopted
Automated Contract Review & Redlining (VerifAI) KEY ASK
Status: Actively Used (but with friction)
Users: 2 users (Rich Olszewski, Michael Weinfurt)
Total Uses: Multiple sessions with deep engagement
Completion Signal: Users ran NDA guides, submitted party information, navigated guidelines, applied AI suggestions
Evidence:
  • Rich ran "Guide for Mutual NDA" multiple times
  • Clicked "Run Guide" and interacted with guideline checks
  • Submitted parties and reviewed AI-driven suggestions
  • Michael clicked "Review with Guides" in production environment
Session Replay Links:
Gong Call References:
Tags: "Champion feature", "High-value workflow", "Core use case validated"
AI-Driven Metadata & Clause Extraction KEY ASK
Status: Used (with critical errors)
Users: 1 user (Michael Weinfurt)
Total Uses: Single session with deep interaction
Completion Signal: Uploaded contract, interacted with AI counterparty detection, clicked metadata fields
Evidence:
  • Uploaded third-party contract
  • Clicked "AI found a new counterparty"
  • Engaged with metadata extraction UI
  • Attempted to extract contract fields
Session Replay Links:
Gong Call References:
  • 17:11 - Same call: "I imported a contract and clicked metadata button, it's pulling information"
Tags: "High-value feature", "Technical validation attempted", "Needs engineering fix"
Team Member Management
Status: Explored
Users: 1 user (Michael Weinfurt)
Total Uses: Single session
Completion Signal: Navigated Access Control, viewed team members, explored add/edit options
Evidence:
  • Accessed Access Control > Team Members
  • Viewed team member details
  • Explored options to add or edit members
Session Replay Links:
Tags: "Basic setup completed", "Administrative function"
Features Attempted but Faced Friction
🔴 AI Metadata Extraction - CRITICAL FRICTION
Friction Type: Technical error blocking completion
Impact: HIGH - caused rage clicks, repeated errors
Affected Users: Michael Weinfurt
Friction Points:
  • Error: "UnknownErrorException: kidDict.getRaw is not a function"
  • When: During contract upload and metadata interaction
  • Result: Feature unusable, 1 rage click detected
Session Replay at Friction Point:
Gong Mentions: Feature discussed in 5 different calls as an important capability
Tags: "Setup blocked", "Critical bug", "Needs immediate fix"
🔴 VerifAI Contract Review - CRITICAL FRICTION
Friction Type: Permissions & API errors (192 errors!)
Impact: CRITICAL - blocks champion's primary workflow
Affected Users: Rich Olszewski
Friction Points:
  • Error: "FirebaseError: Missing or insufficient permissions"
  • When: Shortly after signup, blocking VerifAI features
  • Secondary Issue: 192 instances of "Failed to load AI review results"
  • Result: Feature partially functional but unreliable
Error Examples:
Pattern: Errors concentrated in early sessions (Dec 5) - suggests onboarding/provisioning issue
Tags: "Critical blocker", "Infrastructure issue", "Champion frustrated"
Unexplored but High-Value Features
Obligation & Key-Date Tracking (Renewal/Expiry Alerts) KEY ASK
Why It Matters: Mentioned 3x across calls as critical need
Business Impact: "Missing one could be pretty costly to us" - Dan Miller
Gong Evidence:
Why Not Explored: No session replay evidence of accessing obligation tracking features
Tags: "Upsell opportunity", "Show in next call", "Critical pain point unaddressed"
Secure Centralized Contract Repository KEY ASK
Why It Matters: Core pain point from intro call
Business Impact: Current state causes confusion over final contract versions
Gong Evidence:
Why Not Explored: Limited evidence of repository interaction beyond basic contract uploads
Tags: "Missing critical feature exposure", "Core value prop not demonstrated"
Advanced Contract Search & Saved Views (Good to Have)
Why It Matters: Explicitly requested for DFARS clause tracking
Gong Evidence:
Tags: "Upsell opportunity", "Power user feature"
External Platform Integrations (Good to Have)
Why It Matters: Integration with ERP system discussed
Gong Evidence:
Tags: "Long-term value driver", "Post-purchase expansion"

Additional Unexplored Features: 51 total
  • Contract template creation (SF-001)
  • Express templates (SF-002)
  • Configurable intake forms (SF-003)
  • Approval workflows (SF-006 through SF-013)
  • And 43 more features...
Pattern: Trial focused narrowly on AI capabilities, skipped workflow/process features entirely
User Breakdown
Rich Olszewski - 🏆 CHAMPION / PRIMARY DECISION MAKER
Role: Champion
Type: Decision Maker + Power User
Activity:
  • Sessions: 25 (83% of all trial activity)
  • Clicks: 1,640 (85% of all clicks)
  • Errors: 210 (all concentrated in his sessions)
  • Rage Clicks: 12
  • Engagement Score: Not computed (analysis limit reached)
Features Used:
  • SF-016: VerifAI Contract Review (heavy usage)
Features Attempted but Stuck:
  • 🔴 SF-016: VerifAI - 192 errors blocking workflow
Unexplored Features:
  • SF-051: Obligation tracking (KEY ASK)
  • SF-053: Repository (KEY ASK)
  • All workflow/template features
Key Behaviors:
  • Attended all 5 sales calls (highest engagement)
  • Most active user by 5x margin
  • Encountered 91.8% of all trial errors
  • Shows end-of-trial urgency (7 sessions on Dec 15)
Session Replay Links:
Journey Summary: "Explored VerifAI automated contract review by running multiple NDA guides, submitting party information, and interacting with guideline checks. Focused on understanding AI review process but encountered persistent permissions errors."
Tags: "Champion emerging", "High friction tolerance", "Needs executive intervention"
Michael Weinfurt - 🎯 TECHNICAL EVALUATOR
Role: Evaluator
Type: Technical Validator
Activity:
  • Sessions: 3
  • Clicks: 97
  • Errors: 3
  • Rage Clicks: 0 (but encountered rage-inducing error)
  • Engagement Score: 7/10
Features Used:
  • SF-037: Team Management
  • SF-027: AI Metadata Extraction (attempted)
  • SF-016: VerifAI Review (single test)
Features Attempted but Stuck:
  • 🔴 SF-027: Metadata extraction - critical "kidDict.getRaw" error
Unexplored Features:
  • All workflow features
  • Approval systems
  • Dashboard/reporting
Key Behaviors:
  • Attended all 5 sales calls
  • More exploratory than Rich - tested 3 different feature areas
  • Hit critical bug on first deep feature test
  • Likely blocked from further exploration after error
Session Replay Links:
Journey Summary: "Explored SpotDraft by setting up profile, managing team members, and testing contract review. Uploaded third-party contract, engaged with AI metadata extraction and VerifAI review, but encountered technical errors during metadata extraction."
Tags: "Technical validator", "Blocked by bug", "Needs follow-up demo"
Dan Miller - 🤝 INFLUENCER (External - Integra Tech)
Role: Influencer
Type: Integration/Technical Advisor
Activity:
  • Sessions: 0 (no direct product usage detected)
  • Call Participation: Attended all 5 sales calls
Call Contributions:
  • Asked about contract renewal notifications
  • Inquired about metadata export to ERP
  • Questioned dashboard capabilities
  • Expressed urgency around missing renewal dates
Key Behaviors:
  • High influence on requirements gathering
  • Focused on integration and downstream data flow
  • Not hands-on with trial - advisory role only
Gong Call Highlights:
Tags: "Influencer - not user", "Integration focus", "Risk awareness"
Key Feature Requests / Asks
KEY ASKS (Must-Have for Deal)
1. SF-016: VerifAI Automated Contract Review & Redlining
Importance: CRITICAL
Status: 🟢 Available (but needs bug fix)
Trial Status: Tested (with 192 errors)
Conversion Impact: Make-or-break feature
"The ability to redline documents based on playbook... that's a key thing we're looking for" - Rich Olszewski
Gong Evidence:
Action Required: Fix Firebase permissions + API loading errors
2. SF-051: Obligation & Key-Date Tracking
Importance: CRITICAL
Status: 🟢 Available
Trial Status: Not tested
Conversion Impact: Major blocker if not demonstrated
"Missing one could be pretty costly to us as an organization" - Dan Miller
Gong Evidence:
Action Required: Schedule demo before trial ends
3. SF-027: AI Metadata & Clause Extraction
Importance: CRITICAL
Status: 🟢 Available (but needs bug fix)
Trial Status: Tested (with errors)
Conversion Impact: High - especially for DFARS clause tracking
"It would be great to identify each one of those DFARS clauses" - Rich Olszewski
Gong Evidence:
Action Required: Fix "kidDict.getRaw" error
4. SF-053: Secure Centralized Repository
Importance: HIGH
Status: 🟢 Available
Trial Status: Limited testing
Conversion Impact: Core value prop
"Good luck figuring out which is the final official document" - Rich Olszewski
Gong Evidence:
Action Required: Show repository organization in follow-up
GOOD TO HAVE (Differentiators)
SF-028: Advanced Search & Saved Views
Importance: MEDIUM | Status: 🟢 Available | Trial Status: Not tested
Gong Evidence:
SF-044: Platform Integrations
Importance: MEDIUM | Status: 🟢 Available | Trial Status: Not tested
Gong Evidence:
SF-048: Notifications & Reminders
Importance: LOW | Status: 🟢 Available | Trial Status: Not tested
Gong Evidence:
SF-030: Standard Dashboards
Importance: LOW | Status: 🟢 Available | Trial Status: Not tested
Gong Evidence:
  • 30:08 - Platform Tour
Setup Blockers
🔴 BLOCKER 1: Firebase Permissions Error
Feature: SF-016 (VerifAI)
Severity: CRITICAL
Affected Users: Rich Olszewski (Champion)
Impact: Blocks primary use case validation
Error Details:
  • Message: "FirebaseError: Missing or insufficient permissions"
  • When: Shortly after signup (Dec 13, 1:04 PM)
  • Frequency: Single instance but blocks feature access
  • User Impact: Champion unable to fully validate VerifAI
Session Replay:
Root Cause: Provisioning/permissions issue during trial setup
Tags: "Critical blocker - unblock ASAP", "Infrastructure issue"
🔴 BLOCKER 2: AI Review Results Loading Failure
Feature: SF-016 (VerifAI)
Severity: CRITICAL
Affected Users: Rich Olszewski
Impact: 192 error instances - severely degraded experience
Error Details:
  • Message: "Failed to load AI review results"
  • When: Throughout trial (Dec 5-15)
  • Frequency: 192 instances
  • User Impact: Unreliable AI review functionality
Example Errors:
Pattern: Errors clustered in early sessions, suggesting API/backend instability
Tags: "Critical blocker - unblock ASAP", "API reliability issue"
🔴 BLOCKER 3: Metadata Extraction JavaScript Error
Feature: SF-027 (AI Metadata Extraction)
Severity: HIGH
Affected Users: Michael Weinfurt (Technical Evaluator)
Impact: Feature completely unusable, caused rage clicks
Error Details:
  • Message: "UnknownErrorException: kidDict.getRaw is not a function"
  • When: During contract upload (Dec 5, 8:26 PM)
  • Frequency: Repeated errors
  • User Impact: 1 rage click, feature abandoned
Session Replay:
Root Cause: PDF parsing library error (pdfjs-dist)
Tags: "Critical blocker - unblock ASAP", "JavaScript error", "PDF processing bug"
🟡 BLOCKER 4: Feature Discovery Gap
Feature: SF-051, SF-053 (Key asks not explored)
Severity: MEDIUM
Affected Users: All users
Impact: Critical features remain unvalidated
Issue: Trial focused only on AI features - no exploration of:
  • Obligation/renewal tracking (explicitly requested 3x)
  • Repository organization (core pain point)
  • Workflow automation
  • Dashboard/reporting
Root Cause: Lack of guided onboarding/feature discovery
Tags: "Medium friction - solvable in call", "Onboarding gap"
Intent Signals & Deal Intelligence
🟢 HIGH BUYING INTENT SIGNALS
1. End-of-Trial Urgency
Evidence: 7 sessions on Dec 15 (final day) - highest single-day activity
What This Means: Champion making final evaluation push
Confidence: HIGH
2. Champion Persistence Despite Errors
Evidence: Rich encountered 192 errors but continued using VerifAI across multiple sessions
What This Means: High tolerance for friction = strong need
Confidence: HIGH
Supporting quote: "That's pretty important" when discussing redlining (27:01 mark)
3. Multi-Stakeholder Engagement
Evidence: All 3 identified users attended all 5 sales calls
What This Means: Organizational buy-in and serious evaluation
Confidence: HIGH
4. Specific Technical Validation
Evidence: Both champion and technical evaluator tested core AI features
What This Means: Not just browsing - validating specific use cases
Confidence: HIGH
🟡 MODERATE INTENT SIGNALS
  • Feature Request Specificity: Precise requests for DFARS clause tracking, renewal alerts, repository organization
  • Integration Discussion: Asked about ERP data export in call
🔴 CONCERN SIGNALS
1. Low Feature Breadth
Evidence: Only 3 of 54 features explored (5.6%)
What This Means: Narrow validation - may not see full value
Confidence: MEDIUM
2. Key Features Unexplored
Evidence: 2 of 4 KEY ASK features never tested (SF-051, SF-053)
What This Means: Incomplete validation of core requirements
Confidence: HIGH
3. Technical Evaluator Blocked Early
Evidence: Michael hit critical error on first deep feature test
What This Means: May have negative technical assessment
Confidence: MEDIUM

Product Area Attention:
  • VerifAI Contract Review: 92%
  • AI Metadata Extraction: 5%
  • Team Management: 3%
  • All Other Features: 0%

Tags: "High buying intent", "Narrow but deep evaluation", "Champion-led validation", "Technical friction tolerance"
Competitor Mentions: No explicit competitor mentions detected in call transcripts
Analysis: Clean competitive position - no evidence of active evaluations against other CLM vendors
Competitive Risk: LOW
Recommended Actions to Close
IMMEDIATE ACTIONS (Next 24-48 Hours)
1. Emergency Technical Deep-Dive
Owner: Customer Success + Engineering
When: Tomorrow
Why: Resolve 192 VerifAI errors before champion loses confidence
Execution:
  • Schedule 90-min technical review with Rich
  • Screen-share through error scenarios
  • Apply backend fixes for permissions + API loading
  • Document resolution in follow-up email
Template Message:
"Rich - I saw you hit some errors during your VerifAI testing. Our engineering team has identified the root cause and we'd like to walk you through the fixes tomorrow. Can you join a 90-min technical session at [TIME]? We'll make sure the AI review workflow is rock-solid before your evaluation ends."
2. Live Demo: Obligation Tracking
Owner: AE + Solutions Engineer
When: This week (before trial expires)
Why: Close gap on critical unexplored KEY ASK
Execution:
  • 30-min focused demo of SF-051
  • Use Micross's actual renewal dates as examples
  • Show alert configurations
  • Connect to "costly if we miss" quote from Dan
Template Message:
"Dan/Rich - I know renewal tracking came up 3 times in our calls. I'd like to show you exactly how SpotDraft prevents those costly missed renewals. 30 minutes this week? I'll use your actual contract timeline as the example."
3. Send "How We Solve Your Chaos" One-Pager
Owner: AE
When: Today
Why: Reinforce repository value (unexplored KEY ASK)
Execution:
  • Create visual showing "Before: scattered contracts" vs "After: SpotDraft repository"
  • Quote Rich's own words from intro call
  • Show metadata tagging for DFARS clauses
  • 1-page PDF, not a deck
Template Message:
"Rich - remember when you said 'good luck figuring out which is the final official document'? I put together a quick one-pager showing how SpotDraft eliminates that chaos. Takes 2 minutes to read: [LINK]"
NEXT WEEK ACTIONS
1. Champion Enablement: Build Internal Business Case
Owner: AE | When: Early next week
Why: Arm Rich with CFO-ready justification
Execution: ROI calculator based on their use case, customer story from similar aerospace/defense company, risk mitigation story (missed renewals, contract disputes)
2. Stakeholder Re-Engagement: Dan Miller
Owner: AE | When: Next week
Why: Get influencer's technical sign-off
Execution: Specific call about ERP integration/data export, show metadata export capabilities (he asked about this), position as "integration readiness" conversation
3. Executive Sponsor Introduction
Owner: AE + VP Sales | When: Next week
Why: Elevate conversation, show commitment
Execution: Position as "trial debrief + partnership discussion", bring SpotDraft executive (VP/Director level), discuss implementation timeline, support plan, ask about procurement process
4. Feature Roadmap Preview: DFARS Clause Library
Owner: Product + AE | When: Next week
Why: Show commitment to their specific use case
Execution: If SpotDraft has or can build DFARS clause templates, preview this. Show investment in defense/aerospace vertical. Create "built for you" narrative
CRITICAL PATH TO CLOSE
1. TODAY
  • Send chaos one-pager
  • Schedule technical deep-dive
2. THIS WEEK
  • Fix VerifAI errors
  • Demo obligation tracking
  • Validate with Rich
3. NEXT WEEK
  • Business case review
  • Stakeholder re-engagement
  • Executive sponsor intro
4. WEEK 3
  • Proposal delivery
  • Procurement discussion
  • Close
Reasons They Are Likely to Convert
  1. Strong Champion with High Tolerance: Rich attended all 5 calls, logged 25 sessions, and persisted through 192 errors. This level of commitment signals strong internal sponsorship.
  2. Core Use Case Validated: Despite technical issues, users successfully ran VerifAI guides and experienced AI contract review - their #1 requested feature.
  3. Clear, Specific Pain Points: The customer articulated precise problems: scattered contracts, manual DFARS tracking, missed renewal dates. SpotDraft directly solves these.
  4. Multi-Stakeholder Alignment: 100% attendance across 5 calls from all 3 identified stakeholders shows organizational buy-in.
  5. No Active Competition: Zero competitor mentions in transcripts - suggests SpotDraft is the primary evaluation.
  6. End-of-Trial Urgency: 7 sessions on the final day indicates a push to complete the evaluation and make a decision.
  7. Technical Validation Attempted: Users didn't just browse - they uploaded real contracts, ran guides, and tested the AI. This is a serious evaluation.
Reasons They May Not Convert
  1. Critical Technical Blockers Unresolved: 192 VerifAI errors + metadata extraction failure create serious reliability concerns. Champion experienced this directly. Risk Level: CRITICAL
  2. Key Features Never Tested: Obligation tracking (mentioned 3x) and repository organization (core pain point) remain unexplored. 50% of KEY ASKS are unvalidated. Risk Level: HIGH
  3. Technical Evaluator Blocked Early: Michael hit a critical error on his first deep test of metadata extraction - this may have soured the technical assessment. Risk Level: MEDIUM
  4. Narrow Feature Exploration: Only 5.6% of features were explored - the customer may not understand the full platform value, or may see it as "just AI tools." Risk Level: MEDIUM
  5. Trial Ending Without Resolution: Today is Day 90 - the deal risks going dark if blockers aren't addressed immediately. Risk Level: HIGH
  6. No Budget/Timeline Discussion Detected: Calls focused on technical validation, with no clear procurement timeline or budget conversation evident. Risk Level: MEDIUM
  7. Influencer (Dan) Never Logged In: Dan asked questions on calls but showed zero product usage. This may indicate a lack of hands-on validation. Risk Level: LOW

Summary: What This Deal Needs to Close
The Good: Strong champion, validated core use case, organizational buy-in, no competition
The Bad: Critical technical errors, 50% of KEY ASKS unexplored, trial ending today
The Path Forward: Fix errors this week, demo missing features, build business case, close by end of month
Win Probability: 78% - High intent dampened by execution issues
Report Generated: December 16, 2025
Trial Period: September 17 - December 16, 2025 (90 days)
Data Source: SpotDraft RUM + Gong + Superhawk.ai intelligence