AI Readiness — Next Steps Dashboard

Submitted Plans · Deadline: 13th March 2026 · 12 Leaders

12 Leaders · 11 Plans Submitted · 1 Not Submitted · 3 Below Benchmark · ~15+ Roles w/ People Impact
Summary — All Leaders
⚠ Not submitted: Saurabh Bansal (Solutions) — plan blank

| Leader | Function | AI Score | Roles | 30-Day Efficiency Target | 60-Day Efficiency Target | People Impact · 30 Days | People Impact · 60 Days |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Divi Ramola | User Research | 50% | 1 | 50%+ across all | 50%+ across all | | |
| Vaibhav Agarwal | CX / PX | 49% | 14 | 30–70% · avg ~48% | 40–90% · avg ~65% | | 6 role categories reduced: outsourced L1, QA auditors, trainers (CX + PX), PX L1, L1 partner |
| Ashish Agrawal | Design | 44% | 10 | 20–50%+ · avg ~45% | 30–75%+ · avg ~55% | | |
| Gurpreet | HR | 40% | 4 | 40% across all | 60% across all | | |
| Dhruv Mathur | Product | 40% | 5 | Not specified | Not specified | | |
| Maanas Dwivedi | Genie (AI/Data) | 40% | 4 | 80–100% (already at end state for deployment) | 60–75% (formalising protocols) | 1 role at end state: Deployment Architect, fully AI-run | |
| Harmeet Hora | Partner Experience | 38% | 6 | 50–100% · avg ~75% (Analysts: 100%; Partner WE: 50%) | 50–100% · avg ~70% (Router Diagnostics: 100%) | 1 role removed: Analyst role redundant | 4 role categories impacted: L1 calling agents ↓ · NQT 12 → 5–6 · NQT Head redundant · Remote BD redundant |
| Marut Singh | Engineering | 37% | 3 | 40–50% · avg ~45% | 80–90% · avg ~85% | | Non-adapters replaced: devs not hitting +20% productivity likely replaced in 6 months |
| Ashutosh Mishra | NetBox (Ops) | 29% | 4 | 20–50% · avg ~40% | 40–70% · avg ~60% | | Team focus shifts: application layer → deep tech (no direct removal) |
| Rohan Agarwal | Finance | 30% | 6 | 10–20% · avg ~13% (constrained by EY Audit + Series B DD) | 20–30% · avg ~24% | | |
| Saurabh Bansal | Solutions | 28% | 6 | ⚠️ Not submitted | ⚠️ Not submitted | | |
| Abhinav | Marketing | 20% | 8 | 80% for 3 areas (Performance, Partner text, Consumer text) | 25–80% · avg ~65% (Creative: 25%; Comms deploy: 80%; UGC: 75–100%) | | 3 role categories impacted: Performance 3 people → 1–1.5 · Comms 25% role redistributed · agency savings on creative |
Divi Ramola · User Research
1 role · Within Benchmark · AI Score: 50% (Benchmark: 40–60%)
  • Tool Setup Standardise transcription, storage & tagging protocols. Build agent for auto-tagging and topic extraction. Fine-tune NPS coding & theme clustering prompts. 50%+
  • Research Ops Fine-tune AI sentiment tagging for open-text NPS and other survey responses — test on existing data. 50%+
  • Audit & Quality Create 'AI override log' — every correction logged as calibration data. Document 'what good looks like'. Baseline
  • Qualitative Analysis Run AI theme clustering for past data. Build contradiction-flagging and theme-validation pipeline. 50%+
  • Insight Synthesis Fine-tune based on AI + human judgment loop (accept/edit/reject → log for calibration). Document gaps. 50%+
  • Nuance Understanding Add flags for Needs, Behaviors, Motivations — move toward understanding root intent behind responses. 50%+
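The "AI override log" above is essentially a calibration dataset: each human correction (accept / edit / reject) becomes a training signal. A minimal sketch of what one logged correction could look like; the record fields and the acceptance-rate baseline are illustrative assumptions, not the team's actual schema:

```python
import datetime
from dataclasses import dataclass

# Hypothetical record shape for the 'AI override log': every human
# correction to an AI-generated tag is stored as calibration data.
@dataclass
class OverrideRecord:
    response_id: str
    ai_tag: str
    human_tag: str  # empty string means the AI tag was rejected outright
    action: str     # "accept" | "edit" | "reject"
    logged_at: str

def log_override(log, response_id, ai_tag, human_tag):
    # Derive the action from how the reviewer handled the AI suggestion.
    if human_tag == ai_tag:
        action = "accept"
    elif human_tag:
        action = "edit"
    else:
        action = "reject"
    rec = OverrideRecord(response_id, ai_tag, human_tag, action,
                         datetime.datetime.now(datetime.timezone.utc).isoformat())
    log.append(rec)
    return rec

def acceptance_rate(log):
    # Share of AI tags accepted unchanged, one possible quality baseline.
    return sum(r.action == "accept" for r in log) / len(log) if log else 0.0

log = []
log_override(log, "nps-101", "billing", "billing")   # accepted as-is
log_override(log, "nps-102", "speed", "coverage")    # edited by reviewer
log_override(log, "nps-103", "other", "")            # rejected
```

Tracking `acceptance_rate` over time would give the "what good looks like" baseline the Audit & Quality item calls for.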
Vaibhav Agarwal · CX / PX Operations
14 roles · Within Benchmark · ⚠ People Impact · AI Score: 49% (Benchmark: 40–60%)
30 Days
  • Director – CX Ops Implement CX dashboards for SLA, escalation, VOC, repeat complaints. Move reviews to AI dashboards. 30%
  • Mgr – CX L1 / Vendor Finalise AI voicebot for inbound queries. Pilot automation for recharge, status, service queries. 60%
  • Mgr – CX Platforms Configure auto ticket routing, alerts, workflow automation in Ameyo / CRM / ticketing tools. 40%
  • Mgr – Quality CX Implement AI QA dashboards, speech analytics, VOC reporting automation. Reduce manual reports. 50%
  • TL – Quality Deploy AI call audit tool for auto QA. Reduce manual audit volume. 70%
  • PX L1 Operations Launch Partner AI bot + KB for installation / app / access queries. Route repetitive queries to bot. 70%
  • TL – Partner L1 Route partner queries to AI first. Track deflection rate & reduce agent load. 60%
60 Days
  • Mgr – CX L1 Ops Shift majority inbound calls to AI. Keep agents only for complex cases. 85%
  • TL – Quality AI audits 100% of calls. Keep only 1 reviewer for sampling. 90%
  • Mgr – Quality CX Fully move to AI QA + VOC analytics. Manual QA only for calibration. 70%
  • TL – Training CX AI-led training, auto evaluation, performance-based modules. Trainers only for behaviour coaching. 70%
  • PX L1 Operations Partner AI bot handles majority queries. Agents only for exceptions. 85%
  • TL – Partner L1 AI handles first response. Agents handle escalations only. 80%
  • PX Training Team AI certification + automated SOP training + knowledge bot for partners. 70%
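The deflection rate tracked in the TL – Partner L1 item reduces to simple arithmetic. A sketch, assuming "deflection" means partner queries the AI bot resolves without escalation to a human agent (the definition is an assumption, not stated in the plan):

```python
# Illustrative deflection-rate computation. Assumed definition: the share
# of partner queries the AI bot closes without a human agent touching them.
def deflection_rate(total_queries: int, escalated_to_agent: int) -> float:
    if total_queries == 0:
        return 0.0
    return (total_queries - escalated_to_agent) / total_queries

# e.g. 1,000 partner queries in a week, 200 escalated to agents
rate = deflection_rate(1000, 200)  # 0.8, i.e. 80% deflection
```

Watching this number week over week is what "track deflection rate & reduce agent load" amounts to in practice.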
👥 People Impact (60-day end state)
Reduce outsourced L1 seats (CX L1 Ops)
Remove majority of QA auditors — 1 sampling reviewer retained
Reduce trainer dependency in CX Training
Reduce PX L1 agents — AI handles majority queries
Reduce L1 partner team — AI first response
Reduce trainer dependency in PX Training
Ashish Agrawal · Design
10 roles · Within Benchmark · AI Score: 44% (Benchmark: 40–60%)
30 Days
  • Core Sprint Build Claude prompts for JTBD framing. Create AI-assisted prototyping workflow. Run on 2 real sprints. 50%+
  • Corner Cases & PRD Catalogue the past 10 solutions for metric patterns. Build Claude prompt for corner-case enumeration from specs. 50%+
  • Bug Intelligence Read last 50 #bug_front Slack entries — tag as tech bug / UX issue / noise. Build AI classification prompt. 50%+
  • Visual Communication Create brand-calibrated prompt library — test on 5 past briefs, compare AI output with production designs. 50%+
  • Firefighting Categorise the last 10 fast-track requests by type. Build fix templates and reusable prompt library. 50%+
  • Metrics & Measurement Audit current doc gaps in component library. Use AI to auto-generate docs for undocumented components. 50%+
60 Days
  • Core Sprint Run AI-assisted sprint workflow on 4 consecutive sprints. Handover APK with all workflows and edge cases. 75%+
  • Corner Cases & PRD Handover APK covering all workflows and edge cases. PRD template ready with AI-generated sections. 75%+
  • Post-Launch UAT Expand corner case library as living document. Build PRD handover checklist with automated validation. 35–40%
  • Bug Intelligence Refine classification prompt on 30 days of real data. Move from daily scrape to real-time Slack monitoring. 50%+
  • Design System Create constraint-aware prompts per surface. Input brief + constraints → output design-approved concepts. 35–40%
  • Metrics & Measurement Make AI audit a mandatory step before every design handover — zero deviations to reach tech undetected. 50%+
Gurpreet · HR
4 roles · Within Benchmark · AI Score: 40% (Benchmark: 30–50%)
30 Days
  • Talent Acquisition Systemise manual follow-up transactions — stage-wise candidate status, interview confirmations, offer tracking using AI-drafted communications. 40%
  • HR Operations Systemise key manual follow-ups across employee lifecycle — with employees, candidates, managers. Reduce turnaround time on standard requests. 40%
60 Days
  • Talent Acquisition Direct sourcing integration with Naukri, Instahyre, LinkedIn — auto-ingest profiles into ATS. AI screening and shortlisting. 60%
  • HR Operations Internal communications agent — draft and schedule policy announcements, engagement updates, all-hands prep using AI. 60%
Dhruv Mathur · Product
5 roles · Within Benchmark · No % targets set · AI Score: 40% (Benchmark: 30–45%)
  • Enable Team Standardise Claude skills for all generation artefacts — data analysis, PRDs, specs — with high standards. 10x productivity target.
  • Pilot — Customer Pod Optimise ways of working across PDT for learning velocity. Establish checkpoints for objective review.
  • Coordination & Comms End-to-end automation of stakeholder coordination — intake, progress updates, release updates.
  • Feature Monitoring High-quality event instrumentation, A/B testing platform integrated, automated monitoring of new feature performance.
  • Ways of Working Standardise roles & expectations, context sharing protocols, and team coordination across all pods.
Maanas Dwivedi · Genie (AI / Data)
4 roles · Below Benchmark · Alt. Format · ⚠ People Impact · AI Score: 40% (Benchmark: 50–70%)

Note: Plan submitted in role-current-state format (not 30/60 day table). Summarised below.

  • Deployment Architect AI already writes all deployment code, manages CI/CD, maintains production. This role is at end state — no further human actions needed. 100%
  • Experiment Manager Document the collaboration protocol explicitly so it is repeatable. Shift from the human initiating every experiment to AI proposing threshold changes based on continuous monitoring. ~80%
  • Algorithm Designer Write down the implicit human-AI collaboration protocol explicitly so it becomes repeatable for every new risk dimension. AI already codes all mathematical implementations. ~75%
  • System Architect Document the implicit collaboration pipeline for any sub-system. AI already stress-tests specs and catches critical dependencies. Key shift: formalise the handoff so it scales. ~60%
👥 People Impact
Deployment Architect role at end state — function is fully AI-run in production. Human oversight only.
Harmeet Hora · Partner Experience
6 roles · Within Benchmark · ⚠ People Impact · AI Score: 38% (Benchmark: 25–45%)
30 Days
  • Analysts Deploy AI SQL generation (Claude Code + curated Snowflake context) — automate all recurring reports and queries. 100%
  • Partner Working Exp. The Brain: Partner calls → L1 selects category → system shows answer → L1 clicks action. Reduce call handle time significantly. 50%
  • Data Foundation Clean Snowflake data foundation — fix frequency, accuracy, nomenclature for Internet Experience monitoring prerequisites. IX-ready
👥 People Impact (30 days)
1 Analyst role redundant — AI SQL generation covers full scope
60 Days
  • Partner Working Exp. Same answer engine, AI delivers the answer instead of human. AI or human — customer's choice. L1 call agents become AI-assisted or redundant. 50%+
  • Internet Experience Deploy AI network monitoring & anomaly detection for IX NQTs — auto-benchmark, predictive alerts, automated diagnostics. 50%+
  • Remote BD Build self-serve partner onboarding app (~100/month scale). Needs Product/Tech support. 80%
  • Router Diagnostics Pilot AI router diagnostics — auto-RCA from device logs, auto-fix for common issues. Needs Tech. 100%
👥 People Impact (60-day end state)
L1 calling agents: AI takes over first response — headcount reduction
NQT team: Potential reduction from 12 → 5–6 with AI network monitoring
NQT Head: Role becomes redundant — org reports directly into PSH
Remote BD: 1 role redundant with self-serve onboarding app
Marut Singh · Engineering
3 roles · Within Benchmark · ⚠ People Impact · AI Score: 37% (Benchmark: 30–50%)
30 Days
  • QA Automation using AI tools on customer-facing products. Reduce manual test cycles. 50%
  • Developer Cursor already adopted widely (visible in team metrics). Formalise code generation, system design, monitoring automation. Mobile app team strength may reduce. ~40%
60 Days
  • QA Full AI-driven automation on customer products. Significantly reduce manual testing headcount requirement. 80%
  • Developer Target 90% automation of repetitive dev tasks. Engineers who do not pick up additional 20% may face replacement in 6 months. 90%
👥 People Impact (60-day end state)
Mobile app team strength to reduce — specialisation becoming less critical
Developers not adapting to AI (target +20% productivity) likely to be replaced within 6 months
Ashutosh Mishra · NetBox (Ops)
4 roles · Below Benchmark · AI Score: 29% (Benchmark: 30–50%)
30 Days
  • NetBox QA Set up automated system for QA — proper clients, automated test scenarios, test data management. Reduces time taken to test a scenario. 50%
  • NetBox R&D Create a partner network persona, simulate a network model, validate the simulation. Enables quick RnD simulation setup. 20%
  • AI Auditing Tools Frontend and backend development using AI. Team shifts focus from application layer to real deep tech problems. 50%
60 Days
  • NetBox QA Anomaly detection via trained AI system on collected data. Flag anomalies + root cause analysis. Won't fully solve resource crunch but reduces manual burden. 70%
  • NetBox R&D Use NetBox QA setup for validating R&D experiments. 40%
  • AI Auditing Tools Build AI agents for 360° auditing and outlier detection across network operations. 70%
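As one concrete shape for the 60-day anomaly-detection item, a z-score flag over collected readings could stand in for whatever trained system the team actually builds (the method and the sample data are assumptions for illustration):

```python
import statistics

# Flag readings whose z-score against the collected baseline exceeds a
# threshold. A stand-in sketch, not the team's actual trained model.
def flag_anomalies(readings, threshold=2.5):
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # no variation in the baseline, nothing to flag
    return [i for i, x in enumerate(readings)
            if abs(x - mean) / stdev > threshold]

# Hypothetical latency samples: the 400 ms spike at index 7 is the anomaly.
latency_ms = [40, 42, 41, 39, 43, 40, 41, 400, 42, 41]
anomalies = flag_anomalies(latency_ms)
```

The flagged indices would then feed the root-cause analysis step, keeping humans on exceptions rather than on raw monitoring.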
Rohan Agarwal · Finance
6 roles · Within Benchmark · 🔒 EY Audit + Series B DD · AI Score: 30% (Benchmark: 30–50%)
Context: EY Big 4 Audit (first time), Series B Financial DD in 6–9 months, Fixed Asset Audit + ICFR

30 Days
  • FP&A Finalise investor-grade MIS template (P&L by segment, burn, runway). Live budget vs actuals dashboard. 20%
  • Finance Controller EY Audit management using Claude — identify gaps, prepare documentation. Statutory compliance AI-assisted drafting. 10%
  • Fin Ops PG-to-bank recon: daily automated match (PG settlement MIS ↔ bank). Monthly close management. 10%
  • Tax Manager AI-assisted first draft of GST filings. TDS calendar: auto-compute from vendor master. 10%
  • Accounts Payable Invoice OCR (Nanonets / Zoho): eliminate manual data entry. Target 80%+ straight-through processing. 20%
  • Accounts Receivable PG-to-bank recon of FY26: extend Fin Ops monthly recon to AR. 10%
60 Days
  • FP&A Build AOP financial model: bottom-up revenue, headcount, capex with labelled assumptions. AI variance analysis. 30%
  • Finance Controller EY Audit: preparation of reconciliations for audit management. Statutory audit support AI-assisted. 20%
  • Fin Ops End-to-end daily recon pipeline (PG → bank → books). Exceptions-only human review. 25%
  • Tax Manager 26AS vs books: TDS reconciliation. GSTR-9 annual return + tax provision (current & deferred). 20%
  • Accounts Payable Partner onboarding: no vendor activated without PAN (ITD API) + GST (GSTN API) verified. 3-way matching. 30%+
  • Accounts Receivable PG-to-bank recon: extend Fin Ops daily recon to full AR cycle. 20%
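The "exceptions-only human review" step of the PG → bank recon is, at its core, a keyed match. A minimal sketch; the field names (`utr`, `amount`) are illustrative, not the actual settlement MIS schema:

```python
# Match PG settlement rows to bank credits on UTR, then compare amounts;
# only mismatches surface for human review (hypothetical field names).
def reconcile(pg_rows, bank_rows):
    bank_by_utr = {row["utr"]: row for row in bank_rows}
    exceptions = []
    for row in pg_rows:
        bank = bank_by_utr.get(row["utr"])
        if bank is None:
            exceptions.append({"utr": row["utr"], "issue": "missing in bank"})
        elif bank["amount"] != row["amount"]:
            exceptions.append({"utr": row["utr"], "issue": "amount mismatch"})
    return exceptions

pg = [{"utr": "U1", "amount": 1000},
      {"utr": "U2", "amount": 500},
      {"utr": "U3", "amount": 250}]
bank = [{"utr": "U1", "amount": 1000},
        {"utr": "U2", "amount": 450}]
exceptions = reconcile(pg, bank)  # only U2 and U3 need a human
```

Extending the same match from daily PG ↔ bank to bank ↔ books gives the full end-to-end pipeline the 60-day item describes.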
Saurabh Bansal · Solutions
6 roles · Within Benchmark · ⚠ Plan Not Submitted · AI Score: 28% (Benchmark: 25–40%)
⚠️ Both 30-day and 60-day sections were submitted blank. Plan pending.

Abhinav · Marketing
8 roles · Below Benchmark · ⚠ People Impact · AI Score: 20% (Benchmark: 35–55%)
30 Days
  • Performance Campaigns Campaign setup automation for pre-determined parameters — TG definition, spend limits, CAC/CPB targets. Low time impact currently. 80%
  • Partner Text Comms AI tool exists — needs an update, as the comms playbook evolved over Feb–Mar. Expand usage. 15% of the comms role to be generated by the respective teams (Product, PX, Supply). 80%
  • Consumer Text Comms AI tool exists — refine and expand across teams. 10% of comms role to be generated by Product/CX using the AI tool. 80%
60 Days
  • Performance Campaigns Deploy 3 sub-agents as a team: (1) campaign evaluation — which creatives to boost; (2) campaign setup; (3) performance monitoring. Between Rahul, Kashish, Nikhil — only 1–1.5 people needed. 50–75%
  • Creative Development AI-led creative development — human to brief and QA. Tools: HeyGen, Nano-banana. Agency cost savings flow in (reduce agency time and effort). 25%
  • Comms Deployment & Audit AI agents to capture outgoing comms, compare against guidelines, and rate them. Currently ad hoc and manual — 25% of comms role impacted. 80%
  • Influencer & UGC UGC creatives from consumer NPS comments → videos. Influencer campaigns. Currently not being done — pure additive capacity. 75–100%
👥 People Impact (60-day end state)
Performance campaigns: From 3 people (Rahul + Kashish + Nikhil) → 1–1.5 people needed
Partner & consumer text comms: 25% of comms role redistributed to Product, PX, CX, Supply teams
Creative: Agency cost savings (reduced agency time + effort)