Months of LeetCode. Maybe an hour of behavioral interview prep. That's the gap.

Your answer was good.
Your interviewer heard something else.

That's not a skills gap. It's a translation gap — and it's costing Indian engineers offers they've already earned.

Your technical skills got you the interview.
Your communication style is what cost you the offer.

In a pool where everyone is technically strong, the behavioral round is the final gate. The engineers who clear it — and land the offer — are the ones who invested here.

Live translation — same project, same work
What you said: "We redesigned the checkout flow and reduced cart abandonment by 18%."
What they heard: "Someone on their team did something impressive. I don't know what this person did."
What the offer went to: "I owned the checkout redesign end to end. Cart abandonment dropped 18%."

Neither answer is wrong. They're different professional languages. US behavioral interviews are calibrated for one of them.

How it works

An AI interviewer that measures what you actually say — not what you planned to say.

You're interviewed by an AI trained on US hiring expectations — in a hiring manager's voice. It asks follow-up probes, pushes back on vague answers, and goes deeper when it needs to. Exactly what you'll face. After each 30-minute session, your answers are scored across 6 cultural dimensions and rewritten side by side — your words, then what a successful US candidate would say instead. Video sessions add a 7th dimension: how you show up on screen.

Step 1
The interview

An AI interviewer — trained on US hiring expectations — conducts a real 30-minute behavioral session. Opens with your resume. Asks follow-up probes. No scripts, no hints. Exactly what you'll face.

Step 2
The scoring

Every answer is scored across 6 dimensions: ownership language, quantified impact, STAR structure, conciseness, bottom-line delivery, and active voice. After Session 2, these scores combine into your Cultural Readiness Score — your baseline. Think of it like an athlete's sprint time: a single number that tells you exactly where you stand, and moves as you train. Audio sessions score all 6. Video sessions add a 7th dimension: how you show up on screen.

Step 3
The rewrite

Your exact answer, side by side with how a strong US candidate would say it — with every change annotated. Not a generic script. A rewrite of your specific words, your specific story, in the language that lands.

Step 4
Track your progress

Your Cultural Readiness Score is benchmarked against peers at your experience level. Run more sessions and watch it move. Dimension by dimension, you can see exactly what's improving and what still needs work — the same way an athlete tracks splits, not just finish times.

What your score looks like
Ownership language: 3.8 (interview-ready: 7.5)
Quantified impact: 4.2 (interview-ready: 7.0)
Bottom-line delivery: 4.6 (interview-ready: 7.0)

Synthetic example — illustrative of what Session 1 results look like. Your actual scores are based on your specific answers across all 6 dimensions.

What a rewrite looks like
Your answer

"There was a situation where our data pipeline was failing intermittently and the team was under pressure. We looked into it and after some investigation we were able to resolve the issue and things improved after that."

Rewritten

"I diagnosed a race condition in our ingestion pipeline that was silently dropping 12% of records. I isolated the failure, patched the scheduler, and added monitoring. Incidents dropped to zero over the next 60 days."

Context-first → result-first — bottom-line delivery
"Things improved" → "zero incidents, 60 days" — quantified impact
Vague → specific actions — STAR structure
"We looked into it" → "I diagnosed" — active voice + ownership
Worth noting

Some companies give you a framework. Amazon's 14 Leadership Principles are explicitly used to anchor behavioral questions — most engineers study them. Meta publishes 6 core values, Google has its own. But knowing a company's stated values doesn't tell you what the interviewer is actually listening for. None of these frameworks map directly to what gets evaluated in the room.

Most American startups and mid-size companies have no published interview framework at all. The behavioral round is unstructured, the questions are open-ended, and the ownership gap is fully exposed with nothing to anchor to.

Arpan's rubric is calibrated across all of these formats — Amazon-style structured, open-ended, and unscripted. The gap shows up in all of them.

The rubric

Built by someone who made the final call — on both sides of the table.

Most interview advice comes from coaches who've studied interviews. Arpan's rubric was built by someone who spent 17 years as a hiring decision-maker in US tech — and who simultaneously ran an Indian product and engineering entity for a US startup. Both sides of the table, at the same time. That's not a credential. It's the only vantage point from which this rubric could exist.

High-growth startups

No published framework. Behavioral rounds are unstructured and ownership-heavy. Interviewers are looking for founders in engineers' clothing — people who drive, not people who participate.

Growth-stage companies

Structured but not published. Competency frameworks exist internally — interviewers are trained to probe for individual impact, not team output. Collaborative answers read as low-agency.

Big tech

Principle-driven and structured. Amazon, Meta, and Google each have distinct behavioral formats. The gap is the same across all of them — only the scaffolding changes.

The 6 scoring dimensions
Ownership language: WE:I ratio, agency framing
Quantified impact: metrics, percentages, scope
STAR structure: situation, task, action, result
Bottom-line delivery: outcome first, context second
Conciseness: signal-to-noise ratio per answer
Active voice: passive construction detection
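For the technically curious: two of these signals, the WE:I ratio and passive-construction detection, can be approximated with simple text heuristics. This is an illustrative toy sketch only — it is not Arpan's actual scoring implementation, and the function names and word lists here are my own assumptions:

```python
import re

def we_i_ratio(answer: str) -> float:
    """Toy WE:I ratio: collective pronouns vs. first-person-singular pronouns.
    A lower ratio suggests stronger ownership language. Hypothetical heuristic,
    not Arpan's real rubric."""
    words = re.findall(r"[a-z']+", answer.lower())
    we = sum(w in {"we", "us", "our", "ours"} for w in words)
    i = sum(w in {"i", "me", "my", "mine"} for w in words)
    return we / max(i, 1)  # avoid division by zero when "I" never appears

def passive_hits(answer: str) -> int:
    """Naive passive-voice detector: a form of 'to be' followed by a word
    ending in -ed/-en. Crude, but shows the idea."""
    pattern = r"\b(?:was|were|been|being|is|are)\s+\w+(?:ed|en)\b"
    return len(re.findall(pattern, answer.lower()))

team_framed = "We looked into it and the issue was resolved by the team."
owned = "I diagnosed the race condition and patched the scheduler."

print(we_i_ratio(team_framed), passive_hits(team_framed))  # 1.0, 1
print(we_i_ratio(owned), passive_hits(owned))              # 0.0, 0
```

Real scoring would need far more linguistic nuance (a parser, not regexes), but the contrast is the point: the team-framed answer scores high on "we" and contains a passive construction; the owned answer does neither.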

"The rubric isn't based on what interview coaches say works. It's based on what I was actually evaluating when I made hiring decisions — and validated through direct conversations with Indian engineers at Meta, Amazon, Goldman Sachs, Twilio, and others who've lived this gap firsthand."

— Andrew, Founder

What "not the right fit" actually means

The feedback you got was probably wrong about why.

If you've walked out of a behavioral round thinking it went well — and then received "not the right fit" — it almost certainly wasn't about fit. US interviewers aren't evaluating your technical depth in behavioral rounds. They're listening for one thing: ownership.

Indian professional culture defaults to team framing. That's not wrong — it's a different professional language. US behavioral interviews are calibrated for the other one. Arpan teaches you to code-switch between both.

"Folks from India are brought up differently and the culture that you expect in US will be embodied best by people living in US. The expectation itself is a culture shift for us. In any case, we are always happy to understand newer ways to be. We prepare."

— Engineer, Rippling

Same project  ·  Same work  ·  Different language  ·  Different outcome
Andrew with host mother in Kalimpong
With my host mother in Kalimpong, West Bengal — where my connection to India began, over 20 years ago.
Andrew with India team in Kerala
With the India team, Kerala — 2025. I've hired, managed, and interviewed engineers just like you for years. That's why I built Arpan.
🇺🇸
17 years as an operator in American tech
🇮🇳
6 months living in Kalimpong, West Bengal
🚗
Launched Uber Dallas
💼
COO & Head of Product, venture-backed startups
👥
Final hiring authority & manager, US Series A startup with Indian entity

Why this exists

I've been on the other side of this table.

I spent 17 years as an operator in American startups — including Uber and companies with successful exits. Throughout that career I worked with offshore and globally distributed engineering teams — at Neiman Marcus, at Minibar Delivery, and others. The deepest chapter came last: serving as final hiring authority and manager for an Indian product and engineering entity at a US startup, where I saw the gap from both sides at the same time.

"I'd read their résumé again. This person clearly did impressive work. But in the interview, I couldn't see it."

I spent six months living with a Brahmin Hindu family in Kalimpong — and have returned to India many times since, most recently working alongside the engineering team I managed. The family gave me the name Arpan — offering in Sanskrit. I understand why Indian engineers communicate the way they do. I also understand exactly what American interviewers are listening for, because I've been that interviewer — and the manager those engineers reported to.

Indian professional culture values team credit, thorough context, and collaborative framing. American interviewers expect individual ownership, bottom-line-up-front answers, and quantified impact. Neither is better. They're different professional languages. I've operated fluently in both. Arpan is the bridge.

Most interview coaching teaches you to memorize scripts in one of them. Arpan teaches you to code-switch between both — so you can be who you are, in the language that wins US behavioral interviews.

And here's what most people miss: the interview is where this gap first matters, but it's not where it stops. In a market where technical skills are assumed, behavioral communication is how you differentiate — at the offer stage, in salary negotiations, and in every performance review and promotion conversation after you land. Master the language once. It compounds.

Why existing tools don't solve this

You've probably already tried the obvious things.

What they give you vs. what you need
LeetCode
Technical fluency. Passes the coding round. Doesn't touch the behavioral round at all.
Generic AI chatbots
A better answer when you ask for one. Can't show you what you say automatically under pressure.

Neither one was built for this.

LeetCode is built for technical rounds. AI chatbots are built for everything. Arpan is built for one thing only: the behavioral gap that costs Indian engineers offers they've already earned.


The gap isn't in what you know to say. It's in what comes out automatically — under pressure, on the clock, when you're not monitoring yourself. You can't see that pattern by reading about it. You have to hear yourself.


You spent years mastering the technical side. The engineers who break through to US salaries treat the behavioral round the same way — as a skill that compounds. Not a box to check. A capability to build.

The Diagnostic — ₹2,999

The only way to know your pattern is to hear yourself.

Two real behavioral interviews with an AI trained on US hiring expectations. Your own transcript, scored and rewritten. Most engineers are surprised by what they hear — the pattern is rarely the one they expected to fix.

2 real behavioral interviews, 30 minutes each
Scored across 6 cultural dimensions — including your WE:I ratio
Side-by-side rewrite of your actual answers
7-day access · ₹2,999 credited toward Pro if you upgrade
The ROI framing

The behavioral round is not a formality. It is the round where Indian engineers — who pass technical screens at the same rate as anyone else — lose offers. Not because they lack capability. Because the communication style that works in Indian professional environments signals something different to US hiring managers.

Engineers who fix this don't just land offers — they negotiate from a position of demonstrated competence. The same patterns that win the behavioral round are what get you taken seriously in compensation conversations, performance reviews, and promotion decisions. ₹2,999 is the start of that trajectory.

Begin your Diagnostic →

₹2,999 · 7-day access · Credit applied to Pro if you upgrade