That's not a skills gap. It's a translation gap — and it's costing Indian engineers offers they've already earned.
Your technical skills got you the interview.
Your communication style cost you the offer.
In a pool where everyone is technically strong, the behavioral round is the final gate. The engineers who clear it — and land the offer — are the ones who invested here.
Neither answer is wrong. They're different professional languages. US behavioral interviews are calibrated for one of them.
How it works
You're interviewed by an AI trained on US hiring expectations, in a hiring manager's voice. It asks follow-up probes, pushes back on vague answers, and goes deeper when it needs to. After each 30-minute session, your answers are scored across 6 cultural dimensions and rewritten side by side: your words, then what a successful US candidate would say instead.
An AI interviewer — trained on US hiring expectations — conducts a real 30-minute behavioral session. Opens with your resume. Asks follow-up probes. No scripts, no hints. Exactly what you'll face.
Every answer is scored across 6 dimensions: ownership language, quantified impact, STAR structure, conciseness, bottom-line delivery, and active voice. After Session 2, these scores combine into your Cultural Readiness Score — your baseline. Think of it like an athlete's sprint time: a single number that tells you exactly where you stand, and moves as you train. Audio sessions score all 6. Video sessions add a 7th dimension: how you show up on screen.
Your exact answer, side by side with how a strong US candidate would say it — with every change annotated. Not a generic script. A rewrite of your specific words, your specific story, in the language that lands.
Your Cultural Readiness Score is benchmarked against peers at your experience level. Run more sessions and watch it move. Dimension by dimension, you can see exactly what's improving and what still needs work — the same way an athlete tracks splits, not just finish times.
Synthetic example — illustrative of what Session 1 results look like. Your actual scores are based on your specific answers across all 6 dimensions.
"There was a situation where our data pipeline was failing intermittently and the team was under pressure. We looked into it and after some investigation we were able to resolve the issue and things improved after that."
"I diagnosed a race condition in our ingestion pipeline that was silently dropping 12% of records. I isolated the failure, patched the scheduler, and added monitoring. Incidents dropped to zero over the next 60 days."
Some companies give you a framework. Amazon's 16 Leadership Principles are explicitly used to anchor behavioral questions, and most engineers study them. Meta publishes 6 core values; Google has its own. But knowing a company's stated values doesn't tell you what the interviewer is actually listening for. None of these frameworks map directly to what gets evaluated in the room.
Most American startups and mid-size companies have no published interview framework at all. The behavioral round is unstructured, the questions are open-ended, and the ownership gap is fully exposed with nothing to anchor to.
Arpan's rubric is calibrated across all of these formats — Amazon-style structured, open-ended, and unscripted. The gap shows up in all of them.
The rubric
Most interview advice comes from coaches who've studied interviews. Arpan's rubric was built by someone who spent 17 years as a hiring decision-maker in US tech — and who simultaneously ran an Indian product and engineering entity for a US startup. Both sides of the table, at the same time. That's not a credential. It's the only vantage point from which this rubric could exist.
No published framework. Behavioral rounds are unstructured and ownership-heavy. Interviewers are looking for founders in engineer clothing — people who drive, not people who participate.
Structured but not published. Competency frameworks exist internally — interviewers are trained to probe for individual impact, not team output. Collaborative answers read as low-agency.
Principle-driven and structured. Amazon, Meta, and Google each have distinct behavioral formats. The gap is the same across all of them — only the scaffolding changes.
"The rubric isn't based on what interview coaches say works. It's based on what I was actually evaluating when I made hiring decisions — and validated through direct conversations with Indian engineers at Meta, Amazon, Goldman Sachs, Twilio, and others who've lived this gap firsthand."
— Andrew, Founder
What "not the right fit" actually means
If you've walked out of a behavioral round thinking it went well — and then received "not the right fit" — it almost certainly wasn't about fit. US interviewers aren't evaluating your technical depth in behavioral rounds. They're listening for one thing: ownership.
Indian professional culture defaults to team framing. That's not wrong — it's a different professional language. US behavioral interviews are calibrated for the other one. Arpan teaches you to code-switch between both.
"Folks from India are brought up differently and the culture that you expect in US will be embodied best by people living in US. The expectation itself is a culture shift for us. In any case, we are always happy to understand newer ways to be. We prepare."
— Engineer, Rippling
Why this exists
I spent 17 years as an operator in American startups — including Uber and companies with successful exits. Throughout that career I worked with offshore and globally distributed engineering teams — at Neiman Marcus, at Minibar Delivery, and others. The deepest chapter came last: serving as final hiring authority and manager for an Indian product and engineering entity at a US startup, where I saw the gap from both sides at the same time.
I spent six months living with a Brahmin Hindu family in Kalimpong — and have returned to India many times since, most recently working alongside the engineering team I managed. The family gave me the name Arpan, "offering" in Sanskrit. I understand why Indian engineers communicate the way they do. I also understand exactly what American interviewers are listening for, because I've been that interviewer — and the manager those engineers reported to.
Indian professional culture values team credit, thorough context, and collaborative framing. American interviewers expect individual ownership, bottom-line-up-front answers, and quantified impact. Neither is better. They're different professional languages. I've operated fluently in both. Arpan is the bridge.
Most interview coaching teaches you to memorize scripts in one of them. Arpan teaches you to code-switch between both — so you can be who you are, in the language that wins US behavioral interviews.
And here's what most people miss: the interview is where this gap first matters, but it's not where it stops. In a market where technical skills are assumed, behavioral communication is how you differentiate — at the offer stage, in salary negotiations, and in every performance review and promotion conversation after you land. Master the language once. It compounds.
Why existing tools don't solve this
LeetCode is built for technical rounds. AI chatbots are built for everything. Neither was built for this. Arpan is built for one thing only: the behavioral gap that costs Indian engineers offers they've already earned.
The gap isn't in what you know to say. It's in what comes out automatically — under pressure, on the clock, when you're not monitoring yourself. You can't see that pattern by reading about it. You have to hear yourself.
You spent years mastering the technical side. The engineers who break through to US salaries treat the behavioral round the same way — as a skill that compounds. Not a box to check. A capability to build.
The Diagnostic — ₹2,999
Two real behavioral interviews with an AI trained on US hiring expectations. Your own transcript, scored and rewritten. Most engineers are surprised by what they hear; it's rarely the pattern they expected to fix.
The behavioral round is not a formality. It is the round where Indian engineers — who pass technical screens at the same rate as anyone else — lose offers. Not because they lack capability. Because the communication style that works in Indian professional environments signals something different to US hiring managers.
Engineers who fix this don't just land offers — they negotiate from a position of demonstrated competence. The same patterns that win the behavioral round are what get you taken seriously in compensation conversations, performance reviews, and promotion decisions. ₹2,999 is the start of that trajectory.
₹2,999 · 7-day access · Credit applied to Pro if you upgrade