Why Agency, Not Skilling, Is the Real Bottleneck for Indian Students
The question that has shaped my work
"How can technology empower people to make better decisions and take control of their own futures?"
That question has quietly anchored everything I've worked on for the last five years. It's a question I reflect on often, and one I believe most education and skilling initiatives fail to take seriously.
We talk constantly in India about the employability crisis: 1.5 million engineering graduates a year, seven in ten deemed unemployable, and AI accelerating the gap. Almost every public conversation responds with the same prescription: more skilling, better content, sharper assessments, faster certifications.
But this response treats the problem as a deficit of skill. After running pilot programs across more than 5,000 students in Tier-2 and Tier-3 colleges and over 1,000 students at top-tier colleges like IITs, IIITs, and BITS, I am now convinced the upstream bottleneck is different. It is a deficit of agency: the capacity to see one's options, update one's beliefs about the world, and make deliberate choices in one's own interest.
But agency doesn't materialise out of thin air. From what we've seen, it follows a chain: information → assimilation → trust → agency → action. You first need access to the right information. Then you need to absorb and make sense of it. Then you need to trust the source, and trust yourself, enough to act on it. Only then do you have real agency, and only then does action follow.
This blog post pulls together what I have learnt about the gaps in that chain, and how AI, when designed with trusted humans in the loop, can begin to close them.
Five threads, one thesis
Across five years, I've been working on what looked at first like five different problems. But once you dig deeper and look beyond the noise, they are actually the same problem in different forms.
Thread 1: Know Thy Choice: From information to lived exposure
Know Thy Choice was built on a simple premise: most career decisions in India are made during high school, often by parents and teachers, with very little understanding of what specific careers actually involve. We responded by building experienceships (short, structured, project-based exposures to real careers under industry mentors) alongside psychometric diagnostics, coaching, and counselling.
What we observed there shaped everything that came later:
- Students could not aspire to careers they had never seen.
- Diagnostics alone (psychometrics, interest inventories) produced labels but not insight. Students who were told "you might fit a UX career" rarely changed behaviour unless they actually got to try UX work for a week.
- Career counselling worked best when it was iterative, not a one-time event. Decisions made deliberately in Grade 9 looked very different from default-driven decisions made in Grade 12.
The lesson: information alone doesn't move people to action. Knowing about a career does not give a student the capacity to choose it. Information has to be assimilated through lived exposure, structured reflection, and ongoing guidance before it becomes something a student can actually act on.
Thread 2: BlendNet's last-mile work in Bihar: Trust through empowered intermediaries
Before AI-mentored learning, we worked on a much more pragmatic question: how do you get any kind of digital service into the hands of low-income users in rural India? Our COMPASS 2024 paper documented a pilot in Bihar where 258 retailers reached over 68,000 end-users in three months by acting as digital service intermediaries. (More on this in the BlendNet project.)
The insight from that work was unexpected: the rural user's biggest barrier was not access to the service; it was confidence that engaging with the service would actually help them. Information was available, but assimilation hadn't happened, and without a trusted face, it never would. Local retailers, people users already knew, closed that confidence gap in a way no app interface could. The retailer wasn't delivering the service. The retailer was delivering trust in the service, and trust is the precondition for action.
That finding is why every subsequent product we have built has empowered a "human in the loop": not because AI is inadequate, but because trust is the bridge between knowing and doing, and humans build that bridge faster than software does.
Thread 3: Comuniqa: LLMs vs. human experts, and why both win together
Comuniqa, built with Microsoft Research India and IIIT Delhi, asked a simple question: can LLMs help non-native English speakers in India improve their speaking skills, and how does that compare to human experts? (ACM paper)
The study split participants into three groups: LLM-only, human-expert-only, and a combined group. The findings, which we reported honestly, were instructive:
- The LLM-based system was accurate: pronunciations were assessed correctly, feedback was substantive.
- But the LLM lacked something humans provided: empathy and personalised emotional support. Learners using only the LLM reported feeling less understood, even when the technical feedback was correct.
- The combined LLM-plus-human-expert group produced the strongest outcomes.
The lesson here is deeper than it first appears. Pure-AI systems can deliver information and even help with assimilation, but the moments where a learner needs to update a belief about themselves (their voice, their capability) require human warmth. Trust at those junctures is what turns feedback into growth. AI mass-customizes; humans mass-encourage. The combination is more than the sum of the parts.
Thread 4: Sakshm AI: Scaffolding without dependency
This is the work I'm most often asked about. Sakshm AI, in collaboration with BITS Pilani, IIIT Delhi, and Microsoft Research India, deployed an AI tutor named "Disha" to help engineering students at top Indian institutions learn data structures and algorithms, but with a deliberate constraint. Disha was built to refuse direct answers and instead guide students through Socratic questioning. (ACM paper | Press: Karnataka Higher Education Dept MOU)
The study covered 3,951 registered users and 1,170 active users, with structured surveys (n=45) and in-depth interviews (n=25). A few findings stand out:
The "self-weaning" pattern. The most engaged users (Q3, "highly engaged" by problems attempted) leaned on the AI 30.8% of the time. The actual super-users (Q4) leaned on it only 12.7%. As students became more proficient, they self-regulated their use of AI assistance. This is the opposite of the "AI dependency" worry that dominates popular discourse. When designed well, AI scaffolding transfers capability, it does not replace it.
The Socratic ceiling. Students embraced guided questioning for easy and medium problems. On hard problems, many disengaged from the AI tutor entirely and bailed to ChatGPT for direct answers. The lesson: scaffolding works only when the learner has enough foundation to reason from. Without it, hints feel useless. The assimilation step hasn't happened yet.
The trust design choice. "Disha" means "direction" in Hindi. The deliberately Indian, female-coded persona reduced friction with users in a way that a generic Western-named bot would not have. Cultural design is not a footnote. It is part of how trust gets built.
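The self-weaning comparison above reduces to a simple quartile analysis. Here is a minimal sketch of how such an analysis can be computed; the function name and the synthetic data in the usage note are mine for illustration, not the study's actual pipeline or numbers:

```python
import numpy as np

def reliance_by_quartile(attempted, ai_helps):
    """Split students into engagement quartiles by problems attempted,
    then report the mean AI-reliance rate (helps / attempts) per quartile.
    Returns [Q1, Q2, Q3, Q4], least- to most-engaged."""
    attempted = np.asarray(attempted, dtype=float)
    ai_helps = np.asarray(ai_helps, dtype=float)
    order = np.argsort(attempted)          # sort students by engagement
    quartiles = np.array_split(order, 4)   # four roughly equal groups
    return [float(np.mean(ai_helps[q] / attempted[q])) for q in quartiles]
```

With hypothetical data, a "self-weaning" cohort would show the reliance rate rising through Q3 and then dropping sharply for Q4, mirroring the 30.8% vs. 12.7% pattern described above.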
We also built and open-sourced a Bayesian A/B testing framework on top of this deployment. With 1,186 participants comparing two prompt variants, we found a statistically significant 6.35% lift in engagement during the "discussion phase": the moments where the AI tutor helps students assimilate information and articulate their own approach before solving.
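For readers unfamiliar with the approach, the core of a Bayesian A/B comparison for engagement rates is a Beta-Bernoulli model: put a Beta prior on each variant's rate, update with observed counts, and estimate the probability that one variant beats the other. The sketch below is a generic illustration under that model, not the actual API of our open-sourced framework, and the counts in the usage note are hypothetical:

```python
import numpy as np

def prob_b_beats_a(success_a, n_a, success_b, n_b, samples=200_000, seed=0):
    """Monte Carlo estimate of P(rate_B > rate_A) under Beta(1,1) priors.
    Each variant's posterior is Beta(1 + successes, 1 + failures)."""
    rng = np.random.default_rng(seed)
    post_a = rng.beta(1 + success_a, 1 + n_a - success_a, samples)
    post_b = rng.beta(1 + success_b, 1 + n_b - success_b, samples)
    return float(np.mean(post_b > post_a))
```

For example, `prob_b_beats_a(300, 590, 340, 596)` would estimate how confident we should be that the second prompt variant genuinely engages more students, rather than the observed lift being noise.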
The takeaway: AI can scaffold capability development without creating dependency, but only when the design respects the learner's existing foundation and the cultural context of trust. Trust in the tool is what keeps learners engaged long enough to build real capability.
Thread 5: Aspireworks: Institutional trust as a distribution channel
Aspireworks, our AI-powered career-navigation platform for engineering students, has now been deployed across two Karnataka government colleges in partnership with the Department of Higher Education, Government of Karnataka, and Microsoft Research India. The structured PoC report covers more than 200 students at SKSJTI Bangalore and GEC Ramanagara. A subsequent multi-state Codathon brought another 1,000+ students into the loop across BITS, IITs, IIITs, and other partner institutions.
Two findings from the Karnataka pilots are particularly relevant:
The govt-job default is a belief, not a choice. 29% of our PoC students named government jobs and civil services as their primary aspiration. When pressed in interviews, very few could articulate why. Most reported it as the default expectation in their families and communities. After exposure to alternative pathways through Aspireworks (private sector roles, entrepreneurship, post-graduation routes), preferences shifted measurably. We have not yet tracked whether shifted preferences led to shifted outcomes. That is the central question our proposed evaluation will answer.
Channel matters more than content. We initially assumed that the platform's quality would drive adoption. What actually drove adoption was the college's endorsement of the program. Students engaged with Aspireworks because their faculty and administration framed it as part of their education, not as an external service to optionally try. Government-channel distribution is not just a scale strategy. It is a trust strategy.
The synthesis: from information to action
When I look back at these five threads, a single chain emerges. Every project is, at heart, trying to move people along the same progression:
Information → Assimilation → Trust → Agency → Action
- Information: Help the user encounter options they hadn't previously considered. Diagnostics, career exposure, labour-market mapping. Without this step, the user's choice space is artificially small.
- Assimilation: Help the user make sense of what they've seen and update their beliefs. Structured reflection, AI coaching, peer comparison. This is the hardest step, and the one most skilling platforms skip entirely. Information that isn't assimilated is just noise.
- Trust: Earn the user's confidence that acting on what they've learned will actually help them. Trusted intermediaries, institutional endorsement, cultural design, human-in-the-loop warmth. Without trust, even well-assimilated information stalls before it becomes a decision.
- Agency: The user now has the capacity to make a deliberate choice. They can see the path, they understand it, and they trust the support around them enough to commit.
- Action: Help the user build credible capability and convert it into outcomes: project-based learning, scaffolded practice, placement, earnings, livelihood.
Most education and skilling work in India jumps straight to action (step 5): more courses, more certifications, with weak handoffs at steps 1 and 2 and almost nothing at step 3. The chain breaks most often at trust.
What we still don't know, and what we are trying to find out
Across these five threads, here is what we have learned, and where the gaps remain:
What we have evidence for:
- AI can scaffold capability without producing dependency, when designed with care (Sakshm AI).
- LLM + human-expert combinations outperform either alone for trust-sensitive, belief-shifting interactions (Comuniqa).
- Empowered intermediaries and trusted channels are foundational to digital adoption in low-resource contexts (BlendNet).
- Government-endorsed distribution drives engagement at scale in college settings.
- Career exposure through experienceships meaningfully shifts what students believe is possible for themselves (Know Thy Choice).
What we still need to demonstrate:
- Whether shifted beliefs and shifted engagement translate into shifted outcomes: placement rates, salaries, time-to-employment, government-prep diversion. This is the hardest measurement and the one we are now planning to set up and answer in a study.
- Whether the five-step loop can be delivered end-to-end at the scale of an Indian state, not just a few colleges.
- Whether outcomes for women, first-generation graduates, and Tier-3 college students differ meaningfully, and what the system needs to do differently for each group.
References
- Sakshm AI: Disha: A Socratic AI Tutor for Programming Education. ACM COMPASS 2025. Paper | Project
- Comuniqa: Exploring Large Language Models for Improving English Speaking Skills. ACM COMPASS 2024. Paper | Project
- BlendNet Last-Mile Service Delivery: ACM COMPASS 2024. Paper | Project
- Sakshm AI LLM Evaluation Framework: Bayesian A/B testing middleware, MIT-licensed. GitHub
- Know Thy Choice: Career exploration platform for school students in India. Website
- Hidalgo, Ed. "How can a child aspire to a career they don't know exists?" TEDxKids@ElCajon.