A new kind of side hustle has taken hold among physicians. Doctors are quietly logging on after hours to do something unexpected: teach artificial intelligence how to be a better doctor.
The trend has accelerated sharply over the past year, and it is reshaping how the AI industry sources the expert knowledge it needs to make its models credible in high-stakes domains like medicine. The numbers below are drawn from Mozibox Research's proprietary database of physician-reported compensation — data collected directly from doctors in the Mozibox community as they navigate this new market. They reveal something striking: this is not hobby work.
| Figure | Metric |
| --- | --- |
| $180 | Average hourly rate across 34 reported gigs |
| $420 | Top reported rate — radiology, via Mercor |
| $170 | Median hourly rate (range: $50–$420/hr) |
What AI Tutoring Actually Means

The phrase "AI tutor" can sound abstract, but the work is concrete. Physicians hired through platforms like Mercor, Handshake, Scale AI, Outlier, and Snorkel AI are asked to write detailed medical prompts and clinical scenarios, review AI-generated responses for accuracy, perform quality assurance, and label the datasets used to train models.
These tasks fall under the broader umbrella of human-in-the-loop training — methods like reinforcement learning from human feedback (RLHF) and supervised fine-tuning that underpin the reliability of large language models across industries. In medicine, the stakes are considerably higher, which is exactly why AI labs are willing to pay physician-level rates to get it right.
- Prompt writing
- AI response review
- Quality assurance (QA)
- Dataset labeling
- Model red-teaming
- Clinical scenario creation
Who Is Doing the Work — and Why
Mozibox Research's dataset — built from physician self-reports submitted on the Mozibox platform — captures responses from doctors across more than a dozen specialties: emergency medicine, anesthesiology, family medicine, radiology, internal medicine, pediatrics, psychiatry, neurology, and others. Experience levels span residents and fellows through mid-senior attendings and executives at the VP and CMO level. It is, to our knowledge, the most detailed view of physician AI tutoring compensation available anywhere.
"Most physicians doing this could earn more picking up extra shifts. But they want to be part of AI training — to have this front-row seat of where AI is headed."
— Physician and Mozibox contributor, as reported by The SF Standard, April 2026
The flexible, asynchronous nature of the work is a draw. Physicians can log hours late at night or between clinical shifts, with no commute, no administrative overhead, and immediate feedback on their contributions. Several physicians in the Mozibox Research dataset reported dedicating 10 to 20 hours per week to AI training while holding down full-time clinical roles.
The Platform Landscape
Four major platforms dominate the physician AI training market. The reputations below are aggregated from physician ratings submitted to Mozibox Research — a view of the marketplace built from the contributor side, not from the platforms' own marketing.
Mercor — ★★★★★ (4.6 / 5)
Most reported gigs. Known for reliable weekly pay, responsive teams, and clear pathways to earn more through reviewer and QA roles.
Handshake — ★★★☆☆ (3.2 / 5)
Strong compensation (up to $300/hr reported) but polarizing reviews. Concerns about disorganized onboarding and unpaid calibration time.
Outlier (Scale AI) — ★★★☆☆ (3.0 / 5)
Lower base rates; compensation depends heavily on negotiation and mission-based bonuses. Limited physician-specific data so far.
Inflect — ★★★★★ (5.0 / 5)
Early-stage platform with a perfect rating from initial respondents, though the sample is small. Projects noted as more difficult and draining.
A Structural Shift, Not Just a Side Hustle
What looks like a side income opportunity at the individual level reflects something more structural at the industry level. AI companies are building a new class of expert labor that sits at the intersection of domain knowledge and machine intelligence.
The broader market for AI data labeling and expert annotation is enormous and growing fast. The most sophisticated AI labs have settled into a tiered approach: recruiting PhDs, MDs/DOs, and attorneys for the highest-value training tasks, while routing routine annotation to more general platforms at scale. Physicians, given the clinical complexity of medicine, sit firmly in the premium tier.
And on the contributor side, this work has no native infrastructure. No career fair lists AI tutoring gigs. No specialty society publishes rate cards. Physicians find their way in through Slack groups, Reddit threads, and word of mouth — which is precisely why a market paying up to $420 an hour can still feel invisible from inside a hospital. The same gap shows up across every emerging category of physician work, from medical affairs and expert networks to advisory and operating roles in health tech: real money, real demand, and almost no signal reaching the people who could fill it.
That gap is what Mozibox is built to close. Mozibox Research exists to make this territory legible — tracking compensation, contracts, and contributor experience as new categories of physician work appear. The Mozibox platform exists to help physicians move through it.
A Foothold, Not a Side Hustle
The natural worry — Am I training my replacement? — runs through nearly every physician conversation about this work, and it deserves a straight answer rather than reassurance.
The honest read: medical AI gets built either way. The question is who shapes it. Models become clinically credible not by absorbing more text but by having physicians sit down, case by case, and tell them where their reasoning breaks. That expertise is scarce, the labs paying for it know it, and the physicians who develop fluency in both clinical judgment and machine behavior become the people the next phase of medical AI is designed around — not the ones it displaces.
Mozibox Research will keep tracking it. Doctors logging hours after shifts are not just earning $180 an hour; they are accumulating something harder to come by than another moonlighting credential — a working understanding of how medical AI actually gets built, and of where their own judgment fits inside it. For a profession with few off-ramps from the clinical treadmill, the front-row seat is starting to look less like a curiosity and more like a position.
Primary source:
Mozibox Research AI Tutoring Compensation Database — proprietary, 34 physician-reported entries, 2025–2026.
Secondary sources:
The SF Standard, "SF Is So Expensive, Even Doctors Are Working AI Side Hustles" (April 17, 2026)
HeroHunt.ai, "Top 10 AI Tutor Marketplaces for AI Labs" (February 2026)
The Lancet Digital Health, "How Can AI Transform the Training of Medical Students and Physicians?" (2025)
Mozibox is building the infrastructure for physician work beyond the clinic — pharma, medtech, consulting, advisory, leadership, and the emerging categories being invented in real time. Mozibox Research surfaces the data physicians need to navigate it. The Mozibox platform connects them to where the work is.