Here’s What 90 Days of Letting AI Run Half My Job Actually Looked Like.
It’s 9:42 on a Tuesday morning. I’m at a café near my apartment, on my second flat white, about to hop on a call with a senior platform engineer based in Berlin. We’ve been exchanging messages for eleven days. I know his current stack. I know why he’s considering leaving. I know his partner just took a new job that anchors him to his timezone for the next three years. We haven’t talked yet. We’re about to.
Ninety days ago, this scene would have been impossible.
Ninety days ago at 9:42 on a Tuesday morning, I would have been twenty minutes into a sprint through 80 fresh resumes, trying to flag the top 12 before my 10am standup. I would have been half-present to every one of them. I would not have been at a café. I would not have been drinking a flat white. I would have been at my laptop in sweatpants, chewing a cold breakfast bar, clicking through LinkedIn profiles that all started to blur into one vague shape of backend engineer, five years of experience, probably fine.
I’ve been doing this for ten years. Three months ago my head of talent asked me to restructure my workflow around AI tools for a 90-day pilot. I’m going to walk you through what those 90 days actually looked like, because most of what I read online about AI in hiring is either breathless hype from people selling tools or panicked catastrophizing from people who have never run a pipeline. Both miss the point. The point is simpler, and much more boring, and much more important.
AI did not replace my job. It gave me my job back.
Day zero — what a morning used to look like
Let me set the baseline, because the contrast is the whole story.

The morning was the one I opened with: a sprint through 80 fresh resumes, trying to flag the top 12 before the 10am standup, half-present to every one of them. I missed people. I know I missed people. Every recruiter who tells you otherwise is lying.
The afternoon was scheduling. Fifty-eight emails back and forth with candidates trying to find a time that worked. Then an hour of drafting follow-ups to candidates waiting on hiring manager feedback I hadn’t had time to chase down. Then an hour of actually chasing the hiring managers, who always said “I’ll review tonight” and never did. Then a pipeline report nobody read.
Somewhere in that day, in theory, I was supposed to talk to humans.
In practice, I talked to a human maybe forty minutes a day. The rest of the job was logistics dressed up as relationship work.
That was day zero. That’s where the 90 days started.
Days 1 to 30 — learning to trust the tools
The first month was uncomfortable.
I didn’t want to let the AI do the first-pass screening. I had ten years of calibrated instinct. I trusted my eyes. So for the first two weeks, I did both — I reviewed every resume the agent had already processed, just to check. What I found was this: we disagreed on about 8% of candidates. On half of those 8%, I was right and the AI had miscategorized someone interesting. On the other half, the AI was right and I had been about to reject someone good out of pattern-matching fatigue.
We were roughly equally fallible. But the AI was fallible in fifteen seconds. I was fallible in ninety.
By week three I’d stopped double-reviewing. I set up a feedback loop instead — every time I disagreed with a screening decision, I logged why, and the agent’s reasoning improved. By day thirty, our disagreement rate was under 3%.
That was the first unlock. Not speed. Calibration.
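The feedback loop above is simple enough to sketch. This is a toy illustration, not any real tool's API; every class and field name here is my own invention, and the numbers are made up to mirror the calibration story:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the disagreement log: every overruled screening
# decision is recorded with a reason, and the running disagreement rate
# tells you when the calibration is good enough to stop double-reviewing.

@dataclass
class Disagreement:
    candidate_id: str
    ai_verdict: str   # e.g. "reject"
    my_verdict: str   # e.g. "advance"
    reason: str       # logged so the agent's reasoning can be tuned

@dataclass
class CalibrationLog:
    reviewed: int = 0
    disagreements: list = field(default_factory=list)

    def record(self, agreed: bool, entry: Optional[Disagreement] = None) -> None:
        self.reviewed += 1
        if not agreed and entry is not None:
            self.disagreements.append(entry)

    def disagreement_rate(self) -> float:
        return len(self.disagreements) / self.reviewed if self.reviewed else 0.0

log = CalibrationLog()
for _ in range(100):
    log.record(agreed=True)
log.record(False, Disagreement("c-101", "reject", "advance",
                               "interesting lateral fit the screen missed"))
print(f"{log.disagreement_rate():.1%}")  # prints "1.0%"
```

The point of logging the reason, not just the verdict, is that the reasons are what you feed back to improve the agent; the rate alone only tells you when you can trust it.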
Days 31 to 60 — the morning that changed shape
Somewhere in the second month, my mornings started looking completely different.
I still wake up at 7:30. But I don’t open 140 resumes.
I open a dashboard where the AI has worked through the overnight applicant pile. It’s done four things. It’s made a first-pass match against the role requirements and given each candidate a short structured paragraph on why they fit — not a score, actual reasoning. It’s flagged the unusual ones. The candidate whose background doesn’t match the JD on paper but whose resume suggests an interesting lateral fit. The candidate whose last company I’ve been trying to recruit from for six months. It’s drafted a personalized first-touch message for each shortlisted candidate that I can edit in thirty seconds. And it’s flagged anyone still waiting on me from previous days.
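The overnight pass can be sketched as a single triage function. This is a toy stand-in, assuming a crude skill-overlap score in place of the real model's reasoning, and it leaves out the message drafting; all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skills: set
    waiting_since: int = 0   # days waiting on me; >0 means flag a follow-up

def triage(pile, required):
    """Toy stand-in for the overnight agent. `required` is the set of
    must-have skills; partial overlap gets flagged for a human look
    instead of being rejected outright (the 'interesting lateral fit')."""
    shortlist, flagged, rejected, follow_ups = [], [], [], []
    for c in pile:
        overlap = len(c.skills & required) / len(required)
        if overlap >= 0.75:
            shortlist.append((c.name, f"covers {overlap:.0%} of the role"))
        elif overlap >= 0.25:
            flagged.append((c.name, "partial fit, possible lateral move"))
        else:
            rejected.append(c.name)
        if c.waiting_since > 0:
            follow_ups.append(c.name)
    return shortlist, flagged, rejected, follow_ups

required = {"kubernetes", "go", "terraform", "postgres"}
pile = [
    Candidate("Ada", {"kubernetes", "go", "terraform"}),
    Candidate("Ben", {"go"}, waiting_since=2),
    Candidate("Cam", {"java", "spring"}),
]
shortlist, flagged, rejected, follow_ups = triage(pile, required)
```

In this run Ada lands on the shortlist, Ben gets flagged both as a partial fit and as someone waiting on me, and Cam is rejected. The real value in the workflow described above is the structured reasoning attached to each bucket, which a threshold can't capture; that part stays with the model.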
This takes me about twenty-five minutes of real reviewing. I approve, I tweak, I reject the ones that are obviously off. Shortlist goes to the hiring manager by 8:30.
That used to be my entire morning.
Then I do the thing I couldn’t do before. I go to the café. I take a call with a human.
The candidate I would have missed three months ago
Back to the platform engineer.
The AI didn’t source him blindly. Three weeks ago, I’d told my sourcing agent what I was really looking for — not the JD, the real one. Someone who’d built platform infra at scale but was getting tired of the politics of bigger companies. Someone who could code but whose real skill was designing systems that didn’t need heroes to keep them running. The agent worked through LinkedIn, GitHub, a couple of engineering blog archives, and came back with twelve names and a paragraph on each about why they fit. Not keyword matches — actual reasoning grounded in things they’d written or shipped.
After the call, I used to spend twenty minutes writing up my notes. Now I don’t.
The call is transcribed. An AI summarizer produces a debrief in a format I’ve trained it on: candidate background, what they’re optimizing for in their next role, potential concerns, suggested next step. I read it, I fix the two things it got slightly wrong, I add the one thing a transcript couldn’t catch — he laughed nervously when I asked about his current manager, worth noting — and I send it to the hiring manager. Total time: four minutes.
That’s sixteen minutes back on a single call. Multiply by six calls a day and I’ve got more than ninety minutes of my life back. Ninety minutes I now spend talking to more candidates, or thinking about whether the role I’m filling even makes sense, or pushing back on a hiring manager’s unreasonable ask, or doing any of the dozen things that used to get squeezed out of my week.
Days 61 to 90 — the conversations I can finally have
This is the part of the pilot I wasn’t expecting.
I have time now to push back.
In the old world, when an engineering manager sent me a JD asking for “seven-plus years of Kubernetes, deep ML background, and strong frontend skills,” I would take it. I would run the search. I would fail to find anyone. I would come back in six weeks with three mediocre candidates and a tired apology. I didn’t have time to tell him that what he was asking for didn’t exist.
Now, the AI does an initial market sweep the same afternoon I receive the JD. It comes back and tells me there are about 40 people globally who fit that exact profile, 6 are in reachable geographies, and of those 6, 4 are currently at companies known for not letting people leave. I take that data to the hiring manager and we rewrite the JD together. The role I eventually close is different — better — than the role he originally asked for.
That conversation used to happen in month three of a failed search. Now it happens on day one. The hiring manager thinks I’m smarter. I’m not. I just have the data I didn’t have before.
The hire nobody thought I’d make
In week eleven of the pilot, I filled a role nobody thought I’d fill.
Principal ML engineer, rare specialty, remote-only, in a niche where everyone who’s good is already at one of five companies you’ve definitely heard of. My hiring manager told me to expect a six-month search. I told him to give me four weeks and then we’d re-evaluate.
Here’s what I did. I had the sourcing agent build me a list — not of people who fit the JD, but of people who had written about the problem this role was trying to solve. Blog posts, papers, conference talks, obscure tweet threads. About sixty names. The agent ranked them by how recently they’d engaged with the problem and how close their stated career interests sounded to what this role actually offered. I went through the top fifteen myself.
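The ranking the agent produced can be sketched as a weighted score over two signals: how recently someone engaged with the problem, and how closely their stated interests match what the role offers. The weights and field names below are purely illustrative assumptions, not anything a real tool exposes:

```python
from datetime import date

def rank(candidates, today):
    """Hypothetical ranking: recent public engagement with the problem,
    blended with stated-interest match. The 0.6/0.4 weights and the
    one-year recency window are illustrative, not from any real tool."""
    def score(c):
        days = (today - c["last_engaged"]).days
        recency = max(0.0, 1.0 - days / 365)   # fades to zero over a year
        return 0.6 * recency + 0.4 * c["interest_match"]
    return sorted(candidates, key=score, reverse=True)

today = date(2024, 6, 1)
ranked = rank([
    {"name": "recent blog-post author",
     "last_engaged": date(2024, 5, 20), "interest_match": 0.9},
    {"name": "old conference talk",
     "last_engaged": date(2022, 1, 1), "interest_match": 0.8},
], today)
```

A scheme like this is why the single recent Medium post outranks a stale but prestigious talk: recency of engagement carries more weight than pedigree, which is exactly how an unsearchable candidate surfaces in the top fifteen.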
Number nine was a woman I had never heard of. She’d written a single Medium post about a technique the role required. She was at a company nobody in my network had flagged. Her title didn’t match the role. Her profile wasn’t searchable in any obvious way.
I reached out. We talked. Three weeks later she accepted the offer.
I want to be precise about what happened, because this is the part the hype cycle always gets wrong. The AI did not find her. I found her. What the AI did was clear the space in my day and my brain so that when I looked at a list of sixty names with real context attached to each one, I could notice the one that mattered. In the old world, I would have run a LinkedIn search, gotten a hundred generic results, messaged the top twenty, gotten four replies, and hired someone mediocre by month five.
The AI didn’t make the decision. It made the decision possible.
Day 90 — what the numbers actually say
I ran the final report last Friday.
Across the 90 days: time-to-first-interview dropped from 8 days to just under 2. Average time spent per candidate (from application to first human conversation) dropped from 14 minutes to 6. My passive-outreach response rate went from 11% to 29%. Four hires closed in the quarter against a historical average of 1.5. And the thing I’m proudest of, which doesn’t show up cleanly in any dashboard — I had 43 real conversations with candidates this quarter that weren’t attached to any open role. Those are my hires for next quarter, and the quarter after that, and the quarter after that.
That last number is the one that matters. Because that’s the number that tells me the 90 days weren’t just about getting faster. They were about getting my job back.
Why it works
I’ll keep this short, because I think the 90-day walk-through does more work than any framework I could draw.
Recruiting is a job where about half the work is legible — the pattern-matching, the screening, the scheduling, the logistics, the formatting, the chasing. AI does legible work well. The other half is illegible — the judgment, the rooms you have to read, the relationships you have to build, the conversations that turn a maybe into a yes. AI does not do this well and probably never will.
The mistake most teams make is thinking they have to pick one half or the other. The teams that are winning figured out that you give the legible half to AI so your humans can actually do the illegible half for the first time in their careers.
That’s the whole story. That’s what 90 days changed. That’s what my day looks like now.
The shift I’m describing is already the direction serious hiring platforms are quietly building toward — SkillBrew is one of the teams designing around exactly this split, handling the screening and surfacing so recruiters get to spend their day on the parts of hiring that were never really about speed.
The tech is the floor. People are still the ceiling. For the first time in a long time, the floor is high enough that the people can actually reach.