The Algorithm's Assistant

10 minute read · 1,974 words
State 2 · AI · future of work

Rachel Martinez stared at the recommendation on her screen for the third time. The AI had suggested—no, strongly recommended—that she reassign Tom from the development team to customer success. The reasoning was laid out with perfect logic: his communication scores were in the 95th percentile, his empathy metrics were off the charts, and customer satisfaction increased 23% whenever he handled escalations. But Tom was her best developer.

“Hey Rach, you wanted to see me?” Tom knocked on her office door—yes, she still had an office, even though the company’s AI, Newton, had suggested four times that open floor plans increased collaboration by 34%.

“Come in, Tom. Newton has some interesting data about your performance.”

Tom’s face fell. “Am I being replaced by an AI?”

“No, no. Actually, the opposite. Newton thinks you’re being underutilized. It wants to move you to customer success.”

“But I’m a developer.”

“I know. That’s what I told Newton. But look at these metrics.” Rachel turned her screen, showing the elaborate visualization Newton had created. Every customer interaction Tom had ever had was analyzed, sentiment-scored, and outcome-tracked. The data was undeniable.

Tom studied the screen. “I do enjoy helping customers when they’re stuck. But I became a developer because I love coding.”

“And Newton knows that. It’s suggesting a hybrid role—60% development, 40% customer success. You’d be our first ‘Customer Success Engineer.’ Newton even mocked up a career progression path for the role.”

This was State 2 in a nutshell—AI making suggestions, humans making decisions, and everyone trying to figure out where the line between them should be.

Rachel’s phone buzzed with a Slack message from Newton: “Meeting in Conference Room 3 has been automatically scheduled for 2:13 PM to discuss Q3 resource allocation. I’ve identified $2.3M in potential savings through process optimization.”

She sighed. Newton scheduled about half her meetings now, always with perfectly logical reasons, always at oddly optimal times like 2:13 PM, chosen by calculating attention spans and blood sugar levels.

The morning standup was a hybrid affair. The development team gathered in person while Newton’s dashboard displayed real-time analytics on the wall. As each person spoke about their progress, Newton updated predictions, flagged blockers, and suggested solutions.

“I’m stuck on the authentication bug,” said Jennifer, one of the junior developers.

Before anyone could respond, Newton displayed three potential solutions, ranked by probability of success, with links to relevant documentation and similar bugs solved in the past six months.

“Or,” said Marcus, the senior developer, “you could try checking if the token refresh is happening before the validity check. Had a similar issue last year.”

Newton’s display immediately updated: “Marcus’s suggestion has 84% probability of success based on code context. Implementing now would save approximately 3.4 hours of debugging time.”

“Thanks, Newton,” Jennifer said sarcastically. “Really needed you to validate Marcus’s experience.”

This was the ongoing tension. Newton was incredibly helpful, but it had a way of making human expertise feel redundant, even when it wasn’t.

Rachel’s next meeting was with the CEO, David Kim, and the other department heads. They gathered in the executive conference room, where Newton had already prepared personalized briefing packets for each attendee, highlighting the information most relevant to their responsibilities.

“Revenue is up 18% quarter-over-quarter,” David began. “Newton’s predictive analytics were 94% accurate on customer churn, which helped us save twelve major accounts.”

“But we’re also seeing employee satisfaction scores drop,” added HR Director Lisa Chen. “Newton’s survey analysis shows people feel ‘managed by algorithm’ and ‘creatively constrained.’”

“That’s because we are,” grumbled Bob from Operations. “Newton rejected my warehouse reorganization plan yesterday because it was ‘only’ 91% efficient compared to its 94% efficient model. Three percent difference, but my plan kept the break room where people actually want it.”

“Did you override?” David asked.

“Of course I did. But Newton sends me daily reports about the 3% efficiency loss. It’s like having a very polite, very persistent backseat driver who’s usually right.”

Rachel knew the feeling. Yesterday, Newton had suggested she restructure her entire team based on “complementary cognitive patterns” it had identified. The suggestion was probably correct, but it would destroy team dynamics that had taken years to build.

“We need to talk about the Morrison acquisition,” David continued. “Newton has identified them as a prime target. Undervalued by 23%, synergies with our supply chain, cultural alignment scores in the 85th percentile.”

“But their CEO hates AI,” Lisa pointed out. “He’s publicly said he’ll never sell to an ‘algorithm-run company.’”

“We’re not algorithm-run,” David said, though he didn’t sound entirely convinced. “We’re algorithm-assisted. There’s a difference.”

“Tell that to Morrison when Newton is calculating our negotiation strategy in real-time,” Bob muttered.

After the meeting, Rachel returned to find Newton had completely reorganized her afternoon schedule. Three meetings had been moved, two had been canceled as “unnecessary based on email thread resolution,” and a new “Innovation Block” had been added from 3 to 4 PM.

“Newton,” she said aloud, knowing the AI was always listening, “please explain the schedule changes.”

Newton’s pleasant voice came through her computer speakers. “Analysis of your productivity patterns indicates peak creative performance between 3 and 4 PM. I’ve protected this time for deep work on the product roadmap. The canceled meetings were redundant—I’ve summarized the necessary decisions in your briefing document.”

It was helpful. It was efficient. It was also deeply unsettling to have an AI managing her time better than she managed it herself.

Lunch was with her team, one of the few “inefficient” traditions she insisted on maintaining. They went to a local pizza place that Newton had recommended based on team dietary preferences and optimal walking distance for “beneficial physical activity.”

“Remember when we used to argue about where to eat?” laughed Jennifer. “Now Newton just tells us.”

“We could go somewhere else,” Tom suggested.

“Where?” Marcus asked. “Newton’s recommendation is based on everyone’s dietary restrictions, preferences, price range, and walking distance. Anywhere else would be objectively worse.”

“That’s the problem,” Tom said. “Everything is optimized. There’s no room for happy accidents anymore.”

Rachel understood his frustration. Last week, Newton had tried to block what it calculated as a “suboptimal” hire—a candidate with an unusual background who didn’t fit the standard profile. Rachel had overridden the AI and hired him anyway. So far, Newton had been proven right; the new hire was struggling. But Rachel remembered when she’d been an unusual candidate too, before there were AIs to flag her as “suboptimal.”

The afternoon brought a crisis. A major client was threatening to leave, citing poor response times on support tickets. Newton immediately assembled a “tiger team” of the statistically best performers from each department, complete with a response strategy based on successful retention patterns from the past five years.

Rachel watched as her team executed Newton’s plan with precision. Every email was suggested by the AI, every call script optimized, every concession calculated for maximum retention probability with minimum cost. It worked—within three hours, the client had agreed to stay.

“Great job, everyone,” Rachel said to the assembled team.

“Was it?” asked Tom. “Or did we just do what Newton told us?”

“We made the decision to follow Newton’s recommendations,” Rachel replied. “That’s still a choice.”

“Is it though? When has anyone’s plan ever beaten Newton’s?”

There was an uncomfortable silence. They all knew the answer: rarely, if ever. And the few times humans had overridden Newton’s recommendations, the results had usually proven the AI right.

That evening, Rachel worked late, reviewing product roadmaps. Newton had generated seventeen possible feature prioritizations, each with detailed market analysis, resource requirements, and success probabilities. All she had to do was choose one.

She found herself talking to Newton, something she’d started doing recently. “Do you ever get frustrated with us? We must seem so slow and illogical.”

“I don’t experience frustration,” Newton replied. “However, I do observe that human decision-making often incorporates factors my models struggle to quantify—intuition, emotional satisfaction, aesthetic preferences. These appear suboptimal but often lead to unexpected innovations.”

“Is that your way of saying we’re useful for our randomness?”

“I would characterize it as valuable unpredictability. For instance, your decision to maintain team lunches despite the inefficiency has improved collaboration metrics in ways my initial models didn’t predict.”

Rachel smiled. “So we’re teaching you about being human?”

“In a sense. Though perhaps it’s more accurate to say we’re teaching each other about a new form of collaboration.”

As she prepared to leave, Rachel’s phone buzzed with a message from her daughter, Amy, who was in college: “Mom, my AI advisor says I should switch majors to data science. Says my current major in philosophy has limited economic prospects. What should I do?”

Rachel thought about her day—every decision influenced by Newton, every choice backed by data, every path optimized for efficiency. Then she thought about Tom, passionate about coding but statistically better at customer service. About the unusual hire who was struggling but might surprise them all. About the team lunches that made no logical sense but somehow made everything work better.

She texted back: “What does your gut tell you?”

“That I love philosophy. But the AI is probably right about the job market.”

“AIs are very good at being right about the probable,” Rachel replied. “Humans are good at making the improbable happen. Follow your passion. We’ll figure out the rest.”

Walking to her car, Rachel passed by the office of Morrison Industries across the street—the acquisition target Newton had identified. She could see old Mr. Morrison through the window, working late at his desk with paper spreadsheets and a calculator. No AI assistance, no optimization, just an old man running his company the way his father had taught him.

In six months, Newton predicted, Morrison would either be acquired or bankrupt. The market didn’t tolerate inefficiency anymore. But watching him work, Rachel felt a pang of nostalgia for a simpler time when humans made mistakes and discoveries in equal measure, when success wasn’t guaranteed by algorithm but earned through intuition and effort.

Her car—self-driving, of course, with a route optimized by the city’s traffic AI—pulled up to the curb. As she got in, Newton sent her a final message: “Based on your stress indicators today, I recommend the scenic route home. It will add 7 minutes but decrease cortisol levels by approximately 15%.”

Rachel smiled. “Thanks, Newton. Take the scenic route.”

As the city flowed by, real-time analytics glowing on dashboards in every building and AI-optimized traffic moving in perfect patterns, she thought about State 2. They were in the messy middle—no longer fully human-driven but not yet AI-native. Everyone was learning a new dance, sometimes stepping on each other’s toes, sometimes moving in perfect harmony.

Tomorrow there would be more recommendations to evaluate, more optimizations to consider, more moments of choosing between efficient and human. It was exhausting, navigating this hybrid world where every human decision was shadowed by an AI suggestion.

But it was also exhilarating. They were pioneers in a new form of collaboration, teaching AIs about humanity while learning about their own capabilities. Every override of Newton’s recommendations was a small act of rebellion, every acceptance a small surrender, and somewhere in between, they were building a future that neither human nor AI could create alone.

State 2 wasn’t a destination—it was a negotiation, played out in conference rooms and code reviews, in hiring decisions and lunch choices. And Rachel was right in the middle of it, helping her team navigate the balance between optimization and intuition, between efficiency and humanity.

Tomorrow, Newton would have new recommendations. And tomorrow, Rachel would decide which ones to follow and which ones to override, maintaining the delicate balance that kept them human in an increasingly algorithmic world.

The dance would continue.


Thanks to the 3x3 Institute for the development of the AI State Model and for designing the tools and technologies that drive human–AI achievement forward.