How Modern Engineering Leaders Think: Pranav Prabhakar on ML Reliability and Remote Team Performance

Pranav Prabhakar is an engineering leader with expertise in building scalable systems, deploying machine learning in production, and leading high-performing remote teams. As Co-founder & CTO of MiStay, he architected the platform from the ground up, and later, as Associate Engineering Manager at ManyPets, he drove ML-driven claims automation, backend architecture redesign, and async collaboration initiatives. His work blends deep technical skill with a focus on measurable business outcomes in complex, regulated domains.

In this interview, Pranav shares insights into becoming an engineering leader, managing data drift, and building high-performing remote teams.

1. What were some of your earliest lessons in building scalable backend systems, and how did they shape your approach to engineering leadership?

One of my earliest lessons was separating responsibilities: keeping the core business logic, data persistence, and orchestration distinct. I also learnt to prioritise simplicity before scalability. I believe you should avoid premature optimisation and instead create an architecture that can scale without a rewrite. Over time, I have found that data modelling and boundaries matter more than individual APIs. They are what keep complexity manageable in the long term.

Another important takeaway is that observability needs to be treated as a first-class concern: metrics, logs, and tracing should be built into the system from day one. Finally, my experience showed that tight coupling slows iteration. That is why I now prefer modular, domain-driven designs.
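
The "day one" observability point can be sketched in a few lines. The example below is purely illustrative, using only Python's standard `logging` module; the `traced` helper and operation name are hypothetical, not taken from Pranav's systems:

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("orders")

@contextmanager
def traced(operation: str):
    """Emit structured start/finish records with latency for any operation,
    so every code path produces logs and a basic metric by default."""
    start = time.perf_counter()
    log.info("start operation=%s", operation)
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        log.info("finish operation=%s latency_ms=%.1f", operation, elapsed_ms)

with traced("create_order"):
    pass  # business logic goes here
```

Wrapping every entry point in a helper like this, from the first commit, is one way to make metrics and tracing a default rather than a retrofit.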

2. How did your time as a startup co-founder prepare you for leading engineering teams in larger organizations?

My time as a startup co-founder taught me many invaluable lessons. Most importantly, I developed an ownership mindset, since I needed to think end-to-end, from the product vision down to the customer experience. With limited resources, I learnt to balance ambition with pragmatism, which sharpened my prioritisation skills.

Moreover, I saw the value of fast feedback loops with customers and stakeholders, which later proved invaluable for iterative delivery. I also realised that team culture and communication matter as much as technology. Last but not least, I became comfortable navigating ambiguity and trade-offs, a skill that scales well to larger organisations.

3. How did you monitor and manage data drift in production systems during the course of your career?

Initially, I monitored for drift with basic statistical checks, including distribution comparisons and histograms. Later, as the system matured, I introduced feedback loops from production outcomes so that we could compare model performance against reality.

Another important point is that we used shadow monitoring and retraining triggers to catch drift proactively, before it had an impact. When drift did occur, I focused on logging rich metadata so we could diagnose its source. Most importantly, I came to see drift not only as a technical challenge but also as a signal of an evolving business context, changes that required both product-level and technical responses.
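
A basic distribution check of the kind Pranav describes can be as simple as a Population Stability Index (PSI) over binned feature values. The sketch below illustrates the general technique in plain Python; it is not the specific tooling he used, and the sample data is invented:

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample and a
    production sample of one numeric feature. Common rule of thumb:
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket_fracs(values):
        # Clamp out-of-range values into the edge buckets.
        counts = Counter(
            min(max(int((v - lo) / width), 0), bins - 1) for v in values
        )
        n = len(values)
        # Small floor avoids log(0) for empty buckets.
        return [max(counts.get(b, 0) / n, 1e-4) for b in range(bins)]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [float(i % 50) for i in range(1000)]       # training distribution
shifted  = [float(i % 50) + 15 for i in range(1000)]  # drifted production data
print(psi(baseline, shifted) > 0.25)  # → True: the shift flags as drift
```

Running a check like this on each feature, on a schedule, is one straightforward way to turn "distribution checks" into an automated retraining trigger.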

4. What role did shadow mode testing play in your ML deployment strategy?

Shadow mode testing became an integral part of my ML deployment strategy, as it enabled safe validation of predictions in real-world conditions without business risk. It allowed us to calibrate thresholds and gave us accuracy and confidence levels before rollout.

It is important to note that shadow mode testing helped surface unexpected failure cases and edge conditions that would have been costly to discover after launch. It was also invaluable for building stakeholder trust, as it demonstrated alignment between machine and human decisions. Overall, the practice provided a structured pathway for experimentation, allowing us to transition from testing to production gradually.
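
The shadow-mode pattern described here amounts to running the candidate model on live traffic while only the incumbent decision takes effect, and logging both for comparison. Below is a minimal sketch of that idea; the claim fields, decision rules, and agreement metric are invented for illustration:

```python
def human_decision(claim):
    # Placeholder for the existing, authoritative decision path.
    return claim["amount"] < 1000

def model_decision(claim):
    # Placeholder for the candidate model being evaluated in shadow.
    return claim["amount"] < 900

def process_claim(claim, shadow_log):
    decision = human_decision(claim)   # only this result is acted on
    shadow = model_decision(claim)     # candidate runs on the same live input
    shadow_log.append({"id": claim["id"], "live": decision, "shadow": shadow})
    return decision

shadow_log = []
claims = [{"id": i, "amount": a} for i, a in enumerate([200, 950, 1500, 800])]
for c in claims:
    process_claim(c, shadow_log)

# Agreement between live and shadow decisions is the rollout signal.
agreement = sum(r["live"] == r["shadow"] for r in shadow_log) / len(shadow_log)
print(f"shadow agreement: {agreement:.0%}")  # → shadow agreement: 75%
```

The disagreement cases in `shadow_log` are exactly the edge conditions worth reviewing with stakeholders before promoting the model.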

5. Many engineering managers struggle with balancing delivery speed and system stability. How did you navigate that trade-off at your organization?

Balancing delivery speed with stability has always meant making trade-offs explicit and transparent, framing them as short-term versus long-term decisions so that stakeholders understand the risks. We practised progressive delivery, using feature flags and incremental rollouts, to keep velocity high while limiting risk.

At the same time, we invested in automation and runbooks to reduce operational load and support faster delivery. When stability issues started to erode velocity, I advocated for paying down technical debt and ensured that the roadmap included both quick tactical wins and longer-term re-architecture work.

6. What’s your philosophy on building high-performing remote teams, and how did you apply it before and during the pandemic years?

My philosophy on building high-performing remote teams centres on trust, autonomy, and clear accountability. As such, I focus on clarity of goals and outcomes instead of monitoring hours or activity.

To make remote work easier, I lean on asynchronous documentation and communication tools to reduce dependency on time zones. We ensure visibility through regular updates, demos, and structured check-ins to maintain team alignment. In short, we keep a balance of rituals: synchronous for alignment and asynchronous for deep work.

It is also crucial for me to look after the team members’ mental health. I pay special attention to creating psychological safety so engineers feel comfortable raising blockers and proposing ideas.