In this edition of the Sagard AI Newsletter, we:
- Speak with Ammar Naqvi, Chief Technology Officer at OneCarNow, a company that helps underserved individuals access cars as productive assets and build a financial track record without relying on traditional credit histories.
- Highlight recent GenAI model releases, such as Gemini 3, Claude Opus 4.5, and Nano Banana Pro, and what makes each a strong choice for different use cases.
- Report how the AI regulation landscape is easing, particularly in the EU.
Raising the Floor: OneCarNow’s Path to Inclusive AI with Ammar Naqvi, CTO
In this conversation, Ammar shares how OneCarNow’s (OCN) “raise the floor” mission shapes its products, how AI powers underwriting and predictive maintenance, how his team manages bias and safety in machine learning, and how AI copilots are reshaping engineering workflows. He also offers pragmatic lessons for leaders beginning to embed AI into their products and organizations.
Here are our key takeaways from the interview:
- AI With Purpose, Not Hype: OCN embeds AI deeply into underwriting and operations but avoids branding itself as an “AI company.” The focus is on solving real problems for underserved customers, not chasing trends or buzzwords.
- Bias-Aware Underwriting: Ammar emphasized that raw data is inherently biased. OCN mitigates this through normalization, continuous monitoring, and a human-in-the-loop model to ensure decisions remain fair, explainable, and mission-aligned.
- Pragmatic AI Scaling: Cost discipline and security are central to OCN’s AI strategy. Instead of pushing models to production rapidly, the team prioritizes guardrails: testing, red teaming, and careful evaluation of business value.
- AI-Enhanced Engineering, Human-Led Design: AI copilots handle repetitive coding and testing tasks, enabling engineers to focus on system design and creative problem-solving. The result is happier developers, faster iteration, and higher-quality products without compromising safety.
Let’s dive in.
To start us off, what does OCN do, and what is your role there?
Ammar Naqvi: I’m the Chief Technology Officer at OCN. Our mission is social mobility and financial inclusion; we focus on people who are typically excluded because they lack a traditional credit history.
Instead of trying to “raise the ceiling” for those who are already well-served, we aim to raise the floor so more people can participate in the formal economy. We help them access productive assets like cars, earn income, and gradually build a financial footprint. My role is to design and scale the technology and AI that enable this, while staying aligned with that mission.
How does that idea of “raising the floor” translate into your products, especially around credit and cars?
Ammar Naqvi: Traditional credit systems look backwards; if you don’t have a history, you’re often treated as too risky by default.
We built a parallel scoring system called the social score. Instead of penalizing you for having no credit file, it looks at your ability to pay and to earn going forward. We use a set of features derived from public and proprietary data and combine them in a way that reflects your potential as a gig worker or small-business operator.
Cars are treated as productive assets we underwrite. A vehicle is essentially a loan tied to your ability to generate income. If your social score indicates you’re a good fit, we can help you access a car and, in the process, start building a financial track record.
Where does AI play the most important role in OCN’s business today?
Ammar Naqvi: AI is deeply embedded in two main areas.
First, underwriting. Our social score is powered by machine learning models that help us assess customers who don’t fit into traditional credit frameworks. AI helps us combine many signals into a consistent risk view that starts from “no credit history is okay,” while still aiming to avoid defaults.
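The idea of combining many forward-looking signals into one consistent risk view can be sketched with a simple logistic combination. This is purely illustrative: the signal names and weights below are assumptions for the example, not OCN's actual social-score features or model.

```python
import math

def social_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Combine forward-looking signals into a 0..1 score via a logistic link.

    Signal names and weights are hypothetical; a real underwriting model
    would be trained, monitored for bias, and reviewed by humans.
    """
    # Weighted sum of whatever signals are present; missing weights count as 0.
    z = sum(weights.get(name, 0.0) * value for name, value in signals.items())
    # Logistic link squashes the sum into a 0..1 "ability to pay" view.
    return 1.0 / (1.0 + math.exp(-z))

# With no signals at all, the score sits at the neutral midpoint of 0.5,
# reflecting the "no credit history is okay" starting point.
```

The key design choice this illustrates is the neutral default: an empty file yields 0.5 rather than an automatic rejection, and evidence of earning ability moves the score up from there.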
Second, predictive maintenance and operations. We collect data from vehicles, such as driving patterns, usage, and road conditions, and use models to predict when a car may need attention. Instead of waiting for a breakdown and weeks of downtime, we can ask a driver to come in for a quick check at the right time.
That reduces downtime, supports safer driving, and helps drivers maintain stable earnings. AI, in that sense, is both a risk tool and an operational efficiency tool.
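A minimal version of this kind of telemetry-to-alert logic can be sketched as a weighted risk score with a threshold. The telemetry fields, weights, and threshold below are illustrative assumptions, not OCN's actual predictive-maintenance model.

```python
from dataclasses import dataclass

@dataclass
class VehicleTelemetry:
    km_since_service: float   # kilometres driven since last service (assumed field)
    harsh_brake_rate: float   # harsh-braking events per 100 km (assumed field)
    avg_engine_temp_c: float  # average engine temperature in Celsius (assumed field)

def maintenance_risk(t: VehicleTelemetry) -> float:
    """Combine telemetry signals into a 0..1 risk score (illustrative weights)."""
    score = 0.0
    score += min(t.km_since_service / 10_000, 1.0) * 0.5            # wear from mileage
    score += min(t.harsh_brake_rate / 5.0, 1.0) * 0.3               # aggressive driving
    score += min(max(t.avg_engine_temp_c - 90, 0) / 20, 1.0) * 0.2  # overheating
    return score

def should_schedule_check(t: VehicleTelemetry, threshold: float = 0.6) -> bool:
    """Flag the vehicle for a quick check before a breakdown forces downtime."""
    return maintenance_risk(t) >= threshold
```

In practice a trained model would replace the hand-set weights, but the operational pattern is the same: turn continuous telemetry into a timely "come in for a check" decision.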
AI and ML models are known to inherit bias from data. How do you handle that for underwriting?
Ammar Naqvi: We start from the assumption that the raw data is biased. If you train on it as-is, the model may prefer higher-income, higher-GDP contexts and systematically disadvantage the very people we want to serve.
We tackle this in three ways:
- We normalize for skew in our data across dimensions like geography, income, and other sensitive attributes so that the model doesn’t simply reproduce historical discrimination.
- We keep a human in the loop. When the model is uncertain or sees an anomaly, we escalate to a human reviewer rather than letting AI be the final authority.
- We review and retrain. We examine decisions and adjust if we see that certain groups are being treated unfairly for the wrong reasons.
We don’t pretend bias can be eliminated completely, but we design the system to recognize it, contain it, and keep humans responsible for the hardest edge cases.
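The first step above, normalizing for skew across dimensions like geography, can be sketched as a per-group z-score so that raw gaps between regions or income brackets don't dominate the model input. The record layout and field names are hypothetical; a production pipeline would cover many more attributes and use dedicated fairness tooling.

```python
from collections import defaultdict
from statistics import mean, pstdev

def normalize_within_groups(records: list[dict], feature: str, group_key: str) -> list[dict]:
    """Z-score `feature` within each `group_key` bucket (e.g. geography).

    Illustrative sketch: after this, an income that is average *for its region*
    maps to 0, so the model compares people to their own context rather than
    reproducing historical gaps between regions.
    """
    # Collect the feature values per group.
    buckets: dict = defaultdict(list)
    for r in records:
        buckets[r[group_key]].append(r[feature])
    # Per-group mean and population std dev (fall back to 1.0 if constant).
    stats = {g: (mean(v), pstdev(v) or 1.0) for g, v in buckets.items()}
    # Emit a copy of each record with the normalized feature added.
    out = []
    for r in records:
        m, s = stats[r[group_key]]
        out.append({**r, f"{feature}_norm": (r[feature] - m) / s})
    return out
```

The second and third steps, human escalation and retraining, sit around this in the pipeline: uncertain or anomalous cases go to a reviewer, and observed unfairness feeds back into the next training cycle.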
What challenges have you faced when scaling AI models into production for many users?
Ammar Naqvi: The two biggest challenges are cost and security.
On cost: modern AI infrastructure can become expensive quickly if you run every experiment in production with large models. We force ourselves to start from the business problem and value, and only then decide where AI fits. That discipline helps us avoid chasing every new model just because it is fashionable.
On security: we work with sensitive data, so we cannot ship AI features over a weekend and fix issues later. For new AI-powered products, we often spend as much or more time on testing, red teaming, and safeguards as we do on the initial build. We want to avoid data leakage, jailbreaks, or misuse as much as possible before going live.
Speed matters, but in our world, safe and robust matters more.
How are you using AI internally for your engineering team?
Ammar Naqvi: Inside OCN, AI use is expected, but AI does not replace engineers.
We treat AI as a copilot, not an autonomous driver. Engineers still design systems and own the final code. The copilot handles the repetitive parts: generating boilerplate, suggesting tests, running smoke checks, and flagging outdated libraries.
For me personally, a lot of the time-consuming, less-interesting work, especially around test setup and repetitive scaffolding, is now assisted by an AI agent that understands our patterns. I still review what it produces, but the time savings are substantial.
The net effect is happier engineers and more time for real problem-solving: design, architecture, and experimentation.
Looking 12–18 months ahead, which AI or adjacent technologies do you think may influence OCN’s roadmap?
Ammar Naqvi: Two areas stand out.
First, deeper personalization. Because customers stay with us for a long time and use the OCN Driver app, we accumulate a rich view of each person’s behavior: earnings patterns, driving habits, maintenance history, and payment behavior. That lets us move beyond generic segments and design products that are truly tailored to the individual, whether that’s a different structure for a car lease, an SME loan, or possibly an education-related product.
Second, on-device inference. As models get smaller and more efficient, we are interested in moving more intelligence directly onto the customer’s device. That could reduce latency and infrastructure cost, and enable real-time, personalized insights on the phone without always relying on a central backend.
Both directions point toward the same goal: more responsive, personalized experiences that still respect safety, privacy, and our mission.
For leaders just beginning to embed AI into their strategy, what are your top three lessons?
Ammar Naqvi: I’d highlight three points:
- Getting started is inexpensive. Prototyping with AI is often effectively free now. You can use mainstream LLMs to explore ideas, draft logic, or mock up flows without committing to heavy infrastructure or big contracts. Don’t let perceived cost stop you from experimenting.
- Use AI to validate ideas faster. You can generate wireframes, clickable prototypes, or basic flows quickly and test them with real users in days rather than months. That makes product-market fit exploration much more efficient.
- Do not blindly trust AI, especially in production. AI-generated code and logic can hide security issues, scalability problems, or subtle flaws. MVPs are fine for learning, but when you move into production, you still need engineers, security reviews, and proper testing.
Overall, I’d call it cautious optimism: use AI aggressively to learn and prototype, but keep human judgment and good engineering practices at the core.
What we are reading at Sagard
- Foundational GenAI models keep getting better: Anthropic, Google, and OpenAI all released major advancements that push the frontier of AI capabilities, with Anthropic’s Claude Opus 4.5 setting new benchmarks for coding, agentic workflows, and complex reasoning; Google’s Gemini 3 introducing a 1M-token multimodal context window and system-wide integration; Google DeepMind’s Nano Banana Pro delivering high-fidelity, fact-grounded image generation; and OpenAI’s GPT-5.1 adding adaptive reasoning for more reliable, context-sensitive outputs. Together, these releases signal a rapidly accelerating shift toward more autonomous, multimodal, and efficient AI systems, expanding what’s possible in software engineering, professional services, large-scale data analysis, and high-accuracy content creation, while rewarding organizations that can experiment quickly, learn fast, and implement strong safeguards. They also highlight how leading labs are converging in a shared direction: models that are not only powerful but also controllable, cost-efficient, and deeply integrated into workflows. Finally, these latest releases felt like a step-function change in sheer performance and showcased Google’s dominance in the AI race; for instance, Nano Banana Pro’s ability to synthesize images while following highly specific instructions has not been seen in prior image-generation models.
- EU to delay high-risk AI rules until 2027: The European Commission is considering postponing parts of the AI Act, the world’s first comprehensive regulation of artificial intelligence, following heavy pressure from major tech firms and the Trump administration. Although the law formally came into force in August 2024, many of its provisions (especially those governing “high-risk” AI systems) are not yet active. The proposed changes under discussion include a one-year “grace period” for companies that may breach transparency requirements and delaying fines until as late as 2027. Proponents argue these delays give firms time to adapt without disrupting the market, while critics (including some EU lawmakers) warn that postponing enforcement undermines legal certainty and weakens protections the law aims to guarantee.