
Sagard AI Pulse - October Edition

In this edition of the newsletter, Nesto’s Chief Product Officer, Samuel Couture-Brochu, shares how the company is reimagining mortgage technology through AI. His reflections capture the shift from experimenting with models to designing products where AI and people work in tandem. Complementing Nesto’s story, we also highlight recent breakthroughs shaping the next wave of enterprise AI, along with what we at Sagard are reading: from OpenAI’s AgentKit and Anthropic’s vertical focus to emerging research on self-evolving agents.

Together, these developments paint a clear picture: the future of AI belongs to organizations that can merge technical ambition with governance, clarity, and purpose.

Where Product Meets AI: Nesto’s Journey with Samuel Couture-Brochu, CPO

In this conversation, Samuel Couture-Brochu, Chief Product Officer at Nesto, shares how the company is embedding AI into its mortgage technology and operations. Samuel discusses Nesto’s work in document processing and underwriting automation, the challenges of productizing AI responsibly, and the balance between automation and the human touch. He also reflects on the evolution from pre-generative AI systems to large language models (LLMs), emphasizing trust, regulation, and organizational readiness. The discussion concludes with his thoughts on ROI, future talent, and the mindset leaders need to integrate AI effectively.

Here are our key takeaways from the interview:

  1. Foundation before Intelligence: Clean, structured, and accessible data remains the foundation of any successful AI initiative. Without robust data models and pipelines, even the most sophisticated AI models will produce unreliable outcomes.
  2. Governance enables Trust: In a regulated domain like mortgage lending, explainability and auditability are non-negotiable. Strong governance frameworks that make AI systems predictable and compliant are essential, enabling both regulators and employees to trust automated decisions.
  3. Hybrid Automation Model: Samuel underscored the importance of blending automation with human oversight. While AI handles routine, data-heavy steps with consistency, humans provide empathy and reassurance at critical decision points, particularly in emotionally charged processes like mortgage approvals.
  4. Product Leadership in the AI Era: For product teams, AI fluency has become as essential as data literacy. Experimenting constantly with AI, aligning early with compliance and risk teams, and prioritizing real customer outcomes over technical novelty help cultivate an AI-first mindset.

Let’s dive in.

Can you start by introducing Nesto and your role there?

Samuel: I am the Chief Product Officer at Nesto Group, a Canadian provider of mortgage technology and financing solutions. While many know us through nesto.ca, our direct-to-consumer mortgage lending platform, we have expanded far beyond that. We operate Nesto Cloud, which is designed to simplify and modernize the financing experience for both lenders and consumers.

The platform can be offered as a white-labeled business process outsourcing solution, where our team handles end-to-end fulfillment for financial institutions, or as a full SaaS platform, where lenders use their own teams to serve their customers. Since launching in 2018, we have grown to about a thousand fully remote employees across Canada and have been nominated Canadian Lender of the Year for three consecutive years. We are also active in commercial lending and the broker channel through our CMLS brand. It has been a fast-paced, exciting seven years of growth.

Can you share a few key use cases at Nesto that are leveraging AI?

Samuel: Two examples stand out. The first is our document intelligence and fraud detection pipeline on Nesto Cloud. Every document that comes in, whether uploaded by advisers, underwriters, or customers, goes through an automated process where it is classified, text is extracted, and the file is checked for potential inconsistencies or fraud indicators. When we started, we relied on traditional computer vision models, but as LLMs matured, we integrated them into our workflow. That switch has significantly improved both accuracy and speed, and it has also made the system easier to maintain and adapt.
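The pipeline Samuel describes follows a classify–extract–check pattern. The sketch below is purely illustrative and is not Nesto’s implementation: the `classify`, `extract_fields`, and `check_consistency` functions are rule-based stand-ins for the computer-vision or LLM calls a real system would make, showing only how the stages compose.

```python
from dataclasses import dataclass, field

@dataclass
class DocumentResult:
    doc_type: str
    extracted: dict
    flags: list = field(default_factory=list)

def classify(text: str) -> str:
    # Stand-in for a vision/LLM classifier: route by simple keywords.
    if "pay period" in text.lower():
        return "pay_stub"
    if "account statement" in text.lower():
        return "bank_statement"
    return "unknown"

def extract_fields(doc_type: str, text: str) -> dict:
    # Stand-in for LLM-based extraction: pull "key: value" lines.
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    return fields

def check_consistency(doc_type: str, extracted: dict) -> list:
    # Stand-in for fraud/inconsistency checks.
    flags = []
    if doc_type == "unknown":
        flags.append("unclassified_document")
    if doc_type == "pay_stub" and "gross pay" not in extracted:
        flags.append("missing_gross_pay")
    return flags

def process_document(text: str) -> DocumentResult:
    # Staged pipeline: classify, then extract, then flag.
    doc_type = classify(text)
    extracted = extract_fields(doc_type, text)
    flags = check_consistency(doc_type, extracted)
    return DocumentResult(doc_type, extracted, flags)
```

One benefit of this staged design, which the interview hints at, is that swapping the classifier from a traditional model to an LLM only changes one stage while the surrounding workflow stays intact.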

The second is a pilot project in automated underwriting. We are experimenting with combining agentic AI with traditional AI models to explore autonomous or assisted underwriting. The intent is to help advisers and customers get decisions faster and with higher consistency. It is still early, but this direction may meaningfully shorten lead times and improve conversion rates if done responsibly.

You have built products that leverage both traditional AI and modern LLMs. What challenges have you experienced in productizing these technologies?

Samuel: I like to say “garbage in, garbage out.” That has been true since the earliest days of AI, and it still applies today. A lot of teams assume that adding AI, or especially an LLM, magically solves problems. But unless your data is clean, structured, and accessible, the output will not be useful. Before you can do anything interesting with AI, you need to fix your foundation: data models, pipelines, and integrations.

Then there is the regulatory aspect, which is particularly significant in the mortgage and financing world. The challenge is that AI outputs are variable. They do not always produce the same answer given the same input. That variability is difficult to reconcile with a compliance environment that requires explainability and repeatability. So, a lot of our work involves defining constraints, governance, and controls that make these systems auditable and safe to deploy. And finally, there’s human adoption. Even when the model works well, you still need people to trust it. That requires transparency: showing users how and why the model arrived at its conclusion, and helping them build confidence over time.
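The auditability point above can be made concrete with a thin wrapper that pins decoding settings and records every exchange for later review. This is a hypothetical sketch, not any vendor’s API: `audited_call` and `toy_model` are made-up names, and `toy_model` stands in for a real LLM call.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in production this would be an append-only store

def audited_call(model_fn, prompt: str, temperature: float = 0.0) -> str:
    """Call a model with pinned decoding settings and log the exchange."""
    output = model_fn(prompt, temperature=temperature)
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "temperature": temperature,
        "output": output,
    })
    return output

def toy_model(prompt: str, temperature: float = 0.0) -> str:
    # Stand-in for a real LLM call; deterministic regardless of settings.
    return f"decision for: {prompt}"
```

Pinning the temperature does not remove model variability entirely, but logging the exact inputs, settings, and outputs is what makes a decision reconstructable when a regulator or reviewer asks why it was made.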

Double-clicking on transparency and trust, how can users build trust in AI systems?

Samuel: In my opinion, this is as much a philosophical question as a technical one. Humans make mistakes and still trust one another. With AI, we will need a similar mindset. We will probably have to accept a degree of variability in AI outputs, as long as it is within a regulated and acceptable range. The key is to establish frameworks that make AI predictable and governed, not flawless. Over time, I believe we will see clearer regulatory guidance and more robust internal standards. But culturally, we will also need to become comfortable with AI as a partner that can err occasionally, just as people do, while still improving efficiency and consistency overall.

What ROI levers have you seen AI deliver at Nesto?

Samuel: Many companies focus on cost reduction, but in my view, the bigger opportunity lies in revenue growth and value creation. AI does not just replace human effort; it can help people do more, faster, and with greater precision.

At Nesto, we have seen that AI allows teams to focus on higher-value tasks while the system handles the repetitive, lower-value work. For example, in underwriting, humans are excellent at reasoning through complex edge cases, but AI can analyze every possible decision path at scale. Sometimes that leads to better decisions: lower costs for lenders or better rates for customers. So rather than just treating AI as a cost-saving tool, we see it as a force multiplier, something that amplifies what people can achieve rather than replacing them.

With regards to hiring, how has generative AI changed what you look for in product talent?

Samuel: For product leaders, AI fluency is becoming as fundamental as data literacy. It is no longer enough to just understand what AI is. You have to engage with it directly. I encourage my team to experiment constantly: try new models, test new tools, and see what actually works versus what is hype. But beyond technical fluency, it’s about developing an AI-first mindset. When you start a task, ask: “Should I do this manually, or can AI help me get there faster?” Whether it is drafting an email, writing a user story, or doing market research, it is about finding where AI can streamline your process.

Looking ahead 12–18 months, how do you see AI evolving, and where does Nesto fit in?

Samuel: While it’s always risky to predict too far out, I believe voice and conversational AI may see rapid improvements. These systems are already capable of holding surprisingly natural conversations and following scripts effectively, though latency and response speed still need work. At Nesto, we are exploring how this might complement our call centers: handling routine inbound calls, gathering customer data, or even pre-filling parts of a mortgage application to enable near-instantaneous loan-approval decisions. In theory, you could imagine an AI assistant that collects information, runs initial underwriting checks, and transfers the call to a human adviser for final review, all within one conversation.

What do you think about the balance between automation and human-in-the-loop?

Samuel: It is about understanding where humans add unique value. Some parts of the process such as repetitive questions, data collection, and validation are perfect for automation. AI systems are consistent and do not fatigue, whereas humans naturally vary in performance over time.

But when customers are making major financial decisions, like taking out a mortgage, empathy and reassurance still matter. Interestingly, some users have told us they actually prefer interacting with AI for certain steps because it’s objective, efficient, and does not judge. Yet at the final stage, when someone wants confirmation that they have made the right choice, they value a human voice. So the right model, in my opinion, is hybrid: let AI handle the routine, and bring in humans for moments that require emotional intelligence and trust.
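The hybrid model Samuel describes amounts to a routing decision per workflow step. A minimal sketch, with entirely hypothetical step names, might look like this; the deliberate design choice is that anything unrecognized falls back to a human.

```python
# Hypothetical step names for illustration only.
ROUTINE_STEPS = {"collect_documents", "verify_income", "validate_identity"}
HUMAN_STEPS = {"final_confirmation", "decline_notification"}

def route_step(step: str) -> str:
    """Decide whether a workflow step goes to AI or a human adviser."""
    if step in ROUTINE_STEPS:
        return "ai"
    if step in HUMAN_STEPS:
        return "human"
    # Default to a human for anything unrecognized: in a regulated,
    # emotionally charged process, the safe fallback is human oversight.
    return "human"
```

Keeping the routing table explicit also gives compliance teams a single place to review which decisions are ever automated.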

Finally, what advice would you give to product leaders beginning their AI journey?

Samuel: Start with a real problem, not the technology. Too many teams begin with a desire to “use AI” and then go looking for a use case. Instead, clearly define what problem you are solving and why AI is the right solution.

Also, involve risk, compliance, and security teams from day one. These stakeholders often determine whether an AI solution can actually go live. Working with them early avoids rework later and ensures whatever you build stands up to regulatory scrutiny. And finally, keep your focus on customer impact. In our case, speed, trust, and repeatability are what matter most. If AI helps us improve those, then it is a win. Real success in AI adoption is not about novelty; it is about solving tangible problems better than before.

Timely AI Updates

1. Anthropic partners with London Stock Exchange Group (LSEG) to bring financial data into its Claude platform

  • In October 2025, LSEG announced a collaboration with Anthropic: licensed financial data from LSEG will be integrated into Anthropic’s “Claude for Financial Services” offering, enabling tasks such as summarising earnings calls, scanning due diligence materials, triggering workflows, and surfacing market signals.
  • Why this matters: This marks a clear move by Anthropic from a generic LLM/chatbot play towards vertical-industry specialist AI (in this case, finance). The availability of premium data combined with agentic workflows gives enterprises in financial services a more compelling value proposition.
  • Implications: If you are in fintech, or in a domain where high-quality proprietary data and workflow automation matter, this is a useful signal. It also demands caution: integrating large data sets and agentic workflows brings additional governance, security, and regulatory burdens.

2. Palo Alto Networks launches “Prisma AIRS 2.0”, “Cortex Cloud 2.0” and “AgentiX” to secure the agentic enterprise

  • Palo Alto Networks introduced three major releases aimed at securing the agentic enterprise, covering AI agent security, AI model security, and a no-code builder for custom agents.
  • Why this matters: As enterprises begin to deploy autonomous agents (not just chatbots), security, governance and manageability become central. This is a strong vendor move signalling that managing agentic workflows is now a board-level concern.
  • Implications: The industry is shifting from “we trial gen-AI” to “we govern and secure agentic-AI at scale.” If you are building use-cases that are leveraging agentic AI, involve risk, compliance and security teams from day one.

3. OpenAI Introduces AgentKit and Sora 2

At their recent DevDay 2025 event, OpenAI unveiled a major shift: not just new models, but a full toolkit for building agents and embedded apps inside ChatGPT.

  • AgentKit is a set of tools for developers/enterprises to build, deploy and optimize agents (autonomous-capable workflows) rather than just chat responses. OpenAI also announced access to new high-accuracy models (e.g., GPT‑5 Pro) and video generation models (e.g., Sora 2) via APIs.
  • Why this matters: If your company is thinking about embedding AI into workflows, moving beyond proofs-of-concept (PoCs) to production, this is a signal that the ecosystem is maturing. Rather than just “which model”, the question now is “which agent, which app, what workflow, what governance, and how to integrate”.
  • Implications: Consider whether your enterprise has the data & systems ready for agentic automation, not just assistance. Also evaluate vendor lock-in, integration risk, and governance/operational readiness.

4. Applied AI: “Learning on the Job,” a Self-Evolving Agent for Long-Horizon Tasks

  • A paper (published 9 Oct 2025) titled “Learning on the Job: An Experience-Driven Self-Evolving Agent for Long-Horizon Tasks” introduces MUSE, an agent framework that accumulates experience, reflects on its performance and continually improves rather than being a static pretrained model.
  • Why this matters: Most current agents are still static after deployment. The idea of agents that learn from experience in situ is a next-step toward true autonomy. For product leaders: this evolution signals that future AI investments may need to factor in continuous learning rather than “deploy and forget.”
  • Implications: If you are designing an AI-agent roadmap, plan for experience accumulation, memory, and adaptation, not only the initial model build.

What we are reading at Sagard

1. Anthropic – Agent Skills

  • Anthropic’s Skills are modular, reusable task packages that let Claude operate with organization-specific instructions, code, and resources, aligning outputs with established workflows and brand standards.
  • Why this matters: It addresses a common enterprise challenge: generic AI that lacks operational context and requires extensive prompting. This approach enables more consistent adherence to internal rules and processes.
  • Implications: Emphasis shifts from model benchmarks to workflow alignment and governance. Teams can standardize agent behavior through curated skill libraries and controls around access, provenance, and security.

2. OpenAI – ChatGPT Atlas

  • ChatGPT Atlas runs as a browser-embedded assistant that provides contextual help across websites, including summarization and user-authorized actions, within the flow of daily work.
  • Why this matters: It reduces friction created by switching between apps and detached AI tools, offering assistance where information is consumed and tasks are executed. Memory and context features support continuity across sessions under user control.
  • Implications: Productivity tooling centers more on embedded, agent-assisted experiences than on standalone apps. Organizations prioritize controls for privacy, data handling, and action authorization within the browsing environment.

In a nutshell, on the enterprise side, we are evolving from “generative AI as novelty” toward verticalised use cases and agentic systems. On the research side, the frontier is shifting from getting answers out of an LLM to combining LLMs with agents, memory, and tools.

Keeping these advancements in mind, here are a few questions for C-suite executives and tech leaders to consider:

  • Do we have data flows that could benefit from agentic automation (not just chatbots)?
  • Is our governance, security and infrastructure ready for agents that act?
  • Are we selecting vendors who are building for the agentic future, rather than merely offering generative shortcuts?
  • How will we measure value in addition to cost savings?
