In this month’s issue:
- Inside LeapXpert’s AI-First Transformation with Dima Gutzeit, CEO of LeapXpert
- Top 5 Takeaways from the GPT-5 Announcement
- Generative Engine Optimization (GEO): The New Playbook for Search
- AI-First Development Platforms Reshaping Software Creation
- What we are reading at Sagard
Inside LeapXpert’s AI-First Transformation with Dima Gutzeit, CEO
For this month’s newsletter, we spoke with Dima Gutzeit, CEO of LeapXpert, on transforming an enterprise into an AI-native company. LeapXpert helps enterprises, especially in financial services, securely use modern messaging platforms for business communication, and is now embedding AI deeply into its products and operations. Our conversation explored how LeapXpert uses AI both in its core product and internally to boost productivity, why cultural change must start at the top, and how the company balances building in-house with buying off-the-shelf tools.
Here are our key takeaways from the interview:
- AI before headcount: LeapXpert solves problems with AI first, not more people. This mindset has powered a 3x revenue jump in 18 months with a smaller team, proving that working smarter beats working harder.
- Innovation as a team sport: AI adoption started top-down but quickly became a company-wide movement. Budgets, freedom to experiment, and monthly town-hall showcases turned AI into something everyone wanted to be part of.
- Build where it matters: The team buys proven tools for generic tasks but builds their own for anything core to operations, moving fast enough to shrink onboarding from two weeks to just five minutes.
- Practical with models: LeapXpert picks the right model for the job, often mixing and matching providers, while steering clear of open-source maintenance headaches so they can focus on building real-world solutions.
Let’s dive in.
For people who don’t know about LeapXpert, could you just give a quick overview of what the company does and what you’re specifically doing around AI?
Dima: LeapXpert works with many enterprises, particularly financial institutions, solving what we call “responsible business communication.” Essentially, we enable businesses to securely use messaging platforms, as messaging increasingly replaces email in professional settings. Regarding AI, we focus on two major aspects: first, we have an AI-driven product named Maxen that extracts valuable intelligence from captured communication data, helping users become more productive. Second, we are deeply committed to internal AI adoption aimed at optimizing our operational efficiency across every department, fostering rapid growth without solely increasing headcount.
You mentioned LeapXpert is becoming an AI-native company. What does that actually look like day-to-day, and what caused this shift?
Dima: Being AI-native means we prioritize solving problems with AI before resorting to hiring additional people. Companies founded after the generative AI boom operate differently, achieving significantly greater productivity per employee. We began our transformation roughly a year ago, realizing that legacy businesses must adapt to this model to survive long-term. Retrofitting legacy processes is more challenging than building from scratch, but it’s essential to remain competitive. Shockingly, many companies still aren’t adopting AI meaningfully, which may threaten their survival.
How did you actually go about making that cultural shift internally, especially when it comes to talent?
Dima: For our AI product, we had an experienced AI team already working effectively. Internally, however, we initiated a significant mindset shift top-down, clearly communicating that professionals who fail to adopt AI tools risk becoming irrelevant in their careers. We provided infrastructure, substantial budgets, and complete freedom to experiment, empowering especially our tech-savvy employees to automate workflows. Initially driven from the top, this quickly evolved into a bottom-up movement, with teams enthusiastically experimenting and innovating autonomously.
What helped create this culture where people are enthusiastically experimenting with AI?
Dima: First, we clearly demonstrated the “art of the possible” by providing successful examples, infrastructure, budgets, and autonomy. Critically, we created platforms like monthly town halls where teams could showcase their AI creations. Celebrating these internal successes openly became contagious, motivating further innovation. The results were remarkable: our customer success and support teams, for example, significantly improved their service efficiency through innovative AI tools developed internally.
What have been some of the most impactful or exciting AI tools your team has built?
Dima: One standout is our customer support system, where bots analyze customer issues immediately, suggesting precise solutions along with confidence scores, significantly speeding up response times. Another example is our Site Reliability Engineering (SRE) team, which automated system health monitoring and remediation tasks, drastically cutting down manual labor. We’ve also automated customer provisioning processes, reducing onboarding from two weeks to just five minutes.
You’re building a lot internally – what’s your philosophy around build versus buy, given there are so many AI tools available?
Dima: We typically buy tools for standardized processes like marketing because proven external solutions already exist. But for deeply ingrained operational processes, external tools rarely match our exact needs. The rapid speed of internal AI-driven software development has shifted our preference strongly towards building tailored solutions.
Given your investment in AI, how are you measuring the ROI internally? Do you see clear results yet, or is it more of a general feeling of productivity improvement?
Dima: Measuring precise ROI is challenging, so we focus on productivity: how much more our teams can accomplish in a given period. Automating manual tasks has directly translated into substantial productivity gains. Compared to 18 months ago, our revenue tripled while headcount decreased by 25%. Employees aren’t working harder; they’re working smarter.
How do you handle the risks, especially when AI is involved in customer-facing tasks like customer support? Do you trust the AI outputs you’re seeing?
Dima: For customer-facing interactions, we maintain strict human oversight to ensure accuracy and reliability. Internally, the AI tools are developed by the employees who use them, creating a deep understanding and inherent trust. This transparency means employees clearly understand when AI output can be trusted and proactively address any issues that arise, enhancing overall confidence in our internal AI applications.
What’s your stance on using multiple AI models internally? Are you model-agnostic, or do you stick with specific providers?
Dima: Internally, we view models similarly to human talent – each excels in different tasks, and we select models accordingly. Our AI product relies on enterprise-grade APIs from trusted providers like OpenAI and Microsoft, ensuring stability and quality. Internally, however, we remain model-agnostic, freely choosing the model best suited to each specific task. Some tasks even benefit from using multiple models simultaneously.
Have you experimented at all with open-source models, or are you primarily using third-party commercial APIs?
Dima: About six months ago, we experimented with open-source models like Llama. While promising, the maintenance requirements, hardware costs, and constant upkeep proved impractical. Ultimately, commercial models provided the reliability and convenience needed, allowing our teams to focus resources on innovative application development rather than managing underlying infrastructure.
What’s next for LeapXpert on your AI journey? Where do you see yourselves going in the next 6 to 12 months?
Dima: We’re currently about 20% along the path to becoming fully AI-native. We’ve tackled easy, high-value automations, but there’s much more complexity and integration potential ahead. We continuously refine internal AI tools, some of which could become independent products or even spin-off startups.
What advice would you share with companies that are just getting started with AI? Any lessons you’ve learned about what works best?
Dima: Leadership must drive AI adoption from the top initially. You must enable your employees to fully integrate AI into everyday operations, not just treat it as an experiment, and genuinely trust its capabilities. Provide a basic but robust safety framework, then empower your teams to innovate. Our experience proves this is the most effective approach.
Top 5 Takeaways from the GPT-5 Announcement
OpenAI announced its latest frontier model, GPT-5, today. There were a lot of great demos and announcements, from ChatGPT getting access to Gmail and Google Calendar (not just for Deep Research) to vibe-coding and building mini-apps within ChatGPT! Here are our top 5 takeaways, focused on the GPT-5 API that we can all leverage to improve and build new products.
- Cost: Going into today’s announcement, most of us expected GPT-5 to be expensive. However, it’s cheaper than expected for frontier quality: $1.25 per 1M input tokens, and $10 per 1M output tokens. We covered costs for all frontier models in last month’s newsletter.
- Mini and Nano: OpenAI also announced GPT-5 Mini and Nano, which let you make trade-offs between performance, cost, and latency. From the initial read, it looks like Mini will be the go-to model for most tasks; Nano can handle routing, classification, extraction, and simple summaries; and the full GPT-5 can be reserved for heavy workloads.
- Context Window: OpenAI increased the input context window to 400k tokens and the output to 128k tokens, up from 200k and 100k respectively for o3. While this is still less than the 1M context window that Gemini supports, it’s definitely a nice bump!
- Agentic Tool Use: OpenAI introduced better parallel tool calling. You could technically run tools in parallel before, but the models were pretty bad at it and mostly ran tools sequentially. With GPT-5, the model decides all the tools it wants to execute, runs them in parallel, and then executes more tools (if needed) or generates the response.
- New Control Knobs: The GPT-5 API also offers developers new controls to speed up response generation: first, reasoning_effort, which controls how long the model thinks; second, verbosity, which controls the output length; and third, preamble messages, which allow GPT-5 to communicate its plans and progress to users before and after it makes tool calls. Combined with the Mini and Nano versions, this should help tremendously with latency; a minimal usage sketch follows below.
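To make the first two knobs concrete, here is a minimal sketch of a GPT-5 call through the OpenAI Python SDK’s Responses API. The parameter names follow the announcement, but the exact SDK surface may differ by version, so treat this as an assumption rather than confirmed syntax.

```python
# Minimal sketch: GPT-5 with the new control knobs via the Responses API.
# Parameter names are taken from the GPT-5 announcement and may differ
# slightly across SDK versions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.responses.create(
    model="gpt-5-mini",                  # or "gpt-5" / "gpt-5-nano"
    input="Summarize the key risks in this vendor contract clause: ...",
    reasoning={"effort": "low"},         # how long the model thinks
    text={"verbosity": "low"},           # how long the answer is
)

print(response.output_text)
```

Dialing both knobs down, and routing simple tasks to Mini or Nano, is the main lever the API now gives you for trading answer depth against latency and cost.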
Generative Engine Optimization (GEO): The New Playbook for Search
AI-powered chat interfaces and large language models (LLMs) have revolutionized lead generation. In our portfolio, several companies have reported that 25–55 percent of their qualified leads now originate from AI-powered chat interfaces.
Unlike traditional SEO, which relies on backlinks and exact-match keywords, LLM-based search uses semantic embeddings and vector search. It returns concise, natural-language answers directly, so the AI response itself often serves as the landing page.
Semantic search interprets user intent instead of matching words. By embedding both queries and content into vector spaces, the system ranks results by proximity. Consequently, AI assistants frequently quote content that might rank lower in traditional SEO but delivers higher contextual relevance. This shift forces marketers to optimize for semantic quality rather than keyword density.
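To make the mechanics concrete, here is a minimal sketch of ranking pages by embedding proximity rather than keyword overlap. It uses OpenAI’s embeddings endpoint as one possible backend; the page texts, query, and model choice are illustrative assumptions, not a prescribed stack.

```python
# Minimal sketch of semantic (vector) search: embed the query and each page,
# then rank pages by cosine similarity instead of keyword matches.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

pages = [
    "How to automate customer onboarding with messaging APIs",
    "Quarterly earnings report, fiscal year 2024",
    "A practical guide to compliant WhatsApp use in banking",
]
query = "secure business messaging for financial institutions"

def embed(texts):
    """Return one embedding vector per input text."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vecs = embed(pages)
query_vec = embed([query])[0]

# Cosine similarity: proximity in embedding space stands in for relevance.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
for score, page in sorted(zip(scores, pages), reverse=True):
    print(f"{score:.3f}  {page}")
```

The page on compliant WhatsApp use would likely outrank the others despite sharing few exact words with the query, which is precisely the behavior GEO tries to optimize for.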
To measure brand presence in AI answers, companies now use Answer Engine Optimization (AEO) tools. For example, Profound tracks how often and how prominently your content appears in ChatGPT’s responses or Google’s AI Overviews.
Today’s major LLM providers still rely on external web-search APIs from Bing or Google. But Microsoft’s planned retirement of its legacy Bing Search APIs on August 11, 2025 (Microsoft Learn) underlines a broader pivot toward proprietary web-indexing systems. As a result, organizations must prepare for rapid changes in how AI models access and update web data. Real-time monitoring, prompt-driven content creation, and ongoing semantic optimization have become critical to staying visible in AI-driven search.
AI-First Development Platforms Reshaping Software Creation
Leading AI-first development platforms—Replit, Lovable, Bolt and V0—are lowering the barrier to software creation with simple prompts and chat-style interfaces. They eliminate complex setup, deep coding expertise and many time or cost constraints, making app building fast and accessible to a much wider audience.
Replit
Replit has surpassed $100 million in annual recurring revenue as of June 2025 (Startup Hub) and reports over 22 million registered users. Growth is driven by Replit Agent, an AI assistant that writes code, sets up environments, and manages deployment entirely in the browser, with no local installation.
Lovable
Founded in Stockholm in 2023, Lovable now has 2.3 million users and 180,000 paid subscribers. It reached $75 million ARR within seven months, then closed a $200 million Series A at a $1.8 billion valuation in 2025. Its “vibe coding” model lets users describe app ideas in plain language and receive fully functional code.
Bolt
Spun out of StackBlitz in October 2024, Bolt generates full-stack applications from natural-language descriptions. It includes live previews and an integrated debug terminal. Bolt hit $4 million ARR within its first 30 days and grew to $40 million by March 2025, serving over 5 million active “software composers.” Unlike Lovable and V0, Bolt emphasizes real-time developer tools and debugging support alongside AI generation.
V0 by Vercel
V0 converts plain-language prompts into styled React components (using Tailwind CSS) and deploys them seamlessly on Vercel’s Frontend Cloud. Vercel raised $250 million in Series E funding at a $3.25 billion valuation in May 2024 and is doubling down on its AI SDK and V0 roadmap (Reuters).
Risks and Best Practices
AI-generated code can bring security vulnerabilities and often lacks clear structure or documentation. Treat it like external code: review it carefully, write additional tests, and apply your domain expertise. For mission-critical or regulated projects, a traditional development process remains the safest path to ensure performance and compliance. You can leverage these AI tools to quickly generate prototypes, gather early customer feedback, and streamline collaboration between designers, product managers, and engineers.
What we are reading at Sagard
OpenAI’s New Open-Source Models
OpenAI has released two open-source language models: gpt-oss-120b and gpt-oss-20b. These models deliver strong reasoning and instruction-following performance, comparable to previous closed models like o4-mini and o3-mini, but are small enough to run on laptops or even phones. Both come with a 128k context window, making them practical for handling long or complex tasks, and are licensed under Apache 2.0, so you can deploy them in production with minimal restrictions.
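For a flavor of what running these weights yourself might look like, here is a minimal sketch using Hugging Face transformers. The model identifier is assumed from the release, and a real deployment needs to account for GPU memory, quantization, and the model’s chat template, so check the model card before relying on this.

```python
# Minimal local-inference sketch for the smaller open-weight model.
# The model id is an assumption based on the release; verify it, and the
# hardware requirements, on the Hugging Face model card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    device_map="auto",   # spread weights across available GPU/CPU memory
)

prompt = "Explain vector search in two sentences."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```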
However, there’s a catch: running open-weight models means you’re responsible for ongoing maintenance, hardware investments, updates, and monitoring. While these models lower the barrier for advanced AI adoption and offer greater control, they come with real operational requirements that teams need to be ready for.
This shift marks an important moment for enterprise AI; powerful, open models are now widely available, but owning the stack brings new challenges along with new opportunities.
Anthropic launches Claude for Financial Services
In July, Anthropic launched a financial analysis solution built on Claude. We think the most interesting part here is that they are adding pre-built connectors to major financial data sources like PitchBook, S&P Global, Palantir, Databricks, and Snowflake. The platform lets developers pull together data from both public and private markets, with each datapoint linked directly to its source for easier verification.
Instead of relying on manual cross-referencing or batch downloads, analysts can now check and combine information across multiple platforms in real time. This is meant to reduce errors, speed up research, and make financial analysis workflows more transparent.
Google Sheets is getting an =AI function
This is one of our favourite AI integrations into a product that we use daily, no, hourly.
You can use the =AI function in Google Sheets to do some really cool things using natural language. Start with =AI(“natural language prompt”, [range]) and let Gemini do its magic. It can easily do the following and much more! A few illustrative formulas follow the list below.
- Write formulas
- Generate text
- Summarize information
- Categorize information
- Analyze sentiment
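For instance, building on the syntax above (cell references are hypothetical and purely illustrative):
- =AI("Write a formula that sums column C where column B says 'Paid'")
- =AI("Summarize this customer feedback in one sentence", A2)
- =AI("Categorize this expense as Travel, Software, or Other", B2)
- =AI("Is the sentiment of this review positive, neutral, or negative?", C2)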