A Journey to AI-Powered Migrations
How AI is transforming large-scale migration efficiency
Migrating large-scale financial systems is no small task. It’s complex, time-consuming, and full of edge cases that can trip up even the most experienced engineers. The challenge in our case is migrating code from our legacy systems to our new financial-activity-model (FAM), a system we’ve built that unifies how financial activities are represented across our services.
Historically, each service defined financial activities differently: an internal transfer, for example, was represented one way in our money movement service and another way in our metrics service. FAM centralizes this by providing a single point of integration, which simplifies the architecture and improves control over our user activity feeds.
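To make the idea of a single canonical representation concrete, here is a minimal sketch of what a unified activity model and a per-service adapter might look like. All field names, types, and the adapter function are illustrative assumptions, not FAM's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class ActivityType(Enum):
    # Illustrative activity types; the real FAM taxonomy is internal.
    INTERNAL_TRANSFER = "internal_transfer"
    EFT_DEPOSIT = "eft_deposit"
    CURRENCY_CONVERSION = "currency_conversion"


@dataclass(frozen=True)
class FinancialActivity:
    """One canonical representation consumed by every downstream service."""
    activity_id: str
    activity_type: ActivityType
    amount_cents: int        # money as integer minor units to avoid float error
    currency: str            # ISO 4217 code, e.g. "CAD"
    occurred_at: datetime
    source_service: str      # which legacy service produced the event


def from_money_movement(event: dict) -> FinancialActivity:
    # Hypothetical adapter: each legacy service gets one of these, so
    # downstream consumers only ever see FinancialActivity.
    return FinancialActivity(
        activity_id=event["id"],
        activity_type=ActivityType.INTERNAL_TRANSFER,
        amount_cents=event["amount_cents"],
        currency=event["currency"],
        occurred_at=datetime.fromisoformat(event["created_at"]),
        source_service="money_movement",
    )
```

The payoff of this shape is that new consumers integrate against one model rather than against every legacy service's format.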
The migration itself, however, requires engineers to move a vast number of activities from legacy services into FAM and maintain consistency. This is where AI is beginning to transform the process.
The Early Days: Manual and Slow
The first migration – internal transfers – was completed manually. Without a defined process or AI support, the work took over two months. Engineers had to understand code across multiple services, account for unique business logic, and carefully implement equivalent functionality in FAM.
While some structures were similar across activities, each had its own nuances, making it challenging to move quickly while preserving correctness.
From One-Shot Prompts to Reusable Templates
The team set up an environment where the AI could “see” both our legacy and FAM codebases at the same time through Cursor, an AI-powered coding tool. With Cursor’s agent mode enabled, it could perform a migration from start to finish independently.
The early workflow relied on one-shot prompts: we prompted the AI to generate code for a migration, then manually refined the output. While workable, this approach was inconsistent and time-consuming.
The big breakthrough came when we asked the AI to document its process after we completed a portion of the migration. Once we were happy with the results, we asked it to create a template with the step-by-step process it took to perform the migration. That way, we could use the template for future migrations.

We had the AI create several of these templates to assist across different service migrations. Each step included automated testing to ensure accuracy before moving forward. The nice thing about the templates is that they produced PR-sized outputs, which were easier to review and verify.
We found the best approach was 6–7 steps per template, to avoid overwhelming the AI with context. Over time, we refined the templates with each new activity implemented, and they grew more precise and repeatable.
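The template-plus-gating idea can be sketched as a small data structure: an ordered list of steps, each with a prompt for the agent and an automated check that must pass before the next step starts. The class and field names here are hypothetical scaffolding, not the team's actual tooling:

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class MigrationStep:
    name: str
    prompt: str                 # instruction handed to the coding agent
    verify: Callable[[], bool]  # automated test gating the next step


@dataclass
class MigrationTemplate:
    activity: str
    steps: list[MigrationStep] = field(default_factory=list)

    def run(self) -> list[str]:
        """Execute steps in order; stop at the first failing check."""
        completed = []
        for step in self.steps:
            # In practice the agent would act on step.prompt here; this
            # sketch only models the gating between steps.
            if not step.verify():
                raise RuntimeError(f"step failed: {step.name}")
            completed.append(step.name)
        return completed
```

Keeping the step count low (6–7, per the text) keeps each agent invocation's context small, and a failing check halts the run before errors compound.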
Early Results: Faster and More Consistent Migrations

Manual migrations such as internal transfers and EFT deposits took on average 1.5–2 months each. Our first AI-assisted migration (currency conversion) set the foundation for reusable templates. With those in place, the majority of subsequent activities like e-transfer send and spend were completed in a few days.
Each migration added new patterns to the AI’s library, allowing it to produce increasingly relevant and accurate drafts. The more activities migrated, the more examples the AI could draw from, accelerating each subsequent effort.
Testing & Code Quality
While generating code helped us speed up migrations, it also required us to be more rigorous with our review and testing process so we didn’t miss any issues. To do this, we developed an in-house verification framework that could generate financial activities in a staging environment. It verified that data in downstream services was populated and accurate by comparing it to the behaviour and data from the legacy system.
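The heart of such a verification framework is a field-by-field comparison between what the legacy system produced downstream and what the migrated FAM path produced. A minimal sketch of that comparison, with an invented function name and data shape:

```python
def compare_downstream(legacy: dict, fam: dict, fields: list[str]) -> list[str]:
    """Return human-readable mismatches between the legacy system's
    downstream data and FAM's, for the given fields. An empty list
    means the migrated activity matched legacy behaviour."""
    mismatches = []
    for f in fields:
        if legacy.get(f) != fam.get(f):
            mismatches.append(f"{f}: legacy={legacy.get(f)!r} fam={fam.get(f)!r}")
    return mismatches
```

In a real framework the two dicts would be fetched from downstream services after generating the same activity through both paths in staging; the comparison itself stays this simple.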
For every new activity migrated we:
- Ran the verification framework on several test cases for the activity in staging and then again on test accounts in production.
- Manually tested all the activity states in a staging and production environment.
- Gradually rolled out the migrated activity to employees, then to clients.
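One common way to implement an employees-first, percentage-based rollout like the last step above is a deterministic hash bucket per account, so a given account's experience is stable as the percentage ramps up. This is a generic sketch of that technique, not the team's actual rollout mechanism:

```python
import hashlib


def in_rollout(account_id: str, is_employee: bool, client_pct: int) -> bool:
    """Employees always get the migrated path; clients ramp up by
    deterministic bucket so the same account never flip-flops."""
    if is_employee:
        return True
    # Hash the account id into a stable bucket in [0, 100).
    bucket = int(hashlib.sha256(account_id.encode()).hexdigest(), 16) % 100
    return bucket < client_pct
```

Raising `client_pct` from 0 to 100 only ever adds accounts to the rollout; it never removes one that was already migrated.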
Additionally, for every pull request we:
- Thoroughly reviewed every line generated by an LLM three times: first by the developer who generated the code, then by an AI review bot, and finally by a second human reviewer.
- Used LLMs to write more thorough unit tests, achieving better coverage than typical manually written tests.
- Manually tested in a staging environment where appropriate.
Where AI Helps and Where Humans Remain Essential
AI proved great at pattern recognition. Whenever a task required repeating or adapting an established structure, AI reduced the time and effort involved:
- Repetitive, standardized transformations (e.g., migrating our logic for writing to the ledger).
- Maintaining naming conventions and structural consistency.
- Generating boilerplate code.
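The first bullet above, standardized transformations such as ledger writes, is exactly the mechanical rename-and-copy work an agent reproduces reliably. A toy illustration with an invented field mapping (the real legacy and FAM schemas are internal):

```python
# Hypothetical legacy-to-FAM column renames for a ledger row.
LEDGER_FIELD_MAP = {
    "txn_id": "activity_id",
    "amt": "amount_cents",
    "ccy": "currency",
}


def to_fam_ledger_row(legacy_row: dict) -> dict:
    """Purely mechanical transformation: rename each legacy field to its
    FAM equivalent with no business logic involved."""
    return {fam_key: legacy_row[legacy_key]
            for legacy_key, fam_key in LEDGER_FIELD_MAP.items()}
```

Once one such transformation exists in the codebase, generating the next is pattern-matching, which is why this class of work benefits most from AI assistance.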
Human expertise, however, remained critical for:
- Complex business logic, like balance adjustments or currency conversions.
- Multi-step call flows that require deep understanding of context.
- Catching edge cases that only experienced engineers would recognize.
The winning formula was a blend of the two: AI for repetitive, patterned work, and engineers for design decisions, correctness, and business logic.
Looking Ahead
Even though this is still early in the migration journey, the results are promising. Any repetitive code change or new instance of an existing pattern is a candidate for AI assistance.
The question to ask yourself is: Does this task follow patterns we’ve written before? If the answer is yes, AI can probably help.
If you’re shipping production code generated by AI, maintaining high-quality, well-tested code is achievable with a thorough verification process like the one we used for the FAM migration. It’s important to maintain standard code review and testing best practices, but also to be extra cautious. After all, even though LLMs generate the code, the developer is responsible for its reliability.

The FAM migration is still ongoing, but our early experiments with AI-assisted development demonstrate serious potential. The benefits go beyond speed. Offloading repetitive work helps engineers focus on the interesting parts: architecture, business logic, and edge cases. And with every migration, the AI gets better examples to learn from, making the process much smoother over time.
...
Written by Marina Samuel, Staff Software Developer
Interested in working at Wealthsimple? Check out the open roles on our team today.
