When Martin McCann and Mathias Born decided to create Trade Ledger, an Australian lending platform, their plan was to simplify and streamline lending through cloud-based software for lenders. Their journey provides insights for CIOs in their own development efforts.
When Trade Ledger started to develop its software as a service, Born’s team was focused on how to build and architect a system that wouldn’t be obsolete in a few years. Beyond the technical choices, Born had to consider whether the whole team should work independently on each piece of functionality, managing and maintaining their own portions of the repository.
And, finally, in a business context, Born says it is important to know whether the functionality is modular and can be reused in different areas.
“It’s sometimes more an art than a science. But breaking those components down into the right context was definitely a challenge,” he says.
The challenges of building modular functions and adopting a microservices architecture
There were multiple challenges along the way around the approach to adopting a microservices architecture.
For organisations building a similar system, Born says that it is important to figure out if there are certain parts of the system that get a lot of usage and decouple them so the function can then be scaled independently.
One technical challenge was that many of Trade Ledger’s engineers were more familiar with traditional SQL, traditional relational databases, and a traditional way to construct data models. “Putting that together into the document-oriented database is definitely a different way of thinking, but nevertheless the document-oriented database was the right model for us,” Born says.
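To make that shift in thinking concrete, here is a hypothetical sketch (in Python for brevity, though Trade Ledger’s own components are written in Java) contrasting relational-style modelling with the document-oriented approach. The field names and data are invented for illustration, not Trade Ledger’s schema.

```python
# Relational-style thinking: normalised rows linked by foreign keys,
# with a manual "join" at query time.
applications = [{"id": 1, "applicant_id": 10}]
applicants = [{"id": 10, "name": "Acme Pty Ltd"}]
invoices = [
    {"id": 100, "application_id": 1, "amount": 5000},
    {"id": 101, "application_id": 1, "amount": 7500},
]

def total_for_application(app_id):
    # Joining across tables to answer one question
    return sum(i["amount"] for i in invoices if i["application_id"] == app_id)

# Document-oriented thinking: one self-contained document with the
# related data embedded, so a single read answers the same question.
application_doc = {
    "_id": 1,
    "applicant": {"name": "Acme Pty Ltd"},
    "invoices": [{"amount": 5000}, {"amount": 7500}],
}

def total_for_document(doc):
    return sum(i["amount"] for i in doc["invoices"])

print(total_for_application(1))         # 12500
print(total_for_document(application_doc))  # 12500
```

Both approaches answer the same question; the document model trades join flexibility for reads that need no assembly at query time.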
Another challenge involved an early decision to build the system monolithically. The first version of Trade Ledger that went to market in 2017 was built by a team of three engineers, despite being a bigger and more complex system than what exists now. That monolithic approach made it difficult to evolve the platform, so Trade Ledger had to switch to a modular, component-based approach.
As a result, the team had to break the modules down into individual components in the next iteration. To do that, Trade Ledger took a phased approach, refactoring certain parts of the system and creating dedicated components.
For example, Trade Ledger initially had a feature that allowed a connection with cloud-based accounting systems like Xero. Now, it resides in its own dedicated component, rather than being a function within a monolithic application. “We have taken this piece and moved it into its own connector component, which then allowed us to extend this independent of any other change in the initial monolith,” Born says.
The application architecture itself has to be modular, not simply the coded components, Born notes. “In a microservices-based system, we want to ensure that the services work independently of each other. Otherwise, we risk building a monolithic application using a microservices architecture.”
In a modular, microservices application, temporary inconsistency is expected, because data across the various services can be at different stages depending on execution time. It is therefore important to ensure the data will nonetheless eventually be consistent.
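A minimal sketch of that eventual consistency, with invented service names: two services keep their own copies of a status, and a queue of events carries updates between them, so the copies can briefly disagree before converging.

```python
from collections import deque

origination = {"app-1": "approved"}      # the service that owns the change
reporting = {"app-1": "pending"}         # a downstream service's stale copy
events = deque([("app-1", "approved")])  # the update, not yet delivered

# Before the event is processed, the two services disagree - this
# temporary inconsistency is expected in a microservices system.
assert origination["app-1"] != reporting["app-1"]

# Delivering the queued events brings the system back into agreement.
while events:
    app_id, status = events.popleft()
    reporting[app_id] = status

print(reporting["app-1"])  # approved
```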
“Other challenges are leveraging the power of document-oriented thinking,” Born says. “In relational databases, data is typically linked together, and you create joins to query or consolidate the data. In a document-oriented database, you need to think differently and can store a lot of information in an embedded object. However, if this is information which changes very frequently then it might not be the best approach to store everything in one single document. Multiple smaller documents may be more efficient.”
Born suggests a few things to look out for:
- If the data belongs to the same domain — domain-driven design — and the frequency of changes is the same, put everything in the same model.
- If one entity can live without the other entity, put them into two separate documents.
- If an entity always requires another specific entity — a one-to-many relationship — chances are that you can embed them in the same document.
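The heuristics above can be encoded as a small decision helper. This is purely illustrative (Python for brevity), not a tool Trade Ledger uses; the inputs describe the relationship between two entities.

```python
def storage_strategy(same_domain: bool,
                     same_change_frequency: bool,
                     can_live_independently: bool) -> str:
    """Suggest whether two related entities should share one document."""
    if can_live_independently:
        # Rule 2: independent lifecycles -> separate documents
        return "separate documents"
    if same_domain and same_change_frequency:
        # Rule 1: same domain, same change cadence -> one model
        return "embed in one document"
    # Rule 3 covers the dependent one-to-many case; but when change
    # frequencies differ, smaller separate documents may be more efficient.
    return "embed, but consider splitting frequently changing parts"

print(storage_strategy(True, True, False))   # embed in one document
print(storage_strategy(False, False, True))  # separate documents
```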
How Trade Ledger took control of data
For the data itself, Trade Ledger opted for a document-oriented database, which would allow the flexibility needed in the data model and the ability to manage future growth and scalability.
The next step was to identify the right components that would be put into a component-based system. “The way we constructed it is that every component owns its own data structure and data tables, so one component cannot talk to the database of the other component directly. That’s all handled either by events or APIs, and this allows us in the future to have flexibility,” Born says. This structure gives Trade Ledger a clear separation of data ownership, defining which component can modify which data.
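The ownership rule Born describes can be sketched as follows (a hypothetical Python example with invented component names, not Trade Ledger’s code): each component keeps a private store, and other components reach it only through events, never by writing to its data directly.

```python
class EventBus:
    """In-process stand-in for the event bus between components."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers.get(topic, []):
            handler(payload)

class LoanComponent:
    def __init__(self, bus):
        self._loans = {}  # private: only this component writes here
        bus.subscribe("application.approved", self.on_approved)

    def on_approved(self, payload):
        self._loans[payload["application_id"]] = {"status": "active"}

class ApplicationComponent:
    def __init__(self, bus):
        self._bus = bus
        self._applications = {}  # private to this component

    def approve(self, app_id):
        self._applications[app_id] = {"status": "approved"}
        # The cross-component change is announced as an event,
        # never written into the other component's store directly.
        self._bus.publish("application.approved", {"application_id": app_id})

bus = EventBus()
loans = LoanComponent(bus)
apps = ApplicationComponent(bus)
apps.approve(42)
print(loans._loans[42]["status"])  # active
```

Because the only coupling is the event name and payload, either component’s storage could change without the other noticing, which is the flexibility Born refers to.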
With MongoDB Atlas, MongoDB’s cloud-hosted database service, Trade Ledger was able to configure its database to provide high resilience and high availability to its customers, with MongoDB functioning as a data layer.
“MongoDB is the operational database that is behind the microservices, and it provided us with the flexibility to make the move. While not every service has its own cluster, this model gives us the flexibility, if it were ever needed, to change the services or components that are powering the microservices — including the database — to support different use cases,” Born says.
The database also helped Trade Ledger on the operational side, as the organisation was able to offload a lot of the operational activities to then focus on the domain-specific problems. Now, Trade Ledger can control where the data is hosted, replicate it, set up the availability, and run tests. Born says that 10 years ago he would have needed a team of 10 to 20 people to do that job.
Selecting the right tools and programming languages
As to how MongoDB came to be the final choice, Born says he looked into offerings like ArangoDB and DynamoDB, as well as a couple of smaller options. One of the key differentiators was that MongoDB Atlas provides a fully managed hosted platform. “It allowed us to manage the database system much more efficiently, with a small team,” he says.
When Trade Ledger was selecting tools and programming languages, it started by looking at the core capabilities of its engineering team, which were primarily based on Java, and so it started building the first components in Java.
For its events server, Trade Ledger chose NATS as one of the core components for the event bus, instead of Apache Kafka. “At the time, there wasn’t a great hosted Kafka solution in the market, and we didn’t have enough capacity to have multiple engineers solely working on Kafka. NATS was a great solution that was optimised for Docker and got us up and running quickly, while still offering a very robust event-messaging solution.”
How Trade Ledger plans to simplify its system
Next for Trade Ledger is a complete change of its user experience. Born says that to ensure the system can continue to expand, a big change needs to be made. As the team itself has experienced while going through a big scaling phase, it has been getting harder to coordinate all the moving pieces.
The engineering team is looking at principles of design systems, which it began elaborating two years ago, but the approach proved too complex. The goal now is to make it simpler.
“My strong belief is that the best code you can have is no code, because you don’t need to maintain it and nothing can go wrong. Obviously, that takes it to an extreme, but it’s about making smart decisions on what you code. That is one of the big learnings: it often takes far less time to pay more attention upfront and create a system with less code than to just start coding and create a lot to maintain,” Born says.
Trade Ledger is now building a no-code solution for customers, where they can implement their own rules without the need for coding. Born says there are interesting movements around no-code UI platforms with powerful concepts, but whether they work as well in more complex systems still needs to be proven.
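The core idea behind such a no-code solution can be sketched briefly: a customer’s rule is stored as plain data rather than code, and a small interpreter evaluates it. This is a hypothetical illustration in Python; the fields, operators, and rule shape are invented, not Trade Ledger’s product.

```python
# Supported comparison operators, mapped to their implementations.
OPERATORS = {
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "==": lambda a, b: a == b,
}

def evaluate(rule, record):
    """Return True if every condition in the rule matches the record."""
    return all(
        OPERATORS[cond["op"]](record[cond["field"]], cond["value"])
        for cond in rule["conditions"]
    )

# A customer-defined rule, stored as data (e.g. in a document database)
# and editable without any coding:
auto_approve = {
    "conditions": [
        {"field": "credit_score", "op": ">=", "value": 700},
        {"field": "requested_amount", "op": "<=", "value": 50000},
    ]
}

print(evaluate(auto_approve, {"credit_score": 720, "requested_amount": 40000}))  # True
print(evaluate(auto_approve, {"credit_score": 650, "requested_amount": 40000}))  # False
```

Because rules live in data, changing business logic means editing a document, not shipping code, which is the trade-off Born’s “best code is no code” remark points at.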