We’ve recently been developing a Drools-based batch processing system for a client in the finance sector.
Subject to a few constraints, our system is built with Drools 5.6 on Java 1.7 and makes use of Drools Templates, which generate several thousand rules in total. Each batch is passed through a chain of several KnowledgeSessions created from different KnowledgeBases, and the design uses a model of separate Input and Output facts.
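We can’t show the actual rules or fact model, but the chained-session pattern itself is simple. Here is a minimal sketch of it against the Drools 5 knowledge API; the fact objects and the "results" global are hypothetical stand-ins, not the client’s model.

```java
import java.util.ArrayList;
import java.util.List;

import org.drools.KnowledgeBase;
import org.drools.runtime.StatefulKnowledgeSession;

// Illustrative only: the facts and the "results" global are hypothetical,
// not the client's fact model.
public class RuleChain {

    private final List<KnowledgeBase> stages; // one KnowledgeBase per stage

    public RuleChain(List<KnowledgeBase> stages) {
        this.stages = stages;
    }

    /** Runs facts through each stage; one stage's outputs feed the next. */
    public List<Object> process(List<Object> inputFacts) {
        List<Object> current = inputFacts;
        for (KnowledgeBase kbase : stages) {
            StatefulKnowledgeSession session = kbase.newStatefulKnowledgeSession();
            try {
                // Rules in this stage are assumed to declare a "results" global
                // and add their output facts to it.
                List<Object> outputs = new ArrayList<Object>();
                session.setGlobal("results", outputs);
                for (Object fact : current) {
                    session.insert(fact);
                }
                session.fireAllRules();
                current = outputs;
            } finally {
                session.dispose();
            }
        }
        return current;
    }
}
```

Each stage runs in its own session built from its own KnowledgeBase, and the output facts collected from one stage become the input facts for the next, which keeps the KnowledgeBases small and independent.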
Our first volume test processed a batch of over 13,000 financial records through a legacy system and then into this new rule engine.
Without the new rule engine in place the batch ran in 18 minutes.
With the new rule engine integrated with the legacy system the batch ran in 19 minutes.
This means that, using this design, it took under a minute to do all of the following:
- Generate over 2,000 rules from Drools Templates using JDBC (see the sketch after this list)
- Create a series of KnowledgeBases from the generated rules
- Receive 13,000 records from the legacy system
- Map the 13,000 input records to the fact model
- Process the facts across a chain of KnowledgeSessions
- Create output facts
- Map the output facts back to output data
- Pass the output data back to the legacy system
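For the first two steps, the Drools 5 template API provides ResultSetGenerator, which expands a rule template once for every row returned by a JDBC query. A rough, generic sketch of compiling the generated DRL into a KnowledgeBase is below; the query, the template stream and the error handling are placeholders rather than our production code.

```java
import java.io.InputStream;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.template.jdbc.ResultSetGenerator;

// Illustrative only: the query and template are placeholders.
public class TemplateKnowledgeBaseBuilder {

    public KnowledgeBase build(Connection connection, String query,
                               InputStream template) throws Exception {
        String drl;
        try (Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery(query)) {
            // Each row of the result set becomes one expanded rule in the DRL.
            ResultSetGenerator generator = new ResultSetGenerator();
            drl = generator.compile(resultSet, template);
        }

        // Compile the generated DRL and check for errors before building.
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newByteArrayResource(drl.getBytes("UTF-8")),
                     ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }

        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        return kbase;
    }
}
```

In practice you build one KnowledgeBase like this per stage of the chain and then reuse it for every batch.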
Considering that this timing includes generating the rules and building the KnowledgeBases, and that this set-up is only required once for all batches, this is a great result!
We’re not allowed to post the real code, but rest assured that Drools has proven itself to be a formidable tool for batch processing in the finance sector when used efficiently.