Some use cases require a simulation to be scaled until it is representative of the entire population of a system before it produces meaningful results. When simulating millions of agents, or running a large set of simulations, the workload can exceed the compute available on a single machine.
The Simudyne SDK is easily distributable across a Spark cluster without changing a single line of code in your model.
More information on supported platforms can be found in Run & Deploy.
Spark is a powerful JVM-based engine for distributing computation across a cluster of nodes.
For an agent-based model, Spark is well suited to splitting up a job and distributing the workload, whether a large set of runs or a large population of agents, across a distributed environment, as the sketch below illustrates.
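As an illustration of the general pattern only, and not the Simudyne SDK's own API, the sketch below distributes a set of independent runs with plain Spark. The `runSimulation` function is a hypothetical stand-in for executing one full model run and returning a summary metric.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class DistributedRuns {
    // Hypothetical stand-in for executing one simulation run
    // and returning a summary metric.
    static double runSimulation(int runId) {
        return 0.0; // placeholder for real model logic
    }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("simulation-runs");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            List<Integer> runIds = IntStream.range(0, 1000).boxed()
                    .collect(Collectors.toList());

            // Distribute the run IDs across the cluster; each executor
            // processes its share of the runs in parallel.
            List<Double> results = sc.parallelize(runIds, 100)
                    .map(DistributedRuns::runSimulation)
                    .collect();
        }
    }
}
```

Because each run is independent, Spark can schedule the mapped tasks across however many executors the cluster provides, with no coordination between runs.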
Many institutions have already leveraged distributed environments to handle large computational workloads. The Simudyne SDK is designed to leverage that existing investment by providing flexibility in how a model is deployed.
More information can be found in Spark Setup.
Many use cases call for a large set of Monte Carlo simulations in order to understand the distribution of results that can emerge from a given set of conditions.
These Monte Carlo runs can be deployed in a distributed environment using Spark, either to speed up the generation of results or to scale up the number of runs; a sketch follows below.
More information can be found in Distributed Computation.
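As a rough sketch of how such a Monte Carlo could be expressed directly on Spark, assume a hypothetical `runMonteCarlo` function that executes one seeded simulation and returns an output metric. Giving each run its own seed keeps the trials independent and reproducible, and Spark's built-in statistics make it easy to summarise the resulting distribution.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaDoubleRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.StatCounter;

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class DistributedMonteCarlo {
    // Hypothetical: run one simulation with the given seed and
    // return the output metric of interest.
    static double runMonteCarlo(long seed) {
        return 0.0; // placeholder for real model logic
    }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("monte-carlo");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // One seed per run keeps trials independent and reproducible.
            List<Long> seeds = LongStream.range(0, 10_000).boxed()
                    .collect(Collectors.toList());

            JavaDoubleRDD outcomes = sc.parallelize(seeds, 200)
                    .mapToDouble(DistributedMonteCarlo::runMonteCarlo);

            // Aggregate the distribution of results across all runs.
            StatCounter stats = outcomes.stats();
            System.out.printf("mean=%f stdev=%f%n",
                    stats.mean(), stats.stdev());
        }
    }
}
```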
Agent-based models at scale are one way of simulating large complex adaptive systems. In some instances these systems contain millions or billions of agents, and models of that size can be too computationally taxing to run on a single machine.
An experimental feature built on Akka allows users to scale a model horizontally, distributing the agents and their computations across a cluster of nodes. More information can be found in Akka.
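To give a feel for the underlying model, the following is a minimal sketch of Akka's classic actor API, with a hypothetical `Step` message standing in for a tick of the simulation. The SDK's Akka backend handles this wiring itself, so this is purely illustrative: each agent maps to an actor, and Akka can place actors on different nodes, which is what allows the population to scale horizontally.

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class AgentActor extends AbstractActor {
    private double state = 0.0; // the agent's local state

    // Hypothetical message telling the agent to advance one time step.
    public static final class Step {}

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Step.class, msg -> {
                    state += 1.0; // placeholder for the agent's step logic
                })
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("agents");
        // Each agent is an actor; in a clustered deployment Akka can
        // place actors on different machines.
        ActorRef agent = system.actorOf(Props.create(AgentActor.class));
        agent.tell(new Step(), ActorRef.noSender());
        system.terminate();
    }
}
```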