Distributed Computation

Last updated on 16th July 2024

This section addresses how to run a model developed locally on a cluster without changing it. There are several ways of distributing a model. The recommended way is to distribute a multirun: the master node coordinates the scheduling of runs across the nodes of the cluster, but each individual run executes entirely on a single node. At the end, the data is either aggregated back to the master node or written to HDFS. To learn how to do this, read Spark model runner
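The coordination pattern described above (a master dispatches independent runs to workers and aggregates their results) can be sketched locally with Python's standard library. This is only an illustration of the pattern, not the framework's actual API; `execute_run`, `multirun`, and the seed parameter are hypothetical placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def execute_run(seed: int) -> dict:
    # Placeholder for one self-contained model run. In a real cluster
    # multirun, each call would execute on its own node.
    return {"seed": seed, "result": seed * seed}

def multirun(seeds):
    # The coordinator farms out independent runs and collects the
    # results, mirroring the master node's role in the cluster.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(execute_run, seeds))

if __name__ == "__main__":
    results = multirun(range(4))
    print(results)
```

Because the runs share no state, they can be scheduled in any order and on any node; only the final aggregation step needs to see all the results.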

It is also possible to distribute a single very large model across several nodes, but this will likely slow the model down dramatically because of the overhead of node-to-node communication. To learn how to do this, read Distributed Graph
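To see why communication overhead dominates, note that every edge whose endpoints live on different nodes implies a network message whenever state crosses it. The sketch below (hypothetical data and a naive round-robin placement, not the framework's partitioner) counts such cross-node edges for a randomly wired model split over four nodes:

```python
import random

def cross_partition_edges(edges, assignment):
    # Count edges whose endpoints are assigned to different nodes;
    # each one requires network traffic instead of a local read.
    return sum(1 for a, b in edges if assignment[a] != assignment[b])

# Hypothetical model: 1000 entities, 5000 random edges, 4 nodes.
random.seed(0)
edges = [(random.randrange(1000), random.randrange(1000)) for _ in range(5000)]
assignment = {entity: entity % 4 for entity in range(1000)}

remote = cross_partition_edges(edges, assignment)
print(f"{remote}/{len(edges)} edges cross node boundaries")
```

With a random wiring and four partitions, roughly three quarters of the edges end up crossing node boundaries, which is why a naively distributed model spends most of its time communicating rather than computing.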