{"data":{"markdownRemark":{"html":"<p>This section addresses how to seamlessly run a model developed locally on a cluster. There are several ways to distribute a model. The recommended approach is to distribute a multirun: the master node coordinates the runs across the nodes of the cluster, with each run executing on a single node. At the end, the data is either aggregated back to the master node or written to HDFS.\nTo learn how to do this, read <a href=\":version/reference/distributed_computation/spark_setup#spark-model-runner\">Spark model runner</a>.</p>\n<p>It is also possible to distribute a single very large model, but this will likely slow down the running of your model dramatically because of the overhead of node-to-node communication. To learn how to do this, read <a href=\":version/reference/distributed_computation/distributed_graph\">Distributed Graph</a>.</p>","headings":[],"frontmatter":{"title":"Distributed Computation","toc":false,"experimental":null}},"site":{"siteMetadata":{"title":"Simudyne Docs","latestVersion":"2.6"}}},"pageContext":{"absolutePath":"/home/vsts/work/1/s/content/2.4/reference/distributed_computation.md","versioned":true,"version":"2.4","kind":"reference","pagePath":"/reference/distributed_computation","chronology":{"prev":{"name":"Logging Examples","path":"/reference/logging/logging-examples"},"next":{"name":"Spark setup","path":"/reference/distributed_computation/spark_setup"}},"lastUpdated":"2026-04-21T13:56:54.851Z"}}