{"data":{"markdownRemark":{"html":"<p>This page explains how to set up and use the Simudyne SDK to distribute a single simulation on Spark.</p>\n<h2 id=\"spark-graph-backend\"><a href=\"#spark-graph-backend\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Spark graph backend</h2>\n<p>The Spark graph backend allows you to run a single large graph on a Spark cluster.\nRunning a distributed graph simulation depends on the package <code class=\"language-text\">core-graph-spark</code>, which must be imported into your project:</p>\n<p class=\"code-header\">pom.xml</p>\n<div class=\"gatsby-highlight\" data-language=\"xml\"><pre class=\"language-xml\"><code class=\"language-xml\"><span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;</span>dependency</span><span class=\"token punctuation\">></span></span>\n    <span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;</span>groupId</span><span class=\"token punctuation\">></span></span>simudyne<span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;/</span>groupId</span><span class=\"token punctuation\">></span></span>\n    <span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;</span>artifactId</span><span class=\"token punctuation\">></span></span>simudyne-core-graph-spark_2.11<span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;/</span>artifactId</span><span class=\"token
punctuation\">></span></span>\n    <span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;</span>version</span><span class=\"token punctuation\">></span></span>${simudyne.version}<span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;/</span>version</span><span class=\"token punctuation\">></span></span>\n<span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;/</span>dependency</span><span class=\"token punctuation\">></span></span></code></pre></div>\n<p>To use Spark as the backend implementation of the SDK, uncomment the following line in your properties file:</p>\n<p class=\"code-header\">simudyneSDK.properties</p>\n<div class=\"gatsby-highlight\" data-language=\"scala\"><pre class=\"language-scala\"><code class=\"language-scala\">### CORE<span class=\"token operator\">-</span>ABM<span class=\"token operator\">-</span>SPARK ###\ncore<span class=\"token operator\">-</span>abm<span class=\"token punctuation\">.</span>backend<span class=\"token operator\">-</span>implementation<span class=\"token operator\">=</span>simudyne<span class=\"token punctuation\">.</span>core<span class=\"token punctuation\">.</span>graph<span class=\"token punctuation\">.</span>spark<span class=\"token punctuation\">.</span>SparkGraphBackend</code></pre></div>\n<p>Next, configure the properties related to core-abm on Spark.\nYou can set them in two ways:</p>\n<ul>\n<li>modify <code class=\"language-text\">core-abm-spark</code> properties in the <code class=\"language-text\">simudyneSDK.properties</code> file</li>\n<li>pass configuration parameters on the command line when using the <code class=\"language-text\">spark-submit</code> command</li>\n</ul>\n<p>Some properties are already listed with default values in <code class=\"language-text\">simudyneSDK.properties</code>:</p>\n<div class=\"gatsby-highlight\"
data-language=\"scala\"><pre class=\"language-scala\"><code class=\"language-scala\">### CORE<span class=\"token operator\">-</span>ABM<span class=\"token operator\">-</span>SPARK ###\ncore<span class=\"token operator\">-</span>abm<span class=\"token operator\">-</span>spark<span class=\"token punctuation\">.</span>master<span class=\"token operator\">-</span>url <span class=\"token operator\">=</span> local<span class=\"token punctuation\">[</span><span class=\"token operator\">*</span><span class=\"token punctuation\">]</span>\ncore<span class=\"token operator\">-</span>abm<span class=\"token operator\">-</span>spark<span class=\"token punctuation\">.</span>checkpoint<span class=\"token operator\">-</span>directory <span class=\"token operator\">=</span> <span class=\"token operator\">/</span><span class=\"token keyword\">var</span><span class=\"token operator\">/</span>tmp\ncore<span class=\"token operator\">-</span>abm<span class=\"token operator\">-</span>spark<span class=\"token punctuation\">.</span>log<span class=\"token operator\">-</span>level <span class=\"token operator\">=</span> WARN\n# core<span class=\"token operator\">-</span>abm<span class=\"token operator\">-</span>spark<span class=\"token punctuation\">.</span>spark<span class=\"token punctuation\">.</span>executor<span class=\"token punctuation\">.</span>memory <span class=\"token operator\">=</span> <span class=\"token number\">2</span>g\n# core<span class=\"token operator\">-</span>abm<span class=\"token operator\">-</span>spark<span class=\"token punctuation\">.</span>spark<span class=\"token punctuation\">.</span>sql<span class=\"token punctuation\">.</span>shuffle<span class=\"token punctuation\">.</span>partitions <span class=\"token operator\">=</span> <span class=\"token number\">24</span></code></pre></div>\n<p>Be aware that a property set in the <code class=\"language-text\">simudyneSDK.properties</code> file overrides the same property passed to the <code
class=\"language-text\">spark-submit</code>.</p>\n<p>You can then submit your job using <code class=\"language-text\">spark-submit</code>. Here is an example with some configuration options:</p>\n<div class=\"gatsby-highlight\" data-language=\"bash\"><pre class=\"language-bash\"><code class=\"language-bash\">spark-submit --class Main --master <span class=\"token operator\">&lt;</span>sparkMasterURL<span class=\"token operator\">></span> --deploy-mode client --files simudyneSDK.properties,licenseKey name-of-the-fat-jar.jar</code></pre></div>\n<p>This command will run the <strong>main</strong> function of the class <strong>Main</strong> and distribute the simulation on Spark. You can then access the console through the config parameters <code class=\"language-text\">nexus-server.hostname</code> and <code class=\"language-text\">nexus-server.port</code>.</p>\n<p>These default to <code class=\"language-text\">localhost</code> and <code class=\"language-text\">8080</code>. You can also interact with the server through the <a href=\":version/rest_api/rest_api\">REST API</a>.</p>\n<p><strong>spark-submit</strong> allows you to configure Spark.
You need to choose a configuration that best suits your cluster.\nTo learn more about Spark configuration, refer to the <a href=\"https://spark.apache.org/docs/latest/submitting-applications.html\">official documentation</a>.</p>\n<p>Some <a href=\"http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/\">useful resources</a> can be found on Cloudera's website.</p>","headings":[{"value":"Spark graph backend","depth":2}],"frontmatter":{"title":"Spark Graph","toc":null,"experimental":true}},"site":{"siteMetadata":{"title":"Simudyne Docs","latestVersion":"2.6"}}},"pageContext":{"absolutePath":"/home/vsts/work/1/s/content/2.4/reference/distributed_computation/spark_graph.md","versioned":true,"version":"2.4","kind":"reference","pagePath":"/reference/distributed_computation/spark_graph","chronology":{"prev":{"name":"Spark setup","path":"/reference/distributed_computation/spark_setup"},"next":{"name":"Distributed Graph","path":"/reference/distributed_computation/distributed_graph"}},"lastUpdated":"2026-04-21T13:56:54.852Z"}}