Model Configuration

Last updated on 16th July 2024

You can start using the Simudyne SDK without defining any configuration, since sensible default values are provided. Later on you may need to amend settings to change the default behavior. Typical examples of settings you might amend:

  • distribution of the model
  • log level
  • serialization level
  • port and hostname for the server to run on

Where configuration is read from

The Simudyne SDK reads its configuration from a simudyneSDK.properties file in the root directory of a project. If this file cannot be found, default values are used for all configuration properties.

All config properties can also be overridden dynamically in the model code. Use getConfig() to get the config for a specific model, then setString(), setBoolean(), setLong() or setInt() to set a property of the corresponding type.

ModelConfig.java

public class Housing implements Model {
  // Exposed as a model constant; its value is fixed before the run starts.
  @Constant public boolean distributed = false;

  public void setup() {
    if (distributed) {
      // Switch the ABM backend to Spark at runtime.
      getConfig().setString(
          "core-abm.backend-implementation",
          "simudyne.core.graph.spark.SparkGraphBackend");
    }
  }
}

Best practice

Keep all config in the properties file, and override config in code only for values that change based on user input (for example, toggling whether the model should run distributed). This way the model can be compiled once and the configuration changed without recompiling.
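
As a minimal sketch of this pattern, assuming the SDK's @Input annotation to expose a console toggle (the field name runDistributed is illustrative):

public class Housing implements Model {
  // Hypothetical user-facing toggle; all other config stays in the properties file.
  @Input public boolean runDistributed = false;

  public void setup() {
    if (runDistributed) {
      // Only this value depends on user input, so only it is overridden in code.
      getConfig().setString(
          "core-abm.backend-implementation",
          "simudyne.core.graph.spark.SparkGraphBackend");
    }
  }
}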

Custom simudyneSDK.properties

A custom simudyneSDK.properties might look like the example shown further below. Commonly amended properties include:

  • core-abm.max-messaging-phases - The maximum number of phases in which messages will keep being sent within a single step/tick. Capping this prevents you from accidentally creating an infinite loop of messages being sent and received.
  • core-abm.serialize.agents - Whether the agents are reported when the AgentSystem is marked as @Variable.
  • core-abm.serialize.accumulators - Whether the accumulators are reported when the AgentSystem is marked as @Variable.
  • core-abm.local-parallelism - The degree of local parallelism. If not set, defaults to the number of processors available to the JVM.
  • core.prng-seed - The seed to use for generating random numbers. Fix this for reproducible runs; these properties can also be overridden in code, as shown in the sketch after this list.
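
These properties can also be set programmatically with the setters described earlier. A minimal sketch (the keys are from the list above; the exact setter signatures, e.g. setLong(String, long), are assumed to match the value types shown):

public class Housing implements Model {
  public void setup() {
    // Fix the PRNG seed so runs are reproducible.
    getConfig().setLong("core.prng-seed", 1640702558671097951L);
    // Cap messaging phases and local parallelism.
    getConfig().setInt("core-abm.max-messaging-phases", 50);
    getConfig().setInt("core-abm.local-parallelism", 8);
  }
}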

simudyneSDK.properties

### SimudyneSDK Configuration file

### NEXUS-SERVER ###

nexus-server.port = 8080
nexus-server.hostname = 127.0.0.1

### CORE ###

# core.prng-seed = 1640702558671097951

### CORE-ABM ###
core-abm.max-messaging-phases = 50

# For serialization-level, choose between: {NONE, CHECKED}
core-abm.serialization-level=NONE
core-abm.serialize.agents=true
core-abm.serialize.links=true
core-abm.serialize.accumulators=true
core-abm.local-parallelism=8

The hostname above ensures that the server and console are ONLY available locally. If you wish to deploy to a server, change the hostname either to the machine's IP address or, more commonly, to 0.0.0.0.

Configuring Distributed ABMs (with Spark)

If the model is to be run as a distributed model using Spark, core-abm.backend-implementation needs to be set in the config, along with the following Spark settings.

simudyneSDK.properties

core-abm.backend-implementation=simudyne.core.graph.spark.SparkGraphBackend

# Default Spark settings. Comment these lines if you will be providing the configuration
# via spark-submit or similar, otherwise these settings are required.

# For log-level, choose between: {OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL}

core-abm-spark.master-url = local[*]
core-abm-spark.spark.executor.memory = 2g
core-abm-spark.spark.sql.shuffle.partitions = 24
core-abm-spark.checkpoint-directory = /var/tmp
core-abm-spark.log-level = WARN

Configuring Distributed Multiruns (with Spark)

If multiruns are to be distributed using Spark, core-runner.runner-backend needs to be set in the config, along with the following Spark settings.

simudyneSDK.properties

core-runner.runner-backend = simudyne.core.runner.spark.SparkRunnerBackend

# Default Spark settings. Comment these lines if you will be providing the configuration
# via spark-submit or similar, otherwise these settings are required.

# For log-level, choose between: {OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL}

core-runner-spark.master-url = local[*]
core-runner-spark.spark.executor.memory = 2g
core-runner-spark.spark.sql.shuffle.partitions = 24
core-runner-spark.log-level = WARN
core-runner-spark.task.cpus = 1
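
As with the ABM backend, this setting can in principle be toggled from model code rather than the properties file, using the same setter API. A sketch (that runner properties are overridable via getConfig() is an assumption based on the earlier note that all config properties can be overridden in code; prefer the properties file where possible):

public class Housing implements Model {
  @Constant public boolean distributedMultirun = false; // illustrative toggle

  public void setup() {
    if (distributedMultirun) {
      // Assumed to behave like other properties overridden via getConfig().
      getConfig().setString(
          "core-runner.runner-backend",
          "simudyne.core.runner.spark.SparkRunnerBackend");
    }
  }
}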