Last updated on 16th July 2024
You can start using the Simudyne SDK without defining any configuration, since sensible default values are provided. Later on you might need to amend the settings to change the default behavior. Typical examples include the maximum number of messaging phases, the serialization level, and the backend implementation used for distributed runs.
The Simudyne SDK reads its configuration from a simudyneSDK.properties file in the root directory of the project. If this file cannot be found, default values are used for all config properties.
All config properties can also be overwritten dynamically in the model code. Use ModelContext.get(this).getConfig() to get the config for a specific Model, and setString(), setBoolean(), setLong() or setInt() to set a config property of the corresponding type.
Housing.java
public class Housing implements Model {
  @Constant public boolean distributed = false;

  public void setup() {
    if (distributed) {
      // Switch from the default local backend to the distributed Spark backend.
      ModelContext.get(this)
          .getConfig()
          .setString(
              "core-abm.backend-implementation",
              "simudyne.core.graph.spark.SparkGraphBackend");
    }
  }
}
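With this in place the model uses the default local backend, and setting distributed to true before the run starts (for example from the console) switches it to the Spark backend without editing simudyneSDK.properties.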
A custom simudyneSDK.properties might look like this:
If the AgentSystem is marked as @Variable, the core-abm.serialize.agents and core-abm.serialize.accumulators properties control whether the agents and the accumulators, respectively, are reported.
simudyneSDK.properties
### SimudyneSDK Configuration file
### NEXUS-SERVER ###
nexus-server.port = 8080
nexus-server.hostname = 127.0.0.1
### CORE ###
# core.prng-seed = 1640702558671097951
### CORE-ABM ###
core-abm.max-messaging-phases = 50
# For serialization-level, choose between: {NONE, CHECKED}
core-abm.serialization-level=NONE
core-abm.serialize.agents=true
core-abm.serialize.links=true
core-abm.serialize.accumulators=true
core-abm.local-parallelism=8
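Each of these properties can likewise be overridden in model code using the typed setter matching its value. A minimal sketch, using a hypothetical TunedHousing model; the values shown are illustrative, not recommendations:
TunedHousing.java
public class TunedHousing implements Model {
  public void setup() {
    // Long-valued property: fix the PRNG seed for reproducible runs.
    ModelContext.get(this).getConfig().setLong("core.prng-seed", 1640702558671097951L);
    // Int-valued property: allow more messaging phases per step.
    ModelContext.get(this).getConfig().setInt("core-abm.max-messaging-phases", 100);
    // Boolean-valued property: skip link serialization to reduce reporting overhead.
    ModelContext.get(this).getConfig().setBoolean("core-abm.serialize.links", false);
  }
}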
If the model is to be run as a distributed model using Spark, core-abm.backend-implementation needs to be set in the config, along with the following Spark settings:
simudyneSDK.properties
core-abm.backend-implementation=simudyne.core.graph.spark.SparkGraphBackend
# Default Spark settings. Comment these lines if you will be providing the configuration
# via spark-submit or similar, otherwise these settings are required.
# For log-level, choose between: {OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL}
core-abm-spark.master-url = local[*]
core-abm-spark.spark.executor.memory = 2g
core-abm-spark.spark.sql.shuffle.partitions = 24
core-abm-spark.checkpoint-directory = /var/tmp
core-abm-spark.log-level = WARN
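Since all config properties can be overwritten dynamically, a model can also select the distributed backend and tune its Spark settings in one place. A sketch along the lines of the Housing example above, with a hypothetical class name and placeholder values:
SparkHousing.java
public class SparkHousing implements Model {
  public void setup() {
    // Select the distributed Spark backend...
    ModelContext.get(this).getConfig().setString(
        "core-abm.backend-implementation",
        "simudyne.core.graph.spark.SparkGraphBackend");
    // ...and adjust its Spark settings for this run.
    ModelContext.get(this).getConfig().setString("core-abm-spark.spark.executor.memory", "4g");
    ModelContext.get(this).getConfig().setString("core-abm-spark.log-level", "INFO");
  }
}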
If the model is to be run as a distributed model using Spark in a multirun, core-runner.runner-backend needs to be set in the config, along with the following Spark settings:
simudyneSDK.properties
core-runner.runner-backend = simudyne.core.runner.spark.SparkRunnerBackend
# Default Spark settings. Comment these lines if you will be providing the configuration
# via spark-submit or similar, otherwise these settings are required.
# For log-level, choose between: {OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL}
core-runner-spark.master-url = local[*]
core-runner-spark.spark.executor.memory = 2g
core-runner-spark.spark.sql.shuffle.partitions = 24
core-runner-spark.log-level = WARN
core-runner-spark.task.cpus = 1
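The core-runner-spark.* properties mirror the core-abm-spark.* ones above, with task.cpus controlling how many CPUs Spark assigns to each task. As with the ABM backend, comment out the default Spark lines if the configuration will be provided via spark-submit or similar.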