Last updated on 16th July 2024
The Simudyne SDK offers a variety of configuration settings that can be set either via a simudyneSDK.properties file or directly in Java by loading system properties (prepending `simudyne.` to the property name). These settings are accessed during your model run and used internally by the SDK to configure your simulation. The table below briefly explains each setting; many are covered in more depth elsewhere in the reference guide, but this page is meant to serve as an easy-to-reference table.
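As an illustration, a simudyneSDK.properties file might contain entries such as the following. The property names come from the table below, but the values are examples only, and it is an assumption (based on the introduction above) that the `simudyne.` prefix is only needed when passing these as Java system properties rather than inside the file itself:

```properties
# Illustrative simudyneSDK.properties fragment -- values are examples only
nexus-server.hostname=127.0.0.1
nexus-server.port=8080
core.prng-seed=1640702558671097951
core.parquet-export.enabled=true
core.parquet-export-path=parquet
feature.io-channels=true
```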
Property | Values | Explanation | Example/Notes |
---|---|---|---|
core-abm-spark.checkpoint-directory | Directory | Refer to: Spark Configuration | Example: /var/tmp |
core-abm-spark.log-level | ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE | Which logs will be made visible when using Spark | Log4J Levels |
core-abm-spark.master-url | URL, or Local | Refer to: Spark Configuration | Example: local[*], sd-node-master0.simudyne.cloudera.site |
core-abm-spark.spark.executor.memory | RAM | Refer to: Spark Configuration | Example: 2g |
core-abm-spark.spark.sql.shuffle.partitions | Integer | Refer to: Spark Configuration | 24 |
core-abm.backend-implementation | simudyne.core.graph.experimental.dig.treelike.backend.NoClusterBackend | Tells the SDK Backend which implementation to use for distribution | EXPERIMENTAL: Akka Backend |
core-abm.backend-implementation | simudyne.core.graph.experimental.dig.treelike.backend.SubprocessBackend | Tells the SDK Backend which implementation to use for distribution | EXPERIMENTAL: Akka Backend |
core-abm.backend-implementation | simudyne.core.graph.spark.SparkGraphBackend | Tells the SDK Backend which implementation to use for distribution | For Usage with a Distributed Spark Graph |
core-abm.debug | true, false | Not Currently Used | |
core-abm.local-blocksize | Integer | Creates the blocksize for message inboxes on the local graph | Example: 256 |
core-abm.local-parallelism | 0, 1 | Sets whether the local graph should be using parallel threads | Note: This is a non-boolean value due to potential parallelism mode changes |
core-abm.max-messaging-phases | Integer | Repeats the message-processing 'phase' until the block is empty. Set higher to avoid dead letters. | Example: 50 |
core-abm.serialize.accumulators | true, false | Serializes the internal data structure in order to show on console and I/O channel outputs. | |
core-abm.serialize.activities | true, false | Serializes the internal data structure in order to show on console and I/O channel outputs. | |
core-abm.serialize.agents | true, false | Serializes the internal data structure in order to show on console and I/O channel outputs. | |
core-abm.serialize.links | true, false | Serializes the internal data structure in order to show on console and I/O channel outputs. | |
core-abm.serialize.sections | true, false | Serializes the internal data structure in order to show on console and I/O channel outputs. | |
core-abm.sort-inboxes | true | This will sort the inboxes within a block | Note: This will NOT sort all messages received, as it only works within a block, in order to avoid determinism issues. |
core-graph-akka.executor.memory | RAM | Refer to Akka Configuration | Example: 2g |
core-graph-akka.master-url | URL | Refer to Akka Configuration | Example: local[*] |
core-graph-akka.partitions | Integer | Refer to Akka Configuration | Example: 24 |
core-graph-akka.task.cpus | Integer | Refer to Akka Configuration | Example: 4 |
core-runner-spark.executor.memory | RAM | Refer to: Spark Configuration | Example: 2g |
core-runner-spark.log-level | ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE | Which logs will be made visible when using Spark | Log4J Levels |
core-runner-spark.master-url | URL | Refer to: Spark Configuration | Example: local[*], sd-node-master0.simudyne.cloudera.site |
core-runner-spark.partitions | Integer | Refer to: Spark Configuration | Example: 24 |
core-runner-spark.task.cpus | Integer | Refer to: Spark Configuration | Example: 4 |
core-runner.runner-backend | simudyne.core.exec.runner.spark.SparkRunnerBackend | Enables the runner backend to use Spark instead of the default local backend | |
core.graph.experimental.clusterSize | Integer | The size of your Akka cluster nodes | Example: 3 |
core.graph.experimental.distributed.log-level | ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE | Which logs will be made visible when using Akka | Log4J Levels |
core.graph.experimental.timeouts.base | Integer | Actor timeout time in seconds | Example: 240 |
core.hive-export-path | URL | Where your Hive table is made available | Example: hive2://localhost:10000/default |
core.hive-export.enabled | true, false | Enables the SDK to try and push data to Hive, must set path/username/password | |
core.export.password | String | Hive or SQL Worker Password | You may instead opt to provide this as part of the Java OPTS at runtime using the property configuration to avoid stored passwords |
core.export.username | String | Hive or SQL Worker Username | You may instead opt to provide this as part of the Java OPTS at runtime using the property configuration to avoid stored passwords |
core.model-export-path | String | Location of where you want your model configuration output to be placed (see core.parquet-export-path for actual parquet output location) | Example: output |
core.parquet-compression-format | UNCOMPRESSED, SNAPPY, GZIP, LZO | Set the compression format of your parquet file output | |
core.parquet-export-path | String | Where you want the output of your parquet files to be placed. You may also use an HDFS URL format here. MUST enable parquet and io channels | Example: parquet |
core.parquet-export.enabled | true, false | Enables the SDK to automatically output parquet for available data and serialized values | |
core.parquet-export.folder-structure | group-by-type, group-by-run | Refer to: Output Directory Structure | |
core.parquet-page.size | Integer | Refer to: Parquet Configuration | |
core.parquet-row-group.size | Integer | Refer to: Parquet Configuration | |
core.prng-seed | Integer | The value of the seed for the internal pseudorandom number generator (PRNG) | Note: For batch runs, each run will have a seed derived from this core seed. Example: 1640702558671097951 |
core.return-data | true, false | Sets whether the simulation should return any data via JSON/Console. Useful if output is defined by your model code | |
core.run-id | Integer | The internal id of the simulation's run, used by the REST API and internal processes | Note: It can be risky to set this value manually |
core.sparse-output | true, false | Similar to `core.return-data`; however, this will still emit metadata | Note: This is more useful than `core.return-data` as it simply removes any schema or values |
core.uiconfig.readonly | true, false | This is an internal setting useful for Simudyne demos | |
feature.immutable-schema | true, false | The schema is currently mutable; this disables mutability for potential future features that would require a strict schema | |
feature.interactive-parquet-output | true, false | This is deprecated, but for assurance purposes you should also set this to true when enabling parquet output. | |
feature.io-channels | true, false | Enables the usage of I/O channels; see more here | |
feature.scenario-file | true, false | Enables the ability to load a Scenario File of JSON elements for your simulations | |
nexus-server.autocompile-root | String | Very Experimental: Allows you to set a root folder such that changes to a model file will be caught, compiled, and re-served to the user. | Example: sandbox/src/main/scala/sandbox/models |
nexus-server.batchrun-lifetime | Minutes | State Timeout for the Batch Runner | Example: 10 |
nexus-server.batchrun-runs-limit | Integer | Demo Purposes: Blocks the console from running more than the listed number of runs. Useful to avoid malicious users taking CPU/Mem | Example: 100 |
nexus-server.batchrun-tick-limit | true, false | Demo Purposes: This blocks the console from running more than the number of ticks in model settings. Useful to avoid malicious users taking CPU/Mem | |
nexus-server.health-check | true, false | Enables the Determinism Health Check which creates two examples of your registered models and compares a short run to see if they match results. | For more see here |
nexus-server.hostname | URL/IP | Where you tell the console or REST API to point to | Example: 127.0.0.1 (local only), 0.0.0.0 (available on network), nameofdomain.com |
nexus-server.nexus-lifetime | Minutes | State Timeout for the Nexus Server | Example: 10 |
nexus-server.parallel-batchrun-limit | Integer | Demo Purposes: Blocks the console from running more than the listed number of batch runs in parallel. Useful to avoid malicious users taking CPU/Mem | Example: 100 |
nexus-server.parallel-nexus-limit | Integer | Demo Purposes: Blocks the creation of multiple instances of models. Useful to avoid malicious users taking CPU/Mem | Example: 2 |
nexus-server.port | 0-65535 | The port on which the simulation will be made available at the URL defined in `nexus-server.hostname` | Note: Simply setting port 80 may not work with all firewalls. It's often best to create a re-route from a port like 8080 |
nexus-server.rate-limit | Integer | Inactive: Meant as another configuration for Demo purposes to limit the rate of REST API calls per second | Example: 5 |
nexus-server.run-past-end | true, false | Controls whether the user can initiate a run past the `ticks` defined in `@ModelSettings`. | |
nexus-server.webserver-root | String | Path to a built webapp folder, useful for a custom webapp or internally for a customized demo | Example: console/src/main/resources/webapp |
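As a sketch of the system-property route mentioned in the introduction, the snippet below sets two properties from Java before the SDK would initialize. The property names come from the table above; the `simudyne.` prefix follows the convention described in the introduction, and the claim that the SDK picks these up at startup is an assumption, not verified behavior:

```java
// Hypothetical sketch: overriding SDK settings via Java system properties.
// Property names are taken from the configuration table; the "simudyne."
// prefix follows the convention described in the introduction above.
public class SimudynePropertiesExample {
    public static void main(String[] args) {
        // Fix the PRNG seed so batch runs are reproducible (example value).
        System.setProperty("simudyne.core.prng-seed", "1640702558671097951");
        // Disable parallel threads on the local graph (0 = off, per the table).
        System.setProperty("simudyne.core-abm.local-parallelism", "0");

        // The SDK would read these when the model server starts (assumption).
        System.out.println(System.getProperty("simudyne.core.prng-seed"));
        System.out.println(System.getProperty("simudyne.core-abm.local-parallelism"));
    }
}
```

An equivalent approach, as noted for `core.export.username`/`core.export.password`, is to pass the same keys as `-D` flags on the `java` command line so that credentials are never stored in a file.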