Data Output

Last updated on 16th July 2024

The Simudyne SDK has built-in output generators for all of its run types, in multiple formats. The following pages explain the restrictions, requirements, and configuration options specific to each output format. Most of the information on using these outputs, however, applies regardless of format.

Output Formats

  • Parquet: The preferred output format for the SDK, due to its ability to minimize file size and its compatibility with data science tools
  • JSON: Not to be confused with the output via the REST API, creates static JSON files for consumption
  • CSV: CSV tabling allows for easy import and/or working with Excel
  • MySQL: Currently only MySQL tables are supported
  • HIVE via Parquet: Allows you to use HIVE tables on an existing or external cluster in the Parquet format
  • H2 & Other Connections: A brief explanation of other ways to handle output connections.

Batch Run Export

By default, when running a batch run, Agent and Link data is not serialised, and so is not output to parquet. This reduces the amount of data held in memory when sending the batch results to the console. If the data is being output to parquet and does not need to be viewed on the console, the in-memory data storage can be turned off, allowing the Simudyne SDK to export Agent and Link data to parquet alongside the general Model data. To do this, set the config field core.return-data to false.

For large model runs that produce a lot of data, setting this config field to false also reduces the amount of memory held by the simulation, which can help avoid potential OutOfMemory exceptions and improve the efficiency of the model.

If the data does not need to be displayed on the console and Agent and Link data is also not needed, the config fields core-abm.serialize.agents and core-abm.serialize.links should be set to false, to avoid generating unnecessary data.
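As a sketch, the fields above could be set in the SDK's properties-based config (assuming the standard simudyneSDK.properties file; the exact file location may differ in your project):

```properties
# Do not hold batch results in memory for the console; this allows
# Agent and Link data to be exported to parquet as well
core.return-data = false

# Alternatively, if Agent and Link output is not wanted at all:
core-abm.serialize.agents = false
core-abm.serialize.links = false
```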

Scenario Run Export

Scenario runs are not managed by the console, so the data is not held in memory and cannot be viewed on the console. This means that Agent and Link data is serialised by default, and should be explicitly turned off if not needed. (Use the config fields core-abm.serialize.agents and core-abm.serialize.links to control this.)

The data export format for scenario runs is controlled via the POST request sent to start the scenario run. (See the scenario REST specification for more details on the POST request.)

By default the scenario will output data as JSON files. To specify the output format as parquet, set the 'format' field in the 'output' section of the POST request.

{
  //Other scenario json fields
  "output": {"uri": "/path/to/export/to" , "format": "parquet"}
}
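As a sketch, the request body above can also be assembled programmatically. Here build_scenario_request is a hypothetical helper, and any other scenario fields are omitted (see the scenario REST specification for the full request shape):

```python
import json

# Hypothetical helper that builds the "output" section of the scenario-run
# request body shown above. Other scenario fields are omitted here.
def build_scenario_request(export_path, output_format="parquet"):
    """Return a JSON string selecting the export location and format."""
    return json.dumps({
        "output": {"uri": export_path, "format": output_format},
    })

body = build_scenario_request("/path/to/export/to")
print(body)
```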

Model Sampler Export

The model sampler will always output data to parquet. As with scenarios, the data is not held in memory, so Agent and Link data is serialised by default and should be explicitly turned off if not needed using the config fields core-abm.serialize.agents and core-abm.serialize.links.

Output Directory Structure

When exporting data to parquet, the folder layout can be specified via the config field core.export.folder-structure. Two options are supported for this field: group-by-type and group-by-run. If no value is specified, it defaults to group-by-type.
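As a sketch, again assuming the standard simudyneSDK.properties file, the export root and layout might be configured as:

```properties
# Root directory for all exported data
core.export-path = /exportFolder

# Either group-by-type (the default) or group-by-run
core.export.folder-structure = group-by-run
```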

Group by type structure

When the output folder structure is group by type, folders are created for each output table type, and an output file for each run is created inside these folders.

In this example, the root export directory passed through the config field core.export-path is /exportFolder. The output is shown as parquet files, but could equally be JSON/CSV/etc.

Group by type batch output folders

/exportFolder/
    {simulation_id}/
        runs/
            root/
              run000.parquet
              run001.parquet
              run002.parquet
            root__system__Agents__Cell/
              run000.parquet
              run001.parquet
              run002.parquet
            metadata.json
            finished.json
  • exportFolder -> This is the root export directory
  • {simulation_id} -> This is the UUID created for every run of the simulation (This is the ID used with the REST API)
  • runs -> The root folder for all output run data
  • root -> The data for each output table type will be in its own folder
  • run000.parquet, run001.parquet -> The output files created for each run.
  • metadata.json -> A file containing some metadata about the data produced.
  • finished.json -> An empty file created to signal that no new data will be added to this directory.
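The group-by-type layout above can be walked with standard tools. The following sketch (Python standard library only; the table name "root" and the file names follow the example tree and are illustrative) collects every run file for one output table:

```python
from pathlib import Path
import tempfile

# Sketch: locate every per-run file for one output table under the
# group-by-type layout described above.
def run_files(export_root, simulation_id, table="root"):
    """Return the sorted run file names for one output table."""
    table_dir = Path(export_root) / simulation_id / "runs" / table
    return sorted(p.name for p in table_dir.glob("run*.parquet"))

# Demonstrate against a temporary copy of the example tree.
root = Path(tempfile.mkdtemp())
table_dir = root / "sim-1234" / "runs" / "root"
table_dir.mkdir(parents=True)
for name in ("run000.parquet", "run001.parquet", "run002.parquet"):
    (table_dir / name).touch()

print(run_files(root, "sim-1234"))
# ['run000.parquet', 'run001.parquet', 'run002.parquet']
```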

Group by type scenario output folders

/exportFolder/
    {simulation_id}/
        runs/
            root/
              scenario0run0001.parquet
              scenario0run0002.parquet
              scenario0run0003.parquet
            root__system__Agents__Cell/
              scenario0run0001.parquet
              scenario0run0002.parquet
              scenario0run0003.parquet
            metadata.json
            finished.json
  • exportFolder -> This is the root export directory
  • {simulation_id} -> This is the UUID created for every run of the simulation (This is the ID used with the REST API)
  • runs -> The root folder for all output run data
  • root -> The data for each output table type will be in its own folder
  • scenario0run0001.parquet, scenario0run0002.parquet -> The output files created for each run of each scenario.
  • metadata.json -> A file containing some metadata about the data produced.
  • finished.json -> An empty file created to signal that no new data will be added to this directory.

The model sampler output folders will match the scenario output folders.

Group by run structure

When the output folder structure is group by run, folders are created for each simulation run, and an output file for each table type is created inside these folders.

For this example, the root export directory passed through the config field core.export-path is /exportFolder.

Group by run batch output folders

/exportFolder/
    {simulation_id}/
        runs/
            run000/
              root001.parquet
              root__system__Agents__Cell001.parquet
            run001/
              root001.parquet
              root__system__Agents__Cell001.parquet
            run002/
              root001.parquet
              root__system__Agents__Cell001.parquet
            metadata.json
            finished.json
  • exportFolder -> This is the root export directory
  • {simulation_id} -> This is the UUID created for every run of the simulation (This is the ID used with the REST API)
  • runs -> The root folder for all output run data
  • run000 -> The data for each run of the simulation will be in its own folder
  • root001.parquet, root__system__Agents__Cell001.parquet -> The output files created.
  • metadata.json -> A file containing some metadata about the data produced.
  • finished.json -> An empty file created to signal that no new data will be added to this directory.
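Under the group-by-run layout, listing a single run folder yields one file per output table. The following sketch mirrors the discovery shown for group-by-type; the folder and file names follow the example tree and are illustrative:

```python
from pathlib import Path
import tempfile

# Sketch: list every output table file for one run under the
# group-by-run layout described above.
def tables_for_run(export_root, simulation_id, run="run000"):
    """Return the sorted table file names inside one run folder."""
    run_dir = Path(export_root) / simulation_id / "runs" / run
    return sorted(p.name for p in run_dir.glob("*.parquet"))

# Demonstrate against a temporary copy of the example tree.
root = Path(tempfile.mkdtemp())
run_dir = root / "sim-1234" / "runs" / "run000"
run_dir.mkdir(parents=True)
for name in ("root001.parquet", "root__system__Agents__Cell001.parquet"):
    (run_dir / name).touch()

print(tables_for_run(root, "sim-1234"))
# ['root001.parquet', 'root__system__Agents__Cell001.parquet']
```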

Group by run scenario output folders

/exportFolder/
    {simulation_id}/
        runs/
            scenario0.run0/
              root001.parquet
              root__system__Agents__Cell001.parquet
            metadata.json
            finished.json
  • exportFolder -> This is the root export directory
  • {simulation_id} -> This is the UUID created for every run of the simulation (This is the ID used with the REST API)
  • runs -> The root folder for all output run data
  • scenario0.run0 -> The data for each scenario and run will be in its own folder
  • root001.parquet, root__system__Agents__Cell001.parquet -> The output files created.
  • metadata.json -> A file containing some metadata about the data produced.
  • finished.json -> An empty file created to signal that no new data will be added to this directory.

The model sampler output folders will match the scenario output folders.

metadata.json

A metadata file is added to the data export giving details about the data. The metadata contains

  • model_name -> The name of the model, which can be used to query the API
  • source -> Simudyne
  • source_version -> The version of The Simudyne SDK that produced this data
  • format -> Parquet
  • creation_date -> The date this data was produced
  • schema -> The nested schema that matches this data output
  • custom -> Custom data that can be passed through in the create simulation request
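A consumer can read metadata.json with any JSON library. The sketch below writes a sample file with illustrative values (the field names follow the list above; the values shown are not real SDK output) and reads it back:

```python
import json
import tempfile
from pathlib import Path

# Illustrative metadata matching the fields listed above; the values
# here are made up for the example, not real SDK output.
sample = {
    "model_name": "MyModel",
    "source": "Simudyne",
    "source_version": "2.x",
    "format": "Parquet",
    "creation_date": "2024-07-16",
    "schema": {},
    "custom": {},
}
meta_path = Path(tempfile.mkdtemp()) / "metadata.json"
meta_path.write_text(json.dumps(sample))

# Read the file back, as a downstream consumer would.
metadata = json.loads(meta_path.read_text())
print(metadata["model_name"], metadata["format"])  # MyModel Parquet
```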

finished.json

This is an empty file created at the end of a run to let you know that no new output files will be created in this directory.
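Because finished.json only appears once no further output will be written, a downstream consumer can poll for it before reading the export directory. A minimal sketch (standard library only; the timeout and poll interval are arbitrary choices for the example):

```python
import time
import tempfile
from pathlib import Path

# Sketch: block until finished.json appears in a run's output directory,
# or give up after a timeout.
def wait_for_finished(run_dir, timeout=60.0, poll=0.5):
    """Return True once finished.json exists, False on timeout."""
    marker = Path(run_dir) / "finished.json"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if marker.exists():
            return True
        time.sleep(poll)
    return False

# Demonstrate with a directory where the marker already exists.
run_dir = Path(tempfile.mkdtemp())
(run_dir / "finished.json").touch()
print(wait_for_finished(run_dir, timeout=1.0))  # True
```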

Metadata/Finished.json locations

When running in a batch mode, these files are located in the main output directory, due to the grouping of output locations.

Likewise, if you are working with a SQL/Hive location, you will NEED to specify an output folder for these files, alongside the corresponding URL for the connection to the external tables.