AWS Supply Chain Platform

Last updated on 27th January 2026

AWS Supply Chain Platform Documentation

This documentation covers both the User Guide for operating the Supply Chain Simulation Platform and the Deployment Guide for setting up the platform on AWS infrastructure.


Documentation Overview

Section Description
User Guide How to use the platform: launching, running simulations, viewing results, and troubleshooting
Deployment Guide How to deploy the platform on AWS: EC2 setup, configuration, and maintenance

Quick Navigation

User Guide

Deployment Guide


User Guide

This guide will help you get started with the Supply Chain Simulation Platform, from launching the application to running your first simulation and analyzing the results.


Getting Started

Prerequisites

Before using the platform, ensure you have:

  • Desktop Shortcut: After installation, you should see a "Supply Chain Platform" icon on your desktop
  • Web Browser: Chrome, Firefox, or Edge (latest version recommended)
  • Simudyne License: A valid license file should be placed in the simulator folder

First-Time Setup

If you see the desktop shortcut, you're ready to go! Skip to Launching the Platform.

If you don't see the desktop shortcut:

  1. Navigate to the folder where the platform was installed
  2. Double-click install.bat
  3. Wait for the installation to complete
  4. A desktop shortcut will be created automatically

Setting Up AI Features (Optional)

To use the AI-powered analysis and chat features, you'll need an API key from one of our supported providers:

  1. Launch the platform (see next section)
  2. Click the Settings icon (gear) in the bottom-left corner
  3. Select your AI provider:

    • OpenRouter (recommended) - supports multiple AI models
    • OpenAI - for GPT-4 access
    • Anthropic - for Claude access
    • AWS Bedrock - for enterprise deployments
  4. Enter your API key
  5. Click Test Connection to verify it works
  6. Click Save Settings

Note: Without an API key, you can still run simulations and view results. Only the AI chat and automated insights features require an API key.


Launching the Platform

Starting the Application

  1. Double-click the "Supply Chain Platform" shortcut on your desktop
  2. A command window will open briefly - this is the server starting up
  3. Your web browser will automatically open to the dashboard
  4. If the browser doesn't open, manually go to: http://localhost

What You'll See

When the platform loads, you'll see the main dashboard with a navigation sidebar on the left:

Icon Section Purpose
Guide User Guide In-app help and documentation
Settings Settings Configure AI provider and API keys
Setup Setup Upload and process your data
Simulation Simulation Run simulations and monitor progress
Results Results View simulation outputs and metrics
Analysis Analysis AI-powered insights and chat

Platform Overview

The left sidebar provides access to all main sections. Click any icon to navigate to that section. The currently active section is highlighted.

In-App User Guide

For quick help while using the platform, click the Guide icon in the sidebar. This provides step-by-step instructions for each feature directly within the application.


Step-by-Step Workflow

Step 1: Prepare Your Data

Before running a simulation, you need three types of data files (CSV format):

  1. Facilities File - Your warehouses, distribution centers, and other locations
  2. Products File - Your product catalog with weights and dimensions
  3. Shipments File - Historical shipment data showing how products move through your network

See Understanding Your Data for detailed requirements, or use our Sample Data to get started quickly.

Step 2: Upload Data

  1. Click Setup in the sidebar
  2. Click Upload Files or drag-and-drop your CSV files
  3. The system will automatically detect the file type (facilities, products, or shipments)
  4. Review the file summary to ensure your data was recognized correctly

Step 3: Validate and Process

  1. After uploading, click Validate Data
  2. Review any warnings or errors shown
  3. If validation passes, click Process & Create Session
  4. Wait for processing to complete - this creates a new simulation session

Step 4: Configure Demand (Optional)

After processing, you can optionally adjust demand settings:

  1. In the Session Viewer, click the Demand tab
  2. Adjust the global demand multiplier to increase or decrease overall order volume
  3. Edit individual facility demand rates if needed
  4. Review the throughput preview to see estimated daily volumes

Step 5: Run Simulation

  1. Click Simulation in the sidebar
  2. Select your session from the dropdown (if not already selected)
  3. Set the simulation duration (1-14 days)
  4. Click Run Simulation
  5. Monitor progress in the debug console at the bottom

Step 6: View Results

  1. Click Results in the sidebar
  2. Select your session to view its results
  3. Explore the various tabs and charts (see Viewing Results)

Step 7: Get AI Insights (Optional)

  1. Click Analysis in the sidebar
  2. Ask questions about your simulation results in natural language
  3. Get AI-powered insights and recommendations

Understanding Your Data

Facilities File

This file describes your supply chain network locations.

Column Required Description
facility_id Yes Unique identifier (e.g., "WH001", "DCEAST")
latitude Yes Geographic latitude coordinate
longitude Yes Geographic longitude coordinate
facility_type Recommended Type of facility (e.g., "WAREHOUSE", "DC", "STORE")

Example:

facility_id,latitude,longitude,facility_type
WH_001,34.0522,-118.2437,WAREHOUSE
DC_EAST,40.7128,-74.0060,DISTRIBUTION_CENTER
STORE_101,37.7749,-122.4194,STORE

Products File

This file describes your product catalog. Weight and volume are required for accurate trailer loading calculations.

Column Required Description
unitID Yes Product/SKU identifier
unitWeight Yes Weight in pounds (lbs)
unitCube Yes Volume in cubic feet
unitCost Optional Unit cost in USD

Example:

unitID,unitWeight,unitCube,unitCost
SKU_0001,15.5,0.75,49.99
SKU_0002,8.2,0.35,24.99
SKU_0003,42.0,2.10,149.99

Shipments File

This file contains historical shipment data showing how products move through your network.

Column Required Description
origin_facility Yes Source facility ID
destination_facility Yes Destination facility ID
timestamp Recommended When the shipment occurred
quantity Optional Number of units shipped
sku Optional Product identifier

Example:

origin_facility,destination_facility,timestamp,quantity,sku
WH_001,DC_EAST,2024-01-15 08:30:00,100,SKU_0001
DC_EAST,STORE_101,2024-01-15 14:45:00,25,SKU_0001
WH_001,STORE_101,2024-01-16 09:00:00,50,SKU_0002

Using Sample Data

The platform includes sample data files to help you get started quickly.

Sample Data Location

Sample data files are located in the sample_data folder within the installation directory.

Available Sample Files

File Description Use For
facilities.csv Sample warehouse and distribution center locations Facilities upload
products.csv Sample product catalog with weights and volumes Products upload
shipments.csv Sample historical shipment records Shipments upload

How to Use Sample Data

  1. Click Setup in the sidebar
  2. Click Upload Files
  3. Navigate to the sample_data folder
  4. Select and upload all three files:

    • facilities.csv
    • products.csv
    • shipments.csv
  5. Click Validate Data then Process & Create Session

This will create a sample simulation that you can run immediately to explore the platform's features.


Viewing Results

After running a simulation, the Results section provides comprehensive analytics.

Results Dashboard

The main results view shows:

  • Summary Metrics: Key performance indicators at a glance
  • Network Map: Visual representation of your supply chain
  • Facility Performance: Metrics for each location

Available Tabs

Tab What It Shows
Overview High-level summary and key metrics
Facilities Individual facility performance and utilization
Network Network-wide flow and routing metrics
Units Product movement and throughput

Charts and Visualizations

  • Utilization Charts: How efficiently facilities and trailers are being used
  • Throughput Charts: Volume of units moving through the network over time
  • Cost Analysis: Transportation and handling costs breakdown

Anomaly Detection

The system automatically identifies potential issues:

  • Low Utilization: Facilities or trailers not being fully utilized
  • Bottlenecks: Locations where flow is restricted
  • Cost Outliers: Routes or facilities with unusually high costs

Click on any anomaly to see details and potential optimization suggestions.


AI Analysis

The AI Analysis feature lets you ask questions about your simulation results in plain English.

How to Use

  1. Click Analysis in the sidebar
  2. Make sure you have configured an API key in Settings
  3. Type your question in the chat box
  4. Press Enter or click Send

Example Questions

  • "What's causing the low utilization at the main warehouse?"
  • "Which routes have the highest transportation costs?"
  • "How can I improve throughput at the sort centers?"
  • "What would happen if I increased demand by 20%?"
  • "Summarize the key findings from this simulation"

Tips for Better Results

  • Be specific about what you want to know
  • Reference specific facilities or metrics when possible
  • Ask follow-up questions to dive deeper into insights

Troubleshooting

Platform Won't Start

Symptom: Nothing happens when you double-click the desktop shortcut

Solutions:

  1. Check if the server window is already open (look in the taskbar)
  2. Close any existing server windows
  3. Try running start-platform.bat directly from the installation folder
  4. Restart your computer and try again

Browser Shows "Cannot Connect"

Symptom: Browser opens but shows a connection error

Solutions:

  1. Wait 10-15 seconds for the server to fully start
  2. Refresh the browser page
  3. Check that the server window is still open (don't close it!)
  4. Try manually navigating to: http://localhost (a command-line check is sketched below)
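
If you're comfortable with a command prompt or terminal, you can also check whether the local server is responding at all. This is a minimal sketch assuming the platform is listening on the default address:

curl -I http://localhost

A normal HTTP status line in the response means the server is up and the page just needs a refresh; no response at all means the server window is not running.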

Server Window Closes Immediately

Symptom: The command window opens briefly then closes

Solutions:

  1. Run install.bat again to ensure dependencies are installed
  2. Make sure Node.js is installed on your computer
  3. Contact your IT administrator for assistance

Simulation Shows No Results

Symptom: Simulation completes but shows 0 results

Solutions:

  1. Make sure your products file includes valid weight and volume values
  2. Check that your facilities have links between them (from shipments data)
  3. Verify the simulation ran without errors in the debug console

AI Chat Not Working

Symptom: AI chat shows an error or doesn't respond

Solutions:

  1. Go to Settings and verify your API key is entered correctly
  2. Click "Test Connection" to check the connection
  3. Make sure you have credits/balance with your AI provider
  4. Try selecting a different AI model

Data Validation Errors

Symptom: Validation fails with error messages

Common issues:

  • Missing columns: Ensure your CSV files have all required columns
  • Invalid values: Check for empty cells or non-numeric values in weight/volume columns
  • File format: Save files as CSV (comma-separated), not Excel format

Desktop Shortcut Missing

Symptom: Can't find the platform shortcut on your desktop

Solutions:

  1. Navigate to the installation folder
  2. Double-click install.bat
  3. The shortcut will be created on your desktop

Getting Help

If you encounter issues not covered in this guide:

  1. Check the in-app User Guide (click the Guide icon in the sidebar)
  2. Review the debug console for error messages
  3. Contact your system administrator or Simudyne support

Quick Reference

Keyboard Shortcuts

Action Shortcut
Submit chat message Enter
New line in chat Shift + Enter

Default URLs

Service URL
Dashboard http://localhost

File Locations

Item Location
Sample Data sample_data/ folder
Configuration config/settings.json
Simulation Outputs simulator/supply-chain-simulation-intelligence/data_session_*/outputs/

Deployment Guide

As mentioned on our Run & Deploy page, there are several methods you can use for deployment. This guide focuses primarily on deploying to an EC2 instance, but you may also make use of other AWS technologies such as an EMR cluster, and because the SDK can be packaged into a Docker container and/or run on Kubernetes, there are several other options available on AWS. Please refer to the aforementioned pages for their specific steps; this guide does not cover every deployment option.

AWS AMI

An alternative method for setup involves purchasing the AMI (Amazon Machine Image) from the AWS Marketplace. Currently Simudyne offers an Amazon Linux and a Windows Server 2022 version. There are a few benefits to choosing these options:

  • Purchasing is handled via AWS, and rather than buying a year-long license, the license is included in the purchase price as a monthly charge alongside the typical hourly usage charges
  • Easily scale up or down
  • No need to set up access tokens, license files, or supporting software like Java or Maven
  • Comes preloaded with a starter model, allowing you to quickly test firewall and web access

See the Setting up via AMI section below to get started. Please note that you'll still need to follow some of the steps below if you have developed a model locally and need to deploy it on this AMI (such as packaging the model).

Information Guide

Before we move on to the specific step-by-step guide for deploying on AWS, we'll first cover some questions you may have when working with the AWS environment. As a reminder, much of your development work does not require an AWS instance or account for building and testing your model.

(Diagram: Simudyne AWS architecture)

Region/Availability Zones:

In short, there are no formal restrictions on which AZ you set or which region your resources are deployed to. From a deployment perspective there are a few requirements to run your packaged model:

  • a packaged FatJAR file that you will create once your model development is ready
  • a Simudyne license file
  • a simudyneSDK.properties file
  • an installation of the Java 8 JDK
  • any other external data/scripts/code that you require to run the simulation

As such, when working with AWS you are free to use any region; any additional requirements will depend on how you are loading data, which region you wish to make the simulation available in, or whether other AWS services you are using are restricted to a specific AZ.

Operating System

SDK development and deployment is available on Windows, Mac, and Linux, and you are free to use whichever OS you are most comfortable with. For deployment we recommend using either the Windows or the Amazon Linux AMI (and the steps below are based on that). Because the SDK requires the Java 8 JDK, please ensure the OS supports this installation.

Data

The Simudyne SDK generates a lot of data and in turn can ingest multiple sources as needed, both to set various parameters, to control Monte Carlo-style runs, and of course to dynamically create your agents. Please refer to our Data Management documentation for steps on working with data in the SDK, but because there are multiple ways to work with data you should consider the best option for your AWS deployment.

If your data requirements are larger than what your EC2 instance can handle (especially if running multiple days of analysis), our recommendation would be to set up an S3 bucket or make use of a JDBC connection for access. Note, however, that this is not required, and the S3 option does necessitate an IAM role with write or admin access to the bucket in use. For more information on copying data between an EC2 instance and S3, please refer to Use Amazon S3 with Amazon EC2; a brief example is sketched below.
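
As a hedged illustration (the bucket name and paths are hypothetical, and the AWS CLI must be available on the instance with an IAM role permitting access to the bucket), copying data to and from S3 looks like:

aws s3 cp s3://my-simulation-bucket/input/shipments.csv ./data/shipments.csv
aws s3 cp ./outputs/ s3://my-simulation-bucket/outputs/ --recursive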

Sizing and Costs

Because the size and complexity of your SDK model depend on a combination of factors, it's up to you to make the formal decision on what size instance you want to deploy on, and therefore its associated costs. Part of your development time should be allocated to optimization, as making even seemingly small changes to an agent's function, how data is managed, etc. can dramatically decrease the amount of memory or computation time required. Below are some key factors that will likely push you toward a larger deployment, especially if you are scaling up from your local machine.

  • Number of agents, and how interconnected they are: If you've been developing the model with, say, 100 agents, but for the real deployment you need 1 million, you will likely require additional memory.
  • Amount of data input/output: This will also determine whether you wish to make use of AWS S3 buckets to offload data after a run, or to import real data if you've been developing the model against synthetic data.
  • Agent complexity: If your agents have very complex actions, for example a function or algorithm that runs in quadratic or exponential time, you will need to factor that algorithm's cost into the time a simulation takes to complete and the performance required of your deployment machine.
  • Complex data structures: The SDK provides a network of agents with the ability to send messages based on those links; however, because models are written in Java, you are free and encouraged to make use of other data structures for whatever purpose your model requires. Having those structures available in the globals, and/or performing certain actions on them (such as frequent searches), may impact your performance. If these actions cannot be simplified or optimized, this will also affect simulation time and memory requirements.

The only mandatory requirement is (either directly or indirectly) an EC2 instance. Our recommendation for deployment would be something like an "r6i.large" instance with an hourly on-demand cost of $1.126. This is because the main bottleneck for most models will be memory (due to storing agents plus user data in memory as needed), so other memory-optimized instances are also what we would typically recommend. However, your specific model may require a lot of CPU (if, for example, you are reading in a large amount of data).
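
As a rough, hedged illustration of how instance memory translates into JVM settings (the jar name is a placeholder and the heap size should be tuned to your own model), on an r6i.large with 16 GiB of RAM you would typically give the JVM most of the memory while leaving headroom for the OS:

java -Xmx12g -jar mymodel-1.0.0-allinone.jar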

The license cost for your deployment is either covered by the license you've already been given as part of your model development, or included in the AMI cost (based on whether you are a free trial, developer, or enterprise user).

Deployment Guide

AWS Requirements

For the below deployment guide you will need:

  • an AWS account, which you can sign up for here
  • a very basic familiarity with both AWS console and Linux/Windows
  • a completed model built in the SDK. This could be your own model, or one of our tutorials or sample models that you've downloaded and are using to test deployment
  • a valid Simudyne SDK license file
  • roughly 1 hour to complete

AWS Account

There are two main things to consider when deploying the SDK on AWS. The first is that the amount of compute required to run your model will most likely exceed the Free Tier. The second is that if your AWS account was not set up by your organization (for example, if you are using a personal account), you should not be using your root account for deployment or operations. Please refer to the following page on best practices for usage of the root user. Other users, especially customer-facing ones that need to run via the command line, should be given restricted accounts (access only to a directory created for them containing the relevant JAR file, no sudo access, etc.).

Packaging Your Model

The first step is to make a few changes to your pom.xml. Make sure you add the jodatime and junit versions to your properties section.

pom.xml

<properties>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
  <jodatime.version>2.10.1</jodatime.version>
  <junit.version>5.3.2</junit.version>
</properties>

You will then want to add the jodatime and junit dependencies to the <dependencies> section.

pom.xml

<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-api</artifactId>
  <version>${junit.version}</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>joda-time</groupId>
  <artifactId>joda-time</artifactId>
  <version>${jodatime.version}</version>
</dependency>

Finally you will add the plugins for packaging the project.

pom.xml

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
    <phase>package</phase>
    <goals>
      <goal>shade</goal>
    </goals>
    <configuration>
      <shadedArtifactAttached>true</shadedArtifactAttached>
      <shadedClassifierName>allinone</shadedClassifierName>
      <artifactSet>
        <includes>
          <include>*:*</include>
        </includes>
      </artifactSet>
      <transformers>
        <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
          <resource>reference.conf</resource>
        </transformer>
        <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
          <mainClass>Main</mainClass>
        </transformer>
      </transformers>
    </configuration>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>1.0.0</version>
  <configuration>
    <imageName>simudyne-maven-docker</imageName>
    <baseImage>simudyne/scala-sbt:2.11.12.1.0.4</baseImage>
    <entryPoint>["java", "-jar", "/${project.build.finalName}-allinone.jar"]</entryPoint>
    <!-- copy the service's jar file from target into the root directory of the image -->
    <resources>
      <resource>
        <targetPath>/</targetPath>
        <directory>${project.build.directory}</directory>
        <include>${project.build.finalName}-allinone.jar</include>
      </resource>
      <resource>
        <targetPath>/root</targetPath>
        <directory>.</directory>
        <include>licenseKey</include>
      </resource>
      <resource>
        <targetPath>/</targetPath>
        <directory>.</directory>
        <include>simudyneSDK.properties</include>
      </resource>
    </resources>
  </configuration>
</plugin>

Once your pom has been updated you can then package everything into a single jar (allinone) by running 'mvn clean compile package -s settings.xml' inside the project's folder. You may need to use -s ~/.m2/settings.xml or similar if your settings file is not located in the project directory.

Once you've run this you should have a large JAR file in your target directory under the main project directory.
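
For example, on your workstation (assuming the default Maven project layout and a settings file in the project folder), packaging and confirming the output might look like:

mvn clean compile package -s settings.xml
ls target/*-allinone.jar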

Creating Your Instance

Next we'll create our instance. We'll outline the steps below, but you can always refer to the Official AWS Documentation for guidance on setting up a new instance.

  • Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/ and select the orange Launch instance button.

  • Set a name for your instance, and then select an OS. For the purpose of this guide you should stick to the default, which is a Windows or Amazon Linux instance with a preselected Amazon Machine Image (AMI) already chosen for you.

  • You'll then select the instance type. As covered above in the Sizing and Costs section, this choice is largely dependent on the size and complexity of your model. If you are just deploying for the first time to ensure that everything works, you can use the t2.micro/t3.micro selected by default based on your region; this is also Free Tier eligible if you are simply testing things. However, you will likely wish to use a larger, more memory-optimized machine for real deployments.

  • Under Key pair (login) select 'Create new key pair' (unless you have already created or associated a key pair, or one has been assigned to you)

  • Follow the steps to create a key pair, and make sure to select either .pem if you are working primarily on a Unix-based system, or .ppk if using Windows. Make sure you store this in a secure location as it will be necessary to connect to the instance.

  • Under Network Settings, unless you have already created a Security Group, you will likely want to proceed with the default, which will create one for you.

  • If your goal with this deployment is to access the SDK either via the console or via a dashboard created with the REST API, you will want to select the boxes allowing HTTP/HTTPS traffic. If you are running directly in the command line, you will not need these options.

  • Keep the default selections for the other configuration settings for your instance.

  • Finally from the summary panel on the right when you're ready, choose Launch instance. This will take a few minutes to complete.

Adding Your Files

Once you have created your instance you'll then want to find its IP address or Public DNS. If you followed the above steps and were taken to your new instance, you should see this on the summary screen for the instance. Otherwise go to https://console.aws.amazon.com/ec2/, select Instances, and then select your desired instance.

For Linux

  • First we'll connect to our instance directly via SSH. Per the above steps on creating a key pair, you should have either a .pem or .ppk file depending on your workstation. If you followed the above steps your default root username will be 'ec2-user'. If you are using a different AMI, please refer to AWS Connect to Linux or the AMI provider for the username.

  • Please follow the steps for connecting via SSH to a machine per your current workstation operating system. Ensure that your private key is included as part of this connection.

  • Once you have connected directly via SSH we now want to move our relevant files to the instance. There are multiple ways you could do this (uploading and downloading if publicly available, for example); however, for our purposes you should use a tool like FileZilla, which will allow you to use your private key file and username to make an SFTP connection. A command-line alternative is sketched below.
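
As a hedged sketch of the command-line route with OpenSSH (the key file name, public DNS, and jar name below are placeholders), connecting and copying the files looks like:

ssh -i my-key.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
scp -i my-key.pem mymodel-allinone.jar licenseKey simudyneSDK.properties ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:~/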

For Windows

  • For Windows we'll connect via RDP. Please follow the connection steps here to connect to your Windows instance. From there, download the files via your approved internal file share (Dropbox/OneDrive/etc.); alternatively, if you do not have an existing online file share, we suggest uploading the files to an S3 bucket and then downloading them from there.

The files we'll need to move are

  • a packaged FatJAR file that you created above
  • a Simudyne license file
  • a simudyneSDK.properties file

Root Account

The above guide assumes you are connecting to this instance as the root user (ec2-user for Linux, Administrator for Windows). However, you should strongly consider setting up users other than root, especially if other clients will connect to this instance to run the SDK software. Even if your end users only connect via HTTP/HTTPS to the Console or your custom dashboard, it is still better to run as a non-root user if possible.

Running

Once everything has been moved and you are connected to the machine, running is as simple as java -jar NAMEOFYOURJARHERE.jar. Of course, if this is meant to be a long-running process (i.e. not a batch-style run, but something accessible via the web) you will want to use tools to keep the process running after you close your login session. For Linux, Simudyne's recommendation would be 'tmux', for which you can find a beginner's guide here; a short sketch follows below. For Windows you can use javaw instead of java to maintain a long-running process.
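
A minimal tmux workflow, assuming tmux is installed on the instance, might look like this:

tmux new -s simudyne
java -jar NAMEOFYOURJARHERE.jar
# detach with Ctrl-b then d; the process keeps running
tmux attach -t simudyne

Detaching leaves the simulation running after you log out, and reattaching later lets you check its output.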

Troubleshoot

  • Unable to create jar: Refer to the command prompt for any specific error that would prevent the jar from being created (such as a compile error). If there are no errors but the file is still not created, check your pom.xml to ensure that the above plugins are included.

  • Connection to instance failed: If you have followed the above steps and, despite using the correct DNS/IP address, private key file, and username, are still unable to connect to your EC2 instance, refer to the following AWS documentation to troubleshoot.

  • No Java detected: If the AMI instance you are using does not already have the Java 8 JDK installed, you (as the root ec2-user) will need to run sudo yum install java-1.8.0-openjdk-devel, or download an installer if using Windows as the Administrator.

  • Contacting Support: If you are still unable to resolve issues with your deployment, please feel free to contact support@simudyne.com for assistance. Please note, however, that some issues, such as a key pair or instance created by your organization's IT, may require working with additional personnel not on the Simudyne team.

Setting up via AMI

Purchasing from Marketplace

Alternatively, after clicking "Launch Instance" from the EC2 console, click "Browse more AMIs" and search for "Simudyne". After selecting the Linux or Windows version you'll want to either click Subscribe now or subscribe on launch. From here it's essentially the same as setting up any new instance on AWS. No other configuration on the Simudyne side is required, but you'll still need to:

  • Create a new key pair or make use of an existing one
  • Select the size of the machine, noting that the Simudyne AMI is approved for r6i, r6a, r7i, or r7a machines, as these provide suitable memory and CPU for running a production system.
  • Configure your network security group, and decide if you wish for the machine to be available to HTTP/S traffic
  • Set your storage options and size. The AMI is configured for EBS-style storage only

Finally click "Launch Instance" and follow the steps above on how to connect depending on your chosen operating system.

Maintaining Your Deployment

Monitoring and Logging

Once you have successfully deployed your Simudyne SDK models to your new AWS instance, there are a few steps you can take to monitor the simulation run and handle situations where something goes wrong.

This guide will not cover the myriad of tools available to monitor the health of the instance (which you would do via the AWS Console) or to monitor the process on the instance itself; please refer to each tool's user guide to confirm how to monitor as directed. The most common monitoring tool would be something like the 'top' command on a Linux instance.

Viewing the simulation's output will differ based on the toolset used for maintaining the simulation as a daemon process (for example, tmux attach -t 0 is the most likely command on Linux). Please refer to that toolset's guide in order to view the output from the running process and check for runtime errors. If you are simply running a batch-style run on the command line and have not set up a daemon for the java -jar command, you can simply refer to the current output.

The Simudyne SDK has a built-in health check which you can use to confirm that your models are deterministic. This can be enabled in your simudyneSDK.properties file by setting nexus-server.health-check to true. Note that this health check works by running a few ticks of your model in two separate harnesses and then comparing the results to see whether, given the same seed, they match. This matching can also make use of Atomic Logging.
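
For reference, enabling the health check in simudyneSDK.properties is a one-line setting:

simudyneSDK.properties

nexus-server.health-check = true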

If you wish, you can make use of a Log4J.properties file with your own desired settings in order to get more verbose output for monitoring where your simulation is in its processing. This can also be used in conjunction with the Atomic Logging feature. Atomic Logging is a newer, experimental feature that allows greater visibility into the specific actions, sequencing, message sending, and more for your simulation. It also allows you to easily create points in the simulation where you can log output as needed (without having to add multiple imports and checks to write to output).
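
As a minimal sketch (assuming the classic Log4j 1.x properties format; adjust logger names and levels to your own packages), a Log4J.properties file that increases console verbosity might look like:

Log4J.properties

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n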

Updates

Making changes to your deployment is as simple as replacing the jar file with a newly packaged one. As usual, if you wish to upgrade the version of the Simudyne SDK or any other dependencies, you will need to make the corresponding changes to your pom.xml on your workstation. However, once you've confirmed that any changes you've made are correct and you are able to package and test on your workstation, you don't need to make any other changes on the AWS side. There are a few caveats to this, however:

  • Ideally, if you are making these updates you should consider adding version numbers to your jar file rather than overwriting the file directly. This way you can stop the current Java process of your deployment and test with the new one, but still be able to revert quickly if a problem arises (a sketch follows this list).
  • If you are using the Nexus Console, you will be unable to run both processes simultaneously (if trying to do a hot swap), as this will give an error stating that the port is already in use.
  • Most notably, this update step ignores any other external processes, which you will need to handle per your simulation's requirements (such as changes required to the loaded dataset, connection credentials in your properties file, or post-processing scripts).
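
A hedged sketch of that versioned update flow on a Linux instance (the file names and host are placeholders):

# copy the new build alongside the old one rather than overwriting it
scp -i my-key.pem target/mymodel-1.1.0-allinone.jar ec2-user@<instance>:~/
# on the instance: stop the running process, then start the new version
java -jar mymodel-1.1.0-allinone.jar
# if a problem arises, fall back to the previous jar
java -jar mymodel-1.0.0-allinone.jar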

System Updates

It is recommended that you stop any Simudyne SDK software when making changes to the underlying instance. The simulation should not be affected by security or other updates; however, ensure that the OpenJDK version is maintained as Java 8.

Updating License

If your license has expired you will receive an error in the output (or in the Simudyne Console) informing you as such. Once you have been granted or have purchased a new license file, the best option is to rename the file to just 'licenseKey' (no extension) and replace the existing file directly. From there you can re-run the simulation jar.
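
On a Linux instance this could be as simple as the following (the path to the new file is a placeholder; keeping a copy of the old key until the new one is confirmed working is a sensible precaution):

mv licenseKey licenseKey.expired
cp /path/to/new-license licenseKey
java -jar NAMEOFYOURJARHERE.jar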

Key Pairs/Passwords

By default the Simudyne software does not provide any password for access to its simulations. As such, if you wish to make a dashboard available only to a certain group of people (with either a set password or a login system), you will need to include this in your custom dashboard, making use of the REST API.

In addition, if you are primarily running simulations via the command line directly on the instance, it is best to set up a policy for your key pairs, and subsequently SSH access, to be rotated on a schedule consistent with your organization's security policies. Typically Simudyne would recommend making these changes at least as part of the license file update (which lasts for one year).

AWS Service Limits

If you are deploying to a single instance (and not a Docker-based deployment) per the above, you will likely not require any service limits beyond the default. The caveat to this is whether you are using an existing account or one set up by your organization; this may require you to request "All Standard (A, C, D, H, I, M, R, T, Z) Spot Instance Requests" and/or "EC2-VPC Elastic IPs", as compute and IP addresses are what a typical setup requires.

Disaster Scenario

The Simudyne SDK's method for recovering from a disaster scenario is as follows:

  • Ensure that any work on any workstation is maintained in versioning software such as Git. This is useful even in normal situations, such as user changes or working with multiple developers.
  • Back up, as desired, any packaged jars used for deployments. Because deployment is so simple, re-deploying elsewhere is straightforward, and any relevant files (such as the license or properties files) will be available on your workstations.
  • Our PRNG ensures that if a simulation were to fail mid-run, whether through a sweep or a script, the results of a re-run will be the same, so you can simply re-run the simulation. This does not, however, account for working with live/streaming data. If you are using input data that can be updated or modified, our recommendation would be to create separate 'snapshots' of this data that your Simudyne SDK model can be re-run against should a run fail.

Contacting Support

The best option for getting support is to email support@simudyne.com

You can also hop into our Community Discord Server and ask a Simudyne Dev for help!


Currently there are 2 tiers of support for the Simudyne SDK.

  • Limited to no support is available for those using a free or trial license.
  • A dedicated support team and developers will be assigned to any tickets created via the email address above if you have purchased a license.
  • During contract discussions, businesses may request "modeling support", which involves assigning a member of our team (or, more likely, a third-party partner) to the contract to provide support beyond technical, development, or deployment issues in order to help build the underlying model.