Introduction
Welcome to the RoboSAPIENS Trustworthiness Checker (TC) documentation.
This book gives a concise guide to running and configuring the TC.
The work presented here is supported by the RoboSAPIENS project funded by the European Commission's Horizon Europe programme under grant agreement number 101133807.
General Usage
This page gives a short overview of the standard command-line workflow for running the Trustworthiness Checker (TC) with the most common input and output configurations.
Basic usage
The most basic usage of the TC is to run it locally on a model and an input trace.
Run the TC locally from the repository with cargo run -- <other options>. The -- separates Cargo's own arguments from the arguments passed to the TC itself.
The general shape of the command is:
cargo run -- <model> <input-options> [output-options] [extra options]
In the simplest case, provide a model, and an input source:
cargo run -- examples/simple_add.dsrv --input-file examples/simple_add.input
This starts the TC with the model in examples/simple_add.dsrv, reads the trace from examples/simple_add.input, and prints the monitoring result to standard output. Printing to standard output can also be explicitly specified with --output-stdout.
Additional flags can be added for language selection, parser choice, runtime settings, or distributed monitoring.
Getting help
To see the available command-line options, run:
cargo run -- --help
The help output shows the supported input and output flags together with the available option values.
MQTT Usage
For MQTT-based monitoring, use --mqtt-input and --mqtt-output.
cargo run -- examples/simple_add.dsrv --mqtt-input --mqtt-output
With these flags, the TC maps stream names in the specification directly to MQTT topics. For example, if the model has input streams x and y, the TC subscribes to the topics x and y. If the model produces an output stream z, the TC publishes the result on the topic z.
Message format
Publish MQTT payloads as JSON values that match the expected stream type. On Linux systems, we suggest using MQTT Explorer to publish and inspect messages. For the simple_add.dsrv example, publish the following value to topic x and then to topic y:
42
The result is then published on topic z as:
{
"value": 84
}
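The payload shapes above can be exercised with plain JSON handling before involving a broker. The following sketch uses only Python's standard json module; the topic names x, y, and z come from the example model, and the values mirror the example above:

```python
import json

# Inputs are published as bare JSON values matching the stream type.
payload_x = json.dumps(42)  # published to topic "x"
payload_y = json.dumps(42)  # published to topic "y"

# The TC publishes results wrapped in an object with a "value" field.
result_payload = '{"value": 84}'  # received on topic "z"

result = json.loads(result_payload)
print(result["value"])  # 84, the monitored sum of x and y
```

Any MQTT client that can send these payloads verbatim (MQTT Explorer, mosquitto_pub, a library client) will interoperate with the TC.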
ROS2 Usage
For ROS2-based monitoring, run the TC with the ros feature enabled: cargo run --features ros -- <other options>
Before starting the TC, source your ROS2 installation in the terminal, for example:
source /opt/ros/<distro>/setup.bash
Then build the custom messages with colcon build from the repository root:
colcon build
After the build finishes, source the generated workspace so the custom message types are available:
source install/setup.bash
You can then start the TC with a ROS input mapping:
cargo run --features ros -- --input-ros-topics examples/ros/simple_add_mapping.json --output-ros-topics examples/ros/simple_add_output_mapping.json examples/simple_add.dsrv
In a second terminal, source ROS again and subscribe to the z topic:
source /opt/ros/<distro>/setup.bash
ros2 topic echo /z
In a third terminal, publish to the x and y topics:
source /opt/ros/<distro>/setup.bash
ros2 topic pub /x std_msgs/msg/Int32 "{data: 1}"
ros2 topic pub /y std_msgs/msg/Int32 "{data: 1}"
The second terminal should now show the result.
Redis Usage
TODO: TW - please write a brief demonstration of how to use Redis. Can't seem to get it working.
Reconfiguration Runtime
One of the novel features of the TC is the ability to reconfigure a running monitor at runtime. In practice, this means that the TC can receive an updated specification while it is already running and then rebuild the monitoring pipeline around the new specification.
This is useful when the property to be monitored changes during execution, for example because new streams become relevant, old streams are removed, or a different monitoring task should take over without restarting the full deployment.
How it works
When the TC runs in reconfiguration mode, it listens for reconfiguration commands on a dedicated input stream. This means that an extra topic is injected automatically into the input configuration. By default, this topic is named reconfig, but you can specify it with --reconf-topic.
Each reconfiguration command is sent as a JSON string payload with the following structure:
{
"spec": "in x: Int\nout z: Int\nz = x",
"type_info": {}
}
The spec field contains the new specification as a string. The type_info field is mainly relevant for ROS-based setups, where each input and output stream in the new specification must be associated with a ROS message type.
For ROS-based reconfiguration, the payload must include the ROS message types for all inputs and outputs used by the new specification. For example:
{
"spec": "in x: Int\nout z: Int\nz = x",
"type_info": {
"x": "Int32",
"z": "Int32"
}
}
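The reconfiguration payload can also be built programmatically. The sketch below (Python standard library only) constructs the ROS variant shown above, with the spec and type_info values taken directly from the example:

```python
import json

# New specification as a single string; newlines separate declarations.
spec = "in x: Int\nout z: Int\nz = x"

# For ROS-based setups every input and output stream must carry
# its ROS message type in type_info.
command = {
    "spec": spec,
    "type_info": {"x": "Int32", "z": "Int32"},
}

# The command is sent as a JSON string payload on the
# reconfiguration topic (reconfig by default).
payload = json.dumps(command)
print(payload)
```

For MQTT-only setups the same structure applies, but type_info can be left as an empty object.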
How to do it
Start the TC with the reconfiguration runtime:
cargo run -- \
--runtime reconf-semi-sync \
<model> <input-options> [output-options]
For example, with MQTT input and output and custom reconfiguration topic my-reconfig:
cargo run -- --runtime reconf-semi-sync \
--reconf-topic my-reconfig examples/simple_add.dsrv \
--mqtt-input --mqtt-output
After the TC is running, publish a reconfiguration command to the configured reconfiguration topic. The TC will parse the new specification, rebuild the relevant input and output handlers, and continue execution with the updated monitor.
Development status
- Basic reconfiguration: the monitor can switch to a new specification without restarting the process
- ✅ Runtime reconfiguration through --runtime reconf-semi-sync
- ✅ Configurable reconfiguration topic through --reconf-topic
- ✅ ROS2 reconfiguration -- requires type_info since ROS2 messages are typed
- ✅ MQTT reconfiguration
  - ⚠️ No custom topic mapping yet; topic naming is based on stream names
- ✅ Redis reconfiguration
  - ⚠️ No custom topic mapping yet; topic naming is based on stream names
- ⚠️ A short delay occurs during reconfiguration due to reconnecting to external services (WIP)
- ❌ Context transfer to preserve as much of the trace context as possible across reconfiguration
- ❌ File-based reconfiguration
Distributed Monitoring
This tutorial explains how to run RoboSAPIENS Trustworthiness Checker on multiple nodes of a distributed system in order to collectively monitor an overall property.
Overview
Distributed monitoring splits a single DSRV specification into multiple localised specifications, each monitoring the part of the specification relevant to its local network node.
Architecture
In distributed monitoring:
- A distribution graph defines which streams are computed on which nodes. This may be either static, or dynamically determined via a central node.
- Each local node runs a separate, node-specialized instance of the monitor
- Nodes communicate via MQTT to exchange intermediate results
- The specification is split automatically based on the distribution graph
┌─────────────┐ ┌─────────────┐
│ Node A │◄───────►│ Node B │
│ │ MQTT │ │
│ x, y → w │ │ z, w → v │
└─────────────┘ └─────────────┘
▲ ▲
│ │
└───────────┬───────────┘
│
MQTT Broker
The distributed monitoring system can be used in multiple configurations:
- Static: nodes run independently and are assigned work following a predefined distribution graph which labels each node with a work assignment consisting of a list of streams from the specification.
- Centralised, random: the nodes run at each location and an additional centralised node assigns random work to each node.
- Centralised, constraints: the nodes run at each location and an additional centralised node generates a work assignment optimised for a number of distribution constraints which limit where each stream of the specification may be computed.
Static distribution
In this section we will set up a static distribution of a specification across two nodes.
Step 1: Create the Specification
File: simple_add_distributable.dsrv
in x: Int
in y: Int
in z: Int
out w: Int
out v: Int
w = x + y
v = w + z
This specification:
- Takes inputs x, y, z
- Computes intermediate stream w = x + y
- Computes final stream v = w + z
We'll split this so Node A computes w and Node B computes v.
Step 2: Create the Distribution Graph
File: simple_add_distribution_graph.json
{
"dist_graph": {
"central_monitor": 1,
"graph": {
"nodes": [
"A",
"B"
],
"edge_property": "directed",
"edges": [
[
0,
1,
0
]
]
}
},
"var_names": [
"x",
"y",
"z",
"w",
"v"
],
"node_labels": {
"0": [
"w"
],
"1": [
"v"
]
}
}
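To see which work assignment a given graph encodes, the file can be inspected with a short script. This is a sketch using only Python's standard json module; it relies on the fact, visible in the example above, that the keys of node_labels are indices into the nodes array:

```python
import json

# The example distribution graph from above, embedded for illustration.
graph_json = """
{
  "dist_graph": {
    "central_monitor": 1,
    "graph": {"nodes": ["A", "B"], "edge_property": "directed", "edges": [[0, 1, 0]]}
  },
  "var_names": ["x", "y", "z", "w", "v"],
  "node_labels": {"0": ["w"], "1": ["v"]}
}
"""

cfg = json.loads(graph_json)
nodes = cfg["dist_graph"]["graph"]["nodes"]
# node_labels maps node indices (as strings) to the streams that node computes.
assignment = {nodes[int(i)]: streams for i, streams in cfg["node_labels"].items()}
print(assignment)  # {'A': ['w'], 'B': ['v']}
```

So in this graph, node A is assigned stream w and node B is assigned stream v, matching the split described above.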
Step 3: Start the Nodes
Node A:
cargo run -- examples/simple_add_distributable.dsrv \
--mqtt-input --mqtt-output \
--distribution-graph examples/simple_add_distribution_graph.json \
--local-node A
Node B:
cargo run -- examples/simple_add_distributable.dsrv \
--mqtt-input --mqtt-output \
--distribution-graph examples/simple_add_distribution_graph.json \
--local-node B
Step 4: Send Inputs
Using MQTT Explorer or mosquitto_pub:
# Send inputs for Node A
mosquitto_pub -t x -m '1'
mosquitto_pub -t y -m '2'
# Send input for Node B
mosquitto_pub -t z -m '3'
Step 5: Observe Results
# Node A computes w
mosquitto_sub -t w
# Output: {"value": 3}
# Node B computes v
mosquitto_sub -t v
# Output: {"value": 6}
What happened:
- Node A received x=1 and y=2, computed w=3
- Node A published w=3 to MQTT
- Node B received z=3 and w=3 (from MQTT), computed v=6
- Node B published v=6 to MQTT
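The dataflow above can be sketched as a small simulation. The function names node_a and node_b are hypothetical labels for illustration; in the real deployment each function body runs inside a separate TC instance and the intermediate value w travels over MQTT:

```python
# Node A computes its assigned stream from the raw inputs.
def node_a(x: int, y: int) -> int:
    return x + y  # w = x + y, published to topic "w"

# Node B combines its own input with Node A's published result.
def node_b(z: int, w: int) -> int:
    return w + z  # v = w + z, published to topic "v"

w = node_a(1, 2)   # Node A sees x=1, y=2
v = node_b(3, w)   # Node B sees z=3 and w from MQTT
print(w, v)  # 3 6
```

This matches the outputs observed on topics w and v in Step 5.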
Dynamic distribution
In this section we will set up a dynamic distribution of a specification based on a central coordination node.
This will use the same specification as before (simple_add_distributable.dsrv) but with a dynamic distribution graph.
Step 1: Create the Specification
File: simple_add_distributable.dsrv
in x: Int
in y: Int
in z: Int
out w: Int
out v: Int
w = x + y
v = w + z
This specification:
- Takes inputs x, y, z
- Computes intermediate stream w = x + y
- Computes final stream v = w + z
We'll split this so Node A computes w and Node B computes v.
Step 2: Start the local Nodes
Node A:
cargo run -- examples/simple_add_distributable.dsrv \
--mqtt-input --mqtt-output \
--distributed-work \
--local-node A
Node B:
cargo run -- examples/simple_add_distributable.dsrv \
--mqtt-input --mqtt-output \
--distributed-work \
--local-node B
This uses different arguments compared to the static deployment:
- The --distributed-work argument enables distributed work assignment, so the node will wait to be sent a start_monitors_at_<node name> message before it starts monitoring.
Step 3: Central Node
Option A: Mock Central Node
To test the local nodes without starting the central node, you can send the following MQTT messages.
Using MQTT Explorer or mosquitto_pub:
# Send inputs for Node A
mosquitto_pub -t start_monitors_at_A -m "[\"w\"]"
# Send input for Node B
mosquitto_pub -t start_monitors_at_B -m "[\"v\"]"
Option B: Randomized Central Node
Note: This option is under development and may not yet work reliably.
To automatically distribute work randomly across nodes, start the central node:
Central Coordination Node:
cargo run -- examples/simple_add_distributable.dsrv \
--mqtt-input --mqtt-output \
--mqtt-randomized-distributed A B
The central coordination node will randomly assign streams to the specified nodes (A, B) and send start_monitors_at_<node> messages.
Option C: Constraint-based Central Node
Note: This option is under development and may not yet work reliably.
For optimized distribution based on constraints, create a specification with distribution constraints and start the coordinator:
File: simple_add_dist_constraints.dsrv
in x
in y
in z
out w
out v
w = x + y
v = z + w
can_run w: source(x) || source(y)
locality w: dist(source(x)) + dist(source(y))
can_run v: source(z) && monitor(w)
locality v: dist(source(z)) + dist(monitor(w))
Central Coordination Node:
cargo run -- examples/simple_add_dist_constraints.dsrv \
--mqtt-input --mqtt-output \
--mqtt-static-optimized A B \
--distribution-constraints w v
The coordinator analyzes the constraints for streams w and v, computes optimal assignments to minimize locality scores while satisfying can_run conditions, and distributes work to nodes A and B. Use --mqtt-dynamic-optimized for dynamic reconfiguration.
Step 4: Send Inputs
Using MQTT Explorer or mosquitto_pub:
# Send inputs for Node A
mosquitto_pub -t x -m '1'
mosquitto_pub -t y -m '2'
# Send input for Node B
mosquitto_pub -t z -m '3'
Step 5: Observe Results
# Node A computes w
mosquitto_sub -t w
# Output: {"value": 3}
# Node B computes v
mosquitto_sub -t v
# Output: {"value": 6}
What happened:
- Node A received x=1 and y=2, computed w=3
- Node A published w=3 to MQTT
- Node B received z=3 and w=3 (from MQTT), computed v=6
- Node B published v=6 to MQTT
Distribution Graph Format
The distribution graph is a JSON file defining the topology.
Structure
{
"dist_graph": {
"central_monitor": 1,
"graph": {
"nodes": [
"A",
"B"
],
"edge_property": "directed",
"edges": [
[
0,
1,
0
]
]
}
},
"var_names": [
"x",
"y",
"z",
"w",
"v"
],
"node_labels": {
"0": [
"w"
],
"1": [
"v"
]
}
}
Distribution constraint syntax
Distribution constraints use the following syntax:
- can_run <var>: <bool_expr> - Boolean condition: when can this stream be computed at a node?
- locality <var>: <numeric_expr> - Numeric score: lower is better (minimises network traffic)
- source(x) - True if input x originates at this node
- monitor(w) - True if output w is computed at this node
- dist(target) - Network distance to target
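A sketch of what constraint-based assignment evaluates for stream w from the earlier example: can_run filters the candidate nodes, then locality ranks them. The per-node facts and distance values below are illustrative assumptions, not produced by the TC, and the actual solver is internal to the checker:

```python
# Hypothetical per-node facts: which inputs originate at each node,
# and that node's network distance to each input's source.
nodes = {
    "A": {"sources": {"x", "y"}, "dist": {"x": 0, "y": 0}},
    "B": {"sources": {"z"},      "dist": {"x": 1, "y": 1}},
}

def can_run_w(facts):
    # can_run w: source(x) || source(y)
    return "x" in facts["sources"] or "y" in facts["sources"]

def locality_w(facts):
    # locality w: dist(source(x)) + dist(source(y))
    return facts["dist"]["x"] + facts["dist"]["y"]

# Filter by can_run, then minimise the locality score.
candidates = [n for n, facts in nodes.items() if can_run_w(facts)]
best = min(candidates, key=lambda n: locality_w(nodes[n]))
print(best)  # A
```

Under these assumed facts only node A satisfies can_run for w, so the coordinator would assign w there; v would be placed analogously using its own constraints.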