Prototype 2 — Ticket Pool Simulator
This simulator reveals how threshold models, view modes, and ticket dynamics interact — and what a healthy validation system looks like before a single real participant joins.
Try the Ticket Pool Simulator →
What this prototype tests
This simulator tests how the ticket pool and cargo validation mechanics behave across different parameter combinations — exploring threshold models, vote distribution, and participation dynamics before the real platform is built. Rather than demonstrating a user-facing feature, it is a research tool: a way of making the system's interdependent mechanics visible and testable through controlled simulation.
The system described in the white paper introduces a set of interdependent mechanics — ticket issuance, vote distribution, adaptive thresholds, cargo transitions — that interact in ways that are difficult to predict through design reasoning alone. Small changes in one parameter produce unexpected effects elsewhere. The simulator makes these interactions visible and testable.
It models one version window — the period between system governance adjustments, up to 60 days — at a community scale of up to 200 active participants each day. This reflects the realistic early and mid-phase scale of the project rather than hypothetical large-scale scenarios.
The simulator deliberately simplifies some aspects of the real system. Popular, oldest, and newest view modes use a shared daily window rather than individual per-participant snapshots. Participation follows a base growth trend rather than modelling wave-based community joins. Version transitions are not simulated — each run models a single version period with fixed parameters. These simplifications are intentional — they isolate the variables under study and keep the tool usable on standard hardware.
"Submitted entry" throughout this simulator refers to what the white paper calls a "response."
What the simulator revealed
Running the simulator across different parameter combinations produced five findings that directly inform how the real system should be configured and monitored.
Finding 1 — No single view mode is sufficient
Neglected mode alone produced zero cargo in testing. Popular mode alone starves the long tail — the same entries keep accumulating votes while newer or less visible entries never surface. The multi-modal approach described in the white paper is not just a design preference — it is mathematically necessary for healthy validation.
Finding 2 — View mode effectiveness depends on community size
With 20 participants, popular mode significantly outperformed random. With 200 participants, random outperformed popular. The optimal view mode mix is not fixed — it shifts as the community scales.
Finding 3 — Window size can override threshold differences
With popular mode and window size 20, fixed threshold K=5 and logarithmic threshold produced almost identical cargo rates. The attention bottleneck matters more than the threshold model when viewing is concentrated.
Finding 4 — Unused ticket rate is the hidden drag
The gap between peak ticket pool and votes cast is the clearest early warning sign of system stress. A large ticket pool with low cargo rate means a significant portion of participants are holding tickets they never spend.
Finding 5 — Cargo rate % is the primary health indicator
Not average votes per submitted entry, not ticket pool size, not daily votes cast — the ratio of cargo to total submitted entries tells you immediately whether the system is functioning. A healthy cargo rate sits between 10–30%. Below 5% the system is stagnating. Above 40% the threshold may be too permissive.
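Finding 3 becomes easier to see with the two threshold models side by side. The sketch below assumes a logarithmic model of the form `base + ceil(ln(pool size))` — the simulator's exact formula is not specified here, so this shape is an assumption for illustration only:

```python
import math

def fixed_threshold(k: int) -> int:
    """Fixed model: an entry needs a constant K votes to become cargo."""
    return k

def log_threshold(pool_size: int, base: int = 2) -> int:
    """Logarithmic model (hypothetical formula): required votes grow
    slowly with the size of the submission pool."""
    return base + math.ceil(math.log(max(pool_size, 1)))

# At small pool sizes the two models demand nearly the same vote count,
# so a tight viewing window makes them indistinguishable in practice.
for pool in (20, 100, 500):
    print(pool, fixed_threshold(5), log_threshold(pool))
```

With a pool of 20 entries both models sit at 5 votes, which is consistent with the near-identical cargo rates reported in Finding 3.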
How to use it
Community shape
Days, Initial participants, Daily growth %, Participation volatility %
Community size fundamentally changes how the system behaves. When the community is small — around 20 participants — the ticket pool is limited and sensitive. As the community grows toward 50 active participants per day, the submission pool expands faster and more entries compete for the same votes. At around 100 active participants, the question shifts from threshold calibration to attention distribution: how do submitted entries stay visible long enough to accumulate votes across multiple days?
The Participation volatility parameter adds realistic irregularity on top of the base growth trend. Use this to test whether the system remains stable when participation is uneven, as it will be in any real internet-based community.
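These four parameters can be combined into a simple day-by-day participant series. The sketch below assumes compounding daily growth with uniform noise for volatility — the simulator's actual noise distribution is an assumption here:

```python
import random

def daily_participants(days, initial, growth_pct, volatility_pct, seed=42):
    """Generate an active-participant count for each simulated day:
    a compounding base growth trend plus uniform noise for volatility.
    Illustrative model; the simulator's exact noise shape is assumed."""
    rng = random.Random(seed)
    counts = []
    base = float(initial)
    for _ in range(days):
        noise = rng.uniform(-volatility_pct, volatility_pct) / 100.0
        counts.append(max(1, round(base * (1 + noise))))
        base *= 1 + growth_pct / 100.0
    return counts

# A 60-day version window: 20 initial participants,
# 3% daily growth, 15% participation volatility.
series = daily_participants(60, 20, 3.0, 15.0)
```

Running the same seed reproduces the same series, which is useful when comparing parameter changes against a fixed participation curve.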
Participation behaviour
Submission rate %, Submission multiplier, Daily vote rate %, Unused ticket rate %, Ticket issuance multiplier, Ticket expiry
Not every participant is equally active. Some submit regularly and vote every day. Others join, submit once, and never return. Others are present but passive — they hold tickets they never spend. The unused ticket rate captures these passive participants: their tickets accumulate in the pool and eventually expire unused, reducing the effective vote supply.
The submission multiplier simulates participants submitting more than once per day — up to the system maximum of three entries. The ticket issuance multiplier represents the bonus ticket system: active participants who complete tasks, maintain streaks, and contribute to governance earn additional tickets beyond the base issuance.
Ticket expiry determines how long a ticket remains valid. A short expiry window creates urgency. A longer window gives passive participants more time to engage, but means unused tickets sit in the pool longer before clearing.
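The interaction between issuance, spending, and expiry can be sketched as daily pool accounting. This is an illustrative model only — the simulator's internal bookkeeping is an assumption — but it shows how unspent tickets age out of the pool:

```python
from collections import deque

def step_ticket_pool(pool: deque, issued: int, spent: int, expiry_days: int) -> int:
    """Advance the ticket pool by one day and return tickets expired today.
    `pool` holds (age, count) cohorts, oldest first. Spending drains the
    oldest tickets first; cohorts older than `expiry_days` expire unused."""
    # Age every cohort, then drop cohorts past the expiry window.
    aged = deque((age + 1, n) for age, n in pool)
    expired = 0
    while aged and aged[0][0] > expiry_days:
        expired += aged.popleft()[1]
    # Spend oldest tickets first.
    remaining = spent
    while remaining and aged:
        age, n = aged.popleft()
        take = min(n, remaining)
        remaining -= take
        if n > take:
            aged.appendleft((age, n - take))
    # Today's issuance joins as the newest cohort.
    if issued:
        aged.append((0, issued))
    pool.clear()
    pool.extend(aged)
    return expired

# Ten days where issuance (10/day) outpaces spending (4/day), expiry = 3 days:
pool, total_expired = deque(), 0
for _ in range(10):
    total_expired += step_ticket_pool(pool, issued=10, spent=4, expiry_days=3)
```

With spending consistently below issuance, the surplus ages past the expiry window and shows up as expired tickets — the unused-ticket drag described in Finding 4.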
Validation mechanics
View mode, Window size, Threshold model and its parameters
When enough participants independently make similar choices — selecting the same entry across different days, different view modes, different attentional contexts — the entry crosses the threshold and enters the cargo archive. The threshold number is the final gate, but the real validation is the entire journey.
The simulator models three parameters that govern this layer: view mode determines how entries are discovered, window size determines how many entries compete for attention on any given day, and the threshold model determines what level of accumulated consensus counts as meaningful validation.
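The view-mode and window-size parameters together decide which entries are even visible on a given day. The sketch below assumes simple ranking rules for each mode name (by votes, by age, or shuffled) — the simulator's exact ranking logic is an assumption:

```python
import random

def build_window(entries, mode, window_size, rng):
    """Select which entries compete for attention today.
    `entries` is a list of dicts with 'votes' and 'day_submitted'.
    Mode names mirror the simulator; ranking rules are illustrative."""
    if mode == "popular":
        ranked = sorted(entries, key=lambda e: e["votes"], reverse=True)
    elif mode == "oldest":
        ranked = sorted(entries, key=lambda e: e["day_submitted"])
    elif mode == "newest":
        ranked = sorted(entries, key=lambda e: e["day_submitted"], reverse=True)
    elif mode == "neglected":
        ranked = sorted(entries, key=lambda e: e["votes"])
    else:  # "random"
        ranked = entries[:]
        rng.shuffle(ranked)
    return ranked[:window_size]

rng = random.Random(0)
entries = [{"votes": v, "day_submitted": d} for d, v in enumerate([5, 1, 9, 3])]
window = build_window(entries, "popular", 2, rng)
```

A small window under popular mode keeps surfacing the same high-vote entries, which is the attention bottleneck Finding 3 points to.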
Reading the results
Total cargo · Final sub pool · Cargo rate % · Total tickets expired · Peak ticket pool
Cargo rate % is the primary indicator. A rate between 10–30% suggests the system is functioning with meaningful selectivity. Below 5% the system is stagnating. Above 40% the threshold may be too permissive.
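The health bands above translate directly into a small classifier. The band labels are taken from this section; the handling of the 5–10% and 30–40% gaps as "borderline" is an assumption, since the text does not name those ranges:

```python
def cargo_rate_health(total_cargo: int, total_entries: int) -> str:
    """Classify a run by cargo rate % using the bands given above:
    <5% stagnating, 10-30% healthy, >40% too permissive,
    anything between those bands labelled 'borderline' (assumption)."""
    if total_entries == 0:
        return "no data"
    rate = 100.0 * total_cargo / total_entries
    if rate < 5:
        return "stagnating"
    if 10 <= rate <= 30:
        return "healthy"
    if rate > 40:
        return "too permissive"
    return "borderline"

print(cargo_rate_health(20, 100))  # 20% cargo rate
```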
Total cargo and Final sub pool together tell you the shape of the archive. A large final sub pool alongside low total cargo means many entries are waiting but not reaching threshold.
Total tickets expired is the unused ticket problem made visible. If this number is high relative to total cargo, a significant portion of the ticket supply is being wasted.
Peak ticket pool shows the maximum ticket backlog reached during the run. A very high peak relative to votes cast per day signals that tickets are accumulating faster than they are being spent.
Run the same parameters multiple times — participation volatility and random view mode introduce randomness, so consistent patterns across multiple runs are more meaningful than any single result.
See also
Prototype 1
Submit and Vote Flow
Prototype 3
Blockchain Anchoring