Phala

Phala (Sanskrit for fruit, the outcome of action) defines a principal-declared welfare feedback protocol for agent-to-agent networks.

Agent protocols define how tasks begin. They say nothing about whether tasks ended well, who decides what well means, or whether the cumulative effect of many well-ended tasks is improving the principal’s life or eroding it.

Every existing protocol-layer mechanism that propagates outcome signals (RLHF at training time, MARL reward functions, advertising context protocols) defines what a good outcome means from the service provider's perspective, never the principal's. This is the welfare inversion.

| Primitive | What it records |
| --- | --- |
| OutcomeEvent | Objective facts of task resolution |
| SatisfactionRecord | Principal’s quality signal (valence in [-1, 1]) |
| BeliefUpdate | Scalar weight adjustment propagated back through the network |
| PrincipalSatisfactionModel | Per-context, principal-authored definition of what good means |
| WelfareTrace | On-device longitudinal welfare signal modulating network learning rate |
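The core primitives can be pictured as plain typed records. A minimal sketch in Python, with hypothetical field names (the spec's actual wire format is not shown here), illustrating in particular the [-1, 1] valence bound on SatisfactionRecord:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class OutcomeEvent:
    """Objective facts of task resolution."""
    task_id: str
    resolved: bool


@dataclass(frozen=True)
class SatisfactionRecord:
    """Principal's quality signal; valence must lie in [-1, 1]."""
    task_id: str
    valence: float

    def __post_init__(self):
        if not -1.0 <= self.valence <= 1.0:
            raise ValueError(f"valence {self.valence} outside [-1, 1]")


@dataclass(frozen=True)
class BeliefUpdate:
    """Scalar weight adjustment propagated back through the network."""
    agent_id: str
    weight_delta: float
```

The records are frozen because they are audit facts: once emitted, an outcome or satisfaction signal should not be mutated in place.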

The PrincipalSatisfactionModel is the primitive that closes the welfare inversion: the principal declares what good means, and no agent may substitute its own formula.
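One way to make "the principal declares what good means" concrete is to treat the model as a per-context scoring function that the agent may evaluate but never replace. A hedged sketch, with hypothetical method names and an assumed dict-shaped outcome:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class PrincipalSatisfactionModel:
    """Per-context, principal-authored definition of what good means.

    Sketch only: the agent calls evaluate() with outcome facts; it has
    no API for substituting its own scoring formula.
    """
    principal_id: str
    # context name -> principal-declared scoring function
    contexts: Dict[str, Callable[[dict], float]] = field(default_factory=dict)

    def declare(self, context: str, score: Callable[[dict], float]) -> None:
        self.contexts[context] = score

    def evaluate(self, context: str, outcome: dict) -> float:
        if context not in self.contexts:
            raise KeyError(f"principal has declared no model for {context!r}")
        v = self.contexts[context](outcome)
        return max(-1.0, min(1.0, v))  # clamp into the valence range


# Usage: the principal, not the agent, decides that "fast but wrong" is bad.
psm = PrincipalSatisfactionModel("alice")
psm.declare("grocery_order",
            lambda o: 1.0 if o["all_items_correct"] else -0.8)
```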

Phala Core’s single BeliefUpdate channel collapses every welfare dimension into one scalar weight delta. The relevant welfare dimensions (cognitive load, autonomy, dignity, social connection, pace) routinely conflict, and a single scalar loses the information the agent needs to act well, most acutely for elderly, autistic, or cognitively impaired principals. The welfare_detectors extension addresses this by adding a typed panel of specialized welfare detectors with deterministic arbitration and a predictive welfare horizon.

| Primitive | What it records |
| --- | --- |
| WelfareDetector | Typed detector declaration with priority |
| DetectorPanel | Consumer-side declaration of accepted detector types |
| TypedBeliefUpdate | A BeliefUpdate carrying detector_type and provenance_hash |
| WelfarePrediction / WelfareRealization | Predicted vs realized welfare delta over a declared horizon |
| MissingRealization | Auto-emitted when a prediction’s horizon elapses without a paired realization |
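The prediction/realization pairing reduces to a horizon check. A sketch, assuming hypothetical field names and wall-clock timestamps in seconds (the spec's actual time model is not shown here):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class WelfarePrediction:
    prediction_id: str
    predicted_delta: float
    issued_at: float      # seconds since epoch
    horizon_s: float      # declared horizon window


@dataclass(frozen=True)
class WelfareRealization:
    prediction_id: str
    realized_delta: float
    observed_at: float


def check_realization(pred: WelfarePrediction,
                      real: Optional[WelfareRealization],
                      now: float) -> str:
    """Classify a prediction's pairing status at time `now`."""
    deadline = pred.issued_at + pred.horizon_s
    if real is not None and real.observed_at <= deadline:
        return "realized"             # paired inside the declared horizon
    if now > deadline:
        return "missing_realization"  # trigger auto-emit of MissingRealization
    return "pending"
```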
| Invariant | Purpose |
| --- | --- |
| WD-1 Typed Detector Composition | Untyped or unknown-type updates are rejected |
| WD-2 Arbitration Determinism | Conflicting updates resolve by declared priority, then lower provenance_hash |
| WD-3 Predictive Welfare Horizon | Paired realizations stay within the prediction’s horizon window |
| WD-4 Detector Provenance Disclosure | Every update carries an audit fingerprint; BU-Privacy preserved |
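WD-2's determinism amounts to a pure tie-break over conflicting updates: declared priority first, then the lower provenance_hash, so every node selects the same winner. A sketch under one stated assumption: here a smaller priority value means higher precedence (the spec's ordering convention is not shown in this summary; the normative rule lives in the TLA+ model):

```python
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class TypedBeliefUpdate:
    detector_type: str
    priority: int          # declared detector priority (smaller = higher precedence here)
    provenance_hash: str   # audit fingerprint (WD-4)
    weight_delta: float


def arbitrate(conflicting: List[TypedBeliefUpdate]) -> TypedBeliefUpdate:
    """Deterministically pick one update from a conflicting set (WD-2 sketch).

    Priority orders first; ties break toward the lexicographically lower
    provenance_hash. Both keys are total orders, so the result is the
    same on every node regardless of arrival order.
    """
    if not conflicting:
        raise ValueError("nothing to arbitrate")
    return min(conflicting, key=lambda u: (u.priority, u.provenance_hash))
```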

Full specification with TLA+ model and TLC configuration at extensions/welfare_detectors/. The extension is additive: agents that do not declare a DetectorPanel continue to operate under Phala Core’s existing single-channel BeliefUpdate model.