Job
Represents a single unit of work
The Job account tracks a single task through its lifecycle.
Seeds: ["job", queue, job_id_le_bytes] — unique per queue + monotonic ID.
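The seed layout can be sketched in plain Rust. A 32-byte array stands in for the queue's Pubkey here; on-chain, these seeds would feed `Pubkey::find_program_address` (the helper name `job_seeds` is hypothetical):

```rust
// Sketch of the Job PDA seed layout: constant prefix, parent queue key,
// and the monotonic job_id in little-endian form.
fn job_seeds(queue: &[u8; 32], job_id: u64) -> Vec<Vec<u8>> {
    vec![
        b"job".to_vec(),               // constant prefix
        queue.to_vec(),                // parent queue's 32-byte key
        job_id.to_le_bytes().to_vec(), // monotonic ID, little-endian
    ]
}

fn main() {
    let seeds = job_seeds(&[7u8; 32], 42);
    assert_eq!(seeds[0], b"job");
    assert_eq!(seeds[1].len(), 32);
    assert_eq!(seeds[2], 42u64.to_le_bytes());
}
```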
Fields
| Field | Type | Description |
|---|---|---|
| queue | Pubkey | Parent queue |
| job_id | u64 | Unique ID (from queue's monotonic counter) |
| creator | Pubkey | Who posted the job |
| worker | Pubkey | Who claimed it (default pubkey if unclaimed) |
| status | JobStatus | Current state — one of 7 variants |
| reward_amount | u64 | Tokens escrowed in vault for this job |
| data_hash | [u8; 32] | Blake3 hash of the off-chain job payload |
| created_at | i64 | Unix timestamp |
| claimed_at | i64 | Unix timestamp (0 if unclaimed) |
| deadline | i64 | claimed_at + job_timeout |
| priority | u8 | 0=Low, 1=Medium, 2=High |
| max_retries | u8 | How many times to re-open after expiry |
| retry_count | u8 | Current retry count |
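As a rough sketch, the table maps onto a struct like the following. This is plain Rust with a 32-byte array standing in for the on-chain Pubkey type; the Anchor discriminator and serialization details are omitted, so this is illustrative only:

```rust
// Illustrative layout of the Job account from the table above.
// [u8; 32] stands in for the on-chain Pubkey type (assumption).
type Pubkey = [u8; 32];

#[derive(Debug, Clone, Copy, PartialEq)]
#[allow(dead_code)]
enum JobStatus { Open, Claimed, Submitted, Completed, Disputed, Expired, Cancelled }

#[allow(dead_code)]
struct Job {
    queue: Pubkey,       // parent queue
    job_id: u64,         // from the queue's monotonic counter
    creator: Pubkey,     // who posted the job
    worker: Pubkey,      // default pubkey ([0u8; 32]) while unclaimed
    status: JobStatus,
    reward_amount: u64,  // tokens escrowed in the vault
    data_hash: [u8; 32], // blake3 hash of the off-chain payload
    created_at: i64,
    claimed_at: i64,     // 0 if unclaimed
    deadline: i64,       // claimed_at + job_timeout
    priority: u8,        // 0=Low, 1=Medium, 2=High
    max_retries: u8,
    retry_count: u8,
}

fn main() {
    let job = Job {
        queue: [1; 32], job_id: 0, creator: [2; 32], worker: [0; 32],
        status: JobStatus::Open, reward_amount: 1_000, data_hash: [0; 32],
        created_at: 1_700_000_000, claimed_at: 0, deadline: 0,
        priority: 1, max_retries: 3, retry_count: 0,
    };
    assert_eq!(job.worker, [0u8; 32]); // unclaimed sentinel
    assert_eq!(job.status, JobStatus::Open);
}
```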
JobStatus Enum
```rust
pub enum JobStatus {
    Open,      // Waiting for a worker
    Claimed,   // Worker assigned, deadline ticking
    Submitted, // Result submitted, awaiting review
    Completed, // Terminal — result approved
    Disputed,  // Awaiting arbiter resolution
    Expired,   // Terminal — deadline passed, retries exhausted
    Cancelled, // Terminal — creator cancelled before claim
}
```
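One plausible reading of the lifecycle these variants imply is sketched below; the program's actual transition rules (for example, how disputes resolve) may differ:

```rust
// Hypothetical transition table for the JobStatus lifecycle above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum JobStatus { Open, Claimed, Submitted, Completed, Disputed, Expired, Cancelled }

use JobStatus::*;

// Completed, Expired, and Cancelled are terminal per the variant comments.
fn is_terminal(s: JobStatus) -> bool {
    matches!(s, Completed | Expired | Cancelled)
}

fn can_transition(from: JobStatus, to: JobStatus) -> bool {
    matches!(
        (from, to),
        (Open, Claimed)
            | (Open, Cancelled)
            | (Claimed, Submitted)
            | (Claimed, Open)    // deadline passed, retries remain: re-open
            | (Claimed, Expired) // deadline passed, retries exhausted
            | (Submitted, Completed)
            | (Submitted, Disputed)
            | (Disputed, Completed)
            | (Disputed, Expired)
    )
}

fn main() {
    assert!(can_transition(Open, Claimed));
    assert!(is_terminal(Cancelled));
    assert!(!can_transition(Completed, Open)); // terminal states are final
}
```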
Notes
- Priority is metadata only. There is no on-chain priority queue — workers freely choose which open jobs to claim. Priority serves as a signal to off-chain clients that can sort/filter.
- `data_hash` follows the off-chain data pattern: store the actual job specification on IPFS/Arweave and commit only the hash on-chain. This keeps account size fixed at 186 bytes.
- The `job_id` is derived from `total_jobs_created` on the queue at creation time, ensuring unique sequential IDs.
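Since priority is metadata only, an off-chain client does its own ordering over open jobs. A hypothetical claim-ordering sketch, where the `OpenJob` type and the tie-break rules are assumptions, not part of the on-chain program:

```rust
// Client-side ordering of open jobs; the program enforces no ordering.
#[derive(Debug)]
struct OpenJob { job_id: u64, priority: u8, reward_amount: u64 }

fn sort_for_claiming(jobs: &mut [OpenJob]) {
    // Highest priority first, then larger reward, then oldest (lowest) id.
    jobs.sort_by(|a, b| {
        b.priority
            .cmp(&a.priority)
            .then(b.reward_amount.cmp(&a.reward_amount))
            .then(a.job_id.cmp(&b.job_id))
    });
}

fn main() {
    let mut jobs = vec![
        OpenJob { job_id: 3, priority: 0, reward_amount: 500 },
        OpenJob { job_id: 1, priority: 2, reward_amount: 100 },
        OpenJob { job_id: 2, priority: 2, reward_amount: 900 },
    ];
    sort_for_claiming(&mut jobs);
    let order: Vec<u64> = jobs.iter().map(|j| j.job_id).collect();
    assert_eq!(order, vec![2, 1, 3]); // high priority first, reward breaks ties
}
```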