API docs
The predict contract.
The public HTTP transport now leads with one canonical prediction entry point: `POST /api/v2/predict`. `v1` remains supported for the current pricing-shaped contract while the single-predict API is being generalized behind the new major version.
- `POST /api/v2/predict` is the canonical HTTP prediction transport
- `POST /api/v1/predict` remains available for the current pricing contract
- `GET /api/v1/jobs/:jobId` remains the current async polling path, with customer-managed API keys
- `POST /api/v1/outcomes` with customer-managed API keys
- `predictionId` is the canonical response identifier
- `decision` is the canonical machine-readable action field
- Minimal JS/TS SDK in `sdk/javascript`
- Customer-managed Stripe checkout webhook connectors
- Authenticated reusable-pattern view in `/api/patterns`
- Supported pricing action shapes: set price, percent markdown, increase, promo start, promo end
- Context inputs: inventory pressure, seasonality, elasticity hints, competitor behavior, promo flags
Credit model: one pricing prediction currently spends one API credit. Outcome recording and prediction reads do not spend credits.
Canonical prediction transport: `v2`. Stable pricing contract: `v1`.
Latency
Fast path and fallback
- Conseq targets `1200ms` for normal synchronous pricing checks.
- Clients should treat `1500ms` as the recommended timeout budget.
- If Conseq does not return in time, default to `ESCALATE`.
If Conseq is slow or unavailable, do not auto-approve the pricing action. Escalate it to human review or keep the current price in place.
These values are also exposed by `serviceStatus` and the REST response headers for `POST /api/v2/predict`.
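The fallback policy above can be sketched in client code. This is illustrative, not SDK code: `predictFn` stands in for whatever wrapper you use around `POST /api/v2/predict`, and the helper name is an assumption.

```js
// Race the predict call against the recommended 1500ms budget.
// On timeout or any transport error, fall back to ESCALATE rather
// than auto-approving the pricing action.
const TIMEOUT_MS = 1500;

const predictWithBudget = async (predictFn, input, timeoutMs = TIMEOUT_MS) => {
  const timeout = new Promise((resolve) =>
    setTimeout(() => resolve({ decision: "ESCALATE", timedOut: true }), timeoutMs)
  );
  try {
    return await Promise.race([predictFn(input), timeout]);
  } catch (error) {
    // Network failure or non-2xx: never auto-approve the price change.
    return { decision: "ESCALATE", error: String(error) };
  }
};
```

In production you would also abort the underlying HTTP request (e.g. with an `AbortController`) instead of letting it run after the race resolves.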
GraphQL compatibility
Backend GraphQL naming still exists underneath
The public product language is `predict`, but the AppSync contract still exposes `predictConsequence` while the HTTP `v2` surface is being generalized.
```graphql
mutation PredictConsequence($input: ConsequencePredictionInput!) {
  predictConsequence(input: $input) {
    prediction {
      id
      capability
      modelVersion
      reasoningVersion
      actionType
      customerAccountId
      customerAccountName
      requestSource
      requestCallerId
      requestCallerEmail
      requestApiKeyId
      requestApiKeyLabel
      expectedValue
      expectedValueLabel
      tailRisk
      confidence
      actionPolicy
      immediateOutcome
      secondOrderEffects
      thirdOrderEffects
      recommendation
      reasoning
      outcomeStatus
      accuracyStatus
      outcomeSource
      outcomeActorId
      outcomeActorLabel
    }
    job {
      id
      status
      pollAfterMs
      predictionId
    }
    error {
      code
      message
    }
  }
}
```
Variables
Generic request envelope
```json
{
  "capability": "PRICING",
  "actionType": "PERCENT_MARKDOWN",
  "companyId": "northstar-commerce",
  "actor": {
    "kind": "AI_AGENT",
    "id": "pricing-worker",
    "label": "Production pricing worker"
  },
  "target": {
    "type": "SKU",
    "id": "BUNDLE-15",
    "name": "Electronics Bundle"
  },
  "context": {
    "category": "electronics",
    "oldPrice": 100,
    "priceChangePercent": 15,
    "promoDurationDays": 7,
    "inventoryPressure": 0.7,
    "seasonalityIndex": 1.15,
    "priceElasticityHint": 0.68,
    "recentCompetitorBehavior": "MATCHING_DISCOUNTS",
    "promoAlreadyActive": false,
    "recentDailyRevenue": 4200,
    "recentDailyUnits": 42,
    "marginRate": 0.46,
    "competitorVolatility": 0.72,
    "currentConversionRate": 0.034
  }
}
```
Async jobs
Poll background predictions
```graphql
query GetConsequenceJob($id: ID!) {
  getConsequenceJob(id: $id) {
    id
    status
    pollAfterMs
    predictionId
    lastError
  }
}
```
Mode choice
Sync for preflight, async for heavier work
- `SYNC` is for low-latency pre-execution checks.
- `ASYNC` queues a background job and returns `job.id`, `status`, and `pollAfterMs`.
- Once the job is `COMPLETED`, use `predictionId` to load the finished prediction.
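The async loop above can be sketched as follows. `getJob` is an assumption, standing in for whatever wrapper you use around `GET /api/v1/jobs/:jobId`; the helper names are illustrative.

```js
// Poll until the job completes, honoring the server-provided
// pollAfterMs backoff hint. Throws on FAILED or when attempts run out.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const waitForJob = async (getJob, jobId, maxAttempts = 20) => {
  for (let attempt = 0; attempt < maxAttempts; attempt += 1) {
    const job = await getJob(jobId);
    if (job.status === "COMPLETED") return job.predictionId;
    if (job.status === "FAILED") throw new Error(job.lastError ?? "Job failed");
    await sleep(job.pollAfterMs ?? 1000);
  }
  throw new Error(`Job ${jobId} did not complete within ${maxAttempts} polls`);
};
```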
Benchmark policy
Conseq should beat a frozen frontier-prompt baseline, not hide from it.
The pricing capability is now benchmarked against held-out cases with a frozen frontier-prompt baseline. The point is not to claim generic model superiority. The point is to show that Conseq makes better pricing decisions against realized outcomes.
`v2` returns a flat decision envelope instead of leaking the internal prediction/job storage shape.
```json
{
  "predictionId": "pred_123",
  "decision": "ESCALATE",
  "expectedValue": -18400,
  "tailRisk": 0.36,
  "confidence": 0.77,
  "immediateOutcome": "$8.7K of short-term revenue lift if the markdown increases unit demand.",
  "secondOrderEffects": [
    "Competitors are more likely to match the lower price and compress category margin."
  ],
  "thirdOrderEffects": [
    "Customers begin anchoring on the lower price and delay future full-price purchases."
  ],
  "recommendedAction": "Require human approval for markdowns above 15%...",
  "reasoning": "This pricing model treats percent markdown as a short-term unit lift tradeoff..."
}
```
Why the contract matters
This is not just a scoring API.
- The predict response includes expected value, tail risk, and confidence.
- The response now includes a machine-readable `decision` field.
- Each `predictionId` can still be traced to customer, caller, API key, and outcome provenance internally.
- Sync target: `1200ms` for normal pre-execution checks.
- Recommended client timeout: `1500ms`.
- Fallback policy: `ESCALATE` if Conseq is slow or unavailable.
- The canonical public call is `predict`, even while older backend names remain in `v1` and GraphQL paths.
- Confidence now blends feature coverage with recorded historical accuracy.
- When `companyId` is present, same-company recorded outcomes can shift the baseline.
- Repeat customers now get slice-specific calibration from similar recorded outcomes.
- Recurring failure patterns can now be clustered and fed back into future predictions.
- It explicitly returns second-order and third-order effects.
- Each prediction can later be checked against actual outcomes.
- Stripe checkout events can now feed outcome signals automatically.
- The accuracy status is part of the product, not an afterthought.
Versioning
How contract changes ship
The single-predict HTTP transport now leads with `v2`. Additive changes can ship within `v2`. Breaking changes require a new major version path and schema contract.
- `POST /api/v2/predict` is now the canonical HTTP entry point.
- `/api/v1` remains the stable pricing-specific contract during the transition.
- `serviceStatus` still reflects the current GraphQL/AppSync contract version.
- Breaking changes ship only behind a new major version.
Error contract
Stable machine-readable error codes
- `INVALID_INPUT`
- `UNSUPPORTED_CAPABILITY`
- `INSUFFICIENT_CREDITS`
- `RATE_LIMIT_EXCEEDED`
- `DAILY_QUOTA_EXCEEDED`
- `IDEMPOTENCY_CONFLICT`
- `NOT_FOUND`
- `UNAUTHORIZED`
- `INTERNAL_ERROR`
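One way a client might key retry behavior off these codes. The grouping below is a policy suggestion, not an API guarantee, and the helper name is illustrative.

```js
// Rough client-side triage for the stable error codes. Which codes
// are safe to retry is a policy choice; this grouping is illustrative.
const RETRYABLE = new Set(["RATE_LIMIT_EXCEEDED", "INTERNAL_ERROR"]);
const QUOTA = new Set(["INSUFFICIENT_CREDITS", "DAILY_QUOTA_EXCEEDED"]);

const classifyError = (body) => {
  const code = body?.error?.code ?? "INTERNAL_ERROR";
  if (RETRYABLE.has(code)) return "retry-with-backoff";
  if (QUOTA.has(code)) return "stop-and-alert";
  // INVALID_INPUT, UNSUPPORTED_CAPABILITY, IDEMPOTENCY_CONFLICT,
  // NOT_FOUND, UNAUTHORIZED: retrying the same request will not help.
  return "fix-request";
};
```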
HTTP error response
Predict returns one error envelope
```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded for this customer account. Limit: 30 predictions per minute."
  }
}
```
Quickstarts
Copy, paste, and run.
1. API key auth
```sh
export APP_URL="https://conseq.ai"
export CONSEQ_API_KEY="conseq_..."
```
2. Run a prediction
```sh
curl -X POST "$APP_URL/api/v2/predict" \
  -H "content-type: application/json" \
  -H "authorization: Bearer $CONSEQ_API_KEY" \
  -d '{
    "capability": "PRICING",
    "actionType": "PERCENT_MARKDOWN",
    "companyId": "northstar-commerce",
    "actor": {
      "kind": "AI_AGENT",
      "id": "pricing-worker"
    },
    "target": {
      "type": "SKU",
      "id": "BUNDLE-15",
      "name": "Electronics Bundle"
    },
    "context": {
      "oldPrice": 100,
      "priceChangePercent": 0.15,
      "recentDailyRevenue": 4200,
      "recentDailyUnits": 42,
      "marginRate": 0.46
    }
  }'
```
3. Record the outcome
```sh
curl -X POST "$APP_URL/api/v1/outcomes" \
  -H "content-type: application/json" \
  -H "authorization: Bearer $CONSEQ_API_KEY" \
  -d '{
    "predictionId": "pred_123",
    "actualRevenueDelta": -1200,
    "actualUnitDelta": -12,
    "competitorMatchedWithin48h": true,
    "recommendationFollowed": false
  }'
```
4. JS/TS wrapper
```js
const predict = async (input) => {
  const response = await fetch(`${process.env.APP_URL}/api/v2/predict`, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${process.env.CONSEQ_API_KEY}`,
    },
    body: JSON.stringify(input),
  });
  if (!response.ok) {
    throw new Error("Predict request failed.");
  }
  return response.json();
};

const prediction = await predict({
  capability: "PRICING",
  actionType: "PERCENT_MARKDOWN",
  companyId: "northstar-commerce",
  actor: { kind: "AI_AGENT", id: "pricing-worker" },
  target: { type: "SKU", id: "BUNDLE-15" },
  context: {
    oldPrice: 100,
    priceChangePercent: 0.15,
    recentDailyRevenue: 4200,
  },
});

console.log(prediction.predictionId, prediction.decision);
```
5. Fallback policy
```js
try {
  const prediction = await predict(input);
  if (prediction.decision !== "ALLOW") {
    return { execute: false, reason: prediction.decision };
  }
  return { execute: true };
} catch (error) {
  return {
    execute: false,
    reason: "ESCALATE",
    notes: "Conseq timed out or was unavailable. Keep the current price or escalate.",
  };
}
```
6. Queue async job
`ASYNC` remains on the current `v1` pricing contract while the `v2` request envelope is being standardized.
```sh
curl -X POST "$APP_URL/api/v1/predict" \
  -H "content-type: application/json" \
  -H "authorization: Bearer $CONSEQ_API_KEY" \
  -d '{
    "capability": "PRICING",
    "responseMode": "ASYNC",
    "actionType": "PERCENT_MARKDOWN",
    "companyId": "northstar-commerce",
    "sku": "BUNDLE-15",
    "oldPrice": 100,
    "priceChangePercent": 0.15,
    "recentDailyRevenue": 4200
  }'
```
7. Poll async job
```sh
curl "$APP_URL/api/v1/jobs/job_123" \
  -H "authorization: Bearer $CONSEQ_API_KEY"
```
Webhook flow
Push real Stripe outcomes back into Conseq.
- Create a Stripe connector in `/api/connectors`.
- Use the generated webhook URL as a Stripe endpoint.
- Subscribe the endpoint to `checkout.session.completed`.
- Add `conseq_prediction_id` metadata to the checkout session.
- Optionally add `conseq_units` when one session covers multiple units.
Webhook metadata
```js
metadata: {
  conseq_prediction_id: "pred_123",
  conseq_units: "2"
}
```
Conseq ignores unsupported event types, events without a prediction link, non-USD sessions, and duplicate Stripe event IDs.
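A small helper for building that metadata when creating the checkout session. The function name is illustrative; note that Stripe metadata values must be strings, so unit counts need stringifying.

```js
// Build the Stripe metadata object that links a checkout session back
// to a Conseq prediction. conseq_units is optional and only needed when
// one session covers multiple units.
const conseqMetadata = (predictionId, units) => {
  const metadata = { conseq_prediction_id: predictionId };
  if (units !== undefined) metadata.conseq_units = String(units);
  return metadata;
};
```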