## Overview
| Job name | Schedule | Purpose |
|---|---|---|
| `process-renewal-cycles` | Every 5 minutes | Execute due subscription renewals |
| `process-dunning-retries` | Every 5 minutes | Retry failed renewal payments |
| `process-analytics-daily-snapshots` | Daily at 00:00 UTC | Rebuild analytics data |
| `process-cancellation-operational-metrics` | Every hour | Aggregate cancellation metrics |
## process-renewal-cycles
Schedule: `*/5 * * * *` — runs every 5 minutes.
This is the core renewal engine. On each execution it queries for all `RenewalCycle` records that are due — specifically those with status `scheduled` or `failed` where `scheduled_for` is in the past. It then processes them in batches of 20.
What it does for each cycle:
- Checks whether approval is required. If `approval_required` is true and the cycle has not been approved, it is skipped until it is.
- Runs `processRenewalCycleWorkflow`, which creates a Medusa order and charges the customer via the configured payment provider.
- On success the renewal cycle transitions to `succeeded` and the next cycle is scheduled.
- On payment failure the cycle transitions to `failed` and a `DunningCase` is opened.
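The due-cycle selection and batching described above can be sketched as follows. This is illustrative only: the `RenewalCycle` shape and helper names are assumptions based on the fields mentioned on this page, not the plugin's actual code.

```typescript
// Illustrative sketch of due-cycle selection and batching.
// The RenewalCycle shape is an assumption, not the plugin's real model.
interface RenewalCycle {
  id: string;
  status: "scheduled" | "failed" | "succeeded" | "processing";
  scheduled_for: Date;
  approval_required: boolean;
  approved: boolean;
}

const BATCH_SIZE = 20;

// A cycle is due when its status is scheduled or failed and its
// scheduled_for timestamp is in the past.
function isDue(cycle: RenewalCycle, now: Date): boolean {
  return (
    (cycle.status === "scheduled" || cycle.status === "failed") &&
    cycle.scheduled_for.getTime() <= now.getTime()
  );
}

// Split due cycles into batches of 20 for sequential processing.
function toBatches<T>(items: T[], size: number = BATCH_SIZE): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

In the real job the due filter runs as a database query rather than in memory; the batching keeps each charge loop bounded.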
If a cycle is already being processed when it is picked up, its outcome is `blocked` (a warning, not an error). No distributed lock is used at the job level — cycles themselves are protected individually.
Log events emitted:
- `renewal.job` — job start and completion summary (scanned, processed, succeeded, failed, blocked counts)
- `renewal.job.discovery` — how many due cycles were found
- `renewal.job.batch` — per-batch progress
- `renewal.job.cycle` — per-cycle outcome with duration
## process-dunning-retries
Schedule: `*/5 * * * *` — runs every 5 minutes.
Handles payment recovery for subscriptions that failed to renew. On each execution it queries for all `DunningCase` records whose next retry is due and processes them in batches of 20.
What it does for each case:
- Runs `runDunningRetryWorkflow`, which attempts to charge the customer again using the same payment provider.
- On success the dunning case is marked `recovered` and the subscription is returned to `active`.
- On failure the next retry is scheduled according to the dunning configuration (retry intervals and max attempts are set in Subscription Settings).
- When all retry attempts are exhausted (or a permanent failure is detected) the case is marked `unrecovered` and the subscription remains in `past_due`.
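The retry-scheduling step can be sketched as a pure function. The `DunningConfig` shape, the interval values, and the function name are assumptions standing in for whatever is configured in Subscription Settings.

```typescript
// Illustrative sketch of retry scheduling after a failed attempt.
// The DunningConfig shape and the example intervals are assumptions.
interface DunningConfig {
  retryIntervalsHours: number[]; // e.g. [24, 72, 168]
  maxAttempts: number;
}

type NextStep =
  | { kind: "retry_at"; at: Date }
  | { kind: "unrecovered" };

// Decide what follows a failed attempt: schedule the next retry from
// the configured intervals, or give up once attempts are exhausted.
function afterFailedAttempt(
  attempt: number, // 1-based number of the attempt that just failed
  failedAt: Date,
  config: DunningConfig
): NextStep {
  if (attempt >= config.maxAttempts) {
    return { kind: "unrecovered" };
  }
  // Reuse the last interval if there are more attempts than intervals.
  const idx = Math.min(attempt - 1, config.retryIntervalsHours.length - 1);
  const hours = config.retryIntervalsHours[idx];
  return {
    kind: "retry_at",
    at: new Date(failedAt.getTime() + hours * 3_600_000),
  };
}
```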
This job acquires a distributed lock (`jobs:dunning-retries`). If a previous execution is still running when the next one fires, the new execution is skipped silently. This prevents double-charging on slow payment provider responses.
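The skip-if-running control flow can be sketched with a simple in-process guard. The plugin's real lock is distributed (keyed `jobs:dunning-retries`), which this sketch deliberately does not reproduce; only the skip semantics are shown.

```typescript
// Minimal sketch of the "skip if already running" guard. An in-process
// Set stands in for the distributed lock used by the real job.
const runningKeys = new Set<string>();

async function withSkipLock(
  key: string,
  fn: () => Promise<void>
): Promise<"ran" | "skipped"> {
  if (runningKeys.has(key)) {
    return "skipped"; // a previous execution is still running
  }
  runningKeys.add(key);
  try {
    await fn();
    return "ran";
  } finally {
    runningKeys.delete(key); // always release, even on failure
  }
}
```

The key property to preserve in any real implementation is release-on-failure: a crashed execution must not leave the lock held forever (distributed locks typically add a TTL for this).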
Log events emitted:
- `dunning.job` — job start and completion with aggregate stats (recovered, rescheduled, unrecovered, avg attempts, recovery rate, avg time to recover)
- `dunning.job.discovery` — how many due cases were found
- `dunning.job.batch` — per-batch progress
- `dunning.job.case` — per-case outcome with attempt number and duration
## process-analytics-daily-snapshots
Schedule: `0 0 * * *` — runs once daily at 00:00 UTC.
Rebuilds the daily snapshot rows that power the Analytics dashboard. On each run it rebuilds the last 3 days of snapshots (today plus the two preceding days) to self-correct any snapshots that may have been incomplete due to late-arriving data or a job failure.
What it does:
- Runs `rebuildAnalyticsDailySnapshotsWorkflow` over the resolved 3-day window.
- For each day in the window it scans all subscriptions and upserts the snapshot row — MRR, active count, churn count, new subscriptions, and related KPIs.
- The analytics dashboard reads exclusively from these snapshot rows, not from live subscription data. If this job does not run, the dashboard will show stale data.
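Resolving the 3-day window (today plus the two preceding days, in UTC) might look like the following sketch; the function name is hypothetical.

```typescript
// Illustrative sketch of the 3-day rebuild window: today in UTC plus
// the two preceding days, oldest first, as YYYY-MM-DD strings.
function resolveSnapshotWindow(now: Date, days: number = 3): string[] {
  const out: string[] = [];
  for (let i = days - 1; i >= 0; i--) {
    // Date.UTC normalizes out-of-range day values, so month and
    // year boundaries (including leap days) are handled for us.
    const d = new Date(Date.UTC(
      now.getUTCFullYear(),
      now.getUTCMonth(),
      now.getUTCDate() - i
    ));
    out.push(d.toISOString().slice(0, 10));
  }
  return out;
}
```

Rebuilding trailing days rather than only "today" is what makes the job self-correcting after late-arriving data or a missed run.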
This job acquires a distributed lock (`jobs:analytics-daily-snapshots`). Concurrent executions are skipped.
If you need to rebuild analytics for a longer historical window — for example after a data migration or a multi-day outage — you can trigger a manual rebuild via `POST /admin/subscription-analytics/rebuild`.

Log events emitted:

- `analytics.job` — job start and completion (processed days, processed subscriptions, upserted rows, skipped rows, any failed or blocked days)
## process-cancellation-operational-metrics
Schedule: `0 * * * *` — runs every hour.
A read-only metrics aggregation job. It does not create or modify any records — it scans cancellation data for the past 24 hours and emits structured log entries with operational metrics. Its primary purpose is to enable alerting on churn spikes.
What it reports:
- Total cancellation cases, terminal cases, cancelled and retained counts
- Pause count, churn rate, and offer acceptance rate
- Top cancellation reason categories for the window
- Spike detection: if any reason category increases by 5 or more cases compared to the previous equivalent window, the log entry is emitted at `warn` level with `alertable: true`
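The spike check amounts to comparing per-category counts between the current and previous windows; a sketch, with illustrative function and field names:

```typescript
// Illustrative sketch of spike detection: flag any reason category that
// grew by 5 or more cases versus the previous equivalent window.
const SPIKE_THRESHOLD = 5;

function detectSpikes(
  current: Record<string, number>,  // counts for the last 24h
  previous: Record<string, number>  // counts for the 24h before that
): { category: string; delta: number }[] {
  const spikes: { category: string; delta: number }[] = [];
  for (const [category, count] of Object.entries(current)) {
    const delta = count - (previous[category] ?? 0);
    if (delta >= SPIKE_THRESHOLD) {
      spikes.push({ category, delta });
    }
  }
  return spikes;
}
```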
This job acquires a distributed lock (`jobs:cancellation-operational-metrics`).
Log events emitted:
- `cancellation.job` — completed metrics summary including churn rate, offer acceptance rate, top reason categories, and spike signal if detected
## Schedules and Configuration
Job schedules are defined in code and are not configurable through the Admin UI or `medusa-config.ts`. The defaults are listed in the Overview table above. To change a schedule, edit the `config.schedule` export in the corresponding job file and rebuild the plugin.
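For illustration, a scheduled-job file following Medusa v2's conventions (a default-export handler plus a `config` export). The handler body is elided, and the widened schedule is only an example.

```typescript
// src/jobs/process-renewal-cycles.ts (sketch)
// Medusa v2 scheduled-job convention: default-export handler + config.
export default async function processRenewalCycles(container: unknown) {
  // ... job logic unchanged ...
}

export const config = {
  name: "process-renewal-cycles",
  schedule: "*/10 * * * *", // example: widened from the default "*/5 * * * *"
};
```

After editing, rebuild the plugin so the new `schedule` value is picked up.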
## Monitoring
Since there is no dashboard, the recommended approach is to forward Medusa server logs to a log aggregation tool (Datadog, Grafana Loki, Logtail, etc.) and build alerts on the structured fields:

- `outcome: "failed"` or `alertable: true` on any job event
- `failure_kind: "unexpected_error"` on any job event
- Missing `outcome: "completed"` entries for `analytics.job` — indicates the daily job did not run
- `outcome: "blocked"` on `renewal.job.cycle` is normal (idempotency guard) and should not be alerted on
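The per-entry rules above can be sketched as a predicate for a log pipeline. The entry shape is an assumption based on the fields this page documents; the missing-`analytics.job` rule cannot be expressed per entry and should be configured as an absence alert in the log tool.

```typescript
// Illustrative sketch of the per-entry alert rules listed above.
// The JobLogEntry shape is an assumption, not the plugin's log schema.
interface JobLogEntry {
  event: string; // e.g. "renewal.job.cycle"
  outcome?: string;
  alertable?: boolean;
  failure_kind?: string;
}

function shouldAlert(entry: JobLogEntry): boolean {
  // blocked cycle outcomes are the idempotency guard firing: not alertable
  if (entry.event === "renewal.job.cycle" && entry.outcome === "blocked") {
    return false;
  }
  return (
    entry.outcome === "failed" ||
    entry.alertable === true ||
    entry.failure_kind === "unexpected_error"
  );
}
```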