Reorder registers four background jobs on your Medusa server at install time. They run automatically — no additional setup is required — but understanding what each one does, how often it runs, and how failures are handled is important for operating a production subscription store. There is no Admin UI dashboard for job status. Job activity is emitted as structured log entries to your Medusa logger and can be piped to any log aggregation system you already use.

Overview

Job name                                   Schedule             Purpose
process-renewal-cycles                     Every 5 minutes      Execute due subscription renewals
process-dunning-retries                    Every 5 minutes      Retry failed renewal payments
process-analytics-daily-snapshots          Daily at 00:00 UTC   Rebuild analytics data
process-cancellation-operational-metrics   Every hour           Aggregate cancellation metrics

process-renewal-cycles

Schedule: */5 * * * * — runs every 5 minutes. This is the core renewal engine. On each execution it queries for all RenewalCycle records that are due — specifically those with status scheduled or failed where scheduled_for is in the past. It then processes them in batches of 20. What it does for each cycle:
  1. Checks whether approval is required. If approval_required is true and the cycle has not been approved, it is skipped until it is.
  2. Runs processRenewalCycleWorkflow, which creates a Medusa order and charges the customer via the configured payment provider.
  3. On success the renewal cycle transitions to succeeded and the next cycle is scheduled.
  4. On payment failure the cycle transitions to failed and a DunningCase is opened.
Concurrency: Individual renewal cycles are idempotent. If two job instances attempt to process the same cycle simultaneously, the second attempt is detected and logged as blocked (a warning, not an error). No distributed lock is used at the job level; the cycles themselves are protected.
Log events emitted:
  • renewal.job — job start and completion summary (scanned, processed, succeeded, failed, blocked counts)
  • renewal.job.discovery — how many due cycles were found
  • renewal.job.batch — per-batch progress
  • renewal.job.cycle — per-cycle outcome with duration
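The selection and batching logic described above can be sketched as follows. This is an illustrative model only, not the plugin's source: the `RenewalCycle` field names come from this page, but `selectDueCycles`, `isProcessable`, and `batch` are hypothetical helpers.

```typescript
// Sketch of how the renewal job picks up due cycles. Assumption: an
// `approved` flag marks approval; the real schema may differ.
type RenewalCycle = {
  id: string;
  status: "scheduled" | "failed" | "succeeded";
  scheduled_for: Date;
  approval_required: boolean;
  approved: boolean;
};

const BATCH_SIZE = 20;

// Due = status scheduled or failed, with scheduled_for in the past.
function selectDueCycles(cycles: RenewalCycle[], now: Date): RenewalCycle[] {
  return cycles.filter(
    (c) =>
      (c.status === "scheduled" || c.status === "failed") &&
      c.scheduled_for.getTime() <= now.getTime()
  );
}

// Step 1: cycles awaiting approval are skipped until approved.
function isProcessable(c: RenewalCycle): boolean {
  return !c.approval_required || c.approved;
}

// Cycles are processed in batches of 20.
function batch<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```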

process-dunning-retries

Schedule: */5 * * * * — runs every 5 minutes. Handles payment recovery for subscriptions that failed to renew. On each execution it queries for all DunningCase records whose next retry is due and processes them in batches of 20. What it does for each case:
  1. Runs runDunningRetryWorkflow, which attempts to charge the customer again using the same payment provider.
  2. On success the dunning case is marked recovered and the subscription is returned to active.
  3. On failure the next retry is scheduled according to the dunning configuration (retry intervals and max attempts are set in Subscription Settings).
  4. When all retry attempts are exhausted (or a permanent failure is detected) the case is marked unrecovered and the subscription remains in past_due.
Concurrency: Protected by a distributed lock (jobs:dunning-retries). If a previous execution is still running when the next one fires, the new execution is skipped silently. This prevents double-charging on slow payment provider responses.
Log events emitted:
  • dunning.job — job start and completion with aggregate stats (recovered, rescheduled, unrecovered, avg attempts, recovery rate, avg time to recover)
  • dunning.job.discovery — how many due cases were found
  • dunning.job.batch — per-batch progress
  • dunning.job.case — per-case outcome with attempt number and duration
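Steps 3 and 4 above amount to a simple scheduling decision: pick the next interval from the configured list, or give up once attempts are exhausted. A minimal sketch, assuming retry intervals are expressed as a list of hours (the function name, `DunningDecision` type, and interval representation are all assumptions, not the plugin's actual configuration shape):

```typescript
// Sketch of dunning retry scheduling. One interval per attempt; when the
// list is exhausted the case becomes unrecovered and the subscription
// stays past_due.
type DunningDecision =
  | { action: "retry"; next_retry_at: Date }
  | { action: "unrecovered" };

function scheduleNextRetry(
  attemptsMade: number,          // retries already performed
  retryIntervalsHours: number[], // e.g. [24, 72, 168], set in Subscription Settings
  now: Date
): DunningDecision {
  if (attemptsMade >= retryIntervalsHours.length) {
    return { action: "unrecovered" }; // max attempts exhausted
  }
  const hours = retryIntervalsHours[attemptsMade];
  return {
    action: "retry",
    next_retry_at: new Date(now.getTime() + hours * 3600_000),
  };
}
```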

process-analytics-daily-snapshots

Schedule: 0 0 * * * — runs once daily at 00:00 UTC. Rebuilds the daily snapshot rows that power the Analytics dashboard. On each run it rebuilds the last 3 days of snapshots (today plus the two preceding days) to self-correct any snapshots that may have been incomplete due to late-arriving data or a job failure. What it does:
  1. Runs rebuildAnalyticsDailySnapshotsWorkflow over the resolved 3-day window.
  2. For each day in the window it scans all subscriptions and upserts the snapshot row — MRR, active count, churn count, new subscriptions, and related KPIs.
  3. The analytics dashboard reads exclusively from these snapshot rows, not from live subscription data. If this job does not run, the dashboard will show stale data.
Concurrency: Protected by a distributed lock (jobs:analytics-daily-snapshots). Concurrent executions are skipped.
If you need to rebuild analytics for a longer historical window — for example after a data migration or a multi-day outage — you can trigger a manual rebuild via POST /admin/subscription-analytics/rebuild.
Log events emitted:
  • analytics.job — job start and completion (processed days, processed subscriptions, upserted rows, skipped rows, any failed or blocked days)
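Resolving the 3-day window (today plus the two preceding days, in UTC) can be sketched like this. The window size comes from this page; the function itself is illustrative, not the plugin's code:

```typescript
// Sketch: compute the UTC day keys the snapshot job rebuilds on each run.
// Date.UTC handles month/year rollover for the day arithmetic.
function resolveSnapshotWindow(now: Date): string[] {
  const days: string[] = [];
  for (let offset = 2; offset >= 0; offset--) {
    const d = new Date(
      Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() - offset)
    );
    days.push(d.toISOString().slice(0, 10)); // YYYY-MM-DD
  }
  return days;
}
```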

process-cancellation-operational-metrics

Schedule: 0 * * * * — runs every hour. A read-only metrics aggregation job. It does not create or modify any records — it scans cancellation data for the past 24 hours and emits structured log entries with operational metrics. Its primary purpose is to enable alerting on churn spikes. What it reports:
  • Total cancellation cases, terminal cases, cancelled and retained counts
  • Pause count, churn rate, and offer acceptance rate
  • Top cancellation reason categories for the window
  • Spike detection: if any reason category increases by 5 or more cases compared to the previous equivalent window, the log entry is emitted at warn level with alertable: true
What it does NOT do: It does not write to the database, update any subscription records, or affect the Admin UI. The metrics exist only as log entries.
Concurrency: Protected by a distributed lock (jobs:cancellation-operational-metrics).
Log events emitted:
  • cancellation.job — completed metrics summary including churn rate, offer acceptance rate, top reason categories, and spike signal if detected
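The spike rule is: a reason category is alertable when its count grows by 5 or more versus the previous equivalent window. The threshold comes from this page; the function shape below is an assumption for illustration:

```typescript
// Sketch of the spike check over per-category cancellation counts.
// Categories absent from the previous window count as zero.
function detectSpikes(
  current: Record<string, number>,
  previous: Record<string, number>,
  threshold = 5
): string[] {
  return Object.keys(current).filter(
    (reason) => current[reason] - (previous[reason] ?? 0) >= threshold
  );
}
```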

Schedules and Configuration

Job schedules are defined in code and are not configurable through the Admin UI or medusa-config.ts. The defaults are:
process-renewal-cycles                     */5 * * * *   (every 5 min)
process-dunning-retries                    */5 * * * *   (every 5 min)
process-analytics-daily-snapshots          0 0 * * *     (daily at midnight UTC)
process-cancellation-operational-metrics   0 * * * *     (every hour)
To change a schedule you need to modify the config.schedule export in the corresponding job file and rebuild the plugin.
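Assuming the jobs follow Medusa v2's standard scheduled-job file layout (a default export plus a `config` object with a cron `schedule`), an override would look roughly like the sketch below. Verify the actual file contents in the plugin source before editing; the job body here is a placeholder.

```typescript
// Hypothetical shape of a plugin job file with a modified schedule.
import { MedusaContainer } from "@medusajs/framework/types"

export default async function processRenewalCycles(container: MedusaContainer) {
  // ... job body as shipped by the plugin ...
}

// Changing this cron expression (here: every 10 minutes instead of 5)
// and rebuilding the plugin changes when the job fires.
export const config = {
  name: "process-renewal-cycles",
  schedule: "*/10 * * * *",
}
```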

Monitoring

Since there is no dashboard, the recommended approach is to forward Medusa server logs to a log aggregation tool (Datadog, Grafana Loki, Logtail, etc.) and build alerts on the structured fields:
  • outcome: "failed" or alertable: true on any job event
  • failure_kind: "unexpected_error" on any job event
  • Missing outcome: "completed" entries for analytics.job — indicates the daily job did not run
  • outcome: "blocked" on renewal.job.cycle is normal (idempotency guard) and should not be alerted
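The alerting rules above can be expressed as a single predicate over a parsed log entry. The field names (`outcome`, `failure_kind`, `alertable`) match this page; the entry shape is otherwise an assumption, so adapt it to whatever your aggregation tool hands you:

```typescript
// Sketch of an alert filter for the plugin's structured job logs.
type JobLogEntry = {
  event: string;        // e.g. "renewal.job", "dunning.job.case"
  outcome?: string;     // e.g. "completed" | "failed" | "blocked"
  failure_kind?: string;
  alertable?: boolean;
};

function shouldAlert(entry: JobLogEntry): boolean {
  // Blocked renewal cycles are the idempotency guard firing — normal,
  // never alertable.
  if (entry.event === "renewal.job.cycle" && entry.outcome === "blocked") {
    return false;
  }
  return (
    entry.outcome === "failed" ||
    entry.alertable === true ||
    entry.failure_kind === "unexpected_error"
  );
}
```

Detecting a missing daily `analytics.job` "completed" entry needs a scheduled absence check in your log tool rather than a per-entry filter.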