# STD-GOV-136: Performance Management Framework
| Field | Value |
|---|---|
| Standard | STD-GOV-136 |
| Title | Performance Management Framework |
| Status | Draft — for leadership offsite review |
| Owner | CDO |
| Created | 2026-04-05 |
| Review | Quarterly |
| Depends on | STD-GOV-135 (Execution Framework) |
## Purpose
Define how Simpaisa measures progress against strategic goals, surfaces delivery gaps early, and enables leadership intervention before problems become crises. The 2026 strategy identifies this as Foundational Support #2 ("Centralised performance management framework") and calls for "clear success metrics with regular, non-bypassable review rhythms."
This standard defines what is measured, how often, by whom, and what happens when the numbers say something is wrong.
## Scope
Company-wide performance visibility across all five strategic goals (SG1-SG5), six foundational supports, and all active initiatives tracked under STD-GOV-135 (Execution Framework).
Does not cover: individual employee performance management, team-level sprint velocity, or HR-related performance reviews.
## Current State
- No company-wide performance dashboard.
- No defined success metrics for the five strategic goals.
- No regular performance review cadence outside of ad-hoc CEO check-ins.
- Technical SLOs defined (STD-DEVEX-090) but not connected to business outcomes.
- No leading indicators. Problems are detected when they become incidents.
- No mechanism to compare planned vs actual delivery across departments.
## Gaps
- No success metrics for strategic goals — impossible to measure progress.
- No performance dashboard — leadership relies on verbal updates and Slack.
- No leading indicators — only lagging indicators (incidents, missed deadlines).
- No department-level accountability metrics.
- No connection between technical SLOs and business outcomes.
## Target State
- Each strategic goal has 2-3 measurable success metrics with targets and thresholds.
- A company-wide performance dashboard updated weekly with automated data where possible.
- Leadership reviews performance monthly with authority to intervene.
- Leading indicators surface problems before they become crises.
- Department heads own their metrics and report exceptions (not status).
## Performance Architecture
```
┌─────────────────────────────────────────────────────────┐
│              STRATEGIC GOALS (SG1-SG5)                  │
│              Success metrics + targets                  │
└───────────┬─────────────────────────────┬───────────────┘
            │                             │
            ▼                             ▼
┌───────────────────────┐   ┌───────────────────────────┐
│  INITIATIVE METRICS   │   │   OPERATIONAL METRICS     │
│  (from STD-GOV-135)   │   │   (SLOs, SLAs, health)    │
│  Completion rate      │   │   Uptime, latency, errors │
│  Outcome achievement  │   │   Channel health          │
│  Time to delivery     │   │   Partner performance     │
└───────────┬───────────┘   └───────────┬───────────────┘
            │                           │
            ▼                           ▼
┌─────────────────────────────────────────────────────────┐
│                 PERFORMANCE DASHBOARD                   │
│       Weekly automated + monthly leadership review      │
└─────────────────────────────────────────────────────────┘
```
## Strategic Goal Metrics
### SG1: Operational Excellence & Platform Infrastructure Resilience
| Metric | Current | Target (Q4 2026) | Source | Frequency |
|---|---|---|---|---|
| Platform availability | Unknown (no SLO) | 99.9% | Monitoring | Real-time |
| Incident response time (P1) | Unknown | < 15 min to acknowledge | Incident tracker | Per incident |
| Mean time to recovery (P1) | Unknown | < 1 hour | Incident tracker | Per incident |
| Security posture score | 4/10 | 7/10 | Security Architecture review | Quarterly |
| Critical security findings open | 6 | 0 | Security Architecture | Monthly |
| Data maturity score | 1/5 | 3/5 | Data Architecture review | Quarterly |
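Metric tables like the one above lend themselves to a machine-readable registry, which the dashboard in Phase 3 could consume. A minimal sketch, assuming a simple per-row schema — all field names and the `SG1_METRICS` sample are hypothetical, not part of this standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """One row of a strategic-goal metric table (illustrative schema only)."""
    goal: str       # strategic goal, e.g. "SG1"
    name: str       # metric name as it appears in the table
    target: str     # kept as text: targets mix percentages, counts, and durations
    source: str     # system of record for the value
    frequency: str  # how often the value refreshes
    owner: str      # named individual, per the scorecard rule below

# Two sample rows transcribed from the SG1 table; owners are not yet assigned.
SG1_METRICS = [
    Metric("SG1", "Platform availability", "99.9%", "Monitoring", "Real-time", "TBD"),
    Metric("SG1", "Mean time to recovery (P1)", "< 1 hour", "Incident tracker", "Per incident", "TBD"),
]
```

Keeping targets as text avoids prematurely forcing units; a later revision could type them once baselines exist.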
### SG2: Institutionalise Execution & Performance Management
| Metric | Current | Target (Q4 2026) | Source | Frequency |
|---|---|---|---|---|
| Active initiatives in register | 0 (no register) | All tracked | Central register | Fortnightly |
| Stale initiatives (no update > 4 weeks) | N/A | 0 | Central register | Fortnightly |
| Initiative outcome achievement rate | N/A | > 70% | Central register | Quarterly |
| Proposal-to-decision time | N/A | < 2 weeks | Central register | Monthly |
### SG3: Financial & Liquidity Discipline for Scale
| Metric | Current | Target (Q4 2026) | Source | Frequency |
|---|---|---|---|---|
| Settlement reconciliation rate | Unknown | 99.9% automated | Finance systems | Daily |
| Liquidity visibility latency | Unknown (manual) | < 1 hour | Treasury dashboard | Real-time |
| FX exposure unhedged | Unknown | < defined threshold | Treasury | Daily |
| Revenue per corridor accuracy | Unknown | > 95% match to forecast | Finance | Monthly |
### SG4: Market Expansion & Global Network Activation
| Metric | Current | Target (Q4 2026) | Source | Frequency |
|---|---|---|---|---|
| Markets with documented regulatory posture | 0/5 | 5/5 | Architecture repo | Quarterly |
| Time to activate new market | Unknown | < 90 days | Operations | Per activation |
| Partner integrations using new architecture | 0% | > 50% of new | Engineering | Quarterly |
| Non-Pakistan revenue share | Unknown | > 30% | Finance | Monthly |
### SG5: Revenue Diversification & Merchant Monetisation
| Metric | Current | Target (Q4 2026) | Source | Frequency |
|---|---|---|---|---|
| Products per merchant (average) | Unknown | > 2.0 | Platform data | Monthly |
| Revenue per merchant (average) | Unknown | Trending upward | Finance | Monthly |
| Merchant churn rate | Unknown | < 5% quarterly | Commercial | Quarterly |
Note: Many "Current" values are "Unknown" because the measurement infrastructure does not exist yet. Establishing baselines is a Q2 2026 priority. You cannot improve what you cannot measure.
## Thresholds and Alerts
Each metric has three zones:
```
 GREEN      AMBER          RED
═══════╤═══════════╤═══════════════
On     │ At risk   │ Off track
track  │ Needs     │ Leadership
       │ attention │ intervention
       │           │ required
```
| Zone | Definition | Response |
|---|---|---|
| Green | On target or ahead. No action needed. | No escalation. Reported in dashboard. |
| Amber | Within 10% of target but trending in the wrong direction, or at risk of missing. | Department head investigates. Reported at next Leadership Forum. |
| Red | Below target, or trending significantly in the wrong direction. | Immediate escalation to Leadership Forum. Root cause analysis within 1 week. Action plan required. |
Thresholds are set per-metric by the metric owner and reviewed quarterly. Initial thresholds will be set once baselines are established.
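The zone logic above can be expressed mechanically. A minimal sketch, assuming the amber band is "within 10% of target on the wrong side" as the table defines it — real thresholds are per-metric and will replace this flat 10% rule once baselines exist:

```python
from enum import Enum

class Zone(Enum):
    GREEN = "green"
    AMBER = "amber"
    RED = "red"

def classify(current: float, target: float, higher_is_better: bool = True) -> Zone:
    """Classify a metric reading against its target.

    Amber = within 10% of target on the wrong side, per the zone table.
    This flat band is a placeholder: metric owners set real thresholds,
    reviewed quarterly.
    """
    if higher_is_better:
        if current >= target:
            return Zone.GREEN
        if current >= target * 0.9:
            return Zone.AMBER
        return Zone.RED
    # Lower is better, e.g. incident response time or churn.
    if current <= target:
        return Zone.GREEN
    if current <= target * 1.1:
        return Zone.AMBER
    return Zone.RED
```

For example, an availability reading of 99.5% against a 99.9% target classifies as amber; a P1 MTTR of 2 hours against a 1-hour target classifies as red.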
## Review Cadence
| Review | Frequency | Attendees | Focus |
|---|---|---|---|
| Dashboard update | Weekly (automated) | All leadership | Data refresh. No meeting required. Leaders review async. |
| Monthly Performance Review | Monthly (last week) | CEO, CDO, COO, CFO, department heads | Metrics by exception. Only amber/red items discussed. Green items are not presented. |
| Quarterly Strategy Review | Quarterly | Full leadership team + board summary | Are we on track against the strategy? Metrics against targets. Recalibrate targets if needed. |
### Monthly Performance Review Agenda (60 minutes)
| Time | Item | Owner |
|---|---|---|
| 00-05 | Dashboard walkthrough (red items only) | PMO |
| 05-25 | Red items: root cause + action plan | Metric owners |
| 25-40 | Amber items: risk assessment | Metric owners |
| 40-50 | Cross-department dependencies | All |
| 50-55 | Decisions and actions | CEO |
| 55-60 | Next month focus | CEO |
Rule: Green is silent. If a metric is green, it does not get airtime. The review is not a celebration of what is working. It is a diagnostic of what is not.
## Department Scorecards
Each department maintains a scorecard with:

- 3-5 metrics mapped to the strategic goals they support
- Current value, target, trend direction
- Owner (named individual, not team)
Departments: Product, Technology, Security, Data, Operations, Commercial, Finance.
Department heads present their scorecard exceptions (amber/red only) at the monthly review.
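The "exceptions only" rule is simple to enforce at the data level: filter the scorecard before it reaches the agenda. A minimal sketch, assuming scorecard rows are dicts with a `zone` field — both the row shape and field name are hypothetical:

```python
def exceptions(scorecard: list[dict]) -> list[dict]:
    """Return only the amber/red rows of a department scorecard.

    Green is silent: the monthly review allots airtime only to items
    needing attention, so green rows never reach the agenda.
    """
    return [row for row in scorecard if row.get("zone") in {"amber", "red"}]
```

A scorecard that filters down to an empty list means the department has nothing to present that month, which is the intended outcome.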
## Leading vs Lagging Indicators
| Type | Example | Why it matters |
|---|---|---|
| Leading | Number of security findings open > 30 days | Predicts future incidents before they happen |
| Leading | Initiatives with no status update > 2 weeks | Predicts execution stall before deadlines are missed |
| Leading | Partner integration test failure rate trending up | Predicts channel health degradation |
| Lagging | Platform downtime hours | Already happened. Post-mortem territory. |
| Lagging | Revenue miss vs forecast | Already missed. Damage done. |
| Lagging | Regulatory finding or sanction | Too late to prevent. |
The framework prioritises leading indicators. Lagging indicators confirm what happened. Leading indicators give you time to intervene.
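One leading indicator from the table — initiatives with no status update in over 2 weeks — can be computed directly from the central register. A minimal sketch, assuming each initiative is a `(name, last_update)` pair; the actual register schema in STD-GOV-135 may differ:

```python
from datetime import date, timedelta

def stale_initiatives(initiatives, today=None, max_age_days=14):
    """Flag initiatives with no status update in more than 2 weeks.

    This surfaces execution stall before a deadline is actually missed,
    which is the point of a leading indicator. `initiatives` is assumed
    to be an iterable of (name, last_update_date) pairs.
    """
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    return [name for name, last_update in initiatives if last_update < cutoff]
```

Run fortnightly against the register, this feeds the SG2 "stale initiatives" metric, whose target is zero.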
## Integration with Existing Standards
| Standard | Integration |
|---|---|
| STD-GOV-135 (Execution Framework) | Initiative metrics (completion rate, outcome achievement) feed the performance dashboard. |
| STD-DEVEX-090 (Service Level Objectives) | Technical SLOs for platform availability, latency, and error rate feed SG1 metrics. |
| STD-DEVEX-091 (Error Budget Policy) | Error budget consumption is a leading indicator for SG1. |
| STD-PRODUCT-105 (Channel Health Monitoring) | Channel health metrics feed SG4 (network health) and SG1 (operational resilience). |
| Maerifa | Performance dashboard and metric definitions indexed in Maerifa. Queryable: "what is our SLO for Pay-In availability?" |
## Rollout Plan
| Phase | Timeline | Scope |
|---|---|---|
| 1. Offsite agreement | April 14-15, 2026 | Leadership aligns on metrics and cadence. |
| 2. Baseline measurement | May 2026 | Establish current values for all metrics. Many will require new instrumentation. |
| 3. Dashboard v1 | June 2026 | First dashboard with available data. Manual where automation not ready. |
| 4. First monthly review | July 2026 | First formal review under this framework. |
| 5. Full automation | Q4 2026 | All metrics automated. Manual data entry eliminated. |
## Appendix: What "Good" Looks Like
A mature performance management framework has three properties:

- Clarity. Everyone knows what the targets are and whether we are hitting them. No ambiguity, no "it depends," no "we need to define that."
- Speed. Problems are visible within days, not quarters. The dashboard shows today's truth, not last month's report.
- Action. When a metric goes red, a specific person is accountable, a root cause is identified within a week, and an action plan is in place. Red metrics that stay red for two months without an action plan are a governance failure, not a performance failure.