Get help, check service status, or find answers to common questions. Support requests are triaged by severity, with the following response and resolution targets:
| Severity | Definition | Initial Response | Resolution Target |
|---|---|---|---|
| Critical | Platform or core API fully unavailable; financial operations blocked | < 2 hours | < 8 hours |
| High | Significant degradation; AI outputs delayed or producing incorrect results | < 4 hours | < 24 hours |
| Medium | Non-critical feature unavailable; workaround exists | < 8 hours | 3 business days |
| Low | General questions, configuration guidance, documentation requests | 1 business day | 5 business days |
**How long does deployment take?**

Most single-product integrations go live within 5–14 business days of a signed agreement. This covers API connector configuration, a 14-day model calibration on your historical data, and threshold setup. Full multi-product deployments typically take 3–6 weeks, depending on infrastructure complexity.
**Does our financial data leave our environment?**

No. FRANK products deploy inside your infrastructure. In the default model, FRANK software runs within your environment and no raw financial data is transmitted to FRANK systems. For cloud deployments, all processing occurs within your dedicated VPC. Air-gapped deployments are available for institutions with strict data residency requirements.
**How are AI decisions explained?**

Every FRANK product output includes a structured reasoning trace: which input features contributed most to the decision, what thresholds were applied, and a confidence score. This reasoning is available via the API response or in the web dashboard. The format is designed to satisfy regulatory audit requirements without additional transformation.
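As a rough illustration, a reasoning trace attached to a decision might be consumed like the sketch below. The field names (`reasoning_trace`, `top_features`, `thresholds_applied`) are assumptions for illustration only, not FRANK's documented response schema:

```python
import json

# Illustrative payload: field names are assumptions, not FRANK's
# actual schema. A reasoning trace accompanying a flagged transaction
# could carry feature contributions, thresholds, and a confidence score.
sample_response = json.loads("""
{
  "decision": "flag_for_review",
  "confidence": 0.83,
  "reasoning_trace": {
    "top_features": [
      {"feature": "transaction_velocity_24h", "contribution": 0.41},
      {"feature": "counterparty_risk_score", "contribution": 0.27},
      {"feature": "amount_vs_account_median", "contribution": 0.18}
    ],
    "thresholds_applied": {"flag_threshold": 0.75}
  }
}
""")

# Pull out the pieces an auditor would ask for: the decision,
# the confidence score, and the features that drove it.
trace = sample_response["reasoning_trace"]
drivers = [f["feature"] for f in trace["top_features"]]
print(sample_response["decision"], sample_response["confidence"])
print(drivers)
```

Because the trace is plain structured data, it can be archived alongside the transaction record for audit purposes without any extra transformation step.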
**Which regulatory frameworks do FRANK products support?**

The Compliance Monitor is preconfigured for CBR reporting requirements and FATF recommendations. The Fraud Detection API is aligned with ISO 20022 transaction standards. All products are GDPR-compliant in their data handling. Additional frameworks (Basel III, PCI DSS, IFRS 9) can be configured at onboarding.
**Can we test FRANK before going live?**

Yes. All deployments include a full sandbox environment. You can replay 12 months of historical transaction data through FRANK APIs, inspect outputs, adjust model parameters, and run what-if scenarios before going live. The sandbox remains available throughout the contract term for testing configuration changes.
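A replay run amounts to feeding historical transactions through a scoring endpoint and collecting the outputs for side-by-side inspection. The sketch below shows the shape of that loop; the payload fields and the stand-in scorer are assumptions for illustration (in practice the scorer would call the sandbox API):

```python
from dataclasses import dataclass

@dataclass
class ReplayResult:
    """One scored historical transaction from a sandbox replay."""
    transaction_id: str
    decision: str
    confidence: float

def replay_batch(transactions, score):
    """Replay historical transactions through a scoring callable and
    collect the outputs for inspection or what-if comparison."""
    return [
        ReplayResult(txn["id"], *score(txn))
        for txn in transactions
    ]

# Stand-in scorer so the sketch runs locally; in a real replay this
# callable would submit each transaction to the sandbox endpoint and
# return (decision, confidence) from the response.
def stub_scorer(txn):
    return ("flag", 0.90) if txn["amount"] > 10_000 else ("pass", 0.95)

historical = [
    {"id": "t1", "amount": 250},
    {"id": "t2", "amount": 50_000},
]
results = replay_batch(historical, stub_scorer)
```

Running the same batch twice with different thresholds or model parameters, then diffing the two result sets, is the basic mechanic behind a what-if scenario.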
**What happens when the AI gets something wrong?**

Every output has a configurable human-review gate. Decisions below the confidence threshold are automatically flagged in your review queue — no action is taken automatically on low-confidence outputs. When an incorrect decision is identified, submitting a feedback event through the API recalibrates the relevant model. FRANK support investigates all reported misclassification incidents within SLA.
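A feedback event is a small structured payload tying the original decision to the corrected label. The sketch below shows one plausible shape; the field names and helper are illustrative assumptions, not FRANK's documented API:

```python
import json
from datetime import datetime, timezone

def build_feedback_event(decision_id, correct_label, notes=""):
    """Assemble a feedback payload for a reviewed decision found to be
    incorrect. Field names are illustrative, not FRANK's actual schema."""
    return {
        "decision_id": decision_id,       # ID from the original API response
        "correct_label": correct_label,   # the label a human reviewer assigned
        "reviewer_notes": notes,
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: a transaction flagged as fraud was confirmed legitimate.
event = build_feedback_event(
    "dec_1234",
    "legitimate",
    notes="Confirmed with account holder",
)
payload = json.dumps(event)  # body for the hypothetical feedback endpoint
```

Keying the event to the original `decision_id` lets the reported misclassification be matched to its stored reasoning trace during recalibration and support investigation.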