Summary
On 12 January 2026, Visualcare experienced a Priority 0 (P0) service incident affecting the Mobile API and related services. The incident was triggered by a sudden and sustained surge of external request traffic, which placed unexpected pressure on backend systems and led to service unavailability.
Core services were restored by 1:35pm AEDT, with degraded performance continuing until 2:25pm AEDT, after which normal service levels were fully re-established.
Incident Classification
- Priority: P0
- Detected: 12:45pm AEDT
- Declared: 12:47pm AEDT
- Stable: 1:35pm AEDT
- Normal: 2:25pm AEDT
Customer Impact
During the incident window:
- The Mobile API was unavailable or intermittently unresponsive
- Some customers experienced timeouts or slow responses in connected Visualcare services
- During the recovery phase, services were available but may have exhibited degraded performance
There was no data loss, no unauthorised access, and no impact to data integrity.
What Happened
The incident was caused by a rapid increase in external request volume directed at the Mobile API. The traffic pattern resulted in a significantly higher number of concurrent requests than typically observed.
As request volume increased, backend processing slowed, and active requests accumulated faster than they could be completed. This led to temporary resource saturation and prevented the system from efficiently accepting or completing new requests.
An initial service restart did not immediately restore normal service. Additional controls were subsequently applied to support stable request handling during high traffic conditions, after which the system recovered.
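The report does not specify which controls were applied. One common pattern for stabilising request handling under sustained high traffic is a hard concurrency cap that sheds excess requests quickly instead of letting them queue, since unbounded queueing is what allows active requests to accumulate faster than they complete. The sketch below is illustrative only (all names are hypothetical, not Visualcare's implementation):

```python
import threading

class ConcurrencyLimiter:
    """Reject requests beyond a fixed concurrency cap instead of queueing them.

    Failing fast keeps the backlog bounded, so the backend keeps completing
    the requests it has already accepted.
    """

    def __init__(self, max_concurrent: int):
        self._slots = threading.Semaphore(max_concurrent)

    def handle(self, request, backend):
        # Non-blocking acquire: if no slot is free, return 503 immediately
        # rather than adding the request to the backlog.
        if not self._slots.acquire(blocking=False):
            return (503, "Service busy, please retry")
        try:
            return (200, backend(request))
        finally:
            self._slots.release()


# Usage: with a cap of 2, a third simultaneous request is shed with a 503.
limiter = ConcurrencyLimiter(max_concurrent=2)
status, body = limiter.handle("req-1", backend=lambda r: f"ok:{r}")
```

The design choice here is load shedding over queueing: a fast 503 lets clients retry once traffic subsides, whereas queued requests hold resources and slow every in-flight request down.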
Detection
The issue was detected through a combination of:
- Internal monitoring indicating elevated load and degraded responsiveness
- Customer reports of service unavailability
The incident was escalated and formally declared a P0 once widespread impact was confirmed.
Resolution
Service recovery occurred in two phases:
- Stabilisation (by 1:35pm AEDT)
  - Protective request-handling controls were applied
  - Services were restarted in a controlled manner
  - Core functionality was restored and customer access resumed
- Performance Recovery (1:35pm–2:25pm AEDT)
  - Elevated traffic gradually subsided
  - System performance progressively returned to normal levels
No manual intervention was required for downstream systems once stability was achieved.
Timeline (AEDT)
- 12:35pm – Elevated external request volume begins impacting service responsiveness
- 12:45pm–12:50pm – Mobile API becomes unavailable or severely degraded
- 12:47pm – Incident declared P0
- 12:50pm – Initial restart attempted; elevated traffic persists
- 1:30pm – Additional protective controls applied to manage request load
- 1:35pm – Core services restored (start of degraded performance window)
- 2:25pm – Traffic normalises; full service performance restored; P0 cleared
Root Cause
A sudden and sustained surge of external requests placed unexpected load on the Mobile API, leading to temporary saturation of request processing capacity. This prevented the system from handling new requests efficiently until traffic was regulated and services were stabilised.
Preventative Actions
To reduce the risk and impact of similar events in the future, we are implementing the following improvements:
- Enhanced controls to better regulate and absorb sudden spikes in request traffic
- Improved monitoring and alerting to detect abnormal traffic patterns earlier
- Additional safeguards to ensure services recover more quickly under extreme load
These actions are actively being tracked through our internal delivery process.
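As an illustration of the improved traffic-pattern detection mentioned above, one simple approach is to flag any minute whose request count far exceeds a rolling baseline of recent minutes. The sketch below is a hypothetical example of that idea; the threshold and window values are placeholders, not Visualcare's actual alerting configuration:

```python
from collections import deque

def traffic_alerts(per_minute_counts, window=10, spike_factor=3.0, min_baseline=1.0):
    """Return indices of minutes whose request count exceeds
    spike_factor x the rolling average of the preceding `window` minutes.

    min_baseline suppresses alerts when there is too little traffic
    for the average to be meaningful.
    """
    baseline = deque(maxlen=window)  # counts from the most recent minutes
    alerts = []
    for i, count in enumerate(per_minute_counts):
        avg = sum(baseline) / len(baseline) if baseline else None
        if avg is not None and avg >= min_baseline and count > spike_factor * avg:
            alerts.append(i)
        baseline.append(count)
    return alerts

# Normal traffic around 100 requests/minute, then a sudden sustained surge:
counts = [100, 104, 98, 101, 99, 450, 900]
surge_minutes = traffic_alerts(counts)  # flags the surge minutes: [5, 6]
```

A rolling baseline adapts to gradual organic growth while still firing early on a sudden, sustained spike of the kind described in this incident.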
Closing
We recognise the operational impact this incident may have caused and appreciate your patience.
Visualcare continues to invest in platform resilience and protection to ensure reliable service, even during abnormal traffic conditions.
If you have any questions or would like further clarification, please contact your Customer Success Manager, or the Head of Customer Success, Maddie Hayes (mhayes@visualcare.com.au).