Resolved -
This incident has been resolved. All services are operational and functioning at nominal levels.
Mar 25, 01:35 UTC
Update -
The backlog continues to be processed; at this time, over 75% of the backlogged data has been successfully processed.
Mar 24, 20:35 UTC
Update -
The backlog continues to be processed; at this time, over 50% of the backlogged data has been successfully processed. The root cause was a sudden, abnormal influx of data into the processing queue that quickly overwhelmed the service. We have identified and mitigated the source of the increased volume, and we have scaled out the service to maximize processing power.
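For illustration only: the update does not name the queue technology or the service involved, but "scaling out" a queue consumer typically means raising the number of parallel workers draining the backlog. A minimal Python sketch, using an in-memory queue as a hypothetical stand-in for the real processing queue:

```python
import concurrent.futures
import queue

# Hypothetical stand-in for the real processing queue; the actual
# service and queue technology are not named in this incident.
backlog = queue.Queue()
for item in range(10_000):
    backlog.put(item)

def worker(worker_id: int) -> int:
    """Drain items until the queue is empty; return the count processed."""
    processed = 0
    while True:
        try:
            item = backlog.get_nowait()
        except queue.Empty:
            return processed
        # ... real work (e.g., forwarding a share to ServiceNow) goes here ...
        processed += 1

# "Scaling out" here is simply raising the worker count so the backlog
# drains in parallel instead of through a single consumer.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    totals = list(pool.map(worker, range(8)))

print(f"processed {sum(totals)} items across {len(totals)} workers")
```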
Mar 24, 18:30 UTC
Update -
We are still processing the backlog and monitoring its progress.
Mar 24, 16:29 UTC
Update -
We have configured an additional trigger for the queue. We are still processing the backlog.
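As a sketch of what "configuring an additional trigger" can look like in practice: the update does not say which platform is in use, but assuming a cloud queue with function-based consumers (here AWS SQS and Lambda via boto3, with hypothetical resource names), a queue trigger is an event source mapping that pulls batches from the queue into the processing function:

```python
import boto3

# Hypothetical names: the incident does not identify the cloud,
# queue, or function service. This assumes AWS SQS + Lambda.
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:share-backlog"
FUNCTION = "process-servicenow-shares"

lambda_client = boto3.client("lambda")

# A queue "trigger" in this setup is an event source mapping that
# connects the backlog queue to the processing function, so batches
# of messages are polled and handed to the function concurrently.
response = lambda_client.create_event_source_mapping(
    EventSourceArn=QUEUE_ARN,
    FunctionName=FUNCTION,
    BatchSize=10,  # messages delivered per invocation
)
print(response["UUID"], response["State"])
```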
Mar 24, 15:33 UTC
Update -
The backlog continues to decrease and is still being processed.
Mar 24, 15:04 UTC
Update -
Our queue has reached its peak. The number of items left to process is now decreasing, but this will still take some time.
Mar 24, 14:23 UTC
Identified -
The issue has been identified and we are currently processing the backlog.
Mar 24, 13:50 UTC
Investigating -
We are currently investigating an issue with shares to ServiceNow.
Mar 24, 13:49 UTC