A follow-up incident occurred after the previous DB2 database issue, causing long processing times for services attempting to connect to the faulty database instance.
Symptoms
- Internal services experienced long response times and delays in processing workflows.
- External users faced significant delays and timeouts on several services.
- Some services timed out or suffered degraded performance.
- Internal users had issues creating login sessions.
Impact
- External Users: Delays in critical operations, such as loan creation, disrupted user workflows and could cause timeouts on the client side.
- Internal Operations: Transaction backlogs and delayed automated processes caused operational strain.
Root Cause
The infrastructure provider introduced changes to the previously faulty patch, which caused the faulty DB2 instance to stall. As a result, services connected to the instance encountered persistent errors, transaction rollbacks, and connection issues.
Resolution
- All network traffic was redirected to the healthy DB2 database instance (a rough illustration of the same idea follows below).
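The redirection itself was performed by the infrastructure provider at the network level. As a rough illustration of the same idea at the application layer only, the following is a minimal sketch of a connect-with-fallback helper. The host names, database name, and timeout are hypothetical, and it assumes the IBM Data Server Driver for JDBC is on the classpath; it is not the mechanism that was actually used.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

/**
 * Minimal failover sketch: try the primary DB2 instance first and
 * fall back to the healthy standby when the primary is unreachable.
 * Host names, database name, and timeout are hypothetical.
 */
public class Db2FailoverConnector {

    // Hypothetical JDBC URLs for the faulty primary and healthy standby.
    private static final String PRIMARY_URL =
            "jdbc:db2://db2-primary.example.internal:50000/APPDB";
    private static final String STANDBY_URL =
            "jdbc:db2://db2-standby.example.internal:50000/APPDB";

    public static Connection connect(String user, String password) throws SQLException {
        // Fail fast instead of waiting on a stalled instance.
        DriverManager.setLoginTimeout(5); // seconds
        try {
            return DriverManager.getConnection(PRIMARY_URL, user, password);
        } catch (SQLException primaryFailure) {
            // Redirect the connection attempt to the healthy instance.
            return DriverManager.getConnection(STANDBY_URL, user, password);
        }
    }
}
```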
Actions
- The fallback plan from the previous issue will be activated if a single DB2 database instance cannot handle the current load (see the health-check sketch after this list).
- The root cause analysis and problem management case from the previous incident will continue.
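The exact trigger for activating the fallback plan is an operational decision; as a minimal sketch of how such a check might look, the probe below runs a trivial query against a DB2 instance and treats a failure, timeout, or slow response as unhealthy. The JDBC URL and latency threshold are hypothetical; SYSIBM.SYSDUMMY1 is DB2's standard one-row dummy table.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Minimal health/load probe sketch: run a trivial query against a DB2
 * instance and report whether it answers within an acceptable time.
 * The URL, credentials, and thresholds are hypothetical.
 */
public class Db2HealthProbe {

    private static final String URL =
            "jdbc:db2://db2-primary.example.internal:50000/APPDB";
    private static final long MAX_LATENCY_MS = 2_000; // hypothetical threshold

    public static boolean isHealthy(String user, String password) {
        long start = System.currentTimeMillis();
        try (Connection conn = DriverManager.getConnection(URL, user, password);
             Statement stmt = conn.createStatement()) {
            stmt.setQueryTimeout(5); // abort instead of hanging on a stalled instance
            try (ResultSet rs = stmt.executeQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1")) {
                rs.next();
            }
            return (System.currentTimeMillis() - start) <= MAX_LATENCY_MS;
        } catch (Exception e) {
            return false; // connection failure or timeout: treat as unhealthy
        }
    }
}
```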
We apologise for the disruption caused by this incident. Steps are underway to improve our processes and collaboration with the infrastructure provider to prevent recurrence. For further questions, please contact us.