Hit by Lloyds, Halifax, TSB and Bank of Scotland glitch? Your rights explained
Comments
I may be bucking the trend here but I don't think the article is encouraging compensation claims!
The article exaggerates from the outset, e.g. by calling yesterday's LBG outage of a few hours for some customers / some services a "meltdown". It then goes further to link it with the 2012 RBS group outage that was of entirely different, much larger and graver, proportions. They even state that the RBS one took a month, and that the LBG one took 3 hours. There are about 720 hours in a month. 3 of 720 is about 0.41%. How can anyone possibly seriously throw them into the same pot?
The article then goes on to quote someone who received compensation from NatWest, implying that £150 compensation should be achievable.
Shortly followed by Martin Lewis being quoted as saying that the RBS/NatWest issue has set the 'standard industry practice', and that "if Lloyds fails to do the same, as a bare minimum, you could take it to the Ombudsman." Followed by "When some who had been put in genuinely distressful circumstances spoke to NatWest, it paid compensation in some cases, so it's worth calling Lloyds if that's happened to you".
If none of this is encouraging compensation claims for "distress", I don't know what is. I do agree with innovate's point about more constructive advice on prevention rather than cure, though...
This website is getting worse; it's changed its whole outlook to Money Making Expert.
I like what Martin does most of the time, but this article, while not telling everyone to claim, is basically nudging them in that direction.
In recent years banks and energy firms seem to be the whipping boys for a new generation. I know they have made mistakes, but this agenda is just wrong.
If people were legitimately out of pocket they should get compensation, but this has everyone who wants a few quid basically claiming 'free money'.
Where will this compensation culture end? I never got a pre-release CD on release date the other week. Should I complain to Amazon? My local off-licence didn't have my favourite beer on Friday night; should I show signs of distress?
No, I'm better than that; things happen. There are bigger things in life to worry about.
Chill out.
LBM: spoke to Payplan August 2009, £29,000.00
Current balance: £0.00
Debt free Sept 15
"But after the RBS Group glitch last December, some reported being offered goodwill payments" is rubbish. Banks aren't allowed to offer customers "goodwill payments"; they have to categorise exactly what they're compensating customers for.
Encouraging a rush to compensation for a 3-hour IT glitch is pathetic considering nobody will have been hit financially by what happened. Payment systems haven't caused Direct Debits or Standing Orders to bounce; at worst people will have suffered minor inconvenience/hassle when it comes to paying for services.
DEBT FREE!
Debt free by Xmas 2014: £3555.67/£4805.67 (73.99%)
Debt free by Xmas 2015: £1250/£1250 (100.00%)
Have to fully agree; the claim culture is as bad as America's. MSE should rebrand as Claim For It or Write It Off Expert. Compensation costs all of us and wastes regulators' time and ALL OUR money, blocking genuine complaints.
It's as bad as ambulance chasing, really.
Don't put your trust in an Experian score - it is not a number any bank will ever use & it is generally a waste of money to purchase it. They are also selling you insurance you don't need.
Believe me, the claim culture is extremely high in banks nowadays. I work for one of these banks, and as people were ringing up during the outage they were already quoting all the key words: "embarrassment", "distress", "inconvenience", "out of pocket for call costs". It's clear from the second you speak to them that they're jumping on the compensation bandwagon just because their card was declined in McDonald's.
This of course begs the question of why they had no hot standby for this server, because it is clearly a mission-critical point in the LBG infrastructure. Two possible explanations: incompetent IT management who don't understand disaster recovery, or incompetent business executives who cut the IT budget to the bare bones. Probably a combination of both, and Joe Public is unlikely ever to find out the true cause.
Not always the case. We had a system failure, and the cause was a million-pound-plus part of a new system. It had been looked at from a risk point of view: the predicted mean time to failure was a silly high number, and the business case was that it was not cost effective to hold a spare. It was not something you could hot swap in; even the swap itself meant a day's downtime. Even the vendor advised us not to bother, as the vendor had *never* seen one fail, and vendors usually love to sell stuff.
Yes, it failed. The system went down for two to three days while a replacement was got to site from the vendor, who replaced it for free as they could not understand why it had failed and wanted it back to test.
The point is that every bit of business sense said it was not worth holding a standby, but it still failed and caused issues. It wasn't incompetence on anyone's part; we'd done the due diligence, and the risk was very low, but clearly still there.
Just because it's a server does not mean it's easily hot swappable; three hours sounds like they had backup hardware ready. Having duplicate systems running at once is great when they work together, but it can sometimes make matters worse if one fails, as transactions are not all processed in the right order: some payments may have been entered twice, some not at all, depending on whether they hit a functioning server or not. That could take a lot longer to sort out than a hard failure followed by simply putting a spare in place that then takes a while to catch up.
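One common mitigation for the double-processing problem described above is to tag each instruction with an idempotency key and ignore replays. Below is a minimal, purely illustrative sketch; the names and structures are invented for the example and say nothing about how LBG's systems actually work.

```python
# Illustrative only: deduplicating payment instructions with an idempotency key,
# so a transaction replayed against a second server after a failover is applied
# exactly once. All names (Payment, apply_payment, balances) are made up here.
from dataclasses import dataclass

@dataclass(frozen=True)
class Payment:
    payment_id: str   # unique idempotency key assigned when the instruction is created
    account: str
    amount_pence: int

balances: dict[str, int] = {"acc-1": 10_000}
applied: set[str] = set()  # payment_ids already processed

def apply_payment(p: Payment) -> bool:
    """Apply a payment once; silently ignore replays of the same payment_id."""
    if p.payment_id in applied:
        return False            # duplicate delivery, e.g. replayed after failover
    balances[p.account] -= p.amount_pence
    applied.add(p.payment_id)
    return True

if __name__ == "__main__":
    p = Payment("pay-123", "acc-1", 2_500)
    print(apply_payment(p))  # True  - first delivery debits the account
    print(apply_payment(p))  # False - replay is ignored, balance unchanged
    print(balances)          # {'acc-1': 7500}
```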
Not always the case. We had a system failure, and the cause was a million-pound-plus part of a new system. It had been looked at from a risk point of view: the predicted mean time to failure was a silly high number, and the business case was that it was not cost effective to hold a spare. It was not something you could hot swap in; even the swap itself meant a day's downtime. Even the vendor advised us not to bother, as the vendor had *never* seen one fail, and vendors usually love to sell stuff.
Yes, it failed. The system went down for two to three days while a replacement was got to site from the vendor, who replaced it for free as they could not understand why it had failed and wanted it back to test.
The point is that every bit of business sense said it was not worth holding a standby, but it still failed and caused issues. It wasn't incompetence on anyone's part; we'd done the due diligence, and the risk was very low, but clearly still there.
Just because it's a server does not mean it's easily hot swappable; three hours sounds like they had backup hardware ready. Having duplicate systems running at once is great when they work together, but it can sometimes make matters worse if one fails, as transactions are not all processed in the right order: some payments may have been entered twice, some not at all, depending on whether they hit a functioning server or not. That could take a lot longer to sort out than a hard failure followed by simply putting a spare in place that then takes a while to catch up.
Completely agree with this. It's all very well trying to sound clever by banging the word 'incompetence' around, but at the end of the day a business is just that, a business. It doesn't have a bottomless pit of money to cover every tiny thing that could go wrong; it's just not cost effective and doesn't make much business sense.
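For what it's worth, the "not cost effective to hold a spare" argument above usually comes down to a back-of-envelope expected-value calculation. A rough sketch follows, with entirely hypothetical figures, just to show the shape of the trade-off the posters describe.

```python
# Back-of-envelope spare-vs-downtime comparison. Every number below is invented
# purely to illustrate the calculation; nothing here comes from the posts.
YEARS_IN_SERVICE = 10
HOURS_PER_YEAR   = 24 * 365

mttf_hours        = 500_000      # vendor's predicted mean time to failure
spare_cost        = 1_200_000    # holding a spare "million pound plus" part
downtime_cost_hr  = 50_000       # assumed cost of the system being down, per hour
repair_no_spare_h = 60           # ~2-3 days waiting for the vendor
repair_with_spare = 24           # a day's downtime even with a spare on site

expected_failures = YEARS_IN_SERVICE * HOURS_PER_YEAR / mttf_hours

cost_without_spare = expected_failures * repair_no_spare_h * downtime_cost_hr
cost_with_spare    = spare_cost + expected_failures * repair_with_spare * downtime_cost_hr

print(f"expected failures over life: {expected_failures:.3f}")
print(f"expected downtime cost, no spare: £{cost_without_spare:,.0f}")
print(f"spare + expected downtime cost:   £{cost_with_spare:,.0f}")
```

With figures like these the expected cost of going without a spare is lower, which is exactly the business case described above; the catch, as the posters note, is that a low expected number of failures is not the same as zero failures.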
A 3-hour outage for a mission critical banking system is about 2 hours and 55 minutes too much, and it doesn't cost vast amounts of money to achieve "seven nines", or even just "two nines" availability.
You wouldn't expect an aeroplane to fail for a few hours on the way from London to Tokyo. Or a life support unit in an intensive care unit. Both are of course more important than an ATM network since there is an immediate danger to life, but both are examples of affordable "seven nines" availability.
Thus the excuse that it would be too expensive for a bank to provide the highest availability is exactly that - an excuse.
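For context, here is the standard arithmetic for what a given number of "nines" allows in downtime per year; the figures below are generic availability maths, not anything from the article.

```python
# Downtime allowed per year for a given number of nines of availability.
HOURS_PER_YEAR = 24 * 365

def downtime_hours_per_year(nines: int) -> float:
    availability = 1 - 10 ** (-nines)   # e.g. 5 nines -> 0.99999
    return HOURS_PER_YEAR * (1 - availability)

for n in (2, 3, 5, 7):
    print(f"{n} nines: {downtime_hours_per_year(n):.4f} hours/year")

# 2 nines: 87.6000 hours/year
# 3 nines:  8.7600 hours/year
# 5 nines:  0.0876 hours/year (about 5.3 minutes)
# 7 nines:  0.0009 hours/year (about 3.2 seconds)
```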
Archi_Bald wrote: »Sorry to disagree there.
The article exaggerates from the outset, e.g. by calling yesterday's LBG outage of a few hours for some customers / some services a "meltdown". It then goes further to link it with the 2012 RBS group outage that was of entirely different, much larger and graver, proportions. They even state that the RBS one took a month, and that the LBG one took 3 hours. There are about 720 hours in a month. 3 of 720 is about 0.41%. How can anyone possibly seriously throw them into the same pot?
The article then goes on to quote someone who received compensation from NatWest, implying that £150 compensation should be achievable.
Shortly followed by Martin Lewis being quoted as saying that the RBS/NatWest issue has set the 'standard industry practice', and that "if Lloyds fails to do the same, as a bare minimum, you could take it to the Ombudsman." Followed by "When some who had been put in genuinely distressful circumstances spoke to NatWest, it paid compensation in some cases, so it's worth calling Lloyds if that's happened to you".
If none of this is encouraging compensation claims for "distress", I don't know what is.
I do agree that the RBS comparison is unhelpful, but I suppose I'm maybe putting too much faith in people's ability to differentiate between inconvenience and "genuinely distressful circumstances"! Ultimately, though, customer pressure is what will help banks to raise their game, whether in the form of high-profile complaints or simply by voting with their feet.
Archi_Bald wrote: »A 3-hour outage for a mission critical banking system is about 2 hours and 55 minutes too much, and it doesn't cost vast amounts of money to achieve "seven nines", or even just "two nines" availability.
You wouldn't expect an aeroplane to fail for a few hours on the way from London to Tokyo. Or a life support unit in an intensive care unit. Both are of course more important than an ATM network since there is an immediate danger to life, but both are examples of affordable "seven nines" availability.
Thus the excuse that it would be too expensive for a bank to provide the highest availability is exactly that - an excuse.
The trick is to eliminate single points of failure - but even then failure can still occur. Imagine a component breaks and the system fails over to a redundant component, but then the redundant component breaks before the primary is repaired or replaced.
Why would it not cost vast amounts to achieve high availability? Keep in mind that your two examples are on a much smaller scale.
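The scenario described above, where the spare dies before the primary is repaired, is what the usual parallel-availability formula models. A rough sketch, assuming independent failures (a big assumption in practice) and hypothetical MTTF/MTTR figures:

```python
# Availability of one component vs. a redundant pair, assuming failures are
# independent - correlated failures (shared power, shared software bug) are
# exactly what breaks this model in real systems.
mttf_hours = 4_000   # hypothetical mean time to failure of one component
mttr_hours = 24      # hypothetical mean time to repair/replace it

a_single = mttf_hours / (mttf_hours + mttr_hours)   # steady-state availability
a_pair   = 1 - (1 - a_single) ** 2                  # at least one of two up

hours_per_year = 24 * 365
print(f"single component: {a_single:.5f} -> {(1 - a_single) * hours_per_year:.1f} h/year down")
print(f"redundant pair:   {a_pair:.7f} -> {(1 - a_pair) * hours_per_year:.3f} h/year down")
```

The pair looks far better on paper, but the improvement only holds while the failures really are independent and the broken unit is repaired before its twin also fails.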
This discussion has been closed.