Pension Dashboard
Comments
-
I still think it is doomed to irrelevance, or to the prolonged non-arrival of anything more than a marginally useful skeletal form.
Confusion over incentives, policy, functionality and data practicality stands a good ongoing chance of derailing it.
So a "sunk cost" face saving basic delivery is likely to crawl out but then be starved of further investment due to a lack of takeup. Takeup hobbled due to it being a chocolate fireguard from a consumer viewpoint due to incomplete coverage of schemes or via functionality being unavailable to many customers due to absent data/rules/interlock for more than the simplest pensions scenarios (i.e. the people it is least useful to anyway). This design necessity in turn being based on avoiding the data complexity and only loading the most basic summary info.
As an example - say you import only very high-level summary data from each scheme for each identifiable member and match those folk up with NINO records and gov gateway IDs. Great - this simplifies the scheme load and reporting burden, and the data management issue is simpler than a full cleaned-up load. And if you make this an annual reload then it doesn't need to work out centrally what rules apply - just reporting on an "as of" date a way behind. But then it won't be able to do very much, as it won't know enough about contributions, timing, scheme-specific rules and what options exist to be able to fully interpret them. Or you could try to normalise data conventions and rules and load the lot. Good luck with that.
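To make the gap concrete, here's a rough sketch (field names entirely hypothetical, Python just for illustration) of the difference between a cheap summary load and what central interpretation would actually need:

```python
from dataclasses import dataclass, field
from datetime import date
from decimal import Decimal

@dataclass
class SummaryRecord:
    # Cheap annual snapshot: simple for schemes to produce and load,
    # but always "as of" a reporting date that may be a year behind.
    nino: str
    scheme_ref: str
    as_of: date
    current_value: Decimal  # DC pot value or DB annual pension

@dataclass
class FullRecord(SummaryRecord):
    # What interpreting options centrally would actually need - and
    # what makes a full, normalised load across every scheme so daunting.
    contributions: list[tuple[date, Decimal]] = field(default_factory=list)
    normal_retirement_age: int = 65
    early_retirement_factors: dict[int, Decimal] = field(default_factory=dict)
    indexation_rules: str = ""  # scheme-specific, rarely uniform
    guarantees: list[str] = field(default_factory=list)  # GARs, GMPs, with-profits terms...
```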
The combinational complexity of loading the full records for every scheme ever that is still relevant to the living demographic is daunting indeed. And applying more complex projections atop this badly converted hot mess of data is a recipe for terrible errors and misdirection. Testing it properly (with real data) would be a nightmare on stilts. Coming up with the same answer as existing projections (for the regulated assumptions) is also likely to be a requirement, to avoid the confusion and embarrassment caused by telling the same person two different numbers.
So if I had to guess - it will fall back to summary data only, limited adding up and projection (so as not to contradict per-scheme regulated projections), and eased upload requirements on old paper-record schemes. Functionality thereby limited to simply adding up what you can get anyway from each scheme, based on assumptions that likely don't match your retirement intentions. Whoopy do. And maybe they'll let us play with a few future-assumptions sliders if they are feeling a bit braver.
From experience - it's bad enough doing a single-scheme web calculator and a regulated-assumptions document generator, and then spending money to keep it current (and tested for ongoing correctness) against changes to pension rules and/or personal tax (if they have been smoking some amazing stuff). The value to the consumer (or tax-collecting sponsor) is low - but the cost of doing it and keeping it correct is too high for the value delivered.
Some single-scheme tools still aren't sorry that they don't support LTA rules many years after its introduction (and the subsequent many sets of changes, certificates etc.). The support offered is to tell you the wrong value for available tax-free cash (25% of a too-large number) and then have a footnote saying - I paraphrase - "we don't allow for LTA rules so this may be all crap". At least they added the footnote to the web page. It's a start. Before, it was just silently wrong.
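That failure is essentially one missing cap in the arithmetic. A hedged sketch, assuming the pre-abolition standard LTA of £1,073,100, no protection certificates, and none of the other wrinkles real cases have:

```python
from decimal import Decimal

LTA = Decimal("1073100")  # pre-abolition standard Lifetime Allowance

def naive_tax_free_cash(pot: Decimal) -> Decimal:
    # What the broken calculators do: 25% of a too-large number.
    return pot * Decimal("0.25")

def capped_tax_free_cash(pot: Decimal) -> Decimal:
    # The missing cap: tax-free cash limited to 25% of the LTA.
    return min(pot, LTA) * Decimal("0.25")

pot = Decimal("1500000")
print(naive_tax_free_cash(pot))   # 375000.00 - silently wrong
print(capped_tax_free_cash(pot))  # 268275.00 - capped at 25% of the LTA
```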
Pensions is just a world of edge cases. Segments, LTA, with-profits, terminal bonuses, GARs, GMPs, adjustments for taking early or late, differing indexation for different portions within the same benefits, etc. etc. etc. And that's before you get to the fact it's not a tracing service: the fact that an old, very small pension enrolment is NOT there doesn't mean it doesn't exist. Until you are certain they are all loaded (conversion announced as complete), and it's been reconciled with contracted-out NI records to double-check that's broadly true, then who knows.
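A quick back-of-envelope illustration of why those edge cases hurt so much - if each one can independently apply to a benefit tranche, the test matrix explodes:

```python
# If each feature below can independently be present or absent on a
# benefit tranche, the combinations to handle grow as 2^n - before
# even considering interactions between them.
edge_cases = [
    "segments", "LTA", "with_profits", "terminal_bonus",
    "GAR", "GMP", "early_late_adjustment", "mixed_indexation",
]
print(2 ** len(edge_cases))  # 256 combinations per tranche
```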
Even the state bits have hidden depths. Get started on reciprocal agreements with overseas pension benefits. That's another whole shelf full of binders (in a room without directional signage and a "beware of the leopard" sign on the door) somewhere inside The Pension Service @ DWP.
-
We live in a world where a well-known pension administrator muddled up the reference numbers between my DB and DC schemes (I had to tell them the problem), transferred the incorrect amount of my DC scheme, and the whole thing took over six months from start to finish. I wouldn't hold my breath for a one-stop shop just yet.
-
Marcon said:
jamesd said:
The dashboard is still happening, currently expected some time in 2023.
-
Perfection is nice, but simply saying "this firm has a pension record that might or might not refer to you based on name or NINO or DOB similarity, please contact them for further information to confirm or eliminate this one" would be very useful for tracing lost pensions. Name, DOB and NINO isn't asking a lot of even amateurishly run schemes, particularly if missing elements are allowed, like no NINO. Location of workplace or name of company might give a further clue. It's this handling of incomplete information about possibly lost pensions that offers some of the greatest promise.
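Something like this, perhaps - a minimal sketch (names, weights and threshold all hypothetical) of matching where missing elements contribute no evidence rather than counting against a record:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SchemeMember:
    name: Optional[str] = None
    dob: Optional[date] = None
    nino: Optional[str] = None

def match_score(record: SchemeMember, person: SchemeMember) -> int:
    """Count how many supplied fields agree; a missing field simply
    contributes no evidence either way."""
    score = 0
    if record.nino and person.nino and record.nino == person.nino:
        score += 2  # NINO is the strongest signal
    if record.dob and person.dob and record.dob == person.dob:
        score += 1
    if record.name and person.name and record.name.lower() == person.name.lower():
        score += 1
    return score

def possible_matches(records: list[SchemeMember], person: SchemeMember,
                     threshold: int = 2) -> list[SchemeMember]:
    # "Might or might not refer to you - please contact the scheme."
    return [r for r in records if match_score(r, person) >= threshold]
```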
-
A sensible suggestion.
A "tracing objective" for such a site (a raw list) - is different from an adding up or "a clever single integrated view + guidance thereon". One is less ambitious and somewhat easier than the other but a potential self service improvement to current tracing (at least once the data set is big and complete enough. But it forecloses on more complex adding up features and that would need to be understood. Which is where the danger lies. As @jamesd suggests there are ways to make loading upstream dodgy information more or less strict. And limiting what is offered based on the uploaded information to make this more practical if you choose "less strict".
But my oh my, there are a lot of "wreck here" and hazard-marker buoys to be found around these rocks. Going slack on validation for upload and then changing your mind on what features to offer - to increase uptake. Crunch. Glug glug glug. Down you go.
The horns of a dilemma.
When there is strict record rejection (i.e. validation on upload), you can easily get an *enormous* pile of rejected records to be recycled, cleaned and resubmitted - i.e. business exceptions. This can be an unpleasant operational surprise - reputation, cost, delays etc. - if the project hasn't taken even basic precautions ahead of time, or test findings are ignored. Happens more often than you would think. And as night follows day, there is a load of whining about incorrect and incomplete consumer views for a percentage of users (because a lot wasn't loaded), and also from the participant organisations about the cost of wading through data now exposed all at once as dirty, rather than quietly dealing with their own mess cohort by cohort, consumer by consumer. All of this chaos is *clearly* the fault of the "new system", rather than just the exposure of the state of the previously crappily managed information.
Ah, you say - you can simply "relax" the validation and let the data in, good or bad. Partial success - now the pile of business exceptions is *much* reduced in size, but the partly missing or corrupt data has now polluted the new database, and the new dashboard functionality has to cope with all the possible combinational complexity across a consumer's schemes - whatever dreck has been dropped on it. This often goes quite poorly, requiring as it does the developer to code extremely defensively across the full data set for practically anything to be present or missing, the wrong type entirely, null, the wrong format, a duplicate where it should not be, etc. etc. etc. And then do what, exactly, if the ingredients for the desired cake are missing?
If the intent is to do more than just display the existence of a record - "you have records with scheme X" - this gets ugly fast, and heads towards expensively untestable faster. So in many situations the second route is in fact the worse choice. It's better to detect and size the problem, sample/practise/clean up and manage down the "error" rate, and keep the validation stricter and the downstream complexity thereby lower - and more cost-effectively testable. A balance needs to be found, as the sketch below illustrates.
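A minimal sketch of the two upload policies being contrasted (field names and required fields entirely hypothetical):

```python
from dataclasses import dataclass

@dataclass
class UploadResult:
    accepted: list[dict]
    rejected: list[tuple[dict, str]]  # record plus the reason it bounced

REQUIRED = ("nino", "scheme_ref", "dob")  # hypothetical minimum fields

def load(records: list[dict], strict: bool) -> UploadResult:
    accepted, rejected = [], []
    for rec in records:
        missing = [f for f in REQUIRED if not rec.get(f)]
        if missing and strict:
            # Strict: a clean database, but potentially an enormous
            # pile of business exceptions to recycle and resubmit.
            rejected.append((rec, "missing: " + ", ".join(missing)))
        else:
            # Lenient: far fewer exceptions, but every downstream
            # feature must now code defensively for whatever got in.
            accepted.append(rec)
    return UploadResult(accepted, rejected)
```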
This is a long-running "feature" of digital and IT project management generally. For all brownfield work, data migration is very often capable of becoming the critical path of the whole exercise. Implication: the scoping, sizing and design of this needs to be started early on. Yet many times it is mobilised quite late, resulting in a "surprise" delay to the delivery and launch of the initiative. Even if you know and buy into the planning heuristic "mobilise data conversion early" a priori, it can still be difficult to get the right resources and traction on it. It is in the category of specialised to resource, a bit dull, and important yet *apparently* less urgent to stakeholders - compared to arguing about money, about what font and colour it should be with marketing, and about what features are to be allowed at all with the internal or external regulatory overseers.
-
In a system like this one there can be combinations of high-confidence and low-confidence data. High for DC and DB schemes with good, professional management. Low for small schemes, or others declaring uncertainty, or uncertainty about certain parts of their data, down to individual record level. For the highly uncertain cases you resort to telling humans to check entitlement, with a view to their records becoming more precise over time as the human interventions clean things up.
This stuff can't be ignored by pension schemes, because ultimately they do have to conduct tracing exercises. By alerting potential beneficiaries, a dashboard could make the tracing cheaper and more effective. But rate or age limits might be applied to keep the workload manageable for small schemes, or even for any scheme initially, to avoid a flood of requests.
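Something like this sketch, perhaps (names hypothetical): carry a per-record confidence flag and route low-confidence entries to a human check rather than showing a number:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Confidence(Enum):
    HIGH = "high"  # professionally run DB/DC scheme, clean feed
    LOW = "low"    # small scheme, or one declaring its own uncertainty

@dataclass
class DashboardEntry:
    scheme_ref: str
    confidence: Confidence
    value: Optional[float] = None  # only shown when confidence is HIGH

def present(entry: DashboardEntry) -> str:
    if entry.confidence is Confidence.HIGH and entry.value is not None:
        return f"{entry.scheme_ref}: {entry.value:,.2f}"
    # Low confidence: no number - just a prompt, so the record gets
    # cleaned up over time as humans confirm or eliminate it.
    return f"{entry.scheme_ref}: please check your entitlement with the scheme"
```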
Back when I was writing accounting software for ship owners I at least had the luxury of being able to do up front data validation. Though not up front fraud checks...
-
Silvertabby said:
jamesd said:
The dashboard is still happening, currently expected some time in 2023.
I am going to revisit the page in a month's time to see if the date states 30 November 2026.

Personal Responsibility - Sad but True
Sometimes.... I am like a dog with a bone