
A.I. Check


Comments

  • Universidad
    Universidad Posts: 446 Forumite
    Third Anniversary 100 Posts Name Dropper
    edited 17 December at 11:40PM
    Nebulous2 said:

    The level of detail would have taken me hours of research and adjustment, and I'd probably have compromised on some of it. 
    This kind of thing is one of the best use-cases of current LLMs. 
    Their ability to properly contextualise human language queries is - no joke - better than most people's. When people listen to you asking a question they bring their own preconceptions to the table, and they answer the question they think they've heard, based on their assumptions. (There's solid research on this - read "Thinking Fast and Slow", one of the greatest books ever written about the way people think).
    The way they then "fetch" information for you is a technical marvel and no mistake, but that is not the way that people do it, and they can make stuff up out of whole cloth, or give you popular-but-wrong answers. When people call them "just advanced text-prediction", they're being a little unkind, but this is the bit where that's most apparent.
    So the two best use cases right now, for my money, are: 
    a) Letting you know about things you otherwise would not have thought of, including some degree of planning, 
    b) Rapid prototyping of ideas and, to some extent, expanding on them.
    The holiday planning falls into box a). The pension planning thing falls into box b).
    The main difference between box a and box b, from my perspective, is that box b requires you to have a good understanding of the subject matter before you begin. It can make some jobs 100 times faster, but if you don't know what you're looking for in the first place you're putting yourself in a risky position.
  • GenX0212
    GenX0212 Posts: 239 Forumite
    100 Posts First Anniversary Name Dropper
    In order to reproduce your findings, I entered the following prompts one at a time, and CoPilot produced the table shown below. I've not verified the numbers (at a glance they look more sensible), as all I wanted to do here was make the point that AI models can only answer the questions asked of them, which makes how those questions are asked quite important.

    Q1: I am 57 years old and am about to retire. I have a pension pot of £650k. I want to draw a post tax income of £50k per annum. Create a table showing age in column 1, from 57 to 100, and the estimated size of the pension pot in column 2.
    [At this point CoPilot explained that it had assumed 0% growth in its output.]
    Q2: Add columns for growth rates of 3%, 5% and 10%
    Q3: For each of the growth rate columns, add a column where in the first year there is a market downturn resulting in the pension pot dropping by 30%
    [CoPilot produced a partial table for ages 57 to 70 explaining that it did so for clarity.]
    Q4: Do the same table, but show only for ages 57, 60, 67, 75, 80, 86, 90 and 100.


    Agreed, how you ask the questions is critical. Your input was probably simpler than mine, though, as I also asked it to consider my DB pension, SP and other income. I have also found that, when you build a scenario over several questions/steps, it can sometimes forget some of the earlier input.
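    For anyone who wants to sanity-check that kind of table rather than take CoPilot's output on trust, the arithmetic behind those four prompts is simple enough to reproduce directly. Below is a minimal Python sketch under stated assumptions, not what CoPilot actually ran: the £50k is treated as a flat annual withdrawal with tax and inflation ignored, the withdrawal comes out at the start of each year, growth is applied to whatever remains, and the 30% downturn is applied once in the first year. The ordering of withdrawal, crash and growth is a modelling choice the prompts leave open, so the numbers will differ slightly from any one AI answer.

    # A minimal, assumption-laden sketch of the drawdown arithmetic in the
    # prompts above -- NOT what CoPilot actually computed. Assumptions: flat
    # £50k annual withdrawal (tax and inflation ignored), taken at the start
    # of each year; growth then applies to the remainder; the 30% downturn
    # hits once, in the first year, after that year's withdrawal.

    START_AGE = 57
    END_AGE = 100
    START_POT = 650_000
    ANNUAL_DRAW = 50_000
    GROWTH_RATES = [0.00, 0.03, 0.05, 0.10]
    FIRST_YEAR_CRASH = 0.30  # one-off 30% drop in the first year


    def project(pot, rate, crash=None):
        """Return {age: pot at the start of that year} for one scenario."""
        balances = {}
        for age in range(START_AGE, END_AGE + 1):
            balances[age] = round(pot)
            pot = max(pot - ANNUAL_DRAW, 0)   # take the year's income
            if crash and age == START_AGE:
                pot *= (1 - crash)            # one-off market downturn
            pot *= (1 + rate)                 # growth on what remains
        return balances


    # One pair of columns per growth rate: normal, and with the first-year crash.
    scenarios = {}
    for rate in GROWTH_RATES:
        scenarios[f"{rate:.0%}"] = project(START_POT, rate)
        scenarios[f"{rate:.0%} +crash"] = project(START_POT, rate, FIRST_YEAR_CRASH)

    ages_to_show = [57, 60, 67, 75, 80, 86, 90, 100]
    print(f"{'Age':<5}" + "  ".join(f"{name:>12}" for name in scenarios))
    for age in ages_to_show:
        print(f"{age:<5}" + "  ".join(f"{scenarios[name][age]:>12,}" for name in scenarios))

    Running it prints the pot at the start of each requested age for all eight scenarios; moving the crash before or after the withdrawal in project() is a one-line change if you want to see how sensitive the result is to that assumption.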
  • ali_bear
    ali_bear Posts: 518 Forumite
    500 Posts Third Anniversary Photogenic Name Dropper
    Nebulous2 said:

    The level of detail would have taken me hours of research and adjustment, and I'd probably have compromised on some of it. 
    This kind of thing is one of the best use-cases of current LLMs. 
    Their ability to properly contextualise human language queries is - no joke - better than most people's. When people listen to you asking a question they bring their own preconceptions to the table, and they answer the question they think they've heard, based on their assumptions. (There's solid research on this - read "Thinking Fast and Slow", one of the greatest books ever written about the way people think).
    The way they then "fetch" information for you is a technical marvel and no mistake, but that is not the way that people do it, and they can make stuff up out of whole cloth, or give you popular-but-wrong answers. When people call them "just advanced text-prediction", they're being a little unkind, but this is the bit where that's most apparent.
    [...]

    That is a good description, and you are right that people bring all of their internal biases to any discussion, limiting what they hear and shaping their response. But there are two points I would pick you up on:

    It is not being unkind, and anyway these constructs do not have feelings to hurt, so I think it is perfectly reasonable to be purely honest about what they are. 

    It is not "just advanced text-prediction" - it is fancy predictive data, and nothing more. 
    A little FIRE lights the cigar
  • michaels
    michaels Posts: 29,374 Forumite
    Part of the Furniture 10,000 Posts Photogenic Name Dropper
    edited 18 December at 9:08AM
    Off topic but am I the only one who uses please and thank you when addressing LLMs?  And am polite when pointing out errors and asking them to rework!
    I think....
  • westv
    westv Posts: 6,579 Forumite
    Part of the Furniture 1,000 Posts Name Dropper
    michaels said:
    Off topic but am I the only one who uses please and thank you when addressing LLMs?  And am polite when pointing out errors and asking them to rework!
    Apparently being polite wastes a lot of data.
  • michaels
    michaels Posts: 29,374 Forumite
    Part of the Furniture 10,000 Posts Photogenic Name Dropper
    westv said:
    michaels said:
    Off topic but am I the only one who uses please and thank you when addressing LLMs?  And am polite when pointing out errors and asking them to rework!
    Apparently being polite wastes a lot of data.
    Is that for AI models or just for some forum posters?!
    I think....
  • sgx2000
    sgx2000 Posts: 555 Forumite
    Fourth Anniversary 100 Posts Name Dropper
    michaels said:
    westv said:
    michaels said:
    Off topic but am I the only one who uses please and thank you when addressing LLMs?  And am polite when pointing out errors and asking them to rework!
    Apparently being polite wastes a lot of data.
    Is that for AI models or just for some forum posters?!
    Lol........
  • LHW99
    LHW99 Posts: 5,482 Forumite
    Part of the Furniture 1,000 Posts Photogenic Name Dropper
    I have to say, when searching Google, the AI summary is rarely very helpful. Truthfully, neither are the search results themselves these days. Even using advanced search, I often find the results ignore instructions (e.g. results without "xxyyzz") - it must be what I search for :) - usually some sort of more obscure science!
  • ali_bear
    ali_bear Posts: 518 Forumite
    500 Posts Third Anniversary Photogenic Name Dropper
    What happens when you ask ChatGPT "Which horse is most likely to win the 3.30 at Haydock Park?" 
    A little FIRE lights the cigar