
Parking Code of Practice Consultation 2025 - now let's see what happens


Comments

  • h2g2
    h2g2 Posts: 259 Forumite
    Third Anniversary 100 Posts Photogenic Name Dropper
    edited 9 December at 10:55AM
    Protest said:
    AI does seem to have some practical weaknesses.

    Getting into something I do have some professional expertise in: this isn't something I'd trust AI with at all. Or, more accurately, LLMs: I prefer the term "LLM" (Large Language Model) over "AI" because it describes what these tools actually do.

    What they do, approximately, is convert the question into a bunch of numerical tokens and parameters and then, based on the data they were "trained" on, work out a bunch of words that look like they should be an answer.
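    To make that concrete, here's a toy sketch - my own illustration, not how a real LLM is built (real models use learned neural weights over subword tokens, not raw word counts, and the mini-corpus below is invented). It just shows the shape of the idea: turn text into tokens, then emit whichever tokens look most likely to follow.

    ```python
    from collections import Counter, defaultdict

    # Invented mini-corpus purely for illustration.
    corpus = (
        "the parking charge was cancelled on appeal . "
        "the parking charge was paid late . "
        "the appeal was successful ."
    )

    tokens = corpus.split()  # crude "tokenisation" by whitespace

    # Count which token tends to follow each token (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        following[prev][nxt] += 1

    def continue_text(prompt_word, length=5):
        """Greedily emit the most frequent continuation: words that *look*
        like they should be an answer, with no notion of truth."""
        out = [prompt_word]
        for _ in range(length):
            counts = following.get(out[-1])
            if not counts:
                break
            out.append(counts.most_common(1)[0][0])
        return " ".join(out)

    print(continue_text("the"))
    ```

    Ask it to continue "the" and it assembles a fluent-looking phrase purely from frequency - the "looks like an answer" behaviour, scaled down enormously.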

    Now, if the actual answer does exist in the training data, LLMs turn out to be quite good at finding it (though, I'd wager, less good than a normal search engine, certainly for the same computing cost), and they are also quite good at rephrasing it. So "explain in simple terms..." or "here's a speech - make it funnier" type queries tend to give decent results, especially in situations like the latter where you can verify for yourself that the result is correct.

    BUT (and this is a big one) if the answer doesn't exist in the training data, the LLM typically has no way of knowing that, and will still find words and sentences that look convincing anyway. Think of it this way (to grossly oversimplify how they work): if the answer should contain the words "in the year xxxx", but no source gives the correct year, it will find other examples of "in the year xxxx" in related text and put that (possibly irrelevant) year in instead. And the way the algorithms work, it probably has no idea it's done this. It's just trying to find a best match to the question; truth is kind of secondary. (This was particularly bad before "AI summaries" started putting sources on their results.)

    This is how an AI infamously recommended adding glue to your pizza to stop the toppings sliding off. It was traced back to a Reddit post where someone suggested it - obviously as a joke, in the original context - but the LLM wasn't able to pick up on that and presented it as advice. That's a lighthearted example; there are more serious cases of LLMs recommending very dangerous courses of action.
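    The "in the year xxxx" failure mode can be sketched in the same toy style (again my own illustration, with an invented mini-corpus in which nothing answers the question being asked): a best-match completion happily borrows a year from unrelated sentences and states it with full confidence.

    ```python
    import re
    from collections import Counter

    # Invented mini-corpus: none of these sentences is about the church
    # we are going to "ask" about; they just happen to contain years.
    training_text = (
        "The club was founded in the year 1923. "
        "In the year 1923 the bridge opened. "
        "The statue was unveiled in the year 1887."
    )

    def most_common_year():
        """Borrow whichever year most often follows the phrase 'in the
        year' in the corpus - a crude stand-in for best-match completion."""
        years = Counter(re.findall(r"[Ii]n the year (\d{4})", training_text))
        return years.most_common(1)[0][0]

    # A confident-looking, completely unsupported "answer":
    print(f"The church was built in the year {most_common_year()}.")
    ```

    The "answer" names 1923 - the year most frequent in the corpus, and entirely irrelevant to the church.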

    More relevant here, perhaps, are the numerous reported cases of lawyers using LLMs to draft pleadings. The LLMs will generate bits of text that appear to be case citations (and are very good at making convincing-looking ones - correct formatting and everything!) but with words picked from the training data rather than actual cases. This has led to at least one lawyer (unnamed in the article) in Victoria, Australia, being forbidden from practising as a principal solicitor, handling client money, or running his own law firm.
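    One obvious safeguard is to check every citation in a draft against an authoritative database before filing. Here's my own sketch of that idea - the case names, the regex, and the known-case list are all invented for illustration (a real check would query an actual case-law database) - showing that a fabricated citation can be perfectly formatted yet simply not exist:

    ```python
    import re

    # Hypothetical stand-in for a lookup against a real case-law database;
    # the case name here is invented for illustration.
    known_cases = {
        "Smith v Jones [2019] EWHC 123 (QB)",
    }

    def extract_citations(text):
        # Very rough pattern for "Name v Name [year] COURT number" citations.
        return re.findall(
            r"[A-Z][\w']+ v [A-Z][\w']+ \[\d{4}\] \w+ \d+(?: \(\w+\))?", text
        )

    def flag_unverified(text):
        """Return citations not present in the known-case list - i.e. the
        correctly formatted but nonexistent ones an LLM tends to produce."""
        return [c for c in extract_citations(text) if c not in known_cases]

    draft = ("As held in Smith v Jones [2019] EWHC 123 (QB) and "
             "Brown v Green [2021] EWHC 456 (QB), the charge fails.")
    print(flag_unverified(draft))
    ```

    Both citations in the draft look equally plausible; only the lookup reveals that the second one doesn't exist.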

    At the risk of skirting the "no politics" rule here: in LLM research, the tendency to talk very confidently about something without any actual truth behind it is known as "the Boris Johnson problem".
  • Ralph-y
    Ralph-y Posts: 4,786 Forumite
    Part of the Furniture 1,000 Posts Name Dropper Photogenic
    To back that up .......

    AI Overview:
    Parking "fines" near Upholland church are likely from automated systems (ANPR cameras) at nearby businesses like the Davy Lamp pub, where you must register your car to avoid charges, leading to penalty notices for non-compliance, though some fines (even for businesses) can be successfully appealed or cancelled if you were a genuine customer or if there was an error. The issue seems common in the area with local businesses and parking management.

    I live here and know the area very well ..... Davy Lamp pub, ...  no such place that I know of ...... user beware !


