Debate House Prices


In order to help keep the Forum a useful, safe and friendly place for our users, discussions around non-MoneySaving matters are no longer permitted. This includes wider debates about general house prices, the economy and politics. As a result, we have taken the decision to keep this board permanently closed, but it remains viewable for users who may find some useful information in it. Thank you for your understanding.

Why have house prices increased so much over the last twenty years?


Comments

  • GreatApe
    GreatApe Posts: 4,452 Forumite
    economic wrote: »

Interesting, I didn't know Nvidia was a leader in the hardware or software for AI or self-driving.

I always assumed the software side would make the big bucks and the hardware side would have to compete down to razor-thin margins.
  • economic
    economic Posts: 3,002 Forumite
    GreatApe wrote: »
Interesting, I didn't know Nvidia was a leader in the hardware or software for AI or self-driving.

I always assumed the software side would make the big bucks and the hardware side would have to compete down to razor-thin margins.

Full version:

https://www.youtube.com/watch?v=YIZWfItOQRs

About halfway through he talks about self-driving cars.

Both hardware and software are crucial; the hardware needs to be developed to be powerful enough.

Nvidia has pretty much a monopoly on hardware powerful enough to run AI, self-driving cars etc.
  • MobileSaver
    MobileSaver Posts: 4,349 Forumite
    GreatApe wrote: »
Anyway, the reason your home's value will crash is because once we have general AI it can build things for us much quicker and much cheaper than humans.

    Your reasoning is flawed and ignores the golden rule of property... "location, location, location."

You need land to build a house but there's little any "General AI" can do about creating land. So, ignoring land costs, building a 4-bed detached house would cost pretty much the same whether in Central London or the Scottish Highlands, yet the desirability factor means London prices would still be many times those of the Highlands.

    Similarly I live in a unique house in an exclusive location and happen to own acres of land around my property. Your general AI could probably replicate my physical house (albeit at a much higher cost than a standard house) but it would be impossible to build it in the same stunning location without first buying (at great cost) suitable land from me.
    Every generation blames the one before...
    Mike + The Mechanics - The Living Years
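MobileSaver's build-cost-versus-location point is easy to put as arithmetic. A minimal sketch, with every figure invented purely for illustration:

```python
# Toy decomposition of a house price into build cost plus land value
# (all numbers are made-up illustrations, not market data).
build_cost = 300_000  # roughly the same wherever you build

land_value = {"Central London": 1_700_000, "Scottish Highlands": 50_000}

for place, land in land_value.items():
    print(f"{place}: £{build_cost + land:,}")
# Central London: £2,000,000 vs Scottish Highlands: £350,000 -- the same
# physical house, but the location premium drives a near-6x price gap.
```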
  • Malthusian
    Malthusian Posts: 11,055 Forumite
    GreatApe wrote: »
If the sun was to explode in 20 years' time and we knew this as a certainty, do you think humanity would change its savings and spending habits?

    This has nothing to do with a speculated future event which could happen 20 years from now or 2,000.
Information technology is young and is an exponential. Suggesting that the Jews were trying to create AI with clay golems 3,000 years ago and we are no closer, so it's going to take another 3,000 years to also get nowhere, is IMO stupid.
    I didn't. I said that to the best of our knowledge, an AI is just as likely to be 3,000 years off as 20.

    If we have a scale of intelligence a mile long and we put a lump of dirt on one end and a Singularity AI on the other, the clay golem is probably one inch closer to the Singularity AI than the lump of dirt, and the world's most advanced computer is probably one metre closer.

    People simply don't understand AI. When someone designed a program that could teach itself Go and after a short period of playing against itself became capable of winning against the world's best Go player, there was an awful lot of guff in the media about how the Singularity was on its way. It ignores the yawning chasm between a game-playing bot and an intelligence, which is this: a human still had to tell the program what winning even means.

    We have computers that, once you tell them that their goal is to surround the other guy's pieces with your pieces, can do it better than any human. But AlphaZero can't win a game of chess against a five-year-old unless a human tells it that checkmating the enemy king is desirable and being checkmated is not desirable.

    A two-year-old wrestling with another toddler in a sandpit can work out whether it's better to win or lose; an AI, at present, cannot, and we have no idea how to create an AI that can.
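Malthusian's point about who defines winning is concrete enough to sketch in code. Below is a toy self-play learner for a Nim-like game (a minimal example of my own, not DeepMind's method or code): two players alternately take 1 or 2 stones, and whoever takes the last stone wins. The agent works out how to win entirely by itself, but the reward function, the definition of winning, still has to be hand-written by a human:

```python
import random
from collections import defaultdict

def reward(mover_took_last):
    # The human-supplied value judgement: taking the last stone is "good".
    # Nothing in the learning loop below could derive this on its own.
    return 1.0 if mover_took_last else -1.0

Q = defaultdict(float)                     # Q[(stones, action)] -> value
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50_000

for _ in range(EPISODES):
    stones, history = 10, []               # one game of self-play
    while stones > 0:
        actions = [a for a in (1, 2) if a <= stones]
        if random.random() < EPSILON:      # explore
            a = random.choice(actions)
        else:                              # exploit current estimates
            a = max(actions, key=lambda x: Q[(stones, x)])
        history.append((stones, a))
        stones -= a
    r = reward(mover_took_last=True)       # the last mover took the last stone
    for s, a in reversed(history):         # propagate the result backwards,
        Q[(s, a)] += ALPHA * (r - Q[(s, a)])
        r = -r                             # flipping perspective each ply

# Optimal play from 10 stones is to take 1 (leave a multiple of 3);
# the learner should discover this on its own.
print(max((1, 2), key=lambda x: Q[(10, x)]))
```

Delete `reward` and the loop has nothing to optimise; that missing piece is the chasm being described.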

    On the local radio yesterday the presenters started talking about AI and then started wittering about self-service machines. What have self-service machines, a barcode scanner attached to a card machine, got to do with AI? Absolutely nothing. To people who don't know anything about the subject it just falls into a general category of "techy stuff that might take mah jerb".
    You need land to build a house but there's little any "General AI" can do about creating land.

    A singularity AI would rapidly invent everything there is to be invented (it isn't constrained by the painstaking process of trial and error that we have been for the past 10,000 years), which means that:

    - nobody is constrained by the need to live close to their job - there aren't any
    - building new houses would be cost-free and done by robots
    - advances in transport would remove most of the need to live in urban centres
    - swathes of unused land could be landscaped beyond recognition and made desirable to live on
    - excess population would be able to depart for other planets

    If any of this seems implausible, that's because you're still thinking like a human rather than an AI that can improve upon itself. There is literally no constraint on what a Singularity AI can do. If it can't do it, it creates a superior AI that can, and if that AI can't, it creates a superior AI, until the original goal has been achieved.
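The "creates a superior AI until the goal is achieved" step is, structurally, just a loop. A purely illustrative sketch, in which "capability" is reduced to a single made-up number and the gain per generation is an arbitrary assumption:

```python
# Recursive self-improvement as code (illustrative only; nothing here is
# a prediction, just the shape of the argument).
def solve(goal_difficulty, gain_per_generation=2.0):
    capability, generations = 1.0, 0
    while capability < goal_difficulty:    # can't do it yet...
        capability *= gain_per_generation  # ...so build a better successor
        generations += 1
    return generations

print(solve(1e12))  # 40 doublings reach a goal 10^12 times beyond us
```

The whole argument hangs on the loop body reliably producing a gain every generation, which is exactly the contested premise.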

    The only constraints are the laws of physics, and since a singularity AI will have a much better understanding of how physics works than we do, it would be daft for us to say "A singularity AI won't be able to do that, because it's against the laws of physics". It would be like Aristotle lecturing a present-day quantum physicist about the five classical elements.

But let's ignore all that. Let's say that there's a house somewhere in the Alps with a particularly beautiful view, and I want to live there, but someone else already lives there. The benevolent AI offers to build me a house 100 yards away, but being a stubborn human I don't want that one, I want the other guy's house. So it seems you're right - despite all humanity's other needs being supplied at zero cost, land still has value.

    So what am I going to offer Mr Jones for his lakeside house? Food? He doesn't need any, it's free. Labour? Free. Money? Doesn't exist, everything is free. Some other land that I own? Worthless, he doesn't want to live there and there's nothing anyone else can give him to rent it. There's little way of resolving this other than fighting him for it or letting the AI plug me into a virtual reality where Mr Jones doesn't exist and I can live in his house. Either way the land is still effectively worthless as I have nothing I can offer in trade.

    This is all totally out there science fiction stuff - and that's what a Singularity AI is. That's the inevitable consequence of an AI that can improve on itself and isn't constrained by the fleshy inconvenience of being human. It's not an advanced version of a self-scanning machine.

    Maybe this world will arrive in 20 years' time but to us in the present it's only useful as a fantasy, not to inform our financial planning.
  • Herzlos
    Herzlos Posts: 15,918 Forumite
    GreatApe wrote: »
    The idea of the housing ladder is and has always been fake
    There is a housing step not a housing ladder
    People buy around age 30 and then again around age 55 and that is it

Only because of the expense of doing so. People used to buy around 20, then again at about 25, 30 and 55 as their needs changed and they built up equity. My parents were on their 4th place by 31 (tiny flat, decent flat, tiny house, decent house). I really should have done the same instead of buying a "forever" place at 23 and over-stretching.
  • GreatApe
    GreatApe Posts: 4,452 Forumite
    Herzlos wrote: »
Only because of the expense of doing so. People used to buy around 20, then again at about 25, 30 and 55 as their needs changed and they built up equity. My parents were on their 4th place by 31 (tiny flat, decent flat, tiny house, decent house). I really should have done the same instead of buying a "forever" place at 23 and over-stretching.


The transaction data shows this to be false. If people were buying 4-5 homes on a ladder, transaction levels would be much higher. The average property only changes hands once every 25 years, and it's been more or less the same for a generation.

So the typical homeowner only ever owns two homes: their first one, then one more, and that's it.
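The once-every-25-years figure is just housing stock divided by annual turnover. A quick sanity check with ballpark UK numbers (both figures below are rough assumptions for illustration, not official statistics):

```python
# Implied holding period = dwelling stock / annual transactions.
dwellings = 28_000_000             # assumed: roughly 28m UK dwellings
transactions_per_year = 1_200_000  # assumed: roughly 1.2m sales a year

print(dwellings / transactions_per_year)  # ~23 years between sales
```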
  • GreatApe
    GreatApe Posts: 4,452 Forumite
    Malthusian wrote: »
I didn't. I said that to the best of our knowledge, an AI is just as likely to be 3,000 years off as 20. [...] a human still had to tell the program what winning even means. [...] There is literally no constraint on what a Singularity AI can do. [...] Maybe this world will arrive in 20 years' time but to us in the present it's only useful as a fantasy, not to inform our financial planning.


In the real world our ideas of success were simply based on survival. How does a human know that being eaten by a lion is bad, or that falling off a cliff is bad?

Defining what counts as success and what counts as failure doesn't make Google DeepMind non-AI. They could have it play Go and tell it that losing is bad. How do you define losing? Sure, the humans tell it it has lost, but is that any better or worse than the fall breaking your neck telling the human that falling is not good?

I think you are doing a disservice to the discussion. You are just saying "I have no idea when it will arrive, so there is no need to think or worry or plan for it." I think this is silly; information technology is an exponential, which means that in a single 80-year human life it could advance 1,000,000,000,000x.
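For what it's worth, the 1,000,000,000,000x figure is what a Moore's-law-style doubling every two years compounds to over a lifetime; the doubling rate is an assumption about the future, not a law of nature:

```python
# 80 years at one doubling every 2 years = 40 doublings.
doublings = 80 / 2
print(f"2^{doublings:.0f} = {2 ** doublings:.2e}")  # 2^40 = 1.10e+12
```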
  • economic
    economic Posts: 3,002 Forumite
    GreatApe wrote: »
In the real world our ideas of success were simply based on survival. How does a human know that being eaten by a lion is bad, or that falling off a cliff is bad?

Defining what counts as success and what counts as failure doesn't make Google DeepMind non-AI. They could have it play Go and tell it that losing is bad. How do you define losing? Sure, the humans tell it it has lost, but is that any better or worse than the fall breaking your neck telling the human that falling is not good?

I think you are doing a disservice to the discussion. You are just saying "I have no idea when it will arrive, so there is no need to think or worry or plan for it." I think this is silly; information technology is an exponential, which means that in a single 80-year human life it could advance 1,000,000,000,000x.

But how can AI self-learn its way to the singularity in a changing world where we would need to tell the AI what's wrong and what's right on a regular basis? Otherwise it may very well do the wrong thing, something that is not in the best interests of humans.

I agree that things are moving at a rapid pace. Just see the link I sent on the Nvidia presentation and what they have achieved. Hardware is becoming more and more powerful, and at the same time the form factor smaller and smaller.

The issue is incentives. Humans are hard-wired from conception to be self-interested. It does not need to be taught, as it's been hard-wired over time for us to look after ourselves primarily (and, through our emotions, the ones we care about as well, but that is still fundamentally self-interest as it serves our emotions). Just look at the amoeba cells in this video:

https://www.armstrongeconomics.com/international-news/politics/why-governments-are-like-an-ameba/

The interesting thing is that while the amoeba has no consciousness or self-awareness as we define it, it is still incentivised to look after its own interests. This suggests that AI could very well do the same if we let it, which is potentially very dangerous. What difference is there between a simple cell structure like an amoeba and the best AI systems out there? I think the answer is that the amoeba has experience collated over billions of years in its neurological structure. Can this be replicated in an AI, even with the many millions of terabytes of data we feed it? Maybe we can if the right software is implemented, and then it's a matter of how fast (the hardware part) it can develop into a singularity.

It's a complex and convoluted discussion we are having; it's best to stay as open-minded as possible and assume nothing.
  • Malthusian
    Malthusian Posts: 11,055 Forumite
    GreatApe wrote: »
In the real world our ideas of success were simply based on survival. How does a human know that being eaten by a lion is bad, or that falling off a cliff is bad?

    A Terminator scenario involving a war between a race of flesh things who want to live vs a race of machines who don't know whether they prefer living or being smashed to bits ends very quickly. And not in the machines' favour.

    The holy grail of AI - an AI that can create an AI superior to itself - can think for itself and has self-determination, i.e. it forms its own goals without human intervention. Otherwise even if it can create a superior AI, it won't.

    This self-determination is the tricky bit.
I think you are doing a disservice to the discussion. You are just saying "I have no idea when it will arrive, so there is no need to think or worry or plan for it."
No, I didn't. I said the nature of the event means that there is no point planning for it. "Either the AI will give me everything I want for the rest of my life or it will kill me or plug me into the Matrix... the potential loss is zero". To put it another way, those who plan for it will be in exactly the same position as those who don't plan for it. Everyone will either get everything they want for free for the rest of their life, or they will be plugged into the Matrix, or they will be killed. Whatever happens, it makes no difference whether you plan for it or not.

    This being the case, when it happens is irrelevant. Those who plan for it will have wasted any money and mental energy they spent on their planning.
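The planning-changes-nothing claim is a dominance argument from decision theory. A toy payoff table, with every utility number invented for illustration, shows its shape:

```python
# Hypothetical payoffs per post-Singularity outcome; planning subtracts
# the same fixed cost from every row, so it can never change the ranking.
outcomes = {"utopia": 100, "matrix": 0, "extinction": -100}
PLANNING_COST = 5  # money and mental energy spent preparing

for name, utility in outcomes.items():
    planned = utility - PLANNING_COST   # the outcome ignores your preparation
    unplanned = utility
    print(name, "gain from planning:", planned - unplanned)  # always -5
```

Whichever outcome obtains, the planner ends up exactly the planning cost worse off than the non-planner: not planning weakly dominates.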
I think this is silly; information technology is an exponential, which means that in a single 80-year human life it could advance 1,000,000,000,000x.
What does that even mean? 1,000,000,000,000 times what? Will computers be a trillion times faster in GHz? Will we be able to produce a trillion times as much food out of the same land? Will a computer be able to checkmate a human in one ten-millionth of a turn?

    Technology is not growing exponentially at this moment, this is just using "exponentially" to mean "really really bigly" instead of what it actually means. When an AI is created that can improve upon itself, that will cause an exponential explosion in technology, but we aren't there yet, and we certainly aren't progressing exponentially towards it.

Here is a list of all the things from the last year that someone has claimed are a step towards Singularity AI:

    1) Voice recognition software
    2) Self-service checkouts
    3) A machine that can beat humanity's top Go players at Go
    4) A machine that can beat other really good chess computers at chess, with less human instruction than the older chess computers
5) A Google program that randomly produces "art", mostly trippy psychedelic pictures of dogs
    6) Cars that drive themselves (after being told where to go and how to get there by humans)
    7) Customer service "chatbots" (basically a tarted-up FAQ search box)
    8) Being able to turn on the lights by saying "Alexa, turn on the hallway lights. A-LEX-A, TURN ON THE HALL-WAY LIGHTS" instead of pressing a switch

    Here is the list of all the occasions in the last year when a computer has actually demonstrated signs of self-determination and behaviour outside the parameters programmed into it:

    .
    .
    .
    .
    .
    .
    .
  • GreatApe
    GreatApe Posts: 4,452 Forumite
    economic wrote: »
But how can AI self-learn its way to the singularity in a changing world where we would need to tell the AI what's wrong and what's right on a regular basis? Otherwise it may very well do the wrong thing, something that is not in the best interests of humans.

You wouldn't have to define every single good and bad; you could give it a general good to aim for. Maybe something like: learn human values, predict what they would be, and help humanity along the way. How would it know that burning down cities is bad? Maybe look at historic events and see whether humans valued them as a net good or a net bad, or maybe simulate them and see whether the outcome is positive or negative. There may even be competing AIs that grow or shrink depending on human validations of the AIs, as sketched below. I think the most likely course, perhaps the only course, of action is that humans and AI merge, with direct links to the human mind: a new digital layer in the brain. At some stage we may get rid of the biological part of the brain and just be digital.
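The competing-AIs idea sketches naturally as a fitness-proportional update. Everything below, the names, approval rates and growth factors, is hypothetical illustration:

```python
import random

random.seed(0)
population = {"ai_a": 1.0, "ai_b": 1.0, "ai_c": 1.0}  # relative shares

# Stand-in for how often humans validate each AI's behaviour (made up).
approval = {"ai_a": 0.8, "ai_b": 0.5, "ai_c": 0.2}

for _ in range(200):                      # repeated rounds of feedback
    for name in population:
        if random.random() < approval[name]:
            population[name] *= 1.05      # validated AIs grow
        else:
            population[name] *= 0.95      # rejected AIs shrink

total = sum(population.values())
for name, share in population.items():
    print(name, round(share / total, 3))  # the most-approved AI dominates
```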
I agree that things are moving at a rapid pace. Just see the link I sent on the Nvidia presentation and what they have achieved. Hardware is becoming more and more powerful, and at the same time the form factor smaller and smaller.

Application-specific chips seem to offer huge jumps in performance and much lower power use. This will speed up AI progress.
The issue is incentives. Humans are hard-wired from conception to be self-interested. It does not need to be taught, as it's been hard-wired over time for us to look after ourselves primarily (and, through our emotions, the ones we care about as well, but that is still fundamentally self-interest as it serves our emotions). Just look at the amoeba cells in this video:

https://www.armstrongeconomics.com/international-news/politics/why-governments-are-like-an-ameba/

I don't think there is a lot of 'hard wiring' in the brain. Take the example of blindfolding a baby: do that until it is, say, 20, and then take the blindfold off. The person has a human brain, has perfectly working eyes and has all the bits needed, but the brain won't have learned to see, so he will be blind. This is what happened to a man whose eyesight was restored after decades: to the doctors' surprise he was still almost fully blind, not from the eyes but from the brain. My guess is most things are learnt, not hard-wired.
The interesting thing is that while the amoeba has no consciousness or self-awareness as we define it, it is still incentivised to look after its own interests. This suggests that AI could very well do the same if we let it, which is potentially very dangerous. What difference is there between a simple cell structure like an amoeba and the best AI systems out there? I think the answer is that the amoeba has experience collated over billions of years in its neurological structure. Can this be replicated in an AI, even with the many millions of terabytes of data we feed it? Maybe we can if the right software is implemented, and then it's a matter of how fast (the hardware part) it can develop into a singularity.

It's a complex and convoluted discussion we are having; it's best to stay as open-minded as possible and assume nothing.

There is a thought that humans do not have free will, that our brains are deterministic. It seems to be true.

If that is the case, which it probably is, how would a smart AI take that news? We live as if we are in control and have free will; surely the AI will not have this illusion. If so, how would it react to knowing it is just a really advanced deterministic calculator? We get by by forgetting or ignoring that fact.
This discussion has been closed.