
New SSD confusion!


Comments

  • Fightsback
    Fightsback Posts: 2,504 Forumite
    edited 29 July 2015 at 10:22AM
    esuhl wrote: »
    Ah, the optical drive is connected via IDE (as is yet another hard drive!).

    1TB SATA drive:
    NTFS - Windows XP (100GB)
    ext3 - Linux root (75GB)
    NTFS - Windows 7 (250GB)
    ext2 - Linux /boot (64MB)
    ext3 - Linux /home (75GB)
    NTFS - Data (420GB)
    FAT32 - Data (12GB)
    swap - Linux swap (512MB)

    Ew, that's messy. I prefer to keep 'nix and Windows on separate drives, as Windows has a nasty habit of doing evil things to GRUB or whatever other boot loader you use.

    Oh, PS: if you're not already aware, you'll need to sort out the UUIDs when you clone the Linux partitions.
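
A minimal sketch of that UUID clean-up, assuming a Debian-style system; the device name, mount point, and old UUID below are all hypothetical, so adjust them for your own layout:

```shell
# A dd-style clone copies the filesystem UUID too, so the clone and the
# original end up with identical UUIDs. Give the clone a fresh one:
sudo tune2fs -U random /dev/sdb2            # hypothetical cloned ext3 root
NEW_UUID=$(sudo blkid -s UUID -o value /dev/sdb2)

# Point the clone's fstab at the new UUID (the old UUID here is made up):
sudo sed -i "s/UUID=1234-old-uuid/UUID=$NEW_UUID/" /mnt/clone/etc/fstab

# Regenerate the boot loader config so GRUB references the new UUIDs:
sudo update-grub
```

Without this step, the kernel and GRUB can pick whichever of the two identical UUIDs they find first, which may be the wrong drive.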
    Science isn't exact, it's only confidence within limits.
  • londonTiger
    londonTiger Posts: 4,903 Forumite
    edited 29 July 2015 at 12:01PM
    It would be more cost-effective to put two SATA drives in striped RAID than to buy a PCIe SSD.

    Booting is a fairly simple procedure and the bottleneck isn't data speed any more. The operating system has to do a lot of checks, checking for new hardware amongst other things, and these aren't dependent on hard drive seek speed. My motherboard spends a whopping 25 seconds in POST before the OS even gets a look in, and that would not be fixed by a faster drive.

    It would be overkill to get a PCIe SSD just for boot speed. A PCIe SSD is useful for video editors and others who work with vast amounts of data in short periods of time. It makes rendering a lot easier working on a PCIe SSD, and cost is no object.

    On a budget I'd suggest an SSD striped RAID. You can even add redundancy to give you some backup too.
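
One way to sketch that striped setup on Linux is with mdadm; the device names here are hypothetical, and note that RAID 0 on its own has no redundancy at all:

```shell
# Striped (RAID 0) array across two SATA drives -- fast, but losing
# either drive loses the whole array:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# To add the redundancy mentioned above, RAID 10 stripes across
# mirrored pairs instead (needs four drives):
# sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]

# Persist the array definition so it assembles on reboot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```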
  • Stoke
    Stoke Posts: 3,182 Forumite
    londonTiger wrote: »
    It would be more cost-effective to put two SATA drives in striped RAID than to buy a PCIe SSD.

    Booting is a fairly simple procedure and the bottleneck isn't data speed any more. The operating system has to do a lot of checks, checking for new hardware amongst other things, and these aren't dependent on hard drive seek speed. My motherboard spends a whopping 25 seconds in POST before the OS even gets a look in, and that would not be fixed by a faster drive.

    It would be overkill to get a PCIe SSD just for boot speed. A PCIe SSD is useful for video editors and others who work with vast amounts of data in short periods of time. It makes rendering a lot easier working on a PCIe SSD, and cost is no object.

    On a budget I'd suggest an SSD striped RAID. You can even add redundancy to give you some backup too.
    You're seriously suggesting a striped RAID. Madness.
  • S0litaire
    S0litaire Posts: 3,535 Forumite
    Part of the Furniture 1,000 Posts Combo Breaker
    The initial BIOS boot screen can be shortened in the UEFI/BIOS settings; it's probably set to do a full check and display a branded image rather than a quick silent check. That alone can save 15-20 seconds on a boot.

    Windows boot times have dropped since Win7 because they took a leaf out of Linux's book and started to "pre-fetch".

    When Linux boots it sometimes only uses "chunks" of code instead of the whole program. It copies these chunks to another location and boots from them, saving time and memory.

    Windows started using this in Win7 (I think it might have been on the server side first) and called it "Prefetch": an index of which lines of code in which program and driver files it needs to boot up quickly.

    That's why the bottleneck has moved dramatically from the CPU; it's now down to raw bandwidth between the CPU and the drives it's pulling data from. PCIe cards currently offer the fastest way of moving data from SSD to CPU.
    Laters

    Sol

    "Have you found the secrets of the universe?" asked Zebade. "I'm sure I left them here somewhere."
  • londonTiger
    londonTiger Posts: 4,903 Forumite
    Stoke wrote: »
    You're seriously suggesting a striped RAID. Madness.

    Why not? If you want performance, striped RAID is the way to go. I have striped RAID on my primary drive, with my OS, programs and games on it.

    Data is on a mechanical drive which gets online backup and NAS backup done frequently. I do have a monthly image snapshot of my primary OS drive, which is sufficient as there isn't any critical user data on there.

    Having redundancy on a home RAID with no user data is foolish and slows things down unnecessarily.
  • Robisere
    Robisere Posts: 3,237 Forumite
    Ninth Anniversary 1,000 Posts Photogenic Combo Breaker
    Try this with your current board:
    http://www.sandisk.co.uk/products/ssd/sata/readycache/

    Available from Amazon, and not nearly as expensive as what you are considering. It is a cache drive: basically, it learns which information to hold every time you boot, and after a few boots it gets quicker and quicker. This will give you the faster boot you need until you can afford a new system. And after all, why spend more of the money now, when you can save for something totally better, with a more modern, faster board, later? That will have USB 3.1, faster memory, processor, etc. Get the boot speed you want now, at less cost.
    I think this job really needs
    a much bigger hammer.
  • enfield_freddy
    enfield_freddy Posts: 6,147 Forumite
    S0litaire wrote: »
    The initial BIOS boot screen can be shortened in the UEFI/BIOS settings; it's probably set to do a full check and display a branded image rather than a quick silent check. That alone can save 15-20 seconds on a boot.

    Windows boot times have dropped since Win7 because they took a leaf out of Linux's book and started to "pre-fetch".

    When Linux boots it sometimes only uses "chunks" of code instead of the whole program. It copies these chunks to another location and boots from them, saving time and memory.

    Windows started using this in Win7 (I think it might have been on the server side first) and called it "Prefetch": an index of which lines of code in which program and driver files it needs to boot up quickly.

    That's why the bottleneck has moved dramatically from the CPU; it's now down to raw bandwidth between the CPU and the drives it's pulling data from. PCIe cards currently offer the fastest way of moving data from SSD to CPU.




    OP: Ah, the optical drive is connected via IDE (as is yet another hard drive!).

    UEFI on a board that has IDE? That's a new one on me.

    IDE went out with the ark; SATA has been here for 10+ years. UEFI was brought in (on PCs) to utilise the features of Win 8, which did not come in on the ark.
  • OGR
    OGR Posts: 157 Forumite
    I didn't see it mentioned here in a quick scan, but if you go the PCI-E route you have to make sure it is AHCI and not NVMe if you are planning to keep those legacy OSes. Only Windows 8.1 and Windows 10 have NVMe support. I did see you asked whether you could use PCI-E because of AHCI; you can get AHCI PCI-E based SSDs.

    PCI-E will be miles faster than SATA as SATA has reached its limit and will probably be phased out in the coming years for storage. NVMe is in theory quicker than AHCI but it is relatively new. Either way you will notice a massive difference going from a HDD to a PCI-E SSD.
  • londonTiger
    londonTiger Posts: 4,903 Forumite
    OGR wrote: »
    PCI-E will be miles faster than SATA as SATA has reached its limit and will probably be phased out in the coming years for storage. NVMe is in theory quicker than AHCI but it is relatively new. Either way you will notice a massive difference going from a HDD to a PCI-E SSD.

    Emmmmmm, no. SATA 1 was phased out by SATA 2, and SATA 2 will be phased out by SATA 3. At the moment SATA 3 isn't even fully implemented, as only the newest motherboards support it.
  • OGR
    OGR Posts: 157 Forumite
    edited 29 July 2015 at 6:51PM
    OK, I'm going to explain this quickly as I'm on my phone and it's not great for long explanations.

    SATA has a maximum throughput of 6Gb/s, which ends up around 560MB/s as SATA has more overheads than PCI-E. SSDs already saturate this and get held back.

    One lane of PCI-E has a throughput of 8Gb/s; a single lane can hit nearly 1000MB/s. You can throw 2 or 4 lanes at a PCI-E SSD, so a 2x SSD will have 16Gb/s at its disposal. PCI-E SSDs, off the top of my head, are easily hitting 1300-1500 MB/s, and that's on new NVMe tech which is yet to mature.

    SATA is dead and isn't going to be developed any more. AHCI + SATA = old; PCI-E (M.2) + NVMe = the future. As time goes by more and more devices will move to PCI-E or M.2, especially because of space saving on small form factors.
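
The gap described above comes straight from line-rate and encoding overhead: SATA 3 runs a 6 Gb/s link with 8b/10b encoding, while PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding. A quick back-of-envelope check of the payload ceilings (integer arithmetic, before protocol overhead):

```shell
# SATA 3: 6000 Mb/s line rate, 8b/10b encoding -> payload MB/s
echo "SATA 3:  $(( 6000 * 8 / 10 / 8 )) MB/s"       # 600 MB/s ceiling

# PCIe 3.0: 8000 Mb/s per lane, 128b/130b encoding -> payload MB/s per lane
echo "PCIe x1: $(( 8000 * 128 / 130 / 8 )) MB/s"    # ~984 MB/s

# Four lanes, as on a typical M.2 NVMe slot:
echo "PCIe x4: $(( 4 * 8000 * 128 / 130 / 8 )) MB/s"
```

That 600 MB/s theoretical ceiling, minus command and protocol overhead, is where the ~560 MB/s real-world SATA figure comes from.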

    [Image: chart comparing PCIe and SATA throughput limits]
This discussion has been closed.