It looks like the !buildapc community isn’t super active, so I apologize for posting here. Mods, let me know if I should post there instead.

I built my first PC when I was, I think, 10-11 years old. Built my next PC after that and then sort of moved toward pre-made HP/Dell/etc. My last PC’s mobo just gave out and I’m looking to replace the whole thing. I’ve read over the last few years that prefabs from HP/Dell/etc. have gone to shit and don’t really work like they used to. Since I’m looking to expand comfortably, I’ve been thinking of giving building my own a shot again.

I remember, when I was a young lad, that there were two big pain points when putting the rig together: motherboard alignment with the case (I shorted two mobos by letting them touch the bare metal of the grounded case; not sure how that happened but it did) and CPU pin alignment, so you don’t bend any pins when inserting the chip into the socket.

Since it’s been several decades since my last build, what are some things I should be aware of? Things I should avoid?

For example, I only recently learned what M.2 SSDs are. My desktop has (had) SATA 3.5" drives, only one of which is an SSD.

I’ll admit I am a bit overwhelmed by some of my choices; I’ve spent some time on pcpartpicker and there are a lot of options to take in. Most of my time is spent in code development (primarily containers and Node). I am planning on installing Linux (Ubuntu, most likely) and I am hoping to tinker with some AI models, something I haven’t been able to do with my now-broken desktop due to its age. For ML/AI, I know I’ll need some sort of GPU, knowing only that NVIDIA cards require closed-source drivers. While I fully support FOSS, I’m not an OSS purist and fully accept that using closed-source drivers on Linux may be unavoidable. Happy to take recommendations on GPUs!

Since I also host a myriad of self-hosted apps on my desktop, I know I’ll need to beef up my RAM (I usually go for the max, or at least plan for the max).

My main requirements:

  • Intel i7 processor (I’ve tried i5s and they can’t keep up with what I code; I know i9s are the latest hotness but I don’t think the price is worth it; I’ve also tried AMD processors before and had terrible luck. I’m willing to try them again but I’d need a GOOD recommendation)
  • At least 3 SATA ports so that I can carry my drives over
  • At least one M.2 port (I cannibalized a laptop I recycled recently and grabbed the 1TB M.2 card)
  • On-board Ethernet/NIC (on-board wifi/bluetooth not required, but won’t complain if they have them)
  • Support at least 32 GB of RAM
  • GPU that can support some sort of ML/AI with DisplayPort (preferred)

Nice to haves:

  • MoBo with front USB 3 ports, but I’ll accept USB 2 (C vs. A doesn’t matter)
  • On-board sound (I typically use headphones or bluetooth headset so I don’t need anything fancy. I mostly listen to music when I code and occasionally do video calls.)

I threw together this list: https://pcpartpicker.com/list/n6wVRK

It didn’t matter to me if it was in stock; just wanted a place to start. Advice is very much appreciated!

EDIT: WOW!! I am shocked and humbled by the great advice I’ve gotten here. And you’ve given me a boost in confidence in doing this myself. Thank you all and I’ll keep replying as I can.

  • CosmicTurtle@lemmy.world (OP) · 9 months ago

    Two GPUs? Is that a thing? How does that work on a desktop? Honestly, if it weren’t for my curiosity about AI, I’d just go with the onboard video, though given my need for specific resolutions, I find comfort in having a dedicated card.

    I’ve been using Ubuntu exclusively for some 10 years and don’t use snap at all. tbh, not even sure what snap is.

    If it’s not apt, then I don’t use it.

    • catloaf@lemm.ee · edited · 9 months ago

      You put one GPU in one PCIe slot and one in another. Just be aware that a full-length PCIe slot isn’t necessarily a full-speed x16 slot; check the manual to be sure. Most cards will work with fewer lanes, but not all, and of course it’ll be slower. (Fun fact: you can put a long card in a short slot. Some slots have open backs to allow this, but if you have an oscillating Dremel tool and a steady hand, you can make your own.)
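
      If you want to check what a slot actually negotiated once the card is in, lspci on Linux will tell you. Quick sketch (the 01:00.0 address is just an example; yours will differ):

      # Find the PCI address of your GPU
      lspci | grep -i vga

      # Compare the slot maximum (LnkCap) to what it actually negotiated (LnkSta)
      sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'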

      But personally, for basic 3D games and work, I’d just use the integrated video (which is on the CPU these days, not the motherboard) and give the discrete card to the VM.

      • CosmicTurtle@lemmy.world (OP) · 9 months ago

        With the conversations I’m having here, I’m leaning in the direction of integrated video (assuming I can get one with display port) and a discrete card just for AI work.

        I use VirtualBox for VMs. I’m assuming there are instructions on how to give the card to the VM? My cursory google search came up with dubious results.

        • youmaynotknow@lemmy.ml · 9 months ago

          Most new boards will have at least a DisplayPort and an HDMI port, and most also have Thunderbolt 4, so you can plug in an HDMI or DisplayPort dongle. The sky is the limit, man. On the VM front, VMware is now all fucked with their forced subscription model; VirtualBox is still a thing, but GPU passthrough (I’ve heard, can’t really confirm) seems to have turned into a real shitshow. KVM/QEMU seems to be the only alternative that makes sense right now.

        • catloaf@lemm.ee · 9 months ago

          Looks like PCI passthrough was dropped in VirtualBox 6.1.0. You may want to use QEMU or KVM.
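
          If it helps, the usual KVM/VFIO recipe looks roughly like this (intel_iommu=on assumes an Intel CPU, use amd_iommu=on for AMD; the 10de:xxxx IDs are placeholders for your card's real IDs from lspci):

          # 1. Add intel_iommu=on to the kernel command line in /etc/default/grub, run update-grub, reboot

          # 2. Confirm the IOMMU is active (should show numbered groups)
          ls /sys/kernel/iommu_groups/

          # 3. Get the GPU vendor:device IDs
          lspci -nn | grep -i nvidia

          # 4. Bind the GPU (and its audio function) to vfio-pci instead of the normal driver
          echo "options vfio-pci ids=10de:xxxx,10de:yyyy" | sudo tee /etc/modprobe.d/vfio.conf

          # 5. Hand the device to the VM in virt-manager (Add Hardware -> PCI Host Device)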

    • Possibly linux@lemmy.zip · 9 months ago

      You do use it: a bunch of packages you install through apt automatically install the snap version instead.
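
      Easy way to see that for yourself, assuming snapd is installed (it is by default on Ubuntu):

      # Lists every snap on the system, including ones apt pulled in behind your back
      snap list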

      For Nvidia I still think passthrough is the best option, as it isolates the Nvidia issues to a VM instead of the host. If you aren’t going to spend a bunch of time on AI then you can just use a CPU, as long as you have enough RAM.

    • youmaynotknow@lemmy.ml · 9 months ago

      Snap is just a proprietary packaging format from Canonical, basically the same as Flatpak but fully controlled by Canonical, store and all. Integrated graphics will give you as much resolution as most GPUs, albeit it won’t be able to render at dedicated-GPU speeds. But unless you’re actually rendering very heavy video, integrated graphics matched with one or more CUDA-capable GPUs will get you there, and YOU set the limits.

    • fuckwit_mcbumcrumble@lemmy.world · 9 months ago

      If you don’t need a lot of GPU horsepower besides the AI stuff then you could just use the integrated graphics and have a dedicated GPU for the AI stuff.

      Having multiple GPUs in your system isn’t really that special. Plug HDMI into GPU1 and GPU1 drives your display/plays your games; plug HDMI into GPU2 and GPU2 does it instead. If you’re doing AI work, you don’t need anything connected to that GPU at all; the program just needs to know it’s there and to use it.
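
      On Linux you can check both sides of that, no matter where the monitor is plugged in:

      # Every GPU on the PCIe bus, integrated or discrete
      lspci | grep -iE 'vga|3d'

      # The GPUs the Nvidia driver can use for compute (needs the proprietary driver)
      nvidia-smi -L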

      The only thing to look out for when mixing the iGPU and a dGPU is that some BIOSes turn off the iGPU when they detect a dGPU. If you have 2 dGPUs it shouldn’t matter, outside of maybe the BIOS wanting to use the first one.

    • CeeBee@lemmy.world · 9 months ago

      Two GPUs? Is that a thing? How does that work on a desktop?

      I’ve been using two GPUs in a desktop since 15 years ago: one AMD and one Nvidia (although not lately).

      It really works just the same as a single GPU. The system doesn’t really care how many you have plugged in.

      The only difference you have to care about is specifying which GPU you want a program to use.

      For example, if you had multiple Nvidia GPUs you could pick which one a program uses from the command line (./your-program is a stand-in for whatever you’re running). First GPU only:

      CUDA_VISIBLE_DEVICES=0 ./your-program

      or the first two:

      CUDA_VISIBLE_DEVICES=0,1 ./your-program

      Anyways, you get the idea. It’s a thing that people do and it’s fairly simple.

    • grue@lemmy.world · 9 months ago

      Two GPUs? Is that a thing? How does that work on a desktop?

      GPUs these days aren’t like your old Voodoo, with its daisy-chained VGA port and one-way, fixed-function graphics pipeline. They can actually send the results of their calculations back to the CPU over the PCIe bus instead of only out to the monitor!

      (In all seriousness though, you don’t actually need two GPUs.)