Benchmarking My Budget AI Server with Geekbench 6


1. Introduction

As you already know, I’ve been busy bringing my so-called "budget AI supercomputer" to life (quotation marks fully intentional). After all the assembly, cable wrangling, BIOS drama, and late-night Ubuntu installations, it was finally time to ask the big question:

“But… how fast is it really?”

To answer that, I turned to one of the most popular benchmarking tools available today: Geekbench 6. It’s lightweight, cross-platform (Windows, macOS, Linux, even iOS and Android), and best of all, gives you a nice shiny number you can compare to other systems on the official Geekbench Browser.

So, armed with curiosity (and maybe too much coffee), I decided to run Geekbench on my dual Xeon E5-2699C v4 monster. The results were… enlightening.
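
For the curious: on Linux there's no GUI, just a command-line binary inside the tarball you grab from Geekbench's site. Here's a minimal sketch of how I wrap it in a script, assuming the tarball has already been extracted into a local ./geekbench folder (that path, and the bit that fishes out the results link, are my own choices, not anything official):

```python
# Minimal sketch: run the Geekbench 6 CLI on Linux and capture its output.
# Assumes the Linux tarball has been extracted into ./geekbench; adjust the
# path and binary name to match whatever your download actually contains.
import subprocess
from pathlib import Path

GEEKBENCH_BIN = Path("./geekbench/geekbench6")  # my extract location, not an official layout

def run_geekbench() -> str:
    """Run the default CPU benchmark and return the console output."""
    result = subprocess.run(
        [str(GEEKBENCH_BIN)],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    output = run_geekbench()
    # At the end of a run, Geekbench prints a link to the results page on the
    # Geekbench Browser; just echo those lines back.
    for line in output.splitlines():
        if "browser.geekbench.com" in line:
            print(line.strip())
```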

My "Monster" results
dolpa_me’s Profile - Geekbench

2. A Quick Note About Geekbench

Geekbench runs a set of standardized workloads that simulate real-world tasks: image processing, data compression, machine learning, encryption, and so on. Each task gets a score, and then those are combined into an overall single-core and multi-core result.

The magic here is comparison. Geekbench 6 pins a baseline system (a specific Intel Core i7-12700 machine) at 2500 points, and every other CPU’s score is scaled relative to that baseline.
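
To make the scaling concrete, this is how I eyeball a score; the 2500-point baseline is Geekbench's, but the example number and the arithmetic below are my own simplification, not Geekbench's internal methodology:

```python
# The 2500-point baseline is Geekbench 6's reference system; the example score
# below is made up. This is just how I eyeball results, not Geekbench's math.
BASELINE_SCORE = 2500

def relative_to_baseline(score: float) -> float:
    """1.0 means 'as fast as the baseline i7-12700', 0.5 means half as fast."""
    return score / BASELINE_SCORE

print(f"{relative_to_baseline(1250):.2f}x the baseline")  # -> 0.50x the baseline
```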


3. My Test Setup

  • CPU: Dual Intel Xeon E5-2699C v4 (22 cores each, 44 cores total, 88 threads with Hyper-Threading)
  • RAM: 128 GB DDR4 ECC (because why not)
  • GPU: At this stage, a modest NVIDIA GTX 960 for display duties
  • OS: Ubuntu 25.04 (freshly upgraded from 24.10)
  • Power profile: Tested in both powersave and performance modes (the Linux cpufreq governor; see the sketch below)
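
Switching profiles is just a matter of changing the cpufreq governor. Here's a small sketch of how I check and flip it from Python, using the standard sysfs interface (reading works as a normal user; writing needs root, and your kernel/driver has to expose these files in the first place):

```python
# Check and switch the CPU frequency governor via the standard cpufreq sysfs files.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu")

def current_governors() -> dict[str, str]:
    """Return {cpuN: governor} for every core that exposes cpufreq."""
    govs = {}
    for gov_file in sorted(CPUFREQ.glob("cpu[0-9]*/cpufreq/scaling_governor")):
        govs[gov_file.parts[-3]] = gov_file.read_text().strip()
    return govs

def set_governor(governor: str) -> None:
    """Write 'performance' or 'powersave' to every core (requires root)."""
    for gov_file in CPUFREQ.glob("cpu[0-9]*/cpufreq/scaling_governor"):
        gov_file.write_text(governor)  # raises PermissionError unless run as root

if __name__ == "__main__":
    print(current_governors())
    # set_governor("performance")  # uncomment and run with sudo to switch
```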

4. The Results

Here’s what Geekbench 6 had to say:

  • Powersave Mode
    • Single-core: 397
    • Multi-core: 4719
  • Performance Mode
    • Single-core: 768
    • Multi-core: 7727

Not too shabby, right? But let’s put that into perspective.

5. Comparison to Modern PCs

To make sense of these numbers, I compared them with some of my other machines (and a few public Geekbench results for reference):

  • MacBook Air (2012, Intel i3-ish, 4 GB RAM)
    • Single-core: ~400
    • Multi-core: ~800
    • Yes, in single-core terms my Xeon beast in powersave mode was basically a sleepy MacBook Air.
  • Mac Mini (2017, Intel i7, 16 GB RAM)
    • Single-core: ~1300
    • Multi-core: ~5000
  • Dell XPS 15 (Intel i7 + GTX 1080 mobile)
    • Single-core: ~1300
    • Multi-core: ~4500
  • MacBook Pro 13” M1 (2020, 16 GB RAM)
    • Single-core: ~2300
    • Multi-core: ~7600
    • Yes, the humble M1 laptop matches my dual Xeons in multi-core, and leaves me in the dust in single-core. Progress is scary.

And of course:

  • My AI “Supercomputer” (Dual Xeon E5-2699C v4)
    • Single-core: 768
    • Multi-core: 7727

So what did I learn? More cores ≠ more performance. Modern CPUs are so efficient per core that 8 sleek Apple Silicon cores keep pace with 44 of my ancient Xeon cores; the quick math below shows just how lopsided the per-core picture is.
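
Here's the back-of-the-envelope math behind that statement, using the multi-core scores above. Dividing by core count ignores SMT and imperfect scaling, so treat it as a rough illustration rather than a proper per-core benchmark:

```python
# Back-of-the-envelope per-core comparison using the multi-core scores above.
# Dividing by core count ignores SMT and imperfect scaling: rough illustration only.
systems = {
    "Dual Xeon E5-2699C v4 (44 cores)": (7727, 44),
    'MacBook Pro 13" M1 (8 cores)': (7600, 8),
}

for name, (multi, cores) in systems.items():
    print(f"{name}: ~{multi / cores:.0f} Geekbench points per core")
# ~176 points per Xeon core vs ~950 per M1 core: the Xeons win on headcount,
# the M1 wins on what each core actually gets done.
```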

6. Lessons Learned

  • Geekbench is a humbling tool. My “88-thread beast” doesn’t feel so beastly next to an M1 MacBook.
  • Single-core performance matters. A lot. Most daily tasks still rely heavily on single-thread speed.
  • My server still shines for parallel workloads: Docker containers, VMs, background AI model training, etc. For that, 44 cores are very handy.
  • Power profiles make a big difference. Switching from powersave to performance nearly doubled the single-core score and lifted multi-core by roughly 60% (quick math below).
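
Here's the quick math on that last point, using the scores from section 4:

```python
# Governor impact, straight from my Geekbench runs above.
powersave   = {"single": 397, "multi": 4719}
performance = {"single": 768, "multi": 7727}

for key in ("single", "multi"):
    gain = performance[key] / powersave[key]
    print(f"{key}-core: {gain:.2f}x faster in performance mode")
# single-core: 1.93x, multi-core: 1.64x
```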

7. Conclusion

Running Geekbench on my AI server was both fun and educational. It reminded me that while old Xeons can be useful for certain server and parallelized workloads, they’re no match for modern desktop CPUs in raw single-core speed.

Still, for my purposes—local AI model testing, Docker playgrounds, and experimenting with ComfyUI—the system delivers exactly what I need. And let’s be honest: it’s way more fun building a “supercomputer” out of parts than just buying a MacBook and calling it a day.
