
A Silly idea (or is it?) - Self-governing CPUs

CMetaphor

Quadfather
Joined
May 5, 2007
Messages
5,845
Location
Montreal, Canada
I know we already have this sort of thing to some extent, but I'm wondering if someday we'll have even more. Note: since I'm an AMD fan, this is an idea using their APUs/CPUs.

I'm talking about CPUs that completely govern themselves. How? 1) Number of active cores/threads, HT on or off. Let the CPU decide when to turn cores off for efficiency or for single-threaded performance (reduce the heat from having extra cores or threads active, and use that TDP headroom to OC the few remaining cores that are under heavy use).

2) Frequency scaling, but to a greater extent. Linked with 1): imagine going from 16 threads at 3GHz to 4 threads at 5.X GHz. Then, when the power isn't needed, shut down most cores and clock the remaining ones to a ridiculously low frequency, like 800MHz. Save power. (Rough sketch of 1) and 2) just below.)
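Here's a toy Python sketch of the decision logic I'm imagining for 1) and 2). Every number in it is invented for illustration (the TDP budget, the cubic power model, the 5GHz cap); a real CPU would rely on calibration data fused into the silicon, not a textbook formula:

```python
# Toy sketch of the self-governing logic in 1) and 2). Every number
# here is invented for illustration; a real CPU would use fused
# calibration data, not a textbook power model.

def pick_config(busy_threads, total_cores=16, tdp_watts=105.0):
    """Return (active_cores, clock_ghz) for the current load."""
    # Keep just enough cores awake for the runnable threads (min 1).
    active = max(1, min(busy_threads, total_cores))
    # Crude model: per-core power grows roughly cubically with clock,
    # so spend the whole TDP budget on whichever cores stay awake.
    watts_per_core_at_1ghz = 0.25  # invented constant
    clock = (tdp_watts / (active * watts_per_core_at_1ghz)) ** (1 / 3)
    return active, round(min(clock, 5.0), 2)  # cap at a 5 GHz fmax

if __name__ == "__main__":
    for threads in (16, 8, 4, 1):
        cores, ghz = pick_config(threads)
        print(f"{threads:>2} busy threads -> {cores} cores @ {ghz} GHz")
```

With those made-up constants it lands right around the scenario above: 16 cores at ~3GHz, 4 cores at ~4.7GHz, and a single core pinned at the 5GHz cap.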

3) Always-on intelligent hybrid CrossFire:
3a) For CPUs with integrated graphics (APUs): let the motherboard's IO graphics connection be the place to connect your display at all times. Let the integrated GPU take a tiny part of the load off the discrete GPU, or take over 2D rendering, leaving the discrete GPU open for other tasks. Then, when less power is needed, the CPU shuts down the discrete GPU entirely and the system relies solely on the integrated graphics for power saving.
3b) For CPUs with no integrated graphics, this feature would be limited and the discrete GPU couldn't be fully powered down.
(Ideally all CPUs would become APUs.)

4) For the end user: nothing? Or as little/as much as they want to deal with. Making your PC faster could be as easy as getting a bigger heatsink, cleaning the dust out of it, etc.
4a) Simple metrics shown to the end user could display your recent peak frequency and your averages over time. This could also assist in troubleshooting problems and even be used to determine when you really DO need to dust out your PC. It might even be possible to detect worn-out fans by monitoring temperatures (see the sketch below).
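On that last point, a hedged Python sketch of what the temperature-drift idea could look like. It assumes Linux plus the third-party psutil package, and the baseline and 10C threshold are invented examples, not real specs:

```python
# Sketch of the 4a idea: track CPU temperature drift over time as a
# proxy for dust buildup or a tired fan. Assumes Linux and the
# third-party psutil package; the 10C threshold is an invented example.

import time
import psutil

def cpu_temp():
    """Highest CPU temperature currently reported, in Celsius."""
    sensors = psutil.sensors_temperatures()  # Linux-only in psutil
    readings = [r.current for entries in sensors.values() for r in entries]
    return max(readings) if readings else None

def watch(baseline_c, drift_limit_c=10.0, interval_s=60.0):
    """Nag the user whenever temps run well above the clean baseline."""
    while True:
        now = cpu_temp()
        if now is not None and now - baseline_c > drift_limit_c:
            print(f"Running {now - baseline_c:.1f}C above baseline -- "
                  "check the fans / clean out the dust?")
        time.sleep(interval_s)

if __name__ == "__main__":
    watch(baseline_c=45.0)  # baseline measured when the PC was clean
```

Log the baseline when the machine is freshly cleaned, and a slow upward drift at similar loads becomes your "clean me!" signal.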


Ah, and therein lies the rub. All of this would require 1) CPUs to always be built with integrated graphics (why not?) and 2) integrated thermal probes, protections, and redundancies.

And the biggest hurdle: do we think end users can be trusted with this? Can a PC tell its user "Hey, my fan isn't performing well, replace it please" or "Toooo much dust! Clean me!", etc.?

What do you guys think?
 

lowfat

Moderator
Staff member
Joined
Feb 12, 2007
Messages
11,161
Location
Grande Prairie, AB
1, 2, and 4 are pretty much covered by AMD PBO on Zen 3, no?
Precision Boost, not necessarily Precision Boost Overdrive.

And as for CrossFire, multi-GPU is dead. And having a significantly slower GPU as one of the pair is asking for trouble.
 

MARSTG

Well-known member
Joined
Apr 22, 2011
Messages
4,540
Location
Montreal
Do we really care about efficiency that much in the desktop CPU world? I don't think so. Mobile is different since you're tied to a battery, but for the desktop or server segment I don't think it matters that much. The datacenter segment cares about overall efficiency, but not through something like this.
 

JD

Moderator
Staff member
Joined
Jul 16, 2007
Messages
10,645
Location
Toronto, ON
I would say #3 exists in laptops (switchable graphics). I don't think having switchable graphics in a desktop really makes sense as a dGPU can idle pretty low today (around 10-20W generally speaking). AMD tried Hybrid CrossFire X too back in the day, where you'd pair a low-end dGPU with the iGPU to get an "okay" GPU overall. That obviously didn't pan out well and nobody used it.

As mentioned above, #1 and #2 are done by Ryzen today. #4 kinda is with Ryzen Master/Intel XTU, though it's not in-your-face about what you need to do to the hardware to improve things. I don't think many manufacturers really want the general masses tinkering with their computers to that degree either, though, as it'll likely result in more RMA requests from user error.
 

CMetaphor

Quadfather
Joined
May 5, 2007
Messages
5,845
Location
Montreal, Canada
Hey hey replies!

Tbh I know some of this is done already, but I'm not sure it's to the same extent I'm thinking of. Can you put a huge 360mm rad on a Ryzen 5xxx and see 5GHz? I don't think the boost "trusts" the end user and its own sensors that much. And if 5GHz is needed on only 4 threads, I don't think anything nowadays can disable half its cores and turn off HT to reach it.

For a datacenter? Why the heck not? Do they *always* need the maximum number of cores at the maximum frequency possible? Or are there times when a job would run faster with fewer cores and more GHz? I can see huge power savings in that environment if such a scenario were common. (Quick back-of-envelope below.)
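Whether fewer-but-faster wins depends entirely on how parallel the job is, which a quick Amdahl's-law check in Python makes concrete (the core/clock configs mirror the numbers upthread; the parallel fractions are arbitrary examples):

```python
# Back-of-envelope check on "fewer cores, more GHz" using Amdahl's
# law. The configs mirror the numbers upthread; the parallel
# fractions are arbitrary examples.

def runtime(parallel_fraction, cores, ghz):
    """Relative runtime: serial part + parallel part, scaled by clock."""
    serial = 1.0 - parallel_fraction
    return (serial + parallel_fraction / cores) / ghz

for p in (0.50, 0.90, 0.99):
    wide = runtime(p, cores=16, ghz=3.0)   # 16 threads @ 3 GHz
    narrow = runtime(p, cores=4, ghz=5.0)  # 4 threads @ 5 GHz
    winner = "4 @ 5GHz" if narrow < wide else "16 @ 3GHz"
    print(f"parallel={p:.0%}: wide={wide:.3f}, narrow={narrow:.3f} "
          f"-> {winner} wins")
```

For a half-serial job the 4-core/5GHz config wins; once the job is roughly 90% parallel or better, the wide 16-core config wins. So a self-governing datacenter CPU would need to know something about its workload.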

Anyway, a lot of this is just musings, but I'm glad it inspired a bit of conversation! With these very difficult times, some extra interaction is welcome, in any form (for me at least).
 

lowfat

Moderator
Staff member
Joined
Feb 12, 2007
Messages
11,161
Location
Grande Prairie, AB
At least since Zen 2, AMD CPUs put all cores to sleep that aren't being used. If you are doing a single-core load, one core will boost to its max clock and almost all the other cores will be sleeping. Just browsing right now, 2 cores are sleeping, 5 cores are sub-500MHz, and 1 core is bouncing around 2GHz.
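If anyone wants to watch this themselves, here's a quick sketch using the third-party psutil package (per-core readings work on Linux; platform support varies):

```python
# Per-core clocks via the third-party psutil package. Parked or
# deeply sleeping cores typically report very low "current" values.

import psutil

for i, freq in enumerate(psutil.cpu_freq(percpu=True)):
    print(f"core {i}: {freq.current:.0f} MHz "
          f"(range {freq.min:.0f}-{freq.max:.0f} MHz)")
```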

And the better the cooling, the higher it will boost. Replace the stock cooler w/ a high-end air cooler or liquid and the CPU will absolutely boost to higher clocks and hold them longer. Although there are still obviously current limits, as you can still damage the silicon even if the cooling is exceptional.
 

sswilson

Moderator
Staff member
Joined
Dec 9, 2006
Messages
21,334
Location
Moncton NB
I don't know for sure, but don't installations like datacenters already incorporate load sharing/management? They might not turn processors off entirely, but I'm fairly certain racked servers can (and do) idle down to low-power states when not needed for the current load.
 

JD

Moderator
Staff member
Joined
Jul 16, 2007
Messages
10,645
Location
Toronto, ON
I would say the bigger shift happening in datacenters is the move towards ARM and having purpose-built CPUs. I believe Facebook, Amazon, and Google design their own hardware for use in their datacenters. I believe Microsoft is getting into this now too.

x86 might be reaching its end of life? ARM seems to be taking over due to how modular it is and the fact that anyone can license it and design it to their own specifications.
 