>>105935204
>Especially meme tier stuff like ARM that is very efficient when it's for a phone but as soon as you want to build real systems with it, it's fucking DOA trash.
So meme that Apple M chips are beating the crap out of competing x86 chips? Efficiency is everything in the current landscape. If you only need 10 watts to hit the same FLOPs your competitor needs 65 watts for, you can turn a knob, go to 65 watts yourself, and fuck your competitor so hard in the ass their noses start bleeding uncontrollably. That's why Apple keeps jumping from ISA to ISA (Power -> x86 -> Arm, maybe RISC-V next when it's ready?) without a single hint of fear of breaking old tools, and why they invest in things like Rosetta to keep those old tools running anyway.
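To put numbers on the knob-turning, here's a back-of-envelope in C using the post's 10 W / 65 W figures. The linear perf-with-power scaling is my simplifying assumption and a generous one; real silicon scales sublinearly once you push voltage and frequency:

```c
/* Sketch of the perf/watt argument, assuming (generously) that
 * performance scales roughly linearly with power inside the chip's
 * operating window. The 10 W / 65 W numbers are the post's hypothetical. */
#include <stdio.h>

int main(void) {
    double perf     = 1.0;   /* normalized throughput, same for both chips */
    double watts_lo = 10.0;  /* the efficient chip */
    double watts_hi = 65.0;  /* the competitor, at the same throughput */

    double perf_per_watt_lo = perf / watts_lo;
    double perf_per_watt_hi = perf / watts_hi;

    /* "turn the knob": run the efficient chip at the competitor's power */
    double perf_at_matched_power = perf_per_watt_lo * watts_hi;

    printf("perf/watt advantage:        %.1fx\n",
           perf_per_watt_lo / perf_per_watt_hi);   /* 6.5x */
    printf("throughput at matched 65 W: %.1fx\n",
           perf_at_matched_power);                 /* 6.5x */
    return 0;
}
```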
The same dumb reasoning
>hurr what do I need efficiency for, I've got POWAH
is why Apple pulls those insane numbers and there's nothing x86 can do to match it in PRICE/PERF or PERF/WATT. Sure, you can pull out a fucking EPYC or Threadripper to outperform it, but you'll pay $10k for it and it will guzzle 300 watts. And that's just the CPU, not the whole system.
And what's to say Apple couldn't decide on a whim to actually go into the server space and get some of that sweet sweet AI money? For now, only the fact that they stopped caring about servers a decade ago. And nothing stops nvidia from doing arm in the cloud either, and nvidia takes efficiency VERY seriously for AI (gaming is a meme for them).
Look, I'm a Linux x86 4evaR guy myself (pic related, that's my neofetch), but at this rate x86 will become a relic FAST; even EPYC is not going to keep x86 alive forever.
>>105935326
The reason arm chips can be more efficient than x86 is that arm has fixed-width instructions. If x86S could ditch variable-length instructions, they would gain a lot of silicon real estate. Make legacy x86 a co-processor that intercepts the variable-length shit, and you're set. You wouldn't even need that co-processor on mobile applications like phones.
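To make the decode argument concrete, here's a toy sketch (mine, not real decoder code, and the x86 opcode subset is deliberately tiny): on AArch64 every instruction is 4 bytes, so finding the boundary of instruction N+1 is pure arithmetic and a wide frontend can decode many instructions in parallel. On x86 you have to serially chew through the prefixes and opcode of instruction N before you even know where N+1 starts:

```c
/* Toy sketch, NOT a real decoder: handles a tiny subset of x86 just to
 * show that instruction length is data-dependent, so boundary-finding
 * is inherently serial. Function names are illustrative. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* AArch64: length is a constant, independent of the bytes themselves. */
size_t aarch64_insn_len(const uint8_t *code) {
    (void)code;
    return 4;
}

/* x86: length depends on prefixes, opcode, ModRM, displacement, immediate.
 * Toy subset: optional 0x66 operand-size prefix plus three opcodes. */
size_t x86_insn_len_toy(const uint8_t *code) {
    size_t i = 0;
    int opsize16 = 0;
    if (code[i] == 0x66) { opsize16 = 1; i++; }  /* operand-size prefix */
    switch (code[i]) {
    case 0x90: return i + 1;                     /* NOP */
    case 0xB8:                                   /* MOV eAX, imm */
    case 0x05:                                   /* ADD eAX, imm */
        return i + 1 + (opsize16 ? 2 : 4);       /* imm16 vs imm32 */
    default:   return 0;                         /* outside this toy */
    }
}

int main(void) {
    /* NOP; MOV AX, 0x1234; MOV EAX, 0x12345678 */
    const uint8_t prog[] = { 0x90,
                             0x66, 0xB8, 0x34, 0x12,
                             0xB8, 0x78, 0x56, 0x34, 0x12 };
    size_t off = 0;
    while (off < sizeof prog) {
        size_t len = x86_insn_len_toy(prog + off);
        if (!len) break;
        printf("insn at +%zu is %zu bytes\n", off, len);
        off += len;  /* serial dependency: need len(N) to locate N+1 */
    }
    return 0;
}
```

That serial `off += len` dependency is the point: a real x86 frontend throws predecode hardware and length-marker caches at this problem, which is exactly the silicon real estate a fixed-width ISA never has to spend.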