I feel like this is a mistake. Pat's strategy is aggressive but what the company needs.
Intel's stock is jumping at this announcement, but I look at it as a bad signal for Intel 18a. If 18a was looking to be a smash hit then I don't think Pat gets retired. If 18a is a success then it is an even more short-sighted decision by the board.
What this likely means is two-fold:
1. Intel 18a is being delayed further and/or there are significant issues that will hamstring performance.
2. Pat is/was unwilling to split the foundry and design business / be part of an M&A, but the board wants to do one or the other.
If 18a is not ready I think the best case scenario for Intel is a merger with AMD. The US Govt would probably sign off on it, with national security concerns overriding the fact that it creates an absolute monopoly on x86 processors. The moat of the two companies together would give the new combined company plenty of time to ramp up their fabs.
>> If 18a is not ready I think the best case scenario for Intel is a merger with AMD.
Aside from the x86 monopoly that would create, I don't think Intel has much of value to AMD at this point other than the fabs (which aren't delivering). IMHO if Intel is failing, let them fail and others will buy the pieces in bankruptcy. This would probably benefit several other companies that could use 22nm and up fab capacity and someone could even pick up the x86 and graphics businesses.
BTW I think at this point the graphics business is more valuable. Even though Intel is in 3rd place there are many players in the SoC world that can use a good GPU. You can build a SoC with Intel, ARM, or RISC-V but they all need a GPU.
Certainly feels like preempting news that Intel 18A is delayed.
Restoring Intel's foundry lead starting with 18A was central to Pat's vision and he essentially staked his job on it. 18A is supposed to enter production next year but recent rumors are that it's broken.
The original "5 Nodes in 4 Years" roadmap released in mid 2021 had 18A entering production 2H 2024. So it's already "delayed". The updated roadmap has it coming in Q3 2025 but I don't think anyone ever believed that. This after 20A was canceled, Intel 4 is only used for the Compute Tile in Meteor Lake, Intel 3 only ever made it into a couple of server chips, and Intel 7 was just renamed 10nm.
I have next to zero knowledge of semiconductor fabrication, but “Continued Momentum” does sound like the kind of corporate PR-speak that means “people haven't heard from us in a while and there's not much to show”.
I also would never have realized the 20A process was canceled were it not for your comment since this press release has one of the most generous euphemisms I've ever heard for canceling a project:
“One of the benefits of our early success on Intel 18A is that it enables us to shift engineering resources from Intel 20A earlier than expected as we near completion of our five-nodes-in-four-years plan.”
The first iteration of Intel 10nm was simply broken -- you had Ice Lake mobile CPUs in 2019, yes, but desktop and server processors took another two years to be released. In 2012 Intel said they would ship 14nm in 2013 and 10nm in 2015. Not only did they fail to deliver 10nm Intel CPUs, they failed Nokia's server division too, nearly killing it off in 2018, three years after their initial target. No one in the industry forgot that, it's hardly a surprise they have such trouble getting customers now.
And despite this total failure they spent many tens of billions on stock buybacks https://ycharts.com/companies/INTC/stock_buyback -- no less than ten billion in 2014, and over forty billion across 2018-2021. That's an awful, awful lot of money to waste.
Yes, yes, yes, of course, the infamous CPU released just so Intel middle managers could get their bonus. GPU disabled, CPU gimped, the whole thing barely worked at all. Let's call it the 0th iteration of 10nm; it wasn't real, there was like one laptop in China, the Lenovo IdeaPad 330-15ICN, and even that was a paper launch.
Most of the stock buybacks happened under Bob Swan though. Krzanich dug the grave of Intel but it was Swan who kicked the company in there by wasting forty billion. (No wonder he landed at a16z.)
Intel GFX held back the industry 10 years. If people thought Windows Vista sucked it was because Intel "supported" it by releasing integrated GPUs which could almost handle Windows Vista but not quite.
The best they could do with the GFX business is a public execution. We've been hearing about terrible Intel GFX for 15 years and how they are just on the cusp of making one that is bad (not terrible). Most people who've been following hardware think Intel and GFX is just an oxymoron. Wall Street might see some value in it, but the rest of us, no.
My understanding is that most of the complaints about Vista being unstable came from the nvidia driver being rather awful [1]. You were likely to either have a system that couldn't actually run Vista or have one that crashed all the time, unless you were lucky enough to have an ATI GPU.
Parent is talking about the GMA 900 from the i910 series chipsets.
It wasn't fully WDDM compatible, falling short only on a fairly minor (overall) part, but the performance was awful anyway, and not being able to run in full WDDM mode (i.e. Aero) didn't help either, partly because running in Aero was actually faster.
> If people thought Windows Vista sucked it was because Intel "supported" it by releasing integrated GPUs which could almost handle Windows Vista but not quite.
What does an OS need a GPU for?
My current laptop only has integrated Intel GPU. I'm not missing Nvidia, with its proprietary drivers, high power consumption, and corresponding extra heat and shorter battery life...
The GUI that >99% of users used to interface with the OS required a GPU to composite the different 2D buffers with fancy effects. IIRC, if you knew how to disable as much of that as possible, the performance without GPU acceleration was not great, but acceptable. It really sucked when you had an already slow system and the GPU pretended to support the required APIs, but the implementation didn't satisfy the implied performance expectation, e.g. advertising a feature as "hardware accelerated" while implementing it mostly on the CPU inside the GPU driver. Even the things the old Intel GPUs really did do in hardware were often a lot slower than a "real" GPU of the time. Also, the CPU and iGPU constantly fought over the often very limited memory bandwidth.
Compositors are generally switching to being GPU-accelerated, not to mention apps will do their own GPU-accelerated UIs just because the OS UI systems are all junk at the moment.
We are at the perfect moment to re-embrace software rasterizers because CPU manufacturers are starting to add HBM and v-cache.
An 8K 32bpp framebuffer is ... omg 126MB for a single copy. I was going to argue that a software rasterizer running on vcache would be doable, but not for 8k.
For 4k, with 32MB per display buffer, it could be possible but heavy compositing will require going out to main memory. 1440p would be even better at only 15MB per display buffer.
For 1440p at 144Hz and 2TB/s (V-Cache max), the best case is an overdraw budget of 984 full-screen passes per frame.
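For concreteness, here's a quick back-of-the-envelope check of those numbers in C++ (assuming 32bpp, i.e. 4 bytes per pixel, and taking the 2 TB/s figure from the comment above rather than any measured V-Cache spec):

```cpp
#include <cstdio>

// Rough framebuffer-size and overdraw math for a hypothetical software rasterizer.
int main() {
    struct Mode { const char* name; double w, h; };
    const Mode modes[] = {{"8K", 7680, 4320}, {"4K", 3840, 2160}, {"1440p", 2560, 1440}};
    const double bytes_per_pixel = 4.0;   // 32bpp
    const double bandwidth = 2e12;        // ~2 TB/s, assumed per the comment above
    const double refresh_hz = 144.0;

    for (const Mode& m : modes) {
        double frame_bytes = m.w * m.h * bytes_per_pixel;
        double frame_mib   = frame_bytes / (1024.0 * 1024.0);
        double overdraw    = bandwidth / (frame_bytes * refresh_hz);
        std::printf("%-6s %6.1f MiB/frame, ~%4.0f full-screen passes per frame at %.0f Hz\n",
                    m.name, frame_mib, overdraw, refresh_hz);
    }
    return 0;
}
// Prints roughly: 8K ~127 MiB, 4K ~32 MiB, 1440p ~14 MiB per frame,
// and on the order of 900-1000 passes per frame for 1440p@144 -- the same ballpark as above.
```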
I was doing a11y work for an application a few months back and got interested in the question of desktop screen sizes. I see all these ads for 4k and bigger monitors but they don't show up here.
And on the steam hardware survey I am seeing a little more than 5% with a big screen.
Myself, I am swimming in old monitors and TVs to the point where I am going to start putting Pepper's ghost machines in my windows. I think I want to buy a new TV, but then I get a free old TV. I pick up monitors that are left in the hallway where people keep tripping over them and take them home. Hypothetically I want a state-of-the-art monitor with HDR and wide gamut and all that, but the way things are going I might never buy a TV or monitor again.
All the browsers on my machine report my resolution as 1080p despite using 4k. I assume this is because I run at 200% scaling (I believe this is relatively common among anyone using a 4k resolution)
If the above-linked website uses data reported by the browser, I wonder how this scenario might be taken into consideration (or even if such a thing is possible)
A pixel is defined as 1/96th of an inch in the web world so it is dependent on your dpi/scaling. There is a window.devicePixelRatio that JavaScript can use to get actual pixels.
The PC I'm typing this on has two 27in 4k screens. I'm sitting so that I look at them from about 75cm away (that's 2.5 feet in weird units).
I archive most of my video files in 720p, because I genuinely don't see that big of a difference between even 720p and 1080p. It is definitely visible, but usually, it does not add much to the experience, considering that most videos today are produced to be watchable on smartphones and tablets just as much as cinema screens or huge TVs. I only make an exception here for "cinematic" content that was intended for the very big screen. That does not necessarily mean movies, but also certain YouTube videos, like "Timelapse of the Future": https://youtube.com/watch?v=uD4izuDMUQA - This one hits differently for me in 4K vs. just 1080p. Having to watch this in just 720p would be tragic, because its cinematography relies on 4K's ability to resolve very fine lines.
So why would I make a point to have both my screens be 4K? Because where else do you look at fine lines a lot? You're looking at it right now: Text. For any occupation that requires reading a lot of text (like programming!), 4K absolutely makes a difference. Even if I don't decrease the font size to get more text on screen at once, just having the outlines of the glyphs be sharper reduces eye strain in my experience.
No. It is in fact completely normal and has been repeated many times in human history. A unit based on an arbitrary fraction of the distance from the north pole to the equator is quite a bit more odd when you think about it.
Remember the days when you would be in danger of your apartment being burglarized and thieves would take your TV or receiver or CD player etc.? Nowadays that's called junk removal and you pay for it! How times have changed...
A complex desktop web form with several pages, lots of combo boxes, repeating fields, etc. I cleaned up the WCAG AA issues and even most of the AAA ones, but the AAA requirement for click targets was out of scope. It did have me thinking that I wanted to make labels (say on a dropdown menu bar) as big as I reasonably could, which in turn got me thinking about how much space I had to work with on different resolution screens, so I looked up those stats and tried to see what would fit in which constraints.
“Ubuntu 7.10 is the first version of Ubuntu that ships with Compiz Fusion enabled by default on supported hardware. Compiz Fusion, which combines Compiz with certain components developed by the Beryl community, is a compositing window manager that adds a wide range of visual effects to Ubuntu's desktop environment. The default settings for Compiz enable basic effects—like window shadows and fading menus—that are subtle and unobtrusive. For more elaborate Compiz features, like wobbling windows, users can select the Extra option on the Visual Effects tab of the Appearance Preferences dialog.”
Yeah, I worked on that but I didn't think that would count since it was a distro, not a desktop environment. In that case Novell shipped compiz in 2006 so even earlier.
MacOS in 2000 was still old MacOS, with no compositing at all. The NeXT derived version of MacOS was still in beta, and I tried it back then, it was very rough. Even once OSX shipped in 2001, it was still software composited. Quartz Extreme implemented GPU compositing in 10.2, which shipped in 2002.
Windows finally got a composited desktop in Vista, released in 2007. It was GPU accelerated from day one.
If I recall correctly, Vista had a hard dependency on DirectX 9 for Aero. The Intel GPUs embedded in mobile chipsets were almost, but not fully, DX9 capable, yet Intel convinced Microsoft to accept them as "compatible". That created lots of problems for everyone.
The modern paradigm of "application blasts out a rectangle of pixels" and "the desktop manager composes those into another rectangle of pixels and blasts them out to the screen".
It actually separates the OS from the GPU. Before WDDM your GFX device driver was the only software that could use GFX acceleration. After WDDM the GPU is another "processor" in your computer that can read and write to RAM; the application can use the GPU in user space any way it wants, then the compositor can do the same (in user space), and in the end all the OS is doing is managing communication with the GPU.
For that approach to work you need enough fill rate that you can redraw the screen several times per frame. Microsoft wanted enough headroom that they could afford some visual bling, but Intel didn't give it to them.
As people noted, most of your GUI is being rendered by it. Every video you watch is accelerated by it, and if it has some compute support, some applications are using it for faster math in the background (mostly image editors, but who knows).
For a smaller gripe: they also bought Project Offset, which looked super cool, to turn into a Larrabee tech demo. Then they killed Larrabee and Project Offset along with it.
> Intel GFX held back the industry 10 years. If people thought Windows Vista sucked it was because Intel "supported" it by releasing integrated GPUs which could almost handle Windows Vista but not quite.
Not sure about that. I had friends with discrete GPUs at the time and they told me that Vista was essentially a GPU stress test rather than an OS.
At the same time, Compiz/Beryl on Linux worked beautifully on Intel integrated GPUs, and was doing way cooler things than Vista was doing at the time (cube desktops? windows bursting into flames when closed?).
I'm a bit sad that Compiz/Beryl is not as popular anymore (with all the crazy things it could do).
I've been playing Minecraft fine with Intel GPUs on Linux for about 15 years. Works great. If Windows can't run with these GPUs, that's simply because Windows sucks.
I wonder how big a downside an x86 monopoly would actually be these days (an M4 MacBook being the best perf/watt way to run x86 Windows apps today as it is) and how that compares to the downsides of not allowing x86 to consolidate efforts against rising competition from ARM CPUs.
The problem with the "use the GPU in a SoC" proposition is everyone that makes the rest of a SoC also already has a GPU for it. Often better than what Intel can offer in terms of perf/die space or perf/watt. These SoC solutions tend to coalesce around tile based designs which keep memory bandwidth and power needs down compared to the traditional desktop IMR designs Intel has.
I'd like to address the aside for completeness' sake.
An x86 monopoly in the late 80s was a thing, but not now.
Today, there are sufficient competitive chip architectures with cross-compatible operating systems and virtualization that x86 does not represent control of the computing market in a manner that should prevent such a merger: ARM licensees, including the special case of Apple Silicon, Snapdragon, NVIDIA SOCs, RISC-V...
Windows, MacOS and Linux all run competitively on multiple non-x86 architectures.
> An x86 monopoly in the late 80s was a thing, but not now.
Incorrect, we have an even greater lack of x86 vendors now than we did in the 80s. In the 80s you had Intel, and they licensed to AMD, Harris, NEC, TI, Chips & Technologies, and in the 90s we had IBM, Cyrix, VIA, National Semi, NexGen, and for a hot minute Transmeta. Even more smaller vendors.
Today making mass market x86 chips we have: Intel, AMD, and a handful of small embedded vendors selling designs from the Pentium days.
I believe what you meant was that x86 is not a monopoly thanks to other ISAs, but x86 itself is even more of a monopoly than ever.
I believe in the 80s all those vendors were making the same Intel design in their own fabs. I don't think any of them did the design on their own. In the 90s some of them had their own designs.
Some were straight second sources, but they all had the license to do what NEC, AMD, and OKI did, which is alter the design and sell those variants. They all started doing that with the 8086. There were variants of the 8086, 8088, and 80186; I'm unaware of variants of the 80188 or 80286, although there were still multiple manufacturers -- I had a Harris 286 at 20MHz myself. Then with the 386 and 486 there were more custom variants. In the Pentium days Intel wouldn't license the Pentium design, but there were compatible competitors, as AMD also began 100% custom designs that were only ISA compatible and pin compatible, with the K5 and K6 lines.
At what point do we call a tweak to an original design different enough to count it... K5 and K6 were clearly new designs. The others were mostly Intel with some changes. I'm going to count the rest as minor tweaks and not worth counting otherwise -- but this is a case where you can argue where the line is, so others need to decide where they stand (if they care).
The NEC V20/30 series were significant advances over their Intel equivalent (basically all the 186 features plus more in an 8086/8 compatible package).
C&T had some super-386 chips that apparently barely made it to market (38605DX), and the Cyrix 5x86 (most of a 6x86) is substantially different from the AMD one (which is just a 486 clock-quadrupled)
> An x86 monopoly in the late 80s was a thing, but not now.
I think you're off by 20 years on this. In the 80s and early 90s we had reasonable competition from 68k, powerpc, and arm on desktops; and tons of competition in the server space (mips, sparc, power, alpha, pa-risc, edit: and vax!). It wasn't till the early 2000s that both the desktop/server space coalesced around x86.
Thank you for saying this. It's clear that processors are going through something really interesting right now after an extended dwindling and choke point onto x86. This x86 dominance has lasted entire careers, but from a longer perspective we're simply seeing another cycle in ecosystem diversity, specialized functions spinning out of and back into unified packages, and a continued downward push from commoditization forces that are affecting the entire product chain from fab to ISA licensing. We're not quite at the wild-west of the late 80s and 90s, but something's in the air.
It seems almost like the forces pushing against these long-term trends are focused more on trying to figure out how to saturate existing compute on the high end, and using that to justify drives away from diversity and vertically integrated cost/price reduction. But there are, long term, not as many users who need to host this technology as there are users of things like phones and computers who need the benefits the long-term trends provide.
Intel has acted somewhat as a rock in a river, and the rest of the world is finding ways around them after having been dammed up for a bit.
I remember when I was a senior in undergrad (1993) the profs were quite excited about the price/performance of 486 computers, which thoroughly trashed the SPARC-based Sun workstations that we'd transitioned to because Motorola rug-pulled the 68k. Sure, we were impressed by the generation of RISC machines that came out around that time, like SPARC, PA-RISC, and PowerPC, but in retrospect it was not that those RISC machines were fast; it was that the 68k was dying while x86 was keeping up.
The bandwagon was actually an Ice Cream truck run by the old lady from the Sponge Bob movie.
Intel had just wiped the floor with everyone using x86 servers; all the old guard Unix vendors with their own chips were hurting. Then Intel makes the rounds with a glorious plan of how they were going to own the server landscape for a decade or more. So in various states of defeat and grief much of the industry followed them. Planned or not, the resulting rug pull really screwed them over. The organs that operated those lines of business were fully removed. It worked too well; I am going to say it was an accident.
Intel should have broken up its internal x86 hegemony a long time ago, something they had been trying to do since the day it was invented. Like the 6502, it was just too successful for its own good. Only x86 also built up the Vatican around itself.
X86 is more than just the ISA. What’s at stake is the relatively open PC architecture and hardware ecosystem. It was a fluke of history that made it happen, and it would be sad to lose it.
Sadly with the rise of laptops with soldered-in-everything, and the popularity of android/iphone/tablet devices, I share some of layer8's worries about the future of the relatively open PC architecture and hardware ecosystem.
On the one hand I do get the concern, on the other there's never been a better time to be a hardware hacker. Cheap microcontrollers abound, Raspberry Pis, cheap FPGAs, one can even make their own ASIC. So I just can't get that worked up over PC architectures getting closed.
Hacking on that level is very different from building and upgrading PCs, being able to mix and match components from a wide range of different manufacturers. You won’t or can’t build a serious NAS, Proxmox homelab, gaming PC, workstation, or GPU/compute farm from Raspberry Pis or FPGAs.
We are really lucky that a hardware platform as diverse and interoperable as the PC exists. We should not discount it, and instead appreciate how important it is, and how unlikely it is for such a varied and high-performance platform to emerge again, should the PC platform die.
Today, sure. If you want to custom-build a "serious" system then x86 is likely your best bet. But this isn't about today; you can have that system right now if you want, it's still there. This is about the future.
All the use cases, except gaming PC, have "less serious" solutions in Linux/ARM and Linux/RISCV today, where I would argue there is more interoperability and diversity. Those solutions get better and closer to "serious" x86 solutions every day.
Will they be roughly equivalent in price/performance in 5 years... only time will tell, but I suspect the x86 PC is the old way and it's on its way out.
You can't really build a PC with parts other than x86. The only other platform you can really build from parts is Arm, with the high end Ampere server chips. Most other platforms are usually pretty highly integrated, you can't just swap parts or work on it.
You can't just buy an ARM or POWER motherboard from one place, a CPU from another place, some RAM sticks from another place, a power supply, a heatsink/fan, some kind of hard drive (probably NVMe these days), a bunch of cables, and put them all together in your basement/living room and have a working system. With x86, this is pretty normal still. With other architectures, you're going to get a complete, all-in-one system that either 1) has no expandability whatsoever, at least by normal users, or 2) costs more than a house in NYC and requires having technicians from the vendor to fly to your location and stay in a hotel for a day or two to do service work on your system for you because you're not allowed to touch it.
I was only just today looking for low-power x86 machines to run FreePBX, which does not yet have an ARM64 port. Whilst the consumer computing space is now perfectly served by ARM and will soon be joined by RISC-V, if a widely-used piece of free and open source server software is still x86-only, you can bet that there are thousands of bespoke business solutions that are locked to the ISA. A monopoly would hasten migration away from these programs, but would nonetheless be a lucrative situation for Intel-AMD in the meantime.
The fact that C++ development has been effectively hijacked by the "no ABI breakage, ever"/backwards compatibility at all costs crowd certainly speaks to this.
There are a lot of pre-compiled binaries floating about that are depended on by lots of enterprise software whose source code is long gone, and these are effectively locked to x86_64 chips until the cost of interoperability becomes greater than reverse engineering their non-trivial functionality.
The C++ language spec doesn't specify and doesn't care about ABI (infamously so; it's kept the language from being used in many places, and where people ignored ABI compat initially but absolutely needed it later, as with BeOS's Application Kit and Mac kexts, it's much harder to maintain than it should be).
"two factions" is only discussing source compatibility.
Indeed, I could use QEMU to run FreePBX on ARM64. However, the performance penalty would be pretty severe, and there isn't anything inherent to FreePBX that should prevent it from running natively on ARM64. It simply appears that nobody has yet spent the time to iron out any problems and make an official build for the architecture, but unfortunately I think there is still loads of other software in a similar situation.
I believe the "x86 monopoly" was meant to refer to how only Intel and AMD are legally allowed to make x86 chips due to patents. x86 is currently a duopoly, and if Intel and AMD were to merge, that would become a monopoly.
Oh, but xbar's interpretation of the phrase "x86 monopoly" is clearly the x86 architecture having a monopoly in the instruction set market. Under that interpretation, I don't really think it's relevant how many companies made x86 chips. I don't think xbar is necessarily wrong, I just think they're interpreting words to mean something they weren't intended so they're not making an effective argument
Did x86 have a monopoly in the 80s to begin with? If there is any period when that was true it would be the 2000s or early 2010s.
> intended so they're not making an effective argument
To be fair I'm really struggling to somehow connect the "x86 monopoly in the late 80s" with the remainder of their comment (which certainly makes sense).
x86 didn't have a monopoly, but IBM PC clones were clearly what everyone was talking about and there the monopoly existed. There are lots of different also ran processors, some with good market share in some niche, but overall x86 was clearly on the volume winners track by 1985.
> but overall x86 was clearly on the volume winners track by 1985.
By that standard, if we exclude mobile, x86 has a much stronger monopoly these days than in 1985 -- unless we also exclude low-end PCs like the Apple II and Commodore 64.
In 1990 x86 had ~80%, Apple ~7%, Amiga ~4% (with the remainder going to low-end or niche PCs), so again not that different from today.
This is all very true and why I think a merger between AMD and Intel is even possible. Nvidia and Intel is also a possible merger, but I actually think there is more regulatory concern with NVIDIA and how big and dominant they are becoming.
Intel and Samsung could be interesting, especially if it would get Samsung to open up more. Samsung would get better GPUs and x86, Intel gets access to the phone market and then you end up with things like x86 Samsung tablets that can run both Windows or Android.
Could also be Intel and Micron. Then you end up with full stack devices with Intel CPUs and Micron RAM and storage, and the companies have partnered in the past.
What part of a Samsung merger do you think would help them enter the phone market? My layman's understanding of history is that Intel tried and failed several times to build x86 chips for phones and they failed for power consumption reasons, not for lack of access to a phone maker willing to try their chips or anything like that.
They failed primarily for pricing reasons. They could make a low power CPU competitive with ARM (especially back then when Intel had the state of the art process), but then they wanted to charge a premium for it being x86 and the OEMs turned up their nose at that.
Samsung still has a fairly competitive process and could make x86 CPUs to put in their own tablets and laptops without having the OEM and Intel get into a fight about margins if they're the same company. And with the largest maker of Android devices putting x86 CPUs into them, you get an ecosystem built around it that you wouldn't when nobody is using them to begin with because Intel refuses to price competitively with ARM.
> An x86 monopoly in the late 80s was a thing, but not now
And then in the 2000s AMD64 pretty much destroyed all competing architectures, and in the 2010s Intel itself was effectively almost a monopoly (outside of mobile), with AMD on the verge of bankruptcy.
Wintel was a duopoly which had some power: Intel x86 has less dominance now partly because Windows has less dominance.
There are some wonderful papers on how game theory and monopoly plays out between Windows and Intel; and there's a great paper with analysis of why AMD struggled against the economic forces and why Microsoft preferred to team up with a dominant CPU manufacturer.
Ok, I'd like to pitch a Treehouse of Horror episode.
Part 1, combine branch predictor with the instruction trace cache to be able to detect workloads, have specific licenses for say Renderman, Oracle or CFD software.
Part 2, add a mesh network directly to the CPU, require time based signing keys to operate. Maybe every chip just has starlink included.
Part 3, in a BMW rent-your-seats move, the base CPU is just barely able to boot the OS, and specific features can be unlocked with signed payloads. Using Shamir secrets so that Broadcom AND the cloud provider are both required for signing the feature request. One can rent AVX512, more last level cache, ECC, overclocking, underclocking.
The nice part about including radios in the CPUs directly means that updates can be applied without network connectivity and you can geofence your feature keys.
This last part we can petition the government to require as the grounds of being able to produce EAR regulated CPUs globally.
Yeah, what both companies would need to be competitive in the GPU sector is a CUDA killer. That's perhaps the one benefit of merging: "Antel" could more easily standardize something.
They weren't interested in creating an open solution. Both Intel and AMD have been somewhat short-sighted and looked to recreate their own CUDA, and the mistrust of each other has prevented them from reaching a solution that works for both of them.
At least for Intel, that is just not true. Intel's DPC++ is as open as it gets. It implements a Khronos standard (SYCL), most of the development is happening in public on GitHub, it's permissively licensed, it has a viable backend infrastructure (with implementations for both CUDA and HIP). There's also now a UXL foundation with the goal of creating an "open standard accelerator software ecosystem".
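To make that concrete, here is a minimal, hedged SYCL 2020 sketch of the kind of code DPC++ compiles (a toy vector-scale kernel; the runtime handles device selection, JIT compilation, and data movement, so none of the vendor-specific setup appears in the source):

```cpp
// Minimal SYCL 2020 sketch: scale a vector on whatever device the runtime picks.
// Illustrative toy example only, not presented as Intel's recommended pattern.
#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> data(1024, 1.0f);
    sycl::queue q;  // default selector: a GPU if one is available, otherwise the CPU
    {
        sycl::buffer<float, 1> buf(data.data(), sycl::range<1>(data.size()));
        q.submit([&](sycl::handler& h) {
            sycl::accessor a(buf, h, sycl::read_write);
            h.parallel_for(sycl::range<1>(data.size()),
                           [=](sycl::id<1> i) { a[i] *= 2.0f; });
        });
    }  // buffer goes out of scope: waits for the kernel and copies results back
    std::printf("data[0] = %f\n", data[0]);  // expect 2.0
    return 0;
}
```

The same source can target Intel GPUs and, through the CUDA/HIP backends mentioned above, NVIDIA and AMD hardware as well.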
This is all great, but how can we trust this will be supported next year? After Xeon Phi, Omnipath, and a host of other killed projects, Intel is approaching Google levels of mean time to deprecation.
The Intel A770 is currently $230 and 48GB of GDDR6 is only like a hundred bucks, so what people really want is to combine these things and pay $350 for that GPU with 48GB of memory. Heck, even double that price would have people lining up.
Apple will sell you a machine with 48GB of memory for thousands of dollars but plenty of people can't afford that, and even then the GPU is soldered so you can't just put four of them in one machine to get more performance and memory. The top end 40-core M4 GPUs only have performance comparable to a single A770, which is itself not even that fast of a discrete GPU.
OpenCL was born as a CUDA-alike that could be applied to GPUs from AMD and NVIDIA, and to general purpose CPUs. NVIDIA briefly embraced it (in order to woo Apple?) and then just about abandoned it to focus more on CUDA. NVIDIA abandoning OpenCL meant that it just didn't thrive. Intel and AMD both embraced OpenCL. Though admittedly I don't know the more recent history of OpenCL.
This meme comes up from time to time but I'm not sure what the real evidence for it is or whether the people repeating it have that much experience actually trying to make compute work on AMD cards. Every time I've seen anyone try the problem isn't that the card lacks a library, but rather that calling the function that does what is needed causes a kernel panic. Very different issues - if CUDA allegedly "ran" on AMD cards that still wouldn't save them because the bugs would be too problematic.
> Every time I've seen anyone try the problem isn't that the card lacks a library, but rather that calling the function that does what is needed causes a kernel panic.
Do you have experience with SYCL? My experience with OpenCL was that it's really a PITA to work with. The thing CUDA gets right is the direct and minimal path to start running GPGPU kernels: write the code, compile with nvcc, cudaed.
OpenCL had just a weird dance to perform to get a kernel running. Find the OpenCL device using a magic filesystem token. Ask the device politely if it wants to OpenCL. Send over the kernel string blob to compile. Run the kernel. A ton of ceremony, and then you couldn't be guaranteed it'd work because the likes of AMD, Intel, or nVidia were all spotty on how well they'd support it.
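For anyone who hasn't suffered through it, here is a rough sketch of that host-side ceremony using the plain C API (error handling omitted, classic OpenCL 1.x-style calls, assuming a GPU device is present):

```cpp
// Hedged sketch of the host-side OpenCL "dance" for one trivial kernel.
// Every call below also returns/accepts an error code that real code must check.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc =
    "__kernel void scale(__global float* a) { a[get_global_id(0)] *= 2.0f; }";

int main() {
    // 1. Find a platform, then ask it for a GPU device.
    cl_platform_id platform; clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;     clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    // 2. Build a context and command queue around that device.
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    // 3. Hand over the kernel as a string blob and compile it at runtime.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "scale", nullptr);

    // 4. Copy data in, set arguments, launch, read results back.
    std::vector<float> host(1024, 1.0f);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                host.size() * sizeof(float), host.data(), nullptr);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    size_t global = host.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, host.size() * sizeof(float),
                        host.data(), 0, nullptr, nullptr);
    std::printf("host[0] = %f\n", host[0]);  // expect 2.0 if every step cooperated

    // 5. Tear everything down again.
    clReleaseMemObject(buf); clReleaseKernel(kernel); clReleaseProgram(prog);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}
```

Compare that with the nvcc workflow above: the equivalent CUDA host code is a handful of lines, which is a big part of why people reach for it first.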
SYCL seems promising but the ecosystem is a little intimidating. It does not seem (and I could be wrong here) that there is a de facto SYCL compiler. The goals of SYCL compilers are also fairly diverse.
> OpenCL had just a weird dance to perform to get a kernel running...
Yeah but that entire list, if you step back and think big picture, probably isn't the problem. Programmers have a predictable response to that sort of silliness. Build a library over it & abstract it away. The sheer number of frameworks out there is awe-inspiring.
I gave up on OpenCL on AMD cards. It wasn't the long complex process that got me, it was the unavoidable crashes along the way. I suspect that is a more significant issue than I realised at the time (when I assumed it was just me) because it goes a long way to explain AMD's pariah-like status in the machine learning world. The situation is more one-sided than can be explained by just a well-optimised library. I've personally seen more success implementing machine learning frameworks on AMD CPUs than on AMD's GPUs, and that is a remarkable thing. Although I assume in 2024 the state of the game has changed a lot from when I was investigating the situation actively.
I don't think CUDA is the problem here, math libraries are commodity software that give a relatively marginal edge. The lack of CUDA is probably a symptom of deeper hardware problems once people stray off an explicitly graphical workflow. If the hardware worked to spec I expect someone would just build a non-optimised CUDA clone and we'd all move on. But AMD did build a CUDA clone and it didn't work for me at least - and the buzz suggests something is still going wrong for AMD's GPGPU efforts.
> Programmers have a predictable response to that sort of silliness. Build a library over it & abstract it away
Impossible. GPGPU runtimes are too close to hardware, and the hardware is proprietary with many trade secrets. You need support from GPU vendors.
BTW, if you want reliable cross-vendor GPU, just use Direct3D 11 compute shaders. Modern videogames use a lot of compute, to the point that UE5 even renders triangle meshes with compute shaders. AMD hardware is totally fine, it’s the software ecosystem.
Those packages only really perform with low-precision work. For scientific computing, using anything but CUDA is a painful workflow. DOE has been deploying AMD and Intel alternatives in their leadership class machines and it's been a pretty bad speedbump.
There's already a panoply of CUDA alternatives, and even several CUDA-to-non-Nvidia-GPU alternatives (which aren't supported by the hardware vendors and are in some sense riskier). To my knowledge (this isn't really my space), many of the higher-level frameworks already support these CUDA alternatives.
And yet still the popcorn gallery says "there's no [realistic] alternative to CUDA." Methinks the real issue is that CUDA is the best software solution for Nvidia GPUs, and the alternative hardware vendors aren't seen as viable competitor for hardware reasons, and people attribute the failure to software failures.
10 years ago, I burned about 6 months of project time slogging through AMD / OpenCL bugs before realizing that I was being an absolute idiot and that the green tax was far cheaper than the time I was wasting. If you asked AMD, they would tell you that OpenCL was ready for new applications and support was right around the corner for old applications. This was incorrect on both counts. Disastrously so, if you trusted them. I learned not to trust them. Over the years, they kept making the same false promises and failing to deliver, year after year, generation after generation of grad students and HPC experts, filling the industry with once-burned-twice-shy received wisdom.
When NVDA pumped and AMD didn't, presumably AMD could no longer deny the inadequacy of their offerings and launched an effort to fix their shit. Eventually I am sure it will bear fruit. But is their shit actually fixed? Keeping in mind that they have proven time and time and time and time again that they cannot be trusted to answer this question themselves?
80% margins won't last forever, but the trust deficit that needs to be crossed first shouldn't be understated.
> alternative hardware vendors aren't seen as viable competitor for hardware reasons, and people attribute the failure to software failures.
It certainly seems like there's a "nobody ever got fired for buying nvidia" dynamic going on. We've seen this mentality repeatedly in other areas of the industry: that's why the phrase is a snowclone.
Eventually, someone is going to use non-nvidia GPU accelerators and get a big enough cost or performance win that industry attitudes will change.
> There's already a panoply of CUDA alternatives, and even several CUDA-to-non-Nvidia-GPU alternatives (which aren't supported by the hardware vendors and are in some sense riskier). To my knowledge (this isn't really my space), many of the higher-level frameworks already support these CUDA alternatives.
On paper, yes. But how many of them actually work? Every couple of years AMD puts out a press release saying they're getting serious this time and will fully support their thing, and then a couple of people try it and it doesn't work (or maybe the basic hello world test works, but anything else is too buggy), and they give up.
Why doesn't NVIDIA buy Intel? They have the cash and they have the pairing (M chips being NVIDIA and Intel's biggest competitors now). It would be an AMD/ATI move, and maybe NVIDIA could do its own M CPU competitor with…whatever Intel can help with.
is it a lot more competitive for Nvidia to just keep winning? I feel like you want two roughly good choices for GPU compute and AMD needs a shot in the arm for that somewhere.
It is absolutely more competitive when nVidia is a separate company from Intel, so they can't pull shit like "our GPUs only work with our CPUs" the way Intel is now pulling with their WiFi chips.
>Gelsinger, who resigned on Dec. 1, left after a board meeting last week during which directors felt Gelsinger's costly and ambitious plan to turn Intel around was not working and the progress of change was not fast enough, according to a person familiar with the matter. The board told Gelsinger he could retire or be removed, and he chose to step down, according to the source.
A lot of people on this thread are underestimating how much of a hold Intel has on the chips industry. In my experience, Intel is synonymous with computer chip for the average person. Most people wouldn't be able to tell you what AMD does differently, they'd just say they're a knockoff Intel. Technologically, both companies are neck and neck. But for the average person, it's not even close.
Marketing campaigns only go so far. They’ve been riding the “Intel Inside” slogan for 25 years.
In the meantime, AMD/ARM already won phones, tablets and game consoles.
Server purchasing decisions aren’t made by everyday people. Intel’s roadmap in that space slipped year for year for at least 10 of the last 15 years.
That leaves Intel with the fraction of the non-mac laptop market that’s made up of people that haven’t been paying attention for the last ten years, and don’t ask anyone who has.
I work in video games and I think it is still sometimes a problem to use computers that are not based on x86 processors, both in the tool chains and in software/engines. People here say that Intel has lost out on consoles and laptops, but in gaming that is because of x86-compatible AMD chips. Apple laptops were good for gaming when they had x86 and could dual boot. I see bugs people report on games made for Macs with x86 that don't work quite right with an Mx chip (though not a huge number).
A friend who worked in film post production was telling me about similar rare but annoying problems with Mx Apple computers. I feel like there are verticals where people will favor x86 chips for a while yet.
I am not as close to this as I was when I actually programmed games (oh so long ago!) so I wonder if this is just the point of view of a person who has lost touch with trends in tech.
>In the meantime, AMD/ARM already won phones, tablets and game consoles.
Don't forget laptops. Intel has been terrible on laptops due to their lack of efficiency. AMD has been wiping the floor with them for years now.
2024 is the first year that Intel has released a laptop chip that can compete in efficiency. I hope Intel continues to invest in this category and remains neck and neck with AMD if we are to have any hope of Windows laptops with decent battery life.
I doubt most people actually care about efficiency in a laptop. My wife is my anecdotal example. She's had a mac for years but refuses to give Apple one more penny because they've been awful - had to replace her previous laptop motherboard 7 times until we finally had to sue Apple in a class action which resulted in them sending her current 2015 MBP, which has now aged-out of MacOS updates. Sucks that this computer is now basically a paperweight.
Anyway, in my questions for her about what she really cares about in a new laptop, power efficiency was not a concern of hers. She does not care about efficiency at all. All she cared about was a good enough screen (2560x1440 or better), and a fast CPU to run the new Photoshop features, and the ability to move it from one location to another (hence the need for a laptop instead of a desktop). I'd wager that for most people, the fact that it's a portable computer has nothing to do with how long the battery lasts away from an outlet. She can transport the computer to another location and plug it in. There are very few situations that require extended use away from an outlet, and even in an airplane, we often see 120V outlets at the seats. There's really no use case for her that puts her away from an outlet for longer than an hour or two, so efficiency is the least of her concerns in buying a new laptop.
So we went with a new Dell laptop with the Intel i9-13900HX, which beats the Apple M4 Max 16 Core in terms of overall performance in CPU benchmarks. I would have looked at an AMD based laptop, but the price on this Dell and the performance of the i9 were great, it was $999 on sale. It's got a decent enough screen, and we can easily upgrade the RAM and storage on this laptop.
I doubt she'd even care if the new laptop didn't have a battery at all, so long as she can easily stuff it in a bag and carry it to another location and plug it in. I feel the exact same way, and I recently bought a new (AMD based) laptop, and power efficiency was not a thing in my decision making process at all. The battery lasts a few hours, and that's plenty. I don't get a hard-on for battery life, and I'm not really sure who does. Are these people dancing around with their laptops and simply can't sit still and plug it in?
I know plenty of people who care. Why? They always forget to plug in their laptop and then they want to open it and it's dead. Not to mention, x86 Windows machines do a poor job going to sleep.
Handed the wife an M2 MacBook Air and she's thrilled how little she has to plug it in. She goes weeks between charges sometimes.
> her current 2015 MBP, which has now aged-out of MacOS updates
Not trying to invalidate or lessen your complaint (which I completely agree with) but want to make sure you are aware of OpenCore Legacy Patcher. It's a little hacky by nature but can give some extra life to that machine: https://dortania.github.io/OpenCore-Legacy-Patcher/MODELS.ht...
yeah, I looked at it, but this MBP has a speaker that died, the SD card reader died a long time ago, had to replace the battery, it's slow, and it doesn't really play nicely on the SMB network, etc, etc. I'll be glad when it's gone. And OpenCore patcher seemed like a lot of hassle to keep putting up with this machine. Thanks for suggesting it though.
Efficiency and performance are heavily correlated in portable devices. You only have a certain TDP that you can utilize before the device throttles due to a lack of heat dissipation. The more efficient a CPU you have, the more you can accomplish before you hit temps that will affect performance.
There is another angle at power efficiency: my work laptop is so bad, moderate load makes the fan spin and higher load creates a very annoying noise due to the cooling needs. All these while performance is far from stellar (compared to my desktop).
>That leaves Intel with the fraction of the non-mac laptop market that’s made up of people that haven’t been paying attention for the last ten years, and don’t ask anyone who has.
Evidently, that leaves Intel the majority of the market.
Remember, most people don't care as much as you or I. If they're going to buy a laptop to do taxes or web browsing or something, they will probably be mentally biased towards an Intel-based chip. Because it's been marketed for so long, AMD comparatively seems like a super new brand.
People miss this. A lot of people will only buy Intel. Businesses and IT departments rarely buy AMD, not just out of brand loyalty, but because of the software and hardware features Intel deploys that are catered to the business market.
This is in large part an OEM issue. Dell or HP will definitely have an Intel version of the machine you are looking for, but AMD versions are hit and miss.
I think this is partly because big OEMs doubt (used to doubt?) AMD’s ability to consistently deliver product in the kind of volume they need. Partly it’s because of Intel’s historically anticompetitive business practices.
Hasn't changed, there was an article back in September saying that the relationship between AMD and laptop OEMs is rocky:
> Multiple reports, citing sources at laptop OEMs, have covered what is said to be poor support, chip supply, and communication from AMD with its laptop partners, leading to generally poor execution. Chip consultancy firm AC Analysis says AMD's shift of focus to AI and data centers has led to a "'Cold War ice age' in relationships with OEMs," leading to a loss of trust from its partners.
> And ~75% (Intel) vs ~25% (AMD) for data center servers.
IIRC their data center CPU revenue was about even this quarter so this is a bit deceptive (i.e. you can buy 1 large CPU instead of several cheaper ones).
Those two terms are related but definitely are never interchangeable. Market share is the portion of new sales a company is getting. Install base is the portion of existing in-use products that were from that company. Install base is essentially market share integrated over time, less systems that are discarded or otherwise taken out of service. If market share never changes, install base will approach the same proportions but it's a lagging indicator.
Sure, but if the point is showing how Intel isn't really in such a bad spot as one might think, just looking at the install base would be pretty deceiving and semi-meaningless.
I think data center revenue was in AMD's favor because AMD is second (obviously far behind NVidia) and Intel is third in AI accelerators, which both companies include in their data center numbers. So that pushes things in AMD's favor. I think on data center CPU's alone Intel is still ahead.
Data center revenue is not just CPU. It includes MI300 et al. So that's why data center revenue can be roughly equivalent between AMD & Intel while CPU revenue is still predominantly Intel.
Why do you think gaming community survey would be more relevant than Intel/AMD earning reports in which they unambiguously, for the most part, lay out the earnings per CPU type?
For PCs that can't be right. For overall consumer, Windows is at 25.75%, Linux is at 1.43% and MacOS is at 5.53%.
Ignoring ChromeOS, and assuming 100% of windows and linux is x86 (decreasingly true - the only win11 I’ve ever seen is an arm VM on my mac) and 100% of Mac is arm (it will be moving forward), that puts arm at 20% of the PC market.
Interpolation from your numbers puts intel at 64% (with a ceiling of 80% of PC; 25% of consumer computing devices unless windows makes a comeback).
There is a common usage of “PC” that excludes Macs, Chromebooks, and the like. It means the x86-based PC platform descendant from IBM PC compatibles, with BIOS and all.
I dunno, I've seen more and more people referencing the crash bugs in the latest gens and how Intel lied about it through their teeth. And Intel having lost to Apple on the CPU front, never having caught up to Nvidia on the GPU front, and basically just not doing anything for the last decade certainly hasn't helped their reputation.
Let them die. Maybe we'd actually see some new competition?
I doubt many people are making purchasing decisions based on Intel branding. Any kind of speed advantage has not been a dominant factor in the minds of most low information/brand influenceable consumers who are buying x86 machines. Everybody else looks at reviews and benchmarks where Intel has to show up with a good product and their branding doesn't get them much.
> The moat of the two companies together would give the new combined company plenty of time to ramp up their fabs.
Doubt.
Neither of the companies is particularly competitive on processor architecture, GPU architecture, or fabrication.
A merger of those entities looks like nothing but a recipe for further x86 stagnation and an even quicker death for the entities involved imho.
In particular I cannot see what's good in it for AMD. The fabs have no use/clear path forward. Their processors/GPUs either match or outmatch the Intel offering.
A Broadcom/Apple takeover of Intel sounds much more reasonable.
Apple is really good at making OTHER PEOPLE'S fabs work for their purposes. Running their own manufacturing was never particularly their forte.
Apple currently really enjoys being on the very latest process node. It's not a given that they could match or improve on that with their own fab (Sure, there is a lot of VLSI design and materials experience, but that does not automatically translate into a state of the art process, and is unlikely to contain the magic ingredient to get Intel back on top).
And in the unlikely case it SHOULD work, that will only invite further regulatory headaches.
How well is Apple doing with the modem team they purchased from Intel? I have yet to see Apple release their own in-house 5G modem. I don’t think Apple can magically turn things around for Intel and such an acquisition would be a waste of money.
Maybe for the fabs? It might be attractive for Apple to move production stateside via Intel's fabs, but on the other hand I don't think Intel's fabs can do what Apple wants.
>In particular I cannot see what's good in it for AMD. The fabs have no use/clear path forward. Their processors/GPUs either match or outmatch the Intel offering.
I can, but it's not technical. Intel has a huge advantage in several markets, and has strong relationships with many OEMs like Dell. Intel, even though their market cap is now a fraction of AMD's, still has a huge lead in marketshare in OEM systems and servers. (Several other posts in this thread have real numbers.)
If AMD bought out Intel, it would now get all of that, and be able to push all these OEM and server customers into AMD's solutions instead.
> is particularly competitive on processor architecture, GPU architecture, or fabrication.
Who is then? Apple is of course still ahead in lower power chips. But Apple is not in the desktop/workstation/server market and there are hardly any alternatives to AMD or Intel there.
e.g. the M2 Ultra, Apple's fastest "desktop" CPU, is slower than the 14700K you can get for $350. Seems pretty competitive...
>I feel like this is a mistake. Pat's strategy is aggressive but what the company needs.
He's also 63. Has plenty of money to survive the rest of his life. Has eight grandchildren. There's so much more to life than business. What's to say he doesn't want to simply enjoy life with more connection and community to loved ones around him?
That would be a healthy, balanced and long-term-oriented approach. But those who get to the level of CEO are subjected to intense forces that select against those traits.
I don’t know much about this guy but it’s reasonable to assume that any C-level exec will hold on to the position for dear life until they are forced out.
> any C-level exec will hold on to the position for dear life until they are forced out
I don't know. Frank Slootman's retirement from Snowflake earlier this year was certainly not celebrated by any significant stakeholders. I'd imagine at some point someone like Frank realizes that they are worth more than Tim Cook, they consider that they're in their mid-60s, and they decide the remaining time they have on earth might be better spent in other ways.
Every person in the workforce, no matter how ambitious or how senior, is forced into the calculus of money and power vs. good years remaining. I expect the rational ones will select the balance point for themselves.
There are certainly some indications this was something that was not long in the planning. On the other hand, when (solid) financial security is not a subject on the table, it's a lot easier for many folks to let go--especially given that they can probably do as many board or advisor gigs in the industry as they have an appetite for. Or just go on to a new chapter.
That "fabs will report to me" should be a tell that there is a lot of internal opposition...
Worse (for Intel), what can happen is the HP-isation of Intel: splitting up and selling off.
But there is a lot of good news for them in the CPU world: more billions granted, the military buying Intel, a new no-HT architecture. And 80-bit memories like in Multics; there can be true virtualisation on x86.
Even if x86 is dead, Intel still has fabs -- AMD could soon print in them :)
But those multigeneration refreshes are still a mystery -- is it Intel's problem, or maybe something else, e.g. simply someone holding some patent? :>
He got fired, dawg. This is like being told that someone is sleeping with the fishes and concluding that the guy just finds lying in bed next to a bunch of sturgeon the most relaxing way to sleep.
> The moat of the two companies together would give the new combined company plenty of time to ramp up their fabs.
I'm not convinced of this. Fabs are incredibly expensive businesses. Intel has failed to keep up and AMD spun off their fabs to use TSMC.
There is also ARM knocking at the door for general computing. It's already gaining traction in previously x86 dominated markets.
The model for US based fabbing has to include selling large portions of capacity to third party ASIC manufacturers, otherwise I see it as doomed to failure.
> There is also ARM knocking at the door for general computing. It's already gaining traction in previously x86 dominated markets.
I know anecdotes aren't data, but I was talking with a colleague about chips recently and he noticed that converting all of his cloud JVM deployments to ARM machines both improved performance and lowered costs. The costs might not even be the chips themselves, but less power and thermal requirements that lowers the OpEx spend.
They would have at least 5 years to figure it out before ARM becomes viable on desktop assuming there continues to be movement in that direction. There is so little incentive to move away from x86 right now. The latest Intel mobile processors address the efficiency issues and prove that x86 can be efficient enough for laptops.
IT departments are not going to stop buying x86 processors until they absolutely are forced to. Gamers are not going to switch unless performance is actually better. There just isn't the incentive to switch.
> There is so little incentive to move away from x86 right now.
> IT departments are not going to stop buying x86 processors until they absolutely are forced to.
IT departments are buying ARM laptops: Apple's.
And there is an incentive to switch: cost. If you are in AWS, you can save a pretty penny by adopting Graviton processors (see the pricing sketch after this comment).
Further, the only thing stopping handhelds from being ARM machines is poor x86 emulation. A solvable problem with a small bit of hardware. (Only non-existent because current ARM vendors can't be bothered to add it and ARM hasn't standardized it.)
Really the only reason ARM is lagging is that the likes of Qualcomm have tunnel vision on what markets they want to address.
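To put a rough number on the Graviton cost point above, here is a minimal sketch (assuming boto3 is installed and AWS credentials are configured; the m7g/m7i pair and the filter values are only illustrative choices, not a recommendation) that pulls on-demand list prices from the public AWS Pricing API:

    import json
    import boto3  # assumes AWS credentials are already configured

    def on_demand_usd(instance_type, location="US East (N. Virginia)"):
        """Fetch the Linux on-demand hourly list price for an instance type."""
        pricing = boto3.client("pricing", region_name="us-east-1")
        resp = pricing.get_products(
            ServiceCode="AmazonEC2",
            Filters=[
                {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
                {"Type": "TERM_MATCH", "Field": "location", "Value": location},
                {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
                {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
                {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
                {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
            ],
        )
        for item in resp["PriceList"]:
            product = json.loads(item)
            for term in product["terms"]["OnDemand"].values():
                for dim in term["priceDimensions"].values():
                    return float(dim["pricePerUnit"]["USD"])
        return None

    # Compare a Graviton instance with its x86 sibling of the same size:
    arm, x86 = on_demand_usd("m7g.xlarge"), on_demand_usd("m7i.xlarge")
    print(f"{arm} vs {x86}: ARM is ~{100 * (1 - arm / x86):.0f}% cheaper per hour")

The exact delta varies by family, region and workload, so treat any single pair as indicative rather than a general rule.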
> corporate purchased laptops and AWS instances, which are quite different.
They are similar. Particularly because developing on corporate hardware with an ARM processor is a surefire way to figure out whether the software you write will have issues with ARM in AWS (see the sketch below).
That's pretty much the entire reason x86 took off in the server market in the first place.
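On the "will my stack even run on ARM" question, one cheap pre-flight check is to ask PyPI whether your native dependencies publish aarch64 Linux wheels at all. A small sketch (the package and version below are arbitrary examples):

    import json
    import platform
    import urllib.request

    def has_aarch64_linux_wheel(package: str, version: str) -> bool:
        """True if the given PyPI release ships an aarch64 wheel, i.e. it
        should pip-install without a local compile on ARM servers."""
        url = f"https://pypi.org/pypi/{package}/{version}/json"
        with urllib.request.urlopen(url) as resp:
            release = json.load(resp)
        return any(
            f["filename"].endswith(".whl") and "aarch64" in f["filename"]
            for f in release["urls"]
        )

    print("local arch:", platform.machine())  # e.g. arm64 / x86_64
    print("numpy 2.1.0 aarch64 wheel:", has_aarch64_linux_wheel("numpy", "2.1.0"))

It is a crude signal, but a dependency with no aarch64 wheel is exactly the kind of thing you catch early by developing on an ARM laptop.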
> About corporate laptops, do you have evidence to show that companies are switching to Macbooks from HP/Dell/ThinkPads?
Nope. Mostly just anecdotal. My company offers devs the option of either an x86 machine or a mac.
Lots of companies do that, and I wouldn't call it an x86/ARM choice but rather the same old Windows/Mac choice. For Windows, only x86 makes sense for companies with lots of legacy software, and the only choice for Mac is ARM.
Can those users get all the software they need? Many users who want a Mac are told no because some weird software they need doesn't run on it. Others only get a Mac because some executive demanded IT port that software to the Mac. So long as companies have any x86-only software they won't let people switch. Often "art" departments get a specific exception, and they get to avoid all the jobs that require x86-only software just so they can run their Macs.
Of course these days more and more of that is moving to the cloud, and all IT needs is a web browser that works. Thus making their job easier.
> Of course these days more and more of that is moving to the cloud, and all IT needs is a web browser that works. Thus making their job easier.
This was the point I was going to make. While not completely dead, the days of desktop applications are quickly coming to a close. Almost everything is SaaS now, or Electron apps, which are highly portable.
Even if it's not SaaS or Electron, the only two languages I'd do a desktop app in nowadays are C# or Java. Both of which are fairly portable.
The big problem is Excel. Microsoft will make sure never to give that one up. Browser version isn't enough. I'm sure Apple has their reasons for not financing a full-fledged, compatible version, but if they would it would massively increase market share. I'm guessing it's strategic - e.g. not incurring the wrath of Microsoft or a different non-technical, non-marketshare reason.
The world outside of the SV tech bubble runs on Excel.
Back when I worked for an F500 company, my development workstation was every bit as locked-down as a phone. Complete with having to select any software I wanted to use from the company's internal "app store" rather than installing it directly.
I work in a large enterprise company, have both windows and mac machines, and excel works equally great in both, but more and more excel runs in a browser.
We mostly email links to spreadsheets running in cloud. So it really doesn't matter what your OS is any more from an excel perspective, as long as your computer can run a modern browser you are good.
From what I've seen your company is an exception. Yes, for 95% of users, browser/Mac Excel is more than enough. But the non-tech companies I've seen still don't want to get Macs because of that 5%; they just don't want to bother with supporting two platforms. And leadership obviously doesn't care/has no idea.
95% of people who use Photoshop would probably be served just fine by Krita, or even GIMP if they learned the somewhat wonky UI, and would save a ton of money in the process. However, people usually want to use the "standard" because of some vague fear that the alternative isn't 100.00% compatible, or that it won't have some obscure feature that they don't even know about yet, etc. I think Excel is exactly like this today, and so is Word. There are many alternatives that are just as good (and much cheaper) for 99% of users, but people still want to stick with "the standard" instead of taking a small risk on something different.
Maybe in the coming Great Depression of 2025, people will think differently and start looking at cheaper alternatives.
I agree, and I think my post supports that, in a way. I'm just saying 95% of people probably could work just fine with GSheets or LibreOffice or whatever, but the very same is true for MacExcel (even more true, in fact, because it's closer to WinExcel than the alternatives).
ARM does not inherently have "massively lower power consumption and waste heat", though.
Market forces have traditionally pushed Intel and AMD to design their chips for a less efficient part of the frequency/power curve than ARM vendors. That changed a few years ago, and you can already see the results in x86 chips becoming more efficient.
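As a toy illustration of the "less efficient part of the curve" point (a made-up first-order model, not vendor data: dynamic power scales roughly with V^2 * f, and sustaining a higher clock requires a higher voltage):

    def relative_power(freq_ghz, base_ghz=3.0, base_v=0.9, v_per_ghz=0.1):
        """Toy model: voltage rises linearly with frequency above the base point."""
        v = base_v + v_per_ghz * max(0.0, freq_ghz - base_ghz)
        return (v / base_v) ** 2 * (freq_ghz / base_ghz)

    for f in (3.0, 4.0, 5.0):
        print(f"{f} GHz: ~{f / 3.0:.2f}x perf for ~{relative_power(f):.2f}x power")

The constants are invented, but the shape is the point: the last GHz costs far more power than the performance it buys, and that is the part of the curve desktop x86 parts were traditionally tuned for.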
> They would have at least 5 years to figure it out before ARM becomes viable on desktop assuming there continues to be movement in that direction.
What's this based on? Surely the proportion of desktops that need to be more powerful than anything Apple is doing on ARM is very small. And surely Apple isn't 5 years ahead?
It is less about the development of ARM on desktop and more about software support. Most apps on Windows are still emulated. Some will not work at all. Games are kind of a mess on ARM. A ton of security software that IT departments require are only going to work x86. Businesses run legacy custom applications designed for x86. Some major applications still run on emulation only and are therefore slower on ARM.
Apple can force the transition. It is not so straightforward on Windows/Linux.
Honestly, I wouldn't put it past IBM to turn things around with a POWER revival. They've been doing some cool stuff recently with their NorthPole accelerator[1], using a 12nm process while at it, which indicates there's much room for improvement. It could eventually become a relatively open, if not super affordable, platform. There's precedent with OpenPOWER! And not to mention RISC-V, of course, championed by Jim Keller et al (Tenstorrent), but it's yet to blossom, all the while ppc64el is already there where it matters.
IBM did lay an egg with Power10, though. They cut corners and used proprietary IP and as a result there are few (are there any?) non-IBM Power10 systems because the other vendors stayed away. Raptor workstations and servers are a small-ish part of the market but they're comparatively highly visible - and they're still on POWER9 (no S1 yet).
They did realize the tactical error, so I'm hoping Power11 will reverse the damage.
Friends who work at Intel said Gelsinger and the board have done EVERYTHING wrong in the past four years, from blowing key customer accounts to borderline malfeasance with payouts. It's the board that needs to go too, for enabling it. The merger with AMD sounds like the right path.
The US government wouldn't let Intel go down; this is a matter of national security (the only home-grown leading-edge semiconductor fabs left on US soil) and the edge of US tech dominance.
When that happens typically the company starts optimizing for sucking money from the government. From the point of view of the consumer Intel would be finished.
That's the bet I made after the crash last summer. I think the USG only really cares about the fabs, as we've shown the ability to design better chips than Intel's here. Time will tell if I'm right.
Pretty much fair game for speculation. The only way this is not bad for the tech industry was if he resigned due to medical or age reasons. That would not be unexpected.
Doubtful that is the issue with Intel's track record. Curious when we will know if 18A is competitive or not.
> If 18a is not ready I think the best case scenario for Intel is a merger with AMD. The US Govt would probably co-sign on it for national security concerns overriding the fact that it creates an absolute monopoly on x86 processors. The moat of the two companies together would give the new combined company plenty of time to ramp up their fabs.
No way other countries would allow that. If A-tel (Amd-inTEL) cannot sell to the EU, the merger will not happen.
I tend to agree. He's not outside the window where someone might choose to retire. But no named permanent successor? Retiring immediately? Those tend to speak to a fairly sudden decision.
Just stop buying new Intel chips, and continue buying ARM chips. It's not like every single existing x86 CPU would need to be taken away and destroyed.
Apple has made it fairly obvious, even if it was not already with smartphones and chromebooks, that Arm is a viable, realistic, and battle-tested alternative for general purpose computing. Windows 11 even runs on Arm already.
It would not happen "tomorrow" - this would be years in court if nothing else. This would give Dell/HP/Lenovo/whoever plenty of time to start building Arm laptops & servers etc for the European market.
And who knows what RISC-V will look like in a few more years?
The EU has done a bunch of stupid anti-consumer shit in tech already (hello cookie warnings that everyone now ignores), so I would not be surprised if this happened.
Seize or terminate their patents and copyrights. Issue arrest warrants for criminal evasion. Compulsory licensing of x86 to a European design firm immunized by EU law.
> Perform rapid continental ARM transition
Yes.
Windows is on ARM. Apple is on ARM. AWS and Ampere make decent ARM servers. You have decent x86 user-space compatibility on ARM laptops. That is all users want.
I doubt it will cost 'mythical amounts of money'. Most users use a web browser and an office suite. I doubt they will know a difference for a while.
> Seize or terminate their patents and copyrights. Issue arrest warrants for criminal evasion. Compulsory licensing of x86 to a European design firm immunized by EU law.
My eyes rolled so far back I hurt myself.
Please provide some examples of where the EU has been able to do a fraction of what you listed to large, US based firms in the past.
Looking at the future, if you want a trade war and an excuse for the new US administration to completely neglect NATO obligations this is a great start.
People who are presumably very well-off financially can retire a tad on the early side for all sorts of reasons or a combination thereof. Certainly he has made some significant course corrections at Intel but the most charitable thing one can say is that they will take a long time to play out. As you say, a merger with AMD seems like a non-starter for a variety of reasons.
Is Intel really in such a dire situation that they need to merge with AMD?
AMD has been in troubling situations in the past (some thanks to Intel's illegal dealings), yet they managed to survive, and they were nowhere near Intel's size.
I think that Pat's strategy is what the fab needs to do to be successful.
However, I think the fab and design should be separate companies, with separate accountability and goals/objectives. There is just too much baggage in keeping them coupled. It doesn't let either part of the company spread their wings and reach their full potential when they are attached at the hip. From the outside perspective, that is the thing Pat has seemingly been focused on, keeping them together, and it's why people have lost faith in his leadership.
I also don't think that, from an investment/stock standpoint, accelerating the depreciation/losses related to restructuring into the most recent quarter was a wise decision, since what Intel really needed was a huge win right now.
Look at what Pat did to VMware. He's doing the exact same thing at Intel. He came in, muddied the waters by hiring way too many people to do way too many things and none of them got done appropriately. Pat is a huge part of the problem.
I had the unfortunate pleasure of watching him not understand, at all, VMware's core competency. It was a nightmare of misunderstanding and waste in that company under his leadership.
Intel turned into even more of a laughing stock under Gelsinger. I say: good riddance. He burned time, capital and people at both VMware and Intel. He's a cancer as a CEO.
When he came back to Intel I was saying that all this "finally, an engineer in charge" stuff was a misreading of Pat Gelsinger, and VMware was front and center in my thinking there.
Both of those things might be true, but to me, it looks more like the board is acting out of fear of shareholder lawsuits. Pat's strategy has significantly destroyed value because the market lacks visibility.
Dropping Pat will alleviate their feeling of having to do "something."
As for M&A, it wouldn't just have to be approved at the DoJ. And the Chinese will never ever approve of it (but would have to). If they do a transaction without approval from the CMA, it would be like a nuclear financial war.
I think it's high time to gut Intel into parts, à la GE. Sell the fabless side to QCOM or BCOM. Sell the fabs off one by one to GF, Tower, UMC or even TSMC. Find a PE firm for the leading edge and reconstitute it, with significant R&D credits, as a kind of Bell Labs 2.0.
Given the push of ARM designs into the desktop and server space, that monopoly doesn't seem to me as much of a danger as it might have a decade ago. I imagine any anti-competitive behavior in x86 would only accelerate that trend. Not that monopolies shouldn't be a concern at all, but my thought is that it's not quite that large of a danger.
If a breakup is in the works for Intel, a merger of the foundry side with GlobalFoundries would make more sense than AMD. Intel's foundries, even in the state they're in, would likely be a step up for GF. And given the political sensitivities, GF already has DoD contracts for producing chips.
> I think the best case scenario for Intel is a merger with AMD (…) it creates an absolute monopoly on x86 processors
If this happens, couldn't they force them to give out licenses as a condition? The licensing situation has been such an impediment to competition that it seems like it's about time anyway.
One argument I've heard in favor of the split is this: if you are AMD/NVDA/another top player, do you want to send your IP to an Intel-owned fab for production?
At least in theory, a fully independent, split/spun out standalone fab removes this concern.
That said - what does Intel have to offer the top players here? Their fabs aren't state of the art. And what's the standalone value of a post-spin fabless Intel if their chip designs are as far behind as their fabs?
This certainly presents a conundrum for US policy since we need fabs domestically for national security reasons, but the domestically owned ones are behind.
Eh, firewalls can be made strong enough, at least for some things. A software parallel is: you are Apple / Facebook, do you use Azure and/or AWS? I wouldn't, if it were me, but they do.
Azure/AWS is cloud/B2B; AAPL/FB are B2C consumer goods/services. Different customers, different industries. There is some overlap, but the moat is in other places.
AMD/NVIDIA are in the same business as Intel and have the same pool of customers.
Maybe, but I think AWS arguably is a different scale of automation. There's no Amazon human in the loop looking at your source. Sure, a human SRE could peek into your system, but that's something of an exception.
I can’t imagine fabs have that level of automation. It’s not like sending a file to your printer. It’s a multi month or year project in some cases to get your design produced. There’s many humans involved surely.
Well, for at least a time they would have the entire x86 market. That is not nothing. Also AMD may want to get back into the fab business. Without competition in x86 why not use Intel's fabs?
They don't need to merge with Intel to get the entire x86 market; they'll be getting that anyway if Intel folds.
Even if Intel gets bought out, it'll be in pieces. Nobody wants to enter the x86 market, but there may be smaller segments of the business that can help an ARM-based business, or someone looking to get into GPUs.
Why would Intel "fold"? Their revenue is still 2x higher than AMD's... I mean, obviously they are not doing great, but it's silly to say something like that at this point.
If the ISA patent licenses opened up, that might not be the case. When the topic comes up, it's more about Intel shutting down license transfers, so naturally companies have avoided x86.
- A token legal remnant of Intel, with 0 employees or properties, might suffice to keep that license ticking.
- If the stakes appeared to be "or America will lose its ability to make computers", then the government might find a judge willing to sign off on just about any sort of counterfactual, "because national security".
Merger with AMD is very unlikely for competitive reasons, but I've read some rumors that 1) Apple will push some production to Intel from TSMC and 2) Apple (and Samsung) are considering buying Intel.
Sure they won't allow Intel to be bought by a foreign company, but surely everyone would much rather see Intel being bought by literally any other company than AMD and Nvidia.
Nvidia makes a lot more sense than AMD; it is better for the market (preserving some competition in x86), and at least Intel does something Nvidia doesn’t.
China and the EU would never allow an Nvidia Intel merger, not under any scenario the US would find acceptable.
They'll barely allow Nvidia to acquire anybody at this point, no matter how small. See recent EU response to Run:ai. Intel would be considered 100x worse.
Because Intel, AMD, etc have offices in EU and China, for sales, distribution and also R&D. If you intend to operate in those markets you need to comply with local regulations.
The same reason as anything else. If the merger goes ahead over opposition from foreign markets, those markets can impose import tariffs or outright bans. Smaller markets may be ones these combined companies are willing to piss off, but not Europe. Their opposition is de facto a deal killer.
China doesn't care. They are banned from buying Western AI hardware or making their own AI hardware at TSMC/Samsung. They are pouring hundreds of billions into their semiconductor ecosystem.
Huawei is trying to establish a domestic Ascend/MindSpore ecosystem, but they are limited by the SMIC process (~7nm). The defect rate is allegedly rather high, but they are the only "official" game in town (other than smuggled NVIDIA cards or outsourced datacenters in the Middle East).
Then the point about China trying to block an Nvidia merger doesn't really make sense if they will be going their own way anyway. It would exist only to try to harm Nvidia before their homegrown alternatives ramp up.
What do they do that Nvidia doesn't (and that Nvidia would care about)?
They already do networking, photonics, GPUs, high speed interconnects, and CPUs. They are planning on selling their FPGAs (the Altera acquisition) to Lattice.
The only things left are their fab ops, thunderbolt/usbc, wifi, and ble.
Their fab ops would take over a decade of heavy investment to catch up to TSMC or Samsung and idk if even Nvidia is ambitious enough to take that on.
Wifi and BLE could be good additions if they wanted to branch out their mellanox portfolio to wireless. Thunderbolt/USB C also might be worthwhile.
But that IP is probably going to be cheaper to buy piecemeal so idk if it's worth it to buy the whole company outright.
I mean, ARM designs have had some wins lately, but x86 still does quite well in single-thread performance, right? Excluding Apple, because they are magic—Amazon, Ampere, these ARM CPUs make a reasonable pitch for applications that use lots of cores well, but that isn’t every application.
The colloquial "x86 license" is the AMD-Intel patent cross-licensing agreement, i.e. all patents related to x86 or any extensions of the ISA are automatically cross-licensed between the companies. While in the past the ISA patent story mostly leaned in Intel's favor, since AMD64/x86_64 really took off, ISA innovation has become a delicate stack of cards interwoven between Intel and AMD.
So if Intel sells, everyone is fucked until whoever buys can renegotiate the terms of the agreement. And if that's Nvidia, they could just sit dead on the IP and starve AMD of the bulk of their CPU revenue (which is what is keeping the company solvent). And requiring they keep the agreement they currently have would mean requiring AMD to give Nvidia a pretty detailed look at the AMD secret sauce which would increasingly push AMD into the red until they become insolvent, again leading to a Nvidia monopoly.
The US government as well as the EU will not allow that to happen so however things slice, ownership of the x86 ISA patents would not be going to Nvidia.
So for the 2009 cross licensing agreement, change of control terminates the cross licensing for both parties[0]. Since there are far more Intel x86 patents in the last 20 years than AMD, sounds like AMD would be the one more at risk, which I think agrees with what you say. In practice, any anti-trust review if Nvidia is the buyer would prevent Nvidia from using it to harm AMD's business.
I heard they’re building a iOS / android replacement. Think of the vertical integration!
I’m picturing a boot-looping cargo plane full of hummers dropping like a stone — doesn’t get much more vertically integrated than that. Think of all the layers they can eliminate.
I do not at all think it will happen, nor does it make any sense at all, but the rumours of Apple seemingly being interested in buying out Intel don't seem to be going away.
I can see them wanting certain parts of the business (GPU mainly) but on a whole it doesn't make a lot of sense.
I don't see Intel as a single entity being valuable to any US business really. You're essentially buying last year's fall line; there's very little use for Intel's fabs without a huge amount being spent on them to get them up to modern standards.
It'll all come down to IP and people; that'll be the true value.
Apple is interesting. They certainly have the money, and I think the idea of fabricating their own chips appeals to Apple, but at the end of the day I don't really think it makes sense. Apple won't want to fab for others or design chips for others.
The only way it happens is if it is kept separate kind of like the Beats acquisition. Apple lends some chip designs to Intel and Apple starts fabricating their chips on Intel fabs, but otherwise the companies operate independently.
Micron Technology is the only one that comes to mind, but they are more on the memory side of things - the last time they were on a level with Intel was in the 90s, when they both made DRAM, but Intel pivoted to processors and networking.
Apple is the obvious one. Essentially the only place with both the capital to do it and the extreme vertical integration enthusiasm. AMD hopefully still remembers running a fab.
The only thing that Apple might find even remotely useful are Intel's fabs. The rest of the business would have to be sold to someone else or closed down (which would never be approved by the government).
Even then there is zero indication that Apple would ever want to do their own manufacturing.
The government desperately wants US fabs because the military requires tech and it's increasingly dangerous to rely on globalization when the globe is going nuts -- the rest of it doesn't really matter.
I seem to recall that Intel was talking about the same kind of split. Maybe the Intel child company and AMDs would merge, or maybe they'll stay separate and the parents will merge?
> One of the key reasons why Rosetta 2 provides such a high level of translation efficiency is the support of x86-64 memory ordering in the Apple M1 SOC.
Dunno, who says a lot of effort was put into Rosetta 2?
It's mostly something that was needed in the first few years so you could run Chrome and Photoshop, but those have been ported now. It's mostly useful for running WINE but that's not super common outside games.
That said, a binary recompiler has a lot of uses once you have one: https://valgrind.org
Let's not walk around in circles. We already established that the chip was designed to facilitate the transition.
What you're saying is exactly what I'm saying. Having that x86 translation layer helped them sell macs. It doesn't matter if it is not needed now (it is needed now, otherwise it would have been removed).
So, yes. Apple has a popular new band in town. But they still need to play some x86 covers to sell tickets.
As I said, they have a good plan, but they're _not there yet_.
When will they be? When x86 translation is not shipped anymore. It's rather simple.
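If you want to see how much of your own toolchain still leans on that translation layer, macOS exposes it directly. A small sketch (assumption: this only works on macOS, where the sysctl key exists):

    import platform
    import subprocess

    def running_under_rosetta() -> bool:
        """On Apple-silicon macOS, sysctl.proc_translated is 1 for processes
        being translated by Rosetta 2 and 0 for native arm64 processes."""
        if platform.system() != "Darwin":
            return False
        try:
            out = subprocess.run(
                ["sysctl", "-n", "sysctl.proc_translated"],
                capture_output=True, text=True, check=True,
            )
            return out.stdout.strip() == "1"
        except subprocess.CalledProcessError:
            return False  # key is absent on Intel Macs / older macOS

    # An x86_64 Python running under Rosetta still reports machine() == 'x86_64':
    print(platform.machine(), "| translated:", running_under_rosetta())

When checks like this stop coming back true for anything you care about, that's roughly the "translation no longer needs to ship" milestone described above.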
That assumes a functional antitrust mechanism. We don't know what the next admin will do yet other than attempt technically illegal revenge on people they hate.
Agree with most, except that merging usually costs more, not less; it usually only stacks up because you get to axe a lot of employees in the name of synergy, I mean, duplicated departments.
That's the best exit case for the shareholders. It's the worst case for Intel's employees, customers and partners.
> would probably co-sign on it for national security concerns
This is equally laughable and detestable at this point in history. My personal security is not impacted by this at all. Weapons manufacturers honestly should not have a seat at this discussion.
> overriding the fact that it creates an absolute monopoly on x86 processors.
Yet this isn't a problem for "national security?" This is why I find these sentiments completely ridiculous fabianesque nonsense.
>I think the best case scenario for Intel is a merger with AMD
Oh no no no no. Oh hell no. For now, we need competition in the x86 market, and that would kill it dead. Imagine Intel re-releasing the Core 2 Quad, forever.
The problem is that this transition is very capital intensive and all the money was spent on share buybacks the past decades. The stock market looks at CPUs and GPUs and likes the latter a lot more so no fresh money from there. At the moment the only stakeholder with a vital interest in Intel semiconductor capabilities is the US government and even that may change as a result of Trump.
Nope. From what I heard out of VMware, he was just a bad manager. He seems to be a person skillfully playing the org game, yet when it came time to deliver he just flopped.
Struggling companies with totally rotten management like to bring in such "stars" (pretty shrewd people who built themselves a cute public image as supposedly talented engineers promoted into higher management on their merits) - Yahoo/Mayer comes to mind as another example - who de facto complete the destruction of the company while the management rides the gravy train.
I worked for 3 months at Intel. I can genuinely say that there is no saving that company. Recently they have been hiring many PhDs from various US universities (particularly Indians) to try to compensate (they offer generous stock and are hiring like crazy right now). There are two major problems I saw. The first is a lack of genuine interest in fabs: most people are there for the Intel name and then leave, or, in the case of Indians, they are there for visa purposes. (Mind you, we were not allowed to hire people from China since Intel is subject to export laws.) The biggest problem by far is lack of talent. Most of the talent I know is either at Apple or Facebook/Google, including those trained in hardware. Intel is bound to crumble, so I hope we as taxpayers don't foot the bill. There was unwillingness to innovate, and almost everyone wanted to maintain the status quo. That might work in traditional manufacturing (think tennis rackets, furniture...), but fabs must keep improving their lithography nodes or they get eaten by the competition.
A few years ago a CEO of Intel (not Gelsinger) said something like "our CPU architecture is so well-known every college student can work on it". A friend working at Intel at that time translated it to me: "we will hire cheap students to work on chips and we will let the expensive engineers leave". At least in his department the senior engineers left, they were replaced by fresh graduates. It did not work, that department closed. I have no idea how widespread this was inside Intel, but I saw it in other big companies, in some with my own eyes.
Funny you mention this. In my brief stint there, I saw a fresh college graduate get promoted to a lead for a Scanning Unit simply because the current lead was retiring (I was actually offered that position, but I was leaving and turned it down). They were trained in less than a month by shadowing the lead on-the-verge-of-retirement. The engineer who got promoted was at Intel less than a year, and had no prior internship experience (they were hired in 2021 when chips were in desperate need of talent. You might recall the chip shortage that affected cars etc.)
I know some people who understand x86 [0] very well. Most of them do not work at Intel. Those that do tend to be on the OSS side and don’t really have any sway in the parts of Intel that design new hardware.
And this is a problem! Most of Intel’s recent major architectural changes over the last decade or so have been flops. [1] Quite a few have been reverted, often even after they ship. I presume that Intel does not actually have many people who are really qualified to work on the architecture.
[0] I'm talking about how it interacts with an OS and how you put it together into a working system. Understanding stuff like SIMD is a different thing.
[1] AVX and its successors are notable exceptions, but they still have issues, and Intel did not really rock the early implementations.
> A few years ago a CEO of Intel (not Gelsinger) said something like "our CPU architecture is so well-known every college student can work on it". A friend working at Intel at that time translated it to me: "we will hire cheap students to work on chips and we will let the expensive engineers leave"
Reminds me of the Boeing managers saying that they didn't need senior engineers because their products were mature...
A few blown doors and deadly crashes later, that didn't age well...
> Most of the talent I know is either at Apple or Facebook/Google
A relative of mine with a PhD sounds exactly like this. Worked for Intel on chip-production tech then was hired by Apple about 10 years ago, working on stuff that gets mentioned in Apple keynotes.
While I am sure the foot soldier quality is important, we ought to put the burden on leadership a bit more. I am not sure AMD had a better talent pool (I don't work in the industry so I don't know!) ten years ago. Culture is predominantly shaped by those already in power -- it sounds like they need a change in culture.
Do you think Intel can improve the quality of engineers on the fab side by offering PhD-level salaries to people with a BS/MS from good US schools? I suspect that Intel hires PhDs from subpar universities.
> lack of genuine interest in fabs - most people are there for the Intel name
I can actually believe this. Most of the other arguments tend to be rather vague, waving at implementation details or some stock-related motivation (like "we need TSMC's business"). But a lack of genuine interest among employees, in a plan that was never sold to them or to the market especially effectively, seems fairly believable.
Most people are there for the chips, for making great designs in silicon, and being market leaders in CPU architecture. Not running the equivalent of an offshoring division making other people's stuff.
The entire implementation has seemed rather haphazard and not sold with much real motivation. Honestly, the entire situation feels a bit like Afghanistan (if that's a bit incendiary)
Nobody really knows why they're going. Nobody really knows what they're trying to accomplish. The objectives on the ground seem vague, ephemeral, and constantly changing. There's not much passion among the ground troops about the idea. The leaders always seem to be far away, and making strange proclamations, without actually putting boots in the dirt. The information released often feels multiple personalityish, like there's far too many cooks in the kitchen, or far too many puppeteers pulling strings in every direction. And afterward you find out it was mostly some dumpster fire driven by completely different motivations than what were publicly acknowledged.
The senior engineers I saw there are talented. And Intel has benefits and stock packages that rival those of big tech. I think I can expand on your point by saying the more senior engineers were risk averse and on the verge of retirement, and the young engineers were just there for the Intel name or some other reason. There are surprisingly few mid-career, long-term people there. This would be expected in software (Facebook/Google), but it is a recipe for disaster in hardware, where long-term thinking is critical to advance lithography (changes don't happen overnight). I was also surprised by how few of Intel's engineers believed in Intel. The starkest observation I made was that senior engineers would max out their stock purchase plan, but many young engineers would abstain. If the engineers don't believe in the product they are working on, I don't accept that the government must bail it out. I hope some investigative journalist writes a book on Intel and Boeing someday, as I would be curious how things unfolded and got to this point. There are many similarities (I never worked for Boeing, but I have friends in Seattle who describe the culture in similar terms to what I saw at Intel). Also, to your last point, the Intel name does not hold as much weight as it did in the Grove days.
Like I mentioned down below, used to work with the space agency back in the day, and by extension, Boeing. Even late 2000's, early 2010's, Boeing was a dumpster fire. Visited their local offices and the place looked like a hoarder hole. Boxes thrown everywhere, haphazard cabinets just left places, roaming meetings in sparse rooms. Seemed like homeless were camping there rather than a functional company.
The meetings with them felt that way too. Watch the same slides for months and wonder how nobody was moving anywhere with anything actually involving choices. "So, we've got these three alternatives for SLS, and like 10 other nice-to-have engineer pipe dreams." "Right, you'll choose the obvious low cost choice A. Why are we having this meeting again?" Many months later, after endless dithering, surprise, obvious choice using almost no hardware changes is chosen. All engineer nice-to-have pipe dreams are thrown away. There was much rejoicing at the money successfully squandered.
Thank you for the pay comparison website link. The pay scale difference is frankly a little challenging to even believe in some cases. Facebook pays an M2 software manager $1,426,471? Comparables at other (F)AANGs are $700,000 to $800,000. Double, at the same grade??? That seems like nonsensical escalation. It does not seem like FB/Meta/saggy infinity symbol formerly known as Prince is getting their money's worth.
When equity compensation is such a high proportion, one cannot simply compare the numbers. You have to take into account the volatility of the business and its impact on the share price (and hence on your compensation).
The equity compensation figures on levels.fyi are very rough estimates of what one might see, but it is possible that Meta has to offer more equity due to more perceived volatility in its business than, say, Apple or Microsoft. Or maybe more expected work hours/days.
But also, Meta has long been known to pay more, famously being the business that eschewed the collusion that resulted in the famous high-tech employee antitrust litigation:
Should have mentioned Hardware. I remember Intel pays more than Apple hardware when adjusted for COL (Intel in Oregon/New Mexico vs Cupertino). It has been a while since I looked into the salaries though. I agree they do not pay well relative to big tech on the software side of things though
Top talent who can shop around for more pay is going to demand higher pay in exchange for the opportunity costs for having to live in Oregon/New Mexico compared to Cupertino.
It does help to retain the talent; those 5-10y engineers have a house on a mortgage and need a higher pay incentive to uproot their lives. At least, that was probably the reasoning before; things have changed:
- WFH means you do not even have to move.
- Competitors have set up satellite offices in Hillsborough/Portland to poach talent.
- Intel does not feel like the stable employer anymore, with the almost yearly layoff waves since the BK era.
This case kind of depends on your priorities. If your goal is the Cupertino, Sunnyvale, Mountain View lifestyle and lateraling between the various companies in the area, then it's an opportunity cost. Has other positives also: low crime, wealthy neighbors, investment money, startup opportunities, mountains and hiking in the area, ocean nearby.
Has its downsides though. Having visited NASA Ames over in Mountain View because of space agency work, it has a lot of the same issues as Aspen. Lot of sameness, lot of architecture that's McWealthy pseudo-Pueblo, lot of expensive blast plain suburbia, lot of people that start with a "you're leaving money on the table" mentality, San Jose and San Fran traffic nearby can be nightmarish, and the crime of Oakland doesn't have that far to walk.
With many family in the Portland, Hillsboro, Beaverton area, that area also has its positives: relatively low crime in Hillsboro / Beaverton (Portland's not great on the East side)[1], wealthy neighbors, huge amounts of hiking / outdoors / mountain climbing / biking / botanical gardens / parks, much less of the blast plain suburbia, somewhat private feeling / hidden housing developments, ocean nearby, Columbia River nearby, significant art culture, lots of breweries / restaurants / food trucks / foodies, decent public transit, and if you want dry desert wealth like Cali then Bend is not that far away.
Comparing the shows Portlandia vs Silicon Valley and Weeds is not a bad approximation.
The problem is that Intel is poorly run. Intel should be printing money, and did for a long time until a string of bad leadership. If they had brought in Gelsinger after Otellini, which they were reported to have considered, the company might be in a much better position.
But alas, Intel is a mega bureaucratic company with a few tiny siloed teams responsible for innovating and everyone else being test and process jockeys. I think Gelsinger wasn't given enough time, personally, and agree with the OP that even someone like Elon would struggle to keep this sinking ship afloat at this point.
BK wanted wearables and Bob Swan wanted to cut costs; neither of them was a visionary, nor did they really understand that Intel was a hard-tech company. Intel had achieved such dominance in such an in-demand, growing market that all they had to do was make the technology better (smaller, faster, lower power) and the money would continue to flow. The mandate was straightforward and they failed.
I am sure Intel has enough cash to hire some good talent (like just offering talented people a next-level $FAANG salary); the problem is deeper in the hiring pipeline -- convincing people of the budget and actually scouting and retaining good people.
The companies that are big tech today took the risks and are now reaping the rewards. Intel decided not to take the risks, so now it doesn’t reap the rewards.
No one stopped Intel from paying for talent. Intel’s shareholders decided to pay themselves instead of investing in the business by hiring great employees. That’s how you go from being a leader to a laggard.
Difficult to see how this is anything other than a failure. I had high hopes when Gelsinger returned, but it seems that BK had done too much damage and Gelsinger didn't really get a grip on things. One of the things I heard that really highlighted the failure of this recovery was that BK had let Intel balloon in all sorts of ways that needed to be pared back and refocused, but head count under Gelsinger didn't just stay static but continued to significantly grow. It's no good giving the same politically poisonous, under-delivering middle management more and more resources to fail at the same job. They really need to clear house in a spectacular way but I'm not sure who could even do that at this point.
They have made too many bad choices, and everyone else has been eating their lunch for the last few years. They are a non-factor in the mobile space where ARM architectures dominate. They are a non-factor in the GPU market where NVDA dominates ahead of AMD. They were focused heavily on data centres and desktop/laptop CPUs where ARM is also increasingly making inroads with more efficient designs that deliver comparable performance. They are still struggling with their fab processes, and even they don't have enough money to make the investment needed to catch back up to TSMC. There is a good reason that even Global Foundries has given up on the bleeding edge some time ago.
They are in a deep hole, and it is difficult to see a future where they can restore their former glory in the foreseeable future.
> where ARM is also increasingly making inroads with more efficient designs that deliver comparable performance.
ARM isn't doing any such thing. Apple & Qualcomm are, though. ARM itself if anything looks weak. Their mobile cores have stagnated, their laptop attempts complete failures, and while there's hints of life in data center it also seems mediocre overall.
This feels a bit pedantic. ARM-the-CPU-architecture is increasingly making inroads with more efficient designs that deliver comparable performance to Intel's x86 chips, thanks to Apple and Qualcomm. ARM-the-holding-company is not doing that, they just make mediocre designs and own IP.
The post I was replying to had 3 other companies mentioned or referred to (INTC, AMD, and NVDA), it seems odd that they'd suddenly have meant ARM-the-ISA instead of ARM-the-company when ISA wasn't otherwise being discussed at all.
But even if they meant ARM-the-ISA, that'd still put it in a fragile position when the 2 clear market leaders in the ARM-the-ISA space have no particular allegiance to ARM-the-ISA (Apple having changed ISAs several times already, and QCOM both being active in RISC-V and also being sued by ARM-the-company)
Being a customer shouldn’t protect a company from lawsuits. ARM feels they have merit here, just like Qualcomm did when they sued Apple. It’s not that rare in the corporate setting to have suits between companies with otherwise amicable relationships.
The optics can still be terrible. Qualcomm (or more accurately, Nuvia, the company they acquired) produced some stunning chips with almost unheard of battery life for a laptop, and Arm are suing them to use their own inferior designs. They even tried to have end user laptops recalled and destroyed! There's no world where this looks good for Arm.
There’s a very clear bias and idolization in this comment of yours which is based on a misunderstanding of the case at hand.
ARM aren’t trying to force Qualcomm to use ARM’s cores. They’re trying to force them to update the licensing royalties to make up for the (as ARM sees it) licensing-term violations of Nuvia, who had a design license for specific use cases.
The part you reference (using ARM designs) is the fallback if Qualcomm lose their design license.
The destruction of the chips is standard practice to request in cases of license and IP infringement.
Qualcomm already had a design license prior to the acquisition of Nuvia. They were doing custom cores back in the original Kryo days which were an in-house custom ARMv8.0-A design.
Their design license doesn’t extend to Nuvia’s IP however according to ARM.
That is the entire crux of the issue. ARM gave Nuvia a specific license, and then Nuvia was acquired which transferred the IP to a license that ARM did not extend it to.
Hypothetically, if Qualcomm have broken their Arm licenses in a way that damages Arm’s business do you think Arm are supposed to just let them carry on? Should Arm say ‘legal action won’t look good so we’ll just let it pass’?
And the fact that Qualcomm got just about everyone to endorse the acquisition ahead of announcing it but didn’t even tell Arm is a bit of a tell.
Arm's major competition is RISC-V. Qualcomm engineers have been joining the important RISC-V committees recently. If Arm beats Qualcomm in the courts, Qualcomm will switch to RISC-V, and then Arm will have won the battle but lost the war.
If Qualcomm loses to Arm in the courts then they have a big problem in 2025 which switching to RISC-V at some point in the future will not solve for them.
They are not suing them to use "inferior designs," you're completely misrepresenting the issue. They are suing them over IP and contract violations. The ISA in question is irrelevant here, you could also get sued by SiFive if you licensed their cores and then did something that SiFive believed violated that license. It's not that deep.
Traditionally if you wanted SOTA semiconductors you'd go to IBM for the technology and then build your own fab. I'm not sure how true that is today but I wouldn't be surprised if it is.
I have to wonder if part of this is Intel wanting a Trumpier CEO so it can retain the favour of the US government, while Trump associates Gelsinger with the CHIPS act which he reflexively hates as a Biden thing.
It is not Apple's decision that is the bad sign. There can be plenty of reasons for that, as you mention, and it wouldn't be of note if their chips were poorer than or even similar to Intel's in performance.
It is the fact that they can build a much better chip, so far ahead of Intel by almost any metric, that is the red flag.
Re: fabs, Apple did by way of TSMC. Apple provided TSMC with massive amounts of cash in the 2010’s to fund new fabs. These were loans, but with very generous low interest terms that TSMC couldn’t have gotten elsewhere.
I think you’re correct that this was initially because Apple didn’t want to be at the behest of Samsung. But as Apple iterated on their mobile and tablet chips, they had the choice to bring them to their desktop and laptop lines or not to. They took that gamble and here we are today with non Intel Macs.
I would argue there were many good things but not well delivered. The Optane persistent memory should've been revolutionary for databases but Intel just put it out and expected people to do all the software.
I'm seeing the same thing now with Intel QAT / IAA / DSA. Only niche software support. Only AWS seems to have it and those "bare metal" machines don't even have local NVMe.
About 10 years ago Intel Research was publishing a lot of great research but no software for the users.
Contrast it with Nvidia and their amazing software stack and support for their hardware.
When I read the Intel QAT / IAA / DSA whitepaper I knew it was the beginning of the end for Intel.
Every aspect of that document was just dripping in corporate dinosaur / MBA practices.
For example, they include 4 cores of these accelerators in most of their Xeons, but soft fuse them off unless you buy a license.
Nobody is going to buy that license. Okay, maybe one or two hyperscalers, but nobody else for certain.
It's ultra-important with a feature like this to make it available to everybody, so that software is written to utilise it. This includes the starving university student contributing to Postgres, not just some big-enterprise customer that merely buys their software!
They're doing the same stupid "gating" with AVX-512 as well, where it's physically included in desktop chips, but it is fused off so that server parts can be "differentiated".
Meanwhile AMD just makes one compute tile that has a uniform programming API across both desktop and server chips. This means that geeks tuning their software to run on their own PCs are inadvertently optimising them for AMD's server chips as well!
PS: Microsoft figured this out a while ago and they fixed some of their products like SQL Server. It now enables practically all features in all SKUs. Previously when only Enterprise Edition has certain programmability features nobody would use them because software vendors couldn't write software that customers couldn't install because they only had Standard Edition!
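The practical consequence of that AVX-512 gating is that everyone writes dispatch code like the sketch below, and most of it quietly takes the fallback path (a Linux-only sketch; other platforms need something like the py-cpuinfo package instead of /proc/cpuinfo):

    def cpu_flags(path: str = "/proc/cpuinfo") -> set:
        """Return the kernel-reported CPU feature flags (Linux only)."""
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    if "avx512f" in flags:
        print("dispatch: AVX-512 kernels")
    elif "avx2" in flags:
        print("dispatch: AVX2 kernels")
    else:
        print("dispatch: scalar fallback")

When the feature is fused off on the machines developers actually own, the top branch never gets exercised, which is exactly the adoption problem being described.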
It’s worse than just the licensing. They’re exposed as PCI devices, so they don’t automatically show up in VMs, they don’t access user memory in a particularly pleasant manner, they aren’t automatically usable in unprivileged programs, etc.
And this destroys 99% (maybe 99.99%) of the actual economic value for Intel! What Intel needs is for people to integrate these accelerators into software the way that AVX is integrated into software. Then software and libraries can advertise things like “decompression is 7x faster on Intel CPUs that support such-and-such”, and then customers will ask for Intel CPUs. And customers will ask their providers to please make sure they support such-and-such feature, etc.
But instead we have utterly inscrutable feature matrices, weird licensing conditions, and a near-complete lack of these features on development machines, and it’s no wonder that no one uses the features.
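For the "advertise it and fall back gracefully" model to work, integration has to be about this boring. A sketch (qat_deflate is a hypothetical accelerator binding named purely for illustration; no such package is implied to exist):

    import zlib

    try:
        import qat_deflate as accel  # hypothetical hardware-offload binding
    except ImportError:
        accel = None

    def compress(data: bytes) -> bytes:
        """Use the hardware path when present, otherwise plain zlib.
        Libraries can only advertise "N x faster on CPUs with such-and-such"
        if the accelerated path is available nearly everywhere."""
        if accel is not None:
            return accel.compress(data)
        return zlib.compress(data, 6)

    print(len(compress(b"hello world " * 1000)))

That only pays off for Intel if the try branch is hit often enough for anyone to bother writing it.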
Then I guarantee you that there is an MBA in Intel somewhere crunching the numbers and calculating that "only 0.001% of our customers are utilising this feature", hence he's going to cut that team and stop offering the product in future Xeons.
Which means that any fool that did utilise this feature has tied their cart to a horse that's going to be put down.
Smarter people can see this coming a mile off and not bother investing any effort into support.
You see this again and again - when you gate technical features behind an enterprise barrier, the adoption rate is very low. AMD messed this up with CDNA and RDNA (now merging to UDNA), but Nvidia had it right from the start: CUDA from the bottom to the top.
Intel started doing this kind of right recently with their Xe cores (ideally the same programming model between their integrated GPUs and their datacenter HPC GPUs), but we’ll see how different the Xe-LPG and Xe-HPC end up being when Falcon Shores really comes out (I’m very worried about this description of it as an XPU, which seems like real confusion about what ML people want).
It sounds like most of the good talent has already left; the people still there are just there for the paycheck or resume padding and not for the long-term.
His comment has nothing to do with quality of software or quality of support, but is about dealing with NVidia. Trying to work with NVidia (as a hardware manufacturer) must have been frustrating, but that has nothing to do with quality of the software.
The video is 12 years old. A lot changed in the meantime.
AMD has open source drivers that crash often. NVidia has (or more precisely had) closed source drivers that nearly always work.
Torvalds wants open drivers, and NVidia doesn't do that. NVidia's drivers are better than their competitors by enough to make it worth buying NVidia even when their hardware is objectively worse, so much as I would prefer open-source in principle, I can understand why they don't want to give away the crown jewels.
Oh, yes. They spent too many years as the obvious #1, with a license to print money...when, longer-term, staying there required that Intel remain top-of-league in two profoundly difficult and fast-moving technologies (10e9+-transistor CPU design, and manufacturing such chips). Meanwhile, the natural rot of any large org - people getting promoted for their ladder-climbing skills, and making decisions for their own short-term benefit - was slowly eating away at Intel's ability to stay at the top of those leagues.
Intel needed to be split in two as well, which Gelsinger only half-heartedly did. He split the company into two functions - foundry and design, but didn't take that to its logical conclusion and split up the company completely.
Wouldn't that just pretty much guarantee that the foundry business fails, since a standalone Intel design company would have no incentive not to shift most of its manufacturing to TSMC? The same thing happened with AMD/GlobalFoundries.
AMD has a big wafer supply agreement with GlobalFoundries, and has since the spinoff. It was exclusive until the seventh WSA in 2019 which allowed AMD to purchase 7nm and beyond from other suppliers (without paying penalties) which was the only reasonable resolution after GloFo cancelled their 7nm fab (which may have been the best thing to happen to AMD). But AMD increased their GloFo orders in May and December 2021 during the chip crunch to $2.1B total through 2025. If you look at the first WSA amendment from March 2011 it includes AMD agreeing to pay an additional $430M if they get some (redacted) node in production in time.
Anyway, whatever woes GloFo is facing you can’t blame them on AMD. They had an exclusivity deal for a decade which only got broken when it was no longer tenable and AMD still buys a ton of their wafers. I suppose AMD may have bought more wafers if their products didn’t suck for most of that time but it had nothing to do with shifting production to TSMC which only happened after GloFo gave up.
Right. So GloFo couldn't keep up and abandoned the bleeding edge. What's the evidence that Intel foundries, divorced from design, wouldn't suffer the same fate?
No evidence. Intel doesn't have the resources to be fighting TSMC, ARM, Nvidia, Apple and Samsung in different technologies at the same time (foundry, GPUs, CPUs, NAND, SSDs, etc.).
They already sold the NAND memory business to SK hynix in 2021.
They will have to focus, and that means getting out of lines of business that will likely die anyway.
That would be better than going bankrupt and having your competitors pick up the pieces.
Even with the design side, Intel foundries have struggled to keep up with TSMC. It's not clear that the design side helps. My guess is that it's actually a question of corporate culture, and that AMD's ambitious, driven people stuck with AMD.
Yep, one business line is an albatross around the other. Some think this means it’s better they stay together. Others think you can save one by separating the other.
I, personally, found my life to improve when we decided that the cleaning lady could be someone from outside the marriage.
Agree with OP that Intel was probably too deep into its downward spiral. While it seems Pat tried to make changes, including expanding into GPUs, it either wasn't enough or too much for the Intel board.
Splitting Intel is necessary but probably infeasible at this point in the game. The simple fact is that Intel Foundry Services has nothing to offer against the likes of TSMC and Samsung - perhaps only cheaper prices, and even then it's unproven at fabbing any non-Intel chips. So the only way to keep it afloat is by continuing to fab Intel's own designs until the 18A node becomes viable/ready.
That means either he knew and allowed it to happen, which is bad, or he didn't know and allowed the GPU division to squander the resources, which is even worse. Either way, it was an adventure Intel couldn't afford.
Disagree on "failed on GPU" as it depends on the goal.
Sure, Intel GPUs are inferior to both Nvidia's and AMD's flagship offerings, but they're competitive on price-to-performance. I'd argue that for a 1st-gen product it was quite successful at opening up the market and enabling cross-selling opportunities with its CPUs.
That all said, I suspect the original intent was to fabricate the GPUs on IFS instead of TSMC in order to soak up idle capacity. But plans changed along the way (for likely performance reasons) and added to the IFS's poor perception.
The issue with the GPUs is that their transistor-to-performance ratio is poor. The A770 has about as many transistors as a 3070 Ti but only performs as well as a 3060 (3060 Ti on a good day).
On top of that, they are outsourcing production of these chips to TSMC, using nearly cutting-edge processes (Battlemage is being announced tomorrow and will use either TSMC 5 or 4), and the dies are pretty large. That means they are paying for dies the size of 3080s and retailing them at the prices of 3060s.
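A back-of-the-envelope version of that transistor argument, using approximate publicly reported transistor counts (treat these as ballpark figures from memory, not authoritative data):

    # Approximate die transistor counts, in billions (ballpark figures only).
    transistors_b = {
        "Arc A770 (ACM-G10)": 21.7,
        "RTX 3070 Ti (GA104)": 17.4,
        "RTX 3060 (GA106)": 12.0,
    }

    # If the A770 lands at roughly RTX 3060-class performance, it is spending
    # about this many times the transistors for the same output:
    ratio = transistors_b["Arc A770 (ACM-G10)"] / transistors_b["RTX 3060 (GA106)"]
    print(f"~{ratio:.1f}x the transistors for similar performance")  # ~1.8x

On a near-leading-edge TSMC node that ratio translates fairly directly into cost, which is the margin squeeze described above.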
It has taken Nvidia decades to figure out how to use transistors as efficiently as it does. It was unlikely for Intel to come close with their first discrete GPU in decades.
That said, it is possible that better drivers would increase A770 performance, although I suspect that reaching parity with the RTX 3070 Ti would be a fantasy. The RTX 3070 Ti has both more compute and more memory bandwidth. The only advantage the A770 has on its side is triple the L2 cache.
To make matters worse for Intel, I am told that games tend to use vendor specific extensions to improve shader performance and those extensions are of course not going to be available to Intel GPUs running the same game. I am under the impression that this is one of the reasons why DXVK cannot outperform the Direct3D native stack on Nvidia GPUs. The situation is basically what Intel did to AMD with its compiler and the MKL in reverse.
Specifically, information on these extensions is here:
Also, I vaguely recall that Doom Eternal used some AMD extension that was later incorporated into Vulkan 1.1, but unless id Software updated it, only AMD GPUs will be using that. I remember seeing AMD advertising the extension years ago, but I cannot find a reference when I search for it now. I believe the DXVK developers would know what it is if asked, as they are the ones who told me about it (as per my recollection).
Anyway, Intel entered the market with the cards stacked against it because of these extensions. On the bright side, it is possible for Intel to level the playing field by implementing the Vulkan extensions that its competitors use to get an edge, but that will not help it in Direct3D performance. I am not sure if it is possible for Intel to implement those as they are tied much more closely with their competitors’ drivers. That said, this is far from my area of expertise.
I will never understand this line of reasoning. Why would anyone expect an initial offering to match or best similar offerings from the industry leader? Isn't it understood that leadership requires several revisions to get right?
Oh, poor multi-billion-dollar company. We should buy its product with poor value, just to make it feel better.
Intel had money and decades of integrated GPU experience. Any new entrant to the market must justify the value to the buyer. Intel didn't. They could have sold them cheap to try to carve out a position in the market, though I think that would have been a poor strategy (they didn't have the financials to make it work).
I think you misunderstood me. I wasn't calling for people to purchase a sub-par product, rather for management and investors to be less fickle and ADHD when it comes to engineering efforts one should reasonably expect to take several product cycles.
Honestly, even with their iGPU experience, Arc was a pretty impressive first dGPU since the i740. The pace of their driver improvement and their linux support have both been impressive. They've offered some niche features like https://en.wikipedia.org/wiki/Intel_Graphics_Technology#Grap... which Nvidia limits to their professional series.
I don't care if they have to do the development at a loss for half a dozen cycles, having a quality GPU is a requirement for any top-tier chip supplier these days. They should bite the bullet, attempt to recoup what they can in sales, but keep iterating toward larger wins.
I'm still upset with them for cancelling the Larrabee uarch, as I think it would be ideal for many ML workloads. Who needs CUDA when it's just a few thousand x86 threads? I'm sure it looked unfavorable on some balance sheet, but it enabled unique workloads.
> I don't care if they have to do the development at a loss for half a dozen cycles,
And here is the problem. You are discussing a dream scenario with unlimited money. This thread is about how CEO of Intel has retired/was kicked out (far more likely) for business failures.
In the real world, Intel was in bad shape (see margins, stock price, etc.) and couldn't afford to squander resources. Intel couldn't commit and thus should have adjusted its strategy. It didn't. Money was wasted that Intel couldn't afford to waste.
Well, seeing as GPU is important across all client segments, in workstation and datacenter, in console where AMD has been dominant, and in emerging markets like automotive self-driving, not having one means exiting the industry in a different way.
I brought up Intel's insane chiplet [non-]strategy elsewhere in the thread as an example where it's clear to me that Intel screwed up. AMD made one chiplet and binned it across their entire product spectrum. Intel made dozens of chiplets, sometimes mirror images of otherwise identical chiplets, which provides none of the yield and binning benefits of AMD's strategy. Having a GPU in house is a no-brainer, whatever the cost. Many other decisions going on at Intel were not. I don't know of another chip manufacturer that makes as many unique dies as Intel, or has as many SKUs. A dGPU is only two or three of those and opens up worlds of possibility across the product line.
Pulling out of a vital long-term project because it can't deliver a short-term return would be a bigger waste. Unless you think Intel is already doomed and the CEO should be pursuing managed decline?
It's worth mentioning that IIRC the team responsible for the Arc GPU drivers was located in Russia, and after the invasion of Ukraine they had to deal with relocating the entire team to the EU and lost several engineers in the process. The drivers were the primary reason for the absolute failure of Arc.
Intel deserves a lot of blame but they also got hit by some really shit circumstances outside of their control.
He was CEO. Chief executing officer. It's literally his job to execute, i.e. fix that stuff/ensure it doesn't happen. Get them out of Russia, poach new devs, have a backup team, delay the product (i.e. no HVM until good drivers are in sight). That's literally his job.
This only reinforces my previous point. He had good ideas, but couldn't execute.
They chose to outsource the development of their core products to a country like Russia to save costs. How was that outside of their control? It's not like it was the most stable or reliable country to do business in even before 2022...
Individual Russian software developers might be reliable but that's hardly the point. They should've just moved them to the US, or even Germany or somewhere like that, if they were serious about entering the GPU market.
e.g. There are plenty of talented engineers in China as well, but it would be severely idiotic for any western company to move their core R&D there. The same applied to Russia.
I doubt they began working on ARC/XE drivers back in 2000. If the entire driver team being in Russia (i.e. Intel trying to save money) was truly the main reason why ARC failed on launch they really only have themselves to blame...
Not just in hindsight -- but by 2011 it was clear to anyone paying attention where Russia was heading (if not to war, then certainly to a long-term dictatorship). Anyone who failed to see the signs, or chose to intellectualize past them - did so willingly.
I think if you're CEO of Intel, some foresight might be in order. Or else the ability to find a solution fast when things turn unpredictably sour. What did he get a $16mil salary for?
It had been obvious for quite a while even before 2022. There were the Chechen wars, and Georgia in 2008, and Crimea in 2014. All the journalists and opposition politicians killed over the years, and the constant concentration of power in the hands of Putin. The Ukraine invasion was difficult to predict, but Russia was a dangerous place long before that. It’s a CEO’s job to have a strategic vision, there must have been contingency plans.
Wars involving the US in the 21st century:
War in Afghanistan (2001–2021)
US intervention in Yemen (2002–present)
Iraq War (2003–2011)
US intervention in the War in North-West Pakistan (2004–2018)
Second US Intervention in the Somali Civil War (2007–present)
Operation Ocean Shield (2009–2016)
Intervention in Libya (2011)
Operation Observant Compass (2011–2017)
US military intervention in Niger (2013–2024)
US-led intervention in Iraq (2014–2021)
US intervention in the Syrian civil war (2014–present)
US intervention in Libya (2015–2019)
Operation Prosperity Guardian (2023–present)
Wars involving Russia in the 21st century:
Second Chechen War (1999–2009)
Russo-Georgian War (2008)
Russo-Ukrainian War (2014–present)
Russian military intervention in the Syrian Civil War (2015–present)
Central African Republic Civil War (2018–present)
Mali War (2021–present)
Jihadist insurgency in Burkina Faso (2024–present)
I don’t know what you are trying to say. If you have a point to make, at least be honest about it.
Also, I am not American and not an unconditional supporter of their foreign policy. And considering the trajectory of American politics, it is obvious that any foreign multinational doing development in the US should have contingency plans.
My point was that great powers are always in some kind of military conflict, so it's not really a deciding factor when choosing where to build an R&D.
Putin's concentration of power has been alarming, but only since around 2012, to be honest. It was relatively stable between 2000 and 2012 in general (minus isolated cases of mysterious deaths and imprisonments). Russia was business-friendly back then, open to foreign investors, and most of Putin's authoritarian laws were yet to be issued. Most of the conflicts Russia was involved in were viewed as local conflicts in border areas (Chechen separatism, disputed Georgian territories, frozen East Ukrainian conflict, etc.). Only in 2022 did the Ukraine war escalate to its current scale, and few people really saw it coming (see: thousands of European/American businesses operating in Russia by 2022 without any issue)
So I kind of see why Intel didn't do much about it until 2022. In fact, they even built a second R&D center in 2020... (20 years after the first one).
The wars or military conflicts themselves are kind of tangential. It's the geopolitical risks that come along with them.
i.e. if you are an American/European company and you are doing business in Russia, you must account for the potential risk of suddenly having to pull out. The sanctions after 2014 were a clear signal and Intel had years to take that into account.
> So I kind of see why Intel didn't do much about it until 2022.
I'm pretty sure that the consensus (based on pretty objective evidence) is that Intel was run by incompetent hacks prior to 2021 (and probably still is).
> thousands of European/American businesses operating in Russia by 2022
Selling your products there or having local manufacturing is not quite the same as outsourcing your R&D there due to obvious reasons...
Weren't they pretty good (price/performance) after Intel fixed the drivers during the first year or so after release? The real failure was taking so long to ship the next gen..
For example, could "Intel A" continue to own the foundries, split off "Intel B" as the owner of the product lines, and then do some rebranding shenanigans so that the CPUs are still branded as "Intel"?
He should have cut 25% of the workforce to get started (and killed the dividend).
Also - the European expansion and Ohio projects, while good short-term strategies, were too early.
Cut the ship down to size and force existing sites to refocus or be closed. Get alarmist. Make sure you cut out all the bad apples. Prune the tree. Be ruthless and determined.
They should hire Reed Hastings now. He's the OG turnaround king.
Bob Swan was fine. He was no visionary, but he was trying to do the right thing with the advice he was being given, and actually started the cleanup of a lot of BK's mess.
BK will go down in history as the person who destroyed a once great engineering firm.
The market seems to think this is great news. I disagree strongly here, but I can see why traders and potentially the board thought this was the right move.
A lot of hate for Pat Gelsinger on Reddit and YouTube from armchair experts who don't really grasp the situation Intel were in or what was needed to turn the ship around, so if he was pushed it seems like it might be to pacify the (not particularly informed) YouTube/gamer community and bump the share price. That's all speculation, though.
I'd be interested to see who Intel intends to get to run the company in his place, as that would signal which audience they're trying to keep happy here (if any).
Agreed. My career at Intel coincided with Pat's, although I jumped ship a little earlier. Admittedly this means I probably have a little bias, but based on my hundreds of conversations with Intel vets I do think his business strategy was the right decision. He came on board a company years behind on process, packaging, and architecture technology after years of mismanagement by a largely nontechnical staff, which favoured buybacks and acquisitions over core business practice.
He had two routes with the capital available following a cash injection from COVID-19 and the rapid digitization of the workplace - compete with AMD/NVIDIA, or compete with TSMC/Samsung. The only sensible option that would capture the kind of capital needed to turn the ship around would be to become a company critical to the national security of the US during a time of geopolitical instability, onshoring chip manufacture and receiving support from the government in doing so. He could either compete with competitors at home or those abroad, but not both simultaneously. The thesis makes sense; you've lost the advantage to NVIDIA/AMD, so pivot to become a partner rather than a rival.
I don't think it's a coincidence that just a week after Intel finally received the grant from the government, he announced his departure. The CHIPS Act was a seminal moment in his career. It makes sense he'd want to see that through to completion. He's 63; now is as good a time as ever to hand over the reins, in this case to a very capable duo of MJ and Zinsner (who were always two of the most impressive EVPs of the bunch in my book).
I'm not in the industry, but from what I gather, I agree with you 100%. Bloomberg published an article on the matter, though, and it seems they are reporting that Gelsinger was pushed out by a frustrated board because of "slow progress." This is a real head-scratcher to me, even as someone looking in:
Gelsinger always said this was a 5-year plan, yet the board jettisons him at 4 just as IFS customers are starting to ramp up, the 18A node is close to readiness - the company's saving grace at this point - with Panther Lake on the horizon and Altera preparing for IPO in 2026 which should be a relatively good cash injection with PE investors already negotiating stakes. Maybe I just don't have the whole picture, but it seems poorly timed.
I hate the idea that the board might do this just as Intel's prospects start looking up after years of cost-cutting measures to bring the company back to profitability, and then take credit for a "miraculous turnaround" that was actually instigated 4 years prior by the same person they sacked. It's like an incoming political party taking credit, within a couple of months of taking office, for a good economy it had no hand in creating.
Pat seemed to understand the criticality of a fabrication process lead in today's day and age. Hence his push and decision to invest in IFS, plus winning over the government funding to sustain the effort.
In short, a bad or subpar chip design/architecture can be masked by having the chip fabricated on a leading edge node but not the inverse. Hence everyone is vying for capacity on TSMC's newest nodes - especially Apple in trying to secure all capacity for themselves.
The market isn't really the greatest indicator of anything. Gelsinger has spent three years trying to turn Intel around and the company is only now starting to turn the wheel. It will be at least another three years before we see any results. The market doesn't have three years of patience; three months, maybe.
I hold no opinion on Pat Gelsinger, but changing the CEO in the middle of ensuring that Intel remains relevant in the long term seems like a bad move. Probably his plan for "fixing" Intel is too slow for the market and the board. Let's see who takes over; if it's not an engineer, then things just became more dangerous for Intel. The interim people are an administrator and a salesperson, and that does not bode well.
IIRC, Lisa Su and her team took nearly a decade to orchestrate the turnaround at AMD, and they are still a distant second player in GPUs. Expecting Pat Gelsinger to turn around Intel (or any company in an industry with such long development and tech lead times) and replacing him after 3 years - given that he is an engineer with extensive domain and leadership experience - seems reactive, as opposed to thoughtful and directed.
Wonder if they will approach Lisa Su to take the job now :D
It takes something like 5 or 6 years to go from the drawing board for a chip design, and many years to create a process node. Gelsinger hasn't really even had the chance to execute on designs that were started during his tenure. My understanding is that would've started with Intel 18A.
That doesn't really answer the question. Yes he has started initiatives that will take 5-6 years to pan out. Is there an early indication that they aren't all duds? How can anyone state with any certainty that Intel is on a better path today than it was 4 years ago when every single measurable metric is continuing to decline?
Both of the co-CEOs have a finance background. I think that is rather telling. They are trying to appeal to Wall Street and potentially have people who are equipped to manage an M&A deal.
The guy that got them into this situation started as an engineer. Swan was a money guy, but he did better than Krzanich. So, I think it is just hard to guess based on somebody’s background how they’ll do.
However, IMO they need somebody like Lisa Su, somebody with more of an R&D-engineering background. Gelsinger was a step in the right direction, but he got a master's degree in the 80's and did technically hardcore stuff in the late 80's and early 90's. That was when stuff just started to explode.
Su finished her PhD in the 90’s, and did technically interesting stuff through the computer revolution. It appears to have made a world of difference.
Gelsinger retires before Intel Foundry spin is ready. This means trouble.
Intel invested tens of billions into the 20A and 18A nodes, but it has not paid off yet. News about yields seemed promising. Massive investments have been made. If somebody buys Intel foundries now, they pay one dollar and take debt + subsidies. Intel can write off the debt but doesn't get the potential rewards.
Foundry is the most interesting part of Intel. It's where everything is happening despite all the risk.
> Massive investments have been made. If somebody buys Intel foundries now, they pay one dollar and take debt + subsidies. Intel can write off the debt but doesn't get the potential rewards.
You are describing the start of the vulture capitalist playbook for short-term profits and dividends right there: take subsidies and loans, sell everything to pay dividends (or rent it back to yourself via a shell company), then let the remaining stuff die and sell it off for an additional small profit. Don't know how it works here but it sure sounds like it.
I'm describing massive investment in fundamental manufacturing infrastructure. Depreciating that capital expenditure takes a long time. The exact opposite of vulture capitalism and short-termism.
> Don't know how it works here but it sure sounds like it.
Thank you for being honest and saying that you are describing how things you don't understand sound to you.
Sad to see Gelsinger go, but despite the churn, I don't think Intel is doomed. At worst, per their most recent Q3 results, I see $150Bn in assets and about 4Bn shares outstanding, and I see the US Gov (both incoming and outgoing) gearing up for a long war against China where Intel would be an asset of national importance.
My back-of-envelope calculation says Intel should be worth about $37 a share (150/4). If they stumble when they report Q4, I think the US Gov will twist some arms to make sure that fresh ideas make it onto the board of directors, and perhaps even make Qualcomm buy them.
I do think that Intel needs to ditch some of their experiments like Mobileye. It's great (essential) to try and "rebalance their portfolio" away from server/PC chips by trying new businesses, but Mobileye hasn't grown very much.
Taiwanese law forbids TSMC from manufacturing chips abroad using their latest process, so no 2nm in the US fabs; this leaves Intel's 18A as the most advanced one on US soil.
TSMC's Arizona 4nm fabs are a contingency. TSMC received $6+ billion from the CHIPS and Science Act, and the fab opening is delayed until 2025 because they don't have the local talent yet.
I would say yes. Speculation follows: if the unthinkable happens, and assuming it devolves into a cold rather than a hot war (e.g. the Trump administration decides not to send soldiers and weapons to Taiwan but lets the Chinese have the island), then US TSMC is appropriated, Intel or AMD or Qualcomm are told to run it, and all three are instructed to ramp up manufacturing capacity as aggressively as possible. If it's more like the status quo saber rattling, then I think USG would still want a 100% domestic supplier acting as a second source for the local economy and a primary source for anything the defense-industrial complex needs.
I imagine it takes a lot behind the scenes - especially priceless professional experience concentrated at HQ - to know how to set up new sites, set the future direction at all levels of the organization/timeframes, etc. etc. etc. What happens to the fabs long-term if leadership from Taiwan is decapitated?
It's very back-of-envelope. Basically price to book value, except I didn't account for liabilities. 150B in assets divided by 4B shares implies one share should account for 150/4 = $37.50 worth of the business. It's a quick sanity check but absolutely not a robust yardstick!
Hi! CEOing has always been a hobby of mine, but with the recent news, I thought this would be a great opportunity for us to synergize.
Sure, I don’t have much experience in the chip making world but I did buy a Pentium when I built my own computer. I also have never run a multinational company but I visited several other countries on a Disney cruise.
Let me lay it out- you would get a dynamic new CEO at a fraction of the going market rate, and I’d get to take my Chief Executiving skills to the next level.
Let’s do this together!
You can reply here and let me know if you’re interested.
I am amazed -- stunned -- how many people here seem to think that Gelsinger was the right person for the job, but wronged by the people who pre-dated him (BK?!) or the CHIPS act (?!) or other conditions somehow out of his control.
Gelsinger was -- emphatically -- the wrong person for the job: someone who had been at Intel during its glory days and who obviously believed in his heart that he -- and he alone! -- could return the company to its past. That people fed into this messianic complex by viewing him as "the return of the engineer" was further problematic. To be clear: when Gelsinger arrived in 2021, the company was in deep crisis. It needed a leader who could restore it to technical leadership, but could do so by making some tough changes (namely, the immediate elimination of the dividend and a very significant reduction in head count). In contrast, what Gelsinger did was the worst of all paths: he allowed a dividend to be paid out for far (FAR!) too long and never got into really cutting the middle management undergrowth. Worst of all, things that WERE innovative at Intel (e.g., Tofino) were sloppily killed, destroying the trust that Intel desperately needs if it is to survive.
No one should count Intel out (AMD's resurrection shows what's possible here!), but Intel under Gelsinger was an unmitigated disaster -- and a predictable one.
I don't think you're wrong but the overarching structure of the chip business is very different from times gone by. It's not even clear what "technical leadership" should mean. When Intel was the leading edge volume leader just on their own processor line, that gave them a scale advantage they no longer have and that won't come back. They built a culture and organization around thick margins and manufacturing leadership, what we're seeing now looks like everyone from investors to employees searching for anyone who will tell them a happy story of a return to at least the margins part. Without a cohesive version of what the next iteration of a healthy Intel should look like all the cost cutting in the world won't save them.
Interestingly, people were bullish about Gelsinger at VMware too. Many still talk about the glory days with him at the helm despite decisions that IMO significantly harmed the company
I hear stories to the opposite effect: that it was very good for VMware, so good that nobody ousted him, and he left for a bigger company instead.
I agree. I don't see him having achieved anything particularly noteworthy over his tenure.
I'm not sure where Intel needs to go from here - ultimately lots of problems are solved if they can just design a solid CPU (or GPU) and make enough of them to meet demand. Their problems recently are just down to them being incapable of doing so. If their fab node pans out that's another critical factor.
Intel still has tons of potential. They're by no means uncompetitive with AMD, really. The fabs are a millstone right now, the entire reason they are as cheap as they are until they can start printing money with them. It does feel like they don't have any magic left, though, no big moonshots or cool projects since they basically killed all of them :(.
Yeah, but it was the return of the wrong engineer. While Pat having allegedly architected the i486 earns major respect, it was ultimately an iteration of the i386, which is where the alpha is. The guy who architected the i386 (John Crawford) is too old. So that leaves Intel Core, which was architected by Uri Frank. Now get this: two weeks after Pat Gelsinger is appointed CEO of Intel in February 2021, Uri Frank announces he's joining Google to lead the development of their cloud chips. That guy is probably Intel's natural leader. So it'd be interesting to know if there's more to this story.
"On May 2, 2013, Executive Vice President and COO Brian Krzanich was elected as Intel's sixth CEO [...]"
The next paragraph is ominous ...
'As of May 2013, Intel's board of directors consists of Andy Bryant, John Donahoe, Frank Yeary, Ambassador Charlene Barshefsky, Susan Decker, Reed Hundt, Paul Otellini, James Plummer, David Pottruck, and David Yoffie and Creative director will.i.am. The board was described by former Financial Times journalist Tom Foremski as "an exemplary example of corporate governance of the highest order" and received a rating of ten from GovernanceMetrics International, a form of recognition that has only been awarded to twenty-one other corporate boards worldwide.'
I definitely have some issues with BK, but it's more that there is another entire CEO between BK and Gelsinger (Bob Swan!) -- and I think it's strange to blame BK more than Swan?
Intel has had some big failures (or missed opportunities) over the years. Just going from memory - Pentium 4 architecture, not recognizing market opportunities with ARM, Itanium, AMD beating them to 64 bits on x86, AMD beating them to chiplets and high number of PCIe lanes with EPYC, poor battery life (generally) on laptops. The innovations from Apple with Apple Silicon (ARM) and AMD with EPYC make Intel look like they're completely lost. That's before we even touch on what RISC-V might do to them. It seems like the company has a long history of complacency and hubris.
Not an insider, but this doesn't seem good. I more or less agreed with every call Intel's made over the past few years, and was bullish on 18A. I liked that Pat was an engineer. His interim replacements don't seem to have that expertise.
Intel wasn't doing great to start, but Pat put it on the path, potentially, towards greatness. Now even that is in major jeopardy.
I am perhaps one of Pat's biggest fans. I was in tears when I learned he was back at Intel [1];
>"Notable absent from that list is he fired Pat Gelsinger. Please just bring him back as CEO." - 2012 on HN, when Paul Otellini Retired.
>"The only one who may have a slim chance to completely transform Intel is Pat Gelsinger, if Andy Grove saved Intel last time, it will be his apprentice to save Intel again. Unfortunately given what Intel has done to Pat during his last tenure, I am not sure if he is willing to pick up the job, especially the board's Chairman is Bryant, not sure how well they go together. But we know Pat still loves Intel, and I know a lot of us miss Pat." [2] - June, 2018
I am sad to read this. As I wrote [2] only a few hours ago, the $8B from the CHIPS Act is barely anything if the US / Intel wants to compete with TSMC.
Unfortunately there were a lot of things that didn't go to plan, or, from my reading of it, were out of his control. Cutting dividends was a no-no from the board until late. A big cut of headcount wasn't allowed until too late. Basically he was tasked with turning the ship around rapidly, but not allowed to rock it too much, all while it was taking on water at the bottom. And I will again, like I already wrote in [1], point the finger at the board.
My reading of this is that it is a little too late to save Intel, both as a foundry and as a chip maker. Having Pat "retired" likely means the board is now planning to sell Intel off, since Pat would likely be the biggest opponent of this idea.
At less than $100B I am sure there are plenty of interested buyers for various parts of Intel. Broadcom could be one. Qualcomm, or maybe even AMD. I am just not sure who will take the foundry, or if the foundry will be a separate entity.
I don't want Pat and Intel to end like this. But the world is unforgiving and cruel. I have been watching TSMC grow, and cheerleading them since 2010 before 99.99% of the internet had even heard its name, and I know their game far too well. So I know competing against TSMC is a task that is borderline impossible in many aspects. But I still wanted to see Pat bring Intel back to the leading edge again. The once almighty Intel...
Farewell and Goodbye Pat. Thank You for everything you have done.
I hope they sell off some more distracting/minor parts of the business, and then work with a fund to take the company private.
Similar to Yahoo a number of years ago, there's some real business still there; they just need to remove the pressure of constant quarterly calls and expectations and make long-term, consistent investments again.
Yahoo's acquisition by Verizon was a tire-fire train wreck of epic proportions. In no universe is this something to emulate. There's a good reason every single Verizon exec involved in that deal was gone relatively shortly after this disaster.
They basically incinerated $5 billion in the process of buying Yahoo, merging it with AOL to form "Oath", doing many rounds of layoffs with extremely overgenerous severance packages, strip-mining the business units in ill-conceived deals (often in exchange for stock in absurd companies like MoviePass and Buzzfeed), and then eventually dumping the remainder on a PE firm at a huge loss.
Or, as Wikipedia succinctly summarizes these events, "the integration did not deliver the expected value".
I am old enough to remember when AMD was nowhere, and Intel was the only game in town for both consumer and server chips, laptop and desktop. I think it was 2011.
Then 2013 rolled around and I built a desktop with an AMD processor, because it was dirt cheap. I remember people saying that they were excited and looking forward to the upcoming AMD architectures and roadmap. I couldn't understand how AMD could possibly have come from behind to beat Intel.
Realistically, you just need to come up with a good roadmap, get good people, get the resources, execute.
Roadmap is good, people - not so sure about this one, resources - seem a little low, execution - seems hampered by too many cooks. If Intel broke up into smaller companies and their new leading edge chip design group had 500 good people, I honestly think they would have a better shot. So I think we are seeing what makes the most sense, Intel must die and let the phoenixes rise from the ashes.
Fast forward to 2024/2025, and remember, anything is possible if AMD beat 2011 Intel.
AMD doesn't have the burden of capital expense that a fab is. It did take several years for AMD to capitalize on the freedom this enables, though. As far as I know Intel is the only company left that is vertically integrated. This might be the last domino to fall. Arrow Lake uses TSMC heavily. Maybe that's a sign of things to come. Andy Grove said only the paranoid survive. It didn't seem like Intel management was that paranoid. They didn't invent 64-bit x86; they had to license it from AMD. They spent so much money on stock buybacks when it should have been funneled back into the fabs, getting yields up on the next process node. It's easy for me to be an armchair quarterback though, I'm not aware of everything that happened behind the scenes. I saw D1X being built, and the future seemed bright.
Pat returned as CEO in 2021. I don't think that 3 years is enough time to clean out the internal rot that has led Intel to where it is today and a lot of problems have been in motion for a while. Short-term this might be better for Intel but this move lacks long term vision. Intel also just got a big hand-out from the government that would've kept them moving in the right direction regardless of what CEO is leading the pack.
Some of these people have strong contracts with clauses against being fired. It is also very difficult to replace them if capable people left the company. Especially if they are the majority of middle management.
A few years ago, Pat said Intel had internally rebuilt their "Grovian execution engine". I found those words empty, a far cry from Andy Grove's hard decision to dump memory in favor of microprocessors. Andy Grove's decisions made Intel, Intel, not "execution".
It's unfortunate the situation is beyond almost anyone's grasp but I wonder if Pat should have talked less, and done more.
Unfortunately not surprising, looking at the past year or so.
When he took over, I remember the enthusiasm and optimism, not only in business but in hacker circles too. It's a pity it didn't work out. I wonder if it is even possible to save Intel (not trying to imply that "if Gelsinger can't do it, then no one can", just wondering if Intel is doomed regardless of what their management does).
I got to meet and interact with Pat a few times while he was the CEO of VMware. I really liked him and his approach to things. He has done the best he could with the hand that was dealt to him.
The CEO of a public company is a glorified cheerleader. Doing the right things is half the job, convincing people you are doing the right things is the other half. Considering Intel's share price dropped 61% under Gelsinger's tenure, no matter the work he did towards the first half, it's pretty clear he thoroughly failed at the second. They desperately need someone who will drive back investor confidence in the company, and fast.
> The CEO of a public company is a glorified cheerleader.
I find this view limited. If we look at the most successful tech CEOs, they all personally drove the direction of products. Steve Jobs is an obvious example. Elon pushes the products of his companies so much that he even became a fellow of the National Academy of Engineering. Jeff Bezos was widely credited as the uber product manager at Amazon. Andrew Grove pushed Intel to sell their memory business and go all in on CPUs. Watson Jr. was famous for pushing IBM to go all in on electronic computers and later the mainframes. The list can go on and on. On the contrary, we can see how mere cheerleading can lead to the demise of companies; Jeff Immelt, as described in the book Lights Out, can be such an example.
All the CEOs in your example did some great work, yes, but also created a cult following around themselves. They were all shareholder darlings. Most of them are featured in business school textbooks as examples of how to run a company. All this kind of proves the point I'm trying to make. Just being involved in products isn't enough, not by a long shot. You need to make investors go "Steve Jobs is in charge, so the company is in good hands". If you can't do that, you may as well be a mid-level product manager or director or VP doing all those same things.
Given the average shelf life of S&P companies, I assume that even an established company needs to go through 0-to-1 all the time. The aforementioned companies all reinvented themselves multiple times.
> Revenue expectations, margins expectations, and investors are entirely different between the two.
Yeah, it's hard. How the great CEOs achieve their successes is beyond me. I was just thinking that a great CEO needs to drive product changes.
> I assume that even an established company needs to go through 0-to-1 all the time
Not significantly. Massive product and strategy pivots are rare because they can fail horribly (eg. For every turnaround like Apple there's a failure in execution like Yahoo)
> How the great CEOs achieve their successes is beyond me
Luck in most cases.
I've worked with plenty of CEOs and in most cases, most people at the C-Suite and SVP level of large organizations are comparable to each other skills and intellect wise.
Administration/mid-level management buy-in is also critical. C-Suite doesn't have much visibility into the lower levels of an organization (it's unrealistic), so management is their eyes and ears.
Intel has no GPU. Intel has no mobile processors / SoCs. These seem to be the drivers of growth nowadays. And their CPUs have a hard time competing with AMD now. I'm not sure that the 3 years Gelsinger had were enough to turn the tide.
I own one, Arc isn't the best, but it's still able to compete with lower end Nvidia and AMD GPUs. They ended up having to mark them down pretty drastically though.
I actually owned an Intel-based ZenFone about 8 years ago. Aside from being absolutely massive, it was decent.
I think Intel got arrogant. Even today, with all the benchmarks showing Intel isn't any faster than AMD, Intel is still more expensive for PC builds.
Check Microcenter, at least in the US: the cheapest AMD bundle is $299 vs $399 for Intel.
Yes. Intel might have a GPU, and maybe even a phone SOC, if they tried hard enough. Intel's integrated GPU cores are quite decent; I had high hopes on Arc eventually becoming the third major discrete offering. Alas.
Intel indeed rested too long on their laurels from early 2000s. It's one of the most dangerous things for a company to do.
> The CEO of a public company is a glorified cheerleader.
It can be. You've just noticed the fact that for publicly traded companies where the value of the stock exceeds the total value of the assets you actually get to put that difference on your books into categories like "Good will" or "Consumer confidence."
For companies struggling to create genuine year over year value this is a currently validated mechanism for faking it. The majority of companies do not need to do this. That companies operating in monopolized sectors often have to do it is what really should be scrutinized.
Is this not a bit too short a time for results to show yet? Turning a ship too many times would just result in it spinning in circles around the same position
>Is this not a bit too short a time for results to show yet?
Pat so suddenly getting "retired" like this isn't based on the success or failure of the new foundry nodes. You're correct that they weren't supposed to ship yet anyway. With this news most are expecting they'll be (further) delayed soon, but the real cause of today's action is strategic.
Things at Intel are so far gone that there's now no choice but to look at splitting up the company and/or selling/merging key parts off. Pat didn't want to preside over breaking up Intel, he wanted to save the company by shipping the new nodes. This was always a long shot plan which would require the existing businesses to do well and new things like GPUs to contribute while massively cutting costs in other areas.
Those things didn't work out. The GPUs were late and underperformed, forcing them to be sold near break-even. The market for desktop and laptop CPUs was much softer than expected for macroeconomic reasons and, worse, there were massive, delayed-onset field failures of the last two gens of desktop CPUs. Competitors like AMD generally took share from Intel faster than expected in other markets like data center. The big layoffs announced last summer should have been done in 2021. Those things combined caused time and money to run out before the new nodes could show up to save the day. This is reality finally being acknowledged. Frankly, this should have happened last summer or even earlier. Now the value has eroded further, making it even harder to save what's left.
From a customer's perspective:
NO, I don't like the big-little core architecture of CPUs on desktop platforms, and I don't enjoy the quality issues of the 13th/14th gen CPUs; I also don't like the lack of software support for middling-performance GPUs. I used to like Intel's SSDs, but that division was sold.
They weren't. Intel and Micron used to co-develop the flash memory but had different strategies for SSD controllers, and Intel did a full generation of flash memory after breaking up with Micron and before selling their SSD business to Hynix.
Here are 2 things I have noticed that seem obvious weaknesses:
1. Looking at the CyberMonday and BlackFriday deals, I see that the 12900K and 14900* series CPUs are what is being offered on the Intel side. Meanwhile AMD has released newer chips. So Intel has issues with either yield, pricing or other barriers to adoption of their latest.
2. The ARC GPUs are way behind; it seems obvious to me that a way to increase usage and sales is to simply add more VRAM to the GPUs - Nvidia limits the 4090 to 24GB; so if Intel shipped a GPU with 48 or 64GB VRAM, more people would buy those just to have the extra VRAM. It would spur more development, more usage, more testing and ultimately result in ARC being a better choice for LLMs, image processing, etc.
Your comment confuses me. BOTH have released newer chips. Intel with the Core Ultra 200 series and AMD with Ryzen 9000. Neither are going to be on Black Friday sales.
> So Intel has issues with either yield, pricing or other barriers to adoption of their latest.
How does not putting their latest chips on sale indicate trouble selling them?
> pricing or other barriers to adoption of their latest.
They are just not very good? There is basically no point in buying the current gen equivalent 14900K with current pricing (the only real advantage is lower power usage).
For that to work, they need a software stack to compete with CUDA.
Nvidia is so far ahead that a single manufacturer won’t be able to compete for developers. Instead, the only chance the rest of the AI GPU market has is to build some portable open source thing and use it to gang up on Nvidia.
That means bigger GPU players (AMD, hyperscalers) will need to be involved.
Having half the number of GPUs in a workstation/local server setup for the same amount of VRAM might make up for whatever slowdown there would be if you had to use less-optimized code. For instance, running or training a model that required 192GB of VRAM would take 4x 48GB GPUs but 8x 24GB GPUs.
Gelsinger seemed well connected to Washington / the democratic administration in particular and the CHIPS act seemed to be designed to save/bail out Intel.
IMO Intel is far too much of a strategic asset to allow such short-sighted policy from any administration. The upcoming administration surely knows that, and as far as I know Intel has never made strong steps that would alienate the incoming administration. Also, getting it back on its feet is actually in line with the narrative; it is easy to give it such a spin. Even if some personal differences are at play, this is more important than that.
It is strategically important for the US to be ahead in technology; however Intel is no longer ahead and is not really the strategic asset it used to be.
It is still strategically important to be able to supply domestic civilian, industrial, and military computation needs with good-enough chips. If you are not ahead but still good enough, then you have sovereignty, and a good chance to get back to the top eventually (in the short/mid term).
China is not ahead. Still they are capable of mass-producing somewhat capable chips, with good enough yields for strategic projects.
Also they can mass produce now outdated designs, which are still good enough for "office use" to provide continuity of the government bureaucracy's operation.
China has less advanced nodes where it can produce for industrial applications.
They have the potential to eventually get ahead, and right now a total embargo would only slow them down, not cripple them. This situation is acceptable for any state as a temporary measure while it reorganizes.
Also, Intel is still producing better stuff than the Chinese can. It is still ahead. And as I detailed above, I think it would need to fall behind far more to lose its strategic nature.
Also capacities in Taiwan and in South-Korea are very fragile from a strategic perspective, even if they are/were more advanced than what Intel has.
It seems really strange to me that the CEOs of two major companies announced immediate retirements on the same day (the other being Carlos Tavares of Stellantis).
I'm more surprised about Gelsinger than Tavares. Gelsinger seemed to have a viable plan going forward, even if progress was slow.
But Tavares has been doing a terrible job for the past year: New Jeep inventories stacking up on dealer lots for over a year, mass layoffs, plant closings, unreliable cars, and no meaningful price reductions or other measures to correct the situation. You couldn't pay me to take a new Jeep or Ram truck right now.
This is not good. We already know what happened when the CFO took over. It was a time when Intel totally lost control. They are gonna get bought for pennies. OMG.
Instead of telling AMD they would be in the rearview mirror, they should have been paranoid. Not do stupid GPUs, and instead destroy others where it mattered.
Hard disagree here x100. Investing in GPUs at the time when Nvidia and AMD started to gouge the market is actually the best decision Intel made in recent times. It's the piece of semiconductor with some of the highest margins in the business, and they already own a lot of the patents and IP building blocks to make it happen.
The only stupid thing they did was not getting into GPUs earlier so they would already be on the market during the pandemic GPU shortage and AI boom.
Intel just has to make a decent/mediocre GPU with 64GB+ memory at a $500 price point and they will instantly become the de facto local transformer leader. It's a true "build it and they will come" situation.
>Intel just has to make a decent/mediocre GPU with 64GB+ memory at a $500 price point and they will instantly become the de facto local LLM leader
Intel should have appointed people from the HN comment section as their CEO, as they clearly know more about running a giant chip design and fabrication company than the guy who worked there for 10+ years.
I actually did email Deepak Patil (head of Intel Graphics division) about this around a year ago, haha. Never did get a response though.
It is something that is easy to miss if you are just looking at typical business strategy and finances. A high memory consumer GPU would undercut their server GPUs, which are presumably higher margin intended golden geese. It's easy to see them chasing server markets and "gamers" being an afterthought.
However, there is huge demand right now for a modern, even a crappy modern, GPU with gobs of memory. Make the card and the open source AI tooling for it will start sprouting within days of its release.
It's an extremely powerful position to have every at-home AI geek's setup to be bound to using intel cards and intel focused tooling. Nvidia and AMD won't do it because they want to protect their server cards.
> It's an extremely powerful position to have every at-home AI geek's setup to be bound to using intel cards
So, an incredibly small market share while your competitors already have the first-mover advantage and have nailed down the ecosystem? With no data backing it up, I think graphics cards for local LLM needs are not really in demand. Even for gaming it's probably more attractive, but then again, that's not even where the real money is.
>So, incredibly small market share while your competitors already have the first-mover advantage and nailed down the ecosystem?
Exactly. This x100. It was easy for Nvidia to succeed in the LLM market by winging it, in the days when there was no LLM market, so they had the greenfield and first-mover advantages.
But today, when Nvidia dominates the mature LLM market, Intel winging it the same way Nvidia did, won't provide nearly the same success as Nvidia had.
Ferruccio Lamborghini also built a successful sports car company by building tractors and cars in his garage. Today you won't be able to create a Lamborghini competitor with something you can build in your garage. The market has changed unrecognizably in the mean time.
The market share is incredibly small but also incredibly well aimed.
The people learning how to do local LLMs will be the people directing the build-out of on-prem transformers for small-to-midsize companies. The size of the market is irrelevant here; it's who is in that market, and the power they will have, that is extremely relevant.
> ..open source AI tooling for it will start sprouting...
AMD has tried this for many of its technologies and I don't think it is working. Granted, they suck at open sourcing, but a shitload of it was open sourced. See TinyGrad's voyage into the Red Box driver (streams on YouTube).
Intel doesn't have to open source anything. People will build everything needed to run intel cards efficiently as there is currently zero options for affordable video cards with high memory.
It's either old slow Tesla cards with 48GB or $2000 nvidia cards with 24GB.
> People will build everything needed to run intel cards efficiently as there is currently zero options for affordable video cards with high memory.
I think you're overestimating what people can and will do.
Nvidia didn't succeed because it just launched cards and let people write CUDA for them. Nvidia is where it is because it has an army of researchers and SW engineers developing the full stack, from research papers to frameworks to proofs of concept, showing customers the value of paying for their pricey HW + SW, most of it proprietary, not community developed.
"People" alone won't be able to get even 10% there. And that's ignoring the fact that Nvidia HW is not FOSS so they'd be working blind.
You are jesting, but there is some wisdom to that post. No reasonable person is suggesting a global company change direction on the basis of one post on the internet, but the advice provided is not without merit. Surely a company of that size can do some research to see if it is a viable path. In fact, if it does anything right, it should have people like that ready to do the appropriate analysis.
I have my thoughts on the matter and cautiously welcomed their move to GPUs ( though admittedly on the basis that we -- consumers -- need more than amd/nvidia duopoly in that space; so I am not exactly unbiased ).
I'm saying nobody can guarantee the claim of the GP I replied to, that if Intel had produced mediocre GPUs with 64+ GB of RAM it would have magically helped them rise to the top of ML HW sales and saved them.
That's just speculations from people online. I don't see any wisdom in that like you do, all I see is just a guessing game from people who think they know an industry when they don't (armchair experts to put it politely).
What made Nvidia dominant was not weak GPUs with a lot of RAM. The puzzle of their success had way more pieces that made the whole package appealing over many years, and a great timing of the market also helped. Intel making underperforming GPUs with a lot of RAM would not guarantee the same outcome at a later time in the market with an already entrenched Nvidia and a completely different landscape.
Your comments that nobody knows anything for sure are generically applicable to any discussion of anything.
But since they obviously apply just as well to Intel itself, it is a poor reason to dismiss other’s ideas.
—
> What made Nvidia dominant was not weak GPUs with a lot of RAM.
Intel doesn’t have the luxury of repeating NVidia’s path in GPUs. NVidia didn’t have to compete with an already existing NVidia-like incumbent.
That requires no speculation.
—
Competing with an incumbent via an underserved low end, then moving up market, is called disruption.
It is a very effective strategy since (1) underserved markets may be small but are immediately profitable, and (2) subsequent upward growth is very hard for the incumbent to defend against. The incumbent would have to lower their margins and hammer their own market value.
And it would fit with Intel’s need to grow their foundry business from the low end up too.
They should take every low-end underserved market they can find. Those are good cards to play for ambitious startups and comebacks.
And the insane demand for both GPUs and chip making is increasing the number of such markets.
<< That's just speculations from people online. I don't see any wisdom in that like you do, all I see is just a guessing game from people who think they know an industry when they don't (armchair experts to put it politely).
True, it is just speculation. 'Any' seems to be a strong qualifier. One of the reasons I troll the landscape of HN is that some of the thoughts and recommendations expressed here have ended up being useful in my life. One still has to apply reason and common sense, but I would not dream of saying it has no (any) wisdom.
<< What made Nvidia dominant was not weak GPUs with a lot of RAM.
I assume you mean: 'not in isolation'. If so, that statement is true. AMD cards at the very least had parity with nvidia, so it clearly wasn't just a question of ram.
<< The puzzle of their success had way more pieces that made the whole package appealing over many years, and a great timing of the market also helped.
I will be honest. I am biased against nvidia so take the next paragraph for the hate speech that it is.
Nvidia got lucky. CUDA was a big bet that paid off first on crypto and now on ai. Now, we can argue how much of that bet was luck meets preparation, because the bet itself was admittedly a well educated guess.
To your point, without those two waves, Nvidia would still likely be battling AMD on incremental improvements, so the great market timing accounts for the majority of its success. I will go as far as to say that we would likely not see a rush to buy 'A100s' and 'AI accelerators', with the exception of very niche applications.
<< Intel making underperforming GPUs with a lot of RAM would not guarantee the same outcome at a later time in the market with an already entrenched Nvidia and a completely different landscape.
Underperforming may be the key word here, and it is a very broad brush. In what sense are they underperforming, and which segment are they intended for? As for RAM, it would be kinda silly in the current environment to put out a new card with 8GB; I think we can agree on that at least.
<< I'm saying nobody can guarantee the claim of the GP I've replied to,
True, but it is true for just about every aspect of life, so as statements go it is virtually meaningless as an argument. The best one can do is argue possibilities based on what we do know about the world and the models it tends to follow.
There's no doubt that the statement is true. Some use cases would absolutely benefit from GPUs with a boatload of VRAM, even if it's relatively slow (~500 GB/s).
The market for that is just not that large; it wouldn't move the needle on Intel's revenue. But then again, it could get the enthusiasts onboard and get Intel's CUDA alternative moving.
Disrupting a market from the bottom always looks like a small market opportunity. Initially.
But then you move up, and the incumbents have to choose between ceding more of their lower end or lowering their margins. And it is very hard and costly for a company succeeding at the high end to do the latter.
That would have been a fantastic sign Intel was getting more nimble and strategic.
And it would have been a good fit with a comeback in the low end of fabs too.
Many people here are very knowledgeable: they have Putnams, they are CEOs smurfing, they run 300-person teams, they made software everyone knows (e.g. we have the Diablo 2 guy), people from the hardware side, VCs...
Some are probably multi-millionaires smurfing (and I don't mean cryptobros).
I think the issue is that Intel has a culture and focus. It is cpus. It is a large company and this is where the revenue comes from and it has momentum.
There are a lot of strategic shifts Intel could do but their timeline to paying off at the scale Intel needs is very long.
What I see is a bunch of endeavours that get killed off too quickly because they were not paying off fast enough. That creates mistrust in the community around Intel's non-core initiatives, which makes them harder to succeed going forward. It is a bit of a death spiral.
Basically big companies find it hard to learn new tricks when their core offering starts to fail. The time to learn new tricks was a while ago, now it is too late.
I’d argue that focus is what Intel fundamentally lacks. Or any kind of real vision.
If they had focused more on mobile CPUs, GPUs, or especially GPGPUs a decade ago, they could have more product diversity now to hold them over.
Instead, they dipped their toes into a new market every few years and then ran away screaming when they realized how difficult it would be to gain market share.
If they had any actual vision, they could have a line of ARM CPUs now to compete with the onslaught in all major CPU markets.
They should have listened to their customers and market forces instead of trying to force x86 down everyone’s throats.
>they could have a line of ARM CPUs now to compete with the onslaught in all major CPU markets.
Disagree. Selling ARM chips with high profit margins is tough. There's too much competition from the likes of Samsung, MediaTek and, until the US ban, HiSilicon (Huawei). ARM chips are a race to the bottom in terms of price, with a market dominated by companies from Asia. There's no guarantee Intel could have had a competitive ARM design that could beat Apple's or Qualcomm's.
> There's too much competition from the likes of Samsung, MediaTek and, until the US ban, HiSilicon (Huawei). ARM chips are a race to the bottom in terms of price, with a market dominated by companies from Asia
Yes, without proprietary technology, margins are slim.
But Qualcomm has somewhat succeeded in this area anyhow. I guess they took the ARM base but innovated on top of it in order to command higher margins.
>But Qualcomm has somewhat succeeded in this area anyhow
It wasn't just "anyhow". Qualcomm succeeded in the mobile SoC space because they also had the best modems in the industry (the name comes from Quality Communications, after all), and also the best mobile GPU IP, which they bought from ATI.
Well they have to try something and actually invest in it. Every year, it looks more and more like x86 is a sinking ship.
Intel and AMD can dominate the x86 market all they want. But x86 has been steadily losing ground every year to cheaper and more power efficient ARM processors. It’s still a big market now, but I don’t think it’s going to be a great business in a decade or two.
ARM was just an example. If Intel spent a decade strengthening their Larrabee GPGPU platform and building AI and crypto ecosystems on it, they may have been well positioned to benefit immensely like Nvidia has over the last 5 years.
Cheap and power efficient is mostly not about the ISA. ARMv8 is designed for efficient fast CPUs and it's good at it, but you could do it with x86 if you tried. Intel doesn't try because they want to make expensive high-power server chips.
Intel did have an ARM license at one point. The margins would never have been acceptable to Intel. Annapurna Labs and P.A. Semi probably sold to Amazon and Apple respectively for the same reason.
>I think the issue is that Intel has a culture and focus. It is cpus. It is a large company and this is where the revenue comes from and it has momentum.
With this logic, Apple should have also stayed with making Macs when it had financial troubles in 1999, since that's its focus, and not ventured into making stupid stuff like MP3 players and phones; everyone knows that's the core focus of Sony and Nokia, who would rule those markets forever.
> With this logic, Apple should have also stayed with making Macs when it had financial troubles in 1999, since that's its focus, and not ventured into making stupid stuff like MP3 players and phones; everyone knows that's the core focus of Sony and Nokia, who would rule those markets forever.
You’re off by a few years. Jobs axed any project not related to the Mac when he came back in 1997, so they actually did what you say they did not. The iPod project started around 1999-2000 when Apple was in a massive growth phase after the iMac and the G3 and then G4 desktops.
Also, in the alternate reality where the iPod did not massively succeed, it would very likely have been killed or remain as a hobby like the Apple TV. Apple might not be as huge as they are now, but the Mac was doing great at the time.
One way you can look at it is that Apple succeeded at those by taking things that weren't their product (radios, flashlights, calculators, etc) and turning them into their product (computers).
Extreme outsider perspective but they seemed like dilettantes. They'd dip their toe into doing GPUs and then cancel the project every couple years.
A few weeks ago Gelsinger even commented he saw "less need for discrete graphics in the market going forward" - just seemed like a very Intel thing to say
As a fellow outsider, that makes a lot of sense to me.
Even if Intel started outperforming Nvidia today... who would want to put serious effort into making their stuff work with Intel hardware in a market Intel is likely to pull out of at any moment?
The early stuff (Larrabee, KNL, KNC, etc.) was hamstrung by x86. Wrong architecture for the types of things GPUs are good at. Their iGPUs have generally been good, but that's not competing in the compute segment. Then they acquired a few startups (Nervana, Habana) that didn't really work out either. And now they finally have a discrete GPU lineup that is making slow progress. We'll see.
Yes, gaming GPUs are a decent money maker, but Intel GPUs can currently only compete in the midrange segment, where there is a lot less money to be made. And to change that, they need to invest a lot more money (with uncertain outcome). For AI it's basically the same story, with the added difficulty of Nvidia's CUDA moat, which even AMD is having trouble with.
GPUs exist because CPUs just aren't fast enough. Whether or not people are going to start making GPU-only computers is debatable (although there has clearly been a lot of CPU+GPU single chips).
What a frankly weird oversimplification. GPUs don't exist because CPUs aren't fast enough; GPUs exist because CPUs aren't parallel enough. And to achieve that parallelism, they sacrifice a massive amount of per-thread performance.
A GPU-only computer would be absolutely horrendous to use. It'd be incredibly slow and unresponsive as GPUs just absolutely suck at running single-threaded code or branchy code.
I mean, you can split hairs about the difference between CPUs and GPUs all you want.
The overall point is that work is being increasingly done _not_ on the CPU. If your business is CPUs-only then you're going to have rough times as the world moves away from a CPU-centric world. You don't need to predict AI; you just need to see that alternatives are being looked at by competitors and you'll lose an edge if you don't also look at them.
It's not going to matter much if you have a crappy CPU if most of the work is done on the GPU. It's like how iPhones don't advertise themselves as having surround sound; phones aren't about calling people anymore, so there's no reason to advertise legacy features.
> The overall point is that work is being increasingly done _not_ on the CPU.
Eh? GPGPU has been a thing for decades and yet barely made a dent in the demand for CPUs. Heck, CUDA is 17 years old!
The world has not budged from being CPU-centric and it isn't showing any signs of doing so. GPUs remain an accelerator for specialized workloads and are going to continue to be just that.
Hmm, this isn't power-efficient thinking. iPhones have lots of accelerators beyond GPUs - not because they're more performant but because they're better at performance/power.
That's orthogonal. Fixed-function hardware has a power efficiency advantage over programmable hardware. This isn't new or surprising, but it's also unrelated to CPU vs. GPU. Both CPU & GPU are programmable, so that's not relevant.
Are those GPUs really stupid? They seem like great price-for-performance devices when ultra-high-end gaming is not the priority.
EDIT: I personally always liked intel iGPUs because they were always zero bullshit on Linux minus some screen tearing issues and mumbo-jumbo fixes required in X11.
They can encode AV1 even. Really amazing, amazing chips for the price.
The "stupid" thing with them (maybe) is that they cannot do anything exceptionally good, and that while having compatibility problems. They are cheap yes, but there are many other chips for the same price, and while they are less capable, they are more compatible.
Make the A380 cost 30% less or invest way more in the drivers, and IMHO it would have been a completely different story.
Another nice thing—it looked like Intel was lagging AMD in PCIe lane counts, until somewhat recently. I suspect selling GPUs has put them in the headspace of thinking of PCIe lanes as a real figure of merit.
AMD's AM5 platform is also not great at having enough PCIe lanes, hence at most one PCIe 5.0 x16 GPU; if you need more, it's x8 for 2 GPUs and so on, and that's before we connect fast M.2 storage, fast USB4 ports, etc. If you need more PCIe lanes, you have to buy a Threadripper or Epyc, and that's easily 10 times the price for the whole system.
PCIe lanes and DDR channels take up the most pins on a CPU connector (ignoring power). The common solution (for desktops) is to have a newest generation protocol (5) at the CPU level, then use the chipset to fan out more lanes at a lower generation (4).
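To make the lane math concrete, here's a rough budget in Python. The numbers are approximate figures for an AM5-style platform (about 28 PCIe 5.0 lanes from the CPU, four of them eaten by the chipset downlink), so treat this as a sketch, not a spec:

    # Rough lane budget for an AM5-style desktop platform (approximate figures:
    # ~28 PCIe 5.0 lanes from the CPU, 4 of which are the chipset downlink).
    cpu_lanes_total = 28
    chipset_link = 4     # reserved for the chipset, which fans out lower-gen lanes
    gpu_slot = 16        # one x16 slot, or two x8 slots if bifurcated
    nvme_direct = 8      # two CPU-attached x4 NVMe drives

    remaining = cpu_lanes_total - chipset_link - gpu_slot - nvme_direct
    print("CPU lanes left over:", remaining)  # 0: everything else hangs off the chipset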
Yeah. Threadripper/Epyc is what I'm thinking of; it isn't obvious (to me at least) whether it was just a coincidence of the chiplet strategy or what, and if so it is an odd coincidence. The company that makes both CPUs and GPUs has ended up with data center CPUs that are a great match for the era where we really want data center CPUs that can host a ton of GPUs, haha.
Nah. AMD discrete GPUs are fantastic on Linux these days. You don't need to install a proprietary driver! They just work. It's really nice not having to think about the GPU's drivers or configuration at all.
The only area where AMD's discrete GPUs are lagging behind is AI stuff. You get a lot more performance with Nvidia GPUs for the same price. For gaming, though, AMD is the clear winner in the price/performance space.
Of course, Nvidia is still a bit of a pain but it's of their own making. You still need to install their proprietary driver (which IMHO isn't that big a deal) but the real issue is that if you upgrade your driver from say, 550 to 555 you have to rebuild all your CUDA stuff. In theory you shouldn't have to do that but in reality I had to blow away my venv and reinstall everything in order to get torch working again.
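For what it's worth, this is the minimal sanity check I'd run after a driver bump, assuming PyTorch is installed. It only reports what the wheel was built for versus what the driver will actually accept; it won't repair a broken venv:

    # Post-driver-upgrade sanity check (assumes PyTorch is installed; it only
    # reads versions, it won't fix anything).
    import torch

    print("torch wheel:", torch.__version__)      # e.g. 2.x.y+cu121
    print("built for CUDA:", torch.version.cuda)  # toolkit baked into the wheel
    print("CUDA usable:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))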
Nvidia's GPUs work well on Linux. A friend and I use them and they are fairly problem free. In the past, when I did have some issues (mainly involving freesync), I contacted Nvidia and they fixed them. More specifically, I found that they needed to add sddm to their exclusion list, told them and they added it to the list after a few driver releases. They have also fixed documentation on request too.
The GPUs seemed smart. Too late, and timid, but smart. What continually blew my mind was Intel's chiplet strategy. While AMD was making scads of a single chiplet and binning the best for Epyc and recovering cost at the low end with Ryzen, Intel designed and fabricated a dizzying number of single-purpose chiplets. In some cases, just the mirror image of another otherwise identical chiplet. The mind boggles. What phenomenal inattention to opportunity.
What you describe is mad inattention to costs, not only to opportunities: maybe a symptom of widespread misaligned incentives (e.g. delivering a design with mirrored chiplets quickly could be more "useful" for an engineer than saving a few millions for the company by taking one more week to design a more complex assembly of identical chiplets) and toxic priorities (e.g. theoretical quality over market value and profits, risk aversion, risk/cost externalization towards departments you want to be axed instead of your own).
To me, AMD's approach demonstrates planning and buy-in across multiple teams (CPU die, Ryzen IO die, Epyc IO die, etc), and that suggests a high degree of management involvement in these engineering decisions.
Intel's activities seem comparatively chaotic. Which I agree smells of disincentives and disinterested middle management hyperfixated on KPIs.
Same with the general preparedness for the AGI Cambrian explosion. Given their position, they should have been able to keep pace with Nvidia beyond the data center, and they fumbled it.
Their purported VRAM offerings for Battlemage are also lower than hoped for, which is going to be a turnoff to many buying for local inference.
I think we have too many decision-makers gunshy from crypto mining that haven't yet realized that compute isn't just a phase.
The GPUs are elemental to a shift toward APUs, which, as Apple has shown, are the future for performance and energy efficiency. Strix Halo will be a game changer; without a GPU, Intel has no future on the laptop (and later the desktop).
One of the things I was wondering about a few years ago is whether intel would attempt to bid on the Sony/MS console contracts which AMD has had tied up for a long time now and would be a dependable income along with reduced software compatibility concerns compared to the breadth and history of windows games. I don't think they got to the point of having a big iGPU integrated to the extent that AMD has had for years though.
Apparently AMD has at least the Sony PS6 contract now.
Gelsinger apparently agreed with you. However, the market very clearly has enormous demand for discrete GPUs. Specifically for AI workloads (not PC gaming).
If I was on Intel's board I would've fired him for this statement. The demand for GPUs (parallel matrix processors with super fast local memory) is NOT going to go down. Certainly not in the next five to ten years!
I know Intel makes a lot of things, has a lot of IP, and is involved in many different markets, but the fact that they're NOT a major player in the GPU/AI space is their biggest failure (in recent times). It's one of those things that should've been painfully obvious at some point in 2022, and yet here we have Gelsinger saying just a few months ago that demand for AI stuff is somehow just going to disappear.
It's magic hand waving like this that got him fired.
I disagree. They need a dedicated GPU, and the iGPU they have needs to be improved, or they will absolutely fail. The current path forward is an APU, or more of a system on a chip with CPU, GPU, and, I guess, NPU.
Their dGPU is also pretty promising; I have it on my list to get. Even if not for gaming, it's possibly the best media encoding/decoding card for the money today. The only thing holding it back for entry-level or mid-level gaming is the drivers: for some games it won't matter, it seems, but others have some growing pains, though Intel seems to be diligently working on them with every driver release.
The GPUs aren't a stupid idea. Right now Nvidia basically controls the market and has totally abandoned the lower/mid range end.
Intel has made vast improvements even within their first generation of dedicated desktop cards. They will likely never compete with cards like a 4080/4090, but they may be great options for people on a budget. Helps put pressure on AMD to be better as well.
> The two largest fabless companies in the world never used, and have no plans to use Intel foundries.
Such plans would be something like 4-5 years ahead, so you wouldn't have heard of it yet unless they decided to talk about it. Chip development takes a long time.
Of course, that means you have to expect the foundry to still be there in 5 years.
The AWS deal is not a "foundry win" in the true sense. It is still a chip designed and built by Intel for AWS: a custom Xeon chip and a custom Intel Clearwater Forest AI chip.
Unlike true foundries which manufacture chips designed by customers.
Full disclosure, the day Bob Swan announced his exit was the day I purchased Intel stock.
Pat had the mandate from both the board and the market to do whatever he deemed necessary to bring Intel back to the forefront of semiconductor manufacturing and a leading edge node. Frankly, I don't think he used that mandate the way he should have. Hindsight is 20/20 and all, but he probably could have used that mandate to eliminate a lot of the middle management layer in the company and refocus on pure engineering. From the outside it seems like there's something rotten in that layer, as the ship hasn't been particularly responsive to his steering, even with the knowledge of the roughly 4-5 year lead time that a company like Intel has to deal with. Having been there for so long, though, a lot of his friends were probably in that layer, and I can empathize with him being a bit overconfident and believing he could turn it around while letting everyone keep their jobs.
The market reaction really does highlight how what's good for Intel long term as a business is not necessarily what the market views as good.
Folks in this thread are talking about a merger with AMD or splitting the foundry/design business. I doubt AMD wants this. They're relatively lean and efficient at this point, and turning Intel around is a huge project that doesn't seem worth the effort when they're firing on all cylinders. Splitting the business is probably great for M&A bankers, but it's hard to see how that would actually help the US keep a leading semiconductor manufacturer on shore. That business would likely end up going the same way as GlobalFoundries, and we all know that didn't really work out.
The most bizarre thing to me is that they've appointed co-CEOs who are both basically CFOs. That doesn't smell of success to me.
I think one of the more interesting directions Intel could go is if Nvidia leveraged their currently enormous stock value and moved in for an acquisition of the manufacturing division. (A quick glance shows a market cap of 3.4 trillion. I knew it was high, but still, wow.) Nvidia has the stock price and cash to support the purchase and, rather uniquely, has the motive with the GPU shortage to have their own manufacturing arm. Plus, they've already been rumored to be making plays in the general compute space, but in ARM and RISC-V, not x86. Personally, Jensen is one of the few CEOs that I can imagine having the right tempo and tenor for taming that beast.
I'm curious what others think of the Nvidia acquisition idea.
> The most bizarre thing to me is that they've appointed co-CEOs who are both basically CFOs.
It makes sense once you understand the board has finally accepted the obvious reality that the only option remaining is to sell/spin-off/merge large parts of the business. Of course, the foundry business must remain in one piece and go to an American company or the US govt won't approve it.
Gelsinger 'got resigned' so suddenly because he wouldn't agree to preside over the process of splitting up the company. These co-CEOs are just caretakers to manage the M&A process, so they don't need to be dynamic turnaround leaders or even Wall Street investable.
Intel stock went up on the news not because the street expects a turnaround but because they think the pieces may be worth more separately.
Feels like the wrong move. Turning a chip company around has timescales of a decade. Getting rid of a CEO 3 years in simply means no turnaround is going to happen.
Pat was at the helm for just a few years and has already been sent to the guillotine.
Maybe it isn’t wise for the USA to dump billions into private companies that can’t even get their own house in order.
I wouldn’t be surprised if the next CEO installed by the board will be hellbent on prepping INTC for sale. A few of the board of directors of INTC are from private investment firms.
You always hope, as a technical person, to see an engineer running a company. This move, in these circumstances, does little to inspire confidence in engineers occupying C-suite positions. I had high hopes for Pat, given his previous track record. But it appears the damage to Intel had already been done.
The promise of state backing for a US champion semiconductor fab was taken seriously by Gelsinger, who went all-in on trying to remake Intel as TSMC 2.0. But the money turned out to be way too late and far more conditional than Gelsinger thought. This is bad news for Intel and bad news for US industrial policy.
Something I was thinking about the other day: all the mega-successful chip companies at the moment, TSMC, Nvidia, Samsung, are led by founders or family of founders. It got me wondering that if you really want to innovate in the most innovative industry in the world, you need more skin in the game than stock options. Gelsinger was the closest thing Intel had to a founder-leader, someone who deeply cared about the company and its legacy beyond what they paid him, and was willing to make tough decisions rather than maintain the status quo until his options vest.
Intel should either go the IDM route or split the company in two. Intel can't afford to fuck around and find out.
AMD is gaining ground on x86 without the burden of fabs, and Apple has demonstrated that desktop ARM is viable. Nvidia is showing the world that GPUs can handle parallel computing better.
Well their stock is already in the dumps, they can't do much worse than they already have been.
Divesting from their discrete GPUs just as they were starting to become a viable value option was one of their most recent asinine decisions. No idea why they didn't test the market with a high-RAM 64GB+ card before bowing out to see how well they'd do. Nvidia's too busy printing money to add more RAM to their consumer GPUs; they'd have the cheap GPU VRAM market all to themselves.
It still has a market cap of over $100B. Trust me, things can absolutely get worse from here. The default state of big companies that have been deeply mismanaged for 10+ years is that they go bust and end up getting bought out for a pittance. If the fabs can't be made to work in a reasonable timeframe while they are still competitive in the marketplace, then they turn out to be giant write-offs and malinvestment, only good for shielding future income from taxes.
Exactly, I was discussing this with a friend the other day. I'm sure there must be a market for high RAM GPUs, even if they're not as fast as NVIDIA GPUs.
With the B770 16GB gone, and the plan for the B580 to be cheaper than the current 7600 XT @ 16 GB by cutting 4 GB, Battlemage looks DOA. A gamer making an investment in a card for ~3 years cares less about spending ~$30 more than about being unable to run high-resolution texture packs on a gimped GPU. The XMX cores are superior for AI for a month, until Blackwell arrives with its smaller FP units, but a month is too little lead time to overcome the CUDA software inertia. The next beancounter CEO gets the gift of terrible sales numbers and the excuse to drop prices to move them out before the RTX 5000 and RX 8000 competition hits.
Maybe you are right and Battlemage is DOA. Perhaps Intel knows that and just wants to dump inventory before announcing they will get out of the GPU business.
On the other hand, maybe not.
My point is that although you might think they are going to divest the GPU business in future, we don't know that for sure and it's kind of weird to present it as having already happened.
Intel's Xe2 GPU architecture works well as an efficient iGPU and has market fit there. The problem is taking it the other way, to discrete high end and data center, rather than the opposite direction, as Nvidia has done with GPUs and AMD with CPUs. Using the low-power consumer target to experiment with architecture and new nodes that then scale to the high end has been Intel's strategy ("efficiency" cores), and it is logical from a foundry perspective for monolithic chips. But Intel has failed to execute in the new AI data center investment reality. Where are the GPUs from Gaudi 3? Pat: "Putting AI into all the chips, not just ones in the cloud, might be more important in the long run." Maybe, but failing to win where the money is now is a huge barrier to surviving into that long run.
He may have been talking about something like "all GPU stuff will just be integrated into the CPU", but I highly doubt that's realistic. Nvidia's latest die size is enormous! Their new 5090 die is reportedly 744 mm²: https://www.tomshardware.com/pc-components/gpus/the-rtx-5090...
There's no way you're going to be getting an equivalent level of performance from a CPU with integrated GPU when the GPU part of the die needs to be that big.
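To put 744 mm² in perspective, here's a back-of-the-envelope sketch; the die areas are approximate public figures, and the ~858 mm² reticle limit is the commonly quoted single-exposure maximum, so treat the exact ratios loosely:

    # Back-of-the-envelope only; die areas are approximate public figures.
    gpu_die_mm2 = 744        # reported RTX 5090 die
    reticle_limit_mm2 = 858  # ~26 mm x 33 mm single-exposure limit
    big_cpu_die_mm2 = 250    # generous figure for a large consumer CPU die

    print(f"GPU die uses ~{gpu_die_mm2 / reticle_limit_mm2:.0%} of the reticle")
    print(f"GPU die is ~{gpu_die_mm2 / big_cpu_die_mm2:.1f}x a large desktop CPU die")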
He said less demand. He didn't say there would be no demand. Importantly he didn't say intel was divesting discrete GPUs. Perhaps even more importantly they demonstrably aren't divesting.
These large troubled incumbents seem to have infinite lives, lingering on toward a slow death and destroying value along the way. Like Boeing: why does Intel hang around taking up space, opportunities, and resources, one failed attempt after another, instead of making space for newer ideas and organizations? At this point it is so clear that these public companies are just milking the moat their predecessors built around them and the goodwill (or is it naive will?) of new investors who keep pouring in money, buying out the ones jumping ship.
> Gil was a nice guy, but he had a saying. He said, 'Apple is like a ship with a hole in the bottom, leaking water, and my job is to get the ship pointed in the right direction.'
Plenty of people here talk about the mistakes of Intel CEOs. But do they really have that much influence over the success of new production lines? Or is this maybe caused by some group of middle management that has covered each other's asses for the last 10 years? How likely is it that the production problems with new nodes are mostly bad luck? I haven't seen anything trying to analyse and quantify those questions.
They've already been headed that way for a long time; the problem is that they need huge amounts of capital to complete their foundry roadmap, and that is meant to be cross-subsidized by the other part of their business. But now it's looking shaky that they'll even have the capital to execute the plan, setting aside whether it would even work if they could.
Stock moving up slightly? In the context of the last week even it seems like Wall Street mostly doesn't care either way.
At this stage in Intel's life I think having a financier overseeing the company might actually be a good move. Engineering leadership is of course critical across the business, but the company has historically paid large dividends and is now moving into a stage where it's removing those payouts, taking on debt, and managing large long-term investments across multiple parts of the business.
Someone needs to be able to manage those investments and communicate that to Wall Street/Investors in a way that makes sense and doesn't cause any surprises.
Pat's error isn't that Intel revenues are slowing or that things are struggling, it's the fact he continued to pay dividends and pretend it wasn't a huge issue... until it was.
I’ve heard rumors he was viewed as a lightweight, or at least not as much of a serious engineer as Morris Chang was in his 60s, but even still his tenure was surprisingly short…
Eh. Pat hasn't seemed to be doing anything special. He presided over a bunch of pointless hiring and firing, plowed money into fabs like anyone else would, often misled during presentations, and hasn't managed Intel's cash very well...
I have to agree with a few others, don't really see why people looked up to him. Seems like a pretty mediocre CEO overall.
I also, personally, do not actually see the problem with a financial CEO. Ultimately I don't believe technical acumen is that important to the decisions a CEO makes - they should be relying on subordinates to correctly inform them of the direction and prospects of the company and act accordingly.
I am concerned about Intel's long term prospects - obviously he wouldn't be leaving if everything was looking up, IMO.
Only CEOs get to press release their firings as something else: retirement, wanting to spend more time with family, voluntarily leaving etc.
He was sacked. The board fired him. Tell it like it is.
What does this mean for Intel? I think they’re too important to get bought or broken up because they have fabs in the US. They’re like Boeing in that sense. Or a big bank. They’re too big (strategically important) for the US to let them fail. But maybe they’ll get bought by another US company or investor. I dunno.
No, they already do. None of the fabs (intel, tsmc, Samsung, etc.) build their own equipment. The secret sauce comes in designing the processes to integrate these machines into a cohesive months-long manufacturing process, building out the IP (aka what types of transistors and low-level features they can build) and so on. Companies like ASML and AMAT make machines with guarantees like “we can draw lines this small and deposit layers this thin” and it is up to intel or any other fab to figure out the rest.
What timescale are we looking at to decide if building foundries and manufacturing chips in the States is a good idea? There's an argument that there aren't nearly enough skilled workers to do high tech manufacturing here. Didn't TSMC also struggle to hire enough skilled workers in Arizona?
It's a good idea because otherwise China will bomb your foundry or steal your designs. Besides, you don't ever get skilled workers if you don't try to develop them.
> Didn't TSMC also struggle to hire enough skilled workers in Arizona?
That was mostly marketing so they could ask for more tax credits. In practice it seems to be going okay.
Has Intel ever made a production chip using their own EUV process?
They keep moving the goalposts so they'll be state of the art when they get there - currently aiming for "18A" with high NA using the absolute latest equipment from ASML. But I don't think they've demonstrated mastery of EUV at any node yet.
18A isn't going to use high NA yet, it's just EUV. Intel is hoping to start using high-NA with their 14A process in 2026. Obviously with their current state that 2026 deadline might get pushed further.
> Intel Corporation (NASDAQ: INTC) today announced that CEO Pat Gelsinger retired from the company after a distinguished 40-plus-year career and has stepped down from the board of directors, effective Dec. 1, 2024
The latter implies the government money comes with strings attached and that the forces eager to see a turnaround will be active participants. It's good to see.
> The market already figured out that it doesn't want leading edge manufacturing in the US.
Exactly. The market figured out there's a lot of short-term profit to be made for shareholders in selling off the nation's industry, and moving it down the value chain. They're running the country like a legacy business they're winding down.
Give Wall Street a few more decades, and the US will have an agriculture and resource extraction economy.
National security (and other similar externalities) are not priced into stocks, which is why every trader will happily sell his own nation's defence in exchange for profit.
They’ll need to dump a lot more money than $8B into Intel to compete on all fronts of manufacturing with China, and focus on it for a decade. Those time horizons are politically impossible, since the next elections are less than 4 years away. That being said, competition is good.
There are a number of things they are doing other than direct money. Buy American acts, sanctions (and other taxes/deductions). Even the threat to do something that isn't done is a powerful tool.
This is a very short-sighted point of view. $8B is about $50 per taxpayer; keeping the US semiconductor industry alive is a better investment - and more important - than whatever you'd spend it on.
I would wholeheartedly agree with you if my tax dollars were going to keeping the US semiconductor industry alive.
Instead we fed however many CHIPS' and other grants' billions of tax dollars to Intel and all we got were five digit layoffs and the ouster of the one CEO who ostensibly had the right idea.
Naw, fuck that. I want my tax dollars spent on anything else other than that or more wars.
Ice Lake wasn't the first iteration of 10nm - that was the disastrous Cannon Lake in 2018.
Yes, yes, yes, of course, the infamous CPU released just so Intel middle managers could get their bonuses. GPU disabled, CPU gimped, the whole thing barely worked at all. Let's call it the 0th iteration of 10nm; it was not real, there was like one laptop in China, the Lenovo IdeaPad 330-15ICN, and even that was a paper launch.
Indeed. Brian Krzanich destroyed Intel.
Most of the stock buybacks happened under Bob Swan though. Krzanich dug the grave of Intel, but it was Swan who kicked the company into it by wasting forty billion. (No wonder he landed at a16z.)
> Certainly feels like preempting news that Intel 18A is delayed.
I think at this point no one believes Intel can deliver. So news or not..
Intel GFX held back the industry 10 years. If people thought Windows Vista sucked it was because Intel "supported" it by releasing integrated GPUs which could almost handle Windows Vista but not quite.
The best they could do with the GFX business is a public execution. We've been hearing about terrible Intel GFX for 15 years and how they are just on the cusp of making one that is bad (not terrible). Most people who've been following hardware think Intel and GFX is just an oxymoron. Wall Street might see some value in it, but the rest of us, no.
My understanding is that most of the complaints about Vista being unstable came from the nvidia driver being rather awful [1]. You were likely to either have a system that couldn't actually run Vista or have one that crashed all the time, unless you were lucky enough to have an ATI GPU.
[1] https://www.engadget.com/2008-03-27-nvidia-drivers-responsib...
The parent is talking about the GMA 900 from the i910-series chipset.
It wasn't fully WDDM compatible, due to a fairly minor part (overall), but the performance was awful anyway, and not being able to run in full WDDM mode (i.e. Aero) didn't help either, partly because running in Aero was actually faster.
> If people thought Windows Vista sucked it was because Intel "supported" it by releasing integrated GPUs which could almost handle Windows Vista but not quite.
What does an OS need a GPU for?
My current laptop only has integrated Intel GPU. I'm not missing Nvidia, with its proprietary drivers, high power consumption, and corresponding extra heat and shorter battery life...
The GUI that >99% of users used to interface with the OS required a GPU to composite the different 2D buffers with fancy effects. IIRC, if you knew how to disable as much of it as possible, the performance without GPU acceleration was not great but acceptable. It really sucked when you had an already slow system and the GPU pretended to support the required APIs, but the implementation didn't satisfy the implied performance expectations: for example, reporting a feature as "hardware accelerated" but implementing it mostly on the CPU inside the GPU driver. Even the things the old Intel GPUs really did in hardware were often a lot slower than on a "real" GPU of the time. Also, the CPU and iGPU constantly fought over the often very limited memory bandwidth.
Compositors are generally switching to being GPU accelerated, not to mention apps will do their own GPU-accelerated UIs just because the OS UI systems are all junk at the moment.
We are at the perfect moment to re-embrace software rasterizers because CPU manufacturers are starting to add HBM and v-cache.
An 8K 32bpp framebuffer is ... omg 126MB for a single copy. I was going to argue that a software rasterizer running on vcache would be doable, but not for 8k.
For 4k, with 32MB per display buffer, it could be possible but heavy compositing will require going out to main memory. 1440p would be even better at only 15MB per display buffer.
For 1440p at 144Hz and 2TB/s (vcache max), best case is an overdraw of 984 frames/frame
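Rough Python version of that math, assuming 32bpp buffers and the ~2 TB/s figure; whether you land on 984 or the ~942 this prints depends on MB-vs-MiB conventions and rounding, the ballpark is the point:

    # Framebuffer sizes at 32bpp, plus the overdraw budget against ~2 TB/s.
    def fb_bytes(w, h, bytes_per_px=4):
        return w * h * bytes_per_px

    for name, (w, h) in {"8K": (7680, 4320), "4K": (3840, 2160), "1440p": (2560, 1440)}.items():
        print(f"{name}: {fb_bytes(w, h) / 2**20:.1f} MiB per buffer")
    # 8K: 126.6 MiB, 4K: 31.6 MiB, 1440p: 14.1 MiB

    bandwidth = 2e12   # assumed ~2 TB/s peak
    refresh_hz = 144
    overdraw = bandwidth / (refresh_hz * fb_bytes(2560, 1440))
    print(f"1440p @ 144Hz: ~{overdraw:.0f} full-screen redraws per frame")  # ~942 with these assumptions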
Does 4k matter?
I was doing a11y work for an application a few months back and got interested in the question of desktop screen sizes. I see all these ads for 4K and bigger monitors, but they don't show up here:
https://gs.statcounter.com/screen-resolution-stats/desktop/w...
And on the Steam hardware survey I am seeing a little more than 5% with a big screen.
Myself, I am swimming in old monitors and TVs, to the point where I am going to start putting Pepper's ghost machines in my windows. I think I want to buy a new TV, but then I get a free old TV. I pick up monitors that are in the hallway that people are tripping on, and I take them home. Hypothetically I want a state-of-the-art monitor with HDR and wide gamut and all that, but the way things are going I might never buy a TV or monitor again.
All the browsers on my machine report my resolution as 1080p despite using 4k. I assume this is because I run at 200% scaling (I believe this is relatively common among anyone using a 4k resolution)
If the above-linked website uses data reported by the browser, I wonder how this scenario might be taken into consideration (or even if such a thing is possible)
A pixel is defined as 1/96th of an inch in the web world so it is dependent on your dpi/scaling. There is a window.devicePixelRatio that JavaScript can use to get actual pixels.
> Does 4k matter?
The PC I'm typing this on has two 27in 4k screens. I'm sitting so that I look at them from about 75cm away (that's 2.5 feet in weird units).
I archive most of my video files in 720p, because I genuinely don't see that big of a difference between even 720p and 1080p. It is definitely visible, but usually, it does not add much to the experience, considering that most videos today are produced to be watchable on smartphones and tablets just as much as cinema screens or huge TVs. I only make an exception here for "cinematic" content that was intended for the very big screen. That does not necessarily mean movies, but also certain YouTube videos, like "Timelapse of the Future": https://youtube.com/watch?v=uD4izuDMUQA - This one hits differently for me in 4K vs. just 1080p. Having to watch this in just 720p would be tragic, because its cinematography relies on 4K's ability to resolve very fine lines.
So why would I make a point to have both my screens be 4K? Because where else do you look at fine lines a lot? You're looking at it right now: Text. For any occupation that requires reading a lot of text (like programming!), 4K absolutely makes a difference. Even if I don't decrease the font size to get more text on screen at once, just having the outlines of the glyphs be sharper reduces eye strain in my experience.
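A quick pixel-density calculation backs this up; the panel size and 75 cm viewing distance are just my setup, and the 60 px/degree line is only the usual 1-arcminute acuity rule of thumb:

    # Pixel density and angular resolution for a 27" panel viewed from 75 cm.
    # Panel size and viewing distance are assumptions, not universal numbers.
    import math

    def ppi(w_px, h_px, diag_in):
        return math.hypot(w_px, h_px) / diag_in

    def px_per_degree(ppi_val, distance_cm):
        # pixels subtended by one degree of visual angle at that distance
        inches_per_degree = 2 * (distance_cm / 2.54) * math.tan(math.radians(0.5))
        return ppi_val * inches_per_degree

    for name, (w, h) in {"4K": (3840, 2160), "1440p": (2560, 1440)}.items():
        density = ppi(w, h, 27)
        print(f"{name} @ 27in: {density:.0f} PPI, ~{px_per_degree(density, 75):.0f} px/degree")
    # ~84 px/degree for 4K vs ~56 for 1440p; ~60 px/degree is roughly 1-arcminute acuity.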
A unit based on the average size of the human foot is not "weird".
Correct. It is insane. Bonkers. Absurd. How the hell can you live with that-stupid.
No. It is in fact completely normal and has been repeated many times in human history. A unit based on an arbitrary fraction of the distance from the north pole to the equator is quite a bit more odd when you think about it.
Remember the days when you would be in danger of your apartment being burglarized and thieves would take your TV or receiver or CD player etc.? Nowadays that's called junk removal and you pay for it! How times have changed...
Curious to hear more about the a11y work.
Run of the mill.
A complex desktop web form with several pages, lots of combo boxes, repeating fields, etc. I cleaned up the WCAG AA issues and even the AAA ones, though the AAA requirement for click targets was out of scope. It had me thinking that I wanted to make labels (say, on a dropdown menu bar) as big as I reasonably could, and that in turn got me thinking about how much space I had to work with on different resolution screens, so I looked up those stats and tried to see what would fit within which constraints.
And by "are generally switching" you're really trying to say "generally switched a decade ago".
'cept for Linux.
GNOME 3 had a hardware accelerated compositor on release in 2011.
Ubuntu came with Compiz by default in 2007: https://arstechnica.com/information-technology/2007/11/ubunt...
“Ubuntu 7.10 is the first version of Ubuntu that ships with Compiz Fusion enabled by default on supported hardware. Compiz Fusion, which combines Compiz with certain components developed by the Beryl community, is a compositing window manager that adds a wide range of visual effects to Ubuntu's desktop environment. The default settings for Compiz enable basic effects—like window shadows and fading menus—that are subtle and unobtrusive. For more elaborate Compiz features, like wobbling windows, users can select the Extra option on the Visual Effects tab of the Appearance Preferences dialog.”
I remember Beryl, those were fun times.
Different desktop but KDE still has the wobbly windows as an option, I enabled them out of nostalgia recently.
Yeah, I worked on that but I didn't think that would count since it was a distro, not a desktop environment. In that case Novell shipped compiz in 2006 so even earlier.
macOS had one in 2000. Windows shortly thereafter.
What?
MacOS in 2000 was still old MacOS, with no compositing at all. The NeXT derived version of MacOS was still in beta, and I tried it back then, it was very rough. Even once OSX shipped in 2001, it was still software composited. Quartz Extreme implemented GPU compositing in 10.2, which shipped in 2002.
Windows finally got a composited desktop in Vista, released in 2007. It was GPU accelerated from day one.
There's a fancy terminal emulator written in Rust that uses GPU acceleration. I mean, it is emulating a needlepoint printer...
If I recall correctly, Vista had a hard dependency on DirectX 9a for Aero. The Intel GPUs embedded in mobile chipsets were almost, but not fully, DX 9a capable, but Intel convinced Microsoft to accept them as "compatible". That created lots of problems for everyone.
IIRC they also implemented some features by reporting them as available and mostly emulating them on the CPU to qualify.
> What does an OS need a GPU for?
https://en.m.wikipedia.org/wiki/Windows_Aero
The modern paradigm of "application blasts out a rectangle of pixels" and "the desktop manager composes those into another rectangle of pixels and blasts them out to the screen".
It actually separates the OS from the GPU. Before WDDM, your graphics device driver was the only software that could use graphics acceleration. After WDDM, the GPU is another "processor" in your computer that can read and write RAM; the application can use the GPU in user space any way it wants, then the compositor can do the same (in user space), and in the end all the OS is doing is managing communication with the GPU.
For that approach to work you need to have enough fill rate that you can redraw the screen several times per frame. Microsoft wanted to have enough they could afford some visual bling, but Intel didn't give to them.
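Here's a toy sketch of that model (resolution and window count are arbitrary, and numpy just stands in for the GPU): each window renders its own buffer, and the compositor does a full-screen blend per window, which is where the fill-rate multiplier comes from.

    # Toy model of the WDDM-style pipeline: each app renders into its own buffer,
    # then the compositor alpha-blends every buffer over the desktop each frame.
    import numpy as np

    H, W = 1080, 1920
    desktop = np.zeros((H, W, 3), dtype=np.float32)  # background
    windows = [np.random.rand(H, W, 3).astype(np.float32) for _ in range(3)]
    alpha = 0.9                                      # per-window opacity

    frame = desktop
    for win in windows:
        frame = alpha * win + (1 - alpha) * frame    # one full-screen blend per window

    # Three windows means the compositor touches roughly 4x the pixels of the
    # final frame every refresh, which is exactly the fill rate the old iGPUs lacked.
    print(frame.shape, "full-screen blends this frame:", len(windows))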
> My current laptop only has integrated Intel GPU.
Which is far more powerful than the ones that caused problems almost two decades ago.
More than you think.
As people noted, most of your GUI is being rendered by it. Every video you watch is accelerated by it, and if it has some compute support, some applications are using it for faster math in the background (mostly image editors, but who knows).
For a smaller gripe: they also bought Project Offset, which looked super cool, to turn into a Larrabee tech demo. Then they killed Larrabee and Project Offset along with it.
> Intel GFX held back the industry 10 years. If people thought Windows Vista sucked it was because Intel "supported" it by releasing integrated GPUs which could almost handle Windows Vista but not quite.
Not sure about that. I had friends with discrete GPUs at the time and they told me that Vista was essentially a GPU stress program rather than an OS.
At the same time, Compiz/Beryl on Linux worked beautifully on Intel integrated GPUs, and they were doing way cooler things than Vista was doing at the time (cube desktops? windows bursting into flames when closed?).
I'm a bit sad that compiz/beryl is not as popular anymore (with all the crazy things it could do).
I've been playing Minecraft fine with Intel GPUs on Linux for about 15 years. Works great. If Windows can't run with these GPUs, that's simply because Windows sucks.
I wonder how big a downside an x86 monopoly would actually be these days (an M4 MacBook being the best perf/watt way to run x86 Windows apps today as it is) and how that compares to the downsides of not allowing x86 to consolidate efforts against rising competition from ARM CPUs.
The problem with the "use the GPU in a SoC" proposition is everyone that makes the rest of a SoC also already has a GPU for it. Often better than what Intel can offer in terms of perf/die space or perf/watt. These SoC solutions tend to coalesce around tile based designs which keep memory bandwidth and power needs down compared to the traditional desktop IMR designs Intel has.
That’s actually a pretty good point, honestly.
I'd like to address the aside for completeness' sake.
An x86 monopoly in the late 80s was a thing, but not now.
Today, there are sufficient competitive chip architectures with cross-compatible operating systems and virtualization that x86 does not represent control of the computing market in a manner that should prevent such a merger: ARM licensees, including the special case of Apple Silicon, Snapdragon, NVIDIA SOCs, RISC-V...
Windows, MacOS and Linux all run competitively on multiple non-x86 architectures.
> An x86 monopoly in the late 80s was a thing, but not now.
Incorrect, we have an even greater lack of x86 vendors now than we did in the 80s. In the 80s you had Intel, and they licensed to AMD, Harris, NEC, TI, Chips & Technologies, and in the 90s we had IBM, Cyrix, VIA, National Semi, NexGen, and for a hot minute Transmeta. Even more smaller vendors.
Today making mass market x86 chips we have: Intel, AMD, and a handful of small embedded vendors selling designs from the Pentium days.
I believe what you meant was that x86 is not a monopoly thanks to other ISAs, but x86 itself is even more of a monopoly than ever.
I believe in the 80s all those vendors were making the same Intel design in their own fabs. I don't think any of them did the design on their own. In the 90s some of them had their own designs.
Some were straight second sources, but they all had the license to do what NEC, AMD, and OKI did, which is alter the design and sell those variants. They all started doing that with the 8086. There were variants of the 8086, 8088, and 80186; I'm unaware of variants of the 80188 or 80286, although there were still multiple manufacturers (I had a Harris 286 at 20MHz myself). Then with the 386 there were more custom variants of the 386 and 486. In the Pentium days Intel wouldn't license the Pentium design, but there were compatible competitors, as AMD began 100% custom designs with the K5 and K6 lines that were only ISA compatible and pin compatible.
At what point do we call a tweak to an original design different enough to count? The K5 and K6 were clearly new designs. The others were mostly Intel with some changes. I'm going to count the rest as minor tweaks and not worth counting otherwise, but this is a case where you can argue where the line is, and so others need to decide where they stand (if they care).
The NEC V20/30 series were significant advances over their Intel equivalent (basically all the 186 features plus more in an 8086/8 compatible package).
C&T had some super-386 chips that apparently barely made it to market (38605DX), and the Cyrix 5x86 (most of a 6x86) is substantially different from the AMD one (which is just a 486 clock-quadrupled)
I called the K5 and 6 new designs, I said they were only ISA and pin compatible, but not the same design.
> An x86 monopoly in the late 80s was a thing, but not now.
I think you're off by 20 years on this. In the 80s and early 90s we had reasonable competition from 68k, powerpc, and arm on desktops; and tons of competition in the server space (mips, sparc, power, alpha, pa-risc, edit: and vax!). It wasn't till the early 2000s that both the desktop/server space coalesced around x86.
Thank you for saying this. It's clear that processors are going through something really interesting right now after an extended dwindling and choke point onto x86. This x86 dominance has lasted entire careers, but from a longer perspective we're simply seeing another cycle in ecosystem diversity, specialized functions spinning out of and back into unified packages, and a continued downward push from commoditization forces that are affecting the entire product chain from fab to ISA licensing. We're not quite at the wild-west of the late 80s and 90s, but something's in the air.
It seems almost like the forces that are pushing against these long-term trends are focused more on trying to figure out how to saturate existing compute on the high-end, and using that to justify drives away from diversity and vertical integrated cost/price reduction. But there are, long-term, not as many users who need to host this technology as there are users of things like phones and computers who need the benefits the long-term trends provide.
Intel has acted somewhat as a rock in a river, and the rest of the world is finding ways around them after having been dammed up for a bit.
I remember when I was a senior in undergrad (1993), the profs were quite excited about the price/performance of 486 computers, which thoroughly trashed the SPARC-based Sun workstations that we'd transitioned to because Motorola rug-pulled the 68k. Sure, we were impressed by the generation of RISC machines that came out around that time, like SPARC, PA-RISC, and PowerPC, but in retrospect it was not that those RISC machines were fast; it was that 68k was dying while x86 was keeping up.
> It wasn't till the early 2000s that both the desktop/server space coalesced around x86.
A lot of companies killed off their in-house architectures and hopped on the Itanium bandwagon. The main two exceptions were Sun and IBM.
The bandwagon was actually an Ice Cream truck run by the old lady from the Sponge Bob movie.
Intel had just wiped the floor with x86 servers; all the old-guard Unix vendors with their own chips were hurting. Then Intel made the rounds with a glorious plan for how they were going to own the server landscape for a decade or more. So, in various states of defeat and grief, much of the industry followed them. Planned or not, the resulting rug pull really screwed them over. The organs that operated those lines of business were fully removed. It worked too well; I am going to say it was by accident.
Intel should have broken up its internal x86 hegemony a long time ago, which they have been trying since the day it was invented. Like the 6502, it was just too successful for its own good. Only x86 also built up the Vatican around itself.
X86 is more than just the ISA. What’s at stake is the relatively open PC architecture and hardware ecosystem. It was a fluke of history that made it happen, and it would be sad to lose it.
PCI-e is the culmination of that ecosystem, and like PCI before it, is available on all architectures to anyone who pays PCI-SIG.
PCIe is great, yes.
Sadly with the rise of laptops with soldered-in-everything, and the popularity of android/iphone/tablet devices, I share some of layer8's worries about the future of the relatively open PC architecture and hardware ecosystem.
On the one hand I do get the concern, on the other there’s never been a better time to be a hardware hacker. Cheap microcontrollers abound, raspberry pi etc, cheap fpgas, one can even make their own asic. So I just can’t get that worked up over pc architectures getting closed.
Hacking on that level is very different from building and upgrading PCs, being able to mix and match components from a wide range of different manufacturers. You won’t or can’t build a serious NAS, Proxmox homelab, gaming PC, workstation, or GPU/compute farm from Raspberry Pis or FPGAs.
We are really lucky that such a diverse and interoperable hardware platform like the PC exists. We should not discount it, and instead appreciate how important it is, and how unlikely for such a varied and high-performance platform to emerge again, should the PC platform die.
Today sure. If you want to custom make "serious" system then x86 is likely your best bet. But this isn't about today, you can have that system right now if you want, it's still there, so this is about the future.
All the use cases, except gaming PC, have "less serious" solutions in Linux/ARM and Linux/RISCV today, where I would argue there is more interoperability and diversity. Those solutions get better and closer to "serious" x86 solutions every day.
Will they be roughly equivalent in price/performance in 5 years... only time will tell, but I suspect the x86 PC is the old way and it's on its way out.
You can't really build a PC with parts other than x86. The only other platform you can really build from parts is Arm, with the high end Ampere server chips. Most other platforms are usually pretty highly integrated, you can't just swap parts or work on it.
What about the POWER9-based Talos II systems? Extraordinary niche, I know, but aren't they PC-ish?
Why not? Ram is ram, storage is storage.
You can't just buy an ARM or POWER motherboard from one place, a CPU from another place, some RAM sticks from another place, a power supply, a heatsink/fan, some kind of hard drive (probably NVMe these days), a bunch of cables, and put them all together in your basement/living room and have a working system. With x86, this is pretty normal still. With other architectures, you're going to get a complete, all-in-one system that either 1) has no expandability whatsoever, at least by normal users, or 2) costs more than a house in NYC and requires having technicians from the vendor to fly to your location and stay in a hotel for a day or two to do service work on your system for you because you're not allowed to touch it.
I was only just today looking for low-power x86 machines to run FreePBX, which does not yet have an ARM64 port. Whilst the consumer computing space is now perfectly served by ARM and will soon be joined by RISC-V, if a widely-used piece of free and open source server software is still x86-only, you can bet that there are thousands of bespoke business solutions that are locked to the ISA. A monopoly would hasten migration away from these programs, but would nonetheless be a lucrative situation for Intel-AMD in the meantime.
The fact that C++ development has been effectively hijacked by the "no ABI breakage, ever"/backwards compatibility at all costs crowd certainly speaks to this.
https://herecomesthemoon.net/2024/11/two-factions-of-cpp/
There are a lot of pre-compiled binaries floating about that are depended on by lots of enterprise software whose source code is long gone, and these are effectively locked to x86_64 chips until the cost of interoperability becomes greater than reverse engineering their non-trivial functionality.
The C++ language spec doesn't specify and doesn't care about ABI (infamously so; it's kept the language from being used in many places, and where people ignored ABI compat initially but absolutely needed it later, as with BeOS's Application Kit and Mac kexts, it's much harder to maintain than it should be).
"two factions" is only discussing source compatibility.
They had ABI breakage when C++11 support was implemented in GCC 5 and that was extremely painful. Honestly, I still wish that they had avoided it.
You can still use the old ABI with -D_GLIBCXX_USE_CXX11_ABI=0
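A minimal sketch of what that looks like in practice, assuming GCC/libstdc++ (old_abi_check.cpp and liblegacy.a are made-up names, not a real project):

    // old_abi_check.cpp -- build against the pre-GCC-5 libstdc++ ABI, e.g.:
    //   g++ -D_GLIBCXX_USE_CXX11_ABI=0 old_abi_check.cpp liblegacy.a -o check
    // (liblegacy.a stands in for a binary-only library built before GCC 5.)
    #include <iostream>
    #include <string>

    // With _GLIBCXX_USE_CXX11_ABI=0, std::string is the old copy-on-write type and
    // mangles as std::basic_string rather than std::__cxx11::basic_string, so the
    // symbols line up with objects compiled under the old ABI.
    std::string greet() { return "linked against the old ABI"; }

    int main() {
        std::cout << greet() << "\n";
        return 0;
    }

The catch is that every translation unit and library in the process has to agree on the macro, which is a big part of why the GCC 5 transition was so painful.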
Surely there must be an emulator you could use?
Indeed, I could use QEMU to run FreePBX on ARM64. However, the performance penalty would be pretty severe, and there isn't anything inherent to FreePBX that should prevent it from running natively on ARM64. It simply appears that nobody has yet spent the time to iron out any problems and make an official build for the architecture, but unfortunately I think there is still loads of other software in a similar situation.
Microsoft (and other parties) already demonstrated quite effective x86 emulation, opening the migration path away from this anachronistic ISA.
I believe the "x86 monopoly" was meant to refer to how only Intel and AMD are legally allowed to make x86 chips due to patents. x86 is currently a duopoly, and if Intel and AMD were to merge, that would become a monopoly.
This is how I interpreted it as well. The others seem to be arguing due to a misunderstanding of what was said/meant.
Didn't AMD start making x86 chips in 1982?
That seems correct from some quick Wikipedia reading, but I don't understand what it has to do with anything?
The existence of an imaginary x86 monopoly in the 80s?
Oh, but xbar's interpretation of the phrase "x86 monopoly" is clearly the x86 architecture having a monopoly in the instruction set market. Under that interpretation, I don't really think it's relevant how many companies made x86 chips. I don't think xbar is necessarily wrong, I just think they're interpreting words to mean something they weren't intended to, so they're not making an effective argument.
Did x86 have a monopoly in the 80s to begin with? If there is any period when that was true it would be the 2000s or early 2010s.
> intended to, so they're not making an effective argument
To be fair I'm really struggling to somehow connect the "x86 monopoly in the late 80s" with the remainder of their comment (which certainly makes sense).
x86 didn't have a monopoly, but IBM PC clones were clearly what everyone was talking about and there the monopoly existed. There are lots of different also ran processors, some with good market share in some niche, but overall x86 was clearly on the volume winners track by 1985.
> but overall x86 was clearly on the volume winners track by 1985.
By that standard, if we exclude mobile, x86 has a much stronger monopoly these days than in 1985. Unless we exclude low-end PCs like the Apple II and Commodore 64.
In 1990 x86 had ~80%, Apple ~7%, Amiga ~4% (with the remainder going to low-end or niche PCs), so again not that different from today.
This is all very true and why I think a merger between AMD and Intel is even possible. Nvidia and Intel is also a possible merger, but I actually think there is more regulatory concern with NVIDIA and how big and dominant they are becoming.
Intel and Samsung could be interesting, especially if it would get Samsung to open up more. Samsung would get better GPUs and x86, Intel gets access to the phone market and then you end up with things like x86 Samsung tablets that can run both Windows or Android.
Could also be Intel and Micron. Then you end up with full stack devices with Intel CPUs and Micron RAM and storage, and the companies have partnered in the past.
Samsung has its own leading edge fabrication plants. Merging the two would drop the number of leading edge foundries from 3 to 2.
Isn't Intel's main problem that they've ceased to be a leading edge foundry?
Maybe they should follow AMD's lead and spin off the foundry business.
What part of a Samsung merger do you think would help them enter the phone market? My layman's understanding of history is that Intel tried and failed several times to build x86 chips for phones and they failed for power consumption reasons, not for lack of access to a phone maker willing to try their chips or anything like that.
They failed primarily for pricing reasons. They could make a low power CPU competitive with ARM (especially back then when Intel had the state of the art process), but then they wanted to charge a premium for it being x86 and the OEMs turned up their nose at that.
Samsung still has a fairly competitive process and could make x86 CPUs to put in their own tablets and laptops without having the OEM and Intel get into a fight about margins if they're the same company. And with the largest maker of Android devices putting x86 CPUs into them, you get an ecosystem built around it that you wouldn't when nobody is using them to begin with because Intel refuses to price competitively with ARM.
> An x86 monopoly in the late 80s was a thing, but not now
And then in the 2000s, AMD64 pretty much destroyed all competing architectures, and in the 2010s Intel itself was effectively almost a monopoly (outside of mobile), with AMD on the verge of bankruptcy.
Itanium’s hype killed the competing architectures. AMD64 then took over since it was cost effective and fast.
> x86 monopoly
Wintel was a duopoly which had some power: Intel x86 has less dominance now partly because Windows has less dominance.
There are some wonderful papers on how game theory and monopoly plays out between Windows and Intel; and there's a great paper with analysis of why AMD struggled against the economic forces and why Microsoft preferred to team up with a dominant CPU manufacturer.
Ooh got links?
I could see Broadcom picking up x86.
This is great "write a horror story in 7 words" content.
Ok, I'd like to pitch a Treehouse of Horror episode.
Part 1, combine branch predictor with the instruction trace cache to be able to detect workloads, have specific licenses for say Renderman, Oracle or CFD software.
Part 2, add a mesh network directly to the CPU, require time based signing keys to operate. Maybe every chip just has starlink included.
Part 3, in a BMW rent-your-seats move, the base CPU is just barely able to boot the OS; specific features can be unlocked with signed payloads. Using Shamir secret sharing so that Broadcom AND the cloud provider are both required for signing the feature request. One can rent AVX512, more last-level cache, ECC, overclocking, underclocking.
The nice part about including radios directly in the CPUs is that updates can be applied without network connectivity, and you can geofence your feature keys.
This last part we can petition the government to require as the grounds of being able to produce EAR regulated CPUs globally.
I think I'll just sell these patents to Myhrvold.
sir there are children reading this site.
I'm not sure I've ever laughed this much at a HN comment chain
Yeah, what both companies would need to be competitive in the GPU sector is a CUDA killer. That's perhaps the one benefit of merging: Antel could more easily standardize something.
You don't get a CUDA killer without the software infrastructure.
Intel finally seem to have got their act together a bit with OneAPI but they've languished for years in this area.
They weren't interested in creating an open solution. Both Intel and AMD have been somewhat short-sighted and looked to recreate their own CUDA, and mutual mistrust has prevented them from converging on a solution that works for both of them.
Disclaimer: I work on this stuff for Intel
At least for Intel, that is just not true. Intel's DPC++ is as open as it gets. It implements a Khronos standard (SYCL), most of the development is happening in public on GitHub, it's permissively licensed, it has a viable backend infrastructure (with implementations for both CUDA and HIP). There's also now a UXL foundation with the goal of creating an "open standard accelerator software ecosystem".
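To make that concrete, here's a rough sketch of a SYCL 2020 vector add, assuming a DPC++-style compiler (the icpx -fsycl invocation is just one example, not the only way to build it). The point is that it's plain standards-based C++, and the same source can be retargeted at CUDA or HIP backends:

    // vecadd.cpp -- build with e.g.: icpx -fsycl vecadd.cpp -o vecadd
    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        sycl::queue q;  // default device selection: GPU if present, else CPU
        std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
        {
            sycl::buffer<float, 1> A(a.data(), sycl::range<1>(a.size()));
            sycl::buffer<float, 1> B(b.data(), sycl::range<1>(b.size()));
            sycl::buffer<float, 1> C(c.data(), sycl::range<1>(c.size()));
            q.submit([&](sycl::handler& h) {
                sycl::accessor ra(A, h, sycl::read_only);
                sycl::accessor rb(B, h, sycl::read_only);
                sycl::accessor wc(C, h, sycl::write_only);
                h.parallel_for(sycl::range<1>(1024), [=](sycl::id<1> i) {
                    wc[i] = ra[i] + rb[i];
                });
            });
        }  // buffers go out of scope here, so results are copied back to the host vectors
        std::cout << c[0] << "\n";  // prints 3
        return 0;
    }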
This is all great, but how can we trust this will be supported next year? After Xeon Phi, Omnipath, and a host of other killed projects, Intel is approaching Google levels of mean time to deprecation.
Neat. Now release 48GB GPUs to the hobbyist devs and we'll use Intel for LLMs!
Apple is your savior if you are looking at it as a CPU/GPU/NPU package for consumer/hobbyists.
I decided that I have to start looking at Apple's AI docs
The Intel A770 is currently $230 and 48GB of GDDR6 is only like a hundred bucks, so what people really want is to combine these things and pay $350 for that GPU with 48GB of memory. Heck, even double that price would have people lining up.
Apple will sell you a machine with 48GB of memory for thousands of dollars but plenty of people can't afford that, and even then the GPU is soldered so you can't just put four of them in one machine to get more performance and memory. The top end 40-core M4 GPUs only have performance comparable to a single A770, which is itself not even that fast of a discrete GPU.
Actual links to the github would be much appreciated, as well a half-page tuto on how to get this up an running on a simple Linux+Intel setup.
This link should cover everything: https://www.intel.com/content/www/us/en/developer/tools/onea...
What's happening with Intel OpenVINO? That seemed like their CUDA-ish effort.
OpenCL was born as a CUDA-alike that could be applied to GPUs from AMD and NVIDIA, as well as general-purpose CPUs. NVIDIA briefly embraced it (in order to woo Apple?) and then just about abandoned it to focus more on CUDA. NVIDIA abandoning OpenCL meant that it just didn't thrive. Intel and AMD both embraced OpenCL. Though admittedly I don't know the more recent history of OpenCL.
This meme comes up from time to time but I'm not sure what the real evidence for it is or whether the people repeating it have that much experience actually trying to make compute work on AMD cards. Every time I've seen anyone try the problem isn't that the card lacks a library, but rather that calling the function that does what is needed causes a kernel panic. Very different issues - if CUDA allegedly "ran" on AMD cards that still wouldn't save them because the bugs would be too problematic.
> Every time I've seen anyone try the problem isn't that the card lacks a library, but rather that calling the function that does what is needed causes a kernel panic.
Do you have experience with SYCL? My experience with OpenCL was that it's really a PITA to work with. The thing CUDA makes nice is the direct and minimal exercise to start running GPGPU kernels: write the code, compile with nvcc, cudaed.
OpenCL had just a weird dance to perform to get a kernel running. Find the OpenCL device using a magic filesystem token. Ask the device politely if it wants to OpenCL. Send over the kernel string blob to compile. Run the kernel. A ton of ceremony, and then you couldn't be guaranteed it'd work because the likes of AMD, Intel, or nVidia were all spotty on how well they'd support it.
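For anyone who hasn't suffered through it, a rough sketch of that dance in OpenCL host code (error checking omitted; assumes OpenCL 2.x headers and an installed ICD, so treat it as illustrative rather than copy-paste ready):

    #include <CL/cl.h>
    #include <cstdio>

    // The kernel travels as a string and gets compiled by the driver at runtime.
    static const char* src =
        "__kernel void add(__global const float* a, __global const float* b, __global float* c) {"
        "  int i = get_global_id(0); c[i] = a[i] + b[i]; }";

    int main() {
        cl_platform_id platform; cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);                            // find a platform
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);  // ask it for a GPU
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueueWithProperties(ctx, device, NULL, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);              // hope the driver's compiler agrees
        cl_kernel k = clCreateKernel(prog, "add", NULL);

        float a[4] = {1, 1, 1, 1}, b[4] = {2, 2, 2, 2}, c[4] = {0};
        cl_mem A = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
        cl_mem B = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
        cl_mem C = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);
        clSetKernelArg(k, 0, sizeof A, &A);
        clSetKernelArg(k, 1, sizeof B, &B);
        clSetKernelArg(k, 2, sizeof C, &C);

        size_t n = 4;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, C, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
        printf("%f\n", c[0]);  // 3.0 if platform, device, driver, and compiler all cooperated
        return 0;
    }

The equivalent CUDA program is basically the kernel plus a cudaMalloc/cudaMemcpy/launch, which is the gap being described.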
SYCL seems promising but the ecosystem is a little intimidating. It does not seem (and I could be wrong here) that there is a de facto SYCL compiler. The goals of SYCL compilers are also fairly diverse.
> Do you have experience with SYCL?
No, I bought a Nvidia card and just use CUDA.
> OpenCL had just a weird dance to perform to get a kernel running...
Yeah but that entire list, if you step back and think big picture, probably isn't the problem. Programmers have a predictable response to that sort of silliness. Build a library over it & abstract it away. The sheer number of frameworks out there is awe-inspiring.
I gave up on OpenCL on AMD cards. It wasn't the long complex process that got me, it was the unavoidable crashes along the way. I suspect that is a more significant issue than I realised at the time (when I assumed it was just me) because it goes a long way to explain AMD's pariah-like status in the machine learning world. The situation is more one-sided than can be explained by just a well-optimised library. I've personally seen more success implementing machine learning frameworks on AMD CPUs than on AMD's GPUs, and that is a remarkable thing. Although I assume in 2024 the state of the game has changed a lot from when I was investigating the situation actively.
I don't think CUDA is the problem here, math libraries are commodity software that give a relatively marginal edge. The lack of CUDA is probably a symptom of deeper hardware problems once people stray off an explicitly graphical workflow. If the hardware worked to spec I expect someone would just build a non-optimised CUDA clone and we'd all move on. But AMD did build a CUDA clone and it didn't work for me at least - and the buzz suggests something is still going wrong for AMD's GPGPU efforts.
> Programmers have a predictable response to that sort of silliness. Build a library over it & abstract it away
Impossible. GPGPU runtimes are too close to hardware, and the hardware is proprietary with many trade secrets. You need support from GPU vendors.
BTW, if you want reliable cross-vendor GPU, just use Direct3D 11 compute shaders. Modern videogames use a lot of compute, to the point that UE5 even renders triangle meshes with compute shaders. AMD hardware is totally fine, it’s the software ecosystem.
There are already packages that let people run CUDA programs unmodified on other GPUs: see https://news.ycombinator.com/item?id=40970560
For whatever reason, people just delete these tools from their minds, then claim Nvidia still has a monopoly on CUDA.
And which of these has the level of support that would let a company put a multi-million-dollar project on top of it?
We have trillions of dollars riding on one-person open-source projects. This is not the barrier for "serious businesses" that it used to be.
Resilience is not as valued as it should be... Average bus factor is how small these days? :/
What are you talking about?
Those packages only really perform with low-precision work. For scientific computing, using anything but CUDA is a painful workflow. DOE has been deploying AMD and Intel alternatives in their leadership class machines and it's been a pretty bad speedbump.
('DOE' = US Department of Energy)
There's already a panoply of CUDA alternatives, and even several CUDA-to-non-Nvidia-GPU alternatives (which aren't supported by the hardware vendors and are in some sense riskier). To my knowledge (this isn't really my space), many of the higher-level frameworks already support these CUDA alternatives.
And yet still the popcorn gallery says "there's no [realistic] alternative to CUDA." Methinks the real issue is that CUDA is the best software solution for Nvidia GPUs, and the alternative hardware vendors aren't seen as viable competitors for hardware reasons, and people attribute the failure to software failures.
> There's already a panoply of CUDA alternatives
Is there?
10 years ago, I burned about 6 months of project time slogging through AMD / OpenCL bugs before realizing that I was being an absolute idiot and that the green tax was far cheaper than the time I was wasting. If you asked AMD, they would tell you that OpenCL was ready for new applications and support was right around the corner for old applications. This was incorrect on both counts. Disastrously so, if you trusted them. I learned not to trust them. Over the years, they kept making the same false promises and failing to deliver, year after year, generation after generation of grad students and HPC experts, filling the industry with once-burned-twice-shy received wisdom.
When NVDA pumped and AMD didn't, presumably AMD could no longer deny the inadequacy of their offerings and launched an effort to fix their shit. Eventually I am sure it will bear fruit. But is their shit actually fixed? Keeping in mind that they have proven time and time and time and time again that they cannot be trusted to answer this question themselves?
80% margins won't last forever, but the trust deficit that needs to be crossed first shouldn't be understated.
This is absolutely it. You pay the premium not to have to deal with the BS.
> alternative hardware vendors aren't seen as viable competitors for hardware reasons, and people attribute the failure to software failures.
It certainly seems like there's a "nobody ever got fired for buying nvidia" dynamic going on. We've seen this mentality repeatedly in other areas of the industry: that's why the phrase is a snowclone.
Eventually, someone is going to use non-nvidia GPU accelerators and get a big enough cost or performance win that industry attitudes will change.
> There's already a panoply of CUDA alternatives, and even several CUDA-to-non-Nvidia-GPU alternatives (which aren't supported by the hardware vendors and are in some sense riskier). To my knowledge (this isn't really my space), many of the higher-level frameworks already support these CUDA alternatives.
On paper, yes. But how many of them actually work? Every couple of years AMD puts out a press release saying they're getting serious this time and will fully support their thing, and then a couple of people try it and it doesn't work (or maybe the basic hello world test works, but anything else is too buggy), and they give up.
Why doesn’t NVIDIA buy intel? They have the cash and they have the pairing (M chips being NVIDIA and intel’s biggest competitors now). It would be an AMD/ATI move, and maybe NVIDIA could do its own M CPU competitor with…whatever intel can help with.
They don't need it; they have Grace.
Why would you want this kind of increased monopolization? That is, CPU companies also owning the GPU market?
Is it a lot more competitive for Nvidia to just keep winning? I feel like you want two roughly good choices for GPU compute, and AMD needs a shot in the arm for that somewhere.
It is absolutely more competitive when nVidia is a separate company from Intel so they can't pull shit like "our GPUs only work with our CPUs" like Intel is now pulling with their WiFi chips.
WGSL seems like a nice standard everyone could get behind
Reuters has some inside information: https://www.reuters.com/business/intel-ceo-pat-gelsinger-ret...
>Gelsinger, who resigned on Dec. 1, left after a board meeting last week during which directors felt Gelsinger's costly and ambitious plan to turn Intel around was not working and the progress of change was not fast enough, according to a person familiar with the matter. The board told Gelsinger he could retire or be removed, and he chose to step down, according to the source.
Thanks, finally a signal in this thread of noise. I found it unbelievable that all media presented it as if it was his decision. Of course it wasn't.
I "predicted" this three months ago (really, it was inevitable), but gave it 1-6 months.
Now for the what-happens-next popcorn. In a normal world, they would go bankrupt, but this is very much not a normal world.
There are precious few companies who can turn rocks into thinking machines, Intel is one of them for better or worse. It’s a national security issue.
A lot of people on this thread are underestimating how much of a hold Intel has on the chips industry. In my experience, Intel is synonymous with computer chip for the average person. Most people wouldn't be able to tell you what AMD does differently, they'd just say they're a knockoff Intel. Technologically, both companies are neck and neck. But for the average person, it's not even close.
Marketing campaigns only go so far. They’ve been riding the “Intel Inside” slogan for 25 years.
In the meantime, AMD/ARM have already won phones, tablets, and game consoles.
Server purchasing decisions aren’t made by everyday people. Intel’s roadmap in that space slipped year for year for at least 10 of the last 15 years.
That leaves Intel with the fraction of the non-mac laptop market that’s made up of people that haven’t been paying attention for the last ten years, and don’t ask anyone who has.
I work in video games and I think it is still sometimes a problem to use computers that are not based on x86 processors, both in the toolchains and in software/engines. People here say that Intel has lost out on consoles and laptops, but in gaming that is because of x86-compatible AMD chips. Apple laptops were good for gaming when they had x86 and could dual boot. I see bugs people report on games made for Macs with x86 that don't work quite right with an Mx chip (though not a huge number).
A friend who worked in film post production was telling me about similar rare but annoying problems with Mx Apple computers. I feel like there are verticals where people will favor x86 chips for a while yet.
I am not as close to this as I was when I actually programmed games (oh so long ago!) so I wonder if this is just the point of view of a person who has lost touch with trends in tech.
That's because the GPU is very different on a Mac. The x86 emulation is perfect.
>In the meantime, AMD/ARM have already won phones, tablets, and game consoles.
Don't forget laptops. Intel has been terrible on laptops due to their lack of efficiency. AMD has been wiping the floor with them for years now.
2024 is the first year that Intel has released a laptop chip that can compete in efficiency. I hope Intel continues to invest in this category and remains neck and neck with AMD if we have any hope of having Windows laptops with decent battery life.
I doubt most people actually care about efficiency in a laptop. My wife is my anecdotal example. She's had a mac for years but refuses to give Apple one more penny because they've been awful - had to replace her previous laptop motherboard 7 times until we finally had to sue Apple in a class action which resulted in them sending her current 2015 MBP, which has now aged-out of MacOS updates. Sucks that this computer is now basically a paperweight.
Anyway, in my questions for her about what she really cares about in a new laptop, power efficiency was not a concern of hers. She does not care about efficiency at all. All she cared about was a good enough screen (2560x1440 or better), and a fast CPU to run the new Photoshop features, and the ability to move it from one location to another (hence the need for a laptop instead of a desktop). I'd wager that for most people, the fact that it's a portable computer has nothing to do with how long the battery lasts away from an outlet. She can transport the computer to another location and plug it in. There are very few situations that require extended use away from an outlet, and even in an airplane, we often see 120V outlets at the seats. There's really no use case for her that puts her away from an outlet for longer than an hour or two, so efficiency is the least of her concerns in buying a new laptop.
So we went with a new Dell laptop with the Intel i9-13900HX, which beats the Apple M4 Max 16 Core in terms of overall performance in CPU benchmarks. I would have looked at an AMD based laptop, but the price on this Dell and the performance of the i9 were great, it was $999 on sale. It's got a decent enough screen, and we can easily upgrade the RAM and storage on this laptop.
I doubt she'd even care if the new laptop didn't have a battery at all, so long as she can easily stuff it in a bag and carry it to another location and plug it in. I feel the exact same way, and I recently bought a new (AMD based) laptop, and power efficiency was not a thing in my decision making process at all. The battery lasts a few hours, and that's plenty. I don't get a hard-on for battery life, and I'm not really sure who does. Are these people dancing around with their laptops and simply can't sit still and plug it in?
I know plenty of people who care. Why? They always forget to plug in their laptop and then they want to open it and it's dead. Not to mention, x86 Windows machines do a poor job of going to sleep.
Handed the wife M2 Macbook Air and she's thrilled how little she has to plug it in. She goes weeks between charges sometimes.
> her current 2015 MBP, which has now aged-out of MacOS updates
Not trying to invalidate or lessen your complaint (which I completely agree with) but want to make sure you are aware of OpenCore Legacy Patcher. It's a little hacky by nature but can give some extra life to that machine: https://dortania.github.io/OpenCore-Legacy-Patcher/MODELS.ht...
yeah, I looked at it, but this MBP has a speaker that died, the SD card reader died a long time ago, had to replace the battery, it's slow, and it doesn't really play nicely on the SMB network, etc, etc. I'll be glad when it's gone. And OpenCore patcher seemed like a lot of hassle to keep putting up with this machine. Thanks for suggesting it though.
Efficiency and performance are heavily correlated in portable devices. You only have a certain TDP that you can utilize before the device throttles due to a lack of heat dissipation. The more efficient a CPU you have, the more you can accomplish before you hit temps that will affect performance.
There is another angle at power efficiency: my work laptop is so bad, moderate load makes the fan spin and higher load creates a very annoying noise due to the cooling needs. All these while performance is far from stellar (compared to my desktop).
>That leaves Intel with the fraction of the non-mac laptop market that’s made up of people that haven’t been paying attention for the last ten years, and don’t ask anyone who has.
Evidently, that leaves Intel the majority of the market.
Remember, most people don't care as much as you or I. If they're going to buy a laptop to do taxes or web browsing or something, they will probably be mentally biased towards an Intel-based chip. Because it's been marketed for so long, AMD comparatively seems like a super new brand.
If they don't buy a Mac, they'll be biased to whatever BestBuy sells them.
People miss this. A lot of people will only buy Intel. Businesses and IT departments rarely buy AMD, not just out of brand loyalty, but because of the software and hardware features Intel deploys that are catered to the business market.
This is in large part an OEM issue. Dell or HP will definitely have an Intel version of the machine you are looking for, but AMD versions are hit and miss.
I think this is partly because big OEMs doubt (used to doubt?) AMD’s ability to consistently deliver product in the kind of volume they need. Partly it’s because of Intel’s historically anticompetitive business practices.
Hasn't changed, there was an article back in September saying that the relationship between AMD and laptop OEMs is rocky:
> Multiple reports, citing sources at laptop OEMs, have covered what is said to be poor support, chip supply, and communication from AMD with its laptop partners, leading to generally poor execution. Chip consultancy firm AC Analysis says AMD's shift of focus to AI and data centers has led to a "'Cold War ice age' in relationships with OEMs," leading to a loss of trust from its partners.
https://www.tomshardware.com/tech-industry/amds-laptop-oems-...
>A lot of people will only buy Intel. Businesses and IT departments rarely buy AMD
That's because Intel bribed OEMs to use only Intel chips
Intel’s board is (or should be!) in exactly the right position to assess whether this dam is springing leaks. (It is.)
Last report I read it was ~80% (Intel) vs ~20% (AMD) for PC market. And ~75% (Intel) vs ~25% (AMD) for data center servers.
> And ~75% (Intel) vs ~25% (AMD) for data center servers.
IIRC their data center CPU revenue was about even this quarter so this is a bit deceptive (i.e. you can buy 1 large CPU instead of several cheaper ones).
From https://www.tomshardware.com/pc-components/cpus/amds-desktop...
"When it comes to servers, AMD's share totaled 24.2%"
and
"Intel, of course, retained its volume lead with a 75.8% unit market share."
Market share is often measured in install base.
Those two terms are related but definitely are never interchangeable. Market share is the portion of new sales a company is getting. Install base is the portion of existing in-use products that were from that company. Install base is essentially market share integrated over time, less systems that are discarded or otherwise taken out of service. If market share never changes, install base will approach the same proportions but it's a lagging indicator.
Sure, but if the point is showing how Intel isn't really in such a bad spot as one might think just looking at the install base would be pretty deceiving and semi-meaningless.
Of a market that is dying, squeezed between two that are growing at the edges. Mobile and servers clearly trump personal compute. This is the market devaluing Intel.
To be fair, it's not like there is that much profit in mobile either. ARM CPUs are almost a commodity and the margins aren't that great.
I think data center revenue was in AMD's favor because AMD is second (obviously far behind NVidia) and Intel is third in AI accelerators, which both companies include in their data center numbers. So that pushes things in AMD's favor. I think on data center CPU's alone Intel is still ahead.
Data center revenue is not just CPU. It includes MI300 et al. So that's why data center revenue can be roughly equivalent between AMD & Intel while CPU revenue is still predominantly Intel.
Steam hardware survey: https://store.steampowered.com/hwsurvey/processormfg/
Windows:
Intel: 64.23%
AMD: 35.71%
Linux:
Intel: 30.15%
AMD: 69.85%
Why do you think gaming community survey would be more relevant than Intel/AMD earning reports in which they unambiguously, for the most part, lay out the earnings per CPU type?
For PC’s that can’t be right. For overall consumer, Windows is at 25.75%, Linux is 1.43% and MacOS is at 5.53%.
Ignoring ChromeOS, and assuming 100% of windows and linux is x86 (decreasingly true - the only win11 I’ve ever seen is an arm VM on my mac) and 100% of Mac is arm (it will be moving forward), that puts arm at 20% of the PC market.
Interpolation from your numbers puts intel at 64% (with a ceiling of 80% of PC; 25% of consumer computing devices unless windows makes a comeback).
https://gs.statcounter.com/os-market-share
There is a common usage of “PC” that excludes Macs, Chromebooks, and the like. It means the x86-based PC platform descendant from IBM PC compatibles, with BIOS and all.
I dunno, I've seen more and more people referencing the crash bugs in the latest gens and how Intel lied about it through their teeth. And Intel having lost to Apple on the CPU front, never having caught up to Nvidia on the GPU front, and basically just not doing anything for the last decade certainly hasn't helped their reputation.
Let them die. Maybe we'd actually see some new competition?
I doubt many people are making purchasing decisions based on Intel branding. Any kind of speed advantage has not been a dominant factor in the minds of most low information/brand influenceable consumers who are buying x86 machines. Everybody else looks at reviews and benchmarks where Intel has to show up with a good product and their branding doesn't get them much.
> The moat of the two companies together would give the new combined company plenty of time to ramp up their fabs.
Doubt.
Neither of the companies is particularly competitive on either processor or GPU architecture nor fabrication.
A merger of those entities looks like nothing but a recipe for further x86 stagnation and an even quicker death for the entities involved imho.
In particular I cannot see what's good in it for AMD. The fabs have no use/clear path forward. Their processors/GPUs either match or outmatch the Intel offering.
A Broadcom/Apple takeover of Intel sounds much more reasonable.
> A Broadcom/Apple takeover of Intel sounds much more reasonable.
Out of curiosity, what would make Intel interesting to Apple? Apple already acquired Intel's modem business and they have their own CPU and GPU.
I think Apple has the cash, culture and management to make the fabs work.
But I'm just speculating.
Apple is really good at making OTHER PEOPLE'S fabs work for their purposes. Running their own manufacturing was never particularly a forte.
Apple currently really enjoys being on the very latest process node. It's not a given that they could match or improve on that with their own fab (Sure, there is a lot of VLSI design and materials experience, but that does not automatically translate into a state of the art process, and is unlikely to contain the magic ingredient to get Intel back on top).
And in the unlikely case it SHOULD work, that will only invite further regulatory headaches.
While I agree, I also think that Apple needs a new source of revenue, and a well-executed fab business might be it.
Just speculating.
How well is Apple doing with the modem team they purchased from Intel? I have yet to see Apple release their own in-house 5G modem. I don’t think Apple can magically turn things around for Intel and such an acquisition would be a waste of money.
Maybe for the fabs? It might be attractive for Apple to move production stateside via Intel's fabs, but on the other hand I don't think Intel's fabs can do what Apple wants.
That would work if the fabs were competitive. For Apple, they are technologically behind.
Depends on what Apple wants…
Yes Congressman, we agree to take over Intel fabs if you agree to drop this antitrust nonsense. And we would like our $20B from Google back too.
>In particular I cannot see what's good in it for AMD. The fabs have no use/clear path forward. Their processors/GPUs either match or outmatch the Intel offering.
I can, but it's not technical. Intel has a huge advantage in several markets, and has strong relationships with many OEMs like Dell. Intel, even though their market cap is now a fraction of AMD's, still has a huge lead in marketshare in OEM systems and servers. (Several other posts in this thread have real numbers.)
If AMD bought out Intel, it would now get all of that, and be able to push all these OEM and server customers into AMD's solutions instead.
> is particularly competitive on either processor or GPU architecture nor fabrication.
Who is then? Apple is of course still ahead in lower-power chips. But Apple is not in the desktop/workstation/server market, and there are hardly any alternatives to AMD or Intel there.
e.g. M2 Ultra Apple's fastest "desktop" CPU is slower than the 14700K you can get for $350. Seems pretty competitive...
>I feel like this is a mistake. Pat's strategy is aggressive but what the company needs.
He's also 63. Has plenty of money to survive the rest of his life. Has eight grandchildren. There's so much more to life than business. What's to say he doesn't want to simply enjoy life with more connection and community to loved ones around him?
That would be a healthy, balanced and long-term-oriented approach. But those who get to the level of CEO are subjected to intense forces that select against those traits.
I don’t know much about this guy but it’s reasonable to assume that any C-level exec will hold on to the position for dear life until they are forced out.
Here's more info on Pat. He is not your average CEO. He wrote this book almost 20 years ago. Why resort to arguing from assumptions?
The Juggling Act: Bringing Balance to Your Faith, Family, and Work
https://www.amazon.com/Juggling-Act-Bringing-Balance-Family/...
[flagged]
> any C-level exec will hold on to the position for dear life until they are forced out
I don't know. Frank Slootman's retirement from Snowflake earlier this year was certainly not celebrated by any significant stakeholders. I'd imagine at some point someone like Frank realizes that they are worth more than Tim Cook, they consider that they're in their mid-60s, and they decide the remaining time they have on earth might be better spent in other ways.
Every person in the workforce, no matter how ambitious or how senior, is forced into the calculus of money and power vs. good years remaining. I expect the rational ones will select the balance point for themselves.
There are certainly some indications this was something that was not long in the planning. On the other hand, when (solid) financial security is not a subject on the table, it's a lot easier for many folks to let go--especially given that they can probably do as many board or advisor gigs in the industry as they have an appetite for. Or just go on to a new chapter.
That’s not at all who Pat is
True CEO retirements are announced in advance and do not lead to a strange co-interim CEO situation.
It's possible he's sick. Who could know?
That sort of transition would be managed unambiguously. This is a firing.
> What's to say he doesn't want to simply enjoy life
News: https://www.cnbc.com/2024/12/02/intel-ceo-pat-gelsinger-is-o...
Intel CEO Pat Gelsinger ousted by board
That "fabs will report to me" should be a tell that there is a lot of internal opposition...
Worse (for Intel), what could happen is an HP-isation of Intel - splitting up and selling off.
But there is a lot of good news for them in the CPU world: more billions granted, the military buying Intel, a new no-HT arch. And 80-bit memory like in Multics could enable true virtualisation on x86.
Even if x86 is dead, Intel still has fabs - AMD can soon print in them :)
But those multi-generation refreshes are still a mystery - is it Intel's problem, or maybe something else, e.g. simply someone holding some patent? :>
He got fired, dawg. This is like being told that someone is sleeping with the fishes and concluding that the guy just finds lying in bed next to a bunch of sturgeon the most relaxing way to sleep.
> The moat of the two companies together would give the new combined company plenty of time to ramp up their fabs.
I'm not convinced of this. Fabs are incredibly expensive businesses. Intel has failed to keep up and AMD spun off their fabs to use TSMC.
There is also ARM knocking at the door for general computing. It's already gaining traction in previously x86 dominated markets.
The model for US based fabbing has to include selling large portions of capacity to third party ASIC manufacturers, otherwise I see it as doomed to failure.
> There is also ARM knocking at the door for general computing. It's already gaining traction in previously x86 dominated markets.
I know anecdotes aren't data, but I was talking with a colleague about chips recently and he noticed that converting all of his cloud JVM deployments to ARM machines both improved performance and lowered costs. The costs might not even be the chips themselves, but less power and thermal requirements that lowers the OpEx spend.
Yeah, my company is gearing up to do the same. We primarily use the JVM so doing the arm switcharoo only makes sense.
They would have at least 5 years to figure it out before ARM becomes viable on desktop assuming there continues to be movement in that direction. There is so little incentive to move away from x86 right now. The latest Intel mobile processors address the efficiency issues and prove that x86 can be efficient enough for laptops.
IT departments are not going to stop buying x86 processors until they absolutely are forced to. Gamers are not going to switch unless performance is actually better. There just isn't the incentive to switch.
> There is so little incentive to move away from x86 right now.
> IT departments are not going to stop buying x86 processors until they absolutely are forced to.
IT departments are buying arm laptops, Apple's.
And there is an incentive to switch, cost. If you are in AWS, you can save a pretty penny by adopting graviton processors.
Further, the only thing stopping handhelds from being arm machines is poor x86 emulation. A solvable problem with a small bit of hardware. (Only non-existent because current ARM vendors can't be bothered to add it and ARM hasn't standardized it).
Really the only reason arm is lagging is because the likes of Qualcomm have tunnel vision on what markets they want to address.
Looks like your post is talking about two things -- corporate purchased laptops and AWS instances, which are quite different.
About corporate laptops, do you have evidence to show that companies are switching to Macbooks from HP/Dell/ThinkPads?
> corporate purchased laptops and AWS instances, which are quite different.
They are similar. Particularly because developing on a corporate hardware with an ARM processor is a surefire way to figure out if the software you write will have issues with ARM in AWS.
That's pretty much the entire reason x86 took off in the server market in the first place.
> About corporate laptops, do you have evidence to show that companies are switching to Macbooks from HP/Dell/ThinkPads?
Nope. Mostly just anecdotal. My company offers devs the option of either an x86 machine or a mac.
Lots of companies do that, and I wouldn't call it an x86/ARM choice but rather the same old Windows/Mac choice. For Windows, only x86 makes sense for companies with lots of legacy software, and the only choice for Mac is ARM.
> handhelds from being arm machines is poor x86 emulation
Also Qualcomm's GPUs are pretty shit (compared to Intel's and AMDs or Apple's)
> IT departments are not going to stop buying x86 processors until they absolutely are forced to.
Plenty of them are buying Macbooks. It's definitely a small percentage of the worldwide market, but not entirely insignificant either.
Yes, but that is because users demand it. And they do so begrudgingly. Users are not going to demand an ARM Windows laptop.
But will IT departments buy them if they are 100, 200, or $400 cheaper than a competing x86 machine?
That's the question that remains to be seen.
Can those users get all the software they need? Many users who want a Mac are told no because some weird software they need doesn't run on it. Others only get a Mac because some executive demanded IT port that software to Mac. So long as companies have any x86-only software, they won't let people switch. Often "art" departments get a specific exception, and they get to avoid all the jobs that require x86-only software just so they can run their Macs.
Of course these days more and more of that is moving to the cloud, and all IT needs is a web browser that works. Thus making their job easier.
> Of course these days more and more of that is moving to the cloud, and all IT needs is a web browser that works. Thus making their job easier.
This was the point I was going to make. While not completely dead, the days of desktop applications are quickly coming to a close. Almost everything is SAAS now or just electron apps which are highly portable.
Even if it's not saas or electron, the only two languages I'd do a desktop app in now-a-days is C# or Java. Both of which are fairly portable.
The big problem is Excel. Microsoft will make sure never to give that one up. Browser version isn't enough. I'm sure Apple has their reasons for not financing a full-fledged, compatible version, but if they would it would massively increase market share. I'm guessing it's strategic - e.g. not incurring the wrath of Microsoft or a different non-technical, non-marketshare reason.
The world outside of the SV tech bubble runs on Excel.
2 things.
1. Excel nowadays is mostly just an Electron app. That's effectively what the Office 365 conversion was.
2. MS has supported ARM platforms for quite some time now. [1]
[1] https://www.windowscentral.com/office-windows-11-arm-starts-...
[2] http://www.emulators.com/docs/abc_arm64ec_explained.htm
If Microsoft insists on treating them like phones with locked-down software stacks, still no.
Back when I worked for an F500 company, my development workstation was every bit as locked-down as a phone. Complete with having to select any software I wanted to use from the company's internal "app store" rather than installing it directly.
If you need to use Excel at work, you need x86 since Excel for Mac is a gutted toy (MS wants your company to buy/subscribe to Windows too).
And google sheets in my opinion is not good for complicated stuff - the constant lag..
I would bet 95%+ of people who use Excel are not affected by any difference between Excel for macOS versus Windows.
I work in a large enterprise company, have both windows and mac machines, and excel works equally great in both, but more and more excel runs in a browser.
We mostly email links to spreadsheets running in cloud. So it really doesn't matter what your OS is any more from an excel perspective, as long as your computer can run a modern browser you are good.
From what I've seen your company is an exception. Yes, for 95% of users, browser/Mac Excel is more than enough. But the non-tech companies I've seen still don't want to get Macs because of that 5%, they just don't want to bother with having to support two platforms. And leadership obviously doesn't care/have no idea.
Excel in the browser is unresponsive, laggy, and unproductive for power users.
It is like a toy version of the standalone app.
Also it sucks with lists, pivot tables...
95% of people who use Photoshop would probably be served just fine by Krita, or even GIMP if they learned the somewhat wonky UI, and would save a ton of money in the process. However, people usually want to use the "standard" because of some vague fear that the alternative isn't 100.00% compatible, or that it won't have some obscure feature that they don't even know about yet, etc. I think Excel is exactly like this today, and so is Word. There are many alternatives that are just as good (and much cheaper) for 99% of users, but people still want to stick with "the standard" instead of taking a small risk on something different.
Maybe in the coming Great Depression of 2025, people will think differently and start looking at cheaper alternatives.
I was disputing this claim:
> If you need to use Excel at work, you need x86 since Excel for Mac is a gutted toy
The nominal cost of Excel was not the topic being discussed. It was the cost of using Excel for MacOS rather than Excel for Windows.
Almost no one needs the Windows specific features of Excel, so almost no one needs to give up using macOS just because of Excel.
I agree, and I think my post supports that, in a way. I'm just saying 95% of people probably could work just fine with GSheets or LibreOffice or whatever, but the very same is true for MacExcel (even more true, in fact, because it's closer to WinExcel than the alternatives).
> There is so little incentive to move away from x86 right now
Massively lower power consumption and way less waste heat to dispose of.
Literally the two biggest concerns of every data centre on earth.
ARM does not inherently have "massively lower power consumption and waste heat", though.
Market forces have traditionally pushed Intel and AMD to design their chips for a less efficient part of the frequency/power curve than ARM vendors. That changed a few years ago, and you can already see the results in x86 chips becoming more efficient.
> They would have at least 5 years to figure it out before ARM becomes viable on desktop assuming there continues to be movement in that direction.
What's this based on? Surely the proportion of desktops that need to be more powerful than anything Apple is doing on ARM is very small. And surely Apple isn't 5 years ahead?
It is less about the development of ARM on desktop and more about software support. Most apps on Windows are still emulated. Some will not work at all. Games are kind of a mess on ARM. A ton of security software that IT departments require are only going to work x86. Businesses run legacy custom applications designed for x86. Some major applications still run on emulation only and are therefore slower on ARM.
Apple can force the transition. It is not so straightforward on Windows/Linux.
Honestly, I wouldn't put it past IBM to turn things around with a POWER revival. They'd been doing some cool stuff recently with their NorthPole accelerator[1], using a 12nm process while at it, indicating there's much room for improvement. It could eventually become a relatively open, if not super affordable, platform. There's precedent with OpenPOWER! And not to mention RISC-V, of course, championed by Jim Keller et al (Tenstorrent), but it's yet to blossom, all the while ppc64el is already there where it matters.
I say, diversity rules!
[1]: https://research.ibm.com/blog/northpole-llm-inference-result...
IBM did lay an egg with Power10, though. They cut corners and used proprietary IP and as a result there are few (are there any?) non-IBM Power10 systems because the other vendors stayed away. Raptor workstations and servers are a small-ish part of the market but they're comparatively highly visible - and they're still on POWER9 (no S1 yet).
They did realize the tactical error, so I'm hoping Power11 will reverse the damage.
PPC’s likely last hope died when Google didn’t go ahead with OpenPower.
Talos is the exception that proves the rule, sadly.
Friends that work at intel said gelsinger and the board have done EVERYTHING wrong in the past four years. From blowing key customer accounts to borderline malfeasance with payouts. It’s also the board that needs to go too for enabling. The merger with amd sounds like the right path.
My friends at Intel are essentially saying the same thing. This includes the ones that got laid off- and those that got 'transitioned' to Solidigm.
Are your friends on the fab/foundry side of things?
The US government wouldn't let Intel go down; this is a matter of national security (the only grown semiconductor fabs left on US soil) and the edge of US tech dominance.
When that happens typically the company starts optimizing for sucking money from the government. From the point of view of the consumer Intel would be finished.
That's the bet I made after the crash last summer. I think the USG only really cares about the fabs, as we've shown the ability to design better chips than intel's here. Time will tell if I'm right.
> only grown semiconductor fabs left on US soil
not sure what a "grown" semiconductor fab is but follow this link and sort by location https://en.wikipedia.org/wiki/List_of_semiconductor_fabricat... The number of homegrown companies with fabs is greater than 1
Okay, I forgot about Texas Instruments, but let's not kid ourselves here: Intel pretty much runs most of the market here (consumer and business).
It would create so many losses, not just in security; the effect on the economy would also be huge.
Since HN generally considers everything older than two nodes completely irrelevant, all of those fabs except for Intel are completely irrelevant.
Because the other companies that still run fabs are so small in terms of market share that even if they went bankrupt tomorrow, it wouldn't shake the market,
but Intel going bankrupt would change the market permanently.
Pretty much fair game for speculation. The only way this is not bad for the tech industry was if he resigned due to medical or age reasons. That would not be unexpected.
Doubtful that is the issue with Intel's track record. Curious when we will know if 18A is competitive or not.
> If 18a is not ready I think the best case scenario for Intel is a merger with AMD. The US Govt would probably co-sign on it for national security concerns overriding the fact that it creates an absolute monopoly on x86 processors. The moat of the two companies together would give the new combined company plenty of time to ramp up their fabs.
No way other countries would allow that. If A-tel (Amd-inTEL) can not sell to the EU the merger will not happen.
This does not seem orderly/planned enough to be simple age-related.
I tend to agree. He's not outside the window where someone might choose to retire. But no named permanent successor? Retiring immediately? Tend to speak to a fairly sudden decision.
> If A-tel (Amd-inTEL) can not sell to the EU the merger will not happen.
What's the EU gonna do then? Stop buying computers? Perform a rapid continental ARM transition for a mythical amount of money?
Just stop buying new intel chips, and continue buying Arm chips. Its not like every single existing x86 CPU would need to be taken away and destroyed.
Apple has made it fairly obvious, even if it was not already with smartphones and chromebooks, that Arm is a viable, realistic, and battle-tested alternative for general purpose computing. Windows 11 even runs on Arm already.
It would not happen "tomorrow" - this would be years in court if nothing else. This would give Dell/HP/Lenovo/whoever plenty of time to start building Arm laptops & servers etc for the European market.
And who knows what RISC-V will look like in a few more years?
The EU has done a bunch of stupid anti-consumer shit in tech already (hello cookie warnings that everyone now ignores), so I would not be surprised if this happened.
> What the EU gonna do then?
Seize or terminate their patents and copyrights. Issue arrest warrants for criminal evasion. Compulsory licensing of x86 to a European design firm immunized by EU law.
> Perform rapid continental ARM transition
Yes.
Windows is on ARM. Apple is on ARM. AWS and Ampere make decent ARM servers. You have decent x86 user-space compatibility on ARM laptops. That is all users want.
I doubt it will cost 'mythical amounts of money'. Most users use a web browser and an office suite. I doubt they will know a difference for a while.
> Seize or terminate their patents and copyrights. Issue arrest warrants for criminal evasion. Compulsory licensing of x86 to a European design firm immunized by EU law.
My eyes rolled so far back I hurt myself.
Please provide some examples of where the EU has been able to do a fraction of what you listed to large, US based firms in the past.
Looking at the future, if you want a trade war and an excuse for the new US administration to completely neglect NATO obligations this is a great start.
> Please provide some examples of where the EU has been able to do a fraction of what you listed to large, US based firms in the past.
Provide an example of where a large firm decided to ignore EU law and go ahead with a merger that the EU objected to.
No-one wants the nuclear option, on either side. But if anyone ever tries to call the EU's bluff, they may find out the EU wasn't bluffing at all.
Hard to imagine it ever coming to that but presumably massive fines?
> Perform rapid continental ARM transition for mythical amount of money?
And what is Intel + AMD going to do? Not sell CPUs in Europe?
People who are presumably very well-off financially can retire a tad on the early side for all sorts of reasons or a combination thereof. Certainly he has made some significant course corrections at Intel but the most charitable thing one can say is that they will take a long time to play out. As you say, a merger with AMD seems like a non-starter for a variety of reasons.
Is Intel really in such a dire situation that they need to merge with AMD? AMD has been in troubling situation in the past(some thanks to Intel illegal dealings) yet they managed to survive and they were nowhere near Intel's size.
More importantly, AMD troubles made them refocus and improve their products to the levels we're seeing today.
Giving life support to dinosaurs isn't how you create a competitive economy.
I think that Pats strategy is what the fab needs to do to be successful.
However, I think the fab and design should be separate companies, with separate accountability and goals/objectives. There is just too much baggage in keeping them coupled. It doesn't let either part of the company spread its wings and reach its full potential when they are attached at the hip. From the outside, that is the thing Pat has seemingly been focused on, keeping it together, and it's why people have lost faith in his leadership.
I also don't think that from an investment/stock standpoint accelerating the depreciation/losses related to restructuring into the most recent quarter was a wise decision, since what Intel really needed was a huge win right now.
Nope.
Look at what Pat did to VMware. He's doing the exact same thing at Intel. He came in, muddied the waters by hiring way too many people to do way too many things and none of them got done appropriately. Pat is a huge part of the problem.
I had the unfortunate pleasure of watching him not understand, at all, VMware's core competency. It was a nightmare of misunderstanding and waste in that company under his leadership.
Intel turned into even more of a laughing stock under Gelsinger. I say: good riddance. He burned time, capital and people at both VMware and Intel. He's a cancer as a CEO.
When he came back to Intel I was saying that all this ‘finally an engineer in charge’ stuff was a misunderstanding of Pat Gelsinger, and VMware was front and center in my thinking that.
Do you have more details or a reference for the VMware activity? The wikipedia VMware article is pretty brief
Both of those things might be true, but to me, it looks more like the board is acting out of fear of shareholder lawsuits. Pat's strategy has significantly destroyed value because the market lacks visibility.
Dropping Pat will alleviate their feeling of having to do "something."
As for M&A, it wouldn't just have to be approved at the DoJ. And the Chinese will never ever approve of it (but would have to). If they do a transaction without approval from the CMA, it would be like a nuclear financial war.
I think it's high time to gut Intel into parts, a la GE. Sell the fabless business to QCOM or BCOM. Sell the fabs off one by one to GF, Tower, UMC, or even TSMC. Find a PE firm for the leading edge and reconstitute it, with significant R&D credits, as a kind of Bell Labs 2.0.
Or something like that.
Given the push of ARM designs into the desktop and server space, that monopoly doesn't seem to me as much of a danger as it might have a decade ago. I imagine any anti-competitive behavior in x86 would only accelerate that trend. Not that monopolies shouldn't be a concern at all, but my thought is that it's not quite that large of a danger.
If a breakup is in the works for Intel, merger of the foundry side with Global Foundries would make more sense than AMD. Intel's foundries, even in the state they're in would likely be a step up for GF. And given the political sensitiveness, GF already has DoD contracts for producing chips.
Didn't the US government just give Intel $7b on the condition they don't spin off the foundry business?
https://finance.yahoo.com/news/intels-7-86-billion-subsidy-0...
I think you would have to go through the specific conditions - they put up restrictive conditions, but they don't seem impossible to work with.
> I think the best case scenario for Intel is a merger with AMD (…) it creates an absolute monopoly on x86 processors
If this happens, couldn’t they force them to give out licenses as a condition? The licensing thing has been such an impediment to competition that it seems like it’s about time anyway.
It's a good point, and yes, afaik any redress can be requested as a condition for blessing a merger.
One argument I've heard in favor of the split is this: if you are AMD/NVDA/another top player, do you want to send your IP to an Intel-owned fab for production?
At least in theory, a fully independent, split/spun out standalone fab removes this concern.
That said - what does Intel have to offer the top players here? Their fabs are far from being state of the art. And what's the standalone value of post-spin fabless Intel if their chip designs are as behind as their fabs?
This certainly presents a conundrum for US policy since we need fabs domestically for national security reasons, but the domestically owned ones are behind.
Eh, firewalls can be made strong enough, at least for some things. A software parallel is: you are Apple / Facebook, do you use Azure and/or AWS? I wouldn't, if it were me, but they do.
Azure/AWS is cloud/B2B, AAPL/FB are B2C consumer goods/services. Different customers, different industries. There is some overlap, but moat is in other places.
AMD/NVIDIA are in same business as Intel and have same pool of customers.
Maybe, but I think AWS arguably is a different scale of automation. There’s no Amazon human in the loop looking at your source. Sure, a human SRE could peek into your system, but that’s something of an exception.
I can’t imagine fabs have that level of automation. It’s not like sending a file to your printer. It’s a multi-month or multi-year project in some cases to get your design produced. There are many humans involved, surely.
> ...the best case scenario for Intel is a merger with AMD...
Why would AMD want a merger? They aren't a charity, and certainly don't need the distraction.
Well, for at least a time they would have the entire x86 market. That is not nothing. Also AMD may want to get back into the fab business. Without competition in x86 why not use Intel's fabs?
They dont need to merge with intel to get the entire x86 market, they'll be getting that anyway if Intel folds.
Even if Intel gets bought out, it'll be in pieces. Nobody wants to enter the x86 market, but there may be smaller segments of the business that can help an ARM-based business, or someone looking to get into GPUs.
> they'll be getting that anyway if Intel folds.
Why would Intel "fold"? Their revenue is still 2x higher than AMD's... I mean, obviously they are not doing great, but it's silly to say something like that at this point.
> Nobody wants to enter the x86 market
If the ISA patent licenses opened up, that might not be the case. When the topic comes up, it's more about Intel shutting down license transfers, so naturally companies have avoided x86.
And everyone would rush to migrate away from x86.
Having a sole supplier for CPUs is a bad strategy.
Yet everyone is okay with a de-facto single supplier of CUDA capable AI GPUs.
This is only true if “everyone” excludes all of the hyperscalers and the client device manufacturers.
Perhaps not. But (unlike in this hypothetical situation) nobody besides AMD and Intel can do much about that...
Outside of Nvidia, nobody is okay with that.
They would technically have no market, because the Intel-AMD X86 license is non-transferable and expires if one party goes out of business.
IANAL, however:
- A token legal remnant of Intel, with 0 employees or properties, might suffice to keep that license ticking.
- If the stakes appeared to be "or America will lose its ability to make computers", then the government might find a judge willing to sign off on just about any sort of counterfactual, "because national security".
Merger with AMD is very unlikely for competitive reasons but I’ve read some rumors that 1) Apple will push some production to Intel from TSMC and 2) Apple (and Samsung) are considering buying Intel.
Why would the US govt allow a merge with AMD?
Sure they won't allow Intel to be bought by a foreign company, but surely everyone would much rather see Intel being bought by literally any other company than AMD and Nvidia.
Nvidia makes a lot more sense than AMD; it is better for the market (preserving some competition in x86), and at least Intel does something Nvidia doesn’t.
China and the EU would never allow an Nvidia Intel merger, not under any scenario the US would find acceptable.
They'll barely allow Nvidia to acquire anybody at this point, no matter how small. See recent EU response to Run:ai. Intel would be considered 100x worse.
Why would China and the EU have input on a US merger?
Because Intel, AMD, etc have offices in EU and China, for sales, distribution and also R&D. If you intend to operate in those markets you need to comply with local regulations.
The same reason as anything else. If the merger goes ahead with opposition from foreign markets, those markets can impose import tariffs or outright bans. Smaller markets may be ones these combined companies are willing to piss off, but not Europe. Their opposition is de facto a deal killer.
They literally don't have any serious homegrown alternative though; they'd be effectively forfeiting the AI race.
China doesn't care. They are banned from buying western AI HW or making their own AI HW at TSMC/Samsung. They are pouring hundreds of billions to the semiconductor ecosystem.
Huawei is trying to establish a domestic Ascend/MindSpore ecosystem, but they are limited by the SMIC process (~7nm). The defect rate is allegedly rather high, but they are the only "official" game in town (other than smuggled NVIDIA cards or outsourced datacenters in the Middle East).
Then the point about China trying to block an Nvidia merger doesn't really make sense if they will be going their own path anyway. It would exist only to try to harm Nvidia before their homegrown alternatives ramp up.
Nvidia is only one side of merger. The other is Intel. Intel does sell processors in China. Only AI is banned, not generic CPU (except HPC).
> Why
Well, they obviously would.
Also, the EU has promised various significant subsidies to Intel. Intel obviously has fabs in Ireland and is building one in Germany and perhaps even Poland.
If this is a rhetorical question, just make your point instead.
If not, look up e.g. Microsoft's purchase of Activision, both US companies.
They wouldn't, but companies at this scale are global, regardless of where their nominal headquarters or stock listing is.
Do they want to sell pentiums in China or EU?
What do they do that Nvidia doesn't (and that Nvidia would care about)?
They already do networking, photonics, GPUs, high speed interconnects, and CPUs. They are planning on selling their FPGAs (the Altera acquisition) to Lattice.
The only things left are their fab ops, thunderbolt/usbc, wifi, and ble.
Their fab ops would take over a decade of heavy investment to catch up to TSMC or Samsung and idk if even Nvidia is ambitious enough to take that on.
Wifi and BLE could be good additions if they wanted to branch out their mellanox portfolio to wireless. Thunderbolt/USB C also might be worthwhile.
But that IP is probably going to be cheaper to buy piecemeal so idk if it's worth it to buy the whole company outright.
I mean, ARM designs have had some wins lately, but x86 still does quite well in single-thread performance, right? Excluding Apple, because they are magic—Amazon, Ampere, these ARM CPUs make a reasonable pitch for applications that use lots of cores well, but that isn’t every application.
x86 CPUs.
Yeah, I wonder if maybe the x86 license is the most valuable part of Intel at this point...
Unfortunately rights to the x86_64 license expire on the event of transfer of either company to a new owner.
License for what? The patents have expired.
The colloquial "x86 license" is the AMD-Intel patent cross-licensing agreement, i.e. all patents related to x86 or any extensions of the ISA are automatically cross-licensed between the companies. While in the past the ISA patent story mostly leaned in Intel's favor, since AMD64/x86_64 really took off, ISA innovation has become a delicate stack of cards interwoven between Intel and AMD.
So if Intel sells, everyone is fucked until whoever buys can renegotiate the terms of the agreement. And if that's Nvidia, they could just sit dead on the IP and starve AMD of the bulk of their CPU revenue (which is what is keeping the company solvent). And requiring they keep the agreement they currently have would mean requiring AMD to give Nvidia a pretty detailed look at the AMD secret sauce which would increasingly push AMD into the red until they become insolvent, again leading to a Nvidia monopoly.
The US government as well as the EU will not allow that to happen so however things slice, ownership of the x86 ISA patents would not be going to Nvidia.
So for the 2009 cross licensing agreement, change of control terminates the cross licensing for both parties[0]. Since there are far more Intel x86 patents in the last 20 years than AMD, sounds like AMD would be the one more at risk, which I think agrees with what you say. In practice, any anti-trust review if Nvidia is the buyer would prevent Nvidia from using it to harm AMD's business.
[0]https://www.kitguru.net/components/cpu/anton-shilov/amd-clar...
For the old stuff maybe, but they keep adding new stuff. AVX is from 2008, AVX2 from 2013.
How do you know those are encumbered? Intel invented them, after all....
X86 cores remain pretty good at branchy, lightly threaded codes, right?
That would be a hard call
One of the reasons Intel "let" AMD compete in the x86 space is US Gov requirements to be able to source chips from at least two vendors.
Maybe they'll sell Intel to Northrop Grumman /hj
Boeing should buy Intel the way they bought McDonnell Douglas. It's gonna be a success, trust me.
Hey I mean two negatives make a positive right? Can't possibly go any worse than it already is.
Probably should add GMC in there.
I heard they’re building an iOS / Android replacement. Think of the vertical integration!
I’m picturing a boot-looping cargo plane full of hummers dropping like a stone — doesn’t get much more vertically integrated than that. Think of all the layers they can eliminate.
"Vertical velocity" is the new buzzword in professional businessing
What other US companies are equipped and interested in running a giant chip design/fab? NVIDIA and AMD are likely the only two candidates.
I do not at all think it will happen, nor does it make any sense, but the rumours of Apple seemingly being interested in buying out Intel don't seem to be going away.
I can see them wanting certain parts of the business (GPU mainly) but on a whole it doesn't make a lot of sense.
I don't see Intel as a single entity being valuable to any US business really. You're essentially buying last year's fall line; there's very little use for Intel's fabs without a huge amount being spent on them to get them up to modern standards.
It'll all come down to IP and people; that'll be the true value.
Apple is interesting. They certainly have the money and I think the idea of fabricating their own chips appeals to Apple, but at the end of the day I don't really think it makes sense. Apple won't want to fab for others or design chips for others.
The only way it happens is if it is kept separate kind of like the Beats acquisition. Apple lends some chip designs to Intel and Apple starts fabricating their chips on Intel fabs, but otherwise the companies operate independently.
Micron Technology is the only one that comes to mind, but they are more on the memory side of things - the last time they were on par with Intel was in the 90s when they both made DRAM, but Intel pivoted to processors and networking.
There are also options like Texas Instruments or Microchip. Of course far more unlikely than either nvidia or amd, but definitely options.
Apple is the obvious one. Essentially the only place with both the capital to do it and the extreme vertical integration enthusiasm. AMD hopefully still remembers running a fab.
The only thing that Apple might find even remotely useful is Intel's fabs. The rest of the business would have to be sold to someone else or closed down (which would never be approved by the government).
Even then there is zero indication that Apple would ever want to do their own manufacturing.
The government desperately wants US fabs because the military requires tech and it's increasingly dangerous to rely on globalization when the globe is going nuts -- the rest of it doesn't really matter.
We're talking about Apple, though?
Ford, GM... The big automakers got burned with the chip shortage after COVID (this is their fault, but still they got burned)
>the best case scenario for Intel is a merger with AMD
Oh man, the risk in that is extreme. We are moving away from x86 in general, but wow, that's... a big jump in risk.
And really, AMD spun off Global Foundries. AMD doesn't want to run a fab.
Does this mean that Intel's fabs should split for Global Foundries, and the Intel design team should go to AMD?
I seem to recall that Intel was talking about the same kind of split. Maybe the Intel child company and AMDs would merge, or maybe they'll stay separate and the parents will merge?
It's so much worse. They put a CFO and a Marketing/Sales professional in charge.
Sure, let's create a monopoly around one of the most valuable commodities in the world.
What could go wrong?
I think Apple Silicon has shown us that x86 doesn't have the monopoly potential it once had.
Apple Silicon was designed to be efficient at emulating x86-64.
If you take that away, it becomes irrelevant (like many other ARM-based processors that struggle to be a good product because of compatibility).
Apple has a promising path for x86 liberation, but it is not there yet.
> One of the key reasons why Rosetta 2 provides such a high level of translation efficiency is the support of x86-64 memory ordering in the Apple M1 SOC.
https://www.sciencedirect.com/science/article/pii/S138376212...
TSO might be slower than ARM’s memory ordering, but implementing it in hardware is considered faster than implementing it in software.
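To see what that hardware-vs-software trade looks like, here is a minimal C++ sketch (my own illustration, not Rosetta's actual mechanism; the function names are made up, and the C++ atomics only stand in for the instructions a binary translator would emit). On x86, plain stores and loads already behave roughly like release/acquire, so a translator targeting a weakly ordered ARM core has to strengthen every access, while a hardware TSO mode lets it emit ordinary loads and stores:

    #include <atomic>

    std::atomic<int> data{0}, flag{0};

    // Translating plain x86 MOVs for a weakly ordered target: every access is
    // strengthened (stlr/ldar on ARM) to preserve the ordering x86 code assumes.
    void producer_weak_target() {
        data.store(42, std::memory_order_release);
        flag.store(1, std::memory_order_release);
    }
    int consumer_weak_target() {
        while (flag.load(std::memory_order_acquire) == 0) { /* spin */ }
        return data.load(std::memory_order_acquire);
    }

    // With a hardware TSO mode, plain loads/stores already preserve the
    // x86-visible order, so the translator can emit ordinary ldr/str and skip
    // the per-access ordering cost. (A C++ compiler may still reorder relaxed
    // accesses, so treat this purely as a stand-in for emitted machine code.)
    void producer_tso_target() {
        data.store(42, std::memory_order_relaxed);
        flag.store(1, std::memory_order_relaxed);
    }
    int consumer_tso_target() {
        while (flag.load(std::memory_order_relaxed) == 0) { /* spin */ }
        return data.load(std::memory_order_relaxed);
    }

The per-access barriers in the first pair are exactly the overhead a TSO mode avoids, which is why it matters for translation throughput even though TSO itself is "slower" than ARM's native model.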
It was designed to be efficient (performance/watt) for everything. Most users aren't using Rosetta.
It's not the only ARM CPU with TSO support though, some server platforms also do it.
"efficient for everything": that is a bold claim, can you elaborate more on it?
Also, can you name these other platforms?
Also, can you back up the information about most users not leveraging Rosetta?
> "efficient for everything": that is a bold claim, can you elaborate more on it?
https://share.transistor.fm/s/124f8bc4
> Also, can you name these other platforms?
Fujitsu A64FX, and https://en.wikipedia.org/wiki/Project_Denver
> Also, can you back up the information about most users not leveraging Rosetta?
Well I couldn't tell you that, but most people just use web browsers.
Sorry, I'm not gonna listen to some podcast PR by Apple. I heard their PR already.
If most people just use web browsers, why has so much effort been put into Rosetta 2? Sounds very wasteful.
Also, A64FX seems to agree with my statement that an ARM with some x86 spice makes a better product than just an ARM.
And, if we're talking about making ARM mainstream, Raspberry Pi did more for the platform than Apple and deserves to get the good reputation.
Dunno, who says a lot of effort was put into Rosetta 2?
It's mostly something that was needed in the first few years so you could run Chrome and Photoshop, but those have been ported now. It's mostly useful for running WINE but that's not super common outside games.
That said, a binary recompiler has a lot of uses once you have one: https://valgrind.org
Let's not walk around in circles. We already established that the chip was designed to facilitate the transition.
What you're saying is exactly what I'm saying. Having that x86 translation layer helped them sell macs. It doesn't matter if it is not needed now (it is needed now, otherwise it would have been removed).
So, yes. Apple has a popular new band in town. But they still need to play some x86 covers to sell tickets.
As I said, they have a good plan, but they're _not there yet_.
When will they be? When x86 translation is not shipped anymore. It's rather simple.
Merging AMD and Intel sounds like Penn Central all over again
> ...then I don't think Pat gets retired.
This implies that he was pushed out, rather than chose to retire. I can't see anything in the article to suggest this, do you have another source?
> I think the best case scenario for Intel is a merger with AMD
I think a merger with Nvidia would be more likely given the antitrust issues that a merger with AMD would bring up.
That assumes a functional antitrust mechanism. We don't know what the next admin will do yet other than attempt technically illegal revenge on people they hate.
Hell no, I don't want the Intel management structures coming over here. Qualcomm is welcome to them.
Agree with most, except that merging usually costs more, not less; it usually only stacks up because you get to axe a lot of employees in the name of synergy, I mean, duplicated departments.
> the best case scenario for Intel
That's the best exit case for the shareholders. It's the worst case for Intel's employees, customers and partners.
> would probably co-sign on it for national security concerns
This is equally laughable and detestable at this point in history. My personal security is not impacted by this at all. Weapons manufacturers honestly should not have a seat at this discussion.
> overriding the fact that it creates an absolute monopoly on x86 processors.
Yet this isn't a problem for "national security?" This is why I find these sentiments completely ridiculous fabianesque nonsense.
>I think the best case scenario for Intel is a merger with AMD
Oh no no no no. Oh hell no. For now, we need competition in the x86 market, and that would kill it dead. Imagine Intel re-releasing the Core 2 Quad, forever.
The problem is that this transition is very capital intensive and all the money was spent on share buybacks the past decades. The stock market looks at CPUs and GPUs and likes the latter a lot more so no fresh money from there. At the moment the only stakeholder with a vital interest in Intel semiconductor capabilities is the US government and even that may change as a result of Trump.
He did not get Larrabee flying, but yeah, it was after him that Intel did not put in the effort that was needed.
Nonetheless, his comment about Nvidia being lucky was anything but a smart comment.
[dead]
[flagged]
The mistake Pat Gelsinger made was that he put his faith in the American manufacturing workforce. Very sad.
Nope. From what I heard from VMware, he was just a bad manager. He seems to be a person skillfully playing the org game, yet when it came time to deliver he just flopped.
Struggling companies with totally rotten management like to bring in such "stars" (pretty shrewd people who have built themselves a cute public image as supposedly talented engineers promoted into higher management on their merits) - Yahoo/Mayer comes to mind as another example - who de facto complete the destruction of the company while the management rides the gravy train.
I worked for 3 months for Intel. I can genuinely say that there is no saving that company. Recently, they are hiring many PhDs from various US universities (particularly Indians) to try to compensate (they offer generous stocks and are hiring like crazy right now). There are two major problems I saw: lack of genuine interest in fabs (most people are there for the Intel name and then leave or in the case of Indians, they are there for Visa purposes. Mind you, we were not allowed to hire people from China since Intel is subject to Export laws). The biggest problem by far is lack of talent. Most of the talent I know is either at Apple or Facebook/Google, including those trained in hardware. Intel is bound to crumble, so I hope we as taxpayers don't foot the bill. There was unwillingness to innovate and almost everyone wanted to maintain the status quo. This might work in traditional manufacturing (think tennis rackets, furniture...), but fabs must improve their lithography manufacturing nodes or they get eaten by the competition
A few years ago a CEO of Intel (not Gelsinger) said something like "our CPU architecture is so well-known every college student can work on it". A friend working at Intel at that time translated it to me: "we will hire cheap students to work on chips and we will let the expensive engineers leave". At least in his department the senior engineers left, they were replaced by fresh graduates. It did not work, that department closed. I have no idea how widespread this was inside Intel, but I saw it in other big companies, in some with my own eyes.
Funny you mention this. In my brief stint there, I saw a fresh college graduate get promoted to a lead for a Scanning Unit simply because the current lead was retiring (I was actually offered that position, but I was leaving and turned it down). They were trained in less than a month by shadowing the lead on-the-verge-of-retirement. The engineer who got promoted was at Intel less than a year, and had no prior internship experience (they were hired in 2021 when chips were in desperate need of talent. You might recall the chip shortage that affected cars etc.)
I know some people who understand x86 [0] very well. Most of them do not work at Intel. Those that do tend to be on the OSS side and don’t really have any sway in the parts of Intel that design new hardware.
And this is a problem! Most of Intel’s recent major architectural changes over the last decade or so have been flops. [1] Quite a few have been reverted, often even after they ship. I presume that Intel does not actually have many people who are really qualified to work on the architecture.
[0] I’m talking about how it interacts with an OS and how you put it together into a working system. Understanding stuff like SIMD is a different thing.
[1] AVX and its successors are notable exceptions, but they still have issues, and Intel did not really rock the early implementations.
> A few years ago a CEO of Intel (not Gelsinger) said something like "our CPU architecture is so well-known every college student can work on it". A friend working at Intel at that time translated it to me: "we will hire cheap students to work on chips and we will let the expensive engineers leave"
Reminds me of the Boeing managers saying that they didn't need senior engineers because their products were mature..
A few blown doors and deadly crashes later, that didn't age well..
https://archive.is/H9eh9#selection-3935.0-3935.163
> Most of the talent I know is either at Apple or Facebook/Google
A relative of mine with a PhD sounds exactly like this. Worked for Intel on chip-production tech then was hired by Apple about 10 years ago, working on stuff that gets mentioned in Apple keynotes.
While I am sure the foot soldier quality is important, we ought to put the burden on leadership a bit more. I am not sure AMD had a better talent pool (I don't work in the industry so I don't know!) ten years ago. Culture is predominantly shaped by those already in power -- it sounds like they need a change in culture.
Do you think Intel can improve the quality of engineers on the fab side by offering PhD-level salaries to good BS/MS graduates from good US schools? I suspect that Intel hires PhDs from subpar universities.
> lack of genuine interest in fabs - most people are there for the Intel name
I can actually believe this. Most of the other arguments tend to be rather vague, waving at implementation or some stock-related motivation (like "we need TSMC's business"), so a lack of genuine interest among the employees, in a plan that was never sold to them or to the market especially effectively, seems fairly believable.
Most people are there for the chips, for making great designs in silicon, and being market leaders in CPU architecture. Not running the equivalent of an offshoring division making other people's stuff.
The entire implementation has seemed rather haphazard and not sold with much real motivation. Honestly, the entire situation feels a bit like Afghanistan (if that's a bit incendiary)
Nobody really knows why they're going. Nobody really knows what they're trying to accomplish. The objectives on the ground seem vague, ephemeral, and constantly changing. There's not much passion among the ground troops about the idea. The leaders always seem to be far away, and making strange proclamations, without actually putting boots in the dirt. The information released often feels multiple personalityish, like there's far too many cooks in the kitchen, or far too many puppeteers pulling strings in every direction. And afterward you find out it was mostly some dumpster fire driven by completely different motivations than what were publicly acknowledged.
The senior engineers I saw there are talented. And Intel has benefits and stock packages that rival those of big tech. I think I can expand on your point by saying the more senior engineers were risk averse and on the verge of retirement, and the young engineers were just there for the Intel name or some other reason. There are surprisingly few middle-aged long-term people down there. This would be expected in software (Facebook/Google), but it is a recipe for disaster in hardware, where long-term thinking is critical to advancing lithography (changes don't happen overnight). I also was surprised by how few of Intel's engineers believed in Intel. The starkest observation I made was that senior engineers would max their stock purchase plan, but many young engineers would abstain. If the engineers don't believe in the product they are working on, I don't accept that the gov. must bail it out. I hope some investigative journalist writes a book on Intel and Boeing someday, as I would be curious as to how things unfolded and got to this point. There are many similarities (I never worked for Boeing, but have friends in Seattle that describe the culture in similar terms to what I saw at Intel). Also, to your last point, the Intel name does not hold as much weight as it did in the Grove days.
Like I mentioned down below, used to work with the space agency back in the day, and by extension, Boeing. Even late 2000's, early 2010's, Boeing was a dumpster fire. Visited their local offices and the place looked like a hoarder hole. Boxes thrown everywhere, haphazard cabinets just left places, roaming meetings in sparse rooms. Seemed like homeless were camping there rather than a functional company.
The meetings with them felt that way too. Watch the same slides for months and wonder how nobody was moving anywhere with anything actually involving choices. "So, we've got these three alternatives for SLS, and like 10 other nice-to-have engineer pipe dreams." "Right, you'll choose the obvious low cost choice A. Why are we having this meeting again?" Many months later, after endless dithering, surprise, obvious choice using almost no hardware changes is chosen. All engineer nice-to-have pipe dreams are thrown away. There was much rejoicing at the money successfully squandered.
> And Intel has benefits and stock packages that rival those of big tech.
Given Intel’s stock returns over the past 15 years, Intel would have to offer insane cash compensation to rival big tech.
Levels.fyi indicates Intel heavily underpays, which is what I would expect.
https://www.levels.fyi/?compare=Intel,Apple,Google&track=Sof...
Thank you for the pay comparison website link. The pay scale difference is frankly a little challenging to even believe in some cases. Facebook pays an M2 software manager $1,426,471? Comparables at other (F)AANGs are $700,000 to $800,000. Double??? at the same grade? That seems like nonsensical escalation. Does not seem like FB/Meta/saggy infinity symbol formerly known as Prince is getting their money's worth.
When equity compensation is such a high proportion, one can not simply compare numbers. You have to take into account volatility of the business and its impact on share price (and hence impact on your compensation).
The equity compensation figures on levels.fyi are very rough estimates of what one might see, but it is possible that Meta has to offer more equity due to more perceived volatility in its business compared with, say, Apple or Microsoft. Or maybe more expected work hours/days.
But also, Meta has long been known to pay more, famously being the business that eschewed the collusion that resulted in the famous high-tech employee antitrust litigation:
https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...
Should have mentioned hardware. I remember Intel paying more than Apple hardware when adjusted for cost of living (Intel in Oregon/New Mexico vs Cupertino). It has been a while since I looked into the salaries, though. I agree they do not pay well relative to big tech on the software side of things.
Top talent who can shop around for more pay is going to demand higher pay in exchange for the opportunity costs for having to live in Oregon/New Mexico compared to Cupertino.
It does help to retain the talent; those 5-10y engineers have a mortgage on a house and need a higher pay incentive to uproot their lives. At least, that was probably the reasoning before; things have changed:
- WFH means you do not even have to move.
- Competitors have set up satellite offices in Hillsborough/Portland to poach talent.
- Intel does not feel like the stable employer anymore, with the almost yearly layoff waves since the BK era.
This case kind of depends on your priorities. If your goal is the Cupertino, Sunnyvale, Mountain View lifestyle and lateraling between the various companies in the area, then it's an opportunity cost. Has other positives also: low crime, wealthy neighbors, investment money, startup opportunities, mountains and hiking in the area, ocean nearby.
It has its downsides though. Having visited NASA Ames over in Mountain View because of space agency work, it has a lot of the same issues as Aspen. Lot of sameness, lot of architecture that's McWealthy pseudo-Pueblo, lot of expensive blast plain suburbia, lot of people that start with a "you're leaving money on the table" mentality, San Jose and San Fran traffic nearby can be nightmarish, and the crime of Oakland doesn't have that far to walk.
With much family in the Portland, Hillsboro, Beaverton area, that area also has its positives: relatively low crime in Hillsboro / Beaverton (Portland's not great on the East side)[1], wealthy neighbors, huge amounts of hiking / outdoors / mountain climbing / biking / botanical gardens / parks, much less of the blast plain suburbia, somewhat private-feeling / hidden housing developments, ocean nearby, Columbia River nearby, significant art culture, lots of breweries / restaurants / food trucks / foodies, decent public transit, and if you want dry desert wealth like Cali then Bend is not that far away.
Comparing the shows Portlandia vs Silicon Valley and Weeds is not a bad approximation.
[1] https://www.thetrace.org/2023/02/gun-violence-map-america-sh...
Were you working on the fab side of things?
Don't want to dox myself, but I worked at one of the fabs (Fab11X)
Your two major problems would both be solved by paying more, but it sounds like they are paying well, according to you?
Genuine interest is not the only way to get great results. Excellent pay can do so as well.
And lack of talent, again, excellent pay.
[flagged]
Got a link?
> Most of the talent I know is either at Apple or Facebook/Google
That's a damn shame. Big tech monopolies are screwing up the talent market. Nobody can match their comp and it's bullshit
The problem is that Intel is poorly run. Intel should be printing money, and did for a long time until a string of bad leadership. If they had brought in Gelsinger after Otellini, which they were reported to have considered, the company might be in a much better position.
But alas, Intel is a mega bureaucratic company with a few tiny siloed teams responsible for innovating and everyone else being test and process jockeys. I think Gelsinger wasn't given enough time, personally, and agree with the OP that even someone like Elon would struggle to keep this sinking ship afloat at this point.
BK wanted wearables and Bob Swan wanted to cut costs; neither of them was a visionary, nor did they really understand that Intel was a hard tech company. Intel had achieved such dominance in such an in-demand, growing market that all they had to do was make the technology better (smaller, faster, lower power) and the money would continue to flow. The mandate was straightforward and they failed.
I am sure Intel has enough cash to hire some good talent (like just offering talented people the next-level salary at a $FAANG); the problem is deeper in the hiring pipeline: convincing people of the budget and actually scouting and retaining good people.
The companies that are big tech today took the risks and are now reaping the rewards. Intel decided not to take the risks, so now it doesn’t reap the rewards.
No one stopped Intel from paying for talent. Intel’s shareholders decided to pay themselves instead of investing in the business by hiring great employees. That’s how you go from being a leader to a laggard.
Difficult to see how this is anything other than a failure. I had high hopes when Gelsinger returned, but it seems that BK had done too much damage and Gelsinger didn't really get a grip on things. One of the things I heard that really highlighted the failure of this recovery was that BK had let Intel balloon in all sorts of ways that needed to be pared back and refocused, but head count under Gelsinger didn't just stay static but continued to significantly grow. It's no good giving the same politically poisonous, under-delivering middle management more and more resources to fail at the same job. They really need to clear house in a spectacular way but I'm not sure who could even do that at this point.
They have made too many bad choices, and everyone else has been eating their lunch for the last few years. They are a non-factor in the mobile space where ARM architectures dominate. They are a non-factor in the GPU market where NVDA dominates ahead of AMD. They were focused heavily on data centres and desktop/laptop CPUs where ARM is also increasingly making inroads with more efficient designs that deliver comparable performance. They are still struggling with their fab processes, and even they don't have enough money to make the investment needed to catch back up to TSMC. There is a good reason that even Global Foundries has given up on the bleeding edge some time ago.
They are in a deep hole, and it is difficult to see a future where they can restore their former glory in the foreseeable future.
> where ARM is also increasingly making inroads with more efficient designs that deliver comparable performance.
ARM isn't doing any such thing. Apple & Qualcomm are, though. ARM itself, if anything, looks weak. Their mobile cores have stagnated, their laptop attempts are complete failures, and while there are hints of life in the data center it also seems mediocre overall.
This feels a bit pedantic. ARM-the-CPU-architecture is increasingly making inroads with more efficient designs that deliver comparable performance to Intel's x86 chips, thanks to Apple and Qualcomm. ARM-the-holding-company is not doing that, they just make mediocre designs and own IP.
The post I was replying to had 3 other companies mentioned or referred to (INTC, AMD, and NVDA), it seems odd that they'd suddenly have meant ARM-the-ISA instead of ARM-the-company when ISA wasn't otherwise being discussed at all.
But even if they meant ARM-the-ISA, that'd still put it in a fragile position when the 2 clear market leaders in the ARM-the-ISA space have no particular allegiance to ARM-the-ISA (Apple having changed ISAs several times already, and QCOM both being active in RISC-V and also being sued by ARM-the-company)
Don't forget their "brilliant" strategy of suing their own customers.
Being a customer shouldn’t protect a company from lawsuits. ARM feels they have merit here, just like Qualcomm did when they sued Apple. It’s not that rare in the corporate setting to have suits between companies with otherwise amicable relationships.
The optics can still be terrible. Qualcomm (or more accurately, Nuvia, the company they acquired) produced some stunning chips with almost unheard of battery life for a laptop, and Arm are suing them to use their own inferior designs. They even tried to have end user laptops recalled and destroyed! There's no world where this looks good for Arm.
There’s a very clear bias and idolization in this comment of yours which is based on a misunderstanding of the case at hand.
ARM aren’t trying to force Qualcomm to use ARMs cores. They’re trying to force them to update the licensing royalties to make up for the (as ARM sees it) licensing term violations of Nuvia who had a design license for specific use cases.
The part you reference (using ARM designs) is the fallback if Qualcomm lose their design license.
The destruction of the chips is standard practice to request in cases of license and IP infringement.
Qualcomm already had a design license prior to the acquisition of Nuvia. They were doing custom cores back in the original Kryo days which were an in-house custom ARMv8.0-A design.
Their design license doesn’t extend to Nuvia’s IP however according to ARM.
That is the entire crux of the issue. ARM gave Nuvia a specific license, and then Nuvia was acquired which transferred the IP to a license that ARM did not extend it to.
The terms of that license was specific to what they can do. ARM is claiming it doesn't cover some of the things they are doing.
I'm not enough of a lawyer to figure out who is right.
Hypothetically, if Qualcomm have broken their Arm licenses in a way that damages Arm’s business do you think Arm are supposed to just let them carry on? Should Arm say ‘legal action won’t look good so we’ll just let it pass’?
And the fact that Qualcomm got just about everyone to endorse the acquisition ahead of announcing it but didn’t even tell Arm is a bit of a tell.
Arm's major competition is RISC-V. Qualcomm engineers have been joining the important RISC-V committees recently. If Arm beats Qualcomm in the courts, Qualcomm will switch to RISC-V, and then Arm will have won the battle but lost the war.
If Qualcomm loses to Arm in the courts then they have a big problem in 2025 which switching to RISC-V at some point in the future will not solve for them.
They are not suing them to use "inferior designs," you're completely misrepresenting the issue. They are suing them over IP and contract violations. The ISA in question is irrelevant here, you could also get sued by SiFive if you licensed their cores and then did something that SiFive believed violated that license. It's not that deep.
AWS is basically running 100% of their own software on their own Graviton ARM CPU
The US government wants them making SOTA semiconductors. Biden wanted it and I highly suspect Trump will want it too.
Traditionally if you wanted SOTA semiconductors you'd go to IBM for the technology and then build your own fab. I'm not sure how true that is today but I wouldn't be surprised if it is.
That's what Rapidus is doing
I have to wonder if part of this is Intel wanting a Trumpier CEO so it can retain the favour of the US government, while Trump associates Gelsinger with the CHIPS act which he reflexively hates as a Biden thing.
[flagged]
Apple saw the writing on the wall and bailed years ago, built their own chips and denied them a large customer.
It’s a really bad sign when a customer decides it can out innovate you by internally copying your entire business and production line.
> It’s a really bad sign when a customer decides it can out innovate you by internally copying your entire business and production line.
Not necessarily, at scale, especially Apple's scale, vertical integration can make a ton of sense - for control, innovation, price, risk hedging.
It is not Apple's decision that is the bad sign. There can be plenty of reasons for that, as you mention, and it wouldn't be of note if their chips were worse than or even similar to Intel's in performance.
It is the fact that they can build a much better chip by almost any metric, so far ahead of Intel, that is the red flag.
They haven't made their own fabs. Not yet, anyway.
And historically, wasn't this just an extension of their fight with Samsung in the mobile space more than a rejection of Intel?
Re: fabs, Apple did by way of TSMC. Apple provided TSMC with massive amounts of cash in the 2010’s to fund new fabs. These were loans, but with very generous low interest terms that TSMC couldn’t have gotten elsewhere.
I think you’re correct that this was initially because Apple didn’t want to be at the behest of Samsung. But as Apple iterated on their mobile and tablet chips, they had the choice to bring them to their desktop and laptop lines or not to. They took that gamble and here we are today with non Intel Macs.
> last few years.
It's pretty much two decades at this point.
I would argue there were many good things but not well delivered. The Optane persistent memory should've been revolutionary for databases but Intel just put it out and expected people to do all the software.
I'm seeing the same thing now with Intel QAT / IAA / DSA. Only niche software support. Only AWS seems to have it and those "bare metal" machines don't even have local NVMe.
About 10 years ago Intel Research was publishing a lot of great research but no software for the users.
Contrast it with Nvidia and their amazing software stack and support for their hardware.
When I read the Intel QAT / IAA / DSA whitepaper I knew it was the beginning of the end for Intel.
Every aspect of that document was just dripping in corporate dinosaur / MBA practices.
For example, they include 4 cores of these accelerators in most of their Xeons, but soft fuse them off unless you buy a license.
Nobody is going to buy that license. Okay, maybe one or two hyperscalers, but nobody else for certain.
It's ultra-important with a feature like this to make it available to everybody, so that software is written to utilise it. This includes the starving university student contributing to Postgres, not just some big-enterprise customer that merely buys their software!
They're doing the same stupid "gating" with AVX-512 as well, where it's physically included in desktop chips, but it is fused off so that server parts can be "differentiated".
Meanwhile AMD just makes one compute tile that has a uniform programming API across both desktop and server chips. This means that geeks tuning their software to run on their own PCs are inadvertently optimising them for AMD's server chips as well!
PS: Microsoft figured this out a while ago and fixed some of their products like SQL Server. It now enables practically all features in all SKUs. Previously, when only Enterprise Edition had certain programmability features, nobody would use them, because software vendors couldn't ship software that their customers couldn't install, and most customers only had Standard Edition!
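Coming back to the CPU-feature gating point: the usual pattern in software is a runtime dispatch, where the fast path is only taken if the CPU actually reports the feature and everything silently falls back otherwise, so a fused-off feature never gets exercised and never builds an ecosystem. A rough, hypothetical C++ sketch using GCC/Clang's __builtin_cpu_supports and a per-function target attribute (the summation kernel is just a placeholder, not anything Intel ships):

    #include <cstddef>
    #include <cstdio>
    #include <immintrin.h>

    // Baseline path: always available, no special ISA requirements.
    static float sum_scalar(const float* a, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i) s += a[i];
        return s;
    }

    // Fast path, compiled for AVX-512F via a per-function target attribute so
    // the rest of the binary stays baseline (GCC/Clang).
    __attribute__((target("avx512f")))
    static float sum_avx512(const float* a, std::size_t n) {
        __m512 acc = _mm512_setzero_ps();
        std::size_t i = 0;
        for (; i + 16 <= n; i += 16)
            acc = _mm512_add_ps(acc, _mm512_loadu_ps(a + i));
        float lanes[16];
        _mm512_storeu_ps(lanes, acc);      // horizontal reduction via memory
        float s = 0.0f;
        for (int k = 0; k < 16; ++k) s += lanes[k];
        for (; i < n; ++i) s += a[i];      // scalar tail
        return s;
    }

    // Runtime dispatch: the AVX-512 branch runs only if the CPU reports the
    // feature. On a part where it is fused off (or unlicensed), that branch
    // never executes, which is exactly why gated features see so little
    // real-world software support.
    float sum(const float* a, std::size_t n) {
        if (__builtin_cpu_supports("avx512f"))
            return sum_avx512(a, n);
        return sum_scalar(a, n);
    }

    int main() {
        float v[100];
        for (int i = 0; i < 100; ++i) v[i] = 1.0f;
        std::printf("%f\n", sum(v, 100));
    }

The second branch only gets written at all if developers can reasonably expect the feature to be present on the machines they and their users own, which is the whole objection to fusing it off by SKU.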
It’s worse than just the licensing. They’re exposed as PCI devices, so they don’t automatically show up in VMs, they don’t access user memory in a particularly pleasant manner, they aren’t automatically usable in unprivileged programs, etc.
And this destroys 99% (maybe 99.99%) of the actual economic value for Intel! What Intel needs is for people to integrate these accelerators into software the way that AVX is integrated into software. Then software and libraries can advertise things like “decompression is 7x faster on Intel CPUs that support such-and-such”, and then customers will ask for Intel CPUs. And customers will ask their providers to please make sure they support such-and-such feature, etc.
But instead we have utterly inscrutable feature matrices, weird licensing conditions, and a near-complete lack of these features on development machines, and it’s no wonder that no one uses the features.
Then I guarantee you that there is an MBA in Intel somewhere crunching the numbers and calculating that "only 0.001% of our customers are utilising this feature", hence he's going to cut that team and stop offering the product in future Xeons.
Which means that any fool that did utilise this feature has tied their cart to a horse that's going to be put down.
Smarter people can see this coming a mile off and not bother investing any effort into support.
It's so predictable that it's just depressing.
You see this again and again - when you gate technical features behind an enterprise barrier, the adoption rate is very low. AMD messed this up with CDNA and RDNA (now merging to UDNA), but Nvidia had it right from the start: CUDA from the bottom to the top.
Intel started doing this kind of right recently with their Xe cores (ideally same programming model between their integrated GPUs and their datacenter HPC GPUs), but we’ll see how different the Xe-LPG and Xe-HPC end up being when Falcon Shores really comes out (I’m very worried about this description of it as a XPU, which seems like real confusion about what ML people want).
Preach. I gave up hoping on Intel after getting burned. I feel sorry for the good talent there and I hope they find a home somewhere better.
It sounds like most of the good talent has already left; the people still there are just there for the paycheck or resume padding and not for the long-term.
> Nvidia and their amazing software stack and support for their hardware.
Linus seems to disagree https://m.youtube.com/watch?v=tQIdxbWhHSM
His comment has nothing to do with quality of software or quality of support, but is about dealing with NVidia. Trying to work with NVidia (as a hardware manufacturer) must have been frustrating, but that has nothing to do with quality of the software.
The video is 12 years old. A lot changed in the meantime.
AMD has open source drivers and crashes often. NVidia has (or more precisely had) closed source drivers that nearly always work.
That was about drivers, and in 2012. At the time Linux was not an interesting client market for them. But now, with AI, Nvidia has open source drivers.
https://www.phoronix.com/news/NVIDIA-Transitions-OSS-KMD
>Nvidia has open source drivers.
Only the kernel module driver, which is most likely less than 5% of the driver stack.
Torvalds wants open drivers, and NVidia doesn't do that. NVidia's drivers are better than their competitors by enough to make it worth buying NVidia even when their hardware is objectively worse, so much as I would prefer open-source in principle, I can understand why they don't want to give away the crown jewels.
(I can't watch videos)
Oh, yes. They spent too many years as the obvious #1, with a license to print money...when, longer-term, staying there required that Intel remain top-of-league in two profoundly difficult and fast-moving technologies (10e9+-transistor CPU design, and manufacturing such chips). Meanwhile, the natural rot of any large org - people getting promoted for their ladder-climbing skills, and making decisions for their own short-term benefit - was slowly eating away at Intel's ability to stay at the top of those leagues.
Intel needed to be split in two as well, which Gelsinger only half-heartedly did. He split the company into two functions - foundry and design, but didn't take that to its logical conclusion and split up the company completely.
> Intel needed to be split in two as well
Wouldn't that just pretty much guarantee that the foundry business would fail, since Intel wouldn't have any incentive not to shift most of their manufacturing to TSMC? The same thing happened with AMD/Global Foundries.
AMD has a big wafer supply agreement with GlobalFoundries, and has since the spinoff. It was exclusive until the seventh WSA in 2019 which allowed AMD to purchase 7nm and beyond from other suppliers (without paying penalties) which was the only reasonable resolution after GloFo cancelled their 7nm fab (which may have been the best thing to happen to AMD). But AMD increased their GloFo orders in May and December 2021 during the chip crunch to $2.1B total through 2025. If you look at the first WSA amendment from March 2011 it includes AMD agreeing to pay an additional $430M if they get some (redacted) node in production in time.
Anyway, whatever woes GloFo is facing you can’t blame them on AMD. They had an exclusivity deal for a decade which only got broken when it was no longer tenable and AMD still buys a ton of their wafers. I suppose AMD may have bought more wafers if their products didn’t suck for most of that time but it had nothing to do with shifting production to TSMC which only happened after GloFo gave up.
Right, so GloFo couldn't keep up and abandoned the bleeding edge. What's the evidence that Intel foundries, divorced from design, wouldn't suffer the same fate?
No evidence. Intel doesn't have the resources to be fighting TSMC, ARM, Nvidia, Apple, and Samsung in different technologies at the same time (foundry, GPUs, CPUs, NAND/SSD, etc.). They already sold the NAND memory business to SK Hynix in 2021.
They will have to focus, and that means getting out of lines of business that would likely die anyway.
That would be better than going bankrupt and having your competitors pick up the pieces.
With the design side, Intel foundries have struggled to keep up with TSMC. It's not clear that the design side helps. My guess is that it's actually a question of corporate culture, and that AMD's ambitious, driven people stuck with AMD.
Yep, one business line is an albatross around the other. Some think this means it’s better they stay together. Others think you can save one by separating the other.
I, personally, found my life to improve when we decided that the cleaning lady could be someone from outside the marriage.
Agree with OP that Intel was probably too deep into its downward spiral. While it seems Pat tried to make changes, including expanding into GPUs, it either wasn't enough or too much for the Intel board.
Splitting Intel is necessary but probably infeasible at this point in the game. Simple fact is that Intel Foundry Services has nothing to offer against the likes of TSMC and Samsung - perhaps only cheaper prices and even then it's unproven to fab any non-Intel chips. So the only way to keep it afloat is by continuing to fab Intel's own designs, until 18A node becomes viable/ready.
He failed on GPU. The product was substandard.
That means either he knew and allowed it to happen, which is bad, or he didn't know and allowed GPU division to squander the resources, which is even worse. Either way, it was an adventure Intel couldn't afford.
There is a similar story in other areas.
Disagree on "failed on GPU" as it depends on the goal.
Sure, Intel GPUs are inferior to both Nvidia and AMD flagship offerings, but they're competitive on price-to-performance. I'd argue that for a 1st gen product it was quite successful at opening up the market and enabling cross-selling opportunities with its CPUs.
That all said, I suspect the original intent was to fabricate the GPUs on IFS instead of TSMC in order to soak up idle capacity. But plans changed along the way (likely for performance reasons) and added to IFS's poor perception.
The issue with the GPUs is that their transistor to performance ratio is poor. The A770 has as many transistors as about a 3070ti but only performs as well as a 3060 (3060ti on a good day).
So with that, they are outsourcing production of these chips to TSMC, using nearly cutting-edge processes (Battlemage is being announced tomorrow and will use either TSMC 5 or 4), and the dies are pretty large. That means they are paying for dies the size of a 3080's and retailing them at 3060 prices.
The A770 actually has more transistors than the RTX 3070 Ti:
RTX 3070 Ti: 17,400 million transistors
A770: 21,700 million transistors
https://www.techpowerup.com/gpu-specs/geforce-rtx-3070-ti.c3...
https://www.techpowerup.com/gpu-specs/arc-a770.c3914
It has taken Nvidia decades to figure out how to use transistors as efficiently as it does. It was unlikely for Intel to come close with their first discrete GPU in decades.
That said, it is possible that better drivers would increase A770 performance, although I suspect that reaching parity with the RTX 3070 Ti would be a fantasy. The RTX 3070 Ti has both more compute and more memory bandwidth. The only advantage the A770 has on its side is triple the L2 cache.
To make matters worse for Intel, I am told that games tend to use vendor specific extensions to improve shader performance and those extensions are of course not going to be available to Intel GPUs running the same game. I am under the impression that this is one of the reasons why DXVK cannot outperform the Direct3D native stack on Nvidia GPUs. The situation is basically what Intel did to AMD with its compiler and the MKL in reverse.
Specifically, information on these extensions is here:
https://gpuopen.com/amd-gpu-services-ags-library/ https://developer.nvidia.com/rtx/path-tracing/nvapi/get-star...
Also, I vaguely recall that Doom Eternal used some AMD extension that was later incorporated into vulkan 1.1, but unless ID Software updated it, only AMD GPUs will be using that. I remember seeing AMD advertising the extension years ago, but I cannot find a reference when I search for it now. I believe the DXVK developers would know what it is if asked, as they are the ones that told me about it (as per my recollection).
Anyway, Intel entered the market with the cards stacked against it because of these extensions. On the bright side, it is possible for Intel to level the playing field by implementing the Vulkan extensions that its competitors use to get an edge, but that will not help it in Direct3D performance. I am not sure if it is possible for Intel to implement those as they are tied much more closely with their competitors’ drivers. That said, this is far from my area of expertise.
> He failed on GPU. The product was substandard.
I will never understand this line of reasoning. Why would anyone expect an initial offering to match or best similar offerings from the industry leader? Isn't it understood that leadership requires several revisions to get right?
Oh, poor multi-billion-dollar company. We should buy its product with poor value, just to make it feel better.
Intel had money and decades of integrated GPU experience. Any new entrant to the market must justify the value to the buyer. Intel didn't. He could have sold them cheap to try to carve out a position in the market, though I think that would have been a poor strategy (Intel didn't have the financials to make it work).
I think you misunderstood me. I wasn't calling for people to purchase a sub-par product, rather for management and investors to be less fickle and ADHD when it comes to engineering efforts one should reasonably expect to take several product cycles.
Honestly, even with their iGPU experience, Arc was a pretty impressive first dGPU since the i740. The pace of their driver improvement and their linux support have both been impressive. They've offered some niche features like https://en.wikipedia.org/wiki/Intel_Graphics_Technology#Grap... which Nvidia limits to their professional series.
I don't care if they have to do the development at a loss for half a dozen cycles, having a quality GPU is a requirement for any top-tier chip supplier these days. They should bite the bullet, attempt to recoup what they can in sales, but keep iterating toward larger wins.
I'm still upset with them for cancelling the larrabee uarch, as I think it would be ideal for many ML workloads. Who needs CUDA when it's just a few thousand x86 threads? I'm sure it looked unfavorable on some balance sheet, but it enabled unique workloads.
> I don't care if they have to do the development at a loss for half a dozen cycles,
And here is the problem. You are discussing a dream scenario with unlimited money. This thread is about how the CEO of Intel has retired/was kicked out (far more likely) for business failures.
In the real world, Intel was in bad shape (see margins, stock price, etc.) and couldn't afford to squander resources. Intel couldn't commit and thus should have adjusted its strategy. It didn't. Money was wasted that Intel couldn't afford to waste.
Well, seeing as GPU is important across all client segments, in workstation and datacenter, in console where AMD has been dominant, and in emerging markets like automotive self-driving, not having one means exiting the industry in a different way.
I brought up Intel's insane chiplet [non-]strategy elsewhere in the thread as an example where it's clear to me that Intel screwed up. AMD made one chiplet and binned it across their entire product spectrum. Intel made dozens of chiplets, sometimes mirror images of otherwise identical chiplets, which provides none of the yield and binning benefits of AMD's strategy. Having a GPU in house is a no-brainer, whatever the cost. Many other decisions going on at Intel were not. I don't know of another chip manufacturer that makes as many unique dies as Intel, or has as many SKUs. A dGPU is only two or three of those and opens up worlds of possibility across the product line.
Pulling out of a vital long-term project because it can't deliver a short-term return would be a bigger waste. Unless you think Intel is already doomed and the CEO should be pursuing managed decline?
It's worth mentioning that IIRC the team responsible for the Arc GPU drivers was located in Russia, and after the invasion of Ukraine they had to deal with relocating the entire team to the EU and lost several engineers in the process. The drivers were the primary reason for the absolute failure of Arc.
Intel deserves a lot of blame but they also got hit by some really shit circumstances outside of their control.
He was CEO. Chief executing officer. It's literally his job to execute, i.e. fix that stuff/ensure it doesn't happen. Get them out of Russia, poach new devs, have a backup team, delay the product (i.e. no HVM until good drivers are in sight). That's literally his job.
This only reinforces my previous point. He had good ideas, but couldn't execute.
Executive, not executing.
On a side note, getting people in Russia to write your drivers sounds a bit insane. Yeah, lower cost and probably OK quality, but the risks...
CEO stands for Chief Executive Officer.
a chief executive officer, the highest-ranking person in a company or other institution, ultimately responsible for making managerial decisions.
Maybe you mean COO?
> shit circumstances outside of their control.
They chose to outsource the development of their core products to a country like Russia to save costs. How was that outside of their control? It's not like it was the most stable or reliable country to do business in even before 2022...
Russia is reliable when it comes to software engineering. I've met a few guys from Intel Russia, bright folks. The politics, though...
Individual Russian software developers might be reliable, but that's hardly the point. They should've just moved them to the US, or even Germany or somewhere like that, if they were serious about entering the GPU market, though...
e.g. There are plenty of talented engineers in China as well but it would be severely idiotic for any western company to move their core R&D there. Same applied to Russia.
Well, Intel Russia opened in 2000 back when USA and Russia were on good terms, and Putin was relatively unknown. Sure it was a mistake in hindsight...
I doubt they began working on ARC/XE drivers back in 2000. If the entire driver team being in Russia (i.e. Intel trying to save money) was truly the main reason why ARC failed on launch they really only have themselves to blame...
Not just in hindsight -- but by 2011 it was clear to anyone paying attention where Russia was heading (if not to war, then certainly to a long-term dictatorship). Anyone who failed to see the signs, or chose to intellectualize past them - did so willingly.
I think if you're CEO of Intel, some foresight might be in order. Or else the ability to find a solution fast when things turn unpredictably sour. What did he get a $16M salary for?
It had been obvious for quite a while even before 2022. There were the Chechen wars, and Georgia in 2008, and Crimea in 2014. All the journalists and opposition politicians killed over the years, and the constant concentration of power in the hands of Putin. The Ukraine invasion was difficult to predict, but Russia was a dangerous place long before that. It’s a CEO’s job to have a strategic vision, there must have been contingency plans.
Wars involving the US in the 21st century:
Wars involving Russia in the 21st century:
I don't know what you are trying to say. If you have a point to make, at least be honest about it.
Also, I am not American and not an unconditional supporter of their foreign policy. And considering the trajectory of American politics, it is obvious that any foreign multinational doing development in the US should have contingency plans.
My point was that great powers are always in some kind of military conflict, so it's not really a deciding factor when choosing where to build an R&D.
Putin's concentration of power has been alarming, but only since around 2012, to be honest. It was relatively stable between 2000 and 2012 in general (minus isolated cases of mysterious deaths and imprisonments). Russia was business-friendly back then, open to foreign investors, and most of Putin's authoritarian laws were yet to be issued. Most of the conflicts Russia was involved in were viewed as local conflicts in border areas (Chechen separatism, disputed Georgian territories, frozen East Ukrainian conflict, etc.). Only in 2022 did the Ukraine war escalate to its current scale, and few people really saw it coming (see: thousands of European/American businesses operating in Russia by 2022 without any issue)
So I kind of see why Intel didn't do much about it until 2022. In fact, they even built a second R&D center in 2020... (20 years after the first one).
The wars or military conflicts themselves are kind of tangential. It's the geopolitical risks that come along with them.
i.e. if you are an American/European company doing business in Russia, you must account for the risk of things suddenly going south. The sanctions after 2014 were a clear signal, and Intel had years to take that into account.
> So I kind of see why Intel didn't do much about it until 2022.
I'm pretty sure that the consensus (based on pretty objective evidence) is that Intel was run by incompetent hacks prior to 2021 (and probably still is).
> thousands of European/American businesses operating in Russia by 2022
Selling your products there or having local manufacturing is not quite the same as outsourcing your R&D there due to obvious reasons...
Yeah, so if you’re a Russian company, you shouldn’t outsource to the US. Or what are you trying to tell us?
Well obviously? I mean you'd be entirely screwed because of the sanctions.
Then again Yandex kind of pulled it off.
> He failed on GPU. The product was substandard.
Weren't they pretty good (price/performance) after Intel fixed the drivers during the first year or so after release? The real failure was taking so long to ship the next gen..
They legally cannot split due to the CHIPS Act.
Not entirely true. It just requires that Intel retain a 51% stake in any split foundry company.
I wonder about the details of this.
For example, could "Intel A" continue to own the foundries, split off "Intel B" as the owner of the product lines, and then do some rebranding shenanigans so that the CPUs are still branded as "Intel"?
eg. HP and HPE.
I don’t know if it’s legally possible, but HP shows the branding bit can kinda work.
No idea if that's true or not, but the CHIPS Act didn't exist when he started as CEO.
Agreed.
He should have cut 25% of the workforce to get started (and killed the dividend).
Also - the European expansion and Ohio projects, while good short-term strategies, were too early.
Cut the ship down to size and force existing sites to refocus or be closed. Get alarmist. Make sure you cut out all the bad apples. Prune the tree. Be ruthless and determined.
They should hire Reed Hastings now. He's the OG turnaround king.
Who/what is BK? Are they the previous person who held Pat Gelsinger's position?
Brian Krzanich, the CEO before Pat
Don't forget Bob Swan
Bob Swan was fine. He was no visionary, but he was trying to do the right thing with the advice he was being given, and actually started the cleanup of a lot of BK's mess.
BK will go down in history as the person who destroyed a once great engineering firm.
They really need to clear house in a spectacular way but I'm not sure who could even do that at this point.
An alien from Vega looking at our constellation of tech companies and their leadership might point at an obvious answer…
The market seems to think this is great news. I disagree strongly here, but I can see why traders and potentially the board thought this was the right move.
A lot of hate for Pat Gelsinger on Reddit and YouTube from armchair experts who don't really grasp the situation Intel were in or what was needed to turn the ship around, so if he was pushed it seems like it might be to pacify the (not particularly informed) YouTube/gamer community and bump the share price. That's all speculation, though.
I'd be interested to see who Intel intends to get to run the company in his place, as that would signal which audience they're trying to keep happy here (if any).
Agreed. My career at Intel coincided with Pat's, although I jumped ship a little earlier. Admittedly this means I probably have a little bias, but based on my hundreds of conversations with Intel vets I do think his business strategy was the right decision. He came on board a company years behind on process, packaging, and architecture technology after years of mismanagement by a largely nontechnical staff, which favoured buybacks and acquisitions over core business practice.
He had two routes with the capital available following a cash injection from COVID-19 and the rapid digitization of the workplace - compete with AMD/NVIDIA, or compete with TSMC/Samsung. The only sensible option that would capture the kind of capital needed to turn the ship around was to become a company critical to the national security of the US during a time of geopolitical instability, onshoring chip manufacture and receiving support from the government in doing so. He could either compete with competitors at home or those abroad, but not both simultaneously. The thesis makes sense; you've lost the advantage to NVIDIA/AMD, so pivot to become a partner rather than a rival.
I don't think it's a coincidence that just a week after Intel finally received the grant from the government, he announced his departure. The CHIPS Act was a seminal moment in his career. It makes sense he'd want to see that through to completion. He's 63; now is as good a time as ever to hand over the reins, in this case to a very capable duo of MJ and Zinsner (who were always two of the most impressive EVPs of the bunch in my book).
I'm not in the industry but from what I gather, I agree with you 100%. Bloomberg published an article on the matter, though, and it seems they are reporting that Gelsinger was pushed out by a frustrated board because of "slow progress." This is a real head-scratcher to me, even as someone looking in:
https://www.bloomberg.com/news/articles/2024-12-02/intel-ceo...
Gelsinger always said this was a 5-year plan, yet the board jettisons him at 4 just as IFS customers are starting to ramp up, the 18A node is close to readiness - the company's saving grace at this point - with Panther Lake on the horizon and Altera preparing for IPO in 2026 which should be a relatively good cash injection with PE investors already negotiating stakes. Maybe I just don't have the whole picture, but it seems poorly timed.
I hate the idea that the board might do this just as Intel's prospects start looking up after years of cost-cutting measures to bring the company back to profitability, and then take credit for a "miraculous turnaround" that was actually instigated four years prior by the same person they sacked. It's like an incoming political party taking credit, within a couple of months of taking office, for a good economy it had no hand in creating.
Pat seemed to understand the criticality of a fabrication process lead in this day and age. Hence his push and decision to invest in IFS, plus the effort to win over government funding to sustain it.
In short, a bad or subpar chip design/architecture can be masked by having the chip fabricated on a leading-edge node, but not the inverse. Hence everyone is vying for capacity on TSMC's newest nodes - especially Apple, which tries to secure all the capacity for itself.
The market isn't really the greatest indicator of anything. Gelsinger has spent three years trying to turn Intel around and the company is only now starting to turn the wheel. It will be at least another three years before we see any results. The market doesn't have three years of patience; three months, maybe.
I hold no opinion on Pat Gelsinger, but changing the CEO in the middle of ensuring that Intel remains relevant in the long term seems like a bad move. Probably his plan for "fixing" Intel was too slow for the market and the board. Let's see who takes over; if it's not an engineer, then things just became more dangerous for Intel. The interim people are an administrator and a sales person, and that does not bode well.
IIRC, Lisa Su and her team took nearly a decade to orchestrate the turnaround at AMD, and they are still a distant second player in GPUs. Expecting Pat Gelsinger to turn around Intel (or any company in an industry with such long development and tech lead times) and replacing him in 3 years - given that he is an engineer with extensive domain and leadership experience - seems reactive, as opposed to thoughtful and directed.
Wonder if they will approach Lisa Su to take the job now :D
> and the company is only now starting to turn the wheel
How is it starting to turn the wheel?
It takes something like 5 or 6 years to go from the drawing board for a chip design, and many years to create a process node. Gelsinger hasn't really even had the chance to execute on designs that were started during his tenure. My understanding is that would've started with Intel 18A.
That doesn't really answer the question. Yes he has started initiatives that will take 5-6 years to pan out. Is there an early indication that they aren't all duds? How can anyone state with any certainty that Intel is on a better path today than it was 4 years ago when every single measurable metric is continuing to decline?
Both of the Co-CEOs have a finance background. I think that is rather telling. They are trying to appeal to Wall Street and potentially have people who are equipped to manage an M&A deal.
The guy that got them into this situation started as an engineer. Swan was a money guy, but he did better than Krzanich. So, I think it is just hard to guess based on somebody’s background how they’ll do.
However, IMO: they need somebody like Lisa Su, somebody with more of an R&D/engineering background. Gelsinger was a step in the right direction, but he got a master's degree in the '80s and did technically hardcore stuff in the late '80s and early '90s. That was when stuff just started to explode.
Su finished her PhD in the 90’s, and did technically interesting stuff through the computer revolution. It appears to have made a world of difference.
why would the board care about "pacifying the YouTube/gamer community"? seems like a very unlikely reason for a CEO to be fired.
If anything, the streams are reversed.
I'd expect Intel marketing and Public Relations to be paying YouTube Influencers to have a particular opinion, the one most favorable to the board.
Bad news can be good news depending on your time horizon.
Gelsinger retires before Intel Foundry spin is ready. This means trouble.
Intel invested tens of billions into the 20A and 18A nodes, but it has not paid off yet. News about yields seemed promising. Massive investments have been made. If somebody buys Intel foundries now, they pay one dollar and take debt + subsidies. Intel can write off the debt but don't get potential rewards.
Foundry is the most interesting part of Intel. It's where everything is happening despite all the risk.
> Massive investments have been made. If somebody buys Intel foundries now, they pay one dollar and take debt + subsidies. Intel can write off the debt but don't get potential rewards.
You are describing the start of the vulture capitalist playbook for short-term profits and dividends right there: take subsidies and loans, sell everything to pay dividends (or rent it back to yourself via a shell company), then let the remaining stuff die and sell it off for an additional small profit. Don't know how it works here but it sure sounds like it.
I'm describing massive investment in fundamental manufacturing infrastructure. Depreciating that capital expenditure takes a long time. The exact opposite of vulture capitalism and short-termism.
> Don't know how it works here but it sure sounds like it.
Thank you for being honest and saying that you are describing how things you don't understand sound for you.
Sad to see Gelsinger go, but despite the churn, I don't think Intel is doomed. At worst, per their most recent Q3 results, I see $150Bn in assets and $4Bn in outstanding stock, and I see the US Gov (both incoming and outgoing) gearing up for a long war against China where Intel would be an asset of national importance.
My back-of-envelope calculation says Intel should be about $37.50 a share (150/4). If they stumble when they report Q4, I think the US Gov will twist some arms to make sure that fresh ideas make it onto the board of directors, and perhaps even make Qualcomm buy them.
I do think that Intel needs to ditch some of their experiments like Mobileye. It's great (essential) to try to "rebalance their portfolio" away from server/PC chips by trying new businesses, but Mobileye hasn't grown very much.
Nitpick: they don't have $4B in public stock, they have about 4B shares outstanding, each of which is currently trading at about $25.
Interested in your opinion, since TSMC has fabs in the US now, are Intel still relevant even in the context of a Chinese invasion of Taiwan?
TSMC's US fabs cannot produce enough to replace all that they produce in Taiwan, nor are the US fabs producing a leading-edge node.
The next step companies (for packaging etc) are also still in Taiwan iirc, but they're more replaceable.
Taiwanese law forbids TSMC from manufacturing chips abroad using their latest process, so no 2nm in the US fabs. This leaves Intel's 18A as the most advanced node on US soil.
The TSMC Arizona 4nm fabs are a contingency. TSMC received $6+ billion under the CHIPS and Science Act, and the fab opening is delayed until 2025 because they don't have the local talent yet.
I would say yes. Speculation follows: If the unthinkable happens, and assuming it devolves into a cold rather than a hot war (eg the Trump administration decide not to send soldiers and weapons to Taiwan but let the Chinese have the island), then US TSMC is appropriated, Intel or AMD or Qualcomm are told to run it, and all three are instructed to ramp up manufacturing capacity as aggressively as possible. If it's more like the status quo saber rattling, then I think USG would still want a 100% domestic supplier to be acting as a second source for the local economy and a primary source for anything the defense-industrial complex needs.
I imagine it takes a lot behind the scenes - especially priceless professional experience concentrated at HQ - to know how to set up new sites, set the future direction at all levels of the organization/timeframes, etc. etc. etc. What happens to the fabs long-term if leadership from Taiwan is decapitated?
how do you calculate share prices? curious
It's very back-of-envelope. Basically price to book value, except I didn't account for liabilities. $150B in assets divided by 4B shares implies one share should account for 150/4 = $37.50 worth of the business. It's a quick sanity check but absolutely not a robust yardstick!
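If you want to play with the numbers yourself, here's a minimal sketch of that same sanity check in Python, using the rough figures quoted in this thread (and, as noted, ignoring liabilities entirely):

    # Rough price-to-book sanity check using the approximate figures cited above.
    # These are not exact balance-sheet numbers, and liabilities are deliberately
    # ignored, so treat the result as a ballpark upper bound, not a valuation.
    total_assets_usd = 150e9      # ~$150B in assets
    shares_outstanding = 4e9      # ~4B shares outstanding

    implied_value_per_share = total_assets_usd / shares_outstanding
    print(f"Implied value per share: ${implied_value_per_share:.2f}")  # -> $37.50

    # A less naive check would subtract total liabilities to get book equity:
    # equity_per_share = (total_assets_usd - total_liabilities_usd) / shares_outstanding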
Latest story from Bloomberg confirms "He was given the option to retire or be removed, and chose to announce the end of his career at Intel" https://finance.yahoo.com/news/intel-ceo-gelsinger-leaves-ch...
Dear Intel Executive Search Team,
Hi! CEOing has always been a hobby of mine, but with the recent news, I thought this would be a great opportunity for us to synergize.
Sure, I don’t have much experience in the chip making world but I did buy a Pentium when I built my own computer. I also have never run a multinational company but I visited several other countries on a Disney cruise.
Let me lay it out- you would get a dynamic new CEO at a fraction of the going market rate, and I’d get to take my Chief Executiving skills to the next level.
Let’s do this together!
You can reply here and let me know if you’re interested.
You joke, but the media-to-leadership pipeline is coming for US business.
I am amazed -- stunned -- how many people here seem to think that Gelsinger was the right person for the job, but wronged by the people who pre-dated him (BK?!) or the CHIPS act (?!) or other conditions somehow out of his control.
Gelsinger was -- emphatically -- the wrong person for the job: someone who had been at Intel during its glory days and who obviously believed in his heart that he -- and he alone! -- could return the company to its past. That people fed into this messianic complex by viewing him as "the return of the engineer" was further problematic. To be clear: when Gelsinger arrived in 2021, the company was in deep crisis. It needed a leader who could restore it to technical leadership, but could do so by making some tough changes (namely, the immediate elimination of the dividend and a very significant reduction in head count). In contrast, what Gelsinger did was the worst of all paths: he allowed a dividend to be paid out for far (FAR!) too long and never got into really cutting the middle management undergrowth. Worst of all, things that WERE innovative at Intel (e.g., Tofino) were sloppily killed, destroying the trust that Intel desperately needs if it is to survive.
No one should count Intel out (AMD's resurrection shows what's possible here!), but Intel under Gelsinger was an unmitigated disaster -- and a predictable one.
I don't think you're wrong but the overarching structure of the chip business is very different from times gone by. It's not even clear what "technical leadership" should mean. When Intel was the leading edge volume leader just on their own processor line, that gave them a scale advantage they no longer have and that won't come back. They built a culture and organization around thick margins and manufacturing leadership, what we're seeing now looks like everyone from investors to employees searching for anyone who will tell them a happy story of a return to at least the margins part. Without a cohesive version of what the next iteration of a healthy Intel should look like all the cost cutting in the world won't save them.
Interestingly, people were bullish about Gelsinger at VMware too. Many still talk about the glory days with him at the helm despite decisions that IMO significantly harmed the company
I find stories about the opposite. How it was very good for the VMware company, so good that nobody ousted him, but he left for a bigger company instead.
I agree. I don't see him having achieved anything particularly noteworthy over his tenure.
I'm not sure where Intel needs to go from here - ultimately lots of problems are solved if they can just design a solid CPU (or GPU) and make enough of them to meet demand. Their problems recently are just down to them being incapable of doing so. If their fab node pans out that's another critical factor.
Intel still has tons of potential. They're by no means uncompetitive with AMD, really. The fabs are a millstone right now, and the entire reason the shares are as cheap as they are until they can start printing money with them. It does feel like they don't have any magic, though, no big moonshots or cool projects left since they basically killed all of them :(.
Yeah, but it was the return of the wrong engineer. While Pat allegedly architecting the i486 commands major respect, it was ultimately an iteration of the i386, which is where the alpha is at. The guy who architected the i386 (John Crawford) is too old. So that leaves Intel Core, which was architected by Uri Frank. Now get this: two weeks after Pat Gelsinger was appointed CEO of Intel in February 2021, Uri Frank announced he was joining Google to lead the development of their cloud chips. That guy is probably Intel's natural leader. So it'd be interesting to know if there's more to this story.
Let's not pretend BK did any good for the company though; you make it sound like he did an OK job.
I had forgotten who "BK" was , so I dove into https://en.wikipedia.org/wiki/Intel
"On May 2, 2013, Executive Vice President and COO Brian Krzanich was elected as Intel's sixth CEO [...]"
The next paragraph is ominous ...
'As of May 2013, Intel's board of directors consists of Andy Bryant, John Donahoe, Frank Yeary, Ambassador Charlene Barshefsky, Susan Decker, Reed Hundt, Paul Otellini, James Plummer, David Pottruck, and David Yoffie and Creative director will.i.am. The board was described by former Financial Times journalist Tom Foremski as "an exemplary example of corporate governance of the highest order" and received a rating of ten from GovernanceMetrics International, a form of recognition that has only been awarded to twenty-one other corporate boards worldwide.'
I definitely have some issues with BK, but it's more that there is another entire CEO between BK and Gelsinger (Bob Swan!) -- and I think it's strange to blame BK more than Swan?
Intel has had some big failures (or missed opportunities) over the years. Just going from memory - Pentium 4 architecture, not recognizing market opportunities with ARM, Itanium, AMD beating them to 64 bits on x86, AMD beating them to chiplets and high number of PCIe lanes with EPYC, poor battery life (generally) on laptops. The innovations from Apple with Apple Silicon (ARM) and AMD with EPYC make Intel look like they're completely lost. That's before we even touch on what RISC-V might do to them. It seems like the company has a long history of complacency and hubris.
Not an insider, but this doesn't seem good. I more or less agreed with every call Intel's made over the past few years, and was bullish on 18A. I liked that Pat was an engineer. His interim replacements don't seem to have that expertise.
Intel wasn't doing great to start, but Pat put it on the path, potentially, towards greatness. Now even that is in major jeopardy.
>His interim replacements don't seem to have that expertise.
MJ has a pretty good reputation inside the company.
I am perhaps one of Pat's biggest fans. I was in tears when I learned he was back at Intel [1];
>"Notable absent from that list is he fired Pat Gelsinger. Please just bring him back as CEO." - 2012 on HN, when Paul Otellini Retired.
>"The only one who may have a slim chance to completely transform Intel is Pat Gelsinger, if Andy Grove saved Intel last time, it will be his apprentice to save Intel again. Unfortunately given what Intel has done to Pat during his last tenure, I am not sure if he is willing to pick up the job, especially the board's Chairman is Bryant, not sure how well they go together. But we know Pat still loves Intel, and I know a lot of us miss Pat." [2] - June, 2018
I am sad to read this. As I wrote [2] only a few hours ago, the $8B from the CHIPS Act is barely anything if the US / Intel wants to compete with TSMC.
Unfortunately there were a lot of things that didn't go to plan, or, from my reading of it, were out of his control. Cutting the dividend was a no-no for the board until late. A big cut of headcount wasn't allowed until too late. Basically he was tasked with turning the ship around rapidly, while not being allowed to rock the ship too much, all while it was taking on water at the bottom. And I will again, as I already wrote in [1], point the finger at the board.
My reading of this is that it is a little too late to save Intel, both as a foundry and as a chip maker. Having Pat "retired" would likely mean the board is now planning to sell Intel off, since Pat would likely be the biggest opponent of this idea.
At less than $100B I am sure there are plenty of interested buyers for various parts of Intel. Broadcom could be one. Qualcomm, or maybe even AMD. I am just not sure who will take the foundry or if the foundry will be a separate entity.
I don't want Pat and Intel to end like this. But the world is unforgiving and cruel. I have been watching TSMC grow and cheerleading them since 2010, before 99.99% of the internet had even heard the name, and I know their game far too well. So I know competing against TSMC is a task that is borderline impossible in many aspects. But I would still have wanted to see Pat bring Intel back to the leading edge again. The once almighty Intel.....
Farewell and Goodbye Pat. Thank You for everything you have done.
[1] https://news.ycombinator.com/item?id=25765443
[2] https://news.ycombinator.com/item?id=42293541
I hope they sell off some more distracting/minor parts of the business, and then work with a fund to take the company private.
Similar to Yahoo a number of years ago, there's some real business still there, they just need to remove the expectation of constant quarterly calls and expectations and make long term, consistent investments again.
Yahoo's acquisition by Verizon was a tire-fire train wreck of epic proportions. In no universe is this something to emulate. There's a good reason every single Verizon exec involved in that deal was gone relatively shortly after this disaster.
They basically incinerated $5 billion in the process of buying Yahoo, merging it with AOL to form "Oath", doing many rounds of layoffs with extremely overgenerous severance packages, strip-mining the business units in ill-conceived deals (often in exchange for stock in absurd companies like MoviePass and Buzzfeed), and then eventually dumping the remainder on a PE firm at a huge loss.
Or, as Wikipedia succinctly summarizes these events, "the integration did not deliver the expected value".
oh this board would not do that. they just want to cannibalize intel.
I am old enough to remember when AMD was nowhere, and Intel was the only game in town for both consumer and server chips, laptop and desktop. I think it was 2011.
Then 2013 rolled around and I built a desktop with an AMD processor, because it was dirt cheap. I remember people saying that they were excited and looking forward to the upcoming AMD architectures and roadmap. I couldn't understand how AMD could possibly have come from behind to beat Intel.
Realistically, you just need to come up with a good roadmap, get good people, get the resources, execute.
Roadmap is good, people - not so sure about this one, resources - seem a little low, execution - seems hampered by too many cooks. If Intel broke up into smaller companies and their new leading edge chip design group had 500 good people, I honestly think they would have a better shot. So I think we are seeing what makes the most sense, Intel must die and let the phoenixes rise from the ashes.
Fast forward to 2024/2025, and remember, anything is possible if AMD beat 2011 Intel.
AMD doesn't have the burden of capital expense that a fab is. It did take several years for AMD to capitalize on the freedom this enables, though. As far as I know, Intel is the only company left that is vertically integrated. This might be the last domino to fall. Arrow Lake uses TSMC heavily. Maybe that's a sign of things to come. Andy Grove said only the paranoid survive. It didn't seem like Intel management was that paranoid. They didn't invent 64-bit x86; they had to license it from AMD. They spent so much money on stock buybacks when it should have been funneled back into the fabs, getting yields up on the next process node. It's easy for me to be an armchair quarterback though, I'm not aware of everything that happened behind the scenes. I saw D1X being built, and the future seemed bright.
Pat returned as CEO in 2021. I don't think that 3 years is enough time to clean out the internal rot that has led Intel to where it is today and a lot of problems have been in motion for a while. Short-term this might be better for Intel but this move lacks long term vision. Intel also just got a big hand-out from the government that would've kept them moving in the right direction regardless of what CEO is leading the pack.
How long would you give him to run it into the ground?
I'm not sure but from the outside it seems his biggest sin was not clearing out the executives that let the rot get to this point to begin with.
How is 3 years not enough time to dump some mediocre middle management?
Some of these people have strong contracts with clauses against being fired. It is also very difficult to replace them if capable people left the company. Especially if they are the majority of middle management.
A few years ago, Pat said Intel had internally rebuilt their "Grovian execution engine". I found those words empty, a far cry from Andy Grove's hard decision to dump memory in favor of microprocessors. Andy Grove's decisions made Intel, Intel, not "execution".
It's unfortunate the situation is beyond almost anyone's grasp but I wonder if Pat should have talked less, and done more.
Unfortunately not surprising, looking at the past year or so.
When he took over, I remember the enthusiasm and optimism, not only in business, but in hacker circles also. It's a pity it didn't work out. I wonder if it is even possible to save Intel (not trying to imply that "if Gelsinger can't do it, than no one can", just wondering if Intel is just doomed, regardless of what their management does).
I got to meet and interact with Pat a few times while he was the CEO of VMware. I really liked him and his approach to things. He has done the best he could with the hand that was dealt to him.
The CEO of a public company is a glorified cheerleader. Doing the right things is half the job, convincing people you are doing the right things is the other half. Considering Intel's share price dropped 61% under Gelsinger's tenure, no matter the work he did towards the first half, it's pretty clear he thoroughly failed at the second. They desperately need someone who will drive back investor confidence in the company, and fast.
> The CEO of a public company is a glorified cheerleader.
I find this view limited. If we look at the most successful tech CEOs, they all personally drove the direction of products. Steve Jobs is an obvious example. Elon pushes the products of his companies so much that he even became a fellow of the National Academy of Engineering. Jeff Bezos was widely credited as the uber product manager at Amazon. Andrew Grove pushed Intel to sell their memory business and go all in on CPUs. Watson Jr. was famous for pushing IBM to go all in on electronic computers and later the mainframes. The list can go on and on. By contrast, we can see how mere cheerleading can lead to the demise of companies. Jeff Immelt, as described in the book Lights Out, is such an example.
All the CEOs in your example did some great work, yes, but also created a cult following around themselves. They were all shareholder darlings. Most of them are featured in business school textbooks as examples of how to run a company. All this kind of proves the point I'm trying to make. Just being involved in products isn't enough, not by a long shot. You need to make investors go "Steve Jobs is in charge, so the company is in good hands". If you can't do that, you may as well be a mid-level product manager or director or VP doing all those same things.
Makes sense. I was saying that cheerleading is not a sufficient condition, while your point is that cheerleading is a necessary condition.
0-to-1-ing a company is different muscle from managing an established company.
Revenue expectations, margins expectations, and investors are entirely different between the two.
This is why most founder CEOs tend to either get booted out or choose to keep their companies private as long as possible.
Given the average shelf life of S&P companies, I assume that even an established company needs to go through 0-to-1 all the time. The aforementioned companies all reinvented themselves multiple times.
> Revenue expectations, margins expectations, and investors are entirely different between the two.
Yeah, it's hard. How the great CEOs achieve their successes are beyond me. I was just thinking that a great CEO needs to drive product changes.
> I assume that even an established company needs to go through 0-to-1 all the time
Not significantly. Massive product and strategy pivots are rare because they can fail horribly (eg. For every turnaround like Apple there's a failure in execution like Yahoo)
> How the great CEOs achieve their successes are beyond me
Luck in most cases.
I've worked with plenty of CEOs and in most cases, most people at the C-Suite and SVP level of large organizations are comparable to each other, skills- and intellect-wise.
Administration/mid-level management buy-in is also critical. C-Suite doesn't have much visibility into the lower levels of an organization (it's unrealistic), so management is their eyes and ears.
January 18, 2022 - "Intel CEO says AMD is in the rearview mirror and 'never again will they be in the windshield'"
https://www.pcgamer.com/intel-ceo-says-amd-is-in-the-rearvie...
Intel has no GPU. Intel has no mobile processors / SoCs. These seem to be the drivers of growth nowadays. And their CPUs have a hard time competing with AMD now. I'm not sure that the 3 years which Gelsinger had were enough to turn the tide.
>Intel has no GPU.
I own one, Arc isn't the best, but it's still able to compete with lower end Nvidia and AMD GPUs. They ended up having to mark them down pretty drastically though.
I actually owned an Intel-powered Zenfone about 8 years ago. Aside from being absolutely massive, it was decent.
I think Intel got arrogant. Even today, with all the benchmarks showing Intel isn't any faster than AMD, Intel is still more expensive for PC builds.
Check Microcenter, at least in the US, the cheapest AMD bundle is 299 vs 399 for Intel.
They're lagging behind in every metric.
Yes. Intel might have a GPU, and maybe even a phone SOC, if they tried hard enough. Intel's integrated GPU cores are quite decent; I had high hopes on Arc eventually becoming the third major discrete offering. Alas.
Intel indeed rested too long on their laurels from early 2000s. It's one of the most dangerous things for a company to do.
> The CEO of a public company is a glorified cheerleader.
It can be. You've just noticed the fact that for publicly traded companies where the value of the stock exceeds the total value of the assets you actually get to put that difference on your books into categories like "Good will" or "Consumer confidence."
For companies struggling to create genuine year over year value this is a currently validated mechanism for faking it. The majority of companies do not need to do this. That companies operating in monopolized sectors often have to do it is what really should be scrutinized.
Is this not a bit too short a time for results to show yet? Turning a ship too many times would just result in it spinning in circles around the same position
Part of me is wondering if in an imaginary world, these same people are on AMD's board, would Lisu Su have already been fired a few years ago?
>Is this not a bit too short a time for results to show yet?
Pat so suddenly getting "retired" like this isn't based on the success or failure of the new foundry nodes. You're correct that they weren't supposed to ship yet anyway. With this news most are expecting they'll be (further) delayed soon, but the real cause of today's action is strategic.
Things at Intel are so far gone that there's now no choice but to look at splitting up the company and/or selling/merging key parts off. Pat didn't want to preside over breaking up Intel, he wanted to save the company by shipping the new nodes. This was always a long shot plan which would require the existing businesses to do well and new things like GPUs to contribute while massively cutting costs in other areas.
Those things didn't work out. The GPUs were late and underperformed, forcing them to be sold near break-even. The market for desktop and laptop CPUs was much softer than expected for macroeconomic reasons and, worse, there were massive, delayed-onset field failures of the last two gens of desktop CPUs. Competitors like AMD generally took more share from Intel faster than expected in other markets like data center. The big layoffs announced last summer should have been done in 2021. Those things combined caused time and money to run out sooner than the new nodes could show up to save the day. This is reality finally being acknowledged. Frankly, this should have happened last summer or even earlier. Now the value has eroded further, making it even harder to save what's left.
From a customer's perspective: NO, I don't like the big-little core architecture of CPUs on desktop platforms, and I don't enjoy the quality issues of the 13th/14th gen CPUs; I also don't like the lack of software support for middling-performance GPUs. I used to like Intel's SSDs, but the division was sold.
They were just rebranded Micron SSDs the last time I checked.
They weren't. Intel and Micron used to co-develop the flash memory but had different strategies for SSD controllers, and Intel did a full generation of flash memory after breaking up with Micron and before selling their SSD business to Hynix.
18A is an absolute disaster, it's investor lawsuit worthy.
When the news came out that 20A was canceled the spin was 18A was so advanced that they no longer needed an in-between node.
NOPE, what happened was that 18A was a failure, and they renamed 20A to 18A.
How do you know this?
Here are 2 things I have noticed that seem obvious weaknesses:
1. Looking at the Cyber Monday and Black Friday deals, I see that the 12900K and 14900* series CPUs are what is being offered on the Intel side. Meanwhile AMD has released newer chips. So Intel has issues with either yield, pricing, or other barriers to adoption of their latest.
2. The ARC GPUs are way behind; it seems obvious to me that a way to increase usage and sales is to simply add more VRAM to the GPUs - Nvidia limits the 4090 to 24GB; so if Intel shipped a GPU with 48 or 64GB VRAM, more people would buy those just to have the extra VRAM. It would spur more development, more usage, more testing and ultimately result in ARC being a better choice for LLMs, image processing, etc.
> Meanwhile AMD has released newer chips.
Your comment confuses me. BOTH have released newer chips. Intel with the Core Ultra 200 series and AMD with Ryzen 9000. Neither are going to be on Black Friday sales.
> So Intel has issues with either yield, pricing or other barriers to adoption of their latest.
How does not putting their latest chips on sale indicate trouble selling them?
> pricing or other barriers to adoption of their latest.
They are just not very good? There is basically no point in buying the current-gen equivalent of the 14900K at current pricing (the only real advantage is lower power usage).
For that to work, they need a software stack to compete with CUDA.
Nvidia is so far ahead that a single manufacturer won’t be able to compete for developers. Instead, the only chance the rest of the AI GPU market has is to build some portable open source thing and use it to gang up on Nvidia.
That means bigger GPU players (AMD, hyperscalers) will need to be involved.
I am not a PyTorch user, but there is already Intel-supplied ARC acceleration for PyTorch: https://www.intel.com/content/www/us/en/developer/articles/t...
Having half the number of GPUs in a workstation/local server setup for the same amount of VRAM might make up for whatever slowdown there would be if you had to use less-optimized code. For instance, running or training a model that required 192GB of VRAM would take four 48GB GPUs versus eight 24GB GPUs.
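For what it's worth, here is a minimal sketch of what using that Intel acceleration looks like from user code. It assumes a recent PyTorch build with the Intel "xpu" backend (or the intel-extension-for-pytorch package) installed, and falls back to CPU otherwise; it's a sketch under those assumptions, not a recipe from the linked Intel article:

    import torch

    # Use Intel's "xpu" device if this PyTorch build exposes it (recent releases do,
    # and intel-extension-for-pytorch provides the same device on older ones);
    # otherwise fall back to the CPU so the sketch still runs anywhere.
    device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 1024),
    ).to(device)

    x = torch.randn(8, 1024, device=device)
    with torch.no_grad():
        y = model(x)
    print(y.shape, y.device)

The point being that, at this level, the code doesn't care whether the VRAM sits on a few big cards or many small ones; the device string is the only Intel-specific part.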
> For that to work, they need a software stack to compete with CUDA.
Doesn't Intel have pretty decent support in PyTorch? It's not like most people working on/with AI use CUDA directly.
And especially for stuff like LLMs, software is hardly the main, or even a significant, barrier.
Gelsinger seemed well connected to Washington / the democratic administration in particular and the CHIPS act seemed to be designed to save/bail out Intel.
Perhaps this is fallout from the election.
IMO Intel is far too much of a strategic asset to allow such short-sighted policy from whatever administration. The upcoming administration surely knows that, and as far as I know Intel has never made strong moves that would alienate the incoming administration. Also, getting it back on its feet is actually in line with the narrative; it is easy to give it such a spin. Even if some personal differences are at play, this is more important than that.
It is strategically important for the US to be ahead in technology; however Intel is no longer ahead and is not really the strategic asset it used to be.
It is still strategically important to be able to supply domestic civilian, industrial, and military computation needs with good-enough chips. If you are not ahead, but still good enough, then you have sovereignty, and a good chance of getting back to the top eventually (in the short/mid term).
China is not ahead. Still they are capable of mass-producing somewhat capable chips, with good enough yields for strategic projects.
Also they can mass produce now outdated designs, which are still good enough for "office use" to provide continuity of the government bureaucracy's operation.
China has less advanced nodes where it can produce for industrial applications.
They have the potential to eventually get ahead, and right now a total embargo would only slow them down, not cripple them. This situation is acceptable for any state as a temporary measure while it reorganizes.
Also, Intel is still producing better stuff than the Chinese can. It is still ahead. And as I detailed above, I think it would need to fall behind way more to lose its strategic nature.
Also capacities in Taiwan and in South-Korea are very fragile from a strategic perspective, even if they are/were more advanced than what Intel has.
Claiming the upcoming administration surely knows anything is a bit of a stretch.
If AMD can come back from the brink of bankruptcy, then Intel can too. I believe.
[Edit]: Though it might have to be that Intel literally has to come to the brink of bankruptcy in order for that comeback to happen.
AMD had Lisa Su, who does intel have?
Didn’t they get Jim Keller to design Zen?
It seems really strange to me that the CEOs of two major companies have announced immediate retirements on the same day (the other being Carlos Tavares of Stellantis).
I'm more surprised about Gelsinger than Tavares. Gelsinger seemed to have a viable plan going forward, even if progress was slow.
But Tavares has been doing a terrible job for the past year: New Jeep inventories stacking up on dealer lots for over a year, mass layoffs, plant closings, unreliable cars, and no meaningful price reductions or other measures to correct the situation. You couldn't pay me to take a new Jeep or Ram truck right now.
From what I heard, Stellantis is so screwed it's not even funny anymore.
Enjoy retirement Pat! His book "The Juggling Act" was very formative for me in my early years in tech, I really enjoyed it.
This is very bad news for any hopes of establishing a homegrown foundry in the US.
This is not good. We already know what happened when a CFO took over. It was a time when Intel totally lost control. They are gonna get bought for pennies. OMG
Instead of saying AMD will be in the rearview mirror, they should have been paranoid. Not do stupid GPUs, and destroy others where it mattered.
>Not do stupid GPUs
Hard disagree here x100. Investing in GPUs at the time when Nvidia and AMD started to gouge the market is actually the best decision Intel made in recent times. It's the piece of the semiconductor business with some of the highest margins, and they already own a lot of the patents and IP building blocks to make it happen.
The only stupid thing they did was not getting into GPUs earlier so they would already be on the market during the pandemic GPU shortage and AI boom.
Intel just has to make a decent/mediocre GPU with 64GB+ memory at a $500 price point and they will instantly become the defacto local transformer leader. It's a true "build it and they will come" situation.
Undercut the big boys with affordable on-prem AI.
>Intel just has to make a decent/mediocre GPU with 64GB+ memory at a $500 price point and they will instantly become the defacto local LLM leader
Intel should have appointed people form the HN comment section as their CEO, as they clearly know more about running a giant chip design and fabrication company than the guy who worked there for 10+ years.
I actually did email Deepak Patil (head of Intel Graphics division) about this around a year ago, haha. Never did get a response though.
It is something that is easy to miss if you are just looking at typical business strategy and finances. A high-memory consumer GPU would undercut their server GPUs, which are presumably the higher-margin intended golden geese. It's easy to see them chasing server markets and "gamers" being an afterthought.
However, there is huge demand right now for a modern, even a crappy modern, GPU with gobs of memory. Make the card and the open-source AI tooling for it will start sprouting within days of its release.
It's an extremely powerful position to have every at-home AI geek's setup to be bound to using intel cards and intel focused tooling. Nvidia and AMD won't do it because they want to protect their server cards.
> It's an extremely powerful position to have every at-home AI geek's setup to be bound to using intel cards
So, incredibly small market share, while your competitors already have the first-mover advantage and have nailed down the ecosystem? With no data backing it up, I think graphics cards for local LLM needs are not really in demand. Even for gaming it's probably more attractive, but then again, that's not even where the real money is.
>So, incredibly small market share while your competitors already have the first-mover advantage and nailed down the ecosystem?
Exactly. This x100. It was easy for Nvidia to succeed in the LLM market by winging it, in the days when there was no LLM market, so they had the greenfield and first-mover advantages.
But today, when Nvidia dominates the mature LLM market, Intel winging it the same way Nvidia did, won't provide nearly the same success as Nvidia had.
Ferruccio Lamborghini also built a successful sports car company by building tractors and cars in his garage. Today you won't be able to create a Lamborghini competitor with something you can build in your garage. The market has changed unrecognizably in the mean time.
The market share is incredibly small but also incredibly well aimed.
The people learning how to do local LLMs will be the people directing the build-out of on-prem transformers for small-to-midsize companies. The size of the market is irrelevant here; it's who is in that market, and the power they will have, that is extremely relevant.
> ..open source AI tooling for it will start sprouting...
AMD has tried this for many of its technologies and I don't think it is working. Granted, they suck at open sourcing, but a shitload of it was open sourced. See TinyGrad's voyage into the Red Box driver (streams on YouTube).
Intel doesn't have to open source anything. People will build everything needed to run intel cards efficiently as there is currently zero options for affordable video cards with high memory.
It's either old, slow Tesla cards with 48GB or $2000 Nvidia cards with 24GB.
> People will build everything needed to run intel cards efficiently as there is currently zero options for affordable video cards with high memory.
I think you're overestimating what people can and will do.
Nvidia didn't succeed because it just launched cards and let people write CUDA for them. Nvidia is where it is because it has an army of researchers and SW engineers developing the full stack, from research papers to frameworks to proofs of concept, showing customers the value of paying for their pricey HW + SW, most of it proprietary, not community developed.
"People" alone won't be able to get even 10% of the way there. And that's ignoring the fact that Nvidia HW is not FOSS, so they'd be working blind.
>Nvidia is where it is because it has an army of researchers and SW engineers developing
The current local-model/open-source-model community is literally an army of SWEs and researchers. They already make tons of tooling too.
Tooling which depends on Cuda.
Who knows. The same reply could have been written to someone opining that Kodak should pivot to digital cameras.
We can't see the future, but neither can CEOs, no matter how well paid and respected they are.
After all the current CEO is being ousted, so obviously he didn't do the right things despite being a CEO.
You are jesting, but there is some wisdom to that post. No reasonable person is suggesting a global company change direction on the basis of one post on the internet, but the advice provided is not without merit. Surely a company of that size can do some research to see if it is a viable path. In fact, if it does anything right, it should have people like that ready to do the appropriate analysis.
I have my thoughts on the matter and cautiously welcomed their move into GPUs (though admittedly on the basis that we -- consumers -- need more than the AMD/Nvidia duopoly in that space, so I am not exactly unbiased).
>but there is some wisdom to that post.
That's just speculation. There's no guarantee that would have happened. Nobody has a crystal ball to guarantee that outcome.
It's like saying that if someone had killed Hitler as a baby, it would have prevented WW2.
I think you may be either misinterpreting my post or misunderstanding the sequence of events.
What do you think has happened so far?
Your mental model of the world may help me understand the point you are trying to make.
I'm saying nobody can guarantee the claim of the GP I replied to: that if Intel had produced mediocre GPUs with 64+ GB of RAM, it would have magically helped them rise to the top of ML HW sales and saved them.
That's just speculation from people online. I don't see any wisdom in that like you do; all I see is a guessing game from people who think they know an industry when they don't (armchair experts, to put it politely).
What made Nvidia dominant was not weak GPUs with a lot of RAM. The puzzle of their success had way more pieces that made the whole package appealing over many years, and a great timing of the market also helped. Intel making underperforming GPUs with a lot of RAM would not guarantee the same outcome at a later time in the market with an already entrenched Nvidia and a completely different landscape.
Your comments that nobody knows anything for sure are generically applicable to any discussion of anything.
But since they obviously apply just as well to Intel itself, it is a poor reason to dismiss others' ideas.
—
> What made Nvidia dominant was not weak GPUs with a lot of RAM.
Intel doesn’t have the luxury of repeating NVidia’s path in GPUs. NVidia didn’t have to compete with an already existing NVidia-like incumbent.
That requires no speculation.
—
Competing with an incumbent via an underserved low end, then moving up market, is called disruption.
It is a very effective strategy since (1) underserved markets may be small but are immediately profitable, and (2) subsequent upward growth is very hard for the incumbent to defend against. The incumbent would have to lower their margins and hammer their own market value.
And it would fit with Intel’s need to grow their foundry business from the low end up too.
They should take every low-end underserved market they can find. Those are good cards to play for ambitious startups and comebacks.
And the insane demand for both GPUs and chip making is increasing the number of such markets.
<< That's just speculations from people online. I don't see any wisdom in that like you do, all I see is just a guessing game from people who think they know an industry when they don't (armchair experts to put it politely).
True, it is just speculation. 'Any' seems to be a strong qualifier, though. One of the reasons I troll the landscape of HN is that some of the thoughts and recommendations expressed here have ended up being useful in my life. One still has to apply reason and common sense, but I would not dream of saying it has no (any) wisdom.
<< What made Nvidia dominant was not weak GPUs with a lot of RAM.
I assume you mean 'not in isolation'. If so, that statement is true. AMD cards at the very least had parity with Nvidia, so it clearly wasn't just a question of RAM.
<< The puzzle of their success had way more pieces that made the whole package appealing over many years, and a great timing of the market also helped.
I will be honest. I am biased against nvidia so take the next paragraph for the hate speech that it is.
Nvidia got lucky. CUDA was a big bet that paid off first on crypto and now on AI. Now, we can argue how much of that bet was luck meeting preparation, because the bet itself was admittedly a well-educated guess.
To your point, without those two waves, Nvidia would still likely be battling AMD in incremental improvements, so the great market timing accounts for the majority of its success. I will go as far as to say that we would likely not see a rush to buy 'A100s' and 'AI accelerators', with the exception of very niche applications.
<< Intel making underperforming GPUs with a lot of RAM would not guarantee the same outcome at a later time in the market with an already entrenched Nvidia and a completely different landscape.
Underperforming may be the key word here, and it is a very broad brush. In what sense are they underperforming, and which segment are they intended for? As for RAM, it would be kinda silly in the current environment to put out a new card with 8GB; I think we can agree on that at least.
<< I'm saying nobody can guarantee the claim of the GP I've replied to,
True, but it is true for just about every aspect of life, so as statements go it is virtually meaningless as an argument. The best one can do is argue possibilities based on what we do know about the world and the models it tends to follow.
There's no doubt that the statement is true. Some use cases would absolutely benefit from GPUs with a boatload of VRAM, even if the memory is relatively slow (~500 GB/s).
The market for that is just not that large; it wouldn't move the needle on Intel's revenue. But then again, it could get the enthusiasts onboard and get Intel's CUDA alternative moving (see the rough numbers sketched below).
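As a rough back-of-envelope on that ~500 GB/s figure (illustrative assumptions only, not measurements of any real card): single-stream decode speed for a memory-bandwidth-bound LLM is roughly memory bandwidth divided by the bytes of weights streamed per generated token.

    # Rough decode-speed estimate for a memory-bandwidth-bound LLM.
    # All numbers below are illustrative assumptions, not benchmarks.
    def tokens_per_second(bandwidth_gb_s: float, weights_gb: float) -> float:
        # Each generated token streams roughly the full weight set once.
        return bandwidth_gb_s / weights_gb

    print(tokens_per_second(500, 35))  # ~14 tok/s: 70B params at 4-bit is ~35 GB
    print(tokens_per_second(500, 8))   # ~62 tok/s: a small ~8 GB quantized model

Batching, KV-cache traffic and compute limits change the picture, but it shows why "lots of slow-ish VRAM" is still perfectly usable for single-user local inference.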
Disrupting a market from the bottom always looks like a small market opportunity. Initially.
But then you move up, and the incumbents have to choose between ceding more of their lower end and lowering their margins. And it is very hard and costly for a company succeeding at the high end to do the latter.
That would have been a fantastic sign Intel was getting more nimble and strategic.
And it would have been a good fit with a comeback in the low end of fabs too.
Looking at their Sept 2024 P&L: the rando probably couldn't have done much worse.
You joke, but seriously, had they asked Redditors, they'd have known better than to put fucking Raja, of all people, in charge of their GPU team.
Many people here are very knowledgeable: they have Putnams, they are CEOs smurfing, they run 300-person teams, they made software everyone knows (e.g. we have the Diablo 2 guy), people from the hardware side, VCs...
Some are probably multi-millionaires smurfing (and I don't mean cryptobros).
Do you even have a Putnam award?
[dead]
I think the issue is that Intel has a culture and a focus. It is CPUs. It is a large company, this is where the revenue comes from, and it has momentum.
There are a lot of strategic shifts Intel could do but their timeline to paying off at the scale Intel needs is very long.
What I see is a bunch of endeavours that get killed off too quickly because they were not paying off fast enough, which creates mistrust in the community around Intel's non-core initiatives and makes them harder to succeed going forward. It is a bit of a death spiral.
Basically big companies find it hard to learn new tricks when their core offering starts to fail. The time to learn new tricks was a while ago, now it is too late.
I’d argue that focus is what Intel fundamentally lacks. Or any kind of real vision.
If they had focused more on mobile CPUs, GPUs, or especially GPGPUs a decade ago, they could have more product diversity now to hold them over.
Instead, they dipped their toes into a new market every few years and then ran away screaming when they realized how difficult it would be to gain market share.
If they had any actual vision, they could have a line of ARM CPUs now to compete with the onslaught in all major CPU markets.
They should have listened to their customers and market forces instead of trying to force x86 down everyone’s throats.
>they could have a line of ARM CPUs now to compete with the onslaught in all major CPU markets.
Disagree. Selling ARM chips with high profit margins is tough. There's too much competition from the likes of Samsung, MediaTek and, until the US ban, HiSilicon (Huawei). ARM chips are a race to the bottom in terms of price, with a market dominated by companies from Asia. There's no guarantee Intel could have had a competitive ARM design that could beat Apple's or Qualcomm's.
> There's too much competition from the likes of Samsung, MediaTek and, until the US ban, HiSilicon (Huawei). ARM chips are a race to the bottom in terms of price, with a market dominated by companies from Asia
Yes, without proprietary technology, margins are slim.
But Qualcomm has somewhat succeeded in this area anyhow. I guess they took the ARM base but innovated on top of it in order to command higher margins.
>But Qualcomm has somewhat succeeded in this area anyhow
It wasn't just 'anyhow'. Qualcomm succeeded in the mobile SoC space because they also had the best modems in the industry (the name comes from Quality Communications, after all), and the best mobile GPU IP, which they bought from ATI.
You also forgot the best lawyers in the industry.
Well they have to try something and actually invest in it. Every year, it looks more and more like x86 is a sinking ship.
Intel and AMD can dominate the x86 market all they want. But x86 has been steadily losing ground every year to cheaper and more power efficient ARM processors. It’s still a big market now, but I don’t think it’s going to be a great business in a decade or two.
ARM was just an example. If Intel spent a decade strengthening their Larrabee GPGPU platform and building AI and crypto ecosystems on it, they may have been well positioned to benefit immensely like Nvidia has over the last 5 years.
Cheap and power efficient is mostly not about the ISA. ARMv8 is designed for efficient fast CPUs and it's good at it, but you could do it with x86 if you tried. Intel doesn't try because they want to make expensive high-power server chips.
Although their latest plan is to turn it into ARMv8 anyway: https://www.intel.com/content/www/us/en/developer/articles/t...
The main problem with x86 is that it has poor security, not poor performance.
Intel did have an ARM license at one point. The margins would have never been acceptable to Intel. Annapurna Labs / P. A Semi probably sold to Amazon / Apple respectively for the same reason.
>I think the issue is that Intel has a culture and focus. It is cpus. It is a large company and this is where the revenue comes from and it has momentum.
With this logic, Apple should have also stayed with making Macs when it had financial troubles in 1999, since that's its focus, and not ventured into making stupid stuff like MP3 players and phones; everyone knows those are the core focus of Sony and Nokia, who will rule those markets forever.
> With this logic, Apple should have also stayed with making Macs when it had financial troubles in 1999, since that's its focus, and not ventured into making stupid stuff like MP3 players and phones; everyone knows those are the core focus of Sony and Nokia, who will rule those markets forever.
You’re off by a few years. Jobs axed any project not related to the Mac when he came back in 1997, so they actually did what you say they did not. The iPod project started around 1999-2000 when Apple was in a massive growth phase after the iMac and the G3 and then G4 desktops.
Also, in the alternate reality where the iPod did not massively succeed, it would very likely have been killed or remain as a hobby like the Apple TV. Apple might not be as huge as they are now, but the Mac was doing great at the time.
Yes, Apple succeeded. But the number of companies that attempt such branching out and succeed are few compared to how many try.
The CEO of company X may think that he's as talented as Steve Jobs, but is he really?
One way you can look at it is that Apple succeeded at those by taking things that weren't their product (radios, flashlights, calculators, etc) and turning them into their product (computers).
Apple under Steve Jobs is exceptional and not the rule.
Pointing to the fact that Einstein or Muhammad Ali or any other genius in their field could do something, or did something, is not a counterpoint.
>The only stupid thing they did was not getting into GPUs earlier
Intel has been "getting into" GPUs for two decades now going back to Larrabee. They are just not good at it.
What is holding Intel back, there?
Engineering chops?
AMD and nVidia already patented too much of the good stuff?
Too much existing code optimized for AMD and nVidia quirks?
Extreme outsider perspective but they seemed like dilettantes. They'd dip their toe into doing GPUs and then cancel the project every couple years.
A few weeks ago Gelsinger even commented he saw "less need for discrete graphics in the market going forward" - just seemed like a very Intel thing to say
As a fellow outsider, that makes a lot of sense to me. Even if Intel started outperforming nVidia today... who would want to put serious effort into making their stuff work with Intel hardware in a market they're likely to pull out of at any moment?
The early stuff (Larrabee, KNL, KNC, etc.) was hamstrung by x86. Wrong architecture for the types of things GPUs are good at. Their iGPUs have generally been good, but that's not competing in the compute segment. Then they acquired a few startups (Nervana, Habana) that didn't really work out either. And now they finally have a discrete GPU lineup that is making slow progress. We'll see.
Yeah that was so stupid of them to not see into the future and predict AI.
You don't need the AI boom. Gaming GPUs were already a decent money maker, then add the GPGPU boom.
Yes, gaming GPUs are a decent money maker - but Intel GPUs can currently only compete in the midrange segment, where there is a lot less money to be made. And to change that, they need to invest a lot more money (with an uncertain outcome). And for AI it's basically the same story - with the added difficulty of Nvidia's CUDA moat, which even AMD is having trouble with.
I'm not sure you needed to predict AI.
GPUs exist because CPUs just aren't fast enough. Whether or not people are going to start making GPU-only computers is debatable (although there has clearly been a lot of CPU+GPU single chips).
What a frankly weird oversimplification. GPUs don't exist because CPUs aren't fast enough; GPUs exist because CPUs aren't parallel enough. And to achieve that parallelism, they sacrifice massive amounts of single-thread performance.
A GPU-only computer would be absolutely horrendous to use. It'd be incredibly slow and unresponsive as GPUs just absolutely suck at running single-threaded code or branchy code.
I mean, you can split hairs about the difference between CPUs and GPUs all you want.
The overall point is that work is being increasingly done _not_ on the CPU. If your business is CPUs-only then you're going to have rough times as the world moves away from a CPU-centric world. You don't need to predict AI; you just need to see that alternatives are being looked at by competitors and you'll lose an edge if you don't also look at them.
It's not going to matter much if you have a crappy CPU if most of the work is done on the GPU. It's like how iPhones don't advertise surround sound; phones aren't about calling people anymore, so there's no reason to advertise legacy features.
> The overall point is that work is being increasingly done _not_ on the CPU.
Eh? GPGPU has been a thing for decades and yet barely made a dent in the demand for CPUs. Heck, CUDA is 17 years old!
The world has not budged from being CPU-centric and it isn't showing any signs of doing so. GPUs remain an accelerator for specialized workloads and are going to continue to be just that.
Hmm, this isn't power-efficient thinking. iPhones have lots of accelerators beyond GPUs - not because they're more performant but because they're better at performance/power.
Of course they also add GPU-like things to the CPU too for the same reason: https://developer.arm.com/documentation/109246/0100/SME-Over...
That's orthogonal. Fixed-function hardware has a power efficiency advantage over programmable hardware. This isn't new or surprising, but it's also unrelated to CPU vs. GPU. Both CPU & GPU are programmable, so that's not relevant.
Are those GPUs really stupid? They seem like great price-for-performance devices when ultra-high-end gaming is not the priority.
EDIT: I personally always liked intel iGPUs because they were always zero bullshit on Linux minus some screen tearing issues and mumbo-jumbo fixes required in X11.
They can encode AV1 even. Really amazing, amazing chips for the price.
The "stupid" thing with them (maybe) is that they cannot do anything exceptionally well, all while having compatibility problems. They are cheap, yes, but there are many other chips at the same price that, while less capable, are more compatible.
Make the A380 cost 30% less, or invest way more in the drivers, and IMHO it would have been a completely different story.
Another nice thing—it looked like Intel was lagging AMD in PCIe lane counts, until somewhat recently. I suspect selling GPUs has put them in the headspace of thinking of PCIe lanes as a real figure of merit.
AMD's AM5 platform is also not great at having enough PCIe lanes: at most one GPU gets a full PCIe 5.0 x16 connection; if you need more, it's x8 for two GPUs and so on, and that's before we connect fast M.2 storage, fast USB4 ports, etc. If you need more PCIe lanes, you have to buy a Threadripper or Epyc, and that's easily 10 times the price for the whole system.
PCIe lanes and DDR channels take up the most pins on a CPU connector (ignoring power). The common solution (for desktops) is to have a newest generation protocol (5) at the CPU level, then use the chipset to fan out more lanes at a lower generation (4).
I understand the tradeoff, but it left a segment of the market between pure consumer solutions and pure productivity/server solutions in no man's land.
Yeah. Threadripper/Epyc is what I'm thinking of. It isn't obvious (to me at least) whether it was just a coincidence of the chiplet strategy or not; if so, it is an odd coincidence. The company that makes both CPUs and GPUs has ended up with data center CPUs that are a great match for the era where we really want data center CPUs that can host a ton of GPUs, haha.
I am basically biased towards discrete GPU = asking for trouble in Linux.
Driver stability, less heat and fan noise, battery life is almost assured in the Intel iGPU.
Nah. AMD discrete GPUs are fantastic on Linux these days. You don't need to install a proprietary driver! They just work. It's really nice not having to think about the GPU's drivers or configuration at all.
The only area where AMD's discrete GPUs are lagging behind is AI stuff. You get a lot more performance with Nvidia GPUs for the same price. For gaming, though, AMD is the clear winner in the price/performance space.
Of course, Nvidia is still a bit of a pain but it's of their own making. You still need to install their proprietary driver (which IMHO isn't that big a deal) but the real issue is that if you upgrade your driver from say, 550 to 555 you have to rebuild all your CUDA stuff. In theory you shouldn't have to do that but in reality I had to blow away my venv and reinstall everything in order to get torch working again.
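For what it's worth, a minimal post-upgrade sanity check (a sketch assuming a CUDA build of PyTorch is installed in the venv; nothing here is specific to any particular driver version):

    # Check whether the installed torch wheel still pairs with the current driver.
    import torch

    print(torch.__version__)          # a "+cuXXX" suffix shows the CUDA runtime it was built for
    print(torch.version.cuda)         # CUDA toolkit version bundled with the wheel
    print(torch.cuda.is_available())  # False right after a driver change usually means a mismatch
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))

If is_available() comes back False after a driver bump, reinstalling the matching torch wheel (or, as described above, blowing away the venv) is usually what fixes it.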
Nvidia's GPUs work well on Linux. A friend and I use them and they are fairly problem free. In the past, when I did have some issues (mainly involving freesync), I contacted Nvidia and they fixed them. More specifically, I found that they needed to add sddm to their exclusion list, told them and they added it to the list after a few driver releases. They have also fixed documentation on request too.
On the question of integrated versus discrete GPUs, what are the practical differences?
I am trying to learn this but having difficulty finding good explanations. I know the Wikipedia-level overview, but need more details.
The GPUs seemed smart. Too late, and timid, but smart. What continually blew my mind was Intel's chiplet strategy. While AMD was making scads of a single chiplet and binning the best for Epyc and recovering cost at the low end with Ryzen, Intel designed and fabricated a dizzying number of single-purpose chiplets. In some cases, just the mirror image of another otherwise identical chiplet. The mind boggles. What phenomenal inattention to opportunity.
What you describe is mad inattention to costs, not only to opportunities: maybe a symptom of widespread misaligned incentives (e.g. delivering a design with mirrored chiplets quickly could be more "useful" for an engineer than saving a few millions for the company by taking one more week to design a more complex assembly of identical chiplets) and toxic priorities (e.g. theoretical quality over market value and profits, risk aversion, risk/cost externalization towards departments you want to be axed instead of your own).
Couldn't agree more.
To me, AMD's approach demonstrates planning and buy-in across multiple teams (CPU die, Ryzen IO die, Epyc IO die, etc), and that suggests a high degree of management involvement in these engineering decisions.
Intel's activities seem comparatively chaotic. Which I agree smells of disincentives and disinterested middle management hyperfixated on KPIs.
Same with the general preparedness for the AGI Cambrian explosion. Given their position, they should have been able to keep pace with Nvidia beyond the data center, and they fumbled it.
Their purported VRAM offerings for Battlemage are also lower than hoped for, which is going to be a turnoff to many buying for local inference.
I think we have too many decision-makers gunshy from crypto mining that haven't yet realized that compute isn't just a phase.
GPUs are elemental for a shift to APUs - the future, as Apple has shown, for performance and energy efficiency. Strix Halo will be a game changer; without a GPU, Intel has no future on the laptop (and later the desktop).
One of the things I was wondering about a few years ago is whether Intel would attempt to bid on the Sony/MS console contracts, which AMD has had tied up for a long time now. Those would be dependable income, along with reduced software compatibility concerns compared to the breadth and history of Windows games. I don't think they got to the point of having a big iGPU integrated to the extent that AMD has had for years, though.
Apparently AMD has at least the Sony PS6 contract now.
> Not do stupid GPUs.
"Gelsinger said the market will have less demand for dedicated graphics cards in the future."
(from: https://www.techspot.com/news/105407-intel-not-repeat-lunar-...)
Gelsinger apparently agreed with you. However, the market very clearly has enormous demand for discrete GPUs. Specifically for AI workloads (not PC gaming).
If I was on Intel's board I would've fired him for this statement. The demand for GPUs (parallel matrix processors with super fast local memory) is NOT going to go down. Certainly not in the next five to ten years!
I know Intel makes a lot of things, has a lot of IP, and is involved in many different markets, but the fact that they're NOT a major player in the GPU/AI space is their biggest failure (in recent times). It's one of those things that should've been painfully obvious at some point in 2022, and here we have Gelsinger saying just a few months ago that demand for AI stuff is somehow just going to disappear.
It's magic hand waving like this that got him fired.
There's someone else who makes computers with a lot of memory that are good at AI inference but don't have discrete graphics cards.
…Although it looks like Intel tried to go that route without understanding why and got cold feet.
The AI revolution runs on GPUs. Intel needs every last bit of GPU experience and prowess it can get, otherwise it will continue to be left behind.
I disagree. They need a dedicated GPU, and the iGPU they had needs to be improved, or they will absolutely fail. The current path forward is an APU, or more of a system-on-a-chip with CPU, GPU, and, I guess, NPU.
Their dGPU is also pretty promising; I have it on my list to get - even if not for gaming, it's possibly the best media encoding/decoding card for the money today. The only thing holding it back for entry-level or mid-level gaming is the drivers - for some games it won't matter, it seems, but others have some growing pains, though they seem to be diligently working on them with every driver release.
Intel without GPUs will be an ARM competitor. Maybe. With some luck. GPUs are existential for them. They’ve got 2 years.
The GPUs aren't a stupid idea. Right now Nvidia basically controls the market and has totally abandoned the lower/mid range end.
Intel has made vast improvements even within their first generation of dedicated desktop cards. They will likely never compete with cards like a 4080/4090, but they may be great options for people on a budget. Helps put pressure on AMD to be better as well.
>They are gonna get bought for pennies.
No they aren't - much like Boeing, at this point they are considered a national security asset.
Does Nvidia use Intel foundry? No.
Does Apple use Intel foundry? No.
The two largest fabless companies in the world never used, and have no plans to use Intel foundries.
"Intel foundry" as a service is a fiction, and will remain so. Intel can't get others to use their weird custom software tooling.
> The two largest fabless companies in the world never used, and have no plans to use Intel foundries.
Such plans would be something like 4-5 years ahead, so you wouldn't have heard of it yet unless they decided to talk about it. Chip development takes a long time.
Of course, that means you have to expect the foundry to still be there in 5 years.
Amazon AWS said they'd use the hypothetical Intel foundry to fab their chips. https://press.aboutamazon.com/aws/2024/9/intel-and-aws-expan...
The AWS deal is not a "Foundry win" in the true sense. It is still a chip designed and built by Intel for AWS: custom Xeon chip and custom Intel Clearwater Forest AI chip.
Unlike true foundries which manufacture chips designed by customers.
Textbook CEO.
He came, he did all these things, he left (likely with golden parachutes): https://kevinkruse.com/the-ceo-and-the-three-envelopes/
Full disclosure, the day Bob Swan announced his exit was the day I purchased Intel stock.
Pat had the mandate from both the board and the market to do whatever he deemed necessary to bring Intel back to the forefront of semiconductor manufacturing and a leading edge node. Frankly, I don't think he used that mandate the way he should have. Hindsight is 20/20 and all, but he probably could have used that mandate to eliminate a lot of the middle management layer in the company and refocus on pure engineering. From the outside it seems like there's something rotten in that layer as the ship hasn't been particularly responsive to his steering, even with the knowledge of the roughly 4-5 year lead time that a company like Intel has to deal with. Having been there for so long though, a lot of his friends were probably in that layer and I can empathize with him being a bit over confident and believing he could turn it around while letting everyone keep their jobs.
The market reaction really does highlight how what's good for Intel long term as a business is not necessarily what the market views as good.
Folks in this thread are talking about a merger with AMD or splitting the foundry/design business. I doubt AMD wants this. They're relatively lean and efficient at this point and turning Intel around is a huge project that doesn't seem worth the effort when they're firing on all cylinders. Splitting the business is probably great for M&A bankers, but it's hard to see how that would actually help the US keep a leading semi-conductor manufacturer on shore. That business would likely end up just going the same way as GlobalFoundries and we all know that didn't really work out.
The most bizarre thing to me is that they've appointed co-CEOs who are both basically CFOs. That doesn't smell of success to me.
I think one of the more interesting directions Intel could go is if Nvidia leveraged their currently enormous stock value and moved in for an acquisition of the manufacturing division. (A quick glance shows a market cap of 3.4 trillion. I knew it was high, but still, wow.) Nvidia has the stock price and cash to support the purchase and, rather uniquely, has the motive, given the GPU shortage, to have their own manufacturing arm. Plus, they've already been rumored to be making plays in the general compute space, but in ARM and RISC-V, not x86. Personally, Jensen is one of the few CEOs I can imagine having the right tempo and tenor for taming that beast.
I'm curious what others think of the Nvidia acquisition idea.
> The most bizarre thing to me is that they've appointed co-CEOs who are both basically CFOs.
It makes sense once you understand the board has finally accepted the obvious reality that the only option remaining is to sell/spin-off/merge large parts of the business. Of course, the foundry business must remain in one piece and go to an American company or the US govt won't approve it.
Gelsinger 'got resigned' so suddenly because he wouldn't agree to preside over the process of splitting up the company. These co-CEOs are just caretakers to manage the M&A process, so they don't need to be dynamic turnaround leaders or even Wall Street investable.
Intel stock went up on the news not because the street expects a turnaround but because they think the pieces may be worth more separately.
Feels like the wrong move. Turning a chip company around has timescales of a decade. Getting rid of a CEO 3 years in simply means no turnaround is going to happen.
So the new interim co-CEOs are the CFO and a sales/marketing woman nobody has heard of? Intel is beyond saving.
Register article: https://www.theregister.com/2024/12/02/intel_gelsinger_leave...
"At least he'll have company as he joins 15K colleagues headed for the door"
Heyoo!
Pat was at the helm for just a few years and has already been sent to the guillotine.
Maybe it isn’t wise that the USA dump billions into private companies that can’t even get their own house in order.
I wouldn't be surprised if the next CEO installed by the board is hell-bent on prepping INTC for sale. A few of the board of directors of INTC are from private investment firms.
The Intel story is very simple, folks: they spent all of their cash on stock buybacks instead of upgrading equipment.
You always hope, as a technical person, to see an engineer running a company. This move, in these circumstances, does little to inspire confidence in engineers occupying C-suite positions. I had high hopes for Pat, given his previous track record. But it appears the damage to Intel had already been done.
Gelsinger got screwed by the CHIPS Act.
The promise of state backing for a US champion semiconductor fab was taken seriously by Gelsinger, who went all-in trying to remake Intel as TSMC 2.0. But the money turned out to be way too late and far more conditional than Gelsinger thought. This is bad news for Intel and bad news for US industrial policy.
It was a big mistake to rely on Government handouts to save the company then. You never want to rely on something you have almost no control over.
Yeah, true. Also the govt is probably not used to state intervention in this way - it's learning a lot of new lessons on how to do it, on the fly.
Echoing the sentiment this is a huge mistake.
Something I was thinking about the other day: all the mega-successful chip companies at the moment - TSMC, Nvidia, Samsung - are led by founders or family of founders. It got me wondering that if you really want to innovate in the most innovative industry in the world, you need more skin in the game than stock options. Gelsinger was the closest thing Intel had to a founder-leader, someone who deeply cared about the company and its legacy beyond what they paid him, and was willing to make tough decisions rather than maintain the status quo until his options vested.
I wonder if this is the nail in the coffin.
Quick, put the finance and the sales and marketing people back in charge, because we all know how well that worked in the past.
Intel should either go the IDM route or split the company in two. Intel can't afford to fuck around and find out.
AMD is gaining ground on x86 without the burden of fabs, and Apple has demonstrated that desktop ARM is viable. Nvidia is showing the world that GPUs can handle parallel computing better.
Well, their stock is already in the dumps; they can't do much worse than they already have.
Divesting from their discrete GPUs just as they were starting to become a viable value option was one of their most recent asinine decisions. No idea why they didn't test the market with a high-RAM 64GB+ card before bowing out, to see how well they'd do. Nvidia's too busy printing money to add more RAM to their consumer GPUs; they'd have had the cheap GPU VRAM market all to themselves.
It still has a market cap of over $100B. Trust me, things can absolutely get worse from here. The default state of big companies that have been deeply mismanaged for 10+ years is that they go bust and end up getting bought out for a pittance. If the fabs can't be made to work in a reasonable timeframe, while they are still competitive in the marketplace, then they turn out to be giant write-offs and malinvestment, only good for shielding future income from taxes.
Exactly, I was discussing this with a friend the other day. I'm sure there must be a market for high RAM GPUs, even if they're not as fast as NVIDIA GPUs.
Definitely is. That's what the various LLMs are running inference on.
They aren't divesting from discrete GPUs. The Battlemage launch is tomorrow.
With the B770 16GB gone and the plan being for the B580 to come in cheaper than the current 7600 XT 16GB by cutting 4GB, Battlemage is DOA. A gamer making an investment in a card for ~3 years cares less about spending ~$30 more than about being unable to run high-resolution texture packs on a gimped GPU. The XMX cores are superior for AI for a month, until Blackwell arrives with its smaller low-precision (FP4) units, but a month is too little lead time to overcome the CUDA software inertia. The next bean-counter CEO gets the gift of terrible sales numbers and the excuse to drop prices to move them out before the RTX 5000 and RX 8000 competition hits.
Maybe you are right and Battlemage is DOA. Perhaps intel know that and just want to dump inventory before announcing they will get out of the GPU business.
On the other hand, maybe not.
My point is that although you might think they are going to divest the GPU business in future, we don't know that for sure and it's kind of weird to present it as having already happened.
Intel's Xe2 GPU architecture works well as an efficient iGPU and has market fit there. The problem is taking it the other way, to the discrete high end and the data center, rather than the opposite direction, as Nvidia has done with GPUs and AMD with CPUs. Using the low-power consumer target to experiment on architecture and new nodes that then scale to the high end has been Intel's strategy ("efficiency" cores), and it is logical from a foundry perspective for monolithic chips. But Intel has failed to execute in the new AI data center investment reality. Where are the GPUs from Gaudi 3? Pat: "Putting AI into all the chips, not just ones in the cloud, might be more important in the long run." Maybe, but failing to win where the money is now is a huge barrier to surviving in that long run.
Gelsinger doesn't think there will be any demand at all for GPUs in the near future!
> "Gelsinger said the market will have less demand for dedicated graphics cards in the future."
From: https://www.techspot.com/news/105407-intel-not-repeat-lunar-...
He may have been talking about something like "all GPU stuff will just be integrated into the CPU," but I highly doubt that's realistic. Nvidia's latest chip die size is enormous! Their new 5090 die is reportedly 744 mm²: https://www.tomshardware.com/pc-components/gpus/the-rtx-5090...
There's no way you're going to be getting an equivalent level of performance from a CPU with integrated GPU when the GPU part of the die needs to be that big.
He said less demand. He didn't say there would be no demand. Importantly he didn't say intel was divesting discrete GPUs. Perhaps even more importantly they demonstrably aren't divesting.
They wanted to preserve his dignity, so he was retired instead of being fired.
His dignity is fully intact imo. A great captain can't always right a ship that's already sinking.
These large troubled incumbents seem to have infinite lives, lingering on toward a slow death and destroying value along the way. Like Boeing: why does Intel hang around taking up space, opportunities, and resources, one failed attempt after another, instead of making space for newer ideas and organizations? At this point it is so clear these public companies are just milking the moat their predecessors built around them and the goodwill (or is it naive will?) of their new investors, who continue to pour in money buying out the ones jumping ship.
Reminds me of Jobs quoting Gil Amelio at D5[1]:
> Gil was a nice guy, but he had a saying. He said, 'Apple is like a ship with a hole in the bottom, leaking water, and my job is to get the ship pointed in the right direction.'
[1]: https://youtu.be/wvhW8cp15tk?t=1263
Plenty of people here talk about the mistakes of Intel CEOs. But do they really have that much influence over the success of new production lines? Or is this maybe caused by some group of middle management that covered each other's asses for the last 10 years? How high is the probability that the production problems with new nodes are mostly bad luck? I haven't seen anything trying to analyse and quantify those questions.
Break their fabs up; make one of them focus on GPUs, another on NPUs, others on FPGAs and robotics. Some will die, some will yield new "Intels."
TSMC handles AMD chips and NVidia chips. Why do you think Intel needs a special fab _for_ GPUs? It's the same lithography process.
It's not about it needing special fabs, but about breaking it up into market segments.
Time to split off Intel's foundry operations?
They've already been headed that way for a long time; the problem is that they need huge amounts of capital to complete their foundry road map, which is meant to be cross-subsidized by the other part of their business. But now it's looking shaky that they'll even have the capital to execute the plan - setting aside whether it would even work if they could.
I remembered Andy Grove's quote: "If we got kicked out and the board brought in a new CEO, what do you think he would do?"
Pat didn't do that, I guess.
Not really.
https://www.reuters.com/technology/intels-786-billion-subsid...
Intel can’t split due to the CHIPS Act.
Pretty much missing out on AI completely was and is a problem for Intel.
[flagged]
Stock jumps 5%+ initially.
Presumably in the hopes of a more "shareholder friendly" CEO being appointed.
"shareholder friendly" and "good for the company" are not at all the same things.
Cover the news event.
Vultures
Stock moving up slightly? In the context of the last week even it seems like Wall Street mostly doesn't care either way.
At this stage in Intel's life I think having a financier overseeing the company might actually be a good move. Engineering leadership is of course critical across the business, but the company has historically paid large dividends and is now moving into a stage where it's removing those payouts, taking on debt, and managing large long-term investments across multiple parts of the business.
Someone needs to be able to manage those investments and communicate that to Wall Street/Investors in a way that makes sense and doesn't cause any surprises.
Pat's error isn't that Intel revenues are slowing or that things are struggling; it's that he continued to pay dividends and pretend it wasn't a huge issue... until it was.
I've heard rumors he was viewed as a lightweight, or at least not as serious an engineer as Morris Chang was in his 60s, but even so his tenure was surprisingly short…
Eh. Pat hasn't seemed to be doing anything special. He presided over a bunch of hiring and firing (pointlessly), plowed money into fabs like anyone else would, often misled during presentations, and hasn't managed Intel's cash very well...
I have to agree with a few others, don't really see why people looked up to him. Seems like a pretty mediocre CEO overall.
I also, personally, do not actually see the problem with a financial CEO. Ultimately I don't believe technical acumen is that important to the decisions a CEO makes - they should be relying on subordinates to correctly inform them of the direction and prospects of the company and act accordingly.
I am concerned about Intel's long term prospects - obviously he wouldn't be leaving if everything was looking up, IMO.
Someone should get fired for discontinuing ifort...
Only CEOs get to press release their firings as something else: retirement, wanting to spend more time with family, voluntarily leaving etc.
He was sacked. The board fired him. Tell it like it is.
What does this mean for Intel? I think they’re too important to get bought or broken up because they have fabs in the US. They’re like Boeing in that sense. Or a big bank. They’re too big (strategically important) for the US to let them fail. But maybe they’ll get bought by another US company or investor. I dunno.
Is it as simple as Intel purchasing the equipment from ASML to make their chips?
No, it isn't - they already buy their equipment from ASML. None of the fabs (Intel, TSMC, Samsung, etc.) build their own equipment. The secret sauce is in designing the processes that integrate these machines into a cohesive, months-long manufacturing flow, building out the IP (i.e., what types of transistors and low-level features they can build), and so on. Companies like ASML and AMAT make machines with guarantees like "we can draw lines this small and deposit layers this thin," and it is up to Intel or any other fab to figure out the rest.
What timescale are we looking at to decide if building foundries and manufacturing chips in the States is a good idea? There's an argument that there aren't nearly enough skilled workers to do high tech manufacturing here. Didn't TSMC also struggle to hire enough skilled workers in Arizona?
It's a good idea because otherwise China will bomb your foundry or steal your designs. Besides, you don't ever get skilled workers if you don't try to develop them.
> Didn't TSMC also struggle to hire enough skilled workers in Arizona?
That was mostly marketing so they could ask for more tax credits. In practice it seems to be going okay.
Maybe this is part of their Tick-Tock CEO strategy?
The US should sanction TSMC or put 100% tariffs on it.
I see a lot more speculation on this thread than data.
Seems like the board is going for a breakup strategy.
Joined Feb 2021 and out within 4 years.
He's been with Intel since 1979 (only interrupted by a stint as CEO of EMC/VMware).
That stint at EMC/VMWare started in 2009...
Has Intel ever made a production chip using their own EUV process? They keep moving the goal post so they'll be state of the art when they get there - currently aiming for "18A" with high NA using the absolute latest equipment from ASML. But I don't think they've demonstrated mastery of EUV at any node yet.
18A isn't going to use high NA yet, it's just EUV. Intel is hoping to start using high-NA with their 14A process in 2026. Obviously with their current state that 2026 deadline might get pushed further.
[1]: https://wccftech.com/intel-adds-14a-process-node-to-its-road...
Yes, Intel 4 is in HVM and used for Meteor Lake: https://en.wikipedia.org/wiki/Meteor_Lake#Process_technology
I don't know why everybody is talking nicely of him and being boohoo.
The guy thumped religious stuff on his Twitter, didn't acknowledge the competition, didn't reform the company culture, and was left in the dust.
Intel is better off without him.
I do think it is time for me to throw my hat in to the Silicon Valley ring. I could be an amazing Intel CEO. Give me a call.
Anyone got the inside scoop?
Is it just me, or do the latest chips from Intel look relatively better? I mean the Core Ultra 1 and 2.
It's because the 'core' compute part of the chip is actually made by TSMC. The surrounding tiles are made in Intel's own fabs.
Better than what they previously had: Yes, with a but. Better than the competition: Overall no.
More power efficient, yes. "Better" no. They're objectively slower than the 14th gen, which is pretty pathetic for a new CPU release.
> Intel Corporation (NASDAQ: INTC) today announced that CEO Pat Gelsinger retired from the company after a distinguished 40-plus-year career and has stepped down from the board of directors, effective Dec. 1, 2024
Is it a "he retired" or a "we retired him"?
If it were a planned, age-related retirement, it would've been announced half a year to a year ahead, no?
Maybe. Sometimes it would be, but sometimes such things are not announced for various reasons.
Rhetorical question.
Brian’s affair with an underling was also surprisingly conveniently timed back then.
With Brian it was damn too late if you ask me.
Why does it matter? Let him have his dignity either way.
Sounds like the latter to be honest
>Is it a "he retired" or a "we retired him"?
Does it matter?
The latter implies the government money comes with strings attached and that the forces eager to see a turnaround will be active participants. It's good to see.
[dead]
[flagged]
[flagged]
I would be surprised if he made any official decisions based on his faith.
It’s odd to me someone would come in here and blame his Christianity over every other problem Intel has had before and during his tenure.
Edit: Ah, not their first time. https://news.ycombinator.com/item?id=40704262
[flagged]
[flagged]
The market already figured out that it doesn't want leading edge manufacturing in the US.
> The market already figured out that it doesn't want leading edge manufacturing in the US.
Exactly. The market figured out there's a lot of short-term profit to be made for shareholders in selling off the nation's industry, and moving it down the value chain. They're running the country like a legacy business they're winding down.
Give Wall Street a few more decades, and the US will have an agriculture and resource extraction economy.
National security (and other similar externalities) are not priced into stocks, which is why every trader will happily sell his own nation's defence in exchange for profit.
Politicians and the military disagree though. They can place plenty of pressure on the market to change the market's mind if they care to.
They'll need to dump a lot more money than $8B into Intel to compete with China on all fronts of manufacturing, and focus on it for a decade. Those time horizons are politically impossible, since the next elections are less than 4 years away. That being said, competition is good.
There are a number of things they are doing other than direct money: Buy American acts, sanctions (and other taxes/deductions). Even the threat of doing something, without actually doing it, is a powerful tool.
I have a sizeable bet that government spending will be higher after the term is up. I am not at all worried about losing the bet.
The market does whatever makes the market the most money. That is frequently not the best thing for you or your country.
That is still better than seeing my (our) tax dollars thrown down the figurative drain.
This is a very short-sighted point of view. $8B is about $50 per taxpayer; keeping the US semiconductor industry alive is a better investment - and more important - than whatever you'd spend it on.
I would wholeheartedly agree with you if my tax dollars were going to keeping the US semiconductor industry alive.
Instead, we fed however many billions of CHIPS and other grant tax dollars to Intel, and all we got were five-digit layoffs and the ouster of the one CEO who ostensibly had the right idea.
Naw, fuck that. I want my tax dollars spent on anything else other than that or more wars.
Politicians love giving mega industries blank checks.