The Evolution of the Semiconductor Service Model

Once upon a time, servicing the machines used to make microchips was like servicing a car: if a machine stopped working, you called a technician and he fixed it. We call this break/fix model Service 1.0, and it worked fine when chips were (relatively) simple in design and thus straightforward to make, requiring just a few dozen steps.

As chips became more sophisticated – with millions of transistors and multiple layers of wiring – a new service model emerged to cope with the extra manufacturing complexity. Instead of just repairing the equipment, Service 2.0 aimed to make it work better, with higher output and lower cost of ownership.

Today, chips are almost unimaginably more complex than their forebears, with literally billions of transistors featuring exotic materials and 3D architectures. The wires inside a chip are now just a hundred or so atoms across. The manufacturing sequence can be over a thousand steps long! For a device to work as intended, all of those steps must be perfectly executed. It’s like walking a tightrope: one false step and there’s no coming back.

The business of chipmaking is, ultimately, all about yield, which boils down to the following equation:

Yield = Good Chips / Total Chips Produced

It costs essentially the same to make a bad chip as a good one, so yield is a pretty direct lever affecting a factory’s bottom line. In this era of multi-billion dollar megafabs that turn out billions of chips a year, even extremely small changes can have a big financial impact.
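
To put rough numbers on that leverage, here is a minimal sketch in Python; the fab output, selling price, and yield figures below are illustrative assumptions, not data from the article:

```python
# Hypothetical illustration of yield as a financial lever.
# All numbers here are assumptions, not figures from the article.

chips_per_year = 1_000_000_000   # a megafab turning out "billions of chips a year"
revenue_per_good_chip = 10.00    # assumed average selling price, USD

def annual_revenue(yield_fraction):
    """Revenue comes only from good chips; bad chips cost the same to make."""
    return chips_per_year * yield_fraction * revenue_per_good_chip

# Even a 0.1-percentage-point yield improvement moves real money:
delta = annual_revenue(0.901) - annual_revenue(0.900)
print(f"${delta:,.0f} per year")  # prints $10,000,000 per year
```

Because manufacturing cost is roughly fixed per wafer start, nearly all of that revenue difference falls straight to the bottom line.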



When a new chip design enters production, the initial yield is often fairly poor, but it rises rapidly as the bugs are worked out of the manufacturing process. We call this “yield learning,” and eventually the yield tends to plateau.


If you’re going to be making a given chip for a while, it’s worth devoting considerable resources to achieve incremental gains in yield.

However, chip manufacturers often don’t have the luxury of making a given chip for an extended period. Product lifecycles are now so short that any given design may only be in production for a few months. You may not reach the yield plateau before the chip is obsolete. The ability to rapidly ramp yield learning is suddenly mission-critical. The race is on to maximize the slope of the curve. In quasi-calculus terms, we might say:

Maximize d(Yield)/dt

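As a sketch of why the slope matters so much when lifecycles are short, consider a toy yield-learning model; the exponential ramp, the 95% plateau, and the time constants are all illustrative assumptions, not data from the article:

```python
import math

# Toy model: yield ramps exponentially toward a plateau,
# Y(t) = y_max * (1 - exp(-t / tau)). This is an assumed curve shape.

def yield_at(t_months, y_max=0.95, tau_months=6.0):
    return y_max * (1.0 - math.exp(-t_months / tau_months))

def avg_yield(product_life_months, tau_months):
    """Average yield over the product's life (simple numeric average)."""
    steps = 1000
    total = sum(yield_at(i * product_life_months / steps, tau_months=tau_months)
                for i in range(1, steps + 1))
    return total / steps

# With only six months in production, halving the learning time constant
# (i.e., steepening the curve) lifts the lifetime average yield dramatically.
print(round(avg_yield(6, tau_months=6.0), 3))
print(round(avg_yield(6, tau_months=3.0), 3))
```

In this model the second fab never reaches a higher plateau than the first; it simply learns faster, and over a short product life that alone dominates the average yield.
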
No manufacturing process on earth is more closely monitored. Every day a fab runs millions of unit processes through hundreds of machines, each of which streams data from hundreds of sensors. Add in defect and electrical inspection data, and this is clearly Big Data.

It takes some serious expertise to extract meaning from the mass quantities of data available, to find the key links between individual process parameters and chip performance. Enter Service 3.0, which monitors and evaluates everything in the fab to identify, and even predict, subtle manufacturing shifts that could lead to poor chip performance.
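
A hypothetical miniature of that kind of monitoring, assuming a simple rolling z-score on a single sensor trace (real fab analytics are far more sophisticated, and the data below is invented for illustration):

```python
import statistics

# Toy sketch of drift detection: flag a subtle shift in one process sensor
# before it can hurt yield. Method (rolling z-score), thresholds, and the
# synthetic temperature trace are all illustrative assumptions.

def rolling_zscores(readings, window=20):
    """Z-score of each reading against the trailing window's mean/stddev."""
    scores = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        scores.append((readings[i] - mean) / stdev if stdev else 0.0)
    return scores

# A chamber-temperature trace that is stable, then drifts upward at the end:
trace = [350.0 + 0.01 * (i % 5) for i in range(40)] + \
        [350.0 + 0.01 * (i % 5) + 0.1 * i for i in range(10)]
alerts = [z for z in rolling_zscores(trace) if abs(z) > 3.0]
print(f"{len(alerts)} out-of-range readings flagged")
```

The point of the sketch: the drift is far too small to trip a fixed alarm limit, but against the sensor's own recent history it stands out immediately, which is the kind of subtle shift Service 3.0 is built to catch.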

Recently, Dan Hutcheson, CEO of VLSIresearch, explored the evolution of the service model with Charlie Pappis, group vice president and general manager of Applied Global Services, in a video interview titled “Service 3.0 - the New Era of the Partnering Model”.

To anyone interested in how the semiconductor industry manages to keep up with the most complex factories on the planet while developing new, value-based service offerings, it’s well worth watching.
