Completed chips coming in from the foundry are subject to a battery of tests. For those destined for critical systems in cars, those tests are especially intensive and can add 5 to 10 percent to the cost of a chip. But do you really need to run every single test?
Engineers at NXP have developed a machine-learning algorithm that learns the patterns of test results and figures out the subset of tests that are really needed and those they could safely do without. The NXP engineers described the approach at the IEEE International Test Conference in San Diego last week.
NXP makes a wide variety of chips with complex circuitry and advanced chip-making technology, including inverters for EV motors, audio chips for consumer electronics, and key-fob transponders to secure your car. These chips are tested with different signals at different voltages and at different temperatures in a test process called continue-on-fail. In that process, chips are tested in groups and are all subjected to the complete battery, even if some parts fail some of the tests along the way.
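The continue-on-fail process can be illustrated with a minimal sketch. The part IDs, test names, and pass/fail predicates below are fabricated; the point is only that every part runs the full battery and every result is logged, rather than a part being dropped at its first failure. The complete per-part record of failing tests is what makes the pattern analysis described later possible.

```python
# Minimal sketch (assumed details) of continue-on-fail testing:
# run every test on every part and log all failures.
from typing import Callable

def continue_on_fail(parts: list[str],
                     tests: dict[str, Callable[[str], bool]]) -> dict[str, set[str]]:
    """Run the full battery on every part; return each part's failed tests."""
    failures: dict[str, set[str]] = {}
    for part in parts:
        # Keep going even after a failure: collect every failing test.
        failures[part] = {name for name, run in tests.items() if not run(part)}
    return failures

# Toy example: a "test" here is just a predicate on a fabricated part ID.
tests = {
    "T_volt_low": lambda p: not p.endswith("3"),  # fails for part-003
    "T_temp_hot": lambda p: True,                 # always passes
}
print(continue_on_fail(["part-001", "part-003"], tests))
```

In a stop-on-fail scheme, part-003 would never even reach `T_temp_hot`; here its full result vector is recorded.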
“We have to ensure stringent quality requirements in the field, so we have to do a lot of testing,” says Mehul Shroff, an NXP Fellow who led the research. But with much of the actual manufacturing and packaging of chips outsourced to other companies, testing is one of the few knobs most chip companies can turn to control costs. “What we were trying to do here is come up with a way to reduce test cost in a way that was statistically rigorous and gave us good results without compromising field quality.”
A Test Recommender System
Shroff says the problem has certain similarities to the machine-learning-based recommender systems used in e-commerce. “We took the concept from the retail world, where a data analyst can look at receipts and see what items people are buying together,” he says. “Instead of a transaction receipt, we have a unique part identifier, and instead of the items that a consumer would purchase, we have a list of failing tests.”
The NXP algorithm then discovered which tests fail together. Of course, the stakes for whether a buyer of bread will also want butter are quite different from those for whether a test of an automotive part at a particular temperature means other tests don't need to be run. “We need to have 100 percent or near 100 percent certainty,” Shroff says. “We operate in a different space with respect to statistical rigor compared to the retail world, but it's borrowing the same concept.”
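The article doesn't publish NXP's actual algorithm, but the market-basket analogy can be sketched in a few lines. The data, test names, and the all-or-nothing confidence threshold below are fabricated for illustration: each part's record of failed tests plays the role of a receipt, and a test that only ever fails alongside another test becomes a candidate for removal.

```python
# Hypothetical sketch of the market-basket idea applied to test data.
# A "receipt" is one failing part's set of failed tests. If test b never
# fails without test a also failing, b adds little information.
from collections import defaultdict

# Fabricated example records: part ID -> set of failed test names.
fail_logs = {
    "part-001": {"T_volt_low", "T_temp_hot"},
    "part-002": {"T_volt_low", "T_temp_hot", "T_freq"},
    "part-003": {"T_volt_low", "T_temp_hot"},
    "part-004": {"T_freq"},
}

def implied_tests(fail_logs, confidence=1.0):
    """Return pairs (a, b) where the fraction of b's failures that
    coincided with a failure of a meets the confidence threshold."""
    fail_count = defaultdict(int)   # how often each test fails
    co_fail = defaultdict(int)      # how often a pair fails together
    for failed in fail_logs.values():
        for b in failed:
            fail_count[b] += 1
            for a in failed:
                if a != b:
                    co_fail[(a, b)] += 1
    return [(a, b) for (a, b), n in co_fail.items()
            if n / fail_count[b] >= confidence]

# T_volt_low and T_temp_hot always fail together in this toy data,
# so either one is a candidate for removal (pending engineering review).
print(sorted(implied_tests(fail_logs)))
```

As the article stresses, in production such a rule would be adopted only at near-100-percent confidence and after an engineer confirms the physical link between the two tests.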
As rigorous as the results are, Shroff says they shouldn't be relied upon on their own. You have to “make sure it makes sense from an engineering perspective and that you can understand it in technical terms,” he says. “Only then, remove the test.”
Shroff and his colleagues analyzed data obtained from testing seven microcontrollers and applications processors built using advanced chipmaking processes. Depending on which chip was involved, they were subject to between 41 and 164 tests, and the algorithm was able to recommend removing 42 to 74 percent of those tests. Extending the analysis to data from other types of chips led to an even wider range of opportunities to trim testing.
The algorithm is a pilot project for now, and the NXP team is looking to expand it to a broader set of parts, reduce the computational overhead, and make it easier to use.