Mehul Shroff

Technical Director, Radiation & Intrinsic Reliability, NXP

Below is the video transcript.

NXP is a global semiconductor company with a wide variety of products and a strong focus on quality and results. We're in a business where quality demands are extremely stringent. And at the same time, we cannot over-test our products to the point of having insanely high test requirements that aren't really applicable, because there is a competitiveness component.

And so we have to strike the right balance: How do we right-size this without taking undue risks in the field, but also without making it impossible for us to make and sell our product? Data science and analytics are a huge factor in getting us to that goal.

In an effort to improve data literacy and data analytics, NXP has rolled out the Citizen Data Scientist Program. The idea is that you bring people in from different areas of specialization, train them, and have them work on a project where you either solve a problem or figure out a better way to do something that leverages the power and capability of big data, advanced analysis tools, and platforms. It's more than just training because we all have to come in with a use case that we want to solve at the end.

The project that we did was completely unrelated to my day job. There were two levels of learning. The first was data science, which everyone else was doing. And then the second learning for me was just this particular aspect of our engineering work. The project was initially defined for us by one of our product teams, and they said, "This is the solution we'd like to see."

We have to do a lot of testing, but testing isn't free. It comes at a cost: there's a hardware cost, there's time, and so on. So as products mature and stabilize, there's a need to reduce tests for operational efficiency and cost reduction.

The way it was being done in the past was subjective: every engineer did it a different way, which led to a lot of inconsistency in the analysis. So we started trying to figure out whether we could do this in a mathematically and statistically rigorous manner, not to replace the engineering knowledge, but to give the product and test engineers a tool to streamline their work.

What they were doing would take a few weeks of work. If we could do this in, say, an afternoon of compute time and give them a table saying, "Here are the tests you can remove, and here's the statistical justification for removing them," then they still get to use their subject matter expertise and make the final decisions.
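(For illustration only: the transcript doesn't describe the statistics behind such a table, but a minimal Python sketch of one common test-reduction idea, assuming per-unit parametric results with spec limits, might look like the following. The function name, cutoffs, and data layout are hypothetical, not NXP's actual method.)

```python
# Hypothetical sketch: flag candidate tests for removal using two common screens,
# a capability screen (the test essentially never fails) and a redundancy screen
# (the test is nearly perfectly correlated with a retained test).
import numpy as np
import pandas as pd

def test_reduction_table(results: pd.DataFrame,
                         limits: pd.DataFrame,
                         cpk_cutoff: float = 3.0,
                         corr_cutoff: float = 0.99) -> pd.DataFrame:
    """results: one row per tested unit, one column per parametric test.
    limits: indexed by test name, with 'lsl' and 'usl' columns."""
    rows = []

    # Screen 1: capability. Cpk = min(USL - mean, mean - LSL) / (3 * sigma).
    # A distribution sitting far inside its spec limits almost never fails.
    mean = results.mean()
    sigma = results.std(ddof=1)
    cpk = np.minimum(limits["usl"] - mean, mean - limits["lsl"]) / (3 * sigma)
    for test in cpk.index[cpk > cpk_cutoff]:
        rows.append((test, f"Cpk = {cpk[test]:.1f} exceeds {cpk_cutoff}"))

    # Screen 2: redundancy. For each highly correlated pair, keep the first
    # test and flag the other as a removal candidate.
    corr = results.corr().abs()
    tests = list(results.columns)
    dropped = set()
    for i, kept in enumerate(tests):
        if kept in dropped:
            continue
        for other in tests[i + 1:]:
            if other not in dropped and corr.loc[kept, other] > corr_cutoff:
                dropped.add(other)
                rows.append((other, f"|r| = {corr.loc[kept, other]:.3f} "
                                    f"with retained test {kept}"))

    return (pd.DataFrame(rows, columns=["test", "justification"])
              .drop_duplicates(subset="test"))
```

The output is exactly the kind of table described above: one row per removal candidate with its statistical justification, which an engineer would still review before removing anything.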

JMP is one of our primary tools for data analysis, and it is very amenable to collaboration. It makes it easy to build on each other's learning: you can build on somebody else's work and analysis, and you can reuse that analysis.

JMP has this great feature where the analysis can be output as a script, or now, with the Workflow Builder, as a workflow, which can then just be handed off to somebody else who has to do the exact same kind of analysis on a different data set. There's also dynamic linking: the ability to take report outputs and move them into a data table for subsequent analysis and calculations. That integration in JMP really helps simplify the process.

In the case of the project that I worked on, the next best alternative was some Excel macros that people used a long time ago, or, in one of our databases, there are ways to program some of this through Python.

What we saw when we surveyed them is that none of those alternatives had the statistical rigor that we were able to develop. We had to take the JMP output and re-engineer it for what we needed to do, and that framework doesn't really exist in any of the other approaches we surveyed. So in that sense, at least qualitatively, it's a night-and-day impact.

With our test case, we were able to demonstrate that our approach could have eliminated 100+ tests without losing any coverage and without taking any quality risks.

The other way to quantify it: in a lot of our applications, we have to notify customers that we are making changes to the test program and justify those changes, and the customers can push back and challenge them. So having the statistical metrics, the statistical rigor, will, I think, make it easier to sell the idea to the customer. That could be very powerful.

A third element would be to take the analysis upstream: go back to the way the tests are architected and configured, and ask whether, in the future, we can use that to design our tests more efficiently. That's not something we've really started yet, but it would be the natural progression of the work.


The results illustrated in this article are specific to the particular situations, business models, data input and computing environments described herein. Each JMP customer’s experience is unique, based on business and technical variables, and all statements must be considered nontypical. Actual savings, results and performance characteristics will vary depending on individual customer configurations and conditions. JMP does not guarantee or represent that every customer will achieve similar results. The only warranties for JMP products and services are those that are set forth in the express warranty statements in the written agreement for such products and services. Nothing herein should be construed as constituting an additional warranty. Customers have shared their successes with JMP as part of an agreed-upon contractual exchange or project success summarization following a successful implementation of JMP software.