Letters To The Editor

By Thomas N. Herzog and David W. Dickson

The following is written in response to the "An Actuary in the World of Six Sigma" article that ran in the February/March issue of The Actuary.

Dear Editor:

I enjoyed reading David Dickson's article "An Actuary in the World of Six Sigma" that appeared in the February/March issue of The Actuary. I agree with Mr. Dickson that actuaries can learn a lot from those espousing the "six sigma" approach as well as from reading Dr. Deming's book Out of the Crisis.

However, I do have a few minor quibbles with this article. First, on page 20 of his article, Mr. Dickson lists "Hypothesis Testing" as one of the "fields of thought included in six sigma." Certainly, Dr. Deming would have taken issue with that. He and I had a lengthy conversation on just this subject at an ASQC meeting in New Jersey in 1980. So, it comes as no surprise to me that on page 272 of Out of the Crisis Dr. Deming writes:

"Incidentally, Chi Square and other tests of significance, taught in some statistical courses, have no application here or anywhere."

Second, Mr. Dickson writes on page 22 of his article that:

"The key output of a DFSS [Design For Six Sigma] solution is the ability to predict performance within some confidence limits."

Then on page 23, Mr. Dickson goes on to advocate the use of "confidence intervals for reserve estimations." Again, this directly contradicts Dr. Deming's views as stated in Out of the Crisis. Specifically, on page 132 of Out of the Crisis Dr. Deming writes:

"But a confidence interval has no operational meaning for prediction, hence provides no degree of belief for planning."

So, what is the actuary to do if he or she wants to predict performance? The answer is to construct a predictive distribution using the constructs of the Bayesian paradigm of statistics. One reference for this type of approach is the third edition of my textbook Introduction to Credibility Theory published by ACTEX.
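As a minimal sketch of the approach Dr. Herzog describes, consider a conjugate Gamma-Poisson model for claim counts. All numbers below are hypothetical assumptions for illustration, not figures from the letter; the point is that the resulting predictive distribution supports direct probability statements about future outcomes.

```python
import math

# Assumed model (illustration only): claim count N ~ Poisson(lam),
# with a subjective Gamma(alpha, beta) prior on lam.
alpha, beta = 3.0, 2.0       # hypothetical prior: mean claim rate alpha/beta
claims_observed = [2, 4, 3]  # hypothetical past claim counts

# Conjugate update: posterior is Gamma(alpha + sum(x), beta + n).
alpha_post = alpha + sum(claims_observed)
beta_post = beta + len(claims_observed)

def predictive_pmf(k):
    """P(next-period claims = k): the negative binomial predictive pmf."""
    p = beta_post / (beta_post + 1.0)
    log_pmf = (math.lgamma(alpha_post + k) - math.lgamma(alpha_post)
               - math.lgamma(k + 1)
               + alpha_post * math.log(p) + k * math.log(1.0 - p))
    return math.exp(log_pmf)

# Unlike a confidence interval, the predictive distribution answers
# questions about the next observation directly, e.g.:
prob_ge_5 = 1.0 - sum(predictive_pmf(k) for k in range(5))
```

The predictive distribution here is a proper probability distribution over future claim counts, so statements such as "the probability of five or more claims next period" follow immediately.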

Sincerely yours,

Thomas N. Herzog, Ph.D., A.S.A.

Author's Reply
I want to thank Dr. Herzog for his comments. While Six Sigma and TQM have similar roots and goals, they go about achieving those goals in different ways. I am sure Dr. Deming would have more than a few difficulties with the GE version of Six Sigma, but the proof is in the results. Interestingly, Dr. Deming would not have claimed authorship of TQM or Six Sigma. To quote Ms. Marilyn Monda, one of Dr. Deming's mentees, "Deming would never consider himself a founder of Six Sigma and certainly not a founder of TQM. People used to say that to Deming all the time. He would always reply, 'TQM? What's THAT?', implying that there was no one operational definition, so how could he support it (never mind develop it!)."

To read one analysis of the differences between the two approaches, see chapter 3 of The Six Sigma Way, by Pande, Neuman and Cavanagh, © 2000 McGraw-Hill. Interestingly, the two approaches are beginning to come together in a tiered problem-solving approach in what is now known as Lean Six Sigma.

Ms. Monda also points out that Dr. Deming did speak and write volumes about the analytic versus enumerative aspect of statistics, which I believe addresses Dr. Herzog's concerns regarding hypothesis testing. "The sample (and statistics/statistical tests generated from this sample) is only descriptive (enumerative) of the frame from which it was randomly sampled. It is only analytic (predictive of the future) to the extent that frame represents future frames. That would only happen when the process is in a state of statistical control. That is why Deming said that the process whose control chart is stable is predictive to the near future. (Therefore in Six Sigma), we confirm (suspected) X's with pilots and/or experiments. It is one of the great strengths of the DMAIC (and DFSS) process that we do not rely entirely on statistical results to ensure process improvement; we also have process analysis methodologies as well. It is a great mixture of statistical, graphical and subject matter expertise that leads us to our solution."

Dr. Herzog makes an astute correction regarding the usefulness of confidence intervals. The key output of a DFSS [Design For Six Sigma] solution is really the ability to make probabilistic statements regarding the predicted performance of a product or process. To do this, the construction of a predictive distribution or a predictive model (modeling expectations and variation under a range of inputs) is necessary. Our original reserve ranges project started with trying to develop confidence intervals for reserves. We quickly discovered that, from a practical decision-making point of view, the ability to state the probability of actual results exceeding a booked point value was much more useful when speaking to senior leaders in our company. While striving to build predictive distributions and models, we always need to keep in mind what another famous statistician, George E. P. Box, is quoted as saying, "All models are wrong. Some are useful."
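The kind of statement described above can be sketched with a small Monte Carlo simulation. The lognormal distribution for ultimate losses, its parameters, and the booked reserve value below are all assumptions chosen purely for illustration; any fitted predictive distribution could take its place.

```python
import random

# Hypothetical sketch: given a predictive distribution for ultimate
# losses (assumed lognormal here), report P(actual > booked reserve)
# rather than a confidence interval. All numbers are assumptions.
random.seed(42)
mu, sigma = 4.0, 0.25   # assumed lognormal parameters for ultimate losses
booked_reserve = 60.0   # assumed booked point value

draws = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
prob_exceed = sum(d > booked_reserve for d in draws) / len(draws)
# prob_exceed is the kind of single, decision-ready number described
# above: the estimated probability that actual results exceed the
# booked point value.
```

In practice the simulated distribution would come from a fitted reserving model rather than an assumed lognormal, but the exceedance-probability calculation is the same.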

Dr. Herzog also suggested the use of Bayesian statistics. Without saying so, in Six Sigma we almost always start with an expert or prior opinion, which is essentially a Bayesian approach: statistics that incorporate prior knowledge and accumulated experience into probability calculations, using subjective probability as a starting point for assessing a subsequent probability.
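The prior-to-posterior step described above can be illustrated with a single application of Bayes' rule. The probabilities below are invented for the example; the pattern is an expert's subjective prior revised by pilot evidence.

```python
# Assumed numbers, for illustration only: an expert's subjective prior
# that a process is defective, updated after one failed pilot run.
prior_defect = 0.10          # expert's prior: P(process defective)
p_fail_given_defect = 0.90   # assumed likelihood of a failed pilot if defective
p_fail_given_ok = 0.05       # assumed likelihood of a failed pilot if not

# Bayes' rule: P(defective | failed pilot)
p_fail = (p_fail_given_defect * prior_defect
          + p_fail_given_ok * (1.0 - prior_defect))
posterior_defect = p_fail_given_defect * prior_defect / p_fail
```

One failed pilot moves the assessed probability of a defective process from the expert's 10% prior to roughly two-thirds, which is exactly the "subjective probability as a starting point" pattern described above.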

Finally, I want to emphasize that while it is the analytical side of Six Sigma which draws most of an actuary's attention and criticism, the true value of Six Sigma to the actuary is the rigorous process and project management discipline and group decision-making focus that are ingrained into the Six Sigma way.

David W. Dickson, FSA, MAAA, GE Insurance Solutions, actuarial project manager, Six Sigma Master Black Belt.