
Back to the Futurism—New and Improved!

By Steven W. Easson and Theodore J. Gordon

Predictive Analytics and Futurism Section Newsletter, April 2021


In forecasting assumptions for purposes such as valuation, capital projections, scenario analysis and business contingency planning, our typical actuarial toolkit includes items such as experience studies, historical market data, stochastic modeling, credibility theory, etc.: very useful tools in relatively stable times. However, along come “outliers” that “no one foresaw coming.” The Great Financial Crisis of 2008 and, more recently of course, COVID-19, the plague of 2020, are two prime examples. In retrospect, there was plenty of ex-ante precision in the assumptions calibrated before these ex-post “outlier” scenarios began. But now, a great deal of work and extraordinary judgment is necessary to refine those assumptions as a result of the “outliers.”

So, what extra tools are available ex-ante, and ex-post, to provide insights into deriving plausible forecasts of surprising futures? “Futurism” to the rescue!

The success of the predictive analytics aspect of the Predictive Analytics and Futurism (PAF) Section of the Society of Actuaries has been phenomenal. But let us not forget the roots of the section, the original Futurism Section, the SOA’s second oldest section. Futures Research methods are still valuable tools. Continued recognition of this motivated the section to conduct another study to demonstrate the Futures Research methods known as the Delphi method and the Trend Impact Analysis (TIA) method, along with curve fitting and Monte Carlo simulation software.

Sponsored by the PAF Section, as well as the Financial Reporting Section, the Investment Section and the Canadian Institute of Actuaries, the study applied these methods to forecasting plausible values of four U.S. economic variables at three future time periods (two, five and 10 years hence):

  • Annual increase in the U.S. Consumer Price Index.
  • 10-Year U.S. Treasury yields.
  • S&P500 total rate of return.
  • Corporate Baa spot yields.

The study’s full report (dated May 2020) contains in-depth descriptions of the Delphi and TIA methods; listings of the rationales and thought processes; the plausible future developments that could influence the values of these four economic variables; and the resulting “fan of possibilities” for the values of these variables. It is available at https://www.soa.org/resources/research-reports/2020/real-time-delphi-study/ and https://www.cia-ica.ca/publications/publication-details/rp220077.

The study was a repeat of a similar study completed in 2005, the section’s trailblazer for future Delphi studies, this time utilizing the impressive advances made in Futures Research methods over the last 15 years.

As was the case for the 2005 study, was the primary purpose of this study to provide actuaries with a source of best-estimate assumptions of future values of economic variables? Absolutely not. Was the primary purpose of the study to create an awareness for actuaries of other forecasting methods to consider adding to their toolkit? Absolutely yes.

Futures Research methods are dynamic tools to facilitate discussion among experts to produce descriptions of plausible futures of potential use to decision makers. Deterministic scenarios consist of descriptions of chains of sequential events illustrating what could happen, often after a triggering development, including, for example, the likelihood and the impacts of consequential events. When randomizing steps are introduced into the process, a “fan of plausible futures” results, for example the range of developments flowing from the current COVID-19 pandemic. An exact science? Certainly not. Useful forecasting tools? Definitely.

The study addressed the following questions:

  • What are some plausible forecasts of the economic environment of the United States two, five and 10 years into the future?
  • Is it even possible to make judgments about volatile economic variables?
  • What are the rationales and thought processes behind the judgments of a diverse group of experts who make forecasts of economic variables?
  • How can we gather and share these judgments and brainstorm plausible events and discontinuities that could influence the future values of economic variables?
  • Can this be done in such a manner that avoids a “follow the herd mentality,” biases inherent in consensus forecasts and weighting past trends too heavily?
  • Are these expert judgments useful in augmenting historical data in setting modeling parameters and assumptions?

Although numeric forecasts for the economic variables were produced, it was recognized from the outset that it was impossible to forecast such variables with accuracy and confidence over this time period. The main success of this project rests on the educational value it provides to practitioners, specifically the tools that can be used to augment traditional actuarial methods.

Methodology

The major objective of this work was to introduce and demonstrate the following two Futures Research methods: 1) Real Time Delphi (RTD), a systematic means of collecting opinions from a group of experts, and 2) Trend Impact Analysis (TIA), a system for modifying extrapolations to include perceptions about unprecedented developments.

These futures methods were supplemented by the following two methods already familiar to actuaries: 1) curve fitting, a technique for extrapolating historical data, and 2) Monte Carlo modeling, a statistical technique for introducing randomness into deterministic forecasts.
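
To make the curve-fitting step concrete, here is a minimal Python sketch (not the software used in the study): it fits one curve of assumed shape, an exponential, to an invented history of an economic variable and reads off extrapolations two, five and 10 years past the last observation. The data and the choice of curve are hypothetical, for illustration only.

# Minimal curve-fitting sketch: fit an assumed exponential shape to an
# invented history and extrapolate it. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, a, b):
    """Curve of known shape: y = a * exp(b * t)."""
    return a * np.exp(b * t)

# Hypothetical annual observations (in percent), years 0 through 9.
years = np.arange(10)
values = np.array([2.1, 2.3, 2.2, 2.5, 2.4, 2.6, 2.8, 2.7, 3.0, 3.1])

# Least-squares fit of the assumed curve shape to the history.
params, _ = curve_fit(exponential, years, values, p0=(2.0, 0.02))

# Extrapolate the fitted curve 2, 5 and 10 years past the last observation.
for horizon in (2, 5, 10):
    t = years[-1] + horizon
    print(f"{horizon}-year extrapolation: {exponential(t, *params):.2f}")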

Delphi studies were used beginning in the 1960s to collect expert judgment from small groups of experts using sequential questionnaires, each building on the results of the prior questionnaire. The questioning sequence was designed to elicit reasons for outlier positions, which, when fed back to the group, tended to move the group average toward stability of results or consensus. The essential elements of a Delphi study are the need for expert participants, since panel sizes are generally small; anonymity of participants to avoid some biases; and feedback of group opinion. Despite their popularity, Delphi studies have been expensive and slow: a three-round Delphi can take three to four months to complete. Real Time Delphi, by contrast, is an efficient online system that does not employ sequential rounds but rather displays group responses to all participants immediately after they are generated. It differs from classic online surveys by providing real-time group feedback as the questionnaire is being completed so that participants can learn from the group as the study progresses. The seminal paper on Real Time Delphi was published in 2006, and since then several versions have been produced and used in a variety of applications (see Wikipedia “Real Time Delphi” and Gordon and Pease (2006)).
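
As a simplified illustration of that real-time feedback loop, the Python sketch below recomputes the group median and interquartile range each time a new estimate arrives and returns the summary to the respondent. It is a toy version of the mechanism, not the RTD platform used in the study, and the estimates are invented.

# Toy Real Time Delphi feedback: update and return group statistics as each
# expert's estimate comes in. Illustrative only.
import numpy as np

responses = []

def submit(estimate):
    """Record one expert's estimate and return the updated group feedback."""
    responses.append(estimate)
    q1, median, q3 = np.percentile(responses, [25, 50, 75])
    return {"n": len(responses), "median": median, "interquartile_range": (q1, q3)}

# Hypothetical estimates (in percent) of one variable at one horizon.
for estimate in [1.8, 2.5, 2.1, 3.0, 2.4]:
    print(submit(estimate))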

The study consisted of two Real Time Delphis (RTDs).

The first step in this study, named RTD#1, was designed to obtain direct estimates of the future values of the variables and to learn about the thought processes behind the forecasts. It was performed in July 2019 and collected judgments from about 30 experts, principally actuaries and futurists. The experts were asked for their high, most likely and low estimates of the value of the four variables at three future time periods (two, five and 10 years hence) and to provide the rationales for their answers. The study also produced a listing of future-shaping developments that respondents provided; these were quite useful in the TIA portion of the study.
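
One way to turn such high, most likely and low estimates into a “fan of possibilities” is sketched below: each expert’s triple is treated as a triangular distribution, the draws are pooled, and a median and a 5th-to-95th percentile range are reported. The triangular assumption and the numbers are ours, for illustration; they are not the study’s published method or data.

# Pool hypothetical (low, most likely, high) estimates into a fan of
# plausible values via per-expert triangular distributions. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical answers from a few experts for one variable at one horizon (percent).
expert_estimates = [(1.0, 2.0, 3.5), (0.5, 1.8, 4.0), (1.5, 2.5, 3.0)]

# Sample from each expert's triangular distribution and pool the draws.
pooled = np.concatenate([
    rng.triangular(low, mode, high, size=10_000)
    for (low, mode, high) in expert_estimates
])

print("pooled median:", round(float(np.median(pooled)), 2))
print("5th to 95th percentile fan:", np.round(np.percentile(pooled, [5, 95]), 2))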

The second step in this study, named RTD#2, ran between Nov. 11, 2019, and Jan. 31, 2020, and asked respondents for their judgments about future external developments—economic, political, technological or social—that could swing forecasts based on fitting curves to historical data or to the direct forecasts produced in RTD#1.

A review of the rationales provided by participants in RTD#1 resulted in a list of some 90 developments deserving further consideration. The list was consolidated down to 28 items for further consideration in RTD#2. Under RTD#2, study participants judged the likelihoods and impacts of these developments. Using TIA along with curve fitting and Monte Carlo simulations, the extrapolative forecasts derived from RTD#1 were adjusted for each of the developments, using the likelihood and impact estimates of the developments provided by study participants in RTD#2.
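
The sketch below gives a deliberately simplified, expected-value flavor of that adjustment step: a baseline extrapolated value is shifted by each development’s probability-weighted impact, which we assume phases in linearly up to the development’s year of maximum impact. The linear ramp and every number shown are illustrative assumptions, not the algorithms or estimates from the study.

# Simplified Trend Impact Analysis: shift a baseline extrapolation by the
# probability-weighted impacts of future developments. Illustrative only.
from dataclasses import dataclass

@dataclass
class Development:
    name: str
    probability: float       # likelihood of occurrence before the horizon (assumed)
    max_impact: float        # change in the variable at peak, percentage points (assumed)
    year_of_max_impact: int  # years until the impact peaks (assumed)

def tia_adjusted(baseline, year, developments):
    """Expected-value TIA adjustment of one extrapolated point."""
    adjusted = baseline
    for d in developments:
        phase_in = min(year / d.year_of_max_impact, 1.0)  # assumed linear ramp
        adjusted += d.probability * d.max_impact * phase_in
    return adjusted

developments = [
    Development("Pandemic kills 1% of world population", 0.15, -0.8, 3),
    Development("U.S. federal debt/GDP reaches 150%", 0.30, 0.5, 7),
]

# Baseline extrapolations (e.g., from curve fitting) at 2, 5 and 10 years.
for year, baseline in [(2, 2.4), (5, 2.7), (10, 3.1)]:
    print(year, "years:", round(tia_adjusted(baseline, year, developments), 2))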

Study Results

Rationales

Many of the reasons participants provided were eloquent statements of hope and uncertainty about the future. The range of expectations was quite wide, perhaps wider than at any time in the recent past. From an economic point of view, the forecasts generally reflected an inflationary future, largely determined by uncertain politics, man-made and natural disasters, and chance.

Although the study was completed a few weeks before the COVID-19 pandemic became a major global issue, the panel identified several developments that were soon to capture the world’s attention; these were hypothetical at the time of the study:

  • “Pandemic kills 1% of world population (Spanish flu of 1918 is estimated to have killed between 50 million and 100 million people worldwide).”
  • “Price of oil drops below $30 for more than a year.”
  • “U.S. federal debt/GDP reaching 150%.”

The panel judged the probabilities of the first two of these developments to be quite low (less than 15 percent) and that of the third to be only just above 30 percent; nevertheless, their inclusion was remarkable. Other developments relating to the U.S. national election and international affairs are yet to be resolved.

Quantitative Forecasts

Our apologies. To keep the length of this article manageable, to further emphasize the secondary importance of the actual results (demonstrating the use of Futures Research methods being the primary purpose of the study) and to nudge you to read the report, our highlights of the quantitative forecasts here are brief and intended to whet your appetite.

(a) The three most likely of the 28 future developments obtained under RTD#1 (probability of development in brackets):

  • #19. Incumbent loses re-election in 2020; U.S. policies revert to former era (48 percent).
  • #21. There is rapid growth of the use of robotics and artificial intelligence in major economies worldwide; machines take over one-third of today’s jobs (40 percent).
  • #23. Climate change initiatives prove to be ineffective; food prices increase so much that there is food insecurity for one-third of Americans (35 percent).

(b) The three least likely developments obtained under RTD#1 (probability of development in brackets):

  • #28. Space travel becomes economical for 10 percent of U.S. citizens (3 percent).
  • #3. The U.S. defaults on debt or pegs U.S. dollar to gold at $10,000 level (6 percent).
  • #10. U.S. taxation increased to a level that balances the budget (7 percent).

As for impacts of developments on the four U.S. economic variables, RTD#2 asked for judgments about:

  • The probability of the developments: “What is the likelihood of the onset or occurrence of this development before 2030?”
  • The year of maximum impact: “In what year do you believe the development will have its maximum impact on the variables?”
  • The size of the maximum impact: “How much would the variable change in the year of maximum impact? Example: if the variable had a value of 5 percent without the development and 4.5 percent with the development, enter −.5.”

See Section 5 of the report for results. Section 6 of the report (TIA Forecasts) outlines how extrapolations of the historical data obtained under RTD#1 are modified for the developments with their associated probabilities and impacts. In Section 8 (Example of Use in Sensitivity Testing), the TIA model, consisting of the list of developments, their probabilities and impacts, and the extrapolations of historical data, was used to demonstrate how TIA could test the consequences of “what if” assumptions.

Principal Conclusions

This work illustrated several systematic methods for forecasting the future values of time series variables: by collecting estimates from individuals in a group; by combining extrapolative forecasts obtained through use of historical data and statistical curve-fit methods; and by combining group judgments about future developments that could deflect the extrapolations.

The curve fitting methods that were used are well known and are based on regression to minimize errors when curves of known shapes are fit to the data. The method for eliciting expert judgments about future developments and their consequences was Real Time Delphi. The method for combining the expert judgments about probability and impacts of future developments with extrapolations was Trend Impact Analysis. A Monte Carlo model was used in which random numbers determined the assumed occurrence/non-occurrence of future developments based on their estimated probabilities; this model was used to create many mini-quantitative scenarios that led to definition of expected median and interquartile ranges of the variables under study. The algorithms developed for this study are available for SOA/CIA member use.
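
Purely as an illustration of that Monte Carlo step, and not the SOA/CIA software itself, the sketch below generates mini-scenarios in which random numbers decide whether each development occurs, shifts an extrapolated value by the impacts of the developments that do occur, and summarizes the scenarios by their median and interquartile range. The probabilities, impacts and baseline are assumed values.

# Monte Carlo mini-scenarios: random occurrence of developments shifts an
# extrapolated value; summarize by median and interquartile range. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)

baseline_10yr = 3.1                            # extrapolated value ten years out (assumed)
probabilities = np.array([0.15, 0.30, 0.40])   # development likelihoods (assumed)
impacts = np.array([-0.8, 0.5, -0.3])          # impact if each development occurs (assumed)

n_scenarios = 10_000
occurred = rng.random((n_scenarios, probabilities.size)) < probabilities
scenario_values = baseline_10yr + occurred.astype(float) @ impacts

q1, median, q3 = np.percentile(scenario_values, [25, 50, 75])
print(f"median: {median:.2f}, interquartile range: ({q1:.2f}, {q3:.2f})")

Changing one development’s probability or impact and re-running the simulation is the same kind of “what if” exercise as the sensitivity testing in Section 8 of the report and the policy analysis described in the next paragraph.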

The study also demonstrated how the methods could be used in policy analysis by simulating policy decisions through changing probabilities or impacts and observing the effects on the variables of interest.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries, the editors, or the respective authors’ employers.


Steven W. Easson, FSA, FCIA, CFA, conceived and created this project and chaired the Society of Actuaries project Oversight Group and Working Group for this study. He is a former chairperson of the Society of Actuaries Futurism Section. He can be contacted at seassonF86@outlook.com.

Theodore J. Gordon is senior research fellow with the Millennium Project of the American Council for the United Nations University. He is a well-known futurist, an inventor of several quantitative methods of forecasting and a co-author of the initial Delphi study produced by RAND in the 1960s. He was the principal consultant to the study team.

The Project Oversight Group consisted of the following members of the Society of Actuaries/Canadian Institute of Actuaries: Steve Easson (chair), Dave Armstrong, Jack Gibson, Hal Pederson, Jim Reiskytl, Max Rudolph, Keith Walter, and Ben Wolzenski. SOA and CIA staff who were instrumental to the project’s success were: Shlomit Jacobson, Ph.D., research program manager of the Canadian Institute of Actuaries; Jan Schuh, senior research administrator; and Ronora Stryker, ASA, MAAA, senior practice research actuary of the Society of Actuaries. The software used for this study is available on request from these SOA and CIA staff members.