
# The Reinsurance Expected Loss Cost Formula


Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.

Published: Mon, 5 Dec 2016

The reinsurance expected loss cost is estimated as RELC = RCF x ELCF x PCPLR x PCP, where:

ELCF is the excess loss cost factor (as a percentage of total loss cost), PCP is the primary company (subject) premium, and PCPLR is the primary company permissible loss ratio (including any loss adjustment expenses covered as a part of loss).

RCF is the rate correction factor, the reinsurer's adjustment for the estimated adequacy or inadequacy of the primary rate.

Given that the coverage of this treaty is per-occurrence, the manual excess rate must also be weighted for the clash exposure.

To determine the reinsurer's excess share, the ALAE is added to each claim; as a result, claims from policies with limits below the attachment point can be pushed into the excess layer.

The reinsurer may have its own data describing the bivariate distribution of indemnity and ALAE, or such information can be obtained from ISO or a similar organization outside the United States. With these data the reinsurer can construct increased limits tables with ALAE added to the loss instead of residing entirely in the basic limits coverage.

A simpler alternative is to adjust the manual increased limits factors so that they account for the addition of ALAE to the loss. A basic way of doing this is to assume that the ALAE for each claim is a deterministic function of the claim's indemnity amount, which means adding exactly γ% to each claim value for the range of claim sizes near the layer of interest.

This γ factor is smaller than the overall ratio of ALAE to ground-up indemnity loss, as much of the total ALAE relates to small claims or claims closed with no indemnity.

Assumption: when ALAE is added to loss, every claim with indemnity greater than \$500,000 = \$600,000/(1+γ) enters the layer \$1,400,000 excess of \$600,000, and the loss amount in the layer reaches \$1,400,000 when the ground-up indemnity reaches \$1,666,667 = \$2,000,000/(1+γ), taking γ = 20%.

From this the standard increased limits factors can be modified to account for ALAE added to the loss. In this liability context, the formula for RELC can be used with PCP as the basic limits premium and PCPLR as the primary company permissible basic limits loss ratio.

Assumption: given the clash exposure, an overall loss loading of δ% is sufficient to adjust the loss cost for this layer predicted from the stand-alone policies.

Then ELCF measures the excess loss in the layer \$1,400,000 excess of \$600,000 arising from each policy limit, plus its contribution to the clash losses, as a percentage of the basic limits loss arising from the same policy limit.

The formula for ELCF evaluated at a policy limit (Lim) is as follows:

## Formula : Liability ELCF for ALAE Added to Indemnity Loss

ELCF(Lim) = 0 for Lim ≤ AP/(1+γ)

ELCF(Lim) = (1+γ) x (1+δ) x [ILF(Lim) − ILF(AP/(1+γ))] for AP/(1+γ) < Lim ≤ AP + RLim

ELCF(Lim) = (1+γ) x (1+δ) x [ILF(AP + RLim) − ILF(AP/(1+γ))] for Lim > AP + RLim

Where

Attachment Point AP = \$600,000

Reinsurance Limit RLim = \$1,400,000

ALAE loading γ = 20% and clash loading δ = 5%

ILF is the manual increased limits factor without risk load and without ALAE

Table 2 displays this method for a part of Allstate's exposure, using hypothetical increased limits factors to calculate the excess loss cost factors with both ALAE and risk load excluded.

| (1) Policy Limit in \$ | (2) ILF w/o Risk Load and w/o ALAE | (3) ELCF |
|---|---|---|
| 200,000 | 1.0000 | 0 |
| 500,000 | 1.2486 | 0 |
| 600,000 | 1.2942 | 0.0575 |
| 1,000,000 | 1.4094 | 0.2026 |
| 1,666,666 | 1.5273 | 0.3512 |
| 2,000,000 or more | 1.5687 | 0.4033 |

Source: own calculation based on Patrik (2001)

Using the ELCF formula above, ELCF(\$600,000) = 1.20 x 1.05 x (1.2942 − 1.2486) = 0.0575, and ELCF(\$2,000,000) = 1.20 x 1.05 x (1.5687 − 1.2486) = 0.4033.
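A short sketch of this calculation, using the hypothetical Table 2 ILFs and the loadings stated in the text (γ = 20% for ALAE, δ = 5% for clash); the limit passed in must be one of the tabulated values:

```python
# Sketch of the liability ELCF with ALAE added to indemnity loss, using
# the hypothetical ILF table (Table 2) and the loadings from the text:
# gamma = 20% ALAE loading, delta = 5% clash loading.
ILF = {200_000: 1.0000, 500_000: 1.2486, 600_000: 1.2942,
       1_000_000: 1.4094, 1_666_666: 1.5273, 2_000_000: 1.5687}
GAMMA, DELTA = 0.20, 0.05
AP, RLIM = 600_000, 1_400_000  # attachment point, reinsurance limit

def elcf(lim):
    """Excess loss cost factor for a tabulated policy limit."""
    if lim <= AP / (1 + GAMMA):    # claim cannot reach the layer
        return 0.0
    top = min(lim, AP + RLIM)      # ELCF is flat above AP + RLim
    base = ILF[500_000]            # ILF at AP/(1+gamma) = $500,000
    return (1 + GAMMA) * (1 + DELTA) * (ILF[top] - base)

print(round(elcf(600_000), 4))    # 0.0575
print(round(elcf(2_000_000), 4))  # 0.4033
```

This reproduces every ELCF value in Table 2 to the displayed rounding.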

Assumption 1: for this exposure, Allstate's permissible basic limits loss ratio is PCPLR = 70%.

Assumption 2: the reinsurer's evaluation indicates that the cedant's rates and offsets are adequate, and therefore RCF = 1.00.

The reinsurer can now calculate the exposure rate RELC and the reinsurer's undiscounted estimate of the loss cost in the excess layer, as shown in Table 3.

## Table 3: Reinsurance Expected Loss Cost (undiscounted)

| (1) Policy Limit in \$ | (2) Estimated Subject Premium Year 2009 in \$ | (3) Manual ILF | (4) Estimated Basic Limits Loss Cost: 0.70 x (2)/(3) | (5) ELCF | (6) RELC in \$: (4) x (5) |
|---|---|---|---|---|---|
| Below 600,000 | 2,000,000 | 1.10 (avg.) | 1,272,727.27 | 0 | 0 |
| 600,000 | 2,000,000 | 1.35 | 1,037,037.04 | 0.0575 | 59,629.63 |
| 1,000,000 | 2,000,000 | 1.50 | 933,333.33 | 0.2026 | 189,093.33 |
| 2,000,000 or more | 4,000,000 | 1.75 (avg.) | 1,600,000.00 | 0.3512 | 562,920.00 |
| Total | 10,000,000 | n.a. | 4,843,197.64 | n.a. | 811,642.96 |

Source: own calculation based on Patrik (2001)
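The exposure-rating arithmetic of Table 3 can be sketched as follows (figures taken from the table; note that the printed RELC of 562,920.00 in the last row implies a slightly different unrounded ELCF than the displayed 0.3512):

```python
# Sketch of the Table 3 exposure rate: basic-limits loss cost per policy
# limit band times the ELCF, assuming PCPLR = 70% as stated in the text.
PCPLR = 0.70

rows = [  # (policy limit band, subject premium, manual ILF, ELCF)
    ("Below 600,000",     2_000_000, 1.10, 0.0),
    ("600,000",           2_000_000, 1.35, 0.0575),
    ("1,000,000",         2_000_000, 1.50, 0.2026),
    ("2,000,000 or more", 4_000_000, 1.75, 0.3512),
]

total = 0.0
for label, premium, ilf, elcf in rows:
    basic_limits_loss = PCPLR * premium / ilf  # column (4)
    relc = basic_limits_loss * elcf            # column (6)
    total += relc
    print(f"{label:>20}: RELC = {relc:,.2f}")
print(f"{'Total':>20}: RELC = {total:,.2f}")
```

The per-band results match Table 3 except for the last band, where the rounded ELCF gives 561,920.00 rather than the table's 562,920.00.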

An exposure loss cost can be estimated using probability models of the claim size distributions.

This directly gives the reinsurer the claim count and claim severity information, which the reinsurer can use in the simple risk-theoretic model for aggregate losses.

Assumption: the indemnity loss distribution underlying Table 2 is Pareto with q = 1.1 and b = 5,000. Then the simple model of adding 20% ALAE to the indemnity per occurrence changes the Pareto indemnity distribution to a new Pareto with q = 1.1 and b = 5,000 x 1.20 = 6,000.

The reinsurer has to adjust the layer severity for clash, which can be done by multiplying by 1 + δ = 1.05. The reinsurer can then calculate the expected excess claim sizes for each policy limit; dividing the RELC for each limit by the expected claim size gives estimates of the expected claim count. This is done in Table 4.

The expected claim size can be calculated as follows: first, the expected excess claim severity over the attachment point d, subject to the reinsurance limit RLim, has to be calculated for each policy limit λ.

For λ = 600,000

For λ = 1,000,000

For λ = 2,000,000

The reinsurer is now able to calculate the expected claim count; the estimates are shown in Table 4:

## Table 4: Excess Expected Loss, Claim Severity and Claim Count

| (1) Policy Limit in \$ | (2) RELC in \$ | (3) Expected Claim Size in \$ | (4) Expected Claim Count: (2)/(3) |
|---|---|---|---|
| 600,000 | 59,629.63 | 113,928 | 0.523 |
| 1,000,000 | 189,093.33 | 423,164 | 0.447 |
| 2,000,000 or more | 562,920.00 | 819,557 | 0.687 |
| Total | 811,642.96 | 1,356,649 | 1.68 |

Source: own calculation based on Patrik (2001)
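The Table 4 severities can be reproduced, to within a fraction of a percent, under the stated Pareto assumption. The sketch below uses the standard Pareto limited expected value, caps each claim at the policy limit times (1 + γ), treats the "or more" band as a \$2,000,000 limit, and applies the 5% clash loading:

```python
# Sketch of the Table 4 calculation: expected excess claim severity and
# claim count per policy limit. Assumes indemnity plus ALAE is Pareto
# with q = 1.1 and b = 5,000 x 1.20 = 6,000, as stated in the text.
Q, B = 1.1, 6_000.0
AP, GAMMA, DELTA = 600_000.0, 0.20, 0.05

def limited_ev(a):
    """E[min(X, a)] for a Pareto(Q, B) claim size X."""
    return B / (Q - 1) * (1 - (B / (B + a)) ** (Q - 1))

def survival(x):
    """P(X > x)."""
    return (B / (B + x)) ** Q

def excess_severity(policy_limit):
    """Expected claim size in the layer, given the claim pierces AP."""
    cap = policy_limit * (1 + GAMMA)   # claim capped at limit x (1+gamma)
    return (1 + DELTA) * (limited_ev(cap) - limited_ev(AP)) / survival(AP)

for lam, relc in [(600_000, 59_629.63),
                  (1_000_000, 189_093.33),
                  (2_000_000, 562_920.00)]:
    sev = excess_severity(lam)
    print(f"limit {lam:>9,}: severity {sev:10,.0f}, count {relc / sev:.3f}")
```

The resulting severities and counts agree with Table 4 up to rounding of the inputs.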

The column (3) total, \$1,356,649, is the sum of the per-limit expected claim sizes for this exposure.

If independence of claim events across all exposures can be assumed, the reinsurer can also obtain overall estimates of the excess expected occurrence (claim) size and the expected occurrence (claim) count.

Now we are going to estimate the experience rating.

Step 3: Gather and reconcile primary claims data segregated by major rating class groups.

As in the example of the property quota share treaties, the reinsurer needs the claims data segregated the same way as the exposure data, and the reinsurer also wants some history of the individual large claims. The reinsurer usually receives information on all claims greater than one-half of the proposed attachment point, but it is important to receive as much data as possible.

Assumption: a claims review has been performed and the reinsurer received a detailed history for each known claim larger than \$100,000 occurring in 2000–2010, evaluated at 12/31/00, 12/31/01, …, 12/31/09, and 6/30/10.

Step 4: Filter the major catastrophic claims out of the claims data.

The reinsurer wants to identify the clash claims and any significant mass tort claims. By separating out the clash claims, the reinsurer can estimate their size and frequency and how they relate to the non-clash claims. These statistics should be compared to the values the reinsurer knows from other cedants, giving a better approximation for the δ loading.

Step 5: Trend the claims data to the rating period.

As with the example for the property quota share treaties, the trending should account for inflation and also for other changes in the exposure (e.g. higher policy limits) which may affect the loss potential; but unlike with proportional coverage, this step cannot be skipped. The reason is the leveraged effect that inflation has upon excess claims: under a constant inflation rate, the aggregate loss beyond any attachment point increases faster than the aggregate loss below it, because claims grow into the excess layer while their value below is capped at the attachment point. Each ground-up claim value, including ALAE, is trended at each evaluation from year of occurrence to 2011. For example, consider the treatment of a 2003 claim in Table 5.

## Table 5: Trending an Accident Year 2003 Claim

| (1) Evaluation Date | (2) Value at Evaluation in \$ | (3) Trend Factor | (4) 2011 Level Value in \$ | (5) Excess Amount in \$ |
|---|---|---|---|---|
| 12/31/03 | 0 | 1.62 | 0 | 0 |
| 12/31/04 | 0 | 1.62 | 0 | 0 |
| 12/31/05 | 250,000 | 1.62 | 405,000 | 0 |
| 12/31/06 | 250,000 | 1.62 | 405,000 | 0 |
| 12/31/07 | 300,000 | 1.62 | 486,000 | 0 |
| 12/31/08 | 400,000 | 1.62 | 648,000 | 48,000 |
| 12/31/09 | 400,000 | 1.62 | 648,000 | 48,000 |
| 06/30/10 | 400,000 | 1.62 | 648,000 | 48,000 |

Source: own calculation based on Patrik (2001)

The use of a single trend factor in this example reflects that trend affects claim values according to the accident date, not the evaluation date.
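The leveraged effect described in Step 5 can be illustrated with a toy example (the claim values below are hypothetical; the 1.62 trend factor is the Table 5 value). A constant ground-up trend more than triples the loss above the attachment point here:

```python
# Toy illustration of the leveraged effect of inflation on excess claims:
# a constant ground-up trend produces a faster-than-trend increase in the
# portion of loss above a fixed attachment point.
AP = 600_000
claims = [400_000, 700_000, 1_000_000]  # hypothetical ground-up claims
trend = 1.62                            # e.g. the 2003 -> 2011 factor

def excess(c):
    """Portion of a claim above the attachment point."""
    return max(c - AP, 0)

before = sum(excess(c) for c in claims)
after = sum(excess(c * trend) for c in claims)
print(before, round(after), round(after / before, 2))  # excess grows ~3.2x
```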

Trending the policy limits is a delicate issue: if a 2003 claim on a policy with a limit below \$500,000 inflates to above \$600,000 (plus ALAE), will the policy limit sold in 2011 be greater than \$500,000?

Over long periods of time, policy limits do seem to change with inflation. Therefore the reinsurer should, if possible, receive information on Allstate's policy limit distributions over time.

Step 6: Develop the claims data to settlement values.

The next step is to construct the historical accident year development triangles for each type of large claim from the data produced in column (5) of Table 5. Typically all claims should be combined by major line of business. The loss development factors should then be estimated and applied to the excess claims data using standard methods. To check for reasonableness on comparable coverages, we also want to compare the development patterns estimated from Allstate's data to our own expectations based on our own historical data. Considering the claim in Table 5, we see that only \$48,000 is over the attachment point, and only at the fifth development point.

## Table 6: Trended Historical Claims in the Layer \$1,400,000 Excess of \$600,000 (in \$1,000's)

Assumption: our triangle looks like Table 6:

| Acc. Year | Age 1 in \$ | Age 2 in \$ | Age 3 in \$ | … | Age 9 in \$ | Age 10 in \$ | Age 10.5 in \$ |
|---|---|---|---|---|---|---|---|
| 2000 | 0 | 90 | 264 | … | 259 | 351 | 351 |
| 2001 | 0 | 0 | 154 | … | 763 | 798 | |
| … | | | | | | | |
| 2008 | 77 | 117 | 256 | | | | |
| 2009 | 0 | 0 | | | | | |
| 2010 | 0 | | | | | | |
| ATA | 4.336 | 1.573 | 1.166 | … | 1.349 | n.a. | n.a. |
| ATU | 15.036 | 3.547 | 2.345 | … | 1.401 | 1.050 (= tail) | |
| Smoothed Lags | 11.9% | 28.7% | 47.7% | … | 93.1% | 95.3% | 96.7% |

Source: own calculation based on Patrik (2001)

Where:

ATA is Age-To-Age development factor

ATU is Age-To-Ultimate development factor

Lag(t) is the percentage of loss reported at time t

The selection of the tail factor of 1.05 is based upon the general information about the development for this type of an exposure beyond ten years.

Taking the inverses of the age-to-ultimate factors, the time lags of claim dollar reporting, transforms the loss reporting view into that of a cumulative distribution function (CDF) on [0, ∞). This transformation gives a better picture of the loss development pattern. It also allows considering and measuring the average (expected) lag and other moments, which are comparable to the moments of loss development patterns for other exposures.

Given the chaotic development of excess claims, it is important to employ a smoothing technique. Correctly estimated smoothed factors yield more credible loss development estimates. They also allow the function Lag(t) to be evaluated at every positive time.

The smoothing which was introduced in the last row of Table 6 is based on a Gamma distribution with a mean of 4 (years) and a standard deviation of 3.
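This smoothing can be sketched by evaluating the Gamma CDF directly, using shape (4/3)² ≈ 1.78 and scale 9/4 = 2.25 implied by the stated mean and standard deviation. The printed smoothed lags also depend on how ages map to times, so the values below are only indicative:

```python
import math

# Sketch of Lag(t) as the CDF of a Gamma distribution with mean 4 years
# and standard deviation 3, per the text's smoothing assumption.
MEAN, SD = 4.0, 3.0
SHAPE = (MEAN / SD) ** 2   # ~1.78
SCALE = SD ** 2 / MEAN     # 2.25

def gamma_cdf(t, shape=SHAPE, scale=SCALE):
    """Regularized lower incomplete gamma via its power series."""
    x = t / scale
    if x <= 0:
        return 0.0
    term = 1.0 / shape
    total = term
    n = 0
    while term > 1e-12 * total:
        n += 1
        term *= x / (shape + n)
        total += term
    return total * math.exp(-x + shape * math.log(x)) / math.gamma(shape)

for age in range(1, 11):
    print(f"Lag({age}) ~ {gamma_cdf(age):.1%}")
```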

It is also usually useful to analyze the large-claim paid data, if possible, both to estimate the payment patterns of the excess claims and to supplement the ultimate estimates based only on the reported claims used above.

Sometimes the only data available are aggregate excess claims data, which here would be the historical accident year by development year \$1,400,000 excess of \$600,000 aggregate loss triangle. Pricing without specific information about the large claims in such a situation is very risky, but it is occasionally done.

Step 7: Estimate the catastrophic loss potential.

The mass tort claims, such as pollution clean-up claims, distort the historical data and therefore need special treatment. As with the property coverage, the analysis of Allstate's exposures may allow us to predict a suitable loading for the future mass tort claim potential.

As was said in the Step 4, the reinsurer needs to identify the clash claims.

Once the clash claims are separated, the various parts of each occurrence are added together so that the attachment point and the reinsurance limit can be applied to the occurrence loss amount. If it is not possible to identify the clash claims, then the experience estimate of RELC has to include a clash loading based on judgment of the general type of exposure.

Step 8: Adjust the historical exposures to the rating period.

As in the example of the property quota share treaties, the historical exposure (premium) data has to be adjusted so that it is reasonably relevant to the rating period; the trending should therefore account for primary rate changes, underwriting changes, and other changes in exposure that may affect the loss potential of the treaty.

Step 9: Estimate an experience expected loss cost, PVRELC, and, if desirable, a loss cost rate, PVRELC/PCP.

Assumption: we have trended and developed excess losses for all classes of Allstate's casualty exposure. The standard practice is to add the pieces up, as seen in Table 7.

## Table 7: Allstate Insurance Company Casualty Business

| (1) Accident Year | (2) On-level PCP in \$ | (3) Trended and Developed Excess Loss (estimated RELC) in \$ | (4) Estimated Loss Cost Rate in %: (3)/(2) |
|---|---|---|---|
| 2002 | 171,694 | 6,714 | 3.91 |
| 2003 | 175,906 | 9,288 | 5.28 |
| 2004 | 178,152 | 13,522 | 7.59 |
| 2005 | 185,894 | 10,820 | 5.82 |
| 2006 | 188,344 | 9,134 | 4.85 |
| 2007 | 191,348 | 6,658 | 3.48 |
| 2008 | 197,122 | 8,536 | 4.33 |
| 2009 | 198,452 | 12,840 | 6.47 |
| 2010 | 99,500 | 2,826 | 2.84 |
| Total | 1,586,412 | 80,336 | 5.06 |
| Total w/o 2010 | 1,486,912 | 77,510 | 5.21 |

Source: own calculation based on Patrik (2001)

The average loss cost rate for the eight years is 5.21%; the year 2010 was eliminated as it is too green (undeveloped), and there does not seem to be a particular trend from year to year.

Table 7 gives us the experience-based estimate RELC/PCP = 5.21%, but this estimate has to be loaded for the existing mass tort exposure, and also for the clash claims if we had insufficient information on clash claims in the claims data.
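The eight-year average can be checked with a short script (premium and loss figures as given in Table 7; the row sums differ from the printed totals by a trivial rounding amount, and the rate agrees at 5.21%):

```python
# Sketch of the experience loss cost rate from Table 7.
data = {  # accident year: (on-level PCP, trended/developed excess loss)
    2002: (171_694, 6_714), 2003: (175_906, 9_288), 2004: (178_152, 13_522),
    2005: (185_894, 10_820), 2006: (188_344, 9_134), 2007: (191_348, 6_658),
    2008: (197_122, 8_536), 2009: (198_452, 12_840), 2010: (99_500, 2_826),
}

# 2010 is excluded as too immature (undeveloped).
mature = {y: v for y, v in data.items() if y != 2010}
pcp = sum(p for p, _ in mature.values())
relc = sum(l for _, l in mature.values())
print(f"experience loss cost rate: {relc / pcp:.2%}")  # 5.21%
```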

Step 10: Estimate a "credibility" loss cost or loss cost rate from the exposure and experience loss costs or loss cost rates.

The experience loss cost rate has to be weighed against the exposure loss cost rate we already calculated. If there are different answers that cannot be further reconciled, the final answers for the \$1,400,000 excess of \$600,000 claim count and severity may be based on credibility weighting of these separate estimates. The differences should not be ignored, however; they should be included in the estimates of the parameter (and model) uncertainty, giving rise to more realistic measures of the variances and of the risk.

Assumption: a simple situation, where only the experience loss cost estimate and the exposure loss cost estimate are weighed together. The six considerations for deciding how much weight should be given to the exposure loss cost estimate are:

1. "The accuracy of the estimate of RCF, the primary rate correction factor, and thus the accuracy of the primary expected loss cost or loss ratio
2. The accuracy of the predicted distribution of subject premium by line of business
3. For excess coverage, the accuracy of the predicted distribution of subject premium by increased limits table for liability, by state for workers compensation, or by type of insured for property, within a line of business
4. For excess coverage, the accuracy of the predicted distribution of subject premium by policy limit within increased limits table for liability, by hazard group for workers compensation, by amount insured for property
5. For excess coverage, the accuracy of the excess loss cost factors for coverage above the attachment point
6. For excess coverage, the degree of potential exposure not contemplated by the excess loss cost factors"

The credibility of the exposure loss cost estimation decreases if there are problems with any of these six items listed.

The six considerations for deciding how much weight can be given to the experience loss cost estimate are:

1. "The accuracy of the estimates of claims cost inflation
2. The accuracy of the estimates of loss development
3. The accuracy of the subject premium on-level factors
4. The stability of the loss cost, or loss cost rate, over time
5. The possibility of changes in the underlying exposure over time
6. For excess coverage, the possibility of changes in the distribution of policy limits over time"

The credibility of the experience loss cost estimate likewise lessens with problems with any of these six items.

Assumption: the credibility loss cost rate is RELC/PCP = 5.75%.

For each exposure category a loss discount factor is estimated, based on the expected loss payment pattern for the exposure in the layer \$1,400,000 excess of \$600,000 and on a chosen investment yield. Most actuaries support using a risk-free yield, such as U.S. Treasuries for U.S. business, with a maturity approximating the average claim payment lag. Discounting is significant only for longer-tailed business.

In practice, for bond maturities between five and ten years, it is better to use a single, constant fixed rate.

Assumption: the overall discount factor for the loss cost rate of 5.75% is RDF = 75%, which gives PVRELC/PCP = RDF x RELC/PCP = 0.75 x 5.75% = 4.31%, or PVRELC = 4.31% x \$200,000,000 = \$8,620,000.

Steps 11 and 12 are reversed for this example.

Step 12: Specify values for RCR, RIXL, and RTER

Assumption: the standard guidelines for this size and type of a contract and this type of an exposure specify RIXL = 5% and RTER = 15%.

The reinsurance pure premium RPP can be calculated as RPP = PVRELC/(1 − RTER) = \$8,620,000/0.85 = \$10,141,176, with an expected profit of RPP − PVRELC = \$10,141,176 − \$8,620,000 = \$1,521,176 for the risk transfer. As RCR = 0%, the technical reinsurance premium is RP = RPP/(1 − RIXL) = \$10,141,176/0.95 = \$10,674,922. This technical premium is above the maximum of \$10,000,000 specified by the Allstate Insurance Company.
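The premium build-up can be sketched with the document's own values (RTER = 15%, RIXL = 5%, no ceding commission):

```python
# Sketch of the Step 12 premium build-up, using the values in the text:
# discounted expected loss PVRELC, target economic return RTER = 15%,
# internal expense loading RIXL = 5%, ceding commission rate RCR = 0.
PVRELC = 8_620_000
RTER, RIXL = 0.15, 0.05

RPP = round(PVRELC / (1 - RTER))   # reinsurance pure premium
RP = RPP / (1 - RIXL)              # technical reinsurance premium
print(f"RPP    = {RPP:,.0f}")       # 10,141,176
print(f"profit = {RPP - PVRELC:,.0f}")  # 1,521,176
print(f"RP     = {RP:,.0f}")        # 10,674,922
```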

If there is nothing wrong with the technical calculations, the reinsurer has two options. The first is to accept the expected reinsurance premium of \$10,000,000 at a rate of 5%, with the expected profit reduced to \$10,000,000 − \$8,620,000 = \$1,380,000.

The second is to propose a variable rate contract, with the reinsurance rate varying with the reinsurance loss experience; in this case, a retrospectively rated contract.

As the Allstate Insurance Company is asking for a retrospectively rated contract, we select the second possibility. To construct a fair and balanced rating plan, the distribution of the reinsurance aggregate loss has to be estimated. We therefore proceed with Step 11.

Step 11: Estimate the probability distribution of the aggregate reinsurance loss if desirable, and perhaps other distributions such as for claims payment timing.

In this step the Gamma distribution approximation will be used. As our example is a low (excess) claim frequency situation, the standard risk-theoretic model for aggregate losses will be used together with the first two moments of the claim count and claim severity distributions to approximate the distribution of the aggregate reinsurance loss.

The aggregate loss in the standard model is written as the sum of the individual claims, as follows.

## Formula : Aggregate Loss

L = X1 + X2 + … + XN

with

L as a random variable (rv) for the aggregate loss

N as a rv for the number of claims (events, occurrences)

Xi as a rv for the dollar size of the ith claim

Here N refers to the excess number of claims and Xi to the excess amount of the ith claim.

To see how the standard risk-theoretic model relates to the distributions of L, N and the Xi's, see Patrik (2001). We assume that the Xi's are identically and independently distributed and also independent of N; it then follows that the kth moment of L is determined completely by the first k moments of N and the Xi's. The following relationships hold.

## Formula : First Two Central Moments of the Distribution of Aggregate Loss under the Standard Risk Theoretic Model

E[L] = E[N] x E[X]

Var[L] = E[N] x E[X^2] + (Var[N] − E[N]) x E[X]^2
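These moment formulas can be sketched directly; the example numbers below are hypothetical, not from the document. Note that for a Poisson claim count, Var[N] = E[N] and the second term of the variance vanishes:

```python
# Minimal sketch of the first two moments of aggregate loss L under the
# standard risk-theoretic model. Example inputs are hypothetical.
def aggregate_moments(e_n, var_n, e_x, e_x2):
    """Return (E[L], Var[L]) from claim count and severity moments."""
    e_l = e_n * e_x
    var_l = e_n * e_x2 + (var_n - e_n) * e_x ** 2
    return e_l, var_l

# e.g. an over-dispersed count (variance above the mean, as for a
# negative binomial) and a severity with mean 500,000, E[X^2] = 5e11:
e_l, var_l = aggregate_moments(e_n=2.0, var_n=3.0, e_x=500_000.0, e_x2=5.0e11)
print(f"E[L] = {e_l:,.0f}, SD[L] = {var_l ** 0.5:,.0f}")
```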

Assumption: the E[L] = RELC =5.75%*\$200,000,000 = \$11,500,000 (undiscounted).

We assume, simplistically, that the excess claim sizes are independent and identically distributed and independent of the excess claim (occurrence) count. Usually this is a reasonable assumption.

For our layer \$1,400,000 excess of \$600,000, our modeling assumptions and results are shown in the formula below.