3.4.3. Dealing with Outliers

Outliers can indicate a problem with your dataset (e.g. data errors to fix or encoding choices to understand). And even if they are correct data, they might skew your analysis so that it’s less useful! For example, in the first plot on the “Role of Viz” page, the outliers in datasets III and IV will cause your analysis (“What is the slope of the line describing how X and Y are related?”) to give an answer that doesn’t reflect a layman’s sense of the “typical” relationship in those datasets.

3.4.3.1. Finding outliers

How can we find outliers?

  1. Plot your data: scatterplot, hexplot, box plot, density, etc.

  2. Compute z-scores and .describe() the data

Plotting is essential in EDA and data cleaning, as we’ve covered. I’m going to suggest the second route, though, as a quick and systematic way to identify which variables you should look into more.
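To make the z-score route concrete before we touch real data, here is a minimal sketch on made-up data: standardize each value (subtract the mean, divide by the standard deviation), then flag anything more than 4 standard deviations from the mean.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
s = pd.Series(np.append(rng.normal(size=1000), 50))  # 1,000 normal draws plus one planted outlier

z = (s - s.mean()) / s.std()  # z-score: distance from the mean in standard-deviation units
print((z.abs() > 4).sum())    # almost surely flags just the planted point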

Let’s download our firm data (ccm).

# copied from 3.3.4.1

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# these three are used to download the file
from io import BytesIO
from zipfile import ZipFile
from urllib.request import urlopen

url = 'https://github.com/LeDataSciFi/ledatascifi-2022/blob/main/data/CCM_cleaned_for_class.zip?raw=true'

#firms = pd.read_stata(url)   
# <-- that code would work, but GH said it was too big and
# forced me to zip it, so here is the work around to download it:

with urlopen(url) as request:
    data = BytesIO(request.read())

with ZipFile(data) as archive:
    with archive.open(archive.namelist()[0]) as stata:
        ccm = pd.read_stata(stata)				

And load a utility function called outlier_report from the community codebook.

from IPython.display import display # display() is built into notebooks; the explicit import lets this run as a script

def outlier_report(df,vars_to_examine=None,color='red',thres=4,
                   return_df=False,no_print=False):
    '''
    Parameters
    ----------
    df : DATAFRAME
        Input dataframe
    vars_to_examine : LIST, optional
        List of variables to examine from dataframe. The default is df.columns.
    color : STRING, optional
        Color for cell highlighting. The default is 'red'.
    thres : int, optional
        Highlight cells where z score is above thres. The default is 4.
    return_df : Boolean, optional
        If true, will return the df obj (without styling) for further use. 
        The default is False.
    no_print : Boolean, optional
        If true, will not print. 
        The default is False.
    
    Displays (if no_print=False)
    -------
    Table with distribution of z-scores of variables of interest. 
    
    Returns (if return_df=True)
    -------
    Table with distribution of z-scores of variables of interest (without styling).     
    '''
        
    def highlight_extreme(s):
        '''
        Highlight extreme values in a series.
        '''
        is_extreme = abs(s) > thres
        return ['background-color: '+color if v else '' for v in is_extreme]
    
    if vars_to_examine is None:
        vars_to_examine=df.columns
    
    _tab = (
            # compute z scores
            ((df[vars_to_examine] - df[vars_to_examine].mean())/df[vars_to_examine].std())
            # output dist of z   
            .describe(percentiles=[.01,.05,.25,.5,.75,.95,.99]).T
            # add a new column = largest absolute value of the min and max columns
            .assign(max_z_abs = lambda x: x[['min','max']].abs().max(axis=1))
            # now sort on it
            .sort_values('max_z_abs',ascending = False)
    )
    
    if not no_print:
        display(_tab
             .style.format('{:,.2f}')
                   .format({"count": '{:,.0f}'})           
                   .apply(highlight_extreme, 
                          subset=['mean', 'std', 'min', '1%', '5%', '25%', '50%', '75%', '95%','99%', 'max', 'max_z_abs'])
        ) 
    
    if return_df:
        return _tab

Now we can pick the variables we want to check and use our utility function:

vars_to_check = ['l_a', 'l_sale', 'prof_a', 'mb', 'ppe_a', 'capx_a', 'xrd_a', 
                 'cash_a', 'div_d', 'td_a', 'td_mv', 'dltt_a', 'dv_a', 'invopps_FG09',
                 'sales_g', 'short_debt', 'long_debt_dum', 'atr', 'smalltaxlosscarry', 
                 'largetaxlosscarry', 'tnic3hhi', 'tnic3tsimm', 'prodmktfluid', 
                 'delaycon', 'equitydelaycon', 'debtdelaycon', 'privdelaycon', 'l_emp',
                 'l_ppent', 'l_laborratio']

outlier_report(ccm,vars_to_check,thres=4)
variable count mean std min 1% 5% 25% 50% 75% 95% 99% max max_z_abs
dv_a 206,679 0.000008 1.000025 -0.118470 -0.084514 -0.084514 -0.084514 -0.084514 -0.007486 0.187701 0.700054 356.749908 356.749908
sales_g 196,652 -0.000000 1.000001 -4.746074 -0.033699 -0.023137 -0.015847 -0.013135 -0.009619 0.006292 0.074698 281.981049 281.981049
mb 219,269 0.000003 1.000350 -0.197607 -0.145698 -0.125848 -0.101243 -0.076536 -0.007981 0.297340 0.960738 272.324158 272.324158
invopps_FG09 193,956 0.000001 1.000081 -0.178516 -0.151816 -0.130827 -0.100820 -0.070047 -0.002876 0.286780 0.908775 261.272278 261.272278
dltt_a 222,002 0.000014 1.000429 -0.556220 -0.556220 -0.556220 -0.525482 -0.193894 0.319376 1.162748 2.012844 234.930130 234.930130
td_a 221,468 0.000007 1.000288 -0.783106 -0.649797 -0.649797 -0.521486 -0.126007 0.326616 1.103748 1.851083 199.825989 199.825989
prof_a 217,167 -0.000000 1.000022 -187.908798 -0.957429 -0.337969 -0.025532 0.039572 0.096390 0.195927 0.316679 171.164459 187.908798
xrd_a 223,001 -0.000111 0.999853 -2.599572 -0.251713 -0.251713 -0.251713 -0.251713 -0.090278 0.965442 3.058030 112.508926 112.508926
capx_a 206,399 0.000007 0.999992 -23.300566 -0.708069 -0.706551 -0.538163 -0.268454 0.183921 1.619012 3.727182 95.099922 95.099922
short_debt 194,560 0.000016 1.000243 -15.076594 -0.976676 -0.976676 -0.820585 -0.384852 0.619398 2.044735 2.044735 67.278854 67.278854
prodmktfluid 88,332 0.000017 0.999996 -1.913105 -1.532486 -1.279758 -0.729692 -0.176650 0.551248 1.840396 2.969515 8.516870 8.516870
l_laborratio 199,821 0.000002 0.999993 -6.399152 -2.043165 -1.357046 -0.634864 -0.131639 0.475080 1.912665 2.982906 7.570347 7.570347
tnic3tsimm 96,951 0.000004 1.000006 -0.510284 -0.456139 -0.454538 -0.438301 -0.376669 -0.149755 2.407460 4.657985 6.706356 6.706356
debtdelaycon 57,130 0.000000 1.000001 -3.087471 -1.959609 -1.474460 -0.693134 -0.105186 0.602144 1.808033 2.723151 5.990375 5.990375
l_emp 208,991 -0.000005 1.000000 -0.891766 -0.891766 -0.880651 -0.762565 -0.391239 0.449048 2.165354 3.227964 5.856739 5.856739
equitydelaycon 57,130 0.000002 0.999996 -2.940712 -1.838654 -1.428327 -0.713450 -0.113647 0.575714 1.865933 2.817583 5.281975 5.281975
privdelaycon 57,130 -0.000001 1.000001 -3.145257 -1.930963 -1.476244 -0.702254 -0.095857 0.590432 1.832518 2.740916 5.079062 5.079062
l_a 222,978 -0.000001 1.000002 -5.038009 -1.997626 -1.551229 -0.726541 -0.055015 0.679680 1.713310 2.473113 4.116832 5.038009
delaycon 57,130 0.000000 0.999998 -2.856582 -1.921187 -1.482765 -0.718439 -0.092811 0.625459 1.791125 2.681112 4.952653 4.952653
l_sale 218,779 0.000021 1.000002 -4.775811 -2.633651 -1.631154 -0.627081 -0.006244 0.655297 1.643119 2.292021 3.389674 4.775811
cash_a 222,332 -0.000001 0.999999 -1.123044 -0.784296 -0.767143 -0.666088 -0.421390 0.253314 2.358167 3.608904 4.096847 4.096847
l_ppent 218,212 -0.000000 0.999959 -1.455006 -1.455006 -1.373409 -0.801000 -0.144473 0.655299 1.867579 2.585898 3.810933 3.810933
tnic3hhi 96,951 0.000010 0.999986 -0.966308 -0.915834 -0.852095 -0.670299 -0.377095 0.280165 2.341212 3.602426 3.602426 3.602426
ppe_a 218,189 0.000011 0.999983 -1.090727 -1.090727 -1.064053 -0.844128 -0.289022 0.580115 2.083263 2.529110 2.886195 2.886195
td_mv 217,961 0.000018 0.999981 -1.254491 -1.070976 -1.070976 -0.926746 -0.249189 0.702723 1.950860 2.467848 2.752145 2.752145
smalltaxlosscarry 148,275 0.000050 1.000280 -0.457720 -0.457720 -0.457720 -0.457720 -0.457720 -0.457720 2.185233 2.185233 2.185233 2.185233
long_debt_dum 222,025 -0.000078 1.000694 -2.155536 -2.155536 -2.155536 0.464023 0.464023 0.464023 0.464023 0.464023 0.464023 2.155536
largetaxlosscarry 148,275 0.000057 0.999194 -0.591850 -0.591850 -0.591850 -0.591850 -0.591850 1.687363 1.687363 1.687363 1.687363 1.687363
atr 222,168 0.000019 0.999959 -1.483940 -1.483940 -1.483940 -0.629190 -0.360728 1.332937 1.332937 1.332937 1.332937 1.483940
div_d 206,688 0.000107 1.001241 -0.907077 -0.907077 -0.907077 -0.907077 -0.907077 1.103647 1.103647 1.103647 1.103647 1.103647

And you can extract a list of problematic variables from this function too:

vars_with_big_outliers = list(outlier_report(ccm,vars_to_check,thres=4,return_df=True,no_print=True)
                              .query('max_z_abs > 5').index)
print(vars_with_big_outliers)
['dv_a', 'sales_g', 'mb', 'invopps_FG09', 'dltt_a', 'td_a', 'prof_a', 'xrd_a', 'capx_a', 'short_debt', 'prodmktfluid', 'l_laborratio', 'tnic3tsimm', 'debtdelaycon', 'l_emp', 'equitydelaycon', 'privdelaycon', 'l_a']

3.4.3.2. Options for dealing with outliers:

  1. Fix the data: Look at the data and correct it. Sometimes this is costly or impossible to do, but it should be the first thing you look into.

  2. Winsorize: Change the outliers so that they are closer to the rest of the distribution.

    • Example: Any value above the 99th percentile for a variable is changed to equal the 99th percentile; do the same for the left tail

    • This is a common and cheap ad-hoc correction

    • Intuitively: Winsorizing downplays the weight of the outlier in your analysis because the values are reduced, without tossing out the observation altogether

    • Tough question that depends on the data/application: What is the “right” amount of the distribution to winsorize? To the 99th percentile, or the 95th? In practice, 1%/99% is most common.

  3. Censor: Delete observations that are outliers. This is rarely done in practical applications unless the outlier suggests a critical data error for the observation. (A toy sketch contrasting options 2 and 3 follows this list.)
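To make options 2 and 3 concrete, here is a minimal toy sketch (made-up numbers; real winsorizing typically clips both tails, as in the next section):

import pandas as pd

s = pd.Series([1, 2, 3, 4, 1000])  # toy data with one obvious outlier
p99 = s.quantile(.99)              # the (interpolated) 99th percentile

winsorized = s.clip(upper=p99)     # option 2: outlier pulled in to the 99th percentile
censored   = s[s <= p99]           # option 3: outlier dropped entirely

print(len(winsorized), len(censored))  # 5 4 -- winsorizing keeps the observation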

3.4.3.3. Winsorizing

Use the function winsorizer_with_missing, available in the community codebook. (A skeleton version is below so I can do some quick examples.) By default, it applies two rules:

  1. If a value is below the 1st percentile, change it to the 1st percentile

  2. If a value is above the 99th percentile, change it to the 99th percentile

The first consideration is what “cutoff values” to use. Sometimes researchers use different values, say 5th and 95th percentiles.

The second consideration is whether to winsorize over the whole dataset or within subgroups.

The latter is most commonly useful when the distribution of a variable changes over time. For example, the average firm does much more R&D now than in 2000. So the 99th percentile of R&D in 2020 and 2000 are very different.

If you winsorize R&D over both years at once, you’ll chop off the lower values in 2000 and the upper values in 2020. Perhaps it makes more sense to winsorize each year separately!
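If you want to see whether this matters for a given variable, you can compare the pooled cutoff to per-group cutoffs directly. A quick sketch using the ccm data from above (xrd_a is the R&D variable and fyear the fiscal year, as elsewhere on this page):

# compare one pooled R&D cutoff to a cutoff computed within each fiscal year
print(ccm['xrd_a'].quantile(.99))                   # 99th percentile, all years pooled
print(ccm.groupby('fyear')['xrd_a'].quantile(.99))  # 99th percentile, year by year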

Code:

# over the whole dataset:
df = winsorizer_with_missing(df) # adtl options available

# just some variables:
df = winsorizer_with_missing(df, cols=[<a list here>])

# winsorize each year separately 
df = df.groupby('year').apply(lambda x: winsorizer_with_missing(x))

# winsorize each year separately for only some variables, via the "cols" argument:
df = df.groupby('year').apply(lambda x: winsorizer_with_missing(x, cols=[<a list here>]))

3.4.3.3.1. Example 1

First let’s look at the min and max of each variable:

practice_df = ccm.copy() # don't do this "copy" step in your code
                         # just run the remaining lines, replacing the
                         # practice_df's name with the name of the df
                         # you're working on

# output summary stats
(practice_df
 .describe(percentiles=[.01,.05,.25,.5,.75,.95,.99]).T
 # add a new column = largest absolute value of the min and max columns
 .assign(abs_maxmin = lambda x: x[['min','max']].abs().max(axis=1))
 # now sort on it
 .sort_values('abs_maxmin',ascending = False)
 [['count','min','1%',"99%",'max']] # only need these to see winsorizing in action
 [11:13] # just print a few vars
 .style.format('{:,.2f}')
)        
variable count min 1% 99% max
prof_a 217,167.00 -218.00 -1.06 0.42 198.67
tnic3tsimm 96,951.00 0.00 1.00 95.21 132.94

Now let’s load a version of the function:

def winsorizer_with_missing(df,low_=.01,hi_=.99,cols=None): 
    # this function, with more features and explanation
    # is in the community codebook!
    # note: it modifies the dataframe you pass in (no copy is made)
    
    if cols is None: # if no cols provided, winsorize all columns
        cols=df.columns
        
    # clip each column at its own quantiles (axis=1 aligns the
    # per-column bounds with the columns)
    df[cols] = df[cols].clip(lower=df[cols].quantile(low_),
                             upper=df[cols].quantile(hi_),
                             axis=1)
    return df
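One caveat before the examples: because this skeleton assigns into df[cols] without copying, it changes the dataframe you pass in. A quick toy check (made-up data):

toy = pd.DataFrame({'a': range(101)})  # 0 through 100
winsorizer_with_missing(toy)           # note: no assignment back...
print(toy['a'].min(), toy['a'].max())  # 1.0 99.0 -- ...yet the input itself was clipped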

When we use winsorizer_with_missing, the data is changed (in place, as the toy check above shows) and now the min and max equal the 1st/99th percentiles!

(
    #winsorize - remember, I am showing you an example
    winsorizer_with_missing(practice_df,
                            cols = ['prof_a','tnic3tsimm'])

    # print the output:    
    [['prof_a','tnic3tsimm']]
    .describe(percentiles=[.01,.99]).T
    [['count','min','1%',"99%",'max']] # only need these to see winsorizing in action
    .style.format('{:,.2f}') 
)    
variable count min 1% 99% max
prof_a 217,167.00 -1.06 -1.06 0.42 0.42
tnic3tsimm 96,951.00 1.00 1.00 95.21 95.21

When we winsorize by a group (here, the grouping is each year), the min no longer needs to equal the percentile threshold: each year is clipped at its own 1st/99th percentiles, so the pooled min is just the smallest of the per-year cutoffs. Also notice that the 1st and 99th percentiles (computed over all years) are a little different from the min and max. This is OK when we winsorize by subgroups.

(
    # fyear is the "Fiscal year" variable
    practice_df.groupby('fyear').apply(lambda x: winsorizer_with_missing(x,cols = ['prof_a','tnic3tsimm']))
    
    # print the output:    
    [['prof_a','tnic3tsimm']]
    .describe(percentiles=[.01,.99]).T
    [['count','min','1%',"99%",'max']] # only need these to see winsorizing in action
    .style.format('{:,.2f}') 
)
variable count min 1% 99% max
prof_a 217,167.00 -1.06 -0.96 0.40 0.42
tnic3tsimm 96,951.00 1.00 1.00 94.96 95.21

In this example so far, I just ran the winsorization function and printed summary stats. If you actually want to use the data after winsorizing, you need to save the updated data. Just adapt the code suggestion above.

So instead of

df = winsorizer_with_missing(df, cols=[<a list here>])

you’ll write

ccm = winsorizer_with_missing(ccm,cols=vars_with_big_outliers)
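A natural sanity check afterwards (a sketch reusing the functions from earlier on this page): re-run the outlier report on the winsorized data. The max_z_abs column should shrink dramatically.

# after winsorizing, re-run the report; the extremes should have shrunk a lot
outlier_report(ccm, vars_with_big_outliers, thres=4)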

3.4.3.3.2. Example 2 - Visual Intuition

Here are datasets III and IV from the “Role of Viz” page, after applying the same winsorizing function. (I flipped the X and Y axes because the plotting function can’t fit a vertical line.)

You can play with the amount of winsorization by using the slider. Notice that as you winsorize more,

  1. The extreme data points (along the X and Y dimensions) shrink.

  2. The estimated slope changes.

df_orig = sns.load_dataset("anscombe").query('dataset in ["III","IV"]')

# winsorize it a bunch of times and stack the results into one df to use with plotly

df_list = []
for tails in [0,.01,.05,.1,.15,.2]:
    df = df_orig.copy()
    df[["x", "y"]] = df_orig.groupby("dataset").apply(
        lambda x: winsorizer_with_missing(x[["x", "y"]], low_=tails, hi_=1 - tails)
    )
    df['winsorized']=tails
    df_list.append(df)
df_plot = pd.concat(df_list)  # DataFrame.append() is gone in newer pandas, so concat a list

# the plotly mechanics

import warnings
warnings.filterwarnings("ignore", message="divide by zero encountered in double_scalars")

import plotly.express as px # pip install plotly -- the animation below is from the plotly module
from IPython.display import display, HTML # IPython.core.display is deprecated; use IPython.display
from plotly.offline import init_notebook_mode, plot
init_notebook_mode(connected=True)

fig = px.scatter(df_plot,trendline="ols",facet_row="dataset",
                  y='x', x='y', animation_frame="winsorized", 
                 range_y=[4,20], range_x=[4,14],height=800,
                 title = "Same datasets, but I flipped the X and Y axes")
plot(fig, filename = 'quartet.html',auto_play=False)
display(HTML('quartet.html'))