Aggregating statistics for multiple columns in pandas with groupby

I’ve recently started using Python’s excellent Pandas library as a data analysis tool, and, while the transition from R’s excellent data.table library has been frustrating at times, I’m finding my way around and most things work quite well.

One aspect that I’ve recently been exploring is the task of grouping large data frames by different variables and applying summary functions to each group. This is accomplished in Pandas using the “groupby()” and “agg()” functions of Pandas’ DataFrame objects.

A Sample DataFrame

In order to demonstrate the effectiveness and simplicity of the grouping commands, we will need some data. For an example dataset, I have extracted my own mobile phone usage records; I analyse this type of data using Pandas during my work on KillBiller. If you’d like to follow along, the full CSV file is available here.

The dataset contains 830 entries from my mobile phone log spanning a total time of 5 months. The CSV file can be loaded into a pandas DataFrame using the pandas.DataFrame.from_csv() function, and looks like this:

 

index  date            duration   item  month    network    network_type
0      15/10/14 06:58    34.429   data  2014-11  data       data
1      15/10/14 06:58    13.000   call  2014-11  Vodafone   mobile
2      15/10/14 14:46    23.000   call  2014-11  Meteor     mobile
3      15/10/14 14:48     4.000   call  2014-11  Tesco      mobile
4      15/10/14 17:27     4.000   call  2014-11  Tesco      mobile
5      15/10/14 18:55     4.000   call  2014-11  Tesco      mobile
6      16/10/14 06:58    34.429   data  2014-11  data       data
7      16/10/14 15:01   602.000   call  2014-11  Three      mobile
8      16/10/14 15:12  1050.000   call  2014-11  Three      mobile
9      16/10/14 15:30    19.000   call  2014-11  voicemail  voicemail
10     16/10/14 16:21  1183.000   call  2014-11  Three      mobile
11     16/10/14 22:18     1.000   sms   2014-11  Meteor     mobile

The main columns in the file are:

  1. date: The date and time of the entry
  2. duration: The duration (in seconds) for each call, the amount of data (in MB) for each data entry, and the number of texts sent (usually 1) for each sms entry.
  3. item: A description of the event occurring – can be one of call, sms, or data.
  4. month: The billing month that each entry belongs to – of the form ‘YYYY-MM’.
  5. network: The mobile network that was called/texted for each entry.
  6. network_type: Whether the number being called was a mobile, international (‘world’), voicemail, landline, or other (‘special’) number.

Phone numbers were removed for privacy. The date column can be parsed using the extremely handy dateutil library.
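
For reference, a minimal loading sketch (shown here with pd.read_csv, and assuming the file has been saved as phone_data.csv in the working directory):

    import pandas as pd
    import dateutil.parser

    # Read the call log; the 'date' column arrives as plain strings
    data = pd.read_csv('phone_data.csv')

    # Parse the dates with dateutil's flexible parser; dayfirst=True because
    # the entries are in DD/MM/YY format
    data['date'] = data['date'].apply(dateutil.parser.parse, dayfirst=True)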

Summarising the DataFrame

Once the data has been loaded into Python, Pandas makes the calculation of different statistics very simple. For example, the mean, max, min, standard deviation, and more for each column are easily calculated:
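
A few illustrative one-liners (a sketch, using the column names from the sample data above):

    data['duration'].max()          # longest single entry, in seconds
    data['duration'].mean()         # average duration across all entries
    data['item'].unique()           # the distinct entry types: call, sms, data
    data['duration'][data['item'] == 'call'].sum()   # total seconds spent on calls
    data['month'].value_counts()    # number of entries in each billing month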

The need for custom functions is minimal unless you have very specific requirements. The full range of basic statistics that are quickly calculable and built into the base Pandas package is:

Function   Description
count      Number of non-null observations
sum        Sum of values
mean       Mean of values
mad        Mean absolute deviation
median     Arithmetic median of values
min        Minimum
max        Maximum
mode       Mode
abs        Absolute value
prod       Product of values
std        Unbiased standard deviation
var        Unbiased variance
sem        Unbiased standard error of the mean
skew       Unbiased skewness (3rd moment)
kurt       Unbiased kurtosis (4th moment)
quantile   Sample quantile (value at %)
cumsum     Cumulative sum
cumprod    Cumulative product
cummax     Cumulative maximum
cummin     Cumulative minimum
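
A couple of the less obvious entries from the table, sketched against the duration column:

    data['duration'].quantile(0.90)   # 90th-percentile duration
    data['duration'].std()            # unbiased standard deviation
    data['duration'].cumsum()         # running total, row by row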

Summarising Groups in the DataFrame

There’s further power put into your hands by mastering the Pandas “groupby()” functionality. Groupby essentially splits the data into different groups depending on a variable of your choice. For example, the expression data.groupby('month') will split our current DataFrame by month. The groupby() function returns a GroupBy object, which essentially describes how the rows of the original dataset have been split. The GroupBy object’s .groups attribute is a dictionary whose keys are the computed unique groups and whose values are the axis labels belonging to each group. For example:
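
A small sketch of what the GroupBy object exposes (the keys shown come from the sample data; yours will depend on the months present):

    grouped = data.groupby('month')

    grouped.groups.keys()            # the unique billing months, e.g. '2014-11', '2014-12', ...
    len(grouped.groups['2014-11'])   # how many row labels fall into the '2014-11' group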

Functions like max(), min(), mean(), first(), and last() can be quickly applied to the GroupBy object to obtain summary statistics for each group – an immensely useful capability. This functionality is similar to the dplyr and plyr libraries for R. Different variables can be excluded from or included in each summary.
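
For example (a sketch; any of the summary functions above can be swapped in):

    # Number of entries recorded in each month
    data.groupby('month')['date'].count()

    # Total duration of calls only, per network
    data[data['item'] == 'call'].groupby('network')['duration'].sum()

    # The first entry in each month
    data.groupby('month').first()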

You can also group by more than one variable, allowing more complex queries.
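
For instance, grouping on both the month and the item type (a sketch):

    # How many calls, texts, and data entries occurred in each month
    data.groupby(['month', 'item'])['date'].count()

    # Total "duration" per month, split by item type
    data.groupby(['month', 'item'])['duration'].sum()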

Multiple Statistics per Group

The final piece of syntax that we’ll examine is the “agg()” function for Pandas. The aggregation functionality provided by the agg() function allows multiple statistics to be calculated per group in one calculation. The syntax is simple, and is similar to that of MongoDB’s aggregation framework.

Instructions for aggregation are provided in the form of a Python dictionary. Use the dictionary keys to specify the columns upon which you’d like to operate, and the values to specify the function to run.

For example:
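
One aggregation per column – total duration, number of entries, and the first date in each month (a sketch using the sample data’s column names):

    data.groupby('month').agg({
        'duration': 'sum',          # total seconds of calls / MB of data / texts
        'network_type': 'count',    # number of entries in the month
        'date': 'first',            # date of the first entry in the group
    })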

The aggregation dictionary syntax is flexible and can be defined before the operation. You can also define functions inline using “lambda” functions to extract statistics that are not provided by the built-in options.
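
A sketch of both ideas – the dictionary defined up front, with a lambda for a statistic that isn’t built in (the date column must already be parsed to datetimes for the subtraction to work):

    # Define the aggregation instructions first...
    aggregations = {
        'duration': 'sum',                    # total duration per group
        'date': lambda x: max(x) - min(x),    # time span covered by the group
    }

    # ...then apply them to each (month, item) group
    data.groupby(['month', 'item']).agg(aggregations)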

The final piece of the puzzle is the ability to rename the newly calculated columns and to calculate multiple statistics from a single column in the original data frame. Such calculations are possible through nested dictionaries, or by passing a list of functions for a column. Our final example calculates multiple values from the duration column and names the results appropriately. Note that the results have multi-indexed column headers.
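
A sketch of that final step; nested renaming dictionaries were accepted in older pandas versions but have since been deprecated, so here a list of functions is passed for each column and the multi-indexed headers are flattened and renamed afterwards:

    # Several statistics from the single 'duration' column, per month and item
    summary = data.groupby(['month', 'item']).agg({
        'duration': ['min', 'max', 'sum'],    # three statistics from one column
        'date': ['first', 'count'],
    })

    # The column headers are now multi-indexed tuples such as ('duration', 'sum');
    # flatten them and give the key results friendlier names
    summary.columns = ['_'.join(col) for col in summary.columns]
    summary = summary.rename(columns={'duration_sum': 'total_duration',
                                      'date_count': 'num_entries'})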


The groupby functionality in Pandas is well documented in the official docs and performs at speeds on a par with R’s data.table and dplyr libraries (unless you have massive data and are picky about your milliseconds).

If you are interested in another example for practice, I used these same techniques to analyse weather data for this post, and I’ve put “how-to” instructions here.

  1. Great post! Learned so much from your blog. Thanks.

    BTW: How to draw a table like you did in out[47] ? It looks nice !
    Thank you !

    • Hi Chandler, thanks for writing in, great to see that people are reading the blog and finding it interesting.

      The output in the diagram is the default layout for Pandas DataFrames when you print them in a “Jupyter notebook”. If you’re not already using notebooks, I would recommend having a look – they’re a great tool for sharing analysis results and exploring things. You should find the install instructions at http://jupyter.org, and they also come installed with the “Anaconda” Python distribution: https://www.continuum.io/downloads

  2. This is amazing, thank you! One question, once you’ve grouped and aggregated data, how do you select it and filter on it? For example, in your last example, you have a column for count. How would you limit the data in your df to only include counts of above or below a certain number? This is easy for me in SQL, but I haven’t been able to understand how to do this in Pandas.

    Thanks!

    • Hi Ashley, thanks for the feedback. Filtering in Pandas is pretty easy; I tend to go with logical vectors to filter the data frame. So, for example, to filter a data frame “df” on the “count” variable, you can use df[df['count'] > 5] or df[df['count'] == 10], or you can specify the index separately:
      idx = (df['count'] > 1) & (df['count'] < 10)  # get index where count is between 1 and 10
      df = df[idx]  # filter the actual data frame

  3. Hi, I tried using agg() to get the mean, std and max for a column in a DF, but it gives me an error: ‘Series object has no attribute agg’. Could you help me with that?

    • Hey Am, sounds like you are trying to apply the .agg() function to a pandas Series rather than a DataFrame – have a look at the datatype to check before you run the code.

  4. Thanks – very helpful – please note typo in the first code block

    data = pd.DataFrame.from_csv('phone_data.cv') – you have cv instead of csv – took me a few runs to figure it out!

  5. Excellent blog. Thanks a lot for taking time to put this together and in the process helping many people like me understand Pandas better.
    There is lot of material out there on Pandas but I think this is one of the best in terms of explaining stuff with excellent example and clarity.
    Great Job!

    • Thanks Chala, glad that you found it useful. I think it fits well with the pandas stuff out there – but perhaps with a slant towards a data-science user, which is how I use pandas!

  6. Very nice write-up. Do you know how to preserve the order of the aggregated columns? They do not show up in the same order as given in the aggregators object.

    • Great question, and I don’t know the answer – the columns in the results do appear to be relatively randomly ordered. There appears to be a relationship with whether you have “sub-queries” to the order in the pandas output, but I think you may just have to order them yourself afterwards if order is important.

  7. Thank you so much for this post! You’ve solved the problem I’d been struggling with for ages due to a misunderstanding about pandas operations. I can finally move on with my project!

  8. Thanks for a great post, really useful!
    I just had one problem reproducing your last block on defining the aggregation calculation and renaming columns – specifically

    'num_days': lambda x: max(x) - min(x)  # Calculate the date range per group

    returned an error

    TypeError: unsupported operand type(s) for -: 'str' and 'str'

    I am using Python 3.5 – don’t know if that makes a difference?

    • Hey Michael, sounds like the data type for the column “num_days” in your data frame is being loaded as a string. Have a look using data['num_days'].dtype and ensure that it’s an integer/float before you run the code. There may be something up with the CSV input data in this case.

  9. Great blog!!! Very helpful.
    I have a question – please help me to clarify.

    I have to add a new column to my pandas dataframe and need to copy records into the new column from another column of the same dataframe, based on a condition like df.groupby([id]).first().

    I know how to do it in SAS but am not sure how to do it in pandas:

    In SAS I can do something like this:

    data df;
    retain col_1 " ";
    set df1;
    by id;
    if first.id then col_1 = col_2;
    else col_1 = col_1;
    run;
