Exploratory Data Analysis: A prerequisite for successful analysis and data modeling

Armel Djangone
13 min read · Jan 16, 2021

Introduction

Exploratory Data Analysis (EDA) is a crucial step before you start further analyzing or modeling your data. It prepares and cleans your data, and it provides the context needed to develop an appropriate model and interpret the results correctly. Some of the key steps in EDA include:

  • Summarizing main characteristics of the data
  • Gaining better understanding of the data set
  • Uncovering relationships between variables
  • Extracting important variables

Importing the dataset

The complete code and dataset can be found on my GitHub. Please feel free to download them at: https://github.com/amaso13/EDA-with-Python.git

Let’s begin by importing the necessary libraries in Python: NumPy, Pandas, Seaborn, and Matplotlib. To install Seaborn, we use pip, the Python package manager.
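A minimal sketch of the imports (the aliases np, pd, plt, and sns are the usual conventions):

```python
# If Seaborn is missing, install it first from a terminal:  pip install seaborn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# %matplotlib inline   # uncomment when working inside a Jupyter notebook
```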

Now that we have imported the necessary libraries, let’s load the dataset into a data frame called df using the Pandas library, and display the first five rows using head().
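A sketch of this step. To keep the snippet self-contained I read a tiny inline sample instead of the real file; in practice you would point read_csv at the CSV downloaded from the repository above:

```python
from io import StringIO
import pandas as pd

# Tiny inline stand-in for the automobile CSV
# (in practice: df = pd.read_csv("path/to/your-dataset.csv"))
sample = StringIO(
    "engine-size,highway-mpg,price\n"
    "130,27,13495\n"
    "152,25,16500\n"
    "109,30,8845\n"
)
df = pd.read_csv(sample)
print(df.head())   # display the first five rows
```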

Analyzing Individual Feature Patterns using Visualization

We imported the visualization packages Matplotlib and Seaborn above; don’t forget about “%matplotlib inline” to plot inside a Jupyter notebook.

How to choose the right visualization method?

When visualizing individual variables, it is important to first understand what type of variable you are dealing with; this will help us find the right visualization method for that variable. I will write an entire blog post about why data types matter in your analysis. For now, just keep in mind that, depending on the type of data, specific visualization methods may be appropriate.

Let’s take a look at the data type of each column in our dataset by using the dtypes attribute.

We notice different types of data: numeric data (int64 and float64), and strings (object). For example, we can calculate the correlation between variables of type int64 or float64, as they are numeric, using the method corr().
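As a sketch, again on a small synthetic frame: df.dtypes lists each column’s type, and corr() computes the pairwise correlations of the numeric columns (the numeric_only flag makes that explicit in recent pandas versions):

```python
import pandas as pd

# Synthetic stand-in with one column of each type
df = pd.DataFrame({
    "engine-size": [97, 109, 130, 152],              # int64
    "compression-ratio": [9.0, 9.4, 10.1, 8.0],      # float64
    "make": ["honda", "nissan", "toyota", "audi"],   # object
    "price": [5572, 8845, 13495, 16500],
})

print(df.dtypes)                       # one dtype per column
corr = df.corr(numeric_only=True)      # correlations of numeric columns only
print(corr)
```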

The diagonal elements are always one; we will study correlation, and more precisely the Pearson correlation, in depth at the end of this blog post.

Find the correlation between the following columns: bore, stroke, compression-ratio, and horsepower.
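A sketch of that exercise (synthetic values for illustration): select the four columns with a double-bracket slice and call corr() on the result:

```python
import pandas as pd

# Synthetic stand-in for the four columns
df = pd.DataFrame({
    "bore": [3.19, 3.03, 2.97, 3.31, 3.62],
    "stroke": [3.40, 3.39, 3.23, 3.19, 3.39],
    "compression-ratio": [9.0, 9.4, 10.1, 8.5, 7.5],
    "horsepower": [102, 115, 90, 121, 182],
})

subset_corr = df[["bore", "stroke", "compression-ratio", "horsepower"]].corr()
print(subset_corr)
```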

Continuous numerical variables:

Continuous numerical variables are variables that may contain any value within some range. Continuous numerical variables can have the type “int64” or “float64”. A great way to visualize these variables is by using scatterplots with fitted lines.

Let’s start understanding the (linear) relationship between an individual variable and the price. We can do this by using regplot, which plots the scatterplot plus the fitted regression line for the data.

Let’s see several examples of different linear relationships:

Positive linear relationship

  • Let’s find the scatterplot of “engine-size” and “price”
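A sketch of that plot, using synthetic engine-size and price values and a hypothetical output filename (in a notebook you would simply show the figure inline):

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend; not needed in a notebook
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Synthetic stand-in for the automobile data
df = pd.DataFrame({
    "engine-size": [97, 109, 130, 152, 181, 209, 258],
    "price": [5572, 8845, 13495, 16500, 21105, 30760, 36880],
})

# Scatterplot plus fitted regression line
ax = sns.regplot(x="engine-size", y="price", data=df)
ax.set_ylim(0,)   # prices cannot be negative
plt.savefig("engine_size_vs_price.png")
```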

Interpretation: We can see that as the engine size goes up, the price goes up: this indicates a positive, direct correlation between these two variables. Engine size seems like a pretty good predictor of price since the regression line is almost a perfect diagonal line.

We can examine the correlation between ‘engine-size’ and ‘price’ by calculating it with the method .corr(); we see it’s approximately 0.87.
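A sketch of that calculation on synthetic data (the real dataset gives roughly 0.87; these made-up values will differ):

```python
import pandas as pd

# Synthetic stand-in for the two columns
df = pd.DataFrame({
    "engine-size": [97, 109, 130, 152, 181, 209],
    "price": [5572, 8845, 13495, 16500, 21105, 30760],
})

pair_corr = df[["engine-size", "price"]].corr()
print(pair_corr.loc["engine-size", "price"])
```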

  • Let’s examine the relationship between highway-mpg and price; we will see that highway-mpg is a potential predictor variable of price

Interpretation: As the highway-mpg goes up, the price goes down: this indicates an inverse/negative relationship between these two variables. Highway mpg could potentially be a predictor of price.

We can examine the correlation between ‘highway-mpg’ and ‘price’ by using the method .corr(); we see it’s approximately -0.704.

Weak Linear Relationship

Let’s see if “peak-rpm” is a predictor variable of “price”.

Interpretation: Peak rpm does not seem like a good predictor of the price at all, since the regression line is close to horizontal. Also, the data points are very scattered and far from the fitted line, showing lots of variability. Therefore it is not a reliable predictor variable.

Similarly, we can examine the correlation between ‘peak-rpm’ and ‘price’ by using the method .corr(); we see it’s approximately -0.101616.

  • Let’s see if “stroke” is a predictor variable of “price”.

Interpretation: There is only a weak correlation between the variables ‘stroke’ and ‘price’, so regression will not work well here. Let’s demonstrate this by using regplot.

Categorical variables

These are variables that describe a ‘characteristic’ of a data unit, and are selected from a small group of categories. The categorical variables can have the type “object” or “int64”. A good way to visualize categorical variables is by using boxplots.

  • Let’s look at the relationship between “body-style” and “price”.
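A sketch of that boxplot on synthetic data; the output filename is hypothetical:

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend; not needed in a notebook
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Synthetic stand-in: a few prices per body style
df = pd.DataFrame({
    "body-style": ["sedan", "sedan", "hatchback", "hatchback", "wagon", "wagon"],
    "price": [13495, 16500, 5572, 8845, 12290, 15510],
})

# One box per body-style category, showing the spread of prices
ax = sns.boxplot(x="body-style", y="price", data=df)
plt.savefig("body_style_vs_price.png")
```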

Interpretation: We see that the distributions of price between the different body-style categories have a significant overlap, and so body-style would not be a good predictor of price.

  • Let’s examine “engine-location” and “price”:

Interpretation: Here we see that the distribution of price between these two engine-location categories, front, and rear, are distinct enough to take engine-location as a potential good predictor of price.

  • Let’s examine “drive-wheels” and “price”

Interpretation: Here we see that the distribution of price between the different drive-wheels categories differs; as such drive-wheels could potentially be a predictor of price.

Descriptive Statistical Analysis

Let’s first take a look at the variables by utilizing the describe() method.

The describe function automatically computes basic statistics for all continuous variables. Any NaN values are automatically skipped in these statistics.

This will show:

  • the count of that variable
  • the mean
  • the standard deviation (std)
  • the minimum value
  • the quartiles (25%, 50%, and 75%)
  • the maximum value

We can apply the method “describe” as follows:

The default setting of “describe” skips variables of type object. We can apply the method “describe” on the variables of type ‘object’ as follows:
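A sketch of both calls on a small synthetic frame:

```python
import pandas as pd

# Synthetic stand-in with one numeric and one object column
df = pd.DataFrame({
    "price": [5572, 8845, 13495, 16500],
    "body-style": ["hatchback", "sedan", "sedan", "wagon"],
})

print(df.describe())                     # numeric columns only (the default)
print(df.describe(include=["object"]))   # object (string) columns
```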

Data Cleaning

Data cleaning is the process of detecting and correcting (or removing) corrupt or inaccurate records from a record set, table, or database. It involves identifying incomplete, incorrect, inaccurate, or irrelevant parts of the data and then replacing, modifying, or deleting the dirty data, or renaming columns. Data cleaning is the most time-consuming task. There is a half-funny, half-true saying among data professionals: “80% of the data scientist’s job is data cleaning” (Mester T., 2020).

No matter how educated or experienced you are, or how strong your programming skills, without clean data you won’t be able to perform your analysis or modeling successfully. As data professionals, we need to learn and master data cleaning techniques.

In this stage, we will rename columns, drop columns, and perform some basic groupby operations.

  • Value counts

Value counts are a good way of understanding how many units of each characteristic/variable we have. We can apply the value_counts method on the column ‘drive-wheels’. Don’t forget that value_counts only works on a Pandas Series, not a Pandas DataFrame. As a result, we use only one bracket, df[‘drive-wheels’], not two brackets, df[[‘drive-wheels’]].

We can convert the series to a Dataframe as follows:

Let’s repeat the above steps but save the results to the dataframe “drive_wheels_counts” and rename the column ‘drive-wheels’ to ‘value_counts’.

  • Now let’s rename the index to ‘drive-wheels’:
  • We can repeat the above process for the variable ‘engine-location’.
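The value_counts steps above can be sketched like this (synthetic data; I assign the column name directly rather than using rename, which keeps the snippet working across pandas versions, where the default column name of value_counts differs):

```python
import pandas as pd

# Synthetic stand-in for the drive-wheels column
df = pd.DataFrame({"drive-wheels": ["fwd", "fwd", "rwd", "fwd", "4wd", "rwd"]})

# value_counts works on a Series (single bracket), not a DataFrame
counts = df["drive-wheels"].value_counts()

# Convert the Series to a DataFrame, rename the column and the index
drive_wheels_counts = counts.to_frame()
drive_wheels_counts.columns = ["value_counts"]
drive_wheels_counts.index.name = "drive-wheels"
print(drive_wheels_counts)
```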

Examining the value counts of engine-location, we see that it would not be a good predictor variable for the price: we only have three cars with a rear engine and 198 with an engine in the front, so this result is skewed. Thus, we are not able to draw any conclusions about the engine location.

  • Basics of Grouping

The “groupby” method groups data by different categories. The data is grouped based on one or several variables and analysis is performed on the individual groups.

For example, let’s group by the variable “drive-wheels”. We see that there are 3 different categories of drive wheels.

If we want to know, on average, which type of drive wheel is most valuable, we can group “drive-wheels” and then average them. We can select the columns ‘drive-wheels’, ‘body-style’ and ‘price’, then assign it to the variable “df_group_one”.

We can then calculate the average price for each of the different categories of data.
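A sketch of that grouping on synthetic data (I average only the price column explicitly, since recent pandas versions no longer silently drop non-numeric columns when averaging):

```python
import pandas as pd

# Synthetic stand-in for the three columns of interest
df = pd.DataFrame({
    "drive-wheels": ["rwd", "fwd", "fwd", "4wd", "rwd", "fwd"],
    "body-style": ["sedan", "sedan", "hatchback", "wagon", "convertible", "sedan"],
    "price": [21105, 13495, 5572, 9233, 30760, 16500],
})

df_group_one = df[["drive-wheels", "body-style", "price"]]

# Average price per drive-wheels category
avg_price = df_group_one.groupby(["drive-wheels"], as_index=False)["price"].mean()
print(avg_price)
```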

From our data, it seems rear-wheel drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel drive are approximately the same in price. You can also group by multiple variables. For example, let’s group by both ‘drive-wheels’ and ‘body-style’. This groups the dataframe by the unique combinations of ‘drive-wheels’ and ‘body-style’. We can store the results in the variable ‘grouped_test1’.

This grouped data is much easier to visualize when it is made into a pivot table. A pivot table is like an Excel spreadsheet, with one variable along the column and another along the row. We can convert the dataframe to a pivot table using the method “pivot” to create a pivot table from the groups.

In this case, we will leave the drive-wheel variable as the rows of the table, and pivot body-style to become the columns of the table:

Often, we won’t have data for some of the pivot cells. We can fill these missing cells with the value 0, but any other value could potentially be used as well. It should be mentioned that missing data is quite a complex subject and is an entire course on its own.
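The two-variable grouping and the pivot step can be sketched as follows (synthetic data; fillna(0) fills the empty pivot cells as described above):

```python
import pandas as pd

# Synthetic stand-in for the automobile data
df = pd.DataFrame({
    "drive-wheels": ["rwd", "fwd", "fwd", "4wd", "rwd", "fwd"],
    "body-style": ["sedan", "sedan", "hatchback", "wagon", "convertible", "sedan"],
    "price": [21105, 13495, 5572, 9233, 30760, 16500],
})

# Group by the unique combinations of drive-wheels and body-style
grouped_test1 = (
    df.groupby(["drive-wheels", "body-style"], as_index=False)["price"].mean()
)

# Pivot: drive-wheels as rows, body-style as columns
grouped_pivot = grouped_test1.pivot(index="drive-wheels", columns="body-style")
grouped_pivot = grouped_pivot.fillna(0)   # fill missing cells with 0
print(grouped_pivot)
```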

  • Using the “groupby” function to find the average “price” of each car based on “body-style”
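A sketch of that grouping, again on synthetic data:

```python
import pandas as pd

# Synthetic stand-in for body-style and price
df = pd.DataFrame({
    "body-style": ["sedan", "sedan", "hatchback", "wagon", "convertible"],
    "price": [21105, 13495, 5572, 9233, 30760],
})

# Average price per body-style category
avg_by_body = df.groupby(["body-style"], as_index=False)["price"].mean()
print(avg_by_body)
```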

Variables: Drive Wheels and Body Style vs Price

Let’s use a heat map to visualize the relationship between body style and price.

Interpretation: The heatmap plots the target variable (price) proportional to colour, with the variables ‘drive-wheels’ and ‘body-style’ on the vertical and horizontal axes respectively. This allows us to visualize how the price is related to ‘drive-wheels’ and ‘body-style’.

The default labels convey no useful information to us. Let’s change that:
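A sketch of the labelled heatmap, built from a hand-written pivot table of made-up average prices (the filename is hypothetical):

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend; not needed in a notebook
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic pivot: average price, drive-wheels (rows) x body-style (columns)
grouped_pivot = pd.DataFrame(
    [[9233.0, 0.0, 0.0],
     [5572.0, 14997.5, 0.0],
     [0.0, 21105.0, 30760.0]],
    index=["4wd", "fwd", "rwd"],
    columns=["hatchback", "sedan", "convertible"],
)

fig, ax = plt.subplots()
im = ax.pcolor(grouped_pivot, cmap="RdBu")

# Replace the default numeric ticks with the category names, centred on each cell
ax.set_xticks(np.arange(grouped_pivot.shape[1]) + 0.5)
ax.set_yticks(np.arange(grouped_pivot.shape[0]) + 0.5)
ax.set_xticklabels(grouped_pivot.columns)
ax.set_yticklabels(grouped_pivot.index)
fig.colorbar(im)
fig.savefig("drive_wheels_body_style_heatmap.png")
```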

The main question we want to answer in this post is: “What are the main characteristics which have the most impact on the car price?”

To get a better measure of the important characteristics, we look at the correlation of these variables with the car price, in other words: how is the car price dependent on this variable?

Correlation and Causation

Correlation: a measure of the extent of interdependence between variables.

Causation: a cause-and-effect relationship between two variables.

It is important to know the difference between these two, and that correlation does not imply causation. Determining correlation is much simpler than determining causation, as causation may require independent experimentation.

  • Pearson Correlation

The Pearson Correlation measures the linear dependence between two variables X and Y.

The resulting coefficient is a value between -1 and 1 inclusive, where:

  • 1: Total positive linear correlation.
  • 0: No linear correlation, the two variables most likely do not affect each other.
  • -1: Total negative linear correlation

Pearson correlation is the default method of the function corr(). As before, we can calculate the Pearson correlation of the ‘int64’ or ‘float64’ variables.

Sometimes we would like to know the significance of the correlation estimate.

P-value:

What is this p-value? The p-value is the probability of observing a correlation at least as strong as the one we measured, assuming the two variables are actually uncorrelated. Normally, we choose a significance level of 0.05: if the p-value falls below it, we are at least 95% confident that the correlation between the variables is significant.

By convention, when the:

  • p-value is < 0.001: we say there is strong evidence that the correlation is significant;
  • p-value is < 0.05: there is moderate evidence that the correlation is significant;
  • p-value is < 0.1: there is weak evidence that the correlation is significant;
  • p-value is > 0.1: there is no evidence that the correlation is significant.

We can obtain this information using the “stats” module in the “scipy” library.

  • Wheel-base vs price

Let’s calculate the Pearson Correlation Coefficient and P-value of ‘wheel-base’ and ‘price’.
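A sketch of that calculation with scipy.stats.pearsonr, on synthetic wheel-base and price values (the real dataset gives a coefficient of roughly 0.585; these made-up numbers will differ):

```python
from scipy import stats
import pandas as pd

# Synthetic stand-in for the two columns
df = pd.DataFrame({
    "wheel-base": [88.6, 94.5, 99.8, 105.8, 110.0, 115.6],
    "price": [13495, 6295, 13950, 17710, 28248, 40960],
})

# pearsonr returns (correlation coefficient, p-value)
pearson_coef, p_value = stats.pearsonr(df["wheel-base"], df["price"])
print("Pearson coefficient:", pearson_coef, " p-value:", p_value)
```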

Conclusion: Since the p-value is << 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn’t extremely strong (~0.585)

  • Horsepower vs Price

Let’s calculate the Pearson Correlation Coefficient and P-value of ‘horsepower’ and ‘price’

Conclusion: Since the p-value is << 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1)

Following the same procedure, I calculated the Pearson correlation coefficients and p-values for the following:

  • Length vs Price: The Pearson Correlation Coefficient is 0.690628380448364 with a P-value of P = 8.016477466158986e-30
  • Width vs Price: The Pearson Correlation Coefficient is 0.7512653440522674 with a P-value of P = 9.200335510481516e-38
  • Curb-weight vs Price: The Pearson Correlation Coefficient is 0.8344145257702846 with a P-value of P = 2.1895772388936914e-53
  • Engine-size vs Price: The Pearson Correlation Coefficient is 0.8723351674455185 with a P-value of P = 9.265491622198389e-64
  • Bore vs Price: The Pearson Correlation Coefficient is 0.5431553832626602 with a P-value of P = 8.049189483935489e-17
  • City-mpg vs Price: The Pearson Correlation Coefficient is -0.6865710067844677 with a P-value of P = 2.321132065567674e-29
  • Highway-mpg vs Price: The Pearson Correlation Coefficient is -0.7046922650589529 with a P-value of P = 1.7495471144477352e-31

ANOVA

ANOVA: Analysis of Variance

The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. ANOVA returns two parameters:

F-test score: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from the assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means.

P-value: The p-value tells us how statistically significant our calculated score is.

If our price variable is strongly correlated with the variable we are analyzing, expect ANOVA to return a sizeable F-test score and a small p-value

  • Drive Wheels

Since ANOVA analyzes the difference between different groups of the same variable, the groupby function will come in handy. Because the ANOVA algorithm averages the data automatically, we do not need to take the average beforehand.

Let’s see if the different types of ‘drive-wheels’ impact ‘price’; to do so, we group the data.

We can obtain the values of each group using the method “get_group”.

We can use the function ‘f_oneway’ in the module ‘stats’ to obtain the F-test score and p-value.

This is a great result: the large F-test score shows a strong correlation, and a p-value of almost 0 implies almost certain statistical significance. But does this mean all three tested groups are this highly correlated?

Let’s examine the groups separately:

  • fwd and rwd
  • 4wd and rwd
  • 4wd and fwd
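The whole ANOVA step, including the pairwise comparisons, can be sketched like this on synthetic data:

```python
from scipy import stats
import pandas as pd

# Synthetic stand-in: a few prices per drive-wheels category
df = pd.DataFrame({
    "drive-wheels": ["fwd", "fwd", "fwd", "rwd", "rwd", "rwd", "4wd", "4wd", "4wd"],
    "price": [5572, 8845, 13495, 21105, 30760, 36880, 7603, 9233, 11259],
})

grouped_test2 = df[["drive-wheels", "price"]].groupby("drive-wheels")

# ANOVA across all three groups at once
f_val, p_val = stats.f_oneway(
    grouped_test2.get_group("fwd")["price"],
    grouped_test2.get_group("rwd")["price"],
    grouped_test2.get_group("4wd")["price"],
)
print("all three groups: F =", f_val, " p =", p_val)

# Pairwise comparisons
for a, b in [("fwd", "rwd"), ("4wd", "rwd"), ("4wd", "fwd")]:
    f_val, p_val = stats.f_oneway(
        grouped_test2.get_group(a)["price"],
        grouped_test2.get_group(b)["price"],
    )
    print(a, "vs", b, ": F =", f_val, " p =", p_val)
```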

Conclusion: Important Variables

We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price. We have narrowed it down to the following variables:

Continuous numerical variables:

  • Length
  • Width
  • Curb-weight
  • Engine-size
  • Horsepower
  • City-mpg
  • Highway-mpg
  • Wheel-base
  • Bore

Categorical variables:

  • Drive-wheels

As we now move into building machine learning models to automate our analysis, feeding the model with variables that meaningfully affect our target variable will improve our model’s prediction performance.

References

Mester, T. (2020). https://data36.com/data-scientists-day/
