Data Munging using #rstats Part 1 - Understanding Data Quality

This is a series of posts on Data Munging using R.

In this series we will examine the various ways to input data and the errors that can arise at the data input stage. We will accordingly study ways to detect those errors and rectify them using the R language. It is commonly estimated that almost 60-70% of a project's time goes into the data input, data quality and data validation stages. By the principle of Garbage In, Garbage Out, an analysis is only as good as the quality of its input data. Data quality is thus both an integral part of a project and one of its first stages, before we move on to comprehensive statistical analysis.

Data Quality is an important part of studying data manipulation. How do we define Data Quality?

In this post, data quality is defined in terms of whether the data is in the desired shape, size and format for analysis. We elaborate on that as follows-

Data that is useful for analysis without any errors is high quality data.

Data that is problematic for accurate analysis because of any errors is low quality data.

Data Quality errors are defined as deviations from actual data, due to systematic, computing or human mistakes.

Rectifying data quality errors involves the steps of error detection and missing value imputation. It also involves using the feedback from these steps to design better data input mechanisms.

The major types of Data Quality errors are-

Missing Data- This is when data is simply absent. It may be represented by a “.”, a blank space, or special notation like NA (not available). In R, missing data is represented by NA. Missing data is the easiest error to detect but tough to rectify, since most of the time the data was collected in the past and replacing it with the actual values is difficult and expensive. Missing data can instead be imputed, i.e. inferred: by using measures of central tendency like the median or the mean, by using the correlation with other, better-populated variables or data points, or by looking at historic data for a particular subset. Accordingly, the missing values of a variable can be divided into subsets (for example by geography or by time period) and imputed separately for each subset.
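A minimal sketch of missing value detection and median imputation, using a made-up vector (not part of the iris example used later):

`> x=c(12,15,NA,11,18,NA,14)`

`> is.na(x) # flags the missing positions`

`[1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE`

`> sum(is.na(x)) # count of missing values`

`[1] 2`

`> x[is.na(x)]=median(x, na.rm=TRUE) # impute the missing values with the median of the observed ones`

`> x`

`[1] 12 15 14 11 18 14 14`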

Invalid Data- numeric (and date-time) values that are too high or too low, or character data in an invalid format.

Incorrect Data- data captured wrongly due to input errors, including invalid or obsolete business rules, human input mistakes, or low quality OCR scans.

The major causes of Data Quality Errors are-

Human Error (due to manual input and typing mistakes)

Machine Error (due to input that cannot be read correctly, e.g. by a low resolution scanning device)

Syntax Error (due to invalid logic or assumptions)

Data Format Error (due to a format that is not readable by the software reading in the data)

Steps for Diagnosis-

Missing Value Detection (using functions like is.na) and Missing Value Imputation

Distribution Analysis (using functions like summary and describe, the latter from packages such as Hmisc or psych, and visualizations like boxplot and histogram)

Outlier Detection (e.g. with the Bonferroni outlier test) and Outlier Capping (minimum-maximum capping)

Correlation with other variables ( using correlation statistics)
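A brief sketch of these checks, applied here to the iris dataset (introduced in more detail below). The 1st and 99th percentile capping limits are our own choice, and the Bonferroni outlier test mentioned above is available in add-on packages such as car (not shown here):

`> data(iris)`

`> sum(is.na(iris$Sepal.Width)) # missing value detection`

`[1] 0`

`> summary(iris$Sepal.Width) # distribution analysis`

`> boxplot(iris$Sepal.Width) # visual check of the distribution and outliers`

`> caps=quantile(iris$Sepal.Width, c(0.01, 0.99)) # capping limits (1st and 99th percentiles)`

`> capped=pmin(pmax(iris$Sepal.Width, caps[1]), caps[2]) # minimum-maximum capping`

`> cor(iris$Sepal.Width, iris$Sepal.Length) # correlation with another variable`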

Diagnosis of Data Quality

The following functions in R will help us evaluate the quality of data in our data object.

str– gives the structure of an object; for a data frame this includes the class, dimensions, variable names, variable types, and the first few observations of each variable.

names– gives variable names.

dim– dimensions of object.

length– gives the length of a data object (for a data frame, this is the number of columns).

nrow– gives number of rows of data object.

ncol – gives number of columns of data object.

class– gives the data class of the object. This can be list, matrix, data.frame, or another class.

We use the famous iris dataset and load it into our R session using the command data(iris). We then try out each of the functions given above.

`> data(iris)`

`> str(iris)`

`'data.frame': 150 obs. of 5 variables:`

`$ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...`

`$ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...`

`$ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...`

`$ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...`

`$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...`

`> names(iris)`

`[1] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width" "Species" `

`> dim(iris)`

`[1] 150 5`

`> length(iris)`

`[1] 5`

`> nrow(iris)`

`[1] 150`

`> ncol(iris)`

`[1] 5`

`> class(iris)`

`[1] "data.frame"`

It is quite clear that the str function by itself is enough for this first data quality step, as its output contains the information given by all the other functions above.

We now try to print out a part of the object to check what is stored there. By default we can print the entire object just by typing its name. However this may be inconvenient when there are a large number of rows.

Accordingly we use the head and tail functions to look at the beginning and last rows in a data object.

head– gives the first few observations in a data object, as specified by the second parameter: head(objectname, number of rows)

tail– gives the last few observations in a data object, as specified by the second parameter: tail(objectname, number of rows)

Here we take the first 7 rows and the last 3 rows of the dataset iris. Note that the first column in the output below is the row number.
`> head(iris,7)`

`Sepal.Length Sepal.Width Petal.Length Petal.Width Species`

`1 5.1 3.5 1.4 0.2 setosa`

`2 4.9 3.0 1.4 0.2 setosa`

`3 4.7 3.2 1.3 0.2 setosa`

`4 4.6 3.1 1.5 0.2 setosa`

`5 5.0 3.6 1.4 0.2 setosa`

`6 5.4 3.9 1.7 0.4 setosa`

`7 4.6 3.4 1.4 0.3 setosa`

`> tail(iris,3)`

`Sepal.Length Sepal.Width Petal.Length Petal.Width Species`

`148 6.5 3.0 5.2 2.0 virginica`

`149 6.2 3.4 5.4 2.3 virginica`

`150 5.9 3.0 5.1 1.8 virginica`

We can also pass negative numbers as parameters to head and tail: a negative value drops that many rows from the end (for head) or from the beginning (for tail). Since the object iris has 150 rows, passing -143 leaves 150 - 143 = 7 rows, so head(iris,-143) again gives the first 7 rows and tail(iris,-143) gives the last 7 rows.

`> head(iris,-143)`

`Sepal.Length Sepal.Width Petal.Length Petal.Width Species`

`1 5.1 3.5 1.4 0.2 setosa`

`2 4.9 3.0 1.4 0.2 setosa`

`3 4.7 3.2 1.3 0.2 setosa`

`4 4.6 3.1 1.5 0.2 setosa`

`5 5.0 3.6 1.4 0.2 setosa`

`6 5.4 3.9 1.7 0.4 setosa`

`7 4.6 3.4 1.4 0.3 setosa`

`> tail(iris,-143)`

`Sepal.Length Sepal.Width Petal.Length Petal.Width Species`

`144 6.8 3.2 5.9 2.3 virginica`

`145 6.7 3.3 5.7 2.5 virginica`

`146 6.7 3.0 5.2 2.3 virginica`

`147 6.3 2.5 5.0 1.9 virginica`

`148 6.5 3.0 5.2 2.0 virginica`

`149 6.2 3.4 5.4 2.3 virginica`

`150 5.9 3.0 5.1 1.8 virginica`

1.2 Strings

One of the most common errors in data analytics is a mismatch in string variables. String variables, also known as character variables, hold non-numeric text, and even a single stray white space or a difference in upper and lower case can cause discrepancies in the data. Two of the most common types of data for which this error becomes critical are address data and name data.

From the perspective of R, the value “virginica” is a different value (or factor level) from “ virginica” and from “Virginica”. Similarly, “1600 Penn Avenue” is a different address from “1600 Pennsylvania Avenue” and from “1600 PA”. This can escalate costs, especially since users of business analytics try to maintain unique and accurate contact details (names and addresses). It matters even more for credit checks and financial data, since a data mismatch can assign the wrong credit score to a person, creating liability for the credit provider.

For changing case we use the functions toupper and tolower.

`> a=c("ajay","vijay","ravi","rahul","bharat")`

`> toupper(a)`

`[1] "AJAY" "VIJAY" "RAVI" "RAHUL" "BHARAT"`

`> b=c("Jane","JILL","AMY","NaNCY")`

`> tolower(b)`

`[1] "jane" "jill" "amy" "nancy"`

sub, gsub, grepl

grepl can be used to find a pattern within a string. For example, in cricket a not out score of 250 runs is denoted by a star, i.e. 250*, while a score of 250 out is denoted simply as 250. This creates a problem when reading in data: R will either treat the column as character data, or, if we coerce it to numeric, the not out scores will become missing values.

We want to find all instances of “*” in the HS (highest score) field and flag whether each innings was not out. `grepl` returns a logical vector (a match or no match for each element of `x`). We will further expand on this example in our Case Study for Cricket Analytics.

`table2$HSNotOut=grepl("\\*",table2$HS)`
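As a runnable sketch, here is the same idea on a small made-up table2 (the real cricket data comes in the case study):

`> table2=data.frame(HS=c("250*","183","97*","120"), stringsAsFactors=FALSE)`

`> table2$HSNotOut=grepl("\\*",table2$HS) # TRUE where the score carries a star`

`> table2$HSRuns=as.numeric(gsub("\\*","",table2$HS)) # numeric runs with the star stripped`

`> table2`

`HS HSNotOut HSRuns`

`1 250* TRUE 250`

`2 183 FALSE 183`

`3 97* TRUE 97`

`4 120 FALSE 120`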

We use sub and gsub to substitute parts of string. While the sub function replaces the first occurrence, the gsub function replaces all occurrences of the matching pattern with the parameter supplied.

Here we are trying to remove white space from a sentence. Notice that the sub function, which replaces only the first (leading) space, works better here than gsub, which removes every space and runs all the words together.

`> newstring=" Hello World We are Experts in Learning R"`

`> sub(" ","",newstring)`

`[1] "Hello World We are ``Experts`` in Learning R"`

`> gsub(" ","",newstring)`

`[1] "HelloWorldWeareExpertsinLearningR"`

Let us try to convert currency data into numeric data. For the sake of learning we are using a small data object, a character vector called “money”, with three different inputs.

`> money=c("$10,000","20000","32,000")`

`> money`

`[1] "\$10,000" "20000" "32,000"`

We replace a comma (used mainly for thousands in currency data) using gsub as shown before.

`> money2=gsub(",","",money)`

`> money2`

`[1] "\$10000" "20000" "32000"`

$ indicates the end of a line in regular expressions. To match a literal dollar sign we need the regular expression \$, which is written as "\\$" inside an R string. So we pass "\\$" as the pattern in the gsub expression.

`> money3=gsub("\\$","",money2)`

`> money3`

`[1] "10000" "20000" "32000"`

At this point we may be satisfied that we have got the format we wanted. However these values are still strings, as we find out by running the mean function.

`> mean(money3)`

`[1] NA`

`Warning message:`

`In mean.default(money3) : argument is not numeric or logical: returning NA`

We then use the as functions to convert one data type (character) into another (numeric). These functions generally follow the syntax as.outputclass, so here we use as.numeric for the conversion.

`> money4=as.numeric(money3)`

`> money4`

`[1] 10000 20000 32000`

`> mean(money4)`

`[1] 20666.67`

Please note, we used many intermediate objects for the multiple steps of data manipulation and used the = sign to assign each result to a new object. We can combine several steps into one by nesting the function calls within successive brackets. This is illustrated below, where we convert character data containing percentages (%) into numeric data.
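For this example we first need a percentages object; it can be created as a small character vector matching the values printed further below:

`> percentages=c("%20","%30","%40","50")`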

`> mean(as.numeric(gsub("%","",percentages)))`

`[1] 35`

`> percentages`

`[1] "%20" "%30" "%40" "50"`

Note that we have found the mean, but the original object is not changed.
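To keep the cleaned values we would assign the nested expression to a new object (the name percentages_clean below is our own choice):

`> percentages_clean=as.numeric(gsub("%","",percentages))`

`> percentages_clean`

`[1] 20 30 40 50`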

Do gsub on only one variable at a time

A slight problem: suppose the data contains a value like 1,504 – it will be converted to NA instead of 1504. The way to solve this is to use the gsub function ONLY on that variable. Since the comma is also the most commonly used delimiter, you don't want to replace all the commas in the data, just the ones in that variable.

`dataset$Variable2=as.numeric(paste(gsub(",","",dataset$Variable)))`
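For example, with a small made-up data frame using the same names as above:

`> dataset=data.frame(Variable=c("1,504","2,300","987"))`

`> dataset$Variable2=as.numeric(gsub(",","",dataset$Variable))`

`> dataset$Variable2`

`[1] 1504 2300 987`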

Additional– The function setAs creates methods for the as function to use. This is an advanced usage.
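A sketch of one common setAs pattern (the class name num.with.commas is our own choice), which lets read.csv strip commas automatically via the colClasses argument:

`> setClass("num.with.commas")`

`> setAs("character", "num.with.commas", function(from) as.numeric(gsub(",", "", from)))`

`> # read.csv("file.csv", colClasses=c("character","num.with.commas")) would then convert that column while reading`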
