Data Munging using #rstats Part 1 - Understanding Data Quality

This is a series of posts on Data Munging using R.

We will examine the various ways to input data and the errors that can arise at the data input stage. We will then study ways to detect these errors and rectify them using the R language. It is commonly estimated that 60-70% of a project’s time goes into the data input, data quality and data validation stages. By the principle of Garbage In, Garbage Out, an analysis is only as good as the quality of its input data. Data quality is thus both an integral part of a project and one of its first stages, before we move to comprehensive statistical analysis.

Data Quality is an important part of studying data manipulation. How do we define Data Quality?

In this chapter, data quality is defined in terms of getting data into the desired shape, size and format. We elaborate on this as follows-

Data that is useful for analysis without any errors is high quality data.

Data that is problematic for accurate analysis because of any errors is low quality data.

Data Quality errors are defined as deviations from actual data, due to systematic, computing or human mistakes.

Rectifying data quality errors involves the steps of error detection and missing value imputation. It also involves using the feedback from these steps to design better data input mechanisms.

The major types of Data Quality errors are-

Missing Data- This is when data is simply absent. It may be represented by a “.”, a blank space, or special notation like NA (not available). In R, missing data is represented by NA. Missing data is the easiest to detect but tough to rectify, since most of the time we are dealing with data collected in the past, and it is difficult and expensive to replace it with the actual values. Instead, missing data can be imputed, that is, inferred: by using measures of central tendency like the median or mean, by checking correlation with other variables or data points that are better populated, or by looking at historic data for a particular sub-set. Accordingly, missing values for a particular variable can be divided into sub-sets for imputation by various means (for example by different geographic values or different time values), as shown in the short sketch after this list.

Invalid Data (numeric or date-time data that is too high or too low, or character data in an invalid format).

Incorrect Data (due to input errors, including invalid or obsolete business rules, human input mistakes, or low quality OCR scans).
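The following is a minimal sketch of these ideas in R, using made-up data; the object names (sales, region) are hypothetical. It detects missing values with is.na and imputes them with the median, both overall and within geographic sub-sets.

# Made-up data: sales figures with missing values, and a region for each observation
sales  <- c(120, NA, 95, 130, NA, 250, 240, NA)
region <- c("N", "N", "N", "N", "S", "S", "S", "S")

which(is.na(sales))          # positions of the missing values

# Impute with the overall median
sales_overall <- ifelse(is.na(sales), median(sales, na.rm = TRUE), sales)

# Impute with the median of each geographic sub-set
sales_by_region <- ave(sales, region,
                       FUN = function(x) ifelse(is.na(x), median(x, na.rm = TRUE), x))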

The major causes of Data Quality Errors are-

Human Error (due to manual input or typing mistakes)

Machine Error (due to input that cannot be read correctly, e.g. by a low resolution scanning device)

Syntax Error (due to invalid logic or assumptions)

Data Format Error (due to a format that is not readable by the software reading in the data)

Steps for Diagnosis (illustrated in the short sketch after this list)-

Missing Value Detection (using functions related to is.na) and Missing Value Imputation

Distribution Analysis (using functions like summary and describe, and visualizations like boxplot and histogram)

Outlier Detection (e.g. a Bonferroni outlier test) and Outlier Capping (minimum-maximum)

Correlation with other variables (using correlation statistics)
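Here is a minimal sketch of these diagnosis steps on a made-up numeric vector x. Note that describe comes from add-on packages such as Hmisc or psych, and a Bonferroni outlier test is available via car::outlierTest on a fitted regression model, so those two are only noted in comments.

# Made-up variable: one missing value and one deliberate outlier
x <- c(12, 15, 14, NA, 13, 16, 400)

sum(is.na(x))        # missing value detection
summary(x)           # distribution analysis (describe() needs the Hmisc or psych package)
boxplot(x)           # visual check for outliers
hist(x)              # visual check of the distribution
# car::outlierTest() performs a Bonferroni outlier test on a fitted regression model

# Simple minimum-maximum capping at the 1st and 99th percentiles
caps <- quantile(x, c(0.01, 0.99), na.rm = TRUE)
x_capped <- pmin(pmax(x, caps[1]), caps[2])

# Correlation with another variable y of the same length (y is hypothetical)
# cor(x, y, use = "complete.obs")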

Diagnosis of Data Quality

 

The following functions in R will help us evaluate the quality of data in our data object.

str – gives the structure of an object; for a data frame this includes the class, dimensions, variable names, variable types and the first few observations of each variable.

names – gives the variable names.

dim – gives the dimensions of the data object.

length – gives the length of the data object.

nrow – gives the number of rows of the data object.

ncol – gives the number of columns of the data object.

class – gives the data class of the object. This can be list, matrix, data.frame or another class.

We use the famous iris dataset and load it into our R session using the command data(iris). We then try out each of the functions given above.

> data(iris)

> str(iris)

'data.frame': 150 obs. of 5 variables:

$ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...

$ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...

$ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...

$ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...

$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...

> names(iris)

[1] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width" "Species"

> dim(iris)

[1] 150 5

> length(iris)

[1] 5

> nrow(iris)

[1] 150

> ncol(iris)

[1] 5

> class(iris)

[1] "data.frame"

It is quite clear that the str function by itself is enough for this first data quality step, as its output contains the information given by all the other functions.

We now try and print out a part of the object to check what is stored there. By default we can print the entire object by just writing its name. However this may be inconvenient when there are a large number of rows.

Accordingly we use the head and tail functions to look at the beginning and last rows in a data object.

head – gives the first few observations in a data object, as specified by the second parameter in head(objectname, number of rows).

tail – gives the last few observations in a data object, as specified by the second parameter in tail(objectname, number of rows).

Here we take the first 7 rows and the last 3 rows of the iris dataset. Note that the first column in the output below is the row number.
> head(iris,7)

Sepal.Length Sepal.Width Petal.Length Petal.Width Species

1 5.1 3.5 1.4 0.2 setosa

2 4.9 3.0 1.4 0.2 setosa

3 4.7 3.2 1.3 0.2 setosa

4 4.6 3.1 1.5 0.2 setosa

5 5.0 3.6 1.4 0.2 setosa

6 5.4 3.9 1.7 0.4 setosa

7 4.6 3.4 1.4 0.3 setosa

> tail(iris,3)

Sepal.Length Sepal.Width Petal.Length Petal.Width Species

148 6.5 3.0 5.2 2.0 virginica

149 6.2 3.4 5.4 2.3 virginica

150 5.9 3.0 5.1 1.8 virginica

We can also pass negative numbers as parameters to head and tail. A negative value tells head to drop that many rows from the end, and tells tail to drop that many rows from the beginning. Since the object iris has 150 rows, head(iris,-143) returns the first 150 - 143 = 7 rows and tail(iris,-143) returns the last 7 rows.

> head(iris,-143)

Sepal.Length Sepal.Width Petal.Length Petal.Width Species

1 5.1 3.5 1.4 0.2 setosa

2 4.9 3.0 1.4 0.2 setosa

3 4.7 3.2 1.3 0.2 setosa

4 4.6 3.1 1.5 0.2 setosa

5 5.0 3.6 1.4 0.2 setosa

6 5.4 3.9 1.7 0.4 setosa

7 4.6 3.4 1.4 0.3 setosa

> tail(iris,-143)

Sepal.Length Sepal.Width Petal.Length Petal.Width Species

144 6.8 3.2 5.9 2.3 virginica

145 6.7 3.3 5.7 2.5 virginica

146 6.7 3.0 5.2 2.3 virginica

147 6.3 2.5 5.0 1.9 virginica

148 6.5 3.0 5.2 2.0 virginica

149 6.2 3.4 5.4 2.3 virginica

150 5.9 3.0 5.1 1.8 virginica

1.2 Strings

One of the most common errors in data analytics is a mismatch in string variables. String variables, also known as character variables, contain non-numeric text, and even a single misplaced white space or a difference in upper or lower case can cause discrepancies in the data. Two of the most common types of data for which this error becomes critical are address data and name data.

From the perspective of R, the value “virginica” is different data (or a different factor level) from “ virginica” and from “Virginica”. Similarly, “1600 Penn Avenue” is a different address from “1600 Pennsylvania Avenue” and from “1600 PA”. This can lead to an escalation of costs, especially since users of business analytics try to create unique and accurate contact details (names and addresses). It matters even more when running credit checks and handling financial data, since an inaccurate data mismatch can assign the wrong credit score to a person, creating liability for the credit provider. We will see below how case conversion and white space removal can be combined to normalize such values before matching.

For changing case we use the functions toupper and tolower.

> a=c("ajay","vijay","ravi","rahul","bharat")

> toupper(a)

[1] "AJAY" "VIJAY" "RAVI" "RAHUL" "BHARAT"

> b=c("Jane","JILL","AMY","NaNCY")

> tolower(b)

[1] "jane" "jill" "amy" "nancy"

sub, gsub and grepl

 

grepl can be used to find a part of a string. For example, in cricket a not out score of 250 runs is denoted by a star, i.e. 250*, while a score of 250 out is denoted simply as 250. This can create a problem when we are trying to read in the data: R will either treat the column as character data, or, if we coerce it to numeric values, it will show the not out scores as missing values.

We want to find all instances of “*” in the high score (HS) field and flag those scores as not out. grepl returns a logical vector (a match or non-match for each element of x). We will further expand on this example in our Case Study for Cricket Analytics.

table2$HSNotOut=grepl("\\*",table2$HS)
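Because table2 comes from the cricket case study and is not defined here, the following is a minimal self-contained sketch on a made-up vector of high scores (HSRuns is a hypothetical name): it flags the not out innings and strips the star so the scores can be converted to numeric.

# Made-up high scores; a trailing * marks a not out innings
HS <- c("250*", "183", "91*", "45")

HSNotOut <- grepl("\\*", HS)                  # TRUE where the score carries a *
HSRuns   <- as.numeric(gsub("\\*", "", HS))   # strip the * before converting to numeric

data.frame(HS, HSNotOut, HSRuns)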


We use sub and gsub to substitute parts of a string. While the sub function replaces only the first occurrence of the matching pattern, the gsub function replaces all occurrences with the replacement supplied.

Here we are trying to remove white space from a sentence. Notice that sub removes only the first space (the leading one), while gsub removes every space, which is why sub gives the more readable result in this case.

> newstring=" Hello World We are Experts in Learning R"

> sub(" ","",newstring)

[1] "Hello World We are Experts in Learning R"

> gsub(" ","",newstring)

[1] "HelloWorldWeareExpertsinLearningR"

Let us try to convert currency data into numeric data. For the sake of learning we are using a small data object, a character vector called “money”, with three different inputs.

> money=c("$10,000","20000","32,000")

> money

[1] "$10,000" "20000" "32,000"

We remove the commas (used mainly as thousands separators in currency data) using gsub, as shown before.

> money2=gsub(",","",money)

> money2

[1] "$10000" "20000" "32000"

In regular expressions, $ indicates the end of a line, while \$ matches a literal dollar sign. Since the backslash itself has to be escaped inside an R string, we have to use \\$ as the pattern in the gsub expression.

> money3=gsub("\\$","",money2)

> money3

[1] "10000" "20000" "32000"

At this point we may be satisfied that we have the format we wanted. However this would be an error, as these values are still strings, as we find out by running the mean function.

> mean(money3)

[1] NA

Warning message:

In mean.default(money3) : argument is not numeric or logical: returning NA

We then use the as group of functions to convert one data type (character) into another (numeric). These functions generally follow the syntax as.targetclass(object). Accordingly we will use as.numeric for the conversion.

 

> money4=as.numeric(money3)

> money4

[1] 10000 20000 32000

> mean(money4)

[1] 20666.67


Please note, we used several intermediate steps for the data manipulation above and used the = sign to assign each result to a new object. We can combine multiple steps into one by nesting the function calls within successive brackets. This is illustrated below, where we convert character data containing percentages (with a % sign) into numeric data.

> mean(as.numeric(gsub("%","",percentages)))

[1] 35

> percentages

[1] "%20" "%30" "%40" "50"

Note that we have found the mean, but the original percentages object itself is not changed.
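The percentages object above was created earlier in the session; assuming it was built as shown here, reassigning the cleaned result back to the same name is what actually changes the original object.

# Assumed definition of the percentages object used above
percentages <- c("%20", "%30", "%40", "50")

# Reassigning overwrites the original character object with numeric values
percentages <- as.numeric(gsub("%", "", percentages))
percentages
# [1] 20 30 40 50
mean(percentages)
# [1] 35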

 

Apply gsub to only one variable at a time

A slight problem: suppose there is a value like 1,504 – it will be converted to NA instead of 1504. The way to solve this is to use the gsub function ONLY on that variable. Since the comma is also the most commonly used delimiter, you don't want to replace all the commas in the file, just the ones within that variable.

dataset$Variable2=as.numeric(gsub(",","",dataset$Variable))
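A minimal sketch with a made-up data frame (the names dataset, City, Variable and Variable2 are hypothetical): only the comma-formatted column is cleaned, and the other columns are left untouched.

# Made-up data frame; only Variable holds comma-formatted numbers
dataset <- data.frame(City = c("Delhi", "Mumbai"),
                      Variable = c("1,504", "2,300"),
                      stringsAsFactors = FALSE)

dataset$Variable2 <- as.numeric(gsub(",", "", dataset$Variable))
dataset
#     City Variable Variable2
# 1  Delhi    1,504      1504
# 2 Mumbai    2,300      2300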

 

 

Additional– The function setAs creates coercion methods for the as function to use. This is an advanced usage; a brief sketch is given below.
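As a minimal sketch of this advanced usage (the class name commaNum is hypothetical), we can register a coercion from character to a small numeric class so that as can strip commas automatically.

library(methods)

# A hypothetical class that simply extends numeric
setClass("commaNum", contains = "numeric")

# Register a character-to-commaNum coercion method for as() to use
setAs("character", "commaNum",
      function(from) new("commaNum", as.numeric(gsub(",", "", from))))

as(c("1,504", "2,300", "950"), "commaNum")
# a commaNum object containing 1504 2300 950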

 

 

 

 

Interview Jason Kuo SAP Analytics #Rstats

Here is an interview with Jason Kuo, who works with SAP Analytics as Group Solutions Marketing Manager. Jason answers questions on SAP Analytics and its increasing involvement with the R statistical language.

Ajay- What made you choose R as the language to tie together important parts of your technology platform like HANA and SAP Predictive Analysis? Did you consider other languages like Julia or Python?

Jason- It’s the most popular. Over 50% of statisticians and data analysts use R. With 3,500+ algorithms it’s arguably the most comprehensive statistical analysis language. That said, we are not closing the door on others.

Ajay- When did you first start getting interested in R as an analytics platform?

Jason- SAP has been tracking R for 5+ years. With R’s explosive growth over the last year or two, it made sense for us to dramatically increase our investment in R.

Ajay- Can we expect SAP to give back to the R community like Google and Revolution Analytics do, by sponsoring package development or sponsoring user meets and conferences?

Will we see SAP’s R HANA package at this year’s R conference, useR! 2012 in Nashville?

Jason- Yes. We plan to provide a specific driver for HANA tables for input of the data to native R. This is planned for the end of 2012. We’ll then review our event strategy. SAP has been a sponsor of Predictive Analytics World for several years and was indeed a founding sponsor. We may be attending this year’s R conference in Nashville.

Ajay- What has been some of the initial customer feedback on your analytics expansion and offerings?

Jason- We have completed two very successful Pilots of the R Integration for HANA with two of SAP’s largest customers.

About-

Jason has over 15 years of BI and Data Warehousing industry experience. Having worked at Oracle, Business Objects, and now SAP, Jason has been involved in numerous technical marketing roles involving performance management dashboards, information management, text analysis, predictive analytics, and now big data. He has a Bachelor of Science in operations research from the University of Michigan.

 

Timo Elliott on 2012

Continuing the DecisionStats series on trends for 2012, Timo Elliott, Technology Evangelist at SAP BusinessObjects, looks at the predictions he made at the beginning of 2011, follows up with the things that surprised him in 2011, and shares what he foresees for 2012.

You can read last year’s predictions by Mr Elliott at http://www.decisionstats.com/brief-interview-timo-elliott/

Timo- Here are my comments on the “top three analytics trends” predictions I made last year:

(1) Analytics, reinvented. New DW techniques make it possible to do sub-second, interactive analytics directly against row-level operational data. Now BI processes and interfaces need to be rethought and redesigned to make best use of this — notably by blurring the distinctions between the “design” and “consumption” phases of BI.

I spent most of 2011 talking about this theme at various conferences: how existing BI technology is rapidly becoming obsolete and how the changes are akin to the move from film to digital photography. Technology that has been around for many years (in-memory, column stores, data warehouse appliances, etc.) came together to create exciting new opportunities, and even generally-skeptical industry analysts put out press releases such as “Gartner Says Data Warehousing Reaching Its Most Significant Inflection Point Since Its Inception.” Some of the smaller BI vendors had been pushing in-memory analytics for years, but the general market started paying more attention when megavendors like SAP started painting a long-term vision of in-memory becoming a core platform for applications, not just analytics. Database leader Oracle was forced to upgrade their in-memory messaging from “It’s a complete fantasy” to “we have that too”.

(2) Corporate and personal BI come together. The ability to mix corporate and personal data for quick, pragmatic analysis is a common business need. The typical solution to the problem — extracting and combining the data into a local data store (either Excel or a departmental data mart) — pleases users, but introduces duplication and extra costs and makes a mockery of information governance. 2011 will see the rise of systems that let individuals and departments load their data into personal spaces in the corporate environment, allowing pragmatic analytic flexibility without compromising security and governance.

The number of departmental “data discovery” initiatives continued to rise through 2011, but new tools do make it easier for business people to upload and manipulate their own information while using the corporate standards. 2012 will see more development of “enterprise data discovery” interfaces for casual users.

(3) The next generation of business applications. Where are the business applications designed to support what people really do all day, such as implementing this year’s strategy, launching new products, or acquiring another company? 2011 will see the first prototypes of people-focused, flexible, information-centric, and collaborative applications, bringing together the best of business intelligence, “enterprise 2.0”, and existing operational applications.

2011 saw the rise of sophisticated, user-centric mobile applications that combine data from corporate systems with GPS mapping and the ability to “take action”, such as mobile medical analytics for doctors or mobile beauty advisor applications, and collaborative BI started becoming a standard part of enterprise platforms.

And one that should happen, but probably won’t: (4) Intelligence = Information + PEOPLE. Successful analytics isn’t about technology — it’s about people, process, and culture. The biggest trend in 2011 should be organizations spending the majority of their efforts on user adoption rather than technical implementation.

Unsurprisingly, there was still high demand for presentations on why BI projects fail and how to implement BI competency centers. The new architectures probably resulted in even more emphasis on technology than ever, while business people’s expectations skyrocketed, fueled by advances in the consumer world. The result was probably even more dissatisfaction than in the past, but the benefits of the new architectures should start becoming clearer during 2012.

What surprised me the most:

The rapid rise of Hadoop / NoSQL. The potential of these technologies has always been impressive, but I was surprised just how quickly they have been used to address real-life business problems (beyond the “big web” vendors where they originated), and how quickly they are becoming part of mainstream enterprise analytic architectures (e.g. Sybase IQ 15.4 includes native MapReduce APIs, Hadoop integration and federation, etc.).

Prediction for 2012:

As I sat down to gather my thoughts about BI in 2012, I quickly came up with the same long laundry list of BI topics as everybody else: in-memory, mobile, predictive, social, collaborative decision-making, data discovery, real-time, etc. All of these things are clearly important, and we are going to continue to see great improvements this year. But I think that the real “next big thing” in BI is what I’m seeing when I talk to customers: they’re using these new opportunities not only to “improve analytics” but also to fundamentally rethink some of their key business processes.

Instead of analytics being something that is used to monitor and eventually improve a business process, analytics is becoming a more fundamental part of the business process itself. One example is a large telco that has transformed the way it attracts customers. Instead of laboriously creating a range of rate plans, promoting them, and analyzing the results, it now uses analytics to automatically create hundreds of more complex, personalized rate plans. It then throws them out into the market, monitors them in real time, and quickly culls any that aren’t successful. It’s a way of doing business that would have been inconceivable in the past, and one that will be a lot more common in the future.

 

About

 

Timo Elliott

Timo Elliott is a 20-year veteran of SAP BusinessObjects, and has spent the last quarter-century working with customers around the world on information strategy.

He works closely with SAP research and innovation centers around the world to evangelize new technology prototypes.

His popular Business Analytics blog tracks innovation in analytics and social media, including topics such as augmented corporate reality, collaborative decision-making, and social network analysis.

His PowerPoint Twitter Tools lets presenters see and react to tweets in real time, embedded directly within their slides.

A popular and engaging speaker, Elliott presents regularly to IT and business audiences at international conferences, on subjects such as why BI projects fail and what to do about it, and the intersection of BI and enterprise 2.0.

Prior to Business Objects, Elliott was a computer consultant in Hong Kong and led analytics projects for Shell in New Zealand. He holds a first-class honors degree in Economics with Statistics from Bristol University, England.

Timo can be contacted via Twitter at https://twitter.com/timoelliott

Part 1 of this series was from James Kobielus, Forrester, at http://www.decisionstats.com/jim-kobielus-on-2012/