Ten steps to analysis using R

Here is a set of basic R functions that let you start the task of business analytics: analyzing a dataset (a data.frame). I am writing this both as a reference for myself and for anyone who wants to learn R quickly.

I am not covering data import functions, because data manipulation is a separate topic altogether. Instead, I assume you already have a dataset ready for analysis and ask: what are the top R commands you would need to analyze it?

 

For anyone who thought R was too hard to learn, here are ten functions to get you started.

1) str(dataset) shows the structure of the dataset

2) names(dataset) gives the names of its variables

3) mean(dataset) returns the mean of each numeric variable

4) sd(dataset) returns the standard deviation of each numeric variable

5) summary(dataset) gives the minimum, quartiles, median, mean and maximum of each variable

That about gives me the basic stats I need for a dataset.

> data(faithful)
> names(faithful)
[1] "eruptions" "waiting"
> str(faithful)
'data.frame':   272 obs. of  2 variables:
 $ eruptions: num  3.6 1.8 3.33 2.28 4.53 ...
 $ waiting  : num  79 54 74 62 85 55 88 85 51 85 ...
> summary(faithful)
   eruptions        waiting
 Min.   :1.600   Min.   :43.0
 1st Qu.:2.163   1st Qu.:58.0
 Median :4.000   Median :76.0
 Mean   :3.488   Mean   :70.9
 3rd Qu.:4.454   3rd Qu.:82.0
 Max.   :5.100   Max.   :96.0

> mean(faithful)
eruptions   waiting
 3.487783 70.897059
> sd(faithful)
eruptions   waiting
 1.141371 13.594974
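
A note if you are on a newer version of R: mean() and sd() applied to a whole data frame have since been deprecated there, so the per-column equivalent uses sapply(), which applies a function to each column:

> sapply(faithful, mean)   # column-wise means, same numbers as above
> sapply(faithful, sd)     # column-wise standard deviations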

6) I can do a basic frequency analysis of a particular variable using the table() command and the $ operator (dataset$variable is similar to dataset.variable in other statistical languages):

> table(faithful$waiting)

43 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 62 63 64 65 66 67 68 69 70
 1  3  5  4  3  5  5  6  5  7  9  6  4  3  4  7  6  4  3  4  3  2  1  1  2  4
71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 96
 5  1  7  6  8  9 12 15 10  8 13 12 14 10  6  6  2  6  3  6  1  1  2  1  1
Or I can do a frequency analysis of the whole dataset using
> table(faithful)
         waiting
eruptions 43 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 62 63 64 65 66 67
    1.6    0  0  0  0  0  0  0  0  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0
    1.667  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1  0  0  0
    1.7    0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1  0  0  0  0  0  0  0
    1.733  0  0  0  0  0  0  0  0  0  0  1  0  0  0  0  0  0  0  0  0  0  0  0
.....output truncated
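
For a continuous variable like waiting, the raw table above is noisy. A small base-R refinement is to bin the variable with cut() before tabulating:

> table(cut(faithful$waiting, breaks = 5))   # counts in five equal-width bins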
7) plot(dataset) plots the variables of the dataset against each other: for a two-variable data frame like faithful this gives a single scatterplot, and with more variables it draws a scatterplot matrix.
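For example:

> plot(faithful)   # scatterplot of eruptions versus waiting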

8) hist(dataset$variable) draws a histogram, better for looking at the distribution of a single variable:

hist(faithful$waiting)
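
The breaks argument controls the granularity of the bins:

> hist(faithful$waiting, breaks = 20)   # finer-grained histogram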

9) boxplot(dataset) draws a box-and-whisker plot for each numeric variable.
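
For example:

> boxplot(faithful)            # one box per variable
> boxplot(faithful$eruptions)  # box plot of a single variable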

10) The tenth function for a beginner would be cor(dataset$var1, dataset$var2), or cor(dataset) for the full correlation matrix:

> cor(faithful)
          eruptions   waiting
eruptions 1.0000000 0.9008112
waiting   0.9008112 1.0000000
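
For a single pair of variables, the same coefficient comes from:

> cor(faithful$eruptions, faithful$waiting)
[1] 0.9008112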

 

I am assuming that as a beginner you would use one of the GUIs listed at http://rforanalytics.wordpress.com/graphical-user-interfaces-for-r/ to import and export data. I will deal with ten steps to data manipulation in R in another post.

 

CrowdANALYTIX

Here is a contest-based community called CrowdANALYTIX.com, which is quite nice and offers free Revolution R for the statistical and analytical contests hosted there (a bit like Kaggle.com, http://www.kaggle.com/). There are only 3 contests right now, and low-volume ones at that, but I guess that number should increase. They also seem to have a consulting arm.

Welcome to the latest analytics website: http://www.crowdanalytix.com/contests

Interview Eberhard Miethke and Dr. Mamdouh Refaat, Angoss Software

Here is an interview with Eberhard Miethke and Dr. Mamdouh Refaat, of Angoss Software. Angoss is a global leader in delivering business intelligence software and predictive analytics solutions that help businesses capitalize on their data by uncovering new opportunities to increase sales and profitability and to reduce risk.

Ajay- Describe your personal journey in software. How can we guide young students to pursue more useful software development than just gaming applications?

Mamdouh- I started using computers a long time ago, when they were programmed using punched cards! First in Fortran, then C, later C++, and then the rest. Computers and software were viewed as technical/engineering tools, and that's why we can still see the heavy technical orientation of command languages such as Unix shells and even the Windows command shell. However, with the introduction of database systems and Microsoft Office apps, it was clear that business would be the primary user and field of application for software. My personal journey in software started with scientific applications, then business and database systems, and finally statistical software, which you can think of as a return to the more scientific orientation. However, with the wide acceptance by businesses of the application of statistical methods in fields such as marketing and risk management, it is a fast-growing field in need of a lot of innovation.

Ajay- Angoss makes multiple data mining and analytics products. Could you please introduce us to your product portfolio and the specific data analytics needs each serves?

a- Attached please find our main product flyers for KnowledgeSTUDIO and KnowledgeSEEKER. We have a third product called StrategyBuilder, which is an add-on to the decision tree modules. This is also described in the flyer.

(see the Angoss Knowledge Studio Product Guide, April 2011, and http://www.scribd.com/doc/63176430/Angoss-Knowledge-Seeker-Product-Guide-April2011)

Ajay- The trend in analytics is toward big data and cloud computing, with Hadoop enabling the processing of massive data sets on scalable infrastructure. What are your plans for cloud computing, and for tablet-based and mobile-based computing?

a- This is an area where the plan is still being figured out in all organizations. The current explosion of data collected from mobile phones, text messages, and social websites will need radically new applications that can utilize the data from these sources. Current applications are based on the relational database paradigm designed from the 1970s through the 1990s.

But data sources are generating data in volumes and formats that challenge this paradigm, and that will need a set of new tools, and possibly programming languages, to fit these needs. Cloud computing and tablet- and mobile-based computing (which are the same thing in my opinion, just different sizes of device) are two technologies that have not been explored in analytics yet.

The approach taken so far by most companies, including Angoss, is to rely on new XML-based standards to represent data structures for particular models. In this case, it is the PMML (Predictive Model Markup Language) standard, which allows interoperability between analytics applications. Standardizing the representation of models is viewed as the first step toward implementing these models on emerging platforms, be that the cloud, mobile devices, or social networking websites.
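
As an R-flavored aside (my illustration, not Angoss's tooling): the open source pmml package on CRAN shows the same idea, serializing a fitted model to PMML XML that any PMML-aware engine can, in principle, score. A minimal sketch, assuming the rpart and pmml packages are installed:

> library(rpart)   # decision trees; kyphosis is a dataset shipped with rpart
> library(pmml)    # PMML export; attaches the XML package for saveXML()
> fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
> saveXML(pmml(fit), file = "kyphosis_tree.xml")   # portable model description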

The second challenge cited above is the rapidly increasing size of the data to be analyzed. Angoss identified this challenge early on and currently offers in-database analytics drivers for several database engines: Netezza, Teradata and SQL Server.

These drivers allow our analytics products to translate their routines into efficient SQL-based scripts that run in the database engine to exploit its performance as well as the powerful hardware on which it runs. Thus, instead of copying the data to a staging format for analytics, these drivers allow the data to be analyzed “in-place” within the database without moving it.

This offers performance, security and integrity benefits. Performance is improved by the use of well-tuned database engines running on powerful hardware.

Extra security is achieved by not copying the data to other platforms, which could be less secure. And finally, the integrity of the results is vastly improved by making sure that the results are always obtained by analyzing the up-to-date data residing in the database, rather than an older copy that could be obsolete by the time the analysis is concluded.

Ajay- What are the principal products competing with your offerings, and what makes your products special or differentiated in value (for each customer segment)?

a- There are two major players in today's market that we usually encounter as competitors: SAS and IBM.

SAS offers a data mining workbench in the form of SAS Enterprise Miner, which is closely tied to the SAS data mining methodology known as SEMMA.

On the other hand, IBM has recently acquired SPSS, which offered its Clementine data mining software. IBM has now rebranded Clementine as IBM SPSS Modeler.

In comparison to these products, our KnowledgeSTUDIO and KnowledgeSEEKER offer three main advantages: ease of use; affordability; and ease of integration into existing BI environments.

Angoss products were designed to look and feel like popular Microsoft Office applications. This makes the learning curve very short: typically, an intermediate-level analyst needs only 2-3 days of training to become proficient in the use of the software with all its advanced features.

Another important feature of Angoss software products is their integration with the SAS/Base product and with SQL-based database engines. All predictive models generated by Angoss can be automatically translated into SAS and SQL scripts, which allows the generation of scoring code for these common platforms. While the software interface simplifies all tasks so that business users can take advantage of the value added by predictive models, the software also includes advanced options that allow experienced statisticians to fine-tune their models by adjusting all model parameters as needed.

In addition, Angoss offers a unique product called StrategyBuilder, which allows the analyst to add key performance indicators (KPIs) to predictive models. KPIs such as profitability, market share, and loyalty usually need to be calculated in conjunction with any sales and marketing campaign. StrategyBuilder was therefore designed to integrate such KPIs with the results of a predictive model in order to render the appropriate treatment for each customer segment. These results are all integrated into a deployment strategy that can also be translated into execution code in SQL or SAS.

The above competitive features offered by Angoss's software products are behind its success in serving over 4,000 users at over 500 clients worldwide.

Ajay- Describe a major case study where using Angoss software helped save a large amount of revenue or costs through innovative data mining.

a- Rogers Telecommunications Inc. is one of the largest Canadian telecommunications providers, serving over 8.5 million customers with revenue of 11.1 billion Canadian dollars (2009). In 2008, Rogers engaged Angoss to help with the problem of accounts receivable that had ballooned over a period of 18 months.

The problem was approached by improving the efficiency of the call centre serving the collections process with a set of predictive models. The first set of models was designed to find, ahead of time, accounts likely to default, in order to take preventative measures. A second set of models was designed to optimize call centre resources to focus on delinquent accounts likely to pay back most of the outstanding balance. Accounts identified as not likely to pay back quickly were good candidates for “early-out” treatment, and were forwarded directly to collection agencies. Angoss hosted Rogers’ data and provided, at regular intervals, the lists of accounts for each treatment to be deployed by the call centre dialler. As a result, Rogers estimated a 10% improvement in the sums collected.

Biography-

Mamdouh has been active in consulting, research, and training in various areas of information technology and software development for the last 20 years. He has worked on numerous projects with major organizations in North America and Europe in the areas of data mining, business analytics, business analysis, and engineering analysis. He has held several consulting positions for solution providers, including Predict AG in Basel, Switzerland, and Angoss Corp. Mamdouh is the Director of Professional Services for the EMEA region of Angoss Software. He received his PhD in engineering from the University of Toronto and his MBA from the University of Leeds, UK.

Mamdouh is the author of:

  • “Credit Risk Scorecards: Development and Implementation Using SAS”
  • “Data Preparation for Data Mining Using SAS” (The Morgan Kaufmann Series in Data Management Systems)

and co-author of:

  • “Data Mining: Know It All” (Morgan Kaufmann)



Eberhard Miethke works as a senior sales executive for Angoss.

 

About Angoss-

Angoss is a global leader in delivering business intelligence software and predictive analytics to businesses looking to improve performance across sales, marketing and risk. With a suite of desktop, client-server and in-database software products and Software-as-a-Service solutions, Angoss delivers powerful approaches to turn information into actionable business decisions and competitive advantage.

Angoss software products and solutions are user-friendly and agile, making predictive analytics accessible and easy to use.

Interview Jaime Fitzgerald President Fitzgerald Analytics

Here is an interview with noted analytics expert Jaime Fitzgerald, of Fitzgerald Analytics.

Ajay- Describe your career journey from being a Harvard economist to being a text analytics thought leader.

 Jaime- I was attracted to economics because of the logic, the structured and systematic approach to understanding the world and to solving problems. In retrospect, this is the same passion for logic in problem solving that drives my business today.

About 15 years ago, I began working in consulting and initially took a traditional career path. I worked for well-known strategy consulting firms including First Manhattan Consulting Group, Novantas LLC, Braun Consulting, and for the former Japan-focused division of Deloitte Consulting, which had spun off as an independent entity. I was the only person in their New York City office for whom Japanese was not the first language.

While I enjoyed traditional consulting, I was especially passionate about the role of data, analytics, and process improvement. In traditional strategy consulting, these are important factors, but I had a vision for a “next generation” approach to strategy consulting that would be more transparent, more robust, and more focused on the role that information, analysis, and process play in improving business results. I often explain that while my firm is “not your father’s consulting model,” we have incorporated key best practices from traditional consulting and combined them with an approach that is more data-centric, technology-centric, and process-centric.

At the most fundamental level, I was compelled to found Fitzgerald Analytics more than six years ago by my passion for the role information plays in improving results, and ultimately improving lives. In my vision, data is an asset waiting to be transformed into results, including profit as well as other results that matter deeply to people. For example, one of the most fulfilling aspects of our work at Fitzgerald Analytics is our support of non-profits and social entrepreneurs, whom we help to increase their scale and their success in achieving their goals.

Ajay- How would you describe analytics as a career option to future students? What do you think are the most essential qualities an analytics career requires?

Jaime- My belief is that analytics will be a major driver of job-growth and career growth for decades. We are just beginning to unlock the full potential of analytics, and already the demand for analytic talent far exceeds the supply.

To succeed in analytics, the most important quality is logic. Many people believe that math or statistical skills are the most important quality, but in my experience, the most essential trait is what I call “ThoughtStyle” — critical thinking, logic, an ability to break down a problem into components, into sub-parts.

Ajay- What are your favorite techniques and methodologies in text analytics? How do you see social media and Big Data analytics as components of text analytics?

Jaime- We do a lot of work for our clients measuring customer experience, by which I mean the experience customers have when interacting with our clients. For example, we helped a major brokerage firm measure 12 key “Moments that Matter,” covering the operational aspects of customer service, customer satisfaction and sentiment, and ultimately customer behavior. Clients care about this a lot, because customer experience drives customer loyalty, which in turn drives customer behavior and customer profitability.

Text analytics plays a key role in these projects because much of our data on customer sentiment comes via unstructured text data. For example, we have access to call center transcripts and notes, to survey responses, and to social media comments.

We use a variety of methods, some of which I’m not in a position to describe in great detail. But at a high level, I would say that our favorite text analytics methodologies are “hybrid solutions” which use a two-step process to answer key questions for clients:

Step 1: convert unstructured data into key categorical variables (for example, using contextual analysis to flag users who are critics vs. neutrals vs. advocates)

Step 2: link sentiment categories to customer behavior and profitability (for example, linking customer advocacy and loyalty with customer profits as well as referral volume, to define the ROI clients accrue from customer satisfaction improvements)
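
A toy sketch in R of this two-step idea (my own illustration with made-up words and profit figures, not Fitzgerald's code; real contextual analysis is far richer):

> # Step 1: flag each comment as critic / neutral / advocate
> comments <- data.frame(text = c("love it", "terrible service", "ok", "great help"),
+                        profit = c(120, 15, 60, 150), stringsAsFactors = FALSE)
> score <- sapply(comments$text, function(t) {
+   words <- strsplit(tolower(t), " ")[[1]]
+   sum(words %in% c("love", "great")) - sum(words %in% c("terrible", "awful"))
+ })
> comments$segment <- cut(score, c(-Inf, -1, 0, Inf),
+                         labels = c("critic", "neutral", "advocate"))
> # Step 2: link the sentiment segments to profitability
> aggregate(profit ~ segment, data = comments, FUN = mean)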

Ajay- Describe your consulting company- Fitzgerald Analytics and some of the work that you have been engaged in.

 Jaime- Our mission is to “illuminate reality” using data and to convert Data to Dollars for our clients. We have a track record of doing this well, with concrete and measurable results in the millions of dollars. As a result, 100% of our clients have engaged us for more than one project: a 100% client loyalty rate.

Our specialties–and most frequent projects–include customer profitability management projects, customer segmentation, customer experience management, balanced scorecards, and predictive analytics. We are often engaged to address high-stakes analytic questions, including issues that help to set long-term strategy. In other cases, clients hire us to help them build their internal capabilities. We have helped build several brand new analytic teams for clients, which continue to generate millions of dollars of profits with their fact-based recommendations.

Our methodology is based on Stephen Covey’s principle “begin with the end in mind”: the concept of starting with the client’s goal and working backwards from there. I often explain that our methods are what you would have gotten if Stephen Covey had been a data analyst…we are applying his principles to the world of data analytics.

Ajay- Analytics requires more and more data, while privacy requires the least possible data. What do you think are the guidelines that need to be built into the sharing of internet browsing and user activity data, and do we need regulations just as we have for sharing financial data?

 Jaime- Great question. This is an essential challenge of the big data era. My perspective is that firms who depend on user data for their analysis need to take responsibility for protecting privacy by using data management best practices. Best practices to adequately “mask” or remove private data exist…the problem is that these best practices are often not applied. For example, Facebook’s practice of sharing unique user IDs with third-party application companies has generated a lot of criticism, and could have been avoided by applying data management best practices which are well known among the data management community.

If I were able to influence public policy, my recommendation would be to adopt a core set of simple but powerful data management standards that would protect consumers from perhaps 95% of the privacy risks they face today. The number one standard would be to prohibit sharing of static, personally identifiable user IDs between companies in a manner that creates “privacy risk.” Companies can track unique customers without using a static ID…they need to step up and do that.

Ajay- What is your favorite text analytics software to work with?

Jaime- Because much of our work is deeply embedded into client operations and systems, we often use the software our clients already prefer. We avoid recommending specific vendors unless our client requests it. In tandem with our clients and alliance partners, we have particular respect for Autonomy, Open Text, Clarabridge, and Attensity.

Biography-

http://www.fitzgerald-analytics.com/jaime_fitzgerald.html

The Founder and President of Fitzgerald Analytics, Jaime has developed a distinctively quantitative, fact-based, and transparent approach to solving high stakes problems and improving results.  His approach enables translation of Data to Dollars™ using methodologies clients can repeat again and again.  He is equally passionate about the “human side of the equation,” and is known for his ability to link the human and the quantitative, both of which are needed to achieve optimal results.

Experience: During more than 15 years serving clients as a management strategy consultant, Jaime has focused on customer experience and loyalty, customer profitability, technology strategy, information management, and business process improvement.  Jaime has advised market-leading banks, retailers, manufacturers, media companies, and non-profit organizations in the United States, Canada, and Singapore, combining strategic analysis with hands-on implementation of technology and operations enhancements.

Career History: Jaime began his career at First Manhattan Consulting Group, specialists in financial services, and was later a Co-Founder at Novantas, the strategy consultancy based in New York City.  Jaime was also a Manager for Braun Consulting, now part of Fair Isaac Corporation, and for Japan-based Abeam Consulting, now part of NEC.

Background: Jaime is a graduate of Harvard University with a B.A. in Economics.  He is passionate and supportive of innovative non-profit organizations, their effectiveness, and the benefits they bring to our society.

Upcoming Speaking Engagements: Jaime is a frequent speaker on analytics, information management strategy, and data-driven profit improvement. He recently gave keynote presentations on Analytics in Financial Services for The Data Warehousing Institute, the New York Technology Council, and the Oracle Financial Services Industry User Group. A list of Jaime’s most interesting presentations on analytics can be found here.

He will be presenting a client case study this fall at Text Analytics World on “New Insights from ‘Big Legacy Data’: The Role of Text Analytics”.

Connecting with Jaime: Jaime can be found on LinkedIn and Twitter. He edits the Fitzgerald Analytics Blog.

Credit Downgrade of USA and Triple A Whining

As a person trained, deployed, and often asked to comment on macroeconomic shenanigans, I have the following observations to make on the downgrade of US debt by S&P.

1) A credit rating is both a mathematical exercise of debt versus net worth and a measure of the intention to repay. Given the recent deadlock in the United States legislature over the debt ceiling, it is natural and correct to assume that holding US debt is slightly riskier in 2011 than in 2001. That means that if US debt was AAA in 2001, it is surely slightly riskier in 2011.

2) Politicians are criticized the world over in democracies, including India, the UK and the US. This is natural, healthy, and enforced by the checks and balances in each country's constitution. At the time of writing, there are protests in India on corruption, in the UK on economic disparities, in the US on debt vs. tax vs. spending, and in Israel on inflation. It is the maturity of the media as well as the average educational level of the citizenry that amplifies, inflames or dampens sentiment regarding policy and business.

3) Conspicuous consumption has failed at both an environmental and an economic level. Cheap debt to buy things you do not need may have made good macroeconomic sense as long as the things were made by people locally, but that is no longer the case. Outsourcing is not all evil, but it is surely not a perfect solution to economics and competitiveness. Is outsourcing good or bad? Well, it depends.

4) In 1944, the US took on debt to fight Nazism, build atomic power and generally wage a lot of war, producing many dual-use inventions. In 2004-2010, the US took on debt to fight wars in Iraq and Afghanistan and to bail out banks and automobile companies. Some erosion in the values represented by a free democracy has taken place, much to the delight of authoritarian regimes (who have managed to survive Google and Facebook).

5) A double-A rating is still quite a good rating. No one is moving out of US Treasuries. I mean, seriously, what are your alternatives for parking your government or central bank assets: euros, gold, oil, rare earth futures, metals or yen?

6) Income disparity as a trigger for social unrest in the UK, France and other parts of the world is an ominous looming threat that may lead to more action than the poor maths of S&P. It has been some time since riots occurred in the United States, and I believe in time series and cycles, especially given the rising Gini coefficients (a quick R sketch for computing a Gini coefficient follows the list below).

Gini indices for the United States at various times, according to the US Census Bureau:[8][9][10]

  • 1929: 45.0 (estimated)
  • 1947: 37.6 (estimated)
  • 1967: 39.7 (first year reported)
  • 1968: 38.6 (lowest index reported)
  • 1970: 39.4
  • 1980: 40.3
  • 1990: 42.8
    • (Recalculations made in 1992 added a significant upward shift for later values)
  • 2000: 46.2
  • 2005: 46.9
  • 2006: 47.0 (highest index reported)
  • 2007: 46.3
  • 2008: 46.69
  • 2009: 46.8
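
Since we are talking numbers, here is a quick way to compute a Gini coefficient in R from raw income data, using the mean-absolute-difference definition (the incomes vector below is made up):

> gini <- function(x) {
+   n <- length(x)
+   sum(abs(outer(x, x, "-"))) / (2 * n^2 * mean(x))
+ }
> incomes <- c(10, 20, 30, 40, 400)   # hypothetical income distribution
> gini(incomes)   # 0 = perfect equality, values near 1 = extreme inequality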

7) Again, I am slightly suspicious of an American corporation downgrading American government debt when it failed to reconcile its numbers by 2 trillion dollars and famously managed to avoid downgrading Lehman Brothers. What are the political affiliations of the S&P board? What are their backgrounds? Check the facts, Watson.

The Chinese government should be concerned if it is holding >1000 tonnes of gold and >1 trillion dollars of US Treasuries, lest we have a third Opium War (as either gold or US Treasuries will burst). Opium in 1850, like US Treasuries in 2010, had no inherent value except to those addicted to them.

8) Ron Paul and Paul Krugman are the two extremes of economic ideology in the US.

This reminds me of the old saying: robbing Peter to pay Paul. Both Pauls seem equally unhappy and biased.

I have to read both the WSJ and the NYT to make sense of what is actually happening in the US, as opinionated journalism has managed to elbow out fact-based journalism. Do we need analytics in journalism education and reporting?

9) Panic buying and selling leads to short-term arbitrage positions. People like Warren Buffett made more money in the crash of 2008 than most people did in the boom years of 2006-7.

If stocks are cheap, buy on the dips. Acquire companies before they go for IPOs. Buy back your own stock if you are sitting on a pile of cash. Buy some technology patents in cloud, mobile, tablet and statistical computing if you have a lot of cash and need to buy some long-term assets.

10) Follow all the advice above at your own risk and with no liability to this author 😉

 

The Top Statisticians in the World

http://en.wikipedia.org/wiki/John_Tukey

 

John Tukey

  • Born: June 16, 1915, New Bedford, Massachusetts, USA
  • Died: July 26, 2000 (aged 85), New Brunswick, New Jersey
  • Residence: United States
  • Nationality: American
  • Fields: Mathematics, statistics
  • Institutions: Bell Labs, Princeton University
  • Alma mater: Brown University, Princeton University
  • Doctoral advisor: Solomon Lefschetz
  • Doctoral students: Frederick Mosteller, Kai Lai Chung
  • Known for: the FFT algorithm, the box plot, coining the term “bit”
  • Notable awards: Samuel S. Wilks Award (1965), National Medal of Science, USA, in Mathematical, Statistical, and Computational Sciences (1973), Shewhart Medal (1976), IEEE Medal of Honor (1982), Deming Medal (1982), James Madison Medal (1984), Foreign Member of the Royal Society (1991)

John Wilder Tukey ForMemRS[1] (June 16, 1915 – July 26, 2000) was an American statistician.

Biography

Tukey was born in New Bedford, Massachusetts in 1915, and obtained a B.A. in 1936 and an M.Sc. in 1937, both in chemistry, from Brown University, before moving to Princeton University, where he received a Ph.D. in mathematics.[2]

During World War II, Tukey worked at the Fire Control Research Office and collaborated with Samuel Wilks and William Cochran. After the war, he returned to Princeton, dividing his time between the university and AT&T Bell Laboratories.

Among many contributions to civil society, Tukey served on a committee of the American Statistical Association that produced a report challenging the conclusions of the Kinsey Report, Statistical Problems of the Kinsey Report on Sexual Behavior in the Human Male.

He was awarded the IEEE Medal of Honor in 1982 “For his contributions to the spectral analysis of random processes and the fast Fourier transform (FFT) algorithm.”

Tukey retired in 1985. He died in New Brunswick, New Jersey on July 26, 2000.

Scientific contributions

His statistical interests were many and varied. He is particularly remembered for his development, with James Cooley, of the Cooley–Tukey FFT algorithm. In 1970, he contributed significantly to what is today known as jackknife estimation, also termed the Quenouille–Tukey jackknife. He introduced the box plot in his 1977 book, “Exploratory Data Analysis”.

Tukey’s range test, the Tukey lambda distribution, Tukey’s test of additivity and Tukey’s lemma all bear his name. He is also the creator of several little-known methods such as the trimean and the median–median line, an easier alternative to linear regression.

In 1974, he developed, with Jerome H. Friedman, the concept of projection pursuit.[3]

http://en.wikipedia.org/wiki/Ronald_Fisher

Sir Ronald Aylmer Fisher FRS (17 February 1890 – 29 July 1962) was an English statistician, evolutionary biologist, eugenicist and geneticist. Among other things, Fisher is well known for his contributions to statistics in creating Fisher’s exact test and Fisher’s equation. Anders Hald called him “a genius who almost single-handedly created the foundations for modern statistical science”[1], while Richard Dawkins named him “the greatest biologist since Darwin”.[2]

 


http://en.wikipedia.org/wiki/William_Sealy_Gosset

William Sealy Gosset (June 13, 1876 – October 16, 1937) is famous as a statistician, best known by his pen name Student and for his work on Student’s t-distribution.

Born in Canterbury, England, to Agnes Sealy Vidal and Colonel Frederic Gosset, Gosset attended Winchester College before reading chemistry and mathematics at New College, Oxford. On graduating in 1899, he joined the Dublin brewery of Arthur Guinness & Son.

Guinness was a progressive agro-chemical business, and Gosset would apply his statistical knowledge both in the brewery and on the farm, to the selection of the best-yielding varieties of barley. Gosset acquired that knowledge by study, by trial and error, and by spending two terms in 1906–7 in the biometric laboratory of Karl Pearson. Gosset and Pearson had a good relationship, and Pearson helped Gosset with the mathematics of his papers. Pearson helped with the 1908 papers, but he had little appreciation of their importance. The papers addressed the brewer’s concern with small samples, while the biometrician typically had hundreds of observations and saw no urgency in developing small-sample methods.

Another researcher at Guinness had previously published a paper containing trade secrets of the Guinness brewery. To prevent further disclosure of confidential information, Guinness prohibited its employees from publishing any papers regardless of the contained information. However, after pleading with the brewery and explaining that his mathematical and philosophical conclusions were of no possible practical use to competing brewers, he was allowed to publish them, but under a pseudonym (“Student”), to avoid difficulties with the rest of the staff.[1] Thus his most famous achievement is now referred to as Student’s t-distribution, which might otherwise have been Gosset’s t-distribution.

Contribution to #Rstats by Revolution

I have been watching Revolution Analytics' products almost since the inception of the company. It has managed to sail over storms, naysayers and critics with a simple and effective strategy: launching good software, making good partnerships, and keeping up media visibility with white papers, joint webinars, blogs, conferences and events.

Here is a listing of the technical contributions made by Revolution Analytics products to the #rstats project.

1) Useful packages, mostly for parallel processing and more efficient computing, such as foreach and iterators with their parallel backends doSNOW and doMC (a minimal sketch follows below).

 
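A minimal sketch of that style of parallel loop, assuming the foreach and doSNOW packages are installed:

> library(foreach)   # looping construct
> library(doSNOW)    # parallel backend; loads the snow package
> cl <- makeCluster(2, type = "SOCK")   # two local worker processes
> registerDoSNOW(cl)
> foreach(i = 1:4, .combine = c) %dopar% i^2   # iterations run on the workers
[1]  1  4  9 16
> stopCluster(cl)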

2) The RevoScaleR package, to beat R's memory problem (this is probably the best contribution in my opinion, as it is yet to be replicated by the open source version and is a clear-cut reason for going in for the paid version). A hedged sketch of its use follows the feature list below.

http://www.revolutionanalytics.com/products/enterprise-big-data.php

  • Efficient XDF File Format designed to efficiently handle huge data sets.
  • Data Step Functionality to quickly clean, transform, explore, and visualize huge data sets.
  • Data selection functionality to store huge data sets out of memory, and select subsets of rows and columns for in-memory operation with all R functions.
  • Visualize Large Data sets with line plots and histograms.
  • Built-in Statistical Algorithms for direct analysis of huge data sets:
    • Summary Statistics
    • Linear Regression
    • Logistic Regression
    • Crosstabulation
  • On-the-fly data transformations to include derived variables in models without writing new data files.
  • Extend Existing Analyses by writing user- defined R functions to “chunk” through huge data sets.
  • Direct import of fixed-format text data files and SAS data sets into .xdf format
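
To give a flavor of how this looks in practice, here is a hedged sketch: the rx* function names follow Revolution's documentation, the package ships only with the paid product, and the file and variable names here are made up.

> library(RevoScaleR)
> # import a large CSV into the chunked XDF format
> rxImport(inData = "big.csv", outFile = "big.xdf")
> # out-of-memory summary statistics and a linear model
> rxSummary(~ sales + price, data = "big.xdf")
> rxLinMod(sales ~ price, data = "big.xdf")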

 

3) RevoDeployR, for an API-based R solution. I somehow think this feature will become more important as time goes on, but it seems a lower-visibility offering right now.

http://www.revolutionanalytics.com/products/enterprise-deployment.php

  • Collection of Web services implemented as a RESTful API.
  • JavaScript and Java client libraries, allowing users to easily build custom Web applications on top of R.
  • .NET Client library — includes COM interoperability to call R from VBA
  • Management Console for securely administrating servers, scripts and users through HTTP and HTTPS.
  • XML and JSON format for data exchange.
  • Built-in security model for authenticated or anonymous invocation of R Scripts.
  • Repository for storing R objects and R Script execution artifacts.

 

4) Revolution's IDE (or Productivity Environment), for a faster coding environment than the command line. A GUI by Revolution Analytics is in the works. Having used the IDE, only the Code Snippets function is a clear differentiator from newer IDEs and GUIs. The code snippets feature is awesome, though: even someone who doesn't know much R can get an analysis set up quite fast and accurately.

http://www.revolutionanalytics.com/products/enterprise-productivity.php

  • Full-featured Visual Debugger for debugging R scripts, with call stack window and step-in, step-over, and step-out capability.
  • Enhanced Script Editor with hover-over help, word completion, find-across-files capability, automatic syntax checking, bookmarks, and navigation buttons.
  • Run Selection, Run to Line and Run to Cursor evaluation
  • R Code Snippets to automatically generate fill-in-the-blank sections of R code with tooltip help.
  • Object Browser showing available data and function objects (including those in packages), with context menus for plotting and editing data.
  • Solution Explorer for organizing, viewing, adding, removing, rearranging, and sourcing R scripts.
  • Customizable Workspace with dockable, floating, and tabbed tool windows.
  • Version Control Plug-in available for the open source Subversion version control software.

 

Marketing contributions from Revolution Analytics-

1) Sponsoring R sessions and user meets

2) Evangelizing R at conferences and partnering with corporate partners including JasperSoft, Microsoft, IBM and others at http://www.revolutionanalytics.com/partners/

3) Helping with online initiatives like http://www.inside-r.org/ (which is curiously dormant and now largely superseded by R-Bloggers.com) and the syntax highlighting tool at http://www.inside-r.org/pretty-r. In addition, Revolution has been proactive in reaching out to the community.

4) Helping pioneer blogging about R and Twitter hashtag discussions, and contributing to Stack Overflow discussions. Within a short while, the #rstats online community has overtaken many more established names, partly due to the decentralized nature of its working.

 

Did I miss something? Yes: they share their code under the GPL.

 

Let me know via feedback.