Why Cloud?

Here are some reasons why cloud computing is very helpful to small business owners like me, and can be just as helpful to much bigger organizations.

1) Infrastructure Overhead becomes zero

– I need NOT invest in secure power backups (like a big battery for electricity outages, a real concern in India), data disaster management (read RAID), or software licensing compliance.

All this is done for me by infrastructure providers like Google and Amazon.

For simple office productivity, I type on Google Docs, which auto-saves my data, writing on the cloud. I need not take backups; Google does it for me. Ditto for presentations and spreadsheets. Amazon gets me the latest Windows software installed whenever I log on, so I need not be bothered by software contracts (read bug fixes and patches) any more.

2) Renting Hardware by the hour- A small business owner cannot invest too much in computing hardware (or software). Pay-as-you-use makes sense for them. I could never afford an 8-core desktop with 25 GB of RAM, but I sure can rent one and use it to bid for heavier data projects that I would have had to let go in the past.

3) Renting software by the hour- You may have bought your last PC for all time

An example- A Windows micro instance costs you 3 cents per hour on Amazon. If you take a mathematical look at upgrading your PC to the latest Windows and buying more and more upgraded desktops just to keep up, those costs exceed 3 cents per hour. For Unix it is 2 cents per hour, and those operating systems (like Red Hat Linux and Ubuntu) have become increasingly design-friendly even for non-techie users.
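To make that arithmetic concrete, here is a quick back-of-the-envelope comparison in R. The desktop-side figures (a $600 machine plus a $150 OS license, refreshed every three years) are purely my illustrative assumptions, not numbers from Amazon or from anywhere else-

# Back-of-the-envelope: renting vs. owning (desktop figures are assumptions)
hours_per_year <- 8 * 250                  # 8 working hours/day, ~250 days/year
cloud_per_year <- 0.03 * hours_per_year    # Windows micro instance at $0.03/hr
desktop_per_year <- (600 + 150) / 3        # assumed $600 PC + $150 OS license,
                                           # replaced every 3 years
c(cloud = cloud_per_year, desktop = desktop_per_year)   # ~$60 vs ~$250 a year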

Some other software companies, especially in enterprise software, plan to offer (or already offer) paid machine images that basically add their software layer on top of the OS, so you can rent software by the hour.

It does not make sense for customers to effectively subsidize golf tournaments, rock concerts, and conference networking with their own money when they can rent software by the hour and switch to pay-per-use.

People in analytics, especially SME consultants, academics, students, and cost-conscious customers, would love to see a world where they could, say, run SAS Enterprise Miner for 10 dollars an hour for two hours to build a data mining model on 25 GB of RAM, rather than hurt their pockets and profitability with annual license models. Ditto for SPSS, JMP, KXEN, Revolution R, Oracle Data Mining (already available on Amazon), SAP (??), WPS (on cloud????). It’s the economy, stupid.

Corporates have realized that cutting down on hardware and software expenses is preferable to cutting down people. Would you rather fire people in your own team to buy that big HP or Dell or IBM server (effectively subsidizing jobs in those companies)? If you had to choose between an annual license renewal for your analytics software and renting software by the hour, using the savings for better benefits for your employees, which makes business sense for you to invest in?

Goodbye annual license fees.  Welcome brave new world.

IBM SPSS 19: Marketing Analytics and RFM

What is RFM Analysis?

Recency Frequency Monetization is basically a technique to classify your entire customer list. You may be a retail player with thousands of customers or an enterprise software seller with only two dozen customers.

RFM Analysis can help you cut through the noise and focus on the real customers that drive your profit.

As per Wikipedia

http://en.wikipedia.org/wiki/RFM

RFM is a method used for analyzing customer behavior and defining market segments. It is commonly used in database marketing and direct marketing and has received particular attention in retail.

RFM stands for

  • Recency – How recently a customer has purchased?
  • Frequency – How often he purchases?
  • Monetary Value – How much does he spend?

To create an RFM analysis, one creates categories for each attribute. For instance, the Recency attribute might be broken into three categories: customers with purchases within the last 90 days; between 91 and 365 days; and longer than 365 days. Such categories may be arrived at by applying business rules, or using a data mining technique, such as CHAID, to find meaningful breaks.

—————————————————————————————————-
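Even before opening a GUI, RFM scoring is simple enough to sketch in a few lines of base R. Below is a minimal illustration using the 90/365-day recency breaks from the Wikipedia passage above; the orders table and the frequency and monetary cut-offs are hypothetical, made up for this example-

# Minimal RFM scoring sketch in base R (illustrative data and cut-offs)
orders <- data.frame(
  customer_id = c("A", "A", "B", "C", "C", "C"),
  order_date  = as.Date(c("2010-09-01", "2010-06-15", "2009-12-20",
                          "2010-08-30", "2010-07-11", "2010-02-02")),
  amount      = c(120, 80, 45, 300, 150, 60)
)
snapshot <- as.Date("2010-10-01")   # "today" for computing recency

recency   <- tapply(orders$order_date, orders$customer_id,
                    function(d) as.numeric(snapshot - max(d)))  # days since last buy
frequency <- tapply(orders$amount, orders$customer_id, length)  # number of orders
monetary  <- tapply(orders$amount, orders$customer_id, sum)     # total spend
rfm <- data.frame(recency, frequency, monetary)

# Score each dimension 1-3; recency uses the 90/365-day business-rule breaks
rfm$R <- ifelse(rfm$recency <= 90, 3, ifelse(rfm$recency <= 365, 2, 1))
rfm$F <- ifelse(rfm$frequency >= 3, 3, ifelse(rfm$frequency == 2, 2, 1))
rfm$M <- ifelse(rfm$monetary >= 200, 3, ifelse(rfm$monetary >= 100, 2, 1))
rfm$score <- paste(rfm$R, rfm$F, rfm$M, sep="")   # e.g. "333" = best customers
rfm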

Even if you don't know what RFM is or how to do it, see below for an easy way.

I just got myself an evaluation copy of a fully loaded IBM SPSS 19 module and did some RFM analysis on some data. The way the recent version of SPSS is designed makes it very, very usable even for a non-statistical user, and an extremely useful tool for a business or marketing user.

Here are some screenshots to describe the features.

1) A simple dashboard to show functionality (with room for improvement for visual appeal)

2) Simple, intuitive design for inputting data

3) Some options in creating marketing scorecards

4) Easy-to-understand features for a business audience rather than pseudo-techie jargon

5) Note the clean design of the GUI in specifying the data input type

6) Again, multiple options to export results in a very user-friendly manner, with options to customize the business report

7) Graphical output conveniently pasted inside a Word document rather than a jumble of images, with auto-generated options for customized standard graphs

8) An attractive heatmap to represent monetization for customers. Note the effect that a scale of color shades has in the visual representation of data.

9) Comparative plots placed side by side with easy-to-understand explanations (in the output Word doc, not shown here)

10) Auto-generated scores attached to the data table to enhance usage.

Note that here I am evaluating not only RFM as a marketing technique (which is well known) but also the GUI of IBM SPSS 19 Marketing Analytics. It is simple yet powerful, turning what used to be purely statistical software for nerds into a beautiful, easy-to-use tool for business users.

So what else can you do in Marketing Analytics with SPSS 19?

IBM SPSS Direct Marketing

The Direct Marketing add-on option allows organizations to ensure their marketing programs are as effective as possible, through techniques specifically designed for direct marketing, including:

• RFM Analysis. This technique identifies existing customers who are most likely to respond to a new offer.

• Cluster Analysis. This is an exploratory tool designed to reveal natural groupings (or clusters) within your data. For example, it can identify different groups of customers based on various demographic and purchasing characteristics (an R sketch of this appears after this list).

• Prospect Profiles. This technique uses results from a previous or test campaign to create descriptive profiles. You can use the profiles to target specific groups of contacts in future campaigns.

• Postal Code Response Rates. This technique uses results from a previous campaign to calculate postal code response rates. Those rates can be used to target specific postal codes in future campaigns.

• Propensity to Purchase. This technique uses results from a test mailing or previous campaign to generate propensity scores. The scores indicate which contacts are most likely to respond (see the R sketch after this list).

• Control Package Test. This technique compares marketing campaigns to see if there is a significant difference in effectiveness for different packages or offers.

Click here to find out more about Direct Marketing.
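SPSS exposes these techniques through its GUI, and no syntax is shown above, so as a rough open-source analogue here is a sketch in R of two of the listed techniques: cluster analysis via k-means and propensity to purchase via logistic regression. The contacts data frame and its column names are invented for illustration and are not part of SPSS-

# Open-source sketch of two of the techniques above (invented data)
set.seed(42)
contacts <- data.frame(
  age        = round(runif(500, 18, 75)),
  income     = round(rnorm(500, 50000, 15000)),
  prior_buys = rpois(500, 3),
  responded  = rbinom(500, 1, 0.15)   # outcome from a past test campaign
)

# Cluster Analysis: k-means on scaled variables to reveal natural segments
seg <- kmeans(scale(contacts[, c("age", "income", "prior_buys")]),
              centers = 3, nstart = 25)
contacts$segment <- seg$cluster
aggregate(cbind(age, income, prior_buys) ~ segment, data = contacts, FUN = mean)

# Propensity to Purchase: logistic regression scored back onto the contacts
fit <- glm(responded ~ age + income + prior_buys, data = contacts,
           family = binomial)
contacts$propensity <- predict(fit, type = "response")
head(contacts[order(-contacts$propensity), ])   # most likely responders first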

Economic: Indian Caste System -Simplification

I am often asked by Western and other non-Indian people about the caste system. It trips me up a lot trying to explain the complexity, the necessity, and the current scenario, given the history.

Here is an effort- The Indian/Hindu caste system was primarily an economic system to divide labor. In the original Manusmriti, named after the king Manu, it was flexible.

A son of a blue-collar worker could become a warrior if he was brave, and so on.

A couple of centuries later, the top castes, primarily the priests, decided to make it rigid. No more social intermingling or marriage between castes, and no more migration of occupation, regardless of merit.

This led to a lot of lower-caste people leaving Hinduism to join religions like Islam (post 1000 AD, with the Muslim invasions and Mughal rule) and Christianity (post the arrival of the English).


Post 1947, many of the “lower castes” preferred to remain within Hinduism but adopted Buddhism as their primary worship mechanism. Also, India‘s leaders in the 1940s, many of whom were educated in the UK as lawyers (including Mahatma Gandhi, Subhash Chandra Bose, and Jawaharlal Nehru), decided this system had weakened the nation state and divided the energies of India, besides being obviously inhumane and degrading.

The Constitution of India was shepherded in 1950 by an assembly led by Dr. B R Ambedkar, one of the very first educated members of the lower castes (also called Harijan, after Mahatma Gandhi’s name for them, literally Hari-jan, people of the Lord). That Constitution endures, and India remains the finest example of a democracy in the non-Western world.

The Indian constitution established 15% reservation in government jobs and educational institutes at the college and master's level only for the lowest and most educationally backward castes (hence called Scheduled Castes), and 7.5% reservation in government jobs only for tribal people (hence called Scheduled Tribes). The provision is renewed every 10 years. Think of it as constitutionally bound affirmative action.

In 1990, another 27% of jobs and educational seats were reserved for castes that were socially okay but educationally backward. This caused some riots, delays, and political action, but it was finally implemented by 2007.

Opponents of the new affirmative action say that this is like doing two wrongs to make a right. Supporters say data proves that reservation has led to social advancement (especially in the state of Tamil Nadu). Rollback of the new system is a political impossibility, thanks to unity among hitherto repressed classes.

As an upper caste Hindu (embarrassingly enough, my caste is both a warrior and a kingly royal caste, which gives me zero benefit in 2010 AD)……..

In God we Trust. All others must bring Data.

Unfortunately, when it comes to politics, the same data is either hidden, partially hidden, or interpreted in different ways, especially with regard to projecting sampling errors or decisions.

Phew…!! That was an analytical layman's definition of the Indian caste system over 2000 years.

Note- The Indian soldier caste is Kshatriyas not Kshatritas..

Creating 3D Graphs with Data in R

Creating a 3D scatterplot from data is a two-minute task in R using the wonderful R Commander GUI. You can see an example video-

I loaded R, then loaded the GUI, input the data (from an attached package, but you can input data from a CSV), then went to Graphs – 3D ScatterPlot.

Here is the result-

and here is the video.

Not bad for 2 minutes of clicking a GUI.

Here is the code auto-generated by R Commander (the scatter3d() function it calls is provided via R Commander's car package).

data(iris3, package="datasets")
iris3 <- as.data.frame(iris3)
names(iris3) <- make.names(names(iris3))
library(rgl, pos=4)
library(mgcv, pos=4)
scatter3d(iris3$Petal.W..Setosa, iris3$Petal.L..Setosa, iris3$Sepal.L..Setosa,
  fit="linear", residuals=TRUE, bg="black", axis.scales=TRUE, grid=TRUE,
  ellipsoid=FALSE, xlab="Petal.W..Setosa", ylab="Petal.L..Setosa",
  zlab="Sepal.L..Setosa")
scatter3d(iris3$Petal.L..Versicolor, iris3$Petal.L..Setosa, iris3$Petal.L..Virginica,
  fit="linear", residuals=TRUE, bg="white", axis.scales=TRUE, grid=TRUE,
  ellipsoid=FALSE, xlab="Petal.L..Versicolor", ylab="Petal.L..Setosa",
  zlab="Petal.L..Virginica")
rgl.snapshot("C:/Documents and Settings/abc/Desktop/RGLGraph.png")

Better Data Visualization in WordPress.com Stats

WordPress.com Stats is the analytics software that helps bloggers on WP.com-hosted blogs. It recently underwent a design revamp-

Note how a simple change from line charts to histograms, along with added tabs, can add so much value to data.

However, WP.com really needs to add in geo-coded stats (where visitors come from) and some level of campaign tracking (similar to Goals in Google Analytics).

Earlier WP Stats

Now WP Stats

Interview: Dean Abbott, Abbott Analytics

Here is an interview with noted analytics consultant and trainer Dean Abbott. Dean is scheduled to conduct a workshop on predictive analytics at PAW (Predictive Analytics World Conference) on Oct 18, 2010 in Washington, D.C.

Ajay- Describe your upcoming hands-on workshop at Predictive Analytics World and how it can help people learn more about predictive modeling.

Refer- http://www.predictiveanalyticsworld.com/dc/2010/handson_predictive_analytics.php

Dean- The hands-on workshop is geared toward individuals who know something about predictive analytics but would like to experience the process. It will help people in two regards. First, by going through the data assessment, preparation, modeling and model assessment stages in one day, the attendees will see how predictive analytics works in reality, including some of the pain associated with false starts and mistakes. At the same time, they will experience success with building reasonable models to solve a problem in a single day. I have found that for many, having to actually build the predictive analytics solution is an eye-opener. Demonstrations show the capabilities of a tool, but the greater value for an end-user is the development of intuition about what to do at each stage of the process, which makes the theory of predictive analytics real.

Second, they will gain experience using a top-tier predictive analytics software tool, Enterprise Miner (EM). This is especially helpful for those who are considering purchasing EM, but also for those who have used open source tools and have never experienced the additional power and efficiencies that come with a tool that is well thought out from a business solutions standpoint (as opposed to an algorithm workbench).

Ajay- You are an instructor with software ranging from SPSS, S-Plus, SAS Enterprise Miner, Statistica and CART. What features of each software do you like best, and which are more suited for application in particular data cases?

Dean- I’ll add Tibco Spotfire Miner, Polyanalyst and Unica’s Predictive Insight to the list of tools I’ve taught “hands-on” courses around, and there are at least a half dozen more I demonstrate in lecture courses (JMP, Matlab, Wizwhy, R, Ggobi, RapidMiner, Orange, Weka, RandomForests and TreeNet to name a few). The development of software is a fascinating undertaking, and each tool has its own strengths and weaknesses.

I personally gravitate toward tools with data flow / icon interface because I think more that way, and I’ve tired of learning more programming languages.

Since the predictive analytics algorithms are roughly the same (backprop is backprop no matter which tool you use), the key differentiators are

(1) how data can be loaded in and how tightly integrated can the tool be with the database,

(2) how well big data can be handled,

(3) how extensive are the data manipulation options,

(4) how flexible are the model reporting options, and

(5) how can you get the models and/or predictions out.

There are vast differences in the tools on these matters, so when I recommend tools for customers, I usually interview them quite extensively to understand better how they use data and how the models will be integrated into their business practice.

A final consideration is related to the efficiency of using the tool: how much automation can one introduce so that user-interaction is minimized once the analytics process has been defined. While I don’t like new programming languages, scripting and programming often helps here, though some tools have a way to run the visual programming data diagram itself without converting it to code.

Ajay- What are your views on the increasing trend of consolidation and mergers and acquisitions in the predictive analytics space? Does this increase the need for vendor-neutral analysts and consultants, as well as conferences?

Dean- When companies buy a predictive analytics software package, it’s a mixed bag. SPSS’s purchase of Clementine was ultimately good for predictive analytics, though it took several years for SPSS to figure out what they wanted to do with it. Darwin ultimately disappeared after being purchased by Oracle, but the newer Oracle data mining tool, ODM, integrates better with the database than Darwin did or even would have been able to.

The biggest trend and pressure for the commercial vendors is the improvements in the Open Source and GNU tools. These are becoming more viable for enterprise-level customers with big data, though from what I’ve seen, they haven’t caught up with the big commercial players yet. There is great value in bringing both commercial and open source tools to the attention of end-users in the context of solutions (rather than sales) in a conference setting, which is I think an advantage that Predictive Analytics World has.

As a vendor-neutral consultant, flux is always a good thing because I have to be proficient in a variety of tools, and it is the breadth that brings value for customers entering into the predictive analytics space. But it is very difficult to keep up with the rapidly-changing market and that is something I am weighing myself: how many tools should I keep in my active toolbox.

Ajay- Describe your career and how you came into the predictive analytics space. What are your views on the various MS Analytics programs offered by universities?

Dean- After getting a masters degree in Applied Mathematics, my first job was at a small aerospace engineering company in Charlottesville, VA called Barron Associates, Inc. (BAI); it is still in existence and doing quite well! I was working on optimal guidance algorithms for some developmental missile systems, and statistical learning was a key part of the process, so I cut my teeth on pattern recognition techniques there, and frankly, that was the most interesting part of the job. In fact, most of us agreed that this was the most interesting part: John Elder (Elder Research) was the first employee at BAI, and was there at that time. Gerry Montgomery and Paul Hess were there as well and left to form a data mining company called AbTech, and both are still in the analytics space.

After working at BAI, I had short stints at Martin Marietta Corp. and PAR Government Systems, where I worked on analytics solutions in DoD, primarily radar and sonar applications. It was while at Elder Research in the 90s that I began working more in the commercial space, in financial and risk modeling, and then in 1999 I began working as an independent consultant.

One thing I love about this field is that the same techniques can be applied broadly, and therefore I can work on CRM, web analytics, tax and financial risk, credit scoring, survey analysis, and many more applications, and cross-fertilize ideas from one domain into other domains.

Regarding MS degrees, let me first write that I am very encouraged that data mining and predictive analytics are being taught in specific classes and programs rather than as just an add-on to an advanced statistics or business class. That stated, I have mixed feelings about analytics offerings at universities.

I find that most provide a good theoretical foundation in the algorithms, but are weak in describing the entire process in a business context. For those building predictive models, the model-building stage nearly always takes much less time than getting the data ready for modeling and reporting results. These are cross-discipline tasks, requiring some understanding of the database world and the business world for us to define the target variable(s) properly and clean up the data so that the predictive analytics algorithms work well.

The programs that have a practicum of some kind are the most useful, in my opinion. There are some certificate programs out there that have more of a business-oriented framework, and the NC State program builds an internship into the degree itself. These are positive steps in the field that I’m sure will continue as predictive analytics graduates become more in demand.

Biography-

DEAN ABBOTT is President of Abbott Analytics in San Diego, California. Mr. Abbott has over 21 years of experience applying advanced data mining, data preparation, and data visualization methods to real-world data-intensive problems, including fraud detection, response modeling, survey analysis, planned giving, predictive toxicology, signal processing, and missile guidance. In addition, he has developed and evaluated algorithms for use in commercial data mining and pattern recognition products, including polynomial networks, neural networks, radial basis functions, and clustering algorithms, and has consulted with data mining software companies to provide critiques and assessments of their current features and future enhancements.

Mr. Abbott is a seasoned instructor, having taught a wide range of data mining tutorials and seminars for a decade to audiences of up to 400, including DAMA, KDD, AAAI, and IEEE conferences. He is the instructor of well-regarded data mining courses, explaining concepts in language readily understood by a wide range of audiences, including analytics novices, data analysts, statisticians, and business professionals. Mr. Abbott also has taught both applied and hands-on data mining courses for major software vendors, including Clementine (SPSS, an IBM Company), Affinium Model (Unica Corporation), Statistica (StatSoft, Inc.), S-Plus and Insightful Miner (Insightful Corporation), Enterprise Miner (SAS), Tibco Spotfire Miner (Tibco), and CART (Salford Systems).