Interview – Phil Rack

 

Phil Rack is the creator of the Bridge to R for WPS and the SAS Bridge to R, which enable both WPS and SAS software to connect to R. He is also a WPS reseller. WPS is a Base SAS equivalent that can take in SAS code and SAS datasets, write SAS code, and create SAS datasets (and also its own format), at a cost of $660 a license (and almost one tenth the cost of a SAS Institute installation on network servers). Having worked in SAS-language and analytics consulting for almost 26 years, Phil runs www.minequest.com and also runs the SAS Consultants Network, which mentors analytics consultants globally (I am an ex-member :))

Ajay- What has been your career journey. What advice would you give to someone entering a science career after high school?

Phil- I started out consulting full-time in 1983. I left an analytics job with McMillan-McGraw-Hill Publishing because I didn’t believe the company was investing in BI tools and training as it should. That was pretty early in terms of when BI was becoming important.

Many companies at that point saw BI as only two things.

(a) Ability to forecast sales and

(b) ad-hoc reporting with sums/totals and percentages.

It was obvious to me that I had to make a change to do the kind of work I wanted to do. In terms of training, I was formally trained as a demographer and did my graduate studies at Ohio State so I received a pretty good dose of quantitative subject matter as well as a unique perspective on the social implications of markets and geography. If I had to do it over again, I would probably take more course work in the subject area of the “Family.”  I’m always amazed how many times the work I do in banking and finance revolves around the family lifecycle.

 

Ajay- What has been the biggest project success you have seen in your consulting practice?

Phil- This goes back three years to a project where I was working on Basel II compliance with a commercial banking client that I just loved working with.

A few months into the engagement, they pulled me aside and asked me to put together an automotive portfolio stress test for them. This bank had very large loan exposures to the auto market for second and third tier suppliers to the Big Three as well as international auto manufacturers.

The Risk Management group and I sat down for a couple of days and pulled together a project plan and an outline of what we needed to be able to implement a dynamic Auto Risk Stress Test Model for this portfolio. The software used was SAS/Base and Excel, and the program allowed us to modify 50 to 60 parameters to model different scenarios. Altogether, it took perhaps three weeks to implement, and it was amazingly indicative of the fallout in the auto industry as well as foreshadowing some of the financial carnage in southeastern Michigan, such as lower property values and unemployment.

 

Ajay- “It is not what you know, it is whom you know.” Comment please as an SAS consultant.

Phil- In terms of my business, 80-90% of the work I do is either based on prior work that I’ve done for that company or through referrals.  If you want to have a successful consulting career, you really have to pay attention to developing your network. I’ve taken advantage of social gatherings such as charity events and other social mixers to try to extend my network. I hand out a lot of business cards every year. Formal organizations exist here in Columbus, Ohio, such as TechColumbus.org, a dynamite organization that helps small tech businesses in the areas of networking, financing, access to different hardware platforms for testing, etc.  I have mixed emotions about the value of some social networks, however.

I see so many individuals on LinkedIn that have 5,000 connections that I have to wonder what it is these folks really do. Who has the time to read all the updates and postings for 5,000 people and still be able to get work out the door? (Note from Ajay- I have 6,300 connections on LinkedIn. Ouch!!)

 

Ajay- What motivated you to write the SAS to R and WPS to R bridges? (Which IS your favorite analytical tool, since you are active in all three?)

Phil- It started out as a “proof-of-concept” exercise and it just keeps growing. The WPS to R Bridge is a piece of software that I wrote originally for WPS users to access R from within the WPS Workbench. For those who are unfamiliar with WPS, it’s a SAS/Base alternative that is extremely compatible with your existing SAS/Base software, and your code is just plug-and-play. WPS doesn’t have the statistical capabilities of SAS such as SAS/STAT, ETS, OR, etc., so the idea was to write a bridge so that WPS users wouldn’t have to learn a new GUI/IDE to use R. The Bridge gives WPS users access to R graphics as well as any of the R statistical libraries, but with the advantage of the superior data handling of the SAS language. One of the new features is the ability of the WPS to R Bridge to run R programs in parallel. Depending on your hardware, you can easily run six to a dozen R programs simultaneously and collect the R listing and log files back into the WPS listing and log in the order you submitted the programs.
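The Bridge's parallel submission is driven from the WPS side, so the following is only a conceptual illustration in plain R of the underlying idea: several R scripts run concurrently, and their logs are collected back in submission order. The script file names are hypothetical placeholders, not part of the Bridge's actual interface.

```r
# Illustration only: run several R scripts concurrently and gather their logs,
# roughly the idea behind the WPS to R Bridge's parallel job submission.
library(parallel)

scripts <- c("model_a.R", "model_b.R", "model_c.R")  # hypothetical file names

run_script <- function(path) {
  # Capture everything the script prints so it can be returned as its "log"
  log_lines <- capture.output(source(path, echo = TRUE), type = "output")
  list(script = path, log = log_lines)
}

# mclapply forks one worker per script (use parLapply on Windows);
# results come back in the order the scripts were submitted.
results <- mclapply(scripts, run_script,
                    mc.cores = min(length(scripts), detectCores()))

for (res in results) {
  cat("----", res$script, "----\n")
  cat(res$log, sep = "\n")
}
```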

I did write a Bridge to R for SAS users but very few SAS users have expressed interest in it. I suppose that SAS users are happy enough paying the fat licensing fees to SAS that it just doesn’t matter to them. I have to say, my favorite tool at the moment is WPS. I find the interface/workbench to be so superior to what SAS has to offer that I now find myself writing code in WPS and then taking it over to SAS if that’s what the client requires.

 

Ajay- What do you think about internet-based delivery and social networking, including communities and lists, changing the software product cycle?

Phil- This somewhat goes back to question #3 in terms of communities. I think it has its value as a place to share your concerns and find answers to difficult programming issues. Now, Internet delivery and cloud computing I find very interesting. I think there are some strong advantages to using the cloud to provide services to your clients. If you look at the SAS pricing model, they really take it to you financially if you want to use your license to be a DSP (data service provider) or put your code on an intranet/internet. For some reason, SAS is just hostile when it comes to small and medium sized businesses. Companies like World Programming, who license WPS, have a much more realistic idea of licensing in that you can expose your WPS license to your intranet/internet and not have to pay 10x the fees that SAS charges. WPS doesn’t charge additional fees for those who are DSPs either, and there are quite a few of them in the Pharma domain.

Beyond security challenges associated with cloud computing, I think SaaS that provides analytical services such as high performance forecasting and name and address cleanup and verification is ripe for the picking. One other issue I see with cloud computing is when you have tens of gigs of data that you have to move from your desktop or server to the cloud. The infrastructure just isn’t fast enough, or let’s say reasonably priced, to allow for moving this amount of data to really scale well.
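As a rough back-of-envelope illustration of that data movement problem (the link speeds below are assumptions for the sake of arithmetic, not figures from the interview):

```r
# Time to move a dataset to the cloud at an assumed sustained uplink speed
data_gb   <- 50      # "tens of gigs" of data, per the example above
link_mbps <- 100     # assumed uplink in megabits per second

data_megabits <- data_gb * 1024 * 8
hours <- data_megabits / link_mbps / 3600
round(hours, 1)      # ~1.1 hours at 100 Mbps; roughly 11 hours at 10 Mbps
```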

Ajay- How does MineQuest intend to influence the analytical software paradigm?

Phil- I think the role for MineQuest in the next few years is twofold.

We’ll keep offering services to banks and other financial service firms in the area of Operational Risk and SAS programming.

The other area is to help these large financial service companies realize that they can save millions of dollars by moving their SAS Server licenses to WPS. This also allows the smaller businesses who have steered away from SAS software because of cost to begin using WPS and not take such a big financial hit. I find it exciting to think how this will also open the job market for the thousands of SAS programmers out there already.

The BI battles are taking place on the desktop and Windows Servers and MineQuest has invested a lot of time and effort in creating macro libraries to help these organizations migrate their code to WPS and access R for advanced statistical capabilities.

We believe that the bread and butter software for almost any financial organization in the BI realm ultimately revolves around the SAS language for reporting, summarization and disbursement of data and we plan to continue to serve that market.

About MineQuest –

MineQuest has been providing SAS Consulting and Programming Services for more than 25 years. Our associates and employees are expert SAS programmers and specialize in the Banking and Financial Industries. Our staff has expertise in such areas as Market Analytics, ETL and Reporting Systems, Fraud Detection, and the Credit Risk and Operational Risk segments. Validating Operational Risk models using SAS, in support of the Basel II Capital Framework, is one of our specialties. We have real-world experience developing SAS software to test and validate Credit and Operational Risk systems such as Fair Isaac’s Blaze Advisor, one of our areas of subject matter expertise.

MineQuest, LLC

SAS & WPS Consulting and WPS Reseller

Tel: (614) 457-3714

Web: www.MineQuest.com

Blog: www.MineQuest.com/WordPress


( Ajay –

The SAS language mainly uses Procs and the Data step for output and input. Base SAS is a product copyrighted by the SAS Institute (www.sas.com). SAS Institute has been leading the analytics world since the 70s. WPS is a copyright of World Programming Company (WPC) (www.teamwpc.co.uk/products/wps). )

Interview – Michael Zeller, CEO, Zementis

As mentioned before, Zementis is at the forefront of using Cloud Computing (Amazon EC2) for open source analytics. Recently I came in contact with Michael Zeller over a business problem, and Mike, being the gentleman he is, not only helped me out but also agreed to an extensive and exclusive interview (!)


Ajay- What are the traditional rivals to the scoring solutions offered by you? How does ADAPA compare to each of them? Case study: assume I have 50,000 leads daily on a car-buying website. How would ADAPA help me in scoring the model (created, say, by KXEN, R, SAS, or SPSS)? What would my approximate cost advantages be if I intend to mail, say, the top 5 deciles every day?

Michael- Some of the traditional scoring solutions used today are based on SAS, on in-database scoring in Oracle or MS SQL Server, or, very often, even on custom code.  ADAPA is able to import models from all tools that support the PMML standard, so any of the above tools, open source or commercial, could serve as an excellent development environment.

The key differentiators for ADAPA are simple and focus on cost-effective deployment:

1) Open Standards – PMML & SOA:

Freedom to select best-of-breed development tools without being locked into a specific vendor;  integrate easily with other systems.

2) SaaS-based Cloud Computing:

Delivers a quantum leap in cost-effectiveness without compromising on scalability.

In your example, I assume that you’d be able to score your 50,000 leads in one hour using one ADAPA engine on Amazon.  Therefore, you could choose to either spend US$100,000 or more on hardware, software, maintenance, IT services, etc., write a project proposal, get it approved by management, and be ready to score your model in 6-12 months…

OR, you could use ADAPA at something around US$1-$2 per day for the scenario above and get started today!  To get my point across here, I am of course simplifying the scenario a little bit, but in essence these are your choices.
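To make the arithmetic behind those figures explicit, here is a minimal sketch using only the numbers quoted in this answer and the "less than US$1 per machine hour" rate mentioned below; it is an illustration, not a price quote.

```r
# Back-of-envelope cost for the 50,000-leads-per-day scoring scenario
leads_per_day  <- 50000
leads_per_hour <- 50000   # assumed throughput of one ADAPA engine (per the text)
rate_per_hour  <- 1.00    # roughly US$1 per machine hour on Amazon EC2

hours_needed <- ceiling(leads_per_day / leads_per_hour)  # 1 hour per day
daily_cost   <- hours_needed * rate_per_hour             # ~US$1 per day
monthly_cost <- daily_cost * 30                          # ~US$30 per month
c(daily = daily_cost, monthly = monthly_cost)
```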

Sounds too good to be true?  We often get this response, so please feel free to contact us today [http://www.zementis.com/contact.htm] and we will be happy to show you how easy it can be to deploy predictive models with ADAPA!

 

Ajay- The ADAPA solution seems to save money on both hardware and software costs. Please comment. Also, what benchmarking tests have you done on a traditional scoring configuration versus ADAPA?

Michael-Absolutely, the ADAPA Predictive Analytics Edition [http://www.zementis.com/predictive_analytics_edition.htm] on Amazon’s cloud computing infrastructure (Amazon EC2) eliminates the upfront investment in hardware and software.  It is a true Software as a Service (SaaS) offering on Amazon EC2 [http://www.zementis.com/howtobuy.htm] whereby users only pay for the actual machine time starting at less than US$1 per machine hour.  The ADAPA SaaS model is extremely dynamic, e.g., a user is able to select an instance type most appropriate for the job at hand (small, large, x-large) or launch one or even 100 instances within minutes.

In addition to the above savings in hardware/software, ADAPA also cuts the time-to-market for new models (priceless!) which adds to business agility, something truly critical for the current economic climate.

Regarding a benchmark comparison, it really depends on what is most important to the business.  Business agility, time-to-market, open standards for integration, or pure scoring performance?  ADAPA addresses all of the above.  At its core, it is a highly scalable scoring engine which is able to process thousands of transactions per second.  To tackle even the largest problems, it is easy to scale ADAPA via more CPUs, clustering, or parallel execution on multiple independent instances. 

Need to score lots of data once a month which would take 100 hours on one computer?  Simply launch 10 instances and complete the job in 10 hours over night.  No extra software licenses, no extra hardware to buy — that’s capacity truly on-demand, whenever needed, and cost-effective.

Ajay- What has been your vision for Zementis. What exciting products are we going to see from it next.

Michael – Our vision at Zementis [http://www.zementis.com] has been to make it easier for users to leverage analytics.  The primary focus of our products is on the deployment side, i.e., how to integrate predictive models into the business process and leverage them in real-time.  The complexity of deployment and the cost associated with it has been the main hurdle for a more widespread adoption of predictive analytics. 

Adhering to open standards like the Predictive Model Markup Language (PMML) [http://www.dmg.org/] and SOA-based integration, our ADAPA engine [http://www.zementis.com/products.htm] paves the way for new use cases of predictive analytics — wherever a painless, fast production deployment of models is critical or where the cost of real-time scoring has been prohibitive to date.

We will continue to contribute to the R/PMML export package [http://www.zementis.com/pmml_exporters.htm] and extend our free PMML converter [http://www.zementis.com/pmml_converters.htm] to support the adoption of the standard.  We believe that the analytics industry will benefit from open standards and we are just beginning to grasp what data-driven decision technology can do for us.  Without giving away much of our roadmap, please stay tuned for more exciting products that will make it easier for businesses to leverage the power of predictive analytics!
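For readers who want to try the PMML route from R, a minimal sketch using the open-source pmml package; the model, dataset, and file name are purely illustrative.

```r
# Train a model in R and export it as PMML, which a PMML-consuming
# scoring engine such as ADAPA can then import.
library(pmml)
library(rpart)
library(XML)   # for saveXML()

# A small illustrative decision tree on a built-in dataset
fit <- rpart(Species ~ ., data = iris)

# Convert the fitted model to a PMML document and save it to disk
model_pmml <- pmml(fit)
saveXML(model_pmml, file = "iris_tree.pmml")
```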

Ajay- Any India- or Asia-specific plans for Zementis?

Michael-Zementis already serves customers in the Asia/Pacific region from its office in Hong Kong.  We expect rapid growth for predictive analytics in the region and we think our cost-effective SaaS solution on Amazon EC2 will be of great service to this market.  I could see various analytics outsourcing and consulting firms benefit from using ADAPA as their primary delivery mechanism to provide clients with predictive  models that are ready to be executed on-demand.

Ajay- What do you believe will be the biggest challenges for analytics in 2009? What are the biggest opportunities?

Michael-The biggest challenge for analytics will most likely be the reduction in technology spending in a deep, global recession.  At the same time, companies must take advantage of analytics to cut cost, optimize processes, and to become more competitive.  Therefore, the biggest opportunity for analytics will be in the SaaS field, enabling clients to employ analytics without upfront capital expenditures.

Ajay – What made you choose a career in science? Describe your journey so far. What would your advice be to young science graduates in these recessionary times?

Michael- As a physicist, my research focused on neural networks and intelligent systems.  Predictive analytics is a great way for me to stay close to science while applying such complex algorithms to solve real business problems.  Even in a recession, there is always a need for good people with the desire to excel in their profession.  Starting your career, I’d say the best way is to remain broad in expertise rather than being too specialized in one particular industry or proficient in a single analytics tool.  A good foundation in math and computer science, combined with curiosity about how to apply analytics to specific business problems, will provide opportunities, even in the current economic climate.

About Zementis

Zementis, Inc. is a software company focused on predictive analytics and advanced Enterprise Decision Management technology. We combine science and software to create superior business and industrial solutions for our clients. Our scientific expertise includes statistical algorithms, machine learning, neural networks, and intelligent systems, and our scientists have a proven record in producing effective predictive models to extract hidden patterns from a variety of data types. It is complemented by our product offering ADAPA®, a decision engine framework for real-time execution of predictive models and rules. For more information please visit www.zementis.com

Ajay- If you have a lot of data (GBs and GBs), an existing model (in SAS, SPSS, or R) that you have converted to PMML, and it is time to choose between spending more money to upgrade your hardware or renew your software licenses, then instead take a look at ADAPA from www.zementis.com and score models for as low as $1 per hour. Check it out (test and control!!)

Do you have any additional queries for Michael? Use the comments page to ask.


Interview: Richard Schultz, CEO, REvolution Computing

Here is an interview with the CEO of REvolution Computing, Richard Schultz. Mr. Schultz offers his perspectives on open source, predictive analytics, and cloud computing, as well as his vision for commercial R.

Note from Ajay-As I blogged previously, commercial establishments now have an option to use R commercially with a full service contract and all guarantees which they expect and get from existing analytics software vendors.

Ajay- Linux has not really succeeded in capturing the Windows/desktop operating system market. What are the technical and business reasons that you think R will succeed in the analytics desktop software market?

Richard- To start, Linux was never really targeted at the Windows desktop market, but rather at unseating proprietary Unix deployments (particularly in finance), which it did quite successfully.  This is a similar trend to what we’re seeing in the R world – it’s not that R is generally replacing Excel, for instance.  In addition, with the large and growing base of both users and contributors, the vibrancy of the R community has taken on a life of its own.

As to R and Windows, two things are worth noting:

1. Microsoft has moved rapidly to embrace R and REvolution for that matter.

2. Windows is still the predominant operating system in large commercial enterprises. Because we deploy R on multiprocessors, which are now common on all computers including those pre-loaded with Windows, REvolution R is very much at home in Windows, Mac, and Linux environments.

Ajay- What are the biggest challenges to REvolution Computing when explaining R Pro to users of traditional statistics software? What are the biggest advantages?

Richard- The biggest challenge is getting the word out that there now exist validated and supported R products designed for commercial use. But that’s changing rapidly, as your own interest in REvolution Computing demonstrates. Our biggest advantages are several:

1. we are focused on building a close and collegial relationship with the open source R community;

2. our company has a deep history in supercomputing and parallelization;

3. with, by Intel’s estimate, over 1 million R users and growing, there is a large community eager to adopt our products as its members advance their careers in the business and research worlds.

Ajay- Which software products do you think will be affected most by R’s spread across colleges and companies? What do you believe their strategies to compete will be?


Richard – I want to be politic here. Let me say that the programming software likely most affected by the rise of R is probably proprietary.

We see many opportunities to partner and leverage the strengths of REvolution’s products specifically – high performance, handling of large data, validation, IDE / user interface.

Ajay- How do you intend to incorporate cloud computing and the Software as a Service model for R Pro? When, if at all, do you think it will be possible for a person to simply upload a zipped CSV file, work on a remote cloud computer for analytics and forecasting, and just pay for the hired software, hardware, and bandwidth?

Richard – We were thinking of something based on the Ohri framework.  ;-). ( Ajay- Touché!)

In fact, we have deployed, and are deploying cloud-based REvolution R for clients, and it’s something we expect to evolve as those technologies evolve.


Ajay- Asian countries have huge demand for analytics, and are more price-conscious about software. What would your strategy be to sell in Asia/China and India?

Richard – Open source can be a tremendous win for users in Asia / China / India.  The upfront costs are low, the technology is leading-edge, and there is a distribution network for support.  REvolution has partners, and is continuing to build its partner network to be able to reach these markets.  We expect to accelerate our efforts in these regions toward the end of 2009.

Ajay- What has been the story of your career so far? What prompted you to join/start REvolution Computing? What would be the advice you would give to young science graduates in today’s recession?

Richard – My own background is in computer science, business… and music. Through school I held various positions at IBM, and after graduate school, I worked at Dun & Bradstreet in a product management role and developed a taste for entrepreneurship. I’ve started two companies so far, MetaServer, a business intelligence middleware company that catered to the insurance industry, and REvolution Computing. Today, MetaServer is part of Oracle. And I continue to play music – guitar and piano. One of these days we’ll get a REvolution Computing band together.

My advice to young science graduates is the same, recession or no: follow your enthusiasms; find a passion outside of work like playing music; master open source programming languages, because that is the future and the future is here.

About Richard Schultz – Chief Executive Officer, REvolution Computing

Richard guides REvolution’s long-range business strategy and leads the company’s teams on a daily basis. His experience developing and growing Business Intelligence software companies includes founding and leading MetaServer, Inc., now a part of Oracle, from inception to sale. Richard has been named Innovator of the Year by Business New Haven; served on the board of the Connecticut Venture Group; and been the keynote speaker for CIO Forum and other technology industry events.  A graduate of Washington University with degrees in Computer Science, Business and Music, Richard also holds a Master’s degree in Computer Science from the State University of New York at Stony Brook and has held senior positions at Dun & Bradstreet and IBM.

Ajay- REvolution Computing has been a leader in this field and, going by the latest product launch, well, you can try it yourself and see here: http://www.revolution-computing.com

Updated – R for SAS and SPSS Users

Updated – I finally got my hardback copy of R for SAS and SPSS Users. Digital copies are one thing, but a paper book is really beautiful. I had written an article on R (with some mild sarcasm about some other software that is mildly more expensive) at Smart Data Collective. That article got around 711 views (my website got X00 hits that day, which is a personal best, ehmm 🙂)

It also inspired Sandro, a terrific data miner from Switzerland with a PhD, to write an article called 5 Reasons R Is Good for You, which can be accessed at http://smartdatacollective.com/Home/15756 and http://dataminingresearch.blogspot.com/2009/01/top-5-reasons-r-is-good-for-you.html

The story of how I wrote that Top Ten R article is also amusing – it is mentioned by Jerry, who creates terrific communities for content, all extremely digital and informative, readable here: http://www.socialmediatoday.com/SMC/67268

Now, the reason I originally became involved with R was that I couldn’t afford SAS and SPSS on my own computer after years of getting companies to pick up the tab. A question on the R help list led me to Bob Muenchen, who had written a short guidebook on R for SAS and SPSS users and was then finishing his book. The following article is interesting given that it was done almost 3-4 months back, yet some themes and events seemed to recur exactly as Bob mentioned them. I still bounce between Bob’s book and the Rattle guide for R programming, but I am getting there !!!

Note- Robert Muenchen (pronounced Min’-chen) is the author of the famous R for SAS and SPSS Users, and his book is an extensive tutorial for anyone wanting to learn SAS, SPSS, or R, or even to migrate from one platform to another. In an exclusive interview, Bob agreed to answer some questions on the book and on students planning to enter science careers.

What made you write R for SAS and SPSS Users?

The book-

A few years ago, all my colleagues seemed to be suddenly talking about R. Had I tried it? What did I think? Wasn’t it amazing? I searched around for a review and found an article by Patrick Burns, "R Relative to Statistics Packages," which is posted on the UCLA site (http://www.ats.ucla.edu/stat/technicalreports/). That article pointed out the many advantages of R, and in it Burns claimed that knowing a standard statistics package interfered with learning R. That article really got my interest up. Pat’s article was a rejoinder to "Strategically Using General Purpose Statistics Packages: A Look at Stata, SAS and SPSS" by Michael Mitchell, then the manager of statistical consulting at UCLA (it’s at that same site). In it he said little about R, other than that he had "enormous difficulties" learning it and especially found the documentation lacking.

I dove in and started learning R. It was incredibly hard work, most of which was caused by my expectations of how I thought it ought to work. I did have a lot to "unlearn" but once I figured a certain step out, I could see that explaining it to another SAS or SPSS user would be relatively easy. I started keeping notes on these differences for myself initially. I finally posted them on the Internet as the first version of R for SAS and SPSS Users. It was only 80 pages and much of its explanation was in the form of extensive R program comments. I provided 27 example programs, each done in SAS, SPSS and R. A person could see how they differed, topic by topic. When a person ran the sections of the R programs and read all the comments, he or she would learn how R worked.

A web page counter on that document showed it was getting about 10,000 hits a month. That translates into about 300 users, paging back and forth through the document. An editor from Springer emailed me to ask if I could make it a book. I said it might be 150 pages when I wrote out the prose to replace all the comments. It turned out to be 480 pages!

What are the salient points in this book?

The main point is that having R taught to you using terms you already know will make R much easier to learn. SAS and SPSS concepts are used in the body of the book as well as the table of contents, the index and even the glossary. For example, the table of contents has an entry for "Value Labels or Formats" even though R uses neither of those terms as SPSS and SAS do, respectively. The index alone took over 80 hours to compile because it is important for people to be able to look up things like "length" as both a SAS statement and as an R function. The glossary defines R terms using SAS/SPSS jargon and then again using proper R definitions.
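As an illustration of that "Value Labels or Formats" entry, here is a minimal R sketch of the rough equivalent: R has no value labels or formats as such, but a factor attaches labels to coded values. The variable below is made up for illustration.

```r
# Roughly what SPSS value labels / SAS formats become in R: a factor.
# 'q1' is an illustrative survey item coded 1-3.
q1 <- c(1, 2, 2, 3, 1)

q1_factor <- factor(q1,
                    levels = c(1, 2, 3),
                    labels = c("Disagree", "Neutral", "Agree"))

table(q1_factor)   # tabulates using the labels rather than the raw codes
```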

SAS and SPSS each have five main parts: 1) commands to read and manage data, 2) procedures for statistics & graphics, 3) output management systems that allow you to use output as input to other analyses, 4) a macro language to automate the above steps and finally 5) a matrix language to help you extend the packages. All five of these parts use different statements and rules that do not apply to the others. Due to the complexity of all this, many SAS and SPSS users never get past the first two parts.

R instead has all these functions unified into a common single structure. That makes it much more flexible and powerful. This claim may seem to be a matter of opinion, but the evidence to back it up comes from the companies themselves. The developers at SAS Institute and SPSS Inc. don’t write their procedures in their own languages; R developers do.

How do you think R will impact the statistical software vendors?

With more statistical procedures than any other package, and its free price, some people think R will put many of the proprietary vendors out of business. R is a tsunami coming at the vendors and how they respond will determine their future. Take SPSS Inc. for example. They have written an excellent interface to R that lets you transfer your data back and forth, letting you run R functions in the middle of your SPSS programs. I show how to use it in my book. Starting with SPSS 17, you can also add R functions to the SPSS menus. This is particularly important because most SPSS users prefer to use menus. The company itself is adding menus to R functions, letting them rapidly expand SPSS’ capabilities at very little expense. They saw the R tsunami coming and they hopped on a surfboard to make the most of it. I think this attitude will help them thrive in the future.

SAS Institute so far has been ignoring R. That means if you need to use an analytic method that is only available in R, you must learn much more R than an SPSS user would. Once you have done that, you might be much more likely to switch over completely to R. Colleagues inside SAS Institute tell me they are debating whether they should follow SPSS’ lead and write a link to R. This has already been done by MineQuest, LLC (see http://www.minequest.com/Products.html) with their amusingly named "A Bridge to R" product (playing off "A Bridge Too Far").

Statistica is officially supporting R. You can read about the details at (http://www.statsoft.com/industries/Rlanguage.htm). StataCorp has not supported R in Stata yet, although a user, Roger Newson, has written an R interface to it (http://ideas.repec.org/c/boc/bocode/s456847.html).

The company with the most to lose is the maker of S-PLUS. That was Insightful Corp. until it was recently bought out by Tibco. Since R is an implementation of the S language, S-PLUS could be hit pretty hard. On the other hand, they do have functions that handle "big data," so there is a chance that people will develop programs in R, run out of memory and then end up porting them to S-PLUS. S-PLUS also has a more comprehensive graphical user interface than R does, giving them an advantage. However, XL-Solutions Corp. has their new R-PLUS version that adds a slick GUI to R (http://www.experience-rplus.com/). There could be a rocky road ahead for S-PLUS. IBM faced a similar dilemma when computing hardware started becoming a commodity. They prospered by making up the difference with service income. Perhaps Tibco can too.

Do you have special discounts for students?

My original version of R for SAS and SPSS Users is still online at http://RforSASandSPSSusers.com so students can get it there for free. The book version has a small market that is mostly students so pricing was set with that in mind.

What made you choose a career in Science and what have been the reasons for your success in it.

I started out as an accounting major. I was lucky enough to have had two years of bookkeeping in high school, and I worked part-time in the accounting department of ServiceMaster Industries for several years. I got to fill in for whoever was on vacation, so I got a broad range of accounting experience. I also got my first experience with statistics by helping the auditors. We took a stratified sample of transactions: with transactions divided into segments by their value, we sampled a greater proportion as the value increased. For the most expensive transactions, we examined them all. My job was to be the "gofer" who collected all the invoices, checks, etc. to prove that the transactions were real. For a kid in high school, that was great fun!
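A minimal R sketch of that kind of value-stratified sampling; the cutoffs and sampling fractions below are invented for illustration, not the ones used in the audit.

```r
# Higher-value transactions are sampled at a higher rate,
# and the top stratum is examined in full (a census).
set.seed(42)
transactions <- data.frame(id    = 1:10000,
                           value = round(rlnorm(10000, meanlog = 6, sdlog = 1.5), 2))

# Assign each transaction to a value stratum (illustrative cutoffs)
transactions$stratum <- cut(transactions$value,
                            breaks = c(0, 200, 1000, 5000, Inf),
                            labels = c("low", "medium", "high", "top"))

# Sampling fraction rises with value; the "top" stratum is a census
fractions <- c(low = 0.01, medium = 0.10, high = 0.50, top = 1.00)

sampled <- do.call(rbind, lapply(split(transactions, transactions$stratum), function(s) {
  if (nrow(s) == 0) return(s)
  n <- max(1, round(nrow(s) * fractions[[as.character(s$stratum[1])]]))
  s[sample(nrow(s), n), ]
}))

table(sampled$stratum)   # sampled counts per stratum
```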

By the time I was a freshman at Bradley University, I became excited by three new areas: mathematics, computing and psychology. I got to work in a lab at the Peoria Addictions Research Institute, studying addiction in rats and the parts of the brain that were involved. I wrote a simple stat package in FORTRAN to analyze data. After getting my B.A. in psychology, I worked on a PhD in Educational Psychology at Arizona State University. I loved that field and did well, but the job market for professors in that field was horrible at the time. So I transferred to a PhD program in Industrial/Organizational Psychology at The University of Tennessee. It turned out that I did not really care for that area at all, and I spent much of my time studying computing and calculus. My assistantship was with the Department of Statistics. By the time my first year was up, I transferred to statistics. At the time the department lacked a PhD program, so after four years of grad school I stopped with an M.S. in Statistics and got a job as a computing consultant helping people with their SAS, SPSS and STATGRAPHICS programs. Later I was able to expand that role, creating a full-fledged statistical consulting center in partnership with the Department of Statistics. Ongoing funding cuts have been chipping away at that concept though.

What made me a success? I love my job! I get to work with a lot of smart scientists and their grad students, expanding scientific knowledge. What could be better?

Science is boring, and not a well-paying career compared to being a lawyer or working in sales. People think you are a nerd. Please comment based on your experiences.

Science is constantly making new discoveries. That’s not boring! An area that most people can relate to is medicine. When we finish a study that shows a new treatment is better than an old one, our efforts will help thousands of people. In one study we compared a new, very expensive anti-nausea drug to an old one that was quite cheap. The pharmaceutical company claimed the new drug was better of course, but our study showed that it was not. That ended up helping to control health care costs that we all see escalating rapidly.

Another study found, for the first time, a measure that could predict how well a hearing aid would help a person. Now, it’s easy to measure a hearing aid and see that it is doing what it is supposed to do, but a huge proportion of people who buy them don’t like them and stop wearing them after a brief period. Scientists tried for decades to predict which people would not be good candidates for hearing aids. A very sharp scientist at UT, Anna Nabelek, came up with the concept of Acceptable Noise Level. We measured how much background noise people were willing to tolerate before trying a hearing aid. That allowed us to develop a model that could predict well, for the first time, whether someone should bother spending up to $5,000 for hearing aids. For retired people on a fixed income, that was an important finding. An audiology journal devoted an entire issue to the work.

It’s true that you can make more money in many other fields. But the excitement of discovery and the feeling that I’m helping to extend science are very satisfying and well worth the lower salary. Plus, having a job in science means you will never have a chance to get bored!

What is your view on Rice University’s initiatives to create open source textbooks (http://cnx.org/)?

I think this is a really good idea. One of my favorite statistics books is Statnotes: Topics in Multivariate Analysis, by G David Garson. You can read it for free at http://www2.chass.ncsu.edu/garson/pa765/statnote.htm .

Universities pay professors to spend their time doing research, which must be published to get credit. So why not pay professors to write text books too? There have been probably hundreds of introductory books in every imaginable field. They cannot all make it in the marketplace so when they drop out of publication, why not make them available for free? I still have my old Introductory Statistics textbook from 30 years ago and the material is still good. It may be missing a few modern things like boxplots, but it would not take much effort to bring it up to date.

I’m also a huge fan of Project Gutenberg (http://www.archive.org/details/gutenberg). That is a collection of over 20,000 books, articles, etc. available for free download. My wife does volunteer project management and post-processing with Distributed Proofreaders (http://www.pgdp.net/), which supplies books for Gutenberg.

What are your views on students uploading scanned copies of books to torrent sharing web sites because of expensive books?

The cost of textbooks has gotten out of hand. I think students should pressure universities and professors to consider cheaper alternatives. However, scanning books and putting them up on web sites isn’t sharing, it’s stealing. I put in most of my weekends and nights for 2 ½ years on my book, which will be lucky to sell a few thousand copies. That works out to pennies per hour. Seeing it scanned in would be quite depressing.

When is the book coming out? What is taking so long?

We ran into problems when the book was translated from Microsoft Word to LaTeX. The translator program did not anticipate that an index would already be in place. That resulted in 2-3 errors per page. We’re working through that and should finally get it printed in early October.

Biography

Robert A. Muenchen is a consulting statistician with 28 years of experience. He is currently the manager of the Statistical Consulting Center at the University of Tennessee. He holds a B.A. in Psychology and an M.S. in Statistics. Bob has conducted research for a variety of public and private organizations and has assisted on more than 1,000 graduate theses and dissertations. He has coauthored over 40 articles published in scientific journals and conference proceedings. Bob has served on the advisory boards of SPSS Inc., the Statistical Graphics Corporation and PC Week Magazine. His suggested improvements have been incorporated into SAS, SPSS, JMP, STATGRAPHICS and several R packages. His research interests include statistical computing, data graphics and visualization, text analysis, data mining, psychometrics and resampling.

Ajay-He is also a very modest and great human being.

http://www.amazon.com/SAS-SPSS-Users-Statistics-Computing/dp/0387094172/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1217456813&sr=8-1

Interview: Dr Graham Williams

(Updated with comments from Dr Graham in the comments section )


I have often talked about how Rattle, the graphical user interface for the R language, makes learning R and building models quite simple. Rattle’s latest version has been released and got extensive publicity, including in KDnuggets. I wrote to its creator, Dr Graham Williams, and he agreed to an extensive interview explaining data mining, its evolution, and the philosophy and logic behind open source tools like R and Rattle.

Dr Graham Williams is the author of the Rattle data mining software and Adjunct Professor, University of Canberra and Australian National University.  Rattle is available from rattle.togaware.com.

Ajay – Could you describe your career journey? What made you enter this field and what experiences helped shape your perspectives? What would your advice be to young professionals entering this field today?

Graham – With a PhD in Artificial Intelligence (topic: combining multiple decision trees to build ensembles) and a strong interest in practical applications, I started out in the late 1980’s developing expert systems for business and government, including bank loan assessment systems and bush fire prediction.

When data mining emerged as a discipline in the early 1990’s I was involved in setting up the first data mining team in Australia with the government research organization (CSIRO). In 2004 I joined the Australian Taxation Office and provide the technical lead for the deployment of its Analytics team, overseeing the development of a data mining capability. I have been teaching data mining at the Australian National University (and elsewhere) since 1995 and continue to do so.

The business needs for Data Mining and Analytics continue to grow, although courses in Data Mining are still not so common. A data miner combines good backgrounds in Computer Science and Statistics. The Computer Science is too little emphasized, but is crucial for skills in developing repeatable procedures and good software engineering practices, which I believe to be important in Data Mining.

Data Mining is more than just using a point and click graphical user interface (GUI). It is an experimental endeavor where we really need to be able to follow our nose as we explore through our data, and then capture the whole process in an automatically repeatable manner that can be readily communicated to others. A programming language offers this sophisticated level of communications.

Too often, I see analysts, when given a new dataset that updates last year’s data, essentially start from scratch with the data pre-processing, cleaning, and then mining, rather than beginning with last year’s captured processes and tuning them to this year’s data.  The GUI generation of software often does not encourage repeatability.

Ajay – What made you get involved with R? What is the advantage of using Rattle versus normal R?

Graham- I have used Clementine and SAS Enterprise Miner over many years (and IBM’s original Intelligent Miner and Thinking Machines’ Darwin, and many other tools that emerged early on with Data Mining). Commercial vendors come and go (even large ones like IBM, in terms of the products they support).

Lock-in is one problem with commercial tools. Another is that many vendors, understandably, won’t put resources into new algorithms until they are well accepted. Because it is open source, R is robust, reliable, and provides access to the most advanced statistics. Many research Statisticians publish their new algorithms in R. But what is most important is that the source code is always going to be available. Not everyone has the skill to delve into that source code, but at least we have a chance to do so. We also know that there is a team of highly qualified developers whose work is openly peer reviewed. I can monitor their coding changes, if I so wanted.  This helps ensure quality and integrity.

Rolling out R to a community of data analysts, though, does present challenges. Being primarily a language for statistics, we need to learn to speak that language. That is, we need to communicate with language rather than pictures (or GUI). It is, of course, easier to draw pictures, but pictures can be limiting. I believe a written language allows us to express and communicate ideas better and more formally. But it needs to be with the philosophy that we are communicating those ideas to our fellow humans, not just writing code to be executed by the computer.

Nonetheless, GUIs are great as memory aides, for doing simple tasks, and for learning how to perform particular tasks. Rattle aims to do the standard data mining steps, but to also expose everything that is done as R commands in the log. In fact, the log is designed to be able to be run as an R script, and to teach the user the R commands.
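For readers who want to see this for themselves, a minimal sketch: launch Rattle from an R session, and its Log tab then records commands of roughly the kind shown in the second half below (the dataset and model are illustrative, not Rattle's exact generated code).

```r
# Launching the Rattle GUI from R
install.packages("rattle")   # one-off, if not already installed
library(rattle)
rattle()

# The Log tab captures the R commands behind each button click,
# conceptually similar to the following (illustrative, not verbatim):
library(rpart)
dataset <- read.csv("mydata.csv")               # Data tab: load a CSV file
model   <- rpart(target ~ ., data = dataset)    # Model tab: build a decision tree
printcp(model)                                  # inspect the fitted tree
```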

Ajay- What are the advantages of using Rattle  instead of SAS or SPSS. What are the disadvantages of using Rattle instead of SAS or SPSS.

Graham- Because it is free and open source, Rattle (and R) can be readily used in teaching data mining.  In business it is, initially, useful for people who want to experiment with data mining without the sometimes quite significant up-front costs of the commercial offerings. For serious data mining, Rattle and R offer all of the data mining algorithms offered by the commercial vendors, but also many more. Rattle provides a simple, tab-based user interface which is not as graphically sophisticated as Clementine from SPSS or SAS Enterprise Miner.

But with just 4 button clicks you will have built your first data mining model.

The usual disadvantage quoted for R (and so Rattle) is in the handling of large datasets – SAS and SPSS can handle datasets out of memory although they do slow down when doing so. R is memory based, so going to a 64bit platform is often necessary for the larger datasets. A very rough rule of thumb has been that the 2-3GB limit of the common 32bit processors can handle a dataset of up to about 50,000 rows with 100 columns (or 100,000 rows and 10 columns, etc), depending on the algorithms you deploy. I generally recommend, as quite a powerful yet inexpensive data mining machine, one running on an AMD64 processor, running the Debian GNU/Linux operating system, with as much memory as you can afford (e.g., 4GB to 32GB, although some machines today can go up to 128 GB, but memory gets expensive at that end of the scale).
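As a rough illustration of why memory rather than size on disk is the constraint (the 8 bytes per numeric value is standard R storage; the "working copies" multiplier is an assumption about typical modelling functions):

```r
# Approximate in-memory size of a purely numeric dataset in R:
# each double-precision value takes 8 bytes.
rows <- 50000
cols <- 100
base_gb <- rows * cols * 8 / 1024^3
round(base_gb, 3)            # about 0.037 GB for the raw data

# Modelling functions often make several working copies of the data,
# so the practical footprint can be many times the raw size (assumed factor).
copies <- 10
round(base_gb * copies, 2)   # still under 0.4 GB here; wider/longer data grows fast

# Checking an object's size directly:
x <- matrix(rnorm(1e6), ncol = 100)
format(object.size(x), units = "Mb")   # about 7.6 Mb
```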

Ajay – Rattle is free to download and use, yet it must have taken you some time to build it. What are your revenue streams to support your time and efforts?

Graham – Yes, Rattle is free software: free for anyone to use, free to review the code, free to extend the code, free to use it for whatever purpose.  I have been developing Rattle for a few years now, with a number of contributions from other users. Rattle, of course, gets its full power from R. The R community works together to help each other, and others, for the benefit of all. Rattle and R can be the basic toolkit for knowledge workers providing analyses. I know of a number of data mining consultants around the world who are using Rattle to support their day-to-day consultancy work.

As a company, Togaware provides user support, installations of R and Rattle, runs training in using Rattle and in doing data mining. It also delivers data mining projects to clients. Togaware also provides support for incorporating Rattle (and R) into other products (e.g., as RStat for Information Builders).

Ajay – What is your vision of analytics for the future? How do you think the recession of 2008 and the slowdown in 2009 will affect the choice of software?

Graham- Watching the growth of data mining and analytics over the past 18 years it does seem that there has been and continues to be a monotonically increasing interest and demand for Analytics. Analytics continues to demonstrate benefit.

The global financial crisis, as others have suggested, should lead organizations to consider alternatives to expensive software. Good quality free and open source software has been available for a while now, but the typical CTO is still more comfortable purchasing expensive software. A purchase gives some sense of (false?) security but formally provides no warranty. My philosophy has been that we should invest in our people, within an organization, and treat software as a commodity that we openly contribute back into.

Imagine a world where we only use free open source software. The savings made by all will be substantial (consider OpenOffice versus MS/Office license fees paid by governments world wide, or Rattle versus SAS Enterprise Miner annual license fees). A small part of that saving might be expended on ensuring we have staff who are capable of understanding and extending that software to suit our needs, rather than vice versa (i.e., changing our needs to suit the software). We feed our extensions back into the grid of open source software, whilst also benefiting from contributions others are making. Some commercial vendors like to call this “communism” as part of their attempt to discredit open source, but we had better learn to share, for the good of the planet, before we lose it.

(Note from Ajay – If you are curious to try R, and have just 15 minutes to try it in, download Rattle from rattle.togaware.com. It has a point-and-click interface and auto-generates R code in its log. Trust me, it would be time well spent.)

Interview: Roger Haddad, Founder of KXEN Automated Modeling Software

I first talked about KXEN, the automated modeling software, in this post: http://www.decisionstats.com/2008/12/automating-regression-models-kxen/

So I asked Roger Haddad, its founder and CEO, if he could give an email interview, and Roger, being the great guy he is, both remembered me from my user-analyst days and worked over the holidays to give this interview. Before founding KXEN, Mr. Roger Haddad was president of Azlan France, the first network distributor in France. Under his management, sales revenues increased 850 percent in four years. In 1977, Mr. Haddad founded Metrologie International, a leading software and hardware provider, which went public on the Paris Stock Exchange in 1985. When Mr. Haddad left in 1991, Metrologie had 4,500 employees, $900M in revenue and subsidiaries in 13 European countries. Mr. Haddad holds a master’s degree in electrical metrology from George Washington University and a bachelor’s degree in electrical engineering from Ecole Supérieure d’Electricité, Paris, France.

 

Ajay : What would your advice be to young professionals entering the job world today ?

Roger : If you are talking about Statisticians, I would tell them to concentrate on the data and the process rather than on the statistical orthodoxy

Ajay : What interested you most in being the head of KXEN? What is the best feature you like in KXEN – both as a company and as a product?


Roger : To make it happen!! Data mining is in its infancy, because SAS and others made it difficult to work with!! They made it for an elite of people!!

KXEN’s role is to open this bottleneck and give power to the users – analysts will help to train business users and get them confident with their findings.

As a product, I am always surprised by the quality of KXEN’s results, obtained automatically and in a fraction of the time compared to a first-generation workbench!! 🙂

Ajay : What areas has KXEN been most suitable for? Biggest success story so far?

Roger : Classification, and regression with thousands of variables and tricky data sets!! We have hundreds of success stories.

Ajay : Could you also comment on how the slowdown and recession would affect the analytics world in terms of newer solutions, Software as a Service, more acceptance of trying out the unfamiliar, etc.?

Roger : I believe the recession and the slowdown will push analytics further, and particularly the KXEN approach, since we allow corporations to do much more with less, or with the same team. We are seeing many analytics groups being reduced and people calling on us to deliver what needs to be delivered!!

Ajay : What areas would you rather not recommend KXEN for? What other software would you recommend in those cases?

Roger : I would not recommend KXEN in genetics – SVM would be more appropriate.

Ajay : Asia has a nascent but high-potential market. What are your Asian plans, and any clients/case studies here?

Roger : We have a presence in every country but India – Japan is by far our best country and we have a fantastic distributor there – we also have customers in China, and in ASEAN too – We are looking for a good distributor in India, but this seems quite difficult. (Note from Ajay – I decided to apply straight away)

Ajay : What is the biggest challenge you have faced while introducing KXEN to a wider audience?

Roger : The resistance to change and the fear of classical statisticians that they will lose their jobs!! In fact this never happened; on the contrary, they become heroes in their corporations after adopting KXEN.

Roger Haddad – Founder and Chief Executive Officer

Mr. Haddad is responsible for overseeing the KXEN sales team, the distribution channel management, as well as the direction of the company and the strategic growth of the organization. With more than 30 years experience as an industry expert, Mr. Haddad is a forward-looking entrepreneur with an expertise in successfully running companies with multiple channels of distribution. Mr. Haddad has a long and successful track record in developing new companies into profitable enterprises.