Interview: John Sall, Founder of JMP and Co-founder of SAS Institute

Here is an interview with John Sall, co-creator of SAS, creator of JMP and co-founder and co-owner of SAS Institute, the largest independent business intelligence and analytics software firm. In a freewheeling and exclusive interview, John talks about his long journey within SAS and his experiences in helping make JMP the data visualization software of choice.
JMP is perfect for anyone who wants to do exploratory data analysis and modeling in a visual and interactive way – John Sall


Ajay- Describe your early science career. How would you encourage today’s generation to take up science and math careers?

John- I was a history major in college, but I graduated into a weak job market. So I went to graduate school and discovered statistics and computer science to be very captivating. Of course, I grew up in the moon-race science generation and was always a science enthusiast.

Ajay- Archimedes leapt out of the bath shouting “Eureka” when he discovered his principle. Could you describe a “Eureka” moment while creating the SAS language when you and Jim Goodnight were working on it?

John- I think that the moments of discovery were more like “Oh, we were idiots” as we kept having to rewrite much of the product to handle emerging environments, like CMS, minicomputers, bitmap workstations, personal computers, Windows, client-server, and now the cloud. Several of the rewrites were even changing the language we implemented it in. But making the commitment to evolve led to an amazing sequence of growth that is still going on after 35 years.

Ajay- Describe the origins of JMP. What specific market segments does the latest release of JMP target?

John- JMP emerged from a recognition of two things: size and GUI. SAS’ enterprise footprint was too big a commitment for some potential users, and we needed a product to really take advantage of graphical interactivity. It was a little later that JMP started being dedicated more to the needs of engineering and science users, who are most of our current customers.

Ajay- What other non-SAS Institute software do you admire or have you worked with? Which areas is JMP best suited for? For which areas would you recommend software other than JMP to customers?

John- My favorite software was the Metrowerks CodeWarrior development environment. Sadly, it was abandoned among various Macintosh transitions, and now we are stuck with the open-source GCC and Xcode. It’s free, but it’s not as good.

JMP is perfect for anyone who wants to do exploratory data analysis and modeling in a visual and interactive way. This is something organizations of all kinds want to do. For analytics beyond what JMP can do, I recommend SAS, which has unparalleled breadth, depth and power in its analytic methods.

Ajay- I have yet to hear of a big academic push for JMP distribution in Asia. Are there any plans to distribute JMP for free or at deeply discounted prices to academic institutions in countries like India and China, or even across the USA?

John- We are increasing our investment in supporting academic institutions, but it has not been an area of strength for us. Professors seem to want the package they learned long ago, the language that is free or the spreadsheet program their business students already have. JMP’s customers do tell us that they wish the universities would train their prospective future employees in JMP, but the universities haven’t been hearing them. Fortunately, JMP is easy enough to pick up after you enter the work world. JMP does substantially discount prices for academic users.

Ajay- What are your views on tech offshoring, given the recession in the United States?

John- As you know, our products are mostly made in the USA, but we do have growing R&D operations in Pune and Beijing that have been performing very well. Even when the software is authored in the US, considerable work happens in each country to localize, customize and support our local users, and this will only increase as we become more service-oriented. In this recession, JMP has still been growing steadily.

Ajay- What advice would you give to young graduates in this recession? How does learning JMP enhance their prospects of getting a job?

John- Quantitative fields have been fairly resistant to the recession. North Carolina State University, near the SAS campus, even has a Master of Science in Analytics < http://analytics.ncsu.edu/ > to get people job-ready. JMP experience certainly helps get jobs at our major customers.

Ajay- What does John Sall do in his free time, when not creating world-class companies or groovy statistical discovery software?

John- I lead the JMP division, which has been a fairly small part of a large software company (SAS), but JMP is becoming bigger than the whole company was when JMP was started. In my spare time, I go to meetings and travel with the Nature Conservancy <http://www.nature.org/>, North Carolina State University <http://ncsu.edu/>, WWF <http://wwf.org/>, CARE <http://www.care.org/> and several other nonprofit organizations that my wife or I work with.

Official Biography

John Sall is a co-founder and Executive Vice President of SAS, the world’s largest privately held software company. He also leads the JMP business division, which creates interactive and highly visual data analysis software for the desktop.

Sall joined Jim Goodnight and two others in 1976 to establish SAS. He designed, developed and documented many of the earliest analytical procedures for Base SAS® software and was the initial author of SAS/ETS® software and SAS/IML®. He also led the R&D effort that produced SAS/OR®, SAS/QC® and Version 6 of Base SAS.

Sall was elected a Fellow of the American Statistical Association in 1998 and has held several positions in the association’s Statistical Computing section. He serves on the board of The Nature Conservancy, reflecting his strong interest in international conservation and environmental issues. He also is a member of the North Carolina State University (NCSU) Board of Trustees. In 1997, Sall and his wife, Ginger, contributed to the founding of Cary Academy, an independent college preparatory day school for students in grades 6 through 12.

Sall received a bachelor’s degree in history from Beloit College in Beloit, WI, and a master’s degree in economics from Northern Illinois University in DeKalb, IL. He studied graduate-level statistics at NCSU, which awarded him an honorary doctorate in 2003.

About JMP-

Originally nicknamed “John’s Macintosh Program,” JMP is a leading statistical data visualization program. Researchers and engineers – whose jobs didn’t revolve solely around statistical analysis – needed an easy-to-use and affordable stats program. A new software product, today known as JMP®, was launched in 1989 to dynamically link statistical analysis with the graphical capabilities of Macintosh computers. Now running on all platforms, JMP continues to play an important role in modeling processes across industries as a desktop data visualization tool. It also provides a visual interface to SAS in an expanding line of solutions that includes SAS Visual BI and SAS Visual Data Discovery. Sall remains the lead architect for JMP.

Citation- http://www.sas.com/presscenter/bios/jsall.html

Ajay- I am thankful to John and his marketing communication specialist Arati for this interview. With an increasing focus on data to drive more rational decision making, SAS remains an interesting company to watch in the era of mega-vendors, and any SAS Institute deal or alliance will have potential investment bankers as well as newer customers drooling. For previous interviews and coverage of SAS please use www.decisionstats.com/tag/sas

SPSS bought by Big Blue

SPSS Inc, maker of the PASW series of analytics software, is being bought by IBM (unless Oracle spikes this deal too). IBM is seeking a play in the rapidly growing analytics market and is also a strategic partner of WPS (which makes a SAS language alternative to Base SAS).

On a personal note- I just entered the University of Tennessee as a statistics student.

Interesting community event by R/Statistical community

Citation-
http://en.oreilly.com/oscon2009/public/schedule/detail/10432

StackOverflow Flash Mob for the R User Community
Moderated by: Michael E. Driscoll
7:00pm Wednesday, 07/22/2009
Location: Ballroom A2

In concert with users online across the country, this session will lead a flashmob to populate StackOverflow with R language content.

R, the open source statistical language, has a notoriously steep learning curve. The same technical questions tend to be asked repeatedly on the R-help mailing lists, to the detriment of both R experts (who tire of repeating themselves) and the learners (who often receive a technically correct, but terse response).

We have developed a list of the 100 most common technical R questions, based on an analysis of (i) queries sent to the RSeek.org web portal, (ii) an examination of the R-help list archives, and (iii) a survey of members of R Users Groups in San Francisco, LA, and New York City.

In the first hour, participants will pair up to claim a question, formulate it on StackOverflow, and provide a comprehensive answer. In the second hour, participants will rate, review, and comment on the set of submitted questions and answers.

While StackOverflow currently lacks content for the R language, we believe this effort will provide the spark to attract more R users, and emerge as a valuable resource to the growing R community.

This is an interesting example of a statistical software community using Twitter for a tech help event. I hope this trend/event gets replicated again and again-

Statisticians worldwide unite in the language of maths !!!

Please follow @rstatsmob to participate. See you at 7 PM PST!

twitter.com/Rstatsmob

Growing Rapidly: Rapid Miner 4.5

The Europe-based Rapid Miner team came out with version 4.5 of their data mining tool (formerly known as YALE), with a very promising “Script” operator.

Also, Rapid Miner came in first among open source data mining tools in a poll by the industry benchmark site www.kdnuggets.com.

They have a brilliant video here for people who just want to have a look at the new Rapid Miner:

http://rapid-i.com/videos/rapidminer_tour_3_4_en.html

Citation-

http://rapid-i.com/content/view/147/1/

New Operators:

  • FormulaExtractor
  • Trend
  • LagSeries
  • VectorLinearRegression
  • ExampleSetMinus
  • ExampleSetIntersect
  • Partition
  • Script
  • ForwardSelection
  • NeuralNetImproved
  • KernelNaiveBayes
  • ExhaustiveSubgroupDiscovery
  • URLExampleSource
  • NonDominatedSorting

More Features:

  • The new Script operator allows for arbitrary user defined operations based on Groovy script combined with a simplified RapidMiner syntax
  • Improved the join operator and added options for left and right outer joins
  • New notification mail mechanism at the end of processes
  • Most file based data input operators now provide an option to skip error lines
  • Most file based example source operators as well as the IOObjectReader and the new URLExampleSource now accept URLs instead of a filename for the input source location

Managing Twitter Relationships:Refollow.com

If you have more than 100 followers (people following you on Twitter) and want some kind of Outlook-like manager for all that info, www.refollow.com is a great tool.

Added benefits-
1) Secure login using Twitter authorization
2) Visual point-and-click Follow/Unfollow/Block actions based on activities
3) Segmenting groups of people based on behavior
4) Hidden insights on who has suddenly unfollowed me (after I accidentally revealed the spoiler ending of the latest Harry Potter movie- Dumbledore will sleep with the fishes)

The Screenshot (of my refollow page) below shows you most of the properties-

[Screenshot: my refollow page]

Personal- Google's bADSENSE

If you notice, I removed the ads from this site, the Google AdSense ads. The reason was that I found no correlation at all between what I was writing and what kind of ads I saw.

Maybe it is my location (India), but after watching ads for career job sites, video games, marketing networks and computer training alongside some of my writing, I decided to split with the big G and call it an end to Google’s bad AdSense.

The irony of a data mining blog failing to get relevant data mining ads from a data mining search engine.

(NOT coming up- Decisionstats T Shirts with quotes from interviews)

Now back to coding and research.

R language on the GPU

Here are some nice articles on using R on Graphical Processing Units (GPUs), mainly made by NVidia. Think of a GPU as specialized hardware that performs massively parallel computation, which for suitable tasks translates into much faster computing. Matlab users can read the webinars here: http://www.nvidia.com/object/webinar.html

A slightly better definition of GPU computing comes from http://www.nvidia.com/object/GPU_Computing.html

GPU computing is the use of a GPU (graphics processing unit) to do general purpose scientific and engineering computing.
The model for GPU computing is to use a CPU and GPU together in a heterogeneous computing model. The sequential part of the application runs on the CPU and the computationally-intensive part runs on the GPU. From the user’s perspective, the application just runs faster because it is using the high-performance of the GPU to boost performance.


Citation:

http://brainarray.mbni.med.umich.edu/brainarray/rgpgpu/

R is the most popular open source statistical environment in the biomedical research community. However, most of the popular R function implementations involve no parallelism and they can only be executed as separate instances on multicore or cluster hardware for large data-parallel analysis tasks. The arrival of modern graphics processing units (GPUs) with user friendly programming tools, such as nVidia’s CUDA toolkit (http://www.nvidia.com/cuda), provides a possibility of increasing the computational efficiency of many common tasks by more than one order of magnitude (http://gpgpu.org/). However, most R users are not trained to program a GPU, a key obstacle for the widespread adoption of GPUs in biomedical research.

The research project at the page mentioned above has developed special packages for exactly this need: running R on a GPU.

The initial package is hosted on CRAN as gputools, a source package for UNIX and Linux systems. Be sure to set the environment variable CUDA_HOME to the root of your CUDA toolkit installation. Then install the package in the usual R manner. The installation process will automatically make use of nVidia’s nvcc compiler and the CUBLAS shared library.
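Assuming a Linux system that already has R and the CUDA toolkit installed, the steps above might look like the following sketch. The toolkit path is an example, and the `gpuCor` smoke test at the end is an assumption about the package's function names, not part of the original instructions:

```shell
# Point the installer at the CUDA toolkit root (example path; adjust to your system).
export CUDA_HOME=/usr/local/cuda

# Install gputools from CRAN in the usual R manner;
# the build is expected to pick up nvcc and the CUBLAS library on its own.
R -e 'install.packages("gputools")'

# Optional smoke test (hypothetical usage): a GPU-accelerated
# Kendall correlation on a small random matrix.
R -e 'library(gputools); x <- matrix(rnorm(50), 10, 5); print(gpuCor(x, x, method="kendall"))'
```

If the install fails to find the toolkit, double-check that CUDA_HOME is exported in the same shell session before launching R.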

And some figures:

Figure 1 provides performance comparisons between original R functions assuming a four thread data parallel solution on Intel Core i7 920 and our GPU enabled R functions for a GTX 295 GPU. The speedup test consisted of testing each of three algorithms with five randomly generated data sets. The Granger causality algorithm was tested with a lag of 2 for 200, 400, 600, 800, and 1000 random variables with 10 observations each. Complete hierarchical clustering was tested with 1000, 2000, 4000, 6000, and 8000 points. Calculation of Kendall’s correlation coefficient was tested with 20, 30, 40, 50, and 60 random variables with 10000 observations each.

Ajay- For hard-core data mining people, customized GPUs for accelerated analytics and data mining sound like fun and common sense. Are there other packages for customization on a GPU? Let me know.


Download

Download the gputools package for R on a Linux platform here: version 0.01.