Chrome Experiments

Here are some nice data visualization methods showcased at http://www.chromeexperiments.com/

 

I created one using Social Collider to search for @smartdataco and generated this data map.

The site (which goes by the tagline "Not your mother’s JavaScript") was created by Google, creator of the Chrome browser.

 

In the site’s own words: "In light of these deeply held beliefs, we created this site to showcase cool experiments for both JavaScript and web browsers. These experiments were created by designers and programmers from around the world. Their work is making the web faster, more fun, and more open – the same spirit in which we built Google Chrome."

Here is an experiment called Canopy available at http://www.chromeexperiments.com/detail/canopy/

It generates fractals.

 

Another, more useful, experiment is Social Collider, which lets you search Twitter for specific words and create a data map of the results.

SAS Global Conference 2009

The resources for SAS Global Conference are now online at

http://support.sas.com/resources/papers/proceedings09/TOC.html

The SAS Global Conference runs next week, March 22 to March 25, in Washington, D.C. It is one of the oldest and most renowned community conferences for any statistical software. Ever.

Here is a link to the SAS 2009 Ballot results, in which users were polled on what features they like or dislike and want added to SAS Institute’s suite of products, and indeed to the SAS language itself:

 

http://support.sas.com/resources/papers/proceedings09/Ballot09.pdf

I really liked the blog, as well as the YouTube videos, here: http://blogs.sas.com/sgf/

 

Citation:

SAS Institute Inc. 2009. Proceedings of the SAS® Global Forum 2009 Conference. Cary, NC: SAS Institute Inc.

Business Intelligence and The Heisenberg Principle

The Heisenberg uncertainty principle states that accuracy in knowing one quantity (say, the position of a particle) must be traded off against certainty about another (its momentum). I was drawn to an application of this idea during an email exchange with Edith Ohri, a leading data mining expert in Israel who has her own customized GT solution. Edith said it seems impossible to have data that is both accurate (data quality) and easy to view across organizations (data transparency). More often than not, the metrics we measure are the metrics we are forced to measure due to data adequacy and data quality issues.

Now, there is a well-known tradeoff in managerial economics around the price of perfect information, but is it really true that the business intelligence we deploy is more often than not constrained by simple things like input data and historic database tables? And that, more often than not, data quality is the critical constraint that determines the speed and efficacy of deployment?

I personally find that far more time in database projects goes into data measurement, aggregation, massaging outliers, and missing value assumptions than into the "high value" activities like insight generation and business issue resolution.

Is it really true that analysis is easy and it is the data that is tough?

What do you think of the uncertainty inherent in data quality and data transparency?

Interview – Jon Peck, SPSS


 

I was in the middle of interviewing people, as well as helping the good people at Smart Data Collective in my new role as community evangelist, when I got a LinkedIn request from Jon Peck to join the SDC group.

SPSS Inc. is a leading worldwide provider of predictive analytics software and solutions. Founded in 1968, SPSS today has more than 250,000 customers worldwide, served by more than 1,200 employees in 60 countries. Jon is a legendary SPSS figure and a great teacher in this field. I asked him for an interview, and he readily agreed.

Jon Peck is a Principal Software Engineer and Technical Advisor at SPSS. He has been working with SPSS since 1983, and in the interview he talks from the breadth of his perspective and experience on analytics and on SPSS.

Ajay – Describe your career journey from college to today. What advice would you give to young students seeking to be hedge fund managers rather than scientists? What are the basic things that a science education can help students with, in your opinion?

Jon– After graduating from college with a B.A. in math, I earned a Ph. D in Economics, specializing in econometrics, and taught at a top American university for 13 years in the Economics and Statistics Departments and the School of Organization and Management.  Working in an academic environment all that time was a great opportunity to grow intellectually.  I was increasingly drawn to computing and eventually decided to join a statistical software company.  There were only two substantial ones at the time.  After a lot of thought, I joined SPSS as it seemed to be the more interesting place and one where I would be able to work in a wider variety of areas.  That was over 25 years ago!  Now I have some opportunities to teach and speak again as well as working in development, which I enjoy a lot.

I still believe in getting a broad liberal arts education along with as much quantitative training as possible.  Being able to work in very different areas has been a big asset for me.  Most people will have multiple careers, so preparing broadly is the most important career thing you can do.  As for hedge fund jobs – if there are any left, I’d say not to be starry-eyed about the money.  If you don’t choose a career that really interests you, you won’t be very successful anyway. Do what you love – subject to earning a living.

Math and scientific reasoning skills are preparation for working in many areas as well as being helpful in making the many decisions with quantitative aspects in life.  Math, especially, provides a foundation useful in many areas.  The recently announced program in the UK to improve general understanding of probability illustrates some practical value.

Ajay- What are SPSS’s contributions to open source software? What, if you can disclose them, are any plans for further increasing that involvement?

Jon-  I wish I could talk about SPSS future plans, but I can’t.  However, the company is committed to continuing its efforts in Python and R.  By opening up the SPSS technology with these open source technologies, we are able to expand what we and our users can do.  At the same time, we can make R more attractive through nicer output and simpler syntax and taking away much of the pain.  One of the things I love about this approach is how quickly and easily new things can be produced and distributed this way compared to the traditional development cycle.  I wrote about productivity and Python recently on my blog at insideout.spss.com.

Ajay – How happy is the SPSS developer community with Python . Are there any other languages that you are considering in the future.

Jon- Many in the SPSS user community were more used to packaged procedures than to programming (except in the area of data transformations).  So Python, first, and then R were a shock.  But the benefits are so large that we have had an excellent response to both the Python and R technologies.  Some have mastered the technology and have been very successful and have made contributions back to the SPSS community.  Others are consumers of this technology, especially through our custom dialogs and extension commands that eliminate the need to learn Python or R in order to use programs in these languages.  Python is an outstanding language.  It is easy to get started with it, but it has very sophisticated features.  It has fewer dark corners than any other language I know.  While there are a few other more popular languages, Python popularity has been steadily growing, especially in the scientific and statistical communities.  But we already have support for three high-level languages, and if there is enough demand, we’ll do more.

Some of our partners prefer to use the lower-level C language interfaces we offer.  That’s fine, too.  We’re not Python zealots (well, maybe, I am).  Python, as a scripting language, isn’t as fast as a compiled language.  For many purposes this does not matter, and Python itself is written in C.  I recently wrote a Python module for TURF analysis.  The computations are simple but computationally explosive, so I was worried that it would be too slow to be useful.   It turned out to be pretty fast because of the way I could use some of Python’s built-in data structures and algorithms.  And the popular numPy and SciPy scientific and numerical libraries are written in C.
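(Ajay – For readers unfamiliar with TURF, the computation Jon describes really is simple but combinatorially explosive. Here is a toy Python sketch, my own illustration and in no way SPSS code; all names are assumptions:)

```python
from itertools import combinations

def best_turf(respondent_choices, items, k):
    """Exhaustive TURF sketch: among all C(len(items), k) bundles of k items,
    find the one that reaches the most respondents at least once."""
    best_bundle, best_reach = None, -1
    for bundle in combinations(items, k):
        bset = set(bundle)
        # A respondent is "reached" if the bundle overlaps their choices.
        reach = sum(1 for chosen in respondent_choices if chosen & bset)
        if reach > best_reach:
            best_bundle, best_reach = bundle, reach
    return best_bundle, best_reach
```

With 30 items and bundles of 10, that loop already runs over 30 million combinations, which is the explosion Jon describes working around with Python’s built-in data structures.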

Users who would not think of themselves as developers sometimes find that a small Python effort can automate manual work with big time and accuracy improvements.  I got a note recently from a user who said, "I got it to work, and this is FANTASTIC! It will save me a lot of time in my survey analysis work."

Ajay- What are the areas where SPSS is not a good fit? What areas suit SPSS software the most compared to other solutions?

Jon- SPSS Statistics, the product,  is not a database.  Our strength is in applying analytical methods to data for model building, prediction, and insight.  Although SPSS Statistics is used in a wide variety of areas, we focus first on people data and think of that first when planning and designing new features.  SPSS Statistics and other SPSS products all work well with databases, and we have solutions for deploying analytics into production systems, but we’re not going to do your payroll.  One thing that was a surprise to me a few years ago is that we have a significant number of users who use SPSS Statistics as a basic reporting product but don’t do any inferential statistics.  They find that they can do customized reporting – often using the Custom Tables module – very quickly.  With Version 17, they can also do fancier and dynamic output formatting without resorting to script writing or manual editing, which is proving very attractive.

Ajay- Are there any plans for SPSS to use a Software as a Service model? Any plans to use advances in remote and cloud computing for SPSS?

Jon- We are certainly looking at cloud computing.  The biggest challenge is being able to put things in the cloud that will be robust and reliable.

Ajay- What are SPSS’s Asia plans? Which country has the maximum penetration of SPSS in terms of usage?

Jon- SPSS, the company, has long been strong in Japan and Taiwan, and Korea is also strong. China is increasingly important, of course. We have a large data center in Singapore. Although India has a strong, long history in statistical methodology, it is a much less well-developed market for us. We have a presence there, but I don’t know the numbers. (Ajay – SPSS was one of my first experiences of statistical software when I came across it at my business school in 2001. In India SPSS has been very active with academia licensing, and it introduced us to the nice and easy menu-driven features of SPSS.)

Biography – Jon earned his Ph.D. from Yale University and taught econometrics and statistics there for 13 years before joining SPSS.

Jon joined the SPSS company in 1983 and worked on many aspects of the very first SPSS DOS product, including writing the first C code that SPSS ever shipped. Among the features he has designed are OMS (the Output Management System), the Visual Bander, Define Variable Properties, ALTER TYPE, Unicode support, and the Date and Time Wizard. Jon is the author of many of the modules on Developer Central. He is an active cyclist and hiker.

Jon Peck blogs on  SPSS Inside-Out.

Accelerate your business – Even in a weak Economy


Learn how you can use business intelligence to accelerate your company’s growth, even in this difficult economy. This free webinar from SAP Business Objects will help you discover how you can become a faster, leaner, more agile business by using BI focused on three core strategies: cutting operating expenses and discretionary spending, using technology to eliminate inefficiencies, and giving first priority to your existing customers. LEARN MORE…

http://events.businessobjects.com/forms/Q109/road/web/index.php?partner=SMtoday1

 

Here’s a nice webinar to attend if you have time.

Modeling Visualization Macros

Here is a nice SAS Macro from Wensui’s blog at http://statcompute.spaces.live.com/blog/

It’s particularly useful for modeling folks. I have seen a version of this macro some time back that also plotted curves, but this one is quite nice too.

SAS MACRO TO CALCULATE GAINS CHART WITH KS

%macro ks(data = , score = , y = );

options nocenter mprint nodate;

data _tmp1;
  set &data;
  where &score ~= . and &y in (1, 0);
  random = ranuni(1);
  keep &score &y random;
run;

proc sort data = _tmp1 sortsize = max;
  by descending &score random;
run;

data _tmp2;
  set _tmp1;
  by descending &score random;
  i + 1;
run;

proc rank data = _tmp2 out = _tmp3 groups = 10;
  var i;
run;

proc sql noprint;
create table
  _tmp4 as
select
  i + 1       as decile,
  count(*)    as cnt,
  sum(&y)     as bad_cnt,
  min(&score) as min_scr format = 8.2,
  max(&score) as max_scr format = 8.2
from
  _tmp3
group by
  i;

select
  sum(cnt) into :cnt
from
  _tmp4;

select
  sum(bad_cnt) into :bad_cnt
from
  _tmp4;    
quit;

data _tmp5;
  set _tmp4;
  retain cum_cnt cum_bcnt cum_gcnt;
  cum_cnt  + cnt;
  cum_bcnt + bad_cnt;
  cum_gcnt + (cnt - bad_cnt);
  cum_pct  = cum_cnt  / &cnt;
  cum_bpct = cum_bcnt / &bad_cnt;
  cum_gpct = cum_gcnt / (&cnt - &bad_cnt);
  ks       = (max(cum_bpct, cum_gpct) - min(cum_bpct, cum_gpct)) * 100;

  format cum_bpct percent9.2 cum_gpct percent9.2
         ks       6.2;

  label decile    = 'DECILE'
        cnt       = '#FREQ'
        bad_cnt   = '#BAD'
        min_scr   = 'MIN SCORE'
        max_scr   = 'MAX SCORE'
        cum_gpct  = 'CUM GOOD%'
        cum_bpct  = 'CUM BAD%'
        ks        = 'KS';
run;

title "%upcase(&score) KS";
proc print data  = _tmp5 label noobs;
  var decile cnt bad_cnt min_scr max_scr cum_bpct cum_gpct ks;
run;    
title;

proc datasets library = work nolist;
  delete _: / memtype = data;
run;
quit;

%mend ks;    

data test;
  do i = 1 to 1000;
    score = ranuni(1);
    if score * 2 + rannor(1) * 0.3 > 1.5 then y = 1;
    else y = 0;
    output;
  end;
run;

%ks(data = test, score = score, y = y);

/*
SCORE KS              
                                MIN         MAX
DECILE    #FREQ    #BAD       SCORE       SCORE     CUM BAD%    CUM GOOD%        KS
   1       100      87         0.91        1.00      34.25%        1.74%      32.51
   2       100      78         0.80        0.91      64.96%        4.69%      60.27
   3       100      49         0.69        0.80      84.25%       11.53%      72.72
   4       100      25         0.61        0.69      94.09%       21.58%      72.51
   5       100      11         0.51        0.60      98.43%       33.51%      64.91
   6       100       3         0.40        0.51      99.61%       46.51%      53.09
   7       100       1         0.32        0.40     100.00%       59.79%      40.21
   8       100       0         0.20        0.31     100.00%       73.19%      26.81
   9       100       0         0.11        0.19     100.00%       86.60%      13.40
  10       100       0         0.00        0.10     100.00%      100.00%       0.00
*/
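For readers outside SAS, the decile logic above can be sketched in a few lines of Python (a minimal illustration I wrote for this post, not a line-by-line translation of Wensui’s macro; the function and column names are my own):

```python
def ks_table(scores, y, groups=10):
    """Gains-table sketch: sort by descending score, split into deciles,
    and track cumulative bad% vs. good%; KS is the largest gap * 100."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: -scores[i])  # best scores first
    total_bad = sum(y)
    total_good = n - total_bad
    rows, cum_bad, cum_good, ks = [], 0, 0, 0.0
    for g in range(groups):
        idx = order[g * n // groups:(g + 1) * n // groups]
        bad = sum(y[i] for i in idx)
        cum_bad += bad
        cum_good += len(idx) - bad
        cum_bpct = cum_bad / total_bad
        cum_gpct = cum_good / total_good
        ks = max(ks, abs(cum_bpct - cum_gpct) * 100)
        rows.append((g + 1, len(idx), bad, cum_bpct, cum_gpct))
    return rows, ks
```

On data separated as cleanly as the simulated test set above, the KS peaks in the early deciles, just as in the SAS output.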


Here is another example of a SAS macro, this one for an ROC curve, from http://www2.sas.com/proceedings/sugi22/POSTERS/PAPER219.PDF

/***************************************************************/;
/* MACRO PURPOSE: CREATE AN ROC DATASET AND PLOT */;
/* */;
/* VARIABLES INTERPRETATION */;
/* */;
/* DATAIN INPUT SAS DATA SET */;
/* LOWLIM MACRO VARIABLE LOWER LIMIT FOR CUTOFF */;
/* UPLIM MACRO VARIABLE UPPER LIMIT FOR CUTOFF */;
/* NINC MACRO VARIABLE NUMBER OF INCREMENTS */;
/* I LOOP INDEX */;
/* OD OPTICAL DENSITY */;
/* CUTOFF CUTOFF FOR TEST */;
/* STATE STATE OF NATURE */;
/* TEST QUALITATIVE RESULT WITH CUTOFF */;
/* */;
/* DATE WRITTEN BY */;
/* */;
/* 09-25-96 A. STEAD */;
/***************************************************************/;
%MACRO ROC(DATAIN,LOWLIM,UPLIM,NINC=20);
OPTIONS MTRACE MPRINT;
DATA ROC;
SET &DATAIN;
LOWLIM = &LOWLIM; UPLIM = &UPLIM; NINC = &NINC;
DO I = 1 TO NINC+1;
CUTOFF = LOWLIM + (I-1)*((UPLIM-LOWLIM)/NINC);
IF OD > CUTOFF THEN TEST="R"; ELSE TEST="N";
OUTPUT;
END;
DROP I;
RUN;
PROC PRINT;
RUN;
PROC SORT; BY CUTOFF;
RUN;
PROC FREQ; BY CUTOFF;
TABLE TEST*STATE / OUT=PCTS1 OUTPCT NOPRINT;
RUN;
DATA TRUEPOS; SET PCTS1; IF STATE="P" AND TEST="R";
TP_RATE = PCT_COL; DROP PCT_COL;
RUN;
DATA FALSEPOS; SET PCTS1; IF STATE="N" AND TEST="R";
FP_RATE = PCT_COL; DROP PCT_COL;
RUN;
DATA ROC; MERGE TRUEPOS FALSEPOS; BY CUTOFF;
IF TP_RATE = . THEN TP_RATE=0.0;
IF FP_RATE = . THEN FP_RATE=0.0;
RUN;
PROC PRINT;
RUN;
PROC GPLOT DATA=ROC;
PLOT TP_RATE*FP_RATE=CUTOFF;
RUN;
%MEND;
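The same construction, independent of SAS, can be sketched in Python (my own illustration of the cutoff sweep the macro performs; the names are assumptions): for each of NINC+1 evenly spaced cutoffs, classify OD > cutoff as reactive and tabulate the true-positive and false-positive rates.

```python
def roc_points(od, state, lowlim, uplim, ninc=20):
    """Sweep ninc+1 cutoffs and return (cutoff, fp_rate, tp_rate) triples."""
    pos = [v for v, s in zip(od, state) if s == "P"]  # truly positive cases
    neg = [v for v, s in zip(od, state) if s == "N"]  # truly negative cases
    points = []
    for i in range(ninc + 1):
        cutoff = lowlim + i * (uplim - lowlim) / ninc
        tp_rate = sum(v > cutoff for v in pos) / len(pos)
        fp_rate = sum(v > cutoff for v in neg) / len(neg)
        points.append((cutoff, fp_rate, tp_rate))
    return points
```

Plotting tp_rate against fp_rate over the sweep gives the ROC curve, just as PROC GPLOT does at the end of the macro.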

VERSION 9.2 of SAS has a macro called %ROCPLOT http://support.sas.com/kb/25/018.html

SPSS also supports ROC curves, and there is a nice document on that here:

http://www.childrensmercy.org/stats/ask/roc.asp

Here are some examples in R with the ROCR package, from

http://rocr.bioinf.mpi-sb.mpg.de/

 


Using ROCR’s 3 commands to produce a simple ROC plot:
pred <- prediction(predictions, labels)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, col=rainbow(10))

The graphics in the R package are outstanding.

Citation:

Tobias Sing, Oliver Sander, Niko Beerenwinkel, Thomas Lengauer.
ROCR: visualizing classifier performance in R.
Bioinformatics 21(20):3940-3941 (2005).

 

New Search Engine ?

Here is a new search engine called Kosmix, which claims to help make the world more organized. I gave it the Google story test, meaning I compared its results to Google’s results for

1) Google / Kosmix

2) Jim Goodnight /Sergey Brin

3) Regression

4) Data Mining

The results were frighteningly good. Some screenshots are below. What Kosmix does is aggregate searches across various platforms on the net and add image and video results, so it looks like a report compiled on a specific search term rather than a list of links, as Google produces.

Take a look.

 


Clearly Kosmix is the winner, because it adds Google Search, YouTube search, and Google Blog Search results to its own. I talked a long time back about the need for Google to give more customization, especially for business research users (see item 10 b).

Google continues to be a list of ranked web pages, while Kosmix is a newly organized information source. I just hope it does not end up like www.cuil.com, which started with big hype and is now claiming to be the biggest search engine in the world with the largest index, but not the biggest user base, I guess.

http://www.cuil.com/info/features/

Any search engine that surfaces a category of Indian film actors when you search for "Ajay Ohri" cannot do too well! Period.
