Professors and Patches: For a Betterrrr R

Professors sometimes throw out provocative statements to spark intellectual debate. I have had 1500+ hits in less than 2 days (and I am glad I am on wordpress.com; my old beloved server would have crashed).

The remarks from Ross Ihaka, covered here before, are also discussed at Xian's blog:

http://xianblog.wordpress.com/2010/09/06/insane/comment-page-4/#comments

Note that most of his remarks are technical, and only a single line refers to Revolution Analytics.

Other senior members of the community (read: professors) are silent, though some thought has probably been ignited behind the scenes.

Ross Ihaka Says:
September 12, 2010 at 1:23 pm

Since (something like) my name has been taken in vain here, let me
chip in.

I’ve been worried for some time that R isn’t going to provide the base
that we’re going to need for statistical computation in the
future. (It may well be that the future is already upon us.) There
are certainly efficiency problems (speed and memory use), but there
are more fundamental issues too. Some of these were inherited from S
and some are peculiar to R.

One of the worst problems is scoping. Consider the following little
gem.

f = function() {
    if (runif(1) > .5)
        x = 10
    x
}

The x being returned by this function is randomly local or global.
There are other examples where variables alternate between local and
non-local throughout the body of a function. No sensible language
would allow this. It’s ugly and it makes optimisation really
difficult. This isn't the only problem; even weirder things happen
because of interactions between scoping and lazy evaluation.
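To see the kind of weirdness being described, here is a small sketch (mine, not Ihaka's) of how lazy evaluation of default arguments interacts with scoping:

    g <- function(x = y) {
        y <- 2
        x        # the default for x is evaluated lazily, in g's own frame
    }
    y <- 100
    g()          # returns 2, not 100: by the time x is forced,
                 # the local y has already been assigned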

In light of this, I’ve come to the conclusion that rather than
“fixing” R, it would be much more productive to simply start over and
build something better. I think the best you could hope for by fixing
the efficiency problems in R would be to boost performance by a small
multiple, or perhaps as much as an order of magnitude. This probably
isn’t enough to justify the effort (Luke Tierney has been working on R
compilation for over a decade now).

To try to get an idea of how much speedup is possible, a number of us
have been carrying out some experiments to see how much better we
could do with something new. Based on prototyping we’ve been doing at
Auckland, it looks like it should be straightforward to get two orders
of magnitude speedup over R, at least for those computations which are
currently bottle-necked. There are a couple of ways to make this
happen.

First, scalar computations in R are very slow. This is in part because
the R interpreter is very slow, but also because there are no scalar
types. By introducing scalars and using compilation it looks like it's
possible to get a speedup by a factor of several hundred for scalar
computations. This is important because it means that many ghastly
uses of array operations and the apply functions could be replaced by
simple loops. The cost of these improvements is that scope
declarations become mandatory and (optional) type declarations are
necessary to help the compiler.
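As a rough illustration of the scalar overhead (my sketch, not taken from the Auckland prototyping work), compare a plain scalar loop against the equivalent vectorized call in today's R:

    x <- runif(1e6)
    system.time({ s <- 0; for (v in x) s <- s + v })  # interpreted scalar adds
    system.time(sum(x))                               # one vectorized C call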

As a side-effect of compilation and the use of type-hinting it should
be possible to eliminate dispatch overhead for certain (sealed)
classes (scalars and arrays in particular). This won’t bring huge
benefits across the board, but it will mean that you won’t have to do
foreign language calls to get efficiency.

A second big problem is that computations on aggregates (data frames
in particular) run at glacial rates. This is entirely down to
unnecessary copying because of the call-by-value semantics.
Preserving call-by-value semantics while eliminating the extra copying
is hard. The best we can probably do is to take a conservative
approach. R already tries to avoid copying where it can, but fails in
an epic fashion. The alternative is to abandon call-by-value and move
to reference semantics. Again, prototyping indicates that several
hundredfold speedup is possible (for data frames in particular).
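Environments are the one R structure that already has reference semantics, and a small sketch (mine, not Ihaka's) shows the difference he is pointing at:

    d <- data.frame(x = runif(1e6))
    f <- function(d) { d$x[1] <- 0; invisible(d) }
    f(d)            # the caller's d is untouched: call-by-value

    e <- new.env()
    e$x <- runif(1e6)
    g <- function(env) env$x[1] <- 0
    g(e)            # the change is visible to the caller:
                    # the environment was passed by reference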

The changes in semantics mentioned above mean that the new language
will not be R. However, it won’t be all that far from R and it should
be easy to port R code to the new system, perhaps using some form of
automatic translation.

If we’re smart about building the new system, it should be possible to
make use of multi-cores and parallelism. Adding this to the mix might just
make it possible to get a three order-of-magnitude performance boost
with just a fraction of the memory that R uses. I think it’s something
really worth putting some effort into.

I also think one other change is necessary. The license will need to do a
better job of protecting work donated to the commons than GPL2 seems
to have done. I’m not willing to have any more of my work purloined by
the likes of Revolution Analytics, so I’ll be looking for better
protection from the license (and being a lot more careful about who I
work with).

The discussion spilled over to Stack Overflow as well

http://stackoverflow.com/questions/3706990/is-r-that-bad-that-it-should-be-rewritten-from-scratch/3710667#3710667

In the past week I've been following a discussion where Ross Ihaka wrote (here):

I’ve been worried for some time that R isn’t going to provide the base that we’re going to need for statistical computation in the future. (It may well be that the future is already upon us.) There are certainly efficiency problems (speed and memory use), but there are more fundamental issues too. Some of these were inherited from S and some are peculiar to R.

He then continued explaining. This discussion started from this post, and was then followed by comments here, here, here, here, here, and here, and maybe some more places I don't know of.

We all know the problem now.

R can be improved substantially in terms of speed.

For some solutions, here are the patches by Radford Neal:

http://www.cs.toronto.edu/~radford/speed-patches-doc

patch-dollar

    Speeds up access to lists, pairlists, and environments using the
    $ operator.  The speedup comes mainly from avoiding the overhead of 
    calling DispatchOrEval if there are no complexities, from passing
    on the field to extract as a symbol, or a name, or both, as available,
    and then converting only as necessary, from simplifying and inlining
    the pstrmatch procedure, and from not translating the string multiple
    times.  

    Relevant timing test script:  test-dollar.r 

    This test shows about a 40% decrease in the time needed to extract
    elements of lists and environments.

    Changes unrelated to speed improvement:

    A small error-reporting bug is fixed, illustrated by the following
    output with r52822:

    > options(warnPartialMatchDollar=TRUE)
    > pl <- pairlist(abc=1,def=2)
    > pl$ab
    [1] 1
    Warning message:
    In pl$ab : partial match of 'ab' to ''

    Some code is changed at the end of R_subset3_dflt because it seems 
    to be more correct, as discussed in code comments. 
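As a rough stand-in for what test-dollar.r exercises (my sketch; the real script ships with the patches), timings like these show the $ overhead directly:

    l <- list(abc = 1, def = 2)
    e <- list2env(l, envir = new.env())
    system.time(for (i in 1:1e6) l$abc)   # $ on a list
    system.time(for (i in 1:1e6) e$abc)   # $ on an environment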

patch-evalList

    Speeds up a large number of operations by avoiding allocation of
    an extra CONS cell in the procedures for evaluating argument lists.

    Relevant timing test scripts:  all of them, but will look at test-em.r 

    On test-em.r, the speedup from this patch is about 5%.

patch-fast-base

    Speeds up lookup of symbols defined in the base environment, by
    flagging symbols that have a base environment definition recorded
    in the global cache.  This allows the definition to be retrieved
    quickly without looking in the hash table.  

    Relevant timing test scripts:  all of them, but will look at test-em.r 

    On test-em.r, the speedup from this patch is about 3%.

    Issue:  This patch uses the "spare" bit for the flag.  This bit is
    misnamed, since it is already used elsewhere (for closures).  It is
    possible that one of the "gp" bits should be used instead.  The
    "gp" bits should really be divided up for faster access, and so that
    their present use is apparent in the code.

    In case this use of the "spare" bit proves unwise, the patch code is 
    conditional on FAST_BASE_CACHE_LOOKUP being defined at the start of
    envir.r.

patch-fast-spec

    Speeds up lookup of function symbols that begin with a character
    other than a letter or ".", by allowing fast bypass of non-global
    environments that do not contain (and have never contained) symbols 
    of this sort.  Since it is expected that only functions will be
    given names of this sort, the check is done only in findFun, though
    it could also be done in findVar.

    Relevant timing test scripts:  all of them, but will look at test-em.r 

    On test-em.r, the speedup from this patch is about 8%.    

    Issue:  This patch uses the "spare" bit to flag environments known
    to not have symbols starting with a special character.  See remarks
    on patch-fast-base.

    In case this use of the "spare" bit proves unwise, the patch code is 
    conditional on FAST_SPEC_BYPASS being defined at the start of envir.r.

patch-for

    Speeds up for loops by not allocating new space for the loop
    variable every iteration, unless necessary.  

    Relevant timing test script:  test-for.r

    This test shows a speedup of about 5%.  

    Change unrelated to speed improvement:

    Fixes what I consider to be a bug, in which the loop clobbers a
    global variable, as demonstrated by the following output with r52822:

    > i <- 99
    > f <- function () for (i in 1:3) { print(i); if (i==2) rm(i); }
    > f()
    [1] 1
    [1] 2
    [1] 3
    > print(i)
    [1] 3

patch-matprod

    Speeds up matrix products, including vector dot products.  The
    speed issue here is that the R code checks for any NAs, and 
    does the multiply in the matprod procedure (in array.c) if so,
    since BLAS isn't trusted with NAs.  If this check takes about
    as long as just doing the multiply in matprod, calling a BLAS
    routine makes no sense.  

    Relevant time test script:  test-matprod.r

    With no external BLAS, this patch speeds up long vector-vector 
    products by a factor of about six, matrix-vector products by a
    factor of about three, and some matrix-matrix products by a 
    factor of about two.

    Issue:  The matrix multiply code in matprod uses an LDOUBLE
    (long double) variable to accumulate sums, for improved accuracy.  
    On a SPARC system I tested on, operations on long doubles are 
    vastly slower than on doubles, so that the patch produces a 
    large slowdown rather than an improvement.  This is also an issue 
    for the "sum" function, which also uses an LDOUBLE to accumulate
    the sum.  Perhaps an ordinary double should be used in these
    places, or perhaps the configuration script should define LDOUBLE 
    as double on architectures where long doubles are extraordinarily 
    slow.

    Due to this issue, not defining MATPROD_CAN_BE_DONE_HERE at the
    start of array.c will disable this patch.
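A rough analogue of what test-matprod.r times (my sketch, not the shipped script):

    a <- runif(1e6); b <- runif(1e6)
    system.time(for (i in 1:100) sum(a * b))  # dot product written out
    system.time(for (i in 1:100) a %*% b)     # dot product via matprod/BLAS
    M <- matrix(runif(1e6), 1000, 1000)
    v <- runif(1000)
    system.time(for (i in 1:100) M %*% v)     # matrix-vector product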

patch-parens

    Speeds up parentheses by making "(" a special operator whose
    argument is not evaluated, thereby bypassing the overhead of
    evalList.  Also slightly speeds up curly brackets by inlining
    a function that is stylistically better inline anyway.

    Relevant test script:  test-parens.r

    In the parens part of test-parens.r, the speedup is about 9%.
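The cost of "(" is easy to make visible from plain R; something in the spirit of test-parens.r (my sketch) would be:

    f <- function(a, b) (a + b)   # body wrapped in "(", so "(" is called too
    g <- function(a, b) a + b     # same computation without the extra call
    system.time(for (i in 1:1e6) f(1, 2))
    system.time(for (i in 1:1e6) g(1, 2))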

patch-protect

    Speeds up numerous operations by making PROTECT, UNPROTECT, etc.
    be mostly macros in the files in src/main.  This takes effect
    only for files that include Defn.h after defining the symbol
    USE_FAST_PROTECT_MACROS.  With these macros, code of the form
    v = PROTECT(...) must be replaced by PROTECT(v = ...).  

    Relevant timing test scripts:  all of them, but will look at test-em.r 

    On test-em.r, the speedup from this patch is about 9%.

patch-save-alloc

    Speeds up some binary and unary arithmetic operations by, when
    possible, using the space holding one of the operands to hold
    the result, rather than allocating new space.  Though primarily
    a speed improvement, for very long vectors avoiding this allocation 
    could avoid running out of space.

    Relevant test script:  test-complex-expr.r

    On this test, the speedup is about 5% for scalar operands and about
    8% for vector operands.

    Issues:  There are some tricky issues with attributes, but I think
    I got them right.  This patch relies on NAMED being set correctly 
    in the rest of the code.  In case it isn't, the patch can be disabled 
    by not defining AVOID_ALLOC_IF_POSSIBLE at the top of arithmetic.c.

patch-square

    Speeds up a^2 when a is a long vector by not checking for the
    special case of an exponent of 2 over and over again for every 
    vector element.

    Relevant test script:  test-square.r

    The time for squaring a long vector is reduced in this test by a
    factor of more than five.

patch-sum-prod

    Speeds up the "sum" and "prod" functions by not checking for NA
    when na.rm=FALSE, and other detailed code improvements.

    Relevant test script:  test-sum-prod.r

    For sum, the improvement is about a factor of 2.5 when na.rm=FALSE,
    and about 10% when na.rm=TRUE.

    Issue:  See the discussion of patch-matprod regarding LDOUBLE.
    There is no change regarding this issue due to this patch, however.
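A quick stand-in for what test-sum-prod.r measures (my sketch):

    x <- runif(1e7)
    system.time(for (i in 1:10) sum(x))                # na.rm = FALSE path
    system.time(for (i in 1:10) sum(x, na.rm = TRUE))  # explicit NA handling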

patch-transpose

    Speeds up the transpose operation (the "t" function) from detailed
    code improvements.

    Relevant test script:  test-transpose.r

    The improvement for 200x60 matrices is about a factor of two.
    There is little or no improvement for long row or column vectors.

patch-vec-arith

    Speeds up arithmetic on vectors of the same length, or when one
    vector is of length one.  This is done with detailed code improvements.

    Relevant test script:  test-vec-arith.r

    On long vectors, the +, -, and * operators are sped up by about     
    20% when operands are the same length or one operand is of length one.

    Rather mysteriously, when the operands are not length one or the
    same length, there is about a 20% increase in time required, though
    this may be due to some strange C optimizer peculiarity or some 
    strange cache effect, since the C code for this is the same as before,
    with negligible additional overhead getting to it.  Regardless, this 
    case is much less common than equal lengths or length one.

    There is little change for the / operator, which is much slower than
    +, -, or *.

patch-vec-subset

    Speeds up extraction of subsets of vectors or matrices (eg, v[10:20]
    or M[1:10,101:110]).  This is done with detailed code improvements.

    Relevant test script:  test-vec-subset.r

    There are lots of tests in this script.  The most dramatic improvement
    is for extracting many rows and columns of a large array, where the 
    improvement is by about a factor of four.  Extracting many rows from
    one column of a matrix is sped up by about 30%. 

    Changes unrelated to speed improvement:

    Fixes two latent bugs where the code incorrectly refers to NA_LOGICAL
    when NA_INTEGER is appropriate and where LOGICAL and INTEGER types
    are treated as interchangeable.  These cause no problems at the moment,
    but would if representations were changed.
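A rough stand-in for the subsetting timings in test-vec-subset.r (my sketch):

    M <- matrix(runif(1e6), 1000, 1000)
    v <- runif(1e6)
    system.time(for (i in 1:1000) M[1:100, 101:200])  # many rows and columns
    system.time(for (i in 1:1000) M[1:100, 1])        # many rows, one column
    system.time(for (i in 1:10000) v[10:20])          # short vector subset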

patch-subscript

    (Formerly part of patch-vec-subset)  This patch also speeds up
    extraction, and also replacement, of subsets of vectors or
    matrices, but focuses on the creation of the indexes rather than
    the copy operations.  Often avoids a duplication (see below) and
    eliminates a second scan of the subscript vector for zero
    subscripts, folding it into a previous scan at no additional cost.

    Relevant test script:  test-vec-subset.r

    Speeds up some operations with scalar or short vector indexes by
    about 10%.  Speeds up subscripting with a longer vector of
    positive indexes by about 20%.

    Issues:  The current code duplicates a vector of indexes when it
    seems unnecessary.  Duplication is for two reasons:  to handle
    the situation where the index vector is itself being modified in
    a replace operation, and so that any attributes can be removed, which 
    is helpful only for string subscripts, given how the routine to handle 
    them returns information via an attribute.  Duplication for the
    second reason can easily be avoided, so I avoided it.  The first
    reason for duplication is sometimes valid, but can usually be avoided
    by first only doing it if the subscript is to be used for replacement
    rather than extraction, and second only doing it if the NAMED field
    for the subscript isn't zero.

    I also removed two layers of procedure call overhead (passing seven
    arguments, so not trivial) that seemed to be doing nothing.  Probably 
    it used to do something and no longer does, but if instead it is 
    preparation for some future use, then removing it might be a mistake.

In my opinion, software problems are best solved by writing code or patches rather than by discussing them endlessly.
Some other solutions for a BETTERRRR R:
1) Complete Code Design Review
2) Version 3 - Tuneup
3) Better Documentation
4) Suing Revolution Analytics for the code - Hand over da code pardner

Linux: Who did what, and how much?

A report distributed under Creative Commons 3 shows that Canonical — the commercial arm of Ubuntu — has contributed only about one percent of the code to the GNOME desktop for Linux, while Red Hat accounts for 17 percent of the code and Novell developers are responsible for about 11 percent. That prompted some heartburn from Mark Shuttleworth, founder of Canonical/Ubuntu, at http://www.markshuttleworth.com/archives/517

And it would be a very different story if it weren’t for the Mozilla folks and Netscape before them, and GNOME and KDE, and Google and everyone else who have exercised that stack in so many different ways, making it better along the way. There are tens of thousands of people who are not in any way shape or form associated with Ubuntu, who make this story real. Many of them have been working at it for more than a decade – it takes a long time to make an overnight success :) while Ubuntu has only been on the scene six years. So Ubuntu cannot be credited solely for the delight of its users.

Nevertheless, the Ubuntu Project does bring something unique, special and important to free software: a total commitment to everyday users and use cases, the idea that free software should be “for everyone” both economically and in ease of use, and a willingness to chase down the problems that stand between here and there. I feel that commitment is a gift back to the people who built every one of those packages. If we can bring free software to ten times the audience, we have amplified the value of your generosity by a factor of ten, we have made every hour spent fixing an issue or making something amazing, ten times as valuable. I’m very proud to be spending the time and energy on Ubuntu that I do. Yes, I could do many other things, but I can’t think of another course which would have the same impact on the world.

I recognize that not everybody will feel the same way. Bringing their work to ten times the audience without contributing features might just feel like leeching, or increasing the flow of bug reports 10x. I suppose you could say that no matter how generous we are to downstream users, if upstream is only measuring code, then any generosity other than code won’t be registered. I don’t really know what to do about that – I didn’t found Ubuntu as a vehicle for getting lots of code written, that didn’t seem to me to be what the world needed.

Open source communities work like democracies, with all the noise that implies, whereas R&D within corporations follows a stricter hierarchy. Still, for all that, Ubuntu and Android have made Linux mainstream, just as R has made statistical software available to all.

And Ubuntu also has great support for R (particularly the single-click R Commander install and icon), available at http://packages.ubuntu.com/lucid/math/r-cran-rcmdr
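Once that package is installed through the Ubuntu package manager, launching the GUI from R is a single call:

    # assuming the r-cran-rcmdr package from the link above is installed,
    # e.g. via:  sudo apt-get install r-cran-rcmdr
    library(Rcmdr)   # loading the package launches the R Commander GUI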

Trrrouble in the land of R… and Open Source Suggestions

Recently some comments by Ross Ihaka, co-founder of the R statistical software, on Revolution Analytics, a leading commercial vendor of R, came to my attention:

http://www.stat.auckland.ac.nz/mail/archive/r-downunder/2010-May/000529.html

[R-downunder] Article on Revolution Analytics

Ross Ihaka ihaka at stat.auckland.ac.nz
Mon May 10 14:27:42 NZST 2010


On 09/05/10 09:52, Murray Jorgensen wrote:
> Perhaps of interest:
>
> http://www.theregister.co.uk/2010/05/06/revolution_commercial_r/

Please note that R is "free software" not "open source".  These guys
are selling a GPLed work without disclosing the source to their part
of the work. I have complained to them and so far they have given me
the brush off. I am now considering my options.

Don't support these guys by buying their product. They are not feeding
back to the rights holders (the University of Auckland and I are rights
holders and they didn't even have the courtesy to contact us).

--
Ross Ihaka                         Email:  ihaka at stat.auckland.ac.nz
Department of Statistics           Phone:  (64-9) 373-7599 x 85054
University of Auckland             Fax:    (64-9) 373-7018
Private Bag 92019, Auckland
New Zealand

And from http://www.theregister.co.uk/2010/05/06/revolution_commercial_r/ :
Open source purists probably won't be all too happy to learn that Revolution is going to be employing an "open core" strategy, which means the core R programs will remain open source and be given tech support under a license model, but the key add-ons that make R more scalable will be closed source and sold under a separate license fee. Because most of those 2,500 add-ons for R were built by academics and Revolution wants to supplant SPSS and SAS as the tools used by students, Revolution will be giving the full single-user version of the R Enterprise stack away for free to academics. 
Conclusion:

1) One co-founder of R is advocating not buying from Revolution Analytics, which has the other co-founder of R, Robert Gentleman, on its board.
Source: http://www.revolutionanalytics.com/aboutus/leadership.php

2) Revolution Analytics is using 2500 packages for free but insisting on getting paid AND closing the source of its own packages (which raises a technical point: how exactly can you prevent the source code of an R package from being seen?).

Maybe there can be a PACKAGE marketplace, just like Android Apps, Facebook Apps, and Salesforce.com Apps, so at least some of the thousands of R package developers can earn something. Sorry, but email lists do not pay mortgages, and no one is disputing the NEED for commercializing R or rewarding developers.

Though Barr created SAS, he gave up control to Goodnight and Sall (https://decisionstats.wordpress.com/2010/06/02/sas-early-days/), and Goodnight and Sall do pay their developers well, to the envy of their not-so-well-paid counterparts.

3) I really liked the innovation of Revolution Analytics' RevoScaleR, and I wish that default R datasets could be converted to the XDF format, since that would basically kill off the criticism of R being slow on bigger datasets. But I also realize the need for creating an analytics marketplace for R developers and R students, so the academic version of R being free and Revolution R being paid seems like a fair trade-off.
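As a hedged sketch of what I mean (function names per Revolution's RevoScaleR documentation; I have not tested this, so treat the exact signatures as my assumption):

    library(RevoScaleR)   # Revolution's closed-source big-data package
    # convert a CSV to the on-disk XDF format, then model it
    # without loading all rows into memory:
    rxImport(inData = "big.csv", outFile = "big.xdf")
    fit <- rxLinMod(y ~ x1 + x2, data = "big.xdf")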

Note: you can still get a job faster as a stats student if you mention SAS rather than R as a statistical skill; not all stats students go into academia.

4) There must be more elegant ways of handling this than ignoring each other, as Revolution and Ihaka seem to be doing.

I can almost hear people in Cary, NC chuckling at Norman Nie, long-time SPSS opponent and now Revolution CEO, antagonizing R's academics within a year of taking over, so I hope this ends well for all. The road to hell is paved with good intentions, so if Revolution can share some source code with, say, R Core members (even Microsoft shares source code with partners), and R Core and Revolution agree on licensing royalties from each other, they could actually speed up R package creation rather than allow this two-decade effort to end up the way S and S-PLUS did at TIBCO.

Maybe Richard Stallman can help, or maybe Ihaka has a better sense of where things will go in a couple of years. He must know something; he invented it, didn't he?


Google AppInventor – Android and Business Intelligence

Here is a great new tool for techies to start creating Android apps right away, even with no knowledge of the platform. Of course there are already a great number of apps, including my favorite Android data-mining app in R, called AnalyticDroid: http://analyticdroid.togaware.com/

Basically it calls the Rattle (R Analytical Tool To Learn Easily) data-mining GUI, enabling data mining from an Android mobile using remote computing.
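Rattle itself is an ordinary CRAN package, so the R side that AnalyticDroid drives remotely starts up with:

    install.packages("rattle")   # Rattle is on CRAN
    library(rattle)
    rattle()                     # launches the Rattle data-mining GUI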

I don't know whether any other statistical application is available on Android mobiles, though SAS did have a presentation on using SAS on the iPhone:

http://www.wuss.org/proceedings09/09WUSSProceedings/papers/dpr/DPR-Truong.pdf



SAS Mobile – iPhone App

All you need to do is go to http://appinventor.googlelabs.com/about/index.html and request access (yes, there is a two-week approval waiting list).

Because App Inventor provides access to a GPS-location sensor, you can build apps that know where you are. You can build an app to help you remember where you parked your car, an app that shows the location of your friends or colleagues at a concert or conference, or your own custom tour app of your school, workplace, or a museum.
You can write apps that use the phone features of an Android phone. You can write an app that periodically texts “missing you” to your loved ones, or an app “No Text While Driving” that responds to all texts automatically with “sorry, I’m driving and will contact you later”. You can even have the app read the incoming texts aloud to you (though this might lure you into responding).
App Inventor provides a way for you to communicate with the web. If you know how to write web apps, you can use App Inventor to write Android apps that talk to your favorite web sites, such as Amazon and Twitter.

Here is a not-so-statistical Android app I am trying to create, called Hang-Out. It uses the current GPS location of your phone to find the nearest pub, movie, or diner, and to catch a bus or train, based on your city, the GPS fix, the time of request, and the schedule of that city's public transport. Very much a work in progress.

KDNuggets Poll on SAS: Churn in Analytics Users

Here are some surprising results from the bible of all data miners, KDnuggets.com, with some interesting comments about SAS being the Microsoft of analytics.

I believe technically advanced users will probably want to try out R before going in for a commercial license from Revolution Analytics, as it is free to try. Also, WPS offers a one-month free preview of its software; the latest release competes with SAS/STAT, SAS/ACCESS, SAS/GRAPH, and Base SAS, so anyone with these installed on a server would be interested in at least testing it for free. WPS would also be interested in adding more engines (like the ones they have for Oracle and Teradata).

One very crucial difference for SAS is its ability to pull in data from almost all data formats, so if you are using SAS/CONNECT to remote-submit code, you may not be able to switch soon.

Also, the more license-heavy customers are not the kind who keep lots of data on their local desktops; data is usually pulled from servers and crunched before being analysed. R has recently made some strides with the RevoScaleR package from Revolution Analytics, but its effectiveness will be tested and tried in the coming months; it seems like a great step in the right direction.

For SAS, the feedback should be a call to improve their product bundling, some of which can feel like overselling at times, but they have been fighting off challenges for the past four decades and have the pockets and the intention to sustain market-share battles, including discounts (for repeat customers, SAS can be much cheaper than, say, WPS or R would be for a first-time user).

http://teamwpc.co.uk/home

This really should come as a surprise to some people. You can see the comments on WPS and R at the site itself. Interesting stuff, and we can check back in, say, a year to see how many actually DID switch.

http://www.kdnuggets.com/polls/2010/switching-from-sas-to-wps.html

Interview Stephanie McReynolds Director Product Marketing, AsterData

Here is an interview with Stephanie McReynolds, who works as Director of Product Marketing with AsterData. I asked her a couple of questions about the new product releases from AsterData in analytics and MapReduce.

Ajay – How does the new Eclipse Plugin help people who are already working with huge datasets but are new to AsterData’s platform?

Stephanie- Aster Data Developer Express, our new SQL-MapReduce development plug-in for Eclipse, makes MapReduce applications easy to develop. With Aster Data Developer Express, developers can develop, test and deploy a complete SQL-MapReduce application in under an hour. This is a significant increase in productivity over the traditional analytic application development process for Big Data applications, which requires significant time coding applications in low-level code and testing applications on sample data.

Ajay – What are the various analytical functions that you have introduced recently? List, say, the top 10.

Stephanie- At Aster Data, we have an intense focus on making the development process easier for SQL-MapReduce applications. Aster Developer Express is a part of this initiative, as is the release of pre-defined analytic functions. We recently launched both a suite of analytic modules and a partnership program dedicated to delivering pre-defined analytic functions for the Aster Data nCluster platform. Pre-defined analytic functions delivered by Aster Data’s engineering team are delivered as modules within the Aster Data Analytic Foundation offering and include analytics in the areas of pattern matching, clustering, statistics, and text analysis– just to name a few areas. Partners like Fuzzy Logix and Cobi Systems are extending this library by delivering industry-focused analytics like Monte Carlo Simulations for Financial Services and geospatial analytics for Public Sector– to give you a few examples.

Ajay – So okay, I want to do a K Means cluster on, say, a million rows (and, say, 200 columns) using the Aster method. How do I go about it using the new plug-in as well as your product?

Stephanie- The power of the Aster Data environment for analytic application development is in SQL-MapReduce. SQL is a powerful analytic query standard because it is a declarative language. MapReduce is a powerful programming framework because it can support high performance parallel processing of Big Data and extreme expressiveness, by supporting a wide variety of programming languages, including Java, C/C#/C++, .Net, Python, etc. Aster Data has taken the performance and expressiveness of MapReduce and combined it with the familiar declarativeness of SQL. This unique combination ensures that anyone who knows standard SQL can access advanced analytic functions programmed for Big Data analysis using MapReduce techniques.

kMeans is a good example of an analytic function that we pre-package for developers as part of the Aster Data Analytic Foundation. What does that mean? It means that the MapReduce portion of the development cycle has been completed for you. Each pre-packaged Aster Data function can be called using standard SQL, and executes the defined analytic in a fully parallelized manner in the Aster Data database using MapReduce techniques. The result? High performance analytics with the expressiveness of low-level languages accessed through declarative SQL.

Ajay – I see an increasing focus on analytics. Is this part of your product strategy, and how do you see yourself competing with pure analytics vendors?

Stephanie – Aster Data is an infrastructure provider. Our core product is a massively parallel processing database called nCluster that performs at or beyond the capabilities of any other analytic database in the market today. We developed our analytics strategy as a response to demand from our customers who were looking beyond the price/performance wars being fought today and wanted support for richer analytics from their database provider. Aster Data analytics are delivered in nCluster to enable analytic applications that are not possible in more traditional database architectures.

Ajay – Name some recent case studies of implementing SQL-MapReduce with analytical functions.

Stephanie – There are three new classes of applications that Aster Data Express and Aster Analytic Foundation support: iterative analytics, prediction and optimization, and ad hoc analysis.

Aster Data customers are uncovering critical business patterns in Big Data by performing hypothesis-driven, iterative analytics. They are interactively exploring massive volumes of data—terabytes to petabytes—in a top-down deductive manner. ComScore, an Aster Data customer that performs website experience analysis, is a good example of an Aster Data customer performing this type of analysis.

Other Aster Data customers are building applications for prediction and optimization that discover trends, patterns, and outliers in data sets. Examples of these types of applications are propensity to churn in telecommunications, proactive product and service recommendations in retail, and pricing and retention strategies in financial services. Full Tilt Poker, which is using Aster Data for fraud prevention, is a good example of a customer in this space.

The final class of application that I would like to highlight is ad hoc analysis. Examples of ad hoc analysis that can be performed include social network analysis, advanced click-stream analysis, graph analysis, cluster analysis, and a wide variety of mathematical, trigonometric, and statistical functions. LinkedIn, whose analysts and data scientists have access to all of their customer data in Aster Data, is a good example of a customer using the system in this manner.

While Aster Data customers are using nCluster in a number of other ways, these three new classes of applications are areas in which we are seeing particularly innovative application development.

Biography-

Stephanie McReynolds is Director of Product Marketing at Aster Data, where she is an evangelist for Aster Data’s massively parallel data-analytics server product. Stephanie has over a decade of experience in product management and marketing for business intelligence, data warehouse, and complex event processing products at companies such as Oracle, Peoplesoft, and Business Objects. She holds both a master’s and undergraduate degree from Stanford University.

Dryad- Microsoft's answer to MR

While reading across the internet I came across Microsoft's answer to MapReduce, called Dryad, which has been around for some time but has not generated quite the buzz that Hadoop or MapReduce have.

http://research.microsoft.com/en-us/projects/dryadlinq/

DryadLINQ

DryadLINQ is a simple, powerful, and elegant programming environment for writing large-scale data parallel applications running on large PC clusters.

Overview

New! An academic release of Dryad/DryadLINQ is now available for public download.

The goal of DryadLINQ is to make distributed computing on large compute clusters simple enough for every programmer. DryadLINQ combines two important pieces of Microsoft technology: the Dryad distributed execution engine and the .NET Language Integrated Query (LINQ).

Dryad provides reliable, distributed computing on thousands of servers for large-scale data parallel applications. LINQ enables developers to write and debug their applications in a SQL-like query language, relying on the entire .NET library and using Visual Studio.

DryadLINQ translates LINQ programs into distributed Dryad computations:

  • C# and LINQ data objects become distributed partitioned files.
  • LINQ queries become distributed Dryad jobs.
  • C# methods become code running on the vertices of a Dryad job.

DryadLINQ has the following features:

  • Declarative programming: computations are expressed in a high-level language similar to SQL
  • Automatic parallelization: from sequential declarative code the DryadLINQ compiler generates highly parallel query plans spanning large computer clusters. For exploiting multi-core parallelism on each machine DryadLINQ relies on the PLINQ parallelization framework.
  • Integration with Visual Studio: programmers in DryadLINQ take advantage of the comprehensive VS set of tools: Intellisense, code refactoring, integrated debugging, build, source code management.
  • Integration with .Net: all .Net libraries, including Visual Basic, and dynamic languages are available.
  • Conciseness: the following line of code is a complete implementation of the Map-Reduce computation framework in DryadLINQ:
      public static IQueryable<R>
      MapReduce<S,M,K,R>(this IQueryable<S> source,
                         Expression<Func<S,IEnumerable<M>>> mapper,
                         Expression<Func<M,K>> keySelector,
                         Expression<Func<K,IEnumerable<M>,R>> reducer)
      {
          return source.SelectMany(mapper).GroupBy(keySelector, reducer);
      }

    And from http://research.microsoft.com/en-us/projects/dryad/

    Dryad

    The Dryad Project is investigating programming models for writing parallel and distributed programs to scale from a small cluster to a large data-center.

    Overview

    New! An academic release of DryadLINQ is now available for public download.

    Dryad is an infrastructure which allows a programmer to use the resources of a computer cluster or a data center for running data-parallel programs. A Dryad programmer can use thousands of machines, each of them with multiple processors or cores, without knowing anything about concurrent programming.

    The Structure of Dryad Jobs

    A Dryad programmer writes several sequential programs and connects them using one-way channels. The computation is structured as a directed graph: programs are graph vertices, while the channels are graph edges. A Dryad job is a graph generator which can synthesize any directed acyclic graph. These graphs can even change during execution, in response to important events in the computation.

    Dryad is quite expressive. It completely subsumes other computation frameworks, such as Google’s map-reduce, or the relational algebra. Moreover, Dryad handles job creation and management, resource management, job monitoring and visualization, fault tolerance, re-execution, scheduling, and accounting.

    The Dryad Software Stack

    As a proof of Dryad's versatility, a rich software ecosystem has been built on top of Dryad:

    • SSIS on Dryad executes many instances of SQL server, each in a separate Dryad vertex, taking advantage of Dryad’s fault tolerance and scheduling. This system is currently deployed in a live production system as part of one of Microsoft’s AdCenter log processing pipelines.
    • DryadLINQ generates Dryad computations from the LINQ Language-Integrated Query extensions to C#.
    • The distributed shell is a generalization of the pipe concept from the Unix shell. If Unix pipes allow the construction of one-dimensional (1-D) process structures, the distributed shell allows the programmer to build 2-D structures in a scripting language. It generalizes Unix pipes in three ways:
      1. It allows processes to easily connect multiple file descriptors of each process — hence the 2-D aspect.
      2. It allows the construction of pipes spanning multiple machines, across a cluster.
      3. It virtualizes the pipelines, allowing the execution of pipelines with many more processes than available machines, by time-multiplexing processors and buffering results.
    • Several languages are compiled to distributed shell processes. PSQL is an early version, recently replaced with Scope.

    Publications

    Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks
    Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, and Dennis Fetterly
    European Conference on Computer Systems (EuroSys), Lisbon, Portugal, March 21-23, 2007

    Video of a presentation on Dryad at the Google Campus, given by Michael Isard, Nov 1, 2007.

    Also interesting to read-

    Why does Dryad use a DAG?

    The basic computational model we decided to adopt for Dryad is the directed-acyclic graph (DAG). Each node in the graph is a computation, and each edge in the graph is a stream of data traveling in the direction of the edge. The amount of data on any given edge is assumed to be finite, the computations are assumed to be deterministic, and the inputs are assumed to be immutable. This isn't by any means a new way of structuring a distributed computation (for example Condor had DAGMan long before Dryad came along), but it seemed like a sweet spot in the design space given our other constraints.

    So, why is this a sweet spot? A DAG is very convenient because it induces an ordering on the nodes in the graph. That makes it easy to design scheduling policies, since you can define a node to be ready when its inputs are available, and at any time you can choose to schedule as many ready nodes as you like in whatever order you like, and as long as you always have at least one scheduled you will continue to make progress and never deadlock. It also makes fault-tolerance easy, since given our determinism and immutability assumptions you can backtrack as far as you want in the DAG and re-execute as many nodes as you like to regenerate intermediate data that has been lost or is unavailable due to cluster failures.

    from

    http://blogs.msdn.com/b/dryad/archive/2010/07/23/why-does-dryad-use-a-dag.aspx