HANA Oncolyzer

An interesting use case of technology for better health is the HANA Oncolyzer at http://epic.hpi.uni-potsdam.de/Home/HanaOncolyzer

“Built on the newest in-memory technology, the HANA Oncolyzer is able to analyze even huge amounts of medical data in the shortest time,” says Dr. Alexander Zeier, Deputy Chair of EPIC. Research institutes and university hospitals benefit from the HANA Oncolyzer, as it builds the basis for a flexible exchange of information about the efficiency of medicines and treatments.

In the near future, the tumor DNA of all cancer patients will need to be analyzed to support patient-specific therapies. These analyses produce medical data amounting to multiple terabytes. “These data need to be analyzed for mutations and anomalies in real time,” says Matthias Steinbrecher at SAP’s Innovation Center in Potsdam. With this aim, the research prototype HANA Oncolyzer was developed at our chair in cooperation with SAP’s Innovation Center in Potsdam. “The ‘heart’ of our development is the in-memory technology that supports the parallel analysis of millions of data points within seconds in main memory,” says Matthieu Schapranow, Ph.D. candidate at the HPI.

Research activities result in 500,000 or more data points per patient, and with the help of a dedicated iPad application, medical doctors can access all of the data on the move, at any location and at any time.


SAP HANA Contest

From SAP, a new contest based on HANA

http://wiki.sdn.sap.com/wiki/display/events/SAP%20HANA%20InnoJam%20Online%202011?bc=true

Become a champion and join our new developer challenge! Explore SAP HANA, share a critical business problem you want to solve on Idea Place for SAP HANA InnoJam Online, and describe your idea. Your participation starts here today, September 13th. The first 100 submitters will get to play on SAP HANA and get a chance to shine and win big prizes! To prepare for the new challenge, please read all the sources you might need for your SAP HANA training, the first chapters of the SAP HANA Pocketbook (coming soon), and the official SAP HANA webpage.

Phase 1

  • Share a critical business problem you want to solve on Idea Place, the place is open from September 13th, 2011
  • First 100 submitters go to Phase 2

Phase 2

  • Join SAP HANA Champions community, get started with HANA, build teams
  • Access SAP HANA sandbox
  • Use SAP HANA to solve your business problem
  • Get help from SAP experts if and when needed

Phase 3

  • Share your solution: written description and a short video
  • Top 8 finalists go to Phase 4

Phase 4

  • 8 finalists present their problem/solution/feedback in front of SAP executives and a panel of judges (early January in Palo Alto)
  • Winners’ prizes will be announced soon

#SAS 9.3 and #Rstats 2.13.1 Released

A bit early, but the latest editions of both SAS and R were released last week.

SAS 9.3 is clearly a major release, with multiple enhancements to keep SAS relevant and pertinent in enterprise software in the age of big data. There are also many more enhancements specific to R, to JMP, and to partners like Teradata.

http://support.sas.com/software/93/index.html

Features

Data management

  • Enhanced manageability for improved performance
  • In-database processing (EL-T pushdown)
  • Enhanced performance for loading Oracle data
  • New ET-L transforms
  • Data access

Data quality

  • SAS® Data Integration Server includes DataFlux® Data Management Platform for enhanced data quality
  • Master Data Management (DataFlux® qMDM)
    • Provides support for a master hub of trusted entity data.

Analytics

  • SAS® Enterprise Miner™
    • New survival analysis predicts when an event will happen, not just if it will happen.
    • New rate making capability for insurance predicts optimal insurance premium for individuals based on attributes known at application time.
    • Time Series Data Mining node (experimental) applies data mining techniques to transactional, time-stamped data.
    • Support Vector Machines node (experimental) provides a supervised machine learning method for prediction and classification.
  • SAS® Forecast Server
    • SAS Forecast Server is integrated with the SAP APO Demand Planning module to provide SAP users with access to a superior forecasting engine and automatic forecasting capabilities.
  • SAS® Model Manager
    • Seamless integration of R models with the ability to register and manage R models in SAS Model Manager.
    • Ability to perform champion/challenger side-by-side comparisons between SAS and R models to see which model performs best for a specific need.
  • SAS/OR® and SAS® Simulation Studio
    • Optimization
    • Simulation
      • Automatic input distribution fitting using JMP with SAS Simulation Studio.

Text analytics

  • SAS® Text Miner
  • SAS® Enterprise Content Categorization
  • SAS® Sentiment Analysis

Scalability and high-performance

  • SAS® Analytics Accelerator for Teradata (new product)
  • SAS® Grid Manager

And the latest from http://www.r-project.org/: I was a bit curious to know why the licensing for R has changed (from GPL-2 to GPL-2 | GPL-3).

LICENCE:

No parts of R are now licensed solely under GPL-2. The licences for packages rpart and survival have been changed, which means that the licence terms for R as distributed are GPL-2 | GPL-3.
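
If you want to read the licence texts yourself, the NEWS items below note that RShowDoc() in R 2.13.1 can display the licences shipped with R; a quick illustration:

    RShowDoc("GPL-2")   # opens the GPL-2 licence text shipped with R
    RShowDoc("GPL-3")   # opens the GPL-3 licence text
    license()           # summarizes R's licensing terms at the console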


This is a maintenance release to consolidate various minor fixes to 2.13.0.
CHANGES IN R VERSION 2.13.1:

  NEW FEATURES:

    • iconv() no longer translates NA strings as "NA".

    • persp(box = TRUE) now warns if the surface extends outside the
      box (since occlusion for the box and axes is computed assuming
      the box is a bounding box). (PR#202.)

    • RShowDoc() can now display the licences shipped with R, e.g.
      RShowDoc("GPL-3").

    • New wrapper function showNonASCIIfile() in package tools.

    • nobs() now has a "mle" method in package stats4.

    • trace() now deals correctly with S4 reference classes and
      corresponding reference methods (e.g., $trace()) have been added.

    • xz has been updated to 5.0.3 (very minor bugfix release).

    • tools::compactPDF() gets more compression (usually a little,
      sometimes a lot) by using the compressed object streams of PDF
      1.5.

    • cairo_ps(onefile = TRUE) generates encapsulated EPS on platforms
      with cairo >= 1.6.

    • Binary reads (e.g. by readChar() and readBin()) are now supported
      on clipboard connections.  (Wish of PR#14593.)

    • as.POSIXlt.factor() now passes ... to the character method
      (suggestion of Joshua Ulrich).  [Intended for R 2.13.0 but
      accidentally removed before release.]

    • vector() and its wrappers such as integer() and double() now warn
      if called with a length argument of more than one element.  This
      helps track down user errors such as calling double(x) instead of
      as.double(x).

  INSTALLATION:

    • Building the vignette PDFs in packages grid and utils is now part
      of running make from an SVN checkout on a Unix-alike: a separate
      make vignettes step is no longer required.

      These vignettes are now made with keep.source = TRUE and hence
      will be laid out differently.

    • make install-strip failed under some configuration options.

    • Packages can customize non-standard installation of compiled code
      via a src/install.libs.R script. This allows packages that have
      architecture-specific binaries (beyond the package's shared
      objects/DLLs) to be installed in a multi-architecture setting.

  SWEAVE & VIGNETTES:

    • Sweave() and Stangle() gain an encoding argument to specify the
      encoding of the vignette sources if the latter do not contain a
      \usepackage[]{inputenc} statement specifying a single input
      encoding.

    • There is a new Sweave option figs.only = TRUE to run each figure
      chunk only for each selected graphics device, and not first using
      the default graphics device.  This will become the default in R
      2.14.0.

    • Sweave custom graphics devices can have a custom function
      foo.off() to shut them down.

    • Warnings are issued when non-portable filenames are found for
      graphics files (and chunks if split = TRUE).  Portable names are
      regarded as alphanumeric plus hyphen, underscore, plus and hash
      (periods cause problems with recognizing file extensions).

    • The Rtangle() driver has a new option show.line.nos which is by
      default false; if true it annotates code chunks with a comment
      giving the line number of the first line in the sources (the
      behaviour of R >= 2.12.0).

    • Package installation tangles the vignette sources: this step now
      converts the vignette sources from the vignette/package encoding
      to the current encoding, and records the encoding (if not ASCII)
      in a comment line at the top of the installed .R file.

  DEPRECATED AND DEFUNCT:

    • The internal functions .readRDS() and .saveRDS() are now
      deprecated in favour of the public functions readRDS() and
      saveRDS() introduced in R 2.13.0.

    • Switching off lazy-loading of code _via_ the LazyLoad field of
      the DESCRIPTION file is now deprecated.  In future all packages
      will be lazy-loaded.

    • The off-line help() types "postscript" and "ps" are deprecated.

  UTILITIES:

    • R CMD check on a multi-architecture installation now skips the
      user's .Renviron file for the architecture-specific tests (which
      do read the architecture-specific Renviron.site files).  This is
      consistent with single-architecture checks, which use
      --no-environ.

    • R CMD build now looks for DESCRIPTION fields BuildResaveData and
      BuildKeepEmpty for per-package overrides.  See ‘Writing R
      Extensions’.

  BUG FIXES:

    • plot.lm(which = 5) was intended to order factor levels in
      increasing order of mean standardized residual.  It ordered the
      factor labels correctly, but could plot the wrong group of
      residuals against the label.  (PR#14545)

    • mosaicplot() could clip the factor labels, and could overlap them
      with the cells if a non-default value of cex.axis was used.
      (Related to PR#14550.)

    • dataframe[[row,col]] now dispatches on [[ methods for the
      selected column (spotted by Bill Dunlap).

    • sort.int() would strip the class of an object, but leave its
      object bit set.  (Reported by Bill Dunlap.)

    • pbirthday() and qbirthday() did not implement the algorithm
      exactly as given in their reference and so were unnecessarily
      inaccurate.

      pbirthday() now solves the approximate formula analytically
      rather than using uniroot() on a discontinuous function.

      The description of the problem was inaccurate: the probability is
      a tail probability (‘2 _or more_ people share a birthday’).

    • Complex arithmetic sometimes warned incorrectly about producing
      NAs when there were NaNs in the input.

    • seek(origin = "current") incorrectly reported it was not
      implemented for a gzfile() connection.

    • c(), unlist(), cbind() and rbind() could silently overflow the
      maximum vector length and cause a segfault.  (PR#14571)

    • The fonts argument to X11(type = "Xlib") was being ignored.

    • Reading (e.g. with readBin()) from a raw connection was not
      advancing the pointer, so successive reads would read the same
      value.  (Spotted by Bill Dunlap.)

    • Parsed text containing embedded newlines was printed incorrectly
      by as.character.srcref().  (Reported by Hadley Wickham.)

    • decompose() used with a series of a non-integer number of periods
      returned a seasonal component shorter than the original series.
      (Reported by Rob Hyndman.)

    • fields = list() failed for setRefClass().  (Reported by Michael
      Lawrence.)

    • Reference classes could not redefine an inherited field which had
      class "ANY". (Reported by Janko Thyson.)

    • Methods that override previously loaded versions will now be
      installed and called.  (Reported by Iago Mosqueira.)

    • addmargins() called numeric(apos) rather than
      numeric(length(apos)).

    • The HTML help search sometimes produced bad links.  (PR#14608)

    • Command completion will no longer be broken if tail.default() is
      redefined by the user. (Problem reported by Henrik Bengtsson.)

    • LaTeX rendering of markup in titles of help pages has been
      improved; in particular, \eqn{} may be used there.

    • isClass() used its own namespace as the default of the where
      argument inadvertently.

    • Rd conversion to latex mis-handled multi-line titles (including
      cases where there was a blank line in the \title section).
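
To make a couple of the user-visible changes above concrete, here is a small sketch in R (illustrative only; run it on 2.13.1 to see the new behaviour):

    # .readRDS()/.saveRDS() are deprecated; use the public functions instead:
    saveRDS(mtcars, "mtcars.rds")
    identical(readRDS("mtcars.rds"), mtcars)   # TRUE

    # vector() and wrappers such as double() now warn if 'length' has
    # more than one element, catching slips like double(x) for as.double(x):
    x <- c(1.5, 2.5)
    double(x)      # warns in 2.13.1
    as.double(x)   # what was intended: 1.5 2.5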
Also see this interesting blog post: Examples of tasks replicated in SAS and R.

Brief Interview Timo Elliott

Here is a brief interview with Timo Elliott, a 19-year veteran of SAP BusinessObjects.

Ajay- What are the top 5 events in Business Integration and Data Visualization services you saw in 2010, and what are the top three trends you see in these for 2011?


Timo-

Top five events in 2010:

(1) Back to strong market growth. IT spending plummeted last year (BI continued to grow, but more slowly than previous years). This year, organizations reopened their wallets and funded new analytics initiatives — all the signs indicate that BI market growth will be double that of 2009.

(2) The launch of the iPad. Mobile BI has been around for years, but the iPad opened the floodgates of organizations taking a serious look at mobile analytics — and the easy-to-use, executive-friendly iPad dashboards have considerably raised the profile of analytics projects inside organizations.

(3) Data warehousing got exciting again. Decades of incremental improvements (column databases, massively parallel processing, appliances, in-memory processing…) all came together with robust commercial offers that challenged existing data storage and calculation methods. And new “NoSQL” approaches, designed for the new problems of massive amounts of less-structured web data, started moving into the mainstream.

(4) The end of Google Wave, the start of social BI. Google Wave was launched as a rethink of how we could bring together email, instant messaging, and social networks. While Google decided to close down the technology this year, it has left its mark, notably by influencing the future of “social BI”, with several major vendors bringing out commercial products this year.

(5) The start of the big BI merge. While several small independent BI vendors reported strong growth, the major trend of the year was consolidation and integration: the BI megavendors (SAP, Oracle, IBM, Microsoft) increased their market share (sometimes by acquiring smaller vendors, e.g. IBM/SPSS and SAP/Sybase) and integrated analytics with their existing products, blurring the line between BI and other technology areas.

Top three trends next year:

(1) Analytics, reinvented. New DW techniques make it possible to do sub-second, interactive analytics directly against row-level operational data. Now BI processes and interfaces need to be rethought and redesigned to make best use of this — notably by blurring the distinctions between the “design” and “consumption” phases of BI.

(2) Corporate and personal BI come together. The ability to mix corporate and personal data for quick, pragmatic analysis is a common business need. The typical solution to the problem — extracting and combining the data into a local data store (either Excel or a departmental data mart) — pleases users, but introduces duplication and extra costs and makes a mockery of information governance. 2011 will see the rise of systems that let individuals and departments load their data into personal spaces in the corporate environment, allowing pragmatic analytic flexibility without compromising security and governance.

(3) The next generation of business applications. Where are the business applications designed to support what people really do all day, such as implementing this year’s strategy, launching new products, or acquiring another company? 2011 will see the first prototypes of people-focused, flexible, information-centric, and collaborative applications, bringing together the best of business intelligence, “enterprise 2.0”, and existing operational applications.

And one that should happen, but probably won’t:

(4) Intelligence = Information + PEOPLE. Successful analytics isn’t about technology — it’s about people, process, and culture. The biggest trend in 2011 should be organizations spending the majority of their efforts on user adoption rather than technical implementation.

About- http://timoelliott.com/blog/about

Timo Elliott is a 19-year veteran of SAP BusinessObjects, and has spent the last twenty years working with customers around the world on information strategy.

He works closely with SAP research and innovation centers around the world to evangelize new technology prototypes.

His popular Business Analytics and SAPWeb20 blogs track innovation in analytics and social media, including topics such as augmented corporate reality, collaborative decision-making, and social network analysis.

His PowerPoint Twitter Tools lets presenters see and react to tweets in real time, embedded directly within their slides.

A popular and engaging speaker, Elliott presents regularly to IT and business audiences at international conferences, on subjects such as why BI projects fail and what to do about it, and the intersection of BI and enterprise 2.0.

Prior to Business Objects, Elliott was a computer consultant in Hong Kong and led analytics projects for Shell in New Zealand. He holds a first-class honors degree in Economics with Statistics from Bristol University, England. He blogs at http://timoelliott.com/blog/ (one of the best-designed blogs in BI). You can see more on his personal web site here and his photo/sketch blog here. You should follow Timo at http://twitter.com/timoelliott

Art Credit- Timo Elliott

Related Articles

Quantifying Analytics ROI


I had a brief Twitter exchange with Jim Davis, Chief Marketing Officer at SAS Institute, on the return on investment of business analytics projects for customers. I had interviewed Jim Davis last year: https://decisionstats.com/2009/06/05/interview-jim-davis-sas-institute/

Now Jim Davis is a big guy, and he is rushing from the launch of SAS Institute’s social media analytics in Japan, through some arguably difficult flying conditions, to be home in America in time for Thanksgiving. That, and I have not been much of a good blog boy recently, more swayed by love of open source than love of software per se. I love both equally, given I am bad at both equally.

Anyway, Jim’s contention (http://twitter.com/Davis_Jim) was that customers should invest in business analytics only if there is a positive return on investment. I am quoting him here:

What is important is that there be a positive ROI on each and every BA project. Otherwise don’t do it.

That’s not the marketing I was taught in business school; there, it was basically sell, sell, sell.

However, I see most BI vendors’ salespeople operating in “let me meet my sales quota for this quarter” mode. Quantifying customer ROI is simpler maths than predictive analytics, yet there seems to be some information asymmetry around it.
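
To make the “simple maths” concrete, here is a toy ROI check in R; the figures are entirely hypothetical, not from any vendor study:

    # toy ROI check for a business analytics project (hypothetical figures)
    benefit <- 500000   # estimated annual benefit from the project, USD
    cost    <- 350000   # total cost of ownership over the same period, USD
    (benefit - cost) / cost   # 0.4286, i.e. ~43%; positive, so it passes Jim's test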

Here is a paper from Northwestern University on ROI in IT projects.

But overall, it would be in the interest of both customers and business analytics vendors to publish aggregated ROI figures.

The opponents of this transparency in ROI would be the market-share leaders, who have trapped their customers through high migration costs (due to complexity) or contractually.

A recent study listed Oracle as having a large percentage of unhappy customers who would still renew! SAP had problems when it arbitrarily raised its licensing prices (that CEO is now CEO of HP and dodging legal notices from Oracle).

Indeed, Jim Davis’s famously unsettling call to focus on business analytics, since business intelligence is dead, has been implemented more aggressively by IBM through analytical acquisitions than by SAS itself, which has been conservative about inorganic growth. Quantifying ROI should theoretically aid open source software the most (since it is cheapest in upfront licensing) or newer technologies like MapReduce/Hadoop (since they are so fast). But I think the market has a way of factoring in these things, and customers are neither foolish nor unaware of the costs versus benefits of migration.

The counterpoint is that business analytics and business intelligence are imperfect markets, with a duopoly or a few big players thriving in the absence of customer regulation.

You get more protection as the customer of a $20 bag of potato chips than as the customer of $200,000 software. Regulators are wary of stepping in to ensure ROI fairness (since most bright techies are either working in the private sector, running their own startup, or invested in startups). Who in government understands analytics and intelligence well enough to ensure that vendor lock-in is prevented and market flexibility preserved? It is also a lower priority for embattled regulators to ensure ROI on enterprise software, unlike the aggressiveness they have shown in retail or online software.

Who will analyze the analysts, and who can quantify the value of quants (or penalize them for shoddy quantitative analytics)? That is an interesting question we expect to see more of.


Nice BI Tutorials


Here is a set of very nice, screenshot-enabled tutorials from SAP BI. They are a bit outdated (three years old), but most of the content is still quite relevant, especially from a tutorial-design perspective.

Most people would rather see screenshot-based, step-by-step PowerPoints than cluttered or clever presentations, or videos that force you to sit like a TV zombie. Unfortunately, most tutorial presentations I see, especially for BI, are either slides with one or two points that abruptly shift to “concepts”, or videos that are at least 10 minutes long. That works fine for scripting tutorials or hands-on workshops, but cannot be revisited easily for later study.

The medium for tutorials, especially for GUI software, can vary: Slideshare, Scribd, Google Presentations, or Microsoft PowerPoint. But a step-by-step, screenshot-by-screenshot tutorial is much better for understanding than command-line jargon, YouTube video presentations, or PowerPoints full of bullet points.

Have a look at these SAP BI 7 slideshares


Speaking of BI, the R package called brew is going to brew up something special, especially combined with R Apache. However, I wish R Apache, R Web, or Rserve had step-by-step screenshot install tutorials to increase their usage in business intelligence.
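
For the curious, a brew template mixes plain text with R expressions; a minimal sketch, assuming the brew package is installed from CRAN:

    library(brew)
    # <%= expr %> in a template is replaced by the value of the R expression
    template <- "Region <%= region %> sold <%= format(units, big.mark = ',') %> units."
    region <- "EMEA"
    units  <- 125000
    brew(text = template)   # prints: Region EMEA sold 125,000 units.

Served through R Apache, a template like this could generate live report pages, which is presumably where the BI angle comes in.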

I tried searching for JMP GUI tutorials too, but I believe putting all your content behind a registration wall is not so great. Do a Pareto analysis of your training material; surely you can share a couple more tutorials without registration. It would also help users who want to migrate to get a feel for the installation complexities as well as the final report GUI.


Open Source’s worst enemy is itself, not Microsoft/SAS/SAP/Oracle

The decision of quality open source makers to offer their software at bargain-basement prices, even to enterprise customers who are used to paying many times more, is the reason open source software is taking so long to command respect in enterprise software.

I hate to be the messenger who brings bad news to my open source brethren,

but their worst nightmare is not the actions of their proprietary competitors like Oracle, SAP, SAS, and Microsoft (they hate each other even more than they hate open source),

nor their collective marketing tactics, which are textbook stuff (but referred to as Fear, Uncertainty, and Doubt by those outside that golden quartet). It is their own communities and their own cheap pricing.

It is community action that pushes them to offer their software at ridiculously low, bargain-basement prices. James Dixon, head geek and founder at Pentaho, has a point when he says that traditional metrics like revenue need to be adjusted for this impact, in his article at http://jamesdixon.wordpress.com/2010/11/02/comparing-open-source-and-proprietary-software-markets/

But James, why offer software to enterprise customers at one-tenth the price of the next competitor? One reason is that open source companies, more often than not, compete more with their own free community-version software than with the big proprietary packages.

Communities, including academics, are used to free. Hey, how about paying, say, $1 for each download?

There are two million R users. If even 50% of them paid $1 as a lifetime license fee, you could sponsor more new packages than twenty years of Google Summer of Code does right now.
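
Back-of-the-envelope, using the numbers assumed above:

    r_users      <- 2e6    # assumed R user base
    paying_share <- 0.5    # suppose half of them paid
    fee          <- 1      # one-time licence fee, in USD
    r_users * paying_share * fee   # 1e+06, i.e. a million dollars for new packages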

Secondly, this pricing can easily be adjusted by shifting the licensing: say, free for businesses of fewer than two people (even for the enhanced corporate version, not just the plain-vanilla community software, thus further increasing the spread of the plain-vanilla versions), and for businesses of 10 to 20 people, a six-month trial rather than a one-month trial.

But adjust the pricing to much more realistic levels compared with competing software. Make enterprises pay real value for enterprise software.

That’s the only way to earn respect, as well as a few dollars more.

As for SAS, it is time it started ridiculing Python now that it has accepted R.

Python is even MORE powerful than R in some use cases for statistical computing.

Dixon’s Pentaho and the Jaspersoft/Revolution combo are nice. I tested both Jasper and Pentaho thanks to these remarks this week 🙂 (see slides at http://www.jaspersoft.com/sites/default/files/downloads/events/Analytics%20-Jaspersoft-SEP2010.pdf or http://www.revolutionanalytics.com/news-events/free-webinars/2010/deploying-r/index.php).

Pentaho and Jasper do give great graphics in BI. (Graphical display in BI is not a SAS forte, though I don’t know how much they cross-sell JMP to BI customers; probably too much “JMP is another division” syndrome there.)
