I just checked out this new software for making PMML models. It is called Augustus and is created by the Open Data Group (http://opendatagroup.com/), which is headed by Robert Grossman, who was the first proponent of using R on Amazon EC2.
Someone like Zementis (http://adapasupport.zementis.com/) could probably use this to further test, enhance, or benchmark on EC2. They did have a joint webinar with Revolution Analytics recently.
See Recent News for more details.
Augustus
Augustus is a PMML 4-compliant scoring engine that works with segmented models. Augustus is designed for use with statistical and data mining models. The new release provides Baseline, Tree and Naive-Bayes producers and consumers.
There is also a version for use with PMML 3 models. It is able to produce and consume models with 10,000s of segments and conforms to a PMML draft RFC for segmented models and ensembles of models. It supports Baseline, Regression, Tree and Naive-Bayes.
Augustus is written in Python and is freely available under the GNU General Public License, version 2.
Predictive Model Markup Language (PMML) is an XML markup language to describe statistical and data mining models. PMML describes the inputs to data mining models, the transformations used to prepare data for data mining, and the parameters which define the models themselves. It is used for a wide variety of applications, including applications in finance, e-business, direct marketing, manufacturing, and defense. PMML is often used so that systems which create statistical and data mining models (“PMML Producers”) can easily interoperate with systems which deploy PMML models for scoring or other operational purposes (“PMML Consumers”).
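To make the producer/consumer split concrete, here is a minimal sketch of the producer side in R using the pmml package from CRAN; the model, variable names, and output file are illustrative assumptions, and this is generic R-side PMML production, not Augustus itself.

# A minimal "PMML producer" sketch using the CRAN packages pmml and XML;
# the model and the output file name are illustrative assumptions.
library(pmml)
library(XML)

# Fit a simple linear model in R (the producer side)
fit <- lm(Sepal.Length ~ Sepal.Width + Petal.Length, data = iris)

# Render the model as a PMML document and write it to disk; any PMML
# consumer (such as a scoring engine) can then load and score with it.
saveXML(pmml(fit), file = "iris_lm.pmml")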
Change Detection using Augustus
For information regarding using Augustus with Change Detection and Health and Status Monitoring, please see change-detection.
Open Data
Open Data Group provides management consulting services, outsourced analytical services, analytic staffing, and expert witnesses broadly related to data and analytics. It has experience with customer data, supplier data, financial and trading data, and data from internal business processes.
It has staff in Chicago and San Francisco and clients throughout the U.S. Open Data Group began operations in 2002.
Overview
The above example contains plots generated in R of scoring results from Augustus. Each point on the graph represents a use of the scoring engine and a chart is an aggregation of multiple Augustus runs. A Baseline (Change Detection) model was used to score data with multiple segments.
Typical Use
Augustus is typically used to construct models and score data with models. Augustus includes a dedicated application for creating, or producing, predictive models rendered as PMML-compliant files. Scoring is accomplished by consuming PMML-compliant files describing an appropriate model. Augustus provides a dedicated application for scoring data with four classes of models: Baseline (Change Detection) models, Tree models, Regression models, and Naive Bayes models. The typical model development and use cycle with Augustus is as follows:
Identify suitable data with which to construct a new model.
Provide a model schema which prescribes the requirements for the model.
Run the Augustus producer to obtain a new model.
Run the Augustus consumer on new data to effect scoring.
Separate consumer and producer applications are supplied for Baseline (Change Detection), Tree, Regression, and Naive Bayes models. The producer and consumer applications require configuration with XML-formatted files. The specification of the configuration files and model schema are detailed below. The consumers provide some configurability of the output, but users will often add post-processing to render the output according to their needs. A variety of mechanisms exist for transmitting data, but users may need to provide their own preprocessing to accommodate their particular data source.
In addition to the producer and consumer applications, Augustus is organized around, and ships with, libraries relevant to the development and use of predictive models. Broadly speaking, these consist of components that address the use of PMML and components that are specific to Augustus.
Post Processing
Augustus can accommodate a post-processing step. While not necessary, it is often useful to (a short R sketch follows this list):
Re-normalize the scoring results or perform an additional transformation.
Supplement the results with global metadata such as timestamps.
Format the results.
Select certain interesting values from the results.
Restructure the data for use with other applications.
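For illustration, here is a small R post-processing sketch along those lines; the input file and column names are hypothetical, not part of Augustus.

# Hypothetical post-processing of scoring output in R; the file name
# and the columns "score" and "segment" are assumptions for the sketch.
scores <- read.csv("augustus_scores.csv")

# Re-normalize the raw scores to the [0, 1] range
scores$score <- (scores$score - min(scores$score)) / diff(range(scores$score))

# Supplement the results with a global timestamp
scores$run_at <- format(Sys.time(), "%Y-%m-%d %H:%M:%S")

# Select interesting values and restructure for a downstream application
alerts <- scores[scores$score > 0.95, c("segment", "score", "run_at")]
write.csv(alerts, "alerts.csv", row.names = FALSE)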
XBRL is a member of the family of languages based on XML, or Extensible Markup Language, which is a standard for the electronic exchange of data between businesses and on the internet. Under XML, identifying tags are applied to items of data so that they can be processed efficiently by computer software.
XBRL is a powerful and flexible version of XML which has been defined specifically to meet the requirements of business and financial information. It enables unique identifying tags to be applied to items of financial data, such as ‘net profit’. However, these are more than simple identifiers. They provide a range of information about the item, such as whether it is a monetary item, percentage or fraction. XBRL allows labels in any language to be applied to items, as well as accounting references or other subsidiary information.
XBRL can show how items are related to one another. It can thus represent how they are calculated. It can also identify whether they fall into particular groupings for organisational or presentational purposes. Most importantly, XBRL is easily extensible, so companies and other organisations can adapt it to meet a variety of special requirements.
The rich and powerful structure of XBRL allows very efficient handling of business data by computer software. It supports all the standard tasks involved in compiling, storing and using business data. Such information can be converted into XBRL by suitable mapping processes or generated in XBRL by software. It can then be searched, selected, exchanged or analysed by computer, or published for ordinary viewing.
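As a toy illustration of software handling tagged data, the R sketch below parses a single XBRL-style item with the CRAN XML package; the element and attribute names are invented for illustration, not taken from a real XBRL taxonomy.

# Toy illustration only: a made-up XBRL-style tagged item, parsed with
# the CRAN 'XML' package. Real XBRL uses taxonomy-defined elements.
library(XML)

doc  <- xmlParse('<item name="NetProfit" unitRef="USD" decimals="2">1234567.89</item>',
                 asText = TRUE)
node <- xmlRoot(doc)

xmlGetAttr(node, "name")     # which concept the value represents
xmlGetAttr(node, "unitRef")  # metadata travels with the value (monetary unit)
as.numeric(xmlValue(node))   # the value itself, ready for computation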
With more than 7,000 new U.S. companies facing extensible business reporting language (XBRL) filing mandates in 2011, Oracle has released a free XBRL extension on top of the latest release of Oracle Database.
Oracle’s XBRL extension leverages Oracle Database 11g Release 2 XML to manage the collection, validation, storage, and analysis of XBRL data. It enables organizations to create one or more back-end XBRL repositories based on Oracle Database, providing secure XBRL storage and query-ability with a set of XBRL-specific services.
In addition, the extension integrates easily with Oracle Business Intelligence Suite Enterprise Edition to provide analytics, plus interactive development environments (IDEs) and design tools for creating and editing XBRL taxonomies.
The Other Side of XBRL
“While the XBRL mandate continues to grow, the feedback we keep hearing from the ‘other side’ of XBRL—regulators, academics, financial analysts, and investors—is that they lack sufficient tools and historic data to leverage the full potential of XBRL,” says John O’Rourke, vice president of product marketing, Oracle.
However, O’Rourke says this is quickly changing as XBRL mandates enter their third year—and more and more companies have to comply. While the new extension should be attractive to organizations that produce XBRL filings, O’Rourke expects it will prove particularly valuable to regulators, stock exchanges, universities, and other organizations that need to collect, analyze, and disseminate XBRL-based filings.
Outsourcing, a Bolt-on Solution, or Integrated XBRL Tagging
Until recently, reporting organizations had to choose between expensive third-party outsourcing or manual, in-house tagging with bolt-on solutions, both of which introduce the possibility of error.
In response, Oracle launched Oracle Hyperion Disclosure Management, which provides an XBRL tagging solution that is integrated with the financial close and reporting process for fast and reliable XBRL report submission—without relying on third-party providers. The solution enables organizations to
Author regulatory filings in Microsoft Office and “hot link” them directly to financial reporting systems so they can be easily updated
Graphically perform XBRL tagging at several levels—within Microsoft Office, within EPM system reports, or in the data source metadata
Modify or extend XBRL taxonomies before the mapping process, as well as set up multiple taxonomies
Create and validate final XBRL instance documents before submission
CHANGES IN R VERSION 2.12.2: http://cran.r-project.org/src/base/NEWS
SIGNIFICANT USER-VISIBLE CHANGES:
• Complex arithmetic (notably z^n for complex z and integer n) gave
incorrect results since R 2.10.0 on platforms without C99 complex
support. This and some lesser issues in trigonometric functions
have been corrected.
Such platforms were rare (we know of Cygwin and FreeBSD).
However, because of new compiler optimizations in the way complex
arguments are handled, the same code was selected on x86_64 Linux
with gcc 4.5.x at the default -O2 optimization (but not at -O).
• There is a workaround for crashes seen with several packages on
systems using zlib 1.2.5: see the INSTALLATION section.
NEW FEATURES:
• PCRE has been updated to 8.12 (two bug-fix releases since 8.10).
• rep(), seq(), seq.int() and seq_len() report more often when the
first element is taken of an argument of incorrect length.
• The Cocoa back-end for the quartz() graphics device on Mac OS X
provides a way to disable event loop processing temporarily
(useful, e.g., for forked instances of R).
• kernel()'s default for m was not appropriate if coef was a set of
coefficients. (Reported by Pierre Chausse.)
• bug.report() has been updated for the current R bug tracker,
which does not accept emailed submissions.
• R CMD check now checks for the correct use of $(LAPACK_LIBS) (as
well as $(BLAS_LIBS)), since several recent CRAN submissions have
ignored ‘Writing R Extensions’.
INSTALLATION:
• The zlib sources in the distribution are now built with all
symbols remapped: this is intended to avoid problems seen with
packages such as XML and rggobi which link to zlib.so.1 on
systems using zlib 1.2.5.
• The default for FFLAGS and FCFLAGS with gfortran on x86_64 Linux
has been changed back to -g -O2: however, setting -g -O may still
be needed for gfortran 4.3.x.
PACKAGE INSTALLATION:
• A LazyDataCompression field in the DESCRIPTION file will be used
to set the value for the --data-compress option of R CMD INSTALL.
• Files R/sysdata.rda of more than 1Mb are now stored in the
lazyload database using xz compression: this for example halves
the installed size of package Imap.
• R CMD INSTALL now ensures that directories installed from inst
have search permission for everyone.
It no longer installs files inst/doc/Rplots.ps and
inst/doc/Rplots.pdf. These are almost certainly left-overs from
Sweave runs, and are often large.
DEPRECATED & DEFUNCT:
• The ‘experimental’ alternative specification of a name space via
.Export() etc is now deprecated.
• zip.file.extract() is now deprecated.
• Zip-ing data sets in packages (and hence R CMD INSTALL
--use-zip-data and the ZipData: yes field in a DESCRIPTION file)
is deprecated: using efficiently compressed .rda images and
lazy-loading of data has superseded it.
BUG FIXES:
• identical() could in rare cases generate a warning about
non-pairlist attributes on CHARSXPs. As these are used for
internal purposes, the attribute check should be skipped.
(Reported by Niels Richard Hansen).
• If the filename extension (usually .Rnw) was not included in a
call to Sweave(), source references would not work properly and
the keep.source option failed. (PR#14459)
• format.data.frame() now keeps zero character column names.
• pretty(x) no longer raises an error when x contains solely
non-finite values. (PR#14468)
• The plot.TukeyHSD() function now uses a line width of 0.5 for its
reference lines rather than lwd = 0 (which caused problems for
some PDF and PostScript viewers).
• The big.mark argument to prettyNum(), format(), etc. was inserted
reversed if it was more than one character long.
• R CMD check failed to check the filenames under man for Windows'
reserved names.
• The "Date" and "POSIXt" methods for seq() could overshoot when to
was supplied and by was specified in months or years.
• The internal method of untar() now restores hard links as file
copies rather than symbolic links (which did not work for
cross-directory links).
• unzip() did not handle zip files which contained filepaths with
two or more leading directories which were not in the zipfile and
did not already exist. (It is unclear if such zipfiles are valid
and the third-party C code used did not support them, but
PR#14462 created one.)
• combn(n, m) now behaves more regularly for the border case m = 0.
(PR#14473)
• The rendering of numbers in plotmath expressions (e.g.
expression(10^2)) used the current settings for conversion to
strings rather than setting the defaults, and so could be
affected by what has been done before. (PR#14477)
• The methods of napredict() and naresid() for na.action =
na.exclude fits did not work correctly in the very rare event
that every case had been omitted in the fit. (Reported by Simon
Wood.)
• weighted.residuals(drop0=TRUE) returned a vector when the
residuals were a matrix (e.g. those of class "mlm"). (Reported
by Bill Dunlap.)
• Package HTML index files /html/00Index.html were generated
with a stylesheet reference that was not correct for static
browsing in libraries.
• ccf(na.action = na.pass) was not implemented.
• The parser accepted some incorrect numeric constants, e.g. 20x2.
(Reported by Olaf Mersmann.)
• format(*, zero.print) did not always replace the full zero parts.
• Fixes for subsetting or subassignment of "raster" objects when
not both i and j are specified.
• R CMD INSTALL was not always respecting the ZipData: yes field of
a DESCRIPTION file (although this is frequently incorrectly
specified for packages with no data or which specify lazy-loading
of data).
R CMD INSTALL --use-zip-data was incorrectly implemented as
--use-zipdata since R 2.9.0.
• source(file, echo=TRUE) could fail if the file contained #line
directives. It now recovers more gracefully, but may still
display the wrong line if the directive gives incorrect
information.
• atan(1i) returned NaN+Infi (rather than 0+Infi) on platforms
without C99 complex support.
• library() failed to cache S4 metadata (unlike loadNamespace())
causing failures in S4-using packages without a namespace (e.g.
those using reference classes).
• The function qlogis(lp, log.p=TRUE) no longer prematurely
overflows to Inf when exp(lp) is close to 1.
• Updating S4 methods for a group generic function requires
resetting the methods tables for the members of the group (patch
contributed by Martin Morgan).
• In some circumstances (including for package XML), R CMD INSTALL
installed version-control directories from source packages.
• Added PROTECT calls to some constructed expressions used in C
level eval calls.
• utils:::create.post() (used by bug.report() and help.request())
failed to quote arguments to the mailer, and so often failed.
• bug.report() was naive about how to extract maintainer email
addresses from package descriptions, so would often try mailing
to incorrect addresses.
• debugger() could fail to read the environment of a call to a
function with a ... argument. (Reported by Charlie Roosen.)
• prettyNum(c(1i, NA), drop0=TRUE) or str(NA_complex_) now work
correctly.
Here is a short list of resources and material I put together as starting points for R and Cloud Computing. It’s a bit messy, but overall it should serve quite comprehensively.
Cloud computing is a commonly used expression for a generational change in computing, from desktops and servers to massive remote computing resources and shared machines, enabled by high bandwidth across the internet.
As per the National Institute of Standards and Technology Definition,
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Rweb is developed and maintained by Jeff Banfield. The Rweb Home Page provides access to all three versions of Rweb—a simple text entry form that returns output and graphs, a more sophisticated JavaScript version that provides a multiple window environment, and a set of point and click modules that are useful for introductory statistics courses and require no knowledge of the R language. All of the Rweb versions can analyze Web accessible datasets if a URL is provided.
The paper “Rweb: Web-based Statistical Analysis”, providing a detailed explanation of the different versions of Rweb and an overview of how Rweb works, was published in the Journal of Statistical Software (http://www.jstatsoft.org/v04/i01/).
Ulf Bartel has developed R-Online, a simple on-line programming environment for R which intends to make the first steps in statistical programming with R (especially with time series) as easy as possible. There is no need for a local installation since the only requirement for the user is a JavaScript capable browser. See http://osvisions.com/r-online/ for more information.
Rcgi is a CGI WWW interface to R by MJ Ray. It had the ability to use “embedded code”: you could mix user input and code, allowing the HTML author to do anything from load in data sets to enter most of the commands for users without writing CGI scripts. Graphical output was possible in PostScript or GIF formats and the executed code was presented to the user for revision. However, it is not clear if the project is still active.
Currently, a modified version of Rcgi by Mai Zhou (actually, two versions: one with (bitmap) graphics and one without) as well as the original code are available from http://www.ms.uky.edu/~statweb/.
David Firth has written CGIwithR, an R add-on package available from CRAN. It provides some simple extensions to R to facilitate running R scripts through the CGI interface to a web server, and allows submission of data using both GET and POST methods. It is easily installed using Apache under Linux and in principle should run on any platform that supports R and a web server provided that the installer has the necessary security permissions. David’s paper “CGIwithR: Facilities for Processing Web Forms Using R” was published in the Journal of Statistical Software (http://www.jstatsoft.org/v08/i10/). The package is now maintained by Duncan Temple Lang and has a web page at http://www.omegahat.org/CGIwithR/.
Rpad, developed and actively maintained by Tom Short, provides a sophisticated environment which combines some of the features of the previous approaches with quite a bit of JavaScript, allowing for a GUI-like behavior (with sortable tables, clickable graphics, editable output), etc.
Jeff Horner is working on the R/Apache Integration Project which embeds the R interpreter inside Apache 2 (and beyond). A tutorial and presentation are available from the project web page at http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/RApacheProject.
Rserve is a project actively developed by Simon Urbanek. It implements a TCP/IP server which allows other programs to use facilities of R. Clients are available from the web site for Java and C++ (and could be written for other languages that support TCP/IP sockets).
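For flavor, a minimal client round-trip to an Rserve daemon from another R session might look like the sketch below; the client function names follow the R-side client bundled with the Rserve package and may differ across versions, so treat this as an assumption-laden sketch.

# A minimal Rserve round-trip, assuming a daemon was started elsewhere
# with Rserve(); client function names follow the R client bundled with
# the Rserve package and may vary between releases.
library(Rserve)

conn <- RSconnect(host = "localhost", port = 6311)  # 6311 is the default port
RSeval(conn, quote(mean(rnorm(100))))               # evaluate remotely, get the result back
RSclose(conn)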
OpenStatServer is being developed by a team led by Greg Warnes; it aims “to provide clean access to computational modules defined in a variety of computational environments (R, SAS, Matlab, etc) via a single well-defined client interface” and to turn computational services into web services.
Two projects use PHP to provide a web interface to R. R_PHP_Online by Steve Chen (though it is unclear if this project is still active) is somewhat similar to the above Rcgi and Rweb. R-php is actively developed by Alfredo Pontillo and Angelo Mineo and provides both a web interface to R and a set of pre-specified analyses that need no R code input.
webbioc is “an integrated web interface for doing microarray analysis using several of the Bioconductor packages” and is designed to be installed at local sites as a shared computing resource.
Rwui is a web application to create user-friendly web interfaces for R scripts. All code for the web interface is created automatically. There is no need for the user to do any extra scripting or learn any new scripting techniques. Rwui can also be found at http://rwui.cryst.bbk.ac.uk.
Finally, the R.rsp package by Henrik Bengtsson introduces “R Server Pages”. Analogous to Java Server Pages, an R server page is typically HTML with embedded R code that gets evaluated when the page is requested. The package includes an internal cross-platform HTTP server implemented in Tcl, so provides a good framework for including web-based user interfaces in packages. The approach is similar to the use of the brew package with Rapache, with the advantage of cross-platform support and easy installation.
Remote access to R/Bioconductor on EBI’s 64-bit Linux Cluster
Start the workbench by downloading the package for your operating system (Macintosh or Windows), or via Java Web Start, and you will get access to an instance of R running on one of EBI’s powerful machines. You can install additional packages, upload your own data, work with graphics and collaborate with colleagues, all as if you are running R locally, but unlimited by your machine’s memory, processor or data storage capacity.
Most up-to-date R version built for multicore CPUs
Access to all Bioconductor packages
Access to our computing infrastructure
Fast access to data stored in EBI’s repositories (e.g., public microarray data in ArrayExpress)
Using R with Google Docs (http://www.omegahat.org/RGoogleDocs/run.pdf): RGoogleDocs uses the XML and RCurl packages and illustrates that it is relatively quick and easy to use their primitives to interact with web services.
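A hedged sketch of what a session with RGoogleDocs looked like follows; the function names track the package's documented examples, and the account details are placeholders.

# Sketch of listing documents with RGoogleDocs; function names follow
# the package's documented examples and the credentials are placeholders.
library(RGoogleDocs)

auth <- getGoogleAuth("you@example.com", "your-password", service = "writely")
con  <- getGoogleDocsConnection(auth)
docs <- getDocs(con)   # a list of document metadata objects
names(docs)            # titles of your documents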
Amazon’s EC2 is a type of cloud that provides on-demand computing infrastructure in the form of Amazon Machine Images, or AMIs. In general, these types of cloud provide several benefits:
Simple and convenient to use. An AMI contains your applications, libraries, data and all associated configuration settings. You simply access it. You don’t need to configure it. This applies not only to applications like R, but also can include any third-party data that you require.
On-demand availability. AMIs are available over the Internet whenever you need them. You can configure the AMIs yourself without involving the service provider. You don’t need to order any hardware and set it up.
Elastic access. With elastic access, you can rapidly provision and access the additional resources you need. Again, no human intervention from the service provider is required. This type of elastic capacity can be used to handle surge requirements when you might need many machines for a short time in order to complete a computation.
Pay per use. The cost of 1 AMI for 100 hours and 100 AMI for 1 hour is the same. With pay per use pricing, which is sometimes called utility pricing, you simply pay for the resources that you use.
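The pay-per-use point is easy to check with back-of-the-envelope arithmetic in R; the hourly rate below is an assumed illustrative figure, not a quoted AWS price.

# Utility pricing: you pay for instance-hours, however you slice them.
rate <- 0.10       # assumed illustrative hourly rate in USD
1   * 100 * rate   # 1 machine for 100 hours  -> 10
100 * 1   * rate   # 100 machines for 1 hour  -> 10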
# This example requires that you have previously created a bucket named data_language on your Google Storage and uploaded a CSV file named language_id.txt (your data) into this bucket – see for details
library(predictionapirwrapper)
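For context, the training call in the wrapper's early published examples looked roughly like the sketch below; treat the function name and its arguments as assumptions, since the wrapper went through several renames.

# Rough sketch of a training call against the Google Prediction API;
# the function name PredictionApiTrain and the gs:// URI style follow
# early published examples and should be treated as assumptions.
my.model <- PredictionApiTrain(data = "gs://data_language/language_id.txt")
# Once trained, the model can be queried with new text to get back a
# predicted language label (the predict-style call is omitted here).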
Elastic-R is a new portal built using the Biocep-R platform. It enables statisticians, computational scientists, financial analysts, educators and students to use cloud resources seamlessly; to work with R engines and use their full capabilities from within simple browsers; to collaborate, share and reuse functions, algorithms, user interfaces, R sessions, servers; and to perform elastic distributed computing with any number of virtual machines to solve computationally intensive problems.
Also see Karim Chine’s http://biocep-distrib.r-forge.r-project.org/
R for Salesforce.com
At the time of writing, there seem to be zero R-based apps on Salesforce.com. This could be a big opportunity for developers, as both Apex and R have similar structures. Developers could write free code in R and charge for their translated version in Apex on Salesforce.com.
Force.com and Salesforce have many (1,009) apps at http://sites.force.com/appexchange/home for cloud computing for businesses, but very few forecasting and statistical simulation apps. These are like iPhone apps except meant for business purposes. (I am unaware of any university offering salesforce.com integration, though Google Apps and Amazon related research seems to be ongoing.)
Personal note: mentioning SAS in an email to an R list is a big no-no in terms of getting a response and love. The same goes for being careless about which R help list to email (R-devel, R-packages, or R-help).
Here is the brand new release from Jaspersoft at a groovy price of $9,000. Somebody stop these guys!
It’s a great company to watch for buyouts as well, given their expertise in REPORTING and their clientele, especially for anyone looking to improve their standing in both the open source world and reporting software branding.
Webinar: Introducing JasperReports Server Professional
Thursday October 14
In this live webinar, learn how a new solution from Jaspersoft combines the world’s favorite reporting server with powerful, mature report server functionality—for about 80% less.
The World’s Most Powerful and Affordable Reporting Server
Limited Time Introductory Offer: Starting from $9,000 (restrictions apply)
JasperReports Server is the recommended product for organizations requiring an affordable reporting solution for interactive, operational, and production-based reporting. Deployed as a standalone reporting server or integrated inside another application, JasperReports Server is a flexible, powerful, interactive reporting environment for small or large enterprises.
Powered by the world’s most popular reporting tools in JasperReports and iReport, developers and users can take advantage of more interactivity, security, and scheduling of their reports.
Key Benefits:
Affordable: Unlimited reports for unlimited users starting at $9,000
Powerful: Report scheduling and distribution to 1,000s of users on a single server
Flexible: Web service architecture simplifies application integration
PALO ALTO, Calif., Sept. 20 — Revolution Analytics, the leading commercial provider of software and support for the popular open source R statistics language, today announced it will deliver Revolution R Enterprise for Microsoft Windows HPC Server 2008 R2, released today, enabling users to analyze very large data sets in high-performance computing environments.
R is a powerful open source statistics language and the modern system for predictive analytics. Revolution Analytics recently introduced RevoScaleR, new “Big Data” analysis capabilities, to its R distribution, Revolution R Enterprise. RevoScaleR solves the performance and capacity limitations of the R language with parallelized algorithms that stream data across multiple cores on a laptop, workstation or server. Users can now process, visualize and model terabyte-class data sets at top speeds — without the need for specialized hardware.
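To give a flavor of the RevoScaleR approach, here is a sketch of chunked, out-of-memory analysis; the function names follow Revolution's published materials and may differ by version, so treat this as an assumption-laden sketch rather than a definitive example.

# Sketch of streaming "Big Data" analysis with RevoScaleR; function
# names follow Revolution's published materials and may vary by version.
library(RevoScaleR)

# Convert a large CSV into the XDF binary format, which is read in chunks
rxTextToXdf(inFile = "big.csv", outFile = "big.xdf", overwrite = TRUE)

# Fit a linear model by streaming the XDF file block by block, so the
# full data set never has to fit in memory at once
fit <- rxLinMod(y ~ x1 + x2, data = "big.xdf")
summary(fit)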
“Revolution Analytics is pleased to support Microsoft’s Technical Computing initiative, whose efforts will benefit scientists, engineers and data analysts,” said David Champagne, CTO at Revolution. “We believe the engineering we have done for Revolution R Enterprise, in particular our work on big-data statistics and multicore computing, along with Microsoft’s HPC platform for technical computing, makes an ideal combination for high-performance large scale statistical computing.”
“Processing and analyzing this ‘big data’ is essential to better prediction and decision making,” said Bill Hamilton, director of technical computing at Microsoft Corp. “Revolution R Enterprise for Windows HPC Server 2008 R2 gives customers an extremely powerful tool that handles analysis of very large data and high workloads.”
REvolution R Enterprise is designed for both novice and experienced R users looking for a production-grade R distribution to perform mission-critical predictive analytics tasks right from the desktop and scale across multiprocessor environments. It features RPE™, REvolution’s R Productivity Environment for Windows.
Of course, R Enterprise is available on Linux, but only on Red Hat Enterprise Linux; it would be nice to see Amazon Machine Images as well as Ubuntu versions.
Like all virtual appliances, the main component of an AMI is a read-only filesystem image which includes an operating system (e.g., Linux, UNIX, or Windows) and any additional software required to deliver a service or a portion of it.[2]
The AMI filesystem is compressed, encrypted, signed, split into a series of 10MB chunks and uploaded into Amazon S3 for storage. An XML manifest file stores information about the AMI, including name, version, architecture, default kernel id, decryption key and digests for all of the filesystem chunks.
An AMI does not include a kernel image, only a pointer to the default kernel id, which can be chosen from an approved list of safe kernels maintained by Amazon and its partners (e.g., RedHat, Canonical, Microsoft). Users may choose kernels other than the default when booting an AMI.[3]
Paid: a for-pay AMI image that is registered with Amazon DevPay and can be used by anyone who subscribes to it. DevPay allows developers to mark up Amazon’s usage fees and optionally add monthly subscription fees.