John Sall, co-founder of SAS and creator of JMP, has released the latest blockbuster edition of his flagship product, JMP 9 (JMP stands for John’s Macintosh Program).
To kill all birds with one software, it is integrated with both R and SAS, and the brochure frankly lists all the qualities. Why am I excited about JMP 9’s integration with R and with SAS? Well, it combines bigger-dataset manipulation (thanks to SAS) with R’s superb library of statistical packages and a great statistical GUI (JMP’s). This makes JMP the latest software, after SAS/IML, RapidMiner, KNIME, and Oracle Data Miner, to showcase its R integration (without getting into the GPL compliance need for showing source code: it does not ship R, and simply advises you to download R for free). I am sure Peter Dalgaard and Frank Harrell are overjoyed that R base and the Hmisc package will be used by fellow statisticians and students through JMP, which after all is made in the neighboring state of North Carolina.
Best of all, a 30-day JMP trial is free, so no money is lost if you download JMP 9 (and no, they don’t ask for your credit card number, though they do have a huge form to register before you download). Still, JMP 9 itself is more thoughtfully designed than the email-prospect-leads form, and the extra functionality in the free 30-day trial is worth it.
R is a programming language and software environment for statistical computing and graphics. JMP now supports a set of JSL functions to access R. The JSL functions provide the following options:
• open and close a connection between JMP and R
• exchange data between JMP and R
• submit R code for execution
• display graphics produced by R
JMP and R each have their own sets of computational methods.
R has some methods that JMP does not have. Using JSL functions, you can connect to R and use these R computational methods from within JMP.
Textual output and error messages from R appear in the log window. R must be installed on the same computer as JMP.
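As a sketch of how those JSL functions fit together (function names are from JMP 9’s JSL-to-R interface; the sample table, column, and variable names here are illustrative, so treat this as an outline rather than tested code):

```jsl
// Open a connection to the locally installed R
R Init();

// Send a JMP data table to R as a data frame named "dt"
dt = Open( "$SAMPLE_DATA/Big Class.jmp" );
R Send( dt );

// Run R code against it and pull the result back into JSL
R Submit( "m <- mean( dt$height )" );
avg = R Get( m );
Show( avg );

// Close the connection
R Term();
```

Any R textual output or errors produced along the way land in the JMP log window, as noted above.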
Though probably no one is creating a movie on JMP yet (imagine a movie titled “The Statistical Software”: not quite the same dude-feel as “The Social Network”).
A: Not at all. The Document Foundation will continue to be focused on developing, supporting, and promoting the same software, and it’s very much business as usual. We are simply moving to a new and more appropriate organisational model for the next decade – a logical development from Sun’s inspirational launch a decade ago.
Q: Why are you calling yourselves “The Document Foundation”?
A: For ten years we have used the same name – “OpenOffice.org” – for both the Community and the software. We’ve decided it removes ambiguity to have a different name for the two, so the Community is now “The Document Foundation”, and the software “LibreOffice”. Note: there are other examples of this usage in the free software community – e.g. the Mozilla Foundation with the Firefox browser.
Q: Does this mean you intend to develop other pieces of software?
A: We would like to have that possibility open to us in the future…
Q: And why are you calling the software “LibreOffice” instead of “OpenOffice.org”?
A: The OpenOffice.org trademark is owned by Oracle Corporation. Our hope is that Oracle will donate this to the Foundation, along with the other assets it holds in trust for the Community, in due course, once legal etc issues are resolved. However, we need to continue work in the meantime – hence “LibreOffice” (“free office”).
Q: Why are you building a new web infrastructure?
A: Since Oracle’s takeover of Sun Microsystems, the Community has been under “notice to quit” from our previous Collabnet infrastructure. With today’s announcement of a Foundation, we now have an entity which can own our emerging new infrastructure.
Q: What does this announcement mean to other derivatives of OpenOffice.org?
A: We want The Document Foundation to be open to code contributions from as many people as possible. We are delighted to announce that the enhancements produced by the Go-OOo team will be merged into LibreOffice, effective immediately. We hope that others will follow suit.
Q: What difference will this make to the commercial products produced by Oracle Corporation, IBM, Novell, Red Flag, etc?
A: The Document Foundation cannot answer for other bodies. However, there is nothing in the licence arrangements to stop companies continuing to release commercial derivatives of LibreOffice. The new Foundation will also mean companies can contribute funds or resources without worries that they may be helping a commercial competitor.
Q: What difference will The Document Foundation make to developers?
A: The Document Foundation sets out deliberately to be as developer friendly as possible. We do not demand that contributors share their copyright with us. People will gain status in our community based on peer evaluation of their contributions – not by who their employer is.
Q: What difference will The Document Foundation make to users of LibreOffice?
A: LibreOffice is The Document Foundation’s reason for existence. We do not have and will not have a commercial product which receives preferential treatment. We only have one focus – delivering the best free office suite for our users – LibreOffice.
—————————————————————————————————-
Non-Microsoft and non-Oracle vendors will indeed find useful the possibility of bundling a free LibreOffice that reduces the total cost of ownership of analytics software. Right now, some of the best free advertising for Microsoft’s OS and Office is done by enterprise software vendors who create Windows-only products and enable better integration with MS Office than with Open Office. This is done citing user demand, but it is a chicken-and-egg dilemma, as functionality leads to enhanced demand. Microsoft, on the other hand, is aware of this dependence and has built up SQL Server and SQL Analytics (besides investing in analytics startups like Revolution Analytics) along with its own infrastructure, the Azure Cloud Platform/EC2 instances.
Here is the brand-new release from Jaspersoft at a groovy price of $9,000. Somebody stop these guys!
It’s a great company to watch for buyouts as well, given its expertise in reporting and its clientele, especially for anyone looking to improve their standing in both the open source world and reporting-software branding.
Webinar: Introducing JasperReports Server Professional
Thursday October 14
In this live webinar, learn how a new solution from Jaspersoft combines the world’s favorite reporting server with powerful, mature report server functionality—for about 80% less.
The World’s Most Powerful and Affordable Reporting Server
Limited Time Introductory Offer: Starting from $9,000 (restrictions apply)
JasperReports Server is the recommended product for organizations requiring an affordable reporting solution for interactive, operational, and production-based reporting. Deployed as a standalone reporting server or integrated inside another application, JasperReports Server is a flexible, powerful, interactive reporting environment for small or large enterprises.
Powered by the world’s most popular reporting tools in JasperReports and iReport, developers and users can take advantage of more interactivity, security, and scheduling of their reports.
Key Benefits:
Affordable: Unlimited reports for unlimited users starting at $9,000
Powerful: Report scheduling and distribution to 1,000s of users on a single server
Flexible: Web service architecture simplifies application integration
New software has just been released by the guys in California (@RevolutionR), so if you are a Linux user with academic credentials you can download it for free (@Cmastication doesn’t) and test it to see what the big fuss is all about (also see http://www.revolutionanalytics.com/why-revolution-r/benchmarks.php):
Revolution Analytics has just released Revolution R Enterprise 4.0.1 for Red Hat Enterprise Linux, a significant step forward in enterprise data analytics. Revolution R Enterprise 4.0.1 is built on R 2.11.1, the latest release of the open-source environment for data analysis and graphics. Also available is the initial release of our deployment server solution, RevoDeployR 1.0, designed to help you deliver R analytics via the Web. And coming soon to Linux: RevoScaleR, a new package for fast and efficient multi-core processing of large data sets.
As a registered user of the Academic version of Revolution R Enterprise for Linux, you can take advantage of these improvements by downloading and installing Revolution R Enterprise 4.0.1 today. You can install Revolution R Enterprise 4.0.1 side-by-side with your existing Revolution R Enterprise installations; there is no need to uninstall previous versions.
Download Information
The following information is all you will need to download and install the Academic Edition.
Supported Platforms:
Revolution R Enterprise Academic edition and RevoDeployR are supported on Red Hat® Enterprise Linux® 5.4 or greater (64-bit processors).
Approximately 300MB free disk space is required for a full install of Revolution R Enterprise. We recommend at least 1GB of RAM to use Revolution R Enterprise.
For the full list of system requirements for RevoDeployR, refer to the RevoDeployR™ Installation Guide for Red Hat® Enterprise Linux®.
Download Links:
You will first need to download the Revolution R Enterprise installer.
Installation Instructions for Revolution R Enterprise Academic Edition
After downloading the installer, do the following to install the software:
Log in as root if you have not already.
Change directory to the directory containing the downloaded installer.
Unpack the installer using the following command:
tar -xzf Revo-Ent-4.0.1-RHEL5-desktop.tar.gz
Change directory to the RevolutionR_4.0.1 directory created.
Run the installer by typing ./install.py and following the on-screen prompts.
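The unpack-and-enter steps above can be sketched as a shell session. Since the real Revo-Ent-4.0.1-RHEL5-desktop.tar.gz requires registration to download (and ./install.py requires root), this sketch builds a stand-in archive with the same layout just to demonstrate the tar commands:

```shell
# Build a stand-in archive laid out like the downloaded installer
mkdir -p RevolutionR_4.0.1
printf 'print("installer placeholder")\n' > RevolutionR_4.0.1/install.py
tar -czf Revo-demo.tar.gz RevolutionR_4.0.1
rm -r RevolutionR_4.0.1

# Unpack the installer (step 3); this recreates RevolutionR_4.0.1/
tar -xzf Revo-demo.tar.gz

# Step 4 changes into that directory; step 5 would run ./install.py as root
ls RevolutionR_4.0.1/install.py
```

With the real archive, you would substitute Revo-Ent-4.0.1-RHEL5-desktop.tar.gz for the stand-in and run the installer as root.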
Getting Started with Revolution R Enterprise
After you have installed the software, launch Revolution R Enterprise by typing Revo64 at the shell prompt.
Documentation is available in the form of PDF documents installed as part of the Revolution R Enterprise distribution. Type Revo.home("doc") at the R prompt to locate the directory containing the manuals Getting Started with Revolution R (RevoMan.pdf) and the ParallelR User's Guide (parRman.pdf).
Installation Instructions for RevoDeployR (and RServe)
After downloading the RevoDeployR distribution, use the following steps to install the software:
Note: These instructions are for an automatic install. For more details or for manual install instructions, refer to RevoDeployR_Installation_Instructions_for_RedHat.pdf.
Log into the operating system as root.
su -
Change directory to the directory containing the downloaded distribution for RevoDeployR and RServe.
Unzip the contents of the RevoDeployR tar file. At the prompt, type:
tar -xzf deployrRedHat.tar.gz
Change directories. At the prompt, type:
cd installFiles
Launch the automated installation script and follow the on-screen prompts. At the prompt, type:
./installRedHat.sh
Note: Red Hat installs MySQL without a password.
Getting Started with RevoDeployR
After installing RevoDeployR, you will be directed to the RevoDeployR landing page. The landing page has links to documentation, the RevoDeployR management console, the API Explorer development tool, and sample code.
The simple R-benchmark-25.R test script is a quick-running survey of general R performance. The Community-developed test consists of three sets of small benchmarks, referred to in the script as Matrix Calculation, Matrix Functions, and Program Control.
Revolution Analytics has created its own tests to simulate common real-world computations; they are described below.
Linear Algebra Computation | Base R 2.9.2 | Revolution R (1-core) | Revolution R (4-core) | Speedup (4-core)
Matrix Multiply | 243 sec | 22 sec | 5.9 sec | 41x
Cholesky Factorization | 23 sec | 3.8 sec | 1.1 sec | 21x
Singular Value Decomposition | 62 sec | 13 sec | 4.9 sec | 12.6x
Principal Components Analysis | 237 sec | 41 sec | 15.6 sec | 15.2x
Linear Discriminant Analysis | 142 sec | 49 sec | 32.0 sec | 4.4x
Speedup = Slower time / Faster time
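As a quick sanity check, the speedup column can be recomputed from the elapsed times in the table, dividing the Base R time by the 4-core Revolution R time for each row:

```shell
# Recompute each speedup as (Base R elapsed) / (4-core Revolution R elapsed)
awk 'BEGIN {
  printf "Matrix Multiply: %.1fx\n", 243 / 5.9     # table reports 41x
  printf "Cholesky:        %.1fx\n",  23 / 1.1     # table reports 21x
  printf "SVD:             %.1fx\n",  62 / 4.9     # table reports 12.6x
  printf "PCA:             %.1fx\n", 237 / 15.6    # table reports 15.2x
  printf "LDA:             %.1fx\n", 142 / 32.0    # table reports 4.4x
}'
```

The SVD row comes out as 12.7x here rather than 12.6x, presumably because the published elapsed times are themselves rounded.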
Matrix Multiply
This routine creates a random uniform 10,000 x 5,000 matrix A, and then times the computation of the matrix product transpose(A) * A.
set.seed(1)
m <- 10000
n <- 5000
A <- matrix(runif(m*n), m, n)
system.time(B <- crossprod(A))
The system will respond with a message in this format:
   user  system elapsed
  37.22    0.40    9.68
The “elapsed” times indicate total wall-clock time to run the timed code.
The table above reflects the elapsed time for this and the other benchmark tests. The test system was an Intel® Xeon® 8-core CPU (model X55600) at 2.5 GHz with 18 GB of system RAM running the Windows Server 2008 operating system. For the Revolution R benchmarks, the computations were limited to 1 core and 4 cores by calling setMKLthreads(1) and setMKLthreads(4) respectively. Note that Revolution R performs very well even in single-threaded tests: this is a result of the optimized algorithms in the Intel MKL library linked into Revolution R. The slightly greater-than-linear speedup may be due to the greater total cache available to all CPU cores, or simply better OS CPU scheduling; no attempt was made to pin execution threads to physical cores. Consult Revolution R’s documentation to learn how to run benchmarks that use fewer cores than your hardware offers.
Cholesky Factorization
The Cholesky matrix factorization may be used to compute the solution of linear systems of equations with a symmetric positive definite coefficient matrix, to compute correlated sets of pseudo-random numbers, and other tasks. We re-use the matrix B computed in the example above:
system.time (C <- chol(B))
Singular Value Decomposition with Applications
The Singular Value Decomposition (SVD) is a numerically stable and very useful matrix decomposition. The SVD is often used to compute Principal Components and in Linear Discriminant Analysis.
# Singular Value Decomposition
m <- 10000
n <- 2000
A <- matrix(runif(m*n), m, n)
system.time(S <- svd(A, nu=0, nv=0))
# Principal Components Analysis
m <- 10000
n <- 2000
A <- matrix(runif(m*n), m, n)
system.time(P <- prcomp(A))
# Linear Discriminant Analysis
require('MASS')
g <- 5
k <- round(m/2)
A <- data.frame(A, fac=sample(LETTERS[1:g], m, replace=TRUE))
train <- sample(1:m, k)
system.time(L <- lda(fac ~ ., data=A, prior=rep(1,g)/g, subset=train))
Just go to Users > Personal Settings and check the options shown. That’s it: every time you write a post, it suggests links and tags. Links are helpful for your readers (like Wikipedia links to explain dense technical jargon, or associated websites). Tags help classify your content so that all visitors to the website, including spiders, search engines, and your readers, can search it better.
The bad thing is I need to go back to all 1,025 posts on this site and auto-generate tags for the archives! Oh well. Great collaboration between Zemanta and Automattic for this new feature.
Google Instant is a relatively new feature in the Google search engine: it suggests websites at each keystroke rather than waiting for you to type the whole keyword.
The impact on user experience is incredible. Rather than search through or scroll down the results, you are more likely to click on one of the roughly ten websites you will have seen by the time you finish typing, or just click on the relevant ad (which probably changes in the right margin as fast as the websites below it).
This spells death for all those who indulged in black-hat SEO, link building, and link exchanging, as these techniques pushed up your rank in the search page only incrementally and rarely into the top 2-3 for a keyword.
Remember, the size of the screen is such that each Google Instant snapshot basically makes you focus on the top-ranked search result (and then presumably type on to get a newer result, rather than scroll down as was the case before).
It would be interesting to see or research the effect on keyword auction pricing, as well as compare keyword pricing with Bing.com. Maybe there should be a website API tool for advertisers, like an “AdWords Instant”, that would instantly show them the price of keywords, a comparison with Bing, and the search engine results for the keyword in a visual way.
Anyway, it is an incredible innovation, and it is good that Google is back to the math after its flings with being the “Mad Men” of advertising.
and yes- I heard there is a new movie coming- it is called “The Search Engine” 🙂
Customizing your R software startup helps you do the following, and thus gets you to a working R session faster.
It automatically loads packages that you use regularly (like an R GUI: Deducer, Rattle, or R Commander), sets the CRAN mirror that you mostly use or that is nearest for downloading new packages, and sets some optional parameters. Instead of loading the same R packages, setting a CRAN mirror, and defining the same functions every time you start R, the user needs to do this just once by customizing the R profile site file. This is done by editing the $R_HOME/etc/Rprofile.site file to set a global default, or the .Rprofile file in your home directory for per-user settings.

There are two special functions you can customize in these files: .First() will be run at the start of the R session, and .Last() will be run when the R session is shutting down. When R starts up, it loads these profile files and executes the .First() function.

Where is the R profile site file? It is located in the \etc folder of the folder you installed R in. On Windows the folder will be of the format “C:\Program Files\R\R-x.ab.c\etc”, where x.ab.c is the R version number (like 2.11.1). For example:

.First <- function(){
  library(rattle)
  rattle()
  cat("\nHello World ", date(), "\n")
}

will automatically start the Rattle GUI for data mining and print “Hello World” with the date in your session.

You can also modify the Rcmd_environ file in the same \etc folder if you are particular about your settings:

## Default browser
R_BROWSER=${R_BROWSER-'C:\Documents and Settings\abc\Local Settings\Application Data\Google\Chrome\Application\chrome.exe'}
## Default editor
EDITOR=${EDITOR-'notepad++'}

will change the default web browser to Chrome and the default editor to Notepad++, which is an enhanced code editor.