The internet is a ponderful and wonderful place for serendipity
Tobias- I discovered the Free Software Foundation while still at university and spent wonderful evenings configuring my GNU/Linux system and reading RMS essays. For the statistics classes proprietary software was proposed, and that was obviously not an option, so I started tackling all problems using R, which was at the time (around 2000) still an underdog together with PSPP (a command-line SPSS clone) and XLISP-STAT. From that moment on, I decided that R was the hammer and all problems to be solved were nails ;-) In my early career I worked as a statistician / data miner for a general consulting company, which gave me the opportunity to bring R into Fortune 500 companies and to learn what was needed to support its use in an enterprise context. In 2008 I founded Open Analytics to turn these lessons into practice, and we started building tools to support the data analysis process using R. The first big project was Architect, which started as an Eclipse-based R IDE but has evolved more and more into an IDE for data science generally. In parallel we started working on infrastructure to automate R-based analyses and to plug R (and therefore statistical logic) into larger business processes, and soon we had a tool suite to cover the needs of industry.
Tobias- RSB stands for the R Service Bus. It is communication middleware and a work manager for R jobs: it allows users to trigger R jobs and receive their results over a plethora of protocols such as RESTful web services, e-mail protocols, SFTP, folder polling, etc. The idea is to enable people to push a button (or software to make a request) and receive automated R-based analysis results or reports for their data.
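As an illustration (mine, not from the interview): triggering an R job over RSB's REST protocol might look roughly like this from an R client using the httr package. The host, path, and payload below are hypothetical placeholders; see the RSB documentation for the real API.

```r
# Hypothetical example of submitting a job to an RSB-style REST endpoint.
# The host, path, application name, and payload structure are assumptions.
library(httr)

resp <- POST(
  "http://rsb.example.com/rsb/api/rest/jobs",   # hypothetical RSB endpoint
  body   = list(application = "toxReport",      # hypothetical application name
                dataset     = "batch-42.csv"),
  encode = "json"
)
stop_for_status(resp)
content(resp)   # the analysis result, or a job id to poll for it
```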
Tobias- RSB started when automating toxicological analyses in the pharmaceutical industry in collaboration with Philippe Lamote. Together with David Dossot, an exceptional software architect in Vancouver, we decided to cleanly separate concerns, namely to separate the integration layer (RSB) from the statistical layer (R) and, likewise, from the application layer. As a result, arbitrary R code can be run via RSB, and any client application can interact with RSB as long as it speaks one of the many supported protocols. This fundamental design principle sets us apart from alternative solutions, where statistical logic and integration logic are always somehow interwoven, which results in maintenance and integration headaches. One of the challenges has been to keep the focus on the core task of automating statistical analyses and not deviate into features that would turn RSB into a tool for interacting with an R session, which deserves an entirely different approach.
Tobias- From a freedom perspective, cloud computing and the SaaS model are often a step backwards, but in our own practice we obviously follow our customers' needs and offer RSB hosting from our data centers as well. Our other products, e.g. the R IDE Architect, are also ready for the cloud and for use on servers via Architect Server. As far as R itself is concerned in relation to cloud computing, I foresee its use increasing. At Open Analytics we see growing demand for R-based statistical engines that power web applications living in the cloud.
Tobias- RSB 6.0 is all about large-scale production environments and strong security. It kicked off on a project where RSB was responsible for serving 8,500 predictions per second. Such large-scale production deployments of RSB motivated the development of a series of features. First of all, RSB was made lightning fast: we achieved a full round trip from REST call to prediction in 7 ms on the mentioned use case. To allow for high throughput, RSB also gained a synchronous API (RSB 5.0 had an asynchronous API only). Another new feature is client-side connection pooling to the pool manager of R processes that are ready to serve RSB. Besides speed, this type of production environment also needs monitoring and resilience in case of issues. For the monitoring, we made sure that everything is in place for monitoring and remotely querying not only the RSB application itself but also the pool of R processes managed by RServi.
(Note from Ajay- RJ is an open source library providing tools and interfaces to integrate R in Java applications. The RJ project also provides a pool of R engines, easy to set up and manage via a web interface or JMX. One or multiple clients can borrow the R engines (called RServi); see http://www.walware.de/it/rj/ and https://github.com/walware/rj-servi)
Also, we now allow users to define node validation strategies to check that R nodes are still functioning properly. If not, the nodes are killed, and new nodes are started and added to the pool. In terms of security, we are now able to cover a very wide spectrum of authentication and authorization. We have machines up and running using OpenID, basic HTTP authentication, LDAP, SSL client certificates, etc., serving everyone from the individual user who is happy with OpenID authentication for his RSB app to large investment banks with very strong security requirements. The next step is tighter integration with Architect, such that people can release new RSB applications without leaving the IDE.
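To make the node-validation idea concrete, here is a rough sketch (my own analogy, not RSB's or RServi's actual code) of the general pattern: probe each R worker with a trivial expression and replace the ones that fail, written against base R's parallel package.

```r
# Generic health-check pattern for a pool of R workers, sketched with the
# base 'parallel' package; RSB/RServi implement this differently (via RJ).
library(parallel)

make_node  <- function() makeCluster(1)   # one R worker process
node_alive <- function(node) {
  # A node is "healthy" if it can still evaluate a trivial expression.
  tryCatch(clusterEvalQ(node, 1 + 1)[[1]] == 2, error = function(e) FALSE)
}

pool <- replicate(4, make_node(), simplify = FALSE)

# Validation sweep: kill unresponsive nodes and replace them.
pool <- lapply(pool, function(node) {
  if (node_alive(node)) return(node)
  try(stopCluster(node), silent = TRUE)   # kill the dead node
  make_node()                             # start a fresh one for the pool
})
```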
Tobias- I do not feel qualified to answer such a question, since I founded a single company in Antwerp, Belgium. That being said, Belgium is great! :-)
Tobias- Free software. Free as in beer and as in free speech!
Tobias- Open source is a global ecosystem and crosses oceans very easily. Dries Buytaert started Drupal in Belgium and now operates from the US, interacting with a global community. From a business perspective, there are as many open source models as there are open source companies. I noticed that the major US R companies (Revolution Analytics and RStudio) cherished the open source philosophy initially, but both drifted into models combining open source and proprietary components. At Open Analytics, there are only open source products, and enterprise customers have access to exactly the same functionality as a student in a developing country. That being said, I don't believe this is a matter of geography; it has more to do with the origins and different strategies of the companies.
Tobias- In a previous life, the athletics track helped keep my hands off the keyboard. Currently, my children find very effective ways to achieve similar goals.
Open Analytics is a consulting company specializing in statistical computing using open technologies. You can read more at http://www.openanalytics.eu
So I wanted to find out how cheap the cloud really is, but I got confused by the 23 kinds of instances that Amazon has (http://aws.amazon.com/ec2/pricing/) and the 15 kinds of instances at Google (https://developers.google.com/compute/pricing), and by wondering whether there is any price collusion between them ;)
Now Amazon also has spot pricing, so I can bid for prices (http://aws.amazon.com/ec2/purchasing-options/spot-instances/), and up to 60% off for reserved instances (http://aws.amazon.com/ec2/purchasing-options/reserved-instances/), but it charges an extra $2 per hour for dedicated instances (which are still billed pay as you go on top of that fee):
- $2 per hour – An additional fee is charged once per hour in which at least one Dedicated Instance of any type is running in a Region.
Google has sustained-use discounts (but will not offer Windows on its cloud!).
The table below describes the discount at each usage level. These discounts apply to all instance types.
| Usage Level (% of month) | % at which incremental usage is charged | Example incremental rate (USD/hour) for an n1-standard-1 instance |
|---|---|---|
| 0%-25% | 100% of base rate | $0.07 |
| 25%-50% | 80% of base rate | $0.056 |
| 50%-75% | 60% of base rate | $0.042 |
| 75%-100% | 40% of base rate | $0.028 |
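To see what these tiers mean in practice, here is a small R snippet (my own back-of-the-envelope calculation) that computes the blended monthly cost for an n1-standard-1 under these discounts, assuming roughly 730 hours in a month.

```r
# Blended cost of an n1-standard-1 under Google's sustained-use discounts,
# using the tier table above. hours_month = 730 is an approximation.
base_rate   <- 0.07
multipliers <- c(1.00, 0.80, 0.60, 0.40)  # charged fraction per 25% tier
hours_month <- 730

effective_cost <- function(usage_frac) {
  # hours of the month that fall into each 25% usage tier
  tier_hours <- pmin(pmax(usage_frac - c(0, 0.25, 0.50, 0.75), 0), 0.25) * hours_month
  sum(tier_hours * multipliers * base_rate)
}

effective_cost(1.0)  # ~$35.77 for a full month, i.e. ~30% below the base rate
effective_cost(0.5)  # ~$23.00 for half a month
```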
Anyway, I tried to create this simple table to help me with it. After all, hard disks are cheap; it is memory I want on the cloud!
Or maybe I am wrong and the cloud is not so cheap. Or it is just too complicated for someone to build a pricing calculator that can take in prices from all providers (Amazon, Azure, Google Compute) and show us the money!
| Instance | vCPU | RAM (GiB) | $ per Hour (Linux usage) | Type | Provider | Notes |
|---|---|---|---|---|---|---|
| t2.micro | 1 | 1 | $0.01 | General Purpose – Current Generation | Amazon (North Virginia) | Amazon also has spot instances that can lower prices |
| t2.small | 1 | 2 | $0.03 | General Purpose – Current Generation | Amazon (North Virginia) | |
| t2.medium | 2 | 4 | $0.05 | General Purpose – Current Generation | Amazon (North Virginia) | |
| m3.medium | 1 | 3.75 | $0.07 | General Purpose – Current Generation | Amazon (North Virginia) | |
| m3.large | 2 | 7.5 | $0.14 | General Purpose – Current Generation | Amazon (North Virginia) | |
| m3.xlarge | 4 | 15 | $0.28 | General Purpose – Current Generation | Amazon (North Virginia) | |
| m3.2xlarge | 8 | 30 | $0.56 | General Purpose – Current Generation | Amazon (North Virginia) | |
| c3.large | 2 | 3.75 | $0.11 | Compute Optimized – Current Generation | Amazon (North Virginia) | |
| c3.xlarge | 4 | 7.5 | $0.21 | Compute Optimized – Current Generation | Amazon (North Virginia) | |
| c3.2xlarge | 8 | 15 | $0.42 | Compute Optimized – Current Generation | Amazon (North Virginia) | |
| c3.4xlarge | 16 | 30 | $0.84 | Compute Optimized – Current Generation | Amazon (North Virginia) | |
| c3.8xlarge | 32 | 60 | $1.68 | Compute Optimized – Current Generation | Amazon (North Virginia) | |
| g2.2xlarge | 8 | 15 | $0.65 | GPU Instances – Current Generation | Amazon (North Virginia) | |
| r3.large | 2 | 15 | $0.18 | Memory Optimized – Current Generation | Amazon (North Virginia) | |
| r3.xlarge | 4 | 30.5 | $0.35 | Memory Optimized – Current Generation | Amazon (North Virginia) | |
| r3.2xlarge | 8 | 61 | $0.70 | Memory Optimized – Current Generation | Amazon (North Virginia) | |
| r3.4xlarge | 16 | 122 | $1.40 | Memory Optimized – Current Generation | Amazon (North Virginia) | |
| r3.8xlarge | 32 | 244 | $2.80 | Memory Optimized – Current Generation | Amazon (North Virginia) | |
| i2.xlarge | 4 | 30.5 | $0.85 | Storage Optimized – Current Generation | Amazon (North Virginia) | |
| i2.2xlarge | 8 | 61 | $1.71 | Storage Optimized – Current Generation | Amazon (North Virginia) | |
| i2.4xlarge | 16 | 122 | $3.41 | Storage Optimized – Current Generation | Amazon (North Virginia) | |
| i2.8xlarge | 32 | 244 | $6.82 | Storage Optimized – Current Generation | Amazon (North Virginia) | |
| hs1.8xlarge | 16 | 117 | $4.60 | Storage Optimized – Current Generation | Amazon (North Virginia) | |
| n1-standard-1 | 1 | 3.75 | $0.07 | Standard | Google (US) | Google charges per minute of usage (subject to a minimum of 10 minutes) |
| n1-standard-2 | 2 | 7.5 | $0.14 | Standard | Google (US) | |
| n1-highmem-2 | 2 | 13 | $0.16 | High Memory | Google (US) | |
| n1-highmem-4 | 4 | 26 | $0.33 | High Memory | Google (US) | |
| n1-highmem-8 | 8 | 52 | $0.66 | High Memory | Google (US) | |
| n1-highmem-16 | 16 | 104 | $1.31 | High Memory | Google (US) | |
| n1-highcpu-2 | 2 | 1.8 | $0.09 | High CPU | Google (US) | |
| n1-highcpu-4 | 4 | 3.6 | $0.18 | High CPU | Google (US) | |
| n1-highcpu-8 | 8 | 7.2 | $0.35 | High CPU | Google (US) | |
| n1-highcpu-16 | 16 | 14.4 | $0.70 | High CPU | Google (US) | |
| f1-micro | 1 | 0.6 | $0.01 | Shared Core | Google (US) | |
| g1-small | 1 | 1.7 | $0.04 | Shared Core | Google (US) | |
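And since it is memory I want, here is a quick back-of-the-envelope comparison, computed in R from the table above, of the price per GiB of RAM per hour for a few of these instances:

```r
# Price per GiB of RAM per hour, computed from the table above.
instances <- data.frame(
  name    = c("m3.large", "r3.large", "r3.8xlarge", "n1-standard-2", "n1-highmem-2"),
  ram_gib = c(7.5, 15, 244, 7.5, 13),
  usd_hr  = c(0.14, 0.18, 2.80, 0.14, 0.16)
)
instances$usd_per_gib_hr <- round(instances$usd_hr / instances$ram_gib, 4)
instances[order(instances$usd_per_gib_hr), ]  # cheapest RAM first (the r3 family wins)
```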
Here is a brief tutorial I wrote by playing with the software at manage.windowsazure.com.
Here is an interview with Louis Bajuk-Yorgan from TIBCO. TIBCO, which was the leading commercial vendor of S-PLUS, the precursor of the R language, makes a commercial enterprise version of R called TIBCO Enterprise Runtime for R (TERR). Louis also presented recently at useR! 2014: http://user2014.stat.ucla.edu/abstracts/talks/54_Bajuk-Yorgan.pdf
DecisionStats(DS)- How is TERR different from Revolution Analytics or Oracle R? How is it similar?
DS- How much of R is TERR compatible with?
DS- Describe TIBCO Cloud Compute Grid. What are its applications for data science?
DS- What advantages does TIBCO's rich history with the S project give it for the R project?
Lou- Our 20+ years of experience with S-PLUS gave us unique knowledge of the commercial applications of the S/R language, deep experience with architecting, extending, and maintaining a commercial S-language engine, strong ties to the R community, and a rich trove of algorithms we could apply in developing the TERR engine.
DS- Describe some benchmarks of TERR against open source R.
DS- TERR is not open source. Why is that?
DS- How is TIBCO as a company to work for, for potential data scientists?
DS- How is TIBCO giving back to the R community globally? What are its plans for the community?
DS- As a sixth-time attendee of useR!, describe the evolution of the R ecosystem as you have observed it.
Here is an interview with one of the greatest statisticians and educators of this generation, Prof. Jan de Leeuw. In this exclusive and free-wheeling interview, Prof. de Leeuw talks about the evolution of technology, education, and statistics, and generously shares nuggets of knowledge of interest to present and future statisticians.
DecisionStats(DS)- You have described the UCLA Department of Statistics as your magnum opus. Name a couple of turning points in your career which helped in this creation.
Jan de Leeuw (JDL) -From about 1980 until 1987 I was head of the Department of Data Theory at Leiden University. Our work there produced a large number of dissertations which we published using our own publishing company. I also became president of the Psychometric Society in 1987. These developments resulted in an invitation from UCLA to apply for the position of director of the interdepartmental program in social statistics, with a joint appointment in Mathematics and Psychology. I accepted the offer in 1987. The program eventually morphed into the new Department of Statistics in 1998.
DS- Describe your work with the Gifi software and nonlinear multivariate analysis.
JDL- I started working on NLMVA and MDS in 1968, while I was a graduate student researcher in the new Department of Data Theory. Joe Kruskal and Doug Carroll invited me to spend a year at Bell Labs in Murray Hill in 1973-1974. At that time I also started working with Forrest Young and his student Yoshio Takane. This led to the sequence of "alternating least squares" papers, mainly in Psychometrika. After I returned to Leiden we set up a group of young researchers, supported by NWO (the Dutch equivalent of the NSF) and by SPSS, to develop a series of Fortran programs for NLMVA and MDS.
In 1980 the group had grown to about 10-15 people, and we gave a successful postgraduate course on the "Gifi methods", which eventually became the 1990 Gifi book. By the time I left Leiden most people in the group had gone on to do other things, although I continued to work in the area with some graduate students from Leiden and Los Angeles. Then around 2010 I worked with Patrick Mair, a visiting scholar at UCLA, to produce the R packages smacof, anacor, homals, aspect, and isotone. Also see https://www.youtube.com/watch?v=u733Mf7jX24
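For readers who want to try these packages, here is a minimal example (mine, not from the interview) of SMACOF-based multidimensional scaling with the smacof package, using the eurodist distances that ship with base R; it assumes smacof is installed from CRAN.

```r
# Metric MDS with the smacof package on the built-in eurodist distances.
library(smacof)
fit <- smacofSym(eurodist, ndim = 2)  # majorization ("SMACOF") MDS in 2 dimensions
fit$stress                            # badness-of-fit of the configuration
plot(fit)                             # map-like configuration of the cities
```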
JDL- I started in 1968 with PL/I. Card decks had to be flown to Paris to be compiled and executed on the IBM/360 mainframes. Around the same time APL came up and satisfied my personal development needs, although of course APL code was difficult to communicate. It was even difficult to understand your own code after a week. We had APL symbol balls on the Selectric typewriters and APL symbols on the character terminals. The basic model was there: you develop in an interpreted language (APL) and then for production you use a compiled language (FORTRAN). Over the years APL was replaced by XLISP and then by R. Fortran was largely replaced by C; I never switched to C++ or Java. We discouraged our students from using SAS, SPSS, or MATLAB. UCLA Statistics promoted XLISP-STAT for quite a long time, but eventually we had to give it up. See http://www.stat.ucla.edu/~deleeuw/janspubs/2005/articles/deleeuw_A_05.pdf.
(In 1998 the UCLA Department of Statistics, which had been one of the major users of Lisp-Stat and one of the main producers of Lisp-Stat code, decided to switch to S/R. This paper discusses why this decision was made, and what the pros and cons were.)
Generally, there has never been a computational environment like R: so integrated with statistical practice and development, and so enormous, accessible, and democratic. I must admit I personally still prefer to use R as originally intended: as a convenient wrapper around, and command-line interface for, compiled libraries and code. But it is also great for rapid prototyping, and in that role it has changed the face of statistics.
The fact that you cannot really propose statistical computations without providing R references, and in many cases R code, has contributed a great deal to reproducibility and open access.
JDL- I really don’t know how to answer this. A life cannot be corrected, repeated, or relived.
DecisionStats(DS)- Describe your career journey from being a computer science student to one of the principal creators of RHadoop. What motivated you, and what challenges did you overcome? What were the turning points? (You have 3500+ citations. What are most of those citations regarding?)
DS- What do you think is the future of R as enterprise and research software in terms of computing on mobile, desktop, and cloud, and how do you see things evolving from here?