R for Analytics is now live

Okay, over the weekend I created a website for a few of my favourite things.

It’s up at https://rforanalytics.wordpress.com/

Graphical User Interfaces for R

 

Jerry Rubin said: “Don’t trust anyone over thirty.”

I don’t trust anyone not using at least one R GUI. Here’s a list of the top 10.

 

Code Enhancers for R

Here is a list of the top 5 code enhancers and editors for R.

R Commercial Software

A list of companies making and selling R software and services. Hint- there are almost 5 of them (unless I missed someone).

R Graphs Resources

R’s famous graphing capabilities, and equally famous learning curve, can be made a bit more humane using some of these resources.

Internet Browsing

Because that’s what I do (all I do, as per my cat), and I am pretty good at it.

Using R from other Software

R can be used successfully from a lot of analytical software, including some surprising ones, thanks to its great library of almost 3000 packages.

(To be continued- as I find more stuff I will add it there. Some ideas- database access from R, prominent R consultants, prominent R packages, famous R interviewees 😉 )

PS- The quote from Jerry Rubin only seems funny for a while. I turn 34 this year.

Choosing R for business – What to consider?

[Image: a composite of the GNU logo and the OSI logo, via Wikipedia]

Additional features in R over other analytical packages-

1) Source code is provided, allowing a completely custom solution and embedding for a particular application. Open source code has the advantage of being extensively peer reviewed in journals and scientific literature. This means bugs will be found, shared and corrected transparently.

2) A wide literature of training material, in the form of books, is available for the R analytical platform.

3) Arguably the best data visualization tools among analytical software (apart from Tableau Software’s latest version). R’s data visualization takes the form of a wide variety of customizable graphs as well as animation. The principal reason third-party software initially started creating interfaces to R is that R’s library of graphics packages is more advanced and is rapidly gaining features by the day (a short ggplot2 sketch follows this list).

4) Free of upfront license cost for academics, and thus budget friendly for small and large analytical teams alike.

5) Flexible programming for your data environment. This includes having packages that ensure compatibility with Java, Python and C++.

 

6) Easy migration from other analytical platforms to the R platform. It is relatively easy for a non-R user to migrate to the R platform, and there is no danger of vendor lock-in, thanks to the GPL nature of the source code and the open community.
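As a small illustration of point 3, here is a minimal sketch of a customizable graph using the ggplot2 package (assuming it is installed), on R’s built-in mtcars data:

    # Scatter plot of weight vs fuel efficiency, with per-cylinder colouring
    # and a linear trend line, each added as one composable layer.
    # install.packages("ggplot2") first.
    library(ggplot2)

    p <- ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
      geom_point(size = 3) +
      geom_smooth(method = "lm", se = FALSE) +
      labs(x = "Weight (1000 lbs)", y = "Miles per gallon", colour = "Cylinders")
    print(p)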

Statistics are numbers that tell (descriptive), advise (prescriptive) or forecast (predictive). Analytics is a decision-making aid; analytics on which no decision is to be made, or even considered, can be classified as purely statistical and non-analytical. Thus the ease of making a correct decision separates a good analytical platform from a not-so-good one. The distinction is likely to be disputed by people of either background: business analysis puts more emphasis on how practical or actionable the results are, and less on the statistical metrics of a particular data analysis task. I believe one clear reason business analytics differs from statistical analysis is the cost of perfect information (data costs, in the real world) and the opportunity cost of delayed and distorted decision-making.

Specific to the following domains, R has these costs and benefits:

  • Business Analytics
    • R is free per license and free to download
    • It is one of the few analytical platforms that work on Mac OS
    • Its results are credibly established both in journals like the Journal of Statistical Software and in the work of the analytics teams at LinkedIn, Google and Facebook
    • It has open source code available for customization under the GPL
    • It also offers a flexible option via commercial vendors like Revolution Analytics (who support 64-bit Windows as well as bigger datasets)
    • It has interfaces from almost all other analytical software, including SAS, SPSS, JMP, Oracle Data Mining and RapidMiner; existing license holders can thus invoke and use R from within these packages
    • It has a huge library of packages for regression, time series, finance and modeling
    • It has high quality data visualization packages
  • Data Mining
    • As a computing platform, R is well suited to the needs of data mining: it has a vast array of packages covering standard regression, decision trees, association rules, cluster analysis, machine learning and neural networks, as well as exotic specialized algorithms like those based on chaos models
    • It offers flexibility in tweaking a standard algorithm, since the source code can be read
    • The Rattle GUI, created and developed in Australia, remains the standard GUI for data miners using R
  • Business Dashboards and Reporting
    • Business dashboards and reporting are an essential piece of business intelligence and decision-making systems in organizations. R offers data visualization through ggplot2, and GUIs like Deducer and Red-R can help even non-R users create a metrics dashboard
    • For online dashboards, R has packages like RWeb, Rserve and R Apache which, in combination with data visualization packages, offer powerful dashboard capabilities (a minimal Rserve sketch follows this list)
    • R can be combined with MS Excel using the RExcel package, bringing R’s capabilities into Excel; an Excel user with no knowledge of R can thus use the GUI within the RExcel plug-in to access powerful graphical and statistical capabilities
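To make the online dashboard bullet concrete, here is a minimal server-side sketch using the Rserve package; the web-tier client (Java, PHP, etc.) lives outside R and is not shown:

    # Start Rserve so that a web application can open a TCP connection,
    # evaluate R expressions remotely, and fetch results or plot files
    # to display in a dashboard page. install.packages("Rserve") first.
    library(Rserve)
    Rserve(args = "--no-save")   # listens on port 6311 by default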

Additional factors to consider in your R installation-

There are some more choices awaiting you now-
1) Licensing Choices- Academic Version, Free Version or Enterprise Version of R

2) Operating System Choices- Which operating system to choose: Unix, Windows or Mac OS?

3) Operating system sub-choice- 32-bit or 64-bit?

4) Hardware choices- Cost-benefit trade-offs for additional hardware for R; choices between local, cluster and cloud computing.

5) Interface choices- Command line versus GUI? Which GUI to choose as the default start-up option?

6) Software component choices- Which packages to install? There are almost 3000 packages; some are complementary, some depend on each other, and almost all are free (a short installation sketch follows this list).

7) Additional software choices- Which additional software do you need to achieve maximum accuracy, robustness and speed of computing, and how can you use existing legacy software and hardware for the best complementary results with R?
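For choice 6, installing packages from CRAN is a one-liner, with dependencies resolved automatically; a minimal sketch (the package names are only examples):

    # Install packages from CRAN; dependencies = TRUE also pulls in whatever
    # each package itself needs.
    install.packages(c("ggplot2", "forecast"), dependencies = TRUE)

    # Load an installed package into the current session.
    library(ggplot2)

    # List the packages already installed on this machine.
    rownames(installed.packages())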

1) Licensing Choices-
You can choose between two kinds of R installations. One is free and open source, from http://r-project.org. The other is commercial, and is offered by several vendors, including Revolution Analytics and the others listed below.

Commercial Vendors of R Language Products-
1) Revolution Analytics http://www.revolutionanalytics.com/
2) XL Solutions- http://www.experience-rplus.com/
3) Information Builders- WebFOCUS RStat (Rattle GUI) http://www.informationbuilders.com/products/webfocus/PredictiveModeling.html
4) Blue Reference- Inference for R http://inferenceforr.com/default.aspx

2) Operating System Choices-

Windows

Windows remains the most widely used operating system on the planet. If you are experienced in Windows-based computing and are active on analytical projects, it would not make sense for you to move to another operating system: compatibility problems are minimal on Microsoft Windows, and help is extensively documented. However, some R packages may not function well under Windows; if that happens, a multiple operating system setup (see below) is your next option.

  • Enterprise R from Revolution Analytics- Enterprise R from Revolution Analytics provides a complete R development environment for Windows, including the use of code snippets to make programming faster. Revolution is also expected to make a GUI available by 2011, and claims several enhancements for its version of R, including the use of optimized libraries for faster performance.
Mac OS

The main reason for choosing Mac OS remains its considerable appeal in aesthetically designed software, but Mac OS is not a standard operating system for enterprise systems or for statistical computing. Open source R claims to be quite optimized for the Mac, and it can serve existing Mac users; however, there seem to be no commercially supported versions of R for this operating system as of now.

Linux

  • Ubuntu
  • Red Hat Enterprise Linux
  • Other versions of Linux

Linux is considered a preferred operating system by many R users: it shares R’s open source credentials, is a much better fit for all R packages, and is customizable for big data analytics.

Ubuntu Linux is recommended for people making the transition to Linux for the first time. Ubuntu had a marketing agreement with Revolution Analytics for an earlier release, and many R packages can be installed in a straightforward way because Ubuntu/Debian packages are available for them. Red Hat Enterprise Linux is officially supported by Revolution Analytics for its enterprise module. Among other versions of Linux, openSUSE is popular.

Multiple operating systems- Virtualization vs Dual Boot

You can choose between running a virtual partition dedicated to R-based computing (for example, with VMware Player) and choosing an operating system at startup (dual boot). A software program called Wubi helps with the dual installation of Linux and Windows.

3) 64-bit vs 32-bit- Given a choice between the 32-bit and 64-bit versions of the same operating system, such as Ubuntu Linux, the 64-bit version will speed up processing by an approximate factor of two. However, you need to check whether your current hardware can support a 64-bit operating system; if it can, you may want to ask your information technology manager to upgrade at least some machines in your analytics work environment to 64-bit operating systems. A quick way to check your current R build is sketched below.
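A minimal check from within R, using only built-in constants (no extra packages needed):

    # Pointers are 8 bytes on a 64-bit build of R and 4 bytes on a 32-bit one.
    .Machine$sizeof.pointer   # 8 means 64-bit, 4 means 32-bit
    R.version$arch            # e.g. "x86_64" on a 64-bit build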

 

4) Hardware choices- At the time of writing this book, the dominant computing paradigm is workstation computing, followed by server-client computing. However, with the introduction of cloud computing, netbooks and tablet PCs, hardware choices are much more flexible in 2011 than they were just a couple of years ago.

Hardware is a significant cost in an analytics environment, and it also depreciates remarkably quickly. You may thus examine your legacy hardware and your future analytical computing needs, and decide accordingly among the various hardware options available for R.
Unlike other analytical software, which may charge by the number of processors, price servers higher than workstations, or price grid computing extremely high where it is available at all, R is well suited to all kinds of hardware environments, with flexible costs. Given that R is memory intensive (it limits the size of the data analyzed to the RAM of the machine unless special formats and/or chunking are used), hardware needs depend on the size of the datasets and the number of concurrent users analyzing them. The defining issue is thus not R itself but the size of the data being analyzed (a chunking sketch follows this paragraph).
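To make the chunking idea concrete, here is a minimal sketch using the biglm package, which fits a regression incrementally; read_chunk() is a hypothetical stand-in for however you pull successive blocks of rows from a file or database:

    # Chunked regression with biglm: the model is updated one manageable
    # block of rows at a time, so the full dataset never has to fit in RAM.
    # install.packages("biglm") first; read_chunk() is hypothetical.
    library(biglm)

    fit <- biglm(mpg ~ wt + hp, data = read_chunk(1))   # first block
    for (i in 2:10) {
      fit <- update(fit, read_chunk(i))                 # fold in later blocks
    }
    summary(fit)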

  • Local Computing- This denotes the case when the software is installed locally. For big data, the data to be analyzed would be stored in databases.
    • Server version- Revolution Analytics has differential pricing for server and client versions, but the open source version is free and the same for servers and workstations.
    • Workstation
  • Cloud Computing- Cloud computing is the delivery of data, processing and systems via remote computers. It is similar to server-client computing, but the remote server (also called the cloud) offers flexible computing in terms of the number of processors, memory and data storage. Public clouds enable people to run analytical tasks on massive datasets without investing in permanent hardware or software, since most public clouds are priced on a pay-per-usage basis. The biggest cloud computing provider is Amazon, and many other vendors provide services on top of it. Google is also entering the field with cloud data storage (Google Storage) as well as machine learning in the form of an API (the Google Prediction API).
    • Amazon
    • Google
  • Cluster/Grid Computing and Parallel Processing- To build a cluster, you would need the Rmpi and snow packages, among other packages that help with parallel processing (a minimal snow sketch follows this outline).
  • How many resources?
    • RAM, hard disk and processors for workstation computing
    • Instances or API calls for cloud computing
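To make the cluster option concrete, here is a minimal sketch of parallel processing with the snow package on a single machine: a socket cluster of four local worker processes (Rmpi would be the route for a true MPI cluster):

    # Parallel processing with snow: start four local workers, distribute
    # a computation across them, then release them.
    # install.packages("snow") first.
    library(snow)

    cl <- makeCluster(4, type = "SOCK")          # four worker processes
    res <- parLapply(cl, 1:8, function(i) i^2)   # run the function on workers
    stopCluster(cl)                              # always release the workers
    unlist(res)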
5) Interface Choices
  • Command line
  • GUI
  • Web interfaces

6) Software Component Choices
  • R dependencies
  • Packages to install
  • Recommended packages

7) Additional Software Choices
  • Additional legacy software
  • Optimizing your R-based computing
  • Code editors
  • Code analyzers
  • Libraries to speed up R

citation- R Development Core Team (2010). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.

(Note- this is a draft in progress)

News on R Commercial Development -Rattle- R Data Mining Tool

R RANT- while the European R Core leadership, led by the Great Dane Peter Dalgaard, focuses on the small picture and virtually hands the whole commercial side to Prof Nie and David Smith at Revolution Computing, other smaller package developers have refused to be treated as cheap R&D developers for enterprise software. How are the book sales coming along, Prof Peter? Any plans to write another R book, or are you done with writing your version of Mathematica (Ref- Newton)? Running the R Core project team must be so hard. I recommend the Tarantino movie “Inglorious B…” for the Herr Doktors. -END

I believe that individual R package creators like Prof Harrell (Hmisc) or Hadley Wickham (plyr) deserve a share of the royalties or REVENUE earned by Revolution Computing, or by ANY software company that uses R.

On this note, some updated news on Rattle, the data mining tool created by Dr Graham Williams. Once again R development is being taken ahead by the Down Under chaps while the Big Guys thrash out the road map across the Pond.
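For readers who want to try it, launching Rattle from an R session is a one-liner once the package is installed; a minimal sketch (Rattle’s interface also needs the GTK+ libraries to be present):

    # Launch the Rattle GUI from R. install.packages("rattle") first.
    library(rattle)
    rattle()   # opens the point-and-click data mining window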

Data Mining Resources

Citation –http://datamining.togaware.com/

Rattle is a free and open source data mining toolkit written in the statistical language R, using the Gnome graphical interface. It runs under GNU/Linux, Macintosh OS X, and MS/Windows. Rattle is being used in business, government, research and for teaching data mining, in Australia and internationally. Rattle can also be purchased on DVD (or made available as a downloadable CD image) as a standalone installation for $450 USD ($560 AUD).

The free and open source book, The Data Mining Desktop Survival Guide (ISBN 0-9757109-2-3), simply explains the otherwise complex algorithms and concepts of data mining, with examples to illustrate each algorithm using the statistical language R. The book is being written by Dr Graham Williams, based on his 20 years of research and consulting experience in machine learning and data mining. An electronic PDF version is available for a small fee from Togaware ($40 AUD/$35 USD, to cover costs and ongoing development).

Other Resources

  • The Data Mining Software Repository makes available a collection of free (as in libre) open source software tools for data mining
  • The Data Mining Catalogue lists many of the free and commercial data mining tools that are available on the market.
  • The Australasian Data Mining Conferences are supported by Togaware, which also hosts the web site.
  • Information about the Pacific Asia Knowledge Discovery and Data Mining series of conferences is also available.
  • A Data Mining course is taught at the Australian National University.
  • See also the Canberra Analytics Practise Group.
  • A Data Mining Course was held at the Harbin Institute of Technology Shenzhen Graduate School, China, 6-13 December 2006. This course introduced the basic concepts and algorithms of data mining from an applications point of view, and introduced the use of R and Rattle for data mining in practice.
  • A Data Mining Workshop was held over two days at the University of Canberra, 27-28 November 2006. This course introduced the basic concepts and algorithms for data mining and the use of R and Rattle.

Using R for Data Mining

The open source statistical programming language R (based on S) is in daily use in academia and in business and government. We use R for data mining within the Australian Taxation Office. Rattle is used by those wishing to interact with R through a GUI.

R is memory based, so on 32-bit CPUs you are limited to smaller datasets (perhaps 50,000 up to 100,000, depending on what you are doing). Deploying R on 64-bit multiple-CPU (AMD64) servers running GNU/Linux with 32GB of main memory provides a powerful platform for data mining.

R is open source, thus providing assurance that there will always be the opportunity to fix and tune things that suit our specific needs, rather than rely on having to convince a vendor to fix or tune their product to suit our needs.

Also, by being open source, we can be sure that the code will always be available, unlike some of the data mining products that have disappeared (e.g., IBM’s Intelligent Miner).

See earlier interview-

https://decisionstats.wordpress.com/2009/01/13/interview-dr-graham-williams/


Interview Paul van Eikeren Inference for R

Here is an interview with Paul van Eikeren, President and CEO of Blue Reference, Inc. Paul heads up a startup company addressing the need of information workers for easier-cheaper-faster access to high-end data mining, analysis and reporting capabilities from software like R, S-Plus, MATLAB, SAS, SPSS, Python and Ruby. His recent product, Inference for R, has been causing waves within the analytical fraternity, among both R users and SAS users, especially given that it is quite well designed, has a great GUI, and is priced rather reasonably.

A few weeks ago, rumour had it that the SAS Institute was buying out the Inference for R product (note the merger and acquisition question below).

Rather curious to know more about this company, I happened to meet Ben Hincliffe at http://www.analyticbridge.com (which, with 5,000 members, has the largest community of data analytics members, and many business intelligence members as well). Ben, who recently authored a guest post for Sandro at the Data Mining Blog, then put across my request for an interview with Paul, the CEO of Blue Reference. Blue Reference’s existing products include additional analytical packages like Inference for MATLAB.

Paul is an extremely seasoned person, with years in the analytical fraternity and a PhD from MIT. Here is Paul’s vision for his company and analytics product development.

Ajay: Describe your career journey. What advice would you give to today’s young people about following careers in science?

Paul: I have been blessed with an extremely productive and diversified career journey. After receiving undergraduate and graduate degrees in chemistry, I taught chemistry and carried out research as a college professor for 14 years. I spent the next 12 years heading R&D teams at three different startup companies focused on the application of novel processing technology for use in drug discovery and development. And using that wealth of acquired experience, I have had the good fortune to successfully co-found and develop, with my son Josh, two startup companies (IntelliChem and Blue Reference) directed at the use of informatics to drive more efficient and effective research, development, manufacturing and operations.

In my journey I have had the opportunity to counsel many young people regarding their career choices. I have offered two principal pieces of advice: one, for the right person, science represents an outstanding opportunity for a productive and satisfying career; and two, a science education provides an outstanding stepping stone to careers in other fields. A study discussed in a recent Wall Street Journal article (Sarah E. Needleman, “Doing the Math to Find the Good Jobs,” 26 January 2009) revealed that mathematicians land the top spot in the new rankings of the best occupations. Science-linked occupations took 7 of the top 20 spots.

These ratings suggest that the problem solving and innovation aspects of scientific occupations are much less stressful than other occupations, which leads to high job satisfaction. But does one have to be a genius to have a successful career in science? An interesting read on this subject is Robert Weisberg’s book (Creativity: Beyond the Myth of the Genius), in which he dispels the myth that genius is the result of a genetic gift. Weisberg argues, convincingly, that a genius exhibits three elements: (1) a basic intellectual capacity; (2) a high level of motivation/determination, which enables the genius to remain focused; and (3) immersion in their chosen field, typically represented by over 10,000 hours of study/practice/experience. It turns out that the last element is the principal differentiator, and fortunately, it is something one has control over.

Ajay: Describe the journey that Blue Reference has made leading to its current product line, including Inference for R.

Paul: The Inference product suite represents a natural extension beyond the Electronic Laboratory Notebook (ELN) product we developed at our previous company, IntelliChem. ELNs are used by scientists and technicians to document research, experiments and procedures performed in a laboratory. The ELN is a fully electronic replacement of the paper notebook. IntelliChem (sold to Symyx in 2004) was a leader in deployment of ELNs at global pharmaceutical companies.

After seeing the successful adoption of ELNs in the laboratory, we saw an opportunity to improve upon the utility of ELN documents and the data contained therein. Essentially, we developed Inference to be a platform for enabling MS Office documents with powerful, flexible, and transparent analytic capabilities – what we call “dynamic documents” or “document mashups”. Executable code from high-level scripting languages like R, MATLAB, and .NET is combined with data and explanatory text in the document canvas to transform it from a static record into an analytic application.
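As an aside for R users: the dynamic-document idea described here has a rough open source analogue in R’s own Sweave, which executes R code chunks embedded in a LaTeX source file and weaves their output into the finished document. A minimal sketch (report.Rnw is a hypothetical file name; this is not Blue Reference’s implementation):

    # Run the R chunks in report.Rnw, weaving results into report.tex,
    # then compile the woven LaTeX to PDF.
    Sweave("report.Rnw")
    tools::texi2dvi("report.tex", pdf = TRUE)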

The pharmaceutical industry, in cooperation with the FDA, has begun to look at ways to implement quality by design (QbD) practices as an alternative to quality by end-testing. QbD comprises a systematic application of predictive analytics to the drug R&D process such that development timelines and costs are reduced while drug safety and efficacy are improved.

Statistical modeling and analysis plays a key role in QbD as a tool for identifying critical quality attributes and confining their variability to a specified design space. Dynamic documents fit nicely into this paradigm, and we’re currently using Inference as a platform to develop an enterprise solution for QbD. You can visit http://www.InferenceForQbD.com for more information about our QbD product.

Along the way, we recognized the need for Inference outside of the pharmaceutical industry. The Inference for R, Inference for MATLAB, and Inference for .NET versions are meant to serve users of these technical computing languages who have analysis, publishing, reporting, collaboration, and reproducible research needs that are best served by a document-centric environment. By using Microsoft Word, Excel and PowerPoint as the “front end,” we can serve the 500 million users who use Microsoft Office as their principal desktop productivity application.

Ajay: What is the pricing strategy for Inference for MATLAB and Inference for R, and how do you see the current recession as an opportunity for analytical products?

Paul: Our strategy is to reach out to the market of Microsoft Office users who would benefit from easy access to data mining and predictive analytics capabilities within their principal desktop productivity tool. Accordingly, we have offered the Inference product at the low price of $199 for a single-user, one-year subscription. Additionally, because it is implemented on top of an existing installation of Microsoft Office, the costs of training, support and maintenance are expected to be minimal.

[Screenshots: creating a simple user interface for your R application; R code directly in Excel to customize your analysis; graphical output in an Excel tab]

Ajay: Your product seems to be a nice fit where both open source software and proprietary packages from Microsoft (.NET) work together to give the customer a good solution. Do you believe it is possible for big companies and big open source communities to work together to create software, rather than just being at loggerheads?

Paul: Absolutely. We’re seeing momentum build for open source analytic solutions as the economy impacts companies, both small and large. We saw this take place in the back office with implementation of Linux and Apache Web servers, and now we’re starting to see it in the front office. Smart IT teams are looking for creative ways to stretch their resources, forcing them to look beyond established, but expensive, software products.

We’ve encountered concrete evidence of this in the financial industry. Fresh on the heels of the credit crisis, investment banks and hedge funds have begun to realize that their risk models and supporting software infrastructure are inadequate. In response, quantitative finance and risk analysts are increasingly turning to the open source R statistical computing environment for improved predictive analytics.

R has a core group of devotees in academia who drive innovation, making it a comprehensive venue for the development of leading-edge data analysis methods. In order to leverage these tools, banks need a way for R to play nicely with their existing personnel and IT infrastructure. This is where Inference for R produces real value: it transforms MS Office into a platform for the development, distribution, and maintenance of R-based quantitative tools, enabling production-level predictive analytics.

Commercial distributions of R address issues of scalability and support, which might otherwise be subjects of concern. For example, REvolution Computing distributes an optimized, validated and supported distribution of R, providing peace of mind to corporate IT. REvolution also offers Enterprise R, a distribution of R for 64-bit, high performance computing.

Ajay: Please name any successful customer testimonials for Inference for R.

Paul: We have been working with the director of quantitative analytics at a large international bank. He reported that he has successfully distributed R applications, based on Inference in Excel, to his team of research analysts and portfolio managers. Use of this strategy eliminated the need to code complex models in Visual Basic for Applications, which is time consuming and error prone.

Ajay: Also, are there any issues with licensing and IP when mixing open source code and proprietary code?

Paul: The licensing issues with open source R pertain to distributing R; there are no licensing restrictions on using R. Accordingly, we do not distribute R. Rather, our customers install R separately, and Inference recognizes the installation.

Ajay: So R is free, and I can get OpenOffice for free. What are five specific uses where Inference for R scores an edge over these and would make me pay for the solution?

Paul: R is free, and many R enthusiasts would argue that all you need for R is a Linux operating system like Ubuntu, a text editor such as Emacs, and R’s command line interface. For some highly-skilled R users this is sufficient; for the new and average R user this is a nightmare.

Many people think that the largest fraction of the cost of implementing new software is the cost of the license. In actuality, and especially in the corporate world, it is the cost of training, user support, software maintenance, and the cost of switching the user base to the new software. Free open source software does not help here. Hence there is a strong ROI argument for building new software applications on top of existing systems that have worked well.

Additionally, successful implementation of open source software like R requires a baseline of integration with existing systems. The fact is that Microsoft operating systems dominate the business world, as does Microsoft Office. If one is serious about using R to address the analytic needs of big business, tight integration with these systems is imperative.

Ajay: Any plans for a web-hosted SaaS version of Inference for R soon?

Paul: The natural progression of Inference for R to SaaS will coincide with the next release of Office (Office 2010 or Office 14), which we expect to be largely SaaS enabled.

Ajay: Name some alliances and close partners working with Blue Reference, and tell us what we can expect from you in terms of product launches in 2009.

Paul: We have created a product development consortium in partnership with ‘top ten’ global pharmaceutical companies. The consortium is guiding the development of an enterprise solution for Quality by Design (QbD), using Inference for R as the platform.

We are working with several consulting firms specializing in IT solutions for specialized markets like risk management and predictive analytics.

We are also working with several technology partners who have complementary products and where integration of their products with Inference provides clear and significant value to customers.

Ajay: Any truth to the rumors of an acquisition by a BIG analytics company?

Paul: Our business strategy is centered on growth through partnerships with others. Acquisition is one means to execute that strategy.

Ajay: How do you see this particular product (for R) shaping up over the years?

Paul: R’s success can be attributed, in large part, to the support of its loyal open source community. Its enthusiastic use in academia bodes very well for its growth as a cutting-edge analytics tool. It is just a matter of time before commercial analytic solutions powered by R become de rigueur. We’re happy to be at the tip of the spear.

Ajay: Any Asia plans for Blue Reference, or are you still happy with the Oregon location? How do you plan to interact with graduate schools and academia for your products?

Paul: Although we don’t have a major private university in our backyard, Oregon State University has opened a campus here. And we’ve been in dialogue with the global academic community from day one. Over 100 academic institutions around the world use Inference through our academic licensing program. Inference is a great tool for preparing dynamic lessons and publishing reproducible research.

Our Central Oregon location is home to a growing high-tech sector that we’ve been a part of for decades. We’ve had success building large and profitable companies here. Bend attracts Silicon Valley types who come here for vacation and don’t want to leave – they just can’t seem to resist the quality of life and bountiful recreational opportunities that this area offers. It’s a good mix of work and play.

Biography

Paul van Eikeren is President and CEO of Blue Reference, Inc. He is responsible for guiding the strategic direction of the company through novel product and service development, partnerships and alliances in the realm of applying informatics to faster-cheaper-better research, development, manufacturing and operations. Van Eikeren is a successful serial entrepreneur, which includes the co-founding of IntelliChem with his son Josh and its ultimate sale to Symyx Technologies. He has headed up R&D at several startup companies focused on drug discovery and development, including Sepracor Inc., Argonaut Technologies, Inc. and Bend Research, Inc. He served as Professor of Chemistry and Biochemistry at the Harvey Mudd College of Science and Engineering. He is author/co-author and inventor/co-inventor of over 50 scientific articles and patents directed at the application of chemical, biochemical and computational technologies. Van Eikeren holds a BA degree in Chemistry from Columbia University and a PhD in Chemistry from MIT.

Ajay- To know more, I recommend checking out the free evaluation at http://inferenceforr.com/, especially if you need to rev up your MS Office installation with greater graphics and analytics juice.
