Here are some nice articles on using R on Graphical Processing Units (GPUs), mainly from NVidia. Think of a GPU as specialized hardware inside your desktop whose massively parallel design translates into much faster computing for suitable tasks. Matlab users, for example, can read the webinars here: http://www.nvidia.com/object/webinar.html
A slightly better definition of GPU computing comes from http://www.nvidia.com/object/GPU_Computing.html:
GPU computing is the use of a GPU (graphics processing unit) to do general purpose scientific and engineering computing.
The model for GPU computing is to use a CPU and GPU together in a heterogeneous computing model. The sequential part of the application runs on the CPU and the computationally-intensive part runs on the GPU. From the user’s perspective, the application just runs faster because it is using the high-performance of the GPU to boost performance.
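To make the model concrete, here is a minimal sketch in R, assuming the gputools package discussed below is installed and a CUDA-capable card is present. The sequential part (data preparation) runs on the CPU; the computationally intensive matrix multiply is offloaded to the GPU.

library(gputools)

# Sequential part: prepare the data on the CPU
a <- matrix(rnorm(1000 * 1000), 1000, 1000)
b <- matrix(rnorm(1000 * 1000), 1000, 1000)

# Computationally intensive part: the same multiply, offloaded to the GPU (CUBLAS)
cpu <- a %*% b
gpu <- gpuMatMult(a, b)
all.equal(cpu, gpu)  # results agree up to floating-point tolerance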
Citation:
http://brainarray.mbni.med.umich.edu/brainarray/rgpgpu/
R is the most popular open source statistical environment in the biomedical research community. However, most of the popular R function implementations involve no parallelism and they can only be executed as separate instances on multicore or cluster hardware for large data-parallel analysis tasks. The arrival of modern graphics processing units (GPUs) with user friendly programming tools, such as nVidia’s CUDA toolkit (http://www.nvidia.com/cuda), provides a possibility of increasing the computational efficiency of many common tasks by more than one order of magnitude (http://gpgpu.org/). However, most R users are not trained to program a GPU, a key obstacle for the widespread adoption of GPUs in biomedical research.
The research project at the page mentioned above has developed packages for exactly this need: running R on a GPU.
The initial package is hosted on CRAN as gputools, a source package for UNIX and Linux systems. Be sure to set the environment variable CUDA_HOME to the root of your CUDA toolkit installation, then install the package in the usual R manner. The installation process will automatically make use of nVidia's nvcc compiler and the CUBLAS shared library.
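In practice the install looks something like this (a sketch; the CUDA path is an assumption, use wherever your toolkit actually lives):

# In the shell, before starting R:
#   export CUDA_HOME=/usr/local/cuda   # hypothetical toolkit location
# Or from within R, before installing:
Sys.setenv(CUDA_HOME = "/usr/local/cuda")
install.packages("gputools")  # configure picks up nvcc and CUBLAS automatically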
And here are some performance figures from the project page:
Figure 1 provides performance comparisons between original R functions, assuming a four-thread data-parallel solution on an Intel Core i7 920, and our GPU-enabled R functions on a GTX 295 GPU. The speedup test consisted of testing each of three algorithms with five randomly generated data sets. The Granger causality algorithm was tested with a lag of 2 for 200, 400, 600, 800, and 1000 random variables with 10 observations each. Complete hierarchical clustering was tested with 1000, 2000, 4000, 6000, and 8000 points. Calculation of Kendall's correlation coefficient was tested with 20, 30, 40, 50, and 60 random variables with 10,000 observations each.
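For reference, the three benchmarked tasks correspond to calls along these lines, assuming the gputools function names gpuGranger, gpuDist, gpuHclust, and gpuCor (check the documentation of the version you install):

library(gputools)

x <- matrix(rnorm(10 * 200), nrow = 10)        # 200 variables, 10 observations each
gpuGranger(x, lag = 2)                         # Granger causality with a lag of 2

pts <- matrix(rnorm(1000 * 2), ncol = 2)       # 1000 points
gpuHclust(gpuDist(pts), method = "complete")   # complete hierarchical clustering

y <- matrix(rnorm(10000 * 20), nrow = 10000)   # 20 variables, 10000 observations each
gpuCor(y, method = "kendall")                  # Kendall's correlation coefficient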
Ajay: For hard-core data mining people, customized GPUs for accelerated analytics and data mining sound like fun and common sense. Are there other packages for GPU customization? Let me know.
Download
Download the gputools package for R on a Linux platform here: version 0.01.