iTunes finally gets some competition – Amazon Cloud Player

 

An interesting development is Amazon’s Cloud Player (though Canonical may be credited with thinking of the idea first with Ubuntu One). Since Ubuntu One is dependent on the OS (and not the browser), Amazon’s version is more of a mobile Cloud Player (it appears to be an Android app rather than an app that is independent of any platform, OS or browser).

Since Android and Ubuntu are both Linux flavors, I am not sure if Canonical has an existing mobile app for Ubuntu One. Apple’s cloud plans also seem kind of ambiguous compared to Microsoft’s (Azure et al).

I guess we will have to wait for a true Cloud player.

 

http://www.amazon.com/b/ref=tsm_1_tw_s_dm_liujd5?node=2658409011&tag=cloudplayer-20

How to Get Started with Cloud Drive and Cloud Player

 

Step 1. Add music to Cloud Drive

Purchase a song or album from the Amazon MP3 Store and click the Save to Amazon Cloud Drive button when your purchase is complete. Your purchase will be saved for free.

 

Step 2. Play your music in Cloud Player for Web

Click the Launch Amazon Cloud Player button to start listening to your purchase. Add more music from your library by clicking the Upload to Cloud Drive button from the Cloud Player screen. Start with 5 GB of free Cloud Drive storage. Upgrade to 20 GB with an MP3 album purchase (see details). Use Cloud Player to browse and search your library, create playlists, and download to your computer.

 

Step 3. Enjoy your music on the go with Cloud Player for Android

Install the Amazon MP3 for Android app to use Cloud Player on your Android device. Shop the full Amazon MP3 store, save your purchases to Cloud Drive, stream your Cloud Player library, and download to your device right from your Android phone or tablet.

Compare this with:

https://one.ubuntu.com/music/

A cloud-enabled music store

The Ubuntu One Music Store is integrated with the Ubuntu One service making it a cloud-enabled digital music store. All purchases are transferred to your Ubuntu One personal cloud for safe storage and then conveniently downloaded to your synchronizing computers. And don’t worry about going over your storage quota with music purchases. You won’t need to pay more for personal cloud storage of music purchased from the Ubuntu One Music Store.

An Ubuntu One subscription is required to purchase music from the Ubuntu One Music Store. Choose from either the free 2 GB option or the 50 GB plan for $10 (USD) per month to synchronize more of your digital life.

5 regional stores and more in the works

  • The Ubuntu One Music Store requires Ubuntu 10.04 LTS and offers digital music through five regional stores.
  • The US, UK, and Germany stores offer music from all major and independent labels.
  • The EU store serves most of the EU member countries and offers music from fewer major-label artists.
  • The World store offers only independent label music and serves the countries not covered by the other regional stores.

 

 

Windows Azure and Amazon free offers


For high-performance computing folks, try out Azure for free:

http://www.microsoft.com/windowsazure/offers/popup/popup.aspx?lang=en&locale=en-US&offer=MS-AZR-0001P#compute

Windows Azure Platform
Introductory Special

This promotional offer enables you to try a limited amount of the Windows Azure platform at no charge. The subscription includes a base level of monthly compute hours, storage, data transfers, a SQL Azure database, Access Control transactions and Service Bus connections at no charge. Please note that any usage over this introductory base level will be charged at standard rates.

Included each month at no charge:

  • Windows Azure
    • 25 hours of a small compute instance
    • 500 MB of storage
    • 10,000 storage transactions
  • SQL Azure
    • 1GB Web Edition database (available for first 3 months only)
  • Windows Azure platform AppFabric
    • 100,000 Access Control transactions
    • 2 Service Bus connections
  • Data Transfers (per region)
    • 500 MB in
    • 500 MB out

Any monthly usage in excess of the above amounts will be charged at the standard rates. This introductory special will end on March 31, 2011 and all usage will then be charged at the standard rates.
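
To make the allowance concrete, here is a small R sketch that checks a month’s usage against the free amounts listed above. The usage numbers are invented for the example and this is not an official Azure calculator.

    # Illustrative check only (not an official Azure calculator): does a
    # month's usage stay inside the introductory allowance listed above?
    # The 'usage' figures are made up for the example.
    free_tier <- c(compute_hours        = 25,
                   storage_mb           = 500,
                   storage_transactions = 10000,
                   data_in_mb           = 500,
                   data_out_mb          = 500)

    usage     <- c(compute_hours        = 40,     # e.g. a small instance run ~40 hours
                   storage_mb           = 300,
                   storage_transactions = 2000,
                   data_in_mb           = 100,
                   data_out_mb          = 450)

    overage <- pmax(usage - free_tier, 0)
    overage   # anything non-zero here is billed at the standard rates below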

Standard Rates:

Windows Azure

  • Compute*
    • Extra small instance**: $0.05 per hour
    • Small instance (default): $0.12 per hour
    • Medium instance: $0.24 per hour
    • Large instance: $0.48 per hour
    • Extra large instance: $0.96 per hour

 

http://aws.amazon.com/ec2/pricing/

Free Tier*

As part of AWS’s Free Usage Tier, new AWS customers can get started with Amazon EC2 for free. Upon sign-up, new AWS customers receive the following EC2 services each month for one year:

  • 750 hours of EC2 running Linux/Unix Micro instance usage
  • 750 hours of Elastic Load Balancing plus 15 GB data processing
  • 10 GB of Amazon Elastic Block Storage (EBS) plus 1 million IOs, 1 GB snapshot storage, 10,000 snapshot Get Requests and 1,000 snapshot Put Requests
  • 15 GB of bandwidth in and 15 GB of bandwidth out aggregated across all AWS services

 

Paid Instances-

 

On-Demand Instances                          Linux/UNIX Usage    Windows Usage

Standard On-Demand Instances
  Small (Default)                            $0.085 per hour     $0.12 per hour
  Large                                      $0.34 per hour      $0.48 per hour
  Extra Large                                $0.68 per hour      $0.96 per hour
Micro On-Demand Instances
  Micro                                      $0.02 per hour      $0.03 per hour
High-Memory On-Demand Instances
  Extra Large                                $0.50 per hour      $0.62 per hour
  Double Extra Large                         $1.00 per hour      $1.24 per hour
  Quadruple Extra Large                      $2.00 per hour      $2.48 per hour
High-CPU On-Demand Instances
  Medium                                     $0.17 per hour      $0.29 per hour
  Extra Large                                $0.68 per hour      $1.16 per hour
Cluster Compute Instances
  Quadruple Extra Large                      $1.60 per hour      N/A*
Cluster GPU Instances
  Quadruple Extra Large                      $2.10 per hour      N/A*

* Windows is not currently available for Cluster Compute or Cluster GPU Instances.
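
To put the two rate cards side by side, here is a tiny R sketch converting the hourly prices quoted in this post into approximate monthly figures for an instance left running around the clock. It ignores storage, bandwidth and the free allowances, so treat it as a rough comparison only.

    # Back-of-the-envelope comparison of the hourly rates quoted in this post
    # (March 2011 list prices; ignores storage, bandwidth and free-tier credits).
    hours_per_month <- 24 * 30

    rates <- c(EC2_Small_Linux   = 0.085,
               EC2_Small_Windows = 0.12,
               Azure_Small       = 0.12,
               EC2_Large_Linux   = 0.34,
               Azure_Large       = 0.48)

    round(rates * hours_per_month, 2)   # approximate USD cost of running 24x7 for a month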

 

NOTE: Amazon instance definitions differ slightly from Azure definitions.

http://aws.amazon.com/ec2/instance-types/

Available Instance Types

Standard Instances

Instances of this family are well suited for most applications.

Small Instance – default*

1.7 GB memory
1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit)
160 GB instance storage
32-bit platform
I/O Performance: Moderate
API name: m1.small

Large Instance

7.5 GB memory
4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
850 GB instance storage
64-bit platform
I/O Performance: High
API name: m1.large

Extra Large Instance

15 GB memory
8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
1,690 GB instance storage
64-bit platform
I/O Performance: High
API name: m1.xlarge

Micro Instances

Instances of this family provide a small amount of consistent CPU resources and allow you to burst CPU capacity when additional cycles are available. They are well suited for lower throughput applications and web sites that consume significant compute cycles periodically.

Micro Instance

613 MB memory
Up to 2 EC2 Compute Units (for short periodic bursts)
EBS storage only
32-bit or 64-bit platform
I/O Performance: Low
API name: t1.micro

High-Memory Instances

Instances of this family offer large memory sizes for high throughput applications, including database and memory caching applications.

High-Memory Extra Large Instance

17.1 GB of memory
6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
420 GB of instance storage
64-bit platform
I/O Performance: Moderate
API name: m2.xlarge

High-Memory Double Extra Large Instance

34.2 GB of memory
13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each)
850 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.2xlarge

High-Memory Quadruple Extra Large Instance

68.4 GB of memory
26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.4xlarge

High-CPU Instances

Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

High-CPU Medium Instance

1.7 GB of memory
5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each)
350 GB of instance storage
32-bit platform
I/O Performance: Moderate
API name: c1.medium

High-CPU Extra Large Instance

7 GB of memory
20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: c1.xlarge

Cluster Compute Instances

Instances of this family provide proportionally high CPU resources with increased network performance and are well suited for High Performance Compute (HPC) applications and other demanding network-bound applications. Learn more about use of this instance type for HPC applications.

Cluster Compute Quadruple Extra Large Instance

23 GB of memory
33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
1690 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cc1.4xlarge

Cluster GPU Instances

Instances of this family provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefiting from highly parallelized processing, including HPC, rendering and media processing applications. While Cluster Compute Instances provide the ability to create clusters of instances connected by a low-latency, high-throughput network, Cluster GPU Instances provide an additional option for applications that can benefit from the efficiency gains of the parallel computing power of GPUs over what can be achieved with traditional processors. Learn more about use of this instance type for HPC applications.

Cluster GPU Quadruple Extra Large Instance

22 GB of memory
33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
2 x NVIDIA Tesla “Fermi” M2050 GPUs
1690 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cg1.4xlarge

versus-

Windows Azure compute instances come in five unique sizes to enable complex applications and workloads.

Compute Instance Size   CPU            Memory     Instance Storage   I/O Performance
Extra Small             1 GHz          768 MB     20 GB*             Low
Small                   1.6 GHz        1.75 GB    225 GB             Moderate
Medium                  2 x 1.6 GHz    3.5 GB     490 GB             High
Large                   4 x 1.6 GHz    7 GB       1,000 GB           High
Extra Large             8 x 1.6 GHz    14 GB      2,040 GB           High

*There is a limitation on the Virtual Hard Drive (VHD) size if you are deploying a Virtual Machine role on an extra small instance. The VHD can only be up to 15 GB.

 

 

Interview: Karim Chine, BIOCEP (Cloud Computing with R)

Here is an interview with Karim Chine of http://www.biocep.net/

Working with R or Scilab on clusters/grids/clouds becomes as simple as working with them locally –

Karim Chine, Biocep.

Ajay- Please describe your career in the field of science. What advice would you give to young science graduates in this recession?

Karim- My original background is in theoretical physics; I did my Master’s thesis at the Ecole Normale’s Statistical Physics Laboratory, where I worked on phase separation in two-dimensional additive mixtures with Dr Werner Krauth. I came to computer science after graduating from the Ecole Polytechnique, and I spent two years at TELECOM ParisTech studying software architecture and distributed systems design. I then worked for the IBM Paris Laboratory (VisualAge Pacbase applications generator), Schlumberger (Over-the-Air Platform and web platform for smartcard personalization services), Air France (SSO deployment) and ILOG (OPL-CPLEX-ODM Development System). This gave me intense exposure to real-world, large-scale software design. I crossed the borders of cultural, technical and organizational domains several times, and I worked with a broad palette of technologies alongside some of the best and most innovative engineers. I moved to Cambridge in 2006 and worked for the European Bioinformatics Institute. That is where I started dealing with the integration of R into various types of applications. I left the EBI in November 2007. I was looking for institutional support to help me bring into reality a vision that was becoming clearer and clearer: a universal platform for scientific and statistical computing. I failed to get that support, and I have been working on BIOCEP full time for most of the last 18 months without being funded. A few days of consultancy given here and there allowed me to keep going. I spent several weeks at Imperial College, at the National Centre for e-Social Science and at Berkeley’s department of statistics during that period. Those visits were extremely useful in refining the use cases of my platform. I am still looking for a partner to back the project. You asked me to give advice. The one piece of advice I would give is to be creative and to try again and again to do what you really want to do. Crises come and go, they always will, and extreme situations are part of life. I believe hard work and sincerity can prevail over anything.

Ajay- Describe BIOCEP’s scope and ambition.

What are the current operational analytics that can be done by users who have data?

Karim- My first ambition with BIOCEP is to deliver a universal platform for scientific and statistical computing and to create an open, federative and collaborative environment for the production, sharing and reuse of all the artifacts of computing. My second ambition is to dramatically enhance the accessibility of mathematical and statistical computing, to make HPC commonplace and to put new analytical, numerical and processing capabilities in the hands of everyone (open science).

The open source software conquest has gone very far. Environments like R or Scilab, technologies like Java, operating systems like Linux/Ubuntu, and tools like OpenOffice are being used by millions of people. Very little doubt remains about OSS’s final victory in some domains. The cloud is already a reality and it will take computing to a whole new realm. What is currently missing is the software that, by making the cloud’s usage seamless, will create new ecosystems and will provide room for creativity, innovation and knowledge discovery on an unprecedented scale.

BIOCEP is one more building block in this. BIOCEP is built on top of R and Scilab, and anything that you can do within those environments is accessible through BIOCEP. Here is what you get uniquely with this new R/Scilab-based e-platform:

High productivity via the most advanced cross-platform workbench available for the R environment.

Advanced graphics: with BIOCEP, a graphic transducer allows client-side rendering of graphics produced on the server side and enables advanced capabilities like zooming/unzooming/scrolling for R graphics. A client-side mouse tracker dynamically displays information related to the graphics, depending on coordinates. Several virtual R devices showing different data can be coupled in zooming/scrolling, and this helps in visually comparing complex graphics.

Extensibility with plug-ins: new views (IDE-like views, analytical interfaces…) can be created very easily either programmatically or via drag-and-drop GUI designers.

Extensibility with server-side extensions: any Java code can be packaged and used on the server side. The code can interact seamlessly with R and Scilab or provide generic bridges to other software. For example, I provide an extension that allows you to use OpenOffice as a universal converter between various file formats on the server side.

Seamless High Performance Computing: working with R or Scilab on clusters/grids/clouds becomes as simple as working with them locally. Distributed computing becomes seamless; creating a large number of R and Scilab remote engines and using them to solve large-scale problems becomes easier than ever. From the R console the user can create logical links to existing R engines or to newly created ones and use those logical links to pilot the remote workers from within his R session. R functions enable using the logical links to import/export variables from the R session to the different workers and vice versa. R commands/scripts can be executed by the R workers synchronously or asynchronously. Many logical R links can be aggregated into one logical cluster variable that can be used to pilot the R workers in a coordinated way. A cluster.apply function allows the usage of the logical cluster to apply a function to a big data structure by slicing it and sending elementary execution commands to the workers. The workers apply the user’s function to the slices in parallel. The elementary results are aggregated to compose the final result, which becomes available within the R session.
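
The cluster.apply idea is the classic scatter/apply/gather pattern. As a rough analogy only (this uses stock R parallel tooling, not BIOCEP’s own API), the same shape looks like this:

    # Illustrative only: base R's 'parallel' package, not BIOCEP's cluster API.
    # Same pattern as described above: slice a large structure, apply a function
    # on several workers in parallel, then aggregate the pieces into one result.
    library(parallel)

    cl <- makeCluster(4)                  # four local workers; with BIOCEP these
                                          # would be remote R engines on a cloud/grid
    big_matrix <- matrix(rnorm(1e6), ncol = 100)

    # Column means computed in parallel across the workers and reassembled
    col_means <- parApply(cl, big_matrix, MARGIN = 2, FUN = mean)

    stopCluster(cl)
    head(col_means)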

Collaboration: your R/Scilab server running in the cloud can be accessed simultaneously by you and your collaborators. Everything gets broadcast, including graphics. A spreadsheet makes it possible to view and edit data collaboratively. Anyone can write plug-ins to take advantage of the collaborative capabilities of the frameworks. If your IP address is public, you can provide a URL to anyone and let them connect to your locally running R.

Powerful frameworks for Java developers: BIOCEP provides frameworks and tools to use R as if it were an object-oriented Java toolkit, or as a web toolkit for R-based dynamic applications.

Web services for C#, Perl and Python users/developers: most of the capabilities of BIOCEP, including piloting R/Scilab engines on the cloud for distributed computing or building scalable analytical web applications, are accessible from most programming languages thanks to the SOAP front end.

RESTful API: simple URLs can perform computations using R/Scilab engines and return the result as XML or as graphics in any format. This works like Google Charts but has all the power of R, since the graphic is described with an R script provided as a parameter of the URL. The same API can be exposed on demand by the workbench. This allows, for example, integrating a cloud R with Excel or OpenOffice; the workbench works as a bridge between the cloud and those applications.
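
As a hypothetical sketch of that idea (the server address, port and parameter names below are placeholders, not BIOCEP’s documented endpoints), such a call could even be driven from R itself:

    # Hypothetical sketch: endpoint, port and parameter names are placeholders.
    host   <- "http://my-cloud-r-server:8080"                # assumed engine URL
    script <- 'plot(rnorm(100)); title("Demo from a URL")'   # R script describing the graphic

    url <- paste0(host, "/graph?format=png&script=",
                  URLencode(script, reserved = TRUE))

    # Fetch the rendered image, much like requesting a Google Charts graphic
    download.file(url, destfile = "demo.png", mode = "wb")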

Advanced pooling framework for distributed resources: useful for deploying pools of R/Scilab engines on multi-node systems and having them used simultaneously by several distributed client processes in a scalable, optimal way. A supervision GUI is provided for user-friendly management of the pools/nodes/engines.

Simultaneous use of R and Scilab: using Java scripting, data can be transferred from R to Scilab and vice versa.

Ajay- Could you tell us about a successful BIOCEP installation and what it led to? Can BIOCEP be used by the rest of the R community for other packages? What would be an ideal BIOCEP user/customer for whom cloud-based analytics makes more sense?

Karim- BIOCEP is still in pre-beta stage. However, it is a robust and polished pre-beta that several organizations are already using. Janssen Pharmaceutica is using it to create and deliver statistical applications for drug discovery that use R engines running on their back-end servers. The platform is foreseen there as the way to go for an ultimate optimization of some of their data analysis pipelines. Janssen’s head of statistics is said to be very much interested in the capabilities BIOCEP gives statisticians to create their own analytical user interfaces and deliver them with their models, without needing specific software development skills. Shell is creating BIOCEP-based application prototypes to explore the feasibility and advantages of migrating some of Shell’s applications to the cloud. One group from Shell Global Solutions is planning to use BIOCEP for running Scilab in the cloud for corrosion simulation modeling. Dr Ivo Dinov’s team at UCLA is studying the migration of some of the SOCR applications to the BIOCEP platform as plug-ins and extensions. Dr Ivo Dinov has also applied for an important grant for building DISCb (Distributed Infrastructure for Statistical Computing in Biomedicine). If the grant application is successful, BIOCEP will be the backbone, at the software architecture level, of that new infrastructure. In cooperation with the Institute of Biostatistics, Leibniz University of Hannover, Bernd Bischl and Kornelius Rohmeyer have developed a framework for building user-friendly R GUIs of different complexity. The toolkit has used BIOCEP as an R back end since release 2.0. Several small projects have been implemented using this framework and some are in production, such as an application for education in biostatistics at the University of Hannover. The ESNATS project is also planning to use the BIOCEP frameworks. Some development is being done at the EBI to customize the workbench and use it to give the end user the possibility to run R and Bioconductor on the EBI’s LSF cluster.

I’ve been in touch with Phil Butcher, Sanger’s head of IT, and he is considering the deployment of BIOCEP on Sanger’s systems simultaneously with Eucalyptus. The same type of deployment has been discussed with the director of OMII-UK, Neil Chue Hong; BIOCEP’s deployment is probably going to follow the deployment of the Eucalyptus system on the NGS. Tena Sakai has deployed BIOCEP at the Ernest Gallo Clinic and Research Center and is currently exploring the usage of R on the cloud via BIOCEP (Eucalyptus / AWS). The platform has been deployed by a small consultancy company specializing in R on several London-based investment banks’ systems. I have had a go-ahead from Nancy Wilkins-Diehr (Director for Science Gateways, SDSC) for deploying on TeraGrid, and a deployment on EGEE has been discussed with Dr Steven Newhouse (EGEE Technical Director). Both deployments are on standby at the moment.

Quest Diagnostics is planning to use BIOCEP extensively. Sudeep Talati (University of Manchester) is doing his Master’s project on BIOCEP. He is supervised by Professor Andy Brass and he is exploring the use of a BIOCEP-based infrastructure to deliver microarray analysis workflows in a simple and intuitive way to biologists with and without the Cloud. In Manchester, Robin Pinning (e-Science team leader, Research Computing Services) has the deployment of BIOCEP on Manchester’s research cluster on his agenda…

As I have said, anything that you can do with R including installing, loading and using any R package is accessible through BIOCEP. The platform aims to be universal and to become a tool for productivity and collaboration used by everyone dealing with computing/analytics with or without the cloud.

The cloud, whether public or private, will become generalized, and everyone will become a cloud user in one way or another.

Ajay- What motivated you to build BIOCEP and mash up cloud computing and R? What scope do you see for cloud computing in developing countries in Asia and Africa?

Karim– When I was at the EBI, I worked on the integration of R within scalable web applications. I explored and tested the available frameworks and tools, and all of them were either too low-level or too simple to solve the problem. I decided to build new frameworks. I had the opportunity to stand on the shoulders of giants.

Simon Urbanek’s packages already bridged the C API of R with Java reliably. Martin Morgan’s RWebServices package defined class mappings between R types, including S4 classes, and Java.

Progressively R became usable as a Java object oriented toolkit, then as a Java Server. Then I built a pooling framework for distributed resources that made it possible for multiple clients to use multiple R engines optimally.

I started building a GUI to validate the server’s increasingly sophisticated API. That GUI became progressively the workbench.

When I was at Imperial, I worked with the National Grid Service team at the Oxford e-Research Centre to deploy my platform on Oxford’s core cluster. That deployment led to many changes in the architecture to meet all the security requirements.

It was obvious that the next step was to make BIOCEP available on Amazon’s cloud. Academic grids are for researchers; the cloud is for everyone. Making the platform work seamlessly on EC2 took a few months. With the cloud came the focus on collaborative features (collaborative views, graphics, spreadsheets…).

I can only talk about the example of a country I know, Tunisia, and I guess some of this applies to Asian countries. Even if broadband is everywhere today and is becoming accessible and affordable for a majority of Tunisians, I am not sure that the adoption of the cloud will happen soon.

Simple considerations like the obligation to pay for the compute cycles in dollars (and not in dinars) are a barrier for adoption. Spending foreign currencies is subject to several restrictions in general for companies and for individuals; few Tunisians have credit cards that can be used to pay Amazon. Companies would prefer to buy and administer their own machines because the cost of operation and maintenance is lower in Tunisia than it is in Europe/US.

Even if the cloud would help give Tunisian researchers access to affordable computing cycles on demand, it seems that most of them have learned to live without HPC resources and that their research is more theoretical and less computational than it could be. Others are collaborating with research groups in Europe (France) and are using those European groups’ infrastructures.

Ajay- How would BIOCEP address the problems of data hygiene, data security and privacy? Are encrypted and compressed data transfers supported or planned?

Karim- With BIOCEP, a computational engine is exposed as a distributed component via a single mono-directional HTTP port. When you run such an engine on an EC2 instance you have two options:

  • 1/ Totally sandbox the machine (via the security group) and leave only the SSH port open. Private-key authentication is required to access the machine. In this case you use an SSH tunnel (created with a tool like PuTTY, for example), which lets you see the engine as if it were running on your local machine, on a port of your choice – the one specified when creating the tunnel. When you start the Virtual Workbench and connect in HTTP mode to localhost via the specified port, you are effectively connecting to the EC2 R engine. 100% of the information exchanged between your workbench and the engine, including your data, is encrypted thanks to the SSH tunnel. The Virtual Workbench embeds JSch and can create the tunnel for you automatically. This mode doesn’t allow collaboration, since it requires the private key to let the workbench talk to the EC2 R/Scilab engine (a sketch of opening such a tunnel by hand is shown after this list).
  • 2/ Tell the EC2 machine at startup (via the “user data”) to require specific credentials from the user. When the machine starts running, the user needs to provide those credentials to get a session ID and to be able to pilot a virtual EC2 R/Scilab engine. This mode enables collaboration. The client (workbench/scripts) connects to the EC2 machine instance via HTTP (HTTPS in the near future).
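
Here is that sketch, driven from R; the host name, user, key path and port numbers are placeholders, and in practice the workbench can create the tunnel for you via JSch.

    # Placeholders only: substitute your own EC2 host, key file and ports.
    local_port  <- 8888                                  # port the workbench connects to
    remote_port <- 8888                                  # port the engine listens on in EC2
    ec2_host    <- "ec2-203-0-113-10.compute-1.amazonaws.com"
    key_file    <- "~/keys/my-ec2-key.pem"

    # -i: private key, -N: no remote command, -f: background,
    # -L: forward local_port to remote_port on the instance
    system2("ssh", args = c("-i", key_file, "-N", "-f",
                            "-L", sprintf("%d:localhost:%d", local_port, remote_port),
                            paste0("ubuntu@", ec2_host)))

    # The workbench (or any HTTP client) can now connect to http://localhost:8888/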

Ajay- Suppose I have 20 GB per month of data and my organization has decided to cut back on the number of expensive annual software licenses. How can the current version of BIOCEP help me do the following?

Karim– Ways BIOCEP can help you right now.

1) Data aggregation and reporting in terms of spreadsheets, presentations and graphs

  • BIOCEP provides a highly programmable server-side spreadsheet.
  • It can be used interactively as a view of the workbench, and simple clicks allow the transfer of data from cells to R variables and vice versa. It can be created and populated from R (console / scripts).
  • Any R function can be used within dynamically computed cells. The evaluation of those dynamic cells is done on the server side and can use high-performance computing functions. Macros allow adding reactivity to the spreadsheets.
  • A macro allows the user to execute any R code in response to a value change of an R variable or of the content of a range within a spreadsheet. Variable-docking macros allow the mirroring of R variables of any type (vectors, matrices, data frames…) with ranges within the spreadsheet in read/write mode.

Several ready-to-use user interface components can be created and docked anywhere within the spreadsheet. Those components include:

  • an R graphics viewer (PDF viewer) showing graphics produced by a user-defined R script and reacting to changes in user-defined variables and cell ranges,
  • customizable sliders mirroring R variables,
  • buttons executing user-defined R code when pressed,
  • combo boxes mirroring factor variables, etc.

The spreadsheet-based analytical user interface can pilot an R running at any location (local R, Grid R, Cloud R…). It can be created in minutes just by pointing, clicking and copy/pasting.

Cell contents + macros + reactive docked components can be saved in a zip file and become a workbench plug-in. Like all BIOCEP plug-ins, the spreadsheet-based GUI can be delivered to the end user via a simple URL. It can use a cloud R or a local R created transparently on the user’s machine.

2) Build time series models, regression models

BIOCEP’s workbench is extensible and I am hoping that contributors will soon start writing plug-ins or converting available GUIs to BIOCEP plug-ins in order to make the creation of those models as easy as possible.

Biography-

Karim Chine
Karim Chine graduated from the French Ecole Polytechnique and TELECOM ParisTech. He worked at Ecole Normale Supérieure-LPS (phase separation in two-dimensional additive mixtures), IBM (VisualAge Pacbase), Schlumberger (Over-the-Air Platform and web platform for smartcard personalization services), Air France (SSO deployment), ILOG (OPL-CPLEX-ODM Development System), the European Bioinformatics Institute (Expression Profiler, Biocep) and Imperial College London-Internet Center (Biocep). He contributed to open source software (AdaBroker) and is the author of the Biocep platform. He currently works on the seamless integration of the platform within utility computing infrastructures (Amazon EC2), its deployment on grids (NGS) and its usage as a tool for education, and he is trying to build collaborations with academic and industrial partners.

You can view his resume here http://www.biocep.net/scan/CV_Karim_Chine_June_2009.pdf

SAS commits $70 million to Cloud Computing

From the official SAS website

http://www.sas.com/news/preleases/CCF2009.html

SAS to build $70 million cloud computing facility

New cloud computing facility will support needed data-intensive customer solutions

CARY, NC (March 19, 2009) – SAS, the leader in business analytics software and services, announced today it is building a 38,000-square-foot cloud computing facility to provide the additional data-handling capacity needed to expand SAS OnDemand offerings and hosted solutions.

As the need for hosted solutions grows, new research and development jobs will be generated at SAS’ Cary, N.C., world headquarters, where the majority of R&D employees (more than 1,400) are located.

“This project is proof that, despite the down economy, SAS continues to grow and innovate,” said Jim Goodnight, CEO of SAS. “The growing demand by our customers for hosted solutions has given us this opportunity to invest even further in North Carolina and the Cary community.”

In keeping with SAS’ commitment to protecting the environment, the facility will be built to Leadership in Energy and Environmental Design (LEED) standards for water and energy conservation. The sustainable construction methods encourage recycling of materials, similar to the Executive Briefing Center under construction on the Cary campus. SAS’ first LEED building, SAS Canada’s headquarters in Toronto, opened in April 2006.

In keeping with LEED standards, about 60 percent of the project’s construction and equipment spending will be in North Carolina. Approximately 1,000 people will be involved in its design and construction.

The facility will include two 10,000-square-foot server farms. Server Farm 1 is anticipated to be online in mid-2010 and support growth for three to five years. Server Farm 2 will be constructed as a shell and will be populated with mechanical and electrical infrastructure once Server Farm 1 reaches 80 percent capacity. The facility will be built on SAS’ Cary campus.

Apparently SAS Institute believes in creating jobs (and thousands of them) during the recession! Jim clearly is in top intellectual shape despite his, err, vintage. Imagine: with just a browser you could be crunching billions of bytes of data sitting on a beach in Goa! Thankfully they did not believe the hot air that McKinsey put out on cloud computing (read here http://smartdatacollective.com/Home/17942 )

McKinsey attacks Cloud Computing as making no sense

McKinsey, that fine think tank of intellectuals, recently dubbed cloud computing as not making sense, thus trying to throttle in its infancy a paradigm that could make companies across the world more competitive than they are today by helping cut costs precisely when they need it the most. The attempt to paint it as virtualization rather than remote computing is another attempt to cloud the air rather than clear it on cloud computing. Most consulting companies would have pointed out industry affiliations and disclaimers on which companies they are representing or have represented.

Read other comments at the NYT article
http://bits.blogs.nytimes.com/2009/04/15/when-cloud-computing-doesnt-make-sense/

Its study uses Amazon.com’s web service offering as the price of outsourced cloud computing, since its service is the best known and it publishes its costs. On that basis, according to McKinsey, the total cost of the data center functions would be $366 a month per unit of computing output, compared with $150 a month for the conventional data center. “The industry has assumed the financial benefits of cloud computing and, in our view, that’s a faulty assumption,” said Will Forrest, a principal at McKinsey, who led the study.

My take on this is here-

Cloud computing will have lower costs as economies of scale kick in, as they did for nearly all technologies. McKinsey partners must be having a hard time meeting their annual bonuses if they have not factored this basic assumption into their cost projections. Cloud computing just converts this to a mass infrastructure from the present scenario, where you pay annual licenses for software that you use for less than 60% of the day, and hardware that you find obsolete in 3-4 years, which of course gives accountants a reason to help you with depreciation and tax benefits. Renting a computer in the sky is simpler – and you would not need any consultant to advise you on what configuration you need.
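
To make the utilization argument concrete, here is a toy R sketch (illustrative numbers, not McKinsey’s model) showing roughly when an on-demand instance, billed only for the hours used, undercuts a machine you pay for around the clock:

    # Toy break-even sketch; the $150/month figure echoes the conventional
    # data-center cost quoted above, the hourly rate is an EC2 Large Linux price.
    owned_cost_per_month <- 150          # in-house unit cost (USD per month)
    on_demand_rate       <- 0.34         # on-demand rate (USD per hour)

    utilization <- seq(0.1, 1.0, by = 0.1)         # fraction of the day the machine is busy
    hours_used  <- utilization * 24 * 30
    rental_cost <- hours_used * on_demand_rate

    data.frame(utilization, hours_used, rental_cost,
               cheaper = ifelse(rental_cost < owned_cost_per_month, "rent", "own"))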

McKinsey has deep ties with the outsourcing industry in India – from their seminal paper in 1999, to their first concept knowledge center that helped start it, to their alumni across the outsourcing sector – which make for a mutually symbiotic relationship, particularly in business research. Cloud computing actually helps with virtual teams – no need for server farms or IT bureaucracies – and Indian outsourcing can actually reduce a lot of costs, along with American direct users. The intermediaries and consultants would be affected the most.

Indeed, I am speaking at Cloud Slam 09, precisely on how cloud computing can help lower the digital divide by giving high-powered computing to anyone with a thin-client laptop and a browser. Developing countries need access to HPC to better plan their resources and growth in an environmentally optimized manner.

http://www.decisionstats.com

Cloud say hello to R. R say hello to Cloud.


Here is a terrific project from Biocep which I have covered before in January at http://www.decisionstats.com/2009/01/r-and-cloud-computing/

But with some exciting steps ahead at http://biocep-distrib.r-forge.r-project.org/

Basically: take open source R, create a user-friendly GUI, host it on a cloud computer to crunch data better, and save hardware costs as well. Upload, crunch data, download.

Save hardware and software costs in a recession – before your boss decides to save on his staffing costs.


    Biocep combines the capabilities of R and the flexibility of a Java based distributed system to create a tool of considerable power and utility. A Biocep based R virtualization infrastructure has been successfully deployed on the British National Grid Service, demonstrating its usability and usefulness for researchers. 

    The virtual workbench enhances the user experience and the productivity of anyone working with R.

A lovely presentation on it is here

and I am taking an extract

What is missing now

High Level Java API for Accessing R

Stateful, Reusable, Remotable R Components

Scalable, Distributed, R Based Infrastructure

Safe multiple clients framework for components usage as a pool of indistinguishable Remote Resources

User friendly Interface for the remote resources creation, tracking and debugging

    Citation: Karim Chine, "Biocep, Towards a Federative, Collaborative, User-Centric, Grid-Enabled and Cloud-Ready Computational Open Platform,"
    eScience 2008: Fourth IEEE International Conference on eScience, pp. 321-322, 2008.

Ajay- With thanks to Bob Marcus for pointing this out from an older post of mine. I did write on this in August, on the Ohri framework, but that was before the recession moved me out of cloud computing and into blog computing.
