The Amazon cloud keeps getting more exciting, while we are still waiting for the Oracle and Google public compute clouds to open up out of beta! From Amazon's (rather cluttered) blog:
Today, we are excited to announce a new generation of the original Amazon EC2 instance family. Second generation Standard instances (M3 instances) provide customers with the same balanced set of CPU and memory resources as first generation Standard instances (M1 instances), while providing 50% more computational capability per core.
M3 instances are currently available in two instance types: extra-large (m3.xlarge) and double extra-large (m3.2xlarge). Examples of applications that can benefit from the additional CPU horsepower of these new instances include media encoding, batch processing, web servers, caching fleets, and many others. Currently, M3 instances are available in the US East (N. Virginia) Region, starting at a Linux On-Demand price of $0.58/hr for extra-large instances. Customers can also purchase M3 instances as Reserved Instances or as Spot Instances. We will introduce M3 instances in additional regions in the coming months.
To learn more about Amazon EC2 instance types and to find out which instance type might be useful for you, please visit the Amazon EC2 Instance type page.
Pricing Change for M1 Standard Instances
Along with the introduction of the M3 Standard instance family, we are announcing a reduction in Linux On-Demand pricing for M1 Standard instances in the US East (N. Virginia) and US West (Oregon) Regions by almost 19%. The new pricing is effective from November 1 and is described in the pricing table in Amazon's announcement.
You can find out more about pricing for all Amazon EC2 instances by visiting the Amazon EC2 pricing page.
One more day of me mucking around with MySQL and Amazon (hoping to get to the R soon).
Just working on the Hadoop on Amazon’s Time Sharing Platform 😉
Hopefully we will see some SAS, SPSS, or R up there soon.
I came across a lovely analytics company, Think Big Analytics, and I really liked their explanation of the whole big data shebang. Hadoop isn't rocket science, and it can be made simpler to explain and deploy.
Check them out yourself at http://www.thinkbiganalytics.com/resources_reference
They also have an awesome series of lectures coming up:
Up and Running with Big Data: 3 Day Deep-Dive
Over three days, explore the Big Data tools, technologies and techniques which allow organisations to gain insight and drive new business opportunities by finding signal in their data. Using Amazon Web Services, you'll learn how to use the flexible map/reduce programming model to scale your analytics, use Hadoop with Elastic MapReduce, write queries with Hive, develop real-world data flows with Pig, and understand the operational needs of a production data platform.
- MapReduce concepts
- Hadoop implementation: JobTracker, NameNode, TaskTracker, DataNode, Shuffle & Sort
- Introduction to Amazon AWS and EMR with console and command-line tools
- Implementing MapReduce with Java and Streaming
- Hive Introduction
- Hive Relational Operators
- Hive Implementation to MapReduce
- Hive Partitions
- Hive UDFs, UDAFs, UDTFs
- Pig Introduction
- Pig Relational Operators
- Pig Implementation to MapReduce
- Pig UDFs
- NoSQL discussion
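To make the MapReduce concepts in the outline above concrete, here is a minimal word-count sketch in the Hadoop Streaming style the course covers: the mapper emits tab-separated (key, value) pairs, a shuffle-and-sort groups them by key, and the reducer aggregates each group. This is an illustrative local simulation, not Think Big's course material; on EMR, Hadoop itself would run the mapper and reducer as separate processes over the input splits.

```python
import sys
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map phase: emit (word, 1) for every word in the input lines."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: after shuffle & sort groups pairs by key,
    sum the counts for each word."""
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Locally we chain the phases directly; a Streaming job would read
    # stdin in the mapper and reducer as two separate scripts.
    for word, count in reducer(mapper(sys.stdin)):
        print(f"{word}\t{count}")
```

Hive and Pig ultimately compile their relational operators down to jobs of exactly this shape, which is why the outline pairs each of them with an "Implementation to MapReduce" session.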
Here is a brief tutorial on running R on the Windows Azure cloud (the OS is Windows in this case, but four kinds of Linux are also available).
There is a free 90-day trial, so you can run R on the cloud at no cost (since Google's compute cloud is still in closed, hush-hush beta).
Go to https://www.windowsazure.com/en-us/pricing/free-trial/
Some slides I liked on cloud computing infrastructure as offered by Amazon, IBM, Google, Microsoft (Windows Azure), and Oracle.