Latest from the Amazon Cloud –
hi1.4xlarge instances come with eight virtual cores that can deliver 35 EC2 Compute Units (ECUs) of CPU performance, 60.5 GiB of RAM, and 2 TiB of storage capacity across two SSD-based storage volumes. Customers using hi1.4xlarge instances for their applications can expect over 120,000 4 KB random read IOPS, and as many as 85,000 4 KB random write IOPS (depending on active LBA span). These instances are available on a 10 Gbps network, with the ability to launch instances into cluster placement groups for low-latency, full-bisection bandwidth networking.
High I/O instances are currently available in three Availability Zones in the US East (N. Virginia) Region and two Availability Zones in the EU West (Ireland) Region. Other regions will be supported in the coming months. You can launch hi1.4xlarge instances as On Demand instances starting at $3.10/hour, and purchase them as Reserved Instances.
High I/O Instances
Instances of this family provide very high instance storage I/O performance and are ideally suited for many high performance database workloads. Example applications include NoSQL databases like Cassandra and MongoDB. High I/O instances are backed by Solid State Drives (SSD), and also provide high levels of CPU, memory and network performance.
High I/O Quadruple Extra Large Instance
60.5 GB of memory
35 EC2 Compute Units (8 virtual cores with 4.4 EC2 Compute Units each)
2 SSD-based volumes each with 1024 GB of instance storage
I/O Performance: Very High (10 Gigabit Ethernet)
Storage I/O Performance: Very High*
API name: hi1.4xlarge
*Using Linux paravirtual (PV) AMIs, High I/O Quadruple Extra Large instances can deliver more than 120,000 4 KB random read IOPS and between 10,000 and 85,000 4 KB random write IOPS (depending on active logical block addressing span) to applications. For hardware virtual machine (HVM) and Windows AMIs, performance is approximately 90,000 4 KB random read IOPS and between 9,000 and 75,000 4 KB random write IOPS. The maximum sequential throughput on all AMI types (Linux PV, Linux HVM, and Windows) is approximately 2 GB/second read and 1.1 GB/second write.
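Claims like these can be sanity-checked from inside the instance with a disk benchmarking tool such as fio; a minimal job file for the 4 KB random-read case might look like the sketch below (the device name /dev/xvdb is an assumption, so check how the ephemeral SSD volumes are actually mapped on your instance before running it):

```ini
; fio job file: 4 KB random reads against the SSD instance store
; /dev/xvdb is an assumed device name; verify your instance's mapping first
[randread-4k]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
numjobs=8
runtime=60
time_based
group_reporting
filename=/dev/xvdb
```

Run it with `fio randread-4k.fio` and compare the reported IOPS against the figures above.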
Western countries are running out of people to fight their wars. This problem is all the more acute given both historical and current demographic trends in armed forces and general populations.
A shift to cyber conflict can help the West maintain parity against Eastern methods of asymmetrical warfare (human attrition vs. cyber conflict).
Declining resources will lead to converging conflicts of interest and shifting dynamics in the balance of power in the 21st century.
1960s: The launch of Sputnik by the USSR led to the moon-shot rush by the US.
1980s: The proposed announcement of "Star Wars" by the USA led to unsustainable defence expenditure by the USSR.
2010s: The threat of cyber conflict and espionage by China (and Russian cyber actions in the war with Georgia) has led to increasing budgets for cyber-conflict research and defense in the USA.
If we do not learn from history, we are condemned to repeat it.
Declining populations in the West and rising populations in the East in the 21st century. The difference in military-age personnel will be even more severe, due to more rapid aging in the West.
Economic output will be proportional to the number of people employed as economies reach similar stages of maturity (Factor, Manufacturing, Services, Innovation).
GDP projections to 2050:
Western defence forces will not be able to afford a human-attrition-intensive war by 2030, given current demographic trends (both growth and aging). The existing balance of power could be maintained if resources are shared or if warfare moves into cyberspace. Technological advances can help augment resources, weakening the case for conflict scenarios.
Will the Internet be used by the US against China in the 21st century as opium was used by Great Britain in the 19th? Time will tell 🙂
Message from the guys at Palo Alto — why don't they just make videos using Sal Academy’s help?
We’re sorry to have to tell you that our Machine Learning course will be delayed further. There have naturally been legal and administrative issues to be sorted out in offering Stanford classes freely to the outside world, and it’s just been taking time. We have, however, been able to take advantage of the extra time to debug and improve our course content!
We now expect that the course will start either late in February or early in March. We will let you know as soon as we hear a definite date. We apologize for the lack of communication in recent weeks; we kept hoping we would have a concrete launch date to give you, but that date has kept slipping.
Thanks so much for your patience! We are really sorry for repeatedly making you wait, and for any interference this causes in your schedules. We’re as excited and anxious as you are to get started, and we both look forward to your joining us soon in Machine Learning!
Andrew Ng and the ML Course Staff
and an additional 750 hours/month of Linux-based computing. The Windows instance makes it really quite easy for users to start getting the hang of cloud computing, and it is quite useful for people to tinker around with, given that Google’s retail cloud offerings are taking so long to hit the market.
But it is only for new users.
AWS Free Usage Tier now Includes Microsoft Windows on EC2
The AWS Free Usage Tier now allows you to run Microsoft Windows Server 2008 R2 on an EC2 t1.micro instance for up to 750 hours per month. This benefit is open to new AWS customers and to those who are already participating in the Free Usage Tier, and is available in all AWS Regions with the exception of GovCloud. This is an easy way for Windows users to start learning about and enjoying the benefits of cloud computing with AWS.
The micro instances provide a small amount of consistent processing power and the ability to burst to a higher level of usage from time to time. You can use this instance to learn about Amazon EC2, support a development and test environment, build an AWS application, or host a web site (or all of the above). We’ve fine-tuned the micro instances to make them even better at running Microsoft Windows Server.
You can launch your instance from the AWS Management Console:
We have lots of helpful resources to get you started:
Along with 750 instance hours of Windows Server 2008 R2 per month, the Free Usage Tier also provides another 750 instance hours to run Linux (also on a t1.micro), Elastic Load Balancer time and bandwidth, Elastic Block Storage, Amazon S3 Storage, and SimpleDB storage, a bunch of Simple Queue Service and Simple Notification Service requests, and some CloudWatch metrics and alarms (see the AWS Free Usage Tier page for details). We’ve also boosted the amount of EBS storage space offered in the Free Usage Tier to 30 GB, and we’ve doubled the I/O requests in the Free Usage Tier, to 2 million.
Message from Stanford –
Dear Ajay Ohri,
We’re very excited for the forthcoming launch of Course Name. We’re sorry not to have gotten in touch lately – we’ve been busy generating lots of content, and the system is working really well. Unfortunately, there are still a few administrative i’s to dot and t’s to cross. We’re still hopeful that we’ll go live very soon – we hope not more than a few weeks late.
But since we don’t have a firm timeline right now, we’d rather leave this open and get back to you with a definitive date soon (rather than just promise you a date that’s far enough in the future that we can feel confident about it). We’ll let you know a firm date as soon as we possibly can.
We realize that some of you will have made plans around expecting the course to start in January, and we apologize for any difficulties that this delay may cause.
The good news is that the course is looking great, and we’re thrilled that over X,000 of you have signed up – we can’t wait for the course to start!
See you soon online!
Course Name Course Staff
Some interesting stats (and note the relative numbers)-
67,000 signups for Technology Entrepreneurship
58,000 signups for Cryptography
44,000 signups for Machine Learning
50,000 signups for Design and Analysis of Algorithms
Check out these other courses:
An amazing example of R being used successfully in combination (and not in isolation) with other enterprise software is the add-in functionality of JMP and its R integration.
See the following JMP add-ins, which use R:
JMP Add-in: Multidimensional Scaling using R
This add-in creates a new menu command under the Add-Ins Menu in the submenu R Add-ins. The script will launch a custom dialog (or prompt for a JMP data table if one is not already open) where you can cast columns into roles for performing MDS on the data table. The analysis results in a data table of MDS dimensions and associated output graphics. MDS is a dimension reduction method that produces coordinates in Euclidean space (usually 2D or 3D) that best represent the structure of a full distance/dissimilarity matrix. MDS requires that the input be a symmetric dissimilarity matrix. Input to this application can be data that is already in the form of a symmetric dissimilarity matrix, or the dissimilarity matrix can be computed based on the input data (where dissimilarity measures are calculated between rows of the input data table in R).
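For readers without JMP, the underlying classical (metric) MDS computation can be sketched directly in Python with NumPy; this is an illustration of the method only, not the add-in's actual code:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed a symmetric dissimilarity
    matrix D into k Euclidean dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:k]      # keep the k largest eigenvalues
    L = np.sqrt(np.maximum(eigvals[idx], 0))
    return eigvecs[:, idx] * L               # n x k coordinate matrix

# Distances between four points on a line at 0, 1, 3, 6
pts = np.array([0.0, 1.0, 3.0, 6.0])
D = np.abs(pts[:, None] - pts[None, :])
coords = classical_mds(D, k=1)
# The 1-D embedding reproduces the input distances exactly
D_hat = np.abs(coords[:, 0][:, None] - coords[:, 0][None, :])
print(np.allclose(D, D_hat))  # True
```

Because the toy dissimilarities here really are 1-D Euclidean distances, the embedding recovers them exactly; with general dissimilarity data the fit is only approximate.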
Chernoff Faces Add-in
One way to plot multivariate data is to use Chernoff faces. For each observation in your data table, a face is drawn such that each variable in your data set is represented by a feature in the face. This add-in uses JMP’s R integration functionality to create Chernoff faces. An R install and the TeachingDemos R package are required to use this add-in.
Support Vector Machine for Classification
By simply opening a data table, specifying X and Y variables, selecting a kernel function, and specifying its parameters in the user-friendly dialog, you can build a classification model using a Support Vector Machine. Please note that the R package ‘e1071’ must be installed before running this dialog. The package can be found at http://cran.r-project.org/web/packages/e1071/index.html.
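As a rough stand-in for what the dialog drives through e1071, here is a minimal Python sketch using scikit-learn's SVC (an assumption of ours for illustration; the add-in itself calls R):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

# Fit an RBF-kernel SVM classifier, roughly mirroring e1071's svm() defaults
X, y = load_iris(return_X_y=True)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

# Training accuracy on iris is well above 0.9 for this kernel/parameter choice
print(clf.score(X, y) > 0.9)  # True
```

The kernel choice and its parameters (here C and gamma) play the same role as the kernel settings exposed in the JMP dialog.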
Penalized Regression Add-in
This add-in uses JMP’s R integration functionality to provide access to several penalized regression methods. Methods included are the LASSO (least absolute shrinkage and selection operator), LARS (least angle regression), Forward Stagewise, and the Elastic Net. An R install and the “lars” and “elasticnet” R packages are required to use this add-in.
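The LASSO part of this can be illustrated outside JMP as well; below is a small Python sketch using scikit-learn's Lasso (the add-in itself works through the R "lars" and "elasticnet" packages), showing the characteristic sparsity of the L1 penalty:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: 10 predictors, only the first 3 carry signal
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta = np.zeros(10)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(scale=0.1, size=100)

# The L1 penalty shrinks small coefficients to exactly zero
lasso = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", int(np.count_nonzero(lasso.coef_)))
```

With a strong enough penalty, the irrelevant predictors drop out entirely while the three true ones survive, which is exactly the variable-selection behavior these methods are used for.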
JMP Add-in: Univariate Nonparametric Bootstrapping
This script performs simple univariate, nonparametric bootstrap sampling by using the JMP to R Project integration. A JMP Dialog is built by the script where the variable you wish to perform bootstrapping over can be specified. A statistic to compute for each bootstrap sample is chosen and the data are sent to R using new JSL functionality available in JMP 9. The boot package in R is used to call the boot() function and the boot.ci() function to calculate the sample statistic for each bootstrap sample and the basic bootstrap confidence interval. The results are brought back to JMP and displayed using the JMP Distribution platform.
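The same idea, minus the JMP-to-R plumbing, fits in a few lines of Python; note that this sketch returns a simple percentile interval rather than the basic interval that boot.ci() computes:

```python
import numpy as np

def bootstrap_ci(x, stat=np.mean, n_boot=5000, alpha=0.05, seed=42):
    """Nonparametric bootstrap: resample x with replacement, compute the
    statistic on each resample, and return a percentile confidence interval."""
    rng = np.random.default_rng(seed)
    stats = np.array([stat(rng.choice(x, size=len(x), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

x = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.3, 4.4, 5.8, 5.0])
lo, hi = bootstrap_ci(x)
print(lo < np.mean(x) < hi)  # True: the sample mean sits inside its 95% CI
```

Swapping `np.mean` for `np.median` or any other statistic mirrors the statistic choice offered in the JMP dialog.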
Finally, a powerful enough cloud computing instance from Amazon EC2 (called CC2), priced at $3.00 per hour for Windows instances and $2.40/hour for Linux.
It would be interesting to see how SAS, IBM SPSS, or R can leverage these.
Storage – On the storage front, the CC2 instance type is packed with 60.5 GB of RAM and 3.37 TB of instance storage.
Processing – The CC2 instance type includes 2 Intel Xeon processors, each with 8 hardware cores. We’ve enabled Hyper-Threading, allowing each core to process a pair of instruction streams in parallel. Net-net, there are 32 hardware execution threads, and you can expect 88 EC2 Compute Units (ECUs) from this 64-bit instance type.
On a somewhat smaller scale, you can launch your own array of 290 CC2 instances and create a Top500 supercomputer (63.7 teraFLOPS) at a cost of less than $1000 per hour.
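The arithmetic behind that claim is easy to check using the prices quoted here:

```python
# Back-of-the-envelope check of the 290-instance Top500 claim
instances = 290
linux_price = 2.40       # $/hour for cc2.8xlarge (On-Demand, Linux)
cluster_tflops = 63.7    # reported Linpack result for the 290-node run

hourly_cost = instances * linux_price
print(round(hourly_cost, 2))  # 696.0, indeed under $1000/hour

# Per-instance share of the Linpack result, in GFLOPS
print(round(cluster_tflops / instances * 1000, 1))  # 219.7
```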
Cluster Compute Eight Extra Large specifications:
88 EC2 Compute Units (Eight-core 2 x Intel Xeon)
60.5 GB of memory
3370 GB of instance storage
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cc2.8xlarge
Price: Starting from $2.40 per hour
But some caveats:
- The instances are available in a single Availability Zone in the US East (Northern Virginia) Region. We plan to add capacity in other EC2 Regions throughout 2012.
- You can run 2 CC2 instances by default.
- You cannot currently launch instances of this type within a Virtual Private Cloud (VPC).