The Best and Worst Graphs Ever

From http://www.math.yorku.ca/SCS/Gallery/ comes a great selection:

March of the Pies

and the renowned best graph ever

and the graph with the biggest lie factor, i.e. the most distorted graph

Amazon goes free for users next month


Amazon EC2 and company announced a free year-long tier for new users - you can't beat free 🙂

http://aws.amazon.com/free/

AWS Free Usage Tier

To help new AWS customers get started in the cloud, AWS is introducing a new free usage tier. Beginning November 1, new AWS customers will be able to run a free Amazon EC2 Micro Instance for a year, while also leveraging a new free usage tier for Amazon S3, Amazon Elastic Block Store, Amazon Elastic Load Balancing, and AWS data transfer. AWS’s free usage tier can be used for anything you want to run in the cloud: launch new applications, test existing applications in the cloud, or simply gain hands-on experience with AWS.

Below are the highlights of AWS’s new free usage tiers. All are available for one year (except Amazon SimpleDB, SQS, and SNS, which are free indefinitely):

Sign Up Now

AWS’s free usage tier starts November 1, 2010. A valid credit card is required to sign up.
See offer terms.

AWS Free Usage Tier (Per Month):

In addition to these services, the AWS Management Console is available at no charge to help you build and manage your application on AWS.

* These free tiers are only available to new AWS customers and are available for 12 months following your AWS sign-up date. When your free usage expires or if your application use exceeds the free usage tiers, you simply pay standard, pay-as-you-go service rates (see each service page for full pricing details). Restrictions apply; see offer terms for more details.

** These free tiers do not expire after 12 months and are available to both existing and new AWS customers indefinitely.

The new AWS free usage tier applies to participating services across all AWS regions: US – N. Virginia, US – N. California, EU – Ireland, and APAC – Singapore. Your free usage is calculated each month across all regions and automatically applied to your bill – free usage does not accumulate.

 

Doing Time Series using an R GUI


Until recently I had been thinking that RKWard was the only R GUI supporting Time Series Models-

however, Bob Muenchen of http://www.r4stats.com/ helpfully pointed out that the Epack plugin provides time series functionality to R Commander.

Note that the GUI helps explore various time series functionality.

Using bulkfit you can fit various ARMA models to a dataset and choose one based on minimum AIC:

 

> bulkfit(AirPassengers$x)
$res
ar d ma      AIC
[1,]  0 0  0 1790.368
[2,]  0 0  1 1618.863
[3,]  0 0  2 1522.122
[4,]  0 1  0 1413.909
[5,]  0 1  1 1397.258
[6,]  0 1  2 1397.093
[7,]  0 2  0 1450.596
[8,]  0 2  1 1411.368
[9,]  0 2  2 1394.373
[10,]  1 0  0 1428.179
[11,]  1 0  1 1409.748
[12,]  1 0  2 1411.050
[13,]  1 1  0 1401.853
[14,]  1 1  1 1394.683
[15,]  1 1  2 1385.497
[16,]  1 2  0 1447.028
[17,]  1 2  1 1398.929
[18,]  1 2  2 1391.910
[19,]  2 0  0 1413.639
[20,]  2 0  1 1408.249
[21,]  2 0  2 1408.343
[22,]  2 1  0 1396.588
[23,]  2 1  1 1378.338
[24,]  2 1  2 1387.409
[25,]  2 2  0 1440.078
[26,]  2 2  1 1393.882
[27,]  2 2  2 1392.659
$min
ar        d       ma      AIC
2.000    1.000    1.000 1378.338
> ArimaModel.5 <- Arima(AirPassengers$x,order=c(0,1,1),
+ include.mean=1,
+   seasonal=list(order=c(0,1,1),period=12))
> ArimaModel.5
Series: AirPassengers$x
ARIMA(0,1,1)(0,1,1)[12]
Call: Arima(x = AirPassengers$x, order = c(0, 1, 1), seasonal = list(order = c(0,      1, 1), period = 12), include.mean = 1)
Coefficients:
ma1     sma1
-0.3087  -0.1074
s.e.   0.0890   0.0828
sigma^2 estimated as 135.4:  log likelihood = -507.5
AIC = 1021   AICc = 1021.19   BIC = 1029.63
> summary(ArimaModel.5, cor=FALSE)
Series: AirPassengers$x
ARIMA(0,1,1)(0,1,1)[12]
Call: Arima(x = AirPassengers$x, order = c(0, 1, 1), seasonal = list(order = c(0,      1, 1), period = 12), include.mean = 1)
Coefficients:
ma1     sma1
-0.3087  -0.1074
s.e.   0.0890   0.0828
sigma^2 estimated as 135.4:  log likelihood = -507.5
AIC = 1021   AICc = 1021.19   BIC = 1029.63
In-sample error measures:
ME        RMSE         MAE         MPE        MAPE        MASE
0.32355285 11.09952005  8.16242469  0.04409006  2.89713514  0.31563730
Dataset79 <- predar3(ArimaModel.5,fore1=5)
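
If you want to reproduce this kind of AIC-based model search outside the GUI, here is a minimal sketch in plain R using the forecast package (whose Arima() function produced the output above); bulkfit does this grid search for you, so treat it purely as an illustration, and note that AirPassengers below is R's built-in ts object rather than the R Commander data frame used above.

# Grid-search ARIMA(p,d,q) orders by AIC with the forecast package
# (assumes install.packages("forecast") has been run)
library(forecast)
orders <- expand.grid(ar = 0:2, d = 0:2, ma = 0:2)
orders$AIC <- apply(orders, 1, function(o) {
  fit <- try(Arima(AirPassengers, order = c(o["ar"], o["d"], o["ma"])),
             silent = TRUE)
  if (inherits(fit, "try-error")) NA else AIC(fit)
})
orders[which.min(orders$AIC), ]   # the order with the lowest AIC
auto.arima(AirPassengers)         # or let forecast search automatically (seasonal terms included)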

 

And I also found an interesting Ref Sheet for Time Series functions in R-

http://cran.r-project.org/doc/contrib/Ricci-refcard-ts.pdf

and a slightly more exhaustive time series ref card

http://www.statistische-woche-nuernberg-2010.org/lehre/bachelor/datenanalyse/Refcard3.pdf

Also of interest, a matter of opinion on issues in Time Series Analysis in R, at

http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm

Of course, if I were the sales manager for SAS ETS I would be worried, given the increasing capabilities in Time Series in R. But then again, there are some deficiencies in R GUIs for Time Series:

1) Layout is not very elegant

2) Not enough documented help (at least for the Epack GUI - and no integrated help ACROSS packages)

3) Graphical capabilities need more help documentation to interpret the output (especially the ACF and PACF plots)
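
For the ACF/PACF point above, here is a quick, GUI-independent way to produce those plots from the R console (just a sketch, using the built-in AirPassengers series):

# Difference the logged series to remove trend, then inspect ACF and PACF
x <- diff(log(AirPassengers))
acf(x, main = "ACF of diff(log(AirPassengers))")    # significant spikes hint at the MA order
pacf(x, main = "PACF of diff(log(AirPassengers))")  # significant spikes hint at the AR order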

More resources on Time Series using R.

http://people.bath.ac.uk/masgs/time%20series/TimeSeriesR2004.pdf

and http://www.statoek.wiso.uni-goettingen.de/veranstaltungen/zeitreihen/sommer03/ts_r_intro.pdf

and books

http://www.springer.com/economics/econometrics/book/978-0-387-77316-2

http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-75960-9

http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-75958-6

http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-75966-1

Getting Inside R


I loved the new upgraded design of Inside-R, Revo’s new(?) community.

And promptly shot off a blog application.

What makes Inside-R slightly better than SDC, Analyticbridge, PlanetR and R-bloggers (with due respect):

  1. OpenID logins (I think that's a new and good step)
  2. Options for automated feed parsing for blogs
  3. More than just a blog aggregator- includes sections on other stuff- thus more like a community than a big feed
  4. Abbreviated feeds - you get just two or three lines of summary per post rather than the whole big schmakaround, which is a time saver for me (D. Smith is the only, lonely blogger there at the moment)
  5. The more the merrier- One more place to read and write R.


btw, is the name Insider (as in a guy who knows inside stuff) or Inside-R (as in get inside the R box)? Just kidding. With PlyR, ManipulatR, ApplyR and now Inside-R, the pun gets MerrieR.

If my blog app gets rejected, these views may change, grr.


Interesting R competition at Reddit


Here is an interesting R competition going on at Reddit and it is to help Reddit make a recommendation engine 🙂

http://www.reddit.com/r/redditdev/comments/dtg4j/want_to_help_reddit_build_a_recommender_a_public/

by ketralnis

As promised, here is the big dump of voting information that you guys donated to research. Warning: this contains much geekery that may result in discomfort for the nerd-challenged.

I’m trying to use it to build a recommender, and I’ve got some preliminary source code. I’m looking for feedback on all of these steps, since I’m not experienced at machine learning.

Here’s what I’ve done

  • I dumped all of the raw data that we’ll need to generate the public dumps. The queries are the comments in the two .pig files and it took about 52 minutes to do the dump against production. The result of this raw dump looks like:
    $ wc -l *.dump
     13,830,070 reddit_data_link.dump
    136,650,300 reddit_linkvote.dump
         69,489 reddit_research_ids.dump
     13,831,374 reddit_thing_link.dump
    
  • I filtered the list of votes for the list of users that gave us permission to use their data. For the curious, that’s 67,059 users: 62,763 with “public votes” and 6,726 with “allow my data to be used for research”. I’d really like to see that second category significantly increased, and hopefully this project will be what does it. This filtering is done by srrecs_researchers.pig and took 83m55.335s on my laptop.
  • I converted data-dumps that were in our DB schema format to a more useable format using srrecs.pig (about 13 min)
  • From that dump I mapped all of the account_ids, link_ids, and sr_ids to salted hashes (using obscure() in srrecs.py with a random seed, so even I don’t know it). This took about 13 min on my laptop. The result of this, votes.dump, is the file that is actually public. It is a tab-separated file consisting of:
    account_id,link_id,sr_id,dir
    

    There are 23,091,688 votes from 43,976 users over 3,436,063 links in 11,675 reddits. (Interestingly these ~44k users represent almost 17% of our total votes). The dump is 2.2gb uncompressed, 375mb in bz2.
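
(As a side note, a minimal sketch of pulling the public votes.dump into R for a first look could be as below; the column names follow the account_id / link_id / sr_id / dir layout described above, and with ~23 million rows this needs a fair amount of RAM.)

# Read the tab-separated public dump into a data frame
votes <- read.delim("votes.dump", header = FALSE,
                    col.names = c("account_id", "link_id", "sr_id", "dir"))
str(votes)          # inspect the columns
table(votes$dir)    # split of up-votes vs down-votes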

What to do with it

The recommendations system that I’m trying right now turns those votes into a set of affinities. That is, “67% of user #223’s votes on /r/reddit.com are upvotes, and 52% on programming.” To make these affinities (55m45.107s on my laptop):

 cat votes.dump | ./srrecs.py "affinities_m()" | sort -S200m | ./srrecs.py "affinities_r()" > affinities.dump

Then I turn the affinities into a sparse matrix representing N-dimensional co-ordinates in the vector space of affinities (scaled to -1..1 instead of 0..1), in the format used by R’s skmeans package (less than a minute on my laptop). Imagine that this matrix looks like

          reddit.com pics       programming horseporn  bacon
          ---------- ---------- ----------- ---------  -----
ketralnis -0.5       (no votes) +0.45       (no votes) +1.0
jedberg   (no votes) -0.25      +0.95       +1.0       -1.0
raldi     +0.75      +0.75      +0.7        (no votes) +1.0
...

We build it like:

# they were already grouped by account_id, so we don't have to
# sort. changes to the previous step will probably require this
# step to have to sort the affinities first
cat affinities.dump | ./srrecs.py "write_matrix('affinities.cm', 'affinities.clabel', 'affinities.rlabel')"

I pass that through an R program srrecs.r (if you don’t have R installed, you’ll need to install that, and the package skmeans, e.g. install.packages('skmeans')). This program plots the users in this vector space, finding clusters using a spherical k-means clustering algorithm (on my laptop it takes about 10 minutes with 15 clusters and 16 minutes with 50 clusters, during which R sits at about 220mb of RAM).

# looks for the files created by write_matrix in the current directory
R -f ./srrecs.r

The output of the program is a generated list of cluster-IDs, corresponding in order to the order of user-IDs in affinities.clabel. The numbers themselves are meaningless, but people in the same cluster ID have been clustered together.
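
For anyone who has not seen spherical k-means in R, here is a tiny self-contained sketch of the clustering step (not the actual srrecs.r; a made-up affinity matrix aff stands in for the real CLUTO files, with users as rows and subreddits as columns):

library(skmeans)                         # install.packages("skmeans") if needed
set.seed(42)
aff <- matrix(runif(200 * 10, -1, 1), nrow = 200,
              dimnames = list(paste0("user", 1:200), paste0("sr", 1:10)))
fit <- skmeans(aff, k = 15)              # spherical (cosine-based) k-means
table(fit$cluster)                       # cluster sizes; fit$cluster maps each user to a cluster ID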

Here are the files

These are torrents of bzip2-compressed files. If you can’t use the torrents for some reason it’s pretty trivial to figure out from the URL how to get to the files directly on S3, but please try the torrents first since it saves us a few bucks. It’s S3 seeding the torrents anyway, so it’s unlikely that direct-downloading is going to go any faster or be any easier.

  • votes.dump.bz2 — A tab-separated list of:
    account_id, link_id, sr_id, direction
    
  • For your convenience, a tab-separated list of votes already reduced to percent-affinities affinities.dump.bz2, formatted:
    account_id, sr_id, affinity (scaled 0..1)
    
  • For your convenience, affinities-matrix.tar.bz2 contains the R CLUTO format matrix files affinities.cm, affinities.clabel and affinities.rlabel

And the code

  • srrecs.pig and srrecs_researchers.pig — what I used to generate and format the dumps (you probably won’t need this)
  • mr_tools.py and srrecs.py — what I used to salt/hash the user information and generate the R CLUTO-format matrix files (you probably won’t need this unless you want different information in the matrix)
  • srrecs.r — the R-code to generate the clusters

Here’s what you can experiment with

  • The code isn’t nearly useable yet. We need to turn the generated clusters into an actual set of recommendations per cluster, preferably ordered by predicted match. We probably need to do some additional post-processing per user, too. (If they gave us an affinity of 0% to /r/askreddit, we shouldn’t recommend it, even if we predicted that the rest of their cluster would like it.)
  • We need a test suite to gauge the accuracy of the results of different approaches. This could be done by dividing the data-set and using 80% for training and 20% to see if the predictions made from that 80% match (a rough sketch of such a split follows this list).
  • We need to get the whole process to less than two hours, because that’s how often I want to run the recommender. It’s okay to use two or three machines to accomplish that and a lot of the steps can be done in parallel. That said we might just have to accept running it less often. It needs to run end-to-end with no user-intervention, failing gracefully on error
  • It would be handy to be able to identify the cluster of just a single user on-the-fly after generating the clusters in bulk.
  • The results need to be hooked into the reddit UI. If you’re willing to dive into the codebase, this one will be important as soon as the rest of the process is working and has a lot of room for creativity
  • We need to find the sweet spot for the number of clusters to use. Put another way, how many different types of redditors do you think there are? This could best be done using the aforementioned test-suite and a good-old-fashioned binary search.
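
On the train/test bullet above, a very rough sketch of the 80/20 split in R (hypothetical names: votes is a data frame with columns account_id, sr_id and dir, and predict_dir() is whatever predictor gets built from the training portion):

set.seed(1)
idx   <- sample(nrow(votes), size = floor(0.8 * nrow(votes)))
train <- votes[idx, ]                    # 80% used to build affinities/clusters
test  <- votes[-idx, ]                   # 20% held out for evaluation
# accuracy of the predicted vote direction on the held-out rows, e.g.
# mean(predict_dir(train, test) == test$dir)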

Some notes:

  • I’m not attached to doing this in R (I don’t even know much R, it just has a handy prebaked skmeans implementation). In fact I’m not attached to my methods here at all, I just want a good end-result.
  • This is my weekend fun project, so it’s likely to move very slowly if we don’t pick up enough participation here
  • The final version will run against the whole dataset, not just the public one. So even though I can’t release the whole dataset for privacy reasons, I can run your code and a test-suite against it

——————————————————————————————-

 

I am thinking of using Rattle with the arules package, and running it on EC2 to get the horsepower.
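
A hedged sketch of that arules idea, treating each user's upvoted subreddits as a "basket" and mining association rules between subreddits (hypothetical input: upvoted_srs is a list holding, for each user, the character vector of subreddits they upvoted):

library(arules)                                   # install.packages("arules") if needed
trans <- as(upvoted_srs, "transactions")          # one "basket" of subreddits per user
rules <- apriori(trans,
                 parameter = list(supp = 0.001, conf = 0.5, minlen = 2))
inspect(head(sort(rules, by = "lift"), 10))       # "users who like X also tend to like Y"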

How else do you think you can tackle a recommendation engine problem?

 

Ajay

 

Bringing Poetry to Life

Here is a new poetry book.

———————————————————————————————–

I’m excited to let you know about Carol Calkins, who is releasing her first book of poetry, entitled Bring Poetry to Life. This book is a powerful compilation of poetry touching on the most important moments in our everyday lives, from new beginnings, to special people and events, to endings and saying goodbye. Carol, who found her life purpose through poetry, is excited to release the first of a series of poetry books on Amazon. Grab your copy of Bring Poetry to Life today on Amazon.com. Find out more about Carol and her new book at http://www.bringpoetrytolife.com

We Said Goodbye a Thousand Times

 

Don’t be sad about my parting

Don’t feel like you never said goodbye

For you and I both know deep in our hearts

That We Said Goodbye a Thousand Times

And shared so much love and joy every day

 

Be happy that I am now at peace

Be joyful that I have lived a wonderful life

Be happy that we have shared so much together

 

And remember I am always with you in a thought and a sigh

Every day when you see the beauty in nature think of me

Every day when you see the colorful flowers think of me

Every day when you see a frisky animal prancing around think of me

Every day when you look into the eyes of someone you love think of me

 

And know beyond a doubt that I am with you in everything you do

And know beyond a doubt that I am with you in everything you say

And know beyond a doubt that I am with you in every quiet moment of your life

 

Don’t be sad about my parting

Don’t feel like you never said goodbye

For you and I both know deep in our hearts

That We Said Goodbye a Thousand Times

And shared so much love and joy every day

 

 

Scoring SAS and SPSS Models in the cloud


An announcement from Zementis and Predixion Software about using cloud computing for scoring models using PMML. Note that R has a pmml package as well, which is used by Rattle, the data mining GUI, for exporting models.
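
As a rough illustration of that export path (not Zementis' or Predixion's code, just the plain R route Rattle also uses), assuming the pmml and XML packages are installed:

library(pmml)                                       # PMML export
library(XML)                                        # saveXML()
library(rpart)
fit <- rpart(Species ~ ., data = iris)              # any model type pmml() supports
saveXML(pmml(fit), file = "iris_rpart.pmml")        # a PMML file a scoring engine can consume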

Source- http://www.marketwatch.com/story/predixion-software-introduces-new-product-to-run-sas-and-spss-predictive-models-in-the-cloud-2010-10-19?reflink=MW_news_stmp

——————————————————————————————————–

ALISO VIEJO, Calif., Oct 19, 2010 (BUSINESS WIRE) — Predixion Software today introduced Predixion PMML Connexion(TM), an interface that provides Predixion Insight(TM), the company’s low-cost, self-service in the cloud predictive analytics solution, direct and seamless access to SAS, SPSS (IBM) and other predictive models for use by Predixion Insight customers. Predixion PMML Connexion enables companies to leverage their significant investments in legacy predictive analytics solutions at a fraction of the cost of conventional licensing and maintenance fees.

The announcement was made at the Predictive Analytics World conference in Washington, D.C. where Predixion also announced a strategic partnership with Zementis, Inc., a market leader in PMML-based solutions. Zementis is exhibiting in Booth #P2.

The Predictive Model Markup Language (PMML) standard allows for true interoperability, offering a mature standard for moving predictive models seamlessly between platforms. Predixion has fully integrated this PMML functionality into Predixion Insight, meaning Predixion Insight users can now effortlessly import PMML-based predictive models, enabling information workers to score the models in the cloud from anywhere and publish reports using Microsoft Excel(R) and SharePoint(R). In addition, models can also be written back into SAS, SPSS and other platforms for a truly collaborative, interoperable solution.

“Predixion’s investment in this PMML interface makes perfect business sense as the lion’s share of the models in existence today are created by the SAS and SPSS platforms, creating compelling opportunity to leverage existing investments in predictive and statistical models on a low-cost cloud predictive analytics platform that can be fed with enterprise, line of business and cloud-based data,” said Mike Ferguson, CEO of Intelligent Business Strategies, a leading analyst and consulting firm specializing in the areas of business intelligence and enterprise business integration. “In this economy, Predixion’s low-cost, self-service predictive analytics solutions might be welcome relief to IT organizations chartered with quickly adding additional applications while at the same time cutting costs and staffing.”

“We are pleased to be partnering with Zementis, truly a PMML market leader and innovator,” said Predixion CEO Simon Arkell. “To allow any SAS or SPSS customer to immediately score any of their predictive models in the cloud from within Predixion Insight, compare those models to those created by Predixion Insight, and share the results within Excel and Sharepoint is an exciting step forward for the industry. SAS and SPSS customers are fed up with the high prices they must pay for their business users just to access reports generated by highly skilled PhDs who are burdened by performing routine tasks and thus have become a massive bottleneck. That frustration is now a thing of the past because any information worker can now unlock the power of predictive analytics without relying on experts — for a fraction of the cost and from anywhere they can connect to the cloud,” Arkell said.

Dr. Michael Zeller, Zementis CEO, added, “Our mission is to significantly shorten the time-to-market for predictive models in any industry. We are excited to be contributing to Predixion’s self-service, cloud-based predictive analytics solution set.”

About Predixion Software

Predixion Software develops and markets collaborative predictive analytics solutions in the public and private cloud. Predixion enables self-service predictive analytics, allowing customers to use and analyze large amounts of data to make actionable decisions, all within the familiar environment of Excel and PowerPivot. Predixion customers are achieving immediate results across a multitude of industries including: retail, finance, healthcare, marketing, telecommunications and insurance/risk management.

Predixion Software is headquartered in Aliso Viejo, California with development offices in Redmond, Washington. The company has venture capital backing from established investors including DFJ Frontier, Miramar Venture Partners and Palomar Ventures. For more information please contact us at 949-330-6540, or visit us at www.predixionsoftware.com.

About Zementis

Zementis, Inc. is a leading software company focused on the operational deployment and integration of predictive analytics and data mining solutions. Its ADAPA(R) decision engine successfully bridges the gap between science and engineering. ADAPA(R) was designed from the ground up to benefit from open standards and to significantly shorten the time-to-market for predictive models in any industry. For more information, please visit www.zementis.com.