Free and Open Source cannot get basic economics correct

[Image: Nutch robots, via Wikipedia]

Before you rev up those keyboards and shoot off a snarky comment, consider this: there are many ways to run (and ruin) economies, but no one has yet found a replacement for money. Yes, happiness is important. Yes, search engines are good.

So unless someone starts a new branch of economics, with a lot more motivational theory and psychology and a lot less quant, especially for open source projects, money (revenue, sales) is the only true measure of success in enterprise software. Particularly if you have competitors who are making more money selling the same class of software.

Popularity contests are for high school quarterbacks. So even if your open source software is popular in downloads, email discussions, or Stack Overflow...

Spam Analysis Akismet-WPStats-Blogging

Here is a brief dataset I put together after one hour of cutting and pasting from WordPress.com’s creative data formats. It shows spam, comments, traffic, and the number of posts written each month.

Clearly, monthly traffic is directly related to the number of posts I write (suppose Traffic = A + B * Posts).

But spam shows discontinuous growth, especially after a big traffic month (in which Reddit helped).

Akismet had some missing historical values (which is curious)

So what can we do with this data frame in R or any other statistical software?

Spam Analysis
Month  | Spam detected | Traffic (excluding spam) | Posts written | Traffic/Post | Spam/Post | Spam/Traffic | Ham detected | Missed spam | False positives
Feb-11 | 1848 | 5079  | 18 | 282.17 | 102.6667 | 36.39% | 4.00  | 6.00  | 0.0%
Jan-11 | 3724 | 10238 | 35 | 292.51 | 106.4    | 36.37% | 0.00  | 3.00  | 0.0%
Dec-10 | 3676 | 10345 | 35 | 295.57 | 105.0286 | 35.53% | 8.00  | 6.00  | 0.0%
Nov-10 | 3680 | 11723 | 71 | 165.11 | 51.83099 | 31.39% | 24.00 | 3.00  | 0.0%
Oct-10 | 2292 | 16430 | 71 | 231.41 | 32.28169 | 13.95% | 24.00 | 18.00 | 0.0%
Sep-10 | 0    | 17913 | 63 | 284.33 | 0        | 0.00%  | 0.00  | 0.00  | 0.0%
Aug-10 | 0    | 5403  | 17 | 317.82 | 0        | 0.00%  | 0.00  | 0.00  | 0.0%
Jul-10 | 2    | 5041  | 10 | 504.1  | 0.2      | 0.04%  | 0.00  | 0.00  | 0.0%
Jun-10 | 5    | 4271  | 11 | 388.27 | 0.454545 | 0.12%  | 10.00 | 1.00  | 0.0%
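
For instance, here is a minimal sketch in R (my own toy code, nothing official): the table above re-entered as a data frame, with the simple linear model Traffic = A + B * Posts fitted to it.

# The table above re-entered as an R data frame
stats <- data.frame(
  Month   = c("Feb-11", "Jan-11", "Dec-10", "Nov-10", "Oct-10",
              "Sep-10", "Aug-10", "Jul-10", "Jun-10"),
  Spam    = c(1848, 3724, 3676, 3680, 2292, 0, 0, 2, 5),
  Traffic = c(5079, 10238, 10345, 11723, 16430, 17913, 5403, 5041, 4271),
  Posts   = c(18, 35, 35, 71, 71, 63, 17, 10, 11)
)

# The simple model suggested above: Traffic = A + B * Posts
fit <- lm(Traffic ~ Posts, data = stats)
summary(fit)

# Spam as a share of traffic, month by month, to eyeball the discontinuous jump
round(100 * stats$Spam / stats$Traffic, 1)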

Why do bloggers blog?

[Image: Xbox (revision 1.0) internal layout, via Wikipedia]

Step 1 is to create internal motivation to create a blog in the first place

Step 2 is to find what to write

Reasons Bloggers Blog-

Basic- Ranting


Examples- I hate Facebook: the Platform team treats me badly with waits and breaks my code.

SAS Marketing won’t give me a big discount to make me look good in front of my boss.

Companies won’t give me their software for free, even though I will use it to make money (and not to play Xbox).

I want my vendors to be FOSS but my customers to switch to SaaS.

Google won’t do this, Apple won’t do that, Microsoft won’t do those.

Revolution would give me 4 great packages but not the open source code for RevoScaleR (which only 300 people would understand in the first place).

Safety-

I had better kiss up to the Professor and give him a turkey for dinner, as he sits on my thesis committee.

I will recommend Prof X’s lousy book in the hope he recommends my lousy book as a textbook too.

It is safe to laugh when the boss is making a joke; I should comment on her corporate blog and retweet her.

Belonging-

I belong to this great online community of smart people. Let me agree to what they say.

I really believe in EVERYTHING that ALL the 2 MILLION members of the community have to say ALL the TIME.

I belong to this online community because all my friends are on my computer.

Egoistic-

My blog page rank is now X plus delta tau because of sugary key words (2004)

My technorati numbers rise (2005)

I was once on Digg (2007)

I have Z * exp N followers on Twitter and even more on Facebook (2008)

My Klout is increasing on Twitter, and my Stack Overflow reputation’s cup floweth over. (2009)

My Karma on Reddit is more important than my Karma in real life (2010)

Self Actualization-

I’ve got time to kill, and I think I may learn more, meet interesting people, and discover something wandering on the internet.

“Not all those who wander are lost” - Wikiquote

I’ve got a story to tell, poems to write, code to give away. A free blog is something a Chinese, an Iranian, and a North Korean really, really know the value of.

But after all that, WHY Do Bloggers Blog?

  • Because we are still waiting for Facebook to create the Blog Killer.
  • It’s better than saying I am unemployed and a social loner
  • Reddit Karma feels good. Any Karma of any kind.

Reputation on Social Networks

[Image: Law of Diminishing Marginal Utility, via Wikipedia]

Classical economics talks of the value of utility, and of diminishing marginal utility when the same thing is repeated again and again (like spam in an online community).

Stack Overflow has a great way of measuring reputation, and thus allows intangible benefits and awards, similar to Wikipedia badges or Reddit karma. Utility is also auto-generated, like @klout scores or list memberships on Twitter, and other successful open source communities online, including the Ubuntu forums, have ways to create hierarchies even in classless, utopian communities.

Basically, reputation then acts as a motivating game, as the mostly male population races to run up the numbers.

 

On Stack Overflow you can get buddies to upvote you, so it basically acts as a role-playing game too.

From http://stackoverflow.com/faq#reputation:

To gain reputation, post good questions and useful answers. Your peers will vote on your posts, and those votes will cause you to gain (or, in rare cases, lose) reputation:

answer is voted up +10
question is voted up +5
answer is accepted +15 (+2 to acceptor)
post is voted down -2 (-1 to voter)

A maximum of 30 votes can be cast per user per day, and you can earn a maximum of 200 reputation per day (although accepted answers and bounty awards are immune to this limit). Also, please note that votes for any posts marked “community wiki” do not generate reputation.

Amass enough reputation points and Stack Overflow will allow you to go beyond simply asking and answering questions:

15 Vote up
15 Flag offensive
50 Leave comments
100 Edit community wiki posts
125 Vote down (costs 1 rep)
200 Reduced advertising
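
Just to make that arithmetic concrete, here is a toy R function for the reputation rules quoted above (my own sketch, not Stack Overflow’s code); note that per the FAQ, accepted answers are exempt from the 200-point daily cap.

# Toy sketch of the quoted reputation rules (not Stack Overflow's actual code)
daily_rep <- function(answer_upvotes = 0, question_upvotes = 0,
                      accepted_answers = 0, downvotes_received = 0) {
  capped <- answer_upvotes * 10 + question_upvotes * 5 - downvotes_received * 2
  capped <- min(capped, 200)        # the 200-rep daily cap on ordinary votes
  capped + accepted_answers * 15    # accepted answers bypass the cap
}

# 25 answer upvotes and 4 question upvotes hit the cap; 2 accepts add 30 more
daily_rep(answer_upvotes = 25, question_upvotes = 4, accepted_answers = 2)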

Interesting R competition at Reddit

[Image: Reddit, via CrunchBase]

Here is an interesting R competition going on at Reddit; it is to help Reddit build a recommendation engine 🙂

http://www.reddit.com/r/redditdev/comments/dtg4j/want_to_help_reddit_build_a_recommender_a_public/

by ketralnis

As promised, here is the big dump of voting information that you guys donated to research. Warning: this contains much geekery that may result in discomfort for the nerd-challenged.

I’m trying to use it to build a recommender, and I’ve got some preliminary source code. I’m looking for feedback on all of these steps, since I’m not experienced at machine learning.

Here’s what I’ve done

  • I dumped all of the raw data that we’ll need to generate the public dumps. The queries are the comments in the two .pig files and it took about 52 minutes to do the dump against production. The result of this raw dump looks like:
    $ wc -l *.dump
     13,830,070 reddit_data_link.dump
    136,650,300 reddit_linkvote.dump
         69,489 reddit_research_ids.dump
     13,831,374 reddit_thing_link.dump
    
  • I filtered the list of votes for the list of users that gave us permission to use their data. For the curious, that’s 67,059 users: 62,763 with “public votes” and 6,726 with “allow my data to be used for research”. I’d really like to see that second category significantly increased, and hopefully this project will be what does it. This filtering is done by srrecs_researchers.pig and took 83m55.335s on my laptop.
  • I converted data-dumps that were in our DB schema format to a more usable format using srrecs.pig (about 13 min)
  • From that dump I mapped all of the account_ids, link_ids, and sr_ids to salted hashes (using obscure() in srrecs.py with a random seed, so even I don’t know it). This took about 13 min on my laptop. The result of this, votes.dump, is the file that is actually public. It is a tab-separated file consisting of:
    account_id,link_id,sr_id,dir
    

    There are 23,091,688 votes from 43,976 users over 3,436,063 links in 11,675 reddits. (Interestingly these ~44k users represent almost 17% of our total votes). The dump is 2.2gb uncompressed, 375mb in bz2.

What to do with it

The recommendation system that I’m trying right now turns those votes into a set of affinities. That is, “67% of user #223’s votes on /r/reddit.com are upvotes, and 52% on programming”. To make these affinities (55m45.107s on my laptop):

 cat votes.dump | ./srrecs.py "affinities_m()" | sort -S200m | ./srrecs.py "affinities_r()" > affinities.dump

Then I turn the affinities into a sparse matrix representing N-dimensional co-ordinates in the vector space of affinities (scaled to -1..1 instead of 0..1), in the format used by R’s skmeans package (less than a minute on my laptop). Imagine that this matrix looks like

          reddit.com pics       programming horseporn  bacon
          ---------- ---------- ----------- ---------  -----
ketralnis -0.5       (no votes) +0.45       (no votes) +1.0
jedberg   (no votes) -0.25      +0.95       +1.0       -1.0
raldi     +0.75      +0.75      +0.7        (no votes) +1.0
...

We build it like:

# they were already grouped by account_id, so we don't have to
# sort. changes to the previous step will probably require this
# step to have to sort the affinities first
cat affinities.dump | ./srrecs.py "write_matrix('affinities.cm', 'affinities.clabel', 'affinities.rlabel')"

I pass that through an R program srrecs.r (if you don’t have R installed, you’ll need to install that, and the package skmeans, e.g. install.packages('skmeans')). This program plots the users in this vector space, finding clusters using a spherical k-means clustering algorithm (on my laptop, it takes about 10 minutes with 15 clusters and 16 minutes with 50 clusters, during which R sits at about 220 MB of RAM).

# looks for the files created by write_matrix in the current directory
R -f ./srrecs.r

The output of the program is a generated list of cluster-IDs, corresponding in order to the order of user-IDs in affinities.clabel. The numbers themselves are meaningless, but people in the same cluster ID have been clustered together.

Here are the files

These are torrents of bzip2-compressed files. If you can’t use the torrents for some reason it’s pretty trivial to figure out from the URL how to get to the files directly on S3, but please try the torrents first since it saves us a few bucks. It’s S3 seeding the torrents anyway, so it’s unlikely that direct-downloading is going to go any faster or be any easier.

  • votes.dump.bz2 — A tab-separated list of:
    account_id, link_id, sr_id, direction
    
  • For your convenience, a tab-separated list of votes already reduced to percent-affinities affinities.dump.bz2, formatted:
    account_id, sr_id, affinity (scaled 0..1)
    
  • For your convenience, affinities-matrix.tar.bz2 contains the R CLUTO format matrix files affinities.cm, affinities.clabel, and affinities.rlabel

And the code

  • srrecs.pig and srrecs_researchers.pig — what I used to generate and format the dumps (you probably won’t need this)
  • mr_tools.py and srrecs.py — what I used to salt/hash the user information and generate the R CLUTO-format matrix files (you probably won’t need this unless you want different information in the matrix)
  • srrecs.r — the R-code to generate the clusters

Here’s what you can experiment with

  • The code isn’t nearly useable yet. We need to turn the generated clusters into an actual set of recommendations per cluster, preferably ordered by predicted match. We probably need to do some additional post-processing per user, too. (If they gave us an affinity of 0% to /r/askreddit, we shouldn’t recommend it, even if we predicted that the rest of their cluster would like it.)
  • We need a test suite to gauge the accuracy of the results of different approaches. This could be done by dividing the data-set, using 80% for training and 20% to see if the predictions made from that 80% match.
  • We need to get the whole process to less than two hours, because that’s how often I want to run the recommender. It’s okay to use two or three machines to accomplish that and a lot of the steps can be done in parallel. That said we might just have to accept running it less often. It needs to run end-to-end with no user-intervention, failing gracefully on error
  • It would be handy to be able to identify the cluster of just a single user on-the-fly after generating the clusters in bulk
  • The results need to be hooked into the reddit UI. If you’re willing to dive into the codebase, this one will be important as soon as the rest of the process is working and has a lot of room for creativity
  • We need to find the sweet spot for the number of clusters to use. Put another way, how many different types of redditors do you think there are? This could best be done using the aforementioned test-suite and a good-old-fashioned binary search.

Some notes:

  • I’m not attached to doing this in R (I don’t even know much R, it just has a handy prebaked skmeans implementation). In fact I’m not attached to my methods here at all, I just want a good end-result.
  • This is my weekend fun project, so it’s likely to move very slowly if we don’t pick up enough participation here
  • The final version will run against the whole dataset, not just the public one. So even though I can’t release the whole dataset for privacy reasons, I can run your code and a test-suite against it

——————————————————————————————-
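
To see what the quoted pipeline boils down to, here is a rough, untested sketch of the affinity and clustering steps in plain R (my own reading of the post, not the actual srrecs code; it assumes dir in votes.dump is +1 for an upvote and -1 for a downvote).

# Read the public dump: account_id, link_id, sr_id, dir (tab-separated)
library(slam)     # sparse simple triplet matrices, as used by skmeans
library(skmeans)  # spherical k-means
votes <- read.delim("votes.dump", header = FALSE,
                    col.names = c("account_id", "link_id", "sr_id", "dir"))

# Affinity = share of a user's votes in a subreddit that are upvotes,
# rescaled from 0..1 to -1..1 as described in the post
aff <- aggregate(dir ~ account_id + sr_id, data = votes,
                 FUN = function(d) 2 * mean(d > 0) - 1)
names(aff)[3] <- "affinity"

# Drop users whose affinities are all exactly zero (skmeans needs nonzero rows)
zero_user <- tapply(aff$affinity, aff$account_id, function(a) all(a == 0))
aff <- aff[!zero_user[as.character(aff$account_id)], ]

# Sparse users x subreddits matrix; subreddits with no votes stay empty
users <- factor(aff$account_id)
subs  <- factor(aff$sr_id)
m <- simple_triplet_matrix(i = as.integer(users), j = as.integer(subs),
                           v = aff$affinity,
                           dimnames = list(levels(users), levels(subs)))

# Spherical k-means with 15 clusters, as in the quoted run
cl <- skmeans(m, k = 15)
table(cl$cluster)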

 

I am thinking of using Rattle and the arules package, and running it on EC2 to get the horsepower.
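
For what it is worth, here is a minimal sketch of that arules idea (my own toy code, using the public votes.dump format above and made-up support/confidence thresholds): treat the set of subreddits each user has upvoted as one transaction and mine association rules over those baskets.

# Toy sketch: association rules over users' upvoted subreddits
library(arules)
votes <- read.delim("votes.dump", header = FALSE,
                    col.names = c("account_id", "link_id", "sr_id", "dir"))
upvotes <- votes[votes$dir > 0, ]

# One transaction per user: the distinct subreddits they upvoted in
baskets <- lapply(split(as.character(upvotes$sr_id), upvotes$account_id), unique)
trans   <- as(baskets, "transactions")

# Mine rules; the support/confidence thresholds here are guesses to tune
rules <- apriori(trans, parameter = list(supp = 0.01, conf = 0.5, minlen = 2))
inspect(head(sort(rules, by = "lift"), 10))

Rules like {programming, pics} => {reddit.com} could then be turned into per-user recommendations by matching each user's basket against the rule antecedents.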

How else do you think you can tackle a recommendation engine problem?

 

Ajay

 
