How to use BitTorrent

I really like qBittorrent, available from http://www.qbittorrent.org/. I think BitTorrent should be the default way of sharing huge content, especially software downloads. For protecting intellectual property, there should be much better codes and software keys than are presently available.

The qBittorrent project aims to provide a Free Software alternative to µtorrent. Additionally, qBittorrent runs and provides the same features on all major platforms (Linux, Mac OS X, Windows, OS/2, FreeBSD).

qBittorrent is based on the Qt4 toolkit and libtorrent-rasterbar.

qBittorrent v2 Features

  • Polished µTorrent-like User Interface
  • Well-integrated and extensible Search Engine
    • Simultaneous search on the most famous BitTorrent search sites
    • Per-category-specific search requests (e.g. Books, Music, Movies)
  • All BitTorrent extensions
    • DHT, Peer Exchange, Full encryption, Magnet/BitComet URIs, …
  • Remote control through a Web user interface
    • Nearly identical to the regular UI, all in Ajax
  • Advanced control over trackers, peers and torrents
    • Torrents queueing and prioritizing
    • Torrent content selection and prioritizing
  • UPnP / NAT-PMP port forwarding support
  • Available in ~25 languages (Unicode support)
  • Torrent creation tool
  • Advanced RSS support with download filters (inc. regex)
  • Bandwidth scheduler
  • IP Filtering (eMule and PeerGuardian compatible)
  • IPv6 compliant
  • Sequential downloading (aka “Download in order”)
  • Available on most platforms: Linux, Mac OS X, Windows, OS/2, FreeBSD
So if you are new to BitTorrent, here is a brief tutorial.
Some terminology first:

Tracker

A tracker is a server that keeps track of which seeds and peers are in the swarm.

Seed

A seed is a peer that has 100% of the data. When a leech obtains 100% of the data, it automatically becomes a seed.

Peer

A peer is one instance of a BitTorrent client running on a computer on the Internet to which other clients connect and transfer data.

Leech

Leech is a term with two meanings. Primarily, it refers to a peer (or peers) that has a negative effect on the swarm by having a very poor share ratio: downloading much more than they upload, for a ratio below 1.0. (The second, more neutral meaning is simply a peer that does not yet have the complete file.)
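
To make the terminology concrete, here is a minimal sketch (in Python) of the HTTP announce request a peer sends to a tracker. The parameter names come from the standard BitTorrent tracker protocol; the tracker URL is one of the public trackers listed later in this post, and every value is a placeholder rather than a real torrent.

    # Sketch of a BitTorrent HTTP tracker announce; all values are placeholders.
    # A real client derives info_hash from the .torrent metainfo and reports
    # real byte counts (uploaded/downloaded is what the share ratio is based on).
    import requests  # assumes the requests package is installed

    announce = "http://open.tracker.thepiratebay.org/announce"
    params = {
        "info_hash": bytes(20),             # 20-byte SHA-1 of the info dict (zeroed placeholder)
        "peer_id": "-XX0001-abcdefghijkl",  # 20-byte client identifier (made up)
        "port": 6881,                       # port we accept incoming peer connections on
        "uploaded": 0,                      # bytes uploaded so far
        "downloaded": 0,                    # bytes downloaded so far
        "left": 1048576,                    # bytes still needed; 0 would mean we are a seed
        "event": "started",
    }
    response = requests.get(announce, params=params)
    # The tracker replies with a bencoded dictionary containing the peer list.
    print(response.status_code)
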
1) Download and install the software from http://www.qbittorrent.org/
2) If you want to search for new files, you can use the nice built-in search features.
3) If you want to CREATE new torrents, go to Tools → Torrent Creator (a scripted alternative is sketched after the tracker list below).
4) For sharing content, just seed the torrent you created. What is seeding? Hey, did you read the terminology at the beginning?
5) Additionally, you can add public trackers to your torrents:

Trackers: Below are some popular public trackers. They are servers which help peers to communicate.

Here are some good trackers you can use:


http://open.tracker.thepiratebay.org/announce
http://www.torrent-downloads.to:2710/announce
http://denis.stalker.h3q.com:6969/announce
udp://denis.stalker.h3q.com:6969/announce
http://www.sumotracker.com/announce
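
If you would rather script step 3 than use the GUI, here is a minimal sketch using the python-libtorrent bindings (the same libtorrent-rasterbar library qBittorrent is built on). The directory and file names are placeholders, and it assumes the python-libtorrent package is installed.

    # Sketch: build a .torrent for a directory and attach one of the public
    # trackers above. Paths and names are placeholders.
    import libtorrent as lt

    fs = lt.file_storage()
    lt.add_files(fs, "my_release")          # directory (or single file) to share
    t = lt.create_torrent(fs)
    t.add_tracker("http://open.tracker.thepiratebay.org/announce")
    lt.set_piece_hashes(t, ".")             # hash the pieces; argument is the parent directory
    with open("my_release.torrent", "wb") as f:
        f.write(lt.bencode(t.generate()))
    # Open my_release.torrent in qBittorrent and leave it running to seed.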


Super-seeding

When a file is new, much time can be wasted because the seeding client might send the same file piece to many different peers, while other pieces have not yet been downloaded at all. Some clients, like ABC, Vuze, BitTornado, TorrentStorm, and µTorrent, have a “super-seed” mode, where they try to only send out pieces that have never been sent out before, theoretically making the initial propagation of the file much faster. However, super-seeding becomes less effective and may even reduce performance compared to the normal “rarest first” model in cases where some peers have poor or limited connectivity. This mode is generally used only for a new torrent, or one which must be re-seeded because no other seeds are available.
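
To illustrate the difference described above, here is a toy sketch of the two piece-selection policies. Real clients also weigh peer availability and request pipelining; this is just the core idea, not any client's actual code.

    # Toy contrast of "rarest first" vs. "super-seed" piece selection.
    # availability[p] = how many peers already have piece p;
    # sent[p] = whether this seed has ever uploaded piece p.
    import random

    def pick_rarest_first(availability):
        # Normal policy: offer the piece the fewest peers have.
        rarest = min(availability.values())
        return random.choice([p for p, n in availability.items() if n == rarest])

    def pick_super_seed(availability, sent):
        # Super-seed policy: only offer pieces never sent before, so the
        # initial copy propagates without duplicated uploads.
        unsent = [p for p in availability if not sent.get(p)]
        if unsent:
            return random.choice(unsent)
        return pick_rarest_first(availability)  # everything sent once: fall back

    availability = {0: 3, 1: 1, 2: 1, 3: 5}
    sent = {0: True, 1: True}
    print(pick_super_seed(availability, sent))  # picks piece 2 or 3, never 0 or 1
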
Note: you use this tutorial and any or all steps at your own risk. I am not legally responsible for any mishaps you get into. Please be responsible while being an efficient BitTorrent user. That means respecting individual property rights.

Comparing BitTorrent Downloaders


I personally like µTorrent on Windows and KTorrent on Linux.

While I am no expert on this, I like anything that gets the data down faster while maximizing my pipe's efficiency.

I also prefer torrenting to the sudo apt-get method of downloading software, or the zip/unzip, tar/untar, make/install routine.

Torrenting is a simpler way of sharing applications, but it is sadly not used much by the statistical computing community for sharing downloads.

Also, I think any dashboard or visualization should be sorted (not alphabetically, but numerically/categorically).

SORT THE DASHBOARD — KEEP IT SORTED

So I am partially recreating, after sorting, the data viz from http://en.wikipedia.org/wiki/Comparison_of_BitTorrent_clients

BitTorrent client         | Magnet URI | Super-seeding | Embedded tracker  | UPnP | NAT-PMP | NAT traversal | DHT | Peer exchange | Encryption | UDP tracker | LPD
--------------------------|------------|---------------|-------------------|------|---------|---------------|-----|---------------|------------|-------------|--------
µTorrent                  | Yes        | Yes           | Yes               | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | Yes
BitSpirit                 | Yes        | Yes           | Yes               | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | No
BitTorrent 6              | Yes        | Yes           | Yes               | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | Yes
OneSwarm                  | Yes        | Yes           | Yes               | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | No
qBittorrent               | Yes        | Yes           | Yes               | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | Yes
SoMud                     | Yes        | Yes           | Yes               | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | Yes
Vuze (formerly Azureus)   | Yes        | Yes           | Yes               | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | No
BitComet                  | Yes        | Yes           | Separate download | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | No
Tixati                    | Yes        | Yes           | No                | Yes  | No      | No            | Yes | Yes           | Yes        | Yes         | Partial
Aria2                     | Yes        | No            | Yes               | No   | No      | No            | Yes | Yes           | Yes        | Yes         | Yes
Tribler                   | Yes        | No            | Yes               | Yes  | Yes     | No            | Yes | Yes           | Yes        | No          | No
Bitflu                    | Yes        | No            | No                | No   | No      | No            | Yes | Yes           | No         | Yes         | No
Deluge                    | Yes        | No            | No                | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | Yes
Flush                     | Yes        | No            | No                | Yes  | Yes     | No            | Yes | Yes           | No         | No          | Yes
KTorrent                  | Yes        | No            | No                | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | Yes         | Partial
Shareaza                  | Yes        | No            | No                | Yes  | Yes     | No            | Yes | Yes           | No         | No          | No
Transmission              | Yes        | No            | No                | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | No          | Yes
LimeWire                  | Partial    | Yes           | Yes               | Yes  | Yes     | No            | Yes | Yes           | Yes        | Yes         | No
BitTyrant                 | No         | Yes           | Yes               | Yes  | Yes     | Yes           | Yes | Yes           | Yes        | No          | No
BitTornado                | No         | Yes           | Yes               | Yes  | No      | No            | No  | No            | Yes        | No          | No
Torrent Swapper           | No         | Yes           | Yes               | Yes  | No      | No            | No  | Yes           | No         | No          | No
Localhost                 | No         | Yes           | Yes               | Yes  | No      | Yes           | Yes | No            | No         | No          | No
Meerkat Bittorrent Client | No         | Yes           | No                | Yes  | Yes     | Yes           | Yes | No            | Yes        | No          | No
rTorrent                  | No         | Yes           | No                | No   | No      | No            | Yes | Yes           | Yes        | Yes         | No
TorrentFlux               | No         | Yes           | No                | Yes  | No      | No            | No  | No            | Yes        | No          | No
TorrentVolve              | No         | Partial       | No                | Partial | Partial | Partial    | Partial | Partial   | Partial    | Partial     | No
Opera                     | No         | No            | Yes               | No   | No      | No            | No  | Yes           | No         | No          | No
BitTorrent 5 / Mainline   | No         | No            | Yes               | Yes  | Yes     | No            | Yes | Yes           | Yes        | No          | No
ABC                       | No         | No            | Yes               | Yes  | No      | No            | No  | No            | No         | No          | No
Blog Torrent              | No         | No            | Yes               | No   | No      | No            | No  | No            | No         | No          | No
MLDonkey                  | No         | No            | Yes               | Yes  | Yes     | No            | No  | No            | No         | Yes         | No
Tomato Torrent            | No         | No            | Yes               | No   | No      | No            | Yes | No            | No         | No          | No
Acquisition               | No         | No            | No                | No   | Yes     | No            | No  | No            | No         | No          | No
Arctic Torrent            | No         | No            | No                | No   | No      | No            | No  | Yes           | No         | No          | No
BitLet                    | No         | No            | No                | Yes  | No      | No            | No  | No            | No         | No          | No
BitLord                   | No         | No            | No                | Yes  | No      | Yes           | No  | Yes           | No         | Yes         | No
BitThief                  | No         | No            | No                | No   | No      | No            | No  | No            | No         | No          | No
Bits on Wheels            | No         | No            | No                | No   | No      | No            | No  | No            | No         | No          | No
BTG                       | No         | No            | No                | Yes  | Yes     | No            | Yes | Yes           | Yes        | Yes         | No
BTPD                      | No         | No            | No                | No   | No      | No            | No  | No            | No         | No          | No
FlashGet                  | No         | No            | No                | No   | No      | No            | Yes | No            | Yes        | No          | No
Folx                      | No         | No            | No                | Yes  | Yes     | No            | Yes | Yes           | No         | Yes         | No
Free Download Manager     | No         | No            | No                | No   | No      | No            | Yes | Yes           | No         | No          | No
G3 Torrent                | No         | No            | No                | No   | No      | No            | No  | No            | No         | No          | No
Gnome BitTorrent          | No         | No            | No                | No   | No      | No            | No  | No            | No         | No          | No
Halite                    | No         | No            | No                | Yes  | Yes     | No            | Yes | No            | Yes        | No          | No
QTorrent                  | No         | No            | No                | No   | No      | No            | No  | No            | No         | No          | No
Rufus                     | No         | No            | No                | No   | No      | No            | No  | No            | No         | No          | No
SymTorrent                | No         | No            | No                | N/A  | N/A     | N/A           | No  | No            | No         | No          | No
Tonido Torrent            | No         | No            | No                | Yes  | Yes     | Yes           | Yes | No            | No         | No          | No
Torium                    | No         | No            | No                | Yes  | No      | No            | Yes | No            | No         | No          | No
ZipTorrent                | No         | No            | No                | Yes  | Yes     | No            | No  | Yes           | No         | No          | No


Interesting R competition at Reddit


Here is an interesting R competition going on at Reddit: it is to help Reddit build a recommendation engine 🙂

http://www.reddit.com/r/redditdev/comments/dtg4j/want_to_help_reddit_build_a_recommender_a_public/

by ketralnis

As promised, here is the big dump of voting information that you guys donated to research. Warning: this contains much geekery that may result in discomfort for the nerd-challenged.

I’m trying to use it to build a recommender, and I’ve got some preliminary source code. I’m looking for feedback on all of these steps, since I’m not experienced at machine learning.

Here’s what I’ve done

  • I dumped all of the raw data that we’ll need to generate the public dumps. The queries are the comments in the two .pig files and it took about 52 minutes to do the dump against production. The result of this raw dump looks like:
    $ wc -l *.dump
     13,830,070 reddit_data_link.dump
    136,650,300 reddit_linkvote.dump
         69,489 reddit_research_ids.dump
     13,831,374 reddit_thing_link.dump
    
  • I filtered the list of votes for the list of users that gave us permission to use their data. For the curious, that’s 67,059 users: 62,763 with “public votes” and 6,726 with “allow my data to be used for research”. I’d really like to see that second category significantly increased, and hopefully this project will be what does it. This filtering is done by srrecs_researchers.pig and took 83m55.335s on my laptop.
  • I converted data-dumps that were in our DB schema format to a more usable format using srrecs.pig (about 13 min)
  • From that dump I mapped all of the account_ids, link_ids, and sr_ids to salted hashes (using obscure() in srrecs.py with a random seed, so even I don’t know it; a rough sketch of this kind of salted hashing follows after this list). This took about 13 min on my laptop. The result of this, votes.dump, is the file that is actually public. It is a tab-separated file consisting of:
    account_id,link_id,sr_id,dir
    

    There are 23,091,688 votes from 43,976 users over 3,436,063 links in 11,675 reddits. (Interestingly, these ~44k users represent almost 17% of our total votes.) The dump is 2.2 GB uncompressed, 375 MB in bz2.
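
The obscure() function itself is not in the dump, but the salted hashing described in the list above works roughly like this sketch; the salt value and the 12-character truncation are illustrative assumptions, not the real implementation.

    # Illustrative stand-in for the obscure() salting described above: each
    # account_id/link_id/sr_id becomes a keyed hash, so relationships survive
    # anonymization but the original reddit IDs cannot be recovered.
    import hashlib

    SALT = b"some-random-seed"  # placeholder; the real seed is secret/discarded

    def obscure_id(raw_id):
        # Same input + same salt -> same token, so joins across files still work.
        return hashlib.sha1(SALT + str(raw_id).encode()).hexdigest()[:12]

    print(obscure_id(223))  # deterministic opaque token for account 223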

What to do with it

The recommendations system that I’m trying right now turns those votes into a set of affinities. That is, “67% of user #223’s votes on /r/reddit.com are upvotes, and 52% on programming”. To make these affinities (55m45.107s on my laptop):

 cat votes.dump | ./srrecs.py "affinities_m()" | sort -S200m | ./srrecs.py "affinities_r()" > affinities.dump
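
The internals of srrecs.py are not reproduced here, but conceptually the affinity step reduces to the following standalone sketch. The field order follows the votes.dump format above; the vote-direction encoding (1 for an upvote) is an assumption.

    # Standalone sketch of the affinity computation: for each (user, subreddit),
    # affinity = fraction of that user's votes there that are upvotes.
    # Conceptual only -- not the actual affinities_m()/affinities_r() code.
    import csv
    from collections import defaultdict

    ups, total = defaultdict(int), defaultdict(int)
    with open("votes.dump") as f:
        for account_id, link_id, sr_id, direction in csv.reader(f, delimiter="\t"):
            total[(account_id, sr_id)] += 1
            if direction == "1":            # assumed encoding: 1 = upvote
                ups[(account_id, sr_id)] += 1

    for (account_id, sr_id), n in total.items():
        affinity = ups[(account_id, sr_id)] / n  # 0..1, as in the public dump
        scaled = 2 * affinity - 1                # -1..1, as used for clustering below
        print(f"{account_id}\t{sr_id}\t{affinity:.3f}")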

Then I turn the affinities into a sparse matrix representing N-dimensional co-ordinates in the vector space of affinities (scaled to -1..1 instead of 0..1), in the format used by R’s skmeans package (less than a minute on my laptop). Imagine that this matrix looks like

          reddit.com pics       programming horseporn  bacon
          ---------- ---------- ----------- ---------  -----
ketralnis -0.5       (no votes) +0.45       (no votes) +1.0
jedberg   (no votes) -0.25      +0.95       +1.0       -1.0
raldi     +0.75      +0.75      +0.7        (no votes) +1.0
...

We build it like:

# they were already grouped by account_id, so we don't have to
# sort. changes to the previous step will probably require this
# step to have to sort the affinities first
cat affinities.dump | ./srrecs.py "write_matrix('affinities.cm', 'affinities.clabel', 'affinities.rlabel')"
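
For reference, the CLUTO sparse-matrix format that write_matrix targets is, as far as I recall, a header line giving the number of rows, columns, and non-zero entries, followed by one line per user of 1-indexed column/value pairs; check the CLUTO documentation before relying on this. The toy matrix above would come out roughly as:

    3 5 11
    1 -0.5  3 0.45  5 1.0
    2 -0.25 3 0.95  4 1.0  5 -1.0
    1 0.75  2 0.75  3 0.7  5 1.0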

I pass that through an R program, srrecs.r (if you don’t have R installed, you’ll need to install that, plus the package skmeans, e.g. install.packages('skmeans')). This program plots the users in this vector space, finding clusters using a spherical k-means clustering algorithm (on my laptop it takes about 10 minutes with 15 clusters and 16 minutes with 50 clusters, during which R sits at about 220 MB of RAM).

# looks for the files created by write_matrix in the current directory
R -f ./srrecs.r

The output of the program is a generated list of cluster-IDs, corresponding in order to the order of user-IDs in affinities.clabel. The numbers themselves are meaningless, but people with the same cluster ID have been clustered together.

Here are the files

These are torrents of bzip2-compressed files. If you can’t use the torrents for some reason it’s pretty trivial to figure out from the URL how to get to the files directly on S3, but please try the torrents first since it saves us a few bucks. It’s S3 seeding the torrents anyway, so it’s unlikely that direct-downloading is going to go any faster or be any easier.

  • votes.dump.bz2 — A tab-separated list of:
    account_id, link_id, sr_id, direction
    
  • For your convenience, a tab-separated list of votes already reduced to percent-affinities, affinities.dump.bz2, formatted:
    account_id, sr_id, affinity (scaled 0..1)
    
  • For your convenience, affinities-matrix.tar.bz2 contains the R CLUTO format matrix files affinities.cm, affinities.clabel, and affinities.rlabel

And the code

  • srrecs.pig, srrecs_researchers.pig — what I used to generate and format the dumps (you probably won’t need these)
  • mr_tools.py, srrecs.py — what I used to salt/hash the user information and generate the R CLUTO-format matrix files (you probably won’t need these unless you want different information in the matrix)
  • srrecs.r — the R-code to generate the clusters

Here’s what you can experiment with

  • The code isn’t nearly useable yet. We need to turn the generated clusters into an actual set of recommendations per cluster, preferably ordered by predicted match. We probably need to do some additional post-processing per user, too. (If they gave us an affinity of 0% to /r/askreddit, we shouldn’t recommend it, even if we predicted that the rest of their cluster would like it.)
  • We need a test suite to gauge the accuracy of the results of different approaches. This could be done by dividing the data set and using 80% for training and 20% to see whether the predictions made from that 80% match (a rough sketch of this follows after the list).
  • We need to get the whole process to less than two hours, because that’s how often I want to run the recommender. It’s okay to use two or three machines to accomplish that, and a lot of the steps can be done in parallel. That said, we might just have to accept running it less often. It needs to run end-to-end with no user intervention, failing gracefully on error.
  • It would be handy to be able to identify the cluster of just a single user on the fly after generating the clusters in bulk
  • The results need to be hooked into the reddit UI. If you’re willing to dive into the codebase, this one will be important as soon as the rest of the process is working and has a lot of room for creativity
  • We need to find the sweet spot for the number of clusters to use. Put another way, how many different types of redditors do you think there are? This could best be done using the aforementioned test-suite and a good-old-fashioned binary search.
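
On the test-suite point, the simplest version of that 80/20 check might look like the sketch below: hold out a fifth of each user's votes, predict the held-out directions from the user's cluster-level affinities, and score the hit rate. Every name here is hypothetical scaffolding, not existing project code.

    # Rough sketch of the 80/20 evaluation idea. votes is a list of
    # (account_id, sr_id, direction) tuples; cluster_of_user and
    # cluster_affinity would come from the skmeans clustering step.
    import random
    from collections import defaultdict

    def split_votes(votes, train_frac=0.8, seed=0):
        # Per-user split so every user appears in both halves.
        rng = random.Random(seed)
        by_user = defaultdict(list)
        for v in votes:
            by_user[v[0]].append(v)
        train, test = [], []
        for user_votes in by_user.values():
            rng.shuffle(user_votes)
            cut = int(len(user_votes) * train_frac)
            train += user_votes[:cut]
            test += user_votes[cut:]
        return train, test

    def accuracy(test, cluster_of_user, cluster_affinity):
        # Predict "upvote" where the user's cluster has affinity > 0.5 for
        # that subreddit; unseen pairs default to 0.5 (predict not-up).
        hits = 0
        for account_id, sr_id, direction in test:
            key = (cluster_of_user[account_id], sr_id)
            predicted_up = cluster_affinity.get(key, 0.5) > 0.5
            hits += int((direction == 1) == predicted_up)
        return hits / len(test)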

Some notes:

  • I’m not attached to doing this in R (I don’t even know much R, it just has a handy prebaked skmeans implementation). In fact I’m not attached to my methods here at all, I just want a good end-result.
  • This is my weekend fun project, so it’s likely to move very slowly if we don’t pick up enough participation here
  • The final version will run against the whole dataset, not just the public one. So even though I can’t release the whole dataset for privacy reasons, I can run your code and a test-suite against it

——————————————————————————————-


I am thinking of using Rattle with the arules package, and running it on EC2 to get the horsepower.

How else do you think you can tackle a recommendation engine problem?


Ajay