Algorithms.io is Dataweek Startup of September

Andy Bartley, one of the guys I keep bouncing ideas around with on an irregular basis, just won a startup pitch competition with his startup, Algorithms.io.

 

http://dataweek.co/algorithms-io-wins-data-2-0-summit-2013-startup-pitch-competition/

Andy was kind enough to mention me at the link above (I have extracted it here). What is really cool is that they are now going to demo analytics for wearable computing. That's right: Analytics + Google Glass? Any takers? 🙂

See:

Visit Algorithms.io tomorrow and Thursday at Dataweek 2013 at the Fort Mason Center in San Francisco.  We will be in booth #118 giving a live demo of our new machine learning platform for wearable devices.

This new platform intelligently classifies streaming data from wearable devices into actionable events that can be used to build predictive applications.  It combines a data scientist, dev ops engineer, and developer all into one simple service.

 

 

Geoff: Is Algorithms.io a “marketplace for algorithms” or do you plan on producing / curating most of the algorithms internally?

Andy:  Right now we are performing the curation internally.  When you get past the marketing hype around Big Data, Machine Learning, Predictive Analytics, etc. what you’ll find is most companies still aren’t sure exactly how these technologies can benefit their business.  We talk with Fortune 500 companies every week who have few if any data scientists in house, and aren’t using any intelligent algorithms.  Our main focus right now is working with those companies to help them understand the use cases and how they integrate with the business model.

Andy: Longer term, we think there is an opportunity for an algorithm marketplace.  This isn't a new topic; one of our advisors, Ajay Ohri, also the author of Springer's book on R, wrote about this idea back in 2011 (http://readwrite.com/2011/06/01/an-app-store-for-algorithms#awesm=~ohfvTpPiq6Jmt5).  We've discussed this topic with folks at some of the potential players, like Google, who could be interested in this type of marketplace.  Two of the primary gating factors for an algorithm marketplace are data quality and use cases.  Data quality is still a fundamental challenge, and the really compelling business use cases today can be tackled with a relatively limited set of algorithms.  As companies build more sophisticated data infrastructure in the next 2-3 years, the bar will begin to rise and an opportunity could emerge for commerce around algorithms.  We're doing a number of things on the technology and IP fronts to position us to play in this space when it emerges.

 

Using R for random number creation from time stamps #rstats

Suppose, let us just suppose, you want to create random numbers that are reproducible and derived from time stamps.

Here is the code in R

> a=as.numeric(Sys.time())
> set.seed(a)
> rnorm(log(a))

Note: you can use a custom function (I used log) of the system time to set how many numbers are generated. This creates a list of pseudo-random numbers (since nothing machine-driven is purely random in the strict philosophical sense of the word).

a=as.numeric(Sys.time())
set.seed(a)
abs(100000000*rnorm(abs(log(a))))

[1]  39621645  99451316 109889294 110275233 278994547   6554596  38654159  68748122   8920823  13293010
[11]  57664241  24533980 174529340 105304151 168006526  39173857  12810354 145341412 241341095  86568818
[21] 105672257

Possible applications: things that need both random numbers (like encryption keys) and time stamps (like events, web or industrial logs, or pseudo-random passcodes in Google two-factor authentication). Note, though, that a seed derived from a predictable time stamp is unsuitable for serious cryptography.

Note: I used the rnorm function, but you could also draw the generating function itself at random (rnorm or rcauchy, say).
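A minimal sketch of that idea in base R, picking the generating function itself at random (the candidate list of rnorm, rcauchy, and runif is just an illustrative assumption):

```r
# Seed from the current time stamp; record 'a' to reproduce the draws later
a <- as.numeric(Sys.time())
set.seed(a)

# Draw the generating function itself at random from a candidate list
gen <- sample(list(rnorm, rcauchy, runif), 1)[[1]]

# Size the output by the log of the time stamp, as in the example above
x <- abs(100000000 * gen(abs(log(a))))
x
```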

Then again, I would rather trust my own randomness than one generated by an arm of the US Government (see http://www.nist.gov/itl/csd/ct/nist_beacon.cfm).

Update- Random numbers in R

http://stat.ethz.ch/R-manual/R-patched/library/base/html/Random.html

Details

The currently available RNG kinds are given below. kind is partially matched to this list. The default is "Mersenne-Twister".

"Wichmann-Hill"
The seed, .Random.seed[-1] == r[1:3] is an integer vector of length 3, where each r[i] is in 1:(p[i] - 1), where p is the length 3 vector of primes, p = (30269, 30307, 30323). The Wichmann–Hill generator has a cycle length of 6.9536e12 (= prod(p-1)/4, see Applied Statistics (1984) 33, 123 which corrects the original article).
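As a quick check, the quoted cycle length follows directly from the prime vector (base R only):

```r
# Primes for the Wichmann-Hill generator, from the documentation above
p <- c(30269, 30307, 30323)

# Cycle length = prod(p - 1) / 4, which is about 6.9536e12 as stated
prod(p - 1) / 4
```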

"Marsaglia-Multicarry":
A multiply-with-carry RNG is used, as recommended by George Marsaglia in his post to the mailing list ‘sci.stat.math’. It has a period of more than 2^60 and has passed all tests (according to Marsaglia). The seed is two integers (all values allowed).

"Super-Duper":
Marsaglia’s famous Super-Duper from the 70’s. This is the original version which does not pass the MTUPLE test of the Diehard battery. It has a period of about 4.6*10^18 for most initial seeds. The seed is two integers (all values allowed for the first seed: the second must be odd).

We use the implementation by Reeds et al. (1982–84).

The two seeds are the Tausworthe and congruence long integers, respectively. A one-to-one mapping to S’s .Random.seed[1:12] is possible but we will not publish one, not least as this generator is not exactly the same as that in recent versions of S-PLUS.

"Mersenne-Twister":
From Matsumoto and Nishimura (1998). A twisted GFSR with period 2^19937 - 1 and equidistribution in 623 consecutive dimensions (over the whole period). The ‘seed’ is a 624-dimensional set of 32-bit integers plus a current position in that set.

"Knuth-TAOCP-2002":
A 32-bit integer GFSR using lagged Fibonacci sequences with subtraction. That is, the recurrence used is

X[j] = (X[j-100] - X[j-37]) mod 2^30

and the ‘seed’ is the set of the 100 last numbers (actually recorded as 101 numbers, the last being a cyclic shift of the buffer). The period is around 2^129.

"Knuth-TAOCP":
An earlier version from Knuth (1997).

The 2002 version was not backwards compatible with the earlier version: the initialization of the GFSR from the seed was altered. R did not allow you to choose consecutive seeds, the reported ‘weakness’, and already scrambled the seeds.

Initialization of this generator is done in interpreted R code and so takes a short but noticeable time.

"L'Ecuyer-CMRG":
A ‘combined multiple-recursive generator’ from L’Ecuyer (1999), each element of which is a feedback multiplicative generator with three integer elements: thus the seed is a (signed) integer vector of length 6. The period is around 2^191.

The 6 elements of the seed are internally regarded as 32-bit unsigned integers. Neither the first three nor the last three should be all zero, and they are limited to less than 4294967087 and 4294944443 respectively.

This is not particularly interesting of itself, but provides the basis for the multiple streams used in package parallel.
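A small sketch of those streams, using only base R and the parallel package that ships with it:

```r
# Select the combined multiple-recursive generator
RNGkind("L'Ecuyer-CMRG")
set.seed(123)

s1 <- .Random.seed                  # current stream's seed: 7 integers
                                    # (the kind code plus the 6 seed elements)
s2 <- parallel::nextRNGStream(s1)   # seed of the next independent stream

identical(s1, s2)                   # FALSE: the two streams differ

# Restore the default generator
RNGkind("Mersenne-Twister")
```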

"user-supplied":
Use a user-supplied generator.

 

Function RNGkind allows user-coded uniform and normal random number generators to be supplied.
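For instance, a minimal sketch of selecting a generator kind with RNGkind and checking that re-seeding reproduces the draws:

```r
# Switch to the Wichmann-Hill generator and seed it
RNGkind("Wichmann-Hill")
set.seed(42)
x <- runif(3)

# Re-seeding under the same kind reproduces the same draws
set.seed(42)
y <- runif(3)
identical(x, y)   # TRUE

# Restore the default Mersenne-Twister generator
RNGkind("Mersenne-Twister")
```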

BigML goes on hyperdrive- exciting new features

Some changes in BigML.com, whose CEO I have interviewed here.

Their earlier innovation in creating a marketplace for models (like the similar marketplaces for apps) was written about here.

I like the concept of BigMLer, a command-line tool: https://bigml.com/bigmler

The new changes are:

1) Text analysis is now available. It seems like a rudimentary TDM (term-document matrix), but I have yet to test whether I can do clustering within text data too.

2) A cloud server called BigML Predict Server, making adoption easier for industries with sensitive data-hygiene requirements, like finance.

3) A confusion matrix for model evaluation, a long overdue step. Maybe some (ROC?) curves should be added to the evaluation here 😉

4) Miscellaneous technical upgrades that are more complex to execute and less interesting to write about:

  • multi-label classification
  • secret URLs for sharing models (view the model only, not the data)
  • export to MS Excel (maybe add Google Docs export?)
  • etc.

Overall, with the addition of training courses as well, this is a new phase for this data science startup, which I have been tracking for the past few years.


Installing the book (SVMono) class in Lyx

  1. Start Lyx
  2. Find your user directory via Help (last tab, top right) > About LyX. (On Windows this may be under AppData; to find AppData, type %appdata% at the cmd line, or press Windows+R to run it. It is something like C:\Users\dell\AppData\Roaming\LyX2.0.)
  3. Download SVMono class from SVMono
  4. Unzip all the files into the layouts folder of your user directory (step 2)
  5. Go to LyX > Tools (second-last tab, top right) > Reconfigure
  6. Close Lyx
  7. Start Lyx again
  8. In LyX, go to Document (third-last tab, top right) > Settings and select SVMono as the document class

References-

http://wiki.lyx.org/Layouts/Layouts

http://wiki.lyx.org/LyX/UserDir

Thoughts on Soft Ware and Soft Vapour

  1. Interfaces for software, hardware, design, and products evolve as technology enables better materials and more efficient consumption. The basic handicap, though, remains that humans do not evolve, at least not noticeably to themselves.

  2. Human psychology continues to play a key role in marking success and failure of technological adoption. This includes the paradigms of loss aversion and being prone to logical fallacies in arguments put forward in advertising by fellow humans.

  3. What is easier: to create a better machine for a man, or to train a man to be better at the machine? Often it is a mixture of both. Both have multiple costs and benefits for various players, including incumbent corporations, challenger innovators, countries, regions, trade zones, and the environment.

  4. The practice of shipping code impatiently, with made-up metrics, in fast-evolving industries causes dishonesty among interested stakeholders.

  5. There are some things that cannot be explained unless the receiver is trained to think along different paradigms than he is used to. The altered state of consciousness, however, is almost never credibly reversible.

  6. Information processing is the drug, not information itself.

  7. A feedback loop is simple to construct, with results and errors flowing into the inputs of the next iteration's processing. A feedback loop is tough to sell to your own team, sometimes! Feedback loops are also inevitably gamed sometimes, by people of course.


 

Life Cycle of a Data Science Project

It is best to use CRISP-DM, SEMMA, and/or KDD for a systematic approach.

1) Understanding Business Requirements from Client

2) Converting Business Problem to a Statistical Problem

  • what data to collect

  • what is the cost of data

  • how can I enhance the data

  • data quality issues

3) Solving Statistical Problem with Tools (R, SAS, Excel)

  • import

  • data quality

  • outlier and missing value treatment

  • exploratory analysis

  • data visualization

  • hypothesis and problem framing

  • data mining and pattern identification

  • create success parameters for statistical solution
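A minimal R sketch of this step; the data frame and column names (df, region, sales) are hypothetical placeholders, not from any real project:

```r
# Hypothetical imported data, with a missing value and an outlier
df <- data.frame(
  region = c("N", "S", "E", "W", "N"),
  sales  = c(120, NA, 95, 480, 110)
)

# Missing value treatment: impute with the median
df$sales[is.na(df$sales)] <- median(df$sales, na.rm = TRUE)

# Outlier treatment: cap at the 95th percentile
cap <- unname(quantile(df$sales, 0.95))
df$sales <- pmin(df$sales, cap)

# Exploratory analysis: average sales by region
aggregate(sales ~ region, data = df, FUN = mean)
```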

4) Converting Statistical Solution to Business Solution

  • project report template

  • assumptions and caveats

  • feedback from stakeholders

5) Communicating Business Solution to Client

  • presentation

  • report

  • customer satisfaction

  • monitoring of results

Using R and TwitteR together on Windows #rstats

Apparently, on a Windows OS you need to add the following:

options(RCurlOptions = list(cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl")))
download.file(url = "http://curl.haxx.se/ca/cacert.pem", destfile = "cacert.pem")

twitCred$handshake(cainfo = "cacert.pem")

 

I am still investigating this, so I can update the tutorial in my previous post into a complete standalone tutorial from beginning to end.