Family as a basic social unit- Stats

Some stats from http://www.stepfamilies.info/stepfamily-fact-sheet.php

Sadly, they decided to discontinue providing estimates of marriage, divorce, and remarriage except for those available from the current census. Thus, many of the following estimates were derived from the 1990 census and earlier data sources. This is for the US, but I would be interested in plotting, say, how GDP and family unit size change across countries.

 

Current estimates from 1988-1990 suggest:

  • 92% of all men and women marry by age 50
  • 43% of first marriages will end in divorce within 15 years
  • 25% of all men and women report being married two or more times by age 50
  • In 2004, 42% of all marriages were remarriages for at least one partner
  • Of those who get divorced, 75% will remarry (65% bring children from a previous union)
  • 60% of those who get remarried redivorce
  • 15% of remarriages will end in divorce within 3 years, 25% within 5 years, 39% within 10 years
  • The average length of first marriages and remarriages that end in divorce is about 8 years
  • The average time between first divorce and remarriage is about 3.5 years
  • 54% of women will remarry within 5 years of first divorce and 75% within 10 years
  • 50% of men who remarry after first divorce do so within 3-4 years
  • Having low income and living in poor neighborhoods are associated with a lower chance of remarriage
  • Younger adults are more likely to remarry than older ones
  • Whites and Latinos are more likely to remarry than African Americans
  • After 5 years of divorce Whites are most likely to remarry (58%), followed by Latinos (44%) and African Americans (32%)
  • These proportions show a marked downward trend when compared to national samples in 1976, which indicated the probability of remarriage within 5 years of divorce was 73% for Whites and nearly 50% for African Americans
  • Estimates suggest that by the time they are 18, anywhere from 1/3 to 1/2 of children will have been part of a stepfamily

In addition, while billions are being spent on data software, how can we cut down the cost of collecting census data using newer technologies (rather than those paper forms!)?

Google introduces Analytics Academy for e-learning

I really liked this and promptly signed up at https://analyticsacademy.withgoogle.com/course

I passed the Google Web Analytics IQ test some 2 years back, of course (but it's only valid for 18 months).

Digital Analytics Fundamentals

This three-week course provides a foundation for marketers and analysts seeking to understand the core principles of digital analytics and to improve business performance through better digital measurement.

Course highlights include:

  • An overview of today’s digital measurement landscape
  • Guidance on how to build an effective measurement plan
  • Best practices for collecting actionable data
  • Descriptions of key digital measurement concepts, terminology and analysis techniques
  • Deep-dives into Google Analytics reports with specific examples for evaluating your digital marketing performance
  • View lessons from experts

    Watch or read lessons from digital analytics advocate Justin Cutroni, all at your own pace.

  • Test your knowledge

    Apply what you learn in the course by completing short quizzes and practice exercises.

  • Join the learning community

    Engage with other course participants and analytics experts in the course forum and on Google+.

Algorithms.io is Dataweek Startup of September

One of the guys I keep trading ideas with on an irregular basis, Andy Bartley, just won a startup pitch competition with his startup Algorithms.io.

 

http://dataweek.co/algorithms-io-wins-data-2-0-summit-2013-startup-pitch-competition/

Andy was kind enough to mention me at the link above (I extracted it here). What is really cool is that they are now going to demo analytics for wearable computing. That's right: Analytics + Google Glass? Any takers..? 🙂

See-

Visit Algorithms.io tomorrow and Thursday at Dataweek 2013 at the Fort Mason Center in San Francisco.  We will be in booth #118 giving a live demo of our new machine learning platform for wearable devices.

This new platform intelligently classifies streaming data from wearable devices into actionable events that can be used to build predictive applications.  It combines a data scientist, dev ops engineer, and developer all into one simple service.

 

 

Geoff: Is Algorithms.io a “marketplace for algorithms” or do you plan on producing / curating most of the algorithms internally?

Andy:  Right now we are performing the curation internally.  When you get past the marketing hype around Big Data, Machine Learning, Predictive Analytics, etc. what you’ll find is most companies still aren’t sure exactly how these technologies can benefit their business.  We talk with Fortune 500 companies every week who have few if any data scientists in house, and aren’t using any intelligent algorithms.  Our main focus right now is working with those companies to help them understand the use cases and how they integrate with the business model.

Longer term, we think there is an opportunity for an algorithm marketplace.  This isn’t a new topic, one of our advisors Ajay Ohri, also the author of Springer’s book on R, wrote about this idea back in 2011   (http://readwrite.com/2011/06/01/an-app-store-for-algorithms#awesm=~ohfvTpPiq6Jmt5).  We’ve discussed this topic with folks at some of the potential players like Google who could be interested in this type of marketplace.  Two of the primary gating factors for an algorithm marketplace are data quality and use cases.  Data quality is still a fundamental challenge, and the really compelling business use cases today can be tackled with a relatively limited set of algorithms.  As companies get more sophisticated data infrastructure in the next 2 – 3 years, the bar will begin to rise and an opportunity could emerge for commerce around algorithms.  We’re doing a number of things on the technology and IP fronts to position us to play in this space when it emerges.

 

Using R for random number creation from time stamps #rstats

Suppose - let us just suppose - you want to create random numbers that are reproducible and derived from time stamps.

Here is the code in R

> a = as.numeric(Sys.time())   # the time stamp as a number
> set.seed(a)                  # seed the RNG from it
> rnorm(log(a))                # log(a) is ~21, so about 21 draws

Note: you can also apply a custom function (I used log) to the system time to decide how many random numbers to generate. This creates a list of pseudo-random numbers (since nothing machine-driven is purely random in the strict philosophy of the word):

a = as.numeric(Sys.time())            # numeric time stamp
set.seed(a)                           # reproducible seed
abs(100000000 * rnorm(abs(log(a))))   # scaled, non-negative draws

[1]  39621645  99451316 109889294 110275233 278994547   6554596  38654159  68748122   8920823  13293010
[11]  57664241  24533980 174529340 105304151 168006526  39173857  12810354 145341412 241341095  86568818
[21] 105672257

Possible applications: things that need both random numbers (like encryption keys) and time stamps (like events, web or industrial logs, or pseudo-random pass codes in Google 2-factor authentication).

Note: I used the rnorm function, but you could also draw the generating function itself at random (rnorm or rcauchy).
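Both ideas can be wrapped in a small helper: seed from the time stamp, then pick the generator function itself at random. This is a sketch only; the function name and the 1e8 scaling are my own choices, following the snippet above.

```r
# Sketch: reproducible draws seeded from a time stamp, with the
# generator function (rnorm or rcauchy) itself chosen at random.
# draws_from_timestamp is a hypothetical name, not from the post.
draws_from_timestamp <- function(ts = as.numeric(Sys.time()), n = 10) {
  set.seed(ts)                                 # same time stamp -> same output
  gen <- sample(list(rnorm, rcauchy), 1)[[1]]  # randomly chosen generator
  abs(1e8 * gen(n))                            # scaled, non-negative draws
}

draws_from_timestamp(ts = 123456, n = 5)       # reproducible for a fixed ts
```

Calling it twice with the same `ts` returns identical numbers, which is the whole point of seeding from the time stamp.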

Again, I would trust my own randomness more than one generated by an arm of the US Govt (see http://www.nist.gov/itl/csd/ct/nist_beacon.cfm ).

Update- Random numbers in R

http://stat.ethz.ch/R-manual/R-patched/library/base/html/Random.html

Details

The currently available RNG kinds are given below. kind is partially matched to this list. The default is "Mersenne-Twister".

"Wichmann-Hill"
The seed, .Random.seed[-1] == r[1:3] is an integer vector of length 3, where each r[i] is in 1:(p[i] - 1), where p is the length 3 vector of primes, p = (30269, 30307, 30323). The Wichmann–Hill generator has a cycle length of 6.9536e12 (= prod(p-1)/4, see Applied Statistics (1984) 33, 123 which corrects the original article).

"Marsaglia-Multicarry":
A multiply-with-carry RNG is used, as recommended by George Marsaglia in his post to the mailing list ‘sci.stat.math’. It has a period of more than 2^60 and has passed all tests (according to Marsaglia). The seed is two integers (all values allowed).

"Super-Duper":
Marsaglia’s famous Super-Duper from the 70’s. This is the original version which does not pass the MTUPLE test of the Diehard battery. It has a period of about 4.6*10^18 for most initial seeds. The seed is two integers (all values allowed for the first seed: the second must be odd).

We use the implementation by Reeds et al. (1982–84).

The two seeds are the Tausworthe and congruence long integers, respectively. A one-to-one mapping to S’s .Random.seed[1:12] is possible but we will not publish one, not least as this generator is not exactly the same as that in recent versions of S-PLUS.

"Mersenne-Twister":
From Matsumoto and Nishimura (1998). A twisted GFSR with period 2^19937 - 1 and equidistribution in 623 consecutive dimensions (over the whole period). The ‘seed’ is a 624-dimensional set of 32-bit integers plus a current position in that set.

"Knuth-TAOCP-2002":
A 32-bit integer GFSR using lagged Fibonacci sequences with subtraction. That is, the recurrence used is

X[j] = (X[j-100] - X[j-37]) mod 2^30

and the ‘seed’ is the set of the 100 last numbers (actually recorded as 101 numbers, the last being a cyclic shift of the buffer). The period is around 2^129.

"Knuth-TAOCP":
An earlier version from Knuth (1997).

The 2002 version was not backwards compatible with the earlier version: the initialization of the GFSR from the seed was altered. R did not allow you to choose consecutive seeds, the reported ‘weakness’, and already scrambled the seeds.

Initialization of this generator is done in interpreted R code and so takes a short but noticeable time.

"L'Ecuyer-CMRG":
A ‘combined multiple-recursive generator’ from L’Ecuyer (1999), each element of which is a feedback multiplicative generator with three integer elements: thus the seed is a (signed) integer vector of length 6. The period is around 2^191.

The 6 elements of the seed are internally regarded as 32-bit unsigned integers. Neither the first three nor the last three should be all zero, and they are limited to less than 4294967087 and 4294944443 respectively.

This is not particularly interesting of itself, but provides the basis for the multiple streams used in package parallel.

"user-supplied":
Use a user-supplied generator.

 

Function RNGkind allows user-coded uniform and normal random number generators to be supplied.
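As a quick sketch of how RNGkind is used in practice: switch to one of the generators described above, check that the seed has the documented layout, and verify the quoted Wichmann-Hill cycle length arithmetic.

```r
# Switch generators with RNGkind(); kind is partially matched.
RNGkind("Wichmann-Hill")
set.seed(42)
length(.Random.seed)   # 4: one element coding the kind, plus the 3-integer seed

# The quoted cycle length is prod(p - 1)/4 for p = (30269, 30307, 30323):
prod(c(30269, 30307, 30323) - 1) / 4   # about 6.9536e12, as stated above

RNGkind("Mersenne-Twister")            # restore the default
```

Restoring the default afterwards matters, since RNGkind changes global state for every subsequent random draw in the session.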

BigML goes on hyperdrive- exciting new features

Some changes in BigML.com, whose CEO I have interviewed here.

Their earlier innovation in making a marketplace for models (like the similar marketplace for apps) was written about here.

I like the concept of BigMLer, a command line tool: https://bigml.com/bigmler

The new changes are:

1) Text analysis is now available. It seems like a rudimentary tdm (term document matrix), but I have yet to test whether I can do clustering within text data too

2) A cloud server called BigML Predict Server, making adoption faster thanks to data hygiene for sensitive industries like finance

3) Confusion matrix for evaluation - a long overdue step. Maybe some curves should be added to evaluation here 😉

4) Misc technical upgrades that are more complex to execute and less interesting to write about:

  • multi label classification
  • secret urls for sharing models (view model only not data)
  • export to MS Excel (maybe add Google Docs export?)
  • etc
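A confusion matrix (item 3 above) is easy to sketch in plain R with `table()`; the labels and predictions below are made up purely for illustration, not BigML output.

```r
# Toy ground truth and predictions - illustrative only
actual    <- factor(c("yes", "yes", "no", "no", "yes", "no"))
predicted <- factor(c("yes", "no",  "no", "no", "yes", "yes"))

# Rows are predictions, columns the true labels
cm <- table(Predicted = predicted, Actual = actual)
print(cm)

# Diagonal cells are the correct predictions
accuracy <- sum(diag(cm)) / sum(cm)
```

From here the usual evaluation curves follow by varying the decision threshold instead of using a single hard prediction.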

Overall, with the addition of training courses as well, this is a new phase in this data science startup that I have been tracking for the past few years.

Related:

Life Cycle of a Data Science Project

It is best to use CRISP-DM, SEMMA, and/or KDD for a systematic approach.

1) Understanding Business Requirements from Client

2) Converting Business Problem to a Statistical Problem

  • what data to collect

  • what is the cost of data

  • how can I enhance the data

  • data quality issues

3) Solving Statistical Problem with Tools (R, SAS, Excel)

  • import

  • data quality

  • outlier and missing value treatment

  • exploratory analysis

  • data visualization

  • hypothesis and problem framing

  • data mining and pattern identification

  • create success parameters for statistical solution

4) Converting Statistical Solution to Business Solution

  • project report template

  • assumptions and caveats

  • feedback from stakeholders

5) Communicating Business Solution to Client

  • presentation

  • report

  • customer satisfaction

  • monitoring of results
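As a small illustration of step 3 (outlier and missing value treatment), here is a hedged R sketch; the vector and the 5%/95% capping thresholds are made up for the example, not a prescribed standard.

```r
# Toy data with one missing value and one extreme outlier
x <- c(2, 3, NA, 4, 250, 5)

# Missing value treatment: impute with the median of the observed values
x[is.na(x)] <- median(x, na.rm = TRUE)

# Outlier treatment: cap (winsorize) at the 5th and 95th percentiles
caps <- quantile(x, c(0.05, 0.95))
x <- pmin(pmax(x, caps[1]), caps[2])
```

In a real project the imputation rule and capping percentiles would come out of the exploratory analysis and the success parameters agreed with the client.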

Using R and TwitteR together on Windows #rstats

You need to add the following on a Windows OS, apparently.

options(RCurlOptions = list(cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl")))
download.file(url = "http://curl.haxx.se/ca/cacert.pem", destfile = "cacert.pem")

twitCred$handshake(cainfo = "cacert.pem")

 

I am still investigating this, in order to update the tutorial in my previous post into a complete standalone tutorial from beginning to end.