## Using R for random number creation from time stamps #rstats

Suppose, just suppose, you want to create random numbers that are reproducible and derived from time stamps.

Here is the code in R:

```
a <- as.numeric(Sys.time())
set.seed(a)
rnorm(log(a))
```

Note: you can use a custom function of the system time (I used `log`) to decide how many random numbers to generate. This creates a list of pseudo-random numbers (since nothing machine-driven is purely random in the strict philosophical sense of the word).

```
a <- as.numeric(Sys.time())
set.seed(a)
abs(100000000 * rnorm(abs(log(a))))

 [1]  39621645  99451316 109889294 110275233 278994547   6554596  38654159  68748122   8920823  13293010
[11]  57664241  24533980 174529340 105304151 168006526  39173857  12810354 145341412 241341095  86568818
[21] 105672257
```
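The snippet above can be wrapped in a small helper function. A minimal sketch (the function name `time_seeded_draws` and the 100000000 scaling are just illustrative choices matching the code above):

```
# Seed the RNG from the current time stamp, then draw a count of
# values that is itself derived from that same time stamp.
time_seeded_draws <- function() {
  a <- as.numeric(Sys.time())
  set.seed(a)                # reproducible for a given time stamp
  n <- floor(abs(log(a)))    # number of draws, derived from the time
  abs(100000000 * rnorm(n))
}
```

Re-running the function with a stored value of `a` (rather than a fresh `Sys.time()`) reproduces exactly the same numbers.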

Possible applications: things that need both random numbers (like encryption keys) and time stamps (like events, web or industrial logs, or pseudo-random pass codes in Google two-factor authentication). Bear in mind that a seed derived from a guessable time stamp is predictable, so this is not suitable for real cryptographic use.

Note that I used the `rnorm` function, but you could also choose the distribution function itself at random (say, `rnorm` or `rcauchy`).

Then again, I would trust my own randomness more than one generated by an arm of the US Govt (see http://www.nist.gov/itl/csd/ct/nist_beacon.cfm ).

Update- Random numbers in R

http://stat.ethz.ch/R-manual/R-patched/library/base/html/Random.html

### Details

The currently available RNG kinds are given below. `kind` is partially matched to this list. The default is `"Mersenne-Twister"`.

`"Wichmann-Hill"`:
The seed, `.Random.seed[-1] == r[1:3]`, is an integer vector of length 3, where each `r[i]` is in `1:(p[i] - 1)`, where `p` is the length-3 vector of primes, `p = (30269, 30307, 30323)`. The Wichmann–Hill generator has a cycle length of 6.9536e12 (= `prod(p-1)/4`, see Applied Statistics (1984) 33, 123, which corrects the original article).

`"Marsaglia-Multicarry"`:
A multiply-with-carry RNG is used, as recommended by George Marsaglia in his post to the mailing list ‘sci.stat.math’. It has a period of more than 2^60 and has passed all tests (according to Marsaglia). The seed is two integers (all values allowed).

`"Super-Duper"`:
Marsaglia’s famous Super-Duper from the 70s. This is the original version, which does not pass the MTUPLE test of the Diehard battery. It has a period of about 4.6*10^18 for most initial seeds. The seed is two integers (all values allowed for the first seed; the second must be odd).

We use the implementation by Reeds et al. (1982–84).

The two seeds are the Tausworthe and congruence long integers, respectively. A one-to-one mapping to S’s `.Random.seed[1:12]` is possible but we will not publish one, not least as this generator is not exactly the same as that in recent versions of S-PLUS.

`"Mersenne-Twister"`:
From Matsumoto and Nishimura (1998). A twisted GFSR with period 2^19937 - 1 and equidistribution in 623 consecutive dimensions (over the whole period). The ‘seed’ is a 624-dimensional set of 32-bit integers plus a current position in that set.

`"Knuth-TAOCP-2002":`
A 32-bit integer GFSR using lagged Fibonacci sequences with subtraction. That is, the recurrence used is

X[j] = (X[j-100] - X[j-37]) mod 2^30

and the ‘seed’ is the set of the 100 last numbers (actually recorded as 101 numbers, the last being a cyclic shift of the buffer). The period is around 2^129.
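The lagged-Fibonacci recurrence above can be sketched in a few lines of R (illustrative only; this is not R's actual Knuth-TAOCP-2002 implementation, which also tracks a buffer position):

```
# buf holds the last 100 values X[j-100], ..., X[j-1]
lagfib_next <- function(buf) {
  x_new <- (buf[1] - buf[64]) %% 2^30  # X[j-100] - X[j-37], mod 2^30
  c(buf[-1], x_new)                    # slide the window forward one step
}
```

Here `buf[64]` is `X[j-37]` because `buf[1]` is `X[j-100]`, so `X[j-k]` sits at index `100 - k + 1`.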

`"Knuth-TAOCP":`
An earlier version from Knuth (1997).

The 2002 version was not backwards compatible with the earlier version: the initialization of the GFSR from the seed was altered. R did not allow you to choose consecutive seeds, the reported ‘weakness’, and already scrambled the seeds.

Initialization of this generator is done in interpreted R code and so takes a short but noticeable time.

`"L'Ecuyer-CMRG":`
A ‘combined multiple-recursive generator’ from L’Ecuyer (1999), each element of which is a feedback multiplicative generator with three integer elements: thus the seed is a (signed) integer vector of length 6. The period is around 2^191.

The 6 elements of the seed are internally regarded as 32-bit unsigned integers. Neither the first three nor the last three should be all zero, and they are limited to less than `4294967087` and `4294944443` respectively.

This is not particularly interesting of itself, but provides the basis for the multiple streams used in package parallel.

`"user-supplied":`
Use a user-supplied generator.

Function `RNGkind` allows user-coded uniform and normal random number generators to be supplied.
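To see which generator is active, or to switch between the kinds listed above, you can call `RNGkind` directly; a quick sketch:

```
old <- RNGkind()            # current kinds, e.g. "Mersenne-Twister" plus the normal kind
RNGkind("L'Ecuyer-CMRG")    # switch to the combined multiple-recursive generator
set.seed(42)
runif(3)                    # draws from the L'Ecuyer-CMRG stream
RNGkind(old[1])             # restore the previous uniform generator
```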

## BigML goes on hyperdrive- exciting new features

Some changes in BigML.com, whose CEO I have interviewed here.

Their earlier innovation in making a marketplace for models (like the similar marketplace for apps) was written about here.

I like the concept of BigMLer, a command-line tool: https://bigml.com/bigmler

The new changes are:

1) Text analysis is now available. It seems like a rudimentary term-document matrix (TDM), but I have yet to test whether I can also do clustering within text data.

2) A cloud server called BigML Predict Server, making adoption faster thanks to data hygiene for sensitive industries like finance.

3) A confusion matrix for evaluation, a long overdue step. Maybe some curves should be added to evaluation here 😉

4) Miscellaneous technical upgrades that are more complex to execute and less interesting to write about:

• multi label classification
• secret urls for sharing models (view model only not data)
• export to MS Excel (maybe add Google Docs export?)
• etc

Overall, with the addition of training courses as well, this is a new phase in this data science startup that I have been tracking for the past few years.


## Installing the book (SVMono) class in Lyx

1. Start LyX.
2. Find your user directory via Help (last tab, top right) > About LyX. (On Windows this can be under AppData; to find AppData, enter %appdata% at the cmd line, or press Windows+R and run it there. It looks something like C:\Users\dell\AppData\Roaming\LyX2.0.)
3. Unzip all the files into the layouts folder of your user directory (step 2).
4. In LyX, go to Tools (second-last tab, top right) > Reconfigure.
5. Close LyX.
6. Start LyX again.
7. In LyX, go to Document (third-last tab, top right) > Settings and select SVMono as the document class.

References:

http://wiki.lyx.org/Layouts/Layouts

http://wiki.lyx.org/LyX/UserDir

## Thoughts on Soft Ware and Soft Vapour

1. Interfaces for software, hardware, design and products evolve as technology enables better materials and more efficient consumption. The basic handicap, though, remains that humans do not evolve, at least not noticeably to themselves.

2. Human psychology continues to play a key role in marking success and failure of technological adoption. This includes the paradigms of loss aversion and being prone to logical fallacies in arguments put forward in advertising by fellow humans.

3. What is easier: to create a better machine for a man, or to train a man to be better at the machine? Often it is a mixture of both. Both have multiple costs and benefits to various agents, including incumbent corporations, challenger innovators, countries, regions and trade zones, and the environment.

4. The practice of shipping code impatiently, with made-up metrics, in evolving industries causes dishonesty among interested stakeholders.

5. There are some things that cannot be explained unless the receiver is trained to think along different paradigms than he is used to. The altered state of consciousness, however, is almost never credibly reversible.

6. Information processing is the drug, not information itself.

7. A feedback loop is simple to construct, with results and errors flowing into the inputs of the next iteration. A feedback loop is tough to sell to your own team, sometimes! Feedback loops are also inevitably gamed sometimes, by people of course.

## Life Cycle of a Data Science Project

It is best to use CRISP-DM, SEMMA, and/or KDD for a systematic approach.

1) Understanding Business Requirements from Client

2) Converting Business Problem to a Statistical Problem

• what data to collect

• what is the cost of data

• how can I enhance the data

• data quality issues

3) Solving Statistical Problem with Tools (R, SAS, Excel)

• import

• data quality

• outlier and missing value treatment

• exploratory analysis

• data visualization

• hypothesis and problem framing

• data mining and pattern identification

• create success parameters for statistical solution

4) Converting Statistical Solution to Business Solution

• project report template

• assumptions and caveats

• feedback from stakeholders

5) Communicating Business Solution to Client

• presentation

• report

• customer satisfaction

• monitoring of results
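As an illustration of step 3, here is a minimal R sketch (the file name `sales.csv` and the column `amount` are hypothetical):

```
df <- read.csv("sales.csv")                    # import
summary(df)                                    # exploratory analysis
df$amount[is.na(df$amount)] <- median(df$amount, na.rm = TRUE)  # missing value treatment
cap <- quantile(df$amount, 0.99)               # outlier treatment: cap at the 99th percentile
df$amount <- pmin(df$amount, cap)
hist(df$amount)                                # data visualization
```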

## Using R and TwitteR together on Windows #rstats

Apparently you need to add the following on a Windows OS:

```
options(RCurlOptions = list(cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl")))
twitCred$handshake(cainfo = "cacert.pem")
```

I am still investigating this, to update the tutorial in my previous post into a complete stand-alone tutorial from beginning to end.

## Using Twitter Data with R #rstats updated for API changes

Step 1

```
install.packages("twitteR")
Installing package(s) into ‘/home/R/library’
(as ‘lib’ is unspecified)
also installing the dependencies ‘ROAuth’, ‘rjson’

trying URL 'http://cran.rstudio.com/src/contrib/ROAuth_0.9.3.tar.gz'
Content type 'application/x-gzip' length 6202 bytes
opened URL
==================================================

trying URL 'http://cran.rstudio.com/src/contrib/rjson_0.2.13.tar.gz'
Content type 'application/x-gzip' length 98132 bytes (95 Kb)
opened URL
==================================================

Content type 'application/x-gzip' length 121696 bytes (118 Kb)
opened URL
==================================================

* installing *source* package ‘ROAuth’ ...
** package ‘ROAuth’ successfully unpacked and MD5 sums checked
** R
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded

* DONE (ROAuth)
* installing *source* package ‘rjson’ ...
** package ‘rjson’ successfully unpacked and MD5 sums checked
** libs
g++ -I/usr/share/R/include -DNDEBUG      -fpic  -O3 -pipe  -g  -c dump.cpp -o dump.o
gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG      -fpic  -O3 -pipe  -g  -c parser.c -o parser.o
g++ -shared -o rjson.so dump.o parser.o -L/usr/lib/R/lib -lR
installing to /home/R/library/rjson/libs
** R
** inst
** help
*** installing help indices
** building package indices
** installing vignettes
‘json_rpc_server.Rnw’
** testing if installed package can be loaded

* DONE (rjson)
* installing *source* package ‘twitteR’ ...
** package ‘twitteR’ successfully unpacked and MD5 sums checked
** R
** inst
Creating a generic function for ‘as.data.frame’ from package ‘base’ in package ‘twitteR’
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
```

Step 2

```
library(twitteR)
```

Step 3

Go to the Twitter developer site (dev.twitter.com) and sign in with your Twitter account.

Step 4

Create a new app for yourself by navigating to My Applications

Step 5

Click on New Application (button on top right)

Step 6

Fill in the options here; leave the callback URL blank.

The name should be unique.

The description should be at least 10 characters.

The website can be a placeholder for now (or your blog address).

Agree to the Terms and Conditions.

Type the spam-check numbers and letters.

Step 7

Note these details from your new app:

Consumer Key

Consumer Secret

At the bottom, click on "Create your OAuth Token".

Finally, your app page should look like this (don't worry, I will be deleting this app, so you can't hack my Twitter yet).

Step 8

Go to R

Type the following code after you have changed the two consumer keys (IMPORTANT: you will need to change the Consumer Key and Consumer Secret to the ones specific to YOUR app).

NOTE: WordPress makes some changes when you copy and paste code to your blog, so the final formatted code is at the very end of the post.

```
library(twitteR)
# OAuth endpoint URLs used by the handshake
reqURL <- "https://api.twitter.com/oauth/request_token"
accessURL <- "https://api.twitter.com/oauth/access_token"
authURL <- "https://api.twitter.com/oauth/authorize"
consumerKey <- "2uQlGBBMMXdDffcK2IkAsg"
consumerSecret <- "xrGr71kTfdT3ypWFURGxyJOC4Oqf46Rwu4qxyxoEfM"
twitCred <- OAuthFactory$new(consumerKey = consumerKey,
                             consumerSecret = consumerSecret,
                             requestURL = reqURL,
                             accessURL = accessURL,
                             authURL = authURL)
```

Step 9

Do the Twitter handshake by pasting this command in the R console:

```
twitCred$handshake()
```

You will see a message from R with a URL to visit.

Step 10

Go to the link given by R. You will get an authorization page; click on the blue "Authorize app" button.

Step 11: Entering the PIN

Now you will see a PIN. You can't copy and paste it; write it down and then type it into your R console.

Step 12

Now register the credentials using `registerTwitterOAuth(twitCred)`.

You will see `TRUE` if done correctly.

Step 13

Search Twitter using `searchTwitter()`, substituting your own query, for example `a <- searchTwitter("#rstats", n = 2000)`. Note it returned only 499 tweets:

```
Warning message: In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, : 2000 tweets were requested but the API can only return 499
```

Step 14 Now you can start analyzing the data

Convert the data into a data frame: `tweets_df = twListToDF(a)` (where `a` holds the search results from Step 13).

Install the packages tm (for text mining) and wordcloud:

```
install.packages(c("tm", "wordcloud"))
library(tm)
library(wordcloud)
```

A basic word cloud can be created using the code below:

```
b = Corpus(VectorSource(tweets_df$text), readerControl = list(language = "eng"))
b <- tm_map(b, tolower) # Changes case to lower case
b <- tm_map(b, stripWhitespace) # Strips white space
b <- tm_map(b, removePunctuation) # Removes punctuation
inspect(b)
tdm <- TermDocumentMatrix(b)
m1 <- as.matrix(tdm)
v1 <- sort(rowSums(m1), decreasing = TRUE)
d1 <- data.frame(word = names(v1), freq = v1)
wordcloud(d1$word, d1$freq)
```

Step 15: Keep your OAuth keys safe, and do your homework without bothering your instructor too much.

If you copy and paste the code from a website, be sure to change the quotation marks manually in your R console (smart quotes will break the code).

FINAL CODE

```
install.packages("twitteR")
library(twitteR)
# OAuth endpoint URLs used by the handshake
reqURL <- "https://api.twitter.com/oauth/request_token"
accessURL <- "https://api.twitter.com/oauth/access_token"
authURL <- "https://api.twitter.com/oauth/authorize"
consumerKey <- "rR16FxDLkTYmuVhqH4s4EQ"
consumerSecret <- "xrGr71kTfdT3ypWFURGxyJOC4Oqf46Rwu4qxyxoEfM"
twitCred <- OAuthFactory$new(consumerKey = consumerKey,
                             consumerSecret = consumerSecret,
                             requestURL = reqURL,
                             accessURL = accessURL,
                             authURL = authURL)
twitCred$handshake() # Pause here for the handshake PIN code
registerTwitterOAuth(twitCred) # Wait till you see TRUE
```

```
a = searchTwitter("#rstats", n = 500) # Search results from Step 13 (example query)
tweets_df = twListToDF(a) # Convert to data frame
install.packages(c("tm", "wordcloud"))
library(tm)
library(wordcloud)
b = Corpus(VectorSource(tweets_df$text), readerControl = list(language = "eng"))
b <- tm_map(b, tolower) # Changes case to lower case
b <- tm_map(b, stripWhitespace) # Strips white space
b <- tm_map(b, removePunctuation) # Removes punctuation
inspect(b)
tdm <- TermDocumentMatrix(b)
m1 <- as.matrix(tdm)
v1 <- sort(rowSums(m1), decreasing = TRUE)
d1 <- data.frame(word = names(v1), freq = v1)
wordcloud(d1$word, d1$freq)
```