Ads Alliance on Internet

Just saw the Digital Advertising Alliance’s (DAA) Self-Regulatory Program for Online Behavioral Advertising.

Multi-Site Data Collection Principles Broaden Self Regulation Beyond Online Behavioral Advertising
WASHINGTON, D.C., NOVEMBER 7, 2011

The new Principles consist of the following specific requirements:

  1. Transparency and consumer control for purposes other than OBA – The Multi-Site Data Principles call for organizations that collect Multi-Site Data for purposes other than OBA to provide transparency and control regarding Internet surfing across unrelated Websites.
  2. Collection / use of data for eligibility determination – The Multi-Site Data Principles prohibit the collection, use or transfer of Internet surfing data across Websites for determination of a consumer’s eligibility for employment, credit standing, healthcare treatment and insurance.
  3. Collection / use of children’s data – The Multi-Site Data Principles state that organizations must comply with the Children’s Online Privacy Protection Act (COPPA).
  4. Meaningful accountability – The Multi-Site Data Principles are subject to enforcement through strong accountability mechanisms.

http://www.aboutads.info/principles

The DAA Self-Regulatory Principles

 

The cross-industry Self-Regulatory Principles for Multi-Site Data augment the Self-Regulatory Principles for Online Behavioral Advertising (OBA) by covering the prospective collection of Web site data beyond that collected for OBA purposes. The existing OBA Principles and definitions remain in full force and effect and are not limited by the new principles.

The cross-industry Self-Regulatory Principles for Online Behavioral Advertising were developed by leading industry associations to apply consumer-friendly standards to online behavioral advertising across the Internet. Online behavioral advertising increasingly supports the convenient access to content, services, and applications over the Internet that consumers have come to expect at no cost to them.

The Education Principle calls for organizations to participate in efforts to educate individuals and businesses about online behavioral advertising and the Principles.

The Transparency Principle calls for clearer and easily accessible disclosures to consumers about data collection and use practices associated with online behavioral advertising. It will result in new, enhanced notice on the page where data is collected through links embedded in or around advertisements, or on the Web page itself.

The Consumer Control Principle provides consumers with an expanded ability to choose whether data is collected and used for online behavioral advertising purposes. This choice will be available through a link from the notice provided on the Web page where data is collected.

The Consumer Control Principle requires “service providers”, a term that includes Internet access service providers and providers of desktop application software such as Web browser “tool bars”, to obtain the consent of users before engaging in online behavioral advertising, and to take steps to de-identify the data used for such purposes.

The Data Security Principle calls for organizations to provide appropriate security for, and limited retention of, data collected and used for online behavioral advertising purposes.

The Material Changes Principle calls for obtaining consumer consent before a Material Change is made to an entity’s Online Behavioral Advertising data collection and use policies unless that change will result in less collection or use of data.

The Sensitive Data Principle recognizes that data collected from children and used for online behavioral advertising merits heightened protection, and requires parental consent for behavioral advertising to consumers known to be under 13 on child-directed Web sites. This Principle also provides heightened protections to certain health and financial data when attributable to a specific individual.

The Accountability Principle calls for development of programs to further advance these Principles, including programs to monitor and report instances of uncorrected non-compliance with these Principles to appropriate government agencies. The CBBB and DMA have been asked and agreed to work cooperatively to establish accountability mechanisms under the Principles.

 

Ajay- So why the self-regulation?

Answer- Shoddy maths in behaviorally targeted ads is leading to a glut of targeted ads, far more than consumers can reasonably be expected to click on given their spending. On the internet, unlike on television, cost is less of a barrier to over-advertising.

 

JMP and R – #rstats

An amazing example of R being used successfully in combination (and not in isolation) with other enterprise software is the add-in functionality of JMP and its R integration.

See the following JMP add-ins which use R

http://support.sas.com/demosdownloads/downarea_t4.jsp?productID=110454&jmpflag=Y

JMP Add-in: Multidimensional Scaling using R

This add-in creates a new menu command under the Add-Ins Menu in the submenu R Add-ins. The script will launch a custom dialog (or prompt for a JMP data table if one is not already open) where you can cast columns into roles for performing MDS on the data table. The analysis results in a data table of MDS dimensions and associated output graphics. MDS is a dimension reduction method that produces coordinates in Euclidean space (usually 2D or 3D) that best represent the structure of a full distance/dissimilarity matrix. MDS requires that the input be a symmetric dissimilarity matrix. Input to this application can be data that is already in the form of a symmetric dissimilarity matrix, or the dissimilarity matrix can be computed from the input data (where dissimilarity measures are calculated between rows of the input data table in R).

Submitted by: Kelci Miclaus SAS employee Initiative: All
Application: Add-Ins Analysis: Exploratory Data Analysis
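As a rough illustration of the kind of R call such an add-in wraps (this is not the add-in’s actual script, just a minimal sketch using base R’s dist() and cmdscale() on the built-in mtcars data, purely for illustration):

```r
# Classical MDS on a dissimilarity matrix built from the rows of a data table.
d   <- dist(scale(mtcars))            # symmetric dissimilarity matrix from row distances
fit <- cmdscale(d, k = 2)             # best-fitting coordinates in 2-D Euclidean space
plot(fit[, 1], fit[, 2], type = "n", xlab = "Dim 1", ylab = "Dim 2")
text(fit[, 1], fit[, 2], labels = rownames(mtcars), cex = 0.7)
```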

Chernoff Faces Add-in

One way to plot multivariate data is to use Chernoff faces. For each observation in your data table, a face is drawn such that each variable in your data set is represented by a feature in the face. This add-in uses JMP’s R integration functionality to create Chernoff faces. An R install and the TeachingDemos R package are required to use this add-in.

Submitted by: Clay Barker SAS employee Initiative: All
Application: Add-Ins Analysis: Data Visualization
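A minimal sketch of the underlying R call, assuming the TeachingDemos package is installed and using the built-in mtcars data purely for illustration (this is not the add-in’s own code):

```r
# Each row of the data becomes one face; each column drives one facial feature.
# install.packages("TeachingDemos")   # one-time install, if needed
library(TeachingDemos)
faces(mtcars[1:16, ])                 # Chernoff faces for the first 16 rows
```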

Support Vector Machine for Classification

By simply opening a data table, specifying X and Y variables, selecting a kernel function, and specifying its parameters in the user-friendly dialog, you can build a classification model using a Support Vector Machine. Please note that the R package ‘e1071’ must be installed before running this dialog. The package can be found at http://cran.r-project.org/web/packages/e1071/index.html.

Submitted by: Jong-Seok Lee SAS employee Initiative: All
Application: Add-Ins Analysis: Exploratory Data Analysis/Mining
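A minimal sketch of what the dialog does behind the scenes, assuming the e1071 package is installed and using the built-in iris data for illustration (the add-in’s defaults and options may differ):

```r
# Fit an SVM classifier with a radial kernel and inspect in-sample predictions.
library(e1071)
model <- svm(Species ~ ., data = iris, kernel = "radial", cost = 1, gamma = 0.25)
table(predicted = predict(model, iris), actual = iris$Species)
```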

Penalized Regression Add-in

This add-in uses JMP’s R integration functionality to provide access to several penalized regression methods. Methods included are the LASSO (least absolute shrinkage and selection operator), LARS (least angle regression), Forward Stagewise, and the Elastic Net. An R install and the “lars” and “elasticnet” R packages are required to use this add-in.

Submitted by: Clay Barker SAS employee Initiative: All
Application: Add-Ins Analysis: Regression
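A minimal sketch of the lars/elasticnet calls the add-in relies on, assuming both packages are installed and using mtcars purely for illustration (not the add-in’s actual script):

```r
# LASSO / LARS / forward stagewise coefficient paths, plus an elastic net fit.
library(lars)
library(elasticnet)
x <- as.matrix(mtcars[, -1])                 # predictors
y <- mtcars$mpg                              # response
lasso_fit <- lars(x, y, type = "lasso")      # also try type = "lar" or "forward.stagewise"
plot(lasso_fit)                              # coefficient paths along the regularization path
enet_fit  <- enet(x, y, lambda = 0.5)        # elastic net with a quadratic penalty of 0.5
```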

JMP Add-in: Univariate Nonparametric Bootstrapping

This script performs simple univariate, nonparametric bootstrap sampling by using the JMP to R Project integration. A JMP Dialog is built by the script where the variable you wish to perform bootstrapping over can be specified. A statistic to compute for each bootstrap sample is chosen and the data are sent to R using new JSL functionality available in JMP 9. The boot package in R is used to call the boot() function and the boot.ci() function to calculate the sample statistic for each bootstrap sample and the basic bootstrap confidence interval. The results are brought back to JMP and displayed using the JMP Distribution platform.

Submitted by: Kelci Miclaus SAS employee Initiative: All
Application: Add-Ins Analysis: Basic Statistics
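A minimal sketch of the boot() and boot.ci() calls described above, using the boot package that ships with R and the mtcars data for illustration (the add-in adds the JMP dialog and data transfer on top of this):

```r
# Nonparametric bootstrap of the mean of a single variable,
# with a basic bootstrap confidence interval.
library(boot)
boot_mean <- function(data, idx) mean(data[idx])   # statistic computed on each resample
res <- boot(mtcars$mpg, statistic = boot_mean, R = 1000)
boot.ci(res, type = "basic")
```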

Revolution Webinar Series #Rstats

Revolution Analytics Webinar-

 

Featured Webinar
Presenter: David Champagne, CTO, Revolution Analytics
Date: Tuesday, December 20th
Time: 11:00AM – 11:30AM Pacific

Big Data Starts with R

Traditional IT infrastructure is simply unable to meet the demands of the new “Big Data Analytics” landscape. Many enterprises are turning to the “R” statistical programming language and Hadoop (both open source projects) as a potential solution. This webinar will introduce the statistical capabilities of R within the Hadoop ecosystem. We’ll cover:

  • An introduction to new packages developed by Revolution Analytics to facilitate interaction with the data stores HDFS and HBase so that they can be leveraged from the R environment
  • An overview of how to write Map Reduce jobs in R using Hadoop
  • Special considerations that need to be made when working with R and Hadoop.

We’ll also provide additional resources that are available to people interested in integrating R and Hadoop.
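The HDFS/HBase packages referred to above are part of Revolution Analytics’ RHadoop family (rmr, rhdfs, rhbase). As a rough, hedged sketch of the programming model only, assuming the rmr package and a configured Hadoop cluster, a word-count style MapReduce job in R looks roughly like this (see the RHadoop documentation for the authoritative API):

```r
# Rough sketch of an R MapReduce job with rmr (illustrative only).
library(rmr)
words  <- to.dfs(c("big", "data", "starts", "with", "R", "big", "data"))
counts <- mapreduce(
  input  = words,
  map    = function(k, v) keyval(v, 1),                 # emit (word, 1) pairs
  reduce = function(k, vv) keyval(k, sum(unlist(vv)))   # sum the counts for each word
)
from.dfs(counts)
```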

 

Upcoming Webinars
Wed, Dec 14th
11:00AM – 11:30AM PT
Revolution R Enterprise – 100% R and More
R users already know why the R language is the lingua franca of statisticians today: because it’s the most powerful statistical language in the world. Revolution Analytics builds on the power of open source R, and adds performance, productivity and integration features to create Revolution R Enterprise. In this webinar, author and blogger David Smith will introduce the additional capabilities of Revolution R Enterprise.
Archived Webinars-
Revolution Webinar: New Features in Revolution R Enterprise 5.0 (including RevoScaleR) to Support Scalable Data Analysis
Revolution R Enterprise 5.0 is Revolution Analytics’ scalable analytics platform. At its core is Revolution Analytics’ enhanced Distribution of R, the world’s most widely-used project for statistical computing. In this webinar, Dr. Ranney will discuss new features and show examples of the new functionality, which extend the platform’s usability, integration and scalability.

 

Graphs in Statistical Analysis

One of the seminal papers establishing the importance of data visualization (as it is now called) was the 1973 paper by F. J. Anscombe, available at http://www.sjsu.edu/faculty/gerstman/StatPrimer/anscombe1973.pdf

It has probably the most elegant introduction I have ever seen in an advanced statistical analysis paper-

1. Usefulness of graphs

Most textbooks on statistical methods, and most statistical computer programs, pay too little attention to graphs. Few of us escape being indoctrinated with these notions:

(1) numerical calculations are exact, but graphs are rough;

(2) for any particular kind of statistical data there is just one set of calculations constituting a correct statistical analysis;

(3) performing intricate calculations is virtuous, whereas actually looking at the data is cheating.

A computer should make both calculations and graphs. Both sorts of output should be studied; each will contribute to understanding.

Of course the dataset makes it very interesting for people who don’t like graphical analysis too much.

From http://en.wikipedia.org/wiki/Anscombe%27s_quartet

 The x values are the same for the first three datasets.

Anscombe’s Quartet
I II III IV
x y x y x y x y
10.0 8.04 10.0 9.14 10.0 7.46 8.0 6.58
8.0 6.95 8.0 8.14 8.0 6.77 8.0 5.76
13.0 7.58 13.0 8.74 13.0 12.74 8.0 7.71
9.0 8.81 9.0 8.77 9.0 7.11 8.0 8.84
11.0 8.33 11.0 9.26 11.0 7.81 8.0 8.47
14.0 9.96 14.0 8.10 14.0 8.84 8.0 7.04
6.0 7.24 6.0 6.13 6.0 6.08 8.0 5.25
4.0 4.26 4.0 3.10 4.0 5.39 19.0 12.50
12.0 10.84 12.0 9.13 12.0 8.15 8.0 5.56
7.0 4.82 7.0 7.26 7.0 6.42 8.0 7.91
5.0 5.68 5.0 4.74 5.0 5.73 8.0 6.89

For all four datasets:

Mean of x in each case: 9 (exact)
Variance of x in each case: 11 (exact)
Mean of y in each case: 7.50 (to 2 decimal places)
Variance of y in each case: 4.122 or 4.127 (to 3 d.p.)
Correlation between x and y in each case: 0.816 (to 3 d.p.)
Linear regression line in each case: y = 3.00 + 0.500x (intercept to 2 d.p., slope to 3 d.p.)
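These numbers are easy to verify, since the quartet ships with base R as the built-in anscombe data frame (columns x1..x4 and y1..y4); a quick sketch:

```r
# Verify the summary statistics above for each of the four Anscombe datasets.
sapply(1:4, function(i) {
  x <- anscombe[[paste0("x", i)]]
  y <- anscombe[[paste0("y", i)]]
  c(mean_x = mean(x), var_x = var(x),
    mean_y = mean(y), var_y = var(y),
    cor_xy = cor(x, y),
    coef(lm(y ~ x)))                 # intercept and slope of the fitted line
})
```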
But see the graphical analysis –
While R has always been great at emphasizing graphical analysis, thanks in part to work by H Wickham and others, SAS products and the SAS language have also modified their approach; see http://www.sas.com/technologies/analytics/statistics/datadiscovery/
SAS Visual Data Discovery combines top-selling SAS products (Base SAS®, SAS/STAT® and SAS/GRAPH®) with two interfaces (SAS® Enterprise Guide® for guided tasks and batch analysis, and JMP® software for discovery and exploratory analysis), along with ODS Statistical Graphs.
While ODS Statistical Graphs are still not as smooth as, say, R’s ggplot2 (http://tinyurl.com/ggplot2-book), they are a progressive step.
Pretty graphs make for better decisions too!
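For instance, a short ggplot2 sketch (assuming the ggplot2 package is installed) that draws all four Anscombe datasets with their near-identical fitted lines makes the differences obvious:

```r
# Plot the four Anscombe datasets side by side with their fitted regression lines.
library(ggplot2)
quartet <- do.call(rbind, lapply(1:4, function(i) {
  data.frame(set = paste("Dataset", i),
             x   = anscombe[[paste0("x", i)]],
             y   = anscombe[[paste0("y", i)]])
}))
ggplot(quartet, aes(x, y)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  facet_wrap(~ set)
```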

 

 

Announcing Jaspersoft 4.5

Message from  Jaspersoft

————————–

Announcing Jaspersoft 4.5:
Powerful Analytics for All Your Data

This new release provides a single, easy-to-use environment designed with the non-technical user in mind, delivering insight to data stored in relational, OLAP, and Big Data environments.

New in Jaspersoft 4.5

Broad and Deep Big Data Connectivity
Intuitive drag and drop web UI for performing reporting and analysis against Hadoop, MongoDB, Cassandra, and many more.

Improved Ad Hoc Reporting and Analysis
Non-technical users can perform their own investigation.

Supercharged Analytic Performance
Enhanced push-down query processing and In-Memory Analysis engine improves response times for aggregation and summary queries.

Join us for an in-depth review and demo, showcasing the new features for self-service BI across any data source.

For more information on Jaspersoft 4.5, or any Jaspersoft solution, contact us at sales@jaspersoft.com or 415-348-2380.

 

Download Jaspersoft 4.5 Today.

Download your free 30-day evaluation trial now.


 

 

Data Documentation Initiative

Here is a nice initiative in standardizing data documentation for social sciences (which can be quite a relief to legions of analysts)

http://www.ddialliance.org/what

 

 

 

 

Benefits of DDI

The DDI facilitates:

  • Interoperability. Codebooks marked up using the DDI specification can be exchanged and transported seamlessly, and applications can be written to work with these homogeneous documents.
  • Richer content. The DDI was designed to encourage the use of a comprehensive set of elements to describe social science datasets as completely and as thoroughly as possible, thereby providing the potential data analyst with broader knowledge about a given collection.
  • Single document – multiple purposes. A DDI codebook contains all of the information necessary to produce several different types of output, including, for example, a traditional social science codebook, a bibliographic record, or SAS/SPSS/Stata data definition statements. Thus, the document may be repurposed for different needs and applications. Changes made to the core document will be passed along to any output generated.
  • On-line subsetting and analysis. Because the DDI markup extends down to the variable level and provides a standard uniform structure and content for variables, DDI documents are easily imported into on-line analysis systems, rendering datasets more readily usable for a wider audience.
  • Precision in searching. Since each of the elements in a DDI-compliant codebook is tagged in a specific way, field-specific searches across documents and studies are enabled. For example, a library of DDI codebooks could be searched to identify datasets covering protest demonstrations during the 1960s in specific states or countries.
Also see-
  1. http://www.ddialliance.org/Specification/DDI-Codebook/2.1/DTD/Documentation/DDI2-1-tree.html
  2. http://www.ddialliance.org/Specification/DDI-Lifecycle/3.1/

 

Interview Zach Goldberg, Google Prediction API

Here is an interview with Zach Goldberg, who is the product manager of the Google Prediction API, the next-generation, state-of-the-art machine learning analytics-as-an-API service and cloud-based model-building browser app.
Ajay- Describe your journey in science and technology from high school to your current job at Google.

Zach- First, thanks so much for the opportunity to do this interview Ajay!  My personal journey started in college where I worked at a startup named Invite Media.   From there I transferred to the Associate Product Manager (APM) program at Google.  The APM program is a two year rotational program.  I did my first year working in display advertising.  After that I rotated to work on the Prediction API.

Ajay- How does the Google Prediction API help an average business analytics customer who is already using enterprise software and servers to generate his business forecasts? How does the Google Prediction API fit in or complement other APIs in the Google API suite?

Zach- The Google Prediction API is a cloud based machine learning API.  We offer the ability for anybody to sign up and within a few minutes have their data uploaded to the cloud, a model built and an API to make predictions from anywhere. Traditionally the task of implementing predictive analytics inside an application required a fair amount of domain knowledge; you had to know a fair bit about machine learning to make it work.  With the Google Prediction API you only need to know how to use an online REST API to get started.

You can learn more about how we help businesses by watching our video and going to our project website.

Ajay- What are the additional use cases of the Google Prediction API that you think traditional enterprise software in business analytics ignores, or is not so strong on? What use cases would you suggest NOT using the Google Prediction API for in an enterprise?

Zach- We are living in a world that is changing rapidly thanks to technology.  Storing, accessing, and managing information is much easier and more affordable than it was even a few years ago.  That creates exciting opportunities for companies, and we hope the Prediction API will help them derive value from their data.

The Prediction API focuses on providing predictive solutions to two types of problems: regression and classification. Businesses facing problems where there is sufficient data to describe an underlying pattern in either of these two areas can expect to derive value from using the Prediction API.

Ajay- What are your separate incentives to teach about Google APIs to academics or researchers in universities globally?

Zach- I’d refer you to our university relations page

Google thrives on academic curiosity. While we do significant in-house research and engineering, we also maintain strong relations with leading academic institutions world-wide pursuing research in areas of common interest. As part of our mission to build the most advanced and usable methods for information access, we support university research, technological innovation and the teaching and learning experience through a variety of programs.

Ajay- What is the biggest challenge you face while communicating about the Google Prediction API to traditional users of enterprise software?

Zach- Businesses often expect that implementing predictive analytics is going to be very expensive and require a lot of resources.  Many have already begun investing heavily in this area.  Quite often we’re faced with surprise, and even skepticism, when they see the simplicity of the Google Prediction API.  We work really hard to provide a very powerful solution and take care of the complexity of building high quality models behind the scenes so businesses can focus more on building their business and less on machine learning.