Summary- Cloud-based scoring of models on EC2 (Zementis’ ADAPA) partnering with the actual modeling software in R (Revolution Analytics’ RevoDeployR).
Zementis has announced a strategic partnership with Revolution Analytics, the leading commercial provider of software and support for the popular open source R statistics language. With this partnership, predictive models developed in Revolution R Enterprise are now accessible for real-time scoring through the ADAPA Decisioning Engine by Zementis.
ADAPA is an extremely fast and scalable predictive platform. Models deployed in ADAPA are automatically available for execution in real-time and batch-mode as Web Services. ADAPA allows Revolution R Enterprise to leverage the Predictive Model Markup Language (PMML) for better decision management. With PMML, models built in R can be used in a wide variety of real-world scenarios without requiring laborious or expensive proprietary processes to convert them into applications capable of running on an execution system.
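To get a sense of the workflow, here is a minimal sketch of exporting an R model as PMML, assuming the open source pmml and XML packages are installed; the model and file name are only illustrative:

library(pmml)
library(XML)

# Fit a simple logistic regression on the built-in iris data
iris2 <- transform(iris, is_setosa = as.numeric(Species == "setosa"))
fit <- glm(is_setosa ~ Sepal.Length + Sepal.Width,
           data = iris2, family = binomial)

# Convert the fitted model to PMML and write it to disk; the resulting
# XML file is what a PMML consumer such as ADAPA would import
fit_pmml <- pmml(fit)
saveXML(fit_pmml, file = "iris_logit.pmml")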
“By partnering with Zementis, Revolution Analytics is building an end-to-end solution for moving enterprise-level predictive R models into the execution environment,” said Jeff Erhardt, Revolution Analytics Chief Operating Officer. “With Zementis, we are eliminating the need to take R applications apart and recode, retest and redeploy them in order to obtain desirable results.”
Got demo?
Yes, we do! Revolution Analytics and Zementis have put together a demo that combines the building of models in R with automatic deployment and execution in ADAPA. It uses Revolution Analytics’ RevoDeployR, a new Web Services framework that allows data analysts working in R to publish R scripts to a server-based installation of Revolution R Enterprise.
RevoDeployR and ADAPA allow real-time analysis and predictions from R to be consumed by existing Excel spreadsheets, BI dashboards and Web-based applications.
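As an illustration of what “scoring as a Web Service” means in practice, here is a hedged sketch of how a client might call a REST-style scoring endpoint from R; the URL, field names and JSON layout are hypothetical, not the actual RevoDeployR or ADAPA API:

library(RCurl)
library(rjson)

# Build a JSON payload of predictor values (illustrative fields)
payload <- toJSON(list(Sepal.Length = 5.1, Sepal.Width = 3.5))

# POST it to a hypothetical scoring endpoint and parse the reply
raw <- getURL("http://example.com/scoring/iris_logit",
              customrequest = "POST",
              postfields = payload,
              httpheader = c("Content-Type" = "application/json"))
fromJSON(raw)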
Predictive analytics with RevoDeployR from Revolution Analytics and ADAPA from Zementis put model building and real-time scoring into a league of their own. Seriously!
Ajay- Describe your background working with analytics. How can we make analytics and science more attractive career options for young students?
David- I had an interest in math from an early age, spurred by reading lots of science fiction with mathematicians and scientists in leading roles. I was fortunate to be at Harry and David (Fruit of the Month Club) when they were in the forefront of applying multivariate statistics to the challenge of targeting catalogs and other snail-mail offerings. Later I had the opportunity to expand these techniques to the retail sphere with Williams-Sonoma, who grew their retail business with the support of their catalog mailings. Since they had several catalog titles and product lines, cross-selling presented additional analytic challenges, and with the growth of the internet there was still another channel to consider, with its own dynamics.
After helping to found Abacus Direct Marketing, I became an independent consultant, which provided a lot of variety in applying statistics and data mining across settings from health care to telecom to credit marketing and education.
Students should be exposed to the many roles that analytics plays in modern life, and to the excitement of finding meaningful and useful patterns in the vast profusion of data that is now available.
Ajay- Describe your most challenging project in 3 decades of experience in this field.
David- Hard to choose just one, but the educational field has been particularly interesting. Partnering with Olympic Behavior Labs, we’ve developed systems to help identify students who are most at-risk for dropping out of school to help target interventions that could prevent dropout and promote success.
Ajay- What do you think are the top 5 trends in analytics for 2011?
David- Big Data, Privacy concerns, quick response to consumer needs, integration of testing and analysis into business processes, social networking data.
Ajay- Do you think techniques like RFM and LTV are adequately utilized by organizations? How can they be propagated further?
David- Organizations vary amazingly in how sophisticated or unsophisticated they are in analytics. A key factor in success as a consultant is to understand where each client is on this continuum and how well that serves their needs.
Ajay- What software have you worked with in this field? Name your favorite per category.
David- I started out using COBOL (that dates me!), then concentrated on SAS for many years. More recently R is my favorite, because of its coverage, currency, programming model, and debugging capabilities.
Ajay- Independent consulting can be a strenuous job. What do you do to unwind?
David- Cycling, yoga, meditation, hiking and guitar.
David Katz has been in the forefront of applying statistical models and database technology to marketing problems since 1980. He holds a Master’s Degree in Mathematics from the University of California, Berkeley. He is one of the founders of Abacus Direct Marketing and was previously the Director of Database Development for Williams-Sonoma.
He is the founder and President of David Katz Consulting, specializing in sophisticated statistical services for a variety of applications, with a special focus on the Direct Marketing Industry. David Katz has an extensive background that includes experience in all aspects of direct marketing from data mining, to strategy, to test design and implementation. In addition, he consults on a variety of data mining and statistical applications from public health to collections analysis. He has partnered with consulting firms such as Ernst and Young, Prediction Impact, and most recently on this project with Dataspora.
Track 2: Social Data and Telecom
Case Study: Major North American Telecom
Social Networking Data for Churn Analysis
A North American Telecom found that it had a window into social contacts – who has been calling whom on its network. This data proved to be predictive of churn. Using SQL, and GAM in R, we explored how to use this data to improve the identification of likely churners. We will present many dimensions of the lessons learned on this engagement.
Speaker: David Katz, Senior Analyst, Dataspora, and President, David Katz Consulting
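For readers curious what a GAM-based churn model of the kind described in this case study looks like, here is a hedged sketch in R using the mgcv package; the data frame churn_df and its columns are made up for illustration, not the engagement’s actual variables:

library(mgcv)

# churn_df: one row per subscriber, with churned (0/1), tenure in
# months, and a count of distinct on-net contacts from call records
fit <- gam(churned ~ s(tenure_months) + s(n_contacts, k = 5),
           family = binomial, data = churn_df)
summary(fit)

# Score the current base with predicted churn probabilities
churn_df$p_churn <- predict(fit, type = "response")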
Exhibit Hours: Monday, March 14th, 10:00am to 7:30pm
Related Articles (P.S. The Related Articles list is auto-generated by Zemanta, a service embedded within WordPress.com, in case you are wondering what the deal with the linking is)
R Authors get more choice and variety now-
http://www.mail-archive.com/r-help@r-project.org/msg122965.html
We are pleased to announce the launch of a new series of books on R.
Chapman & Hall/CRC: The R Series
Aims and Scope
This book series reflects the recent rapid growth in the development and application of R, the programming language and software environment for statistical computing and graphics. R is now widely used in academic research, education, and industry. It is constantly growing, with new versions of the core software released regularly and more than 2,600 packages available. It is difficult for the documentation to keep pace with the expansion of the software, and this vital book series provides a forum for the publication of books covering many aspects of the development and application of R.
The scope of the series is wide, covering three main threads:
• Applications of R to specific disciplines such as biology, epidemiology,
genetics, engineering, finance, and the social sciences.
• Using R for the study of topics of statistical methodology, such as linear
and mixed modeling, time series, Bayesian methods, and missing data.
• The development of R, including programming, building packages, and graphics.
The books will appeal to programmers and developers of R software, as well as applied statisticians and data analysts in many fields. The books will feature detailed worked examples and R code fully integrated into the text, ensuring their usefulness to researchers, practitioners and students.
Series Editors
John M. Chambers (Department of Statistics, Stanford University, USA; j...@stat.stanford.edu)
Torsten Hothorn (Institut für Statistik, Ludwig-Maximilians-Universität, München, Germany; torsten.hoth...@stat.uni-muenchen.de)
Duncan Temple Lang (Department of Statistics, University of California, Davis, USA; dun...@wald.ucdavis.edu)
Hadley Wickham (Department of Statistics, Rice University, Houston, Texas, USA; had...@rice.edu)
Call for Proposals
We are interested in books covering all aspects of the development and application of R software. If you have an idea for a book, please contact one of the series editors above or one of the Chapman & Hall/CRC statistics acquisitions editors below. Please provide brief details of topic, audience, aims and scope, and include an outline if possible.
We look forward to hearing from you.
Best regards,
Rob Calver (rob.cal...@informa.com)
David Grubbs (david.gru...@taylorandfrancis.com)
John Kimmel (john.kim...@taylorandfrancis.com)
This promotional offer enables you to try a limited amount of the Windows Azure platform at no charge. The subscription includes a base level of monthly compute hours, storage, data transfers, a SQL Azure database, Access Control transactions and Service Bus connections at no charge. Please note that any usage over this introductory base level will be charged at standard rates.
Included each month at no charge:
Windows Azure
25 hours of a small compute instance
500 MB of storage
10,000 storage transactions
SQL Azure
1GB Web Edition database (available for first 3 months only)
Windows Azure platform AppFabric
100,000 Access Control transactions
2 Service Bus connections
Data Transfers (per region)
500 MB in
500 MB out
Any monthly usage in excess of the above amounts will be charged at the standard rates. This introductory special will end on March 31, 2011 and all usage will then be charged at the standard rates.
As part of AWS’s Free Usage Tier, new AWS customers can get started with Amazon EC2 for free. Upon sign-up, new AWS customers receive the following EC2 services each month for one year:
750 hours of EC2 running Linux/Unix Micro instance usage
750 hours of Elastic Load Balancing plus 15 GB data processing
10 GB of Amazon Elastic Block Storage (EBS) plus 1 million IOs, 1 GB snapshot storage, 10,000 snapshot Get Requests and 1,000 snapshot Put Requests
15 GB of bandwidth in and 15 GB of bandwidth out aggregated across all AWS services
Paid Instances (On-Demand, hourly rates):

Instance type                              Linux/UNIX   Windows
Standard: Small (Default)                  $0.085       $0.12
Standard: Large                            $0.34        $0.48
Standard: Extra Large                      $0.68        $0.96
Micro: Micro                               $0.02        $0.03
High-Memory: Extra Large                   $0.50        $0.62
High-Memory: Double Extra Large            $1.00        $1.24
High-Memory: Quadruple Extra Large         $2.00        $2.48
High-CPU: Medium                           $0.17        $0.29
High-CPU: Extra Large                      $0.68        $1.16
Cluster Compute: Quadruple Extra Large     $1.60        N/A*
Cluster GPU: Quadruple Extra Large         $2.10        N/A*
* Windows is not currently available for Cluster Compute or Cluster GPU Instances.
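For a rough comparison, a few lines of R turn the hourly rates above into approximate monthly costs, assuming a 30-day month of continuous (24×7) usage:

hours <- 24 * 30   # one 30-day month, running around the clock
rates <- c(m1.small_linux   = 0.085,
           m1.small_windows = 0.12,
           m1.large_linux   = 0.34,
           t1.micro_linux   = 0.02)
round(rates * hours, 2)   # approximate monthly cost in USD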
NOTE- Amazon Instance definitions differ slightly from Azure definitions
Standard Instances
Instances of this family are well suited for most applications.
Small Instance – default*
1.7 GB memory
1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit)
160 GB instance storage
32-bit platform
I/O Performance: Moderate
API name: m1.small
Large Instance
7.5 GB memory
4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
850 GB instance storage
64-bit platform
I/O Performance: High
API name: m1.large
Extra Large Instance
15 GB memory
8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
1,690 GB instance storage
64-bit platform
I/O Performance: High
API name: m1.xlarge
Micro Instances
Instances of this family provide a small amount of consistent CPU resources and allow you to burst CPU capacity when additional cycles are available. They are well suited for lower throughput applications and web sites that consume significant compute cycles periodically.
Micro Instance
613 MB memory
Up to 2 EC2 Compute Units (for short periodic bursts)
EBS storage only
32-bit or 64-bit platform
I/O Performance: Low
API name: t1.micro
High-Memory Instances
Instances of this family offer large memory sizes for high throughput applications, including database and memory caching applications.
High-Memory Extra Large Instance
17.1 GB of memory
6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
420 GB of instance storage
64-bit platform
I/O Performance: Moderate
API name: m2.xlarge
High-Memory Double Extra Large Instance
34.2 GB of memory
13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each)
850 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.2xlarge
High-Memory Quadruple Extra Large Instance
68.4 GB of memory
26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.4xlarge
High-CPU Instances
Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.
High-CPU Medium Instance
1.7 GB of memory
5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each)
350 GB of instance storage
32-bit platform
I/O Performance: Moderate
API name: c1.medium
High-CPU Extra Large Instance
7 GB of memory
20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: c1.xlarge
Cluster Compute Instances
Instances of this family provide proportionally high CPU resources with increased network performance and are well suited for High Performance Compute (HPC) applications and other demanding network-bound applications. Learn more about use of this instance type for HPC applications.
Cluster Compute Quadruple Extra Large Instance
23 GB of memory
33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
1690 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cc1.4xlarge
Cluster GPU Instances
Instances of this family provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefiting from highly parallelized processing, including HPC, rendering and media processing applications. While Cluster Compute Instances provide the ability to create clusters of instances connected by a low-latency, high-throughput network, Cluster GPU Instances provide an additional option for applications that can benefit from the efficiency gains of the parallel computing power of GPUs over what can be achieved with traditional processors. Learn more about use of this instance type for HPC applications.
Cluster GPU Quadruple Extra Large Instance
22 GB of memory
33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
2 x NVIDIA Tesla “Fermi” M2050 GPUs
1690 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cg1.4xlarge
versus-
Windows Azure compute instances come in five unique sizes to enable complex applications and workloads.
Compute Instance Size   CPU           Memory    Instance Storage   I/O Performance
Extra Small             1 GHz         768 MB    20 GB*             Low
Small                   1.6 GHz       1.75 GB   225 GB             Moderate
Medium                  2 x 1.6 GHz   3.5 GB    490 GB             High
Large                   4 x 1.6 GHz   7 GB      1,000 GB           High
Extra Large             8 x 1.6 GHz   14 GB     2,040 GB           High
*There is a limitation on the Virtual Hard Drive (VHD) size if you are deploying a Virtual Machine role on an extra small instance. The VHD can only be up to 15 GB.
A message from Predictive Analytics World on newly available session videos. Many are free, so do check them out.
Access PAW DC Session Videos Now
Predictive Analytics World is pleased to announce on-demand access to the videos of PAW Washington DC, October 2010, including over 30 sessions and keynotes that you may view at your convenience. Access this leading predictive analytics content online now:
Select individual conference sessions, or recognize savings by registering for access to one or two full days of sessions. These on-demand videos deliver PAW DC right to your desk, covering hot topics and advanced methods.
PAW DC videos feature over 25 speakers with case studies from leading enterprises such as: CIBC, CEB, Forrester, Macy’s, MetLife, Microsoft, Miles Kimball, Monster.com, Oracle, Paychex, SunTrust, Target, UPMC, Xerox, Yahoo!, YMCA, and more.
Keynote: Five Ways Predictive Analytics Cuts Enterprise Risk
Eric Siegel, Ph.D., Program Chair, Predictive Analytics World
All business is an exercise in risk management. All organizations would benefit from measuring, tracking and computing risk as a core process, much like insurance companies do.
Predictive analytics does the trick, one customer at a time. This technology is a data-driven means to compute the risk each customer will defect, not respond to an expensive mailer, consume a retention discount even if she were not going to leave in the first place, not be targeted for a telephone solicitation that would have landed a sale, commit fraud, or become a “loss customer” such as a bad debtor or an insurance policy-holder with high claims.
In this keynote session, Dr. Eric Siegel reveals:
– Five ways predictive analytics evolves your enterprise to reduce risk
– Hidden sources of risk across operational functions
– What every business should learn from insurance companies
– How advancements have reversed the very meaning of fraud
– Why “man + machine” teams are greater than the sum of their parts for enterprise decision support
Platinum Sponsor Presentation: Analytics – The Beauty of Diversity
Anne H. Milley, Senior Director of Analytic Strategy, Worldwide Product Marketing, SAS
Analytics contributes to, and draws from, multiple disciplines. The unifying theme of “making the world a better place” is bred from diversity. For instance, the same methods used in econometrics might be used in market research, psychometrics and other disciplines. In a similar way, diverse paradigms are needed to best solve problems, reveal opportunities and make better decisions. This is why we evolve capabilities to formulate and solve a wide range of problems through multiple integrated languages and interfaces. Extending that, we have provided integration with other languages so that users can draw on the disciplines and paradigms needed to best practice their craft.
Gold Sponsor Presentation: Predictive Analytics Accelerate Insight for Financial Services
Finbarr Deely, Director of Business Development, ParAccel
Financial services organizations face immense hurdles in maintaining profitability and building competitive advantage. They must perform “what-if” scenario analysis, identify risks, and detect fraud patterns. The advanced analytic complexity required often makes such analysis slow and painful, if not impossible. This presentation outlines the analytic challenges facing these organizations and provides a clear path to the accelerated insight needed to perform in today’s complex business environment to reduce risk, stop fraud and increase profits.
• The value of predictive analytics in accelerating insight
• Financial services analytic case studies
• Brief overview of the ParAccel Analytic Database
TOPIC: SURVEY ANALYSIS Case Study: YMCA Turning Member Satisfaction Surveys into an Actionable Narrative
Dean Abbott, President, Abbott Analytics
Employees are a key constituency at the Y and previous analysis has shown that their attitudes have a direct bearing on Member Satisfaction. This session will describe a successful approach for the analysis of YMCA employee surveys. Decision trees are built and examined in depth to identify key questions in describing key employee satisfaction metrics, including several interesting groupings of employee attitudes. Our approach will be contrasted with other factor analysis and regression-based approaches to survey analysis that we used initially. The predictive models described are currently in use and resulted in both greater understanding of employee attitudes, and a revised “short-form” survey with fewer key questions identified by the decision trees as the most important predictors.
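As a flavor of this approach, here is a hedged sketch of fitting and inspecting a decision tree on survey data with the rpart package; survey_df and its columns are hypothetical stand-ins, not the actual YMCA data:

library(rpart)

# Model overall satisfaction as a function of all other survey items
tree <- rpart(overall_satisfaction ~ ., data = survey_df,
              method = "anova", control = rpart.control(cp = 0.01))

printcp(tree)            # which questions drive the splits, by complexity
plot(tree); text(tree)   # inspect the tree structure in depth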
TOPIC: INDUSTRY TRENDS 2010 Data Miner Survey Results: Highlights
Karl Rexer, Ph.D., Rexer Analytics
Do you want to know the views, actions, and opinions of the data mining community? Each year, Rexer Analytics conducts a global survey of data miners to find out. This year at PAW we unveil the results of our 4th Annual Data Miner Survey. This session will present the research highlights.
Multiple Case Studies: U.S. DoD, U.S. DHS, SSA – Text Mining: Lessons Learned
John F. Elder, Chief Scientist, Elder Research, Inc.
Text Mining is the “Wild West” of data mining and predictive analytics – the potential for gain is huge, the capability claims are often tall tales, and the “land rush” for leadership is very much a race.
In solving unstructured (text) analysis challenges, we found that principles from inductive modeling – learning relationships from labeled cases – have great power to enhance text mining. Dr. Elder highlights key technical breakthroughs discovered while working on projects for leading government agencies, including:
– Prioritizing searches for the Dept. of Homeland Security
– Quick decisions for Social Security Admin. disability
– Document discovery for the Dept. of Defense
– Disease discovery for the Dept. of Homeland Security
Keynote: How Target Gets the Most out of Its Guest Data to Improve Marketing ROI
Andrew Pole, Senior Manager, Media and Database Marketing, Target
In this session, you’ll learn how Target leverages its own internal guest data to optimize its direct marketing – with the ultimate goal of enhancing our guests’ shopping experience and driving in-store and online performance. You will hear about what guest data is available at Target, how and where we collect it, and how it is used to improve the performance and relevance of direct marketing vehicles. Furthermore, we will discuss Target’s development and usage of guest segmentation, response modeling, and optimization as means to suppress poor performers from mailings, determine relevant product categories and services for online targeted content, and optimally assign receipt marketing offers to our guests when offer quantities are limited.
Platinum Sponsor Presentation: Driving Analytics Into Decision Making
Jason Verlen, Director, SPSS Product Strategy & Management, IBM Software Group
Organizations looking to dramatically improve their business outcomes are turning to decision management, a convergence of technology and business processes that is used to streamline and predict the outcome of daily decision-making. IBM SPSS Decision Management technology provides the critical link between analytical insight and recommended actions. In this session you’ll learn how Decision Management software integrates analytics with business rules and business applications for front-line systems such as call center applications, insurance claim processing, and websites. See how you can improve every customer interaction, minimize operational risk, reduce fraud and optimize results.
TOPIC: DATA INFRASTRUCTURE AND INTEGRATION Case Study: Macy’s The world is not flat (even though modeling software has to think it is)
Paul Coleman, Director of Marketing Statistics, Macy’s Inc.
Software for statistical modeling generally uses flat files, where each record represents a unique case with all its variables. In contrast, most large databases are relational, where data are distributed among various normalized tables for efficient storage. Variable-creation and model-scoring engines are necessary to bridge data mining and storage needs. Development datasets taken from a sampled history require snapshot management. Scoring datasets are taken from the present timeframe and the entire available universe. Organizations with significant data must decide when to store or calculate necessary data, and understand the consequences for their modeling program.
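A hedged sketch of that bridging step in R, using the sqldf package to flatten two hypothetical relational tables (customers, orders) into the one-row-per-case file that modeling software expects:

library(sqldf)

# customers and orders are illustrative data frames in the R session
flat <- sqldf("
  SELECT c.customer_id,
         c.segment,
         COUNT(o.order_id) AS n_orders,
         SUM(o.amount)     AS total_spend
  FROM customers c
  LEFT JOIN orders o ON o.customer_id = c.customer_id
  GROUP BY c.customer_id, c.segment")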
TOPIC: CUSTOMER VALUE Case Study: SunTrust When One Model Will Not Solve the Problem – Using Multiple Models to Create One Solution
Dudley Gwaltney, Group Vice President, Analytical Modeling, SunTrust Bank
In 2007, SunTrust Bank developed a series of models to identify clients likely to have large changes in deposit balances. The series includes three basic binary models and two linear regression models.
Based on the models, 15% of SunTrust clients were targeted as those most likely to have large balance changes. These clients accounted for 65% of the absolute balance change and 60% of the large-balance-change clients. The targeted clients are grouped into portfolios and assigned to individual SunTrust retail branches. Since 2008, the portfolio has generated a 2.6% increase in balances over control.
Using the SunTrust example, this presentation will focus on how multiple models can be combined to create one solution.
TOPIC: RESPONSE & CROSS-SELL Case Study: Paychex Staying One Step Ahead of the Competition – Development of a Predictive 401(k) Marketing and Sales Campaign
Jason Fox, Information Systems and Portfolio Manager, Paychex
An in-depth case study of Paychex, Inc. using predictive modeling to turn the tide on competitive pressures within its own client base. Paychex, a leading provider of payroll and human resource solutions, will guide you through the development of a predictive 401(k) marketing and sales model. Through the use of sophisticated data mining techniques and regression analysis, the model derives the probability that a client will add retirement services products with Paychex or with a competitor. The session will include roadblocks that could have ended development, and ROI analysis.
Speaker: Frank Fiorille, Director of Enterprise Risk Management, Paychex
Speaker: Jason Fox, Risk Management Analyst, Paychex
TOPIC: SEGMENTATION Practitioner: Canadian Imperial Bank of Commerce Segmentation Do’s and Don’ts
Daymond Ling, Senior Director, Modelling & Analytics, Canadian Imperial Bank of Commerce
The concept of Segmentation is well accepted in business and has withstood the test of time. Even with the advent of new artificial intelligence and machine learning methods, this old war horse still has its place and is alive and well. Like all analytical methods, when used correctly it can lead to enhanced market positioning and competitive advantage, while improper application can have severe negative consequences.
This session will explore the elements of success and the worst practices that lead to failure. The relationship between segmentation and predictive modeling will also be discussed, to clarify when it is appropriate to use one versus the other and how to use them together synergistically.
TOPIC: SOCIAL DATA
Thought Leadership Social Network Analysis: Killer Application for Cloud Analytics
James Kobielus, Senior Analyst, Forrester Research
Social networks such as Twitter and Facebook are a potential goldmine of insights into what is truly going through customers’ minds. Every company wants to know whether, how, how often, and by whom it is being mentioned across the billowing new cloud of social media. Just as important, every company wants to influence those discussions in its favor, target new business, and harvest maximum revenue potential. In this session, Forrester analyst James Kobielus identifies fruitful applications of social network analysis in customer service, sales, marketing, and brand management. He presents a roadmap for enterprises to extend their inline analytics initiatives to high-performance data warehousing (DW) clouds and appliances in order to analyze shifting patterns of customer sentiment, influence, and propensity. Leveraging Forrester’s ongoing research in advanced analytics and customer relationship management, Kobielus will discuss industry trends, commercial modeling tools, and emerging best practices in social network analysis, which represents a game-changing new discipline in predictive analytics.
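For the analytically inclined, a small sketch of the kind of influence measures involved, using the igraph package on a made-up who-mentions-whom edge list:

library(igraph)

# Hypothetical mention graph: each row is one "from mentions to" event
edges <- data.frame(from = c("ann", "bob", "bob", "carol"),
                    to   = c("bob", "carol", "ann", "ann"))
g <- graph_from_data_frame(edges, directed = TRUE)

sort(degree(g, mode = "in"), decreasing = TRUE)  # most-mentioned users
page_rank(g)$vector                              # influence-style score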
Trish Mathe, Director of Database Marketing, Life Line Screening
While Life Line is successfully executing a US CRM roadmap, they are also beginning this same evolution abroad. They are beginning in the UK, where Merkle procured data and built a response model that is pulling responses over 30% higher than competitors. This presentation will give an overview of the US CRM roadmap, and then focus on the beginning of their strategy abroad, focusing on the data procurement they could not get anywhere else but through Merkle and the successful modeling and analytics for the UK.
Speaker: Ozgur Dogan, VP, Quantitative Solutions Group, Merkle Inc
Speaker: Trish Mathe, Director of Database Marketing, Life Line Screening
TOPIC: SURVEY ANALYSIS Case Study: Forrester Making Survey Insights Addressable and Scalable – The Case Study of Forrester’s Technographics Benchmark Survey
Marketers use surveys to create enterprise-wide strategic insights: (1) to develop segmentation schemes, (2) to summarize consumer behaviors and attitudes for the whole US population, and (3) to draw unified views about their target audience from multiple surveys. However, these insights are not directly addressable and scalable to the whole consumer universe, which is very important when applying the power of survey intelligence to the one-to-one consumer marketing problems marketers routinely face. Acxiom partnered with Forrester Research, creating addressable and scalable applications of Forrester’s Technographics Survey, and applied it successfully to a number of industries and applications.
TOPIC: HEALTHCARE Case Study: UPMC Health Plan A Predictive Model for Hospital Readmissions
Scott Zasadil, Senior Scientist, UPMC Health Plan
Hospital readmissions are a significant component of our nation’s healthcare costs. Predicting who is likely to be readmitted is a challenging problem. Using a set of 123,951 hospital discharges spanning nearly three years, we developed a model that predicts an individual’s 30-day readmission should they incur a hospital admission. The model uses an ensemble of boosted decision trees and prior medical claims and captures 64% of all 30-day readmits with a true positive rate of over 27%. Moreover, many of the ‘false’ positives are simply delayed true positives. 53% of the predicted 30-day readmissions are readmitted within 180 days.
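A hedged sketch of an ensemble of boosted decision trees for 30-day readmission, in the spirit of the model described above, using the gbm package; claims_df and its columns are hypothetical:

library(gbm)

fit <- gbm(readmit_30d ~ ., data = claims_df,
           distribution = "bernoulli",   # 0/1 readmission outcome
           n.trees = 2000, interaction.depth = 4,
           shrinkage = 0.01, cv.folds = 5)

best <- gbm.perf(fit, method = "cv")     # CV-chosen number of trees
claims_df$p_readmit <- predict(fit, newdata = claims_df,
                               n.trees = best, type = "response")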
A special issue of the Journal of Statistical Software has come out devoted to multi-state models and competing risks. It is a must-read for anyone with an interest in pharma analytics or survival analysis, even if you don’t know much R.
Here is an extract from “mstate: An R Package for the Analysis of Competing Risks and Multi-State Models”:
Multi-state models are a very useful tool to answer a wide range of questions in survival analysis that cannot, or only in a more complicated way, be answered by classical models. They are suitable for both biomedical and other applications in which time-to-event variables are analyzed. However, they are still not frequently applied. So far, an important reason for this has been the lack of available software. To overcome this problem, we have developed the mstate package in R for the analysis of multi-state models. The package covers all steps of the analysis of multi-state models, from model building and data preparation to estimation and graphical representation of the results. It can be applied to non- and semi-parametric (Cox) models. The package is also suitable for competing risks models, as they are a special category of multi-state models.
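For readers new to the package, here is a minimal sketch of the standard mstate workflow for an illness-death model; the data frame mydata, its time/status columns and the age covariate are illustrative:

library(mstate)
library(survival)

# Illness-death model: healthy -> ill -> dead, healthy -> dead
tmat <- trans.illdeath()

# msprep() turns one-row-per-subject data into long format, one row
# per transition a subject is at risk for
long <- msprep(time = c(NA, "illt", "dt"),
               status = c(NA, "ills", "ds"),
               data = mydata, trans = tmat, keep = "age")

# Cox model with a separate baseline hazard for each transition
fit <- coxph(Surv(Tstart, Tstop, status) ~ age + strata(trans),
             data = long)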
—————————–
Issues for JSS Special Volume 38: Competing Risks and Multi-State Models