Tag Archives: data driven
Message from PAWCON-
Space is filling up fast at the Hilton New York, host hotel for Predictive Analytics World and Text Analytics World, next month in New York City. Take advantage of the special room rate negotiated for attendees before Friday, September 23rd.
Space is limited, so be sure to book your room before it’s too late.
You can reserve your room today by calling 212-586-7000 and referencing Data Driven Business Week, or online at:
View the PAW overview video: www.pawcon.com/newyork/2011/video_about_predictive_analytics_world.php
This is a short list of well-known as well as lesser-known R (#rstats) code snippets, packages, and tricks for building a business intelligence application. It will be slightly messy (and not Messi), but I hope to refine it someday when the cows come home.
It assumes that BI basically consists of:
a database, a document database, report-creation/dashboard software, as well as R packages specific to business intelligence.
What is business intelligence?
Seamless dissemination of data across the organization. In short, let it flow: from raw transactional data to aggregate dashboards, to control and test experiments, to new and legacy data mining models. A business-intelligence-enabled organization allows information to flow easily AND captures insights and feedback for further action.
BI software has lately come to mean just reporting software, and business analytics has come to mean primarily predictive analytics. The terms are interchangeable in my opinion: BI reports can also be called descriptive aggregated statistics or descriptive analytics, and predictive analytics is useless and incomplete unless you measure its effect in dashboards and summary reports.
Data mining is a bit more than predictive analytics: it includes pattern recognition as well as black-box machine learning algorithms. To further aggravate these divides, students mostly learn data mining in computer science departments, predictive analytics (if at all) in business and statistics departments, and no one teaches metrics, dashboards, and reporting in mainstream academia, even though a large number of graduates will end up fiddling with spreadsheets or dashboards in their real careers.
1) Using R with Databases
I created a short list of database connectivity options for R at https://rforanalytics.wordpress.com/odbc-databases-for-r/ but R has released three new versions since then.
The RODBC package remains the package of choice for connecting to SQL Databases.
Details on creating DSN and connecting to Databases are given at https://rforanalytics.wordpress.com/odbc-databases-for-r/
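Once a DSN exists, the typical RODBC workflow is short. A minimal sketch follows; the DSN name, credentials, table, and columns are placeholders for whatever you set up in your ODBC manager:

```r
# Sketch: querying a SQL database from R via RODBC.
# "myDSN", the credentials, and the "sales" table are placeholders.
library(RODBC)

ch <- odbcConnect("myDSN", uid = "user", pwd = "password")

# Pull an aggregate ready for a dashboard
regional <- sqlQuery(ch, "SELECT region, SUM(revenue) AS total
                          FROM sales GROUP BY region")

# Or list available tables and fetch one wholesale
sqlTables(ch)
sales <- sqlFetch(ch, "sales")

close(ch)
```

`sqlQuery` returns an ordinary data frame, so results drop straight into the rest of an R analysis.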
For document databases like MongoDB and CouchDB
(If you ever need to explain the difference between a traditional RDBMS and NoSQL in a cocktail conversation, see http://dba.stackexchange.com/questions/5/what-are-the-differences-between-nosql-and-a-traditional-rdbms
Basically, dispensing with the relational setup, with primary and foreign keys, and with the additional overhead involved in keeping transactional safety often gives you extreme increases in performance.
NoSQL is a kind of database that doesn’t have a fixed schema like a traditional RDBMS does. With NoSQL databases the schema is defined by the developer at run time. Developers don’t write normal SQL statements against the database, but instead use an API to get the data they need.
Instead of relating data in one table to another, you store things as key-value pairs; there is no database schema, it is handled instead in code.)
I believe any corporation with data-driven decision making will need at least one RDBMS and one NoSQL database for unstructured data. This is a sweeping generic statement, and an opinion on future technologies. -Ajay
- Use RMongo
Connecting to a MongoDB database from R using Java
Also see a nice basic analysis using R Mongo from
For CouchDB, please see https://github.com/wactbprot/R4CouchDB
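To make the schemaless point concrete, here is a minimal RMongo sketch. It assumes a MongoDB server running on localhost; the database and collection names are hypothetical:

```r
# Sketch: basic document storage and retrieval with RMongo.
# Requires a running mongod on localhost; "bi_demo" and "orders"
# are made-up names for illustration.
library(RMongo)

mongo <- mongoDbConnect("bi_demo", "localhost", 27017)

# No schema to declare: just insert a JSON document as-is
dbInsertDocument(mongo, "orders",
                 '{"customer": "acme", "revenue": 1200}')

# Query by example; results come back as an R data frame
orders <- dbGetQuery(mongo, "orders", '{"customer": "acme"}')

dbDisconnect(mongo)
```

Note that RMongo talks to MongoDB through the Java driver, so rJava and a JVM must be available.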
2) External Report Creating Software-
Jaspersoft- It has good integration with R and is a certified Revolution Analytics partner (who seem to be the only ones with a coherent #rstats go-to-market strategy, which raises the question: why does the freest and finest stats software have only ONE such vendor? If it were so great, lots of companies would make exclusive products for it, and some do: see https://rforanalytics.wordpress.com/r-business-solutions/ and https://rforanalytics.wordpress.com/using-r-from-other-software/)
RevoConnectR for JasperReports Server
RevoConnectR for JasperReports Server is a Java library interface between JasperReports Server and Revolution R Enterprise’s RevoDeployR, a standardized collection of web services that integrates security, APIs, scripts and libraries for R into a single server. JasperReports Server dashboards can retrieve R charts and result sets from RevoDeployR.
R and BI – Integrating R with Open Source Business Intelligence Platforms Pentaho and Jaspersoft
David Reinke, Steve Miller
Keywords: business intelligence

Increasingly, R is becoming the tool of choice for statistical analysis, optimization, machine learning and visualization in the business world. This trend will only escalate as more R analysts transition to business from academia. But whereas in academia R is often the central tool for analytics, in business R must coexist with and enhance mainstream business intelligence (BI) technologies. A modern BI portfolio already includes relational databases, data integration (extract, transform, load – ETL), query and reporting, online analytical processing (OLAP), dashboards, and advanced visualization. The opportunity to extend traditional BI with R analytics revolves around the introduction of advanced statistical modeling and visualizations native to R. The challenge is to seamlessly integrate R capabilities within the existing BI space.

This presentation will explain and demo an initial approach to integrating R with two comprehensive open source BI (OSBI) platforms – Pentaho and Jaspersoft. Our efforts will be successful if we stimulate additional progress, transparency and innovation by combining the R and BI worlds.

The demonstration will show how we integrated the OSBI platforms with R through use of Rserve and its Java API. The BI platforms provide an end-user web application which includes application security, data provisioning and BI functionality. Our integration will demonstrate a process by which BI components can be created that prompt the user for parameters, acquire data from a relational database and pass it into Rserve, invoke R commands for processing, and display the resulting R-generated statistics and/or graphs within the BI platform. Discussion will include concepts related to creating a reusable Java class library of commonly used processes to speed additional development.
If you know Java- try http://ramanareddyg.blog.com/2010/07/03/integrating-r-and-pentaho-data-integration/
And I like this list by two venerable powerhouses of the BI open source movement:
Open Source BI as disruptive technology
Open Source Punditry
| Article | Authors | Summary |
|---|---|---|
| Commercial Open Source BI Redux | Dave Reinke & Steve Miller | A review and update of the predictions made in our 2007 article, focused on the current state of the commercial open source BI market. Also included is a brief analysis of potential options for commercial open source business models and our take on their applicability. |
| Open Source BI as Disruptive Technology | Dave Reinke & Steve Miller | Reprint of the May 2007 DM Review article explaining how and why Commercial Open Source BI (COSBI) will disrupt the traditional proprietary market. |
Spotlight on R
| Article | Author | Summary |
|---|---|---|
| R You Ready for Open Source Statistics? | Steve Miller | R has become the “lingua franca” for academic statistical analysis and modeling, and is now rapidly gaining exposure in the commercial world. Steve examines the R technology and community and its relevancy to mainstream BI. |
| R and BI (Part 1): Data Analysis with R | Steve Miller | An introduction to R and its myriad statistical graphing techniques. |
| R and BI (Part 2): A Statistical Look at Detail Data | Steve Miller | The usage of R’s graphical building blocks – dotplots, stripplots and xyplots – to create dashboards which require little ink yet tell a big story. |
| R and BI (Part 3): The Grooming of Box and Whiskers | Steve Miller | Boxplots and variants (e.g. the violin plot) are explored as an essential graphical technique to summarize data distributions by categories and dimensions of other attributes. |
| R and BI (Part 4): Embellishing Graphs | Steve Miller | Lattices and logarithmic data transformations are used to illuminate data density and distribution and find patterns otherwise missed using classic charting techniques. |
| R and BI (Part 5): Predictive Modelling | Steve Miller | An introduction to basic predictive modelling terminology and techniques with graphical examples created using R. |
| R and BI (Part 6) | Steve Miller | How do you deal with highly skewed data distributions? Standard charting techniques on this “deviant” data often fail to illuminate relationships. This article explains techniques to re-express skewed data so that it is more understandable. |
| The Stock Market, 2007 | Steve Miller | R-based dashboards are presented to demonstrate the return performance of various asset classes during 2007. |
| Bootstrapping for Portfolio Returns: The Practice of Statistical Analysis | Steve Miller | Steve uses the R open source stats package and Monte Carlo simulations to examine alternative investment portfolio returns…a good example of applied statistics using R. |
| Statistical Graphs for Portfolio Returns | Steve Miller | Steve uses the R open source stats package to analyze market returns by asset class with some very provocative embedded trellis charts. |
| Frank Harrell, Iowa State and useR!2007 | Steve Miller | In August, Steve attended the 2007 International R User conference (useR!2007). This article details his experiences, including his meeting with long-time R community expert, Frank Harrell. |
| An Open Source Statistical “Dashboard” for Investment Performance | Steve Miller | The newly launched Dashboard Insight web site is focused on the most useful of BI tools: dashboards. With this article discussing the use of R and trellis graphics, OpenBI brings the realm of open source to this forum. |
| Unsexy Graphics for Business Intelligence | Steve Miller | Utilizing Tufte’s philosophy of maximizing the data-to-ink ratio of graphics, Steve demonstrates the value in dot plot diagramming. The R open source statistical/analytics software is showcased. |
- brew: Creating Repetitive Reports
brew: Templating Framework for Report Generation brew implements a templating framework for mixing text and R code for report generation. brew template syntax is similar to PHP, Ruby's erb module, Java Server Pages, and Python's psp module. http://bit.ly/jINmaI
- Yarr- creating reports in R
- the formidable Dirk with awesome stock reports
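For the brew entry above, a minimal template sketch shows the PHP/erb-like syntax; the sales data frame and output filename are made up for illustration:

```r
# Sketch: a brew template mixing plain text and R code to
# generate a small report file. The data frame is illustrative.
library(brew)

template <- "Sales report, <%= format(Sys.Date()) %>
<% for (r in seq_len(nrow(sales))) { -%>
<%= sales$region[r] %>: <%= sales$total[r] %>
<% } -%>
"

sales <- data.frame(region = c("East", "West"),
                    total  = c(1200, 900))

# Evaluate the template and write the rendered text to a file
brew(text = template, output = "report.txt")
```

The same template can be re-run against any data frame with the same columns, which is what makes brew handy for repetitive reports.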
Here is an interview with Tom Redman, author of Data Driven. Among the first to recognize the need for high-quality data in the information age, Dr. Redman established the AT&T Bell Laboratories Data Quality Lab in 1987 and led it until 1995. He has written four books, holds two patents, and leads his own consulting group. In many respects the “Data Doc”, as he is nicknamed, is also the father of data quality evangelism.
Ajay- Describe your career as a science student to an author of science and strategy books.
Redman: I took the usual biology, chemistry, and physics classes in college. And I worked closely with oceanographers in graduate school. More importantly, I learned directly from two masters. First was Dr. Basu, who was at Florida State when I was. He thought more deeply and clearly about the nature of data and what we can learn from them than anyone I’ve met since. And second is people in the Bell Labs community who were passionate about making communications better. What I learned there was you don’t always need “scientific proof” to move forward.
Ajay- What kind of bailout do you think the government can give to science education in this country?
Redman: I don’t think the government should bail out science education per se. Science departments should compete for students just like the English and anthropology departments do. At the same time, I do think the government should support some audacious goals, such as slowing global warming or energy independence. These could well have the effect of increasing demand for scientists and science education.
Ajay- Describe your motivations for writing your book Data Driven: Profiting from Your Most Important Business Asset.
Redman: Frankly I was frustrated. I’ve spent the last twenty years on data quality and organizations that improve gain enormous benefit. But so few do. I set out to figure out why that was and what to do about it.
Ajay- What can various segments of readers learn from this book: a college student, a manager, a CTO, a financial investor, and a business intelligence vendor?
Redman: I narrowed my focus to the business leader and I want him or her to take away three points. First, data should be managed as aggressively and professionally as your other assets. Second, they are unlike other assets in some really important ways and you’ll have to learn how to manage them. Third, improving quality is a great place to start.
Ajay- Garbage in, garbage out: how much money and time do you believe is given to data quality in data projects?
Redman: By this I assume you mean data warehouse, BI, and other tech projects. And the answer is “not near enough.” And it shows in the low success rate of those projects.
Ajay- Consider a hypothetical scenario: instead of creating and selling fancy algorithms, a business intelligence vendor uses the simple Pareto principle to focus on data quality and design during data projects. How successful do you think that would be?
Redman: I can’t speak to the market, but I do know that organizations are loaded with problems and opportunities. They could make great progress on the most important ones if they could clearly state the problem and bring high-quality data and simple techniques to bear. But there are a few that require high-powered algorithms. Unfortunately those require high-quality data as well.
Ajay- How and when did you first earn the nickname “Data Doc”? Who gave it to you, and would you rather be known by some other name?
Redman: One of my clients started calling me that about a dozen years ago. But I felt uncomfortable and didn’t put it on my business card until about five years ago. I’ve grown to really like it.
Ajay- Consider the pioneering work at AT&T Bell Laboratories and at the Palo Alto laboratory: who do you think are the 21st-century successors of these laboratories? Do you think lab work has become too commercialized, even in respected laboratories like Microsoft Research and Google’s research in mathematics?
Redman: I don’t know. It may be that the circumstances of the 20th century were conducive to such labs and they’ll never happen again. You have to remember two things about Bell Labs. First was the cross-fertilization that stemmed from having leading-edge work in dozens of areas. Second, the goal was not just invention, but innovation: the end-to-end process which starts with invention and ends with products in the market. AT&T, Bell Labs’ parent, was quite good at turning inventions into products. These points lead me to think that the commercial aspect of laboratory work is so much the better.
Ajay- What does “the Data Doc” do to relax and maintain a work-life balance? How important do you think work-life balance is for creative people and researchers?
Redman: I think everyone needs a balance, not just creative people. Two things have made this easier for me. First, I like what I do. A lot of days it is hard to distinguish “work” from “play.” Second is my bride of thirty-three years, Nancy. She doesn’t let me go overboard too often.
Dr. Thomas C. Redman is President of Navesink Consulting Group, based in Little Silver, NJ. Known by many as “the Data Doc” (though “Tom” works too), Dr. Redman was the first to extend quality principles to data and information. By advancing the body of knowledge, his innovations have raised the standard of data quality in today’s information-based economy.
Dr. Redman conceived the Data Quality Lab at AT&T Bell Laboratories in 1987 and led it until 1995. There he and his team developed the first methods for improving data quality and applied them to important business problems, saving AT&T tens of millions of dollars. He started Navesink Consulting Group in 1996 to help other organizations improve their data, while simultaneously lowering operating costs, increasing revenues, and improving customer satisfaction and business relationships.
Since then – armed with proven, repeatable tools, techniques and practical advice – Dr. Redman has helped clients in fields ranging from telecommunications, financial services, and dot coms, to logistics, consumer goods, and government agencies. His work has helped organizations understand the importance of high-quality data, start their data quality programs, and also save millions of dollars per year.
Dr. Redman holds a Ph.D. in statistics from Florida State University. He is an internationally renowned lecturer and the author of numerous papers, including “Data Quality for Competitive Advantage” (Sloan Management Review, Winter 1995) and “Data as a Resource: Properties, Implications, and Prescriptions” (Sloan Management Review, Fall 1998). He has written four books: Data Driven (Harvard Business School Press, 2008), Data Quality: The Field Guide (Butterworth-Heinemann, 2001), Data Quality for the Information Age (Artech, 1996) and Data Quality: Management and Technology (Bantam, 1992). He was also invited to contribute two chapters to Juran’s Quality Handbook, Fifth Edition (McGraw Hill, 1999). Dr. Redman holds two patents.
About Navesink Consulting Group (http://www.dataqualitysolutions.com/ )
Navesink Consulting Group was formed in 1996 and was the first company to focus on data quality. Led by Dr. Thomas Redman, “the Data Doc” and former AT&T Bell Labs director, we have helped clients understand the importance of high-quality data, start their data quality programs, and save millions of dollars per year.
Our approach is not a cobbling together of ill-fitting ideas and assertions – it is based on rigorous scientific principles that have been field-tested in many industries, including financial services (see more under “Our clients”). We offer no silver bullets; we don’t even offer shortcuts. Improving data quality is hard work.
But with a dedicated effort, you should expect order-of-magnitude improvements and, as a direct result, an enormous boost in your ability to manage risk, steer a course through the crisis, and get back on the growth curve.
Ultimately, Navesink Consulting brings tangible, sustainable improvement in your business performance as a result of superior quality data.
Once in a while comes a book that squeezes a lot of common sense into easy-to-execute paradigms, adds some flavours of anecdote, and adds penetrating insights as the topping. Data Driven by Tom Redman is such a book, and it may rightly be called the successor to Davenport’s now-epic tome Competing on Analytics.
Data Driven, the book, is divided into three parts:
1) Data Quality – Including opportunity costs of bad data management.
2) Putting Data and Information to work
3) Creating a Management system for Data and Information.
At 218 pages, not including the appendix, this is an easy read for someone who needs to refresh their mental batteries with data-hygiene perspectives. With terrific wisdom and easy-to-communicate language and paradigms, it will surely mark another important chapter in bringing data quality to the forefront rather than the back burner of business intelligence and business analytics. All the trillion-dollar algorithms and software in the world are useless without data quality. Read this book and it will show you how to use the most important, valuable, and underused asset: data.