Interview John F Moore CEO The Lab

Social Media Landscape

Here is an interview with John F Moore, social media adviser, technologist, and founder and CEO of The Lab.

Ajay- The internet seems crowded with social media experts, with everyone who spends a lot of time online claiming to be one. How does a small business owner on a budget identify the real value proposition that social media can offer them?

John- You’re right.  It seems like every time I turn around I bump into more social media “experts”.  The majority of these self-proclaimed experts are not adding a great deal of value.  When looking to spend money for help, ask the person a few questions about their approach. Things you should be hearing include:

  • The expert should be seeking to fully understand your business, your goals, your available resources, etc.
  • The expert should be seeking to understand current management thinking about social media and related technologies.

If the expert is purely focused on tools, they are the wrong person.  Your solution may require tools alone, but they cannot know this without first understanding your business.

Ajay- Facebook has 600 million people, with people preferring to play games and connect with old acquaintances rather than use social media for tangible career or business benefit.

John- People are definitely spending time playing games, looking at photos, and catching up with old friends.  However, there are many businesses seeing real value from Facebook (primarily by tying it into their e-mail marketing and using coupons and other incentives).  For example, I recently shared a small case study (http://thejohnfmoore.com/2010/10/07/email-social-media-and-coupons-makes-the-cfo-smile/) where a small pet product company achieved a 22% bump in monthly revenue by combining Facebook and coupons.  In fact, 45% of this bump in revenue came from new clients.  Customer acquisition and increased revenue were accomplished by using Facebook for their business.

Ajay- How does a new social media convert (an individual) go about selecting communities to join (Facebook, Twitter, LinkedIn, Ning, Ping, Orkut, Empire Avenue, etc.)?
How does a small business owner take the same decision?

John- It always starts with taking the time to define your goals and then determine how much time and effort you are willing to invest.  For example:
  • LinkedIn. A must-have for individuals, as it is one of the key social networking communities for professional networking.  Individuals should join groups that are relevant to their career and invest an hour a week.  Businesses should ensure they have a business profile completed and up to date.
  • Facebook can be a challenge for anyone trying to walk the personal/professional line.  However, from a business standpoint you should be creating a Facebook page that you can use to complement your other marketing channels.
  • Twitter.  It is a great network to learn of, to meet, and to interact with people from around the world.  I have met thousands of interesting people, many of whom I have had the pleasure to meet in real life.  Businesses need to invest in listening on Twitter to determine if their customers (current or potential) or competitors are already there discussing them, their marketplace, or their offerings.
In all cases I would encourage businesses to set up social media accounts on LinkedIn, Facebook, Twitter, YouTube, and Flickr.  You want to ensure your brand is protected by owning these accounts and ensuring at least the base information is accurate.

Ajay- Name the top 5 points that you think make a social media community successful. What are the top 5 points for a business to succeed in their social media strategy?

John-
  • Define your goals up front.  Understand why you are building a community and keep this goal in mind.
  • Provide education.  Ideally you want to become a thought leader in your space, the trusted resource that people can turn to even if they are not using your product or services today.
  • Be honest.  We all make mistakes.  When you do, be honest with your community and engage them in handling any fall-out that may come from your mistake.
  • Listen to them.  Use platforms like BubbleIdeas to gather feedback on what your community is looking for from the relationship.
  • Measure.  Are you on track with your goals?  Do your goals need to change?
Ajay- What is the unique value proposition that “The Lab” offers?

John- The Lab combines the strategic use of social media, management and leadership best practices, and an understanding of local government and small and medium businesses to help people in these areas achieve their goals.  Too many consultants come to the table with a predefined solution that misses the mark because it lacks an understanding of the client’s goals.
Ajay- What is “CityCamp in Boston” all about?

John- CityCamp is a FREE unconference focused on innovation for municipal governments and community organizations (http://www.citycampboston.org/what-is-citycamp-boston/).  It brings together politicians, local municipal employees, citizens, vendors, developers, and journalists to build a common understanding of local government challenges and then works to deliver measurable outcomes following the event.  The key is the focus on change management, driving change as opposed to just in-the-moment education.
Biography-

John F Moore is the Founder and CEO of The Lab (http://thelabinboston.com).  John has experience working with local governments and small and medium business owners to achieve their goals.  His experience with social media strategies, CRM, and a plethora of other solutions provides immense value to his clients.  He has built engineering organizations, learned sales and marketing, run customer service teams, and built and executed strategies for social media thought leadership and branding.  He is also a prolific blogger, as you can see by checking out his blog at http://thejohnfmoore.com.

Which software do we buy? It depends

Often I am asked by clients, friends, and industry colleagues about the suitability or unsuitability of particular software for analytical needs.  My answer is mostly-

It depends on-

1) The cost of a Type 1 error in the purchase decision versus a Type 2 error in the purchase decision. (Forgive me if I mix up Type 1 with Type 2 error- I do have some weird childhood learning disabilities which crop up now and then.)

Here I define a Type 1 error as paying more for software when equivalent functionality was available at a lower price, or buying components you do not need, like SPSS Trends (when only SPSS Base is required) or SAS ETS, when only SAS/STAT would do.

The first kind of error arises, of course, due to the presence of free tools with GUIs, like R with R Commander and Deducer (Rattle does have a $500 commercial version).

The emergence of software vendors like WPS (for SAS language aficionados), which offer similar functionality to Base SAS, as well as the increasing convergence of business analytics (read: predictive analytics) and business intelligence (read: reporting), has led to a brand clutter in which all software promises to do everything, at all different prices, though each has specific strengths and weaknesses. To add to this, there are comparatively fewer independent business analytics analysts than, say, independent business intelligence analysts.

2) A Type 2 error- in this case, the opportunity cost of delayed projects or business models, or of lower accuracy- the consequences of buying a lower-priced software which had less functionality than you required.

To compound the magnitude of a Type 2 error, you are probably in some kind of vendor lock-in, your software budget is exhausted from buying too much or inappropriate software and hardware, and you could still use added help in business analytics. The fear of making a business-critical error is a substantial reason why open source software has to work harder at proving itself competent. Writing great software is not enough: we need great marketing to sell it, and great customer support to sustain it.
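To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in R. All the numbers (license premium, probability of needing the extra modules, cost of delay) are invented for illustration; plug in your own estimates.

```r
# Expected-cost comparison of the two purchase errors (all numbers invented)
license_premium <- 50000   # extra cost of the fully loaded suite (Type 1 risk)
p_need_extras   <- 0.30    # estimated chance you actually need the extra modules
delay_cost      <- 200000  # opportunity cost if the cheaper tool falls short (Type 2 risk)

exp_cost_overbuy  <- license_premium            # paid whether or not you need it
exp_cost_underbuy <- p_need_extras * delay_cost # paid only if the need arises

if (exp_cost_overbuy < exp_cost_underbuy) {
  message("Expected cost favors the fuller, pricier package")
} else {
  message("Expected cost favors the cheaper package")
}
```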

As business decisions are made within the constraints of time, information, and money, I will try to create a software purchase matrix based on my knowledge of known software (and its unknown strengths and weaknesses), pricing (versus budgets), and ranges of data handling. I will lay out a basically optimum approach based on known constraints, and add in flexibility for unknown operational constraints.

I will restrict this matrix to analytics software, though you could certainly extend it to other classes of enterprise software, including big data databases, infrastructure, and computing.

Noted assumptions- 1) I am vendor neutral and do not suffer from subjective bias or affection for particular software (based on conferences, books, relationships, consulting, etc.).

2) All software has bugs, so all of it needs customer support.

3) All software has particular advantages, strengths, and weaknesses in terms of functionality.

4) Cost includes the total cost of ownership and the opportunity cost of business-analytics-enabled decisions.

5) All software marketing people will praise their own software- sometimes over-selling and mis-selling product bundles.

The software compared will be SPSS, KXEN, R, SAS, WPS, Revolution R, SQL Server, and various flavors and sub-components within these. The optimized approach will include parallel programming, cloud computing, hardware costs, and dependent software costs.

To be continued-

Interesting Interview with Quentin G, AsterData

Here is an interesting interview with Quentin G, CEO of AsterData. Marketing trumpeting apart, the insights on the “what’s next” vision thing are quite good.

Source- http://www.arnoldit.com/search-wizards-speak/aster-data.html

As you look down the road, what are the three major challenges you see for vendors who keep trying to solve big data and other “now” problems with old tools?

Old tools and traditional architectures cannot scale effectively to handle massive data volumes that reach 100’s of terabytes nor can they effectively process large data volumes in a high performance manner. Further, they are restricted to what SQL querying allows. The three challenges I have noted are:

First, performance, specifically, poor performance on large data volumes and heavy workloads: The pre-existing systems rely on storing data in a traditional DBMS or data warehouse and then extracting a sample of data to a separate processing tier. This greatly restricts data insights and analytics as only a sample of data is analyzed and understood.  As more data is stored in these systems they suffer from performance degradation as more users try to access the system concurrently. Additionally moving masses of data out of the traditional DBMS to a separate processing tier adds latency and slows down analytics and response times. This pre-existing architecture greatly limits performance especially as data sizes grow.

Second, limited analytics: Pre-existing systems rely mostly on SQL for data querying and analysis. SQL poses several limitations and is not suited for ad hoc querying, deep data exploration and a range of other analytics. MapReduce overcomes the limitations of SQL and SQL-MapReduce in particular opens up a new class of analytics that cannot be achieved with SQL alone.

And, third, limitations of types of data that can be stored and analyzed: Traditional systems are not designed for non-relational or unstructured data. New solutions such as Aster Data’s are designed from the ground up to handle both relational and non-relational data. Organizations want to store and process a range of data types and do this in a single platform. New solutions allow for different data types to be handled in a single platform whereas pre-existing architectures and solutions are specialized around a single data type or format – this restricts the diversity of analytics that can be performed on these systems.
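(An aside from me, not from the interview: if the map/reduce idea is new to you, the essence is that a “map” step runs independently on each fragment of the data and a “reduce” step merges the partial results, so the processing moves to the data rather than the data moving to a separate tier. Here is a toy sketch of the pattern in base R- it illustrates the pattern only, and is not Aster Data’s SQL-MapReduce API.)

```r
# Toy map/reduce: count events per user across log fragments
# that could, in a real system, live on different nodes.
logs <- list(
  c("alice", "bob", "alice"),
  c("bob", "carol", "alice")
)

# Map: each fragment independently emits partial (key, count) pairs
mapped <- lapply(logs, table)

# Reduce: merge the partial counts by key
reduced <- Reduce(function(a, b) {
  keys <- union(names(a), names(b))
  sapply(keys, function(k) sum(a[k], b[k], na.rm = TRUE))
}, mapped)

reduced  # alice: 3, bob: 2, carol: 1
```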

Read the whole interview at –http://www.arnoldit.com/search-wizards-speak/aster-data.html

Speaking of which- there is a new webinar by Merv Adrian (interviewed on Decisionstats) and Colin White-

http://now.eloqua.com/es.asp?s=1015&e=1862&elq=9ec9b73872e849b88d2943cca920acda

and from the famous AOL website- a profile of AsterData’s money flow, which kind of hints at an IPO two years from now-

http://www.crunchbase.com/company/aster-data-systems

Interview Michael J. A. Berry Data Miners, Inc

Here is an interview with noted data mining practitioner Michael Berry, author of seminal books in data mining and a noted trainer and consultant.

Ajay- Your famous book “Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management” came out in 2004, and an update is planned for 2011. What are the various new data mining techniques, and their applications, that you intend to cover in that book?

Michael- Each time we do a revision, it feels like writing a whole new book. The first edition came out in 1997 and it is hard to believe how much the world has changed since then. I’m currently spending most of my time in the on-line retailing world. The things I worry about today–improving recommendations for cross-sell and up-sell, and search engine optimization–wouldn’t have even made sense to me back then. And the data sizes that are routine today were beyond the capacity of the most powerful supercomputers of the nineties. But, if possible, Gordon and I have changed even more than the data mining landscape. What has changed us is experience. We learned an awful lot between the first and second editions, and I think we’ve learned even more between the second and third.

One consequence is that we now have to discipline ourselves to avoid making the book too heavy to lift. For the first edition, we could write everything we knew (and arguably, a bit more!); now we have to remind ourselves that our intended audience is still the same–intelligent laymen with a practical interest in getting more information out of data. Not statisticians. Not computer scientists. Not academic researchers. Although we welcome all readers, we are primarily writing for someone who works in a marketing department and has a title with the word “analyst” or “analytics” in it. We have relaxed our “no equations” rule slightly for cases when the equations really do make things easier to explain, but the core explanations are still in words and pictures.

The third edition completes a transition that was already happening in the second edition. We have fully embraced standard statistical modeling techniques as full-fledged components of the data miner’s toolkit. In the first edition, it seemed important to make a distinction between old, dull, statistics, and new, cool, data mining. By the second edition, we realized that didn’t really make sense, but remnants of that attitude persisted. The third edition rectifies this. There is a chapter on statistical modeling techniques that explains linear and logistic regression, naive Bayes models, and more. There is also a brand new chapter on text mining, a curious omission from previous editions.
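For readers who want a feel for what such statistical techniques look like in code, here is a minimal logistic regression sketch in R on simulated campaign data. The variables and coefficients are invented for illustration; this is my sketch, not code from the book.

```r
# Logistic regression for response propensity (simulated data)
set.seed(42)
n         <- 1000
recency   <- runif(n, 1, 365)  # days since last purchase
tenure    <- runif(n, 0, 10)   # years as a customer
p         <- plogis(-0.5 - 0.01 * recency + 0.3 * tenure)
responded <- rbinom(n, 1, p)

model <- glm(responded ~ recency + tenure, family = binomial)
summary(model)

# Score two hypothetical prospects with predicted response probabilities
prospects <- data.frame(recency = c(10, 300), tenure = c(5, 1))
predict(model, prospects, type = "response")
```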

There is also a lot more material on data preparation. Three whole chapters are devoted to various aspects of data preparation. The first focuses on creating customer signatures. The second is focused on using derived variables to bring information to the surface, and the third deals with data reduction techniques such as principal components. Since this is where we spend the greatest part of our time in our work, it seemed important to spend more time on these subjects in the book as well.
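As a flavor of those data preparation themes, here is a small R sketch (simulated data, my own illustration) that derives new variables from raw fields and then applies principal components for data reduction:

```r
# Derived variables, then data reduction via principal components
set.seed(1)
customers <- data.frame(
  total_spend = rlnorm(500, meanlog = 5),
  n_orders    = rpois(500, 10) + 1,
  days_active = sample(30:1000, 500, replace = TRUE)
)

# Derived variables bring information to the surface
customers$avg_order_value <- customers$total_spend / customers$n_orders
customers$orders_per_year <- customers$n_orders / (customers$days_active / 365)

# Principal components on the scaled customer signature
pca <- prcomp(customers, scale. = TRUE)
summary(pca)        # variance explained by each component
head(pca$x[, 1:2])  # first two component scores per customer
```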

Some of the chapters have been beefed up a bit. The neural network chapter now includes radial basis functions in addition to multi-layer perceptrons. The clustering chapter has been split into two chapters to accommodate new material on soft clustering, self-organizing maps, and more. The survival analysis chapter is much improved and includes material on some of our recent application of survival analysis methods to forecasting. The genetic algorithms chapter now includes a discussion of swarm intelligence.
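For the curious, survival analysis of the kind mentioned here looks roughly like this in R, using the standard survival package (simulated subscriber data; a sketch, not the book’s code):

```r
# Time until a subscriber cancels, with censoring for still-active customers
library(survival)

set.seed(7)
n      <- 300
tenure <- rexp(n, rate = 1 / 24)        # true months until cancellation
cutoff <- 36                            # end of observation window
time   <- pmin(tenure, cutoff)
event  <- as.integer(tenure <= cutoff)  # 1 = cancelled, 0 = censored

fit <- survfit(Surv(time, event) ~ 1)   # Kaplan-Meier retention curve
summary(fit, times = c(6, 12, 24))      # retention at key horizons
plot(fit, xlab = "Months", ylab = "Fraction still subscribed")
```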

Ajay- Describe your early career and how you came into data mining as a profession. What do you think of the various universities now offering an MS in Analytics? How do you balance your own teaching experience with your consulting projects at Data Miners?

Michael- I fell into data mining quite by accident. I guess I always had a latent interest in the topic. As a high school and college student, I was a fan of Martin Gardner‘s mathematical games in Scientific American. One of my favorite things he wrote about was a game called New Eleusis in which one player, God, makes up a rule to govern how cards can be played (“an even card must be followed by a red card”, say) and the other players have to figure out the rule by watching what plays are allowed by God and which ones are rejected. Just for my own amusement, I wrote a computer program to play the game and presented it at the IJCAI conference in, I think, 1981.

That paper became a chapter in a book on computer game playing–so my first book was about finding patterns in data. Aside from that, my interest in finding patterns in data lay dormant for years. At Thinking Machines, I was in the compiler group. In particular, I was responsible for the run-time system of the first Fortran Compiler for the CM-2 and I represented Thinking Machines at the Fortran 8X (later Fortran-90) standards meetings.

What changed my direction was that Thinking Machines got an export license to sell our first machine overseas. The machine went to a research lab just outside of Paris. The Connection Machine was so hard to program that if you bought one, you got an applications engineer to go along with it. None of the applications engineers wanted to go live in Paris for a few months, but I did.

Paris was a lot of fun, and so, I discovered, was actually working on applications. When I came back to the States, I stuck with that applied focus and my next assignment was to spend a couple of years at Epsilon (then a subsidiary of American Express), working on a database marketing system that stored all the “records of charge” for American Express card members. The purpose of the system was to pick ads to go in the billing envelope. I also worked on some more general-purpose data mining software for the CM-5.

When Thinking Machines folded, I had the opportunity to open a Cambridge office for a Virginia-based consulting company called MRJ that had been a major channel for placing Connection Machines in various government agencies. The new group at MRJ was focused on data mining applications in the commercial market. At least, that was the idea. It turned out that they were more interested in data warehousing projects, so after a while we parted company.

That led to the formation of Data Miners. My two partners in Data Miners, Gordon Linoff and Brij Masand, share the Thinking Machines background.

To tell the truth, I really don’t know much about the university programs in data mining that have started to crop up. I’ve visited the one at NC State, but not any of the others.

I myself teach a class in “Marketing Analytics” at the Carroll School of Management at Boston College. It is an elective part of the MBA program there. I also teach short classes for corporations on their sites and at various conferences.

Ajay- At the previous Predictive Analytics World, you gave a session on Forecasting and Predicting Subscriber Levels (http://www.predictiveanalyticsworld.com/dc/2009/agenda.php#day2-6).

It seems the inability to forecast is a problem many companies face today. What do you think are the top 5 principles of business forecasting which companies need to follow?

Michael- I don’t think I can come up with five. Our approach to forecasting is essentially simulation. We try to model the underlying processes and then turn the crank to see what happens. If there is a principle behind that, I guess it is to approach a forecast from the bottom up rather than treating aggregate numbers as a time series.
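To make the bottom-up idea concrete, here is a toy R simulation of subscriber levels: model the join and churn processes, turn the crank, and read the forecast off the simulated path. The rates are invented, and real models of this kind are far richer.

```r
# One simulated subscriber path from an assumed join/churn process
set.seed(99)
months      <- 24
subscribers <- 10000
new_rate    <- 800    # expected new subscribers per month (assumed)
churn_prob  <- 0.04   # monthly cancellation probability (assumed)

path <- numeric(months)
for (m in seq_len(months)) {
  joins       <- rpois(1, new_rate)
  leaves      <- rbinom(1, subscribers, churn_prob)
  subscribers <- subscribers + joins - leaves
  path[m]     <- subscribers
}

plot(path, type = "l", xlab = "Month", ylab = "Subscribers",
     main = "One simulated subscriber path")
# Re-running many such paths gives a distribution, not just a point forecast
```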

Ajay- You often partner your talks with SAS Institute, and your blog at http://blog.data-miners.com/ sometimes contains SAS code as well. What particular features of the SAS software do you like? Do you use just Enterprise Miner, or other modules as well for survival analysis or forecasting?

Michael- Our first data mining class used SGI’s Mineset for the hands-on examples. Later we developed versions using Clementine, Quadstone, and SAS Enterprise Miner. Then, market forces took hold. We don’t market our classes ourselves, we depend on others to market them and then share in the revenue.

SAS turned out to be much better at marketing our classes than the other companies, so over time we stopped updating the other versions. An odd thing about our relationship with SAS is that it is only with the education group. They let us use Enterprise Miner to develop course materials, but we are explicitly forbidden to use it in our consulting work. As a consequence, we don’t use it much outside of the classroom.

Ajay- Is there any other software you use (apart from SQL and J)?

Michael- We try to fit in with whatever environment our client has set up. That almost always is SQL-based (Teradata, Oracle, SQL Server, . . .). Often SAS Stat is also available and sometimes Enterprise Miner.

We run into SPSS, Statistica, Angoss, and other tools as well. We tend to work in big data environments so we’ve also had occasion to use Ab Initio and, more recently, Hadoop. I expect to be seeing more of that.

Biography-

Together with his colleague, Gordon Linoff, Michael Berry is author of some of the most widely read and respected books on data mining. These best sellers in the field have been translated into many languages. Michael is an active practitioner of data mining. His books reflect many years of practical, hands-on experience down in the data mines.

Data Mining Techniques for Marketing, Sales and Customer Relationship Management

by Michael J. A. Berry and Gordon S. Linoff
copyright 2004 by John Wiley & Sons

Mining the Web

by Michael J.A. Berry and Gordon S. Linoff
copyright 2002 by John Wiley & Sons
ISBN 0-471-41609-6

Non-English editions available in Traditional Chinese and Simplified Chinese

This book looks at the new opportunities and challenges for data mining that have been created by the web. The book demonstrates how to apply data mining to specific types of online businesses, such as auction sites, B2B trading exchanges, click-and-mortar retailers, subscription sites, and online retailers of digital content.

Mastering Data Mining

by Michael J.A. Berry and Gordon S. Linoff
copyright 2000 by John Wiley & Sons
ISBN 0-471-33123-6

Non-English editions available in Japanese, Italian, Traditional Chinese, and Simplified Chinese

A case study-based guide to applying data mining techniques for solving practical business problems. These “warts and all” case studies are drawn directly from consulting engagements performed by the authors.

A data mining educator as well as a consultant, Michael is in demand as a keynote speaker and seminar leader in the area of data mining generally and the application of data mining to customer relationship management in particular.

Prior to founding Data Miners in December 1997, Michael spent 8 years at Thinking Machines Corporation. There he specialized in the application of massively parallel supercomputing techniques to business and marketing applications, including one of the largest database marketing systems of the time.

Blog Update

Some changes at Decisionstats-

1) We are back at Decisionstats.com, and Decisionstats.wordpress.com will point to that as well. The SEO effects would be interesting, and so would the Instant Pagerank or LinkRank or whatever Coffee/Percolator they use in Cali to index the site.

2) AsterData is no longer a sponsor- but Predictive Analytics World is. Welcome, PAW! I have been a blog partner to PAW ever since it began- and it’s a great marketing fit. Expect to see a lot of exclusive content and interviews from great speakers at PAW.

3) The Feedblitz newsletter (now at 404 subscribers) is becoming a weekly subscription- one big email rather than lots of emails through the week. This is because my blogging frequency is moving up as I collect material for a new book on business analytics that I would probably release in 2011 (if all goes well, touch wood). The LinkedIn group will get a weekly update announcement. If you are connected to Decisionstats on Analyticbridge, I will soon try to find a way to update the whole post automatically using RSS and Ning.com. Or not. Depends.

4) R continues to be a bigger focus. So will SPSS and maybe JMP. Newer software, or older software that changes more rapidly, will get more coverage. Generally a particular software is covered if it has newer features, an interesting techie conference, or it gets sued.

5) I will occasionally write a poem or post a video, randomly, about once a week, to prove that geeks and nerds and analysts can have fun (much more fun, actually, don’t we?).

Thanks for reading this. September 2010 was the best month ever for Decisionstats.com- we crossed 15,000+ visitors, and thanks for that again! I promise to bore you less and less as we grow old together on the blog 😉

IBM SPSS 19: Marketing Analytics and RFM

What is RFM Analysis?

Recency, Frequency, Monetization is basically a technique to classify your entire customer list. You may be a retail player with thousands of customers or an enterprise software seller with only two dozen customers.

RFM analysis can help you cut through the clutter and focus on the real customers that drive your profit.

As per Wikipedia

http://en.wikipedia.org/wiki/RFM

RFM is a method used for analyzing customer behavior and defining market segments. It is commonly used in database marketing and direct marketing and has received particular attention in retail.

RFM stands for

  • Recency – How recently a customer has purchased?
  • Frequency – How often he purchases?
  • Monetary Value – How much does he spend?

To create an RFM analysis, one creates categories for each attribute. For instance, the Recency attribute might be broken into three categories: customers with purchases within the last 90 days; between 91 and 365 days; and longer than 365 days. Such categories may be arrived at by applying business rules, or using a data mining technique, such as CHAID, to find meaningful breaks.

—————————————————————————————————-
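If you would rather script it than click through a GUI, here is a rough R sketch of the binning idea described above, on made-up transaction data. It uses rank-based terciles rather than business rules or CHAID, purely for illustration.

```r
# Minimal RFM scoring: bin each attribute into terciles (3 = best)
set.seed(3)
customers <- data.frame(
  id        = 1:200,
  recency   = sample(1:730, 200, replace = TRUE),  # days since last purchase
  frequency = rpois(200, 4) + 1,                   # number of purchases
  monetary  = round(rlnorm(200, meanlog = 4), 2)   # total spend
)

# Rank-based terciles are robust to ties; low recency is good, so reverse it
score <- function(x, reverse = FALSE) {
  s <- ceiling(3 * rank(x, ties.method = "first") / length(x))
  if (reverse) 4 - s else s
}

customers$R   <- score(customers$recency, reverse = TRUE)
customers$F   <- score(customers$frequency)
customers$M   <- score(customers$monetary)
customers$RFM <- paste0(customers$R, customers$F, customers$M)

# The best customers score "333"
head(customers[order(-customers$R, -customers$F, -customers$M), ])
```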

Even if you don’t know what RFM is or how to do it, see below for an easy way.

I just got myself an evaluation copy of a fully loaded IBM SPSS 19 and did some RFM analysis on some data. The recent version of SPSS is very usable even for a non-statistical user- it is an extremely useful tool for a business or marketing user.

Here are some screenshots to describe the features.

1) A simple dashboard to show functionality (with room for improvement in visual appeal)

2) Simple, intuitive design for inputting data

3) Some options in creating marketing scorecards

4) Easy-to-understand features for a business audience, rather than pseudo-techie jargon

5) Note the clean design of the GUI in specifying data input type

6) Again, multiple options to export results in a very user-friendly manner, with options to customize the business report

7) Graphical output conveniently pasted inside a Word document rather than a jumble of images. Auto-generated options for customized standard graphs.

8) An attractive heatmap to represent monetization for customers. Note the effect that a scale of color shades has in the visual representation of data.

9) Comparative plots placed side by side with easy-to-understand explanations (in the output Word doc, not shown here)

10) Auto-generated scores attached to the data table to enhance usage

Note that here I am evaluating RFM as a marketing technique (which is well known), but also the GUI of IBM SPSS 19 Marketing Analytics. It is simple, yet powerful in turning what used to be purely statistical software for nerds into a beautiful, easy-to-use tool for business users.

So what else can you do in marketing analytics with SPSS 19?

IBM SPSS Direct Marketing

The Direct Marketing add-on option allows organizations to ensure their marketing programs are as effective as possible, through techniques specifically designed for direct marketing, including:

• RFM Analysis. This technique identifies existing customers who are most likely to respond to a new offer.

• Cluster Analysis. This is an exploratory tool designed to reveal natural groupings (or clusters) within your data. For example, it can identify different groups of customers based on various demographic and purchasing characteristics (a rough R analogue is sketched after this list).

• Prospect Profiles. This technique uses results from a previous or test campaign to create descriptive profiles. You can use the profiles to target specific groups of contacts in future campaigns.

• Postal Code Response Rates. This technique uses results from a previous campaign to calculate postal code response rates. Those rates can be used to target specific postal codes in future campaigns.

• Propensity to Purchase. This technique uses results from a test mailing or previous campaign to generate propensity scores. The scores indicate which contacts are most likely to respond.

• Control Package Test. This technique compares marketing campaigns to see if there is a significant difference in effectiveness for different packages or offers.

Click here to find out more about Direct Marketing.

PAW Reception and R Meetup

New DC meetup for R Users-

source- http://www.meetup.com/R-users-DC/calendar/14236478/

October’s R meet-up will be co-located with the Predictive Analytics World Conference (http://www.predictive…) taking place in Washington DC October 19-20. PAW is the premier business-focused event for predictive analytics professionals, managers and commercial practitioners.

Agenda:

6:30 – 7:30 PAW Reception (open to meet-up attendees)
7:30 – 9:00 DC-R Meetup

Talks:
“How to speak ggplot2 like a native”
Harlan D. Harris, PhD @HarlanH

“Saving the world with R”
Michael Milton @michaelmilton

Important Registration Instructions:
You are welcome to RSVP here at meetup. The PAW organizers have requested that we register in the PAW site for the R meetup so they can provide badges to members which will give you access to the reception. There is no charge to register using the PAW site. Please click here to register.


Speaker Bios

Harlan D. Harris, PhD, is a statistical data scientist working for Kaplan Test Prep and Admissions in New York City. He has degrees from the University of Wisconsin-Madison and the University of Illinois at Urbana-Champaign. Prior to turning to the private sector, he worked as a researcher and lecturer in various areas of Artificial Intelligence and Cognitive Science at the University of Illinois, Columbia University, the University of Connecticut, and New York University.

Harlan’s talk is titled “How to speak ggplot2 like a native.” One of the most innovative ideas in data visualization in recent years is that graphical images can be described using a grammar. Just as a fluent speaker of a language can talk more precisely and clearly than someone using a tourist phrasebook, graphics based on a grammar can yield more insights than graphics based on a limited set of templates (bar chart, pie graph, etc.). There are at least two implementations of the Grammar of Graphics idea in R, of which the most popular is the ggplot2 package written by Prof. Hadley Wickham. Just as with natural languages, ggplot2 has a surface structure made up of R vocabulary elements, as well as a deep structure that mediates the link between the vocabulary and the “semantic” representation of the data shown on a computer screen. In this introductory presentation, the links among these levels of representation are demonstrated, so that new ggplot2 users can build the mental models necessary for fluent and creative visualization of their data.
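For readers who have not seen the grammar in action, here is a small example of my own using ggplot2’s built-in mpg data (an illustration, not a slide from Harlan’s talk). Each clause adds a mapping or a layer, so the plot is composed like a sentence rather than picked from a template.

```r
library(ggplot2)

# mpg ships with ggplot2: fuel economy for 234 car models
p <- ggplot(mpg, aes(x = displ, y = hwy, colour = class)) +  # data + aesthetics
  geom_point() +                                             # one layer: points
  geom_smooth(aes(group = 1), method = "lm",
              se = FALSE, colour = "black") +                # another layer: one overall trend
  labs(x = "Engine displacement (litres)", y = "Highway miles per gallon")

print(p)
```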

Michael Milton is a Client Manager at Blue State Digital. When he’s not saving the world by designing interactive marketing strategies that connect passionate users with causes and organizations, he writes about data and analytics. For O’Reilly Media, he wrote Head First Data Analysis and Head First Excel and has created the videos Great R: Level 1 and Getting the Most Out of Google Apps for Business.

Michael’s talk is called “How to Save the World Using R.” In this wide-ranging discussion, Michael will highlight individuals and organizations who are using R to help others as well as ways in which R can be used to promote good statistical thinking.