Aster Data hires Quentin Gallivan as CEO

Aster Data formally marked phase 2 of its rapid growth story by bringing in Quentin Gallivan (formerly of Postini, before it was sold to Google, and of PivotLink) as its new CEO.

Founders (and Stanford alumni) Mayank Bawa and Tasso Argyros stay on, as Chief Customer Officer and CTO respectively. It has a very déjà vu feel, like Eric Schmidt coming in as CEO of Google in the glory days past. Indeed, the investors behind Google and Aster Data are quite similar, and so are the backgrounds of the founders.

Aster Data, of course, makes the leading MapReduce-based solution (MapReduce itself originated at Google) for providing BI infrastructure for big data, and has been rapidly expanding into new frontiers for Big Data.

Aster Data Appoints New Chief Executive Officer

Quentin Gallivan Joins Aster Data as CEO to Lead Company to Next Level of Growth

San Carlos, CA – September 9, 2010 – Aster Data, a proven leader dedicated to providing the best data management and data processing platform for big data management and analytics, today announced the appointment of Quentin Gallivan as President and CEO. Gallivan brings more than 20 years of senior executive experience to the leading analytics and database company. With Aster Data achieving tremendous growth in the past year, Gallivan will take Aster Data to the next level, further accelerating its market leadership, sales, channel partnerships and international expansion. Founding CEO Mayank Bawa, who grew the company from its inception based on the founders’ research at Stanford University and whose passion lies in helping customers uniquely unlock the value of their data, will take on the role of Chief Customer Officer. In his new role, Bawa will lead the Company’s organization devoted to ensuring the success, longevity and innovation of its fast-growing customer base. Together, Gallivan and Bawa, along with co-founder and Chief Technology Officer Tasso Argyros, will deliver on the Company’s mission to help customers discover more value from their data, achieve deep insights through rich analytics and do more with their massive data volumes than has ever been possible.

Gallivan joins Aster Data with over 20 years of leadership experience in the high-tech industry and has held a variety of CEO and senior executive positions with leading technology companies. Before joining Aster Data, Gallivan served as CEO at PivotLink, the leading provider of business intelligence (BI) solutions delivered via Software as a Service (SaaS), where he rapidly grew the company to over 15,000 business users, from mid-sized companies to Fortune 1000 companies, across key industries including financial services, retail, CPG manufacturing and high technology. Prior to PivotLink, Gallivan served as CEO of Postini, where he scaled the company to 35,000 customers and over 10 million users until its eventual acquisition by Google in 2007. Gallivan also served as executive vice president of worldwide sales and services at VeriSign, where he was instrumental in growing the business from $20 million to $1.2 billion and was responsible for the design and execution of the global distribution strategy for the company’s security and services business. Gallivan also held a number of key executive and leadership positions at Netscape Communications and GE Information Services.

“We are delighted to have someone of Quentin’s caliber, who is a veteran of both emerging and established technology companies, lead Aster Data through our next stage of growth,” said Mayank Bawa, Chief Customer Officer and co-founder, Aster Data. “His significant experience around growing organizations and driving operational excellence will be invaluable as he takes Aster Data forward. I’m excited to shift my focus to customers and their success; to bring our innovations to our customers worldwide to help them unlock deep value from their growing data volumes.”

“I am very excited to be joining Aster Data and taking on the challenge of augmenting its already impressive level of growth and success.  Aster Data is very well respected and established in the marketplace, has an enviable solution for big data management that uniquely addresses both big data storage and data processing, an impressive client list and a very talented team,” said Quentin Gallivan, President and CEO, Aster Data. “My task will be to leverage these assets, help shape a new market and provide operational guidance and strategic direction to drive even greater value for shareholders, customers and employees alike.”

Interview: Stephanie McReynolds, Director of Product Marketing, Aster Data

Here is an interview with Stephanie McReynolds, who works as Director of Product Marketing at Aster Data. I asked her a couple of questions about the new product releases from Aster Data in analytics and MapReduce.

Ajay – How does the new Eclipse plug-in help people who are already working with huge datasets but are new to Aster Data’s platform?

Stephanie- Aster Data Developer Express, our new SQL-MapReduce development plug-in for Eclipse, makes MapReduce applications easy to develop. With Aster Data Developer Express, developers can develop, test and deploy a complete SQL-MapReduce application in under an hour. This is a significant increase in productivity over the traditional analytic application development process for Big Data applications, which requires considerable time writing low-level code and testing applications on sample data.
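
For readers new to the platform, the sketch below gives a rough idea of what running such an application looks like once it is deployed: the SQL-MapReduce function is simply invoked inside the FROM clause of an ordinary SELECT. The function name (sessionize) and its argument clauses here are illustrative assumptions, not the exact Aster Data nCluster syntax.

```sql
-- Illustrative sketch only: invoking a deployed SQL-MapReduce function.
-- The function name and argument clauses are assumed for the example and
-- may differ from the actual Aster Data nCluster syntax.
SELECT user_id, page_url, session_id
FROM sessionize(
    ON weblog_clicks              -- input table, distributed across the cluster
    PARTITION BY user_id          -- rows for each user are processed together
    ORDER BY click_timestamp      -- ordering within each partition
    TIMEOUT ('1800')              -- assumed argument: session gap in seconds
)
WHERE page_url LIKE '%checkout%'; -- the output composes with ordinary SQL
```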

Ajay – What are the various analytical functions that you have introduced recently? List, say, the top 10.

Stephanie- At Aster Data, we have an intense focus on making the development process easier for SQL-MapReduce applications. Aster Data Developer Express is a part of this initiative, as is the release of pre-defined analytic functions. We recently launched both a suite of analytic modules and a partnership program dedicated to delivering pre-defined analytic functions for the Aster Data nCluster platform. Pre-defined analytic functions built by Aster Data’s engineering team are delivered as modules within the Aster Data Analytic Foundation offering and include analytics in the areas of pattern matching, clustering, statistics, and text analysis, just to name a few. Partners like Fuzzy Logix and Cobi Systems are extending this library by delivering industry-focused analytics, such as Monte Carlo simulations for financial services and geospatial analytics for the public sector, to give you a few examples.

Ajay – So, say I want to do a k-means clustering on a million rows (and, say, 200 columns) using the Aster method. How do I go about it using the new plug-in as well as your product?

Stephanie- The power of the Aster Data environment for analytic application development is in SQL-MapReduce. SQL is a powerful analytic query standard because it is a declarative language. MapReduce is a powerful programming framework because it supports high-performance parallel processing of Big Data and extreme expressiveness, through a wide variety of programming languages, including Java, C/C#/C++, .NET, Python, etc. Aster Data has taken the performance and expressiveness of MapReduce and combined it with the familiar declarativeness of SQL. This unique combination ensures that anyone who knows standard SQL can access advanced analytic functions programmed for Big Data analysis using MapReduce techniques.

kMeans is a good example of an analytic function that we pre-package for developers as part of the Aster Data Analytic Foundation. What does that mean? It means that the MapReduce portion of the development cycle has been completed for you. Each pre-packaged Aster Data function can be called using standard SQL, and executes the defined analytic in a fully parallelized manner in the Aster Data database using MapReduce techniques. The result? High performance analytics with the expressiveness of low-level languages accessed through declarative SQL.
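
Coming back to the million-row k-means question, invoking a pre-packaged clustering function would, in spirit, look something like the sketch below. The function name, argument clauses and table names are assumptions for illustration rather than the documented Aster Data Analytic Foundation signature.

```sql
-- Hypothetical sketch only: running a pre-packaged k-means function from SQL.
-- Clause and table names are assumed; the actual Analytic Foundation
-- signature may differ.
SELECT *
FROM kmeans(
    ON customer_features                -- ~1 million rows, ~200 numeric columns
    NUMBER_OF_CLUSTERS ('8')            -- assumed argument: k
    MAX_ITERATIONS ('25')               -- assumed argument: iteration cap
    OUTPUT_TABLE ('customer_clusters')  -- assumed argument: cluster assignments
);
```

Whatever the exact clause names, the point stands: the iterative MapReduce work runs in parallel inside nCluster, and the analyst writes only declarative SQL.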

Ajay – I see an increasing focus on analytics. Is this part of your product strategy, and how do you see yourself competing with pure analytics vendors?

Stephanie – Aster Data is an infrastructure provider. Our core product is a massively parallel processing database called nCluster that performs at or beyond the capabilities of any other analytic database in the market today. We developed our analytics strategy as a response to demand from our customers who were looking beyond the price/performance wars being fought today and wanted support for richer analytics from their database provider. Aster Data analytics are delivered in nCluster to enable analytic applications that are not possible in more traditional database architectures.

Ajay – Name some recent case studies in analytics of implementations of SQL-MapReduce with analytical functions.

Stephanie – There are three new classes of applications that Aster Data Express and Aster Analytic Foundation support: iterative analytics, prediction and optimization, and ad hoc analysis.

Aster Data customers are uncovering critical business patterns in Big Data by performing hypothesis-driven, iterative analytics. They are interactively exploring massive volumes of data (terabytes to petabytes) in a top-down, deductive manner. ComScore, an Aster Data customer that performs website experience analysis, is a good example of a customer doing this type of analysis.

Other Aster Data customers are building applications for prediction and optimization that discover trends, patterns, and outliers in data sets. Examples of these types of applications are propensity-to-churn models in telecommunications, proactive product and service recommendations in retail, and pricing and retention strategies in financial services. Full Tilt Poker, which is using Aster Data for fraud prevention, is a good example of a customer in this space.

The final class of application that I would like to highlight is ad hoc analysis. Examples of ad hoc analysis that can be performed include social network analysis, advanced clickstream analysis, graph analysis, cluster analysis and a wide variety of mathematical, trigonometric, and statistical functions. LinkedIn, whose analysts and data scientists have access to all of their customer data in Aster Data, is a good example of a customer using the system in this manner.

While Aster Data customers are using nCluster in a number of other ways, these three new classes of applications are areas in which we are seeing particularly innovative application development.

Biography-

Stephanie McReynolds is Director of Product Marketing at Aster Data, where she is an evangelist for Aster Data’s massively parallel data-analytics server product. Stephanie has over a decade of experience in product management and marketing for business intelligence, data warehouse, and complex event processing products at companies such as Oracle, PeopleSoft, and Business Objects. She holds both a master’s degree and an undergraduate degree from Stanford University.

Interview: Thomas C. Redman, Author of Data Driven

Here is an interview with Tom Redman, author of Data Driven. Among the first to recognize the need for high-quality data in the information age, Dr. Redman established the AT&T Bell Laboratories Data Quality Lab in 1987 and led it until 1995. He is the author of four books, holds two patents, and leads his own consulting group. In many respects, the “Data Doc,” as he is nicknamed, is also the father of data quality evangelism.

Tom Redman

Ajay- Describe your career path from science student to author of science and strategy books.

Redman: I took the usual biology, chemistry, and physics classes in college. And I worked closely with oceanographers in graduate school. More importantly, I learned directly from two masters. First was Dr. Basu, who was at Florida State when I was. He thought more deeply and clearly about the nature of data and what we can learn from them than anyone I’ve met since. And second was the people in the Bell Labs community who were passionate about making communications better. What I learned there was that you don’t always need “scientific proof” to move forward.


Ajay- What kind of bailout do you think the government can give to science education in this country?

Redman: I don’t think the government should bail out science education per se. Science departments should compete for students just like the English and anthropology departments do. At the same time, I do think the government should support some audacious goals, such as slowing global warming or achieving energy independence. These could well have the effect of increasing demand for scientists and science education.

Ajay- Describe your motivations for writing your book Data Driven: Profiting from Your Most Important Business Asset.

Redman: Frankly, I was frustrated. I’ve spent the last twenty years on data quality, and organizations that improve it gain enormous benefit. But so few do. I set out to figure out why that was and what to do about it.

Ajay- What can various segments of readers learn from this book: a college student, a manager, a CTO, a financial investor and a business intelligence vendor?

Redman: I narrowed my focus to the business leader and I want him or her to take away three points.  First, data should be managed as aggressively and professionally as your other assets.  Second, they are unlike other assets in some really important ways and you’ll have to learn how to manage them.  Third, improving quality is a great place to start.

Ajay- Garbage in, garbage out: how much money and time do you believe are given to data quality in data projects?

Redman:   By this I assume you mean data warehouse, BI, and other tech projects.  And the answer is “not near enough.”  And it shows in the low success rate of those projects.

Ajay- Consider a hypothetical scenario: instead of creating and selling fancy algorithms, a business intelligence vendor uses the simple Pareto principle to focus on data quality and design during data projects. How successful do you think that would be?

Redman: I can’t speak to the market, but I do know that organizations are loaded with problems and opportunities. They could make great progress on the most important ones if they could clearly state the problem and bring high-quality data and simple techniques to bear. But there are a few that require high-powered algorithms. Unfortunately, those require high-quality data as well.

Ajay- How and when did you first earn the nickname “Data Doc”? Who gave it to you, and would you rather be known by some other name?

Redman: One of my clients started calling me that about a dozen years ago.  But I felt uncomfortable and didn’t put it on my business card until about five years ago.  I’ve grown to really like it.

Ajay- The pioneering work at AT&T Bell Laboratories and at the Palo Alto laboratory: who do you think are the 21st-century successors of these laboratories? Do you think lab work has become too commercialized, even in respected laboratories like Microsoft Research and Google’s research in mathematics?

Redman: I don’t know. It may be that the circumstances of the 20th century were conducive to such labs and they’ll never happen again. You have to remember two things about Bell Labs. First was the cross-fertilization that stemmed from having leading-edge work in dozens of areas. Second, the goal was not just invention but innovation, the end-to-end process that starts with invention and ends with products in the market. AT&T, Bell Labs’ parent, was quite good at turning invention into product. These points lead me to think that the commercial aspect of laboratory work is so much the better.

Ajay- What does “the Data Doc” do to relax and maintain a work-life balance? How important do you think work-life balance is for creative people and researchers?

Redman: I think everyone needs a balance, not just creative people.  Two things have made this easier for me.  First, I like what I do.  A lot of days it is hard to distinguish “work” from “play.”  Second is my bride of thirty-three years, Nancy.  She doesn’t let me go overboard too often.

Biography-

Dr. Thomas C. Redman is President of Navesink Consulting Group, based in Little Silver, NJ.  Known by many as “the Data Doc” (though “Tom” works too), Dr. Redman was the first to extend quality principles to data and information.  By advancing the body of knowledge, his innovations have raised the standard of data quality in today’s information-based economy.

Dr. Redman conceived the Data Quality Lab at AT&T Bell Laboratories in 1987 and led it until 1995.  There he and his team developed the first methods for improving data quality and applied them to important business problems, saving AT&T tens of millions of dollars. He started Navesink Consulting Group in 1996 to help other organizations improve their data, while simultaneously lowering operating costs, increasing revenues, and improving customer satisfaction and business relationships.

Since then – armed with proven, repeatable tools, techniques and practical advice – Dr. Redman has helped clients in fields ranging from telecommunications, financial services, and dot coms, to logistics, consumer goods, and government agencies. His work has helped organizations understand the importance of high-quality data, start their data quality programs, and also save millions of dollars per year.

Dr. Redman holds a Ph.D. in statistics from Florida State University.  He is an internationally renowned lecturer and the author of numerous papers, including “Data Quality for Competitive Advantage” (Sloan Management Review, Winter 1995) and “Data as a Resource: Properties, Implications, and Prescriptions” (Sloan Management Review, Fall 1998). He has written four books: Data Driven (Harvard Business School Press, 2008), Data Quality: The Field Guide (Butterworth-Heinemann, 2001), Data Quality for the Information Age (Artech, 1996) and Data Quality: Management and Technology (Bantam, 1992). He was also invited to contribute two chapters to Juran’s Quality Handbook, Fifth Edition (McGraw Hill, 1999). Dr. Redman holds two patents.

About Navesink Consulting Group (http://www.dataqualitysolutions.com/)

Navesink Consulting Group was formed in 1996 and was the first company to focus on data quality.  Led by Dr. Thomas Redman, “the Data Doc” and former AT&T Bell Labs director, we have helped clients understand the importance of high-quality data, start their data quality programs, and save millions of dollars per year.

Our approach is not a cobbling together of ill-fitting ideas and assertions – it is based on rigorous scientific principles that have been field-tested in many industries, including financial services (see more under “Our clients”).  We offer no silver bullets; we don’t even offer shortcuts. Improving data quality is hard work.

But with a dedicated effort, you should expect order-of-magnitude improvements and, as a direct result, an enormous boost in your ability to manage risk, steer a course through the crisis, and get back on the growth curve.

Ultimately, Navesink Consulting brings tangible, sustainable improvement in your business performance as a result of superior quality data.
