Interview Michael Zeller, CEO Zementis, on PMML

Here is a topic-specific interview with Michael Zeller of Zementis on PMML, the de facto standard for data mining models.


Ajay- What is PMML?

Mike- The Predictive Model Markup Language (PMML) is the leading standard for statistical and data mining models, and it is supported by all major analytics vendors and organizations. With PMML, it is straightforward to develop a model on one system using one application and deploy it on another system using another application. PMML reduces complexity and bridges the gap between development and production deployment of predictive analytics.

PMML is governed by the Data Mining Group (DMG), an independent, vendor-led consortium that develops data mining standards.
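
To make the standard concrete, here is a minimal sketch of what a PMML file contains and how any consuming application can read it. The document below is hand-written for illustration; the field names, model, and coefficients are not from any vendor's output, and real exported files carry much richer metadata.

```python
import xml.etree.ElementTree as ET

# A hand-written, minimal PMML 3.2 document: a linear regression
# scoring "risk" from "income". Field names and coefficients are
# purely illustrative.
PMML_DOC = """\
<PMML version="3.2" xmlns="http://www.dmg.org/PMML-3_2">
  <Header description="Illustrative regression model"/>
  <DataDictionary numberOfFields="2">
    <DataField name="income" optype="continuous" dataType="double"/>
    <DataField name="risk" optype="continuous" dataType="double"/>
  </DataDictionary>
  <RegressionModel functionName="regression">
    <MiningSchema>
      <MiningField name="income"/>
      <MiningField name="risk" usageType="predicted"/>
    </MiningSchema>
    <RegressionTable intercept="1.5">
      <NumericPredictor name="income" coefficient="0.00002"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
"""

NS = {"p": "http://www.dmg.org/PMML-3_2"}
root = ET.fromstring(PMML_DOC)

# The DataDictionary declares every field the model uses, so a
# consuming application knows the schema without asking the producer.
for field in root.findall("p:DataDictionary/p:DataField", NS):
    print(field.get("name"), field.get("optype"), field.get("dataType"))

# Scoring a regression table is intercept + sum(coefficient * value).
table = root.find("p:RegressionModel/p:RegressionTable", NS)
income = 50000.0
score = float(table.get("intercept")) + sum(
    float(p.get("coefficient")) * income
    for p in table.findall("p:NumericPredictor", NS)
)
print("predicted risk:", score)  # 1.5 + 0.00002 * 50000 = 2.5
```

The point of the exercise is the interoperability Mike describes: any tool that understands the XML schema can score the model, regardless of which tool produced it.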

Ajay- How can PMML help any business?

Mike- PMML ensures business agility with respect to data mining, predictive analytics, and enterprise decision management. It provides one standard and one deployment process across all applications, projects, and business divisions. In this way, business stakeholders, analytic scientists, and IT are finally speaking the same language.

In the current global economic crisis, more than ever, a company must become more efficient and optimize its business processes to remain competitive. Predictive analytics is widely regarded as the next logical step, implementing more intelligent, real-time decisions across the enterprise.

However, the deployment of decisions based on predictive models and statistical algorithms has been a hurdle for many companies. Typically, it has been a complex, costly process to get such models integrated into operational systems. With the PMML standard, this is no longer the case. PMML simply eliminates the deployment complexity for predictive models.

A standard also provides choices among vendors, allowing us to implement best-of-breed solutions, and creating a common knowledge framework for internal teams – analytics, IT, and business – as well as external vendors and consultants. In general, having a solid standard is a sign of a mature analytics industry, creating more options for users and, most importantly, propelling the total analytics market to the next level.

Ajay- Can PMML help your existing software in analytics and BI?

Mike- PMML has been widely accepted among vendors; almost all major analytics and business intelligence vendors already support the standard. If you have any such software package in-house, you most likely have PMML at your disposal already.

For example, you can develop your models in any of the tools that support PMML, e.g., SPSS, SAS, MicroStrategy, or IBM, and then deploy that model in ADAPA, which is the Zementis decision engine. Or you can even choose from various open source tools, like R and KNIME.


Ajay- How do Zementis, ADAPA, and PMML fit together?

Mike- Zementis has been an avid supporter of the PMML standard and is very active in its development. We contributed to the PMML package for the open source R Project. Furthermore, we created a free PMML Converter tool which helps users validate and correct PMML files from various vendors and convert legacy PMML files to the latest version of the standard.

Most prominently, with ADAPA, Zementis launched the first cloud-computing scoring engine on the Amazon EC2 cloud. ADAPA is a highly scalable deployment, integration, and execution platform for PMML-based predictive models. Not only does it give you all the benefits of being fully standards-based, using PMML and web services, but it also leverages the cloud for scalability and cost-effectiveness.

By being a Software as a Service (SaaS) application on Amazon EC2, ADAPA provides extreme flexibility, from casual usage that costs only a few dollars a month all the way to high-volume, mission-critical enterprise decision management, which users can seamlessly launch in United States or European data centers.

Ajay- What are some examples where PMML helped companies save money?

Mike- For any consulting company focused on developing predictive analytics models for clients, PMML provides tremendous benefits, both for clients and for the service provider. Standardizing on PMML defines a clear deliverable – a PMML model – which clients can deploy instantly. There are no fixed requirements on which specific tools to choose for development or deployment; it is only important that the model adheres to the PMML standard, which becomes the common interface between the business partners. This eliminates miscommunication and lowers the overall project cost. Another example is where a company has taken advantage of the capability to move models instantly from development to operational deployment. It allows them to quickly update models based on market conditions, say in the area of risk management and fraud detection, or to roll out new marketing campaigns.

Personally, I think the biggest opportunities are still ahead of us as more and more businesses embrace operational predictive analytics. The true value of PMML is to facilitate a real-time decision environment where we leverage predictive models in every business process, at every customer touch point, and on demand to maximize value.

Ajay- Where can I find more information about PMML?

Mike- First, there is the Data Mining Group (DMG) web site at http://www.dmg.org

I strongly encourage any company that has a significant interest in predictive analytics to become a member and help drive the development of the standard.

We also created a knowledge base of PMML-related information at http://www.predictive-analytics.info and there is a PMML interest group on LinkedIn: http://www.linkedin.com/groupRegistration?gid=2328634

This group is more geared toward a general discussion forum for business benefits and end-user questions, and it is a great way to get started with PMML.

Last but not least, there is the Zementis web site at http://www.zementis.com

It contains various PMML example files, the PMML Converter tool, as well as links to PMML resource pages on the web.

For more on Michael Zeller and Zementis read his earlier interview at https://decisionstats.wordpress.com/2009/02/03/interview-michael-zeller-ceozementis-2/

Interview Ken O’Connor, Business Intelligence Consultant

Here is an interview with an industry veteran of Business Intelligence, Ken O’Connor.

Ajay- Describe your career journey across the full development cycle of Business Intelligence.

Ken- I started my career in the early 80’s in the airline industry, where I worked as an application programmer and later as a systems programmer. I took a computer science degree by night. The airline industry was one of the first to implement computer systems in the ‘60s, and the legacy of being an early adopter was that airline reservation systems were developed in Assembler. Remarkable as it sounds now, as application programmers, we wrote our own file access methods. Even more remarkably, as systems programmers, we modified the IBM-supplied Operating System, originally known as the Airline Control Program (ACP), later renamed the Transaction Processing Facility (TPF). The late ‘80s saw the development of Global “Computer Reservations Systems” (CRS systems) including AMADEUS and GALILEO. I moved from Aer Lingus, a small Irish airline, to work in London on the British Airways systems, to enable the British Airways systems to share information and communicate with the new Global CRS systems.

I learnt very important lessons during those years.

* The criticality of standards

* The drive for interoperability of systems

* The drive towards information sharing

* The drive away from bespoke development

In the 90’s I returned to Dublin, where I worked as an independent consultant with IBM on many data-intensive projects. On one project I was lead developer in the IBM Dublin Laboratory on the development of the Data Replication tool called “Data Propagator NonRelational”. This tool automatically propagates updates made on IMS databases to DB2 databases. On this project, we successfully piloted the Cleanroom Development Method, as part of IBM’s drive towards Six Sigma quality.

In the past 15 years I have moved away from IT towards the business. I describe myself as a Hybrid. I believe there is a serious communications gap between business users and IT, and this is a frequent cause of project failures. I seek to bridge that gap. I ensure that requirements are clear, measurable, testable, and capable of being easily understood and signed off by business owners.

One of my favorite programmes was the Euro Changeover. This was a hugely data-intensive programme. It was the largest changeover undertaken by European Financial Institutions. I worked as an independent consultant with the IBM Euro Centre of Competence. I developed changeover strategies for a number of Irish Enterprises, and was the end-to-end IT changeover process owner in a major Irish bank. Every application and every data store holding currency-sensitive data (not just amounts, but currency signs etc.) had to be converted at exactly the same time to ensure that all systems successfully switched to euro processing on 1st January 2002.

I learnt many, many lasting lessons about data the hard way on Euro Changeover programmes, such as:

* The extent to which seemingly separate applications share operational data – often without the knowledge of the owning application.

* The extent to which business users use (abuse) data fields to hold information never intended for the data field.

* The critical distinction between the underlying data (in a data store) and the information displayed to a business user.

I have worked primarily on what I call “End of food chain” projects and programmes, such as Single View of Customer, data migrations, and data population of repositories for BASEL II and Anti Money Laundering (AML) systems. Business Intelligence is another example of an “End of food chain” project. “End of food-chain” projects share the following characteristics:

* Dependent on existing data

* No control over the quality of existing data they depend on

* No control over the data entry processes by which the data they require is captured.

* The data required may have been captured many years previously.

Recently, I have shared my experience of “Enterprise wide data issues” in a series of posts on my blog, together with a process for assessing the status of those issues within an Enterprise (more details). In my experience, the success of a Business Intelligence programme and the ease with which an Enterprise completes “End of food chain” data dependent programmes directly depends on the status of the common Enterprise Wide data issues I have identified.

Ajay- Describe the educational scene for science graduates in Ireland. What steps do you think governments and universities can take to better teach science and keep young people excited about it?

Ken- I am not in a position to comment on the educational scene for science graduates in Ireland. However, I can say that currently there are insufficient numbers of school children studying science in primary and 2nd level education. There is a need to excite young people about science. There is a need for more interactive science museums, like W5 in Belfast which is hugely successful. Kids love to get involved, and practical science can be great fun.

Ajay- What are some of the key trends in business intelligence that you have seen?

Ken- Since the earliest days of my career, I have seen an ever increasing move towards standards based interoperability of systems, and interchange of data. This has accelerated dramatically in recent years. This is the good news. Further good news is the drive towards the use of external reference databases to verify the accuracy of data, at point of data entry (See blog post on Upstream prevention by Henrik Liliendahl Sørensen). One example of this drive is cloud based verification services from new companies like Ireland based Clavis Technology.

The harsh reality is that “Old hardware goes into museums, while old software goes into production every night”. Enterprises have invested vast amounts of money in legacy applications over decades. These legacy systems access legacy data in legacy data stores. This legacy data will continue to pose challenges in the delivery of Business Intelligence to the Business community that needs it. These challenges will continue to provide opportunities for Data Quality professionals.

Ajay- What is going to be the next fundamental change in this industry in your opinion?

Ken- The financial crisis will result in increased regulatory requirements. This will be good news for the Business Intelligence / Data Quality industry. In time, it will no longer be sufficient to provide the regulator with ‘just’ the information requested. The regulator will want to see the process by which the information was gathered, the process controls, and evidence of the quality of the underlying data from which the information was derived. This move will result in funding for Data Governance programmes, which will lead to increased innovation in our industry.

Ajay- Describe your startup Map My Business, your target customer and your vision for it.

Ken- I started MapMyBusiness.com as a “recession buster”. Ireland was hit particularly hard by the financial crisis. I had become over dependent on the financial services industry, and a blanket ban on the use of external consultants left me with no option but to reinvent myself. MapMyBusiness.com helps small businesses to attract clients, by getting them on Google page one. Having been burnt by an over dependence on one industry, my vision is to diversify. I believe that Data Governance is industry independent, and I am focussing on increasing my customer base for my Data Governance consultancy skills, via my company Professional IT Personnel Ltd.

Ajay- What do you do when not working with customers or blogging on your website?

Ken- I try to achieve a reasonable work/life balance. I am married with two children aged 12 and 10, and like to spend time with them, especially outdoors, walking, hiking, playing tennis etc. I am involved in my community, lobbying for improved cycling infrastructure in our area (more details). Ireland, like most countries, is facing an obesity epidemic, due to an increasingly sedentary lifestyle. Too many people get little or no exercise, and don’t have the time, willpower, or perhaps money, to regularly work out in a gym. By including “Active Travel” in our daily lives – by walking or cycling to schools and local amenities – we can get enough physical exercise to prevent obesity and obesity-related health problems. We need to make our cities, towns and villages more pedestrian- and cyclist-friendly, to encourage “active travel”. My voluntary work in this area introduced me to mapping (see example), and enabled me to set up MapMyBusiness.com.

Biography-

Ken O’Connor is an independent IT Consultant with almost 30 years of work experience. He specialises in Data: Data Migration, Data Population, Data Governance, Data Quality, Data Profiling, and more. His company is called Professional IT Personnel Ltd.

Ken started his blog (Ken O’Connor Data Consultant) to share his experience and to learn from the experience of others. Dylan Jones, editor of dataqualitypro, describes Ken as a “grizzled veteran” with almost 30 years of experience across the full development lifecycle.

Interview Shawn Kung, Senior Director, Aster Data

Here is an interview with Shawn Kung, Senior Director of Product Management at Aster Data. Shawn explains the differences between the various database technologies, Aster’s rising appeal owing to its unique technological approach, and touches upon various other topics of interest to people in the BI and technology space.


Ajay- Describe your career journey from a high school student of science till today. Do you think science is a more lucrative career?

Shawn: My career journey has spanned over a decade in several Silicon Valley technology companies.  In both high school and my college studies at Princeton, I had a fervent interest in math and quantitative economics.  Silicon Valley drew me to companies like upstart procurement software maker Ariba and database giant Oracle.  I continued my studies by returning to get a Master’s in Management Science at Stanford before going on to lead core storage systems for nearly 5 years at NetApp and subsequently Aster.

Science (whether it is math, physics, economics, or the hard engineering sciences) provides a solid foundation. It teaches you to think and test your assumptions – those are valuable skills that can lead to both a financially lucrative and personally inspiring career.

Ajay- How would you describe the differences between MapReduce and Hadoop, Oracle and SAS, and DBMS products like Teradata and Aster Data, to a class of undergraduate engineers?

Shawn: Let’s start with the database guys – Oracle and Teradata.  They focus on structured data – data that has a logical schema and is manipulated via a standards-based structured query language (SQL).  Oracle tries to be everything to everyone – it does OLTP (low-latency transactions like credit card or stock trade execution apps) and some data warehousing (typically summary reporting).  Oracle’s data warehouse is not known for large-scale data warehousing and is more often used for back-office reporting.

Teradata is focused on data warehousing and scales very well, but is extremely expensive – it runs on high-end custom hardware and takes a mainframe approach to data processing.  This approach makes less sense as commodity hardware becomes more compute-rich and better software comes along to support large-scale MPP data warehousing.

SAS is very different – it’s not a relational database. It really offers an application platform for data analysis, specifically data mining. Unlike Oracle and Teradata, which are used by SQL developers and managed by DBAs, SAS is typically run in business units by data analysts – for example a quantitative marketing analyst, a statistician/mathematician, or a savvy engineer with a data mining/math background. SAS is used to try to find patterns, understand behaviors, and offer predictive analytics that enable businesses to identify trends and make smarter decisions than their competitors.

Hadoop offers an open-source framework for large-scale data processing.  MapReduce is a component of Hadoop, which also contains multiple other modules including a distributed filesystem (HDFS).  MapReduce offers a programming paradigm for distributed computing (a parallel data flow processing framework).

Both Hadoop and MapReduce are geared toward the application developer or programmer, not toward enterprise data centers or IT. If you have a finite project in a line of business and want to get it done, Hadoop offers a low-cost way to do this. For example, if you want to do large-scale data munging like aggregations, transformations, and manipulations of unstructured data, Hadoop offers a solution without compromising the performance of your main data warehouse. Once the data munging is finished, the post-processed data set can be loaded into a database for interactive analysis or analytics. It is a great combination of big data technologies for certain use-cases.
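
To illustrate the programming paradigm Shawn describes, here is a toy, single-process sketch of the MapReduce data flow: a map function emits key/value pairs, a shuffle groups them by key, and a reduce function aggregates each group. In Hadoop the same two functions would run distributed across many nodes; the log lines and counts here are invented for illustration.

```python
from collections import defaultdict

# Toy single-process MapReduce: count page hits in raw log lines.
log_lines = [
    "2009-06-01 /home alice",
    "2009-06-01 /products bob",
    "2009-06-02 /home carol",
]

def map_fn(line):
    # Map phase: emit one (key, value) pair per record.
    _, page, _ = line.split()
    yield page, 1

def reduce_fn(key, values):
    # Reduce phase: aggregate all values sharing a key.
    return key, sum(values)

# Shuffle phase: group intermediate pairs by key (Hadoop does this
# across the network between the map and reduce stages).
groups = defaultdict(list)
for line in log_lines:
    for key, value in map_fn(line):
        groups[key].append(value)

print([reduce_fn(k, v) for k, v in groups.items()])
# [('/home', 2), ('/products', 1)]
```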

Aster takes a unique approach. Our Aster nCluster software offers the best of all worlds – the deep analytics potential of SAS, the low-cost scalability and parallel processing of Hadoop/MapReduce, and the structured data advantages (schema, SQL, ACID compliance and transactional integrity, indexes, etc.) of a relational database like Teradata or Oracle. Often, we find complementary approaches and therefore view SAS and Hadoop/MapReduce as synergistic to a complete solution. Data warehouses like Teradata and Oracle tend to be more competitive.

Ajay- What exciting products have you launched so far, and what makes them unique, both from a technical developer perspective and a business owner perspective?

Shawn: Aster was the first to market with In-Database MapReduce, which combines the standards and familiarity of SQL and databases with the analytic power of MapReduce. This is unique: it allows technical developers and application programmers to write embedded procedural algorithms once, upload them, and let business analysts or IT folks (SQL developers, DBAs, etc.) invoke these SQL-MapReduce functions forever.

It is highly polymorphic (re-usable), highly fault-tolerant, highly flexible (any language – Java, Python, Ruby, Perl, R statistical language, C# in the .NET world, etc) and natively massively parallel – all of which differentiate these SQL extensions from traditional dumb user-defined functions (UDFs).
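
As a rough illustration of the write-once, reusable idea (my sketch, not Aster's actual SQL-MapReduce API or syntax), here is the kind of row function Shawn is describing: a sessionize step that works on any input table carrying a user and a timestamp and passes all other columns through, which is what makes it polymorphic.

```python
# Hypothetical sketch of a polymorphic row function in the spirit of
# SQL-MapReduce; the real thing is invoked from SQL and runs in
# parallel inside the database.

def sessionize(rows, timeout):
    """Assign session numbers to clickstream rows.

    rows: iterable of dicts sorted by (user, ts); any extra columns
    are passed through untouched, which is what lets the function be
    reused across tables with different schemas."""
    session_id = 0
    last_seen = {}
    for row in rows:
        prev = last_seen.get(row["user"])
        if prev is None or row["ts"] - prev > timeout:
            session_id += 1  # gap exceeded: start a new session
        last_seen[row["user"]] = row["ts"]
        yield {**row, "session": session_id}

clicks = [
    {"user": "a", "ts": 0,   "page": "/"},
    {"user": "a", "ts": 10,  "page": "/buy"},
    {"user": "a", "ts": 500, "page": "/"},
]
for r in sessionize(clicks, timeout=300):
    print(r)  # the third click starts session 2
```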

Ajay- “I am happy with my databases and I don’t need too much diversity or experimentation in my systems”, says a CEO to you.

How do you convince him using quantitative numbers and not marketing adjectives?

Shawn: Aster has dozens of production customers including big-names like MySpace, LinkedIn, Akamai, Full Tilt Poker, comScore, and several yet-to-be-named retail and financial service accounts.  We have quantified proof points that show orders of magnitude improvements in scalability, performance, and analytic insights compared to incumbent or competitor solutions.  Our highly referenceable customers would be happy to discuss their positive experiences with the CEO.

But taking a step back, there’s a fundamental concept that this CEO needs to first understand. The world is changing – data growth is proliferating due to the digitization of so many applications and the emergence of unstructured data and new data types. As the book “Competing on Analytics” argues, the world is shifting to a paradigm where companies that don’t take risks and push the limits on analytics will die like the dinosaurs.

IDC is projecting more than 10x growth in data over the next few years, to zettabytes of aggregate data, driven by digitization (Internet, digital television, RFID, etc.). The data is there, and in order to compete effectively and understand your customers more intimately, you need a large-scale analytics solution like the one Aster nCluster offers. If you hold off on experimentation and innovation, it will be too late by the time you realize you have a problem at hand.

Ajay- How important is work life balance for you?

Shawn: Very important.  I hang out with my wife most weekends – we do a lot of outdoors activities like hiking and gardening.  In Silicon Valley, it’s all too easy to get caught up in the rush of things.  Taking breaks, especially during the weekend, is important to recharge and re-energize to be as productive as possible.

Ajay- Are you looking for college interns and new hires? What makes Aster so exciting for you that you are pumped up every day to go to work?

Shawn: We’re always looking for smart, innovative, and entrepreneurial new college grads and interns, especially on the technical side.  So if you are a computer science major or recent grad or graduate student, feel free to contact us for opportunities.

What makes Aster exciting is two things –

First, the people. Everyone is very smart and innovative, so you learn a tremendous amount, which is personally gratifying and professionally useful long-term.

Second, Aster is changing the world!

Distributed systems computing focused on big data processing and analytics – these are massive game-changers that will fundamentally change the landscape in data warehousing and analytics. Traditional databases have been an oligopoly for over a generation – they haven’t been challenged, and so 1970s-based technology has stuck around. The emergence of big data and low-cost commodity hardware has created a unique opportunity to carve out a brand new market…

What gets me pumped every day is that I have the ability to contribute to a pioneer that is quickly becoming Silicon Valley’s next great success story!

Biography-

Over the past decade, Shawn has led product management for some of Silicon Valley’s most successful and innovative technology companies.  Most recently, he spent nearly 5 years at Network Appliance leading Core Systems storage product management, where he oversaw the development of high availability software and Storage Systems hardware products that grew in annual revenue from $200M to nearly $800M.  Prior to NetApp, Shawn held senior product management and corporate strategy roles at Oracle Corporation and Ariba Inc.

Shawn holds an M.S. in Management Science and Engineering from Stanford University, where he was awarded the Valentine Fellowship (endowed by Don Valentine of Sequoia Capital). He also received a B.A. with high honors from Princeton University.

About Aster

Aster Data Systems is a proven leader in high-performance database systems for data warehousing and analytics – the first DBMS to tightly integrate SQL with MapReduce – providing deep insights on data analyzed on clusters of low-cost commodity hardware. The Aster nCluster database cost-effectively powers frontline analytic applications for companies such as MySpace, aCerno (an Akamai company), and ShareThis.

Running on low-cost off-the-shelf hardware, and providing ‘hands-free’ administration, Aster enables enterprises to meet their data warehousing needs within their budget. Aster is headquartered in San Carlos, California and is backed by Sequoia Capital, JAFCO Ventures, IVP, Cambrian Ventures, and First-Round Capital, as well as industry visionaries including David Cheriton and Ron Conway.

Interview Thomas C. Redman, Author of Data Driven

Here is an interview with Tom Redman, author of Data Driven. Among the first to recognize the need for high-quality data in the information age, Dr. Redman established the AT&T Bell Laboratories Data Quality Lab in 1987 and led it until 1995. He is the author of four books, holds two patents, and leads his own consulting group. In many respects the “Data Doc”, as he is nicknamed, is also the father of Data Quality Evangelism.


Ajay- Describe your career journey from science student to author of science and strategy books.

Redman: I took the usual biology, chemistry, and physics classes in college. And I worked closely with oceanographers in graduate school. More importantly, I learned directly from two masters. First was Dr. Basu, who was at Florida State when I was. He thought more deeply and clearly about the nature of data and what we can learn from them than anyone I’ve met since. And second, the people in the Bell Labs’ community who were passionate about making communications better. What I learned there was that you don’t always need “scientific proof” to move forward.

Ajay- What kind of bailout do you think the government can give to science education in this country?

Redman: I don’t think the government should bail out science education per se. Science departments should compete for students just like the English and anthropology departments do. At the same time, I do think the government should support some audacious goals, such as slowing global warming or energy independence. These could well have the effect of increasing demand for scientists and science education.

Ajay- Describe your motivations for writing your book Data Driven: Profiting from Your Most Important Business Asset.

Redman: Frankly, I was frustrated. I’ve spent the last twenty years on data quality, and organizations that improve it gain enormous benefit. But so few do. I set out to figure out why that was and what to do about it.

Ajay- What can various segments of readers learn from this book: a college student, a manager, a CTO, a financial investor, and a business intelligence vendor?

Redman: I narrowed my focus to the business leader and I want him or her to take away three points.  First, data should be managed as aggressively and professionally as your other assets.  Second, they are unlike other assets in some really important ways and you’ll have to learn how to manage them.  Third, improving quality is a great place to start.

Ajay- Garbage in, garbage out. How much money and time do you believe is given to data quality in data projects?

Redman:   By this I assume you mean data warehouse, BI, and other tech projects.  And the answer is “not near enough.”  And it shows in the low success rate of those projects.

Ajay- Consider a hypothetical scenario: instead of creating and selling fancy algorithms, a business intelligence vendor uses the simple Pareto principle to focus on data quality and design during data projects. How successful do you think that would be?

Redman: I can’t speak to the market, but I do know that organizations are loaded with problems and opportunities. They could make great progress on the most important ones if they could clearly state the problem and bring high-quality data and simple techniques to bear. But there are a few that require high-powered algorithms. Unfortunately, those require high-quality data as well.

Ajay- How and when did you first earn the nickname “Data Doc”? Who gave it to you, and would you rather be known by some other name?

Redman: One of my clients started calling me that about a dozen years ago.  But I felt uncomfortable and didn’t put it on my business card until about five years ago.  I’ve grown to really like it.

Ajay- The pioneering work at AT&T Bell Laboratories and the Palo Alto laboratory – who do you think are the 21st century successors of these laboratories? Do you think lab work has become too commercialized, even in respected laboratories like Microsoft Research and Google’s research in mathematics?

Redman: I don’t know. It may be that the circumstances of the 20th century were conducive to such labs and they’ll never happen again. You have to remember two things about Bell Labs. First was the cross-fertilization that stemmed from having leading-edge work in dozens of areas. Second, the goal is not just invention, but innovation, the end-to-end process which starts with invention and ends with products in the market. AT&T, Bell Labs’ parent, was quite good at turning invention into product. These points lead me to think that the commercial aspect of laboratory work is so much the better.

Ajay- What does “The Data Doc” do to relax and maintain a work-life balance? How important do you think work-life balance is for creative people and researchers?

Redman: I think everyone needs a balance, not just creative people.  Two things have made this easier for me.  First, I like what I do.  A lot of days it is hard to distinguish “work” from “play.”  Second is my bride of thirty-three years, Nancy.  She doesn’t let me go overboard too often.

Biography-

Dr. Thomas C. Redman is President of Navesink Consulting Group, based in Little Silver, NJ.  Known by many as “the Data Doc” (though “Tom” works too), Dr. Redman was the first to extend quality principles to data and information.  By advancing the body of knowledge, his innovations have raised the standard of data quality in today’s information-based economy.

Dr. Redman conceived the Data Quality Lab at AT&T Bell Laboratories in 1987 and led it until 1995.  There he and his team developed the first methods for improving data quality and applied them to important business problems, saving AT&T tens of millions of dollars. He started Navesink Consulting Group in 1996 to help other organizations improve their data, while simultaneously lowering operating costs, increasing revenues, and improving customer satisfaction and business relationships.

Since then – armed with proven, repeatable tools, techniques and practical advice – Dr. Redman has helped clients in fields ranging from telecommunications, financial services, and dot coms, to logistics, consumer goods, and government agencies. His work has helped organizations understand the importance of high-quality data, start their data quality programs, and also save millions of dollars per year.

Dr. Redman holds a Ph.D. in statistics from Florida State University.  He is an internationally renowned lecturer and the author of numerous papers, including “Data Quality for Competitive Advantage” (Sloan Management Review, Winter 1995) and “Data as a Resource: Properties, Implications, and Prescriptions” (Sloan Management Review, Fall 1998). He has written four books: Data Driven (Harvard Business School Press, 2008), Data Quality: The Field Guide (Butterworth-Heinemann, 2001), Data Quality for the Information Age (Artech, 1996) and Data Quality: Management and Technology (Bantam, 1992). He was also invited to contribute two chapters to Juran’s Quality Handbook, Fifth Edition (McGraw Hill, 1999). Dr. Redman holds two patents.

About Navesink Consulting Group (http://www.dataqualitysolutions.com/)

Navesink Consulting Group was formed in 1996 and was the first company to focus on data quality.  Led by Dr. Thomas Redman, “the Data Doc” and former AT&T Bell Labs director, we have helped clients understand the importance of high-quality data, start their data quality programs, and save millions of dollars per year.

Our approach is not a cobbling together of ill-fitting ideas and assertions – it is based on rigorous scientific principles that have been field-tested in many industries, including financial services (see more under “Our clients”).  We offer no silver bullets; we don’t even offer shortcuts. Improving data quality is hard work.

But with a dedicated effort, you should expect order-of-magnitude improvements and, as a direct result, an enormous boost in your ability to manage risk, steer a course through the crisis, and get back on the growth curve.

Ultimately, Navesink Consulting brings tangible, sustainable improvement in your business performance as a result of superior quality data.

Interview Augusto Albeghi (Straycat), Founder of StraySoft

An interview with Augusto (StrayCat), a startup entrepreneur with an interesting technology company, StraySoft.

Ajay- Describe your career as a BI consultant.

Straycat- I’m an aerospace engineer who had to turn to IT right after graduation because of the Italian aerospace industry crisis in the first half of the 90’s. My first job was with the company now called Accenture, as a simple developer. I was part of a large project for a large US food corporation.

We built an enterprise-level reporting and budgeting system based on what was later to become Hyperion. After that I had various experiences, always as an IT professional, always focusing on BI or related subjects. I worked for the Milan Airport Authority, the L’Oréal group, and a couple of local software houses. Now I’m a project manager at a large Italian consulting firm but, most of all, I’m a bootstrapping entrepreneur.

Ajay- How do you think we can teach BI at an early stage to young students?

Straycat- I think that the main problem resides in the naïve university approach toward business data analysis. Collecting data is considered trivial compared to other related subjects.

Data availability is often taken for granted, and then equations are written upon the data. Needless to say, it is not trivial at all, and there is an entire class of problems which students are not aware of.

A few lessons spent focusing on data quality, aggregations, measure definitions etc. are enough to create the necessary awareness of the problem. It’s no longer cool to claim ignorance of the subject!

Ajay- Describe the most challenging project you ever did. Name a project which led to the biggest dollar impact.

Straycat- About three years ago we signed a contract with a large fashion firm here in Italy to reengineer their entire business intelligence setup. It has been a project ranging from sales to production, from accounting to human resources.

It impacted almost a thousand users in six different time zones. The main challenge we had to tackle was the fragmentation of their legacy BI systems, which had produced different jargon and practices across the corporation. We changed the database and the presentation layer, built a modern data warehouse, and worked relentlessly on change management.

I can’t disclose figures but the new unified system shed light on some bad practices, revealed inefficiencies and provided a whole new set of analytics that increased market awareness.

Ajay- Describe your start-up StraySoft and what it is hoping to accomplish.

Straycat-

StraySoft is a small and fresh startup devoted to building Business Intelligence applications.

It produces Viney@rd, an Excel/SQL Server-based spreadsheet automation and BI tool.

I have personal reasons for embarking on such a project, but the kick-off came from a sudden realization: despite the terrific sophistication level of current BI tools, the one thing each and every user wants is to have data in MS Excel.

This is simply a fact: users get data, process them, and make Excel reports. It’s not a matter of features; people feel in full control only when they have an Excel file.

Why? Because Excel is able to address a single cell, and the figures within can be adjusted at will and saved in a familiar place like the C: drive.

So the original idea (two and a half years ago) was to create a tool to refresh a complex layout without disrupting it, i.e. a tool which could address query results into single cells.

This can be done with Excel alone, but it’s far too difficult even for an advanced user.
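
As a small illustration of the cell-addressing idea (my sketch, using the openpyxl library; the workbook, sheet, and cell names are hypothetical, and not how Viney@rd itself is implemented), refreshed values are written only into fixed, individual cells so that a hand-crafted layout survives the refresh:

```python
from openpyxl import load_workbook

# Open an existing, hand-designed workbook (hypothetical file name).
wb = load_workbook("report.xlsx")
ws = wb["Dashboard"]

# Pretend these values came back from a SQL query; each result is
# addressed to one specific cell in the existing layout.
results = {"B3": 125000, "B4": 98000, "D7": 0.42}

for cell, value in results.items():
    ws[cell] = value  # only the addressed cells change

wb.save("report.xlsx")  # formatting, charts, and formulas elsewhere survive
```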

Viney@rd features this but I soon realized that, if I wanted to go down this path, I had to tackle a second issue: the data provided by the systems are never the data required by the user. I’m not talking about bad analysis or wrong KPIs; even if the architects did everything fine, the human brain works according to categories that often are not saved within a database.

Example: you are a salesman and you have 4 customers who make up 75% of your business, plus 40 customers who make up the other 25% of your business.

Question: “How many customers do you have?” Reply: “I have 4: customers A, B, C and D. A is bla bla bla, B is bla bla bla, C etc. etc. Oh, by the way, I have some others but they are marginal.”

The salesman needs every kind of information about the 4, and just a few hints about the rest; any detailed information about the rest is perceived as clutter. He needs a screen with 4 ultra-detailed sheets, one per customer, not a customer ABC report with 44 rows.

So far, nothing revolutionary. What is revolutionary is that the user himself must be able to tag the 4 main customers according to his own perception of customer importance.

If one of the small customers is going to place a large order, then it must become important as well and should immediately take fifth place, to be automatically demoted when the opportunity expires.

The point is that these rules are defined heuristically by the human brain and have so many exceptions that they can be handled only by a human brain. This consideration led to implementing the unique feature of letting users change their data directly in an Excel table.
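
A minimal sketch of that tagging idea (my illustration, not Viney@rd's actual logic; the customer data and threshold are invented): the user's own tags always win, and a small customer with a large open opportunity is promoted automatically, then demoted once the opportunity closes.

```python
# Hypothetical data: four user-tagged key customers plus two small
# ones, one of which currently has a large open order.
customers = [
    {"name": "A", "tagged": True,  "open_order": 0},
    {"name": "B", "tagged": True,  "open_order": 0},
    {"name": "C", "tagged": True,  "open_order": 0},
    {"name": "D", "tagged": True,  "open_order": 0},
    {"name": "E", "tagged": False, "open_order": 90000},
    {"name": "F", "tagged": False, "open_order": 200},
]

LARGE_ORDER = 50000  # illustrative threshold

def is_important(c):
    # The user's own tag always wins; otherwise promote while a large
    # opportunity is open, and demote automatically once it closes.
    return c["tagged"] or c["open_order"] >= LARGE_ORDER

print([c["name"] for c in customers if is_important(c)])
# ['A', 'B', 'C', 'D', 'E'] -- E holds fifth place while its order is open
```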

The Viney@rd database is easy to feed by traditional techniques, but Excel sheet data can be saved within it as well. This gives the best of both worlds: a central repository for “conventional” data, so no more “spreadsheet hell” nightmares, plus the ability to classify and adjust the data while still working in Excel.

This approach has limits, specifically when we talk about large amounts of data – I’m the first to admit it – but I still think that it’s the one thing that can popularize BI among business users.

When large vendors embrace this, I’ll remind them of this interview! :o) Viney@rd is now in its infancy but already implements these two core features.

There are a lot of things to do, and many features to add to take it to a full corporate level, but I enjoy the process so much that I can’t stop working on it!

I’ve been asked “What if I buy from you and you go belly up next year?”. My reply is that you must shoot me to stop me from working on it! I still have a long list of features to implement and I’m not going to dismiss the fun!

For example, did you ever notice that people think naturally in terms of information streams ….?

I’ll consider myself successful when three conditions are met:

a) I have a body of satisfied users whose working lives have been improved by my products;

b) I make a living out of StraySoft, together with my employees, when I have some;

c) people think of me as the Business Intelligence “enfant terrible”.

Ajay- What do you do in your spare time?

StrayCat- Sorry? What’s spare time? Jokes apart, I devote time to my wife, who’s really supportive of this effort. Late at night, before falling asleep, I usually read for half an hour: I’m passionate about history; but the events I really never miss are the Italian National Rugby Union Team matches.

Ajay- Why do you tweet using the name Stray Cat?

Augusto- I named the company StraySoft after the adoption of a stray cat; the full story is told here: http://www.straysoft.com/dblog/articolo.asp?id=30.

The Twitter name came as naturally as naming the company. I know that someone may find it awkward, but I feel like going upstream on that! Secondly, I want to keep my consulting activity and StraySoft totally separate as a matter of convenience. I did not, and will never, propose my product to my consulting customers.

Ajay- What visible trends in Business Intelligence do you foresee for the next two to three years?

Augusto- The #1 trend is that all the main vendors (excluding Microsoft, which already did) have finally realized that there’s a midrange market which needs BI more than ever.

What they’re doing wrong is targeting this segment with the same enterprise class tools which miss the few key features required by this market.

The #2 trend is the rise of workgroup BI and the new dignity given to informal analysis. This is a whole new approach that I do not share completely, but I admit it has its strengths.

The #3 trend is at the opposite side of the spectrum; unconventional databases (columnar stores, appliances etc.) are becoming increasingly popular to manage very large amounts of data.

There are two fake trends: Clouds and SaaS. They’ll get a share of the market but will not become, in the foreseeable future, the reference architecture. Thank you again for giving me a voice. All the best. Augusto Albeghi

Ajay- To know more about Augusto’s startup and Viney@rd, please see www.straysoft.com

Interview Evan Levy Baseline Consulting

Here is an interview with Evan Levy, co-founder of one of the best and most practical business consultancies, Baseline Consulting. The lower the bull-shit, the better the consultant (forgive my ….). Read here why Baseline’s frank and fast technology acumen has made it a rising star in this fast-growing field.

Businesses realize that there’s more to information delivery than just distributing reports; companies rely on data analysis to support operational decision making.- Evan Levy

Ajay- Describe your career in science and technology.

Evan Levy- My formal “science and technology career” started during college. I received degrees in Electrical Engineering and Computer Science from Duke University. While in school, I had several programming jobs both on and off campus: a systems integration team within a large government contractor; a research study within the Psychology Department; and the university Computation Center. After graduation, I worked at a database computer startup (Teradata) for several years before co-founding Baseline Consulting. I’ve been here 18 years.

Ajay- How would you help solve the problem of the chronic technology worker shortage in the United States?

Evan- I think the “technology worker shortage” in the US is a problem with multiple facets. I don’t think it can be addressed simply by “throwing bodies” at the problem. I tend to view IT as two distinct areas: processing operations and information delivery. Processing operations includes the development and maintenance of all of the operational computing platforms (applications, mainframes, servers, desktop systems, processing infrastructure, etc.) Information delivery focuses on a company’s data and the associated integration and analytical infrastructure (data integration, analysis tools, processing platforms, etc.)

There’s been a fundamental shift in operational systems development over the past 15 years. The belief that most companies had unique, specialized business processes requiring custom developed applications proved unrealistic. The costs, resources, and long timelines associated with these activities weren’t practical in today’s business environment. Most companies have been willing to shift to “packaged applications” that allowed them to evaluate and trade business process customization for time. This has caused skills within development teams to change dramatically; the need for business process and data analysis skills has exploded.

The growth and adoption of business analysis as a core business capability has also changed the approach to technology development. The time of distributing standard reports across the company in the weekly or monthly format simply isn’t sufficient any more. Businesses realize that there’s more to information delivery than just distributing reports; companies rely on data analysis to support operational decision making. Detailed data analysis and exception reporting isn’t a luxury, it’s become part of the core business functions required by most companies. This has caused business users to become much more sophisticated in data analysis – and it has caused a need for IT to expand their focus from simply providing applications to having to provide detailed data to business users.

I think these two changes have caused a significant shift in the “technology worker shortage”. I don’t think we’re living in a time where the application is the key business asset for companies. I think folks are beginning to realize that the key asset is data. The only way to address the backlog in applications is to fundamentally shift how we approach the problem. Our users are learning to solve their problems with desktop tools; we need to be able to deliver detailed data to them in a more efficient manner.

The only way to address this challenge is for the technology worker to become educated on their users’ business practices and methods. I don’t think throwing thousands of programmers at the issue will solve the current problem. I think we need to capitalize on the skills that already exist within our user communities and position IT resources to make them more self-sufficient. I don’t think there’s any short cut to building and maintaining operational applications; programmers will always be needed for that. However, in the growth area of business and data analysis, I think we need to take an entirely different tack. We need to make users more self-sufficient with data.

Ajay- Unemployment in the United States is now touching 10 % yet millions of jobs that went overseas remain there. How good or bad has the technology sector been affected by offshoring compared to other sectors.

Evan- I’m no economist, so I’m afraid I can’t offer much of an opinion regarding the impact of offshoring technology jobs. I do know that in business, there will always be a desire to build and deliver products in as cost-efficient a manner as possible. We’ve seen numerous industries expand through offshoring and outsourcing. It shouldn’t be a surprise to anyone that the technology industry is maturing in much the same manner as every other industry. We all expect companies to stay competitive through managing costs; offshoring and outsourcing is an accepted practice.

I think offshoring and outsourcing will continue to impact the technology sector. We continue to see companies reevaluate their technology investments. As some technologies mature, evolve, and become commodities, I suspect we’ll continue to see jobs associated with those technologies being outsourced.

I think the challenge to those of us that work within the technology industry is to continually invest in and grow our skills. As any industry matures, the products transition from being specialized to commodities. Look at the internet and web applications and tools. Prior to 2000, building even a simple website proved expensive and resource-intensive. Today, most anyone with fundamental PC skills can build their own website. This industry has collapsed in size – but many continue in that space because they’ve grown and expanded their skills. A look at the sophistication of today’s websites reflects this shift.

Ajay- What data solutions would you recommend for the United States government to better channel its stimulus spending?

Evan- I actually think the government has taken an interesting approach with some aspects of the current stimulus spending. I can’t remember when it was possible for any of us to quickly determine where and how federal money was distributed. Today, there are several websites that provide detailed information identifying individual projects and their related funding levels. I wish this type of detailed data was made available for federal spending related to Hurricane Katrina, or the activities in Iraq or Afghanistan. It would provide clarity to where our tax dollars go and raise visibility to the inappropriate distribution of funds.

I think it would be valuable for any and all government spending to be made available to the public in a simple online manner.

Ajay- Do you think Business Intelligence is a male dominated sector. If so, why?

Evan- I’m not sure BI is any more male-dominated than any other IT area. But I’ll say one thing: we need more women in IT. It’s not about gender-specific skills or even about unique talents. It’s just about balance and perspective. Some of my best friends are women! Seriously, I think women do bring some cultural and knowledge assets to the table that just make the overall environment better for everyone. The women who work for me are so exceptional that I should probably be working for them.

When it comes to BI, Cindi Howson—a BI thought leader in her own right and someone who knows a LOT about the vendor space—wrote a great blog post about women in BI. It mentions my partner Jill, whom you interviewed a few months ago. Jill and Cindi are only two of a stellar group of women in BI. (I’d call them “BI babes” but I’d be seriously hurt if I did that.) But I think Cindi’s blog says it all.

Ajay- What do you do to relax? How important are hobbies and family life for busy career professionals?

Evan- I’m a strong believer in the balance of work and play. My colleagues and staff at Baseline work hard with our clients. Many of our client projects require time and travel flexibility that doesn’t align with the traditional 9-to-5 world. It’s important for individuals to spend time with their friends and families – to enjoy the things that are outside of their jobs. Personally, I spend my time volunteering with the YMCA. It’s a life-long cause for me, and the source of many of my best friendships.

Ajay- What are your views on mis-selling in consulting – selling something of which you are not really an expert? Does this happen, in your opinion, in BI?

Evan- That’s a pretty interesting question. I’m sure we all know about situations where an aggressive sales person made impractical promises to address business challenges and established unrealistic expectations. Individuals are driven by their company’s incentive system. I find that when a company rewards its team members on client satisfaction and project success (instead of simply the numbers), mis-selling rarely occurs. I often recommend that our clients ask their suppliers how their sales people and client teams are rewarded. We often see those questions in RFPs.

Most of the consulting problems we see aren’t related to aggressive selling, but simply a gap between the requirements and the solution. While this sounds trite – we often find that the solution providers don’t fully understand the problem they are solving. Whether it’s because the problem wasn’t well understood (by the solution provider) or well analyzed and described (by the prospect) is sometimes impossible to determine; it’s usually a combination of both.

Preventing (or limiting) these surprises is very doable; short-term and small deliverables, frequent and thorough project reviews, and measurable acceptance criteria are a good place to start.

I’ll be the first to admit that at Baseline, we’re much better at delivery than we are at sales. This means that we don’t chase deals, but when we get them we deliver. We like to delight our clients. And our consultants really know their stuff. Because of that we have great client references, for which we’re grateful.

Biography

Evan Levy is a partner and co-founder of Baseline Consulting. Evan has spent his career leading both practitioners and executives in delivering a range of IT solutions. In addition to his executive management responsibilities at Baseline, he regularly oversees high-profile systems integration projects for key clients such as Charles Schwab, Verizon, State of Michigan, and CheckFree.

Evan also advises software vendors in the areas of product planning, and continues to counsel the executive and investment communities in applying advanced technologies to key business initiatives. Evan has been known to shave off his beard on a bet. He can whistle most of the songs in the Sesame Street oeuvre, has a thing for silicone kitchen implements, and helped design the data warehouse at one of those superstores that is inevitably coming to a neighborhood near you.

Author

Evan writes frequently for leading industry publications, focusing on the financial payback of IT investments, architectural best practices, and data integration alternatives. He is a regular online contributor to DMReview.com and SearchDataManagement.com.

Evan is also co-author of the book, Customer Data Integration (John Wiley and Sons, 2006), which describes the business breakthroughs achieved with integrated customer data, and explains how to make CDI work. Evan also writes a regular blog for Baseline http://www.evanjlevy.com/

Industry Leader

Evan has been a thought leader at major industry and vendor conferences, including the American Marketing Association, DAMA International, MDM Summit, MDM Insight, and TechTarget conferences. He is a faculty member of TDWI and delivers regular presentations on data integration alternatives. Recent seminars have focused on the application of emerging technologies and use cases for master data management and data integration solutions.

Baseline Consulting, an acknowledged leader in information design and deployment, helps companies enhance the value of their enterprise data, improve business performance, and achieve self-sufficiency in managing data as a corporate asset. Baseline provides business consulting and technical implementation services in four practice areas: Data Warehousing, Data Integration, Business Analytics, and Data Governance. Founded in 1991 and headquartered in Los Angeles, California, Baseline changes how companies leverage information. To learn more, visit Baseline’s website at www.baseline-consulting.com.

Interview James Taylor Decision Management Expert (Updated)

Quick update

James is hosting a webinar series on decision management, predictive analytics, and business rules this fall. You can check out the webinars and register for some or all at https://decisionmanagement.omnovia.com/registration

Here is an interview with James Taylor, a leading consultant and evangelist in the emerging, converging field of decision management.

Ajay- Describe your career in science. What fascinates you about reporting on this segment? How would you interest freshmen students in taking up statistics and math courses?

James- I took Geological Geophysics and Mathematics in college but graduated in a year when the oil price was in free fall, and I never worked in geophysics. Since then I have worked in computers, mostly focused on how they can be applied rather than on how they work. I am not sure I would say that this represents a career in science so much as a career enabled by science and, increasingly, watching science.

As far as math goes I actually think the problem is at the other end of the spectrum. Far too many people leave school without a feel for math – it is taught in a very narrow way and leaves far too many feeling that math is something that other people do. In a world with more and more data, and more and more statistics/data-driven decisions this is not ok. We need everyone in business to be able to consume math intelligently, even if they can’t develop mathematical models themselves. Continuing with traditional math teaching in high school and college is just excluding most people and that has to end.

Ajay- What are the various stages of evolution that you have seen in the Decision Management industry, including the jargon and names that prevailed at each stage?

James- Decision Management applications have been around for years, albeit primarily in the financial services industry. They each used to have their own category – fraud systems, origination systems, account management systems – and together these were the beginning of the category. One of the first things I did at FICO was describe all these applications as a set, recognizing that the same approach and the same cluster of technologies was being used in each case. Back in 2002, some colleagues and I started calling this approach Enterprise Decision Management. Back then most decision management was enabled by these packaged applications, and the tools that could be used to build custom applications – business rules, optimization, predictive analytics – were talked about separately.

Over the last 6 years the focus on decisions in each of these areas has increased – more people in the business rules world talk about managing decisions with rules, and there is more talk of improving operational decisions in predictive analytics and optimization circles. There is also more talk of using the tools and technologies together, and a growing range of integrated suites and platforms.

Most companies today want the kind of capabilities that can only be delivered by applying decision management techniques and technologies, but they are not yet asking for decision management by name. They want, for instance, consistent personalized offers across channels, but they are not asking for centralized decision management. Based on previous experience, I think this will change steadily over the next year or two, with the number of companies asking explicitly for decision management capabilities rising.

From a name perspective we have evolved too. Over time it has become clear that the “Enterprise” was misinterpreted as a call for enterprise-wide implementation of decision management when it was meant as a call for enterprise ownership of decisions. As a result, some folks talk about Business Decision Management, and I just like to talk about Decision Management.

Ajay- Why is Decision Management more important than, say, performance management, business intelligence, or predictive analytics?

James- I am not sure it is more important. Most organizations need business intelligence to understand what happened in their business and they need performance management to monitor what is happening now. This kind of understanding is important in successful decision management implementations. And decision management is a management discipline designed, in part, to put predictive analytics to work in operational systems.

I do think a focus on decisions is vital to all of them, however. If you don’t understand the decisions you are making, it is hard for me to see how you can judge the effectiveness of either business intelligence or performance management. And predictive analytics should be even more decision-centric if it is to be effective. So a focus on decisions is a necessary prerequisite, and the management of those decisions, using rules and analytics, is a great way to maximize their value in operational systems.

Ajay- What are your views on offshoring: 1) high-quality research, 2) labor-arbitrage technical work, and 3) purely cost-cutting-driven work?

James- Well I think offshoring is an inevitable consequence of an interconnected world. I also think that companies that offshore simply to reduce cost deserve the employee and customer loyalty they will get as a consequence!

I do think that companies should make thoughtful decisions about what to do where, when something must be handled centrally and when it can be pushed to different localities etc. I think that smarter systems – systems that manage decisions explicitly – can help in this and help companies have a real DNA when it comes to decision making.

Ajay- What are the top 5 principles of Decision Management, as you would explain them to a class of business graduates and CEOs?

James-

1. Little decisions add up

The day-to-day decisions that drive operational behavior, customer interactions, and transactional systems are more important than the big, strategic decisions beloved of management consultants. Each one seems unimportant, but they happen so often that their total value swamps anything else you do. If you get these decisions wrong, it won’t matter what you get right.

2. The purpose of information is to decide

Deming famously said that “The ultimate purpose of collecting the data is to provide a basis for action or a recommendation.” The reason you collect data, report on data, and analyze data is to make better decisions. Otherwise it’s just a cost. And unless you know which decision you are making, and what will make it a good or a bad one, then all the data in the world (and all the data management or data analysis) will not help you.

3. You cannot afford to lock up your logic

Decision-making logic – the policies, regulations, best practices, and customer preferences that drive decision making – cannot be locked up in code you cannot read or systems you do not understand. No matter what else might be handled by your IT people, business decision-making logic must not be. You must at least be able to collaborate with your IT folks and manage it with them. You must be responsible for this logic. (A short sketch of what externalized decision logic can look like follows these principles.)

4. No answer, no matter how good, is static

Organizations must realize that they have to constantly analyze, reassess, and challenge their decision-making process. The effectiveness of a decision often cannot be determined for some time, and even a good decision can be degraded by a change in the behavior of a competitor or a change in the market. As such, constant challenging of the decision-making approach – constant A/B testing or adaptive control – is essential if decisions are to remain effective. (A minimal champion/challenger sketch also follows these principles.)

5. Decision making is a process to be managed

The way you make decisions is something you must understand, document, automate, and analyze. Good managers and good staff have a good decision-making process. Good outcomes might result from luck or circumstance, but you don’t want to rely on that. Instead you want to focus on quality decision-making processes. And like many repeatable processes, automating decision making makes it easier to analyze and improve over time.
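To make principle 3 a little more concrete, here is a minimal sketch in Python of decision logic kept as readable, reviewable rules rather than buried in application code. It is purely illustrative: the rule descriptions, thresholds, and field names are hypothetical, not taken from James’s work or any particular product.

# Minimal sketch: business decision logic expressed as data that
# analysts can read and challenge, instead of hard-coded branching.
# All rule names, thresholds, and fields below are hypothetical.

# Each rule is (description, predicate, outcome); the descriptions
# alone give a business reader the policy without reading code.
CREDIT_RULES = [
    ("Decline if credit score is below the policy floor",
     lambda a: a["credit_score"] < 580, "decline"),
    ("Refer to manual review if debt-to-income is high",
     lambda a: a["dti_ratio"] > 0.45, "refer"),
    ("Approve otherwise",
     lambda a: True, "approve"),
]

def decide(applicant):
    """Return (outcome, reason) from the first matching rule."""
    for description, predicate, outcome in CREDIT_RULES:
        if predicate(applicant):
            return outcome, description
    return "refer", "no rule matched"  # defensive default

outcome, reason = decide({"credit_score": 560, "dti_ratio": 0.30})
print(outcome, "-", reason)  # decline - Decline if credit score is below the policy floor

Changing a threshold here means editing a table the business can review, not hunting through application code.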
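And to illustrate principle 4, here is a minimal champion/challenger simulation, one simple form of the A/B testing and adaptive control James describes. The strategy names, success rates, and the 10% challenger split are invented for the demo.

# Minimal champion/challenger sketch: route a small share of
# decisions to a challenger strategy and compare observed results.
# The success rates and the 10% split are invented for this demo.
import random

random.seed(42)  # reproducible demo run

TRUE_RATE = {"champion": 0.10, "challenger": 0.12}  # unknown in practice
results = {"champion": [], "challenger": []}

for _ in range(20000):
    # A fixed 10% of decisions go to the challenger strategy.
    arm = "challenger" if random.random() < 0.10 else "champion"
    results[arm].append(random.random() < TRUE_RATE[arm])

for arm, wins in results.items():
    print(arm, len(wins), round(sum(wins) / len(wins), 3))

# If the challenger consistently wins, promote it to champion and
# field a new challenger: the constant challenging described above.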

UPDATE-

James has agreed to schedule a free webinar to explain this more fully. Anyone who wants to can register at https://decisionmanagement.omnovia.com/registration/pid=74151252469530

Ajay- What does James Taylor do when not in front of a computer, at a podium, or in an airport? How important do you think work-life balance is, particularly for young people?

James- Well I am a parent, a partner and an avid reader and between them those use up most of my non-work time. I really enjoy my work which makes it hard to stop sometimes. I think life/work balance is important but so is enthusiasm for what one does. Perhaps I am kidding myself but I think there is a difference between putting a lot of hours into something about which you are passionate and putting a lot of work into something just to get ahead or to avoid the rest of your life.

Ajay- Do you think the BI world is male-dominated? What could be the reasons?

James- Yes. The usual sexism of business combined with the average age of BI people (younger groups seem more mixed in general).

Ajay- The green economy and stimulus macroeconomics: how can both of these fields benefit from Decision Management?

James- From a macro stimulus point of view I think the key thing is that governments around the world throw money at companies specializing in decision management. <smile>

The green economy, however, is more interesting. Personally, I don’t see how smart grids can be made to work without a solid core of powerful decisioning. Green marketing requires personalization and targeting to avoid waste (more decisioning), and helping consumers make better decisions about products based on green criteria needs to be built into shopping engines like Amazon’s if it is to make a real difference. Being green is all about making greener decisions, and making systems make greener decisions takes decision management and decisioning technology.

Biography-

James Taylor is a leading expert in Decision Management and an independent consultant specializing in helping companies automate and improve critical decisions. Previously James was a Vice President at Fair Isaac Corporation where he developed and refined the concept of enterprise decision management or EDM. Widely credited with the invention of the term and the best known proponent of the approach, James helped create the Decision Management market and is its most passionate advocate.

James has 20 years of experience in all aspects of the design, development, marketing, and use of advanced technology, including CASE tools and project planning and methodology tools, as well as platform development on PeopleSoft’s R&D team and consulting with Ernst and Young. He has consistently worked to develop approaches, tools, and platforms that others can use to build more effective information systems.

James is an active consultant, speaker and author. He is a prolific blogger, with regular posts at jtonedm.com and ebizq.net/blogs/decision_management. He also has an Expert Channel – Decision Management – on the BI Network.

His articles appear in industry magazines; he has contributed chapters to “The Business Rules Revolution: Doing Business the Right Way” (Happy About, 2006) and “Business Intelligence Implementation: Issues and Perspectives” (ICFAI University Press, 2006); and he is the co-author, with Neil Raden, of “Smart (Enough) Systems: How to Deliver Competitive Advantage by Automating Hidden Decisions” (Prentice Hall, 2007).

James is a highly sought-after speaker, appearing frequently at industry conferences, events, and seminars. He is also a lecturer at the University of California, Berkeley.

James has an M.S. in Business Systems Analysis and Design from City University, London; a B.S. in Geological Geophysics and Mathematics from the University of Reading, England; and a “Mini-MBA” certificate from the Silicon Valley Executive Business Program at San Jose State University.

You can contact James at james@jtonedm.com