SAS Lawsuit against WPS - Application Dismissed

I saw that Phil Rack (http://twitter.com/#!/PhilRack), whom I have interviewed before at https://decisionstats.com/2009/02/03/interview-phil-rack/ (and with whom I haven't talked since Obama won the election), mentioned this judgment and link. Phil is the creator of Bridge to R, the first SAS-language-to-R-language interface.

 

Perhaps Phil should revise the documentation of Bridge to R, lest he be sued himself!

Conclusion
It was for these reasons that I decided to dismiss SAS’s application.

From:

http://www.bailii.org/cgi-bin/markup.cgi?doc=/ew/cases/EWHC/Ch/2010/3012.html

 

Neutral Citation Number: [2010] EWHC 3012 (Ch)
Case No: HC09C03293

IN THE HIGH COURT OF JUSTICE
CHANCERY DIVISION
Royal Courts of Justice
Strand, London, WC2A 2LL
22 November 2010

B e f o r e :

THE HON MR JUSTICE ARNOLD
____________________
Between:
SAS INSTITUTE INC. Claimant
– and –

WORLD PROGRAMMING LIMITED Defendant

____________________

Michael Hicks (instructed by Bristows) for the Claimant
Martin Howe QC and Isabel Jamal (instructed by Speechly Bircham LLP) for the Defendant
Hearing date: 18 November 2010
____________________


Crown Copyright ©

MR. JUSTICE ARNOLD :

Introduction
By order dated 28 July 2010 I referred certain questions concerning the interpretation of Council Directive 91/250/EEC of 14 May 1991 on the legal protection of computer programs, which was recently codified as European Parliament and Council Directive 2009/24/EC of 23 April 2009, and European Parliament and Council Directive 2001/29/EC of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society to the Court of Justice of the European Union under Article 267 of the Treaty on the Functioning of the European Union. The background to the reference is set out in full in my judgment dated 23 July 2010 [2010] EWHC 1829 (Ch). The reference is presently pending before the Court of Justice as Case C-406/10. By an application notice issued on 11 October 2010 SAS applied for the wording of the questions to be amended in a number of respects. I heard that application on 18 November 2010 and refused it for reasons to be given later. This judgment contains those reasons.

The questions and the proposed amendments
I set out below the questions referred with the amendments proposed by SAS shown by strikethrough and underlining:

“A. On the interpretation of Council Directive 91/250/EEC of 14 May 1991 on the legal protection of computer programs and of Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 (codified version):
1. Where a computer program (‘the First Program’) is protected by copyright as a literary work, is Article 1(2) to be interpreted as meaning that it is not an infringement of the copyright in the First Program for a competitor of the rightholder without access to the source code of the First Program, either directly or via a process such as decompilation of the object code, to create another program (‘the Second Program’) which replicates by copying the functions of the First Program?
2. Is the answer to question 1 affected by any of the following factors:
(a) the nature and/or extent of the functionality of the First Program;
(b) the nature and/or extent of the skill, judgment and labour which has been expended by the author of the First Program in devising and/or selecting the functionality of the First Program;
(c) the level of detail to which the functionality of the First Program has been reproduced in the Second Program;
(d) if, the Second Program includes the following matters as a result of copying directly or indirectly from the First Program:
(i) the selection of statistical operations which have been implemented in the First Program;
(ii) the selection of mathematical formulae defining the statistical operations which the First Program carries out;
(iii) the particular commands or combinations of commands by which those statistical operations may be invoked;
(iv) the options which the author of the First Program has provided in respect of various commands;
(v) the keywords and syntax recognised by the First Program;
(vi) the defaults which the author of the First Program has chosen to implement in the event that a particular command or option is not specified by the user;
(vii) the number of iterations which the First Program will perform in certain circumstances;
(e)(d) if the source code for the Second Program reproduces by copying aspects of the source code of the First Program to an extent which goes beyond that which was strictly necessary in order to produce the same functionality as the First Program?
3. Where the First Program interprets and executes application programs written by users of the First Program in a programming language devised by the author of the First Program which comprises keywords devised or selected by the author of the First Program and a syntax devised by the author of the First Program, is Article 1(2) to be interpreted as meaning that it is not an infringement of the copyright in the First Program for the Second Program to be written so as to interpret and execute such application programs using the same keywords and the same syntax?
4. Where the First Program reads from and writes to data files in a particular format devised by the author of the First Program, is Article 1(2) to be interpreted as meaning that it is not an infringement of the copyright in the First Program for the Second Program to be written so as to read from and write to data files in the same format?
5. Does it make any difference to the answer to questions 1, 2, 3 and 4 if the author of the Second Program created the Second Program without access to the source code of the First Program, either directly or via decompilation of the object code by:
(a) observing, studying and testing the functioning of the First Program; or
(b) reading a manual created and published by the author of the First Program which describes the functions of the First Program (“the Manual”) and by implementing in the Second Program the functions described in the Manual; or
(c) both (a) and (b)?
6. Where a person has the right to use a copy of the First Program under a licence, is Article 5(3) to be interpreteding as meaning that the licensee is entitled, without the authorisation of the rightholder, to perform acts of loading, running and storing the program in order to observe, test or study the functioning of the First Program so as to determine the ideas and principles which underlie any element of the program, if the licence permits the licensee to perform acts of loading, running and storing the First Program when using it for the particular purpose permitted by the licence, but the acts done in order to observe, study or test the First Program extend outside the scope of the purpose permitted by the licence and are therefore acts for which the licensee has no right to use the copy of the First Program under the licence?
7. Is Article 5(3) to be interpreted as meaning that acts of observing, testing or studying of the functioning of the First Program are to be regarded as being done in order to determine the ideas or principles which underlie any element of the First Program where they are done:
(a) to ascertain the way in which the First Program functions, in particular details which are not described in the Manual, for the purpose of writing the Second Program in the manner referred to in question 1 above;
(b) to ascertain how the First Program interprets and executes statements written in the programming language which it interprets and executes (see question 3 above);
(c) to ascertain the formats of data files which are written to or read by the First Program (see question 4 above);
(d) to compare the performance of the Second Program with the First Program for the purpose of investigating reasons why their performances differ and to improve the performance of the Second Program;
(e) to conduct parallel tests of the First Program and the Second Program in order to compare their outputs in the course of developing the Second Program, in particular by running the same test scripts through both the First Program and the Second Program;
(f) to ascertain the output of the log file generated by the First Program in order to produce a log file which is identical or similar in appearance;
(g) to cause the First Program to output data (in fact, data correlating zip codes to States of the USA) for the purpose of ascertaining whether or not it corresponds with official databases of such data, and if it does not so correspond, to program the Second Program so that it will respond in the same way as the First Program to the same input data.
B. On the interpretation of Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society:
8. Where the Manual is protected by copyright as a literary work, is Article 2(a) to be interpreted as meaning that it is an infringement of the copyright in the Manual for the author of the Second Program to reproduce or substantially reproduce in the Second Program any or all of the following matters described in the Manual:
(a) the selection of statistical operations which have been described in the Manual as being implemented in the First Program;
(b) the mathematical formulae used in the Manual to describe those statistical operations;
(c) the particular commands or combinations of commands by which those statistical operations may be invoked;
(d) the options which the author of the First Program has provided in respect of various commands;
(e) the keywords and syntax recognised by the First Program;
(f) the defaults which the author of the First Program has chosen to implement in the event that a particular command or option is not specified by the user;
(g) the number of iterations which the First Program will perform in certain circumstances?
9. Is Article 2(a) to be interpreted as meaning that it is an infringement of the copyright in the Manual for the author of the Second Program to reproduce or substantially reproduce in a manual describing the Second Program the keywords and syntax recognised by the First Program?”

Jurisdiction
It was common ground between counsel that, although there is no direct authority on the point, it appears that the Court of Justice would accept an amendment to questions which had previously been referred by the referring court. The Court of Justice has stated that “national courts have the widest discretion in referring matters”: see Case 166/73 Rheinmühlen-Düsseldorf v Einfuhr- und Vorratsstelle für Getreide und Futtermittel [1974] ECR 33 at [4]. If an appeal court substitutes questions for those referred by a lower court, the substituted questions will be answered: Case 65/77 Razanatsimba [1977] ECR 2229. Sometimes the Court of Justice itself invites the referring court to clarify its questions, as occurred in Interflora Inc v Marks & Spencer plc (No 2) [2010] EWHC 925 (Ch). In these circumstances, there does not appear to be any reason to think that, if the referring court itself had good reason to amend its questions, the Court of Justice would disregard the amendment.

Counsel for WPL submitted, however, that, as a matter of domestic procedural law, this Court had no jurisdiction to vary an order for reference once sealed unless either there had been a material change of circumstances since the order (as in Interflora) or it had subsequently emerged that the Court had made the order on a false basis. He submitted that neither of those conditions was satisfied here. In those circumstances, the only remedy of a litigant in the position of SAS was to seek to appeal to the Court of Appeal.

As counsel for WPL pointed out, CPR rule 3.1(7) confers on courts what appears to be a general power to vary or revoke their own orders. The proper exercise of that power was considered by the Court of Appeal in Collier v Williams [2006] EWCA Civ 20, [2006] 1 WLR 1945 and Roult v North West Strategic Health Authority [2009] EWCA Civ 444, [2010] 1 WLR 487.

In Collier Dyson LJ (as he then was) giving the judgment of the Court of Appeal said:

“39. We now turn to the third argument. CPR 3.1(7) gives a very general power to vary or revoke an order. Consideration was given to the circumstances in which that power might be used by Patten J in Lloyds Investment (Scandinavia) Limited v Christen Ager-Hanssen [2003] EWHC 1740 (Ch). He said at paragraph 7:
‘The Deputy Judge exercised a discretion under CPR Part 13.3. It is not open to me as a judge exercising a parallel jurisdiction in the same division of the High Court to entertain what would in effect be an appeal from that order. If the Defendant wished to challenge whether the order made by Mr Berry was disproportionate and wrong in principle, then he should have applied for permission to appeal to the Court of Appeal. I have been given no real reasons why this was not done. That course remains open to him even today, although he will have to persuade the Court of Appeal of the reasons why he should have what, on any view, is a very considerable extension of time. It seems to me that the only power available to me on this application is that contained in CPR Part 3.1(7), which enables the Court to vary or revoke an order. This is not confined to purely procedural orders and there is no real guidance in the White Book as to the possible limits of the jurisdiction. Although this is not intended to be an exhaustive definition of the circumstances in which the power under CPR Part 3.1(7) is exercisable, it seems to me that, for the High Court to revisit one of its earlier orders, the Applicant must either show some material change of circumstances or that the judge who made the earlier order was misled in some way, whether innocently or otherwise, as to the correct factual position before him. The latter type of case would include, for example, a case of material non-disclosure on an application for an injunction. If all that is sought is a reconsideration of the order on the basis of the same material, then that can only be done, in my judgment, in the context of an appeal. Similarly it is not, I think, open to a party to the earlier application to seek in effect to re-argue that application by relying on submissions and evidence which were available to him at the time of the earlier hearing, but which, for whatever reason, he or his legal representatives chose not to employ. It is therefore clear that I am not entitled to entertain this application on the basis of the Defendant’s first main submission, that Mr Berry’s order was in any event disproportionate and wrong in principle, although I am bound to say that I have some reservations as to whether he was right to impose a condition of this kind without in terms enquiring whether the Defendant had any realistic prospects of being able to comply with the condition.’
We endorse that approach. We agree that the power given by CPR 3.1(7) cannot be used simply as an equivalent to an appeal against an order with which the applicant is dissatisfied. The circumstances outlined by Patten J are the only ones in which the power to revoke or vary an order already made should be exercised under 3.1(7).”
In Roult Hughes LJ, with whom Smith and Carnwath LJJ agreed, said at [15]:

“There is scant authority upon Rule 3.1(7) but such as exists is unanimous in holding that it cannot constitute a power in a judge to hear an appeal from himself in respect of a final order. Neuberger J said as much in Customs & Excise v Anchor Foods (No 3) [1999] EWHC 834 (Ch). So did Patten J in Lloyds Investment (Scandinavia) Ltd v Ager-Hanssen [2003] EWHC 1740 (Ch). His general approach was approved by this court, in the context of case management decisions, in Collier v Williams [2006] EWCA Civ 20. I agree that in its terms the rule is not expressly confined to procedural orders. Like Patten J in Ager-Hanssen I would not attempt any exhaustive classification of the circumstances in which it may be proper to invoke it. I am however in no doubt that CPR 3.1(7) cannot bear the weight which Mr Grime’s argument seeks to place upon it. If it could, it would come close to permitting any party to ask any judge to review his own decision and, in effect, to hear an appeal from himself, on the basis of some subsequent event. It would certainly permit any party to ask the judge to review his own decision when it is not suggested that he made any error. It may well be that, in the context of essentially case management decisions, the grounds for invoking the rule will generally fall into one or other of the two categories of (i) erroneous information at the time of the original order or (ii) subsequent event destroying the basis on which it was made. The exigencies of case management may well call for a variation in planning from time to time in the light of developments. There may possibly be examples of non-procedural but continuing orders which may call for revocation or variation as they continue – an interlocutory injunction may be one. But it does not follow that wherever one or other of the two assertions mentioned (erroneous information and subsequent event) can be made, then any party can return to the trial judge and ask him to re-open any decision…..”
In the present case there has been no material change of circumstances since I made the Order dated 28 July 2010. Nor did counsel for SAS suggest that I had made the Order upon a false basis. Counsel for SAS did submit, however, that the Court of Appeal had left open the possibility that it might be proper to exercise the power conferred by rule 3.1(7) even if there had been no material change of circumstances and it was not suggested that the order in question had been made on a false basis. Furthermore, he relied upon paragraph 1.1 of the Practice Direction to CPR Part 68, which provides that “responsibility for settling the terms of the reference lies with the English court and not with the parties”. He suggested that this meant that orders for references were not subject to the usual constraints on orders made purely inter partes.

In my judgment PD68 paragraph 1.1 does not justify exercising the power conferred by rule 3.1(7) in circumstances falling outside those identified in Collier and Roult. I am therefore very doubtful that it would be a proper exercise of the power conferred on me by CPR r. 3.1(7) to vary the Order dated 28 July 2010 in the present circumstances. I prefer, however, not to rest my decision on that ground.

Discretion
Counsel for WPL also submitted that, even if this Court had jurisdiction to amend the questions, I should exercise my discretion by refusing to do so for two reasons. First, because the application was made too late. Secondly, because there was no sufficient justification for the amendments anyway. I shall consider these points separately.

Delay
The relevant dates are as follows. The judgment was handed down on 23 July 2010, a draft having been made available to the parties a few days before that. There was a hearing to consider the form of the order, and in particular the wording of the questions to be referred, on 28 July 2010. Prior to that hearing both parties submitted drafts of the questions, and the respective drafts were discussed at the hearing. Following the hearing I settled the Order, and in particular the questions. The Order was sealed on 2 August 2010. The sealed Order was received by the parties between 3 and 5 August 2010. At around the same time the Senior Master of the Queen’s Bench Division transmitted the Order to the Court of Justice. On 15 September 2010 the Registry of the Court of Justice notified the parties, Member States and EU institutions of the reference. On 1 October 2010 the United Kingdom Intellectual Property Office advertised the reference on its website and invited comments by interested parties by 7 October 2010. The latest date on which written observations on the questions referred may be filed at the Court of Justice is 8 December 2010 (two months from the date of the notification plus 10 days extension on account of distance where applicable). This period is not extendable in any circumstances.

As noted above, the application was not issued until 11 October 2010. No justification has been provided by SAS for the delay in making the application. The only explanation offered by counsel for SAS was that the idea of proposing the amendments had only occurred to those representing SAS when starting work on SAS’s written observations.

Furthermore, the application notice requested that the matter be dealt with without a hearing. In my view that was not appropriate: the application was plainly one which was likely to require at least a short hearing. Furthermore, the practical consequence of proceeding in that way was to delay the hearing of the application. The paper application was put before me on 22 October 2010. On the same day I directed that the matter be listed for hearing. In the result it was not listed for hearing until 18 November 2010. If SAS had applied for the matter to be heard urgently, I am sure that it could have been dealt with sooner.

As counsel for WPL submitted, it is likely that the parties, Member States and institutions who intend to file written observations are now at an advanced stage of preparing those observations. Indeed, it is likely that preparations would have been well advanced even on 11 October 2010. To amend the questions at this stage in the manner proposed by SAS would effectively require the Court of Justice to re-start the written procedure all over again. The amended questions would have to be translated into all the EU official languages; the parties, Member States and EU institutions would have to be notified of the amended questions; and the time for submitting written observations would have to be re-set. This would have two consequences. First, a certain amount of time, effort and money on the part of those preparing written observations would be wasted. Secondly, the progress of the case would be delayed. Those are consequences that could have been avoided if SAS had moved promptly after receiving the sealed Order.

In these circumstances, it would not in my judgment be proper to exercise any discretion I may have in favour of amending the questions.

No sufficient justification
Counsel for WPL submitted that in any event SAS’s proposed amendments were not necessary in order to enable the Court of Justice to provide guidance on the issues in this case, and therefore there was no sufficient justification for making the amendments.

Before addressing that submission directly, I think it is worth commenting more generally on the formulation of questions. As is common ground, and reflected in paragraph 1.1 of PD68, it is well established that the questions posed on a reference under Article 267 are the referring court’s questions, not the parties’. The purpose of the procedure is for the Court of Justice to provide the referring court with the guidance it needs in order to deal with the issues before it. It follows that it is for the referring court to decide how to formulate the questions.

In my view it is usually helpful for the court to have the benefit of the parties’ comments on the wording of the proposed questions, as envisaged in paragraph 1.1 of PD68. There are two main reasons for this. The first is to try to ensure that the questions are sufficiently comprehensive to enable all the issues arising to be addressed by the Court of Justice, and thus avoid the need for a further reference at a later stage of the proceedings, as occurred in the Boehringer Ingelheim v Swingward litigation. In that case Laddie J referred questions to the Court of Justice, which were answered in Case C-143/00 [2002] ECR I-3759. The Court of Appeal subsequently concluded, with regret, that the answers to those questions did not suffice to enable it to deal with the case, and referred further questions to the Court of Justice: [2004] EWCA Civ 575, [2004] ETMR 65. Those questions were answered in Case C-348/04 [2007] ECR I-3391. The second main reason is to try to ensure that the questions are clear and free from avoidable ambiguity or obscurity.

In my experience it is not uncommon for parties addressing the court on the formulation of the questions to attempt to ensure that the questions are worded in a leading manner, that is to say, in a way which suggests the desired answer. In my view that is neither proper nor profitable. It is not proper because the questions should so far as possible be impartially worded. It is not profitable because experience shows that the Court of Justice is usually not concerned with the precise wording of the questions referred, but with their legal substance. Thus the Court of Justice frequently reformulates the question in giving its answer.

As counsel for WPL pointed out, and as I have already mentioned, in the present case the parties provided me with draft questions which were discussed at a hearing. In settling the questions I took into account the parties’ drafts and their comments on each other’s drafts, but the final wording is, for better or worse, my own.

As counsel for WPL submitted, at least to some extent SAS’s proposed amendments to the questions appear designed to bring the wording closer to that originally proposed by SAS. This is particularly true of the proposed amendment to question 1. In my judgment it would not be a proper exercise of any discretion that I may have to permit such an amendment, both because it appears to be an attempt by SAS to have the question worded in a manner which it believes favours its case and because its proper remedy if it objected to my not adopting the wording it proposed was to seek to appeal to the Court of Appeal. In saying this, I do not overlook the fact that SAS proposes to move some of the words excised from question 1 to question 5.

In any event, I am not satisfied that any of the amendments are necessary either to enable the parties to present their respective arguments to the Court of Justice or to enable the Court to give guidance on any of the issues arising in this case. On the contrary, I consider that the existing questions are sufficient for these purposes. By way of illustration, I will take the biggest single amendment, which is the proposed insertion of new paragraph (d) in question 2. In my view, the matters referred to in paragraph (d) are matters that are encompassed within paragraphs (b) and/or (c); or at least can be addressed by the parties, and hence the Court of Justice, in the context provided by paragraphs (b) and/or (c). When I put this to counsel for SAS during the course of argument, he accepted it.

Other amendments counsel for SAS himself presented as merely being minor matters of clarification. In my view none of them amount to the elimination of what would otherwise be ambiguities or obscurities in the questions.

It is fair to say that SAS have identified a small typographical error in question 2 (“interpreting” should read “interpreted”), but in my view this is an obvious error which will not cause any difficulty in the proceedings before the Court of Justice.

Conclusion
It was for these reasons that I decided to dismiss SAS’s application.

DirkE and JD swoon about Shane's MOM in Room 106 while writing R code

In a shadowy room in cyberworld, two geeks plot revenge on a common blogger and upvote each other on Stack Overflow while discussing Shane's MOM.

http://chat.stackoverflow.com/transcript/106/2010/11/15

 

How can you announce this on SO?

Oh…

Sure…go for it.

I’ll downvote it.

🙂

We should also add it into the [r] wiki.

I added it to the wiki.

We should probably try to clean that up a little; some of the other tags have put a lot of effort into it. (e.g. stackoverflow.com/tags/java/info)

Whoa! I didn’t downvote your post.

I was wondering…
Feel free to upvote it to set it back to even.

3:18 PM

I did.

Someone voted to close too.

Some people take themselves way too seriously…

Yup. And not unlike the people who constantly call for community-wiki.

BTW I didn’t see the button for CW anymore once it was posted. What am I missing?

I think that I may have seen something about a bug related to that…

Four close votes, and -2 score. Whoa Nelly.

Ha! I’m not overly surprised. Meant to suggest that you use CW…

Ironically, you’re still ahead in the rep. on this question, right? Although I think that it might get downvoted into oblivion before we’re done…

I up-voted. Dirk, I’ve got your back. 😉

 

 

Begin……

3 hours later…

8:09 PM

@DirkEddelbuettel you catch Ajay’s latest? ow.ly/3a8gK

Jeebus

I had actually unsub’ed from his feed. Now I know why. How are you doing with the Yahoo Pipes app?

Methinks he has some sort of clinical compulsive condition, given how every single post has to include a reference to his favourite software company from NC and/or members of their management team.

@DirkEddelbuettel I stumbled on that one in Twitter. pipes project has been tabled while I fight some other battles.

I think he’s fishing for SEO sugar with his posts. His use of words seems contrived to include key words over and over

Twitter is so useless; between him and Ed Borasky’s (znmeb) spambots nothing else of value appears.

I guess like so many streams it requires filtering. The basic twitter blocking takes them out pretty quickly.

So blocking is common? They ought to show that: “subscribed to N, listened to by M and showing good taste by blocking O asshats”
8:19 PM

ha! yeah that would be good signaling. Not sure how common it is, but I use it mostly for spam bots. I actually have only blocked 2 warm blooded humans (counting Ajay’s multiple accts as one person)

Dirk Eddelbuettel

Oh boy 🙂 Romain has fired a salvo on r-devel: “Depends on what your goal is: getting the job done, or learning about the R/C API”. Hehe.

@JDLong Tell who: One is Shane’s mother, and the other is … ?

JD Long

speaking of shane’s mom, he and Josh deciding to be productive members of real society today?

PAWCON - This week in London

Watch out for the Twitter hashtag news on PAWCON and the exciting agenda lined up. If you're in the City, you may want to just drop in.

http://www.predictiveanalyticsworld.com/london/2010/agenda.php#day1-7

Disclaimer: PAWCON has been a blog partner with DecisionStats (since the first PAWCON). It is vendor neutral and features open source as well as proprietary software, as well as case studies from academia and industry, for a balanced view.

 

A little birdie told me some exciting product enhancements may be in the works, including a not-yet-announced R plugin 😉, the latest SAS product using embedded analytics, and Dr Elder’s full-day data mining workshop.

Citation:

http://www.predictiveanalyticsworld.com/london/2010/agenda.php#day1-7

Monday November 15, 2010
All conference sessions take place in Edward 5-7

8:00am-9:00am

Registration, Coffee and Danish
Room: Albert Suites


9:00am-9:50am

Keynote
Five Ways Predictive Analytics Cuts Enterprise Risk

All business is an exercise in risk management. All organizations would benefit from measuring, tracking and computing risk as a core process, much like insurance companies do.

Predictive analytics does the trick, one customer at a time. This technology is a data-driven means to compute the risk each customer will defect, not respond to an expensive mailer, consume a retention discount even if she were not going to leave in the first place, not be targeted for a telephone solicitation that would have landed a sale, commit fraud, or become a “loss customer” such as a bad debtor or an insurance policy-holder with high claims.

In this keynote session, Dr. Eric Siegel will reveal:

  • Five ways predictive analytics evolves your enterprise to reduce risk
  • Hidden sources of risk across operational functions
  • What every business should learn from insurance companies
  • How advancements have reversed the very meaning of fraud
  • Why “man + machine” teams are greater than the sum of their parts for
  • enterprise decision support

 

Speaker: Eric Siegel, Ph.D., Program Chair, Predictive Analytics World



IBM
9:50am-10:10am

Platinum Sponsor Presentation
The Analytical Revolution

The algorithms at the heart of predictive analytics have been around for years – in some cases for decades. But now, as we see predictive analytics move to the mainstream and become a competitive necessity for organisations in all industries, the most crucial challenges are to ensure that results can be delivered to where they can make a direct impact on outcomes and business performance, and that the application of analytics can be scaled to the most demanding enterprise requirements.

This session will look at the obstacles to successfully applying analysis at the enterprise level, and how today’s approaches and technologies can enable the true “industrialisation” of predictive analytics.

Speaker: Colin Shearer, WW Industry Solutions Leader, IBM UK Ltd



Deloitte
10:10am-10:20am

Gold Sponsor Presentation
How Predictive Analytics is Driving Business Value

Organisations are increasingly relying on analytics to make key business decisions. Today, technology advances and the increasing need to realise competitive advantage in the market place are driving predictive analytics from the domain of marketers and tactical one-off exercises to the point where analytics are being embedded within core business processes.

During this session, Richard will share some of the focus areas where Deloitte is driving business transformation through predictive analytics, including Workforce, Brand Equity and Reputational Risk, Customer Insight and Network Analytics.

Speaker: Richard Fayers, Senior Manager, Deloitte Analytical Insight



10:20am-10:45am

Break / Exhibits
Room: Albert Suites


10:45am-11:35am
Healthcare
Case Study: Life Line Screening
Taking CRM Global Through Predictive Analytics

While Life Line is successfully executing a US CRM roadmap, they are also beginning this same evolution abroad. They are beginning in the UK where Merkle procured data and built a response model that is pulling responses over 30% higher than competitors. This presentation will give an overview of the US CRM roadmap, and then focus on the beginning of their strategy abroad, focusing on the data procurement they could not get anywhere else but through Merkle and the successful modeling and analytics for the UK.

Speaker: Ozgur Dogan, VP, Quantitative Solutions Group, Merkle Inc.

Speaker: Trish Mathe, Life Line Screening



11:35am-12:25pm
Open Source Analytics; Healthcare
Case Study: A large health care organization
The Rise of Open Source Analytics: Lowering Costs While Improving Patient Care

Rapidminer and R were number 1 and 2 in this year’s annual KDnuggets data mining tool usage poll, followed by KNIME in place 4 and Weka in place 6. So what’s going on here? Are these open source tools really that good, or is their popularity strongly correlated with lower acquisition costs alone? This session answers these questions based on a real-world case for a large health care organization and explains the risks & benefits of using open source technology. The final part of the session explains how these tools stack up against their traditional, proprietary counterparts.

Speaker: Jos van Dongen, Associate & Principal, DeltIQ Group



12:25pm-1:25pm

Lunch / Exhibits
Room: Albert Suites


1:25pm-2:15pm
Keynote
Thought Leader:
Case Study: Yahoo! and other large on-line e-businesses
Search Marketing and Predictive Analytics: SEM, SEO and On-line Marketing Case Studies

Search Engine Marketing is a $15B industry in the U.S. growing to double that number over the next 3 years. Worldwide the SEM market was over $50B in 2010. Not only is this a fast growing area of marketing, but it is one that has significant implications for brand and direct marketing and is undergoing rapid change with emerging channels such as mobile and social. What is unique about this area of marketing is a singularly heavy dependence on analytics:

 

  • Large numbers of variables and options
  • Real-time auctions/bids and a need to adjust strategies in real-time
  • Difficult optimization problems on allocating spend across a huge number of keywords
  • Fast-changing competitive terrain and heavy competition on the obvious channels
  • Complicated interactions between various channels and a large choice of search keyword expansion possibilities
  • Profitability and ROI analysis that are complex and often challenging

 

The size of the industry, its growing importance in marketing, its upcoming role in Mobile Advertising, and its uniquely heavy reliance on analytics make it particularly interesting as an area for predictive analytics applications. In this session, you will not only hear about some of the latest strategies and techniques to optimize search, you will also hear case studies that illustrate the important role of analytics from industry practitioners.

Speaker: Usama Fayyad, Ph.D., CEO, Open Insights



SAS
2:15pm-2:35pm

Platinum Sponsor Presentation
Creating a Model Factory Using in-Database Analytics

With the ever-increasing number of analytical models required to make fact-based decisions, as well as increasing audit compliance regulations, it is more important than ever that these models can be created, monitored, retuned and deployed as quickly and automatically as possible. This paper, using a case study from a major financial organisation, will show how organisations can build a model factory efficiently using the latest SAS technology that utilizes the power of in-database processing.

Speaker: John Spooner, Analytics Specialist, SAS (UK)



2:35pm-2:45pm

Session Break
Room: Albert Suites


2:45pm-3:35pm

Retail
Case Study: SABMiller
Predictive Analytics & Global Marketing Strategy

Over the last few years SABMiller plc, the second largest brewing company in the world, operating in 70 countries, has been systematically segmenting its markets in different countries globally in order to optimize its portfolio strategy and align it to its long-term country-specific growth strategy. This presentation talks about the overall methodology followed and the challenges that had to be overcome, both from a technical and from a change management standpoint, in order to successfully implement a standard analytics approach to diverse markets and diverse business positions in a highly global setting.

The session explains how country specific growth strategies were converted to objective variables and consumption occasion segments were created that differentiated the market effectively by their growth potential. In addition to this the presentation will also provide a discussion on issues like:

  • The dilemmas of static vs. dynamic solutions and standardization vs. adaptable solutions
  • Challenges in acceptability, local capability development, overcoming implementation inertia, cost effectiveness, etc
  • The role that business partners at SAB and analytics service partners at AbsolutData together play in providing impactful and actionable solutions

 

Speaker: Anne Stephens, SABMiller plc

Speaker: Titir Pal, AbsolutData



3:35pm-4:25pm

Retail
Case Study: Overtoom Belgium
Increasing Marketing Relevance Through Personalized Targeting

 

For many years, Overtoom Belgium – a leading B2B retailer and division of the French Manutan group – has focused on an extensive use of CRM. In this presentation, we demonstrate how Overtoom has integrated predictive analytics to optimize customer relationships. In this process, they employ analytics to develop answers to the key question: “which product should we offer to which customer via which channel”. We show how Overtoom gained a 10% revenue increase by replacing the existing segmentation scheme with accurate predictive response models. Additionally, we illustrate how Overtoom succeeds in delivering more relevant communications by offering personalized promotional content to every single customer, and how these personalized offers positively impact Overtoom’s conversion rates.

Speaker: Dr. Geert Verstraeten, Python Predictions



4:25pm-4:50pm

Break / Exhibits
Room: Albert Suites


4:50pm-5:40pm
Uplift Modelling:
Case Study: Lloyds TSB General Insurance & US Bank
Uplift Modelling: You Should Not Only Measure But Model Incremental Response

Most marketing analysts understand that measuring the impact of a marketing campaign requires a valid control group so that uplift (incremental response) can be reported. However, it is much less widely understood that the targeting models used almost everywhere do not attempt to optimize that incremental measure. That requires an uplift model.

This session will explain why a switch to uplift modelling is needed, illustrate what can and does go wrong when they are not used and the hugely positive impact they can have when used effectively. It will also discuss a range of approaches to building and assessing uplift models, from simple basic adjustments to existing modelling processes through to full-blown uplift modelling.

The talk will use Lloyds TSB General Insurance & US Bank as a case study and also illustrate real-world results from other companies and sectors.

 

Speaker: Nicholas Radcliffe, Founder and Director, Stochastic Solutions



5:40pm-6:30pm

Consumer services
Case Study: Canadian Automobile Association and other B2C examples
The Diminishing Marginal Returns of Variable Creation in Predictive Analytics Solutions

 

Variable Creation is the key to success in any predictive analytics exercise. Many different approaches are adopted during this process, yet there are diminishing marginal returns as the number of variables increase. Our organization conducted a case study on four existing clients to explore this so-called diminishing impact of variable creation on predictive analytics solutions. Existing predictive analytics solutions were built using our traditional variable creation process. Yet, presuming that we could exponentially increase the number of variables, we wanted to determine if this added significant benefit to the existing solution.

Speaker: Richard Boire, BoireFillerGroup



6:30pm-7:30pm

Reception / Exhibits
Room: Albert Suites


Tuesday November 16, 2010
All conference sessions take place in Edward 5-7

8:00am-9:00am

Registration, Coffee and Danish
Room: Albert Suites


9:00am-9:55am
Keynote
Multiple Case Studies: Anheuser-Busch, Disney, HP, HSBC, Pfizer, and others
The High ROI of Data Mining for Innovative Organizations

Data mining and advanced analytics can enhance your bottom line in three basic ways, by 1) streamlining a process, 2) eliminating the bad, or 3) highlighting the good. In rare situations, a fourth way – creating something new – is possible. But modern organizations are so effective at their core tasks that data mining usually results in an iterative, rather than transformative, improvement. Still, the impact can be dramatic.

Dr. Elder will share the story (problem, solution, and effect) of nine projects conducted over the last decade for some of America’s most innovative agencies and corporations:

    Streamline:

  • Cross-selling for HSBC
  • Image recognition for Anheuser-Busch
  • Biometric identification for Lumidigm (for Disney)
  • Optimal decisioning for Peregrine Systems (now part of Hewlett-Packard)
  • Quick decisions for the Social Security Administration
    Eliminate Bad:

  • Tax fraud detection for the IRS
  • Warranty Fraud detection for Hewlett-Packard
    Highlight Good:

  • Sector trading for WestWind Foundation
  • Drug efficacy discovery for Pharmacia & UpJohn (now Pfizer)

Moderator: Eric Siegel, Program Chair, Predictive Analytics World

Speaker: John Elder, Ph.D., Elder Research, Inc.

Also see Dr. Elder’s full-day workshop

 



9:55am-10:30am

Break / Exhibits
Room: Albert Suites


10:30am-11:20am
Telecommunications
Case Study: Leading Telecommunications Operator
Predictive Analytics and Efficient Fact-based Marketing

The presentation describes the major topics and issues that arise when you introduce predictive analytics and how to build a fact-based marketing environment. The introduced tools and methodologies proved to be highly efficient in terms of improving the overall direct marketing activity and customer contact operations for the involved companies. Generally, the introduced approaches have great potential for organizations with large customer bases like mobile operators, internet giants, media companies, or retail chains.

Main introduced solutions:

  • Automated serial production of predictive models for campaign targeting
  • Automated campaign measurement and tracking solutions
  • Precise product added-value evaluation

Speaker: Tamer Keshi, Ph.D., Long-term contractor, T-Mobile

Speaker: Beata Kovacs, International Head of CRM Solutions, Deutsche Telekom



11:20am-11:25am

Session Changeover


11:25am-12:15pm
Thought Leader
Nine Laws of Data Mining

Data mining is the predictive core of predictive analytics, a business process that finds useful patterns in data through the use of business knowledge. The industry standard CRISP-DM methodology describes the process, but does not explain why the process takes the form that it does. I present nine “laws of data mining”, useful maxims for data miners, with explanations that reveal the reasons behind the surface properties of the data mining process. The nine laws have implications for predictive analytics applications: how and why it works so well, which ambitions could succeed, and which must fail.

 

Speaker: Tom Khabaza, khabaza.com

 



12:15pm-1:30pm

Lunch / Exhibits
Room: Albert Suites


1:30pm-2:25pm
Expert Panel: Kaboom! Predictive Analytics Hits the Mainstream

Predictive analytics has taken off, across industry sectors and across applications in marketing, fraud detection, credit scoring and beyond. Where exactly are we in the process of crossing the chasm toward pervasive deployment, and how can we ensure progress keeps up the pace and stays on target?

This expert panel will address:

  • How much of predictive analytics’ potential has been fully realized?
  • Where are the outstanding opportunities with greatest potential?
  • What are the greatest challenges faced by the industry in achieving wide scale adoption?
  • How are these challenges best overcome?

 

Panelist: John Elder, Ph.D., Elder Research, Inc.

Panelist: Colin Shearer, WW Industry Solutions Leader, IBM UK Ltd

Panelist: Udo Sglavo, Global Analytic Solutions Manager, SAS

Panel moderator: Eric Siegel, Ph.D., Program Chair, Predictive Analytics World


2:25pm-2:30pm

Session Changeover


2:30pm-3:20pm
Crowdsourcing Data Mining
Case Study: University of Melbourne, Chessmetrics
Prediction Competitions: Far More Than Just a Bit of Fun

Data modelling competitions allow companies and researchers to post a problem and have it scrutinised by the world’s best data scientists. There are an infinite number of techniques that can be applied to any modelling task but it is impossible to know at the outset which will be most effective. By exposing the problem to a wide audience, competitions are a cost effective way to reach the frontier of what is possible from a given dataset. The power of competitions is neatly illustrated by the results of a recent bioinformatics competition hosted by Kaggle. It required participants to pick markers in HIV’s genetic sequence that coincide with changes in the severity of infection. Within a week and a half, the best entry had already outdone the best methods in the scientific literature. This presentation will cover how competitions typically work, some case studies and the types of business modelling challenges that the Kaggle platform can address.

Speaker: Anthony Goldbloom, Kaggle Pty Ltd



3:20pm-3:50pm

Breaks /Exhibits
Room: Albert Suites


3:50pm-4:40pm
Human Resources; e-Commerce
Case Study: Naukri.com, Jeevansathi.com
Increasing Marketing ROI and Efficiency of Candidate-Search with Predictive Analytics

InfoEdge, India’s largest and most profitable online firm with a bouquet of internet properties, has been Google’s biggest customer in India. Our team used predictive modeling to double our profits across multiple fronts. For Naukri.com, India’s number 1 job portal, predictive models target jobseekers most relevant to the recruiter. Analytical insights provided a deeper understanding of recruiter behaviour and informed a redesign of this product’s recruiter search functionality. This session will describe how we did it, and also reveal how Jeevansathi.com, India’s 2nd-largest matrimony portal, targets the acquisition of consumers in the market for marriage.

 

Speaker: Suvomoy Sarkar, Chief Analytics Officer, HT Media & Info Edge India (parent company of the two companies above)

 



4:40pm-5:00pm
Closing Remarks

Speaker: Eric Siegel, Ph.D., Program Chair, Predictive Analytics World



Wednesday November 17, 2010

Full-day Workshop
The Best and the Worst of Predictive Analytics:
Predictive Modeling Methods and Common Data Mining Mistakes

Click here for the detailed workshop description

  • Workshop starts at 9:00am
  • First AM Break from 10:00 – 10:15
  • Second AM Break from 11:15 – 11:30
  • Lunch from 12:30 – 1:15pm
  • First PM Break: 2:00 – 2:15
  • Second PM Break: 3:15 – 3:30
  • Workshop ends at 4:30pm

Speaker: John Elder, Ph.D., CEO and Founder, Elder Research, Inc.

 

Cloud Computing with R


Here is a short list of resources and material I put together as starting points for R and cloud computing. It’s a bit messy but overall should serve quite comprehensively.

Cloud computing is a commonly used expression for a generational change in computing, from desktops and servers to remote, massive, shared computing resources enabled by high bandwidth across the internet.

As per the National Institute of Standards and Technology definition:
“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

(Citation: The NIST Definition of Cloud Computing

Authors: Peter Mell and Tim Grance
Version 15, 10-7-09
National Institute of Standards and Technology, Information Technology Laboratory
http://csrc.nist.gov/groups/SNS/cloud-computing/cloud-def-v15.doc)

R is an integrated suite of software facilities for data manipulation, calculation and graphical display.

From http://cran.r-project.org/doc/FAQ/R-FAQ.html#R-Web-Interfaces

R Web Interfaces

Rweb is developed and maintained by Jeff Banfield. The Rweb Home Page provides access to all three versions of Rweb—a simple text entry form that returns output and graphs, a more sophisticated JavaScript version that provides a multiple window environment, and a set of point and click modules that are useful for introductory statistics courses and require no knowledge of the R language. All of the Rweb versions can analyze Web accessible datasets if a URL is provided.
The paper “Rweb: Web-based Statistical Analysis”, providing a detailed explanation of the different versions of Rweb and an overview of how Rweb works, was published in the Journal of Statistical Software (http://www.jstatsoft.org/v04/i01/).

Ulf Bartel has developed R-Online, a simple on-line programming environment for R which intends to make the first steps in statistical programming with R (especially with time series) as easy as possible. There is no need for a local installation since the only requirement for the user is a JavaScript capable browser. See http://osvisions.com/r-online/ for more information.

Rcgi is a CGI WWW interface to R by MJ Ray. It had the ability to use “embedded code”: you could mix user input and code, allowing the HTML author to do anything from load in data sets to enter most of the commands for users without writing CGI scripts. Graphical output was possible in PostScript or GIF formats and the executed code was presented to the user for revision. However, it is not clear if the project is still active.

Currently, a modified version of Rcgi by Mai Zhou (actually, two versions: one with (bitmap) graphics and one without) as well as the original code are available from http://www.ms.uky.edu/~statweb/.

CGI-based web access to R is also provided at http://hermes.sdu.dk/cgi-bin/go/. There are many additional examples of web interfaces to R which basically allow to submit R code to a remote server, see for example the collection of links available from http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/StatCompCourse.

David Firth has written CGIwithR, an R add-on package available from CRAN. It provides some simple extensions to R to facilitate running R scripts through the CGI interface to a web server, and allows submission of data using both GET and POST methods. It is easily installed using Apache under Linux and in principle should run on any platform that supports R and a web server provided that the installer has the necessary security permissions. David’s paper “CGIwithR: Facilities for Processing Web Forms Using R” was published in the Journal of Statistical Software (http://www.jstatsoft.org/v08/i10/). The package is now maintained by Duncan Temple Lang and has a web page at http://www.omegahat.org/CGIwithR/.
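To make the CGIwithR approach concrete, here is a minimal sketch of a form-handling script. It assumes the package's standard setup, in which the web server invokes the script through CGIwithR's CGI wrapper and the submitted form fields arrive in a list called formData (as described in Firth's JSS paper); the header handling and the form field name are illustrative assumptions, not a tested recipe.

# Sketch of a CGIwithR script; assumes the server runs it via the
# package's CGI wrapper, which populates 'formData' from the form.
library(CGIwithR)

cat("Content-type: text/html\n\n")  # HTTP header (the wrapper may emit this for you)
cat("<html><body>")
n <- as.numeric(formData$n)         # 'n' is a hypothetical form field
cat("<h3>Summary of", n, "standard normal draws</h3><pre>")
print(summary(rnorm(n)))
cat("</pre></body></html>")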

Rpad, developed and actively maintained by Tom Short, provides a sophisticated environment which combines some of the features of the previous approaches with quite a bit of JavaScript, allowing for a GUI-like behavior (with sortable tables, clickable graphics, editable output), etc.
Jeff Horner is working on the R/Apache Integration Project which embeds the R interpreter inside Apache 2 (and beyond). A tutorial and presentation are available from the project web page at http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/RApacheProject.

Rserve is a project actively developed by Simon Urbanek. It implements a TCP/IP server which allows other programs to use facilities of R. Clients are available from the web site for Java and C++ (and could be written for other languages that support TCP/IP sockets).
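Starting the server side is a one-liner from R; the Java and C++ client libraries from the project site then connect over TCP/IP (port 6311 by default). A minimal sketch:

# Start an Rserve server from a running R session.
library(Rserve)
Rserve(args = "--no-save")  # '--no-save' suppresses the workspace-save prompt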

OpenStatServer is being developed by a team led by Greg Warnes; it aims “to provide clean access to computational modules defined in a variety of computational environments (R, SAS, Matlab, etc) via a single well-defined client interface” and to turn computational services into web services.

Two projects use PHP to provide a web interface to R. R_PHP_Online by Steve Chen (though it is unclear if this project is still active) is somewhat similar to the above Rcgi and Rweb. R-php is actively developed by Alfredo Pontillo and Angelo Mineo and provides both a web interface to R and a set of pre-specified analyses that need no R code input.

webbioc is “an integrated web interface for doing microarray analysis using several of the Bioconductor packages” and is designed to be installed at local sites as a shared computing resource.

Rwui is a web application to create user-friendly web interfaces for R scripts. All code for the web interface is created automatically. There is no need for the user to do any extra scripting or learn any new scripting techniques. Rwui can also be found at http://rwui.cryst.bbk.ac.uk.

Finally, the R.rsp package by Henrik Bengtsson introduces “R Server Pages”. Analogous to Java Server Pages, an R server page is typically HTML with embedded R code that gets evaluated when the page is requested. The package includes an internal cross-platform HTTP server implemented in Tcl, so provides a good framework for including web-based user interfaces in packages. The approach is similar to the use of the brew package with Rapache, with the advantage of cross-platform support and easy installation.
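For a feel of the RSP markup, here is a sketch of a tiny R server page, using the scriptlet and expression tags described in the R.rsp documentation (the page content itself is an invented example):

<html><body>
  <h3>Ten random normals</h3>
  <% x <- rnorm(10) %>                  <!-- R code block, run when the page is requested -->
  <p>Their mean is <%= mean(x) %></p>   <!-- R expression, its value is inserted into the HTML -->
</body></html>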

Also, here are some additional R cloud computing use cases:
http://wwwdev.ebi.ac.uk/Tools/rcloud/

ArrayExpress R/Bioconductor Workbench

Remote access to R/Bioconductor on EBI’s 64-bit Linux Cluster

Start the workbench by downloading the package for your operating system (Macintosh or Windows), or via Java Web Start, and you will get access to an instance of R running on one of EBI’s powerful machines. You can install additional packages, upload your own data, work with graphics and collaborate with colleagues, all as if you are running R locally, but unlimited by your machine’s memory, processor or data storage capacity.

  • Most up-to-date R version built for multicore CPUs
  • Access to all Bioconductor packages
  • Access to our computing infrastructure
  • Fast access to data stored in EBI’s repositories (e.g., public microarray data in ArrayExpress)

Using R with Google Docs
http://www.omegahat.org/RGoogleDocs/run.pdf
It uses the XML and RCurl packages and illustrates that it is relatively quick and easy to use their primitives to interact with Web services.
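
A quick sketch of that kind of usage (getGoogleDocsConnection and getDocs are the package's entry points as I recall them from the RGoogleDocs documentation; the login details are obviously placeholders):

library(RGoogleDocs)
con <- getGoogleDocsConnection(login = "you@gmail.com", password = "your-password")
docs <- getDocs(con)   # fetches the list of your documents
names(docs)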

Using R with Amazon
Citation: http://rgrossman.com/2009/05/17/running-r-on-amazons-ec2/

Amazon's EC2 is a type of cloud that provides on-demand computing infrastructures called Amazon Machine Images (AMIs). In general, these types of cloud provide several benefits:

  • Simple and convenient to use. An AMI contains your applications, libraries, data and all associated configuration settings. You simply access it; you don't need to configure it. This applies not only to applications like R, but can also include any third-party data that you require.
  • On-demand availability. AMIs are available over the Internet whenever you need them. You can configure the AMIs yourself without involving the service provider. You don’t need to order any hardware and set it up.
  • Elastic access. With elastic access, you can rapidly provision and access the additional resources you need. Again, no human intervention from the service provider is required. This type of elastic capacity can be used to handle surge requirements when you might need many machines for a short time in order to complete a computation.
  • Pay per use. The cost of 1 AMI for 100 hours and 100 AMIs for 1 hour is the same (see the quick sketch below). With pay-per-use pricing, which is sometimes called utility pricing, you simply pay for the resources that you use.
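
A quick back-of-the-envelope check of that claim in R (the hourly rate is a made-up number, not an actual Amazon price):

rate_per_hour <- 0.10     # hypothetical cost of one AMI-hour, in dollars
rate_per_hour * 1 * 100   # 1 AMI for 100 hours: $10
rate_per_hour * 100 * 1   # 100 AMIs for 1 hour: also $10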

Connecting to R on Amazon EC2- Detailed tutorials
Ubuntu Linux version
https://decisionstats.com/2010/09/25/running-r-on-amazon-ec2/
and Windows R version
https://decisionstats.com/2010/10/02/running-r-on-amazon-ec2-windows/

Connecting R to Data on Google Storage and Computing on Google Prediction API
https://github.com/onertipaday/predictionapirwrapper
R wrapper for working with Google Prediction API

This package consists of a set of functions that allow the user to test the Google Prediction API from R. It requires the user to have access to both Google Storage for Developers and the Google Prediction API; see http://code.google.com/apis/storage/ and http://code.google.com/apis/predict/ for details.

Example usage:

#This example requires that you have previously created a bucket named data_language on your Google Storage and uploaded a CSV file named language_id.txt (your data) into this bucket – see for details
library(predictionapirwrapper)

Elastic-R for Cloud Computing
http://user2010.org/tutorials/Chine.html

Abstract

Elastic-R is a new portal built using the Biocep-R platform. It enables statisticians, computational scientists, financial analysts, educators and students to use cloud resources seamlessly; to work with R engines and use their full capabilities from within simple browsers; to collaborate, share and reuse functions, algorithms, user interfaces, R sessions, servers; and to perform elastic distributed computing with any number of virtual machines to solve computationally intensive problems.
Also see Karim Chine’s http://biocep-distrib.r-forge.r-project.org/

R for Salesforce.com

At the time of writing, there seem to be zero R-based apps on Salesforce.com. This could be a big opportunity for developers, as Apex and R have similar structures: developers could write free code in R and charge for their translated version in Apex on Salesforce.com.

Force.com and Salesforce have many (1009) apps at
http://sites.force.com/appexchange/home for cloud computing for
businesses, but very few forecasting and statistical simulation apps.

An example of a Monte Carlo-based app is here:
http://sites.force.com/appexchange/listingDetail?listingId=a0N300000016cT9EAI#

These are like iPhone apps, except meant for business purposes. (I am unaware of any university offering Salesforce.com integration, though Google Apps- and Amazon-related research does seem to be underway.)

Force.com uses a language called Apex, and you can see http://wiki.developerforce.com/index.php/App_Logic and http://wiki.developerforce.com/index.php/An_Introduction_to_Formulas for an introduction. Apex is similar to R in that it is object-oriented (see the small sketch below).
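
For instance, R's reference classes give it a class-and-method style broadly comparable to Apex; the tiny Account class below is invented purely for illustration:

# a minimal reference class: fields plus methods that mutate the object's state
Account <- setRefClass("Account",
  fields  = list(balance = "numeric"),
  methods = list(
    deposit = function(amount) { balance <<- balance + amount }
  ))
a <- Account$new(balance = 0)
a$deposit(100)
a$balance   # now 100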

SAS Institute has an existing product for taking in Salesforce.com data.

A new SAS data surveyor is available to access data from the Customer Relationship Management (CRM) software vendor Salesforce.com; see http://support.sas.com/documentation/cdl/en/whatsnew/62580/HTML/default/viewer.htm#datasurveyorwhatsnew902.htm

Personal note: mentioning SAS in an email to an R list is a big no-no in terms of getting a response and love. The same goes for being careless about which R help list you email (R-devel, R-packages or R-help).

For a Python-based cloud see http://pi-cloud.com

R Apache – The next frontier of R Computing

I am currently playing with/trying out RApache – one more excellent R product from Vanderbilt's excellent Dept of Biostatistics and its prodigious coder Jeff Horner.

I really liked the virtual machine idea – you can download a virtual image of Rapache and play with it; a .vmx is easy to create and great to share:

http://rapache.net/vm.html

Basically, using R Apache (with EC2 on the back end) can help you create customized dashboards, BI apps and more, all using R's graphical and statistical capabilities.

What’s R Apache?

As per

http://biostat.mc.vanderbilt.edu/wiki/Main/RapacheWebServicesReport

Rapache embeds the R interpreter inside the Apache 2 web server. By doing this, Rapache realizes the full potential of R and its facilities over the web. R programmers configure Apache by mapping Uniform Resource Locators (URLs) to either R scripts or R functions. The R code relies on CGI variables to read a client request and on R's input/output facilities to write the response.

One advantage of Rapache's architecture is robust multi-process management by Apache. In contrast to Rserve and RSOAP, Rapache is a pre-fork server utilizing HTTP as the communications protocol. Another advantage is a clear separation, a loose coupling, of R code from client code. With Rserve and RSOAP, the client must send data and R commands to be executed on the server. With Rapache the only client requirement is the ability to communicate via HTTP. Additionally, Rapache gains significant authentication, authorization, and encryption mechanisms by virtue of being embedded in Apache.
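
To make this concrete, here is a minimal sketch; the directive names follow the rApache documentation, while the URL, the file paths and the two-line script are hypothetical:

# httpd.conf sketch: load the module and map a URL to an R script
#   LoadModule R_module modules/mod_R.so
#   <Location /hello>
#     SetHandler r-handler
#     RFileHandler /var/www/R/hello.R
#   </Location>
# /var/www/R/hello.R – the script rApache runs for requests to /hello:
setContentType("text/html")    # an rApache helper, in the same family as setHeader/setCookie
cat("<html><body><p>Hello from R: 2 + 2 =", 2 + 2, "</p></body></html>")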

Existing Demos of Architecture based on R Apache-

  1. http://rweb.stat.ucla.edu/ggplot2/ An interactive web dashboard for plotting graphics based on CSV or Google Spreadsheet data
  2. http://labs.dataspora.com/gameday/ A demo visualization of a web-based dashboard of baseball pitches, by pitcher and by player
  3. http://data.vanderbilt.edu/rapache/bbplot For baseball results – a demo of a query-based web dashboard system with a very good BI feel.

What's coming next in R Apache?

You can download version 1.1.10 of rApache now. There are only two significant changes, and you don't have to edit your Apache config or change any code (just recompile rApache and reinstall):

1) Error reporting should be more informative, both when you accidentally introduce errors in the Apache config and when your code introduces warnings and errors from web requests.

I've struggled with this one for a while, not really knowing what strategy would be best. Basically, rApache hooks into the R I/O layer at such a low level that it's hard to capture all warnings and errors as they occur and present them to the user in a sane manner. In prior releases, when ROutputErrors was in effect (either the Apache directive or the R function), one would typically see a bunch of grey boxes with a red outline, titled RApache Warning/Error!!!. Unfortunately those grey boxes could contain empty lines, a single line of error, or a few lines that relate to previously displayed boxes. Really a big uninformative mess.

The new approach is to print just one warning box with the title "Oops!!! <b>rApache</b> has something to tell you. View source and read the HTML comments at the end." and then, as the title implies, you can read the HTML comment located at the end of the file, after the closing html tag. That way you're actually reading the warnings and errors as R would present them if you had executed the code at the R command prompt. And if you don't use ROutputErrors, the warning/error messages are printed in the Apache log file, just as they were before, but nicer 😉

2) Code dispatching has changed, so please let me know if I've introduced any strange behavior.

This was necessary to enhance error reporting. Prior to this release, rApache used R's C API exclusively to build up the call to your code that is then passed to R's evaluation engine. The advantage of that approach is that it's much more efficient, as there is no parsing involved; however, all information about parse errors, which files produced errors, etc. was lost. The new approach uses R's built-in parse function to build up the call and then passes it off to R. A slight overhead, but it should be negligible (a rough sketch of the idea follows below). So if you feel that this approach is too slow OR that I've introduced bugs or strange behavior, please let me know.
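
Roughly, the parse-then-eval idea looks like this in plain R (a toy illustration only, not rApache's actual internals; my_handler stands in for whatever function you mapped to a URL):

my_handler <- function() cat("hello, web\n")   # stand-in for a URL-mapped function
expr <- parse(text = "my_handler()")           # parsing retains source information, so errors can be located
eval(expr[[1]], envir = globalenv())           # hand the parsed call to R's evaluation engine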

FUTURE PLANS

I'm gaining more experience building Debian/Ubuntu packages each day, so hopefully by some time in 2011 you will be able to rely on binary releases for these distributions and not have to install rApache from source! Fingers crossed!

Development on the rApache 1.1 branch will be winding down (save bug-fix releases) as I transition to the 1.2 branch. This will involve taking out a small chunk of code that defines the rApache development environment (all the CGI variables and functions such as setHeader, setCookie, etc.) and placing it in its own R package, unnamed as of yet. This is to facilitate my development of the ralite R package, a small single-user cross-platform web server.

The goal for ralite is to speed up development of R web applications and take a bit of friction out of the development process by not having to run the full rApache server. Plus, it would allow users to develop in the rApache environment while on Windows and later deploy on more capable server environments. The secondary goal for ralite is its use in other web server environments (nginx and IIS come to mind) as a persistent per-client process.

And finally, wiki.rapache.net will be the new www.rapache.net once I translate the manual over… any day now.

From http://biostat.mc.vanderbilt.edu/wiki/Main/JeffreyHorner

Not convinced? Try the demos above.

Cisco SocialMiner


A new product from Cisco to mine social media for analytics on sentiment-

http://www.cisco.com/en/US/products/ps11349/index.html

Cisco SocialMiner is a social media customer care solution that can help you proactively respond to customers and prospects communicating through public social media networks like Twitter, Facebook, or other public forums or blogging sites. By providing social media monitoring, queuing, and workflow to organize customer posts on social media networks and deliver them to your social media customer care team, your company can respond to customers in real time using the same social network they are using.

Cisco SocialMiner provides:

  • The ability to configure multiple campaigns to search for customer postings on the public social web about your company’s products, services, or area of expertise
  • Filtering of social contacts based on preconfigured campaign filters to focus campaign searches
  • Routing of social contacts to skilled customer care representatives in the contact center or to experts in the enterprise–multiple people can work together to handle responses to customer postings through shared work queues
  • Detailed metrics for social media customer care activities, campaign reports, and team reports

With Cisco SocialMiner, your company can listen and respond to customer conversations originating in the social web. Being proactive can help your company enhance its service, improve customer loyalty, garner new customers, and protect your brand.

Table 1. Features and Benefits of Cisco SocialMiner 8.5

Product Baseline Features

Social media feeds
  • Feeds are configurable sources to capture public social contacts that contain specific words, terms, or phrases.
  • Feeds enable you to search for information on the public social web about your company's products, services, or area of expertise.
  • Cisco SocialMiner supports the following types of feeds: Facebook and Twitter.

Campaign management
  • Groups feeds into campaigns to organize all posting activity related to a product category or business objective
  • Produces metrics on campaign activity
  • Provides the ability to configure multiple campaigns to search for customer postings on specific products or services
  • Groups social contacts for handling by the social media customer care team
  • Enables filtering of social contacts based on preconfigured campaign filters to focus campaign searches

Route and queue social contacts
  • Enables routing of social contacts to skilled customer care representatives in the contact center
  • Draws on expertise in the enterprise by allowing multiple people to work together to handle responses to customer postings through shared work queues
  • Enables automated distribution of work to improve efficiency and effectiveness of social media engagement

Tagging
  • Allows work to be routed to the appropriate team by grouping each post or social contact into different categories; for example, a post can be marked with the "customer_support" tag, and will then appear on a customer support agent's queue for processing

Social media customer care metrics
  • Provides detailed metrics on social media customer care activities, campaign reports, and team reports
  • Measures work and results
  • Manages to service-level goals
  • Supports brand management
  • Optimizes staffing
  • Includes dashboarding of social media posting activity when Cisco Unified Intelligence Center is used

Reporting for social contacts
  • Provides a reporting database that can be accessed using any reporting tool, including Cisco Unified Intelligence Center
  • Enables customer care management to accurately report on and track social media interactions by the contact center

OpenSocial-compliant gadgets and Representational State Transfer (REST) application programming interfaces (APIs)
  • Provide flexible user interface options
  • Enable extensive opportunities for customization

Optional integration with the full suite of Cisco Collaboration tools
  • Allows you to take advantage of the full suite of Cisco Collaboration tools, including Cisco Quad, Cisco Show and Share, and Cisco Pulse technology, to help your social media customer care team quickly find answers to help customers efficiently and effectively
  • Easy to maintain with existing IT personnel

Operating Environment

Cisco Unified Computing System (UCS) C-Series or B-Series Servers
  • Requires a Cisco UCS C-Series or B-Series Server
  • Server consolidation means lower cost per server with Cisco UCS Servers

Architecture

Scalability
  • One server supports up to 30 simultaneous social media customer care users and 10,000 social contacts per hour

Management

Cisco Unified Real-Time Monitoring Tool (RTMT)
  • Operational management is enhanced through integration with the Cisco Unified RTMT, providing consistent application monitoring across Cisco Unified Communications solutions

Simple Network Management Protocol (SNMP)
  • SNMP with an associated MIB is supported through the Cisco Voice Operating System (VOS)

Reporting

Cisco Unified Intelligence Center
  • Create customizable reports of social media customer care events using Cisco Unified Intelligence Center (purchased separately)

Jim Goodnight on Open Source- and why he is right -sigh

Jim Goodnight – grand old man and Godfather of the Cosa Nostra of the BI/database analytics software industry – said this recently about open source in BI (btw, R is generally classed as business analytics and NOT business intelligence software, so these remarks were more apt to Pentaho and Jaspersoft):

Asked whether open source BI and data integration software from the likes of Jaspersoft, Pentaho and Talend is a growing threat, [Goodnight] said: “We haven’t noticed that a lot. Most of our companies need industrial strength software that has been tested, put through every possible scenario or failure to make sure everything works correctly.”

Quotes from Jim Goodnight are courtesy of Jason's story here:
http://www.cbronline.com/news/sas-ceo-says-cep-open-source-and-cloud-bi-have-limited-appeal

and the Pentaho follow-up reaction is here

http://bi.cbronline.com/news/pentaho-fires-back-across-sas-bows-over-limited-open-source-appeal

While you can rage and screech, here is the reality in terms of market share-

From Merv Adrian's excellent article on market share in BI:

http://www.enterpriseirregulars.com/22444/decoding-bi-market-share-numbers-%E2%80%93-play-sudoku-with-analysts/

The first chart, labeled BI Platforms, is drawn from Gartner Market Share Analysis: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009 (published May 2010) and from Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; the second chart covers the Advanced Analytics category. (The charts themselves are in Merv's article, linked above.)

So what is the performance of Talend, Pentaho and Jaspersoft?

From http://www.dbms2.com/category/products-and-vendors/talend/

It seems that Talend’s revenue was somewhat shy of $10 million in 2008.

and Talend itself says

http://www.talend.com/press/Talend-Announces-Record-2009-and-Continues-Growth-in-the-New-Year.php

Additional 2009 highlights include:

  • Achieved record revenue, more than doubling from 2008. The fourth quarter of 2009 was Talend's tenth consecutive quarter of growth.
  • Grew customer base by 140% to over 1,000 customers, up from 420 at the end of 2008. Of these new customers, over 50% are Fortune 1000 companies.
  • Total downloads reached seven million, with over 300,000 users of the open source products.
  • Talend doubled its staff, increasing to 200 global employees. Continuing this trend, Talend has already hired 15 people in 2010 to support its rapid growth.

now for Jaspersoft numbers

http://www.dbms2.com/2008/09/14/jaspersoft-numbers/

Highlights include:

  • Revenue run rate in the double-digit millions.
  • 40% sequential growth most recent quarter. (I didn’t ask whether there was any reason to suspect seasonality.)
  • 130% annual revenue growth run rate.
  • “Not quite” profitable.
  • Several hundred commercial subscribers, at an average of $25K annually per, including >100 in Europe.
  • 9,000 paying customers of some kind.
  • 100,000+ total deployments, “very conservatively,” counting OEMs as one deployment each and not double-counting for OEMs’ customers. (Nick said Business Objects quotes 45,000 deployments by the same standards.)
  • 70% of revenue from the mid-market, defined as $100 million – $1 billion revenue. 30% from bigger enterprises. (Hmm. That begs a couple of questions, such as where OEM revenue comes in, and whether <$100 million enterprises were truly a negligible part of revenue.)

and for Pentaho numbers-

http://www.dbms2.com/2009/01/27/introduction-to-pentaho/

and http://www.monash.com/uploads/Pentaho-January-2009.pdf

suggest they are far, far away from the top 5-6 vendors in BI

and a special mention for PostgreSQL – which is a non-profit project but is seriously denting Oracle/MySQL

http://www.postgresql.org/about/

Maximum Database Size: Unlimited
Maximum Table Size: 32 TB
Maximum Row Size: 1.6 TB
Maximum Field Size: 1 GB
Maximum Rows per Table: Unlimited
Maximum Columns per Table: 250–1600, depending on column types
Maximum Indexes per Table: Unlimited

and the leading vendor is EnterpriseDB, which again is an IBM partner as well as IBM-funded

http://www.sramanamitra.com/2009/05/18/enterprise-db/

and

http://www.enterprisedb.com/company/news_events/press_releases/2010_21.do

suggest it is still in early stages.

————————————————————–

So what do we conclude?

1) There is a complete lack of transparency in open source BI market shares as almost all these companies are privately held and do not disclose revenues.

2) What looks like a pure-play open source company may actually be funded by a big BI vendor (Revolution Analytics is funded, among others, by Intel and Microsoft), and EnterpriseDB has IBM as an investor. MySQL and Sun, of course, were bought by Oracle.

The degree of control proprietary vendors exert over open source vendors is still not disclosed – whether they hold a stake for strategic reasons or otherwise.

3) None of the Open Source Vendors are even close to a 1 Billion dollar revenue number.

Jim Goodnight is pointing out market reality when he says he has not seen much impact (in terms of market share). As for the rest of his remarks – well, he has a job to do as CEO, and that's to talk up his company and trash the competition, which he has been doing for three decades and is unlikely to change now unless there is a severe market-share impact. You can hardly expect him to notice companies less than 5% of his size in revenue.
