If you can sue the NSA, why can't there be a class action lawsuit against Google et al?

If the NSA can be sued for collecting data, why can't Google be sued for sharing my data with the NSA without my permission?

Any thoughts? Anyone here who knows tort law?

What did the terms and conditions of Google's policy say back then, in those good old days of quiet cooperation?

What about global liability across different jurisdictions (like the EU and India)?

 

I think there should be a lawsuit to discover more.

The truth is out there!


 

 

The first Decisionstats.com Intern

The following certificate is awarded to Chandan Routray (https://www.linkedin.com/in/chandanroutray), a second-year student at an IIT, who learnt all this and wrote some of it at https://python4analytics.wordpress.com/. The certificate shows that he was tested as a potential scientist and showed great promise in executing his task!

(certificate image)

 

Interview Christoph Waldhauser, Founder and CEO at KDSS #rstats #googleplus

Here is an interview with Christoph Waldhauser, a noted researcher and the Founder and CEO of KDSS (K Data Science Solutions), an R-based analytics advisory firm. In a generous and insightful interview, Christoph talks about perceptions around open source, academia versus startups, Europe versus North America for technology, and his own journey through it all.
Ajay Ohri (AO)- Describe your career in science. At what point did you decide to become involved in open source projects, including R?
Christoph Waldhauser (CW)- When I did my second course on quantitative social science, the software we used was SPSS. At that time, the entire social science curriculum was built around that package. There were no student versions available, only a number of computer labs on campus that had SPSS installed. I had previously switched from Windows to Linux to cut licensing costs (as a student you are constantly short of money) and to try something new. So I was willing to try the same with R. In the beginning I was quite lost, having to work with survey-weighted data and only rudimentary exposure to Perl before that. That was in a time long before RStudio, and I started out with Emacs and ESS. As you might imagine, I landed in a world full of metaphorical pain. But in due time I was able to replicate all of the things we did in class. Even more, I was able to do them much more efficiently. And while my colleagues were only interpreting SPSS output, I understood where those numbers came from. This epiphany guided me for the remainder of my scientific career. For instance, instead of Word I'd use LaTeX (back then a thing unheard of at my department), and I even made free/libre/open source software the focus of my research for my master's thesis.
Continuing down that path, I eventually started working at WU Vienna University of Economics and Business, which led me to one of the centers of R development. There, most of the everyday work revolved around Free Software. The people there had a great impact on my understanding of Free Software and its influence on how we do research.
AO- Describe your work in social media analytics including package creation for Google Plus and your publication on Twitter
CW- Social media analytics is a very exciting field. The majority of research focuses on listening to the garden hose of social media data, that is, analyzing the communication revolving around certain keywords or communities. For instance, linking real-world events to the #euromaidan hashtag in Ukraine. I tread down a different path: instead of looking at what all users have to say on a certain topic, I investigate how a certain user or class of users communicates across all topics they bring up. So instead of following a top-down approach, I chose to go bottom-up.
Starting with the smallest building blocks of a social network has a number of advantages and eventually leads to Google+. The reason behind this is that the utility of social media for Google is different from that of, say, Twitter or Facebook. While classical social media is used to engage followers, say a lottery connected to the Facebook page of a brand, Google+ has an additional purpose: enlist users to help Google produce better, more accurate search results. With this in mind, focus shifts naturally from many users on one topic to how a single user can use Google+ to optimize the message they get across and manage the search terms they are associated with.
This line of argument has fueled my research in Google+ and Twitter: Which messages resonate most with the followers of a certain user? We know that each follower aims at resharing and liking messages she deems interesting. What precisely is interesting to a follower depends on her tastes. And if that will eventually lead to a reshare or not depends also on other factors like time of day and chance. For this, I’ve created a simulation framework that is centered on the preferences of individual users and their decision to reshare a social media message. In analyses of Twitter and Google+ content, we’ve found interesting patterns. For instance, there are significant differences in the types of message that are popular among followers of the US Democrats’ and Republicans’ Twitter accounts. I’m currently investigating if these observations can also be found in Europe. In the world of brand marketing, we’ve found significant differences in the wording of messages between localized Google+ pages. For instance, different mixtures of emotions in BMW’s German and US Google+ pages are key to increased reshare rates.
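To make the bottom-up idea concrete, here is a minimal R sketch of a reshare simulation in that spirit (this is not Christoph's actual framework; the taste model, numbers and function names are illustrative assumptions):

```r
# A toy reshare simulation (illustrative only, not the actual framework):
# each follower has a taste on a one-dimensional topic axis, and the chance
# of resharing a message falls off with the distance to that taste, plus noise.
set.seed(42)

n_followers    <- 1000
follower_taste <- rnorm(n_followers)

simulate_reshares <- function(message_position, tastes, noise_sd = 0.5) {
  affinity  <- -abs(message_position - tastes) + rnorm(length(tastes), sd = noise_sd)
  p_reshare <- plogis(affinity)                       # squash affinity to a probability
  rbinom(length(tastes), size = 1, prob = p_reshare)  # 1 = reshared, 0 = ignored
}

# compare two hypothetical messages aimed at different audiences
sum(simulate_reshares(0, follower_taste))   # message near the centre of the taste distribution
sum(simulate_reshares(2, follower_taste))   # message appealing to a narrower fringe
```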
AO- What are some of the cultural differences in implementing projects in academia and in startups?
 
CW- This is a very hard question, mainly due to the fact that there is no one academia. Broadly speaking, quantitative academia can always be broken down into two classes that I like to refer to as science vs. engineering. Within this framework, science seeks to understand why something is happening, often at the cost of practical implementability. The engineering mindset, on the other hand, focuses on producing working solutions and is less concerned with understanding why something behaves the way it does. Take for instance neural networks, which are currently enjoying somewhat of a renaissance due to deep learning approaches. In science, neural networks are not really popular because they are black boxes. You can use a neural network to produce great classifiers and use them to filter, say, the pictures of cats out of a stack of pictures. But even then it is not clear what factors are the defining essence of a cat. So while engineers might be happy to have found a useful classifier, scientists will not be content. This focus on understanding precludes many technological options for science. For instance, big data analyses aim at finding patterns that hold for most cases, but accept that the patterns don't apply to every case. This is fine for engineering, but science would require theoretical explanations for each case that did not match the pattern. To me, this “rigor” has few practical benefits.
In startups, there is little place for science. The largest part of startups is financed by some sort of borrowed capital. And these lenders are only interested in a return on their investment, not in insights or enlightenment. So, to me, there are few differences between academic engineering and startups. But I find that startups that want to do science proper will have a very hard time getting off to a good start. That is not to say it's impossible, just more difficult.
AO- What do you think of the open access publishing movement as represented by http://arxiv.org/ and JSS? What are some of the costs and benefits for researchers that prevent wholesale adoption of the open access system, and how can these be addressed?
 
CW- I think it is important to differentiate open access and preprints like arxiv.org. Open access merely means that articles are accessible without paying to access them. As most research that is published has been paid for by the taxpayer anyway, it should also be freely accessible to them. Keeping information behind paywalls is a moral choice, and I think it is self-evident which side we as a scientific community should choose. I'd also question the argument of publishing houses that their services are costly. Which services? Copy-editing? Marketing stunts at conferences? I fail to see how these services are important to academia.
Turning to preprints, one must note that academic publishing is currently plagued by two flaws. One is the lack of transparency that leads to poor reviews. The other one is academia's use of publications as a quantitative indicator of academic success. This has led to a vast increase in results being submitted for publication: a publishing house that had to review hundreds of articles before is now facing thousands. Therefore, it is not uncommon today for authors to wait multiple years until a final decision has been made by the editors. And the longer the backlog of articles gets, the lower the quality of the reviews will become. This is unacceptable.
Preprints are a way around this deadlock. Findings can be accessed by fellow researchers even before a formal review has been completed. In an ideal world with impeccable review quality, this would lead to a watering down of the quality of research being available. This certainly poses a risk, but today's reviews are far from flawless. More often than not, reviews fail to discover obvious flaws in research design, and barely ever do reviewers check if the data actually lead to the results published. So, whether relying on preprints or on reviewed articles, researchers always need to use common sense anyway: if five independent research groups come to the same conclusion, the papers are likely to be solid. This heuristic is somewhat similar to Wikipedia: it might not always be correct, but most of the time it is.
AO- What are some of the differences that you have encountered in the ecosystem of funding, research, and development, both in academia and in tech startups, when comparing Europe with North America?
 
CW- Living and working in a country that is increasingly being affected by the aftershocks of the Great Recession of 2007–2009 has left me somewhat disillusioned. In the face of the economic problems in Europe at the moment, most public funding has come to a full stop. Private capital is still somewhat available, but here too risk management has led to an increased focus on business plans and marketability. As pointed out above, this is less of a problem for engineering approaches (even though writing convincing business plans is challenging to scientists and engineers alike). But it is outright deadly to science. From what I see, North America has a different tradition. There, engineering generates so much revenue that part of that revenue goes back to science. An attitude we certainly lack in Europe is what Tim O'Reilly terms the makers' mindset. We could use some more of that.
AO- In enterprise software, people often pay more for software that is worse than its open source alternatives. What are your thoughts on this?
 
CW- I've just had an interesting discussion with the head of a credit risk unit in a major bank. The unit is switching from SAS to SPSS for modeling credit risk. R, an equally capable or perhaps even superior free software solution, was not even considered. The rationale behind that is simple: in case the software is faulty, there is a company to blame and hold liable. Free software in general does not have software companies that back it in that way. So this appears to be the reason behind the psychological barrier to using free software. But I think it is a false security. Suppose every bank in the world uses either SAS or SPSS for credit risk modeling. And at one point, a fatal flaw in one of those two packages is discovered. This flaw is then likely to affect most of the banks that chose it. So within 24 hours of that flaw being discovered, the company backing the product will have to file for bankruptcy. It is somewhat ironic that the people responsible for credit risk management don't see that the high correlation introduced by all banks relying on the same software company does not mitigate but greatly inflates their risk.
For example, some years ago a SAS executive said she'd feel more comfortable flying in a plane that had been developed using closed source rather than open source software, because closed source would provide higher quality. That line of argument has been thoroughly refuted. However, there is some truth in the fact that an investor might be more comfortable putting money into an aircraft company that relies on closed source software, for reasons of liability. Should a plane go down because of a closed source software bug, then the software company and not the aircraft company could be held liable. So any lawsuits against the aircraft company would be redirected to the software company. But at the end of the day, again, the software company will go out of business, leaving the aircraft company with the damage nonetheless.
So the trade-off between expensive, poorly designed or implemented closed source software and superior free software is made on questions of liability. But the truth is that this is a false sense of security. I would therefore always argue in favor of free software.
KDSS is a bleeding-edge data science advisory firm. You can see more of Christoph's work at https://github.com/tophcito

Interview Heiko Miertzsch eoda #rstats #startups

This is an interview with Heiko Miertzsch, founder of eoda (http://www.eoda.de/en/). eoda is a cutting-edge startup; recently they launched a few innovative products that made me sit up and pay attention. In this interview, Heiko talks about the startup journey and his vision of analytics.

DecisionStats (DS)- Describe the journey of your startup eoda. What made you choose R as the platform for your software and training? Name a few turning points and milestones in your corporate journey.

Heiko Miertzsch (HM)- eoda was founded in 2010 by Oliver and me. We both have a strong background in analytics and the information technology industry, so we observed the market for a while before starting the business. We saw two trends: first, a lot of new technologies and tools for data analysis appeared, and second, open source seemed to become more and more important for several reasons; just to name one, the ease of sharing experience and code in a broad and professional community. Disruptive forces seemed to be changing the market, and we just didn't want to back the wrong horse.

From the beginning we tested R and we were enthusiastic. We started choosing R for our projects, software development and services, and built up a training program for R. Already in 2010 we believed that R had a successful future ahead of it. It was more flexible than other statistical languages, more powerful with respect to functionality, you could integrate it into an existing environment, and much more.

DS- You make both software products and services. What are the challenges in marketing both?

HM- We do even more: we provide consulting, training, individual software, software customization and services. It is pure fun for us to go to our customers and say, "Hey, we can help you solve your analytical problems, no matter what kind of service you want to buy, what kind of infrastructure you use, whether you want to learn about random forests or buy a SaaS solution to predict your customers' revenues." In a certain way we don't see barriers between these delivery models, because analytics is our basis. First of all, we focus on the analytical problem of our customer, and then we find the ideal solution together with the customer.

DS- Describe your software tableR. How does it work, what is the pricing, and what is the benefit to the user? Name a few real-life examples of usage, if available.

HM- Today the process of collecting data, analyzing it and presenting the results is characterized by the use of a heterogeneous software environment with many tools, file formats and manual processing steps. tableR supports the entire process in one single solution: designing a questionnaire, sharing a structured format with the CAXI software, importing the data, doing the analysis and plotting the table report. The base report comes with just one click, and if you want to go into more detail you can enhance your analysis with your own R code.

tableR is in a closed beta at the moment, and the open beta will start in the next few weeks.

(It is available at http://www.eoda.de/en/tableR.html)
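tableR's own interface is not shown here, but the kind of one-click table report it automates can be approximated in plain R. This is a minimal sketch; the survey data frame and its column names are invented for illustration:

```r
# A rough base-R approximation of a one-click table report (not tableR's API);
# the survey data frame and its columns are invented for this illustration.
set.seed(1)
survey <- data.frame(
  gender    = sample(c("female", "male"), 200, replace = TRUE),
  age_group = sample(c("18-29", "30-49", "50+"), 200, replace = TRUE),
  satisfied = sample(c("yes", "no"), 200, replace = TRUE)
)

table(survey$satisfied)                              # simple frequency table
round(100 * prop.table(table(survey$age_group,
                             survey$satisfied),
                       margin = 1), 1)               # row % within each age group
```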

DS- Describe your software translateR (http://www.eoda.de/en/translateR.html). How does it work, what is the pricing, and what is the benefit to the user? Name a few real-life examples of usage, if available.

HM- Many companies have realized the advantages of the open source programming language R. translateR allows a fast and inexpensive migration to R, currently from SPSS code.

The manual migration of complex SPSS® scripts has always been tedious and error-prone. translateR helps here, and the task of translating by hand becomes a thing of the past. The beta test of translateR will also start in the next few weeks.
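To illustrate the kind of work translateR automates, here is a hand-written example of how two common SPSS commands map to base R. This is not translateR's actual output; the file and variable names are made up:

```r
# Two common SPSS commands and a hand-written base-R equivalent
# (illustrative only, not translateR output; file and variable names are invented).
#
#   SPSS:  FREQUENCIES VARIABLES=gender /ORDER=ANALYSIS.
#   SPSS:  DESCRIPTIVES VARIABLES=income /STATISTICS=MEAN STDDEV MIN MAX.

dat <- read.csv("survey.csv")   # hypothetical input file

table(dat$gender)               # frequency table for gender

c(mean = mean(dat$income, na.rm = TRUE),
  sd   = sd(dat$income,   na.rm = TRUE),
  min  = min(dat$income,  na.rm = TRUE),
  max  = max(dat$income,  na.rm = TRUE))
```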

DS- How do you think we can use R on the cloud for better analytics?

HM- Well, R seems to bring together the best "data scientists" of the world, with all their different focuses on different methods, vertical knowledge, technical experience and more. The cloud is a great workplace: it holds the data, a lot of data, and it offers a technical platform with computing power. If we succeed in bringing these two aspects together, we could provide a lot of knowledge to solve a lot of problems, with both individual and global impact.

DS- What advantages and disadvantages does working on the cloud give to a R user?

HM- In terms of R, I don't see any aspects beyond those of using the cloud in general.

DS- Startup life can be hectic; what do you do to relax?

HM- Oliver and I both have families, so eoda is our time to relax, just fun. I guess we do the same typical things as others: Oliver plays soccer and goes running. I like any kind of endurance sport and go climbing; the first gives my thoughts free space, the second trains me to focus on a concrete target.

About-

translateR is the new service from Germany-based R specialist eoda, which helps users translate SPSS® code to R automatically. translateR is developed in cooperation with the University of Kassel and financially supported by the LOEWE program of the state of Hessen. translateR will be available as a cloud service and as a desktop application.

eoda offers consulting, software development and training for analytical and statistical questions. eoda is focused on R and specializes in integrating R into existing software environments.

 

Big Data Shoes

The internet is a ponderful and wonderful place for serendipity

 

Interview Tobias Verbeke Open Analytics #rstats #startups

Here is an interview with Tobias Verbeke, Managing Director of Open Analytics (http://www.openanalytics.eu/). Open Analytics is doing cutting edge work with R in the enterprise software space.
Ajay- Describe your career journey including your involvement with Open Source and R. What things enticed you to try R?

Tobias- I discovered the Free Software Foundation while still at university and spent wonderful evenings configuring my GNU/Linux system and reading RMS essays. For the statistics classes proprietary software was proposed, and that was obviously not an option, so I started tackling all problems using R, which was at the time (around 2000) still an underdog, together with pspp (a command-line SPSS clone) and xlispstat. From that moment on, I decided that R was the hammer and all problems to be solved were nails ;-) In my early career I worked as a statistician / data miner for a general consulting company, which gave me the opportunity to bring R into Fortune 500 companies and learn what was needed to support its use in an enterprise context. In 2008 I founded Open Analytics to turn these lessons into practice, and we started building tools to support the data analysis process using R. The first big project was Architect, which started as an Eclipse-based R IDE but has evolved more and more into an IDE for data science in general. In parallel we started working on infrastructure to automate R-based analyses and to plug R (and therefore statistical logic) into larger business processes, and soon we had a tool suite to cover the needs of industry.

Ajay- What is RSB all about? What needs does it satisfy, and who can use it?

Tobias- RSB stands for the R Service Bus; it is communication middleware and a work manager for R jobs. It allows you to trigger R jobs and receive their results using a plethora of protocols such as RESTful web services, e-mail protocols, SFTP, folder polling, etc. The idea is to enable people to push a button (or software to make a request) and receive automated R-based analysis results or reports for their data.
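As an illustration of the RESTful pattern Tobias describes, a client could trigger an R job and fetch the result with a few lines of R. The endpoint URL and payload below are hypothetical, not RSB's documented API:

```r
# Hypothetical sketch of the REST pattern: push data to an endpoint, get an
# R-generated result back. The URL and payload are invented, not RSB's real API.
library(httr)

response <- POST(
  "http://rsb.example.com/api/jobs/my-analysis",   # hypothetical RSB-style endpoint
  body   = upload_file("input-data.csv"),          # the data to be analysed
  encode = "multipart"
)

content(response)   # the automated analysis result (or a job identifier to poll)
```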

Ajay- What is your vision, and what have been the challenges and learnings so far in the project?

Tobias- RSB started when automating toxicological analyses in the pharmaceutical industry in collaboration with Philippe Lamote. Together with David Dossot, an exceptional software architect in Vancouver, we decided to cleanly separate concerns, namely to separate the integration layer (RSB) from the statistical layer (R) and, likewise, from the application layer. As a result, any arbitrary R code can be run via RSB, and any client application can interact with RSB as long as it can talk one of the many supported protocols. This fundamental design principle makes us different from alternative solutions, where statistical logic and integration logic are always somehow interwoven, which results in maintenance and integration headaches. One of the challenges has been to keep focus on the core task of automating statistical analyses and not to deviate into features that would turn RSB into a tool for interacting with an R session, which deserves an entirely different approach.

Ajay- Computing seems to be moving to a heterogeneous cloud, server and desktop model. What do you think about R and cloud computing, current and future?

Tobias- From a freedom perspective, cloud computing and the SaaS model are often a step backwards, but in our own practice we obviously follow our customers' needs and offer RSB hosting from our data centers as well. Also, our other products, e.g. the R IDE Architect, are ready for the cloud and for use on servers via Architect Server. As far as R itself is concerned in relation to cloud computing, I foresee its use increasing. At Open Analytics we see an increasing demand for R-based statistical engines that power web applications living in the cloud.

Ajay- You recently released RSB version 6. What are the new features? What is the planned roadmap going forward?

Tobias- RSB 6.0 is all about large-scale production environments and strong security. It kicked off on a project where RSB was responsible for spitting out 8,500 predictions per second. Such large-scale production deployments of RSB motivated the development of a series of features. First of all, RSB was made lightning fast: we achieved a full round trip from REST call to prediction in 7 ms on the mentioned use case. In order to allow for high throughput, RSB also gained a synchronous API (RSB 5.0 had an asynchronous API only). Another new feature is the availability of client-side connection pooling to the pool manager of R processes that are ready to serve RSB. Besides speed, this type of production environment also needs monitoring and resilience in case of issues. For the monitoring, we made sure that everything is in place for monitoring and remotely querying not only the RSB application itself, but also the pool of R processes managed by RServi.

 

(Note from Ajay- RJ is an open source library providing tools and interfaces to integrate R into Java applications. The RJ project also provides a pool for R engines, easy to set up and manage via a web interface or JMX. One or multiple clients can borrow the R engines (called RServi); see http://www.walware.de/it/rj/ and https://github.com/walware/rj-servi)
Also, we now allow users to define node validation strategies to check that R nodes are still functioning properly. If not, the nodes are killed and new nodes are started and added to the pool. In terms of security, we are now able to cover a very wide spectrum of authentication and authorization. We have machines up and running using OpenID, basic HTTP authentication, LDAP, SSL client certificates, etc., to serve everyone from the individual user who is happy with OpenID authentication for his RSB app to large investment banks that have very strong security requirements. The next step is to provide tighter integration with Architect, so that people can release new RSB applications without leaving the IDE.

Ajay- How does the startup ecosystem in Europe compare with, say, the SF Bay Area? What are some of the good things and not-so-great things?

Tobias- I do not feel qualified to answer such a question, since I founded a single company in Antwerp, Belgium. That being said, Belgium is great! :-)

Ajay- How can we popularize STEM education using MOOCs, training, etc.?

Tobias- Free software. Free as in beer and as in free speech!

Ajay- Describe the open source ecosystem in general, and the R ecosystem in particular, for Europe. How does it compare with other locations, in your opinion?

Tobias- Open source is probably a global ecosystem and crosses oceans very easily. Dries Buytaert started off Drupal in Belgium and now operates from the US, interacting with a global community. From a business perspective, there are as many open source models as there are open source companies. I noticed that the major US R companies (Revolution Analytics and RStudio) cherished the open source philosophy initially, but both drifted into models combining open source and proprietary components. At Open Analytics, there are only open source products, and enterprise customers have access to exactly the same functionality as a student may have in a developing country. That being said, I don't believe this is a matter of geography; it has more to do with the origins and different strategies of the companies.

Ajay- What do you do for work-life balance and de-stressing when not shipping code?

Tobias- In a previous life the athletics track helped keep my hands off the keyboard. Currently, my children find very effective ways to achieve similar goals.

About-

Open Analytics is a consulting company specializing in statistical computing using open technologies. You can read more about it at http://www.openanalytics.eu

How cheap is cloud computing anyway?

So I wanted to really find out how cheap the cloud was, but I got confused by the 23 kinds of instances that Amazon has (http://aws.amazon.com/ec2/pricing/) and the 15 kinds of instances at https://developers.google.com/compute/pricing.

or whether there is any price collusion between them ;)

Now Amazon has spot pricing, so I can bid for prices as well (http://aws.amazon.com/ec2/purchasing-options/spot-instances/), and up to 60% off for reserved instances (http://aws.amazon.com/ec2/purchasing-options/reserved-instances/), but it charges a $2 per hour regional fee for dedicated instances (which are otherwise still billed pay as you go).

Dedicated Per Region Fee

  • $2 per hour – An additional fee is charged once per hour in which at least one Dedicated Instance of any type is running in a Region.

Google has sustained use discounts (though it will not offer Windows on the cloud!)

The table below describes the discount at each usage level. These discounts apply for all instance types.

Usage level (% of month) | % of base rate charged for incremental usage | Example incremental rate (USD/hour) for an n1-standard-1 instance
0%-25% | 100% of base rate | $0.07
25%-50% | 80% of base rate | $0.056
50%-75% | 60% of base rate | $0.042
75%-100% | 40% of base rate | $0.028
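Taking these tiers at face value, an n1-standard-1 that runs for the whole month pays a blended rate; here is a quick sanity check in R using only the numbers quoted above:

```r
# Effective hourly rate for an n1-standard-1 running 100% of the month,
# using the four tier rates quoted above (each tier covers 25% of the month).
tier_rates <- c(0.07, 0.056, 0.042, 0.028)
blended    <- mean(tier_rates)        # $0.049 per hour
discount   <- 1 - blended / 0.07      # 30% off the base rate
c(blended = blended, discount = discount)
```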

 

Anyway, I tried to create this simple table to help me with it. After all, hard disks are cheap; it is memory I want on the cloud!

Or maybe I am wrong and the cloud is not so cheap. Or it's just too complicated for someone to build a pricing calculator that can take in prices from all providers (Amazon, Azure, Google Compute) and show us the money!

Instance | vCPU | RAM (GiB) | $/Hour (Linux) | Type | Provider | Notes
t2.micro | 1 | 1 | $0.01 | General Purpose – Current Generation | Amazon (North Virginia) | Amazon also has spot instances that can lower prices
t2.small | 1 | 2 | $0.03 | General Purpose – Current Generation | Amazon (North Virginia)
t2.medium | 2 | 4 | $0.05 | General Purpose – Current Generation | Amazon (North Virginia)
m3.medium | 1 | 3.75 | $0.07 | General Purpose – Current Generation | Amazon (North Virginia)
m3.large | 2 | 7.5 | $0.14 | General Purpose – Current Generation | Amazon (North Virginia)
m3.xlarge | 4 | 15 | $0.28 | General Purpose – Current Generation | Amazon (North Virginia)
m3.2xlarge | 8 | 30 | $0.56 | General Purpose – Current Generation | Amazon (North Virginia)
c3.large | 2 | 3.75 | $0.11 | Compute Optimized – Current Generation | Amazon (North Virginia)
c3.xlarge | 4 | 7.5 | $0.21 | Compute Optimized – Current Generation | Amazon (North Virginia)
c3.2xlarge | 8 | 15 | $0.42 | Compute Optimized – Current Generation | Amazon (North Virginia)
c3.4xlarge | 16 | 30 | $0.84 | Compute Optimized – Current Generation | Amazon (North Virginia)
c3.8xlarge | 32 | 60 | $1.68 | Compute Optimized – Current Generation | Amazon (North Virginia)
g2.2xlarge | 8 | 15 | $0.65 | GPU Instances – Current Generation | Amazon (North Virginia)
r3.large | 2 | 15 | $0.18 | Memory Optimized – Current Generation | Amazon (North Virginia)
r3.xlarge | 4 | 30.5 | $0.35 | Memory Optimized – Current Generation | Amazon (North Virginia)
r3.2xlarge | 8 | 61 | $0.70 | Memory Optimized – Current Generation | Amazon (North Virginia)
r3.4xlarge | 16 | 122 | $1.40 | Memory Optimized – Current Generation | Amazon (North Virginia)
r3.8xlarge | 32 | 244 | $2.80 | Memory Optimized – Current Generation | Amazon (North Virginia)
i2.xlarge | 4 | 30.5 | $0.85 | Storage Optimized – Current Generation | Amazon (North Virginia)
i2.2xlarge | 8 | 61 | $1.71 | Storage Optimized – Current Generation | Amazon (North Virginia)
i2.4xlarge | 16 | 122 | $3.41 | Storage Optimized – Current Generation | Amazon (North Virginia)
i2.8xlarge | 32 | 244 | $6.82 | Storage Optimized – Current Generation | Amazon (North Virginia)
hs1.8xlarge | 16 | 117 | $4.60 | Storage Optimized – Current Generation | Amazon (North Virginia)
n1-standard-1 | 1 | 3.75 | $0.07 | Standard | Google (US) | Google charges per minute of usage (subject to a minimum of 10 minutes)
n1-standard-2 | 2 | 7.5 | $0.14 | Standard | Google (US)
n1-standard-4 | 4 | 15 | $0.28 | Standard | Google (US)
n1-standard-8 | 8 | 30 | $0.56 | Standard | Google (US)
n1-standard-16 | 16 | 60 | $1.12 | Standard | Google (US)
n1-highmem-2 | 2 | 13 | $0.16 | High Memory | Google (US)
n1-highmem-4 | 4 | 26 | $0.33 | High Memory | Google (US)
n1-highmem-8 | 8 | 52 | $0.66 | High Memory | Google (US)
n1-highmem-16 | 16 | 104 | $1.31 | High Memory | Google (US)
n1-highcpu-2 | 2 | 1.8 | $0.09 | High CPU | Google (US)
n1-highcpu-4 | 4 | 3.6 | $0.18 | High CPU | Google (US)
n1-highcpu-8 | 8 | 7.2 | $0.35 | High CPU | Google (US)
n1-highcpu-16 | 16 | 14.4 | $0.70 | High CPU | Google (US)
f1-micro | 1 | 0.6 | $0.01 | Shared Core | Google (US)
g1-small | 1 | 1.7 | $0.04 | Shared Core | Google (US)
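Since memory is what I want, one way to read the table is dollars per GiB of RAM per hour; here is a small R sketch comparing a few rows from the table above:

```r
# Dollars per GiB of RAM per hour for a few rows of the table above,
# to compare providers on the "memory is what I want" criterion.
instances <- data.frame(
  type  = c("m3.xlarge", "r3.xlarge", "n1-standard-4", "n1-highmem-4"),
  ram   = c(15, 30.5, 15, 26),          # GiB, from the table
  price = c(0.28, 0.35, 0.28, 0.33)     # USD per hour, from the table
)
instances$usd_per_gib_hour <- round(instances$price / instances$ram, 4)
instances[order(instances$usd_per_gib_hour), ]
```

On this crude measure, the memory-optimized families (Amazon's r3 and Google's n1-highmem) come out cheapest per GiB, which is what I was hoping to see.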
