Creating an Anonymous Bot

or Surfing the Net Anonymously and Having Some Fun.

On the weekend, while browsing through http://freelancer.com, I came across an intriguing offer-

http://www.freelancer.com/projects/by-job/YouTube.html

Basically, projects asking for increased YouTube views-

Hmm.Hmm.Hmm

So this is one way I thought it could be done-

1) Create an IP Address Anonymizer

That's pretty simple- I used the Tor Project at http://www.torproject.org/easy-download.html.en

Basically it routes your traffic through a network of volunteer relays (onion routing), and you can reset the circuit whenever you want- so it hides your IP address.

Also useful for sending hatemail. Limitations: it works with the Firefox browser only, and webpages keep switching their default language as your apparent IP address changes.

Note-

The Tor Project is a 501(c)(3) non-profit based in the United States. The official address of the organization is:

The Tor Project
969 Main Street, Suite 206
Walpole, MA 02081 USA
Check your IP address at http://www.whatismyip.com/

2) Create a Bot, or automatic clicking code (without knowing how to code)

Go to https://addons.mozilla.org/en-US/firefox/addon/3863/ (the iMacros extension for Firefox)

Remember when you could create an Excel Macro by just recording the Macro (in Excel 2003)?

So while surfing, if you need to do something again and again (like going to the same YouTube video and clicking Like 5000 times), you can press Record Macro-

  • Do the action you want repeated again and again.
  • Click Save Macro.
  • Now run the Macro in a loop using the iMacros extension.

see screenshot below-

Note- I have added two lines of code: WAIT SECONDS=6

This means that every time the code runs in a loop, it will wait for 6 seconds and then reload.

However, I recommend you create a random number of wait seconds using a Google Spreadsheet: the function RANDBETWEEN(5,400) limits the wait to between 5 and 400 seconds, and CONCATENATE with click-and-drag creates the RANDOM wait lines for you (instead of typing them, say, 500 times yourself).

see https://spreadsheets.google.com/ccc?key=tr18JVEE2TmAuH5V8fzJLRA#gid=0
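If you would rather not build the wait times in a spreadsheet, here is a minimal sketch in R that does the same job- it generates randomized WAIT lines that you can paste into the recorded macro. (The number of loops and the output file name are my own illustrative choices.)

```r
# Generate iMacros-style WAIT lines with random pauses between 5 and 400
# seconds, mimicking the RANDBETWEEN(5,400) + CONCATENATE spreadsheet trick.
set.seed(123)                                # reproducible randomness
n_loops    <- 50                             # how many wait lines you want
waits      <- sample(5:400, n_loops, replace = TRUE)
wait_lines <- paste0("WAIT SECONDS=", waits)
writeLines(wait_lines, "random_waits.txt")   # paste these into the macro
head(wait_lines)
```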

That’s it – Your Anonymous Bot is ready.

See the  analytical results for my personal favourite Streaming Poetry video http://www.youtube.com/watch?v=a5yReaKRHOM

Easy, isn't it? Lines of code written = 0, number of views = 335 (before I grew bored).

Note- Officially it is against the YouTube Terms (http://www.youtube.com/t/terms) to use scripts or bots, so I did this for research purposes only. And http://Freelancer.com needs to look into the activities underway at http://www.freelancer.com/projects/by-job/YouTube.html and also http://www.freelancer.com/projects/by-job/Facebook.html and http://www.freelancer.com/projects/by-job/Social-Networking.html

The final word on these activities is by http://xkcd.com

Aster Data hires Quentin Gallivan as CEO

Aster Data formally marked phase 2 of its rapid growth story by bringing in as new CEO Quentin Gallivan (of Postini, before it was sold to Google, and also of PivotLink).

Founders (and Stanfordians) Mayank Bawa stays on as Chief Customer Officer and Tasso Argyros as CTO. It has a very déjà vu feel- like Eric Schmidt coming in as CEO of Google in the glory days past. Indeed the investment teams in Google and Aster Data are quite similar, and so are the backgrounds of the founders.

Aster Data of course creates the leading MapReduce solution (MapReduce was originally described by Google) for providing BI infrastructure for big data, and has been rapidly expanding into new frontiers for Big Data.

Aster Data Appoints New Chief Executive Officer

Quentin Gallivan Joins Aster Data as CEO to Lead Company to Next Level of Growth

San Carlos, CA – September 9, 2010 – Aster Data, a proven leader dedicated to providing the best data management and data processing platform for big data management and analytics, today announced the appointment of Quentin Gallivan as President and CEO. Gallivan brings more than 20 years of senior executive experience to the leading analytics and database company. With Aster Data achieving tremendous growth in the past year, Gallivan will take Aster Data to the next level, further accelerating its market leadership, sales, channel partnerships and international expansion. Founding CEO Mayank Bawa, who grew the company from its inception based on the founders’ research at Stanford University, and whose passion is helping customers uniquely unlock the value of their data, will take on the role of Chief Customer Officer. Bawa, in his new role, will lead the Company’s organization devoted to ensuring the success, longevity and innovation of its fast-growing customer base. Together, Gallivan and Bawa, along with co-founder and Chief Technology Officer, Tasso Argyros, will deliver on the Company’s mission to help customers discover more value from their data, achieve deep insights through rich analytics and do more with their massive data volumes than has ever been possible.

Gallivan joins Aster Data with over 20 years of leadership experience in the high-tech industry and has held a variety of CEO and senior executive positions with leading technology companies. Before joining Aster Data, Gallivan served as CEO at PivotLink, the leading provider of business intelligence (BI) solutions delivered via Software as a Service (SaaS), where he rapidly grew the company to over 15,000 business users, from mid-sized companies to Fortune 1000 companies, across key industries including financial services, retail, CPG manufacturing and high technology. Prior to PivotLink, Gallivan served as CEO of Postini, where he scaled the company to 35,000 customers and over 10 million users until its eventual acquisition by Google in 2007. Gallivan also served as executive vice president of worldwide sales and services at VeriSign, where he was instrumental in growing the business from $20 million to $1.2 billion and was responsible for the design and execution of the global distribution strategy for the company’s security and services business. Gallivan also held a number of key executive and leadership positions at Netscape Communications and GE Information Services.

“We are delighted to have someone of Quentin’s caliber, who is a veteran of both emerging and established technology companies, lead Aster Data through our next stage of growth,” said Mayank Bawa, Chief Customer Officer and co-founder, Aster Data. “His significant experience around growing organizations and driving operational excellence will be invaluable as he takes Aster Data forward. I’m excited to shift my focus to customers and their success; to bring our innovations to our customers worldwide to help them unlock deep value from their growing data volumes.”

“I am very excited to be joining Aster Data and taking on the challenge of augmenting its already impressive level of growth and success.  Aster Data is very well respected and established in the marketplace, has an enviable solution for big data management that uniquely addresses both big data storage and data processing, an impressive client list and a very talented team,” said Quentin Gallivan, President and CEO, Aster Data. “My task will be to leverage these assets, help shape a new market and provide operational guidance and strategic direction to drive even greater value for shareholders, customers and employees alike.”

Big Data and R: New Product Release by Revolution Analytics

Press release by the guys at Revolution Analytics- this time claiming to enable terabyte-level analytics with R. Interesting stuff, but the techie details are awaited.

Revolution Analytics Brings Big Data Analysis to R

The world’s most powerful statistics language can now tackle terabyte-class data sets using Revolution R Enterprise at a fraction of the cost of legacy analytics products


JSM 2010 – VANCOUVER (August 3, 2010) — Revolution Analytics today introduced ‘Big Data’ analysis to its Revolution R Enterprise software, taking the popular R statistics language to unprecedented new levels of capacity and performance for analyzing very large data sets. For the first time, R users will be able to process, visualize and model terabyte-class data sets in a fraction of the time of legacy products—without employing expensive or specialized hardware.

The new version of Revolution R Enterprise introduces an add-on package called RevoScaleR that provides a new framework for fast and efficient multi-core processing of large data sets. It includes:

  • The XDF file format, a new binary ‘Big Data’ file format with an interface to the R language that provides high-speed access to arbitrary rows, blocks and columns of data.
  • A collection of widely-used statistical algorithms optimized for Big Data, including high-performance implementations of Summary Statistics, Linear Regression, Binomial Logistic Regression and Crosstabs—with more to be added in the near future.
  • Data Reading & Transformation tools that allow users to interactively explore and prepare large data sets for analysis.
  • Extensibility: expert R users can develop and extend their own statistical algorithms to take advantage of Revolution R Enterprise’s new speed and scalability capabilities.
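Note- for the curious, here is a rough sketch of what this workflow might look like in R. The function names (rxImport, rxSummary, rxLinMod) are as I recall them from RevoScaleR documentation, and the airline column names are my assumptions based on the benchmark data set mentioned below- so treat this as illustrative, not official sample code.

```r
# Sketch: import a large CSV into the XDF binary format, then compute
# summary statistics and a linear model over it without loading it all in RAM.
library(RevoScaleR)

# Import a big CSV into an XDF file (processed block by block, not in memory)
rxImport(inData = "airline.csv", outFile = "airline.xdf")

# Summary statistics computed over the XDF file
rxSummary(~ ArrDelay + DepDelay, data = "airline.xdf")

# A linear regression over the same out-of-memory data
fit <- rxLinMod(ArrDelay ~ DepDelay + DayOfWeek, data = "airline.xdf")
summary(fit)
```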

“The R language’s inherent power and extensibility has driven its explosive adoption as the modern system for predictive analytics,” said Norman H. Nie, president and CEO of Revolution Analytics. “We believe that this new Big Data scalability will help R transition from an amazing research and prototyping tool to a production-ready platform for enterprise applications such as quantitative finance and risk management, social media, bioinformatics and telecommunications data analysis.”

Sage Bionetworks is the nonprofit force behind the open-source collaborative effort, Sage Commons, a place where data and disease models can be shared by scientists to better understand disease biology. David Henderson, Director of Scientific Computing at Sage, commented: “At Sage Bionetworks, we need to analyze genomic databases hundreds of gigabytes in size with R. We’re looking forward to using the high-speed data-analysis features of RevoScaleR to dramatically reduce the times it takes us to process these data sets.”

Take Hadoop and Other Big Data Sources to the Next Level

Revolution R Enterprise fits well within the modern ‘Big Data’ architecture by leveraging popular sources such as Hadoop, NoSQL or key value databases, relational databases and data warehouses. These products can be used to store, regularize and do basic manipulation on very large datasets—while Revolution R Enterprise now provides advanced analytics at unparalleled speed and scale: producing speed on speed.

“Together, Hadoop and R can store and analyze massive, complex data,” said Saptarshi Guha, developer of the popular RHIPE R package that integrates the Hadoop framework with R in an automatically distributed computing environment. “Employing the new capabilities of Revolution R Enterprise, we will be able to go even further and compute Big Data regressions and more.”

Platforms and Availability

The new RevoScaleR package will be delivered as part of Revolution R Enterprise 4.0, which will be available for 32- and 64-bit Microsoft Windows in the next 30 days. Support for Red Hat Enterprise Linux (RHEL 5) is planned for later this year.

On its website (http://www.revolutionanalytics.com/bigdata), Revolution Analytics has published performance and scalability benchmarks for Revolution R Enterprise analyzing a 13.2 gigabyte data set of commercial airline information containing more than 123 million rows, and 29 columns.

Additionally, the company will showcase its new Big Data solution in a free webinar on August 25 at 9:00 a.m. Pacific.

Additional Resources

  • Big Data Benchmark whitepaper
  • The Revolution Analytics Roadmap whitepaper
  • Revolutions Blog
  • Download free academic copy of Revolution R Enterprise
  • Visit Inside-R.org for the most comprehensive set of information on R
  • Spread the word: Add a “Download R!” badge on your website
  • Follow @RevolutionR on Twitter

About Revolution Analytics

Revolution Analytics (http://www.revolutionanalytics.com) is the leading commercial provider of software and support for the popular open source R statistics language. Its Revolution R products help make predictive analytics accessible to every type of user and budget. The company is headquartered in Palo Alto, Calif. and backed by North Bridge Venture Partners and Intel Capital.

Media Contact

Chantal Yang
Page One PR, for Revolution Analytics
Tel: +1 415-875-7494

Email:  revolution@pageonepr.com

Open Source and Software Strategy

Curt Monash at Monash Research pointed out some ongoing open-source GPL issues for WordPress and the Thesis theme (also see http://ma.tt/2009/04/oracle-and-open-source/ and http://www.mattcutts.com/blog/switching-things-around/).

As a user of both for upwards of 2 years, I believe open source and GPL license enforcement are now standard parts of the software strategy of most software companies. Some thoughts on open source and software strategy- Thesis remains a very, very popular theme and has earned upwards of $100,000 for its creator (an estimate based on 20k-plus installs and a $60 average price).

  • Little guys like to give away code to get some satisfaction/recognition; big guys give away free code only when it's necessary, or when they are not making money in that product segment anyway.
  • As Ethan Hunt said, “Every Hero needs a Villain”. Every software (market share) war needs One Big Company holding more market share, and an Open Source Strategy from another player who is not able to create in-house code and so effectively outsources it by creating an open source project. But the same open source proponent rarely gives away the secret to its own money-making project.
    • Examples- Google creates open source Android, but won't reveal its secret algorithm for search, which drives its main profits,
    • Google again puts out a paper for MapReduce, but it's Yahoo that champions Hadoop,
    • Apple creates open source projects (http://www.apple.com/opensource/) but won't give away its operating system source code (why?), which helps people buy its more expensive hardware,
    • IBM, who helped kickstart the whole proprietary code thing (remember MS DOS), is the new champion of open source (http://www.ibm.com/developerworks/opensource/) and
    • Microsoft continues to spark open source debate, but read http://blogs.technet.com/b/microsoft_blog/archive/2010/07/02/a-perspective-on-openness.aspx and also http://www.microsoft.com/opensource/
    • SAS gives away a lot of open source code (read Jim Davis, CMO of SAS, here), but will stick to its Base SAS code (even though it seems to be making more money from its verticals focus and data mining).
    • SPSS was the first big analytics company to help support R (the open source stats software), but will cling to its own code in its own software.
    • WordPress.org gives away its software as open source (and I like Akismet just as much as the blogging), but hey, anyone who is on WordPress.com knows how locked in you can get by its (pricey) platform.
    • Vendor lock-in (wink wink, price escalation) is the elephant in the room for Big Proprietary Software Companies.
    • SLA quality, maintenance, and IP safety are mostly the uh-ohs against going in for open source software.
  • Lack of IP protection for revenue models of open source code is the big bottleneck for a lot of companies- as very few software users know what to do with source code even if you give it to them.
    • If companies were confident that they would still earn the same revenue and there would be less leakage or theft, they would gladly give away the source code.
    • Derivative software and extensions help popularize the original software.
      • Half-way steps like Facebook Applications (the original big-company move to create a platform for third-party creators),
      • iPhone Apps and Android Applications show the success of creating APIs to help protect IP and software control while still giving some freedom to developers, and
      • alternate user interfaces to R in both SAS/IML and JMP are a similar example.
  • Basically open source is mostly done by the underdog, while the top dog mostly rakes in the money (and the envy).
  • There is yet to be a big commercial success in open source software, though there are very good open source products. Just as Google's success helped establish advertising as an alternate (and now dominant) revenue source for online companies, Open Source needs a big example of a company that made billions while giving source code away and still retaining control and direction of its software strategy.
  • Open source people love to hate proprietary packages, yet there are more shades of grey (than black and white) and hypocrisy (read: lies) within the open source software movement than in the regulated world of big software. People will still be people. Software is just a piece of code.  😉

(Art citation- http://gapingvoid.com/about/ and http://gapingvoidgallery.com/)

Student Statement: The Right to Research

An initiative by the Student Government at U Tenn (I am a slight part of student govt, as a member of the University of Tennessee Technology Fee Advisory Board, but not of the following initiative. My current role involves increasing funding for bears like koalas 😉 )

Scholarly knowledge is part of the common wealth of humanity.

Unfortunately, not everyone has access to the scholarly literature, despite advances in communications technology. The high cost of academic journals restricts access to knowledge; in some fields, prices can reach $20,000 for a single journal subscription [1] or $30 for an individual article [2]. Despite these high prices, authors of scholarly articles are not paid for their work. The profits from these publications go solely to the publishers of the journals. A vast amount of research is funded from public sources – yet taxpayers are locked out by the cost of access.


I suppose companies like SAS Institute (with a nice SAS Publishing arm- I got a SAS Enterprise Guide book on predictive analytics from them), Aster Data (which needs all the BIG DATA programmers and researchers, including students), SPSS (with IBM's backing and R&D pedigree), SAP (with University Network), and even the dropout*-founded Oracle

can help by sponsoring journal articles so as to-

1) Increase the pool of developers who remain loyal to that platform for life (similar to companies offering student credit cards)

2) Increase visibility as a low-cost advertising medium.

 

(*Amazing- Google, Microsoft, Oracle, Trilogy, Aster (partly), JMP (partly)- it seems to get really, really rich one has to go to grad school, start a tech company, and drop out.

Maybe I should do a research paper on this hypothesis using some kind of ANOVA or t-tests.)

If you believe students have a Right to Research, and you can help by stepping in to bring article authors and students closer, AND it makes good sense for your business

– HONK YOUR HORN.

Interview Dr Usama Fayyad Founder Open Insights LLC

Here is an interview with Dr Usama Fayyad, founder of Open Insights LLC (www.open-insights.com). Prior to this he was Yahoo!'s Chief Data Officer, where he built the data teams and infrastructure to manage the 25 terabytes of data per day that resulted from the company's operations.

 


Ajay- Describe your career in science. How would you motivate young people today to take up science careers rather than other careers?
Dr Fayyad-
My career started out in science and engineering. My original plan was to be in research and to become a university professor. Indeed, my first few jobs were strictly in basic research. After doing summer internships at places like GM Research Labs and JPL, my first full-time position was at the NASA – Jet Propulsion Laboratory, California Institute of Technology.

I started in research in Artificial Intelligence for autonomous monitoring and control, and in Machine Learning and data mining. The first major success was with Caltech astronomers, using machine learning classification techniques to automatically recognize objects in a large sky survey (POSS-II – the 2nd Palomar Observatory Sky Survey). The Survey consists of taking high resolution images of the entire northern sky. The images, when digitized, contain over 2 billion sky objects. The main problem is to recognize whether an object is a star or a galaxy. For “faint objects” – which constitute the majority of objects – this was an exceedingly hard problem that people wrestled with for 30 years. I was surprised how well the algorithms could do at solving it.

This was a real example of data sets where the dimensionality is so high that algorithms are better suited to solving the problem than humans – even well-trained astronomers. Our methods had over 94% accuracy on faint objects that no one could reliably classify before at better than 75% accuracy. This additional accuracy made all the difference in enabling all sorts of new science, discoveries and theories about the formation of large scale structure in the Universe.
The success of this work and its wide recognition in scientific and engineering communities led to the creation of a new group – I founded and managed the Machine Learning Systems group at JPL, which went on to address hard problems in object recognition in scientific data – mostly from remote sensing instruments – like Magellan images of the planet Venus (we recognized and classified over a million small volcanoes on the planet in collaboration with geologists at Brown University) and Earth Observing System data, including atmospherics and storm data.
At the time, Microsoft was interested in figuring out data mining applications in the corporate world, and after a long recruiting cycle they got me to join the newly formed Microsoft Research as a Senior Researcher in late 1995. My work there focused on algorithms, database systems, and basic science issues in the newly formed field of Data Mining and Knowledge Discovery. We had just finished publishing a nice edited collection of chapters in a book that became very popular, and I had agreed to become the founding Editor-in-Chief of a brand new journal called Data Mining and Knowledge Discovery. This journal today is the premier scientific journal in the field. My research work at Microsoft led to several applications – especially in databases. I founded the Data Mining & Exploration group at MSR and later a product group in SQL Server that built and shipped the first integrated data mining product in a large-scale commercial DBMS – SQL Server 2000 (Analysis Services). We created extensions to the SQL language (that we called DMX) and tried to make data mining mainstream. I really enjoyed the life of doing basic research as well as having a real product group that built and shipped components in a major DBMS.
That’s when I learned that the really challenging problems in the real world were not in data mining but in getting the data ready and available for analysis – Data Warehousing was a field littered with failures and data stores that were write-only (meaning data never came out!) — I used to call these Data Tombs at the time, and I likened them to the pyramids in Ancient Egypt: great engineering feats to build, but really just tombs.

In 2000 I decided to leave the world of Research at Microsoft to do my first venture-backed start-up company – digiMine. The company wanted to solve the problem of managing the data and performing data mining and analysis over data sets, and we targeted a model of hosted data warehouses and mining applications as an ASP – one of the first Software as a Service (SaaS) firms in that arena. This began my transition from the world of research and science to business and technology. We focused on on-line data and web analytics since the data volumes there were about 10x the size of transactional databases and most companies did not know how to deal with all that data. The business grew fast and so did the company – reaching 120 employees in about 1 year.

After 3 years of doing a high-growth start-up and raising some $50 million in venture capital for the company, I was beginning to feel the itch again to do technical work.
In June 2003, we had a chance to spin off the part of the business that was focused on difficult high-end data mining problems. This opportunity was exactly what I needed, and we formed DMX Group as a spinoff company that had a solid business from its first day. At DMX Group I got to work on some of the hardest data mining problems in predicting sales of automobiles, churn of wireless users, financial scoring and credit risk analysis, and many related deep business intelligence problems.

Our client list included many of the Fortune 500 companies. One of these clients was Yahoo! — after 6 months of working with Yahoo! as a client, they decided to acquire DMX Group and use the people to build a serious data team for Yahoo! We negotiated a deal that got about half the employees into Yahoo!, and we spun off the rest of DMX Group to continue focusing on consulting work in data mining and BI. I thus became the industry’s first Chief Data Officer.

The original plan was to spend 2 years or so to help Yahoo! form the right data teams and build the data processing and targeting technology to deliver high value from its inventory of ads.
Yahoo! proved to be a wonderful experience and I learned so much about the Internet. I also learned that even someone like me, who worked on Internet data from the early days of MSN (in 1996) and who ran a web analytics firm, still did not scratch the surface on the depth of the area. I learned a lot about the Internet from Jerry Yang (Yahoo! co-founder), much about the advertising/media business from Dan Rosensweig (COO) and Terry Semel (then CEO), and lots about technology management and strategic deal-making from Farzad (Zod) Nazem, who was the CTO.

As Executive VP at Yahoo! I built one of the industry’s largest and best data teams, and we were able to process over 25 terabytes of data per day and power several hundred million dollars of new revenue for Yahoo! resulting from these data systems. A year after joining Yahoo! I was asked to form a new Research Lab to study much of what we did not understand about the Internet. This was yet another return of basic research into my life. I founded Yahoo! Research to invent the new sciences of the Internet, and I wanted it to be focused on only 4 areas (the idea of focus came from my exposure to Caltech and its philosophy of picking few areas of excellence). The goal was to become the best research lab in the world in these new focused areas. Surprisingly, we did it within 2 years. I hired Prabhakar Raghavan to run Research and he did a phenomenal job in building out the Research organization. The four areas we chose were: Search and Information Navigation, Community Systems, Micro-economics of the Web, and Computational Advertising. We were able to attract the top talent in the world to lead or work on these emerging areas. Yahoo! Research was a success in basic research but also in influencing product. The chief scientists for all the major areas of company products all came from Yahoo! Research and all owned the product development agenda and plans: Raghu Ramakrishnan (CS for Audience), Andrew Tomkins (CS for Search), Andrei Broder (CS for Monetization) and Preston McAfee (CS for Marketplaces/Exchanges). I consider this an unprecedented achievement in the world of Research in general: excellence in basic research and huge impact on company products, all within 3-4 years.
I have recently left Yahoo! and started Open Insights (www.open-insights.com) to focus on data strategy and helping enterprises realize the value of data, develop the right data strategies, and create new business models. Sort of an ‘outsourced version’ of my Chief Data Officer job at Yahoo!
Finally, on my advice to young people: it is not just about science careers, I would call it engineering careers. My advice to any young person in fact, whether they plan to become a business person, a medical doctor, an artist, a lawyer, or a scientist – basic training in engineering and abstract problem solving will be a huge asset. Some of the best lawyers, doctors, and even CEOs started out with engineering training.
For those young people who want to become scientists, my advice is always to look for real-world applications where the research can be conducted in their context. The reason for that is technical and sociological. From a technical perspective, the reality of an application and the fact that things have to work force a regimen of technical discipline and make sure that the new ideas are tested and challenged. Socially, working on a real application forces interactions with people who care about the problem and provides continuous feedback, which is really crucial in guiding good work (even if scientists deny this, social pressure is a big factor) – it also ensures that your work will be relevant and will evolve in relevant directions. I always tell people who are seeking basic research: “some of the deepest fundamental science problems can often be found lurking in the most mundane of applications”. So embrace applied work but always look for the abstract deep problems – that summarizes my advice.
Ajay- What are the challenges of running data mining for a big, big website?
Dr Fayyad-
There are many challenges. Most algorithms will not work due to scale. Also, most of the problems have an unusually high dimensionality – so simple tricks like sampling won’t work. You need to be very clever on how to sample and how to reduce dimensionality by applying the right variable transformations.

The variety of problems is huge, and the fact that the Internet is evolving and growing rapidly, means that the problems are not fixed or stationary. A solution that works well today will likely fail in a few months – so you need to always innovate and always look at new approaches. Also, you need to build automated tools to help detect changes and address them as soon as they arise. 

Problems with 1,000, 10,000 or millions of variables are very common in web challenges. Finally, whatever you do needs to work fast or else you will not be able to keep up with the data flux. Imagine falling behind on processing 25 terabytes of data per day. If you fall behind by two days, you will never be able to catch up again! Not within any reasonable budget constraint. So you try never to go down.
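(As an aside- a toy illustration of one such dimensionality reduction is a random projection, which Dr Fayyad mentions again later in the interview. The R sketch below is my own example, not his code.)

```r
# Toy example: reduce 10,000-dimensional points to 200 dimensions
# with a random projection (Johnson-Lindenstrauss style).
set.seed(42)
n <- 100; d <- 10000; k <- 200
X <- matrix(rnorm(n * d), nrow = n)            # n points in d dimensions
R <- matrix(rnorm(d * k), nrow = d) / sqrt(k)  # random projection matrix
X_low <- X %*% R                               # same points in k dimensions
# Pairwise distances are approximately preserved:
round(as.matrix(dist(X[1:3, ])), 1)
round(as.matrix(dist(X_low[1:3, ])), 1)
```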
Ajay- What are the 5 most important things that a data miner should avoid in doing analysis?

Dr Fayyad- I never thought about this in terms of a top 5, but here are the big ones that come to mind, not necessarily in any order:
a. The algorithm knows nothing about the data, and the knowledge of the domain is in the heads of the domain experts. As I always say, an ounce of knowledge is worth a ton of data – so seek and model what the experts know, or your results will look silly.
b. Don’t let an algorithm fish blindly when you have lots of data. Use what you know to reduce the dimensionality quickly. The curse of dimensionality is never to be under-estimated.
c. Resist the temptation to cheat: selecting training and test sets can easily fool you into thinking you have something that works. Test it honestly against new data; never “peek” at the test data – what you see will force you to cheat without knowing it.
d. Business rules typically dominate data mining accuracy, so be sure to incorporate the business and legal constraints into your mining.
e. I have never seen a large database in my life that came from a static distribution that was sampled independently. Real databases grow to be big through lots of systematic changes and biases, and they are collected over years from changing underlying distributions: segmentation is a pre-requisite to any analysis. Most algorithms assume that data is IID (independent and identically distributed).
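(To make point c. concrete, here is a minimal R sketch of an honest train/test split- my own illustration, with placeholder data and variable names.)

```r
# Minimal honest evaluation: split once, fit on the training set only,
# score once on the untouched test set. 'mydata' and 'churn' are placeholders.
set.seed(1)
n         <- nrow(mydata)
train_idx <- sample(n, size = floor(0.7 * n))   # 70/30 split
train     <- mydata[train_idx, ]
test      <- mydata[-train_idx, ]

model <- glm(churn ~ ., data = train, family = binomial)  # fit on train only
pred  <- predict(model, newdata = test, type = "response")
mean((pred > 0.5) == test$churn)  # report this once; never re-tune after peeking
```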

Ajay- Do you think software like Hadoop and MapReduce will change the online database landscape permanently? What further developments do you see in this area?


Dr Fayyad-
I think they will (and have) changed the landscape dramatically, but they do not address everything. Many problems lend themselves naturally to Map-Reduce and many new approaches are enabled by Map-Reduce. However, there are many problems where M-R does not do much. I see a lot of problems being addressed by a large grid nowadays when they don’t need it. This is often a huge waste of computational resources. We need to learn how to deal with a mix of tools and platforms. I think M-R will be with us for a long time and will be a staple tool – but not a universal one.
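(For readers new to the paradigm, here is a toy word count written in the map/reduce style in plain R- my own illustration of the programming model, not Hadoop code.)

```r
# Word count in the map/reduce style. The map step turns each document into
# words; the reduce step sums a count of 1 per distinct word. On a real
# cluster the map step runs in parallel; here lapply stands in for that.
docs <- c("big data is big", "r loves big data")

mapped  <- unlist(lapply(docs, function(doc) strsplit(doc, " ")[[1]]))
reduced <- tapply(rep(1, length(mapped)), mapped, sum)
reduced
#  big data   is loves    r
#    3    2    1     1    1
```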
Ajay- I look forward to the day when I have just a low-priced netbook and a fast internet connection, can upload a gigabyte of data, and run advanced analytics in the browser. How far or soon do you think it is possible?
Dr Fayyad- Well, I think the day is already here. In fact, much of our web search today is conducted exactly in that model. A lot of web analysis, and much of scientific analysis, is done like this today.
Ajay- Describe some of the conferences you are currently involved with and the research areas that excite you the most.
Dr Fayyad-
I am still very involved in knowledge discovery and data mining conferences (especially the KDD series), machine learning, some statistics, and some conferences on search and internet.  Most exciting conferences for me are ones that cover a mix of topics but that address real problems. Examples include understanding how social networks evolve and behave, understanding dimensionality reductions (like random projections in very high-D spaces) and generally any work that gives us insight into why a particular technique works better and where the open challenges are.
Ajay- What are the next breakthrough areas in data mining? Can we have a Google or Yahoo! in the field of business intelligence as well, given its huge market potential and uncertain ROI?
Dr Fayyad- We already have some large and healthy businesses in BI and quite a huge industry in consulting. If you are asking particularly about the tools market then I think that market is very limited. The users of analysis tools are always going to be small in number. However, once the BI and Data Mining tools are embedded in vertical applications, then the number of users will be tremendous. That’s where you will see success.
Consider the examples of Google or Yahoo! – and now Microsoft with the BING search engine. Search engines today would not be good without machine learning/data mining technology. In fact MLR (Machine Learned Ranking) is at the core of the ranking methodology that decides which search results bubble to the top of the list. The typical web query is 2.6 keywords long and has about a billion matches. What matters are the top 10. The function that determines these is a relevance ranking algorithm that uses machine learning to tune a formula that considers hundreds or thousands of variables about each document. So in many ways, you have a great example of this technology being used by hundreds of millions of people every day – without knowing it!
Success will be in applications where the technology becomes invisible – much like the internal combustion engine in your car or the electric motor in your coffee grinder or power supply fan. I think once people start building verticalized solutions that embed data mining and BI, we will hit success. This already has happened in web search, in direct marketing, in advertising targeting, in credit scoring, in fraud detection, and so on…

Ajay- What do you do to relax? What publications would you recommend for staying up to date, for data mining people and especially the younger analysts?
Dr Fayyad-
My favorite activity is sleep when I can get it 🙂. But more seriously, I enjoy reading books, playing chess, skiing (on water or snow – downhill or x-country), or any activities with my kids. I swim a lot and that gives me much time to think and sort things out.
I think for keeping up with the technical advances in data mining: the KDD conferences, some of the applied analytics conferences, the WWW conferences, and the data mining journals. The ACM SIGKDD publishes a nice newsletter called SIGKDD explorations. It is free with a very low membership fee and it has a lot of announcements and survey papers on new topics and important areas (www.kdd.org).  Also, a good list to keep up with is an email list called KDNuggets edited by Gregory Piatetsky-Shapiro.
 

Biography (www.fayyad.com/usama )-

Usama Fayyad founded Open Insights (www.open-insights.com) to deliver on the vision of bridging the gap between data and insights, and to help companies develop strategies and solutions not only to turn data into working business assets, but to turn the insights available from the growing amounts of data into critical components of an enterprise’s strategy for approaching markets, dealing with competitors, and acquiring and retaining customers.

In his prior role as Chief Data Officer of Yahoo! he built the data teams and infrastructure to manage the 25 terabytes of data per day that resulted from the company’s operations. He also built up the targeting systems and the data strategy for how to utilize data to enhance revenue and to create new revenue sources for the company.

In addition, he was the founding executive for Yahoo! Research, a scientific research organization that became the top research place in the world working on inventing the new sciences of the Internet.

Journal of Statistical Software

Here is a good open-content journal for people wanting to keep track of the latest in statistical software.

It is called Journal of Statistical Software.

Citation: http://www.jstatsoft.org/

Established in 1996, the Journal of Statistical Software publishes articles, book reviews, code snippets, and software reviews on the subject of statistical software and algorithms.  The contents are freely available on-line.  For both articles and code snippets the source code is published along with the paper.

Implementations can use languages such as C, C++, S, Fortran, Java, PHP, Python and Ruby or environments such as Mathematica, MATLAB, R, S-PLUS, SAS, Stata, and XLISP-STAT.

E.g., book reviews of A Handbook of Statistical Analyses Using SAS (Third Edition)

and Statistics and Data with R: An Applied Approach Through Examples


It is really cutting-edge stuff for someone who wants to keep up with the latest and fast-moving tech trends in statistical software, and it has convenient RSS feeds as well as announcement alerts by email.

Note- Various Journals can be ranked using a quantitative index called Impact Factor

Citation http://in-cites.com/research/2007/august_27_2007-2.html

E.g., for Statistics-

In these columns, total citations to a journal’s published papers are divided by the total number of papers that the journal published, producing a citations-per-paper impact score over a five-year period (middle column) and a 26-year period (right-hand column).
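To make the arithmetic concrete, here is a quick sketch with invented numbers-

```r
# Impact score = total citations to a journal's papers / number of papers.
# The numbers below are invented purely for illustration.
papers_published <- 400     # papers published in the window
total_citations  <- 3948    # citations those papers received
total_citations / papers_published  # 9.87, cf. Bioinformatics (2002-06) below
```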

Journals Ranked by Impact: Statistics & Probability

Rank  2006 Impact Factor              Impact 2002-06                  Impact 1981-2006
1     Bioinformatics (4.89)           Bioinformatics (9.87)           Econometrica (52.93)
2     Biostatistics (3.01)            J. Royal Stat. Soc. B (6.75)    J. Royal Stat. Soc. B (27.32)
3     Chemom. Intell. Lab. (2.45)     Biostatistics (6.56)            J. Am. Stat. Assoc. (25.11)
4     Econometrica (2.40)             J. Computat. Biology (6.49)     Biometrika (22.75)
5     J. Royal Stat. Soc. B (2.32)    Econometrica (5.82)             Annals of Statistics (21.31)
6     IEEE ACM T Comp. Bi. (2.28)     J. Chemometrics (5.08)          Biometrics (20.32)
7     J. Am. Stat. Assoc. (2.17)      J. Am. Stat. Assoc. (4.95)      Technometrics (17.74)
8     Multivar. Behav. Res. (2.10)    Statistical Science (4.19)      Multivar. Behav. Res. (16.62)
9     J. Computat. Biology (2.00)     Annals of Statistics (3.94)     Bioinformatics (16.37)
10    Annals of Statistics (1.90)     Stat. in Medicine (3.62)        J. Royal Stat. Soc. A (14.46)