Ways to use both Windows and Linux together

Here are some ways to use both Windows and Linux together:

1) Wubi

http://wubi.sourceforge.net/

Wubi simply adds an extra option to boot into Ubuntu. It does not require you to modify the partitions of your PC or use a different bootloader, and it does not install special drivers.

2) Wine

Wine lets you run Windows software on other operating systems. With Wine, you can install and run these applications just like you would in Windows. Read more at http://wiki.winehq.org/Debunking_Wine_Myths

http://www.winehq.org/about/

3) Cygwin

http://www.cygwin.com/

Cygwin is a Linux-like environment for Windows. It consists of two parts:

  • A DLL (cygwin1.dll) which acts as a Linux API emulation layer providing substantial Linux API functionality.
  • A collection of tools which provide a Linux look and feel.

What isn't Cygwin?

  • Cygwin is not a way to run native Linux apps on Windows. You have to rebuild your application from source if you want it to run on Windows.
  • Cygwin is not a way to magically make native Windows apps aware of UNIX® functionality, like signals, ptys, etc. Again, you need to build your apps from source if you want to take advantage of Cygwin functionality.
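To make the first point concrete: what the cygwin1.dll layer emulates is the POSIX API. A small illustrative Python sketch follows — os.fork() is a POSIX call, so it is available under Linux and under Cygwin's Python (courtesy of the emulation layer) but absent from a native Windows Python build. This is only a demonstration of the idea, not part of Cygwin itself:

```python
import os

def fork_demo():
    """Report whether the classic POSIX fork() call is available, and use it.

    Under Linux, or under Cygwin's Python (where cygwin1.dll emulates
    fork()), this forks a child process and reaps it. A native Windows
    Python has no os.fork attribute at all.
    """
    if not hasattr(os, "fork"):
        return "no fork(): native Windows, no POSIX emulation layer"
    pid = os.fork()
    if pid == 0:          # child branch
        os._exit(0)       # child exits immediately
    os.waitpid(pid, 0)    # parent reaps the child
    return f"forked and reaped child {pid}"

print(fork_demo())
```

The same source runs unchanged in both environments, which is exactly the portability Cygwin aims for — at the cost of rebuilding from source.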
4) VMware Player

    https://www.vmware.com/products/player/

    VMware Player is the easiest way to run multiple operating systems at the same time on your PC. With its user-friendly interface, VMware Player makes it effortless for anyone to try out Windows 7, Chrome OS or the latest Linux releases, or create isolated virtual machines to safely test new software and surf the Web.

    Rwui: Creating R Web Interfaces on the Go

    Here is a great R application created by the team at http://sysbio.mrc-bsu.cam.ac.uk: Rwui, for creating R web interfaces.

    It has been around for some time now, though R Apache is presumably better known.

    From-

    http://sysbio.mrc-bsu.cam.ac.uk/Rwui/tutorial/Rwui_Rnews_final.pdf

    The web application Rwui is used to create web interfaces for running R scripts. All the code is generated automatically, so that a fully functional web interface for an R script can be downloaded and up and running in a matter of minutes.

    Rwui is aimed at R script writers who have scripts that they want people unversed in R to use. The script writer uses Rwui to create a web application that will run their R script. Rwui allows the script writer to do this without them having to do any web application programming, because Rwui generates all the code for them.

    The script writer designs the web application to run their R script by entering information on a sequence of web pages. The script writer then downloads the application they have created and installs it on their own server.

    http://sysbio.mrc-bsu.cam.ac.uk/Rwui/tutorial/Technical_Report.pdf

    Features of web applications created by Rwui

    1. A whole range of input items is available if required: text boxes, checkboxes, file upload, etc.
    2. Facility for uploading of an arbitrary number of files (for example, microarray replicates).
    3. Facility for grouping uploaded files (for example, into ‘Diseased’ and ‘Control’ microarray data files).
    4. Results files displayed on results page and available for download.
    5. Results files can be e-mailed to the user.
    6. Interactive results files using image maps.
    7. Repeat analyses with different parameters and data files – new results added to results list, as a link to the corresponding results page.
    8. Real time progress information (text or graphical) displayed when running the application.

    Requirements

    In order to use the completed web applications created by Rwui you will need:

    1. A Java webserver such as Tomcat version 5.5 or later.
    2. Java version 1.5
    3. R – a version compatible with your R script(s).

    Using Rwui

    Using Rwui to create a web application for an R script simply involves:

    1. Entering details about your R script on a sequence of web pages.
    2. Rwui is quite flexible, so you can backtrack, edit and insert as you design your application.
    3. Rwui then generates the web application, which is Java based and platform independent.
    4. The application can be downloaded either as a .zip or .tgz file.
    5. Unpacked, the download contains all the source code and a .war file.
    6. Once the .war file is copied to the Tomcat webapps directory, the application is ready to use.
    7. Application details are saved in an ‘application definition file’ for reuse and modification.
    Interested? Go to http://sysbio.mrc-bsu.cam.ac.uk/Rwui/ and create a new web app in a matter of minutes.
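Under the hood, the pattern Rwui automates is simple: collect parameter values from a web form and pass them to an R script run on the server. Here is a minimal sketch of that pattern in Python (Rwui itself generates a Java web application; the script name and parameter names below are hypothetical, and assume an R script that reads key=value arguments from commandArgs()):

```python
import subprocess

def build_rscript_command(script_path, params):
    """Build the command line for running an R script with named parameters.

    Parameters are passed as key=value arguments, which the R script can
    parse from commandArgs(trailingOnly=TRUE). Sorted for determinism.
    """
    args = ["Rscript", script_path]
    args += [f"{key}={value}" for key, value in sorted(params.items())]
    return args

def run_analysis(script_path, params):
    """Run the R script and return its captured standard output."""
    cmd = build_rscript_command(script_path, params)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Example: a web layer would collect these values from form fields.
cmd = build_rscript_command("analysis.R", {"alpha": 0.05, "n_boot": 1000})
print(cmd)
```

In a real deployment the web layer would validate and sanitize the form input before building the command; Rwui's generated applications handle that plumbing (plus file uploads, results pages and progress reporting) for you.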
    Also see

    PAWCON Bay Area March

    The biggest predictive analytics conference returns to the SF Bay Area in March next year.

    From

    http://www.predictiveanalyticsworld.com/sanfrancisco/2011/

    Predictive Analytics World March 2011 in San Francisco is packed with the top predictive analytics experts, practitioners, authors and business thought leaders, including keynote speakers:


    Sugato Basu, Ph.D.
    Senior Research Scientist
    Google
    Lessons Learned in Predictive Modeling 
    for Ad Targeting

    Eric Siegel, Ph.D.
    Conference Chair
    Predictive Analytics World
    Five Ways Predictive Analytics
    Cuts Enterprise Risk




    Plus special plenary sessions from industry heavyweights:


    Andreas S. Weigend, Ph.D.
    weigend.com
    Former Chief Scientist, Amazon.com
    The State of the Social Data Revolution

    John F. Elder, Ph.D.
    CEO and Founder
    Elder Research
    Data Mining Lessons Learned




    Predictive Analytics World focuses on concrete examples of deployed predictive analytics. Hear from the horse’s mouth precisely how Fortune 500 analytics competitors and other top practitioners deploy predictive modeling, and what kind of business impact it delivers. Click here to view the agenda at-a-glance.

    PAW SF 2011 will feature speakers with case studies from leading enterprises.

    PAW’s March agenda covers hot topics and advanced methods such as uplift (net lift) modeling, ensemble models, social data, search marketing, crowdsourcing, blackbox trading, fraud detection, risk management, survey analysis and other innovative applications that benefit organizations in new and creative ways.

    Join PAW and access the best keynotes, sessions, workshops, exposition, expert panel, live demos, networking coffee breaks, reception, birds-of-a-feather lunches, brand-name enterprise leaders, and industry heavyweights in the business.

     

    Brief Interview with Timo Elliott

    Here is a brief interview with Timo Elliott. Timo Elliott is a 19-year veteran of SAP BusinessObjects.

    Ajay- What are the top 5 events in Business Integration and Data Visualization services you saw in 2010, and what are the top three trends you see for 2011?


    Timo-

    Top five events in 2010:

    (1) Back to strong market growth. IT spending plummeted last year (BI continued to grow, but more slowly than previous years). This year, organizations reopened their wallets and funded new analytics initiatives — all the signs indicate that BI market growth will be double that of 2009.

    (2) The launch of the iPad. Mobile BI has been around for years, but the iPad opened the floodgates of organizations taking a serious look at mobile analytics — and the easy-to-use, executive-friendly iPad dashboards have considerably raised the profile of analytics projects inside organizations.

    (3) Data warehousing got exciting again. Decades of incremental improvements (column databases, massively parallel processing, appliances, in-memory processing…) all came together with robust commercial offers that challenged existing data storage and calculation methods. And new “NoSQL” approaches, designed for the new problems of massive amounts of less-structured web data, started moving into the mainstream.

    (4) The end of Google Wave, the start of social BI. Google Wave was launched as a rethink of how we could bring together email, instant messaging, and social networks. While Google decided to close down the technology this year, it has left its mark, notably by influencing the future of “social BI”, with several major vendors bringing out commercial products this year.

    (5) The start of the big BI merge. While several small independent BI vendors reported strong growth, the major trend of the year was consolidation and integration: the BI megavendors (SAP, Oracle, IBM, Microsoft) increased their market share (sometimes by acquiring smaller vendors, e.g. IBM/SPSS and SAP/Sybase) and integrated analytics with their existing products, blurring the line between BI and other technology areas.

    Top three trends next year:

    (1) Analytics, reinvented. New DW techniques make it possible to do sub-second, interactive analytics directly against row-level operational data. Now BI processes and interfaces need to be rethought and redesigned to make best use of this — notably by blurring the distinctions between the “design” and “consumption” phases of BI.

    (2) Corporate and personal BI come together. The ability to mix corporate and personal data for quick, pragmatic analysis is a common business need. The typical solution to the problem — extracting and combining the data into a local data store (either Excel or a departmental data mart) — pleases users, but introduces duplication and extra costs and makes a mockery of information governance. 2011 will see the rise of systems that let individuals and departments load their data into personal spaces in the corporate environment, allowing pragmatic analytic flexibility without compromising security and governance.

    (3) The next generation of business applications. Where are the business applications designed to support what people really do all day, such as implementing this year’s strategy, launching new products, or acquiring another company? 2011 will see the first prototypes of people-focused, flexible, information-centric, and collaborative applications, bringing together the best of business intelligence, “enterprise 2.0”, and existing operational applications.

    And one that should happen, but probably won’t:

    (4) Intelligence = Information + PEOPLE. Successful analytics isn’t about technology — it’s about people, process, and culture. The biggest trend in 2011 should be organizations spending the majority of their efforts on user adoption rather than technical implementation.

    About- http://timoelliott.com/blog/about

    Timo Elliott is a 19-year veteran of SAP BusinessObjects, and has spent the last twenty years working with customers around the world on information strategy.

    He works closely with SAP research and innovation centers around the world to evangelize new technology prototypes.

    His popular Business Analytics and SAPWeb20 blogs track innovation in analytics and social media, including topics such as augmented corporate reality, collaborative decision-making, and social network analysis.

    His PowerPoint Twitter Tools lets presenters see and react to tweets in real time, embedded directly within their slides.

    A popular and engaging speaker, Elliott presents regularly to IT and business audiences at international conferences, on subjects such as why BI projects fail and what to do about it, and the intersection of BI and enterprise 2.0.

    Prior to Business Objects, Elliott was a computer consultant in Hong Kong and led analytics projects for Shell in New Zealand. He holds a first-class honors degree in Economics with Statistics from Bristol University, England. He blogs at http://timoelliott.com/blog/ (one of the best designed blogs in BI). You should follow Timo at http://twitter.com/timoelliott

    Art Credit- Timo Elliott

    Related Articles

    Brief Interview with James G Kobielus

    Here is a brief one question interview with James Kobielus, Senior Analyst, Forrester.

    Ajay- Describe the five most important events in Predictive Analytics you saw in 2010, and the top three trends you foresee in 2011.

    Jim-

    Five most important developments in 2010:

    • Continued emergence of enterprise-grade Hadoop solutions as the core of the future cloud-based platforms for advanced analytics
    • Development of the market for analytic solution appliances that incorporate several key features for advanced analytics: massively parallel EDW appliance, in-database analytics and data management function processing, embedded statistical libraries, prebuilt logical domain models, and integrated modeling and mining tools
    • Integration of advanced analytics into core BI platforms with user-friendly, visual, wizard-driven, tools for quick, exploratory predictive modeling, forecasting, and what-if analysis by nontechnical business users
    • Convergence of predictive analytics, data mining, content analytics, and CEP in integrated tools geared to real-time social media analytics
    • Emergence of CRM and other line-of-business applications that support continuously optimized “next-best action” business processes through embedding of predictive models, orchestration engines, business rules engines, and CEP agility

    Three top trends I see in the coming year, above and beyond deepening and adoption of the above-bulleted developments:

    • All-in-memory, massively parallel analytic architectures will begin to gain a foothold in complex EDW environments in support of real-time elastic analytics
    • Further crystallization of a market for general-purpose “recommendation engines” that, operating inline to EDWs, CEP environments, and BPM platforms, enable “next-best action” approaches to emerge from today’s application siloes
    • Incorporation of social network analysis functionality into a wider range of front-office business processes to enable fine-tuned behavioral-based customer segmentation to drive CRM optimization

    About – http://www.forrester.com/rb/analyst/james_kobielus

    James G. Kobielus
    Senior Analyst, Forrester Research

    RESEARCH FOCUS

    James serves Business Process & Applications professionals. He is a leading expert on data warehousing, predictive analytics, data mining, and complex event processing. In addition to his core coverage areas, James contributes to Forrester’s research in business intelligence, data integration, data quality, and master data management.

    PREVIOUS WORK EXPERIENCE

    James has a long history in IT research and consulting and has worked for both vendors and research firms. Most recently, he was at Current Analysis, an IT research firm, where he was a principal analyst covering topics ranging from data warehousing to data integration and the Semantic Web. Prior to that position, James was a senior technical systems analyst at Exostar (a hosted supply chain management and eBusiness hub for the aerospace and defense industry). In this capacity, James was responsible for identifying and specifying product/service requirements for federated identity, PKI, and other products. He also worked as an analyst for the Burton Group and was previously employed by LCC International, DynCorp, ADEENA, International Center for Information Technologies, and the North American Telecommunications Association. He is both well versed and experienced in product and market assessments. James is a widely published business/technology author and has spoken at many industry events.

    Predictive Analytics World March 2011 SF

    Message from PAWCON-

     

    Predictive Analytics World, Mar 14-15 2011, San Francisco, CA

    More info: pawcon.com/sanfrancisco

    Agenda at-a-glance: pawcon.com/sanfrancisco/2011/agenda_overview.php

    PAW’s San Francisco 2011 program is the richest and most diverse yet, including over 30 sessions across two tracks — an “All Audiences” and an “Expert/Practitioner” track — so you can witness how predictive analytics is applied at Bank of America, Bank of the West, Best Buy, CA State Automobile Association, Cerebellum Capital, Chessmetrics, Fidelity, Gaia Interactive, GE Capital, Google, HealthMedia, Hewlett Packard, ICICI Bank (India), MetLife, Monster.com, Orbitz, PayPal/eBay, Richmond, VA Police Dept, U. of Melbourne, Yahoo!, YMCA, and a major N. American telecom, plus insights from projects for Anheuser-Busch, the SSA, and Netflix.

    PAW’s agenda covers hot topics and advanced methods such as uplift modeling (net lift), ensemble models, social data (6 sessions on this), search marketing, crowdsourcing, blackbox trading, fraud detection, risk management, survey analysis, and other innovative applications that benefit organizations in new and creative ways.

    Predictive Analytics World is the only conference of its kind, delivering vendor-neutral sessions across verticals such as banking, financial services, e-commerce, education, government, healthcare, high technology, insurance, non-profits, publishing, social gaming, retail and telecommunications.

    And PAW covers the gamut of commercial applications of predictive analytics, including response modeling, customer retention with churn modeling, product recommendations, fraud detection, online marketing optimization, human resource decision-making, law enforcement, sales forecasting, and credit scoring.

    WORKSHOPS. PAW also features pre- and post-conference workshops that complement the core conference program. Workshop agendas include advanced predictive modeling methods, hands-on training and enterprise decision management.

    Be sure to register by Dec 7 for the Super Early Bird rate (save $400):
    pawcon.com/sanfrancisco/register.php

    If you’d like our informative event updates, sign up at:
    pawcon.com/signup-us.php

    Summer School on Uncertainty Quantification

    SAMSI/Sandia Summer School on Uncertainty Quantification – June 20-24, 2011

    http://www.samsi.info/workshop/samsisandia-summer-school-uncertainty-quantification

    The utilization of computer models for complex real-world processes requires addressing Uncertainty Quantification (UQ). Corresponding issues range from inaccuracies in the models to uncertainty in the parameters or intrinsic stochastic features.

    This Summer School will expose students in the mathematical and statistical sciences to common challenges in developing, evaluating and using complex computer models of processes. It is essential that the next generation of researchers be trained on these fundamental issues, which are too often absent from traditional curricula.

    Participants will receive not only an overview of the fast developing field of UQ but also specific skills related to data assimilation, sensitivity analysis and the statistical analysis of rare events.

    Theoretical concepts and methods will be illustrated on concrete examples and applications from both nuclear engineering and climate modeling.

    The main lecturers are:
    Dan Cacuci (N.C. State University): data assimilation and applications to nuclear engineering

    Dan Cooley (Colorado State University): statistical analysis of rare events
    This short course will introduce the current statistical practice for analyzing extreme events. Statistical practice relies on fitting distributions suggested by asymptotic theory to a subset of data considered to be extreme. Both block maximum and threshold exceedance approaches will be presented for both the univariate and multivariate cases.
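The two approaches Cooley describes can be illustrated on plain data. Here is a small sketch (pure Python, illustrative only, with made-up numbers) of how block maxima and threshold exceedances are extracted from a series — these are the subsets of "extreme" data to which a GEV or generalized Pareto distribution would then be fitted:

```python
def block_maxima(series, block_size):
    """Split the series into consecutive blocks and keep each block's maximum.

    These maxima are what a GEV distribution would be fitted to in the
    block-maximum approach.
    """
    return [max(series[i:i + block_size])
            for i in range(0, len(series) - block_size + 1, block_size)]

def threshold_exceedances(series, threshold):
    """Keep the amounts by which observations exceed a high threshold.

    These excesses are what a generalized Pareto distribution would be
    fitted to in the threshold-exceedance approach.
    """
    return [x - threshold for x in series if x > threshold]

data = [1.2, 3.4, 0.7, 2.9, 5.1, 0.3, 4.4, 2.2, 6.0, 1.1, 2.5, 3.3]
print(block_maxima(data, 4))           # one maximum per block of 4
print(threshold_exceedances(data, 4))  # excesses over the threshold 4
```

The statistical work — and the subject of the short course — lies in choosing block sizes or thresholds and fitting the asymptotic distributions to the resulting subsets.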

    Doug Nychka (NCAR): data assimilation and applications in climate modeling
    Climate prediction and modeling do not incorporate geophysical data in the sequential manner as weather forecasting and comparison to data is typically based on accumulated statistics, such as averages. This arises because a climate model matches the state of the Earth’s atmosphere and ocean “on the average” and so one would not expect the detailed weather fluctuations to be similar between a model and the real system. An emerging area for climate model validation and improvement is the use of data assimilation to scrutinize the physical processes in a model using observations on shorter time scales. The idea is to find a match between the state of the climate model and observed data that is particular to the observed weather. In this way one can check whether short time physical processes such as cloud formation or dynamics of the atmosphere are consistent with what is observed.
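The "find a match between the model state and observed data" step Nychka describes is, in its simplest scalar form, the Kalman update at the heart of data assimilation: the analysis state is a precision-weighted compromise between forecast and observation. A minimal sketch (hypothetical numbers, obviously not a climate model):

```python
def kalman_update(forecast, forecast_var, obs, obs_var):
    """Combine a model forecast with an observation, weighting by precision.

    Returns the analysis state and its variance. The gain determines how
    far the forecast is pulled toward the observation: a precise
    observation (small obs_var) pulls harder.
    """
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

# Hypothetical example: the model forecasts 15.0 C (variance 4.0), a
# station observes 16.0 C (variance 1.0); the analysis lands nearer the
# observation, with reduced uncertainty.
state, var = kalman_update(15.0, 4.0, 16.0, 1.0)
print(state, var)  # 15.8, 0.8
```

Operational assimilation systems apply this idea to million-dimensional model states, but the scalar case already shows why short-timescale observations can scrutinize a model's physics.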

    Dongbin Xiu (Purdue University): sensitivity analysis and polynomial chaos for differential equations
    This lecture will focus on numerical algorithms for stochastic simulations, with an emphasis on the methods based on generalized polynomial chaos methodology. Both the mathematical framework and the technical details will be examined, along with performance comparisons and implementation issues for practical complex systems.
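To give a flavor of the polynomial chaos idea Xiu lectures on: for a standard normal input ξ, a function is expanded in (probabilists') Hermite polynomials Heₙ, and its mean and variance fall out of the expansion coefficients, since E[Heₙ²] = n!. A small pure-Python sketch for a quadratic f(ξ) = a + bξ + cξ², whose exact expansion is f = (a + c)·He₀ + b·He₁ + c·He₂ (using He₀ = 1, He₁ = ξ, He₂ = ξ² − 1), with a Monte Carlo cross-check:

```python
import random

def pc_mean_variance(a, b, c):
    """Mean and variance of f(xi) = a + b*xi + c*xi**2 for xi ~ N(0, 1),
    read directly off the Hermite-chaos coefficients.

    Expansion: f = (a + c)*He0 + b*He1 + c*He2, with E[He_n**2] = n!.
    The mean is the He0 coefficient; the variance sums coeff**2 * n!
    over n >= 1.
    """
    coeffs = {0: a + c, 1: b, 2: c}
    mean = coeffs[0]
    variance = coeffs[1] ** 2 * 1 + coeffs[2] ** 2 * 2  # 1! and 2!
    return mean, variance

def mc_mean_variance(a, b, c, n=200_000, seed=42):
    """Monte Carlo estimate of the same moments, for comparison."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    fs = [a + b * x + c * x * x for x in xs]
    m = sum(fs) / n
    v = sum((f - m) ** 2 for f in fs) / (n - 1)
    return m, v

print(pc_mean_variance(1.0, 2.0, 3.0))  # exact: mean 4.0, variance 22.0
print(mc_mean_variance(1.0, 2.0, 3.0))  # close to the exact values
```

For general nonlinear models the coefficients are computed numerically (e.g. by quadrature), which is where the algorithmic issues the lecture covers come in; the payoff is that moments and sensitivities come from a handful of coefficients rather than a large Monte Carlo ensemble.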

    The main lectures will be supplemented by discussion sessions and by presentations from UQ practitioners from both the Sandia and Los Alamos National Laboratories.

    http://www.samsi.info/workshop/samsisandia-summer-school-uncertainty-quantification