LibreOffice Stable Release launched

The non-Oracle fork of OpenOffice.org completes an important milestone. From the press release:

The Document Foundation launches LibreOffice 3.3

The first stable release of the free office suite is available for download

The Internet, January 25, 2011 – The Document Foundation launches LibreOffice 3.3, the first stable release of the free office suite developed by the community. In less than four months, the number of developers hacking LibreOffice has grown from less than twenty in late September 2010, to well over one hundred today. This has allowed us to release ahead of the aggressive schedule set by the project.

Not only does LibreOffice 3.3 ship a number of new and original features, it is also a significant achievement for a number of reasons:

– the developer community has been able to build its own independent process, and get up and running in a very short time (with respect to the size of the code base and the project’s strong ambitions);

– thanks to the high number of new contributors attracted to the project, the source code is quickly undergoing a major clean-up to provide a better foundation for future development of LibreOffice;

– the Windows installer, which is going to impact the largest and most diverse user base, has been integrated into a single build containing all language versions, thus reducing the size for download sites from 75 GB to 11 GB, making it easier for us to deploy new versions more rapidly and lowering the carbon footprint of the entire infrastructure.

Caolán McNamara from Red Hat, one of the developer community leaders, comments, “We are excited: this is our very first stable release, and therefore we are eager to get user feedback, which will be integrated as soon as possible into the code, with the first enhancements being released in February. Starting from March, we will be moving to a real time-based, predictable, transparent and public release schedule, in accordance with the Engineering Steering Committee’s goals and users’ requests”. The LibreOffice development roadmap is available at http://wiki.documentfoundation.org/ReleasePlan

LibreOffice 3.3 brings several unique new features. Among the most popular with community members, in no particular order, are:

  1. the ability to import and work with SVG files;
  2. an easy way to format title pages and their numbering in Writer;
  3. a more helpful Navigator Tool for Writer;
  4. improved ergonomics in Calc for sheet and cell management;
  5. and Microsoft Works and Lotus Word Pro document import filters.

In addition, many great extensions are now bundled, providing PDF import, a slide-show presenter console, a much-improved report builder, and more besides.

A more complete and detailed list of all the new features offered by LibreOffice 3.3 is viewable on the following web page: http://www.libreoffice.org/download/new-features-and-fixes/

LibreOffice 3.3 also provides all the new features of OpenOffice.org 3.3, such as new custom properties handling; embedding of standard PDF fonts in PDF documents; new Liberation Narrow font; increased document protection in Writer and Calc; auto decimal digits for “General” format in Calc; 1 million rows in a spreadsheet; new options for CSV import in Calc; insert drawing objects in Charts; hierarchical axis labels for Charts; improved slide layout handling in Impress; a new easier-to-use print interface; more options for changing case; and colored sheet tabs in Calc. Several of these new features were contributed by members of the LibreOffice team prior to the formation of The Document Foundation.

LibreOffice hackers will be meeting at FOSDEM in Brussels on February 5 and 6, and will be presenting their work during a one-day workshop on February 6, with speeches and hacking sessions coordinated by several members of the project.

The home of The Document Foundation is at http://www.documentfoundation.org

The home of LibreOffice is at http://www.libreoffice.org where the download page has been redesigned by the community to be more user-friendly.

*** About The Document Foundation

The Document Foundation has the mission of facilitating the evolution of the OOo Community into a new, open, independent, and meritocratic organization within the next few months. An independent Foundation is a better reflection of the values of our contributors, users and supporters, and will enable a more effective, efficient and transparent community. TDF will protect past investments by building on the achievements of the first decade, will encourage wide participation within the community, and will co-ordinate activity across the community.

*** Media Contacts for TDF

Florian Effenberger (Germany)

Mobile: +49 151 14424108 – E-mail: floeff@documentfoundation.org

Olivier Hallot (Brazil)

Mobile: +55 21 88228812 – E-mail: olivier.hallot@documentfoundation.org

Charles H. Schulz (France)

Mobile: +33 6 98655424 – E-mail: charles.schulz@documentfoundation.org

Italo Vignoli (Italy)

Mobile: +39 348 5653829 – E-mail: italo.vignoli@documentfoundation.org

PAW Videos

A message from Predictive Analytics World on newly available session videos. Many are free to view, so do check them out.

Predictive Analytics World March 2011 in San Francisco

Access PAW DC Session Videos Now

Predictive Analytics World is pleased to announce on-demand access to the videos of PAW Washington DC, October 2010, including over 30 sessions and keynotes that you may view at your convenience. Access this leading predictive analytics content online now:

View the PAW DC session videos online

Register by January 18th and receive $150 off the full 2-day conference program videos (enter code PAW150 at checkout)

Trial videos – view the following for no charge:

Select individual conference sessions, or realize savings by registering for access to one or two full days of sessions. These on-demand videos deliver PAW DC right to your desk, covering hot topics and advanced methods such as:

Social data 

Text mining

Search marketing

Risk management

Survey analysis

Consumer privacy

Sales force optimization

Response & cross-sell

Recommender systems

Featuring experts such as:
Usama Fayyad, Ph.D.
CEO, Open Insights; former Chief Data Officer, Yahoo!

Andrew Pole
Senior Manager, Media and Database Marketing
Target
View Keynote for Free

John F. Elder, Ph.D.
CEO and Founder
Elder Research

Bruno Aziza
Director, Worldwide Strategy Lead, BI
Microsoft

Eric Siegel, Ph.D.
Conference Chair
Predictive Analytics World

PAW DC videos feature over 25 speakers with case studies from leading enterprises such as: CIBC, CEB, Forrester, Macy’s, MetLife, Microsoft, Miles Kimball, Monster.com, Oracle, Paychex, SunTrust, Target, UPMC, Xerox, Yahoo!, YMCA, and more.

How video access works:

View slides on the left; see and hear the speaker in the right window.

Sign up by January 18 for immediate video access and $150 discount


San Francisco – March 14-15, 2011
Washington DC – October 2011
London – November 2011

Session Gallery: Day 1 of 2

Keynote: Five Ways Predictive Analytics Cuts Enterprise Risk  

Eric Siegel, Ph.D., Program Chair, Predictive Analytics World

All business is an exercise in risk management. All organizations would benefit from measuring, tracking and computing risk as a core process, much like insurance companies do.

Predictive analytics does the trick, one customer at a time. This technology is a data-driven means to compute the risk each customer will defect, not respond to an expensive mailer, consume a retention discount even if she were not going to leave in the first place, not be targeted for a telephone solicitation that would have landed a sale, commit fraud, or become a “loss customer” such as a bad debtor or an insurance policy-holder with high claims.
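
(As a purely illustrative aside, not code from the talk: per-customer risk of this kind is typically produced by a scored classification model. A minimal R sketch, with hypothetical column names and simulated data, might look like this.)

    # Minimal sketch (hypothetical columns, simulated data): score each
    # customer's risk of defection with a logistic regression.
    set.seed(42)
    customers <- data.frame(
      tenure_months = rpois(500, 24),
      support_calls = rpois(500, 2),
      monthly_spend = round(runif(500, 10, 200), 2)
    )
    # Simulated outcome: 1 = defected, 0 = retained (toy data only)
    customers$defected <- rbinom(500, 1,
      plogis(-1 + 0.3 * customers$support_calls - 0.02 * customers$tenure_months))

    # Fit the model and rank customers by predicted defection risk
    fit <- glm(defected ~ tenure_months + support_calls + monthly_spend,
               data = customers, family = binomial)
    customers$risk <- predict(fit, type = "response")
    head(customers[order(-customers$risk), ])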

In this keynote session, Dr. Eric Siegel reveals:

– Five ways predictive analytics evolves your enterprise to reduce risk

– Hidden sources of risk across operational functions

– What every business should learn from insurance companies

– How advancements have reversed the very meaning of fraud

– Why “man + machine” teams are greater than the sum of their parts for enterprise decision support

Length – 00:45:57

Price: $195

 

 

Platinum Sponsor Presentation: Analytics – The Beauty of Diversity 

Anne H. Milley, Senior Director of Analytic Strategy, Worldwide Product Marketing, SAS

Analytics contributes to, and draws from, multiple disciplines. The unifying theme of “making the world a better place” is bred from diversity. For instance, the same methods used in econometrics might be used in market research, psychometrics and other disciplines. In a similar way, diverse paradigms are needed to best solve problems, reveal opportunities and make better decisions. This is why we evolve capabilities to formulate and solve a wide range of problems through multiple integrated languages and interfaces. Extending that, we have provided integration with other languages so that users can draw on the disciplines and paradigms needed to best practice their craft.

Length – 20:11

Free viewing enabled – no charge

 

Gold Sponsor Presentation: Predictive Analytics Accelerate Insight for Financial Services 

Finbarr Deely, Director of Business Development, ParAccel

Financial services organizations face immense hurdles in maintaining profitability and building competitive advantage. They must perform “what-if” scenario analysis, identify risks, and detect fraud patterns. The advanced analytic complexity required often makes such analysis slow and painful, if not impossible. This presentation outlines the analytic challenges facing these organizations and provides a clear path to the accelerated insight needed to perform in today’s complex business environment to reduce risk, stop fraud and increase profits:

– The value of predictive analytics in accelerating insight

– Financial services analytic case studies

– A brief overview of the ParAccel Analytic Database

Length – 09:06

Free viewing enabled – no charge

 

TOPIC: BUSINESS VALUE
Case Study: Monster.com
Creating Global Competitive Power with Predictive Analytics 

Jean Paul Isson, Vice President, Global BI & Predictive Analytics, Monster Worldwide

Using predictive analytics to gain a deeper understanding of customer behaviours, increase marketing ROI and drive growth:

– Creating global competitive power with business intelligence: Making the right decisions – at the right time

– Avoiding common change management challenges in sales, marketing, customer service, and products

– Developing a BI vision – and implementing it: successful business intelligence implementation models

– Using predictive analytics as a business driver to stay on top of the competition

– Following the Monster Worldwide global BI evolution: How Monster used BI to go from good to great

Length – 51:17

Price: $195

 

 

TOPIC: SURVEY ANALYSIS
Case Study: YMCA
Turning Member Satisfaction Surveys into an Actionable Narrative 

Dean Abbott, President, Abbott Analytics

Employees are a key constituency at the Y and previous analysis has shown that their attitudes have a direct bearing on Member Satisfaction. This session will describe a successful approach for the analysis of YMCA employee surveys. Decision trees are built and examined in depth to identify key questions in describing key employee satisfaction metrics, including several interesting groupings of employee attitudes. Our approach will be contrasted with other factor analysis and regression-based approaches to survey analysis that we used initially. The predictive models described are currently in use and resulted in both greater understanding of employee attitudes, and a revised “short-form” survey with fewer key questions identified by the decision trees as the most important predictors.
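
(A hedged aside, not Abbott Analytics’ actual code: the decision-tree approach described could be sketched in R with the rpart package on hypothetical survey data, reading the tree’s variable importance to pick the short-form questions.)

    library(rpart)  # recursive-partitioning decision trees

    # Hypothetical survey data: q1..q3 are 1-5 agreement items,
    # satisfied is the overall satisfaction flag being modeled
    set.seed(1)
    survey <- data.frame(q1 = sample(1:5, 300, replace = TRUE),
                         q2 = sample(1:5, 300, replace = TRUE),
                         q3 = sample(1:5, 300, replace = TRUE))
    survey$satisfied <- factor(ifelse(survey$q1 + survey$q2 + rnorm(300) > 6,
                                      "yes", "no"))

    # Fit a classification tree; the most influential questions are
    # candidates for a shorter survey form
    tree <- rpart(satisfied ~ ., data = survey, method = "class")
    tree$variable.importance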

Length – 50:19

Price: $195

 

 

TOPIC: INDUSTRY TRENDS
2010 Data Miner Survey Results: Highlights

Karl Rexer, Ph.D., Rexer Analytics

Do you want to know the views, actions, and opinions of the data mining community? Each year, Rexer Analytics conducts a global survey of data miners to find out. This year at PAW we unveil the results of our 4th Annual Data Miner Survey. This session will present the research highlights, such as:

– Analytic goals & key challenges

– Impact of the economy

– Regional differences

– Text mining trends

Length – 15:20

Price: $195

 

 

Multiple Case Studies: U.S. DoD, U.S. DHS, SSA
Text Mining: Lessons Learned 

John F. Elder, Chief Scientist, Elder Research, Inc.

Text Mining is the “Wild West” of data mining and predictive analytics – the potential for gain is huge, the capability claims are often tall tales, and the “land rush” for leadership is very much a race.

In solving unstructured (text) analysis challenges, we found that principles from inductive modeling – learning relationships from labeled cases – have great power to enhance text mining. Dr. Elder highlights key technical breakthroughs discovered while working on projects for leading government agencies, including:

– Prioritizing searches for the Dept. of Homeland Security

– Quick decisions for Social Security Admin. disability

– Document discovery for the Dept. of Defense

– Disease discovery for the Dept. of Homeland Security

– Risk profiling for the Dept. of Defense

Length – 48:58

Price: $195

 

 

Keynote: How Target Gets the Most out of Its Guest Data to Improve Marketing ROI 

Andrew Pole, Senior Manager, Media and Database Marketing, Target

In this session, you’ll learn how Target leverages its own internal guest data to optimize its direct marketing – with the ultimate goal of enhancing our guests’ shopping experience and driving in-store and online performance. You will hear about what guest data is available at Target, how and where we collect it, and how it is used to improve the performance and relevance of direct marketing vehicles. Furthermore, we will discuss Target’s development and usage of guest segmentation, response modeling, and optimization as means to suppress poor performers from mailings, determine relevant product categories and services for online targeted content, and optimally assign receipt marketing offers to our guests when offer quantities are limited.

Length – 47:49

Free viewing enabled – no charge

 

Platinum Sponsor Presentation: Driving Analytics Into Decision Making  

Jason Verlen, Director, SPSS Product Strategy & Management, IBM Software Group

Organizations looking to dramatically improve their business outcomes are turning to decision management, a convergence of technology and business processes that is used to streamline and predict the outcome of daily decision-making. IBM SPSS Decision Management technology provides the critical link between analytical insight and recommended actions. In this session you’ll learn how Decision Management software integrates analytics with business rules and business applications for front-line systems such as call center applications, insurance claim processing, and websites. See how you can improve every customer interaction, minimize operational risk, reduce fraud and optimize results.
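
(A toy sketch of the general decision-management pattern, illustrative only and not IBM SPSS Decision Management code: a model score feeding a business rule to produce a recommended action.)

    # The pattern described above: analytical score + business rule
    # = recommended front-line action. All names are hypothetical.
    recommend <- function(churn_risk, customer_value) {
      if (churn_risk > 0.7 && customer_value > 1000) {
        "offer retention discount"
      } else if (churn_risk > 0.7) {
        "route to retention team"
      } else {
        "no action"
      }
    }
    recommend(churn_risk = 0.85, customer_value = 2500)  # "offer retention discount"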

Length – 17:29

Free viewing enabled – no charge

 

TOPIC: DATA INFRASTRUCTURE AND INTEGRATION
Case Study: Macy’s
The world is not flat (even though modeling software has to think it is) 

Paul Coleman, Director of Marketing Statistics, Macy’s Inc.

Software for statistical modeling generally uses flat files, where each record represents a unique case with all its variables. In contrast, most large databases are relational, with data distributed among various normalized tables for efficient storage. Variable-creation and model-scoring engines are necessary to bridge data mining and storage needs. Development datasets taken from a sampled history require snapshot management; scoring datasets are taken from the present timeframe and the entire available universe. Organizations with significant data must decide when to store or calculate necessary data, and understand the consequences for their modeling program.
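
(To make the flattening step concrete: a hedged R sketch with hypothetical customer and order tables, aggregating the many-rows-per-customer table and joining it back into a one-row-per-case modeling file.)

    # Flatten a relational layout into the one-row-per-case file that
    # modeling software expects (tables and columns are made up).
    customers <- data.frame(cust_id = 1:3, region = c("E", "W", "E"))
    orders    <- data.frame(cust_id = c(1, 1, 2, 3, 3, 3),
                            amount  = c(20, 35, 50, 10, 15, 5))

    # Aggregate the many-rows-per-customer table, then join it back
    order_summary <- aggregate(amount ~ cust_id, data = orders,
                               FUN = function(x) c(n = length(x), total = sum(x)))
    order_summary <- do.call(data.frame, order_summary)  # unpack the matrix column
    flat <- merge(customers, order_summary, by = "cust_id", all.x = TRUE)
    flat  # one record per case, derived variables as columns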

Length – 34:54

Price: $195

 

 

TOPIC: CUSTOMER VALUE
Case Study: SunTrust
When One Model Will Not Solve the Problem – Using Multiple Models to Create One Solution 

Dudley Gwaltney, Group Vice President, Analytical Modeling, SunTrust Bank

In 2007, SunTrust Bank developed a series of models to identify clients likely to have large changes in deposit balances. The models include three basic binary and two linear regression models.

Based on the models, 15% of SunTrust clients were targeted as those most likely to have large balance changes. These clients accounted for 65% of the absolute balance change and 60% of the large-balance-change clients. The targeted clients are grouped into portfolios and assigned to individual SunTrust retail branches. Since 2008, the portfolio has generated a 2.6% increase in balances over control.
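
(A hedged sketch of the general idea on simulated data, not SunTrust’s models: a binary propensity model and a linear magnitude model combined into one expected-change score.)

    # Combine a binary "large change?" model with a linear "how much?"
    # model into one score, in the spirit of the multi-model approach.
    set.seed(7)
    d <- data.frame(balance = runif(1000, 0, 1e5),
                    txns    = rpois(1000, 10))
    d$big_change <- rbinom(1000, 1, plogis(scale(d$txns)[, 1]))
    d$change_amt <- 500 * d$txns + rnorm(1000, sd = 2000)

    p_model <- glm(big_change ~ balance + txns, data = d, family = binomial)
    m_model <- lm(change_amt ~ balance + txns, data = d)

    # Expected balance change = P(large change) x predicted magnitude
    d$score <- predict(p_model, type = "response") * predict(m_model)
    head(d[order(-d$score), ])  # candidates for the targeted portfolio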

Using the SunTrust example, this presentation will focus on:

– Identifying situations requiring multiple models

– Determining what types of models are needed

– Combining the individual component models into one output

Length – 48:22

Price: $195

 

 

TOPIC: RESPONSE & CROSS-SELL
Case Study: Paychex
Staying One Step Ahead of the Competition – Development of a Predictive 401(k) Marketing and Sales Campaign 

Jason Fox, Information Systems and Portfolio Manager, Paychex

An in-depth case study of Paychex, Inc. using predictive modeling to turn the tide on competitive pressures within its own client base. Paychex, a leading provider of payroll and human resource solutions, will guide you through the development of a predictive 401(k) marketing and sales model. Through the use of sophisticated data mining techniques and regression analysis, the model derives the probability that a client will add retirement services products with Paychex or with a competitor. The session will include roadblocks that could have ended development, and ROI analysis. Speakers: Frank Fiorille, Director of Enterprise Risk Management, Paychex; Jason Fox, Risk Management Analyst, Paychex

Length – 26:29

Price: $195

 

 

TOPIC: SEGMENTATION
Practitioner: Canadian Imperial Bank of Commerce
Segmentation Do’s and Don’ts 

Daymond Ling, Senior Director, Modelling & Analytics, Canadian Imperial Bank of Commerce

The concept of Segmentation is well accepted in business and has withstood the test of time. Even with the advent of new artificial intelligence and machine learning methods, this old war horse still has its place and is alive and well. Like all analytical methods, when used correctly it can lead to enhanced market positioning and competitive advantage, while improper application can have severe negative consequences.

This session will explore the elements of success and the worst practices that lead to failure. The relationship between segmentation and predictive modeling will also be discussed, to clarify when it is appropriate to use one versus the other, and how to use them together synergistically.
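
(As a hedged illustration, not from the session: a minimal R segmentation with k-means on simulated customer data, followed by per-segment response rates to show one way segments and predictive targets can be read together.)

    # Simple k-means segmentation, then response rates by segment
    set.seed(3)
    cust <- data.frame(spend = rlnorm(600, meanlog = 4), visits = rpois(600, 6))
    seg  <- kmeans(scale(cust), centers = 4, nstart = 20)
    cust$segment <- factor(seg$cluster)

    # Hypothetical campaign response, summarized per segment
    cust$responded <- rbinom(600, 1, 0.10 + 0.02 * as.integer(cust$segment))
    tapply(cust$responded, cust$segment, mean)  # response rate per segment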

Length – 45:57

Price: $195

 

 

TOPIC: SOCIAL DATA
Thought Leadership
Social Network Analysis: Killer Application for Cloud Analytics
 

James Kobielus, Senior Analyst, Forrester Research

Social networks such as Twitter and Facebook are a potential goldmine of insights on what is truly going through customers’ minds. Every company wants to know whether, how, how often, and by whom they’re being mentioned across the billowing new cloud of social media. Just as important, every company wants to influence those discussions in their favor, target new business, and harvest maximum revenue potential. In this session, Forrester analyst James Kobielus identifies fruitful applications of social network analysis in customer service, sales, marketing, and brand management. He presents a roadmap for enterprises to build on their existing analytics initiatives and leverage high-performance data warehousing (DW) clouds and appliances in order to analyze shifting patterns of customer sentiment, influence, and propensity. Drawing on Forrester’s ongoing research in advanced analytics and customer relationship management, Kobielus will discuss industry trends, commercial modeling tools, and emerging best practices in social network analysis, which represents a game-changing new discipline in predictive analytics.

Length – 48:16

Price: $195

 

 

TOPIC: HEALTHCARE – INTERNATIONAL TARGETING
Case Study: Life Line Screening
Taking CRM Global Through Predictive Analytics 

Ozgur Dogan, VP, Quantitative Solutions Group, Merkle Inc

Trish Mathe, Director of Database Marketing, Life Line Screening

While Life Line is successfully executing a US CRM roadmap, it is also beginning this same evolution abroad, starting in the UK, where Merkle procured data and built a response model that is pulling response rates over 30% higher than competitors’. This presentation will give an overview of the US CRM roadmap, and then focus on the beginning of the strategy abroad: the data procurement that could not be obtained anywhere else but through Merkle, and the successful modeling and analytics for the UK.

Length – 40:12

Price: $195

 

 

TOPIC: SURVEY ANALYSIS
Case Study: Forrester
Making Survey Insights Addressable and Scalable – The Case Study of Forrester’s Technographics Benchmark Survey 

Nethra Sambamoorthi, Team Leader, Consumer Dynamics & Analytics, Global Consulting, Acxiom Corporation

Marketers use surveys to create enterprise-wide applicable strategic insights to: (1) develop segmentation schemes, (2) summarize consumer behaviors and attitudes for the whole US population, and (3) use multiple surveys to draw unified views about their target audience. However, these insights are not directly addressable and scalable to the whole consumer universe, which is very important when applying the power of survey intelligence to the one-to-one consumer marketing problems marketers routinely face. Acxiom partnered with Forrester Research to create addressable and scalable applications of Forrester’s Technographics Survey, and applied them successfully to a number of industries and applications.

Length – 39:23

Price: $195

 

 

TOPIC: HEALTHCARE
Case Study: UPMC Health Plan
A Predictive Model for Hospital Readmissions 

Scott Zasadil, Senior Scientist, UPMC Health Plan

Hospital readmissions are a significant component of our nation’s healthcare costs. Predicting who is likely to be readmitted is a challenging problem. Using a set of 123,951 hospital discharges spanning nearly three years, we developed a model that predicts an individual’s 30-day readmission should they incur a hospital admission. The model uses an ensemble of boosted decision trees and prior medical claims and captures 64% of all 30-day readmits with a true positive rate of over 27%. Moreover, many of the ‘false’ positives are simply delayed true positives. 53% of the predicted 30-day readmissions are readmitted within 180 days.
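
(A hedged sketch on simulated data, not the UPMC model: the gbm package fits an ensemble of boosted decision trees for a binary readmission outcome, in the spirit of the approach described.)

    library(gbm)  # generalized boosted models (boosted decision trees)

    # Simulated claims-like data; all columns are hypothetical
    set.seed(9)
    claims <- data.frame(
      age        = sample(40:90, 2000, replace = TRUE),
      prior_adms = rpois(2000, 1),
      chronic    = rbinom(2000, 1, 0.3)
    )
    claims$readmit30 <- rbinom(2000, 1,
      plogis(-3 + 0.03 * claims$age + 0.5 * claims$prior_adms))

    # Boosted-tree ensemble for the binary 30-day readmission outcome
    fit <- gbm(readmit30 ~ ., data = claims, distribution = "bernoulli",
               n.trees = 500, interaction.depth = 3, shrinkage = 0.05)
    summary(fit, plotit = FALSE)  # relative influence of each predictor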

Length – 54:18

Price: $195

LibreOffice Beta 3 Launched


The developers who forked Larry Ellison‘s OpenOffice.org have launched Beta 3.

What’s new:

  • DDE reconnect – the old DDE implementation was very quirky: opening and closing a DDE server document a few times would totally disconnect the link with the client document, and the way it accessed server documents caused several other side effects. The new implementation removes those quirks and enables reconnection of a DDE server-client pair when the server document is loaded into LibreOffice while the client document is already open.
  • External reference rework – external reference handling has been reworked to make it work within the OFFSET function. In addition, this change allows Calc to read data directly from documents already loaded when possible; the old implementation would always load from disk, even when the document was already loaded.
  • Autocorrect accidental caps lock – automatically corrects what appears to be a mis-capitalization, such as tHIS or tHAT, resulting from the user not realizing the Caps Lock key was on. When correcting the mis-capitalization, it also automatically turns off Caps Lock (note: not working on Mac OS X yet). (Look for accidental-caps-lock in the commit log.)
  • Swapped default key bindings of Delete and Backspace keys in Calc – this was a major annoyance for former Excel users when migrating to Calc.

(look for delete-backspace-key in the commit log)

  • In Calc, hitting TAB during auto-complete commits current selection and moves to the next cell. Shift-TAB cycles through auto-complete selections.
  • and lots of bugs squashed….

_Announcement_

 

 

The Document Foundation is happy to announce the third beta of
LibreOffice 3.3. This beta comes with lots of improvements and
bugfixes. As usual, be warned that this is beta quality software –
nevertheless, we ask you to play with it – we very much welcome your
feedback and testing!

Please, download suitable package(s) from

http://www.documentfoundation.org/download/

install them, and start testing. Should you find bugs, please report
them to the FreeDesktop Bugzilla:

https://bugs.freedesktop.org

A detailed list of changes from the past four weeks of development is
to be found here:

http://wiki.documentfoundation.org/Development/Weekly_Summary

If you want to get involved with this exciting project, you can
contribute code:

http://www.documentfoundation.org/develop/

translate LibreOffice to your language:

http://www.freedesktop.org/wiki/Software/LibreOffice/i18n/translating_3.3

or just donate:

http://www.documentfoundation.org/contribution/

A list of known issues with Beta 3 is available from our wiki:

http://wiki.documentfoundation.org/Beta3

LibreOffice Marketing Event: FOSDEM

Things are building up for LibreOffice, and here is the inaugural event.

Source- LibreOffice Wiki

 

LibreOffice at FOSDEM – Call for Papers


Call for Papers

2011: Brussels, FOSDEM (2011-02-05 – 2011-02-06). Your first chance ever to give a talk for LibreOffice at this great open source event… obviously you don’t want to miss this!

Do you want to share your experience in starting to hack the code, tell about the tweaks in your build environment, talk about the code changes you have done or are preparing, or share insight on your QA work? Simply submit your proposal at this page.

We would really like you to share in the way that fits you best, be it 5, 10 or up to 30 minutes 🙂

It might well be that we’ll have to choose between the various proposals; after all, FOSDEM is only two days 😉 So please give a clear description of your talk, its goals and target audience. For details, see the outline provided on the wiki.

FOSDEM is a free conference to attend, and we will try to seek sponsorship. But funding is limited, so please only request it if you cannot attend otherwise, and we will try our best to support you.

Thanks a lot,
TDF Steering Committee

More info

http://wiki.documentfoundation.org/Marketing/Events/Fosdem2011
For questions mail info@documentfoundation.org
Discussions with developers and code hackers take place on libreoffice@lists.freedesktop.org

 

So what's new in R 2.12.0


and as per http://cran.r-project.org/src/base/NEWS

the answer is: plenty is new in the new R.

While you and I were busy writing and reading blogs, or generally writing code to earn more money or further our own research, Uncle Peter D and his band of merry men have been busy building a much-upgraded R.

CHANGES IN R VERSION 2.12.0

NEW FEATURES:

    • Reading a package's CITATION file now defaults to ASCII rather
      than Latin-1: a package with a non-ASCII CITATION file should
      declare an encoding in its DESCRIPTION file and use that encoding
      for the CITATION file.

    • difftime() now defaults to the "tzone" attribute of "POSIXlt"
      objects rather than to the current timezone as set by the default
      for the tz argument.  (Wish of PR#14182.)

    • pretty() is now generic, with new methods for "Date" and "POSIXt"
      classes (based on code contributed by Felix Andrews).

    • unique() and match() are now faster on character vectors where
      all elements are in the global CHARSXP cache and have unmarked
      encoding (ASCII).  Thanks to Matthew Dowle for suggesting
      improvements to the way the hash code is generated in unique.c.

    • The enquote() utility, in use internally, is exported now.

    • .C() and .Fortran() now map non-zero return values (other than
      NA_LOGICAL) for logical vectors to TRUE: it has been an implicit
      assumption that they are treated as true.

    • The print() methods for "glm" and "lm" objects now insert
      linebreaks in long calls in the same way that the print() methods
      for "summary.[g]lm" objects have long done.  This does change the
      layout of the examples for a number of packages, e.g. MASS.
      (PR#14250)

    • constrOptim() can now be used with method "SANN".  (PR#14245)

      It gains an argument hessian to be passed to optim(), which
      allows all the ... arguments to be intended for f() and grad().
      (PR#14071)

    • curve() now allows expr to be an object of mode "expression" as
      well as "call" and "function".

    • The "POSIX[cl]t" methods for Axis() have been replaced by a
      single method for "POSIXt".

      There are no longer separate plot() methods for "POSIX[cl]t" and
      "Date": the default method has been able to handle those classes
      for a long time.  This _inter alia_ allows a single date-time
      object to be supplied, the wish of PR#14016.

      The methods had a different default ("") for xlab.

    • Classes "POSIXct", "POSIXlt" and "difftime" have generators
      .POSIXct(), .POSIXlt() and .difftime().  Package authors are
      advised to make use of them (they are available from R 2.11.0) to
      proof against planned future changes to the classes.

      The ordering of the classes has been changed, so "POSIXt" is now
      the second class.  See the document ‘Updating packages for
      changes in R 2.12.x’ on  for
      the consequences for a handful of CRAN packages.

    • The "POSIXct" method of as.Date() allows a timezone to be
      specified (but still defaults to UTC).

    • New list2env() utility function as an inverse of
      as.list() and for fast multi-assign() to existing
      environment.  as.environment() is now generic and uses list2env()
      as list method.

    • There are several small changes to output which ‘zap’ small
      numbers, e.g. in printing quantiles of residuals in summaries
      from "lm" and "glm" fits, and in test statisics in print.anova().

    • Special names such as "dim", "names", etc, are now allowed as
      slot names of S4 classes, with "class" the only remaining
      exception.

    • File .Renviron can have architecture-specific versions such as
      .Renviron.i386 on systems with sub-architectures.

    • installed.packages() has a new argument subarch to filter on
      sub-architecture.

    • The summary() method for packageStatus() now has a separate
      print() method.

    • The default summary() method returns an object inheriting from
      class "summaryDefault" which has a separate print() method that
      calls zapsmall() for numeric/complex values.

    • The startup message now includes the platform and if used,
      sub-architecture: this is useful where different
      (sub-)architectures run on the same OS.

    • The getGraphicsEvent() mechanism now allows multiple windows to
      return graphics events, through the new functions
      setGraphicsEventHandlers(), setGraphicsEventEnv(), and
      getGraphicsEventEnv().  (Currently implemented in the windows()
      and X11() devices.)

    • tools::texi2dvi() gains an index argument, mainly for use by R
      CMD Rd2pdf.

      It avoids the use of texindy by texinfo's texi2dvi >= 1.157,
      since that does not emulate 'makeindex' well enough to avoid
      problems with special characters (such as (, {, !) in indices.

    • The ability of readLines() and scan() to re-encode inputs to
      marked UTF-8 strings on Windows since R 2.7.0 is extended to
      non-UTF-8 locales on other OSes.

    • scan() gains a fileEncoding argument to match read.table().

    • points() and lines() gain "table" methods to match plot().  (Wish
      of PR#10472.)

    • Sys.chmod() allows argument mode to be a vector, recycled along
      paths.

    • There are |, & and xor() methods for classes "octmode" and
      "hexmode", which work bitwise.

    • Environment variables R_DVIPSCMD, R_LATEXCMD, R_MAKEINDEXCMD,
      R_PDFLATEXCMD are no longer used nor set in an R session.  (With
      the move to tools::texi2dvi(), the conventional environment
      variables LATEX, MAKEINDEX and PDFLATEX will be used.
      options("dvipscmd") defaults to the value of DVIPS, then to
      "dvips".)

    • New function isatty() to see if terminal connections are
      redirected.

    • summaryRprof() returns the sampling interval in component
      sample.interval and only returns in by.self data for functions
      with non-zero self times.

    • print(x) and str(x) now indicate if an empty list x is named.

    • install.packages() and remove.packages() with lib unspecified and
      multiple libraries in .libPaths() inform the user of the library
      location used with a message rather than a warning.

    • There is limited support for multiple compressed streams on a
      file: all of [bgx]zfile() allow streams to be appended to an
      existing file, but bzfile() reads only the first stream.

    • Function person() in package utils now uses a given/family scheme
      in preference to first/middle/last, is vectorized to handle an
      arbitrary number of persons, and gains a role argument to specify
      person roles using a controlled vocabulary (the MARC relator
      terms).

    • Package utils adds a new "bibentry" class for representing and
      manipulating bibliographic information in enhanced BibTeX style,
      unifying and enhancing the previously existing mechanisms.

    • A bibstyle() function has been added to the tools package with
      default JSS style for rendering "bibentry" objects, and a
      mechanism for registering other rendering styles.

    • Several aspects of the display of text help are now customizable
      using the new Rd2txt_options() function.
      options("help_text_width") is no longer used.

    • Added \href tag to the Rd format, to allow hyperlinks to URLs
      without displaying the full URL.

    • Added \newcommand and \renewcommand tags to the Rd format, to
      allow user-defined macros.

    • New toRd() generic in the tools package to convert objects to
      fragments of Rd code, and added "fragment" argument to Rd2txt(),
      Rd2HTML(), and Rd2latex() to support it.

    • Directory R_HOME/share/texmf now follows the TDS conventions, so
      can be set as a texmf tree (‘root directory’ in MiKTeX parlance).

    • S3 generic functions now use correct S4 inheritance when
      dispatching on an S4 object.  See ?Methods, section on “Methods
      for S3 Generic Functions” for recommendations and details.

    • format.pval() gains a ... argument to pass arguments such as
      nsmall to format().  (Wish of PR#9574)

    • legend() supports title.adj.  (Wish of PR#13415)

    • Added support for subsetting "raster" objects, plus assigning to
      a subset, conversion to a matrix (of colour strings), and
      comparisons (== and !=).

    • Added a new parseLatex() function (and related functions
      deparseLatex() and latexToUtf8()) to support conversion of
      bibliographic entries for display in R.

    • Text rendering of \itemize in help uses a Unicode bullet in UTF-8
      and most single-byte Windows locales.

    • Added support for polygons with holes to the graphics engine.
      This is implemented for the pdf(), postscript(),
      x11(type="cairo"), windows(), and quartz() devices (and
      associated raster formats), but not for x11(type="Xlib") or
      xfig() or pictex().  The user-level interface is the polypath()
      function in graphics and grid.path() in grid.

    • File NEWS is now generated at installation with a slightly
      different format: it will be in UTF-8 on platforms using UTF-8,
      and otherwise in ASCII.  There is also a PDF version, NEWS.pdf,
      installed at the top-level of the R distribution.

    • kmeans(x, 1) now works.  Further, kmeans now returns between and
      total sum of squares.

    • arrayInd() and which() gain an argument useNames.  For arrayInd,
      the default is now false, for speed reasons.

    • As is done for closures, the default print method for the formula
      class now displays the associated environment if it is not the
      global environment.

    • A new facility has been added for inserting code into a package
      without re-installing it, to facilitate testing changes which can
      be selectively added and backed out.  See ?insertSource.

    • New function readRenviron to (re-)read files in the format of
      ~/.Renviron and Renviron.site.

    • require() will now return FALSE (and not fail) if loading the
      package or one of its dependencies fails.

    • aperm() now allows argument perm to be a character vector when
      the array has named dimnames (as the results of table() calls
      do).  Similarly, array() allows MARGIN to be a character vector.
      (Based on suggestions of Michael Lachmann.)

    • Package utils now exports and documents functions
      aspell_package_Rd_files() and aspell_package_vignettes() for
      spell checking package Rd files and vignettes using Aspell,
      Ispell or Hunspell.

    • Package news can now be given in Rd format, and news() prefers
      these inst/NEWS.Rd files to old-style plain text NEWS or
      inst/NEWS files.

    • New simple function packageVersion().

    • The PCRE library has been updated to version 8.10.

    • The standard Unix-alike terminal interface declares its name to
      readline as 'R', so that can be used for conditional sections in
      ~/.inputrc files.

    • ‘Writing R Extensions’ now stresses that the standard sections in
      .Rd files (other than \alias, \keyword and \note) are intended to
      be unique, and the conversion tools now drop duplicates with a
      warning.

      The .Rd conversion tools also warn about an unrecognized type in
      a \docType section.

    • ecdf() objects now have a quantile() method.

    • format() methods for date-time objects now attempt to make use of
      a "tzone" attribute with "%Z" and "%z" formats, but it is not
      always possible.  (Wish of PR#14358.)

    • tools::texi2dvi(file, clean = TRUE) now works in more cases (e.g.
      where emulation is used and when file is not in the current
      directory).

    • New function droplevels() to remove unused factor levels.

    • system(command, intern = TRUE) now gives an error on a Unix-alike
      (as well as on Windows) if command cannot be run.  It reports a
      non-success exit status from running command as a warning.

      On a Unix-alike an attempt is made to return the actual exit
      status of the command in system(intern = FALSE): previously this
      had been system-dependent but on POSIX-compliant systems the
      value return was 256 times the status.

    • system() has a new argument ignore.stdout which can be used to
      (portably) ignore standard output.

    • system(intern = TRUE) and pipe() connections are guaranteed to be
      available on all builds of R.

    • Sys.which() has been altered to return "" if the command is not
      found (even on Solaris).

    • A facility for defining reference-based S4 classes (in the OOP
      style of Java, C++, etc.) has been added experimentally to
      package methods; see ?ReferenceClasses.

    • The predict method for "loess" fits gains an na.action argument
      which defaults to na.pass rather than the previous default of
      na.omit.

      Predictions from "loess" fits are now named from the row names of
      newdata.

    • Parsing errors detected during Sweave() processing will now be
      reported referencing their original location in the source file.

    • New adjustcolor() utility, e.g., for simple translucent color
      schemes.

    • qr() now has a trivial lm method with a simple (fast) validity
      check.

    • An experimental new programming model has been added to package
      methods for reference (OOP-style) classes and methods.  See
      ?ReferenceClasses.

    • bzip2 has been updated to version 1.0.6 (bug-fix release).
      --with-system-bzlib now requires at least version 1.0.6.

    • R now provides jss.cls and jss.bst (the class and bib style file
      for the Journal of Statistical Software) as well as RJournal.bib
      and Rnews.bib, and R CMD ensures that the .bst and .bib files are
      found by BibTeX.

    • Functions using the TAR environment variable no longer quote the
      value when making system calls.  This allows values such as tar
      --force-local, but does require additional quotes in, e.g., TAR =
      "'/path with spaces/mytar'".

  DEPRECATED & DEFUNCT:

    • Supplying the parser with a character string containing both
      octal/hex and Unicode escapes is now an error.

    • File extension .C for C++ code files in packages is now defunct.

    • R CMD check no longer supports configuration files containing
      Perl configuration variables: use the environment variables
      documented in ‘R Internals’ instead.

    • The save argument of require() now defaults to FALSE and save =
      TRUE is now deprecated.  (This facility is very rarely actually
      used, and was superseded by the Depends field of the DESCRIPTION
      file long ago.)

    • R CMD check --no-latex is deprecated in favour of --no-manual.

    • R CMD Sd2Rd is formally deprecated and will be removed in R
      2.13.0.

  PACKAGE INSTALLATION:

    • install.packages() has a new argument libs_only to optionally
      pass --libs-only to R CMD INSTALL and works analogously for
      Windows binary installs (to add support for 64- or 32-bit
      Windows).

    • When sub-architectures are in use, the installed architectures
      are recorded in the Archs field of the DESCRIPTION file.  There
      is a new default filter, "subarch", in available.packages() to
      make use of this.

      Code is compiled in a copy of the src directory when a package is
      installed for more than one sub-architecture: this avoids problems
      with cleaning the sources between building sub-architectures.

    • R CMD INSTALL --libs-only no longer overrides the setting of
      locking, so a previous version of the package will be restored
      unless --no-lock is specified.

  UTILITIES:

    • R CMD Rprof|build|check are now based on R rather than Perl
      scripts.  The only remaining Perl scripts are the deprecated R
      CMD Sd2Rd and install-info.pl (used only if install-info is not
      found) as well as some maintainer-mode-only scripts.

      *NB:* because these have been completely rewritten, users should
      not expect undocumented details of previous implementations to
      have been duplicated.

      R CMD no longer manipulates the environment variables PERL5LIB
      and PERLLIB.

    • R CMD check has a new argument --extra-arch to confine tests to
      those needed to check an additional sub-architecture.

      Its check for “Subdirectory 'inst' contains no files” is more
      thorough: it looks for files, and warns if there are only empty
      directories.

      Environment variables such as R_LIBS and those used for
      customization can be set for the duration of checking _via_ a
      file ~/.R/check.Renviron (in the format used by .Renviron, and
      with sub-architecture specific versions such as
      ~/.R/check.Renviron.i386 taking precedence).

      There are new options --multiarch to check the package under all
      of the installed sub-architectures and --no-multiarch to confine
      checking to the sub-architecture under which check is invoked.
      If neither option is supplied, a test is done of installed
      sub-architectures and all those which can be run on the current
      OS are used.

      Unless multiple sub-architectures are selected, the install done
      by check for testing purposes is only of the current
      sub-architecture (_via_ R CMD INSTALL --no-multiarch).

      It will skip the check for non-ascii characters in code or data
      if the environment variables _R_CHECK_ASCII_CODE_ or
      _R_CHECK_ASCII_DATA_ are respectively set to FALSE.  (Suggestion
      of Vince Carey.)

    • R CMD build no longer creates an INDEX file (R CMD INSTALL does
      so), and --force removes (rather than overwrites) an existing
      INDEX file.

      It supports a file ~/.R/build.Renviron analogously to check.

      It now runs build-time \Sexpr expressions in help files.

    • R CMD Rd2dvi makes use of tools::texi2dvi() to process the
      package manual.  It is now implemented entirely in R (rather than
      partially as a shell script).

    • R CMD Rprof now uses utils::summaryRprof() rather than Perl.  It
      has new arguments to select one of the tables and to limit the
      number of entries printed.

    • R CMD Sweave now runs R with --vanilla so the environment setting
      of R_LIBS will always be used.

  C-LEVEL FACILITIES:

    • lang5() and lang6() (in addition to pre-existing lang[1-4]())
      convenience functions for easier construction of eval() calls.
      If you have your own definition, do wrap it inside #ifndef lang5
      .... #endif to keep it working with old and new R.

    • Header R.h now includes only the C headers it itself needs, hence
      no longer includes errno.h.  (This helps avoid problems when it
      is included from C++ source files.)

    • Headers Rinternals.h and R_ext/Print.h include the C++ versions
      of stdio.h and stdarg.h respectively if included from a C++
      source file.

  INSTALLATION:

    • A C99 compiler is now required, and more C99 language features
      will be used in the R sources.

    • Tcl/Tk >= 8.4 is now required (increased from 8.3).

    • System functions access, chdir and getcwd are now essential to
      configure R.  (In practice they have been required for some
      time.)

    • make check compares the output of the examples from several of
      the base packages to reference output rather than the previous
      output (if any).  Expect some differences due to differences in
      floating-point computations between platforms.

    • File NEWS is no longer in the sources, but generated as part of
      the installation.  The primary source for changes is now
      doc/NEWS.Rd.

    • The popen system call is now required to build R.  This ensures
      the availability of system(intern = TRUE), pipe() connections and
      printing from postscript().

    • The pkg-config file libR.pc now also works when R is installed
      using a sub-architecture.

    • R has always required a BLAS that conforms to IEC 60559 arithmetic,
      but after discovery of more real-world problems caused by a BLAS
      that did not, this is tested more thoroughly in this version.

  BUG FIXES:

    • Calls to selectMethod() by default no longer cache inherited
      methods.  This could previously corrupt methods used by as().

    • The densities of non-central chi-squared are now more accurate in
      some cases in the extreme tails, e.g. dchisq(2000, 2, 1000), as a
      series expansion was truncated too early.  (PR#14105)

    • pt() is more accurate in the left tail for ncp large, e.g.
      pt(-1000, 3, 200).  (PR#14069)

    • The default C function (R_binary) for binary ops now sets the S4
      bit in the result if either argument is an S4 object.  (PR#13209)

    • source(echo=TRUE) failed to echo comments that followed the last
      statement in a file.

    • S4 classes that contained one of "matrix", "array" or "ts" and
      also another class now accept superclass objects in new().  Also
      fixes failure to call validObject() for these classes.

    • Conditional inheritance defined by argument test in
      methods::setIs() will no longer be used in S4 method selection
      (caching these methods could give incorrect results).  See
      ?setIs.

    • The signature of an implicit generic is now used by setGeneric()
      when that does not use a definition nor explicitly set a
      signature.

    • A bug in callNextMethod() for some examples with "..." in the
      arguments has been fixed.  See file
      src/library/methods/tests/nextWithDots.R in the sources.

    • match(x, table) (and hence %in%) now treat "POSIXlt" consistently
      with, e.g., "POSIXct".

    • Built-in code dealing with environments (get(), assign(),
      parent.env(), is.environment() and others) now behave
      consistently to recognize S4 subclasses; is.name() also
      recognizes subclasses.

    • The abs.tol control parameter to nlminb() now defaults to 0.0 to
      avoid false declarations of convergence in objective functions
      that may go negative.

    • The standard Unix-alike termination dialog to ask whether to save
      the workspace takes an EOF response as n to avoid problems with a
      damaged terminal connection.  (PR#14332)

    • Added warn.unused argument to hist.default() to allow suppression
      of spurious warnings about graphical parameters used with
      plot=FALSE.  (PR#14341)

    • predict.lm(), summary.lm(), and indeed lm() itself had issues
      with residual DF in zero-weighted cases (the latter two only in
      connection with empty models). (Thanks to Bill Dunlap for
      spotting the predict() case.)

    • aperm() treated resize = NA as resize = TRUE.

    • constrOptim() now has an improved convergence criterion, notably
      for cases where the minimum was (very close to) zero; further,
      other tweaks inspired from code proposals by Ravi Varadhan.

    • Rendering of S3 and S4 methods in man pages has been corrected
      and made consistent across output formats.

    • Simple markup is now allowed in \title sections in .Rd files.

    • The behaviour of as.logical() on factors (to use the levels) was
      lost in R 2.6.0 and has been restored.

    • prompt() did not backquote some default arguments in the \usage
      section.  (Reported by Claudia Beleites.)

    • writeBin() disallows attempts to write 2GB or more in a single
      call. (PR#14362)

    • new() and getClass() will now work if Class is a subclass of
      "classRepresentation" and should also be faster in typical calls.

    • The summary() method for data frames makes a better job of names
      containing characters invalid in the current locale.

    • [[ sub-assignment for factors could create an invalid factor
      (reported by Bill Dunlap).

    • Negate(f) would not evaluate argument f until first use of
      returned function (reported by Olaf Mersmann).

    • quietly=FALSE is now also an optional argument of library(), and
      consequently, quietly is now propagated also for loading
      dependent packages, e.g., in require(*, quietly=TRUE).

    • If the loop variable in a for loop was deleted, it would be
      recreated as a global variable.  (Reported by Radford Neal; the
      fix includes his optimizations as well.)

    • Task callbacks could report the wrong expression when the task
      involved parsing new code. (PR#14368)

    • getNamespaceVersion() failed; this was an accidental change in
      2.11.0. (PR#14374)

    • identical() returned FALSE for external pointer objects even when
      the pointer addresses were the same.

    • L$a@x[] <- val did not duplicate in a case it should have.

    • tempfile() now always gives a random file name (even if the
      directory is specified) when called directly after startup and
      before the R RNG had been used.  (PR#14381)

    • quantile(type=6) behaved inconsistently.  (PR#14383)

    • backSpline(.) behaved incorrectly when the knot sequence was
      decreasing.  (PR#14386)

    • The reference BLAS included in R was assuming that 0*x and x*0
      were always zero (whereas they could be NA or NaN in IEC 60559
      arithmetic).  This was seen in results from tcrossprod, and for
      example that log(0) %*% 0 gave 0.

    • The calculation of whether text was completely outside the device
      region (in which case, you draw nothing) was wrong for screen
      devices (which have [0, 0] at top-left).  The symptom was (long)
      text disappearing when resizing a screen window (to make it
      smaller).  (PR#14391)

    • model.frame(drop.unused.levels = TRUE) did not take into account
      NA values of factors when deciding to drop levels. (PR#14393)

    • library.dynam.unload required an absolute path for libpath.
      (PR#14385)

      Both library() and loadNamespace() now record absolute paths for
      use by searchpaths() and getNamespaceInfo(ns, "path").

    • The self-starting model NLSstClosestX failed if some deviation
      was exactly zero.  (PR#14384)

    • X11(type = "cairo") (and other devices such as png using
      cairographics) and which use Pango font selection now work around
      a bug in Pango when very small fonts (those with sizes between 0
      and 1 in Pango's internal units) are requested.  (PR#14369)

    • Added workaround for the font problem with X11(type = "cairo")
      and similar on Mac OS X whereby italic and bold styles were
      interchanged.  (PR#13463 amongst many other reports.)

    • source(chdir = TRUE) failed to reset the working directory if it
      could not be determined - that is now an error.

    • Fix for crash of example(rasterImage) on x11(type="Xlib").

    • Force Quartz to bring the on-screen display up-to-date
      immediately before the snapshot is taken by grid.cap() in the
      Cocoa implementation. (PR#14260)

    • model.frame had an unstated 500 byte limit on variable names.
      (Example reported by Terry Therneau.)

    • The 256-byte limit on names is now documented.

    • Subassignment by [, [[ or $ on an expression object with value
      NULL coerced the object to a list.

 

 

LibreOffice Beta 2 (the office suite forked from Oracle’s OpenOffice.org) launches!

 


 

Announcement from Code Ninjas at Document Foundation

Ten years after the StarOffice code was opened as OpenOffice.org, The Document Foundation is proud to announce the availability of LibreOffice Beta 2 for public testing. Please download the suitable package(s) from

http://www.documentfoundation.org/download/

 

Ajay’s note: the first beta was downloaded almost 100,000 times.

install them, and start testing! Should you find bugs, please report them to the FreeDesktop Bugzilla:

https://bugs.freedesktop.org

If you want to get involved in this exciting project, you can contribute code:

http://www.documentfoundation.org/develop/

translate LibreOffice to your language:

http://www.freedesktop.org/wiki/Software/LibreOffice/i18n/translating_3.3

or just donate:

http://www.documentfoundation.org/contribution/

A list of known issues with Beta 2 is available in our wiki:

http://wiki.documentfoundation.org/Beta2

Beta Release Notes

This beta release is not intended for production use!

There are a number of known issues being worked on:

  • The Windows build is an international build – you can choose the user interface language that suits you, but the help is always English. We are currently working on improving the delivery mechanism to be able to provide you with localized help. We are also working on smaller problems, like the wrong descriptions of several languages.
  • The Linux and Mac OS X builds are English builds with the possibility to install language packs. Please browse the archives to get the language pack you need for your platform.
  • The LibreOffice branding and renaming is new and work in progress. You may still see old graphics, icons or websites. So please bear with us. This also applies to the BrOffice.org branding – applicable in Brazil.
  • Filters for the legacy StarOffice binary formats are missing.

I tested it; it seems okay enough. Once again, open source tends to underplay expectations (when was the last time you saw that in enterprise software?).