Interview – first Big Data Conference

From time to time, I revisit the past just to ensure I am not choking on the same spin or hype bullshit I resent in this world of marketing the world's perfect software (everyone claims to be it, no one is it).

From the very first Big Data conference ever (and kudos to Steve Wooledge at AsterData for coining the buzzword of the decade, BIG DATA), here is a two-year-old interview.

Is it relevant, or outdated?


https://www.youtube.com/v/cqgChBV_JBs?version=3&hl=en_US

Jaspersoft 4.1 launched

http://www.jaspersoft.com/events/jaspersoft-41-webinar-north-america

One more webinar, but newer software and a changing paradigm.

———————————————————————————————————————————————

Webinar: Introducing Jaspersoft 4.1- North America

 Date: June 8, 2011

Time: 10:00 AM PT/1:00 PM ET
Duration: 60 Minutes
Language: English

Self-service Business Intelligence helps individuals and organizations make operational and strategic decisions quickly. A driving factor for the success of self-service BI is an intuitive and integrated UI that spans reporting, dashboards, and data analysis. Additionally, the rise of cloud and big data environments presents new opportunities for organizations that can analyze and respond to these emerging data sources.

In this webinar, see how the new Jaspersoft Business Intelligence Suite 4.1 release achieves all this and more.
Register now and learn how you can now take advantage of:

  • Modern self-service BI for reporting and analysis: powerful, unified capabilities for both ad hoc reporting and analysis tasks.
  • Flexible insight into any data source: a seamless analytics UI for data stored in relational, OLAP, and big data stores.
  • Maximum performance—now designed to take advantage of 64-bit processors.

    Join the webinar and explore the world’s most advanced and affordable BI suite through live demos, interactive Q&A and more.

Here is a preview of what you'll see in the upcoming Jaspersoft 4.1 webinar and demo:

  • Unified drag-and-drop UI for reporting and analytics. Jaspersoft 4.1 delivers powerful, unified drag-and-drop usability to both ad hoc reporting and analysis tasks while supporting both in-memory and OLAP engines. That means easier, more powerful analysis for business users and a faster route to self-service BI.
  • Insight against any data source. The analytics UI provides easier, more powerful analysis across relational, OLAP, and Big Data stores.
  • High performance with native 64-bit installer support. With Jaspersoft 4.1, you can also scale BI applications to new performance heights, thanks to full support for today’s powerful 64-bit processors.

Join us for the webinar and see how quickly and easily BI Builders like you can deliver true, self-service BI, combining reports, dashboards, and analytics, against virtually any data source — all through a single, web-based UI.

AsterData still alive; launches SQL-MapReduce Developer Portal

So apparently ole client AsterData continues to thrive under the gentle touch of Terrific Data.

———————————————————————————————————————————————————

Aster Data today launched the SQL-MapReduce Developer Portal, a new online community for data scientists and analytic developers. For your convenience, I copied the release below and it can also be found here. Please let me know if you have any questions or if there is anything else I can help you with.

Sara Korolevich

Point Communications Group for Aster Data

sarak@pointcgroup.com

Office: 602.279.1137

Mobile: 623.326.0881

Teradata Accelerates Big Data Analytics with First Collaborative Community for SQL-MapReduce®

New online community for data scientists and analytic developers enables development and sharing of powerful MapReduce analytics


San Carlos, California – Teradata Corporation (NYSE:TDC) today announced the launch of the Aster Data SQL-MapReduce® Developer Portal. This portal is the first collaborative online developer community for SQL-MapReduce analytics, an emerging framework for processing non-relational data and ultra-fast analytics.

“Aster Data continues to deliver on its unique vision for powerful analytics with a rich set of tools to make development of those analytics quick and easy,” said Tasso Argyros, vice president of Aster Data Marketing and Product Management, Teradata Corporation. “This new developer portal builds on Aster Data’s continuing SQL-MapReduce innovation, leveraging the flexibility and power of SQL-MapReduce for analytics that were previously impossible or impractical.”

The developer portal showcases the power and flexibility of Aster Data’s SQL-MapReduce – which uniquely combines standard SQL with the popular MapReduce distributed computing technology for processing big data – by providing a collaborative community for sharing SQL-MapReduce expert insights in addition to sharing SQL-MapReduce analytic functions and sample code. Data scientists, quantitative analysts, and developers can now leverage the experience, knowledge, and best practices of a community of experts to easily harness the power of SQL-MapReduce for big data analytics.

A recent report from IDC Research, “Taking Care of Your Quants: Focusing Data Warehousing Resources on Quantitative Analysts Matters,” has shown that by enabling data scientists with the tools to harness emerging types and sources of data, companies create significant competitive advantage and become leaders in their respective industry.

“The biggest positive differences among leaders and the rest come from the introduction of new types of data,” says Dan Vesset, program vice president, Business Analytics Solutions, IDC Research. “This may include either new transactional data sources or new external data feeds of transactional or multi-structured interactional data — the latter may include click stream or other data that is a by-product of social networking.”

Vesset goes on to say, “Aster Data provides a comprehensive platform for analytics and their SQL-MapReduce Developer Portal provides a community for sharing best practices and functions which can have an even greater impact to an organization’s business.”

With this announcement, Aster Data extends its industry leadership in delivering the most comprehensive analytic platform for big data analytics, one that is not only capable of processing massive volumes of multi-structured data but also provides an extensive set of tools and capabilities that make it simple to leverage the power of MapReduce analytics. The Aster Data SQL-MapReduce Developer Portal makes the power of SQL-MapReduce accessible to data scientists, quantitative analysts, and analytic developers by making it easy to share and collaborate with experts in developing SQL-MapReduce analytics. This portal builds on Aster Data's history of SQL-MapReduce innovations, including:

  • The first deep integration of SQL with MapReduce
  • The first MapReduce support for .NET
  • The first integrated development environment, Aster Data
    Developer Express
  • A comprehensive suite of analytic functions, Aster Data
    Analytic Foundation

Aster Data’s patent-pending SQL-MapReduce enables analytic applications and functions that can deliver faster, deeper insights on terabytes to petabytes of data. These applications are implemented using MapReduce but delivered through standard SQL and business intelligence (BI) tools.

SQL-MapReduce makes it possible for data scientists and developers to empower business analysts with the ability to make informed decisions, incorporating vast amounts of data, regardless of query complexity or data type. Aster Data customers are using SQL-MapReduce for rich analytics including analytic applications for social network analysis, digital marketing optimization, and on-the-fly fraud detection and prevention.

“Collaboration is at the core of our success as one of the leading providers, and pioneers of social software,” said Navdeep Alam, director of Data Architecture at Mzinga. “We are pleased to be one of the early members of The Aster Data SQL-MapReduce Developer Portal, which will allow us the ability to share and leverage insights with others in using big data analytics to attain a deeper understanding of customers’ behavior and create competitive advantage for our business.”

SQL-MapReduce is one of the core capabilities within Aster Data's flagship product, Aster Data nCluster™ 4.6, the industry's first massively parallel processing (MPP) analytic platform with an integrated analytics engine that stores and processes both relational and non-relational data at scale. With Aster Data's unique analytics framework that supports both SQL and SQL-MapReduce™, customers benefit from rich, new analytics on large data volumes with complex data types. Aster Data analytic functions are embedded within the analytic platform and processed locally with the data, which allows for faster data exploration. The SQL-MapReduce framework provides scalable fault tolerance for new analytics, giving users superior reliability regardless of the number of users, query size, or data types.


About Aster Data
Aster Data is a market leader in big data analytics, enabling the powerful combination of cost-effective storage and ultra-fast analysis of new sources and types of data. The Aster Data nCluster analytic platform is a massively parallel software solution that embeds MapReduce analytic processing with data stores for deeper insights on new data sources and types to deliver new analytic capabilities with breakthrough performance and scalability. Aster Data’s solution utilizes Aster Data’s patent-pending SQL-MapReduce to parallelize processing of data and applications and deliver rich analytic insights at scale. Companies including Barnes & Noble, Intuit, LinkedIn, Akamai, and MySpace use Aster Data to deliver applications such as digital marketing optimization, social network and relationship analysis, and fraud detection and prevention.


About Teradata
Teradata is the world's leader in data warehousing and integrated marketing management through its database software, data warehouse appliances, and enterprise analytics. For more information, visit teradata.com.

# # #

Teradata is a trademark or registered trademark of Teradata Corporation in the United States and other countries.

Try JMP for free in steps 1-2-3

Test a 30-day free trial of JMP, the beautiful software with the ugliest website.

In case you have never used JMP but know the difference between a mean and a mode, take a look.

Step 1: Fill in the long, badly designed, outdated form (note the blue lightning graphics and font).


Step 2: Read the uselessly long confirmation message. The website requires registration but offers no easy OAuth or social-media sign-up, even though social media software is sold from the same campus.

Step 3: Wait for a 352 MB download, with no BitTorrent option, no mirror servers, and not even a link for a download accelerator.

Note that internet connections can be lousy enough (globally, not just in India) to make a 352 MB download painful.


And after all the violence and double talk
There’s just a song in all the trouble and the strife

JMP is still the best and easiest-to-use powerful Big Data software, with extensions into R and SAS.

High Performance Analytics

Marry Big Data Analytics to High Performance Computing, and you get the buzzword of this season: High Performance Analytics.

It basically consists of parallelized code running on custom hardware, in-database analytics for speed, and cloud computing/high performance computing environments. On an operational level, it consists of software (as in analytics) partnering with software (as in databases, MapReduce, Hadoop) plus some hardware (mostly HP or IBM). It is considered a high-margin, highly profitable business with a small number of deals compared to, say, desktop licenses.
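
To make the parallelization piece concrete, here is a minimal C++ sketch of the partition-then-aggregate pattern that in-database analytics, MapReduce, and most high performance analytics engines share: each worker computes a local result over its own slice of the data, and a cheap final step merges the partial results. This is a toy illustration with invented data, not any vendor's implementation.

    // Toy partition-then-aggregate example: the local ("map") step scans a
    // private slice of the data, the global ("reduce") step merges the small
    // per-partition results. All names and data here are made up.
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        // Pretend this vector is one column of a large fact table.
        std::vector<double> sales(1000000);
        for (std::size_t i = 0; i < sales.size(); ++i) sales[i] = i % 100;

        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        std::vector<double> partial(workers, 0.0);
        std::vector<std::thread> pool;

        // Local step: each worker sums only its own partition (no shared writes).
        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&sales, &partial, workers, w] {
                const std::size_t chunk = sales.size() / workers;
                const std::size_t begin = w * chunk;
                const std::size_t end =
                    (w + 1 == workers) ? sales.size() : begin + chunk;
                partial[w] = std::accumulate(sales.begin() + begin,
                                             sales.begin() + end, 0.0);
            });
        }
        for (auto& t : pool) t.join();

        // Global step: merge the tiny per-partition results.
        const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
        std::cout << "total sales = " << total << "\n";
    }

The same shape scales from threads on one box to nodes in an MPP database or a Hadoop cluster; only the way the partial results travel changes.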

As per HPCwire, which is a great newsletter for keeping up with HPC, SAS Institute has been busy on this front, partnering with EMC Greenplum and Teradata (which also acquired SAS partner AsterData to gain a much-needed foothold in the MapReduce/SQL space).

Google Snappy


A cool-sounding piece of software, yet again from the guys in California; this one lets you zip and unzip Big Data much, much faster.

http://news.ycombinator.com/item?id=2356735

and

https://code.google.com/p/snappy/

Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more.

Snappy is widely used inside Google, in everything from BigTable and MapReduce to our internal RPC systems. (Snappy has previously been referred to as “Zippy” in some presentations and the like.)

For more information, please see the README. Benchmarks against a few other compression libraries (zlib, LZO, LZF, FastLZ, and QuickLZ) are included in the source code distribution.

Introduction
============
Snappy is a compression/decompression library. It does not aim for maximum
compression, or compatibility with any other compression library; instead,
it aims for very high speeds and reasonable compression. For instance,
compared to the fastest mode of zlib, Snappy is an order of magnitude faster
for most inputs, but the resulting compressed files are anywhere from 20% to
100% bigger. (For more information, see “Performance”, below.)
Snappy has the following properties:
* Fast: Compression speeds at 250 MB/sec and beyond, with no assembler code.
See “Performance” below.
* Stable: Over the last few years, Snappy has compressed and decompressed
petabytes of data in Google’s production environment. The Snappy bitstream
format is stable and will not change between versions.
* Robust: The Snappy decompressor is designed not to crash in the face of
corrupted or malicious input.
* Free and open source software: Snappy is licensed under the Apache license,
version 2.0. For more information, see the included COPYING file.
Snappy has previously been called “Zippy” in some Google presentations
and the like.
Performance
===========
Snappy is intended to be fast. On a single core of a Core i7 processor
in 64-bit mode, it compresses at about 250 MB/sec or more and decompresses at
about 500 MB/sec or more. (These numbers are for the slowest inputs in our
benchmark suite; others are much faster.) In our tests, Snappy usually
is faster than algorithms in the same class (e.g. LZO, LZF, FastLZ, QuickLZ,
etc.) while achieving comparable compression ratios.
Typical compression ratios (based on the benchmark suite) are about 1.5-1.7x
for plain text, about 2-4x for HTML, and of course 1.0x for JPEGs, PNGs and
other already-compressed data. Similar numbers for zlib in its fastest mode
are 2.6-2.8x, 3-7x and 1.0x, respectively. More sophisticated algorithms are
capable of achieving yet higher compression rates, although usually at the
expense of speed. Of course, compression ratio will vary significantly with
the input.
Although Snappy should be fairly portable, it is primarily optimized
for 64-bit x86-compatible processors, and may run slower in other environments.
In particular:
- Snappy uses 64-bit operations in several places to process more data at
once than would otherwise be possible.
- Snappy assumes unaligned 32- and 64-bit loads and stores are cheap.
On some platforms, these must be emulated with single-byte loads
and stores, which is much slower.
- Snappy assumes little-endian throughout, and needs to byte-swap data in
several places if running on a big-endian platform.
Experience has shown that even heavily tuned code can be improved.
Performance optimizations, whether for 64-bit x86 or other platforms,
are of course most welcome; see “Contact”, below.
Usage
=====
Note that Snappy, both the implementation and the interface,
is written in C++.
To use Snappy from your own program, include the file “snappy.h” from
your calling file, and link against the compiled library.
There are many ways to call Snappy, but the simplest possible is
snappy::Compress(input.data(), input.size(), &output);
and similarly
snappy::Uncompress(input.data(), input.size(), &output);
where “input” and “output” are both instances of std::string.
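
To see those calls end to end, here is a small, self-contained round trip against the string-based functions declared in snappy.h (Compress, Uncompress, IsValidCompressedBuffer). The test data and the compile command in the comment are my own assumptions, not part of the Snappy distribution.

    // Minimal Snappy round trip using the std::string-based API from snappy.h.
    // Build roughly as: g++ example.cc -lsnappy (adjust to your install).
    #include <cassert>
    #include <iostream>
    #include <string>

    #include "snappy.h"

    int main() {
        // Mildly repetitive input; real gains show up on larger, real data.
        std::string input;
        for (int i = 0; i < 1000; ++i) input += "big data big data big data ";

        // Compress into a std::string.
        std::string compressed;
        snappy::Compress(input.data(), input.size(), &compressed);

        // Cheap sanity check before decompressing possibly untrusted bytes.
        assert(snappy::IsValidCompressedBuffer(compressed.data(),
                                               compressed.size()));

        // Decompress and verify the round trip.
        std::string output;
        snappy::Uncompress(compressed.data(), compressed.size(), &output);
        assert(output == input);

        std::cout << "original:   " << input.size() << " bytes\n"
                  << "compressed: " << compressed.size() << " bytes\n";
    }

On highly repetitive input like this the ratio will look far better than the 1.5-1.7x quoted above for plain text, which is exactly the kind of input dependence the README warns about.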