Google has a whole list of certifications for people who want to be certified in analytics and in internet-related advertising.
#SAS 9.3 and #Rstats 2.13.1 Released
A bit early, but the latest editions of both SAS and R were released last week.
SAS 9.3 is clearly a major release, with multiple enhancements to keep SAS relevant in enterprise software in the age of big data. There are also many more enhancements specific to R, JMP, and partners such as Teradata.
http://support.sas.com/software/93/index.html
Features
Data management
- Enhanced manageability for improved performance
- In-database processing (ELT pushdown)
- Enhanced performance for loading Oracle data
- New ELT transforms
- Data access
Data quality
- SAS® Data Integration Server includes DataFlux® Data Management Platform for enhanced data quality
- Master Data Management (DataFlux® qMDM)
- Provides support for master hub of trusted entity data.
Analytics
- SAS® Enterprise Miner™
- New survival analysis predicts when an event will happen, not just if it will happen.
- New rate making capability for insurance predicts optimal insurance premium for individuals based on attributes known at application time.
- Time Series Data Mining node (experimental) applies data mining techniques to transactional, time-stamped data.
- Support Vector Machines node (experimental) provides a supervised machine learning method for prediction and classification.
- SAS® Forecast Server
- SAS Forecast Server is integrated with the SAP APO Demand Planning module to provide SAP users with access to a superior forecasting engine and automatic forecasting capabilities.
- SAS® Model Manager
- Seamless integration of R models with the ability to register and manage R models in SAS Model Manager.
- Ability to perform champion/challenger side-by-side comparisons between SAS and R models to see which model performs best for a specific need.
- SAS/OR® and SAS® Simulation Studio
- Optimization
- Simulation
- Automatic input distribution fitting using JMP with SAS Simulation Studio.
Text analytics
- SAS® Text Miner
- SAS® Enterprise Content Categorization
- SAS® Sentiment Analysis
Scalability and high-performance
- SAS® Analytics Accelerator for Teradata (new product)
- SAS® Grid Manager
This is a maintenance release to consolidate various minor fixes to 2.13.0.
CHANGES IN R VERSION 2.13.1:
LICENCE:
• No parts of R are now licensed solely under GPL-2. The licences for packages rpart and survival have been changed, which means that the licence terms for R as distributed are GPL-2 | GPL-3.
NEW FEATURES:
• iconv() no longer translates NA strings as "NA".
• persp(box = TRUE) now warns if the surface extends outside the
box (since occlusion for the box and axes is computed assuming
the box is a bounding box). (PR#202.)
• RShowDoc() can now display the licences shipped with R, e.g.
RShowDoc("GPL-3").
• New wrapper function showNonASCIIfile() in package tools.
• nobs() now has a "mle" method in package stats4.
• trace() now deals correctly with S4 reference classes and
corresponding reference methods (e.g., $trace()) have been added.
• xz has been updated to 5.0.3 (very minor bugfix release).
• tools::compactPDF() gets more compression (usually a little,
sometimes a lot) by using the compressed object streams of PDF
1.5.
• cairo_ps(onefile = TRUE) generates encapsulated EPS on platforms
with cairo >= 1.6.
• Binary reads (e.g. by readChar() and readBin()) are now supported
on clipboard connections. (Wish of PR#14593.)
• as.POSIXlt.factor() now passes ... to the character method
(suggestion of Joshua Ulrich). [Intended for R 2.13.0 but
accidentally removed before release.]
• vector() and its wrappers such as integer() and double() now warn
  if called with a length argument of more than one element. This
  helps track down user errors such as calling double(x) instead of
  as.double(x). (A short example follows this list.)
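A few of the changes above can be tried directly at the console. This is a minimal sketch against R 2.13.1 (behaviour may differ in later versions); the temporary file and its contents are placeholders:

```r
# vector() and wrappers such as double() now warn when 'length' has more
# than one element (per the 2.13.1 release notes).
x <- c(1.5, 2.5)
double(x)        # warns; the user almost certainly meant as.double(x)
as.double(x)

# RShowDoc() can now display the licences shipped with R.
RShowDoc("GPL-3")

# showNonASCIIfile() in package 'tools' flags lines containing non-ASCII text.
tmp <- tempfile()
writeLines(c("x <- 1", "# café"), tmp)
tools::showNonASCIIfile(tmp)
```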
INSTALLATION:
• Building the vignette PDFs in packages grid and utils is now part
of running make from an SVN checkout on a Unix-alike: a separate
make vignettes step is no longer required.
These vignettes are now made with keep.source = TRUE and hence
will be laid out differently.
• make install-strip failed under some configuration options.
• Packages can customize non-standard installation of compiled code
via a src/install.libs.R script. This allows packages that have
architecture-specific binaries (beyond the package's shared
objects/DLLs) to be installed in a multi-architecture setting.
SWEAVE & VIGNETTES:
• Sweave() and Stangle() gain an encoding argument to specify the
  encoding of the vignette sources if the latter do not contain a
  \usepackage[]{inputenc} statement specifying a single input
  encoding. (A minimal call is sketched at the end of this section.)
• There is a new Sweave option figs.only = TRUE to run each figure
chunk only for each selected graphics device, and not first using
the default graphics device. This will become the default in R
2.14.0.
• Sweave custom graphics devices can have a custom function
foo.off() to shut them down.
• Warnings are issued when non-portable filenames are found for
graphics files (and chunks if split = TRUE). Portable names are
regarded as alphanumeric plus hyphen, underscore, plus and hash
(periods cause problems with recognizing file extensions).
• The Rtangle() driver has a new option show.line.nos which is by
default false; if true it annotates code chunks with a comment
giving the line number of the first line in the sources (the
behaviour of R >= 2.12.0).
• Package installation tangles the vignette sources: this step now
converts the vignette sources from the vignette/package encoding
to the current encoding, and records the encoding (if not ASCII)
in a comment line at the top of the installed .R file.
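A minimal sketch of how the two Sweave() additions above might be invoked; the file name "doc.Rnw" is a hypothetical placeholder, and the options can equally be set via \SweaveOpts{} in the vignette itself:

```r
# Declare the vignette encoding explicitly when the .Rnw source has no
# \usepackage[]{inputenc} line.
Sweave("doc.Rnw", encoding = "UTF-8")

# Run only the figure chunks on the selected graphics device(s), skipping the
# extra pass on the default device (this becomes the default in R 2.14.0).
Sweave("doc.Rnw", figs.only = TRUE)
```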
DEPRECATED AND DEFUNCT:
• The internal functions .readRDS() and .saveRDS() are now
  deprecated in favour of the public functions readRDS() and
  saveRDS() introduced in R 2.13.0. (See the short sketch after
  this list.)
• Switching off lazy-loading of code _via_ the LazyLoad field of
the DESCRIPTION file is now deprecated. In future all packages
will be lazy-loaded.
• The off-line help() types "postscript" and "ps" are deprecated.
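The readRDS()/saveRDS() change above is a straight rename; here is a minimal sketch (the fitted model and temporary file are just placeholders):

```r
# Public replacements for the now-deprecated internal .saveRDS()/.readRDS().
fit  <- lm(dist ~ speed, data = cars)   # any R object will do
path <- tempfile()
saveRDS(fit, path)                      # instead of .saveRDS(fit, path)
fit2 <- readRDS(path)                   # instead of .readRDS(path)
identical(coef(fit), coef(fit2))
```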
UTILITIES:
• R CMD check on a multi-architecture installation now skips the
user's .Renviron file for the architecture-specific tests (which
do read the architecture-specific Renviron.site files). This is
consistent with single-architecture checks, which use
--no-environ.
• R CMD build now looks for DESCRIPTION fields BuildResaveData and
BuildKeepEmpty for per-package overrides. See ‘Writing R
Extensions’.
BUG FIXES:
• plot.lm(which = 5) was intended to order factor levels in
increasing order of mean standardized residual. It ordered the
factor labels correctly, but could plot the wrong group of
residuals against the label. (PR#14545)
• mosaicplot() could clip the factor labels, and could overlap them
with the cells if a non-default value of cex.axis was used.
(Related to PR#14550.)
• dataframe[[row,col]] now dispatches on [[ methods for the
selected column (spotted by Bill Dunlap).
• sort.int() would strip the class of an object, but leave its
object bit set. (Reported by Bill Dunlap.)
• pbirthday() and qbirthday() did not implement the algorithm
exactly as given in their reference and so were unnecessarily
inaccurate.
pbirthday() now solves the approximate formula analytically
rather than using uniroot() on a discontinuous function.
The description of the problem was inaccurate: the probability is
  a tail probability (‘2 _or more_ people share a birthday’).
• Complex arithmetic sometimes warned incorrectly about producing
NAs when there were NaNs in the input.
• seek(origin = "current") incorrectly reported it was not
implemented for a gzfile() connection.
• c(), unlist(), cbind() and rbind() could silently overflow the
maximum vector length and cause a segfault. (PR#14571)
• The fonts argument to X11(type = "Xlib") was being ignored.
• Reading (e.g. with readBin()) from a raw connection was not
advancing the pointer, so successive reads would read the same
value. (Spotted by Bill Dunlap.)
• Parsed text containing embedded newlines was printed incorrectly
by as.character.srcref(). (Reported by Hadley Wickham.)
• decompose() used with a series of a non-integer number of periods
returned a seasonal component shorter than the original series.
(Reported by Rob Hyndman.)
• fields = list() failed for setRefClass(). (Reported by Michael
Lawrence.)
• Reference classes could not redefine an inherited field which had
class "ANY". (Reported by Janko Thyson.)
• Methods that override previously loaded versions will now be
installed and called. (Reported by Iago Mosqueira.)
• addmargins() called numeric(apos) rather than
numeric(length(apos)).
• The HTML help search sometimes produced bad links. (PR#14608)
• Command completion will no longer be broken if tail.default() is
redefined by the user. (Problem reported by Henrik Bengtsson.)
• LaTeX rendering of markup in titles of help pages has been
improved; in particular, \eqn{} may be used there.
• isClass() used its own namespace as the default of the where
argument inadvertently.
• Rd conversion to latex mis-handled multi-line titles (including
cases where there was a blank line in the \title section).
Cyber Attacks: Protecting your assets and people from cyber attacks
Every day we hear of new cyber attacks on organizations and countries. The latest attacks were on the IMF, on 200,000 Citibank accounts, and now on the website of the US Senate. If some of the most powerful and technologically advanced organizations could not survive targeted attacks, how effective is your organization at handling cyber security? Sony PlayStation, Google Gmail, and the PBS website are other famous targets that have been victimized.
Before we play the blame game by pointing to China for sponsoring hacker attacks, to Russian spammers for creating botnets, or to ex-Silicon Valley/American technology experts rendered jobless by offshoring, we need to understand which companies are most vulnerable, which processes need to be fine-tuned, and what the plan of action is in case your cyber security is breached.
Which companies are most vulnerable?
If you have valuable data that is confidential in nature, in electronic form, and connected to the internet, you have an opening. Think of data as water: even a small leak can drain it all away. To add to the complexity, the attackers are mostly unknown and extremely difficult to catch, and they can take a big chunk of your credibility and intellectual property in a very short time.
The best people in technology are not the ones attending meetings in nicely pressed suits, and your IT guy is rarely a match for the talent now available for freelance hire for corporate cyber espionage.
Any company or organization that has not undergone at least one real-time simulated cyber attack or an IT audit focused on data security is very vulnerable.
Which organizational processes need to be fine-tuned?
Clearly, employee access, even at the senior management level, needs to be assessed for both technological and social vulnerability. Does your reception give out the names of senior management when cold-called? Do your senior managers surf the internet and use a simple password on the same computer and laptop? Do you have disaster management and redundancy plans?
A wall is only as strong as its weakest brick and the same is true of organizational readiness for cyber attacks.
What is the plan of action in case your cyber security is breached?
Lean back, close your eyes, and imagine that your website has just been breached, someone has just stolen confidential emails from your corporate email server, and your complete client data, along with the most confidential data in your organization, has been lost.
Do you have a plan for what to do next? Or are you waiting for an actual cyber event to occur before making that plan?
Analytics 2011 Conference
From http://www.sas.com/events/analytics/us/
The Analytics 2011 Conference Series combines the power of SAS’s M2010 Data Mining Conference and F2010 Business Forecasting Conference into one conference covering the latest trends and techniques in the field of analytics. Analytics 2011 Conference Series brings the brightest minds in the field of analytics together with hundreds of analytics practitioners. Join us as these leading conferences change names and locations. At Analytics 2011, you’ll learn through a series of case studies, technical presentations and hands-on training. If you are in the field of analytics, this is one conference you can’t afford to miss.
Conference Details
October 24-25, 2011
Grande Lakes Resort
Orlando, FL
Analytics 2011 topic areas include:
- Data Mining
- Forecasting
- Text Analytics
- Fraud Detection
- Data Visualization
Amazon EC2 goes Red Hat
A message from the amazing Amazon cloud team; this will also help #rstats users, given that Revolution Analytics offers its full versions on RHEL.
—————————————————-
On-demand instances of Amazon EC2 running Red Hat Enterprise Linux (RHEL) are now available for as little as $0.145 per instance hour. The offering combines the cost-effectiveness, scalability and flexibility of running in Amazon EC2 with the proven reliability of Red Hat Enterprise Linux.
Highlights of the offering include:
- Support is included through subscription to AWS Premium Support with back-line support by Red Hat
- Ongoing maintenance, including security patches and bug fixes, via update repositories available in all Amazon EC2 regions
- Amazon EC2 running RHEL currently supports RHEL 5.5, RHEL 5.6, RHEL 6.0 and RHEL 6.1 in both 32-bit and 64-bit formats, and is available in all Regions.
- Customers who already own Red Hat licenses will continue to be able to use those licenses at no additional charge.
- Like all services offered by AWS, Amazon EC2 running Red Hat Enterprise Linux offers a low-cost, pay-as-you-go model with no long-term commitments and no minimum fees.
For more information, please visit the Amazon EC2 Red Hat Enterprise Linux page.
which reads:
Amazon EC2 Running Red Hat Enterprise Linux
Amazon EC2 running Red Hat Enterprise Linux provides a dependable platform to deploy a broad range of applications. By running RHEL on EC2, you can leverage the cost effectiveness, scalability and flexibility of Amazon EC2, the proven reliability of Red Hat Enterprise Linux, and AWS premium support with back-line support from Red Hat. Red Hat Enterprise Linux on EC2 is available in versions 5.5, 5.6, 6.0, and 6.1, both in 32-bit and 64-bit architectures.
Amazon EC2 running Red Hat Enterprise Linux provides seamless integration with existing Amazon EC2 features including Amazon Elastic Block Store (EBS), Amazon CloudWatch, Elastic Load Balancing, and Elastic IPs. Red Hat Enterprise Linux instances are available in multiple Availability Zones in all Regions.
Pricing
Pay only for what you use with no long-term commitments and no minimum fee.
On-Demand Instances
On-Demand Instances let you pay for compute capacity by the hour with no long-term commitments.
| Standard Instances | Red Hat Enterprise Linux |
|---|---|
| Small (Default) | $0.145 per hour |
| Large | $0.40 per hour |
| Extra Large | $0.74 per hour |

| Micro Instances | Red Hat Enterprise Linux |
|---|---|
| Micro | $0.08 per hour |

| High-Memory Instances | Red Hat Enterprise Linux |
|---|---|
| Extra Large | $0.56 per hour |
| Double Extra Large | $1.06 per hour |
| Quadruple Extra Large | $2.10 per hour |

| High-CPU Instances | Red Hat Enterprise Linux |
|---|---|
| Medium | $0.23 per hour |
| Extra Large | $0.78 per hour |

| Cluster Compute Instances | Red Hat Enterprise Linux |
|---|---|
| Quadruple Extra Large | $1.70 per hour |

| Cluster GPU Instances | Red Hat Enterprise Linux |
|---|---|
| Quadruple Extra Large | $2.20 per hour |
Pricing is per instance-hour consumed for each instance type. Partial instance-hours consumed are billed as full hours.
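As a rough illustration of what the hourly rates translate to, here is a back-of-the-envelope sketch in R; the 30-day month and continuous uptime are assumptions, and partial hours would be rounded up as noted above:

```r
# Estimated monthly cost of one on-demand Small (default) RHEL instance,
# running continuously for a 30-day month at $0.145 per instance hour.
hourly_rate     <- 0.145
hours_per_month <- 24 * 30           # 720 hours
hourly_rate * hours_per_month        # about $104.40
```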
The available instance types are described below.
Available Instance Types
Standard Instances
Instances of this family are well suited for most applications.
Small Instance – default*
1.7 GB memory
1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit)
160 GB instance storage
32-bit platform
I/O Performance: Moderate
API name: m1.small
Large Instance
7.5 GB memory
4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
850 GB instance storage
64-bit platform
I/O Performance: High
API name: m1.large
Extra Large Instance
15 GB memory
8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
1,690 GB instance storage
64-bit platform
I/O Performance: High
API name: m1.xlarge
Micro Instances
Instances of this family provide a small amount of consistent CPU resources and allow you to burst CPU capacity when additional cycles are available. They are well suited for lower throughput applications and web sites that consume significant compute cycles periodically.
Micro Instance
613 MB memory
Up to 2 EC2 Compute Units (for short periodic bursts)
EBS storage only
32-bit or 64-bit platform
I/O Performance: Low
API name: t1.micro
High-Memory Instances
Instances of this family offer large memory sizes for high throughput applications, including database and memory caching applications.
High-Memory Extra Large Instance
17.1 GB of memory
6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
420 GB of instance storage
64-bit platform
I/O Performance: Moderate
API name: m2.xlarge
High-Memory Double Extra Large Instance
34.2 GB of memory
13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each)
850 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.2xlarge
High-Memory Quadruple Extra Large Instance
68.4 GB of memory
26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.4xlarge
High-CPU Instances
Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.
High-CPU Medium Instance
1.7 GB of memory
5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each)
350 GB of instance storage
32-bit platform
I/O Performance: Moderate
API name: c1.medium
High-CPU Extra Large Instance
7 GB of memory
20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: c1.xlarge
Cluster Compute Instances
Instances of this family provide proportionally high CPU resources with increased network performance and are well suited for High Performance Compute (HPC) applications and other demanding network-bound applications. Learn more about use of this instance type for HPC applications.
Cluster Compute Quadruple Extra Large Instance
23 GB of memory
33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
1690 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cc1.4xlarge
Cluster GPU Instances
Instances of this family provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefitting from highly parallelized processing, including HPC, rendering and media processing applications. While Cluster Compute Instances provide the ability to create clusters of instances connected by a low latency, high throughput network, Cluster GPU Instances provide an additional option for applications that can benefit from the efficiency gains of the parallel computing power of GPUs over what can be achieved with traditional processors. Learn more about use of this instance type for HPC applications.
Cluster GPU Quadruple Extra Large Instance
22 GB of memory
33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
2 x NVIDIA Tesla “Fermi” M2050 GPUs
1690 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cg1.4xlarge
Getting Started
To get started using Red Hat Enterprise Linux on Amazon EC2, perform the following steps:
- Open and log into the AWS Management Console
- Click on Launch Instance from the EC2 Dashboard
- Select the Red Hat Enterprise Linux AMI from the QuickStart tab
- Specify additional details of your instance and click Launch
- Additional details can be found on each AMI’s Catalog Entry page
The AWS Management Console is an easy tool to start and manage your instances. If you are looking for more details on launching an instance, AWS provides a quick video tutorial on how to use Amazon EC2 with the AWS Management Console.
A full list of Red Hat Enterprise Linux AMIs can be found in the AWS AMI Catalog.
Support
All customers running Red Hat Enterprise Linux on EC2 will receive access to repository updates from Red Hat. Moreover, AWS Premium support customers can contact AWS to get access to a support structure from both Amazon and Red Hat.
Resources
- Red Hat solution provider page in EC2
- Red Hat Images in the Amazon Machine Image catalog
- Cloud Computing with Red Hat
About Red Hat
Red Hat, the world’s leading open source solutions provider, is headquartered in Raleigh, NC with over 50 satellite offices spanning the globe. Red Hat provides high-quality, low-cost technology with its operating system platform, Red Hat Enterprise Linux, together with applications, management and Service-Oriented Architecture (SOA) solutions, including the JBoss Enterprise Middleware Suite. Red Hat also offers support, training and consulting services to its customers worldwide.
Also from Revolution Analytics, in case you want to run #rstats in the cloud and thus kill all that talk of RAM dependency, of R being slower than other software (just pick an instance above with more RAM to keep it simple), or of Revolution not being open enough:
http://www.revolutionanalytics.com/downloads/gpl-sources.php
GPL SOURCES
Revolution Analytics uses an Open-Core Licensing model. We provide open-source R bundled with proprietary modules from Revolution Analytics that provide additional functionality for our users. Open-source R is distributed under the GNU General Public License (version 2), and we make our software available under a commercial license.
Revolution Analytics respects the importance of open source licenses and has contributed code to the open source R project and will continue to do so. We have carefully reviewed our compliance with GPLv2 and have worked with Mark Radcliffe of DLA Piper, the outside General Legal Counsel of the Open Source Initiative, to ensure that we fully comply with the obligations of the GPLv2.
For our Revolution R distribution, we may make some minor modifications to the R sources (the ChangeLog file lists all changes made). You can download these modified sources of open-source R under the terms of the GPLv2, using either the links below or those in the email sent to you when you download a specific version of Revolution R.
Download GPL Sources
| Product | Version | Platform | Modified R Sources |
|---|---|---|---|
| Revolution R Community | 3.2 | Windows | R 2.10.1 |
| Revolution R Community | 3.2 | MacOS | R 2.10.1 |
| Revolution R Enterprise | 3.1.1 | RHEL | R 2.9.2 |
| Revolution R Enterprise | 4.0 | Windows | R 2.11.1 |
| Revolution R Enterprise | 4.0.1 | RHEL | R 2.11.1 |
| Revolution R Enterprise | 4.1.0 | Windows | R 2.11.1 |
| Revolution R Enterprise | 4.2 | Windows | R 2.11.1 |
| Revolution R Enterprise | 4.2 | RHEL | R 2.11.1 |
| Revolution R Enterprise | 4.3 | Windows & RHEL | R 2.12.2 |
Indian Business Schools Alumni try to grow more equal
A message from one of the IIM (Indian Institute of Management) alumni networks; just an example of how any global organization should make extra efforts to make things more equal (and thus position its brand as a differentiated place for attracting talent).
http://en.wikipedia.org/wiki/Indian_Institutes_of_Management
The Indian Institutes of Management (IIMs), are graduate business schools in India that also conduct research and provide consultancy services in the field of management to various sectors of the Indian economy. They were created by the Indian Government[1] with the aim of identifying the brightest intellectual talent[1] available in the student community of India and training it in the best management techniques available in the world, to ultimately create a pool of elite managers to manage and lead the various sections of the Indian economy.
The IIMs are considered the top business schools in India.[3] All the IIMs are completely autonomous institutes owned and financed by the Central Government of India. In order of establishment, the IIMs are located at Calcutta (Kolkata), Ahmedabad, Bangalore, Lucknow, Kozhikode (Calicut), Indore, Shillong, Ranchi, Rohtak, Raipur, Trichy, Kashipur and Udaipur. (My alma mater is Lucknow)
IIMs, being role models, have shared knowledge and skills with other institutions to improve their quality and standards in management education.
————————————————————————————————————–
The IIM A Alumni Association has been reaching out to the alumni associations of other IIMs to broad-base the brotherhood (no offense to the fairer sex; couldn't think of a replacement word).
The IIM Calcutta Alumni Association has been conducting a lecture series and has invited us for the next edition. The topic is "The Unlimited Person".
India's ambitions today – particularly reflected in the Corporate Sector – are Unlimited. What mind-set does it take to realise these ambitions? Minds that live in the past or in the future – as too many Indian minds do – limit themselves, their companies and their country.
This presentation gives several examples of our current average mind-set and talks about ways in which an unlimited mind-set can emerge, creating "The Unlimited Person".
The speaker will be IIM Calcutta alumnus Shashi Maudgal, Chief Marketing Officer of Hindalco Industries of the Aditya Birla Group. The lecture is on Friday, June 24th, at 7 pm at Gulmohar, India Habitat Centre.
We hope you will come for this lecture and benefit from Shashi’s experience and insights.
Jayaraman -PGP ’70 / Sunil Kala PGP ’73 / Salil Agrawal PGP ’83
T. Venkateswaran PGP ’85 / Rahul Aggarwal PGP ’94
Why open source companies don't dance?
I have been pondering this seemingly logical paradox for some time now:
1) Why are open source solutions considered technically better but not customer-friendly?
2) Why do startups and app creators in social media or mobile get much more press coverage than profitable startups in enterprise software?
3) How does tech journalism differ in covering open source projects in enterprise versus retail software?
4) What are the hidden rules of the game of enterprise software?
Some observations-
1) Open source companies often focus much more on technical community management and crowdsourcing code. Traditional software companies focus much more on managing the marketing community of customers and influencers. Accordingly, the balance of power is skewed in favor of techies and R&D in open source companies, and in favor of marketing and analyst relations in traditional software companies.
Traditional companies also spend much more on hiring top-notch press release/public relations agencies, while open source companies are both financially and sometimes ideologically opposed to older methods of marketing software. The flip side is that you are much more likely to see videos and tutorials from an open source company than from a traditional one. You can compare the websites of Cloudera, DataStax, Hadapt, Appistry and MapR and contrast them with Teradata or Oracle (which has a much bigger and very different marketing strategy).
Social media for marketing is also more efficiently utilized by smaller companies (open source) while bigger companies continue to pay influential analysts for expensive white papers that help present the brand.
Lack of budgets is a major factor that limits access to influential marketing for open source companies particularly in enterprise software.
2 and 3) Retail software is priced at $2-100 and sells by volume. Accordingly, technology coverage of this software is driven by volume.
Enterprise software is much more expensively priced and has much more discrete volume or sales points. Accordingly, the technology coverage of enterprise software is more discrete, in terms of a white paper coming every quarter, a webinar every month and a press release every week. Retail software is covered non-stop, but those journalists typically do not charge for "briefings".
Journalists covering retail software generally earn money from ads or from hosting conferences, so they have an interest in covering new or interestingly disruptive stuff. Journalists or analysts covering enterprise software generally earn money from white papers, webinars, attending rather than hosting conferences, and writing books. They thus have a much stronger economic incentive to cover the existing landscape and technologies than smaller startups.
4) What are the hidden rules of the game of enterprise software?
- It is mostly a white man’s world. This can be proved by statistical demographic analysis.
- There is incestuous intermingling between influencers, marketers, and PR people. This can be proved by a simple social network analysis of who talks to whom and how much. A simple time series between sponsorship and analyst coverage would also prove this (I am working on quantifying it; a rough sketch of what such an analysis might look like follows this list).
- There are much larger switching costs for enterprise software than for retail software. This leads to legacy, shoddy software getting many more chances than an efficient marketplace would allow.
- Enterprise software is a less efficient marketplace than retail software by all definitions of the term "efficient markets".
- Cloud computing, SaaS and open source threaten to disrupt the jobs and careers of a large number of people. In the long term they will create many more jobs, but in the short term, people used to a comfortable living from enterprise software (making, selling, or writing about it) will actively and passively resist these changes to the paradigms of the current software status quo.
- Open source companies don't dance and don't play ball. They prefer to hire 4 more college grads than commission 2 more white papers.
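As a rough illustration of the quantification mentioned in the list above, here is a minimal R sketch; the quarterly sponsorship and coverage numbers are entirely hypothetical placeholders, and a real analysis would need actual sponsorship disclosures and coverage counts:

```r
# Hypothetical quarterly data: sponsorship dollars a vendor paid an analyst
# firm, and the number of favourable mentions that firm published.
sponsorship <- c(0, 0, 50000, 50000, 75000, 75000, 100000, 100000)
coverage    <- c(1, 2, 4, 5, 6, 6, 8, 9)

# A simple first cut: overall correlation, plus a cross-correlation to see
# whether coverage tends to follow sponsorship with a lag.
cor(sponsorship, coverage)
ccf(sponsorship, coverage, lag.max = 2, plot = FALSE)
```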
And the following, with slight changes, is from a comment I made on a fellow blog:
- While the paradigm of how to create new software has evolved from primarily silo-driven R&D departments to a broader collaborative effort, the biggest drawback is that software marketing has not evolved.
- If you want your own version of the open source community editions to be more popular, some standardization is necessary for the corporate decision makers, and we need better marketing paradigms.
- While code creation is crowdsourced, solution implementation cannot be crowdsourced. Customers want solutions to a problem not code.
- Just as open source as a production and licensing paradigm threatens to disrupt enterprise software, it will lead to newer ways of marketing software, given the hostility of the existing status quo.