Possible Digital Disruptions by Cyber Actors in USA Electoral Cycle

Here are some possible digital disruptions that threaten the electoral cycle currently underway in the United States of America:

1) Limited denial-of-service attacks (lasting, say, 5-8 minutes) on fundraising websites, trying to fly under the radar of network administrators to deny the targeted fundraising website a small percentage of funds. Money remains critical in the world's most expensive political market. Even a 5% drop in online fundraising capacity can cripple a candidate.

2) Limited man-in-the-middle attacks on ground volunteers to disrupt, intercept and manipulate communication flows. Basically, cyber attacks on vulnerable ground volunteers in critical counties or battleground/swing states (like Florida).

3) Electromagnetic disruptions of electronic voting machines in critical counties or swing states (like Florida) to either disrupt, manipulate, or create an impression that some manipulation has been done.

4) Use of search engine flooding (for search-engine de-optimization of rival candidates' keywords) and social media flooding to disrupt the listening capabilities of sentiment analysis.

5) Selected leaks (including using digital means to create authentic, fake or edited collateral) timed to embarrass rivals or influence voters; these can be geo-coded and mass deployed.

6) Using Internet communications to selectively spam or influence independent or opinionated voters through email, short messaging service, chat channels and social media.

7) Disruption of the Hillary for President 2016 campaign by Anonymous/WikiLeaks-sympathetic hacktivists.
Random Sampling a Dataset in R

A common task in business analytics is to take a random sample of a very large dataset in order to test your analytics code. Note that most business analytics datasets are data.frames (records as rows and variables as columns) in structure, or database-bound. This is partly due to the legacy of traditional analytics software.

Here is how we do it in R-

• Referring to parts of a data.frame rather than the whole dataset

Use square brackets to reference variable columns and rows.

The notation dataset[i,j] refers to the element in the ith row and jth column.

The notation dataset[i,] refers to all elements in the ith row, or a record for a data.frame.

The notation dataset[,j] refers to all elements in the jth column, or a variable for a data.frame.

For a data.frame called dataset:

> nrow(dataset) #This gives number of rows

> ncol(dataset) #This gives number of columns

An example of correlation between only a few variables in a data.frame:

> cor(dataset1[,4:6])
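As a concrete, runnable variant, here is the same idea on the built-in mtcars data frame (used here purely for illustration):

> cor(mtcars[,1:3]) #correlation matrix of the first three variables: mpg, cyl and disp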

Splitting a dataset into test and control:

ts.test=dataset2[1:200,] #First 200 rows (note the comma, to select rows rather than columns)

ts.control=dataset2[201:275,] #Next 75 rows

• Sampling

Random sampling enables us to work on a smaller subset of the whole dataset.

Use sample() to create a random permutation of a vector, or to draw a random subset of it.

Suppose we want to take a 5% sample of a data frame with no replacement.

Let us create a dataset called ajay of random numbers:

ajay=matrix( round(rnorm(200, 5,15)), ncol=10)

#This is the kind of code line that frightens most MBAs!!

Note we use the round function to round off values.

ajay=as.data.frame(ajay)

> nrow(ajay)

[1] 20

> ncol(ajay)

[1] 10

This is a typical business data scenario, when we want to select only a few records to do our analysis (or test our code), but keep all the columns for those records. Let us assume we want to sample only 5% of the whole data so we can run our code on it.

Then the number of rows in the new object will be 0.05*nrow(ajay). That will be the size of the sample.

new_rows=sample(nrow(ajay),replace=F,size=0.05*nrow(ajay))

Here sample() chooses a random subset of the row indices of the original object, using the size parameter. We use replace=FALSE (or F) so that the same row is not picked again and again. new_rows is thus a 5% sample of the existing row indices.

Then, using square brackets and ajay[new_rows,], we get:

b=ajay[new_rows,]

You can change the percentage from 5% to whatever you want accordingly.
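For reproducibility (so your random sample comes out the same on every run), you can set a seed first. Here is a minimal end-to-end sketch of the steps above; the seed value 42 is an arbitrary assumption, not part of the original walkthrough.

set.seed(42) #fix the random number generator so results repeat
ajay=as.data.frame(matrix(round(rnorm(200,5,15)),ncol=10))
new_rows=sample(nrow(ajay),replace=F,size=0.05*nrow(ajay))
b=ajay[new_rows,] #a 5% random sample of rows, keeping all columns
nrow(b) #1 row, since 5% of 20 rows is 1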

SAS Institute Financials 2011

SAS Institute has released its financials for 2011 at http://www.sas.com/news/preleases/2011financials.html.

Revenue surged across all solution and industry categories. Software to detect fraud saw a triple-digit jump. Revenue from on-demand solutions grew almost 50 percent. Growth from analytics and information management solutions was double digit, as were gains from customer intelligence, retail, risk and supply chain solutions.

AJAY- As a private company, it is quite nice that they are willing to share so much information every year.

The graphics are nice (and the colors are much better than in 2010), but pie charts? Seriously, dude, there is no way to compare how much SAS revenue is shifting across geographies or even across industries. So my two cents is: lose the pie charts, and stick to line graphs for the share of revenue by country/industry, please.

In 2011, SAS grew staff 9.2 percent and reinvested 24 percent of revenue into research and development.

AJAY- So that means 654 million dollars spent on research and development. I wonder if SAS has considered investing in much smaller startups (rather than its traditional strategy of doing all research in-house or completely acquiring a smaller company).

Even a small investment of, say, 5-10 million USD in open source, or even in PhD-level research projects, could greatly increase the ROI on that.
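As a quick back-of-the-envelope check of that $654 million figure (assuming SAS's reported 2011 revenue of roughly $2.725 billion, which is not quoted above):

revenue_2011=2.725e9 #assumed SAS 2011 revenue in USD, roughly $2.725 billion
rd_share=0.24 #24 percent of revenue reinvested in R&D, per the release
revenue_2011*rd_share #about 654 million USD, matching the figure above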

Analyzing a private company's financials is much more fun than analyzing a public company's, and I remember the words of my finance professor ("dig, dig"), so let us compare the 2011 results with the 2010 results.

http://www.sas.com/news/preleases/2010financials.html

The percentage invested in R&D is exactly the same (24%), and the percentages of revenue earned from each geography are exactly the same. So even though revenue growth increased from 5.2% to 9% in 2011, both the geographic spread of revenues and the share of R&D costs remained EXACTLY the same.

The Americas accounted for 46 percent of total revenue; Europe, Middle East and Africa (EMEA) 42 percent; and Asia Pacific 12 percent.

Overall, I think SAS retains about a 35% market share (despite all that noise from IBM, SAS clones and open source) because they are good at providing solutions customized for industries (instead of just software products), because the market for analytics is not saturated (it seems to be growing faster than 12%, or is it?), and because of SAS's ability to attract and retain the best analytical talent (which, in a non-American tradition for a software company, means no stock options but job security and great benefits; SAS remains almost Japanese in its HR practices).

In 2010, SAS grew staff by 2.4 percent; in 2011, SAS grew staff by 9 percent.

But I liked the directional statement made here, and I think that better design interfaces and algorithmic and computational efficiencies should increase analytical time and time to think about the business, and reduce data management time further!

“What would you do with the extra time if your code ran in two minutes instead of five hours?” Goodnight challenged.

Does Facebook Deserve a $100 Billion Valuation?

Some questions on my mind as I struggle to decide whether to bet my money and pension savings on the Facebook IPO:

1) Revenue mix: What percentage of Facebook's revenues comes from banner ads versus gaming partners like Zynga? How dependent is Facebook on gaming partners? (Zynga has Google as an investor.) What mix of revenue depends on privacy-regulating countries (like those in Europe) versus countries like the USA?

2) Do 800 million Facebook users mean a $100 billion valuation? That is a valuation of $125 per user in terms of lifetime customer value (NPV). Since ad revenue is itself a percentage of actual goods and services sold, how much worth of goods and services do consumers have to buy per capita to generate $125 worth of ads for FB? E.g., if companies spend 5% of product cost on Facebook ads, does that mean each FB account is expected to buy $2,500 worth of goods from the Internet and from Facebook (assuming they also buy from Amazon etc.)? A quick arithmetic sketch follows this list.

3) Corporate governance: Unlike Google, Facebook has faced troubling questions of ethics from the day it started. These include charges of intellectual property theft, but also non-transparent FB stock option pricing in secondary markets before the IPO, private placement by Wall Street bankers like Goldman Sachs, and major investments by Russian Internet media corporations. (Read http://money.cnn.com/2011/01/03/technology/facebook_goldman/index.htm)

4) Retention of key employees post-IPO: Key employees at Google are actually ex-Microsofties. Key FB staff are ex-Google people. Where will the key FB people go when they get bored and rich after the IPO?

5) Does the macroeconomic condition justify the premium and private equity multiple of Facebook?
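As promised in point 2 above, a rough sketch of the per-user arithmetic (the 5% ad-spend share is the assumption from the text, not a measured figure):

valuation=100e9 #proposed valuation in USD
users=800e6 #Facebook user base
per_user=valuation/users #about 125 USD of value per user
ad_share=0.05 #assumed share of product cost spent on FB ads
per_user/ad_share #about 2500 USD of goods each user must buy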

Will FB be the next Google (in terms of investor returns) or will it be like Groupon? I suspect the answer is: it depends on the market discounting these assumptions while factoring in sentiment (as well as the unloading of stock by a large number of FB stockholders in week 1).

Baby, you are a rich man. But not $100 billion rich. Yet. Maybe $80 billion isn't that bad.

Quantitative Modeling for Arbitrage Positions in Ad Keywords Internet Marketing

Assume you treat an ad keyword as an equity stock. There are slight differences in the cost of advertising for that keyword across various locations (Zurich vs Delhi) and various channels (Facebook vs Google). You get revenue if your website ranks naturally in organic search for the keyword, and you pay costs for getting traffic to your website for that keyword.
An arbitrage position is defined as a riskless profit when the cost of a keyword is less than the revenue from that keyword (see the sketch after the list below). We take examples of AdSense and AdWords primarily.
There are primarily two types of economic curves on which the commerce of the internet rests:
1) Cost curve: the cost of advertising to drive traffic into the website (Google AdWords, Twitter ads, Facebook ads, LinkedIn ads)
2) Revenue curve: the revenue from ads clicked by the incoming traffic on the website (like AdSense, LinkAds, banner ads, ad-sharing programs, in-game ads)
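Here is a minimal sketch in R of what such an arbitrage screen might look like. The keywords, costs and revenues below are made-up illustrative numbers, and a real screen would also net out conversion rates and platform fees.

#Hypothetical arbitrage screen: flag keywords whose expected revenue per
#incoming click exceeds the cost per click of buying that traffic.
keywords=data.frame(
  keyword=c("business intelligence software","flower shop for birthdays"),
  cpc_cost=c(12.50,0.80), #cost per click to buy traffic (USD, illustrative)
  rev_per_click=c(14.10,0.55)) #expected revenue per click (USD, illustrative)
keywords$spread=keywords$rev_per_click-keywords$cpc_cost
keywords$arbitrage=keywords$spread>0 #TRUE marks a riskless-profit position
keywords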
The cost and revenue curves are primarily dependent on two things:
1) Type of keyword, which is itself sub-dependent on
a) the location of the prospective customer, and
b) the net present value of the good or service to be eventually purchased.
For example, a keyword targeting sales of enterprise "business intelligence software" should ideally cost, say, X times as much as keywords for "flower shop for birthdays", where X is the multiple of the expected payoff from sales of business intelligence software divided by the expected payoff from sales of flowers (in a given location, say Daytona Beach, Florida or Austin, Texas).
2) Traffic volume, which is sub-dependent on time series effects:
a) seasonality (the annual shopping cycle), and
b) cyclicality (macroeconomic shifts in the time series).
The cost and revenue curves are not linear, and ideally should be continuous in a definite exponential or polynomial manner, but in actual reality they may have sharp inflections due to location, time, as well as web traffic volume thresholds.
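As a small illustration of that point, here is a sketch with simulated numbers (the step at a volume of 60 is an arbitrary assumption) of how a smooth polynomial fit misses a sharp inflection in a cost curve:

volume=1:100 #web traffic volume, in illustrative units
cpc=0.5+0.002*volume+ifelse(volume>60,0.8,0) #cost per click with a step inflection
fit=lm(cpc~poly(volume,3)) #the smooth cubic curve that theory would suggest
max(abs(residuals(fit))) #a sizable residual near the threshold betrays the inflection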
Type of keyword again: for example, keywords targeting sales of Eminem albums may shoot up in a non-linear manner if the musician were to die.
The third, and not so publicly known, component of both the cost and revenue curves is factoring in internet industry dynamics, including the relative market share of internet advertising platforms, as well as the percentage splits between the content creator and the ad-providing platform.
For example, based on internet advertising spend, people believe that internet advertising is currently heading for a duopoly with Google and Facebook as the top two players, while Microsoft/Skype/Yahoo and LinkedIn/Twitter offer niche options but primarily depend on price setting by Google/Bing/Facebook.
It is difficult to quantify the elasticity and efficiency of these market curves, as most literature and research on this is by in-house corporate teams, or by advisors, mentors or consultants to the primary market leaders, in a kind of incestuous fraternal hold on public academic research in this area.
It is recommended that:
1) a balance be found between the need for corporate secrecy to protect shareholder/stakeholder value maximization and the need for data liberation, to spur innovation and grow the internet ad pie faster;
2) cost and revenue curves between different keywords, times, locations and service providers be studied by quants for hedging internet ad inventory and/or choosing arbitrage positions. This kind of analysis is done for groups of stocks and commodities in the financial world, but as commerce grows on the internet this may need more specific and independent quants;
3) attention be paid to how cost and revenue curves mature with the level of sophistication of the underlying economy, as countries like Brazil, Russia, China, Korea, the US and Sweden may be in different stages of internet ad market evolution.
For example:
A study of cost and revenue curves for certain keywords across domains, across various ad providers and across various locations from 2003-2008 could help academia and research (much more than non-quantitative reports like top-ten lists of popular terms), while ensuring that current algorithmic weightings are not inadvertently given away.
Part 2 of this series will explore ways to create third-party resellers of keywords and to measure the impact of search and ad engine optimization based on keywords.

Oracle launches XBRL extension for financial domains

What is XBRL and how does it work?

http://www.xbrl.org/HowXBRLWorks/

How XBRL Works
XBRL is a member of the family of languages based on XML, or Extensible Markup Language, which is a standard for the electronic exchange of data between businesses and on the internet.  Under XML, identifying tags are applied to items of data so that they can be processed efficiently by computer software.

XBRL is a powerful and flexible version of XML which has been defined specifically to meet the requirements of business and financial information.  It enables unique identifying tags to be applied to items of financial data, such as ‘net profit’.  However, these are more than simple identifiers.  They provide a range of information about the item, such as whether it is a monetary item, percentage or fraction.  XBRL allows labels in any language to be applied to items, as well as accounting references or other subsidiary information.

XBRL can show how items are related to one another.  It can thus represent how they are calculated.  It can also identify whether they fall into particular groupings for organisational or presentational purposes.  Most importantly, XBRL is easily extensible, so companies and other organisations can adapt it to meet a variety of special requirements.

The rich and powerful structure of XBRL allows very efficient handling of business data by computer software.  It supports all the standard tasks involved in compiling, storing and using business data.  Such information can be converted into XBRL by suitable mapping processes or generated in XBRL by software.  It can then be searched, selected, exchanged or analysed by computer, or published for ordinary viewing.
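As a toy illustration of what such tagging looks like in practice, here is a minimal sketch in R using the xml2 package; the tag name, context and unit attributes below are invented for illustration and do not come from any official XBRL taxonomy.

library(xml2) #assumes the xml2 package is installed

doc=read_xml('<report xmlns:ifrs="http://example.com/taxonomy">
  <ifrs:NetProfit contextRef="FY2011" unitRef="USD" decimals="0">654000000</ifrs:NetProfit>
</report>')

item=xml_find_first(doc,".//*[local-name()='NetProfit']") #locate the tagged item
as.numeric(xml_text(item)) #the tagged monetary value, 654000000
xml_attr(item,"unitRef") #the unit information the tag carries, "USD"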

also see

http://www.xbrl.org/Example1/
and from-

http://www.oracle.com/us/dm/xbrlextension-354972.html?msgid=3-3856862107

With more than 7,000 new U.S. companies facing extensible business reporting language (XBRL) filing mandates in 2011, Oracle has released a free XBRL extension on top of the latest release of Oracle Database.

Oracle’s XBRL extension leverages Oracle Database 11g Release 2 XML to manage the collection, validation, storage, and analysis of XBRL data. It enables organizations to create one or more back-end XBRL repositories based on Oracle Database, providing secure XBRL storage and query-ability with a set of XBRL-specific services.

In addition, the extension integrates easily with Oracle Business Intelligence Suite Enterprise Edition to provide analytics, plus interactive development environments (IDEs) and design tools for creating and editing XBRL taxonomies.

The Other Side of XBRL
“While the XBRL mandate continues to grow, the feedback we keep hearing from the ‘other side’ of XBRL—regulators, academics, financial analysts, and investors—is that they lack sufficient tools and historic data to leverage the full potential of XBRL,” says John O’Rourke, vice president of product marketing, Oracle.

However, O’Rourke says this is quickly changing as XBRL mandates enter their third year—and more and more companies have to comply. While the new extension should be attractive to organizations that produce XBRL filings, O’Rourke expects it will prove particularly valuable to regulators, stock exchanges, universities, and other organizations that need to collect, analyze, and disseminate XBRL-based filings.

Outsourcing, a Bolt-on Solution, or Integrated XBRL Tagging
Until recently, reporting organizations had to choose between expensive third-party outsourcing and manual, in-house tagging with bolt-on solutions, both of which introduce the possibility of error.

In response, Oracle launched Oracle Hyperion Disclosure Management, which provides an XBRL tagging solution that is integrated with the financial close and reporting process for fast and reliable XBRL report submission—without relying on third-party providers. The solution enables organizations to

  • Author regulatory filings in Microsoft Office and “hot link” them directly to financial reporting systems so they can be easily updated
  • Graphically perform XBRL tagging at several levels—within Microsoft Office, within EPM system reports, or in the data source metadata
  • Modify or extend XBRL taxonomies before the mapping process, as well as set up multiple taxonomies
  • Create and validate final XBRL instance documents before submission
Assumptions on Guns

[Image via Wikipedia: a very crude yet functional homemade gun]

While sitting in Delhi, India, I sometimes notice that there is one big newsworthy gun-related incident in the United States every six months (the latest being the Gabrielle Giffords incident), along with talk of the mythical NRA (which seems just as powerful as the equally mythical Jewish American or Cuban American lobby). As someone who once trained to fire guns (.22 and SLR rifles, actually), who comes from a gun-friendly culture (namely Punjabi North Indian), and whose dad sometimes carried a gun as a police officer during his 30-plus years of service, I don't really like guns (except when they are in a movie). My 3-year-old son likes guns a lot (for some peculiar genetic reason, even though we are careful not to show him any violent TV or movies at all).

So to settle the whole guns-are-good versus guns-are-bad thing, I turned to the one resource I have: the Internet.

Here are some findings:

1) A lot of hard statistical data on guns is biased by the perspective of the writer. It reminds me of the old saying: lies, damned lies, and statistics.

2) There is not a lot of hard data in terms of universal research which can be quoted. Unlike, say, "lung cancer is caused by cigarettes", there is no broad research which is definitive in this regard.

3) American, European and Asian attitudes toward guns actually seem to be a function of historical availability, historic crime rates and cultural propensity for guns.

Switzerland and the United States are two extreme outlier examples in statistics on guns as a cause of violence.

4) A lot of old and outdated data is quoted selectively.

It seems you can fudge data about guns in the following ways:

1) Use relative per-capita numbers versus aggregate numbers (see the sketch after this list).

2) Compare and contrast gun numbers with crime numbers selectively

3) Remove drill-downs by type of firearm, like handguns, rifles, automatic and semi-automatic.
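To illustrate fudge #1 with numbers from the NationMaster table further below (the population figures are rough, circa-2009 assumptions I am adding here):

#Aggregate murder counts and per-capita rates tell different stories: by raw
#count the US dwarfs Germany and Costa Rica, but per 100,000 people Costa Rica
#is roughly level with the US and far above Germany.
guns=data.frame(
  country=c("United States","Germany","Costa Rica"),
  murders=c(9369,269,131), #firearm homicides, from the NationMaster data below
  population=c(307e6,82e6,4.3e6)) #approximate populations (assumption)
guns$per_100k=guns$murders/guns$population*1e5
guns[order(-guns$per_100k),] #ranking by rate, not by raw count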

Maybe I am being simplistic, but I found it easier to list credible data sources on guns than to summarize all assumptions about guns. Are guns good or bad? I don't know; it depends. Any research you can quote is welcome.

Data Sources on Guns, Firearms and Crime:

1) http://www.justfacts.com/guncontrol.asp

Ownership

* As of 2009, the United States has a population of 307 million people.[5]

* Based on production data from firearm manufacturers,[6] there are roughly 300 million firearms owned by civilians in the United States as of 2010. Of these, about 100 million are handguns.[7]

* Based upon surveys, the following are estimates of private firearm ownership in the U.S. as of 2010:

              Households With a Gun   Adults Owning a Gun   Adults Owning a Handgun
Percentage    40-45%                  30-34%                17-19%
Number        47-53 million           70-80 million         40-45 million

[8]

* A 2005 nationwide Gallup poll of 1,012 adults found the following levels of firearm ownership:

Category      Percentage Owning a Firearm
Households    42%
Individuals   30%
Male          47%
Female        13%
White         33%
Nonwhite      18%
Republican    41%
Independent   27%
Democrat      23%

[9]

* In the same poll, gun owners stated they own firearms for the following reasons:

Protection Against Crime 67%
Target Shooting 66%
Hunting 41%

2) NationMaster.com

http://www.nationmaster.com/graph/cri_mur_wit_fir-crime-murders-with-firearms


Showing latest available data.

Rank Countries Amount
# 1 South Africa: 31,918
# 2 Colombia: 21,898
# 3 Thailand: 20,032
# 4 United States: 9,369
# 5 Philippines: 7,708
# 6 Mexico: 2,606
# 7 Slovakia: 2,356
# 8 El Salvador: 1,441
# 9 Zimbabwe: 598
# 10 Peru: 442
# 11 Germany: 269
# 12 Czech Republic: 181
# 13 Ukraine: 173
# 14 Canada: 144
# 15 Albania: 135
# 16 Costa Rica: 131
# 17 Azerbaijan: 120
# 18 Poland: 111
# 19 Uruguay: 109
# 20 Spain: 97
# 21 Portugal: 90
# 22 Croatia: 76
# 23 Switzerland: 68
# 24 Bulgaria: 63
# 25 Australia: 59
# 26 Sweden: 58
# 27 Bolivia: 52
# 28 Japan: 47
# 29 Slovenia: 39
= 30 Hungary: 38
= 30 Belarus: 38
# 32 Latvia: 28
# 33 Burma: 27
# 34 Macedonia, The Former Yugoslav Republic of: 26
# 35 Austria: 25
# 36 Estonia: 21
# 37 Moldova: 20
# 38 Lithuania: 16
= 39 United Kingdom: 14
= 39 Denmark: 14
# 41 Ireland: 12
# 42 New Zealand: 10
# 43 Chile: 9
# 44 Cyprus: 4
# 45 Morocco: 1
= 46 Iceland: 0
= 46 Luxembourg: 0
= 46 Oman: 0
Total: 100,693
Weighted average: 2,097.8

DEFINITION: Total recorded intentional homicides committed with a firearm. Crime statistics are often better indicators of prevalence of law enforcement and willingness to report crime, than actual prevalence.

SOURCE: The Eighth United Nations Survey on Crime Trends and the Operations of Criminal Justice Systems (2002) (United Nations Office on Drugs and Crime, Centre for International Crime Prevention)

3) Bureau of Justice Statistics

see

http://bjs.ojp.usdoj.gov/dataonline/Search/Homicide/State/RunHomTrendsInOneVar.cfm

or the brand new website (with data till 2009), on which I CANNOT get gun crime specifically but can get total homicides:

http://www.ucrdatatool.gov/

Estimated  murder rate *
Year United States-Total

1960 5.1
1961 4.8
1962 4.6
1963 4.6
1964 4.9
1965 5.1
1966 5.6
1967 6.2
1968 6.9
1969 7.3
1970 7.9
1971 8.6
1972 9.0
1973 9.4
1974 9.8
1975 9.6
1976 8.7
1977 8.8
1978 9.0
1979 9.8
1980 10.2
1981 9.8
1982 9.1
1983 8.3
1984 7.9
1985 8.0
1986 8.6
1987 8.3
1988 8.5
1989 8.7
1990 9.4
1991 9.8
1992 9.3
1993 9.5
1994 9.0
1995 8.2
1996 7.4
1997 6.8
1998 6.3
1999 5.7
2000 5.5
2001 5.6
2002 5.6
2003 5.7
2004 5.5
2005 5.6
2006 5.7
2007 5.6
2008 5.4
2009 5.0
Notes: National or state offense totals are based on data from all reporting agencies and estimates for unreported areas.
* Rates are the number of reported offenses per 100,000 population
  • United States-Total:
    • The 168 murder and nonnegligent homicides that occurred as a result of the bombing of the Alfred P. Murrah Federal Building in Oklahoma City in 1995 are included in the national estimate.
    • The 2,823 murder and nonnegligent homicides that occurred as a result of the events of September 11, 2001, are not included in the national estimates.
  • Sources: FBI, Uniform Crime Reports as prepared by the National Archive of Criminal Justice Data


4) United Nations statistics from 2002 were too old in my opinion.

Wikipedia seems too broad-based to qualify as a research article but is easily accessible: http://en.wikipedia.org/wiki/Gun_violence_in_the_United_States

To actually buy a gun or see guns available for purchase in the United States, see http://www.usautoweapons.com/