China - United States - The Third Opium War

U.S. troops in China during the Boxer Rebellion...
Image via Wikipedia

A brief glance through http://www.treasury.gov/resource-center/data-chart-center/tic/Documents/mfh.txt shows that while the US added $600 billion of debt during the past year, China actually reduced its exposure by $50 billion.
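If you want to eyeball the table yourself, here is a quick sketch in R. The report is a fixed-width text file, not a CSV, so this just pulls the raw lines for inspection rather than parsing them:

# Fetch the Treasury TIC "Major Foreign Holders" report and print the top rows
mfh <- readLines("http://www.treasury.gov/resource-center/data-chart-center/tic/Documents/mfh.txt")
head(mfh, 25)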

So who has been financing US debt for the past year? It is Japan, eager to keep its currency down, and the United Kingdom, which has pumped in an extra $300 billion of T-Bills.

See the whole table at the official link above or at goo.gl/qMugp

—————————————————————————————-

China still remembers the Opium Wars, in which the then-ruling Anglo-Saxon superpower used naval superiority to enforce trade and eventual political dependency. Is China unsure of the United States' brotherly intentions? They certainly seem to be putting their money that way.

http://en.wikipedia.org/wiki/Opium_Wars

Britain forced the Chinese government into signing the Treaty of Nanking and the Treaty of Tianjin, also known as the Unequal Treaties, which included provisions for the opening of additional ports to unrestricted foreign trade, for fixed tariffs, for the recognition of both countries as equal in correspondence, and for the cession of Hong Kong to Britain. The British also gained extraterritorial rights. Several countries followed Britain and sought similar agreements with China. Many Chinese found these agreements humiliating, and these sentiments contributed to the Taiping Rebellion (1850-1864), the Boxer Rebellion (1899-1901), and the downfall of the Qing Dynasty in 1912, putting an end to dynastic China.

———————————————————————————————-

The Koreans can always be depended on to provide the first shot in any conflict- and though an Anglo-US-Chinese conflict would be expensive, I guess as long as the cost of the outstanding debt with China is less than the cost of a brief techno-war, we would see interesting games in this neighborhood. Note that China restricts major trade with the United States, particularly in software and internet services (like web advertising, Facebook, Twitter), and represents a lucrative market for big pharma (especially in psychiatric drugs) and big tech once it reforms its intellectual property rights. Software would be the opium of the 21st century- if the Chinese resist Treasury Bills as their poppy flowers. The widespread Western media coverage of schoolchildren murdered by psychopaths is also a trade tactic to encourage the flow of more US-made medicine into the Chinese market.

It would also help create an economic revival in the United States to exaggerate the Chinese threat (remember Sputnik) and build up its own cyber spending. Any military or cyber humiliation for the ruling party in China can help create a political vacuum for more malleable and agreeable alternatives to emerge.

(to be continued)

 

How to Analyze Wikileaks Data – R SPARQL

Logo for R
Image via Wikipedia

Drew Conway- one of the very, very few Project R voices I used to respect until recently- declared on his blog http://www.drewconway.com/zia/

Why I Will Not Analyze The New WikiLeaks Data

and followed it up with how HE analyzed the post announcing the non-analysis.

“If you have not visited the site in a week or so you will have missed my previous post on analyzing WikiLeaks data, which from the traffic and 35 Comments and 255 Reactions was at least somewhat controversial. Given this rare spotlight I thought it would be fun to use the infochimps API to map out the geo-location of everyone that visited the blog post over the last few days. Unfortunately, after nearly two years with the same web hosting service, only today did I realize that I was not capturing daily log files for my domain”

Anyways- non-American users of the R Project can analyze the Wikileaks data using the R SPARQL package. I would advise American friends not to use this approach or attempt to analyze the data at all, because technically the data is still classified and its possession is illegal (which is the reason Federal employees and organizations receiving federal funds have been advised not to use this or any WikiLeaks dataset).

https://code.google.com/p/r-sparql/

Overview

R is a programming language designed for statistics.

R SPARQL allows you to run SPARQL queries inside R and store the results as an R data frame.

The main objective is to allow the integration of Ontologies with Statistics.

It requires Java and rJava to be installed.

Example (in R console):

> library(sparql)
> data <- query("<SPARQL query>", "<RDF file or remote SPARQL endpoint>")

and point the second argument at data in a remote SPARQL endpoint, such as the cablegate dataset listed at http://www.ckan.net/package/cablegate
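As a sketch of what that could look like- note that the endpoint URL and the cable: vocabulary below are purely illustrative assumptions, so check the CKAN page for the actual endpoint and predicates:

# Illustrative only: the endpoint URL and cable: predicates are assumptions
library(sparql)
q <- "PREFIX cable: <http://example.org/cablegate#>
      SELECT ?classification (COUNT(?cable) AS ?n)
      WHERE { ?cable cable:classification ?classification }
      GROUP BY ?classification"
result <- query(q, "http://example.org/cablegate/sparql")
print(result)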

SPARQL is an easy language to pick up, but dammit, I am not supposed to blog on my vacations.

http://code.google.com/p/r-sparql/wiki/GettingStarted

Getting Started

1. Installation

1.1 Make sure Java is installed and is the default JVM:

$ sudo apt-get install sun-java6-bin sun-java6-jre sun-java6-jdk
$ sudo update-java-alternatives -s java-6-sun

1.2 Configure R to use the correct version of Java

$ sudo R CMD javareconf

1.3 Install the rJava library

$ R
> install.packages("rJava")
> q()

1.4 Download and install the sparql library

Download: http://code.google.com/p/r-sparql/downloads/list

$ R CMD INSTALL sparql-0.1-X.tar.gz

2. Executing a SPARQL query

2.1 Start R

# Load the library
library(sparql)

# Run the query
result <- query("SELECT ... ", "http://...")

# Print the result
print(result)

3. Examples

3.1 The Query can be a string or a local file:

query("SELECT ?date ?number ?season WHERE {  ... }", "local-file.rdf")
query("my-query.rq", "local-file.rdf")

The package will detect if my-query.rq exists and will load it from the file.

3.2 The uri can be a file or a URL (for remote queries):

query("SELECT ... ","local-file.db")
query("SELECT ... ","http://dbpedia.org/sparql")

3.3 Get some examples here: http://code.google.com/p/r-sparql/downloads/list
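For a concrete remote example, here is a sketch of a query against the public DBpedia endpoint, using the query() call documented above (dbpedia.org/ontology/capital is a standard DBpedia property):

# Ten countries and their capitals from DBpedia
library(sparql)
result <- query(
  "SELECT ?country ?capital
   WHERE { ?country <http://dbpedia.org/ontology/capital> ?capital }
   LIMIT 10",
  "http://dbpedia.org/sparql")
print(result)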

SPARQL Tutorial-

http://openjena.org/ARQ/Tutorial/index.html

Also read-

http://webr3.org/blog/linked-data/virtuoso-6-sparqlgeo-and-linked-data/

and from the favorite blog of Project R- also known as the NY Times:

http://bits.blogs.nytimes.com/2010/11/15/sorting-through-the-government-data-explosion/?twt=nytimesbits

In May 2009, the Obama administration started putting raw government data on the Web. It started with 47 data sets. Today, there are more than 270,000 government data sets, spanning every imaginable category from public health to foreign aid.

RWui :Creating R Web Interfaces on the go

Here is a great R application created by the group at http://sysbio.mrc-bsu.cam.ac.uk

R Wui for creating R Web Interfaces

It has been there for some time now- but presumably R Apache is better known.

From-

http://sysbio.mrc-bsu.cam.ac.uk/Rwui/tutorial/Rwui_Rnews_final.pdf

The web application Rwui is used to create web interfaces  for running R scripts. All the code is generated automatically so that a fully functional web interface for an R script can be downloaded and up and running in a matter of minutes.

Rwui is aimed at R script writers who have scripts that they want people unversed in R to use. The script writer uses Rwui to create a web application that will run their R script. Rwui allows the script writer to do this without them having to do any web application programming, because Rwui generates all the code for them.

The script writer designs the web application to run their R script by entering information on a sequence of web pages. The script writer then downloads the application they have created and installs it on their own server.
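To make that concrete, here is a minimal sketch of the kind of R script you might wrap with Rwui. The names inputFile and threshold are illustrative stand-ins for input items you would declare when designing the application; the result files are what the generated web application would show on its results page:

# Illustrative script for an Rwui application; inputFile and threshold
# stand in for input items declared when designing the application
data <- read.csv(inputFile)
filtered <- data[data$value > threshold, ]

# Result files picked up by the generated web application
write.csv(filtered, "filtered-results.csv", row.names = FALSE)
png("value-histogram.png")
hist(data$value, main = "Distribution of values")
dev.off()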

http://sysbio.mrc-bsu.cam.ac.uk/Rwui/tutorial/Technical_Report.pdf

Features of web applications created by Rwui

  1. Whole range of input items available if required – text boxes, checkboxes, file upload etc.
  2. Facility for uploading of an arbitrary number of files (for example, microarray replicates).
  3. Facility for grouping uploaded files (for example, into ‘Diseased’ and ‘Control’ microarray data files).
  4. Results files displayed on results page and available for download.
  5. Results files can be e-mailed to the user.
  6. Interactive results files using image maps.
  7. Repeat analyses with different parameters and data files – new results added to results list, as a link to the corresponding results page.
  8. Real time progress information (text or graphical) displayed when running the application.

Requirements

In order to use the completed web applications created by Rwui you will need:

  1. A Java webserver such as Tomcat version 5.5 or later.
  2. Java version 1.5
  3. R – a version compatible with your R script(s).

Using Rwui

Using Rwui to create a web application for an R script simply involves:

  1. Entering details about your R script on a sequence of web pages.
  2. Rwui is quite flexible so you can backtrack, edit and insert, as you design your application.
  3. Rwui then generates the web application, which is Java based and platform independent.
  4. The application can be downloaded either as a .zip or .tgz file.
  5. Unpacked, the download contains all the source code and a .war file.
  6. Once the .war file is copied to the Tomcat webapps directory, the application is ready to use (see the sketch after this list).
  7. Application details are saved in an ‘application definition file’ for reuse and modification.
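As a rough sketch of that deployment step (the archive name and the Tomcat webapps path below are illustrative and vary by installation):

$ tar xzf myapp.tgz
$ cp myapp.war /var/lib/tomcat5.5/webapps/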
Interested? Go click and check out a new web app from http://sysbio.mrc-bsu.cam.ac.uk/Rwui/ in a matter of minutes.
Also see

Tale of Two Wiki Appeals

Mirror, Mirror on the Wall-

Who's the Prettiest Wiki of them All?



Brief Interview Timo Elliott

Here is a brief interview with Timo Elliott, a 19-year veteran of SAP BusinessObjects.

Ajay- What were the top 5 events in Business Integration and Data Visualization services you saw in 2010, and what are the top three trends you see in these for 2011?


Timo-

Top five events in 2010:

(1) Back to strong market growth. IT spending plummeted last year (BI continued to grow, but more slowly than previous years). This year, organizations reopened their wallets and funded new analytics initiatives — all the signs indicate that BI market growth will be double that of 2009.

(2) The launch of the iPad. Mobile BI has been around for years, but the iPad opened the floodgates of organizations taking a serious look at mobile analytics — and the easy-to-use, executive-friendly iPad dashboards have considerably raised the profile of analytics projects inside organizations.

(3) Data warehousing got exciting again. Decades of incremental improvements (column databases, massively parallel processing, appliances, in-memory processing…) all came together with robust commercial offers that challenged existing data storage and calculation methods. And new “NoSQL” approaches, designed for the new problems of massive amounts of less-structured web data, started moving into the mainstream.

(4) The end of Google Wave, the start of social BI. Google Wave was launched as a rethink of how we could bring together email, instant messaging, and social networks. While Google decided to close down the technology this year, it has left its mark, notably by influencing the future of “social BI”, with several major vendors bringing out commercial products this year.

(5) The start of the big BI merge. While several small independent BI vendors reported strong growth, the major trend of the year was consolidation and integration: the BI megavendors (SAP, Oracle, IBM, Microsoft) increased their market share (sometimes by acquiring smaller vendors, e.g. IBM/SPSS and SAP/Sybase) and integrated analytics with their existing products, blurring the line between BI and other technology areas.

Top three trends next year:

(1) Analytics, reinvented. New DW techniques make it possible to do sub-second, interactive analytics directly against row-level operational data. Now BI processes and interfaces need to be rethought and redesigned to make best use of this — notably by blurring the distinctions between the “design” and “consumption” phases of BI.

(2) Corporate and personal BI come together. The ability to mix corporate and personal data for quick, pragmatic analysis is a common business need. The typical solution to the problem — extracting and combining the data into a local data store (either Excel or a departmental data mart) — pleases users, but introduces duplication and extra costs and makes a mockery of information governance. 2011 will see the rise of systems that let individuals and departments load their data into personal spaces in the corporate environment, allowing pragmatic analytic flexibility without compromising security and governance.

(3) The next generation of business applications. Where are the business applications designed to support what people really do all day, such as implementing this year’s strategy, launching new products, or acquiring another company? 2011 will see the first prototypes of people-focused, flexible, information-centric, and collaborative applications, bringing together the best of business intelligence, “enterprise 2.0”, and existing operational applications.

And one that should happen, but probably won’t:

(4) Intelligence = Information + PEOPLE. Successful analytics isn’t about technology — it’s about people, process, and culture. The biggest trend in 2011 should be organizations spending the majority of their efforts on user adoption rather than technical implementation.

About- http://timoelliott.com/blog/about

Timo Elliott is a 19-year veteran of SAP BusinessObjects, and has spent the last twenty years working with customers around the world on information strategy.

He works closely with SAP research and innovation centers around the world to evangelize new technology prototypes.

His popular Business Analytics and SAPWeb20 blogs track innovation in analytics and social media, including topics such as augmented corporate reality, collaborative decision-making, and social network analysis.

His PowerPoint Twitter Tools lets presenters see and react to tweets in real time, embedded directly within their slides.

A popular and engaging speaker, Elliott presents regularly to IT and business audiences at international conferences, on subjects such as why BI projects fail and what to do about it, and the intersection of BI and enterprise 2.0.

Prior to Business Objects, Elliott was a computer consultant in Hong Kong and led analytics projects for Shell in New Zealand. He holds a first-class honors degree in Economics with Statistics from Bristol University, England. He blogs at http://timoelliott.com/blog/ (one of the best-designed blogs in BI). You can see more about him on his personal web site here and his photo/sketch blog here. You should follow Timo at http://twitter.com/timoelliott

Art Credit- Timo Elliott

Related Articles

Why social media is a one-way street- you can't close accounts

Update to https://decisionstats.com/2010/11/24/deleting-twitter-facebooklinkedin-accepting-life/

You can't DELETE a Facebook account- it gets deactivated, NOT DELETED.

You have to delete photo albums one by one; and for folders like Profile Photos, Wall Photos, or Mobile Uploads, you can't delete the folder itself- you have to delete those photos one by one.

So I had to delete 1100 friends, delete all the Facebook Pages I created, and then download the account (photos), which was now an easier-to-download zip file of 92 MB. And I deleted all the 250+ Likes I had given to things I had flippantly liked- it was horrifying, because if you accumulate all that info, it basically gives you a big lead in estimating my psychological profile- and that's not stuff I want to be used for selling.

Then I deactivated it- but no, like Lord Voldemort's horcruxes, you can't delete it all.

And Facebook shows you ads even if you clean out your profile and your friends, and it can no longer see any preference for any product.

Facebook treats data like prisoners – even if you are released they WILL maintain your record.

Twenty years later they would be able to blackmail all the people of all countries in the WORLD- with that much info.

And LinkedIn is still getting deleted- I got this email from them-

Basically, if you have an active group for which you are the only owner, you can't delete yourself- you have to delete the group or find another owner.

Sigh!

If it took me 2 days to download all my info and wipe my social media for just 3 years of using it (albeit at an expert enough level to act as a social media consultant to some companies), I am not sure what today's generation of young people, who jump to Twitter and Facebook at early ages, would face after say 5-10 years of data is collected on them. Lots of ads, I guess!