How to balance your online advertising and your offline conscience

Google in 1998, showing the original logo
Image via Wikipedia

I recently found an interesting example of a website that makes a lot of money and yet is more efficient than any free or non-profit alternative. It is called Ecosia.

If you want to see a website that covers its administrative costs and has a transparent way of making the world better, this is a great example.

  • http://ecosia.org/how.php
  • HOW IT WORKS
  • You search with Ecosia.
  • Perhaps you click on an interesting sponsored link.
  • The sponsoring company pays Bing or Yahoo for the click.
  • Bing or Yahoo gives the bigger chunk of that money to Ecosia.
  • Ecosia donates at least 80% of this income to support WWF’s work in the Amazon.
  • If you like what we’re doing, help us spread the word!
  • Key facts about the park:

    • World’s largest tropical forest reserve (38,867 square kilometers, or about the size of Switzerland)
    • Home to about 14% of all amphibian species and roughly 54% of all bird species in the Amazon – not to mention large populations of at least eight threatened species, including the jaguar
    • Includes part of the Guiana Shield containing 25% of world’s remaining tropical rainforests – 80 to 90% of which are still pristine
    • Holds the last major unpolluted water reserves in the Neotropics, containing approximately 20% of all of the Earth’s water
    • One of the last tropical regions on Earth vastly unaltered by humans
    • Significant contributor to climatic regulation via heat absorption and carbon storage

     

    http://ecosia.org/statistics.php

    They claim to have donated 141,529.42 EUR !!!

    http://static.ecosia.org/files/donations.pdf

    Well, suppose you are the web admin of a very popular website, like Wikipedia.

    One way to meet server costs is to openly say: hey, I need to cover my costs, so I need some money.

    The other way is to use online advertising.

    I started mine with Google AdSense.

    Cost per mille (CPM) advertising gives you a very low return compared to contacting an ad sponsor directly.

    But it is a great data experiment, because you can monitor:

    which companies are likely to be advertised on your site (assume Google knows more about its algorithms than you ever will);

    which formats (banner, text or Flash) have what kind of conversion rates;

    what the expected payoff rates are for various keywords or companies (business intelligence software, predictive analytics software and statistical computing software are similar keywords, but they have different expected returns, if you remember your economics class).

     

    NOW: based on the above data, you know the minimum baseline to expect from a private advertiser versus a public ad network run by a search engine (like Google or Bing).

    Let's say you have 100,000 page views monthly, and assume one out of 1,000 page views leads to a click. Say the advertiser pays you $1 for every click (i.e., per 1,000 impressions).

    Then your expected revenue is $100. But if your clicks are priced at $2.50 per click, and your click-through rate rises to 3 per 1,000 impressions (both very moderate improvements that can be achieved by basic optimization of ad placement, ad type, graphics etc.), your new revenue is $750.
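
    As a quick sanity check, here is a minimal R sketch of that arithmetic; the traffic, click-through and price figures are just the illustrative numbers from this post.

    # Minimal sketch of the expected ad revenue arithmetic (illustrative figures only)
    ad_revenue <- function(monthly_views, ctr, price_per_click) {
      clicks <- monthly_views * ctr
      clicks * price_per_click
    }

    ad_revenue(100000, 1/1000, 1.0)  # baseline scenario: $100
    ad_revenue(100000, 3/1000, 2.5)  # after placement optimization: $750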

    Be a good Samaritan: you decide to share some of this with your audience, say 4 Amazon books per month (or 1 free Amazon book per week). That gives you a cost of about $200 and leaves you with some $550.

    Wait, it doesn't end there; Adam Smith's invisible hand moves on.

    You say, hmm, let me put $100 towards an annual paper-writing contest of $1,000, donate $200 to One Laptop per Child (or to the Amazon rainforests, or to Haiti, etc.), pay $100 for your upgraded server hosting, and put $350 into online advertising, say $200 for search engines and $150 for Facebook.

    Woah!

    Month 1 should see more people visiting you for the first time. If you have a good return rate (returning visitors as a percentage) and a low bounce rate (visits of less than 5 seconds), your traffic should see at least a 20% jump in new arrivals and 5-10% in long-term arrivals. Ignoring bounces, within three months you will have one of the following:

    1) an interesting case study, with statistics on online and social media advertising, tangible incentives for increasing community response, and some good data for study

    2) hopefully, better cost management of your server expenses

    3) very hopefully, a positive cash flow

     

    You could even set a percentage and share the monthly (or, better, annual) accounts with your readers and advertisers.

    Go ahead: change the world!

    The key paradigms here are:

    sharing your traffic and revenue openly with everyone,

    donating to a suitable cause,

    helping increase awareness of that cause, and

    basing the amounts on fixed percentages rather than absolute numbers, so that both your site and the cause are sustained for years.
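
    As a rough sketch of that last point (the percentages and revenue figures below are made-up numbers, not a recommendation), a fixed-percentage split scales automatically with whatever the month brings in:

    # Hypothetical fixed-percentage split of a month's ad revenue.
    # The rates and the revenue figures are illustrative only.
    split_revenue <- function(revenue,
                              rates = c(donation = 0.25, contest = 0.15,
                                        hosting = 0.15, advertising = 0.45)) {
      round(revenue * rates, 2)
    }

    split_revenue(750)  # a good month
    split_revenue(300)  # a lean month: every share shrinks proportionally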

    Privacy Browsing Extensions in Google Chrome

    Coat of arms of the Palaiologos dynasty
    Image via Wikipedia

    Using two Chrome extensions, Disconnect and AdBlock, you can be sure of having a very clean browsing experience. It is recommended especially if you don't like the automatic sharing of your personal preferences and cannot be bothered with the Byzantine maze of social media privacy fine print.

    https://chrome.google.com/extensions/detail/jeoacafpbcihiomhlakheieifhpjdfeo

    Disconnect by Brian Kennish

    (184) – 44,284 users – Weekly installs: 24,086

    Stop major third parties and search engines from tracking the webpages you go to and searches you do.

    * Search depersonalization is now optional and off by default. Click the “d” button then the “Depersonalize searches” checkbox to turn this feature on (or back off in case you have trouble getting to Google or Yahoo services). For help with anything else, see the known issues below and ask questions at http://j.mp/dnewgroup.
    
    
    If you’re a typical web user, you’re unintentionally sending your browsing and search history with your name and other personal information to third parties and search engines whenever you’re online.
    
    Take control of the data you share with Disconnect!
    
    From the developer of the top-10-rated Facebook Disconnect extension, Disconnect lets you:
    
    • Disable tracking by third parties like Digg, Facebook, Google, Twitter, and Yahoo, without requiring any setup or significantly degrading the usability of the web.
    
    • Truly depersonalize searches on search engines like Google and Yahoo (by blocking identifying cookies not just changing the appearance of results pages), while staying logged into other services — e.g., so you can search anonymously on Google and access iGoogle at once.
    
    • See how many resource and cookie requests are blocked, in real time
    
    
    and
    https://chrome.google.com/extensions/detail/gighmmpiobklfepjocnamgkkbiglidom
    

    AdBlock

    (6937) - 1,615,373 users - Weekly installs: 153,032
    The most popular Chrome extension, with over 1.5 million users! Blocks ads all over the web.
    Verified author: chromeadblock.com
    =================
    
    New in version 2.1: Translated into dozens of languages!
    New in version 2.0: Ads are blocked from downloading, instead of just being removed after the fact!
    
    =======================
    
    The official AdBlock For Chrome!  Block all advertisements on all web pages.  Your browser is automatically updated with additions to the filter: just click Install, then visit your favorite website and see the ads disappear!
    
    FAQs:1. This is the official AdBlock extension: the original ad blocker written from the ground up to be optimized in Chrome.  There's an unrelated, older Firefox project called Adblock Plus, and they're working on making a Chrome version out of the old AdThwart codebase.  At the moment AdBlock blocks some ads that AdThwart only hides, but they're working to improve it.  It's available at bit.ly/id2Gqx; if you have trouble with AdBlock, they're good guys and a fine alternative!

     

    China - United States - The Third Opium War

    U.S. troops in China during the Boxer Rebellion
    Image via Wikipedia

    A brief glance through http://www.treasury.gov/resource-center/data-chart-center/tic/Documents/mfh.txt

    shows that while the US added $600 billion of debt during the past year, the Chinese actually reduced their exposure by $50 billion.

    So who has been financing US debt for the past year? It is Japan, eager to keep its currency down, and the United Kingdom, which has pumped in an extra $300 billion of T-Bills.

    See the whole table at the official link above or at goo.gl/qMugp.

    —————————————————————————————-

    China still remembers the Opium Wars, in which the then-ruling Anglo-Saxon superpower used naval superiority to enforce trade and eventual political dependency. Is China unsure of the United States' nice, brotherly intentions? They certainly seem to be placing their money that way.

    http://en.wikipedia.org/wiki/Opium_Wars

    Britain forced the Chinese government into signing the Treaty of Nanking and the Treaty of Tianjin, also known as the Unequal Treaties, which included provisions for the opening of additional ports to unrestricted foreign trade, for fixed tariffs; for the recognition of both countries as equal in correspondence; and for the cession of Hong Kong to Britain. The British also gained extraterritorial rights. Several countries followed Britain and sought similar agreements with China. Many Chinese found these agreements humiliating and these sentiments contributed to the Taiping Rebellion (1850–1864), the Boxer Rebellion (1899–1901), and the downfall of the Qing Dynasty in 1912, putting an end to dynastic China.

    ———————————————————————————————-

    The Koreans can always be depended on to provide the first shot in any conflict, and though an Anglo-US-Chinese conflict would be expensive, I guess as long as the cost of the outstanding debt to China is less than the cost of a brief techno-war, we will see interesting games in this neighborhood. Note that China restricts major trade with the United States, particularly in software and internet services (like web advertising, Facebook, Twitter), and represents a lucrative market for big pharma (especially psychiatric drugs) and big tech once it reforms its intellectual property rights. Software would be the opium of the 21st century, if the Chinese resist Treasury Bills as their poppy flowers. The widespread Western media coverage of school-kid murders by psychopaths is also a trade tactic to encourage the flow of more US-made medicine into the Chinese market.

    It would also help create an economic revival in the United States to exaggerate the Chinese threat (remember Sputnik) and build up its own cyber spending. Any military or cyber humiliation for the ruling party in China can help create a political vacuum for more malleable and agreeable alternatives to emerge.

    (to be continued)

     

    How to Analyze Wikileaks Data – R SPARQL

    Logo for R
    Image via Wikipedia

    Drew Conway, one of the very few Project R voices I used to respect until recently, declared on his blog http://www.drewconway.com/zia/

    Why I Will Not Analyze The New WikiLeaks Data

    and followed it up with a post on how HE analyzed the post announcing the non-analysis.

    “If you have not visited the site in a week or so you will have missed my previous post on analyzing WikiLeaks data, which from the traffic and 35 Comments and 255 Reactions was at least somewhat controversial. Given this rare spotlight I thought it would be fun to use the infochimps API to map out the geo-location of everyone that visited the blog post over the last few days. Unfortunately, after nearly two years with the same web hosting service, only today did I realize that I was not capturing daily log files for my domain”

    Anyway, non-American users of the R Project can analyze the WikiLeaks data using the R SPARQL package. I would advise American friends not to use this approach or attempt to analyze the data at all, because technically the data is still classified and its possession is illegal (which is why Federal employees and organizations receiving federal funds have been advised not to use this or any WikiLeaks dataset).

    https://code.google.com/p/r-sparql/

    Overview

    R is a programming language designed for statistics.

    R Sparql allows you to run SPARQL queries inside R and store the results as an R data frame.

    The main objective is to allow the integration of Ontologies with Statistics.

    It requires Java and rJava installed.

    Example (in R console):

    > library(sparql)
    > data <- query("<SPARQL query>", "<RDF file or remote SPARQL Endpoint>")

    and the data is available as a remote SPARQL dataset at http://www.ckan.net/package/cablegate
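
    As a rough sketch of what such a query might look like from R (the endpoint URL and predicate names below are purely illustrative placeholders, not the actual Cablegate vocabulary):

    # Hypothetical example: the endpoint URL and predicates are placeholders only.
    # Requires Java, rJava and the sparql package (see Getting Started below).
    library(sparql)

    cables <- query(
      "SELECT ?cable ?date ?origin
       WHERE { ?cable <http://example.org/date> ?date ;
                      <http://example.org/origin> ?origin }
       LIMIT 10",
      "http://example.org/sparql")

    head(cables)  # results are returned as an R data frame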

    SPARQL is an easy language to pick up, but dammit, I am not supposed to blog on my vacations.

    http://code.google.com/p/r-sparql/wiki/GettingStarted

    Getting Started

    1. Installation

    1.1 Make sure Java is installed and is the default JVM:

    $ sudo apt-get install sun-java6-bin sun-java6-jre sun-java6-jdk
    $ sudo update-java-alternatives -s java-6-sun

    1.2 Configure R to use the correct version of Java

    $ sudo R CMD javareconf

    1.3 Install the rJava library

    $ R
    > install.packages("rJava")
    > q()

    1.4 Download and install the sparql library

    Download: http://code.google.com/p/r-sparql/downloads/list

    $ R CMD INSTALL sparql-0.1-X.tar.gz

    2. Executing a SPARQL query

    2.1 Start R

    # Load the library
    library(sparql)
    # Run the query
    result <- query("SELECT ... ", "http://...")
    # Print the result
    print(result)

    3. Examples

    3.1 The Query can be a string or a local file:

    query("SELECT ?date ?number ?season WHERE {  ... }", "local-file.rdf")
    query("my-query.rq", "local-file.rdf")

    The package will detect if my-query.rq exists and will load it from the file.

    3.3 The uri can be a file or an url (for remote queries):

    query("SELECT ... ","local-file.db")
    query("SELECT ... ","http://dbpedia.org/sparql")

    3.4 Get some examples here: http://code.google.com/p/r-sparql/downloads/list

    SPARQL Tutorial-

    http://openjena.org/ARQ/Tutorial/index.html

    Also read-

    http://webr3.org/blog/linked-data/virtuoso-6-sparqlgeo-and-linked-data/

    and from the favorite blog of Project R, also known as the NY Times:

    http://bits.blogs.nytimes.com/2010/11/15/sorting-through-the-government-data-explosion/?twt=nytimesbits

    In May 2009, the Obama administration started putting raw government data on the Web. It started with 47 data sets. Today, there are more than 270,000 government data sets, spanning every imaginable category from public health to foreign aid.
    

    RWui: Creating R Web Interfaces on the go

    Here is a great R application created by http://sysbio.mrc-bsu.cam.ac.uk

    R Wui for creating R Web Interfaces

    It has been around for some time now, but presumably R Apache is better known.

    From-

    http://sysbio.mrc-bsu.cam.ac.uk/Rwui/tutorial/Rwui_Rnews_final.pdf

    The web application Rwui is used to create web interfaces  for running R scripts. All the code is generated automatically so that a fully functional web interface for an R script can be downloaded and up and running in a matter of minutes.

    Rwui is aimed at R script writers who have scripts that they want people unversed in R to use. The script writer uses Rwui to create a web application that will run their R script. Rwui allows the script writer to do this without them having to do any web application programming, because Rwui generates all the code for them.

    The script writer designs the web application to run their R script by entering information on a sequence of web pages. The script writer then downloads the application they have created and installs it on their own server.
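
    For a sense of what gets wrapped, here is a hypothetical example of the kind of plain R script Rwui is aimed at; the input variable and output file names are assumptions for illustration, not Rwui's actual conventions:

    # Hypothetical analysis script of the sort a web interface could wrap.
    # Assumes the interface supplies the path of the uploaded file in 'datafile'
    # and collects files written to the working directory as downloadable results.
    datafile <- "input.csv"  # in a deployed application the interface would set this

    data <- read.csv(datafile)

    png("histogram.png")  # assumes the first column of the upload is numeric
    hist(data[[1]], main = "Uploaded data", xlab = names(data)[1])
    dev.off()

    capture.output(summary(data), file = "summary.txt")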

    http://sysbio.mrc-bsu.cam.ac.uk/Rwui/tutorial/Technical_Report.pdf

    Features of web applications created by Rwui

    1. Whole range of input items available if required – text boxes, checkboxes, file upload etc.
    2. Facility for uploading of an arbitrary number of files (for example, microarray replicates).
    3. Facility for grouping uploaded files (for example, into ‘Diseased’ and ‘Control’ microarray data files).
    4. Results files displayed on results page and available for download.
    5. Results files can be e-mailed to the user.
    6. Interactive results files using image maps.
    7. Repeat analyses with different parameters and data files – new results added to results list, as a link to the corresponding results page.
    8. Real time progress information (text or graphical) displayed when running the application.

    Requirements

    In order to use the completed web applications created by Rwui you will need:

    1. A Java webserver such as Tomcat version 5.5 or later.
    2. Java version 1.5
    3. R – a version compatible with your R script(s).

    Using Rwui

    Using Rwui to create a web application for an R script simply involves:

    1. Entering details about your Rscript on a sequence of web pages.
    2. Rwui is quite flexible so you can backtrack, edit and insert, as you design your application.
    3. Rwui then generates the web application, which is Java based and platform independent.
    4. The application can be downloaded either as a .zip or .tgz file.
    5. Unpacked, the download contains all the source code and a .war file.
    6. Once the .war file is copied to the Tomcat webapps directory, the application is ready to use.
    7. Application details are saved in an ‘application definition file’ for reuse and modification.
    Interested? Go click and check out a new web app from http://sysbio.mrc-bsu.cam.ac.uk/Rwui/ in a matter of minutes.

    Sugar CRM: Forrester Webinar

    Analytical CRM
    Image via Wikipedia

    https://sugarcrmevents.webex.com/mw0306lb/mywebex/default.do?nomenu=true&siteurl=sugarcrmevents&service=6&main_url=https://sugarcrmevents.webex.com/ec0605lb/eventcenter/event/eventAction.do%3FtheAction%3Ddetail%26confViewID%3D279191911%26siteurl%3Dsugarcrmevents%26%26%26

    Date and time:

    Thursday, December 2, 2010 11:00 am 
    Pacific Standard Time (San Francisco, GMT-08:00) 
    Change time zone

    Thursday, December 2, 2010 2:00 pm 
    Eastern Standard Time (New York, GMT-05:00)
    Thursday, December 2, 2010 7:00 pm
    Western European Time (London, GMT)
    Thursday, December 2, 2010 8:00 pm
    Europe Time (Berlin, GMT+01:00)
    Duration: 1 hour
    Description:
    Every organization wants to improve the way they manage their customer relationships. But until recently, adding robust CRM tools to your organization was a time-consuming and cost-prohibitive endeavor for many resource-constrained organizations. Until now. On December 2, join us to learn how new developments in technology like open source, cloud computing and web 2.0 are making it easier than ever to add a top-notch CRM system to your operations.

     

    This live webinar hosted by SugarCRM will feature Forrester Research, Inc. Vice President William Band, named one of CRM Magazine's 2007 Influential Leaders. Mr. Band will discuss the current state of the market, review the major trends affecting the CRM landscape, and discuss some criteria you can use to ensure your next CRM decision is the right one.

    In addition, all attendees of the live webinar will receive a complimentary download of a recent Forrester Wave™ Report! Register today!

    Speakers:

    William Band, Vice President, Forrester Research
    Martin Schneider, Sr. Director Communications, SugarCRM

    Who Should Attend:
    VP Sales, VP Marketing, CIOs, Heads of Customer Support and other technical decision makers

    Brief Interview Timo Elliott

    Here is a brief interview with Timo Elliott. Timo Elliott is a 19-year veteran of SAP BusinessObjects.

    Ajay- What are the top 5 events in Business Integration and Data Visualization services you saw in 2010, and what are the top three trends you see in these for 2011?


    Timo-

    Top five events in 2010:

    (1) Back to strong market growth. IT spending plummeted last year (BI continued to grow, but more slowly than previous years). This year, organizations reopened their wallets and funded new analytics initiatives — all the signs indicate that BI market growth will be double that of 2009.

    (2) The launch of the iPad. Mobile BI has been around for years, but the iPad opened the floodgates of organizations taking a serious look at mobile analytics — and the easy-to-use, executive-friendly iPad dashboards have considerably raised the profile of analytics projects inside organizations.

    (3) Data warehousing got exciting again. Decades of incremental improvements (column databases, massively parallel processing, appliances, in-memory processing…) all came together with robust commercial offers that challenged existing data storage and calculation methods. And new “NoSQL” approaches, designed for the new problems of massive amounts of less-structured web data, started moving into the mainstream.

    (4) The end of Google Wave, the start of social BI. Google Wave was launched as a rethink of how we could bring together email, instant messaging, and social networks. While Google decided to close down the technology this year, it has left its mark, notably by influencing the future of “social BI”, with several major vendors bringing out commercial products this year.

    (5) The start of the big BI merge. While several small independent BI vendors reported strong growth, the major trend of the year was consolidation and integration: the BI megavendors (SAP, Oracle, IBM, Microsoft) increased their market share (sometimes by acquiring smaller vendors, e.g. IBM/SPSS and SAP/Sybase) and integrated analytics with their existing products, blurring the line between BI and other technology areas.

    Top three trends next year:

    (1) Analytics, reinvented. New DW techniques make it possible to do sub-second, interactive analytics directly against row-level operational data. Now BI processes and interfaces need to be rethought and redesigned to make best use of this — notably by blurring the distinctions between the “design” and “consumption” phases of BI.

    (2) Corporate and personal BI come together. The ability to mix corporate and personal data for quick, pragmatic analysis is a common business need. The typical solution to the problem — extracting and combining the data into a local data store (either Excel or a departmental data mart) — pleases users, but introduces duplication and extra costs and makes a mockery of information governance. 2011 will see the rise of systems that let individuals and departments load their data into personal spaces in the corporate environment, allowing pragmatic analytic flexibility without compromising security and governance.

    (3) The next generation of business applications. Where are the business applications designed to support what people really do all day, such as implementing this year’s strategy, launching new products, or acquiring another company? 2011 will see the first prototypes of people-focused, flexible, information-centric, and collaborative applications, bringing together the best of business intelligence, “enterprise 2.0”, and existing operational applications.

    And one that should happen, but probably won’t:

    (4) Intelligence = Information + PEOPLE. Successful analytics isn’t about technology — it’s about people, process, and culture. The biggest trend in 2011 should be organizations spending the majority of their efforts on user adoption rather than technical implementation.

    About- http://timoelliott.com/blog/about

    Timo Elliott is a 19-year veteran of SAP BusinessObjects, and has spent the last twenty years working with customers around the world on information strategy.

    He works closely with SAP research and innovation centers around the world to evangelize new technology prototypes.

    His popular Business Analytics and SAPWeb20 blogs track innovation in analytics and social media, including topics such as augmented corporate reality, collaborative decision-making, and social network analysis.

    His PowerPoint Twitter Tools lets presenters see and react to tweets in real time, embedded directly within their slides.

    A popular and engaging speaker, Elliott presents regularly to IT and business audiences at international conferences, on subjects such as why BI projects fail and what to do about it, and the intersection of BI and enterprise 2.0.

    Prior to Business Objects, Elliott was a computer consultant in Hong Kong and led analytics projects for Shell in New Zealand. He holds a first-class honors degree in Economics with Statistics from Bristol University, England. He blogs at http://timoelliott.com/blog/ (one of the best-designed blogs in BI). You can also find his personal web site and photo/sketch blog online. You should follow Timo at http://twitter.com/timoelliott

    Art Credit- Timo Elliott
