Tag Archives: Machine
An awesome conference by an awesome piece of software. RapidMiner remains one of the leading enterprise-grade open source packages; it can help you do a lot of things, including flow-driven data modeling, web mining and web crawling, that even other software can't.
- Mining Machine 2 Machine Data (Katharina Morik, TU Dortmund University)
- Handling Big Data (Andras Benczur, MTA SZTAKI)
- Introduction of RapidAnalytics at Telenor (Telenor and United Consult)
- and more
Here is the complete program:
09:00 – 10:30
Ingo Mierswa (Rapid-I)
Resource-aware Data Mining or M2M Mining (Invited Talk)
Katharina Morik (TU Dortmund University)
NeurophRM: Integration of the Neuroph framework into RapidMiner
To be announced (Invited Talk)
Extending RapidMiner with Recommender Systems Algorithms
Implementation of User Based Collaborative Filtering in RapidMiner
Parallel Training / Workshop Session
10:30 – 11:00
11:00 – 12:30
Nearest-Neighbor and Clustering based Anomaly Detection Algorithms for RapidMiner
Customers’ LifeStyle Targeting on Big Data using Rapid Miner
Robust GPGPU Plugin Development for RapidMiner
Optimization Plugin For RapidMiner
Image Mining Extension – Year After
Incorporating R Plots into RapidMiner Reports
12:30 – 13:30
13:30 – 15:30
Parallel Training / Workshop Session
Introduction of RapidAnalytics Enterprise Edition at Telenor Hungary
Application of RapidMiner in Steel Industry Research and Development
A Comparison of Data-driven Models for Forecast River Flow
Portfolio Optimization Using Local Linear Regression Ensembles in Rapid Miner
An Octave Extension for RapidMiner
Processing Data Streams with the RapidMiner Streams-Plugin
Automated Creation of Corpuses for the Needs of Sentiment Analysis
Demonstration: News from the Rapid-I Labs
This short session demonstrates the latest developments from the Rapid-I labs and will show you how you can build powerful analysis processes and routines using those RapidMiner tools.
15:30 – 16:00
16:00 – 18:00
Book Presentation and Game Show
Data Mining for the Masses: A New Textbook on Data Mining for Everyone
Matthew North presents his new book “Data Mining for the Masses” introducing data mining to a broader audience and making use of RapidMiner for practical data mining problems.
Get some Coffee for free – Writing Operators with RapidMiner Beans
Meta-Modeling Execution Times of RapidMiner operators
Conference day ends at ca. 17:00.
Social Event (Conference Dinner)
Social Event (Visit of Bar District)
You should also have a look at https://rapid-i.com/rcomm2012f/index.php?option=com_content&view=article&id=65
The conference is in Budapest, Hungary, Europe.
(Disclaimer: Rapid Miner is an advertising sponsor of Decisionstats.com, in case you did not notice the two banner-sized ads.)
Here is an interview with James G Kobielus, who is the Senior Program Director, Product Marketing, Big Data Analytics Solutions at IBM. Special thanks to Payal Patel Cudia of IBM’s communication team for helping with the logistics for this.
Ajay- What are the specific parts of the IBM platform that deal with the three layers of Big Data – variety, velocity and volume?
James- Well, first of all, let’s talk about the IBM Information Management portfolio. Our big data platform addresses the three layers of big data to varying degrees, either together in a product, or two out of the three, or even one of the three aspects. We don’t have separate products for variety, velocity and volume.
Let us define these three layers. Volume refers to the hundreds of terabytes and petabytes of stored data inside organizations today. Velocity refers to the whole continuum from batch to real-time continuous and streaming data.
Variety refers to multi-structured data, from structured to unstructured files, managed and stored in a common platform and analyzed through common tooling.
For Volume, IBM has a highly scalable Big Data platform. This includes the Netezza and InfoSphere groups of products, and Watson-like technologies that can support petabyte volumes of data for analytics. But really the support of volume ranges across IBM’s Information Management portfolio, both on the database side and the advanced analytics side.
For real time Velocity, we have real time data acquisition. We have a product called IBM Infosphere, part of our Big Data platform, that is specifically built for streaming real time data acquisition and delivery through complex event processing. We have a very rich range of offerings that help clients build a Hadoop environment that can scale.
Our Hadoop platform is the most real time capable of all in the industry. We are differentiated by our sheer breadth, sophistication and functional depth and tooling integrated in our Hadoop platform. We are differentiated by our streaming offering integrated into the Hadoop platform. We also offer a great range of modeling and analysis tools, pretty much more than any other offering in the Big Data space.
Attached- Jim’s slides from Hadoop World
Ajay- Any plans for Mahout for Hadoop?
Jim- I can’t speak about product plans. We have plans but I can’t tell you anything more. We do have a feature in BigInsights called SystemML, a library for machine learning.
Ajay- How integral are acquisitions for IBM in the Big Data space (Netezza, Cognos, SPSS etc.)? Is it true that everything that you have in Big Data is acquired, or is the famous IBM R and D contributing here? (See a partial list of IBM acquisitions at http://www.ibm.com/investor/strategy/acquisitions.wss )
Jim- We have developed a lot on our own. We have the deepest R and D of anybody in the industry in all things Big Data.
For example – Watson has BigInsights Hadoop at its core. Apache Hadoop is the heart and soul of Big Data (see http://www-01.ibm.com/software/data/infosphere/hadoop/ ). A great deal of what makes BigInsights so differentiated is that not everything in it has been built by the Hadoop community.
We have built additions out of the necessity for security, modeling, monitoring, and governance capabilities into BigInsights to make it truly enterprise ready. That is one example of where we have leveraged open source and we have built our own tools and technologies and layered them on top of the open source code.
Yes of course we have done many strategic acquisitions over the last several years related to Big Data Management and we continue to do so. This quarter we have done 3 acquisitions with strong relevance to Big Data. One of them is Vivisimo (http://www-03.ibm.com/press/us/en/pressrelease/37491.wss ).
Vivisimo provides federated Big Data discovery, search and profiling capabilities to help you figure out what data is out there and what the relevance of that data is to your data science project – to help you answer the question of which data you should bring into your Hadoop cluster.
We also did Varicent, which is more performance management, and we did TeaLeaf, which is a customer experience solution provider, where customer experience management and optimization is one of the hot killer apps for Hadoop in the cloud. We have done a great many acquisitions that have a clear relevance to Big Data.
Netezza already had a massively parallel analytics database product with an embedded library of models called Netezza Analytics, and in-database capabilities to massively parallelize Map Reduce and other analytics management functions inside the database. In many ways, Netezza provided capabilities similar to what IBM had provided for many years under the Smart Analytics Platform (http://www-01.ibm.com/software/data/infosphere/what-is-advanced-analytics/ ).
There is a difference between Netezza and ISAS.
ISAS was built predominantly in-house over several years. If you go back a decade, IBM acquired Ascential Software, a product portfolio that became the heart and soul of IBM InfoSphere Information Manager, which is core to our Big Data platform. In addition to Netezza, IBM bought SPSS two years back. We already had data mining tools and predictive modeling in the InfoSphere portfolio, but we realized we needed to have the best of breed; SPSS provided that and so IBM acquired them.
Cognos- We had some BI reporting capabilities in the InfoSphere portfolio that we had built ourselves and had also acquired to varying degrees from prior acquisitions. But clearly Cognos was one of the best BI vendors, and we were lacking such a rich tool set for visualization and cubing in our product, and so for that reason we acquired Cognos.
There is also Unica, which is marketing campaign optimization and in many ways a killer app for Hadoop. Projects like that are driving many enterprises.
Ajay- How would you rank order these acquisitions in terms of strategic importance rather than date of acquisition or price paid?
Jim- Think of Big Data as an ecosystem that has components fitted to particular functions for data analytics and data management. Is the database the core, or the modeling tool the core, or the governance tools the core, or is the hardware platform the core? Everything is critically important. We would love to hear from you what you think has been most important. Each acquisition has played a critical role in building the deepest and broadest solution offering in Big Data. We offer the hardware, software, professional services, and the hosting service. I don’t think there is any validity to a rank order system.
Ajay- What are the initiatives regarding open source that the Big Data group has done or is planning?
Jim- What we are doing now: we are very much involved with the Apache Hadoop community. We continue to evolve the open source code that everyone leverages. We have built BigInsights on Apache Hadoop. With our BigInsights 1.4 we are, of all commercial distributions, the closest and most up to date in terms of version number to Apache Hadoop (HBase, HDFS, Pig etc.).
We have an R library integrated with BigInsights. We have an R library integrated with Netezza Analytics. There is support for R models within the SPSS portfolio. We already have a fair amount of support for R across the portfolio.
Ajay- What are some of the concerns (privacy, security, regulation) that you think can dampen the promise of Big Data?
Jim- There are no showstoppers; there is really a strong momentum. Some of the concerns within the Hadoop space are the immaturity of the technology, the immaturity of some of the commercial offerings out there that implement Hadoop, and the lack of standardization, in a formal sense, for Hadoop.
There is no Open Standards Body that declares or ratifies the latest version of Mahout, Map Reduce, HDFS etc. There is no industry consensus reference framework for layering these different sub-projects. There are no open APIs. There are no certifications or interoperability standards, or organizations to certify different vendors’ interoperability around a common API or framework.
The lack of standardization is troubling in this whole market. That creates risks for users because users are adopting multiple Hadoop products. There are lots of Hadoop deployments in the corporate world built around Apache Hadoop (purely open source). There may be no assurance that these multiple platforms will interoperate seamlessly. That’s a huge issue in terms of just magnifying the risk. And it increases the need for the end user to develop their own custom integrated code if they want to move data between platforms, or move map-reduce jobs between multiple distributions.
Also, governance is a consideration. Right now Hadoop is used for high volume ETL on multi-structured and unstructured data sources, or Hadoop is used for exploratory sandboxes for data scientists. These are important applications that make up the majority of Hadoop deployments. Some Hadoop deployments are stand-alone unstructured data marts for specific applications like sentiment analysis.
Hadoop is not yet ready for data warehousing. We don’t see a lot of Hadoop being used as an alternative to data warehouses for managing the single version of truth of system-of-record data. That day will come, but there needs to be out there in the marketplace a broader range of mature data governance mechanisms, master data management and data profiling products that enterprises can use to make sure the data inside their Hadoop clusters is clean and is the single version of truth. That day has not arrived yet.
One of the great things about IBM’s acquisition of Vivisimo is that a piece of that overall governance picture is discovery and profiling for unstructured data, and that has been done very well by Vivisimo for several years.
What we will see is that vendors such as IBM will continue to evolve security features inside our Hadoop platform. We will beef up our data governance capabilities for this new world of Hadoop as the core of Big Data, and we will continue to build up our ability to integrate multiple databases in our Hadoop platform, so that customers can use some data from a bit of Hadoop, some data from a bit of traditional relational data warehouse, and maybe some NoSQL technology for different roles within a very complex Big Data environment.
That latter hybrid deployment model is becoming standard across many enterprises for Big Data. A cause for concern is that when your Big Data deployment has a bit of Hadoop, a bit of NoSQL, a bit of EDW, and a bit of in-memory, there are no open standards or frameworks for putting it all together into a unified framework, not just for interoperability but also for deployment.
There needs to be a virtualization or abstraction layer for unified access to all these different Big Data platforms by the users/developers writing the queries, by administrators so they can manage data and resources and jobs across all these disparate platforms in a seamless unified way with visual tooling. That grand scenario, the virtualization layer is not there yet in any standard way across the big data market. It will evolve, it may take 5-10 years to evolve but it will evolve.
So, that’s the concern that can dampen some of the enthusiasm for Big Data Analytics.
You can read more about Jim at http://www.linkedin.com/pub/james-kobielus/6/ab2/8b0 or
follow him on Twitter at http://twitter.com/jameskobielus
You can read more about IBM Big Data at http://www-01.ibm.com/software/data/bigdata/
Google Translate has been a pioneer in using machine learning for translating various languages (and so has the awesome Google Transliterate).
I wonder if they can expand it to programming languages and not just human languages.
Some issues in converting/translating programming language code:
1) Paths referenced for stored objects
2) Object names should remain the same and not be translated
3) Multiple functions have multiple uses; sometimes function translation is not straightforward
I think all these issues are doable, solvable and, more importantly, profitable.
I look forward to the day an iOS developer can convert his code to Android app code by a simple upload and download.
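To make these issues concrete, here is a minimal, hypothetical sketch (in Python) of rule-based source-to-source translation. The function map and the example calls are made up for illustration; this is not a real translation service, just a toy showing where the issues above bite.

```python
# Hypothetical sketch: naive rule-based translation of R-style calls to Python calls.
# Illustrates the issues above: (1)/(2) paths and object names are carried over
# unchanged, while (3) many functions have no one-to-one mapping.

# Made-up mapping from R function names to rough Python/pandas equivalents.
FUNCTION_MAP = {
    "read.csv": "pandas.read_csv",    # close equivalent
    "summary": "DataFrame.describe",  # only roughly equivalent
}

def translate_call(r_call):
    """Translate a single R-style function call to a Python-style call, if known."""
    name, _, rest = r_call.partition("(")
    mapped = FUNCTION_MAP.get(name.strip())
    if mapped is None:
        # Issue (3): no straightforward translation; keep the original and flag it.
        return "# TODO: no direct equivalent for " + r_call
    # Issues (1) and (2): paths, arguments and object names are preserved, not translated.
    return mapped + "(" + rest

if __name__ == "__main__":
    print(translate_call('read.csv("mydata.csv")'))     # maps cleanly
    print(translate_call("glm(y ~ x, data = mydata)"))  # flagged as untranslatable
```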
Amazon gets some competition, and customers should see some relief – unless Google withdraws its commitment to these products after a few years of trying (like it often does now!).
Machine Type Pricing

| Configuration | Virtual Cores | Memory | GCEU * | Local disk | Price/Hour | $/GCEU/hour |
|---|---|---|---|---|---|---|
| n1-standard-1-d | 1 | 3.75GB *** | 2.75 | 420GB *** | $0.145 | 0.053 |
| n1-standard-8-d | 8 | 30GB | 22 | 2 x 1770GB | $1.16 | 0.053 |
| Egress type | Price |
|---|---|
| Egress to the same Zone | Free |
| Egress to a different Cloud service within the same Region | Free |
| Egress to a different Zone in the same Region (per GB) | $0.01 |
| Egress to a different Region within the US | $0.01 **** |
| Inter-continental Egress | At Internet Egress Rate |
| Internet Egress (Americas/EMEA destination), 0-1 TB in a month (per GB) | $0.12 |
| Internet Egress (APAC destination), 0-1 TB in a month (per GB) | $0.21 |
Persistent Disk Pricing

| Item | Price |
|---|---|
| Provisioned space | $0.10 GB/month |
| Snapshot storage ** | $0.125 GB/month |
| IO Operations | $0.10 per million |
IP Address Pricing

| Item | Price |
|---|---|
| Static IP address (assigned but unused) | $0.01 per hour |
| Ephemeral IP address (attached to instance) | Free |
** coming soon
*** 1GB is defined as 2^30 bytes
**** promotional pricing; eventually will be charged at internet download rates
Google Prediction API
Tap into Google’s machine learning algorithms to analyze data and predict future outcomes.
Leverage machine learning without the complexity
Use the familiar RESTful interface
Enter input in any format – numeric or text
Build smart apps
Learn how you can use Prediction API to build customer sentiment analysis, spam detection or document and email classification.
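For what it's worth, here is a hedged sketch of what a call to that RESTful interface might look like from Python. The endpoint path and payload shape are assumptions based on the Prediction API v1.5 documentation of that era, and the model ID and OAuth token are placeholders, not values from this post.

```python
# Hedged sketch: querying a trained Prediction API model over REST.
# MODEL_ID and TOKEN are placeholders; the URL/payload follow the assumed v1.5 shape.
import requests

TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"   # placeholder; the API uses OAuth 2.0
MODEL_ID = "my-sentiment-model"      # hypothetical trained model

resp = requests.post(
    "https://www.googleapis.com/prediction/v1.5/trainedmodels/%s/predict" % MODEL_ID,
    headers={"Authorization": "Bearer " + TOKEN},
    json={"input": {"csvInstance": ["this product is great"]}},
)
print(resp.json())  # e.g. a predicted label such as "positive"
```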
Google Translation API
Use Google Translate API to build multilingual apps and programmatically translate text in your webpage or application.
Translate text into other languages programmatically
Use the familiar RESTful interface
Take advantage of Google’s powerful translation algorithms
Build multilingual apps
Learn how you can use Translate API to build apps that can programmatically translate text in your applications or websites.
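As a quick illustration (my sketch, not something from the post), the v2 REST endpoint can be called with nothing more than an API key; the key below is a placeholder you would get from the Google APIs console.

```python
# Sketch: translating a string with the Translate API v2 REST endpoint.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

resp = requests.get(
    "https://www.googleapis.com/language/translate/v2",
    params={"key": API_KEY, "q": "Hello, world", "source": "en", "target": "hi"},
)
resp.raise_for_status()
print(resp.json()["data"]["translations"][0]["translatedText"])
```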
Analyze Big Data in the cloud using SQL and get real-time business insights in seconds using Google BigQuery. Use a fully-managed data analysis service with no servers to install or maintain.
Reliable & Secure
Complete peace of mind as your data is automatically replicated across multiple sites and secured using access control lists.
You can store up to hundreds of terabytes, paying only for what you use.
Run ad hoc SQL queries on multi-terabyte datasets in seconds.
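Here is a hedged sketch of such an ad hoc query, issued against BigQuery's REST API (the jobs.query method). PROJECT_ID and the OAuth token are placeholders, and the table is the public Shakespeare sample dataset that Google ships with BigQuery.

```python
# Sketch: running an ad hoc SQL query via BigQuery's jobs.query REST method.
import requests

TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"  # placeholder
PROJECT_ID = "your-project-id"      # placeholder

query = """
SELECT corpus, SUM(word_count) AS words
FROM [publicdata:samples.shakespeare]
GROUP BY corpus
ORDER BY words DESC
LIMIT 5
"""

resp = requests.post(
    "https://www.googleapis.com/bigquery/v2/projects/%s/queries" % PROJECT_ID,
    headers={"Authorization": "Bearer " + TOKEN},
    json={"query": query},
)
print(resp.json().get("rows"))
```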
Google App Engine
Create apps on Google’s platform that are easy to manage and scale. Benefit from the same systems and infrastructure that power Google’s applications.
Focus on your apps
Let us worry about the underlying infrastructure and systems.
See your applications scale seamlessly from hundreds to millions of users.
Premium paid support and 99.95% SLA for business users
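For flavor, a minimal sketch of an App Engine handler in the Python runtime of that era, assuming the bundled webapp2 framework (app.yaml routing omitted).

```python
# Minimal App Engine request handler sketch using the bundled webapp2 framework.
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        # App Engine spins instances of this app up and down automatically.
        self.response.headers["Content-Type"] = "text/plain"
        self.response.write("Hello from App Engine")

app = webapp2.WSGIApplication([("/", MainPage)], debug=True)
```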
The cyber-group known as Anonymous has now decided to fight for internet freedom for my 1.2 billion countrymen (India).
So in Operation India they go and knock some websites off. The immediate provocation:
1) The legal system prevented access to Pirate Bay (and other sites)
This, as per the Anons, restricts the freedom of the glorious motherland of India (which incidentally does have a high number of engineers).
A slight modification to using violence (like DDoS) is to use non-violence: this approach is to use the free tier at Amazon EC2 (http://aws.amazon.com/free/), sign up, and start the Windows tier (see the boto sketch after the list below).
AWS Free Usage Tier (per month) – only if your torrents are going to be less than 15 GB a month!
- 750 hours of Amazon EC2 Linux Micro Instance usage (613 MB of memory and 32-bit and 64-bit platform support) – enough hours to run continuously each month
- 750 hours of Amazon EC2 Microsoft Windows Server Micro Instance usage (613 MB of memory and 32-bit and 64-bit platform support) – enough hours to run continuously each month
- 750 hours of an Elastic Load Balancer plus 15 GB data processing*
- 30 GB of Amazon Elastic Block Storage, plus 2 million I/Os and 1 GB of snapshot storage
- 5 GB of Amazon S3 standard storage, 20,000 Get Requests, and 2,000 Put Requests
- 100 MB of storage, 5 units of write capacity, and 10 units of read capacity for Amazon DynamoDB.**
- 25 Amazon SimpleDB Machine Hours and 1 GB of Storage
- 1,000 Amazon SWF workflow executions can be initiated for free. A total of 10,000 activity tasks, signals, timers and markers, and 30,000 workflow-days can also be used for free
- 100,000 Requests of Amazon Simple Queue Service
- 100,000 Requests, 100,000 HTTP notifications and 1,000 email notifications for Amazon Simple Notification Service
- 10 Amazon Cloudwatch metrics, 10 alarms, and 1,000,000 API requests
- 15 GB of bandwidth out aggregated across all AWS services
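If you would rather script the launch than click through the console, here is a hedged sketch using the boto library; the AMI ID, key pair and credentials below are placeholders, not real values (pick a free-tier Windows Server AMI from the AWS console).

```python
# Hedged sketch: launching a free-tier micro instance with boto.
# AMI_ID, KEY_NAME and the credentials are placeholders.
import boto.ec2

conn = boto.ec2.connect_to_region(
    "us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

AMI_ID = "ami-xxxxxxxx"   # placeholder: a Windows Server AMI chosen in the console
KEY_NAME = "my-keypair"   # placeholder: an existing EC2 key pair

reservation = conn.run_instances(AMI_ID, instance_type="t1.micro", key_name=KEY_NAME)
print("Launched instance:", reservation.instances[0].id)
```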
So you don't know Linux, huh (but do know how to torrent)? Well, Amazon has a Windows instance for free too. Shame on you for not knowing Linux though! Illegal torrents hurt artists like Shahrukh Khan the most!!!
How to create a Windows Amazon Instance
And to download your precious data (why?) from your remote instance to your local PC, use these instructions:
1. Find the RDP file Amazon asked you to download onto your local PC. Right-click –> Edit.
2. Go to the “Local Resources” tab –> “Local devices and resources” –> “More” button.
3. Expand “Drives” and check the disks you want to share when you Terminal Services (TS) into the remote box.
4. After you connect, you will see the new drives in My Computer, already mounted for you.
For me, the copy speed is 200-300 kB/second. Enjoy!
or even easier
Installing Dropbox on both your client machine and your EC2 instance is one of the easiest ways to do it (go to http://dropbox.com), or try the new Google Drive to share content.
As for Anonymous: DDoS attacks are easy and IRC press conferences are fun, but there are enough techies in India, kids.
NOTE- You are liable legally for your actions whether on Amazon AWS or on your own laptop. This is just a technical note- not a moral note.
PS- I wonder if the Chinese can use this to access Facebook. Maybe it is time Anonymous got the guts to hit China for its unfree internet.
PPS- Message to Anons: next time, try giving us a PDF tutorial on how to create an anonymized SQL injection/DDoS!
Custom T Shirt-
INDIA- Writing code since 3000 BC.
INDIA- We made the zero 0.