Data Science vs. BI & Predictive Analytics

Business intelligence (BI) has been evolving for decades as data has become cheaper, easier to access, and easier to share. BI analysts take historical data, perform queries, and summarize findings in static reports that often include charts. The outputs of business intelligence are “known knowns” that are manifested in stand-alone reports examined by a single business analyst or shared among a few managers. For example, a BI report might identify the probable high-net-worth clients to target with a premium bank account, based on considerations such as average account balance.

Predictive analytics has been unfolding on a parallel track to business intelligence. With predictive analytics, numerous tools allow analysts to gain insight into “known unknowns”. These tools track trends and make predictions, but are often limited to specialized programs. Continuing the previous example, predictive analytics could reveal that a probable high-net-worth client is the spouse of an existing high-net-worth client.

Data science, on the other hand, is an interdisciplinary field that combines machine learning, statistics, advanced analysis, high-performance computing, and visualization. It is a new form of art that draws out hidden insights and puts data to work in the cognitive era. The tools of data science originated in the scientific community, where researchers used them to test and verify hypotheses that include “unknown unknowns”. Here are some examples:

  • Uncover totally unanticipated relationships and changes in markets or other patterns, for example, how the price of a house is affected by proximity to high-voltage power lines or by having a brick exterior (a toy sketch of this follows the list).
  • Handle streams of data—in fact, some embedded intelligent services make decisions and carry out those decisions automatically in microseconds, for example, analyzing a user’s click pattern to dynamically propose a product or promotion to attract the customer.
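
To make the first example concrete, here is a minimal sketch of how a data scientist might test whether such unanticipated features influence house prices. The data and column names are invented for illustration, and scikit-learn’s LinearRegression stands in for whatever model a real project would choose:

```python
# Minimal sketch: do unexpected features move the price of a house?
# The data and column names are hypothetical illustrations.
import pandas as pd
from sklearn.linear_model import LinearRegression

homes = pd.DataFrame({
    "sqft":                [1400, 2100, 1750, 1200, 2400, 1600],
    "dist_to_power_lines": [0.1, 2.5, 0.3, 1.8, 3.0, 0.2],    # miles
    "brick_exterior":      [0, 1, 0, 1, 1, 0],                # 1 = brick
    "price":               [195000, 340000, 228000, 215000, 398000, 201000],
})

features = ["sqft", "dist_to_power_lines", "brick_exterior"]
model = LinearRegression().fit(homes[features], homes["price"])

# The learned coefficients hint at how much each feature moves the price.
for name, coef in zip(features, model.coef_):
    print(f"{name}: {coef:,.0f}")
```

With real data, a significant coefficient on proximity or exterior material is exactly the kind of relationship a static BI report would never surface.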

As discussed, data science differs from traditional business intelligence and predictive analytics in the following ways.

  • It brings in data that is orders of magnitude larger than what previous generations of data warehouses could store, and it even works on streaming data sources.
  • The analytical tools used in data science are also increasingly powerful, using artificial intelligence techniques to identify hidden patterns in data and pull new insights out of it.
  • The visualization tools used in data science leverage modern web technologies to deliver interactive browser-based applications. Not only are these applications visually stunning, they also provide rich context and relevance to their consumers.

Data science enriches the value of data, going beyond what the data says to what it means for your organization—in other words, it turns raw data into intelligence that empowers everyone in your organization to discover new innovations, increase sales, and become more cost-efficient. Data science is not just about the algorithm, but about deriving value.

 

Disclaimer: The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.

 


Need for Governance in Self-Service Analytics

[Figure: Analytics Offering without Self-Service]

Self-service analytics is a form of business intelligence in which line-of-business professionals or data scientists are enabled and encouraged to perform queries and generate reports on their own, with nominal IT support. This empowers everyone in the organization to discover new insights and make informed decisions. Capitalizing on the data lake, or a modernized data warehouse, they can analyze full data sets (no more sampling), gain insight from non-relational data, and pursue exploratory analysis and discovery with a 360° view of their business. At this stage, the organization can be truly data-savvy and insight-driven, leading to better decisions, more effective actions, and improved outcomes. Insight is used to make risk-aware decisions, fight fraud and counter threats, optimize operations, and, most often, attract, grow, and retain customers.

Any self-service analytics offering, regardless of persona, has to involve data governance. Here are three examples of how serious analytics work would be impossible without support for a proper data governance practice in the analytics technology (a toy sketch of the first two points follows the list):

  1. Fine-grained authorization controls: Most industries feature data sets where data access needs to be controlled so that sensitive data is protected. As data moves from one store to another, gets transformed, and aggregated, the authorization information needs to move with that data. Without the transfer of authorization controls as data changes state or location, self-service analytics would not be permitted under the applicable regulatory policies.
  2. Data lineage information: As data moves between different data storage layers and changes state, it’s important for the lineage of the data to be captured and stored. This helps analysts understand what goes into their analytic results, but it is also often a policy requirement for many regulatory frameworks. An example of where this is important is the right to be forgotten, which is a legislative initiative we are seeing in some Western countries. With this, any trace of information about a citizen would have to be tracked down and deleted from all of an organization’s data stores. Without a comprehensive data lineage framework, adherence to a right to be forgotten policy would be impossible.
  3. Business glossary: A current and complete business glossary acts as a roadmap for analysts to understand the nature of an organization’s data. Specifically, a business glossary maps an organization’s business concepts to its data schemas. One common problem with Hadoop data lakes is the lack of a business glossary, since Hadoop has no proper metadata and governance tooling of its own.
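
As a concrete, if drastically simplified, illustration of the first two points, the sketch below models how authorization tags and lineage records might travel with a data set as it is transformed. All class and field names are hypothetical; a real governance catalog is far richer:

```python
# Toy model: authorization tags and lineage travel with the data.
# All class and field names are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GovernedDataset:
    name: str
    rows: list
    allowed_roles: set                               # fine-grained authorization
    lineage: list = field(default_factory=list)      # data lineage records

    def transform(self, step: str, fn: Callable) -> "GovernedDataset":
        # The derived data set inherits the authorization tags and
        # appends a lineage record describing how it was produced.
        return GovernedDataset(
            name=f"{self.name}/{step}",
            rows=[fn(r) for r in self.rows],
            allowed_roles=set(self.allowed_roles),
            lineage=self.lineage + [step],
        )

    def read(self, role: str) -> list:
        if role not in self.allowed_roles:
            raise PermissionError(f"{role} may not read {self.name}")
        return self.rows

raw = GovernedDataset("accounts", [100, 2500, 42], allowed_roles={"analyst"})
scaled = raw.transform("normalize_to_thousands", lambda v: v / 1000)
print(scaled.lineage)           # ['normalize_to_thousands']
print(scaled.read("analyst"))   # [0.1, 2.5, 0.042]
```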

Summary:
A core design point of any self-service analytics offering (like IBM DataWorks) is that data governance capabilities should be baked in. This enables self-service data analysis in which analysts see only the data they are entitled to see, data movement and transformation are automatically tracked for a complete lineage story, and business glossary information guides users as they search for data.

Watson Analytics

Need for Watson Analytics
If an organization is good at analyzing data and extracting relevant insights from it, decision makers can make more informed, and thus more optimal, decisions. Yet decision makers are often forced to decide with incomplete information. The reason? Decision makers and citizen analysts, for the most part, tend to be consumers of analytics: they rely on more skilled resources (such as data engineers, data scientists, and application developers) to provide data-driven answers to their questions. Moreover, the answer to one question is usually just the start of another; think of a detective interrogating a suspect. The consumer/builder model is hardly conducive to this iterative nature of data analysis. As a result, answers take far too long to reach decision makers, and many questions go unanswered every day.

Watson Analytics
So a logical solution is to provide an easier-to-use analytics offering. Watson Analytics provides that value-add so that more people can leverage data to drive better decision making.

When we think of Watson, we think of cognitive computing. And when we think about analytics, we think about traditional analytics (querying, dashboarding) along with more advanced analytic capabilities (data mining and social media analytics). So Watson Analytics is a cloud-based offering that can make analytics child’s play, even for a non-skilled user.

Watson Analytics helps users understand their data in a guided way, using a natural language interface to ask a series of business questions. For example, a user can ask “What is the trend of revenue over years?” and get a visualization in response. Instead of having to first choose a visualization and work backwards to answer the business question, Watson Analytics lets you describe your intent in natural language and chooses the best visualization for you. Even better, Watson Analytics suggests an initial set of questions that you can keep refining.
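
Watson Analytics’ actual natural-language pipeline is proprietary, but a toy sketch conveys the idea of mapping a question’s intent to a chart type. The keyword rules below are crude, hypothetical stand-ins for a real intent model:

```python
# Toy illustration only: map a business question to a chart type with
# keyword rules, a crude stand-in for a real natural-language model.
def suggest_visualization(question: str) -> str:
    q = question.lower()
    if "trend" in q or ("over" in q and ("year" in q or "time" in q)):
        return "line chart"
    if "share" in q or "proportion" in q or "breakdown" in q:
        return "pie chart"
    if "compare" in q or " by " in q:
        return "bar chart"
    return "table"

print(suggest_visualization("What is the trend of revenue over years?"))
# -> line chart
```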

Watson Analytics for Social Media
Watson Analytics can work on social media data to take the pulse of an audience, spotting trends and identifying new insights and relationships across multiple social channels for greater visibility into a given topic or market. It combines structured and unstructured self-service analysis to enrich your social media analytics experience for exceptionally insightful discoveries. All on the cloud!

Summary of Steps:
Watson Analytics does the following to surface the insights hidden in your big data:

  • Import data from a robust set of data sources (on cloud and on premises), with the option to prepare and cleanse it via IBM Bluemix Data Connect.
  • Answer the What: identify issues, detect problems early, find anomalies or exceptions, and challenge conventional wisdom or the status quo (a toy anomaly-spotting sketch follows this list).
  • Understand or explain outcomes: why something happened.
  • Build dashboards to share results.
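
As a toy illustration of the “answer the What” step, the sketch below flags anomalies in a metric using a simple z-score rule; the data is made up, and Watson Analytics itself uses far more sophisticated statistics:

```python
# Toy anomaly spotting with a z-score rule. The data is invented;
# Watson Analytics itself uses far richer statistical techniques.
import statistics

monthly_revenue = [102, 98, 105, 101, 97, 180, 99, 103]  # one odd month

mean = statistics.mean(monthly_revenue)
stdev = statistics.stdev(monthly_revenue)

for month, value in enumerate(monthly_revenue, start=1):
    z = (value - mean) / stdev
    if abs(z) > 2:
        print(f"Month {month}: value {value} is anomalous (z = {z:.1f})")
```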

Lift your Data to Cloud

To stay competitive and reduce cost, many enterprises are realizing the merits of moving their data to the cloud. Thanks to economies of scale, cloud storage vendors can achieve lower costs. Enterprises also escape the drudgery of capacity planning, buying, commissioning, provisioning, and maintaining storage systems. Data is even protected by replication to multiple data centers, which cloud vendors provide by default. You can read this blog listing the various advantages of moving data to the cloud.

But now the BIG challenge is to securely migrate the terabytes of enterprise data to the cloud. Months can be spent coming up with an airtight migration plan that does not disrupt your business. And the final migration itself may take a long time, adversely impacting the users, applications, and customers relying on the source database.

Innovative data migration

In short, database migration can end up being a miserable experience. IBM Bluemix Lift is a self-service, ground-to-cloud database migration offering from IBM that addresses these needs. With Bluemix Lift, database migration becomes fast, reliable, and secure. Here’s what it offers:

  • Blazing-fast speed: Bluemix Lift helps accelerate data transfer by embedding IBM Aspera technology. Aspera’s patented, highly efficient bulk data transport protocol allows Bluemix Lift to achieve transport speeds much faster than FTP and HTTP. Moving 10 TB of data can take a little over a day, depending on your network connection (sustaining 10 TB in 24 hours works out to roughly 1 Gbps).
  • Zero downtime: Bluemix Lift can eliminate the downtime associated with database migrations. An efficient change capture technology tracks incremental changes to your source database and replays them to your target database. As a result, any applications using the source database can keep running uninterrupted while the database migration is in progress.
  • Secure: Any data movement across the Internet requires strong encryption so that the data is never compromised. Bluemix Lift encrypts data as it travels across the web on its way to an IBM cloud data property.
  • Easy to use: Set up the source data connection, provide credentials to the target database, verify schema compatibility with the target database engine and hit run. That’s all it takes to kick off a database migration with Bluemix Lift.
  • Reliable: The Bluemix Lift service automatically recovers from problems encountered during data extract, transport, and load. If your migration is interrupted by a drop in network connectivity, Bluemix Lift automatically resumes once connectivity returns. In other words, you can kick off a large database migration and walk away knowing that Bluemix Lift is on the job (the sketch after this list shows the general resume-and-retry pattern).
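
The “reliable” behavior described above boils down to a resumable transfer loop with automatic retry. The sketch below shows that general pattern (exponential backoff around a position that survives failures); it is a generic illustration, not Bluemix Lift’s actual implementation:

```python
# Generic resume-with-backoff pattern, similar in spirit to what a
# migration service does internally. Not Bluemix Lift's actual code.
import time

def migrate(chunks, send, max_retries=5):
    """Send each chunk, resuming from the last good position on failure."""
    position, attempt = 0, 0
    while position < len(chunks):
        try:
            send(chunks[position])
            position += 1                 # progress survives across retries
            attempt = 0
        except ConnectionError:
            attempt += 1
            if attempt > max_retries:
                raise                     # give up only after repeated failures
            time.sleep(2 ** attempt)      # back off, then resume where we left off

sent = []
migrate(["chunk-0", "chunk-1", "chunk-2"], sent.append)
print(sent)   # ['chunk-0', 'chunk-1', 'chunk-2']
```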

Speed, zero downtime, security, ease of use, and reliability—these are the hallmarks of a great database migration service, and Bluemix Lift delivers on all of them. Bluemix Lift gets data into a cloud database as easily as selecting Save As –> Cloud. It also provides an amazing jumping-off point for new capabilities planned for the future, such as new source and target databases, enhanced automation, and additional use cases. Take a look at IBM Bluemix Lift and give it a go.

IBM Bluemix Data Connect

I have been tracking the development of IBM Bluemix Data Connect quite closely. One reason is that I was a key developer on one of the first services it launched, almost two years back, under the name DataWorks. Two weeks ago I attended a session on Data Connect by its architect and saw a demo, and I am impressed by how it has evolved since then. So I am revisiting DataWorks, now as IBM Bluemix Data Connect. In this blog I will look at the role IBM Bluemix Data Connect plays in the era of cloud computing, big data, and the Internet of Things.

Research from Forrester found that 68 percent of simple BI requests take weeks, months, or longer for IT to fulfill due to a lack of technical resources. This means enterprises must find ways to transform line-of-business professionals into skilled data workers, taking some of the burden off IT. Business users should be empowered to work with data from many sources—both on premises and in the cloud—without requiring the deep technical expertise of a database administrator or data scientist.

This is where cloud services like IBM Bluemix Data Connect come into the picture. Data Connect enables both technical and non-technical business users to derive useful insights from data with point-and-click access, whether it’s a few Excel sheets stored locally or a massive database hosted in the cloud.

Data Connect is a fully managed data preparation and movement service that enables users to put data to work through a simple yet powerful cloud-based interface. The design team has taken great pains to keep the solution simple, so that a basic user can get started quickly. Data Connect empowers the business analyst to discover, cleanse, standardize, transform, and move data in support of application development and analytics use cases.
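
To give a feel for those cleansing and standardizing steps, here is a small pandas sketch of the kind of shaping Data Connect performs through its point-and-click interface. The columns and rules are invented for this illustration and do not use Data Connect’s actual API:

```python
# Illustrative cleansing/standardizing steps in pandas. The columns
# and rules are invented; Data Connect does comparable work via a UI.
import pandas as pd

customers = pd.DataFrame({
    "name":  ["  Ada Lovelace", "GRACE HOPPER", None],
    "state": ["ny", "CA", "ca"],
    "spend": ["1,200", "950", "310"],
})

clean = (
    customers
    .dropna(subset=["name"])                                  # drop unusable rows
    .assign(
        name=lambda d: d["name"].str.strip().str.title(),     # standardize names
        state=lambda d: d["state"].str.upper(),               # normalize codes
        spend=lambda d: d["spend"].str.replace(",", "", regex=False).astype(int),
    )
)
print(clean)
```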

Through its integration with cloud data services like IBM Watson Analytics, Data Connect is a seamless tool for preparing and moving data from on premises and off premises to an analytics cloud ecosystem where it can be quickly analyzed and visualized. Furthermore, Data Connect is backed by continuous delivery, which adds robust new features and functionality on a regular basis. Its processing engine is built on Apache Spark, the leading open source analytics project, with a large and continuously growing development community. The result is a best-of-breed solution that can keep up with the rapid pace of innovation in big data and cloud computing.

So here are the highlights of IBM Bluemix Data Connect:

  • Allows technical and non-technical users to draw value from data quickly and easily.
  • Ensures data quality with simple data preparation and movement services in the cloud.
  • Integrates with leading cloud data services to create a seamless data management platform.
  • Delivers a continuous inflow of robust new features.
  • Offers the best-of-breed ETL solution available on Bluemix, IBM’s next-generation cloud app development platform.

DataStage job run time architecture on Hadoop

In my earlier blog, I explored why enterprises are using Hadoop. In summary, scalable data platforms such as Hadoop offer unparalleled cost benefits and analytical opportunities (including content analytics) to enterprises. In this blog, I will describe some of the enhancements in IBM’s InfoSphere Information Server 11.5 that help leverage the scale and promise of Hadoop.

Data integration in Hadoop:
In this release, Information Server can execute directly inside a Hadoop cluster. This means that all of the data connectivity, transformation, cleansing, enhancement, and data delivery features that thousands of enterprises have relied on for years are immediately available to run within the Hadoop platform! Information Server is the market-leading product in terms of its data integration and governance capability, and now the same product can be used to solve some of the industry’s most complex data challenges directly inside a Hadoop cluster. Imagine the time saved in moving data back and forth from HDFS!

Even better, these new Hadoop features use the same simple graphical design environment that IBM clients are already accustomed to building integration applications with. In other words, organizations can build new Hadoop-based, information-intensive applications without retraining their development teams on newly emerging languages that require manual hand coding and lack governance support.

How is this accomplished? YARN! 
Apache Hadoop YARN is the framework for job scheduling and cluster resource management. Information Server can communicate with YARN to run a job on the data nodes of a Hadoop cluster using the following steps.

Here is more detail on how Information Server uses YARN (a conceptual sketch of this control flow follows the steps):

[Figure: DataStage job runtime architecture on Hadoop]

  1. A job is submitted to run in the Information Server engine.
  2. The ‘Conductor’ (the process responsible for coordinating the job) asks YARN to instantiate the YARN version of the Conductor: the Application Master.
  3. The YARN Client is responsible for starting and stopping Application Masters.
  4. Once the Application Master is ready, ‘Section Leaders’ (each responsible for the work on one data node) are prepared.
  5. Section Leaders are created and managed by YARN Node Managers. This is the point where the BigIntegrate/BigQuality binaries are copied to the Hadoop DataNode if they do not already exist there.
  6. Now the real work can begin: the ‘Players’ (the processes that actually run the job) are started.
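
The control flow above can be condensed into a conceptual sketch. The Python below only models the sequence of actors; the real Information Server engine and YARN services are not written this way:

```python
# Conceptual model of the control flow only; the real Conductor,
# Application Master, Section Leaders, and Players are not these toys.
def run_job(job_name, datanodes):
    print(f"1. Conductor receives job '{job_name}'")
    print("2. Conductor asks YARN to instantiate an Application Master")
    print("3. YARN Client starts the Application Master")
    for node in datanodes:
        print(f"4/5. Node Manager on {node} starts a Section Leader "
              "(copying BigIntegrate/BigQuality binaries if absent)")
        print(f"6. Players on {node} execute the actual processing")

run_job("load_sales_facts", ["datanode1", "datanode2"])
```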

All of this is automatic and behind the scenes. The user interface looks and feels identical to running a job on Windows, AIX, or Linux.

 

Spark – Sparkling framework for big data management and analytics

There has been a lot of buzz around Apache Spark over the last several months, and I have been following it and comparing it with Hadoop. In this blog, I will share some of what I have read about it.

Apache Spark is an open source parallel processing framework that enables users to run large-scale data analytics applications across clustered computers. Wasn’t that Hadoop’s claim to fame? Yes, but Spark was developed as a way to speed up processing jobs in Hadoop systems. Spark advocates claim that, with its in-memory computing layer, Spark can run batch-processing programs up to 100 times faster than MapReduce can. When data is processed from disk, Spark can run batch jobs up to 10 times faster, they say.
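
The in-memory advantage shows up when the same data set feeds several computations. Here is a minimal PySpark sketch; it assumes a local PySpark installation, and the input path and the status column are hypothetical placeholders:

```python
# Minimal PySpark sketch of in-memory reuse. Assumes pyspark is
# installed; the input path and 'status' column are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cache-demo").getOrCreate()

events = spark.read.json("hdfs:///data/events.json")
events.cache()        # keep the data in memory after its first use

# Both actions after the first reuse the cached in-memory copy rather
# than rereading from disk, which is where Spark's speedup comes from.
print(events.count())
print(events.filter(events.status == "error").count())

spark.stop()
```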

While MapReduce is limited to batch processing, the Apache Spark architecture includes a stream processing module, a machine learning library, and a graph processing API with related algorithms, making it a more general-purpose platform. The Spark Streaming technology in particular has found its way into deployments at Spark early adopters, for uses such as analyzing online advertising data and processing satellite images and geo-tagged tweets. Does that imply that handling these additional processing workloads may require companies to expand their Hadoop clusters? The answer is obviously yes.
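
For a flavor of the stream processing side, the sketch below uses Spark’s Structured Streaming API with its socket demo source (feed it lines with `nc -lk 9999`), counting words as they arrive:

```python
# Minimal Structured Streaming sketch: running word counts over the
# demo socket source. Run `nc -lk 9999` locally to feed it text.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

lines = (spark.readStream
              .format("socket")
              .option("host", "localhost")
              .option("port", 9999)
              .load())

words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

query = (counts.writeStream
               .outputMode("complete")   # emit full updated counts each batch
               .format("console")
               .start())
query.awaitTermination()
```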

Unlike Hadoop, Spark doesn’t include its own file system. It can run in standalone mode and access a variety of data sources, but most often it is used to process and analyze data stored in the Hadoop Distributed File System (HDFS). So it should not be surprising that Spark has been incorporated into every major Hadoop distribution, including those from Cloudera, Hortonworks, IBM, MapR, and Pivotal. In such installations, one can still use MapReduce for its reliability, but Spark may require less development expertise than MapReduce because of its high-level APIs and support for writing applications in Java, Scala, or Python.