Entity analytics detects non-obvious relationships, resolves entities, and finds threats and vulnerabilities hiding in your disparate collections of data. Through three use cases, let’s try to understand how entity analytics can help organizations enhance their customer experience.
There is a lot of buzz about cognitive computing. I plan to write a series of blogs on it, as cognition is framing the future of the digital economy. In this blog, let’s explore the need for cognitive technology.
In today’s world, we see business models converge across categories and industries. The Ubers and Airbnbs of the world have shifted consumer behavior very rapidly, threatening the existence of very strong, established players in the market. This new dynamic was a result of their on-demand model, powered by the immediacy of technology. With the emergence of such new business models, organizations can no longer view their competitive set from within their own industry; they need to be structured to look beyond their traditional boundaries. As brands, businesses and organizations shift to become lifestyle-centric, competition can come from anywhere. Piotr Ruszowski, chief marketing officer, Mondial Assistance, Poland, says: “The biggest threat is new competitors that aren’t yet classified as competitors.”
To win in this dynamic age, organizations need to become all-knowing. This means getting insights from all data, including the ‘dark’ data that sits outside the organization’s firewall. It includes unstructured information: books, emails, tweets, journals, blogs, images, sound and videos. The challenge is that this pool of dark matter is only going to get bigger. The oft-cited statistic that “by the year 2020, about 1.7 MB of new information will be created every second for every human being on the planet” conveys the magnitude of what we really mean when we say big.
As you go outside your firewall, the data you encounter is increasingly unstructured. Traditional systems are programmed, and so are not built to glean insight from dark data. Therefore, organizations need to look to cognitive systems that have the capability to make sense of it. How? We will explore that later, but the key difference is this: structured data can tell you that your sales are down, for instance, but it’s the unstructured data that can tell you why.
With cognition, business technologies that automate and detect can now also advise and enhance human expertise, empowering organizations to make richer, more data-driven decisions.
Here are the benefits of a cognitive business:
- Puts to work all forms of data, whether structured or unstructured.
- Facilitates evidence-based, confidence-weighted decisions.
- Discovers new insights and patterns in new kinds of data.
- Learns and adapts with use, actions, outcomes and new data to stay current.
- Navigates natural language to allow conversational-style interaction, enhancing adoption and use.
The success of cognitive computing will not be measured by Turing tests or a computer’s ability to mimic humans. It will be measured in more practical ways, like return on investment, new market opportunities, diseases cured and lives saved. How do cognitive systems work, and which industries are already benefiting from them? Stay tuned.
Here is a recap on the Internet of Things (IoT). First, in the simplest terms, IoT deals with physical devices that generate data from sensors and send the streams of data via the Internet to some kind of “hub” for data collection, visualization, and analytics. Second, IoT deals with multiple types of sensors and data formats. Third, IoT solutions might deal with thousands or millions of connected devices and huge amounts of data.
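The device-to-hub flow described above can be sketched in a few lines. This is a minimal illustration, not any particular IoT platform’s API: the device ID, gateway name and sensor fields are invented, and the sensor read is simulated rather than polling real hardware.

```python
import json
import random
import time

def read_sensor():
    """Simulate one reading from a temperature/humidity sensor.
    (A real device would poll actual hardware here.)"""
    return {
        "device_id": "sensor-42",  # hypothetical device name
        "ts": int(time.time()),
        "temperature_c": round(random.uniform(18.0, 30.0), 1),
        "humidity_pct": round(random.uniform(30.0, 70.0), 1),
    }

def to_hub_payload(readings):
    """Batch readings into a JSON document a collection hub could ingest."""
    return json.dumps({"source": "edge-gateway-1", "readings": readings})

# Batch three readings and serialize them for the hub.
payload = to_hub_payload([read_sensor() for _ in range(3)])
```

In practice the payload would be sent over MQTT or HTTPS to the hub, which handles the collection, visualization and analytics side.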
Now, billions of Internet-connected ‘things’ will, by definition, generate massive amounts of data of varying complexity, formats and timeliness. This is just a swamp if all you do is collect the data and never act on it. For example, insurers pay more than $1 billion in claims in the United States for cars and trucks damaged by hail. Could The Weather Company’s weather data make it possible for insurers to send text-message alerts to policyholders, warning them of an imminent hailstorm and advising them of safe locations nearby? Note that IoT will make it possible to identify the exact location of these cars and trucks and identify the owners to send the text messages!
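The hail-alert idea reduces to a geofence check: which policyholders are inside the forecast storm area? The sketch below uses a simple bounding box and invented names and phone numbers; a real feed would supply storm polygons, and the policy data would come from the insurer’s systems.

```python
# Forecast storm area as a lat/lon bounding box (real feeds supply polygons).
STORM_BOX = {"lat_min": 32.5, "lat_max": 33.5, "lon_min": -97.5, "lon_max": -96.5}

# Hypothetical policyholders with last-known vehicle locations (from IoT telemetry).
POLICY_HOLDERS = [
    {"name": "A. Kumar", "phone": "+1-555-0101", "lat": 32.8, "lon": -96.9},  # inside box
    {"name": "B. Lee",   "phone": "+1-555-0102", "lat": 40.7, "lon": -74.0},  # outside box
]

def in_storm_area(holder, box):
    """True if the holder's vehicle location falls inside the storm box."""
    return (box["lat_min"] <= holder["lat"] <= box["lat_max"]
            and box["lon_min"] <= holder["lon"] <= box["lon_max"])

def build_alerts(holders, box):
    """Draft one text-message alert per at-risk policyholder."""
    return [
        {"to": h["phone"],
         "msg": f"Hailstorm expected near you, {h['name']}. Please move your vehicle under cover."}
        for h in holders if in_storm_area(h, box)
    ]

alerts = build_alerts(POLICY_HOLDERS, STORM_BOX)
```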
Therefore, while many people focus on the devices themselves (how they function, how they perform and how they look), the real opportunity is in the data these devices are consuming and generating, and the value it provides for businesses and even entire connected cities. Retailers will piggyback on analytics and use IoT to pull consumers into one of their channels, where they will entice them with products that have been contextualized and personalized for the customer’s gratification. There will be similar use cases for manufacturers, servicing organizations, public utilities, industrial and telecommunications companies, healthcare providers and more, letting them serve their customers in new, personalized ways. Predictive, prescriptive, cognitive and investigative analytics will make it possible for organizations to discover new relationships and correlations that yield broader and deeper insights, leading to smarter business decisions on risks, costs, growth, customer service and more.
What will be required for organizations to harness the power of analytics, and what will the challenges be? Stay tuned.
Disclaimer: The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.
There has been a lot of talk around IoT. In this blog I wish to demystify IoT a little and share some facts and my understanding of the topic.
IoT is not an Internet of connected computers; rather, it is an Internet of connected devices (or things) that traditionally had little to no computing capacity but now broadcast loads of data about themselves, their interactions with their owners, and their interactions with each other.
So let’s start with some facts:
By 2020, there will be 28 times more sensor-enabled devices in existence than there are people in the world. Of those 212 billion enabled devices, 30 billion will be connected to networks and potentially to each other. These devices include everything from cellphones to coffee makers, washing machines, headphones, lamps, wearable devices, and more. A device can also be a component of a machine, such as the jet engine of an airplane or the drill of an oil rig. These smart devices can respond to properties such as vibration, chemicals, radio frequencies, environment, weather, humidity, light and so on.
So what value proposition will these sensor-enabled devices bring?
• Cars with on-board sensors can report back to manufacturers with information on the wear and tear of parts, indicate the cause of system failures and generate warranty notifications.
• Store shelves can connect with the supply chain when they’re running low on inventory of a certain product.
• Skyscrapers can send building managers information about how much electricity they’re using—and make suggestions for how to reduce it.
• Wearable monitors can alert doctors about the side effects of medications and provide patients with advice on how to manage their symptoms at home.
• Airplanes can connect with weather stations to help predict turbulence and avoid it during flights.
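The first bullet above, cars reporting wear and tear back to the manufacturer, is essentially a threshold check over telemetry. Here is a minimal sketch; the wear limit, field names and sample VIN are assumptions for illustration, not any manufacturer’s actual schema.

```python
WEAR_LIMIT_MM = 3.0  # assumed minimum safe brake-pad thickness (illustrative)

def check_parts(telemetry):
    """Flag parts whose reported wear crosses the warranty-notification threshold."""
    return [
        {"vin": telemetry["vin"], "part": part,
         "thickness_mm": mm, "action": "warranty_notice"}
        for part, mm in telemetry["brake_pads_mm"].items()
        if mm < WEAR_LIMIT_MM
    ]

# One on-board reading: the front-left pad is below the limit, the front-right is fine.
reading = {"vin": "1HGCM82633A004352",
           "brake_pads_mm": {"front_left": 2.4, "front_right": 5.1}}
notices = check_parts(reading)
```

The same pattern (stream in, compare against a rule, emit an action) underlies the shelf-inventory, building-energy and wearable-monitor examples as well.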
So what are the challenges and where is the “Real Opportunity” in IoT?
[Hint]: 90 percent of all data generated by devices such as smartphones, tablets, connected vehicles and appliances is never analyzed or acted on. Imagine the possibilities if the share that is analyzed were increased from 10 percent to 20 percent or more.
I spent some time reading about IBM InfoSphere BigInsights. In this blog, I wish to share a summary of what I read.
Need for a solution like BigInsights
Imagine if you were able to:
- Build sophisticated predictive models from the combination of existing information and big data information flows, providing a level of depth that only analytics applied at a large scale can offer.
- Broadly and automatically perform consumer sentiment and brand perception analysis on data gathered from across the Internet, at a scale previously impossible using partially or fully manual methods.
- Analyze system logs from a variety of disparate systems to lower operational risk.
- Leverage existing systems and customer knowledge in new ways that were previously ruled out as infeasible due to cost or scale.
Highlights of InfoSphere BigInsights
- BigInsights allows organizations to cost-effectively analyze a wide variety and large volume of data to gain insights that were not previously possible.
- BigInsights is focused on providing enterprises with the capabilities they need to meet critical business requirements while maintaining compatibility with the Hadoop project.
- BigInsights includes a variety of IBM technologies that enhance and extend the value of open-source Hadoop software to facilitate faster time-to-value, including application accelerators, analytical facilities, development tools, platform improvements and enterprise software integration.
- While BigInsights offers a wide range of capabilities that extend beyond core Hadoop functionality, IBM has taken an opt-in approach: you can adopt the IBM extensions to Hadoop as your needs dictate, rather than being forced to use every extension that comes with InfoSphere BigInsights.
- In addition to core capabilities for installation, configuration and management, InfoSphere BigInsights includes advanced analytics and user interfaces for the non-developer business analyst.
- It is flexible enough to be used for unstructured or semi-structured information; the solution does not require schema definitions or data preprocessing, and allows structure and associations to be added on the fly across information types.
- The platform runs on commonly available, low-cost hardware in parallel, supporting linear scalability; as information grows, we simply add more commodity hardware.
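The “no schema definitions, structure added on the fly” point above is the schema-on-read idea: you land raw records first and let the structure emerge when you query them. A minimal sketch, using plain Python over JSON lines rather than BigInsights itself, with invented sample records:

```python
import json

# Heterogeneous raw records landed as-is, with no upfront schema definition.
raw_records = [
    '{"user": "u1", "action": "click", "page": "/home"}',
    '{"user": "u2", "action": "purchase", "amount": 19.99}',
    '{"sensor": "s7", "temp_c": 21.5}',
]

def infer_fields(lines):
    """Union of field names seen so far: the schema emerges from the data."""
    fields = set()
    for line in lines:
        fields.update(json.loads(line).keys())
    return sorted(fields)

fields = infer_fields(raw_records)
```

Contrast this with a traditional warehouse, where records failing a predefined schema would be rejected at load time.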
InfoSphere BigInsights provides a unique set of capabilities that combine the innovation from the Apache Hadoop ecosystem with robust support for traditional skill sets and already installed tools. The ability to leverage existing skills and tools through open-source capabilities helps drive lower total cost of ownership and faster time-to-value. Thus InfoSphere BigInsights enables new solutions for problems that were previously too large and complex to solve cost-effectively.
In the previous blog, we discussed in great detail the limitations of a data lake and how, without proper governance, a data lake can become overwhelming and unsafe to use. Hence emerged an enhanced data lake solution known as a data reservoir. So how does a data reservoir assist the enterprise?
- A data reservoir provides the right information to people so they can perform activities like the following:
– Investigate and understand a particular situation or type of activity.
– Build analytical models of the activity.
– Assess the success of an analytic solution in production in order to improve it.
- A data reservoir provides credible information to subject matter experts (such as data analysts, data scientists, and business teams) so they can perform analysis activities such as investigating and understanding a particular situation, event, or activity.
- A data reservoir has capabilities that ensure the data is properly cataloged and protected so subject matter experts can confidently access the data they need for their work and analysis.
- The creation and maintenance of the data reservoir is accomplished with little to no assistance and additional effort from the IT teams.
Design of a Data Reservoir:
This design point is critical because subject matter experts play a crucial role in ensuring that analytics provides worthwhile and valuable insights at appropriate points in the organization’s operation. With a data reservoir, line-of-business teams can take advantage of the data in the data reservoir to make decisions with confidence.
- The data reservoir repositories (Figure 1, item 1) provide platforms both for storing data and running analytics as close to the data as possible.
- The data reservoir services (Figure 1, item 2) provide the ability to locate, access, prepare, transform, process, and move data in and out of the data reservoir repositories.
- The information management and governance fabric (Figure 1 item 3) provides the engines and libraries to govern and manage the data in the data reservoir. This set of capabilities includes validating and enhancing the quality of the data, protecting the data from misuse, and ensuring it is refreshed, retained, and eventually removed at appropriate points in its lifecycle.
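The governance fabric’s lifecycle duties (item 3) can be pictured as rules evaluated against catalog metadata. The sketch below shows only the retention piece; the catalog entries, classifications and retention periods are invented for illustration and do not reflect any specific product’s catalog format.

```python
from datetime import date, timedelta

# Hypothetical catalog: each data set carries governance metadata.
CATALOG = {
    "claims_2020.csv":   {"classification": "confidential", "retain_days": 365,
                          "ingested": date(2020, 1, 15)},
    "weather_feed.json": {"classification": "public", "retain_days": 30,
                          "ingested": date(2020, 6, 1)},
}

def expired(entry, today):
    """True once a data set has outlived its retention period."""
    return today - entry["ingested"] > timedelta(days=entry["retain_days"])

def purge_list(catalog, today):
    """Data sets due for removal by the governance engine."""
    return [name for name, entry in catalog.items() if expired(entry, today)]
```

The same catalog metadata (classification, owner, lineage) is what lets subject matter experts trust and locate data via the reservoir services in item 2.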
The data reservoir is designed to offer simple and flexible access to data because people are key to making analytics successful. For more information, please read Governing and Managing Big Data for Analytics and Decision Makers.
In my previous blogs I discussed the data lake. Imagine you have pooled all of your enterprise’s data into a data lake; there will still be challenges. All this raw data will be overwhelming and unsafe to use, because no one is sure where the data came from, how reliable it is, or how it should be protected. Without proper management and governance, such a data lake can quickly become a data swamp. This data swamp can cause frustration for business users, application developers, IT and even customers.
So there is a need for a facility that transforms raw data into information that is clean, timely, useful and relevant. Hence an enhanced data lake solution was built with management, affordability, and governance at its core. This solution is known as a data reservoir. In one of the subsequent blogs we will take a dip into the data reservoir! Stay tuned.