Unit - 2
Data Analysis and Exploration
Q1) Explain the structure of Mathematical Model.
A1)
1. Mathematical models have been developed and used in many application domains, ranging from physics to architecture, from engineering to economics.
2. The models adopted in the various contexts differ substantially in terms of their mathematical structure. However, it is possible to identify a few fundamental features shared by most models.
3. Generally speaking, a model is a selective abstraction of a real system. In other words, a model is designed to analyze and understand from an abstract point of view the operating behavior of a real system, regarding which it only includes those elements deemed relevant for the purpose of the investigation carried out.
4. In this respect it is worth quoting the words of Einstein on the development of a model: ‘everything should be made as simple as possible, but not simpler.’ Scientific and technological development has increasingly turned to mathematical models of various types for the abstract representation of real systems.
5. As an example, consider the thought experiment (Gedanken experiment) popularized in physics at the beginning of the twentieth century, which involved building a mental model of a given phenomenon and verifying its validity by imagining the consequences caused by hypothetical modifications in the model itself.
6. There is a clear analogy between this conceptual paradigm and the what-if analyses that can easily be performed using a simple spreadsheet, to answer questions such as: given a model for calculating the budget of a company, how are cash flows affected by a change in the payment terms, such as 90 days vs. 60 days, of invoices issued in favor of the main customers?
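To make this concrete, here is a minimal sketch of such a what-if model in Python; the invoice amounts and the discount rate are invented for illustration, not taken from the text:

```python
# What-if sketch: how do payment terms (90 vs. 60 days) affect the
# present value of cash inflows from invoices issued to main customers?
# Invoice amounts and the discount rate are invented for illustration.

invoices = [120_000, 85_000, 60_000]   # hypothetical amounts due
annual_rate = 0.06                     # assumed annual discount rate

def discounted_inflow(amounts, payment_days, annual_rate):
    """Present value of the amounts if they are received after payment_days."""
    daily_rate = (1 + annual_rate) ** (1 / 365) - 1
    factor = 1 / (1 + daily_rate) ** payment_days
    return sum(a * factor for a in amounts)

for days in (60, 90):
    pv = discounted_inflow(invoices, days, annual_rate)
    print(f"{days}-day terms: present value of inflows = {pv:,.0f}")
```

Varying payment_days plays the role of the hypothetical modification imagined in the thought experiment: the model is re-evaluated and the consequences of the change are compared.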
Q2) What are the characteristics of mathematical models? Explain each.
A2)
According to their characteristics, models can be divided into iconic, analogical and symbolic: -
1. Iconic: -
a) An iconic model is a material representation of a real system, whose behavior is imitated for the purpose of the analysis.
b) A miniaturized model of a new city neighborhood is an example of an iconic model.
2. Analogical: -
a) An analogical model is also a material representation, although it imitates the real behavior by analogy rather than by replication.
b) A wind tunnel built to investigate the aerodynamic properties of a motor vehicle is an example of an analogical model intended to represent the actual progression of a vehicle on the road.
3. Symbolic: -
a) A symbolic model, such as a mathematical model, is an abstract representation of a real system. It is intended to describe the behavior of the system through a series of symbolic variables, numerical parameters and mathematical relationships.
b) Business intelligence systems, and consequently the models presented, are exclusively based on symbolic models.
c) A further relevant distinction concerns the probabilistic nature of models, which can be either stochastic or deterministic.
4. Stochastic: -
a) In a stochastic model some input information represents random events and is therefore characterized by a probability distribution, which in turn can be assigned or unknown.
b) Predictive models, which will be thoroughly described in the following chapters, as well as waiting line models, briefly mentioned below in this chapter, are examples of stochastic models.
5. Deterministic: -
a) A model is called deterministic when all input data are supposed to be known a priori and with certainty.
b) Since this assumption is rarely fulfilled in real systems, one resorts to deterministic models when the problem at hand is sufficiently complex and any stochastic elements are of limited relevance.
c) Notice, however, that even for deterministic models the hypothesis of knowing the data with certainty may be relaxed. Sensitivity and scenario analyses, as well as what-if analysis, allow one to assess the robustness of optimal decisions to variations in the input parameters.
d) A further distinction concerns the temporal dimension in a mathematical model, which can be either static or dynamic.
6. Static: -
Static models consider a given system and the related decision-making process within one single temporal stage.
7. Dynamic: -
a) Dynamic models consider a given system through several temporal stages, corresponding to a sequence of decisions.
b) In many instances the temporal dimension is subdivided into discrete intervals of a previously fixed span: minutes, hours, days, weeks, months and years are examples of discrete subdivisions of the time axis.
c) Discrete-time dynamic models, which largely prevail in business intelligence applications, observe the status of a system only at the beginning or at the end of discrete intervals. Continuous-time dynamic models consider a continuous sequence of periods on the time axis.
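As a minimal sketch of a discrete-time dynamic model, the following Python fragment observes the state of a hypothetical inventory system only at the end of each weekly interval; all demand and reorder figures are assumed:

```python
# Discrete-time dynamic model: the state of the system (stock on hand)
# is observed only at the end of each weekly interval. The demand and
# reorder figures below are assumed for illustration.

stock = 100                       # initial state of the system
weekly_demand = [30, 45, 25, 50]  # one value per discrete temporal stage
reorder_point, reorder_qty = 40, 60

for week, demand in enumerate(weekly_demand, start=1):
    stock -= demand               # demand served during the interval
    if stock < reorder_point:     # decision taken at the stage boundary
        stock += reorder_qty      # replenishment order arrives
    print(f"end of week {week}: stock = {stock}")
```

Each pass through the loop corresponds to one temporal stage and one decision, which is exactly the sequence-of-decisions structure described above.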
Q3) Write the short note on Data mining.
A3)
1. Data Mining is the process of finding potentially useful patterns in huge data sets. It is a multi-disciplinary skill that uses machine learning, statistics, and AI to extract information and evaluate the probability of future events.
2. The insights derived from Data Mining are used for marketing, fraud detection, scientific discovery, etc.
3. Data Mining is all about discovering hidden, unsuspected, and previously unknown yet valid relationships amongst the data.
4. Data mining is also called Knowledge Discovery in Databases (KDD), knowledge extraction, data/pattern analysis, information harvesting, etc.
Q4) Explain the Data mining implementation process in detail.
A4)
Let's study the Data Mining implementation process in detail:
1. Business understanding:
a) In this phase, business and data-mining goals are established.
b) First, you need to understand the business and client objectives. You need to define what your client wants (which many times even the clients themselves do not know).
c) Take stock of the current data mining scenario. Factor resources, assumptions, constraints, and other significant factors into your assessment.
d) Using business objectives and current scenario, define your data mining goals.
e) A good data mining plan is very detailed and should be developed to accomplish both business and data mining goals.
2. Data understanding:
a) In this phase, a sanity check on the data is performed to check whether it is appropriate for the data mining goals.
b) First, data is collected from multiple data sources available in the organization.
c) These data sources may include multiple databases, flat files or data cubes. Issues like object matching and schema integration can arise during the data integration process. It is quite a complex and tricky process, as data from various sources are unlikely to match easily. For example, table A contains an entity named cust_no whereas another table B contains an entity named cust-id.
d) Therefore, it is quite difficult to ensure that both of these given objects refer to the same value or not. Here, metadata should be used to reduce errors in the data integration process (a short sketch follows this list).
e) The next step is to explore the properties of the acquired data. A good way to explore the data is to answer the data mining questions (decided in the business phase) using the query, reporting, and visualization tools.
f) Based on the query results, the data quality should be ascertained. Missing data, if any, should be acquired.
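A minimal sketch of metadata-driven integration for the cust_no / cust-id example above, assuming pandas; the table contents and the canonical name customer_id are invented for illustration:

```python
import pandas as pd

# Schema integration sketch: a metadata mapping records that table A's
# "cust_no" and table B's "cust-id" denote the same entity, so both are
# renamed to a canonical "customer_id" before the tables are combined.
# Table contents are invented for illustration.

table_a = pd.DataFrame({"cust_no": [1, 2], "city": ["Pune", "Mumbai"]})
table_b = pd.DataFrame({"cust-id": [1, 2], "spend": [500, 750]})

metadata_map = {"cust_no": "customer_id", "cust-id": "customer_id"}

table_a = table_a.rename(columns=metadata_map)
table_b = table_b.rename(columns=metadata_map)

print(table_a.merge(table_b, on="customer_id"))
```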
3. Data preparation:
a) In this phase, data is made production ready.
b) The data preparation process consumes about 90% of the time of the project.
c) The data from different sources should be selected, cleaned, transformed, formatted, anonymized, and constructed (if required).
d) Data cleaning is a process to "clean" the data by smoothing noisy data and filling in missing values.
e) For example, in a customer demographics profile, age data may be missing. The data is incomplete and should be filled in. In some cases, there could be data outliers; for instance, age has a value of 300. Data could also be inconsistent; for instance, the name of the customer is different in different tables.
f) Data transformation operations change the data to make it useful in data mining. The following transformations can be applied.
4. Data transformation:
Data transformation operations contribute toward the success of the mining process; a short sketch of these operations appears after the list below.
a) Smoothing: - It helps to remove noise from the data.
b) Aggregation: - Summary or aggregation operations are applied to the data, e.g., weekly sales data is aggregated to calculate monthly and yearly totals.
c) Generalization: - In this step, low-level data is replaced by higher-level concepts with the help of concept hierarchies. For example, the city is replaced by the county.
d) Normalization: - Normalization is performed when the attribute data are scaled up or scaled down. Example: data should fall in the range -2.0 to 2.0 post-normalization.
e) Attribute construction: - New attributes are constructed from the given set of attributes and included when helpful for data mining.
The result of this process is a final data set that can be used in modeling.
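As a rough illustration of the preparation and transformation steps above (missing values, outliers, aggregation, and normalization), here is a minimal sketch assuming pandas; all figures are made up:

```python
import pandas as pd

# Preparation sketch: treat an impossible outlier (age 300) as missing,
# fill missing ages, aggregate weekly sales to monthly totals, and
# normalize ages (z-score) so values typically fall in about -2.0..2.0.
# All figures are made up for illustration.

customers = pd.DataFrame({"age": [25.0, None, 300.0, 41.0]})
customers.loc[customers["age"] > 120, "age"] = None        # outlier -> missing
customers["age"] = customers["age"].fillna(customers["age"].median())
customers["age_norm"] = (customers["age"] - customers["age"].mean()) \
    / customers["age"].std()                               # normalization

sales = pd.DataFrame({
    "week_start": pd.to_datetime(["2023-01-02", "2023-01-09", "2023-02-06"]),
    "amount": [100, 150, 90],
})
monthly = sales.resample("MS", on="week_start")["amount"].sum()  # aggregation

print(customers, monthly, sep="\n")
```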
5. Modelling:
a) In this phase, mathematical models are used to determine data patterns.
b) Based on the business objectives, suitable modeling techniques should be selected for the prepared dataset.
c) Create a scenario to test and check the quality and validity of the model.
d) Run the model on the prepared dataset.
e) Results should be assessed by all stakeholders to make sure that the model can meet the data mining objectives.
6. Evaluation:
a) In this phase, patterns identified are evaluated against the business objectives.
b) Results generated by the data mining model should be evaluated against the business objectives.
c) Gaining business understanding is an iterative process. In fact, new business requirements may be raised during this phase because of what data mining reveals.
d) A go or no-go decision is taken to move the model into the deployment phase.
7. Deployment:
a) In the deployment phase, you ship your data mining discoveries to everyday business operations.
b) The knowledge or information discovered during data mining process should be made easy to understand for non-technical stakeholders.
c) A detailed deployment plan, for shipping, maintenance, and monitoring of data mining discoveries is created.
d) A final project report is created with lessons learned and key experiences from the project. This helps to improve the organization's business policy.
Q5) Explain Data Mining techniques
A5)
1. Classification:
This analysis is used to retrieve important and relevant information about data and metadata. This data mining method helps classify data into different classes.
2. Clustering:
Clustering analysis is a data mining technique used to identify data that are similar to each other. This process helps in understanding the differences and similarities between the data (a sketch combining clustering and classification appears at the end of this answer).
3. Regression:
Regression analysis is the data mining method of identifying and analyzing the relationship between variables. It is used to predict the likely value of a specific variable, given the values of other variables.
4. Association Rules:
This data mining technique helps to find the association between two or more items. It discovers hidden patterns in the data set.
5. Outlier detection:
This type of data mining technique refers to the observation of data items in the dataset that do not match an expected pattern or expected behavior. This technique can be used in a variety of domains, such as intrusion detection, fraud or fault detection, etc. Outlier detection is also called Outlier Analysis or Outlier mining.
6. Sequential Patterns:
This data mining technique helps to discover or identify similar patterns or trends in transaction data over a certain period.
7. Prediction:
Prediction uses a combination of the other data mining techniques, such as trends, sequential patterns, clustering, and classification. It analyzes past events or instances in the right sequence to predict a future event.
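As a rough sketch of two of these techniques, clustering and classification, the fragment below uses scikit-learn (an assumed dependency, not named in the text) on a tiny synthetic dataset:

```python
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Tiny synthetic two-feature dataset, invented for illustration.
X = [[1, 2], [1, 4], [10, 2], [10, 4], [1, 0], [10, 0]]
y = [0, 0, 1, 1, 0, 1]     # known class labels, used only by the classifier

# Clustering: group similar records without looking at the labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", clusters)

# Classification: learn a rule from labeled data, then classify a new record.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print("predicted class for [9, 3]:", clf.predict([[9, 3]]))
```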
Q6) Explain
- R-language
- Oracle data mining
A6)
1. R-language:
R language is an open-source tool for statistical computing and graphics. It provides a wide variety of statistical techniques, including classical statistical tests, time-series analysis, and classification, as well as graphical techniques. It also offers an effective data handling and storage facility.
2. Oracle Data Mining: -
Oracle Data Mining, popularly known as ODM, is a module of the Oracle Advanced Analytics Database. This data mining tool allows data analysts to generate detailed insights and make predictions. It helps predict customer behavior, develop customer profiles, and identify cross-selling opportunities.
Q7) What are the benefits of Data Mining
A7)
1. Data mining techniques help companies to get knowledge-based information.
2. Data mining helps organizations to make profitable adjustments in operations and production.
3. Data mining is a cost-effective and efficient solution compared to other statistical data applications.
4. Data mining helps with the decision-making process.
5. It facilitates the automated prediction of trends and behaviors as well as the automated discovery of hidden patterns.
6. It can be implemented in new systems as well as existing platforms.
7. It is a speedy process that makes it easy for users to analyze huge amounts of data in less time.
Q8) Write down the Disadvantages of Data Mining
A8)
1. There is a chance that companies may sell useful information about their customers to other companies for money. For example, American Express has sold credit card purchase data of its customers to other companies.
2. Many data mining analytics tools are difficult to operate and require advanced training to work with.
3. Different data mining tools work in different manners due to the different algorithms employed in their design. Therefore, the selection of the correct data mining tool is a very difficult task.
4. Data mining techniques are not always accurate, and so may cause serious consequences in certain conditions.
Q9) What are the applications of Data Mining? Explain each.
A9)
1. Communications: -
Data mining techniques are used in the communications sector to predict customer behavior and to offer highly targeted and relevant campaigns.
2. Insurance: -
Data mining helps insurance companies to price their products profitably and to promote new offers to their new or existing customers.
3. Education: -
Data mining benefits educators by providing access to student data, predicting achievement levels, and finding students or groups of students who need extra attention, for example, students who are weak in mathematics.
4. Manufacturing: -
With the help of data mining, manufacturers can predict wear and tear of production assets. They can anticipate maintenance, which helps them minimize downtime.
5. Banking: -
Data mining helps the finance sector to get a view of market risks and manage regulatory compliance. It helps banks to identify probable defaulters and to decide whether to issue credit cards, loans, etc.
6. Retail: -
Data mining techniques help retail malls and grocery stores identify and arrange the most sellable items in the most attention-drawing positions. It helps store owners come up with offers that encourage customers to increase their spending.
7. Service Providers: -
Service providers like mobile phone and utility companies use data mining to predict the reasons why a customer leaves their company. They analyze billing details, customer service interactions, and complaints made to the company to assign each customer a probability score and then offer incentives.
8. E-Commerce: -
E-commerce websites use data mining to offer cross-sells and up-sells through their websites. One of the most famous names is Amazon, which uses data mining techniques to get more customers into its e-commerce store.
9. Super Markets: -
Data mining allows supermarkets to develop rules to predict whether their shoppers are likely to be expecting. By evaluating their buying patterns, they can identify customers who are most likely pregnant and start targeting products like baby powder, baby soap, diapers and so on.
10. Crime Investigation: -
Data mining helps crime investigation agencies to deploy the police workforce (where is a crime most likely to happen, and when?), decide whom to search at a border crossing, etc.
11. Bioinformatics: -
Data Mining helps to mine biological data from massive datasets gathered in biology and medicine.
Q10) Explain Data Preparation
A10)
1. Data preparation is the process of cleaning and transforming raw data prior to processing and analysis. It is an important step that often involves reformatting data, making corrections to data, and combining data sets to enrich the data.
2. Data preparation is often a lengthy undertaking for data professionals or business users, but it is essential as a prerequisite to put data in context in order to turn it into insights and eliminate bias resulting from poor data quality.
Q11) What are the benefits of Data preparation?
A11)
76% of data scientists say that data preparation is the worst part of their job, but efficient, accurate business decisions can only be made with clean data. Data preparation helps to:
1. Fix errors quickly: - Data preparation helps catch errors before processing. After data has been removed from its original source, these errors become more difficult to understand and correct.
2. Produce top-quality data: - Cleaning and reformatting datasets ensures that all data used in analysis will be high quality.
3. Make better business decisions: - Higher quality data that can be processed and analyzed more quickly and efficiently leads to more timely, efficient and high-quality business decisions.
Additionally, as data and data processes move to the cloud, data preparation moves with it for even greater benefits, such as:
4. Superior scalability: - Cloud data preparation can grow at the pace of the business. Enterprises don’t have to worry about the underlying infrastructure or try to anticipate its evolution.
5. Future proof: - Cloud data preparation upgrades automatically so that new capabilities or problem fixes can be turned on as soon as they are released. This allows organizations to stay ahead of the innovation curve without delays and added costs.
6. Accelerated data usage and collaboration: - Doing data prep in the cloud means it is always on, doesn’t require any technical installation, and lets teams collaborate on the work for faster results.
Additionally, a good, cloud-native data preparation tool will offer other benefits (like an intuitive and simple-to-use GUI) for easier and more efficient preparation.
Q12) Explain the Data preparation steps.
A12)
The specifics of the data preparation process vary by industry, organization and need, but the framework remains largely the same.
1. Gather data:
The data preparation process begins with finding the right data. This can come from an existing data catalog or can be added ad-hoc.
2. Discover and assess data:
After collecting the data, it is important to discover each dataset. This step is about getting to know the data and understanding what has to be done before the data becomes useful in a particular context.
Discovery is a big task, but Talend’s data preparation platform offers visualization tools which help users profile and browse their data.
3. Cleanse and validate data:
Cleaning up the data is traditionally the most time-consuming part of the data preparation process, but it’s crucial for removing faulty data and filling in gaps. Important tasks here include:
a) Removing extraneous data and outliers.
b) Filling in missing values.
c) Conforming data to a standardized pattern.
d) Masking private or sensitive data entries.
Once data has been cleansed, it must be validated by testing for errors in the data preparation process up to this point. Oftentimes, an error in the system will become apparent during this step and will need to be resolved before moving forward.
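A minimal sketch of these cleansing and validation tasks, assuming pandas 2.0 or later; the records are invented for illustration:

```python
import pandas as pd

# Cleansing sketch for the tasks above: remove an impossible outlier,
# fill a missing value, conform dates to one standardized pattern, and
# mask a sensitive column; a simple assertion acts as validation.
# Records are invented; format="mixed" needs pandas >= 2.0.

df = pd.DataFrame({
    "age":    [34.0, None, 300.0, 28.0],
    "joined": ["2021-03-01", "01/04/2021", "2021-05-12", "2021-06-30"],
    "ssn":    ["111-22-3333", "222-33-4444", "333-44-5555", "444-55-6666"],
})

df = df[df["age"].isna() | (df["age"] <= 120)]               # remove outliers
df["age"] = df["age"].fillna(df["age"].median())             # fill missing values
df["joined"] = pd.to_datetime(df["joined"], format="mixed")  # one date pattern
df["ssn"] = "***-**-" + df["ssn"].str[-4:]                   # mask sensitive data

assert df["age"].between(0, 120).all()                       # validation check
print(df)
```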
4. Transform and enrich data:
Transforming data is the process of updating the format or value entries in order to reach a well-defined outcome, or to make the data more easily understood by a wider audience. Enriching data refers to adding and connecting data with other related information to provide deeper insights.
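A small sketch of transformation and enrichment under the same assumptions (pandas; invented records and a hypothetical currency-conversion rate):

```python
import pandas as pd

# Transformation and enrichment sketch: derive a new column in a
# well-defined format, then join the orders with related customer data
# for deeper insight. Records and the conversion rate are invented.

orders = pd.DataFrame({"customer_id": [1, 2, 1], "amount": [50, 80, 20]})
segments = pd.DataFrame({"customer_id": [1, 2],
                         "segment": ["retail", "wholesale"]})

orders["amount_eur"] = orders["amount"] * 0.92                   # transformation
enriched = orders.merge(segments, on="customer_id", how="left")  # enrichment
print(enriched)
```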
5. Store data:
Once prepared, the data can be stored or channeled into a third-party application—such as a business intelligence tool—clearing the way for processing and analysis to take place.
Q13) Explain the role of Data exploration.
A13)
- Before it can conduct analysis on data collected by multiple data sources and stored in data warehouses, an organization must know how many cases are in a data set, what variables are included, how many missing values there are and what general hypotheses the data is likely to support.
2. An initial exploration of the data set can help answer these questions by familiarizing analysts with the data with which they are working.
3. Once data exploration has uncovered the relationships between the different variables, organizations can continue the data mining process by creating and deploying data models to take action.
4. Companies can conduct data exploration via a combination of automated and manual methods.
5. Analysts commonly use automated tools such as data visualization software for data exploration because these tools allow users to quickly and simply view most of the relevant features of a data set. From this step, users can identify variables that are likely to have interesting observations.
6. By displaying data graphically -- for example, through scatter plots, density plots or bar charts -- users can see if two or more variables correlate and determine if they are good candidates for further analysis, which may include:
a) Univariate analysis: The analysis of one variable.
b) Bivariate analysis: The analysis of two variables to determine their relationship.
c) Multivariate analysis: The analysis of multiple outcome variables.
d) Principal components analysis: The analysis and conversion of possibly correlated variables into a smaller number of uncorrelated variables.
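The fragment below is a minimal sketch of two of these analyses, a bivariate correlation check and principal components analysis, assuming NumPy and scikit-learn and using randomly generated data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Exploration sketch: check whether two variables correlate (bivariate
# analysis), then convert four possibly correlated variables into two
# uncorrelated principal components. Data is randomly generated.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[:, 1] = 0.8 * X[:, 0] + rng.normal(scale=0.3, size=100)  # induce correlation

print("correlation of x0 and x1:", np.corrcoef(X[:, 0], X[:, 1])[0, 1])

components = PCA(n_components=2).fit_transform(X)  # 4 variables -> 2 components
print("reduced shape:", components.shape)
```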
7. Manual data exploration methods may include filtering and drilling down into data in Excel spreadsheets or writing scripts to analyze raw data sets.
8. After the data exploration is complete, analysts can move on to the data discovery phase to answer specific questions about a business issue.
9. The data discovery process involves using business intelligence tools to examine trends, sequences and events and creating visualizations to present to business leaders.