Naïve Bayes is a simple classification method that computes probabilities by counting frequencies and combinations of values in the dataset used. Naïve Bayes assumes that the attributes are independent, i.e. that there is no interdependence among the variable values within each class (Patil, 2013). Other researchers note that the naïve Bayes method is attributed to the English mathematician Thomas Bayes, and that it classifies using probability and statistics, predicting future outcomes from previous experience (Bustami, 2013).
Given the output value, naïve Bayes treats the attribute values as conditionally independent; in other words, the class probability is evaluated as the product of the individual attribute probabilities (Ridwan, 2013). An advantage of this method is that it requires only a small amount of training data to estimate the parameters used in the classification process. Naïve Bayes also works well in many real-world applications.
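As a minimal sketch of this frequency-counting scheme in Python (the weather-style dataset, attribute values, and the added Laplace smoothing are illustrative assumptions, not part of the original method description):

from collections import Counter, defaultdict

def train(rows, labels):
    """Count class frequencies and per-attribute value frequencies."""
    priors = Counter(labels)       # class frequencies P(label)
    cond = defaultdict(Counter)    # cond[(i, label)][value] = count
    vocab = defaultdict(set)       # distinct values seen per attribute
    for row, label in zip(rows, labels):
        for i, value in enumerate(row):
            cond[(i, label)][value] += 1
            vocab[i].add(value)
    return priors, cond, vocab

def classify(row, priors, cond, vocab, n):
    """Pick the label maximizing P(label) * prod_i P(value_i | label)."""
    best, best_p = None, -1.0
    for label, c in priors.items():
        p = c / n  # prior from class frequency
        for i, value in enumerate(row):
            # Laplace smoothing (+1) keeps unseen values from zeroing the product
            p *= (cond[(i, label)][value] + 1) / (c + len(vocab[i]))
        if p > best_p:
            best, best_p = label, p
    return best

# Hypothetical data: (outlook, humidity) -> play?
rows = [("sunny", "high"), ("rainy", "high"), ("overcast", "normal"), ("rainy", "normal")]
labels = ["no", "no", "yes", "yes"]
priors, cond, vocab = train(rows, labels)
print(classify(("sunny", "normal"), priors, cond, vocab, len(rows)))  # "yes"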
Bayes' theorem is used to calculate the probability of an event based on prior observations that influence the result. In the Bayesian view, a parameter is treated as a random variable, whereas in classical statistics a parameter is treated as a fixed quantity. The theorem is named after the Reverend Thomas Bayes, and it describes the relationship between the conditional probabilities of two events, as expressed in the following formula (Kundu, 2011):
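P(H|X) = P(X|H) P(H) / P(X)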
Here X is a data sample whose class label is not yet known, and H is the hypothesis that the sample X belongs to a particular class C. P(H|X) is the posterior probability, which represents the belief in the hypothesis after X is observed. Conversely, P(H) is the prior probability of H, held before the sample is observed. The posterior probability P(H|X) is thus based on more information than the prior probability P(H). Bayes' theorem provides a way of calculating the posterior probability P(H|X) from the probabilities P(H), P(X), and P(X|H).
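As a worked example of this calculation in Python (all probability values here are hypothetical and chosen only for illustration):

p_h = 0.10          # prior P(H), before the sample is observed
p_x_given_h = 0.80  # likelihood P(X|H)
p_x = 0.25          # evidence P(X)

p_h_given_x = p_x_given_h * p_h / p_x  # posterior P(H|X) by Bayes' theorem
print(p_h_given_x)                     # 0.32: belief in H after observing X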
The Bayes method is a statistical approach for inductive inference on classification problems. We first discuss the basic concepts and definitions of Bayes' theorem, and then use the theorem to perform classification in data mining. The Bayes method uses conditional probabilities as its basis.
2.1.1 Principles of the Bayes Method
The Bayes method provides a straightforward way to incorporate outside information into the data analysis process, by combining the existing data with a prior distribution (Albert, 2009). The method is based on conditional probabilities, as illustrated in the sketch below.
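As a minimal sketch of this principle in Python, assuming a conjugate Beta prior on a success probability (the prior parameters and the observed counts are hypothetical):

alpha, beta = 2.0, 2.0      # Beta(2, 2) prior: outside information mildly favoring p = 0.5
successes, failures = 7, 3  # observed data

# Conjugate update: the prior is combined with the data by simple addition.
alpha_post = alpha + successes
beta_post = beta + failures
posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)       # 0.6428...: prior belief revised by the data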
2.1.2 Classification Techniques
Several classification techniques are in common use (Albert, 2009): the decision tree classifier, the rule-based classifier, neural networks, and naïve Bayes.
Each technique uses a learning algorithm to identify the model that best fits the relationship in the data. An example of Bayesian decision theory is the case of a patient who has difficulty breathing, where the decision to be made is whether the patient suffers from asthma or from lung cancer (Bolstad, 2007); the two decisions are compared by expected cost in the sketch after the following list:
a. Decision 1: stating that the patient has lung cancer when the actual condition is asthma (cost: fairly high, since it frightens the patient and subjects them to unnecessary examinations).
b. Decision 2: stating that the patient has asthma when the actual condition is lung cancer (cost: very high, since the patient loses the opportunity to treat the cancer at an early stage).
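A minimal Python sketch of weighting each error by its cost (the posterior probabilities and cost values are hypothetical):

p_cancer = 0.30            # assumed posterior P(lung cancer | symptoms)
p_asthma = 1.0 - p_cancer  # assumed posterior P(asthma | symptoms)

cost_decision_1 = 10.0     # wrongly declaring cancer: alarm, unnecessary tests
cost_decision_2 = 100.0    # wrongly declaring asthma: missed cancer treatment

# Expected cost of each decision, weighting each error by its probability.
exp_cost_1 = p_asthma * cost_decision_1  # decision 1 errs when asthma is true
exp_cost_2 = p_cancer * cost_decision_2  # decision 2 errs when cancer is true
print("decision 1" if exp_cost_1 < exp_cost_2 else "decision 2")  # decision 1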
2.1.3 Advantages and Disadvantages of the Bayes Method
Disadvantages of the Bayes method include the following: it can only be used for classification problems with supervised learning and categorical data, and it requires prior knowledge in order to make a decision, so the success rate of the method depends on the prior knowledge given. Advantages of the Bayes method include: interpolation, since the method offers choices about how much of the work is done by humans versus computers; language, since the method provides its own language for specifying priors and posteriors; and intuition, since it involves priors and integration, two broadly useful activities.
Bayesian probability is a strong framework for handling estimation problems and drawing conclusions. Bayesian methods can draw conclusions in cases with multiple sources of measurement that cannot be handled by other methods, such as complex hierarchical models (Bolstad, 2007).
2.2 Smooth Support Vector Machine
SVM was introduced by Vapnik in 1992, using a series of well-established concepts in the field of