Generative Engine – #1

This is a parametric probabilistic engine for classification.

Supported Feature Types in the Data Set

1. Quantitative variables, e.g. a price value

2. Categorical variables, e.g. a colour label (red, white, blue, etc.)

Objective Function

Basically, the objective function for fitting the parameters is just MLE (Maximum Likelihood Estimation).
However, this engine can mix categorical and quantitative variables.
To deal with that, the engine uses the following simple approach:

\prod P(D) = \prod P(c_1)\, P(q_1 \mid q_2, c_2) \cdots = \prod_{i=0}^{n} P(c_1) \prod_{c_2 = i} P(q_1 \mid q_2) \cdots

q : quantitative variable
c : categorical variable

P(q_1 \mid q_2, c) = P(q_1, q_2)

For a probabilistic term that depends on a categorical variable, as in the formula above, we only consider the subset of the dataset whose value for that categorical variable equals the conditioning value.
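This subsetting step can be sketched in Python with NumPy (a minimal illustration; the helper name `fit_gaussian_per_category` and the toy data are assumptions, not part of the engine):

```python
import numpy as np

def fit_gaussian_per_category(q, c):
    """For each value of the categorical variable c, fit a 1-D Gaussian
    (mean, variance) using only the rows of q where c takes that value."""
    params = {}
    for value in np.unique(c):
        subset = q[c == value]              # restrict to rows where c == value
        params[value] = (subset.mean(), subset.var())
    return params

# toy data: quantitative values q paired with a categorical variable c
q = np.array([1.0, 1.2, 0.8, 5.0, 5.2, 4.8])
c = np.array(["red", "red", "red", "blue", "blue", "blue"])
params = fit_gaussian_per_category(q, c)
```

Each categorical value gets its own Gaussian, which is exactly how the conditioning on the categorical variable is absorbed during MLE.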

Inference Steps

\text{Inferred label} = \operatorname*{argmax}_{c}\, P(c \mid \text{features})

Depending on the graph structure, the formula expands as follows:

P(c \mid \text{features}) = P(c)\, P(q_1 \mid q_2, c) \cdots = P(c)\, P(q_1, q_2) \cdots

P(q_1 \mid q_2, c) = P(q_1, q_2)

Here, the probabilistic term that depends on a categorical variable can drop that dependency, leaving a term over quantitative variables only. Instead, the categorical part is accounted for when the parameters are fitted via MLE, by restricting each fit to the matching subset of the data.
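The argmax inference rule above can be sketched as follows (a toy illustration; the prior, the per-label Gaussian parameters, and the `gaussian_likelihood` helper are all assumed stand-ins, not the engine's actual code):

```python
import math

def infer_label(features, labels, prior, likelihood):
    """Return argmax over labels c of P(c) * P(features | c)."""
    return max(labels, key=lambda c: prior[c] * likelihood(features, c))

# toy model: one quantitative feature with a Gaussian per label
params = {"red": (1.0, 0.5), "blue": (5.0, 0.5)}   # (mean, std) per label
prior = {"red": 0.5, "blue": 0.5}

def gaussian_likelihood(x, c):
    mean, std = params[c]
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

label = infer_label(1.1, ["red", "blue"], prior, gaussian_likelihood)
# 1.1 is close to the "red" mean, so "red" wins
```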

After expanding the formula, you get the following two kinds of terms:

P(q_i, q_j, \dots) : a term depending only on quantitative variables

P(c_i, c_j, \dots) : a term depending only on categorical variables

For P(q_i, q_j), we simply use a multivariate Gaussian distribution.

For P(c_i, c_j), we simply return 1, which behaves the same as a uniform distribution for this classification engine.
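Fitting and evaluating the multivariate Gaussian term P(q_i, q_j) can be sketched with SciPy (a sketch under assumed toy data; the MLE covariance uses the biased estimator, matching maximum likelihood):

```python
import numpy as np
from scipy.stats import multivariate_normal

# toy quantitative data: rows are samples, columns are q1 and q2
Q = np.array([[1.0, 2.0],
              [1.2, 2.1],
              [0.8, 1.9],
              [1.1, 2.2]])

# MLE for a multivariate Gaussian: sample mean and biased covariance
mean = Q.mean(axis=0)
cov = np.cov(Q, rowvar=False, bias=True)

# P(q1, q2) evaluated at a new point
density = multivariate_normal(mean=mean, cov=cov).pdf([1.0, 2.0])
```

The same fitted density is reused at inference time for every quantitative-only term in the expanded product.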
