Learning algorithms quantify the rule-of-thumb specifications by interpolating the value sets for the relationships defined in them. A learning algorithm is treated as a black box that interpolates a value for a new data point given a previously recorded sample of data points. The rule-of-thumb specifications help in pruning the
learning space. For example, the implication of invoking prefetching is, in general, a function of all the observables. Using the rule-of-thumb specifications, the interpolation of the implication function is restricted to only those observables that the specifications associate with prefetching. Given that the specifications prune the learning space, a natural question is what happens if the rule-of-thumb specifications are incomplete. The
current implementation of Polus does not handle this scenario, because it assumes that the specifications are complete. However, incomplete specifications can be overcome with existing machine-learning approaches such as ``bagging'' [5], which can discover unspecified relationships and add them to the specifications. In
Polus, learning is a combination of off-line training and on-line refinement. Initially, when the management software is installed, learning is an off-line process: the learning algorithms simply record the system state along with the actions invoked by the administrator. After a sufficient number of training data points have been recorded, the learning algorithm switches to an on-line approach in which it keeps refining the interpolation function generated from the training data points.
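To make the black-box view concrete, the off-line phase can be sketched as recording (system state, outcome) samples and interpolating over only the observables retained by the specifications. The observable indices, sample values, and the k-nearest-neighbour scheme below are all illustrative assumptions, not Polus internals:

```python
# Sketch: black-box interpolation over recorded training points.
# All names and values are illustrative, not part of Polus.

RELEVANT = (0, 2)  # indices of observables the rule-of-thumb specs retain

def prune(point):
    """Keep only the observables named in the specifications."""
    return tuple(point[i] for i in RELEVANT)

def interpolate(training, query, k=3):
    """Predict the implication of an action at `query` by averaging
    the outcomes of the k nearest recorded system states."""
    pruned_q = prune(query)
    dist = lambda rec: sum((a - b) ** 2 for a, b in zip(prune(rec[0]), pruned_q))
    nearest = sorted(training, key=dist)[:k]
    return sum(outcome for _, outcome in nearest) / len(nearest)

# Off-line phase: states recorded while the administrator invoked actions.
training = [((1.0, 5.0, 2.0), 0.9), ((1.1, 9.0, 2.1), 0.8),
            ((4.0, 1.0, 7.0), 0.2), ((4.2, 3.0, 7.5), 0.1)]

print(interpolate(training, (1.05, 7.0, 2.05)))
```

Pruning pays off here: the nearest-neighbour distance is computed over two observables instead of all three, so the sample density needed for accurate interpolation is lower.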
This refinement is based on the difference between the interpolated value and the value actually obtained from the invocation (an approach also referred to as reinforcement learning [20]).
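The refinement step can be sketched as an error-driven update: after each invocation, the model is nudged toward the value actually observed. The linear model, learning rate, and sample values below are illustrative assumptions rather than the actual Polus implementation:

```python
# Sketch: on-line refinement of an interpolation function via an
# error-driven (reinforcement-style) delta-rule update.
# Model form, learning rate, and data are illustrative assumptions.

def predict(weights, state):
    """Interpolated value for a (pruned) system state."""
    return sum(w * x for w, x in zip(weights, state))

def refine(weights, state, observed, rate=0.5):
    """Nudge the model toward the value actually obtained from the
    invocation, in proportion to the interpolation error."""
    error = observed - predict(weights, state)
    return [w + rate * error * x for w, x in zip(weights, state)]

# On-line phase: each invocation yields (system state, observed value).
weights = [0.0, 0.0]
for state, observed in [((1.0, 0.5), 0.9), ((0.2, 0.8), 0.3)] * 50:
    weights = refine(weights, state, observed)
```

After repeated invocations the interpolated values converge toward the observed ones, which is the sense in which the on-line phase keeps refining the function produced by off-line training.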