Controls the deployment information of a processing unit. Allows setting the deployment topology
(including the cluster schema, number of instances and number of backups), as well as controlling
relocation or scaling policies based on predefined monitors, and controlling the requirements of where a
processing unit can be deployed.
The scale-up policy causes a processing unit instance to be created when the policy's
associated monitor breaches its threshold values.
The name of the monitor (registered under the monitors section) whose value
controls whether the policy is breached.
The low threshold value of the policy.
The high threshold value of the policy.
The lower dampener acts as a time window: if the lower threshold is cleared within this
window after being breached, the policy action is not triggered. Set in milliseconds;
defaults to 3000.
The upper dampener acts as a time window: if the upper threshold is cleared within this
window after being breached, the policy action is not triggered. Set in milliseconds;
defaults to 3000.
The maximum number of instances this scaling policy will scale to. The minimum
is the number of instances.
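A scale-up policy is typically declared inside the SLA definition. The following is a sketch using the `os-sla` Spring namespace; the monitor name `connectedClients` is a hypothetical monitor registered under the monitors section, and exact attribute names may vary by product version:

```xml
<os-sla:sla>
    <!-- Create a new instance (up to 4 in total) when the monitored value
         breaches the high threshold; the dampeners give a 3s window for the
         value to clear the threshold before the action fires -->
    <os-sla:scale-up-policy monitor="connectedClients"
                            low="100" high="500"
                            lower-dampener="3000" upper-dampener="3000"
                            max-instances="4"/>
</os-sla:sla>
```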
The relocation policy causes a processing unit instance to relocate when the policy's
associated monitor breaches its threshold values. Relocation means that the processing
unit will be removed from its current grid container and moved to a new one (that meets
its requirements).
The name of the monitor whose value controls whether the policy is breached. Can either
be one of the built-in monitors or one of the monitors registered under the
monitors section.
The low threshold value of the policy.
The high threshold value of the policy.
The lower dampener acts as a time window: if the lower threshold is cleared within this
window after being breached, the policy action is not triggered. Set in milliseconds;
defaults to 3000.
The upper dampener acts as a time window: if the upper threshold is cleared within this
window after being breached, the policy action is not triggered. Set in milliseconds;
defaults to 3000.
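A relocation policy declaration might look as follows (a sketch using the `os-sla` Spring namespace; `CPU` is assumed here to be one of the built-in monitors, and exact attribute names may vary):

```xml
<os-sla:sla>
    <!-- Relocate the instance to another grid container (that meets its
         requirements) when CPU usage leaves the 20-80 range -->
    <os-sla:relocation-policy monitor="CPU" low="20" high="80"
                              lower-dampener="3000" upper-dampener="3000"/>
</os-sla:sla>
```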
The member alive indicator allows configuring the SLA for how often a member will be checked
to see if it is alive, and, in case of failure, how many times to retry and how often.
How often an instance will be checked and verified to be alive. In milliseconds;
defaults to 5000 (5 seconds).
Once a member has been indicated as not alive, how many times to check it before
giving up on it. Defaults to 3.
Once a member has been indicated as not alive, what is the retry timeout interval.
In milliseconds; defaults to 500.
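The three member-alive settings above can be sketched together as follows (using the `os-sla` Spring namespace; attribute names are assumptions and may vary by product version):

```xml
<os-sla:sla>
    <!-- Check liveness every 5 seconds; if a check fails, retry 3 times at
         500ms intervals before giving up on the member -->
    <os-sla:member-alive-indicator invocation-delay="5000"
                                   retry-count="3"
                                   retry-timeout="500"/>
</os-sla:sla>
```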
One or more monitors that can be defined to monitor the execution of a processing unit. They can
also control the relocation policy.
The bean property monitor allows registering a Spring bean reference and a
property name, which will be monitored at a scheduled interval by invoking its getter.
The monitor name that will be used when displayed or referenced in the
policy.
The bean reference (id) that will be monitored.
The property name of the given bean that will be monitored. Note, the
method invoked will be 'get[Property Name]'.
The period at which this monitor will be sampled (in milliseconds). Defaults to 5000
(5 seconds).
The history log size that will be kept for this monitor. Defaults to
100.
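A bean property monitor might be registered as follows (a sketch using the `os-sla` Spring namespace; the bean id `dataProcessor` and property `processedCount` are hypothetical, and exact attribute names may vary):

```xml
<os-sla:sla>
    <os-sla:monitors>
        <!-- Samples dataProcessor.getProcessedCount() every 5 seconds,
             keeping the last 100 sampled values -->
        <os-sla:bean-property-monitor name="processedCount"
                                      ref="dataProcessor"
                                      property-name="processedCount"
                                      period="5000"
                                      history-size="100"/>
    </os-sla:monitors>
</os-sla:sla>
```

The registered name (`processedCount`) is what a scale-up or relocation policy refers to in its `monitor` attribute.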
Sets the number of instances this processing unit will have. Default value is 1. Note, when
specifying a value higher than 1, make sure the cluster schema provided supports such a topology.
The number of backups per instance this processing unit will have. Defaults to 0. Note, when
specifying a value higher than 0 make sure the cluster schema provided supports such a topology.
This value mainly applies when deploying a processing unit that starts an embedded space.
The cluster schema this processing unit will be deployed under. This value mainly applies when
deploying a processing unit that starts an embedded space.
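The topology settings above combine on the SLA element itself. A sketch of a partitioned deployment with backups (the cluster schema name shown is an assumption; use one supported by your grid):

```xml
<!-- Two partitions, each with one backup: 2 primary instances + 2 backup
     instances, deployed under the partitioned-sync2backup cluster schema -->
<os-sla:sla cluster-schema="partitioned-sync2backup"
            number-of-instances="2"
            number-of-backups="1"/>
```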
Controls how many instances can be deployed per VM. When using a topology without backups, this
controls the number of instances counted across all the processing unit instances. When working with a
topology that has backups, it controls the number of instances of a partition and its backups per VM.
Controls how many instances can be deployed per machine. When using a topology without backups,
this controls the number of instances counted across all the processing unit instances. When working
with a topology that has backups, it controls the number of instances of a partition and its backups per
machine.
Controls how many instances can be deployed per zone. When using a topology without backups,
this controls the number of instances counted across all the processing unit instances. When working
with a topology that has backups, it controls the number of instances of a partition and its backups per
zone. The format for a single zone is: "zone1/1"; for several zones: "zoneX/1,zoneY/2".
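The per-VM, per-machine, and per-zone limits can be sketched together on the SLA element (attribute names follow the `os-sla` namespace conventions but are assumptions; the zone names are hypothetical):

```xml
<!-- A primary and its backup may never share a VM or a machine; at most one
     instance in zoneX and at most two in zoneY -->
<os-sla:sla cluster-schema="partitioned-sync2backup"
            number-of-instances="2" number-of-backups="1"
            max-instances-per-vm="1"
            max-instances-per-machine="1"
            max-instances-per-zone="zoneX/1,zoneY/2"/>
```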
Defines the zone where primary instances should
be allocated.
Allows defining a set of requirements that control where a processing unit can be deployed.
Controls the IP address of the machine a processing unit can be deployed to.
Controls the zone the GSC must belong to for the processing unit to be deployed to it.
Controls, based on the value of a built-in monitor, whether a processing unit can be deployed.
The monitor name to get the value from.
The low threshold; a value beneath it will cause the processing unit not to deploy.
The high threshold; a value above it will cause the processing unit not to deploy.
A CPU monitor that controls whether a processing unit can be deployed.
CPU values beneath this value will cause the processing unit not to deploy.
CPU values above this value will cause the processing unit not to deploy.
A memory monitor that controls whether a processing unit can be deployed.
Memory values beneath this value will cause the processing unit not to
deploy.
Memory values above this value will cause the processing unit not to deploy.
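A requirements block combining the host, zone, CPU, and memory requirements might be declared as follows (a sketch using the `os-sla` Spring namespace; the IP, zone name, and threshold fractions are hypothetical, and exact element names may vary):

```xml
<os-sla:sla>
    <os-sla:requirements>
        <!-- Only deploy to this machine, in a GSC belonging to zone1 -->
        <os-sla:host ip="192.168.0.1"/>
        <os-sla:zone name="zone1"/>
        <!-- Do not deploy while CPU usage is above 90% or memory above 80% -->
        <os-sla:cpu high=".9"/>
        <os-sla:memory high=".8"/>
    </os-sla:requirements>
</os-sla:sla>
```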
System requirements use custom attributes specified in the processing unit container; this
deployment references them to determine whether it can be deployed there.
A set of one or more key-value pairs (configured similarly to a Map) of attributes to
match against.
The name of the system attribute.
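A system requirement matching custom container attributes might be sketched as follows (element and attribute names here are assumptions based on the `os-sla` namespace conventions; the `storage`/`type` attribute pair is hypothetical):

```xml
<os-sla:sla>
    <os-sla:requirements>
        <!-- Only deploy to containers that declare a "storage" system
             attribute whose "type" entry matches "ssd" -->
        <os-sla:system name="storage">
            <os-sla:attributes>
                <entry key="type" value="ssd"/>
            </os-sla:attributes>
        </os-sla:system>
    </os-sla:requirements>
</os-sla:sla>
```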