ml.dmlc.xgboost4j.scala.spark.params

BoosterParams

trait BoosterParams extends Params

Self Type
XGBoostEstimator
Linear Supertypes
Params, Serializable, Serializable, Identifiable, AnyRef, Any

Abstract Value Members

  1. abstract def copy(extra: ParamMap): Params

    Definition Classes
    Params
  2. abstract val uid: String

    Definition Classes
    Identifiable

Concrete Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def $[T](param: Param[T]): T

    Attributes
    protected
    Definition Classes
    Params
  4. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  5. val alpha: DoubleParam

    L1 regularization term on weights; increasing this value makes the model more conservative. [default=0] (A combined regularization sketch appears under lambdaBias below.)

  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. val boosterType: Param[String]

    Booster to use, options: {'gbtree', 'gblinear', 'dart'}
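
    For illustration, a hedged sketch of selecting a booster through the generic set(param, value) shown at entry 49 below (public on this trait). The Map-based constructor is an assumption based on xgboost4j-spark 0.7.x and may differ in other versions; the later sketches on this page reuse this estimator value.

      import ml.dmlc.xgboost4j.scala.spark.XGBoostEstimator

      // Assumed constructor: a Map of raw xgboost parameters (hypothetical here).
      val estimator = new XGBoostEstimator(Map("objective" -> "binary:logistic"))

      // boosterType is the public Param[String] declared above;
      // set(...) is inherited from org.apache.spark.ml.param.Params.
      estimator.set(estimator.boosterType, "gbtree")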

  8. final def clear(param: Param[_]): BoosterParams.this.type

    Definition Classes
    Params
  9. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  10. val colSampleByLevel: DoubleParam

    Subsample ratio of columns for each split, in each level. [default=1] range: (0,1] (See the combined sampling sketch under subSample below.)

  11. val colSampleByTree: DoubleParam

    Subsample ratio of columns when constructing each tree. [default=1] range: (0,1]

  12. def copyValues[T <: Params](to: T, extra: ParamMap): T

    Attributes
    protected
    Definition Classes
    Params
  13. final def defaultCopy[T <: Params](extra: ParamMap): T

    Attributes
    protected
    Definition Classes
    Params
  14. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  16. val eta: DoubleParam

    Step size shrinkage used in updates to prevent overfitting. After each boosting step we can directly get the weights of new features, and eta shrinks the feature weights to make the boosting process more conservative. [default=0.3] range: [0,1]
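
    As a worked illustration of the shrinkage rule eta implements (the boosting update pred_t = pred_{t-1} + eta * f_t(x)); all numbers are arbitrary:

      // With eta = 0.3, each new tree contributes only 30% of its fitted output,
      // so later trees can keep correcting earlier ones.
      val eta = 0.3
      val previousPrediction = 1.2 // hypothetical running prediction for one instance
      val newTreeOutput = 0.5      // hypothetical leaf value from the newest tree
      val updated = previousPrediction + eta * newTreeOutput // 1.2 + 0.15 = 1.35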

  17. def explainParam(param: Param[_]): String

    Definition Classes
    Params
  18. def explainParams(): String

    Explains all params of this instance. See explainParam().

    Definition Classes
    BoosterParams → Params
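
    A usage sketch, reusing the hypothetical estimator from the boosterType entry above:

      // One line per parameter: name, doc string, and current (or default) value.
      println(estimator.explainParams())
      // A single parameter can be inspected with explainParam (entry 17):
      println(estimator.explainParam(estimator.maxDepth))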
  19. final def extractParamMap(): ParamMap

    Definition Classes
    Params
  20. final def extractParamMap(extra: ParamMap): ParamMap

    Definition Classes
    Params
  21. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  22. val gamma: DoubleParam

    Minimum loss reduction required to make a further partition on a leaf node of the tree. The larger, the more conservative the algorithm will be. [default=0] range: [0, Double.MaxValue] (See the combined tree-growth sketch under minChildWeight below.)

  23. final def get[T](param: Param[T]): Option[T]

    Definition Classes
    Params
  24. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  25. final def getDefault[T](param: Param[T]): Option[T]

    Definition Classes
    Params
  26. final def getOrDefault[T](param: Param[T]): T

    Definition Classes
    Params
  27. def getParam(paramName: String): Param[Any]

    Definition Classes
    Params
  28. final def hasDefault[T](param: Param[T]): Boolean

    Definition Classes
    Params
  29. def hasParam(paramName: String): Boolean

    Definition Classes
    Params
  30. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  31. final def isDefined(param: Param[_]): Boolean

    Definition Classes
    Params
  32. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  33. final def isSet(param: Param[_]): Boolean

    Definition Classes
    Params
  34. val lambda: DoubleParam

    L2 regularization term on weights; increasing this value makes the model more conservative. [default=1]

  35. val lambdaBias: DoubleParam

    Parameter of the linear booster: L2 regularization term on bias. [default=0] (no regularization on bias by default, because it is not important)
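
    A hedged sketch combining the three regularization terms (alpha from entry 5, lambda, and lambdaBias), reusing the hypothetical estimator from the boosterType entry; the values are arbitrary:

      // Stronger L1 (alpha) pushes weights to exactly zero (sparsity);
      // stronger L2 (lambda) shrinks all weights smoothly toward zero.
      estimator.set(estimator.alpha, 0.5)
      estimator.set(estimator.lambda, 1.5)
      // lambdaBias only applies to the linear booster:
      estimator.set(estimator.boosterType, "gblinear")
      estimator.set(estimator.lambdaBias, 0.1)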

  36. val maxDeltaStep: DoubleParam

    Maximum delta step we allow each tree's weight estimation to be. If the value is set to 0, there is no constraint. If it is set to a positive value, it can help make the update step more conservative. Usually this parameter is not needed, but it might help in logistic regression when the classes are extremely imbalanced. Setting it to a value of 1-10 might help control the update. [default=0] range: [0, Double.MaxValue]

  37. val maxDepth: IntParam

    Maximum depth of a tree; increasing this value makes the model more complex and more likely to overfit. [default=6] range: [1, Int.MaxValue]

  38. val minChildWeight: DoubleParam

    Minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node whose sum of instance weight is less than min_child_weight, the building process gives up further partitioning. In linear regression mode, this simply corresponds to the minimum number of instances needed in each node. The larger, the more conservative the algorithm will be. [default=1] range: [0, Double.MaxValue]
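
    For illustration, a conservative tree-growth configuration combining gamma (entry 22), maxDepth, and minChildWeight, reusing the hypothetical estimator from the boosterType entry; the values are arbitrary:

      estimator.set(estimator.maxDepth, 4)         // grow shallower trees
      estimator.set(estimator.gamma, 1.0)          // require a loss reduction of 1.0 per split
      estimator.set(estimator.minChildWeight, 5.0) // require a hessian sum of 5.0 per child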

  39. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  40. val normalizeType: Param[String]

    Parameter of the Dart booster: type of normalization algorithm, options: {'tree', 'forest'}. [default="tree"] (See the combined Dart sketch under skipDrop below.)

  41. final def notify(): Unit

    Definition Classes
    AnyRef
  42. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  43. lazy val params: Array[Param[_]]

    Definition Classes
    Params
  44. val rateDrop: DoubleParam

    Parameter of the Dart booster: dropout rate. [default=0.0] range: [0.0, 1.0]

  45. val sampleType: Param[String]

    Parameter of the Dart booster: type of sampling algorithm. "uniform": dropped trees are selected uniformly; "weighted": dropped trees are selected in proportion to weight. [default="uniform"]

  46. val scalePosWeight: DoubleParam

    Controls the balance of positive and negative weights; useful for unbalanced classes. A typical value to consider: sum(negative cases) / sum(positive cases). [default=0]
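
    A hedged sketch of the "typical value" rule above, assuming a DataFrame named train with a 0/1 label column (both the name and the schema are assumptions, not part of this API); reuses the hypothetical estimator from the boosterType entry:

      // scale_pos_weight = sum(negative cases) / sum(positive cases)
      val numNegative = train.filter("label = 0").count().toDouble
      val numPositive = train.filter("label = 1").count().toDouble
      estimator.set(estimator.scalePosWeight, numNegative / numPositive)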

  47. final def set(paramPair: ParamPair[_]): BoosterParams.this.type

    Attributes
    protected
    Definition Classes
    Params
  48. final def set(param: String, value: Any): BoosterParams.this.type

    Attributes
    protected
    Definition Classes
    Params
  49. final def set[T](param: Param[T], value: T): BoosterParams.this.type

    Definition Classes
    Params
  50. final def setDefault(paramPairs: ParamPair[_]*): BoosterParams.this.type

    Attributes
    protected
    Definition Classes
    Params
  51. final def setDefault[T](param: Param[T], value: T): BoosterParams.this.type

    Attributes
    protected
    Definition Classes
    Params
  52. val sketchEps: DoubleParam

    Only used by the approximate greedy algorithm. It roughly translates into O(1 / sketch_eps) bins; compared with directly selecting the number of bins, this comes with a theoretical guarantee on sketch accuracy. [default=0.03] range: (0, 1) (See the sketch under treeMethod below.)

  53. val skipDrop: DoubleParam

    Parameter of the Dart booster: probability of skipping dropout. If a dropout is skipped, new trees are added in the same manner as gbtree. [default=0.0] range: [0.0, 1.0]
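
    Pulling the four Dart parameters together (normalizeType, rateDrop, sampleType, skipDrop) in one hedged sketch; they only take effect when the booster is 'dart'. Reuses the hypothetical estimator from the boosterType entry:

      estimator.set(estimator.boosterType, "dart")
      estimator.set(estimator.sampleType, "weighted") // drop trees in proportion to weight
      estimator.set(estimator.normalizeType, "forest")
      estimator.set(estimator.rateDrop, 0.1)          // drop 10% of trees per boosting round
      estimator.set(estimator.skipDrop, 0.5)          // skip dropout on half of the rounds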

  54. val subSample: DoubleParam

    Subsample ratio of the training instances. Setting it to 0.5 means that XGBoost randomly collects half of the data instances to grow trees, which prevents overfitting. [default=1] range: (0,1]
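
    A hedged stochastic-training sketch combining subSample with colSampleByTree and colSampleByLevel (entries 10-11), reusing the hypothetical estimator from the boosterType entry; each tree then sees a random subset of rows and columns:

      estimator.set(estimator.subSample, 0.5)        // each tree sees 50% of the rows
      estimator.set(estimator.colSampleByTree, 0.8)  // 80% of the columns per tree
      estimator.set(estimator.colSampleByLevel, 0.8) // a further 80% at each tree level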

  55. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  56. def toString(): String

    Definition Classes
    Identifiable → AnyRef → Any
  57. val treeMethod: Param[String]

    The tree construction algorithm used in XGBoost. Options: {'auto', 'exact', 'approx'} [default='auto']
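
    A hedged sketch tying treeMethod to sketchEps (entry 52): with the 'approx' method, sketch_eps = 0.03 translates to roughly 1 / 0.03 ≈ 33 candidate bins per feature. Reuses the hypothetical estimator from the boosterType entry:

      estimator.set(estimator.treeMethod, "approx")
      estimator.set(estimator.sketchEps, 0.03) // ~ 1 / 0.03 ≈ 33 bins per feature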

  58. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  59. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  60. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def validateParams(): Unit

    Definition Classes
    Params
    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.0) Will be removed in 2.1.0. Checks should be merged into transformSchema.
