% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lightgbm.R
\name{lgb_shared_params}
\alias{lgb_shared_params}
\title{Shared parameter docs}
\arguments{
\item{callbacks}{List of callback functions that are applied at each iteration.}

\item{data}{a \code{lgb.Dataset} object, used for training. Some functions, such as \code{\link{lgb.cv}},
may allow you to pass other types of data like \code{matrix} and then separately supply
\code{label} as a keyword argument.}

\item{early_stopping_rounds}{int. Activates early stopping. When this parameter is non-null,
training will stop if the evaluation of any metric on any validation set
fails to improve for \code{early_stopping_rounds} consecutive boosting rounds.
If training stops early, the returned model will have attribute \code{best_iter}
set to the iteration number of the best iteration.}

\item{eval}{evaluation function(s). This can be a character vector, function, or list with
a mixture of strings and functions.

\itemize{
\item{\bold{a. character vector}:
If you provide a character vector to this argument, it should contain strings with valid
evaluation metrics.
See \href{https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric}{
the "metric" section of the documentation}
for a list of valid metrics.
}
\item{\bold{b. function}:
You can provide a custom evaluation function. This
should accept the keyword arguments \code{preds} and \code{dtrain} and should return a named
list with three elements:
\itemize{
\item{\code{name}: A string with the name of the metric, used for printing
and storing results.
}
\item{\code{value}: A single number indicating the value of the metric for the
given predictions and true values.
}
\item{\code{higher_better}: A boolean indicating whether higher values indicate a better fit.
For example, this would be \code{FALSE} for metrics like MAE or RMSE.
}
}
}
\item{\bold{c. list}:
If a list is given, it should only contain character vectors and functions.
These should follow the requirements from the descriptions above.
}
}

A sketch of a custom evaluation function is given in the section "Early Stopping" below.}

\item{eval_freq}{evaluation output frequency, only effective when \code{verbose} > 0 and
\code{valids} has been provided}

\item{init_model}{path of model file or \code{lgb.Booster} object, will continue training from this model}

\item{nrounds}{number of training rounds}

\item{obj}{objective function, can be character or custom objective function. Examples include
\code{regression}, \code{regression_l1}, \code{huber}, \code{binary}, \code{lambdarank}, \code{multiclass}}

\item{params}{a list of parameters. See
\href{https://lightgbm.readthedocs.io/en/latest/Parameters.html}{
the "Parameters" section of the documentation}
for a list of parameters and valid values.}

\item{verbose}{verbosity for output. If <= 0 and \code{valids} has been provided, the printing
of evaluation results during training will also be disabled.}

\item{serializable}{whether to make the resulting objects serializable through functions such as
\code{save} or \code{saveRDS} (see section "Model serialization").}
}
\description{
Parameter docs shared by \code{lgb.train}, \code{lgb.cv}, and \code{lightgbm}
}
\section{Early Stopping}{

"early stopping" refers to stopping the training process if the model's performance on a given
validation set does not improve for several consecutive iterations.

If multiple arguments are given to \code{eval}, their order will be preserved.
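As an illustration of the custom evaluation interface described under \code{eval} above, the
following is a minimal sketch that trains with a custom MAE metric and early stopping. It assumes
a recent 'lightgbm' (4.x or later), in which \code{get_field} retrieves the label from an
\code{lgb.Dataset}; the metric name \code{custom_mae} is purely illustrative.

\preformatted{library(lightgbm)

# small train / validation split on a toy regression problem
X <- as.matrix(mtcars[, -1L])
y <- mtcars$mpg
dtrain <- lgb.Dataset(X[1:25, ], label = y[1:25])
dvalid <- lgb.Dataset.create.valid(dtrain, X[26:32, ], label = y[26:32])

# custom evaluation function: accepts 'preds' and 'dtrain' and returns a
# named list with elements 'name', 'value', and 'higher_better'
mae_metric <- function(preds, dtrain) {
  labels <- get_field(dtrain, "label")
  list(
    name = "custom_mae"
    , value = mean(abs(preds - labels))
    , higher_better = FALSE  # lower MAE indicates a better fit
  )
}

model <- lgb.train(
  params = list(objective = "regression", verbosity = -1L)
  , data = dtrain
  , nrounds = 100L
  , valids = list(valid = dvalid)
  , eval = mae_metric
  , early_stopping_rounds = 5L
)

# iteration with the best validation value of the metric
model$best_iter
}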
If you enable early stopping by setting \code{early_stopping_rounds} in \code{params}, by default all
metrics will be considered for early stopping.

If you want to only consider the first metric for early stopping, pass \code{first_metric_only = TRUE}
in \code{params}. Note that if you also specify \code{metric} in \code{params}, that metric will be
considered the "first" one. If you omit \code{metric}, a default metric will be used based on your
choice for the parameter \code{obj} (keyword argument) or \code{objective} (passed into \code{params}).
}

\section{Model serialization}{

LightGBM model objects can be serialized and de-serialized through functions such as \code{save}
or \code{saveRDS}, but, similarly to libraries such as 'xgboost', serialization works a bit
differently from typical R objects. In order to make models serializable in R, a copy of the
underlying C++ object as serialized raw bytes is produced and stored in the R model object. When
that R object is de-serialized, the underlying C++ model object gets reconstructed from these raw
bytes, but the reconstruction only happens once some function that uses it is called, such as
\code{predict}.

In order to forcibly reconstruct the C++ object after deserialization (e.g. after calling
\code{readRDS} or similar), one can use the function \link{lgb.restore_handle} (for example, if one
makes predictions in parallel or in forked processes, it will be faster to restore the handle
beforehand).

Producing and keeping these raw bytes, however, uses extra memory. If they are not required, it is
possible to avoid producing them by passing \code{serializable = FALSE}. In such cases, these raw
bytes can be added to the model on demand through the function \link{lgb.make_serializable}.

\emph{New in version 4.0.0}
}

\keyword{internal}
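\section{Serialization example}{

The sketch below illustrates the behaviour described in section "Model serialization": it
round-trips a model through \code{saveRDS} / \code{readRDS}, restores the C++ handle eagerly with
\link{lgb.restore_handle}, and shows \code{serializable = FALSE} together with
\link{lgb.make_serializable}. It uses only functions referenced in this documentation; the
\code{agaricus.train} example data ships with 'lightgbm'.

\preformatted{library(lightgbm)

data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
model <- lgb.train(
  params = list(objective = "binary", verbosity = -1L)
  , data = dtrain
  , nrounds = 5L
)

# round-trip through saveRDS / readRDS; the C++ handle is rebuilt
# lazily, the first time a function such as predict() needs it
model_file <- tempfile(fileext = ".rds")
saveRDS(model, model_file)
model2 <- readRDS(model_file)

# optionally rebuild the handle eagerly, e.g. before predicting in
# parallel or in forked processes
lgb.restore_handle(model2)

# skip producing the raw bytes to save memory, then add them on demand
model_light <- lgb.train(
  params = list(objective = "binary", verbosity = -1L)
  , data = dtrain
  , nrounds = 5L
  , serializable = FALSE
)
lgb.make_serializable(model_light)
}
}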