Closed
Carried over from #3484 (comment).
{lightgbm} 3.1.0 was recently submitted to CRAN. It passed checks for many combinations of compiler, operating system, and R version, but is failing on CRAN's r-oldrel-windows-ix86+x86_64 check. The details of this check flavor can be found here: https://cran.r-project.org/web/checks/check_flavors.html#r-oldrel-windows-ix86_x86_64.
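For anyone who wants to reproduce the failing step locally, the error comes from the package's testthat suite, which `R CMD check` runs via `tests/testthat.R`. A minimal sketch of that invocation (assuming {lightgbm} and {testthat} are already installed, ideally under R 3.6.x to match the oldrel flavor):

```r
# Run the same test entry point that R CMD check executes from
# tests/testthat.R in the lightgbm R package.
library(testthat)
library(lightgbm)

test_check(
    package = "lightgbm"
    , stop_on_failure = TRUE
    , stop_on_warning = FALSE
)
```

Note that the CRAN failure is specific to the 32-bit ('i386') architecture of the r-oldrel Windows check, so a 64-bit local run may pass even when this check fails.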
Full logs from the failing CRAN check:
using R version 3.6.3 (2020-02-29)
using platform: x86_64-w64-mingw32 (64-bit)
using session charset: ISO8859-1
checking for file 'lightgbm/DESCRIPTION' ... OK
checking extension type ... Package
this is package 'lightgbm' version '3.1.0'
package encoding: UTF-8
checking package namespace information ... OK
checking package dependencies ... OK
checking if this is a source package ... OK
checking if there is a namespace ... OK
checking for hidden files and directories ... OK
checking for portable file names ... OK
checking whether package 'lightgbm' can be installed ... OK
checking installed package size ... NOTE
installed size is 7.8Mb
sub-directories of 1Mb or more:
libs 7.2Mb
checking package directory ... OK
checking DESCRIPTION meta-information ... OK
checking top-level files ... OK
checking for left-over files ... OK
checking index information ... OK
checking package subdirectories ... OK
checking R files for non-ASCII characters ... OK
checking R files for syntax errors ... OK
loading checks for arch 'i386'
checking whether the package can be loaded ... OK
checking whether the package can be loaded with stated dependencies ... OK
checking whether the package can be unloaded cleanly ... OK
checking whether the namespace can be loaded with stated dependencies ... OK
checking whether the namespace can be unloaded cleanly ... OK
checking loading without being on the library search path ... OK
checking use of S3 registration ... OK
loading checks for arch 'x64'
checking whether the package can be loaded ... OK
checking whether the package can be loaded with stated dependencies ... OK
checking whether the package can be unloaded cleanly ... OK
checking whether the namespace can be loaded with stated dependencies ... OK
checking whether the namespace can be unloaded cleanly ... OK
checking loading without being on the library search path ... OK
checking use of S3 registration ... OK
checking dependencies in R code ... OK
checking S3 generic/method consistency ... OK
checking replacement functions ... OK
checking foreign function calls ... OK
checking R code for possible problems ... [10s] OK
checking Rd files ... OK
checking Rd metadata ... OK
checking Rd cross-references ... OK
checking for missing documentation entries ... OK
checking for code/documentation mismatches ... OK
checking Rd \usage sections ... OK
checking Rd contents ... OK
checking for unstated dependencies in examples ... OK
checking contents of 'data' directory ... OK
checking data for non-ASCII characters ... OK
checking data for ASCII and uncompressed saves ... OK
checking line endings in shell scripts ... OK
checking line endings in C/C++/Fortran sources/headers ... OK
checking line endings in Makefiles ... OK
checking compilation flags in Makevars ... OK
checking for GNU extensions in Makefiles ... OK
checking for portable use of $(BLAS_LIBS) and $(LAPACK_LIBS) ... OK
checking pragmas in C/C++ headers and code ... OK
checking compiled code ... OK
checking examples ...
running examples for arch 'i386' ... [6s] OK
running examples for arch 'x64' ... [6s] OK
checking for unstated dependencies in 'tests' ... OK
checking tests ...
running tests for arch 'i386' ... [20s] ERROR
Running 'testthat.R' [19s]
Running the tests in 'tests/testthat.R' failed.
Complete output:
> library(testthat)
> library(lightgbm)
Loading required package: R6
>
> test_check(
+ package = "lightgbm"
+ , stop_on_failure = TRUE
+ , stop_on_warning = FALSE
+ )
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003420 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.314167 test's binary_logloss:0.317777"
[1] "[2]: train's binary_logloss:0.187654 test's binary_logloss:0.187981"
[1] "[3]: train's binary_logloss:0.109209 test's binary_logloss:0.109949"
[1] "[4]: train's binary_logloss:0.0755423 test's binary_logloss:0.0772008"
[1] "[5]: train's binary_logloss:0.0528045 test's binary_logloss:0.0533291"
[1] "[6]: train's binary_logloss:0.0395797 test's binary_logloss:0.0380824"
[1] "[7]: train's binary_logloss:0.0287269 test's binary_logloss:0.0255364"
[1] "[8]: train's binary_logloss:0.0224443 test's binary_logloss:0.0195616"
[1] "[9]: train's binary_logloss:0.016621 test's binary_logloss:0.017834"
[1] "[10]: train's binary_logloss:0.0112055 test's binary_logloss:0.0125538"
[1] "[11]: train's binary_logloss:0.00759638 test's binary_logloss:0.00842372"
[1] "[12]: train's binary_logloss:0.0054887 test's binary_logloss:0.00631812"
[1] "[13]: train's binary_logloss:0.00399548 test's binary_logloss:0.00454944"
[1] "[14]: train's binary_logloss:0.00283135 test's binary_logloss:0.00323724"
[1] "[15]: train's binary_logloss:0.00215378 test's binary_logloss:0.00256697"
[1] "[16]: train's binary_logloss:0.00156723 test's binary_logloss:0.00181753"
[1] "[17]: train's binary_logloss:0.00120077 test's binary_logloss:0.00144437"
[1] "[18]: train's binary_logloss:0.000934889 test's binary_logloss:0.00111807"
[1] "[19]: train's binary_logloss:0.000719878 test's binary_logloss:0.000878304"
[1] "[20]: train's binary_logloss:0.000558692 test's binary_logloss:0.000712272"
[1] "[21]: train's binary_logloss:0.000400916 test's binary_logloss:0.000492223"
[1] "[22]: train's binary_logloss:0.000315938 test's binary_logloss:0.000402804"
[1] "[23]: train's binary_logloss:0.000238113 test's binary_logloss:0.000288682"
[1] "[24]: train's binary_logloss:0.000190248 test's binary_logloss:0.000237835"
[1] "[25]: train's binary_logloss:0.000148322 test's binary_logloss:0.000174674"
[1] "[26]: train's binary_logloss:0.000120581 test's binary_logloss:0.000139513"
[1] "[27]: train's binary_logloss:0.000102756 test's binary_logloss:0.000118804"
[1] "[28]: train's binary_logloss:7.83011e-05 test's binary_logloss:8.40978e-05"
[1] "[29]: train's binary_logloss:6.29191e-05 test's binary_logloss:6.8803e-05"
[1] "[30]: train's binary_logloss:5.28039e-05 test's binary_logloss:5.89864e-05"
[1] "[31]: train's binary_logloss:4.51561e-05 test's binary_logloss:4.91874e-05"
[1] "[32]: train's binary_logloss:3.89402e-05 test's binary_logloss:4.13015e-05"
[1] "[33]: train's binary_logloss:3.24434e-05 test's binary_logloss:3.52605e-05"
[1] "[34]: train's binary_logloss:2.65255e-05 test's binary_logloss:2.86338e-05"
[1] "[35]: train's binary_logloss:2.19277e-05 test's binary_logloss:2.3937e-05"
[1] "[36]: train's binary_logloss:1.86469e-05 test's binary_logloss:2.05375e-05"
[1] "[37]: train's binary_logloss:1.49881e-05 test's binary_logloss:1.53852e-05"
[1] "[38]: train's binary_logloss:1.2103e-05 test's binary_logloss:1.20722e-05"
[1] "[39]: train's binary_logloss:1.02027e-05 test's binary_logloss:1.0578e-05"
[1] "[40]: train's binary_logloss:8.91561e-06 test's binary_logloss:8.8323e-06"
[1] "[41]: train's binary_logloss:7.4855e-06 test's binary_logloss:7.58441e-06"
[1] "[42]: train's binary_logloss:6.21179e-06 test's binary_logloss:6.14299e-06"
[1] "[43]: train's binary_logloss:5.06413e-06 test's binary_logloss:5.13576e-06"
[1] "[44]: train's binary_logloss:4.2029e-06 test's binary_logloss:4.53605e-06"
[1] "[45]: train's binary_logloss:3.47042e-06 test's binary_logloss:3.73234e-06"
[1] "[46]: train's binary_logloss:2.78181e-06 test's binary_logloss:3.02556e-06"
[1] "[47]: train's binary_logloss:2.19819e-06 test's binary_logloss:2.3666e-06"
[1] "[48]: train's binary_logloss:1.80519e-06 test's binary_logloss:1.92932e-06"
[1] "[49]: train's binary_logloss:1.50192e-06 test's binary_logloss:1.64658e-06"
[1] "[50]: train's binary_logloss:1.20212e-06 test's binary_logloss:1.33316e-06"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003030 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_error:0.0222632"
[1] "[2]: train's binary_error:0.0222632"
[1] "[3]: train's binary_error:0.0222632"
[1] "[4]: train's binary_error:0.0109013"
[1] "[5]: train's binary_error:0.0141256"
[1] "[6]: train's binary_error:0.0141256"
[1] "[7]: train's binary_error:0.0141256"
[1] "[8]: train's binary_error:0.0141256"
[1] "[9]: train's binary_error:0.00598802"
[1] "[10]: train's binary_error:0.00598802"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000741 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 98
[LightGBM] [Info] Number of data points in the train set: 150, number of used features: 4
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[11]: train's multi_error:0.0333333"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[12]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[13]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[14]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[15]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[16]: train's multi_error:0.0333333"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[17]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[18]: train's multi_error:0.0333333"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[19]: train's multi_error:0.0333333"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[20]: train's multi_error:0.0333333"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002302 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_error:0.0304007 train's auc:0.972508 train's binary_logloss:0.198597"
[1] "[2]: train's binary_error:0.0222632 train's auc:0.995075 train's binary_logloss:0.111535"
[1] "[3]: train's binary_error:0.00598802 train's auc:0.997845 train's binary_logloss:0.0480659"
[1] "[4]: train's binary_error:0.00122831 train's auc:0.998433 train's binary_logloss:0.0279151"
[1] "[5]: train's binary_error:0.00122831 train's auc:0.999354 train's binary_logloss:0.0190479"
[1] "[6]: train's binary_error:0.00537387 train's auc:0.98965 train's binary_logloss:0.16706"
[1] "[7]: train's binary_error:0 train's auc:1 train's binary_logloss:0.0128449"
[1] "[8]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00774702"
[1] "[9]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00472108"
[1] "[10]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00208929"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002385 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_error:0.0222632"
[1] "[2]: train's binary_error:0.0222632"
[1] "[3]: train's binary_error:0.0222632"
[1] "[4]: train's binary_error:0.0109013"
[1] "[5]: train's binary_error:0.0141256"
[1] "[6]: train's binary_error:0.0141256"
[1] "[7]: train's binary_error:0.0141256"
[1] "[8]: train's binary_error:0.0141256"
[1] "[9]: train's binary_error:0.00598802"
[1] "[10]: train's binary_error:0.00598802"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003158 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[1] "[1]: train's l2:0.206337"
[1] "[2]: train's l2:0.171229"
[1] "[3]: train's l2:0.140871"
[1] "[4]: train's l2:0.116282"
[1] "[5]: train's l2:0.096364"
[1] "[6]: train's l2:0.0802308"
[1] "[7]: train's l2:0.0675595"
[1] "[8]: train's l2:0.0567154"
[1] "[9]: train's l2:0.0482086"
[1] "[10]: train's l2:0.0402694"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002448 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_error:0.0222632 train's auc:0.981784 valid1's binary_error:0.0222632 valid1's auc:0.981784 valid2's binary_error:0.0222632 valid2's auc:0.981784"
[1] "[2]: train's binary_error:0.0222632 train's auc:0.981784 valid1's binary_error:0.0222632 valid1's auc:0.981784 valid2's binary_error:0.0222632 valid2's auc:0.981784"
[1] "[3]: train's binary_error:0.0222632 train's auc:0.992951 valid1's binary_error:0.0222632 valid1's auc:0.992951 valid2's binary_error:0.0222632 valid2's auc:0.992951"
[1] "[4]: train's binary_error:0.0109013 train's auc:0.992951 valid1's binary_error:0.0109013 valid1's auc:0.992951 valid2's binary_error:0.0109013 valid2's auc:0.992951"
[1] "[5]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
[1] "[6]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
[1] "[7]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
[1] "[8]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
[1] "[9]: train's binary_error:0.00598802 train's auc:0.993175 valid1's binary_error:0.00598802 valid1's auc:0.993175 valid2's binary_error:0.00598802 valid2's auc:0.993175"
[1] "[10]: train's binary_error:0.00598802 train's auc:0.998242 valid1's binary_error:0.00598802 valid1's auc:0.998242 valid2's binary_error:0.00598802 valid2's auc:0.998242"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002321 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.179606"
[1] "[2]: train's binary_logloss:0.0975448"
[1] "[3]: train's binary_logloss:0.0384292"
[1] "[4]: train's binary_logloss:0.0582241"
[1] "[5]: train's binary_logloss:0.0595215"
[1] "[6]: train's binary_logloss:0.0609174"
[1] "[7]: train's binary_logloss:0.317567"
[1] "[8]: train's binary_logloss:0.0104223"
[1] "[9]: train's binary_logloss:0.00497498"
[1] "[10]: train's binary_logloss:0.00283557"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002529 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.179606"
[1] "[2]: train's binary_logloss:0.0975448"
[1] "[3]: train's binary_logloss:0.0384292"
[1] "[4]: train's binary_logloss:0.0582241"
[1] "[5]: train's binary_logloss:0.0595215"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003434 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[1] "[6]: train's binary_logloss:0.0609174"
[1] "[7]: train's binary_logloss:0.317567"
[1] "[8]: train's binary_logloss:0.0104223"
[1] "[9]: train's binary_logloss:0.00497498"
[1] "[10]: train's binary_logloss:0.00283557"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002565 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5211, number of used features: 116
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002578 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5211, number of used features: 116
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003057 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5210, number of used features: 116
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002678 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5210, number of used features: 116
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002674 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5210, number of used features: 116
[LightGBM] [Info] Start training from score 0.483976
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Start training from score 0.480906
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Start training from score 0.481574
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Start training from score 0.482342
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Start training from score 0.481766
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[1]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306994+0.00061397"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[2]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[4]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[5]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[6]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[7]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[8]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[9]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[10]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
[LightGBM] [Info] Number of positive: 198, number of negative: 202
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000505 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] Number of positive: 196, number of negative: 204
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000572 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] Number of positive: 207, number of negative: 193
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000648 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] Number of positive: 207, number of negative: 193
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000601 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] Number of positive: 192, number of negative: 208
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000574 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.495000 -> initscore=-0.020001
[LightGBM] [Info] Start training from score -0.020001
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.490000 -> initscore=-0.040005
[LightGBM] [Info] Start training from score -0.040005
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.517500 -> initscore=0.070029
[LightGBM] [Info] Start training from score 0.070029
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.517500 -> initscore=0.070029
[LightGBM] [Info] Start training from score 0.070029
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.480000 -> initscore=-0.080043
[LightGBM] [Info] Start training from score -0.080043
[1] "[1]: valid's auc:0.476662+0.0622898 valid's binary_error:0.5+0.0593296"
[1] "[2]: valid's auc:0.477476+0.0393392 valid's binary_error:0.554+0.0372022"
[1] "[3]: valid's auc:0.456927+0.042898 valid's binary_error:0.526+0.0361109"
[1] "[4]: valid's auc:0.419531+0.0344972 valid's binary_error:0.54+0.0289828"
[1] "[5]: valid's auc:0.459109+0.0862237 valid's binary_error:0.52+0.0489898"
[1] "[6]: valid's auc:0.460522+0.0911246 valid's binary_error:0.528+0.0231517"
[1] "[7]: valid's auc:0.456328+0.0540445 valid's binary_error:0.532+0.0386782"
[1] "[8]: valid's auc:0.463653+0.0660907 valid's binary_error:0.514+0.0488262"
[1] "[9]: valid's auc:0.443017+0.0549965 valid's binary_error:0.55+0.0303315"
[1] "[10]: valid's auc:0.477483+0.0763283 valid's binary_error:0.488+0.0549181"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002739 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[1]: train's binary_error:0.00307078 train's auc:0.99996 train's binary_logloss:0.132074"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's binary_error:0.00153539 train's auc:1 train's binary_logloss:0.0444372"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[3]: train's binary_error:0 train's auc:1 train's binary_logloss:0.0159408"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[4]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00590065"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[5]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00230167"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[6]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00084253"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[7]: train's binary_error:0 train's auc:1 train's binary_logloss:0.000309409"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[8]: train's binary_error:0 train's auc:1 train's binary_logloss:0.000113754"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[9]: train's binary_error:0 train's auc:1 train's binary_logloss:4.1838e-05"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[10]: train's binary_error:0 train's auc:1 train's binary_logloss:1.539e-05"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Number of positive: 35110, number of negative: 34890
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000971 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 12
[LightGBM] [Info] Number of data points in the train set: 70000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501571 -> initscore=0.006286
[LightGBM] [Info] Start training from score 0.006286
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000444 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000446 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000437 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000442 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000455 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000456 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003129 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's auc:0.987036"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's auc:0.987036"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's auc:0.998699"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[4]: valid1's auc:0.998699"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's auc:0.998699"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[6]: valid1's auc:0.999667"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[7]: valid1's auc:0.999806"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's auc:0.999978"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[9]: valid1's auc:0.999997"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[10]: valid1's auc:0.999997"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002841 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[4]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[6]: valid1's binary_error:0.016139"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000422 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's rmse:55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's rmse:59.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's rmse:63.55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's rmse:67.195"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's rmse:70.4755"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's rmse:73.428"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's rmse:76.0852"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's rmse:78.4766"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's rmse:80.629"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's rmse:82.5661"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000615 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's rmse:55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's rmse:59.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's rmse:63.55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's rmse:67.195"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's rmse:70.4755"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's rmse:73.428"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000440 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's constant_metric:0.2 valid1's increasing_metric:0.1"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's constant_metric:0.2 valid1's increasing_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's constant_metric:0.2 valid1's increasing_metric:0.3"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's constant_metric:0.2 valid1's increasing_metric:0.4"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's constant_metric:0.2 valid1's increasing_metric:0.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's constant_metric:0.2 valid1's increasing_metric:0.6"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's constant_metric:0.2 valid1's increasing_metric:0.7"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's constant_metric:0.2 valid1's increasing_metric:0.8"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's constant_metric:0.2 valid1's increasing_metric:0.9"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's constant_metric:0.2 valid1's increasing_metric:1"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000617 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's increasing_metric:1.1 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's increasing_metric:1.2 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's increasing_metric:1.3 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's increasing_metric:1.4 valid1's constant_metric:0.2"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000399 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's increasing_metric:1.5 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's increasing_metric:1.6 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's increasing_metric:1.7 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's increasing_metric:1.8 valid1's constant_metric:0.2"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000287 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's increasing_metric:1.9 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's increasing_metric:2 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's increasing_metric:2.1 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's increasing_metric:2.2 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's increasing_metric:2.3 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's increasing_metric:2.4 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's increasing_metric:2.5 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's increasing_metric:2.6 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's increasing_metric:2.7 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's increasing_metric:2.8 valid1's constant_metric:0.2"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000425 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's rmse:1.10501 valid1's l2:1.22105 valid1's increasing_metric:2.9 valid1's rmse:1.10501 valid1's l2:1.22105 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's rmse:1.10335 valid1's l2:1.21738 valid1's increasing_metric:3 valid1's rmse:1.10335 valid1's l2:1.21738 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's rmse:1.10199 valid1's l2:1.21438 valid1's increasing_metric:3.1 valid1's rmse:1.10199 valid1's l2:1.21438 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's rmse:1.10198 valid1's l2:1.21436 valid1's increasing_metric:3.2 valid1's rmse:1.10198 valid1's l2:1.21436 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's rmse:1.10128 valid1's l2:1.21282 valid1's increasing_metric:3.3 valid1's rmse:1.10128 valid1's l2:1.21282 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's rmse:1.10101 valid1's l2:1.21222 valid1's increasing_metric:3.4 valid1's rmse:1.10101 valid1's l2:1.21222 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's rmse:1.10065 valid1's l2:1.21143 valid1's increasing_metric:3.5 valid1's rmse:1.10065 valid1's l2:1.21143 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's rmse:1.10011 valid1's l2:1.21025 valid1's increasing_metric:3.6 valid1's rmse:1.10011 valid1's l2:1.21025 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's rmse:1.09999 valid1's l2:1.20997 valid1's increasing_metric:3.7 valid1's rmse:1.09999 valid1's l2:1.20997 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's rmse:1.09954 valid1's l2:1.20898 valid1's increasing_metric:3.8 valid1's rmse:1.09954 valid1's l2:1.20898 valid1's constant_metric:0.2"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000423 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0.486486 valid1's binary_logloss:0.693255"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0.486486 valid1's binary_logloss:0.691495"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0.486486 valid1's binary_logloss:0.69009"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688968"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688534"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689883"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689641"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689532"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's binary_error:0.432432 valid1's binary_logloss:0.691066"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's binary_error:0.432432 valid1's binary_logloss:0.690653"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000431 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_logloss:0.693255"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_logloss:0.691495"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_logloss:0.69009"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_logloss:0.688968"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_logloss:0.688534"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's binary_logloss:0.689883"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's binary_logloss:0.689641"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's binary_logloss:0.689532"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's binary_logloss:0.691066"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's binary_logloss:0.690653"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000426 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0.486486 valid1's binary_logloss:0.693255"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0.486486 valid1's binary_logloss:0.691495"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0.486486 valid1's binary_logloss:0.69009"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688968"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688534"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689883"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689641"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689532"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's binary_error:0.432432 valid1's binary_logloss:0.691066"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's binary_error:0.432432 valid1's binary_logloss:0.690653"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000533 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_logloss:0.693255"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_logloss:0.691495"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_logloss:0.69009"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_logloss:0.688968"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_logloss:0.688534"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's binary_logloss:0.689883"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's binary_logloss:0.689641"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's binary_logloss:0.689532"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's binary_logloss:0.691066"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's binary_logloss:0.690653"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000556 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's binary_error:0.486486 valid1's binary_logloss:0.693255"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's binary_error:0.486486 valid1's binary_logloss:0.691495"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's binary_error:0.486486 valid1's binary_logloss:0.69009"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688968"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688534"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689883"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689641"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689532"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's binary_error:0.432432 valid1's binary_logloss:0.691066"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's binary_error:0.432432 valid1's binary_logloss:0.690653"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000605 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's constant_metric:0.2"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000442 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's mape:1.1 valid1's rmse:55 valid1's l1:55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's mape:1.19 valid1's rmse:59.5 valid1's l1:59.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's mape:1.271 valid1's rmse:63.55 valid1's l1:63.55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's mape:1.3439 valid1's rmse:67.195 valid1's l1:67.195"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's mape:1.40951 valid1's rmse:70.4755 valid1's l1:70.4755"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's mape:1.46856 valid1's rmse:73.428 valid1's l1:73.428"
-- Skip (test_basic.R:1171:3): lgb.train() supports non-ASCII feature names ----
Reason: UTF-8 feature names are not fully supported in the R package
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000582 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000718 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: train's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: train's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: train's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: train's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: train's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: train's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: train's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: train's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000431 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: train's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: train's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: train's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: train's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: train's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: train's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: train's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: train's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000432 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: train's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: train's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: train's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: train's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: train's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: train's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: train's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: train's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000445 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: something-random-we-would-not-hardcode's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: something-random-we-would-not-hardcode's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: something-random-we-would-not-hardcode's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: something-random-we-would-not-hardcode's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: something-random-we-would-not-hardcode's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: something-random-we-would-not-hardcode's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: something-random-we-would-not-hardcode's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: something-random-we-would-not-hardcode's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: something-random-we-would-not-hardcode's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: something-random-we-would-not-hardcode's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000458 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's rmse:25"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's rmse:12.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: train's rmse:6.25"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: train's rmse:3.125"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: train's rmse:1.5625"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: train's rmse:0.78125"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: train's rmse:0.390625"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: train's rmse:0.195312"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: train's rmse:0.0976562"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: train's rmse:0.0488281"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000445 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 255
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[1] "[1]: something-random-we-would-not-hardcode's auc:0.58136 valid1's auc:0.429487"
[1] "[2]: something-random-we-would-not-hardcode's auc:0.599008 valid1's auc:0.266026"
[1] "[3]: something-random-we-would-not-hardcode's auc:0.6328 valid1's auc:0.349359"
[1] "[4]: something-random-we-would-not-hardcode's auc:0.655136 valid1's auc:0.394231"
[1] "[5]: something-random-we-would-not-hardcode's auc:0.655408 valid1's auc:0.419872"
[1] "[6]: something-random-we-would-not-hardcode's auc:0.678784 valid1's auc:0.336538"
[1] "[7]: something-random-we-would-not-hardcode's auc:0.682176 valid1's auc:0.416667"
[1] "[8]: something-random-we-would-not-hardcode's auc:0.698032 valid1's auc:0.394231"
[1] "[9]: something-random-we-would-not-hardcode's auc:0.712672 valid1's auc:0.445513"
[1] "[10]: something-random-we-would-not-hardcode's auc:0.723024 valid1's auc:0.471154"
[LightGBM] [Info] Number of positive: 50, number of negative: 39
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000423 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 89, number of used features: 1
[LightGBM] [Info] Number of positive: 49, number of negative: 41
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000611 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 90, number of used features: 1
[LightGBM] [Info] Number of positive: 53, number of negative: 38
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000430 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 91, number of used features: 1
[LightGBM] [Info] Number of positive: 46, number of negative: 44
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000446 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 90, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.561798 -> initscore=0.248461
[LightGBM] [Info] Start training from score 0.248461
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.544444 -> initscore=0.178248
[LightGBM] [Info] Start training from score 0.178248
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.582418 -> initscore=0.332706
[LightGBM] [Info] Start training from score 0.332706
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.511111 -> initscore=0.044452
[LightGBM] [Info] Start training from score 0.044452
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.701123+0.0155541"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.70447+0.0152787"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.706572+0.0162531"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.709214+0.0165672"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.710652+0.0172198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.713091+0.0176604"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.714842+0.0184267"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.714719+0.0178927"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.717162+0.0181993"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.716577+0.0180201"
[LightGBM] [Info] Number of positive: 45, number of negative: 35
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000589 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] Number of positive: 40, number of negative: 40
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000807 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] Number of positive: 47, number of negative: 33
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000594 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.562500 -> initscore=0.251314
[LightGBM] [Info] Start training from score 0.251314
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.587500 -> initscore=0.353640
[LightGBM] [Info] Start training from score 0.353640
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000554 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000555 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000551 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000558 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000558 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] Start training from score 0.024388
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score 0.005573
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score 0.039723
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score 0.029700
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score 0.125712
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid's increasing_metric:4.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid's increasing_metric:4.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid's increasing_metric:5.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid's increasing_metric:5.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[5]: valid's increasing_metric:6.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[6]: valid's increasing_metric:6.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[7]: valid's increasing_metric:7.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[8]: valid's increasing_metric:7.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[9]: valid's increasing_metric:8.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[10]: valid's increasing_metric:8.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000586 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000589 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000574 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000568 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: 0x06b41840>
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000557 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] Start training from score 0.024388
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score 0.005573
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score 0.039723
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score 0.029700
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score 0.125712
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: valid's constant_metric:0.2+0 valid's increasing_metric:9.1+0.141421"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: valid's constant_metric:0.2+0 valid's increasing_metric:9.6+0.141421"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[3]: valid's constant_metric:0.2+0 valid's increasing_metric:10.1+0.141421"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[4]: valid's constant_metric:0.2+0 valid's increasing_metric:10.6+0.141421"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003376 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003291 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003277 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003181 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003465 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Using self-defined objective function
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002869 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Warning] Using self-defined objective function
[1] "[1]: train's auc:0.994987 train's error:0.00598802 eval's auc:0.995243 eval's error:0.00558659"
[1] "[2]: train's auc:0.99512 train's error:0.00307078 eval's auc:0.995237 eval's error:0.00248293"
[1] "[3]: train's auc:0.99009 train's error:0.00598802 eval's auc:0.98843 eval's error:0.00558659"
[1] "[4]: train's auc:0.999889 train's error:0.00168893 eval's auc:1 eval's error:0.000620732"
[1] "[5]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[6]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[7]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[8]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[9]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[10]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[LightGBM] [Warning] Using self-defined objective function
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003009 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Warning] Using self-defined objective function
[1] "[1]: train's error:0.00598802 eval's error:0.00558659"
[1] "[2]: train's error:0.00307078 eval's error:0.00248293"
[1] "[3]: train's error:0.00598802 eval's error:0.00558659"
[1] "[4]: train's error:0.00168893 eval's error:0.000620732"
[LightGBM] [Info] Saving data to binary file D:\temp\RtmpeWkbSp\lgb.Dataset_b364301314d
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000577 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 32
[LightGBM] [Info] Number of data points in the train set: 6000, number of used features: 16
-- FAILURE (test_learning_to_rank.R:52:9): learning-to-rank with lgb.train() wor
abs(eval_results[[2L]][["value"]] - 0.745986) < TOLERANCE is not TRUE
`actual`: FALSE
`expected`: TRUE
-- FAILURE (test_learning_to_rank.R:53:9): learning-to-rank with lgb.train() wor
abs(eval_results[[3L]][["value"]] - 0.7351959) < TOLERANCE is not TRUE
`actual`: FALSE
`expected`: TRUE
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000564 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 40
[LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000764 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 40
[LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000945 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 40
[LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000620 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 40
[LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
[1] "[1]: valid's ndcg@1:0.675+0.0829156 valid's ndcg@2:0.655657+0.0625302 valid's ndcg@3:0.648464+0.0613335"
[1] "[2]: valid's ndcg@1:0.725+0.108972 valid's ndcg@2:0.666972+0.131409 valid's ndcg@3:0.657124+0.130448"
[1] "[3]: valid's ndcg@1:0.65+0.111803 valid's ndcg@2:0.630657+0.125965 valid's ndcg@3:0.646928+0.15518"
[1] "[4]: valid's ndcg@1:0.725+0.0829156 valid's ndcg@2:0.647629+0.120353 valid's ndcg@3:0.654052+0.129471"
[1] "[5]: valid's ndcg@1:0.75+0.165831 valid's ndcg@2:0.662958+0.142544 valid's ndcg@3:0.648186+0.130213"
[1] "[6]: valid's ndcg@1:0.725+0.129904 valid's ndcg@2:0.647629+0.108136 valid's ndcg@3:0.648186+0.106655"
[1] "[7]: valid's ndcg@1:0.75+0.165831 valid's ndcg@2:0.653287+0.14255 valid's ndcg@3:0.64665+0.119557"
[1] "[8]: valid's ndcg@1:0.725+0.129904 valid's ndcg@2:0.637958+0.123045 valid's ndcg@3:0.64665+0.119557"
[1] "[9]: valid's ndcg@1:0.75+0.15 valid's ndcg@2:0.701643+0.116239 valid's ndcg@3:0.701258+0.102647"
[1] "[10]: valid's ndcg@1:0.75+0.165831 valid's ndcg@2:0.682301+0.117876 valid's ndcg@3:0.66299+0.121243"
-- FAILURE (test_learning_to_rank.R:130:5): learning-to-rank with lgb.cv() works
all(...) is not TRUE
`actual`: FALSE
`expected`: TRUE
-- FAILURE (test_learning_to_rank.R:136:5): learning-to-rank with lgb.cv() works
all(...) is not TRUE
`actual`: FALSE
`expected`: TRUE
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003343 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[1]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[2]: test's l2:1.97215e-31"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[3]: test's l2:1.97215e-31"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[4]: test's l2:1.97215e-31"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[5]: test's l2:1.97215e-31"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002776 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[1]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[2]: test's l2:1.97215e-31"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[3]: test's l2:1.97215e-31"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[4]: test's l2:1.97215e-31"
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[5]: test's l2:1.97215e-31"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002385 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002861 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002919 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003152 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002350 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002456 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001635 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 182
[LightGBM] [Info] Number of data points in the train set: 1611, number of used features: 91
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002727 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[1] "[3]: train's binary_logloss:0.0480659"
[1] "[4]: train's binary_logloss:0.0279151"
[1] "[5]: train's binary_logloss:0.0190479"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002473 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002397 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[1] "[3]: train's binary_logloss:0.0480659"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002476 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002409 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
-- Skip (test_lgb.Booster.R:445:5): Saving a model with unknown importance type
Reason: Skipping this test because it causes issues for valgrind
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000533 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000573 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000489 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000539 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000440 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003266 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000422 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 77
[LightGBM] [Info] Number of data points in the train set: 90, number of used features: 4
[LightGBM] [Info] Start training from score -1.504077
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Info] Start training from score -0.810930
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003095 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003403 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000442 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 77
[LightGBM] [Info] Number of data points in the train set: 90, number of used features: 4
[LightGBM] [Info] Start training from score -1.504077
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Info] Start training from score -0.810930
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003497 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003426 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003226 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
-- Skip (test_utils.R:70:5): lgb.last_error() correctly returns errors from the
Reason: Skipping this test because it causes valgrind to think there is a memory leak, and needs to be rethought
-- Skipped tests --------------------------------------------------------------
* Skipping this test because it causes issues for valgrind (1)
* Skipping this test because it causes valgrind to think there is a memory leak, and needs to be rethought (1)
* UTF-8 feature names are not fully supported in the R package (1)
== testthat results ===========================================================
FAILURE (test_learning_to_rank.R:52:9): learning-to-rank with lgb.train() works as expected
FAILURE (test_learning_to_rank.R:53:9): learning-to-rank with lgb.train() works as expected
FAILURE (test_learning_to_rank.R:130:5): learning-to-rank with lgb.cv() works as expected
FAILURE (test_learning_to_rank.R:136:5): learning-to-rank with lgb.cv() works as expected
[ FAIL 4 | WARN 0 | SKIP 3 | PASS 597 ]
Error: Test failures
Execution halted
running tests for arch 'x64' ... [20s] OK
Running 'testthat.R' [20s]
checking PDF version of manual ... OK
DONE
Status: 1 ERROR, 1 NOTE
The tests failing on 32-bit Windows here are the same ones that previously failed on 32-bit Solaris. #3534 added some code to skip those tests on Solaris.
How to fix this issue
Quick fix:
- just testthat::skip() the tests changed in [R-package] fix learning-to-rank tests on Solaris #3534, to get us back into CRAN compliance as quickly as possible and avoid the risk of too many additional CRAN submissions.
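The quick fix could look roughly like this (a sketch, assuming testthat is available; the test title is taken from the CRAN log, and the body is illustrative rather than the package's actual test code):

```r
# Unconditionally skip the affected test to get back onto CRAN quickly.
# The requireNamespace() guard just keeps this sketch self-contained.
reached_body <- FALSE
if (requireNamespace("testthat", quietly = TRUE)) {
  testthat::test_that("learning-to-rank with lgb.train() works as expected", {
    testthat::skip("skipped for now: metric values differ on 32-bit platforms")
    reached_body <<- TRUE  # never reached: skip() aborts the test body above
  })
}
```

The downside, noted below, is that this skips the test everywhere, not just on the platforms where the expected values differ.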
Longer-term fix:
- make sure that tests for the i386 architecture (32-bit) are run in the Windows CRAN CI check (it seems like they are not right now, for example see the most recent build)
- remove the testthat::skip() and replace it with a more specific skip, or with different expected values on 32-bit systems
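A sketch of what a "more specific skip" might look like: 32-bit builds of R can be detected from the pointer size, so the skip (or an alternate set of expected values) can be limited to them. The helper names here are hypothetical, not part of the package:

```r
# .Machine$sizeof.pointer is 4 on 32-bit builds of R and 8 on 64-bit builds
is_32bit_r <- function() {
  .Machine$sizeof.pointer == 4L
}

# Hypothetical helper to call at the top of an affected testthat test,
# so the test only gets skipped where results actually differ
skip_if_32bit <- function() {
  if (is_32bit_r()) {
    testthat::skip("metric values differ slightly on 32-bit platforms")
  }
  invisible(TRUE)
}
```

Alternatively, a test could branch on is_32bit_r() and compare against 32-bit-specific expected values instead of skipping outright.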
Activity
jameslamb commented on Nov 21, 2020
It's interesting that these tests failed on CRAN's R 3.6 tests on Windows (r-oldrel-windows-ix86+x86_64), but not R 4.0 (r-release-windows-ix86+x86_64).
jameslamb commented on Nov 21, 2020
Ok good news: I'm at least able to replicate what we're seeing on CRAN with R Hub.
✔️
windows-x86_64-release
artifacts (expires in 24 hours): https://artifacts.r-hub.io/lightgbm_3.1.0.1.tar.gz-6dc52587f27e479bb96bf2401df7f0d9/
R CMD check logs
✔️
windows-x86_64-devel
artifacts (expires in 24 hours): https://artifacts.r-hub.io/lightgbm_3.1.0.1.tar.gz-0a6718f3d49e4e4ba7166b9d7092bce2/
R CMD check logs
❌
windows-x86_64-oldrel
artifacts (expires in 24 hours): https://artifacts.r-hub.io/lightgbm_3.1.0.1.tar.gz-14f3c6a42d6a4e4c976fb8c910a59e8c/
R CMD check logs
jameslamb commented on Nov 21, 2020
@StrikerRUS let's please talk on this issue.
From #3484 (comment)
Running R CMD check on Windows is supposed to check with both 64-bit and 32-bit R. Look in the logs on CRAN's r-release-windows-ix86+x86_64 job, for example:
https://www.r-project.org/nosvn/R.check/r-release-windows-ix86+x86_64/lightgbm-00check.html
StrikerRUS commented on Nov 22, 2020
I mean, I don't see anything similar to 32-bit checks in our r-package (windows-latest, MINGW, R 4, cran) CI job. Am I missing something?
https://github.com/microsoft/LightGBM/runs/1435854065
Although there is no --no-multiarch flag for the CRAN job...
LightGBM/.ci/test_r_package_windows.ps1
Line 149 in 1c5930b
jameslamb commentedon Nov 22, 2020
We shouldn't need to add anything to R CMD check on Windows that says "please run the 32-bit tests". Just omitting the --no-multiarch flag is enough.
From "Writing R Extensions"
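Per "Writing R Extensions", on a multi-architecture Windows build of R, R CMD check installs and tests the package for each installed sub-architecture unless told not to. Roughly (the tarball name here is illustrative):

```shell
# Checks only the primary architecture (x64), which is effectively
# what a CI job passing this flag does:
R CMD check --no-multiarch lightgbm_3.1.0.tar.gz

# Omitting the flag also loads and tests the package under each other
# installed sub-architecture (i386 on a multi-arch Windows install of R):
R CMD check lightgbm_3.1.0.tar.gz
```

Note that this only helps if the 32-bit components of R were installed in the first place, which is the point made below.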
I think I see the issue though!
https://github.com/microsoft/LightGBM/blob/master/.ci/test_r_package_windows.ps1#L9
We probably need to add i386 to the arguments for installing R.
I remember now that when you download R and install it on Windows, there is a checkbox that says "also install 32-bit components".
jameslamb commented on Dec 11, 2020
these tests are passing on version 3.1.1!
https://cran.r-project.org/web/checks/check_results_lightgbm.html