
[R] Custom objective for multi-class #1846

Closed
albedan opened this issue Nov 15, 2018 · 11 comments

albedan commented Nov 15, 2018

I'm trying to use a custom objective and metric for a multiclass problem, following @Laurae2's example here: https://github.com/Microsoft/LightGBM/blob/master/R-package/demo/early_stopping.R

Using R 3.5.1 and lightgbm 2.2.2.
So far I've managed to implement the eval metric, but the objective seems tricky.

I think a trivial example would be useful: maybe just implementing the default multi-class logloss with a custom objective in R (so that the results can be compared with the default objective).
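For concreteness, a rough sketch of what such a custom multiclass logloss (softmax) objective could look like in R is below. This is only an illustration under a few assumptions: the flat preds vector is assumed to be laid out class-by-class (so matrix() reshapes it with one column per class), num_class is hard-coded to 3 as for iris, and the Hessian uses the usual diagonal approximation p * (1 - p).

    # Illustrative sketch only (not the official demo): softmax / multiclass logloss objective
    custom_multiclass_obj <- function(preds, dtrain) {
      labels <- getinfo(dtrain, "label")   # 0-based class indices
      num_class <- 3L                      # assumption: 3 classes (iris)
      # assumption: preds is laid out class-by-class, giving one column per class
      preds <- matrix(preds, ncol = num_class)
      # numerically stable softmax per row
      preds <- exp(preds - apply(preds, 1L, max))
      preds <- preds / rowSums(preds)
      # one-hot encode the labels
      onehot <- matrix(0, nrow = length(labels), ncol = num_class)
      onehot[cbind(seq_along(labels), labels + 1L)] <- 1
      # gradient and diagonal Hessian approximation of the multiclass logloss
      grad <- preds - onehot
      hess <- preds * (1 - preds)
      # flatten column-wise to match the assumed input layout
      list(grad = as.vector(grad), hess = as.vector(hess))
    }

In principle this would be passed via the obj argument of lgb.train, with num_class set in the parameters, so the results can be compared against objective = "multiclass".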

@BruceZhaoR

@albedan #1682 may be helpful 😁


albedan commented Nov 16, 2018

@albedan #1682 may be helpful 😁

Thanks @BruceZhaoR, but that one is about the evaluation metric (and I have no issues with that).
I'm referring to the objective function, i.e. the one that defines the optimization through the gradient and Hessian.

@BruceZhaoR

Multi-class logloss with a custom objective:
https://github.com/Microsoft/LightGBM/blob/425503d7e254130eaea4a5ff287f324f5a5cc1ea/R-package/demo/multiclass_custom_objective.R#L37-L55

Is that what you want?


albedan commented Nov 16, 2018

Yes! Somehow I couldn't find it; I'm going to try it soon. Thank you @BruceZhaoR

jameslamb self-assigned this Nov 17, 2018

albedan commented Nov 17, 2018

@BruceZhaoR, actually I think there could be a couple of issues:

  1. The example has balanced classes. Let's make them unbalanced at the beginning of the script:
train <- as.matrix(iris[c(1:30, 51:95, 101:140), ])
test <- as.matrix(iris[c(31:50, 96:100, 141:150), ])

The built-in model starts with a "boost from average":

[LightGBM] [Info] Start training from score -1.343735
[LightGBM] [Info] Start training from score -0.938270
[LightGBM] [Info] Start training from score -1.056053

The custom one does not (and setting the flag does not seem to work).

  2. Running the example as is, the final identical() call returns FALSE, since the two predictions differ. Reshaping the raw scores and manually applying a softmax does yield identical predictions (but I expected the raw ones to be identical, too).


vnijs commented Nov 28, 2018

I'm seeing the same as @albedan. See output below.

@albedan Could you perhaps share/post the code you used to get identical probability predictions? Thanks

> # compare predictions
> identical(preds_builtin, preds_custom)
[1] FALSE
> head(data.frame(preds_builtin, preds_custom))
  preds_builtin preds_custom
1      1.540058     2.638670
2     -2.912387    -1.813775
3     -2.912387    -1.813775
4      1.540058     2.638670
5     -2.912387    -1.813775
6     -2.912387    -1.813775

EDIT: The following produces equivalent results after a bit of rounding

preds_custom <- predict(model_custom, test[, 1:4], rawscore = TRUE, reshape = TRUE)
preds_custom <- exp(preds_custom) / rowSums(exp(preds_custom))

preds_builtin <- predict(model_builtin, test[, 1:4], rawscore = TRUE, reshape = TRUE)
preds_builtin <- exp(preds_builtin) / rowSums(exp(preds_builtin))

# compare predictions
identical(round(preds_builtin, 14), round(preds_custom, 14))
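As a small aside, all.equal() (which compares up to a numerical tolerance) may be a simpler check here than rounding before identical():

    # TRUE when the two prediction matrices agree within the default tolerance
    isTRUE(all.equal(preds_builtin, preds_custom))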


vnijs commented Nov 28, 2018

I don't mean to thread-jack but ... I'm trying to set up a custom objective function for a binary classifier. I tried adapting the code at https://github.com/Microsoft/LightGBM/blob/master/R-package/demo/multiclass_custom_objective.R but am not having much luck so far.

Is the result from https://github.com/Microsoft/LightGBM/blob/master/R-package/demo/early_stopping.R supposed to produce the same result as the call below?

bst_builtin <- lgb.train(param,
                 dtrain,
                 num_round,
                 valids,
                 objective = "binary",
                 metric = "binary_logloss",
                 early_stopping_round = 3)

FYI, an example of a custom objective function for a regression model with LightGBM:

https://towardsdatascience.com/custom-loss-functions-for-gradient-boosting-f79c1b40466d
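For reference, a plain binary-logloss custom objective in the spirit of the early_stopping.R demo might look roughly like the sketch below; logregobj and evalerror are just illustrative names, and (as noted further down in this thread) the raw scores have to be pushed through a sigmoid inside the objective:

    # Sketch only: binary logloss objective and a matching error metric
    logregobj <- function(preds, dtrain) {
      labels <- getinfo(dtrain, "label")
      preds <- 1 / (1 + exp(-preds))      # raw scores -> probabilities
      grad <- preds - labels
      hess <- preds * (1 - preds)
      list(grad = grad, hess = hess)
    }

    evalerror <- function(preds, dtrain) {
      labels <- getinfo(dtrain, "label")
      err <- mean((preds > 0) != labels)  # preds are raw scores here as well
      list(name = "error", value = err, higher_better = FALSE)
    }

    # hypothetical call; param, dtrain, num_round and valids as in the demos
    bst_custom <- lgb.train(param, dtrain, num_round, valids,
                            obj = logregobj, eval = evalerror,
                            early_stopping_round = 3)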


grand-home-projects commented Jan 13, 2019

@vnijs

Were you ever able to get a working version of the binary classifier? I am currently trying to do the same, yet I'm running into weird behavior when I submit it to run.

This is what I'm trying to use to penalize false positives (it's the custom function I use for the same purpose within xgboost); however, it doesn't appear to actually work within LightGBM.

    weighted_logloss <- function(preds, dtrain) {
        y <- getinfo(dtrain, "label")
        p <- preds
        beta <- 2
        grad <- p * (beta + y - beta*y) - y
        hess <- p * (1 - p) * (beta + y - beta*y)
        return(list(grad = grad, hess = hess))
        }

Maybe I'm misunderstanding what I need to do to tailor the function to work within LightGBM.


Laurae2 commented Jan 13, 2019

@grand-home-projects For binary classification (logloss in your case here), the predictions passed to the custom objective are the raw scores (not the sigmoid-transformed values).

You need to transform the predictions as follows: new_preds <- 1 / (1 + exp(-preds))

Using your code, it should look like the following:

    weighted_logloss <- function(preds, dtrain) {
        y <- getinfo(dtrain, "label")
        p <- 1 / (1 + exp(-preds))
        beta <- 2
        grad <- p * (beta + y - beta*y) - y
        hess <- p * (1 - p) * (beta + y - beta*y)
        return(list(grad = grad, hess = hess))
        }
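If it helps, the corrected function can then be plugged in via the obj argument of lgb.train; this is a minimal sketch where param, dtrain and valids are placeholders for your own setup. Keep in mind that a model trained with a custom binary objective predicts raw scores, so 1 / (1 + exp(-p)) is needed to recover probabilities.

    # minimal sketch: train with the custom objective
    model <- lgb.train(params = param,
                       data = dtrain,
                       nrounds = 100L,
                       valids = valids,
                       obj = weighted_logloss)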

@grand-home-projects

@Laurae2 Ah, that makes complete sense. Just dropped it in and it appears to be working now.

Thank you much!

maximilianeber added a commit to maximilianeber/LightGBM that referenced this issue Jan 14, 2019
Laurae2 pushed a commit that referenced this issue Jan 19, 2019
* Fix multiclass custom objective demo

* Use option not to boost from average instead of setting init score explicitly

* Reference #1846 when turning off boost_from_average

* Add trailing whitespace
@StrikerRUS

@Laurae2 Seems this issue can be closed (#1940 didn't have an autoclose keyword).

alisterw pushed a commit to G-Research/LightGBM that referenced this issue Feb 13, 2019
lock bot locked as resolved and limited conversation to collaborators Mar 11, 2020