36 revision of objective handling #65
base: main
Conversation
sarahleidolf
commented
Aug 15, 2025
- Introduction of subclasses for different sub-objectives for more intuitive parameterization of weighting factors
- Visualization of sub-objective results in a dashboard
- The original notation of the objective function is still supported; the "r_del_u" notation is no longer supported.
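The subclass idea from the first bullet could look roughly like the following. This is a hypothetical sketch, not the actual AgentLib-MPC API: all class names (`BaseObjective`, `QuadraticObjective`) and the `variables` dict are illustrative, chosen only to show how per-term weighting factors become more intuitive to parameterize. Only `DeltaUObjective` is a name taken from the commit log.

```python
# Illustrative sketch of subclass-based sub-objectives with per-term weights.
# Class names and signatures are assumptions, not the real AgentLib-MPC API.

class BaseObjective:
    def __init__(self, weight: float = 1.0):
        self.weight = weight

    def expression(self, variables: dict):
        raise NotImplementedError

class QuadraticObjective(BaseObjective):
    """Penalizes squared deviation of a variable from a setpoint."""
    def __init__(self, name: str, setpoint: float, weight: float = 1.0):
        super().__init__(weight)
        self.name, self.setpoint = name, setpoint

    def expression(self, variables):
        return self.weight * (variables[self.name] - self.setpoint) ** 2

class DeltaUObjective(BaseObjective):
    """Penalizes the change of a control between intervals (delta-u)."""
    def __init__(self, name: str, weight: float = 1.0):
        super().__init__(weight)
        self.name = name

    def expression(self, variables):
        return self.weight * (variables[self.name] - variables[f"{self.name}_prev"]) ** 2

def total_cost(objectives, variables):
    """Total stage cost is the sum of the parameterized sub-objectives."""
    return sum(obj.expression(variables) for obj in objectives)
```

With this structure, each weighting factor lives on its own sub-objective instance rather than inside one monolithic cost expression, which also makes it straightforward to report each term separately in a dashboard.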
add sampling of training data for online learning
change dataset sampling new plotting
retrain initial model
first approach for collocation
fix DeltaUObjective
time_in_mpc into OL_in_ml_models
Time in mpc
…H-EBC/AgentLib-MPC into 20-online-learning-ml-models
20 online learning ml models
36 revision of objective handling
update from main
Update MultipleShooting for CasadiMLModels
…oting_ML and DirectCollocation
add three examples with new objective formulation in one_room_mpc/physical
main into #36
add ml models to ci test
Hi, I finished the first round of review.
Generally really cool pull request, I like the overall structure and how the new examples work / look.
I've structured my feedback into points that should be addressed before this can be merged, and points that shouldn't block this merge, but may be considered in future issues.
Required now:
- remove all the AI-generated error handling and hasattr checks that catch errors from within the framework. Our error handling should target user errors; library errors should be raised so they are caught in CI.
- in the discretizations, extract duplicated code into functions, especially the delta-u penalty. This can also be made much more concise once the AI boilerplate / error handling is removed.
- We can discuss this, but I would prefer different names for the objectives, e.g. ExpressionObjective instead of EqObjective, or even just Objective, since this would be the standard case. In general, I like to avoid abbreviations in variable names.
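The delta-u extraction asked for above could be sketched like this. The helper names (`delta_u_penalty`, `accumulate_delta_u`) and the list-of-controls interface are assumptions for illustration; the real discretization code works on CasADi expressions, but the same structure applies there since the arithmetic is symbolic-friendly.

```python
# Hypothetical helpers for the delta-u penalty that is currently duplicated
# across the discretizations. Names and signatures are illustrative only.

def delta_u_penalty(u_now, u_prev, weight):
    """Quadratic penalty on the control change between two intervals."""
    return weight * (u_now - u_prev) ** 2

def accumulate_delta_u(controls, weight):
    """Sum the delta-u penalty over a whole control trajectory.

    `controls` is the sequence of control values over the horizon; each
    interval is coupled to its predecessor via the penalty term.
    """
    return sum(
        delta_u_penalty(u_now, u_prev, weight)
        for u_prev, u_now in zip(controls, controls[1:])
    )
```

Each discretization (multiple shooting, collocation) would then call the shared helper instead of re-implementing the penalty inline.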
For the future:
- the current objective re-calculation is only exact for multiple shooting / Euler, which should be acknowledged. In the future we might move this to the discretization, logging the points where objectives are evaluated and maybe tracking the exact calculation. E.g. for collocation it's quite complex, since we have weighting based on the collocation matrices.
- We now save 3 separate files for the MPC results. I think I would have preferred to keep the objectives with the stats. We might even go further and move everything into a single file. This could be a separate issue, and for now I wouldn't block the PR over this, so it's fine.
- If we saved objectives over the actual grid on a horizon, we could then even plot the predictions of the objective evaluation like variables in the dashboard.
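The point about the re-calculation being exact only for multiple shooting / Euler can be illustrated with a minimal sketch. Assumptions: the `euler_objective` helper below is hypothetical, and the stored results are taken to contain the time grid and the stage cost evaluated at each grid point.

```python
# Why re-evaluating the objective on the stored grid is exact for Euler:
# the solver itself approximates the cost integral as a rectangle sum over
# the grid, so the same sum reproduces the solver's objective exactly.
# For collocation, stage costs are weighted by quadrature weights derived
# from the collocation polynomial, so a plain rectangle sum over the grid
# is only an approximation of the solver's objective.

def euler_objective(times, stage_costs):
    """Re-evaluate an Euler-discretized objective from logged grid points.

    `times` has one more entry than `stage_costs`: cost k applies over the
    interval [times[k], times[k+1]].
    """
    total = 0.0
    for k in range(len(times) - 1):
        dt = times[k + 1] - times[k]
        total += stage_costs[k] * dt  # rectangle rule, matches explicit Euler
    return total
```

For collocation, an exact re-calculation would additionally need the collocation weights per interval, which is why moving this logic into the discretization itself seems like the natural place for it.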
minor changes in objective classes
minor changes in discretization backend
merge stats and objective results file
…f-objective-handling # Conflicts: # CHANGELOG.md # agentlib_mpc/__init__.py # agentlib_mpc/optimization_backends/casadi_/admm.py # agentlib_mpc/utils/plotting/interactive.py # examples/one_room_mpc/ann/simple_mpc_nn.py # examples/one_room_mpc/ann/training_nn.py