
Debug option for errors in visualisation front end #2753


Merged
6 commits merged from viz_debug into projectmesa:main on Apr 13, 2025

Conversation

colinfrisch
Collaborator

@colinfrisch colinfrisch commented Apr 11, 2025

This is a followup of #2747 (merged)

What the problem was

I noticed that when an error occurs while visualising a Mesa model with Solara, the error is not displayed in the front end.
After my first PR, the error now displays in the terminal, but the user can still have trouble figuring out what is happening if everything just freezes.

How it fixes it

Uses solara.use_reactive to store a short version of the error as reactive state in the simulation.
I then added a display of the condensed error in the front end.
The first PR was about getting the full error into the terminal. However, after a few tests, I think the "condensed" error is enough for the front end, in order to maintain clarity in the interface:

[Screenshot: mesa_error_viz — condensed error message displayed in the front end]

Test error on the foraging ants model I built: to reproduce this exact error, clone the ABM_mesa_models repo and uncomment line 141 in agents.py.

Features

  • Displays a short error message in the control column of the visualization
  • The error message disappears and doesn't block the simulation when the "Reset" button is pressed
  • Still displays the full debugging error in the terminal
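The pattern described above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the PR's actual code: a plain dict stands in for the `solara.use_reactive` state, and `condensed_error`, `safe_step`, `reset`, and `DummyModel` are hypothetical names.

```python
import traceback


def condensed_error(exc: BaseException) -> str:
    """Short 'Type: message' form suitable for a small front-end banner."""
    return f"{type(exc).__name__}: {exc}"


def safe_step(model, error_state: dict) -> None:
    """Advance the model one step, capturing errors instead of freezing the UI."""
    try:
        model.step()
    except Exception as exc:
        traceback.print_exc()                           # full traceback -> terminal
        error_state["message"] = condensed_error(exc)   # short form -> front end


def reset(error_state: dict) -> None:
    """Pressing 'Reset' clears the banner so it no longer blocks the simulation."""
    error_state["message"] = None


class DummyModel:
    """Stand-in model whose step fails, to exercise the capture path."""

    def step(self):
        raise ValueError("agent 141 misbehaved")


error_state = {"message": None}
safe_step(DummyModel(), error_state)
print(error_state["message"])  # ValueError: agent 141 misbehaved
reset(error_state)
```

In the real component, the dict would be replaced by reactive state so that Solara re-renders the control column when the message changes.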

Thanks again to @EwoutH, @tpike3, @Sahil-Chhoker, and @sanika-n for the feedback in the previous PR, I'd love to hear your thoughts on this follow-up.


coderabbitai bot commented Apr 11, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.




Performance benchmarks:

| Model | Size | Init time [95% CI] | Run time [95% CI] |
| --- | --- | --- | --- |
| BoltzmannWealth | small | 🔵 +1.1% [-0.3%, +2.6%] | 🔵 +1.6% [+1.4%, +1.8%] |
| BoltzmannWealth | large | 🔵 -1.7% [-2.6%, -0.8%] | 🔵 -3.4% [-6.3%, -0.8%] |
| Schelling | small | 🔵 -1.0% [-1.3%, -0.7%] | 🔵 +0.1% [-0.0%, +0.3%] |
| Schelling | large | 🔵 -2.2% [-3.0%, -1.4%] | 🔵 -2.5% [-3.9%, -1.4%] |
| WolfSheep | small | 🔵 -0.9% [-1.4%, -0.5%] | 🔵 -0.1% [-0.3%, +0.1%] |
| WolfSheep | large | 🔵 -1.0% [-1.8%, -0.3%] | 🔵 -0.4% [-3.1%, +2.4%] |
| BoidFlockers | small | 🔵 +2.1% [+1.4%, +2.9%] | 🔵 +0.1% [-0.1%, +0.3%] |
| BoidFlockers | large | 🔵 +1.3% [+1.0%, +1.7%] | 🔵 +0.1% [-0.1%, +0.4%] |

@sanika-n
Collaborator

Really cool! Just tried it out.
The code also looks good to me :)

@EwoutH added the enhancement (Release notes label) and visualisation labels Apr 11, 2025
@EwoutH
Member

EwoutH commented Apr 11, 2025

Thanks!

@colinfrisch, could you rebase your branch on our main and resolve the conflicts?

@sanika-n, is this otherwise good to merge? (Ideally, use the review tool to comment/approve.)

@Sahil-Chhoker
Collaborator

Nice initiative, @colinfrisch! I'm not opposing your feature by saying this—just sharing a personal preference. I actually like it when my frontend breaks and shows the error directly; it's easier for me to see the full traceback on the screen rather than navigating through the terminal.

That said, this is just my opinion, and I’ll leave the review and decision to @sanika-n.

@colinfrisch
Collaborator Author

> @colinfrisch, could you rebase your branch on our main and resolve the conflicts?

Alright, I'll do this asap!

@colinfrisch
Collaborator Author

colinfrisch commented Apr 11, 2025

> I actually like it when my frontend breaks and shows the error directly; it's easier for me to see the full traceback on the screen rather than navigating through the terminal.

@Sahil-Chhoker, I don't think that this PR prevents the simulation from breaking down and displaying the error on the full screen if there's an error in the build.

Rather, it tackles the simulation phase once everything is built. Right now, if the simulation starts running correctly and an error comes up during the run, everything just freezes and, since the first PR, an error is displayed in the terminal. This PR simply notifies the user that there is an error instead of silently freezing.

However, if you've tested a case where there should be a full breakdown and it doesn't happen anymore with this new code, I'd be interested to know about it :)

@Sahil-Chhoker
Collaborator

No, I think I get it; sorry, I was referring to something else. You're good.

@quaquel
Member

quaquel commented Apr 11, 2025

This is great stuff. @colinfrisch can you resolve the merge conflicts? I'll try to review asap afterwards.

@colinfrisch
Collaborator Author

colinfrisch commented Apr 11, 2025

> This is great stuff. @colinfrisch can you resolve the merge conflicts? I'll try to review asap afterwards.

@quaquel @EwoutH, rebase and conflicts should be handled. Don't hesitate to let me know if you see any other thing I should do :)

Collaborator

@Sahil-Chhoker Sahil-Chhoker left a comment


I like this PR; it's an improvement and the code is also minimal. I'm approving the changes and leaving the merge to @quaquel.

One little thing, though: maybe add a comment after traceback.print_exc() explaining what it does.
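The suggestion might look something like the sketch below. This is illustrative, not the PR's actual diff; `run_step` is a hypothetical wrapper name:

```python
import traceback


def run_step(step_fn) -> bool:
    """Run one simulation step; return True on success, False on error."""
    try:
        step_fn()
        return True
    except Exception:
        # Print the full traceback to the terminal (stderr) for debugging;
        # the front end shows only the condensed error message.
        traceback.print_exc()
        return False
```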

@Sahil-Chhoker Sahil-Chhoker requested a review from quaquel April 12, 2025 04:45
Member

@EwoutH EwoutH left a comment


Great stuff, will give Jan some time to review (and merge).

@quaquel quaquel merged commit 08ff1ed into projectmesa:main Apr 13, 2025
12 checks passed
@colinfrisch colinfrisch deleted the viz_debug branch April 14, 2025 01:05