Conversation

@ronantakizawa commented Sep 27, 2025

Fixes issue #171.

Fixes the console message verbosity issue by implementing compact formatting and filtering options for list_console_messages.

Problem

The list_console_messages tool was generating ~5x more tokens than necessary due to verbose formatting
that included:

  • Full file paths and line numbers for every message
  • Complete JSON serialization of objects and arrays
  • Redundant location information
  • No deduplication of repeated messages

This made the tool impractical for debugging workflows where users needed to examine console output.

Solution

Compact Mode (Default)

  • Before: Log> script.js:10:5: Hello, world! {"id":1,"status":"done"} (72 chars)
  • After: Log> Hello, world! (19 chars)
  • Result: ~75% token reduction per message
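For illustration, the two formats above can be sketched roughly like this (the message shape and helper names are assumptions, not the PR's actual code):

```javascript
// Hypothetical console message shape; field names are assumptions.
const message = {
  level: 'log',
  text: 'Hello, world!',
  url: 'script.js',
  line: 10,
  column: 5,
  args: [{id: 1, status: 'done'}],
};

function capitalize(s) {
  return s[0].toUpperCase() + s.slice(1);
}

// Verbose: level, location, text, and serialized arguments.
function formatVerbose(msg) {
  const args = msg.args.map(a => JSON.stringify(a)).join(' ');
  return `${capitalize(msg.level)}> ${msg.url}:${msg.line}:${msg.column}: ${msg.text} ${args}`;
}

// Compact: level and text only.
function formatCompact(msg) {
  return `${capitalize(msg.level)}> ${msg.text}`;
}

console.log(formatVerbose(message)); // Log> script.js:10:5: Hello, world! {"id":1,"status":"done"}
console.log(formatCompact(message)); // Log> Hello, world!
```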

New Filtering Parameters

  • level: Filter by log level (log, info, warning, error, all)
  • limit: Maximum number of messages (default: 100)
  • compact: Use compact format (default: true)
  • includeTimestamp: Include location info (default: false)
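Hypothetically, the four parameters might combine like this (only the parameter names and defaults come from the list above; the message shape and output format are assumptions):

```javascript
// Sketch of applying the new parameters; message shape is assumed.
function listConsoleMessages(messages, {
  level = 'all',
  limit = 100,
  compact = true,
  includeTimestamp = false,
} = {}) {
  let result = messages;
  // level: keep only messages at the requested log level.
  if (level !== 'all') {
    result = result.filter(m => m.level === level);
  }
  // limit: cap the number of messages returned.
  result = result.slice(0, limit);
  return result.map(m => {
    const location = includeTimestamp ? ` ${m.url}:${m.line}:${m.column}:` : '';
    const body = compact ? m.text : `${m.text} ${JSON.stringify(m.args)}`;
    return `${m.level}>${location} ${body}`;
  });
}

const messages = [
  {level: 'log', text: 'ready', url: 'a.js', line: 1, column: 1, args: []},
  {level: 'error', text: 'boom', url: 'a.js', line: 2, column: 1, args: []},
];
console.log(listConsoleMessages(messages, {level: 'error'}));
// ['error> boom']
```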

Smart Deduplication (creates a map using message text as the key)

  • Removes truly identical messages that were causing repetition
  • Preserves message order and context
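The deduplication idea can be sketched with a Map keyed by message text, which preserves first-seen insertion order (an illustrative sketch, not the PR's exact code):

```javascript
// Deduplicate messages by text; a Map preserves first-seen order.
function deduplicate(messages) {
  const seen = new Map();
  for (const msg of messages) {
    if (!seen.has(msg.text)) {
      seen.set(msg.text, msg);
    }
  }
  return [...seen.values()];
}

const logs = [
  {level: 'log', text: 'tick'},
  {level: 'log', text: 'tick'},
  {level: 'warning', text: 'slow response'},
  {level: 'log', text: 'tick'},
];
console.log(deduplicate(logs).map(m => m.text));
// ['tick', 'slow response']
```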

Usage Examples

// Compact format (default) - minimal tokens
list_console_messages()

// Only errors with verbose details
list_console_messages({level: "error", compact: false})

// First 50 logs with timestamps
list_console_messages({limit: 50, includeTimestamp: true})

@Sayvai commented Sep 28, 2025

It would be great if we could also add further filtering parameters, such as search-query filtering of console messages via a query input parameter.

In addition, we could also filter on specific datetimes by allowing the LLM to provide ISO-formatted from_datetime and to_datetime input values.

This would greatly reduce the number of input and output context tokens, making the whole agentic experience more efficient and improving the time to accurate results.

Not sure if the above would necessitate a new GitHub issue, or can be bundled into the PR branch as additional commit(s)?

@murillodutt commented Sep 28, 2025 via email

@Sayvai commented Sep 28, 2025

In what contexts, give examples. rgds, Murillo Dutt

It's simple if you think about it: the MCP tool can return a much more refined subset of console messages to the LLM, mitigating the need to feed additional irrelevant information into the LLM's context window. That reduces the token count and thereby minimises end-user costs.

Example 1 - Input -> query

  • I add specifically formatted console messages (e.g. namespaced with "[some-file.ts]: ..."), and I then ask the LLM to search for and return all browser console messages containing that namespace.
  • The LLM makes the appropriate call to the console tool, passing the namespace via the query filter input.
  • The MCP tool returns only the specific subset of console messages I asked for, without overloading or polluting the tool response sent back to the LLM with irrelevant console messages.

Example 2 - Input -> from_datetime / input -> to_datetime

  • The MCP-controlled browser session has been running for, say, 15 minutes already, and plenty of console.logs have been generated throughout that time as the browser goes through various interaction states.
  • I ask the LLM to give me a summary of the logs generated over the last few minutes that relate to the particular use-case I want to delve deeper into.
  • The LLM makes the tool call and specifies the appropriate datetime inputs.
  • The MCP tool responds with only the subset of console messages I requested.
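As a rough sketch of how the proposed query and from_datetime / to_datetime filters might work (parameter names follow the suggestion above; the message shape and timestamp field are assumptions):

```javascript
// Sketch of the proposed query / datetime filters; message shape is assumed.
function filterMessages(messages, {query, from_datetime, to_datetime} = {}) {
  return messages.filter(m => {
    // query: simple substring match on the message text.
    if (query && !m.text.includes(query)) return false;
    // from_datetime / to_datetime: ISO 8601 range check.
    const t = Date.parse(m.timestamp);
    if (from_datetime && t < Date.parse(from_datetime)) return false;
    if (to_datetime && t > Date.parse(to_datetime)) return false;
    return true;
  });
}

const messages = [
  {text: '[some-file.ts]: loaded', timestamp: '2025-09-28T10:00:00Z'},
  {text: 'unrelated noise', timestamp: '2025-09-28T10:05:00Z'},
  {text: '[some-file.ts]: saved', timestamp: '2025-09-28T10:14:00Z'},
];

// Example 1: namespace query.
console.log(filterMessages(messages, {query: '[some-file.ts]'}).length); // 2

// Example 2: only the last few minutes.
console.log(filterMessages(messages, {from_datetime: '2025-09-28T10:10:00Z'}).length); // 1
```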

Benefits

  • Reduced LLM context-token wastage and, by positive implication, reduced cost ($)
  • Less unnecessary parsing of irrelevant messages
  • Less LLM noise and confusion, improving the LLM's analysis and feedback
  • Possibly faster overall LLM output, since there are fewer messages to parse

@ronantakizawa (Author)

@Sayvai, I like your suggestions. I think it's best to implement them in a new PR and open a new GitHub issue.

@Sayvai commented Sep 28, 2025

@Sayvai, I like your suggestions. I think it's best to implement them in a new PR and open a new GitHub issue.

Glad you agree @ronantakizawa 🙂

As suggested, I've created a new scoped GitHub issue for the work 👇

@OrKoN (Collaborator) commented Sep 29, 2025

Thanks for the PR! We are currently discussing how best to implement this, and I think the PR does not quite align with what we are discussing (use the same pagination mechanism as in the network requests tool, support multiple levels, provide a detailed tool to get stack traces). We are planning to address the underlying issue soon, though, and we will see if we incorporate parts of the PR into the solution.

@santoldev

This is a good PR; it addresses what I originally mentioned in my issue.

but this

Smart Deduplication (creates a map using message text as the key)

Removes truly identical messages that were causing repetition
Preserves message order and context

sounds like a band-aid patch for the real problem. The question is: why does list_console_messages have duplicated messages in the first place? Even with the Playwright MCP it's the same, so I'm sure it uses similar logic, or this is browser/CDP related.

I ended up creating my own debug tool (an extension) because I really needed it ASAP.
It fixes all my issues and works well in my workflow: it has filtering (limits), debug levels, and different limit windows (1000 messages max, 300 returned per tool call, sorted from latest), plus filter-by-text and regex search. I have also added my own version of Evaluate JS that works properly with my workflow.

AI is actually 10x better at debugging with the right tools.

Hoping ChromeDevTools MCP will fix all these issues soon so I won't need my own extension.

@lifeinchords

@santoldev can you please share more on how you accomplished this?

I ended up creating my own debug tool (an extension) because I really needed it ASAP.
It fixes all my issues and works well in my workflow: it has filtering (limits), debug levels, and different limit windows (1000 messages max, 300 returned per tool call, sorted from latest), plus filter-by-text and regex search. I have also added my own version of Evaluate JS that works properly with my workflow.

Did you find a way to modify the MCP itself to expose filtering before the tools respond, or are you filtering the tokens somehow before the IDE processes them?

6 participants