Releases: MervinPraison/PraisonAI
v2.2.82
Full Changelog: v2.2.81...v2.2.82
v2.2.81
What's Changed
- fix: resolve MongoDB vector search index creation error (missing embedding_model_name) by @github-actions[bot] in #1070
- fix: resolve MongoDB SearchIndexModel missing definition argument by @MervinPraison in #1072
Full Changelog: v2.2.80...v2.2.81
v2.2.80
What's Changed
- fix: enable token tracking by passing metrics parameter from Agent to LLM by @github-actions[bot] in #1067
- fix: comprehensive aiohttp session cleanup to resolve persistent warnings by @github-actions[bot] in #1068
Full Changelog: v2.2.79...v2.2.80
v2.2.79
What's Changed
- feat: Add comprehensive token metrics tracking for PraisonAI agents by @github-actions[bot] in #1055
- feat: Simplify token metrics to Agent(metrics=True) by @MervinPraison in #1056
- fix: reduce monitoring overhead with performance optimizations by @github-actions[bot] in #1058
- fix: add run() method alias to PraisonAIAgents for consistent API by @MervinPraison in #1063
- feat: make performance monitoring disabled by default by @github-actions[bot] in #1061
- perf: optimize telemetry performance with thread pools and queue-based processing by @github-actions[bot] in #1062
- fix: resolve zero token metrics by fixing agent LLM instance access by @MervinPraison in #1064
- fix: properly close aiohttp sessions in MCP HTTP stream transport by @github-actions[bot] in #1066
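The metrics changes above (#1056, #1063, #1064) can be sketched as follows. This is an illustrative example only, not an excerpt from the release: it assumes the praisonaiagents package is installed and an LLM API key is configured, and the instructions and task description are placeholders.

```python
from praisonaiagents import Agent, Task, PraisonAIAgents

# Per PR #1056, a single flag now enables token metrics tracking.
agent = Agent(
    instructions="Summarize the input text in one sentence.",  # placeholder
    metrics=True,
)
task = Task(
    description="Summarize: PraisonAI adds token metrics.",  # placeholder
    agent=agent,
)
agents = PraisonAIAgents(agents=[agent], tasks=[task])

# Per PR #1063, run() is an alias added for a consistent API.
agents.run()
```

Per #1064, the reported token counts are read from the agent's underlying LLM instance, which is why earlier builds could show zero metrics.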
Full Changelog: v2.2.78...v2.2.79
v2.2.78
Full Changelog: v2.2.77...v2.2.78
v2.2.77
Full Changelog: v2.2.76...v2.2.77
v2.2.76
Full Changelog: v2.2.75...v2.2.76
v2.2.75
Full Changelog: v2.2.74...v2.2.75
v2.2.74
Full Changelog: v2.2.73...v2.2.74
v2.2.73
What's Changed
- feat: add context CLI command with --url and --goal parameters by @MervinPraison in #1026
- fix: implement real-time streaming for Agent.start() method by @MervinPraison in #1027
- fix: implement real-time streaming for Agent.start() method by @MervinPraison in #1028
- fix: enhance Gemini streaming robustness with graceful JSON parsing error handling by @MervinPraison in #1029
- fix: bypass display_generation for OpenAI streaming to enable raw chunk output by @MervinPraison in #1030
- fix: eliminate streaming pause caused by telemetry tracking by @github-actions[bot] in #1032
- Fix litellm deprecation warnings for issue #1033 by @github-actions[bot] in #1034
- fix: correct tool call argument parsing in streaming mode by @MervinPraison in #1037
- feat: Add comprehensive performance monitoring system by @MervinPraison in #1038
- Fix: Comprehensive LiteLLM deprecation warning suppression by @MervinPraison in #1039
- PR #1038: Monitoring examples by @MervinPraison in #1040
- PR #1039: Logging by @MervinPraison in #1041
- fix: ensure display_generating is called when verbose=True regardless of streaming mode by @MervinPraison in #1042
- fix: enhance LiteLLM streaming error handling for JSON parsing errors (Issue #1043) by @MervinPraison in #1044
- fix: correct display_generating logic to only show when stream=False AND verbose=True by @MervinPraison in #1045
- fix: implement proper streaming fallback logic for JSON parsing errors by @MervinPraison in #1046
- fix: resolve display_generating issue by ensuring stream parameter is correctly passed by @MervinPraison in #1047
- PR #1046: Changes from Claude by @MervinPraison in #1048
- Fix: Add display_generating support for OpenAI non-streaming mode by @MervinPraison in #1049
- fix: Remove display_generating when stream=false to prevent streaming-like behavior by @MervinPraison in #1050
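The streaming fixes above (#1027 through #1050) revolve around how Agent.start() behaves with stream enabled versus the display_generating indicator shown when it is not. A minimal consumption sketch, assuming the praisonaiagents package, a configured OpenAI key, and an illustrative model name and prompt:

```python
from praisonaiagents import Agent

# With stream=True, start() yields raw chunks in real time (PRs #1027/#1028,
# #1030). With stream=False and verbose=True, the display_generating
# indicator is shown instead (PRs #1045/#1050).
agent = Agent(
    instructions="You are a concise storyteller.",  # placeholder
    llm="gpt-4o-mini",  # illustrative model name
    stream=True,
)
for chunk in agent.start("Tell a two-sentence story about a robot."):
    print(chunk, end="", flush=True)
```

Several entries in this release (#1029, #1044, #1046) harden this loop against malformed JSON chunks from the provider by falling back gracefully rather than raising mid-stream.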
New Contributors
- @github-actions[bot] made their first contribution in #1032
Full Changelog: v2.2.72...v2.2.73