
Conversation

andrewjschuang (Collaborator) commented Feb 19, 2025

WHY

Summary by CodeRabbit

  • Chores

    • Bumped version identifiers for the updated components.
  • Refactor

    • Revised the available model options by replacing specific identifiers with broader, generalized categories (e.g., "sonar," "sonar-pro," "sonar-reasoning," and "sonar-reasoning-pro").


vercel bot commented Feb 19, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

3 Skipped Deployments
Name                                  Status      Updated (UTC)
docs-v2                               ⬜️ Ignored  Feb 19, 2025 2:39pm
pipedream-docs                        ⬜️ Ignored  Feb 19, 2025 2:39pm
pipedream-docs-redirect-do-not-edit   ⬜️ Ignored  Feb 19, 2025 2:39pm

coderabbitai bot (Contributor) commented Feb 19, 2025

Walkthrough

This pull request updates version numbers and a constants array across three components. The chat-completions module’s exported version is bumped from "0.0.4" to "0.0.5", and the package version is updated from "0.1.3" to "0.1.4". Additionally, the MODELS array in the constants file is modified by removing specific model names and introducing generalized identifiers.

Changes

File(s)                                Change Summary
components/.../chat-completions.mjs    Version bump: "0.0.4" → "0.0.5"
components/.../package.json            Version bump: "0.1.3" → "0.1.4"
components/.../constants.mjs           MODELS array: removed the specific model entries and added "sonar", "sonar-pro", "sonar-reasoning", "sonar-reasoning-pro"
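For reference, the resulting constants file plausibly reduces to the following. This is a sketch reconstructed from the change summary and the review diff further down in this thread, not the verbatim source:

```javascript
// components/perplexity/common/constants.mjs (reconstructed sketch)
const constants = {
  // Generalized Perplexity model identifiers introduced by this PR
  MODELS: [
    "sonar",
    "sonar-pro",
    "sonar-reasoning",
    "sonar-reasoning-pro",
  ],
};

export default constants;
```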

Possibly related PRs

Suggested labels

User submitted, actions

Suggested reviewers

  • malexanderlim

Poem

Oh, I'm a rabbit with a code delight,
Hopping through changes both day and night.
Versions leap up, models rearranged,
In files and constants, all carefully changed.
Celebrate the commits with a joyful byte! 🐰

Warning

There were issues while running some tools. Please review the errors and either fix the tool’s configuration or disable the tool if it’s a critical failure.

🔧 ESLint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

components/perplexity/actions/chat-completions/chat-completions.mjs

Oops! Something went wrong! :(

ESLint: 8.57.1

Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'jsonc-eslint-parser' imported from /eslint.config.mjs
at packageResolve (node:internal/modules/esm/resolve:839:9)
at moduleResolve (node:internal/modules/esm/resolve:908:18)
at defaultResolve (node:internal/modules/esm/resolve:1038:11)
at ModuleLoader.defaultResolve (node:internal/modules/esm/loader:557:12)
at ModuleLoader.resolve (node:internal/modules/esm/loader:525:25)
at ModuleLoader.getModuleJob (node:internal/modules/esm/loader:246:38)
at ModuleJob._link (node:internal/modules/esm/module_job:126:49)

components/perplexity/common/constants.mjs

Oops! Something went wrong! :(

ESLint: 8.57.1

Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'jsonc-eslint-parser' imported from /eslint.config.mjs
at packageResolve (node:internal/modules/esm/resolve:839:9)
at moduleResolve (node:internal/modules/esm/resolve:908:18)
at defaultResolve (node:internal/modules/esm/resolve:1038:11)
at ModuleLoader.defaultResolve (node:internal/modules/esm/loader:557:12)
at ModuleLoader.resolve (node:internal/modules/esm/loader:525:25)
at ModuleLoader.getModuleJob (node:internal/modules/esm/loader:246:38)
at ModuleJob._link (node:internal/modules/esm/module_job:126:49)

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
components/perplexity/common/constants.mjs (1)

1-13: ⚠️ Potential issue

❓ Verification inconclusive

⚠️ Breaking Change: Model names have been updated.

The removal of specific model versions (llama-3.1-sonar-small-128k-online, llama-3.1-sonar-large-128k-online) in favor of generalized identifiers could break existing implementations. Please ensure:

  1. This change is documented in the changelog
  2. Migration instructions are provided for users
  3. The version bump (0.1.3 → 0.1.4) adequately reflects this breaking change - consider a major version bump (1.0.0) instead

Let's check for any existing usages that might be affected:


🏁 Script executed:

#!/bin/bash
# Search for any usage of the old model names that might need updating.
# Note: ripgrep has no built-in "mjs" file type, so this invocation fails
# (the re-run below switches to a glob instead).
rg -i "llama-3\.1-sonar-(small|large)-128k-online" --type js --type mjs

Length of output: 101


Breaking Change: Verify Deprecated Model Name Usages

The update removes specific model identifiers (llama-3.1-sonar-small-128k-online and llama-3.1-sonar-large-128k-online) in favor of generalized names. This could break existing implementations that might still reference these explicit names.

Action Items:

  • Documentation: Ensure the changelog clearly documents this breaking change and provide detailed migration instructions for users.
  • Versioning: Reassess whether the minor version bump (0.1.3 → 0.1.4) sufficiently communicates the breaking nature of this change, or if a major version bump (e.g., 1.0.0) is more appropriate.
  • Verification: Confirm that there are no residual hard-coded usages of the old model names in the codebase. The original verification command didn’t work for .mjs files. Please re-run the following updated command to search in both .js and .mjs files:
#!/bin/bash
rg -i "llama-3\.1-sonar-(small|large)-128k-online" -g "*.{js,mjs}"

Review any findings, and if old references are still present, update them accordingly.
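For callers still passing the removed identifiers, a small shim can ease migration. The pairings below (small → sonar, large → sonar-pro) follow the deprecation guidance quoted later in this thread; the helper itself is illustrative and not part of the component:

```javascript
// Hypothetical migration map from the deprecated model names removed in
// this PR to their generalized replacements. Not part of the component;
// the pairings are one plausible reading of Perplexity's deprecation notice.
const DEPRECATED_MODEL_MAP = {
  "llama-3.1-sonar-small-128k-online": "sonar",
  "llama-3.1-sonar-large-128k-online": "sonar-pro",
  "llama-3.1-sonar-huge-128k-online": "sonar-pro",
};

// Resolve a possibly-deprecated model name to a current identifier,
// passing current names through unchanged.
function resolveModel(name) {
  return DEPRECATED_MODEL_MAP[name] ?? name;
}
```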

🧹 Nitpick comments (1)
components/perplexity/common/constants.mjs (1)

1-13: Consider documenting the model capabilities and differences.

The new model naming scheme (sonar vs sonar-pro vs sonar-reasoning vs sonar-reasoning-pro) suggests different capabilities or performance characteristics. Consider adding JSDoc comments to document:

  • The differences between base and pro models
  • The specific capabilities of reasoning models
  • Any migration guidance for users of the old model names
 export default {
+  /**
+   * Available Perplexity AI models:
+   * - sonar: Base model for general use
+   * - sonar-pro: Enhanced version with improved capabilities
+   * - sonar-reasoning: Specialized for reasoning tasks
+   * - sonar-reasoning-pro: Enhanced reasoning model
+   * @see https://docs.perplexity.ai/docs/model-cards
+   */
   MODELS: [
     "sonar",
     "sonar-pro",
     "sonar-reasoning",
     "sonar-reasoning-pro",
   ],
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f3caff9 and f6353c0.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (3)
  • components/perplexity/actions/chat-completions/chat-completions.mjs (1 hunks)
  • components/perplexity/common/constants.mjs (1 hunks)
  • components/perplexity/package.json (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • components/perplexity/package.json
  • components/perplexity/actions/chat-completions/chat-completions.mjs
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: pnpm publish
  • GitHub Check: Verify TypeScript components
  • GitHub Check: Publish TypeScript components
🔇 Additional comments (1)
components/perplexity/common/constants.mjs (1)

3-6:

✅ Verification successful

Verify the updated model identifiers against Perplexity's API documentation.

The change from specific model versions to generalized identifiers could impact existing integrations. Let's verify these new model names are correct.


🌐 Web query:

What are the available model names in Perplexity AI's API?

💡 Result:

Perplexity AI's API provides access to multiple advanced language models, including both proprietary and open-source options. Here's a breakdown of available models based on current documentation:

Core Proprietary Models (Sonar Series)

Available until February 22, 2025 (deprecation notice)[7]:

  • llama-3.1-sonar-small-128k-online
  • llama-3.1-sonar-large-128k-online
  • llama-3.1-sonar-huge-128k-online

Current recommended replacements[7][10]:

  • sonar (127k context)
  • sonar-pro (200k context)
  • sonar-reasoning (127k context)
  • sonar-reasoning-pro (127k context)

Open-Source Foundation Models

Available through pplx-api and third-party integrations[4][9]:

  • mistral-7b-instruct
  • codellama-34b-instruct
  • llama-2-13b-chat
  • llama-2-70b-chat
  • openhermes-2-mistral-7b
  • openhermes-2.5-mistral-7b

Specialized Variants

Additional optimized versions available via API[9][10]:

pplx-7b-chat
pplx-70b-chat
pplx-7b-online
pplx-70b-online
pplx-7b-chat-alpha
pplx-70b-chat-alpha

Usage Notes

  1. Model access requires an API key through Perplexity's authentication system[2][4]
  2. Default model is sonar when not specified[3]
  3. Full list updates dynamically - check /models endpoint for latest availability[5]
  4. Citations can be enabled via return_citations=True parameter (beta feature)[9]

Developers can access these models through:

client.chat.completions.create(
    model="perplexity/sonar-pro",
    messages=[...]
)

For complete specifications and parameters, consult Perplexity's official model cards documentation[10].



Verified model identifiers update – no issues detected

The new model identifiers "sonar", "sonar-pro", "sonar-reasoning", and "sonar-reasoning-pro" now align with Perplexity AI’s current recommendations per their documentation. This update replaces the older, version-specific models with more generalized identifiers. Please ensure that integration tests and documentation are updated accordingly, as this change could affect clients relying on previous model names.
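As a quick smoke test of the new identifiers, one can hit Perplexity's OpenAI-compatible chat-completions endpoint directly. A minimal sketch (the API key is a placeholder and error handling is omitted):

```javascript
// Illustrative smoke test: send one message to Perplexity's
// chat-completions endpoint using one of the new model identifiers.
async function chatCompletion(apiKey, model, content) {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content }],
    }),
  });
  return res.json();
}
```

Running this with each of the four new names should return a completion rather than an invalid-model error.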

@andrewjschuang andrewjschuang merged commit 188fe69 into master Feb 19, 2025
11 checks passed
@andrewjschuang andrewjschuang deleted the perplexity-update-models branch February 19, 2025 14:44