Commit 71f590d

docs: fix more broken links (#27806)
Fix some broken links
1 parent c572d66 commit 71f590d

18 files changed: +21 -21 lines changed

docs/docs/concepts/chat_models.mdx

Lines changed: 2 additions & 2 deletions
@@ -152,7 +152,7 @@ A semantic cache introduces a dependency on another model on the critical path o

 However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider and improve response times.

-Please see the [how to cache chat model responses](/docs/how_to/#chat-model-caching) guide for more details.
+Please see the [how to cache chat model responses](/docs/how_to/chat_model_caching/) guide for more details.

 ## Related resources

@@ -165,4 +165,4 @@ Please see the [how to cache chat model responses](/docs/how_to/#chat-model-cach
 * [Tool calling](/docs/concepts#tool-calling)
 * [Multimodality](/docs/concepts/multimodality)
 * [Structured outputs](/docs/concepts#structured_output)
-* [Tokens](/docs/concepts/tokens)
+* [Tokens](/docs/concepts/tokens)
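
For reference, the guide at the corrected target covers chat model response caching. A minimal sketch of an in-memory cache, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set (any chat model works the same way):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI  # illustrative provider choice

# Cache responses in process memory for the lifetime of the interpreter.
set_llm_cache(InMemoryCache())

llm = ChatOpenAI(model="gpt-4o-mini")
llm.invoke("Why do parrots talk?")  # first call goes to the provider
llm.invoke("Why do parrots talk?")  # identical prompt is answered from the cache
```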

docs/docs/concepts/runnables.mdx

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ This guide covers the main concepts and methods of the Runnable interface, which
 The Runnable way defines a standard interface that allows a Runnable component to be:

 * [Invoked](/docs/how_to/lcel_cheatsheet/#invoke-a-runnable): A single input is transformed into an output.
-* [Batched](/docs/how_to/lcel_cheatsheet/#batch-a-runnable/): Multiple inputs are efficiently transformed into outputs.
+* [Batched](/docs/how_to/lcel_cheatsheet/#batch-a-runnable): Multiple inputs are efficiently transformed into outputs.
 * [Streamed](/docs/how_to/lcel_cheatsheet/#stream-a-runnable): Outputs are streamed as they are produced.
 * Inspected: Schematic information about Runnable's input, output, and configuration can be accessed.
 * Composed: Multiple Runnables can be composed to work together using [the LangChain Expression Language (LCEL)](/docs/concepts/lcel) to create complex pipelines.
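
The methods named in this hunk can be exercised without any model provider; a small sketch using `RunnableLambda`:

```python
from langchain_core.runnables import RunnableLambda

double = RunnableLambda(lambda x: x * 2)

print(double.invoke(3))           # invoke: single input -> 6
print(double.batch([1, 2, 3]))    # batch: multiple inputs -> [2, 4, 6]
for chunk in double.stream(4):    # stream: chunks as they are produced -> 8
    print(chunk)

# Composition with LCEL: pipe two Runnables into a pipeline.
pipeline = double | RunnableLambda(lambda x: x + 1)
print(pipeline.invoke(3))         # 7
```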

docs/docs/concepts/tools.mdx

Lines changed: 1 addition & 1 deletion
@@ -141,7 +141,7 @@ See [how to pass run time values to tools](/docs/how_to/tool_runtime/) for more

 You can use the `RunnableConfig` object to pass custom run time values to tools.

-If you need to access the [RunnableConfig](/docs/concepts/runnables/#RunnableConfig) object from within a tool. This can be done by using the `RunnableConfig` annotation in the tool's function signature.
+If you need to access the [RunnableConfig](/docs/concepts/runnables/#runnableconfig) object from within a tool. This can be done by using the `RunnableConfig` annotation in the tool's function signature.

 ```python
 from langchain_core.runnables import RunnableConfig
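
The pattern this hunk describes, a tool parameter annotated with `RunnableConfig` so run-time values can be read inside the tool, looks roughly like the sketch below. The tool name and the `suffix` key are illustrative, not part of the original page:

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool

@tool
def greet(name: str, config: RunnableConfig) -> str:
    """Greet a user, reading an extra value passed in at run time."""
    # Parameters annotated with RunnableConfig are injected by LangChain and
    # excluded from the tool's input schema.
    suffix = config.get("configurable", {}).get("suffix", "")
    return f"Hello, {name}{suffix}"

greet.invoke({"name": "Ada"}, config={"configurable": {"suffix": "!"}})
```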

docs/docs/concepts/vectorstores.mdx

Lines changed: 1 addition & 1 deletion
@@ -186,6 +186,6 @@ See this [how-to guide on hybrid search](/docs/how_to/hybrid/) for more details.
 | Name | When to use | Description |
 |-------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
 | [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. [Paper](https://arxiv.org/abs/2210.11934). |
-| [Maximal Marginal Relevance (MMR)](/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
+| [Maximal Marginal Relevance (MMR)](https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html#langchain_pinecone.vectorstores.PineconeVectorStore.max_marginal_relevance_search) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
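
The updated MMR link points at `PineconeVectorStore.max_marginal_relevance_search`; the same method exists on the generic vector store interface. A hedged sketch using the in-memory store, assuming `langchain-openai` for embeddings:

```python
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings  # assumes OPENAI_API_KEY is set

store = InMemoryVectorStore.from_documents(
    [Document(page_content=t) for t in ["cats purr", "cats meow", "dogs bark"]],
    OpenAIEmbeddings(),
)

# k results are picked from fetch_k candidates; lambda_mult trades relevance
# (1.0) against diversity (0.0).
docs = store.max_marginal_relevance_search("cat sounds", k=2, fetch_k=3, lambda_mult=0.5)
```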

docs/docs/how_to/agent_executor.ipynb

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@
 "# Build an Agent with AgentExecutor (Legacy)\n",
 "\n",
 ":::important\n",
-"This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n",
+"This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/architecture/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n",
 ":::\n",
 "\n",
 "By themselves, language models can't take actions - they just output text.\n",
@@ -802,7 +802,7 @@
 "That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n",
 "\n",
 ":::important\n",
-"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
+"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/architecture/#langgraph)\n",
 ":::\n",
 "\n",
 "If you want to continue using LangChain agents, some good advanced guides are:\n",

docs/docs/how_to/qa_chat_history_how_to.ipynb

Lines changed: 1 addition & 1 deletion
@@ -686,7 +686,7 @@
 "source": [
 "### Agent constructor\n",
 "\n",
-"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/#langgraph) to construct the agent. \n",
+"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/architecture/#langgraph) to construct the agent. \n",
 "Currently we are using a high level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic."
 ]
 },
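
The high-level interface referred to here is LangGraph's prebuilt agent constructor; a hedged sketch with a checkpointer so chat history persists across turns, where `llm` and `tools` stand in for the objects defined earlier in that notebook:

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

# `llm` and `tools` are assumed to be defined earlier in the notebook.
agent_executor = create_react_agent(llm, tools, checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "abc123"}}
agent_executor.invoke({"messages": [("user", "Hi, I'm Bob.")]}, config)
agent_executor.invoke({"messages": [("user", "What's my name?")]}, config)  # prior turn is remembered
```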

docs/docs/how_to/structured_output.ipynb

Lines changed: 1 addition & 1 deletion
@@ -556,7 +556,7 @@
 "id": "498d893b-ceaa-47ff-a9d8-4faa60702715",
 "metadata": {},
 "source": [
-"For more on few shot prompting when using tool calling, see [here](/docs/how_to/function_calling/#Few-shot-prompting)."
+"For more on few shot prompting when using tool calling, see [here](/docs/how_to/tools_few_shot/)."
 ]
 },
 {
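
The corrected link covers few-shot prompting for tool calling: example human/AI/tool messages are placed ahead of the real question. A hedged sketch, with an illustrative tool and model:

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Hand-written example turns, including a tool call and its result.
examples = [
    HumanMessage("What is 317253 times 128472?", name="example_user"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[{"name": "multiply", "args": {"a": 317253, "b": 128472}, "id": "1"}],
    ),
    ToolMessage("40758127416", tool_call_id="1"),
    AIMessage("317253 * 128472 = 40,758,127,416", name="example_assistant"),
]

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([multiply])
llm.invoke(examples + [HumanMessage("What is 3 times 12?")])
```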

docs/docs/integrations/chat/naver.ipynb

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 "source": [
 "# ChatClovaX\n",
 "\n",
-"This notebook provides a quick overview for getting started with Naver’s HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/#chat-models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
+"This notebook provides a quick overview for getting started with Naver’s HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
 "\n",
 "[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio API Guide [documentation](https://api.ncloud-docs.com/docs/clovastudio-chatcompletions).\n",
 "\n",

docs/docs/integrations/chat/writer.ipynb

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 "source": [
 "# ChatWriter\n",
 "\n",
-"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/#chat-models).\n",
+"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/chat_models).\n",
 "\n",
 "Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home/models).\n",
 "\n",

docs/docs/integrations/llms/sambastudio.ipynb

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@
 "**[SambaNova](https://sambanova.ai/)'s** [Sambastudio](https://sambanova.ai/technology/full-stack-ai-platform) is a platform that allows you to train, run batch inference jobs, and deploy online inference endpoints to run open source models that you fine tuned yourself.\n",
 "\n",
 ":::caution\n",
-"You are currently on a page documenting the use of SambaStudio models as [text completion models](/docs/concepts/#llms). We recommend you to use the [chat completion models](/docs/concepts/#chat-models).\n",
+"You are currently on a page documenting the use of SambaStudio models as [text completion models](/docs/concepts/text_llms). We recommend you to use the [chat completion models](/docs/concepts/chat_models).\n",
 "\n",
 "You may be looking for [SambaStudio Chat Models](/docs/integrations/chat/sambastudio/) .\n",
 ":::\n",
