
Add docs regarding BYPASS_TOOL_CONSENT and fix other minor docs issues #82

Merged (4 commits) on Jun 9, 2025

Changes from all commits

57 changes: 20 additions & 37 deletions docs/examples/python/memory_agent.md
@@ -32,34 +32,20 @@ The memory agent utilizes two primary tools:

This example demonstrates a workflow where memories are used to generate contextually relevant responses:

```
┌─────────────────┐ ┌─────────────────────────┐ ┌─────────────────────────┐
│ │ │ │ │ │
│ User Query │────▶│ Command Classification │────▶│ Conditional Execution │
│ │ │ (store/retrieve/list) │ │ Based on Command Type │
│ │ │ │ │ │
└─────────────────┘ └─────────────────────────┘ └───────────┬─────────────┘
┌───────────────────────────────────────────────────────┐
│ │
│ Store Action List Action Retrieve Action │
│ ┌───────────┐ ┌───────────┐ ┌───────────────┐ │
│ │ │ │ │ │ │ │
│ │ mem0() │ │ mem0() │ │ mem0() │ │
│ │ (store) │ │ (list) │ │ (retrieve) │ │
│ │ │ │ │ │ │ │
│ └───────────┘ └───────────┘ └───────┬───────┘ │
│ │ │
│ ▼ │
│ ┌───────────┐ │
│ │ │ │
│ │ use_llm() │ │
│ │ │ │
│ └───────────┘ │
│ │
└──────────────────────────────────────────────────────┘
```mermaid
flowchart TD
UserQuery["User Query"] --> CommandClassification["Command Classification<br>(store/retrieve/list)"]
CommandClassification --> ConditionalExecution["Conditional Execution<br>Based on Command Type"]

ConditionalExecution --> ActionContainer

subgraph ActionContainer[Memory Operations]
StoreAction["Store Action<br><br>mem0()<br>(store)"]
ListAction["List Action<br><br>mem0()<br>(list)"]
RetrieveAction["Retrieve Action<br><br>mem0()<br>(retrieve)"]
end

RetrieveAction --> UseLLM["use_llm()"]
```
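
A minimal sketch of how this workflow might be wired together, assuming the `mem0_memory` and `use_llm` tools from `strands_tools`; the system prompt and example prompts are illustrative rather than taken verbatim from the example:

```python
from strands import Agent
from strands_tools import mem0_memory, use_llm  # assumed tool imports

# Hypothetical system prompt: classify each request as store, retrieve,
# or list, then call the memory tool with the matching action.
MEMORY_SYSTEM_PROMPT = (
    "Classify each user request as store, retrieve, or list, and call the "
    "memory tool with the matching action. For retrieve requests, pass the "
    "results to use_llm to compose a natural-language answer."
)

agent = Agent(system_prompt=MEMORY_SYSTEM_PROMPT, tools=[mem0_memory, use_llm])

# Each prompt exercises one branch of the conditional execution shown above.
agent("Remember that my favourite colour is green.")  # store
agent("What do you know about me?")                   # list
agent("What is my favourite colour?")                 # retrieve, then use_llm
```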

### Key Workflow Components
@@ -126,15 +112,12 @@ This example demonstrates a workflow where memories are used to generate context

The retrieval path demonstrates tool chaining, where memory retrieval and LLM response generation work together:

```
┌───────────────┐ ┌───────────────────────┐ ┌───────────────┐
│ │ │ │ │ │
│ User Query │────▶│ memory() Retrieval │────▶│ use_llm() │────▶ Response
│ │ │ │ │ │
└───────────────┘ └───────────────────────┘ └───────────────┘
(Finds relevant memories) (Generates natural
language answer)
```
```mermaid
flowchart LR
UserQuery["User Query"] --> MemoryRetrieval["memory() Retrieval<br>(Finds relevant memories)"]
MemoryRetrieval --> UseLLM["use_llm()<br>(Generates natural<br>language answer)"]
UseLLM --> Response["Response"]
```
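
As a rough sketch of this chain, the two tools could also be invoked directly on an agent; the parameter names (`action`, `query`, `prompt`, `system_prompt`) are assumptions about the tool schemas rather than exact signatures from the example:

```python
from strands import Agent
from strands_tools import mem0_memory, use_llm  # assumed tool imports

agent = Agent(tools=[mem0_memory, use_llm])

# Step 1: retrieve memories semantically related to the question
# (parameter names are illustrative; check the tool spec for the schema).
memories = agent.tool.mem0_memory(action="retrieve", query="favourite colour")

# Step 2: pass the retrieved memories to use_llm to compose the answer.
answer = agent.tool.use_llm(
    prompt=f"Answer the question using only these memories:\n{memories}",
    system_prompt="You answer questions using the supplied memories.",
)
```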

This chaining allows the agent to:
1. First retrieve memories that are semantically relevant to the user's query
3 changes: 2 additions & 1 deletion docs/examples/python/meta_tooling.md
@@ -36,7 +36,8 @@ agent = Agent(
system_prompt=TOOL_BUILDER_SYSTEM_PROMPT, tools=[load_tool, shell, editor]
)
```

- `editor`: Tool used to write code directly to a file named `"custom_tool_X.py"`, where "X" is the index of the tool being created.
- `load_tool`: Tool used to load the tool so the Agent can use it.
- `shell`: Tool used to execute the tool.
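
A hypothetical prompt for exercising the tool-builder agent defined above; the wording is illustrative, and `custom_tool_0.py` simply follows the naming convention described here:

```python
# Ask the tool-builder agent to create, load, and run a new tool.
# The agent is expected to write custom_tool_0.py with the editor tool,
# load it with load_tool, and then execute it.
agent(
    "Create a tool that reverses a string, save it as custom_tool_0.py, "
    "load it, and run it on the word 'strands'."
)
```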

@@ -109,7 +109,8 @@ def math_assistant(query: str) -> str:
except Exception as e:
return f"Error processing your mathematical query: {str(e)}"
```
Each specialized agent has a distinct system prompt and tools in its inventory, and follows this general pattern.

- [Language Assistant](https://github.com/strands-agents/docs/blob/main/docs/examples/python/multi_agent_example/language_assistant.py) specializes in queries related to translation into different languages.
- [Computer Science Assistant](https://github.com/strands-agents/docs/blob/main/docs/examples/python/multi_agent_example/computer_science_assistant.py) specializes in queries related to writing, editing, and running code, and explaining computer science concepts.
- [English Assistant](https://github.com/strands-agents/docs/blob/main/docs/examples/python/multi_agent_example/english_assistant.py) specializes in queries related to grammar and English comprehension.
31 changes: 30 additions & 1 deletion docs/user-guide/concepts/tools/example-tools-package.md
@@ -59,4 +59,33 @@ pip install strands-agents-tools[mem0_memory]
- [`stop`]({{ tools_repo }}/src/strands_tools/stop.py): Force stop the agent event loop
- [`think`]({{ tools_repo }}/src/strands_tools/think.py): Perform deep thinking by creating parallel branches of agentic reasoning
- [`use_llm`]({{ tools_repo }}/src/strands_tools/use_llm.py): Run a new AI event loop with custom prompts
- [`workflow`]({{ tools_repo }}/src/strands_tools/workflow.py): Orchestrate sequenced workflows


## Tool Consent and Bypassing

By default, certain tools that perform potentially sensitive operations (like file modifications, shell commands, or code execution) will prompt for user confirmation before executing. This safety feature ensures users maintain control over actions that could modify their system.

To bypass these confirmation prompts, you can set the `BYPASS_TOOL_CONSENT` environment variable:

```bash
# Set this environment variable to bypass tool confirmation prompts
export BYPASS_TOOL_CONSENT=true
```

Setting the environment variable within Python:

```python
import os

os.environ["BYPASS_TOOL_CONSENT"] = "true"
```

When this variable is set to `true`, tools will execute without asking for confirmation. This is particularly useful for:

- Automated workflows where user interaction isn't possible
- Development and testing environments
- CI/CD pipelines
- Situations where you've already validated the safety of operations

**Note:** Use this feature with caution in production environments, as it removes an important safety check.
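
For instance, an unattended script might set the variable before invoking an agent that uses a sensitive tool. This is only a sketch, assuming the `shell` tool from `strands_tools`; the prompt is illustrative:

```python
import os

from strands import Agent
from strands_tools import shell  # assumed tool import

# Bypass confirmation prompts before any sensitive tool runs (e.g., in CI).
os.environ["BYPASS_TOOL_CONSENT"] = "true"

agent = Agent(tools=[shell])

# The shell tool would normally ask for confirmation before executing a
# command; with the variable set it runs without prompting.
agent("List the files in the current working directory.")
```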