
Advanced Memory Manager in Prompt Widget

Many creators use context to manage LLM chat memory, which can be complicated and redundant.

We've introduced a new, more convenient method for managing memory in the prompt widget that offers enhanced capabilities.

Here's an example of chatting with auto-memory using Claude 3 Haiku:

{
  "id": "prompt_widget_auto_memory",
  "initial": "init_state",
  "states": {
    "init_state": {
      "render": {
        "text": "Hello, I am an example agent. I'm here to demonstrate auto-memory in the widget. Let's chat."
      },
      "transitions": {
        "CHAT": "home_state"
      }
    },
    "home_state": {
      "inputs": {
        "text": {
          "type": "text",
          "source": "IM",
          "user_input": true
        }
      },
      "tasks": [
        {
          "name": "llm_widget_example_task",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "widget_id": "1744218088699596809", // Claude 3 Haiku
            "user_prompt": "{{text}}",
            "system_prompt": "Be a nice agent.", // Optional field. You can input system prompt of agent.
            "top_p": 0.5, // Optional field. Default value is 0.5
            "temperature": 0.5, // Optional field. Default value is 0.5
            "frequency_penalty": 0, // Optional field. Default value is 0
            "presence_penalty": 0, // Optional field. Default value is 0
            "max_tokens": 1024, // Optional field. Default value is 1024
            "memory": "auto"
            "output_name": "result"
          }
        }
      ],
      "render": {
        "text": "{{result}}"
      },
      "transitions": {
        "CHAT": "home_state"
      }
    }
  }
}

The key to this configuration is setting the memory field to "auto" in the module config:

 "module_config": {
           ...
            "memory": "auto",
          }

We automatically manage your chat memory in our store, so you no longer need to persist it in context. Memory management treats each LLM task as the smallest unit: if you use auto-memory in different states or in different tasks, each task keeps its own separate memory, and tasks never share memory. Note that if you rename the state a task belongs to, or reorder the tasks array, that task's memory may be lost after saving.
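For instance, the following sketch shows two tasks in the same state, each with auto-memory (the task names are illustrative; the widget ID is the Claude 3 Haiku example from above). Each task maintains its own independent chat history:

```json
"tasks": [
  {
    "name": "summarizer_task",  // hypothetical task name
    "module_type": "AnyWidgetModule",
    "module_config": {
      "widget_id": "1744218088699596809",
      "user_prompt": "Summarize: {{text}}",
      "memory": "auto",  // memory #1, scoped to this task only
      "output_name": "summary"
    }
  },
  {
    "name": "responder_task",  // hypothetical task name
    "module_type": "AnyWidgetModule",
    "module_config": {
      "widget_id": "1744218088699596809",
      "user_prompt": "{{summary}}",
      "memory": "auto",  // memory #2, separate from memory #1
      "output_name": "result"
    }
  }
]
```

Because each task is its own memory unit, the summarizer's history never leaks into the responder's, even though both run in the same state.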

Additional Prompt

For users who want to use the fulfill mechanism or add prefixes and suffixes to user input, automatic memory management now includes a configuration option for this:

 "module_config": {
            "user_prompt": "{{text}}",
            "prompt_addition": {
              "user_prompt_prefix": "[User Input Prefix]",
              "user_prompt_suffix": "[User Input Suffix]",
              "pre_messages": [
                {"role": "user", "content": "Hello, I am a user message."},
                {"role": "assistant", "content": "Hello, I am an agent message."}
              ],
              "post_messages": [
                {"role": "user", "content": "Hello, I am a user message."},
                {"role": "assistant", "content": "Hello, I am an agent message."}
              ]
            },
            "memory": "auto",
            ...
         }

The key is prompt_addition. It has four fields that let you build a more complex LLM chat system:

| Name | Type | Description |
| --- | --- | --- |
| user_prompt_prefix | string | Adds a prefix to the user's prompt |
| user_prompt_suffix | string | Adds a suffix to the user's prompt |
| pre_messages | array&lt;map&gt; | Adds messages before the memory |
| post_messages | array&lt;map&gt; | Adds messages after the memory and the user's prompt |

If you set all of these fields, the request messages will be assembled in this order:

[
  pre_messages...,
  memory...,
  user_prompt_prefix + user_prompt + user_prompt_suffix,
  post_messages...
]
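As a sanity check, the assembly order above can be sketched in Python. The function and field names mirror the config fields; this is an illustration of the documented ordering, not MyShell's actual implementation:

```python
def build_request_messages(memory, user_prompt, prompt_addition=None):
    """Assemble the final message list in the documented order:
    pre_messages, memory, wrapped user prompt, post_messages."""
    pa = prompt_addition or {}
    # The prefix and suffix wrap the raw user prompt into one user message.
    wrapped_prompt = (
        pa.get("user_prompt_prefix", "")
        + user_prompt
        + pa.get("user_prompt_suffix", "")
    )
    return (
        pa.get("pre_messages", [])
        + memory
        + [{"role": "user", "content": wrapped_prompt}]
        + pa.get("post_messages", [])
    )

messages = build_request_messages(
    memory=[
        {"role": "user", "content": "Earlier question."},
        {"role": "assistant", "content": "Earlier answer."},
    ],
    user_prompt="What's next?",
    prompt_addition={
        "user_prompt_prefix": "[User Input Prefix] ",
        "user_prompt_suffix": " [User Input Suffix]",
        "pre_messages": [{"role": "user", "content": "Hello, I am a user message."}],
        "post_messages": [{"role": "assistant", "content": "Hello, I am an agent message."}],
    },
)
# messages: pre_messages, then memory, then the wrapped prompt, then post_messages.
```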

Message Editing

When using auto-memory, you can delete or edit a single message, whether it was sent by the user or the agent, and its corresponding memory is updated accordingly. You no longer need to clear the entire context.
