Advanced Memory Manager in Prompt Widget

Many creators use context to manage LLM chat memory, an approach that can be complicated and redundant.

We've introduced a new, more convenient way to manage memory in the prompt widget, with enhanced capabilities such as per-task memory and message editing.

Here's an example of chatting with auto-memory using Claude 3 Haiku:

{
  "id": "prompt_widget_auto_memory",
  "initial": "init_state",
  "states": {
    "init_state": {
      "render": {
        "text": "Hello, I am an example bot. I'm here to demonstrate auto-memory in the widget. Let's chat."
      },
      "transitions": {
        "CHAT": "home_state"
      }
    },
    "home_state": {
      "inputs": {
        "text": {
          "type": "text",
          "source": "IM",
          "user_input": true
        }
      },
      "tasks": [
        {
          "name": "llm_widget_example_task",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "widget_id": "1744218088699596809", // Claude 3 Haiku
            "user_prompt": "{{text}}",
            "system_prompt": "Be a nice bot.", // Optional field. You can input system prompt of bot.
            "top_p": 0.5, // Optional field. Default value is 0.5
            "temperature": 0.5, // Optional field. Default value is 0.5
            "frequency_penalty": 0, // Optional field. Default value is 0
            "presence_penalty": 0, // Optional field. Default value is 0
            "max_tokens": 1024, // Optional field. Default value is 1024
            "memory": "auto"
            "output_name": "result"
          }
        }
      ],
      "render": {
        "text": "{{result}}"
      },
      "transitions": {
        "CHAT": "home_state"
      }
    }
  }
}

The key to this configuration is setting the memory field to "auto" within the module config:

 "module_config": {
           ...
            "memory": "auto",
          }

We automatically manage your chat memory in our store, so you no longer need to keep a copy in context. Memory management treats each LLM task as its smallest unit: if you use auto-memory in different states or different tasks, each task keeps its own separate memory, and tasks never share memory, as the sketch below illustrates. Because a task's memory is tied to where the task lives, renaming the state it belongs to or reordering the tasks array may cause that memory to be lost after saving.
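As an illustration, here is a minimal sketch of this behavior; the state names, task names, and output names are hypothetical, and the widget ID reuses the Claude 3 Haiku example above:

"states": {
  "qa_state": {
    "tasks": [
      {
        "name": "qa_task", // this task keeps its own auto-managed memory
        "module_type": "AnyWidgetModule",
        "module_config": {
          "widget_id": "1744218088699596809", // Claude 3 Haiku
          "user_prompt": "{{text}}",
          "memory": "auto",
          "output_name": "qa_result"
        }
      }
    ]
  },
  "summary_state": {
    "tasks": [
      {
        "name": "summary_task", // separate memory; nothing is shared with qa_task
        "module_type": "AnyWidgetModule",
        "module_config": {
          "widget_id": "1744218088699596809", // Claude 3 Haiku
          "user_prompt": "{{text}}",
          "memory": "auto",
          "output_name": "summary_result"
        }
      }
    ]
  }
}

Both tasks call the same LLM widget, but because each task is its own memory unit, the conversation held in qa_state is invisible to summary_task, and vice versa.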

Additional Prompt

For users who want to use the fulfill mechanism or add prefixes and suffixes to user input, automatic memory management now includes a configuration that supports this:

 "module_config": {
            "user_prompt": "{{text}}",
            "prompt_addition": {
              "user_prompt_prefix": "[User Input Prefix]",
              "user_prompt_suffix": "[User Input Suffix]",
              "pre_messages": [
                {"role": "user", "content": "Hello, I am a user message."},
                {"role": "assistant", "content": "Hello, I am a bot message."}
              ],
              "post_messages": [
                {"role": "user", "content": "Hello, I am a user message."},
                {"role": "assistant", "content": "Hello, I am a bot message."}
              ]
            },
            "memory": "auto",
            ...
         }

The key is prompt_addition. It has four fields that allow you to create a more complex LLM chat system:

| Name | Type | Description |
| --- | --- | --- |
| user_prompt_prefix | string | Adds a prefix to the user's prompt |
| user_prompt_suffix | string | Adds a suffix to the user's prompt |
| pre_messages | array<map> | Adds messages before memory |
| post_messages | array<map> | Adds messages after memory and the user's prompt |

If you set all of these fields, the request messages will be ordered as follows:

[
  pre_messages...,
  memory...,
  user_prompt_prefix + user_prompt + user_prompt_suffix,
  post_messages...
]
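
For example, combining the prompt_addition config above with an existing two-turn memory (the memory contents here are hypothetical), the assembled request would look like this:

[
  {"role": "user", "content": "Hello, I am a user message."}, // pre_messages
  {"role": "assistant", "content": "Hello, I am a bot message."}, // pre_messages
  {"role": "user", "content": "What can you do?"}, // memory: earlier user turn
  {"role": "assistant", "content": "I can answer questions."}, // memory: earlier bot turn
  {"role": "user", "content": "[User Input Prefix]{{text}}[User Input Suffix]"}, // wrapped user prompt
  {"role": "user", "content": "Hello, I am a user message."}, // post_messages
  {"role": "assistant", "content": "Hello, I am a bot message."} // post_messages
]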

Message Editing

With auto-memory, you can delete or edit a single message, whether it was sent by the user or the bot, and its corresponding memory is updated to match. There is no longer any need to clear the entire context.
