Cache Mode

Debugging a Pro Config can be time-consuming, since there may be many AI widgets inside the workflow. It can also cost a lot of battery when some of the widgets are called repeatedly.

To alleviate these issues, we provide cache mode, which lets the creator flexibly choose which widgets to skip during the workflow. When a widget is set to cache mode, it is called only once and its outputs are stored in our database. As long as the module_config is unchanged, further calls to the widget simply return the previously stored outputs and cost zero battery. Cache mode is very useful while you are building the workflow.

The cache flag can be set in the automata (when you want to use cache across the whole Pro Config), in a state (when you want to use cache in a specific state), or in the module_config (when you want to debug a specific module). Note that the priority of the cache flag is module_config > state > automata, which means the value of cache set in the former overrides the value set in the latter.
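For example, to cache every widget inside a single state, you can set the flag at the state level. The sketch below is based on the demo in this section; the exact placement of the state-level flag (directly on the state object) is an assumption here, while the automata-level and module-level placements are shown in the full example that follows:

"states": {
  "chat_page_state": {
    "cache": true, // assumption: the state-level flag sits directly on the state object
    // ... inputs, tasks, render, and transitions as in the full demo below
  }
}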

Taking the simple demo in Building Workflow as an example:

{
  "type": "automata",
  "id": "chat_demo",
  "initial": "chat_page_state",
  "properties": {
    "cache": true,
  }
  "states": {
    "chat_page_state": {
      "inputs": {
        "user_message": {
          "type": "IM",
          "user_input": true
        }
      },
      "tasks": [
        {
          "name": "generate_reply",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "widget_id": "1744214024104448000", // GPT 3.5
            "system_prompt": "You are a teacher teaching Pro Config.",
            "user_prompt": "{{user_message}}",
            "output_name": "reply",
            "cache": true
          }
        },
        {
          "name": "generate_voice",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "widget_id": "1743159010695057408", // TTS widget (Samantha)
            "content": "{{reply}}",
            "output_name": "reply_voice"
            "cache": true
          }
        }
      ],
      "render": {
        "text": "{{reply}}",
        "audio": "{{reply_voice}}"
      },
      "transitions": {
        "CHAT": "chat_page_state"
      }
    }
  }
}

Here the two modules are set to cache mode. The result is as follows:

We can see that the results of both the LLM and the TTS stay the same after the second chat. If we want to re-run the LLM, we can simply set its cache to false:

        {
          "name": "generate_reply",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "widget_id": "1744214024104448000", // GPT 3.5
            "system_prompt": "You are a teacher teaching Pro Config.",
            "user_prompt": "{{user_message}}",
            "output_name": "reply",
            "debug": false
          }
        },

The results are as follows:

The response of the LLM varies based on the user's input, while the output of the TTS stays the same because the TTS widget is still in cache mode.
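Similarly, if you also want to regenerate the voice, you can apply the same override to the TTS module. The sketch below simply mirrors the pattern used for the LLM module above:

        {
          "name": "generate_voice",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "widget_id": "1743159010695057408", // TTS widget (Samantha)
            "content": "{{reply}}",
            "output_name": "reply_voice",
            "cache": false
          }
        }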
