Cache Mode
Debugging a Pro Config can be time-consuming, since there might be many AI widgets inside the workflow. Besides, debugging a Pro Config may cost a lot of battery when some of the widgets are called repetitively.
To alleviate these issues, we release the cache mode, where the creator flexibly chooses which widgets to skip during the workflow. When a widget is set to cache mode, it is called only once and its outputs are stored in our database. As long as the module_config is unchanged, further calls to the widget simply return the previously stored outputs and cost zero battery. The cache mode is very useful when you are building the workflow.
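As a minimal sketch, enabling cache mode for a single widget could look like the fragment below. The `cache` flag is the feature described above; the widget ID, content template, and output name are illustrative placeholders, not values from this document:

```json
{
  "module_type": "AnyWidgetModule",
  "module_config": {
    "widget_id": "<your-widget-id>",
    "content": "{{user_message}}",
    "output_name": "llm_reply",
    "cache": true
  }
}
```

With `"cache": true`, the first call runs the widget and stores its outputs; later calls with the same module_config return the stored outputs for free.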
The cache flag can be set in the automata (when you want to use cache in the whole Pro Config), in a state (when you want to use cache in a specific state), or in the module_config (when you want to debug a specific module). Note that the priority of the cache flag is module_config > state > automata, which means the value of cache set in the former overrides that in the latter.
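The priority rule can be sketched as follows. This is a hedged illustration: the `cache` flag and the priority order come from this document, while the exact placement of the flag at each level, the state name, and the block layout are assumptions for the example:

```json
{
  "id": "cache_demo",
  "cache": true,
  "initial": "chat_state",
  "states": {
    "chat_state": {
      "cache": true,
      "blocks": [
        {
          "module_type": "AnyWidgetModule",
          "module_config": {
            "cache": false
          }
        }
      ]
    }
  }
}
```

Here the automata-level and state-level flags both request caching, but the module-level `"cache": false` has the highest priority, so this particular widget always runs while the rest of the state stays cached.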
Taking the simple demo in Building Workflow as an example, here the two modules are set to cache mode, and this is how it behaves:
We can see that both the LLM and TTS results stay the same after the second chat. If we want to re-run the LLM, we can simply set its cache flag to false:
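Concretely, re-enabling the LLM while keeping the TTS cached might look like the fragment below. The `cache` flags reflect this document; the module types and other config fields are illustrative placeholders:

```json
{
  "blocks": [
    {
      "module_type": "AnyWidgetModule",
      "module_config": {
        "widget_id": "<llm-widget-id>",
        "cache": false
      }
    },
    {
      "module_type": "AnyWidgetModule",
      "module_config": {
        "widget_id": "<tts-widget-id>",
        "cache": true
      }
    }
  ]
}
```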
The results are as follows:
The response of the LLM varies based on the user's input, while the output of the TTS stays the same because the TTS widget is still in cache mode.