Random Routing Example
Thanks to @Borsuc for providing this example!
Randomly choosing between LLMs or other tasks
Say we want to run the user's chat input through one of 3 LLMs, chosen at random. In Pro Config, tasks are executed sequentially, but unconditionally. Therefore, to accomplish this, we need a separate state for each LLM and use conditional transitions to choose which one to go to. In this example, you will also learn how to chain states together to create a more modular config, where you can easily update and re-use states later, much like grouping code into functions.
So first, I recommend splitting up your design so that you Don't Repeat Yourself (this is known as the DRY principle). What I mean is: if you have multiple states that want to use the random LLM path, create one state dedicated to choosing a random LLM to go to, instead of duplicating the random choice in every state that wants to use it.
Also, to do post-processing on the LLM output, use a separate state, so that you only write this once (and if you have to fix it or update it later, you only update it in one place). Again, DRY. The LLM state itself should just set a context variable and jump to the post-processing state.
Here's an example config where we randomly choose between Mixtral 8x7b, Slerp 13b or Airoboros 70b:
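A skeleton of such a config might look like the sketch below. Treat it as an outline only: the automata id is arbitrary, and the individual states are described (and some of them sketched) in the sections that follow.

```json
{
  "type": "automata",
  "id": "random_llm_routing_example",
  "initial": "home_state",
  "context": {
    "user_prompt": "",
    "llm_result": ""
  },
  "states": {
    "home_state": {},
    "chat_state": {},
    "random_llm_state": {},
    "mixtral_state": {},
    "slerp_state": {},
    "airoboros_state": {},
    "post_llm_state": {}
  }
}
```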
In the above config, we first define two context variables in the automata, user_prompt and llm_result. We use these to pass information across states. Since we split our "functions" into separate states for maintainability and future extensibility, we need such variables to carry data between them.
The home_state is basic and self-explanatory. After the user chats in the home state, we move to the chat_state. In this state, we process the user input, set the context.user_prompt variable, and finally jump to the state that initiates the random selection, random_llm_state. Note that random_llm_state does only one thing: the selection. That way, if we ever need a random LLM from another state, we can just jump to it, much like calling a function.
The random chooser state
The random_llm_state first stores the random number in an output variable. This is important not just to avoid repeating the formula in each condition, but because we must generate one random number and then use that same number in every condition. We use an output rather than an input because transition conditions can't use inputs.
In the random generation formula we do a simple scaling to the number of LLMs we have. Math.random() generates a random number between 0 (inclusive) and 1 (exclusive), so we multiply it by 3 since we have 3 LLMs, giving a number between 0 (inclusive) and 3 (exclusive). This makes it easier to choose in the conditions.
Remember that the conditions are evaluated sequentially, so even though the second condition (rng < 2) is also true whenever the number is below 1, execution has to "pass" the first condition before it gets there, so this is fine. This scheme also makes it easy to conditionally exclude some LLMs when they are not suitable for certain scenarios: just extend the relevant condition, for example rng < 2 && some_other_condition.
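Here is a rough sketch of what random_llm_state could look like. The exact schema for conditional transitions is an assumption here (a list of condition/target pairs under ALWAYS), so double-check it against the Pro Config reference; the state and variable names are the ones used in this example.

```json
"random_llm_state": {
  "outputs": {
    "rng": "{{Math.random() * 3}}"
  },
  "transitions": {
    "ALWAYS": [
      { "condition": "{{rng < 1}}", "target": "mixtral_state" },
      { "condition": "{{rng < 2}}", "target": "slerp_state" },
      { "condition": "{{true}}", "target": "airoboros_state" }
    ]
  }
}
```

Because the conditions are checked in order, each LLM ends up with a 1/3 chance of being selected.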
The LLM states
Each LLM has its own state, chosen by random_llm_state. The job of these states is strictly to process the user input with the given LLM and store the result in the context.llm_result context variable, nothing more. Note how these states are simply chained together via ALWAYS transitions, which lets us wire them together in various ways and avoid repeating ourselves.
The LLM states then jump to the post-processing state, post_llm_state, where we do a simple post-process before going back to chat.
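One of these LLM states might look roughly like the following. The widget_id is a placeholder, the system prompt is only illustrative (matching the "polite assistant" behaviour mentioned at the end), and the module and field names are assumptions based on the usual shape of an LLM task in Pro Config, so adjust them to whichever widget you actually use.

```json
"mixtral_state": {
  "tasks": [
    {
      "name": "run_mixtral",
      "module_type": "AnyWidgetModule",
      "module_config": {
        "widget_id": "<mixtral 8x7b widget id>",
        "system_prompt": "You are a polite, helpful assistant.",
        "user_prompt": "{{context.user_prompt}}",
        "output_name": "reply"
      }
    }
  ],
  "outputs": {
    "context.llm_result": "{{reply}}"
  },
  "transitions": {
    "ALWAYS": "post_llm_state"
  }
}
```

The slerp_state and airoboros_state would differ only in the widget and the system prompt, which is exactly why the shared post-processing lives in post_llm_state instead of being repeated three times.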
The post-processing state
post_llm_state comes after the LLM states; here we post-process the result that each LLM state stored in context.llm_result, replacing some accented characters with their ASCII equivalents. This step is not terribly important; it just illustrates a possible JavaScript post-processing pass over the LLM outputs. You can do far more complicated things here before presenting the result to the user.
This state also renders the text that's visible to the user before waiting for chat again.
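A sketch of the post-processing state is below, assuming the accent cleanup is a chain of String.replace calls inside the output expression and that the render block can read the updated context value; the character set you map is entirely up to you.

```json
"post_llm_state": {
  "outputs": {
    "context.llm_result": "{{context.llm_result.replace(/[àáâä]/g, 'a').replace(/[èéêë]/g, 'e').replace(/[òóôö]/g, 'o')}}"
  },
  "render": {
    "text": "{{context.llm_result}}"
  },
  "transitions": {
    "CHAT": "chat_state"
  }
}
```

The render block is what actually shows the cleaned-up text to the user, and the CHAT transition sends the next message back through chat_state, so the whole loop starts again with a fresh random pick.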
Now when you test this example:
If you get a response that acts like a polite helpful assistant, it means Mixtral was chosen.
If you get a response that acts like an annoying tsundere, it means Slerp was chosen.
If you get a response that acts like a swagster, it means Airoboros was chosen.
There is no memory, so you can repeat the same message to test.