FAQs
The following are frequently asked questions (FAQs) about using Pro Config to develop AI bots on MyShell.ai. The Pro Config tutorial is the best way to get familiar with it.
You can use a code editor like Visual Studio Code that shows you exactly where you have a syntax error so that you can spot and fix it.
Q: What is the difference between `Automata` and `AtomicState`?

An `AtomicState` is a small task that is performed within your `Automata`. Your Pro Config root is an `Automata`, and the nested states in its `states` field are `AtomicState`s. There are a few differences between `Automata` and `AtomicState`:

- `AtomicState` lacks the `properties.is_chat_allowed` and `states` fields.
- They have different special events for transitions. For example, `AtomicState` has the `ALWAYS` event and `Automata` has the `DONE` event.
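For orientation, here is a minimal sketch of how the two fit together. Only the `Automata`/`AtomicState` relationship comes from this FAQ; the surrounding field names (`type`, `id`, `initial`, `render`, the `CHAT` transition) are illustrative assumptions, so check the Pro Config reference for the exact schema:

```json
{
  "type": "automata",
  "id": "faq_demo",
  "initial": "home_page_state",
  "states": {
    "home_page_state": {
      "render": {
        "text": "Hello from an AtomicState inside the Automata!"
      },
      "transitions": {
        "CHAT": "home_page_state"
      }
    }
  }
}
```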
Head over to your bot settings and you should be able to change the intro message. The intro message is shown whenever someone first interacts with your bot or clears the bot’s memory.
You need a Genesis Pass to publish at most three bots for anyone to use. If you don't have a Genesis Pass, you can make your bot private and still share it with others. If you're part of the Learning Labs, you get a chance to win a Genesis Pass by building bots, sharing them, and being active in the Learning Labs and the MyShell community.
Go to MyShell's site. In the left bar, click "Reward" -> "Reward Redemption" -> "S11 Premium Membership Card". After getting the premium membership, you get 10 private bot slots!
Private bots can be shared. Just click the share button and you will get a bot link; people can access the bot by clicking that link. Note that this link is NOT the URL of the site.
Debugging your bot involves a few steps:

- Use the `validate` button in the Pro Config editor and check for any error messages or warnings.
- Use cache mode to choose which widgets to skip during the workflow.
- Simplify your config by isolating complex tasks into separate states and testing them individually.
Cache mode caches the responses from different widgets and reuses them to save time and battery usage. To use cache mode, add the following to the root of your Pro Config JSON:
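A minimal sketch; only the `cache` key comes from this FAQ, and the surrounding Automata fields are placeholders:

```json
{
  "type": "automata",
  "cache": true,
  "initial": "home_page_state",
  "states": { "home_page_state": {} }
}
```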
Setting `cache` to `true` enables cache mode, and all widget responses will be cached.
For more detailed debugging tips, see the cache mode guide. You can also check our video on cache mode published on YouTube.
If something still goes wrong, feel free to let us know about the issues you faced on our Discord server.
You can set `cache` to `false` in the `module_config` for that specific widget. This ensures the result of that widget is never cached, even when cache mode is enabled globally.
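A hedged sketch of what that looks like inside a task; only `module_config` and `cache: false` come from this FAQ, while the widget ID and the other keys are placeholders that depend on the widget you are calling:

```json
{
  "name": "generate_image",
  "module_type": "AnyWidgetModule",
  "module_config": {
    "widget_id": "<your_widget_id>",
    "prompt": "{{user_prompt}}",
    "cache": false,
    "output_name": "image_url"
  }
}
```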
You can find all models supported in the documentation.
You can find all widgets in the Widget Center on the dashboard. Follow the widget documentation to see how you can integrate them into your Pro Config using the `AnyWidgetModule`.
Q: Which models can I use with `LLMModule`?

Currently, we support GPT-3.5 Turbo (`gpt-35-turbo-16k`) and GPT-4 (`gpt-4-1116-preview`) with the `LLMModule`. We will update the documentation when future models that can be used with this module are released. However, you can use other models as widgets via `AnyWidgetModule`. For example, you can use the Perplexity widget and Gemini Pro through `AnyWidgetModule` in your apps.
Q: What about LLM models that are not available in `LLMModule`? Can we provide memory and customize settings in these widgets?

The team is working on streamlining and adding all the LLM models under `LLMModule`. For now, they are made available to you in the form of widgets. Unfortunately, for now, you cannot provide additional settings. You can use these widgets for one-off prompts, or pass the memory in the user prompt, which is a janky way of doing it.
For using the Stable Diffusion widget, a model ID is required. You can find it on the model's page on Civitai. For example, for the AnyLoRA model, if the link of the model looks like the following:

https://civitai.com/models/23900?modelVersionId=95489

the model ID will be 95489; use this as `model` in your Pro Config.
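As a rough sketch, the task calling the Stable Diffusion widget might then look like this; only the `model` value comes from the Civitai link above, and the widget ID and other `module_config` keys are placeholders:

```json
{
  "name": "generate_image",
  "module_type": "AnyWidgetModule",
  "module_config": {
    "widget_id": "<stable_diffusion_widget_id>",
    "model": 95489,
    "prompt": "{{user_prompt}}",
    "output_name": "image_url"
  }
}
```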
Currently, no. But this feature is in the pipeline for workflows where outputs from tasks don't depend on other tasks. Stay tuned for the feature!
You can use external tools such as JSON Crack to visualise your JSON. We are also working to make our no-code tool as powerful as pro config, so you will never need to use external tools for visualisation. Stay tuned for that!
It's possible that you're not filling in the required properties of the widget in `module_config` properly. Please copy the widget's Pro Config by clicking on the widget profile and double-check that you are configuring the widget correctly. If things still don't work out, please report the problem to us on our Discord server and we will make sure to help you out!
It might be a temporary problem with the widget. If the problem persists, please let us know on our Discord server and we will fix it as soon as possible.
You can create buttons in the `render` section and use `transitions` to change states when the buttons are pressed.

- Create two buttons under `render`, each with an `on_click` event.
- In `transitions`, use the `on_click` keys to tell which state each button should navigate to. For example, in the code below, we use the `yes_pressed` `on_click` event to navigate to the `yes_pressed_state` state.
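A minimal sketch of such a state; `yes_pressed` and `yes_pressed_state` come from the description above, while the state name `confirm_page`, the `content` key, and the "No" counterparts are illustrative assumptions:

```json
{
  "confirm_page": {
    "render": {
      "text": "Do you want to continue?",
      "buttons": [
        { "content": "Yes", "on_click": "yes_pressed" },
        { "content": "No", "on_click": "no_pressed" }
      ]
    },
    "transitions": {
      "yes_pressed": "yes_pressed_state",
      "no_pressed": "no_pressed_state"
    }
  }
}
```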
You can store it in the context via the `outputs` part of the state. The context variable can then be used in any other state, as shown in the example below, where:

- `user_name` is taken as an input in the `home_page` state.
- This `user_name` is stored in `context.user_name` in the `outputs` section. `context` is global and can be accessed by any state in your bot.
- `context.user_name` is used to render the username in the `output_page` state.
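A hedged sketch of the two states; the names `user_name`, `context.user_name`, `home_page`, and `output_page` come from the list above, while `type`, `user_input`, and the `ALWAYS` transition are assumptions to make the example self-contained:

```json
{
  "home_page": {
    "inputs": {
      "user_name": { "type": "text", "user_input": true }
    },
    "outputs": {
      "context.user_name": "{{user_name}}"
    },
    "transitions": { "ALWAYS": "output_page" }
  },
  "output_page": {
    "render": {
      "text": "Hello, {{context.user_name}}!"
    }
  }
}
```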
Q: My `LLMModule` doesn't chat and exits out after one message sent to it.

Q: My `LLMModule` doesn't retain memory and forgets things when chatting.

This is because you need to provide a `CHAT` transition back to the same state, indicating that you want that state to keep processing your chat messages. In the example sketched below, we are:

- Initialising an empty `memory` array inside `context`. Not strictly necessary, but good to have when you need a bird's-eye view of the code. If you don't want the LLM to retain memory, feel free to skip the memory features.
- Taking `user_message` as an input with type `IM`, meaning the input comes through the bot's instant messaging method, i.e. sending messages to the bot.
- In the `tasks` array, calling the `LLMModule` and passing the `user_prompt`, `system_prompt`, and the `memory` stored in `context`.
- In `outputs`, taking the output of the `LLMModule` and appending it to the `context.memory` array to keep it up to date with the conversation.
- In `transitions`, routing all `CHAT` events back to `chat_page`, meaning that when the user continues chatting, the state re-runs, takes the new message as input, and repeats the process.
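A minimal sketch of such a config, assuming the `LLMModule` keys described above (`system_prompt`, `user_prompt`, `memory`, `output_name`); the exact shape of the memory entries that `LLMModule` expects is not specified here, so treat the `outputs` expression as a placeholder:

```json
{
  "type": "automata",
  "id": "chat_demo",
  "initial": "chat_page",
  "context": {
    "memory": []
  },
  "states": {
    "chat_page": {
      "inputs": {
        "user_message": { "type": "IM", "user_input": true }
      },
      "tasks": [
        {
          "name": "generate_reply",
          "module_type": "LLMModule",
          "module_config": {
            "model": "gpt-35-turbo-16k",
            "system_prompt": "You are a friendly assistant.",
            "user_prompt": "{{user_message}}",
            "memory": "{{context.memory}}",
            "output_name": "reply"
          }
        }
      ],
      "outputs": {
        "context.memory": "{{context.memory.concat([{ 'user': user_message, 'assistant': reply }])}}"
      },
      "render": {
        "text": "{{reply}}"
      },
      "transitions": {
        "CHAT": "chat_page"
      }
    }
  }
}
```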
Q: How to use an `if` statement in Pro Config?

For conditions, you can use JavaScript expressions. That means you can use a ternary operator, since it is a single expression, unlike an `if` statement, which spans multiple lines of code. In the snippet below, we are:
- Taking a random number from the user as `user_random_number`.
- In the `render` section, using double curly braces (`{{}}`) to denote a JS expression.
- Applying a ternary operation. The syntax is `condition ? true case : false case`. You can nest ternary operators, but it can get messy. We are working to bring a code snippet widget to make things easier.
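A hedged sketch of such a snippet; `user_random_number` and the ternary in `render` come from the list above, while the state name, `type`, and `user_input` are illustrative assumptions:

```json
{
  "number_page": {
    "inputs": {
      "user_random_number": { "type": "text", "user_input": true }
    },
    "render": {
      "text": "{{ Number(user_random_number) > 50 ? 'That is a big number!' : 'That is a small number.' }}"
    }
  }
}
```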
Adding `choices: List[Any]` to your input field will render it as a dropdown.

For example:
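A minimal sketch; only the `choices` key comes from this FAQ, and the input name, `type`, and `user_input` are placeholders:

```json
{
  "inputs": {
    "favourite_colour": {
      "type": "text",
      "user_input": true,
      "choices": ["Red", "Green", "Blue"]
    }
  }
}
```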
Q: What is the difference between input types `text` and `IM`?

With input type `text`, a prompt is shown to the user to enter the required data, whereas with `IM`, the user provides input through the bot's instant messaging (chat).
To clear your bot memory, use the menu near the chat button and click on Clear Memory. This should start the bot from scratch.
First, identify which part of the Pro Config could be faulty. You can do this by determining at which part of the flow the bot is getting stuck, and then looking at the tasks in that specific state. Check for any invalid configurations. If things still don't work, separate tasks into different states to see which one could be causing problems. If nothing makes sense, feel free to contact us and we will do our best to help you out.
We will gift you a big battery recharge. Please contact us.
This is a bug. Just go to the Chat tab and refresh the page. Now, you should see a prompt to reset the bot’s memory.
Widget Call Error:
Don't worry, this is not your fault. This error is reported when the model hosting platform runs out of GPU memory. It generally resolves itself after a couple of minutes. Another scenario could involve oversized input images (or other inputs); consider using a smaller input size.