FAQs

Below are answers to frequently asked questions (FAQs) about using Pro Config to develop AI bots on MyShell.ai.

Basics

Q: How do I get started with Pro Config?

The Pro Config tutorial is the best way to get familiar with it.

Q: My JSON has a lot of syntax errors. Any tools to help?

Use a code editor like Visual Studio Code, which shows you exactly where each syntax error is so you can spot and fix it.

Q: What is the difference between Automata and AtomicState?

An AtomicState is a small task performed within your Automata. Your Pro Config root is an Automata, and the nested states in the states field are AtomicStates. There are a few differences between an Automata and an AtomicState:

  • An AtomicState lacks the properties.is_chat_allowed and states fields.

  • Each has different special events for transitions. For example, AtomicState has an ALWAYS event, while Automata has a DONE event (see the sketch below).
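
The skeleton below is a minimal sketch of where these differences live. It assumes the standard Automata fields used elsewhere in this FAQ (id, initial, states), and the bot and state names are hypothetical:

{
	"id": "example_bot", // hypothetical bot ID
	"initial": "intro",
	"properties": {
		"is_chat_allowed": true // Automata-only field; AtomicStates don't have this
	},
	"transitions": {
		"DONE": "intro" // DONE is a special event of the Automata
	},
	"states": {
		"intro": { // every entry in states is an AtomicState
			"render": {
				"text": "Welcome!"
			},
			"transitions": {
				"ALWAYS": "chat_page" // ALWAYS is a special event of AtomicStates
			}
		},
		"chat_page": {
			...
		}
	}
}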

Q: How can I change the intro message of the bot?

Head over to your bot settings and you should be able to change the intro message. The intro message is shown whenever someone first interacts with your bot or clears the bot’s memory.

Q: How can I publish my bot?

You need a Genesis Pass to publish up to three bots for anyone to use. If you don’t have a Genesis Pass, you can make your bot private and still share it with others. If you’re part of the Learning Labs, you have a chance to win a Genesis Pass by building bots, sharing them, and staying active in the Learning Labs and the MyShell community.

Q: How do I get more than one slot for private bots?

Go to MyShell's site. In the left sidebar, click "Reward" -> "Reward Redemption" -> "S11 Premium Membership Card".

After redeeming the premium membership, you get 10 private bot slots!

Q: How can I share my bot (both private and public)?

Both private and public bots can be shared. Just click the share button and you will get a bot link; people can access the bot by clicking that link. Note that the share link is NOT the URL of the page in your browser.

Q: How can I debug my bot if it's not behaving as expected?

Debugging your bot involves a few steps:

  1. Use the validate button in the Pro Config editor to check for error messages or warnings.

  2. Use cache mode to choose which widgets to skip during the workflow.

  3. Simplify your config by isolating complex tasks into separate states and testing them individually.

Cache mode caches the responses from different widgets and reuses them to save time and battery usage. To use cache mode, add the following to the root of your Pro Config JSON:

{
    ...
    "properties": {
        "cache": true
    }
    ...
}

Setting cache to true enables cache mode, and all widget responses will be cached.

For more detailed debugging tips, see the cache mode guide. You can also check our video on cache mode published on YouTube.

If something still goes wrong, feel free to let us know about the issues you faced in our Discord server.

Q: How can I disable cache mode for a specific widget?

You can set cache to false in the module_config for that specific widget. The widget's result will then never be cached, even when cache mode is enabled globally.
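
For example, here is a hedged sketch of a widget task with caching disabled for just that widget. The widget_id, prompt field, and output name are placeholders; copy the real ones from the widget's profile.

{
	"name": "generate_image",
	"module_type": "AnyWidgetModule",
	"module_config": {
		"widget_id": "YOUR_WIDGET_ID", // placeholder; copy from the widget profile
		"prompt": "{{ user_prompt }}", // placeholder field name
		"cache": false, // this widget's result is never cached, even with global cache mode on
		"output_name": "image"
	}
}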

Widgets & Modules

Q: Where can I find all the modules supported by Pro Config?

You can find all supported modules in the documentation.

Q: How can I find various widgets and use them in my Pro Config?

You can find all widgets in the Widget Center on the dashboard. Follow the widget documentation to see how to integrate them into your Pro Config using the AnyWidgetModule.

Q: What models are supported by LLMModule?

Currently, we support GPT-3.5 Turbo (gpt-35-turbo-16k) and GPT-4 (gpt-4-1106-preview) in the LLMModule. We will update the documentation when future models become available for this module. However, you can use other models as widgets via AnyWidgetModule. For example, you can use the Perplexity and Gemini Pro widgets through AnyWidgetModule in your apps.

Q: Why are models like Claude 3 and Gemini Pro offered as widgets instead of through LLMModule? Can we provide memory and customize settings in these widgets?

The team is working on streamlining and adding all the LLM models under LLMModule. For now, they are made available quickly in the form of widgets. Unfortunately, you cannot provide additional settings yet. You can use these widgets for one-off prompts, or pass the memory in the user prompt, which is admittedly a janky workaround.
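
As an illustration, here is a hedged sketch of passing memory through the user prompt of an LLM widget. It assumes standard JavaScript (e.g. JSON.stringify) works inside {{ }} expressions; the widget_id and field names are placeholders, so copy the real ones from the widget's Pro Config.

{
	"name": "ask_gemini",
	"module_type": "AnyWidgetModule",
	"module_config": {
		"widget_id": "YOUR_WIDGET_ID", // placeholder; copy from the Gemini Pro widget profile
		// the widget takes no memory field, so serialise the conversation into the prompt itself
		"user_prompt": "{{ 'Conversation so far: ' + JSON.stringify(context.memory) + ' User: ' + user_message }}",
		"output_name": "reply"
	}
}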

Q: How do I get the model ID for the Stable Diffusion widget?

The Stable Diffusion widget requires a model ID. You can find it on the model's page on Civitai. For example, for the AnyLoRA model, if the link to the model looks like this:

https://civitai.com/models/23900?modelVersionId=95489

the model ID is 95489. Use this as the model value in your Pro Config.
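
A hedged sketch of plugging that ID in (the widget_id and field names are placeholders; copy the exact Pro Config from the Stable Diffusion widget's profile):

{
	"name": "generate_image",
	"module_type": "AnyWidgetModule",
	"module_config": {
		"widget_id": "YOUR_WIDGET_ID", // placeholder; copy from the Stable Diffusion widget profile
		"model": "95489", // the modelVersionId taken from the Civitai URL above
		"prompt": "a watercolor painting of a fox", // placeholder prompt
		"output_name": "image"
	}
}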

Q: Can tasks be run in a parallel way?

Currently, no. However, this feature is in the pipeline for workflows where task outputs don't depend on other tasks. Stay tuned!

Q: I'm overwhelmed by the Pro Config syntax. Are there any tools to help me visualise the JSON?

You can use external tools such as JSON Crack to visualise your JSON. We are also working to make our no-code tool as powerful as Pro Config, so you will never need external tools for visualisation. Stay tuned for that!

Q: A widget I want to use works when used directly but doesn't work in Pro Config.

You may not be filling in the widget's required properties in module_config correctly. Try copying the widget's Pro Config by clicking on the widget profile, and double-check your configuration against it. If things still don't work, please report the problem on our Discord server and we will make sure to help you out!

Q: A widget I want to use returns null when used directly and loads infinitely when used in Pro Config.

It might be a temporary problem with the widget. If the problem persists, please let us know on our Discord server and we will fix it as soon as possible.

Pro Config Syntax

Q: How can I create buttons?

You can create buttons in the render section and use transitions to change states when the buttons are pressed.

  • Create the two buttons under render, each with an on_click key.

  • In transitions, use the on_click keys to specify which state each button should navigate to. For example, in the code below, the yes_pressed on_click navigates to the yes_pressed_state state.

{
	...
	"states": {
		"chat_page": {
			...
			"render": {
				"text": "Hello, choose your answer!",
				"buttons": [
					{
						"content": "Yes",
						"on_click": "yes_pressed"
					},
					{
						"content": "No",
						"on_click": "no_pressed"
					}
				]
			},
			"transitions": {
				"yes_pressed": "yes_pressed_state",
				"no_pressed": "no_pressed_state"
			}
		},
		"yes_pressed_state": {
			...
		},
		"no_pressed_state": {
			...
		}
	}
}

Q: How to store my input as a variable to use it later?

You can store it in context in the outputs section of the state. The context variable can then be used in any other state, as shown in the example below, where:

  • user_name is taken as an input in home_page state.

  • This user_name is stored in context.user_name in the outputs section. context is global and can be accessed by any state in your bot.

  • context.user_name is then used to render the username in the output_page state.

{
	...
	"states": {
		"home_page": {
			...
			"inputs": {
				"user_name": {
					"type": "text",
					"user_input": true
				}
			}
			...
			"outputs": {
				"context.user_name": "{{ user_name }}"
			}
		},
		"output_page": {
			...
			"render": {
				"text": "Welcome {{ context.user_name }}"
			}
			...
		}
	}
}

Q: My LLMModule doesn’t chat and exits after one message is sent to it.

Q: My LLMModule doesn’t retain memory and forgets things when chatting.

Both issues have the same fix: provide a CHAT transition back to the same state, indicating that you want that state to keep processing your chat messages. In the example below, we are:

  • Initialising an empty memory array inside context. This isn’t strictly necessary, but it’s good to have when you want a bird’s-eye view of the code. If you don’t want the LLM to retain memory, feel free to skip the memory features.

  • Taking user_message as an input with type IM, meaning the input comes through the bot’s instant messaging method, i.e. sending messages to the bot.

  • In the tasks array, calling the LLMModule and passing the user_prompt, system_prompt, and the memory stored in context.memory.

  • In outputs, taking the output of the LLMModule and appending it to the context.memory array so the memory stays up to date with the conversation.

  • In transitions, routing the CHAT event back to chat_page, so that when the user continues chatting, the state re-runs with the new message as input and the process repeats.

{
	...
	"context": {
		"memory": "{{ [] }}" // to maintain the LLM's memory
	},
	"states": {
		"chat_page": {
			...
			"inputs": {
				"user_message": {
					"type": "IM",
					"user_input": true
				}
			},
			"tasks": [
				{
					"name": "generate_reply",
					"module_type": "LLMModule",
					"module_config": {
						"model": "gpt-35-turbo-16k",
						"system_prompt": "You are a teacher teaching Pro Config, a powerful tool to build AI native applications.",
						"user_prompt": "{{user_message}}",
						"memory": "{{context.memory}}", // pass memory so the LLM remembers the previous conversation
						"output_name": "reply"
					}
				}
			],
			"render": {
				"text": "{{ reply }}"
			},
			"outputs": {
				"context.memory": "{{[...context.memory, {'user': user_message}, {'assistant': reply}]}}"
			},
			"transitions": {
				"CHAT": "chat_page"
			}
		}
	}
}

Q: How to use conditions in Pro Config?

Q: How to use an if statement in Pro Config?

For conditions, you can use JavaScript expressions. Since an expression must be a single line of code, use the ternary operator rather than a multi-line if statement. In the snippet below, we are:

  • Taking a random number from the user as user_random_number.

  • In the render section, using double curly braces ({{ }}) to denote a JS expression.

  • Applying a ternary operation. The syntax is condition ? value_if_true : value_if_false. You can nest ternary operators, but it can get messy. We are working on a code snippet widget to make things easier.

{
	...
	"states": {
		"main_page": {
			...
			"inputs": {
				"user_random_number": {
					"type": "IM",
					"user_input": true
				}
			},
			"render": {
				"text": "{{ user_random_number === Number(100) ? 'Jackpot!' : 'You didnt make it' }}"
			}
		}
	}
}

Q: How do I implement a dropdown in my input?

Adding choices: List[Any] to your input field turns it into a dropdown.

For example:

"first_input": {
            "choices": ["English", "Chinese", "French"],
            "default_value": "English",
            "description": "Select the language you want to display the word in",
            "type": "text",
            "user_input": true
          }

Q: What is the difference between input type text and IM?

With input type text, a form prompt is shown for the user to enter the required data, whereas with type IM, the user provides input through the bot’s instant messaging (chat).
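
A minimal sketch contrasting the two, following the input examples above (the input names are hypothetical):

"inputs": {
	"favourite_color": {
		"type": "text", // shown to the user as a form prompt
		"user_input": true
	},
	"user_message": {
		"type": "IM", // collected through the bot's chat box
		"user_input": true
	}
}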

Error Messages & Troubleshooting

Q: How can I clear my bot memory?

To clear your bot memory, use the menu near the chat button and click on Clear Memory. This should start the bot from scratch.

Q: My bot is stuck at message loading. What can I do to troubleshoot?

First, identify which part of the Pro Config could be faulty. Determine where in the flow the bot gets stuck, then look at the tasks in that specific state and check for any invalid configurations. If things still don’t work, separate tasks into different states to see which one is causing problems. If nothing makes sense, feel free to contact us and we will do our best to help you out.
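
For instance, you can temporarily route the flow into a state that runs only the suspect task (a sketch; the task and state names are hypothetical):

"states": {
	...
	"test_suspect_task": {
		"tasks": [
			{
				"name": "suspect_task", // the single task you want to isolate
				"module_type": "AnyWidgetModule",
				"module_config": {
					...
					"output_name": "result"
				}
			}
		],
		"render": {
			"text": "Task finished: {{ result }}" // if this never renders, this task is the one hanging
		}
	}
}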

Q: I have run out of batteries and I’m not done testing. What can I do to replenish the battery?

We will gift you a big battery recharge. Please contact us.

Q: When I update the Pro Config for my bot, I’m not prompted to reset the bot memory. What should I do?

This is a known bug. Go to the Chat tab and refresh the page; you should then see a prompt to reset the bot’s memory.

Q: Some common error messages, explained

  • Widget Call Error:

    Rest easy; this is not your fault. This error is reported when the model-hosting platform runs out of GPU memory. It generally resolves itself after a couple of minutes.

    Another scenario could involve oversized input images (or other inputs). Consider using a smaller input size.
