AI Best Practices
Get up to speed with AI functionalities by learning the best practices and Emporix recommendations.
While AI Agents can be extremely helpful, proper configuration and setup are essential to achieve the expected results.
A few points for your consideration and attention:
Prompt engineering
Tools choice
LLMs choice
Agents collaboration
Thorough testing
Custom agents
Prompt engineering
Effective prompt construction is crucial: the prompt is the core of the agent definition, and agent behavior is highly sensitive to phrasing.
Example prompt: To view an example prompt structure, see the prompt for the Translation Agent.
Be mindful of prompt sensitivity - even subtle changes in phrasing can confuse an agent or make it react in an unexpected way.
Use relevant prompt structure to ensure clarity. Define the purpose, steps, rules, and expected output. Provide examples for the agent to understand your case and input.
Give explicit instructions to avoid confusion about the desired behavior and steps.
Example:
Less effective:
"Summarize these customer reviews."
Better:
"Summarize the following customer reviews as 3 concise bullet points highlighting common themes in product satisfaction. Text: """ {reviews} """
Be detailed about the agent goal, format, tone, style, and constraints.
Example:
Less effective:
"Write a product description."
Better:
"Write a 3-sentence, SEO-optimized product description for an eco-friendly water bottle. Use a friendly and trustworthy tone."
Explain the context or motivation, as adding the “why” helps the model optimize the results.
Example:
Less effective:
"Do not exaggerate claims in ad copy."
Better:
"This ad copy will appear on our website’s sustainability page, so avoid exaggeration and focus on verified eco-benefits."
Define the agent specificity and control output.
Show examples of the desired format so the model knows exactly what to return.
Example:
Less effective:
"Extract product attributes from these listings."
Better:
"Extract and format product attributes as follows: Brand: <brand_name> Price: <price> Features: <comma-separated list> Customer rating: <x/5> Text: """ {listing} """
Say what to do instead of what not to do.
Example:
Less effective:
"Do not use slang in product copy."
Better:
"Write in clear, professional English suitable for business shoppers."
Reduce vague phrasing.
Example:
Less effective:
"Make the product title not too long."
Better:
"Limit the product title to 70 characters and include brand, product type, and key feature."
Use the example-driven prompting strategy.
Provide examples of ideal outputs when zero-shot results are weak.
Example:
Less effective:
"Generate marketing taglines for our new skincare line."
Better:
"Example: Product: Organic Face Serum → Tagline: 'Pure radiance, naturally powered.' Product: Charcoal Face Wash → Tagline: 'Detox deep, glow daily.' Product: {your_product} → Tagline:"
Set the relevant context and state.
Provide relevant business context, include brand values, audience, or campaign goals.
Example:
Less effective:
"Write an email campaign."
Better:
"Write a 3-email campaign for a premium coffee subscription targeting busy professionals who value convenience and taste."
Track incremental work or iterations.
For long tasks (for example, catalog cleanup or campaign planning), ask the model to summarize progress.
Example:
"After each step, summarize which product descriptions have been updated and which remain pending."
Use structured formats for progress or data.
Example:
"Store campaign performance metrics in JSON: { 'campaign': 'Summer Sale', 'CTR': 0.08, 'conversion': 0.02, 'next_steps': 'Test subject lines' }"
Define communication style and tone.
Match tone and format to the intended output as the style of your prompt influences the model’s response.
Example:
Less effective:
"Create a return policy page."
Better:
"Write a return policy in clear, polite prose using short paragraphs and reassuring language suitable for e-commerce customers."
Control the model behavior and actions.
Be explicit about whether the model should act or advise.
Example:
Less effective (model might only suggest):
"Can you improve our product titles?"
Better:
"Rewrite the following product titles to improve clarity and SEO performance."
Set the default to proactive or conservative mode when needed.
Example:
"<default_to_action> Revise the pricing descriptions directly unless told otherwise.</default_to_action>"
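The structural tips above (purpose, steps, rules, expected output, delimited input) can be sketched as a small prompt builder. This is an illustrative helper, not an Emporix API; the field names and delimiter style are assumptions you can adapt:

```python
def build_prompt(purpose, steps, rules, output_format, user_input):
    """Assemble a structured prompt: purpose, steps, rules,
    expected output, and clearly delimited input text."""
    sections = [
        f"Purpose: {purpose}",
        "Steps:\n" + "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1)),
        "Rules:\n" + "\n".join(f"- {r}" for r in rules),
        f"Expected output: {output_format}",
        # Triple-quote delimiters keep user data separate from instructions.
        'Text: """\n' + user_input + '\n"""',
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    purpose="Summarize customer reviews",
    steps=["Read all reviews", "Identify common themes"],
    rules=["Write in clear, professional English"],
    output_format="3 concise bullet points",
    user_input="Great bottle, keeps water cold all day.",
)
```

Keeping each section explicit makes prompts easier to review and to re-test after small wording changes.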
Tool and capability management
Agents should only be equipped with the resources they strictly need.
Attach only the necessary tools to an agent to prevent security and performance issues. The agent should have access to the exact MCP tools that are required for their tasks, nothing more and nothing less.
Determine the need for new tools when you modify the agent prompt. If an agent requires completely different behavior or automation, you must develop or configure new MCP tools and attach them to the agent. If the new functionality or use case can be handled by the existing toolset, only the prompt needs modification.
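The least-privilege principle above can be sketched as a simple attachment check. The agent and tool names here are hypothetical, not actual Emporix MCP identifiers:

```python
# Hypothetical registry of available MCP tools (names are illustrative).
AVAILABLE_TOOLS = {"send-invoice", "lookup-order", "update-price", "issue-refund"}

def attach_tools(required: set) -> set:
    """Attach exactly the tools the agent's tasks require -- no more, no less."""
    missing = required - AVAILABLE_TOOLS
    if missing:
        raise ValueError(f"Unknown tools requested: {missing}")
    # Deliberately never defaults to the full AVAILABLE_TOOLS set.
    return required

# An invoice-handling agent gets only the two tools its tasks need.
invoice_agent_tools = attach_tools({"send-invoice", "lookup-order"})
```

Failing fast on unknown tool names also catches typos before they silently widen an agent's capabilities.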
LLM Model recommendations and testing
Because Large Language Models (LLMs) evolve quickly, make a conscious decision about which LLM provider and model best suits your agent's purpose. Rigorous testing is mandatory for any LLM choice.
It is essential to conduct thorough testing with varied inputs for grounded results. Feeding LLMs different inputs ensures the agent covers different users' conversational preferences and tones.
Adjust the LLM temperature to the agent's purpose: lower values suit strict tasks and reduce hallucinations, while higher values allow greater creativity when the agent is supposed to produce more inventive results.
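As an illustration of matching temperature to agent purpose, a configuration sketch follows. The agent names and exact values are assumptions to tune through your own testing, not Emporix recommendations:

```python
# Illustrative temperature choices per agent purpose -- tune through testing.
AGENT_TEMPERATURES = {
    "invoice_lookup": 0.0,      # strict, deterministic task: minimize hallucinations
    "translation": 0.2,         # mostly literal, small room for phrasing
    "product_copy": 0.7,        # marketing copy benefits from creativity
    "tagline_brainstorm": 1.0,  # maximum variety for ideation
}

def temperature_for(agent: str, default: float = 0.3) -> float:
    """Look up the configured temperature, with a conservative fallback."""
    return AGENT_TEMPERATURES.get(agent, default)
```

Centralizing these values makes it easy to re-run the full test suite whenever one of them changes.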
Re-run agent tests after any change. Switching the model, adjusting the temperature, or even adding a sentence to the prompt requires re-running all tests, as any of these may change the agent's behavior.
Due to the rapid evolution of LLMs, Emporix cannot give a definitive recommendation on LLM choice.
However, in current experience with the Predefined Agents in the Emporix system, Anthropic and OpenAI models have produced satisfying results. When choosing the right model for your agents, rely on proper testing and thorough analysis.
Agent collaboration
Currently, Emporix Agentic AI supports a swarming collaboration model, which means AI agents can hand over a task one to another, creating a chain of agents equipped with the right information and context. This approach allows you to connect multiple agents to join forces and solve more complex issues, as each of them can bring their expertise and tools to work on a specific case.
Be mindful that the supervisor agent collaboration model is not supported. The agents can step in when properly triggered, but there is no supervision from a "master agent" to guard their operations.
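Conceptually, the swarming model is a chain of handoffs in which each agent enriches a shared context before passing it on. The following is a toy sketch of that flow, not the actual Emporix runtime; the agent functions and context keys are hypothetical:

```python
def classify_complaint(ctx):
    # First agent: adds a category to the shared context.
    ctx["category"] = "DELIVERY_TOO_LATE"
    return ctx

def resolve_delivery(ctx):
    # Next agent: relies on the context it received in the handoff.
    if ctx["category"] == "DELIVERY_TOO_LATE":
        ctx["resolution"] = "voucher_offered"
    return ctx

def run_chain(ctx, agents):
    """Swarming: each agent hands the enriched context to the next.
    There is no supervisor agent guarding the chain."""
    for agent in agents:
        ctx = agent(ctx)
    return ctx

result = run_chain({"complaint": "My order is two weeks late"},
                   [classify_complaint, resolve_delivery])
```

Because no supervisor checks the chain, each agent must be given the exact context and tools it needs at handoff time.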
Thorough testing
Any change in the agent configuration might impact the agent's performance and diligence. Make sure you cover the new configuration with proper testing, paying special attention to:
Modifications to the agent prompt - even the slightest changes influence how the agent responds.
Modifications to LLMs - changing a provider, model, temperature, or memory settings hugely impacts the behavior.
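The retesting advice above can be automated as a small regression harness that runs after every prompt, model, or temperature change. Here, `run_agent` is a hypothetical stand-in for your actual agent invocation, and the three-bullet contract is an example output check:

```python
def run_agent(prompt: str) -> str:
    # Hypothetical stub -- replace with your real agent invocation.
    return "- Theme one\n- Theme two\n- Theme three"

# Varied inputs covering different conversational styles and tones.
TEST_CASES = [
    "summarise pls!!",
    "Could you kindly provide a summary of the reviews below?",
    "reviews summary. bullets.",
]

def regression_suite():
    """Return the inputs whose outputs break the expected contract."""
    failures = []
    for case in TEST_CASES:
        output = run_agent(case)
        # Check the output contract, e.g. exactly three bullet points.
        if output.count("- ") != 3:
            failures.append(case)
    return failures

assert regression_suite() == []  # re-run after ANY configuration change
```

Checking a structural contract (bullet count, required fields) is more robust than comparing exact wording, which varies between runs.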
Custom agent creation and predefined agent use
To avoid unnecessary recreation and complexity, always use the Emporix Predefined Agents for similar use cases when possible. Not only are the Predefined Agents equipped with tools and security measures out of the box, they also have certain underlying steps (for example, context creation) handled programmatically by the Collaboration Agent that are not visible in the prompt. Using Predefined Agents makes the agents more deterministic and saves you time and resources - you only need to provide the necessary configuration.
Adapt the existing predefined agents through prompts when it is possible and sensible. When you want to extend an agent with new rules that do not require completely different behavior or automation, agents can be adapted by modifying their existing prompts. For example, say you want to modify the Complaint Agent so that it starts the collaboration process in an additional use case: when a customer complains about a missing product. Since the agent's reaction is already defined in its protocol, add a new `category` for `MISSING_PRODUCT` in the agent prompt, for instance:
<rules>
<rule>For `MISSING_INVOICE`: Send the invoice using the `send-invoice` tool.</rule>
<rule>For `INCORRECT_INVOICE`: Start the collaboration process using the handoff tool.</rule>
<rule>For `DELIVERY_TOO_LATE`: Start the collaboration process using the handoff tool.</rule>
<rule>For `PRODUCT_DAMAGED`: Start the collaboration process using the handoff tool.</rule>
<rule>For `MISSING_PRODUCT`: Start the collaboration process using the handoff tool.</rule>
<rule>For `OTHER`: Return information that no action has been performed.</rule>
</rules>
On the other hand, if you'd like to extend the Complaint Agent so that it initiates a refund process upon return, defining a new rule with a new category is one thing, but you must also make sure that the agent has the right tools in place to perform the desired actions. You'd need to configure the relevant MCP Server and the tools necessary for such an agent first.
Custom agents offer flexibility in their configuration but need explicit setup. You have full control over the triggers, the Language Model used (market-leading or custom-trained), the tools attached, and the specific MCP-based capabilities available. Make sure the custom agent is equipped with the necessary tokens and tools for its expected actions.
If the custom agent needs some background logic, for example creating a relevant context to pass to another tool or agent, you need to deliver that logic as well.