AI Best Practices

Get up to speed with AI functionalities by learning the best practices and Emporix recommendations.

While AI Agents can be extremely helpful, proper configuration and setup are essential to achieving the expected results.

A few points for your consideration and attention:

Prompt engineering

Effective prompt construction is crucial, as it forms the core of the agent's definition, and agent behavior is highly sensitive to phrasing.

  • Be mindful of prompt sensitivity - even subtle changes in phrasing can confuse an agent or make it react in an unexpected way.

  • Use relevant prompt structure to ensure clarity. Define the purpose, steps, rules, and expected output. Provide examples for the agent to understand your case and input.

    • Put explicit instructions to avoid confusion about the desired behavior and steps.

    Example:

    Less effective:

    "Summarize these customer reviews."

    Better:

    "Summarize the following customer reviews as 3 concise bullet points highlighting common themes in product satisfaction.
    Text: """ {reviews} """
    • Be detailed about the agent's goal, format, tone, style, and constraints.

    Example:

    Less effective:

    "Write a product description."

    Better:

    "Write a 3-sentence, SEO-optimized product description for an eco-friendly water bottle. Use a friendly and trustworthy tone."
    • Explain the context or motivation; adding the “why” helps the model optimize the results.

    Example:

    Less effective:

    "Do not exaggerate claims in ad copy."

    Better:

    "This ad copy will appear on our website’s sustainability page, so avoid exaggeration and focus on verified eco-benefits."
  • Define the agent's specificity and control its output.

    • Show examples of the desired format.

    Example:

    Less effective:

    "Extract product attributes from these listings."

    Better:

    "Extract and format product attributes as follows:
    Brand: <brand_name>
    Price: <price>
    Features: <comma-separated list>
    Customer rating: <x/5>
    Text: """ {listing} """
    • Say what to do instead of what not to do.

    Example:

    Less effective:

    "Do not use slang in product copy."

    Better:

    "Write in clear, professional English suitable for business shoppers."
    • Reduce vague phrasing.

    Example:

    Less effective:

    "Make the product title not too long."

    Better:

    "Limit the product title to 70 characters and include brand, product type, and key feature."
  • Use the example-driven prompting strategy.

    • Provide examples of ideal outputs when zero-shot results are weak.

    Example:

    Less effective:

    "Generate marketing taglines for our new skincare line."

    Better:

    "Example:
    Product: Organic Face Serum → Tagline: 'Pure radiance, naturally powered.'
    Product: Charcoal Face Wash → Tagline: 'Detox deep, glow daily.'
    Product: {your_product} → Tagline:"
  • Set the relevant context and state.

    • Provide relevant business context: include brand values, audience, or campaign goals.

    Example:

    Less effective:

    "Write an email campaign."

    Better:

    "Write a 3-email campaign for a premium coffee subscription targeting busy professionals who value convenience and taste."
    • Track incremental work or iterations. For long tasks (for example, catalog cleanup or campaign planning), ask the model to summarize progress.

    Example:

    "After each step, summarize which product descriptions have been updated and which remain pending."
    • Use structured formats for progress or data.

    Example:

    "Store campaign performance metrics in JSON:
    { "campaign": "Summer Sale", "CTR": 0.08, "conversion": 0.02, "next_steps": "Test subject lines" }"
  • Define communication style and tone.

    • Match tone and format to the intended output; the style of your prompt influences the model’s response.

    Example:

    Less effective:

    "Create a return policy page."

    Better:

    "Write a return policy in clear, polite prose using short paragraphs and reassuring language suitable for e-commerce customers."
  • Control the model behavior and actions.

    • Be explicit about whether the model should act or advise.

    Example:

    Less effective (model might only suggest):

    "Can you improve our product titles?"

    Better:

    "Rewrite the following product titles to improve clarity and SEO performance."
    • Set the default to proactive or conservative mode when needed.

    Example:

    "<default_to_action> Revise the pricing descriptions directly unless told otherwise.</default_to_action>"
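The guidelines above (explicit structure, delimiters, few-shot examples) can also be combined programmatically when prompts are assembled in code. The following is a minimal, provider-agnostic sketch; the section names, the helper `build_prompt`, and the `{reviews}` placeholder are illustrative assumptions, not part of any Emporix API:

```python
def build_prompt(purpose, steps, rules, output_format, examples=(), text=""):
    """Assemble a structured prompt: purpose, numbered steps, rules,
    expected output, optional few-shot examples, and the input text
    wrapped in triple-quote delimiters."""
    parts = [f"Purpose: {purpose}", "Steps:"]
    parts += [f"  {i}. {step}" for i, step in enumerate(steps, 1)]
    parts.append("Rules:")
    parts += [f"  - {rule}" for rule in rules]
    parts.append(f"Expected output: {output_format}")
    for example in examples:                # few-shot examples, if any
        parts.append(f"Example: {example}")
    parts.append(f'Text: """ {text} """')   # delimit the untrusted input
    return "\n".join(parts)

prompt = build_prompt(
    purpose="Summarize customer reviews",
    steps=["Read all reviews", "Group common themes"],
    rules=["Be concise", "No slang"],
    output_format="3 bullet points on product satisfaction",
    text="{reviews}",
)
```

Keeping prompt assembly in one place like this makes it easier to apply the same structure consistently across agents and to review prompt changes before re-testing.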

For more information and hints on how to construct valid and performant prompts, refer to the documentation of the LLM provider you use.

Tool and capability management

Agents should only be equipped with the resources they strictly need.

  • Attach only the necessary tools to an agent to prevent security and performance issues. The agent should have access to exactly the MCP tools required for its tasks, nothing more and nothing less.

  • Determine the need for new tools when you modify the agent prompt. If an agent requires completely different behavior or automation, you must develop or configure new MCP tools and attach them to the agent. If the new functionality or use case can be handled by the existing toolset, only the prompt needs modification.

LLM Model recommendations and testing

Due to the dynamic nature of Large Language Models (LLMs), make a conscious decision about which LLM provider and model best suit your agent's purpose. Rigorous testing is mandatory for any LLM choice.

  • Consider closed-source and open-source LLMs with all their pros and cons, adapting the choice to your specific use case. Take into account that closed-source models might display degradation or collapse symptoms over time: an LLM can progressively become less accurate and return inconsistent, less relevant results as real-world data drifts away from what was included in the model training phase. Therefore, re-evaluate your specific LLM choice regularly.

  • It is essential to conduct thorough testing with varied inputs for grounded results. Feeding the LLM different inputs ensures the agent covers different users' conversational preferences and tones.

  • Adjust the LLM temperature to the agent's purpose: lower values suit strict tasks and reduce hallucination effects, while higher values allow more creative output.

  • Re-run agent tests after any change. Switching the model, adjusting the temperature, or even adding a sentence to the prompt requires re-running all tests, as each of these may impact the agent's behavior.
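One hedged way to keep temperature choices explicit and reviewable is to centralize them per task type. The task names and values below are illustrative assumptions to be tuned through your own testing, not Emporix defaults:

```python
# Illustrative temperature presets per task type (values are
# assumptions to be validated through testing, not recommendations).
TASK_TEMPERATURE = {
    "extraction": 0.0,   # strict, deterministic output
    "support": 0.3,      # mostly factual, slight variation allowed
    "copywriting": 0.8,  # creative marketing text
}

def temperature_for(task, default=0.2):
    """Look up the preset for a task, falling back to a safe default."""
    return TASK_TEMPERATURE.get(task, default)
```

Centralizing the presets means a temperature change is a single reviewable edit, which pairs naturally with the rule above about re-running all tests after any change.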


Agents collaboration

Currently, Emporix Agentic AI supports a supervisor-based collaboration model. In this approach, a dedicated supervisor agent oversees the workflow, delegates tasks to specialized agents, and manages how information and context are shared between them. This allows multiple agents to work together on more complex issues while keeping coordination centralized and controlled.

The supervisor decides which agent should act next, helping improve execution oversight, consistency, and traceability.

To keep collaboration predictable and efficient, agents are expected to follow a set of operating rules:

  • only one agent acts per turn

  • each step is planned before its execution

  • agents wait for results before continuing

  • results are accumulated across steps

  • repeated calls are avoided

  • the agreed plan is completed before moving on
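The operating rules above can be sketched as a simple supervisor loop. This is a conceptual toy with hypothetical `agents` callables and plan tuples; it does not reflect Emporix internals:

```python
def run_supervised(plan, agents):
    """Toy supervisor loop: one agent acts per planned step, the loop
    waits for each result before continuing, and results accumulate."""
    results = []
    done = set()                           # guard against repeated calls
    for step, agent_name in plan:          # plan agreed before execution
        if (step, agent_name) in done:
            continue                       # skip a repeated call
        result = agents[agent_name](step)  # only one agent acts per turn
        results.append(result)             # accumulate results across steps
        done.add((step, agent_name))
    return results                         # plan completed before moving on

# Hypothetical specialized agents for illustration only.
agents = {
    "order": lambda step: f"order agent handled: {step}",
    "refund": lambda step: f"refund agent handled: {step}",
}
plan = [("check order", "order"), ("issue refund", "refund")]
results = run_supervised(plan, agents)
```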


Thorough testing

Any change in the agent configuration might impact the agents' performance and diligence. Make sure you cover the new configuration with proper testing and, especially, pay attention to:

  • Modifications to the agent prompt - even the slightest changes influence how the agent responds.

  • Modifications to LLMs - changing the provider, model, temperature, or memory settings hugely impacts the agent's behavior.

Custom agent creation and predefined agent use

  • Adapt the existing predefined agents through prompts when it's possible and sensible. When you want to extend an agent with new rules that do not require completely different behavior or automation, you can adapt it by modifying its existing prompt. For example, suppose you want to modify the Complaint Agent so that it also starts the collaboration process when a customer complains about a missing product. Since the agent's reaction is already defined in its protocol, it is enough to add a new MISSING_PRODUCT category to the agent's prompt.
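For instance, the added rule might look like the following; the surrounding category names and exact wording are hypothetical and should follow the conventions already used in your agent's prompt:

```
Complaint categories:
- MISSING_PRODUCT: the customer reports that a product is missing
  from their delivery. Start the collaboration process.
```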

On the other hand, if you'd like to extend the Complaint Agent so that it initiates a refund process upon return, defining a new rule with a new category is only half the work; you also need to make sure the agent has the right tools in place to perform the desired actions. You would first need to configure the relevant MCP Server and the tools necessary for such an agent.

  • Custom agents offer you flexibility in their configuration but need explicit setup. You have full control over the triggers, the Language Model used - market-leading or custom-trained, the tools attached, or the specific MCP-based capabilities available. Make sure the custom agent is equipped with the necessary tokens and tools for its expected actions.

  • If the custom agent needs some background logic — for example, creating a relevant context to pass to another tool or agent — you need to deliver that logic as well.
