
πŸ§™ Exercise 1: Connect to Your Magical LLM

🎯 Exercise Overview

Welcome to your first spell in the Testus Patronus journey! In this exercise, you'll connect your Dify instance to a powerful Azure-hosted Large Language Model (LLM), which will become the core of your AI assistant.

✨ This is the foundation of your Retrieval-Augmented Generation (RAG) assistant. Without a working LLM, the magic won't flow.

πŸš€ What You'll Build

  • Connect to Azure-hosted GPT models
  • Configure both LLM and embedding models
  • Create your first AI chatbot
  • Test the complete workflow
AI Generation Process

πŸ“‹ Step-by-Step Checklist​

🎯 Exercise Checklist

⏱️ Estimated Time: 15-20 minutes | 🎯 Goal: Working AI chatbot with Azure LLM


πŸ› οΈ Step 1: Launch Your Dify Instance​

πŸš€ Launch Your Instance

Click the magic URL to summon your resources:

πŸ“‹ What You'll See​

You'll be redirected to a personalized portal with your credentials:

Dify Instance Portal
Your personalized Dify instance portal with credentials (click to expand)

πŸ“‹ This Page Contains:

  • Dify Instance URL - Your personal Dify dashboard
  • Azure LLM Credentials - For both GPT-3.5 and Embeddings
  • Session Reminder - Instance is ephemeral (save your work!)

🧠 Pro Tip: Keep this tab open during the session for easy copy/paste access to your credentials.


πŸ›‘οΈ Step 2: Dify Admin Account Setup​

πŸ“‹ Setup Process Overview

You'll need to create an admin account using the provided credentials. Follow these 3 steps:

πŸ”‘ Step 2.1: Copy Your Admin Credentials​

Copy Admin Credentials
πŸ“‹ Step 1: Copy your admin credentials (click to expand)

🌐 Step 2.2: Navigate to Your Dify Instance​

Next: Use the Dify instance URL from your credentials to navigate to your personalized Dify dashboard.

Dify Instance Navigate
🌐 Step 2: Navigate to your Dify Instance (click to expand)

πŸ‘€ Step 2.3: Create Your Admin Account​

Dify Admin Setup
πŸ‘€ Step 3: Create your admin account (click to expand)

🧠 Pro Tip: Use the provided credentials for your admin account - that way they're easy to remember and you'll have them handy throughout the session.


πŸ§ͺ Step 3: Log in to Dify​

Once your admin account is created, you'll be logged in to the Dify dashboard automatically.

πŸ—‚οΈ Explore Dify's Main Sections​

Dify Landing Page
Screenshot: Dify's main dashboard (click to expand)

🎯 Main Sections Overview

🎨 Studio: Design and manage your chatbots using visual blocks and workflows.

πŸ“š Knowledge: Upload documents your assistant can referenceβ€”perfect for product specs, requirements, and test cases.

πŸ”§ Tools: Access extra plugins, service integrations, and advanced settings.

βš™οΈ Settings: Manage model providers, keys, and other configuration options.

🎯 Next: Navigate to Settings​

Follow these steps to configure your LLM:
  1. Click on your user icon (top-right corner with your profile picture)
  2. Select "Settings" from the dropdown menu
  3. Navigate to "Model Provider" tab
Dify Account Menu
Step 1: Click on your user icon β†’ Settings (click to expand)
Dify Settings
Step 2: Dify Settings page - navigate to "Model Provider" tab (click to expand)

πŸ”— Step 4: Configure the Azure GPT LLM​

Now let's wire your Dify to use Azure-hosted GPT models.

1. Install Model Provider​

  1. Navigate to Settings β†’ Model Provider

    Important: Select the Azure OpenAI Service provider.

    Dify Settings Model Provider
    Screenshot: Model provider settings page
  2. Install Azure OpenAI Service

    Install Azure OpenAI
    Screenshot: Installing Azure OpenAI service

2. Add a GPT-3.5 LLM​

Use the credentials provided earlier to configure the model:

πŸ”§ Configuration Steps

Add models to your Azure OpenAI Service and configure using the provided credentials.

πŸ“Έ Add Model Interface

Add Model
Add models to your Azure OpenAI Service (click to expand)

πŸ“‹ Configuration Values

| Field | Value |
| --- | --- |
| Provider | Azure OpenAI |
| Model Name | gpt-35-turbo-16k |
| Endpoint | (paste endpoint URL) |
| API Key | (paste your key) |
| API Version | 2024-12-01-preview |
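Before pasting these values into Dify, you can sanity-check them outside the UI. The sketch below (Python standard library only) builds the same Azure OpenAI chat-completions request that Dify will send on your behalf. The endpoint, key, and deployment name are placeholders - substitute the values from your credentials page.

```python
import json
import urllib.request

def build_chat_request(endpoint, api_key, deployment, api_version, question):
    """Build (but do not send) an Azure OpenAI chat-completions request."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = json.dumps({
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 16,
    }).encode()
    return urllib.request.Request(
        url, data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )

# Placeholders -- replace with the endpoint/key from your credentials page.
req = build_chat_request(
    "https://YOUR-RESOURCE.openai.azure.com", "YOUR-API-KEY",
    "gpt-35-turbo-16k", "2024-12-01-preview", "Say hello",
)
print(req.full_url)
# Once real credentials are in place, send it with:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this request succeeds outside Dify, any error you see inside Dify is a configuration issue rather than a credentials issue.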

3. Add Embedding Model​

πŸ”§ Configuration Steps

Add models to your Azure OpenAI Service and configure using the provided credentials.

πŸ“Έ Add Model Interface

Add Model
Add models to your Azure OpenAI Service (click to expand)

πŸ“‹ Configuration Values

| Field | Value |
| --- | --- |
| Model Name | text-embedding-3-large |
| API Version | 2024-12-01-preview |
| Endpoint | (paste endpoint URL) |
| API Key | (paste your key) |
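The embedding deployment can be checked the same way. A minimal sketch, assuming the deployment is named after the model (text-embedding-3-large) and using placeholder credentials:

```python
import json
import urllib.request

def build_embedding_request(endpoint, api_key, api_version, text):
    """Build (but do not send) an Azure OpenAI embeddings request."""
    url = (f"{endpoint}/openai/deployments/text-embedding-3-large"
           f"/embeddings?api-version={api_version}")
    body = json.dumps({"input": text}).encode()
    return urllib.request.Request(
        url, data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )

# Placeholders -- replace with the endpoint/key from your credentials page.
req = build_embedding_request(
    "https://YOUR-RESOURCE.openai.azure.com", "YOUR-API-KEY",
    "2024-12-01-preview", "unit testing",
)
print(req.full_url)
# With real credentials, the response contains the embedding vector:
# with urllib.request.urlopen(req) as resp:
#     vector = json.load(resp)["data"][0]["embedding"]
```

This is the model Dify will later use to index your knowledge-base documents, so it's worth confirming it responds before Exercise 2.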

4. Configure System Model Settings​

πŸ”§ Configuration Steps

  • Click on the System Model Settings button

  • Select the gpt-35-turbo-16k model for reasoning and text-embedding-3-large for embedding

  • Click Save

  • We won't use the Rerank Model or Speech-to-Text features in this exercise.

    System Model Settings
    System model settings (click to expand)

πŸŽ‰ Success! Your LLM is Connected

You should now see both models configured in your Dify instance.

LLM Connected
Your configured models (click to expand)

πŸ€– Step 5: Create Your First Chatbot​

1. Create From Blank​

  1. Go to Studio β†’ Chatbot β†’ Create from Blank:

    Blank Chatbot
    Screenshot: Creating a new chatbot from scratch
  2. Select Workflow

  3. Give your bot a name and description

    Config Bot
    Screenshot: Configuring your chatbot

2. Setup Query Input​

πŸ”§ Input Configuration Steps

Configure the input field that will receive user questions for your chatbot.

  1. Click on the start block, then click the + button to add an Input Field

    Start Block
    Step 1: Click on the start block to add an input field (click to expand)
  2. Configure the input field:
    • Name it query
    • Set max length to 200
    • Click Save

    Add Input
    Step 2: Configure input field settings (click to expand)
    Start Config
    Step 3: Final start block configuration (click to expand)

3. Add LLM Block​

🧠 LLM Configuration Steps

Add and configure the LLM block that will process user queries and generate responses.

  1. Add the LLM block: Click + and select the LLM block from the available options

    Blocks
    Step 1: Select LLM block from available options (click to expand)
  2. Configure the LLM:
    • In the Context field, add the query variable
    • In the Prompt field, add a system prompt that includes both the query and context variables
  3. Add a System Prompt: This helps the AI understand how to respond

    πŸ’‘ Example System Prompt:

    "Answer in a clean, professional tone. Be concise but precise."

  4. Bind variables: Connect the prompt to query using the {x} selector

    LLM Config
    Step 2: Configure LLM settings and bind variables (click to expand)
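Once the variables are bound, the prompt can look roughly like this. The variable tokens are inserted via the {x} selector, so the exact syntax below is only illustrative - don't type them by hand:

```text
Answer in a clean, professional tone. Be concise but precise.

Context: {{#context#}}
Question: {{#start.query#}}
```

The key point is that the user's question and any retrieved context both reach the model inside a single prompt.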

4. Add End Block​

🏁 Final Configuration

Complete your workflow by adding an End block to output the LLM response.

  1. Add an End block: Click + and select the End block

  2. Configure the output: Create an output variable (e.g., text) linked to the LLM response

    End Config
    Configure the End block to output the LLM response (click to expand)

πŸ§ͺ Run and Debug​

πŸ§ͺ Testing Your Chatbot

Test your chatbot to ensure everything is working correctly and see how it processes queries.

πŸš€ Step 1: Run Your Chatbot​

  1. Click the "Run" button in the top-right corner of your workflow

  2. Input a test question and click "Start Run"

πŸ’‘ Try This Test Question:

"What is the difference between unit and integration testing?"

πŸŽ‰ Expected Results​

You should get a response from your magical assistant! The LLM will process your question and provide an answer.

Run Input
Testing your chatbot with a sample question (click to expand)

πŸ” Step 2: Debug and Trace​

Check the Tracing tab for a detailed breakdown of what your chatbot did:

Tracing
Trace the chatbot's response generation process (click to expand)

🧠 Pro Tip: The Tracing tab shows you exactly how your chatbot processed the input, including token usage and response generation steps.
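Beyond the Run panel, a published Dify chatflow can also be exercised over Dify's HTTP API. A minimal sketch, assuming you have published the app and copied its API key; the host, key, and user ID below are placeholders:

```python
import json
import urllib.request

def build_dify_chat_request(base_url, app_api_key, question):
    """Build (but do not send) a request to Dify's chat-messages API."""
    body = json.dumps({
        # 'inputs' carries the workflow input fields -- here the 'query'
        # field we defined on the start block.
        "inputs": {"query": question},
        "query": question,
        "response_mode": "blocking",
        "user": "workshop-user",  # any stable identifier for the end user
    }).encode()
    return urllib.request.Request(
        f"{base_url}/v1/chat-messages", data=body,
        headers={"Authorization": f"Bearer {app_api_key}",
                 "Content-Type": "application/json"},
    )

# Placeholders -- use your instance URL and the app's API key.
req = build_dify_chat_request(
    "https://YOUR-DIFY-HOST", "app-YOUR-API-KEY",
    "What is the difference between unit and integration testing?",
)
print(req.full_url)
# With a real host and key:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["answer"])
```

This is the same workflow you just tested in the UI, invoked programmatically - handy later when you want to script regression checks against your assistant.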


🎯 Exercise Complete! What's Next?​

πŸŽ‰ Congratulations!

You've successfully connected your LLM and created your first AI chatbot!

βœ… What You've Accomplished:

  • βœ… Connected to Azure-hosted GPT models
  • βœ… Configured both LLM and embedding models
  • βœ… Created your first AI chatbot
  • βœ… Tested the complete workflow

πŸš€ Ready for the Next Challenge?

In Exercise 2, you'll learn about:

  • πŸ“š Document chunking strategies for better RAG
  • πŸ”§ Uploading Jira issues and technical documentation
  • βš–οΈ Comparing different knowledge base approaches
  • 🧠 Preparing your chatbot for advanced RAG capabilities