Tidbits | April 21, 2025

How to Add Blazing Fast Search to Your Django Site with Meilisearch

by Lacey Henschel

TL;DR: Add powerful search to your Django site in an afternoon

Last year, I needed to add robust search functionality to a client's Django-powered surplus parts website. With approximately 70,000 items spanning hundreds of categories and subcategories, each with unique part numbers and extensive metadata, the site needed a powerful search solution. Surplus Sales sells all kinds of surplus equipment, making it critical that customers quickly find specific parts using various search terms.

Based on positive experiences with previous clients, we chose Meilisearch from the start. We knew how quickly we could implement it and how effectively we could refine search results with minimal tweaking. This approach paid off. Meilisearch delivered fast, typo-tolerant search that could handle all the complex metadata of our client's inventory.

The client's requirements were particularly challenging. Users needed to search by part numbers (and a single product might have several), product names, descriptions, and other technical specifications. They also needed to find results even when making typos or using abbreviations. All the more reason to use Meilisearch!

Let's walk through how to add Meilisearch to your own Django project.

Why Meilisearch for Your Django Project?

If you've ever tried to implement search in Django using just the ORM, you know it can quickly become a performance bottleneck. While Django's icontains lookups work for simple cases, they fall short when you need:

  • Fast response times on large datasets
  • Typo tolerance ("djagno" should still find "django")
  • Tighter control over relevance ranking
  • Filtering and faceting capabilities
  • Complex multi-field searches across related models
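
For contrast, here's roughly what the ORM-only approach looks like. This is a sketch using the Product model we'll define in Step 1, and it runs into every limitation above on a large catalog:

# A plain-ORM search: case-insensitive substring matching across a few fields.
# No typo tolerance, limited relevance control, and full table scans at scale.
from django.db.models import Q

from products.models import Product


def naive_search(query):
    return Product.objects.filter(
        Q(name__icontains=query)
        | Q(description__icontains=query)
        | Q(sku__icontains=query)
    )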

Meilisearch solves all these problems with a simple API and Python client that integrates easily with Django's model structure. The official documentation is excellent and comprehensive, but in this post, I'll focus specifically on integrating it with Django.

Meilisearch versus other search solutions

When comparing search solutions for Django projects, Meilisearch stands out for several reasons. Unlike PostgreSQL full-text search, which requires complex query construction and index management, Meilisearch provides a dedicated search API with minimal configuration. Compared to Elasticsearch, Meilisearch is significantly easier to set up, maintain, and scale for most Django applications. The day-to-day operation of Meilisearch is quite simple, with no need for cluster management or index optimization tasks that Elasticsearch can require. Now that it's been in production for a few months, the most maintenance I need to do is manually run a management command once in a while (and even that is pretty rare).

For many Revsys clients, Meilisearch hits the sweet spot between powerful features and developer-friendly implementation. You get advanced search capabilities without the operational complexity of heavier solutions. This makes it particularly well-suited for small to medium-sized teams that need robust search solutions they can implement quickly.

Step 0: Setting Up Meilisearch

Before we dive into our Django models and search schemas, let's set up Meilisearch.

Docker Configuration

The easiest way to run Meilisearch in your development environment is with Docker and Compose.

# compose.yml 
services:
  search:
    image: getmeili/meilisearch:v1.7
    volumes:
      - ./meili-data:/meili_data
    ports:
      - "7700:7700"

This configuration:

  • Uses the official Meilisearch Docker image (version 1.7)
  • Creates a persistent volume for your search data
  • Exposes the service on port 7700

The persistent volume is handy for local development. It ensures your search index survives container restarts and updates, so you won't need to rebuild your index every time you restart your containers.

To start Meilisearch, run:

docker compose up -d search

Django Configuration

Now let's configure our Django settings to work with Meilisearch. First, install the Python client:

pip install meilisearch

(Or add to your dependencies using whatever method you choose.)

Then add these settings to your Django project:

# settings.py
# (assumes an environment reader like django-environ is already configured as `env`)
SEARCH_API_URL = env("SEARCH_API_URL", default="http://search:7700/")
SEARCH_API_KEY = env("SEARCH_API_KEY", default=None)
SEARCH_INDEXES = {
    "main": "main_search",
    # You can define additional indexes here for different purposes
    # "autocomplete": "autocomplete_search",
    # "admin": "admin_search",
}
# Control whether we index the search or not
INDEX_SEARCH = env.bool("INDEX_SEARCH", default=True)

The SEARCH_INDEXES dictionary is particularly powerful. It allows you to define multiple indexes for different purposes. For example:

  • A "main" index for general site search
  • An "autocomplete" index optimized for quick suggestions with different ranking rules
  • An "admin" index that includes sensitive fields only staff should search

Each index can have different settings, ranking rules, and even different data schemas, all while drawing from the same Django models. This flexibility lets you optimize each search experience for its specific use case.

Pro-tip: That INDEX_SEARCH setting is particularly useful. During development and testing, you can set it to False to prevent your test data from being indexed. This keeps your search index clean with only real data. It's also helpful when running tests that create and destroy many model instances, as it prevents unnecessary indexing operations that would slow down your tests.
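
For example, you can switch indexing off in your test suite so model saves never touch Meilisearch. Here's a minimal sketch using Django's override_settings; it relies on the indexing mixin we'll build in Step 5, which checks this flag before writing to the index:

# products/tests.py (sketch)
from django.test import TestCase, override_settings

from products.models import Product


@override_settings(INDEX_SEARCH=False)
class ProductSearchIndexTests(TestCase):
    def test_saving_a_product_does_not_index_it(self):
        # With INDEX_SEARCH=False, the mixin's update_search() becomes a no-op
        Product.objects.create(name="Test part", sku="TEST-1", price=10)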

Now with Meilisearch running and configured, we're ready to start building our searchable models.

Step 1: Start with Your Django Models

Let's start by looking at a typical Django model that we want to make searchable:

# products/models.py
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=255)
    description = models.TextField(blank=True)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    sku = models.CharField(max_length=50, unique=True)
    brand = models.CharField(max_length=100, blank=True)
    active = models.BooleanField(default=True)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def get_absolute_url(self):
        return f"/products/{self.id}/"

    def __str__(self):
        return self.name

This is a standard Django model for a product. Nothing special here yet, but we'll soon make it searchable with Meilisearch.

The get_absolute_url() method is particularly important for search integration. When displaying search results, you'll need a URL to link to each result. Having this method on your model makes it easy to generate these links consistently.
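
If you'd rather not hard-code the path, the same method can use reverse() instead. A sketch, assuming a hypothetical "product-detail" URL name:

# products/models.py (inside the Product class)
from django.urls import reverse

def get_absolute_url(self):
    # Keeps search-result links in sync with your URLconf
    return reverse("product-detail", kwargs={"pk": self.pk})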

Before moving forward, consider what users will actually search for. For a product catalog, users might search by:

  • Product names
  • Brand names
  • SKUs or part numbers
  • Words in the description
  • Technical specifications
  • Categories or tags (which might be related models)

Understanding these search patterns helps you design an effective search schema in the next step.

Thinking About Your Search Schema Design

Before diving into code, let's take a step back and think conceptually about how to design an effective search schema. This is one of the most important decisions you'll make when implementing search because it determines what data is searchable and how your search results will be structured.

The Unified Search Approach

When implementing search across multiple Django models, you have two main approaches:

  1. Separate indexes - Create different indexes for each model type (products, categories, etc.)
  2. Unified schema - Create a single schema that can represent any searchable entity

For most Django sites, I recommend the unified approach. This allows users to search across all content types with a single query, providing a more intuitive experience. It also simplifies your frontend code since you only need to query one index.

The unified approach is particularly valuable when users don't necessarily know or care about the underlying data structure. For example, a user searching for "vacuum capacitors" might find it useful to get results that include products, categories, or blog posts about capacitors. They just want relevant information. A unified schema lets you return all these result types in a single search.

Using separate indexes can be useful when you have different needs for your customer-facing frontend and your admin side. For example, you would index your customer information in its own search index, and then ensure those results could only be read by your staff members. Separate indexes are also useful when different parts of your application have different search requirements or access patterns.

Mapping Different Models to a Common Schema

The key to a unified search schema is designing fields that can accommodate different models while maintaining their unique characteristics. Let's look at how this works with a concrete example, using a Product model and a Category model.

class Product(models.Model):
    name = models.CharField(max_length=255)
    sku = models.CharField(max_length=50, unique=True)
    description = models.TextField()
    price = models.DecimalField(max_digits=10, decimal_places=2)

class Category(models.Model):
    name = models.CharField(max_length=255)
    slug = models.SlugField(max_length=100)
    description = models.TextField()
    active = models.BooleanField(default=True)

Both have a name field, but their secondary fields differ. In our search schema, we map them like this, using model-specific field names to maintain the unique characteristics of each model type:

  • Schema title maps to Product name and Category name
  • Schema product_name maps to Product name and isn't used for Category
  • Schema category_name isn't used for Product and maps to Category name
  • Schema product_description maps to Product description and isn't used for Category
  • Schema category_description isn't used for Product and maps to Category description
  • Schema sku maps to Product sku and isn't used for Category
  • Schema type is set to "product" for Product and "category" for Category

This approach gives us a consistent way to display search results while preserving the unique aspects of each model. The title field serves as a common display field, while the model-specific fields allow for targeted searching within each model type.

This pattern scales well to more complex scenarios. For example, if you were building a university website, you might have models for Courses, Faculty, and Events. Your schema might include fields like:

  • title - The main display title for any result
  • subtitle - Which might be a course code for Courses, a department for Faculty, or a date for Events
  • type - "course", "faculty", or "event"
  • course_description, faculty_bio, event_details - Type-specific content fields
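
As a sketch, that university schema might look like this; the field names come straight from the list above and are purely illustrative:

# search/schemas.py (illustrative sketch for the university example)
from pydantic import BaseModel


class UniversitySearchSchema(BaseModel):
    id: str
    title: str                     # main display title for any result
    subtitle: str | None = None    # course code, department, or event date
    type: str                      # "course", "faculty", or "event"
    url: str

    # Type-specific content fields
    course_description: str | None = None
    faculty_bio: str | None = None
    event_details: str | None = None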

The key takeaway is that your search schema doesn't have to mirror your database schema. It should be designed specifically for search, with fields that make sense for how users will search and how you'll display results.

What to Include in Your Schema

When deciding which fields to include in your search schema, consider:

  1. What users search for: Include fields that contain terms users are likely to search for
  2. What helps with relevance: Fields that help determine if a result is relevant
  3. What's needed for display: Information needed to render search results
  4. What's needed for filtering: Fields users might want to filter on

For our client, we found that including part numbers, alternate part numbers, and technical specifications was crucial because many customers knew exactly what part they needed. We also included category information so users could find related parts even if they didn't know the exact part number.

Equally important is what to exclude:

  1. Large text fields: Unless they're critical for search, large text fields can bloat your index
  2. Sensitive information: Never include passwords, private notes, etc.
  3. Frequently changing data: Data that changes very often might be better queried directly from your database
  4. Derived data: If it can be calculated from other fields, consider computing it at display time

Multiple Indexes for Different Purposes

While a unified schema works well for general search, sometimes you need specialized indexes for specific features. For example:

  • A main index for general site search
  • An autocomplete index optimized for quick suggestions as users type
  • A products index with additional fields specific to product search
  • An admin index that includes fields only relevant to staff users

Each index can have different settings optimized for its specific use case, while still drawing from the same underlying data models.
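
For instance, an autocomplete index might search fewer fields and weight typo tolerance and exactness more heavily than attribute position. A hedged sketch (it assumes the commented-out "autocomplete" entry in SEARCH_INDEXES is enabled, and the ranking order shown is just one reasonable choice):

# sketch: giving an "autocomplete" index its own settings
import meilisearch
from django.conf import settings


def configure_autocomplete_index():
    client = meilisearch.Client(settings.SEARCH_API_URL, settings.SEARCH_API_KEY)
    index = client.index(settings.SEARCH_INDEXES["autocomplete"])
    index.update_settings({
        # Only the fields a suggestion box needs to match against
        "searchableAttributes": ["title", "sku"],
        # Meilisearch's default rules, reordered to favor typo-tolerant,
        # exact-ish matches on short prefix queries
        "rankingRules": ["words", "typo", "exactness", "proximity", "attribute", "sort"],
    })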

Step 2: Define Your Search Schema with Pydantic

Now that we understand the conceptual approach, let's define what data from our model should be searchable. This is where Pydantic comes in. It helps us create a clean, type-checked schema that will be sent to Meilisearch:

# search/schemas.py
from pydantic import BaseModel

class SearchSchema(BaseModel):
    """
    Pydantic model for our main search schema
    """
    id: str
    title: str | None = None
    type: str  # 'product', 'category', etc.
    url: str
    active: bool = True

    # Product fields
    product_name: str | None = None
    product_description: str | None = None
    brand_name: str | None = None
    sku: str | None = None

This schema defines exactly what will be stored in our Meilisearch index. The beauty of this approach is that we can:

  1. Have strict type checking for our search documents
  2. Control exactly which model fields get indexed for searching
  3. Clearly separate which fields belong to which model types

The fields in our schema serve different purposes:

  • id: A unique identifier for each document in the index. We'll prefix this with the model type to ensure uniqueness across different models.
  • title: A common display field used for all result types.
  • type: Identifies what kind of object this is (product, category, etc.). This is crucial for filtering and displaying results appropriately.
  • url: The link to the full object, used when a user clicks a search result.
  • active: Whether this item should appear in search results. This allows us to hide items without removing them from the index.

The model-specific fields (prefixed with product_ or later category_) allow us to search within specific model types while maintaining a unified schema.

Later, we'll add a second model to our search schema, demonstrating how this approach scales to multiple model types.

Why Pydantic?

Using Pydantic for this purpose offers several advantages over a simple dictionary:

  1. It provides type validation, ensuring that your search documents always have the expected structure. This catches errors early, before they cause problems in your search index.
  2. Pydantic's schema documentation capabilities make it easy to understand what data is being indexed.
  3. Pydantic models can be easily serialized to JSON, which is what Meilisearch expects.
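
A quick illustration of the first and third points. This is a sketch, and the exact error output depends on your Pydantic version:

# Pydantic catches malformed documents before they ever reach Meilisearch
from pydantic import ValidationError

from search.schemas import SearchSchema

# A well-formed document serializes cleanly for indexing
doc = SearchSchema(
    id="product-1",
    title="10kV Vacuum Capacitor",
    type="product",
    url="/products/1/",
).dict()

# A malformed document fails loudly instead of silently polluting the index
try:
    SearchSchema(title="Missing id, type, and url")
except ValidationError as exc:
    print(exc)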

Step 3: Connect Your Model to the Search Schema

Now let's add a method to our Product model that maps its fields to our search schema:

# products/models.py
from django.db import models
from search.schemas import SearchSchema

class Product(models.Model):
    # fields and methods as before

    def search_data(self):
        """Return data for the search index"""
        return SearchSchema(
            # prefix the ID with the "type" for the SearchSchema
            id=f"product-{self.id}",
            title=self.name,
            type="product",
            url=self.get_absolute_url(),
            active=self.active,
            product_name=self.name,
            product_description=self.description,
            brand_name=self.brand,
            sku=self.sku,
        )

We've added a search_data() method that transforms our Django model into the Pydantic schema. The id field uses a prefix to ensure uniqueness across different model types (which we'll need later when we add more models to our search index).

This method serves as the bridge between your Django model and your search index. It's responsible for:

  1. Selecting which model data should be searchable
  2. Transforming that data into the structure expected by your search schema
  3. Adding any computed or derived fields needed for search

The prefix in the id field (product-{self.id}) is particularly important. Without it, you might have ID collisions when indexing different model types. For example, both a Product with ID 1 and a Category with ID 1 would have the same ID in the search index. By adding the type prefix, we ensure each document has a truly unique identifier.

Notice how we map fields from our Django model to our search schema:

  • name becomes both title (the generic display field) and product_name (the product-specific field)
  • We include the active status to control visibility in search results
  • We use get_absolute_url() to generate the link for search results

Step 4: Indexing Our Search Data in Meilisearch

4.1 What is Search Indexing?

So what exactly is "indexing" in the context of search engines? Think of it like creating a highly optimized lookup table for your data. When we talk about indexing in Meilisearch, we're:

  • Extracting specific fields from our Django models (like names, descriptions, SKUs)
  • Transforming this data into a format optimized for search
  • Organizing it so Meilisearch can quickly find matching results when users search

Without indexing, searching would require scanning through every record in your database. Indexing creates that organization system, allowing Meilisearch to deliver results in milliseconds rather than seconds or minutes.

Our SearchSchema is how we tell Meilisearch which fields should be searchable (like product names and descriptions) and which should be filterable (like product types and whether they're active). These decisions shape how the index is structured and optimized behind the scenes.

4.2 Creating a Meilisearch Client Connection

Let's build our helper functions step by step, starting with establishing a connection to Meilisearch:

# search/utils.py
import meilisearch

from django.conf import settings


def get_meilisearch_client():
    """Return a meilisearch client"""
    url = settings.SEARCH_API_URL
    api_key = settings.SEARCH_API_KEY
    client = meilisearch.Client(url, api_key)
    return client

This function creates a connection to your Meilisearch server using the URL and API key from your Django settings. We'll use this in all our other functions to avoid repeating the connection code.

4.3 Managing Individual Objects in the Index

Next, we need a way to handle individual objects (like products in our inventory) when they're created, updated, or deleted:

def update_object(obj):
    """Update a single object in the search index or delete if inactive"""
    client = get_meilisearch_client()
    index_name = settings.SEARCH_INDEXES["main"]
    index = client.index(index_name)

    if obj["active"]:
        # Update the object in the index
        index.add_documents([obj])
    else:
        # Delete the object from the index
        index.delete_document(obj["id"])

This function handles adding, updating, or removing a single object in the search index. When you save a product, this function gets called to either:

  • Add/update the product in the index if it's active
  • Remove it from the index if it's inactive (like when a product is discontinued)

The conditional logic based on the active field is particularly useful. It automatically handles the case where items can be deactivated but not deleted from the database. By removing inactive items from the search index, you ensure users only find items they can actually purchase or access.

4.4 Building a Complete Search Index

Now for our workhorse function that handles populating the entire search index:

def build_main_indexes(verbose=False):
    """Build the main search index with all searchable models"""
    from products.models import Product  # Import your models

    client = get_meilisearch_client()
    index_name = settings.SEARCH_INDEXES["main"]
    index = client.index(index_name)

    # Configure the index settings
    index_settings = {
        "filterableAttributes": ["type", "active"],
        "searchableAttributes": [
            "title",
            "product_name",
            "product_description",
            "sku",
            "brand_name",
        ],
    }
    index.update_settings(index_settings)

    # Index all active products
    products = Product.objects.filter(active=True)
    if verbose:
        print(f"Indexing {products.count()} products...")

    product_docs = [product.search_data().dict() for product in products]
    if product_docs:
        index.add_documents(product_docs)

This function:

  • Configures which fields should be searchable (like product names and descriptions)
  • Defines which fields can be used for filtering (type and active status)
  • Fetches all active products from your database
  • Converts each product to its search schema format
  • Adds all products to Meilisearch in a single batch operation

The searchableAttributes setting is particularly important. It tells Meilisearch which fields should be included in the search index. Fields not listed here won't be searchable, even if they're included in your documents. This gives you fine-grained control over what's searchable and what's not.

Similarly, filterableAttributes defines which fields can be used for filtering results. In our case, we want to filter by type (to show only products or only categories) and by active status (to hide inactive items).

The optional verbose parameter lets you see progress output, which is helpful when indexing large datasets.

4.5 Complete Index Rebuilds

Finally, we need a "nuclear option" for when we want to start fresh:

def rebuild_main_indexes(verbose=False):
    """Completely rebuild the main search index"""
    client = get_meilisearch_client()
    index_name = settings.SEARCH_INDEXES["main"]

    # Delete the index if it exists
    try:
        client.delete_index(index_name)
    except meilisearch.errors.MeiliSearchApiError:
        pass

    # Create the index and build it
    client.create_index(index_name)
    build_main_indexes(verbose=verbose)

This function completely wipes and rebuilds your search index. You'll want to use this when:

  • You've made changes to your search schema
  • You want to reset your index after testing
  • Your index has gotten out of sync with your database
  • You've made significant changes to your ranking rules or index settings

When I first implemented this for our client, I was amazed at how quickly Meilisearch could index tens of thousands of products. Even with our full catalog of 70,000+ items, the initial indexing took only a couple of minutes.

Step 5: Creating a Mixin for Automatic Indexing

Now let's create a mixin that will automatically update the search index whenever a model is saved. This way, if a product's data changes, or the website administrator marks a product inactive, the search index updates and the search results for users remain accurate.

# search/mixins.py
from django.conf import settings
from search.utils import update_object

class UpdateSearchMixin:
    """
    Mixin to update the search index when a model is saved
    """
    def update_search(self):
        if settings.INDEX_SEARCH:
            update_object(self.search_data().dict())

    def save(self, *args, **kwargs):
        """ Override the save method so we can call update_search after save """
        # Call the original save method
        super().save(*args, **kwargs)
        # Update the search index
        self.update_search()

Now we can update our Product model to use this mixin:

# products/models.py
from django.db import models
from search.schemas import SearchSchema
from search.mixins import UpdateSearchMixin

# Add UpdateSearchMixin
class Product(UpdateSearchMixin, models.Model):
    ...

With this implementation, whenever a Product is saved:

  1. The save() method from our mixin is called
  2. It calls update_search(), which uses the search_data method on the model to update the search index
  3. The search_data() method transforms our Django model into the Pydantic schema
  4. The data is sent to Meilisearch in the exact format we've defined

This automatic indexing approach ensures your search index stays in sync with your database without requiring any additional code at the point of use. Once you've set up the mixin and applied it to your models, the indexing happens automatically whenever models are saved. This means your search results stay up-to-date.

For more complex scenarios, you might want to extend this mixin to handle bulk operations or to update related objects. For example, if changing a Category should update all its related Products in the search index, you could add that logic to the Category model's save method.
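
A sketch of that Category example; it assumes a ForeignKey from Product to Category with related_name="products", which isn't shown in the models above:

# products/models.py (sketch: re-indexing related products when a category changes)
class Category(UpdateSearchMixin, models.Model):
    # ... fields and search_data() as elsewhere ...

    def save(self, *args, **kwargs):
        # The mixin's save() indexes the category itself
        super().save(*args, **kwargs)
        # Then push each active product back to Meilisearch so its search
        # document reflects the updated category data
        for product in self.products.filter(active=True):
            product.update_search()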

One of the great advantages of Meilisearch compared to other search engines like Elasticsearch is that it handles document indexing asynchronously. When you send a document to Meilisearch, it quickly acknowledges receipt and then processes the indexing in the background. This means your Django save() method won't be slowed down waiting for indexing to complete, a common issue with other search backends where indexing can add seconds to each save operation.

This asynchronous approach eliminates the need for background task queues like Celery or Dramatiq just to handle search indexing, greatly simplifying your architecture.

Step 6: Creating Management Commands for Index Operations

While our automatic indexing handles day-to-day updates, we sometimes need more powerful tools for managing our search index. Django management commands are perfect for this.

Let's create a management command using django-click for re-indexing our search:

# search/management/commands/index_search.py
from __future__ import annotations

import djclick as click

from search.utils import rebuild_main_indexes


@click.command()
@click.option("--verbose", is_flag=True, default=False)
def main(verbose):
    """Index objects in Meilisearch."""
    click.secho("Rebuilding main indexes...", fg="green", nl=False)
    rebuild_main_indexes(verbose=verbose)
    click.echo("Indexes built.")

This simple command provides a convenient way to rebuild your search index from the command line.
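
Run it like any other management command:

python manage.py index_search --verbose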

For larger projects, we can enhance this with more options and batch processing for handling large datasets:

# search/management/commands/index_search.py
from __future__ import annotations

import time
import djclick as click
from django.conf import settings
from products.models import Product, Category
from search.utils import get_meilisearch_client

@click.command()
@click.option("--verbose", is_flag=True, default=False)
@click.option("--batch-size", default=1000, help="Batch size for indexing")
def main(verbose, batch_size):
    """Rebuild search indexes in Meilisearch with batch processing."""
    start_time = time.time()
    click.secho("Rebuilding search indexes...", fg="green")

    # Get client and recreate index
    client = get_meilisearch_client()
    index_name = settings.SEARCH_INDEXES["main"]

    # Delete and recreate index
    try:
        client.delete_index(index_name)
    except Exception:
        pass

    client.create_index(index_name)
    index = client.index(index_name)

    # Configure index settings
    click.echo("Configuring index settings...")
    index_settings = {
        "filterableAttributes": ["type", "active"],
        "searchableAttributes": [
            "title", "product_name", "product_description", 
            "sku", "brand_name", "category_name"
        ],
    }
    index.update_settings(index_settings)

    # Index products in batches
    products = Product.objects.filter(active=True)
    total_products = products.count()

    if verbose:
        click.echo(f"Indexing {total_products} products in batches of {batch_size}...")

    # Process in batches to avoid memory issues with large datasets
    batch_count = 0
    for i in range(0, total_products, batch_size):
        batch = products[i:i+batch_size]
        product_docs = [p.search_data().dict() for p in batch]
        if product_docs:
            index.add_documents(product_docs)

        batch_count += 1
        if verbose:
            click.echo(f"Processed batch {batch_count}, {min(i+batch_size, total_products)}/{total_products} products")

    # Index categories (typically smaller, so we do it all at once)
    categories = Category.objects.filter(active=True)
    category_docs = [cat.search_data().dict() for cat in categories]
    if category_docs:
        index.add_documents(category_docs)

    elapsed_time = time.time() - start_time
    click.secho(f"✓ Indexing completed in {elapsed_time:.2f} seconds", fg="green")

This enhanced command is what we used for our client's 70,000+ items. Processing records in chunks of 1,000 avoids the memory issues that come from loading all 70,000+ records at once, while still maintaining good indexing performance.

The entire indexing process for 70,000+ items took about 3-4 minutes, which is remarkably fast considering the volume of data. This is another area where Meilisearch shines compared to other search solutions. Its indexing performance is excellent even with large datasets.

This command is invaluable when:

  • You've made schema changes and need to rebuild the entire index
  • You're deploying to a new environment and need to populate the index
  • You suspect the index has gotten out of sync with your database
  • You're performing data migrations that affect searchable content
  • You want to update your index settings or ranking rules

The ability to easily and quickly rebuild your search index is one of the many reasons Meilisearch is so developer-friendly.

Step 7: Adding More Models to the Search Index

Now that we have the basic structure in place, we can easily add more models to our search index. Let's integrate a Category model:

# products/models.py
class Category(UpdateSearchMixin, models.Model):
    name = models.CharField(max_length=255)
    description = models.TextField(blank=True)
    active = models.BooleanField(default=True)

    def get_absolute_url(self):
        return f"/categories/{self.id}/"

    def search_data(self):
        return SearchSchema(
            id=f"category-{self.id}",
            title=self.name,
            type="category",
            url=self.get_absolute_url(),
            active=self.active,
            category_name=self.name,
            category_description=self.description,
            # assumes a ForeignKey from Product to Category with related_name="products"
            products_count=self.products.count(),
        )

And update our SearchSchema to include category fields:

# search/schemas.py
class SearchSchema(BaseModel):
    """
    Pydantic model for our main search schema
    """
    # other fields as before... 

    # Category fields
    category_name: str | None = None
    category_description: str | None = None
    products_count: int | None = None

Then update our build_main_indexes function to include categories:

def build_main_indexes(verbose=False):
    """Build the main search index with all searchable models"""
    from products.models import Product, Category  # Import your models

    client = get_meilisearch_client()
    index_name = settings.SEARCH_INDEXES["main"]
    index = client.index(index_name)

    # Configure the index settings
    index_settings = {
        "filterableAttributes": ["type", "active"],
        "searchableAttributes": [
            ...
            "category_name",
            "category_description",
        ],
    }
    index.update_settings(index_settings)

    # Index all active products as before... 

    # Index all active categories
    categories = Category.objects.filter(active=True)
    if verbose:
        print(f"Indexing {categories.count()} categories...")

    category_docs = [cat.search_data().dict() for cat in categories]
    if category_docs:
        index.add_documents(category_docs)

This pattern scales well to any number of models. Each model:

  1. Implements the UpdateSearchMixin
  2. Provides a search_data() method that returns a SearchSchema instance
  3. Sets appropriate values for common fields (id, title, type, url, active)
  4. Populates its model-specific fields in the schema

The beauty of this approach is that you can add new model types to your search index without changing your existing code. You just:

  1. Update your SearchSchema to include fields for the new model type
  2. Add the mixin and search_data() method to the new model
  3. Update your build_main_indexes function to include the new model

This extensibility is particularly valuable as your application grows. You might start with just products, then add categories, then blog posts, then user profiles, all without having to redesign your search architecture.

Step 8: Implementing the Search Frontend

Now let's create a simple view to search our index:

# search/views.py
from django.shortcuts import render
from django.conf import settings
from search.utils import get_meilisearch_client

def search(request):
    """Search view"""
    query = request.GET.get("q", "").strip()
    type_filter = request.GET.get("type", None)
    results = []

    if query:
        client = get_meilisearch_client()
        index = client.index(settings.SEARCH_INDEXES["main"])

        # Build filter string using your filterable attributes 
        filter_str = "active = true"
        if type_filter:
            # Quote the value so Meilisearch parses it as a string literal
            filter_str += f' AND type = "{type_filter}"'

        # Perform search
        search_results = index.search(
            query, 
            {
                "filter": filter_str,
                "limit": 50
            }
        )
        results = search_results["hits"]

    return render(
        request,
        "search/results.html",
        {
            "results": results,
            "query": query,
            "type_filter": type_filter
        },
    )

This view handles the search process:

  1. It extracts the search query and any filters from the request
  2. It connects to Meilisearch and performs the search
  3. It renders a template with the search results

The filter_str construction shows how to use Meilisearch's filtering: we always restrict results to active items, and optionally narrow by type when the user has selected one.

For more advanced search needs, Meilisearch supports additional parameters like:

  • attributesToHighlight: Highlight matching terms in results
  • facets: Return facet counts for filtered attributes
  • sort: Sort results by specific attributes
  • offset: Paginate through results
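
A sketch of a query that combines several of these options; the parameter names come from the Meilisearch search API, and sorting by title assumes you've added it to sortableAttributes:

# sketch: a more advanced query against the main index
search_results = index.search(
    query,
    {
        "filter": 'active = true AND type = "product"',
        "attributesToHighlight": ["title", "product_description"],
        "facets": ["type"],    # per-type counts, handy for building a filter UI
        "sort": ["title:asc"],
        "limit": 20,
        "offset": 20,          # second page of 20 results
    },
)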

Now let's create a simple template to display the results:

<!-- templates/search/results.html -->
{% extends "base.html" %}

{% block content %}
<div class="search-container">
    <!-- Search form -->
    <form method="get" action="{% url 'search' %}">
        <input type="text" name="q" value="{{ query }}" placeholder="Search...">
        <button type="submit">Search</button>
    </form>

    <!-- Results -->
    {% if query %}
        <h2>Results for "{{ query }}"</h2>
        {% if results %}
            <div class="search-results">
                {% for result in results %}
                    <div class="search-result">
                        <h3><a href="{{ result.url }}">{{ result.title }}</a></h3>
                    </div>
                {% endfor %}
            </div>
        {% else %}
            <p>No results found.</p>
        {% endif %}
    {% endif %}
</div>
{% endblock %}

This template includes:

  • A search form with the current query pre-filled
  • A results section that links each hit to its page via the url field; since every document carries a type field, you can branch on it here to show type-specific details

Don't forget to add the URL pattern:

# urls.py
from django.urls import path
from search.views import search

urlpatterns = [
    # ... other URLs
    path('search/', search, name='search'),
]

For a production application, you might want to enhance this with:

  1. Pagination: For handling large result sets
  2. Highlighting: To show users where their search terms matched
  3. Faceted search: To allow filtering by multiple attributes
  4. Autocomplete: To suggest search terms as users type
  5. Analytics: To track popular searches and improve your search experience

Meilisearch supports all these features through its API, making it easy to build a sophisticated search experience as your application grows.

See It In Action: Surplus Sales Case Study

Want to see what a production Meilisearch implementation looks like? Visit our client Surplus Sales and try out their search functionality. Pay special attention to:

  • Typo tolerance: Try searching for "trnasistors" (misspelled) and notice how it still finds what you need
  • Speed: The millisecond response times, even when filtering through their 70,000+ item catalog
  • Relevance: How the most relevant parts appear at the top of results
  • Complex metadata handling: Search by part numbers, specifications, or descriptions

Their implementation handles all the concepts we've covered in this tutorial. It's a great example of how these techniques scale to real-world applications.

Alternative Approach: Using django-meili

If you prefer a more Django-native approach with less custom code, you might want to check out django-meili. This package provides a more Haystack-like experience for integrating Meilisearch with Django. I haven't used it, but from what I can tell, these are the main differences between django-meili and a custom integration like the one described in this post:

  • Model-focused: django-meili is designed around indexing individual Django models, which works well if you're searching just one model at a time
  • Less flexible: Our unified schema approach gives you more control over how different models are represented in search results
  • Easier setup: django-meili requires less boilerplate code to get started
  • Django-native: It follows Django conventions more closely with a familiar API

django-meili might be a good choice if:

  • You're primarily searching within a single model type
  • You prefer a more Django-integrated approach with less custom code
  • You don't need the flexibility of a unified schema across different model types

Further Reading

  • The official Meilisearch documentation, which covers index settings, ranking rules, and the full search API in depth
  • django-meili, if you'd prefer a more Django-native integration with less custom code

Need help implementing advanced search for your Django project? Contact our team at Revsys for expert Django consulting. We've implemented search solutions for clients ranging from small startups to large enterprises, and we'd be happy to help you build a search experience that delights your users.

