Introduction
Search functionality is one of the most critical features of the modern web. It lets visitors quickly find the information they are looking for without having to work out how your content is organized. A robust search experience can increase engagement and conversion rates, improve the accessibility and usability of your website, and ultimately bring more traffic and revenue to your business.
On small websites such as blogs, personal sites, and small e-commerce stores, a search box powered by a simple JavaScript library like lunr.js is often enough. Libraries like these need very little to get off the ground: they are easy to configure and, very often, don't even need a backend. In fact, depending on the size of the website, it's possible to implement search with nothing more than equality checks and "fuzzy matching" algorithms, with no library at all.
The problem, however, is that these solutions don't scale. As the website grows, the amount of data to sift through becomes too much for a simple JavaScript library to handle: the search experience becomes slow, and the results become less relevant. That's where tools like Algolia, Elasticsearch, and Solr come in. They are designed to handle large amounts of data, and are far more versatile than lightweight JavaScript libraries.
These tools come with downsides of their own, however. Some aren't free (Algolia), and others require considerable expertise to set up, deploy, use effectively, and maintain (Elasticsearch). That kind of investment is usually out of reach for small or inexperienced teams and individual developers.
Meilisearch is a free, robust, open-source search engine designed for small-to-medium-sized projects. It is easy to set up, simple to use, and scales with the size of your project.
Among the features it offers are:
- Typo tolerance
- Highly-customizable search and ranking
- Synonyms
In this guide, we will cover how to set up search functionality for a fictional e-commerce website. We will explain how search works, outline best practices to consider, and show how to implement it using Meilisearch.
How Search Functionality Works
Consider a fictional e-commerce website that sells shoes. The website has a database of all the shoes it sells, and users should be able to browse through them and find whatever they are looking for. After interviewing customers, we find that users are most likely to search for shoes by name or by brand.
The website owners have collected relevant information about the shoes and stored them in a JSON file, but they are not programmers, so they have no idea how to import the data into a database, or how to make it searchable. They have instructed us to create a backend that will allow them to import the data, and find a way to expose the data so that they can fetch, create, update or delete data as they please. They also want to be able to search for shoes by name, brand, or any other relevant information.
After careful consideration of the project owners' requirements, we have decided to use a relational database to store the data. However, since a relational database isn't designed for fast, relevance-ranked full-text search, we will also need a search engine tailored for that job. That's where Meilisearch comes to the rescue. We will need to ensure that the database and the search engine stay in sync, so that the search results are always up to date.
Here is how it's going to work (a minimal code sketch follows the list):
- The website owners will import the data into the database using a simple form.
- The data will be stored in the database.
- The data will be duplicated in the search engine.
- The search engine will be queried whenever a user searches for a shoe.
- The search results will be displayed to the user.
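To make this flow concrete, here is a minimal, purely illustrative TypeScript sketch. The interfaces and function names are hypothetical stand-ins, not the implementation we build later in the guide:

```typescript
// Illustrative sketch only: every write hits the database first, the same
// record is then mirrored into the search engine, and searches query the
// search engine rather than the database.
interface Product {
    id: number;
    name: string;
    brand: string;
}

interface Database {
    save(product: Product): Promise<Product>;
}

interface SearchEngine {
    addDocuments(index: string, docs: Product[]): Promise<void>;
    search(index: string, query: string): Promise<Product[]>;
}

async function createProduct(db: Database, engine: SearchEngine, product: Product): Promise<Product> {
    const saved = await db.save(product);            // 1. store in the database
    await engine.addDocuments("products", [saved]);  // 2. duplicate into the search engine
    return saved;
}

async function searchProducts(engine: SearchEngine, query: string): Promise<Product[]> {
    return engine.search("products", query);         // 3. user searches hit the search engine
}
```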
Setting up an SQLite Database
For this project, we will be using a simple SQLite database to store our data. SQLite is a lightweight relational database that, compared to something like Postgres, is easy to set up and requires little expertise to use. In production you'll probably want something that scales better, but for our purposes SQLite is more than adequate.
Ensure you have SQLite installed on your machine. If you don't, you can install it with the following command (on Debian/Ubuntu-based systems):
sudo apt install sqlite3
To create a new database, run the following command:
sqlite3 db/database.db
Then, to view the list of databases:
.databases
Next, let’s create a table to store our products:
```sql
CREATE TABLE IF NOT EXISTS products(
    id INTEGER NOT NULL PRIMARY KEY,
    name TEXT NOT NULL,
    brand TEXT NOT NULL,
    imgURL TEXT,
    description TEXT NOT NULL,
    price INTEGER
);
```
Voila! Our database is now ready to use, and we can import our data into it. We could do this directly using SQL, but since we will be setting up an API to manipulate the data later, it makes more sense to implement this functionality using an ORM.
A Brief Introduction to Prisma
Prisma is an open-source, type-safe ORM (Object-Relational Mapping) tool designed to simplify database access and management in modern web applications. It provides a powerful and intuitive API that allows developers to interact with their databases easily and efficiently.
Out of the box, Prisma is type-safe. It generates a strongly-typed data access layer based on the database schema, which ensures that developers can avoid runtime errors that are common with dynamic ORM libraries. This also enables powerful tooling such as auto-completion and type-checking, making it easier for developers to write reliable and maintainable code.
It also works with a wide range of databases. Prisma supports both SQL databases (such as PostgreSQL, MySQL, SQLite, and SQL Server) and NoSQL (MongoDB), and provides a flexible and powerful query API that allows developers to write complex database queries without sacrificing performance.
To get started, we need to install Prisma along with Prisma Client, which we'll use to talk to the database from our code (SQLite support is built in, so no separate driver is needed):
npm install prisma @prisma/client
Then, we need to initialize Prisma:
npx prisma init --datasource-provider sqlite
This will create a new folder named "prisma" in the root directory of our project, containing a "schema.prisma" file, along with a .env file. The schema file is where we will define our database schema.
```prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "sqlite"
  url      = env("DATABASE_URL")
}
```
Now, we need to set up `DATABASE_URL` in our `.env` file:
DATABASE_URL="file:../db/database.db"
We can now pull the schema from our database into our `schema.prisma` file:
npx prisma db pull
This command will result in the following schema:
```prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "sqlite"
  url      = env("DATABASE_URL")
}

model products {
  id          Int     @id @default(autoincrement())
  name        String
  brand       String
  imgURL      String?
  description String
  price       Int?
}
```
The `products` model is automatically generated by Prisma based on the schema of the `products` table in our database.
If we chose to generate the database access objects at this point, our model inside the code would be named `products`, which isn't a very intuitive name. We can fix this by renaming the model to `Product` and adding the `@@map` attribute, which tells Prisma which table the model maps to:
```prisma
model Product {
  id          Int     @id @default(autoincrement())
  name        String
  brand       String
  imgURL      String?
  description String
  price       Int?

  @@map("products") // look for the table named "products" in the database
}
```
Finally, we can generate our database access objects:
npx prisma generate
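To see what this type safety looks like in practice, here is a minimal sketch (not part of the final API) that uses the generated client against the `Product` model defined above:

```typescript
// A quick type-safety check: `prisma.product` and its fields are generated
// from schema.prisma, so a misspelled field name or a wrong value type is
// caught at compile time rather than at runtime.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main() {
    // `product` is typed as `Product | null`
    const product = await prisma.product.findFirst({
        where: { brand: "Nike", price: { lte: 150 } },
    });
    console.log(product?.name);
}

main().finally(() => prisma.$disconnect());
```

If any of the field names or types were wrong, `tsc` would refuse to compile instead of failing at runtime.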
Creating a Basic CRUD API
Our project requires a basic API that will allow project owners to create, read, update and delete products.
One of the biggest downsides of using Node.js is that it's fairly bare-bones. It doesn't have a very comprehensive standard library, so a lot of features that other platforms offer out of the box need to be implemented manually. In addition, you'll be hard-pressed to find a built-in equivalent of Django's ORM or Rails' Active Record. For that reason, it's useful to know how to structure a Node.js project yourself.
In brief:
- Routes: Routes are responsible for handling requests and passing them on to the appropriate controller.
- Controllers: Controllers are responsible for handling requests and returning responses. They are the middleman between routes and complex “business logic”.
- Services: Services are responsible for handling complex “business logic”. They are the middleman between controllers and the database.
- Models: Models are responsible for database operations. They are the middleman between services and the database. One of the great things about Prisma is that it handles database access for us, so we don't really need to create models manually; depending on your use case, though, it may still be worthwhile to do so. One possible project layout reflecting this structure is sketched below.
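Based on the import paths used in the code later in this guide, here is that sketch; the exact folder names (such as the `src` directory) and file extensions are assumptions rather than requirements:

```
.
├── db/
│   ├── database.db           # SQLite database created earlier
│   └── data.json             # sample product data
├── prisma/
│   └── schema.prisma
└── src/
    ├── routes/
    │   └── Products.ts
    ├── controllers/
    │   └── ProductsController.ts
    └── services/
        ├── ProductService.ts
        ├── ISearchService.ts
        └── MsSearchService.ts
```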
First, let’s install some dependencies:
npm install meilisearch typescript express
You'll likely also want a couple of development dependencies, such as `@types/express` for Express's type definitions and `ts-node` to run TypeScript directly. Then, we need to initialize TypeScript, so we can have type safety in our project:
npx tsc --init
This produces the default recommended TypeScript configuration, which we can trim down and modify to suit our needs:
{ "compilerOptions": { "target": "es2016", "module": "commonjs", "esModuleInterop": true, "forceConsistentCasingInFileNames": true, "strict": true, "skipLibCheck": true } }
Next, we need a bare-bones version of our server. We will be using Express for this:
```typescript
import express from 'express';

const app = express();

app.use(express.json());
app.use(express.urlencoded({extended: false}));

app.get('/', (req, res) => {
    res.send('Hello world');
});

app.listen(1337, () => {
    console.log('Server started on port 1337');
});

export default app;
```
All this does for now is start a server on port 1337, and respond with a simple “Hello world” message when a user visits the root route.
Next, let’s create a controller that will allow clients to create, read, update and delete products.
```typescript
import {Request, Response} from "express";
import {Product} from "@prisma/client";
import {productService} from "../services/ProductService";

class ProductsController {
    // fetch all products
    async list(req: Request, res: Response) {
        return res.json(await productService.getProducts());
    }

    async getOne(req: Request, res: Response) {
        const product = await productService.getProduct(Number(req.params.id));
        if (!product) {
            return res.status(404).json({message: 'Product not found'});
        }
        return res.json(product);
    }

    async create(req: Request<{}, {}, Product[]>, res: Response) {
        await productService.createProducts(req.body);
        return res.status(201).json({message: 'Products created'});
    }

    async update(req: Request<{}, {}, Product>, res: Response) {
        await productService.updateProduct(req.body);
        return res.json({message: 'Product updated'});
    }

    async delete(req: Request, res: Response) {
        await productService.deleteProduct(Number(req.params.id));
        return res.json({message: 'Product deleted'});
    }
}

export const productsController = new ProductsController();
```
Notice that the controller is very simple. It doesn’t contain any business logic, and, for the sake of brevity, it doesn’t contain any error handling or validation. All it does is call the appropriate service method, and return the result. All the business logic takes place inside services.
Since we are using Prisma, we don’t need to manually create database models. Therefore, all database operations will take place inside the services.
```typescript
import {PrismaClient, Product} from "@prisma/client";

const prisma = new PrismaClient();

class ProductService {
    async createProduct(product: Product): Promise<Product> {
        return prisma.product.create({
            data: product
        });
    }

    async createProducts(products: Product[]): Promise<Product[]> {
        const created = [];
        for (const product of products) {
            created.push(await this.createProduct(product));
        }
        return created;
    }

    async updateProduct(product: Product): Promise<Product> {
        return prisma.product.update({
            where: {
                id: product.id
            },
            data: product
        });
    }

    async deleteProduct(id: number): Promise<Product> {
        return prisma.product.delete({
            where: {
                id: id
            }
        });
    }

    async getProducts(): Promise<Product[]> {
        return prisma.product.findMany();
    }

    async getProduct(id: number): Promise<Product | null> {
        return prisma.product.findUnique({
            where: {
                id: id
            }
        });
    }
}

export const productService = new ProductService();
```
To conclude this section, let’s create a route for our controller:
```typescript
import {Router} from "express";
import {productsController} from "../controllers/ProductsController";

class ProductRoutes {
    public router: Router = Router();

    constructor() {
        this.config();
    }

    config(): void {
        this.router.get('/', productsController.list);
        this.router.get('/:id', productsController.getOne);
        this.router.post('/', productsController.create);
        this.router.put('/:id', productsController.update);
        this.router.delete('/:id', productsController.delete);
    }
}

export const productRoutes = new ProductRoutes();
```
This route will be mounted on the `/products` path, so the full route for `getOne` will be `/products/1`, for instance.
Let’s implement those endpoints in the main file:
```typescript
import express from 'express';
import {productRoutes} from "../routes/Products";

const app = express();

app.use(express.json());
app.use(express.urlencoded({extended: false}));

app.use('/products', productRoutes.router);

app.listen(1337, () => {
    console.log('Server is running on port 1337');
});

export default app;
```
Now, the only thing left to do is set up Meilisearch and start indexing our products.
Setting up Meilisearch
Meilisearch stores data in the form of records referred to as “documents”, much like MongoDB. Documents are further grouped into collections referred to as “indexes”.
In our application, for instance, we will have an index referred to as "products". Every index must have a primary key, that is, a unique identifier for each document. In our case, we will use the product's ID as the primary key.
Individual products (henceforth referred to as “documents”) will look like so:
{ "id": "1", "name": "Nike Air Force 1", "brand": "Nike", "price": 100, "description": "The Nike Air Force 1 is a classic basketball shoe that has been around since 1982. It is one of the most popular shoes in the world, and is worn by many celebrities and athletes." }
To take advantage of the powerful capabilities Meilisearch offers, we need to set it up first. Let's start by defining an `ISearchService` interface and an `MsSearchService` class that implements it and handles all the Meilisearch-related functionality. This is a good practice to follow, as it lets us swap `MsSearchService` out for a different `ISearchService` implementation without having to change the rest of our codebase.
For now, we only need three methods:
- `createIndex` – creates a new index
- `addToIndex` – adds documents to an index
- `search` – searches an index
```typescript
import {SearchResponse} from "meilisearch";

export interface ISearchService {
    search(indexName: string, query: string): Promise<SearchResponse>;
    addToIndex(indexName: string, documents: any[]): Promise<void>;
    createIndex(name: string): Promise<void>;
}
```
Now, let’s create our `MsSearchService` class:
```typescript
import {Index, MeiliSearch, SearchResponse} from "meilisearch";
import {ISearchService} from "./ISearchService";

export class MsSearchService implements ISearchService {
    private client: MeiliSearch;

    constructor() {
        this.client = new MeiliSearch({
            host: 'http://localhost:7700',
        });
    }

    async createIndex(indexName: string): Promise<void> {
        await this.client.createIndex(indexName);
    }

    // this method will add documents to an index
    // if the index does not exist, it will create it first
    async addToIndex<T extends Record<string, any>>(indexName: string, documents: T[]): Promise<void> {
        // try to get the index
        try {
            const index: Index = await this.client.getIndex(indexName);
            await index.addDocuments(documents);
        } catch (e: any) {
            // if the index does not exist
            if (e.code === "index_not_found") {
                // create the index
                await this.createIndex(indexName);
                // add documents to the newly created index
                await this.addToIndex(indexName, documents);
                console.log("Index created and documents added");
            } else {
                console.error("Error adding documents to index", e);
            }
        }
    }

    async updateIndex<T extends Record<string, any>>(indexName: string, documents: T[]): Promise<void> {
        const index: Index = await this.client.getIndex(indexName);
        await index.updateDocuments(documents);
    }

    // remove a single document from an index by its primary key
    async deleteFromIndex(indexName: string, id: number): Promise<void> {
        const index: Index = await this.client.getIndex(indexName);
        await index.deleteDocument(id);
    }

    async search(indexName: string, query: string): Promise<SearchResponse> {
        const index: Index = await this.client.getIndex(indexName);
        return index.search(query);
    }
}
```
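Because callers can depend on the `ISearchService` interface rather than the concrete class, the engine can later be swapped out with minimal churn. Here is a hypothetical sketch of that idea; note that the `ProductService` in the next section simply instantiates `MsSearchService` directly, which is perfectly fine for a project of this size:

```typescript
// Hypothetical factory showing the benefit of coding against the interface:
// callers that depend only on ISearchService keep working if the concrete
// engine is replaced (for example, with an in-memory fake in tests).
import { ISearchService } from "./ISearchService";
import { MsSearchService } from "./MsSearchService";

export function buildSearchService(): ISearchService {
    // Swap this single line to change search engines.
    return new MsSearchService();
}
```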
Just like that, we are ready to use Meilisearch to power our search functionality. But first, we need some data to work with.
Adding products to MeiliSearch
At this point, we need to modify our service slightly, so that products are added to, updated in, and deleted from Meilisearch whenever the corresponding action is performed through the API.
```typescript
import {PrismaClient, Product} from "@prisma/client";
import {MsSearchService} from "./MsSearchService";
import {SearchResponse} from "meilisearch";

const prisma = new PrismaClient();
const ms = new MsSearchService();

class ProductService {
    async createProduct(product: Product): Promise<Product> {
        // store the product in the database first, then mirror it into the index
        const created = await prisma.product.create({
            data: product
        });
        await ms.addToIndex('products', [created]);
        return created;
    }

    async createProducts(products: Product[]): Promise<Product[]> {
        const created = [];
        for (const product of products) {
            // each call also indexes the product in Meilisearch
            created.push(await this.createProduct(product));
        }
        return created;
    }

    async updateProduct(product: Product): Promise<Product> {
        await ms.updateIndex('products', [product]);
        return prisma.product.update({
            where: {
                id: product.id
            },
            data: product
        });
    }

    async deleteProduct(id: number): Promise<Product> {
        await ms.deleteFromIndex('products', id);
        return prisma.product.delete({
            where: {
                id: id
            }
        });
    }

    async getProducts(): Promise<Product[]> {/*...*/}

    async getProduct(id: number): Promise<Product | null> {/*...*/}

    async searchProducts(query: string): Promise<SearchResponse> {
        return await ms.search('products', query);
    }
}

export const productService = new ProductService();
```
We now need to add the search functionality to our routes and controller. First, let's add it to the controller:
```typescript
class ProductsController {
    //...

    async search(req: Request, res: Response) {
        try {
            const query = req.query.q;
            if (!query) {
                return res.status(400).json({message: 'Invalid query'});
            }
            const results = await productService.searchProducts(query.toString());
            return res.json(results);
        } catch (e) {
            console.log(e);
            return res.status(500).json({message: 'Internal server error'});
        }
    }
}
```
Then, let's add the route to our routes file:
```typescript
import {Router} from "express";
import {productsController} from "../controllers/ProductsController";

class ProductRoutes {
    public router: Router = Router();

    constructor() {
        this.config();
    }

    config(): void {
        // Be careful to add the "/search" route before the "/:id" route
        this.router.get('/search', productsController.search); // <--- here
        this.router.get('/', productsController.list);
        this.router.get('/:id', productsController.getOne);
        this.router.post('/', productsController.create);
        this.router.put('/:id', productsController.update);
        this.router.delete('/:id', productsController.delete);
    }
}

export const productRoutes = new ProductRoutes();
```
Now, we can test our search functionality. To do that, we need to add some products to our database. Luckily, we already have a JSON file with some products in it; you can find it inside the `db` folder of the project.
Once we’ve made sure Meilisearch is running, we can use our API to add these products to our database.
```bash
curl 'http://localhost:1337/products' \
  -H 'Content-Type: application/json' \
  -d @db/data.json
```
Now, let’s try to search for a product:
```bash
curl --location --request GET 'http://localhost:1337/products/search?q=caramel' | jq '.'
```
If everything went well, you should see a response similar to this:
{ "hits": [ { "id": 19, "name": "Nike Air Force 1 Caramel Chocolate", "brand": "Nike", "imgURL": "https://i.postimg.cc/13vHKhWQ/Nike-Air-Force-1-Dark-Chocolate-DB4455-200-5.jpg", "description": "A delicious-looking colorway of the Nike Air Force 1, featuring a caramel and chocolate color scheme.", "price": 282 } ], "query": "caramel", "processingTimeMs": 0, "limit": 20, "offset": 0, "estimatedTotalHits": 1 }
Note: `jq` is a command-line JSON processor that we use to format the output from `curl`. You can install it by running `sudo apt install jq` on Linux.
Meilisearch also comes with a built-in search preview that lets us test the search functionality without having to build our own frontend. It can be accessed by visiting http://localhost:7700.
Feel free to explore the code on GitHub at your leisure.
Conclusion
In this guide, we've learned how to use Meilisearch to power search functionality, how to set up and structure a basic API using Express and TypeScript, and how to add, update, and delete documents in Meilisearch through that API.
If you want to learn more about MeiliSearch, you can check out their documentation.