I’ve been thinking about search a lot lately. Not the simple kind where you match a few words, but the smart kind you experience on well-built websites. The kind that feels like it understands what you’re looking for, even when you don’t type the perfect phrase. It’s something we all expect now, but building it can seem like a challenge reserved for large engineering teams. So, I wanted to show you how to build that experience yourself. Let’s talk about creating a powerful, intelligent search engine for your application using Spring Boot and Elasticsearch.
Think about the last time you searched for something online. You probably didn’t use perfect grammar. You might have used synonyms, or only part of a product name. A good search engine handles this. It’s not just about finding data; it’s about finding the right data, quickly. This is where Elasticsearch shines, and pairing it with Spring Boot makes the integration smooth and production-ready.
So, how do we start? First, we set up our project. We’ll need a few key dependencies. Add these to your pom.xml file.
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
```
Next, we need a running Elasticsearch instance. For local development, Docker is the easiest path. Save this as a docker-compose.yml file and run docker-compose up.
```yaml
version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.12.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
```
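With the container running, tell Spring Boot where to find it. The autoconfigured client reads the standard spring.elasticsearch.* properties, so a minimal application.yml is enough (adjust the URL if your cluster lives elsewhere; the timeout is optional):

```yaml
spring:
  elasticsearch:
    uris: http://localhost:9200
    connection-timeout: 5s  # optional: fail fast if the cluster is unreachable
```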
With the infrastructure ready, let’s define what we’re searching for. Imagine we’re building a product catalog. Our data isn’t just flat; it has names, descriptions, categories, and prices. We need to represent this as a document. In Spring Data Elasticsearch, we use the @Document annotation.
```java
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

import java.math.BigDecimal;

@Document(indexName = "product")
public class Product {

    @Id
    private String id;

    @Field(type = FieldType.Text, analyzer = "english")
    private String name;

    @Field(type = FieldType.Text, analyzer = "english")
    private String description;

    @Field(type = FieldType.Keyword)
    private String category;

    @Field(type = FieldType.Double)
    private BigDecimal price;

    // Constructors, getters, and setters
}
```
Did you notice the analyzer = "english" part? This is crucial. An analyzer processes text before it’s indexed. The english analyzer knows to stem words, removing suffixes like “ing” or “s”. This means a search for “running” will also find documents containing “run”. But what if the default analyzers aren’t quite right for your needs?
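You can watch an analyzer at work with Elasticsearch’s _analyze API. For example, against the local cluster (via curl or Kibana’s dev console):

```json
POST /_analyze
{
  "analyzer": "english",
  "text": "running shoes"
}
```

The response tokens come back as run and shoe, which is exactly why the stemmed forms match each other at search time.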
You can define your own. This is a step many miss. Say our product names include model numbers and codes that the stock analyzers mangle; a custom analyzer, declared in the index settings and referenced from the mapping, fixes that. Before any of that works, though, the application needs a client connection to the cluster. If the defaults suit you, Spring Boot’s autoconfiguration (driven by the spring.elasticsearch.uris property) is enough; for explicit control, define a configuration class.
```java
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.elc.ElasticsearchConfiguration;

@Configuration
public class ElasticsearchConfig extends ElasticsearchConfiguration {

    @Override
    public ClientConfiguration clientConfiguration() {
        // Connects to the single-node cluster from the docker-compose setup
        return ClientConfiguration.builder()
            .connectedTo("localhost:9200")
            .build();
    }
}
```
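One thing worth flagging: this class only wires the client. The custom analyzer itself belongs in the index settings. A common approach is a JSON settings file on the classpath, referenced from the entity via Spring Data’s @Setting annotation, e.g. @Setting(settingPath = "/elasticsearch/product-settings.json"). A sketch, where product_name_analyzer and the file path are illustrative names, not fixed ones:

```json
{
  "analysis": {
    "analyzer": {
      "product_name_analyzer": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": ["lowercase", "asciifolding"]
      }
    }
  }
}
```

You could then point the fields that hold model numbers and codes at it with @Field(type = FieldType.Text, analyzer = "product_name_analyzer").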
Now, for the core of our search: the repository. Spring Data Elasticsearch provides a powerful repository abstraction. We can create an interface that extends ElasticsearchRepository. This gives us basic CRUD operations for free. But the real power comes from defining custom query methods.
```java
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

import java.util.List;

public interface ProductRepository extends ElasticsearchRepository<Product, String> {

    List<Product> findByName(String name); // Basic match

    List<Product> findByNameContainingOrDescriptionContaining(String name, String description);
}
```
The method findByNameContainingOrDescriptionContaining is derived from its name. Spring Data understands this naming convention and creates the appropriate query. This is convenient for simple cases. However, for complex, fine-tuned searches, we need more control. This is where the @Query annotation or the Elasticsearch operations template comes in.
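To give a taste of the @Query option, here is a sketch of a hand-written query added to the repository (findByNameFuzzy is an invented method name; ?0 is replaced with the first method argument):

```java
import org.springframework.data.elasticsearch.annotations.Query;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

import java.util.List;

public interface ProductRepository extends ElasticsearchRepository<Product, String> {

    // ...derived query methods from before...

    // Fuzzy matching tolerates typos: "wireles" can still match "wireless"
    @Query("{\"match\": {\"name\": {\"query\": \"?0\", \"fuzziness\": \"AUTO\"}}}")
    List<Product> findByNameFuzzy(String name);
}
```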
Let’s build a more advanced search. Suppose a user searches for “wireless mouse”. We want to match that phrase across the name and description, but give more importance, or “weight”, to matches in the name. We also want to filter results to a specific category and sort them by price. How can we build that?
We can use Spring Data Elasticsearch’s NativeQuery builder to construct a detailed query programmatically. (Older tutorials use NativeSearchQueryBuilder; the code below targets the current API that ships with Spring Boot 3 and Elasticsearch 8.)
```java
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.elasticsearch.client.elc.NativeQuery;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.stereotype.Service;

@Service
public class ProductSearchService {

    private final ElasticsearchOperations elasticsearchOperations;

    public ProductSearchService(ElasticsearchOperations elasticsearchOperations) {
        this.elasticsearchOperations = elasticsearchOperations;
    }

    public SearchHits<Product> advancedSearch(String query, String category, int page, int size) {
        NativeQuery searchQuery = NativeQuery.builder()
            .withQuery(q -> q.bool(b -> b
                // must: contributes to the relevance score; name matches count double
                .must(m -> m.multiMatch(mm -> mm
                    .query(query)
                    .fields("name^2", "description")))
                // filter: narrows results without affecting the score
                // (assumes a non-null category; make it conditional if optional)
                .filter(f -> f.term(t -> t
                    .field("category")
                    .value(category)))))
            .withSort(Sort.by(Sort.Direction.ASC, "price"))
            .withPageable(PageRequest.of(page, size))
            .build();
        return elasticsearchOperations.search(searchQuery, Product.class);
    }
}
```
Look at "name^2". This boosts the name field by a factor of two relative to description, so a match in a product’s name scores higher than the same match buried in its description. The bool query lets us combine a must clause (which contributes to the relevance score) with a filter clause (which does not affect the score but efficiently narrows results, and which Elasticsearch can cache). This must-plus-filter structure is fundamental to building performant, relevant searches.
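When debugging relevance, it helps to see what the builder actually sends over the wire. In raw query DSL the request looks roughly like this (the search text and category value are illustrative):

```json
{
  "query": {
    "bool": {
      "must": [
        { "multi_match": { "query": "wireless mouse", "fields": ["name^2", "description"] } }
      ],
      "filter": [
        { "term": { "category": "electronics" } }
      ]
    }
  },
  "sort": [ { "price": "asc" } ]
}
```

You can paste this into Kibana’s dev console against the product index to inspect scores directly.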
What happens when you have thousands of results? You can’t send them all at once. Pagination is essential, and thankfully simple to add: build a Pageable from page and size parameters and pass it to the query builder.

```java
NativeQuery searchQuery = NativeQuery.builder()
    .withQuery(boolQuery) // the bool query built earlier
    .withSort(Sort.by(Sort.Direction.ASC, "price"))
    .withPageable(PageRequest.of(page, size)) // add this line; page is zero-based
    .build();
```
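Once results are paginated, clients usually want a total-page count alongside the hits; SearchHits.getTotalHits() supplies the numerator. The ceiling division is a classic off-by-one trap, so here it is in isolation as plain Java (the class and method names are my own):

```java
public class PageMath {

    // Ceiling division: 95 hits at 10 per page is 10 pages, 101 hits is 11
    public static int totalPages(long totalHits, int size) {
        if (size <= 0) {
            throw new IllegalArgumentException("page size must be positive");
        }
        return (int) ((totalHits + size - 1) / size);
    }
}
```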
Another powerful feature is aggregations. Ever see those filters on the left side of a shopping site showing “Brand” or “Size”? Those are often built with aggregations. They summarize your data into buckets. Let’s say we want to show a count of products per category in our search results.
```java
import co.elastic.clients.elasticsearch._types.aggregations.Aggregation;

// Inside your query-building logic
NativeQuery searchQuery = NativeQuery.builder()
    .withQuery(boolQuery) // the bool query built earlier
    .withAggregation("categories_agg", Aggregation.of(
        a -> a.terms(t -> t.field("category")))) // mapped as keyword, so no ".keyword" suffix needed
    .withPageable(PageRequest.of(page, size))
    .build();
```
The results will now include aggregation data, which you can extract and display to the user, allowing them to filter interactively. This turns a simple search box into a dynamic discovery tool.
Testing is critical. You don’t want to test against your production index. For integration tests, Testcontainers is a perfect tool. It can spin up a real, temporary Elasticsearch instance for your tests.
```java
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.elasticsearch.ElasticsearchContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
@SpringBootTest
class ProductSearchTest {

    @Container
    static ElasticsearchContainer elasticsearchContainer = new ElasticsearchContainer(
        DockerImageName.parse("docker.elastic.co/elasticsearch/elasticsearch:8.12.0")
    ).withEnv("xpack.security.enabled", "false"); // 8.x images enable security by default

    // Redirect the Spring Boot client to the container instead of localhost:9200
    @DynamicPropertySource
    static void elasticsearchProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.elasticsearch.uris", elasticsearchContainer::getHttpHostAddress);
    }

    @Test
    void shouldReturnRelevantProducts() {
        // Index a few products, then assert that a name match outranks
        // a description-only match
    }
}
```
Finally, let’s bring this to life in a controller. We expose a clean REST API that accepts search parameters and returns structured results, complete with hits, pagination info, and aggregations.
```java
import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/search")
public class SearchController {

    private final ProductSearchService searchService;

    public SearchController(ProductSearchService searchService) {
        this.searchService = searchService;
    }

    @GetMapping
    public SearchResult search(@RequestParam String q,
                               @RequestParam(required = false) String category,
                               @RequestParam(defaultValue = "0") int page,
                               @RequestParam(defaultValue = "10") int size) {
        SearchHits<Product> hits = searchService.advancedSearch(q, category, page, size);
        // Map the SearchHits object to a custom SearchResult DTO
        return mapToSearchResult(hits, page, size);
    }
}
```
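The SearchResult DTO and mapToSearchResult are deliberately left open above. One minimal shape they could take, as a sketch (the record name and fields are assumptions, not a required API):

```java
import java.util.List;
import org.springframework.data.elasticsearch.core.SearchHit;
import org.springframework.data.elasticsearch.core.SearchHits;

public record SearchResult(List<Product> items, long totalHits, int page, int size) {

    // Unwrap each SearchHit into its source document and carry the paging info along
    static SearchResult from(SearchHits<Product> hits, int page, int size) {
        List<Product> items = hits.getSearchHits().stream()
            .map(SearchHit::getContent)
            .toList();
        return new SearchResult(items, hits.getTotalHits(), page, size);
    }
}
```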
Building a great search feature can feel complex, but broken down step by step with Spring Boot and Elasticsearch, it becomes very manageable. We’ve gone from standing up the engine and defining our documents, to building weighted, filtered, and paginated queries. We even added aggregations for faceted search and talked about proper testing.
The difference between a basic search and a great one is in these details—the analyzers, the scoring, the filters. It’s what makes users feel understood. I encourage you to take this foundation and experiment. Try different analyzers. Adjust your field boosts. See how it changes the results.
I hope this guide helps you build something excellent. If you found it useful, please share it with others who might be facing similar challenges. Have you tried implementing a feature like this before? What was your biggest hurdle? Let me know in the comments below—I’d love to hear about your experiences and questions.