I remember the exact moment I first felt the pain of a bad search. I was building a small e-commerce site for a friend. We had a few hundred products, each with a description, a name, and some tags. I used a regular database. When a customer typed “blue sneakers size 10”, the page took three seconds to load. Worse, if someone mistyped “sneekers”, they got nothing. My friend was losing sales. I needed something smarter, faster, and more forgiving.
That is why I began looking at Elasticsearch. It is not a database in the traditional sense. It is a search engine built on top of Lucene. It indexes data so that queries become cheap. And when you combine it with Spring Boot, you get a powerful tool that feels natural to any Java developer. Have you ever wondered why your own search feels so slow? The answer is often that you are asking a relational database to do something it was never designed for.
Relational databases are great at transactions, consistency, and structured queries. But they struggle with full‑text search. They cannot handle fuzzy matches well. They cannot rank results by relevance without a lot of custom logic. They cannot suggest completions as you type. Elasticsearch does all of these out of the box. It is like having a dedicated search assistant that lives right next to your main data store.
Now, how do you bring this into a Spring Boot application? Spring Data Elasticsearch is the bridge. It works much like Spring Data JPA, but instead of mapping entities to tables, you map them to indices. An index in Elasticsearch is roughly analogous to a database table. A document is like a row. And the mapping tells Elasticsearch how to treat each field – whether it should be searchable, whether it should be analyzed for full‑text, and so on.
Let me show you a simple example. Suppose you have a product. In your code, you create a POJO like this:
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
@Document(indexName = "products")
public class Product {

    @Id
    private String id;

    @Field(type = FieldType.Text)
    private String name;

    @Field(type = FieldType.Text)
    private String description;

    @Field(type = FieldType.Keyword)
    private String category;

    @Field(type = FieldType.Double)
    private double price;

    // getters and setters omitted for brevity
}
Notice the annotations. @Document tells Spring this class is stored in an index named “products”. @Field with FieldType.Text means the field will be analyzed – that is, broken into tokens for full-text search. FieldType.Keyword means the field is indexed as a single exact value – great for filtering and aggregations. This distinction is crucial: if you mark a field as Text when you only ever need exact matches, you pay the analysis overhead and exact-value filters on that field stop behaving predictably.
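For intuition, this class corresponds roughly to an index mapping like the one below. The exact JSON that Spring Data generates may differ in details, so treat this as a sketch rather than the literal output:

```json
{
  "mappings": {
    "properties": {
      "name":        { "type": "text" },
      "description": { "type": "text" },
      "category":    { "type": "keyword" },
      "price":       { "type": "double" }
    }
  }
}
```

The text fields go through an analyzer at index time; the keyword field is indexed verbatim, which is exactly what a category filter needs.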
Next, you need a repository. Spring Data Elasticsearch gives you an interface you can extend:
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import java.util.List;
public interface ProductRepository extends ElasticsearchRepository<Product, String> {

    List<Product> findByNameContaining(String name);

    List<Product> findByCategory(String category);
}
This feels exactly like JPA. But behind the scenes, Spring generates Elasticsearch queries. The method findByNameContaining is derived into a full-text query against the name field. You can also write custom queries using the @Query annotation, but for many cases the derived method names are enough.
Now you need to configure the connection. In your application.properties, you only need a few lines:
spring.elasticsearch.uris=http://localhost:9200
spring.data.elasticsearch.repositories.enabled=true
If you are running Elasticsearch locally with defaults, that is all. Spring Boot will auto‑configure the client. Then you can inject the repository into a service and use it.
Here is a simple service that indexes a new product and then searches for it:
import java.util.List;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    private final ProductRepository repository;

    public ProductService(ProductRepository repository) {
        this.repository = repository;
    }

    public Product addProduct(Product product) {
        return repository.save(product);
    }

    public List<Product> searchByName(String name) {
        return repository.findByNameContaining(name);
    }
}
You can test this by running a Spring Boot application and calling the service from a controller. But what makes this truly powerful is the ability to handle advanced queries. For instance, if a user types “sneekers”, you want to return results for “sneakers”. Elasticsearch can do fuzzy matching with almost no extra code.
You can add a custom query method using @Query:
import org.springframework.data.elasticsearch.annotations.Query;
public interface ProductRepository extends ElasticsearchRepository<Product, String> {

    @Query("{\"match\": {\"name\": {\"query\": \"?0\", \"fuzziness\": \"AUTO\" }}}")
    List<Product> fuzzySearchByName(String name);
}
This uses a raw JSON query. The fuzziness: "AUTO" tells Elasticsearch to correct minor typos. The result is that “sneekers” will find “sneakers”. Have you ever used a search that never found what you meant? That is the pain fuzzy matching removes.
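To see why AUTO catches “sneekers”, it helps to know the rule: AUTO allows 0 edits for terms of 1–2 characters, 1 edit for 3–5 characters, and 2 edits for anything longer. A quick standalone check (plain Java, no Elasticsearch involved) shows the typo is well within budget:

```java
public class FuzzyDemo {

    // Classic dynamic-programming Levenshtein (edit) distance.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // The edit budget that fuzziness "AUTO" grants, based on term length.
    static int autoBudget(String term) {
        if (term.length() <= 2) return 0;
        if (term.length() <= 5) return 1;
        return 2;
    }

    public static void main(String[] args) {
        String typed = "sneekers", indexed = "sneakers";
        int distance = levenshtein(typed, indexed); // one substitution: e -> a
        System.out.println(distance <= autoBudget(typed)); // prints "true"
    }
}
```

One substitution against a budget of two, so the match succeeds with room to spare.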
Another common need is autocomplete. When a user types “bla”, you want to suggest “black shoes”. Elasticsearch provides completion suggesters. You define a field with type Completion and then build a suggester query. Spring Data Elasticsearch supports this as well. You need to annotate a field with @CompletionField and then use the ElasticsearchOperations bean to run the suggest query.
Let me show you a quick snippet:
import org.springframework.data.elasticsearch.annotations.CompletionField;
import org.springframework.data.elasticsearch.core.suggest.Completion;
@Document(indexName = "products")
public class Product {

    // other fields...

    @CompletionField
    private Completion suggest;

    // getter and setter for suggest
}
Then, when you save a product, you populate the suggest object with an array of possible completions.
product.setSuggest(new Completion(new String[]{product.getName()}));
And to query suggestions:
import java.util.stream.Collectors;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.suggest.SuggestBuilder;
import org.elasticsearch.search.suggest.SuggestBuilders;
import org.elasticsearch.search.suggest.completion.CompletionSuggestion;
import org.springframework.data.elasticsearch.core.mapping.IndexCoordinates;

@Autowired
private ElasticsearchOperations operations;

public List<String> autocomplete(String prefix) {
    SuggestBuilder suggestBuilder = new SuggestBuilder()
            .addSuggestion("product-suggest",
                    SuggestBuilders.completionSuggestion("suggest")
                            .prefix(prefix)
                            .size(5));
    // suggest(...) is available on ElasticsearchOperations in Spring Data
    // Elasticsearch 4.x; the 5.x API differs, so check your version.
    SearchResponse response = operations.suggest(suggestBuilder, IndexCoordinates.of("products"));
    CompletionSuggestion suggestion = response.getSuggest().getSuggestion("product-suggest");
    return suggestion.getEntries().stream()
            .flatMap(entry -> entry.getOptions().stream())
            .map(option -> option.getText().string())
            .collect(Collectors.toList());
}
The code is a bit longer, but the concept is straightforward. Elasticsearch does the heavy lifting, compiling the completion field into an in‑memory finite‑state transducer (FST) for lightning‑fast prefix lookups.
One question that often comes up: how do you keep Elasticsearch in sync with your primary database? You do not want to manage two data stores by hand. The typical pattern is to write to your SQL database first, then index the same document into Elasticsearch, either in the same request or asynchronously. Spring gives you hooks to automate this: JPA lifecycle callbacks such as @PostPersist and @PostUpdate, or application events published after a save, can push the changed data to Elasticsearch. Alternatively, you can use a message queue to decouple the two.
I prefer a simpler approach: treat Elasticsearch as a read‑only copy for search purposes. Write to the database, then use a background task (e.g., @Scheduled or a @EventListener) to index new or updated records. This avoids accidental data loss if Elasticsearch goes down. The trade‑off is eventual consistency, but for search that is usually fine. After all, do you really need a product to appear in search results exactly one millisecond after it is saved? Probably not.
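A minimal sketch of that scheduled-reindex approach is below. The ProductJpaRepository, the updatedAt column, and the findByUpdatedAtAfter query method are assumptions for illustration; only ProductRepository comes from the example above:

```java
import java.time.Instant;
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ProductIndexSync {

    private final ProductJpaRepository jpaRepository;  // hypothetical JPA repository over the SQL table
    private final ProductRepository searchRepository;  // the Elasticsearch repository from above
    private Instant lastRun = Instant.EPOCH;

    public ProductIndexSync(ProductJpaRepository jpaRepository, ProductRepository searchRepository) {
        this.jpaRepository = jpaRepository;
        this.searchRepository = searchRepository;
    }

    // Every 30 seconds, push rows changed since the previous run into the index.
    @Scheduled(fixedDelay = 30_000)
    public void sync() {
        Instant now = Instant.now();
        List<Product> changed = jpaRepository.findByUpdatedAtAfter(lastRun); // assumed updatedAt column
        if (!changed.isEmpty()) {
            searchRepository.saveAll(changed); // bulk-indexes into Elasticsearch
        }
        lastRun = now;
    }
}
```

If Elasticsearch is down, the database writes still succeed and the next successful run catches up – exactly the eventual-consistency trade-off described above. Remember to put @EnableScheduling on a configuration class.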
Let me share a personal experience. I was working on a document management system for a law firm. The lawyers searched through thousands of contracts. We used PostgreSQL as the main store. Search took minutes. After moving to Elasticsearch with Spring Boot, the same queries took milliseconds. And we added features like highlighting the matching text in results. The lawyers were thrilled. They no longer had to guess exact phrases.
The integration also supports aggregations – think of them as SQL GROUP BY on steroids. You can compute counts, averages, histograms, and even geolocation buckets. For example, you can build a filter that shows how many products are in each category along with the average price. All of this is done in the same query, without pulling data into memory.
Here is how you build an aggregation with Spring Data Elasticsearch:
NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
queryBuilder.addAggregation(
    // "category" is already mapped as Keyword above, so no ".keyword" sub-field exists
    AggregationBuilders.terms("by_category").field("category")
);
SearchHits<Product> hits = elasticsearchOperations.search(queryBuilder.build(), Product.class);
// hits.getAggregations() carries the bucket data alongside the matching documents
The results include both the matching documents and the aggregation data. This is perfect for building filter sidebars in a web application.
One thing to watch out for: Elasticsearch is not a transaction manager. If you rely on it for critical data, you risk inconsistency. Use it only for search and analytics. Keep your authoritative data in a proper relational database or document store.
Now, let me talk about performance. Indexing can be expensive if you do too many small updates. Batch your saves. When you first import a large dataset, disable refresh and replicas temporarily, then re‑enable them after the bulk import. Spring Data Elasticsearch supports bulk operations:
List<Product> products = loadFromCsv();
repository.saveAll(products);
Under the hood, this uses the Bulk API, which is far faster than individual inserts.
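If the CSV holds hundreds of thousands of rows, it is worth splitting the list so that each saveAll call maps to a reasonably sized bulk request instead of one enormous one. A tiny standalone helper (the 1,000-row chunk size used in the comment is an arbitrary illustration to tune against your document size and heap):

```java
import java.util.ArrayList;
import java.util.List;

public class BulkChunks {

    // Split a list into fixed-size chunks; each chunk then becomes one Bulk API request.
    static <T> List<List<T>> chunks(List<T> items, int size) {
        List<List<T>> result = new ArrayList<>();
        for (int from = 0; from < items.size(); from += size) {
            result.add(items.subList(from, Math.min(from + size, items.size())));
        }
        return result;
    }

    public static void main(String[] args) {
        // With the repository in scope you would then write:
        //   for (List<Product> chunk : chunks(loadFromCsv(), 1_000)) repository.saveAll(chunk);
        System.out.println(chunks(List.of(1, 2, 3, 4, 5), 2)); // prints [[1, 2], [3, 4], [5]]
    }
}
```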
Have you ever seen an Elasticsearch cluster run out of memory or disk? That often happens when you index large fields that are not needed for search. By default, a field’s tokens go into the inverted index and the full original document is kept in _source, which is what gets returned to you. The @Field annotation’s store attribute additionally stores individual field values, which is rarely necessary; you can also exclude bulky fields from _source to save disk space.
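On the query side, you can also ask Elasticsearch to return only the fields a results page actually needs. A sketch against the Spring Data Elasticsearch 4.x API (the field names come from the Product example; verify the classes against the version you are running):

```java
// Return only name and price; leave the bulky description out of the response.
NativeSearchQuery query = new NativeSearchQueryBuilder()
        .withQuery(QueryBuilders.matchQuery("name", "sneakers"))
        .withSourceFilter(new FetchSourceFilter(
                new String[]{"name", "price"},   // includes
                new String[]{"description"}))    // excludes
        .build();
SearchHits<Product> hits = elasticsearchOperations.search(query, Product.class);
```

Smaller responses mean less network traffic and less JSON parsing per hit, which adds up on busy search pages.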
Finally, testing. Spring Boot no longer ships an embedded Elasticsearch server, so for integration tests you point spring.elasticsearch.uris at a real instance. I prefer Testcontainers because it gives you a real Elasticsearch node in a throwaway Docker container, without the hassle of running one manually.
@Testcontainers
@SpringBootTest
class ProductServiceTest {

    @Container
    static ElasticsearchContainer elasticsearch = new ElasticsearchContainer(
            "docker.elastic.co/elasticsearch/elasticsearch:7.17.0"
    );

    @DynamicPropertySource
    static void setProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.elasticsearch.uris", elasticsearch::getHttpHostAddress);
    }

    // tests...
}
This ensures your search logic works in a realistic environment.
So, why did I start thinking about this topic again? Because last week a colleague showed me a legacy system that performed searches by looping through all records in a MySQL table and using LIKE '%keyword%'. The application was slow, the users were unhappy, and the server was crying. I told him about Spring Boot and Elasticsearch. He implemented a prototype in two days. The difference was shocking.
Now I want you to experience that same feeling. If you have a Spring Boot application with disappointing search performance, consider adding Elasticsearch. It is not as hard as it sounds. Start small – index one table, write a few test queries, and see the speed. Your users will thank you.
If you found this article useful, I would appreciate it if you could like, share, and leave a comment below. Tell me about your own search struggles or successes. I read every response and often write follow‑ups based on real feedback. Let’s make search better together.