Wednesday, January 15, 2025

Juicebox recruits Amazon OpenSearch Service for improved talent search


This post is cowritten by Ishan Gupta, Co-Founder and Chief Technology Officer, Juicebox.

Juicebox is an AI-powered talent sourcing search engine that uses advanced natural language models to help recruiters identify the best candidates from a vast dataset of over 800 million profiles. At the core of this functionality is Amazon OpenSearch Service, which provides the backbone of Juicebox's search infrastructure, enabling a seamless combination of traditional full-text search methods with modern, cutting-edge semantic search capabilities.

In this post, we share how Juicebox uses OpenSearch Service for improved search.

Challenges in recruiting search

Recruiting search engines traditionally rely on simple Boolean or keyword-based searches. These methods aren't effective at capturing the nuance and intent behind complex queries, often producing large volumes of irrelevant results. Recruiters spend unnecessary time filtering through these results, a process that's both time-consuming and inefficient.

In addition, recruiting search engines often struggle to scale with large datasets, creating latency issues and performance bottlenecks as more data is indexed. At Juicebox, with a database growing to more than 1 billion documents and millions of profiles being searched per minute, we needed a solution that could not only handle large-scale data ingestion and querying, but also support contextual understanding of complex queries.

Solution overview

The following diagram illustrates the solution architecture.

OpenSearch Service securely unlocks real-time search, monitoring, and analysis of business and operational data for use cases like application monitoring, log analytics, observability, and website search. You send documents to OpenSearch Service and retrieve them with search queries that match text and vector embeddings for fast, relevant results.

At Juicebox, we solved five challenges with Amazon OpenSearch Service, which we discuss in the following sections.

Problem 1: High latency in candidate search

Initially, we faced significant delays in returning search results due to the scale of our dataset, especially for complex semantic queries that require deep contextual understanding. Other full-text search engines couldn't meet our requirements for speed or relevance when it came to understanding the recruiter intent behind each search.

Solution: BM25 for fast, accurate full-text search

The OpenSearch Service BM25 algorithm quickly proved invaluable, allowing Juicebox to optimize full-text search performance while maintaining accuracy. Through keyword relevance scoring, BM25 ranks profiles based on the likelihood that they match the recruiter's query. This optimization reduced our average query latency from around 700 milliseconds to 250 milliseconds, letting recruiters retrieve relevant profiles much faster than with our previous search implementation.

With BM25, we saw a nearly threefold reduction in latency for keyword-based searches, improving the overall search experience for our users.
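As a concrete illustration of the approach, the following sketch builds the kind of full-text query body that OpenSearch scores with BM25 by default. The index and field names (title, skills, summary) are placeholders, not Juicebox's actual schema.

```python
# Illustrative only: a full-text search body for candidate profiles.
# OpenSearch ranks the hits with BM25 relevance scoring by default.
def bm25_profile_query(text: str, size: int = 25) -> dict:
    """Build a multi_match query body across assumed profile fields."""
    return {
        "size": size,
        "query": {
            "multi_match": {
                "query": text,
                # "^2" boosts matches in the job title over other fields
                "fields": ["title^2", "skills", "summary"],
            }
        },
    }

# The body would be sent to the _search API, for example with opensearch-py:
# client.search(index="profiles", body=bm25_profile_query("NLP data scientist"))
```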

Problem 2: Matching intent, not just keywords

In recruiting, exact keyword matching can mean missing out on qualified candidates. A recruiter searching for "data scientists with NLP experience" might miss candidates with "machine learning" in their profiles, even though they have the right expertise.

Solution: k-NN-powered vector search for semantic understanding

To address this, Juicebox uses k-nearest neighbor (k-NN) vector search. Vector embeddings allow the system to understand the context behind recruiter queries and match candidates based on semantic meaning, not just keyword matches. We maintain a billion-scale vector search index capable of low-latency k-NN search, thanks to OpenSearch Service optimizations like product quantization. The neural search capability allowed us to build a Retrieval Augmented Generation (RAG) pipeline that embeds natural language queries before searching. OpenSearch Service lets us tune Hierarchical Navigable Small Worlds (HNSW) algorithm hyperparameters such as m, ef_search, and ef_construction, which enabled us to hit our latency, recall, and cost targets.

Semantic search, powered by k-NN, allowed us to surface 35% more relevant candidates compared to keyword-only searches for complex queries. These searches remained fast and accurate, with vectorized queries achieving 0.9+ recall.
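A minimal sketch of what such an index and query might look like follows. The vector dimension, field names, engine, and parameter values here are assumptions for illustration, not Juicebox's production settings; the HNSW hyperparameters (m, ef_construction, ef_search) are the ones discussed above.

```python
# Hypothetical k-NN vector index body with HNSW tuning parameters.
def knn_index_body(dim: int = 768) -> dict:
    return {
        "settings": {
            "index": {
                "knn": True,
                # ef_search: breadth of the graph traversal at query time
                "knn.algo_param.ef_search": 256,
            }
        },
        "mappings": {
            "properties": {
                "profile_vector": {
                    "type": "knn_vector",
                    "dimension": dim,
                    "method": {
                        "name": "hnsw",
                        "engine": "faiss",
                        "space_type": "l2",
                        # m: graph connectivity; ef_construction: build-time breadth
                        "parameters": {"m": 16, "ef_construction": 128},
                    },
                }
            }
        },
    }

def knn_query(vector: list[float], k: int = 50) -> dict:
    """Approximate k-NN query body against the vector field."""
    return {"size": k, "query": {"knn": {"profile_vector": {"vector": vector, "k": k}}}}
```

At query time, the recruiter's natural language query would first be embedded by the language model, and the resulting vector passed to knn_query.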

Problem 3: Difficulty in benchmarking machine learning models

Several key performance indicators (KPIs) measure the success of your search. When you use vector embeddings, you have many choices to make in selecting the model, fine-tuning it, and picking the hyperparameters to use. You need to benchmark your solution to make sure you're getting the right latency, cost, and especially accuracy. Benchmarking machine learning (ML) models for recall and performance is challenging due to the vast number of fast-evolving models available (see, for example, the MTEB leaderboard on Hugging Face). We faced difficulties in selecting and measuring models accurately while making sure we performed well across large-scale datasets.

Solution: Exact k-NN with scoring script in OpenSearch Service

Juicebox used the exact k-NN with scoring script feature to address these challenges. This feature allows for precise benchmarking by executing brute-force nearest neighbor searches and applying filters to a subset of vectors, making sure that recall metrics are accurate. Model testing was streamlined using the wide selection of pre-trained models and ML connectors (integrated with Amazon Bedrock and Amazon SageMaker) provided by OpenSearch Service. The flexibility of applying filtering and custom scoring scripts helped us evaluate multiple models across high-dimensional datasets with confidence.

Juicebox was able to measure model performance with fine-grained control, achieving 0.9+ recall. Using exact k-NN allowed Juicebox to benchmark quickly and reliably, even on billion-scale data, providing the confidence needed for model selection.
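The benchmarking pattern described above might be sketched as follows: an exact (brute-force) k-NN query using the OpenSearch knn_score script over a filtered subset, whose results serve as ground truth for computing recall of the approximate index. The field names and filter are illustrative assumptions.

```python
# Sketch of exact k-NN via script_score, the brute-force ground truth
# used when benchmarking an approximate (HNSW) index for recall.
def exact_knn_query(vector: list[float], k: int = 50) -> dict:
    return {
        "size": k,
        "query": {
            "script_score": {
                # Restrict brute-force scoring to a filtered subset of vectors.
                "query": {"bool": {"filter": {"term": {"country": "US"}}}},
                "script": {
                    "source": "knn_score",
                    "lang": "knn",
                    "params": {
                        "field": "profile_vector",
                        "query_value": vector,
                        "space_type": "l2",
                    },
                },
            }
        },
    }

def recall_at_k(exact_ids: list, approx_ids: list) -> float:
    """Fraction of ground-truth neighbors the approximate search recovered."""
    return len(set(exact_ids) & set(approx_ids)) / len(exact_ids)
```

Running the same query vector through both the exact and the approximate path, then comparing the returned document IDs with recall_at_k, is one simple way to verify a 0.9+ recall target.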

Problem 4: Lack of data-driven insights

Recruiters need to not only find candidates, but also gain insights into broader talent industry trends. Analyzing hundreds of millions of profiles to identify trends across skills, geographies, and industries was computationally intensive. Most other search engines that support full-text search or k-NN search didn't support aggregations.

Solution: Advanced aggregations with OpenSearch Service

The powerful aggregation features of OpenSearch Service allowed us to build Talent Insights, a feature that provides recruiters with actionable insights from aggregated data. By performing large-scale aggregations across millions of profiles, we identified key skills and hiring trends, and helped clients adjust their sourcing strategies.

Aggregation queries now run over 100 million profiles and return results in under 800 milliseconds, allowing recruiters to generate insights instantly.
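An aggregation of the kind that could power such a trends view is sketched below; the field names (skills.keyword, location.keyword) are invented for illustration rather than taken from Juicebox's schema.

```python
# Illustrative aggregation body: top skills overall, plus top skills per
# location, computed without returning any individual profile documents.
def skill_trends_agg(top_n: int = 20) -> dict:
    return {
        "size": 0,  # aggregation-only: skip the hit list entirely
        "aggs": {
            "top_skills": {
                "terms": {"field": "skills.keyword", "size": top_n}
            },
            "by_geo": {
                "terms": {"field": "location.keyword", "size": 10},
                # Nested sub-aggregation: leading skills within each location
                "aggs": {
                    "skills": {"terms": {"field": "skills.keyword", "size": 5}}
                },
            },
        },
    }
```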

Problem 5: Streamlining data ingestion and indexing

Juicebox ingests data continuously from multiple sources across the web, reaching terabytes of new data per month. We needed a robust data pipeline to ingest, index, and query this data at scale without performance degradation.

Solution: Scalable data ingestion with Amazon OpenSearch Ingestion pipelines

Using Amazon OpenSearch Ingestion, we implemented scalable pipelines. This allowed us to efficiently process and index hundreds of millions of profiles each month without worrying about pipeline failures or system bottlenecks. We used AWS Glue to preprocess data from multiple sources, chunk it for optimal processing, and feed it into our indexing pipeline.
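For a sense of the indexing side, the following simplified sketch builds an NDJSON payload for the OpenSearch _bulk API, the batch format that an ingestion pipeline ultimately writes into the cluster. This is not Juicebox's pipeline code; the index name and document shape are placeholders.

```python
import json

# Simplified sketch: assemble a _bulk request body for a batch of profiles.
# Each document takes two NDJSON lines: an action line, then the source.
def bulk_payload(index: str, docs: list[dict]) -> str:
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                           # document source
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline
```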

Conclusion

In this post, we shared how Juicebox uses OpenSearch Service for improved search. We can now index hundreds of millions of profiles per month, keeping our data fresh and up to date, while maintaining real-time availability for searches.


About the authors

Ishan Gupta is the Co-Founder and CTO of Juicebox, an AI-powered recruiting software startup backed by top Silicon Valley investors including Y Combinator, Nat Friedman, and Daniel Gross. He has built search products used by thousands of customers to recruit talent for their teams.

Jon Handler is the Director of Solutions Architecture for Search Services at Amazon Web Services, based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads for OpenSearch. Prior to joining AWS, Jon's career as a software developer included four years of coding a large-scale, eCommerce search engine. Jon holds a Bachelor of the Arts from the University of Pennsylvania, and a Master of Science and a Ph.D. in Computer Science and Artificial Intelligence from Northwestern University.
