New search hint service

Search hints are great. How often do we type a full site address into the address bar, or a product's full name into an online store's search box? For short queries, typing a few characters is usually enough, provided the search hints are good. And unless you have twenty fingers and incredible typing speed, you will surely use them.
In this article we will talk about our new search hint service, which we built during the latest run of the School of Programmers.

The old service had a number of problems:

In the new service, we fixed these shortcomings (while adding new ones).

Dictionary of popular queries

When there are no hints at all, you can manually select the top-N user queries and generate hints from them by exact word matching (with or without regard to word order). This is a good option: it is easy to implement, gives good hint accuracy, and has no performance problems. Our suggest service worked that way for a long time, but a significant drawback of this approach is insufficient completeness of results.

For example, the query “javascript developer” did not make it into such a list, so when a user starts typing it we have nothing to show. And if we complete the query using only its last word, we will show “javascript handyman” in first place instead. For the same reason, it is impossible to implement error correction any more sophisticated than the standard approach of finding the closest words by Damerau-Levenshtein distance.

Language model

Another approach is to learn to estimate query probabilities and to generate the most probable continuations of a user's query. For this, language models are used: probability distributions over sequences of words.


Since user queries are mostly short, we did not even try neural language models and limited ourselves to n-gram models:

P(w_1 \dots w_m) = \prod_{i=1}^{m} P(w_i \mid w_1 \dots w_{i-1}) \approx \prod_{i=1}^{m} P(w_i \mid w_{i-(n-1)} \dots w_{i-1})

As the simplest model, we can take the statistical definition of probability, then

P(w_i \mid w_1 \dots w_{i-1}) = \frac{count(w_1 \dots w_i)}{count(w_1 \dots w_{i-1})}
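This count-based estimate can be sketched in a few lines (the query corpus below is a made-up toy example, not our real query log):

```python
from collections import Counter

queries = [
    "java developer",
    "java developer remote",
    "junior java developer",
    "javascript developer",
]

# count all n-grams (here up to trigrams) over the tokenized queries
ngram_counts = Counter()
for q in queries:
    words = q.split()
    for n in range(1, 4):
        for i in range(len(words) - n + 1):
            ngram_counts[tuple(words[i:i + n])] += 1

def p(word, context):
    """MLE estimate P(word | context) = count(context + word) / count(context)."""
    if ngram_counts[context] == 0:
        return 0.0
    return ngram_counts[context + (word,)] / ngram_counts[context]

print(p("developer", ("java",)))  # -> 1.0: every "java" in this corpus is followed by "developer"
```

As the next paragraph shows, the obvious weakness is that any n-gram absent from the corpus gets probability exactly zero.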

However, such a model cannot evaluate queries that were absent from our sample: if we never observed the query “junior developer java”, then it turns out that

P(\text{junior developer java}) = \frac{count(\text{junior developer java})}{count(\text{junior developer})} = 0

To solve this problem, various smoothing and interpolation models can be used. We chose backoff:

P_{bo}(w_n \mid w_1 \dots w_{n-1}) = \begin{cases} P(w_n \mid w_1 \dots w_{n-1}), & count(w_1 \dots w_n) > 0 \\ \alpha \, P_{bo}(w_n \mid w_2 \dots w_{n-1}), & count(w_1 \dots w_n) = 0 \end{cases}

\alpha = \frac{1 - \sum_{w} P(w \mid w_1 \dots w_{n-1})}{1 - \sum_{w} P_{bo}(w \mid w_2 \dots w_{n-1})}

where P is the smoothed probability of w_n given w_1 … w_{n-1} (we used Laplace smoothing):

P(w_n \mid w_1 \dots w_{n-1}) = \frac{count(w_1 \dots w_n) + \delta}{count(w_1 \dots w_{n-1}) + \delta |V|}

where V is our dictionary.
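A simplified sketch of Laplace smoothing with backoff (illustrative only: it uses a fixed backoff weight in the stupid-backoff style instead of the properly normalized coefficient, and a toy three-query corpus):

```python
from collections import Counter

ALPHA = 0.4   # fixed backoff weight; the real model computes a normalizing coefficient per context
DELTA = 1.0   # Laplace smoothing constant

ngram_counts = Counter()
vocab = set()

def train(queries, max_n=3):
    """Collect n-gram counts and the vocabulary from a list of queries."""
    for q in queries:
        words = q.split()
        vocab.update(words)
        for n in range(1, max_n + 1):
            for i in range(len(words) - n + 1):
                ngram_counts[tuple(words[i:i + n])] += 1

def p_laplace(word, context):
    """Laplace-smoothed P(word | context)."""
    return ((ngram_counts[context + (word,)] + DELTA)
            / (ngram_counts[context] + DELTA * len(vocab)))

def p_backoff(word, context):
    """Use the full context if the n-gram was seen, otherwise back off to a shorter one."""
    if not context or ngram_counts[context + (word,)] > 0:
        return p_laplace(word, context)
    return ALPHA * p_backoff(word, context[1:])

train(["junior java developer", "java developer", "junior developer"])
```

Unlike the raw count model, p_backoff never returns zero for a word from the vocabulary: unseen contexts degrade gracefully to shorter ones.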

Option Generation

So, we can estimate the probability of a particular query, but how do we generate those queries? It is reasonable to do the following: let the user enter a query w_1 … w_n; then the queries that suit us can be found from the condition

w_1 \dots w_m = \underset{w_{n+1} \dots w_m \in V}{\mathrm{argmax}} \, P(w_1 \dots w_n w_{n+1} \dots w_m)

Of course, enumerating all |V|^{m-n} combinations for m = 1 … M to select the best options for each incoming query is infeasible, so we use Beam Search. For our n-gram language model, it comes down to the following algorithm:

```python
def beam(initial, vocabulary):
    variants = [initial]
    for i in range(P):  # P - number of generation steps
        candidates = []
        for variant in variants:
            candidates.extend(generate_candidates(variant, vocabulary))
        variants = sorted(candidates, key=P, reverse=True)[:N]
    return variants

def generate_candidates(variant, vocabulary):
    top_terms = []
    # select the best continuation terms by models of order 1, 2, ..., n
    for n0 in range(n):
        # P(term, context) here denotes the conditional probability P(term | context)
        top_n = sorted(vocabulary, key=lambda term: P(term, variant[-n0:]),
                       reverse=True)[:N]
        top_terms.extend(top_n)
    candidates = [variant + [term] for term in top_terms]
    # keep the N most probable extended variants
    return sorted(candidates, key=P, reverse=True)[:N]
```

beam search

Here the nodes highlighted in green are the finally selected options; the number before a node w_n is the probability P(w_n | w_{n-1}), and the number after it is P(w_1 … w_n).

This is much better, but generate_candidates needs to return the N best terms for a given context quickly. If we store only the raw n-gram probabilities, we have to walk the entire dictionary, compute the probabilities of all possible phrases, and then sort them. Obviously, that is far too slow for online queries.

A trie for probabilities

To quickly obtain the N continuations of a phrase with the best conditional probability, we used a trie over terms. The node reached by w_1 → w_2 stores the backoff coefficient \alpha, the value P(w_2 | w_1), and a list of terms w_3 together with P(w_3 | w_1 w_2), sorted by the conditional probability P(\cdot | w_1 w_2). A special term eos marks the end of a phrase.
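A minimal dict-based sketch of such a trie (the node layout is our illustrative guess at an equivalent structure, not the service's actual code; it assumes each phrase is inserted once):

```python
class TrieNode:
    __slots__ = ("children", "alpha", "prob", "top_continuations")

    def __init__(self):
        self.children = {}           # term -> TrieNode
        self.alpha = 1.0             # backoff coefficient for this context
        self.prob = 0.0              # P(last term | preceding context)
        self.top_continuations = []  # [(term, cond_prob)], sorted by cond_prob desc

EOS = "<eos>"  # special term marking the end of a phrase

def insert(root, terms, probs):
    """Insert a phrase; probs[i] = P(terms[i] | terms[:i])."""
    node = root
    for term, prob in zip(terms, probs):
        node.top_continuations.append((term, prob))
        node.top_continuations.sort(key=lambda tp: tp[1], reverse=True)
        node = node.children.setdefault(term, TrieNode())
        node.prob = prob
    node.top_continuations.append((EOS, 1.0))

def best_continuations(root, context, n):
    """Top-n terms by conditional probability for the given context."""
    node = root
    for term in context:
        if term not in node.children:
            return []  # the real service would back off here using alpha
        node = node.children[term]
    return node.top_continuations[:n]

root = TrieNode()
insert(root, ["java", "developer"], [0.3, 0.9])
print(best_continuations(root, ["java"], 1))  # -> [('developer', 0.9)]
```

The key property is that each node keeps its continuations pre-sorted, so fetching the N best terms for a context is a single walk down the trie plus a slice.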

But there is a nuance

The algorithm described above assumes that all words in the query are complete. However, this is not true for the last word, which the user is typing right now; to continue the current word we would again have to walk the entire dictionary. To solve this, we use a character-level trie whose nodes store the M terms with the highest unigram probability. For example, this is what our trie for java, junior, jupyter and javascript looks like with M = 3:


Then, before starting Beam Search, we find the M best candidates for completing the current word w_n and select the N best candidates by P(w_1 … w_n).
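A sketch of such a character trie with top-M completions per node (illustrative; the heap-based pruning is our own choice, and the probabilities are made up):

```python
import heapq

M = 3  # number of completions stored per node

class CharNode:
    def __init__(self):
        self.children = {}  # char -> CharNode
        self.top = []       # up to M (unigram_prob, word) pairs, kept as a min-heap

def build_char_trie(word_probs):
    """word_probs: {word: unigram probability}."""
    root = CharNode()
    for word, prob in word_probs.items():
        node = root
        for ch in word:
            node = node.children.setdefault(ch, CharNode())
            # keep only the M most probable words passing through this node
            heapq.heappush(node.top, (prob, word))
            if len(node.top) > M:
                heapq.heappop(node.top)
    return root

def complete(root, prefix):
    """Return the stored completions of a prefix, most probable first."""
    node = root
    for ch in prefix:
        if ch not in node.children:
            return []
        node = node.children[ch]
    return [w for _, w in sorted(node.top, reverse=True)]

trie = build_char_trie({"java": 0.4, "javascript": 0.3, "junior": 0.2, "jupyter": 0.1})
print(complete(trie, "ja"))  # -> ['java', 'javascript']
```

With M = 3, the node for the prefix “j” keeps java, javascript and junior and drops jupyter, exactly as in the figure described above.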


Great: we have built a service that gives good hints for a user query, and it is even ready for new words. Everything would be fine... but users make typos and do not switch keyboard hfcrkflre (“hfcrkflre” is the Russian word for “layouts” typed without switching from the Latin layout).

How do we solve this? The first thing that comes to mind is to search for corrections among the closest options by Damerau-Levenshtein distance, defined as the minimum number of character insertions, deletions, substitutions, or transpositions of two adjacent characters needed to turn one string into another. Unfortunately, this distance does not account for the probability of a particular substitution. So for the typed word “sapper” the options “collector” and “welder” come out as equivalent, although intuitively the second word seems to be the intended one.
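For reference, the restricted Damerau-Levenshtein distance can be computed with the classic dynamic program:

```python
def damerau_levenshtein(a, b):
    """Restricted Damerau-Levenshtein distance: insertions, deletions,
    substitutions, plus transpositions of two adjacent characters."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(damerau_levenshtein("ca", "ac"))  # -> 1: a single transposition
```

Note that every edit costs exactly 1 regardless of how plausible it is, which is precisely the weakness discussed here.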

The second problem is that we do not take into account the context in which the error occurred. For example, in the query “order sapper” we should still prefer the option “collector” rather than “welder”.

If we approach typo correction from a probabilistic point of view, we quite naturally arrive at the noisy channel model:

  1. an alphabet \Sigma;
  2. the set \Sigma^* of all finite strings over it;
  3. the set of strings that are correct words, D \subseteq \Sigma^*;
  4. given distributions P(s|w), where s \in \Sigma^*, w \in D.

Then the correction task is formulated as finding the correct word w for an input s. Depending on the source of errors, the measure P can be constructed in different ways; in our case, it is reasonable to estimate the probabilities of typos (let's call them elementary replacements) P_e(t|r), where t and r are character n-grams, and then evaluate P(s|w) as the probability of obtaining s from w via the most probable elementary replacements.

Let Part_n(x) denote a partition of the string x into n substrings (possibly empty). The Brill-Moore model computes the probability P(s|w) as follows:

P(s \mid w) \approx \max_{R \in Part_n(w),\, T \in Part_n(s)} \prod_{i=1}^{n} P_e(T_i \mid R_i)

But we need to find P(w|s):

P(w \mid s) = \frac{P(s \mid w) P(w)}{P(s)} = const \cdot P(s \mid w) \cdot P(w)

By learning to evaluate P(w|s), we also solve the problem of ranking options with equal Damerau-Levenshtein distance, and we can take context into account when correcting a typo.
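This Bayes ranking can be sketched as follows (all probabilities below are invented toy numbers for illustration; the words mirror the “order sapper” example from earlier):

```python
# toy noisy-channel ranking: pick w maximizing P(s|w) * P(w)
p_lm = {"order collector": 0.006, "order welder": 0.001}  # P(w), language model
p_err = {                                                 # P(s|w), error model
    ("order sapper", "order collector"): 0.02,
    ("order sapper", "order welder"): 0.03,
}

def rank_corrections(s, candidates):
    """Sort candidate corrections by the noisy-channel score P(s|w) * P(w)."""
    return sorted(candidates,
                  key=lambda w: p_err.get((s, w), 0.0) * p_lm.get(w, 0.0),
                  reverse=True)

print(rank_corrections("order sapper", ["order welder", "order collector"]))
```

Even though the error model alone slightly prefers “welder”, the language-model factor P(w) makes “order collector” win: the context decides.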

Calculating P_e(T_i | R_i)

To compute the probabilities of elementary replacements, user queries help us once again: we compose pairs of words (s, w) such that

  1. the words are close in Damerau-Levenshtein distance;
  2. one of the words occurs N times more often than the other.

For such pairs, we compute the optimal Levenshtein alignment:

Then we compose all possible partitions of s and w (we limited ourselves to lengths n = 2, 3): n → n, pr → rn, pro → rn, ro → po, m → ε, mm → m, and so on. For each n-gram pair we compute

P_e(t \mid r) = \frac{count(r \to t)}{count(r)}
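A sketch of this estimate over already-aligned n-gram pairs (the replacement data below is hypothetical, loosely based on the partitions listed above):

```python
from collections import Counter

# hypothetical aligned replacement data: (r, t) means the n-gram r of the
# correct word was typed as t; collected from close query pairs (s, w)
replacements = [
    ("pr", "pr"), ("pr", "rn"), ("ro", "ro"), ("ro", "po"),
    ("m", "m"), ("m", "m"), ("mm", "m"),
]

pair_counts = Counter(replacements)
source_counts = Counter(r for r, _ in replacements)

def p_e(t, r):
    """P_e(t | r) = count(r -> t) / count(r)."""
    return pair_counts[(r, t)] / source_counts[r] if source_counts[r] else 0.0

print(p_e("po", "ro"))  # -> 0.5: "ro" was typed as "po" in 1 of its 2 occurrences
```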

Calculating P(s|w)

Computing P(s|w) directly takes O(2^{|w|+|s|}): we would need to enumerate all possible partitions of w together with all possible partitions of s. However, dynamic programming over prefixes gives the answer in O(|w| |s| n^2), where n is the maximum length of elementary replacements:

d[i, j] = \begin{cases}
d[0, j] = 0 & j \ge k \\
d[i, 0] = 0 & i \ge k \\
d[0, j] = P(s[0:j] \mid w[0]) & j < k \\
d[i, 0] = P(s[0] \mid w[0:i]) & i < k \\
d[i, j] = \underset{k, l \le n,\, k < i,\, l < j}{\max} \left( P(s[j-l:j] \mid w[i-k:i]) \cdot d[i-k-1, j-l-1] \right)
\end{cases}

Here P is the probability of the corresponding substitution in the k-gram model. If you look closely, this is very similar to the Wagner-Fischer algorithm with Ukkonen's cutoff. At every step we obtain P(w[0:i] | s[0:j]) by enumerating all corrections of w[i-k:i] to s[j-l:j] with k, l \le n and choosing the most probable one.
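A rough sketch of this prefix dynamic (an illustration under assumptions, not our production code: boundary handling is simplified relative to the formula, and toy_p_e uses made-up probabilities):

```python
def p_brill_moore(s, w, p_e, n=3):
    """Approximate P(s | w) as the best product of elementary replacement
    probabilities over simultaneous partitions of w and s."""
    # d[i][j] = best probability of producing the prefix s[:j] from w[:i]
    d = [[0.0] * (len(s) + 1) for _ in range(len(w) + 1)]
    d[0][0] = 1.0
    for i in range(len(w) + 1):
        for j in range(len(s) + 1):
            if d[i][j] == 0.0:
                continue
            # extend by one elementary replacement w[i:i+k] -> s[j:j+l]
            for k in range(n + 1):
                for l in range(n + 1):
                    if k == 0 and l == 0:
                        continue
                    if i + k <= len(w) and j + l <= len(s):
                        prob = p_e(s[j:j + l], w[i:i + k])
                        if prob > 0.0:
                            d[i + k][j + l] = max(d[i + k][j + l], d[i][j] * prob)
    return d[len(w)][len(s)]

def toy_p_e(t, r):
    """Made-up elementary replacement probabilities for illustration."""
    if t == r and len(t) == 1:
        return 0.9   # a character typed correctly
    if (r, t) == ("ro", "po"):
        return 0.05  # the 2-gram "ro" mistyped as "po"
    return 0.0

print(p_brill_moore("ro", "ro", toy_p_e))  # 0.9 * 0.9, i.e. about 0.81
```

Each cell is extended by at most (n+1)^2 replacements, which is where the O(|w| |s| n^2) bound comes from.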

Back to P(w|s)

So, we can compute P(s|w). Now we need to select several options w maximizing P(w|s). More precisely, for an original query s_1 s_2 … s_n we must choose w_1 … w_n for which P(w_1 … w_n | s_1 … s_n) is maximal. Unfortunately, an honest exhaustive selection did not fit our response-time requirements (and the project deadline was looming), so we settled on the following approach:

  1. from the original query, obtain several variants by changing the k last words:
    1. fix the keyboard layout if the resulting term is several times more probable than the original;
    2. find words whose Damerau-Levenshtein distance from the term does not exceed d;
    3. choose from them the top-N options by P(s|w);
  2. feed these variants into Beam Search along with the original query;
  3. when ranking the results, discount the obtained variants by \prod_{i=0}^{k-1} P(s_{n-i} | w_{n-i}).

For step 1.2 we used the FB-trie algorithm (forward and backward trie), based on fuzzy search in forward and reverse prefix trees. This turned out to be faster than evaluating P(s|w) over the whole dictionary.

Query Statistics

Building the language model is straightforward: we collect statistics on user queries (how many times a phrase was searched, by how many users, by how many registered users), split the queries into n-grams, and build the tries. The error model is harder: at a minimum, it needs a dictionary of correct words. As mentioned above, to select training pairs we used the assumption that such pairs should be close in Damerau-Levenshtein distance, with one occurring several times more often than the other.

But the data is still very noisy: XSS injection attempts, wrong keyboard layouts, random text from the clipboard, power users with queries like “programmer c not 1c”, and queries from a cat walking across the keyboard.

For example, what did you try to find by such a request?

Therefore, to clear the source data, we excluded:

We also corrected keyboard layouts and checked the words against vacancy texts and open dictionaries. Of course, we could not fix everything, but such variants are usually either cut off entirely or end up at the bottom of the list.

In production

Right before the project defense, we launched the service in production for internal testing, and a couple of days later for 20% of users. All user-facing changes go through our A/B testing system, which lets us not only verify the significance and quality of a change, but also find errors.


The average number of searches per applicant made from the suggestions grew from 0.959 to 1.1355, and the share of all search queries issued from the suggestions increased from 12.78% to 15.04%. Unfortunately, the main product metrics did not grow, but users have definitely started using the hints more often.

In the end

There was no room here for a story about the School's processes, the other models we tested, the tools we wrote for comparing models, or the meetings where we decided which features to develop in time for the intermediate demos. Watch the recordings of past School sessions, leave an application at , solve the interesting tasks, and come study. By the way, the task-checking service was also built by graduates of the previous cohort.

What to read?

  1. Introduction to Language Model
  2. Brill-Moore Model
  3. Fb-trie
  4. What happens to your search query

