Search for Places Overview
One of the most popular categories of web search is the search for places.
Google provides a Places Search web service that allows searching for places by query or keyword and can easily be integrated into any application.
Its main advantage is that it can handle virtually any query the user provides. The service returns good results for place searches, but obtaining a license key is expensive.
There are several alternatives to Google Places Search that can also perform place searches:
The main issue with the three alternatives above is that the quality of their results is not always acceptable. For specific queries and certain categories of places, not every service provides good search results. It is important to note that result quality also depends on the search query itself.
Search Place Preprocessing
To obtain better results from place search services, the search query needs to be preprocessed: redundant address components must be removed from it.
Android ships with an embedded Geocoder that can perform forward and reverse geocoding. It helps build an algorithm for removing redundant address components from each search query.
Geocoder returns a list of addresses for each search query; the query is then compared with each address component, and matching components are removed from the query.
Artificial Intelligence Magic
But why does the Google service provide better results than the others? What magic does Google perform when it processes a search query?
Google has embedded an Artificial Intelligence (AI) algorithm in its search engine that helps it show the best search results, namely those the users expect to see.
Neural networks could be integrated into a place-search algorithm that uses alternative place-search services.
A neural network represents a function that produces an answer to a specific question.
The main task is to identify which parameters will be used as the function's inputs and what answer the function should provide.
After that, the neural network needs to be trained so that this function gives correct answers.
Let's review how this can be achieved in place-search logic. For example, the "Hello World" of machine learning answers the question "What digit is drawn in the picture?". The possible answers are the digits 0, 1, 2 … 9, and the input parameter is a picture.
For the results, an output layer with 10 nodes is used, one per digit. After processing the input image, the output layer produces values that represent the probability of a match for each digit.
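The text does not specify how those probability values are produced; a common choice is a softmax over the raw output-layer scores. A minimal sketch, assuming softmax and hypothetical raw scores:

```python
from math import exp

def softmax(scores):
    """Convert raw output-layer scores into probabilities that sum to 1."""
    m = max(scores)                      # shift for numerical stability
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from the 10 output nodes (one per digit 0-9)
raw_scores = [0.1, 0.2, 3.5, 0.3, 0.1, 0.2, 0.1, 0.4, 0.2, 0.1]
probabilities = softmax(raw_scores)
predicted_digit = probabilities.index(max(probabilities))  # digit 2 here
```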
How to apply AI for Place Search
It is not immediately clear, however, what function should be used to obtain the desired results for place search.
At first glance the question seems to be: "Which place from the list of all results best suits the current search query?". But there is no fixed set of answers for this type of question, because services return a different number of results on each search attempt. The key to solving the problem is to sort the result set so that the most suitable places appear at the top of the list. To perform such sorting, a method is required that compares two places and returns a result.
So the main question the neural network should answer is: "Given two places, which one is better for the current search query?". For this question the network can give one of two answers, the first place or the second one, so it can be configured with an output layer of only two nodes.
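With a two-node output, the network acts as a pairwise comparator, and the result list can be sorted with it. A minimal sketch, where a naive shared-words heuristic stands in for the trained network (the `compare_places` signature and the place format are assumptions for illustration):

```python
from functools import cmp_to_key

def _match_score(query, place):
    """Naive stand-in for the network: count words shared by query and name."""
    return len(set(query.lower().split()) & set(place["name"].lower().split()))

def compare_places(query, place_a, place_b):
    """Mimics the two-node network output: negative means place_a wins."""
    sa, sb = _match_score(query, place_a), _match_score(query, place_b)
    return (sa < sb) - (sa > sb)

def rank_places(query, places):
    """Sort places so the most suitable results come first."""
    return sorted(places, key=cmp_to_key(lambda a, b: compare_places(query, a, b)))

places = [
    {"name": "Central Pharmacy"},
    {"name": "Joe's Coffee House"},
    {"name": "Coffee Central"},
]
ranked = rank_places("central coffee", places)
```

In the real application the heuristic comparison would be replaced by a forward pass of the trained network over the two places' feature vectors.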
What are the input parameters of a neural network?
Each search provider returns a unique set of data for each place it finds, but there are common parameters returned by all providers: place name, category, location, address, rating, price level, working hours, and contacts. These parameters can be used both to preview found places as search results and to prepare the input for a neural network.
One of the main things affecting the outcome is the search query itself. Since the search query is a string, strings need to be compared in some way.
The following metrics can be used to compare two strings: edit distance (Levenshtein distance), longest common substring, longest common subsequence, the Tanimoto coefficient, and so on.
It is also possible to use additional heuristic techniques to create additional parameters for comparing strings.
The following pairs of strings can be compared: the search query with a place name, the preprocessed search query with a place name, the search query with a place category, and detected address components with a place's address components.
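Two of these metrics can be sketched with standard dynamic-programming implementations (function names here are illustrative):

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum single-character edits to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def lcs_length(a, b):
    """Length of the longest common subsequence of a and b."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]
```

Such raw values are typically normalized by string length before being fed to a network.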
Another important factor that affects the results is the user's location. Each place also has a location parameter, so the distance to each place can be calculated and compared.
The search query can also contain an additional search location represented as an address component (locality, street name, full address); the distance from that point can be used as well.
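The distance between the user (or the location extracted from the query) and a place can be computed from coordinates with the haversine formula. A minimal sketch, assuming coordinates in decimal degrees:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))
```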
Additional values that can affect the result are place rating and place price level; they can be compared too.
Once it is clear how to configure the neural network, the next step is to collect data sets and train it.
Data sets should contain input parameters as well as expected output values. During training, the internal coefficients of the hidden layers change and the network learns to give the right results for new input data.
Since each search yields a list of places, a large number of data-set entries can be obtained manually from a single search.
This makes it possible to collect a large amount of data by hand, enough at least for initial training.
Once the application is in production, users' choices can be used for collecting further data sets.
It is good practice to use one part of the data sets for training and another part to validate the trained network, typically in proportions like 70 percent for training and 30 percent for validation.
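A 70/30 split can be sketched as follows (the fixed seed is an assumption, added for reproducibility):

```python
import random

def split_dataset(samples, train_fraction=0.7, seed=42):
    """Shuffle and split samples into training and validation sets."""
    shuffled = samples[:]                    # keep the original list intact
    random.Random(seed).shuffle(shuffled)    # fixed seed for reproducibility
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, validation = split_dataset(list(range(100)))
```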
This makes it easy to run experiments and further improve the configuration of the neural network.
It is also worth retaining a large number of different input parameters, or at least keeping the ability to derive different combinations of input parameters later. This leaves more room for experiments with the neural network.
Also, if the training data set grows, the previously configured network may need review and reconfiguration, because additional data sets may reveal additional key characteristics that affect the results.
There were a few attempts to implement a solution for place-search suggestions. The first was based on FANN (Fast Artificial Neural Network), an open-source library that implements multilayer artificial neural networks in C.
Main features of FANN are:
- easy to use, thanks to relatively simple procedures for creating, training, and running an artificial neural network;
- versatility, allowing various parameters and/or features to be set at runtime;
- faster execution compared to other libraries.
For the second attempt, it was decided to use TensorFlow for the implementation.
The advantages of TensorFlow are:
- fast execution, ability to use GPU for processing;
- complete and clear documentation;
- available tools for visualization.
We used FANN for our first implementation of place search. It was integrated into an Android application using the NDK.
Data for training was collected with an Android application in which the user could manually select the right answer during a search. Training was implemented as a separate program, with input data imported from the Android device.
To make use of data from users, we also implemented saving data to the cloud. For the first training run, the data was collected by the team; it was not a huge amount, but it was enough to get some results.
To evaluate the training we used the following approach: all of the collected data was divided into two parts in a 70/30 proportion, with 70% used for training and 30% for validation. After each training run we calculated a metric showing the percentage of correct answers on the validation data set.
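That validation metric can be sketched as a simple accuracy function; `model` here is a hypothetical callable standing in for the trained network:

```python
def accuracy(model, validation_set):
    """Fraction of validation samples the model answers correctly.

    validation_set is a list of (inputs, expected_answer) pairs.
    """
    correct = sum(1 for inputs, expected in validation_set
                  if model(inputs) == expected)
    return correct / len(validation_set)

# Toy stand-in model and data: predicts parity of an integer input
model = lambda x: x % 2
validation = [(1, 1), (2, 0), (3, 1), (4, 1)]   # last label is wrong on purpose
score = accuracy(model, validation)              # 3 of 4 correct
```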
The first training run achieved at best 75% correct answers. The reason for such a low rate was a high level of noise in the input data; the ratio improved after the data was collected more carefully.
Humans make mistakes. These mistakes average out over a large amount of information, but when the number of decisions, and the number of people making them, is small, mistakes strongly affect the training result.
After the data was re-collected, training reached 94% correct answers. At that point, we decided to stop with the current implementation and try another one.
The second implementation was divided into two separate parts.
One part was reimplemented from the previous version of the algorithm using the Android SDK and Java; it was responsible for collecting and preparing data for training. The other part was written as a Python script that implemented the training function and prepared the model for further use on mobile.
Before training, we decided to reorganize the preparation of the input data: some of the input parameters were reviewed and changed.
All of the parameters were normalized to avoid redundant complexity in the neural network.
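The article does not say which normalization was used; min-max scaling into [0, 1] is a common choice and can be sketched as:

```python
def normalize(values, lo=None, hi=None):
    """Min-max scale values into [0, 1]; constant inputs map to 0."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

Fixed bounds (`lo`, `hi`) are useful at inference time, so a single sample is scaled with the same range that was used during training.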
New datasets were added to the training set.
Training produced a 93-94% ratio of correct answers.
Evolution of NN Configurations
To see how the success ratio depends on the NN configuration, review the following tables.
The first table presents the results of NN training through the evolution of the input parameters.
| Input parameters evolution | Success ratio |
|---|---|
| + Matching search query with place names | 88% |
| + Matching query with found place types | 91% |
| + Comparing distances to places | 92.5% |
| + Query preprocessing (separating place-name and place-address parts) | 94% |
| + Matching search query with address components of found places | 94% |
Since the main scope of the input data is based on string comparison, the following table shows how the success rate evolved as additional string-matching parameters were added.
| String-matching parameters evolution | Success ratio |
|---|---|
| + Edit distance (Levenshtein distance) | 68% |
| + Longest common substring | 92.5% |
| + Longest common subsequence | 92.5% |
| + Query exact match | 89.5% |
| + Match words count | 94% |
It is worth noting that the "query exact match" and "match words count" parameters each gave a worse success ratio on their own, but as a pair they improved the success ratio the most.
We also noticed that the number of hidden layers of the NN did not affect the results: with 0, 1, and 2 hidden layers, the success ratio was the same in each case.
It has to be mentioned that there are no precise rules for correctly configuring and training neural networks; the problem can only be solved by empirical methods.
Conclusion about TensorFlow
Neural networks and TensorFlow, used as tools for building the training and inference infrastructure of a neural-network-based AI algorithm, helped to solve an edge-case problem where other methods were not effective.
To solve the problem, we designed a method for finding suggestions in a list of specific data structures based on the user's search query.
A success rate of 94% was achieved, which is acceptable for the application.
Neural networks help to solve a wide range of problems where plain "if-else" logic is hard to implement, or requires complicated implementations that are sensitive to changes and prone to regressions.
They also help to improve the maintainability of programs by simplifying the code.