Many problems in AI can in principle be solved by intelligently searching through many possible solutions: reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.

Simple exhaustive searches are rarely sufficient for real-world problems: the search space (the number of places to search) quickly grows to astronomical size. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that prioritise choices more likely to reach a goal, and to reach it in a smaller number of steps.

In some search methodologies, heuristics can also serve to eliminate choices that are unlikely to lead to a goal (a technique called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies, limiting the search to a smaller set of candidates.
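As an illustration of heuristic guidance, here is a minimal sketch of greedy best-first search: the frontier is ordered by a heuristic estimate, and states already seen are pruned. The toy grid, `neighbours` function, and Manhattan-distance heuristic are invented for the example, not taken from any particular system.

```python
import heapq

def greedy_best_first(start, goal, neighbours, h):
    """Always expand the frontier node whose heuristic h(node) looks
    closest to the goal; already-seen states are pruned."""
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:      # walk back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbours(node):
            if nxt not in came_from:     # prune already-seen states
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# Toy grid: move right or up from (0, 0) toward the goal (3, 3).
def neighbours(p):
    x, y = p
    return [(x + 1, y), (x, y + 1)]

manhattan = lambda p: abs(3 - p[0]) + abs(3 - p[1])
path = greedy_best_first((0, 0), (3, 3), neighbours, manhattan)
```

Because the heuristic always points toward the goal here, the search walks a direct seven-node path instead of exploring the whole grid.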

A very different kind of search optimisation came to prominence in the 1990s, based on the mathematical theory of optimisation. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made.

These algorithms can be visualised as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill until we reach the top. Other optimisation algorithms include simulated annealing, beam search and random optimisation. Evolutionary computation uses a form of optimisation search.
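The hill-climbing picture can be sketched in a few lines of Python. The one-dimensional landscape, step size, and iteration cap are illustrative assumptions, not part of any standard implementation.

```python
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Blind hill climbing: from the current guess x, try a small
    step in each direction and move uphill; stop at a local peak."""
    for _ in range(iters):
        best = max((x + step, x - step), key=f)
        if f(best) <= f(x):    # no neighbour improves: local maximum
            return x
        x = best
    return x

# Climb a single-peaked landscape f(x) = -(x - 2)^2; the top is at x = 2.
random.seed(0)
peak = hill_climb(lambda x: -(x - 2) ** 2, x=random.uniform(-5, 5))
```

Starting from a random guess, the climb ends within one step size of the true peak; on a landscape with several hills it would stop at whichever local top it reached first, which is why variants such as simulated annealing exist.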

For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming. Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimisation (inspired by bird flocking) and ant colony optimisation (inspired by ant trails).


In computer science, a search algorithm is an algorithm (typically involving a multitude of other, more specific algorithms) which solves a search problem. Search algorithms work to retrieve information stored within some data structure, or calculated in the search space of a problem domain, with either discrete or continuous values.

While the search problems described above and web search are both problems in information retrieval, they are generally studied as separate subfields and are solved and evaluated differently. Web search problems are generally focused on filtering and finding documents that are most relevant to human queries. Classic search algorithms are typically evaluated on how fast they can find a solution, and whether that solution is guaranteed to be optimal. Though information retrieval algorithms must be fast, the quality of ranking, and whether good results have been left out and bad results included, is more important.

The appropriate search algorithm often depends on the data structure being searched, and may also include prior knowledge about the data. Search algorithms can be made faster or more efficient by specially constructed database structures, such as search trees, hash maps, and database indexes.

Search algorithms can be classified based on their mechanism of searching into three types of algorithms: linear, binary, and hashing. Linear search algorithms check every record for the one associated with a target key in a linear fashion. Binary, or half-interval, searches repeatedly target the center of the search structure and divide the search space in half.
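A half-interval search over a sorted list can be sketched as follows; the sample data is illustrative.

```python
def binary_search(records, key):
    """Half-interval search: repeatedly probe the centre of a sorted
    list and discard the half that cannot contain the key."""
    lo, hi = 0, len(records) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if records[mid] == key:
            return mid
        if records[mid] < key:
            lo = mid + 1        # key can only lie in the upper half
        else:
            hi = mid - 1        # key can only lie in the lower half
    return -1                   # key is not present

data = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
idx = binary_search(data, 23)   # index of 23 in the sorted list
```

A linear search would, in the worst case, examine every record; the half-interval version touches at most a handful of them because each probe eliminates half of the remaining candidates.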

Comparison search algorithms improve on linear searching by successively eliminating records based on comparisons of the keys until the target record is found, and can be applied on data structures with a defined order. Digital search algorithms work based on the properties of digits in data structures that use numerical keys. Finally, hashing directly maps keys to records based on a hash function.
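Hashing can be illustrated with a toy hash table. The bucket count and collision handling by chaining are assumptions of this sketch, not the behaviour of any particular library.

```python
class HashTable:
    """Toy hash map: a hash function maps each key directly to a
    bucket index; colliding keys are chained in a per-bucket list."""

    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)   # the hash function

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                       # overwrite existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None                            # key not stored

table = HashTable()
table.put("alice", 30)
table.put("bob", 25)
```

Unlike comparison search, the hash function jumps straight to the right bucket, so lookup cost does not grow with the number of records as long as the buckets stay short.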

Algorithms are often evaluated by their computational complexity, or maximum theoretical run time. Binary search functions, for example, have a maximum complexity of *O*(log *n*), or logarithmic time. This means that the maximum number of operations needed to find the search target is a logarithmic function of the size of the search space.
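To make the logarithmic bound concrete: each probe halves the remaining range, so a sorted list of *n* records needs at most roughly log₂ *n* probes. A quick back-of-the-envelope check:

```python
import math

# Half-interval search needs at most ceil(log2(n)) + 1 probes,
# because every probe halves the remaining search range.
for n in (1_000, 1_000_000, 1_000_000_000):
    worst = math.ceil(math.log2(n)) + 1
    print(f"n = {n:>13,}: at most {worst} probes")
```

Even a billion records need only about 30 probes, which is what logarithmic time means in practice.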

Mathematical optimisation or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. Optimisation problems of sorts arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

In the simplest case, an optimisation problem consists of maximising or minimising a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalisation of optimisation theory and techniques to other formulations constitutes a large area of applied mathematics. More generally, optimisation includes finding “best available” values of some objective function given a defined domain (or input), including a variety of different types of objective functions and different types of domains.
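In the simplest setting described above, minimising a real function by systematically choosing inputs, iterative refinement might be sketched like this; the quadratic objective and the step-halving schedule are illustrative choices.

```python
def minimise(f, x, step=1.0, tol=1e-6):
    """Refine a guess: try a step in each direction, keep whichever
    lowers f, and shrink the step when neither direction helps."""
    while step > tol:
        better = min((x - step, x + step), key=f)
        if f(better) < f(x):
            x = better          # refinement found: keep moving
        else:
            step /= 2           # no refinement possible at this scale
    return x

# Minimise the quadratic f(x) = (x - 3)^2 starting from the guess x = 0.
argmin = minimise(lambda x: (x - 3) ** 2, x=0.0)
```

The loop terminates when no further refinement can be made at any scale above the tolerance, which is exactly the stopping condition described in the text.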

In computer science, evolutionary computation is a family of algorithms for global optimisation inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimisation character.

In evolutionary computation, an initial set of candidate solutions is generated and iteratively updated. Each new generation is produced by stochastically removing less desired solutions, and introducing small random changes. In biological terminology, a population of solutions is subjected to natural selection (or artificial selection) and mutation. As a result, the population will gradually evolve to increase in fitness, in this case the chosen fitness function of the algorithm.
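The generation loop described above can be sketched on the classic OneMax toy problem, where fitness is simply the number of 1-bits in a string. The population size, truncation selection, and one-bit mutation scheme are illustrative choices, not a canonical algorithm.

```python
import random

def evolve(pop_size=30, length=20, generations=60, seed=1):
    """Minimal generational loop: keep the fitter half of the
    population, mutate a copy of each survivor, repeat."""
    rng = random.Random(seed)
    fitness = sum  # OneMax: fitness = number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: stochastically remove less desired solutions
        # (here, deterministic truncation of the weaker half).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Variation: small random change (flip one bit per child).
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(length)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fitter half always survives, the best fitness in the population never decreases, and the random bit-flips supply the variation that selection then acts on.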

Evolutionary computation techniques can produce highly optimised solutions in a wide range of problem settings, making them popular in computer science. Many variants and extensions exist, suited to more specific families of problems and data structures. Evolutionary computation is also sometimes used in evolutionary biology as an in silico experimental procedure to study common aspects of general evolutionary processes.
