In computer science, the term 'dynamic programming' refers to a style of programming that breaks a large problem down into smaller subproblems and, in general, allows the optimal solution to be found. When the problem is split into subproblems, these may themselves be split into smaller problems, and so on, until they cannot be reduced any further. Dynamic programming typically combines recursion with the saving of previously computed results (memoisation) so they can be reused later; this improves efficiency because calculations are not redone. For example, when a problem is reduced to subproblems, and those are reduced further, the subproblems may share common sub-subproblems, so a single calculation can be saved and used to solve more than one of them.

A path-finding problem illustrates this gain in efficiency. Suppose a network has 10 nodes, labelled A to J, and two distinct routes through it share a common section, say between nodes B and D. The cost of that section is calculated once, while the first route is processed, and saved; when the second route is processed, the cost from B to D does not need to be calculated again. In general, dynamic programming is applied to optimisation problems, where the most efficient solution is required. Areas where it is useful include AI, computer graphics, compression routines, and biomedical applications.
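The idea can be sketched in a few lines of Python. The graph below is a hypothetical example invented for illustration (nodes A to J with made-up edge costs, and a helper function min_cost that is not from any particular library); what matters is that the cheapest cost onwards from a shared node, such as D, is computed once, cached, and reused by every route that passes through it.

    # A minimal memoisation sketch for the path-finding example above.
    # The graph and its edge costs are hypothetical, invented for illustration.
    from functools import lru_cache

    # Directed graph: node -> {neighbour: cost of that edge}
    GRAPH = {
        "A": {"B": 2, "C": 5},
        "B": {"D": 4},
        "C": {"D": 1, "E": 7},
        "D": {"F": 3},
        "E": {"F": 2},
        "F": {"J": 6},
        "J": {},
    }

    GOAL = "J"

    @lru_cache(maxsize=None)       # cache each node's result the first time it is computed
    def min_cost(node: str) -> float:
        """Cheapest cost from `node` to GOAL; each subproblem is solved only once."""
        if node == GOAL:
            return 0.0
        edges = GRAPH.get(node, {})
        if not edges:
            return float("inf")    # dead end: no route onwards to the goal
        return min(cost + min_cost(nxt) for nxt, cost in edges.items())

    print(min_cost("A"))           # routes via B and via C both reuse the cached cost from D onwards

Both the route through B and the route through C pass through D, but the cost from D onwards is computed only once; the second route simply reads it back from the cache rather than recalculating it.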