# Dynamic Programming

We use the basic idea of divide and conquer. Did you feel a little shiver when you read that? By applying structure to your solutions, such as with the FAST Method, it is possible to solve any of these problems in a systematic way. Let's break down each of these steps. Indeed, most developers do not regularly work …

There are a lot of cases in which dynamic programming simply won't help us improve the runtime of a problem at all; all it will do is create more work for us. Think of caching: if no one ever requests the same image more than once, what was the benefit of caching the images? For an example of overlapping subproblems, consider the Fibonacci problem. This problem is quite easy to understand because fib(n) is simply the nth Fibonacci number, and there are a polynomial number of subproblems. In the recursion tree for fib(5), the number 3 is repeated twice, 2 is repeated three times, and 1 is repeated five times.

With this definition, it becomes easy for us to rewrite our function to cache the results (and in the next section, these definitions will become invaluable). Again, we can see that very little change to our original code is required. Contrast that with a second version of the function, which relies on result to compute its answer, where result is scoped outside of the fibInner() function.

If the optimal solution to a problem P, of size n, can be calculated by looking at the optimal solutions to subproblems [p1, p2, …] (not all the sub-problems), each of size less than n, then problem P is considered to have optimal substructure. Check whether the problem below follows the optimal substructure property or not: even if (c -> b -> e -> a -> d) is a longest path in the graph, it won't give us a valid longest path between a and d (because we need to use non-repeating vertices).
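The caching rewrite described above can be sketched as follows. This is a minimal Python illustration under my own naming (`fib` and `memo` are assumptions, not code from the original article):

```python
# Top-down Fibonacci with memoization: each overlapping
# subproblem fib(n) is computed once and then served from the cache.
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n < 2:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(10))  # → 55
```

Note how little changes relative to the naive recursive version: only the cache lookup and the cache store are new.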
So this problem does not follow the optimal substructure property, because optimal solutions to the substructures do not lead to an optimal solution for the whole problem.

Dynamic programming is an algorithmic paradigm in which a problem is solved by identifying a collection of subproblems and tackling them one by one, smallest first, using the answers to small problems to help figure out larger ones, until they are all solved. The full problem can't be solved until we find the solutions of its sub-problems. While dynamic programming seems like a scary and counterintuitive topic, it doesn't have to be: there is a way to understand dynamic programming problems and solve them with ease.

Dynamic programming is not useful when there are no overlapping (common) subproblems, because there is no need to store results that are never needed again. By contrast, a greedy algorithm is going to pick the first solution that works, meaning that if something better could come along later down the line, you won't see it; and if you don't have optimal solutions for your subproblems, you can't use a greedy algorithm at all. Likewise, some code works correctly yet would NOT allow us to do DP, because it doesn't expose clean subproblem results to cache.

We are going to start by defining in plain English what exactly our subproblem is. When the same subproblem shows up again and again, that's an overlapping subproblem. We can pretty easily see that our solution is efficient because each value in our dp array is computed once and referenced some constant number of times after that.

In this step, we are looking at the runtime of our solution to see if it is worth trying to use dynamic programming, and then considering whether we can use it for this problem at all. We will start with a look at the time and space complexity of our problem and then jump right into an analysis of whether we have optimal substructure and overlapping subproblems. Dynamic programming may work on all subarrays, say \$A[i..j]\$ for all \$i \le j\$.
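As a sketch of code that works but resists dynamic programming, consider a version where the recursive helper accumulates into a result variable scoped outside of it. This is a hypothetical Python analogue of the fibInner() discussion, not the article's own code:

```python
def fib_no_dp(n):
    # Correct, but fib_inner's effect depends on mutable state
    # outside itself, so a call is not a pure function of its
    # argument and there is no self-contained result to memoize.
    result = [0]

    def fib_inner(k):
        if k < 2:
            result[0] += k  # only base cases contribute
            return
        fib_inner(k - 1)
        fib_inner(k - 2)

    fib_inner(n)
    return result[0]

print(fib_no_dp(10))  # → 55, but in exponential time
```

Because fib_inner returns nothing and only mutates result, there is no value keyed by k that a cache could return on a repeated call.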
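The claim that each value in the dp array is computed once and referenced only a constant number of times can be made concrete with a bottom-up sketch (again in Python, assuming the same Fibonacci subproblem definition; the function name is mine):

```python
def fib_bottom_up(n):
    # dp[i] holds fib(i). Each entry is written exactly once and
    # read at most twice (by dp[i+1] and dp[i+2]), giving O(n)
    # time and O(n) space.
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_bottom_up(10))  # → 55
```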