Thursday 4 December 2014

Does... Not... Compute!

When a function is not computable, it can bring a big headache to any programmer! That is why it is important to be able to determine whether a function is computable or not. From what I was able to observe, a function is non-computable when no program can implement its specification; the classic culprits are functions that would need to predict whether some other function halts in order to return the right value.

To show that a function is non-computable, we relate it to something we already know to be non-computable: our favourite function, halt(). The trick is a reduction: if our function were computable, we could use it to implement halt(), which is impossible.

Let us look at the following example:

def is_bigger_than_five(f, v):
    """
    Return True iff f(v) returns an int that is bigger than 5.
    """
    #code here

    return True

At first sight, we can tell that, if f(v) never stops, then f(v) never returns an int bigger than 5, so the function should return False. But no code can make that decision in general. To prove this, we show that halt() could be implemented using a function of this kind. Let's make the construction concrete with a sibling function, is_life():

def halt(f, i):

    def is_life(g, v):
        """
        Return True iff g(v) returns "life".
        """
        #code here (assumed computable, for the sake of argument)

        return True

    def f_prime(x):
        """
        Ignore the argument x; call f with the fixed argument i
        (the one passed into halt).
        """
        f(i)

        return "life"

    #f_prime ignores its argument, so any value will do here
    return is_life(f_prime, 0)

From the above, we can conclude:

  • If f(i) halts, then f_prime returns "life" on any argument, so is_life(f_prime, 0) is True
  • If f(i) does not halt, then f_prime(0) never returns, so is_life(f_prime, 0) is False
This means that, if is_life() were computable, halt() would be computable too, and we know it is not. Therefore, is_life() is not computable!
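
The same trick works for is_bigger_than_five() from the top of this post. Here is a minimal sketch of my own (the helper name f_prime_six is hypothetical), assuming is_bigger_than_five() were computable and in scope:

def halt(f, i):

    def f_prime_six(x):
        """
        Ignore x; run f(i), and if it ever finishes, return 6.
        """
        f(i)

        return 6

    #f_prime_six(0) returns 6 (which is bigger than 5) iff f(i) halts,
    #so a computable is_bigger_than_five would let us decide halting
    return is_bigger_than_five(f_prime_six, 0)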

Sunday 23 November 2014

Big O-Me-Ga! Let's Go From Left to Right

Big Omega is related to Big Oh: while the latter defines an upper bound, the former defines a lower bound for a given function. This is done, usually, by bounding the function below by a constant multiple of a simpler function, akin to the Big Oh expression. In mathematical form, it takes the following shape:

f(n) ∈ Ω(g(n))

The full definition would be the following:

∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ f(n) ≥ c(g(n))


This is identical to the definition of Big Oh, except that instead of f(n) ≤ c(g(n)) we have f(n) ≥ c(g(n)). This shows how the inequality approaches the function from the other side, thus defining a lower bound. Once again, we need a proof to establish that the inequality actually holds for some choice of c and B. To go from left to right, we need to be, once again, creative.

Let's work with the following example:

8n^4 + 5n^2 - n ∈ Ω(-n^5 + 2n^3 - 4)

Unfolding the definition, this means:

∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ 8n^4 + 5n^2 - n ≥ c(-n^5 + 2n^3 - 4)

First, we should work on the inequality:

8n^4 + 5n^2 - n ≥ 8n^4 + 5n^2 - n^2 #since n ≤ n^2 for n ≥ 1, subtracting n^2 removes at least as much as subtracting n
                = 8n^4 + 4n^2 #addition
                ≥ 8n^4 #since 4n^2 ≥ 0
                ≥ n^3 #since 8n^4 ≥ n^4 ≥ n^3 for n ≥ 1
                = 2n^3 - n^3 #still n^3
                ≥ 2n^3 - n^5 #since -n^5 ≤ -n^3, this is still true
                ≥ -n^5 + 2n^3 - 4 #subtracting 4, this is still true
                = 1(-n^5 + 2n^3 - 4) #we could make c = 1

We have established values for both c and B: c = 1 and B = 1. In order to write the proof, we would have to do the following arrangement:
8n^4 + 5n^2 - n ∈ Ω(-n^5 + 2n^3 - 4)
∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ 8n^4 + 5n^2 - n ≥ c(-n^5 + 2n^3 - 4)
Assume n ∈ ℕ and n ≥ 1 #domain assumption and antecedent
    Then 8n^4 + 5n^2 - n ≥ 8n^4 + 5n^2 - n^2 #since n ≤ n^2
                         = 8n^4 + 4n^2
                         ≥ 8n^4 #since 4n^2 ≥ 0
                         ≥ n^3 #since 8n^4 ≥ n^4 ≥ n^3
                         = 2n^3 - n^3
                         ≥ 2n^3 - n^5 #since -n^5 ≤ -n^3
                         ≥ -n^5 + 2n^3 - 4 #subtract 4
Then ∀ n ∈ ℕ, n ≥ 1 ⇒ 8n^4 + 5n^2 - n ≥ 1(-n^5 + 2n^3 - 4) #introduce ⇒ and ∀, with c = 1 and B = 1
Then 8n^4 + 5n^2 - n ∈ Ω(-n^5 + 2n^3 - 4) #introduce Big Omega
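
As a quick sanity check (not a substitute for the proof!), a few lines of Python can test the inequality over a range of n. The names f and g are just my own labels:

def f(n):
    return 8 * n**4 + 5 * n**2 - n

def g(n):
    return -n**5 + 2 * n**3 - 4

c, B = 1, 1
#check f(n) >= c * g(n) for every tested n at or past the breakpoint B
assert all(f(n) >= c * g(n) for n in range(B, 1000))
print("inequality holds for all tested n")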



Done! 

Sunday 16 November 2014

Big Oh My God Let's Go From Left to Right

Big Oh is a mathematical expression that describes how a function behaves as its argument grows towards infinity; more precisely, it gives an upper bound on the function's growth. This is done, usually, by bounding the function above by a constant multiple of a simpler function. In mathematical form, it takes the following shape:

f(n) ∈ O(g(n))

The full definition would be the following:

∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ f(n) ≤ c(g(n))


Relating this to everything we have learnt before, we need a proof to establish that the inequality actually holds for some choice of c and B. To go from left to right, we need to be very, very creative.

Let's work with the following example:

n^4 - 5n^2 + 1 ∈ O(6n^5 - 5n^3 + 3n)

Unfolding the definition, this means:

∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ n^4 - 5n^2 + 1 ≤ c(6n^5 - 5n^3 + 3n)

First, we should work on the inequality:

n^4 - 5n^2 + 1 ≤ n^4 + 1 #dropping -5n^2 can only increase the value
               ≤ n^4 + n^4 #since n^4 ≥ 1 for n ≥ 1, this inequality would still be true
               = 2n^4 #addition
               ≤ n∙n^4 #since we have 2 as a coefficient, we could make B = 2 so that n ≥ 2
               = n^5 #multiplication
               = 6n^5 - 5n^5 #still n^5
               ≤ 6n^5 - 5n^3 #since -5n^5 ≤ -5n^3, this is still true
               ≤ 6n^5 - 5n^3 + 3n #adding 3n, this is still true
               = 1(6n^5 - 5n^3 + 3n) #we could make c = 1

We have established values for both c and B: c = 1 and B = 2. In order to write the proof, we would have to do the following arrangement:
n^4 - 5n^2 + 1 ∈ O(6n^5 - 5n^3 + 3n)
∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ n^4 - 5n^2 + 1 ≤ c(6n^5 - 5n^3 + 3n)
Assume n ∈ ℕ and n ≥ 2 #domain assumption and antecedent
    Then n^4 - 5n^2 + 1 ≤ n^4 + 1 #dropping -5n^2
                        ≤ n^4 + n^4 #since n^4 ≥ 1
                        = 2n^4 #addition
                        ≤ n∙n^4 #since n ≥ 2
                        = n^5 #multiplication
                        = 6n^5 - 5n^5
                        ≤ 6n^5 - 5n^3 #since -5n^5 ≤ -5n^3
                        ≤ 6n^5 - 5n^3 + 3n #add 3n
Then ∀ n ∈ ℕ, n ≥ 2 ⇒ n^4 - 5n^2 + 1 ≤ 1(6n^5 - 5n^3 + 3n) #introduce ⇒ and ∀, with c = 1 and B = 2
Then n^4 - 5n^2 + 1 ∈ O(6n^5 - 5n^3 + 3n) #introduce Big Oh
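
Again, a few lines of Python make a handy sanity check. The helper find_B is my own idea (not course material): given c, it scans a range of n and reports the smallest B that works within that range.

def f(n):
    return n**4 - 5 * n**2 + 1

def g(n):
    return 6 * n**5 - 5 * n**3 + 3 * n

def find_B(c, limit=1000):
    """Return the smallest B such that f(n) <= c * g(n) for all tested n >= B."""
    B = 0
    for n in range(limit):
        if f(n) > c * g(n):
            B = n + 1  #the inequality failed at n, so B must be larger
    return B

print(find_B(1))  #prints 1 here, comfortably below our chosen B = 2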

Done! 

Sunday 9 November 2014

Algorithm Run Time - From a Specific Action to Steps

Computing power is a known limitation when programming. Though to the untrained eye computers seem vastly powerful, when programming you are trying to reduce both the load on the processor and the running time. The fewer resources we use, the more we can do in the same amount of time. That is the rationale behind analyzing the run time of algorithms and code in general. When talking about code, there are three common ways in which run time grows:

  • Linear: run time grows in direct proportion to the input size, so each extra unit of input adds the same amount of time
  • Exponential: run time multiplies with each extra unit of input, so larger inputs drastically increase run time
  • Logarithmic: run time grows ever more slowly as the input grows, so past a certain point extra input has a minimal impact on run time
To calculate run time, we need to pay close attention to our code in order to establish how many steps are executed and how the loops interact. Let's look at this piece of code and its outcome:

    for i in range(l):
        print(i)

If we use l = 10, how many times will print() be used?
Print will be used 10 times. The output would be the following:

0
1
2
3
4
5
6
7
8
9


What about the number of steps? Would it be 10 steps? 
We need to consider that there are two steps involved per iteration: one for the for statement and one for the print() call. In this case, we would be talking about 20 steps in total: 2 steps per iteration, times 10 iterations.
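
One way to convince ourselves is to count the steps with a counter variable. This is a quick instrumentation sketch of my own, using the same 2-steps-per-iteration model as above:

    l = 10
    steps = 0
    for i in range(l):
        steps += 1  #one step for the for statement
        print(i)
        steps += 1  #one step for the print() call
    print("total steps:", steps)  #prints 20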

Now let's change the code a little bit:


    for i in range(l):
        print(i)
        for j in range(l):
            print(j)

If we use l = 10, how many times will print() be used?
Print will be used 10 times by the outer loop's print(i), plus 10 times by the inner loop's print(j) for each of the 10 outer iterations. This means print() will be used 10 + 10 * 10 = 110 times in total. The output would be the following:


0
0
1
2
3
4
5
6
7
8
9
1
0
1
2
3
4
5
6
7
8
9

(In each block, the first digit is the outer print(i) and the ten digits after it are the inner print(j) for j = 0 through 9; the same pattern repeats for i = 2 through 9, giving 110 lines in total.)

What about the number of steps? Would it be 100 steps?
Similarly to the previous question, we need to consider both the inner and outer loop and how they interact. Each iteration of the inner loop takes 2 steps (for and print()), and each iteration of the outer loop takes 2 steps of its own (for and print()). Every time the outer loop runs, the inner loop runs l = 10 times, costing 2 * 10 = 20 steps. So each outer iteration costs 2 + 20 = 22 steps, and over 10 outer iterations that gives 200 inner-loop steps plus 20 outer-loop steps: 220 steps in total.
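
The same instrumentation trick (my own sketch, same counting model as before) confirms the total:

    l = 10
    steps = 0
    for i in range(l):
        steps += 1  #outer for statement
        print(i)
        steps += 1  #outer print() call
        for j in range(l):
            steps += 1  #inner for statement
            print(j)
            steps += 1  #inner print() call
    print("total steps:", steps)  #prints 220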

As we can see, we need to analyze the code properly to work out how the loops interact and, finally, how many steps will be taken.

Sunday 2 November 2014

Solving a Mystery, Case by Case

Some of the proofs we have seen before were pretty direct. When you have P(x) ⇒ Q(x), there is little that can go wrong! But things do become a little bit more complicated when we encounter ∧. This conjunction symbol indicates that, if we are to prove something, we need to show that both sides are true. When you find a conjunction in the antecedent, such as having P(x) ⇒ Q(x) where P(x) ⇔ R(x) ∧ S(x), you can just assume the whole conjunction and use both parts.

For example:

∀ x ∈ ℕ, R(x) ∧ S(x) ⇒ Q(x)

Would lead us to:
Assume x ∈ ℕ
    Assume R(x) ∧ S(x)
         ....
    Then R(x) ∧ S(x) ⇒ Q(x)
Then ∀ x ∈ ℕ, R(x) ∧ S(x) ⇒ Q(x)

This is fairly easy, and makes our lives easier! Nonetheless, there will be plenty of times when we will want to prove a conjunction rather than assume one. To do this, we will have to work on different cases, one leading to each conjunct. Let's look at the converse of the previous statement, and let's assume it is true:

∀ x ∈ ℕ, Q(x) ⇒ R(x) ∧ S(x)

This would lead us to:
Assume x ∈ ℕ
    Assume Q(x)
         Then...
             Assume... #case 1, leading to a proof of R(x)
                 Then...
             Assume... #case 2, leading to a proof of S(x)
                 Then...
         Then R(x) ∧ S(x) #introduce ∧
    Then Q(x) ⇒ R(x) ∧ S(x)
Then ∀ x ∈ ℕ, Q(x) ⇒ R(x) ∧ S(x)

As you can see, separate cases spawn while we are working on this proof. The important thing to remember is that these cases need to lead to proving R(x) and S(x), and they need to stay within the context of Q(x). That sounds like a lot of work, but many times it is easy to see the light!
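
To make this concrete, here is a toy statement of my own (not from lecture) with a conjunction in the consequent: ∀ x ∈ ℕ, x > 3 ⇒ x > 1 ∧ x^2 > 9.

Assume x ∈ ℕ
    Assume x > 3
         Then x > 1 #since x > 3 > 1, this proves the first conjunct
         Then x^2 > 9 #since x > 3 ≥ 0, squaring preserves the inequality; this proves the second conjunct
         Then x > 1 ∧ x^2 > 9 #introduce ∧
    Then x > 3 ⇒ x > 1 ∧ x^2 > 9 #introduce ⇒
Then ∀ x ∈ ℕ, x > 3 ⇒ x > 1 ∧ x^2 > 9 #introduce ∀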

Sunday 26 October 2014

Finding the Perfect Substitution

While we were looking at the structure of proofs, I noticed that I had no recollection whatsoever of how to find the right substitution. I remember, from studying number theory, that there were some neat tricks for proving an expression for n, then for n + 1, and finally for n + k. In essence, those proofs deal with a single expression. When we have two related expressions, the dynamics are a bit different, so here we will focus on a claim that involves two expressions, the way to tackle this kind of problem, and how to find the perfect substitution to get our desired result.

If we were to use the example:

∀ x ∈ ℤ, ∃ y ∈ ℤ, x = y + 1 ⇒ ∃ z ∈ ℤ, x^2 = z + 1

We would have the following steps for establishing the proof:

        Assume x ∈ ℤ # domain assumption
                Assume ∃ y ∈ ℤ, x = y + 1 # antecedent
                        Let y' ∈ ℤ be such that x = y' + 1 # instantiate the ∃
                        Let z' = y'^2 + 2y'
                        Then z' ∈ ℤ # ℤ is closed under multiplication and addition
                        Then x^2 = (y' + 1)^2 = y'^2 + 2y' + 1 = z' + 1
                Then ∃ z ∈ ℤ, x^2 = z + 1 # introducing ∃
        Then ∃ y ∈ ℤ, x = y + 1 ⇒ ∃ z ∈ ℤ, x^2 = z + 1 # introducing ⇒
        Then ∀ x ∈ ℤ, ∃ y ∈ ℤ, x = y + 1 ⇒ ∃ z ∈ ℤ, x^2 = z + 1 # introducing ∀

Let us focus now on the key step: the choice of z' = y'^2 + 2y'. How did we find it? Easy: we knew where we wanted to go. If we are going from x to x^2, then we know that the expression on the left will be squared: (y' + 1)^2 = y'^2 + 2y' + 1. Rearrange things a little bit so that it reads as something plus 1, and voila!
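
A tiny Python check of the substitution (my own sanity test, not part of the proof):

for x in range(-100, 100):
    y = x - 1  #so that x = y + 1
    z = y**2 + 2 * y  #the substitution we found
    assert x**2 == z + 1
print("substitution checks out for all tested x")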

Sunday 19 October 2014

Floors and Ceilings - Almost getting there!

From this week's lecture I found quite intriguing the importance that floor and ceiling functions play in computing theory. At the beginning I was rather skeptical of their usefulness, but that just goes to show how vast the applications of computational logic really are. I'll just go ahead and offer two examples that have to do with my current Python programming level (see the quick check after the list):
  • Mod operator (% in Python)
    • x % y = x - y∙floor(x/y)
    • In mathematical expression: ∀ x ∈ ℤ, ∀ y ∈ ℤ, y ≠ 0 ⇒ ∃ z ∈ ℤ, z = x - y∙floor(x/y)
  • Rounding (round() in Python)
    • round(x) = floor(x + 1/2), at least under the traditional rounding rule
    • In mathematical expression: ∀ x ∈ ℝ, ∃ y ∈ ℤ, y = floor(x + 1/2)
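
Here is the quick check mentioned above: a small sketch of my own, using math.floor from the standard library.

import math

#check the mod identity for a bunch of integer pairs (y != 0)
for x in range(-20, 20):
    for y in range(-5, 6):
        if y != 0:
            assert x % y == x - y * math.floor(x / y)

#floor(x + 1/2) implements traditional rounding, which is not always what
#Python 3's round() does: round() sends halves to the nearest even number
print(math.floor(2.5 + 0.5))  #3, traditional rounding
print(round(2.5))             #2, Python 3 rounds half to even
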
The important lesson here is not to think harshly of some functions in mathematical reasoning. Perhaps we should apply the logic that, if it has its own symbol, it must be rather important. While doing some research about the floor and ceiling functions, I found that there are plenty more applications for them! Hopefully in the near future we will be able to see some more.