Wednesday, December 3, 2014

Week 12: Problem solving episode of diagonals problem

The rough work:

[Images of the rough work (many hand-drawn rectangles with diagonals) are not preserved here.]
To summarize:

Understand the problem:

-You have a rectangular grid of m rows and n columns.
-m and n are positive whole numbers (1, 2, 3...).
-A diagonal is drawn from the upper left to the lower right corner.
-The diagonal touches some of the interior grid squares.

Problem: Calculate the number of squares x that the diagonal meets, given m and n.

Preliminary observations:
-The problem can be divided into smaller problems: if m and n have a greatest common divisor gcd > 1, then the rectangle can be divided into gcd smaller rectangles along the diagonal, each with m' = m/gcd rows and n' = n/gcd columns. (For example, the diagonal of a 4 x 6 rectangle passes through two 2 x 3 rectangles.)


Devise a plan:

-Start with rectangles with the smallest pairs of m and n such that gcd(m, n) = 1.
-"Extend" each rectangle once such that the rows are m + m and the columns are n + n.
-Find a pattern between x and m, n.
-Find a pattern between the smallest ratio rectangles and the extended rectangles.

Carry out the plan:

-Draw a lot of rectangles! See above.

Observations:
-For m = a and n = b, x is the same as for m = b and n = a.
-As predicted, "extending" each rectangle proportionally does not affect the slope of the diagonal.
-For a pair of m and n whose gcd is 1, x = m + n - 1.
-The relationship between x for the rectangle with said m and n pair and x' for the "extended" rectangle with m' = gcd * m and n' = gcd * n is x' = gcd * x.
-Then for any m and n, x = gcd * (m/gcd + n/gcd - 1) = m + n - gcd.

The solution: x = m + n - gcd
where x = number of grid squares the diagonal passes through
      m = number of rows
      n = number of columns
      gcd = greatest common divisor of m and n
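
As a sanity check, the formula is easy to test in code. A quick sketch using Python's built-in gcd (the function name is my own):

from math import gcd

def squares_crossed(m, n):
    # number of grid squares the corner-to-corner diagonal passes through,
    # using x = m + n - gcd(m, n)
    return m + n - gcd(m, n)

print(squares_crossed(2, 3))  # 4: gcd is 1, so x = m + n - 1
print(squares_crossed(4, 6))  # 8: the "extended" 2x3 rectangle, x' = gcd * x = 2 * 4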

Looking back:

My first attempt in the rough work did not lead me anywhere, and I had to go back and revise the plan. (The first attempt involved dividing the possibilities into several cases, such as m = n, m and n both odd, etc., extending each case, and finding a pattern for each case. However, I soon realized that extending the case where m and n are both odd can lead to a case where m and n are both even, so the patterns for these cases may not actually be different.)

When I was drawing the rectangles, I did not notice at first that the pattern was m + n - 1 for the smallest rectangles. At first I thought only in terms of the larger of the two numbers, which I always chose to be n. Then x was always n plus some number that increased with m. When I was finally done drawing and looked back over all the examples, the pattern became clear!

Week 11: Tackling halting functions

Part of the reason I struggled with understanding last week's lecture is that I am not familiar with Python. I started with Java, and one of the things that confused me was having a function inside another function, since Java does not allow nested methods.

Last week we talked about a function halt(f, n) that takes a function f and determines whether it halts given the input n, and we learned that no one can write a halt(f, n) that works for all Python functions. To prove this, we first assume that it is possible to write such a function, construct a function clever(f) that reduces to halt(f, n), and show that this leads to a contradiction. We looked at the following function:

1. def clever(f):
2.     while halt(f, f):
3.         pass
4.     return 42

and considered the case of clever(clever):
If clever(clever) halts, then halt(clever, clever) on line 2 returns True, so we enter an infinite while loop and clever(clever) does not halt.
In the other case, if clever(clever) does not halt, then halt(clever, clever) on line 2 returns False, so we skip line 3, return 42 at line 4, and clever(clever) halts.
Since both cases lead to a contradiction, we have just proven that no one can write halt(f, n)!

From here, we can prove that some function is not computable if we can show that halt(f, n) reduces to said function.
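
For example (a made-up function of my own, not one from lecture): suppose someone claims to have written runs_forever(f, n), which returns True exactly when f(n) never halts. If that were possible, we could write halt in one line:

def halt(f, n):
    # runs_forever is the hypothetical function claimed above;
    # if it were computable, this one-liner would decide halting,
    # which we just proved is impossible
    return not runs_forever(f, n)

Since halt(f, n) reduces to runs_forever in this way, runs_forever cannot be computable either.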

Week 10: This will be brief.

We learned big-Oh proofs for general functions this week, so that we can say things like

[big-O statements from lecture; equation image not preserved]

and proceed to amaze our friends. Then we moved on to halting functions...
...
...
... ???????????????????????

Week 9: It wasn't me, it was Netflix

9-10pm on November 3 was not my finest hour. I really should have gone over the hints given in class; I had completely forgotten about them! To give you the general idea, the hour before the assignment 2 deadline involved a lot of frantic scribbling, angry muttering directed at inanimate objects, and various large gestures of exasperation and defeat. Good thing I found a cubicle at the library, or I would have been exposed as a crazy person.

What led to this madness? Well as predicted, numerous Walking Dead episodes were involved (don't blame me, my roommate was the enabler!). Zombies are indeed gross.

So, you might say...
assume I did badly on an assignment
   then I either was subscribed to Netflix, or I was not subscribed to Netflix #A or not A
   case 1: I was subscribed to Netflix
      then I watched too many Netflix shows #this naturally follows because I have no self-restraint
      then Netflix distracted me from the assignment
      then it was Netflix's fault
   case 2: I was not subscribed to Netflix
      then I was suffering from Netflix withdrawal #that happens, right?
      then Netflix-lessness distracted me from the assignment
      then it was Netflix's fault
   then in either case, it was Netflix's fault
then I did badly on an assignment implies that it was Netflix's fault #sounds reasonable to me!


This week in lecture, we learned big-Oh proofs... calculus was involved. I felt comfortable with most of the things taught in class, but sometimes the examples were hard to follow when I was just seeing (many, many, many) symbols on a screen and had not worked out the problem myself. Generally, I find that the lectures are a good introduction to the material, the tutorials provide very useful practice for consolidating what I've learned and for asking questions, and the course notes are the reason why I have to purchase a new pair of glasses this year (but they're still very helpful!).

Week 8: Let's be formal

This week we learned the formal definition of big O:

$$f(n) \in O(n^2) \iff \exists c \in \mathbb{R}^+,\ \exists B \in \mathbb{N},\ \forall n \in \mathbb{N},\ n \geq B \Rightarrow f(n) \leq cn^2$$
This means that f(n) is in O(n^2) if and only if past a certain breakpoint B (that is, for all values of n >= B), the graph of f(n) is upper-bounded by the graph of cn^2, where n^2 is multiplied by a big enough constant factor c. In other words, f(n) eventually grows no faster than n^2.

Similarly, we can define the lower bound, when f(n) eventually grows at least as fast as n^2:

$$f(n) \in \Omega(n^2) \iff \exists c \in \mathbb{R}^+,\ \exists B \in \mathbb{N},\ \forall n \in \mathbb{N},\ n \geq B \Rightarrow f(n) \geq cn^2$$
and the tight bound, when f(n) grows at the same rate as n^2:

$$f(n) \in \Theta(n^2) \iff \exists c_1, c_2 \in \mathbb{R}^+,\ \exists B \in \mathbb{N},\ \forall n \in \mathbb{N},\ n \geq B \Rightarrow c_1 n^2 \leq f(n) \leq c_2 n^2$$
Thus, we can say that 7n^2 is in Θ(n^2). From the graph below, we can see that 7n^2 is upper-bounded by 10n^2 and lower-bounded by n^2.

[Graph of n^2, 7n^2, and 10n^2; image not preserved.]
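To spell out the constants behind that claim (the choice of breakpoint is mine; others work too):

$$n^2 \leq 7n^2 \leq 10n^2 \quad \text{for all } n \geq 0,$$

so choosing c_1 = 1, c_2 = 10, and B = 0 satisfies the definition of Θ(n^2) above.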
We also calculated the number of steps an algorithm takes depending on the input size n in the worst case scenario, which turned out to be not as difficult as I had imagined. Of course, we were only looking at simple algorithms. I'm curious now as to what we will be learning in CSC236.
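
To give the flavour of that counting, here is a simple linear search (my own example, not necessarily the one from class), annotated with step counts:

def linear_search(L, v):
    i = 0                  # 1 step
    while i < len(L):      # checked at most len(L) + 1 times
        if L[i] == v:      # 1 step per iteration
            return i       # 1 step, if v is found
        i = i + 1          # 1 step per iteration
    return -1              # 1 step, if v is not found

In the worst case, v is not in L, the loop runs n = len(L) times, and the total is roughly 3n + 3 steps, which is in O(n).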

Tuesday, December 2, 2014

Week 7: :O :O :O

This week we finished up proof techniques and moved on to program complexity, mainly big O and the number of steps of an algorithm.

This is the first topic that really connects mathematical reasoning to computer science, which makes it interesting for me. Since my old brick of a laptop was rather slow and outdated, I understand well the importance of running time analysis. I still find it strange that big O ignores constant factors and lower order terms, so that 100000n^3 + n is considered the same as 0.1n^3, even though in "real running time" this makes a difference (at least it did on my brick).

I guess the key thing to be aware of is not to confuse the number of steps of a program with the time the program takes to run on some hardware. The same program will be slow on a slow computer and fast on a fast computer, but its running time in this sense does not differ between computers: it is measured not in units of time but in number of steps, determined solely by the algorithm. Specifically, running time looks at how quickly the number of steps carried out by the algorithm grows as the input size grows. Relating this to real life, we often want to look through a large number of items in a short time, such as when you google "computer science" and Google proudly reports that it found about 272,000,000 results in 0.52 seconds.

When we are calculating the number of steps for large input sizes, only the highest order term matters; the rest contribute only a small difference. Therefore, it is more useful to say that 100000n^3 + n grows no faster than n^3, asymptotically.
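
To make that last claim precise with the Week 8 definition (the constants here are my own choice):

$$100000n^3 + n \leq 100000n^3 + n^3 = 100001n^3 \quad \text{for all } n \geq 1,$$

so with c = 100001 and B = 1, we get that 100000n^3 + n is in O(n^3).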

I guess now we're going to have to prove that some algorithm is O(n)? Nooooooooooooooooooooooo..............

Week 6: So many proofs

This week we learned proofs for non-boolean functions such as floor and ceiling functions, and for limits, which I found a little challenging. (I am also deeply distraught that proof by illegibility is not valid. Fine... you caught me.)

If I had thought that the definition of a limit was confusing, the proof for it is something else. From last week, we learned that

$$\lim_{x \to 2} x^2 = 4$$

can be defined as

$$\forall \varepsilon \in \mathbb{R}^+,\ \exists \delta \in \mathbb{R}^+,\ \forall x \in \mathbb{R},\ |x - 2| < \delta \Rightarrow |x^2 - 4| < \varepsilon$$
so that as δ gets smaller, x approaches 2, and as ε gets smaller, f(x) = x^2 approaches 4. To prove this, we must show that for all positive values of ε, there is a positive value δ such that if |x - 2| < δ then |x^2 - 4| < ε. The key to this proof is to choose an appropriate δ, in terms of ε, that allows us to transition from |x - 2| < δ to |x^2 - 4| < ε. This is a complicated step, and involves some clever manipulations. It took several minutes of stupefied staring at the lecture notes to fully understand the choice of δ for this proof. After further stupefied staring at the mass of confusion that is the course notes, I begin to see that writing proofs is not just about thinking logically; it involves searching outside the box for a creative solution, then presenting it logically.
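
For the record, here is one standard choice of δ (perhaps not word-for-word the lecture's, but the idea is the same): take δ = min(1, ε/5). Then |x - 2| < δ ≤ 1 gives

$$|x + 2| = |(x - 2) + 4| \leq |x - 2| + 4 < 5,$$

so that

$$|x^2 - 4| = |x - 2| \cdot |x + 2| < \delta \cdot 5 \leq \frac{\varepsilon}{5} \cdot 5 = \varepsilon.$$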