removing principles of code chapter, updating book #250

Merged
merged 5 commits on Jul 17, 2018
31 changes: 8 additions & 23 deletions SUMMARY.md
@@ -1,33 +1,21 @@
# Summary

* [Algorithm Archive](README.md)
* [TODO](TODO.md)
* [Introduction](chapters/introduction/introduction.md)
* [A Personal Note](chapters/introduction/my_introduction_to_hobby_programming.md)
* [How To Contribute](chapters/introduction/how_to_contribute.md)
* [Principles of Code](chapters/principles_of_code/principles_of_code.md)
* [Choosing A Language](chapters/principles_of_code/choosing_a_language/choosing_a_language.md)
* [Compiled Languages](chapters/principles_of_code/choosing_a_language/compiled/compiled.md)
* [Makefiles](chapters/principles_of_code/choosing_a_language/compiled/makefiles.md)
* [FORTRAN](chapters/principles_of_code/choosing_a_language/compiled/FORTRAN.md)
* [Building Blocks](chapters/principles_of_code/building_blocks/building_blocks.md)
* [Variables and Types](chapters/principles_of_code/building_blocks/variables.md)
* [Conditions](chapters/principles_of_code/building_blocks/conditions.md)
* [Loops](chapters/principles_of_code/building_blocks/loops.md)
* [Functions](chapters/principles_of_code/building_blocks/functions.md)
* [Classes](chapters/principles_of_code/building_blocks/classes.md)
* [Stacks and Queues](chapters/principles_of_code/building_blocks/stacks.md)
* [Bit Logic](chapters/principles_of_code/building_blocks/bitlogic.md)
* [Version Control](chapters/principles_of_code/version_control.md)
* [Complexity Notation](chapters/principles_of_code/notation/notation.md)
* [Convolutions](chapters/algorithms/convolutions/convolutions.md)
* [Taylor Series](chapters/general/taylor_series_expansion/taylor_series_expansion.md)
* [Version Control](chapters/introduction/version_control.md)
* [Data Structures](chapters/data_structures/data_structures.md)
* [Stacks and Queues](chapters/data_structures/stacks_and_queues/stacks_and_queues.md)
* [Mathematical Background](chapters/general/mathematical_background/mathematical_background.md)
* [Complexity Notation](chapters/general/notation/notation.md)
* [Bit Logic](chapters/general/bitlogic/bitlogic.md)
* [Convolutions](chapters/algorithms/convolutions/convolutions.md)
* [Taylor Series](chapters/general/taylor_series_expansion/taylor_series_expansion.md)
* [Sorting and Searching](chapters/general/sorting_and_searching/sorting_and_searching.md)
* [Bubble Sort](chapters/algorithms/bubble_sort/bubble_sort.md)
* [Bogo Sort](chapters/algorithms/bogo_sort/bogo_sort.md)
* [Tree Traversal](chapters/algorithms/tree_traversal/tree_traversal.md)
* [Euclidean Algorithm](chapters/algorithms/euclidean_algorithm/euclidean_algorithm.md)
* [Multiplication](chapters/general/multiplication/multiplication.md)
* [Monte Carlo](chapters/algorithms/monte_carlo_integration/monte_carlo_integration.md)
* [Matrix Methods](chapters/general/matrix_methods/matrix_methods.md)
* [Gaussian Elimination](chapters/algorithms/gaussian_elimination/gaussian_elimination.md)
@@ -36,16 +24,13 @@
* [Gift Wrapping](chapters/general/gift_wrapping/gift_wrapping.md)
* [Jarvis March](chapters/algorithms/jarvis_march/jarvis_march.md)
* [Graham Scan](chapters/algorithms/graham_scan/graham_scan.md)
* [Chan's Algorithm](chapters/algorithms/chans_algorithm/chans_algorithm.md)
* [FFT](chapters/algorithms/cooley_tukey/cooley_tukey.md)
* [Decision Problems](chapters/general/decision_problems/decision_problems.md)
* [Stable Marriage Problem](chapters/algorithms/stable_marriage_problem/stable_marriage_problem.md)
* [Differential Equation Solvers](chapters/general/differential_equations/differential_equations.md)
* [Forward Euler Method](chapters/algorithms/forward_euler_method/forward_euler_method.md)
* [Backward Euler Methods](chapters/algorithms/backward_euler_method/backward_euler_method.md)
* [Physics Solvers](chapters/general/physics_solvers/physics_solvers.md)
* [Verlet Integration](chapters/algorithms/verlet_integration/verlet_integration.md)
* [Barnes-Hut](chapters/algorithms/barnes_hut_algorithm/barnes_hut_algorithm.md)
* [Quantum Systems](chapters/general/quantum_systems/quantum_systems.md)
* [Split-Operator Method](chapters/algorithms/split-operator_method/split-operator_method.md)
* [Data Compression](chapters/general/data_compression/data_compression.md)
29 changes: 0 additions & 29 deletions TODO.md

This file was deleted.

@@ -1,4 +1,4 @@
### Backward Euler Method
# Backward Euler Method

Unlike the forward Euler Method above, the backward Euler Method is an *implicit method*, which means that it results in a system of equations to solve. Luckily, we know how to solve systems of equations (*hint*: [Thomas Algorithm](../thomas_algorithm/thomas_algorithm.md), [Gaussian Elimination](../gaussian_elimination/gaussian_elimination.md)).

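As a rough illustration of what "implicit" means in practice, here is a minimal Python sketch (purely illustrative, not this chapter's example code) for the test equation $$\frac{dy}{dt} = -3y$$, where the implicit update can be rearranged by hand instead of calling a full linear solver:

```python
# Minimal backward Euler sketch for dy/dt = -3*y (illustrative only).
# The implicit update y_new = y_old + dt * (-3 * y_new) is rearranged
# algebraically here; a general system would need a linear solver.
def backward_euler(y0, dt, n_steps):
    y = y0
    trajectory = [y]
    for _ in range(n_steps):
        y = y / (1 + 3 * dt)  # from y_new * (1 + 3*dt) = y_old
        trajectory.append(y)
    return trajectory

print(backward_euler(1.0, 0.01, 5))
```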
@@ -1,7 +1,7 @@
# The Forward Euler Method

The Euler methods are some of the simplest methods to solve ordinary differential equations numerically.
They introduce a new set of methods called the [Runge Kutta](../runge_kutta_methods/runge_kutta_methods.md) methods, which will be discussed in the near future!
They introduce a new set of methods called the Runge Kutta methods, which will be discussed in the near future!

As a physicist, I tend to understand things through methods that I have learned before.
In this case, it makes sense for me to see Euler methods as extensions of the [Taylor Series Expansion](../general/taylor_series_expansion/taylor_series_expansion.md).
@@ -48,7 +48,7 @@ $$
$$

Now, solving this set of equations in this way is known as the *forward* Euler Method.
In fact, there is another method known as the [*backward* Euler Method](../backward_euler_method/backward_euler_method.md), which we will get to soon enough.
In fact, there is another method known as the *backward* Euler Method, which we will get to soon enough.
For now, it is important to note that the error of these methods depends on the timestep chosen.

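To make the explicit update concrete, here is a minimal Python sketch (an illustration under the same assumptions, not this chapter's example code) for $$\frac{dy}{dt} = -3y$$:

```python
# Minimal forward Euler sketch for dy/dt = -3*y (illustrative only).
# Each step uses only the current value, so the update is explicit:
# y_{n+1} = y_n + dt * f(y_n).
def forward_euler(y0, dt, n_steps):
    y = y0
    trajectory = [y]
    for _ in range(n_steps):
        y = y + dt * (-3 * y)
        trajectory.append(y)
    return trajectory

print(forward_euler(1.0, 0.01, 5))
```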
<p>
@@ -162,7 +162,7 @@ Unfortunately, this has not yet been implemented in LabVIEW, so here's Julia code
{% endmethod %}


Even though this method is used more than the simple Verlet method mentioned above, it unfortunately has an error term of $$\mathcal{O}(\Delta t^2)$$, which is two orders of magnitude worse. That said, if you want to have a simulation with many objects that depend on one another --- like a gravity simulation --- the Velocity Verlet algorithm is a handy choice; however, you may have to play further tricks to allow everything to scale appropriately. These types of simulations are sometimes called *n-body* simulations and one such trick is the [Barnes-Hut](../barnes_hut_algorithm/barnes_hut_algorithm.md) algorithm, which cuts the complexity of n-body simulations from $$\sim \mathcal{O}(n^2)$$ to $$\sim \mathcal{O}(n\log(n))$$.
Even though this method is used more than the simple Verlet method mentioned above, it unfortunately has an error term of $$\mathcal{O}(\Delta t^2)$$, which is two orders of magnitude worse. That said, if you want to have a simulation with many objects that depend on one another --- like a gravity simulation --- the Velocity Verlet algorithm is a handy choice; however, you may have to play further tricks to allow everything to scale appropriately. These types of simulations are sometimes called *n-body* simulations and one such trick is the Barnes-Hut algorithm, which cuts the complexity of n-body simulations from $$\sim \mathcal{O}(n^2)$$ to $$\sim \mathcal{O}(n\log(n))$$.

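As a quick sketch of a single velocity Verlet update in Python (assuming a simple acceleration function purely for illustration; the archive's full example code follows below):

```python
# Minimal velocity Verlet step (illustrative only).
# acc(x) returns the acceleration at position x; constant gravity here.
def acc(x):
    return -9.81

def velocity_verlet_step(x, v, dt):
    a = acc(x)
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = acc(x_new)
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

print(velocity_verlet_step(5.0, 0.0, 0.01))
```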
## Example Code

4 changes: 4 additions & 0 deletions chapters/data_structures/data_structures.md
@@ -0,0 +1,4 @@
# Data Structures

This is a book about algorithms.
The fundamental building blocks of algorithms are data structures, and thus as more algorithms are added to the Archive, more data structures will be added to this section.
@@ -1,10 +1,10 @@
### Bit Logic
# Bit Logic

We write code in a language that makes a little sense to us, but does not make sense at all to our computer without a compiler to transform the code we write into a language the computer can understand.
In the end, whenever we write code, all of the data structures we write are transformed into binary strings of 1's and 0's to be interpreted by our computer.
That said, it's not always obvious how this happens, so let's start with the simple case of integer numbers.

#### Integers
## Integers
For integer numbers, 0 is still 0 and 1 is still 1; however, for 2, we need to use 2 digits because binary only has 0's and 1's. When we get to 4, we'll need 3 digits and when we get to 8, we'll need 4. Every time we cross a power of 2, we'll need to add a new digit. Here's a table of the first 10 integers in binary:

| Integer Number | Binary Number |
@@ -45,7 +45,7 @@ Another method is to "roll over" to negative numbers when the bit count gets too

Ultimately, integer numbers are not that difficult to deal with in binary, so let's move onto something more complicated: *floating-point numbers!*

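If you would like to see these representations for yourself, here is a small Python sketch (purely illustrative, not part of the archive's example code) that prints the binary form of the first few integers along with an 8-bit "roll over" view of a negative number:

```python
# Illustrative only: binary forms of small integers, plus -1 viewed as
# an unsigned 8-bit pattern (the "roll over" behaviour described above).
for n in range(10):
    print(f"{n:>2} -> {n:b}")

print(f"-1 as 8 bits -> {-1 & 0xFF:08b}")  # prints 11111111
```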
#### Floating-point Numbers
## Floating-point Numbers
Floats are numbers with a decimal point.
9.125 is a float. 9.000 is a float. 9 is an integer.
Here are a few floats and their integer representations:
@@ -1,4 +1,4 @@
### Differential Equations
# Differential Equations

Differential equations lie at the heart of many different systems in physics, economics, biology, chemistry, and many other areas of research and engineering.
Here, we discuss many different methods to solve particular sets of differential equations.
Original file line number Diff line number Diff line change
@@ -0,0 +1,13 @@
# Mathematical Background

No matter who you ask, programming requires at least a little math.
That said, for most programmers, it doesn't require *too* much.
For the most part, depending on your specialty, you will probably not see too much calculus or differential equations.
Honestly, you could probably get away with what you learned in high school.

However, this is a book about algorithms, and algorithms sometimes require a deeper understanding of mathematics.
This section attempts to provide the mathematical foundations that you will need to understand certain algorithms.
As we add new algorithms and need new mathematical tools, we will add them to this section.

A notable exception to this rule will be in the case of classes of algorithms that require domain-specific knowledge, like quantum simulations or bioinformatics.
In those cases, we will place the mathematical methods in more relevant sections.
@@ -1,4 +1,4 @@
### Complexity Notation
# Complexity Notation

Algorithms are designed to solve problems.
Over time, new algorithms are created to solve problems that old algorithms have already solved.
@@ -25,7 +25,7 @@ Of the three Big $$O$$ is used the most, and is used in conversation to mean tha
Unfortunately, at this point, these notations might be a little vague.
In fact, they were incredibly vague to me for a long time, and it wasn't until I saw the notations in action that it all started to make sense, so that's what this section is about: providing concrete examples to better understand computational complexity notation.

### Constant Time
## Constant Time

Let's write some code that reads in an array of length `n` and runs with constant time:

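The chapter's own code block is collapsed in this view; a rough sketch of what such a constant-time function might look like (an assumption, not necessarily the chapter's exact example) is:

```python
# Constant-time sketch: the work done here does not depend on len(array).
def constant_time(array):
    return 2 * array[0] + 1  # one fixed bundle of operations

print(constant_time([3, 1, 4, 1, 5]))
```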
@@ -66,7 +66,7 @@ Just because this is common practice does not mean it's the *best* practice.
I have run into several situations where knowing the constants has saved me hours of run-time, so keep in mind that all of these notations are somewhat vague and dependent on a number of auxiliary factors.
Still, that doesn't mean the notation is completely useless. For now, let's keep moving forward with some more complicated (and useful) examples!

### Linear Time
## Linear Time

Now we are moving into interesting territory!
Let's consider the following function:
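That function is also collapsed here; a sketch in the same spirit (an assumption, not the chapter's exact code) touches every element exactly once, so the work grows linearly with `n`:

```python
# Linear-time sketch: one pass over the array, so work grows with n.
def linear_time(array):
    total = 0
    for value in array:
        total += value
    return total

print(linear_time([3, 1, 4, 1, 5]))
```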
@@ -127,7 +127,7 @@ That said, there have been several cases throughout the history of algorithms wh
For this reason, if you can avoid writing nested `for` loops, you certainly should!
However, there are several cases where this cannot be avoided, so don't spend too much time worrying about it unless runtime becomes an issue!

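For a concrete picture of why nested loops are costly, here is a small sketch of an $$O(n^2)$$ pair comparison (an illustration, not code from the chapter):

```python
# Quadratic-time sketch: every element is compared against every later
# element, so doubling n roughly quadruples the work.
def count_duplicate_pairs(array):
    count = 0
    for i in range(len(array)):
        for j in range(i + 1, len(array)):
            if array[i] == array[j]:
                count += 1
    return count

print(count_duplicate_pairs([1, 2, 1, 3, 2]))  # 2 duplicate pairs
```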
### Exponential and Logarithmic Time
## Exponential and Logarithmic Time
These are two more cases that come up all the time and often share a common theme: *recursion*.
Generally speaking, logarithmic algorithms are some of the fastest algorithms out there, while exponential algorithms are some of the slowest.
Unfortunately, this means that recursion can be either the most useful tool in existence for realizing certain algorithms or the most harmful one, depending on your problem.
@@ -165,7 +165,7 @@ If we split these new arrays, we have 4 arrays of 2, and if we split these by tw
This is as far as we can go, and we ended up dividing the array 3 times to get to this point.
$$3 = \log_2(8)$$, so this function runs with a logarithmic number of operations.

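To make the halving argument concrete, here is a tiny Python sketch (illustrative, not the chapter's code) that counts how many times a length can be split in two:

```python
# Logarithmic-time sketch: repeatedly halve the length until a single
# element remains; the number of splits grows like log2(n).
def count_splits(n):
    splits = 0
    while n > 1:
        n //= 2
        splits += 1
    return splits

print(count_splits(8))  # 3, since log2(8) = 3
```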
### Putting it all together
## Putting it all together

We've outlined the most common complexity cases of different algorithms here, but at this point things might still be unclear.
Which is better: $$O(n^2)$$ or $$O(\log(n))$$?
2 changes: 1 addition & 1 deletion chapters/general/physics_solvers/physics_solvers.md
@@ -1,4 +1,4 @@
### Physics Solvers
# Physics Solvers

There are certain algorithms that have been uniquely created to solve particular physical systems.
For example, the kinematic equation can be solved with Verlet integration and also with more general differential equation solvers.
4 changes: 2 additions & 2 deletions chapters/introduction/how_to_contribute.md
@@ -1,8 +1,8 @@
## How to Contribute to the Algorithm Archive
# How to Contribute to the Algorithm Archive

The *Algorithm Archive* is an effort to learn about and teach algorithms as a community.
As such, it requires a certain level of trust between community members.
For the most part, the collaboration can be done via GitHub and gitbook, so it is important to understand the basics of [version control](../principles_of_code/version_control.md).
For the most part, the collaboration can be done via GitHub and gitbook, so it is important to understand the basics of [version control](version_control.md).
Ideally, all code provided by the community will be submitted via pull requests and discussed accordingly; however, I understand that many individuals are new to collaborative projects, so I will allow submissions by other means (comments, tweets, etc...).
As this project grows in size, it will be harder and harder to facilitate these submissions.
In addition, by submitting in any way other than pull requests, I cannot guarantee I will be able to list you as a collaborator (though I will certainly do my best to update the `CONTRIBUTORS.md` file accordingly).
@@ -1,4 +1,4 @@
## Git and Version Control
# Git and Version Control

I am a fan of open-source software. It allows users to see inside the code running on their system and mess around with it if they like.
Unlike proprietary software, open source software allows any user to learn the entire codebase from the ground up, and that's an incredibly exciting prospect!

This file was deleted.

26 changes: 0 additions & 26 deletions chapters/principles_of_code/building_blocks/classes.md

This file was deleted.
