Dijkstra’s algorithm finds the shortest distances from a start vertex to every other vertex in a graph. Above you can see we start at A, and from there we will find the shortest distance to all other vertices. Dijkstra’s traversal resembles breadth-first search, but it greedily expands the cheapest known vertex first, updating the cost list whenever it comes across a path that costs less than the one it already has.

From the start we assume every vertex has a distance of infinity (an impossibly high cost), so when the algorithm comes across the first edge…
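The graph from the figure isn’t reproduced here, so here’s a minimal sketch on a made-up adjacency-dict graph, using Python’s `heapq` as the priority queue (the graph and its weights are my own example, not the figure’s):

```python
import heapq

def dijkstra(graph, start):
    # Every vertex begins at "infinity" until we find a cheaper path.
    distances = {vertex: float("inf") for vertex in graph}
    distances[start] = 0
    # Min-heap of (cost so far, vertex) so we always expand the cheapest next.
    heap = [(0, start)]
    while heap:
        cost, vertex = heapq.heappop(heap)
        if cost > distances[vertex]:
            continue  # stale heap entry; a cheaper path was already found
        for neighbor, weight in graph[vertex].items():
            new_cost = cost + weight
            if new_cost < distances[neighbor]:
                distances[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return distances

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
```

Here `dijkstra(graph, "A")` returns `{"A": 0, "B": 1, "C": 3, "D": 4}` — note C is reached more cheaply through B than directly from A.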

I wanted to discuss two types of graph search algorithms, as they are important concepts in working with graph structures.

The first is known as **Depth First Search** (DFS). As you can see in the gif above, it travels down the vertices of a graph one by one via their connections. Once it reaches the end of the branch at vertex #4, it backtracks to vertex #1 to see if there is another neighboring vertex to run through. **…**
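The gif’s exact graph isn’t shown here, but the dive-then-backtrack behavior can be sketched recursively on a stand-in graph of my own (vertex #1 branches to #2 and #5, with the #2 branch ending at #4):

```python
def dfs(graph, start, visited=None):
    # Track visited vertices so cycles don't loop forever.
    if visited is None:
        visited = []
    visited.append(start)
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

graph = {1: [2, 5], 2: [3], 3: [4], 4: [], 5: []}
```

Running `dfs(graph, 1)` visits `[1, 2, 3, 4, 5]`: it runs all the way down to vertex #4, then backtracks to vertex #1 and takes the remaining branch to #5.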

Two common search methods are linear, where we iterate through a list one-by-one until we find the value we want, and binary, where we apply search logic to find our target more quickly. There are benefits to both, and I wanted to go over them today.

When implementing a search algorithm it’s always important to consider the situation you’ll be in. Linear searches are considered better when you expect to find your target value near the beginning of a list, or if it’s a rather small list to begin with.

Binary searches are better with larger data sets, and if…
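A minimal sketch of both approaches, each returning the index of the target or -1 (remember binary search only works on a sorted list):

```python
def linear_search(values, target):
    # Check each element one-by-one until we hit the target.
    for index, value in enumerate(values):
        if value == target:
            return index
    return -1

def binary_search(sorted_values, target):
    # Repeatedly halve the search window instead of scanning everything.
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_values[mid] == target:
            return mid
        if sorted_values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```

For example, `binary_search([1, 3, 5, 8, 13], 8)` finds index 3 in two halvings, where the linear scan would have checked four elements.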

You ever just study something that’s made you mad? Like programming doesn’t need to be this hard, and yet you still find yourself trying to understand Radix Sort. I’d like to promise you that everything is going to be okay, but you know what, we can’t know everything, and maybe some things are just worth sitting out.

Go home. I won’t be mad.

Now for the rest of you sadists sticking around, I’m going to explain Radix Sort. Am I going to explain it well? I dunno yet, but explaining things helps me learn, so if you don’t find this…
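Since the explanation itself got cut off, here’s a minimal sketch of the least-significant-digit flavor of radix sort for non-negative integers (one common variant; the article may have covered a different one):

```python
def radix_sort(numbers):
    # Sort non-negative integers digit by digit, least significant first.
    if not numbers:
        return numbers
    place = 1
    while place <= max(numbers):
        # One bucket per possible digit, 0 through 9.
        buckets = [[] for _ in range(10)]
        for number in numbers:
            buckets[(number // place) % 10].append(number)
        # Flatten the buckets back into one list; the order is stable,
        # which is what makes the digit-by-digit passes add up to a full sort.
        numbers = [number for bucket in buckets for number in bucket]
        place *= 10
    return numbers
```

Notice there are no comparisons between elements at all — that’s how radix sort sidesteps the usual O(n log n) lower bound for comparison sorts.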

Quick-sort is another divide and conquer algorithm, like the **merge_sort** algorithm. Quick-sort’s unique partitioning method, however, makes it oddly efficient, and thus popular.

It’s a recursive operation, and calls upon itself to break an array into multiple sub-lists. The final sub-lists will have just one element each before it breaks out of the recursion with a sorted list.

The idea is that on each level we grab an element from the sub-list, otherwise known as the **pivot**, and compare it to the rest of the list, pushing smaller values to the left and greater values to the right. Doing this…
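The pivot-and-partition idea above can be sketched like this (a simple list-building version chosen for readability; in-place partitioning is the more memory-efficient variant):

```python
import random

def quick_sort(values):
    # Base case: a list of zero or one elements is already sorted.
    if len(values) <= 1:
        return values
    # Grab a random element as the pivot.
    pivot = random.choice(values)
    # Partition: smaller values left, equal in the middle, greater right.
    left = [v for v in values if v < pivot]
    middle = [v for v in values if v == pivot]
    right = [v for v in values if v > pivot]
    # Recurse on each side, then stitch the pieces back together.
    return quick_sort(left) + middle + quick_sort(right)
```

Picking the pivot at random keeps the expected running time at O(n log n) even on already-sorted input, which would be the worst case if we always picked the first element.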

Continuing my series on Python sorting algorithms, today I’m going to look at Merge Sort. Merge sort is a divide and conquer strategy: it repeatedly breaks your list down into halves until each piece has just one element, then reconstructs the list in order. It even uses my favorite technique, recursion.
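The split-then-rebuild process can be sketched as:

```python
def merge_sort(values):
    # Base case: one element (or none) is already sorted.
    if len(values) <= 1:
        return values
    # Divide: split the list into halves and sort each recursively.
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    # Conquer: merge the two sorted halves back together in order.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One half may still have leftovers; they're already in order.
    return merged + left[i:] + right[j:]
```

The merge step is where the work happens — each level of recursion touches every element once, giving the familiar O(n log n) running time.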

In my next few articles I want to talk about sorting algorithms and how they work. When working with datasets you’ll want to organize them in all sorts of ways. Here I’ll talk about bubble sort and its basic construction.

It’s a fairly simple concept. Bubble sort will iterate through an array and compare pairs. If the value at the left index is bigger than the value at the right index, the two values swap. The algorithm then moves down one index and compares the next pair.

The complication with this is that on each pass the algorithm is just pushing the largest value…
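The compare-and-swap passes described above can be sketched as (sorting in place, as bubble sort typically does):

```python
def bubble_sort(values):
    # Each pass bubbles the largest remaining value to the end,
    # so the sorted region grows from the right.
    for end in range(len(values) - 1, 0, -1):
        for i in range(end):
            if values[i] > values[i + 1]:
                # Left value is bigger than the right one: swap the pair.
                values[i], values[i + 1] = values[i + 1], values[i]
    return values
```

Because only one value is guaranteed into place per pass, the nested loops make this O(n²) — fine for small lists, painful for big ones.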

Recursion is an interesting and tricky concept in our Python toolbox. Basically, it’s when code goes meta and calls upon itself until it meets a base case that lets it exit. These sorts of algorithms can be used, for example, to iterate over a list.

So why use recursion when we already know loops? **Recursion** can be simpler than an iterative-loop solution, usually because it exploits a structural aspect of the problem in a way that a loop cannot express as naturally.

Loops iterate over controlled variables, using your typical control statements (*if, for, while, do while*), and are…
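A tiny example of the base-case-plus-self-call shape, summing a list (a toy problem — Python’s built-in `sum` does this for real):

```python
def sum_list(values):
    # Base case: an empty list sums to zero, so the recursion exits.
    if not values:
        return 0
    # Recursive case: the first element plus the sum of everything after it.
    return values[0] + sum_list(values[1:])
```

So `sum_list([1, 2, 3, 4])` unwinds as `1 + (2 + (3 + (4 + 0)))`, giving 10. The structure of the problem (a list is a head plus a smaller list) maps straight onto the structure of the code.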

Coding is a powerful tool that can accomplish some really complex computations. It’s important that we write efficient code, even more so when dealing with large swaths of data, as that can take some real processing power, and not all computers are built equally. So when we talk about how fast a program is, it’s not about hardware (although that helps) but about the code itself. How do we measure that? With asymptotic notation. Don’t mind the alien space language it’s written in, because it’s not as bad as they want you to think it is. Let’s demystify it together.

…
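To make the idea concrete before the notation: here’s the same question — does this list contain a duplicate? — answered two ways. The functions are my own example, not from the article, but they show how the notation describes the code, not the machine:

```python
def has_duplicate_quadratic(values):
    # O(n^2): compare every pair of elements; the nested loop is the tell.
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False

def has_duplicate_linear(values):
    # O(n): one pass, since set membership checks are constant time on average.
    seen = set()
    for value in values:
        if value in seen:
            return True
        seen.add(value)
    return False
```

Both return the same answers, but double the input and the first does roughly four times the work while the second does roughly twice — that difference in growth is exactly what asymptotic notation captures.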

Welcome to part II of my graph series. Here I’ll explore the nuts and bolts of setting up the two classes (Vertex and Graph) for a graph structure.

Now like any good programmer, before we jump in let’s familiarize ourselves with what we’ll want our classes to do.

Our vertices, much like nodes, will hold our information. They will know their immediate neighbors, and we can also set edge weights. As we build our structure we’ll also want the ability to add an edge to our graph.

Now what about our Graph class? It acts much like the…
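A minimal sketch of how the two classes might look — the method names and weight handling here are my assumptions, not necessarily the article’s exact code:

```python
class Vertex:
    def __init__(self, value):
        self.value = value
        # Map each neighboring Vertex to the weight of the connecting edge.
        self.neighbors = {}

    def add_edge(self, vertex, weight=0):
        self.neighbors[vertex] = weight


class Graph:
    def __init__(self):
        # Look vertices up by their value.
        self.vertices = {}

    def add_vertex(self, value):
        self.vertices[value] = Vertex(value)

    def add_edge(self, from_value, to_value, weight=0):
        # The Graph delegates to the Vertex, which stores its own neighbors.
        self.vertices[from_value].add_edge(self.vertices[to_value], weight)
```

Usage would look like `g = Graph(); g.add_vertex("A"); g.add_vertex("B"); g.add_edge("A", "B", 3)` — B now appears in A’s neighbors with weight 3. An undirected graph would simply add the edge in both directions.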

I’m a web developer, and data scientist by hobby. Yes, it can be a hobby. I blog about all things code.