#interviewing

Dijkstra's Algorithm

Introduction Dijkstra’s Algorithm finds the shortest path from a source vertex to all other vertices in a weighted graph with non-negative edge weights. It’s the algorithm behind GPS navigation, network routing, and any system that needs to find the cheapest path between two points. The core idea: greedily pick the unvisited vertex with the smallest known distance, then update its neighbors’ distances. This is essentially BFS with a priority queue instead of a regular queue — the priority queue ensures we always process the closest vertex next. Read more →
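The greedy loop described above can be sketched in Python. This is a minimal sketch, not code from the post; the adjacency-list shape `{vertex: [(neighbor, weight), ...]}` is an assumption.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph is assumed to be
    {vertex: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]  # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale entry; a shorter path was already found
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

Swapping the min-heap for a plain FIFO queue turns this into BFS, which is exactly the relationship the intro describes.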

March 31, 2026

Tries (Prefix Trees)

Introduction A Trie (pronounced “try”) is a tree-like data structure used to store and search strings efficiently. Each node represents a single character, and paths from root to node spell out prefixes. It’s the data structure behind autocomplete, spell checkers, and IP routing tables. The name comes from “retrieval” — tries are optimized for looking up strings by prefix. Unlike a hash table which gives you O(1) lookup for exact keys, a trie gives you O(m) lookup (where m is the key length) plus the ability to find all keys that share a prefix. Read more →
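The character-per-node structure and O(m) prefix lookup can be sketched as follows; this is an illustrative implementation, not code from the tutorial.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):          # O(m) for a word of length m
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search(self, word):          # exact-match lookup, O(m)
        node = self._walk(word)
        return node is not None and node.is_word

    def starts_with(self, prefix):   # is any stored word under this prefix?
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self.root
        for ch in s:
            if ch not in node.children:
                return None
            node = node.children[ch]
        return node
```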

March 31, 2026

Selection Sort

Introduction Selection Sort works by repeatedly finding the minimum element from the unsorted portion and placing it at the beginning. It’s like sorting a hand of cards by scanning for the smallest card each time and moving it to the front. Selection Sort is simple to understand and implement, but it’s always O(n²) — even on sorted input. It makes the minimum number of swaps (at most n-1), which can matter when writes are expensive. Read more →
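The scan-for-the-minimum idea can be sketched as below (an illustrative version, not the tutorial's code); note the swap only happens once per pass, which is the write-efficiency the intro mentions.

```python
def selection_sort(arr):
    a = list(arr)  # sort a copy; always O(n^2) comparisons
    for i in range(len(a) - 1):
        min_idx = i
        for j in range(i + 1, len(a)):  # scan unsorted suffix for minimum
            if a[j] < a[min_idx]:
                min_idx = j
        if min_idx != i:
            a[i], a[min_idx] = a[min_idx], a[i]  # at most n-1 swaps total
    return a
```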

March 31, 2026

Linear Search

Introduction Linear Search is the simplest searching algorithm — it checks every element in a collection one by one until it finds the target or reaches the end. It’s like looking for a specific book on an unsorted shelf by scanning from left to right. It’s not the fastest search, but it works on any collection — sorted or unsorted, arrays or linked lists. Understanding Linear Search also helps you appreciate why Binary Search is so much faster on sorted data. Read more →
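The left-to-right scan is short enough to show in full; this is a generic sketch rather than the tutorial's exact code.

```python
def linear_search(items, target):
    """Return the index of target, or -1 if absent.
    Works on sorted or unsorted input; O(n) time, O(1) space."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1
```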

March 31, 2026

Insertion Sort

Introduction Insertion Sort builds a sorted array one element at a time by picking each element and inserting it into its correct position among the already-sorted elements. It’s exactly how most people sort a hand of playing cards — you pick up one card at a time and slide it into the right spot. Despite being O(n²) in the worst case, Insertion Sort is genuinely useful. It’s the fastest algorithm for small arrays and nearly-sorted data, which is why many standard library sort implementations (like TimSort) use it as a building block. Read more →
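The card-sorting motion, picking the next element and sliding it left into place, can be sketched as follows (an illustrative version, not the tutorial's code):

```python
def insertion_sort(arr):
    a = list(arr)
    for i in range(1, len(a)):
        key = a[i]               # the "card" being inserted
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]      # shift larger elements one slot right
            j -= 1
        a[j + 1] = key
    return a
```

On nearly-sorted input the inner while loop barely runs, which is why this is close to O(n) in that case.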

March 31, 2026

Bubble Sort

Introduction Bubble Sort is the simplest sorting algorithm. It repeatedly steps through the array, compares adjacent elements, and swaps them if they’re in the wrong order. The largest unsorted element “bubbles up” to its correct position on each pass — like air bubbles rising to the surface of water. Nobody uses Bubble Sort in production (it’s too slow), but it’s worth understanding because it’s often the first sorting algorithm taught, it appears in interviews as a baseline comparison, and optimizing it teaches useful concepts. Read more →
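The pass-and-swap loop, including the early-exit optimization the intro alludes to, can be sketched as follows (illustrative, not the tutorial's exact code):

```python
def bubble_sort(arr):
    a = list(arr)
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):  # last i elements already "bubbled" into place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:             # no swaps means already sorted: stop early
            break
    return a
```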

March 31, 2026

Graph Representation

Introduction Before you can run BFS, DFS, or any other graph algorithm, you need to decide how to store the graph in memory. That choice affects the time and space complexity of every operation you perform on it. There are two standard representations: the adjacency list and the adjacency matrix. This tutorial covers both, when to use each, and how to handle directed, undirected, and weighted graphs. You should be familiar with Graphs Introduction and Arrays before reading this. Read more →
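The two representations can be sketched side by side for a small directed graph; the vertex labels and sizes here are illustrative, not from the tutorial.

```python
# Adjacency list: O(V + E) space, fast iteration over a vertex's neighbors.
adj_list = {
    0: [1, 2],
    1: [2],
    2: [],
}

# Adjacency matrix: O(V^2) space, O(1) edge-existence lookup.
n = 3
adj_matrix = [[0] * n for _ in range(n)]
for u, neighbors in adj_list.items():
    for v in neighbors:
        adj_matrix[u][v] = 1  # for an undirected graph, also set adj_matrix[v][u]
```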

March 31, 2026

Merge Sort

Introduction Merge Sort is a divide-and-conquer sorting algorithm that splits an array in half, recursively sorts each half, then merges the two sorted halves back together. It’s like sorting a deck of cards by splitting it into smaller and smaller piles until each pile has one card, then merging piles together in order. Unlike Quick Sort, which can degrade to O(n²) with bad pivot choices, Merge Sort guarantees O(n log n) in every case. Read more →
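The split-sort-merge recursion can be sketched as below (an illustrative version, not the tutorial's code); the merge step is where sorted order is actually produced.

```python
def merge_sort(arr):
    if len(arr) <= 1:                 # base case: a one-card pile is sorted
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # recursively sort each half
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0           # merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```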

March 31, 2026

Quick Sort

Introduction Quick Sort is a divide-and-conquer sorting algorithm that works by picking a “pivot” element and partitioning the array so that everything smaller goes to the left and everything larger goes to the right. Then it recursively sorts each side. It’s like organizing a messy bookshelf by picking a book in the middle, putting everything shorter to the left and taller to the right, then repeating for each side. Quick Sort is one of the most widely used sorting algorithms in practice — it’s the default in many standard library implementations because of its excellent average-case performance and low overhead. Read more →
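The pivot-and-partition idea can be sketched very compactly; this readable version allocates new lists on each call (production implementations partition in place), and is illustrative rather than the tutorial's code.

```python
def quick_sort(arr):
    if len(arr) <= 1:
        return list(arr)
    pivot = arr[len(arr) // 2]                  # middle element as pivot
    smaller = [x for x in arr if x < pivot]     # everything smaller goes left
    equal = [x for x in arr if x == pivot]
    larger = [x for x in arr if x > pivot]      # everything larger goes right
    return quick_sort(smaller) + equal + quick_sort(larger)
```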

March 31, 2026

Dynamic Programming

Introduction Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them into overlapping subproblems and storing their results so you never solve the same subproblem twice. Think of it like taking notes during a long calculation — instead of redoing work, you look up answers you’ve already figured out. If you’ve worked through Recursion, you’ve already seen the problem DP solves. Remember the naive recursive Fibonacci? It recalculates the same values over and over. Read more →
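The note-taking idea applied to Fibonacci, the example the intro refers back to, can be sketched as a memoized version (illustrative, not the tutorial's code):

```python
def fib(n, memo=None):
    """Top-down DP: cache each subproblem so fib(k) is computed only once,
    turning the naive exponential recursion into O(n)."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```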

March 31, 2026

Recursion

Introduction Recursion is a programming technique where a function calls itself to solve a problem by breaking it down into smaller, similar subproblems. Think of Russian nesting dolls—each doll contains a smaller version of itself until you reach the smallest one that doesn’t open. Recursion is fundamental to computer science and appears in tree traversals, sorting algorithms, mathematical computations, and backtracking problems. While it can seem magical at first, understanding recursion unlocks powerful problem-solving techniques. Read more →
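The nesting-doll structure, a base case plus a self-call on a smaller input, can be sketched with the classic factorial example (illustrative, not necessarily the tutorial's code):

```python
def factorial(n):
    if n <= 1:                     # base case: the doll that doesn't open
        return 1
    return n * factorial(n - 1)    # recursive case: a smaller subproblem
```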

November 19, 2025

Depth-First Search (DFS)

Introduction Depth-First Search (DFS) is a fundamental graph traversal algorithm that explores as far as possible along each branch before backtracking. Think of it like exploring a maze by always taking the first path you see, going as deep as possible, then backtracking when you hit a dead end. DFS is essential for detecting cycles, topological sorting, finding connected components, and solving maze/puzzle problems. Unlike BFS which uses a queue, DFS uses a stack (or recursion, which implicitly uses the call stack). Read more →
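The stack-based, go-deep-then-backtrack behavior can be sketched iteratively; this is an illustrative version, and the adjacency-list shape `{vertex: [neighbors]}` is an assumption.

```python
def dfs(graph, start):
    """Iterative DFS with an explicit stack; returns vertices in visit order."""
    visited, order = set(), []
    stack = [start]
    while stack:
        u = stack.pop()                   # LIFO: go deep before going wide
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        stack.extend(reversed(graph[u]))  # reversed so neighbors pop in list order
    return order
```

Replacing the explicit stack with a recursive call gives the same traversal via the call stack.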

November 19, 2025

Breadth-First Search (BFS)

Introduction Breadth-First Search (BFS) is a fundamental graph traversal algorithm that explores vertices level by level, visiting all neighbors of a vertex before moving to the next level. Think of it like ripples spreading out in water—you explore all vertices at distance 1, then distance 2, and so on. BFS is essential for finding shortest paths in unweighted graphs, level-order traversal of trees, and solving maze/puzzle problems. It uses a queue to maintain the order of exploration. Read more →
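The ripple-by-ripple exploration can be sketched with a queue; this is an illustrative version, and the adjacency-list shape `{vertex: [neighbors]}` is an assumption.

```python
from collections import deque

def bfs(graph, start):
    """Level-by-level traversal; returns vertices in visit order."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        u = queue.popleft()          # FIFO: finish a level before the next
        order.append(u)
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order
```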

November 19, 2025

Graphs - Introduction

Introduction A graph is a non-linear data structure consisting of vertices (nodes) connected by edges. Unlike trees, graphs can have cycles, multiple paths between nodes, and connections in any direction. Graphs model relationships and networks, making them fundamental to solving real-world problems in social networks, maps, computer networks, and more. Graphs are one of the most versatile and powerful data structures in computer science. By the end of this tutorial, you’ll understand graph terminology, types of graphs, representation methods, and when to use them. Read more →

November 19, 2025

Heaps and Priority Queues

Introduction A heap is a specialized tree-based data structure that satisfies the heap property: in a max-heap, parent nodes are always greater than or equal to their children; in a min-heap, parent nodes are always less than or equal to their children. Heaps are the foundation for priority queues and enable efficient algorithms like heap sort and finding the kth largest element. Unlike Binary Search Trees, heaps don’t maintain a fully sorted order—they only guarantee the min/max element is at the root. Read more →
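The min-at-the-root guarantee is exactly what Python's standard-library `heapq` provides; a small sketch (the example values are illustrative):

```python
import heapq

nums = [7, 2, 9, 4]
heapq.heapify(nums)             # O(n): rearrange in place; nums[0] is now the min
heapq.heappush(nums, 1)         # O(log n) insert
smallest = heapq.heappop(nums)  # O(log n) remove-min

# kth largest (here k = 2), a classic heap application
top_k = heapq.nlargest(2, [5, 1, 8, 3])
```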

November 19, 2025

Binary Search Trees (BST)

Introduction A Binary Search Tree (BST) is a specialized binary tree with an important ordering property: for every node, all values in the left subtree are less than the node’s value, and all values in the right subtree are greater. This simple property enables efficient searching, insertion, and deletion operations. BSTs are one of the most important data structures in computer science, forming the basis for efficient ordered collections, database indexing, and many algorithmic solutions. Read more →
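The ordering property (smaller left, larger right) drives both insertion and search; a minimal sketch, not the tutorial's code:

```python
class BSTNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    if root is None:
        return BSTNode(value)
    if value < root.value:
        root.left = insert(root.left, value)    # smaller values go left
    elif value > root.value:
        root.right = insert(root.right, value)  # larger values go right
    return root

def contains(root, value):
    if root is None:
        return False
    if value == root.value:
        return True
    # the ordering property lets us discard one subtree at every step
    return contains(root.left if value < root.value else root.right, value)
```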

November 19, 2025

Binary Trees

Introduction A binary tree is a hierarchical data structure where each node has at most two children, referred to as the left child and right child. Unlike linear data structures like arrays or linked lists, binary trees organize data in a hierarchical, branching structure that enables efficient searching, sorting, and hierarchical representation. Binary trees are fundamental to computer science and appear in file systems, databases, compilers, and countless algorithms. By the end of this tutorial, you’ll understand different types of binary trees, how to traverse them, and when to use them. Read more →
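The at-most-two-children structure and a basic traversal can be sketched as follows (illustrative, not the tutorial's code):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left     # left child, or None
        self.right = right   # right child, or None

def inorder(node):
    """Visit left subtree, then the node, then the right subtree."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

#       1
#      / \
#     2   3
root = Node(1, Node(2), Node(3))
```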

November 19, 2025

Queues

Introduction A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle. Think of it like a line at a coffee shop: the first person to join the line is the first person to be served. This fair ordering makes queues perfect for managing tasks, requests, and data that needs to be processed in the order it arrives. Queues are everywhere in computing—from managing print jobs and handling network requests to implementing breadth-first search algorithms. Read more →
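The coffee-shop line maps directly onto `collections.deque` from the standard library; a small sketch with illustrative values:

```python
from collections import deque

queue = deque()
queue.append("alice")     # enqueue: join the back of the line, O(1)
queue.append("bob")
first = queue.popleft()   # dequeue: serve the front of the line, O(1)
```

A plain Python list also works but `list.pop(0)` is O(n), since every remaining element shifts; `deque` keeps both ends O(1).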

November 19, 2025

Stacks

Introduction A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle. Think of it like a stack of plates: you can only add or remove plates from the top. The last plate you place on the stack is the first one you’ll take off. Stacks are fundamental in computer science and appear everywhere—from the call stack that tracks function calls in your program to undo/redo functionality in text editors. Read more →
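The plate-stack behavior maps directly onto a Python list used from one end; a small sketch with illustrative values:

```python
stack = []
stack.append("draft 1")  # push onto the top, O(1)
stack.append("draft 2")
latest = stack.pop()     # pop from the top, O(1): last in, first out
```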

November 19, 2025

Linked Lists

Introduction A linked list is a linear data structure made of nodes where each node stores a value and a reference (a “link”) to the next node. Unlike arrays, linked lists do not store elements contiguously in memory. This gives them different performance characteristics and trade-offs. If you are new to time and space complexity, start with our guide to Big-O Notation. You may also want to review Arrays to understand how linked lists compare. Read more →
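The node-plus-link structure can be sketched as follows (illustrative, not the tutorial's code); traversal must follow the links one at a time, which is why there is no O(1) indexing as with arrays.

```python
class ListNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next   # link to the following node, or None at the tail

def to_list(head):
    """Walk the links and collect values, O(n)."""
    values = []
    while head is not None:
        values.append(head.value)
        head = head.next
    return values

head = ListNode(1, ListNode(2, ListNode(3)))  # 1 -> 2 -> 3
```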

November 18, 2025

Binary Search

Introduction Binary search is a classic algorithm used to quickly find the position of a target value within a sorted collection. It repeatedly divides the search interval in half, discarding the half that cannot contain the target. Because of this halving behavior, binary search runs in logarithmic time: O(log n). If you are new to complexity analysis, review our guide to Big-O Notation. Also see our overview of Arrays, since binary search is most commonly applied to sorted arrays. Read more →
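The halving behavior can be sketched as follows (an illustrative version, not the tutorial's code); each iteration discards the half that cannot contain the target.

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent. O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1
```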

November 18, 2025

Big-O Notation

Introduction In the realm of computer science and software engineering, the performance of an algorithm plays a pivotal role. But how do we gauge this performance? That’s where the concept of Big-O notation comes in. This article provides an overview of Big-O notation, its importance, and some common time complexities you might come across. What is Big-O Notation? Big-O notation is a theoretical measure of the execution time of an algorithm or the space it uses in relation to its input size. Read more →
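The input-size relationship can be made concrete with two tiny functions (illustrative names, not from the article): one whose cost is fixed, and one whose cost grows with n.

```python
def first_element(items):
    # O(1): one operation regardless of how large items is
    return items[0]

def has_value(items, target):
    # O(n): in the worst case, every element is examined
    for x in items:
        if x == target:
            return True
    return False
```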

September 14, 2023

Data Structures

Master fundamental data structures and algorithms. From arrays and linked lists to trees and graphs. Read more →

September 11, 2023

Thanks for visiting
We are actively updating content to this site. Thanks for visiting! Please bookmark this page and visit again soon.