#data-structures

Arrays

Arrays are one of the most-used data structures in JavaScript. They’re ordered lists of values, and they come with a rich set of built-in methods for searching, transforming, and iterating over data. If you’ve worked through Functions, you’ve already seen map, filter, and reduce in passing — this tutorial covers arrays from the ground up.

Creating Arrays

```javascript
// Array literal (most common)
const fruits = ["apple", "banana", "cherry"];

// Empty array
const empty = [];

// Arrays can hold mixed types (though you usually wouldn't)
const mixed = [1, "hello", true, null, { name: "Alice" }];

// Array.
```

Read more →

April 14, 2026

Dijkstra's Algorithm

Introduction Dijkstra’s Algorithm finds the shortest path from a source vertex to all other vertices in a weighted graph with non-negative edge weights. It’s the algorithm behind GPS navigation, network routing, and any system that needs to find the cheapest path between two points. The core idea: greedily pick the unvisited vertex with the smallest known distance, then update its neighbors’ distances. This is essentially BFS with a priority queue instead of a regular queue — the priority queue ensures we always process the closest vertex next. Read more →
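
The greedy loop described above can be sketched in a few lines. This is an illustrative sketch, not the tutorial's code: the graph format (an object mapping each vertex to an array of `{ to, weight }` edges) is an assumption, and a linear scan stands in for the priority queue to keep it short.

```javascript
// Sketch of Dijkstra's Algorithm. A linear scan replaces the priority
// queue for brevity; the graph format is an assumption for illustration.
function dijkstra(graph, source) {
  const dist = {};
  const visited = new Set();
  for (const v of Object.keys(graph)) dist[v] = Infinity;
  dist[source] = 0;

  while (visited.size < Object.keys(graph).length) {
    // Greedily pick the unvisited vertex with the smallest known distance.
    let u = null;
    for (const v of Object.keys(graph)) {
      if (!visited.has(v) && (u === null || dist[v] < dist[u])) u = v;
    }
    if (u === null || dist[u] === Infinity) break; // remaining vertices unreachable
    visited.add(u);

    // Relax each neighbor's distance through u.
    for (const { to, weight } of graph[u]) {
      if (dist[u] + weight < dist[to]) dist[to] = dist[u] + weight;
    }
  }
  return dist;
}
```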

March 31, 2026

Tries (Prefix Trees)

Introduction A Trie (pronounced “try”) is a tree-like data structure used to store and search strings efficiently. Each node represents a single character, and paths from root to node spell out prefixes. It’s the data structure behind autocomplete, spell checkers, and IP routing tables. The name comes from “retrieval” — tries are optimized for looking up strings by prefix. Unlike a hash table which gives you O(1) lookup for exact keys, a trie gives you O(m) lookup (where m is the key length) plus the ability to find all keys that share a prefix. Read more →
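
A rough sketch of the structure described above (the node shape and method names are illustrative, not taken from the tutorial):

```javascript
// Minimal trie: each node is a plain object with a `children` map and an
// `isEnd` flag marking the end of a stored word.
class Trie {
  constructor() {
    this.root = { children: {}, isEnd: false };
  }
  insert(word) {
    let node = this.root;
    for (const ch of word) {
      if (!node.children[ch]) node.children[ch] = { children: {}, isEnd: false };
      node = node.children[ch];
    }
    node.isEnd = true;
  }
  // Walks the path for `prefix`; returns the node it ends at, or null.
  _walk(prefix) {
    let node = this.root;
    for (const ch of prefix) {
      if (!node.children[ch]) return null;
      node = node.children[ch];
    }
    return node;
  }
  search(word) {       // exact match, O(m) in the word length
    const node = this._walk(word);
    return node !== null && node.isEnd;
  }
  startsWith(prefix) { // prefix match, the operation tries exist for
    return this._walk(prefix) !== null;
  }
}
```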

March 31, 2026

Selection Sort

Introduction Selection Sort works by repeatedly finding the minimum element from the unsorted portion and placing it at the beginning. It’s like sorting a hand of cards by scanning for the smallest card each time and moving it to the front. Selection Sort is simple to understand and implement, but it’s always O(n²) — even on sorted input. It makes the minimum number of swaps (at most n-1), which can matter when writes are expensive. Read more →
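
The scan-and-swap process might look like this minimal sketch:

```javascript
// Selection Sort: scan the unsorted portion for its minimum, swap it into
// place, and advance the sorted boundary. Performs at most n-1 swaps.
function selectionSort(arr) {
  const a = [...arr]; // copy so the input is not mutated
  for (let i = 0; i < a.length - 1; i++) {
    let min = i;
    for (let j = i + 1; j < a.length; j++) {
      if (a[j] < a[min]) min = j;
    }
    if (min !== i) [a[i], a[min]] = [a[min], a[i]];
  }
  return a;
}
```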

March 31, 2026

Linear Search

Introduction Linear Search is the simplest searching algorithm — it checks every element in a collection one by one until it finds the target or reaches the end. It’s like looking for a specific book on an unsorted shelf by scanning from left to right. It’s not the fastest search, but it works on any collection — sorted or unsorted, arrays or linked lists. Understanding Linear Search also helps you appreciate why Binary Search is so much faster on sorted data. Read more →
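
The whole algorithm fits in a few lines (this sketch returns the index of the first match, mirroring `Array.prototype.indexOf`):

```javascript
// Linear Search: check each element in turn until the target is found
// or the end is reached. O(n) time, works on sorted or unsorted data.
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1; // not found
}
```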

March 31, 2026

Insertion Sort

Introduction Insertion Sort builds a sorted array one element at a time by picking each element and inserting it into its correct position among the already-sorted elements. It’s exactly how most people sort a hand of playing cards — you pick up one card at a time and slide it into the right spot. Despite being O(n²) in the worst case, Insertion Sort is genuinely useful. It’s the fastest algorithm for small arrays and nearly-sorted data, which is why many standard library sort implementations (like TimSort) use it as a building block. Read more →
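
The card-sorting analogy above translates almost directly into code. A minimal sketch:

```javascript
// Insertion Sort: take each element and slide it left past larger
// elements until it lands in its correct position.
function insertionSort(arr) {
  const a = [...arr]; // copy so the input is not mutated
  for (let i = 1; i < a.length; i++) {
    const current = a[i];
    let j = i - 1;
    while (j >= 0 && a[j] > current) {
      a[j + 1] = a[j]; // shift larger elements right
      j--;
    }
    a[j + 1] = current;
  }
  return a;
}
```

On nearly-sorted input the inner while loop rarely runs, which is why this approaches O(n) in the best case.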

March 31, 2026

Bubble Sort

Introduction Bubble Sort is the simplest sorting algorithm. It repeatedly steps through the array, compares adjacent elements, and swaps them if they’re in the wrong order. The largest unsorted element “bubbles up” to its correct position on each pass — like air bubbles rising to the surface of water. Nobody uses Bubble Sort in production (it’s too slow), but it’s worth understanding because it’s often the first sorting algorithm taught, it appears in interviews as a baseline comparison, and optimizing it teaches useful concepts. Read more →
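
A sketch of the passes described above, including the common early-exit optimization:

```javascript
// Bubble Sort: repeatedly compare and swap adjacent elements. If a full
// pass makes no swaps, the array is already sorted and we can stop early.
function bubbleSort(arr) {
  const a = [...arr];
  for (let pass = 0; pass < a.length - 1; pass++) {
    let swapped = false;
    // The last `pass` elements are already in place, so stop before them.
    for (let i = 0; i < a.length - 1 - pass; i++) {
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]];
        swapped = true;
      }
    }
    if (!swapped) break; // early exit on sorted input
  }
  return a;
}
```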

March 31, 2026

Graph Representation

Introduction Before you can run BFS, DFS, or any other graph algorithm, you need to decide how to store the graph in memory. That choice affects the time and space complexity of every operation you perform on it. There are two standard representations: the adjacency list and the adjacency matrix. This tutorial covers both, when to use each, and how to handle directed, undirected, and weighted graphs. You should be familiar with Graphs Introduction and Arrays before reading this. Read more →
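
As a quick taste of the two representations for the same small undirected graph (vertex labels 0 through 3 are illustrative):

```javascript
// Adjacency list: one array of neighbors per vertex. O(V + E) space,
// efficient iteration over a vertex's neighbors.
const adjacencyList = [
  [1, 2], // vertex 0 connects to 1 and 2
  [0, 3], // vertex 1 connects to 0 and 3
  [0],    // vertex 2 connects to 0
  [1],    // vertex 3 connects to 1
];

// Adjacency matrix: a V x V grid where matrix[i][j] = 1 if an edge exists.
// O(V*V) space, but O(1) edge lookup. For weighted graphs, store weights.
const adjacencyMatrix = [
  [0, 1, 1, 0],
  [1, 0, 0, 1],
  [1, 0, 0, 0],
  [0, 1, 0, 0],
];
```

For an undirected graph the matrix is symmetric, since every edge appears in both directions.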

March 31, 2026

Merge Sort

Introduction Merge Sort is a divide-and-conquer sorting algorithm that splits an array in half, recursively sorts each half, then merges the two sorted halves back together. It’s like sorting a deck of cards by splitting it into smaller and smaller piles until each pile has one card, then merging piles together in order. Unlike Quick Sort, which can degrade to O(n²) with bad pivot choices, Merge Sort guarantees O(n log n) in every case. Read more →
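
The split-then-merge structure can be sketched like this (an illustrative version, not the tutorial's code):

```javascript
// Merge Sort: split the array in half, recursively sort each half,
// then merge the two sorted halves. Guaranteed O(n log n).
function mergeSort(arr) {
  if (arr.length <= 1) return arr; // base case: already sorted
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

// Merge two sorted arrays by repeatedly taking the smaller front element.
function merge(left, right) {
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i)).concat(right.slice(j));
}
```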

March 31, 2026

Quick Sort

Introduction Quick Sort is a divide-and-conquer sorting algorithm that works by picking a “pivot” element and partitioning the array so that everything smaller goes to the left and everything larger goes to the right. Then it recursively sorts each side. It’s like organizing a messy bookshelf by picking a book in the middle, putting everything shorter to the left and taller to the right, then repeating for each side. Quick Sort is one of the most widely used sorting algorithms in practice — it’s the default in many standard library implementations because of its excellent average-case performance and low overhead. Read more →
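
The pivot-and-partition idea in a minimal sketch. This uses the easy-to-read style that allocates new arrays; production implementations usually partition in place (Lomuto or Hoare schemes):

```javascript
// Quick Sort: pick a pivot, partition smaller elements to the left and
// larger (or equal) to the right, then recursively sort each side.
function quickSort(arr) {
  if (arr.length <= 1) return arr;
  const [pivot, ...rest] = arr; // first element as pivot, for simplicity
  const left = rest.filter((x) => x < pivot);
  const right = rest.filter((x) => x >= pivot);
  return [...quickSort(left), pivot, ...quickSort(right)];
}
```

Always taking the first element as the pivot is exactly the choice that degrades to O(n²) on sorted input, which is why real implementations randomize or use median-of-three.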

March 31, 2026

Dynamic Programming

Introduction Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them into overlapping subproblems and storing their results so you never solve the same subproblem twice. Think of it like taking notes during a long calculation — instead of redoing work, you look up answers you’ve already figured out. If you’ve worked through Recursion, you’ve already seen the problem DP solves. Remember the naive recursive Fibonacci? It recalculates the same values over and over. Read more →
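
The memoized Fibonacci alluded to above, as a sketch: same recursion, but each subproblem's answer is cached so it is computed only once, turning exponential time into O(n).

```javascript
// Memoized (top-down DP) Fibonacci: cache each result in a Map so no
// subproblem is ever solved twice.
function fib(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n); // already computed: look it up
  const result = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, result);
  return result;
}
```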

March 31, 2026

Graphs - Introduction

Introduction A graph is a non-linear data structure consisting of vertices (nodes) connected by edges. Unlike trees, graphs can have cycles, multiple paths between nodes, and connections in any direction. Graphs model relationships and networks, making them fundamental to solving real-world problems in social networks, maps, computer networks, and more. Graphs are one of the most versatile and powerful data structures in computer science. By the end of this tutorial, you’ll understand graph terminology, types of graphs, representation methods, and when to use them. Read more →

November 19, 2025

Heaps and Priority Queues

Introduction A heap is a specialized tree-based data structure that satisfies the heap property: in a max-heap, parent nodes are always greater than or equal to their children; in a min-heap, parent nodes are always less than or equal to their children. Heaps are the foundation for priority queues and enable efficient algorithms like heap sort and finding the kth largest element. Unlike Binary Search Trees, heaps don’t maintain a fully sorted order—they only guarantee the min/max element is at the root. Read more →
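
A minimal array-backed min-heap sketch (illustrative names; the tutorial's own implementation may differ). The parent of index i lives at `Math.floor((i - 1) / 2)`; its children live at `2i + 1` and `2i + 2`.

```javascript
// Min-heap on a plain array: push bubbles a value up; pop removes the
// root (the minimum) and sifts the last element down.
class MinHeap {
  constructor() { this.items = []; }
  push(value) {
    this.items.push(value);
    let i = this.items.length - 1;
    while (i > 0) { // bubble up while smaller than the parent
      const parent = Math.floor((i - 1) / 2);
      if (this.items[i] >= this.items[parent]) break;
      [this.items[i], this.items[parent]] = [this.items[parent], this.items[i]];
      i = parent;
    }
  }
  pop() {
    const top = this.items[0];
    const last = this.items.pop();
    if (this.items.length > 0) {
      this.items[0] = last;
      let i = 0; // sift down until the heap property holds again
      while (true) {
        let smallest = i;
        const l = 2 * i + 1, r = 2 * i + 2;
        if (l < this.items.length && this.items[l] < this.items[smallest]) smallest = l;
        if (r < this.items.length && this.items[r] < this.items[smallest]) smallest = r;
        if (smallest === i) break;
        [this.items[i], this.items[smallest]] = [this.items[smallest], this.items[i]];
        i = smallest;
      }
    }
    return top;
  }
}
```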

November 19, 2025

Binary Search Trees (BST)

Introduction A Binary Search Tree (BST) is a specialized binary tree with an important ordering property: for every node, all values in the left subtree are less than the node’s value, and all values in the right subtree are greater. This simple property enables efficient searching, insertion, and deletion operations. BSTs are one of the most important data structures in computer science, forming the basis for efficient ordered collections, database indexing, and many algorithmic solutions. Read more →
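
The ordering property drives both insertion and search, as in this sketch (here duplicates go right, which is one common convention):

```javascript
// BST node plus recursive insert and search. At each node, the ordering
// property tells us which subtree to descend into.
class Node {
  constructor(value) {
    this.value = value;
    this.left = null;
    this.right = null;
  }
}

function insert(root, value) {
  if (root === null) return new Node(value);
  if (value < root.value) root.left = insert(root.left, value);
  else root.right = insert(root.right, value); // duplicates go right
  return root;
}

function search(root, value) {
  if (root === null) return false;
  if (value === root.value) return true;
  return value < root.value ? search(root.left, value) : search(root.right, value);
}
```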

November 19, 2025

Binary Trees

Introduction A binary tree is a hierarchical data structure where each node has at most two children, referred to as the left child and right child. Unlike linear data structures like arrays or linked lists, binary trees organize data in a hierarchical, branching structure that enables efficient searching, sorting, and hierarchical representation. Binary trees are fundamental to computer science and appear in file systems, databases, compilers, and countless algorithms. By the end of this tutorial, you’ll understand different types of binary trees, how to traverse them, and when to use them. Read more →
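
As a small taste of the structure, here is a sketch of a node factory plus an in-order traversal, one of the traversals the tutorial covers:

```javascript
// A binary tree node: a value plus left and right children.
function node(value, left = null, right = null) {
  return { value, left, right };
}

// In-order traversal: visit left subtree, then the node, then the right
// subtree. On a binary search tree this yields values in sorted order.
function inorder(root, out = []) {
  if (root === null) return out;
  inorder(root.left, out);
  out.push(root.value);
  inorder(root.right, out);
  return out;
}
```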

November 19, 2025

Circular Queues

Introduction A circular queue (also called a ring buffer) is a linear data structure that uses a fixed-size array in a circular manner. Unlike a regular queue where dequeued elements leave wasted space at the front, a circular queue reuses that space by wrapping around to the beginning when reaching the end of the array. Think of it like a circular track where runners keep going around—when you reach the end, you simply continue from the start. Read more →
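
The wrap-around behavior comes from the modulo operator, as in this sketch (capacity handling and method names are illustrative):

```javascript
// Ring buffer: a fixed-size array with a head index and a size counter.
// Indices wrap via `% capacity`, so dequeued slots are reused.
class CircularQueue {
  constructor(capacity) {
    this.items = new Array(capacity);
    this.capacity = capacity;
    this.head = 0;
    this.size = 0;
  }
  enqueue(value) {
    if (this.size === this.capacity) return false; // full
    this.items[(this.head + this.size) % this.capacity] = value;
    this.size++;
    return true;
  }
  dequeue() {
    if (this.size === 0) return undefined; // empty
    const value = this.items[this.head];
    this.head = (this.head + 1) % this.capacity; // wrap around
    this.size--;
    return value;
  }
}
```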

November 19, 2025

Deques (Double-Ended Queues)

Introduction A deque (pronounced “deck”) is short for “double-ended queue”—a linear data structure that allows insertion and deletion at both the front and the rear. Think of it as a hybrid between a stack and a queue, offering the flexibility to add or remove elements from either end. This versatility makes deques powerful for problems requiring sliding windows, palindrome checking, and maintaining ordered sequences with efficient operations at both ends. By the end of this tutorial, you’ll understand when deques shine and how to implement them efficiently. Read more →
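
As a quick illustration, a plain JavaScript array already exposes all four deque operations, though `shift`/`unshift` are O(n) on most engines; a doubly linked list or ring buffer gives O(1) at both ends:

```javascript
// All four deque operations on a plain array.
const deque = [];
deque.push(2);               // add to rear    -> [2]
deque.push(3);               // add to rear    -> [2, 3]
deque.unshift(1);            // add to front   -> [1, 2, 3]
const front = deque.shift(); // remove front   -> 1, leaving [2, 3]
const rear = deque.pop();    // remove rear    -> 3, leaving [2]
```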

November 19, 2025

Queues

Introduction A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle. Think of it like a line at a coffee shop: the first person to join the line is the first person to be served. This fair ordering makes queues perfect for managing tasks, requests, and data that needs to be processed in the order it arrives. Queues are everywhere in computing—from managing print jobs and handling network requests to implementing breadth-first search algorithms. Read more →
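
A minimal sketch of the FIFO behavior on top of an array (note that `shift()` is O(n); the Circular Queues tutorial covers how to get O(1) dequeues):

```javascript
// FIFO queue: enqueue at the back, dequeue from the front.
class Queue {
  constructor() { this.items = []; }
  enqueue(value) { this.items.push(value); } // join the back of the line
  dequeue() { return this.items.shift(); }   // serve the front
  peek() { return this.items[0]; }
  isEmpty() { return this.items.length === 0; }
}
```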

November 19, 2025

Stacks

Introduction A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle. Think of it like a stack of plates: you can only add or remove plates from the top. The last plate you place on the stack is the first one you’ll take off. Stacks are fundamental in computer science and appear everywhere—from the call stack that tracks function calls in your program to undo/redo functionality in text editors. Read more →
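
A minimal sketch of the LIFO behavior. JavaScript arrays give O(1) `push`/`pop` at the end, so the "top" of the stack is simply the end of the array:

```javascript
// LIFO stack: push and pop both happen at the top.
class Stack {
  constructor() { this.items = []; }
  push(value) { this.items.push(value); }
  pop() { return this.items.pop(); }
  peek() { return this.items[this.items.length - 1]; } // look without removing
  isEmpty() { return this.items.length === 0; }
}
```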

November 19, 2025

Linked Lists

Introduction A linked list is a linear data structure made of nodes where each node stores a value and a reference (a “link”) to the next node. Unlike arrays, linked lists do not store elements contiguously in memory. This gives them different performance characteristics and trade-offs. If you are new to time and space complexity, start with our guide to Big-O Notation. You may also want to review Arrays to understand how linked lists compare. Read more →
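
A minimal sketch of the node structure described above (the `cons` helper name is illustrative):

```javascript
// A singly linked list node: a value plus a reference to the next node.
function cons(value, next = null) {
  return { value, next };
}

// Build the list 1 -> 2 -> 3.
const list = cons(1, cons(2, cons(3)));

// Walk the chain of `next` links, collecting values.
function toArray(head) {
  const out = [];
  for (let node = head; node !== null; node = node.next) out.push(node.value);
  return out;
}
```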

November 18, 2025

Data Structures

Master fundamental data structures and algorithms. From arrays and linked lists to trees and graphs. Read more →

September 11, 2023
