proofreading for data structures
spring1843 committed Jul 17, 2023
1 parent 5e56a31 commit 09f4c24
Showing 10 changed files with 49 additions and 51 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -6,7 +6,7 @@
[![Coverage Report](https://coveralls.io/repos/github/spring1843/go-dsa/badge.svg?branch=main)](https://coveralls.io/github/spring1843/go-dsa?branch=main)
[![Go Reference](https://pkg.go.dev/badge/github.com/spring1843/go-dsa.svg)](https://pkg.go.dev/github.com/spring1843/go-dsa)

Welcome to **Data Structures and Algorithms in Go**! 🎉 This project is designed to serve as a dynamic, hands-on resource for learning and practicing data structures and algorithms in the Go programming language.
Welcome to **Data Structures and Algorithms in Go**! 🎉 This project is designed as a dynamic, hands-on resource for learning and practicing data structures and algorithms in the Go programming language.

* Completely free, community-driven, and continuously evolving
* Executable code with 100% test coverage, ensuring a high level of quality
@@ -28,7 +28,7 @@ Welcome to **Data Structures and Algorithms in Go**! 🎉 This project is design
* [Bubble Sort](./array/bubble_sort_test.go)
* [Insertion Sort](./array/insertion_sort_test.go)
* [Strings](./strings/README.md)
* [The longest Dictionary Word Containing Key](./strings/longest_dictionary_word_test.go)
* [The Longest Dictionary Word Containing Key](./strings/longest_dictionary_word_test.go)
* [Look and Tell](./strings/look_and_tell_test.go)
* [In Memory Database](./strings/in_memory_database_test.go)
* [Number in English](./strings/number_in_english_test.go)
@@ -142,7 +142,7 @@ Welcome to **Data Structures and Algorithms in Go**! 🎉 This project is design

All topics are discussed in README.md files in the corresponding directory. Each topic includes the following sections:

* 💡 **Implementation**: Detailed explanation of how the data structure or algorithm can be implemented, including code examples in Go.
* 💡 **Implementation**: Overview of implementing the data structure or algorithm in Go.
* 📊 **Complexity**: Analysis of the time and space complexity of the data structure or algorithm.
* 🎯 **Application**: Discussion of problems that are commonly solved using the data structure or algorithm.
* 🎯 **Application**: Discussion of problems commonly solved using the data structure or algorithm.
* 📝 **Rehearsal**: Practice problems with links to tests that provide 100% coverage and example inputs and outputs.
12 changes: 5 additions & 7 deletions array/README.md
@@ -6,9 +6,7 @@ To provide a real-world analogy, consider an array of athletes preparing for a s

## Implementation

In the Go programming language, arrays are considered values rather than pointers and represent the entirety of the array. Whenever an array is passed to a function, a copy is created, resulting in additional memory usage. However, to avoid this issue, it is possible to pass a pointer to the array instead.

To define an array in Go, it is possible to specify the array size using a constant. By using constants in this manner, it is no longer necessary to use the make function to create the array.
In the Go programming language, arrays are values rather than pointers and represent the entirety of the array. Whenever an array is passed to a function, a copy is created, resulting in additional memory usage. To avoid this, it is possible to pass a pointer to the array or to use slices instead. The size of an array is constant and must be known at compile time, and there is no need to use the built-in `make` function when defining arrays.

```Go
package main

import "fmt"

const size = 3

func main() {
	// The size of an array is part of its type and fixed at compile
	// time; no call to make is needed.
	numbers := [size]int{1, 2, 3}
	fmt.Println(numbers)
}
```

Although arrays are fundamental data structures in Go, their constant size can make them inflexible and difficult to use in situations where a variable size is required. To address this issue, Go provides [slices](https://blog.golang.org/slices-intro) which are an abstraction of arrays that offer more convenient access to sequential data typically stored in arrays.
Although arrays are fundamental data structures in Go, their constant size can make them inflexible and difficult to use in situations where a variable size is required. To address this issue, Go provides [slices](https://blog.golang.org/slices-intro), an abstraction of arrays that offers more convenient access to sequential data typically stored in arrays. When a slice is passed to a function, a copy of the slice header is made, but it still points to the same underlying array, so the callee can modify the slice's elements and the changes are visible to the caller.

Slices enable the addition of values using the `append` function, which allows for dynamic slice resizing. Additionally, selectors of the format [low:high] can be used to select or manipulate data in the slice. By utilizing slices instead of arrays, Go programmers gain a more flexible and powerful tool to manage their data structures.
Slices enable adding values using the `append` function, allowing dynamic resizing. Additionally, selectors of the format `[low:high]` can be used to select or manipulate data in the slice. By utilizing slices instead of arrays, Go programmers gain a more flexible and powerful tool for managing their data structures.

```Go
package main

import "fmt"

func main() {
	numbers := []int{1, 2, 3}
	numbers = append(numbers, 4) // append grows the slice dynamically
	fmt.Println(numbers[1:3])    // the [low:high] selector yields [2 3]
}
```
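The slice-header behavior described above can be sketched as follows; the `double` helper is illustrative, not part of the repository:

```go
package main

import "fmt"

// double modifies the elements of the slice it receives. The slice
// header is copied when passed, but it points at the same underlying
// array, so the caller observes the changes.
func double(nums []int) {
	for i := range nums {
		nums[i] *= 2
	}
}

func main() {
	nums := []int{1, 2, 3}
	double(nums)
	fmt.Println(nums) // [2 4 6]
}
```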

The [make](https://golang.org/pkg/builtin/#make) function can be used to create a zeroed slice of a given length and capacity.
The [make](https://golang.org/pkg/builtin/#make) function can create a zeroed slice of a given length and capacity.

```Go
package main

import "fmt"

func main() {
	// A zeroed slice with length 2 and capacity 5.
	s := make([]int, 2, 5)
	fmt.Println(len(s), cap(s), s)
}
```

@@ -78,7 +76,7 @@ Accessing an element within an array using an index has O(1) time complexity. Th

While arrays are useful for certain tasks, searching an unsorted array can be a time-consuming O(n) operation. Since the target item could be located anywhere in the array, every element must be checked until the item is found. Due to this limitation, alternative data structures such as trees and hash tables are often more suitable for search operations.

Addition and deletion operations are O(n) operations in Arrays. The process of removing an element can create an empty slot that must be eliminated by shifting the remaining items. Similarly, adding items to an array may require shifting existing items to create space for the added item. These inefficiencies can make alternative data structures, such as [trees](../tree) or [hash tables](../hashtable), more suitable for managing operations involving additions and deletions.
Addition and deletion operations are O(n) operations in arrays. Removing an element can create an empty slot that must be eliminated by shifting the remaining items. Similarly, adding items to an array may require shifting existing items to create space for the added item. These inefficiencies can make alternative data structures, such as [trees](../tree) or [hash tables](../hashtable), more suitable for operations involving additions and deletions.
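The shifting cost described above can be sketched as follows; the `removeAt` helper is illustrative, not part of the repository:

```go
package main

import "fmt"

// removeAt deletes the element at index i by shifting every later
// element one position to the left, an O(n) operation in the worst case.
func removeAt(nums []int, i int) []int {
	copy(nums[i:], nums[i+1:])
	return nums[:len(nums)-1]
}

func main() {
	fmt.Println(removeAt([]int{10, 20, 30, 40}, 1)) // [10 30 40]
}
```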

## Application

14 changes: 7 additions & 7 deletions complexity.md
@@ -5,11 +5,11 @@ Algorithms can be differentiated based on their time and space complexity. When
1. What is the time `t` required for execution?
2. How much memory space `s` does it utilize?

To address these questions, the Big O asymptotic notation, which characterizes how an algorithm performs with respect to time and space as the input size `n` increases, is employed.
The Big O asymptotic notation, which characterizes how an algorithm performs with respect to time and space as the input size `n` increases, is employed to address these questions.

## Big O

Big O is a mathematical notation commonly used to describe the impact on time or space as input size `n` increases. Seven Big O notations commonly used in algorithm complexity analysis are discussed in the following sections.
Big O is a mathematical notation commonly used to describe the impact on time or space as input size `n` increases. We are mostly interested in discussing the worst case of an algorithm, but it is also beneficial to compare algorithms in their average and best case scenarios. Seven Big O notations commonly used in algorithm complexity analysis are discussed in the following sections.

```ASCII
[Figure 1] Schematic diagram of Big O for common run times from fastest to slowest.
```

To understand the big O notation, let us focus on time complexity and specifically examine the O(n) diagram. This diagram depicts a decline in algorithm performance as input size increases. In contrast, the O(1) diagram represents an algorithm that consistently performs in constant time, with input size having no impact on its efficiency. Consequently, the latter algorithm generally outperforms the former.
To understand the big O notation, let us focus on time complexity and specifically examine the O(n) diagram. This diagram depicts a decline in algorithm performance as the input size increases. In contrast, the O(1) diagram represents an algorithm that consistently performs in constant time, with input size having no impact on its efficiency. Consequently, the latter algorithm generally outperforms the former.

However, it is essential to note that this is not always the case. In practice, a O(1) algorithm with a single time-consuming operation might be slower than a O(n) algorithm with multiple operations if the single operation in the first algorithm requires more time to complete than the collective operations in the second algorithm.
However, it is essential to note that this is not always the case. An O(1) algorithm with a single time-consuming operation might be slower than an O(n) algorithm with multiple operations if the single operation in the first algorithm requires more time to complete than the collective operations in the second algorithm.

The Big O notation of an algorithm can be simplified using the following two rules:

@@ -110,7 +110,7 @@ Linear time complexity is considered favorable when an algorithm traverses every

### O(n*Log n)

The time complexity of O(n*Log n) is commonly observed when it is necessary to iterate through all inputs and yield an outcome at the same time through an efficient operation. Sorting is a common example. It is impossible to sort items faster than O(n*Log n). Examples:
The time complexity of O(n*Log n) is commonly observed when it is necessary to iterate through all inputs and simultaneously yield an outcome through an efficient operation. Sorting is a common example; no comparison-based sort can be faster than O(n*Log n). Examples:

* [Merge Sort](./dnc/merge_sort.go)
* [Quick Sort](./dnc/quick_sort.go)
@@ -121,15 +121,15 @@ The time complexity of O(n*Log n) is commonly observed when it is necessary to i

### Polynomial - O(n^2)

Polynomial time complexity marks the initial threshold of problematic time complexity for algorithms. This complexity often arises when an algorithm includes nested loops involving both an inner loop an outer loop. Examples:
Quadratic time complexity marks the initial threshold of problematic time complexity for algorithms. This complexity often arises when an algorithm includes nested loops involving inner and outer loops. Examples:

* [Bubble Sort](./array/bubble_sort.go)
* [Cheapest Flight](./graph/cheapest_flights.go)
* [Remove Invalid Parentheses](./graph/remove_invalid_parentheses.go)

### Exponential O(2^n)

Exponential complexity is considered highly undesirable; however, it represents only the second-worst complexity scenario. Examples:
Exponential complexity is considered highly undesirable but represents only the second-worst complexity scenario. Examples:

* [Climbing Stairs](./recursion/climbing_stairs.go)
* [Towers of Hanoi](./dnc/towers_of_hanoi.go)
8 changes: 4 additions & 4 deletions hashtable/README.md
@@ -1,10 +1,10 @@
# Hash Table

Hash tables are a fundamental data structure that operates based on key-value pairs and enables constant-time operations for lookup, insertion, and deletion. Hash tables use immutable keys that can be a simple strings or integers. However, in more complex applications, a hashing function, along with different collision resolution methods such as separate chaining, linear probing, quadratic probing, and double hashing, can be used to ensure efficient performance.
Hash tables are a fundamental data structure that operates on key-value pairs and enables constant-time operations for lookup, insertion, and deletion. Hash tables use immutable keys that can be strings or integers, among other things. In more complex applications, a hashing function, together with a collision resolution method such as separate chaining, linear probing, quadratic probing, or double hashing, is used to ensure efficient performance.

## Implementation

In Go, hash tables are implemented as maps, which is a built-in data type of the language. To declare a map, the data type for the key and the value must be specified. The map needs to be initialized using the make function before it can be used. Below is an example of how to declare a map with string keys and integer values:
In Go, hash tables are implemented as maps, a built-in language data type. To declare a map, the data types for the key and the value must be specified. The map needs to be initialized using the `make` function before it can be used. Below is an example of how to declare a map with string keys and integer values:

```Go
package main

import "fmt"

func main() {
	// A map with string keys and integer values, initialized with make.
	ages := make(map[string]int)
	ages["alice"] = 30
	fmt.Println(ages["alice"])
}
```

When using maps in Go, it is crucial to remember that the order of the items stored in the map is not preserved. This is unlike arrays and slices. Relying on the order of map contents can lead to unexpected issues, such as inconsistent code behavior and intermittent failures.
When using maps in Go, it is crucial to remember that, unlike with arrays and slices, the order of the items stored in a map is not preserved. Relying on the order of map contents can lead to unexpected issues, such as inconsistent code behavior and intermittent failures.
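A common way to work around the unspecified iteration order is to sort the keys before iterating; a minimal sketch, with an assumed `sortedKeys` helper:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns a map's keys in deterministic (sorted) order,
// since Go does not specify map iteration order.
func sortedKeys(m map[string]int) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	ages := map[string]int{"bob": 25, "alice": 30}
	for _, k := range sortedKeys(ages) {
		fmt.Println(k, ages[k])
	}
}
```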

As shown below it is possible in Go to store variables as keys in a map. It is also possible to have a map of only keys with no values.
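A minimal sketch of the keys-only case, using the empty struct as a zero-size value type (an assumption about style; any value type would work):

```go
package main

import "fmt"

func main() {
	// A map with empty-struct values acts as a set of keys only;
	// struct{}{} occupies no memory.
	seen := map[string]struct{}{}
	seen["go"] = struct{}{}

	_, ok := seen["go"]
	fmt.Println(ok) // true
}
```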

@@ -73,7 +73,7 @@ Hash tables provide O(1) time complexity for inserting, deletion, and searching

## Application

When there is no need to preserve the order of data, hash tables are used for fast O(1) reads and writes. This performance advantage makes hash tables more suitable than [Arrays](../arrays) and even [Binary Search Trees](../tree).
When there is no need to preserve the order of data, hash tables are used for fast O(1) reads and writes. This performance advantage makes hash tables more suitable than [Arrays](../arrays) and [Binary Search Trees](../tree).

Compilers use hash tables to generate a symbol table to keep track of variable declarations.

14 changes: 7 additions & 7 deletions heap/README.md
@@ -1,15 +1,15 @@
# Heap

Heaps are tree data structures that retain the minimum or maximum of the elements pushed into them. There are two types of heap: minimum and maximum heaps.
Heaps are tree data structures that retain the minimum or maximum of the elements pushed into them. There are two types of heaps: minimum and maximum heaps.

A heap must satisfy two conditions:

1. The structure property requires that the heap be a complete binary search [tree](../tree), where each level is filled left to right, and all levels except the bottom are full.
1. The structure property requires that the heap be a complete binary [tree](../tree), where each level is filled from left to right, and all levels except the bottom are full.
2. The heap property requires that the children of a node be larger than or equal to the parent node in a min heap and smaller than or equal to the parent in a max heap, meaning that the root is the minimum in a min heap and the maximum in a max heap.

As a result, if all elements are pushed to the min or max heap and then popped one by one, a sorted list in ascending or descending order is attained. This sorting technique is known as [heap sort](./heap_sort_test.go) and it works O(n*Logn) time. Although there are many other sorting algorithms available, none are faster than O(n*Logn).
As a result, if all elements are pushed to the min or max heap and then popped one by one, a sorted list in ascending or descending order is attained. This sorting technique, known as [heap sort](./heap_sort_test.go), works in O(n*Log n) time. While many other comparison-based sorting algorithms are available, none are more efficient than O(n*Log n).

When pushing an element to a heap, because of the structure property, the new element is always added to the first available position on the lowest level of the heap, filling from left to right. Then to maintain the heap property, if the newly inserted element is smaller than its parent in a min heap (larger in a max heap), the newly added element is percolate up by being swapped with its parent. The child and parents are swapped until the heap property is achieved.
When pushing an element to a heap, because of the structure property, the new element is always added to the first available position on the lowest level of the heap, filling from left to right. Then to maintain the heap property, if the newly inserted element is smaller than its parent in a min heap (larger in a max heap), the newly added element is percolated up by being swapped with its parent. The child and parents are swapped until the heap property is achieved.
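The percolate-up step above can be sketched as follows for a min heap stored in a 0-indexed slice; the `siftUp` helper is illustrative, not the repository's implementation:

```go
package main

import "fmt"

// siftUp repeatedly swaps the element at index i with its parent while
// it is smaller, restoring the min-heap property. In a 0-indexed slice
// the parent of i is (i-1)/2.
func siftUp(h []int, i int) {
	for i > 0 {
		parent := (i - 1) / 2
		if h[i] >= h[parent] {
			break
		}
		h[i], h[parent] = h[parent], h[i]
		i = parent
	}
}

func main() {
	h := []int{10, 20, 30}
	h = append(h, 5) // push 5 at the first free position on the lowest level
	siftUp(h, len(h)-1)
	fmt.Println(h[0]) // 5
}
```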

```ASCII
[Figure 1] Minimum heap push operation
(A) Add 15          (B) Add 5
```

The pop operation in a heap starts by replacing the root with the rightmost leaf. Then the root is swapped with the smaller child in a min heap (and the larger child in a max heap). The root is then removed and the new root is percolated down until the heap property is achieved.
The pop operation in a heap starts by replacing the root with the rightmost leaf. Then the root is swapped with the smaller child in a min heap (and the larger child in a max heap). The root is removed and the new root is percolated down until the heap property is achieved.
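The pop operation above can be sketched for a min heap in a 0-indexed slice; the `popMin` helper is illustrative, not the repository's implementation:

```go
package main

import "fmt"

// popMin replaces the root with the rightmost leaf, shrinks the slice,
// and percolates the new root down, returning the minimum.
// In a 0-indexed slice the children of i are 2i+1 and 2i+2.
func popMin(h []int) (int, []int) {
	min := h[0]
	n := len(h) - 1
	h[0] = h[n]
	h = h[:n]
	i := 0
	for {
		left, right, smallest := 2*i+1, 2*i+2, i
		if left < n && h[left] < h[smallest] {
			smallest = left
		}
		if right < n && h[right] < h[smallest] {
			smallest = right
		}
		if smallest == i {
			break
		}
		h[i], h[smallest] = h[smallest], h[i]
		i = smallest
	}
	return min, h
}

func main() {
	min, h := popMin([]int{1, 3, 2, 7})
	fmt.Println(min, h) // 1 [2 3 7]
}
```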

```ASCII
[Figure 2] Minimum heap pop operation
```

@@ -41,7 +41,7 @@ An example implementation of this is provided as a [solution](./heap_sort.go) to

## Implementation

The Go standard library includes an implementation of a heap in [container/heap](https://golang.org/pkg/container/heap/). Below is an example of a maximum heap implementation:
The Go standard library includes an implementation of a heap in [container/heap](https://golang.org/pkg/container/heap/). Below is an example of a maximum heap implementation:

```Go
package main

import (
	"container/heap"
	"fmt"
)

type maxHeap []int

func (m maxHeap) Len() int           { return len(m) }
func (m maxHeap) Less(i, j int) bool { return m[i] > m[j] }
func (m maxHeap) Swap(i, j int)      { m[i], m[j] = m[j], m[i] }

func (m *maxHeap) Push(x interface{}) {
	*m = append(*m, x.(int))
}

func (m *maxHeap) Pop() interface{} {
	old := *m
	n := len(old)
	item := old[n-1]
	*m = old[:n-1]
	return item
}

func main() {
	h := &maxHeap{3, 1, 5}
	heap.Init(h)
	heap.Push(h, 10)
	fmt.Println(heap.Pop(h)) // the maximum, 10
}
```

To utilize a heap to store a particular type, certain methods such as len and less must be implemented for that type to conform to the heap interface. By default, the heap is a min heap, where each node is smaller than its children. However, the package provides the flexibility to define what "being less than" means. For instance, changing `m[i] > m[j]` to `m[i] < m[j]` would transform the heap into a minimum heap.
To utilize a heap to store a particular type, certain methods such as `Len` and `Less` must be implemented for that type to conform to the heap interface. By default, the heap is a min heap, with each node smaller than its children. However, the package provides the flexibility to define what "being less than" means. For instance, changing `m[i] > m[j]` to `m[i] < m[j]` would transform the maximum heap into a minimum heap.

In Go, heaps are implemented with slices. The heap property is maintained such that the left child of the node at index `i` (where i is greater than or equal to 1) is always located at `2i`, and the right child is at `2i+1`. If the slice already contains elements before pushing, the heap must be initialized using `heap.Init(h Interface)` to establish the order.
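The 1-indexed child arithmetic described above can be sketched as follows (slot 0 of the slice is unused in this scheme; the `children` helper is illustrative):

```go
package main

import "fmt"

// children returns the slice positions of the left and right children
// of the node at 1-indexed position i: 2i and 2i+1.
func children(i int) (left, right int) {
	return 2 * i, 2*i + 1
}

func main() {
	// Index 0 is unused; the min heap is 1 at position 1, with
	// children 3 and 2, whose children in turn are 7 and 8.
	h := []int{0, 1, 3, 2, 7, 8}
	l, r := children(2)
	fmt.Println(h[l], h[r]) // the children of the node holding 3: 7 8
}
```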
