Module | GitHub Link |
---|---|
CPP-Modules-00 | CPP-Modules-00 |
CPP-Modules-01 | CPP-Modules-01 |
CPP-Modules-02 | CPP-Modules-02 |
CPP-Modules-03 | CPP-Modules-03 |
CPP-Modules-04 | CPP-Modules-04 |
CPP-Modules-05 | CPP-Modules-05 |
CPP-Modules-06 | CPP-Modules-06 |
CPP-Modules-07 | CPP-Modules-07 |
CPP-Modules-08 | CPP-Modules-08 |
CPP-Modules-09 | CPP-Modules-09 |
The Orthodox Canonical Class Form is a set of special member functions that should be defined to manage resource allocation and prevent common issues like memory leaks and unexpected behavior. These functions are:
- A Default Constructor: Initializes an object and its data members when no initial value is supplied.
- A Copy Constructor: Initializes a new object as a copy of an existing one; used, for example, when objects are passed by value.
- A Copy Assignment Operator: Copies the contents of one existing object into another existing object.
- A Destructor: Invoked when an object is destroyed (e.g., when it goes out of scope or is deleted), releasing any resources it owns.
The purpose of this form is to prevent memory leaks, copy errors, and unnecessary copies in classes that require resource management. For example, if a class dynamically allocates memory, it needs to perform memory management properly when copied or moved.
If you do not define these special member functions yourself, C++ generates default versions of them, including a copy constructor and a copy assignment operator. However, these compiler-generated implementations perform only shallow, member-wise copies: they do not manage dynamically allocated resources correctly, which can lead to memory leaks, double frees, or other unexpected behavior.
A fixed-point number is a method of representing numerical values where the number of fractional digits is fixed. Fixed-point numbers are particularly useful in embedded systems or applications requiring high performance and limited memory, as they allow fractional values to be represented and manipulated using only integer arithmetic.
In fixed-point notation, part of the number is the integer part, and the other part is the fractional part. This allows fractional values to be represented similarly to floating-point numbers but with a fixed number of decimal places.
- Fixed Decimal Point: Unlike floating-point numbers, the decimal point in a fixed-point number stays in a fixed position. For example, the last 8 bits can represent the fractional part. Thus, the number 123.45 can be stored as 12345, with the location of the decimal point predetermined.
- Precision and Scaling: Fixed-point numbers are stored as integers and scaled by a fixed factor. For instance, with a scaling factor of 256, the number 1.5 is stored as 384, since 384 / 256 = 1.5.
- Difference from Floating-Point Numbers: Fixed-point numbers have fixed precision with a limited number of fractional digits, making them faster but less flexible than floating-point numbers.
Fixed-point representation is widely used in embedded systems, game programming, and digital signal processing (DSP), where limited hardware resources must handle fractional number calculations efficiently.
In an 8.8 fixed-point representation, the first 8 bits represent the integer part, and the last 8 bits represent the fractional part.
Integer Part (8 bits) | Fractional Part (8 bits) |
---|---|
00000110 | 10000000 |
In this example:
- Integer Part (00000110): represents 6 in decimal.
- Fractional Part (10000000): represents 0.5 in decimal (128 / 256).

Thus, the combined fixed-point value represents 6.5.
The following code shows how to represent fixed-point numbers in C++:
```cpp
#include <iostream>

class FixedPoint {
private:
    int value;                    // Fixed-point number stored as an integer
    static const int scale = 256; // Scaling factor for the fractional part (2^8 = 256)

public:
    FixedPoint(float number) : value(static_cast<int>(number * scale)) {}

    float toFloat() const {
        return static_cast<float>(value) / scale;
    }

    void print() const {
        std::cout << "Fixed-Point Number: " << toFloat() << std::endl;
    }
};

int main() {
    FixedPoint num(6.5f); // Represent 6.5 as a fixed-point number
    num.print();          // Output: Fixed-Point Number: 6.5
    return 0;
}
```
In this example:
The FixedPoint class stores a float value as an integer in fixed-point format. The number 6.5 is stored as 6.5 * 256 = 1664, and by dividing by the scaling factor of 256, we get back 6.5. Fixed-point numbers allow us to perform fractional calculations efficiently, making them advantageous for systems where performance and memory efficiency are critical. However, they are less flexible than floating-point numbers because fractional precision is fixed.
Floating-point numbers are a way of representing real numbers in computers. These numbers are used when you need to handle very large or very small values. They are represented in a format similar to scientific notation and are particularly useful for working with decimal values. The IEEE 754 standard is the most widely used standard for representing floating-point numbers in modern computers.
A floating-point number consists of three main parts:
- Sign Bit: Indicates whether the number is positive or negative.
- Exponent: Determines the magnitude (size) of the number.
- Mantissa (or Significand): Represents the precision or the fractional part of the number.
The standard floating-point format used is the IEEE 754 standard, and there are typically two formats:
- Single Precision (32 bits): 1 bit for sign, 8 bits for exponent, and 23 bits for mantissa.
- Double Precision (64 bits): 1 bit for sign, 11 bits for exponent, and 52 bits for mantissa.
The structure for single precision (32-bit) floating-point numbers is as follows:
Bit Count | Component | Description |
---|---|---|
1 bit | Sign Bit | Indicates whether the number is positive (0) or negative (1). |
8 bits | Exponent | Encodes the exponent, which determines the scale or size of the number. |
23 bits | Mantissa | Encodes the significant digits or precision part of the number. |
The value of a floating-point number can be calculated using the following formula:
Value = (-1)^Sign × Mantissa × 2^(Exponent - Bias)
Where:
- Sign: 0 for positive, 1 for negative.
- Mantissa: The precision part of the number.
- Exponent: Determines the scaling of the number.
- Bias: A shift applied to the exponent to make it always non-negative. (For single precision, the bias is 127, and for double precision, it is 1023).
If we have the following components for a 32-bit floating-point number:
- Sign: 0 (positive)
- Exponent: 10000001 (binary, which equals 129; bias = 127, so exponent = 2)
- Mantissa: 1.01 (binary)
The value of the floating-point number is calculated as:
Value = (-1)^0 * 1.25 * 2^2 = 5
Advantages:
- Wide Range: Can represent very large and very small numbers.
- Precision: Suitable for calculations involving fractional (decimal) numbers.

Disadvantages:
- Precision Loss: Floating-point numbers may introduce small errors due to the finite precision of their representation.
- Rounding Errors: Representing an infinite number of decimal values with a finite number of bits can result in rounding errors.
Floating-point numbers are used in various fields, including scientific computing, engineering simulations, graphics processing units (GPUs), sound processing, and physics simulations. These applications often require handling extremely large or small numbers, and floating-point numbers provide the flexibility needed for these tasks.
Feel free to refer to the IEEE 754 standard for more detailed information on floating-point arithmetic and the various edge cases associated with this number representation.
The following table highlights the key differences between fixed-point and floating-point numbers:
Feature | Fixed-Point Numbers | Floating-Point Numbers |
---|---|---|
Precision | Fixed precision with a fixed number of decimal places | Higher precision with variable decimal places |
Speed | Faster, requires less processing power | Slower, requires more processing power |
Memory Usage | Uses less memory | Uses more memory due to extra components (mantissa and exponent) |
Range | Limited; only values within a narrow span can be represented | Larger range, can represent very small and very large numbers |
Applications | Embedded systems, Digital Signal Processing (DSP), game programming | Scientific computing, engineering applications, graphics processing |
Flexibility | Less flexible, precision is fixed | More flexible, can scale the number using exponent |
Operator overloading is a feature in C++ that allows developers to define custom behavior for operators (like `+`, `-`, `*`, `==`, etc.) when applied to objects of a class. Instead of using the default implementation of an operator, you can redefine how the operator behaves for user-defined data types, making your code more intuitive and expressive.
Operator overloading enhances code readability and allows for the creation of more intuitive classes. It allows you to write expressions that use operators (such as addition or subtraction) on objects, just like you would on primitive data types.
For example, consider a `Complex` class for complex numbers. Without operator overloading, adding two complex numbers might involve a function call like `add(complex1, complex2)`. With operator overloading, you can use the `+` operator directly, making the code cleaner and easier to understand.
- Cannot Overload Certain Operators: Some operators, such as `::` (scope resolution), `sizeof`, `.` (member access), `?:` (ternary), and `typeid`, cannot be overloaded.
- Maintain Operator Arity: You cannot change the number of operands an operator takes. For example, you cannot make a binary operator behave like a unary operator or vice versa.
- Use of Friend Functions: For some operators (like `<<` and `>>`), it is common to declare the operator function as a friend function so it can access private members of the class. (Note: the 42 subject PDFs prohibit the use of `friend`.)
You can overload operators either as member functions or as non-member (friend) functions.
For unary operators, you typically overload them as member functions. Here's an example with the `-` operator (negation):
```cpp
#include <iostream>
using namespace std;

class Complex {
public:
    int real, imag;

    Complex(int r, int i) : real(r), imag(i) {}

    // Overloading the unary '-' operator as a member function
    Complex operator-() {
        return Complex(-real, -imag);
    }

    void display() {
        cout << real << " + " << imag << "i" << endl;
    }
};

int main() {
    Complex c1(4, 5);
    Complex c2 = -c1; // Calls the overloaded '-' operator
    c2.display();     // Output: -4 + -5i
    return 0;
}
```
Unary Operators: These operators work on a single operand. Examples include `-`, `++`, `--`, and `!`.

```cpp
Complex operator-();  // Unary negation
Complex operator++(); // Unary (prefix) increment
```
Binary Operators: These operators work on two operands. Examples include `+`, `-`, `*`, `/`, and `=`.

```cpp
Complex operator+(const Complex& other); // Addition
Complex operator-(const Complex& other); // Subtraction
Complex operator*(const Complex& other); // Multiplication
```
Comparison Operators: You can overload comparison operators to allow comparisons between objects. Examples include `==`, `!=`, `<`, `>`, `<=`, and `>=`.

```cpp
bool operator==(const Complex& other); // Equality comparison
bool operator!=(const Complex& other); // Inequality comparison
```
Stream Insertion/Extraction Operators: Overload the `<<` and `>>` operators to enable easy input and output for user-defined types. (Note: the 42 subject PDFs prohibit the use of `friend`.)

```cpp
friend ostream& operator<<(ostream& os, const Complex& c); // Stream insertion
friend istream& operator>>(istream& is, Complex& c);       // Stream extraction
```
Example of overloading the `<<` operator:

```cpp
friend ostream& operator<<(ostream& os, const Complex& c) {
    os << c.real << " + " << c.imag << "i";
    return os;
}

// Example usage:
Complex c1(4, 5);
cout << c1; // Output: 4 + 5i
```
Why Use Operator Overloading?

- Improved Code Readability: Allows you to write expressions that look intuitive and resemble mathematical operations.
- Makes User-Defined Types More Natural: Without operator overloading, you would need to use function calls for basic operations, which makes the code verbose and harder to read.
- Saves Time: Operator overloading avoids writing repetitive code for basic operations on objects of a class.

Conclusion

Operator overloading is a powerful feature in C++ that allows you to customize the behavior of operators for user-defined types. When used carefully, it can significantly enhance the readability and maintainability of your code, allowing you to write more intuitive and expressive object-oriented programs. However, like any powerful feature, it should be used judiciously to avoid making the code overly complex or confusing.