Algorithms & Programming Fundamentals
If an algorithm has a big O notation of , what happens to the execution time as the number of inputs (n) increases?
The execution time decreases inversely with n.
The execution time remains constant regardless of n.
The execution time increases quadratically.
The execution time increases linearly with n.
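To illustrate the linear-growth option above, here is a hypothetical step-counting sketch (the function name `linear_scan` is my own, not from the question):

```python
def linear_scan(items):
    """Touch each element exactly once: the step count equals len(items)."""
    steps = 0
    for _ in items:
        steps += 1
    return steps

# Doubling the input doubles the work — the hallmark of O(n) growth.
print(linear_scan(range(100)))  # 100
print(linear_scan(range(200)))  # 200
```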
Considering parallel processing capabilities, which model enables maximum concurrency while avoiding data races when applied correctly?
Multithreaded execution within single process context
Shared-memory model using mutex locks
Event-driven programming relying on I/O callbacks
Actor model coordinating via message passing
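A minimal sketch of the actor idea, assuming a thread-plus-mailbox encoding (the `CounterActor` class is my own illustration): the actor owns its state and mutates it only in response to messages, so senders never touch shared data and no locks are needed.

```python
import queue
import threading

class CounterActor:
    """A toy actor: state is private and modified only by the actor's
    own thread, in mailbox order, so data races cannot occur."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0
        self.done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                self.done.set()
                return
            self.count += msg  # touched by the actor thread only

actor = CounterActor()
for _ in range(1000):
    actor.mailbox.put(1)  # senders communicate only via messages
actor.mailbox.put("stop")
actor.done.wait()
print(actor.count)  # 1000
```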
What advantage does an abstract data type (ADT) provide when determining the efficiency of algorithms?
It allows programmers to consider operations without needing specific implementation details, facilitating comparisons between different algorithms' efficiencies.
By requiring detailed knowledge of how data is represented internally leading to better optimized code.
ADTs mandate adding extra layers of complexity.
The explicit definition associated with ADTs forces algorithm designers into tighter constraints, decreasing efficiency potential.
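A sketch of why the first option holds, using a stack ADT as an example (class names are my own): two interchangeable implementations expose the same operations, so an algorithm's cost can be reasoned about against the interface alone.

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """The ADT: callers reason about push/pop without knowing storage."""
    @abstractmethod
    def push(self, x): ...
    @abstractmethod
    def pop(self): ...

class ListStack(Stack):
    # Array-backed: push/pop at the end are amortized O(1).
    def __init__(self): self._items = []
    def push(self, x): self._items.append(x)
    def pop(self): return self._items.pop()

class LinkedStack(Stack):
    # Node-backed: push/pop at the head are O(1) per call.
    def __init__(self): self._head = None
    def push(self, x): self._head = (x, self._head)
    def pop(self):
        x, self._head = self._head
        return x

# Any algorithm written against Stack works, and its efficiency can be
# compared across implementations, without implementation details.
for s in (ListStack(), LinkedStack()):
    s.push(1); s.push(2)
    print(s.pop(), s.pop())  # 2 1
```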
Given a scenario where an application requires real-time data transmission with minimal delay, which Internet protocol ensures the highest efficiency while maintaining reliability?
HTTP (Hypertext Transfer Protocol)
RTP (Real-time Transport Protocol)
FTP (File Transfer Protocol)
SMTP (Simple Mail Transfer Protocol)
Which term best describes an algorithm designed to sort items in ascending order that compares each pair of adjacent items and swaps them if they are in the wrong order?
Merge Sort
Binary Search
Quick Sort
Bubble Sort
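The adjacent-compare-and-swap algorithm described in the question is Bubble Sort; a standard implementation looks like this:

```python
def bubble_sort(items):
    """Repeatedly compare adjacent pairs and swap when out of order;
    each pass bubbles the largest remaining value to the end."""
    a = list(items)
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps in a full pass: already sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```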
What does it mean for an algorithm to run with exponential efficiency?
Its resource usage increases at an exponential rate with the input size
Its resource usage increases linearly with the input size
It has an exponentially growing number of steps
It requires exponential computational resources to execute
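Exponential resource usage can be made concrete with the classic naive recursive Fibonacci (my own illustration): the number of calls roughly doubles each time n grows by one.

```python
def fib(n, calls):
    """Naive recursion; `calls` counts invocations to expose the
    exponential blow-up in work as n grows."""
    calls[0] += 1
    if n < 2:
        return n
    return fib(n - 1, calls) + fib(n - 2, calls)

for n in (10, 20):
    calls = [0]
    fib(n, calls)
    print(n, calls[0])  # 10 → 177 calls, 20 → 21891 calls
```

Moving from n = 10 to n = 20 multiplies the work by more than a hundredfold, which no polynomial-time algorithm exhibits.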
Given four algorithms with respective Big-O notations O(N), O(√N), O(2^N), and O(N log N), which would exhibit the slowest growth rate for large values of N when comparing worst-case scenarios?
O(N)
O(√N)
O(2^N)
O(N log N)
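A quick numeric comparison (my own sketch) shows why O(√N) grows most slowly of the four for large N:

```python
import math

# Each entry maps a growth class to a cost function of n.
growth = {
    "O(sqrt N)":  lambda n: math.isqrt(n),
    "O(N)":       lambda n: n,
    "O(N log N)": lambda n: int(n * math.log2(n)),
    "O(2^N)":     lambda n: 2 ** n,
}

for n in (16, 64):
    print(n, {name: f(n) for name, f in growth.items()})
# At n = 64: sqrt gives 8, linear 64, N log N 384, while 2^N already
# exceeds 10^19 — so O(sqrt N) is the slowest-growing of the four.
```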
What is the expected running-time complexity of deleting all items, by iterating through the list, from a doubly linked list in which there is no access to the previous pointer?
Singly linked lists require traversal of the entire data structure.
Performance degrades due to comparisons needed to find the right placement; hence worst case becomes quadratic.
Even though accessing the next node is a simple operation, it still involves visiting each element.
Removing front or back does not affect the remaining nodes thus constant.
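As a sketch with a minimal node structure of my own (not from the question): without previous-pointer access the list is effectively traversed forward, so deleting every node walks each of the n nodes once — O(n) overall, with each front removal itself O(1).

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def delete_all(head):
    """Walk forward, unlinking one node per step: O(1) per front
    removal, O(n) to empty the whole list of n nodes."""
    removed = 0
    while head is not None:
        nxt = head.next
        head.next = None  # detach the front node
        head = nxt
        removed += 1
    return removed

print(delete_all(Node(1, Node(2, Node(3)))))  # 3
```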
What is the primary factor that influences an algorithm's efficiency?
Amount of available memory
Size of the input
Number of steps in the algorithm
Execution time of the algorithm
When comparing two algorithms for data encryption, what is the primary reason one might choose an algorithm with higher space complexity over one with lower space complexity?
The algorithm with higher space complexity runs significantly faster in practice.
The algorithm with lower space complexity has been compromised recently.
The algorithm with higher space complexity offers stronger security guarantees.
Algorithms with low space complexity are typically harder to implement correctly.