Hey there! Welcome to my README for the assignments in the Computer Science Special Topics in Artificial Intelligence course. Below, I've provided comprehensive descriptions of each assignment along with links to their respective README files for demos.
Special thanks to Professor Dave Churchill for his outstanding guidance and instruction throughout the course.
- BFS/DFS/IDDFS Grid-World Path-Finding
- A* Pathfinding
- Connect 4 with Alpha-Beta Pruning
- Genetic Algorithm for Sudoku
- Reinforcement Learning (RL) Algorithm, Q Learning
For this assignment, I dived into the fundamentals of pathfinding algorithms, specifically Breadth-First Search (BFS), Depth-First Search (DFS), and Iterative Deepening Depth-First Search (IDDFS). It was quite exciting to implement these algorithms in the context of grid-world pathfinding scenarios. My task was to find optimal paths from a start point to a goal point in various grid configurations, and let me tell you, testing the efficiency and effectiveness of each algorithm was quite enlightening!
- Implemented BFS, DFS, and IDDFS algorithms for grid-world pathfinding.
- Tested the algorithms on different grid configurations and analyzed their performance.
For detailed instructions and setup guidelines, you can check out the BFS/DFS/IDDFS Grid-World Path-Finding README.
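To give a flavour of how these searches work, here is a minimal Python sketch of BFS and IDDFS on a grid of strings ('#' marks a wall). The grid format, function names, and the 4-directional move set are illustrative assumptions for this README, not the assignment's actual interface.

```python
# Minimal sketch of BFS and IDDFS on a grid of strings ('#' = wall).
# The grid format and names are illustrative, not the assignment's actual API.
from collections import deque

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-directional movement

def neighbors(grid, cell):
    r, c = cell
    for dr, dc in MOVES:
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
            yield (nr, nc)

def bfs(grid, start, goal):
    """Return a shortest path (list of cells) or None."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for nxt in neighbors(grid, cell):
            if nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None

def iddfs(grid, start, goal, max_depth=50):
    """Iterative deepening: depth-limited DFS with increasing limits."""
    def dls(cell, depth, visited):
        if cell == goal:
            return [cell]
        if depth == 0:
            return None
        for nxt in neighbors(grid, cell):
            if nxt not in visited:
                sub = dls(nxt, depth - 1, visited | {nxt})
                if sub is not None:
                    return [cell] + sub
        return None
    for limit in range(max_depth + 1):
        path = dls(start, limit, {start})
        if path is not None:
            return path
    return None

if __name__ == "__main__":
    grid = ["....#",
            ".##.#",
            "....."]
    print(bfs(grid, (0, 0), (2, 4)))
    print(iddfs(grid, (0, 0), (2, 4)))
```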
This assignment delved into the A* pathfinding algorithm, a widely used method for finding the shortest path between nodes in a graph. I had the opportunity to implement the A* algorithm with support for both unidirectional and bidirectional search modes. Additionally, incorporating multiple heuristic options such as 8-directional Manhattan, 4-directional Manhattan, and 2D Euclidean distance noticeably improved the pathfinding accuracy and efficiency.
- Implemented the A* pathfinding algorithm with unidirectional and bidirectional search modes.
- Integrated various heuristic options to evaluate path costs and guide the search process.
- Conducted thorough testing and performance analysis to assess the algorithm's effectiveness in different scenarios.
For comprehensive instructions and setup details, refer to the A* Pathfinding README.
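Below is a minimal sketch of a unidirectional A* search with pluggable heuristics (4-directional Manhattan and 2D Euclidean shown). It reuses the same illustrative grid-of-strings format as the sketch above; the names and interface are assumptions rather than the assignment's actual code, and the bidirectional mode is omitted for brevity.

```python
# Minimal unidirectional A* sketch with pluggable heuristics (illustrative only,
# not the assignment's actual interface). Uses the grid-of-strings format above.
import heapq, math

def manhattan(a, b):          # admissible for 4-directional movement
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):          # 2D straight-line distance
    return math.hypot(a[0] - b[0], a[1] - b[1])

def astar(grid, start, goal, heuristic=manhattan):
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    g = {start: 0}
    parent = {start: None}
    frontier = [(heuristic(start, goal), start)]
    while frontier:
        _, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for dr, dc in moves:
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == '#':
                continue
            new_g = g[cell] + 1
            if new_g < g.get(nxt, float('inf')):
                g[nxt] = new_g
                parent[nxt] = cell
                heapq.heappush(frontier, (new_g + heuristic(nxt, goal), nxt))
    return None

# Example usage: astar(grid, (0, 0), (2, 4), heuristic=euclidean)
```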
These assignments provided me with invaluable insights into essential AI algorithms and techniques, significantly enhancing my proficiency in artificial intelligence concepts and applications.
Assignment 3 focuses on game-playing strategies, particularly the implementation of a player capable of making intelligent decisions in a game environment. The primary algorithm employed in this assignment is Alpha-Beta Pruning, a technique for cutting off branches of the game tree that cannot affect the final decision. The player is designed to evaluate game states, predict opponent moves, and select optimal actions using iterative deepening search with Alpha-Beta Pruning.
- Implemented a game-playing player using the Alpha-Beta Pruning algorithm.
- Designed the player to evaluate game states, predict opponent moves, and select optimal actions.
- Incorporated iterative deepening search to progressively explore the game tree and optimize decision-making.
For detailed instructions and setup guidelines, please refer to the Connect 4 with Alpha-Beta Pruning README.
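The sketch below shows the core of an alpha-beta search (negamax form) wrapped in iterative deepening. It is written against a hypothetical game-state interface (`legal_moves`, `apply`, `is_terminal`, `evaluate`) rather than the assignment's actual Connect 4 classes, so treat it as an outline of the technique, not the submitted implementation.

```python
# Alpha-beta pruning (negamax form) with iterative deepening, against a hypothetical
# game-state interface. Illustrative only; not the assignment's Connect 4 code.
import time

def alphabeta(state, depth, alpha, beta, player):
    if depth == 0 or state.is_terminal():
        return player * state.evaluate(), None    # evaluate() assumed scored from player +1's view
    best_value, best_move = float('-inf'), None
    for move in state.legal_moves():
        child = state.apply(move)                 # assumed to return a new state
        value, _ = alphabeta(child, depth - 1, -beta, -alpha, -player)
        value = -value
        if value > best_value:
            best_value, best_move = value, move
        alpha = max(alpha, value)
        if alpha >= beta:                         # prune: the opponent won't allow this line
            break
    return best_value, best_move

def iterative_deepening(state, player, time_limit=1.0, max_depth=12):
    """Search depth 1, 2, 3, ... until the time budget runs out; keep the last completed result."""
    best_move, start = None, time.time()
    for depth in range(1, max_depth + 1):
        if time.time() - start > time_limit:
            break
        _, move = alphabeta(state, depth, float('-inf'), float('inf'), player)
        if move is not None:
            best_move = move
    return best_move
```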
In Assignment 4, we ventured into the realm of Genetic Algorithms (GAs) to solve Sudoku puzzles efficiently. The task involved designing and implementing a genetic algorithm capable of evolving solutions to Sudoku puzzles. This algorithm leverages concepts such as selection, crossover, and mutation to iteratively improve candidate solutions within a population. By employing various strategies such as subgrid crossover and row-column crossover, we aimed to efficiently explore the solution space and converge towards valid Sudoku configurations.
- Developed a genetic algorithm for solving Sudoku puzzles.
- Implemented selection mechanisms like roulette wheel selection.
- Incorporated crossover techniques including subgrid crossover and row-column crossover to generate diverse offspring.
- Utilized mutation operations to introduce diversity and prevent premature convergence.
For detailed instructions and setup guidelines, please refer to the Genetic Algorithm for Sudoku README.
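As a rough illustration of the GA loop, here is a minimal Sudoku solver sketch using roulette-wheel selection, row-level crossover, and swap mutation within rows. The representation (rows kept as valid permutations with clues fixed) and all parameter values are assumptions made for this example, not the assignment's actual design.

```python
# Minimal genetic-algorithm sketch for Sudoku (illustrative, not the assignment's code).
# Individuals keep the given clues fixed and fill each row with its missing digits; the GA
# evolves the free cells via roulette-wheel selection, row crossover, and swap mutation.
import random

def make_individual(puzzle):
    """puzzle: 9x9 list of lists, 0 = blank. Fill each row with its missing digits."""
    board = []
    for row in puzzle:
        missing = [d for d in range(1, 10) if d not in row]
        random.shuffle(missing)
        it = iter(missing)
        board.append([v if v != 0 else next(it) for v in row])
    return board

def fitness(board):
    """Count distinct digits per column and subgrid: 162 means a valid solution."""
    score = 0
    for c in range(9):
        score += len({board[r][c] for r in range(9)})
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):
            score += len({board[br + r][bc + c] for r in range(3) for c in range(3)})
    return score

def roulette_select(population, scores):
    pick, acc = random.uniform(0, sum(scores)), 0.0
    for individual, score in zip(population, scores):
        acc += score
        if acc >= pick:
            return individual
    return population[-1]

def row_crossover(a, b):
    """Child takes whole rows from either parent, so every row stays a valid permutation."""
    return [random.choice((a[r], b[r]))[:] for r in range(9)]

def mutate(board, puzzle, rate=0.1):
    for r in range(9):
        if random.random() < rate:
            free = [c for c in range(9) if puzzle[r][c] == 0]
            if len(free) >= 2:
                i, j = random.sample(free, 2)
                board[r][i], board[r][j] = board[r][j], board[r][i]
    return board

def solve(puzzle, pop_size=300, generations=2000):
    population = [make_individual(puzzle) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(b) for b in population]
        if max(scores) == 162:
            return population[scores.index(max(scores))]
        population = [mutate(row_crossover(roulette_select(population, scores),
                                           roulette_select(population, scores)),
                             puzzle)
                      for _ in range(pop_size)]
    return max(population, key=fitness)   # best effort if no perfect solution is found
```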
In Assignment 5, we delved into the fascinating domain of Reinforcement Learning (RL) algorithms to develop intelligent agents capable of learning and making decisions in uncertain environments. This assignment tasked us with implementing an RL algorithm that learns from interactions with an environment to maximize cumulative rewards over time. By utilizing concepts such as Q-learning and policy iteration, our goal was to design an agent that can navigate through complex environments, learn optimal strategies, and adapt its behavior based on feedback from the environment.
- Implemented a reinforcement learning algorithm to enable agents to learn from experience and optimize decision-making.
- Utilized Q-learning techniques to estimate the value of state-action pairs and update the agent's policy.
- Incorporated exploration-exploitation strategies such as epsilon-greedy to balance between learning and exploiting learned knowledge.
- Developed mechanisms to update the agent's policy based on learned Q-values and environmental feedback.
For comprehensive instructions and setup guidelines, please refer to the Reinforcement Learning (RL) Algorithm, Q Learning README.
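To make the update rule concrete, here is a minimal tabular Q-learning sketch with an epsilon-greedy policy. The toy one-dimensional corridor environment, hyperparameters, and function names are assumptions made for this example; the assignment's actual environment and agent interface differ.

```python
# Minimal tabular Q-learning sketch with an epsilon-greedy policy (illustrative only;
# the environment here is a tiny 1-D corridor, not the assignment's actual environment).
import random

N_STATES, GOAL = 6, 5            # states 0..5, reward only for reaching state 5
ACTIONS = [-1, +1]               # move left or right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: explore with probability epsilon, otherwise act greedily
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            # Q-learning update: bootstrap from the best action in the next state
            target = reward + (0.0 if done else gamma * max(Q[(nxt, a)] for a in ACTIONS))
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = nxt
    return Q

if __name__ == "__main__":
    Q = q_learning()
    policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
    print(policy)   # expect +1 (move right) for every non-goal state
```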
GPT was used in the creation of this file.